Theory and Modern Applications

# Properties of interval-valued function space under the gH-difference and their application to semi-linear interval differential equations

## Abstract

The conventional subtraction arithmetic on interval numbers makes studies on interval systems difficult because addition is irreversible, whereas the gH-difference, as a popular concept, can ensure that interval analysis is a valuable research branch like real analysis. However, many properties of interval numbers still remain open. This work focuses on developing a complete normed quasi-linear space composed of continuous interval-valued functions, in which some fundamental properties of continuity, differentiability, and integrability are discussed based on the gH-difference, the gH-derivative, and the Hausdorff-Pompeiu metric. Such properties are adopted to investigate semi-linear interval differential equations. While the existence and uniqueness of the (i)- or (ii)-solution are studied, a necessary condition for the (i)- and (ii)-solutions to be strong solutions is obtained. For such a kind of equation it is demonstrated that at least one strong solution exists under certain assumptions.

## 1 Introduction

In real-world engineering fields, many dynamic problems can be formulated by dynamic models, such as motor servo systems, navigation control, and so forth. However, such systems usually involve multiple uncertain parameters or interval coefficients, and thus interval analysis, developed by Moore, plays an important role in studying the existence and uniqueness of the solutions of interval differential equations (IDEs). Despite being initially introduced to handle interval uncertainty, which appears in mathematical programming problems with bounded uncertain parameters, this theory has gradually been applied to IDEs, owing to the development of differential dynamics. To our knowledge, a great deal of foundational work on interval theory, in particular on the fundamental arithmetic properties of interval numbers, was done by researchers many years ago [2, 23–30]. In particular, after Moore established the interval arithmetic rules, Oppenheimer and Michel subsequently showed that intervals under the usual addition form a commutative semi-group but not a group. Since interval numbers are not an Abelian group under addition, interval arithmetic cannot yield the structure of a linear space. In general, addition is irreversible: for two interval numbers a and b with $$a+b=0$$, b is usually not equal to −a, where $$0=[0,0]$$. Thus, although the difference of a and b can be defined through this addition, many properties of real analysis fail in the context of interval number arithmetic, e.g. $$a-a\neq0$$. In order to develop a theoretical framework on interval numbers analogous to real number theory, Hukuhara introduced another concept of interval difference for a and b in 1967, namely the Hukuhara difference (H-difference, $$a\ominus b$$), where $$a\ominus b=c$$ if and only if $$a=b+c$$.
However, although this concept satisfies $$a\ominus a=0$$, $$a\ominus b$$ is meaningful only when $$w(a)\ge w(b)$$, where $$w(a)$$ and $$w(b)$$ denote the widths of a and b, respectively. To overcome this limitation, after pointing out that the width of $$a-b$$ equals the sum of the widths of a and b, Markov gave the concept of a nonstandard subtraction, also expressed by the symbol ‘−’. This concept guarantees that $$a-a=0$$, and that the width of the interval $$a-b$$ equals the absolute value of the difference of the widths of a and b. Thereafter, Stefanini [9, 27–29] extended the H-difference to the concept of a generalized Hukuhara difference (gH-difference, $$a\ominus_{g}b$$), which coincides with the nonstandard subtraction introduced by Markov (Definition 1, p.326). The gH-difference has been widely adopted to investigate interval dynamic systems because, apart from still satisfying $$a\ominus_{g}a=0$$, it exists for any two intervals. Thus, it is an invaluable mathematical concept for probing interval number theory. In our last work, some fundamental arithmetic rules on interval numbers, based on the conventional addition and the gH-difference, were extended to interval-valued vectors, and some properties, in particular associative and distributive laws, were obtained.

After the conventional subtraction arithmetic was generalized, multiple concepts of derivatives for interval-valued functions were reported [9, 16, 23, 24, 31–35]. Hukuhara introduced the concept of H-differentiability for set-valued functions by using the H-difference. This was the starting point for the study of set-valued, fuzzy, and later interval differential equations. However, the H-derivative has some shortcomings that make it difficult to study the properties of interval-valued differential or integral equations, as the H-difference does not always exist for two interval numbers. This limits its wide application to interval dynamic systems. Fortunately, based on the H-difference and the gH-difference, two generalized concepts, the GH-derivative and the gH-derivative, were introduced by Stefanini et al. These two kinds of derivatives can be adopted more comprehensively than the H-derivative in the study of IDEs. Many valuable fundamental properties have been discovered by researchers [32, 34, 36]. We also note that there exist intrinsic relationships between these two concepts; for example, a GH-differentiable interval-valued function is usually gH-differentiable under a few weak assumptions. From the viewpoint of theoretical analysis, the gH-derivative of an interval-valued function at a point can be computed by a single formula, whereas the GH-derivative cannot. Therefore, in comparison with the GH-derivative, the gH-derivative is likely to receive increasing attention in theoretical research on IDEs, as demonstrated by some recent results [9, 14–16, 22].

Recently, several researchers have paid great attention to the properties of interval-valued functions, in particular continuity, differentiability, and integrability. Representative achievements promote the importance of interval dynamics. Chalco-Cano et al. [32, 34, 35] systematically studied the relationships among GH-, gH-, Markov, and π-differentiability [9, 24, 33, 37]. They showed that (i) if an interval-valued function f is GH-differentiable, then it is π-differentiable, and (ii) if f is π-differentiable, then it is gH-differentiable. They also derived several Ostrowski inequalities, based on the gH-derivative, that can be used for estimating the solutions of IDEs. The concepts of the GH-derivative and the gH-derivative are usually utilized to define the types of solutions of IDEs [9–13, 16, 17, 22]. However, since many arithmetic properties of real number theory do not hold in interval analysis, it is extremely difficult to probe the theoretical foundations of IDEs. Even so, some pioneering works on the existence and uniqueness of the solutions are gaining great interest among researchers. Theoretically, studies on IDEs depend greatly on the type of interval-valued derivative, as different concepts of derivatives require IDEs to satisfy different conditions ensuring the existence and uniqueness of solutions.

More recently, several special IDEs were defined based on GH-differentiability and then transformed into integral equations with the H-difference [10–13, 16–20, 38]. Their solution properties, including existence, uniqueness, and continuous dependence, have been well investigated. Malinowski [10, 11] made great contributions to analyzing a kind of IDE based on the second type of Hukuhara derivative included in the concept of the GH-derivative. Subsequently, some important properties of the solutions were found, such as the existence of local solutions, convergence, and continuous dependence of the solution on the initial value and the right-hand side of the equation. Skripnic proved the existence of solutions of IDEs by virtue of the Carathéodory theorem and the concept of generalized differentiability, in which the notion of derivative is equivalent to the GH-derivative. Additionally, based on the GH-derivative, Ngo et al. [12, 13, 18–20] carried out a series of studies on multiple kinds of IDEs, such as interval-valued integro-differential equations, interval-valued functional differential equations, and so on. They obtained significant conclusions on the existence of solutions by developing comparison theorems.

On the other hand, IDEs have also been well studied based on the concept of the gH-difference in recent years [9, 14–17, 21, 22, 32, 40]. Stefanini and Bede established the existence and uniqueness of two types of local solutions for an initial value IDE with a gH-derivative, and the characteristics of the solutions were found. After that, they also carried out an experimental analysis of this kind of IDE. In particular, Chalco-Cano et al. investigated exhaustively the properties of an interval-valued function of the form $$Cg(t)$$ with interval number C and real single-valued function g. They also derived the representation of the solutions for a class of linear initial value IDEs. In addition, Lupulescu proposed the concepts of differentiability and integrability for interval-valued functions on time scales, and studied the properties of the delta generalized Hukuhara derivative and the integration of interval-valued functions on time scales. An illustrative example of an IDE on time scales was also given. Lupulescu also used the gH-difference to develop a theory of fractional calculus for interval-valued functions, which is the foundation of interval-valued fractional differential equations.

Summarizing, interval differential dynamic systems remain an open research topic in the context of differential dynamic systems. Three fundamental issues are under consideration: (i) how to define and analyze the space of the solutions, (ii) whether some classical conclusions, such as fixed point theorems in classical functional analysis, can be adapted to IDEs, and (iii) how to derive analytic or numerical solutions of IDEs. Thus, in this paper we probe into the existence and uniqueness of the solutions of a class of semi-linear interval dynamic systems, after developing a complete normed quasi-linear space. More precisely, we first give a quasi-linear space of interval numbers and a related continuous interval-valued function space, and their properties are discussed in detail. Second, an important classical fixed point theorem is generalized to the interval-valued case so as to discover properties of IDEs. Finally, we conclude that there exists at least one strong solution for the kind of semi-linear IDE considered in this work.

## 2 Preliminaries and basic properties of gH-difference

Let IR denote the set of all closed intervals in R. For a given interval $$a=[a^{L},a^{R}]$$, a is said to be a degenerate interval if $$a^{L}=a^{R}$$. We say that a equals an interval b if and only if $$a^{L}=b^{L}$$ and $$a^{R}=b^{R}$$, where $$b=[b^{L},b^{R}]$$. Some interval arithmetic rules on IR are defined below:

1. (i)

$$a+b=[a^{L}+b^{L},a^{R}+b^{R}]$$;

2. (ii)

$$ka= \left\{ \begin{array}{l@{\quad}l} {[ka^{L},ka^{R}]}, & k \geq0, \\ {[ka^{R},ka^{L}]}, & k < 0; \end{array} \right.$$

3. (iii)

$$a-b=a+(-1)b=[a^{L}-b^{R},a^{R}-b^{L}]$$;

4. (iv)

$$ab=[\min\{u\in A\}, \max\{u\in A\}]$$, where $$A=\{ a^{L}b^{L},a^{L}b^{R},a^{R}b^{L},a^{R}b^{R}\}$$;

5. (v)

$$|a|= \max\{|a^{L}|,|a^{R}|\}$$, $$w(a)=a^{R}-a^{L}$$;

6. (vi)

$$a\leq b \Leftrightarrow a^{L}\leq b^{L}$$, $$a^{R}\leq b^{R}$$.
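The rules above can be sketched in code. The following is a minimal illustration (not part of the original text), representing an interval $$[a^{L},a^{R}]$$ as a `(lower, upper)` tuple; all function names are illustrative choices:

```python
# Intervals are modelled as (lower, upper) tuples; these helpers mirror
# rules (i)-(v) above.

def iadd(a, b):                      # (i) Minkowski addition
    return (a[0] + b[0], a[1] + b[1])

def smul(k, a):                      # (ii) scalar multiplication
    return (k * a[0], k * a[1]) if k >= 0 else (k * a[1], k * a[0])

def isub(a, b):                      # (iii) a - b = a + (-1)b
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):                      # (iv) min/max over endpoint products
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def iabs(a):                         # (v) magnitude |a|
    return max(abs(a[0]), abs(a[1]))

def width(a):                        # (v) width w(a)
    return a[1] - a[0]

a = (1, 2)
print(iadd(a, (3, 5)))   # (4, 7)
print(isub(a, a))        # (-1, 1): a - a is not 0 unless a is degenerate
```

Note how the last line exhibits the irreversibility of addition discussed below.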

In general, $$a-a$$ does not equal 0 unless a is a degenerate interval. This indicates that the above subtraction is not the inverse of the Minkowski addition. However, the cancellation law of addition on interval numbers holds, i.e., $$a+c=b+c$$ if and only if $$a=b$$. Since $$a-a\neq0$$, many properties of real number theory cannot be extended to interval analysis. So, Hukuhara introduced another concept of subtraction in order to overcome this drawback. He defined the H-difference (i.e., $$a\ominus b$$) of a and b as c if $$a=b+c$$, namely $$a\ominus b=c$$. Although such a subtraction yields $$a\ominus a=0$$, $$a\ominus b$$ exists only when $$w(a)\geq w(b)$$. Subsequently, Stefanini proposed a more general concept of subtraction as below.

### Definition 2.1

()

The gH-difference of a and b is defined by

$$a\ominus_{g}b=c,$$
(2.1)

where c satisfies $$a=b+c$$ if $$w(a)\geq w(b)$$, or $$b=a+(-1)c$$ if $$w(a)< w(b)$$.

The above definition indicates that any two intervals a and b have a gH-difference. In addition, the gH-difference and the H-difference are related: $$a\ominus_{g}b=a\ominus b$$ if $$w(a)\geq w(b)$$. However, when $$w(a)< w(b)$$, $$a\ominus_{g}b$$ is meaningful, but $$a\ominus b$$ is not. Thus, the gH-difference is an extension of the H-difference. Further, Stefanini obtained the following basic properties of the gH-difference:

1. (i)

$$a\ominus_{g}b=[\min\{a^{L}-b^{L},a^{R}-b^{R}\},\max\{a^{L}-b^{L},a^{R}-b^{R}\}]$$;

2. (ii)

$$a\ominus_{g}a=0$$, $$a\ominus_{g}0=a$$, $$0\ominus_{g}a=(-1)a$$;

3. (iii)

$$(-a)\ominus_{g}b=(-b)\ominus_{g}a$$;

4. (iv)

$$a\ominus_{g}b=(-b)\ominus_{g}(-a)=-(b\ominus_{g}a)$$;

5. (v)

$$(a+b)\ominus_{g}b=a$$, $$a\ominus_{g}(a+b)=-b$$;

6. (vi)

$$(a\ominus_{g}b)+b=a$$, if $$w(a)\geq w(b)$$; $$a+(-1)(a\ominus_{g}b)=b$$, if $$w(a)< w(b)$$;

7. (vii)

$$k(a\ominus_{g}b)=ka\ominus_{g}kb$$, $$k\in R$$.
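The endpoint formula of property (i) gives a direct way to compute the gH-difference, and several of the properties above can be checked numerically. A sketch (intervals as `(lower, upper)` tuples, sample values illustrative):

```python
# gH-difference via the endpoint formula of property (i).

def gh(a, b):
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def smul(k, a):
    return (k * a[0], k * a[1]) if k >= 0 else (k * a[1], k * a[0])

a, b = (1, 3), (-2, -1)
assert gh(a, a) == (0, 0)                                # (ii) a gH a = 0
assert gh(a, (0, 0)) == a                                # (ii) a gH 0 = a
assert gh(a, b) == smul(-1, gh(b, a))                    # (iv)
assert gh(smul(2, a), smul(2, b)) == smul(2, gh(a, b))   # (vii) with k = 2
print(gh(a, b))   # (3, 4)
```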

In our last work , we also obtained some properties of the gH-difference, for example:

1. (i)

$$(a+b)\ominus_{g}c=a+b\ominus_{g}c$$ if and only if $$w(c)\leq w(b)$$ with $$c\in IR$$;

2. (ii)

$$a(b\ominus_{g}c)=ab\ominus_{g}ac$$, if b and c are symmetric, or one of the following conditions holds:

1. (a)

$$w(b)\geq w(c)$$, and $$0\leq c\leq b$$ or $$b\leq c\leq0$$;

2. (b)

$$w(b)\leq w(c)$$, and $$0\leq b\leq c$$ or $$c\leq b\leq0$$.

In the present work, in terms of the concept of the gH-difference, we obtain some properties summed up below.

### Lemma 2.2

The following properties are true:

1. (i)

$$a\ominus_{g}b=0$$ if and only if $$b=a$$;

2. (ii)

$$(a+b)\ominus_{g}(a+c)=b\ominus_{g}c$$;

3. (iii)

$$(a\ominus_{g}b)\ominus_{g}(a\ominus_{g}c)=c\ominus_{g}b$$, if $$w(a)\leq \min(w(b),w(c))$$ or $$w(a)\geq\max(w(b),w(c))$$.

### Proof

Case (i) is true by the definition of the gH-difference.

Case (ii): write $$(a+b)\ominus_{g}(a+c)=d$$. Based on the gH-difference, we have $$a+b=a+c+d$$ or $$a+c=a+b+(-1)d$$. Hence, it follows from the cancellation law of addition on interval number that $$b=c+d$$ or $$c=b+(-1)d$$. This illustrates that $$b\ominus_{g}c=d$$.

Case (iii): write $$a\ominus_{g}b=e$$, $$a\ominus_{g}c=f$$, and $$e\ominus _{g}f=g$$. When $$w(a)\leq\min(w(b),w(c))$$, we note that $$b=a+(-1)e$$ and $$c=a+(-1)f$$. Thus, if $$w(e)\geq w(f)$$, then $$e=f+g$$, and hence $$a+(-1)e=a+(-1)f+(-1)g$$. This yields $$b=c+(-1)g$$. On the other hand, if $$w(e)< w(f)$$, we have $$f=e+(-1)g$$, and hence $$a+(-1)f=a+(-1)e+g$$. Thus one derives that $$c=b+g$$. In total, we obtain $$g=c\ominus_{g}b$$. Similarly, when $$w(a)\geq\max(w(b),w(c))$$, it follows that $$a=b+e$$ and $$a=c+f$$. If $$w(e)\geq w(f)$$, one can derive that $$c+f=b+e=b+f+g$$, that is, $$c=b+g$$. On the other hand, if $$w(e)< w(f)$$, we have $$b+e=c+f=c+e+(-1)g$$, and then $$b = c+(-1)g$$. Thus we also get $$g=c\ominus_{g}b$$. □
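Lemma 2.2 can be spot-checked numerically. The sketch below (sample intervals are illustrative choices) exercises case (ii) and both width regimes of case (iii):

```python
# Numerical spot-checks of Lemma 2.2 (ii) and (iii).

def gh(a, b):
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

# Case w(a) >= max(w(b), w(c)):
a, b, c = (0, 5), (1, 2), (2, 4)
assert gh(iadd(a, b), iadd(a, c)) == gh(b, c)      # (ii)
assert gh(gh(a, b), gh(a, c)) == gh(c, b)          # (iii)

# Case w(a) <= min(w(b), w(c)):
a, b, c = (1, 2), (0, 2), (0, 3)
assert gh(gh(a, b), gh(a, c)) == gh(c, b)          # (iii)
```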

For the convenience of notation, write $$r_{ab}=w(a)-w(b)$$ and $$r_{bc}=w(b)-w(c)$$. We obtain the following properties.

### Lemma 2.3

The following properties hold:

\begin{aligned} (\mathrm{i})&\quad (a\ominus_{g}b)\ominus_{g}c= \left\{ \begin{array}{l@{\quad}l} a\ominus_{g}(b+c), & \textit{if }r_{ab}\geq0, \\ (a+(-1)c)\ominus_{g}b, & \textit{else}; \end{array} \right. \\ (\mathrm{ii})&\quad a\ominus_{g}(b\ominus_{g}c)= \left\{ \begin{array}{l@{\quad}l} (a+c)\ominus_{g}b, & \textit{if }r_{bc}\geq0, \\ c\ominus_{g}(b+(-1)a), & \textit{else}; \end{array} \right. \\ (\mathrm{iii})&\quad a\ominus_{g}(b+c)= \left\{ \begin{array}{l@{\quad}l} (a\ominus_{g}b)\ominus_{g}c, & \textit{if }r_{ab}\geq0, \\ a\ominus_{g}b+(-1)c, & \textit{else}; \end{array} \right. \\ (\mathrm{iv})&\quad a\ominus_{g}b+c= \left\{ \begin{array}{l@{\quad}l} (a+c)\ominus_{g}b, & \textit{if }r_{ab}\geq0, \\ a\ominus_{g}(b+(-1)c), & \textit{else}. \end{array} \right. \end{aligned}

### Proof

Write $$a\ominus_{g}b=d$$ and $$b\ominus_{g}c=d_{1}$$. If $$r_{ab}\geq0$$, then $$a=b+d$$; if $$r_{ab}<0$$, then $$b=a+(-1)d$$. Similarly, if $$r_{bc}\geq0$$, then $$b=c+d_{1}$$; if $$r_{bc}<0$$ then $$c=b+(-1)d_{1}$$.

Case (i): write $$d\ominus_{g}c=e_{1}$$. By definition, it implies that $$d=c+e_{1}$$ if $$w(d)\geq w(c)$$, and $$c=d+(-1)e_{1}$$ if $$w(d)< w(c)$$. In the case of $$r_{ab}\geq0$$, if $$w(d)\geq w(c)$$, one gets that $$a=b+d=b+c+e_{1}$$; and if $$w(d)< w(c)$$, then $$b+c=b+d+(-1)e_{1}=a+(-1)e_{1}$$. Therefore, we have $$e_{1}=a\ominus_{g}(b+c)$$. Conversely, in the case of $$r_{ab}<0$$, if $$w(d)\geq w(c)$$, then $$b=a+(-1)d=a+(-1)c+(-1)e_{1}$$, and if $$w(d)< w(c)$$, then $$a+(-1)c=a+(-1)d+e_{1}=b+e_{1}$$. This indicates that $$e_{1}=(a+(-1)c)\ominus_{g}b$$.

Case (ii): write $$a\ominus_{g}d_{1}=e_{2}$$. We can obtain $$a=d_{1}+e_{2}$$ if $$w(a)\geq w(d_{1})$$, and $$d_{1}=a+(-1)e_{2}$$ if $$w(a)< w(d_{1})$$. In the case of $$r_{bc}\geq0$$, if $$w(a)\geq w(d_{1})$$, one can derive that $$a+c=c+d_{1}+e_{2}=b+e_{2}$$; if $$w(a)< w(d_{1})$$, then $$b=c+d_{1}=a+c+(-1)e_{2}$$. These two equalities follow from $$e_{2}=(a+c)\ominus_{g}b$$. On the other hand, in the case of $$r_{bc}<0$$, if $$w(a)\geq w(d_{1})$$, then $$b+(-1)a=b+(-1)d_{1}+(-1)e_{2}=c+(-1)e_{2}$$; if $$w(a)< w(d_{1})$$, then $$c=b+(-1)d_{1}=b+(-1)a+e_{2}$$. Thus, we get $$e_{2}=c\ominus_{g}(b+(-1)a)$$.

Case (iii): write $$a\ominus_{g}(b+c)=e_{3}$$. The first equality is the same as the first one of case (i). We only need to demonstrate the second one. To this end, if $$r_{ab}<0$$, it is obvious that $$w(a)\leq w(b+c)$$. This means that $$b+c=a+(-1)e_{3}$$. Therefore, $$(a+(-1)d)+c=a+(-1)e_{3}$$. Hence, we get $$e_{3}=d+(-1)c$$.

Case (iv): in the case of $$r_{ab}\geq0$$, we know that $$a=b+d$$, which yields $$a+c=b+c+d$$; again since $$w(a+c)\geq w(b)$$, we get $$d+c=(a+c)\ominus_{g}b$$. Conversely, in the case of $$r_{ab}<0$$, we note that $$w(a)< w(b+(-1)c)$$ and $$b=a+(-1)d$$, which illustrates that $$b+(-1)c=a+(-1)(d+c)$$. Thus, $$d+c=a\ominus_{g}(b+(-1)c)$$. This completes the proof. □
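As with Lemma 2.2, Lemma 2.3 can be spot-checked numerically. A sketch covering both branches of case (i) and the first branch of case (iv) (sample intervals are illustrative):

```python
# Numerical spot-checks of Lemma 2.3, cases (i) and (iv).

def gh(a, b):
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def neg(a):                                   # (-1)a
    return (-a[1], -a[0])

# Branch r_ab >= 0 of (i):
a, b, c = (0, 5), (1, 2), (1, 3)
assert gh(gh(a, b), c) == gh(a, iadd(b, c))

# Branch r_ab < 0 of (i):
a2, b2 = (1, 2), (0, 5)
assert gh(gh(a2, b2), c) == gh(iadd(a2, neg(c)), b2)

# Branch r_ab >= 0 of (iv):
assert iadd(gh(a, b), c) == gh(iadd(a, c), b)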

## 3 Normed quasi-linear space

### 3.1 Interval number space

In this section, we first develop a quasi-linear space on IR, and then we analyze its properties under the gH-difference, by introducing the Hausdorff-Pompeiu metric on interval numbers. For $$a,b,c\in IR$$ and $$k,l\in R$$, the addition and scalar multiplication have some well-known properties: (i) $$a+b=b+a$$, (ii) $$a+(b+c)=(a+b)+c$$, (iii) $$a+0=a$$, (iv) $$k(a+b)=ka+kb$$, (v) $$k(la)=(kl)a$$, (vi) $$1a=a$$. Unfortunately, there usually does not exist $$d\in IR$$ s.t. $$a+d=0$$, and the equality $$(k+l)a=ka+la$$ is true only when $$kl\geq0$$. For example, take $$a=[1,2]$$, $$b=[-2,-1]$$, $$k=1$$, and $$l=-1$$. Obviously, one finds that $$(k+l)a=0$$ and $$ka+la=[-1,1]$$. Thus, $$(k+l)a\neq ka+la$$. On the other hand, if $$a+b=0$$, then $$1+b^{L}=0$$ and $$2+b^{R}=0$$, and hence $$b=[-1, -2]$$, which yields a contradiction.

In brief, IR is not a linear space under the above arithmetic rules of addition and scalar multiplication, but it retains most features of a linear space, provided that we replace subtraction by the gH-difference. Thus, we call IR a quasi-linear space with gH-difference. In such a quasi-linear space, we easily obtain an additional property: for a given $$a\in IR$$, there exists a unique $$d\in IR$$ such that $$a\ominus_{g}d=0$$. Additionally, in order to investigate the relation between elements of IR, we introduce the Hausdorff-Pompeiu metric on IR, i.e.,

$$H(a,b)=\max\bigl\{ \bigl\vert a^{L}-b^{L}\bigr\vert ,\bigl\vert a^{R}-b^{R}\bigr\vert \bigr\} .$$
(3.1)

By a simple computation, the triangle inequality of the Hausdorff-Pompeiu metric on IR always holds, namely

$$H(a,b)\leq H(a,c)+H(c,b).$$
(3.2)

Aubin and Cellina asserted that $$(IR,H)$$ is a complete metric space. Further, this metric has the following properties involving the H-difference [10, 42]:

1. (i)

$$H(a+b,a+c)=H(b,c)$$;

2. (ii)

$$H(ka,kb)=|k|H(a,b)$$, where $$k\in R$$;

3. (iii)
$$H(a+b,c+d)\leq H(a,c)+H(b,d);$$
(3.3)
4. (iv)

if $$a\ominus b$$, $$a\ominus c$$ exist, then $$H(a\ominus b, a\ominus c)=H(b,c)$$;

5. (v)

if $$a\ominus b$$, $$c\ominus d$$ exist, then $$H(a\ominus b, c\ominus d)=H(a+d,b+c)$$.

Notice that equations (iv) and (v) are true only when $$a\ominus b$$, $$a\ominus c$$, and $$c\ominus d$$ exist. We next examine whether equations (iv) and (v) above remain valid when ⊖ is replaced by $$\ominus_{g}$$. For convenience of the representation, write $$r_{ab}=w(a)-w(b)$$, $$r_{ac}=w(a)-w(c)$$, and $$r_{cd}=w(c)-w(d)$$ with $$a,b,c,d\in IR$$.

### Lemma 3.1

There always exists the following inequality:

$$H(a\ominus_{g}b, a\ominus_{g}c)\leq H(b,c).$$
(3.4)

Especially, the equality holds if $$r_{ab}r_{ac}\geq0$$.

### Proof

Write $$d=a\ominus_{g}b$$ and $$e=a\ominus_{g}c$$. In the case of $$r_{ab}r_{ac}\geq0$$, if $$r_{ab}\geq0$$ and $$r_{ac}\geq0$$, we can obtain $$d=a\ominus b$$ and $$e=a\ominus c$$ by the definition as in equation (2.1), and hence it follows from property (iv) of Hausdorff-Pompeiu metric above that the equality is true; if $$r_{ab}\leq0$$ and $$r_{ac}\leq0$$, then $$b=a+(-1)d$$ and $$c=a+(-1)e$$. This, together with properties (i) and (ii) of Hausdorff-Pompeiu metric above, easily shows $$H(d,e)=H(a+(-1)d, a+(-1)e)=H(b,c)$$. Therefore, when $$r_{ab}$$ and $$r_{ac}$$ have the same sign, the equality in equation (3.4) holds. In the case of $$r_{ab}r_{ac}<0$$, if $$r_{ab}>0$$ and $$r_{ac}<0$$, then $$a=b+d$$ and $$c=a+(-1)e$$, and accordingly, we have

\begin{aligned} H(b,c) =& H\bigl(b,b+d+(-1)e\bigr) \\ =& H\bigl(0,d+(-1)e\bigr) \\ =& \max\bigl\{ \bigl\vert d^{L}-e^{R}\bigr\vert , \bigl\vert d^{R}-e^{L}\bigr\vert \bigr\} \\ \geq& \max\bigl\{ \bigl\vert d^{L}-e^{L}\bigr\vert , \bigl\vert d^{R}-e^{R}\bigr\vert \bigr\} =H(d,e). \end{aligned}

Similarly, when $$r_{ab}<0$$ and $$r_{ac}>0$$, we can also prove that equation (3.4) holds. □

The above lemma can be illustrated by taking $$a=[1,3]$$, $$b=[-2,-1]$$, and $$c=[2,6]$$. Through simple inference, we obtain that $$H(a\ominus_{g}b, a\ominus_{g}c)=H([3,4],[-3,-1])=6$$, and $$H(b,c)=H([-2,-1],[2,6])=7$$. So, equation (3.4) is true.

### Lemma 3.2

The following inequality is always true,

$$H(a\ominus_{g}b, c\ominus_{g}d)\leq H(a+d,b+c).$$
(3.5)

Especially, the equality holds if $$r_{ab}r_{cd}\geq0$$.

### Proof

Write $$e=a\ominus_{g}b$$ and $$h=c\ominus_{g}d$$. In the case of $$r_{ab}r_{cd}\geq0$$, if $$r_{ab}\geq0$$ and $$r_{cd}\geq0$$, we can obtain both $$e=a\ominus b$$ and $$h=c\ominus d$$; if $$r_{ab}\leq0$$ and $$r_{cd}\leq0$$, then $$b=a+(-1)e$$ and $$d=c+(-1)h$$. This easily shows that equation (3.5) is valid. In the case of $$r_{ab}r_{cd}<0$$, if $$r_{ab}>0$$ and $$r_{cd}<0$$, then $$a=b+e$$ and $$d=c+(-1)h$$. Therefore,

\begin{aligned} H(a+d,b+c) = & H\bigl(b+c+e+(-1)h,b+c\bigr) \\ = & H\bigl(e+(-1)h,0\bigr) \\ = & \max\bigl\{ \bigl\vert e^{L}-h^{R}\bigr\vert , \bigl\vert e^{R}-h^{L}\bigr\vert \bigr\} \\ \geq& \max\bigl\{ \bigl\vert e^{L}-h^{L}\bigr\vert , \bigl\vert e^{R}-h^{R}\bigr\vert \bigr\} =H(e,h). \end{aligned}

In the same way, if $$r_{ab}<0$$ and $$r_{cd}>0$$, then

\begin{aligned} H(a+d,b+c) = & H\bigl(a+d,a+d+(-1)e+h\bigr) \\ = & H\bigl(0,(-1)e+h\bigr)\geq H(e,h). \end{aligned}

In brief, the above conclusion holds. □

For example, take $$a=[1,2]$$, $$b=[3,5]$$, $$c=[2,6]$$, and $$d=[-2,-1]$$. We can see that $$H(a\ominus_{g}b,c\ominus_{g}d)=H([-3,-2],[4,7])=9$$, and $$H(a+d,b+c)=H([-1,1],[5,11])=10$$. Hence, equation (3.5) is valid.
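Both numerical illustrations (for Lemma 3.1 and Lemma 3.2) can be reproduced in code. A sketch using `(lower, upper)` tuples:

```python
# Reproducing the worked examples for Lemmas 3.1 and 3.2.

def gh(a, b):
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def hdist(a, b):                     # Hausdorff-Pompeiu metric, eq. (3.1)
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

# Lemma 3.1 example: H(a gH b, a gH c) = 6 <= 7 = H(b, c)
a, b, c = (1, 3), (-2, -1), (2, 6)
assert hdist(gh(a, b), gh(a, c)) == 6
assert hdist(b, c) == 7

# Lemma 3.2 example: H(a gH b, c gH d) = 9 <= 10 = H(a + d, b + c)
a, b, c, d = (1, 2), (3, 5), (2, 6), (-2, -1)
assert hdist(gh(a, b), gh(c, d)) == 9
assert hdist(iadd(a, d), iadd(b, c)) == 10
```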

### Lemma 3.3

()

Let $$a,b,c\in IR$$, then

$$H(ac, bc)\leq H(0,c)H(a,b).$$
(3.6)

Based on the above Hausdorff-Pompeiu metric, define $$\|a\|_{I}=H(a,0)$$ with $$a\in IR$$. By simple inference, $$\|\cdot\|_{I}$$ satisfies the basic properties of the classical concept of a norm. Therefore, IR is naturally a normed quasi-linear space.

### Theorem 3.4

For $$a,b\in IR$$, the following basic properties are true:

1. (i)

$$\|a\ominus_{g}b\|_{I}=H(a,b)$$;

2. (ii)

$$\|a\|_{I}-\|b\|_{I}\leq\|a\ominus_{g}b\|_{I}\leq\|a\|_{I}+\|b\|_{I}$$;

3. (iii)

$$\|ab\|_{I}=\|a\|_{I}\|b\|_{I}$$.

### Proof

Cases (i) and (ii) hold obviously. Case (iii): since

\begin{aligned} \|ab\|_{I} = & H(ab,0) \\ =&\max\bigl\{ \bigl\vert a^{L}b^{L} \bigr\vert ,\bigl\vert a^{L}b^{R}\bigr\vert ,\bigl\vert a^{R}b^{L}\bigr\vert ,\bigl\vert a^{R}b^{R}\bigr\vert \bigr\} \\ = & \max\bigl\{ \bigl\vert a^{L}\bigr\vert ,\bigl\vert a^{R}\bigr\vert \bigr\} \cdot\max\bigl\{ \bigl\vert b^{L} \bigr\vert ,\bigl\vert b^{R}\bigr\vert \bigr\} \\ = & H(a,0)H(b,0)=\|a\|_{I}\|b\|_{I}, \end{aligned}

the conclusion is true. □

Take $$a=[-2,-1]$$ and $$b=[1,3]$$. Then $$\|a\ominus _{g}b\|_{I}=\|[-4,-3]\|_{I}=4$$ and $$H(a,b)=4$$. Further, $$\|a\|_{I}-\|b\|_{I}=-1$$, $$\|a\|_{I}+\|b\|_{I}=5$$, $$\|ab\|_{I}=\|[-6,-1]\|_{I}=6$$, and $$\|a\|_{I}\|b\|_{I}=6$$. Thus, the above conclusions in Theorem 3.4 hold.
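The example above can be checked in code; a sketch with intervals as `(lower, upper)` tuples:

```python
# Verifying Theorem 3.4 (i)-(iii) on the example a = [-2,-1], b = [1,3].

def gh(a, b):
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def hdist(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def norm(a):                         # ||a||_I = H(a, 0)
    return max(abs(a[0]), abs(a[1]))

def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

a, b = (-2, -1), (1, 3)
assert norm(gh(a, b)) == hdist(a, b) == 4                         # (i)
assert norm(a) - norm(b) <= norm(gh(a, b)) <= norm(a) + norm(b)   # (ii)
assert norm(imul(a, b)) == norm(a) * norm(b) == 6                 # (iii)
```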

We next discuss the completeness of the normed quasi-linear space IR, where a version of interval convergence is given.

### Definition 3.5

For $$a_{n}, a\in IR$$, $$n=1,2,\ldots$$ , if $$\|a_{n}\ominus_{g}a\|_{I}\rightarrow 0$$ as $$n\rightarrow\infty$$, $$\{a_{n}\}_{n\geq1}$$ is said to be convergent to a (simply written as $$\lim_{n\rightarrow\infty}a_{n}=a$$).

Similarly, we introduce the notion of a Cauchy sequence in IR: $$\{a_{n}\}_{n\geq1}$$ is a Cauchy sequence if $$\|a_{n}\ominus _{g}a_{m}\|_{I}\rightarrow0$$ as $$m,n\rightarrow\infty$$. It is easy to prove that $$(IR,\|\cdot\|_{I})$$ is complete by means of the completeness of $$(IR,H)$$.

### Theorem 3.6

$$(IR,\|\cdot\|_{I})$$ is a complete normed quasi-linear space.

### Proof

Let $$\{a_{n}\}_{n\geq1}$$ be an arbitrary Cauchy sequence in $$(IR,\|\cdot \|_{I})$$. Since $$H(a_{n},a_{m})=\|a_{n}\ominus_{g}a_{m}\|_{I}$$, we obtain $$H(a_{n},a_{m})\rightarrow0$$ as $$m,n\rightarrow\infty$$. Therefore, $$\{ a_{n}\}_{n\geq1}$$ is a Cauchy sequence in $$(IR,H)$$ and, accordingly, there exists $$a\in IR$$ such that $$H(a_{n},a)\rightarrow0$$ as $$n\rightarrow\infty$$, due to the completeness of $$(IR,H)$$. This implies that $$\|a_{n}\ominus_{g}a\|_{I}\rightarrow0$$ as $$n\rightarrow\infty$$. □

### 3.2 Interval-valued function space

Let $$I=[t_{1},t_{2}]$$ and $$t_{0}\in I$$. $$f:I\rightarrow IR$$ is an interval-valued function. We say that $$a\in IR$$ is the limit of f at the point $$t_{0}$$ if $$\|f(t)\ominus_{g}a\|_{I}\rightarrow0$$ as $$t\rightarrow t_{0}$$. f is said to be continuous on I, if for any given $$t_{0}\in I$$, $$\|f(t)\ominus_{g}f(t_{0})\|_{I}\rightarrow0$$ as $$t\rightarrow t_{0}$$. Define the following continuous interval-valued function space,

$$C(I,IR)=\{f|f:I\rightarrow IR, f \mbox{ is continuous on } I\}.$$

Introduce the following well-known arithmetic rules for $$f,g\in C(I,IR)$$:

1. (i)

$$(f+g)(t)=f(t)+g(t)$$;

2. (ii)

$$(kf)(t)=kf(t)$$, $$k\in R$$;

3. (iii)

$$(f\ominus_{g}g)(t)=f(t)\ominus_{g}g(t)$$;

4. (iv)

$$(fg)(t)=f(t)g(t)$$.

Under these arithmetic rules, we discuss some basic properties in $$C(I,IR)$$.

### Theorem 3.7

If $$f,g\in C(I,IR)$$, then kf, $$f+g$$, $$f\ominus_{g}g$$, and fg are continuous on I.

### Proof

For any $$t_{0}\in I$$, since

$$H\bigl(kf(t),kf(t_{0})\bigr)=|k|H\bigl(f(t),f(t_{0}) \bigr),\quad k\in R,$$

kf is continuous at the point $$t_{0}$$. Again, through equation (3.3), we obtain

$$H\bigl(f(t)+g(t),f(t_{0})+g(t_{0})\bigr)\leq H \bigl(f(t),f(t_{0})\bigr)+H\bigl(g(t),g(t_{0})\bigr).$$

Thus, following the definition of continuity, we derive that $$f+g\in C(I,IR)$$. Further, equations (3.3) and (3.5) yield

\begin{aligned}& H\bigl(f(t)\ominus_{g}g(t),f(t_{0})\ominus_{g}g(t_{0}) \bigr) \\& \quad \leq H\bigl(f(t)+g(t_{0}),g(t)+f(t_{0})\bigr) \\& \quad \leq H\bigl(f(t),f(t_{0})\bigr)+H\bigl(g(t),g(t_{0}) \bigr). \end{aligned}

Thus, $$f\ominus_{g}g\in C(I,IR)$$. On the other hand, equations (3.2) and (3.6) imply that

\begin{aligned}& H\bigl(f(t)g(t),f(t_{0})g(t_{0})\bigr) \\& \quad \leq H\bigl(f(t)g(t),f(t)g(t_{0})\bigr)+H\bigl(f(t)g(t_{0}),f(t_{0})g(t_{0}) \bigr) \\& \quad \leq H\bigl(0,f(t)\bigr)H\bigl(g(t),g(t_{0})\bigr)+H \bigl(0,g(t_{0})\bigr)H\bigl(f(t),f(t_{0})\bigr) \end{aligned}

and, consequently, $$fg\in C(I,IR)$$. □

Through the process of the proof above, we notice that $$f\in C(I,IR)$$ if and only if $$f^{L}, f^{R}\in C(I,R)$$, where $$f(t)=[f^{L}(t), f^{R}(t)]$$. Further, according to the above arithmetic rules, $$C(I,IR)$$ is also a quasi-linear space. Define

$$\rho(f,g)=\sup_{t\in I}\bigl\{ H\bigl(f(t),g(t) \bigr)\bigr\} .$$
(3.7)

One can prove that ρ satisfies the three basic properties of a metric, namely if $$f,g,h\in C(I,IR)$$, then

1. (i)

$$\rho(f,g)\geq0$$; $$\rho(f,g)=0$$ if and only if $$f=g$$;

2. (ii)

$$\rho(f,g)=\rho(g,f)$$;

3. (iii)

$$\rho(f,g)\leq\rho(f,h)+\rho(h,g)$$.

Thus, $$(C(I,IR),\rho)$$ is a metric space. In addition, in terms of the properties of the Hausdorff-Pompeiu metric, it is easy to see that ρ has the following properties, namely if $$f,g,\varphi,\psi\in C(I,IR)$$, then:

1. (i)

$$\rho(f+\varphi,f+\psi)=\rho(\varphi,\psi)$$;

2. (ii)

$$\rho(kf,kg)=|k|\rho(f,g)$$, where $$k\in R$$;

3. (iii)

$$\rho(f\varphi,f\psi)\leq\rho(0,f)\rho(\varphi,\psi)$$;

4. (iv)

$$\rho(f+g,\varphi+\psi)\leq\rho(f,\varphi)+\rho(g,\psi)$$;

5. (v)

$$\rho(f\ominus_{g}\varphi,f\ominus_{g}\psi)\leq\rho(\varphi,\psi)$$;

6. (vi)

$$\rho(f\ominus_{g}g,\varphi\ominus_{g}\psi)\leq\rho(f+\psi,g+\varphi)$$.
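The sup metric ρ of equation (3.7) can be approximated on a sampled grid. The following sketch (functions, grid, and sample data are illustrative choices; sampling only approximates the supremum) checks the triangle inequality and the value of ρ for two simple interval-valued functions on $$I=[0,1]$$:

```python
# Approximating rho(f, g) = sup_t H(f(t), g(t)) on a finite grid.

def hdist(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def rho(f, g, ts):
    return max(hdist(f(t), g(t)) for t in ts)

f = lambda t: (t, t + 1.0)           # f(t) = [t, t+1]
g = lambda t: (0.0, 2.0 * t)         # g(t) = [0, 2t]
h = lambda t: (0.5 * t, 1.5 * t)     # h(t) = [t/2, 3t/2]
ts = [i / 1000 for i in range(1001)]  # grid on I = [0, 1]

# sup_t max(t, |1 - t|) = 1, attained at the endpoints of I
assert abs(rho(f, g, ts) - 1.0) < 1e-9
# triangle inequality (iii): rho(f, g) <= rho(f, h) + rho(h, g)
assert rho(f, g, ts) <= rho(f, h, ts) + rho(h, g, ts) + 1e-12
```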

We further discuss some properties of $$C(I,IR)$$ useful for studying the properties of IDEs. To this end, we introduce the notion of convergence of an interval-valued function sequence. For $$f_{n}, f\in C(I,IR)$$, $$n=1, 2,\ldots$$ , if $$\rho(f_{n},f)\rightarrow0$$ as $$n\rightarrow\infty$$, $$\{ f_{n}\}_{n\geq1}$$ is said to be convergent to f. Similarly, we say that $$\{f_{n}\}_{n\geq1}$$ is a Cauchy sequence if $$\rho(f_{n},f_{m})\rightarrow0$$ as $$m,n\rightarrow\infty$$.

### Theorem 3.8

The quasi-linear space $$(C(I,IR),\rho)$$ is complete.

### Proof

Let $$\{f_{n}\}_{n\geq1}$$ be an arbitrary Cauchy sequence in $$C(I,IR)$$. It follows that for any $$\varepsilon>0$$, there exists $$N_{0}(\varepsilon)>0$$ such that if $$n,m>N_{0}(\varepsilon)$$, then

$$\bigl\Vert f_{n}(t)\ominus_{g}f_{m}(t)\bigr\Vert _{I}=H\bigl(f_{n}(t),f_{m}(t)\bigr)\leq \rho(f_{n},f_{m})< \frac {\varepsilon}{3}, \quad t\in I.$$

Therefore, for any fixed t, $$\{f_{n}(t)\}_{n\geq1}$$ is a Cauchy sequence in the complete normed quasi-linear space IR and, accordingly, there exists an element $$f(t)\in IR$$ such that when $$n>N_{0}(\varepsilon)$$, one can derive that

$$\bigl\Vert f_{n}(t)\ominus_{g}f(t)\bigr\Vert _{I}< \frac{\varepsilon}{3}.$$

This way, $$\{f_{n}\}_{n\geq1}$$ converges uniformly to f on I. On the other hand, for any $$t_{0}\in I$$ there exists $$\delta(\varepsilon)>0$$ such that if $$|t-t_{0}|<\delta(\varepsilon)$$, then $$\|f_{n}(t)\ominus _{g}f_{n}(t_{0})\|_{I}< \frac{\varepsilon}{3}$$. According to equation (3.2) and property (i) as in Theorem 3.4, we see that

$$\bigl\Vert f(t)\ominus_{g}f(t_{0})\bigr\Vert _{I}\leq\bigl\Vert f(t)\ominus_{g}f_{n}(t) \bigr\Vert _{I} +\bigl\Vert f_{n}(t) \ominus_{g}f_{n}(t_{0})\bigr\Vert _{I}+\bigl\Vert f_{n}(t_{0})\ominus _{g}f(t_{0})\bigr\Vert _{I}< \varepsilon.$$

Consequently, $$f\in C(I,IR)$$, and hence the proof is completed. □

As with the normed quasi-linear space IR above, we can introduce a norm on $$C(I,IR)$$, namely $$\|f\|_{C}=\rho(f,0)$$. By means of the Hausdorff-Pompeiu metric on interval numbers above, one can see that $$(C(I,IR), \|\cdot\|_{C})$$ is a normed quasi-linear space. We also notice that $$|f(t)|=H(f(t),0)$$. Therefore, we can rewrite $$\|f\|_{C}$$ as $$\sup_{t\in I}|f(t)|$$. Additionally, by means of Theorems 3.4 and 3.8, the following basic properties are valid.

### Theorem 3.9

If $$f,g\in C(I,IR)$$, then

1. (i)

$$\|f\ominus_{g}g\|_{C}=\rho(f,g)$$;

2. (ii)

$$\|f\|_{C}-\|g\|_{C}\leq\|f\ominus_{g}g\|_{C}\leq\|f\|_{C}+\|g\|_{C}$$;

3. (iii)

$$\|fg\|_{C}\leq\|f\|_{C}\|g\|_{C}$$;

4. (iv)

$$(C(I,IR),\|\cdot\|_{C})$$ is a complete normed quasi-linear space.

We next develop a fixed point theorem under the gH-difference. x is said to be a fixed point of a mapping $$T:C(I,IR)\rightarrow C(I,IR)$$ if $$Tx=x$$. We say that T is a contraction mapping on $$C(I,IR)$$ if there exists a real number α with $$0<\alpha<1$$ such that $$\|Tx\ominus _{g}Ty\|_{C}\leq\alpha\|x\ominus_{g}y\|_{C}$$ for any $$x,y\in C(I,IR)$$. Following the proof of the classical contraction mapping principle, we obtain a fixed point theorem as below.

### Theorem 3.10

If $$T:C(I,IR)\rightarrow C(I,IR)$$ is a contraction mapping, then T has a unique fixed point.
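The mechanism behind Theorem 3.10 is the standard Picard iteration. As a minimal sketch (our own, on the simpler space IR rather than $$C(I,IR)$$, with a hypothetical mapping T of our own choosing), the map $$Ta=[1,2]+0.5a$$ is a contraction with $$\alpha=0.5$$ under the gH-norm, and iteration converges to its unique fixed point $$[2,4]$$:

```python
def gh_diff(a, b):
    """gH-difference: [min(aL - bL, aR - bR), max(aL - bL, aR - bR)]."""
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def norm(a):
    """||a||_I = H(a, 0) = max(|aL|, |aR|)."""
    return max(abs(a[0]), abs(a[1]))

def T(a):
    """T(a) = [1, 2] + 0.5 * a, an interval contraction with alpha = 0.5."""
    return (1.0 + 0.5 * a[0], 2.0 + 0.5 * a[1])

# contraction property: ||Ta gH- Tb|| = 0.5 ||a gH- b||
a, b = (0.0, 0.0), (-5.0, 7.0)
assert abs(norm(gh_diff(T(a), T(b))) - 0.5 * norm(gh_diff(a, b))) < 1e-12

# Picard iteration converges to the fixed point [2, 4]
x = (0.0, 0.0)
for _ in range(60):
    x = T(x)
assert all(abs(v - w) < 1e-9 for v, w in zip(x, (2.0, 4.0)))
```

The fixed point solves $$x^{L}=1+0.5x^{L}$$ and $$x^{R}=2+0.5x^{R}$$ componentwise, which is exactly how the contraction argument is used for SIDE in Section 6.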

## 4 The properties of gH-differentiability

Since the addition arithmetic on interval numbers is irreversible, the concept of a derivative of an interval-valued function has attracted much attention among researchers. To address this, Hukuhara  introduced the concept of the H-derivative, based on the H-difference. However, as mentioned in Section 1, this concept cannot be applied comprehensively to the study of interval-valued functions, since the H-difference does not exist for every pair of interval numbers. Thereafter, Bede and Gal  generalized this derivative to the GH-derivative. Although the latter concept is more useful, it still requires the same basic assumptions as the H-derivative. Fortunately, Stefanini and Bede  proposed a still more general concept of derivative (i.e., the gH-derivative) by comparison with the GH-derivative. Its main merit is that it parallels the derivative of a real-valued function.

### Definition 4.1


$$f:I\rightarrow IR$$ is said to be gH-differentiable on I if f is gH-differentiable at every $$t\in I$$, namely there exists an interval number $$f'(t)\in IR$$ such that

$$f'(t)=\lim_{h\rightarrow0}\frac{f(t+h)\ominus_{g}f(t)}{h}.$$
(4.1)

f is said to be (i)-differentiable in $$t\in I$$ if $$f'(t)=[(f^{L})'(t), (f^{R})'(t)]$$ and (ii)-differentiable if $$f'(t)=[(f^{R})'(t), (f^{L})'(t)]$$, where $$f(t)=[f^{L}(t), f^{R}(t)]$$.
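As a quick numerical illustration of Definition 4.1 (our own sketch, not part of the original): for $$f(t)=[t,2t+1]$$ the gH difference quotient tends to $$f'(t)=[1,2]$$ from both sides, in agreement with the endpoint formula of Theorem 4.3 below, since $$(f^{L})'=1$$ and $$(f^{R})'=2$$.

```python
def gh_diff(a, b):
    """gH-difference: [min(aL - bL, aR - bR), max(aL - bL, aR - bR)]."""
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def gh_quotient(F, t, h):
    """The difference quotient of equation (4.1): (F(t+h) gH- F(t)) / h."""
    d = gh_diff(F(t + h), F(t))
    return tuple(sorted((d[0] / h, d[1] / h)))

f = lambda t: (t, 2 * t + 1)          # f^L(t) = t, f^R(t) = 2t + 1

for h in (1e-3, -1e-3):               # both one-sided quotients agree
    q = gh_quotient(f, 0.3, h)
    assert abs(q[0] - 1.0) < 1e-9 and abs(q[1] - 2.0) < 1e-9
```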

The main advantage of such a definition is that the formulation of the gH-derivative is simpler than that of the GH-derivative. Thus, it is easily utilized to study interval-valued functions. According to the definition, we observe that, when f is (i)-differentiable, $$w(f(t))$$ is an increasing function, while when f is (ii)-differentiable, $$w(f(t))$$ is decreasing. Further, as in real analysis, the left and right derivatives of f at the point t can be expressed by

\begin{aligned}& f_{+}'(t)=\lim_{h\rightarrow0^{+}}\frac{1}{h} \bigl[f(t+h)\ominus_{g}f(t)\bigr], \end{aligned}
(4.2)
\begin{aligned}& f_{-}'(t)=\lim_{h\rightarrow0^{-}} \frac{1}{h}\bigl[f(t+h)\ominus_{g}f(t)\bigr]. \end{aligned}
(4.3)

Accordingly, we can give a necessary and sufficient condition for gH-differentiability below.

### Theorem 4.2

f is gH-differentiable in $$t\in I$$ if and only if $$f_{+}'(t)$$ and $$f_{-}'(t)$$ exist and $$f_{+}'(t)=f_{-}'(t)$$.

### Proof

If f is gH-differentiable in $$t\in I$$, then for any $$\varepsilon>0$$, there exists $$\delta(\varepsilon)>0$$, such that when $$|h|<\delta (\varepsilon)$$, we have

$$\biggl\Vert \frac{1}{h} \bigl[f(t+h)\ominus_{g}f(t) \bigr]\ominus_{g}f'(t) \biggr\Vert _{I}< \varepsilon.$$
(4.4)

Consequently, if taking $$0< h<\delta(\varepsilon)$$ or $$-\delta (\varepsilon)< h<0$$, the conclusion is true. Conversely, when $$f_{+}'(t)=f_{-}'(t)$$, it is easy to prove that the conclusion is valid through equations (4.2) and (4.3). □

Based on the concept of the gH-derivative, Stefanini et al. established the relationship between the gH-derivative and conventional derivatives; in other words, the gH-derivative of f can be expressed by the derivatives of its endpoint functions.

### Theorem 4.3


$$f:I\rightarrow IR$$ is gH-differentiable in t if and only if $$f^{L}$$ and $$f^{R}$$ are both differentiable, and

$$f'=\bigl[\min\bigl\{ \bigl(f^{L} \bigr)',\bigl(f^{R}\bigr)'\bigr\} ,\max\bigl\{ \bigl(f^{L}\bigr)',\bigl(f^{R} \bigr)'\bigr\} \bigr].$$
(4.5)

It should be pointed out that usually one can only find that $$(f+g)' \subseteq f'+g'$$ when f and g are differentiable . However, under some mild assumptions, the inclusion can be replaced by equality. To this end, for convenience of notation below, we write $$\omega(t)=f(t)\ominus_{g}g(t)$$, $$u(t,h)=f(t+h)\ominus_{g}f(t)$$, and $$v(t,h)=g(t+h)\ominus_{g}g(t)$$ with $$t+h\in I$$.

### Theorem 4.4

The following property is true:

$$(f+g)'=f'+g',$$
(4.6)

provided that f and g are simultaneously (i)-differentiable or (ii)-differentiable.

### Proof

Assume that f and g are (i)-differentiable. One can prove that both $$w(f(t))$$ and $$w(g(t))$$ are increasing functions. Hence, in the case of $$h>0$$, since $$w(f(t+h))\geq w(f(t))$$ and $$w(g(t+h))\geq w(g(t))$$, through the definition of the gH-difference we obtain $$f(t+h)=f(t)+u(t,h)$$ and $$g(t+h)=g(t)+v(t,h)$$, and thus

$$f(t+h)+g(t+h)=f(t)+g(t)+u(t,h)+v(t,h).$$
(4.7)

Further, in the case of $$h<0$$, since $$w(f(t+h))\leq w(f(t))$$ and $$w(g(t+h))\leq w(g(t))$$, we get $$f(t)=f(t+h)+(-1)u(t,h)$$ and $$g(t)=g(t+h)+(-1)v(t,h)$$, and accordingly

$$f(t)+g(t)=f(t+h)+g(t+h)+(-1)\bigl[u(t,h)+v(t,h)\bigr].$$
(4.8)

Hence,

\begin{aligned}& \lim_{h\rightarrow0^{+}}\frac{1}{h}\bigl[\bigl(f(t+h)+g(t+h)\bigr) \ominus _{g}\bigl(f(t)+g(t)\bigr)\bigr] \\& \quad = \lim_{h\rightarrow0^{+}}\frac{1}{h}\bigl[u(t,h)+v(t,h) \bigr]=f'(t)+g'(t) \\& \quad = \lim_{h\rightarrow0^{-}}\frac{1}{h}\bigl[\bigl(f(t+h)+g(t+h) \bigr)\ominus_{g}\bigl(f(t)+g(t)\bigr)\bigr]. \end{aligned}

Thus, $$f+g$$ is gH-differentiable and equation (4.6) is true. Similarly, when f and g are (ii)-differentiable, one can prove that $$f+g$$ is gH-differentiable and equation (4.6) is also true. □
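Theorem 4.4 can be checked numerically (a sketch of our own, not from the paper): $$f(t)=[t,2t+1]$$ and $$g(t)=[t,3t+1]$$ are both (i)-differentiable, and the gH difference quotient of $$f+g$$ tends to $$f'+g'=[1,2]+[1,3]=[2,5]$$ from both sides.

```python
def gh_diff(a, b):
    """gH-difference: [min(aL - bL, aR - bR), max(aL - bL, aR - bR)]."""
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

f = lambda t: (t, 2 * t + 1)
g = lambda t: (t, 3 * t + 1)
s = lambda t: (f(t)[0] + g(t)[0], f(t)[1] + g(t)[1])   # the sum f + g

def quotient(F, t, h):
    """(F(t+h) gH- F(t)) / h, endpoints sorted."""
    d = gh_diff(F(t + h), F(t))
    return tuple(sorted((d[0] / h, d[1] / h)))

for h in (1e-4, -1e-4):
    q = quotient(s, 0.7, h)
    assert abs(q[0] - 2.0) < 1e-8 and abs(q[1] - 5.0) < 1e-8   # f' + g' = [2, 5]
```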

### Theorem 4.5

The following property is true:

$$(f\ominus_{g}g)'=f'+(-1)g',$$
(4.9)

if one of the following conditions is satisfied:

1. (i)

f is (i)-differentiable and g is (ii)-differentiable;

2. (ii)

f is (ii)-differentiable and g is (i)-differentiable.

### Proof

Case (i): by the definition of the gH-derivative, $$w(f(t))$$ is increasing and $$w(g(t))$$ is decreasing. Consequently, in the case of $$h>0$$, we obtain $$f(t+h)=f(t)+u(t,h)$$ and $$g(t)=g(t+h)+(-1)v(t,h)$$. Hence,

$$f(t+h)+g(t)=f(t)+g(t+h)+u(t,h)+(-1)v(t,h).$$
(4.10)

Again if $$f(t)=g(t)+\omega(t)$$, equation (4.10) yields

$$g(t+h)+\omega(t+h)+g(t)=g(t)+\omega(t)+g(t+h)+u(t,h)+(-1)v(t,h),$$
(4.11)

from which, together with the cancellation of addition on interval numbers, it follows that

$$\omega(t+h)=\omega(t)+u(t,h)+(-1)v(t,h).$$
(4.12)

In the same way, if $$g(t)=f(t)+(-1)\omega(t)$$, from equation (4.10) it follows that

$$\omega(t)=\omega(t+h)+(-1)\bigl[u(t,h)+(-1)v(t,h)\bigr].$$
(4.13)

Hence, equations (4.12) and (4.13) imply that

$$\omega(t+h)\ominus_{g}\omega(t)=u(t,h)+(-1)v(t,h).$$
(4.14)

In the case of $$h<0$$, since $$w(f(t+h))\leq w(f(t))$$ and $$w(g(t+h))\geq w(g(t))$$, the definition of the gH-difference yields $$f(t)=f(t+h)+(-1)u(t,h)$$ and $$g(t+h)=g(t)+v(t,h)$$, and then we also see that equation (4.14) is valid. This, together with the differentiability of f and g, yields

\begin{aligned} \omega_{+}'(t) =&\lim_{h\rightarrow0^{+}}\frac{1}{h}\bigl[ \omega(t+h)\ominus _{g}\omega(t)\bigr] \\ =&\lim_{h\rightarrow0^{+}} \frac{1}{h}\bigl[u(t,h)+(-1)v(t,h)\bigr] \\ =& f'(t)+(-1)g'(t)=\omega_{-}'(t). \end{aligned}

Thus, equation (4.9) is true in case (i). Similar to the process of the proof above, one can see that equation (4.9) is also valid in case (ii). □

Notice that when f and g are both (i)-differentiable or both (ii)-differentiable, equation (4.9) need not hold, as the following simple example illustrates.

### Example 4.6

Take $$f(t)=[t,2t+1]$$ and $$g(t)=[t,3t+1]$$ with $$0\leq t\leq1$$. Then $$f(t)\ominus_{g}g(t)=[-t,0]$$. Moreover, f and g are both (i)-differentiable, and $$(f(t)\ominus_{g}g(t))'=[-t,0]'=[-1,0]$$. However, $$f'(t)+(-1)g'(t)=[1,2]+(-1)[1,3]=[-2,1]$$. Thus, $$(f(t)\ominus _{g}g(t))'\neq f'(t)+(-1)g'(t)$$.
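The computation in Example 4.6 is easy to verify numerically (our own sketch): the gH difference quotient of $$\omega(t)=f(t)\ominus_{g}g(t)=[-t,0]$$ gives $$[-1,0]$$, which differs from $$f'+(-1)g'=[-2,1]$$.

```python
def gh_diff(a, b):
    """gH-difference: [min(aL - bL, aR - bR), max(aL - bL, aR - bR)]."""
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

f = lambda t: (t, 2 * t + 1)
g = lambda t: (t, 3 * t + 1)
omega = lambda t: gh_diff(f(t), g(t))          # equals [-t, 0] on [0, 1]

def quotient(F, t, h):
    d = gh_diff(F(t + h), F(t))
    return tuple(sorted((d[0] / h, d[1] / h)))

q = quotient(omega, 0.5, 1e-4)
assert abs(q[0] + 1.0) < 1e-8 and abs(q[1]) < 1e-8   # (f gH- g)' = [-1, 0]
assert q != (-2.0, 1.0)                              # not f' + (-1)g' = [-2, 1]
```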

As we know, if two real scalar functions are differentiable, their product is also differentiable and obeys the product rule. However, for two given interval-valued functions, even if both are gH-differentiable, their product does not usually obey this product rule. This can be illustrated by the following example.

### Example 4.7

Take $$f(t)=[e^{t}, e^{t+1}]$$ and $$g(t)=[-\cos t, \cos t]$$ with $$0\leq t\leq\frac{\pi}{4}$$. It is easy to see that $$f(t)g(t)=[-e^{t+1}\cos t, e^{t+1}\cos t]$$. Further, through a simple deduction, we know that

\begin{aligned}& \bigl(f(t)g(t)\bigr)'=\bigl[-e^{t+1}(\cos t-\sin t),e^{t+1}(\cos t-\sin t)\bigr], \\& f'(t)g(t)=\bigl[-e^{t+1}\cos t,e^{t+1}\cos t\bigr],\qquad f(t)g'(t)=\bigl[-e^{t+1}\sin t,e^{t+1}\sin t\bigr]. \end{aligned}

This shows that

$$f'(t)g(t)+f(t)g'(t)\neq\bigl(f(t)g(t) \bigr)'.$$

In what follows, we specialize the interval-valued function f to a real scalar function, and we study some properties of the product fg. In addition to the notations presented in Theorems 4.4 and 4.5, write $$W(t,h)=f(t+h)g(t+h)\ominus _{g}f(t)g(t)$$, $$U(t,h)=f(t+h)-f(t)$$.

### Theorem 4.8

Assume that $$f\in C^{1}(I,R)$$ and g is (i)-differentiable. If $$f(t)f'(t)>0$$, then

$$(fg)'=f'g +fg'.$$
(4.15)

### Proof

In the case of $$h>0$$ with $$t+h\in I$$, since g is (i)-differentiable, $$w(g(t))$$ is increasing and, accordingly, $$g(t+h)=g(t)+v(t,h)$$. Again, since $$f(t)f'(t)>0$$, $$f(t)$$ and $$U(t,h)$$ have the same sign, which yields

$$f(t+h)g(t+h)=f(t)g(t)+f(t)v(t,h)+U(t,h)g(t)+U(t,h)v(t,h).$$
(4.16)

Thus,

$$W(t,h)=f(t)v(t,h)+U(t,h)g(t)+U(t,h)v(t,h).$$
(4.17)

On the other hand, in the case of $$h<0$$, we have $$w(g(t+h))\leq w(g(t))$$, and hence $$g(t)=g(t+h)+(-1)v(t,h)$$. Further, it follows from $$f(t)f'(t)>0$$ that $$f(t+h)$$ and $$(-1)U(t,h)$$ have the same sign. Consequently,

\begin{aligned} f(t)g(t) =& f(t+h)g(t+h)+(-1)\bigl[f(t+h)v(t,h) \\ &{}+ U(t,h)g(t+h)+(-1)U(t,h)v(t,h)\bigr]. \end{aligned}
(4.18)

Hence,

$$W(t,h)=f(t+h)v(t,h)+U(t,h)g(t+h)+(-1)U(t,h)v(t,h).$$
(4.19)

Further, depending on the continuity and differentiability of f and g, equations (4.17) and (4.19) imply that

\begin{aligned} \lim_{h\rightarrow0^{+}}\frac{1}{h}W(t,h) =& \lim _{h\rightarrow0^{+}} \biggl(\frac{1}{h}U(t,h) \biggr)g(t)+f(t)\lim _{h\rightarrow0^{+}}\frac {1}{h}v(t,h) \\ &{}+\lim_{h\rightarrow0^{+}}U(t,h) \biggl(\frac{1}{h}v(t,h) \biggr) \\ =& f'(t)g(t) +f(t)g'(t)=\lim_{h\rightarrow0^{-}} \frac{1}{h}W(t,h). \end{aligned}
(4.20)

This shows that equation (4.15) is valid by Theorem 4.2. □
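A numerical sanity check of Theorem 4.8 (our own sketch, with names of our own choosing): take $$f(t)=e^{t}$$, so that $$f(t)f'(t)=e^{2t}>0$$, and the (i)-differentiable $$g(t)=[t,2t+1]$$; the difference quotient of fg then approaches $$f'g+fg'$$.

```python
import math

def gh_diff(a, b):
    """gH-difference: [min(aL - bL, aR - bR), max(aL - bL, aR - bR)]."""
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def smul(c, a):
    """Real scalar times interval."""
    return (min(c * a[0], c * a[1]), max(c * a[0], c * a[1]))

f = lambda t: math.exp(t)                      # f f' = e^{2t} > 0 on I
g = lambda t: (t, 2 * t + 1)                   # (i)-differentiable
fg = lambda t: smul(f(t), g(t))

def quotient(F, t, h):
    d = gh_diff(F(t + h), F(t))
    return tuple(sorted((d[0] / h, d[1] / h)))

t, h = 0.5, 1e-6
num = quotient(fg, t, h)
# f'g + fg' with f'(t) = e^t and g'(t) = [1, 2]
expect = tuple(u + v for u, v in zip(smul(math.exp(t), g(t)),
                                     smul(f(t), (1.0, 2.0))))
assert all(abs(a - b) < 1e-4 for a, b in zip(num, expect))
```

Here `expect` adds the two intervals endpoint by endpoint, which is valid because interval addition is endpoint-wise.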

### Theorem 4.9

Let g be (i)-differentiable and $$f\in C^{1}(I,R)$$. Under $$f(t)f'(t)<0$$, if $$w(f(t)g(t))$$ is increasing, then fg is gH-differentiable and

$$(fg)'+(-1)f'g =fg';$$
(4.21)

if $$w(f(t)g(t))$$ is decreasing, then fg is gH-differentiable and

$$(fg)'+(-1)fg'=f'g.$$
(4.22)

### Proof

Let $$w(f(t)g(t))$$ be increasing. Since $$f(t)f'(t)<0$$, we can see that $$f(t+h)$$ and $$(-1)U(t,h)$$ have the same sign when $$h>0$$, and meanwhile $$f(t)$$ and $$U(t,h)$$ are so if $$h<0$$. In the case of $$h>0$$, we know that $$g(t+h)=g(t)+v(t,h)$$ by the (i)-differentiability of g. Thus,

$$\bigl[f(t+h)+(-1)U(t,h)\bigr]g(t+h)=f(t)\bigl[g(t)+v(t,h)\bigr].$$
(4.23)

Namely,

$$f(t+h)g(t+h)+(-1)U(t,h)g(t+h)=f(t)g(t)+f(t)v(t,h).$$
(4.24)

Again since $$w(f(t+h)g(t+h))\geq w(f(t)g(t))$$, we can obtain

$$f(t+h)g(t+h)=f(t)g(t)+W(t,h).$$
(4.25)

This way, by substituting equation (4.25) into equation (4.24) and using the cancellation law on interval numbers, we see that

$$W(t,h)+(-1)U(t,h)g(t+h)=f(t)v(t,h).$$
(4.26)

So,

$$\lim_{h\rightarrow0^{+}}\frac{1}{h}W(t,h)+(-1)g(t)\lim _{h\rightarrow 0^{+}}\frac{1}{h}U(t,h)=f(t)\lim_{h\rightarrow0^{+}} \frac{1}{h}v(t,h),$$

that is,

$$\bigl(f(t)g(t)\bigr)_{+}'+(-1)f'(t)g(t)=f(t)g'(t).$$
(4.27)

In addition, in the case of $$h<0$$, since g is (i)-differentiable, we can derive that $$g(t)=g(t+h)+(-1)v(t,h)$$. Thus,

$$f(t+h)\bigl[g(t+h)+(-1)v(t,h)\bigr]=\bigl[f(t)+U(t,h)\bigr]g(t).$$
(4.28)

In other words,

$$f(t+h)g(t+h)+(-1)f(t+h)v(t,h)=f(t)g(t)+U(t,h)g(t).$$
(4.29)

Again since

$$f(t)g(t)=f(t+h)g(t+h)+(-1)W(t,h),$$
(4.30)

by virtue of the cancellation law on interval numbers from equations (4.29) and (4.30) we infer that

$$W(t,h)+(-1)U(t,h)g(t)=f(t+h)v(t,h).$$
(4.31)

Thus,

$$\lim_{h\rightarrow0^{-}}\frac{1}{h}W(t,h)+(-1)g(t)\lim _{h\rightarrow 0^{-}}\frac{1}{h}U(t,h)=f(t)\lim_{h\rightarrow0^{-}} \frac{1}{h}v(t,h),$$

namely

$$\bigl(f(t)g(t)\bigr)_{-}'+(-1)f'(t)g(t)=f(t)g'(t).$$
(4.32)

Equations (4.27) and (4.32) imply that fg is gH-differentiable, and meanwhile equation (4.21) holds. Similar to the process of the proof above, one can prove that equation (4.22) is also valid. □

Similarly, when g is (ii)-differentiable, we can obtain the following properties of fg according to the sign of $$f(t)f'(t)$$; the proofs are omitted.

### Theorem 4.10

Assume that $$f\in C^{1}(I,R)$$ and g is (ii)-differentiable. If $$f(t)f'(t)<0$$, then

$$(fg)'=f'g+fg'.$$
(4.33)

Further, under $$f(t)f'(t)>0$$, no matter whether $$w(f(t)g(t))$$ is increasing or decreasing, fg is gH-differentiable. More precisely, if $$w(f(t)g(t))$$ is increasing, then

$$(fg)'+(-1)fg'=f'g;$$
(4.34)

if $$w(f(t)g(t))$$ is decreasing, then

$$(fg)'+(-1)f'g =fg'.$$
(4.35)

## 5 Integral of interval-valued function

In this section, we recall the concept of the integral of an interval-valued function originally proposed by Stefanini and Bede , and discuss some new properties. Let $$J=[t_{0},t_{f}]$$ and $$f(t)=[f^{L}(t),f^{R}(t)]$$ with $$t\in J$$. The integral of f is defined by the integrals of the endpoints , namely

$$\int_{t_{0}}^{t_{f}}f(t)\, dt= \biggl[ \int_{t_{0}}^{t_{f}}f^{L}(t)\, dt, \int _{t_{0}}^{t_{f}}f^{R}(t)\, dt \biggr].$$
(5.1)

In such a case, f is said to be integrable on J. For an integrable interval-valued function g, by the definition of the gH-difference we easily see that

\begin{aligned} \int_{t_{0}}^{t_{f}}f(t)\,dt\ominus_{g} \int_{t_{0}}^{t_{f}}g(t)\,dt =& \biggl[\min \biggl\{ \int_{t_{0}}^{t_{f}}\bigl(f^{L}(t)-g^{L}(t) \bigr)\,dt, \int_{t_{0}}^{t_{f}}\bigl(f^{R}(t)-g^{R}(t) \bigr)\,dt \biggr\} , \\ &\max \biggl\{ \int_{t_{0}}^{t_{f}}\bigl(f^{L}(t)-g^{L}(t) \bigr)\,dt, \int _{t_{0}}^{t_{f}}\bigl(f^{R}(t)-g^{R}(t) \bigr)\,dt \biggr\} \biggr]. \end{aligned}
(5.2)

Correspondingly, some fundamental properties have been studied.

### Theorem 5.1


Let $$f,g\in C(J,IR)$$. Then

1. (i)

$$\int_{t_{0}}^{t_{f}}(f(t)+g(t))\, dt=\int_{t_{0}}^{t_{f}}f(t)\, dt+\int _{t_{0}}^{t_{f}}g(t)\, dt$$;

2. (ii)

$$\int_{t_{0}}^{t_{f}}f(t)\, dt=\int_{t_{0}}^{\tau}f(t)\, dt+\int_{\tau }^{t_{f}}f(t)\, dt$$, $$t_{0}<\tau<t_{f}$$.

### Theorem 5.2


Let $$f\in C(J,IR)$$. Then

1. (i)

$$F(t)$$ is gH-differentiable, and $$F'(t)=f(t)$$, where $$F(t)=\int _{t_{0}}^{t}f(t)\, dt$$;

2. (ii)

$$G(t)$$ is gH-differentiable, and $$G'(t)=-f(t)$$, where $$G(t)=\int _{t}^{t_{f}}f(t)\, dt$$.

We next present two important integral properties helpful for discussing the following IDE.

### Theorem 5.3

Let f and g be integrable on J. Then $$f\ominus_{g}g$$ is integrable on J, and meanwhile

$$\int_{t_{0}}^{t_{f}}(f\ominus_{g}g) (t)\,dt= \int_{t_{0}}^{t_{f}}f(t)\,dt\ominus_{g} \int _{t_{0}}^{t_{f}}g(t)\,dt,$$
(5.3)

provided that $$w(f(t))\geq w(g(t))$$ for $$t\in J$$ or $$w(f(t))\leq w(g(t))$$ for $$t\in J$$.

### Proof

Write $$f(t)\ominus_{g}g(t)=h(t)$$. If $$w(f(t))\geq w(g(t))$$ with any $$t\in J$$, then $$f(t)=g(t)+h(t)$$, and accordingly

$$\int_{t_{0}}^{t_{f}}f(t)\,dt= \int_{t_{0}}^{t_{f}}g(t)\,dt+ \int_{t_{0}}^{t_{f}}h(t)\,dt.$$
(5.4)

If $$w(f(t))\leq w(g(t))$$ with any $$t\in J$$, then $$g(t)=f(t)+(-1)h(t)$$ and hence,

$$\int_{t_{0}}^{t_{f}}g(t)\,dt= \int_{t_{0}}^{t_{f}}f(t)\,dt+(-1) \int_{t_{0}}^{t_{f}}h(t)\,dt.$$
(5.5)

Thus, equations (5.4) and (5.5) illustrate that equation (5.3) is true. □
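Theorem 5.3 can be illustrated numerically (a sketch of our own, using a simple trapezoid rule): for $$f(t)=[0,t]$$ and $$g(t)=[0,t^{2}]$$ on $$[0,1]$$ we have $$w(f(t))=t\geq t^{2}=w(g(t))$$, and both sides of (5.3) equal $$[0,1/6]$$.

```python
def trapz(fn, a, b, n=2000):
    """Composite trapezoid rule for a real-valued integrand."""
    h = (b - a) / n
    return h * (0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n)))

def gh_diff(a, b):
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

f = lambda t: (0.0, t)            # w(f(t)) = t
g = lambda t: (0.0, t * t)        # w(g(t)) = t^2 <= t on [0, 1]

# endpoint integrals as in (5.1)
int_f = (trapz(lambda t: f(t)[0], 0, 1), trapz(lambda t: f(t)[1], 0, 1))
int_g = (trapz(lambda t: g(t)[0], 0, 1), trapz(lambda t: g(t)[1], 0, 1))

# left side of (5.3): integrate the pointwise gH-difference
d_fn = lambda t: gh_diff(f(t), g(t))
int_h = (trapz(lambda t: d_fn(t)[0], 0, 1), trapz(lambda t: d_fn(t)[1], 0, 1))

rhs = gh_diff(int_f, int_g)       # [0, 1/2] gH- [0, 1/3] = [0, 1/6]
assert all(abs(a - b) < 1e-5 for a, b in zip(int_h, rhs))
```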

### Theorem 5.4

If $$g_{1},g_{2}\in C(J,IR)$$, the following inequality holds:

$$H \biggl( \int_{t_{0}}^{t}g_{1}(s)\,ds, \int_{t_{0}}^{t}g_{2}(s)\,ds \biggr)\leq \int _{t_{0}}^{t}H \bigl(g_{1}(s),g_{2}(s) \bigr)\,ds.$$
(5.6)

### Proof

Write $$g_{1}(t)=[g_{1}^{L}(t), g_{1}^{R}(t)]$$ and $$g_{2}(t)=[g_{2}^{L}(t), g_{2}^{R}(t)]$$. We can prove that

\begin{aligned}& H \biggl( \int_{t_{0}}^{t}g_{1}(s)\,ds, \int_{t_{0}}^{t}g_{2}(s)\,ds \biggr) \\& \quad = \max \biggl\{ \biggl\vert \int_{t_{0}}^{t}g_{1}^{L}(s)\,ds- \int_{t_{0}}^{t}g_{2}^{L}(s)\,ds \biggr\vert , \biggl\vert \int_{t_{0}}^{t}g_{1}^{R}(s)\,ds- \int_{t_{0}}^{t}g_{2}^{R}(s)\,ds \biggr\vert \biggr\} \\& \quad \leq \max \biggl\{ \int_{t_{0}}^{t}\bigl\vert g_{1}^{L}(s)-g_{2}^{L}(s) \bigr\vert \,ds, \int _{t_{0}}^{t}\bigl\vert g_{1}^{R}(s)-g_{2}^{R}(s) \bigr\vert \,ds \biggr\} \\& \quad \leq \int_{t_{0}}^{t}\max \bigl\{ \bigl\vert g_{1}^{L}(s)-g_{2}^{L}(s)\bigr\vert , \bigl\vert g_{1}^{R}(s)-g_{2}^{R}(s) \bigr\vert \bigr\} \,ds = \int_{t_{0}}^{t}H \bigl(g_{1}(s),g_{2}(s) \bigr)\,ds. \end{aligned}

This completes the proof. □
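The inequality (5.6) can also be spot-checked numerically (our own sketch; the sample functions are arbitrary choices):

```python
import math

def trapz(fn, a, b, n=2000):
    """Composite trapezoid rule for a real-valued integrand."""
    h = (b - a) / n
    return h * (0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n)))

# Hausdorff-Pompeiu metric on intervals
H = lambda u, v: max(abs(u[0] - v[0]), abs(u[1] - v[1]))

g1 = lambda s: (math.sin(s), math.sin(s) + 1.0)
g2 = lambda s: (0.0, s)

t0, t = 0.0, 2.0
lhs = H((trapz(lambda s: g1(s)[0], t0, t), trapz(lambda s: g1(s)[1], t0, t)),
        (trapz(lambda s: g2(s)[0], t0, t), trapz(lambda s: g2(s)[1], t0, t)))
rhs = trapz(lambda s: H(g1(s), g2(s)), t0, t)
assert lhs <= rhs + 1e-9          # H of the integrals <= integral of H
```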

In terms of Lemma 3.3 and Theorem 5.4, one can easily gain the following conclusion.

### Corollary 5.5

If $$f\in C(J,R)$$ and $$g_{1},g_{2}\in C(J,IR)$$, we have

$$H \biggl( \int_{t_{0}}^{t}f(s)g_{1}(s)\,ds, \int_{t_{0}}^{t}f(s)g_{2}(s)\,ds \biggr)\leq \int _{t_{0}}^{t}\bigl|f(s)\bigr|H\bigl(g_{1}(s),g_{2}(s) \bigr)\,ds.$$
(5.7)

## 6 Interval differential equation

In this section, we consider the following semi-linear interval differential equation (SIDE):

$$\left \{ \textstyle\begin{array}{l} x'=a(t)x+f(t,x), \\ x(t_{0})=x_{0}, \end{array}\displaystyle \right .$$
(6.1)

where $$a:J\rightarrow R$$ is an integrable real scalar function, and $$f : J\times IR\rightarrow IR$$ is an interval-valued function; $$x_{0}$$ is a given initial interval number in IR. In order to analyze the properties of the solutions in SIDE, three basic concepts are introduced below.

### Definition 6.1

$$x\in C(J,IR)$$ is said to be continuously gH-differentiable on J if $$x'$$ exists and is continuous on J.

### Definition 6.2

Let x be continuously gH-differentiable on J. Then x is a strong solution of SIDE if it satisfies both the equation and the initial condition.

### Definition 6.3

$$x\in C(J,IR)$$ is the (i)-solution of SIDE if

$$x(t)=\exp \biggl( \int_{t_{0}}^{t}a(u)\,du \biggr)x_{0}+ \int_{t_{0}}^{t}f\bigl(s,x(s)\bigr)\exp \biggl( \int_{s}^{t}a(u)\,du \biggr)\,ds, \quad t\in J ,$$
(6.2)

and the (ii)-solution if

$$x(t)=\exp \biggl( \int_{t_{0}}^{t}a(u)\,du \biggr)x_{0} \ominus_{g}(-1) \int _{t_{0}}^{t}f\bigl(s,x(s)\bigr)\exp \biggl( \int_{s}^{t}a(u)\,du \biggr)\,ds, \quad t\in J .$$
(6.3)

Equations (6.2) and (6.3) are constructed by analogy with the solution formulas for ordinary differential equations. However, the (i)- and (ii)-solutions are usually not strong solutions of SIDE. Based on this consideration, we discuss the relationship between SIDE’s solutions, depending on the above properties of $$C(J,IR)$$. For convenience, write

$$p(t)=\exp \biggl( \int_{t_{0}}^{t}a(u)\,du \biggr)$$

and

$$F(t,x)= \int_{t_{0}}^{t}\frac{f(s,x(s))}{p(s)}\,ds.$$

### Theorem 6.4

Let $$a\in C(J,R)$$ and $$f\in C(J\times IR,IR)$$. If $$a(t)>0$$ for $$t\in J$$, the (i)-solution x is a strong solution of SIDE; if $$a(t)<0$$ and the (ii)-solution x satisfies

$$w\bigl(F(t,x)\bigr)\leq w(x_{0}),$$
(6.4)

then it is also a strong solution of SIDE.

### Proof

Let $$a(t)>0$$ with $$t\in J$$ and x be a (i)-solution of SIDE. Write

$$g(t,x)=x_{0}+F(t,x).$$
(6.5)

By Definition 4.1 and Theorem 5.2, it follows that $$F(t,x)$$ is (i)-differentiable and that

$$F'(t,x)=\frac{f(t,x(t))}{p(t)}.$$
(6.6)

Hence, Theorem 4.4 and equation (6.5) imply that $$g(t,x)$$ is (i)-differentiable and

$$g'(t,x)=\frac{f(t,x(t))}{p(t)}.$$
(6.7)

Again since $$a(t)>0$$ and $$p'(t)=a(t)p(t)$$, it is obvious that $$p(t)p'(t)>0$$. Hence, it follows from Theorem 4.8 and equations (6.2) and (6.5)-(6.7) that

\begin{aligned} x'(t) =& \bigl(p(t)g(t,x)\bigr)'=p'(t)g(t,x)+p(t)g'(t,x) \\ =& a(t)p(t)g(t,x)+f\bigl(t,x(t)\bigr) \\ =& a(t)x(t)+f\bigl(t,x(t)\bigr). \end{aligned}
(6.8)

Therefore, x is a strong solution of SIDE.

On the other hand, let x be a (ii)-solution with $$a(t)<0$$. Since $$p(t)>0$$, we know that $$p(t)p'(t)<0$$. This way, equation (6.3) can be rewritten as

$$x(t)=p(t)h(t,x),$$
(6.9)

where

$$h(t,x)=x_{0}\ominus_{g}(-1)F(t,x).$$
(6.10)

As we mentioned above, $$F(t,x)$$ is (i)-differentiable, and accordingly, from Theorem 4.5 and equation (6.10) it follows that

$$h'(t,x)=\frac{f(t,x(t))}{p(t)}.$$
(6.11)

Since $$w(F(t,x))\leq w(x_{0})$$, equations (6.10) and (6.11) indicate that $$h(t,x)$$ is (ii)-differentiable. Therefore, Theorem 4.10 and equation (6.10) imply that

\begin{aligned} x'(t) =& p'(t)h(t,x)+p(t)h'(t,x) \\ =& a(t)p(t)h(t,x)+f\bigl(t,x(t)\bigr) \\ =& a(t)x(t)+f\bigl(t,x(t)\bigr). \end{aligned}

Hence, the conclusion is true. □

In what follows, we first give an a priori estimate of the solution, and then discuss the existence and uniqueness of strong solutions of SIDE.

### Hypothesis 6.1

There exist $$K>0$$ and $$\alpha>0$$ such that

$$\bigl\Vert f(t,z)\bigr\Vert _{I}\leq K\|z \|_{I}^{\alpha}, \quad \forall(t,z)\in J\times IR.$$
(6.12)

### Lemma 6.5

Under Hypothesis 6.1, if $$f\in C(J\times IR,IR)$$, there is a positive constant $$M_{\alpha}$$ such that the (i)- or (ii)-solution x satisfies

$$\|x\|_{C}\leq M_{\alpha},$$
(6.13)

where

$$M_{\alpha}= \left \{ \textstyle\begin{array}{l@{\quad}l} \|p\|_{C(J,R)}\|x_{0}\|_{I}\exp(K\|p\|_{C(J,R)}\int_{t_{0}}^{t_{f}}\frac{1}{p(s)}\,ds), & \textit{if }\alpha=1, \\ {[(\|p\|_{C(J,R)}\|x_{0}\|_{I})^{1-\alpha}+(1-\alpha )K\|p\|_{C(J,R)}\int_{t_{0}}^{t_{f}}\frac{1}{p(s)}\,ds ]^{\frac{1}{1-\alpha}}}, & \textit{else}. \end{array}\displaystyle \right .$$

### Proof

By means of Theorem 3.4, Corollary 5.5, and equations (6.2) and (6.3), we can prove that

\begin{aligned} \bigl\Vert x(t)\bigr\Vert _{I} \leq& \Vert p\Vert _{C(J,R)} \biggl(\Vert x_{0}\Vert _{I}+ \int_{t_{0}}^{t}\frac {\Vert f(s,x(s))\Vert _{I}}{p(s)}\,ds \biggr) \\ \leq& \Vert x_{0}\Vert _{I}\Vert p\Vert _{C(J,R)}+K\Vert p\Vert _{C(J,R)} \int_{t_{0}}^{t}\frac {1}{p(s)}\bigl\Vert x(s)\bigr\Vert _{I}^{\alpha}\,ds. \end{aligned}

When $$\alpha=1$$, the Gronwall inequality indicates that equation (6.13) is valid; when $$\alpha\neq1$$, the generalized Bellman lemma  implies that equation (6.13) is also true. □

In addition, based on Lemma 6.5, define

$$C_{\alpha}(J,IR)=\bigl\{ x\in C(J,IR):\|x\|_{C}\leq M_{\alpha}\bigr\} .$$

We can prove that $$C_{\alpha}(J,IR)$$ is a complete metric space.

### Hypothesis 6.2

Assume that f satisfies the uniformly Lipschitz condition, namely there exists $$L>0$$ such that

$$\bigl\Vert f(t,z_{1})\ominus_{g}f(t,z_{2}) \bigr\Vert _{I}\leq L\|z_{1}\ominus_{g}z_{2} \|_{I},$$
(6.14)

with $$\forall t\in J$$ and $$\|z_{1}\|_{I}, \|z_{2}\|_{I}\leq M_{\alpha}$$.

In the above SIDE problem, when $$a(t)\equiv0$$, Stefanini and Bede  proved that there exist only two strong solutions under certain restrictions. Here we discuss the existence and uniqueness of strong solutions of SIDE when $$a(t)\neq0$$.

### Theorem 6.6

Let $$a\in C(J,R)$$. Under Hypotheses 6.1 and 6.2, when

$$\beta=L\|p\|_{C(J,R)} \int_{t_{0}}^{t_{f}}\frac{1}{p(s)}\,ds< 1,$$
(6.15)

SIDE has a unique (i)-solution in $$C_{\alpha}(J,IR)$$ if $$a(t)>0$$, and a unique (ii)-solution in $$C_{\alpha}(J,IR)$$ if $$a(t)<0$$, provided that the initial value $$x_{0}$$ satisfies

$$\|p\|_{C(J,R)}\|x_{0}\|_{I}\leq(1- \beta)M_{\alpha}.$$
(6.16)

### Proof

Under $$a(t)>0$$, define a mapping $$T_{1}$$ on $$C_{\alpha}(J,IR)$$ given by

$$(T_{1}x) (t)=\exp \biggl( \int_{t_{0}}^{t}a(u)\,du \biggr)x_{0}+ \int _{t_{0}}^{t}f\bigl(s,x(s)\bigr)\exp \biggl( \int_{s}^{t}a(u)\,du \biggr)\,ds,$$
(6.17)

with $$t\in J$$, namely,

$$(T_{1}x) (t)=p(t)g(t,x),$$
(6.18)

where $$g(t,x)$$ is given by equation (6.5) above. For $$t, t+\Delta t\in J$$ and $$x\in C_{\alpha}(J,IR)$$, in terms of Lemma 2.2 and Theorem 5.1 we have

$$\bigl\Vert g(t+\Delta t,x)\ominus_{g}g(t,x)\bigr\Vert _{I}= \bigl\Vert F(t+\Delta t,x)\ominus_{g}F(t,x)\bigr\Vert _{I} = \biggl\Vert \int_{t}^{t+\Delta t}\frac{f(s,x(s))}{p(s)}\,ds\biggr\Vert _{I} .$$

Thus, we see that $$g(t,x)$$ is continuous in t, due to Hypothesis 6.1 and the a priori estimate of the solution in Lemma 6.5. This way, from Theorem 3.7 it follows that $$p(t)g(t,x)$$ is continuous in t, and hence $$T_{1}x\in C(J,IR)$$. Additionally, by means of Lemma 3.3 and the additive property of the Hausdorff-Pompeiu metric on interval numbers, we derive for $$x,y\in C_{\alpha}(J,IR)$$ that

\begin{aligned} H\bigl((T_{1}x) (t),(T_{1}y) (t)\bigr) =& H\bigl(p(t) \bigl(x_{0}+F(t,x)\bigr),p(t) \bigl(x_{0}+F(t,y)\bigr)\bigr) \\ \leq& H\bigl(0,p(t)\bigr)H\bigl(x_{0}+F(t,x),x_{0}+F(t,y) \bigr) \\ =& p(t)H\bigl(F(t,x),F(t,y)\bigr). \end{aligned}

Thus,

\begin{aligned} \|T_{1}x\ominus_{g} T_{1}y\|_{C} =& \sup_{t\in J}H\bigl((T_{1}x) (t),(T_{1}y) (t) \bigr) \leq \sup_{t\in J}p(t)H\bigl(F(t,x),F(t,y)\bigr) \\ =& \sup_{t\in J}p(t)H \biggl( \int_{t_{0}}^{t}\frac{f(s,x(s))}{p(s)}\,ds, \int _{t_{0}}^{t}\frac{f(s,y(s))}{p(s)}\,ds \biggr) \\ =& \sup_{t\in J}p(t)H \biggl( \int_{t_{0}}^{t}\frac{f(s,x(s))\ominus _{g}f(s,y(s))}{p(s)}\,ds,0 \biggr) \\ \leq& \sup_{t\in J}p(t) \int_{t_{0}}^{t}\frac{\|f(s,x(s))\ominus _{g}f(s,y(s))\|_{I}}{p(s)}\,ds. \end{aligned}

Accordingly, we prove by Hypothesis 6.2 that

\begin{aligned} \begin{aligned}[b] \|T_{1}x\ominus_{g} T_{1}y \|_{C} &\leq L\sup_{t\in J}p(t) \int_{t_{0}}^{t}\frac {\|x(s)\ominus_{g}y(s)\|_{I}}{p(s)}\,ds \\ &\leq L \|p\|_{C(J,R)} \int_{t_{0}}^{t_{f}}\frac{1}{p(s)}\,ds\|x\ominus _{g}y\|_{C} \\ &= \beta\|x\ominus_{g}y\|_{C}. \end{aligned} \end{aligned}
(6.19)

Further, Theorem 3.9, Hypothesis 6.1, and equations (6.16), (6.17), and (6.19) imply that

\begin{aligned} \|T_{1}x\|_{C} \leq& \|T_{1}x \ominus_{g} T_{1}0\|_{C}+\|T_{1}0 \|_{C} \\ \leq& \beta\|x\|_{C}+\|T_{1}0\|_{C} \\ \leq& \beta\|x\|_{C}+\|p\|_{C(J,R)}\|x_{0} \|_{I}\leq M_{\alpha}. \end{aligned}
(6.20)

Consequently, $$T_{1}$$ is a contraction mapping on $$C_{\alpha}(J,IR)$$ and hence has a unique fixed point. This shows that SIDE has a unique (i)-solution. We next prove that SIDE has a unique (ii)-solution. If $$a(t)<0$$, define a mapping $$T_{2}$$ on $$C(J,IR)$$ given by

$$(T_{2}x) (t)=\exp \biggl( \int_{t_{0}}^{t}a(u)\,du \biggr)x_{0} \ominus_{g}(-1) \int _{t_{0}}^{t}f\bigl(s,x(s)\bigr)\exp \biggl( \int_{s}^{t}a(u)\,du \biggr)\,ds,$$
(6.21)

with $$t\in J$$, namely,

$$(T_{2}x) (t)=p(t)h(t,x),$$
(6.22)

where $$h(t,x)$$ is given by equation (6.10) above. On one hand, from Lemma 2.2 it follows that

$$\bigl\Vert h(t+\Delta t,x)\ominus_{g}h(t,x)\bigr\Vert _{I}= \bigl\Vert F(t,x)\ominus_{g}F(t+\Delta t,x)\bigr\Vert _{I} = \biggl\Vert \int_{t}^{t+\Delta t}\frac{f(s,x(s))}{p(s)}\,ds\biggr\Vert _{I} .$$

On the other hand, Lemmas 3.1 and 3.3 yield

\begin{aligned} H\bigl((T_{2}x) (t),(T_{2}y) (t)\bigr) =& H \bigl(p(t) \bigl(x_{0}\ominus_{g}(-1)F(t,x)\bigr),p(t) \bigl(x_{0}\ominus _{g}(-1)F(t,y)\bigr)\bigr) \\ \leq& p(t)H\bigl(F(t,x),F(t,y)\bigr). \end{aligned}
(6.23)

Subsequently, through a similar deduction to above we can prove that $$T_{2}$$ is a contraction mapping on $$C_{\alpha}(J,IR)$$. Therefore, SIDE has a unique (ii)-solution. □

In the above theoretical analysis, Theorem 6.4 gives conditions under which the (i)- and (ii)-solutions are strong solutions, while Theorem 6.6 gives conditions for the existence and uniqueness of the (i)- and (ii)-solutions of equations (6.2) and (6.3), respectively. Together these imply that SIDE has at least one strong solution under certain assumptions, which we state as the following theorem.

### Theorem 6.7

Let $$a(t)$$ be a continuous scalar function with at most countably many zero points on J. Then, under the assumptions of Theorems 6.4 and 6.6, SIDE has at least one strong solution.

### Proof

Let $$\{t_{n}\}_{n\geq1}$$ be the sequence of zero points of $$a(t)$$. Divide the interval J into countably many subintervals $$J_{n}=[t_{n-1},t_{n}]$$, $$n\geq1$$, on each of which $$a(t)$$ keeps the same sign. If $$a(t)>0$$ with $$t_{n-1}< t< t_{n}$$, Theorems 6.4 and 6.6 show that the unique (i)-solution, $$x_{n1}(t)$$, is a strong solution of SIDE on $$J_{n}$$. In the same way, if $$a(t)<0$$ with $$t_{n-1}< t< t_{n}$$, these two theorems ensure that SIDE has a strong solution on $$J_{n}$$, i.e., the (ii)-solution $$x_{n2}(t)$$. Therefore, we can obtain a strong solution $$x(t)$$ for the above SIDE given by

$$x(t)= \left \{ \textstyle\begin{array}{l@{\quad}l} x_{n}(t), & \text{if }a(t)\neq0, t\in(t_{n-1},t_{n}), \\ x_{n-1}(t_{n-1}), & \text{if }t=t_{n-1}, \end{array}\displaystyle \right .$$

where

$$x_{n}(t)= \left \{ \textstyle\begin{array}{l@{\quad}l} x_{n1}(t), & \text{if }a(t)>0, t\in(t_{n-1},t_{n}), \\ x_{n2}(t), & \text{if }a(t)< 0, t\in(t_{n-1},t_{n}). \end{array}\displaystyle \right .$$

Especially, when $$n=1$$, $$x_{1}(t)$$ denotes a strong solution of SIDE on $$[t_{0},t_{1}]$$; when $$n=2$$, $$x_{2}(t_{1})$$ takes the form of $$x_{1}(t_{1})$$. This way, $$x(t)$$ takes the form $$x_{n-1}(t_{n-1})$$ at the endpoint $$t_{n-1}$$. □

## 7 Illustrative examples

In this section, our experiments are implemented with the standard ODE solver (ode45) of MATLAB 7.0. Three simple interval-valued Cauchy problems are used to examine our theoretical results.

### Example 7.1


$$\left \{ \textstyle\begin{array}{l} x'=-x+[1,2]\sin t,\quad 0\leq t\leq4, \\ x(0)=[1,3]. \end{array}\displaystyle \right .$$

This is in fact a linear interval-valued Cauchy problem, satisfying the conditions of Theorem 6.7 with $$a(t)=-1$$. Therefore, there exists a strong solution, i.e., the (ii)-solution, expressed by

$$x(t)= \left \{ \textstyle\begin{array}{l@{\quad}l} e^{-t}([1,3]\ominus_{g}(-1)[1,2]\int_{0}^{t}e^{s}\sin s\, ds),& 0\leq t\leq\pi, \\ e^{\pi-t}(x(\pi)\ominus_{g}[1,2]\int_{\pi}^{t}-e^{s-\pi}\sin s\, ds),& \pi< t\leq4. \end{array}\displaystyle \right .$$

This solution, $$x=[x^{L},x^{R}]$$, is drawn in Figure 1, where $$x^{L}$$ and $$x^{R}$$ denote the lower and upper bound curves of the (ii)-solution x obtained through Theorem 6.7. $$x_{1}=[x_{1}^{L},x_{1}^{R}]$$ and $$x_{2}=[x_{2}^{L},x_{2}^{R}]$$ represent the curves of the I- and II-type solutions obtained through (i)- and (ii)-differentiability, respectively .

As Figure 1 shows, the solutions x and $$x_{2}$$ have the same switching point $$t_{\alpha}=1.3606$$ and are almost identical on $$[0,t_{\alpha}]$$. However, they behave differently between $$t_{\alpha}$$ and 4, since $$w(x(t))$$ is decreasing while $$w(x_{2}(t))$$ is increasing. This indicates that the uncertainty degree of x shrinks with time t, so x is preferable to $$x_{2}$$. In addition, although $$x_{2}$$ has a smaller uncertainty degree than $$x_{1}$$, it diverges as t grows. Overall, $$x_{1}$$ and $$x_{2}$$ are not reasonable solutions because of their divergence.
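The first branch of this (ii)-solution can be evaluated in closed form, since $$\int_{0}^{t}e^{s}\sin s\,ds=\frac{e^{t}(\sin t-\cos t)+1}{2}$$. The following Python sketch (our own; the paper's computations use MATLAB) evaluates $$x(t)$$ on $$[0,\pi]$$, checks the initial condition, and locates the switching point $$t_{\alpha}\approx1.3606$$, where the width of x vanishes:

```python
import math

def gh_diff(a, b):
    """gH-difference: [min(aL - bL, aR - bR), max(aL - bL, aR - bR)]."""
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (min(lo, hi), max(lo, hi))

def x(t):
    """(ii)-solution of Example 7.1 on [0, pi] in closed form."""
    c = (math.exp(t) * (math.sin(t) - math.cos(t)) + 1.0) / 2.0   # the integral
    minus_int = (-2.0 * c, -1.0 * c)       # (-1)[1,2] * c, with c >= 0 on [0, pi]
    core = gh_diff((1.0, 3.0), minus_int)  # [1,3] gH- (-1)[1,2] * c
    e = math.exp(-t)
    return (e * core[0], e * core[1])

w = lambda t: x(t)[1] - x(t)[0]            # width of the solution interval

assert x(0.0) == (1.0, 3.0)                # initial condition
assert abs(w(1.3606)) < 2e-3               # width vanishes at t_alpha = 1.3606
assert w(0.0) > w(1.0) > w(1.3)            # width shrinks up to t_alpha
```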

### Example 7.2

$$\left \{ \textstyle\begin{array}{l} x'=x\sin t+f(t,x), \quad 0\leq t\leq2\pi, \\ x(0)=[1,3], \end{array}\displaystyle \right .$$

where $$f(t,x)= \left \{ \textstyle\begin{array}{l@{\quad}l} \sin t,& 0\leq t\leq\pi, \\ x\sin t,& \pi< t\leq2\pi. \end{array}\displaystyle \right .$$

We note that there is a zero point π of $$a(t)=\sin t$$. It can be checked that f is a continuous interval-valued function in t and x, and that it satisfies the conditions of Theorem 6.7. Consequently, there exists a strong solution composed of the (i)-solution on $$[0,\pi]$$ and the (ii)-solution on $$(\pi,2\pi]$$, namely

$$x(t)= \left \{ \textstyle\begin{array}{l@{\quad}l} e^{1-\cos t}([1,3]+\int_{0}^{t}e^{\cos s-1}\sin s\, ds),& 0\leq t\leq\pi, \\ e^{-2-2\cos t}x(\pi),& \pi< t\leq2\pi. \end{array}\displaystyle \right .$$

The solution x is drawn in Figure 2. In addition, $$x_{1}$$ and $$x_{2}$$ represent the curves of I- and II-type solutions mentioned above, respectively.

By Figure 2, the solutions x and $$x_{1}$$ are almost the same on $$[0,\pi]$$. However, $$w(x(t))$$ is decreasing but $$w(x_{1}(t))$$ is increasing on $$(\pi,2\pi]$$. This indicates that the uncertainty degree of x is smaller than that of $$x_{1}$$ on $$(\pi,2\pi]$$, and thus x is superior to $$x_{1}$$. On the other hand, $$x_{2}$$ has a smaller uncertainty degree than x and $$x_{1}$$, as $$w(x_{2}(t))$$ remains decreasing. We emphasize that the I- and II-type solutions are two kinds of extreme solutions determined under (i)- and (ii)-differentiability, whereas through Theorem 6.7 we can obtain a more rational strong solution which switches between the (i)- and the (ii)-solutions.
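Since $$\int_{0}^{t}e^{\cos s-1}\sin s\,ds=1-e^{\cos t-1}$$, the (i)-branch of the strong solution reduces to $$x(t)=[2e^{1-\cos t}-1,\,4e^{1-\cos t}-1]$$ on $$[0,\pi]$$. A short sketch (plain Python; the function names `x_strong` and `w` are illustrative) evaluates both branches and can be used to confirm continuity at π and the width behavior described above.

```python
import math

def x_strong(t):
    # strong solution of Example 7.2 via the closed forms:
    # on [0, pi]:  x(t) = e^{1-cos t}([1,3] + 1 - e^{cos t - 1})
    # on (pi,2pi]: x(t) = e^{-2-2cos t} x(pi)
    if t <= math.pi:
        g = math.exp(1.0 - math.cos(t))
        return (2.0 * g - 1.0, 4.0 * g - 1.0)    # (i)-solution branch
    xl_pi, xr_pi = x_strong(math.pi)
    h = math.exp(-2.0 - 2.0 * math.cos(t))       # equals 1 at t = pi
    return (h * xl_pi, h * xr_pi)                # (ii)-solution branch

def w(t):
    # interval width w(x(t)) = x^R(t) - x^L(t)
    xl, xr = x_strong(t)
    return xr - xl
```

On $$[0,\pi]$$ the width is $$2e^{1-\cos t}$$ (increasing), and on $$(\pi,2\pi]$$ it is $$2e^{-2\cos t}$$ (decreasing), consistent with Figure 2.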

### Example 7.3

$$\left \{ \textstyle\begin{array}{l} x'=x\cos t+\frac{x^{3}\sin t}{100+\sin^{2}t},\quad 0\leq t\leq2\pi, \\ x(0)=[-1,1]. \end{array}\displaystyle \right .$$

It can be verified that f satisfies Hypotheses 6.1 and 6.2, where $$f(t,x)=x^{3}\varphi(t)$$ and $$\varphi(t)=\frac{\sin t}{100+\sin^{2}t}$$. The reasons are as follows.

1. (i)

For any $$(t,x)\in J\times IR$$, we have

$$\bigl\Vert f(t,x)\bigr\Vert _{I}=H\bigl(f(t,x),0\bigr)=\max\bigl\{ \bigl\vert \bigl(x^{L}\bigr)^{3}\varphi(t)\bigr\vert , \bigl\vert \bigl(x^{R}\bigr)^{3}\varphi (t)\bigr\vert \bigr\} \leq\|x\|_{I}^{3},$$

since $$|\varphi(t)|\leq\frac{1}{101}<1$$. Therefore, Hypothesis 6.1 holds.

2. (ii)

For $$\|x\|_{C}, \|y\|_{C}\leq M_{\alpha}$$, we have

\begin{aligned} H\bigl(f(t,x),f(t,y)\bigr) =& \max\bigl\{ \bigl\vert \bigl(x^{L} \bigr)^{3}\varphi(t)-\bigl(y^{L}\bigr)^{3}\varphi (t)\bigr\vert ,\bigl\vert \bigl(x^{R}\bigr)^{3} \varphi(t)-\bigl(y^{R}\bigr)^{3}\varphi(t)\bigr\vert \bigr\} \\ \leq& \max\bigl\{ \bigl\vert \bigl(x^{L}\bigr)^{3}- \bigl(y^{L}\bigr)^{3}\bigr\vert ,\bigl\vert \bigl(x^{R}\bigr)^{3}-\bigl(y^{R} \bigr)^{3}\bigr\vert \bigr\} \\ \leq& 3M_{\alpha}^{2}H(x,y). \end{aligned}

This illustrates that Hypothesis 6.2 is true.
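The last inequality relies on the factorization $$a^{3}-b^{3}=(a-b)(a^{2}+ab+b^{2})$$ together with $$|a^{2}+ab+b^{2}|\leq3M_{\alpha}^{2}$$ for $$|a|,|b|\leq M_{\alpha}$$. A small grid-based spot-check (illustrative only, not a proof) is:

```python
# spot-check of the Lipschitz bound |a^3 - b^3| <= 3 M^2 |a - b| for |a|, |b| <= M,
# which follows from a^3 - b^3 = (a - b)(a^2 + ab + b^2) and |a^2 + ab + b^2| <= 3 M^2
M = 2.0
n = 41
grid = [-M + 2.0 * M * k / (n - 1) for k in range(n)]
# largest violation of the bound over the grid (should never be positive)
worst = max(abs(a**3 - b**3) - 3.0 * M**2 * abs(a - b) for a in grid for b in grid)
```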

We note that there are two zero points $$\frac{\pi}{2}$$ and $$\frac{3\pi }{2}$$ of $$a(t)=\cos t$$. By Theorem 6.7, there exists a strong solution composed of two (i)-solutions on $$[0,\frac{\pi }{2}]$$ and $$(\frac{3\pi}{2},2\pi]$$, and a (ii)-solution on $$(\frac{\pi }{2},\frac{3\pi}{2}]$$, namely

$$x(t)= \left \{ \textstyle\begin{array}{l@{\quad}l} e^{\sin t}([-1,1]+\int_{0}^{t}\varphi(s)x^{3}(s)e^{-\sin s}\,ds),& 0\leq t\leq \frac{\pi}{2}, \\ e^{\sin t-1}(x(\frac{\pi}{2})\ominus_{g}(-1)\int_{\frac{\pi}{2}}^{t}\varphi (s)x^{3}(s)e^{-\sin s+1}\,ds), &\frac{\pi}{2}< t\leq\frac{3\pi}{2}, \\ e^{\sin t+1}(x(\frac{3\pi}{2})+\int_{\frac{3\pi}{2}}^{t}\varphi (s)x^{3}(s)e^{-\sin s-1}\,ds), &\frac{3\pi}{2}< t\leq2\pi. \end{array}\displaystyle \right .$$

The solution x is drawn in Figure 3. In addition, $$x_{1}$$ and $$x_{2}$$ represent the curves of the I- and II-type solutions obtained in the same fashion as above, respectively.

By Figure 3, the solutions x and $$x_{1}$$ are almost the same on $$[0,\frac{\pi}{2}]$$. $$w(x(t))$$ is increasing on $$[0,\frac{\pi}{2}]$$, decreasing on $$(\frac{\pi}{2},\frac{3\pi}{2}]$$, and increasing on $$(\frac{3\pi}{2},2\pi]$$. On the other hand, $$w(x_{1}(t))$$ always remains increasing, and $$w(x_{2}(t))$$ keeps decreasing. This shows that the I-type solution $$x_{1}$$ diverges and the II-type solution $$x_{2}$$ is very conservative. Thus, $$x_{1}$$ and $$x_{2}$$ cannot effectively reflect the dynamic characteristics of the above dynamic system, whereas x can.
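A closed form is implicit here because x appears inside the integral, but the (i)-branch on $$[0,\frac{\pi}{2}]$$ can be sketched numerically. On that interval $$\cos t\geq0$$ and $$\varphi(t)\geq0$$, so under (i)-differentiability the endpoint equations decouple into the scalar ODEs $$(x^{L})'=x^{L}\cos t+(x^{L})^{3}\varphi(t)$$ and likewise for $$x^{R}$$. A forward-Euler sketch (plain Python; the step count is an illustrative choice) then reproduces the symmetry of the endpoints and the increasing width on $$[0,\frac{\pi}{2}]$$:

```python
import math

def phi(t):
    return math.sin(t) / (100.0 + math.sin(t) ** 2)

def euler_endpoints(steps=20000):
    # forward-Euler integration of the decoupled endpoint ODEs on [0, pi/2]:
    #   (x^L)' = x^L cos t + (x^L)^3 phi(t),  (x^R)' = x^R cos t + (x^R)^3 phi(t)
    h = (math.pi / 2.0) / steps
    xl, xr = -1.0, 1.0                       # x(0) = [-1, 1]
    widths = [xr - xl]
    for k in range(steps):
        t = k * h
        xl += h * (xl * math.cos(t) + xl * xl * xl * phi(t))
        xr += h * (xr * math.cos(t) + xr * xr * xr * phi(t))
        widths.append(xr - xl)
    return xl, xr, widths

xl_end, xr_end, widths = euler_endpoints()
```

Since the right-hand side is odd in x, the symmetric initial interval $$[-1,1]$$ yields $$x^{L}(t)=-x^{R}(t)$$, and the width $$w=2x^{R}$$ grows on $$[0,\frac{\pi}{2}]$$, consistent with Figure 3.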

## 8 Conclusions

This work has studied the properties of interval-valued functions under the gH-difference and probed the existence of solutions for a class of semi-linear interval differential equations. Based on the concept of the gH-difference and the conventional arithmetic rules such as addition and scalar multiplication, we have developed a complete normed quasi-linear space on interval numbers, in which some important properties of the gH-difference are found under the Hausdorff-Pompeiu metric. Subsequently, on the basis of this space, we introduced a continuous interval-valued function space which has been proven to be a complete normed quasi-linear space. A contraction mapping theorem on this space, similar to the classical contraction mapping principle, has been obtained, relying upon the gH-difference. Based on these fundamental results, some arithmetic properties of the gH-derivative for interval-valued functions were investigated exhaustively, among which some results can be adopted to study the existence and uniqueness of the solutions for such a kind of semi-linear equation. After some simple properties of the integral of interval-valued functions were discussed, we obtained a necessary condition for the (i)- and the (ii)-solutions to be strong solutions, including conditions for the existence and uniqueness of the solutions.

## References

1. Galanis, GN, Bhaskar, TG, Lakshmikantham, V, Palamides, PK: Set valued functions in Fréchet spaces: continuity, Hukuhara differentiability and applications to set differential equations. Nonlinear Anal. 61, 559-575 (2005)

2. Moore, RE: Interval Analysis. Prentice Hall, Englewood Cliffs (1966)

3. Jiang, C, Han, X, Liu, GR, Liu, GP: A nonlinear interval number programming method for uncertain optimization problems. Eur. J. Oper. Res. 188, 1-13 (2008)

4. Lu, HW, Cao, MF, Wang, Y, Fan, X, He, L: Numerical solutions comparison for interval linear programming problems based on coverage and validity rates. Appl. Math. Model. 38, 1092-1100 (2014)

5. Zhang, ZH, Tao, J: Efficient micro immune optimization approach solving constrained nonlinear interval number programming. Appl. Intell. 43, 276-295 (2015)

6. Neumaier, A: Interval Methods for Systems of Equations. Cambridge University Press, Cambridge (1990)

7. Nedialkov, NS, Jackson, KR, Pryce, JD: An effective high order interval method for validating existence and uniqueness of the solution of an IVP for an ODE. Reliab. Comput. 7, 449-465 (2001)

8. Lin, YD, Stadtherr, MA: Validated solutions of initial value problems for parametric ODEs. Appl. Numer. Math. 57, 1145-1162 (2007)

9. Stefanini, L, Bede, B: Generalized Hukuhara differentiability of interval-valued functions and interval differential equations. Nonlinear Anal. 71, 1311-1328 (2009)

10. Malinowski, MT: Interval differential equations with a second type Hukuhara derivative. Appl. Math. Lett. 24, 2118-2123 (2011)

11. Malinowski, MT: Interval Cauchy problem with a second type Hukuhara derivative. Inf. Sci. 213, 94-105 (2012)

12. Truong, VA, Ngo, VH, Nguyen, DP: Global existence of solutions for interval-valued integro-differential equations under generalized H-differentiability. Adv. Differ. Equ. 2013, 217 (2013). doi:10.1186/1687-1847-2013-217

13. Ngo, VH, Nguyen, DP, Tran, TT, Le, TQ: Interval-valued functional integro-differential equations. Adv. Differ. Equ. 2014, 177 (2014). doi:10.1186/1687-1847-2014-177

14. Bede, B, Stefanini, L: Numerical solution of interval differential equations with generalized Hukuhara differentiability. In: IFSA-EUSFLAT, pp. 730-735 (2009)

15. Stefanini, L, Bede, B: Some notes on generalized Hukuhara differentiability of interval-valued functions and interval differential equations. Working paper EMS series, University of Urbino (2012). www.repec.org

16. Lupulescu, V: Hukuhara differentiability of interval-valued functions and interval differential equations on time scales. Inf. Sci. 248, 50-67 (2013)

17. Skripnic, N: Interval-valued differential equations with generalized derivative. Appl. Math. 2, 116-120 (2012)

18. Ngo, VH, Nguyen, DP: Global existence of solutions for interval-valued second-order differential equations under generalized Hukuhara derivative. Adv. Differ. Equ. 2013, 290 (2013). doi:10.1186/1687-1847-2013-290

19. Nguyen, DP, Truong, VA, Ngo, VH, Nguyen, TH: Interval-valued functional differential equations under dissipative conditions. Adv. Differ. Equ. 2014, 198 (2014). doi:10.1186/1687-1847-2014-198

20. Ngo, VH: The initial value problem for interval-valued second-order differential equations under generalized H-differentiability. Inf. Sci. 311, 119-148 (2015)

21. Allahviranloo, T, Gholami, S: Note on ‘Generalized Hukuhara differentiability of interval-valued functions and interval differential equations’. J. Fuzzy Set Valued Anal. 2012, Article ID jfsva-00135 (2012)

22. Chalco-Cano, Y, Rufián-Lizana, A, Román-Flores, H, Jiménez-Gamero, MD: Calculus for interval-valued functions using generalized Hukuhara derivative and applications. Fuzzy Sets Syst. 219, 49-67 (2013)

23. Hukuhara, M: Intégration des applications measurables dont la valeur est un compact convexe. Funkc. Ekvacioj 10, 205-223 (1967)

24. Markov, S: Calculus for interval functions of a real variable. Computing 22, 325-337 (1979)

25. Oppenheimer, EP, Michel, AN: Application of interval analysis techniques to linear systems. Part I: fundamental results. IEEE Trans. Circuits Syst. 35, 1129-1138 (1988)

26. Moore, RE, Kearfott, RB, Cloud, MJ: Introduction to Interval Analysis. SIAM, Philadelphia (2009)

27. Stefanini, L: A generalization of Hukuhara difference. In: Soft Methods for Handling Variability and Imprecision, pp. 203-210. Springer, Berlin (2008)

28. Stefanini, L: A generalization of Hukuhara difference for interval and fuzzy arithmetic. Working paper EMS series, University of Urbino (2008). www.repec.org

29. Stefanini, L: A generalization of Hukuhara difference and division for interval and fuzzy arithmetic. Fuzzy Sets Syst. 161, 1564-1584 (2010)

30. Tao, J, Zhang, ZH: Properties of interval vector-valued arithmetic based on gH-difference. Math. Comput. 4, 7-12 (2015)

31. Plotnikova, NV: Systems of linear differential equations with π-derivative and linear differential inclusions. Sb. Math. 196, 1677-1691 (2005)

32. Chalco-Cano, Y, Flores-Franulič, A, Román-Flores, H: Ostrowski type inequalities for interval-valued functions using generalized Hukuhara derivative. Comput. Appl. Math. 31, 457-472 (2012)

33. Bede, B, Gal, SG: Generalizations of the differentiability of fuzzy-number-valued functions with applications to fuzzy differential equations. Fuzzy Sets Syst. 151, 581-599 (2005)

34. Chalco-Cano, Y, Román-Flores, H, Jiménez-Gamero, MD: Generalized derivative and π-derivative for set-valued functions. Inf. Sci. 181, 2177-2188 (2011)

35. Chalco-Cano, Y, Lodwick, W: On difference of intervals and differentiability of interval-valued functions. In: 2010 Annual Meeting of the North American Fuzzy Information Processing Society (2010)

36. Guerra, ML, Stefanini, L: A comparison index for interval ordering. In: IEEE Symposium on Foundations of Computational Intelligence (FOCI), pp. 53-58 (2011)

37. Banks, HT, Jacobs, MQ: A differential calculus for multifunctions. J. Math. Anal. Appl. 29, 246-272 (1970)

38. Bede, B, Rudas, IJ, Bencsik, AL: First order linear fuzzy differential equations under generalized differentiability. Inf. Sci. 177, 1648-1662 (2007)

39. Plotnikov, AV, Skripnik, NV: Set-valued differential equations with generalized derivative. J. Adv. Res. Pure Math. 3, 144-160 (2011)

40. Lupulescu, V: Fractional calculus for interval-valued functions. Fuzzy Sets Syst. 265, 63-85 (2015)

41. Aubin, JP, Cellina, A: Differential Inclusions. Springer, New York (1984)

42. Lakshmikantham, V, Bhaskar, TG, Devi, JV: Theory of Set Differential Equations in a Metric Space. Cambridge Scientific Publishers, Cambridge (2006)

43. Dhage, BC: Multi-valued operators and fixed point theorems in Banach algebras I. Taiwan. J. Math. 10, 1025-1045 (2006)

44. Bihari, IA: A generalization of a lemma of Bellman and its application to uniqueness problem of differential equation. Acta Math. Hung. 7, 81-94 (1956)

## Acknowledgements

This work is supported by the Doctoral Fund of Ministry of Education of China (20125201110003), National Natural Science Foundation (61563009). The authors would like to thank the editor in chief and the anonymous referee, for their comments and suggestions that greatly improved the paper.

## Author information


### Corresponding author

Correspondence to Zhuhong Zhang.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

This paper was completed by JT under the guidance of Prof. ZZ. All authors read and approved the final manuscript.
