In this section we fix the notation for the function spaces, collect useful properties of the Mittag–Leffler function and of the fractional time derivatives, and recall regularity results for the fractional variational inequalities with nonsingular Mittag–Leffler kernel (1.1)–(1.3).
Let \(L^{2}(\Omega)\) be the usual Hilbert space equipped with the scalar product \((\cdot,\cdot)\), and \(H^{m}(\Omega)\), \(H^{m}_{0} (\Omega)\) denote the usual Sobolev spaces.
We start by recalling the Mittag–Leffler function \(E_{\alpha,\beta}(z)\) (see [42, 47]), which will be used extensively throughout this work and is defined by
$$E_{\alpha}(z)=\sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(k\alpha +1)}=:E_{\alpha,1}(z),\qquad E_{\alpha,\beta}(z)=\sum _{k=0}^{\infty}\frac{z^{k}}{\Gamma(k\alpha +\beta)},\quad z\in\mathbb{C}, \mathcal{R}( \alpha)>0, $$
where \(\Gamma(\cdot)\) denotes the gamma function defined as
$$\Gamma(z)= \int^{\infty}_{0}t^{z-1}e^{-t}\,dt, \quad \mathcal{R}(z)>0. $$
The Mittag–Leffler function is a two-parameter family of entire functions of z. The exponential function is a particular case of the Mittag–Leffler function, namely
$$\begin{gathered} E_{1,1}(z) = e^{z}, \qquad E_{2,1}(z)=\cosh\sqrt{z},\qquad E_{1,2}(z)=\frac{e^{z}-1}{z}, \qquad E_{2,2}(z)=\frac{\sinh \sqrt{z}}{\sqrt{z}}, \\ E_{\alpha,\beta}^{\lambda}(z)=\sum_{k=0}^{\infty} \frac{(\lambda )_{k}}{\Gamma(k\alpha+\beta)}\frac{z^{k}}{k!},\quad z,\beta, \lambda\in\mathbb{C}, \mathcal{R}(\alpha)>0, \end{gathered} $$
where (and throughout this investigation) \((\lambda)_{k}\) denotes the familiar Pochhammer symbol or the shifted factorial. Furthermore, we recall the following lemma from [42].
Lemma 2.1
Let
\(\alpha, \beta\in\mathbb{C}\)
such that
\(\mathcal{R}(\alpha)>0\)
and
\(\mathcal{R}(\beta)>0\). Then we have that
$$ \biggl(\frac{d}{dz}\biggr){E}_{ \alpha,\beta}(z) = \frac{1}{\alpha z} \bigl[{ E}_{\alpha,\beta-1}(z)-(\beta-1) { E}_{\alpha,\beta}(z) \bigr], \quad z\in\mathbb{C}\setminus\{0\}. $$
(2.1)
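As a quick sanity check (ours, not part of the cited sources), the special cases listed above and identity (2.1) can be verified numerically from the truncated series; the truncation length and the test values below are ad hoc choices.

```python
# Numerical sanity check of the Mittag-Leffler special cases and of identity (2.1).
# The truncation length N and the test values are ad hoc choices.
import numpy as np
from scipy.special import rgamma  # rgamma(x) = 1/Gamma(x), vanishing at the poles of Gamma

def ml(alpha, beta, z, N=80):
    """Truncated series E_{alpha,beta}(z) ~= sum_{k=0}^{N} z^k / Gamma(k*alpha + beta)."""
    k = np.arange(N + 1)
    return np.sum(z**k * rgamma(k * alpha + beta))

z = 0.7
print(ml(1, 1, z), np.exp(z))                         # E_{1,1}(z) = e^z
print(ml(2, 1, z), np.cosh(np.sqrt(z)))               # E_{2,1}(z) = cosh(sqrt z)
print(ml(1, 2, z), (np.exp(z) - 1) / z)               # E_{1,2}(z) = (e^z - 1)/z
print(ml(2, 2, z), np.sinh(np.sqrt(z)) / np.sqrt(z))  # E_{2,2}(z) = sinh(sqrt z)/sqrt z

# Identity (2.1), checked against a central finite difference in z
alpha, beta, h = 0.6, 1.3, 1e-6
lhs = (ml(alpha, beta, z + h) - ml(alpha, beta, z - h)) / (2 * h)
rhs = (ml(alpha, beta - 1, z) - (beta - 1) * ml(alpha, beta, z)) / (alpha * z)
print(lhs, rhs)
```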
Let us recall some useful definitions of fractional derivatives in the sense of Atangana–Baleanu [10].
Definition 2.1
([31, 32])
For a given function \(u\in H^{1}(a, T)\), \(T > a\), \(\alpha\in(0,1)\), the Atangana–Baleanu fractional derivative (AB derivative) of u of order α in the Caputo sense \({}^{\mathrm{ABC}}_{\quad a}D_{t}^{\alpha}u(t)\) (where A denotes Atangana, B denotes Baleanu, and C denotes Caputo type) with base point a is defined at a point \(t\in(a, T)\) by
$$ {}^{\mathrm{ABC}}_{\quad a}D_{t}^{\alpha}u(t)= \frac{B(\alpha )}{1-\alpha} \int_{a}^{t}u^{\prime}(s)E_{\alpha} \bigl[-\gamma (t-s)^{\alpha}\bigr]\,ds\quad \text{(left ABCD)}, $$
(2.2)
where
$$\gamma= \frac{\alpha}{(1 - \alpha)}, $$
\(E_{\alpha}(\cdot)\) stands for the Mittag–Leffler function, and \(B(\alpha)\) is a normalization function satisfying
$$B(\alpha) = (1-\alpha) + \frac{\alpha}{\Gamma(\alpha)},\quad \text{so that } B(0)=B(1)=1, $$
and in the Riemann–Liouville sense with
$$ {}^{\mathrm{ABR}}_{\quad a}D_{t}^{\alpha}u(t)= \frac{B(\alpha )}{1-\alpha}\frac{d}{dt} \int_{a}^{t}u(s)E_{\alpha}\bigl[-\gamma (t-s)^{\alpha}\bigr]\,ds \quad\text{(left ABRD)}. $$
(2.3)
For \(\alpha=1\) in (2.3) we consider the usual classical derivative \(\partial _{t}\).
The associated left AB fractional integral \({}^{\mathrm{AB}}_{\hspace{5pt}a}I_{t}^{\alpha} u(t)\) is also defined as
$$ {}^{\mathrm{AB}}_{\hspace{5pt}a}I_{t}^{\alpha}u(t)= \frac{1-\alpha}{B(\alpha )}u(t)+\frac{\alpha}{B(\alpha)\Gamma(\alpha)} \int _{a}^{t}u(s) (t-s)^{\alpha-1}\,ds \quad\text{(left ABI)}. $$
(2.4)
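As an illustration (our own computation, not taken from [31, 32]), the definition (2.2) can be evaluated by quadrature. For \(u(t)=t\) with base point \(a=0\) we have \(u^{\prime}(s)=1\), and term-by-term integration of the Mittag–Leffler series gives the closed form \({}^{\mathrm{ABC}}_{\quad 0}D_{t}^{\alpha}t=\frac{B(\alpha)}{1-\alpha}\,t\,E_{\alpha,2}(-\gamma t^{\alpha})\), which the sketch below reproduces; the parameter values are arbitrary test choices.

```python
# Quadrature evaluation of the ABC derivative (2.2) for u(t) = t (so u'(s) = 1), a = 0,
# compared with the closed form B(alpha)/(1-alpha) * t * E_{alpha,2}(-gamma t^alpha).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, rgamma

def ml(alpha, beta, z, N=80):
    """Truncated Mittag-Leffler series."""
    k = np.arange(N + 1)
    return np.sum(z**k * rgamma(k * alpha + beta))

alpha = 0.4                               # arbitrary test order in (0, 1)
gam = alpha / (1 - alpha)                 # gamma in (2.2)
B = 1 - alpha + alpha / gamma(alpha)      # normalization function, B(0) = B(1) = 1
t = 1.0

integral, _ = quad(lambda s: ml(alpha, 1, -gam * (t - s)**alpha), 0, t)
abc_quadrature = B / (1 - alpha) * integral
abc_closed_form = B / (1 - alpha) * t * ml(alpha, 2, -gam * t**alpha)
print(abc_quadrature, abc_closed_form)    # should agree to quadrature accuracy
```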
Notice that for \(\alpha=0\) in (2.4) we recover the initial function, and for \(\alpha=1\) in (2.4) we recover the usual ordinary integral. Some recent results and properties concerning this operator can be found in [1] and the references therein. In addition, we recall the following definition from [1].
Definition 2.2
For a given function \(u\in H^{1}(a, T)\), \(T >t> a\), the right Atangana–Baleanu fractional derivative of u of order α in the Caputo sense with base point T is defined at a point \(t\in(a, T)\) by
$$ {}^{\mathrm{ABC}}_{\quad T}D_{t}^{\alpha}u(t)=- \frac{B(\alpha )}{1-\alpha} \int_{t}^{T}u^{\prime}(s)E_{\alpha} \bigl[-\gamma (s-t)^{\alpha}\bigr]\,ds \quad\text{(right ABCD)}, $$
(2.5)
and in the Riemann–Liouville sense with
$$ {}^{\mathrm{ABR}}_{\quad T}D_{t}^{\alpha}u(t)=- \frac{B(\alpha )}{1-\alpha}\frac{d}{dt} \int_{t}^{T}u(s) E_{\alpha}\bigl[- \gamma(s-t)^{\alpha}\bigr]\,ds \quad\text{(right ABRD)}. $$
(2.6)
The associated right AB fractional integral \({}^{\mathrm{AB}}_{\hspace{5pt}t}I_{T}^{\alpha}u(t)\) is also defined as
$$ {}^{\mathrm{AB}}_{\hspace{5pt}t}I_{T}^{\alpha}u(t)= \frac{1-\alpha}{B(\alpha )}u(t)+\frac{\alpha}{B(\alpha)\Gamma(\alpha)} \int _{t}^{T}u(s) (s-t)^{\alpha-1}\,ds \quad\text{(right ABI)}. $$
(2.7)
Next we state the following proposition which gives integration by parts (see [1]).
Proposition 2.2
(Integration by parts, see [1])
Let
\(\alpha>0\), \(p\geq1\), \(q\geq1\), and
\(\frac{1}{p}+\frac{1}{q}\leq1+\alpha\) (\(p\neq1\)
and
\(q\neq1\)
in the case
\(\frac{1}{p}+\frac{1}{q}= 1+\alpha\)). Then, for any
\(\phi\in L^{p}(a,b)\), \(\psi\in L^{q}(a,b)\), we have
$$\begin{aligned}& \int_{a}^{b}\phi(t) {}^{\mathrm{AB}}_{\hspace{5pt} a}I_{t}^{\alpha} \psi(t)\,dt= \int_{a}^{b}\psi(t) {}^{\mathrm{AB}}_{\hspace{5pt} t}I_{b}^{\alpha } \phi(t)\,dt, \end{aligned}$$
(2.8)
$$\begin{aligned}& \int_{a}^{b}\phi(t) {}^{\mathrm{AB}}_{\hspace{5pt} t}I_{b}^{\alpha} \psi(t)\,dt= \int_{a}^{b}\psi(t) {}^{\mathrm{AB}}_{\hspace{5pt} a}I_{t}^{\alpha } \phi(t)\,dt \end{aligned}$$
(2.9)
if
\(\phi\in {}^{\mathrm{AB}}_{\hspace{5pt} t}I_{b}^{\alpha}(L^{p})\)
and
\(\psi\in {}^{\mathrm{AB}}_{\hspace{5pt} a}I_{t}^{\alpha}(L^{q})\), then
$$\begin{aligned}& \int_{a}^{b}\phi(t) {}^{\mathrm{ABR}}_{\quad a}D_{t}^{\alpha} \psi(t)\,dt= \int_{a}^{b}\psi(t) {}^{\mathrm{ABR}}_{\quad t}D_{b}^{\alpha } \phi(t)\,dt, \end{aligned}$$
(2.10)
$$\begin{aligned}& \int_{a}^{b}\phi(t) {}^{\mathrm{ABC}}_{\quad a}D_{t}^{\alpha} \psi(t)\,dt= \int_{a}^{b}\psi(t) {}^{\mathrm{ABR}}_{\quad t}D_{b}^{\alpha } \phi(t)\,dt+\frac{B(\alpha)}{1-\alpha}\psi(t)E^{1}_{\alpha,1,\frac {-\alpha}{1-\alpha},b}\phi(t) \bigg\vert _{a}^{b}, \end{aligned}$$
(2.11)
$$\begin{aligned}& \int_{a}^{b}\phi(t) {}^{\mathrm{ABC}}_{\quad t}D_{b}^{\alpha} \psi(t)\,dt= \int_{a}^{b}\psi(t) {}^{\mathrm{ABR}}_{\quad a}D_{t}^{\alpha } \phi(t)\,dt-\frac{B(\alpha)}{1-\alpha}\psi(t)E^{1}_{\alpha,1,\frac {-\alpha}{1-\alpha},a}\phi(t) \bigg\vert _{a}^{b}, \end{aligned}$$
(2.12)
where the left generalized fractional integral operator is given by
$$E^{\alpha}_{\gamma,\mu,\omega,a}x(t)= \int_{a}^{t}(t-\tau)^{\mu -1}E_{\gamma,\mu}^{\alpha} \bigl[\omega(t-\tau)^{\gamma}\bigr]x(\tau)\,d\tau, \quad t>a, $$
and the right generalized fractional integral operator by
$$E^{\alpha}_{\gamma,\mu,\omega,b}x(t)= \int_{t}^{b}(\tau-t)^{\mu -1}E_{\gamma,\mu}^{\alpha} \bigl[\omega(\tau-t)^{\gamma}\bigr]x(\tau)\,d\tau, \quad t< b. $$
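For a rough numerical illustration of the duality (2.8) (an ad hoc check of ours, not part of [1]), both sides can be evaluated for a simple pair of functions on \((a,b)=(0,1)\); the inner kernels have an integrable endpoint singularity which the adaptive quadrature resolves only to moderate accuracy.

```python
# Numerical spot check of the duality (2.8) between the left AB integral (2.4) and the
# right AB integral (2.7) on (a, b) = (0, 1), with phi(t) = 1, psi(t) = t and alpha = 1/2.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, a, b = 0.5, 0.0, 1.0
B = 1 - alpha + alpha / gamma(alpha)

def ab_int_left(u, t):   # ({}^{AB}_a I_t^alpha u)(t), see (2.4)
    frac, _ = quad(lambda s: u(s) * (t - s)**(alpha - 1), a, t)
    return (1 - alpha) / B * u(t) + alpha / (B * gamma(alpha)) * frac

def ab_int_right(u, t):  # ({}^{AB}_t I_b^alpha u)(t), see (2.7)
    frac, _ = quad(lambda s: u(s) * (s - t)**(alpha - 1), t, b)
    return (1 - alpha) / B * u(t) + alpha / (B * gamma(alpha)) * frac

phi = lambda t: 1.0
psi = lambda t: t

lhs, _ = quad(lambda t: phi(t) * ab_int_left(psi, t), a, b)
rhs, _ = quad(lambda t: psi(t) * ab_int_right(phi, t), a, b)
print(lhs, rhs)          # the two values should agree up to quadrature error
```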
We recall some useful relations of the Laplace transform (\(\mathcal{L} [\cdot]\)) of the generalized Mittag–Leffler function (see [42, 47]):
$$\begin{aligned}& \mathcal{L}\bigl[t^{\alpha}\bigr] (s)= \frac{\Gamma(\alpha+1)}{s^{\alpha +1}} \quad \bigl( \mathcal{R}(\alpha) > -1, \mathcal{R}(s)> 0\bigr), \end{aligned}$$
(2.13)
$$\begin{aligned}& \mathcal{L}\bigl[t^{\beta-1} E_{\alpha,\beta}\bigl(\lambda t^{\alpha}\bigr)\bigr] (s)= \frac{s^{\alpha- \beta}}{s^{\alpha}-\lambda} \quad \bigl(\mathcal {R}(\alpha) > 0, \mathcal{R}(s)>0, \lambda\in\mathbb{C}, \bigl\vert \lambda s^{-\alpha}\bigr\vert < 1 \bigr), \end{aligned}$$
(2.14)
$$\begin{aligned}& \mathcal{L}\bigl[f(t) * g(t)\bigr] (s)= \mathcal{L}\bigl[f(t)\bigr] (s) \cdot \mathcal {L}\bigl[g(t)\bigr] (s). \end{aligned}$$
(2.15)
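For a concrete spot check of (2.14) (ours, not from [42, 47]), take \(\alpha=\frac{1}{2}\), \(\beta=1\), \(\lambda=-1\), and use the classical closed form \(E_{1/2}(-x)=e^{x^{2}}\operatorname{erfc}(x)\); the test point \(s=2\) satisfies \(|\lambda s^{-\alpha}|<1\) and is otherwise arbitrary.

```python
# Spot check of the Laplace transform pair (2.14) for alpha = 1/2, beta = 1, lambda = -1,
# using E_{1/2}(-x) = exp(x^2) * erfc(x) = erfcx(x).
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

s = 2.0
lhs, _ = quad(lambda t: np.exp(-s * t) * erfcx(np.sqrt(t)), 0, np.inf)
rhs = s**(0.5 - 1.0) / (s**0.5 - (-1.0))   # s^{alpha - beta} / (s^alpha - lambda)
print(lhs, rhs)                            # both approximately 0.2929
```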
The following lemma gives estimates of the behavior of the Mittag–Leffler function in the complex plane (see [47]).
Lemma 2.3
Let
β
be an arbitrary real number, \(\alpha\in(0,2)\), and
μ
be such that
\(\frac{\alpha\pi}{2}<\mu<\min(\pi, \alpha\pi)\). Then there exists a constant
\(C> 0\)
such that
$$ \bigl\vert E_{\alpha,\beta}(z) \bigr\vert \leq \frac{C}{1 + \vert z \vert },\quad \mu\leq \bigl\vert \arg(z) \bigr\vert \leq \pi. $$
(2.16)
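The bound (2.16) can be observed numerically on the negative real axis, where \(|\arg z|=\pi\); for \(\alpha=\frac{1}{2}\), \(\beta=1\) the product \((1+|z|)|E_{1/2}(z)|\) remains bounded (it approaches \(1/\sqrt{\pi}\approx0.56\)). A small, ad hoc check:

```python
# Illustration of the bound (2.16) along the negative real axis for alpha = 1/2, beta = 1,
# using E_{1/2}(-x) = erfcx(x); the grid of test points is arbitrary.
import numpy as np
from scipy.special import erfcx

x = np.array([1.0, 10.0, 100.0, 1000.0])   # z = -x, so |z| = x and |arg z| = pi
print((1 + x) * erfcx(x))                  # stays bounded, tending to 1/sqrt(pi)
```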
Next we state the following proposition which gives the fractional Green’s formula that will be used in our analysis.
Proposition 2.4
(Fractional Green’s formula, see [31])
Let
\(0<\alpha\leq1\). Then, for any
\(\phi,y\in C^{\infty}(\overline{Q})\), we have
$$\begin{aligned} & \int_{0}^{T} \int_{\Omega} \bigl( {}^{\mathrm{ABC}}_{\quad 0}D_{t}^{\alpha}y(x,t)- \Delta y(x,t) \bigr)\phi(x,t)\,dx \,dt \\ &\quad = \int_{0}^{T} \int_{\partial \Omega}y\frac{\partial \phi}{ \partial \nu}\,d\Gamma \,dt- \int_{0}^{T} \int_{\partial \Omega}\phi\frac{\partial y}{\partial \nu}\,d\Gamma \,dt \\ &\qquad{}+\frac{B(\alpha)}{1-\alpha} \int_{\Omega}\phi(x,T) \int _{0}^{T}y(x,t)E_{\alpha, \alpha}\bigl[- \gamma(T-t)^{\alpha}\bigr]\,dt \,dx \\ &\qquad {}-\frac{B(\alpha)}{1-\alpha} \int_{\Omega} \int _{0}^{T}y(x,0)E_{\alpha, \alpha}\bigl[-\gamma t^{\alpha}\bigr]\phi(x,t)\,dt \,dx \\ &\qquad{}+ \int_{0}^{T} \int_{\Omega}y(x,t) \bigl(-{}^{\mathrm{ABC}}_{\quad T} D_{t}^{\alpha}\phi(x,t)-\Delta\phi(x,t) \bigr)\,dx \,dt. \end{aligned}$$
(2.17)
We also introduce the Hilbert space
$${\mathcal {W}}(0,T):=\bigl\{ y: y\in L^{2}\bigl(0,T;H_{0}^{1}( \Omega)\bigr), {}^{\mathrm{ABC}} _{\quad 0}D_{t}^{\alpha}y(t) \in L^{2}\bigl(0,T;H^{-1}(\Omega)\bigr)\bigr\} , $$
in which the solutions of the fractional systems under consideration are sought. All spaces considered in this paper are assumed to be real.
Definitions of cones and the Lyusternik theorem
Milyutin–Dubovitskii’s method [49, 50] arises from the geometric form of the Hahn–Banach theorem (the theorem on the separation of convex sets), as will be illustrated below.
Let us assume that E is a locally convex linear topological space, \(I(x)\) is a functional defined on E, \(A_{i}\), \(i=1,2,\ldots,n\), are sets in E with interior points, which represent the inequality constraints, and B is a set in E without interior points, which represents the equality constraints.
We seek necessary conditions for a local minimum of the functional \(I(x)\) on the set \(Q=\bigcap_{i=1}^{n}A_{i}\cap B\), that is, for a point \(x_{0}\in E\) such that \(I(x_{0})=\min_{Q\cap U} I(x)\), where U is a certain neighborhood of the point \(x_{0}\). We define the set
$$A_{0}=\bigl\{ x:I(x)< I(x_{0})\bigr\} . $$
Then the necessary condition for optimality is formulated as follows: in a neighborhood of the local minimum point, the intersection of the sets involved (the set on which the functional attains values smaller than \(I(x_{0})\) and the sets representing the constraints) is empty, i.e., \(\bigcap_{i=0}^{n}A_{i}\cap B=\emptyset\).
The condition \(\bigcap_{i=0}^{n}A_{i}\cap B=\emptyset\) remains equivalent when the sets \(A_{i}\), \(i=1,2,\ldots,n\), and B are replaced by suitable approximations of them. These approximations are cones with vertices at the origin.
We shall approximate the inequality constraints by the regular admissible cones \(\operatorname{RAC}(A_{i},x_{0})\), \(i=1,2,\ldots,n\), the equality constraints by the regular tangent cone \(\operatorname{RTC}(B,x_{0})\), and for the performance functional, we shall construct the regular improvement cone \(\operatorname{RFC}(I,x_{0})\).
Then the necessary condition for optimality of \(I(x)\) on the set \(Q=\bigcap_{i=1}^{n}A_{i}\cap B\) takes the form of the Euler–Lagrange equation
$$\sum_{i=0}^{n+1}f_{i}=0, $$
where the \(f_{i}\) (\(i=0,1,\ldots,n+1\)) are linear continuous functionals, not all equal to zero at the same time, which belong to the adjoint cones
$$\begin{gathered} f_{i}\in\bigl[\operatorname{RAC}(A_{i},x_{0}) \bigr]^{*},\quad i=1,2,\ldots,n,\qquad f_{n+1}\in \bigl[ \operatorname{RTC}(B,x_{0})\bigr]^{*},\qquad f_{0}\in\bigl[ \operatorname{RFC}(I,x_{0})\bigr]^{*}, \\ \bigl\{ f_{i}\in\bigl[\operatorname{RAC}(A_{i},x_{0}) \bigr]^{*} \Leftrightarrow f_{i}(x)\geq0\ \forall x\in \operatorname{RAC}(A_{i},x_{0}), i=1,2,\ldots,n\bigr\} . \end{gathered} $$
For the convex problem, i.e., the problem in which the constraints are convex sets and the performance functional is convex, the Euler–Lagrange equation is the necessary and sufficient condition of optimality, provided that certain additional assumptions are fulfilled (the so-called Slater condition).
First we recall the definitions of conical approximations and of cones of the same sense or of the opposite sense. Let A be a set contained in a Banach space X and let \(F:X\to \Bbb {R}\) be a given functional.
Definition 2.3
(see [43, 50])
A set \(\operatorname{TC}(A, x^{0}):=\{h\in X: \exists \epsilon_{0}>0,\forall\epsilon\in(0,\epsilon_{0}),\exists r(\epsilon)\in X; x^{0}+\epsilon h + r(\epsilon)\in A\}\), where \({r(\epsilon)\over \epsilon}\to0\) as \(\epsilon\to0\), is called the tangent cone to the set A at the point \(x^{0}\in A\).
Definition 2.4
(see [43, 50])
A set \(\operatorname{AC}(A, x^{0}):=\{h\in X: \exists\epsilon_{0}>0,\exists U(h),\forall \epsilon\in(0,\epsilon_{0}),\forall\overline{h}\in U(h); x^{0}+\epsilon \overline{h}\in A\}\), where \(U(h)\) is a neighborhood of h, is called the admissible cone to the set A at the point \(x^{0}\in A\).
Definition 2.5
(see [43, 50])
A set \(\operatorname{FC}(F, x^{0}):=\{h\in X: \exists\epsilon_{0}>0,\exists U(h),\forall \epsilon\in(0,\epsilon_{0}),\forall\overline{h}\in U(h); F(x^{0}+\epsilon\overline{h})< F(x^{0})\}\), is called the cone of decrease of the functional F at the point \(x^{0}\in X\).
Definition 2.6
(see [43, 50])
A set \(\operatorname{NC}(F, x^{0}):=\{h\in X: \exists\epsilon_{0}>0,\exists U(h),\forall \epsilon\in(0,\epsilon_{0}),\forall\overline{h}\in U(h); F(x^{0}+\epsilon\overline{h})\leq F(x^{0})\}\), is called the cone of nonincrease of the functional F at the point \(x^{0}\in X\).
All the cones defined above are cones with vertices at the origin. The cones \(\operatorname{AC}(A, x^{0})\), \(\operatorname{FC}(F, x^{0})\), and \(\operatorname{NC}(F, x^{0})\) are open, while the cone \(\operatorname{TC}(A, x^{0})\) is closed. If \(\operatorname{int} A= \emptyset\), then \(\operatorname{AC}(A, x^{0})\) does not exist. Moreover, if \(A_{1},\ldots,A_{n}\subset X\), \(x^{0}\in \bigcap_{i=1}^{n}A_{i}\), then
$$\bigcap_{i=1}^{n}\operatorname{TC} \bigl(A_{i},x^{0}\bigr)\supset \operatorname{TC}\Biggl( \bigcap_{i=1}^{n}A_{i},x^{0} \Biggr)\quad\text{and}\quad \bigcap_{i=1}^{n} \operatorname{AC}\bigl(A_{i},x^{0}\bigr)= \operatorname{AC} \Biggl(\bigcap_{i=1}^{n}A_{i},x^{0} \Biggr). $$
If the cones \(\operatorname{TC}(A, x^{0})\), \(\operatorname{AC}(A, x^{0})\), \(\operatorname{FC}(F, x^{0})\), and \(\operatorname{NC}(F, x^{0})\) are convex, then they are called regular cones and we denote them by \(\operatorname{RTC}(A, x^{0})\), \(\operatorname{RAC}(A, x^{0})\), \(\operatorname{RFC}(F, x^{0})\), and \(\operatorname{RNC}(F,x^{0})\), respectively.
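For concreteness (a simple illustration of ours, not taken from [43, 50]), let \(X=\mathbb{R}^{2}\), \(A=\{x: x_{1}^{2}+x_{2}^{2}\leq1\}\), and \(x^{0}=(1,0)\). Then
$$\operatorname{TC}\bigl(A,x^{0}\bigr)=\{h: h_{1}\leq0\},\qquad \operatorname{AC}\bigl(A,x^{0}\bigr)=\{h: h_{1}< 0\}, $$
and both cones are convex, so they coincide with \(\operatorname{RTC}(A,x^{0})\) and \(\operatorname{RAC}(A,x^{0})\), respectively. For the linear functional \(F(x)=x_{1}\) we obtain \(\operatorname{FC}(F,x^{0})=\operatorname{NC}(F,x^{0})=\{h: h_{1}<0\}\).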
Let \(C_{i}\), \(i=1,\ldots,n\), be a system of cones and \(B_{M}\) be a ball with center 0 and radius \(M>0\) in the space X.
Definition 2.7
(see [43, 50])
The cones \(C_{i}\), \(i=1,\ldots,n\), are of the same sense if \(\forall M>0\), \(\exists M_{1},\ldots,M_{n}>0\) so that \(\forall x\in B_{M}\cap \sum_{i=1}^{n}C_{i}\), \(x= \sum_{i=1}^{n}x_{i}\), \(x_{i}\in C_{i}\), \(i=1,\ldots,n\), we have \(x_{i}\in B_{M_{i}}\cap C_{i}\), \(i=1,\ldots,n\) (or equivalently the inequality \(\Vert x \Vert\leq M\) implies the inequalities \(\Vert x_{i} \Vert\leq M_{i}\), \(i=1,\ldots,n\)).
Definition 2.8
(see [43, 50])
The cones \(C_{i}\), \(i=1,\ldots,n\), are of the opposite sense if \(\exists (x_{1},\ldots,x_{n})\neq (0,\ldots,0)\), \(x_{i}\in C_{i}\), \(i=1,\ldots,n\), so that \(0= \sum_{i=1}^{n}x_{i}\).
Remark 2.5
(see [43, 50])
From Definitions 2.7 and 2.8 it follows that the set of cones of the same sense is disjoint from the set of cones of the opposite sense. If a certain subsystem of cones is of the opposite sense, then the whole system is also of the opposite sense.
In finite-dimensional spaces only cones of the two types mentioned above may occur, while in infinite-dimensional normed spaces the situation is more complicated, as the example below shows.
Example 2.1
(see [43, 50])
In the space \(C^{1}[0,1]\) we take the norm \(\Vert x \Vert:=\sqrt{\int_{0}^{1}x^{2}(t)\,dt}\) and define the functional \(A(x):={d\over dt}x(t)|_{t={1\over 2}}=:r \in\Bbb {R}\). This functional is linear but unbounded with respect to this norm (indeed, for the sequence \(x_{n}(t)={1\over \sqrt{n}}\sin 2\pi nt\) we have \(\Vert x_{n} \Vert ={1\over \sqrt{2n}}\to0\), while \(A(x_{n})=2\pi\sqrt{n}\cos\pi n=2\pi\sqrt{n}(-1)^{n}\not\to0\)). Further, we define \(C_{1}:=\operatorname{cl}\{(x,r);r=A(x)\}\) and \(C_{2}:=\{\Theta\}\times\Bbb {R}\), closed and convex cones in \(E:=L^{2}(0,1)\times\Bbb {R}\) equipped with the norm \(\Vert(x,r) \Vert_{E}:=\max \{ \Vert x \Vert,|r|\}\). The cones \(C_{1}\) and \(C_{2}\) are not of the same sense. To prove this, we take \(v_{1}=(x_{1},r_{1})\in C_{1}\), \(v_{2}=(\Theta,r_{2})\in C_{2}\), and an arbitrary constant \(M>0\).
If
$$\Vert v_{1}+v_{2} \Vert _{E}=\max\bigl\{ \Vert x_{1} \Vert , \vert r_{1}+r_{2} \vert \bigr\} \leq M, $$
then the following inequalities
$$\Vert v_{1} \Vert _{E}\leq M_{1},\qquad \Vert v_{2} \Vert _{E}\leq M_{2} $$
generally do not hold with any fixed \(M_{1},M_{2}>0\). Indeed, since A is an unbounded functional, for any \(\tilde{M}_{1}>M\) we can choose \(v_{1}=(x_{1},r_{1})\in C_{1}\) with \(|r_{1}|=|A(x_{1})|>\tilde{M}_{1}\). Then \(M\geq|r_{1}+r_{2}|\geq |r_{1}|-|r_{2}|>\tilde{M}_{1}-|r_{2}|\), and hence \(\Vert v_{2} \Vert_{E}=|r_{2}|>\tilde{M}_{1}-M\). Since \(\tilde{M}_{1}\) can be taken arbitrarily large, no bound \(M_{2}\) exists, which contradicts the cones \(C_{1}\) and \(C_{2}\) being of the same sense. The cones \(C_{1}\) and \(C_{2}\) are also not of the opposite sense.
Conditions under which a system of cones is of the same sense are given in [49, 50].
Definition 2.9
(see [43, 50])
Let K be a cone in X. The adjoint cone \(K^{*}\) of K is defined as
$$K^{*}:=\bigl\{ f\in X^{*}; f(x)\geq0 \ \forall x \in K\bigr\} , $$
where \(X^{*}\) denotes the dual space of X.
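For instance (an elementary illustration of ours, not taken from [43, 50]), identifying functionals on \(\mathbb{R}^{2}\) with vectors, the half-space \(K=\{h\in\mathbb{R}^{2}: h_{1}\leq0\}\) from the example above has the adjoint cone \(K^{*}=\{(a,0): a\leq0\}\), while the nonnegative orthant \(K=\mathbb{R}^{n}_{+}\) is self-adjoint, \(K^{*}=\mathbb{R}^{n}_{+}\).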
Definition 2.10
(see [43, 50])
Let Q be a set in X, \(x^{0}\in Q\). A functional \(f\in X^{*}\) is said to be a support functional to the set Q at \(x^{0}\) if \(f(x)\geq f(x^{0})\)
\(\forall x\in Q\).
Lemma 2.6
(Tangent directions, see [35])
The Lyusternik theorem stated below is a powerful tool for the calculation of tangent directions. Before proceeding to the statement of the theorem, we recall the definition of a differentiable operator. Let
\(E_{1}\), \(E_{2}\)
be Banach spaces, \(P(x)\)
be an operator (generally nonlinear) with domain in
\(E_{1}\)
and range in
\(E_{2}\). Then
\(P(x)\)
is said to be differentiable at a point
\(x_{0}\in E_{1}\)
if there exists a continuous linear operator
A
mapping
\(E_{1}\)
into
\(E_{2}\)
such that, for all
\(h\in E_{1}\),
$$P(x_{0}+h)=P(x_{0})+Ah+r(x_{0}, h), $$
where
\(\Vert r(x_{0}, h) \Vert= o( \Vert h \Vert)\). The operator
A
is called the Fréchet derivative of the operator
\(P(x)\)
and often denoted by
\(A = P'(x_{0})\). It is clear that if
\(E_{2} = R^{1}\) (i.e., \(P(x)\)
is a functional), this definition coincides with the derivative of a functional. The derivative of an operator possesses the usual properties of derivatives (rules for differentiation of sums, composite functions, etc.). The derivative of a continuous linear operator coincides with the operator.
Theorem 2.7
(Lyusternik theorem, see [35])
Let P(x) be an operator mapping
\(E_{1}\)
into
\(E_{2}\), differentiable in a neighborhood of a point
\(x_{0}\), \(P(x_{0})=0\). Let
\(P'(x)\)
be continuous in a neighborhood of
\(x_{0}\), and suppose that
\(P'(x_{0})\)
maps
\(E_{1}\)
onto
\(E_{2}\) (i.e., the linear equation
\(P'(x_{0})h = b\)
has a solution
h
for any
\(b\in E_{2}\)). Then the set of tangent directions
K
to the set
\(Q=\{x: P(x) = 0\}\)
at the point
\(x_{0}\)
is the subspace
\(K= \{h: P'(x_{0})h= 0\}\).
The proof of this theorem (which is by no means trivial) may be found in [35].
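As a simple illustration (ours, not taken from [35]), let \(E_{1}=\mathbb{R}^{2}\), \(E_{2}=\mathbb{R}\), \(P(x)=x_{1}^{2}+x_{2}^{2}-1\), and \(x_{0}=(1,0)\). Then \(P'(x_{0})h=2h_{1}\) maps \(\mathbb{R}^{2}\) onto \(\mathbb{R}\), so by the Lyusternik theorem the set of tangent directions to the unit circle \(Q=\{x: P(x)=0\}\) at \(x_{0}\) is the subspace \(K=\{h: h_{1}=0\}\), i.e., the tangent line to the circle at \(x_{0}\) translated to the origin.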
Generalized Dubovitskii–Milyutin theorem
Let X be a Banach space, let \(Q_{k}\subset X\) with \(\operatorname{int} Q_{k}\neq\emptyset\), \(k=1,\ldots,p\), represent the inequality constraints, let \(Q_{k}\subset X\) with \(\operatorname{int} Q_{k}=\emptyset\), \(k=p+1,\ldots,n\), represent the equality constraints, and let \(I_{i}:X\to\mathbb{R}\), \(i=1,\ldots,s\), be given functionals. We set \(I=(I_{1},\ldots,I_{s})^{T}\), i.e., \(I:X\to\mathbb{R}^{s}\) is a vector performance index. We are interested in the following problem (see [43, 50]).
Problem (P): Find \(x^{0}\in Q\) such that
$$\min_{x\in Q\cap U(x^{0})} I(x)=I\bigl(x^{0}\bigr), $$
where \(Q= \bigcap_{k=1}^{n}Q_{k}\) and \(U(x^{0})\) is some neighborhood of \(x^{0}\).
If we define equality constraints in the operator form
$$Q_{k}:=\bigl\{ x\in X: F_{k}(x)=0\bigr\} , $$
where \(F_{k}:X\to Y_{k}\) are given operators, \(Y_{k}\) are Banach spaces, \(k=p+1,\ldots,n\), then we obtain Problem (P1) instead of Problem (P).
Definition 2.11
(see [43, 50])
A point \(x^{0}\in X\) is called a global (local) optimum for Problem (P) or (P1) if \(x^{0}\in Q\) and there is no \(x\in Q\) (resp. \(x\in Q\cap U(x^{0})\)), \(x\neq x^{0}\), with \(I_{i}(x)\leq I_{i}(x^{0})\) for \(i=1,\ldots,s\) and strict inequality for at least one i, \(1\leq i\leq s\).
Theorem 2.8
(Generalized Dubovitskii–Milyutin theorem, see [43, 50])
We assume for problem (P) that:
-
(i)
the cones
\(K_{i}\), \(i=1,\ldots,s\), \(D_{j}\), \(j=1,\ldots,s\), \(C_{k}\), \(k=1,\ldots,p\), are open and convex;
-
(ii)
the cones
\(C_{k}\), \(k=p+1,\ldots,n\), are convex and closed;
-
(iii)
the cone
\(\tilde {C}= \bigcap_{k=p+1}^{n}C_{k}\)
is contained in the cone tangent to the set
\(\bigcap_{k=p+1}^{n}Q_{k}\);
-
(iv)
the cones
\(C_{k}^{*}\), \(k=p+1,\ldots,n\), are either of the same sense or of the opposite sense,
-
(v)
\(x^{0}\in Q\)
is a local optimum for problem (P),
then the following s equations (the so-called Euler–Lagrange equations) must hold:
$$f_{i}+\sum_{j=1,j\neq i}f_{j}^{(i)}+ \sum_{k=1}^{n}\varphi _{k}^{(i)}=0, \quad i=1,2,\ldots,s, $$
where
\(f_{i}\in K_{i}^{*}\), \(f_{j}^{(i)}\in D_{j}^{*}\), \(j=1,\ldots,s\), \(j\neq i \), \(\varphi_{k}^{(i)} \in C_{k}^{*} \), \(k=1,\ldots,n\), with not all functionals equal to zero simultaneously.
Case
\(s=1\)
For \(s=1\), Problems (P) and (P1) reduce to scalar problems. For them we have the following theorem.
Theorem 2.9
(see [43, 50])
Let us assume for Problem (P) that:
-
(i)
\(s=1\);
-
(ii)
\(I:X\to\mathbb{R}\)
is convex and continuous;
-
(iii)
\(Q_{k}\), \(k=1,\ldots,n\)
are convex;
-
(iv)
there exists
x̃
so that
\(\tilde{x}\in ( \bigcap_{k=1}^{p}\operatorname{int} Q_{k})\cap (\bigcap_{k=p+1}^{n} Q_{k})\);
-
(v)
\(\operatorname{RTC}( \bigcap_{k=p+1}^{n}Q_{k},x^{0})= \bigcap_{k=p+1}^{n}\operatorname{RTC}(Q_{k},x^{0})\);
-
(vi)
the cones
\([\operatorname{RTC}(Q_{k},x^{0})]^{*}\), \(k=p+1,\ldots,n\), are either of the same sense or of the opposite sense,
then
\(x^{0}\)
is a global optimum for Problem (P) if and only if the Euler–Lagrange equation
$$f_{1}+\sum_{k=1}^{n} \varphi_{k}=0 $$
holds, where
\(f_{1}\in[\operatorname{RFC}(I,x^{0})]^{*}\), \(\varphi_{k} \in [\operatorname{RAC}(Q_{k},x^{0})]^{*}\), \(k=1,\ldots,p\), and
\(\varphi_{k} \in[\operatorname{RTC}(Q_{k},x^{0})]^{*}\), \(k=p+1,\ldots,n\), and the functionals are not simultaneously equal to zero.
Using Milyutin–Dubovitskii’s theorem, we shall derive the necessary conditions of optimality for differential inclusions with Mittag–Leffler kernel.