2.1 Statement of the problem
Let \((\Omega, \mathcal{F}, P)\) be a probability space equipped with a filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\). The controlled stochastic system is described as follows:
$$ \left \{ \begin{array}{l} dx(t) = b(t, x(t), u(t))\, dS_{\alpha}(t)+\sigma(t, x(t), u(t))\, d B(S_{\alpha}(t)), \\ x(0) = \xi, \quad t\in[0, T], \end{array} \right . $$
(1)
where \(b(t, x(t), u(t)):[0, T]\times\mathbb{R}^{n}\times\mathcal {U}[0, T]\rightarrow \mathbb{R}^{n}\) and \(\sigma(t, x(t), u(t)):[0, T]\times\mathbb{R}^{n}\times\mathcal{U}[0, T]\rightarrow \mathbb {R}^{n}\) are given functions, ξ is the initial value, \(u(t)\) is the control process, and \(x(t)\) is the corresponding state process. The inverse α-stable subordinator is defined by
$$S_{\alpha}(t)=\inf\bigl\{ \tau>0:U_{\alpha}(\tau)>t\bigr\} , $$
where \(U_{\alpha}(\tau)\) is a strictly increasing α-stable Lévy process (a stable subordinator), i.e. a pure-jump process whose Laplace transform is given by \(\mathbb{E}(e^{-kU_{\alpha}(\tau)})=e^{-\tau k^{\alpha}}\), \(0<\alpha<1\). Every jump of \(U_{\alpha}(\tau)\) corresponds to a flat period of its inverse \(S_{\alpha}(t)\).
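For intuition, the pair \((U_{\alpha}, S_{\alpha})\) can be simulated directly. The following sketch (ours, not from the source; it assumes NumPy and uses the Kanter representation of a positive stable random variable with Laplace transform \(e^{-k^{\alpha}}\)) samples \(U_{\alpha}\) on a grid of mesh Δτ and approximates \(S_{\alpha}(t)\) by counting grid points at which \(U_{\alpha}\leq t\):

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_increments(alpha, dtau, size, rng):
    """Increments of the alpha-stable subordinator U_alpha over steps of
    length dtau. Uses the Kanter representation of a positive stable
    random variable X with E[exp(-k X)] = exp(-k**alpha)."""
    u = rng.uniform(0.0, np.pi, size)
    w = rng.exponential(1.0, size)
    x = (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
         * (np.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
    # self-similarity: U_alpha(dtau) has the law of dtau**(1/alpha) * X
    return dtau ** (1.0 / alpha) * x

def inverse_subordinator(alpha, t, dtau, max_steps, rng, n_paths):
    """Grid approximation of S_alpha(t) = inf{tau : U_alpha(tau) > t},
    namely S_alpha(t) ~ dtau * #{j >= 1 : U_alpha(j * dtau) <= t}."""
    inc = stable_increments(alpha, dtau, (n_paths, max_steps), rng)
    U = np.cumsum(inc, axis=1)
    return dtau * np.sum(U <= t, axis=1)

# Monte Carlo sample of S_{1/2}(1) over many paths
est = inverse_subordinator(0.5, 1.0, dtau=0.01, max_steps=1200,
                           rng=rng, n_paths=2000)
```

With \(g\equiv1\) in Lemma 2.1 below, \(E[S_{\alpha}(t)]=t^{\alpha}/\Gamma(1+\alpha)\), so the Monte Carlo mean of `est` should be close to \(2/\sqrt{\pi}\approx1.128\) for \(\alpha=1/2\), \(t=1\).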
The space of admissible controls is defined as
$$\begin{aligned} \mathcal{U}[0, T] \triangleq& \biggl\{ u:[0,T] \times \Omega\rightarrow \mathbb{R}^{n}\Big|u \text{ is an } \mathcal{F}_{t} \text {-adapted stochastic process and } \\ & E\biggl(\int^{T}_{0}\bigl\vert u(t)\bigr\vert ^{2}\, dt\biggr) < +\infty \biggr\} . \end{aligned}$$
The cost functional is
$$ J\bigl(u(t)\bigr)=E \biggl\{ \int^{T}_{0}l \bigl(t, x(t) ,u(t)\bigr)\, dS_{\alpha}(t)+h\bigl(x(T)\bigr) \biggr\} , $$
(2)
where \(l(t, x(t), u(t)):[0, T]\times\mathbb{R}^{n}\times\mathcal {U}[0, T]\rightarrow \mathbb{R}\) and \(h:\mathbb {R}^{n}\rightarrow\mathbb{R}\) are given continuously differentiable functions. We introduce the following basic assumptions, which will be in force throughout the paper.
-
(H1)
b, σ, l, h are continuously differentiable with respect to x. There exists a constant \(L_{1} > 0\) such that, for \(\varphi(t, x, u)=b(t, x, u)\), \(\sigma(t, x, u)\), we have:
-
1.
\(|\varphi(t, x, u)-\varphi(t, \hat{x}, \hat{u})| \leq L_{1}(|x-\hat{x}|+|u-\hat{u}|)\), \(\forall t\in[0, T]\), \(x,\hat{x}\in\mathbb {R}^{n}\), \(u, \hat{u}\in\mathcal{U}[0, T]\).
-
2.
\(|\varphi(t, x, u)|\leq C(1+|x|)\), \(\forall x\in\mathbb{R}^{n}\), \(t \in[0, T]\).
-
(H2)
The maps b, σ, l, h are \(C^{2}\) in x with derivatives bounded by a constant M. There exists a constant \(L_{2} > 0\) such that for \(\varphi(t, x, u)=b(t, x, u)\), \(\sigma(t, x, u)\), we have
$$\begin{aligned}& \bigl\vert \varphi_{x}(t, x, u)-\varphi_{x}(t, \hat{x}, \hat{u})\bigr\vert \leq L_{2}\bigl(\vert x-\hat{x}\vert +|u- \hat{u}|\bigr), \\& \quad \forall t\in[0, T], x,\hat{x}\in\mathbb {R}^{n}, u, \hat{u}\in\mathcal{U}[0, T]. \end{aligned}$$
Then we can pose the following optimal control problem.
Problem (A)
Find a pair \((x^{*}(t), u^{*}(t))\in\mathbb {R}^{n}\times\mathcal{U}[0, T]\) such that
$$ J\bigl(u^{*}(t)\bigr)=\inf_{u(t)\in\mathcal{U}[0, T]}J \bigl(u(t)\bigr). $$
(3)
Now, we introduce the variational equation of (1),
$$ \left \{ \begin{array}{l} d\hat{x}(t)=(b_{x}(t)\hat{x}(t)+b_{u}(t)\hat {u}(t))\, dS_{\alpha}(t)+(\sigma_{x}(t)\hat{x}(t) \\ \hphantom{d\hat{x}(t)=}{}+\sigma_{u}(t)\hat{u}(t))\, d B(S_{\alpha}(t)), \\ \hat{x}(0)=0, \quad t\in[0, T] , \end{array} \right . $$
(4)
and the adjoint equation of (1), respectively,
$$ \left \{ \begin{array}{l} dy(t) = -[b_{x}(t, x(t), u(t))y(t)+l_{x}(t, x(t), u(t)) \\ \hphantom{dy(t) =}{} +\sigma_{x}(t, x(t), u(t))z(t)]\, dS_{\alpha}(t) \\ \hphantom{dy(t) =}{} +z(t)\, d B(S_{\alpha}(t)), \\ y(T) = h_{x}(x(T)), \quad t\in[0, T]. \end{array} \right . $$
(5)
The Hamiltonian of our optimal control problem is obtained as follows:
$$ H(t, x, u, y, z)=l\bigl(t, x(t), u(t)\bigr)+b\bigl(t, x(t), u(t) \bigr)y(t)+\sigma\bigl(t, x(t), u(t)\bigr)z(t). $$
(6)
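Since \(H_{x}=l_{x}+b_{x}y+\sigma_{x}z\) by (6), the adjoint equation (5) can be written in the compact form

$$ dy(t)=-H_{x}\bigl(t, x(t), u(t), y(t), z(t)\bigr)\, dS_{\alpha}(t)+z(t)\, dB\bigl(S_{\alpha}(t)\bigr), \qquad y(T)=h_{x}\bigl(x(T)\bigr). $$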
2.2 Well-posedness of the problem
To derive our maximum principle, we need the following auxiliary results.
Proposition 2.1
(Itô formula; see [17, Theorem 2.4])
Suppose that \(x(\cdot)\) has the stochastic differential
$$ dx=F\, dS_{\alpha}+G\, dB(S_{\alpha}) $$
for \(F\in\mathbb{L}^{1}(0,T)\), \(G\in\mathbb{L}^{2}(0,T)\). Assume that \(u:\mathbb{R}\times[0,T]\rightarrow\mathbb{R}\) is continuous and that \(\frac{\partial u}{\partial t}\), \(\frac{\partial u}{\partial x}\), \(\frac {\partial^{2} u}{\partial x^{2}}\) exist and are continuous. Set
$$Y(t):=u\bigl(x(t),t\bigr). $$
Then \(Y\) has the stochastic differential
$$ dY=\frac{\partial u}{\partial t}\, dt+\frac{\partial u}{\partial x}\bigl(F\, dS_{\alpha}+G\, dB(S_{\alpha})\bigr)+\frac{1}{2} \frac{\partial^{2} u}{\partial x^{2}}G^{2}\, dS_{\alpha}, \quad 0< \alpha<1. $$
(7)
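For instance (a standard illustration, not part of the cited result), applying Proposition 2.1 to \(u(x,t)=x^{2}\) gives

$$ d\bigl(x(t)^{2}\bigr)=2x(t)\bigl(F\, dS_{\alpha}+G\, dB(S_{\alpha})\bigr)+G^{2}\, dS_{\alpha}. $$

In particular, for \(F=0\), \(G=1\), i.e. \(x(t)=B(S_{\alpha}(t))\), taking expectations yields \(E[B(S_{\alpha}(t))^{2}]=E[S_{\alpha}(t)]\), the well-known mean squared displacement of subdiffusion.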
Lemma 2.1
(See [4])
Let \(S_{\alpha}(t)\) be the inverse α-stable subordinator and \(g(t)\) an integrable function. Then
$$E\biggl[\int^{t}_{0}g\bigl(S_{\alpha}( \tau)\bigr)\, dS_{\alpha}(\tau)\biggr]=\frac{1}{\Gamma (\alpha)}\int ^{t}_{0}(t-\tau)^{\alpha-1}E\bigl[g \bigl(S_{\alpha}(\tau)\bigr)\bigr]\, d\tau. $$
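As a quick sanity check, take \(g\equiv1\); then the left-hand side is \(E[S_{\alpha}(t)]\) and the lemma gives

$$E\bigl[S_{\alpha}(t)\bigr]=\frac{1}{\Gamma(\alpha)}\int^{t}_{0}(t-\tau)^{\alpha-1}\, d\tau=\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}=\frac{t^{\alpha}}{\Gamma(1+\alpha)}, $$

recovering the known mean of the inverse α-stable subordinator.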
Lemma 2.2
(See [4])
For any continuous function \(f(t)\), the following equation holds:
$$E\biggl[\int^{t}_{0}f(\tau)g \bigl(S_{\alpha}(\tau)\bigr)\, dS_{\alpha}(\tau)\biggr]=\int ^{t}_{0}f(\tau)D^{1-\alpha}_{\tau}E \bigl[g\bigl(S_{\alpha}(\tau)\bigr)\bigr]\, d\tau. $$
Here the operator \(D^{1-\alpha}_{t}f(t)=\frac{1}{\Gamma(\alpha)}\frac {\partial}{\partial t}\int_{0}^{t}(t-s)^{\alpha-1}f(s)\, ds\) is the fractional derivative of Riemann-Liouville type. In particular, the fractional derivative of a constant C is not zero: \(D^{1-\alpha}_{t}C=\frac {t^{\alpha-1}}{\Gamma(\alpha)}C\).
Remark 2.1
Taking \(f\equiv g\equiv1\) in Lemma 2.2, we get \(E[\int^{t}_{0}\, dS_{\alpha}(\tau)]=\int^{t}_{0}D^{1-\alpha}_{\tau}1\, d\tau=\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}\), which is bounded for \(t\in[0, T]\) and \(\alpha\in(0, 1)\). We fix a constant \(P\) with \(\frac{t^{\alpha}}{\alpha \Gamma(\alpha)}< P\) on \([0, T]\).
Theorem 2.1
Let b and σ be measurable functions satisfying (H1) and (H2), let \(T>0\), and let the initial value \(X(0)\) be independent of the σ-algebra generated by the driving processes. Then the stochastic differential equation
$$ dX(t)=b\bigl(t,X(t)\bigr)\, dS_{\alpha}(t)+\sigma\bigl(t, X(t)\bigr)\, dB\bigl(S_{\alpha}(t)\bigr),\quad t\in[0, T] $$
(8)
has a unique solution \(X(t)\).
Proof
Define \(Y^{(0)}(t)=X(0)\) and construct the Picard iterates \(Y^{(k)}(t)=Y^{(k)}(t, \omega)\), \(k\geq1\), recursively by
$$ Y^{(k+1)}(t)=X(0)+\int^{t}_{0}b \bigl(s, Y^{(k)}(s)\bigr)\, dS_{\alpha}(s)+\int ^{t}_{0}\sigma\bigl(s, Y^{(k)}(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr). $$
(9)
Then, for \(k\geq1\), \(t\leq T\), we have
$$\begin{aligned}& E\bigl\Vert Y^{(k+1)}(t)-Y^{(k)}(t)\bigr\Vert ^{2} \\& \quad = E\biggl\Vert \int^{t}_{0}\bigl(b \bigl(s, Y^{(k)}(s)\bigr)-b\bigl(s, Y^{(k-1)}(s)\bigr)\bigr)\, dS_{\alpha}(s) \\& \qquad {} +\int^{t}_{0}\bigl(\sigma\bigl(s, Y^{(k)}(s)\bigr)-\sigma\bigl(s, Y^{(k-1)}(s)\bigr)\bigr)\, dB \bigl(S_{\alpha}(s)\bigr)\biggr\Vert ^{2} \\& \quad \leq2\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}E\int^{t}_{0}\bigl\Vert b\bigl(s, Y^{(k)}(s)\bigr)-b\bigl(s, Y^{(k-1)}(s) \bigr)\bigr\Vert ^{2}\, dS_{\alpha}(s) \\& \qquad {} +2E\int^{t}_{0}\bigl\Vert \sigma\bigl(s, Y^{(k)}(s)\bigr)-\sigma\bigl(s, Y^{(k-1)}(s)\bigr) \bigr\Vert ^{2}\, dS_{\alpha}(s) \\& \quad \leq2 (P+1)\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}L^{2}E\int^{t}_{0} \bigl\Vert Y^{(k)}(s)-Y^{(k-1)}(s)\bigr\Vert ^{2}\, dS_{\alpha}(s) \end{aligned}$$
and
$$\begin{aligned} E\bigl\Vert Y^{(1)}(t)-Y^{(0)}(t)\bigr\Vert ^{2}&\leq\frac{4t^{\alpha}}{\alpha\Gamma (\alpha)}\bigl(1+E|X_{0}|^{2} \bigr)\frac{ t^{\alpha}}{\alpha\Gamma(\alpha)}+\frac {4 t^{\alpha}}{\alpha\Gamma(\alpha)}\bigl(1+E|X_{0}|^{2} \bigr) \\ &\leq\frac{4t^{\alpha}}{\alpha\Gamma(\alpha)}\bigl(1+E|X_{0}|^{2}\bigr) \biggl( \frac {t^{\alpha}}{\alpha\Gamma(\alpha)}+1\biggr) \\ &\leq A_{1}t, \end{aligned}$$
where the constant \(A_{1}\) depends on L, P, and \(E|X_{0}|^{2}\). Hence we obtain
$$ E\bigl\Vert Y^{(k+1)}(t)-Y^{(k)}(t)\bigr\Vert ^{2}\leq\bigl(2(P+1)PL^{2}\bigr)^{k}(A_{1} t)^{k} \leq(A_{2}t)^{k}. $$
Here the constant \(A_{2}\) depends on L, P, and \(E|X_{0}|^{2}\). Assume that \(A_{2}T<\frac{1}{2}\), and let \(m\geq n \geq0\). Then
$$\begin{aligned} \bigl\Vert Y^{(m)}(t)-Y^{(n)}(t)\bigr\Vert _{L^{2}(0, T)}&=\Biggl\Vert \sum_{k=n}^{m-1}\bigl(Y^{(k+1)}(t)-Y^{(k)}(t)\bigr) \Biggr\Vert _{L^{2}(0, T)} \\ &\leq\sum_{k=n}^{m-1}\biggl(E\int ^{T}_{0}\bigl\vert Y^{(k+1)}(t)-Y^{(k)}(t) \bigr\vert ^{2}\, dS_{\alpha}(t)\biggr)^{\frac{1}{2}} \\ &\leq\sum_{k=n}^{m-1}\biggl(\int ^{T}_{0}(A_{2}t)^{k}\, dS_{\alpha}(t)\biggr)^{\frac {1}{2}} \\ &\leq\sum_{k=n}^{m-1}\bigl(P(A_{2}T)^{k} \bigr)^{\frac{1}{2}}\rightarrow0 \end{aligned}$$
as \(m, n\rightarrow\infty\). Therefore \(\{Y^{(n)}(t)\}_{n=0}^{\infty}\) is a Cauchy sequence in \(L^{2}(0, T)\) and hence converges in \(L^{2}(0, T)\). Define
$$X(t):= \lim_{n\rightarrow\infty}Y^{(n)}(t). $$
Next, we prove that \(X(t)\) satisfies (8). For all n and \(t\in[0, T]\), we have
$$Y^{(n+1)}(t)=X(0)+\int^{t}_{0}b\bigl(s, Y^{(n)}(s)\bigr)\, dS_{\alpha}(s)+\int^{t}_{0} \sigma\bigl(s, Y^{(n)}(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr). $$
Then we get
$$\int^{t}_{0}b\bigl(s, Y^{(n)}(s)\bigr) \, dS_{\alpha}(s)\rightarrow\int^{t}_{0}b \bigl(s, X(s)\bigr)\, dS_{\alpha}(s) \quad \mbox{as } n \rightarrow\infty. $$
Also
$$\int^{t}_{0}\sigma\bigl(s, Y^{(n)}(s) \bigr)\, dB\bigl(S_{\alpha}(s)\bigr)\rightarrow\int^{t}_{0} \sigma\bigl(s, X(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr) \quad \mbox{as } n \rightarrow\infty. $$
We conclude that for all \(t\in[0, T]\) we have
$$X(t)=X(0)+\int^{t}_{0}b\bigl(s, X(s)\bigr)\, dS_{\alpha}(s)+\int^{t}_{0}\sigma\bigl(s, X(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr). $$
That is, \(X(t)\) satisfies (8).
Now we prove uniqueness. Let \(X_{1}(t)\) and \(X_{2}(t)\) be solutions of (8) with the same initial values. Then
$$\begin{aligned} E\bigl\Vert X_{1}(t)-X_{2}(t)\bigr\Vert ^{2} =&E\biggl\Vert \int^{t}_{0} \bigl(b\bigl(s, X_{1}(s)\bigr)-b\bigl(s, X_{2}(s)\bigr)\bigr) \, dS_{\alpha}(s) \\ &{}+\int^{t}_{0}\bigl(\sigma\bigl(s, X_{1}(s)\bigr)-\sigma\bigl(s, X_{2}(s)\bigr)\bigr)\, dB \bigl(S_{\alpha }(s)\bigr)\biggr\Vert ^{2} \\ \leq&2\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}E\int^{t}_{0}\bigl\Vert b\bigl(s, X_{1}(s)\bigr)-b\bigl(s, X_{2}(s) \bigr)\bigr\Vert ^{2}\, dS_{\alpha}(s) \\ &{}+2E\int^{t}_{0}\bigl\Vert \sigma \bigl(s, X_{1}(s)\bigr)-\sigma\bigl(s, X_{2}(s)\bigr) \bigr\Vert ^{2}\, dS_{\alpha}(s) \\ \leq&2(P+1)\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}L^{2}E\int^{t}_{0} \bigl\Vert X_{1}(s)-X_{2}(s)\bigr\Vert ^{2}\, dS_{\alpha}(s). \end{aligned}$$
From Lemmas 2.1 and 2.2, we get
$$\begin{aligned}& E\bigl\Vert X_{1}(t)-X_{2}(t)\bigr\Vert ^{2} \\& \quad \leq2(P+1)\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}L^{2}E\int^{t}_{0} \bigl\Vert X_{1}(s)-X_{2}(s)\bigr\Vert ^{2} \frac{s^{\alpha-1}}{\Gamma(\alpha)}\, ds \\& \quad \leq2(P+1)P L^{2}CE\int^{t}_{0} \bigl\Vert X_{1}(s)-X_{2}(s)\bigr\Vert ^{2}\, ds. \end{aligned}$$
By the Gronwall inequality, we conclude that
$$E\bigl\Vert X_{1}(t)-X_{2}(t)\bigr\Vert ^{2}=0 \quad \mbox{for all } t \in[0, T]. $$
The uniqueness is proved. □
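The Picard construction above is not a practical simulation method, but it suggests one: conditionally on the time change, the increment \(B(S_{\alpha}(t_{i+1}))-B(S_{\alpha}(t_{i}))\) is Gaussian with variance \(\Delta S_{i}=S_{\alpha}(t_{i+1})-S_{\alpha}(t_{i})\) when B is independent of \(S_{\alpha}\) (the uncoupled case assumed here). A minimal Euler-type sketch of (8) in NumPy (ours; parameter choices are illustrative):

```python
import numpy as np

def simulate_time_changed_sde(b, sigma, x0, alpha, T, n, dtau, rng):
    """Euler scheme for dX = b(t,X) dS_alpha(t) + sigma(t,X) dB(S_alpha(t)).
    Step 1: simulate U_alpha on a tau-grid (Kanter representation);
    Step 2: invert it to get S_alpha on the t-grid;
    Step 3: Euler steps with Gaussian noise of variance dS."""
    # Step 1: U_alpha(j * dtau), j = 0, 1, ..., until it exceeds T
    U = [0.0]
    while U[-1] <= T:
        v = rng.uniform(0.0, np.pi)
        w = rng.exponential(1.0)
        x_st = (np.sin(alpha * v) / np.sin(v) ** (1.0 / alpha)
                * (np.sin((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))
        U.append(U[-1] + dtau ** (1.0 / alpha) * x_st)
    U = np.asarray(U)

    # Step 2: S_alpha(t) = inf{tau : U_alpha(tau) > t} ~ dtau * #{j : U_j < t}
    t = np.linspace(0.0, T, n + 1)
    S = dtau * np.searchsorted(U, t, side="left")

    # Step 3: Euler steps; dB(S_alpha) ~ N(0, dS) given the time change
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dS = S[i + 1] - S[i]
        dB = rng.normal(0.0, np.sqrt(dS))
        x[i + 1] = x[i] + b(t[i], x[i]) * dS + sigma(t[i], x[i]) * dB
    return t, S, x

# illustrative use: a time-changed Ornstein-Uhlenbeck-type path
t, S, x = simulate_time_changed_sde(lambda t, x: -x, lambda t, x: 0.2,
                                    x0=1.0, alpha=0.8, T=1.0, n=200,
                                    dtau=1e-3, rng=np.random.default_rng(1))
```

The flat periods of \(S_{\alpha}\) (corresponding to the jumps of \(U_{\alpha}\)) translate into intervals where \(\Delta S=0\), so the simulated state does not move there, which is the trapping behavior characteristic of subdiffusive dynamics.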
2.3 Some estimates of the solution
Let \(u^{*}\) and v be two admissible controls. For any \(\varepsilon\in \mathbb{R}\), we denote \(u^{\varepsilon}=u^{*}+\varepsilon(v-u^{*})\). Corresponding to \(u^{\varepsilon}\) and \(u^{*}\), there are two solutions \(x^{\varepsilon}(\cdot)\) and \(x^{*}(\cdot)\) to (1). That is,
$$\begin{aligned}& x^{*}(t)=\xi+\int_{0}^{t}b\bigl(s, x^{*}(s), u^{*}(s)\bigr)\, dS_{\alpha}(s)+\int _{0}^{t}\sigma\bigl(s, x^{*}(s), u^{*}(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr), \\& x^{\varepsilon}(t)=\xi+\int_{0}^{t}b\bigl(s, x^{\varepsilon}(s), u^{\varepsilon}(s)\bigr)\, dS_{\alpha}(s)+\int _{0}^{t}\sigma\bigl(s, x^{\varepsilon }(s), u^{\varepsilon}(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr). \end{aligned}$$
Theorem 2.2
Let (H1) and (H2) hold. Then
$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert x^{\varepsilon}(t)-x^{*}(t) \bigr\vert ^{2}=O\bigl(\varepsilon^{2}\bigr), \end{aligned}$$
(10)
$$\begin{aligned}& \sup_{t\in[0, T]}E|\hat{x}|^{2}=O\bigl( \varepsilon^{2}\bigr), \end{aligned}$$
(11)
$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert x^{\varepsilon}(t)-x^{*}(t)- \hat {x}(t)\bigr\vert ^{2}=O\bigl(\varepsilon^{2}\bigr). \end{aligned}$$
(12)
Proof
We have
$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert x^{\varepsilon}(t)-x^{*}(t) \bigr\vert ^{2} \\& \quad = \sup_{t\in[0, T]}E\biggl\vert \int_{0}^{t} \bigl(b\bigl(s, x^{\varepsilon}(s), u^{\varepsilon}(s)\bigr)-b\bigl(s, x^{*}(s), u^{*}(s)\bigr)\bigr)\, dS_{\alpha}(s) \\& \qquad {} +\int_{0}^{t}\bigl(\sigma\bigl(s, x^{\varepsilon}(s), u^{\varepsilon}(s)\bigr)-\sigma \bigl(s, x^{*}(s), u^{*}(s)\bigr)\bigr)\, dB\bigl(S_{\alpha}(s) \bigr)\biggr\vert ^{2} \\& \quad \leq \sup_{t\in[0, T]}2E \biggl\{ \frac{t^{\alpha}}{\alpha\Gamma (\alpha)}\biggl\vert \int_{0}^{t}\bigl(b\bigl(s, x^{\varepsilon}(s), u^{\varepsilon }(s)\bigr)-b\bigl(s, x^{*}(s), u^{*}(s)\bigr)\bigr)^{2}\, dS_{\alpha}(s)\biggr\vert \\& \qquad {} +\biggl\vert \int_{0}^{t}\bigl( \sigma\bigl(s, x^{\varepsilon}(s), u^{\varepsilon}(s)\bigr)-\sigma \bigl(s, x^{*}(s), u^{*}(s)\bigr)\bigr)^{2}\, dS_{\alpha}(s)\biggr\vert \biggr\} \\& \quad \leq \sup_{t\in[0, T]} 4\frac{t^{\alpha}}{\alpha\Gamma(\alpha )}L^{2}(P+1) \biggl\{ \int^{T}_{0}E\bigl(\bigl\vert x^{\varepsilon }(s)-x^{*}(s)\bigr\vert ^{2}\bigr)\, dS_{\alpha}(s) \\& \qquad {}+\frac{T^{\alpha}}{\alpha\Gamma(\alpha )}\varepsilon^{2}E \bigl(v-u^{*}\bigr)^{2} \biggr\} . \end{aligned}$$
From Lemmas 2.1 and 2.2 and the Gronwall inequality, we get
$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert x^{\varepsilon}(t)-x^{*}(t) \bigr\vert ^{2} \\& \quad \leq4\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}L^{2}(P+1) \biggl\{ \int ^{T}_{0}E\bigl(\bigl\vert x^{\varepsilon}(s)-x^{*}(s) \bigr\vert ^{2}\bigr)\frac{s^{\alpha-1}}{\Gamma (\alpha)}\, ds+\frac{T^{\alpha}}{\alpha\Gamma(\alpha)} \varepsilon ^{2}E\bigl(v-u^{*}\bigr)^{2} \biggr\} \\& \quad \leq C_{P, L}\varepsilon^{2}, \end{aligned}$$
where \(C_{P, L}\) is a constant that depends on P, L. This proves (10). Similarly, we can prove (11).
We set \(\eta(t)=x^{\varepsilon}(t)-x^{*}(t)-\hat{x}(t)\). Then
$$\begin{aligned} \bigl\vert \eta(t)\bigr\vert ^{2} =&\biggl\vert \int ^{t}_{0} \biggl\{ \int^{1}_{0}b_{x} \bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon}(s)-x^{*}(s) \bigr), u^{\varepsilon}(s)\bigr)\, d\theta \bigl(x^{\varepsilon}(s)-x^{*}(s) \bigr) \\ &{}+\int^{1}_{0}b_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-b_{x}(s)\hat{x}(s)-b_{u}(s)\hat{u}(s) \biggr\} \, dS_{\alpha}(s) \\ &{}+\int^{t}_{0} \biggl\{ \int ^{1}_{0}\sigma_{x}\bigl(s, x^{*}(s)+\theta \bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\bigl(x^{\varepsilon }(s)-x^{*}(s) \bigr) \\ &{}+\int^{1}_{0}\sigma_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-\sigma_{x}(s)\hat{x}(s)-\sigma_{u}(s)\hat{u}(s) \biggr\} \, dB\bigl(S_{\alpha}(s)\bigr) \biggr\vert ^{2} \\ =&\biggl\vert \int^{t}_{0} \biggl\{ \int ^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta \bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\, \eta(s) \\ &{}+\biggl[\int^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta-b_{x}(s)\biggr]\hat{x}(s) \\ &{}+\int^{1}_{0}b_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-b_{u}(s)\hat{u}(s) \biggr\} \, dS_{\alpha}(s) \\ &{}+\int^{t}_{0} \biggl\{ \int ^{1}_{0}\sigma_{x}\bigl(s, x^{*}(s)+\theta \bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\,d\theta\, \eta(s) \\ &{}+\biggl[\int^{1}_{0}\sigma_{x} \bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s) \bigr), u^{\varepsilon}(s)\bigr)\, d\theta-\sigma_{x}(s)\biggr]\hat {x}(s) \\ &{}+\int^{1}_{0}\sigma_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-\sigma_{u}(s)\hat{u}(s) \biggr\} \, dB\bigl(S_{\alpha}(s)\bigr)\biggr\vert ^{2} \\ \leq& 2\biggl\vert \int^{t}_{0} \biggl\{ \int ^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta \bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\, \eta(s) \\ &{}+\biggl[\int^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta-b_{x}(s)\biggr]\hat{x}(s) \\ &{}+\int^{1}_{0}b_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-b_{u}(s)\hat{u}(s) \biggr\} ^{2}\, dS_{\alpha}(s) \biggr\vert \\ &{}+2\biggl\vert \int^{t}_{0} \biggl\{ \int ^{1}_{0}\sigma_{x}\bigl(s, x^{*}(s)+\theta \bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\, \eta(s) \\ &{}+\biggl[\int^{1}_{0}\sigma_{x} \bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s) \bigr), u^{\varepsilon}(s)\bigr)\, d\theta-\sigma_{x}(s)\biggr]\hat {x}(s) \\ &{}+\int^{1}_{0}\sigma_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-\sigma_{u}(s)\hat{u}(s) \biggr\} ^{2}\, dS_{\alpha}(s)\biggr\vert \\ \leq&8\int^{t}_{0}\biggl\{ \biggl(\int ^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\biggr)^{2}\eta^{2}(s) \\ &{}+\biggl[\int^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta-b_{x}(s)\biggr]^{2} \hat{x}^{2}(s) \\ &{}+\int^{1}_{0}b_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)^{2}\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s) \bigr)^{2} \\ &{}+b_{u}(s)^{2}\hat{u}(s)^{2}\biggr\} \, dS_{\alpha}(s) \\ &{}+8\int^{t}_{0}\biggl\{ \biggl(\int ^{1}_{0}\sigma_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\biggr)^{2}\eta^{2}(s) \\ &{}+\biggl[\int^{1}_{0}\sigma_{x} \bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s) \bigr), u^{\varepsilon}(s)\bigr)\, d\theta-\sigma_{x}(s)\biggr]^{2} \hat {x}^{2}(s) \\ &{}+\int^{1}_{0}\sigma_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)^{2}\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s) \bigr)^{2} \\ &{}+\sigma_{u}(s)^{2}\hat{u}(s)^{2}\biggr\} \, dS_{\alpha}(s). \end{aligned}$$
(13)
From Lemmas 2.1 and 2.2 and the Gronwall inequality, we get
$$\begin{aligned} \sup_{t\in[0, T]}E\bigl\vert \eta(t)\bigr\vert ^{2}& \leq E\int^{T}_{0}\bigl(C_{1}\eta ^{2}(s)+\varepsilon^{2} C_{2}\bigr) \, dS_{\alpha}(s) \\ &\leq\int^{T}_{0}C_{1}E \eta^{2}(s)\frac{s^{\alpha-1}}{\Gamma(\alpha )}\, ds+\varepsilon^{2}C_{2} \frac{T^{\alpha}}{\Gamma(\alpha)\alpha } \\ &\leq\varepsilon^{2}M_{C_{1}, C_{2}}, \end{aligned}$$
(14)
where \(C_{1}=16(M^{2}+L^{2}C)\), \(C_{2}=(L^{2}C-M^{2})(v-\hat {u})^{2}+M^{2}(v-u^{*})^{2}\), and \(M_{C_{1}, C_{2}}\) is a constant depending on \(C_{1}\) and \(C_{2}\). □