In this section, we introduce some basic notions and preliminaries on pathwise integrals with respect to fBm; for a more detailed discussion, we refer the reader to [6, 32–35].
Let \((\varOmega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq{0}} , P)\) be a complete probability space equipped with the natural filtration \(\{ \mathcal{F}_{t}\}_{t\geq{0}}\), where \(\mathcal{F}_{t}\) is the σ-algebra generated by \(\{W^{H}(s), s\in[0,t]\}\) and \(\mathcal{F}_{0}\) contains all P-null sets.
Definition 2.1
A Gaussian process \(\{W^{H}(t), t\in[0,T]\}\) with Hurst index \(H\in(\frac{1}{2},1)\) is said to be a centered self-similar fBm if the following properties are satisfied:
\(W^{H}(0) =0\),
\(E[W^{H}(t)]=0\), \(t\in[0,T]\),
\(E[W^{H}(t)W^{H}(s)]= \frac{1}{2} (|t|^{2H} +|s|^{2H} -|t-s|^{2H})\), \(t,s\in[0,T]\).
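For readers who wish to visualize such a process, the covariance formula above already suggests a simple simulation recipe: sample a Gaussian vector whose covariance matrix is built from Definition 2.1. The following Python sketch is an illustration only (it is not part of the analysis below; the grid size n, horizon T, Hurst index H, and seed are arbitrary choices) and generates one discretized sample path of \(W^{H}\) via a Cholesky factorization.

```python
import numpy as np

def fbm_path(n=500, T=1.0, H=0.75, seed=0):
    """Sample W^H on a uniform grid of [0, T] by Cholesky factorization of the
    covariance R(t, s) = 0.5 * (t^{2H} + s^{2H} - |t - s|^{2H}) of Definition 2.1."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n + 1)[1:]                 # strictly positive grid points
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (tt ** (2 * H) + ss ** (2 * H) - np.abs(tt - ss) ** (2 * H))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    # prepend W^H(0) = 0
    return np.concatenate(([0.0], t)), np.concatenate(([0.0], path))

if __name__ == "__main__":
    grid, WH = fbm_path()
    print(WH[:5])
```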
Next, for the convenience of the reader, we provide some basic properties of pathwise integrals. First, we introduce the function \(\varphi: \mathbb{R}_{+}\times\mathbb{R}_{+} \rightarrow\mathbb{R}_{+}\) defined as
$$\begin{aligned} \varphi(t,s)=H(2H-1) \vert t-s \vert ^{2H-2},\quad t,s\in \mathbb{R}_{+}, \end{aligned}$$
where \(H \in(\frac{1}{2},1)\). Let \(f:\mathbb{R}_{+} \rightarrow \mathbb{R}_{+}\) be a Borel measurable function and define the space
$$\begin{aligned} L^{2}_{\varphi}(\mathbb{R}_{+})= \biggl\{ f: \Vert f \Vert ^{2}_{\varphi}= \int_{\mathbb {R}_{+}} \int_{\mathbb{R}_{+}}{f(t)f(s)\varphi(t,s)}\,ds\,dt < \infty \biggr\} , \end{aligned}$$
which becomes a separable Hilbert space under the inner product
$$\begin{aligned} \langle f_{1},f_{2}\rangle_{\varphi}= \int_{\mathbb{R}_{+}} \int_{\mathbb {R}_{+}}{f_{1}(t)f_{2}(s) \varphi(t,s)}\,ds\,dt,\quad f_{1}, f_{2} \in L^{2}_{\varphi}(\mathbb{R}_{+}). \end{aligned}$$
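As a simple consistency check (a standard computation included here for the reader's convenience), take \(f=\mathbf{1}_{[0,T]}\); then

$$\begin{aligned} \Vert \mathbf{1}_{[0,T]} \Vert ^{2}_{\varphi}= \int_{0}^{T} \int_{0}^{T} H(2H-1) \vert t-s \vert ^{2H-2}\,ds\,dt = H \int_{0}^{T} \bigl[t^{2H-1}+(T-t)^{2H-1} \bigr]\,dt = T^{2H}, \end{aligned}$$

which agrees with \(E|W^{H}(T)|^{2}=T^{2H}\) computed from Definition 2.1.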
Now, consider the set \(\mathcal{E}\) of smooth and cylindrical random variables of the form
$$\begin{aligned} F(\omega)=g \biggl( \int_{0}^{T}\psi_{1}(t)\,dW^{H}(t), \ldots, \int_{0}^{T}\psi _{n}(t)\,dW^{H}(t) \biggr), \end{aligned}$$
where \(n\geq1\) and \(g\in\mathcal{C}^{\infty}_{b}(\mathbb{R}^{n})\) (i.e., g and all its partial derivatives are bounded). Moreover, let \(\mathcal{H}\) be the family of measurable functions such that, for \(\psi_{i}\in \mathcal{H}\), \(i=1,\ldots,n\), \(n\in\mathbb{N}\), we have \(\langle\psi_{i},\psi_{j}\rangle_{\varphi}=\delta_{ij}\) and \(\|\psi_{i}\|^{2}_{\varphi}<\infty\). The elements of \(\mathcal{H}\) may fail to be functions; in general they are distributions of negative order. For this reason, it is convenient to introduce the space \(|\mathcal{H}|\) of measurable functions h on \([0,T]\) satisfying
$$\begin{aligned} \Vert h \Vert ^{2}_{ \vert \mathcal{H} \vert } = \int_{0}^{T} \int_{0}^{T}{ \bigl\vert h(t) \bigr\vert \bigl\vert h(s) \bigr\vert \varphi (t,s)}\,ds\,dt < \infty, \end{aligned}$$
and it is easy to show that \(|\mathcal{H}|\) is a Banach space under the norm \(\|\cdot\|_{|\mathcal{H}|}\).
Definition 2.2
The Malliavin derivative \(D^{H}_{t}\) of a smooth and cylindrical random variable F is defined as an \(\mathcal {H}\)-valued random variable such that
$$\begin{aligned} D^{H}_{t}F=\sum_{i=1}^{n}{ \frac{\partial g}{\partial x_{i}} \biggl( \int _{0}^{T}\psi_{1}(s)\,dW^{H}(s), \ldots, \int_{0}^{T}\psi_{n}(s)\,dW^{H}(s) \biggr)\psi_{i}(t)}, \end{aligned}$$
hence, \(D^{H}_{t}\) is a closable operator from \(L^{p}(\varOmega)\) into \(L^{p}(\varOmega,\mathcal{H})\), \(p\geq1\). The k-th iterated Malliavin derivative is denoted by \(D^{H,k}_{t}\), \(k\geq1\). For any \(p \geq1\), the Sobolev space \(\mathbb{D}^{k,p}\) is the closure of \(\mathcal{E}\) with respect to the norm
$$\begin{aligned} \Vert F \Vert ^{p}_{k,p} = E \vert F \vert ^{p}+E\sum_{i=1}^{k} \bigl\Vert D^{H,i}_{t}F \bigr\Vert ^{p}_{\mathcal{H}^{\otimes i}}, \end{aligned}$$
where ⊗ denotes the tensor product.
Similarly, for a Hilbert space U, we denote by \(\mathbb{D}^{k,p}(U)\) the corresponding Sobolev space of U-valued random variables, and for \(p>0\), we denote by \(\mathbb{D}^{1,p}(| \mathcal{H}|)\) the subspace of \(\mathbb{D}^{1,p}(\mathcal{H})\) formed by the elements of \(|\mathcal{H}|\). Following [6], we introduce the φ-derivative of F as follows:
$$\begin{aligned} D^{\varphi}_{t}F= \int_{\mathbb{R_{+}}} \varphi(t,s) D^{H}_{s}F \,ds. \end{aligned}$$
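For instance (a standard example recorded here for concreteness), if \(F=W^{H}(T)=\int_{0}^{T}1\,dW^{H}(s)\), then \(D^{H}_{s}F=\mathbf{1}_{[0,T]}(s)\), and hence, for \(t\in[0,T]\),

$$\begin{aligned} D^{\varphi}_{t}F= \int_{0}^{T} H(2H-1) \vert t-s \vert ^{2H-2}\,ds = H \bigl[t^{2H-1}+(T-t)^{2H-1} \bigr]. \end{aligned}$$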
Definition 2.3
The space \(\mathcal{L}_{\varphi}[0,T]\) of integrands is defined as the family of stochastic processes \(V(t)\) on \([0,T]\) such that \(E\|V\|^{2}_{\varphi}< \infty\), \(V(t)\) is φ-differentiable, the trace of the derivative \(D^{\varphi}_{s} V(t)\) exists, and for \(t,s \in[0,T]\),
$$\begin{aligned} E \biggl[ \int_{0}^{T} \int_{0}^{T}{ \bigl\vert D^{\varphi}_{t} V(s) \bigr\vert ^{2}}\,ds\,dt \biggr] < \infty. \end{aligned}$$
In addition, for each sequence of partitions \((\pi_{n}, n \in\mathbb {N})\) with \(|\pi_{n}| \rightarrow0\) as \(n\rightarrow\infty\), the following are satisfied:
$$\begin{aligned} \sum_{i,j=0}^{n-1} E \biggl[ \int_{t^{(n)}_{i}}^{t^{(n)}_{i+1}} \int _{t^{(n)}_{j}}^{t^{(n)}_{j+1}}{ \bigl\vert D^{\varphi}_{s} V^{\pi } \bigl(t^{(n)}_{i} \bigr)D^{\varphi}_{t} V^{\pi} \bigl(t^{(n)}_{j} \bigr)-D^{\varphi}_{s} V(t)D^{\varphi}_{t} V(s) \bigr\vert ^{2}}\,ds\,dt \biggr] \rightarrow0 \end{aligned}$$
and
$$\begin{aligned} E \bigl\Vert V^{\pi}- V \bigr\Vert ^{2}_{\varphi}\rightarrow0, \end{aligned}$$
as n tends to infinity, where \(\pi_{n}: 0=t^{(n)}_{0}< t^{(n)}_{1} < \cdots < t^{(n)}_{n-1} < t^{(n)}_{n}=T\), \(|\pi_{n}|:= \max_{i}{(t^{(n)}_{i+1}-t^{(n)}_{i})}\), and \(V^{\pi}(t)=V(t^{(n)}_{i})\) for \(t\in[t^{(n)}_{i},t^{(n)}_{i+1})\).
Now define the space \(\mathbb{H}^{1,2}_{\varphi}:=\mathbb{D}^{1,2}(|\mathcal{H}|)\cap\mathcal{L}_{\varphi}[0,T]\).
Definition 2.4
Let \(V(t)\) be a stochastic process with integrable trajectories.
The symmetric integral of \(V(t)\) with respect to \(W^{H}(t)\) is defined as follows:
$$\begin{aligned} \lim_{\epsilon\rightarrow0}\frac{1}{2\epsilon} \int _{0}^{T}{V(s) \bigl[W^{H}(s+ \epsilon)-W^{H}(s-\epsilon) \bigr]}\,ds, \end{aligned}$$
provided that the limit exists in probability; the symmetric integral is denoted by
$$\int_{0}^{T}{V(s)}\,d^{\circ} W^{H}(s). $$
The forward integral of \(V(t)\) with respect to \(W^{H}(t)\) is defined as follows:
$$\begin{aligned} \lim_{\epsilon\rightarrow0}\frac{1}{\epsilon} \int_{0}^{T}{V(s) \bigl[W^{H}(s+\epsilon)-W^{H}(s) \bigr]}\,ds, \end{aligned}$$
provided that the limit exists in probability; the forward integral is denoted by
$$\int_{0}^{T}{V(s)}\,d^{-} W^{H}(s). $$
The backward integral of \(V(t)\) with respect to \(W^{H}(t)\) is defined as follows:
$$\begin{aligned} \lim_{\epsilon\rightarrow0}\frac{1}{\epsilon} \int_{0}^{T} {V(s) \bigl[W^{H}(s)-W^{H}(s-\epsilon) \bigr]}\,ds, \end{aligned}$$
provided that the limit exists in probability; the backward integral is denoted by
$$\int_{0}^{T}{V(s)}\,d^{+} W^{H}(s). $$
In order to establish our results, we need to introduce some lemmas. The next lemma follows from Remark 1 in [35] and Proposition 6.2.3 in [6].
Lemma 2.5
If the stochastic process \(V(t)\) satisfies
$$\begin{aligned} \int_{0}^{T} \int_{0}^{T}{ \bigl\vert D^{H}_{s}V(t) \bigr\vert \vert t-s \vert ^{2H-2}}\,ds\,dt < \infty,\quad V\in \mathbb{D}^{1,2} \bigl( \vert \mathcal{H} \vert \bigr), \end{aligned}$$
then the symmetric integral coincides with the forward and backward integrals.
Since fBm is neither a semimartingale nor a Markov process, the Burkholder–Davis–Gundy inequality and the Itô isometry are no longer available. We therefore need the following two lemmas from [6] and [28].
Lemma 2.6
If \(V(t)\) is a stochastic process in \(\mathbb{H}^{1,2}_{\varphi}\), then the symmetric integral is well defined and
$$\begin{aligned} \int_{0}^{T}{V(s)}\,d^{\circ} W^{H}(s)= \int_{0}^{T}{V(s)\diamond}dW^{H}(s)+ \int _{0}^{T}{D^{\varphi}_{s}V(s)}\,ds, \end{aligned}$$
where ⋄ denotes the Wick product.
We note that, under the hypothesis of Lemma 2.6, the forward and backward integrals are also well defined; hence, by Lemma 2.5, they coincide with the symmetric integral.
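As an illustration of Lemma 2.6 (a standard example stated here without proof), take \(V(s)=W^{H}(s)\). Since \(D^{H}_{u}W^{H}(s)=\mathbf{1}_{[0,s]}(u)\), we have \(D^{\varphi}_{s}W^{H}(s)=\int_{0}^{s}\varphi(s,u)\,du=Hs^{2H-1}\), and therefore

$$\begin{aligned} \int_{0}^{T}{W^{H}(s)}\,d^{\circ} W^{H}(s)= \int_{0}^{T}{W^{H}(s)\diamond}\,dW^{H}(s)+ \int_{0}^{T}Hs^{2H-1}\,ds = \frac{1}{2} \bigl(W^{H}(T) \bigr)^{2}, \end{aligned}$$

where the last equality uses the known identity \(\int_{0}^{T}{W^{H}(s)\diamond}\,dW^{H}(s)=\frac{1}{2}((W^{H}(T))^{2}-T^{2H})\). In particular, the symmetric integral obeys the classical chain rule.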
Lemma 2.7
Let \(W^{H}(t)\) be an fBm with Hurst index \(H\in(\frac{1}{2},1)\) and let \(V(t)\) be a stochastic process in \(\mathbb{H}^{1,2}_{\varphi}\). Then, for \(0 \leq T < \infty\), there exists a constant \(C >0\) such that
$$ E \biggl\vert \int_{0}^{T}{V(s)}\,d^{\circ} W^{H}(s) \biggr\vert ^{2} \leq2HT^{2H-1}E \int _{0}^{T}{ \bigl\vert V(s) \bigr\vert ^{2}}\,ds+4CT^{2}. $$
The following requisite lemma is taken from [36].
Lemma 2.8
Let \(T>0\), \(x_{0}\geq0\), and \(x(t)\), \(y(t)\) be two continuous functions on \([0,T]\). Assume that \(\kappa:\mathbb{R}_{+} \rightarrow\mathbb{R}_{+}\) is a concave continuous nondecreasing function such that \(\kappa(v)>0\) for \(v>0\). If we have
$$\begin{aligned} x(t)\leq x_{0}+ \int_{0}^{t}{y(s)\kappa \bigl(x(s) \bigr)}\,ds\quad \forall t\in[0,T], \end{aligned}$$
then
$$\begin{aligned} x(t)\leq G^{-1} \biggl(G(x_{0})+ \int_{0}^{t}{y(s)}\,ds \biggr)\quad \forall t \in[0,T], \end{aligned}$$
where \((G(x_{0})+\int_{0}^{t}{y(s)}\,ds)\in \operatorname{Dom}(G^{-1})\) and \(G(v)=\int_{0}^{v}{\frac{ds}{\kappa(s)}}\), \(v>0\). Moreover, if \(x_{0}=0\) and \(\int_{0^{+}}{\frac{ds}{\kappa(s)}} = \infty\), then \(x(t) = 0\) for all \(t\in[0,T]\).
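For orientation (an elementary special case added for the reader's convenience), if \(x_{0}>0\) and we take \(\kappa(v)=v\), then, up to an additive constant, \(G(v)=\log v\), and the bound of Lemma 2.8 reduces to the classical Gronwall inequality

$$\begin{aligned} x(t)\leq x_{0}\exp \biggl( \int_{0}^{t}{y(s)}\,ds \biggr)\quad \forall t \in[0,T]. \end{aligned}$$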
Throughout this paper, the following assumptions are imposed.
Assumption A
For all \(x,y \in\mathbb{R}^{n}\) and \(t\in[0,T]\), with \(a(t,\cdot), b(t,\cdot) \in \mathbb{H}^{1,2}_{\varphi}\), there exists a function \(\kappa(\cdot)\) such that
$$\begin{aligned} \bigl\vert a(t,x)-a(t,y) \bigr\vert ^{2}+ \bigl\vert b(t,x)-b(t,y) \bigr\vert ^{2}+ \bigl\vert D^{\varphi}_{t} \bigl(b(t,x)-b(t,y) \bigr) \bigr\vert ^{2} \leq\kappa \bigl( \vert x-y \vert ^{2} \bigr), \end{aligned}$$
where \(\kappa(\cdot)\) is a concave continuous nondecreasing function such that \(\kappa(0)=0\) and
$$\begin{aligned} \int_{0^{+}}{\frac{1}{\kappa(x)}}\,dx=\infty. \end{aligned}$$
Moreover, since \(\kappa(\cdot)\) is a concave continuous nondecreasing function, there exist two constants \(\lambda_{1}>0\) and \(\lambda_{2}>0\) such that
$$ \kappa(x)\leq\lambda_{1}x+\lambda_{2}. $$
Remark 2.9
In view of Assumption A, we see that in the special case \(\kappa(u)=Ku\), \(K>0\), the usual Lipschitz condition is recovered. Therefore, Assumption A is much weaker than the usual Lipschitz condition.
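A concrete non-Lipschitz example satisfying Assumption A (standard in the literature on Yamada–Watanabe-type conditions) is \(\kappa(u)=u\log(1/u)\) for \(0< u\leq e^{-1}\), extended by \(\kappa(0)=0\) and \(\kappa(u)=e^{-1}\) for \(u> e^{-1}\): this function is concave, continuous, and nondecreasing, it satisfies \(\int_{0^{+}}\frac{du}{\kappa(u)}=\infty\), yet \(\kappa(u)/u=\log(1/u)\rightarrow\infty\) as \(u\rightarrow0^{+}\), so it is not Lipschitz at the origin.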
Next, according to Lemma 3.1 in [37], the solution of the impulsive stochastic dynamical system (1) can be given by the following integral equation:
$$\begin{aligned} x(t)= x_{0}+ \int_{0}^{t} {a \bigl(s,x(s) \bigr)}\,ds+ \int_{0}^{t} {b \bigl(s,x(s) \bigr)}\,d^{\circ }W^{H}(s)+ \sum_{0 < t_{j} < t}I_{j} \bigl(x(t_{j}) \bigr). \end{aligned}$$
(2)
Now, consider the standard ISDE with fBm
$$\begin{aligned} x_{\epsilon}(t)= x_{0}+\epsilon^{2H} \int_{0}^{t} {a \bigl(s,x_{\epsilon }(s) \bigr)}\,ds +\epsilon^{H} \int_{0}^{t} {b \bigl(s,x_{\epsilon}(s) \bigr)}\,d^{\circ }W^{H}(s)+\epsilon^{H}\sum _{0 < t_{j} < t}I_{j} \bigl(x_{\epsilon }(t_{j}) \bigr), \end{aligned}$$
(3)
where \(\epsilon\in(0,\epsilon_{0}]\) is a small positive parameter and \(\epsilon_{0}\) is a fixed positive constant. Moreover, the averaged SDE associated with the standard ISDE (3) is
$$ z_{\epsilon}(t)= x_{0}+\epsilon^{2H} \int_{0}^{t} { \bigl[\bar{a} \bigl(z_{\epsilon }(s) \bigr)+\bar{I} \bigl(z_{\epsilon}(s) \bigr) \bigr]}\,ds+\epsilon^{H} \int_{0}^{t} {\bar {b} \bigl(z_{\epsilon}(s) \bigr)}\,d^{\circ}W^{H}(s), $$
(4)
where \(\bar{a}: \mathbb{R}^{n} \rightarrow\mathbb{R}^{n}\), \(\bar{b}: \mathbb{R}^{n} \rightarrow\mathbb{R}^{n}\), and \(\bar{I}: \mathbb{R}^{n} \rightarrow\mathbb{R}^{n}\) are measurable functions defined by
$$\begin{aligned}& \bar{a}(x)=\frac{1}{T} \int_{0}^{T} {a(t,x)}\,dt, \\& \bar{b}(x)=\frac{1}{T} \int_{0}^{T} {b(t,x)}\,dt, \\& \bar{I}(x)=\frac{1}{T}\sum_{j=1}^{k} {I_{j}(x)}. \end{aligned}$$
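To illustrate the averaging construction (a toy example chosen here for concreteness, not taken from the source equations), let \(n=1\), \(a(t,x)=x\sin^{2}(2\pi t/T)\), and \(b(t,x)=\sigma\) for a constant \(\sigma>0\); then

$$\begin{aligned} \bar{a}(x)=\frac{1}{T} \int_{0}^{T} x\sin^{2}(2\pi t/T)\,dt=\frac{x}{2},\qquad \bar{b}(x)=\sigma, \end{aligned}$$

so the time-periodic drift is replaced by its mean over one period.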
Assumption B
For any \(x,y \in\mathbb{R}^{n}\), there exist positive constants \(N_{1}\) and \(N_{2}\) such that
$$ \bigl\vert I_{j}(x) \bigr\vert ^{2} \leq N_{1}, \qquad\bigl\vert I_{j}(x) - I_{j}(y) \bigr\vert ^{2} \leq N_{2} \vert x-y \vert ^{2}. $$
Assumption C
For all \(t\in[0,T]\) and \(x \in\mathbb{R}^{n}\), the coefficients of Eq. (3) and Eq. (4) are bounded; that is, there exists a positive constant M such that
$$\begin{aligned} \bigl\vert a(t,x) \bigr\vert ^{2} \leq M,\qquad \bigl\vert b(t,x) \bigr\vert ^{2} \leq M, \qquad\bigl\vert \bar{a}(x) \bigr\vert ^{2} \leq M, \qquad\bigl\vert \bar{b}(x) \bigr\vert ^{2} \leq M. \end{aligned}$$
Now, the existence and uniqueness result for Eq. (2) is given by the following theorem.
Theorem 2.10
Assume that Assumptions A–C are satisfied. Then, for every initial value \(x_{0} \in\mathbb{R}^{n}\), there exists a unique solution \(x(t)\) to Eq. (2) on \([0,T]\).
Proof
The proof is a special case of that of Theorem 3.1 in Abouagwa et al. [38] and can be derived easily, so we omit it here. □
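To make the scaling in Eqs. (3) and (4) concrete, the following Python sketch discretizes both equations with a simple Euler-type scheme for a scalar toy example. All concrete choices (the coefficients a and b, the impulse map I, the parameters eps, H, T, and the step count n) are illustrative assumptions, not taken from the source, and for \(H>1/2\) the pathwise integral against \(W^{H}\) is approximated by a plain Riemann–Stieltjes sum.

```python
import numpy as np

def fbm_increments(n, T, H, rng):
    """Increments of W^H on a uniform grid, via Cholesky factorization of the fBm covariance."""
    t = np.linspace(0.0, T, n + 1)[1:]
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (tt ** (2 * H) + ss ** (2 * H) - np.abs(tt - ss) ** (2 * H))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.diff(np.concatenate(([0.0], path)))

def euler(x0, drift, diff, impulse, dW, dt, eps, H, impulse_idx):
    """Euler-type scheme for Eq. (3); pass impulse=None and impulse_idx=set() for Eq. (4)."""
    x = np.empty(len(dW) + 1)
    x[0] = x0
    for i, dw in enumerate(dW):
        t = i * dt
        x[i + 1] = x[i] + eps ** (2 * H) * drift(t, x[i]) * dt + eps ** H * diff(t, x[i]) * dw
        if impulse is not None and (i + 1) in impulse_idx:
            x[i + 1] += eps ** H * impulse(x[i + 1])          # jump of size eps^H * I_j
    return x

if __name__ == "__main__":
    T, H, eps, n = 1.0, 0.75, 0.1, 400
    dt = T / n
    rng = np.random.default_rng(1)
    dW = fbm_increments(n, T, H, rng)

    a = lambda t, x: x * np.sin(2 * np.pi * t / T) ** 2       # time-periodic drift
    b = lambda t, x: 0.5                                       # constant diffusion
    I = lambda x: 0.01 * np.tanh(x)                            # bounded, Lipschitz impulse
    k = 4                                                      # number of impulse times
    jumps = {int(j * n / (k + 1)) for j in range(1, k + 1)}    # impulse indices on the grid

    # averaged coefficients: \bar a(x) = x/2, \bar I(x) = (k/T) * 0.01 * tanh(x), \bar b = b
    a_bar = lambda t, x: 0.5 * x + (k / T) * 0.01 * np.tanh(x)
    x_eps = euler(1.0, a, b, I, dW, dt, eps, H, jumps)
    z_eps = euler(1.0, a_bar, b, None, dW, dt, eps, H, set())
    print("max |x_eps - z_eps| =", np.abs(x_eps - z_eps).max())
```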