On the averaging principle for stochastic delay differential equations with jumps
Advances in Difference Equations volume 2015, Article number: 70 (2015)
Abstract
In this paper, we investigate the averaging principle for stochastic delay differential equations (SDDEs) and SDDEs with pure jumps. By the Itô formula, the Taylor formula, and the Burkholder-Davis-Gundy inequality, we show that the solution of the averaged SDDEs converges to that of the standard SDDEs in the sense of pth moment and also in probability. Finally, two examples are provided to illustrate the theory.
1 Introduction
The averaging principle for dynamical systems is important in problems of mechanics, control, and many other areas. It was first put forward by Krylov and Bogolyubov [1], and then developed by Gikhman [2] and Volosov [3] for non-linear ordinary differential equations. With the development of the theory of stochastic analysis, many authors began to study the averaging principle for stochastic differential equations (SDEs); see, for instance, Khasminskii [4], Khasminskii and Yin [5], Golec and Ladde [6], Veretennikov [7, 8], Givon et al. [9], and Tan and Lei [10].
On the other hand, many real systems may be perturbed by abrupt pulses or extreme events, and models driven by white noise alone are not always appropriate for interpreting real data in a reasonable way. A mathematical framework beyond purely Brownian perturbations is therefore more natural for such phenomena, and it is well recognized that SDEs with jumps are quite suitable for describing such discontinuous systems. Accordingly, the averaging method for stochastic equations with jumps has received much attention, and there is some literature [11–14] concerned with the averaging method for SDEs with jumps.
Motivated by the above discussion, in this paper we study the averaging principle for a class of stochastic delay differential equations (SDDEs) with variable delays and jumps. To the best of our knowledge, the published papers on the averaging method concentrate on the case of SDEs; there are few results on the averaging principle for SDDEs with jumps. Moreover, most authors focus only on the mean-square convergence between the solution of the averaged stochastic equations and that of the standard stochastic equations; they do not consider the general pth (\(p>2\)) moment convergence. To close this gap, the main aim of this paper is to show that the solution of the averaged SDDEs converges to that of the standard SDDEs in the sense of pth moment. By using the Itô formula, the Taylor formula, and stochastic inequalities, we give the proof of the pth moment convergence results. It should be pointed out that the proofs in [11–14] do not yield the pth moment convergence results, and we need to develop several new techniques to deal with the pth moment case and with the term involving the Poisson random measure. The results obtained are a generalization and improvement of some results in [11–14].
The rest of this paper is organized as follows. In Section 2, we prove that the solution of the averaged SDDEs converges to that of the standard SDDEs in the sense of pth moment and also in probability; in Section 3 we extend these results to the pure jump case. Finally, we give two examples to illustrate the theory in Section 4.
2 Averaging principle for Brownian motion case
Let \((\Omega,\mathcal{F},P)\) be a complete probability space equipped with a filtration \((\mathcal{F}_{t})_{t\ge0}\) satisfying the usual conditions. Here \(w(t)\) is an m-dimensional Brownian motion defined on \((\Omega,\mathcal{F},P)\) and adapted to the filtration \((\mathcal{F}_{t})_{t\ge0}\). Let \(\tau>0\) and let \(C([-\tau,0];R^{n})\) denote the family of all continuous functions φ from \([-\tau,0]\) to \(R^{n}\). The space \(C([-\tau,0];R^{n})\) is equipped with the norm \(\|\varphi\|=\sup_{-\tau\le t\le0}|\varphi(t)|\). Let \(C_{\mathcal{F}_{0}}^{b}([-\tau,0];R^{n})\) denote the family of all almost surely bounded, \(\mathcal{F}_{0}\)-measurable, \(C([-\tau,0];R^{n})\)-valued random variables \(\xi=\{\xi(\theta):-\tau\le\theta\le0\}\). For \(p\ge2\), let \(\mathcal{L}_{\mathcal{F}_{0}}^{p}([-\tau,0];R^{n})\) denote the family of all \(\mathcal{F}_{0}\)-measurable, \(C([-\tau,0];R^{n})\)-valued random variables \(\varphi=\{\varphi(\theta):-\tau\le\theta\le0\}\) such that \(E\sup_{-\tau\le\theta\le0}|\varphi(\theta)|^{p}<\infty\).
Consider the following SDDEs:

\[ dx(t)=f\bigl(t,x(t),x\bigl(\delta(t)\bigr)\bigr)\,dt+g\bigl(t,x(t),x\bigl(\delta(t)\bigr)\bigr)\,dw(t),\quad t\in[0,T], \tag{1} \]
where \(f:[0,T]\times R^{n}\times R^{n} \to R^{n} \) and \(g:[0,T]\times R^{n}\times R^{n} \to R^{n\times m} \) are both Borel-measurable functions. The function \(\delta: [0,T]\to R\) is the time delay, which satisfies \(-\tau\le\delta(t)\le t\). The initial condition \(x_{0}\) is defined by

\[ x_{0}=\xi=\bigl\{\xi(\theta):-\tau\le\theta\le0\bigr\}\in\mathcal{L}_{\mathcal{F}_{0}}^{p}\bigl([-\tau,0];R^{n}\bigr), \]

that is, ξ is an \(\mathcal{F}_{0}\)-measurable \(C([-\tau,0];R^{n})\)-valued random variable with \(E\|\xi\|^{p}<\infty\).
To study the averaging method of (1), we need the following assumptions.
(H2.1) For all \(x_{1},y_{1},x_{2},y_{2}\in R^{n}\) and \(t\in[0,T]\), there exist two positive constants \(k_{1}\) and \(k_{0}\) such that

\[ \bigl|f(t,x_{1},y_{1})-f(t,x_{2},y_{2})\bigr|^{2}\vee\bigl|g(t,x_{1},y_{1})-g(t,x_{2},y_{2})\bigr|^{2}\le k_{1}\bigl(|x_{1}-x_{2}|^{2}+|y_{1}-y_{2}|^{2}\bigr) \tag{2} \]

and

\[ \bigl|f(t,0,0)\bigr|^{2}\vee\bigl|g(t,0,0)\bigr|^{2}\le k_{0}. \tag{3} \]

Clearly, condition (2) together with (3) implies the linear growth condition

\[ \bigl|f(t,x,y)\bigr|^{2}\vee\bigl|g(t,x,y)\bigr|^{2}\le 2K\bigl(1+|x|^{2}+|y|^{2}\bigr), \tag{4} \]

where \(K=\max\{k_{1},k_{0}\}\).
In fact, we have, for any \(x,y\in R^{n}\),

\[ \bigl|f(t,x,y)\bigr|^{2}\le2\bigl|f(t,x,y)-f(t,0,0)\bigr|^{2}+2\bigl|f(t,0,0)\bigr|^{2}\le2k_{1}\bigl(|x|^{2}+|y|^{2}\bigr)+2k_{0}\le2K\bigl(1+|x|^{2}+|y|^{2}\bigr). \]

Similar to the above derivation, we have

\[ \bigl|g(t,x,y)\bigr|^{2}\le2K\bigl(1+|x|^{2}+|y|^{2}\bigr), \]

which implies condition (4).
Similar to the proof of [15], we have the following existence result.
Theorem 2.1
Under condition (H2.1), (1) has a unique solution in \(L^{p}\), \(p\ge2\). Moreover, we have

\[ E\Bigl(\sup_{0\le t\le T}\bigl|x(t)\bigr|^{p}\Bigr)<\infty. \]
The proof of Theorem 2.1 is given in the Appendix.
Next, let us consider the standard form of SDDEs (1),

\[ dx_{\varepsilon}(t)=\varepsilon f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\,dt+\sqrt{\varepsilon}\,g\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\,dw(t),\quad t\in[0,T], \tag{5} \]

with initial condition \(x_{\varepsilon}(t)=\xi(t)\) for \(t\in[-\tau,0]\),
where the coefficients f, g satisfy the same conditions as in (2)-(4), and \(\varepsilon\in(0,\varepsilon_{0}]\) is a small positive parameter, with \(\varepsilon_{0}\) a fixed number.
Let \(\bar{f}(x,y): R^{n}\times R^{n} \to R^{n} \) and \(\bar{g}(x,y): R^{n}\times R^{n} \to R^{n\times m} \) be measurable functions, satisfying condition (H2.1). We also assume that the following conditions are satisfied.
(H2.2) For any \(x,y\in R^{n}\) and \(T_{1}\in[0,T]\), there exist two positive bounded functions \(\varphi_{i}(T_{1})\), \(i=1,2\), such that

\[ \frac{1}{T_{1}}\int_{0}^{T_{1}}\bigl|a(s,x,y)-\bar{a}(x,y)\bigr|^{2}\,ds\le\varphi_{i}(T_{1})\bigl(1+|x|^{2}+|y|^{2}\bigr), \]

where \(a=f\) (with \(i=1\)) or \(a=g\) (with \(i=2\)) and \(\lim_{T_{1}\to\infty}\varphi_{i}(T_{1})=0\).
Then we have the averaging form of the corresponding standard SDDEs

\[ dy_{\varepsilon}(t)=\varepsilon\bar{f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\,dt+\sqrt{\varepsilon}\,\bar{g}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\,dw(t),\quad t\in[0,T], \tag{6} \]

with the same initial condition \(y_{\varepsilon}(t)=\xi(t)\) for \(t\in[-\tau,0]\).
Obviously, under condition (H2.1), the standard SDDEs (5) and the averaged SDDEs (6) each have a unique solution on \(t\in[0,T]\).
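To make the averaging effect concrete before stating the main theorem, the following numerical sketch discretizes (5) and (6) by the Euler-Maruyama scheme, driving both with the same Brownian path and tracking the pathwise gap. The coefficients \(f(t,x,y)=y\sin t\) (whose time average \(\bar f=0\)), \(g(t,x,y)=0.5x\) (time-independent, so \(\bar g=g\)), the delay \(\tau=1\), and the constant initial segment \(\xi\equiv1\) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate(eps, T=10.0, tau=1.0, dt=0.001, seed=0):
    """Euler-Maruyama for the standard SDDE (5) and its averaged form (6),
    driven by the SAME Brownian increments.  Illustrative coefficients:
    f(t, x, y) = y * sin(t)  with time average f_bar = 0,
    g(t, x, y) = 0.5 * x     (time-independent, hence g_bar = g)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    lag = int(tau / dt)                  # delay delta(t) = t - tau
    x = np.ones(n + lag)                 # standard solution, xi(t) = 1
    y = np.ones(n + lag)                 # averaged solution, same initial data
    for i in range(lag, n + lag - 1):
        t = (i - lag) * dt
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i+1] = x[i] + eps*np.sin(t)*x[i-lag]*dt + np.sqrt(eps)*0.5*x[i]*dw
        y[i+1] = y[i] + np.sqrt(eps)*0.5*y[i]*dw       # f_bar = 0 drops the drift
    return np.max(np.abs(x - y))         # sup_t |x_eps(t) - y_eps(t)|

err = simulate(eps=0.01)
```

For small ε the gap is small on the whole time window, in line with the pth moment bound of Theorem 2.2 below.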
Now, we present and prove our main results, which reveal the relationship between \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).
Theorem 2.2
Let the conditions (H2.1) and (H2.2) hold. For any given arbitrarily small number \(\delta_{1}>0\), there exist \(L>0\), \(\varepsilon_{1}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that

\[ E\Bigl(\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\Bigr)\le\delta_{1} \]
for all \(\varepsilon\in(0,\varepsilon_{1}]\).
Proof
For simplicity, denote the difference \(e_{\varepsilon}(t)=x_{\varepsilon}(t)-y_{\varepsilon}(t)\). From (5) and (6), we have
By the Itô formula (see [15]), we have, for \(p\ge2\),
Using the basic inequality \(2ab\le a^{2}+b^{2}\) and taking expectation on both sides of (8), it follows that, for any \(u\in[0,T]\),
By the Young inequality, it follows that for any \(\epsilon_{1}>0\)
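The Young inequality is invoked here in the parametrized form that is standard in pth moment estimates; a sketch of the two versions matching the parameters \(\epsilon_{1},\epsilon_{2},\epsilon_{3}\) below (the exact constants in the omitted displays may differ):

```latex
% Young's inequality with parameter \epsilon > 0: for a, b \ge 0 and
% conjugate exponents r, s > 1 with 1/r + 1/s = 1,
a b \le \frac{\epsilon\, a^{r}}{r} + \frac{b^{s}}{s\,\epsilon^{s/r}} .
% The special case used repeatedly in the moment estimates: for p \ge 2,
a^{p-1} b \le \frac{(p-1)\epsilon}{p}\, a^{p}
            + \frac{1}{p\,\epsilon^{p-1}}\, b^{p},
\qquad \epsilon > 0 .
```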
where the second term of (10) can be written as
For any \(\epsilon_{2}>0\), we derive that
By (H2.1), we have
Letting \(\epsilon_{2}=(\sqrt{2k_{1}})^{p-1}\) yields
Inserting (12) into (10), we have
By setting \(\epsilon_{1}=1+\sqrt{2k_{1}}\), we get
From condition (H2.2), we then see that
Next, we will estimate \(I_{2}\) of (9). Using the Young inequality again, it follows that for any \(\epsilon_{3}>0\)
Similar to the computation of (12), we can obtain
Letting \(\epsilon_{3}=(1+\sqrt{2k_{1}})^{2}\), we get
From condition (H2.2), it follows that
On the other hand, by the Burkholder-Davis-Gundy inequality, we have
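For reference, the Burkholder-Davis-Gundy inequality is applied in the following form (see [15]): for every \(p\ge2\) there exists a constant \(C_{p}>0\) such that, for any predictable integrand Φ,

```latex
E\Bigl(\sup_{0\le t\le u}\Bigl|\int_{0}^{t}\Phi(s)\,dw(s)\Bigr|^{p}\Bigr)
\le C_{p}\, E\Bigl(\int_{0}^{u}\bigl|\Phi(s)\bigr|^{2}\,ds\Bigr)^{p/2} .
```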
Similar to the estimation of \(I_{2}\), we derive that
Hence, combining (14), (17), and (18),
where
By Theorem 2.1, we have the following fact: for each \(t\ge0\), if \(E\|\xi\|^{p}<\infty\), then \(E|y_{\varepsilon }(t)|^{p}<\infty\). Hence condition (H2.2) implies that
Finally, by the Gronwall inequality, we have
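The Gronwall inequality is used here (and again in the proof of Theorem 3.2) in its integral form:

```latex
% If u is nonnegative and bounded on [0, T] and, for constants a, b \ge 0,
u(t) \le a + b \int_{0}^{t} u(s)\,ds \qquad (0 \le t \le T),
% then
u(t) \le a\, e^{b t} \qquad (0 \le t \le T).
```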
Choose \(\beta\in(0,1)\) and \(L>0\) such that for every \(t\in [0,L\varepsilon^{-\beta}]\subseteq[0,T]\),
where \(c=c_{2}(1+E\|\xi\|^{p}+2C)e^{c_{1}L\varepsilon^{1-\beta}}\). Consequently, given any number \(\delta_{1}>0\), we can choose \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) such that for each \(\varepsilon\in(0,\varepsilon_{1}]\) and for \(t\in[0,L\varepsilon^{-\beta}]\),
The proof is completed. □
With Theorem 2.2, it is easy to show the convergence in probability between \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).
Corollary 2.1
Let the conditions (H2.1) and (H2.2) hold. For any given arbitrarily small number \(\delta_{2}>0\), there exists \(\varepsilon_{2}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in(0,\varepsilon_{2}]\), we have
where L and β are defined by Theorem 2.2.
Proof
By Theorem 2.2 and the Chebyshev inequality, for any given number \(\delta_{2}>0\) we obtain

\[ P\Bigl(\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|>\delta_{2}\Bigr)\le\frac{1}{\delta_{2}^{p}}E\Bigl(\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\Bigr)\le\frac{\delta_{1}}{\delta_{2}^{p}}. \]

Let \(\varepsilon\to0\), and the required result follows. □
3 Averaging principle for pure jump case
In this section we turn to the counterpart for SDDEs with jumps. We first need to introduce some further notation.
Let \(\tau>0\), and let \(D([-\tau,0];R^{n})\) denote the family of all right-continuous functions φ from \([-\tau,0]\) to \(R^{n}\) with left-hand limits. The space \(D([-\tau,0];R^{n})\) is equipped with the norm \(\|\varphi\|=\sup_{-\tau\le t\le0}|\varphi(t)|\). Let \(D_{\mathcal{F}_{0}}^{b}([-\tau,0];R^{n})\) denote the family of all almost surely bounded, \(\mathcal{F}_{0}\)-measurable, \(D([-\tau,0];R^{n})\)-valued random variables \(\xi=\{\xi(\theta):-\tau\le\theta\le0\}\). For \(p\ge2\), let \(\mathcal{L}_{\mathcal{F}_{0}}^{p}([-\tau,0];R^{n})\) denote the family of all \(\mathcal{F}_{0}\)-measurable, \(D([-\tau,0];R^{n})\)-valued random variables \(\varphi=\{\varphi(\theta):-\tau\le\theta\le0\}\) such that \(E\sup_{-\tau\le\theta\le0}|\varphi(\theta)|^{p}<\infty\).
Let \((Z,\mathcal{B}(Z))\) be a measurable space and \(\pi(dv)\) a σ-finite measure on it. Let \(\{\bar{p}=\bar{p}(t), t\ge0\}\) be a stationary \(\mathcal{F}_{t}\)-Poisson point process on Z with characteristic measure π. Then, for \(A\in\mathcal{B}(Z-\{0\})\) with \(0\notin\bar{A}\) (the closure of A), the Poisson counting measure N is defined by

\[ N\bigl((0,t]\times A\bigr):=\#\bigl\{s\in(0,t]:\bar{p}(s)\in A\bigr\}. \]

By [16], there exists a σ-finite measure π such that

\[ E\bigl[N(t,A)\bigr]=t\pi(A), \]

where \(N(t,A):=N((0,t]\times A)\). This measure π is called the Lévy measure. Then the compensated Poisson random measure \(\tilde{N}\) is defined by

\[ \tilde{N}(dt,dv):=N(dt,dv)-\pi(dv)\,dt. \]
We refer to Ikeda and Watanabe [16] and Applebaum [17] for details on Poisson point processes and Lévy processes.
In this section, we consider the SDDEs with pure jumps:

\[ dx(t)=f\bigl(t,x(t),x\bigl(\delta(t)\bigr)\bigr)\,dt+\int_{Z}h\bigl(t,x(t),x\bigl(\delta(t)\bigr),v\bigr)\tilde{N}(dt,dv),\quad t\in[0,T], \tag{19} \]

where \(f:[0,T]\times R^{n}\times R^{n} \to R^{n} \) and \(h:[0,T]\times R^{n}\times R^{n} \times Z \to R^{n} \) are both Borel-measurable functions. The initial condition \(x_{0}\) is defined by \(x_{0}=\xi=\{\xi(\theta):-\tau\le\theta\le0\}\in\mathcal{L}_{\mathcal{F}_{0}}^{p}([-\tau,0];R^{n})\).
To guarantee the existence and uniqueness of the solution, we introduce the following conditions on the jump term.
(H3.1) For all \(x_{1},y_{1},x_{2},y_{2}\in R^{n}\) and \(v\in Z\), there exist two positive constants \(k_{2}\) and \(k_{3}\) such that
(H3.2) For all \(x_{1},x_{2},y_{1},y_{2}\in R^{n}\), \(v\in Z\) and \(p>2\), there exists \(L>0\) such that
with \(\int_{Z}|v|^{p}\pi(dv)<\infty\).
Theorem 3.1
Under conditions (H2.1), (H3.1), and (H3.2), (19) has a unique solution in \(L^{p}\), \(p\ge2\). Moreover, we have

\[ E\Bigl(\sup_{0\le t\le T}\bigl|x(t)\bigr|^{p}\Bigr)<\infty. \]
Proof
Similar to the proof of [18], we find that (19) has a unique solution in \(L^{p}\). □
Let us consider the standard form of the SDDEs with pure jumps (19),

\[ dx_{\varepsilon}(t)=\varepsilon f\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\,dt+\sqrt{\varepsilon}\int_{Z}h\bigl(t,x_{\varepsilon}(t),x_{\varepsilon}\bigl(\delta(t)\bigr),v\bigr)\tilde{N}(dt,dv), \tag{23} \]

where the coefficients f, h satisfy the same conditions as in (H2.1), (H3.1), and (H3.2), and \(\varepsilon\in(0,\varepsilon_{0}]\) is a small positive parameter, with \(\varepsilon_{0}\) a fixed number.
Let \(\bar{f}(x,y): R^{n}\times R^{n} \to R^{n} \) and \(\bar{h}(x,y,v):R^{n}\times R^{n}\times Z \to R^{n}\) be measurable functions, satisfying conditions (H2.1), (H3.1), and (H3.2). We also assume that the following inequalities are satisfied.
(H3.3) For any \(x,y\in R^{n}\) and \(v\in Z\), there exists a positive bounded function \(\varphi_{3}(T_{1})\), such that
where \(\lim_{T_{1}\to\infty}\varphi_{3}(T_{1})=0\).
Then the averaging form of (19) is given by

\[ dy_{\varepsilon}(t)=\varepsilon\bar{f}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr)\bigr)\,dt+\sqrt{\varepsilon}\int_{Z}\bar{h}\bigl(y_{\varepsilon}(t),y_{\varepsilon}\bigl(\delta(t)\bigr),v\bigr)\tilde{N}(dt,dv), \tag{24} \]

with the same initial condition as (23).
Obviously, under conditions (H2.1), (H3.1), and (H3.2), the standard SDDEs with pure jumps (23) and the averaged SDDEs with pure jumps (24) have unique solutions on \(t\in[0,T]\), respectively.
Now, we present and prove our main results.
Theorem 3.2
Let the conditions (H2.1) and (H3.1)-(H3.3) hold. For any given arbitrarily small number \(\delta_{3}>0\) and \(p\ge2\), there exist \(L>0\), \(\varepsilon_{3}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that

\[ E\Bigl(\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\Bigr)\le\delta_{3} \tag{25} \]
for all \(\varepsilon\in(0,\varepsilon_{3}]\).
Proof
For simplicity, denote the difference \(e_{\varepsilon}(t)=x_{\varepsilon}(t)-y_{\varepsilon}(t)\). From (23) and (24), we have
By the Itô formula (see [17, 19]), we obtain
Using the basic inequality \(2ab\le a^{2}+b^{2}\) and taking expectations on both sides of (27), it follows that
By the Burkholder-Davis-Gundy inequality, there exists a positive constant \(c_{p}\) such that
Next, the Young inequality implies that
where \(\epsilon_{4}>0\). By setting \(\epsilon_{4}=c_{p}^{-1}\), we get
Similar to the estimate of \(I_{2}\), we have
where \(c_{3}=c_{p}^{2}p^{2}(1+\sqrt{2L})^{2}\). Condition (H3.3) implies that
Finally, let us estimate \(J_{3}\). Since \(\tilde{N}(dt,du)\) is a martingale measure and \(N(dt,du)=\tilde{N}(dt,du)+\pi(du)\,dt\), we have
Note that \(J_{3}\) has the form
where \(f(x)=|x|^{p}\) and \(\tilde{h}_{\varepsilon}^{v}(t)=\sqrt{\varepsilon }[h(t,x_{\varepsilon}(t),x_{\varepsilon} (\delta(t)),v)-\bar{h}(y_{\varepsilon}(t),y_{\varepsilon}(\delta(t)),v)]\). By the Taylor formula, there exists a positive constant \(M(p)\) such that
Applying the basic inequality \(|a+b|^{p-2}\le 2^{p-3}(|a|^{p-2}+|b|^{p-2})\), we have
Similar to \(J_{2}\), we derive that
where \(c_{4}=2M(p)2^{p-3}(1+\sqrt{2L})^{p}\). Combining (29) and (30), we have
where
By Theorem 3.1, we have the following fact: for each \(t\ge0\), if \(E\|\xi\|^{p}<\infty\), then \(E|y_{\varepsilon }(t)|^{p}<\infty\). Hence, condition (H3.3) implies that
By the Gronwall inequality, we obtain
Consequently, given any number \(\delta_{3}>0\), we can choose \(\varepsilon_{3}\in(0,\varepsilon_{0}]\) such that for each \(\varepsilon\in(0,\varepsilon_{3}]\) and for \(t\in[0,L\varepsilon^{-\beta}]\),
The proof is completed. □
Similarly, we have the following result on the convergence in probability between \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).
Corollary 3.1
Let the conditions (H2.1) and (H3.1)-(H3.3) hold. For any given arbitrarily small number \(\delta_{4}>0\), there exists \(\varepsilon_{4}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in(0,\varepsilon_{4}]\), we have
where L and β are defined by Theorem 3.2.
Remark 3.1
When the time delay \(\delta(t)=t\), (19) reduces to SDEs with jumps, which were studied in [11–14]. In particular, if \(p=2\) in (25), then we obtain the mean-square convergence of the standard solution of (23) to the averaged solution of (24). So the corresponding results in [11–14] are generalized and improved.
Remark 3.2
In [10], Tan and Lei studied the averaging method for SDDEs under non-Lipschitz conditions. In particular, the Lipschitz condition is a special case of the non-Lipschitz conditions studied by many scholars [20–23]. Similarly, by adapting the proof of Theorem 3.2, we can prove that the standard solution of (23) converges to the averaged solution of (24) in the pth moment under non-Lipschitz conditions. In other words, we can obtain a more general result on the averaging principle for SDDEs with jumps than Theorem 3.2.
4 Examples
In this section, we construct two examples to demonstrate the averaging principle results.
Example 4.1
Let \(\tilde{N}(dt,dv)\) be a compensated Poisson random measure whose intensity is given by \(\pi(dv)\,dt=\lambda f(v)\,dv\,dt\), where \(\lambda>0\) is a constant and

\[ f(v)=\frac{1}{\sqrt{2\pi}\,\sigma v}\exp\Bigl(-\frac{(\ln v-\mu)^{2}}{2\sigma^{2}}\Bigr),\quad v>0,\ \mu\in R,\ \sigma>0, \]
is the density function of a lognormal random variable. Consider the following SDEs with pure jumps:
with initial data \(x_{\varepsilon}(0)=x_{0}\), where \(\delta(t)=t\). Here \(f(t,x_{\varepsilon}(t))=x_{\varepsilon}(t)\sin t\) and \(h(t,x_{\varepsilon}(t))=-x_{\varepsilon}(t)\log x_{\varepsilon}(t)\). Let
and
Hence, we have the corresponding averaged SDEs with pure jumps
Now, we impose the non-Lipschitz condition on (32).
(H4.1) For all \(x,y\in R^{n}\), \(v\in Z\), and \(p\ge2\),
where \(\rho(\cdot)\) is a concave nondecreasing function from \(R_{+}\) to \(R_{+}\) such that \(\rho(0)=0\), \(\rho(u)>0\) for \(u>0\) and \(\int_{0}^{1} \frac{du}{\rho (u)}=\infty\).
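Two standard choices of ρ satisfying these requirements, familiar from the non-Lipschitz literature [20–23], are sketched below; the second is genuinely non-Lipschitz near 0:

```latex
% Lipschitz case:
\rho_{1}(u) = K u, \qquad K > 0 ;
% non-Lipschitz example, for a fixed small \delta \in (0, 1/e),
% extended linearly beyond \delta so that it stays concave and nondecreasing:
\rho_{2}(u) =
  \begin{cases}
    u \log(1/u), & 0 \le u \le \delta, \\
    \delta \log(1/\delta) + \rho_{2}'(\delta^{-})\,(u - \delta), & u > \delta .
  \end{cases}
% Both vanish at 0, and the Osgood-type condition holds for \rho_{2}:
\int_{r}^{\delta} \frac{du}{u \log(1/u)}
  = \log\log(1/r) - \log\log(1/\delta) \longrightarrow \infty
  \quad (r \to 0^{+}).
```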
Let us return to (32). It is easy to see that \(h(t,\cdot)\) is a nondecreasing, positive, concave function on \([0,\infty)\) with \(h(t,0)=0\). Moreover, by a straightforward computation, we have
Hence, the coefficients of (32) and (33) satisfy our condition (H4.1). Similar to the proof of [20–23], we find that (32) and (33) have unique solutions in \(L^{p}\), \(p\ge2\), respectively.
Similar to the proof of Theorem 3.2, we find that the standard solution of (32) converges to the averaged solution of (33) in the sense of the pth moment.
Corollary 4.1
Let the conditions (H2.2), (H3.3), and (H4.1) hold and \(\delta(t)=t\). For any given arbitrarily small number \(\delta_{5}>0\) and \(p\ge2\), there exist \(L>0\), \(\varepsilon_{5}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that

\[ E\Bigl(\sup_{t\in[0,L\varepsilon^{-\beta}]}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\Bigr)\le\delta_{5} \]
for all \(\varepsilon\in(0,\varepsilon_{5}]\).
The proof of Corollary 4.1 is given in the Appendix.
Remark 4.1
Similarly to Corollary 4.1, we can show the convergence in probability of the standard solution of (32) and the averaged solution of (33).
Example 4.2
Let \(w_{t}\) be a one-dimensional Brownian motion, and let the compensated Poisson random measure \(\tilde{N}(dt,dv)\) be defined as in Example 4.1; of course, \(\tilde{N}(dt,dv)\) and \(w_{t}\) are assumed to be independent. Consider the following linear SDDEs with jumps:
with initial data \(x_{\varepsilon}(t)=\xi(t)\) for \(t\in[-\tau,0]\), where τ is a fixed delay, \(\delta(t)=t-\tau\), \(T=N\tau\) with \(N\in Z^{+}\), and \(a,b,c\in R\). Here
and
Let
Hence, we have the corresponding averaged SDDEs with jumps
When \(t\in[0,\tau]\), the explicit solution of SDDEs with jumps is given by
where
When \(t\in[\tau,2\tau]\), the explicit solution of SDDEs with jumps is given by
where
Repeating this procedure over the intervals \([2\tau,3\tau]\), \([3\tau,4\tau]\), etc., we can obtain the explicit solution \(y_{\varepsilon}(t)\) on the entire interval \([0,T]\). On the other hand, it is easy to verify that the conditions of Theorems 2.2 and 3.2 are satisfied, so the solution of the averaged SDDEs with jumps (36) converges to that of the standard SDDEs with jumps (37) in the sense of the pth moment and in probability.
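A linear SDDE with jumps of this flavor can also be explored numerically. The sketch below is illustrative only: since the displays of this example are not reproduced here, the coefficients are assumed, namely standard drift \(f(t,x,y)=ax\sin^{2}t+by\) with time average \(\bar f(x,y)=(a/2)x+by\), diffusion coefficient \(cx\), jump coefficient \(cxv\), and lognormal jump sizes with intensity λ as in Example 4.1.

```python
import numpy as np

def simulate_jump_sdde(eps, a=-1.0, b=0.5, c=0.3, lam=2.0,
                       T=5.0, tau=0.5, dt=0.001, seed=1):
    """Euler scheme for an ASSUMED linear SDDE with jumps and its averaged
    counterpart, driven by the same Brownian and Poisson increments.
    Standard drift f(t,x,y) = a*x*sin(t)**2 + b*y; averaged drift
    f_bar(x,y) = (a/2)*x + b*y (time average of sin^2 is 1/2); the diffusion
    c*x and jump coefficient x*v are time-independent, hence unchanged."""
    rng = np.random.default_rng(seed)
    n, lag = int(T / dt), int(tau / dt)
    x = np.ones(n + lag)           # standard solution, xi(t) = 1 on [-tau, 0]
    y = np.ones(n + lag)           # averaged solution, same initial data
    mean_v = np.exp(0.5)           # E[v] for a standard lognormal jump size
    for i in range(lag, n + lag - 1):
        t = (i - lag) * dt
        dw = rng.normal(0.0, np.sqrt(dt))
        # compensated compound-Poisson increment: int_Z v Ntilde(dt, dv)
        k = rng.poisson(lam * dt)
        dN = rng.lognormal(0.0, 1.0, size=k).sum() - lam * mean_v * dt
        x[i+1] = x[i] + eps*(a*np.sin(t)**2*x[i] + b*x[i-lag])*dt \
                 + np.sqrt(eps)*c*x[i]*dw + np.sqrt(eps)*c*x[i]*dN
        y[i+1] = y[i] + eps*(0.5*a*y[i] + b*y[i-lag])*dt \
                 + np.sqrt(eps)*c*y[i]*dw + np.sqrt(eps)*c*y[i]*dN
    return np.max(np.abs(x - y))

err_small, err_big = simulate_jump_sdde(0.001), simulate_jump_sdde(0.1)
```

Shrinking ε shrinks the gap between the standard and averaged paths, in agreement with Theorem 3.2.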
5 Conclusion
In this paper, we study the averaging method for SDDEs and SDDEs with pure jumps. By applying the Itô formula, the Taylor formula, and the BDG inequality, we prove that the solution of the averaged SDDEs converges to that of the standard SDDEs in the sense of the pth moment and also in probability. Finally, two examples are provided to demonstrate the proposed results.
References
Krylov, NM, Bogolyubov, NN: Les proprietes ergodiques des suites des probabilites en chaine. C. R. Math. Acad. Sci. 204, 1454-1546 (1937)
Gikhman, II: On a theorem of N.N. Bogoliubov. Ukr. Mat. Zh. 4, 215-219 (1952)
Volosov, VM: Averaging in systems of ordinary differential equations. Russ. Math. Surv. 17, 1-126 (1962)
Khasminskii, RZ: On the principle of averaging the Itô stochastic differential equations. Kibernetika 4, 260-279 (1968)
Khasminskii, RZ, Yin, G: On averaging principles: an asymptotic expansion approach. SIAM J. Math. Anal. 35, 1534-1560 (2004)
Golec, J, Ladde, G: Averaging principle and systems of singularly perturbed stochastic differential equations. J. Math. Phys. 31, 1116-1123 (1990)
Veretennikov, AY: On the averaging principle for systems of stochastic differential equations. Math. USSR Sb. 69, 271-284 (1991)
Veretennikov, AY: On large deviations in the averaging principle for SDEs with full dependence. Ann. Probab. 27, 284-296 (1999)
Givon, D, Kevrekidis, IG, Kupferman, R: Strong convergence of projective integration schemes for singular perturbed stochastic differential systems. Commun. Math. Sci. 4, 707-729 (2006)
Tan, L, Lei, D: The averaging method for stochastic differential delay equations under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 38 (2013)
Stoyanov, IM, Bainov, DD: The averaging method for a class of stochastic differential equations. Ukr. Math. J. 26, 186-194 (1974)
Kolomiets, VG, Melnikov, AI: Averaging of stochastic systems of integral-differential equations with Poisson noise. Ukr. Math. J. 43, 242-246 (1991)
Givon, D: Strong convergence rate for two-time-scale jump-diffusion stochastic differential systems. SIAM J. Multiscale Model. Simul. 6, 577-594 (2007)
Xu, Y, Duan, JQ, Xu, W: An averaging principle for stochastic dynamical systems with Lévy noise. Physica D 240, 1395-1401 (2011)
Mao, X: Stochastic Differential Equations and Their Applications. Ellis Horwood, Chichester (1997)
Ikeda, N, Watanabe, S: Stochastic Differential Equations and Diffusion Processes. North-Holland, Amsterdam (1989)
Applebaum, D: Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge (2009)
Yuan, C, Bao, J: On the exponential stability of switching-diffusion processes with jumps. Q. Appl. Math. 2, 311-329 (2013)
Peszat, S, Zabczyk, J: Stochastic Partial Differential Equations with Lévy Noise: An Evolution Equation Approach. Cambridge University Press, Cambridge (2007)
Taniguchi, T: Successive approximations to solutions of stochastic differential equations. J. Differ. Equ. 96, 152-169 (1992)
Mao, X: Adapted solutions of backward stochastic differential equations with non-Lipschitz coefficients. Stoch. Process. Appl. 58, 281-292 (1995)
Vinodkumar, A: Existence, uniqueness and stability results of impulsive stochastic semilinear functional differential equations with infinite delays. J. Nonlinear Sci. Appl. 4, 236-246 (2011)
Negrea, R, Preda, C: Fixed point technique for a class of backward stochastic differential equations. J. Nonlinear Sci. Appl. 6, 41-50 (2013)
Acknowledgements
This paper was completed while the first author was visiting the Department of Mathematics and Statistics at the University of Strathclyde, whose hospitality is highly appreciated. The authors would like to thank the Royal Society of Edinburgh, the National Natural Science Foundation of China under NSFC grant (No. 11401261), Qing Lan Project of Jiangsu Province (2012), the NSF of Higher Education Institutions of Jiangsu Province (13KJB110005), the grant of Jiangsu Second Normal University (JSNU-ZY-02), and the Jiangsu Government Overseas Study Scholarship for their financial support.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the manuscript and typed, read and approved the final manuscript.
Appendix
Proof of Theorem 2.1
Let \(x^{0}(t)=\xi(0)\) for \(t\in[{0},T]\), and define the sequence of successive approximations to (1) by

\[ x^{n}(t)=\xi(0)+\int_{0}^{t}f\bigl(s,x^{n-1}(s),x^{n-1}\bigl(\delta(s)\bigr)\bigr)\,ds+\int_{0}^{t}g\bigl(s,x^{n-1}(s),x^{n-1}\bigl(\delta(s)\bigr)\bigr)\,dw(s),\quad n\ge1, \tag{40} \]

with \(x^{n}(t)=\xi(t)\) for \(t\in[-\tau,0]\).
The proof will be split into the following steps.
Step 1. Let us show that \(\{x^{n}(t)\}_{n\ge 1}\) is bounded. Let \(f_{t}^{n}=f(t,x^{n}(t),x^{n}(\delta(t)))\), \(g_{t}^{n}=g(t,x^{n}(t), x^{n}(\delta(t)))\). From (40), by the inequality \(|a+b+c|^{p}\le3^{p-1}[|a|^{p}+|b|^{p}+|c|^{p}]\), we have
Using the Hölder inequality and the BDG inequality, we get
By condition (H2.1), we obtain
For any \(r\ge1\), we have
From the Gronwall inequality, we derive that
Step 2. Let us show that \(\{x^{n}(t)\}_{n\ge 1}\) is Cauchy. For \(n\ge1\) and \(t\in[{0},T]\), we derive that, from (40),
where
By the Hölder inequality and the BDG inequality, we have
Setting \(\varphi_{n}(t)=E\sup_{{0} \le s\le t}|x^{n+1}(s)-x^{n}(s)|^{p}\), we have
By (41) and \(\varphi_{0}(t)\le c_{4}KE\|\xi\|^{p}=\tilde{c}_{4}\), we obtain
Hence (42) implies that, for each t, \(\{x^{n}(t)\}_{n=1,2,\ldots}\) is a Cauchy sequence on \([0,T]\).
Step 3. Uniqueness. Let \(x(t)\) and \(y(t)\) be two solutions of (1). Then, for \(t\in[{0},T]\), we have
Therefore, the Gronwall inequality implies
The above expression means that \(x(t)=y(t)\) for all \(t\in[0,T]\).
Existence. We derive from (42) that \(\{x^{n}(t)\}_{n=1,2,\ldots}\) is a Cauchy sequence. Hence there exists \(x(t)\) such that \(x^{n}(t) \to x(t)\) as \(n\to\infty\). For all \(t\in[{0},T]\), taking limits on both sides of (40) as \(n\to\infty\), we can then show that \(x(t)\) is a solution of (1). So the proof of Theorem 2.1 is complete. □
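The successive-approximation argument of Steps 1-3 can be visualized on a deterministic delay equation, with the noise dropped for brevity; the coefficient \(f(t,x,y)=-x+0.5y\), the delay \(\tau=0.5\), and the constant initial segment \(\xi\equiv1\) are illustrative assumptions.

```python
import numpy as np

def picard_iterates(n_iter=6, T=2.0, tau=0.5, dt=0.01):
    """Successive approximations x^{n}(t) = xi(0) + int_0^t f(s, x^{n-1}(s),
    x^{n-1}(delta(s))) ds for a deterministic delay equation with the
    illustrative Lipschitz coefficient f(t, x, y) = -x + 0.5*y,
    delta(t) = t - tau and constant initial segment xi = 1."""
    m, lag = int(T / dt), int(tau / dt)
    xs = [np.ones(m + lag)]                  # x^0(t) = xi(0) = 1
    for _ in range(n_iter):
        prev, nxt = xs[-1], np.ones(m + lag)
        acc = 0.0                            # running value of the integral
        for i in range(lag, m + lag - 1):
            acc += (-prev[i] + 0.5 * prev[i - lag]) * dt   # f(s, x, y) ds
            nxt[i + 1] = 1.0 + acc
        xs.append(nxt)
    # sup-norm gaps between consecutive iterates: the Cauchy property of Step 2
    return [np.max(np.abs(xs[k + 1] - xs[k])) for k in range(n_iter)]

gaps = picard_iterates()
```

The gaps decay, mirroring the factorial decay of \(\varphi_{n}\) in (42).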
Proof of Corollary 4.1
The key technique to prove this corollary is already presented in the proofs of Theorems 2.2 and 3.2, so we here only highlight some parts which need to be modified. We use the same notations as in the proofs of Theorems 2.2 and 3.2. It is easy to see that inequality (11) should become
In fact, since the function \(\rho(\cdot)\) is concave and increasing, there must exist a positive constant \(k\) such that \(\rho(u)\le k(1+u)\) for all \(u\ge0\).
Hence,
Letting \(\epsilon_{2}=k^{p-1}\), we get
Inserting (44) into (43), it follows that
By setting \(\epsilon_{1}=1+k \), we have
Similarly, \(J_{2}\) and \(J_{3}\) can be estimated as \(I_{1}\). Finally, all of the required assertions can be obtained in the same way as in the proofs of Theorems 2.2 and 3.2. The proof is therefore complete. □
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Mao, W., You, S., Wu, X. et al. On the averaging principle for stochastic delay differential equations with jumps. Adv Differ Equ 2015, 70 (2015). https://doi.org/10.1186/s13662-015-0411-0
Keywords
- averaging principle
- stochastic delay differential equations
- Poisson random measure
- \(L^{p}\) convergence