Throughout this paper, let \((\Omega,\mathcal{F},P)\) be a complete probability space equipped with a filtration \((\mathcal{F}_{t})_{t\ge0}\) satisfying the usual conditions, and let \(w(t)\) be an m-dimensional Brownian motion defined on \((\Omega,\mathcal{F},P)\) and adapted to \((\mathcal{F}_{t})_{t\ge0}\). Let \(\tau>0\), and let \(D([-\tau,0];R^{n})\) denote the family of all right-continuous functions \(\varphi:[-\tau,0]\to R^{n}\) with left-hand limits, equipped with the norm \(\|\varphi\|=\sup_{-\tau\le t\le0}|\varphi(t)|\). Let \(D_{\mathcal{F}_{0}}^{b}([-\tau,0];R^{n})\) denote the family of all almost surely bounded, \(\mathcal{F}_{0}\)-measurable, \(D([-\tau,0];R^{n})\)-valued random variables \(\xi=\{\xi(\theta):-\tau\le\theta\le0\}\). For any \(p\ge2\), let \(\mathcal{L}_{\mathcal{F}_{0}}^{p}([-\tau ,0];R^{n})\) denote the family of all \(\mathcal{F}_{0}\)-measurable, \(D([-\tau,0];R^{n})\)-valued random variables \(\varphi=\{\varphi(\theta):{-\tau\le\theta\le0}\}\) such that \(E\sup_{-\tau\le\theta\le0}|\varphi(\theta)|^{p}<\infty\).
Let \(\{\bar{p}=\bar{p}(t), t\ge0\}\) be a stationary \(\mathcal {F}_{t}\)-adapted and \(R^{n}\)-valued Poisson point process. Then, for every \(A\in\mathcal{B}(R^{n}\setminus\{0\})\) whose closure does not contain 0, we define the Poisson counting measure N associated with p̄ by
$$N\bigl((0,t]\times A\bigr):=\#\bigl\{ {0}< s\le t,{\bar{p}}(s)\in A\bigr\} =\sum _{0< s\le t}I_{A}\bigl({\bar{p}}(s)\bigr), $$
where # denotes the cardinality of the set \(\{\cdot\}\). For simplicity, we denote \(N(t,A):=N(({0},t]\times A)\). It is well known that there exists a σ-finite measure π such that
$$E\bigl[N(t,A)\bigr]=\pi(A)t, \qquad P\bigl(N(t,A)=n\bigr)=\frac{\exp(-t\pi(A))(\pi(A)t)^{n}}{n!}. $$
This measure π is called the Lévy measure. Moreover, by the Doob-Meyer decomposition theorem, there exist a unique \(\{\mathcal{F}_{t}\}\)-adapted martingale \(\tilde{N}(t,A)\) and a unique \(\{\mathcal{F}_{t}\}\)-adapted natural increasing process \(\hat{N}(t,A)\) such that
$$N(t,A)=\tilde{N}(t,A)+\hat{N}(t,A),\quad t>0. $$
Here \(\tilde{N}(t,A)\) is called the compensated Poisson random measure and \(\hat{N}(t,A)=\pi(A)t\) is called the compensator. For more details on Poisson point processes and Lévy jumps, see [26–28].
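For instance (an elementary illustration, not drawn from [26–28]), if the Lévy measure is finite and concentrated on a single jump size, say \(\pi=\lambda\delta_{v_{0}}\) with \(\lambda>0\) and \(v_{0}\ne0\), then for any admissible set A containing \(v_{0}\),
$$N(t,A)\sim\operatorname{Poisson}(\lambda t),\qquad \hat{N}(t,A)=\lambda t,\qquad \tilde{N}(t,A)=N(t,A)-\lambda t, $$
so \(\tilde{N}(t,A)\) is the familiar compensated Poisson process.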
Consider the following neutral SFDE with Poisson random measure:
$$\begin{aligned} d\bigl[x(t)-D(x_{t})\bigr]=f(t,x_{t})\,dt+g(t,x_{t})\,dw(t)+ \int_{Z}h(t,x_{t},v)N(dt,dv), \end{aligned}$$
(1)
where \(x_{t}=\{x(t+\theta):-\tau\le\theta\le0\}\) is regarded as a \(D([-\tau,0];R^{n})\)-valued stochastic process, and \(f:[0,T]\times D([-\tau,0];R^{n}) \to R^{n}\), \(g:[0,T]\times D([-\tau,0];R^{n}) \to R^{n\times m}\), and \(h:[0,T]\times D([-\tau,0];R^{n})\times Z \to R^{n} \) are Borel-measurable functions. The initial condition \(x_{0}\) is defined by
$$x_{0}=\xi=\bigl\{ \xi(t):-\tau\le t\le0\bigr\} \in\mathcal{L}_{\mathcal {F}_{0}}^{2} \bigl([-\tau,0];R^{n}\bigr), $$
that is, ξ is an \(\mathcal{F}_{0}\)-measurable \(D([-\tau,0];R^{n})\)-valued random variable and \(E\|\xi\|^{2}<\infty\).
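As a purely illustrative reading of equation (1), the following Python sketch simulates one scalar sample path with an explicit Euler-type scheme; the concrete choices \(D(\varphi)=\frac{1}{2}\varphi(-\tau)\), \(f(t,\varphi)=\sin\varphi(0)\), \(g(t,\varphi)=\cos\varphi(0)\), \(h(t,\varphi,v)=v\varphi(0)\), the finite jump intensity, and the constant initial segment are assumptions made only for this sketch and are not part of the model above.

```python
import numpy as np

# Illustrative scalar instance of equation (1); all coefficient choices below
# are assumptions made for this sketch, not the paper's model:
#   d[x(t) - 0.5*x(t-tau)] = sin(x(t)) dt + cos(x(t)) dw(t) + int_Z v*x(t-) N(dt,dv),
# where N has finite total intensity lam and jump marks v ~ Normal(0, 0.1^2).

rng = np.random.default_rng(0)

tau, T, dt = 1.0, 5.0, 1e-3
lam = 2.0                        # pi(Z) < infinity: total jump intensity
n_hist = int(round(tau / dt))    # number of grid steps covering [-tau, 0]
n_steps = int(round(T / dt))

# Initial segment xi on [-tau, 0] (a constant path, for simplicity).
x = np.empty(n_hist + n_steps + 1)
x[: n_hist + 1] = 1.0

D = lambda x_lag: 0.5 * x_lag          # neutral term D(x_t) = 0.5 * x(t - tau)
f = lambda t, x_now: np.sin(x_now)     # drift coefficient
g = lambda t, x_now: np.cos(x_now)     # diffusion coefficient
h = lambda t, x_now, v: v * x_now      # jump coefficient

for n in range(n_steps):
    i = n_hist + n                     # index of the current time t_n = n*dt
    t = n * dt
    dw = np.sqrt(dt) * rng.standard_normal()
    # Jumps of N in (t_n, t_n + dt]: their number is Poisson(lam*dt),
    # and each jump contributes h evaluated at its mark v.
    jump_sum = sum(h(t, x[i], rng.normal(0.0, 0.1))
                   for _ in range(rng.poisson(lam * dt)))
    # Euler step for the neutral equation:
    #   x(t+dt) = D(x_{t+dt}) + [x(t) - D(x_t)] + f*dt + g*dw + jumps,
    # where D(x_{t+dt}) = 0.5*x(t + dt - tau) is already known since dt <= tau.
    x[i + 1] = (D(x[i + 1 - n_hist]) + (x[i] - D(x[i - n_hist]))
                + f(t, x[i]) * dt + g(t, x[i]) * dw + jump_sum)

print("x(T) =", x[-1])
```

Since the neutral term in this sketch depends only on the delayed value \(x(t-\tau)\) and the step size does not exceed τ, each update is explicit; a neutral term depending on \(\varphi(0)\) would require an implicit step.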
To study the averaging principle for equation (1), we need the following assumptions.
Assumption 2.1
Assume that \(D(0)=0\) and that there exists a constant \(k_{0}\in(0,1)\) such that, for all \(\varphi,\psi\in D([-\tau ,0];R^{n})\),
$$\begin{aligned} \bigl|D(\varphi)-D(\psi)\bigr|\le k_{0}\|\varphi-\psi\|. \end{aligned}$$
(2)
Assumption 2.2
For all \(\varphi,\psi\in D([-\tau,0];R^{n})\) and \(t\in[0,T]\), there exist two positive constants \(k_{1}\), \(k_{2}\) such that
$$\bigl|f(t,\varphi)-f(t,\psi)\bigr|^{2}\vee\bigl|g(t,\varphi)-g(t,\psi)\bigr|^{2} \le k_{1}\|\varphi -\psi\|^{2} $$
and
$$\begin{aligned} \int_{Z}\bigl|h(t,\varphi,v)-h(t,\psi,v)\bigr|^{p}\pi(dv) \le k_{2}\|\varphi-\psi\|^{p},\quad p\ge2. \end{aligned}$$
(3)
Assumption 2.3
For all \(\varphi\in D([-\tau,0];R^{n})\) and \(t\in [0,T]\), there exist two positive constants \(k_{3}\), \(k_{4}\) such that
$$\bigl|f(t,\varphi)\bigr|^{2}\vee\bigl|g(t,\varphi)\bigr|^{2}\le k_{3} \bigl(1+\|\varphi\|^{2}\bigr) $$
and
$$\begin{aligned} \int_{Z}\bigl|h(t,\varphi,v)\bigr|^{p}\pi(dv)\le k_{4}\bigl(1+\|\varphi\|^{p}\bigr),\quad p\ge2. \end{aligned}$$
(4)
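To see that these conditions are not restrictive, here is a simple scalar example (chosen only for illustration) satisfying Assumptions 2.1-2.3:
$$\begin{aligned}& D(\varphi)=\tfrac{1}{2}\varphi(-\tau),\qquad h(t,\varphi,v)=v\sin\varphi(0), \\& f(t,\varphi)=\bigl(1+e^{-t}\bigr)\sin\varphi(0),\qquad g(t,\varphi)=\bigl(1+e^{-t}\bigr)\cos\varphi(0), \end{aligned}$$
with a Lévy measure such that \(c_{p}:=\int_{Z}|v|^{p}\pi(dv)<\infty\). Since sin and cos are bounded by 1 and 1-Lipschitz and \(1+e^{-t}\le2\), Assumptions 2.1-2.3 hold with \(k_{0}=\frac{1}{2}\), \(k_{1}=k_{3}=4\), and \(k_{2}=k_{4}=c_{p}\).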
Let \(C^{2,1}( [-\tau,T]\times R^{n}; R_{+})\) denote the family of all nonnegative functions \(V(t,x)\) defined on \([-\tau,T]\times R^{n} \) which are twice continuously differentiable in x and once continuously differentiable in t. For each \(V\in C^{2,1}( [-\tau ,T]\times R^{n}; R_{+})\), define an operator LV by
$$\begin{aligned} LV(t,x,y) =& V_{t}\bigl(t,x-D(y)\bigr)+V_{x} \bigl(t,x-D(y)\bigr) f(t,y) \\ &{}+\frac{1}{2}\operatorname{trace}\bigl[g^{\top}(t,y)V_{xx} \bigl(t,x-D(y)\bigr)g(t,y)\bigr] \\ &{}+ \int_{Z}\bigl[V\bigl(t,x-D(y)+h(t,y,v)\bigr)-V\bigl(t,x-D(y) \bigr)\bigr]\pi(dv), \end{aligned}$$
(5)
where
$$\begin{aligned}& V_{t}(t,x) = \frac{\partial V(t,x) }{\partial t}, \qquad V_{x}(t,x) = \biggl( \frac{\partial V(t,x) }{\partial x_{1}}, \ldots, \frac{\partial V(t,x) }{\partial x_{n}} \biggr), \\& V_{xx}(t,x) = \biggl( \frac{\partial^{2} V(t,x) }{\partial x_{i}\,\partial x_{j}} \biggr)_{n\times n}. \end{aligned}$$
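For instance, with the standard choice \(V(t,x)=|x|^{2}\) we have \(V_{t}=0\), \(V_{x}(t,x)=2x^{\top}\), \(V_{xx}(t,x)=2I\), and \(|a+b|^{2}-|a|^{2}=2a^{\top}b+|b|^{2}\), so (5) reads
$$\begin{aligned} LV(t,x,y)={}&2\bigl(x-D(y)\bigr)^{\top}f(t,y)+\bigl|g(t,y)\bigr|^{2} \\ &{}+ \int_{Z}\bigl[2\bigl(x-D(y)\bigr)^{\top}h(t,y,v)+\bigl|h(t,y,v)\bigr|^{2}\bigr]\pi(dv), \end{aligned}$$
where \(|g|^{2}=\operatorname{trace}(g^{\top}g)\).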
By an argument similar to that in [29], we have the following existence result.
Theorem 2.1
If Assumptions 2.1-2.3 hold, then equation (1) has a unique solution in the sense of \(L^{p}\).
Now, we study the averaging principle for neutral SFDEs with Poisson random measure. Let us consider the standard form of equation (1):
$$\begin{aligned} x_{\varepsilon}(t) =&x(0)+D(x_{\varepsilon,t})-D(x_{0})+ \varepsilon \int _{0}^{t}f(s,x_{\varepsilon,s})\,ds \\ &{}+\sqrt{\varepsilon} \int_{0}^{t}g(s,x_{\varepsilon,s})\,dw(s)+\sqrt{ \varepsilon} \int_{0}^{t} \int_{Z}h(s,x_{\varepsilon,s},v)N(ds,dv), \end{aligned}$$
(6)
where the coefficients f, g, and h satisfy the same assumptions as in (3) and (4), and \(\varepsilon\in(0,\varepsilon_{0}]\) is a small positive parameter, with \(\varepsilon_{0}\) a fixed number.
Let \(\bar{f}: D([-\tau,0];R^{n}) \to R^{n} \), \(\bar{g}: D([-\tau,0];R^{n}) \to R^{n\times m} \), and \(\bar{h}: D([-\tau,0];R^{n})\times Z \to R^{n}\) be measurable functions satisfying Assumptions 2.2 and 2.3. We also assume that the following condition is satisfied.
Assumption 2.4
For any \(\varphi\in D([-\tau,0];R^{n})\) and \(p\ge2\), there exist three positive bounded functions \(\psi_{i}(T_{1})\), \(i=1,2,3\), such that
$$\begin{aligned}& \frac{1}{T_{1}} \int_{0}^{T_{1}}\bigl|f(t,\varphi)-\bar{f}( \varphi)\bigr|^{p}\,dt \le\psi _{1}(T_{1}) \bigl(1+\| \varphi\|^{p}\bigr), \\& \frac{1}{T_{1}} \int_{0}^{T_{1}}\bigl|g(t,\varphi)-\bar{g}( \varphi)\bigr|^{p}\,dt \le\psi _{2}(T_{1}) \bigl(1+\| \varphi\|^{p}\bigr), \end{aligned}$$
and
$$\begin{aligned} \frac{1}{T_{1}} \int_{0}^{T_{1}} \int_{Z}\bigl|h(t,\varphi,v)-\bar{h}(\varphi ,v)\bigr|^{p} \pi(dv)\,dt \le\psi_{3}(T_{1}) \bigl(1+\|\varphi \|^{p}\bigr), \end{aligned}$$
where \(\lim_{T_{1}\to\infty}\psi_{i}(T_{1})=0\).
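To illustrate Assumption 2.4, continue the scalar example given after Assumption 2.3 (again, purely for illustration) and take
$$\bar{f}(\varphi)=\sin\varphi(0),\qquad \bar{g}(\varphi)=\cos\varphi(0),\qquad \bar{h}(\varphi,v)=v\sin\varphi(0). $$
Then
$$\frac{1}{T_{1}} \int_{0}^{T_{1}}\bigl|f(t,\varphi)-\bar{f}(\varphi)\bigr|^{p}\,dt =\frac{|\sin\varphi(0)|^{p}}{T_{1}} \int_{0}^{T_{1}}e^{-pt}\,dt \le\frac{1}{pT_{1}}\bigl(1+\|\varphi\|^{p}\bigr), $$
so one may take \(\psi_{1}(T_{1})=\frac{1}{pT_{1}}\to0\); the same bound works for \(\psi_{2}\), and \(\psi_{3}\) may be taken equal to \(\frac{1}{pT_{1}}\) as well, since h does not depend on t in this example.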
Then we obtain the averaged form of the standard neutral SFDE with Poisson random measure:
$$\begin{aligned} y_{\varepsilon}(t) =&y(0)+D(y_{\varepsilon,t})-D(y_{0})+ \varepsilon \int _{0}^{t}\bar{f}(y_{\varepsilon,s})\,ds+\sqrt{ \varepsilon} \int_{0}^{t}\bar{g}(y_{\varepsilon,s})\,dw(s) \\ &{}+\sqrt{\varepsilon} \int_{0}^{t} \int_{Z}\bar{h}(y_{\varepsilon,s},v)N(ds,dv), \end{aligned}$$
(7)
where \(y(0)=x(0)\), \(y_{0}=x_{0}\).
Obviously, under Assumptions 2.1-2.3, the standard neutral SFDE with Poisson random measure (6) and the averaged equation (7) each have a unique solution in \(L^{p}\).
Now we present our main results, which reveal the relationship between the processes \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).
Theorem 2.2
Let Assumptions 2.1-2.4 hold. For a given arbitrarily small number \(\delta_{1}>0\) and \(p\ge2\), there exist \(L>0\), \(\varepsilon_{1}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that
$$\begin{aligned} E\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p}\le \delta_{1}, \quad \forall t\in\bigl[0,L\varepsilon^{-\beta} \bigr], \end{aligned}$$
(8)
for all \(\varepsilon\in(0,\varepsilon_{1}]\).
The proof of this theorem will be shown in Section 4.
Remark 2.1
In particular, when \(p=2\), we see that the solution of the averaged neutral SFDE with Poisson random measure converges to that of the standard one in mean square.
With Theorem 2.2, it is easy to establish convergence in probability between the processes \(x_{\varepsilon}(t)\) and \(y_{\varepsilon}(t)\).
Corollary 2.1
Let Assumptions 2.1-2.4 hold. For a given arbitrarily small number \(\delta_{2}>0\), there exists \(\varepsilon_{2}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in(0,\varepsilon_{2}]\), we have
$$\begin{aligned} \lim_{\varepsilon\to0}P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta }}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|> \delta_{2}\Bigr)=0, \end{aligned}$$
where L and β are defined in Theorem 2.2.
Proof
By Theorem 2.2 and the Chebyshev inequality, for any given number \(\delta_{2}>0\), we can obtain
$$P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta}}\bigl|x_{\varepsilon}(t)-y_{\varepsilon }(t)\bigr|> \delta_{2}\Bigr)\le\frac{1}{\delta_{2}^{p}}E\Bigl(\sup_{0< t\le L\varepsilon ^{-\beta}}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{p} \Bigr)\le\frac {cL\varepsilon^{1-\beta}}{\delta_{2}^{p}}. $$
Let \(\varepsilon\to0\), and the required result follows. □
Next, we extend the averaging principle for neutral SFDEs with Poisson random measure to the case of non-Lipschitz conditions.
Assumption 2.5
Let \(k(\cdot)\), \(\rho(\cdot)\) be two concave nondecreasing functions from \(R_{+}\) to \(R_{+}\) such that \(k(0)=\rho(0)=0\) and \(\int_{0^{+}} \frac{u^{p-1}}{k^{p}(u)+\rho^{p}(u)}\,du=\infty\). Then, for all \(\varphi,\psi\in D([-\tau,0];R^{n})\), \(t\in[0,T]\), and \(p\ge2\),
$$ \begin{aligned} &\bigl|f(t,\varphi)-f(t,\psi)\bigr|\vee\bigl|g(t,\varphi)-g(t,\psi)\bigr| \le k\bigl(\| \varphi -\psi\|\bigr),\\ &{\biggl[ \int_{Z}\bigl|h(t,\varphi,v)-h(t,\psi,v)\bigr|^{p}\pi(dv) \biggr]}^{\frac{1}{p}} \le \rho\bigl(\|\varphi-\psi\|\bigr). \end{aligned} $$
(9)
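A standard example of non-Lipschitz moduli (recorded here only for illustration, with \(p=2\)) is \(k(u)=\rho(u)=u\sqrt{\log(1/u)}\) for \(0<u\le\delta\), where \(\delta\in(0,e^{-1})\), extended to a concave nondecreasing function on \(R_{+}\) with \(k(0)=0\). Since \(k(u)/u\to\infty\) as \(u\to0^{+}\), no linear bound \(k(u)\le Lu\) can hold near 0, yet
$$\int_{0^{+}}\frac{u}{k^{2}(u)+\rho^{2}(u)}\,du= \int_{0^{+}}\frac{du}{2u\log(1/u)}=\infty, $$
because the substitution \(s=\log(1/u)\) turns the integral into \(\frac{1}{2}\int^{\infty}\frac{ds}{s}\).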
Remark 2.2
As is well known, the existence and uniqueness of solutions for NSFDEs under the above assumptions were proved by Bao and Hou [30], Ren and Xia [31], and Wei and Cai [32]. If \(k(u)=\rho(u)=Lu\), then Assumption 2.5 reduces to the Lipschitz conditions of Assumption 2.2. In other words, Assumption 2.5 is much weaker than Assumption 2.2.
Theorem 2.3
If Assumptions 2.1 and 2.5 hold, then there exists a unique solution to equation (1) in the sense of \(L^{p}\).
Proof
The proof is similar to those in Ren and Xia [31] and Wei and Cai [32], and is therefore omitted here. □
Theorem 2.4
Let Assumptions 2.1, 2.4, and 2.5 hold. For a given arbitrarily small number \(\delta_{3}>0\), there exist \(L>0\), \(\varepsilon_{3}\in(0,\varepsilon_{0}]\), and \(\beta\in(0,1)\) such that
$$\begin{aligned} E\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|^{2}\le \delta_{3}, \quad \forall t\in\bigl[0,L\varepsilon^{-\beta} \bigr], \end{aligned}$$
(10)
for all \(\varepsilon\in(0,\varepsilon_{3}]\).
Proof
The proof of this theorem will be shown in Section 4. □
Similarly, with Theorem 2.4, we can show convergence in probability between the solution of the standard equation (6) and that of the averaged equation (7).
Corollary 2.2
Let Assumptions 2.1, 2.4, and 2.5 hold. For a given arbitrarily small number \(\delta_{4}>0\), there exists \(\varepsilon_{4}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in(0,\varepsilon_{4}]\), we have
$$\begin{aligned} \lim_{\varepsilon\to0}P\Bigl(\sup_{0< t\le L\varepsilon^{-\beta }}\bigl|x_{\varepsilon}(t)-y_{\varepsilon}(t)\bigr|> \delta_{4}\Bigr)=0, \end{aligned}$$
where L and β are defined in Theorem 2.4.
Remark 2.3
If the jump terms \(h=\tilde{h}=0\), then equations (1) and (36) become neutral SFDEs (SDDEs), which have been investigated in [21–25]. Under our assumptions, we can show that the solution of the averaged neutral SFDE (SDDE) converges to that of the standard one in the pth moment and in probability.
Remark 2.4
If the neutral terms \(D(\cdot)=0\) and \(\tilde{D}(\cdot)=0\), then equations (1) and (36) reduce to SFDEs (SDDEs) with jumps, which have been studied in [18, 19]. Hence, the corresponding results in [18, 19] are generalized and improved.