
# Convergence and stability of the compensated split-step θ-method for stochastic differential equations with jumps

## Abstract

In this paper, we develop a new compensated split-step θ (CSSθ) method for stochastic differential equations with jumps (SDEwJs). First, we prove that the proposed method is convergent with strong order 1/2 in the mean-square sense. Then we derive a condition for the mean-square (MS) stability of the CSSθ method. Finally, some scalar test equations are simulated to verify the theoretical results, and the CSSθ method is compared with the compensated stochastic theta (CST) method of Wang and Gan (Appl. Numer. Math. 60:877-887, 2010). The results demonstrate the higher efficiency of the CSSθ method.

## 1 Introduction

In this paper, we consider one-dimensional Itô stochastic differential equations (SDEs) with Poisson-driven jumps

$\mathrm{d}X\left(t\right)=f\left(X\left({t}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}t+g\left(X\left({t}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(t\right)+h\left(X\left({t}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}N\left(t\right)$
(1.1)

for $t>0$, with $X\left({0}^{-}\right)={X}_{0}$, where $X\left({t}^{-}\right)$ denotes ${lim}_{s\to {t}^{-}}X\left(s\right)$, $f:\mathbb{R}\to \mathbb{R}$, $g:\mathbb{R}\to \mathbb{R}$, $h:\mathbb{R}\to \mathbb{R}$, $W\left(t\right)$ is a scalar standard Wiener process, and $N\left(t\right)$ is a scalar Poisson process with intensity λ.

Recently, stochastic differential equations with jumps (SDEwJs) have been used increasingly to model real-world phenomena in fields such as economics, finance, biology, and physics. However, few SDEwJs admit closed-form solutions, so it is necessary to develop numerical methods for SDEwJs and to study the properties of these methods. For example, Higham and Kloeden [1] studied the convergence and stability of the implicit method for jump-diffusion systems, and they further analyzed the strong convergence rates of the backward Euler method for a nonlinear jump-diffusion system [2]. Chalmers and Higham [3] studied the convergence and stability of implicit simulations of SDEs with random jump magnitudes. Higham and Kloeden [4] constructed the split-step backward Euler (SSBE) method and the compensated split-step backward Euler (CSSBE) method for nonlinear SDEwJs. Bruti-Liberati and Platen [5, 6] developed strong and weak approximations of SDEwJs.

More recently, Wang and Gan [7] studied the CST method for stochastic differential equations with jumps, and Hu and Gan [8] studied the convergence and stability of the balanced methods for SDEwJs. The split-step θ (SSθ) method was first developed by Ding et al. [9] for solving stochastic differential equations. Motivated by these works, we construct the compensated split-step θ (CSSθ) method for SDEwJs.

In this paper, we investigate the convergence and mean-square stability of the CSSθ method for SDEwJs. The outline of the paper is as follows. In Section 2, we introduce some notation and hypotheses and present the CSSθ method for SDEwJs. In Section 3, we prove that the numerical solutions produced by the CSSθ method converge to the true solutions with strong order 1/2. In Section 4, the mean-square stability of the CSSθ method for a linear test equation is studied. Finally, some numerical experiments are presented to verify the theoretical results.

## 2 The compensated split-step θ-method

For the existence and uniqueness of the solution for (1.1), we usually assume that f, g, and h satisfy the following assumptions:

(H1) (The uniform Lipschitz condition) There is a constant $K>0$, for all $x,y\in \mathbb{R}$, such that

$|f\left(x\right)-f\left(y\right){|}^{2}\vee |g\left(x\right)-g\left(y\right){|}^{2}\vee |h\left(x\right)-h\left(y\right){|}^{2}\le K|x-y{|}^{2}.$
(2.1)

(H2) (The linear growth condition) There is a constant $L>0$, for all $x\in \mathbb{R}$, such that

$|f\left(x\right){|}^{2}\vee |g\left(x\right){|}^{2}\vee |h\left(x\right){|}^{2}\le L\left(1+|x{|}^{2}\right).$
(2.2)

We assume that $E|X\left(0\right){|}^{2}<\mathrm{\infty }$ and that $X\left(0\right)$ is independent of $W\left(t\right)$ and $N\left(t\right)$ for all $t\ge 0$. Under these conditions, equation (1.1) has a unique solution on $\left[0,+\mathrm{\infty }\right)$; see [10, 11].

For a constant step size $\mathrm{\Delta }t>0$, we first define the split-step θ (SSθ) method for (1.1) by ${Y}_{0}=X\left({0}^{-}\right)$ and

${{Y}_{n}}^{\ast }={Y}_{n}+\left[\left(1-\theta \right)f\left({Y}_{n}\right)+\theta f\left({{Y}_{n}}^{\ast }\right)\right]\mathrm{\Delta }t,$
(2.3)
${Y}_{n+1}={{Y}_{n}}^{\ast }+g\left({{Y}_{n}}^{\ast }\right)\mathrm{\Delta }{W}_{n}+h\left({{Y}_{n}}^{\ast }\right)\mathrm{\Delta }{N}_{n},$
(2.4)

where $\theta \in \left[0,1\right]$, ${Y}_{n}$ is the numerical approximation of $X\left({t}_{n}\right)$ with ${t}_{n}=n\cdot \mathrm{\Delta }t$. Moreover, the increments $\mathrm{\Delta }{W}_{n}:=W\left({t}_{n+1}\right)-W\left({t}_{n}\right)$ are independent Gaussian random variables with mean 0 and variance Δt; $\mathrm{\Delta }{N}_{n}:=N\left({t}_{n+1}\right)-N\left({t}_{n}\right)$ are independent Poisson distributed random variables with mean $\lambda \mathrm{\Delta }t$ and variance $\lambda \mathrm{\Delta }t$.

If $\theta =1$, the SSθ method reduces to the SSBE method in [4]; if $\theta =0$, the SSθ method is explicit.
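As a concrete illustration of the driving increments (a minimal sketch, not part of the paper; the intensity λ, step size Δt, and sample count are placeholder values), $\mathrm{\Delta }{W}_{n}$ and $\mathrm{\Delta }{N}_{n}$ can be sampled as follows:

```python
import numpy as np

rng = np.random.default_rng(seed=0)   # fixed seed for reproducibility
lam, dt, n_steps = 2.0, 0.01, 1000    # placeholder intensity and step size

# Wiener increments: independent N(0, dt) random variables
dW = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)

# Poisson increments: independent Poisson(lam * dt) random variables,
# with mean and variance both equal to lam * dt
dN = rng.poisson(lam=lam * dt, size=n_steps)

# Compensated Poisson increments (mean zero), used by the CSS-theta method
dNt = dN - lam * dt
```

Both jump representations carry the same randomness; compensation merely shifts the jump mean into the drift.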

Note that the compensated Poisson process

$\stackrel{˜}{N}\left(t\right):=N\left(t\right)-\lambda t$

is a martingale. Defining

${f}_{\lambda }\left(x\right):=f\left(x\right)+\lambda h\left(x\right),$

we can rewrite the jump-diffusion system (1.1) in the form

$\mathrm{d}X\left(t\right)={f}_{\lambda }\left(X\left({t}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}t+g\left(X\left({t}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(t\right)+h\left(X\left({t}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(t\right).$
(2.5)

We note that ${f}_{\lambda }$ also satisfies the uniform Lipschitz and linear growth conditions: since $|{f}_{\lambda }\left(x\right)-{f}_{\lambda }\left(y\right){|}^{2}\le 2|f\left(x\right)-f\left(y\right){|}^{2}+2{\lambda }^{2}|h\left(x\right)-h\left(y\right){|}^{2}$ (and similarly for the growth bound), they hold with the larger constants

${K}_{\lambda }=2{\left(\lambda +1\right)}^{2}K,\phantom{\rule{2em}{0ex}}{L}_{\lambda }=2{\left(\lambda +1\right)}^{2}L.$
(2.6)

Then we define the compensated split-step θ method (CSSθ) for (1.1) by ${Y}_{0}=X\left({0}^{-}\right)$ and

${{Y}_{n}}^{\ast }={Y}_{n}+\left[\left(1-\theta \right){f}_{\lambda }\left({Y}_{n}\right)+\theta {f}_{\lambda }\left({{Y}_{n}}^{\ast }\right)\right]\mathrm{\Delta }t,$
(2.7)
${Y}_{n+1}={{Y}_{n}}^{\ast }+g\left({{Y}_{n}}^{\ast }\right)\mathrm{\Delta }{W}_{n}+h\left({{Y}_{n}}^{\ast }\right)\mathrm{\Delta }{\stackrel{˜}{N}}_{n},$
(2.8)

where $\mathrm{\Delta }{\stackrel{˜}{N}}_{n}:=\stackrel{˜}{N}\left({t}_{n+1}\right)-\stackrel{˜}{N}\left({t}_{n}\right)$.

If $\theta =1$, the CSSθ method reduces to the CSSBE method in [4].
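A minimal sketch of how scheme (2.7)-(2.8) might be implemented (the coefficient functions, parameter values, and the helper name `css_theta_path` are illustrative assumptions, not from the paper); the implicit stage is solved by fixed-point iteration:

```python
import numpy as np

def css_theta_path(f, g, h, lam, x0, theta, dt, n_steps, rng):
    """Simulate one path of the compensated split-step theta (CSS-theta) scheme.

    The implicit stage Y* = Y_n + [(1-theta) f_lam(Y_n) + theta f_lam(Y*)] dt
    is solved by fixed-point iteration, which contracts when
    sqrt(K_lam) * theta * dt < 1 (cf. Lemma 2.1).
    """
    f_lam = lambda x: f(x) + lam * h(x)           # compensated drift
    y = np.empty(n_steps + 1)
    y[0] = x0
    for n in range(n_steps):
        # --- implicit stage: fixed-point iteration for Y* ---
        a = y[n] + (1.0 - theta) * dt * f_lam(y[n])
        y_star = y[n]
        for _ in range(50):
            y_new = a + theta * dt * f_lam(y_star)
            if abs(y_new - y_star) < 1e-12:
                y_star = y_new
                break
            y_star = y_new
        # --- stochastic stage with compensated Poisson increment ---
        dW = rng.normal(0.0, np.sqrt(dt))
        dNt = rng.poisson(lam * dt) - lam * dt
        y[n + 1] = y_star + g(y_star) * dW + h(y_star) * dNt
    return y

# Illustrative linear test problem (placeholder coefficients)
rng = np.random.default_rng(1)
path = css_theta_path(f=lambda x: -2.0 * x, g=lambda x: 0.5 * x,
                      h=lambda x: -0.4 * x, lam=1.0, x0=1.0,
                      theta=0.5, dt=0.01, n_steps=500, rng=rng)
```

For linear coefficients the inner loop converges in a few iterations; for general drifts a Newton solve could replace it.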

To address the existence and uniqueness of the numerical solution, we give the following lemma.

Lemma 2.1 Assume that $f:\mathbb{R}\to \mathbb{R}$ and $h:\mathbb{R}\to \mathbb{R}$ satisfy (2.1), and let $0<\theta <1$, $0<\mathrm{\Delta }t<1/\left(\sqrt{{K}_{\lambda }}\theta \right)$. Then equation (2.7) can be solved uniquely for ${{Y}_{n}}^{\ast }$, with probability 1.

Proof Writing (2.7) as ${{Y}_{n}}^{\ast }=F\left({{Y}_{n}}^{\ast }\right)$ with $F\left(y\right):=a+\theta \mathrm{\Delta }t{f}_{\lambda }\left(y\right)$, where $a={Y}_{n}+\left(1-\theta \right)\mathrm{\Delta }t{f}_{\lambda }\left({Y}_{n}\right)$ is known at step n, and using the Lipschitz constant ${K}_{\lambda }$ from (2.6), we have

$\begin{array}{rcl}|F\left(u\right)-F\left(v\right)|& =& |\theta \mathrm{\Delta }t{f}_{\lambda }\left(u\right)-\theta \mathrm{\Delta }t{f}_{\lambda }\left(v\right)|\\ \le & \sqrt{{K}_{\lambda }}\theta \mathrm{\Delta }t|u-v|.\end{array}$

Then the result follows from the classical Banach contraction mapping theorem [12]. □
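The contraction can be checked numerically (a toy illustration under assumed linear coefficients, separate from the proof): for a compensated drift with $\sqrt{{K}_{\lambda }}=2.4$, the map F shrinks distances by the factor $\sqrt{{K}_{\lambda }}\theta \mathrm{\Delta }t$ at every iteration.

```python
# Illustrative check of the contraction in Lemma 2.1 for a placeholder drift.
theta, dt, lam = 0.5, 0.1, 1.0
f = lambda x: -2.0 * x                 # Lipschitz with sqrt(K) = 2
h = lambda x: -0.4 * x                 # Lipschitz with sqrt(K) = 0.4
f_lam = lambda x: f(x) + lam * h(x)    # compensated drift, Lipschitz constant 2.4

a = 1.0                                # stands for Y_n + (1-theta) dt f_lam(Y_n)
F = lambda u: a + theta * dt * f_lam(u)

rate = 2.4 * theta * dt                # contraction factor sqrt(K_lam)*theta*dt
u, v = 0.0, 5.0
for _ in range(5):
    u_next, v_next = F(u), F(v)
    # |F(u) - F(v)| <= sqrt(K_lam) * theta * dt * |u - v|
    assert abs(u_next - v_next) <= rate * abs(u - v) + 1e-12
    u, v = u_next, v_next
```

Since $\mathrm{rate}<1$, the iterates of any two starting points merge geometrically, which is exactly the Banach fixed-point mechanism invoked above.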

## 3 Strong convergence on a finite time interval $\left[0,T\right]$

In this section, we prove the strong convergence of the CSSθ method for problem (1.1) on a finite time interval $\left[0,T\right]$, where T is a constant.

In light of Lemma 2.1, it is convenient to use a continuous-time approximate solution in our strong convergence analysis. Hence, for $t\in \left[{t}_{n},{t}_{n+1}\right)$, we define the two step functions:

${Z}_{1}\left(t\right)=\sum _{n=0}^{N-1}{Y}_{n}{I}_{\left[n\mathrm{\Delta }t,\left(n+1\right)\mathrm{\Delta }t\right)}\left(t\right),$
(3.1)
${Z}_{2}\left(t\right)=\sum _{n=0}^{N-1}{Y}_{n}^{\ast }{I}_{\left[n\mathrm{\Delta }t,\left(n+1\right)\mathrm{\Delta }t\right)}\left(t\right),$
(3.2)

where N is the largest integer such that $N\mathrm{\Delta }t\le T$, and ${I}_{A}$ is the indicator function of the set A, i.e., ${I}_{A}\left(x\right)=\left\{\begin{array}{ll}1,& x\in A,\\ 0,& x\notin A.\end{array}$

For $t\in \left[{t}_{n},{t}_{n+1}\right)$, Lemma 2.1 ensures the existence of ${Y}_{n}^{\ast }$ in (2.7), and we define

$\begin{array}{rcl}Y\left(t\right)& =& {Y}_{n}+\left[\left(1-\theta \right){f}_{\lambda }\left({Y}_{n}\right)+\theta {f}_{\lambda }\left({Y}_{n}^{\ast }\right)\right]\left(t-{t}_{n}\right)+g\left({Y}_{n}^{\ast }\right)\left(W\left(t\right)-W\left({t}_{n}\right)\right)\\ +h\left({Y}_{n}^{\ast }\right)\left(\stackrel{˜}{N}\left(t\right)-\stackrel{˜}{N}\left({t}_{n}\right)\right).\end{array}$
(3.3)

Thus we can rewrite (3.3) in the integral form as follows:

$\begin{array}{rcl}Y\left(t\right)& =& {Y}_{0}+{\int }_{0}^{t}\left(1-\theta \right){f}_{\lambda }\left({Z}_{1}\left(s\right)\right)+\theta {f}_{\lambda }\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s+{\int }_{0}^{t}g\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right)\\ +{\int }_{0}^{t}h\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right).\end{array}$
(3.4)

It is easy to verify that ${Z}_{1}\left({t}_{n}\right)={Y}_{n}=Y\left({t}_{n}\right)$, that is, ${Z}_{1}\left(t\right)$ and $Y\left(t\right)$ coincide with the discrete solutions at the gridpoints. Hence we refer to $Y\left(t\right)$ as a continuous-time extension of the discrete approximation $\left\{{Y}_{n}\right\}$. So our plan is to prove a strong convergence result for $Y\left(t\right)$.

Now we begin the proof of the strong convergence of the CSSθ method. Our first lemma shows the relationship between $E|{{Y}_{n}}^{\ast }{|}^{2}$ and $E|{Y}_{n}{|}^{2}$.

Lemma 3.1 Suppose that f and h satisfy (2.2), and let $0<\theta <1$, $0<\mathrm{\Delta }t<min\left\{1,\frac{1}{4\theta {L}_{\lambda }}\right\}$. Then there exist two positive constants $A=4\left(1+{L}_{\lambda }\right)$ and $B=8{L}_{\lambda }$ such that

$E|{{Y}_{n}}^{\ast }{|}^{2}\le AE|{Y}_{n}{|}^{2}+B,$

where ${{Y}_{n}}^{\ast }$ and ${Y}_{n}$ are produced by (2.7) and (2.8).

Proof Squaring both sides of (2.7), we find

$\begin{array}{rcl}|{{Y}_{n}}^{\ast }{|}^{2}& =& |{Y}_{n}+\left(1-\theta \right)\mathrm{\Delta }t{f}_{\lambda }\left({Y}_{n}\right)+\theta \mathrm{\Delta }t{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\\ =& |{Y}_{n}{|}^{2}+|\left(1-\theta \right)\mathrm{\Delta }t{f}_{\lambda }\left({Y}_{n}\right){|}^{2}+|\theta \mathrm{\Delta }t{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}+2\theta \mathrm{\Delta }t{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){Y}_{n}\\ +2\left(1-\theta \right)\mathrm{\Delta }t{f}_{\lambda }\left({Y}_{n}\right){Y}_{n}+2\theta \left(1-\theta \right)\mathrm{\Delta }{t}^{2}{f}_{\lambda }\left({Y}_{n}\right){f}_{\lambda }\left({{Y}_{n}}^{\ast }\right).\end{array}$
(3.5)

Using the elementary inequality $2ab\le {a}^{2}+{b}^{2}$, we obtain

$\begin{array}{rcl}|{{Y}_{n}}^{\ast }{|}^{2}& \le & |{Y}_{n}{|}^{2}+{\left(1-\theta \right)}^{2}\mathrm{\Delta }{t}^{2}|{f}_{\lambda }\left({Y}_{n}\right){|}^{2}+{\theta }^{2}\mathrm{\Delta }{t}^{2}|{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\\ +\theta \mathrm{\Delta }t\left[|{Y}_{n}{|}^{2}+|{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\right]+\left(1-\theta \right)\mathrm{\Delta }t\left[|{Y}_{n}{|}^{2}+|{f}_{\lambda }\left({Y}_{n}\right){|}^{2}\right]\\ +\theta \left(1-\theta \right)\mathrm{\Delta }{t}^{2}\left[|{f}_{\lambda }\left({Y}_{n}\right){|}^{2}+|{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\right]\\ =& |{Y}_{n}{|}^{2}+\left[{\left(1-\theta \right)}^{2}\mathrm{\Delta }{t}^{2}+\left(1-\theta \right)\mathrm{\Delta }t+\theta \left(1-\theta \right)\mathrm{\Delta }{t}^{2}\right]|{f}_{\lambda }\left({Y}_{n}\right){|}^{2}\\ +\mathrm{\Delta }t|{Y}_{n}{|}^{2}+\left[{\theta }^{2}\mathrm{\Delta }{t}^{2}+\theta \mathrm{\Delta }t+\theta \left(1-\theta \right)\mathrm{\Delta }{t}^{2}\right]|{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\\ =& |{Y}_{n}{|}^{2}+\left[\left(1-\theta \right)\mathrm{\Delta }{t}^{2}+\left(1-\theta \right)\mathrm{\Delta }t\right]|{f}_{\lambda }\left({Y}_{n}\right){|}^{2}\\ +\mathrm{\Delta }t|{Y}_{n}{|}^{2}+\left[\theta \mathrm{\Delta }{t}^{2}+\theta \mathrm{\Delta }t\right]|{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}.\end{array}$
(3.6)

Since $\mathrm{\Delta }t<1$, $0<\theta <1$, and ${f}_{\lambda }$ satisfies the linear growth condition with constant ${L}_{\lambda }$ from (2.6), we get

$\begin{array}{rl}|{{Y}_{n}}^{\ast }{|}^{2}\le & |{Y}_{n}{|}^{2}+2\left(1-\theta \right)\mathrm{\Delta }t{L}_{\lambda }\left(1+|{Y}_{n}{|}^{2}\right)+\mathrm{\Delta }t|{Y}_{n}{|}^{2}\\ +2\theta \mathrm{\Delta }t{L}_{\lambda }\left(1+|{{Y}_{n}}^{\ast }{|}^{2}\right)\\ \le & |{Y}_{n}{|}^{2}+2\left(1-\theta \right)\mathrm{\Delta }t{L}_{\lambda }|{Y}_{n}{|}^{2}+\mathrm{\Delta }t|{Y}_{n}{|}^{2}\\ +2\theta \mathrm{\Delta }t{L}_{\lambda }|{{Y}_{n}}^{\ast }{|}^{2}+2\left({L}_{\lambda }+{L}_{\lambda }\right)\mathrm{\Delta }t.\end{array}$
(3.7)

Taking expectations on both sides, we obtain

$\begin{array}{rcl}E|{{Y}_{n}}^{\ast }{|}^{2}& \le & \left(1+2\left(1-\theta \right)\mathrm{\Delta }t{L}_{\lambda }+\mathrm{\Delta }t\right)E|{Y}_{n}{|}^{2}\\ +2\theta \mathrm{\Delta }t{L}_{\lambda }E|{{Y}_{n}}^{\ast }{|}^{2}+4{L}_{\lambda }\mathrm{\Delta }t.\end{array}$
(3.8)

Since $2\theta {L}_{\lambda }\mathrm{\Delta }t<1/2$, we have $1-2\theta {L}_{\lambda }\mathrm{\Delta }t>1/2$; then, by $\mathrm{\Delta }t<1$ and $0<\theta <1$, we have

$\begin{array}{rcl}E|{{Y}_{n}}^{\ast }{|}^{2}& \le & \frac{\left(1+2\left(1-\theta \right)\mathrm{\Delta }t{L}_{\lambda }+\mathrm{\Delta }t\right)}{1-2\theta {L}_{\lambda }\mathrm{\Delta }t}E|{Y}_{n}{|}^{2}+\frac{4{L}_{\lambda }\mathrm{\Delta }t}{1-2\theta \mathrm{\Delta }t{L}_{\lambda }}\\ \le & 2\left(1+2{L}_{\lambda }+1\right)E|{Y}_{n}{|}^{2}+8{L}_{\lambda }\\ =& AE|{Y}_{n}{|}^{2}+B,\end{array}$
(3.9)

where $A=4\left(1+{L}_{\lambda }\right)$ and $B=8{L}_{\lambda }$. The proof is completed. □

The next lemma shows that the discrete numerical solutions ${Y}_{n}$ and ${{Y}_{n}}^{\ast }$ ($n=0,1,\dots ,N$), produced by the CSSθ method, have bounded second moments.

Lemma 3.2 Under conditions (2.1)-(2.2), let ${Y}_{n}$ and ${{Y}_{n}}^{\ast }$ ($n=0,1,\dots ,N$) be produced by (2.7) and (2.8), and let $0<\theta <1$, $0<\mathrm{\Delta }t<min\left\{1,\frac{1}{4\theta {L}_{\lambda }}\right\}$. Then

$E|{Y}_{n}{|}^{2}\le {C}_{1}$
(3.10)

and

$E|{{Y}_{n}}^{\ast }{|}^{2}\le {C}_{2},$
(3.11)

where ${C}_{1}$ and ${C}_{2}$ are two positive constants independent of Δt.

Proof By Lemma 2.1, evaluating (3.4) at $t={t}_{n+1}$, we can express the CSSθ method (2.7) and (2.8) in the following form:

$\begin{array}{rcl}{Y}_{n+1}& =& {Y}_{0}+{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}\left[\left(1-\theta \right){f}_{\lambda }\left({Z}_{1}\left(s\right)\right)+\theta {f}_{\lambda }\left({Z}_{2}\left(s\right)\right)\right]\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ +{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}g\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right)+{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}h\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right),\end{array}$

where $n=0,1,\dots ,N-1$.

Squaring both sides, taking expectations, and using the elementary inequality ${\left(a+b+c+d\right)}^{2}\le 4|a{|}^{2}+4|b{|}^{2}+4|c{|}^{2}+4|d{|}^{2}$, we have

$\begin{array}{rcl}E|{Y}_{n+1}{|}^{2}& \le & 4E|{Y}_{0}{|}^{2}+4E|{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}\left[\left(1-\theta \right){f}_{\lambda }\left({Z}_{1}\left(s\right)\right)+\theta {f}_{\lambda }\left({Z}_{2}\left(s\right)\right)\right]\phantom{\rule{0.2em}{0ex}}\mathrm{d}s{|}^{2}\\ +4E|{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}g\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right){|}^{2}\\ +4E|{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}h\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right){|}^{2}.\end{array}$
(3.12)

Now, using the Cauchy-Schwarz inequality and the inequality $|\theta x+\left(1-\theta \right)y{|}^{2}\le \theta |x{|}^{2}+\left(1-\theta \right)|y{|}^{2}$, the linear growth condition (2.6) and Fubini’s theorem, we can get

$\begin{array}{r}E|{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}\left[\left(1-\theta \right){f}_{\lambda }\left({Z}_{1}\left(s\right)\right)+\theta {f}_{\lambda }\left({Z}_{2}\left(s\right)\right)\right]\phantom{\rule{0.2em}{0ex}}\mathrm{d}s{|}^{2}\\ \phantom{\rule{1em}{0ex}}\le TE{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}|\left(1-\theta \right){f}_{\lambda }\left({Z}_{1}\left(s\right)\right)+\theta {f}_{\lambda }\left({Z}_{2}\left(s\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{1em}{0ex}}\le 2TE{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}|{f}_{\lambda }\left({Z}_{1}\left(s\right)\right){|}^{2}+|{f}_{\lambda }\left({Z}_{2}\left(s\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{1em}{0ex}}\le 2T{L}_{\lambda }E{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}2+|{Z}_{1}\left(s\right){|}^{2}+|{Z}_{2}\left(s\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{1em}{0ex}}\le 4{T}^{2}{L}_{\lambda }+2T{L}_{\lambda }{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}E|{Z}_{1}\left(s\right){|}^{2}+E|{Z}_{2}\left(s\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{1em}{0ex}}\le 4{T}^{2}{L}_{\lambda }+2T{L}_{\lambda }\mathrm{\Delta }t\left(\sum _{i=0}^{n}E|{Y}_{i}{|}^{2}+\sum _{i=0}^{n}E|{{Y}_{i}}^{\ast }{|}^{2}\right).\end{array}$
(3.13)

Using the martingale isometry and linear growth condition (2.2), we have

$\begin{array}{rl}E|{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}g\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right){|}^{2}& ={\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}E|g\left({Z}_{2}\left(s\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ =\mathrm{\Delta }t\sum _{i=0}^{n}E|g\left({Y}_{i}^{\ast }\right){|}^{2}\\ \le \mathrm{\Delta }tL\sum _{i=0}^{n}\left(1+E|{Y}_{i}^{\ast }{|}^{2}\right)\\ \le LT+\mathrm{\Delta }tL\sum _{i=0}^{n}E|{{Y}_{i}}^{\ast }{|}^{2}.\end{array}$
(3.14)

For the jump integral, since the compensated Poisson process $\stackrel{˜}{N}\left(t\right)=N\left(t\right)-\lambda t$ is a martingale, we use the isometry

$E|{\int }_{a}^{b}h\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right){|}^{2}=\lambda {\int }_{a}^{b}E|h\left({Z}_{2}\left(s\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s$

(see, for example, [13]), then we have

$\begin{array}{rl}E|{\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}h\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right){|}^{2}& =\lambda {\int }_{0}^{\left(n+1\right)\mathrm{\Delta }t}E|h\left({Z}_{2}\left(s\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ =\lambda \mathrm{\Delta }t\sum _{i=0}^{n}E|h\left({{Y}_{i}}^{\ast }\right){|}^{2}\\ \le \lambda \mathrm{\Delta }tL\sum _{i=0}^{n}\left(1+E|{{Y}_{i}}^{\ast }{|}^{2}\right)\\ \le \lambda TL+\lambda \mathrm{\Delta }tL\sum _{i=0}^{n}E|{{Y}_{i}}^{\ast }{|}^{2}.\end{array}$
(3.15)

Inserting (3.13)-(3.15) in (3.12) gives

$\begin{array}{rcl}E|{Y}_{n+1}{|}^{2}& \le & 4\left(E|{Y}_{0}{|}^{2}+4{T}^{2}{L}_{\lambda }+LT+\lambda TL\right)\\ +4\mathrm{\Delta }t\left(2T{L}_{\lambda }+L+\lambda L\right)\sum _{i=0}^{n}E|{{Y}_{i}}^{\ast }{|}^{2}\\ +8T{L}_{\lambda }\mathrm{\Delta }t\sum _{i=0}^{n}E|{Y}_{i}{|}^{2}.\end{array}$
(3.16)

By Lemma 3.1, we can derive that

$\begin{array}{rcl}E|{Y}_{n+1}{|}^{2}& \le & 4\left(E|{Y}_{0}{|}^{2}+4{T}^{2}{L}_{\lambda }+LT+\lambda TL\right)\\ +4\mathrm{\Delta }t\left(2T{L}_{\lambda }+L+\lambda L\right)\left(A\sum _{i=0}^{n}E|{Y}_{i}{|}^{2}+\left(n+1\right)B\right)\\ +8T{L}_{\lambda }\mathrm{\Delta }t\sum _{i=0}^{n}E|{Y}_{i}{|}^{2}\\ \le & 4\left(E|{Y}_{0}{|}^{2}+4{T}^{2}{L}_{\lambda }+LT+\lambda TL\right)\\ +4\left(n+1\right)B\left(2T{L}_{\lambda }+L+\lambda L\right)\mathrm{\Delta }t\\ +\left[4A\left(2T{L}_{\lambda }+L+\lambda L\right)+8T{L}_{\lambda }\right]\mathrm{\Delta }t\sum _{i=0}^{n}E|{Y}_{i}{|}^{2}\\ \le & {c}_{1}+{c}_{2}\mathrm{\Delta }t\sum _{i=0}^{n}E|{Y}_{i}{|}^{2},\end{array}$
(3.17)

where, since $\left(n+1\right)\mathrm{\Delta }t\le T$,

${c}_{1}=4\left(E|{Y}_{0}{|}^{2}+4{T}^{2}{L}_{\lambda }+LT+\lambda TL\right)+4TB\left(2T{L}_{\lambda }+L+\lambda L\right)$

and

${c}_{2}=4A\left(2T{L}_{\lambda }+L+\lambda L\right)+8T{L}_{\lambda }$

are both independent of Δt.

Then, using the discrete Gronwall inequality and $n\mathrm{\Delta }t\le T$, we obtain

$E|{Y}_{n}{|}^{2}\le {c}_{1}{e}^{{c}_{2}n\mathrm{\Delta }t}\le {c}_{1}{e}^{{c}_{2}T}\equiv {C}_{1}.$

Then, by Lemma 3.1, we can obtain that

$E|{{Y}_{n}}^{\ast }{|}^{2}\le AE|{Y}_{n}{|}^{2}+B\le A{C}_{1}+B\equiv {C}_{2}.$

□
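The discrete Gronwall step can be checked on a toy sequence (an illustrative sketch with arbitrary constants, separate from the proof): a nonnegative sequence satisfying ${x}_{n+1}\le {c}_{1}+{c}_{2}\mathrm{\Delta }t{\sum }_{i=0}^{n}{x}_{i}$ with ${x}_{0}\le {c}_{1}$ stays below ${c}_{1}{e}^{{c}_{2}n\mathrm{\Delta }t}$.

```python
import math

c1, c2, dt = 1.0, 3.0, 0.01   # arbitrary illustrative constants

# Build the worst case: equality at every step of the Gronwall hypothesis,
# which gives x_n = c1 * (1 + c2*dt)**n exactly.
x = [c1]
for n in range(200):
    x.append(c1 + c2 * dt * sum(x))

# The Gronwall bound x_n <= c1 * exp(c2 * n * dt) holds for every n
for n, xn in enumerate(x):
    assert xn <= c1 * math.exp(c2 * n * dt) * (1.0 + 1e-9)
```

The bound follows because $1+{c}_{2}\mathrm{\Delta }t\le {e}^{{c}_{2}\mathrm{\Delta }t}$, mirroring the step from (3.17) to the bound on $E|{Y}_{n}{|}^{2}$.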

The next lemma shows that the continuous-time approximation $Y\left(t\right)$ in (3.4) remains close to the step functions ${Z}_{1}\left(t\right)$ and ${Z}_{2}\left(t\right)$ in the mean square sense.

Lemma 3.3 Under conditions (2.1)-(2.2), let ${{Y}_{n}}^{\ast }$ and ${Y}_{n}$ be produced by (2.7) and (2.8), and let $0<\theta <1$, $0<\mathrm{\Delta }t<min\left\{1,\frac{1}{4\theta {L}_{\lambda }}\right\}$. Then there exist two positive constants ${C}_{3}$ and ${C}_{4}$, independent of Δt, such that

$E|Y\left(t\right)-{Z}_{1}\left(t\right){|}^{2}\le {C}_{3}\mathrm{\Delta }t,$
(3.18)

and

$E|Y\left(t\right)-{Z}_{2}\left(t\right){|}^{2}\le {C}_{4}\mathrm{\Delta }t,$
(3.19)

where $t\in \left[0,T\right]$, ${Z}_{1}\left(t\right)$, ${Z}_{2}\left(t\right)$, and $Y\left(t\right)$ are defined by (3.1), (3.2), (3.4), respectively.

Proof For any $t\in \left[0,T\right]$, there exists a nonnegative integer n such that

$t\in \left[n\mathrm{\Delta }t,\left(n+1\right)\mathrm{\Delta }t\right)\subseteq \left[0,T\right].$

Then we have

$\begin{array}{rcl}Y\left(t\right)-{Z}_{1}\left(t\right)& =& Y\left(t\right)-{Y}_{n}\\ =& {\int }_{n\mathrm{\Delta }t}^{t}\left(1-\theta \right){f}_{\lambda }\left({Z}_{1}\left(s\right)\right)+\theta {f}_{\lambda }\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ +{\int }_{n\mathrm{\Delta }t}^{t}g\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right)\\ +{\int }_{n\mathrm{\Delta }t}^{t}h\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right).\end{array}$

Squaring both sides and using the elementary inequality ${\left(a+b+c\right)}^{2}\le 3|a{|}^{2}+3|b{|}^{2}+3|c{|}^{2}$, we have

$\begin{array}{rcl}|Y\left(t\right)-{Z}_{1}\left(t\right){|}^{2}& \le & 3|{\int }_{n\mathrm{\Delta }t}^{t}\left[\left(1-\theta \right){f}_{\lambda }\left({Z}_{1}\left(s\right)\right)+\theta {f}_{\lambda }\left({Z}_{2}\left(s\right)\right)\right]\phantom{\rule{0.2em}{0ex}}\mathrm{d}s{|}^{2}\\ +3|{\int }_{n\mathrm{\Delta }t}^{t}g\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right){|}^{2}\\ +3|{\int }_{n\mathrm{\Delta }t}^{t}h\left({Z}_{2}\left(s\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right){|}^{2}.\end{array}$

Taking expectations, using the Cauchy-Schwarz inequality, the elementary inequality ${\left(a+b\right)}^{2}\le 2|a{|}^{2}+2|b{|}^{2}$, and the martingale isometries, we have

$\begin{array}{rcl}E|Y\left(t\right)-{Z}_{1}\left(t\right){|}^{2}& \le & 6\mathrm{\Delta }t{\int }_{n\mathrm{\Delta }t}^{t}\left[E|{f}_{\lambda }\left({Z}_{1}\left(s\right)\right){|}^{2}+E|{f}_{\lambda }\left({Z}_{2}\left(s\right)\right){|}^{2}\right]\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ +3{\int }_{n\mathrm{\Delta }t}^{t}E|g\left({Z}_{2}\left(s\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ +3\lambda {\int }_{n\mathrm{\Delta }t}^{t}E|h\left({Z}_{2}\left(s\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.\end{array}$

By the linear growth conditions (2.2) and (2.6), we get

$\begin{array}{rcl}E|Y\left(t\right)-{Z}_{1}\left(t\right){|}^{2}& \le & 6\mathrm{\Delta }t{L}_{\lambda }{\int }_{n\mathrm{\Delta }t}^{t}2+E|{Z}_{1}\left(s\right){|}^{2}+E|{Z}_{2}\left(s\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ +3L\left(1+\lambda \right){\int }_{n\mathrm{\Delta }t}^{t}1+E|{Z}_{2}\left(s\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.\end{array}$

Since ${Z}_{1}\left(t\right)\equiv {Y}_{n}$ and ${Z}_{2}\left(t\right)\equiv {Y}_{n}^{\ast }$ on $\left[n\mathrm{\Delta }t,\left(n+1\right)\mathrm{\Delta }t\right)$, we have

$\begin{array}{rcl}E|Y\left(t\right)-{Z}_{1}\left(t\right){|}^{2}& \le & 6\mathrm{\Delta }{t}^{2}{L}_{\lambda }\left(2+E|{Y}_{n}{|}^{2}+E|{Y}_{n}^{\ast }{|}^{2}\right)\\ +3L\mathrm{\Delta }t\left(1+\lambda \right)\left(1+E|{Y}_{n}^{\ast }{|}^{2}\right).\end{array}$

Then, for each $t\in \left[0,T\right]$, Lemma 3.2 yields

$\begin{array}{rcl}E|Y\left(t\right)-{Z}_{1}\left(t\right){|}^{2}& \le & 6\mathrm{\Delta }{t}^{2}{L}_{\lambda }\left(2+{C}_{1}+{C}_{2}\right)\\ +3L\mathrm{\Delta }t\left(1+\lambda \right)\left(1+{C}_{2}\right)\\ \le & {C}_{3}\mathrm{\Delta }t,\end{array}$
(3.20)

where ${C}_{3}=6{L}_{\lambda }\left(2+{C}_{1}+{C}_{2}\right)+3L\left(1+\lambda \right)\left(1+{C}_{2}\right)$. This proves (3.18).

Now we give the proof of (3.19).

By (2.7) and for each $t\in \left[n\mathrm{\Delta }t,\left(n+1\right)\mathrm{\Delta }t\right]\subseteq \left[0,T\right]$, we get

${Z}_{1}\left(t\right)-{Z}_{2}\left(t\right)={Y}_{n}-{{Y}_{n}}^{\ast }=-\left[\left(1-\theta \right){f}_{\lambda }\left({Y}_{n}\right)+\theta {f}_{\lambda }\left({{Y}_{n}}^{\ast }\right)\right]\mathrm{\Delta }t.$

Using the inequality $|\theta x+\left(1-\theta \right)y{|}^{2}\le \theta |x{|}^{2}+\left(1-\theta \right)|y{|}^{2}$, and $0<\theta <1$, we can get

$\begin{array}{rcl}|{Z}_{1}\left(t\right)-{Z}_{2}\left(t\right){|}^{2}& =& |\left(1-\theta \right){f}_{\lambda }\left({Y}_{n}\right)+\theta {f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\mathrm{\Delta }{t}^{2}\\ \le & \left[\left(1-\theta \right)|{f}_{\lambda }\left({Y}_{n}\right){|}^{2}+\theta |{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\right]\mathrm{\Delta }{t}^{2}\\ \le & \left[|{f}_{\lambda }\left({Y}_{n}\right){|}^{2}+|{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\right]\mathrm{\Delta }{t}^{2}.\end{array}$

Taking mathematical expectation, and by the linear growth condition (2.6),

$\begin{array}{rcl}E|{Z}_{1}\left(t\right)-{Z}_{2}\left(t\right){|}^{2}& \le & \left[E|{f}_{\lambda }\left({Y}_{n}\right){|}^{2}+E|{f}_{\lambda }\left({{Y}_{n}}^{\ast }\right){|}^{2}\right]\mathrm{\Delta }{t}^{2}\\ \le & {L}_{\lambda }\left(2+E|{Y}_{n}{|}^{2}+E|{Y}_{n}^{\ast }{|}^{2}\right)\mathrm{\Delta }{t}^{2}.\end{array}$

Then, by Lemma 3.2 and $\mathrm{\Delta }{t}^{2}\le \mathrm{\Delta }t$ (since $\mathrm{\Delta }t<1$), we derive

$E|{Z}_{1}\left(t\right)-{Z}_{2}\left(t\right){|}^{2}\le {L}_{\lambda }\left(2+{C}_{1}+{C}_{2}\right)\mathrm{\Delta }t.$
(3.21)

Then, by the elementary inequality ${\left(a+b\right)}^{2}\le 2|a{|}^{2}+2|b{|}^{2}$ and using (3.20) and (3.21), we have

$\begin{array}{rcl}E|Y\left(t\right)-{Z}_{2}\left(t\right){|}^{2}& \le & 2E|Y\left(t\right)-{Z}_{1}\left(t\right){|}^{2}+2E|{Z}_{1}\left(t\right)-{Z}_{2}\left(t\right){|}^{2}\\ \le & 2{C}_{3}\mathrm{\Delta }t+2{L}_{\lambda }\left(2+{C}_{1}+{C}_{2}\right)\mathrm{\Delta }t\\ \le & {C}_{4}\mathrm{\Delta }t,\end{array}$

where ${C}_{4}=2{C}_{3}+2{L}_{\lambda }\left(2+{C}_{1}+{C}_{2}\right)$. Then we have proved (3.19). □

Now we use the above lemmas to prove a strong convergence result.

Definition 3.1 A numerical method is said to have strong order of convergence equal to γ if there exists a constant C such that the numerical solution sequence ${Y}_{n}$ produced by this numerical scheme satisfies

$E|{Y}_{n}-X\left(\tau \right)|\le C\mathrm{\Delta }{t}^{\gamma }$

for any fixed $\tau =n\mathrm{\Delta }t\in \left[0,T\right]$ and all sufficiently small Δt.
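Definition 3.1 suggests a standard empirical check (a Monte Carlo sketch with placeholder linear coefficients, using the explicit θ = 0 variant of the scheme for brevity; none of the numerical values come from the paper): simulate coarse paths driven by aggregated fine-grid increments, compare endpoints with a fine-grid reference, and regress the log of the root mean-square error on log Δt.

```python
import numpy as np

def css0_endpoint(x0, lam, dW, dN, dt, f, g, h):
    """Endpoint of the theta = 0 split-step scheme driven by given increments."""
    f_lam = lambda x: f(x) + lam * h(x)        # compensated drift
    y = x0
    for dw, dn in zip(dW, dN):
        y_star = y + f_lam(y) * dt             # explicit first stage
        y = y_star + g(y_star) * dw + h(y_star) * (dn - lam * dt)
    return y

# Placeholder linear coefficients (chosen to satisfy (2.1)-(2.2))
f = lambda x: -2.0 * x
g = lambda x: 0.5 * x
h = lambda x: -0.4 * x
lam, T, x0 = 1.0, 1.0, 1.0

rng = np.random.default_rng(2)
n_fine, steps, n_paths = 2 ** 10, [2 ** 5, 2 ** 6, 2 ** 7], 200
errs = np.zeros((n_paths, len(steps)))
for p in range(n_paths):
    dt_f = T / n_fine
    dW_f = rng.normal(0.0, np.sqrt(dt_f), n_fine)
    dN_f = rng.poisson(lam * dt_f, n_fine)
    ref = css0_endpoint(x0, lam, dW_f, dN_f, dt_f, f, g, h)
    for j, n in enumerate(steps):
        r = n_fine // n                        # fine steps per coarse step
        dW_c = dW_f.reshape(n, r).sum(axis=1)  # shared Brownian increments
        dN_c = dN_f.reshape(n, r).sum(axis=1)  # shared Poisson increments
        y = css0_endpoint(x0, lam, dW_c, dN_c, T / n, f, g, h)
        errs[p, j] = (y - ref) ** 2

rmse = np.sqrt(errs.mean(axis=0))              # root mean-square endpoint error
dts = np.array([T / n for n in steps])
slope = np.polyfit(np.log(dts), np.log(rmse), 1)[0]  # empirical strong order
```

The fitted slope estimates the strong order γ of Definition 3.1; the theory of this section predicts a value near 1/2 for such coefficients.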

Theorem 3.1 Under conditions (2.1)-(2.2), let $0<\theta <1$, $0<\mathrm{\Delta }t<min\left\{1,\frac{1}{4\theta {L}_{\lambda }}\right\}$. Then the continuous-time approximate solution $Y\left(t\right)$ defined by (3.4) converges to the true solution of (2.5) in the mean-square sense, i.e.,

$E\underset{0\le t\le T}{sup}|Y\left(t\right)-X\left(t\right){|}^{2}\le {C}_{5}\mathrm{\Delta }t,$
(3.22)

where ${C}_{5}$ is a positive constant independent of Δt.

Proof From (2.5) and (3.4), we have

$\begin{array}{r}Y\left(t\right)-X\left(t\right)\\ \phantom{\rule{1em}{0ex}}={\int }_{0}^{t}\left(1-\theta \right)\left[{f}_{\lambda }\left({Z}_{1}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right)\right]+\theta \left[{f}_{\lambda }\left({Z}_{2}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right)\right]\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{2em}{0ex}}+{\int }_{0}^{t}g\left({Z}_{2}\left(s\right)\right)-g\left(X\left({s}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right)+{\int }_{0}^{t}h\left({Z}_{2}\left(s\right)\right)-h\left(X\left({s}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right).\end{array}$
(3.23)

For any ${t}_{1}\in \left[0,T\right]$, using the Cauchy-Schwarz inequality and the inequality $|\theta x+\left(1-\theta \right)y{|}^{2}\le \theta |x{|}^{2}+\left(1-\theta \right)|y{|}^{2}$, we have

$\begin{array}{r}E\underset{0\le t\le {t}_{1}}{sup}{|Y\left(t\right)-X\left(t\right)|}^{2}\\ \phantom{\rule{1em}{0ex}}\le 3E\underset{0\le t\le {t}_{1}}{sup}|{\int }_{0}^{t}\left(1-\theta \right)\left[{f}_{\lambda }\left({Z}_{1}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right)\right]\\ \phantom{\rule{2em}{0ex}}+\theta \left[{f}_{\lambda }\left({Z}_{2}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right)\right]\phantom{\rule{0.2em}{0ex}}\mathrm{d}s{|}^{2}\\ \phantom{\rule{2em}{0ex}}+3E\underset{0\le t\le {t}_{1}}{sup}|{\int }_{0}^{t}g\left({Z}_{2}\left(s\right)\right)-g\left(X\left({s}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right){|}^{2}\\ \phantom{\rule{2em}{0ex}}+3E\underset{0\le t\le {t}_{1}}{sup}|{\int }_{0}^{t}h\left({Z}_{2}\left(s\right)\right)-h\left(X\left({s}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right){|}^{2}\\ \phantom{\rule{1em}{0ex}}\le 6\underset{0\le t\le {t}_{1}}{sup}{\int }_{0}^{t}{1}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}sE\underset{0\le t\le {t}_{1}}{sup}{\int }_{0}^{t}|{f}_{\lambda }\left({Z}_{1}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right){|}^{2}\\ \phantom{\rule{2em}{0ex}}+|{f}_{\lambda }\left({Z}_{2}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{2em}{0ex}}+3E\underset{0\le t\le {t}_{1}}{sup}|{\int }_{0}^{t}g\left({Z}_{2}\left(s\right)\right)-g\left(X\left({s}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right){|}^{2}\\ \phantom{\rule{2em}{0ex}}+3E\underset{0\le t\le {t}_{1}}{sup}|{\int }_{0}^{t}h\left({Z}_{2}\left(s\right)\right)-h\left(X\left({s}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right){|}^{2}.\end{array}$

Now using the Doob martingale inequality for the two martingale terms, we have

$\begin{array}{r}E\underset{0\le t\le {t}_{1}}{sup}|Y\left(t\right)-X\left(t\right){|}^{2}\\ \phantom{\rule{1em}{0ex}}\le 6{t}_{1}E{\int }_{0}^{{t}_{1}}|{f}_{\lambda }\left({Z}_{1}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right){|}^{2}+|{f}_{\lambda }\left({Z}_{2}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{2em}{0ex}}+12E|{\int }_{0}^{{t}_{1}}g\left({Z}_{2}\left(s\right)\right)-g\left(X\left({s}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(s\right){|}^{2}\\ \phantom{\rule{2em}{0ex}}+12E|{\int }_{0}^{{t}_{1}}h\left({Z}_{2}\left(s\right)\right)-h\left(X\left({s}^{-}\right)\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\stackrel{˜}{N}\left(s\right){|}^{2}.\end{array}$
(3.24)

Then Fubini’s theorem and the martingale isometries give

$\begin{array}{r}E\underset{0\le t\le {t}_{1}}{sup}|Y\left(t\right)-X\left(t\right){|}^{2}\\ \phantom{\rule{1em}{0ex}}\le 6T{\int }_{0}^{{t}_{1}}E|{f}_{\lambda }\left({Z}_{1}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right){|}^{2}+E|{f}_{\lambda }\left({Z}_{2}\left(s\right)\right)-{f}_{\lambda }\left(X\left({s}^{-}\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{2em}{0ex}}+12{\int }_{0}^{{t}_{1}}E|g\left({Z}_{2}\left(s\right)\right)-g\left(X\left({s}^{-}\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{2em}{0ex}}+12\lambda {\int }_{0}^{{t}_{1}}E|h\left({Z}_{2}\left(s\right)\right)-h\left(X\left({s}^{-}\right)\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.\end{array}$

Applying Lipschitz conditions (2.1) and (2.6), we get

$\begin{array}{r}E\underset{0\le t\le {t}_{1}}{sup}|Y\left(t\right)-X\left(t\right){|}^{2}\\ \phantom{\rule{1em}{0ex}}\le 6T{K}_{\lambda }{\int }_{0}^{{t}_{1}}E|{Z}_{1}\left(s\right)-X\left({s}^{-}\right){|}^{2}+E|{Z}_{2}\left(s\right)-X\left({s}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{2em}{0ex}}+12K{\int }_{0}^{{t}_{1}}E|{Z}_{2}\left(s\right)-X\left({s}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s+12\lambda K{\int }_{0}^{{t}_{1}}E|{Z}_{2}\left(s\right)-X\left({s}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{1em}{0ex}}=6T{K}_{\lambda }{\int }_{0}^{{t}_{1}}E|{Z}_{1}\left(s\right)-X\left({s}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{2em}{0ex}}+6\left(T{K}_{\lambda }+2K+2\lambda K\right){\int }_{0}^{{t}_{1}}E|{Z}_{2}\left(s\right)-X\left({s}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{1em}{0ex}}\le 12T{K}_{\lambda }{\int }_{0}^{{t}_{1}}E|{Z}_{1}\left(s\right)-Y\left({s}^{-}\right){|}^{2}+E|Y\left(s\right)-X\left({s}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{2em}{0ex}}+12\left(T{K}_{\lambda }+2K+2\lambda K\right){\int }_{0}^{{t}_{1}}E|{Z}_{2}\left(s\right)-Y\left({s}^{-}\right){|}^{2}+E|Y\left(s\right)-X\left({s}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.\end{array}$

Finally, applying Lemma 3.3, we have

$\begin{array}{r}E\underset{0\le t\le {t}_{1}}{sup}|Y\left(t\right)-X\left(t\right){|}^{2}\\ \phantom{\rule{1em}{0ex}}\le 12{T}^{2}{K}_{\lambda }{C}_{3}\mathrm{\Delta }t+12\left(T{K}_{\lambda }+2K+2\lambda K\right)T{C}_{4}\mathrm{\Delta }t\\ \phantom{\rule{2em}{0ex}}+12\left(T{K}_{\lambda }+T{K}_{\lambda }+2K+2\lambda K\right){\int }_{0}^{{t}_{1}}E|Y\left(s\right)-X\left({s}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s\\ \phantom{\rule{1em}{0ex}}\le 12{T}^{2}{K}_{\lambda }{C}_{3}\mathrm{\Delta }t+12\left(T{K}_{\lambda }+2K+2\lambda K\right)T{C}_{4}\mathrm{\Delta }t\\ \phantom{\rule{2em}{0ex}}+12\left(2T{K}_{\lambda }+2K+2\lambda K\right){\int }_{0}^{{t}_{1}}E\underset{0\le r\le s}{sup}|Y\left(r\right)-X\left({r}^{-}\right){|}^{2}\phantom{\rule{0.2em}{0ex}}\mathrm{d}s.\end{array}$
(3.25)

Using the Gronwall inequality (see [14]), we have

$E\underset{0\le t\le {t}_{1}}{sup}|Y\left(t\right)-X\left(t\right){|}^{2}\le {C}_{5}\mathrm{\Delta }t.$
(3.26)

Since the constant ${C}_{5}$ does not depend on ${t}_{1}$, taking ${t}_{1}=T$ gives

$E\underset{0\le t\le T}{sup}|Y\left(t\right)-X\left(t\right){|}^{2}\le {C}_{5}\mathrm{\Delta }t.$
(3.27)

□

## 4 Mean-square stability

In order to study the stability property of the CSSθ method, we consider a linear test equation with scalar coefficients

$\mathrm{d}X\left(t\right)=aX\left({t}^{-}\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}t+bX\left({t}^{-}\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(t\right)+cX\left({t}^{-}\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}N\left(t\right),$
(4.1)

where $a,b,c\in \mathbb{R}$. The following characterization of the mean-square stability of the zero solution of equation (4.1) was proved in [1]:

$\underset{t\to \mathrm{\infty }}{lim}E|X\left(t\right){|}^{2}=0\phantom{\rule{1em}{0ex}}⇔\phantom{\rule{1em}{0ex}}2a+{b}^{2}+\lambda c\left(c+2\right)<0.$
(4.2)
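For concrete coefficients, condition (4.2) is a one-line check. The following Python sketch is ours for illustration (the name `ms_stable` is not from the paper); the sample parameter sets anticipate the examples of Section 5:

```python
def ms_stable(a, b, c, lam):
    """Analytical MS-stability condition (4.2): 2a + b^2 + lam*c*(c+2) < 0."""
    return 2 * a + b**2 + lam * c * (c + 2) < 0

# Example 5.1: a=-7, b=1, c=1, lam=4 gives -14 + 1 + 12 = -1 < 0 (stable)
# Example 5.2: a=2, b=2, c=-0.9, lam=9 gives 8 - 8.91 = -0.91 < 0 (stable)
```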

Applying the CSSθ method (2.7)-(2.8) to equation (4.1), we have

${{Y}_{n}}^{\ast }={Y}_{n}+\left[\left(1-\theta \right)\left(a+\lambda c\right){Y}_{n}+\theta \left(a+\lambda c\right){{Y}_{n}}^{\ast }\right]h,$
(4.3)
${Y}_{n+1}={{Y}_{n}}^{\ast }+b{{Y}_{n}}^{\ast }\mathrm{\Delta }{W}_{n}+c{{Y}_{n}}^{\ast }\mathrm{\Delta }{\stackrel{˜}{N}}_{n}.$
(4.4)

Definition 4.1 Under condition (4.2), a numerical method applied to equation (4.1) is said to be MS-stable if there exists ${h}_{0}\left(a,b,c,\lambda \right)>0$ such that the numerical solution sequence ${Y}_{n}$ produced by this numerical scheme satisfies

$\underset{n\to \mathrm{\infty }}{lim}E|{Y}_{n}{|}^{2}=0$
(4.5)

for all $h\in \left(0,{h}_{0}\left(a,b,c,\lambda \right)\right)$.

Theorem 4.1 Assume that condition (4.2) holds. Then for

$\mathrm{\Delta }t<{h}_{0}\left(a,b,c,\lambda ,\theta \right)=\frac{-B+\sqrt{{B}^{2}-4AC}}{2A},$
(4.6)

where

$\begin{array}{c}A={\left(1-\theta \right)}^{2}{\left(a+\lambda c\right)}^{2}\left({b}^{2}+\lambda {c}^{2}\right),\hfill \\ B=\left(1-2\theta \right){\left(a+\lambda c\right)}^{2}+2\left(1-\theta \right)\left(a+\lambda c\right)\left({b}^{2}+\lambda {c}^{2}\right),\hfill \\ C=2a+{b}^{2}+\lambda c\left(c+2\right),\hfill \\ \theta \in \left[0,1\right),\hfill \end{array}$

the CSSθ method (2.7)-(2.8) applied to equation (4.1) is MS-stable.

Proof Note that condition (4.2) implies $a+\lambda c<0$, so that $1-\theta \left(a+\lambda c\right)h\ge 1>0$; hence from (4.3) we have

${{Y}_{n}}^{\ast }=\frac{1+\left(1-\theta \right)\left(a+\lambda c\right)h}{1-\theta \left(a+\lambda c\right)h}{Y}_{n}.$
(4.7)

Substituting this into (4.4) yields

${Y}_{n+1}=\frac{1+\left(1-\theta \right)\left(a+\lambda c\right)h}{1-\theta \left(a+\lambda c\right)h}\left(1+b\mathrm{\Delta }{W}_{n}+c\mathrm{\Delta }{\stackrel{˜}{N}}_{n}\right){Y}_{n}.$
(4.8)

Squaring both sides of (4.8), we get

$|{Y}_{n+1}{|}^{2}={\left(\frac{1+\left(1-\theta \right)\left(a+\lambda c\right)h}{1-\theta \left(a+\lambda c\right)h}\right)}^{2}{\left(1+b\mathrm{\Delta }{W}_{n}+c\mathrm{\Delta }{\stackrel{˜}{N}}_{n}\right)}^{2}|{Y}_{n}{|}^{2}.$
(4.9)

Noting that $E\left(\mathrm{\Delta }{W}_{n}\right)=0$, $E\left[{\left(\mathrm{\Delta }{W}_{n}\right)}^{2}\right]=h$, $E\left(\mathrm{\Delta }{\stackrel{˜}{N}}_{n}\right)=0$, $E\left[{\left(\mathrm{\Delta }{\stackrel{˜}{N}}_{n}\right)}^{2}\right]=\lambda h$, and that ${Y}_{n}$, $\mathrm{\Delta }{W}_{n}$ and $\mathrm{\Delta }{\stackrel{˜}{N}}_{n}$ are mutually independent, we have

$E|{Y}_{n+1}{|}^{2}={\left(\frac{1+\left(1-\theta \right)\left(a+\lambda c\right)h}{1-\theta \left(a+\lambda c\right)h}\right)}^{2}\left(1+{b}^{2}h+\lambda {c}^{2}h\right)E|{Y}_{n}{|}^{2}.$
(4.10)

Iterating (4.10), we conclude that ${lim}_{n\to \mathrm{\infty }}E|{Y}_{n}{|}^{2}=0$ if

${\left(\frac{1+\left(1-\theta \right)\left(a+\lambda c\right)h}{1-\theta \left(a+\lambda c\right)h}\right)}^{2}\left(1+{b}^{2}h+\lambda {c}^{2}h\right)<1,$
(4.11)

which is equivalent to

${\left(1+\left(1-\theta \right)\left(a+\lambda c\right)h\right)}^{2}\left(1+{b}^{2}h+\lambda {c}^{2}h\right)<{\left(1-\theta \left(a+\lambda c\right)h\right)}^{2},$
(4.12)

i.e.,

$\begin{array}{r}\left({\left(1-\theta \right)}^{2}{\left(a+\lambda c\right)}^{2}\left({b}^{2}+\lambda {c}^{2}\right)\right){h}^{2}\\ \phantom{\rule{1em}{0ex}}+\left[\left(1-2\theta \right){\left(a+\lambda c\right)}^{2}+2\left(1-\theta \right)\left(a+\lambda c\right)\left({b}^{2}+\lambda {c}^{2}\right)\right]h\\ \phantom{\rule{1em}{0ex}}+2a+{b}^{2}+\lambda c\left(c+2\right)<0.\end{array}$
(4.13)

Let

$\begin{array}{rcl}f\left(h\right)& =& \left({\left(1-\theta \right)}^{2}{\left(a+\lambda c\right)}^{2}\left({b}^{2}+\lambda {c}^{2}\right)\right){h}^{2}\\ +\left[\left(1-2\theta \right){\left(a+\lambda c\right)}^{2}+2\left(1-\theta \right)\left(a+\lambda c\right)\left({b}^{2}+\lambda {c}^{2}\right)\right]h\\ +2a+{b}^{2}+\lambda c\left(c+2\right).\end{array}$
(4.14)

If $\theta =1$, (4.13) becomes

$-{\left(a+\lambda c\right)}^{2}h+2a+{b}^{2}+\lambda c\left(c+2\right)<0.$
(4.15)

By (4.2), we know that (4.15) holds for all $h>0$, i.e., the CSSθ method is MS-stable for every step size. Note that for $\theta =1$ the CSSθ method reduces to the CSSBE method, and (4.15) coincides with Theorem 7 of [4].

If $\theta \in \left[0,1\right)$, let

$\begin{array}{r}A={\left(1-\theta \right)}^{2}{\left(a+\lambda c\right)}^{2}\left({b}^{2}+\lambda {c}^{2}\right),\\ B=\left(1-2\theta \right){\left(a+\lambda c\right)}^{2}+2\left(1-\theta \right)\left(a+\lambda c\right)\left({b}^{2}+\lambda {c}^{2}\right),\\ C=2a+{b}^{2}+\lambda c\left(c+2\right).\end{array}$
(4.16)

In view of (4.2), we have $a+\lambda c<0$. Moreover, $A\ne 0$: if $A=0$, then ${b}^{2}+\lambda {c}^{2}=0$, i.e., $b=0$ and $c=0$, in which case equation (4.1) degenerates into a deterministic equation. Hence

$A>0,\phantom{\rule{2em}{0ex}}C<0,\phantom{\rule{2em}{0ex}}\mathrm{\Delta }={B}^{2}-4AC>{B}^{2}\ge 0.$
(4.17)

Since $A>0$ and $f\left(0\right)=C<0$, $f\left(h\right)=0$ has two real roots ${h}_{0}$ and ${h}_{1}$ of opposite signs, with ${h}_{1}<0<{h}_{0}$, where

$\begin{array}{r}{h}_{0}\left(a,b,c,\lambda ,\theta \right)=\frac{-B+\sqrt{\mathrm{\Delta }}}{2A}>0,\\ {h}_{1}\left(a,b,c,\lambda ,\theta \right)=\frac{-B-\sqrt{\mathrm{\Delta }}}{2A}<0.\end{array}$
(4.18)

Consequently, $f\left(h\right)<0$ holds when

$h\in \left(0,{h}_{0}\left(a,b,c,\lambda ,\theta \right)\right).$

From (4.13), we know that the CSSθ method is MS-stable. This proves the theorem. □
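For $\theta \in \left[0,1\right)$, the bound ${h}_{0}$ of Theorem 4.1 is straightforward to evaluate numerically. A minimal Python sketch (the function name `h0_css_theta` is ours); the commented values agree with those quoted in Section 5:

```python
import math

def h0_css_theta(a, b, c, lam, theta):
    """Step-size bound h0 of Theorem 4.1, valid for theta in [0, 1)."""
    p = a + lam * c          # drift coefficient of the compensated equation
    q = b**2 + lam * c**2    # second-moment contribution of noise and jumps
    A = (1 - theta)**2 * p**2 * q
    B = (1 - 2 * theta) * p**2 + 2 * (1 - theta) * p * q
    C = 2 * a + b**2 + lam * c * (c + 2)
    return (-B + math.sqrt(B**2 - 4 * A * C)) / (2 * A)

# Example 5.1 (a=-7, b=1, c=1, lam=4): h0 ≈ 0.5897 for theta=0.1,
# and h0 ≈ 1.0583 for theta=0.4.
```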

## 5 Numerical experiments

We consider the following equation:

$\left\{\begin{array}{l}\mathrm{d}X\left(t\right)=aX\left({t}^{-}\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}t+bX\left({t}^{-}\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}W\left(t\right)+cX\left({t}^{-}\right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}N\left(t\right),\\ X\left(0\right)=1.\end{array}$
(5.1)

Equation (5.1) has the exact solution

$X\left(t\right)=X\left(0\right)exp\left(\left(a-\frac{1}{2}{b}^{2}\right)t+bW\left(t\right)\right){\left(1+c\right)}^{N\left(t\right)},$
(5.2)

see, for example, [15].
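For simulation purposes, sampling the exact solution (5.2) on a grid only requires the running values of $W\left(t\right)$ and $N\left(t\right)$. A minimal NumPy sketch, assuming independent Brownian and Poisson drivers (the function name `exact_path` is ours):

```python
import numpy as np

def exact_path(a, b, c, lam, T, n, rng):
    """Sample the exact solution (5.2) on a uniform grid with n steps."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    # cumulative Brownian and Poisson paths, both starting from 0
    W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    N = np.concatenate(([0.0], np.cumsum(rng.poisson(lam * dt, n))))
    return t, np.exp((a - 0.5 * b**2) * t + b * W) * (1.0 + c) ** N
```

With $b=c=0$ the path collapses to the deterministic solution $exp\left(at\right)$, which gives a quick sanity check of the formula.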

To illustrate the convergence order and the linear mean-square stability of the CSSθ method, we choose the following examples from [7].

Example 5.1 $a=-7$, $b=1$, $c=1$, $\lambda =4$.

Example 5.2 $a=2$, $b=2$, $c=-0.9$, $\lambda =9$.

In this section, the data shown in all figures are obtained by averaging over 1,000 trajectories ${\omega }_{i}$, $1\le i\le \text{1,000}$; that is, $E|{Y}_{n}{|}^{2}$ is estimated by $\frac{1}{\text{1,000}}{\sum }_{i=1}^{\text{1,000}}|{Y}_{n}\left({\omega }_{i}\right){|}^{2}$. In all figures ${t}_{n}$ denotes the mesh-point.

To show the strong convergence order of the CSSθ method, we apply it to Example 5.1. First, we plot the exact solution of Example 5.1 for one sample path together with the CSSθ approximation in Figure 1. Then we simulate the numerical solutions with five different step sizes $h={2}^{p-1}\mathrm{\Delta }t$ for $1\le p\le 5$, where $\mathrm{\Delta }t={2}^{-14}$. The mean-square errors $\epsilon =\frac{1}{\text{1,000}}{\sum }_{i=1}^{\text{1,000}}|{Y}_{n}\left({\omega }_{i}\right)-X\left(T\right){|}^{2}$, all measured at time $T=1$, are estimated by trajectory averaging. We plot our approximation to ϵ against Δt on a log-log scale. For reference, a dashed line of slope one-half is added. In Figure 2 the slopes of the two curves appear to match well. Hence, our results are consistent with a strong order of convergence equal to 1/2.
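The experiment can be reproduced along the following lines. For the linear test equation the implicit stage (4.3) has the closed-form solution (4.7), so each CSSθ step is explicit, and the exact terminal value comes from (5.2) with the same driving increments. The function name `css_theta_error` and the small trajectory count are our illustrative choices, not the paper's exact setup:

```python
import numpy as np

def css_theta_error(a, b, c, lam, theta, T, n_fine, levels, n_paths, seed=0):
    """Mean-square error of the CSSθ method at T, one fine path per sample."""
    rng = np.random.default_rng(seed)
    dt = T / n_fine
    errs = np.zeros(len(levels))
    for _ in range(n_paths):
        dW = rng.normal(0.0, np.sqrt(dt), n_fine)
        dN = rng.poisson(lam * dt, n_fine)
        # exact solution (5.2) at T from the same driving path
        X_T = np.exp((a - 0.5 * b**2) * T + b * dW.sum()) * (1.0 + c) ** dN.sum()
        for j, m in enumerate(levels):          # coarse step size h = m * dt
            h = m * dt
            R = (1 + (1 - theta) * (a + lam * c) * h) / (1 - theta * (a + lam * c) * h)
            Y = 1.0
            for k in range(n_fine // m):
                DW = dW[k * m:(k + 1) * m].sum()            # Brownian increment
                DN = dN[k * m:(k + 1) * m].sum() - lam * h  # compensated jump
                Y = R * Y * (1 + b * DW + c * DN)
            errs[j] += (Y - X_T) ** 2
    return errs / n_paths
```

Plotting the returned errors against the step sizes on a log-log scale should reproduce the qualitative picture of Figure 2.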

To illustrate the effect of the step size h on the mean-square stability of the CSSθ method, we apply it to Examples 5.1 and 5.2.

For Example 5.1, we first choose $\theta =0.1$; then by Theorem 4.1 the CSSθ method is MS-stable for step sizes $h\in \left(0,{h}_{0}\right)$ with ${h}_{0}\left(a,b,c,\lambda ,\theta \right)=0.5897$. Figure 3 illustrates that the numerical solution produced by the CSSθ method is MS-stable when $h=1/2$. By contrast, applied to the same test equation with $\theta =0.1$, the CST method is MS-stable only for step sizes $h\in \left(0,0.138\right)$ by Theorem 3.1 in [7].
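The threshold behaviour in this example can also be read off from the one-step mean-square recursion (4.10): the scheme is MS-stable exactly when the per-step amplification factor is below one. A small sketch (`ms_factor` is our illustrative name):

```python
def ms_factor(a, b, c, lam, theta, h):
    """Per-step MS amplification factor of the CSSθ method, from (4.10)."""
    p = a + lam * c
    R = (1.0 + (1.0 - theta) * p * h) / (1.0 - theta * p * h)
    # MS-stable iff this factor is strictly less than 1
    return R**2 * (1.0 + (b**2 + lam * c**2) * h)
```

For Example 5.1 with $\theta =0.1$ the factor is below one at $h=1/2$, consistent with Figure 3, and exceeds one at $h=0.6>{h}_{0}=0.5897$.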

When we choose $\theta =0.4$, by Theorem 4.1 the CSSθ method is MS-stable for $h\in \left(0,{h}_{0}\right)$ with ${h}_{0}\left(a,b,c,\lambda ,\theta \right)=1.0583$, while the CST method in [7] is MS-stable only for step sizes $h\in \left(0,0.556\right)$. Figure 4 illustrates that the numerical solution produced by the CSSθ method is MS-stable when $h=1$. We also note that the Euler-Maruyama (EM) method in [1] is MS-stable for Example 5.1 only when the step size $h\in \left(0,0.111\right)$.

Remark 1 Figures 3 and 4 indicate that the restriction on the step size h required for MS-stability is weaker for the CSSθ method than for both the CST method and the EM method.

For Example 5.2, we note that $c=-0.9<0$; hence the theta method in [1] is not guaranteed to preserve stability for every step size. However, if we choose $\theta =0.1$, then by Theorem 4.1 the CSSθ method is MS-stable when ${h}_{0}\left(a,b,c,\lambda ,\theta \right)=0.2862$, and when $\theta =0.4$, ${h}_{0}\left(a,b,c,\lambda ,\theta \right)=0.5091$. Figure 5 and Figure 6 (upper) illustrate that the numerical solution produced by the CSSθ method is MS-stable for Example 5.2 when the step size $h\in \left(0,{h}_{0}\left(a,b,c,\lambda ,\theta \right)\right)=\left(0,0.5091\right)$.

Finally, Figure 6 (lower) shows that the numerical solution of the CSSθ method is still stable when $h=0.6>{h}_{0}\left(a,b,c,\lambda ,\theta \right)=0.5091$. This suggests that the mean-square stability bound obtained in Theorem 4.1 may not be optimal.

## References

1. Higham DJ, Kloeden PE: Convergence and stability of implicit methods for jump-diffusion. Int. J. Numer. Anal. Model. 2006, 3: 125-140.

2. Higham DJ, Kloeden PE: Strong convergence rates for backward Euler on a class of nonlinear jump-diffusion problems. J. Comput. Appl. Math. 2007, 205: 949-956. 10.1016/j.cam.2006.03.039

3. Chalmers GD, Higham DJ: Convergence and stability analysis for implicit simulations of stochastic differential equations with random jump magnitudes. Discrete Contin. Dyn. Syst., Ser. B 2008, 9: 47-64.

4. Higham DJ, Kloeden PE: Numerical methods for nonlinear stochastic differential equations with jumps. Numer. Math. 2005, 101: 101-119. 10.1007/s00211-005-0611-8

5. Bruti-Liberati N, Platen E: On the weak approximation of jump-diffusion processes. Technical report, University of Technology Sydney, Sydney; 2006.

6. Bruti-Liberati N, Platen E: Strong approximations of stochastic differential equations with jumps. J. Comput. Appl. Math. 2007, 205: 982-1001. 10.1016/j.cam.2006.03.040

7. Wang XJ, Gan SQ: Compensated stochastic theta methods for stochastic differential equations with jumps. Appl. Numer. Math. 2010, 60: 877-887. 10.1016/j.apnum.2010.04.012

8. Hu L, Gan SQ: Convergence and stability of the balanced methods for stochastic differential equations with jumps. Int. J. Comput. Math. 2011, 88: 2089-2108. 10.1080/00207160.2010.521548

9. Ding XH, Ma Q, Zhang L: Convergence and stability of the split-step θ -method for stochastic differential equations. Comput. Math. Appl. 2010, 60: 1310-1321. 10.1016/j.camwa.2010.06.011

10. Gikhman II, Skorokhod AV: Stochastic Differential Equations. Springer, Berlin; 1972.

11. Sobczyk K: Stochastic Differential Equations with Applications to Physics and Engineering. Kluwer Academic, Dordrecht; 1991.

12. Smart DR: Fixed Point Theorems. Cambridge University Press, Cambridge; 1974.

13. Gardon A: The order of approximation for solutions of Itô-type stochastic differential equations with jumps. Stoch. Anal. Appl. 2004, 22: 679-699. 10.1081/SAP-120030451

14. Mao XR: Stochastic Differential Equations and Applications. Ellis Horwood, Chichester; 1997.

15. Glasserman P: Monte Carlo Methods in Financial Engineering. Springer, Berlin; 2003.

## Acknowledgements

This research was supported with funds provided by the National Natural Science Foundation of China (Nos. 11226321, 11272229 and 11102132). We thank two anonymous reviewers for their very valuable comments and helpful suggestions which improved this paper significantly.

## Author information

### Corresponding author

Correspondence to Jianguo Tan.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All the authors contributed equally to this work. They all read and approved the final version of the manuscript.


Tan, J., Mu, Z. & Guo, Y. Convergence and stability of the compensated split-step θ-method for stochastic differential equations with jumps. Adv Differ Equ 2014, 209 (2014). https://doi.org/10.1186/1687-1847-2014-209