An improved Milstein method for stiff stochastic differential equations
Advances in Difference Equations volume 2015, Article number: 369 (2015)
Abstract
To solve stiff stochastic differential equations, we propose an improved Milstein method, which is constructed by adding an error correction term to the Milstein scheme. The correction term is derived from an approximation of the difference between the exact solution of the stochastic differential equations and the Milstein continuous-time extension. The scheme is proved to be strongly convergent with order one and is as easy to implement as standard explicit schemes, but much more efficient for solving stiff stochastic problems. The efficiency and the advantage of the method lie in its very large stability region. For a linear scalar test equation, it is shown that the mean-square stability domain of the method is much larger than that of the Milstein method. Finally, numerical examples are reported to highlight the accuracy and effectiveness of the method.
1 Introduction
Stochastic differential equations (SDEs) play a prominent role in a range of scientific areas like biology, chemistry, epidemiology, mechanics, microelectronics, and finance [1–6]. Since explicit solutions are rarely available for nonlinear SDEs, numerical approximations become increasingly important in many applications. To make the implementation viable, effective numerical methods are clearly the key ingredient and deserve much investigation. In the present work we make efforts in this direction and propose a new efficient scheme, which enjoys cheap computational costs in a strong approximation of stiff SDEs.
As the problem under consideration, we look at the following d-dimensional SDE driven by multiplicative noise:
where T is a positive constant, \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is the drift coefficient and \(g^{j}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), \(j=1,2,\ldots,m\), are the diffusion coefficients. Moreover, \(W^{j}(t)\), \(j=1,2,\ldots,m\), are independent scalar Wiener processes defined on the complete probability space \((\Omega,\mathcal{F},P)\) with a filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\) satisfying the usual conditions (that is, it is increasing and right continuous while \(\mathcal{F}_{0}\) contains all P-null sets). Furthermore, the initial data \(X_{0}\) is assumed to be independent of the Wiener processes and to satisfy \(\mathbb{E}|X_{0}|^{2}<\infty\).
Over the last decades, much progress has been made in the construction and analysis of various numerical schemes for (1.1) from different numerical points of view; see, e.g., [7–26]. Roughly speaking, the existing literature distinguishes two major types of numerical methods, explicit and implicit. As in the deterministic case, explicit methods [7–9] are easy to implement and are advocated for non-stiff problems. For stiff problems, however, standard explicit methods with poor stability properties suffer severe step-size reduction and turn out to be inefficient in terms of overall computational cost. In order to address this issue, a number of implicit methods, including drift-implicit methods [10, 15] and fully implicit methods [12, 16–19, 21, 22], have been introduced; these possess better stability properties than the explicit methods and thus are well adapted for stiff problems. Although implicit methods can usually ease the difficulty arising from stiffness in SDEs, one needs to solve (possibly large) nonlinear algebraic equations at each time step. As a result, traditional implicit methods can still be costly when used to approximate large stiff systems, for instance, SDEs produced by the spatial discretization of stochastic partial differential equations. In this paper, an improved Milstein (IM) method is developed, which successfully avoids solving the nonlinear algebraic equations encountered with the implicit methods mentioned above. More importantly, the proposed scheme admits good mean-square stability (MS-stability) properties and therefore serves as a good candidate for treating stiff SDEs.
For simplicity of presentation, here we restrict ourselves to the special case \(d =m =1\) in (1.1) and refer to Section 2 for the general case. In this setting we introduce the IM method for equation (1.1) as follows:
where \(Y_{n}\) is the approximation of the exact solution \(X(t)\) of (1.1) at time \(t_{n} = nh\), with h being the time step size. Apparently, scheme (1.2) can be regarded as a modification of the classical Milstein scheme obtained by adding the error correction term \((1-hf'(\bar {Y}_{n+1}) )^{-1}h (f(\bar{Y}_{n+1})-f(Y_{n}) )\), where \(\bar {Y}_{n+1}\) is produced by the classical Milstein method. The benefits of this error correction term are twofold. On the one hand, the MS-stability property of the new scheme is much better than that of the classical Milstein method, and no nonlinear algebraic equations have to be solved during implementation. On the other hand, the computational accuracy of the IM method is, to a certain extent, improved even for non-stiff problems (see Table 1 in Section 5), despite the fact that the strong convergence rate of the IM method remains the same as that of the classical Milstein method. We shall elaborate the derivation of the IM method in the forthcoming section. In short, the derivation of (1.2) is based on a local truncation error analysis, and the error correction term comes from a certain approximation of the difference between the true solution of the SDE and the one-step explicit Milstein approximation. Further, the proposed method (1.2) is justified by proving its strong convergence of order one under standard assumptions (see Theorem 3.4). Also, the linear MS-stability of the numerical scheme is examined, and it turns out that the proposed scheme possesses a much larger stability domain than the classical Milstein method.
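For concreteness, one step of scheme (1.2) can be sketched as follows. This is a minimal illustration, not the authors' code; the callables `f`, `df`, `g`, `dg` (drift, its derivative, diffusion, its derivative) are assumptions of this sketch.

```python
import math

def im_step(y, h, dW, f, df, g, dg):
    """One step of the improved Milstein (IM) scheme (1.2), scalar case d = m = 1."""
    # Classical Milstein predictor \bar{Y}_{n+1}.
    y_bar = y + h * f(y) + g(y) * dW + 0.5 * g(y) * dg(y) * (dW ** 2 - h)
    # Error correction term (1 - h f'(\bar{Y}_{n+1}))^{-1} h (f(\bar{Y}_{n+1}) - f(Y_n)).
    corr = h * (f(y_bar) - f(y)) / (1.0 - h * df(y_bar))
    return y_bar + corr

# Deterministic sanity check (g = 0, dW = 0): for f(x) = -x, h = 0.1, the IM step
# returns 0.9 + 0.01/1.1, which is closer to exp(-0.1) than the Euler step 0.9.
y1 = im_step(1.0, 0.1, 0.0,
             lambda x: -x, lambda x: -1.0,
             lambda x: 0.0, lambda x: 0.0)
```

With zero noise the predictor reduces to an Euler step and the corrector supplies the deterministic error correction, as the check above illustrates.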
In addition, it is worthwhile to point out that the proposed scheme is close to the Rosenbrock type methods in the literature [24, 27] due to the presence of the inverse Jacobian matrices. Despite the similarity, the scheme we develop here does not coincide with any Rosenbrock type method formulated in [24, 27]. Indeed, the new scheme can be regarded as a modified version of the predictor-corrector method. Based on the classical Milstein method as a predictor, a corrector term involving the inverse Jacobian matrices is determined following the approach in Section 2 of the manuscript. This approach is different from the idea behind the Rosenbrock type methods in the literature. Similarly to Rosenbrock methods, the proposed scheme is well suited for stiff problems whose stiffness appears in linear terms of the drift coefficients, and it may lose efficiency in the case of stiff nonlinear terms. Finally, we would like to mention that the idea of error correction methods based on Chebyshev collocation was previously employed in [28] to construct methods for stiff deterministic ordinary differential equations.
The remainder of the paper is organized as follows. In the next section, we present how to construct the IM scheme based on a local truncation error analysis. In Section 3, the strong convergence order in the mean-square sense is analyzed. Section 4 is devoted to the MS-stability of the IM method. Numerical experiments confirming the accuracy and effectiveness of the method are reported in Section 5. At the end of this article, brief conclusions are drawn.
2 Derivation of the IM method
In this section, to illuminate the derivation of the IM method, we first restrict our attention to a scalar SDE driven by a scalar Wiener process, i.e.,
The extension to a general multidimensional case will be presented at the end of this section. For SDE (2.1), we define a uniform mesh on the finite time interval \([0,T ]\) with step size \(h=\frac{T}{N}\), \(N\in\mathbb{N}^{+}\) by
Based on a local truncation error analysis, we aim to devise a one-step approximation of \(X(t_{1})\), denoted by \(Y_{1}\), starting with the initial value \(Y_{0} =X_{0}\). To this end, we introduce an auxiliary one-step approximation \(\bar{Y}_{1}\) defined by
where \(\Delta W_{1}:= W(t_{1}) - W(t_{0})\). Note that \(\bar{Y}_{1}\) can be viewed as a one-step approximation generated by the classical Milstein method [8, 29] starting with the initial value \(Y_{0} = X_{0}\). Moreover, we define the continuous-time extension \(\tilde{Y}(t)\) on the interval \([t_{0},t_{1}]\) such that
Note that \(\tilde{Y}(t_{0})=Y_{0}\) and \(\tilde{Y}(t_{1})=\bar{Y}_{1}\). Now let us examine the difference between \(X(t)\) and \(\tilde{Y}(t)\) defined by
Further, we attempt to find a proper approximation of \(\varphi(t_{1})\), denoted by \(\bar{\varphi}_{1}\), based on which we introduce a new one-step approximation \(Y_{1}\) given by
Such a scheme can be regarded as an improved Milstein method, which is expected to reduce the local truncation error compared with the original Milstein scheme (2.2). Bearing this idea in mind, we carry out the following analysis. Recall that \(X(t_{0})=X_{0}=Y_{0}=\tilde{Y}(t_{0})\) and therefore \(\varphi(t_{0})=0\). For \(t\in(t_{0},t_{1}]\), it follows from (2.1) and (2.3) that
where we denote
and
for \(t\in(t_{0},t_{1}]\). To sum up, \(\varphi(t)\) is governed by the following SDE:
With \(m(t)\) and \(h(t)\) being approximated by
for \(t\in(t_{0},t_{1}]\), we can get the following approximation equation of (2.11):
Here the process \(\bar{\varphi}(t)\) is an approximation to the exact solution \(\varphi(t)\) of (2.11) for \(t\in(t_{0},t_{1}]\). As mentioned earlier, we are interested in the approximation \(\overline{\varphi}_{1}\) to \(\bar{\varphi}(t_{1})\), which is used to construct the new scheme. To this aim, we apply the semi-implicit Milstein method [15] with \(\theta=1\) to the linear SDE (2.12) to obtain
where the second equality holds since \(\tilde{Y}(t_{1})=\bar{Y}_{1}\), \(\overline{\varphi}_{0} = \overline{\varphi} (t_{0})=0\), and
According to (2.5) we construct the following local improved Milstein method:
where \(\bar{Y}_{1}\) is given by (2.2). As a result, we propose the following global scheme for (2.1):
where \(\Delta W_{n}=W(t_{n+1})-W(t_{n})\).
For the general system (1.1) with \(d,m>1\), the above analysis can be adapted without any difficulty. Recall that the classical Milstein method in the multidimensional setting is given by
where for \(j_{1},j_{2}=1,2,\ldots,m\) we denote
with \(x^{i}\) and \(g^{i,j_{1}}\) being the ith elements of the vector functions x and \(g^{j_{1}}\), respectively. Along the same lines as above, we derive the IM method for the general system (1.1):
where I is the d-dimensional identity matrix and \(f'\) stands for the Jacobian matrix of the vector-valued function f.
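As an illustration of the multidimensional formula, the sketch below implements one IM step for \(d=2\) with a single Wiener process (\(m=1\)), in which case the general Milstein term reduces to \(\frac{1}{2}g'(Y)g(Y)(\Delta W^{2}-h)\); the 2x2 linear solve plays the role of the inverse \((I-hf'(\bar{Y}))^{-1}\). All function names here are placeholders of this sketch, not notation from the paper.

```python
def solve2(a, b):
    # Solve the 2x2 linear system a x = b by Cramer's rule.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

def im_step_2d(y, h, dW, f, jac_f, g, jac_g):
    """One IM step for d = 2, m = 1: Milstein predictor plus the corrector
    obtained by solving (I - h f'(y_bar)) c = h (f(y_bar) - f(y))."""
    fy, gy, jg = f(y), g(y), jac_g(y)
    gpg = [sum(jg[i][k] * gy[k] for k in range(2)) for i in range(2)]  # g'(y) g(y)
    y_bar = [y[i] + h * fy[i] + gy[i] * dW + 0.5 * gpg[i] * (dW * dW - h)
             for i in range(2)]
    fyb = f(y_bar)
    jf = jac_f(y_bar)
    a = [[1.0 - h * jf[0][0], -h * jf[0][1]],
         [-h * jf[1][0], 1.0 - h * jf[1][1]]]
    b = [h * (fyb[i] - fy[i]) for i in range(2)]
    c = solve2(a, b)
    return [y_bar[i] + c[i] for i in range(2)]
```

For a decoupled linear drift each component reproduces the scalar scheme, which gives a simple consistency check.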
3 Meansquare convergence analysis
In this section, we justify the proposed method by proving its strong convergence of order one in the mean-square sense. To this end, we make the following standard assumptions [15].
Assumption 3.1
Assume that \(f(x)\) and \(g(x)\) in (2.1) satisfy a global Lipschitz condition and a linear growth condition; that is, there exist positive constants L and K such that
(1)
(global Lipschitz condition) for all \(x,y\in\mathbb{R}\)
$$ \bigl|f(x)-f(y)\bigr|^{2}\vee\bigl|g(x)-g(y)\bigr|^{2}\leq L|x-y|^{2}, $$(3.1)
(2)
(linear growth condition) for all \(x\in\mathbb{R}\)
$$ \bigl|f(x)\bigr|^{2}\vee\bigl|g(x)\bigr|^{2}\leq K \bigl(1+|x|^{2}\bigr). $$(3.2)
Here and throughout this work, we use the convention that K represents a generic positive constant independent of h, whose value may differ between occurrences. This assumption guarantees the existence and uniqueness of the exact solution \(X(t)\) of equation (2.1); moreover, the solution \(X(t)\) satisfies \(\sup_{0\leq t\leq T} \mathbb{E}|X(t)|^{2}<\infty\); see, e.g., [30] for more details. In addition, we require the following assumption.
Assumption 3.2
Assume that the functions \(f(x)\) and \(g(x)\) in (2.1) have continuous bounded derivatives up to the order required for the following analysis, and that the coefficient functions in the Itô-Taylor expansions (up to a sufficient order) are globally Lipschitz and satisfy the linear growth conditions.
Subsequently, we present the fundamental strong convergence theorem [29, 31], which has frequently been used in the literature to establish the mean-square convergence orders of various numerical schemes.
Theorem 3.3
Suppose that a one-step approximation \(\bar{X}_{t,x}(t+h)\) has order of accuracy \(p_{1}\) for the mathematical expectation of the deviation and order of accuracy \(p_{2}\) for the mean-square deviation. More precisely, for arbitrary \(0\leq t\leq T-h\), \(x\in\mathbb{R}^{d}\) the following inequalities hold:
Also let \(p_{2}\geq\frac{1}{2}\), \(p_{1}\geq p_{2}+\frac{1}{2}\). Then for all \(N \in\mathbb{N}^{+}\) and \(k=0,1,\ldots, N\) the following inequality holds:
i.e., the method is mean-square convergent with order \(p=p_{2}-\frac{1}{2}\).
The notations used in Theorem 3.3 are explained as follows: \(Y_{k}\), generated by the one-step method, is an approximation to the exact solution \(X(t_{k})\) of (1.1) with \(t_{k}=kh\); \(X_{t,x}(t+h)\) denotes the exact solution of (1.1) with initial value x at time t; and \(\bar{X}_{t,x}(t+h)\) denotes the numerical solution generated by the one-step method with initial value x at time t.
After the above preparations, we now prove rigorously that the IM method is mean-square convergent with order one under Assumption 3.1 and Assumption 3.2. For simplicity of presentation, we focus on the scalar SDE; the extension to the multidimensional case is straightforward and hence omitted here.
Theorem 3.4
Assume that all conditions in Assumption 3.1 and Assumption 3.2 are fulfilled. Then there exists a step size \(h_{0}<\frac{1}{\sqrt {L}}\) such that, for any \(h=T/N\leq h_{0}\), \(N\in\mathbb{N}^{+}\), the method (2.15) applied to SDE (2.1) is mean-square convergent with order one, i.e., for all \(N \in\mathbb{N}^{+}\) and \(k=0,1,\ldots , N\), the following inequality holds:
Proof
The proof is divided into two steps.
Step 1. We shall prove that the inequality (3.3) holds for the IM method with \(p_{1}=2\). Let \(\bar{X}_{t,x}^{M}(t+h)\) denote the one-step Milstein approximation defined by
and let \(\bar{X}_{t,x}(t+h)\) denote the one-step version of the proposed scheme (2.15):
where \(x\in\mathbb{R}\) and \(\Delta W_{h}=W(t+h)-W(t)\). Analogously, let \(X_{t,x}(t+h)\) denote the one-step exact solution of (2.1). Therefore, one can get
and thus
Next, we address the estimates of \(H_{1}\) and \(H_{2}\). First of all, \(H_{1}\) can be split into the following two parts:
To handle the estimate of \(H_{11}\), one can use Itô's formula under the conditions imposed on the coefficient functions in the Itô-Taylor expansions (see Assumption 3.2) to get
The estimate of \(H_{12}\) relies on the mean-square deviation of the classical Milstein one-step approximation [29]:
Armed with this, one can readily check that
Substituting (3.12) and (3.14) into (3.11) yields
At this point, it remains to estimate \(H_{2}\). Note first that
where the last step holds due to (3.7), Assumption 3.1 and Assumption 3.2. To guarantee that the denominator in \(H_{2}\) does not vanish, i.e., \(1-hf' (\bar{X}_{t,x}^{M}(t+h) ) \neq0\), or equivalently \(hf' (\bar{X}_{t,x}^{M}(t+h) ) \neq1\), it suffices to take \(h\leq h_{0}<\frac {1}{\sqrt{L}}\), since \(|f'(x)|\leq\sqrt{L}\) by Assumption 3.1. Therefore, using Assumption 3.2, (3.1) and (3.16) together with \(h\leq h_{0}<\frac{1}{\sqrt{L}}\) shows that
Finally, inserting (3.15) and (3.17) into (3.10) implies that
Therefore, the inequality (3.3) with \(p_{1}=2\) is satisfied for the IM method.
Step 2. We prove that the inequality (3.4) with \(p_{2}=\frac{3}{2}\) holds for the IM method. To this end, we divide (3.4) into two parts as follows:
For the second part on the right-hand side of (3.19), due to (3.8), Assumption 3.2, (3.1) and (3.16), for \(h\leq h_{0}<\frac{1}{\sqrt{L}}\) one can derive that
This together with (3.13) enables us to arrive at
Thus the inequality (3.4) with \(p_{2}=\frac{3}{2}\) holds for the IM method.
Now an application of Theorem 3.3 with \(p_{1}=2\) and \(p_{2}=\frac{3}{2}\) shows that the scheme is mean-square convergent with order \(p_{2}-\frac{1}{2}=1\). □
In the same manner, it is not hard to establish the mean-square convergence of order one for the IM method (2.17) applied to the general system (1.1).
4 Meansquare stability
For SDEs, two very natural but distinct stability concepts are MS-stability and asymptotic stability. MS-stability measures the stability of moments, while asymptotic stability measures the overall behavior of sample paths. In this section, we focus on the MS-stability of the IM method applied to a scalar linear test equation.
Consider the scalar linear test equation
where \(\lambda,\mu\in\mathbb{C}\) are constants and \(X_{0}\neq0\) with probability 1. The exact solution of (4.1) is given by
It is a classical result [32] that the zero solution of (4.1) is MS-stable if and only if
Suppose that the parameters λ and μ are chosen so that the SDE (4.1) is stable in the mean-square sense. A natural question is: for what range of h is the numerical solution stable in an analogous sense? We apply a one-step numerical scheme to equation (4.1) and represent the resulting stochastic difference equation as
where the \(\widehat{N}_{n}=\frac{\Delta W_{n}}{\sqrt{h}}\sim N(0,1)\) are independent standard Gaussian random variables. Saito and Mitsui [32] introduced the following definition of MS-stability for a numerical scheme.
Definition 4.1
For fixed h, λ, μ, a numerical method is said to be MS-stable if
where \(\overline{R}(h,\lambda,\mu)\) is called the MS-stability function of the numerical method.
Theorem 4.2
For fixed h, λ, μ, the IM method is MS-stable if
Proof
Applying the IM method (2.15) to (4.1) results in
where \(\widehat{N}_{n}=\frac{\Delta W_{n}}{\sqrt{h}}\sim N(0,1)\) and \(1-\lambda h\neq0\). Substituting \(\bar{Y}_{n+1}\) into \(Y_{n+1}\) in (4.7) yields
Therefore, the function \(R(h,\lambda,\mu,\widehat{N}_{n})\) of the IM method is given by
and thus
Applying \(\mathbb{E}(\widehat{N}_{n})=\mathbb{E}(\widehat{N}_{n}^{3})=0\), \(\mathbb{E}(\widehat{N}_{n}^{2})=1\), and \(\mathbb{E}(\widehat{N}_{n}^{4})=3\) yields
This together with Definition 4.1 implies that the method is MS-stable if
 □
It turns out that the proposed scheme is not mean-square A-stable, in the sense that its mean-square stability domain does not contain the stability domain of the exact solution (compare (4.3) and (4.6)). Thus, the stability condition of Theorem 4.2 is not very convenient in practical applications. As immediate consequences of Theorem 4.2, the following corollaries provide convenient stability conditions.
Corollary 4.3
Let \(\lambda, \mu\in\mathbb{R}\) be such that \(2 \lambda+ \sqrt{2} \mu^{2}<0\). Then the test problem (4.1) is MS-stable and the proposed method is MS-stable for any step size \(h >0\).
Proof
The desired assertion follows from (4.3) and (4.6) directly. □
Based on the above observations, we believe that the new scheme is well suited for stiff meansquare stable problems with moderate (small) stochastic noise intensity or additive noise, where the drift coefficient plays an essential role in the dynamics.
Corollary 4.4
Suppose that \(2\operatorname{Re}\lambda+|\mu|^{2}<0\) and \(|\operatorname{Re}\lambda|\leq |\operatorname{Im}\lambda|\); then the IM method is MS-stable for any step size \(h>0\).
Proof
Applying the condition \(2\operatorname{Re}\lambda+|\mu|^{2}<0\) yields
Using (4.10) and the condition \(|\operatorname{Re}\lambda|\leq|\operatorname{Im}\lambda|\), one can obtain
By Theorem 4.2, the IM scheme is MS-stable. □
Remark 4.5
For fixed h, λ, μ, it is well known [33] that the usual explicit Milstein method is MS-stable if
Clearly, (4.12) implies (4.6). In other words, the MS-stability region of the IM method contains that of the Milstein method. From Figure 1, one can easily observe that the stability domain of the proposed scheme is much larger than that of the explicit Milstein method.
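The moment computation behind these stability functions can be checked mechanically. For a one-step map whose amplification factor is a quadratic polynomial in \(\widehat{N}\sim N(0,1)\), the mean-square gain follows from \(\mathbb{E}\widehat{N}=\mathbb{E}\widehat{N}^{3}=0\), \(\mathbb{E}\widehat{N}^{2}=1\), \(\mathbb{E}\widehat{N}^{4}=3\). The sketch below restricts to real λ, μ (an assumption of this illustration) and uses the fact, obtainable by substituting \(f(x)=\lambda x\) into (2.15), that for the linear test equation the IM step reduces to the factor \((R_{M}-\lambda h)/(1-\lambda h)\), where \(R_{M}\) is the Milstein factor.

```python
import math

def ms_gain(c0, c1, c2):
    # E[(c0 + c1*N + c2*N^2)^2] for N ~ N(0,1), using
    # E N = E N^3 = 0, E N^2 = 1, E N^4 = 3.
    return c0 * c0 + c1 * c1 + 3.0 * c2 * c2 + 2.0 * c0 * c2

def milstein_factor(h, lam, mu):
    # Coefficients of R_M = 1 + lam*h + mu*sqrt(h)*N + 0.5*mu^2*h*(N^2 - 1)
    # written as the polynomial c0 + c1*N + c2*N^2.
    return (1.0 + lam * h - 0.5 * mu * mu * h, mu * math.sqrt(h), 0.5 * mu * mu * h)

def im_factor(h, lam, mu):
    # For the linear test equation the IM step factor is (R_M - lam*h) / (1 - lam*h).
    c0, c1, c2 = milstein_factor(h, lam, mu)
    d = 1.0 - lam * h
    return ((c0 - lam * h) / d, c1 / d, c2 / d)
```

For the stiff parameters λ = -20, μ = 5 with h = 1 this reproduces the picture of the paper: the Milstein mean-square gain far exceeds one, while the IM gain is below one.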
5 Numerical tests
In this section, three numerical experiments are reported to illustrate the convergence and MS-stability properties of the IM method. In the following numerical experiments, to approximate the mean-square error at time \(T = N h\), given by
we use averages over 10,000 paths, i.e., \(e_{h}^{\mathrm{strong}}\approx\sqrt{\tfrac{1}{10{,}000}\sum_{i=1}^{10{,}000} |Y_{N}^{(i)}-X^{(i)}(t_{N})|^{2}}\), where \(Y_{N}^{(i)}\) denotes the numerical approximation to \(X^{(i)}(t_{N})\) at the step point \(t_{N}\) in the ith of the 10,000 simulated paths.
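This Monte Carlo estimator amounts to a root-mean-square over paired endpoints; a minimal sketch (the list arguments are placeholders for endpoints driven by the same Brownian paths):

```python
import math

def strong_error(y_end, x_end):
    # sqrt( (1/M) * sum_i |Y_N^(i) - X^(i)(t_N)|^2 ) over M paired sample paths,
    # where numerical and exact endpoints share the same Brownian increments.
    m = len(y_end)
    return math.sqrt(sum((y - x) ** 2 for y, x in zip(y_end, x_end)) / m)
```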
Example 1
Consider the scalar test equation
with two groups of parameters as follows:

parameter I: \(\lambda=-2\), \(\mu=1\),

parameter II: \(\lambda=-20\), \(\mu=5\).
As a test of mean-square convergence, we apply the IM method to equation (5.1) on the interval \([0,1]\) with parameter I. In order to visualize the strong convergence order of the IM method, the resulting mean-square errors against h on a log-log scale are plotted in Figure 2. This produces the blue asterisks connected with solid lines. For reference, a dashed red line of slope one is also added. There one can see that the slopes of the two curves appear to match well. As expected, the IM method gives errors that decrease proportionally to h.
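The observed order in such a log-log plot can be quantified by the least-squares slope of log error against log h; a small sketch, assuming the (h, error) pairs have already been computed:

```python
import math

def fit_order(hs, errs):
    # Least-squares slope of log(err) versus log(h): the empirical convergence order.
    xs = [math.log(h) for h in hs]
    ys = [math.log(e) for e in errs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Errors proportional to h yield a fitted slope of one, matching the reference line in Figure 2.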
In Table 1, we list the mean-square errors of the IM and Milstein methods for equation (5.1) with parameter I. Table 1 shows that the computational accuracy of the IM method is, to some degree, improved even for the non-stiff problem. Table 2 highlights that, for the stiff problem, the Milstein method works unreliably and has large errors for step sizes that are not too small, whereas the IM method works very well. Figure 3 depicts the mean-square error behavior of the IM method for equation (5.1) with parameter II. It is noted that the vertical axis in Figure 3 is logarithmically scaled.
To test the MS-stability of the IM method, we numerically solve (5.1) over \([0, 20]\) with parameter II. Parameter II satisfies (4.3) and hence the problem is MS-stable. We apply the IM and Milstein methods over 10,000 discrete Brownian paths for three different step sizes \(h=1,\frac{1}{2},\frac{1}{4}\). Figure 4 plots the sample average of \(Y_{j}^{2}\) against \(t_{j}\) for the IM method and the Milstein method. Note that the vertical axis is logarithmically scaled. In the upper picture, the curves decay toward zero for all \(h=1,\frac{1}{2},\frac{1}{4}\), which demonstrates that the IM scheme is MS-stable for these three step sizes. On the contrary, the Milstein method for \(h=1,\frac{1}{2}, \frac {1}{4}\) gives unstable numerical solutions, as shown in the lower picture of Figure 4.
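This experiment can be reproduced qualitatively in a few lines. The sketch below uses parameter II with h = 1; the path count and seed are arbitrary choices of this illustration, and the correction term is specialized to the linear drift f(x) = λx.

```python
import math
import random

def milstein_step(y, lam, mu, h, dW):
    return y + lam * h * y + mu * y * dW + 0.5 * mu * mu * y * (dW * dW - h)

def im_step(y, lam, mu, h, dW):
    y_bar = milstein_step(y, lam, mu, h, dW)
    # Correction term of (2.15) for f(x) = lam*x: (1 - h*lam)^{-1} h lam (y_bar - y).
    return y_bar + lam * h * (y_bar - y) / (1.0 - lam * h)

def mean_square(step, lam, mu, h, nsteps, npaths, seed=0):
    # Sample average of Y_n^2 at the final time over npaths Brownian paths.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(npaths):
        y = 1.0
        for _ in range(nsteps):
            y = step(y, lam, mu, h, rng.gauss(0.0, math.sqrt(h)))
        total += y * y
    return total / npaths
```

With λ = -20, μ = 5 and h = 1, the IM averages decay while the explicit Milstein averages blow up, mirroring the two panels of Figure 4.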
In order to offer further insight into the above stability results, we restrict ourselves to \(\lambda,\mu\in\mathbb{R}\) in (5.1) and plot the MS-stability regions of the IM and Milstein methods in Figure 1. As shown there, the MS-stability region of the IM method is much larger than that of the Milstein method.
Example 2
Consider a 2-dimensional stiff linear SDE system
where U and V are matrices defined by
The exact solution of this equation is given by [15]
where \(\rho^{\pm}(t)=(u-\frac{1}{2}v^{2}\pm u)t+vW(t)\), \(P^{-1}=P\), and we set \(X_{0}=(1,2)^{T}\). It is noticed that the Lyapunov exponents of (5.2) are explicitly given by \(L_{1}=-\frac{v^{2}}{2}\), \(L_{2}=-\frac {v^{2}}{2}+2u\). Therefore, the stiffness comes from both the deterministic and the stochastic components (u and v, respectively) [15]. We choose the parameters in the numerical experiment as \(u=-80\), \(v=1\). Under this condition, the Lyapunov exponents are \(L_{1}=-0.5\), \(L_{2}=-160.5\). It is clear that the system (5.2) is stiff owing to the great difference between the Lyapunov exponents \(L_{1}\) and \(L_{2}\) [15]. The approximation errors obtained by applying the IM and Milstein methods to (5.2) are shown in Table 3. Clearly, the numerical results reported in Table 3 are in favor of the IM method, which has higher accuracy and the advantage of admitting larger step sizes.
Example 3
Consider the following nonlinear problem:
which is a normalized version of a population dynamics model (see [11]). A linearization about the stationary solution \(X(t)\equiv1\) leads to the linear test problem (4.1). In the experiment, we choose parameters \(\lambda<-1\), \(\mu=\sqrt{-2(1+\lambda)}\), \(X(0)=0.9\). We notice that increasing the value \(|\lambda|\) will increase the stiffness both in the drift and in the diffusion term. At the same time, this choice ensures that \(\lambda+\frac{\mu^{2}}{2}=-1<0\), which is required by (4.3). We choose \(\lambda=-2\) for the non-stiff case and \(\lambda=-15\) for the stiff case with stiffness in both drift and diffusion coefficients. Because the analytic form of the exact solution \(X(t)\) of (5.4) is not available, we solve (5.4) by the Euler-Maruyama (EM) method with a sufficiently small mesh size (here \(h=2^{-11}\)) and identify its outcome as the 'exact solution' \(X(t)\) for the error comparison. In Table 4 and Table 5, we give the mean-square errors and the relative errors of the numerical solution of (5.4) with \(\lambda=-2\), respectively. Analogously, the approximation errors for (5.4) with \(\lambda=-15\) are listed in Table 6 and Table 7. The numerical results in Tables 4, 5, 6, and 7 indicate that, even for a nonlinear equation, the computational accuracy of the IM method is improved for both a non-stiff problem and a stiff problem.
6 Conclusions
This work has proposed the IM method for solving stiff stochastic differential equations. The method is derived by adding a correction term to the classical Milstein method and is easy to implement. Furthermore, good MS-stability and strong convergence of order one are established for the scheme. It turns out that the IM method has a much larger MS-stability region than the classical Milstein method. Numerical results also confirm that the IM method is computationally effective and superior to the Milstein method for solving stiff SDE systems.
In this work, we always assumed that the drift and diffusion functions satisfy global Lipschitz conditions (cf. (3.1)), which excludes many important model equations in applications. Therefore, a future direction is to establish the strong convergence rate of the IM scheme for SDEs under a non-global Lipschitz condition, as studied in [34].
References
Bodo, BA, Thompson, ME, Unny, TE: A review on stochastic differential equations for application in hydrology. Stoch. Hydrol. Hydraul. 1(2), 81-100 (1987)
Boucher, DH: The Biology of Mutualism. Oxford University Press, New York (1985)
Beretta, E, Kolmanovskii, VB, Shaikhet, L: Stability of epidemic model with time delays influenced by stochastic perturbations. Math. Comput. Simul. 45(3-4), 269-277 (1998)
Gillespie, DT: Stochastic simulation of chemical kinetics. Annu. Rev. Phys. Chem. 58, 35-55 (2007)
Platen, E, Bruti-Liberati, N: Numerical Solution of Stochastic Differential Equations with Jumps in Finance. Springer, Berlin (2010)
Wilkinson, DJ: Stochastic modelling for quantitative description of heterogeneous biological systems. Nat. Rev. Genet. 10, 122-133 (2009)
Maruyama, G: Continuous Markov processes and stochastic equations. Rend. Circ. Mat. Palermo 4(1), 48-90 (1955)
Milstein, GN: Approximate integration of stochastic differential equations. Theory Probab. Appl. 19(3), 557-562 (1975)
Rümelin, W: Numerical treatment of stochastic differential equations. SIAM J. Numer. Anal. 19(3), 604-613 (1982)
Kloeden, PE, Platen, E, Schurz, H: The numerical solution of nonlinear stochastic dynamical systems: a brief introduction. Int. J. Bifurc. Chaos 1(2), 277-286 (1991)
Gard, TC: Introduction to Stochastic Differential Equations. Dekker, New York (1988)
Kahl, C, Schurz, H: Balanced Milstein methods for ordinary SDEs. Monte Carlo Methods Appl. 12(2), 143-164 (2006)
Buckwar, E, Kelly, C: Towards a systematic linear stability analysis of numerical methods for systems of stochastic differential equations. SIAM J. Numer. Anal. 48(1), 298-321 (2010)
Buckwar, E, Sickenberger, T: A comparative linear mean-square stability analysis of Maruyama- and Milstein-type methods. Math. Comput. Simul. 81(6), 1110-1127 (2011)
Kloeden, PE, Platen, E: The Numerical Solution of Stochastic Differential Equations. Springer, Berlin (1999)
Milstein, GN, Platen, E, Schurz, H: Balanced implicit methods for stiff stochastic systems. SIAM J. Numer. Anal. 35(3), 1010-1019 (1998)
Alcock, J, Burrage, K: A note on the balanced method. BIT Numer. Math. 46(4), 689-710 (2006)
Omar, MA, Aboul-Hassan, A, Rabia, SI: The composite Milstein methods for the numerical solution of Stratonovich stochastic differential equations. Appl. Math. Comput. 215(2), 727-745 (2009)
Burrage, K, Tian, T: The composite Euler method for stiff stochastic differential equations. J. Comput. Appl. Math. 131(1-2), 407-426 (2001)
Burrage, K, Tian, T: Implicit stochastic Runge-Kutta methods for stochastic differential equations. BIT Numer. Math. 44(1), 21-39 (2004)
Wang, X, Gan, S, Wang, D: A family of fully implicit Milstein methods for stiff stochastic differential equations with multiplicative noise. BIT Numer. Math. 52(3), 741-772 (2012)
Tian, T, Burrage, K: Implicit Taylor methods for stiff stochastic differential equations. Appl. Numer. Math. 38(1-2), 167-185 (2001)
Abdulle, A, Cirilli, S: Stabilized methods for stiff stochastic systems. C. R. Math. Acad. Sci. Paris 345(10), 593-598 (2007)
Schurz, H: A brief introduction to numerical analysis of (ordinary) stochastic differential equations without tears. Institute for Mathematics and its Applications, Minneapolis, Preprint 1161 (1999)
Abdulle, A, Cirilli, S: S-ROCK: Chebyshev methods for stiff stochastic differential equations. SIAM J. Sci. Comput. 30(2), 997-1014 (2008)
Abdulle, A, Li, T: S-ROCK methods for stiff Itô SDEs. Commun. Math. Sci. 6(4), 845-868 (2008)
Artemiev, S, Averina, A: Numerical Analysis of Systems of Ordinary and Stochastic Differential Equations. de Gruyter, Berlin (1997)
Kim, P, Piao, X, Kim, SD: An error corrected Euler method for solving stiff problems based on Chebyshev collocation. SIAM J. Numer. Anal. 48, 1759-1780 (2011)
Milstein, GN, Tretyakov, MV: Stochastic Numerics for Mathematical Physics. Springer, Berlin (2004)
Mao, X: Stochastic Differential Equations and Their Applications. Horwood Publishing, Chichester (1997)
Milstein, GN: A theorem on the order of convergence of mean-square approximations of solutions of systems of stochastic differential equations. Theory Probab. Appl. 32(4), 738-741 (1988)
Saito, Y, Mitsui, T: Stability analysis of numerical schemes for stochastic differential equations. SIAM J. Numer. Anal. 33(6), 2254-2267 (1996)
Higham, DJ: A-stability and stochastic mean-square stability. BIT Numer. Math. 40(2), 404-409 (2000)
Neuenkirch, A, Szpruch, L: First order strong approximations of scalar SDEs defined in a domain. Numer. Math. 128(1), 103-136 (2014)
Acknowledgements
The authors would like to thank the anonymous referees for their valuable and insightful comments, which have improved the paper. This work is supported by the National Natural Science Foundation of China (No. 11171352, No. 11571373, No. 11371123), the New Teachers' Specialized Research Fund for the Doctoral Program from the Ministry of Education of China (No. 20120162120096) and the Mathematics and Interdisciplinary Sciences Project, Central South University.
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
Both authors contributed equally to the writing of this paper. Both authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Yin, Z., Gan, S. An improved Milstein method for stiff stochastic differential equations. Adv Differ Equ 2015, 369 (2015). https://doi.org/10.1186/s13662-015-0699-9
DOI: https://doi.org/10.1186/s13662-015-0699-9