Mean-square numerical approximations to random periodic solutions of stochastic differential equations
Advances in Difference Equations volume 2015, Article number: 292 (2015)
Abstract
This paper is devoted to mean-square numerical approximations to random periodic solutions of dissipative stochastic differential equations. The existence and expression of random periodic solutions are established. We also prove that the random periodic solutions are mean-square uniformly asymptotically stable, which ensures that the numerical approximations are feasible. The numerical approximations obtained by the random Romberg algorithm are also proved to converge in the mean-square sense. A numerical example is presented to show the effectiveness of the proposed method.
1 Introduction
Stochastic differential equations (SDEs) occupy an important position in both theory and applications; for more details we refer the reader to [1] and [2]. In recent years there has been increasing interest in random periodic solutions of SDEs. Random periodic solutions describe many physical phenomena and play an important role in aeronautics, electronics, biology, and so on [3, 4]. The existence of random periodic solutions was established by Feng et al. [5]. However, random periodic solutions have not yet been constructed explicitly. Numerical approximation is therefore an important method for studying their dynamic behavior. There are, however, few numerical studies in this field. The main difficulties lie in determining the initial value at the starting time and in simulating improper integrals efficiently. In this paper we are therefore concerned with the possibility of mean-square numerical approximation and with the convergence analysis of the resulting schemes.
There are two main motivations for this work. In the deterministic case researchers have obtained extensive results, including numerical approximations to periodic solutions; we refer the reader to [6] and [7] and the references therein. Far fewer studies exist in the random case. Yevik and Zhao [8] treated numerical stationary solutions of SDEs. Liu et al. [3] investigated square-mean almost periodic solutions for a class of stochastic integro-differential equations. To the best of our knowledge, no investigation of mean-square numerical approximations to random periodic solutions of SDEs exists in the literature. Numerical approximation thus remains an interesting method for studying random periodicity in random dynamical systems.
Because there are errors in the initial value at the starting time, and random periodic solutions are sensitive to the initial value, we can only deal with SDEs whose random periodic solutions are mean-square uniformly asymptotically stable. The main results we obtain are the numerical approximations to random periodic solutions of dissipative SDEs and the proof of their mean-square convergence. This shows that the mean-square numerical approximations to random periodic solutions are in fact close to the exact solutions and that the iterative error can be kept within a presupposed error tolerance.
This paper is organized as follows. Section 2 collects some preliminaries intended to clarify the concepts and norms used later. In Section 3 we present theoretical results on random periodic solutions of dissipative SDEs; this is the main part of the article, containing the existence and stability of random periodic solutions, the numerical implementation method, and the mean-square convergence theorem. Section 4 is devoted to numerical experiments, which demonstrate that these algorithms can be applied to simulate random periodic solutions of dissipative SDEs. Finally, Section 5 gives some brief conclusions.
2 Preliminaries
Let \(W(t)\), \(t\in R\), be a k-dimensional Brownian motion and \((\Omega, \mathcal{F}, P)\) be the filtered Wiener space. Here \(\mathcal{F}_{s}^{t}:=\sigma(W_{u}-W_{v}, s\leq v\leq u\leq t)\) and \(\mathcal{F}^{t}:=\bigvee_{s\leq t}\mathcal{F}_{s}^{t}\), where \(s\in R\) is any given time [9]. We consider a class of Itô SDEs of the form
where \(X_{t}:\Omega\rightarrow R^{d}\), \(f:R\times R^{d}\rightarrow R^{d}\), \(g:R\rightarrow R^{d\times k}\), A is a \(d\times d\) hyperbolic matrix all of whose eigenvalues are positive, and \(T_{t}:=e^{-At}\) is the hyperbolic linear flow induced by \(-A\).
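Display (1) is not reproduced here; consistent with the Duhamel formula used later, SDEs of this class take the dissipative form \(dX_{t}=(-AX_{t}+f(t,X_{t}))\,dt+g(t)\,dW_{t}\). A minimal Euler-Maruyama sketch under that assumption (the function names and fixed-step choice are ours, not the paper's scheme):

```python
import numpy as np

def euler_maruyama(A, f, g, x0, t0, t1, N, rng):
    """One Euler-Maruyama pass for dX = (-A X + f(t, X)) dt + g(t) dW.

    Assumed form of SDE (1): A is d x d with positive eigenvalues,
    f is tau-periodic in t, and g(t) is a d x k matrix.
    """
    x = np.asarray(x0, dtype=float).copy()
    dt = (t1 - t0) / N
    k = g(t0).shape[1]
    t = t0
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(dt), size=k)  # Brownian increment
        x = x + (-A @ x + f(t, x)) * dt + g(t) @ dW
        t += dt
    return x
```

The paper refines such sums by Romberg extrapolation in Section 3.3; this plain fixed-step loop is only for orientation.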
We define
and \(\Delta:=\{(s,t)\in R^{2}: s\leq t\}\). By the results in [9], when the solution of SDE (1) exists and is unique, SDE (1) generates a stochastic flow \(\varphi:\Delta\times R^{d}\times\Omega\rightarrow R^{d}\), usually written as \(\varphi(s,t,x_{0},\omega):=\varphi(s,t,\omega)x_{0}\), on the metric dynamical system \((\Omega, \mathcal{F}, P,\theta_{t})\). The stochastic flow φ is given by
Throughout the rest of this paper we use the following notation.
Let \(L^{2}(\Omega,P)\) be the space of all square-integrable random variables \(x:\Omega\rightarrow R^{d}\). For any random vector \(x=(x_{1},x_{2},\ldots,x_{d})\in R^{d}\), the norm of x is defined by
For any stochastic process \(x(t,\omega) \in R^{d}\), the norm of \(x(t,\omega)\) is defined as follows:
We define the norm of random matrices as follows:
where G is a random matrix and \(|\cdot|\) is the operator norm.
For simplicity of notation, the norms \(\|\cdot\|_{2}\) and \(\|\cdot\|_{L^{2}(\Omega,P)}\) are both usually written as \(\|\cdot\|\).
The following hypotheses are made for the theoretical analysis.
Hypothesis 2.1

(i)
There exists a constant \(K^{*}>0\) such that \(\|x_{0}\|\leq K^{*}\).

(ii)
The mapping \(f:R\times R^{d}\rightarrow R^{d}\) is continuous, and there exist positive constants \(J_{1}\) and \(K_{1}\) such that \(f(t,0)\) is globally bounded with \(\|f(t,0)\|\leq J_{1}\) and, for any \(X_{1},X_{2}\in R^{d}\), the following inequality holds:
$$ \bigl\|f(t,X_{1})-f(t,X_{2})\bigr\|\leq K_{1}\|X_{1}-X_{2}\|. $$(6)
(iii)
The mapping \(g:R\rightarrow R^{d\times k}\) is continuous, and there exists a positive constant \(J_{2}\) such that \(g(t)\) is globally bounded with \(\|g(t)\|\leq J_{2}\).
3 Theoretical results
3.1 Existence of random periodic solutions
The following result guarantees the existence of random periodic solutions for dissipative SDEs; it is a direct consequence of Theorem 3.2.4 in [4].
Lemma 3.1
For any \(-\infty< s\leq t<+\infty\) and \(x_{0},\hat{x}_{0}\in B\), if the following conditions hold:

(i)
\(\varphi(s,t,\omega)\cdot:B\rightarrow B\) is a.s. continuous;

(ii)
\(\varphi(s+\tau,t+\tau,\omega)x_{0}=\varphi (s,t,\theta_{\tau}\omega)x_{0}\);

(iii)
there exist constants \(c\in(0,1)\) and \(M>0\) such that \(\|\varphi(s,t,\omega)x_{0}-\varphi(s,t,\omega)\hat{x}_{0}\|\leq c^{t-s}\|x_{0}-\hat{x}_{0}\|\) and \(\|\varphi(s,t,\omega)x_{0}\|\leq M\), where M may depend on \(t-s\),
then there exists a unique random τ-periodic solution \(Y(t,\omega)\) of φ. Moreover, \(\varphi(t-m\tau,t,\theta_{-m\tau}\omega)x_{0}\rightarrow Y(t,\omega)\in L^{p}(\Omega,B)\) as \(m \rightarrow +\infty\), where \(B\subset R^{d}\) and m is a positive integer.
Lemma 3.2
Suppose that A is a hyperbolic \(d\times d\) matrix whose eigenvalues \(\{\lambda_{j}, j=1,2,\ldots,d\}\) satisfy \(0< \lambda_{1} \leq\lambda_{2} \leq\cdots\leq\lambda_{d}\). Then \(e^{-At}\) tends to zero as \(t\rightarrow+\infty\), that is,
Proof
We start with the one-dimensional case, \(d=1\). If λ is the eigenvalue of A, that is, \(A=\lambda\), we obtain
where \(\lambda>0\).
So the claim is valid in the one-dimensional case.
Now we consider the case \(d>1\). Since A is diagonalizable and invertible, its matrix exponential factors as \(e^{A}=Qe^{D}Q^{-1}\), where D is diagonal with the eigenvalues of A as its spectrum [8]. Then we get
and
It follows from the result for the one-dimensional case that the claim is also valid in the d-dimensional case. This completes the proof. □
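Lemma 3.2 is easy to check numerically. The sketch below is illustrative: `expm_diag` is our own helper, valid exactly for the diagonalizable case \(e^{A}=Qe^{D}Q^{-1}\) used in the proof, and the test matrix is hypothetical.

```python
import numpy as np

def expm_diag(A):
    """Matrix exponential e^A = Q e^D Q^{-1} for diagonalizable A,
    mirroring the factorization used in the proof of Lemma 3.2."""
    lam, Q = np.linalg.eig(A)
    return (Q @ np.diag(np.exp(lam)) @ np.linalg.inv(Q)).real

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # eigenvalues 2 and 3, both positive
# spectral norm of e^{-At} decays like e^{-lambda_1 t} = e^{-2t}
norms = [np.linalg.norm(expm_diag(-A * t), 2) for t in (1.0, 5.0, 10.0)]
```

The decay rate is governed by the smallest eigenvalue \(\lambda_{1}\), which is why the dissipativity conditions later involve the spectrum of A.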
From the conclusions of Lemmas 3.1 and 3.2, we obtain the following theorem.
Theorem 3.3
Assume that there exists a constant \(\tau>0\) such that for any \(t \in R\) and any \(X\in R^{d}\), the following equalities hold:
Suppose that A satisfies Lemma 3.2, that SDE (1) satisfies Hypothesis 2.1, and that the global Lipschitz constant of f satisfies \(K_{1}\in[0,\sqrt{2}\lambda_{d})\).
If SDE (1) has a unique random periodic solution \(Y(t,\omega):(-\infty,+\infty)\times\Omega\rightarrow R^{d}\), that is,
then \(Y(t,\omega)\) is a solution of the forward infinite horizon integral equation
Proof
To apply Lemma 3.1 to this problem, we only need to check that the conditions of this theorem imply its three hypotheses.
First, by the assumptions on SDE (1), hypothesis (i) obviously holds.
Secondly, using (3) and Duhamel's formula, we obtain
Putting \(\bar{\omega}:=\theta_{\tau}\omega\) and \(\bar{\varphi}(s,r,\bar{\omega})x_{0}:=\varphi(s+\tau,r+\tau,\omega)x_{0}\), we find that
By the uniqueness of the solution, we obtain
It follows that
This completes the check of the second hypothesis.
Last but not least, for any \(x_{0},\hat{x}_{0}\in R^{d}\), it follows from (10) that
Using the fact that \((a+b)^{2}\leq2a^{2}+2b^{2}\) for \(a,b\in R\), together with the Cauchy-Schwarz inequality, we obtain
By the condition (6) we have
where
By the Gronwall inequality, there exists a number \(M_{1}\) such that
where
Note that \(M_{1}\) tends to zero as \(s\rightarrow-\infty\). Therefore there exists \(0< c<1\) such that the inequality \(M_{1}\leq c^{t-s}\|x_{0}-\hat{x}_{0}\|\) holds.
By a method similar to that of [8], we obtain the estimate of \(\|\varphi(s,t,\omega)x_{0}\|\). From (10) and the fact that \((a+b+c)^{2}\leq 3a^{2}+3b^{2}+3c^{2}\) for \(a,b,c\in R\), we obtain
Using the Cauchy-Schwarz inequality and the Itô isometry, we have
From the global Lipschitz condition of the function f it follows that for any \(X\in R^{d}\), the linear growth condition also holds:
From the global boundedness of the function g and the boundedness of the initial value in Hypothesis 2.1, we obtain
By the Gronwall inequality, there exists a number \(M_{2}\) such that
where
Here \(M_{2}\) satisfies the assumption that it may depend on \(t-s\).
This completes the check of the third hypothesis.
Moreover, the pullback method only works for dissipative systems, that is, for systems that are contractive. The pullback of SDE (1) is
where \(x_{0}\) is \(\mathcal{F}_{-m\tau}\)-measurable and \(X(t-m\tau,t,\theta_{-m\tau}\omega, x_{0})\) denotes the solution at time t with the initial condition \(x_{0}(\theta_{-m\tau}\omega)\) at time \(t-m\tau\).
Therefore, the exact solution of SDE (14) at time t has the form
It follows from Lemma 3.2 and the periodicity of f and g that
Therefore, the conclusion follows from Lemma 3.1 and Theorem 4.2.2 in [4]. The proof is finished. □
3.2 Stability
In this section we investigate the mean-square uniform asymptotic stability of the random periodic solution \(Y(t,\omega)\) of SDE (1). The pullback method is a powerful tool in the proof of uniform asymptotic stability. To be precise, let us introduce some related definitions [10].
Definition 3.1
(i) The random periodic solution \(Y(t,\omega)\) of SDE (1) is said to be mean-square asymptotically stable if for any given \(\epsilon>0\), every other random periodic solution \(\hat{Y}(t,\omega)\) of SDE (1) satisfies
for any bounded \(\mathcal{F}_{s}\)-measurable initial values \(x_{0}\) and \(\hat{x}_{0}\), respectively, with \(\|x_{0}-\hat{x}_{0}\|<\epsilon\), where \(s=t-m\tau\).
(ii) The random periodic solution \(Y(t,\omega)\) of SDE (1) is said to be mean-square uniformly stable if for any given \(\epsilon>0\) and every other random periodic solution \(\hat{Y}(t,\omega)\) of SDE (1), there exists \(\delta=\delta(\epsilon)\) such that \(\|x_{0}-\hat{x}_{0}\|\leq\delta\) implies \(\|Y(t,\omega)-\hat{Y}(t,\omega)\|<\epsilon\) for any \(t\geq s\), where \(s=t-m\tau\).
(iii) The random periodic solution \(Y(t,\omega)\) of SDE (1) is said to be mean-square uniformly asymptotically stable if it is both mean-square uniformly stable and mean-square asymptotically stable.
Theorem 3.4
Assume that for any initial values \(x_{0}\) and \(\hat{x}_{0}\in L^{2}(\Omega,P)\) the coefficients of SDE (1) satisfy the conditions of Theorem 3.3. Then the random periodic solution \(Y(t,\omega)\) of SDE (1) is mean-square uniformly asymptotically stable.
Proof
First, let \(\varphi(t-m\tau,t,\theta_{-m\tau}\omega)\hat{x}_{0}\) be another solution of SDE (1) and \(\epsilon>0\) be an arbitrary constant. If \(\|x_{0}-\hat{x}_{0}\|\leq\epsilon\), it follows from (15) and the method used to estimate (12) that
where
By the Gronwall inequality, there exists a number \(M_{3}\) such that
where
Therefore, by the fact that \(M_{3}\rightarrow0\) as \(m\rightarrow+\infty\), we obtain
Fatou's lemma implies that
Then we have
Then by Definition 3.1(i), it is mean-square asymptotically stable.
Secondly, let \(V(s,t,\omega)\bar{x}_{0}=Y(t,\omega)-\hat{Y}(t,\omega)\), where \(\bar{x}_{0}=(x_{0},\hat{x}_{0})\). Note that \(V(s,t,\omega)\bar{x}_{0}\) is also a random periodic solution of SDE (1). Without loss of generality, we only consider the case \(s\geq0\); the proof of the other case is similar, since by the transformation \(\breve{s}=s+m'\tau\) we can reduce the case \(s\leq0\) to the case \(\breve{s}\geq0\), where \(m'\) is a positive integer. Let \(\bar{x}'_{0}\) be the initial value at the starting time \(s=0\). From the mean-square asymptotic stability established above, it follows that for any given \(\epsilon>0\) there exists \(\delta_{0}=\delta_{0}(\epsilon)>0\) such that \(\|x'_{0}-\hat{x}'_{0}\|\leq\delta_{0}\) implies \(\|V(0,t,\omega)\bar{x}'_{0}\|<\epsilon\) for \(t\geq0\).
For the first case \(s\in[0,\tau]\), since \(V(s,t,\omega)\bar{x}_{0}\) is continuous with respect to \((s,\bar{x}_{0})\) and uniformly continuous with respect to s for \(s\in[0,\tau]\), there exists \(\delta=\delta(\epsilon)>0\) such that \(\|x_{0}-\hat{x}_{0}\|\leq\delta\) implies \(\|V(s,0,\omega)\bar{x}_{0}\|<\delta_{0}\) for \(s\in[0,\tau]\).
Put \(\bar{x}''_{0}=V(s,0,\omega)\bar{x}_{0}\); then \(V(s,t,\omega)\bar{x}_{0}=V(0,t,\theta_{s}\omega)\bar{x}''_{0}\) for any \(t\geq0\). Therefore, if \(\|x_{0}-\hat{x}_{0}\|\leq\delta\) and \(s\in[0,\tau]\), the inequality \(\|V(s,t,\omega)\bar{x}_{0}\|<\epsilon\) holds for any \(t\geq s\), that is,
For the second case \(s>\tau\), there exists a positive integer \(m''\) such that \(s\in[m''\tau,(m''+1)\tau]\). It follows from the random periodicity that \(V(s-m''\tau,t-m''\tau,\theta_{m''\tau}\omega)\bar{x}_{0}\) is also a random periodic solution of SDE (1) and
Then \(\|x_{0}-\hat{x}_{0}\|\leq\delta\) implies that \(\|Y(t,\omega)-\hat{Y}(t,\omega)\|<\epsilon\) holds for any \(s\geq0\) and \(t\geq s\).
Therefore it follows from Definition 3.1(ii) that it is mean-square uniformly stable, and the conclusion follows from Definition 3.1(iii). This completes the proof. □
3.3 Numerical implementation method of random periodic solutions
It follows from Theorem 3.3 that the forward infinite horizon integral equation (9) is the random periodic solution of SDE (1). However, if a numerical method is applied to the improper integral (9), only numerical approximations to (9) are obtained. Approximating the random periodic solution therefore requires that the random periodic solution (9) be mean-square uniformly asymptotically stable, which Theorem 3.4 guarantees. This implies that the numerical solution of the initial value problem is a numerical solution of the random periodic solution.
Therefore, a numerical implementation method is as follows. From (9) we obtain
which can be viewed as the initial value of the random periodic solution of SDE (1) at time \(t=0\). The finite time interval \([0,t]\) is divided into N subintervals of length \(\Delta t:=\frac{t}{N}\). For any given presupposed error tolerance \(\delta\in(0, \Delta t]\), if \(s'<0\) is chosen such that
then the improper integral (16) can be approximated by the Itô integral \(\bar{Y}(0,\omega)\), where
Therefore the improper integral (9) on the finite time interval \([0,t]\) can be approximated by the Itô integral (18)
with initial value \(\bar{Y}(0,\omega)\) at the time \(t=0\).
By means of reselecting the corresponding starting time and \(s'\), we can simulate a random periodic solution in an arbitrary finite time interval with any given presupposed error tolerance.
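The choice of \(s'\) can be made explicit when an exponential tail bound is available. The sketch below rests on an assumption of ours, not stated in this form in the paper: the neglected tail of (16) is bounded by \(Ce^{\lambda_{1}s'}\), with \(\lambda_{1}\) the smallest eigenvalue of A and C a constant collecting the bounds \(J_{1}\), \(J_{2}\), \(K^{*}\) of Hypothesis 2.1.

```python
import math

def truncation_time(delta, lam1, C):
    """Return s' < 0 with C * exp(lam1 * s') <= delta, i.e.
    s' = ln(delta / C) / lam1.  Hypothetical tail bound: the constants
    lam1 and C must be supplied from the problem at hand."""
    assert 0 < delta < C and lam1 > 0
    return math.log(delta / C) / lam1

s_prime = truncation_time(0.01, 1.0, 10.0)  # tolerance 0.01, decay rate 1
```

Tightening delta pushes \(s'\) further into the past only logarithmically, which is what makes the pullback truncation practical.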
To improve the accuracy of the integration, the random Romberg algorithm is applied to (18) and to \(\bar{Y}(0,\omega)\). The method applied to (18) is as follows.
Let \(\tilde{Y}_{n}(t,\omega)\) be the approximation of \(\tilde{Y}(0,n\Delta t,\omega)\), where
Then we obtain the iterative relation
where \(n=1,\ldots,N\) and \(\tilde{Y}_{N}(t,\omega)\) is the numerical approximation of \(\tilde{Y}(t,\omega)\).
The sequence of time steps and the increments of the Brownian motion are defined by
Let
then we obtain
By the induction principle, it follows that
Utilizing the extrapolation method, we obtain the elements
For any presupposed error tolerance \(\varepsilon\in[0, \delta]\), if the following inequality holds:
then the computation is ended and \(R_{jj}\) is taken as the approximation of (19). That is,
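For orientation, the deterministic Romberg recursion behind the table \(R_{jk}\) looks as follows. This is a standard textbook sketch, not the paper's random version, which applies the same extrapolation pathwise to the Itô sums \(\tilde{Y}_{n}\):

```python
import math

def romberg(f, a, b, tol, max_j=20):
    """Romberg table: R[j][0] is the trapezoid rule on 2^j panels,
    R[j][k] the k-th Richardson extrapolant; iteration stops when the
    diagonal entries R_jj and R_{j-1,j-1} agree to within tol."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]
    for j in range(1, max_j):
        h = (b - a) / 2**j
        # refine the previous trapezoid sum with the new midpoints only
        mid = sum(f(a + (2 * i - 1) * h) for i in range(1, 2**(j - 1) + 1))
        row = [0.5 * R[j - 1][0] + h * mid]
        for k in range(1, j + 1):
            row.append(row[k - 1] + (row[k - 1] - R[j - 1][k - 1]) / (4**k - 1))
        R.append(row)
        if abs(row[j] - R[j - 1][j - 1]) < tol:
            return row[j]
    return R[-1][-1]
```

The stopping rule mirrors the tolerance test above: the diagonal difference plays the role of \(\varepsilon\).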
The method applied to \(\bar{Y}(0,\omega)\) is similar to the former and is as follows.
Let \(\Delta h:=\frac{s'}{N'}\) and \(\bar{Y}_{n}(0,\omega)\) be the approximation of \(\bar{Y}(0,n\Delta h,\omega)\), where
Then we obtain the iterative relation
where \(n=1,\ldots,N'\) and \(\bar{Y}_{N'}(0,\omega)\) is the numerical approximation of \(\bar{Y}(0,\omega)\).
The sequence of time steps and the increments of the Brownian motion are defined by
Let
then we obtain
By a method similar to (20), we obtain \(R'_{j1}\), \(j=2,3,\ldots\), by the induction principle and \(R'_{jk}\), \(k=2,\ldots,j\), by the extrapolation method. For any presupposed error tolerance \(\varepsilon' \in[0, \delta]\), if the following inequality holds:
then the computation is ended and \(R'_{jj}\) is taken as the approximation of (21). That is,
3.4 Convergence
The finite time interval \([0,t]\) is divided into N subintervals of length Δt. The exact solution of SDE (1) on \([0,t]\) has the form
The following result shows that the numerical approximation \(\tilde{Y}_{N}(t,\omega)\) to the random periodic solution is mean-square convergent to the exact solution (22) under some conditions.
Theorem 3.5
Assume that for any initial value \(x_{0}\in L^{2}(\Omega,P)\) the coefficients of SDE (1) satisfy the conditions of Theorems 3.3 and 3.4. Then the numerical approximation \(\tilde{Y}_{N}(t,\omega)\) to the random periodic solution of SDE (1) by the random Romberg algorithm is mean-square convergent.
Proof
From the expression of \(\tilde{Y}_{N}(t,\omega)\), we obtain
This implies that
Using the Cauchy-Schwarz inequality and the global Lipschitz condition (6) on f, we obtain
where
Then the fact that \((a+b)^{2}\leq2a^{2}+2b^{2}\) for \(a,b \in R\) implies that
By the random Romberg algorithm in Section 3.3 and the mean-square uniform asymptotic stability, we obtain
It follows from the Gronwall inequality that there exists a number \(M_{4}\) such that
where
Since \(M_{4}\) tends to zero as \(N\rightarrow+\infty\), we obtain
Therefore, the approximation is mean-square convergent. The proof is finished. □
4 Numerical experiments
We now provide a numerical example to illustrate the effectiveness of the algorithms used to simulate random periodic solutions of dissipative SDEs. Assume that we work in a one-dimensional space of real numbers and consider the following SDE:
that is,
It follows from Theorem 3.3 that SDE (23) has random periodic solutions with period \(\tau=2\pi\). If we choose \(\delta=0.01\), then \(s'=-30\) makes inequality (17) hold. One can check that Theorem 3.4 is satisfied, so the random periodic solution of (23) is mean-square uniformly asymptotically stable, and therefore we can obtain numerical approximations within the presupposed initial error tolerance. To get the Brownian trajectory for negative time, we construct the positive-time path and reflect it through the point zero. We run the simulation with the following meshes [2, 11, 12]:
to construct a random periodic solution with the starting point \(x_{0}=0.1\). We generate Brownian trajectories in the following way:
where
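The negative-time Brownian path described above can be generated as follows; this is a sketch with our own grid convention, using the reflection \(W(-t):=W(t)\) through time zero as in the experiment:

```python
import numpy as np

def two_sided_path(T, N, rng):
    """Brownian path on the grid -T, ..., 0, ..., T with step T/N.
    The negative-time half is the positive-time half reflected through
    time zero, W(-t) := W(t), which again defines a Brownian motion
    when run in the pullback direction."""
    dt = T / N
    W_pos = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))))
    t = np.arange(-N, N + 1) * dt
    W = np.concatenate((W_pos[:0:-1], W_pos))  # mirror, drop duplicate W(0)
    return t, W
```

Only increments over negative time are consumed by the pullback integral \(\bar{Y}(0,\omega)\), so any device producing a valid Brownian path on \([s',0]\) would do here.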
Similarly, we can choose another presupposed initial error tolerance \(\delta=0.011\), for which \(s'=-35\) and \(x_{0}=0.15\) are determined; these satisfy Theorem 3.3 too.
We then obtain the graphs of the numerical approximations to random periodic solutions on the time interval \([0,35]\) in Figures 1 and 2, respectively. As we see, there are random periodic phenomena with period \(\tau=2\pi\). That is, unlike the deterministic case, the random periodic solution of an SDE started from the same point differs slightly between two periods of time. However, the approximate structure of the graphs of the random periodic solutions with different starting points is preserved over the same period. These phenomena are caused by random oscillation in the phase space, due to the random noise constantly pumped into the system.
To check the convergence of the numerical approximations, we plot the curves from different starting points at time \(t=0\) in the same graph. As we see from Figure 3, whose starting points are \(x_{0}=0.1\) and \(x_{0}=0.15\), respectively, the trajectories become asymptotically close as time progresses. Figure 4, whose starting points are \(x_{0}=0.12\) and \(x_{0}=0.08\), respectively, also reflects the fact that whatever starting points we choose, as time moves forward the random periodic solutions approach the exact trajectories, which depend on \(\omega\in\Omega\); that is, random periodic solutions are stochastic processes and differ for every \(\omega\in\Omega\). These results confirm that the numerical methods are efficient.
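This merging of trajectories can be reproduced with a toy stand-in for SDE (23), whose display is not reproduced here; the coefficients below are hypothetical: \(dX=(-X+\sin t)\,dt+0.1\,dW\), which is dissipative with period \(\tau=2\pi\). Two trajectories driven by the same noise but started from \(x_{0}=0.1\) and \(x_{0}=0.15\) merge exponentially fast:

```python
import numpy as np

def simulate(x0, t_grid, dW):
    """Euler-Maruyama for the toy SDE dX = (-X + sin t) dt + 0.1 dW."""
    x = np.empty(len(t_grid))
    x[0] = x0
    for n in range(len(t_grid) - 1):
        dt = t_grid[n + 1] - t_grid[n]
        x[n + 1] = x[n] + (-x[n] + np.sin(t_grid[n])) * dt + 0.1 * dW[n]
    return x

rng = np.random.default_rng(1)
t = np.linspace(0.0, 35.0, 3501)
dW = rng.normal(0.0, np.sqrt(t[1] - t[0]), len(t) - 1)
xa = simulate(0.10, t, dW)  # same noise, different starting points
xb = simulate(0.15, t, dW)
gap = np.abs(xa - xb)       # decays like |0.15 - 0.10| * e^{-t}
```

Because the noise is additive and shared, the gap obeys the deterministic contraction \(d'=-d\), which is exactly the mechanism behind the mean-square uniform asymptotic stability of Theorem 3.4.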
5 Conclusion
Finally, we summarize conclusions and future work. In this paper the possibility of mean-square numerical approximations to random periodic solutions of SDEs is discussed. The random Romberg algorithm is presented in detail. The results show that the method is effective and general; numerical experiments were performed and match the theoretical analysis. In future work we will consider simpler and more practical methods for simulating a broader class of SDEs whose diffusion coefficient is a function of both t and x.
References
Mao, X: Stochastic Differential Equations and Applications, 2nd edn. Ellis Horwood, Chichester (2008)
Milstein, G: Numerical Integration of Stochastic Differential Equations. Kluwer Academic, Dordrecht (1995)
Liu, B, Han, Y, Sun, X: Square-mean almost periodic solutions for a class of stochastic integro-differential equations. J. Jilin Univ. Sci. Ed. 51(3), 393-397 (2013)
Luo, Y: Random periodic solutions of stochastic functional differential equations. PhD thesis, Loughborough University, Department of Mathematical Sciences (2014)
Feng, C, Zhao, H, Zhou, B: Pathwise random periodic solutions of stochastic differential equations. J. Differ. Equ. 251, 119-149 (2011)
Hong, J, Liu, Y: Numerical simulation of periodic and quasi-periodic solutions for nonautonomous Hamiltonian systems via the scheme preserving weak invariance. Comput. Phys. Commun. 131, 86-94 (2000)
Liu, Y, Hong, J: Numerical method of almost periodic solutions for Lotka-Volterra system. J. Tsinghua Univ. (Sci. Technol.) 40(5), 111-113 (2000)
Yevik, A, Zhao, H: Numerical approximations to the stationary solutions of stochastic differential equations. SIAM J. Numer. Anal. 49(4), 1397-1416 (2011)
Arnold, L: Random Dynamical Systems, 2nd edn. Springer, Berlin (2003)
Khasminskii, R: Stochastic Stability of Differential Equations, 2nd edn. Springer, Berlin (2011)
Wang, P: A-stable Runge-Kutta methods for stiff stochastic differential equations with multiplicative noise. Comput. Appl. Math. 34, 773-792 (2015)
Wang, T: Optimal pointwise error estimate of a compact difference scheme for the coupled Gross-Pitaevskii equations in one dimension. J. Sci. Comput. 59(1), 158-186 (2014)
Acknowledgements
The author would like to express his gratitude to Prof. Jialin Hong for his helpful discussion. This work is supported by NSFC (Nos. 11021101, 11290142, and 91130003).
Additional information
Competing interests
The author declares that he has no competing interests.
Author's contributions
The author has read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhan, Q. Mean-square numerical approximations to random periodic solutions of stochastic differential equations. Adv Differ Equ 2015, 292 (2015). https://doi.org/10.1186/s13662-015-0626-0