
Mean square numerical solution of stochastic differential equations by fourth order Runge-Kutta method and its application in the electric circuits with noise

Abstract

We consider numerical solutions of stochastic initial value problems via the random Runge-Kutta method of the fourth order. A random mean value theorem is established and the mean square convergence of the method is proved. The expectation and variance of the solution are derived, and the computational errors are illustrated with plots and tables.

1 Introduction

Stochastic differential equations (SDEs) have many applications in economics, ecology, and finance [1–3]. In recent years, the development of numerical methods for the approximation of SDEs has become a field of increasing interest; see e.g. [4–10] and the references therein. For example, in [11] a numerical solution of SDEs is obtained by a random Euler method, and in [12–15] the expectation and variance of a numerical solution of these equations are obtained by a random second order Runge-Kutta method, which is more accurate than the Euler method of [11]. In this paper we obtain the expectation and variance of the numerical solution of these equations by a random fourth order Runge-Kutta method.

A stochastic differential equation of the form

$$ \left \{ \begin{array}{@{}l} \dot{X}(t)=f(X(t),t),\quad t\in I=[t_{0},T],\\ X(t_{0})=X_{0}, \end{array} \right . $$
(1)

where \(X_{0}\) is a random variable, and the unknown \(X(t)\) as well as the right-hand side \(f(X(t),t)\) are stochastic processes defined on the same probability space \((\Omega,\digamma,P)\), is a powerful tool to model real problems with uncertainty. The authors of [16] treated the numerical solution of stochastic initial value problems based on a sample treatment of the right-hand side of the differential equations. The sample treatment approach developed in [16] has the advantage that conclusions remain true in the deterministic case, but in many situations the hypotheses assumed in [16] are not satisfied. This fact motivates the search for alternative conditions under which good numerical approximations can be constructed. Here we do not assume any trajectorial condition; instead, the mean square variation of \(f(X(t),t)\) is controlled in terms of its mean square modulus of continuity. Other numerical schemes for stochastic differential equations may be found in [4, 6, 12, 16].

This paper is organized as follows: Section 2 deals with some preliminaries addressed to clarify the presentation of concepts and results used later, including a mean value theorem for stochastic processes. In Section 3 the mean square convergence of a random fourth order Runge-Kutta method is established. In Section 4 some examples of [11, 12] illustrate the accuracy of the presented results, and Section 5 applies the method to an electric circuit with noise. Finally, Section 6 gives some brief conclusions.

2 Preliminaries

Definition 1

We are interested in second order random variables X, i.e., random variables having a density function \(f_{X}\) with finite second moment,

$$E\bigl[X^{2}\bigr]=\int_{-\infty}^{\infty}x^{2}f_{X}(x)\,dx< \infty, $$

where E denotes the expectation operator. This allows the introduction of the Banach space \(L_{2}\) of all second order random variables endowed with the norm

$$\|X\|=\sqrt{E\bigl[X^{2}\bigr]}. $$
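As a quick illustration (an addition for intuition, not part of the original presentation), the \(L_{2}\) norm can be estimated by Monte Carlo sampling. A minimal Python sketch, using the \(N(\frac{1}{2},\frac{1}{12})\) distribution that appears as the initial condition of Example 1 below; the sample size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_norm(samples):
    """Monte Carlo estimate of ||X|| = sqrt(E[X^2]) from i.i.d. samples."""
    return np.sqrt(np.mean(samples**2))

# X ~ N(1/2, 1/12), so E[X^2] = Var[X] + (E[X])^2 = 1/12 + 1/4 = 1/3.
x = rng.normal(0.5, np.sqrt(1.0 / 12.0), size=1_000_000)
print(l2_norm(x))   # ~ sqrt(1/3) ~ 0.5774
```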

Definition 2

A stochastic process \(X(t)\) defined on the probability space \((\Omega,\digamma, P)\) is called a second order stochastic process if, for each t, \(X(t)\) is a second order random variable. Hence the meaning of \(\dot{X}(t)\) in (1) is the mean square limit in \(L_{2}\) of the expression

$$\frac{X(t+\Delta{t})-X(t)}{\Delta{t}}, \quad \mbox{as } \Delta{t}\rightarrow{0}. $$

Lemma 1

Let \({X_{n}}\) and \({Y_{n}}\) be two sequences of second order random variables, mean square convergent to the second order random variables X and Y, respectively, i.e.,

$$X_{n}\rightarrow X \quad \textit{and}\quad Y_{n} \rightarrow Y \quad \textit{as } n \rightarrow\infty, $$

then

$$E[X_{n}Y_{n}]\rightarrow E[XY] \quad \textit{as } n \rightarrow\infty, $$

and so

$$\lim_{n\rightarrow{\infty}}E[X_{n}]=E[X] \quad\textit{and}\quad \lim _{n\rightarrow{\infty}}\operatorname{Var}[X_{n}]=\operatorname{Var}[X]. $$
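The first assertion follows from the Cauchy-Schwarz inequality: writing \(X_{n}Y_{n}-XY=X_{n}(Y_{n}-Y)+(X_{n}-X)Y\),

$$\bigl|E[X_{n}Y_{n}]-E[XY]\bigr|\leq\|X_{n}\|\,\|Y_{n}-Y\|+\|X_{n}-X\|\,\|Y\| \rightarrow0, $$

since \(\|X_{n}\|\) is bounded. Taking \(Y_{n}=1\) gives the statement for the expectations, and \(Y_{n}=X_{n}\) gives it for the second moments and hence for the variances.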

Definition 3

Let \(g:I\longrightarrow L_{2}\) be a mean square bounded function and let \(h>0\), then the mean square modulus of continuity of g is the function

$$\omega(g,h)= \sup_{|t-t^{*}|\leq h}{\bigl\| g(t)-g\bigl(t^{*}\bigr) \bigr\| },\quad t,t^{*}\in I. $$

Definition 4

The function g is said to be mean square uniformly continuous in I, if

$$\lim_{h\rightarrow0}\omega(g,h)=0. $$

Definition 5

Let \(f(X,t)\) be defined on \(S \times I\) where S is a bounded set in \(L_{2}\). We say that f is randomly bounded uniformly continuous in S, if

$$\lim_{h\rightarrow0} \omega\bigl(f(X,\cdot),h\bigr)=0, $$

uniformly for \(X\in S\); that is,

$$\sup_{X\in S}\omega\bigl(f(X,\cdot),h\bigr)=\omega(h) \rightarrow0 \quad \mbox{as } h\rightarrow0. $$

Definition 6

Let \(\{N_{t}\}_{t\geq0}\) be an increasing family of σ-algebras of subsets of Ω. A process \(g(t,\omega)\) from \([0,\infty) \times\Omega\) to \(R^{n}\) is called \(N_{t}\)-adapted if for each \(t\geq0\) the function \(\omega \rightarrow g(t,\omega)\) is \(N_{t}\)-measurable, [17].

Definition 7

Let \(\nu= \nu(S,T)\) be the class of functions \(f(t,\omega):[0,\infty)\times\Omega\rightarrow R\) such that:

  1. (i)

    \((t,\omega)\rightarrow f(t,\omega)\) is \(B\times\mathcal {F}\)-measurable, where B denotes the Borel σ-algebra on \([0,\infty)\) and ℱ is the σ-algebra on Ω,

  2. (ii)

    \(f(t,\omega)\) is \(\mathcal{F}_{t}\)-adapted, where \(\mathcal{F}_{t}\) is the σ-algebra generated by the random variables \(B_{s}\); \(s\leq t\),

  3. (iii)

    \(E [\int_{S}^{T} f^{2}(t,\omega)\,dt]<\infty\), [17].

Definition 8

(The Itô integral), [17]

Let \(f\in \nu(S,T)\), then the Itô integral of f (from S to T) is defined by

$$\int_{S}^{T} f(t,\omega)\,dB_{t}(\omega)= \lim_{n\rightarrow{\infty}}\int_{S}^{T} \phi_{n}(t,\omega)\,dB_{t}(\omega), $$

where \({\phi_{n}}\) is a sequence of elementary functions such that

$$E\biggl[\int_{S}^{T} \bigl(f(t,\omega)- \phi_{n}(t,\omega)\bigr)^{2}\,dt\biggr] \rightarrow0, \quad \mbox{as } n\rightarrow\infty. $$
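As a concrete illustration (an addition for intuition, not part of the original presentation), the Itô integral of \(f(t,\omega)=B_{t}\) can be approximated by the left-endpoint Riemann sums underlying this definition. A minimal NumPy sketch; the step and path counts are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, paths = 1.0, 1000, 5000
dt = T / n

# Brownian increments dB and the value of B at the left endpoint of each step.
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B_left = np.cumsum(dB, axis=1) - dB

# Left-endpoint (Ito) Riemann sums approximating int_0^T B_t dB_t, one per path.
I = np.sum(B_left * dB, axis=1)

# Ito calculus gives int_0^T B_t dB_t = (B_T**2 - T)/2; the Riemann sums
# converge to this in mean square as n grows.
B_T = np.sum(dB, axis=1)
err = I - 0.5 * (B_T**2 - T)
print(np.sqrt(np.mean(err**2)))   # root mean square error, O(n**-0.5)
```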

Theorem 1

(The Itô isometry), [17]

Let \(f\in\nu(S,T)\), then

$$E \biggl[ \biggl(\int_{S}^{T} f(t, \omega)\,dB_{t}(\omega) \biggr)^{2} \biggr]=E \biggl[\int _{S}^{T} f^{2}(t,\omega)\,dt \biggr]. $$
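For the simplest elementary integrand \(f\equiv c\), the isometry reduces to a familiar property of Brownian motion:

$$E \biggl[ \biggl(\int_{S}^{T} c\,dB_{t}(\omega) \biggr)^{2} \biggr]=c^{2}E\bigl[(B_{T}-B_{S})^{2}\bigr]=c^{2}(T-S)=\int_{S}^{T} c^{2}\,dt. $$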

Definition 9

(1-dimensional Itô processes), [17]

Let \(B_{t}\) be 1-dimensional Brownian motion on \((\Omega,\mathcal{F},P)\). A (1-dimensional) Itô process (or stochastic integral) is a stochastic process \(X_{t}\) on \((\Omega,\mathcal{F},P)\) of the form

$$X_{t}=X_{0}+\int_{0}^{t} u(s,\omega)\,ds+\int_{0}^{t} v(s, \omega)\,dB_{s}, $$

where

$$\begin{aligned}& P \biggl[\int_{0}^{t} v^{2}(s, \omega)\,ds< \infty, \mbox{for all } t\geq0 \biggr]=1, \\& P \biggl[\int_{0}^{t} \bigl| u(s,\omega)\bigr|\,ds<\infty, \mbox{for all } t\geq0 \biggr]=1. \end{aligned}$$

The Itô process \(X_{t}\) is sometimes written in the shorter differential form

$$ dX_{t}=u\,dt+v\,dB_{t} . $$
(2)

Theorem 2

(The 1-dimensional Itô formula), [17]

Let \(X_{t}\) be an Itô process given by (2) and \(g(t,x)\in C^{2}([0,\infty)\times R)\), then

$$Y_{t}=g(t,X_{t}) $$

is again an Itô process, and

$$ dY_{t}=\frac{\partial g}{\partial t}(t,X_{t})\,dt+\frac{\partial g}{\partial x}(t,X_{t})\,dX_{t}+ \frac{1}{2}\frac{\partial^{2} g}{\partial x^{2}}(t,X_{t}) (dX_{t})^{2}, $$
(3)

where \((dX_{t})^{2}=(dX_{t})(dX_{t})\) is computed according to the rules

$$ dt\cdot dt=dt\cdot dB_{t}=dB_{t}\cdot dt=0,\qquad dB_{t}\cdot dB_{t}=dt. $$
(4)
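For example, taking \(X_{t}=B_{t}\) (so \(u=0\) and \(v=1\) in (2)) and \(g(t,x)=x^{2}\), the rules (4) give

$$d\bigl(B_{t}^{2}\bigr)=2B_{t}\,dB_{t}+\frac{1}{2}\cdot2\,(dB_{t})^{2}=2B_{t}\,dB_{t}+dt, $$

which in integral form is the identity \(\int_{0}^{t}B_{s}\,dB_{s}=\frac{1}{2}(B_{t}^{2}-t)\) used in the sketch after Definition 8.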

Lemma 2

[1]

Let \(X(t)\) be a second order stochastic process, mean square continuous on \(I=[t_{0},T]\), then there exists \(\eta\in I\) such that

$$\int_{t_{0}}^{t}X(s)\,ds=X(\eta) (t-t_{0}), \quad t_{0}< t<T. $$

The purpose of the theorem below is to establish a relationship between the increment \(X(t)-X(t_{0})\) of a second order stochastic process, and its mean square derivative \(\dot{X}(\eta)\) for some η in \([t_{0},t]\) for \(t>t_{0}\). The result will be used to prove the convergence of the random Runge-Kutta method.

Theorem 3

Let \(X(t)\) be a second order stochastic process, mean square differentiable on \(I=[t_{0},T]\) and with mean square continuous derivative \(\dot{X}(t)\) there. Then there exists \(\eta\in I\) such that

$$X(t)-X(t_{0})= \dot{X}(\eta) (t-t_{0}). $$

Proof

See [1]. □

3 Convergence of random fourth order Runge-Kutta method

A random fourth order Runge-Kutta method will have the following form:

$$ X_{n+1}=X_{n}+\frac{1}{6} (k_{1}+2k_{2}+2k_{3}+k_{4} ),\quad n=0,1,2,\ldots, $$
(5)

where

$$ \left \{\begin{array}{@{}l} k_{1}=hf(X_{n},t_{n}),\\ k_{2}=hf (X_{n}+\frac{k_{1}}{2},t_{n}+\frac{h}{2} ), \\ k_{3}=hf (X_{n}+\frac{k_{2}}{2},t_{n}+\frac{h}{2} ), \\ k_{4}=hf(X_{n}+k_{3},t_{n}+h). \end{array} \right . $$
(6)
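Once a realization of the right-hand side is fixed, one step of (5)-(6) can be coded directly. A minimal Python sketch (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def rk4_step(f, x, t, h):
    """One step of the random fourth order Runge-Kutta scheme (5)-(6),
    applied to a fixed sample path of the right-hand side f(x, t)."""
    k1 = h * f(x, t)
    k2 = h * f(x + k1 / 2.0, t + h / 2.0)
    k3 = h * f(x + k2 / 2.0, t + h / 2.0)
    k4 = h * f(x + k3, t + h)
    return x + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def rk4_path(f, x0, t0, T, n):
    """Iterate rk4_step on the grid t_i = t0 + i*h; returns all X_i."""
    h = (T - t0) / n
    xs = np.empty(n + 1)
    xs[0] = x0
    for i in range(n):
        xs[i + 1] = rk4_step(f, xs[i], t0 + i * h, h)
    return xs
```

For problem (25) below, for instance, one would first sample a Brownian path on the grid of step and half-step points and pass \(f(x,t)=2tx+\exp(-t)+B(t)\); averaging many such runs then estimates \(E[X_{n}]\) and \(\operatorname{Var}[X_{n}]\).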

Theorem 4

Let \(f(X(t),t)\) be defined on \(S \times I\) with values in \(L_{2}\), where S is a bounded set in \(L_{2}\). If \(f(X(t),t)\) satisfies the conditions (C1) and (C2),

  1. (C1)

    \(f(X,t)\) is randomly bounded uniformly continuous,

  2. (C2)

    \(f(X,t)\) satisfies the mean square Lipschitz condition, that is,

    $$ \bigl\| f(X,t)-f(Y,t) \bigr\| \leq k(t) \| X-Y\|, $$
    (7)

    where \(\int_{t_{0}}^{T}k(t)\,dt<\infty\),

then the random fourth order Runge-Kutta scheme (5) is mean square convergent.

Proof

Under hypotheses (C1) and (C2), we prove the mean square convergence to zero of the error

$$ e_{n}=X_{n}-X(t_{n}), $$
(8)

where \(X(t)\) is the theoretical solution, a second order stochastic process, of the problem (1).

From Theorem 3 it follows that

$$ X(t_{n+1})=X(t_{n})+hf\bigl(X(t_{\eta}),t_{\eta} \bigr), \quad t_{\eta} \in (t_{n},t_{n+1}). $$
(9)

By (5), (6), (8), and (9) it follows that

$$\begin{aligned} \| e_{n+1} \|\leq{}&\| e_{n} \|+ \frac{h}{6} \bigl\| f(X_{n},t_{n})-f\bigl(X(t_{\eta}),t_{\eta} \bigr) \bigr\| +\frac{h}{3} \biggl\| f \biggl(X_{n}+\frac{k_{1}}{2},t_{n}+ \frac{h}{2} \biggr)-f\bigl(X(t_{\eta}),t_{\eta }\bigr) \biggr\| \\ &{}+\frac{h}{3} \biggl\| f \biggl(X_{n}+\frac{k_{2}}{2},t_{n}+ \frac {h}{2} \biggr)-f\bigl(X(t_{\eta}),t_{\eta}\bigr) \biggr\| \\ &{}+ \frac{h}{6} \bigl\| f(X_{n}+k_{3},t_{n}+h)-f \bigl(X(t_{\eta}),t_{\eta}\bigr) \bigr\| . \end{aligned}$$
(10)

Assume that

$$ M=\sup_{t_{0}\leq t\leq T}\bigl\| \dot{X}(t) \bigr\| < \infty , $$
(11)

and using (C1), (C2), and Theorem 3 we have

$$\begin{aligned} \bigl\| f(X_{n},t_{n})-f\bigl(X(t_{\eta}),t_{\eta} \bigr)\bigr\| \leq{}&\bigl\| f(X_{n},t_{n})-f\bigl(X(t_{n}),t_{n} \bigr)\bigr\| +\bigl\| f\bigl(X(t_{n}),t_{n}\bigr)-f \bigl(X(t_{\eta }),t_{n}\bigr)\bigr\| \\ &{}+\bigl\| f\bigl(X(t_{\eta}),t_{n}\bigr)-f\bigl(X(t_{\eta}),t_{\eta} \bigr)\bigr\| \\ \leq{}& k(t_{n})\| e_{n}\|+k(t_{n})Mh+ \omega(h) \end{aligned}$$
(12)

and

$$\begin{aligned}& \begin{aligned}[b] &\biggl\| f \biggl(X_{n}+\frac{k_{1}}{2},t_{n}+ \frac{h}{2}\biggr) -f\bigl(X(t_{\eta}),t_{\eta}\bigr) \biggr\| \\ &\quad\leq k \biggl(t_{n}+\frac{h}{2} \biggr) \|e_{n}\|+ \frac{3}{2}Mhk \biggl(t_{n}+\frac{h}{2} \biggr)+ \omega(h), \end{aligned} \end{aligned}$$
(13)
$$\begin{aligned}& \begin{aligned}[b] &\biggl\| f \biggl(X_{n}+\frac{k_{2}}{2},t_{n}+ \frac{h}{2} \biggr) -f\bigl(X(t_{\eta}),t_{\eta}\bigr) \biggr\| \\ &\quad\leq k \biggl(t_{n}+\frac{h}{2} \biggr) \|e_{n}\|+ \frac{3}{2}Mhk \biggl(t_{n}+\frac{h}{2} \biggr)+ \omega(h), \end{aligned} \end{aligned}$$
(14)
$$\begin{aligned}& \bigl\| f (X_{n}+k_{3},t_{n}+h )-f \bigl(X(t_{\eta}),t_{\eta}\bigr)\bigr\| \leq k (t_{n}+h ) \|e_{n}\|+2Mhk (t_{n}+h )+\omega(h). \end{aligned}$$
(15)

So, substituting (12), (13), (14), and (15) in (10), one gets

$$\begin{aligned} \|e_{n+1}\|\leq{}& \biggl[1+\frac{h}{6}k(t_{n})+ \frac{2h}{3}k \biggl(t_{n}+\frac{h}{2} \biggr)+ \frac{h}{6}k(t_{n}+h) \biggr]\|e_{n}\| \\ &{}+M\frac{h^{2}}{6}k(t_{n})+Mh^{2}k \biggl(t_{n}+\frac{h}{2} \biggr)+M\frac{h^{2}}{3}k(t_{n}+h)+h \omega(h), \end{aligned}$$
(16)

and by setting

$$\begin{aligned} &a_{n}=1+\frac{h}{6}k(t_{n})+ \frac{2h}{3}k \biggl(t_{n} +\frac{h}{2} \biggr)+ \frac{h}{6}k(t_{n}+h), \end{aligned}$$
(17)
$$\begin{aligned} &b_{n}=M\frac{h^{2}}{6}k(t_{n})+Mh^{2}k \biggl(t_{n}+\frac{h}{2} \biggr) +M\frac{h^{2}}{3}k(t_{n}+h)+h \omega(h) \end{aligned}$$
(18)

inequality (16) takes the following form:

$$ \| e_{n+1} \| \leq a_{n}\| e_{n} \|+ b_{n}, \quad n=0,1,2,\ldots, $$
(19)

and by successive substitution, (19) becomes

$$ \|e_{n+1}\|\leq \Biggl(\prod_{i=0}^{n}a_{i} \Biggr)\|e_{0}\|+\sum_{i=0}^{n} \Biggl(\prod_{j=i+1}^{n}a_{j} \Biggr)b_{i}, \quad n=0,1,2,\ldots, $$
(20)

By (17) we can write

$$\begin{aligned} \prod_{i=0}^{n}a_{i} &\leq\prod_{i=0}^{n}\exp \biggl( \frac{h}{6} \biggl[k(t_{i}) +4k \biggl(t_{i}+ \frac{h}{2} \biggr)+k(t_{i}+h) \biggr] \biggr) \\ &\leq\exp \biggl((n+1)\frac{h}{6} \biggl[k(t_{n})+4k \biggl(t_{n}+\frac {h}{2} \biggr)+k(t_{n}+h) \biggr] \biggr), \end{aligned}$$
(21)

and by (21) and summing the resulting geometric progression we conclude

$$ \sum_{i=0}^{n} \Biggl(\prod _{j=i+1}^{n}a_{j} \Biggr)\leq \frac{\exp ((n+1)\frac{h}{6} [k(t_{n})+4k (t_{n}+\frac{h}{2} )+k(t_{n}+h) ] )-1}{\frac{h}{6} [k(t_{n}) +4k (t_{n}+\frac{h}{2} )+k(t_{n}+h) ]}. $$
(22)

Finally, from (18) and substituting (21) and (22) in (20), we obtain the following error bound:

$$\begin{aligned} \| e_{n+1} \| \leq{}&\exp \biggl((n+1)\frac{h}{6} \biggl[k(t_{n})+4k \biggl(t_{n}+\frac{h}{2} \biggr)+k(t_{n}+h) \biggr] \biggr)\| e_{0} \| \\ &{} +\frac{\exp ((n+1)\frac{h}{6} [k(t_{n}) +4k (t_{n}+\frac{h}{2} )+k(t_{n}+h) ] )-1}{\frac{h}{6} [k(t_{n})+4k (t_{n}+\frac{h}{2} )+k(t_{n}+h) ]} \\ &\times{} \biggl[M\frac{h^{2}}{6}k(t_{n})+Mh^{2}k \biggl(t_{n}+\frac{h}{2} \biggr) +M\frac{h^{2}}{3}k(t_{n}+h)+h\omega(h) \biggr]; \end{aligned}$$
(23)

since \(e_{0}=0\) by construction and \(nh=T-t_{0}\), the above inequality can be written as

$$\begin{aligned} \| e_{n+1} \| \leq{}&\frac{\exp (\frac{T-t_{0}+h}{6} [k(T)+4k (T+\frac{h}{2} )+k(T+h) ] )-1}{k(T) +4k (T+\frac{h}{2} )+k(T+h)} \\ &\times{}\biggl[Mhk(T)+6Mhk \biggl(T+ \frac{h}{2} \biggr) +2Mhk(T+h)+6\omega(h) \biggr]; \end{aligned}$$
(24)

since \(\omega(h)\rightarrow0\) as \(h\rightarrow0\), by condition (C1) and inequality (24) we can deduce that the sequence \({e_{n}}\) is mean square convergent to zero as \(h\rightarrow0\). Thus we have established the theorem. □

4 Numerical examples

Here we present some examples. Since these examples can be found in [1, 2], we can compare the results.

Example 1

Consider the following problem:

$$ \left \{\begin{array}{@{}l} \dot{X}(t)=2tX(t)+\exp(-t)+B(t), \quad t\in[0,1], \\ X(0)=X_{0}, \end{array} \right . $$
(25)

where \(B(t)\) is a Brownian motion process and \(X_{0}\) is a normal random variable, \(X_{0}\sim N(\frac{1}{2},\frac{1}{12})\) independent of \(B(t)\) for each \(t\in[0,1]\).

For computing the exact solution of the problem, we multiply the equation by \(\exp(-t^{2})\) and obtain

$$-2t\exp\bigl(-t^{2}\bigr)X(t)\,dt+\exp\bigl(-t^{2}\bigr)\,dX(t)= \exp\bigl(-t^{2}\bigr) \bigl(\exp(-t)+B(t) \bigr)\,dt $$

using the Itô formula [17], we deduce

$$d \bigl(\exp\bigl(-t^{2}\bigr)X(t) \bigr)=-2t\exp\bigl(-t^{2} \bigr)X(t)\,dt+\exp\bigl(-t^{2}\bigr)\,dX(t)=\exp \bigl(-t^{2} \bigr) \bigl(\exp(-t)+B(t) \bigr)\,dt $$

and so

$$ X(t)=\exp\bigl(t^{2}\bigr) \biggl\{ X_{0}+\int _{0}^{t}\exp\bigl(-s^{2}\bigr) \bigl( \exp (-s)+B(s) \bigr)\,ds\biggr\} . $$
(26)

If \(f(X(t),t)=2tX(t)+\exp(-t)+B(t)\), we have

$$ \bigl\| f(X,t)-f\bigl(X,t^{*}\bigr)\bigr\| \leq \bigl(2\| X \| +1 \bigr)\bigl| t-t^{*} \bigr|+\bigl| t-t^{*} \bigr|^{\frac{1}{2}} $$
(27)

so \(f(X,t)\) is randomly bounded uniformly continuous in any bounded set \(S\subset L_{2}\).

Now, from the random fourth order Runge-Kutta method we have

$$ X_{n+1}=X_{n}+\frac{1}{6} (k_{1}+2k_{2}+2k_{3}+k_{4} ), $$
(28)

where

$$\begin{aligned} k_{1}={}&2ht_{n}X_{n}+h \bigl( \exp(-t_{n})+B(t_{n}) \bigr), \\ k_{2}={}&2h \biggl(t_{n}+\frac{h}{2} \biggr) (1+ht_{n} )X_{n}+h^{2} \biggl(t_{n}+ \frac{h}{2} \biggr) \bigl(\exp (-t_{n})+B(t_{n}) \bigr) \\ &{}+h \biggl(\exp \biggl(- \biggl(t_{n}+\frac{h}{2} \biggr) \biggr) +B \biggl(t_{n}+\frac{h}{2} \biggr) \biggr), \\ k_{3}=&2h \biggl(t_{n}+\frac{h}{2} \biggr) \biggl(1+h \biggl(t_{n}+\frac {h}{2} \biggr) (1+ht_{n} ) \biggr)X_{n}+h^{3} \biggl(t_{n}+ \frac {h}{2} \biggr)^{2} \bigl(\exp(-t_{n})+B(t_{n}) \bigr) \\ &{}+h \biggl(1+h \biggl(t_{n}+\frac{h}{2} \biggr) \biggr) \biggl(\exp \biggl(- \biggl(t_{n}+\frac{h}{2} \biggr) \biggr) +B \biggl(t_{n}+\frac{h}{2} \biggr) \biggr), \\ k_{4}={}&2h (t_{n}+h ) \biggl(1+2h \biggl(t_{n}+ \frac{h}{2} \biggr) \biggl(1+h \biggl(t_{n}+\frac{h}{2} \biggr) (1+ht_{n} ) \biggr) \biggr)X_{n}+2h^{4} (t_{n}+h ) \\ &{}\times \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \bigl(\exp(-t_{n})+B(t_{n}) \bigr) +2h^{2} (t_{n}+h ) \biggl(1+h \biggl(t_{n}+\frac{h}{2} \biggr) \biggr) \\ &{}\times \biggl(\exp \biggl(- \biggl(t_{n}+\frac{h}{2} \biggr) \biggr) +B \biggl(t_{n}+\frac{h}{2} \biggr) \biggr)+h \bigl(\exp \bigl(- (t_{n}+h ) \bigr) +B (t_{n}+h ) \bigr), \end{aligned}$$

and by setting

$$\begin{aligned} a_{n}={}&1+\frac{h}{3}t_{n}+\frac{2h}{3} \biggl(t_{n}+\frac{h}{2} \biggr) \biggl(1+ (1+ht_{n} ) \biggl(1+h \biggl(t_{n}+\frac{h}{2} \biggr) \biggr) \biggr) \\ &{}+\frac{h}{3} (t_{n}+h ) \biggl(1+2h \biggl(t_{n}+ \frac {h}{2} \biggr) \biggl(1+h \biggl(t_{n}+\frac{h}{2} \biggr) (1+ht_{n} ) \biggr) \biggr), \\ b_{n}={}&\frac{h}{6} \biggl(1+2h \biggl(t_{n}+ \frac{h}{2} \biggr) +2h^{2} \biggl(t_{n}+ \frac{h}{2} \biggr)^{2}+2h^{3} (t_{n}+h ) \biggl(t_{n} +\frac{h}{2} \biggr)^{2} \biggr) \bigl( \exp(-t_{n})+B(t_{n}) \bigr) \\ &{}+\frac{h}{3} \biggl(2+h \biggl(t_{n}+\frac{h}{2} \biggr)+h (t_{n}+h ) \biggl(1+h \biggl(t_{n}+ \frac{h}{2} \biggr) \biggr) \biggr) \\ &{}\times\biggl(\exp \biggl(- \biggl(t_{n} +\frac{h}{2} \biggr) \biggr)+B \biggl(t_{n}+\frac{h}{2} \biggr) \biggr) +\frac{h}{6} \bigl(\exp \bigl(- (t_{n}+h ) \bigr)+B (t_{n}+h ) \bigr), \end{aligned}$$

we have

$$ X_{n+1}=a_{n}X_{n}+b_{n}, \quad n=0,1,2,\ldots, $$
(29)

and so

$$ X_{n}= \Biggl(\prod_{i=0}^{n-1}a_{i} \Biggr)X_{0}+\sum_{i=0}^{n-1} \Biggl(\prod_{j=i+1}^{n-1}a_{j} \Biggr)b_{i}, \quad n=1,2,3,\ldots. $$
(30)
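To make the recursion concrete, one can run (28)-(30) along a single sampled Brownian path and compare with the exact solution (26) evaluated along the same path. A self-contained sketch (the seed, step size, and trapezoidal quadrature are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
h, n = 1.0 / 20.0, 20
half = h / 2.0
t = np.arange(2 * n + 1) * half    # grid containing the half-step points

# One Brownian path sampled on the half-step grid, plus a grid lookup for it.
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(half), 2 * n))))
Bf = lambda s: B[int(round(s / half))]

X0 = rng.normal(0.5, np.sqrt(1.0 / 12.0))
f = lambda x, s: 2.0 * s * x + np.exp(-s) + Bf(s)

# RK4 recursion (28) along this path.
x = X0
for i in range(n):
    tn = i * h
    k1 = h * f(x, tn)
    k2 = h * f(x + k1 / 2.0, tn + half)
    k3 = h * f(x + k2 / 2.0, tn + half)
    k4 = h * f(x + k3, tn + h)
    x += (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Exact solution (26) at t = 1 along the same path (trapezoidal quadrature).
g = np.exp(-t**2) * (np.exp(-t) + B)
exact = np.exp(1.0) * (X0 + np.sum((g[1:] + g[:-1]) * np.diff(t) / 2.0))
print(x, exact)   # the two agree up to discretization error
```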

From (26) and (30), we obtain the expectation and variance of \(X(t)\) and \(X_{n}\):

$$\begin{aligned}& E\bigl[X(t)\bigr]=\exp\bigl(t^{2}\bigr) \biggl[\frac{1}{2}+\int _{0}^{t}\exp\bigl(-s^{2}-s\bigr)\,ds \biggr], \end{aligned}$$
(31)
$$\begin{aligned}& E[X_{n}]=\frac{1}{2}\prod_{i=0}^{n-1}a_{i}+ \sum_{i=0}^{n-1} \Biggl(\prod _{j=i+1}^{n-1}a_{j} \Biggr)E[b_{i}], \quad n=1,2,3,\ldots, \end{aligned}$$
(32)

where

$$\begin{aligned} E[b_{i}]={}&\frac{h}{6} \biggl(1+2h \biggl(t_{i}+ \frac{h}{2} \biggr) +2h^{2} \biggl(t_{i}+ \frac{h}{2} \biggr)^{2}+2h^{3} (t_{i}+h ) \biggl(t_{i}+\frac{h}{2} \biggr)^{2} \biggr) \exp(-t_{i})\\ &{}+\frac{h}{3} \biggl(2+h \biggl(t_{i}+\frac{h}{2} \biggr)+h (t_{i}+h ) \biggl(1+h \biggl(t_{i}+ \frac{h}{2} \biggr) \biggr) \biggr) \exp \biggl(- \biggl(t_{i}+ \frac{h}{2} \biggr) \biggr)\\ &{}+\frac{h}{6}\exp \bigl(-(t_{i}+h) \bigr) \end{aligned}$$

and

$$\begin{aligned}& \begin{aligned}[b] \operatorname{Var}\bigl[X(t)\bigr]={}&\exp \bigl(2t^{2}\bigr) \biggl[\frac{1}{12}+\int_{0}^{t} \int_{0}^{t}\exp \bigl(-s^{2}-r^{2} \bigr)\min(s,r)\,ds\,dr \biggr] \\ ={}&\exp\bigl(2t^{2}\bigr) \biggl[\frac{1}{12}+\int _{0}^{t} \bigl(\exp\bigl(-s^{2}\bigr)- \exp \bigl(-2s^{2}\bigr) \bigr)\,ds \biggr], \end{aligned} \end{aligned}$$
(33)
$$\begin{aligned}& \operatorname{Var}[X_{n}]=\frac{1}{12} \Biggl(\prod _{i=0}^{n-1}a_{i} \Biggr)^{2}+\sum _{i=0}^{n-1}\sum_{k=0}^{n-1} \Biggl(\prod_{j=i+1}^{n-1}a_{j} \Biggr) \Biggl(\prod_{l=k+1}^{n-1}a_{l} \Biggr)\operatorname{Cov}[b_{i},b_{k}], \quad n=1,2,3, \ldots, \end{aligned}$$
(34)

where

$$\begin{aligned} \operatorname{Cov}[b_{i},b_{k}]={}&A_{i,k}\min (t_{i},t_{k} ) +B_{i,k}\min \biggl(t_{i},t_{k}+ \frac{h}{2} \biggr)\\ &{}+C_{i} \min (t_{i},t_{k}+h )+B_{k,i} \min \biggl(t_{i}+\frac{h}{2},t_{k} \biggr) \\ &{}+D_{i,k}\min \biggl(t_{i}+\frac{h}{2},t_{k}+ \frac{h}{2} \biggr)+E_{i} \min \biggl(t_{i}+ \frac{h}{2},t_{k}+h \biggr)+C_{k}\min (t_{i}+h,t_{k} ) \\ &{}+E_{k}\min \biggl(t_{i}+h,t_{k}+ \frac{h}{2} \biggr)+\frac{h^{2}}{36}\min (t_{i}+h,t_{k}+h ), \end{aligned}$$

where

$$\begin{aligned}& \begin{aligned}[b] A_{i,k}={}&\frac{h^{2}}{36} \biggl(1+2h \biggl(t_{i}+\frac{h}{2} \biggr) +2h^{2} \biggl(t_{i}+\frac{h}{2} \biggr)^{2}+2h^{3} (t_{i}+h ) \biggl(t_{i}+\frac{h}{2} \biggr)^{2} \biggr) \\ &{}\times \biggl(1+2h \biggl(t_{k}+\frac{h}{2} \biggr)+2h^{2} \biggl(t_{k}+\frac {h}{2} \biggr)^{2}+2h^{3} (t_{k}+h ) \biggl(t_{k}+\frac {h}{2} \biggr)^{2} \biggr), \end{aligned} \\& \begin{aligned}[b] B_{i,k}={}&\frac{h^{2}}{18} \biggl(1+2h \biggl(t_{i}+\frac{h}{2} \biggr) +2h^{2} \biggl(t_{i}+\frac{h}{2} \biggr)^{2}+2h^{3} (t_{i}+h ) \biggl(t_{i}+\frac{h}{2} \biggr)^{2} \biggr) \\ &{}\times \biggl(2+h \biggl(t_{k}+\frac{h}{2} \biggr)+h (t_{k}+h ) \biggl(1+h \biggl(t_{k}+\frac{h}{2} \biggr) \biggr) \biggr), \end{aligned} \\& C_{i}=\frac{h^{2}}{36} \biggl(1+2h \biggl(t_{i}+ \frac{h}{2} \biggr)+2h^{2} \biggl(t_{i}+ \frac{h}{2} \biggr)^{2}+2h^{3} (t_{i}+h ) \biggl(t_{i}+\frac{h}{2} \biggr)^{2} \biggr), \\& \begin{aligned}[b] D_{i,k}={}&\frac{h^{2}}{9} \biggl(2+h \biggl(t_{i}+\frac{h}{2} \biggr) +h (t_{i}+h ) \biggl(1+h \biggl(t_{i}+\frac{h}{2} \biggr) \biggr) \biggr) \\ &{}\times \biggl(2+h \biggl(t_{k}+\frac{h}{2} \biggr)+h (t_{k}+h ) \biggl(1+h \biggl(t_{k}+\frac{h}{2} \biggr) \biggr) \biggr), \end{aligned} \\& E_{i}=\frac{h^{2}}{18} \biggl(2+h \biggl(t_{i}+ \frac{h}{2} \biggr) +h (t_{i}+h ) \biggl(1+h \biggl(t_{i}+\frac{h}{2} \biggr) \biggr) \biggr), \quad i,k=0,1,2,\ldots,n-1. \end{aligned}$$
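The closed forms (31) and (33) are straightforward to evaluate numerically, which gives the reference values against which \(E[X_{n}]\) and \(\operatorname{Var}[X_{n}]\) from (32) and (34) are compared at the grid points. A SciPy quadrature sketch (default tolerances are assumed adequate on \([0,1]\)):

```python
import numpy as np
from scipy.integrate import quad

def mean_exact(t):
    """E[X(t)] from (31)."""
    I, _ = quad(lambda s: np.exp(-s**2 - s), 0.0, t)
    return np.exp(t**2) * (0.5 + I)

def var_exact(t):
    """Var[X(t)] from (33)."""
    I, _ = quad(lambda s: np.exp(-s**2) - np.exp(-2.0 * s**2), 0.0, t)
    return np.exp(2.0 * t**2) * (1.0 / 12.0 + I)

print(mean_exact(1.0), var_exact(1.0))
```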

The absolute errors of the expectation and variance of \(X(t)\) with the Euler, RK2, and RK4 methods and \(h=\frac{1}{20}\), \(h=\frac{1}{50}\) are shown in Tables 1 and 2. In Figure 1, the expectation and variance of the exact and numerical solutions of Example 1 with the RK4 method and \(h=\frac{1}{20}\) are compared. They show that the numerical values of \(E[X_{n}]\) and \(\operatorname{Var}[X_{n}]\) are closer to the theoretical values \(E[X(t_{n})]\) and \(\operatorname{Var}[X(t_{n})]\) when the parameter h decreases.

Figure 1: Expectations and variances of \(X(t)\) and \(X_{n}\) with the RK4 method and \(h=\frac{1}{20}\).

Table 1: Absolute error of the expectation of \(X(t)\) with the Euler, RK2, and RK4 methods and \(h=\frac{1}{20}\), \(h=\frac{1}{50}\)
Table 2: Absolute error of the variance of \(X(t)\) with the Euler, RK2, and RK4 methods and \(h=\frac{1}{20}\), \(h=\frac{1}{50}\)

Example 2

Consider the following initial value problem:

$$ \left \{\begin{array}{@{}l} \dot{X}(t)=t^{2}X(t)+W(t), \quad t\in[0,1], \\ X(0)=X_{0}, \end{array} \right . $$
(35)

where \(W(t)\) is a Gaussian white noise process with mean zero and \(X_{0}\) is an exponential random variable with parameter \(\lambda=\frac{1}{2}\), independent of \(W(t)\) for each \(t\in [0,1]\). Here \(f(X(t),t)\) involves the white noise process with mean zero \(W(t)\), i.e. \(f(X(t),t)=t^{2}X(t)+W(t)\).

The covariance of \(W(t)\) is

$$ \operatorname{Cov}\bigl[W(t),W(s)\bigr]=\delta(t-s), $$
(36)

where \(\delta(t)\) is the delta generalized function. A convolution with the delta function always exists, see [18], and the delta function plays the same role for the convolution as unity does for multiplication,

$$\delta\ast g=g. $$

So, taking \(g(s)=h(s)\chi_{[0,t]}(s)\), where \(h(s)\) is a \(C^{\infty}\) function and \(\chi_{[0,t]}(s)\) denotes the characteristic function on the interval \([0,t]\), from (36) it follows that

$$\int_{-\infty}^{\infty}g(s)\delta(s-r)\,ds=\int _{-\infty}^{\infty}h(s)\chi _{[0,t]}(s)\delta(s-r)\,ds= \int_{0}^{t}h(s)\delta(s-r)\,ds=h(r). $$

For computing the exact solution of the problem, by multiplying both sides of (35) by \(\exp (\frac{-t^{3}}{3} )\), and using \(W(t)=\frac{dB(t)}{dt}\), we have

$$-t^{2}\exp \biggl(\frac{-t^{3}}{3} \biggr)X(t)\,dt+\exp \biggl( \frac{-t^{3}}{3} \biggr)\,dX(t) =\exp \biggl(\frac{-t^{3}}{3} \biggr)\,dB(t), $$

using the Itô formula [17], we conclude

$$d \biggl(\exp \biggl(\frac{-t^{3}}{3} \biggr)X(t) \biggr)=-t^{2}\exp \biggl(\frac {-t^{3}}{3} \biggr)X(t)\,dt+\exp \biggl(\frac{-t^{3}}{3} \biggr)\,dX(t) =\exp \biggl(\frac{-t^{3}}{3} \biggr)\,dB(t), $$

and so

$$ X(t)=\exp \biggl(\frac{t^{3}}{3} \biggr) \biggl[X_{0}+\int _{0}^{t} \exp \biggl(\frac{-s^{3}}{3} \biggr)\,dB(s) \biggr]. $$
(37)
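The moments of (37), namely (40) and (42) below, can be checked by direct Monte Carlo simulation of the stochastic integral; a minimal sketch in which the integral is approximated by a left-endpoint sum (sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
t, n, paths = 1.0, 1000, 20000
dt = t / n
s = np.arange(n) * dt    # left endpoints of the partition

X0 = rng.exponential(scale=2.0, size=paths)   # lambda = 1/2, so E[X0] = 2
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
I = dB @ np.exp(-s**3 / 3.0)                  # int_0^t exp(-s^3/3) dB(s)
X = np.exp(t**3 / 3.0) * (X0 + I)

print(X.mean())   # theory (40): 2*exp(1/3)
print(X.var())    # theory (42): exp(2/3)*(4 + int_0^1 exp(-2 s^3/3) ds)
```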

Now, we compute \(X_{n}\) from the random fourth order Runge-Kutta method,

$$ X_{n+1}=X_{n}+\frac{1}{6} (k_{1}+2k_{2}+2k_{3}+k_{4} ), $$
(38)

where

$$\begin{aligned} k_{1}={}&ht_{n}^{2}X_{n}+hW(t_{n}), \\ k_{2}={}&h \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggl(1+\frac {h}{2}t_{n}^{2} \biggr)X_{n}+\frac{h^{2}}{2} \biggl(t_{n}+ \frac{h}{2} \biggr)^{2}W(t_{n})+hW \biggl(t_{n}+\frac{h}{2} \biggr), \\ k_{3}={}&h \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggl(1+\frac{h}{2} \biggl(t_{n}+ \frac{h}{2} \biggr)^{2} \biggl(1+\frac{h}{2}t_{n}^{2} \biggr) \biggr)X_{n}+\frac{h^{3}}{4} \biggl(t_{n}+ \frac{h}{2} \biggr)^{4}W(t_{n}) \\ &{}+h \biggl(1+\frac{h}{2} \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggr)W \biggl(t_{n}+\frac{h}{2} \biggr), \\ k_{4}={}&h (t_{n}+h )^{2} \biggl(1+h \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggl(1+ \frac{h}{2} \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggl(1+\frac{h}{2}t_{n}^{2} \biggr) \biggr) \biggr)X_{n}+\frac{h^{4}}{4} \biggl(t_{n}+ \frac{h}{2} \biggr)^{4} \\ &{}\times (t_{n}+h )^{2}W(t_{n})+h^{2} (t_{n}+h )^{2} \biggl(1+\frac{h}{2} \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggr) W \biggl(t_{n}+\frac{h}{2} \biggr)+hW (t_{n}+h ), \end{aligned}$$

and by setting

$$\begin{aligned}& \begin{aligned}[b] a_{n}={}&1+\frac{h}{6}t_{n}^{2}+ \frac{h}{3} \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggl(1+ \biggl(1+\frac{h}{2}t_{n}^{2} \biggr) \biggl(1+\frac {h}{2} \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggr) \biggr) \\ &{}+\frac{h}{6} (t_{n}+h )^{2} \biggl(1+h \biggl(t_{n}+\frac {h}{2} \biggr)^{2} \biggl(1+ \frac{h}{2} \biggl(t_{n}+\frac{h}{2} \biggr)^{2} \biggl(1+\frac{h}{2}t_{n}^{2} \biggr) \biggr) \biggr), \end{aligned} \\& \begin{aligned}[b] b_{n}={}&\frac{h}{6} \biggl(1+h \biggl(t_{n}+ \frac{h}{2} \biggr)^{2}+\frac {h^{2}}{2} \biggl(t_{n}+ \frac{h}{2} \biggr)^{4} \biggl(1+\frac{h}{2} (t_{n}+h )^{2} \biggr) \biggr)W(t_{n}) \\ &{}+\frac{h}{3} \biggl(1+ \biggl(1+\frac{h}{2} \biggl(t_{n} +\frac{h}{2} \biggr)^{2} \biggr) \biggl(1+\frac{h}{2} (t_{n}+h )^{2} \biggr) \biggr)W \biggl(t_{n} +\frac{h}{2} \biggr)+\frac{h}{6}W (t_{n}+h ), \end{aligned} \end{aligned}$$

we have

$$X_{n+1}=a_{n}X_{n}+b_{n}, \quad n=0,1,2,\ldots, $$

and so

$$ X_{n}= \Biggl(\prod_{i=0}^{n-1}a_{i} \Biggr)X_{0} +\sum_{i=0}^{n-1} \Biggl(\prod_{j=i+1}^{n-1}a_{j} \Biggr)b_{i}, \quad n=1,2,3,\ldots. $$
(39)

From (37) and (39) we obtain the expectation and variance of \(X(t)\) and \(X_{n}\):

$$\begin{aligned}& E\bigl[X(t)\bigr]=2\exp \biggl(\frac{t^{3}}{3} \biggr), \end{aligned}$$
(40)
$$\begin{aligned}& E[X_{n}]=2\prod_{i=0}^{n-1}a_{i}+ \sum_{i=0}^{n-1} \Biggl(\prod _{j=i+1}^{n-1}a_{j} \Biggr)E[b_{i}]=2 \prod_{i=0}^{n-1}a_{i}, \end{aligned}$$
(41)

and

$$\begin{aligned}& \operatorname{Var}\bigl[X(t)\bigr]=\exp \biggl(\frac{2t^{3}}{3} \biggr) \biggl[4+\int_{0}^{t}\exp \biggl( \frac{-2s^{3}}{3} \biggr)\,ds \biggr], \end{aligned}$$
(42)
$$\begin{aligned}& \operatorname{Var}[X_{n}]=4 \Biggl( \prod _{i=0}^{n-1}a_{i} \Biggr)^{2}+\sum _{i=0}^{n-1}\sum_{k=0}^{n-1} \Biggl(\prod_{j=i+1}^{n-1}a_{j} \Biggr) \Biggl(\prod_{l=k+1}^{n-1}a_{l} \Biggr)E[b_{i}b_{k}], \end{aligned}$$
(43)

where

$$\begin{aligned} E[b_{i}b_{k}]={}&A_{i,k}\delta (t_{i}-t_{k} ) +B_{i,k}\delta \biggl(t_{i}-t_{k}-\frac{h}{2} \biggr)+B_{k,i} \delta \biggl(t_{i}-t_{k}+\frac {h}{2} \biggr) \\ &{}+C_{i}\delta (t_{i}-t_{k}-h )+C_{k}\delta (t_{i}-t_{k}+h ), \end{aligned}$$

where

$$\begin{aligned}& \begin{aligned}[b] A_{i,k}={}&\frac{h^{2}}{36} \biggl(1+ \biggl[1+h \biggl(t_{i}+\frac {h}{2} \biggr)^{2}+ \frac{h^{2}}{2} \biggl(t_{i}+\frac{h}{2} \biggr)^{4} \biggl(1+\frac{h}{2} (t_{i}+h )^{2} \biggr) \biggr] \\ &{}\times \biggl[1+h \biggl(t_{k}+\frac{h}{2} \biggr)^{2}+\frac{h^{2}}{2} \biggl(t_{k}+ \frac{h}{2} \biggr)^{4} \biggl(1+\frac{h}{2} (t_{k}+h )^{2} \biggr) \biggr] \biggr) \\ &{}+\frac{h^{2}}{9} \biggl[1+ \biggl(1+\frac{h}{2} \biggl(t_{i}+\frac {h}{2} \biggr)^{2} \biggr) \biggl(1+ \frac{h}{2} (t_{i}+h )^{2} \biggr) \biggr]\\ &{}\times \biggl[1+ \biggl(1+\frac{h}{2} \biggl(t_{k}+\frac {h}{2} \biggr)^{2} \biggr) \biggl(1+\frac{h}{2} (t_{k}+h )^{2} \biggr) \biggr], \end{aligned} \\& \begin{aligned}[b] B_{i,k}={}&\frac{h^{2}}{18} \biggl(1+ \biggl(1+ \frac{h}{2} \biggl(t_{i}+\frac{h}{2} \biggr)^{2} \biggr) \biggl(1+\frac{h}{2} (t_{i}+h )^{2} \biggr)+ \biggl[1+h \biggl(t_{i}+\frac{h}{2} \biggr)^{2} +\frac{h^{2}}{2} \biggl(t_{i}+ \frac{h}{2} \biggr)^{4} \\ &{}\times \biggl(1+\frac{h}{2} (t_{i}+h )^{2} \biggr) \biggr] \biggl[1+ \biggl(1+\frac{h}{2} \biggl(t_{k}+ \frac{h}{2} \biggr)^{2} \biggr) \biggl(1+\frac{h}{2} (t_{k}+h )^{2} \biggr) \biggr] \biggr), \end{aligned} \\& C_{i}=\frac{h^{2}}{36} \biggl(1+h \biggl(t_{i} + \frac{h}{2} \biggr)^{2}+\frac{h^{2}}{2} \biggl(t_{i} +\frac{h}{2} \biggr)^{4} \biggl(1+\frac {h}{2} (t_{i}+h )^{2} \biggr) \biggr),\quad i,k=0,1,2,\ldots,n-1. \end{aligned}$$

The absolute errors of the expectation and variance of \(X(t)\) with the Euler, RK2, and RK4 methods and \(h=\frac{1}{20}\), \(h=\frac{1}{50}\) are shown in Tables 3 and 4. In Figure 2, the expectation and variance of the exact and numerical solutions of Example 2 with the RK4 method and \(h=\frac{1}{20}\) are compared.

Figure 2: Expectations and variances of \(X(t)\) and \(X_{n}\) with the RK4 method and \(h=\frac{1}{20}\).

Table 3: Absolute error of the expectation of \(X(t)\) with the Euler, RK2, and RK4 methods and \(h=\frac{1}{20}\), \(h=\frac{1}{50}\)
Table 4: Absolute error of the variance of \(X(t)\) with the Euler, RK2, and RK4 methods and \(h=\frac{1}{20}\), \(h=\frac{1}{50}\)

Figures 1, 2 show that \(E[X_{n}]\) and \(\operatorname{Var}[X_{n}]\) of the numerical solutions of stochastic initial value problems via random Runge-Kutta methods of the fourth order are close to \(E[X(t)]\) and \(\operatorname{Var}[X(t)]\), respectively, as \(h \rightarrow0\).

5 Applications in the electric circuits with noise

Consider the following RC circuit with constant parameters:

$$ \left \{\begin{array}{@{}l} R\frac{dQ(t)}{dt}+\frac{1}{C}Q(t)=V(t)+\alpha(t)W(t),\\ Q(0)=Q_{0}, \end{array} \right . $$
(44)

where \(Q(t)\) is the electric charge at time t; \(Q_{0}\), the initial charge at time \(t=0\), is an exponential random variable with parameter \(\lambda=\frac{1}{3}\), independent of \(W(t)\) for each \(t\in[0,1]\); \(V(t)\) is a nonrandom function of time representing the voltage at time t; \(W(t)=\frac{dB(t)}{dt}\) is a 1-dimensional white noise process, where \(B(t)\) is a 1-dimensional Brownian motion; and \(\alpha(t)\) is a nonrandom function describing the intensity of the noise at time t.

Now, to solve this stochastic differential equation, we multiply (44) by \(\frac{1}{R}e^{\frac{t}{RC}}\) and obtain

$$ e^{\frac{t}{RC}}\,dQ(t)+\frac{1}{RC}e^{\frac{t}{RC}}Q(t)\,dt= \frac {1}{R}e^{\frac{t}{RC}}V(t)\,dt+\frac{1}{R}\alpha(t)e^{\frac{t}{RC}}\,dB(t). $$
(45)

Now, by assuming \(g(t,x)=e^{\frac{t}{RC}}x\) and using Theorem 2, we conclude

$$ d \bigl(e^{\frac{t}{RC}}Q(t) \bigr)=\frac{1}{RC}e^{\frac {t}{RC}}Q(t)\,dt+e^{\frac{t}{RC}}\,dQ(t). $$
(46)

By (45) and (46) we have

$$ Q(t)=e^{\frac{-t}{RC}} \biggl[Q_{0}+\frac{1}{R} \int_{0}^{t}e^{\frac {s}{RC}}V(s)\,ds+ \frac{1}{R}\int_{0}^{t} \alpha(s)e^{\frac{s}{RC}}\,dB(s) \biggr]. $$
(47)

Now, we compute \(Q_{n}\) from the random fourth order Runge-Kutta method,

$$ Q_{n+1}=Q_{n}+\frac{1}{6} (k_{1}+2k_{2}+2k_{3}+k_{4} ), $$
(48)

where

$$\begin{aligned} k_{1}={}&\frac{h}{R} \biggl[-\frac{1}{C}Q_{n}+V(t_{n})+ \alpha (t_{n})W(t_{n}) \biggr], \\ k_{2}={}&\frac{h}{R} \biggl[-\frac{1}{C} \biggl(1- \frac{h}{2RC} \biggr)Q_{n}-\frac{h}{2RC} \bigl(V(t_{n})+\alpha(t_{n})W(t_{n}) \bigr)+V \biggl(t_{n}+\frac{h}{2}\biggr)\\ &{}+\alpha\biggl(t_{n}+ \frac{h}{2}\biggr)W\biggl(t_{n}+\frac {h}{2}\biggr) \biggr], \\ k_{3}={}&\frac{h}{R} \biggl[-\frac{1}{C} \biggl(1- \frac{h}{2RC}+\frac {h^{2}}{4R^{2}C^{2}} \biggr)Q_{n}+\frac{h^{2}}{4R^{2}C^{2}} \bigl(V(t_{n}) +\alpha(t_{n})W(t_{n}) \bigr)\\ &{}+ \biggl(1-\frac{h}{2RC} \biggr) \biggl(V\biggl(t_{n}+ \frac{h}{2}\biggr) +\alpha\biggl(t_{n}+\frac{h}{2}\biggr)W \biggl(t_{n}+\frac{h}{2}\biggr) \biggr) \biggr], \\ k_{4}={}&\frac{h}{R} \biggl[-\frac{1}{C} \biggl(1- \frac{h}{RC}+\frac {h^{2}}{2R^{2}C^{2}}-\frac{h^{3}}{4R^{3}C^{3}} \biggr)Q_{n}\\ &{}- \frac {h^{3}}{4R^{3}C^{3}} \bigl(V(t_{n})+\alpha(t_{n})W(t_{n}) \bigr)-\frac {h}{RC} \biggl(1-\frac{h}{2RC} \biggr) \\ &{}\times \biggl(V\biggl(t_{n}+\frac{h}{2}\biggr)+\alpha \biggl(t_{n}+\frac{h}{2}\biggr)W\biggl(t_{n}+ \frac {h}{2}\biggr) \biggr)+V(t_{n}+h)+\alpha(t_{n}+h)W(t_{n}+h) \biggr], \end{aligned}$$

and by setting

$$\begin{aligned} a={}&1-\frac{h}{RC}+\frac{h^{2}}{2R^{2}C^{2}}-\frac {h^{3}}{6R^{3}C^{3}}+ \frac{h^{4}}{24R^{4}C^{4}}, \\ b_{n}={}&\frac{h}{6R} \biggl[1-\frac{h}{RC}+ \frac {h^{2}}{2R^{2}C^{2}}-\frac{h^{3}}{4R^{3}C^{3}} \biggr] \bigl(V(t_{n})+ \alpha(t_{n})W(t_{n}) \bigr)+\frac{h}{3R} \biggl[1+ \biggl(1-\frac{h}{2RC} \biggr)^{2} \biggr] \\ &{}\times \biggl(V\biggl(t_{n}+\frac{h}{2}\biggr)+\alpha \biggl(t_{n}+\frac{h}{2}\biggr)W\biggl(t_{n}+ \frac {h}{2}\biggr) \biggr)\\ &{}+\frac{h}{6R} \bigl(V(t_{n}+h)+ \alpha (t_{n}+h)W(t_{n}+h) \bigr), \end{aligned}$$

we have

$$Q_{n+1}=aQ_{n}+b_{n}, \quad n=0,1,2,\ldots, $$

and so

$$ Q_{n}=a^{n}Q_{0}+\sum _{i=0}^{n-1}a^{n-i-1}b_{i}, \quad n=1,2,3,\ldots. $$
(49)

From (47) and (49), we obtain the expectation and variance of \(Q(t)\) and \(Q_{n}\).

$$\begin{aligned}& E\bigl[Q(t)\bigr]=e^{\frac{-t}{RC}} \biggl[3+\frac{1}{R}\int _{0}^{t}e^{\frac {s}{RC}}V(s)\,ds \biggr], \end{aligned}$$
(50)
$$\begin{aligned}& \begin{aligned}[b] E[Q_{n}]={}&3a^{n}+\sum _{i=0}^{n-1}a^{n-i-1} \biggl(\frac{h}{6R} \biggl[1-\frac{h}{RC}+\frac{h^{2}}{2R^{2}C^{2}}-\frac {h^{3}}{4R^{3}C^{3}} \biggr]V(t_{i})\\ &{}+\frac{h}{3R} \biggl[1+ \biggl(1- \frac{h}{2RC} \biggr)^{2} \biggr] V\biggl(t_{i}+\frac{h}{2}\biggr)+ \frac{h}{6R}V(t_{i}+h) \biggr), \end{aligned} \end{aligned}$$
(51)

and

$$\begin{aligned}& \operatorname{Var}\bigl[Q(t)\bigr]=\exp \biggl(\frac{-2t}{RC} \biggr) \biggl[9+\frac{1}{R^{2}}\int_{0}^{t} \alpha^{2}(s)\exp \biggl(\frac{2s}{RC} \biggr)\,ds \biggr], \end{aligned}$$
(52)
$$\begin{aligned}& \operatorname{Var}[Q_{n}]=9a^{2n}+\sum _{i=0}^{n-1} \sum_{k=0}^{n-1}a^{2n-i-k-2} \operatorname{Cov}[b_{i},b_{k}], \end{aligned}$$
(53)

where

$$\begin{aligned} \operatorname{Cov}[b_{i},b_{k}]={}&A_{i,k}\delta (t_{i}-t_{k} )+B_{i,k}\delta \biggl(t_{i}-t_{k}- \frac{h}{2} \biggr)+B_{k,i}\delta \biggl(t_{i}-t_{k}+ \frac{h}{2} \biggr) \\ &{}+C_{i,k}\delta (t_{i}-t_{k}-h )+C_{k,i}\delta (t_{i}-t_{k}+h ), \end{aligned}$$

where

$$\begin{aligned}& \begin{aligned}[b] A_{i,k}={}&\frac{h^{2}}{36R^{2}} \biggl[1- \frac{h}{RC}+\frac {h^{2}}{2R^{2}C^{2}}-\frac{h^{3}}{4R^{3}C^{3}} \biggr]^{2} \alpha (t_{i})\alpha(t_{k})+\frac{h^{2}}{9R^{2}} \biggl[1+ \biggl(1-\frac {h}{2RC} \biggr)^{2} \biggr]^{2} \\ &{}\times\alpha\biggl(t_{i}+\frac{h}{2}\biggr)\alpha \biggl(t_{k}+\frac{h}{2}\biggr)+\frac {h^{2}}{36R^{2}} \alpha(t_{i}+h)\alpha(t_{k}+h), \end{aligned} \\& \begin{aligned}[b] B_{i,k}={}&\frac{h^{2}}{18R^{2}} \biggl[1- \frac{h}{RC}+\frac {h^{2}}{2R^{2}C^{2}}-\frac{h^{3}}{4R^{3}C^{3}} \biggr] \biggl[1+ \biggl(1-\frac{h}{2RC} \biggr)^{2} \biggr]\alpha(t_{i}) \alpha\biggl(t_{k}+\frac {h}{2}\biggr) \\ &{}+\frac{h^{2}}{18R^{2}} \biggl[1+ \biggl(1-\frac{h}{2RC} \biggr)^{2} \biggr] \alpha\biggl(t_{i}+\frac{h}{2} \biggr)\alpha(t_{k}+h), \\ C_{i,k}=&\frac{h^{2}}{36R^{2}} \biggl[1-\frac{h}{RC}+ \frac {h^{2}}{2R^{2}C^{2}}-\frac{h^{3}}{4R^{3}C^{3}} \biggr]\alpha(t_{i})\alpha (t_{k}+h), \quad i,k=0,1,2,\ldots,n-1. \end{aligned} \end{aligned}$$
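As in the earlier examples, the exact moments (50) and (52) can be evaluated by quadrature. A sketch with the parameter choices used in Table 5 below (\(V(t)=\exp(t)\), \(\alpha(t)=\frac{\sin(t)}{25}\), \(R=1\), \(C=2\)):

```python
import numpy as np
from scipy.integrate import quad

R, C = 1.0, 2.0
V = np.exp
alpha = lambda t: np.sin(t) / 25.0

def mean_Q(t):
    """E[Q(t)] from (50); E[Q_0] = 1/lambda = 3."""
    I, _ = quad(lambda s: np.exp(s / (R * C)) * V(s), 0.0, t)
    return np.exp(-t / (R * C)) * (3.0 + I / R)

def var_Q(t):
    """Var[Q(t)] from (52); Var[Q_0] = 1/lambda^2 = 9."""
    I, _ = quad(lambda s: alpha(s)**2 * np.exp(2.0 * s / (R * C)), 0.0, t)
    return np.exp(-2.0 * t / (R * C)) * (9.0 + I / R**2)

print(mean_Q(1.0), var_Q(1.0))
```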

The absolute errors of the expectation and variance of \(Q_{n}\) with \(V(t)=\exp(t)\), \(\alpha(t)=\frac{\sin(t)}{25}\), \(R=1\), \(C=2\) are shown in Table 5.

Table 5: Absolute error of the expectation and variance of \(Q_{n}\) with \(h=\frac{1}{20}\)

The absolute errors of the expectation and variance of \(Q_{n}\) with the same parameters are shown in Figure 3.

Figure 3: Expectations and variances of \(Q(t)\) and \(Q_{n}\) with \(h=\frac{1}{20}\).

6 Conclusion

In this paper, the numerical solution of stochastic differential equations by the fourth order Runge-Kutta method was discussed in detail. The results can be compared with [1, 2]. Our comparison shows that this method is more accurate than the Euler method and the second order Runge-Kutta methods of [1, 2].

References

  1. Cortés, JC, Jódar, L, Villafuerte, L: Numerical solution of random differential equations: a mean square approach. Math. Comput. Model. 45, 757-765 (2007)


  2. Khodabin, M, Maleknejad, K, Rostami, M, Nouri, M: Numerical solution of stochastic differential equations by second order Runge-Kutta methods. Math. Comput. Model. 53, 1910-1920 (2011)


  3. Khodabin, M, Maleknejad, K, Rostami, M, Nouri, M: Interpolation solution in generalized stochastic exponential population growth model. Appl. Math. Model. 36, 1023-1033 (2012)


  4. Soboleva, TK, Pleasants, AB: Population growth as a nonlinear stochastic process. Math. Comput. Model. 38, 1437-1442 (2003)


  5. Koskodan, R, Allen, E: Construction of consistent discrete and continuous stochastic models for multiple assets with application to option valuation. Math. Comput. Model. 48, 1775-1786 (2008)


  6. Cortés, JC, Jódar, L, Villafuerte, L: Random linear-quadratic mathematical models: computing explicit solutions and applications. Math. Comput. Simul. 79, 2076-2090 (2009)


  7. Kloeden, PE, Platen, E: Numerical Solution of Stochastic Differential Equations. Applications of Mathematics. Springer, Berlin (1999)


  8. Milstein, GN: Numerical Integration of Stochastic Differential Equations. Kluwer Academic, Dordrecht (1995)


  9. Calbo, G, Cortés, JC, Jódar, L: Random analytic solution of coupled differential models with uncertain initial condition and source term. Comput. Math. Appl. 56, 785-798 (2008)


  10. Cortés, JC, Jódar, L, Camacho, F, Villafuerte, L: Random Airy type differential equations: mean square exact and numerical solutions. Comput. Math. Appl. 60, 1237-1244 (2010)


  11. Cortés, JC, Jódar, L, Villafuerte, L, Company, R: Numerical solution of random differential models. Math. Comput. Model. 54, 1846-1851 (2011)


  12. Cortés, JC, Jódar, L, Villafuerte, L, Villanueva, RJ: Computing mean square approximations of random diffusion models with source term. Math. Comput. Simul. 76, 44-48 (2007)


  13. Maleknejad, K, Khodabin, M, Rostami, M: Numerical solution of stochastic Volterra integral equations by stochastic operational matrix based on block pulse functions. Math. Comput. Model. 55, 791-800 (2012)


  14. Cortés, JC, Jódar, L, Villafuerte, L: Mean square numerical solution of random differential equations: facts and possibilities. Comput. Math. Appl. 53, 1098-1106 (2007)


  15. Soong, TT: Random Differential Equations in Science and Engineering. Academic Press, New York (1973)


  16. Calbo, G, Cortés, JC, Jódar, L, Villafuerte, L: Analytic stochastic process solutions of second-order random differential equations. Appl. Math. Lett. 23, 1421-1424 (2010)


  17. Oksendal, B: Stochastic Differential Equations: An Introduction with Applications, 5th edn. Springer, New York (1998)


  18. Lighthill, MJ: An Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press, Cambridge (1996)


Acknowledgements

The authors would like to thank Islamic Azad University of Karaj Branch for partially financially supporting this research and providing facilities and encouraging this work.

Author information

Corresponding author: Morteza Khodabin.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

About this article


Cite this article

Khodabin, M., Rostami, M. Mean square numerical solution of stochastic differential equations by fourth order Runge-Kutta method and its application in the electric circuits with noise. Adv Differ Equ 2015, 62 (2015). https://doi.org/10.1186/s13662-015-0398-6
