
Theory and Modern Applications

Time-averaging principle for G-SDEs based on Lyapunov condition

Abstract

In this paper, we tame the uncertainty about the volatility in the time-averaging principle for stochastic differential equations driven by G-Brownian motion (G-SDEs) based on the Lyapunov condition. That is, we treat the time-averaging principle for stochastic differential equations based on the Lyapunov condition in the presence of a family of probability measures, each corresponding to a different scenario for the volatility. The main tool for the mathematical analysis is the G-stochastic calculus introduced in the book by Peng (Nonlinear Expectations and Stochastic Calculus Under Uncertainty. Springer, Berlin, 2019). Using properties of the G-stochastic calculus, we show that the solution of the standard equation converges to the solution of the corresponding averaged equation in the sense of sublinear expectation. Numerical results obtained using Python illustrate the efficiency of the averaging method.

1 Introduction

In mathematical finance, the traditional way of representing random fluctuations of financial quantities is to use a Brownian motion, which is typically scaled by a constant volatility. However, there is plenty of empirical evidence, derived from market prices, showing that the volatility of financial quantities is not constant and not even deterministic. The presence of volatility uncertainty leads to mathematical difficulties since the family of probability measures representing volatility uncertainty contains mutually singular measures. The canonical process B has a different volatility under each probability measure in a family of probability measures \(\mathcal{P}\). Thus the quadratic variation process \(\langle B \rangle = ( \langle B \rangle _{t} )_{t\geq 0}\) differs among the probability measures in \(\mathcal{P}\). For example, if we consider the probability measures \(P^{\bar{\sigma}}\) and \(P^{\underline{\sigma}}\), induced by the constant volatilities σ̄ and \(\underline{\sigma}\), respectively, then we have

$$\begin{aligned} P^{\underline{\sigma}} \bigl( \langle B \rangle _{t} = \underline{ \sigma}^{2} t \bigr) =1 \neq 0 = P^{\bar{\sigma}} \bigl( \langle B \rangle _{t} = \underline{\sigma}^{2} t \bigr). \end{aligned}$$

Therefore, the set \(\mathcal{P}\) contains mutually singular probability measures, that is, measures in \(\mathcal{P}\) may have different null sets. This causes mathematical problems since many results from probability theory and stochastic calculus only hold up to null sets of the underlying measure. Important examples include time-consistent conditional expectations and stochastic integrals.
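This separation of scenarios can be illustrated numerically: under each scenario measure the realized quadratic variation of the canonical process concentrates at \(\sigma^{2} t\), so the events \(\{\langle B\rangle _{t} = \bar{\sigma}^{2} t\}\) and \(\{\langle B\rangle _{t} = \underline{\sigma}^{2} t\}\) distinguish the measures. A minimal Monte Carlo sketch with illustrative volatilities \(\bar{\sigma}=2\) and \(\underline{\sigma}=0.5\) (these constants are assumptions for the example, not values from the text):

```python
import numpy as np

def realized_qv(sigma, t=1.0, n=100_000, seed=0):
    """Realized quadratic variation sum_j (B_{t_{j+1}} - B_{t_j})^2 of a
    Brownian path with constant volatility sigma on [0, t]."""
    rng = np.random.default_rng(seed)
    dB = sigma * np.sqrt(t / n) * rng.standard_normal(n)
    return float(np.sum(dB**2))

# Under P^sigma the realized quadratic variation concentrates at sigma^2 * t,
# so the two scenario measures charge disjoint events.
print(realized_qv(2.0))   # close to sigma_bar^2 * t = 4.0
print(realized_qv(0.5))   # close to sigma_low^2 * t = 0.25
```

Each scenario identifies its own volatility from the path, which is exactly why no single reference measure can dominate the whole family.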

The averaging principle is an important property in the study of the dynamical behavior of nonlinear dynamical systems. The key technique of the averaging principle is time-scale separation. In particular, the averaging principle provides a powerful tool for simplifying dynamical systems and obtaining approximate solutions to differential equations arising from mechanics, mathematics, physics, control, and other areas. Averaging principles for stochastic systems were proposed by Stratonovich [19, 20] to examine nonlinear oscillation problems in the presence of random noise. Since then a large number of papers have been devoted to the study of the averaging principle for stochastic (partial) differential equations, see Khasminskii [12], Freidlin and Wentzell [2], Givon [7], Fu and Liu [3], Xu, Duan, and Xu [22], Fu, Wan, and Liu [4], Xu, Miao, and Liu [21], etc.

From the point of view of fully nonlinear parabolic partial differential equations, Hu and Wang [10] established the averaging principle for stochastic differential equations driven by G-Brownian motion, where the coefficients satisfy a global Lipschitz condition, see assumption (H1) in Hu and Wang [10]. From the same point of view, Hu, Jiang, and Wang [9] extended the result of Hu and Wang [10] to forward-backward stochastic differential equations driven by G-Brownian motion with a global Lipschitz condition. To the authors' knowledge, averaging principles for stochastic differential equations with locally Lipschitz coefficients based on a Lyapunov condition under volatility uncertainty, that is, stochastic differential equations driven by G-Brownian motion (G-SDEs) under sublinear expectation, have not been considered. Therefore in this paper we consider these averaging principles under a family of probability measures \(\mathcal{P}\), where the coefficients of the stochastic differential equations need not satisfy a global Lipschitz assumption. Another important difference from Hu, Jiang, and Wang [9] is the mode of convergence. As the reviewer pointed out, the global Lipschitz condition is not essential; the main difference is that we obtain a strong convergence result instead of the weak convergence of Hu et al. [9], at the cost of the rather strong condition (B). Mao, Chen, and You [15] obtained an averaging principle for stochastic differential equations driven by G-Brownian motion with global non-Lipschitz coefficients. Recent important progress in the theory of volatility uncertainty and G-Brownian motion is reviewed by Peng [16], with comments on its interpretation, theory, and significance.

In this paper, we study the averaging principle for the following stochastic differential equation with locally Lipschitz coefficients based on a Lyapunov condition under volatility uncertainty:

$$\begin{aligned} X^{\epsilon}_{t} ={}& X^{\epsilon}_{0} + \int _{0}^{t} b\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) \,ds + \int _{0}^{t} h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) \,d\langle B\rangle _{s} \\ &{} + \int _{0}^{t} \sigma \biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) \,dB_{s} ,\quad t\in [0,T], \end{aligned}$$

where the canonical process B is a 1-dimensional G-Brownian motion, which was introduced by Peng [16] and is used to represent the uncertainty about the volatility, and \(\langle B\rangle \) is the quadratic variation process of the G-Brownian motion B. From the construction of G-Brownian motion, Peng proved that the quadratic variation process \((\langle B\rangle _{t})_{t\geq 0}\) is an increasing stochastic process with \(\langle B\rangle _{0}=0\), and \(\langle B\rangle _{t}\) is not deterministic unless \((B_{t})_{t\geq 0}\) is a classical Brownian motion. For more details on G-Brownian motion, we refer to Peng [16].

The main difficulty of this paper is dealing with locally Lipschitz coefficients based on a Lyapunov condition. We shall apply the localization technique to approximate the solution of the stochastic differential equation with locally Lipschitz coefficients. Based on the work of Li and Peng [14] and Hu, Wang, and Zheng [11], the space of suitable integrands is essentially expanded, requiring less regularity and only local integrability. In particular, we will define truncated G-SDEs that are uniformly Lipschitz and carefully choose the stopping times to construct a consistent localized sequence. The Lyapunov-type condition then ensures that the G-SDE with locally Lipschitz coefficients can be approximated pathwise by the truncated G-SDEs.

This paper is organized as follows. Section 2 briefly introduces G-Brownian motion, which represents the volatility uncertainty, and the related stochastic calculus. In Sect. 3, we prove the averaging principle for stochastic differential equations under a family of probability measures. In Sect. 4, we present numerical simulations of stochastic differential equations under volatility uncertainty and give three examples to demonstrate the averaging method using Python.

2 Preliminaries

We introduce in this section some of the basic notions relating to volatility uncertainty and G-Brownian motion and then recall some preliminary results in G-Brownian motion, which are needed in the sequel. More details can be found in Peng [16].

2.1 G-Brownian motion

Definition 2.1

Let Ω be a given set and let \(\mathcal{H}\) be a linear space of real-valued functions defined on Ω such that if \(X_{i}\in \mathcal{H}, i=1,2,\ldots, d\), then \(\varphi (X_{1},\ldots,X_{d}) \in \mathcal{H}\) for all \(\varphi \in C_{b, \mathrm{Lip}}(\mathbb{R}^{d})\), where \(C_{b, \mathrm{Lip}}(\mathbb{R}^{d})\) is the space of all bounded real-valued Lipschitz continuous functions on \(\mathbb{R}^{d}\). A sublinear expectation \(\hat{\mathbb{E}}\) on \(\mathcal{H}\) is a functional \(\hat{\mathbb{E}}: \mathcal{H}\to \mathbb{R}\) satisfying the following properties: for all \(X, Y\in \mathcal{H}\),

  1. (i)

    Monotonicity: If \(X\leq Y\), then \(\hat{\mathbb{E}}[X]\leq \hat{\mathbb{E}}[Y]\);

  2. (ii)

    Constant preserving: \(\hat{\mathbb{E}}[c]=c\) for any \(c\in \mathbb{R}\);

  3. (iii)

    Subadditivity: \(\hat{\mathbb{E}}[X+Y]\leq \hat{\mathbb{E}}[X]+\hat{\mathbb{E}}[Y]\);

  4. (iv)

    Positive homogeneity: \(\hat{\mathbb{E}}[\lambda X]= \lambda \hat{\mathbb{E}}[X] \) for any \(\lambda \geq 0\).

The triple \((\Omega, \mathcal{H}, \hat{\mathbb{E}})\) is called a sublinear expectation space.

Denote by \(\Omega = C_{0}(\mathbb{R}^{+})\) the space of all \(\mathbb{R}\)-valued continuous paths \((\omega _{t})_{t\in \mathbb{R}^{+}}\), with \(\omega _{0}=0\), equipped with the distance

$$\begin{aligned} d\bigl(\omega ^{1},\omega ^{2}\bigr):= \sum _{i=1}^{\infty }2^{-i} \Bigl[\max _{t \in [0,i]} \bigl\vert \omega _{t}^{1}-\omega _{t}^{2} \bigr\vert \wedge 1\Bigr]. \end{aligned}$$
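As a concrete reading of this formula, the distance can be approximated by truncating the series and taking the maximum over a finite grid on each interval \([0,i]\); a sketch (the grid resolution and truncation level are arbitrary illustrative choices):

```python
def path_dist(w1, w2, n_terms=50):
    """d(w1, w2) = sum_{i>=1} 2^{-i} [ max_{t in [0,i]} |w1(t) - w2(t)| ∧ 1 ],
    truncated at n_terms terms, with the max taken over a finite grid;
    w1 and w2 are paths given as functions of t >= 0."""
    total = 0.0
    for i in range(1, n_terms + 1):
        grid = [k * i / 1000 for k in range(1001)]   # grid approximating [0, i]
        m = max(abs(w1(t) - w2(t)) for t in grid)
        total += 2.0 ** (-i) * min(m, 1.0)
    return total

# the metric is bounded by 1, and identical paths are at distance 0
```

Since each summand is capped at \(2^{-i}\), distant paths such as \(\omega^{1}\equiv 0\) and \(\omega^{2}_{t}=t\) sit at distance close to 1, the maximal value of the metric.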

\(\mathcal{B}(\Omega )\) is the Borel σ-algebra of Ω. For each \(t\in [0,\infty )\), we introduce the following spaces:

  • \(\Omega _{t}:= \{\omega (\cdot \wedge t): \omega \in \Omega \}, \mathcal{F}_{t}:= \mathcal{B}(\Omega _{t})\);

  • \(L^{0}(\Omega )\): the space of all \(\mathcal{B}(\Omega )\)-measurable real functions;

  • \(L^{0}(\Omega _{t})\): the space of all \(\mathcal{F}_{t}\)-measurable real functions;

  • \(B_{b}(\Omega )\): all bounded elements in \(L^{0}(\Omega )\), \(B_{b}(\Omega _{t}):= B_{b}(\Omega ) \cap L^{0}(\Omega _{t})\);

  • \(C_{b}(\Omega )\): all continuous elements in \(B_{b}(\Omega )\), \(C_{b}(\Omega _{t}):= C_{b}(\Omega ) \cap L^{0}(\Omega _{t})\).

  • \(C_{b, \mathrm{Lip}}(\mathbb{R}^{n})\): the space of all bounded \(\mathbb{R}\)-valued Lipschitz continuous functions on \(\mathbb{R}^{n}\).

Let \(\Omega = C_{0}(\mathbb{R}^{+})\) be the space of all \(\mathbb{R}\)-valued continuous paths \((\omega _{t})_{t\geq 0}\) starting from the origin, equipped with the topology of locally uniform convergence induced by the distance d above, and let \(B_{t}(\omega )= \omega _{t}\) be the canonical process. For each \(t\in [0, \infty )\), define \(\Omega _{t}= \{\omega _{\cdot \wedge t}: \omega \in \Omega \}\). Set

$$\begin{aligned} \mathrm{Lip}(\Omega _{t}) = \bigl\{ \varphi (B_{t_{1}}, \dots, B_{t_{n}}): n\geq 1, t_{1}, \dots, t_{n}\in [0, t], \varphi \in C_{b, \mathrm{Lip}}\bigl(\mathbb{R}^{n}\bigr) \bigr\} , \end{aligned}$$

and \(\mathrm{Lip}(\Omega )= \bigcup_{t\geq 0} \mathrm{Lip}(\Omega _{t})\). For each \(x\in \mathbb{R}\), we consider any given monotonic and sublinear function

$$\begin{aligned} G(x):= \frac{1}{2}\bigl(\bar{\sigma}^{2} x^{+} -\underline{\sigma}^{2} x^{-}\bigr). \end{aligned}$$
(1)

Here, we assume that G is nondegenerate, i.e., \(0< \underline{\sigma}^{2} \leq \bar{\sigma}^{2} <\infty \). In Peng [16], a G-Brownian motion is constructed on a sublinear expectation space \((\Omega, \mathrm{Lip}(\Omega ), \hat{\mathbb{E}}, (\hat{\mathbb{E}}_{t})_{t \geq 0})\), which is called G-expectation space. In this space the corresponding canonical process \(B_{t}(\omega ) = \omega _{t}\) is a G-Brownian motion.
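The generator (1) admits the equivalent scenario representation \(G(x)=\frac{1}{2}\sup_{\underline{\sigma}^{2}\leq v\leq \bar{\sigma}^{2}} vx\), which is the analytic face of taking a supremum over volatility scenarios. A quick numerical sanity check with illustrative bounds \(\bar{\sigma}^{2}=4\), \(\underline{\sigma}^{2}=0.25\) (an assumption for the example):

```python
def G(x, s_bar2=4.0, s_low2=0.25):
    """G(x) = (s_bar2 * x^+ - s_low2 * x^-) / 2, as in (1),
    with x^+ = max(x, 0) and x^- = max(-x, 0)."""
    return 0.5 * (s_bar2 * max(x, 0.0) - s_low2 * max(-x, 0.0))

def G_sup(x, s_bar2=4.0, s_low2=0.25):
    """Scenario form: G(x) = (1/2) sup_{v in [s_low2, s_bar2]} v * x;
    the sup of a function linear in v is attained at an endpoint."""
    return 0.5 * max(s_low2 * x, s_bar2 * x)

# monotonicity and sublinearity can be checked pointwise
xs = [-3.0, -1.0, -0.1, 0.0, 0.2, 1.0, 2.5]
assert all(abs(G(x) - G_sup(x)) < 1e-12 for x in xs)
assert all(G(x + y) <= G(x) + G(y) + 1e-12 for x in xs for y in xs)
```

As a supremum of linear functions, G is automatically convex, positively homogeneous, and hence subadditive, which is what "monotonic and sublinear" requires.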

Let \(\mathbb{L}^{p}_{G}(\Omega )\) (respectively \(L_{\ast}^{p}(\Omega )\)) be the completion of \(\mathrm{Lip}(\Omega )\) (respectively \(B_{b}(\Omega )\)) under the natural norm \(\|X\|_{p} = \hat{\mathbb{E}}[|X|^{p}]^{1/p}\). Denis, Hu, and Peng [1] proved that

$$\begin{aligned} C_{b}(\Omega ) \subset \mathbb{L}_{G}^{p}( \Omega ) \subset L_{\ast}^{p}( \Omega ), \end{aligned}$$

and there exists a weakly compact family \(\mathcal{P}\) of probability measures defined on \((\Omega, \mathcal{B}(\Omega ))\) such that

$$\begin{aligned} \hat{\mathbb{E}}[X] =\sup_{P\in \mathcal{P}} E_{P}[X] \quad\text{for any } X\in \mathbb{L}^{1}_{G}(\Omega ). \end{aligned}$$

Then \(L_{\ast}^{p}(\Omega )\) and \(\mathbb{L}_{G}^{p}(\Omega )\) can be characterized as follows:

$$\begin{aligned} L_{\ast}^{p}(\Omega ) = \Bigl\{ X\in L^{0}(\Omega ) | \lim_{x\to \infty} \hat{\mathbb{E}}\bigl[ \vert X \vert ^{p}I_{ \vert X \vert \ge x}\bigr] =0 \Bigr\} \end{aligned}$$

and

$$\begin{aligned} \mathbb{L}_{G}^{p}(\Omega ) = \bigl\{ X\in L_{\ast}^{p}(\Omega ) | \text{ $X$ has a quasi-continuous version} \bigr\} . \end{aligned}$$

We will introduce two natural capacities:

$$\begin{aligned} \mathbb{V}(A):= \hat{\mathbb{E}}[I_{A}] = \sup_{P\in \mathcal{P}} E_{P}[I_{A}] = \sup_{P\in \mathcal{P}} P(A),\quad A\in \mathcal{B}(\Omega ) \end{aligned}$$

and

$$\begin{aligned} v(A):= -\hat{\mathbb{E}}[-I_{A}] = -\sup_{P\in \mathcal{P}} E_{P}[-I_{A}] = \inf_{P\in \mathcal{P}} P(A),\quad A\in \mathcal{B}(\Omega ). \end{aligned}$$

Definition 2.2

A set \(A\in \mathcal{B}(\Omega )\) is polar if \(\mathbb{V}(A) = 0\). A property holds quasi-surely (q.s.) if it holds outside a polar set.

In what follows, we do not distinguish between two random variables X and Y if \(X = Y\) q.s.

The following inequality is a capacity version of the Markov inequality.

Proposition 2.3

Let \(X\in L^{0}(\Omega )\) with \(\hat{\mathbb{E}}[|X|^{p}]< \infty \) for some \(p>0\). Then, for any \(x>0\),

$$\begin{aligned} \mathbb{V}\bigl( \vert X \vert \geq x\bigr) \leq \frac{\hat{\mathbb{E}}[ \vert X \vert ^{p}]}{x^{p}}. \end{aligned}$$

For the proof, see Lemma 6.1.17 in Peng [16].
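The capacity Markov inequality can be probed by Monte Carlo: realizing \(\hat{\mathbb{E}}\) as a supremum over a toy scenario family (here \(X=\sigma Z\) with Z standard normal and σ ranging over three illustrative values, an assumption standing in for the full set \(\mathcal{P}\)), the bound \(\mathbb{V}(|X|\geq x)\leq \hat{\mathbb{E}}[|X|^{p}]/x^{p}\) holds scenario-wise and therefore survives taking suprema:

```python
import numpy as np

def capacity_markov(x=1.5, p=2, n=200_000, seed=1):
    """Return (V(|X| >= x), E_hat[|X|^p] / x^p) for X = sigma * Z under a
    toy scenario family sigma in {0.5, 1.0, 2.0}; both sides are suprema
    over the family, estimated by Monte Carlo under each scenario."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    sigmas = (0.5, 1.0, 2.0)
    V = max(float(np.mean(np.abs(s * z) >= x)) for s in sigmas)
    E_hat = max(float(np.mean(np.abs(s * z) ** p)) for s in sigmas)
    return V, E_hat / x**p

V, bound = capacity_markov()
assert V <= bound   # capacity version of the Markov inequality
```

The key point the sketch makes visible: the supremum on the left is attained scenario by scenario, so the classical Markov inequality under each P transfers directly to the capacity.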

2.2 G-stochastic calculus

Peng [16] also introduced the related stochastic calculus of Itô type with respect to G-Brownian motion. Now we recall Peng’s G-stochastic calculus from Li and Peng [14] or Chap. 8 in Peng [16], and let \(T>0\) be fixed.

Definition 2.4

Consider the following simple type of processes:

$$\begin{aligned} M_{b,0} (0,T) ={}& \Biggl\{ \eta: \eta _{t}(\omega ) = \sum _{j=0}^{N-1} \xi _{j}(\omega )I_{[t_{j}, t_{j+1})} (t): N\in \mathbb{N}, \\ & 0=t_{0}< \cdots < t_{N}=T, \xi _{j} \in B_{b}( \Omega _{t_{j}}), j=0,1,2,\ldots, N-1 \Biggr\} . \end{aligned}$$

For an element \(\eta \in M_{b,0}(0,T)\) with \(\eta _{t}= \sum_{j=0}^{N-1} \xi _{j}(\omega )I_{[t_{j}, t_{j+1})} (t)\), the related Bochner integral is

$$\begin{aligned} \int _{0}^{T} \eta _{t}(\omega ) \,dt = \sum_{j=0}^{N-1} \xi _{j}( \omega ) (t_{j+1}-t_{j}). \end{aligned}$$

Definition 2.5

For each \(p\geq 1\), we denote by \(M_{\ast}^{p} (0,T)\) the completion of \(M_{b,0} (0,T)\) under the norm

$$\begin{aligned} \Vert \eta \Vert _{M^{p} (0,T)} = \biggl(\hat{\mathbb{E}} \biggl[ \int _{0}^{T} \bigl\vert \eta (t) \bigr\vert ^{p}\,dt \biggr] \biggr)^{1/p}. \end{aligned}$$

Definition 2.6

For each \(\eta \in M_{b,0} (0,T)\) of the form

$$\begin{aligned} \eta _{t} (\omega ) = \sum_{j=0}^{N-1} \xi _{j}(\omega )I_{[t_{j}, t_{j+1})} (t), \end{aligned}$$

define

$$\begin{aligned} I(\eta ) = \int _{0}^{T} \eta _{s} \,dB_{s}:= \sum_{j=0}^{N-1} \xi _{j}(B_{t_{j+1}} - B_{t_{j}} ). \end{aligned}$$

The mapping \(I: M_{b,0}(0,T) \to L_{\ast}^{2}(\Omega _{T})\) can be continuously extended to \(I: M_{\ast}^{2}(0,T) \to L_{\ast}^{2}(\Omega _{T})\). For each \(\eta \in M_{\ast}^{2}(0,T)\), the stochastic integral is defined by

$$\begin{aligned} I(\eta ) = \int _{0}^{T} \eta _{s} \,dB_{s}, \quad \eta \in M_{\ast}^{2}(0,T). \end{aligned}$$
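For a simple integrand the integral in Definition 2.6 is just a finite Itô sum, which is straightforward to compute; a sketch (checking the measurability requirement \(\xi _{j}\in B_{b}(\Omega _{t_{j}})\) is left to the caller):

```python
def ito_integral_simple(xi, B):
    """I(eta) = sum_j xi_j (B_{t_{j+1}} - B_{t_j}) for the simple integrand
    eta_t = sum_j xi_j 1_{[t_j, t_{j+1})}(t); B holds the path values
    B_{t_0}, ..., B_{t_N} on the partition, so len(B) == len(xi) + 1."""
    return sum(x * (B[j + 1] - B[j]) for j, x in enumerate(xi))

# with eta identically 1 the sum telescopes to B_T - B_0
```

Note that each \(\xi _{j}\) multiplies the *forward* increment \(B_{t_{j+1}}-B_{t_{j}}\); this left-endpoint evaluation is what makes the sum an Itô (rather than Stratonovich) integral.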

Definition 2.7

For fixed \(p\geq 1\), a stochastic process \((\eta _{t})_{t\geq 0}\) is said to be in \(M_{w}^{p}(0,T)\) if there exists a sequence of increasing stopping times \(\{\sigma _{n}\}_{n\in \mathbb{N}}\) such that

$$\begin{aligned} \bigl\{ \eta _{t} I_{[0, \sigma _{n}]}(t)\bigr\} _{t\in [0,T]} \in M_{\ast}^{p}(0,T),\quad \forall n\in \mathbb{N}, \end{aligned}$$

and if \(\Omega ^{(n)}:= \{\omega \in \Omega: \sigma _{n}(\omega )\wedge T = T \}\) and \(\hat{\Omega}:= \bigcup_{n\in \mathbb{N}} \Omega ^{(n)}\), then \(\mathbb{V}(\hat{\Omega}^{c})=0\).

Given \(\eta \in M_{w}^{2}(0,T)\) associated with \(\{\sigma _{n}\}_{n\in \mathbb{N}}\), we set \(\tau _{n}:= \sigma _{n}\wedge T\) and consider the continuous modification of \((\int _{0}^{t} \eta _{s} I_{[0, \tau _{n}]}(s)\,dB_{s})_{0\leq t\leq T}\). For each \(m,n\in \mathbb{N}, n>m\), we can find a polar set \(\hat{A}^{m,n}\) such that for all \(\omega \in (\hat{A}^{m,n})^{c}\) the following equality holds:

$$\begin{aligned} \int _{0}^{t\wedge \tau _{m}} \eta _{s} \,dB_{s}(\omega ) = \int _{0}^{t} \eta _{s} I_{[0, \tau _{m}]}(s) I_{[0, \tau _{n}]}(s) \,dB_{s}(\omega ) = \int _{0}^{t\wedge \tau _{m}} \eta _{s} I_{[0, \tau _{n}]}(s) \,dB_{s}( \omega ) \end{aligned}$$

for \(0\leq t\leq T\). Define a polar set \(\hat{A}:= \bigcup_{m=1}^{\infty }\bigcup_{n=m+1}^{\infty }\hat{A}^{m,n}\). For each \(n\in \mathbb{N}\) and \((\omega, t) \in \Omega \times [0,T]\), we set

$$\begin{aligned} X_{t}^{n} (\omega ):= \textstyle\begin{cases} \int _{0}^{t} \eta _{s} I_{[0, \tau _{n}]} \,dB_{s}(\omega ), & \omega \in \hat{A}^{c}\cap \Omega; \\ 0, & \text{otherwise.} \end{cases}\displaystyle \end{aligned}$$

For each \(\omega \in \hat{A}^{c}\) and \(m,n\in \mathbb{N}\) with \(n>m\), we have \(X^{n}(\omega ) \equiv X^{m}(\omega )\) on \([0, \tau _{m}(\omega )]\). Therefore we can unambiguously define a process by stipulating that it equals \(X^{m}\) on \([0, \tau _{m}(\omega )]\).

Definition 2.8

Let \(\eta \in M_{w}^{2}(0,T)\). For each \((\omega, t)\in \Omega \times [0,T]\), we define

$$\begin{aligned} \int _{0}^{t} \eta _{s} \,dB_{s}(\omega ):= \lim_{n\to \infty} X_{t}^{n}( \omega ). \end{aligned}$$

It is important to note that the quadratic variation process of the G-Brownian motion B is not always deterministic; it can be formulated by

$$\begin{aligned} \langle B\rangle _{t}:= \lim_{N\to \infty} \sum _{j=0}^{N-1} (B_{t_{j+1}^{N}} - B_{t_{j}^{N}} )^{2} = B_{t}^{2}-2 \int _{0}^{t} B_{s} \,dB_{s}, \end{aligned}$$

where \(t_{j}^{N} = jT/N\), \(j=0,1,\ldots,N\), for each integer \(N\geq 1\).
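The identity above holds exactly for the discrete approximating sums by telescoping, which gives a quick numerical check under a single illustrative scenario with constant volatility σ = 1.7 (this scenario and all constants are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
n, t, sigma = 10_000, 1.0, 1.7
dB = sigma * np.sqrt(t / n) * rng.standard_normal(n)
B = np.concatenate([[0.0], np.cumsum(dB)])            # B_{t_0}, ..., B_{t_n}

qv = float(np.sum(np.diff(B) ** 2))                   # sum (B_{t_{j+1}} - B_{t_j})^2
ito = float(B[-1] ** 2 - 2.0 * np.sum(B[:-1] * np.diff(B)))  # B_t^2 - 2 * Ito sum

# the two expressions agree exactly at the discrete level (telescoping),
# and both approximate sigma^2 * t under this fixed scenario
assert abs(qv - ito) < 1e-8
```

The telescoping step is elementary: \((B_{t_{j+1}}-B_{t_{j}})^{2} = B_{t_{j+1}}^{2}-B_{t_{j}}^{2}-2B_{t_{j}}(B_{t_{j+1}}-B_{t_{j}})\), and summing over j collapses the first two terms.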

Definition 2.9

Define a mapping \(Q: M_{b,0}(0,T) \to L_{\ast}^{1}(\Omega _{T})\):

$$\begin{aligned} Q(\eta ) = \int _{0}^{T} \eta _{s} \,d\langle B \rangle _{s}:= \sum_{j=0}^{N-1} \xi _{j}\bigl(\langle B\rangle _{t_{j+1}^{N}} - \langle B\rangle _{t_{j}^{N}} \bigr). \end{aligned}$$

Then Q can be uniquely extended to \(M_{w}^{1}(0,T)\); we still denote this mapping by

$$\begin{aligned} Q(\eta ) = \int _{0}^{T} \eta _{s} \,d\langle B \rangle _{s}, \quad\eta \in M_{w}^{1}(0,T). \end{aligned}$$

In view of the dual formulation of sublinear expectation and the properties of the quadratic variation process \(\langle B\rangle \) in the framework of sublinear expectation, the following BDG-type inequalities from Gao [5] can be generalized to \(\eta \in M_{w}^{p}(0,T)\).

Lemma 2.10

  1. (1)

    For each \(p\geq 1\) and \(\eta \in M_{w}^{p}(0,T)\),

    $$\begin{aligned} \hat{\mathbb{E}}\biggl[\sup_{0\leq t\leq T} \biggl\vert \int _{0}^{t} \eta _{s} \,d \langle B \rangle _{s} \biggr\vert ^{p}\biggr] \leq \bar{ \sigma}^{2p} T^{p-1} \int _{0}^{T} \hat{\mathbb{E}}\bigl[ \vert \eta _{s} \vert ^{p}\bigr] \,ds. \end{aligned}$$
  2. (2)

    For each \(p\geq 2\) and \(\eta \in M_{w}^{p}(0,T)\), there exists some constant \(C_{p}\) depending only on p and T such that

    $$\begin{aligned} \hat{\mathbb{E}}\biggl[\sup_{0\leq t\leq T} \biggl\vert \int _{0}^{t} \eta _{s} \,dB_{s} \biggr\vert ^{p}\biggr] \leq C_{p} \hat{\mathbb{E}} \biggl[ \biggl\vert \int _{0}^{T} \vert \eta _{s} \vert ^{2} \,ds \biggr\vert ^{\frac{p}{2}} \biggr]. \end{aligned}$$

2.3 G-stochastic differential equation

Consider the following SDE driven by a 1-dimensional G-Brownian motion:

$$\begin{aligned} X_{t} ={}& X_{0} + \int _{0}^{t} b(s, X _{s}) \,ds + \int _{0}^{t} h(s, X _{s}) \,d\langle B \rangle _{s} + \int _{0}^{t} \sigma (s, X _{s}) \,dB_{s},\quad t\in [0,T], \end{aligned}$$
(2)

where the initial condition \(X_{0}\in \mathbb{R}\) is a given constant.

We will consider this G-SDE, whose coefficients satisfy both a locally Lipschitz condition and a Lyapunov-type condition.

  1. (A1)

\(b, h, \sigma: [0,T]\times \mathbb{R}\to \mathbb{R}\) are given deterministic functions that are continuous in t and locally Lipschitz in x, i.e., for each \(x, y\in B_{0}(R):= \{a\in \mathbb{R}: |a|\leq R\}\), there exists a positive constant \(C_{R}\) depending only on R such that for each \(t\in [0,T]\),

    $$\begin{aligned} \bigl\vert \psi (t,x)-\psi (t,y) \bigr\vert \leq C_{R} \vert x-y \vert \end{aligned}$$

    and

    $$\begin{aligned} \sup_{t\in [0,T]} \bigl\vert \psi (t,0) \bigr\vert \leq L, \end{aligned}$$

    where \(\psi =b,h,\sigma \), respectively.

  2. (A2)

    There exists a deterministic nonnegative Lyapunov function \(V\in C^{1,2}([0,T]\times \mathbb{R})\) such that

    $$\begin{aligned} \inf_{|x|\geq R} \inf_{t\in [0,T]} V(t,x) \to \infty,\quad \text{as $R\to \infty $}, \end{aligned}$$

    and for some constant \(C_{L}>0\) and all \((t, x)\in [0,T]\times \mathbb{R}\),

    $$\begin{aligned} \mathcal{L}V(t,x) \leq C_{L} V(t,x), \end{aligned}$$

    where \(\mathcal{L}\) is a differential operator defined by

    $$\begin{aligned} \mathcal{L}V = \partial _{t} V + \partial _{x} V b + G \bigl(2 \partial _{x} V h + \partial ^{2}_{x^{2}}V \sigma ^{2} \bigr). \end{aligned}$$

    Here, \(G(\cdot )\) is a sublinear function defined in (1).
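In practice, verifying (A2) amounts to bounding \(\mathcal{L}V/V\) from above. The following sketch does this on a grid for the illustrative choice \(V(t,x)=1+x^{2}\), \(b(t,x)=h(t,x)=-x\), \(\sigma (t,x)=x\), with \(\bar{\sigma}^{2}=4\) and \(\underline{\sigma}^{2}=0.25\) (all of these are assumptions for the example, not coefficients from the paper); the derivatives of V are available in closed form:

```python
import numpy as np

S_BAR2, S_LOW2 = 4.0, 0.25

def G(x):
    """G(x) = (s_bar^2 x^+ - s_low^2 x^-) / 2, as in (1)."""
    return 0.5 * (S_BAR2 * max(x, 0.0) - S_LOW2 * max(-x, 0.0))

def LV(t, x):
    """L V = V_t + V_x b + G(2 V_x h + V_xx sigma^2) for V(t,x) = 1 + x^2
    with b(t,x) = h(t,x) = -x and sigma(t,x) = x."""
    Vt, Vx, Vxx = 0.0, 2.0 * x, 2.0
    b = h = -x
    sig2 = x * x
    return Vt + Vx * b + G(2.0 * Vx * h + Vxx * sig2)

V = lambda t, x: 1.0 + x * x
ratio = max(LV(t, x) / V(t, x)
            for t in np.linspace(0.0, 1.0, 11)
            for x in np.linspace(-10.0, 10.0, 201))
# ratio <= 0 here, so (A2) holds with any C_L > 0 for this choice
```

For this choice one can also see it by hand: \(\mathcal{L}V = -2x^{2} + G(-2x^{2}) = -2.25x^{2}\leq 0\), so the grid check merely confirms the computation.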

Li, Lin, and Lin [13] established the existence and uniqueness of the solution of the above G-SDE with locally Lipschitz coefficients and a Lyapunov condition through localization methods. We recall their result in what follows.

Theorem 2.11

Under assumptions (A1) and (A2), there exists a unique solution \(X\in M_{w}^{2}(0,T; \mathbb{R})\) to the G-stochastic differential equation (2) and X has t-continuous paths on \([0,T]\).

We note that the class of admissible coefficients here is slightly larger than that in Peng [16], where the following result is stated. We recall the standard linear growth and Lipschitz assumptions:

  1. (H1)

    There exists some constant L such that

    $$\begin{aligned} \bigl\vert \psi (t,x)-\psi (t,y) \bigr\vert \leq L \vert x-y \vert \end{aligned}$$

    for each \(t\in [0,T], x, y\in \mathbb{R}\), and

    $$\begin{aligned} \sup_{t\in [0,T]} \bigl\vert \psi (t, 0) \bigr\vert \leq L, \end{aligned}$$

    where \(\psi =b,h,\sigma \), respectively.

  2. (H2)

    \(b, h, \sigma: [0,T]\times \mathbb{R}\to \mathbb{R}\) are given functions satisfying, for each \(x\in \mathbb{R}\), \(b(\cdot, x), h(\cdot, x),\sigma (\cdot, x) \in M_{G}^{2}(0,T)\) and

    $$\begin{aligned} \bigl\vert b(\cdot, x) \bigr\vert ^{2} + \bigl\vert h(\cdot, x) \bigr\vert ^{2} + \bigl\vert \sigma (\cdot, x) \bigr\vert ^{2} < C\bigl(1+ \vert x \vert ^{2}\bigr). \end{aligned}$$

We also recall the following result from Peng [16].

Theorem 2.12

Under assumptions (H1) and (H2), there exists a unique solution \(X\in M_{G}^{2}(0,T)\) to the G-stochastic differential equation (2). Denote by \(X_{t}\) the solution starting from \(X_{0}\in \mathbb{R}\). Then there exists \(C>0\) depending on T such that

$$\begin{aligned} \hat{\mathbb{E}}\Bigl[\sup_{0\leq t\leq T} \vert X_{t} \vert ^{2}\Bigr] \leq C\bigl(1+ \vert X_{0} \vert ^{2}\bigr). \end{aligned}$$
(3)

The following corollary can be deduced from (3), see also Corollary 5.3.2 in Peng [16].

Corollary 2.13

Assume that Lipschitz condition (H1) and linear growth condition (H2) hold. Then we have

$$\begin{aligned} \hat{\mathbb{E}}\bigl[ \vert X_{t}-X_{s} \vert ^{2}\bigr] \leq C \vert t-s \vert , \end{aligned}$$

where the constant C depends only on the Lipschitz constant and the initial value \(X_{0}\).
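Corollary 2.13 can be probed numerically under a single scenario measure (Monte Carlo under one fixed volatility yields a lower bound for the sublinear expectation, so this is only a consistency check, not a proof). With the illustrative coefficients \(b(t,x)=-x\), \(h\equiv 0\), \(\sigma \equiv 0.2\) (assumptions for the example) the empirical ratio \(E|X_{t}-X_{s}|^{2}/|t-s|\) stays bounded:

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps, T = 20_000, 200, 1.0
dt = T / n_steps

# Euler scheme for dX = -X dt + 0.2 dB under one scenario of unit volatility
X = np.ones(n_paths)
snaps = [X.copy()]
for _ in range(n_steps):
    X = X - X * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(n_paths)
    snaps.append(X)
paths = np.stack(snaps)                               # shape (n_steps + 1, n_paths)

# empirical sup over grid pairs of E|X_t - X_s|^2 / (t - s)
ratios = [float(np.mean((paths[j] - paths[i]) ** 2)) / ((j - i) * dt)
          for i in range(0, n_steps + 1, 20)
          for j in range(i + 20, n_steps + 1, 20)]
C_emp = max(ratios)
# C_emp stays well below 1 here, consistent with E|X_t - X_s|^2 <= C|t - s|
```

The |t − s| rate (rather than |t − s|²) is driven by the diffusion term, whose increment variance grows linearly in the time gap.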

To apply this theorem, we assume throughout this paper that (H2) holds; C denotes a positive constant whose value may change from line to line.

3 Averaging principle under volatility uncertainty

We now study an averaging principle for a stochastic differential equation driven by a G-Brownian motion in \(\mathbb{R}\):

$$\begin{aligned} X^{\epsilon}_{t} ={}& X_{0} + \int _{0}^{t} b\biggl(\frac{s}{\epsilon}, X^{ \epsilon}_{s}\biggr) \,ds + \int _{0}^{t} h\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) \,d\langle B\rangle _{s} \\ &{} + \int _{0}^{t} \sigma \biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) \,dB_{s} ,\quad t\in [0,T], \end{aligned}$$
(4)

where \(\epsilon \in (0,1)\) and the initial condition \(X_{0}\in \mathbb{R}\) is a given constant. When the functions \(b, h, \sigma \) satisfy the conditions as in (A1)–(A2), then, by Theorem 2.11, equation (4) has a unique solution \(X^{\epsilon}\in M_{w}^{2}(0,T; \mathbb{R})\).

Our objective is to show that the solution \(X^{\epsilon}\in M_{w}^{2}(0,T; \mathbb{R})\) can be approximated by the solution of a simplified equation. For this, we associate the above stochastic differential equation with the following averaged stochastic differential equation driven by a G-Brownian motion:

$$\begin{aligned} \bar{X}_{t} ={}& X_{0} + \int _{0}^{t} \bar{b}( \bar{X}_{s}) \,ds + \int _{0}^{t} \bar{h}( \bar{X}_{s}) \,d \langle B\rangle _{s} \\ &{} + \int _{0}^{t} \bar{\sigma}( \bar{X}_{s}) \,dB_{s},\quad t \in [0,T]. \end{aligned}$$
(5)

Here, the functions \(\bar{b}, \bar{h}, \bar{\sigma}\) are called the time-averaged coefficients; they are assumed to satisfy the locally Lipschitz condition (A1) and the Lyapunov-type condition (A2) without the time variable, where the differential operator \(\mathcal{L}\) is defined by

$$\begin{aligned} \mathcal{L} V = \partial _{t} V + \partial _{x} V \bar{b} + G\bigl(2 \partial _{x} V \bar{h} + \partial _{x^{2}}^{2} V \bar{\sigma}^{2}\bigr). \end{aligned}$$

By Theorem 2.11, the averaged G-stochastic differential equation (5) has a unique solution \(\bar{X}\in M_{w}^{2}(0,T)\).
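The kind of approximation we are after can be previewed with an Euler–Maruyama experiment under one fixed volatility scenario (a single measure \(P\in \mathcal{P}\); the results of this section give convergence in sublinear expectation, i.e., uniformly over scenarios). The coefficients below are illustrative assumptions: \(b(t,x)=-(1+\frac{1}{2}\sin t)x\) with time average \(\bar{b}(x)=-x\), \(h=\bar{h}=0\), and a constant diffusion coefficient; both equations are driven by the same increments:

```python
import numpy as np

def sup_error(eps, T=1.0, dt=1e-4, scen_vol=1.5, vol=0.2, x0=1.0, seed=3):
    """sup_{t <= T} |X^eps_t - Xbar_t| for the pair
       dX^eps = -(1 + 0.5 sin(t/eps)) X^eps dt + vol dB_t,
       dXbar  = -Xbar dt + vol dB_t,
    computed by Euler-Maruyama with shared Brownian increments under one
    scenario of volatility scen_vol (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    dB = scen_vol * np.sqrt(dt) * rng.standard_normal(n)
    x_eps = x_bar = x0
    err = 0.0
    for j in range(n):
        t = j * dt
        x_eps += -(1.0 + 0.5 * np.sin(t / eps)) * x_eps * dt + vol * dB[j]
        x_bar += -x_bar * dt + vol * dB[j]
        err = max(err, abs(x_eps - x_bar))
    return err

# the sup error shrinks as eps -> 0, in line with the averaging principle
```

Because the two schemes share the driving increments, the discrepancy is produced only by the fast oscillation of the drift, which is exactly the effect the averaging principle controls.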

To get the averaging principle, we need the following time averaging conditions for functions \(b, h, \sigma \) and \(\bar{b}, \bar{h}, \bar{\sigma}\):

(B):

For any \(T_{1}\in [0,T]\) and all \(x\in \mathbb{R}\), there exists a function φ such that

$$\begin{aligned} &\sup_{t\geq 0} \biggl\vert \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \bigl(b(s,x)- \bar{b}(x) \bigr) \,ds \biggr\vert ^{2} \leq \varphi (T_{1}) \bigl(1+ \vert x \vert ^{2}\bigr), \\ &\sup_{t\geq 0} \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \bigl\vert h(s,x)-\bar{h}(x) \bigr\vert ^{2} \,ds \leq \varphi (T_{1}) \bigl(1+ \vert x \vert ^{2}\bigr), \end{aligned}$$
(6)

and

$$\begin{aligned} \sup_{t\geq 0} \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \bigl\vert \sigma (s,x)- \bar{ \sigma}(x) \bigr\vert ^{2} \,ds \leq \varphi (T_{1}) \bigl(1+ \vert x \vert ^{2}\bigr). \end{aligned}$$

Here, \(\varphi (T_{1})\) is a positive bounded function with \(\lim_{T_{1}\to \infty} \varphi (T_{1}) =0\). The averaged functions \(\bar{f} = \bar{b}, \bar{h}, \bar{\sigma}\) have been given many different definitions in the literature; for instance, we can choose \(\bar{f}(x) = \frac{1}{T} \int _{0}^{T} f(s,x) \,ds\).
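Condition (B) can be verified in closed form for simple periodic drifts. For the illustrative choice \(b(s,x)=-(1+\frac{1}{2}\sin s)x\) with \(\bar{b}(x)=-x\) (an assumption for the example), the left-hand side of (6) equals \(\frac{1}{4}(\cos t-\cos (t+T_{1}))^{2}x^{2}/T_{1}^{2}\), so \(\varphi (T_{1})=1/T_{1}^{2}\) works; a numerical check of the decay:

```python
import numpy as np

def lhs_over_growth(T1, n=200_001):
    """sup_t |(1/T1) int_t^{t+T1} (b(s,x) - b_bar(x)) ds|^2 / (1 + x^2) for
    b(s,x) = -(1 + 0.5 sin s) x, b_bar(x) = -x; the inner integral is exact:
    (1/T1) int -0.5 sin s ds * x = x (cos(t+T1) - cos t) / (2 T1)."""
    ts = np.linspace(0.0, 200.0, n)               # grid approximating sup_t
    factor = (np.cos(ts + T1) - np.cos(ts)) / (2.0 * T1)
    return float(np.max(factor**2))               # times x^2/(1+x^2) <= 1

# |cos(t+T1) - cos t| <= 2, so the sup is at most (1/T1)^2 -> 0 as T1 -> infinity
```

The fast oscillation averages itself out: the cancellation inside the integral, not smallness of the integrand, is what drives \(\varphi (T_{1})\) to zero.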

In fact, it follows from assumption (B) that the averaged functions \(\bar{f} = \bar{b}, \bar{h}, \bar{\sigma}\) satisfy (locally) Lipschitz conditions with the same Lipschitz constant as f, provided that f satisfies (locally) Lipschitz conditions. We verify this only for \(\bar{b}\) with the locally Lipschitz condition; the other cases are similar. Indeed, for every \(x, y\in B_{0}(R)\) and every \(T>0\), we have

$$\begin{aligned} \bigl\vert \bar{b}(x) -\bar{b}(y) \bigr\vert \leq{} & \biggl\vert \frac{1}{T} \int _{0}^{T} \bigl[b(s, x) - \bar{b}(x)\bigr] \,ds \biggr\vert + \biggl\vert \frac{1}{T} \int _{0}^{T} \bigl[b(s, y) - \bar{b}(y)\bigr] \,ds \biggr\vert \\ &{} + \biggl\vert \frac{1}{T} \int _{0}^{T} \bigl[b(s, x) - b(s, y)\bigr] \,ds \biggr\vert \\ \leq{} & 2 \sqrt{\varphi (T) \bigl(1+ \vert x \vert ^{2}+ \vert y \vert ^{2}\bigr)} + C_{R} \vert x-y \vert . \end{aligned}$$

Letting T tend to infinity and using \(\lim_{T\to \infty} \varphi (T) =0\), we conclude that \(\bar{b}\) is locally Lipschitz with the same constant \(C_{R}\). A similar discussion and the following remark can be found in Gao [6], Guo, Lv, and Wei [8], Shen, Song, and Wu [17], and Shen, Xiang, and Wu [18].

Remark 3.1

Due to

$$\begin{aligned} \sup_{t\geq 0} \biggl\vert \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \bigl(b(s,x)- \bar{b}(x) \bigr) \,ds \biggr\vert ^{2} \leq \sup_{t\geq 0}\frac{1}{T_{1}} \int _{t}^{t+T_{1}} \bigl\vert \bigl(b(s,x)-\bar{b}(x) \bigr) \bigr\vert ^{2} \,ds, \end{aligned}$$

we can claim that (6) in assumption (B) is weaker than the following traditional averaging condition:

$$\begin{aligned} \sup_{t\geq 0}\frac{1}{T_{1}} \int _{t}^{t+T_{1}} \bigl\vert \bigl(b(s,x)- \bar{b}(x) \bigr) \bigr\vert ^{2} \,ds \leq \varphi (T_{1}) \bigl(1+ \vert x \vert ^{2}\bigr). \end{aligned}$$

With these preliminary assumptions at hand, we are in a position to present our main results. We first introduce a lemma that is important for our averaging principle, then we consider the averaging principle for the G-SDE under the standard linear growth and Lipschitz assumptions. Finally, we extend those assumptions to a locally Lipschitz condition and a Lyapunov-type condition.

Lemma 3.2

Suppose that assumptions (H1), (H2), and (B) are satisfied. Then

$$\begin{aligned} \lim_{\epsilon \to 0} \hat{\mathbb{E}} \biggl[ \sup _{0\leq t\leq T} \biggl\vert \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr)- \bar{b}\bigl(X^{ \epsilon}_{s} \bigr)\biggr] \,ds \biggr\vert ^{2} \biggr] =0. \end{aligned}$$
(7)

Proof

Let \(\{t_{1}, t_{2}, \ldots, t_{N}\}\) be a partition of \([0,T]\):

$$\begin{aligned} t_{i} = i\sqrt{\epsilon},\quad i=0,1,2,\ldots, N-1; t_{N} =T, \end{aligned}$$

and

$$\begin{aligned} 0< T-t_{N-1} \leq \sqrt{\epsilon}. \end{aligned}$$

Then it is easy to obtain that \(T\leq N\sqrt{\epsilon} < T+\sqrt{\epsilon}\). Let

$$\begin{aligned} Z_{i}:= \int _{t_{i}}^{t_{i+1}} \biggl[b\biggl(\frac{s}{\epsilon}, X_{s}^{ \epsilon}\biggr) - \bar{b}\bigl(X^{\epsilon}_{s} \bigr)\biggr] \,ds, \end{aligned}$$

then we have

$$\begin{aligned} & \biggl\vert \int _{0}^{t} \biggl[ b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{b}\bigl(X^{\epsilon}_{s} \bigr) \biggr] \,ds \biggr\vert ^{2} \\ &\quad\leq N \biggl\vert \int _{[\frac{t}{\sqrt{\epsilon}}]\sqrt{\epsilon}}^{t} \biggl[ b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{b}\bigl(X^{\epsilon}_{s} \bigr) \biggr] \,ds \biggr\vert ^{2} + N \sum _{i=0}^{N-2} \vert Z_{i} \vert ^{2}. \end{aligned}$$
(8)

Using Hölder’s inequality and the linear growth assumption on b and \(\bar{b}\), we get

$$\begin{aligned} & \biggl\vert \int _{[\frac{t}{\sqrt{\epsilon}}]\sqrt{\epsilon}}^{t} \biggl[ b\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{b}\bigl(X^{\epsilon}_{s} \bigr) \biggr] \,ds \biggr\vert ^{2} \\ &\quad\leq 2 \biggl(t- \biggl[\frac{t}{\sqrt{\epsilon}}\biggr]\sqrt{\epsilon}\biggr) \int _{[ \frac{t}{\sqrt{\epsilon}}]\sqrt{\epsilon}}^{t} \biggl[ \biggl\vert b\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) \biggr\vert ^{2} + \bigl\vert \bar{b}\bigl(X^{\epsilon}_{s}\bigr) \bigr\vert ^{2} \biggr] \,ds \\ &\quad\leq C \biggl\vert t - \biggl[\frac{t}{\sqrt{\epsilon}}\biggr]\sqrt{\epsilon} \biggr\vert ^{2} \Bigl(1+ \sup_{0\leq s\leq T} \bigl\vert X^{\epsilon}_{s} \bigr\vert ^{2}\Bigr) \\ &\quad\leq C \epsilon \Bigl(1+ \sup_{0\leq s\leq T} \bigl\vert X^{\epsilon}_{s} \bigr\vert ^{2}\Bigr). \end{aligned}$$
(9)

By Theorem 2.12, (8), and (9), we have

$$\begin{aligned} & \hat{\mathbb{E}} \biggl[ \sup_{0\leq t\leq T} \biggl\vert \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s} \biggr)- \bar{b}\bigl(X^{\epsilon}_{s}\bigr)\biggr] \,ds \biggr\vert ^{2} \biggr] \\ &\quad\leq C \epsilon N + N \hat{\mathbb{E}} \Biggl[ \sum _{i=0}^{N-2} \vert Z_{i} \vert ^{2} \Biggr] \\ &\quad\leq C \epsilon (T+\sqrt{\epsilon}) + N \sum_{i=0}^{N-2} \hat{\mathbb{E}}\bigl[ \vert Z_{i} \vert ^{2} \bigr]. \end{aligned}$$
(10)

Here, in the last inequality, we have used the subadditivity of the sublinear expectation. By assumption (B) and the Lipschitz conditions of b and \(\bar{b}\), we obtain

$$\begin{aligned} \vert Z_{i} \vert ^{2} ={}& \biggl\vert \int _{t_{i}}^{t_{i+1}} \biggl[b\biggl(\frac{s}{\epsilon}, X_{s}^{ \epsilon}\biggr) - \bar{b}\bigl(X^{\epsilon}_{s} \bigr)\biggr] \,ds \biggr\vert ^{2} \\ \leq {}& 3 \biggl\vert \int _{t_{i}}^{t_{i+1}} \biggl[b\biggl(\frac{s}{\epsilon}, X_{s}^{ \epsilon}\biggr) - b\biggl(\frac{s}{\epsilon}, X_{t_{i}}^{\epsilon}\biggr) \biggr] \,ds \biggr\vert ^{2} + 3 \biggl\vert \int _{t_{i}}^{t_{i+1}} \biggl[b\biggl(\frac{s}{\epsilon}, X_{t_{i}}^{ \epsilon}\biggr) - \bar{b}\bigl(X^{\epsilon}_{t_{i}} \bigr)\biggr] \,ds \biggr\vert ^{2} \\ &{} + 3 \biggl\vert \int _{t_{i}}^{t_{i+1}} \bigl[\bar{b}\bigl(X^{\epsilon}_{t_{i}} \bigr) - \bar{b}\bigl(X^{\epsilon}_{s}\bigr)\bigr] \,ds \biggr\vert ^{2} \\ \leq {}& 3 \biggl\vert \epsilon \int _{t_{i}/\epsilon}^{t_{i+1}/\epsilon} \bigl[b\bigl(s, X_{t_{i}}^{\epsilon} \bigr) - \bar{b}\bigl(X^{\epsilon}_{t_{i}}\bigr)\bigr] \,ds \biggr\vert ^{2} + 6 L \sqrt{\epsilon} \int _{t_{i}}^{t_{i+1}} \bigl\vert X^{\epsilon}_{s} - X^{ \epsilon}_{t_{i}} \bigr\vert ^{2} \,ds \\ \leq {}& C\epsilon \varphi \biggl(\frac{1}{\sqrt{\epsilon}} \biggr) \Bigl(1+ \sup _{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} \bigr\vert ^{2}\Bigr) + 6 L \sqrt{\epsilon} \int _{t_{i}}^{t_{i+1}} \bigl\vert X^{\epsilon}_{s} - X^{\epsilon}_{t_{i}} \bigr\vert ^{2} \,ds . \end{aligned}$$
(11)

Hence, by Corollary 2.13, we have

$$\begin{aligned} N \sum_{i=0}^{N-2} \hat{ \mathbb{E}}\bigl[ \vert Z_{i} \vert ^{2} \bigr] \leq {}& C \epsilon N \sum_{i=0}^{N-2} \hat{\mathbb{E}} \biggl[ \varphi \biggl( \frac{1}{\sqrt{\epsilon}} \biggr) \Bigl(1+ \sup _{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} \bigr\vert ^{2}\Bigr) \biggr] \\ &{} + 6 L \sqrt{\epsilon} N \sum_{i=0}^{N-2} \hat{\mathbb{E}} \biggl[ \int _{t_{i}}^{t_{i+1}} \bigl\vert X^{\epsilon}_{s} - X^{\epsilon}_{t_{i}} \bigr\vert ^{2} \,ds \biggr] \\ \leq {}& C\epsilon N^{2} \biggl[ \varphi \biggl(\frac{1}{\sqrt{\epsilon}} \biggr) + \sqrt{\epsilon} \biggr] \\ \leq {}& C (T+\sqrt{\epsilon})^{2} \biggl[ \varphi \biggl( \frac{1}{\sqrt{\epsilon}} \biggr) + \sqrt{\epsilon} \biggr]. \end{aligned}$$
(12)

Then, substituting (12) in (10), we finally obtain the estimate

$$\begin{aligned} & \hat{\mathbb{E}} \biggl[ \sup_{0\leq t\leq T} \biggl\vert \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s} \biggr)- \bar{b}\bigl(X^{\epsilon}_{s}\bigr)\biggr] \,ds \biggr\vert ^{2} \biggr] \\ &\quad\leq C \sqrt{\epsilon} (T+\sqrt{\epsilon}) + C (T+\sqrt{\epsilon})^{2} \biggl[ \varphi \biggl(\frac{1}{\sqrt{\epsilon}} \biggr) + \sqrt{\epsilon} \biggr]. \end{aligned}$$

Finally, the required convergence follows by letting ϵ tend to zero in this estimate. □

Lemma 3.3

Suppose that assumptions (H1), (H2), and (B) are satisfied. Then

$$\begin{aligned} \lim_{\epsilon \to 0} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl\vert h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr)- \bar{h} \bigl(X^{\epsilon}_{s}\bigr) \biggr\vert ^{2} \,d s \biggr] =0. \end{aligned}$$
(13)

Proof

Let \(\{t_{0}, t_{1}, \ldots, t_{N}\}\) be a partition of \([0,T]\) with \(t_{i} = i\sqrt{\epsilon}\), \(i=0,1,2,\ldots, N-1\), \(t_{N} =T\), and \(0< T-t_{N-1} \leq \sqrt{\epsilon}\). Hence \(T\leq N\sqrt{\epsilon} < T+\sqrt{\epsilon}\). Let

$$\begin{aligned} H_{i}:= \int _{t_{i}}^{t_{i+1}} \biggl\vert h\biggl( \frac{s}{\epsilon}, X^{ \epsilon}_{s}\biggr)- \bar{h} \bigl(X^{\epsilon}_{s}\bigr) \biggr\vert ^{2} \,d s, \end{aligned}$$

then, by subadditivity of the sublinear expectation, we have

$$\begin{aligned} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl\vert h\biggl( \frac{s}{\epsilon}, X^{ \epsilon}_{s}\biggr)- \bar{h} \bigl(X^{\epsilon}_{s}\bigr) \biggr\vert ^{2} \,ds \biggr] \leq \sum _{i=0}^{N-1} \hat{\mathbb{E}}H_{i}. \end{aligned}$$
(14)

By assumptions (H1), (H2), and (B), we get

$$\begin{aligned} H_{i} ={}& \int _{t_{i}}^{t_{i+1}} \biggl\vert h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr)- \bar{h} \bigl(X^{\epsilon}_{s}\bigr) \biggr\vert ^{2} \,ds \\ \leq {}& 3 \int _{t_{i}}^{t_{i+1}} \biggl\vert h\biggl( \frac{s}{\epsilon}, X^{ \epsilon}_{t_{i}}\biggr)- \bar{h} \bigl(X^{\epsilon}_{t_{i}}\bigr) \biggr\vert ^{2} \,ds + 3 \int _{t_{i}}^{t_{i+1}} \biggl\vert h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr)- h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{t_{i}}\biggr) \biggr\vert ^{2} \,ds \\ &{} + 3 \int _{t_{i}}^{t_{i+1}} \bigl\vert \bar{h} \bigl(X^{\epsilon}_{t_{i}}\bigr)- \bar{h}\bigl(X^{\epsilon}_{s} \bigr) \bigr\vert ^{2} \,ds \\ \leq {}& 3 \epsilon \int _{\frac{ t_{i}}{\epsilon} }^{ \frac{t_{i+1}}{\epsilon} } \bigl\vert h\bigl(s, X^{\epsilon}_{t_{i}}\bigr)- \bar{h}\bigl(X^{ \epsilon}_{t_{i}} \bigr) \bigr\vert ^{2} \,ds + 6L \int _{t_{i}}^{t_{i+1}} \bigl\vert X^{\epsilon}_{s} - X^{\epsilon}_{t_{i}} \bigr\vert ^{2} \,ds \\ \leq {}& 3 \sqrt{\epsilon} \varphi \biggl(\frac{1}{\sqrt{\epsilon}}\biggr) \Bigl(1+ \sup _{t\in [0, T]} \bigl\vert X^{\epsilon}_{t} \bigr\vert ^{2}\Bigr) + 6L \int _{t_{i}}^{t_{i+1}} \bigl\vert X^{\epsilon}_{s} - X^{\epsilon}_{t_{i}} \bigr\vert ^{2} \,ds. \end{aligned}$$
(15)

Hence, using Corollary 2.13 again, we have

$$\begin{aligned} \sum_{i=0}^{N-1} \hat{ \mathbb{E}}[ H_{i}] \leq {}& 3 \sqrt{\epsilon} \sum _{i=0}^{N-1} \varphi \biggl(\frac{1}{\sqrt{\epsilon}}\biggr) \Bigl(1+ \hat{\mathbb{E}} \Bigl[\sup_{t\in [0, T]} \bigl\vert X^{\epsilon}_{t} \bigr\vert ^{2} \Bigr] \Bigr) \\ &{} + 6L \sum_{i=0}^{N-1} \hat{\mathbb{E}} \biggl[ \int _{t_{i}}^{t_{i+1}} \bigl\vert X^{\epsilon}_{s} - X^{\epsilon}_{t_{i}} \bigr\vert ^{2} \,ds \biggr] \\ \leq{} & CN\sqrt{\epsilon} \varphi \biggl(\frac{1}{\sqrt{\epsilon}}\biggr) + CN \epsilon. \end{aligned}$$
(16)

Then, substituting (16) in (14), we obtain the following estimate:

$$\begin{aligned} & \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl\vert h\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr)- \bar{h}\bigl(X^{\epsilon}_{s} \bigr) \biggr\vert ^{2} \,ds \biggr] \\ &\quad \leq C(T+\sqrt{\epsilon}) \biggl[\varphi \biggl(\frac{1}{\sqrt{\epsilon}}\biggr) + \sqrt{\epsilon} \biggr]. \end{aligned}$$

Finally, the required convergence follows by letting ϵ tend to zero in this estimate. □
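For intuition, the partition used in the proofs of Lemmas 3.2 and 3.3 can be checked numerically: the mesh is \(\sqrt{\epsilon}\), and the number of subintervals N satisfies \(T\leq N\sqrt{\epsilon} < T+\sqrt{\epsilon}\). A minimal Python sketch (the values of T and ϵ are illustrative, not from the paper):

```python
import math

def partition(T, eps):
    """Mesh-sqrt(eps) partition of [0, T] as in the proofs:
    t_i = i*sqrt(eps) for i < N, t_N = T, with 0 < T - t_{N-1} <= sqrt(eps)."""
    h = math.sqrt(eps)
    N = math.ceil(T / h)
    return N, [i * h for i in range(N)] + [T]

T, eps = 1.0, 0.01          # illustrative values
N, t = partition(T, eps)
h = math.sqrt(eps)
assert T <= N * h < T + h   # the bound T <= N*sqrt(eps) < T + sqrt(eps)
assert 0.0 < T - t[-2] <= h
```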

Lemma 3.4

Suppose that assumptions (H1), (H2), and (B) are satisfied. Then

$$\begin{aligned} \lim_{\epsilon \to 0} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl\vert \sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr)- \bar{\sigma} \bigl(X^{ \epsilon}_{s}\bigr) \biggr\vert ^{2} \,ds \biggr] =0. \end{aligned}$$
(17)

The proof is the same as that of Lemma 3.3 and is therefore omitted.
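To make assumption (B) concrete, consider a hypothetical fast-oscillating drift (an illustration only, not one of this paper's examples): \(b(t,x) = (1+\sin t)x\), whose time average is \(\bar{b}(x)=x\), with a rate function \(\varphi (T)\) of order \(1/T\). A short Python sketch approximating b̄ numerically and checking the decay rate:

```python
import math

# Hypothetical drift b(t, x) = (1 + sin t) * x.  Its time average in the
# sense of assumption (B) is bbar(x) = x, since
# (1/T) int_0^T (1 + sin s) ds = 1 + (1 - cos T)/T -> 1 as T -> infinity.
def b(t, x):
    return (1.0 + math.sin(t)) * x

def bbar_numeric(x, T, n=200_000):
    # left Riemann sum for (1/T) * int_0^T b(s, x) ds
    h = T / n
    return sum(b(i * h, x) for i in range(n)) * h / T

x = 2.0
for T in (10.0, 100.0, 1000.0):
    # the averaging error decays like 1/T, matching phi(T) ~ 1/T
    assert abs(bbar_numeric(x, T) - x) <= 4.0 / T
```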

Now we present our first main result, the averaging principle for the GSDE under the standard linear growth and Lipschitz assumptions.

Theorem 3.5

Assume that assumptions (H1), (H2), and (B) are satisfied. Then

$$\begin{aligned} \lim_{\epsilon \to 0} \hat{\mathbb{E}} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X^{ \epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \Bigr] =0. \end{aligned}$$

Proof

Starting with

$$\begin{aligned} X^{\epsilon}_{t} - \bar{X}_{t} ={}& \int _{0}^{t} \biggl[b\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr)- \bar{b}( \bar{X}_{s})\biggr] \,ds + \int _{0}^{t} \biggl[h\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}( \bar{X}_{s})\biggr] \,d\langle B\rangle _{s} \\ &{}+ \int _{0}^{t} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}( \bar{X}_{s})\biggr] \,dB_{s} \end{aligned}$$
(18)

and employing the simple arithmetic inequality

$$\begin{aligned} \vert x_{1}+x_{2}+ \cdots + x_{m} \vert ^{2} \leq m\bigl( \vert x_{1} \vert ^{2}+ \vert x_{2} \vert ^{2}+ \cdots + \vert x_{m} \vert ^{2}\bigr), \end{aligned}$$

we arrive at

$$\begin{aligned} \sup_{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \leq {}& 3 \sup _{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{ \epsilon}_{s}\biggr)- \bar{b}( \bar{X}_{s})\biggr] \,ds \biggr)^{2} \\ &{} + 3 \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}( \bar{X}_{s})\biggr] \,d \langle B\rangle _{s} \biggr)^{2} \\ &{} +3 \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}( \bar{X}_{s})\biggr] \,dB_{s} \biggr)^{2} \\ =:{}& I_{1} + I_{2} + I_{3}, \end{aligned}$$
(19)

where \(I_{i}, i=1,2,3\), denote the three terms on the right-hand side, respectively. Now we present some useful estimates for \(I_{i}, i=1,2,3\).

First, we apply the arithmetic inequality and Hölder’s inequality to obtain

$$\begin{aligned} I_{1} ={}& 3 \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[b\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr)- \bar{b}( \bar{X}_{s})\biggr] \,ds \biggr)^{2} \\ \leq{}& 6 \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s} \biggr)- \bar{b}\bigl(X^{\epsilon}_{s}\bigr) \biggr] \,ds \biggr)^{2} \\ &{} +6 \sup_{0\leq t\leq T} \biggl\vert \int _{0}^{t} \bigl[\bar{b}( \bar{X}_{s}) - \bar{b}\bigl(X^{\epsilon}_{s}\bigr)\bigr]\,ds \biggr\vert ^{2} \\ \leq {}& 6 \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s} \biggr)- \bar{b}\bigl(X^{\epsilon}_{s}\bigr) \biggr] \,ds \biggr)^{2} \\ &{} +6T \int _{0}^{T} \bigl\vert \bar{b}( \bar{X}_{s}) - \bar{b}\bigl(X^{\epsilon}_{s}\bigr) \bigr\vert ^{2}\,ds \\ \leq{} & 6 \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s} \biggr)- \bar{b}\bigl(X^{\epsilon}_{s}\bigr) \biggr] \,ds \biggr)^{2} \\ &{} +6T \bar{L} \int _{0}^{T} \bigl\vert X^{\epsilon}_{s}- \bar{X}_{s} \bigr\vert ^{2} \,ds. \end{aligned}$$
(20)

Here, in the last inequality, we have used the Lipschitz condition of \(\bar{b}\).

Second, we take the sublinear expectation of \(I_{2}\):

$$\begin{aligned} \hat{\mathbb{E}}[ I_{2}] \leq {}& 3 \hat{\mathbb{E}} \biggl[\sup_{0\leq t \leq T} \biggl( \int _{0}^{t} \biggl[h\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}( \bar{X}_{s})\biggr] \,d\langle B\rangle _{s} \biggr)^{2} \biggr] \\ \leq {}& 6 \hat{\mathbb{E}} \biggl[ \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[h\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}\bigl( X^{\epsilon}_{s} \bigr)\biggr]\,d \langle B\rangle _{s} \biggr)^{2} \biggr] \\ &{} +6 \hat{\mathbb{E}} \biggl[ \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \bigl[ \bar{h}\bigl( X^{\epsilon}_{s} \bigr) - \bar{h}( \bar{X}_{s})\bigr]\,d \langle B \rangle _{s} \biggr)^{2} \biggr] \\ \leq {}& 6 \bar{\sigma}^{4} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}\bigl( X^{\epsilon}_{s} \bigr)\biggr]^{2} \,ds \biggr] \\ &{} +6 \bar{\sigma}^{4} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \bigl[\bar{h}\bigl( X^{ \epsilon}_{s} \bigr) - \bar{h}( \bar{X}_{s}) \bigr]^{2} \,ds \biggr] \\ \leq {}& 6 \bar{\sigma}^{4} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}\bigl( X^{\epsilon}_{s} \bigr)\biggr]^{2} \,ds \biggr] \\ &{}+6 \bar{\sigma}^{4} \bar{L} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \bigl\vert X^{ \epsilon}_{s}- \bar{X}_{s} \bigr\vert ^{2} \,ds \biggr]. \end{aligned}$$
(21)

Here, we have used the BDG-type inequality (see Lemma 2.10).

Finally, we take expectation on \(I_{3}\) using the BDG-type inequality in Lemma 2.10(2):

$$\begin{aligned} \hat{\mathbb{E}}[ I_{3}] ={}& 3 \hat{\mathbb{E}} \biggl[ \sup_{0\leq t \leq T} \biggl( \int _{0}^{t} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}( \bar{X}_{s})\biggr] \,dB_{s} \biggr)^{2} \biggr] \\ \leq {}& 6 \hat{\mathbb{E}} \biggl[ \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}\bigl( X^{\epsilon}_{s}\bigr)\biggr] \,dB_{s} \biggr)^{2} \biggr] \\ &{} +6 \hat{\mathbb{E}} \biggl[ \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \bigl[ \bar{\sigma}\bigl( X^{\epsilon}_{s}\bigr) - \bar{\sigma}( \bar{X}_{s}) \bigr] \,dB_{s} \biggr)^{2} \biggr] \\ \leq{} & 6C \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}\bigl( X^{\epsilon}_{s}\bigr)\biggr]^{2} \,ds \biggr] \\ &{} +6C \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \bigl[ \bar{\sigma}\bigl( X^{\epsilon}_{s}\bigr) - \bar{\sigma}( \bar{X}_{s}) \bigr]^{2} \,ds \biggr] \\ \leq {}& 6C \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}\bigl( X^{\epsilon}_{s}\bigr)\biggr]^{2} \,ds \biggr] \\ &{} +6C \bar{L} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \bigl\vert X^{\epsilon}_{s}- \bar{X}_{s} \bigr\vert ^{2} \,ds \biggr]. \end{aligned}$$
(22)

Here, in the last inequality, we have used the Lipschitz condition of σ̄.

Therefore, taking the sublinear expectation of (19) and substituting (20)–(22), we get

$$\begin{aligned} & \hat{\mathbb{E}} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \Bigr] \\ &\quad\leq 6 \hat{\mathbb{E}} \biggl[ \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s} \biggr)- \bar{b}\bigl(X^{\epsilon}_{s}\bigr) \biggr] \,ds \biggr)^{2} \biggr] \\ &\qquad{}+ 6 \bar{\sigma}^{4} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}\bigl( X^{\epsilon}_{s} \bigr)\biggr]^{2} \,ds \biggr] \\ &\qquad{}+ 6C \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}\bigl( X^{\epsilon}_{s}\bigr)\biggr]^{2} \,ds \biggr] \\ &\qquad{}+ \bigl(6T \bar{L} +6 \bar{\sigma}^{4} \bar{L} +6C \bar{L}\bigr) \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \bigl\vert X^{\epsilon}_{s}- \bar{X}_{s} \bigr\vert ^{2} \,ds \biggr] \\ &\quad\leq 6 \hat{\mathbb{E}} \biggl[ \sup_{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s} \biggr)- \bar{b}\bigl(X^{\epsilon}_{s}\bigr) \biggr] \,ds \biggr)^{2} \biggr] \\ &\qquad{}+ 6 \bar{\sigma}^{4} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}\bigl( X^{\epsilon}_{s} \bigr)\biggr]^{2} \,ds \biggr] \\ &\qquad{}+ 6C \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}\bigl( X^{\epsilon}_{s}\bigr)\biggr]^{2} \,ds \biggr] \\ &\qquad{}+ \bigl(6T \bar{L} +6 \bar{\sigma}^{4} \bar{L} +6C \bar{L}\bigr) \int _{0}^{T} \hat{\mathbb{E}} \Bigl[ \sup _{0\leq r\leq s} \bigl\vert X^{\epsilon}_{r}- \bar{X}_{r} \bigr\vert ^{2} \Bigr] \,ds. \end{aligned}$$
(23)

An application of the Gronwall inequality in (23) implies that

$$\begin{aligned} & \hat{\mathbb{E}} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \Bigr] \\ &\quad\leq e^{(6T \bar{L} +6 \bar{\sigma}^{4} \bar{L} +6C \bar{L})T} \biggl\{ 6 \hat{\mathbb{E}} \biggl[ \sup _{0\leq t\leq T} \biggl( \int _{0}^{t} \biggl[b\biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s} \biggr)- \bar{b}\bigl(X^{\epsilon}_{s}\bigr) \biggr] \,ds \biggr)^{2} \biggr] \\ &\qquad{}+ 6 \bar{\sigma}^{4} \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[h\biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{h}\bigl( X^{\epsilon}_{s} \bigr)\biggr]^{2} \,ds \biggr] \\ &\qquad{}+ 6C \hat{\mathbb{E}} \biggl[ \int _{0}^{T} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) - \bar{\sigma}\bigl( X^{\epsilon}_{s}\bigr)\biggr]^{2} \,ds \biggr] \biggr\} . \end{aligned}$$
(24)

Consequently, the required result follows by applying Lemmas 3.2–3.4. The proof is complete. □

Next we turn our attention to the time-averaging principle for the GSDE under a locally Lipschitz condition and a Lyapunov-type condition. Our argument is based on a localization method.

Theorem 3.6

Assume that assumptions (A1, A2) and (B) are satisfied. Then

$$\begin{aligned} \lim_{\epsilon \to 0}\hat{\mathbb{E}} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X^{ \epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \Bigr] =0. \end{aligned}$$

Proof

We first consider the following truncated GSDEs of (4) and (5), respectively, with coefficients satisfying assumptions (A1, A2) and (B), for each \(N\in \mathbb{N}\) and \(0\leq t\leq T\):

$$\begin{aligned} X^{\epsilon, N}_{t} ={}& X_{0} + \int _{0}^{t} b^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \,ds + \int _{0}^{t} h^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \,d\langle B\rangle _{s} \\ &{} + \int _{0}^{t} \sigma ^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \,dB_{s} \end{aligned}$$
(25)

and

$$\begin{aligned} \bar{X}^{ N}_{t} ={}& X_{0} + \int _{0}^{t} \bar{b}^{N}\bigl( \bar{X}^{N}_{s}\bigr) \,ds + \int _{0}^{t} \bar{h}^{N}\bigl( \bar{X}^{ N}_{s}\bigr) \,d\langle B\rangle _{s} \\ &{} + \int _{0}^{t} \bar{\sigma}^{N}\bigl( \bar{X}^{N}_{s}\bigr) \,dB_{s}, \end{aligned}$$
(26)

and then we consider the following truncated GSDEs of (18):

$$\begin{aligned} X^{\epsilon, N}_{t} - \bar{X}^{ N}_{t} ={}& \int _{0}^{t} \biggl[b^{ N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr)- \bar{b}^{N} \bigl( \bar{X}^{ N}_{s}\bigr)\biggr] \,ds + \int _{0}^{t} \biggl[h^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) - \bar{h}^{N} \bigl( \bar{X}^{ N}_{s}\bigr)\biggr]\,d\langle B\rangle _{s} \\ &{} + \int _{0}^{t} \biggl[\sigma ^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) - \bar{ \sigma}^{N}\bigl( \bar{X}^{N}_{s}\bigr)\biggr] \,dB_{s}, \end{aligned}$$
(27)

where \(b^{N}, h^{N}, \sigma ^{N}, \bar{b}^{N}, \bar{h}^{N}, \bar{\sigma}^{N}\) are defined in the following way:

$$\begin{aligned} f^{N}(\cdot, x) = \textstyle\begin{cases} f(\cdot, x) & \text{if $ \vert x \vert \leq N$;} \\ f(\cdot, \frac{Nx}{ \vert x \vert }) & \text{if $ \vert x \vert >N$.} \end{cases}\displaystyle \end{aligned}$$

It is easy to verify that \(b^{N}, h^{N}, \sigma ^{N}, \bar{b}^{N}, \bar{h}^{N}, \bar{\sigma}^{N}\) are all bounded and uniformly Lipschitz in x. Then, applying Theorem 3.5, the result for Lipschitz GSDEs with coefficients in \(M_{G}^{p}(0,T; \mathbb{R})\), to the truncated GSDEs (25) and (26), we obtain the following result:

$$\begin{aligned} \lim_{\epsilon \to 0} \hat{\mathbb{E}} \Bigl[\sup _{0\leq t\leq T} \bigl\vert X^{ \epsilon, N}_{t} - \bar{X}^{ N}_{t} \bigr\vert ^{2} \Bigr] =0. \end{aligned}$$
(28)
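The radial truncation \(f\mapsto f^{N}\) defined above can be sketched in Python as follows (scalar case, matching the one-dimensional setting \(M_{G}^{p}(0,T;\mathbb{R})\); the cubic coefficient used in the check is a hypothetical example):

```python
# Radial truncation f -> f^N used to localize the coefficients (a sketch;
# `f` is a generic placeholder for b, h, or sigma).  For |x| <= N the
# coefficient is unchanged; for |x| > N its argument is projected onto the
# ball of radius N, so f^N is bounded and globally Lipschitz whenever f is
# locally Lipschitz.
def truncate(f, N):
    def fN(t, x):
        if abs(x) <= N:
            return f(t, x)
        return f(t, N * x / abs(x))   # project x onto {|x| = N}
    return fN

# tiny check with a hypothetical superlinear coefficient
f = lambda t, x: x ** 3
f2 = truncate(f, 2.0)
assert f2(0.0, 1.5) == 1.5 ** 3       # unchanged inside the ball
assert f2(0.0, 5.0) == 2.0 ** 3       # saturated outside
assert f2(0.0, -5.0) == -(2.0 ** 3)
```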

Define two sequences of stopping times by

$$\begin{aligned} \tau _{N}:= \inf \bigl\{ t: \bigl\vert X^{\epsilon, N}_{t} \bigr\vert \geq N\bigr\} \wedge T \end{aligned}$$

and

$$\begin{aligned} \bar{\tau}_{N}:= \inf \bigl\{ t: \bigl\vert \bar{X}^{ N}_{t} \bigr\vert \geq N\bigr\} \wedge T, \end{aligned}$$

each of which satisfies \(\{\tau _{N}\leq t\} \in \mathcal{F}_{t}\) and \(\{\bar{\tau}_{N}\leq t\} \in \mathcal{F}_{t}\), respectively. Based on these stopping times, we can deduce from (25) and (26) that

$$\begin{aligned} X^{\epsilon, N}_{t\wedge \tau _{N}} ={}& X_{0}+ \int _{0}^{t} b^{ N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,ds + \int _{0}^{t} h^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,d \langle B\rangle _{s} \\ &{}+ \int _{0}^{t} \sigma ^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,dB_{s} \\ ={}& X_{0} + \int _{0}^{t} b^{ N+1}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,ds + \int _{0}^{t} h^{N+1}\biggl( \frac{s}{\epsilon}, X^{ \epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,d \langle B\rangle _{s} \\ &{}+ \int _{0}^{t} \sigma ^{N+1}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,dB_{s} \end{aligned}$$

and

$$\begin{aligned} \bar{X}^{ N}_{t\wedge \bar{\tau}_{N}} ={}& X_{0}+ \int _{0}^{t} \bar{b}^{N}\bigl( \bar{X}^{ N}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,ds + \int _{0}^{t} \bar{h}^{N}\bigl( \bar{X}^{N}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,d\langle B\rangle _{s} \\ &{}+ \int _{0}^{t} \bar{\sigma}^{N}\bigl( \bar{X}^{ N}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,dB_{s} \\ ={}& X_{0} + \int _{0}^{t} \bar{b}^{N+1}\bigl( \bar{X}^{ N}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,ds + \int _{0}^{t} \bar{h}^{N+1}\bigl( \bar{X}^{ N}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,d\langle B\rangle _{s} \\ &{}+ \int _{0}^{t} \bar{\sigma}^{N+1}\bigl( \bar{X}^{N}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,dB_{s}. \end{aligned}$$

On the other hand, by the representations of \(X^{\epsilon, N+1}\) and \(\bar{X}^{N+1}\) and the continuity of these processes, we have

$$\begin{aligned} X^{\epsilon, N+1}_{t\wedge \tau _{N}} ={}& X_{0} + \int _{0}^{t} b^{ N+1}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N+1}_{s}\biggr) I_{[0, \tau _{N}]} \,ds + \int _{0}^{t} h^{N+1}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N+1}_{s}\biggr) I_{[0, \tau _{N}]} \,d \langle B\rangle _{s} \\ &{}+ \int _{0}^{t} \sigma ^{N+1}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N+1}_{s}\biggr) I_{[0, \tau _{N}]} \,dB_{s} \end{aligned}$$

and

$$\begin{aligned} \bar{X}^{ N+1}_{t\wedge \bar{\tau}_{N}} ={}& X_{0} + \int _{0}^{t} \bar{b}^{N+1}\bigl( \bar{X}^{ N+1}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,ds + \int _{0}^{t} \bar{h}^{N+1}\bigl( \bar{X}^{ N+1}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,d\langle B \rangle _{s} \\ &{} + \int _{0}^{t} \bar{\sigma}^{N+1}\bigl( \bar{X}^{ N+1}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,dB_{s}. \end{aligned}$$

By the uniqueness of the solutions to the truncated GSDEs (25) and (26), for each \(N\in \mathbb{N}\), \(X^{\epsilon, N} \) and \(X^{\epsilon, N+1} \) are indistinguishable on \([0,\tau _{N}]\); at the same time, \(\bar{X}^{N}\) and \(\bar{X}^{N+1}\) are indistinguishable on \([0,\bar{\tau}_{N}]\). This also implies that the sequences \(\{\tau _{N}\}_{N\in \mathbb{N}}\) and \(\{\bar{\tau}_{N}\}_{N\in \mathbb{N}}\) are q.s. increasing. Now we claim that

$$\begin{aligned} \mathbb{V} \Biggl( \bigcup_{N=1}^{\infty } \bigl\{ \omega: \tau _{N}(\omega ) =T\bigr\} \Biggr) =1 \end{aligned}$$
(29)

and

$$\begin{aligned} \mathbb{V} \Biggl( \bigcup_{N=1}^{\infty } \bigl\{ \omega: \bar{\tau}_{N}( \omega ) =T\bigr\} \Biggr) =1. \end{aligned}$$
(30)

By the definition of \(\tau _{N}\), we know that \(|X^{\epsilon, N}_{\cdot}| \leq N\) on \([0, \tau _{N}]\); hence, for \(0\leq t\leq T\), we have

$$\begin{aligned} f\biggl(\frac{t}{\epsilon}, X^{\epsilon, N}_{t}\biggr) I_{[0, \tau _{N}]}(t) = f^{N}\biggl( \frac{t}{\epsilon}, X^{\epsilon, N}_{t}\biggr) I_{[0, \tau _{N}]}(t) \end{aligned}$$

and

$$\begin{aligned} \bar{f}\bigl(\bar{X}^{N}_{t}\bigr) I_{[0, \bar{\tau}_{N}]}(t) = \bar{f}^{N}\bigl(\bar{X}^{N}_{t}\bigr) I_{[0, \bar{\tau}_{N}]}(t), \end{aligned}$$

where \(f= b, h,\sigma \) and \(\bar{f}= \bar{b}, \bar{h}, \bar{\sigma}\). It follows from Lemma 4.2 in Li and Peng [14] that both \(f^{N}(\frac{t}{\epsilon}, X^{\epsilon, N}_{t}) I_{[0, \tau _{N}]}(t)\) and \(\bar{f}^{N}( \bar{X}^{N}_{t} ) I_{[0, \bar{\tau}_{N}]}(t)\) are \(M_{G}^{p} (0,T; \mathbb{R})\)-processes for any \(p\geq 2\). Therefore, for any \(t\in [0, T]\), we have

$$\begin{aligned} X^{\epsilon, N}_{t\wedge \tau _{N}} ={}& X_{0}+ \int _{0}^{t} b^{ N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,ds + \int _{0}^{t} h^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,d \langle B\rangle _{s} \\ &{}+ \int _{0}^{t} \sigma ^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,dB_{s} \\ ={}& \int _{0}^{t} b\biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,ds + \int _{0}^{t} h\biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,d\langle B\rangle _{s} \\ &{}+ \int _{0}^{t} \sigma \biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) I_{[0, \tau _{N}]} \,dB_{s}, \end{aligned}$$

and, similarly, we have

$$\begin{aligned} \bar{X}^{ N}_{t\wedge \bar{\tau}_{N}} ={}& \int _{0}^{t} \bar{b}\bigl( \bar{X}^{N}_{s} \bigr) I_{[0, \bar{\tau}_{N}]} \,ds + \int _{0}^{t} \bar{h}\bigl( \bar{X}^{N}_{s} \bigr) I_{[0, \bar{\tau}_{N}]} \,d\langle B\rangle _{s} \\ &{} + \int _{0}^{t} \bar{\sigma}\bigl( \bar{X}^{ N}_{s}\bigr) I_{[0, \bar{\tau}_{N}]} \,dB_{s}. \end{aligned}$$

Applying G-Itô’s formula to

$$\begin{aligned} \Phi \bigl(t\wedge \tau _{N}, X^{\epsilon, N}_{t\wedge \tau _{N}} \bigr):= e^{-C_{L}(t \wedge \tau _{N})} V\bigl(t\wedge \tau _{N}, X^{\epsilon, N}_{t\wedge \tau _{N}} \bigr) \end{aligned}$$

and

$$\begin{aligned} \bar{\Phi}\bigl(t\wedge \bar{\tau}_{N}, \bar{X}^{ N}_{t\wedge \bar{\tau}_{N}} \bigr):= e^{-C_{L}(t\wedge \bar{\tau}_{N})} V\bigl(t\wedge \bar{\tau}_{N}, \bar{X}^{N}_{t \wedge \bar{\tau}_{N}}\bigr), \end{aligned}$$

respectively, where we can take \(V(t, x) =1+ |x|^{2}\), we obtain

$$\begin{aligned} & \Phi \bigl(t\wedge \tau _{N}, X^{\epsilon, N}_{t\wedge \tau _{N}} \bigr) - \Phi (0, 0) \\ &\quad= \int _{0}^{t\wedge \tau _{N}} \biggl[ \partial _{t} \Phi \bigl(s, X_{s}^{ \epsilon, N} \bigr) + \partial _{x} \Phi \bigl(s, X_{s}^{\epsilon,N}\bigr) b\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \biggr] \,ds \\ &\qquad{}+ \int _{0}^{t\wedge \tau _{N}} \partial _{x} \Phi \bigl(s, X_{s}^{ \epsilon, N} \bigr) \sigma \biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \,dB_{s} \\ &\qquad{}+ \int _{0}^{t\wedge \tau _{N}} \biggl( \partial _{x} \Phi \bigl(s, X_{s}^{ \epsilon, N} \bigr) h\biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \\ &\qquad{}+ \frac{1}{2} \partial ^{2}_{xx} \Phi \bigl(s, X_{s}^{\epsilon, N} \bigr) \biggl[ \sigma \biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s} \biggr) \biggr]^{2} \biggr) \,d \langle B \rangle _{s} \end{aligned}$$
(31)

and

$$\begin{aligned} & \bar{\Phi}\bigl(t\wedge \bar{\tau}_{N}, \bar{X}^{ N}_{t\wedge \bar{\tau}_{N}}\bigr) - \bar{\Phi}(0, 0) \\ &\quad= \int _{0}^{t\wedge \bar{\tau}_{N}} \bigl[ \partial _{t} \bar{\Phi}\bigl(s, \bar{X}_{s}^{ N}\bigr) + \partial _{x} \bar{\Phi}\bigl(s, X_{s}^{\epsilon,N}\bigr) \bar{b} \bigl( \bar{X}^{ N}_{s}\bigr) \bigr] \,ds \\ &\qquad{}+ \int _{0}^{t\wedge \bar{\tau}_{N}} \partial _{x} \bar{\Phi}\bigl(s, \bar{X}_{s}^{ N}\bigr) \bar{\sigma}\bigl( \bar{X}^{ N}_{s}\bigr) \,dB_{s} \\ &\qquad{}+ \int _{0}^{t\wedge \bar{\tau}_{N}} \biggl( \partial _{x} \bar{\Phi}\bigl(s, \bar{X}_{s}^{ N}\bigr) \bar{h}\bigl( \bar{X}^{ N}_{s}\bigr) + \frac{1}{2} \partial ^{2}_{xx} \bar{\Phi}\bigl(s, \bar{X}_{s}^{ N} \bigr) \bigl[ \bar{\sigma}\bigl( \bar{X}^{ N}_{s}\bigr) \bigr]^{2} \biggr) \,d \langle B \rangle _{s}. \end{aligned}$$
(32)

Letting

$$\begin{aligned} &\eta _{s}\bigl(\Phi, X^{\epsilon, N}\bigr):= \partial _{x} \Phi \bigl(s, X_{s}^{ \epsilon, N} \bigr) h\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) + \frac{1}{2} \partial ^{2}_{xx} \Phi \bigl(s, X_{s}^{\epsilon, N} \bigr) \sigma ^{2} \biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s} \biggr), \end{aligned}$$
(33)
$$\begin{aligned} &\bar{\eta}_{s}\bigl(\bar{\Phi}, \bar{X}^{N} \bigr):= \partial _{x} \bar{\Phi}\bigl(s, \bar{X}_{s}^{ N} \bigr) \bar{h}\bigl( \bar{X}^{ N}_{s}\bigr) + \frac{1}{2} \partial ^{2}_{xx} \bar{\Phi}\bigl(s, \bar{X}_{s}^{ N}\bigr) \bar{\sigma}^{2}\bigl( \bar{X}^{ N}_{s}\bigr), \end{aligned}$$
(34)

we have \(\eta _{s}(\Phi, X^{\epsilon, N}), \bar{\eta}_{s}(\bar{\Phi}, \bar{X}^{N}) \in M_{w}^{2}(0,T; \mathbb{R})\). Hence, substituting (33) in (31), we arrive at

$$\begin{aligned} & \Phi \bigl(t\wedge \tau _{N}, X^{\epsilon, N}_{t\wedge \tau _{N}} \bigr) - \Phi (0, 0) \\ &\quad= \int _{0}^{t\wedge \tau _{N}} \biggl( \partial _{t} \Phi \bigl(s, X_{s}^{ \epsilon, N} \bigr) + \partial _{x} \Phi \bigl(s, X_{s}^{\epsilon,N}\bigr) b\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) + 2 G\bigl(\eta _{s}\bigl(\Phi, X^{\epsilon, N}\bigr)\bigr) \biggr) \,ds \\ &\qquad{}+ \int _{0}^{t\wedge \tau _{N}} \partial _{x} \Phi \bigl(s, X_{s}^{ \epsilon, N} \bigr) \sigma \biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \,dB_{s} \\ &\qquad{}+ \int _{0}^{t\wedge \tau _{N}} \eta _{s}\bigl(\Phi, X^{\epsilon, N}\bigr) \,d \langle B \rangle _{s} - \int _{0}^{t\wedge \tau _{N}} 2 G\bigl(\eta _{s}\bigl( \Phi, X^{\epsilon, N}\bigr)\bigr) \,ds \\ &\quad= \int _{0}^{t\wedge \tau _{N}} \mathcal{L} \Phi \bigl(s, X_{s}^{ \epsilon, N}\bigr) \,ds + \int _{0}^{t\wedge \tau _{N}} \eta _{s}\bigl(\Phi, X^{\epsilon, N}\bigr) \,d \langle B \rangle _{s} - \int _{0}^{t\wedge \tau _{N}} 2 G\bigl(\eta _{s}\bigl( \Phi, X^{\epsilon, N}\bigr)\bigr) \,ds \\ &\qquad{}+ \int _{0}^{t\wedge \tau _{N}} \partial _{x} \Phi \bigl(s, X_{s}^{ \epsilon, N} \bigr) \sigma \biggl(\frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \,dB_{s} . \end{aligned}$$
(35)

By assumption (A2), \(\mathcal{L} V\leq C_{L} V\) implies \(\mathcal{L} \Phi \leq 0\). It follows from Proposition 1.4 in IV-1 of Peng [16] that, for \(\eta \in M_{w}^{1}(0,T; \mathbb{R})\),

$$\begin{aligned} \int _{0}^{t\wedge \tau _{N}} \eta _{s}\bigl(\Phi, X^{\epsilon, N}\bigr) \,d \langle B \rangle _{s} - \int _{0}^{t\wedge \tau _{N}} 2 G\bigl(\eta _{s}\bigl( \Phi, X^{\epsilon, N}\bigr)\bigr) \,ds \leq 0,\quad \text{q.s.} \end{aligned}$$

Taking expectation on (35), we obtain

$$\begin{aligned} \hat{\mathbb{E}} \bigl[ \Phi \bigl(t\wedge \tau _{N}, X^{\epsilon, N}_{t \wedge \tau _{N}}\bigr) \bigr] \leq \Phi (0, 0). \end{aligned}$$

In particular, we have

$$\begin{aligned} \hat{\mathbb{E}} \bigl[ V\bigl(T\wedge \tau _{N}, X^{\epsilon, N}_{T\wedge \tau _{N}}\bigr) I_{\{\tau _{N} < T\}} \bigr] \leq V(0,0)e^{C_{L}T}. \end{aligned}$$

Since \(\tau _{N}< T\) implies \(|X^{\epsilon, N}_{T\wedge \tau _{N}}| = N \) q.s., we deduce

$$\begin{aligned} \mathbb{V}(\tau _{N}< T) \cdot \inf_{|x|\geq N} \inf _{t\in [0,T]} V(t,x) \leq V(0, 0) e^{C_{L}T}. \end{aligned}$$

Letting \(N\to \infty \), by (A2), we obtain

$$\begin{aligned} 1\geq \lim_{N\to \infty} \mathbb{V}(\tau _{N}=T) \geq 1- \lim_{N\to \infty} \mathbb{V}(\tau _{N}< T) =1. \end{aligned}$$

Since the sets \(\{\omega: \tau _{N}(\omega ) =T\}\) are increasing in N, the upwards convergence theorem yields (29). Similarly, substituting (34) in (32) and repeating the same discussion as above, we get that (30) holds.

Therefore, there exists a polar set A such that the following assertion holds for all \(\omega \in A^{c}\): one can find an \(N_{0}(\omega )\), depending on ω, such that \(\tau _{N}(\omega ) =T\) for all \(N\geq N_{0}(\omega )\), \(N\in \mathbb{N}\). Then we define, for \(t\in [0, T]\),

$$\begin{aligned} X_{t}^{\epsilon}(\omega ) = \textstyle\begin{cases} X_{t}^{\epsilon, N_{0}(\omega )}(\omega ), & \omega \in A^{c}; \\ 0, & \omega \in A. \end{cases}\displaystyle \end{aligned}$$

Similarly, one can find an \(N_{1}(\omega )\) that depends on ω such that for all \(N\geq N_{1}(\omega ), N\in \mathbb{N}, \bar{\tau}_{N}(\omega ) =T\). Then we define, for \(t\in [0, T]\),

$$\begin{aligned} \bar{X}_{t}(\omega ) = \textstyle\begin{cases} \bar{X}_{t}^{N_{1}(\omega )}(\omega ), & \omega \in A^{c}; \\ 0, & \omega \in A. \end{cases}\displaystyle \end{aligned}$$

From the argument above, we have \(X^{\epsilon }I_{[0, \tau _{N} ]} = X^{\epsilon, N} I_{[0, \tau _{N}]} \in M_{G}^{2}(0, T; \mathbb{R})\), and thus \(X^{\epsilon}\in M_{w}^{2}(0,T; \mathbb{R})\). Also, we have \(\bar{X} \in M_{w}^{2}(0,T; \mathbb{R})\). Moreover,

$$\begin{aligned} X^{\epsilon }_{t\wedge \tau _{N} } ={}& X^{\epsilon, N}_{t\wedge \tau _{N} } \\ ={}& X_{0}+ \int _{0}^{t\wedge \tau _{N} } b^{ N}\biggl( \frac{s}{\epsilon}, X^{ \epsilon, N}_{s}\biggr) \,ds + \int _{0}^{t\wedge \tau _{N} } h^{N}\biggl( \frac{s}{\epsilon}, X^{\epsilon, N}_{s}\biggr) \,d\langle B\rangle _{s} \\ &{} + \int _{0}^{t\wedge \tau _{N} } \sigma ^{N}\biggl( \frac{s}{\epsilon}, X^{ \epsilon, N}_{s}\biggr) \,dB_{s} \\ ={}& X_{0} + \int _{0}^{t\wedge \tau _{N} } b \biggl(\frac{s}{\epsilon}, X^{\epsilon}_{s}\biggr) \,ds + \int _{0}^{t\wedge \tau _{N} } h \biggl(\frac{s}{\epsilon}, X^{ \epsilon }_{s}\biggr) \,d\langle B\rangle _{s} + \int _{0}^{t\wedge \tau _{N} } \sigma \biggl(\frac{s}{\epsilon}, X^{\epsilon }_{s}\biggr) \,dB_{s} \end{aligned}$$

and

$$\begin{aligned} \bar{X}_{t \wedge \bar{\tau}_{N}}& = \bar{X}^{ N}_{t \wedge \bar{\tau}_{N}} \\ &= X_{0}+ \int _{0}^{t \wedge \bar{\tau}_{N}} \bar{b} ( \bar{X}_{s}) \,ds + \int _{0}^{t \wedge \bar{\tau}_{N}} \bar{h} ( \bar{X}_{s}) \,d \langle B\rangle _{s} + \int _{0}^{t \wedge \bar{\tau}_{N}} \bar{\sigma} ( \bar{X}_{s}) \,dB_{s}. \end{aligned}$$

Hence, the two equations imply that \(X^{\epsilon}, \bar{X}\) satisfy (4), (5), respectively. Furthermore, we obtain that \(X^{\epsilon } - \bar{X} \) satisfies

$$\begin{aligned} X^{\epsilon }_{t } - \bar{X}_{t } ={}& \int _{0}^{t} \biggl[b \biggl( \frac{s}{\epsilon}, X^{\epsilon }_{s}\biggr)- \bar{b} ( \bar{X}_{s})\biggr] \,ds + \int _{0}^{t} \biggl[h \biggl(\frac{s}{\epsilon}, X^{\epsilon }_{s}\biggr) - \bar{h} ( \bar{X}_{s})\biggr] \,d\langle B\rangle _{s} \\ &{} + \int _{0}^{t} \biggl[\sigma \biggl( \frac{s}{\epsilon}, X^{\epsilon }_{s}\biggr) - \bar{\sigma} ( \bar{X}_{s})\biggr] \,dB_{s}. \end{aligned}$$

Therefore, letting \(N\to \infty \) in (28), we can deduce

$$\begin{aligned} \lim_{\epsilon \to 0} \hat{\mathbb{E}} \Bigl[\sup _{0\leq t\leq T} \bigl\vert X^{ \epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \Bigr] =0, \end{aligned}$$

which completes the proof of this theorem. □

As a consequence of Theorem 3.6 and the Markov-type inequality for capacities, the convergence also holds in the sense of capacity.

Corollary 3.7

Under the same assumptions as in Theorem 3.6, for all \(\delta >0\), we have

$$\begin{aligned} \lim_{\epsilon \to 0}v \Bigl(\sup_{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \leq \delta \Bigr) =1. \end{aligned}$$

Proof

By Proposition 2.3, for any \(\delta >0\), we have

$$\begin{aligned} & \mathbb{V} \Bigl(\sup_{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} >\delta \Bigr) \\ &\quad\leq \frac{1}{\delta} \hat{\mathbb{E}} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X^{ \epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \Bigr]. \end{aligned}$$

Noting that \(v(A)=1- \mathbb{V}(A^{c})\), we have

$$\begin{aligned} v \Bigl(\sup_{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \leq \delta \Bigr) \geq 1- \frac{1}{\delta} \hat{\mathbb{E}} \Bigl[\sup_{0 \leq t\leq T} \bigl\vert X^{\epsilon}_{t} - \bar{X}_{t} \bigr\vert ^{2} \Bigr]. \end{aligned}$$

Letting \(\epsilon \to 0\), the required result follows from Theorem 3.6. □
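To make the capacity statements concrete, here is a minimal Python sketch (our own illustration, not the paper's code) that approximates the upper capacity \(\mathbb{V}(A)=\sup_{P\in \mathcal{P}} P(A)\) and the lower capacity \(v(A)=1-\mathbb{V}(A^{c})\) over a finite family of constant-volatility scenarios, and checks the Markov-type inequality empirically; the event, threshold, and scenario grid are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def capacities(delta, sigmas, n_paths=2000, n_steps=200, T=1.0):
    """Approximate V, v and the sublinear expectation for Y = sup_t |B_t|^2."""
    dt = T / n_steps
    probs, means = [], []
    for sigma in sigmas:
        # Under the scenario P^sigma, B is a Brownian motion with volatility sigma.
        B = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)), axis=1)
        Y = np.max(np.abs(B), axis=1) ** 2
        probs.append(np.mean(Y > delta))
        means.append(np.mean(Y))
    V = max(probs)       # upper capacity of the event {Y > delta}
    v = 1.0 - V          # lower capacity of the complement {Y <= delta}
    E_hat = max(means)   # sublinear expectation of Y
    return V, v, E_hat

delta = 4.0
V, v, E_hat = capacities(delta, sigmas=[0.5, 1.0, 2.0])
# Markov-type inequality: V(Y > delta) <= E_hat[Y] / delta
```

Since the empirical Markov inequality holds scenario by scenario, it survives the supremum over scenarios, which is exactly the step used in the proof above.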

4 Numerical simulation

In this section, we present numerical simulations of GSDEs obtained using Python and give three examples to demonstrate the averaging principle for GSDEs driven by G-Brownian motion.

Example 1

(Without quadratic variation term)

Consider the following standard GSDE:

$$\begin{aligned} d X^{\epsilon }= - 2 \lambda X^{\epsilon }\sin ^{2} \biggl( \frac{t}{\epsilon} \biggr) \,dt + dB_{t} \end{aligned}$$
(36)

and the averaged GSDE

$$\begin{aligned} d Z = - \lambda Z \,dt + dB_{t} \end{aligned}$$
(37)

with the same initial condition \(X^{\epsilon}_{0} = Z_{0}= X_{0}\), where \(B_{t}\) is a G-Brownian motion whose quadratic variation satisfies

$$\begin{aligned} \underline{\sigma}^{2} t \leq \langle B \rangle _{t} \leq \bar{\sigma}^{2} t. \end{aligned}$$

Obviously, \(\frac{1}{\pi} \int _{0}^{\pi } 2\lambda X^{\epsilon }\sin ^{2}(t) \,dt = \lambda X^{\epsilon }\), so all coefficients of the standard GSDE and the averaged GSDE satisfy conditions (A1)–(A2) and (B1)–(B3) for the functions \(b, h, \sigma, \bar{b}, \bar{h}, \bar{\sigma}\). Thus Theorems 3.5 and 3.6 hold, that is,

$$\begin{aligned} \lim_{\epsilon \to 0} \hat{\mathbb{E}} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X^{ \epsilon}_{t} - Z_{t} \bigr\vert ^{2} \Bigr] =0 \end{aligned}$$

and

$$\begin{aligned} \lim_{\epsilon \to 0}v \Bigl(\sup_{0\leq t\leq T} \bigl\vert X^{\epsilon}_{t} - Z_{t} \bigr\vert ^{2} \leq \delta \Bigr) =1. \end{aligned}$$
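The averaging identity behind (37) can be checked numerically; the sketch below (our own, with \(\lambda =1\) as in the simulation) verifies that the time average of the drift coefficient \(2\lambda \sin ^{2}(t)\) over one period \([0, \pi ]\) is λ, which yields the averaged drift \(-\lambda Z\):

```python
import numpy as np

lam = 1.0                       # lambda = 1.0, as in the simulation below
t = np.linspace(0.0, np.pi, 100001)
f = 2.0 * lam * np.sin(t) ** 2  # drift coefficient of the standard GSDE (36)

# composite trapezoidal rule for (1/pi) * int_0^pi f(t) dt
avg = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)) / np.pi
print(round(avg, 6))  # → 1.0, the averaged drift coefficient lam
```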

Now we carry out the numerical simulation to obtain the solutions of GSDE (36) and averaged GSDE (37) under the conditions \(X_{0} = 1, \lambda =1.0, \epsilon =0.01\), with (a) \(\underline{\sigma} = 0.1, \bar{\sigma}=0.5\); (b) \(\underline{\sigma} = 1, \bar{\sigma}=5\); (c) \(\underline{\sigma} = 0.5, \bar{\sigma}=2\); and (d) classical Brownian motion with \(\sigma =1\). Figure 1 depicts sample averages of 5000 trajectories of the SDE \(X^{\epsilon}\), of the averaged SDE Z, and of the error \(X^{\epsilon}-Z\). Not only do we see good agreement between the solutions of the equation and the averaged equation, but we also observe that larger volatility causes greater fluctuation.
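A minimal Euler-Maruyama sketch of this experiment may look as follows; the scenario sampling (one random constant volatility per path) and all parameter names are our own choices that only emulate the volatility uncertainty, not the paper's actual Python code:

```python
import numpy as np

rng = np.random.default_rng(1)

def example1_error(eps=0.01, lam=1.0, X0=1.0, T=1.0, n_steps=2000,
                   n_paths=500, sig_low=0.5, sig_high=2.0):
    dt = T / n_steps
    t = np.arange(n_steps) * dt
    # one random constant volatility per path, emulating scenarios in [sig_low, sig_high]
    sigma = rng.uniform(sig_low, sig_high, size=(n_paths, 1))
    dB = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    X = np.full(n_paths, X0)   # standard GSDE (36)
    Z = np.full(n_paths, X0)   # averaged GSDE (37)
    sup_sq = np.zeros(n_paths)
    for i in range(n_steps):
        X = X - 2.0 * lam * X * np.sin(t[i] / eps) ** 2 * dt + dB[:, i]
        Z = Z - lam * Z * dt + dB[:, i]
        sup_sq = np.maximum(sup_sq, (X - Z) ** 2)
    return sup_sq.mean()       # sample proxy for E_hat[sup_t |X - Z|^2]

err = example1_error()
```

Because both equations are driven by the same noise increments, the noise cancels in the difference and the small residual error comes only from the oscillating drift, in line with Theorem 3.6.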

Figure 1

Sample averages of 5000 trajectories of \(X^{\epsilon}\) and Z without quadratic variation terms

Example 2

(With quadratic variation term)

Consider the following standard GSDE:

$$\begin{aligned} d X^{\epsilon }= \biggl[ -\lambda X^{\epsilon }+ \biggl(1+ \frac{t}{\epsilon} \biggr)^{-1} \sin \biggl(\frac{t}{\epsilon}+ X^{ \epsilon } \biggr) \biggr] \,d \langle B \rangle _{t} + dB_{t} \end{aligned}$$
(38)

and the averaged GSDE

$$\begin{aligned} dZ = -\lambda Z \,d\langle B \rangle _{t} + dB_{t}, \end{aligned}$$
(39)

with the same initial condition \(X^{\epsilon}_{0} = Z_{0}= X_{0}\), where \(B_{t}\) is a G-Brownian motion whose quadratic variation satisfies

$$\begin{aligned} \underline{\sigma}^{2} t \leq \langle B \rangle _{t} \leq \bar{\sigma}^{2} t. \end{aligned}$$

We can easily verify the following estimate for any \(T_{1}\in (0,T]\):

$$\begin{aligned} \sup_{t\geq 0} \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \bigl\vert h(s, x) - \bar{h}(x) \bigr\vert ^{2} \,ds \leq \varphi (T_{1}) \bigl(1+ \vert x \vert ^{2}\bigr), \end{aligned}$$

where φ is a positive bounded function with \(\lim_{T_{1} \to \infty} \varphi (T_{1})=0\). In fact, due to \(|\sin (x)|\leq 1\), for any \(T_{1}\in [0,T], \epsilon \in (0,1)\), we have

$$\begin{aligned} & \sup_{t\geq 0} \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \biggl\vert \biggl[ \lambda x + \frac{1}{1+s/\epsilon} \sin (s/\epsilon +x)\biggr] - \lambda x \biggr\vert ^{2} \,ds \\ &\quad= \sup_{t\geq 0} \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \biggl\vert \frac{1}{1+s/\epsilon} \sin (s/\epsilon +x) \biggr\vert ^{2} \,ds \\ &\quad\leq \sup_{t\geq 0} \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \frac{1}{(1+s/\epsilon )^{2}} \,ds \\ &\quad= \sup_{t\geq 0} \frac{\epsilon ^{2}}{(\epsilon +t)(\epsilon +t+T_{1})} \\ &\quad= \frac{\epsilon}{\epsilon +T_{1}} \leq \frac{1}{T_{1}}=: \varphi (T_{1}). \end{aligned}$$

Hence all the coefficients of GSDE and averaged GSDE satisfy conditions (A1)–(A2) and (B) for the functions \(b, h, \sigma, \bar{b}, \bar{h}, \bar{\sigma}\). Thus we can use the solution Z of GSDE (39) to approximate the original solution \(X^{\epsilon}\) of GSDE (38), and the convergence will be assured.
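The bound \(\varphi (T_{1}) = 1/T_{1}\) can also be probed numerically. The closed form below is our own evaluation of \(\frac{1}{T_{1}}\int _{t}^{t+T_{1}} (1+s/\epsilon )^{-2} \,ds\), and the parameter grids are arbitrary choices:

```python
import numpy as np

def avg_dev(t, T1, eps):
    # our closed-form evaluation of (1/T1) * int_t^{t+T1} (1 + s/eps)^(-2) ds
    return eps ** 2 / ((eps + t) * (eps + t + T1))

worst_ratio = max(
    avg_dev(t, T1, eps) * T1           # compare against the bound 1/T1
    for eps in (0.01, 0.1, 0.99)
    for T1 in (0.5, 1.0, 10.0)
    for t in np.linspace(0.0, 50.0, 501)
)
print(worst_ratio <= 1.0)  # → True
```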

Now we carry out the numerical simulation to obtain the solutions of GSDE (38) and averaged GSDE (39) under the conditions \(X_{0} = 1, \lambda =1.0, \epsilon =0.01\), with (a) \(\underline{\sigma} = 0.1, \bar{\sigma}=0.5\); (b) \(\underline{\sigma} = 1, \bar{\sigma}=5\); (c) \(\underline{\sigma} = 0.5, \bar{\sigma}=2\); and (d) \(\underline{\sigma} = \bar{\sigma}=1\). Figure 2 depicts sample averages of 5000 trajectories of the SDE \(X^{\epsilon}\), of the averaged SDE Z, and of the error \(X^{\epsilon}-Z\). We can see good agreement between the solutions of the equation and the averaged equation. Comparing (b) and (c) with (d), we observe that, because the GSDE contains a quadratic variation term, the solution reflects the change of trend more clearly: it decays faster as the uncertainty of the volatility increases.
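For readers who wish to reproduce a step of this experiment, here is a hedged sketch of one Euler step for GSDE (38); modelling the quadratic variation increment as \(d\langle B\rangle _{t} = \sigma ^{2} \,dt\) for a constant-volatility scenario is our own choice, consistent with \(\underline{\sigma}^{2} t \leq \langle B \rangle _{t} \leq \bar{\sigma}^{2} t\):

```python
import numpy as np

rng = np.random.default_rng(2)

def em_step_38(X, t, dt, eps, lam, sigma, dB):
    d_qv = sigma ** 2 * dt                                 # quadratic variation increment d<B>_t
    h = -lam * X + np.sin(t / eps + X) / (1.0 + t / eps)   # coefficient of d<B>_t in (38)
    return X + h * d_qv + dB                               # note: (38) has no separate dt-drift

# drive 100 paths for a few steps under one constant-volatility scenario
dt, eps, lam, sigma = 1e-3, 0.01, 1.0, 2.0
X = np.full(100, 1.0)
for i in range(50):
    dB = sigma * np.sqrt(dt) * rng.standard_normal(100)
    X = em_step_38(X, i * dt, dt, eps, lam, sigma, dB)
```

The averaged GSDE (39) is obtained from the same step with h replaced by \(-\lambda X\).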

Figure 2

Sample averages of 5000 trajectories of \(X^{\epsilon}\) and Z with quadratic variation terms

Example 3

(With diffusion coefficient term)

Consider the following standard GSDE, where the diffusion coefficient is not a constant:

$$\begin{aligned} d X^{\epsilon }= -X^{\epsilon }\,dt + \biggl[ \lambda \cos \bigl(X^{\epsilon}\bigr) + \biggl(1+\frac{t}{\epsilon} \biggr)^{-1} \sin \biggl(\frac{t}{\epsilon}+ X^{ \epsilon } \biggr) \biggr] \,dB_{t} \end{aligned}$$
(40)

and the averaged GSDE

$$\begin{aligned} dZ = -Z \,dt + \lambda \cos (Z) \,dB_{t}, \end{aligned}$$
(41)

with the same initial condition \(X^{\epsilon}_{0} = Z_{0}= X_{0}\), where \(B_{t}\) is a G-Brownian motion whose quadratic variation satisfies

$$\begin{aligned} \underline{\sigma}^{2} t \leq \langle B \rangle _{t} \leq \bar{\sigma}^{2} t. \end{aligned}$$

Here,

$$\begin{aligned} \sigma (t,x)= \lambda \cos (x) + \frac{1}{1+t/\epsilon} \sin (t/ \epsilon +x) \quad\text{and}\quad \bar{\sigma}(x) =\lambda \cos (x). \end{aligned}$$

Due to \(|\sin (x)|\leq 1\), for any \(\epsilon \in (0,1)\), we have

$$\begin{aligned} & \sup_{t\geq 0} \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \biggl\vert \biggl[ \lambda \cos (x) + \frac{1}{1+s/\epsilon} \sin (s/\epsilon +x)\biggr] - \lambda \cos (x) \biggr\vert ^{2} \,ds \\ &\quad\leq \sup_{t\geq 0} \frac{1}{T_{1}} \int _{t}^{t+T_{1}} \frac{1}{(1+s/\epsilon )^{2}}\,ds \\ &\quad= \sup_{t\geq 0} \frac{\epsilon ^{2}}{(\epsilon +t)(\epsilon +t+T_{1})} = \frac{\epsilon}{\epsilon +T_{1}} \leq \frac{1}{T_{1}} =: \varphi (T_{1}). \end{aligned}$$

Hence, all the coefficients of GSDE and averaged GSDE satisfy conditions (A1)–(A2) and (B) for the functions \(b, h, \sigma, \bar{b}, \bar{h}, \bar{\sigma}\). Thus we can use the solution Z of GSDE (41) to approximate the original solution \(X^{\epsilon}\) of GSDE (40), and the convergence will be assured.

Now we carry out the numerical simulation to obtain the solutions of GSDE (40) and averaged GSDE (41) under the conditions \(X_{0} = 1, \lambda =1.0, \epsilon =0.01\), with (a) \(\underline{\sigma} = 0.1, \bar{\sigma}=0.5\); (b) \(\underline{\sigma} = 1, \bar{\sigma}=5\); (c) \(\underline{\sigma} = 0.5, \bar{\sigma}=2\); and (d) \(\underline{\sigma} = \bar{\sigma}=1\). Figure 3 depicts sample averages of 5000 trajectories of the SDE \(X^{\epsilon}\), of the averaged SDE Z, and of the error \(X^{\epsilon}-Z\). We can see good agreement between the solutions of the equation and the averaged equation, and the error is approximately zero. The numerical verification is consistent with the theoretical results.
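One hedged way to estimate the sublinear-expectation error \(\hat{\mathbb{E}} [\sup_{0\leq t\leq T} \vert X^{\epsilon}_{t} - Z_{t} \vert ^{2} ]\) for this example is to take the maximum, over a grid of constant-volatility scenarios, of the Monte Carlo mean under each scenario; the recipe and all parameters below are our own, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(3)

def sup_error(eps=0.01, lam=1.0, X0=1.0, T=1.0, n_steps=1000, n_paths=300,
              sigmas=(0.5, 1.0, 2.0)):
    dt = T / n_steps
    t = np.arange(n_steps) * dt
    worst = 0.0
    for sig in sigmas:                       # maximize over volatility scenarios
        dB = sig * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
        X = np.full(n_paths, X0)
        Z = np.full(n_paths, X0)
        sup_sq = np.zeros(n_paths)
        for i in range(n_steps):
            diff_X = lam * np.cos(X) + np.sin(t[i] / eps + X) / (1.0 + t[i] / eps)
            X = X - X * dt + diff_X * dB[:, i]               # GSDE (40)
            Z = Z - Z * dt + lam * np.cos(Z) * dB[:, i]      # averaged GSDE (41)
            sup_sq = np.maximum(sup_sq, (X - Z) ** 2)
        worst = max(worst, sup_sq.mean())    # maximum of scenario-wise Monte Carlo means
    return worst

w = sup_error()
```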

Figure 3

Sample averages of 5000 trajectories of \(X^{\epsilon}\) and Z with diffusion coefficient terms

5 Conclusion

In this paper we have studied the time-averaging principle for stochastic differential equations driven by G-Brownian motion based on the Lyapunov condition, in the presence of a family of probability measures, each corresponding to a different scenario for the volatility. To overcome the difficulty arising from the locally Lipschitz coefficients of G-SDEs, G-stochastic calculus and a localization technique have been used. We have shown that the solution of the standard equation converges to the solution of the corresponding averaged equation in the sense of sublinear expectation. We have also presented numerical simulations of G-SDEs and given three examples to demonstrate the averaging method. The numerical results show good agreement between the solutions of the equation and the averaged equation, with an error that is approximately zero, consistent with the theoretical results.

Availability of data and materials

Not applicable.

References

  1. Denis, L., Hu, M., Peng, S.: Function spaces and capacity related to a sublinear expectation: application to G-Brownian motion paths. Potential Anal. 34, 139–161 (2011)


  2. Freidlin, M.I., Wentzell, A.D.: Random perturbations of dynamical systems. In: Grundlehren der Mathematischen Wissenschaften, vol. 260. Springer, Berlin (1998)


  3. Fu, H., Liu, J.: Strong convergence in stochastic averaging principle for two time-scales stochastic partial differential equations. J. Math. Anal. Appl. 384(1), 70–86 (2011)


  4. Fu, H., Wan, L., Liu, J.: Strong convergence in averaging principle for stochastic hyperbolic-parabolic equations with two time-scales. Stoch. Process. Appl. 125(8), 3255–3279 (2015)


  5. Gao, F.: Pathwise properties and homeomorphic flows for stochastic differential equations driven by G-Brownian motion. Stoch. Process. Appl. 119(10), 3356–3382 (2009)


  6. Gao, P.: Averaging principle for stochastic Korteweg-de Vries equation. J. Differ. Equ. 267(12), 6872–6909 (2019)


  7. Givon, D.: Strong convergence rate for two-time-scale jump-diffusion stochastic differential systems. Multiscale Model. Simul. 6(2), 577–594 (2007)


  8. Guo, Z., Lv, G., Wei, J.: Averaging principle for stochastic differential equations under a weak condition. Chaos, Interdiscip. J. Nonlinear Sci. 30(12), 123139 (2020)


  9. Hu, M., Jiang, L., Wang, F.: An averaging principle for nonlinear parabolic PDEs via FBSDEs driven by G-Brownian motion. J. Math. Anal. Appl. 508, Article ID 125893 (2022)


  10. Hu, M., Wang, F.: Probabilistic approach to singular perturbations of viscosity solutions to nonlinear parabolic PDEs. Stoch. Process. Appl. 141, 139–171 (2021)


  11. Hu, M., Wang, F., Zheng, G.: Quasi-continuous random variables and processes under the G-expectation framework. Stoch. Process. Appl. 126(8), 2367–2387 (2016)


  12. Khasminskii, R.Z.: On the principle of averaging the Itô’s stochastic differential equations. Kybernetika 4, 260–279 (1968)


  13. Li, X., Lin, X., Lin, Y.: Lyapunov-type conditions and stochastic differential equations driven by G-Brownian motion. J. Math. Anal. Appl. 439(1), 235–255 (2016)


  14. Li, X., Peng, S.: Stopping times and related Itô's calculus with G-Brownian motion. Stoch. Process. Appl. 121(7), 1492–1508 (2011)


  15. Mao, W., Chen, B., You, S.: On the averaging principle for SDEs driven by G-Brownian motion with non-Lipschitz coefficients. Adv. Differ. Equ. 2021(1), 1 (2021)


  16. Peng, S.: Nonlinear Expectations and Stochastic Calculus Under Uncertainty. Probability Theory and Stochastic Modelling, vol. 65. Springer, Berlin (2019)


  17. Shen, G., Song, J., Wu, J.L.: Stochastic averaging principle for distribution dependent stochastic differential equations. Appl. Math. Lett. 125, 107761 (2022)


  18. Shen, G., Xiang, J., Wu, J.L.: Averaging principle for distribution dependent stochastic differential equations driven by fractional Brownian motion and standard Brownian motion. J. Differ. Equ. 321, 381–414 (2022)


  19. Stratonovich, R.L.: Topics in the Theory of Random Noise, vol. 2. CRC Press, Boca Raton (1967)


  20. Stratonovich, R.L.: Conditional Markov Processes and Their Application to the Theory of Optimal Control (1968)


  21. Xu, J., Miao, Y., Liu, J.: A note on strong convergence rate in averaging principle for stochastic FitzHugh–Nagumo system with two time-scales. Stoch. Anal. Appl. 34(1), 178–181 (2016)


  22. Xu, Y., Duan, J., Xu, W.: An averaging principle for stochastic dynamical systems with Lévy noise. Phys. D, Nonlinear Phenom. 240(17), 1395–1401 (2011)



Acknowledgements

I am grateful to the anonymous reviewer for carefully reading the manuscript and giving many valuable suggestions. I also wish to thank Prof. Wang, Falei for his helpful discussions and suggestions.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 11501325, 71871129), the China Postdoctoral Science Foundation (Grant No. 2018T110706, 2018M642641) and Shandong Province Key R&D Plan (Major Science and Technology Innovation) Project (Grant No. 2021CXGC010108).

Author information


Contributions

GZ is the unique author of the manuscript. GZ prepared the manuscript initially and performed all the steps of the proofs in this research. GZ read and approved the final manuscript.

Corresponding author

Correspondence to Gaofeng Zong.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zong, G. Time-averaging principle for G-SDEs based on Lyapunov condition. Adv Cont Discr Mod 2023, 28 (2023). https://doi.org/10.1186/s13662-023-03772-6

