On almost sure asymptotic periodicity for scalar stochastic difference equations
Advances in Difference Equations volume 2017, Article number: 220 (2017)
Abstract
We consider a perturbed linear stochastic difference equation
$$ X(n+1)=a(n)X(n)+g(n)+\sigma(n)\xi(n+1), \quad n\in\mathbf {N}_{0}, $$
with real coefficients \(a(n)\), \(g(n)\), \(\sigma(n)\), and independent identically distributed random variables \(\xi(n)\) having zero mean and unit variance. The sequence \((a(n) )_{n\in\mathbf {N}}\) is K-periodic for some positive integer K, \(\lim_{n\to\infty }g(n)=\hat{g}<\infty\), and \(\lim_{n\to\infty}\sigma(n) \xi(n+1)=0\), almost surely. We establish conditions providing almost sure asymptotic periodicity of the solution \(X(n)\) for \(|L|=1\) and \(|L|<1\), where \(L:=\prod_{i=0}^{K-1}a(i)\). A sharp result on the asymptotic periodicity of \(X(n)\) is also proved. The results are illustrated by computer simulations.
1 Introduction
There is a vast literature on periodic solutions of difference equations, and we mention only a few works here. Reference [1], which can be regarded as a textbook on difference equations, discusses periodicity. Reference [2] gives an overview of the results on the existence of periodic solutions of difference equations obtained over the preceding two decades; it covers both ordinary and Volterra difference systems. Reference [3] is devoted to periodicity for nonlinear difference equations. In [4, 5] the authors study linear difference equations perturbed by Volterra terms. Reference [6] discusses asymptotic stability of a perturbed continuous-time difference equation with a small parameter. All these works consider only deterministic difference equations.
Stochastic difference equations have been studied intensively during the last decade. For results on the asymptotic behaviour of solutions to stochastic difference equations and on stabilisation see e.g. [7–11]. References [8] and [10] deal with nonlinear stochastic difference equations perturbed by a vanishing noise. The main equation of the present note has a similar structure, but it is linear, contains periodic coefficients and, in addition to stochastic perturbations, also has deterministic ones. In other words, we consider a linear difference equation perturbed by the deterministic term g and the stochastic term σξ:
$$ X(m+1)=a(m)X(m)+g(m)+\sigma(m)\xi(m+1), \quad m\in\mathbf {N}_{0}. $$(2)
Here \(\mathbf {N}_{0}=\mathbf {N} \cup\{0\}\), \((a(m) )_{m\in\mathbf {N}}\), \((g(m) )_{m\in \mathbf {N}}\) and \((\sigma(m) )_{m\in\mathbf {N}}\) are nonrandom sequences of real numbers, the sequence \((a(m) )_{m\in\mathbf {N}}\) is periodic with a period \(K\in\mathbf {N}\), \(\lim_{m\to\infty }g(m)=\hat{g}\in\mathbb {R}\), and \((\xi(m) )_{m\in\mathbf {N}}\) is a sequence of independent and identically distributed random variables with zero mean, variance 1 and a distribution function F. The term \(\sigma(m)\xi(m+1)\) is the random perturbation added at step \(m+1\).
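As a concrete illustration (ours, not part of the paper), the following Python sketch simulates one trajectory of equation (2); the coefficients a, g and σ below are assumptions chosen so that \(|L|<1\), \(g(n)\to\hat{g}\) and \(\sigma(n)\xi(n+1)\to0\) a.s.:

```python
# Illustrative sketch (assumed coefficients, not from the paper): one
# trajectory of equation (2) with a 2-periodic a, L = 0.5 * 1.2 = 0.6,
# g(n) -> 1, and sigma(n) xi(n+1) -> 0 almost surely.
import numpy as np

rng = np.random.default_rng(0)
a, K = [0.5, 1.2], 2
g = lambda n: 1.0 + 1.0 / (n + 1.0)
sigma = lambda n: 0.1 / (n + 1.0)

N = 200
X = np.empty(K * N + 1)
X[0] = 2.0
for n in range(X.size - 1):
    X[n + 1] = a[n % K] * X[n] + g(n) + sigma(n) * rng.standard_normal()

print(X[-4:])  # the tail settles near a 2-periodic pattern
```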
There are several publications on periodic, asymptotically periodic, and almost periodic solutions of stochastic differential equations; see e.g. [12–14]. However, to the best of our knowledge, periodicity for stochastic difference equations of type (2) was discussed only in [15], where sufficient conditions for periodicity were derived for \(g\equiv0\). This note can be viewed as an extension and generalisation of [15].
Let
$$ L:=\prod_{i=0}^{K-1}a(i). $$(3)
The unperturbed counterpart of (2), i.e. the equation
$$ Z(m+1)=a(m)Z(m), \quad m\in\mathbf {N}_{0}, $$(4)
has a periodic solution only when \(|L|=1\). This case along with all other possible cases of the asymptotic behaviour of the solution \(Z(m)\) is discussed in Lemma 1, Section 3.
We assume that
$$ \lim_{m\to\infty}\sigma(m)\xi(m+1)=0, \quad\text{a.s.} $$(5)
A detailed analysis of condition (5) can be found in [10] (see also [9, 16]). In particular, it was shown there that, when the \(\xi(m)\) are independent \(\mathcal {N}(0,1)\) random variables, the following rate of decay of σ:
$$ \sigma(n)=O \bigl([\log n]^{-1/2-\varepsilon} \bigr), \quad \varepsilon>0, $$
is the critical one which guarantees (5).
is the critical one which guarantees (5). It was also shown in [10] that when tails of ξ decay polynomially, i.e. \([1-F(y)]y^{M}\to\mbox{constant}\), as \(y\to\infty\), for some \(M\ge2\), then (5) holds if and only if \(\sum_{i=1}^{\infty} [\sigma(i) ]^{M}<\infty\).
In several statements of the paper we impose more restrictions on the decay of σ, assuming that \(\sigma\in\boldsymbol {l}_{2}\), that is, \(\sum_{i=1}^{\infty}\sigma^{2}(i)<\infty\). This assumption implies that the series \(\sum_{i=1}^{\infty}\sigma(i)\xi(i+1)\) converges almost surely (see, e.g., [17], page 384, or Lemma 4 below). This, in turn, implies condition (5).
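A minimal numerical sketch of this fact (with the illustrative, square-summable choice \(\sigma(i)=1/i\), an assumption of ours) shows the partial sums of \(\sum_{i}\sigma(i)\xi(i+1)\) stabilising along a path:

```python
# Sketch: for sigma in l2 (here sigma(i) = 1/i, an illustrative assumption),
# the partial sums of sum_i sigma(i) xi(i+1) stabilise pathwise, in line
# with the martingale convergence result cited above.
import numpy as np

rng = np.random.default_rng(1)
i = np.arange(1, 100001)
partial = np.cumsum(rng.standard_normal(i.size) / i)   # sigma(i) = 1/i
print(partial[[99, 999, 9999, -1]])   # successive values change very little
```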
We cannot expect the solution \(X(m)\) of (2) to be periodic when the perturbations \(g(m)\) and \(\sigma(m) \xi(m+1)\) are not periodic. So we look for asymptotic periodicity, where \(X(n)\) approaches a periodic stochastic process almost surely. The main result of the paper on the asymptotic periodicity of \(X(n)\) is given in Theorem 1, Section 6.2. For \(L=1\), \(\sigma\in\boldsymbol {l}_{2}\) and some additional assumptions on g, it states that there exists an almost surely finite random function \(\mathcal {R}(s)\), defined on \(\mathcal {S}:=\{0, 1, \dots, K-1\} \), such that, almost surely,
$$ \lim_{n\to\infty}\bigl|X(nK+s)-\mathcal {R}(s)\bigr|=0 \quad\text{for each } s\in\mathcal {S}. $$(6)
Equation (6) also holds when \(|L|<1\) and condition (5) is fulfilled. In case \(L=-1\), \(\sigma\in \boldsymbol {l}_{2}\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\), instead of limit (6) we get, almost surely,
$$ \lim_{n\to\infty}\bigl|X(2nK+e)-\mathcal {R}(e)\bigr|=0 \quad\text{for each } e\in\{0, 1, \dots, 2K-1\}. $$(7)
In Proposition 2, Section 6.2, we present a sharp result, which proves that, under some additional assumptions on g, condition \(\sigma\in\boldsymbol {l}_{2}\) is necessary and sufficient for the asymptotic periodicity of \(X(n)\) in form (6), when \(L=1\), and in form (7), when \(L=-1\).
The proofs of Theorem 1 and Proposition 2 are based on Lemma 10, Section 6.1, which establishes the asymptotic behaviour of the auxiliary process \(Y^{s}\), defined by
$$ Y^{s}(n):=X(nK+s), \quad n\in\mathbf {N}_{0}, s\in\mathcal {S}. $$
In Section 3.1 we show that \(Y^{s}\) satisfies the equation
$$ Y^{s}(n+1)=LY^{s}(n)+G^{s}(n)+H^{s}(n+1), \quad n\in\mathbf {N}_{0}, $$(8)
where \(G^{s}(n)\) and \(H^{s}(n+1)\) behave similarly to \(g(n)\) and \(\sigma(n) \xi(n+1)\) from equation (2). In particular, for each \(s\in \mathcal {S}\), \(G^{s}(n)\) is nonrandom and converges to a finite limit, and \((H^{s}(n))_{n\in\mathbf {N}}\) is a sequence of independent random perturbations having mean zero and uniformly bounded second moments. Properties of \(G^{s}\) and \(H^{s}\) are discussed in Sections 4.1 and 5.2.
By solving the linear equation (8) we get the following representation of \(Y^{s}\):
$$ Y^{s}(n)=L^{n}Y^{s}(0)+\mathcal {V}^{s}(n)+\mathcal {H}^{s}(n+1), $$(9)
which allows us to draw conclusions about the asymptotic behaviour of \(Y^{s}(n)\) from the limits of each term on the right-hand side of (9). The convergence of the sequences \((Y^{s}(n))_{n\in\mathbf {N}}\), for each \(s\in\mathcal {S}\), implies convergence of \(X(n)\). So we reduce the K-periodic case with any \(K\in\mathbf {N}\) to \(K=1\).
Deterministic sequences \((\mathcal {V}^{s}(n))_{n\in\mathbf {N}}\) are analysed in Lemma 3, Section 4.2. Stochastic sequences \((\mathcal {H}^{s}(n))_{n\in\mathbf {N}}\) are analysed in Lemma 9, Section 5.3. The proof of Lemma 9 is based on results about the limits of martingales, which are given in Section 5.1. In Section 2 we give necessary definitions and formulate our main assumptions. Two auxiliary lemmata are deferred to the Appendix.
2 Main notations and assumptions
In this section we give a number of necessary definitions and lemmata which we use in the proofs of our results. A detailed exposition of the definitions and facts of the theory of random processes can be found in, for example, [17].
Let \((\Omega, {\mathcal{F}}, {\mathbb {P}} )\) be a given probability space.
Assumption 1
Let \((\xi(n))_{n\in\mathbf {N}}\) be a sequence of independent and identically distributed random variables with zero mean and variance 1, \(\mathbf {E}\xi=0\), \(\mathbf {E}\xi^{2}=1\), and with a distribution function F.
The sequence of random variables \((\xi(n))_{n\in\mathbf {N}}\) satisfying Assumption 1 generates a filtration \(\{{\mathcal{F}}_{n}\}_{n \in\mathbf {N}}\), where
$$ \mathcal{F}_{n}:=\sigma \bigl\{\xi(1), \dots, \xi(n) \bigr\}. $$
We use the standard abbreviation ‘a.s.’ for the wordings ‘almost sure’ or ‘almost surely’ throughout the text.
A stochastic process \((M(n))_{n \in\mathbf {N}}\) is said to be an \(\mathcal{F}_{n}\) -martingale if \(M(n)\) is \({\mathcal{F}}_{n}\)-measurable, \({\mathbf {E}}|M(n)|<\infty\) and \(\mathbf {E} [M(n) |\mathcal {F}_{n-1} ]=M(n-1)\) for all \(n\in\mathbf {N}\) a.s.
A martingale \((M(n))_{n\in\mathbf {N}}\) is called square integrable, if \(\mathbf {E} M^{2}(n)<\infty\) for all \(n \in\mathbf {N}\).
Let \((\rho(n))_{n \in\mathbf {N}}\) be a sequence of independent random variables with \(\mathbf {E}\rho(n)=0\) and \(\mathbf {E}[\rho(n)]^{2}<\infty\), for all \(n \in\mathbf {N}\). Then the stochastic process \((M(n))_{n\in \mathbf {N}}\), where \(M(0)=0\) and \(M(n)=\sum_{i=0}^{n-1} \rho(i)\), is a square integrable martingale with the quadratic variation \(\langle M(n)\rangle\) defined by
$$ \bigl\langle M(n)\bigr\rangle =\sum_{i=0}^{n-1}\mathbf {E}\bigl[\rho(i)\bigr]^{2}. $$
In this situation the quadratic variation \(\langle M(n)\rangle\) is not random and \(\langle M(n)\rangle=\operatorname{Var} (M(n))\), for all \(n \in\mathbf {N}\).
Assumption 2
Let \((\sigma(n))_{n\in\mathbf {N}}\) be a bounded sequence of real numbers: for some \(H_{\sigma}>0\) and all \(n\in\mathbf {N}\),
$$ \bigl|\sigma(n)\bigr|\le H_{\sigma}. $$(11)
To avoid the trivial case we also assume that there are infinitely many \(i\in\mathbf {N}\) such that \(\sigma(i)\neq0\).
Remark 1
If Assumptions 1 and 2 hold, then \((M(n))_{n\in \mathbf {N}}\), where \(M(0)=0\) and \(M(n)=\sum_{i=0}^{n-1} \sigma(i)\xi (i)\), is a square integrable martingale with
$$ \bigl\langle M(n)\bigr\rangle =\sum_{i=0}^{n-1}\sigma^{2}(i). $$
Also, \(\langle M(n)\rangle\neq0\) for all sufficiently large \(n\in\mathbf {N}\).
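A quick Monte Carlo sketch of Remark 1 (the decaying σ below is an illustrative assumption of ours): the sample variance of \(M(n)\) should match \(\langle M(n)\rangle=\sum_{i=0}^{n-1}\sigma^{2}(i)\).

```python
# Monte Carlo sketch of Remark 1 (sigma is an illustrative bounded sequence):
# the sample variance of M(n) = sum_{i<n} sigma(i) xi(i) should match the
# quadratic variation <M(n)> = sum_{i<n} sigma(i)^2.
import numpy as np

rng = np.random.default_rng(2)
n, runs = 50, 20000
sigma = 1.0 / (np.arange(n) + 1.0)
M_n = (sigma * rng.standard_normal((runs, n))).sum(axis=1)
print(M_n.var())                        # empirical Var(M(n))
print((sigma ** 2).sum())               # predicted <M(n)>
```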
Assumption 3
Let \((a(n))_{n\in\mathbf {N}}\) be a periodic sequence of nonzero real numbers with a period \(K\in\mathbf {N}\), i.e. \(a(n+K)=a(n)\), \(a(n)\neq0\) for each \(n\in\mathbf {N}\).
Assumption 4
Let \((g(n))_{n\in\mathbf {N}}\) be a sequence of real numbers such that, for some \(\hat{g}\in\mathbb {R}\),
$$ \lim_{n\to\infty}g(n)=\hat{g}. $$(12)
Denote by \(\boldsymbol {l}_{2}\) the Banach space of sequences \(\sigma=(\sigma (n))_{n\in\mathbf {N}}\) of real numbers such that
$$ \|\sigma\|^{2}_{\boldsymbol {l}_{2}}:=\sum_{n=1}^{\infty}\sigma^{2}(n)< \infty. $$
Denote by \(\boldsymbol {L}_{2}= \boldsymbol {L}_{2}(\Omega, \mathcal {F}, \mathbf {P})\) a Banach space of random variables ς with \(\mathbf {E}|\varsigma|^{2}<\infty\).
Since the random variables \(\xi(n)\), \(n\in\mathbf {N}\), satisfying Assumption 1 are identically distributed, we sometimes omit the index n and write, for example, \(\mathbf {E}|\xi|\).
3 Presentation of the solution
Consider the perturbed stochastic linear difference equation
$$ X(m+1)=a(m)X(m)+g(m)+\sigma(m)\xi(m+1), \quad m\in\mathbf {N}_{0}, $$(13)
where sequences \((\xi(m))_{m\in\mathbf {N}}\), \((\sigma(m))_{m\in\mathbf {N}}\), \((a(m))_{m\in\mathbf {N}}\), and \((g(m))_{m\in\mathbf {N}}\) satisfy Assumptions 1, 2, 3, and 4, respectively.
Define
$$ J(0):=1, \qquad J(m):=\prod_{i=0}^{m-1}a(i), \quad m\in\mathbf {N}, $$(14)
and
$$ \mathcal {S}:=\{0, 1, \dots, K-1\}. $$(15)
Since \(a(m)\neq0\) for each \(m\in\mathbf {N}\), the function J maps \(\mathbf {N}\) into \(\mathbb {R}\setminus\{0\}\), so \([J(m)]^{-1}\) is well defined and \([J(m)]^{-1}\neq0\) for all \(m\in\mathbf {N}\). By the periodicity of \(a(i)\), we have, for all \(m\in\mathbf {N}\),
$$ \prod_{i=m}^{m+K-1}a(i)=\prod_{i=0}^{K-1}a(i). $$(16)
Denote
$$ L:=\prod_{i=0}^{K-1}a(i)=J(K). $$(17)
Lemma 1
Let Assumption 3 hold. Let J be defined as in (14) and \(\mathcal {S}\) be defined as in (15). Then
-
(i)
\(L=\prod_{i=0}^{K-1}a(i+l)=\prod_{i=l}^{K+l-1}a(i)\) for each \(l\in\mathbf {N}\).
-
(ii)
For each \(\tau\in\mathbf {N}\), \(s\in\mathcal {S}\), we have
$$ J(\tau K+s)=L^{\tau}J(s). $$ -
(iii)
If \(L=1\), then J is K-periodic.
-
(iv)
If \(L=-1\), then J is 2K-periodic.
-
(v)
If \(|L|<1\), then \(\lim_{m\to\infty}|J(m)|=0\).
-
(vi)
If \(|L|>1\), then \(\lim_{m\to\infty}|J(m)|=\infty\).
The proof of Lemma 1 is straightforward and we do not present it. Note that Lemma 1 gives a full description of the limiting behaviour of the solution of unperturbed equation (4).
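The identities of Lemma 1 are easy to check numerically; in the sketch below the 3-periodic coefficients are an assumed example with \(L=1\), so J is K-periodic as in item (iii).

```python
# Numerical sketch of Lemma 1 (assumed 3-periodic coefficients with L = 1):
# J(m) = prod_{i<m} a(i) and J(tau*K + s) = L^tau * J(s).
import numpy as np

def J(m, a):
    return float(np.prod([a[i % len(a)] for i in range(m)]))

a = [2.0, -0.25, -2.0]                  # K = 3, L = 2 * (-0.25) * (-2) = 1
K = len(a)
L = J(K, a)
for tau in range(4):
    for s in range(K):
        assert np.isclose(J(tau * K + s, a), L ** tau * J(s, a))
print("L =", L, "=> J is K-periodic, as in Lemma 1(iii)")
```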
Since equation (13) is linear, the solution \(X(n)\) can be presented in the following form (see e.g. [1], page 131):
$$ X(n)=J(n)X(0)+\sum_{i=0}^{n-1}\frac{J(n)}{J(i+1)} \bigl[g(i)+\sigma(i)\xi(i+1) \bigr]. $$(18)
Denoting, for \(m\in\mathbf {N}\),
we write (18) in the form
Remark 2
Notice that we have adopted the notation
3.1 Reduction to \(K=1\)
Let X be a solution to equation (13). Since each \(m\in \mathbf {N}\) is presented in the form
$$ m=nK+s, \quad n\in\mathbf {N}_{0}, s\in\mathcal {S}, $$
for each \(s\in\mathcal {S}\) we can introduce a new process
$$ Y^{s}(n):=X(nK+s), \quad n\in\mathbf {N}_{0}. $$(21)
From (21) we conclude that the convergence of the sequences \((Y^{s}(n))_{n\in\mathbf {N}}\), \(s\in\mathcal {S}\), determines the asymptotic behaviour of X. Equation (25), which we derive for \(Y^{s}\) in this section, also shows that the introduction of \(Y^{s}\) reduces the K-periodic case with any \(K\in\mathbf {N}\) to \(K=1\).
From (20) we get the following expression for \(Y^{s}\):
where, for each \(s\in\mathcal {S}\),
Note that the initial value \(Y^{s}(0)\) is random and \(\mathcal {F}_{s}\)-measurable.
Applying Lemma 1(i)-(ii), and (19) we get
where
Since
we arrive at
Denote
Remark 3
Note that the reason for considering \(H^{s}\) in (24) as a function of \(n+1\) is that the ξ with the maximum index is \(\xi((n+1)K+s)\).
Since for \(j\in\mathcal {S}\)
by substituting \(j=i+1-nK-s\) we obtain
and
Now equation (22) for \(Y^{s}\) can be written as
$$ Y^{s}(n+1)=LY^{s}(n)+G^{s}(n)+H^{s}(n+1), \quad n\in\mathbf {N}_{0}. $$(25)
From equation (25) we derive, for each \(n\in\mathbf {N}\),
$$ Y^{s}(n)=L^{n}Y^{s}(0)+\sum_{j=0}^{n-1}L^{n-1-j} \bigl[G^{s}(j)+H^{s}(j+1) \bigr]. $$(26)
Denoting
$$ \mathcal {V}^{s}(n):=\sum_{j=0}^{n-1}L^{n-1-j}G^{s}(j), \qquad \mathcal {H}^{s}(n+1):=\sum_{j=0}^{n-1}L^{n-1-j}H^{s}(j+1), $$(27)
we arrive at the following presentation of the solution \(Y^{s}(n)\) to equation (25):
$$ Y^{s}(n)=L^{n}Y^{s}(0)+\mathcal {V}^{s}(n)+\mathcal {H}^{s}(n+1). $$(28)
Presentation (28) shows that, in order to determine the limiting behaviour of \(Y^{s}\), it is enough to do the same for \(\mathcal {V}^{s}\) and \(\mathcal {H}^{s}\). In Lemma 3, Section 4.2, we analyse the asymptotic behaviour of \(\mathcal {V}^{s}\). Lemma 9, Section 5.3, deals with \(\mathcal {H}^{s}\).
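The reduction can also be seen directly in simulation; the sketch below (with assumed coefficients giving \(L=-1\), chosen by us for illustration) generates X from equation (13) and reads off the subsequences \(Y^{s}(n)=X(nK+s)\).

```python
# Sketch of the reduction of Section 3.1 (coefficients are assumptions chosen
# for illustration): simulate X from equation (13) with K = 2 and read off the
# processes Y^s(n) = X(nK + s).  Here L = a(0)*a(1) = -1, so each Y^s settles
# into an asymptotically 2-periodic pattern, cf. Lemma 10(iii).
import numpy as np

rng = np.random.default_rng(3)
a, K = [1.0, -1.0], 2
g = lambda m: 1.0 / (m + 1.0) ** 2      # sum |g(m)| < infinity, g_hat = 0
sigma = lambda m: 1.0 / (m + 1.0)       # sigma in l2

N = 400
X = np.empty(N * K + 1)
X[0] = 2.0
for m in range(X.size - 1):
    X[m + 1] = a[m % K] * X[m] + g(m) + sigma(m) * rng.standard_normal()

Y = {s: X[s::K] for s in range(K)}      # Y^s(n) = X(nK + s)
for s in range(K):
    print(s, Y[s][-4:])                 # alternating tails: 2-periodic limits
```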
4 Limiting behaviour of \(\mathcal {V}^{s}\)
Denote, for each \(s\in\mathcal {S}\),
$$ \mathcal {A}_{s}:=\sum_{j=1}^{K}\prod_{\tau=j}^{K-1}a(\tau+s), \qquad \overline{\mathcal {A}_{s}}:=\sum_{j=1}^{K}\prod_{\tau=j}^{K-1}\bigl|a(\tau+s)\bigr|, $$(29)
and
$$ C_{s}:=\max_{j\in\{1, \dots, K\}}\prod_{\tau=j}^{K-1}\bigl|a(\tau+s)\bigr|, \qquad c_{s}:=\min_{j\in\{1, \dots, K\}}\prod_{\tau=j}^{K-1}\bigl|a(\tau+s)\bigr|. $$(30)
Remark 4
Note that \(\mathcal {A}_{s}>0\) if \(a(i)>0\) for all \(i\in\mathcal {S}\), and that \(\overline{\mathcal {A}_{s}}>0\). Also, for each \(s\in\mathcal {S}\), \(|\mathcal {A}_{s}|\le\overline{\mathcal {A}_{s}}\).
In Example 1, Section 7, we consider coefficients \(a(i)\), \(i\in\{0, 1, 2\}\), such that \(\mathcal {A}_{2}=0\) while \(\mathcal {A}_{0}, \mathcal {A}_{1}\neq0\).
4.1 Limiting behaviour of \(G^{s}(n)\)
Lemma 2 below shows that the limiting behaviour of \(G^{s}\) is similar to that of g.
Lemma 2
Let Assumptions 3 and 4 hold. Let \(G^{s}\) be defined as in (23). Then, for each \(s\in\mathcal {S}\),
$$ \lim_{n\to\infty}G^{s}(n)=\hat{g}\mathcal {A}_{s}. $$
Proof
Let \(\overline{\mathcal {A}_{s}}\) be defined in (29). Fix some \(\varepsilon>0\) and let \(l_{\varepsilon}\in\mathbf {N}\) be such that, for \(l\ge l_{\varepsilon}\), \(|g(l)-\hat{g}|<\varepsilon\).
Then, for \(n\ge\frac{l_{\varepsilon}+1}{K}\) and for all \(s, j\in\mathcal {S}\), we have
which implies that
□
4.2 Limiting behaviour of \(\mathcal {V}^{s}\)
The next lemma describes some important cases of the limiting behaviour of \(\mathcal {V}^{s}\).
Lemma 3
-
(i)
Let \(L=1\).
-
(a)
If \(\hat{g}\neq0\), \(\mathcal {A}_{s}\neq0\), then \(|\mathcal {V}^{s}(n)| \to\infty\).
-
(b)
If either \(\mathcal {A}_{s}=0\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\) or \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\), then there exists a number \(\overline{\mathcal {V}^{s}}\in\mathbb {R}\) such that
$$\lim_{n\to\infty}\mathcal {V}^{s}(n)=\overline{\mathcal {V}^{s}}. $$ -
(c)
If \(g(n)\equiv0\), then \(\mathcal {V}^{s}(n)\equiv0\).
-
(ii)
Let \(L=-1\) and \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Then there exist a number \(\overline{\mathcal {V}^{s}}\in\mathbb {R}\) and a 2-periodic function \(\mathcal {V}_{1}^{s}(n)\) such that
$$\lim_{n\to\infty} \bigl\vert \mathcal {V}^{s}(n)- \overline{\mathcal {V}^{s}}-\mathcal {V}_{1}^{s}(n) \bigr\vert =0. $$ -
(iii)
Let \(|L|<1\). Then
$$\lim_{n\to\infty}\mathcal {V}^{s}(n) = \frac{ \hat{g}\mathcal {A}_{s}}{1-L}. $$ -
(iv)
If \(|L|>1\) and \(\sum_{j=0}^{\infty}L^{-j}G^{s}(j) \neq0\) then \(|\mathcal {V}^{s}(n)| \to\infty\).
Proof
(i) In case (a), \(\lim_{n\to\infty}G^{s}(n)=\hat{g}\mathcal {A}_{s}\neq0\), so the series which defines \(\mathcal {V}^{s}(n)\) diverges. When \(\hat{g}\mathcal {A}_{s}>0\), we can find N such that \(G^{s}(n)>0\) for \(n\ge N\). So, for \(n\ge N\),
and \(\lim_{n\to\infty}\sum_{j=N+1}^{n}G^{s}(j)=\infty\). Similarly, \(\lim_{n\to\infty}\sum_{j=N+1}^{n}G^{s}(j)=-\infty\) when \(\hat{g}\mathcal {A}_{s}<0\).
In case (b), when \(\mathcal {A}_{s}=0\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\), we have
and
so \(\mathcal {V}^{s}(n)\) converges absolutely to the number
$$ \overline{\mathcal {V}^{s}}:=\sum_{j=0}^{\infty}G^{s}(j). $$(33)
When \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\), we replace ĝ by 0 in (31)-(33) and obtain the result.
Case (c) is straightforward.
(ii) We have
The term \(\mathcal {V}_{0}^{s}(n)\) converges absolutely to the number \(\overline{\mathcal {V}^{s}}\) defined by (33). Noting that
we conclude that \(\mathcal {V}_{1}^{s}(n)\) is a 2-periodic nonrandom function.
(iii) The result follows from Lemma 2 and Lemma 11 (see Appendix).
(iv) The result follows from the representation
□
Remark 5
Note that if \(g(n)\equiv\hat{g}\), where ĝ is any real number, and \(\mathcal {A}_{s}=0\), then \(\overline{\mathcal {V}^{s}}=0\).
5 On limits of random series
In Section 5.1 we present several auxiliary statements about the limits of the martingales. In Section 5.2 we introduce a new sequence of σ-algebras and discuss properties of random variables \(H^{s}(n)\).
Lemma 9 in Section 5.3 describes the asymptotic behaviour of \(\mathcal {H}^{s}(n)\), as \(n\to\infty\).
5.1 Limits of martingales
In this section we deal with limits at infinity of martingales \((M(n))_{n\in\mathbf {N}}\) of the following form:
$$ M(0)=0, \qquad M(n)=\sum_{i=0}^{n-1}\beta(i)\eta(i+1), \quad n\in\mathbf {N}. $$(36)
Here \(\beta(i)\) and \(\eta(i)\) satisfy the following assumptions.
Assumption 5
Let \((\eta(n))_{n\in\mathbf {N}}\) be a sequence of independent random variables with zero mean, \(\mathbf {E}\eta(n)=0\), and with distribution functions \(F_{n}\). Let also \(\mathbf {E}|\eta(n)|^{2}\le H_{\eta}\) for some constant \(H_{\eta}>0\) and all \(n\in\mathbf {N}\).
Assumption 6
Let Assumption 5 hold. Let there also exist constants \(h_{\eta}, \bar{H}_{\eta}>0\) such that, for all \(n\in\mathbf {N}\),
$$ \mathbf {E}\bigl|\eta(n)\bigr|^{2}\ge h_{\eta} \quad\text{and}\quad \mathbf {E}\bigl|\eta(n)\bigr|^{3}\le\bar{H}_{\eta}. $$
Assumption 7
Let \(\beta=(\beta(n))_{n\in\mathbf {N}}\) be a bounded sequence of real numbers: \(|\beta(n)|\le H_{\beta}\) for some \(H_{\beta}>0\) and all \(n\in\mathbf {N}\).
Lemma 4 below is a variant of the martingale convergence theorem (see e.g. [17]).
Lemma 4
Let Assumption 5 hold. Let \(M=(M(n))_{n\in\mathbf {N}}\) be a martingale defined by (36). Let \(\beta\in\boldsymbol {l}_{2}\). Then \(\lim_{n\to\infty}M(n)=\bar{M}\), where M̄ is an a.s. finite random variable.
Remark 6
Assumptions of Lemma 4 imply that M is a Cauchy sequence in \(\boldsymbol {L}_{2}(\Omega, \mathcal {F}, \mathbf {P})\), so
$$ \lim_{n\to\infty}\mathbf {E}\bigl|M(n)-\bar{M}\bigr|^{2}=0. $$
Also, \(\mathbf {E}\bar{M}=0\) and \(\mathbf {E}[\bar{M}]^{2}=\sum_{i=0}^{\infty} \beta^{2}(i)\mathbf {E} |\eta(i+1)|^{2}\le H_{\eta}\|\beta\|^{2}_{\boldsymbol {l}_{2}}\).
Lemma 4 provides conditions under which \(M(n)\) has an a.s. finite limit. In the proofs of our results in Section 5.3 we also need Lemma 6 about \(\limsup_{n\to\infty} \frac{M(n)}{\sqrt{\langle M(n)\rangle}}\) and \(\liminf_{n\to\infty} \frac{M(n)}{\sqrt{\langle M(n)\rangle}}\). To prove Lemma 6 we apply a variant of the central limit theorem which is based on Theorem 1 from [17], page 329, for the sum of independent but not identically distributed random variables. To apply Theorem 1 to the martingale M we need to show that the Lindeberg condition is satisfied. In order to do this we prove that the Lyapunov condition with \(\delta=1\) holds, which implies the Lindeberg condition for M (see Lemma 5, Corollary 1 and Corollary 2 below). For more details as regards the Lyapunov and Lindeberg conditions see [17], page 332.
Lemma 5
Let Assumption 7 hold and \(\beta\notin\boldsymbol {l}_{2}\). Then
$$ \lim_{n\to\infty}\frac{\sum_{i=0}^{n}|\beta(i)|^{3}}{ (\sum_{i=0}^{n}\beta^{2}(i) )^{3/2}}=0. $$
Proof
The proof follows from the estimates
$$ \frac{\sum_{i=0}^{n}|\beta(i)|^{3}}{ (\sum_{i=0}^{n}\beta^{2}(i) )^{3/2}} \le\frac{H_{\beta}\sum_{i=0}^{n}\beta^{2}(i)}{ (\sum_{i=0}^{n}\beta^{2}(i) )^{3/2}} =\frac{H_{\beta}}{ (\sum_{i=0}^{n}\beta^{2}(i) )^{1/2}}\to0, \quad\text{as } n\to\infty, $$
since \(\beta\notin\boldsymbol {l}_{2}\).
□
Corollary 1
Let Assumptions 5, 6 and 7 hold. Let \(\beta\notin\boldsymbol {l}_{2}\). Then the Lyapunov condition with \(\delta =1\) holds:
$$ \lim_{n\to\infty}\frac{1}{D_{n}^{3}}\sum_{k=0}^{n}\mathbf {E}\bigl|\beta(k)\eta(k+1)\bigr|^{3}=0, \quad\text{where } D_{n}^{2}=\sum_{i=0}^{n}\bigl|\beta(i)\bigr|^{2}\mathbf {E}\bigl|\eta(i+1)\bigr|^{2}. $$
Proof
The result follows from Lemma 5 and the estimate
$$ \frac{1}{D_{n}^{3}}\sum_{k=0}^{n}\mathbf {E}\bigl|\beta(k)\eta(k+1)\bigr|^{3} \le\frac{\bar{H}_{\eta}}{h_{\eta}^{3/2}}\cdot \frac{\sum_{k=0}^{n}|\beta(k)|^{3}}{ (\sum_{k=0}^{n}\beta^{2}(k) )^{3/2}}. $$
□
Corollary 2
Let Assumptions 5, 6 and 7 hold. Let \(\beta\notin\boldsymbol {l}_{2}\). Then the Lindeberg condition holds: for every \(\varepsilon>0\),
$$ \lim_{n\to\infty}\frac{1}{D_{n}^{2}}\sum_{k=0}^{n} \int_{\{x: |x|\ge\varepsilon D_{n}\}}x^{2}\,d\tilde{F}_{k}(x)=0, $$
where \(\tilde{F}_{k}\) are the distributions of \(\beta(k)\eta(k+1)\), \(D_{n}^{2}=\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta(i+1)|^{2}\).
Proof
By Corollary 1, the Lyapunov condition with \(\delta =1\) holds, which, by [17], page 332, implies the Lindeberg condition. □
Corollary 3
Let Assumptions 5, 6 and 7 hold. Let \(\beta\notin\boldsymbol {l}_{2}\). Let Φ be the standard normal cumulative distribution function. Then the central limit theorem holds:
$$ \lim_{n\to\infty}\mathbf {P} \biggl[\frac{M(n)}{\sqrt{\langle M(n)\rangle}}\le x \biggr]=\Phi(x) \quad\text{for every } x\in\mathbb {R}. $$
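A Monte Carlo sketch of Corollary 3 (the choice \(\beta\equiv1\), which is not square summable, and uniform η are illustrative assumptions of ours):

```python
# Monte Carlo sketch of Corollary 3 with assumed ingredients: beta(i) = 1
# (not square summable) and eta(i) uniform on [-sqrt(3), sqrt(3)], so that
# E eta = 0 and E eta^2 = 1.  Then <M(n)> = n and M(n)/sqrt(n) should be
# approximately standard normal for large n.
import numpy as np

rng = np.random.default_rng(4)
n, runs = 400, 10000
eta = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(runs, n))
Z = eta.sum(axis=1) / np.sqrt(n)        # M(n)/sqrt(<M(n)>) with beta = 1
print(np.mean(Z <= 1.0))                # compare with Phi(1) ~ 0.8413
```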
The proof of the following result is an adaptation of the argument presented on pages 379-382 in [17] (see also [9]) and is deferred to the Appendix.
Lemma 6
Let Assumptions 5, 6 and 7 hold. Let \(\beta\notin\boldsymbol {l}_{2}\). Then, a.s.,
$$ \limsup_{n\to\infty}\frac{M(n)}{\sqrt{\langle M(n)\rangle}}=\infty \quad\textit{and}\quad \liminf_{n\to\infty}\frac{M(n)}{\sqrt{\langle M(n)\rangle}}=-\infty. $$(40)
5.2 Properties of \(H^{s}(n)\)
By (24) we have
Before discussing properties of the random variables \(H^{s}(n)\) we need to introduce a new sequence of σ-algebras: for all \(n\in\mathbf {N}\), we define
$$ \mathcal {G}^{s}_{n}:=\sigma \bigl\{\xi(j): j\le nK+s \bigr\}. $$(42)
Lemma 7
Let Assumptions 1, 2 and 3 hold. Let \(H_{\sigma}\) be defined by (11), \(H^{s}(n)\) be defined by (41), \(C_{s}\) be defined by (30).
Then, for any \(s\in\mathcal {S}\),
-
(i)
\(\mathbf {E}(H^{s}(n))=0\) for each \(n\in\mathbf {N}\);
-
(ii)
\(\mathbf {E}(H^{s}(n))^{2}< H^{2}_{\sigma} (K-1)C_{s}^{2}\) for each \(n\in\mathbf {N}\);
-
(iii)
\(H^{s}(n)\) and \(H^{s}(k)\) are independent for each \(n, k\in \mathbf {N}\), \(n\neq k\);
-
(iv)
the family \(\{\mathcal {G}^{s}_{n}\}_{n \in\mathbf {N}}\) is a filtration;
-
(v)
\(H^{s}(n)\) is \(\mathcal {G}^{s}_{n}\) measurable for each \(n\in \mathbf {N}\);
-
(vi)
\(H^{s}(n)\to0\) a.s, if \(\sigma(n)\xi(n+1)\to0\) a.s., as \(n\to\infty\).
Proof
Proof of (i) is straightforward. To prove (ii) we apply the inequality
To prove (iii) we note that, for any \(k>n\), \(k, n\in\mathbf {N}\), the random variable \(H^{s}(n)\) defined as in (41) is a weighted sum of random variables from the set
and the random variable \(H^{s}(k)\) is a weighted sum of random variables from the set
Since \(k\ge n+1\), the minimum index of ξ in set \(\mathcal {T}_{k}\) is greater than the maximum index of ξ in set \(\mathcal {T}_{n}\),
So \(\mathcal {T}_{n}\cap\mathcal {T}_{k}=\emptyset\), which, due to independence of \(\xi_{i}\), implies independence of \(H^{s}(n)\) and \(H^{s}(k)\).
To prove (iv) we notice that, for each \(n_{1}\le n_{2}\), we have \(\mathcal {G}^{s}_{n_{1}}\subseteq\mathcal {G}^{s}_{n_{2}}\).
Item (v) follows from the proof of (iii) and the definition (42) of \(\mathcal {G}^{s}_{n}\).
To prove (vi) we apply the estimate
□
5.3 On limits of \(\mathcal {H}^{s}(n)\)
Fix \(s\in\mathcal {S}\) and \(L\in\mathbb {R}\). Let \(H^{s}(j)\) be defined by (24). Denote
$$ M^{s}(0):=0, \qquad M^{s}(n):=\sum_{j=0}^{n-1}L^{-j}H^{s}(j+1), \quad n\in\mathbf {N}. $$(44)
The next lemma describes the properties of \(M^{s}(n)\) for \(|L|=1\) and \(|L|>1\). It is the main tool for proving Lemma 9 about the asymptotic behaviour of \(\mathcal {H}^{s}(n)\).
Lemma 8
Let Assumptions 1, 2 and 3 hold. Let \((\mathcal {G}^{s}_{n})_{n \in\mathbf {N}}\) be defined as in (42) and let \(M^{s}(n)\) be defined as in (44). Then
-
(i)
\(M^{s}:= (M^{s}(n))_{n \in\mathbf {N}}\) is a \(\mathcal {G}^{s}_{n}\)-martingale.
-
(ii)
Let \(|L|=1\).
-
(a)
If \(\sigma\in \boldsymbol{ l_{2}}\), then, for some a.s. finite random variable \(\bar{M}^{s}\),
$$ \lim_{n\to\infty}M^{s}(n)=\bar{M}^{s}, \quad\textit{a.s.} $$(45) -
(b)
If \(\sigma\notin \boldsymbol{l_{2}}\) and \(\mathbf {E} |\xi |^{3}<\infty\), then, a.s.,
$$\limsup_{n\to\infty}M^{s}(n)=\infty\quad\textit{and} \quad \liminf_{n\to \infty}M^{s}(n)=-\infty. $$
-
(a)
-
(iii)
Let \(|L|>1\), then (45) holds.
Proof
From Lemma 7 we conclude that \((\mathcal {G}^{s}_{n})_{n \in \mathbf {N}}\) is a filtration, the random variable \(M^{s}(n)\) is \(\mathcal {G}^{s}_{n}\)-measurable, \(\mathbf {E}H^{s}(j)=0\), and \(\mathbf {E} \vert H^{s}(j) \vert \le H_{\sigma}(K-1)C_{s}\), for each \(j\in\mathbf {N}\). This implies that, for each \(n\in\mathbf {N}\),
$$ \mathbf {E}\bigl|M^{s}(n)\bigr|< \infty \quad\text{and}\quad \mathbf {E}\bigl[M^{s}(n+1) |\mathcal {G}^{s}_{n}\bigr]=M^{s}(n)+L^{-n}\mathbf {E}H^{s}(n+1)=M^{s}(n), $$
which proves (i).
(ii) Applying (43) we get
For \(|L|=1\), we have
and then, for \(C_{s}\) and \(c_{s}\) defined as in (30), we obtain
Now part (a) follows from Lemma 4.
To prove part (b) we present \(M^{s}(n)\) in the following form:
By the substitution
we transform (48) into
Denoting
we arrive at
Since \(\sigma\notin \boldsymbol{l_{2}}\) and \(|L|=1\), we have
as \(l\to\infty\). Then, for each \(s\in\mathcal {S}\),
as \(n\to\infty\).
In addition, \(\mathbf {E} |\xi|^{2}=1\), \(\mathbf {E} |\xi|^{3}<\infty\), so after application of Lemma 6 we obtain, a.s.,
The limits in (53) imply that, for each \(s\in\mathcal {S}\),
Then, applying (50), (51) and (52), we conclude that, a.s.,
$$ \limsup_{n\to\infty}M^{s}(n)=\infty \quad\text{and}\quad \liminf_{n\to\infty}M^{s}(n)=-\infty. $$
(iii) For \(|L|>1\) we obtain from (46) that, for all \(n\in\mathbf {N}\),
which along with Lemma 4 implies (45). □
Lemma 9
Let Assumptions 1, 2, 3, and 4 hold. Let \(\mathcal {H}^{s}(n)\) be defined as in (27) and \(\bar{M}^{s}\) be defined as in (45).
-
(i)
Let \(|L|=1\).
-
(a)
If \(\sigma\in\boldsymbol{l_{2}}\), then \(\lim_{n\to\infty}\mathcal {H}^{s}(n)=\bar{M}^{s}\) a.s.
-
(b)
If \(\sigma\notin \boldsymbol{ l_{2}}\) and \(\mathbf {E} |\xi |^{3}<\infty\), then
$$\limsup_{n\to\infty}\mathcal {H}^{s}(n)=\infty\quad\textit{and} \quad \liminf_{n\to\infty}\mathcal {H}^{s}(n)=-\infty. $$
-
(a)
-
(ii)
Let \(|L|<1\) and
$$ \lim_{n\to\infty}\sigma(n)\xi(n+1)=0, \quad \textit{a.s.}, $$(54)
then \(\lim_{n\to\infty}\mathcal {H}^{s}(n)=0\) a.s.
-
(iii)
Let \(|L|>1\). Then \(\bar{M}^{s}\) is a.s. finite and \(\limsup_{n\to\infty}|\mathcal {H}^{s}(n)|=\infty\) a.s. on the set \(\{\omega: \bar{M}^{s}(\omega)\neq0\}\).
Proof
Parts (i)(a)-(b) follow from Lemma 8(ii)(a)-(b).
To prove (ii) we first apply Lemma 7(vi) and then apply Lemma 11 (see the Appendix) pathwise.
When \(|L|>1\), Lemma 8(iii) implies that \(M^{s}(n)\to\bar{M}^{s}\), where \(\bar{M}^{s}\) is a.s. finite. Since \(|L|^{n}\to\infty\), part (iii) follows on the set \(\{\omega: \bar{M}^{s}(\omega)\neq0\}\). □
Remark 7
Note that condition (54) holds when the \(\xi(n)\) are normally distributed random variables and \(\sigma(n)\) decays as \([\log n]^{-1/2-\varepsilon}\), \(\varepsilon>0\), or more quickly as \(n\to\infty\).
When tails of ξ decay polynomially, i.e. \([1-F(n)]n^{M}\to \mbox{constant}\) as \(n\to\infty\), where F is the distribution function of the ξ and \(M\ge2\), then (54) holds if and only if \(\sum_{i=1}^{\infty} [\sigma(i) ]^{M}< \infty\).
Note also that assumption \(\sigma\in\boldsymbol {l}_{2}\) implies a.s. convergence of \(\sum_{i=1}^{\infty}\sigma(i)\xi(i+1)\), and, therefore, condition (54).
A detailed analysis of condition (54) can be found in [10] (see also [9, 16]).
6 Almost sure asymptotic periodicity of \(X(n)\)
In Section 6.1 we deal with the a.s. convergence of the solution \(Y^{s}(n)\), and then, in Section 6.2, with the a.s. asymptotic periodicity of the solution \(X(m)\) of the original equation (13).
In Section 6.2 we also discuss the possibility of a partial a.s. periodicity; see Remark 9.
6.1 On limits of \(Y^{s}(n)\)
In this section we prove Lemma 10 about the asymptotic behaviour of \(Y^{s}(n)\), applying Lemmata 3 and 9 and equation (20). Proposition 1, which is a corollary of Lemma 10, contains several sharp results about convergence of \(Y^{s}(n)\).
Lemma 10
Let Assumptions 1, 2, 3, and 4 hold. Let ĝ be defined as in (12) and \(\mathcal {S}\) be defined as in (15).
Let \(s\in\mathcal {S}\). Let \(\mathcal {A}_{s}\) be defined as in (29) and \(Y^{s}\) be defined as in (28).
-
(i)
Let \(L=1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\). Then there exists an a.s. finite random variable \(\mathcal {Q}^{s}\) such that
$$ \lim_{n\to\infty}\big|Y^{s}(n)-\mathcal {Q}^{s}\big|=0, \quad \textit{a.s.} $$(55) -
(ii)
Let \(L=1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\mathcal {A}_{s}=0\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Then there exists an a.s. finite random variable \(\mathcal {Q}^{s}\) such that (55) holds.
-
(iii)
Let \(L=-1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Then there exist an a.s. finite random variable \(\mathcal {Q}^{s}\) and a 2-periodic nonrandom function \(\hat{V}^{s}(n)\) such that
$$ \lim_{n\to\infty}\big| Y^{s}(n)-\mathcal {Q}^{s} -\hat{V}^{s}(n)\big|=0, \quad \textit{a.s.} $$(56) -
(iv)
Let \(|L|<1\) and let condition (54) hold. Then
$$\lim_{n\to\infty}Y^{s}(n)=\frac{\hat{g} \mathcal {A}_{s}}{1-L}. $$
Proof
In cases (i)-(ii) we have \(Y^{s}(n)=Y^{s}(0)+\mathcal {V}^{s}(n)+\mathcal {H}^{s}(n+1)\), Lemma 3(i)(b) and Lemma 9(i)(a) hold, and then, a.s.,
$$ \lim_{n\to\infty}\mathcal {V}^{s}(n)=\overline{\mathcal {V}^{s}} \quad\text{and}\quad \lim_{n\to\infty}\mathcal {H}^{s}(n+1)=\bar{M}^{s}. $$(57)
The result holds for \(\mathcal {Q}^{s}=Y^{s}(0)+\overline{\mathcal {V}^{s}}+\bar{M}^{s}\), where \(\overline{\mathcal {V}^{s}}\) is defined by (33), with \(\hat{g}=0\) in case (i). Note that \(\overline{\mathcal {V}^{s}}=0\) if \(g(n)\equiv0\).
In case (iii) we have \(Y^{s}(n)=(-1)^{n}Y^{s}(0)+\mathcal {V}^{s}(n)+\mathcal {H}^{s}(n+1)\), and Lemma 3(ii) and Lemma 9(i)(a) hold. So, in addition to (57), we have
$$ \lim_{n\to\infty}\bigl\vert \mathcal {V}^{s}(n)-\overline{\mathcal {V}^{s}}-\mathcal {V}_{1}^{s}(n)\bigr\vert =0, $$(58)
where \(\overline{\mathcal {V}^{s}}\) is the number defined by (33) and \(\mathcal {V}_{1}^{s}(n)\) is a 2-periodic nonrandom function. Then the result holds for \(\mathcal {Q}^{s}=\overline{\mathcal {V}^{s}}+\bar{M}^{s}\) and \(\hat{V}^{s}(n)= (-1)^{n}Y^{s}(0)+\mathcal {V}_{1}^{s}(n)\).
In case (iv) we have \(Y^{s}(n)= L^{n}Y^{s}(0)+\mathcal {V}^{s}(n)+\mathcal {H}^{s}(n+1)\), and Lemma 3(iii) and Lemma 9(ii) hold, giving, a.s.,
$$ \lim_{n\to\infty}L^{n}Y^{s}(0)=0, \qquad \lim_{n\to\infty}\mathcal {V}^{s}(n)=\frac{\hat{g}\mathcal {A}_{s}}{1-L}, \qquad \lim_{n\to\infty}\mathcal {H}^{s}(n+1)=0, $$
which implies the result. □
Remark 8
Recalling that \(\operatorname{Var} (\bar{M}^{s})\neq0\), we can conclude that under assumptions of Lemma 10,
-
(a)
\(Y^{s}(n)\) converges either to an a.s. finite random variable in (i)-(ii) and (iv), or to a 2-periodic function in (iii);
-
(b)
The limit in (iv) is nonrandom, while the limits in (i)-(iii) are random.
-
(c)
In all cases the limit of \(Y^{s}(n)\) may depend on \(s\in \mathcal {S}\); see Example 1, Section 7.
-
(d)
The only case when the limit of \(Y^{s}(n)\) can be zero is given in (iv), when either \(\hat{g}=0\) or \(\mathcal {A}_{s}\equiv0\).
In the next proposition, which is a corollary of Lemmata 10 and 9, we highlight the cases when condition \(\sigma\in \boldsymbol {l}_{2}\) is necessary and sufficient for the convergence of \(Y^{s}(n)\) to an a.s. finite random variable (or to 2-periodic nonrandom function).
Proposition 1
Let Assumptions 1, 2, 3, and 4 hold and let \(\mathbf {E}|\xi|^{3}<\infty\). Let ĝ be defined as in (12), \(\mathcal {S}\) be defined as in (15), L be defined as in (17), \(\mathcal {A}_{s}\) be defined as in (29).
-
(i)
Let \(L=1\), \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\). Then, for each \(s\in\mathcal {S}\), there exists an a.s. finite random variable \(\mathcal {Q}^{s}\) such that
$$ \lim_{n\to\infty}\big|Y^{s}(n)-\mathcal {Q}^{s}\big|=0, \quad \textit{a.s.},\quad \textit{if and only if}\quad \sigma\in\boldsymbol {l}_{2}. $$(59) -
(ii)
Let \(L=1\), \(\hat{g}\neq0\) and \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Let \(\mathcal {A}_{s}=0\) for some \(s\in\mathcal {S}\). Then there exists an a.s. finite random variable \(\mathcal {Q}^{s}\) such that (59) holds.
-
(iii)
Let \(L=-1\). Then, for each \(s\in\mathcal {S}\), there exist an a.s. finite random variable \(\mathcal {Q}^{s}\) and 2-periodic nonrandom function \(\hat{V}^{s}(n)\) such that
$$\lim_{n\to\infty}\big| Y^{s}(n)-\mathcal {Q}^{s} -\hat{V}^{s}(n)\big|=0, \quad\textit{a.s.},\quad \textit{if and only if}\quad \sigma\in\boldsymbol {l}_{2}. $$
Proof
Lemma 10(i)-(ii) implies the sufficiency for parts (i)-(ii), respectively. To prove the necessity, assume that \(\sigma \notin\boldsymbol {l}_{2}\). By Lemma 9(i)(b), \(\limsup_{n\to\infty}|\mathcal {H}^{s}(n)|=\infty\), a.s. The first term, \(L^{n}Y^{s}(0)=Y^{s}(0)\), on the right-hand side of (28) is a.s. bounded, and, by Lemma 10(i)-(ii), the second term \(\mathcal {V}^{s}(n)\) is nonrandom and converges. This implies that \(\limsup_{n\to\infty}|Y^{s}(n)| = \infty\), a.s.
Lemma 10(iii) implies the sufficiency for part (iii). To prove the necessity, we reason as in the proof of parts (i)-(ii). By Lemma 9(i)(b), \(\limsup_{n\to\infty}|\mathcal {H}^{s}(n)|=\infty\), a.s., if \(\sigma\notin\boldsymbol {l}_{2}\). Since the first term \((-1)^{n}Y^{s}(0)\) on the right-hand side of (28) is a.s. bounded, and, by Lemma 10(iii), the second term \(\mathcal {V}^{s}(n)\) is nonrandom and bounded, we have \(\limsup_{n\to\infty}|Y^{s}(n)| = \infty\), a.s. □
6.2 Almost sure asymptotic periodicity of \(X(n)\)
In this section we return to the solution X of the original problem (13). Armed with Lemma 10 we formulate the main result of the paper, Theorem 1, which establishes conditions of a.s. asymptotic periodicity of \(X(n)\).
Define the set
$$ \mathcal {E}:=\{0, 1, \dots, 2K-1\}. $$(60)
Theorem 1
Let Assumptions 1, 2, 3, and 4 hold. Let \(\mathcal {S}\) be defined as in (15), ĝ be defined as in (12), \(\mathcal {A}_{s}\) be defined as in (29), \(\mathcal {E}\) be defined as in (60).
If X is a solution to equation (13), then:
-
(i)
There exists an a.s. finite random function \(\mathcal {R}(s)\), defined on \(\mathcal {S}\), such that
$$ \lim_{n\to\infty}\big|X(nK+s)-\mathcal {R}(s)\big|=0, \quad \textit{a.s. for each }s\in\mathcal {S}, $$(61)
if one of the following conditions holds:
-
(a)
\(L=1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\).
-
(b)
\(L=1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\), and \(\mathcal {A}_{s}=0\) for each \(s\in\mathcal {S}\).
-
(c)
\(|L|<1\) and condition (54) holds.
-
(ii)
There exists an a.s. finite random function \(\mathcal {R}(e)\), defined on \(\mathcal {E}\), such that
$$ \lim_{n\to\infty}\big|X(2nK+e)-\mathcal {R}(e)\big|=0, \quad \textit{a.s. for each }e\in\mathcal {E}, $$(62)
if \(L=-1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\).
Proof
Since \(X(nK+s)=Y^{s}(n)\), the results for (i)(a)-(i)(b) follow from Lemma 10(i)-(ii), respectively, with \(\mathcal {R}(s):=\mathcal {Q}^{s}\).
The result for (i)(c) follows from Lemma 10(iv), with
$$ \mathcal {R}(s):=\frac{\hat{g}\mathcal {A}_{s}}{1-L}. $$
Now we prove part (ii). Let \(\mathcal {Q}^{s}\) and \(\hat{V}^{s}(n)\) be, respectively, the a.s. finite random variable and the 2-periodic nonrandom function from Lemma 10(iii) (see also Lemma 3(ii)). Define a random function \(\mathcal {R}(e)\) on \(\mathcal {E}\) by the following:
$$ \mathcal {R}(e):= \textstyle\begin{cases} \mathcal {Q}^{e}+\hat{V}^{e}(0), & e\in\mathcal {S}, \\ \mathcal {Q}^{e-K}+\hat{V}^{e-K}(1), & e-K\in\mathcal {S}. \end{cases} $$(63)
Recall that \(X(2kK+e)=Y^{e}(2k)\) for \(e\in\mathcal {S}\) and \(X(2kK+e)=X((2k+1)K+e-K)= Y^{e-K}(2k+1)\) for \(e-K\in\mathcal {S}\). So the result follows from equation (56) in Lemma 10(iii). □
Remark 9
Note that only in case (i)(c) of Theorem 1 does the solution X tend to a nonrandom periodic function \(\hat{V}(n)\), which is identically zero if either \(\hat{g}=0\) or \(\mathcal {A}_{s}=0\) for all \(s\in\mathcal {S}\). In all other cases X tends to a periodic stochastic process with nonzero variance.
It can be proved that, when \(K=3\), the case \(\mathcal {A}_{s}\equiv A\neq 0\), \(s=0, 1, 2\), is possible only when \(a(i)=a\neq0\), \(i=0, 1, 2\). However, this case cannot be considered as genuinely 3-periodic. So, for \(|L|<1\), \(K=3\), X cannot converge to a constant nonzero limit.
Now we formulate the sharp statement about the asymptotic behaviour of X.
Proposition 2
Let Assumptions 1, 2, 3, and 4 hold and let \(\mathbf {E}|\xi|^{3}<\infty\). Let \(\mathcal {S}\) be defined as in (15), ĝ be defined as in (12), \(\mathcal {A}_{s}\) be defined as in (29), \(\mathcal {E}\) be defined as in (60). Let X be a solution to equation (13).
-
(i)
Let one of the following conditions hold:
-
(a)
\(L=1\), \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\).
-
(b)
\(L=1\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\), and \(\mathcal {A}_{s}=0\) for each \(s\in\mathcal {S}\).
Then there exists an a.s. finite random function \(\mathcal {R}(s)\), defined on \(\mathcal {S}\), such that
$$\lim_{n\to\infty}\big|X(nK+s)-\mathcal {R}(s)\big|=0, \quad \textit{for each }s\in \mathcal {S}, \textit{a.s.}, $$if and only if \(\sigma\in\boldsymbol {l}_{2}\).
-
(ii)
Let \(L=-1\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Then there exists an a.s. finite random function \(\mathcal {R}(e)\), defined on \(\mathcal {E}\), such that
$$ \lim_{n\to\infty}\big|X(2nK+e)-\mathcal {R}(e)\big|=0, \quad\textit{for each }e\in\mathcal {E}, \textit{ a.s.}, $$if and only if \(\sigma\in\boldsymbol {l}_{2}\).
Proof
Since \(X(nK+s)=Y^{s}(n)\), the results for parts (i)(a)-(i)(b) follow from Proposition 1(i)-(ii), and the results for part (ii) follows from Proposition 1(iii). □
7 Examples and simulations
7.1 Calculations of \(\mathcal {A}_{s} \)
In Example 1 we present coefficients \(a(i)\) such that either \(\mathcal {A}_{s_{1}}=0\) while \(\mathcal {A}_{s_{2}}\neq0\) for some \(s_{1}, s_{2}\in\mathcal {S}\), or \(\mathcal {A}_{s}=0\) for all \(s\in\mathcal {S}\). However, the first case can happen only if \(L\neq1\), as will be shown in Example 2.
Example 1
Let \(\mathcal {A}_{s}\) be defined as in (29). We consider \(K=2\) and \(K=3\).
-
(i)
\(K=2\), \(\mathcal {A}_{s}=\sum_{j=1}^{2}\prod_{\tau=j}^{1}a(\tau +s)\), so
$$ \begin{gathered}\mathcal {A}_{0}=\sum _{j=1}^{2}\prod_{\tau=j}^{1}a( \tau)= a(1)+1, \\ \mathcal {A}_{1}=\sum_{j=1}^{2} \prod_{\tau=j}^{1}a(\tau+1)=a(2)+1=a(0)+1. \end{gathered} $$
For \(a(0)=-1\), \(a(1)=1\), we have \(\mathcal {A}_{0}=2\), \(\mathcal {A}_{1}=0\).
-
(ii)
\(K=3\) and \(\mathcal {A}_{s}=\sum_{j=1}^{3}\prod_{\tau=j}^{2}a(\tau +s)\), so
$$ \begin{gathered} \mathcal {A}_{0}=\sum _{j=1}^{3}\prod_{\tau=j}^{2}a( \tau)=a(1)a(2)+a(2)+1, \\ \mathcal {A}_{1}=\sum_{j=1}^{3} \prod_{\tau=j}^{2}a(\tau+1)=a(2)a(0)+a(0)+1, \\ \mathcal {A}_{2}=\sum_{j=1}^{3} \prod_{\tau=j}^{2}a(\tau+2)=a(0)a(1)+a(1)+1. \end{gathered} $$(64)
(a)
For \(a(1)=1\), \(a(0)=-2\), \(a(2)=2\) we have \(\mathcal {A}_{0}=5\), \(\mathcal {A}_{1}=-5\), \(\mathcal {A}_{2}=0\).
-
(b)
For \(a(1)=1\), \(a(2)=-\frac{1}{a(1)+1}=-\frac{1}{2}\), \(a(0)=-\frac {a(1)+1}{a(1)}=-2\), we have \(\mathcal {A}_{0}=0\), \(\mathcal {A}_{1}=0\), \(\mathcal {A}_{2}=0\).
Example 2
Suppose that
$$ L=\prod_{i=0}^{K-1}a(i)=1. $$
We show that if \(\mathcal {A}_{0}=0\) then \(\mathcal {A}_{s}=0\) for all \(s=1, 2, \dots, K-1\), so partial periodicity is not possible.
We have
$$ \mathcal {A}_{1}=a(0)\mathcal {A}_{0}-a(0)\prod_{\tau=1}^{K-1}a(\tau)+1 =a(0)\mathcal {A}_{0}-L+1, $$
so, when \(L=1\),
$$ \mathcal {A}_{1}=a(0)\mathcal {A}_{0}. $$
But then \(\mathcal {A}_{0}=0\) implies \(\mathcal {A}_{1}=0\).
Similar calculations can be done for each \(\mathcal {A}_{s}\).
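The values of \(\mathcal {A}_{s}\) claimed in Examples 1 and 2 can be verified with the following sketch, which implements \(\mathcal {A}_{s}=\sum_{j=1}^{K}\prod_{\tau=j}^{K-1}a(\tau+s)\) directly:

```python
# Sketch verifying the A_s values of Examples 1 and 2, implementing
# A_s = sum_{j=1}^{K} prod_{tau=j}^{K-1} a(tau + s) with indices taken mod K.
import numpy as np

def A(s, a):
    K = len(a)
    return sum(
        np.prod([a[(tau + s) % K] for tau in range(j, K)])
        for j in range(1, K + 1)        # j = K contributes the empty product 1
    )

print([A(s, [-1.0, 1.0]) for s in range(2)])        # Example 1(i): [2, 0]
print([A(s, [-2.0, 1.0, 2.0]) for s in range(3)])   # Example 1(ii)(a): [5, -5, 0]
print([A(s, [-2.0, 1.0, -0.5]) for s in range(3)])  # Example 1(ii)(b): [0, 0, 0]
```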
7.2 Simulations
In this section we illustrate our results with computer simulations. We consider equation (2) for different types of \(a(m)\). In all the examples the random variables \(\xi(m)\) are assumed to be independent and normally \(\mathcal {N}(0, 1)\) distributed, with \(X(0)=2\) and
So the equation which we are simulating is
Example 3
Let
so
Since the assumptions of Theorem 1(ii) hold, we can expect to get four random limits of the solution. More exactly, the limits \(\mathcal {R}(e)\), \(e\in\{0, 1, 2, 3\}\), are given by (63). The following simulations illustrate the results. In all the simulations we have used \(\sigma=0.1\).
Figure 1 demonstrates one run, with four coloured lines indicating the random limits for \(e=0, 1, 2, 3\).
Figures 2 and 3 demonstrate two different samples of ten runs, showing the random limits for \(e=0, 1, 2, 3\).
Figures 4 and 5 demonstrate the random limits for \(e=0\) and \(e=2\), respectively, for 80 different runs.
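Since the displays specifying \(a(m)\) and \(g(m)\) for this example are not reproduced above, the following sketch uses assumed stand-ins with the same qualitative features (\(K=2\), \(L=-1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\sum|g(i)-\hat{g}|<\infty\)); Theorem 1(ii) then predicts four random limits along \(X(2nK+e)\), \(e=0, 1, 2, 3\):

```python
# Sketch of Example 3 with assumed stand-ins for the omitted a(m) and g(m):
# a = (-1, 1) gives K = 2 and L = -1; g(m) = 1/(m+1)^2 is summable; and
# sigma(m) = 0.1/(m+1) lies in l2.  Theorem 1(ii) then gives four random
# limits along the subsequences X(2nK + e), e = 0, 1, 2, 3.
import numpy as np

rng = np.random.default_rng(5)
a, K = [-1.0, 1.0], 2
g = lambda m: 1.0 / (m + 1.0) ** 2
sigma = lambda m: 0.1 / (m + 1.0)

N = 2000
X = np.empty(2 * K * N + 1)
X[0] = 2.0
for m in range(X.size - 1):
    X[m + 1] = a[m % K] * X[m] + g(m) + sigma(m) * rng.standard_normal()

for e in range(2 * K):
    print(e, X[e::2 * K][-2:])          # each subsequence settles to its limit
```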
Example 4
Let
so
Now we are under the assumptions of Theorem 1(i)(c), so we can expect to get two nonrandom limits of the solution. More exactly, they are \(\mathcal {R}(s)\), \(s\in\{0, 1\}\), where
$$ \mathcal {R}(s)=\frac{\hat{g}\mathcal {A}_{s}}{1-L}. $$
Figure 6 demonstrates one run showing also the two limits for \(s=0, 1\), while Figure 7 demonstrates 10 different runs, showing nonrandom limits for \(s=0\) and \(s=1\). Both simulations used \(\sigma=0.5\).
Example 5
Let
so
Recall that, for \(L=1\) and \(\mathcal {S}=\{0, 1\}\), we have
$$ X(2n+s)=Y^{s}(n)=Y^{s}(0)+\mathcal {V}^{s}(n)+\mathcal {H}^{s}(n+1), $$
where \(\mathcal {V}^{s}\) and \(\mathcal {H}^{s}\) are defined by (27). Since both \(\mathcal {A}_{0}\neq0\) and \(\mathcal {A}_{1}\neq0\), and also \(\hat{g}\neq0\), we are under the assumptions of Lemma 3(i)(a), which implies that \(|\mathcal {V}^{s}(n)|\to\infty\). Since \(\bar{M}^{s}\) is a.s. finite and \(L=1\), this implies that \(\lim_{n\to\infty}|X(2n+s)|=\infty\) for each \(s=0, 1\). This is illustrated in Figure 8.
If, however, we assume \(\hat{g}=0\), i.e. we simulate the solution of the equation
with \(a(m)\) given above, we will get two random a.s. bounded limits. Figure 9 shows the two random limits, \(s=0,1\), for 20 different trajectories.
References
Elaydi, SN: An Introduction to Difference Equations, 2nd edn. Springer, Berlin (1999)
Dannan, F, Elaydi, S, Liu, P: Periodic solutions of difference equations. J. Differ. Equ. Appl. 6(2), 203-232 (2000)
Grove, EA, Ladas, G: Periodicities in Nonlinear Difference Equations. Chapman & Hall/CRC, Boca Raton (2005)
Diblik, J, Ruzickova, M, Schmeidel, E, Zbaszyniak, M: Weighted asymptotically periodic solutions of linear Volterra difference equations. Abstr. Appl. Anal. (2011). doi:10.1155/2011/370982
Diblik, J, Ruzickova, M, Schmeidel, E: Existence of asymptotically periodic solutions of system of Volterra difference equations. J. Differ. Equ. Appl. 15(11-12), 1165-1177 (2009)
Agarwal, RP, Romanenko, EY: Stable periodic solutions of difference equations. Appl. Math. Lett. 11(4), 81-84 (1998)
Appleby, JAD, Mao, X, Rodkina, A: On stochastic stabilization of difference equations. Discrete Contin. Dyn. Syst., Ser. A 15(3), 843-857 (2006)
Appleby, JAD, Kelly, C, Mao, X, Rodkina, A: On the local dynamics of polynomial difference equations with fading stochastic perturbations. Dyn. Contin. Discrete Impuls. Syst., Ser. A Math. Anal. 17(3), 401-430 (2010)
Appleby, JAD, Berkolaiko, G, Rodkina, A: Non-exponential stability and decay rates in nonlinear stochastic difference equations with unbounded noise. Stoch. Int. J. Probab. Stoch. Process. 81(2), 99-127 (2009)
Appleby, JAD, Berkolaiko, G, Rodkina, A: On local stability for a nonlinear difference equation with a non-hyperbolic equilibrium and fading stochastic perturbations. J. Differ. Equ. Appl. 14(9), 923-951 (2008)
Berkolaiko, G, Rodkina, A: Almost sure convergence of solutions to non-homogeneous stochastic difference equation. J. Differ. Equ. Appl. 12(6), 535-553 (2006)
Feng, C, Zhao, H, Zhou, B: Pathwise random periodic solutions of stochastic differential equations. J. Differ. Equ. 251, 119-149 (2011)
Cao, J, Yang, Q, Huang, Z, Liu, Q: Asymptotically almost periodic solutions of stochastic functional differential equations. Appl. Math. Comput. 218, 1499-1511 (2011)
Yang, L, Li, Y: Periodic solutions for impulsive BAM neural networks with time-varying delays in leakage term. Int. J. Differ. Equ. (2013). doi:10.1155/2013/543947
Dokuchaev, N, Rodkina, A: On limit periodicity of discrete time stochastic processes. Stoch. Dyn. 14(4), 1450011 (2014). doi:10.1142/S0219493714500117
Chan, T, Williams, D: An excursions approach to an annealing problem. Math. Proc. Camb. Philos. Soc. 105, 169-176 (1989)
Shiryaev, AN: Probability, 2nd edn. Springer, Berlin (1996)
Acknowledgements
This work was done when the first author worked as a visiting professor at the Department of Statistics, Science Campus, The University of the South Africa, Johannesburg, South Africa. The authors are grateful to the anonymous referees for their comments and suggestions.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The authors contributed equally to this paper. They read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
First we formulate and prove an auxiliary lemma, which is used in the proofs of Lemma 3 in Section 4 and Lemma 9 in Section 5. After that we present a proof of Lemma 6.
Lemma 11
Let \((\alpha_{n})_{n\in\mathbf {N}}\) be a sequence of real numbers such that \(\lim_{n\to\infty}\alpha_{n}=\bar{\alpha}\) and let \(|l|<1\). Then
$$ \lim_{n\to\infty}\sum_{i=0}^{n-1}l^{n-1-i}\alpha_{i}=\frac{\bar{\alpha}}{1-l}. $$
Proof
Let \(A>0\) be a number such that, for each \(n\in\mathbf {N}\), \(|\alpha_{n}|\le A\).
Fix some \(\varepsilon>0\) and find \(N_{1}\in\mathbf {N}\) such that, for \(n\ge N_{1}\), \(|\alpha_{n}-\bar{\alpha}|<\varepsilon\).
Let
Then, for \(n\ge N_{2}\),
which concludes the proof. □
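Lemma 11 is also easy to check numerically; in the sketch below \(\alpha_{n}\) and l are illustrative assumptions of ours:

```python
# Numerical sketch of Lemma 11 (alpha_n and l are illustrative assumptions):
# sum_{i<n} l^(n-1-i) alpha_i -> alpha_bar / (1 - l).
import numpy as np

l, n = 0.5, 200
alpha = 1.0 + 1.0 / (np.arange(n) + 1.0)            # alpha_bar = 1
s = sum(l ** (n - 1 - i) * alpha[i] for i in range(n))
print(s, 1.0 / (1.0 - l))                           # both close to 2
```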
Proof of Lemma 6
For \(c>0\) define the events
$$ A_{c}:= \biggl\{\limsup_{n\to\infty}\frac{M(n)}{\sqrt{\langle M(n)\rangle}}\ge c \biggr\} \quad\text{and}\quad A:= \biggl\{\limsup_{n\to\infty}\frac{M(n)}{\sqrt{\langle M(n)\rangle}}=\infty \biggr\}. $$
Then \(A_{c}\downarrow A\) as \(c\to\infty\). The events \(A_{c}\) are tail events. Therefore it follows, from the independence of the sequence \((\eta(n))_{n\in\mathbb{N}}\) and the zero-one law, that
$$ \mathbb{P}[A_{c}]>0 $$(68)
implies \(\mathbb{P}[A_{c}]=1\), and so \(\mathbb{P}[A]=\lim_{c\to\infty} \mathbb{P}[A_{c}]=1\). Therefore it suffices to prove (68) to establish the first part of (40).
Using the fact that for any sequence of random variables \(\{\chi(n)\}_{n\in\mathbb{N}}\) we have
$$ \Bigl\{\limsup_{n\to\infty}\chi(n)\ge c \Bigr\}\supseteq \bigl\{\chi(n)\ge c \text{ i.o.} \bigr\}, $$
and the fact that \(\mathbb{P}[B_{n} \text{ i.o.}]\geq\limsup_{n\to\infty} \mathbb{P}[B_{n}]\) for any sequence of events \(\{B_{n}\}_{n\in\mathbb {N}}\), and then Corollary 3 in turn, we get
$$ \mathbb{P}[A_{c}]\ge\limsup_{n\to\infty}\mathbb{P} \biggl[\frac{M(n)}{\sqrt{\langle M(n)\rangle}}\ge c \biggr]=1-\Phi(c)>0, $$
proving (40). □
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.