
# Asymptotic behavior of random coefficient INAR model under random environment defined by difference equation

## Abstract

This paper proposes a first-order random coefficient integer-valued autoregressive model under random environment by introducing a Markov chain with a finite state space. We derive conditions for stationarity, geometric ergodicity, and β-mixing property with exponential decay for the random coefficient integer-valued autoregressive model under random environment.

MSC: 60J05, 60J10, 60K37.

## 1 Introduction

In the fields of economics, finance, biology, and engineering, many time series exhibit nonlinearity that cannot be captured by traditional linear time series models. In this context, many nonlinear time series models (see, among others, [1–5]) that are more effective in capturing certain features of the data have been proposed. However, time series models for sequences of dependent discrete random variables are rare. Al-Osh and Alzaid [6] introduced the first-order integer-valued autoregressive (INAR(1)) model for modeling and generating sequences of dependent counting processes. Nastić and Ristić [7] derived the distributions of the innovation processes of some mixed integer-valued autoregressive models of orders 1 and 2 with geometric marginal distributions and discussed several properties of these models. Zheng et al. [8] introduced the first-order random coefficient integer-valued autoregressive (RCINAR(1)) model, which is defined as

${X}_{t}={\varphi }_{t}\circ {X}_{t-1}+{\epsilon }_{t},\phantom{\rule{1em}{0ex}}t\ge 1,$
(1.1)

where $\left\{{\varphi }_{t}\right\}$ is an i.i.d. sequence on $\left[0,1\right)$; $\left\{{\epsilon }_{t}\right\}$ is an i.i.d. non-negative integer-valued sequence; ${\varphi }_{t}\circ {X}_{t-1}={\sum }_{i=1}^{{X}_{t-1}}{B}_{i}$, where $\left\{{B}_{i}\right\}$ is an i.i.d. Bernoulli random sequence with $\mathbb{P}\left({B}_{i}=1\mid {\varphi }_{t}\right)={\varphi }_{t}$, and $\left\{{B}_{i}\right\}$ is independent of ${X}_{t-1}$. This model allows a random coefficient; some of its basic probabilistic and statistical properties were discussed in [8]. Chen and Wang [9] proposed a conditional least absolute deviation method to estimate the parameters of the model and investigated the asymptotic distribution of the new estimator. Roitershtein and Zhong [10] studied the asymptotic behavior of this model in the case where the additive term in the underlying random linear recursion belongs to the domain of attraction of a stable law.
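To fix ideas, binomial thinning is easy to simulate: conditionally on ${\varphi }_{t}$, ${\varphi }_{t}\circ {X}_{t-1}$ is a $\mathrm{Binomial}\left({X}_{t-1},{\varphi }_{t}\right)$ draw. A minimal sketch of an RCINAR(1) path follows; the Beta coefficient and Poisson innovation are our illustrative choices, not distributions fixed by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rcinar1(n, phi_sampler, eps_sampler, x0=0):
    """Simulate X_t = phi_t o X_{t-1} + eps_t, where "o" is binomial thinning:
    given phi_t, phi_t o X_{t-1} is a Binomial(X_{t-1}, phi_t) draw."""
    x = np.empty(n + 1, dtype=np.int64)
    x[0] = x0
    for t in range(1, n + 1):
        phi = phi_sampler()                      # random coefficient in [0, 1)
        x[t] = rng.binomial(x[t - 1], phi) + eps_sampler()
    return x

# Illustrative choices: phi_t ~ Beta(2, 5), eps_t ~ Poisson(1)
path = rcinar1(200, lambda: rng.beta(2, 5), lambda: rng.poisson(1.0))
```

Because the thinning sum has at most ${X}_{t-1}$ terms and the innovation is non-negative, every simulated value stays a non-negative integer.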

There is a growing literature on the application of model (1.1). However, the model neglects the influence of the environment (see, among others, [11, 12]). For instance, let ${X}_{t}$ be the number of customers in a queue in the t th hour, ${\varphi }_{t}\circ {X}_{t-1}$ the number of customers left over from the previous hour, and ${\epsilon }_{t}$ the number of new arrivals in the current hour. Then ${X}_{t}$ satisfies model (1.1). In practice, however, the number of new arrivals may be influenced by a sudden change in the environment (e.g., a blizzard), which can make a tremendous difference across hours.

In this paper, we extend model (1.1) to a random environment model, where ${\epsilon }_{t}$ varies with a Markov chain taking values in a finite set. We investigate the basic probabilistic and statistical properties of the new model and provide mild sufficient conditions for geometric ergodicity.

The remainder of the paper is organized as follows. Section 2 introduces the first-order random coefficient integer-valued autoregressive model under a random environment. Section 3 develops some useful lemmas and summarizes the main results. All the proofs are collected in Section 4.

## 2 The first-order random coefficient integer-valued autoregressive model under random environment

In this section, we first give some notation used throughout the paper. Suppose that $\left(\mathrm{\Omega },\mathcal{F},\mathbb{P}\right)$ is a probability space, $\mathrm{E}=\left\{1,2,\dots ,r\right\}$ (r a positive integer) is a finite set, and $\mathcal{H}$ denotes the σ-algebra generated by all subsets of E. $\left\{{Z}_{t},t\ge 0\right\}$ is an irreducible and aperiodic Markov chain defined on $\left(\mathrm{\Omega },\mathcal{F},\mathbb{P}\right)$, taking values in E.

Let ${\epsilon }_{t}\left({Z}_{t}\right)={\sum }_{i=1}^{r}{\epsilon }_{t}\left(i\right){I}_{\left\{i\right\}}\left({Z}_{t}\right)$, where $\left\{{\epsilon }_{t}\left(1\right)\right\},\left\{{\epsilon }_{t}\left(2\right)\right\},\dots ,\left\{{\epsilon }_{t}\left(r\right)\right\}$ are sequences of i.i.d. non-negative integer-valued random variables and ${I}_{\left\{i\right\}}\left({Z}_{t}\right)$ denotes the indicator function of the singleton $\left\{i\right\}$.

This paper considers the following nonlinear time series model:

${X}_{t}={\varphi }_{t}\circ {X}_{t-1}+{\epsilon }_{t}\left({Z}_{t}\right),\phantom{\rule{1em}{0ex}}t\ge 1,$
(2.1)

where: (1) $\left\{{\varphi }_{t}\right\}$ is an i.i.d. sequence of random variables with probability distribution function ${P}_{\varphi }$ on $\left[0,1\right)$; (2) for each $i\in \mathrm{E}$, $\left\{{\epsilon }_{t}\left(i\right)\right\}$ has probability mass function ${f}_{i}\left(\cdot \right)$; (3) ${\varphi }_{t}\circ {X}_{t-1}={\sum }_{i=1}^{{X}_{t-1}}{B}_{i}$, where $\left\{{B}_{i}\right\}$ is an i.i.d. Bernoulli random sequence with $\mathbb{P}\left({B}_{i}=1\mid {\varphi }_{t}\right)={\varphi }_{t}$ and independent of ${X}_{t-1}$; (4) ${X}_{0}$, $\left\{{\varphi }_{t}\right\}$ and $\left\{{\epsilon }_{t}\left(i\right)\right\}$ ($\mathrm{\forall }i\in \mathrm{E}$), are independent. We call this new model a first-order random coefficient integer-valued autoregressive model under random environment (RERCINAR(1)).
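The only change relative to (1.1) is that the innovation law is selected by the current environment state. A sketch of a RERCINAR(1) path under a hypothetical two-state environment; the transition matrix and the Poisson innovation means below are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def rercinar1(n, P, phi_sampler, eps_samplers, x0=0, z0=0):
    """Simulate (X_t, Z_t) for X_t = phi_t o X_{t-1} + eps_t(Z_t).
    P is the r x r transition matrix of {Z_t}; eps_samplers[i] draws the
    innovation when the environment is in state i."""
    r = len(P)
    x = np.empty(n + 1, dtype=np.int64)
    z = np.empty(n + 1, dtype=np.int64)
    x[0], z[0] = x0, z0
    for t in range(1, n + 1):
        z[t] = rng.choice(r, p=P[z[t - 1]])              # environment step
        x[t] = rng.binomial(x[t - 1], phi_sampler()) + eps_samplers[z[t]]()
    return x, z

# Hypothetical environment: state 0 "calm" (Poisson mean 1), state 1 "burst" (mean 5)
P = np.array([[0.9, 0.1], [0.5, 0.5]])
x, z = rercinar1(300, P, lambda: rng.beta(2, 5),
                 [lambda: rng.poisson(1.0), lambda: rng.poisson(5.0)])
```

Note that the environment is advanced before the count is updated, matching the appearance of ${\epsilon }_{t}\left({Z}_{t}\right)$ (not ${\epsilon }_{t}\left({Z}_{t-1}\right)$) in (2.1).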

Obviously, model (2.1) is a generalization of model (1.1). The difference between them lies in the fact that the former reflects not only the internal dynamics of the system but also the influence of sudden environmental changes upon it. The new model (2.1) can therefore better imitate many practical problems in the real world.

The idea is similar to that of Tong and Lim [13], where a class of threshold autoregressive models was introduced to capture the notion of a limit cycle, which plays a key role in the modeling of cyclical data.

The iterative sequence in (1.1) forms a Markov chain on a general state space, whereas the iterative sequence of the nonlinear time series model (2.1) does not enjoy this property. To the best of our knowledge, there is therefore very little research on the limit behavior of the iterative sequence of model (2.1). In this paper, we add a suitable supplementary variable to the non-Markov process, thereby obtaining a Markov process, so that the theory of Markov processes can be applied to the analysis of the non-Markov process. The properties of the original non-Markov process can then be recovered from those of the Markov process.

In the following, let $\mathrm{Z}=\left\{0,1,2,\dots \right\}$, and let $\mathcal{B}$ denote the σ-algebra generated by all subsets of Z. By Lemma 1 in the next section, we know that the sequence $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is a Markov chain on $\mathrm{Z}×\mathrm{E}$ with the following transition probability:

$\begin{array}{r}\mathbb{P}\left\{\left({X}_{t},{Z}_{t}\right)=\left(y,j\right)\mid \left({X}_{t-1},{Z}_{t-1}\right)=\left(x,i\right)\right\}\\ \phantom{\rule{1em}{0ex}}={p}_{ij}\sum _{k=0}^{min\left(x,y\right)}{C}_{x}^{k}{f}_{j}\left(y-k\right){\int }_{0}^{1}{\varphi }_{1}^{k}{\left(1-{\varphi }_{1}\right)}^{x-k}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi },\end{array}$
(2.2)

where ${p}_{ij}=\mathbb{P}\left({Z}_{t+1}=j\mid {Z}_{t}=i\right)$ is the transition function of Markov chain $\left\{{Z}_{t},t\ge 0\right\}$. In fact,

$\begin{array}{r}\mathbb{P}\left\{\left({X}_{t},{Z}_{t}\right)=\left(y,j\right)\mid \left({X}_{t-1},{Z}_{t-1}\right)=\left(x,i\right)\right\}\\ \phantom{\rule{1em}{0ex}}={p}_{ij}\mathbb{P}\left\{{\varphi }_{t}\circ x+{\epsilon }_{t}\left(j\right)=y\right\}={p}_{ij}{\int }_{0}^{1}\mathbb{P}\left\{{\varphi }_{t}\circ x+{\epsilon }_{t}\left(j\right)=y\mid {\varphi }_{t}\right\}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }\\ \phantom{\rule{1em}{0ex}}={p}_{ij}{\int }_{0}^{1}\sum _{k=0}^{min\left(x,y\right)}\mathbb{P}\left\{{\varphi }_{t}\circ x=k,{\epsilon }_{t}\left(j\right)=y-k\mid {\varphi }_{t}\right\}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }\\ \phantom{\rule{1em}{0ex}}={p}_{ij}{\int }_{0}^{1}\sum _{k=0}^{min\left(x,y\right)}\mathbb{P}\left\{{\epsilon }_{t}\left(j\right)=y-k\right\}\mathbb{P}\left\{{\varphi }_{t}\circ x=k\mid {\varphi }_{t}\right\}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }\\ \phantom{\rule{1em}{0ex}}={p}_{ij}{\int }_{0}^{1}\sum _{k=0}^{min\left(x,y\right)}{f}_{j}\left(y-k\right){C}_{x}^{k}{\varphi }_{t}^{k}{\left(1-{\varphi }_{t}\right)}^{x-k}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }\\ \phantom{\rule{1em}{0ex}}={p}_{ij}\sum _{k=0}^{min\left(x,y\right)}{C}_{x}^{k}{f}_{j}\left(y-k\right){\int }_{0}^{1}{\varphi }_{1}^{k}{\left(1-{\varphi }_{1}\right)}^{x-k}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi },\end{array}$

where the first equality follows from the proof of Lemma 1.
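The one-step kernel (2.2) can be evaluated in closed form when ${P}_{\varphi }$ has a Beta density, since ${\int }_{0}^{1}{\varphi }^{k}{\left(1-\varphi \right)}^{x-k}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }$ is then a ratio of Beta functions. The following sketch checks numerically that (2.2) sums to 1 over $\left(y,j\right)$; the Beta(2,5) coefficient and Poisson innovations are illustrative assumptions.

```python
from math import comb, exp, factorial, lgamma

def log_beta(a, b):
    # log B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def thinning_pmf(x, k, a, b):
    # P(phi o x = k) = C(x,k) * Integral phi^k (1-phi)^{x-k} dP_phi
    #               = C(x,k) * B(a+k, b+x-k) / B(a,b)  when phi ~ Beta(a, b)
    return comb(x, k) * exp(log_beta(a + k, b + x - k) - log_beta(a, b))

def poisson_pmf(n, lam):
    return exp(-lam) * lam ** n / factorial(n) if n >= 0 else 0.0

def one_step(x, i, y, j, P, a, b, lam):
    # Transition probability (2.2): p_ij * sum_k P(phi o x = k) f_j(y - k)
    s = sum(thinning_pmf(x, k, a, b) * poisson_pmf(y - k, lam[j])
            for k in range(min(x, y) + 1))
    return P[i][j] * s

P = [[0.9, 0.1], [0.5, 0.5]]              # illustrative environment chain
total = sum(one_step(3, 0, y, j, P, 2.0, 5.0, [1.0, 5.0])
            for y in range(100) for j in range(2))
```

Summing y up to 100 leaves only a negligible Poisson tail, so `total` is numerically indistinguishable from 1.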

We introduce the following notation:

${P}^{\left(t\right)}\left\{\left(x,i\right),\left(y,j\right)\right\}=\mathbb{P}\left\{\left({X}_{s+t},{Z}_{s+t}\right)=\left(y,j\right)\mid \left({X}_{s},{Z}_{s}\right)=\left(x,i\right)\right\},\phantom{\rule{1em}{0ex}}\mathrm{\forall }x,y\in \mathrm{Z},i,j\in \mathrm{E}.$

Therefore by the property of conditional probability, we have

${P}^{\left(t\right)}\left\{\left(x,i\right),\left(y,j\right)\right\}=\sum _{k\in \mathrm{E}}\sum _{z\in \mathrm{Z}}P\left\{\left(x,i\right),\left(z,k\right)\right\}{P}^{\left(t-1\right)}\left\{\left(z,k\right),\left(y,j\right)\right\}.$

By induction, for all $t\ge 2$ it follows that

$\begin{array}{r}{P}^{\left(t\right)}\left\{\left(x,i\right),\left(y,j\right)\right\}\\ \phantom{\rule{1em}{0ex}}=\sum _{{k}_{1},{k}_{2},\dots ,{k}_{t-1}\in \mathrm{E}}{p}_{i{k}_{1}}{p}_{{k}_{1}{k}_{2}}\cdots {p}_{{k}_{t-1}j}\sum _{{z}_{1},{z}_{2},\dots ,{z}_{t-1}\in \mathrm{Z}}\sum _{{m}_{1}=0}^{min\left(x,{z}_{1}\right)}\sum _{{m}_{2}=0}^{min\left({z}_{1},{z}_{2}\right)}\cdots \\ \phantom{\rule{2em}{0ex}}\sum _{{m}_{t}=0}^{min\left({z}_{t-1},y\right)}{C}_{x}^{{m}_{1}}{f}_{{k}_{1}}\left({z}_{1}-{m}_{1}\right){\int }_{0}^{1}{\varphi }_{1}^{{m}_{1}}{\left(1-{\varphi }_{1}\right)}^{x-{m}_{1}}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }\cdot {C}_{{z}_{1}}^{{m}_{2}}{f}_{{k}_{2}}\left({z}_{2}-{m}_{2}\right)\\ \phantom{\rule{2em}{0ex}}×{\int }_{0}^{1}{\varphi }_{1}^{{m}_{2}}{\left(1-{\varphi }_{1}\right)}^{{z}_{1}-{m}_{2}}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }\cdots {C}_{{z}_{t-1}}^{{m}_{t}}{f}_{j}\left(y-{m}_{t}\right){\int }_{0}^{1}{\varphi }_{1}^{{m}_{t}}{\left(1-{\varphi }_{1}\right)}^{{z}_{t-1}-{m}_{t}}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }.\end{array}$
(2.3)

Generally, (2.2) is called the one-step transition probability of the Markov chain $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$, and (2.3) its t-step transition probability.
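Once the product state space $\mathrm{Z}×\mathrm{E}$ is truncated to a finite grid, the recursion behind (2.3) is plain matrix multiplication. A small sketch, with a generic row-stochastic matrix standing in for the kernel (2.2):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in one-step kernel on a truncated product space {0..N} x E, with
# states (x, i) flattened to a single index; any row-stochastic matrix
# illustrates the recursion.
n_states = 12
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)

def t_step(T, t):
    """t-step kernel via the Chapman-Kolmogorov recursion P^(t) = P P^(t-1)."""
    Pt = np.eye(T.shape[0])
    for _ in range(t):
        Pt = T @ Pt
    return Pt

P5 = t_step(T, 5)   # each row of P^(5) is again a probability distribution
```

The recursion agrees with the direct matrix power, which is the finite-dimensional counterpart of expanding (2.3) term by term.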

## 3 Main results

We now give some basic assumptions which guarantee that the lemmas below can be applied throughout the paper.

• (A1) $\left\{{Z}_{t}\right\}$, $\left\{{\epsilon }_{t}\left(1\right)\right\}$, …, $\left\{{\epsilon }_{t}\left(r\right)\right\}$ are mutually independent satisfying $\mathrm{\forall }i\in \mathrm{E}$, $t\ge 0$, ${Z}_{t+1}$ and ${\epsilon }_{t+1}\left(i\right)$ are all independent of $\left\{{X}_{s},s\le t\right\}$;

• (A2) For each $i\in \mathrm{E}$, $E\left({\epsilon }_{t}\left(i\right)\right)$ is a constant independent of t, and $E\left({\varphi }_{t}\right)$ and $E\left({\epsilon }_{t}^{2}\left(i\right)\right)$ are finite.

• (A3) The probability mass function ${f}_{i}\left(\cdot \right)$ of ${\epsilon }_{t}\left(i\right)$ is positive everywhere, that is, $\mathrm{\forall }i\in \mathrm{E}$, ${f}_{i}\left(\cdot \right)>0$.

Remark 1 The independence of $\left\{{\epsilon }_{t}\left(1\right)\right\},\dots ,\left\{{\epsilon }_{t}\left(r\right)\right\}$ and (A2) ensure the stationarity of $\left\{{\epsilon }_{t}\left(i\right)\right\}$, $i\in \mathrm{E}$, and the assumption (A3) is needed to guarantee the irreducibility and aperiodicity of $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$.

A Markov chain $\left\{{Y}_{t}\right\}$ is said to be irreducible if each state can communicate with every other one, i.e., for every x and y there exists $t>0$ such that $\mathbb{P}\left({Y}_{t}=y\mid {Y}_{0}=x\right)>0$. An irreducible chain on a countable space is aperiodic if, for instance, the probability of remaining at some state x is strictly positive: $P\left(x,x\right)>0$; this rules out cyclic behavior. Before establishing the irreducibility and aperiodicity of $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$, we need the following lemma.

Lemma 1 Suppose (A1) and (A2) hold, then the sequence $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is a time-homogeneous Markov chain defined on $\left(\mathrm{\Omega },\mathcal{F},\mathbb{P}\right)$ with state space $\left(\mathrm{Z}×\mathrm{E},\mathcal{B}×\mathcal{H}\right)$.

Next, we state results on the irreducibility and aperiodicity of the sequence $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$. Although they are stated for the proofs of our main results, they are of independent interest. Note that $\left\{{Z}_{t}\right\}$ is irreducible; that is, for an arbitrary measure λ defined on $\left(\mathrm{E},\mathcal{H}\right)$, $\left\{{Z}_{t}\right\}$ is λ-irreducible. Let φ be a measure satisfying $\phi \left\{i\right\}>0$, $\mathrm{\forall }i\in \mathrm{E}$. We can then form the product measure $\mu ×\phi$ on $\left(\mathrm{Z}×\mathrm{E},\mathcal{B}×\mathcal{H}\right)$, where μ is the counting measure on $\left(\mathrm{Z},\mathcal{B}\right)$, so that $\mu \left(A\right)>0$ implies $\mu ×\phi \left(A×B\right)>0$, $A\in \mathcal{B}$, $B\in \mathcal{H}$.

Lemma 2 Under assumptions (A1)-(A3), the Markov chain $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is $\mu ×\phi$ irreducible and aperiodic.

The following lemma is the key to the proof of Lemma 2.

Lemma 3 (Tong [14])

A φ-irreducible Markov chain $\left\{{Y}_{t}\right\}$ with state space $\left(\chi ,\mathcal{A}\right)$ is aperiodic if and only if there exists $A\in \mathcal{A}$ with $\phi \left(A\right)>0$ such that, for every subset B of A with $\phi \left(B\right)>0$, there exists a positive integer t such that $\mathrm{\forall }y\in \chi$,

${P}^{\left(t\right)}\left(y,B\right)>0,\phantom{\rule{2em}{0ex}}{P}^{\left(t+1\right)}\left(y,B\right)>0.$

A Markov chain $\left\{{Y}_{t}\right\}$ with state space $\left(\chi ,\mathcal{A}\right)$ is said to be ergodic if there exists a probability distribution π such that $\mathrm{\forall }y\in \chi$, ${lim}_{t\to \mathrm{\infty }}{\parallel {P}^{\left(t\right)}\left(y,\cdot \right)-\pi \left(\cdot \right)\parallel }_{\tau }=0$. Moreover, if there exists a constant $0<\beta <1$ such that $\mathrm{\forall }y\in \chi$, ${lim}_{t\to \mathrm{\infty }}{\beta }^{-t}{\parallel {P}^{\left(t\right)}\left(y,\cdot \right)-\pi \left(\cdot \right)\parallel }_{\tau }=0$, then $\left\{{Y}_{t}\right\}$ is geometrically ergodic, where ${P}^{\left(t\right)}\left(y,\cdot \right)$ is the t-step transition probability of $\left\{{Y}_{t}\right\}$ and ${\parallel \cdot \parallel }_{\tau }$ denotes the total variation norm. Knowing sufficient conditions for the geometric ergodicity of a time series is very useful for its analysis: first, they clarify the parameter space for estimation purposes when the model is parametric, and second, they validate useful limit theorems such as the asymptotic normality of various estimators (Meyn and Tweedie [15]).
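On a finite state space, geometric ergodicity can be observed directly: ${max}_{y}{\parallel {P}^{\left(t\right)}\left(y,\cdot \right)-\pi \parallel }_{\tau }$ decays like ${\beta }^{t}$ for an irreducible aperiodic chain. A sketch on a random stochastic matrix, used here as a generic stand-in rather than the kernel (2.2):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
T = rng.random((n, n))
T /= T.sum(axis=1, keepdims=True)    # positive entries: irreducible, aperiodic

# Invariant distribution pi: normalised left eigenvector for eigenvalue 1
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def tv(p, q):
    """Total variation distance between two pmfs."""
    return 0.5 * np.abs(p - q).sum()

Pt = np.eye(n)
dists = []
for t in range(15):
    Pt = Pt @ T
    dists.append(max(tv(Pt[y], pi) for y in range(n)))
# dists should shrink (roughly geometrically, at the rate of the
# second-largest eigenvalue modulus of T)
```

Because any stochastic kernel is a contraction in total variation and $\pi =\pi T$, the sequence `dists` is non-increasing.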

Our main results are as follows: Theorem 1 gives sufficient conditions for the geometric ergodicity of the Markov chain $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$, while Theorem 2 shows that $\left\{{X}_{t}\right\}$ possesses a property analogous to geometric ergodicity, even though $\left\{{X}_{t}\right\}$ itself is not a Markov chain.

Theorem 1 Suppose (A1)-(A3) hold, and there exist constants $0<\alpha <1$ and $c\ge 0$, such that

$E\left({\varphi }_{1}\circ x\mid {Z}_{0}=i\right)\le \alpha x+c,\phantom{\rule{1em}{0ex}}\mathrm{\forall }x\in \mathrm{Z},i\in \mathrm{E},$

then the Markov chain $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is geometrically ergodic. Moreover, if $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is initialized from its invariant measure, then it is stationary and β-mixing with exponential decay.
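Note that, by the independence of ${\varphi }_{1}$ and ${X}_{0}$, $E\left({\varphi }_{1}\circ x\right)=E\left({\varphi }_{1}\right)x$, so the drift condition of Theorem 1 holds with $\alpha =E\left({\varphi }_{1}\right)$ and $c=0$ whenever $E\left({\varphi }_{1}\right)<1$. A quick Monte Carlo check of this identity, under an illustrative Beta coefficient of our choosing:

```python
import numpy as np

rng = np.random.default_rng(4)

a, b, x = 2.0, 5.0, 7                # phi_1 ~ Beta(a, b): illustrative choice
phi = rng.beta(a, b, size=200_000)
thinned = rng.binomial(x, phi)       # one draw of phi_1 o x per sampled phi

alpha = a / (a + b)                  # E(phi_1) = a / (a + b) < 1
# E(phi_1 o x) = E(phi_1) * x, so the drift bound holds with c = 0
gap = abs(thinned.mean() - alpha * x)
```

With 200,000 replicates the Monte Carlo error is of order ${10}^{-2}$, so `gap` is small.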

Theorem 2 Suppose (A1)-(A3) hold and $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is geometrically ergodic. Then there exist a unique probability distribution ${\pi }^{\ast }$ and a positive number $\beta <1$, such that for any initial value $x\in \mathrm{Z}$, ${X}_{0}=x$ and $\mathrm{\forall }y\in \mathrm{Z}$,

$\underset{t\to \mathrm{\infty }}{lim}{\beta }^{-t}{\parallel \mathbb{P}\left({X}_{t}=y\mid {X}_{0}=x\right)-{\pi }^{\ast }\left(y\right)\parallel }_{\tau }=0,$

where ${\parallel \cdot \parallel }_{\tau }$ is the total variation norm.

## 4 Proofs

Proof of Lemma 1 $\mathrm{\forall }x,y,{x}_{s}\in \mathrm{Z}$ and $\mathrm{\forall }i,j,{i}_{s}\in \mathrm{E}$, where s is an integer satisfying $0\le s<t$, we have

$\begin{array}{r}\mathbb{P}\left\{\left({X}_{t+1},{Z}_{t+1}\right)=\left(y,j\right)\mid \left({X}_{t},{Z}_{t}\right)=\left(x,i\right),\left({X}_{s},{Z}_{s}\right)=\left({x}_{s},{i}_{s}\right),0\le s<t\right\}\\ \phantom{\rule{1em}{0ex}}=\mathbb{P}\left\{{\varphi }_{t+1}\circ x+{\epsilon }_{t+1}\left(j\right)=y,{Z}_{t+1}=j\mid \left({X}_{t},{Z}_{t}\right)=\left(x,i\right),\left({X}_{s},{Z}_{s}\right)=\left({x}_{s},{i}_{s}\right),0\le s<t\right\}\\ \phantom{\rule{1em}{0ex}}={p}_{ij}\cdot \mathbb{P}\left\{{\varphi }_{t+1}\circ x+{\epsilon }_{t+1}\left(j\right)=y\right\},\end{array}$

where the last equality follows from the definition of the RERCINAR(1) model, the assumption (A1), and the notation ${p}_{ij}=\mathbb{P}\left\{{Z}_{t+1}=j\mid {Z}_{t}=i\right\}$.

On the other hand,

$\begin{array}{r}\mathbb{P}\left\{\left({X}_{t+1},{Z}_{t+1}\right)=\left(y,j\right)\mid \left({X}_{t},{Z}_{t}\right)=\left(x,i\right)\right\}\\ \phantom{\rule{1em}{0ex}}=\mathbb{P}\left\{{\varphi }_{t+1}\circ x+{\epsilon }_{t+1}\left(j\right)=y,{Z}_{t+1}=j\mid \left({X}_{t},{Z}_{t}\right)=\left(x,i\right)\right\}\\ \phantom{\rule{1em}{0ex}}=\mathbb{P}\left\{{\varphi }_{t+1}\circ x+{\epsilon }_{t+1}\left(j\right)=y\mid {X}_{t}=x,{Z}_{t}=i\right\}P\left\{{Z}_{t+1}=j\mid {X}_{t}=x,{Z}_{t}=i\right\}\\ \phantom{\rule{1em}{0ex}}=\mathbb{P}\left\{{\varphi }_{t+1}\circ x+{\epsilon }_{t+1}\left(j\right)=y\mid {X}_{t}=x,{Z}_{t}=i\right\}P\left\{{Z}_{t+1}=j\mid {Z}_{t}=i\right\}\\ \phantom{\rule{1em}{0ex}}={p}_{ij}\cdot \mathbb{P}\left\{{\varphi }_{t+1}\circ x+{\epsilon }_{t+1}\left(j\right)=y\right\}.\end{array}$

Therefore $\mathrm{\forall }x,y,{x}_{s}\in \mathrm{Z}$, and $\mathrm{\forall }i,j,{i}_{s}\in \mathrm{E}$, we have

$\begin{array}{r}\mathbb{P}\left\{\left({X}_{t+1},{Z}_{t+1}\right)=\left(y,j\right)\mid \left({X}_{t},{Z}_{t}\right)=\left(x,i\right),\left({X}_{s},{Z}_{s}\right)=\left({x}_{s},{i}_{s}\right),0\le s<t\right\}\\ \phantom{\rule{1em}{0ex}}=\mathbb{P}\left\{\left({X}_{t+1},{Z}_{t+1}\right)=\left(y,j\right)\mid \left({X}_{t},{Z}_{t}\right)=\left(x,i\right)\right\}.\end{array}$

Hence the sequence $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is a Markov chain, and its time-homogeneity follows from the stationarity of ${\epsilon }_{t+1}\left(j\right)$, $j\in \mathrm{E}$. □

Proof of Lemma 2 Suppose $A×B\in \mathcal{B}×\mathcal{H}$ and $\mu ×\phi \left(A×B\right)>0$. Since $\left\{{Z}_{t}\right\}$ is irreducible and aperiodic on the finite state space E, for all $i,j\in \mathrm{E}$ there exists ${t}_{0}>0$ such that

${p}_{ij}^{\left(t\right)}=\mathbb{P}\left({Z}_{s+t}=j\mid {Z}_{s}=i\right)>0,\phantom{\rule{1em}{0ex}}\mathrm{\forall }t\ge {t}_{0};$

that is, there exist ${k}_{1},{k}_{2},\dots ,{k}_{t-1}\in \mathrm{E}$ such that

${p}_{i{k}_{1}}{p}_{{k}_{1}{k}_{2}}\cdots {p}_{{k}_{t-1}j}>0.$

Then from (2.2), $\mathrm{\forall }\left(x,i\right)\in \mathrm{Z}×\mathrm{E}$, we have

$\begin{array}{r}{P}^{\left(t\right)}\left\{\left(x,i\right),\left(y,j\right)\right\}\\ \phantom{\rule{1em}{0ex}}=\sum _{{k}_{1},{k}_{2},\dots ,{k}_{t-1}\in \mathrm{E}}{p}_{i{k}_{1}}{p}_{{k}_{1}{k}_{2}}\cdots {p}_{{k}_{t-1}j}\sum _{{z}_{1},{z}_{2},\dots ,{z}_{t-1}\in \mathrm{Z}}\sum _{{m}_{1}=0}^{min\left(x,{z}_{1}\right)}\sum _{{m}_{2}=0}^{min\left({z}_{1},{z}_{2}\right)}\cdots \\ \phantom{\rule{2em}{0ex}}\sum _{{m}_{t}=0}^{min\left({z}_{t-1},y\right)}{C}_{x}^{{m}_{1}}{f}_{{k}_{1}}\left({z}_{1}-{m}_{1}\right){\int }_{0}^{1}{\varphi }_{1}^{{m}_{1}}{\left(1-{\varphi }_{1}\right)}^{x-{m}_{1}}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }\cdot {C}_{{z}_{1}}^{{m}_{2}}{f}_{{k}_{2}}\left({z}_{2}-{m}_{2}\right)\\ \phantom{\rule{2em}{0ex}}×{\int }_{0}^{1}{\varphi }_{1}^{{m}_{2}}{\left(1-{\varphi }_{1}\right)}^{{z}_{1}-{m}_{2}}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }\cdots {C}_{{z}_{t-1}}^{{m}_{t}}{f}_{j}\left(y-{m}_{t}\right){\int }_{0}^{1}{\varphi }_{1}^{{m}_{t}}{\left(1-{\varphi }_{1}\right)}^{{z}_{t-1}-{m}_{t}}\phantom{\rule{0.2em}{0ex}}\mathrm{d}{P}_{\varphi }>0,\end{array}$

therefore the Markov chain $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is $\mu ×\phi$ irreducible. The aperiodicity of $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ follows from Lemma 3. □

The proofs of our main results make use of the following well-known lemma.

Lemma 4 (Tweedie [16])

Suppose that $\left\{{Y}_{t}\right\}$ is a φ-irreducible and aperiodic Markov chain with state space $\left(\chi ,\mathcal{A}\right)$. If there exist a non-negative measurable function $g\left(\cdot \right)$, a finite set $B\in \mathcal{A}$, and three constants ${c}_{1}>0$, ${c}_{2}>0$, and $0<\rho <1$, such that

$\begin{array}{c}E\left\{g\left({Y}_{t}\right)\mid {Y}_{t-1}=y\right\}⩽\rho g\left(y\right)-{c}_{1},\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\notin B,\hfill \\ E\left\{g\left({Y}_{t}\right)\mid {Y}_{t-1}=y\right\}⩽{c}_{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }y\in B,\hfill \end{array}$

then $\left\{{Y}_{t}\right\}$ is geometrically ergodic. If $\left\{{Y}_{t}\right\}$ is initialized from its invariant measure π, then it is strictly stationary and β-mixing with exponential decay.

Proof of Theorem 1 By Lemma 1, Lemma 2, and the conditions given in Theorem 1, we know that $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is a $\mu ×\phi$ irreducible and aperiodic Markov chain. So by Lemma 4 it suffices to show that there exist a non-negative measurable function $g\left(\cdot \right)$, a finite set B, and three constants ${c}_{1}>0$, ${c}_{2}>0$, and $0<\rho <1$, such that

$E\left\{g\left({X}_{t},{Z}_{t}\right)\mid \left({X}_{t-1},{Z}_{t-1}\right)=\left(x,i\right)\right\}\le \rho g\left(x,i\right)-{c}_{1},\phantom{\rule{1em}{0ex}}\mathrm{\forall }\left(x,i\right)\notin B,$
(4.1)
$E\left\{g\left({X}_{t},{Z}_{t}\right)\mid \left({X}_{t-1},{Z}_{t-1}\right)=\left(x,i\right)\right\}\le {c}_{2},\phantom{\rule{1em}{0ex}}\mathrm{\forall }\left(x,i\right)\in B.$
(4.2)

Let

$g\left(x,i\right)=\sqrt{{x}^{2}+{i}^{2}}={\parallel \stackrel{⌢}{m}\parallel }_{2},$

where ${\parallel \cdot \parallel }_{2}$ denotes the Euclidean norm, and $\stackrel{⌢}{m}=\left(x,i\right)$, $x\in \mathrm{Z}$, $i\in \mathrm{E}$. Then we have

$\begin{array}{r}E\left\{g\left({X}_{1},{Z}_{1}\right)\mid \left({X}_{0},{Z}_{0}\right)=\left(x,i\right)\right\}\\ \phantom{\rule{1em}{0ex}}=E\left\{g\left({\varphi }_{1}\circ {X}_{0}+{\epsilon }_{1}\left({Z}_{1}\right),{Z}_{1}\right)\mid \left({X}_{0},{Z}_{0}\right)=\left(x,i\right)\right\}=E\left\{g\left({\varphi }_{1}\circ x+{\epsilon }_{1}\left({Z}_{1}\right),{Z}_{1}\right)\mid {Z}_{0}=i\right\}\\ \phantom{\rule{1em}{0ex}}\le E\left\{{\parallel {\varphi }_{1}\circ x+{\epsilon }_{1}\left({Z}_{1}\right)\parallel }_{1}\mid {Z}_{0}=i\right\}+E\left\{{\parallel {Z}_{1}\parallel }_{1}\mid {Z}_{0}=i\right\}\\ \phantom{\rule{1em}{0ex}}\le E\left\{{\parallel {\varphi }_{1}\circ x\parallel }_{1}\mid {Z}_{0}=i\right\}+E\left\{{\parallel {\epsilon }_{1}\left({Z}_{1}\right)\parallel }_{1}\mid {Z}_{0}=i\right\}+E\left\{{\parallel {Z}_{1}\parallel }_{1}\mid {Z}_{0}=i\right\}\\ \phantom{\rule{1em}{0ex}}\le E\left\{{\varphi }_{1}\circ x\mid {Z}_{0}=i\right\}+{c}_{0}\\ \phantom{\rule{1em}{0ex}}\le \alpha x+c+{c}_{0},\end{array}$

where ${c}_{0}={max}_{i\in \mathrm{E}}\left(E\left\{{\parallel {\epsilon }_{1}\left({Z}_{1}\right)\parallel }_{1}\mid {Z}_{0}=i\right\}+E\left\{{\parallel {Z}_{1}\parallel }_{1}\mid {Z}_{0}=i\right\}\right)$.

Let K be a positive integer, and let

$\begin{array}{c}B=\left\{\left(x,i\right):x\le K,i\in \mathrm{E}\right\},\hfill \\ {c}_{1}=\left(\rho -\alpha \right)K-c-{c}_{0},\hfill \\ {c}_{2}=\alpha K+c+{c}_{0},\hfill \end{array}$

where ρ is a real number satisfying $\alpha <\rho <1$ and $K>\left(c+{c}_{0}\right)/\left(\rho -\alpha \right)$. Then, for $\left(x,i\right)\notin B$, we have

$\begin{array}{r}E\left\{g\left({X}_{1},{Z}_{1}\right)\mid \left({X}_{0},{Z}_{0}\right)=\left(x,i\right)\right\}\\ \phantom{\rule{1em}{0ex}}\le \rho x-\rho x+\alpha x+c+{c}_{0}\\ \phantom{\rule{1em}{0ex}}=\rho x-\left[\left(\rho -\alpha \right)x-c-{c}_{0}\right]\\ \phantom{\rule{1em}{0ex}}\le \rho g\left(x,i\right)-\left[\left(\rho -\alpha \right)K-c-{c}_{0}\right],\end{array}$

therefore (4.1) and (4.2) hold. This completes the proof. □
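The constants in the proof can be checked numerically for a concrete instance. Below, ${\varphi }_{1}\sim \mathrm{Beta}\left(2,5\right)$, two environment states carry Poisson innovations with means 1 and 5, and ${Z}_{1}$ is drawn uniformly as a stand-in for one environment step; all of these choices are illustrative assumptions, and ${c}_{0}$ is bounded crudely by ${max}_{j}E{\epsilon }_{1}\left(j\right)+{max}_{j}j$. The Monte Carlo estimate of $E\left\{g\left({X}_{1},{Z}_{1}\right)\mid \left({X}_{0},{Z}_{0}\right)=\left(x,i\right)\right\}$ then satisfies the drift bound (4.1) for $x>K$:

```python
import numpy as np

rng = np.random.default_rng(5)

a, b = 2.0, 5.0              # phi_1 ~ Beta(a, b)
lam = [1.0, 5.0]             # eps(1) ~ Poisson(1), eps(2) ~ Poisson(5)
alpha = a / (a + b)          # E(phi_1), so the drift condition holds with c = 0
rho = 0.6                    # any rho with alpha < rho < 1
c0 = max(lam) + 2            # crude bound: E|eps(Z_1)| + E|Z_1|, states {1, 2}
K = int(c0 / (rho - alpha)) + 1
c1 = (rho - alpha) * K - c0  # the constant in (4.1); positive by choice of K

def drift_lhs(x, n=200_000):
    """Monte Carlo estimate of E g(X_1, Z_1) given X_0 = x, g = Euclidean norm."""
    phi = rng.beta(a, b, size=n)
    z1 = rng.integers(1, 3, size=n)                  # stand-in environment draw
    x1 = rng.binomial(x, phi) + rng.poisson(np.where(z1 == 1, lam[0], lam[1]))
    return np.hypot(x1, z1).mean()

x = K + 20                   # a point outside B = {(x, i): x <= K}
lhs = drift_lhs(x)
rhs = rho * np.hypot(x, 1) - c1
```

Here `lhs` falls well below `rhs`, mirroring the geometric drift used in the proof.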

Proof of Theorem 2 Since $\left\{\left({X}_{t},{Z}_{t}\right)\right\}$ is geometrically ergodic, there exist a probability measure π on $\left(\mathrm{Z}×\mathrm{E},\mathcal{B}×\mathcal{H}\right)$ and a constant β with $0<\beta <1$ such that $\mathrm{\forall }\left(x,i\right)\in \mathrm{Z}×\mathrm{E}$,

$\underset{t\to \mathrm{\infty }}{lim}{\beta }^{-t}{\parallel {P}^{\left(t\right)}\left(\left(x,i\right),\cdot \right)-\pi \left(\cdot \right)\parallel }_{\tau }=0.$
(4.3)

Suppose ${\pi }^{\ast }$ is a set function on $\left(\mathrm{Z},\mathcal{B}\right)$ satisfying

${\pi }^{\ast }\left(A\right)=\pi \left(A×\mathrm{E}\right),\phantom{\rule{1em}{0ex}}\mathrm{\forall }A\in \mathcal{B},$

obviously, ${\pi }^{\ast }$ is a probability measure on $\left(\mathrm{Z},\mathcal{B}\right)$. Suppose that $\left\{{X}_{t}\right\}$ is the iterative sequence generated by (2.1) with initial value ${X}_{0}=x$; then $\mathrm{\forall }y\in \mathrm{Z}$, we have

$\begin{array}{r}\mathbb{P}\left({X}_{t}=y\mid {X}_{0}=x\right)\\ \phantom{\rule{1em}{0ex}}=\sum _{j\in \mathrm{E}}\mathbb{P}\left({X}_{t}=y,{Z}_{t}=j\mid {X}_{0}=x\right)\\ \phantom{\rule{1em}{0ex}}=\sum _{j\in \mathrm{E}}\sum _{i\in \mathrm{E}}\mathbb{P}\left({X}_{t}=y,{Z}_{t}=j\mid {X}_{0}=x,{Z}_{0}=i\right)\mathbb{P}\left({Z}_{0}=i\mid {X}_{0}=x\right),\end{array}$
(4.4)

and $\mathrm{\forall }A\in \mathcal{B}$,

${\pi }^{\ast }\left(A\right)=\pi \left(A×\mathrm{E}\right)=\sum _{j\in \mathrm{E}}\sum _{i\in \mathrm{E}}\pi \left(A×\left\{j\right\}\right)\mathbb{P}\left({Z}_{0}=i\mid {X}_{0}=x\right).$
(4.5)

Since E is a finite set, then (4.3), (4.4), and (4.5) imply that

$\underset{t\to \mathrm{\infty }}{lim}{\beta }^{-t}{\parallel \mathbb{P}\left({X}_{t}=y\mid {X}_{0}=x\right)-{\pi }^{\ast }\left(y\right)\parallel }_{\tau }=0.$
(4.6)

Then ${\pi }^{\ast }$ is an invariant probability measure of $\left\{{X}_{t}\right\}$, and the uniqueness of ${\pi }^{\ast }$ can be deduced from the uniqueness of π. This completes the proof. □

## References

1. Bollerslev T: Generalized autoregressive conditional heteroscedasticity. J. Econom. 1986, 31: 307-327. 10.1016/0304-4076(86)90063-1

2. Engle RF: Autoregressive conditional heteroscedasticity with estimates of the variance of U.K. inflation. Econometrica 1982, 50: 987-1008. 10.2307/1912773

3. Tong H: Threshold Models in Nonlinear Time Series Analysis. Springer, New York; 1983.

4. Fan J, Yao Q: Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, New York; 2003.

5. Tang MT, Wang YY: On nonergodicity for nonparametric autoregressive models. Adv. Differ. Equ. 2013., 2013: Article ID 200

6. Al-Osh MA, Alzaid AA: First order integer-valued autoregressive (INAR(1)) processes. J. Time Ser. Anal. 1987, 8: 261-275. 10.1111/j.1467-9892.1987.tb00438.x

7. Nastić AS, Ristić MM: Some geometric mixed integer-valued autoregressive (INAR) models. Stat. Probab. Lett. 2012, 82: 805-811. 10.1016/j.spl.2012.01.007

8. Zheng H, Basawa IV, Datta S: First-order random coefficient integer-valued autoregressive processes. J. Stat. Plan. Inference 2007, 137: 212-229.

9. Chen X, Wang L: Conditional ${L}_{1}$ estimation for random coefficient integer-valued autoregressive processes. Stat. Risk Model. 2013, 30: 221-235.

10. Roitershtein A, Zhong Z: On random coefficient INAR(1) processes. Sci. China Math. 2013, 56: 177-200. 10.1007/s11425-012-4547-z

11. Chan KS, Tong H: On the use of the deterministic Lyapunov function for the ergodicity of stochastic difference equations. Adv. Appl. Probab. 1985, 17: 666-678. 10.2307/1427125

12. Balaji S, Meyn SP: Multiplicative ergodicity and large deviations for an irreducible Markov chain. Stoch. Process. Appl. 2000, 90: 123-144. 10.1016/S0304-4149(00)00032-6

13. Tong H, Lim KS: Threshold autoregression, limit cycles and cyclical data. J. R. Stat. Soc. B 1980, 42: 245-292.

14. Tong H: Nonlinear Time Series: A Dynamical System Approach. Oxford University Press, Oxford; 1990.

15. Meyn SP, Tweedie RL: Markov Chains and Stochastic Stability. Cambridge University Press, Cambridge; 2009.

16. Tweedie RL: Sufficient conditions for ergodicity and geometric ergodicity of Markov chains on a general state space. Stoch. Process. Appl. 1975, 3: 385-403. 10.1016/0304-4149(75)90033-2

## Acknowledgements

This research is supported by the NSFC (No. 11326177) and NSFC (No. 11326238), the SF of Jiangxi Provincial Education Department (No. GJJ12356), and the NSF of Jiangxi Province (No. 20132BAB211005).

## Author information


### Corresponding author

Correspondence to Yunyan Wang.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors jointly worked on the results and they read and approved the final manuscript.

## Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Tang, M., Wang, Y. Asymptotic behavior of random coefficient INAR model under random environment defined by difference equation. Adv Differ Equ 2014, 99 (2014). https://doi.org/10.1186/1687-1847-2014-99