
Estimation for random coefficient integer-valued autoregressive model under random environment

Abstract

A first-order random coefficient integer-valued autoregressive model based on the negative binomial thinning operator under an r-state random environment is introduced. This paper derives the numerical characteristics of the proposed model, establishes Yule–Walker estimators of the model parameters, and discusses the strong consistency of the obtained estimators. Finally, a simulation study is carried out to verify the feasibility of the parameter estimation.

1 Introduction

Counting processes are common in many real-life situations, with examples found not only in medicine, insurance theory, and criminology but also in meteorology, queueing systems, biology, and other fields. Since counting sequences can record criminal offenses, patients, earthquakes, detected errors, traffic accidents, insurance transactions, and so on, they have attracted the interest of scientists for many years, up to the present day.

Counting sequences are also called integer-valued time series. The binomial thinning operator is often used to build integer-valued models. For example, Al-Osh and Alzaid [1] established the first-order integer-valued autoregressive (INAR) model, interpreting the autoregressive parameter as a survival probability. Freeland and McCabe [2] derived a corrected explicit expression for the asymptotic variance matrix of the conditional least squares estimators of the first-order integer-valued autoregressive model. Alzaid and Al-Osh [3] introduced an integer-valued autoregressive process with lag p. Du and Li [4] proved the ergodicity of the pth-order integer-valued autoregressive model and derived its correlation structure. Zheng et al. [5] extended the INAR(1) model to a random coefficient model, in which the fixed autoregressive parameter is replaced by a random variable, derived conditional least squares and quasi-likelihood estimators of the model parameters, and established their asymptotic properties. Tang and Wang [6] proposed a random coefficient integer-valued autoregressive model under a random environment by introducing a Markov chain with finite state space. Liu et al. [7] introduced a random environment binomial thinning integer-valued autoregressive process with Poisson or geometric marginals.

In the above references, Bernoulli random variables are used in the INAR models based on the binomial thinning operator. Since a Bernoulli variable takes only the two values 0 and 1, the binomial thinning operator is not always appropriate for the analysis of integer-valued time series. Therefore, Ristić et al. [8] defined the negative binomial thinning operator for integer-valued time series and discussed in some detail the first-order integer-valued autoregressive model with geometric marginals. Since then, integer-valued autoregressive models based on the negative binomial thinning operator have attracted widespread attention in statistics and economics. Ristić et al. [9] proposed a first-order bivariate integer-valued autoregressive model with geometric marginals and developed conditional least squares estimators of its parameters. Bakouch [10] derived higher-order moments and numerical characteristics of the integer-valued autoregressive model with geometric marginals. Nastić et al. [11] modeled real data by a pth-order integer-valued autoregressive model with geometric marginals and derived some regression properties of the new model. Zhang [12] introduced the random coefficient integer-valued autoregressive process of order 1 and proved the strict stationarity of the proposed model.

On the other hand, the introduction of a random environment into INAR models has greatly improved the adaptability of the model. Nastić et al. [13] and Laketa et al. [14] studied integer-valued autoregressive models based on the negative binomial thinning operator with different geometric marginals under a random environment. For a more detailed introduction to random environment models, see Laketa [15]. Nastić et al. [16] introduced the first-order random environment integer-valued autoregressive model with geometric marginals, which is given as follows:

$$ {X_{n}}({z_{n}}) = \alpha * {X_{n - 1}}({z_{n - 1}}) + {\varepsilon _{n}}({z_{n - 1}},{z_{n}}),\quad n \in \mathbb{N}, $$
(1)

where the fixed coefficient \(\alpha\in(0,1)\), \({z_{n}}\) is the realized value of the r-state random environment process \(\{ {Z_{n}}\} \), \(\{ {\varepsilon_{n}}({z_{n - 1}},{z_{n}})\} \) is an independent and identically distributed innovation sequence, and the \({X_{n}}({z_{n}})\) are non-negative random variables. In their paper, the distributional and correlation properties of model (1) are discussed, and the k-step-ahead conditional expectation and variance are derived.

However, the fixed coefficient α in model (1) may change with time or with other factors in certain cases. For example, if \(\alpha * {X_{n - 1}}({z_{n - 1}})\) in model (1) denotes the number of surviving species in a small isolated area at time \(n-1\) (including the parent generation), and \({\varepsilon_{n}}({z_{n - 1}},{z_{n}})\) denotes the number of newly admitted species at time n, then \({X_{n}}({z_{n}})\) stands for the number of species at time n; but model (1) cannot capture the influence of temperature, humidity, and other factors that may strongly affect \({X_{n}}({z_{n}})\). It is more reasonable to replace the fixed coefficient α by random variables \({\alpha_{n}}\). Therefore, in this paper we extend model (1) to a random coefficient model, in which the fixed coefficient α is replaced by random autoregressive coefficients \({\alpha_{n}}\). Random coefficient models can be fitted to data affected by external factors, such as disease data or crime data: disease data are collected under the influence of medical standards, patient constitution, and so on, while the collection of crime data is influenced by factors such as the regional economy and government policies. For this kind of data, random coefficient models are more suitable than model (1), and hence they have many applications in medicine, criminology, finance, and other fields. The main goal of this article is to investigate the basic probabilistic and statistical properties of the proposed model and to develop Yule–Walker estimation methods for the relevant parameters.

The structure of the paper is as follows. In Sect. 2, we introduce the new random coefficient integer-valued autoregressive process of order 1 and study its properties. In Sect. 3, the Yule–Walker estimators and the strong consistency of the proposed parameter estimators are established. Section 4 presents the results of a numerical simulation.

2 The first-order random coefficient integer-valued autoregressive model under an r-state random environment

In this section, we give the definition and the probabilistic properties of the new model with random coefficients. Throughout the paper, let \(\{ {Z_{n}}\}\), \(n \in{\mathbb{N}_{0}}\), \({\mathbb{N}_{0}} = \mathbb{N} \cup\{0\}\), be an r-state Markov chain, where \(r \in\{ 1,2,3, \ldots\} \), \({Z_{n}} \in{E_{r}} = \{ 1,2, \ldots,r\} \). The random coefficient sequence \(\{ {X_{n}}({Z_{n}}) \}\) is defined as follows.

Definition 1

If a sequence of integer-valued random variables \(\{ {X_{n}}({z_{n}}) \} \), \(n \in{\mathbb{N}_{0}}\),

$$ {X_{n}}({z_{n}}) = {\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) + {\varepsilon _{n}}({z_{n - 1}},{z_{n}}),\quad n \in\mathbb{N}, $$
(2)

satisfies the following conditions:

  1. (i)

    \({z_{n}}\) is the realized value of the random environment process \(\{ {Z_{n}}\} \), \(n \in{\mathbb{N}_{0}}\), \({z_{n}} \in{E_{r}}\).

  2. (ii)

    \(\{ {\alpha_{n}}\} \) is a sequence of independent and identically distributed random variables taking values in \((0,1)\). Let \(\alpha = E({\alpha_{n}})\) and \(\sigma_{\alpha}^{{2}} = \operatorname{Var}({\alpha _{n}})\), both assumed finite, with \(0 < {\alpha^{2}} + \sigma_{\alpha}^{2} < 1\).

  3. (iii)

    “∗” is the negative binomial thinning operator and satisfies \({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) = \sum_{i = 1}^{{X_{n - 1}}({z_{n - 1}})} {W_{i}^{(n)}} \), where the \(W_{i}^{(n)}\) are independent and identically distributed random variables with probability mass function \(P(W_{i}^{(n)} = w) = \frac{{\alpha_{n}^{w}}}{{{{(1 + {\alpha_{n}})}^{w + { {1}}}}}}\), \(w \in{\mathbb{N}_{0}}\) (a simulation sketch of this operator is given after the definition).

  4. (iv)

    \(\{ {\varepsilon_{n}}({z_{n - 1}},{z_{n}})\}\) is a sequence of independent and identically distributed non-negative random variables. \(E({\varepsilon _{n}}({z_{n - 1}},{z_{n}})) = {\mu_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}}\) and \(\operatorname{Var}({\varepsilon_{n}}({z_{n - 1}},{z_{n}})) = \sigma_{{\varepsilon _{n}}({z_{n - 1}},{z_{n}})}^{2}\) are assumed finite. The sequence \(\{ {\varepsilon_{n}}({z_{n - 1}},{z_{n}})\}\) meets the following conditions:

    1. (A1)

      \(\{ {Z_{n}}\}\), \(\{ {\varepsilon_{n}}({z_{n - 1}},{z_{n}})\}\), and \(\{ {\alpha_{n}}\}\), \(n \in{\mathbb{N}_{0}}\), are mutually independent,

    2. (A2)

      \({z_{m}} \) and \({\varepsilon_{m}}({z_{m - 1}},{z_{m}})\) are independent of \({X_{n}}({z_{n}})\), \(n < m\).

  5. (v)

    For any \({z_{i}} = z\), \(i \ge0\), \(z \in{E_{r}}\), \(I\) denotes the indicator function, where

    $${I_{\{ {z_{n}} = z\} }} = \left \{ { \textstyle\begin{array}{l@{\quad}l} {1,} & {{z_{n}} = z}; \\ {0,} & {{z_{n}} \ne z}. \end{array}\displaystyle } \right . $$
  6. (vi)

    The probability mass function of the non-negative random variable \({X_{n}}({z_{n}})\) is as follows:

    $$P \bigl({X_{n}}({z_{n}}) = x \bigr) = \frac{{{{({\mu_{{z_{n}}}})}^{x}}}}{{{{(1 + {\mu _{{z_{n}}}})}^{x + 1}}}},\quad x \in{\mathbb{N}_{0}}, {\mu_{{z_{n}}}} \in\{ {\mu_{1}},{ \mu_{2}}, \ldots,{\mu_{r} }\}. $$

    We say that model (2) is a first-order random coefficient integer-valued autoregressive model based on the negative binomial thinning operator under an r-state random environment, for short, the RrRCINAR(1) model.
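To make the thinning mechanism in condition (iii) concrete, the following minimal Python sketch simulates one transition of model (2). The function name, the uniform law chosen for \(\alpha_{n}\), and the placeholder innovation are ours, for illustration only; Sect. 4 specifies the innovation distribution actually used in the simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def nb_thinning(alpha_n, x, rng):
    """Negative binomial thinning alpha_n * x: the sum of x i.i.d. counting
    variables W_i with P(W = w) = alpha_n^w / (1 + alpha_n)^(w + 1), i.e. a
    geometric law on {0, 1, 2, ...} with mean alpha_n."""
    if x == 0:
        return 0
    # numpy's geometric is supported on {1, 2, ...}; subtracting 1 shifts it
    # to {0, 1, 2, ...} with success probability 1 / (1 + alpha_n).
    return int((rng.geometric(1.0 / (1.0 + alpha_n), size=x) - 1).sum())

# One transition of model (2): draw the random coefficient alpha_n,
# thin the previous count, and add an innovation term.
alpha_n = rng.uniform(0.05, 0.25)   # illustrative distribution for alpha_n
x_prev = 7                          # X_{n-1}(z_{n-1})
eps_n = rng.geometric(0.5) - 1      # placeholder innovation on {0, 1, ...}
x_next = nb_thinning(alpha_n, x_prev, rng) + eps_n
```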

Remark 1

Random variables \({X_{n}}({z_{n}})\) and \({\varepsilon _{n}}({z_{n - 1}},{z_{n}})\) can also be expressed as

$${X_{n}}({z_{n}}) = \sum_{z = 1}^{r} {{X_{n}}(z){I_{\{ {z_{n}} = z\} }}} $$

and

$${\varepsilon_{n}}({z_{n - 1}},{z_{n}}) = \sum _{{z_{1}} = 1}^{r} {\sum _{{z_{2}} = 1}^{r} {{\varepsilon_{n}}({z_{1}},{z_{2}}){I_{\{ {z_{n - 1}} = {z_{1}},{z_{n}} = {z_{2}}\} }}} }. $$

Next, we consider \(\{ {X_{n}}({z_{n}}),{z_{n}}\}\) as a bivariate time series and derive its transition probability. The Markov chain property of the process \(\{ {X_{n}}({z_{n}}),{z_{n}}\}\) is given by the following lemma.

Lemma 1

Suppose that the random variables \({X_{n}}({z_{n}})\) are given by model (2); then the bivariate process \(\{ {X_{n}}({z_{n}}),{z_{n}}\} \) is a Markov chain.

Proof

Let \({p_{ij}} = P({z_{n}} = j \vert {{z_{n - 1}} = i)} \) be the transition probability of the Markov chain \(\{ {Z_{n}}\}\), where \(i,j \in{E_{r}}\). Let \({Y_{n}} = ({X_{n}}({z_{n}}),{z_{n}})\) and \({y_{n}} = ({x_{n}},j)\), where \({z_{n}} = j\). Let \(A = \{ {Y_{s}} = {y_{s}},0 \le s < n - 1\}\). Therefore, for \(n \in \mathbb{N}\), we have

$$ \begin{aligned}[b] {P_{n - 1,n}} &= P({Y_{n}} = {y_{n}}| {{Y_{n - 1}} = {y_{n - 1}},A} ) \\ &= P\bigl({X_{n}}({z_{n}}) = {x_{n}},{z_{n}} = j| {{z_{n - 1}} = i,{X_{n - 1}}({z_{n - 1}}) = {x_{n - 1}},A} \bigr) \\ &= P\bigl({X_{n}}({z_{n}}) = {x_{n}}| {{z_{n}} = j,{z_{n - 1}} = i,{X_{n - 1}}({z_{n - 1}}) = {x_{n - 1}},A} \bigr) \\ &\quad\cdot P\bigl({z_{n}} = j| {{z_{n - 1}} = i,{X_{n - 1}}({z_{n - 1}}) = {x_{n - 1}},A} \bigr) \\ &= P\bigl({\alpha_{n}}*{x_{n - 1}} + {\varepsilon_{n}}(i,j) = {x_{n}}\bigr) \cdot P({z_{n}} = j| {{z_{n - 1}} = i} ) \\ &= {p_{ij}}\cdot P\bigl({\alpha_{n}}*{x_{n - 1}} + { \varepsilon_{n}}(i,j) = {x_{n}}\bigr) \\ &= {p_{ij}}\cdot P\bigl({X_{n}}(j) = {x_{n}} \bigr) \\ &= {p_{ij}}\cdot\frac{{\mu_{j}^{{x_{n}}}}}{{{{(1 + {\mu _{j}})}^{{x_{n}}+1}}}}, \end{aligned} $$
(3)

where \({\mu_{{j}}} \in\{ {\mu_{1}},{\mu_{2}}, \ldots,{\mu_{r}}\}\). Analogously to (3), we get

$$ \begin{aligned}[b] &P({Y_{n}} = {y_{n}}| {{Y_{n - 1}} = {y_{n - 1}}} ) \\ &\quad= P\bigl({X_{n}}({z_{n}}) = {x_{n}},{z_{n}} = j| {{z_{n - 1}} = i,{X_{n - 1}}({z_{n - 1}}) = {x_{n - 1}}} \bigr) \\ &\quad= {p_{ij}} \cdot\frac{{\mu_{j}^{{x_{n}}}}}{{{{(1 + {\mu_{j}})}^{{x_{n}} + 1}}}}. \end{aligned} $$
(4)

The right-hand sides of Eqs. (3) and (4) coincide, so the process \(\{ {X_{n}}({z_{n}}),{z_{n}}\}\) is a Markov chain. □
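As a small illustration of (4), here is a sketch of the one-step transition probability of the bivariate chain, assuming an illustrative 2-state environment with our own values of \(P_{M}\) and μ; note that, as (3) and (4) show, the probability does not depend on the previous count \(x_{n-1}\).

```python
import numpy as np

def transition_prob(x_next, i, j, P_M, mu):
    """One-step transition probability (4) of the bivariate chain
    {X_n(z_n), z_n}: p_ij * mu_j^x / (1 + mu_j)^(x + 1), with states
    numbered 1, ..., r as in E_r."""
    p_ij = P_M[i - 1, j - 1]
    mu_j = mu[j - 1]
    return p_ij * mu_j ** x_next / (1.0 + mu_j) ** (x_next + 1)

P_M = np.array([[0.8, 0.2],   # illustrative 2-state transition matrix
                [0.5, 0.5]])
mu = np.array([1.0, 2.0])     # illustrative (mu_1, mu_2)
print(transition_prob(x_next=3, i=1, j=2, P_M=P_M, mu=mu))
```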

Next we introduce some properties of the negative binomial thinning operator when the random variable \({X_{n - 1}}({z_{n - 1}})\) has a discrete distribution.

Lemma 2

Suppose that \(\{{\alpha_{n}}\}\) and the counting series \(\{W_{i}^{(n)}\}\) are independent of the random variable \({X_{n - 1}}({z_{n - 1}})\). Then

  1. (i)

    \(E({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}})) = \alpha E({X_{n - 1}}({z_{n - 1}}))\).

  2. (ii)

    \(E{({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}))^{2}} = ({\alpha ^{2}} + \sigma_{\alpha}^{2})E({X_{n - 1}}{({z_{n - 1}})^{2}}) + (\alpha + {\alpha^{2}} + \sigma_{\alpha}^{2})E({X_{n - 1}}({z_{n - 1}}))\).

Proof

(i) By the definition of the negative binomial thinning operator, we have

$$\begin{aligned} E\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) \bigr) &= E\Biggl(\sum_{i = 1}^{{X_{n - 1}}({z_{n - 1}})} {{W_{i}^{(n)}}} \Biggr) \\ &= E\bigl({W_{1}^{(n)}} {X_{n - 1}}({z_{n - 1}}) \bigr) \\ &= EE\bigl({W_{1}^{(n)}}| {{\alpha_{n}}} \bigr) \cdot E\bigl({X_{n - 1}}({z_{n - 1}})\bigr) \\ &= \alpha E\bigl({X_{n - 1}}({z_{n - 1}})\bigr) . \end{aligned} $$

(ii) Conditioning on both \({X_{n - 1}}({z_{n - 1}})\) and \({\alpha_{n}}\), we have that

$$\begin{gathered} E{\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}})\bigr)^{2}} \\ \quad= E\bigl[E\bigl\{ {\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) \bigr)^{2}}| {{X_{n - 1}}({z_{n - 1}}),{\alpha_{n}}} \bigr\} \bigr] \\ \quad= E\bigl\{ \operatorname{Var}\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) \bigl\vert {{X_{n - 1}}({z_{n - 1}}),{\alpha_{n}}} \bigr) + {E^{2}} \bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) \bigr\vert {{X_{n - 1}}({z_{n - 1}}),{\alpha_{n}}} \bigr)\bigr\} \\ \quad= E\bigl\{ {X_{n - 1}}({z_{n - 1}}){\alpha_{n}}(1 + {\alpha_{n}}) + X_{n - 1}^{2}({z_{n - 1}})\alpha_{n}^{2}\bigr\} \\ \quad= \bigl({\alpha^{2}} + \sigma_{\alpha}^{2}\bigr)E\bigl(X_{n - 1}^{2}({z_{n - 1}})\bigr) + \bigl(\alpha + {\alpha^{2}} + \sigma_{\alpha}^{2}\bigr)E\bigl({X_{n - 1}}({z_{n - 1}})\bigr), \end{gathered} $$

where we used \(\operatorname{Var}(W_{1}^{(n)}|{\alpha_{n}}) = {\alpha_{n}}(1 + {\alpha_{n}})\), \(E(W_{1}^{(n)}|{\alpha_{n}}) = {\alpha_{n}}\), \(E(\alpha_{n}^{2}) = {\alpha^{2}} + \sigma_{\alpha}^{2}\), and the independence of \({\alpha_{n}}\) and \({X_{n - 1}}({z_{n - 1}})\).

 □

Remark 2

Lemma 2 gives the first- and second-order moments of the sequence \({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}})\). Readers interested in methods for obtaining higher-order moments are referred to Du and Li [4] and Silva and Oliveira [17].
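The two identities of Lemma 2 are easy to check by Monte Carlo. The sketch below does so under an illustrative uniform law for \(\alpha_{n}\) and the geometric marginal of condition (vi); it is a numerical sanity check, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rep, mu = 100_000, 2.0
c, d = 0.1, 0.5                        # alpha_n ~ U(c, d), an illustrative choice
alpha = (c + d) / 2                    # E(alpha_n)
var_a = (d - c) ** 2 / 12              # Var(alpha_n)

x = rng.geometric(1 / (1 + mu), size=n_rep) - 1   # geometric marginal, mean mu
a = rng.uniform(c, d, size=n_rep)

def thin(a_n, x_n):
    # alpha_n * X: sum of X geometric counting variables with mean a_n
    return 0 if x_n == 0 else (rng.geometric(1 / (1 + a_n), size=x_n) - 1).sum()

t = np.array([thin(a_n, x_n) for a_n, x_n in zip(a, x)])

print(t.mean(), alpha * x.mean())                       # Lemma 2 (i)
print((t ** 2).mean(),                                  # Lemma 2 (ii)
      (alpha ** 2 + var_a) * (x ** 2).mean()
      + (alpha + alpha ** 2 + var_a) * x.mean())
```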

The following lemma gives some properties of the negative binomial thinning operator when \({\alpha_{n}} = 1\).

Lemma 3

The distribution of \(1 * {X_{n - 1}}({z_{n - 1}})\) under the state \({z_{n - 1}}\) can be written as

$$1 * {X_{n - 1}}({z_{n - 1}}) = \left \{ { \textstyle\begin{array}{l@{\quad}l} 0&{\textit{w.p. }\frac{1}{{1 + {\mu_{{z_{n - 1}}}}}};}\\ {{X_{n - 1}}({z_{n - 1}})}&{\textit{w.p. }\frac{{\mu_{{z_{n - 1}}}^{2}}}{{{{(1 + {\mu_{{z_{n - 1}}}})}^{2}}}};}\\ {{X_{n - 1}}({z_{n - 1}}) + {Y_{n - 1}}({z_{n - 1}})}&{\textit{w.p. }\frac{{{\mu _{{z_{n - 1}}}}}}{{{{(1 + {\mu_{{z_{n - 1}}}})}^{2}}}},} \end{array}\displaystyle } \right . $$

where the random variable \({{Y_{n - 1}}({z_{n - 1}})}\) has a geometric distribution with parameter \(\frac{{1 + {\mu_{{z_{n - 1}}}}}}{{2 + {\mu_{{z_{n - 1}}}}}}\), \({\mu_{{z_{n-1}}}} > 0\), and \({{Y_{n - 1}}({z_{n - 1}})}\) is independent of \({X_{n - 1}}({z_{n - 1}})\).

Proof

We consider the probability generating function of the sequence \({1 * {X_{n - 1}}({z_{n - 1}})}\):

$$\begin{aligned}& E\bigl({s^{1 * {X_{n - 1}}({z_{n - 1}})}}\bigr) \\& \quad= E \bigl( {{{\bigl(E{s^{{W_{1}}}}\bigr)}^{{X_{n - 1}}({z_{n - 1}})}}} \bigr) = E \biggl[ {{{ \biggl( {\frac{1}{{2 - s}}} \biggr)}^{{X_{n - 1}}({z_{n - 1}})}}} \biggr] \\& \quad= \sum_{x = 0}^{\infty}{{{ \biggl( { \frac{1}{{2 - s}}} \biggr)}^{x}} \cdot P\bigl({X_{n - 1}}({z_{n - 1}}) = x\bigr)} \\& \quad= {\sum_{x = 0}^{\infty}{ \biggl( { \frac{{{\mu_{{z_{n - 1}}}}}}{{(2 - s)(1 + {\mu_{{z_{n - 1}}}})}}} \biggr)} ^{x}} \cdot\frac {1}{{(1 + {\mu_{{z_{n - 1}}}})}} \\& \quad= \frac{1}{{1 - \frac{{{\mu_{{z_{n - 1}}}}}}{{(2 - s)(1 + {\mu _{{z_{n - 1}}}})}}}} \cdot\frac{1}{{(1 + {\mu_{{z_{n - 1}}}})}} = \frac{{2 - s}}{{2 + {\mu_{{z_{n - 1}}}} - (1 + {\mu_{{z_{n - 1}}}})s}} \\& \quad= \frac{1}{{1 + {\mu_{{z_{n - 1}}}}}} + \frac{{\mu_{{z_{n - 1}}}^{2}}}{{{{(1 + {\mu_{{z_{n - 1}}}})}^{2}}}} \cdot\frac{1}{{1 + {\mu _{{z_{n - 1}}}} - s{\mu_{{z_{n - 1}}}}}} \\& \quad\quad{} + \frac{{{\mu_{{z_{n - 1}}}}}}{{{{(1 + {\mu_{{z_{n - 1}}}})}^{2}}}} \cdot\frac{1}{{1 + {\mu_{{z_{n - 1}}}} - s{\mu_{{z_{n - 1}}}}}} \cdot\frac{1}{{2 + {\mu_{{z_{n - 1}}}} - (1 + {\mu_{{z_{n - 1}}}})s}} \\& \quad= \frac{1}{{1 + {\mu_{{z_{n - 1}}}}}} \cdot E\bigl({s^{0}}\bigr) + \frac{{\mu _{{z_{n - 1}}}^{2}}}{{{{(1 + {\mu_{{z_{n - 1}}}})}^{2}}}} \cdot E\bigl({s^{{X_{n - 1}}({z_{n - 1}})}}\bigr)\\& \qquad{} + \frac{{{\mu_{{z_{n - 1}}}}}}{{{{(1 + {\mu_{{z_{n - 1}}}})}^{2}}}} \cdot E \bigl({s^{{X_{n - 1}}({z_{n - 1}}) + {Y_{n - 1}}({z_{n - 1}})}}\bigr). \end{aligned}$$

This completes the proof. □
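The mixture representation can also be checked numerically: the sketch below compares the empirical probability generating function of \(1 * {X_{n-1}}({z_{n-1}})\) with the closed form obtained in the proof, for an illustrative value of \({\mu_{{z_{n-1}}}}\).

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n_rep, s = 1.5, 100_000, 0.5      # illustrative mu_{z_{n-1}} and pgf argument

x = rng.geometric(1 / (1 + mu), size=n_rep) - 1   # geometric marginal, mean mu
# 1 * X: each counting variable is geometric on {0, 1, ...} with mean 1,
# i.e. P(W = w) = 1 / 2^(w + 1).
t = np.array([(rng.geometric(0.5, size=xi) - 1).sum() if xi else 0 for xi in x])

print((s ** t).mean())                      # empirical pgf of 1 * X at s
print((2 - s) / (2 + mu - (1 + mu) * s))    # closed form from the proof
print((t == 0).mean(), 1 / (1 + mu))        # first branch of the mixture
```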

The moments, covariances, and correlation coefficients of the random variable \({X_{n}}({z_{n}})\) will be useful in obtaining the estimating equations for the Yule–Walker estimation. The moments and conditional moments are given by the following theorem.

Theorem 1

Let \(\{ {X_{n}}({z_{n}})\} \), \({z_{n}} \in{E_{r}}\), be the RrRCINAR(1) process, and let \({\mu_{1}} > 0\), \({\mu_{2}} > 0\), …, \({\mu _{r}} > 0\). For \(n \in\mathbb{N}\),

  1. (i)

    \(E ( {{X_{n}}({z_{n}})} ) = {\mu_{{z_{n}}}}\);

  2. (ii)

    \(E({X_{n}}{(}{z_{n}}{)|}{X_{n - 1}}({z_{n - 1}})) = \alpha \cdot{X_{n - 1}}{(}{z_{n-1}}{)} + {\mu_{{\varepsilon _{n}}({z_{n - 1}},{z_{n}})}}\);

  3. (iii)

    \(\operatorname{Var}({X_{n}}({z_{n}})) = {\mu_{{z_{n}}}}(1 + {\mu_{{z_{n}}}})\);

  4. (iv)

    \(\operatorname{Var}({X_{n}}({z_{n}})|{X_{n - 1}}({z_{n - 1}}),{\alpha_{n}}) = {\alpha _{n}}({\alpha_{n}} + 1){X_{n - 1}}({z_{n - 1}}) + \sigma_{{\varepsilon _{n}}({z_{n - 1}},{z_{n}})}^{2}\);

  5. (v)

    \(\operatorname{Var}({X_{n}}({z_{n}})|{X_{n - 1}}({z_{n - 1}})) = (\alpha + {\alpha ^{2}} + \sigma_{\alpha}^{2}){X_{n - 1}}({z_{n - 1}}) + \sigma_{\alpha}^{2} \cdot X_{n - 1}^{2}({z_{n - 1}}) + \sigma_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}^{2}\);

  6. (vi)

    \(\gamma_{n}^{(k)} = {\alpha^{k}}(\mu_{{z_{n - k}}}^{2} + {\mu_{{z_{n - k}}}})\);

  7. (vii)

    \(\rho_{n}^{(k)} = {\alpha^{k}}\sqrt{\frac{{\mu_{{z_{n - k}}}^{2} + {\mu_{{z_{n - k}}}}}}{{\mu_{{z_{n}}}^{2} + {\mu_{{z_{n}}}}}}} \).

Proof

(i) Let \(\varPhi(s)\) be the probability generating function of the random variable \({X_{n}}({z_{n}})\); we obtain the following:

$$\begin{aligned} \varPhi(s)&= E \bigl( {{s^{{X_{n}}({z_{n}})}}} \bigr) \\ &= \sum_{x = 0}^{\infty}{{s^{x}}\cdot P \bigl( {{X_{n}}({z_{n}}) = x} \bigr)} \\ &= \sum_{x = 0}^{\infty}{{{ \biggl( { \frac{{s\cdot{\mu _{{z_{n}}}}}}{{1 + {\mu_{{z_{n}}}}}}} \biggr)}^{x}}} \cdot\frac{1}{{1 + {\mu _{{z_{n}}}}}} \\ &= \frac{1}{{1 - \frac{{s\cdot{\mu_{{z_{n}}}}}}{{1 + {\mu _{{z_{n}}}}}}}}\cdot\frac{1}{{1 + {\mu_{{z_{n}}}}}} \\ &= \frac{1}{{1 + {\mu_{{z_{n}}}} - s\cdot{\mu_{{z_{n}}}}}}, \end{aligned} $$

so we have \(\varPhi'(1) = {\mu_{{z_{n}}}}\). By the property of the probability generating function, \(E ({X_{n}}({z_{n}}) ) = \varPhi'( 1)\), which gives the expectation of the random variable \({X_{n}}({z_{n}})\).

(ii) From the law of total expectation and the independence of \({{X_{n - 1}}({z_{n - 1}})}\) and \({W_{i}^{(n)}}\), for arbitrary \(n \in\mathbb {N}\), we have

$$\begin{gathered} E\bigl({X_{n}}({z_{n}})|{{X_{n-1}}({z_{n-1}})} \bigr) \\ \quad= E\bigl({\alpha_{n}}*{X_{n - 1}}({z_{n - 1}}) + { \varepsilon_{n}}({z_{n - 1}},{z_{n}})| {{X_{n - 1}}({z_{n - 1}})} \bigr) \\ \quad= E\bigl({\alpha_{n}}*{X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}})} \bigr) + {\mu_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}} \\ \quad= E\Biggl(\sum_{i = 1}^{{X_{n - 1}}({z_{n - 1}})} {W_{i}^{(n)}} | {{X_{n - 1}}({z_{n - 1}})} \Biggr) + {\mu_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}} \\ \quad= E\bigl({X_{n - 1}}({z_{n - 1}})\cdot W_{1}^{(n)}| {{X_{n - 1}}({z_{n - 1}})} \bigr) + {\mu_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}} \\ \quad= {X_{n - 1}}({z_{n - 1}})E\bigl(W_{1}^{(n)} \bigr) + {\mu_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}} \\ \quad= {X_{n - 1}}({z_{n - 1}})EE\bigl(W_{1}^{(n)}| {{\alpha_{n}}} \bigr) + {\mu_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}} \\ \quad= \alpha\cdot{X_{n - 1}}({z_{n - 1}}) + {\mu_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}}, \end{gathered} $$

where \({\mu_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}}\) is expectation of the random variable \({\varepsilon_{n}}({z_{n - 1}},{z_{n}})\).

(iii) Because \(\varPhi'(1) = {\mu_{{z_{n}}}}\) and \(\varPhi''(1) = 2\mu_{{z_{n}}}^{2}\), applying the properties of the probability generating function, we have

$$\begin{aligned} \operatorname{Var}\bigl({X_{n}}({z_{n}})\bigr) &= \varPhi''(1) + \varPhi'(1) \bigl(1 - \varPhi'(1)\bigr) \\ &= 2\mu_{{z_{n}}}^{2} + {\mu_{{z_{n}}}}(1 - { \mu_{{z_{n}}}}) \\ &= {\mu_{{z_{n}}}}(1 + {\mu_{{z_{n}}}}). \end{aligned} $$

(iv) By using the definition of negative binomial thinning operator and the variance of random variables \({{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}\), we have

$$\begin{gathered} \operatorname{Var}\bigl({X_{n}}({z_{n}})| {{X_{n - 1}}({z_{n - 1}}),{\alpha_{n}}} \bigr) \\ \quad= \operatorname{Var}\bigl({\alpha_{n}}*{X_{n - 1}}({z_{n - 1}}) + { \varepsilon_{n}}({z_{n - 1}},{z_{n}})| {{X_{n - 1}}({z_{n - 1}}),{\alpha_{n}}} \bigr) \\ \quad= \operatorname{Var}\bigl({\alpha_{n}}*{X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}}),{\alpha_{n}}} \bigr) + \sigma_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}^{2} \\ \quad= \operatorname{Var}\Biggl(\sum_{i = 1}^{{X_{n - 1}}({z_{n - 1}})} {W_{i}^{(n)}| {{X_{n - 1}}({z_{n - 1}}),{ \alpha_{n}}} } \Biggr) + \sigma _{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}^{2} \\ \quad= {\alpha_{n}}(1 + {\alpha_{n}}){X_{n - 1}}({z_{n - 1}}) + \sigma _{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}^{2}. \end{gathered} $$

(v) Next, we derive the conditional variance of the random variable \({X_{n}}({z_{n}})\) on \({{X_{n - 1}}({z_{n - 1}})}\):

$$\begin{gathered} \operatorname{Var}\bigl({X_{n}}({z_{n}})| {{X_{n - 1}}({z_{n - 1}})} \bigr) \\ \quad= \operatorname{Var}\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) + { \varepsilon_{n}}({z_{n - 1}},{z_{n}})| {{X_{n - 1}}({z_{n - 1}})} \bigr) \\ \quad= \operatorname{Var}\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}})} \bigr) + \sigma_{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}^{2} \\ \quad= {E_{{\alpha_{n}}}}\bigl(\operatorname{Var}\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}}),} {\alpha_{n}}\bigr)\bigr) \\ \quad\quad{}+ \operatorname{Var}_{{\alpha_{n}}}\bigl(E\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}}),{ \alpha_{n}}} \bigr)\bigr) + \sigma _{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}^{2}. \end{gathered} $$

Denoting by \({E_{{\alpha_{n}}}}\) and \(\operatorname{Var}_{{\alpha_{n}}}\) the expectation and variance taken with respect to the random variable \(\alpha_{n}\), we have

$$\begin{gathered} {E_{{\alpha_{n}}}}\bigl(\operatorname{Var}\bigl({ \alpha_{n}} * {X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}}),} {\alpha_{n}}\bigr)\bigr) \\ \quad= {E_{{\alpha_{n}}}}\Biggl(\operatorname{Var}\Biggl(\sum_{i = 1}^{{X_{n - 1}}({z_{n - 1}})} {W_{i}^{(n)}| {{X_{n - 1}}({z_{n - 1}}),{ \alpha_{n}}} } \Biggr)\Biggr) \\ \quad= {E_{{\alpha_{n}}}}\bigl(\bigl({\alpha_{n}} + \alpha_{n}^{2} \bigr) \cdot{X_{n - 1}}({z_{n - 1}})\bigr) \\ \quad= \bigl(\alpha + {\alpha^{2}} + \sigma_{\alpha}^{2} \bigr){X_{n - 1}}({z_{n - 1}}) \end{gathered} $$

and

$$\begin{aligned} \operatorname{Var}_{{\alpha_{n}}}\bigl(E\bigl({ \alpha_{n}} * {X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}}),} {\alpha_{n}}\bigr)\bigr) &= \operatorname{Var}_{{\alpha _{n}}}\bigl({\alpha_{n}} \cdot{X_{n - 1}}({z_{n - 1}}) \bigr) \\ &= \sigma_{\alpha}^{2}X_{n - 1}^{2}({z_{n - 1}}). \end{aligned} $$

Therefore

$$\begin{gathered} \operatorname{Var}\bigl({X_{n}}({z_{n}})| {{X_{n - 1}}({z_{n - 1}})} \bigr) \\ \quad= {E_{{\alpha_{n}}}}\bigl(\operatorname{Var}\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}}),} {\alpha_{n}}\bigr)\bigr) \\ \qquad{}+\operatorname{Var}_{{\alpha_{n}}}\bigl(E\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}})| {{X_{n - 1}}({z_{n - 1}}),{\alpha_{n}}} \bigr)\bigr) + \sigma _{{\varepsilon_{n}}({z_{n - 1}},{z_{n}})}^{2} \\ \quad= \bigl(\alpha + {\alpha^{2}} + \sigma_{\alpha}^{2} \bigr){X_{n - 1}}({z_{n - 1}})+\sigma_{\alpha}^{2}X_{n - 1}^{2}({z_{n - 1}})+ \sigma_{{\varepsilon _{n}}({z_{n - 1}},{z_{n}})}^{2}. \end{gathered} $$

(vi) By repeated application of the law of total expectation,

$$\begin{gathered} E\bigl({\alpha_{n}}*{\alpha_{n - 1}}* \cdots *{\alpha_{n - k + 1}}*{X_{n - k}}({z_{n - k}})\bigr) \\ \quad= EE\bigl({\alpha_{n}}*{\alpha_{n - 1}}* \cdots *{ \alpha_{n - k + 1}}*{X_{n - k}}({z_{n - k}})| {{ \alpha_{n - 1}}* \cdots *{\alpha _{n - k + 1}}*{X_{n - k}}({z_{n - k}})} \bigr) \\ \quad= E\bigl(\alpha\bigl({\alpha_{n - 1}}* \cdots *{\alpha_{n - k + 1}}*{X_{n - k}}({z_{n - k}}) \bigr)\bigr) \\ \quad\cdots \\ \quad= {\alpha^{k}}\cdot E\bigl({X_{n - k}}({z_{n - k}}) \bigr). \end{gathered} $$

By a similar approach, we have

$$\begin{aligned} E\bigl({X_{n - k}}({z_{n - k}}) \cdot \bigl({\alpha_{n}} * {\alpha_{n - 1}} * \cdots * { \alpha_{n - k + 1}} * {X_{n - k}}({z_{n - k}})\bigr)\bigr)= { \alpha ^{k}}E\bigl(X_{n - k}^{2}({z_{n - k}}) \bigr). \end{aligned} $$

By the independence of \({X_{n - k}}({z_{n - k}})\) and \(\{{\varepsilon _{n - k}}({z_{n - k - 1}},{z_{n - k}}),k \ge0 \}\), for \(n > k\), we have

$$\begin{aligned} \gamma_{n}^{(k)} &= \operatorname{Cov} \bigl({X_{n}}({z_{n}}),{X_{n - k}}({z_{n - k}}) \bigr) \\ &= \operatorname{Cov}\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) + { \varepsilon_{n}}({z_{n - 1}},{z_{n}}),{X_{n - k}}({z_{n - k}}) \bigr) \\ &= \operatorname{Cov}\bigl({\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}),{X_{n - k}}({z_{n - k}}) \bigr) + \operatorname{Cov}\bigl({\varepsilon_{n}}({z_{n - 1}},{z_{n}}),{X_{n - k}}({z_{n - k}}) \bigr) \\ &= \operatorname{Cov}\bigl({\alpha_{n}} * \bigl({\alpha_{n - 1}} * {X_{n - 2}}({z_{n - 2}}) + {\varepsilon_{n - 1}}({z_{n - 2}},{z_{n - 1}}) \bigr),{X_{n - k}}({z_{n - k}})\bigr) \\ &= \operatorname{Cov}\bigl({\alpha_{n}} * {\alpha_{n - 1}} * {X_{n - 2}}({z_{n - 2}}),{X_{n - k}}({z_{n - k}}) \bigr) \\ &\cdots \\ &= \operatorname{Cov}\bigl({\alpha_{n}} * {\alpha_{n - 1}} * { \alpha_{n - 2}} * \cdots * {\alpha_{n - k + 1}} * {X_{n - k}}({z_{n - k}}),{X_{n - k}}({z_{n - k}}) \bigr) \\ &= E\bigl({X_{n - k}}({z_{n - k}}) \cdot\bigl({ \alpha_{n}} * {\alpha_{n - 1}} * {\alpha_{n - 2}} * \cdots * {\alpha_{n - k + 1}} * {X_{n - k}}({z_{n - k}})\bigr) \bigr) \\ &\quad{}- E\bigl({X_{n - k}}({z_{n - k}})\bigr) \cdot E\bigl({ \alpha_{n}} * {\alpha_{n - 1}} * {\alpha_{n - 2}} * \cdots * {\alpha_{n - k + 1}} * {X_{n - k}}({z_{n - k}})\bigr) \\ &= {\alpha^{k}}E\bigl(X_{n - k}^{2}({z_{n - k}}) \bigr) - {\alpha^{k}} {E^{2}}\bigl({X_{n - k}}({z_{n - k}}) \bigr) \\ &= {\alpha^{k}}\operatorname{Var}\bigl({X_{n - k}}({z_{n - k}})\bigr) \\ &= {\alpha^{k}}\bigl(\mu_{{z_{n - k}}}^{2} + { \mu_{{z_{n - k}}}}\bigr). \end{aligned} $$

(vii) From (vi) in Theorem 1, we have

$$\rho_{n}^{(k)} = \frac{{\gamma_{n}^{(k)}}}{{\sqrt{\gamma_{n}^{(0)} \cdot \gamma_{n - k}^{(0)}} }} = \frac{{{\alpha^{k}}(\mu_{{z_{n - k}}}^{2} + {\mu_{{z_{n - k}}}})}}{{\sqrt{(\mu_{{z_{n}}}^{2} + {\mu_{{z_{n}}}})(\mu _{{z_{n - k}}}^{2} + {\mu_{{z_{n - k}}}})} }} = { \alpha^{k}}\sqrt{\frac {{\mu_{{z_{n - k}}}^{2} + {\mu_{{z_{n - k}}}}}}{{\mu_{{z_{n}}}^{2} + {\mu _{{z_{n}}}}}}} . $$

We complete the proof of this part. □

3 Yule–Walker estimation

Now we investigate the Yule–Walker estimators for the RrRCINAR(1) model. Since the marginal distribution of the RrRCINAR(1) model varies with the environment state, we cannot use the Yule–Walker estimation method directly. Therefore, in order to use the Yule–Walker estimation method, we work with samples belonging to the same environment state.

Let us assume that the data in the same cluster are observed under the same state. Select a sample of size N from model (2), \({X_{1}}({z_{1}}),{X_{2}}({z_{2}}), \ldots,{X_{N}}({z_{N}})\). Whenever there exist \(i,n \in\mathbb{N}\) such that \({z_{i}} \ne k\), \({z_{i + 1}} = {z_{i + 2}} = \cdots = {z_{n}} = k\), and \({z_{n + 1}} \ne k\) for some \(k \in{E_{r}}\), we call \({X_{i + 1}}(k),{X_{i + 2}}(k), \ldots,{X_{n}}(k)\) a subsample corresponding to the environment k; that is, within the stretch \({X_{i}}(j),{X_{i + 1}}(k),{X_{i + 2}}(k), \ldots,{X_{n}}(k), {X_{n + 1}}(l)\), \(j \ne k\), \(l \ne k\), \(j, k, l \in{E_{r}}\), a subsample is a maximal run of consecutive observations in the same state.

The sample \({X_{1}}({z_{1}}),{X_{2}}({z_{2}}), \ldots,{X_{N}}({z_{N}})\) can be partitioned into subsamples with different states. For \(k \in\{ 1,2, \ldots,r\} \), let

$${I_{k}} = \bigl\{ i \in\{ 1,2, \ldots,N\}| {z_{i}} = k \bigr\} $$

be the index set of the sample \({X_{1}}({z_{1}}),{X_{2}}({z_{2}}),\ldots ,{X_{N}}({z_{N}})\) corresponding to the environment k.

Let \({n_{k}}\) be the number of observations of the sample \({X_{1}}({z_{1}}),{X_{2}}({z_{2}}), \ldots,{X_{N}}({z_{N}})\) in the circumstance k; then we have

$$\bigcup_{k = 1}^{r} {{I_{k}}} = \{ 1,2 ,\ldots,N\},\quad | {I_{k}} | = {n_{k}}, {n_{1}} + {n_{2}} + \cdots+{n_{r}} = N. $$

Let

$${U_{k}} = \bigl({X_{{k_{1}}}}(k),{X_{{k_{2}}}}(k), \ldots ,{X_{{k_{{n_{k}}}}}}(k) \bigr),\quad {k_{i}} \in{I_{k}}, {k_{i}} < {k_{i + 1}}, \forall i \in\{ 1,2, \ldots,{n_{k}} - 1\}, $$

which collects the observations in state k. The subsamples are denoted \({U_{k,1}},{U_{k,2}}, \ldots,{U_{k,{i_{k}}}}\), where the second index records the order of the subsamples within state k; a splitting sketch is given below.
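The splitting into subsamples is just a run-length decomposition of the state sequence; a minimal sketch follows (the function name and the data are ours, for illustration):

```python
import numpy as np

def split_into_subsamples(x, z):
    """Split the sample into the subsamples U_{k,1}, U_{k,2}, ...: maximal
    runs of consecutive observations under a constant state.  Returns a
    list of (state, values) pairs in order of occurrence."""
    runs, start = [], 0
    for i in range(1, len(z) + 1):
        if i == len(z) or z[i] != z[start]:
            runs.append((z[start], np.asarray(x[start:i])))
            start = i
    return runs

x = [3, 5, 4, 1, 0, 2, 6, 4]     # illustrative counts
z = [1, 1, 1, 2, 2, 1, 1, 1]     # illustrative states: runs U_{1,1}, U_{2,1}, U_{1,2}
for state, values in split_into_subsamples(x, z):
    print(state, values)
```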

Similar to Zhang [12], the Yule–Walker estimators of the set \({U_{k,l}}\), namely the sample mean \({\hat{\mu}_{k,l}}\), the sample variance \(\hat{\gamma}_{0,l}^{(k)}\), and the first-order sample covariance \(\hat{\gamma}_{{1,}l}^{(k)}\), are given by

$$\begin{gathered} {\hat{\mu}_{k,l}} = \frac{1}{{{n_{k,l}}}}\sum_{i \in {J_{k,l}}} {{X_{i}}(k)}, \\ \hat{\gamma}_{0,l}^{ ( k )} = \frac{1}{{{n_{k,l}}}}\sum _{i \in{J_{k,l}}} {{{ \bigl({X_{i}} ( k ) - {{ \hat{\mu}}_{k,l}} \bigr)}^{2}}} , \\ \hat{\gamma}_{{1},l}^{(k)} = \frac{1}{{{n_{k,l}}}}\sum _{\{ i,i + 1\} \subseteq{J_{k,l}}} { \bigl({X_{i + 1}}(k) - {{\hat{\mu}}_{k,l}} \bigr) \bigl({X_{i}}(k) - {{\hat{\mu}}_{k,l}} \bigr)} , \end{gathered}$$

where \({J_{k,l}} = \{ i \in\{ 1,2, \ldots,N\} |{X_{i}}({z_{i}}) \in {U_{k,l}}\} \), \(|{J_{k,l}}| = {n_{k,l}}\), \({n_{k,1}} + {n_{k,2}} + \cdots+{n_{k,{i_{k}}}} = {n_{k}}\). By the inequality \(0 < {\alpha^{2}} + \sigma_{\alpha}^{2} < 1\), the estimators \({\hat{\mu}_{k,l}}\), \(\hat{\gamma}_{0,l}^{(k)}\), and \(\hat{\gamma}_{{1,}l}^{(k)}\) are strongly consistent.

We can obtain the Yule–Walker estimators \({\hat{\mu}_{k}}\), \(\hat{\gamma}_{0}^{(k)}\), and \(\hat{\gamma}_{1}^{(k)}\) of the set \({U_{k}}\) in the same way; these estimators are defined as follows:

$$\begin{gathered} {\hat{\mu}_{k}} = \frac{1}{{{n_{k}}}}\sum_{i \in{I_{k}}} {{X_{i}}(k)} , \\ \hat{\gamma}_{0}^{(k)} = \frac{1}{{{n_{k}}}}\sum _{i \in{I_{k}}} {{{ \bigl({X_{i}}(k) - {{\hat{\mu}}_{k}} \bigr)}^{2}}}, \\ \hat{\gamma}_{1}^{(k)} = \frac{1}{{{n_{k}}}}\sum _{\{ i,i + 1\} \subseteq{I_{k}}} { \bigl({X_{i + 1}}(k) - {{\hat{\mu}}_{k}} \bigr) \bigl({X_{i}}(k) - {{\hat{\mu}}_{k}} \bigr)} .\end{gathered} $$
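Here is a sketch of these three statistics for a given state k (our own helper, with illustrative data); note that the lag-1 products run only over consecutive index pairs \(\{i, i+1\}\) that both lie in \(I_{k}\), i.e. within subsamples.

```python
import numpy as np

def yule_walker_stats(x, z, k):
    """mu_hat_k, gamma0_hat_k and gamma1_hat_k for state k, with the lag-1
    sum restricted to pairs {i, i+1} contained in the index set I_k."""
    x, z = np.asarray(x, dtype=float), np.asarray(z)
    idx = np.flatnonzero(z == k)                 # the index set I_k
    n_k = len(idx)
    mu_hat = x[idx].mean()
    gamma0 = ((x[idx] - mu_hat) ** 2).sum() / n_k
    pairs = idx[np.isin(idx + 1, idx)]           # i with both i and i+1 in I_k
    gamma1 = ((x[pairs + 1] - mu_hat) * (x[pairs] - mu_hat)).sum() / n_k
    return mu_hat, gamma0, gamma1

x = [3, 5, 4, 1, 0, 2, 6, 4]
z = [1, 1, 1, 2, 2, 1, 1, 1]
print(yule_walker_stats(x, z, k=1))
```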

Next, we prove the strong consistency of estimators \({\hat{\mu}_{k}}\), \(\hat{\gamma}_{0}^{(k)}\), and \(\hat{\gamma}_{1}^{(k)}\).

Theorem 2

The estimators \({\hat{\mu}_{k}}\), \(\hat{\gamma}_{0}^{(k)}\), and \(\hat{\gamma}_{1}^{(k)}\) under the circumstance k are strongly consistent.

Proof

First, let us prove that the estimator \({\hat{\mu}_{k}}\) is strongly consistent, that is, \(P({\hat{\mu}_{k}} \to{\mu_{k}}, {n_{k}} \to \infty) = 1\). We assume that

$$\begin{gathered} {n_{k,l}} \to\infty,\quad l \in \{ {1,2, \ldots,d} \}, \\ {n_{k,j}} \to{c_{j}} < \infty,\quad j \in \{ {d + 1,d + 2, \ldots ,{i_{k}}} \},\end{gathered} $$

where \({n_{k}} = {n_{k,1}} + {n_{k,2}} + \cdots + {n_{k,{i_{k}}}}\), as \({n_{k}} \to\infty\).

It holds that

$$\begin{aligned} {{\hat{\mu}}_{k}} &= \frac{1}{{{n_{k}}}}\sum _{i \in{I_{k}}} {{X_{i}}(k)} = \frac{1}{{{n_{k}}}} \sum_{l = 1}^{{i_{k}}} {\sum _{i \in {J_{k,l}}} {{X_{i}}(k)} } = \sum _{l = 1}^{{i_{k}}} {\frac {{{n_{k,l}}}}{{{n_{k}}}}\frac{{1}}{{{n_{k.l}}}}\sum _{i \in {J_{k,l}}} {{X_{i}}(k)} } \\ &= \sum_{l = 1}^{{i_{k}}} {\frac{{{n_{k,l}}}}{{{n_{k}}}}{{\hat{\mu}}_{k,l}}}= \sum_{l = 1}^{d} { \frac{{{n_{k,l}}}}{{{n_{k}}}}} {{\hat{\mu}}_{k,l}} + \sum _{l = d + 1}^{{i_{k}}} {\frac {{{n_{k,l}}}}{{{n_{k}}}}} {{\hat{\mu}}_{k,l}} , \end{aligned} $$

because \(\lim _{{n_{k}} \to\infty} \frac {{{n_{k,j}}}}{{{n_{k}}}} = 0\), \(j \in \{ {d + 1,d + 2, \ldots,{i_{k}}} \}\), asymptotically we can write

$${{\hat{\mu}}_{k}}=\sum_{l = 1}^{d} { \frac{{{n_{k,l}}}}{{{n_{k}}}}} {{\hat{\mu}}_{k,l}}. $$

Estimator \({\hat{\mu}_{k,l}}\) is strongly consistent, that is,

$$P \bigl({\hat{\mu}_{k,l}} \to{\mu_{k}},{n_{k,l}} \to \infty,\forall l \in\{ 1,2, \ldots,d\} \bigr)= 1. $$

The sum \(\sum_{j = d + 1}^{{i_{k}}} {{n_{k,j}}} \to\sum_{j = d + 1}^{{i_{k}}} {{c_{j}}} \) is finite and hence negligible compared with \({n_{k,l}} \to \infty\), so asymptotically we can take \({n_{k}} = {n_{k,1}} + {n_{k,2}} + \cdots + {n_{k,d}}\). This implies

$$\begin{aligned} {{\hat{\mu}}_{k}} &= \sum_{l = 1}^{d} {\frac{{{n_{k,l}}}}{{{n_{k}}}}} {{\hat{\mu}}_{k,l}} = \sum_{l = 1}^{d} {\frac{{{n_{k,l}}}}{{{n_{k}}}}} \bigl({\mu_{k}} + o(1)\bigr) = \sum_{l = 1}^{d} {\frac{{{n_{k,l}}}}{{{n_{k}}}}{\mu_{k}}} + \sum_{l = 1}^{d} {\frac{{{n_{k,l}}}}{{{n_{k}}}}o(1)} \\ &= {\mu_{k}}\frac{{\sum_{l = 1}^{d} {{n_{k,l}}} }}{{{n_{k}}}} + \sum_{l = 1}^{d} {\frac{{{n_{k,l}}}}{{{n_{k}}}}o(1)} = {\mu_{k}} + \sum_{l = 1}^{d} {\frac{{{n_{k,l}}}}{{{n_{k}}}}o(1)}, \end{aligned} $$

where \(o(1)\) denotes a quantity that tends to zero almost surely as \({n_{k,l}} \to\infty\), and \(0 \le\frac{{{n_{k,l}}}}{{{n_{k}}}} \le1\) for all l. Therefore \({{\hat{\mu}}_{k}} \to{\mu_{k}}\) as \({n_{k,l}} \to\infty\), \(\forall l \in\{ 1,2, \ldots,d\}\).

According to the assumptions, we can get

$${{n_{k}} \to\infty}\quad \Leftrightarrow\quad {{n_{k,l}} \to\infty,\quad\forall l \in \{ {1,2, \ldots,d} \}}, $$

and then

$$\lim _{{n_{k,l}} \to\infty,\forall l \in \{ {1,2, \ldots,d} \}} {\hat{\mu}_{k}}= \lim _{{n_{k}} \to\infty} {\hat{\mu}_{k}} ={\mu_{k}}. $$

This completes the proof that the estimator \({\hat{\mu}_{k}}\) is strongly consistent. The proofs for the estimators \(\hat{\gamma}_{0}^{(k)}\) and \(\hat{\gamma}_{{1}}^{(k)}\) are similar. □

The parameter estimator α̂ of α can be expressed as

$$\hat{\alpha}= \sum_{k = 1}^{r} { \frac{{{n_{k}}}}{N}} { \hat{\alpha}_{k}}, $$

where

$${\hat{\alpha}_{k}} = \frac{{\hat{\gamma}_{1}^{(k)}}}{{\hat{\gamma}_{0}^{(k)}}}. $$

Theorem 3

The estimators α̂ and \({\hat{\alpha}_{k}}\) are strongly consistent.

Proof

The strong consistency of the estimator \({\hat{\alpha}_{k}}\) follows from the strong consistency of the estimators \(\hat{\gamma}_{0}^{(k)}\) and \(\hat{\gamma}_{{1}}^{(k)}\).

Next, we prove the strong consistency of the estimator \(\hat{\alpha}= \sum_{k = 1}^{r} {\frac{{{n_{k}}}}{N}} {\hat{\alpha}_{k}}\) as \({{n_{k}} \to\infty}\). Because \({\hat{\alpha}_{k}}\) is strongly consistent and \(( {N \to\infty} ) \Leftrightarrow ( {{n_{k}} \to \infty,\forall k \in\{ 1,2, \ldots,r\} } )\), the strong consistency of α̂ follows by using the subadditivity of probability. □

The remaining estimator \({\hat{\mu}_{{\varepsilon_{n}}(i,j)}}\) of \({ \mu _{{\varepsilon_{n}}(i,j)}}\) is given by

$${{\hat{\mu}}_{{\varepsilon_{n}}(i,j)}} = {{\hat{\mu}}_{j}} - \hat{\alpha}\cdot{{ \hat{\mu}}_{i}}. $$
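Putting the pieces together, here is a sketch of the full Yule–Walker estimation of α and \({\mu_{{\varepsilon_{n}}(i,j)}}\) from one observed path (the function name is ours, for illustration):

```python
import numpy as np

def yw_estimators(x, z, states):
    """alpha_hat_k = gamma1_hat_k / gamma0_hat_k per state, combined as
    alpha_hat = sum_k (n_k / N) alpha_hat_k, and then
    mu_hat_eps(i, j) = mu_hat_j - alpha_hat * mu_hat_i."""
    x, z = np.asarray(x, dtype=float), np.asarray(z)
    N, mu_hat, alpha_hat = len(x), {}, 0.0
    for k in states:
        idx = np.flatnonzero(z == k)
        m = x[idx].mean()
        g0 = ((x[idx] - m) ** 2).sum() / len(idx)
        pairs = idx[np.isin(idx + 1, idx)]       # lag-1 pairs inside state k
        g1 = ((x[pairs + 1] - m) * (x[pairs] - m)).sum() / len(idx)
        mu_hat[k] = m
        alpha_hat += (len(idx) / N) * (g1 / g0)
    eps_mean = {(i, j): mu_hat[j] - alpha_hat * mu_hat[i]
                for i in states for j in states}
    return alpha_hat, mu_hat, eps_mean
```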

Theorem 4

The estimator \({\hat{\mu}_{{\varepsilon_{n}}(i,j)}}\) is strongly consistent.

Proof

The strong consistency of the estimator \({\hat{\mu}_{{\varepsilon_{n}}(i,j)}}\) follows immediately from the strong consistency of the estimators \({\hat{\mu}_{j}}\), \({\hat{\mu}_{i}}\), and α̂. □

Let

$${P_{M}} = \left ( { \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} {{p_{11}}} & {{p_{12}}} & \cdots & {{p_{1r}}} \\ {{p_{21}}} & {{p_{22}}} & \cdots & {{p_{2r}}} \\ \vdots & \vdots & \ddots & \vdots \\ {{p_{r1}}} & {{p_{r2}}} & \cdots & {{p_{rr}}} \end{array}\displaystyle } \right ) $$

be a transition probability matrix, where r is the number of states, and \({p_{kj}}\), \(k,j \in\{ 1,2, \ldots,r\}\) is the transition probability from state k to state j. The estimator of \({p_{kj}}\) is defined as

$${{\hat{p}}_{kj}} = \frac{{{n_{kj}}}}{{{n_{k}}}}, $$

where \({n_{k}}\) is the number of observations in state k, and \({n_{kj}}\) is the number of one-step transitions from state k to state j. We can easily reach the following conclusion (a counting sketch is given after the theorem).

Theorem 5

The estimator \({{\hat{p}}_{kj}}\) is strongly consistent.
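A counting sketch of \({{\hat{p}}_{kj}}\) (our own helper); here \(n_{k}\) counts the departures from state k along the path, which differs from the \(n_{k}\) above by at most the final observation.

```python
import numpy as np

def estimate_transition_matrix(z, r):
    """p_hat_kj = n_kj / n_k: relative frequency of one-step transitions
    from state k to state j along the observed state sequence."""
    counts = np.zeros((r, r))
    for a, b in zip(z[:-1], z[1:]):
        counts[a - 1, b - 1] += 1
    n_k = counts.sum(axis=1, keepdims=True)
    return counts / np.where(n_k == 0, 1.0, n_k)

z = [1, 1, 2, 2, 1, 1, 1, 2, 1]      # illustrative state sequence
print(estimate_transition_matrix(z, r=2))
```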

4 Model simulation

In this section we consider the following model:

$${X_{n}}({z_{n}}) = {\alpha_{n}} * {X_{n - 1}}({z_{n - 1}}) + {\varepsilon _{n}}({z_{n - 1}},{z_{n}}), $$

where \(\alpha_{n}\) has a uniform distribution on \((c,d)\), \(0 < c < d <1\), with expectation \(E({\alpha_{n}}) =\alpha = \frac{{c + d}}{2}\).

The distribution of the random variable \({\varepsilon_{n}}({z_{n - 1}},{z_{n}})\) is defined as

$${\varepsilon_{n}}({z_{n - 1}},{z_{n}}) = \left \{ { \textstyle\begin{array}{l@{\quad}l} {\operatorname{Geom} ( {\frac{{{\mu_{{z_{n}}}}}}{{1 + {\mu_{{z_{n}}}}}}} ),}&{\text{w.p. }1 - \frac{{\alpha{\mu_{{z_{n - 1}}}}}}{{{\mu_{{z_{n}}}} - \alpha}};}\\ {\operatorname{Geom} ( {\frac{\alpha}{{1 + \alpha}}} ),}&{\text{w.p. }\frac {{\alpha{\mu_{{z_{n - 1}}}}}}{{{\mu_{{z_{n}}}} - \alpha}},} \end{array}\displaystyle } \right . $$

where \(\alpha = \frac{{c + d}}{2}\) and the expectation of the random variable \({\varepsilon_{n}}({z_{n - 1}},{z_{n}})\) is \({\mu_{{\varepsilon _{n}}({z_{n - 1}},{z_{n}})}} = {\mu_{{z_{n}}}} - \alpha{\mu_{{z_{n - 1}}}}\).
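Here is a sketch of the whole data-generating mechanism used in this section, reading \(\operatorname{Geom}(\theta/(1+\theta))\) as the geometric law on \(\{0,1,\ldots\}\) with mean θ, as in condition (vi). The function name and the case (a) parameter values in the usage lines are ours; the mixture weight requires \({\mu_{j}} > \alpha\) and \(\alpha{\mu_{i}}/({\mu_{j}} - \alpha) \le 1\) for all states i, j.

```python
import numpy as np

def simulate_rrrcinar1(n, mu, P_M, P_V, c, d, rng):
    """Generate a path of model (2): the environment chain {Z_n} is realized
    first (one step ahead of the counts); each count is then obtained by
    negative binomial thinning with alpha_n ~ U(c, d) plus the two-component
    geometric innovation mixture above."""
    r, alpha = len(mu), (c + d) / 2.0
    geo = lambda mean: rng.geometric(1.0 / (1.0 + mean)) - 1  # support {0, 1, ...}
    z = np.empty(n + 1, dtype=int)
    z[0] = rng.choice(r, p=P_V) + 1
    for t in range(1, n + 1):
        z[t] = rng.choice(r, p=P_M[z[t - 1] - 1]) + 1
    x = np.empty(n + 1, dtype=int)
    x[0] = geo(mu[z[0] - 1])            # X_0 drawn from the state-z_0 marginal
    for t in range(1, n + 1):
        a_t = rng.uniform(c, d)
        thinned = 0 if x[t - 1] == 0 else \
            (rng.geometric(1 / (1 + a_t), size=x[t - 1]) - 1).sum()
        p_mix = alpha * mu[z[t - 1] - 1] / (mu[z[t] - 1] - alpha)
        eps = geo(alpha) if rng.random() < p_mix else geo(mu[z[t] - 1])
        x[t] = thinned + eps
    return x, z

rng = np.random.default_rng(0)
x, z = simulate_rrrcinar1(500, mu=np.array([1.0, 2.0]),          # case (a) values
                          P_M=np.array([[0.8, 0.2], [0.5, 0.5]]),
                          P_V=np.array([0.8, 0.2]), c=0.05, d=0.25, rng=rng)
```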

The RrRCINAR(1) model involves two processes: an environment state process \(\{ {Z_{n}}\} \) and a random coefficient integer-valued autoregressive process. The environment state process must be realized first, one step ahead of the counting process. We choose four cases of different parameter values to generate random numbers and use the mean square error (MSE) to measure the error of the Yule–Walker estimators:

$$\operatorname{{MSE}}(\hat{\alpha}) = \frac{1}{{K - 1}}\sum_{i = 1}^{K} { \Biggl({{\hat{\alpha}}_{i}} - \frac{1}{K}\sum _{j = 1}^{K} {{{\hat{\alpha}}_{j}}} \Biggr)^{2}} $$

and

$$\operatorname{{MSE}} \bigl(\hat{\mu}^{(k)} \bigr) = \frac{1}{{K - 1}} \sum_{i = 1}^{K} { \Biggl( \hat{\mu}_{i}^{(k)} - \frac{1}{K}\sum _{j = 1}^{K} {\hat{\mu}_{j}^{(k)}} \Biggr)^{2}} , $$

where \({{\hat{\alpha}}_{i}}\) and \(\hat{\mu}_{i}^{(k)}\) are the estimates from the ith replication, \(i \in\{1,2,\ldots,K\}\), \(k \in{E_{r}}\), and K is the number of replications of each simulation. In the simulation study, the sample sizes are 100, 200, and 500, and each simulation is repeated 500 times.
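In this form the MSE is simply the sample variance of the K replicated estimates, e.g. (with illustrative values):

```python
import numpy as np

# Illustrative replicated estimates alpha_hat_1, ..., alpha_hat_K:
alpha_hats = np.array([0.151, 0.149, 0.162, 0.143, 0.155])
mse = np.var(alpha_hats, ddof=1)   # (1/(K-1)) * sum_i (a_i - mean)^2
print(mse)
```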

Note that the value of the estimator \({\hat{\mu}_{{\varepsilon_{n}}(i,j)}} \) is directly determined by the estimators \({\hat{\mu}_{i}}\), \({\hat{\mu}_{j}}\), and α̂, where \(i,j \in{E_{r}}\), so the value and mean square error of the estimator \({ \hat{\mu}_{{\varepsilon_{n}}(i,j)}} \) are not reported here.

In case (a), we suppose that the RrRCINAR(1) model counts in two possible random states, \({E_{r}} = \{ 1,2\}\). The true parameter value is \(\mu = (1,2)\): \({\mu_{{1}}} = 1\) is the expectation of the random variable \({X_{n}}({z_{n}})\) in state 1, and \({\mu_{{2}}} = 2\) is its expectation in state 2. The random variable \({\alpha_{n}}\) has a uniform distribution with parameter vector \((0.05,0.25)\). The vector \({P_{V}} = (0.8,0.2)\) gives the distribution of the initial state \({z_{0}}\). The dynamic structure of the RrRCINAR(1) model is driven by the random environment transition probability matrix. In this case, we use

$${P_{M}} = \left ( { \textstyle\begin{array}{c@{\quad}c} {0.8} & {0.2} \\ {0.5} & {0.5} \end{array}\displaystyle } \right )$$

as the transition probability matrix. It shows that state 1 is kept with probability 0.8 and left with probability 0.2, while in state 2 the probabilities of keeping the current state and of entering the other state are equal. The simulation results can be seen in Table 1.

Table 1 Case (a)

In case (b), we consider the mean vector \(\mu=(2,4)\). Let \(c = 0.15\), \(d = 0.25\), and \({E_{r}} = \{ 1,2\}\). The initial state is chosen uniformly, with probability vector \({P_{V}} = (0.5,0.5)\). As a transition probability matrix, we have chosen

$${P_{M}} = \left ( { \textstyle\begin{array}{c@{\quad}c} {0.8} & {0.2} \\ {0.6} & {0.4} \end{array}\displaystyle } \right ).$$

We see that the present state is kept with probability 0.8 (state 1) or 0.4 (state 2) and changes with probability 0.2 or 0.6, respectively. Table 2 presents the simulation results of case (b).

Table 2 Case (b)

In case (c), we take a true mean vector \(\mu = (3,4)\), fixed values \(c=0.15\), \(d=0.45\), and \({E_{r}} = \{ 1,2\}\). The initial random state has probability vector \({P_{V}} = (0.6,0.4)\); that is, \({X_{0}}({z_{0}})\) starts in state 1 with probability 0.6 and in state 2 with probability 0.4. The state transition matrix is the same as in case (a). The simulation results of case (c) are shown in Table 3.

Table 3 Case (c)

In case (d), we assume that model (2) operates in three different states, \({E_{r}} = \{ 1,2,3\}\). The true parameter value is \(\mu = (1,2,3)\). Let us choose \(c =0.15\), \(d=0.25\). The initial state distribution is nearly uniform, \({P_{V}} = (0.33,0.34,0.33)\). The transition probability matrix of the random environment is given as

$${P_{M}} = \left ( { \textstyle\begin{array}{c@{\quad}c@{\quad}c} {0.7} & {0.1} & {0.2} \\ {0.1} & {0.6} & {0.3} \\ {0.5} & {0.2} & {0.3} \end{array}\displaystyle } \right ).$$

When model (2) is in state 1, it maintains the current state with high probability and enters the other states with small probability. When the present state is 2, it stays in the current state with probability 0.6 and shifts to the other states with probability 0.1 or 0.3. When state 3 is reached, the chain leaves for the other states with high probability and stays in the current state with small probability. The simulation results of case (d) are listed in Table 4.

Table 4 Case (d)

From the simulation results in Tables 1, 2, 3, and 4, we can see that the mean square errors decrease as the sample size increases, so all Yule–Walker estimators converge, with the mean square error tending towards zero. The random environment process determines the dynamic structure of the RrRCINAR(1) model; accordingly, in the simulation the random environment process is realized one step ahead of the RrRCINAR(1) process via the state transition probability matrix.

5 Summary and conclusions

In this article, we have presented a flexible random coefficient INAR(1) model based on the negative binomial thinning operator. The new model is non-stationary owing to its different geometric marginal distributions across states. Yule–Walker estimators of the model parameters are obtained, and their strong consistency is derived. Tests on simulated model data indicate that the Yule–Walker estimation is effective, and the numerical simulation shows that the proposed model is feasible. The dynamic structure of the RrRCINAR(1) process is determined by the transition matrix of the random environment process, which can be adjusted when the simulation is performed; this gives the RrRCINAR(1) process flexibility in data processing. This random coefficient model with known states can be used in criminology, medicine, finance, and other fields.

References

  1. Al-Osh, M.A., Alzaid, A.A.: First-order integer-valued autoregressive (INAR(1)) process. J. Time Ser. Anal. 8, 261–275 (1987)


  2. Freeland, R.K., McCabe, B.: Asymptotic properties of CLS estimators in the Poisson AR(1) model. Stat. Probab. Lett. 73, 147–153 (2005)


  3. Alzaid, A.A., Al-Osh, M.A.: An integer-valued pth-order autoregressive structure (INAR(p)) process. J. Appl. Probab. 27, 314–324 (1990)


  4. Du, J.G., Li, Y.: The integer-valued autoregressive (INAR(p)) model. J. Time Ser. Anal. 12, 129–142 (1991)


  5. Zheng, H., Basawa, I.V., Datta, S.: First-order random coefficient integer-valued autoregressive processes. J. Stat. Plan. Inference 137, 212–229 (2007)


  6. Tang, M.T., Wang, Y.Y.: Asymptotic behavior of random coefficient INAR model under random environment defined by difference equation. Adv. Differ. Equ. 2014, Article ID 99 (2014)


  7. Liu, Z., Li, Q., Zhu, F.: Random environment binomial thinning integer-valued autoregressive process with Poisson or geometric marginal. Braz. J. Probab. Stat. (2019, forthcoming)

  8. Ristić, M.M., Bakouch, H.S., Nastić, A.S.: A new geometric first-order integer-valued autoregressive (NGINAR(1)) process. J. Stat. Plan. Inference 139, 2218–2226 (2009)


  9. Ristić, M.M., Nastić, A.S., Jayakumar, K., Bakouch, H.S.: A bivariate INAR(1) time series model with geometric marginals. Appl. Math. Lett. 25, 481–485 (2012)


  10. Bakouch, H.S.: Higher-order moments, cumulants and spectral densities of the NGINAR(1) process. Stat. Methodol. 7, 1–21 (2010)


  11. Nastić, A.S., Ristić, M.M., Bakouch, H.S.: A combined geometric INAR(p) model based on negative binomial thinning. Math. Comput. Model. 25, 1665–1672 (2012)


  12. Zhang, H.X.: Statistical inference for RCINAR(1) model based on negative binomial thinning operator. M.A. thesis, Institute of Mathematics, Jilin University (2009)

  13. Nastić, A.S., Laketa, P.N., Ristić, M.M.: Random environment INAR models of higher order. REVSTAT Stat. J. 17, 35–65 (2019)


  14. Laketa, P.N., Nastić, A.S., Ristić, M.M.: Generalized random environment INAR models of higher order. Mediterr. J. Math. 15, 1–22 (2018)


  15. Laketa, P.: On random environment integer-valued autoregressive models a survey. Paper presented at 21st European Young Statisticians Meeting, University of Nis̆, Serbia (2019)

  16. Nastić, A.S., Laketa, P.N., Ristić, M.M.: Random environment integer-valued autoregressive process. J. Time Ser. Anal. 37, 267–287 (2016)


  17. Silva, M.E., Oliveira, V.L.: Difference equations for the higher order moments and cumulants of the INAR(1) model. J. Time Ser. Anal. 25, 317–333 (2004)



Acknowledgements

The authors are very grateful to the editor and the referee for suggestions and comments, which significantly increased the quality of the manuscript.

Funding

This research is supported by the NSFC (No. 11461032, No. 11401267) and the Program of Qingjiang Excellent Young Talents.


Contributions

All authors jointly worked on the results and they read and approved the final manuscript.

Corresponding author

Correspondence to Yun Y. Wang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Cui, Y., Wang, Y.Y. Estimation for random coefficient integer-valued autoregressive model under random environment. Adv Differ Equ 2019, 500 (2019). https://doi.org/10.1186/s13662-019-2436-2
