In this section, we verify that the system will become extinct if the noise is sufficiently large.
Theorem 4.1
Suppose that Assumption 1 holds and that there exists an integer \(m\leq n\) such that
$$\begin{aligned} \begin{aligned} &\sum_{k\in S} \pi_{k} \biggl(b_{i}(k)-\frac{\sigma^{2}_{i}(k)}{2} \biggr)< 0,\quad i=1,2,\ldots,m, \\ &\sum_{k\in S}\pi_{k} \biggl(b_{i}(k)- \frac{\sigma^{2}_{i}(k)}{2} \biggr)=0,\quad i=m+1,\ldots,n. \end{aligned} \end{aligned}$$
(27)
Then we have:
- (i) The first \(m\) species of system (3) are almost surely exponentially extinct, the \(i\)th species with exponential rate \(-\sum_{k\in S}\pi_{k} (\frac{\sigma^{2}_{i}(k)}{2}-b_{i}(k) )\), that is,
$$ \lim_{t\rightarrow\infty}\frac{\log x_{i}(t,x_{0},r_{0})}{t}=-\sum _{k\in S}\pi_{k} \biggl(\frac{\sigma^{2}_{i}(k)}{2}-b_{i}(k) \biggr)\quad \textit{a.s.} $$
(28)
- (ii) The remaining \(n-m\) species become extinct with zero exponential rate, that is,
$$ \lim_{t\rightarrow\infty} x_{i}(t,x_{0},r_{0})=0,\quad i=m+1,\ldots,n, \quad \textit{a.s.} $$
(29)
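Before turning to the proof, assertion (28) can be illustrated numerically. The following is a minimal sketch, not part of the proof, simulating a single species (\(n=1\)) in log coordinates under a symmetric two-state Markov chain; all parameter values are assumptions, chosen so that \(\sum_{k\in S}\pi_{k}(b(k)-\frac{\sigma^{2}(k)}{2})=-0.3<0\).

```python
import numpy as np

# Hypothetical illustration of Theorem 4.1(i) for one species:
# dx = x(b(r) - a(r) x^theta) dt + sigma(r) x dB, with a two-state
# Markov chain r(t).  All parameter values below are assumptions.
rng = np.random.default_rng(1)

b = {0: 0.3, 1: 0.1}        # growth rates b(k)
a = {0: 0.5, 1: 0.5}        # intraspecific competition a(k)
sigma = {0: 1.0, 1: 1.0}    # noise intensities sigma(k)
theta = 1.5                 # exponent theta > 0
q = 0.5                     # symmetric switching rate => pi = (1/2, 1/2)

dt, T = 1e-3, 200.0
n = int(T / dt)
y, r = np.log(0.5), 0       # simulate y = log x to preserve positivity
for _ in range(n):
    # Ito drift of y = log x:  b - sigma^2/2 - a * x^theta
    drift = b[r] - sigma[r] ** 2 / 2 - a[r] * np.exp(theta * y)
    y += drift * dt + sigma[r] * np.sqrt(dt) * rng.standard_normal()
    if rng.random() < q * dt:   # regime switch of the Markov chain
        r = 1 - r

# Predicted exponential rate from (28): sum_k pi_k (b(k) - sigma(k)^2/2)
rate = sum(0.5 * (b[k] - sigma[k] ** 2 / 2) for k in (0, 1))  # = -0.3
slope = y / T                 # empirical log x(T) / T
print(rate, slope)
```

With these assumed parameters the empirical slope \(\log x(T)/T\) fluctuates around the predicted rate \(-0.3\), its deviation shrinking like \(\sigma/\sqrt{T}\).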
Proof
We begin by proving the exponential extinction of the first \(m\) species of system (3), for which \(\sum_{k\in S}\pi_{k} (b_{i}(k)-\frac{\sigma^{2}_{i}(k)}{2} )<0\), \(i=1,2,\ldots,m\). We then show the extinction with zero exponential rate of the remaining \(n-m\) species, for which \(\sum_{k\in S}\pi_{k} (b_{i}(k)-\frac{\sigma^{2}_{i}(k)}{2} )=0\), \(i=m+1,\ldots,n\).
Step 1. We aim to prove assertion (28). By Itô’s formula we get
$$\begin{aligned} &\begin{aligned} \log{x_{i}(t)}={}&\log{x_{i}(0)}+ \int^{t}_{0} \biggl(b_{i} \bigl(r(s) \bigr)-\frac{\sigma^{2}_{i} (r(s) )}{2} \biggr)\,ds \\ &{}- \int^{t}_{0} \biggl(a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s)+\sum_{j\neq{i}}a_{ij} \bigl(r(s) \bigr)x_{j}(s) \biggr)\,ds+M_{i}(t), \end{aligned} \\ &\quad i=1,2,\ldots,n, \end{aligned}$$
where \(M_{i}(t)=\int^{t}_{0}\sigma_{i}(r(s))\,dB_{i}(s)\), \(i=1,2,\ldots,n\). Dividing both sides by t yields
$$\begin{aligned} &\begin{aligned} \frac{\log x_{i}(t)}{t}={}&\frac{\log x_{i}(0)}{t}+ \frac{1}{t} \int^{t}_{0} \biggl(b_{i} \bigl(r(s) \bigr)-\frac{\sigma^{2}_{i} (r(s) )}{2} \biggr)\,ds \\ &{}-\frac{1}{t} \int^{t}_{0} \biggl(a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s)+\sum_{j\neq i}a_{ij} \bigl(r(s) \bigr)x_{j}(s) \biggr)\,ds+\frac{M_{i}(t)}{t}, \end{aligned} \\ &\quad i=1,2,\ldots,n. \end{aligned}$$
(30)
By the strong law of large numbers for martingales (see [20]) we derive \(\lim_{t\rightarrow\infty}\frac{1}{t}\int^{t}_{0}\sigma_{i} (r(s) )\,dB_{i}(s)=0\) a.s., \(i=1,2,\ldots,n\). For \(i=1,2,\ldots,m\), letting \(t\rightarrow\infty\) on both sides of (30), we have
$$\begin{aligned} \limsup_{t\rightarrow\infty}\frac{\log x_{i}(t)}{t}\leq-\sum _{k\in S}\pi_{k} \biggl(\frac{\sigma^{2}_{i}(k)}{2}-b_{i}(k) \biggr)< 0,\quad i=1,2,\ldots,m, \mbox{ a.s.} \end{aligned}$$
(31)
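The vanishing of the martingale term \(M_{i}(t)/t\) used above can also be checked numerically. A minimal sketch, taking \(\sigma_{i}\) constant for simplicity (an assumption), so that \(M(t)=\sigma B(t)\):

```python
import numpy as np

# Sketch of the strong law of large numbers for the martingale
# M(t) = int_0^t sigma dB(s): M(t)/t -> 0 a.s.  For simplicity sigma
# is constant (an assumption), so M(t) = sigma * B(t).
rng = np.random.default_rng(0)

sigma, dt, T = 2.0, 0.01, 10_000.0
n = int(T / dt)
increments = sigma * np.sqrt(dt) * rng.standard_normal(n)
M = np.cumsum(increments)           # discretized M(t)
times = dt * np.arange(1, n + 1)

# The running average M(t)/t shrinks: its std at time t is sigma/sqrt(t).
late = abs(M[-1] / times[-1])       # |M(T)/T| at T = 10000
print(late)
```

At \(T=10^4\) the standard deviation of \(M(T)/T\) is \(\sigma/\sqrt{T}=0.02\), so the printed value is small, consistent with the almost sure limit \(0\).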
By (31), for any \(1\leq i\leq m\) and \(\varepsilon\in (0,\min_{1\leq i\leq m}\sum_{k\in S}\pi_{k} (\frac{\sigma^{2}_{i}(k)}{2}-b_{i}(k) ) )\), we can select a random variable \(T(\varepsilon)\) such that
$$\begin{aligned} x_{i}(t)\leq \exp{ \biggl(-\sum _{k\in S}\pi_{k} \biggl(\frac{\sigma^{2}_{i}(k)}{2}-b_{i}(k) \biggr)t+\varepsilon t \biggr)},\quad \forall t>T(\varepsilon),i=1,2,\ldots,m, \mbox{ a.s.} \end{aligned}$$
(32)
Thus it follows that
$$\begin{aligned} x_{i}^{\theta_{i}}(t)\leq \exp{ \biggl(-\theta_{i}\sum _{k\in S}\pi_{k} \biggl(\frac{\sigma^{2}_{i}(k)}{2}-b_{i}(k) \biggr)t+\theta_{i}\varepsilon t \biggr)},\quad \forall t>T( \varepsilon), i=1,2,\ldots,m, \mbox{ a.s.} \end{aligned}$$
Then we can readily verify that
$$\begin{aligned} \int^{\infty}_{0} \biggl(a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s)+\sum_{j\neq i}a_{ij} \bigl(r(s) \bigr)x_{j}(s) \biggr)\,ds< \infty,\quad i=1,2,\ldots,m, \mbox{ a.s.} \end{aligned}$$
(33)
By (30) and (33) we have
$$\begin{aligned} \lim_{t\rightarrow\infty}\frac{\log x_{i}(t)}{t}=-\sum _{k\in S}\pi_{k} \biggl(\frac{\sigma^{2}_{i}(k)}{2}-b_{i}(k) \biggr),\quad i=1,2,\ldots,m, \mbox{ a.s.} \end{aligned}$$
Step 2. It remains to show assertion (29). Applying Itô’s formula to \(\log x_{i}(t)\), as in (30), and dividing by t, we get
$$\begin{aligned} \frac{\log x_{i}(t)}{t} =&\frac{\log x_{i}(0)}{t}+\frac{1}{t} \int_{0}^{t} \biggl(b_{i} \bigl(r(s) \bigr)-\frac{\sigma^{2}_{i} (r(s) )}{2} \biggr)\,ds -\frac{1}{t} \int_{0}^{t}a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s)\,ds \\ &{}-\frac{1}{t}\sum_{j=1,j\neq i}^{n} \int_{0}^{t}a_{ij} \bigl(r(s) \bigr)x_{j}(s)\,ds+\frac{1}{t} \int_{0}^{t}\sigma_{i} \bigl(r(s) \bigr)\,dB_{i}(s). \end{aligned}$$
(34)
Based on the convergence of the integral \(\int^{\infty}_{0}x_{i}(s)\,ds\), the sample space Ω can be decomposed into two mutually exclusive events
$$\begin{aligned} G_{i1}= \biggl\{ \omega : \int^{\infty}_{0}x_{i}(s)\,ds< \infty\biggr\} \quad\mbox{and}\quad G_{i2}= \biggl\{ \omega : \int^{\infty}_{0}x_{i}(s)\,ds=\infty\biggr\} . \end{aligned}$$
Furthermore, we can also divide Ω into three mutually exclusive events
$$\begin{aligned} &\digamma_{i1}= \Bigl\{ \omega : \limsup_{t\rightarrow\infty} x_{i}(t)\geq \liminf_{t\rightarrow\infty}x_{i}(t)= \gamma_{i}>0 \Bigr\} , \\ &\digamma_{i2}= \Bigl\{ \omega : \limsup_{t\rightarrow\infty} x_{i}(t)> \liminf_{t\rightarrow\infty}x_{i}(t) = 0 \Bigr\} ,\quad \mbox{and} \\ &\digamma_{i3}= \Bigl\{ \omega : \lim_{t\rightarrow\infty}x_{i}(t)=0 \Bigr\} . \end{aligned}$$
The proof of \(\lim_{t\rightarrow\infty}x_{i}(t)=0\) a.s. is equivalent to showing that \(G_{i1}\subset \digamma_{i3}\) and \(G_{i2}\subset \digamma_{i3}\) a.s. The following is an outline of the proof.
First, using the stochastic LaSalle methods proposed in [22], we prove that \(G_{i1}\subset \digamma_{i3}\). Second, using novel techniques, we show that \(P(G_{i2}\cap\digamma_{i1})=0\) and \(P(G_{i2}\cap\digamma_{i2})=0\), which means that \(G_{i2}\subset\digamma_{i3}\) a.s. Now we map out our strategy.
Case 1: We show that \(G_{i1}\subset\digamma_{i3}\). Clearly, \(x_{i}(t)\in C(R_{+},R)\) a.s. By the continuity of \(x_{i}(t)\) and the definition of \(G_{i1}\), it is easy to check that \(\liminf_{t\to\infty}x_{i}(t)=0\) on \(G_{i1}\) a.s., and therefore \(P(G_{i1}\cap\digamma_{i1})=0\). The only thing that remains to show is \(P(G_{i1}\cap\digamma_{i2})=0\), which we prove by contradiction. If \(P(G_{i1}\cap\digamma_{i2})>0\), then there exists a real number \(\varepsilon>0\) such that
$$\begin{aligned} P(Q_{1}\cap G_{i1})\geqslant2\varepsilon, \end{aligned}$$
(35)
where \(Q_{1}=\lbrace\limsup_{t\to\infty}x_{i}(t)>2\varepsilon\rbrace\). Define the sequence of stopping times
$$\begin{aligned} &\tau_{1}=\inf\bigl\lbrace t\geqslant 0 : x_{i}(t) \geqslant2\varepsilon\bigr\rbrace ,\qquad \tau_{2k}=\inf\bigl\lbrace t \geqslant\tau_{2k-1} : x_{i}(t)\leqslant \varepsilon\bigr\rbrace , \\ &\tau_{2k+1}=\inf\bigl\lbrace t\geqslant\tau_{2k} : x_{i}(t)\geqslant 2\varepsilon\bigr\rbrace ,\quad k=1,2,\ldots. \end{aligned}$$
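On a sampled path, this alternating sequence of up- and down-crossing times can be computed directly. A hypothetical helper (not part of the proof; the discrete path and threshold are assumptions for illustration):

```python
def crossing_times(path, eps):
    """Alternating crossing indices of a sampled path: odd entries are
    the first times the path reaches 2*eps (tau_1, tau_3, ...), even
    entries the subsequent first times it drops back to eps (tau_2, ...)."""
    taus, want_up, start = [], True, 0
    while True:
        found = None
        for t in range(start, len(path)):
            if want_up and path[t] >= 2 * eps:
                found = t
                break
            if not want_up and path[t] <= eps:
                found = t
                break
        if found is None:       # no further crossing on this finite path
            return taus
        taus.append(found)
        want_up = not want_up
        start = found           # next search starts at the crossing time

path = [0.5, 2.5, 1.5, 0.9, 2.1, 0.8]   # assumed sample path
print(crossing_times(path, 1.0))        # -> [1, 3, 4, 5]
```

The finiteness of \(\tau_{2k}\) whenever \(\tau_{2k-1}<\infty\), used below, corresponds here to every up-crossing on the path eventually being followed by a down-crossing.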
We have \(E(I_{G_{i1}}\int_{0}^{\infty}x_{i}(s)\,ds)<\infty\) from \(G_{i1}\). Then we compute and rearrange
$$\begin{aligned} E \biggl(I_{G_{i1}} \int_{0}^{\infty}x_{i}(s)\,ds \biggr) \geqslant&\sum_{k=1}^{\infty}E \biggl( I_{\lbrace\tau_{2k-1}< \infty,\tau_{2k}< \infty\rbrace\cap G_{i1}} \int_{\tau_{2k-1}}^{\tau_{2k}}x_{i}(s)\,ds \biggr) \\ \geqslant&\varepsilon\sum_{k=1}^{\infty} E \bigl(I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1}}(\tau_{2k}-\tau_{2k-1}) \bigr), \end{aligned}$$
where \(I_{A}\) is the indicator function. Since \(\tau_{2k}<\infty\) whenever \(\tau_{2k-1}<\infty\), by the above formula we have
$$\begin{aligned} \varepsilon\sum_{k=1}^{\infty} E \bigl( I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1}}(\tau_{2k}-\tau_{2k-1}) \bigr)< \infty. \end{aligned}$$
(36)
Integrating equation (3), we have
$$\begin{aligned} x_{i}(t) =&x_{i}(0)+ \int_{0}^{t}\sigma_{i}x_{i}(s) \,dB_{i}(s) \\ &{}+ \int_{0}^{t}x_{i}(s) \Biggl(b_{i} \bigl(r(s) \bigr)-a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s)- \sum_{j=1,j\neq i}^{n}a_{ij} \bigl(r(s) \bigr)x_{j}(s) \Biggr)\,ds. \end{aligned}$$
(37)
Compute and rearrange
$$\begin{aligned} &E \Biggl\{ x_{i}^{2}(s)\cdot \Biggl(b_{i} \bigl(r(s) \bigr)-a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s) -\sum_{j=1,j\neq i}^{n}a_{ij} \bigl(r(s) \bigr)x_{j}(s) \Biggr)^{2} \Biggr\} \\ &\quad \leqslant\frac{1}{2}E \bigl(x_{i}^{4}(s) \bigr) +\frac{1}{2}E \Biggl(b_{i} \bigl(r(s) \bigr)-a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s)-\sum _{j=1,j\neq i}^{n}a_{ij} \bigl(r(s) \bigr)x_{j}(s) \Biggr)^{4} \\ &\quad \leq \frac{1}{2}E \bigl(x_{i}^{4}(s) \bigr)+ \frac{1}{2}E\Biggl(b^{4}+a_{ii}^{4}x_{i}^{4\theta_{i}}+ \sum_{j=1,j\neq i}^{n}a_{ij}^{4}x_{j}^{4}+4a_{ii}^{3}x_{i}^{3\theta_{i}} \sum_{j=1,j\neq i}^{n}a_{ij}x_{j}+4a_{ii}x_{i}^{\theta_{i}}\sum _{j=1,j\neq i}^{n}a_{ij}^{3}x_{j}^{3} \\ &\qquad {}+6b^{2}a_{ii}^{2}x_{i}^{2\theta_{i}}+6b^{2} \sum_{j=1,j\neq i}^{n}a_{ij}^{2}x_{j}^{2} +6a_{ii}^{2}x_{i}^{2\theta_{i}}\sum _{j=1,j\neq i}^{n}a_{ij}^{2}x_{j}^{2}+12b^{2}a_{ii}x_{i}^{\theta_{i}} \sum_{j=1,j\neq i}^{n}a_{ij}x_{j} \Biggr) \\ &\quad \leq\frac{1}{2}K_{4}+\frac{1}{2} \Biggl(b_{i}^{4}+a_{ii}^{4}K_{4\theta_{i}}+ \sum_{j=1,j\neq i}^{n}a_{ij}^{4}K_{4}+4a_{ii}^{3}K_{3\theta_{i}} \sum_{j=1,j\neq i}^{n}a_{ij}K_{1} +4a_{ii}K_{\theta_{i}}\sum_{j=1,j\neq i}^{n}a_{ij}^{3}K_{3} \\ &\qquad {} +6b^{2}a_{ii}^{2}K_{2\theta_{i}}+6b^{2} \sum_{j=1,j\neq i}^{n}a_{ij}^{2}K_{2} +6a_{ii}^{2}K_{2\theta_{i}}\sum _{j=1,j\neq i}^{n}a_{ij}^{2}K_{2}+12b^{2}a_{ii}K_{\theta_{i}} \sum_{j=1,j\neq i}^{n}a_{ij}K_{1} \Biggr) \\ &\quad =:U_{i}^{2} \end{aligned}$$
and
$$\begin{aligned} E \bigl(\sigma_{i}^{2}\cdot x_{i}^{2}(s) \bigr)=\sigma_{i}^{2}\cdot E \bigl(x_{i}^{2}(s) \bigr)\leqslant\sigma_{i}^{2}\cdot K_{2}=:V_{i}^{2}, \end{aligned}$$
where \(K_{1}\), \(K_{2}\), \(K_{3}\), \(K_{4}\) and \(K_{\theta_{i}}\), \(K_{2\theta_{i}}\), \(K_{3\theta_{i}}\), \(K_{4\theta_{i}}\) are defined in Lemma 2.1. By the BDG inequality (see [20]) and the Hölder inequality we compute
$$\begin{aligned} &E \Bigl( I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1}}\sup_{0\leqslant t\leqslant T} \bigl\vert x_{i}(\tau_{2k-1}+t)-x_{i}( \tau_{2k-1}) \bigr\vert ^{2} \Bigr) \\ &\quad \leqslant 2E \Biggl\{ I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1}}\sup_{0\leqslant t\leqslant T} \Biggl\vert \int_{\tau_{2k-1}}^{\tau_{2k-1}+t}x_{i}(s) \Biggl(b_{i} \bigl(r(s) \bigr)-a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s) \\ &\qquad {} -\sum_{j=1,j\neq i}^{n}a_{ij} \bigl(r(s) \bigr)x_{j}(s) \Biggr)\,ds \Biggr\vert ^{2} \Biggr\} \\ &\qquad {}+2E \biggl( I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1}}\sup_{0\leqslant t\leqslant T} \biggl\vert \int_{\tau_{2k-1}}^{\tau_{2k-1}+t} \bigl(\sigma_{i}\cdot x_{i}(s) \bigr)\,dB_{i}(s) \biggr\vert ^{2} \biggr) \\ &\quad \leqslant 2TE \Biggl\{ I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1}} \int_{\tau_{2k-1}}^{\tau_{2k-1}+T} x_{i}^{2}(s) \Biggl(b_{i} \bigl(r(s) \bigr)-a_{ii} \bigl(r(s) \bigr)x_{i}^{\theta_{i}}(s) \\ &\qquad {} -\sum_{j=1,j\neq i}^{n}a_{ij} \bigl(r(s) \bigr)x_{j}(s) \Biggr)^{2}\,ds \Biggr\} \\ &\qquad {}+8E \biggl( I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1}} \int_{\tau_{2k-1}}^{\tau_{2k-1}+T} \bigl(\sigma_{i}^{2} \cdot x_{i}^{2}(s) \bigr)\,ds \biggr) \\ &\quad \leqslant 2T\bigl(U_{i}^{2}+4V_{i}^{2} \bigr). \end{aligned}$$
(38)
Choosing \(T=T(\varepsilon)>0\) sufficiently small that \(2T(U_{i}^{2}+4V_{i}^{2})\leqslant\varepsilon^{3}\), from (38) and Chebyshev’s inequality we have
$$\begin{aligned} P\bigl(\lbrace\tau_{2k-1}< \infty\rbrace\cap\lbrace H_{k}\cap G_{i1}\rbrace\bigr) \leqslant\frac{2T(U_{i}^{2}+4V_{i}^{2})}{\varepsilon^{2}}\leqslant\varepsilon, \end{aligned}$$
(39)
where \(H_{k}=\lbrace\sup_{0\leqslant t\leqslant T} |x_{i}(\tau_{2k-1}+t)-x_{i}(\tau_{2k-1})|\geqslant\varepsilon\rbrace\), \(k=1,2,\ldots \) . Noting that \(\tau_{k}<\infty\) for \(k=1,2,\ldots \) whenever \(\omega\in Q_{1}\), we further compute
$$\begin{aligned} &P\bigl(\lbrace\tau_{2k-1}< \infty\rbrace\cap\bigl\lbrace H^{c}_{k}\cap G_{i1}\bigr\rbrace \bigr) \\ &\quad =P\bigl(\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1} \bigr)-P\bigl(\lbrace\tau_{2k-1}< \infty\rbrace\cap \{H_{k} \cap G_{i1}\}\bigr) \\ &\quad \geqslant 2\varepsilon-\varepsilon=\varepsilon. \end{aligned}$$
Note that if \(\omega\in\lbrace\tau_{2k-1}<\infty\rbrace\cap\lbrace H^{c}_{k}\cap G_{i1}\rbrace\), then
$$ \tau_{2k}(\omega)-\tau_{2k-1}(\omega)\geqslant T. $$
(40)
We obtain from (36) and (40) that
$$\begin{aligned} \infty >&\varepsilon\sum_{k=1}^{\infty}E{} \bigl[ I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap G_{i1}}(\tau_{2k}-\tau_{2k-1})\bigr] \\ \geqslant&\varepsilon\sum_{k=1}^{\infty}E{} \bigl[ I_{\lbrace\tau_{2k-1}< \infty\rbrace\cap\lbrace H^{c}_{k}\cap G_{i1}\rbrace} (\tau_{2k}-\tau_{2k-1})\bigr] \\ \geqslant&\varepsilon T\sum_{k=1}^{\infty}P \bigl(\lbrace\tau_{2k-1}< \infty\rbrace \cap\bigl\lbrace H^{c}_{k}\cap G_{i1}\bigr\rbrace \bigr) \\ \geqslant&\varepsilon T\sum_{k=1}^{\infty} \varepsilon=\infty, \end{aligned}$$
(41)
which is a contradiction. So \(P(G_{i1}\cap\digamma_{i2})=0\) holds, and we derive that \(G_{i1}\subset\digamma_{i3}\).
Case 2. It remains to prove \(G_{i2}\subset\digamma_{i3}\) a.s. We need only show that \(P(G_{i2}\cap\digamma_{i1})=0\) and \(P(G_{i2}\cap\digamma_{i2})=0\). We prove this by contradiction. If \(P(G_{i2}\cap\digamma_{i1})>0\), then for any \(\omega\in G_{i2}\cap\digamma_{i1}\) and \(\epsilon_{0}\in(0,\frac{\gamma_{i}}{2})\), there exists \(T=T(\epsilon_{0},\omega)\) such that
$$x_{i}(t)>\gamma_{i}-\epsilon_{0}> \frac{\gamma_{i}}{2},\quad \forall t>T, \mbox{ a.s.} $$
Then it follows that
$$\begin{aligned} \frac{1}{t} \int_{0}^{t}x_{i}(s)\,ds =& \frac{1}{t} \int_{0}^{T}x_{i}(s)\,ds + \frac{1}{t} \int_{T}^{t}x_{i}(s)\,ds\\ \geqslant& \frac{1}{t} \int_{0}^{T}x_{i}(s)\,ds+ \frac{t-T}{t}\frac{\gamma_{i}}{2} \quad \mbox{a.s.} \end{aligned}$$
Letting \(t\to\infty\), we get
$$\liminf_{t\to\infty}\frac{1}{t} \int_{0}^{t}x_{i}(s)\,ds\geqslant \frac{\gamma_{i}}{2}>0 \quad \mbox{a.s.} $$
Combining this with (34), we obtain
$$\begin{aligned} \limsup_{t\to\infty}\frac{\log x_{i}(t)}{t}\leqslant -\sum _{j=1}^{n}a_{ij}\frac{\gamma_{i}}{2}< 0\quad \mbox{a.s.}, \end{aligned}$$
which contradicts the definitions of \(G_{i2}\) and \(\digamma_{i1}\). So \(P(G_{i2}\cap\digamma_{i1})=0\) must hold. We proceed to show that \(P(G_{i2}\cap\digamma_{i2})>0\) is false. We need several notations:
$$\begin{aligned} &\Gamma_{t}^{\varepsilon}(i):=\bigl\lbrace 0\leqslant s\leqslant t:x_{i}(s)\geqslant\varepsilon\bigr\rbrace ,\\ &\delta_{t}^{\varepsilon}(i):=\frac{m(\Gamma_{t}^{\varepsilon}(i))}{t}, \\ &\delta^{\varepsilon}(i):=\liminf_{t\to\infty}\delta_{t}^{\varepsilon}(i),\\ &\Delta^{\varepsilon}(i):=\bigl\lbrace \omega\in G_{i2}\cap \digamma_{i2}:\delta^{\varepsilon}(i)>0\bigr\rbrace , \end{aligned}$$
where \(m(\Gamma_{t}^{\varepsilon}(i))\) denotes the length of \(\Gamma_{t}^{\varepsilon}(i)\). It is easy to see that \(\Delta^{0}(i)=G_{i2}\cap\digamma_{i2}\). Note that, for any \(\varepsilon_{1}<\varepsilon_{2}\),
$$\begin{aligned} &\Gamma_{t}^{\varepsilon_{1}}(i)\supset \Gamma_{t}^{\varepsilon_{2}}(i), \qquad m\bigl(\Gamma_{t}^{\varepsilon_{1}}(i)\bigr)\geqslant m\bigl( \Gamma_{t}^{\varepsilon_{2}}(i)\bigr), \\ &\delta_{t}^{\varepsilon_{1}}(i)=\frac{m(\Gamma_{t}^{\varepsilon_{1}}(i))}{t} \geqslant \delta_{t}^{\varepsilon_{2}}(i)=\frac{m(\Gamma_{t}^{\varepsilon_{2}}(i))}{t}, \end{aligned}$$
which implies
$$\begin{aligned} \delta^{\varepsilon_{2}}(i)\leqslant \delta^{\varepsilon_{1}}(i), \qquad \Delta^{\varepsilon_{2}}(i)\subset \Delta^{\varepsilon_{1}}(i), \quad \forall \varepsilon_{1}< \varepsilon_{2}. \end{aligned}$$
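The occupation-time fraction \(\delta_{t}^{\varepsilon}(i)\) and its monotonicity in ε are easy to make concrete on a sampled path. A small sketch (the path and thresholds are assumptions for illustration):

```python
def occupation_fraction(path, eps):
    """Discrete sketch of delta_t^eps = m(Gamma_t^eps)/t: the fraction
    of sample times at which x(s) >= eps on a uniformly sampled path."""
    return sum(1 for x in path if x >= eps) / len(path)

path = [0.1, 0.5, 0.3, 0.7, 0.2]                 # assumed sample path
d_small = occupation_fraction(path, 0.2)         # eps1 = 0.2
d_large = occupation_fraction(path, 0.4)         # eps2 = 0.4
print(d_small, d_large)                          # -> 0.8 0.4
```

As the displayed inequalities state, lowering ε enlarges \(\Gamma_{t}^{\varepsilon}(i)\), so the fraction for \(\varepsilon_{1}=0.2\) dominates the one for \(\varepsilon_{2}=0.4\).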
By the continuity of probability we have
$$\begin{aligned} P\bigl(\Delta^{\varepsilon}(i)\bigr)\to P\bigl(\Delta^{0}(i) \bigr)=P(G_{i2}\cap\digamma_{i2})\quad \mbox{as }\varepsilon \to0. \end{aligned}$$
If \(P(G_{i2}\cap\digamma_{i2})>0\), then there exists \(\varepsilon>0\) such that \(P(\Delta^{\varepsilon}(i))>0\). For any \(\omega\in \Delta^{\varepsilon}(i)\), we have
$$\begin{aligned} \frac{1}{t} \int_{0}^{t}x_{i}(s)\,ds =& \frac{1}{t} \int_{\Gamma_{t}^{\varepsilon}(i)}x_{i}(s)\,ds +\frac{1}{t} \int_{{}[0,t]\setminus \Gamma_{t}^{\varepsilon}(i)}x_{i}(s)\,ds\\ \geqslant&\frac{1}{t} \int_{\Gamma_{t}^{\varepsilon}(i)}x_{i}(s)\,ds \quad \mbox{a.s.} \end{aligned}$$
Letting \(t\to\infty\), we get
$$ \liminf_{t\to\infty}\frac{1}{t} \int_{0}^{t}x_{i}(s)\,ds\geqslant \liminf _{t\to\infty}\frac{1}{t} \int_{\Gamma_{t}^{\varepsilon}(i)}x_{i}(s)\,ds\geqslant \delta^{\varepsilon}(i) \varepsilon\quad \mbox{a.s.} $$
(42)
Substituting (42) into (34), we have
$$\begin{aligned} \limsup_{t\to\infty}\frac{\log x_{i}(t)}{t}\leqslant -\sum _{j=1}^{n}a_{ij}\delta^{\varepsilon}(i) \varepsilon< 0 \quad \mbox{a.s.} \end{aligned}$$
This contradicts the definitions of \(G_{i2}\) and \(\digamma_{i2}\). Consequently, we conclude that \(P(G_{i2}\cap\digamma_{i2})=0\). Combining the facts \(G_{i1}\subset\digamma_{i3}\), \(P(G_{i2}\cap\digamma_{i1})=0\), and \(P(G_{i2}\cap\digamma_{i2})=0\), we derive
$$\begin{aligned} \lim_{t\to\infty}x_{i}(t)=0\quad \mbox{a.s.}, \end{aligned}$$
as desired. □
Remark 3
The difficulties come from the nonlinearities and the regime switching. We overcome them by combining the stochastic LaSalle theorem with the space-decomposition method (see [8] and [22]). If system (3) contains no parametric switching and \(\theta_{i}=1\), \(i=1,2,\ldots,n\), then Theorem 4.1 reduces to the result in [7]. Therefore Theorem 4.1 generalizes the results in [7] and [8].