Theory and Modern Applications

# Dynamical behaviors of a stochastic SIRV epidemic model with the Ornstein–Uhlenbeck process

## Abstract

Vaccination is an important tool for disease control, and diseases influenced by vaccination no longer follow the usual pattern of transmission. In this paper, assuming that the infection rate is perturbed by an Ornstein–Uhlenbeck process, we obtain a stochastic SIRV model. First, we prove the existence and uniqueness of the global positive solution. Sufficient conditions for the extinction and persistence of the disease are then obtained. Next, by constructing an appropriate Lyapunov function, we prove the existence of a stationary distribution for the model. Further, we derive an explicit expression for the probability density function of the model around the quasi-equilibrium point. Finally, the analytical results are verified by numerical simulations.

## 1 Introduction

Vaccination is an effective means of preventing infectious diseases. On the one hand, after vaccination people gain immunity to diseases, effectively reducing the risk of illness, severe disease, and death. On the other hand, through orderly vaccination an immune barrier can gradually be established in the population, blocking the spread of disease and protecting people’s daily lives. Many diseases that have plagued humans for years, such as measles, rabies, and hepatitis B, have been effectively mitigated by vaccination. In a number of articles [1–7], various dynamic behaviors of disease transmission with vaccination have been analyzed. Based on [8–10], Oke et al. [11] established the following SIRV infectious disease model in 2019:

$$\textstyle\begin{cases} \frac{{{\mathrm{{d}}}S}}{{{\mathrm{{d}}}t}} = A - \mu S - \lambda SI - \rho S + { \delta _{1}}R + {\delta _{2}}V, \\ \frac{{{\mathrm{{d}}}I}}{{{\mathrm{{d}}}t}} = \lambda SI - (\mu + \alpha + \gamma + r)I, \\ \frac{{{\mathrm{{d}}}R}}{{{\mathrm{{d}}}t}} = \gamma I - (\mu + {\delta _{1}})R, \\ \frac{{{\mathrm{{d}}}V}}{{{\mathrm{{d}}}t}} = \rho S - (\mu + {\delta _{2}})V, \end{cases}$$
(1)

where $$S(t)$$ denotes the susceptible individuals, $$I(t)$$ the infected individuals, $$R(t)$$ the recovered individuals, and $$V(t)$$ the vaccinated individuals; A denotes the natural input rate, μ the natural mortality rate of the individuals, α the disease-induced mortality rate, and λ the infection rate between susceptible and infected individuals; ρ denotes the vaccination rate of susceptible individuals, $$\delta _{1}$$ and $$\delta _{2}$$ denote the rates at which recovered and vaccinated individuals, respectively, lose immunity, γ denotes the natural recovery rate of infected individuals, and r denotes the treatment rate of infected individuals.
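Before noise is added, it may help to see model (1) run. The following minimal sketch integrates it with a forward-Euler scheme; the parameter values and initial state are hypothetical choices for illustration only, not values taken from this paper.

```python
# Forward-Euler integration of the deterministic SIRV model (1).
# All parameter values below are hypothetical, chosen only for illustration.
A, mu, lam, rho = 0.5, 0.1, 0.3, 0.2    # input, mortality, infection, vaccination
delta1, delta2 = 0.05, 0.05             # loss of immunity (recovered, vaccinated)
alpha, gamma, r = 0.1, 0.2, 0.1         # disease death, recovery, treatment

def step(S, I, R, V, dt):
    """One forward-Euler step of model (1)."""
    dS = A - mu*S - lam*S*I - rho*S + delta1*R + delta2*V
    dI = lam*S*I - (mu + alpha + gamma + r)*I
    dR = gamma*I - (mu + delta1)*R
    dV = rho*S - (mu + delta2)*V
    return S + dS*dt, I + dI*dt, R + dR*dt, V + dV*dt

S, I, R, V = 4.0, 0.5, 0.0, 0.0
dt = 0.01
for _ in range(200_000):                # integrate up to t = 2000
    S, I, R, V = step(S, I, R, V, dt)
# Each compartment stays nonnegative, and the total population stays
# below max{A/mu, S(0)+I(0)+R(0)+V(0)}.
```

With these illustrative values the deterministic reproduction number exceeds one, so the trajectory settles at an endemic state; this gives a baseline against which the stochastic perturbation discussed below can be compared.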

The real environment changes constantly, so randomness is everywhere in nature, and the parameters of a model fluctuate around their average values. Therefore, an infectious disease model that accounts for environmental noise better reflects how infectious diseases actually spread in the real world. There are generally two methods to simulate the impact of environmental noise. The first is to perturb parameters by a linear function of Gaussian white noise [12–20]; the second is to let parameters be driven by an Ornstein–Uhlenbeck process [21–31]. Kang et al. [32] noted that Gaussian colored noise, produced by the Ornstein–Uhlenbeck process, is appropriate for modeling the correlation of environmental fluctuations, whereas Gaussian white noise, the formal derivative of the Wiener process with stationary independent increments, cannot represent such correlation. Allen [33] also compared these two types of noise and reached the following conclusions:

(i) Assuming that the infection rate is affected by white Gaussian noise, its form is as follows:

$$\lambda (t) = \lambda + \sigma \frac{{{\mathrm{{d}}}B(t)}}{{{\mathrm{{d}}}t}},$$
(2)

where $$B(t)$$ denotes the standard Brownian motion and σ denotes the environmental fluctuation intensity. Integrating the previous equation from 0 to t and dividing both sides by t, we have

$$\frac{1}{t} \int _{0}^{t} {\lambda (s){\,\mathrm{d} }s} = \lambda + \sigma \frac{B(t)}{t} \sim \mathbb{N}\biggl( \lambda , \frac{{{\sigma ^{2}}}}{t}\biggr).$$

It can be seen that the variance of the time average of $$\lambda (t)$$ tends to infinity as t approaches 0; that is, successive averages of this linear function undergo significant random fluctuations. In contrast, introducing the Ornstein–Uhlenbeck process into the infection rate gives

$$\lambda (t) = \lambda + m(t),$$
(3)

where $$m(t)$$ obeys

$${\mathrm{{d}}}m(t) = - \theta m(t){\,\mathrm{d} }t + \xi {\,\mathrm{d} }B(t),$$

where θ represents the reversion speed, ξ represents the fluctuation intensity, and $$B(t)$$ represents the standard Brownian motion. Solving this linear stochastic differential equation on $$[0,t]$$, we obtain

$$m(t) = {m_{0}} {e^{ - \theta t}} + \xi \int _{0}^{t} {{e^{ - \theta (t - s)}} {\,\mathrm{d} }B(s)} ,$$
(4)

where $${m_{0}} = m(0)$$. Obviously, $$m(t)$$ follows the normal distribution $$\mathbb{N}( {m_{0}}{e^{ - \theta t}}, \frac{{{\xi ^{2}}}}{{2\theta }}(1 - {e^{ - 2\theta t}}))$$. As t approaches 0, the variance of the Ornstein–Uhlenbeck process also tends to 0. Hence, the Ornstein–Uhlenbeck process is the more appropriate choice for simulating biological models.

(ii) Compared with Gaussian white noise, the Ornstein–Uhlenbeck process is a continuous process that is correlated over time. For example, a parameter $${\lambda _{1}}(t)$$ driven by the Ornstein–Uhlenbeck process satisfies $$\rho ({\lambda _{1}}(t),{\lambda _{1}}(t + \Delta t)) = 1 - o(\Delta t)$$, while a parameter $${\lambda _{2}}(t)$$ driven by Gaussian white noise satisfies $$\rho (\lambda _{2}(t), \lambda _{2}(t + \Delta t)) = 0$$. This indicates that Gaussian white noise is better suited to modeling short-term disturbances. In reality, however, environmental fluctuations arise from many interacting factors that are themselves constantly changing. Hence, the Ornstein–Uhlenbeck process is more realistic for simulating such longer-term interactions, matching the assumption that continuous environmental fluctuations make parameters oscillate near their mean values over a period of time. This is consistent with the viewpoint of Kang et al. [32].
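The correlation property in (ii) can be made precise via the autocovariance of the stationary Ornstein–Uhlenbeck process (a standard fact, recalled here for completeness): for the process $$m(t)$$ of Eq. (3),

$$\operatorname{Cov} \bigl(m(t),m(t + \Delta t)\bigr) = \frac{{{\xi ^{2}}}}{{2\theta }}{e^{ - \theta \Delta t}},\qquad \rho \bigl(m(t),m(t + \Delta t)\bigr) = {e^{ - \theta \Delta t}} = 1 - \theta \Delta t + o(\Delta t),$$

so values of an Ornstein–Uhlenbeck-driven parameter at nearby times are strongly correlated, whereas white-noise values at distinct times are uncorrelated.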

Considering Eq. (4) again, it can be concluded that

$$\xi \int _{0}^{t} {{e^{ - \theta (t - s)}} {\,\mathrm{d} }B(s)} = \frac{\xi }{{\sqrt {2\theta } }}\sqrt {1 - {e^{ - 2\theta t}}} \frac{{{\mathrm{{d}}}B(t)}}{{{\mathrm{{d}}}t}}\quad \text{a.e.}$$

Then, Eq. (4) becomes

$$m(t) = {m_{0}} {e^{ - \theta t}} + \frac{\xi }{{\sqrt {2\theta } }} \sqrt {1 - {e^{ - 2\theta t}}}\frac{{{\mathrm{{d}}}B(t)}}{{{\mathrm{{d}}}t}}.$$

From this equation, it can be seen that no matter how $$m (t)$$ behaved at the previous moment, it continues to revert toward 0 at the next moment. This is another advantage of the Ornstein–Uhlenbeck process [34].
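The distributional facts above (the normal law of $$m(t)$$ and its drift back toward 0) can be checked numerically. The sketch below simulates the Ornstein–Uhlenbeck equation with the Euler–Maruyama scheme and compares the empirical mean and variance at a fixed time with $${m_{0}}{e^{ - \theta t}}$$ and $$\frac{{\xi ^{2}}}{{2\theta }}(1 - {e^{ - 2\theta t}})$$; the parameter values are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(1)

theta, xi, m0 = 1.0, 0.4, 1.0        # illustrative values only
dt, t_end = 0.002, 2.0
n_paths = 1000
n_steps = int(t_end / dt)

finals = []
for _ in range(n_paths):
    m = m0
    for _ in range(n_steps):
        # Euler–Maruyama step for dm = -theta*m dt + xi dB(t)
        m += -theta * m * dt + xi * math.sqrt(dt) * random.gauss(0.0, 1.0)
    finals.append(m)

emp_mean = sum(finals) / n_paths
emp_var = sum((x - emp_mean) ** 2 for x in finals) / n_paths

exact_mean = m0 * math.exp(-theta * t_end)
exact_var = xi**2 / (2 * theta) * (1 - math.exp(-2 * theta * t_end))
# emp_mean and emp_var agree with exact_mean and exact_var up to
# Monte Carlo and discretization error.
```

Shrinking t_end toward 0 shrinks exact_var toward 0, in contrast with the white-noise average whose variance blows up as t approaches 0.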

Recently, many experts and scholars have studied the properties of the Ornstein–Uhlenbeck process and incorporated it into existing dynamic models with remarkable results. Zhang et al. [35] studied a stochastic SVEIR epidemic model in which the transmission rate follows a log-normal Ornstein–Uhlenbeck process. They established sufficient conditions for the persistence and extinction of the disease and studied the local asymptotic stability of the equilibria of the corresponding deterministic system with bilinear incidence and $$\sigma =1$$. Zhang et al. [36] studied a stochastic chemostat model with a mean-reverting Ornstein–Uhlenbeck process and a Monod–Haldane response function and found that the reversion speed and the volatility intensity have important effects on microbial extinction and persistence. These results demonstrate the effectiveness of incorporating the Ornstein–Uhlenbeck process into biological models. In this article, we assume that the infection rate is affected by the Ornstein–Uhlenbeck process and establish the following stochastic SIRV model:

$$\textstyle\begin{cases} {\mathrm{{d}}}S = (A - \mu S - \lambda SI - mSI - \rho S + {\delta _{1}}R + {\delta _{2}}V){\,\mathrm{d} }t, \\ {\mathrm{{d}}}I = [\lambda SI + mSI - (\mu + \alpha + \gamma + r)I]{\,\mathrm{d} }t, \\ {\mathrm{{d}}}R = [\gamma I - (\mu + {\delta _{1}})R]{\,\mathrm{d} }t, \\ {\mathrm{{d}}}V = [\rho S - (\mu + {\delta _{2}})V]{\,\mathrm{d} }t, \\ {\mathrm{{d}}}m = - \theta m{\,\mathrm{d} }t + \xi {\,\mathrm{d} }B(t). \end{cases}$$
(5)

Suppose $$(\Omega ,{\{ {F_{t}}\} _{t \ge 0}},\mathbb{P})$$ is a complete probability space with a filtration $${\{F_{t}\}}_{t \ge 0}$$ of σ-algebras on Ω satisfying the usual conditions (i.e., it is right continuous and $$F_{0}$$ contains all $$\mathbb{P}$$-null sets).
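As a concrete illustration, model (5) can be simulated path by path with the Euler–Maruyama scheme; noise enters only through the Ornstein–Uhlenbeck variable m. All parameter values below are hypothetical placeholders, not simulation settings from this paper.

```python
import math
import random

random.seed(7)

# Hypothetical parameters, for illustration only
A, mu, lam, rho = 0.5, 0.1, 0.3, 0.2
delta1, delta2, alpha, gamma, r = 0.05, 0.05, 0.1, 0.2, 0.1
theta, xi = 2.0, 0.2

S, I, R, V, m = 4.0, 0.5, 0.0, 0.0, 0.0
dt, n_steps = 0.005, 100_000

for _ in range(n_steps):
    dB = math.sqrt(dt) * random.gauss(0.0, 1.0)
    dS = (A - mu*S - (lam + m)*S*I - rho*S + delta1*R + delta2*V) * dt
    dI = ((lam + m)*S*I - (mu + alpha + gamma + r)*I) * dt
    dR = (gamma*I - (mu + delta1)*R) * dt
    dV = (rho*S - (mu + delta2)*V) * dt
    dm = -theta*m*dt + xi*dB          # Ornstein–Uhlenbeck perturbation
    S, I, R, V, m = S + dS, I + dI, R + dR, V + dV, m + dm

# The sampled path stays positive and the total population stays bounded,
# consistent with the positivity and boundedness results proved below.
```

Note that the noise terms $$-mSI$$ and $$+mSI$$ cancel in the total population, so $$S+I+R+V$$ obeys the same deterministic bound as in model (1).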

The main structure of this article is as follows. First, in Sect. 2, we demonstrate the existence and uniqueness of the global positive solution of model (5). In Sects. 3 and 4, we derive conditions leading to the extinction and persistence of the disease by constructing appropriate functions. Sufficient conditions for an ergodic stationary distribution of model (5) are obtained in Sect. 5. In Sect. 6, the exact expression of the probability density function is obtained by solving the corresponding Fokker–Planck equation. Finally, we confirm the theoretical results by numerical simulations in Sect. 7.

## 2 Existence and uniqueness of global solution

Before studying this SIRV model, we first need to prove the existence and uniqueness of the global positive solutions of model (5) to demonstrate the feasibility of its dynamic behavior.

### Theorem 1

Model (5) has a unique global solution $$(m(t),S(t),I(t),R(t),V(t))$$ defined for all $$t\geq 0$$, and the solution remains in $$\mathbb{R}\times \mathbb{R}^{4}_{+}$$ with probability one for any initial value $$(m(0), S(0), I(0), R(0), V(0))\in \mathbb{R}\times \mathbb{R}^{4}_{+}$$.

### Proof

Obviously, the coefficients of model (5) satisfy the local Lipschitz condition; therefore, for any initial value $$(m(0),S(0),I(0),R(0),V(0)) \in \mathbb{R} \times \mathbb{R}_{+} ^{4}$$, there exists a unique local solution $$(m(t),S(t),I(t),R(t),V(t))$$ on $$t \in [0,{\tau _{e}})$$, where $${\tau _{e}}$$ is the explosion time. To show that this solution is global, we need to establish that $${\tau _{e}} = \infty$$ a.s. Choose a constant $$n \ge 1$$ large enough that $$e^{m(0)}$$, $$S(0)$$, $$I(0)$$, $$R(0)$$, $$V(0)$$ all lie in the interval $$[\frac{1}{n},n]$$. For each integer $$k \ge n$$, define the stopping time

\begin{aligned} {\tau _{k}} &= \inf \biggl\{ t \in [0,{ \tau _{e}}]:\min \bigl\{ {e^{m(t)}},S(t),I(t),R(t),V(t) \bigr\} \\ & \le \frac{1}{k} \text{ or }\max \bigl\{ {e^{m(t)}},S(t),I(t),R(t),V(t) \bigr\} \ge k \biggr\} . \end{aligned}

Throughout this article, we set $$\inf \emptyset = \infty$$. Note that $${\tau _{k}}$$ is monotonically increasing as $$k \to \infty$$. Set $${\tau _{\infty }} = \lim_{k \to \infty } {\tau _{k}}$$; then $${\tau _{\infty }} \le {\tau _{e}}$$ a.s. If we can prove that $${\tau _{\infty }} = \infty$$ a.s., then Theorem 1 follows.

Suppose, to the contrary, that there exist constants $$T > 0$$ and $$\varepsilon \in (0,1)$$ such that $$\mathbb{P} \{ {\tau _{\infty }} \le T \} > \varepsilon$$. Accordingly, there exists an integer $${k_{1}} \ge n$$ such that $$\mathbb{P} \{ {\tau _{k}} \le T \} > \varepsilon$$ for all $$k \ge {k_{1}}$$. Then define a nonnegative $$C^{2}$$-function

$${V_{1}} = S - 1 - \ln S + I - 1 - \ln I + R - 1 - \ln R + V - 1 - \ln V + \frac{1}{2}{m^{2}}.$$

By Itô’s formula, we obtain

\begin{aligned} L{V_{1}} & = \biggl(1 - \frac{1}{S}\biggr) (A - \mu S - \lambda SI - mSI - \rho S + {\delta _{1}}R + {\delta _{2}}V) + \biggl(1 - \frac{1}{I}\biggr)\bigl[\lambda SI + mSI - (\mu + \alpha + \gamma + r)I\bigr] \\ &\quad{} + \biggl(1 - \frac{1}{R}\biggr)\bigl[\gamma I - (\mu + {\delta _{1}})R\bigr] + \biggl(1 - \frac{1}{V}\biggr)\bigl[\rho S - (\mu + {\delta _{2}})V\bigr] - \theta {m^{2}} + \frac{1}{2}{\xi ^{2}} \\ & = A - \mu S - \mu I - \mu R - \mu V - \alpha I - rI - \frac{A}{S} + \mu + \lambda I + \rho - \frac{{{\delta _{1}}R}}{S} - \frac{{{\delta _{2}}V}}{S} + mI - \lambda S - mS \\ &\quad{} + (\mu + \alpha + \gamma + r) - \frac{{\gamma I}}{R} + (\mu + { \delta _{1}}) - \frac{{\rho S}}{V} + (\mu + {\delta _{2}}) - \theta {m^{2}} + \frac{1}{2}{\xi ^{2}} \\ & \le A + \mu + \lambda I + \rho + \vert m \vert S+ \vert m \vert I + ( \mu + \alpha + \gamma + r) + (\mu + {\delta _{1}}) \\ &\quad {}+ (\mu + {\delta _{2}}) - \theta {m^{2}}+ \frac{1}{2}{\xi ^{2}}. \end{aligned}
(6)

From model (5), we can easily obtain that

$${\mathrm{{d}}}(S + I + R + V) = (A - \mu S - \mu I - \alpha I - rI - \mu R - \mu V){\,\mathrm{d} }t \le \bigl[A - \mu (S + I + R + V)\bigr]{\,\mathrm{d} }t,$$

then

\begin{aligned} &S(t) + I(t) + R(t) + V(t) \\ &\quad \le \textstyle\begin{cases} {\frac{A}{\mu },}&{S(0) + I(0) + R(0) + V(0) \le \frac{A}{\mu },} \\ {S(0) + I(0) + R(0) + V(0),}&{S(0) + I(0) + R(0) + V(0) > \frac{A}{\mu }.} \end{cases}\displaystyle \end{aligned}
(7)

Set $$K: = \max \{ \frac{A}{\mu },S(0) + I(0) + R(0) + V(0)\}$$; then $$S(t) \le K$$, $$I(t) \le K$$, $$R(t) \le K$$, and $$V(t) \le K$$. Inequality (6) then reduces to

\begin{aligned} L{V_{1}} & \le A + \mu + \lambda I + \rho + 2 \vert m \vert K + (\mu + \alpha + \gamma + r) + (\mu + {\delta _{1}}) + (\mu + {\delta _{2}}) - \theta {m^{2}}+ \frac{1}{2}{\xi ^{2}} \\ & \le A + 4\mu + \lambda K + \rho + \alpha + \gamma + r + {\delta _{1}} + {\delta _{2}} + \frac{K^{2}}{\theta}+ \frac{1}{2}{\xi ^{2}} \\ &\stackrel{\Delta}{=} \widetilde{k}, \end{aligned}
(8)

where $$\widetilde{k}$$ is a positive constant independent of the initial value. Integrating inequality (8) from 0 to t, we obtain

$$\int _{0} ^{{\tau _{k}} \wedge T} {\mathrm{{d}}}V_{1} \le \int _{0} ^{{ \tau _{k}} \wedge T} {\widetilde{k}{\,\mathrm{d} }t} + \int _{0} ^{{\tau _{k}} \wedge T} {m \xi {\, \mathrm{d} }B(t)}.$$
(9)

Taking expectations on both sides of inequality (9), we obtain

\begin{aligned} & \mathbb{E}\bigl[V_{1}\bigl(m({\tau _{k}} \wedge T),S({\tau _{k}} \wedge T),I({ \tau _{k}} \wedge T),R({\tau _{k}} \wedge T),V({\tau _{k}} \wedge T)\bigr)\bigr] \\ &\quad \le V_{1}\bigl(m(0),S(0),I(0),R(0),V(0)\bigr) + \mathbb{E} \int _{0} ^{{\tau _{k}} \wedge T} \widetilde{k}{\,\mathrm{d} }t \\ &\quad \le V_{1}\bigl[m(0),S(0),I(0),R(0),V(0)\bigr] + \widetilde{k}T. \end{aligned}

For $$k \ge {k_{1}}$$, let $${\Omega _{k}} = \{ {\tau _{k}} \le T \}$$, so that $$\mathbb{P}({\Omega _{k}}) \ge \varepsilon$$. It is worth noting that for every $$w \in {\Omega _{k}}$$, at least one of $${e^{m(\tau _{k}, w)}}$$, $$S(\tau _{k}, w)$$, $$I(\tau _{k}, w)$$, $$R(\tau _{k}, w)$$, $$V( \tau _{k}, w)$$ equals either k or $$\frac{1}{k}$$. Therefore,

\begin{aligned} &{V_{1}}\bigl[m({\tau _{k}} \wedge T),S({\tau _{k}} \wedge T),I({\tau _{k}} \wedge T),R({\tau _{k}} \wedge T),V({\tau _{k}} \wedge T) \bigr] \\ &\quad \ge (k - 1 - \ln k) \wedge \biggl(\frac{1}{k} - 1 + \ln k\biggr) \wedge \frac{1}{2}{(\ln k)^{2}} \\ &\quad \stackrel{\Delta}{=} h(k). \end{aligned}

Since $$\lim_{k \to \infty } h(k) = \infty$$, we have

\begin{aligned} & {V_{1}}\bigl[m(0),S(0),I(0),R(0),V(0) \bigr] + \widetilde{k}T \\ &\quad \ge \mathbb{E}\bigl\{ {I_{{\Omega _{k}}}} {V_{1}}\bigl[m({\tau _{k}} \wedge T),S({ \tau _{k}} \wedge T),I({\tau _{k}} \wedge T),R({\tau _{k}} \wedge T),V({ \tau _{k}} \wedge T)\bigr]\bigr\} \\ &\quad \ge \varepsilon h(k). \end{aligned}

Letting $$k \to \infty$$ yields $$\infty > V_{1}(m(0),S(0),I(0),R(0),V(0)) + \widetilde{k}T \ge \varepsilon h(k) \to \infty$$, which is a contradiction. Thus, $${\tau _{\infty }} = \infty$$ a.s. The proof is complete. □

### Remark 1

Through Eq. (7), we can easily obtain that

$$\begin{gathered} S(t) + I(t) + R(t) + V(t) \\ \quad \le \textstyle\begin{cases} {\frac{A}{\mu },}&{S(0) + I(0) + R(0) + V(0) \le \frac{A}{\mu }} \\ {S(0) + I(0) + R(0) + V(0),}&{S(0) + I(0) + R(0) + V(0) > \frac{A}{\mu }} \end{cases}\displaystyle \stackrel{\Delta}{=} K. \end{gathered}$$

Then the feasible region of model (5) can be represented as follows:

$$\Gamma = \bigl\{ (S,I,R,V,m) \in \mathbb{R}_{+} ^{4} \times \mathbb{R}:S + I + R + V \le K \bigr\} .$$

## 3 Extinction

In this section, we aim to obtain conditions under which the disease dies out. Before discussing this issue, we define

$$R_{0}^{E} = \frac{{\lambda A(\mu + {\delta _{2}})}}{{[(\mu + \rho )(\mu + {\delta _{2}}) - {\delta _{2}}\rho ](\mu + \alpha + \gamma + r)}} - \frac{{K\xi }}{{(\mu + \alpha + \gamma + r)\sqrt {\pi \theta } }}.$$

### Theorem 2

Assume that $$R_{0}^{E} <1$$. For any initial value $$(m(t),S(t),I(t),R(t),V(t)) \in \mathbb{R} \times \mathbb{R}_{+}^{4}$$, the following inequality holds

$$\lim_{t \to \infty } \sup \frac{{\ln I(t)}}{t} < ( \mu + \alpha + \gamma + r) \bigl(R_{0}^{E} - 1\bigr) < 0.$$
(10)

### Proof

Define a $$C^{2}$$-Lyapunov function as follows:

$${V_{2}} = \ln I + \frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl(S + I + \frac{{{\delta _{1}}}}{{\mu + {\delta _{1}}}}R + \frac{{{\delta _{2}}}}{{\mu + {\delta _{2}}}}V\biggr).$$

Employing Itô’s formula, we have

\begin{aligned} L{V_{2}} & = mS - (\mu + \alpha + \gamma + r) + \frac{{\lambda A}}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} \\ &\quad {}- \frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr)I \\ & \le K \vert m \vert + \frac{{\lambda A}}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} - (\mu + \alpha + \gamma + r) \\ & = K \vert m \vert + (\mu + \alpha + \gamma + r) \biggl( \frac{{\lambda A(\mu + {\delta _{2}})}}{{[(\mu + \rho )(\mu + {\delta _{2}}) - {\delta _{2}}\rho ](\mu + \alpha + \gamma + r)}} - 1\biggr). \end{aligned}
(11)

Integrating both sides of the above inequality from 0 to t and dividing by t, we have

\begin{aligned} \frac{{{V_{2}}(t)}}{t}-\frac{{{V_{2}}(0)}}{t} &\le (\mu + \alpha + \gamma + r) \biggl( \frac{{\lambda A(\mu + {\delta _{2}})}}{{[(\mu + \rho )(\mu + {\delta _{2}}) - {\delta _{2}}\rho ](\mu + \alpha + \gamma + r)}} - 1\biggr) \\ &\quad {}+ \frac{1}{t} \int _{0}^{t} {K \bigl\vert m(\tau ) \bigr\vert \mathrm{{d} \tau }} . \end{aligned}

As t tends to infinity, the Ornstein–Uhlenbeck process converges weakly to the normal distribution $$\mathbb{N}(0,\frac{{{\xi ^{2}}}}{{2\theta }})$$, whose density function is $$\pi (x) = \frac{{\sqrt {\theta}}}{{\sqrt {\pi}\xi }}{e^{ - \frac{{\theta {x^{2}}}}{{{\xi ^{2}}}}}}$$. By the ergodic theorem [21, 36, 37], we have

$$\lim_{t \to \infty } \frac{1}{t} \int _{0}^{t} { \bigl\vert m( \tau ) \bigr\vert {\,\mathrm{d} }\tau } = \int _{ - \infty }^{\infty }{ \vert x \vert \pi (x){\, \mathrm{d} }x} = \frac{\xi }{{\sqrt {\pi \theta } }}.$$
(12)

Therefore, substituting Eq. (12) into the above inequality and letting $$t \to \infty$$ gives

\begin{aligned} \lim_{t \to \infty } \sup \frac{{\ln I(t)}}{t} & \le \lim_{t \to \infty } \sup \biggl( \frac{{{V_{2}}(t)}}{t}-\frac{{{V_{2}}(0)}}{t}\biggr) \\ & \le (\mu + \alpha + \gamma + r) \biggl( \frac{{\lambda A(\mu + {\delta _{2}})}}{{[(\mu + \rho )(\mu + {\delta _{2}}) - {\delta _{2}}\rho ](\mu + \alpha + \gamma + r)}} - 1\biggr) \\ &\quad{} + \lim_{t \to \infty } \frac{1}{t} \int _{0}^{t} K \bigl\vert m(\tau ) \bigr\vert {\,\mathrm{d} }\tau \\ & = (\mu + \alpha + \gamma + r) \biggl( \frac{{\lambda A(\mu + {\delta _{2}})}}{{[(\mu + \rho )(\mu + {\delta _{2}}) - {\delta _{2}}\rho ](\mu + \alpha + \gamma + r)}} - 1\biggr) \\ &\quad{}+ \frac{{K\xi }}{{\sqrt {\pi \theta } }} \\ & < (\mu + \alpha + \gamma + r) \bigl(R_{0}^{E} - 1 \bigr) \\ & < 0. \end{aligned}
(13)

The proof is completed. □
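The ergodic limit (12) used in the proof can also be observed numerically: along a single long Euler–Maruyama trajectory of m, the time average of $$|m(\tau )|$$ approaches $$\frac{\xi }{{\sqrt {\pi \theta } }}$$. The parameter values below are illustrative assumptions.

```python
import math
import random

random.seed(3)

theta, xi = 1.0, 0.5          # illustrative values only
dt, t_end = 0.01, 2000.0
n_steps = int(t_end / dt)

m, acc = 0.0, 0.0
for _ in range(n_steps):
    acc += abs(m) * dt        # accumulate the time integral of |m|
    m += -theta * m * dt + xi * math.sqrt(dt) * random.gauss(0.0, 1.0)

time_average = acc / t_end
ergodic_limit = xi / math.sqrt(math.pi * theta)
# time_average approaches ergodic_limit as t_end grows.
```

Lengthening t_end tightens the agreement, as the ergodic theorem suggests.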

## 4 Persistence

In this section, we analyze the condition that leads to the persistence of the disease.

### Theorem 3

Assume that $$R_{0}^{E} > 1$$. For any initial value $$(m(t),S(t),I(t),R(t),V(t)) \in \mathbb{R} \times \mathbb{R}_{+}^{4}$$, the following inequality holds

\begin{aligned} &\lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} {I( \tau ){\,\mathrm{d} }\tau } \\ &\quad \ge \frac{{(\mu + \alpha + \gamma + r) [(\mu + \rho )(\mu + {\delta _{2}}) - {\delta _{2}}\rho ] (\mu + {\delta _{1}})}}{{\lambda (\mu + {\delta _{2}})[(\mu + \alpha + \gamma + r)(\mu + {\delta _{1}}) - {\delta _{1}}\gamma ]}}\bigl(R_{0}^{E} - 1\bigr) \quad \textit{a.s.} \end{aligned}
(14)

### Proof

From the first equality of Eq. (11), we get

\begin{aligned} L{{( - }} {V_{2}}) & = - mS + (\mu + \alpha + \gamma + r) - \frac{{\lambda A}}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} \\ &\quad {}+ \frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr)I \\ & \le K \vert m \vert + (\mu + \alpha + \gamma + r) - \frac{{\lambda A}}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} \\ &\quad {}+ \frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}} \biggr)I. \end{aligned}

Integrating both sides of the above inequality from 0 to t and dividing by t, we get

\begin{aligned} - \frac{{{V_{2}}(t) - {V_{2}}(0)}}{t} & \le \frac{1}{t} \int _{0}^{t} {K \bigl\vert m( \tau ) \bigr\vert {\,\mathrm{d} }\tau } + (\mu + \alpha + \gamma + r) - \frac{{\lambda A}}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} \\ &\quad{} + \frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr)\frac{1}{t} \int _{0}^{t} {I(\tau ){\,\mathrm{d} }\tau } . \end{aligned}

Consequently, combining Eq. (12), we have

\begin{aligned} \lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} {I( \tau ){\,\mathrm{d} }\tau } & \ge \frac{{\frac{{\lambda A}}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} - (\mu + \alpha + \gamma + r) - \frac{{K\xi }}{{\sqrt {\pi \theta } }}}}{{\frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}(\mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}})}} \\ & = \frac{{(\mu + \alpha + \gamma + r) [(\mu + \rho )(\mu + {\delta _{2}}) - {\delta _{2}}\rho ](\mu + {\delta _{1}})}}{{\lambda (\mu + {\delta _{2}})[(\mu + \alpha + \gamma + r)(\mu + {\delta _{1}}) - {\delta _{1}}\gamma ]}}\bigl(R_{0}^{E} - 1\bigr). \end{aligned}

The inequality (14) holds. This completes the proof. □

## 5 Stationary distribution

We first present a lemma before illustrating the ergodic stationary distribution of model (5).

### Lemma 1

[38] For any initial value $$P_{0}(0)=( m(0),S(0), I(0), R(0), V(0) )\in \Gamma$$, suppose there exists a bounded closed domain $$U_{\varepsilon }\subset \Gamma$$ with regular boundary such that

$$\lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} { \mathbb{P}\bigl(\tau ,P_{0}(0), U_{\varepsilon}\bigr) {\,\mathrm{d} }\tau } >0 \quad \textit{a.s.},$$
(15)

where $$\mathbb{P}(\tau ,P_{0}(0), U_{\varepsilon})$$ denotes the transition probability of $$P_{0}(t)$$. Then the stochastic system (5) admits at least one stationary distribution.

### Theorem 4

Assume that $$R_{0}^{E} > 1$$. For any initial value $$(m(t),S(t),I(t),R(t),V(t)) \in \mathbb{R} \times \mathbb{R}_{+}^{4}$$, the system (5) has a stationary distribution $$\pi (\cdot )$$.

### Proof

Applying Itô’s formula, we have

\begin{aligned}& \begin{aligned} L{{( - }} {V_{2}}) & \le K \vert m \vert + (\mu + \alpha + \gamma + r) - \frac{{\lambda A}}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} \\ &\quad {}+ \frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}} \biggr)I \\ & = \frac{{K\xi }}{{\sqrt {\pi \theta } }} + (\mu + \alpha + \gamma + r) - \frac{{\lambda A}}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} + K\biggl( \vert m \vert - \frac{\xi }{{\sqrt {\pi \theta } }}\biggr) \\ &\quad {} + \frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} \biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr)I \\ & = - (\mu + \alpha + \gamma + r) \bigl(R_{0}^{E} - 1 \bigr) + K\biggl( \vert m \vert - \frac{\xi }{{\sqrt {\pi \theta } }}\biggr) \\ &\quad {}+ \frac{\lambda }{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr)I, \end{aligned} \end{aligned}
(16)
\begin{aligned}& L( - \ln S) = - \frac{A}{S} + \mu + \lambda I + mI + \rho - \frac{{{\delta _{1}}R}}{S} - \frac{{{\delta _{2}}V}}{S} \le - \frac{A}{S} + \mu + \lambda I + K \vert m \vert + \rho , \end{aligned}
(17)
\begin{aligned}& L( - \ln R) = - \frac{{\gamma I}}{R} + \mu + {\delta _{1}}, \end{aligned}
(18)
\begin{aligned}& L( - \ln V) = - \frac{{\rho S}}{V} + \mu + {\delta _{2}}, \end{aligned}
(19)
\begin{aligned}& \begin{aligned} L\bigl( - \ln (K - S - I - R - V)\bigr) & = \frac{{A - \mu S - \mu I - \alpha I - rI - \mu R - \mu V}}{{K - S - I - R - V}} \\ &\le \frac{{\mu (K - S - I - R - V)}}{{K - S - I - R - V}} - \frac{{(r + \alpha )I}}{{K - S - I - R - V}} \\ & = \mu - \frac{{(r + \alpha )I}}{{K - S - I - R - V}}. \end{aligned} \end{aligned}
(20)

Define a $$C^{2}$$-function as follows:

$$\bar{U}(S,I,R,V,m) = - M{V_{2}} - \ln S - \ln R - \ln V - \ln ( K - S - I - R - V) + \frac{{{m^{2}}}}{2},$$

where M is a large enough positive constant satisfying the following inequality:

$$- M (\mu + \alpha + \gamma + r) \bigl(R_{0}^{E} - 1 \bigr) + B \le - 2,$$

and

$$B = 4\mu + \rho + {\delta _{1}} + {\delta _{2}} + \frac{{{K^{2}}}}{{2\theta }} + \frac{{{\xi ^{2}}}}{2}.$$

The function Ū attains a minimum value $$\bar{U} (S_{0},I_{0},R_{0},V_{0},m_{0})$$ in the interior of Γ. Thus, we define a nonnegative $$C^{2}$$-function:

$$U(S,I,R,V,m) = \bar{U}(S,I,R,V,m) - \bar{U}({S_{0}},{I_{0}},{R_{0}},{V_{0}},{m_{0}}).$$

Combining (16)–(20), we obtain

\begin{aligned} LU & \le - M(\mu + \alpha + \gamma + r) \bigl(R_{0}^{E} - 1\bigr) + MK\biggl( \vert m \vert - \frac{\xi }{{\sqrt {\pi \theta } }}\biggr) + \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} \\ &\quad{} \times \biggl(\mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr)I - \frac{A}{S} + \mu + \lambda I + K \vert m \vert + \rho - \frac{{\gamma I}}{R} + \mu + {\delta _{1}} \\ &\quad{} - \frac{{\rho S}}{V} + \mu + {\delta _{2}} + \mu - \frac{{(r + \alpha )I}}{{K - S - I - R - V}} - \theta {m^{2}} + \frac{{{\xi ^{2}}}}{2} \\ & \le - M(\mu + \alpha + \gamma + r) \bigl(R_{0}^{E} - 1\bigr) + MK\biggl( \vert m \vert - \frac{\xi }{{\sqrt {\pi \theta } }}\biggr) + \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}} \\ &\quad{} \times \biggl(\mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr)I- \frac{A}{S} + \mu + \lambda I + \rho - \frac{{\gamma I}}{R} + \mu + { \delta _{1}} - \frac{{\rho S}}{V} + \mu \\ &\quad{} + {\delta _{2}} + \mu - \frac{{(r + \alpha )I}}{{K - S - I - R - V}} + \frac{{{\xi ^{2}}}}{2} - \frac{\theta }{2}{m^{2}} + \frac{{{K^{2}}}}{{2\theta }} \\ & = - M (\mu + \alpha + \gamma + r) \bigl(R_{0}^{E} - 1\bigr) + 4\mu + \rho + { \delta _{1}} + {\delta _{2}} + \frac{{{K^{2}}}}{{2\theta }} + \frac{{{\xi ^{2}}}}{2} - \frac{A}{S} \\ &\quad{} + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]I - \frac{{\gamma I}}{R} - \frac{{\rho S}}{V} \\ &\quad{} - \frac{{(r + \alpha )I}}{{K - S - I - R - V}}- \frac{\theta }{2}{m^{2}} + MK\biggl( \vert m \vert - \frac{\xi }{{\sqrt {\pi \theta } }}\biggr) \\ & = G(S,I,R,V,m) + MK\biggl( \vert m \vert - \frac{\xi }{{\sqrt {\pi \theta } }}\biggr), \end{aligned}

where

\begin{aligned} G(S,I,R,V,m) & = - M(\mu + \alpha + \gamma + r) \bigl(R_{0}^{E} - 1\bigr) + 4 \mu + \rho + {\delta _{1}} + {\delta _{2}} + \frac{{{K^{2}}}}{{2\theta }} + \frac{{{\xi ^{2}}}}{2} - \frac{A}{S} \\ &\quad{} + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]I- \frac{{\gamma I}}{R} - \frac{{\rho S}}{V} \\ &\quad{} - \frac{{(r + \alpha )I}}{{K - S - I - R - V}} - \frac{\theta }{2}{m^{2}}. \end{aligned}

Then we construct a bounded closed subset $$U_{\varepsilon}\subset \Gamma$$:

$${U_{\varepsilon }} = \biggl\{ (S,I,R,V,m) \in \Gamma |S \ge \varepsilon ,I \ge \varepsilon ,R \ge {\varepsilon ^{2}},V \ge {\varepsilon ^{2}},S + I + R + V \le K - {\varepsilon ^{2}}, \vert m \vert \le \frac{1}{\varepsilon }\biggr\} ,$$

where ε is a sufficiently small constant satisfying the following inequalities:

\begin{aligned}& - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr] \varepsilon \le - 1, \\& - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{A}{\varepsilon } \le - 1, \\& - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{\gamma }{\varepsilon } \le - 1, \\& - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{\rho }{\varepsilon } \le - 1, \\& - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{{r + \alpha }}{\varepsilon } \le - 1, \\& - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{\theta }{{2{\varepsilon ^{2}}}} \le - 1. \end{aligned}

Next, the complement of $$U_{\varepsilon }$$ in Γ can be covered by the following six subsets:

\begin{aligned} &U_{\varepsilon ,1}^{c} = \bigl\{ (S,I,R,V,m) \in \Gamma |I < \varepsilon \bigr\} , \\ &U_{\varepsilon ,2}^{c} = \bigl\{ (S,I,R,V,m) \in \Gamma |S < \varepsilon \bigr\} , \\ &U_{\varepsilon ,3}^{c} = \bigl\{ (S,I,R,V,m) \in \Gamma |I \ge \varepsilon ,R < {\varepsilon ^{2}}\bigr\} , \\ &U_{\varepsilon ,4}^{c} = \bigl\{ (S,I,R,V,m) \in \Gamma |S \ge \varepsilon ,V < {\varepsilon ^{2}}\bigr\} , \\ &U_{\varepsilon ,5}^{c} = \bigl\{ (S,I,R,V,m) \in \Gamma |I \ge \varepsilon ,S + I + R + V > K - {\varepsilon ^{2}}\bigr\} , \\ &U_{\varepsilon ,6}^{c} = \biggl\{ (S,I,R,V,m) \in \Gamma \Big| \vert m \vert > \frac{1}{\varepsilon }\biggr\} . \end{aligned}

In the following, we prove that $$G(S,I,R,V,m) \le -1$$ in each of these six cases.

Case 1. For any $$(S,I,R,V,m) \in U_{\varepsilon ,1}^{c}$$, one obtains

$$G(S,I,R,V,m) \le - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr] \varepsilon \le - 1.$$

Case 2. For any $$(S,I,R,V,m) \in U_{\varepsilon ,2}^{c}$$, one obtains

$$G(S,I,R,V,m) \le - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{A}{\varepsilon } \le - 1.$$

Case 3. For any $$(S,I,R,V,m) \in U_{\varepsilon ,3}^{c}$$, one obtains

$$G(S,I,R,V,m) \le - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{\gamma }{\varepsilon } \le - 1.$$

Case 4. For any $$(S,I,R,V,m) \in U_{\varepsilon ,4}^{c}$$, one obtains

$$G(S,I,R,V,m) \le - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{\rho }{\varepsilon } \le - 1.$$

Case 5. For any $$(S,I,R,V,m) \in U_{\varepsilon ,5}^{c}$$, one obtains

$$G(S,I,R,V,m) \le - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{{r + \alpha }}{\varepsilon } \le - 1.$$

Case 6. For any $$(S,I,R,V,m) \in U_{\varepsilon ,6}^{c}$$, one obtains

$$G(S,I,R,V,m) \le - 2 + \biggl[ \frac{{M\lambda }}{{\mu + \rho - \frac{{{\delta _{2}}\rho }}{{\mu + {\delta _{2}}}}}}\biggl( \mu + \alpha + \gamma + r - \frac{{{\delta _{1}}\gamma }}{{\mu + {\delta _{1}}}}\biggr) + \lambda \biggr]K - \frac{\theta }{{2{\varepsilon ^{2}}}} \le - 1.$$

Consequently, we readily obtain

$$G(S,I,R,V,m) \le - 1, \quad \text{for any }(S,I,R,V,m) \in \Gamma \backslash U_{\varepsilon }$$

for sufficiently small ε. Moreover, there exists a positive constant Y that satisfies $$G(S,I, R,V,m) \le Y$$. Here, denote $$K_{0}(t)=(S(t),I(t),R(t),V(t),m(t))$$. Thus, we can get

\begin{aligned} 0 & \le \frac{{\mathbb{E}(V({K_{0}}(t)))}}{t} = \frac{{\mathbb{E}(V({K_{0}}(0)))}}{t} + \frac{1}{t} \int _{0}^{t} { \mathbb{E}\bigl(LV \bigl({K_{0}}(\tau )\bigr)\bigr){\,\mathrm{d} }\tau } \\ & \le \frac{{\mathbb{E}(V({K_{0}}(0)))}}{t} + \frac{1}{t} \int _{0}^{t} {\mathbb{E}\bigl(G \bigl({K_{0}}(\tau )\bigr)\bigr){\,\mathrm{d} }\tau } + MK\biggl[\mathbb{E}\biggl( \frac{1}{t} \int _{0}^{t} { \bigl\vert m(\tau ) \bigr\vert {\,\mathrm{d} }\tau } \biggr) - \frac{\xi }{{\sqrt {\pi \theta } }}\biggr]. \end{aligned}

Taking the limit inferior on both sides of the above inequality and combining it with Eq. (12), we have

\begin{aligned} 0 & \le \lim_{t \to \infty } \inf \frac{{\mathbb{E}(V({K_{0}}(0)))}}{t} + \lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} {\mathbb{E}\bigl(G \bigl({K_{0}}( \tau )\bigr)\bigr)d\tau } \\ &\quad {}+ MK\biggl[\lim _{t \to \infty } \inf \mathbb{E}\biggl(\frac{1}{t} \int _{0}^{t} { \bigl\vert m(\tau ) \bigr\vert {\,\mathrm{d} }\tau } \biggr) - \frac{\xi }{{\sqrt {\pi \theta } }}\biggr] \\ & = \lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} {\mathbb{E}\bigl(G \bigl({K_{0}}(\tau )\bigr)\bigr){{\mathbf{1}}_{\{ {K_{0}}(\tau ) \in {U_{\varepsilon }}\} }} {\, \mathrm{d} }\tau } \\ &\quad {} + \lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} {\mathbb{E}\bigl(G \bigl({K_{0}}( \tau )\bigr)\bigr){{\mathbf{1}}_{\{ {K_{0}}(\tau ) \in \Gamma \backslash {U_{ \varepsilon }}\} }} {\, \mathrm{d} }\tau } \\ & \le Y\lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} \mathbb{P}\bigl\{ {K_{0}}(\tau ) \in {U_{\varepsilon }}\bigr\} {\,\mathrm{d} \tau } - \lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} \mathbb{P}\bigl\{ {K_{0}}(\tau ) \in \Gamma \backslash {U_{\varepsilon }}\bigr\} { \, \mathrm{d} \tau } \\ & \le - 1 + (Y + 1)\lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} \mathbb{P}\bigl\{ {K_{0}}(\tau ) \in {U_{ \varepsilon }}\bigr\} {\,\mathrm{d} \tau } , \end{aligned}

where $${{\mathbf{1}}_{\{ {K_{0}}(\tau ) \in {U_{\varepsilon }}\} }}$$ and $${{\mathbf{1}}_{\{ {K_{0}}(\tau ) \in \Gamma \backslash {U_{ \varepsilon }}\} }}$$ denote the indicator functions of the sets $$\{ K_{0}(\tau ) \in {U_{\varepsilon }}\}$$ and $$\{ K_{0}(\tau ) \in \Gamma \backslash {U_{\varepsilon }}\}$$, respectively. This indicates that

$$\lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} \mathbb{P}\bigl\{ K_{0}(\tau ) \in {U_{\varepsilon }}\bigr\} {\,\mathrm{d} }\tau \ge \frac{1}{{Y + 1}},$$

then

$$\lim_{t \to \infty } \inf \frac{1}{t} \int _{0}^{t} \mathbb{P}\bigl\{ \tau ,K_{0}(0),{U_{\varepsilon }}\bigr\} {\,\mathrm{d} }\tau \ge \frac{1}{{Y + 1}} > 0,\quad \forall K_{0}(0) \in \Gamma \text{ a.s.}$$
(21)

Inequality (21) and the invariance of Γ imply the existence of an invariant probability measure for model (5) on Γ. Moreover, the positive recurrence of model (5) follows readily from the existence of this invariant probability measure. Hence, system (5) has a stationary distribution $$\pi (\cdot )$$. □

## 6 Density function

In this section, we concentrate on the analysis of the probability density function. First, if $$R_{0}^{E} > 1$$, there is a quasi-equilibrium $$E^{*} = ({S^{*}},{I^{*}},{R^{*}},{V^{*}},{m^{*}}) = ({S^{*}},{I^{*}},{R^{*}},{V^{*}},0)$$ that satisfies the following equations:

$$\textstyle\begin{cases} A - \mu {S^{*}} - \lambda {S^{*}}{I^{*}} - {m^{*}}{S^{*}}{I^{*}} - \rho {S^{*}} + {\delta _{1}}{R^{*}} + {\delta _{2}}{V^{*}} = 0, \\ \lambda {S^{*}}{I^{*}} + {m^{*}}{S^{*}}{I^{*}} - (\mu + \alpha + \gamma + r){I^{*}} = 0, \\ \gamma {I^{*}} - (\mu + {\delta _{1}}){R^{*}} = 0, \\ \rho {S^{*}} - (\mu + {\delta _{2}}){V^{*}} = 0, \\ - \theta {m^{*}} = 0. \end{cases}$$
(22)

Notice that when the random factors in the model are not taken into account, the quasi-equilibrium $$E^{*}$$ is the same as the equilibrium of the deterministic model. Taking $${L_{1}} = S - {S^{*}}$$, $${L_{2}} = I - {I^{*}}$$, $${L_{3}} = R - {R^{*}}$$, $${L_{4}} = V - {V^{*}}$$, $${L_{5}} = m - {m^{*}}$$, we obtain the corresponding linearized system

$$\textstyle\begin{cases} {\mathrm{{d}}}{L_{1}} = ( - {a_{11}}{L_{1}} - {a_{12}}{L_{2}} + {a_{13}}{L_{3}} + {a_{14}}{L_{4}} - {a_{15}}{L_{5}}){\,\mathrm{d} }t, \\ {\mathrm{{d}}}{L_{2}} = ({a_{21}}{L_{1}} + {a_{15}}{L_{5}}){\,\mathrm{d} }t, \\ {\mathrm{{d}}}{L_{3}} = ({a_{32}}{L_{2}} - {a_{33}}{L_{3}}){\,\mathrm{d} }t, \\ {\mathrm{{d}}}{L_{4}} = ({a_{41}}{L_{1}} - {a_{44}}{L_{4}}){\,\mathrm{d} }t, \\ {\mathrm{{d}}}{L_{5}} = - \theta {L_{5}}{\,\mathrm{d} }t + \xi {\,\mathrm{d} }B(t), \end{cases}$$
(23)

where

$$\begin{gathered} {a_{11}} = \mu +\rho +\lambda I^{*} > 0,\qquad {a_{12}} = \lambda {S^{*}} > 0,\qquad {a_{13}} ={\delta _{1}} > 0,\qquad {a_{14}} = { \delta _{2}} > 0, \\ {a_{15}} = S ^{*}I^{*} > 0,\qquad {a_{21}} = \lambda {I^{*}} > 0,\qquad {a_{32}} = \gamma > 0,\qquad {a_{33}} = \mu + { \delta _{1}} > 0, \\{a_{41}} = \rho > 0,\qquad {a_{44}} = \mu + {\delta _{2}} > 0. \end{gathered}$$

### Theorem 5

If $$R_{0}^{E} > 1$$, the solution of system (23) is given by the specific normal probability density function $$\Phi ({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}})$$, which has the following form.

$$\Phi ({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}}) = {(2\pi )^{ - \frac{5}{2}}} \vert \Sigma \vert ^{ - \frac{1}{2}}{e^{ - \frac{1}{2}({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}}){ \Sigma ^{ - 1}}{{({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}})}^{T}}}},$$

where

\begin{aligned}& \Sigma = {q_{1}^{2}{\xi ^{2}}}(M_{1} J_{3} J_{2} J_{1})^{ - 1}{ \Sigma _{1}} {\bigl[(M_{1} J_{3} J_{2} J_{1})^{ - 1}\bigr]^{T}}, \\& \Sigma _{1} = \begin{pmatrix} {{c_{11}}}&0&{{c_{13}}}&0&{{c_{15}}} \\ 0&{ - {c_{13}}}&0&{ - {c_{15}}}&0 \\ {{c_{13}}}&0&{{c_{15}}}&0&{{c_{35}}} \\ 0&{ - {c_{15}}}&0&{ - {c_{35}}}&0 \\ {{c_{15}}}&0&{{c_{35}}}&0&{{c_{55}}} \end{pmatrix}, \\& {J_{1}} = \begin{pmatrix} 0&0&0&0&1 \\ 1&0&0&0&0 \\ 1&1&0&0&0 \\ 1&1&1&0&0 \\ 0&0&0&1&0 \end{pmatrix},\qquad {J_{2}} = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&{ - 1 + \frac{{{a_{32}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}}}}}&1&0 \\ 0&0&0&{ - \frac{{{a_{41}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}} - {a_{32}}}}}&1 \end{pmatrix}, \\& {J_{3}} = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&{ - \frac{{{a_{4}}}}{{{a_{2}}}}}&1 \end{pmatrix}, \\& {M_{1}} = \begin{pmatrix} {{q_{1}}}&{{q_{2}}}&{{q_{3}}}&{{q_{4}}}&{{q_{5}}} \\ 0&{{a_{2}}{a_{6}} ( { - {a_{11}} + {a_{12}} + {a_{21}}} )}&{{a_{6}} ( {{a_{2}} ( {{a_{1}} + {a_{3}} + {a_{7}}} ) + {a_{4}}{a_{9}}} )}&{{q_{6}}}&{{q_{7}}} \\ 0&0&{{a_{2}}{a_{6}}}&{{a_{6}}{a_{7}} + {a_{6}} ( {{a_{3}} + \frac{{{a_{4}}{a_{9}}}}{{{a_{2}}}}} )}&{a_{7}^{2} + {a_{6}}{a_{9}}} \\ 0&0&0&{{a_{6}}}&{{a_{7}}} \\ 0&0&0&0&1 \end{pmatrix}. \end{aligned}

### Proof

Setting $$L = ({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}})^{T}$$, system (23) can be written in the matrix form $${\mathrm{{d}}}L = AL{\,\mathrm{d} }t + G{\,\mathrm{d} }B(t)$$, where

$$A = \begin{pmatrix} { - {a_{11}}}&{ - {a_{12}}}&{{a_{13}}}&{{a_{14}}}&{ - {a_{15}}} \\ {{a_{21}}}&0&0&0&{{a_{15}}} \\ 0&{{a_{32}}}&{ - {a_{33}}}&0&0 \\ {{a_{41}}}&0&0&{ - {a_{44}}}&0 \\ 0&0&0&0&{ - \theta } \end{pmatrix},\qquad G = \operatorname{diag}(0,0,0,0,\xi ).$$

Then the characteristic polynomial of A can be represented as

$${\varphi _{A}}(\lambda ) = (\lambda + \theta ) \bigl({\lambda ^{4}} + {b_{1}} { \lambda ^{3}} + {b_{2}} {\lambda ^{2}} + {b_{3}}\lambda + {b_{4}}\bigr),$$

where

$$\begin{gathered} {b_{1}} = {a_{11}} + {a_{33}} + {a_{44}} > 0, \\ {b_{2}} = {a_{12}} {a_{21}} + {a_{11}} {a_{33}} - {a_{14}} {a_{41}} + {a_{11}} {a_{44}} + {a_{33}} {a_{44}} > 0, \\ {b_{3}} = - {a_{13}} {a_{21}} {a_{32}} + {a_{12}} {a_{21}} {a_{33}} - {a_{14}} {a_{33}} {a_{41}} + {a_{12}} {a_{21}} {a_{44}} + {a_{11}} {a_{33}} {a_{44}} > 0, \\ {b_{4}} = - {a_{13}} {a_{21}} {a_{32}} {a_{44}} + {a_{12}} {a_{21}} {a_{33}} {a_{44}} > 0. \end{gathered}$$
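These coefficient formulas can be cross-checked numerically: for arbitrary positive values of the $$a_{ij}$$ (the values below are placeholders, not derived from a particular equilibrium), the characteristic polynomial of A computed directly should match $$(\lambda + \theta )({\lambda ^{4}} + {b_{1}}{\lambda ^{3}} + {b_{2}}{\lambda ^{2}} + {b_{3}}\lambda + {b_{4}})$$. A minimal sketch:

```python
import numpy as np

# Illustrative positive values for the a_ij (chosen only to check the
# algebra; they are not fitted to the model).
a11, a12, a13, a14, a15 = 0.33, 0.30, 0.07, 0.0016, 0.39
a21, a32, a33, a41, a44 = 0.33, 0.10, 0.086, 0.014, 0.0176
theta = 0.1

A = np.array([[-a11, -a12,  a13,  a14, -a15],
              [ a21,  0.0,  0.0,  0.0,  a15],
              [ 0.0,  a32, -a33,  0.0,  0.0],
              [ a41,  0.0,  0.0, -a44,  0.0],
              [ 0.0,  0.0,  0.0,  0.0, -theta]])

# Closed-form coefficients from the text.
b1 = a11 + a33 + a44
b2 = a12*a21 + a11*a33 - a14*a41 + a11*a44 + a33*a44
b3 = (-a13*a21*a32 + a12*a21*a33 - a14*a33*a41
      + a12*a21*a44 + a11*a33*a44)
b4 = -a13*a21*a32*a44 + a12*a21*a33*a44

# Coefficients of (lambda + theta)(lambda^4 + b1 l^3 + b2 l^2 + b3 l + b4).
expected = np.polymul([1.0, theta], [1.0, b1, b2, b3, b4])
actual = np.poly(A)  # monic characteristic polynomial of A
assert np.allclose(actual, expected)
```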

Since $${b_{1}}{b_{2}} - {b_{3}} > 0$$ and $${b_{1}}{b_{2}}{b_{3}} - b_{3}^{2} - b_{1}^{2}{b_{4}} > 0$$, the Routh–Hurwitz criterion implies that all eigenvalues of matrix A have negative real parts. By [39], the probability density function $$\Phi ({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}})$$ of system (23) therefore satisfies the following Fokker–Planck equation:

\begin{aligned} & {-} \frac{{{\xi ^{2}}}}{2} \frac{{{\partial ^{2}}}}{{\partial {L_{5}^{2}}}}\Phi + \frac{\partial }{{\partial {L_{5}}}}\bigl[( - \theta L_{5})\Phi \bigr] + \frac{\partial }{{\partial L_{1}}}\bigl[( - {a_{11}} {L_{1}}- {a_{12}} {L_{2}} + {a_{13}} {L_{3}} + {a_{14}} {L_{4}} - {a_{15}} {L_{5}})\Phi \bigr] \\ &\quad{} + \frac{\partial }{{\partial L_{2}}}\bigl[({a_{21}} {L_{1}} + {a_{15}} {L_{5}}) \Phi \bigr] + \frac{\partial }{{\partial L_{3}}} \bigl[({a_{32}} {L_{2}} - {a_{33}} {L_{3}}) \Phi \bigr] + \frac{\partial }{{\partial L_{4}}}\bigl[({a_{41}} {L_{1}} - {a_{44}} {L_{4}}) \Phi \bigr] = 0, \end{aligned}

whose solution can be expressed in the form of a Gaussian distribution

$$\Phi (L) = c\exp \biggl\{ - \frac{1}{2}LQ{L^{T}}\biggr\} ,$$

where Q satisfies $$Q{G^{2}}Q + {A^{T}}Q + QA = 0$$. Suppose that Q is invertible; letting $${Q^{ - 1}} = \Sigma$$, one obtains

$${G^{2}} + A\Sigma + \Sigma {A^{T}} = 0.$$
(24)
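Equation (24) is a standard continuous-time Lyapunov equation; since all eigenvalues of A have negative real parts, it can also be solved numerically, for instance with SciPy's `solve_continuous_lyapunov`. A minimal sketch with a placeholder Hurwitz-stable matrix (not the model's A — all its Gershgorin discs lie in the left half-plane):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_covariance(A, xi):
    """Solve G^2 + A*Sigma + Sigma*A^T = 0 with G = diag(0, 0, 0, 0, xi)."""
    G2 = np.diag([0.0, 0.0, 0.0, 0.0, xi ** 2])
    # solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so q = -G^2.
    return solve_continuous_lyapunov(A, -G2)

# Placeholder stable matrix, used only to demonstrate the solver.
A = np.array([[-1.0,  0.5,  0.0,  0.0,  0.2],
              [ 0.3, -0.8,  0.1,  0.0,  0.0],
              [ 0.0,  0.2, -0.6,  0.0,  0.0],
              [ 0.1,  0.0,  0.0, -0.5,  0.0],
              [ 0.0,  0.0,  0.0,  0.0, -0.1]])
Sigma = stationary_covariance(A, xi=0.05)

# Verify that Sigma satisfies Eq. (24).
G2 = np.diag([0.0, 0.0, 0.0, 0.0, 0.05 ** 2])
assert np.allclose(G2 + A @ Sigma + Sigma @ A.T, 0.0)
```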

We now introduce two transformation matrices $$J_{1}$$ and $$J_{2}$$:

$${J_{1}} = \begin{pmatrix} 0&0&0&0&1 \\ 1&0&0&0&0 \\ 1&1&0&0&0 \\ 1&1&1&0&0 \\ 0&0&0&1&0 \end{pmatrix},\qquad {J_{2}} = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&{ - 1 + \frac{{{a_{32}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}}}}}&1&0 \\ 0&0&0&{ - \frac{{{a_{41}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}} - {a_{32}}}}}&1 \end{pmatrix}.$$

Then we can conclude that

\begin{aligned}& \begin{aligned} {A_{1}} &= {J_{1}}AJ_{1}^{T} \\ &= \begin{pmatrix} { - \theta }&0&0&0&0 \\ { - {a_{15}}}&{ - {a_{11}} + {a_{12}}}&{ - {a_{12}} - {a_{13}}}&{{a_{13}}}&{{a_{14}}} \\ 0&{ - {a_{11}} + {a_{12}} + {a_{21}}}&{ - {a_{12}} - {a_{13}}}&{{a_{13}}}&{{a_{14}}} \\ 0&{ - {a_{11}} + {a_{12}} + {a_{21}} - {a_{32}}}&{ - {a_{12}} - {a_{13}} + {a_{32}} + {a_{33}}}&{{a_{13}} - {a_{33}}}&{{a_{14}}} \\ 0&{{a_{41}}}&0&0&{ - {a_{44}}} \end{pmatrix}, \end{aligned} \\& \begin{aligned} {A_{2}} &= {J_{2}} {A_{1}}J_{2}^{T} \\ &= \begin{pmatrix} { - \theta }&0&0&0&0 \\ { - {a_{15}}}&{ - {a_{11}} + {a_{12}}}&{{a_{1}}}&{{a_{13}} + \frac{{{a_{14}}{a_{41}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}} - {a_{32}}}}}&{{a_{14}}} \\ 0&{ - {a_{11}} + {a_{12}} + {a_{21}}}&{{a_{1}}}&{{a_{13}} + \frac{{{a_{14}}{a_{41}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}} - {a_{32}}}}}&{{a_{14}}} \\ 0&0&{{a_{2}}}&{{a_{3}}}&{ - \frac{{{a_{14}}{a_{32}}}}{{{a_{11}} - {a_{12}} - {a_{21}}}}} \\ 0&0&{{a_{4}}}&{{a_{5}}}&{ \frac{{{a_{14}}{a_{41}}}}{{{a_{11}} - {a_{12}} - {a_{21}} + {a_{32}}}} - {a_{44}}} \end{pmatrix}, \end{aligned} \end{aligned}

where

\begin{aligned} &{a_{1}} = \frac{{{a_{12}} ( { - {a_{11}} + {a_{12}} + {a_{21}}} ) + {a_{13}}{a_{32}} - {a_{14}}{a_{41}}}}{{{a_{11}} - {a_{12}} - {a_{21}}}}, \\ &{a_{2}} = \frac{{{a_{32}} ( { - {a_{13}}{a_{32}} + ( {{a_{11}} - {a_{12}} - {a_{21}}} ) ( {{a_{11}} - {a_{21}} - {a_{33}}} ) + {a_{14}}{a_{41}}} )}}{{{{ ( { - {a_{11}} + {a_{12}} + {a_{21}}} )}^{2}}}}, \\ &{a_{3}} = - {a_{33}} + \frac{{{a_{32}} ( {{a_{13}} ( { - {a_{11}} + {a_{12}} + {a_{21}} - {a_{32}}} ) + {a_{14}}{a_{41}}} )}}{{ ( {{a_{11}} - {a_{12}} - {a_{21}}} ) ( {{a_{11}} - {a_{12}} - {a_{21}} + {a_{32}}} )}}, \\ &{a_{4}} = {a_{41}}\biggl( - \frac{{{a_{13}} - {a_{33}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}}}} + \frac{{ - {a_{12}} - {a_{13}} + {a_{32}} + {a_{33}}}}{{{a_{11}} - {a_{12}} - {a_{21}} + {a_{32}}}} + \frac{{\frac{{{a_{14}}{a_{41}}}}{{{a_{11}} - {a_{12}} - {a_{21}} + {a_{32}}}} - {a_{44}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}}}}\biggr), \\ &{a_{5}} = \frac{{{a_{41}} ( { - {a_{14}}{a_{41}} + ( {{a_{11}} - {a_{12}} - {a_{21}} + {a_{32}}} ) ( {{a_{13}} - {a_{33}} + {a_{44}}} )} )}}{{{{ ( {{a_{11}} - {a_{12}} - {a_{21}} + {a_{32}}} )}^{2}}}}. \end{aligned}

Define

$${J_{3}} = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&{ - \frac{{{a_{4}}}}{{{a_{2}}}}}&1 \end{pmatrix},$$

then we can get

$${A_{3}} = {J_{3}} {A_{2}}J_{3}^{T} = \begin{pmatrix} { - \theta }&0&0&0&0 \\ { - {a_{15}}}&{ - {a_{11}} + {a_{12}}}&{{a_{1}}}&{{a_{8}}}&{{a_{14}}} \\ 0&{ - {a_{11}} + {a_{12}} + {a_{21}}}&{{a_{1}}}&{{a_{8}}}&{{a_{14}}} \\ 0&0&{{a_{2}}}&{{a_{3}} + \frac{{{a_{4}}{a_{9}}}}{{{a_{2}}}}}&{{a_{9}}} \\ 0&0&0&{{a_{6}}}&{{a_{7}}} \end{pmatrix},$$

where

\begin{aligned} &{a_{6}} = {a_{5}} + \frac{{{a_{4}} ( {\frac{{{a_{4}}{a_{14}}{a_{32}}}}{{{a_{11}} - {a_{12}} - {a_{21}}}} + {a_{2}} ( { - {a_{3}} + \frac{{{a_{14}}{a_{41}}}}{{{a_{11}} - {a_{12}} - {a_{21}} + {a_{32}}}} - {a_{44}}} )} )}}{{a_{2}^{2}}} \neq 0, \\ &{a_{7}} = {a_{14}} \biggl( { \frac{{{a_{4}}{a_{32}}}}{{{a_{2}} ( {{a_{11}} - {a_{12}} - {a_{21}}} )}} + \frac{{{a_{41}}}}{{{a_{11}} - {a_{12}} - {a_{21}} + {a_{32}}}}} \biggr) - {a_{44}} \neq 0, \\ &{a_{8}} = {a_{13}} + {a_{14}}\biggl( \frac{{{a_{4}}}}{{{a_{2}}}} + \frac{{{a_{41}}}}{{ - {a_{11}} + {a_{12}} + {a_{21}} - {a_{32}}}}\biggr), \\ &{a_{9}} = - \frac{{{a_{14}}{a_{32}}}}{{{a_{11}} - {a_{12}} - {a_{21}}}}. \end{aligned}

Next, consider the transform matrix

$${M_{1}} = \begin{pmatrix} {{q_{1}}}&{{q_{2}}}&{{q_{3}}}&{{q_{4}}}&{{q_{5}}} \\ 0&{{a_{2}}{a_{6}} ( { - {a_{11}} + {a_{12}} + {a_{21}}} )}&{{a_{6}} ( {{a_{2}} ( {{a_{1}} + {a_{3}} + {a_{7}}} ) + {a_{4}}{a_{9}}} )}&{{q_{6}}}&{{q_{7}}} \\ 0&0&{{a_{2}}{a_{6}}}&{{a_{6}}{a_{7}} + {a_{6}} ( {{a_{3}} + \frac{{{a_{4}}{a_{9}}}}{{{a_{2}}}}} )}&{a_{7}^{2} + {a_{6}}{a_{9}}} \\ 0&0&0&{{a_{6}}}&{{a_{7}}} \\ 0&0&0&0&1 \end{pmatrix},$$

where

\begin{aligned}& {q_{1}} = - {a_{2}} {a_{6}} {a_{15}} ( { - {a_{11}} + {a_{12}} + {a_{21}}} ), \\& {q_{2}} = {a_{6}} \bigl( { {a_{4}} {a_{9}} + {a_{2}} ( {{a_{1}} + {a_{3}} + {a_{7}} - {a_{11}} + {a_{12}}} )} \bigr) ( { - {a_{11}} + {a_{12}} + {a_{21}}} ), \\& \begin{aligned} {q_{3}} & = {a_{6}}\biggl({a_{1}} \bigl( {{a_{2}} ( {{a_{1}} + {a_{3}} + {a_{7}}} ) + {a_{4}} {a_{9}}} \bigr) \\ &\quad {}+ {a_{2}} \biggl( a_{7}^{2} + {a_{2}} {a_{8}} + {a_{6}} {a_{9}} + \frac{{ ( {{a_{2}}{a_{3}} + {a_{4}}{a_{9}}} ) ( {{a_{2}} ( {{a_{3}} + {a_{7}}} ) + {a_{4}}{a_{9}}} )}}{{a_{2}^{2}}} \biggr) \\ &\quad{} + {a_{1}} {a_{2}} ( { - {a_{11}} + {a_{12}} + {a_{21}}} )\biggr), \end{aligned} \\& \begin{aligned} {q_{4}} &= {a_{6}}\biggl(a_{7}^{3} + 2{a_{6}} {a_{7}} {a_{9}} + {a_{8}} \bigl( {a_{2}} ( {{a_{1}}+ {a_{3}} + {a_{7}}} ) + {a_{4}} {a_{9}} \bigr) \\ &\quad {} + \biggl( {{a_{3}} + \frac{{{a_{4}}{a_{9}}}}{{{a_{2}}}}} \biggr) \biggl(a_{7}^{2} + {a_{2}} {a_{8}} + {a_{6}} {a_{9}} + \frac{{ ( {{a_{2}}{a_{3}} + {a_{4}}{a_{9}}} ) ( {{a_{2}} ( {{a_{3}} + {a_{7}}} ) + {a_{4}}{a_{9}}} )}}{{a_{2}^{2}}}\biggr) \\ &\quad{} + {a_{6}} \biggl( {{a_{3}} {a_{9}} + \frac{{{a_{4}}a_{9}^{2}}}{{{a_{2}}}} + {a_{2}} {a_{14}}} \biggr) + {a_{2}} {a_{8}} ( { - {a_{11}} + {a_{12}} + {a_{21}}} )\biggr), \end{aligned} \\& \begin{aligned} {q_{5}} & = \frac{1}{{a_{2}^{2}}}\bigl( 2{a_{2}} {a_{4}} {a_{6}} ( {{a_{3}} + {a_{7}}} )a_{9}^{2} + a_{4}^{2}{a_{6}}a_{9}^{3} + a_{2}^{2} \bigl( a_{7}^{4} + 2{a_{3}} {a_{6}} {a_{7}} {a_{9}} + 3{a_{6}}a_{7}^{2}{a_{9}} \\ &\quad{} + {a_{6}} {a_{9}} \bigl( a_{3}^{2} + {a_{6}} {a_{9}} + {a_{4}} {a_{14}} \bigr) \bigr)+ a_{2}^{3}{a_{6}} \bigl( {{a_{8}} {a_{9}} + {a_{14}} ( {{a_{1}} + {a_{3}} + 2{a_{7}} - {a_{11}} + {a_{12}} + {a_{21}}} )} \bigr) \bigr), \end{aligned} \\& {q_{6}} = {a_{6}} \biggl( a_{7}^{2} + {a_{2}} {a_{8}} + {a_{6}} {a_{9}} + \frac{{ ( {{a_{2}}{a_{3}} + {a_{4}}{a_{9}}} ) ( {{a_{2}} ( {{a_{3}} + {a_{7}}} ) + {a_{4}}{a_{9}}} )}}{{a_{2}^{2}}} \biggr), \\& {q_{7}} = 
a_{7}^{3} + 2{a_{6}} {a_{7}} {a_{9}} + {a_{6}} \biggl( {{a_{3}} {a_{9}} + \frac{{{a_{4}}a_{9}^{2}}}{{{a_{2}}}} + {a_{2}} {a_{14}}} \biggr). \end{aligned}

Then

$${B_{1}} = {M_{1}} {A_{3}}M_{1}^{ - 1} = \begin{pmatrix} { - {d_{1}}}&{ - {d_{2}}}&{ - {d_{3}}}&{ - {d_{4}}}&{ - {d_{5}}} \\ 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \end{pmatrix},$$

where $${d_{1}} = \theta + {b_{1}}$$, $${d_{2}} = \theta {b_{1}} + {b_{2}}$$, $${d_{3}} = \theta {b_{2}} + {b_{3}}$$, $${d_{4}} = \theta {b_{3}} + {b_{4}}$$, $${d_{5}} = \theta {b_{4}}$$. Equation (24) can then be transformed into the following equivalent form:

$${M_{1}} {J_{3}} {J_{2}} {J_{1}} {G^{2}} {({M_{1}} {J_{3}} {J_{2}} {J_{1}})^{T}} + {B_{1}}\bigl[{M_{1}} {J_{3}} {J_{2}} {J_{1}}\Sigma {({M_{1}} {J_{3}} {J_{2}} {J_{1}})^{T}} \bigr] + \bigl[{M_{1}} {J_{3}} {J_{2}} {J_{1}}\Sigma {({M_{1}} {J_{3}} {J_{2}} {J_{1}})^{T}}\bigr]B_{1}^{T} = 0.$$
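The coefficients $$d_{1},\dots ,d_{5}$$ above are exactly those obtained by expanding $$(\lambda + \theta )({\lambda ^{4}} + {b_{1}}{\lambda ^{3}} + {b_{2}}{\lambda ^{2}} + {b_{3}}\lambda + {b_{4}})$$, which is easy to verify (θ and the $$b_{i}$$ below are arbitrary placeholders, not model values):

```python
import numpy as np

theta = 0.1
b1, b2, b3, b4 = 0.7, 0.5, 0.2, 0.05  # arbitrary placeholder coefficients

# Multiply (lambda + theta) by (lambda^4 + b1 l^3 + b2 l^2 + b3 l + b4).
expected = np.polymul([1.0, theta], [1.0, b1, b2, b3, b4])

d1 = theta + b1
d2 = theta * b1 + b2
d3 = theta * b2 + b3
d4 = theta * b3 + b4
d5 = theta * b4
assert np.allclose(expected, [1.0, d1, d2, d3, d4, d5])
```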

Set $$\Sigma _{1} = \frac{1}{{q_{1}^{2}{\xi ^{2}}}}{M_{1}}{J_{3}}{J_{2}}{J_{1}} \Sigma {({M_{1}}{J_{3}}{J_{2}}{J_{1}})^{T}}$$, then we can get

$$G_{0}^{2} + {B_{1}}\Sigma _{1} + \Sigma _{1} B_{1}^{T} = 0,$$

where $$G_{0}=\operatorname{diag}(1,0,0,0,0)$$ and

$$\Sigma _{1} = \begin{pmatrix} {{c_{11}}}&0&{{c_{13}}}&0&{{c_{15}}} \\ 0&{ - {c_{13}}}&0&{ - {c_{15}}}&0 \\ {{c_{13}}}&0&{{c_{15}}}&0&{{c_{35}}} \\ 0&{ - {c_{15}}}&0&{ - {c_{35}}}&0 \\ {{c_{15}}}&0&{{c_{35}}}&0&{{c_{55}}} \end{pmatrix},$$

where

\begin{aligned} &{c_{11}} = \frac{{{d_{2}}({d_{3}}{d_{4}} - {d_{2}}{d_{5}}) - {d_{4}}({d_{1}}{d_{4}} - {d_{5}})}}{{2({d_{1}}{d_{2}} - {d_{3}})({d_{3}}{d_{4}} - {d_{2}}{d_{5}}) - 2{{({d_{1}}{d_{4}} - {d_{5}})}^{2}}}}, \\ &{c_{13}} = - \frac{{{d_{3}}{d_{4}} - {d_{2}}{d_{5}}}}{{2({d_{1}}{d_{2}} - {d_{3}})({d_{3}}{d_{4}} - {d_{2}}{d_{5}}) - 2{{({d_{1}}{d_{4}} - {d_{5}})}^{2}}}}, \\ &{c_{15}} = \frac{{{d_{1}}{d_{4}} - {d_{5}}}}{{2({d_{1}}{d_{2}} - {d_{3}})({d_{3}}{d_{4}} - {d_{2}}{d_{5}}) - 2{{({d_{1}}{d_{4}} - {d_{5}})}^{2}}}}, \\ &{c_{35}} = - \frac{{{d_{1}}{d_{2}} - {d_{3}}}}{{2({d_{1}}{d_{2}} - {d_{3}})({d_{3}}{d_{4}} - {d_{2}}{d_{5}}) - 2{{({d_{1}}{d_{4}} - {d_{5}})}^{2}}}}, \\ &{c_{55}} = \frac{{{d_{3}}({d_{1}}{d_{2}} - {d_{3}}) - {d_{1}}({d_{1}}{d_{4}} - {d_{5}})}}{{2{d_{5}}({d_{1}}{d_{2}} - {d_{3}})({d_{3}}{d_{4}} - {d_{2}}{d_{5}}) - 2{d_{5}}{{({d_{1}}{d_{4}} - {d_{5}})}^{2}}}}. \end{aligned}
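These closed-form entries can be checked numerically: for any Hurwitz-stable choice of $$d_{1},\dots ,d_{5}$$ (below, the coefficients of $$(\lambda +1)^{5}$$, a placeholder unrelated to the model), the matrix built from the $$c_{ij}$$ satisfies $$G_{0}^{2} + {B_{1}}\Sigma _{1} + \Sigma _{1}B_{1}^{T} = 0$$. A sketch:

```python
import numpy as np

# d_i from (lambda + 1)^5, which is Hurwitz stable (placeholder values).
d1, d2, d3, d4, d5 = 5.0, 10.0, 10.0, 5.0, 1.0

# Closed-form entries from the text.
D = 2*(d1*d2 - d3)*(d3*d4 - d2*d5) - 2*(d1*d4 - d5)**2
c11 = (d2*(d3*d4 - d2*d5) - d4*(d1*d4 - d5)) / D
c13 = -(d3*d4 - d2*d5) / D
c15 = (d1*d4 - d5) / D
c35 = -(d1*d2 - d3) / D
c55 = (d3*(d1*d2 - d3) - d1*(d1*d4 - d5)) / (d5 * D)

Sigma1 = np.array([[ c11,  0.0,  c13,  0.0,  c15],
                   [ 0.0, -c13,  0.0, -c15,  0.0],
                   [ c13,  0.0,  c15,  0.0,  c35],
                   [ 0.0, -c15,  0.0, -c35,  0.0],
                   [ c15,  0.0,  c35,  0.0,  c55]])

# B1 in companion form, G0^2 = diag(1, 0, 0, 0, 0).
B1 = np.zeros((5, 5))
B1[0] = [-d1, -d2, -d3, -d4, -d5]
B1[1, 0] = B1[2, 1] = B1[3, 2] = B1[4, 3] = 1.0
G0sq = np.diag([1.0, 0.0, 0.0, 0.0, 0.0])

assert np.allclose(G0sq + B1 @ Sigma1 + Sigma1 @ B1.T, 0.0)
```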

Note that $$\Sigma _{1}$$ is a positive definite matrix, hence, $$\Sigma = {q_{1}^{2}{\xi ^{2}}}(M_{1} J_{3} J_{2} J_{1})^{ - 1}{ \Sigma _{1}}{[(M_{1} J_{3} J_{2} J_{1})^{ - 1}]^{T}}$$ is also positive definite. Consequently, the density function around the quasi-equilibrium point $$E^{*}$$ can be written as

$$\Phi ({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}}) = {(2\pi )^{ - \frac{5}{2}}} \vert \Sigma \vert ^{ - \frac{1}{2}}{e^{ - \frac{1}{2}({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}}){ \Sigma ^{ - 1}}{{({L_{1}},{L_{2}},{L_{3}},{L_{4}},{L_{5}})}^{T}}}}.$$

The proof is completed. □

### Remark 2

By Theorem 5, the solution $$(S(t),I(t),R(t),V(t),m(t))$$ to model (5) around $$(S^{*},I^{*},R^{*},V^{*},m^{*})^{T}$$ follows the normal distribution $$\mathbb{N}((S^{*},I^{*},R^{*},V^{*},m^{*})^{T}, \Sigma )$$. Consequently, if $$R_{0}^{E} >1$$, the solution $$(S(t), I(t), R(t), V(t))$$ has a unique normal density function

$$\Phi (S,I,R,V)=(2 \pi )^{-2} \bigl\vert \Sigma ^{(4)} \bigr\vert ^{-\frac{1}{2}}e^{ - \frac{1}{2}({L_{1}},{L_{2}},{L_{3}},{L_{4}}){(\Sigma ^{(4)})^{ - 1}} {{({L_{1}},{L_{2}},{L_{3}},{L_{4}})}^{T}}}.$$
(25)

where $$\Sigma ^{(4)}$$ is the matrix obtained from Σ by deleting its fifth row and column. Therefore, $$S(t)$$, $$I(t)$$, $$R(t)$$, and $$V(t)$$ will converge to the marginal density functions $$\Phi _{S}$$, $$\Phi _{I}$$, $$\Phi _{R}$$, $$\Phi _{V}$$, respectively, where

\begin{aligned}& \Phi _{S}= \frac{1}{\sqrt{2 \pi} \varphi _{1}}e^{- \frac{(S-S^{*})^{2}}{2 \varphi _{1}^{2}}},\qquad \Phi _{I}= \frac{1}{\sqrt{2 \pi} \varphi _{2}}e^{- \frac{(I-I^{*})^{2}}{2 \varphi _{2}^{2}}}, \\& \Phi _{R}= \frac{1}{\sqrt{2 \pi} \varphi _{3}}e^{- \frac{(R-R^{*})^{2}}{2 \varphi _{3}^{2}}}, \qquad \Phi _{V}= \frac{1}{\sqrt{2 \pi} \varphi _{4}}e^{- \frac{(V-V^{*})^{2}}{2 \varphi _{4}^{2}}}, \end{aligned}

and $$\varphi _{i}^{2}$$ is the ith element on the main diagonal of Σ. Correspondingly, $$S(t)$$, $$I(t)$$, $$R(t)$$, and $$V(t)$$ also converge to the marginal distributions $$\mathbb{N}(S^{*},\varphi _{1}^{2})$$, $$\mathbb{N}(I^{*},\varphi _{2}^{2})$$, $$\mathbb{N}(R^{*},\varphi _{3}^{2})$$, and $$\mathbb{N}(V^{*},\varphi _{4}^{2})$$, respectively.

## 7 Numerical simulation

In this section, various scenarios are simulated by assigning specific parameter values. First, we discretize model (5) using the Milstein method; the discretized form is as follows:

$$\textstyle\begin{cases} {m_{k + 1}} = {m_{k}} - \theta {m_{k}}\Delta t + \xi {\eta _{k}} \sqrt {\Delta t} + \frac{{{\xi ^{2}}}}{2}(\eta _{{k}}^{2} - 1) \Delta t, \\ {S_{k + 1}} = {S_{k}} + [A - \mu {S_{k}} - (\lambda + {m_{k}}){S_{k}}{I_{k}} - \rho {S_{k}} + {\delta _{1}}{R_{k}} + {\delta _{2}}{V_{k}}]\Delta t, \\ {I_{k + 1}} = {I_{k}} + [(\lambda + {m_{k}}){S_{k}}{I_{k}} - (\mu + \alpha + \gamma + r){I_{k}}]\Delta t, \\ {R_{k + 1}} = {R_{k}} + [\gamma {I_{k}} - (\mu + {\delta _{1}}){R_{k}}] \Delta t, \\ {V_{k + 1}} = {V_{k}} + [\rho {S_{k}} - (\mu + {\delta _{2}}){V_{k}}] \Delta t, \end{cases}$$

where Δt is the time step and the $$\eta _{k}$$ are independent standard normal random variables.
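The scheme above can be implemented in a few lines. The sketch below is illustrative (not the authors' simulation code) and uses the parameter values and initial condition of Example 1:

```python
import numpy as np

def simulate(T=500.0, dt=0.01, seed=0,
             A=0.014, mu=0.014, lam=0.25, rho=0.03,
             delta1=0.37, delta2=0.0014, alpha=0.03, gamma=0.05,
             r=0.001, theta=0.1, xi=0.05,
             m0=-0.002, S0=0.03, I0=0.95, R0=0.01, V0=0.01):
    """Iterate the discretized model; returns the (m, S, I, R, V) paths."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    m, S, I, R, V = (np.empty(n + 1) for _ in range(5))
    m[0], S[0], I[0], R[0], V[0] = m0, S0, I0, R0, V0
    for k in range(n):
        eta = rng.standard_normal()
        m[k + 1] = (m[k] - theta * m[k] * dt + xi * eta * np.sqrt(dt)
                    + 0.5 * xi**2 * (eta**2 - 1) * dt)
        S[k + 1] = S[k] + (A - mu * S[k] - (lam + m[k]) * S[k] * I[k]
                           - rho * S[k] + delta1 * R[k] + delta2 * V[k]) * dt
        I[k + 1] = I[k] + ((lam + m[k]) * S[k] * I[k]
                           - (mu + alpha + gamma + r) * I[k]) * dt
        R[k + 1] = R[k] + (gamma * I[k] - (mu + delta1) * R[k]) * dt
        V[k + 1] = V[k] + (rho * S[k] - (mu + delta2) * V[k]) * dt
    return m, S, I, R, V

m, S, I, R, V = simulate()
```

Plotting the returned arrays against `np.arange(len(S)) * dt` reproduces sample paths of the kind shown in the figures.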

Next, we verify our theoretical results from the following four aspects:

1. When $$R_{0}^{E}<1$$, whether the disease will become extinct with probability 1.

2. When $$R_{0}^{E}>1$$, whether the disease will persist with probability 1.

3. The average evolution and variance of the number of infected individuals.

4. The existence of the stationary distribution.

### Example 1

Choose $$A=0.014$$, $$\mu =0.014$$, $$\lambda =0.25$$, $$\rho =0.03$$, $$\delta _{1}=0.37$$, $$\delta _{2}=0.0014$$, $$\alpha =0.03$$, $$\gamma =0.05$$, $$r=0.001$$, $$\theta =0.1$$, $$\xi =0.05$$, and the initial value $$(m(0), S(0), I(0), R(0), V(0))=(-0.002, 0.03, 0.95, 0.01, 0.01)$$. In this case, $$R_{0}^{E} \approx 0.979<1$$, which satisfies the condition of Theorem 2. Figure 1 simulates the numbers of the four population classes; it can be seen that $$I(t)$$ and $$R(t)$$ go extinct after a period of time, while $$S(t)$$ and $$V(t)$$ stabilize with probability 1.

### Example 2

Choose $$A=0.16$$, $$\mu =0.016$$, $$\lambda =0.5$$, $$\rho =0.014$$, $$\delta _{1}=0.07$$, $$\delta _{2}=0.0016$$, $$\alpha =0.084$$, $$\gamma =0.1$$, $$r=0.1$$, $$\theta =0.1$$, $$\xi =0.05$$, and the initial value $$(m(0), S(0), I(0), R(0), V(0))= (-0.002, 0.03, 0.95 , 0.01, 0.01 )$$. In this case, $$R_{0}^{E} \approx 23.3>1$$, which satisfies the condition of Theorem 3. Figure 2 simulates the numbers of the four population classes; it indicates that $$I(t)$$ and $$R(t)$$ do not die out, and the disease becomes endemic.

### Example 3

Choose (a) $$A=0.16$$, $$\mu =0.016$$, $$\lambda =0.5$$, $$\rho =0.014$$, $$\delta _{1}=0.07$$, $$\delta _{2}=0.0016$$, $$\alpha =0.084$$, $$\gamma =0.1$$, $$r=0.1$$, $$\theta =0.1$$, $$\xi =0.05$$, (b) $$A=0.014$$, $$\mu =0.014$$, $$\lambda =0.25$$, $$\rho =0.03$$, $$\delta _{1}=0.37$$, $$\delta _{2}=0.0014$$, $$\alpha =0.03$$, $$\gamma =0.05$$, $$r=0.001$$, $$\theta =0.1$$, $$\xi =0.05$$, and the initial value $$(m(0), S(0), I(0), R(0), V(0)) =(-0.002, 0.03, 0.95, 0.01, 0.01)$$. Figure 3 simulates the expectation and standard deviation of the number of infected individuals, which demonstrates that in case (a), the disease spreads and becomes endemic, but in case (b), it quickly disappears.

### Example 4

Choose $$A=0.16$$, $$\mu =0.016$$, $$\lambda =0.5$$, $$\rho =0.014$$, $$\delta _{1}=0.07$$, $$\delta _{2}=0.0016$$, $$\alpha =0.084$$, $$\gamma =0.1$$, $$r=0.1$$, $$\theta =0.1$$, $$\xi =0.05$$, and the initial value $$(m(0), S(0), I(0), R(0), V(0))=(-0.002, 0.03, 0.95, 0.01, 0.01)$$. In this case, $$R_{0}^{E} \approx 23.3>1$$ and the quasi-equilibrium is $$E_{2}^{*} = ({S^{*}},{I^{*}},{R^{*}},{V^{*}}) = (0.6,0.653,0.759,0.477 )$$. In addition, the solution $$(S(t), I(t), R(t), V(t), m(t))$$ follows the normal density function $$\Phi (S,I,R,V,m) \sim \mathbb{N}_{5}((0.6,0.653, 0.759,0.477,0 )^{T}, \Sigma )$$, where

$$\Sigma = \begin{pmatrix} {0.0202}&{ - 0.0048}&{ - 0.0036}&{0.0023}&{ - 0.014} \\ { - 0.0048}&{0.0039}&{0.0008}&{0.0005}&{0.004} \\ { - 0.0036}&{0.0008}&{0.0009}&{0.00004}&{0.0021} \\ {0.0023}&{0.0005}&{0.00004}&{0.0018}&{ - 0.0017} \\ { - 0.014}&{0.004}&{0.0021}&{ - 0.0017}&{0.0127} \end{pmatrix}$$

and

$$\begin{gathered} {\Phi _{S}} = 2.81{{ {\mathrm{e}} } ^{ - 24.8{{(S - 0.6)}^{2}}}},\qquad { \Phi _{I}} = 6.384{{ { \mathrm{e}} } ^{ - 128.049{{(I - 0.653)}^{2}}}}, \\ {\Phi _{R}} = 13.2977{{ {\mathrm{e}} } ^{ - 555.524{{(R - 0.759)}^{2}}}},\qquad { \Phi _{V}} = 9.522{{ {\mathrm{e}} } ^{ - 284.865{{(V - 0.477)}^{2}}}}. \end{gathered}$$

Figure 4 shows the density histograms and marginal density curves of model (5), which verify the existence of the stationary distribution; the marginal density curves are largely consistent with the histograms.
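The quasi-equilibrium components and marginal density constants reported in this example can be reproduced directly from the equilibrium equations (22) with $$m^{*}=0$$ and from the diagonal of Σ; a quick cross-check:

```python
import numpy as np

# Parameter values of this example.
A, mu, lam, rho = 0.16, 0.016, 0.5, 0.014
delta1, delta2, alpha, gamma, r = 0.07, 0.0016, 0.084, 0.1, 0.1

# From (22) with m* = 0: lam * S* = mu + alpha + gamma + r.
S_star = (mu + alpha + gamma + r) / lam
V_star = rho * S_star / (mu + delta2)
# Substituting R* = gamma*I*/(mu + delta1) into the S-equation gives I*.
I_star = (A - (mu + rho) * S_star + delta2 * V_star) \
         / (lam * S_star - delta1 * gamma / (mu + delta1))
R_star = gamma * I_star / (mu + delta1)
assert np.allclose([S_star, I_star, R_star, V_star],
                   [0.6, 0.653, 0.759, 0.477], atol=5e-4)

# Marginal density of S from the (1,1) entry of Sigma:
var_S = 0.0202
prefactor = 1 / np.sqrt(2 * np.pi * var_S)   # ≈ 2.81, as in Phi_S
exponent = 1 / (2 * var_S)                   # ≈ 24.8, as in Phi_S
```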

### Example 5

Choose $$A=0.16$$, $$\mu =0.016$$, $$\lambda =0.5$$, $$\rho =0.014$$, $$\delta _{1}=0.07$$, $$\delta _{2}=0.0016$$, $$\alpha =0.084$$, $$\gamma =0.1$$, $$r=0.1$$, $$\theta =0.1$$, $$\xi =0.05$$, and the initial value $$(m(0), S(0), I(0), R(0), V(0))=(-0.002, 0.03, 0.95, 0.01, 0.01)$$. Figure 5 compares the stochastic model with the deterministic model. It can be seen that when the random disturbance is not too large, the stochastic model follows the same trend as the deterministic model.

### Example 6

Choose $$A=0.16$$, $$\mu =0.016$$, $$\lambda =0.5$$, $$\rho =0.014$$, $$\delta _{1}=0.07$$, $$\delta _{2}=0.0016$$, $$\alpha =0.084$$, $$\gamma =0.1$$, $$r=0.1$$, $$\theta =0.1$$, and the initial value $$(m(0), S(0), I(0), R(0), V(0))=(-0.002, 0.03, 0.95, 0.01, 0.01)$$. Then, the impact of the fluctuation intensity on the disease is examined by selecting three different values of ξ: $$\xi =0.005$$, $$\xi =0.01$$, and $$\xi =0.03$$. From Fig. 6, it can be seen that as ξ increases, the number of infected individuals fluctuates more and more strongly.

### Example 7

Choose $$A=0.16$$, $$\mu =0.016$$, $$\lambda =0.5$$, $$\rho =0.014$$, $$\delta _{1}=0.07$$, $$\delta _{2}=0.0016$$, $$\alpha =0.084$$, $$\gamma =0.1$$, $$r=0.1$$, $$\xi =0.02$$, and the initial value $$(m(0), S(0), I(0), R(0), V(0))=(-0.002, 0.03, 0.95, 0.01, 0.01)$$. Then, the impact of the reversion speed on the disease is examined by selecting three different values of θ: $$\theta =0.00001$$, $$\theta =0.01$$, and $$\theta =0.95$$. From Fig. 7, it can be seen that as θ decreases, the fluctuations of the disease become increasingly large.

## 8 Conclusion

This article studies a stochastic SIRV model with vaccination to illustrate the random transmission of diseases. Compared with classical Gaussian white noise, the Ornstein–Uhlenbeck process is argued to be more in line with reality. Firstly, we prove the existence and uniqueness of the global positive solution of model (5) by constructing appropriate Lyapunov functions. Secondly, the conditions leading to the extinction and persistence of the disease are studied: when $$R_{0}^{E}<1$$, the disease becomes extinct with probability 1, while when $$R_{0}^{E}>1$$, the disease persists as an endemic disease. These results can help in eliminating a disease or controlling its spread. Furthermore, we show that model (5) has a stationary distribution when $$R_{0}^{E}>1$$, and an explicit expression for the corresponding probability density function is obtained. Finally, by providing specific parameter values, model (5) is simulated using Python to obtain sample trajectories and density histograms under different conditions. These results have certain reference value for practice.

Although this paper investigates the dynamic properties of the stochastic SIRV model, numerous issues remain worthy of further exploration:

1. Since the stochastic system is perturbed by the Ornstein–Uhlenbeck process, its diffusion matrix is singular and does not satisfy the uniform ellipticity condition. Our analysis therefore only guarantees that the model admits at least one stationary distribution; establishing its uniqueness still requires more appropriate methods and theories.

2. In the model presented in this paper, we only considered the impact of environmental fluctuations on the infection rate. In reality, however, every parameter in the biological model is potentially subject to environmental interference. Our next research focus will therefore be on the influence of environmental fluctuations on more parameters of the biological model, in order to reveal its complexity and dynamic characteristics more comprehensively.


## References

1. Kabir, K., Tanimoto, J.: Dynamical behaviors for vaccination can suppress infectious disease – a game theoretical approach. Chaos Solitons Fractals 123, 229–239 (2019)

2. Pizzagalli, D.U., Latino, I., Pulfer, A., Palomino-Segura, M., Virgilio, T., Farsakoglu, Y., Krause, R., Gonzalez, S.F.: Characterization of the dynamic behavior of neutrophils following influenza vaccination. Front. Immunol. 10, 2621 (2019)

3. Meng, X., Chen, L.: Global dynamical behaviors for an SIR epidemic model with time delay and pulse vaccination. Taiwan. J. Math. 12(5), 1107–1122 (2008)

4. Meng, X., Chen, L.: The dynamics of a new SIR epidemic model concerning pulse vaccination strategy. Appl. Math. Comput. 197(2), 582–597 (2008)

5. Cai, C.R., Wu, Z.X., Guan, J.Y.: Effect of vaccination strategies on the dynamic behavior of epidemic spreading and vaccine coverage. Chaos Solitons Fractals 62–63, 36–43 (2014)

6. Nie, L.F., Shen, J.Y., Yang, C.X.: Dynamic behavior analysis of SIVS epidemic models with state-dependent pulse vaccination. Nonlinear Anal. Hybrid Syst. 27, 258–270 (2018)

7. Meng, X.Z., Chen, L.S., Song, Z.T.: Global dynamics behaviors for new delay SEIR epidemic disease model with vertical transmission and pulse vaccination. Appl. Math. Mech. 28(9), 1259–1271 (2007)

8. Hu, Z., Ma, W., Ruan, S.: Analysis of SIR epidemic models with nonlinear incidence rate and treatment. Math. Biosci. 238(1), 12–20 (2012)

9. Miao, A., Wang, X., Zhang, T., Wang, W., Sampath Aruna Pradeep, B.: Dynamical analysis of a stochastic SIS epidemic model with nonlinear incidence rate and double epidemic hypothesis. Adv. Differ. Equ. 2017(1), 226 (2017)

10. Li, G.H., Zhang, Y.X.: Dynamic behaviors of a modified SIR model in epidemic diseases using nonlinear incidence and recovery rates. PLoS ONE 12(4), e0175789 (2017)

11. Oke, M., Ogunmiloro, O., Akinwumi, C., Raji, R.: Mathematical modeling and stability analysis of a SIRV epidemic model with non-linear force of infection and treatment. Commun. Math. Appl. 10(4), 717 (2019)

12. Gray, A., Greenhalgh, D., Hu, L., Mao, X., Pan, J.: A stochastic differential equation SIS epidemic model. SIAM J. Appl. Math. 71(3), 876–902 (2011)

13. Lahrouz, A., Omari, L.: Extinction and stationary distribution of a stochastic SIRS epidemic model with nonlinear incidence. Stat. Probab. Lett. 83(4), 960–968 (2013)

14. Zhao, D.: Study on the threshold of a stochastic SIR epidemic model and its extensions. Commun. Nonlinear Sci. Numer. Simul. 38, 172–177 (2016)

15. Liu, Q., Jiang, D.: Stationary distribution and probability density for a stochastic SEIR-type model of coronavirus (COVID-19) with asymptomatic carriers. Chaos Solitons Fractals 169, 113256 (2023)

16. Li, D., Cui, J., Liu, M., Liu, S.: The evolutionary dynamics of stochastic epidemic model with nonlinear incidence rate. Bull. Math. Biol. 77(9), 1705–1743 (2015)

17. Amador, J.: The SEIQS stochastic epidemic model with external source of infection. Appl. Math. Model. 40(19–20), 8352–8365 (2016)

18. Zhao, Y., Jiang, D., Mao, X., Gray, A.: The threshold of a stochastic SIRS epidemic model in a population with varying size. Discrete Contin. Dyn. Syst., Ser. B 20(4), 1277–1295 (2015)

19. Zhao, Y., Zhang, L., Yuan, S.: The effect of media coverage on threshold dynamics for a stochastic SIS epidemic model. Physica A 512, 248–260 (2018)

20. Zhao, Y., Yuan, S., Zhang, T.: The stationary distribution and ergodicity of a stochastic phytoplankton allelopathy model under regime switching. Commun. Nonlinear Sci. Numer. Simul. 37, 131–142 (2016)

21. Cai, Y., Jiao, J., Gui, Z., Liu, Y., Wang, W.: Environmental variability in a stochastic epidemic model. Appl. Math. Comput. 329, 210–226 (2018)

22. Wang, W., Cai, Y., Ding, Z., Gui, Z.: A stochastic differential equation SIS epidemic model incorporating Ornstein–Uhlenbeck process. Physica A 509, 921–936 (2018)

23. Song, Y., Zhang, X.: Stationary distribution and extinction of a stochastic SVEIS epidemic model incorporating Ornstein–Uhlenbeck process. Appl. Math. Lett. 133, 108284 (2022)

24. Guo, W., Ye, M., Zhang, Q.: Stability in distribution for age-structured HIV model with delay and driven by Ornstein–Uhlenbeck process. Stud. Appl. Math. 147(2), 792–815 (2021)

25. Ni, Z., Jiang, D., Cao, Z., Mu, X.: Analysis of stochastic SIRC model with cross immunity based on Ornstein–Uhlenbeck process. Qual. Theory Dyn. Syst. 22(3), 87 (2023)

26. Zhang, X., Yang, Q., Su, T.: Dynamical behavior and numerical simulation of a stochastic eco-epidemiological model with Ornstein–Uhlenbeck process. Commun. Nonlinear Sci. Numer. Simul. 123, 107284 (2023)

27. Liu, Q.: Stationary distribution and probability density for a stochastic SISP respiratory disease model with Ornstein–Uhlenbeck process. Commun. Nonlinear Sci. Numer. Simul. 119, 107128 (2023)

28. Laaribi, A., Boukanjime, B., El Khalifi, M., Bouggar, D., El Fatini, M.: A generalized stochastic SIRS epidemic model incorporating mean-reverting Ornstein–Uhlenbeck process. Physica A 615, 128609 (2023)

29. Su, T., Yang, Q., Zhang, X., Jiang, D.: Stationary distribution, extinction and probability density function of a stochastic SEIV epidemic model with general incidence and Ornstein–Uhlenbeck process. Physica A 615, 128605 (2023)

30. Zhou, B., Jiang, D., Han, B., Hayat, T.: Threshold dynamics and density function of a stochastic epidemic model with media coverage and mean-reverting Ornstein–Uhlenbeck process. Math. Comput. Simul. 196, 15–44 (2022)

31. Zhao, Y., Yuan, S., Ma, J.: Survival and stationary distribution analysis of a stochastic competitive model of three species in a polluted environment. Bull. Math. Biol. 77, 1285–1326 (2015)

32. Kang, Y., Liu, R., Mao, X.: Aperiodic stochastic resonance in neural information processing with Gaussian colored noise. Cogn. Neurodyn. 15, 517–532 (2021)

33. Allen, E.: Environmental variability and mean-reverting processes. Discrete Contin. Dyn. Syst., Ser. B 21(7), 2073–2089 (2016)

34. Tian, B., Yang, L., Chen, X., Zhang, Y.: A generalized stochastic competitive system with Ornstein–Uhlenbeck process. Int. J. Biomath. 14(01), 2150001 (2021)

35. Zhang, X., Su, T., Jiang, D.: Dynamics of a stochastic SVEIR epidemic model incorporating general incidence rate and Ornstein–Uhlenbeck process. J. Nonlinear Sci. 33(5), 76 (2023)

36. Zhang, X., Yuan, R.: A stochastic chemostat model with mean-reverting Ornstein–Uhlenbeck process and Monod–Haldane response function. Appl. Math. Comput. 394, 125833 (2021)

37. Lin, Y., Jiang, D., Xia, P.: Long-time behavior of a stochastic SIR model. Appl. Math. Comput. 236, 1–9 (2014)

38. Meyn, S., Tweedie, R.: Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes. Adv. Appl. Probab. 25(3), 518–548 (1993)

39. Roozen, H.: An asymptotic solution to a two-dimensional exit problem arising in population dynamics. SIAM J. Appl. Math. 49(6), 1793–1810 (1989)

## Acknowledgements

We thank the reviewers for their helpful comments and suggestions. This work is supported by the National Natural Science Foundation of China Tianyuan Mathematical Foundation (No. 12126312, No. 12126328). The authors gratefully acknowledge the Natural Science Foundation of Heilongjiang Province (No. LH2022E023), the Heilongjiang Provincial Postdoctoral Science Foundation (LBH-Z23259), and the Northeast Petroleum University Special Research Team Project (No. 2022TSTD-05) for their support in publishing this paper.

## Funding

The National Natural Science Foundation of China Tianyuan Mathematical Foundation (No. 12126312, No. 12126328), the Natural Science Foundation of Heilongjiang Province (No. LH2022E023) and the Northeast Petroleum University Special Research Team Project (No. 2022TSTD-05).

## Author information


### Contributions

SJ carried out the formal analysis, prepared the software programs, and wrote the original draft. LW provided the methodology and validation and acquired the funding. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Wenhe Li.

## Ethics declarations

### Ethics approval and consent to participate

Not applicable.

### Consent for publication

We (all authors) agree to grant the editorial department of Advances in Continuous and Discrete Models an exclusive worldwide license to the copyright of this article.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions


Shang, J., Li, W. Dynamical behaviors of a stochastic SIRV epidemic model with the Ornstein–Uhlenbeck process. Adv Cont Discr Mod 2024, 9 (2024). https://doi.org/10.1186/s13662-024-03807-6