

The threshold of stochastic SIS epidemic model with saturated incidence rate

Abstract

This paper considers a stochastic SIS model with a saturated incidence rate. We investigate the existence and uniqueness of the positive solution of the system and establish a condition under which the infectious individuals go extinct. Moreover, we prove that the system has the ergodic property and derive the expression for its invariant density. Finally, simulation results are presented to illustrate the theory.

1 Introduction

Epidemic models have been studied by many researchers because of their great influence on human life. Much of this work concerns the form of the incidence rate; many authors employ the bilinear incidence rate \(\beta SI\) [1, 2]. The classical SIS model with a bilinear incidence rate and a constant population is

$$ \left \{ \begin{array}{l} dS(t) =[\mu N-\beta S(t)I(t)+\gamma I(t)- \mu S(t) ]\, dt, \\ dI(t) =[\beta S(t)I(t)-(\mu+\gamma) I(t) ]\, dt, \end{array} \right . $$
(1.1)

where \(S(t)\), \(I(t)\) denote the numbers of susceptible and infective individuals at time t, respectively, N is the total population size, μ is the natural death rate, γ is the rate at which infected individuals are cured, and β is the disease transmission coefficient. According to the theory in [3], the dynamical behavior of model (1.1) is as follows (a short numerical illustration is given after the list):

  (i) The disease-free equilibrium \(E_{0}= (N,0)\) is globally asymptotically stable if \(R_{0}:=\frac{\beta N}{\mu+\gamma}<1\).

  (ii) The endemic equilibrium \(E^{*}=(\frac{N}{R_{0}}, N(1-\frac {1}{R_{0}}))\) is globally asymptotically stable if \(R_{0}>1\).
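As a quick numerical illustration of this threshold behavior (our own sketch in Python, using the parameter values \(\beta=0.5\), \(N=2\), \(\mu=0.2\), \(\gamma=0.3\) that appear later in Section 5; it is not part of the original analysis), one can compute \(R_{0}\) and integrate the I-equation obtained from (1.1) with \(S=N-I\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values from Section 5 (alpha is not needed for the bilinear model (1.1)).
beta, N, mu, gamma = 0.5, 2.0, 0.2, 0.3

R0 = beta * N / (mu + gamma)                     # basic reproduction number of (1.1)
I_star = N * (1 - 1 / R0) if R0 > 1 else 0.0     # I-component of E* when R0 > 1

# Deterministic I-equation obtained from (1.1) by substituting S = N - I.
def dI(t, I):
    return beta * (N - I) * I - (mu + gamma) * I

sol = solve_ivp(dI, (0.0, 200.0), [0.1])
print(f"R0 = {R0:.2f}, predicted limit = {I_star:.3f}, I(200) = {sol.y[0, -1]:.3f}")
```

Here \(R_{0}=2>1\), and the numerical solution settles at the endemic level \(N(1-\frac{1}{R_{0}})=1\), in line with (ii).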

After studying the cholera epidemic that spread in Bari in 1973, Capasso and Serio [4] introduced the saturated incidence rate \(\frac{\beta IS}{1+\alpha I}\) into epidemic models, where α is a positive constant, βI measures the infection force of the disease, and \(\frac{1}{1+\alpha I}\) measures the inhibition effect due to the crowding of the infectives. Epidemic models are also inevitably affected by environmental white noise [5, 6]. The stochastic SIS model with a saturated incidence rate can then be written as

$$ \left \{ \begin{array}{l} dS(t) =[\mu N-\frac{\beta S(t)I(t)}{1+\alpha I(t)}+\gamma I(t)- \mu S(t) ]\, dt-\frac{\sigma S(t)I(t)}{1+\alpha I(t)}\, dB(t), \\ dI(t) =[\frac{\beta S(t)I(t)}{1+\alpha I(t)}-(\mu+\gamma) I(t) ]\, dt+\frac{\sigma S(t)I(t)}{1+\alpha I(t)}\, dB(t), \end{array} \right . $$
(1.2)

where \(B(t)\) is a standard Brownian motion and \(\sigma^{2}\) represents the intensity of the white noise. Since \(S(t) + I (t) = N \), it suffices to study the following equation:

$$ dI(t)=\biggl[\frac{\beta( N-I(t))I(t)}{1+\alpha I(t)}-(\mu+\gamma) I(t) \biggr]\, dt+ \frac{\sigma( N-I(t))I(t)}{1+\alpha I(t)}\, dB(t), $$
(1.3)

with initial value \(I (0) = I_{0}\in(0, N )\). In this paper, we will discuss the dynamical behavior of (1.3).
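The reduction to (1.3) rests on the identity \(S(t)+I(t)=N\). A short symbolic check (a sketch of our own using sympy; the symbol names are ours) confirms that the drift and diffusion coefficients of \(S+I\) in (1.2) cancel, so the total population size is indeed preserved:

```python
import sympy as sp

S, I, N, beta, gamma, mu, alpha, sigma = sp.symbols(
    'S I N beta gamma mu alpha sigma', positive=True)

# Drift and diffusion coefficients of S and I in system (1.2).
drift_S = mu * N - beta * S * I / (1 + alpha * I) + gamma * I - mu * S
drift_I = beta * S * I / (1 + alpha * I) - (mu + gamma) * I
diff_S = -sigma * S * I / (1 + alpha * I)
diff_I = sigma * S * I / (1 + alpha * I)

# The noise terms cancel identically; the drift of S + I vanishes on S + I = N.
print(sp.simplify(diff_S + diff_I))                     # expected: 0
print(sp.simplify((drift_S + drift_I).subs(S, N - I)))  # expected: 0
```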

Throughout this paper, let \((\Omega, {\mathcal{F}},P)\) be a complete probability space with a filtration \(\{{\mathcal{F}}_{t}\} _{t\geq0}\) satisfying the usual conditions (i.e. it is increasing and right continuous while \({\mathcal{F}}_{0}\) contains all P-null sets) and \(B(t)\) be a scalar Brownian motion defined on the probability space.

2 Existence and uniqueness of the global positive solution

In the following, we will show there is a unique global positive solution to (1.3).

Theorem 2.1

For any given initial value \(I (0) = I_{0}\in(0, N )\), there exists a unique solution \(I(t)\in(0,N)\) for all \(t\geq0\) with probability 1.

Proof

By Itô’s formula, it is easy to see that

$$I(t)=e^{u(t)} $$

is the solution of (1.3) with initial value \(I_{0}\in(0, N )\), where \(u(t)\) satisfies the following equation:

$$ \left \{ \begin{array}{l} du(t) =[\frac{\beta(N-e^{u(t)})}{1+\alpha e^{u(t)}}-(\mu+\gamma )-\frac{\sigma^{2}(N-e^{u(t)})^{2}}{2(1+\alpha e^{u(t)})^{2}}]\, dt +\frac{\sigma(N-e^{u(t)})}{1+\alpha e^{u(t)}}\, dB(t), \\ u(0) = \ln I_{0},\quad I_{0}\in(0, N ). \end{array} \right . $$
(2.1)

Since the coefficients of (2.1) are locally Lipschitz continuous, there exists a unique maximal local solution \(u(t)\) on \(t\in[0,\tau_{e})\), where \(\tau_{e}\) is the explosion time [7, 8]. This implies that \(I(t)=e^{u(t)}\) is the unique positive local solution to (1.3) with initial value \(I_{0}\in(0, N )\).

In order to show that the solution of (1.3) is global, it is sufficient to show \(\tau_{e}=\infty\) a.s.

Let \(m_{0}> 0\) be sufficiently large so that \(I_{0}\) lies within the interval \([\frac{1}{m_{0}},N-\frac{1}{m_{0}}]\). For each integer \(m\geq m_{0}\), we define the stopping time

$$\tau_{m}=\inf\biggl\{ t\in[0,\tau_{e}):I(t)\notin\biggl( \frac{1}{m},N-\frac {1}{m}\biggr)\biggr\} , $$

where, throughout this paper, we set \(\inf\emptyset=\infty\) (as usual, ∅ denotes the empty set). Clearly, \(\tau_{m}\) is increasing as \(m\rightarrow\infty\). Set \(\tau_{\infty}=\lim_{m\rightarrow\infty}\tau_{m}\), whence \(\tau _{\infty}\leq \tau_{e}\). It is easy to show that \(\tau_{\infty}=\infty\) a.s. implies \(\tau_{e}=\infty\) a.s. and \(I(t)\in(0,N)\) a.s. for all \(t\geq0\). Therefore, to complete the proof, it is enough to show that \(\tau _{\infty}=\infty\) a.s. If this statement were false, there would exist a pair of constants \(T>0\) and \(\epsilon\in(0,1)\) such that

$$P\{\tau_{\infty}\leq T\}>\epsilon. $$

Then there exists an integer \(m_{1}\geq m_{0}\) such that

$$ P\{\tau_{m}\leq T\}\geq\epsilon \quad \mbox{for all } m \geq m_{1} . $$
(2.2)

Define a function \(V : (0,N) \rightarrow {R}_{+} \) as follows:

$$V(x) =\frac{1}{x}+\frac{1}{N-x}. $$

By Itô’s formula, we get

$$\begin{aligned} d V\bigl(x(t)\bigr) =&\biggl\{ x\biggl(-\frac{1}{x^{2}}+ \frac{1}{(N-x)^{2}}\biggr)\biggl[\frac{\beta(N-x)}{1+\alpha x}-\mu-\gamma\biggr] \\ &{}+ \frac{\sigma^{2}x^{2}(N-x)^{2}}{(1+\alpha x)^{2}} \biggl(\frac{1}{x^{3}}+\frac{1}{(N-x)^{3}}\biggr)\biggr\} \, dt \\ &{} +\biggl\{ \biggl[-\frac{1}{x^{2}}+\frac{1}{(N-x)^{2}}\biggr] \frac{\sigma x(N-x)}{(1+\alpha x)}\biggr\} \, dB(t) \\ :=&LV(x)dt+\biggl\{ \biggl[-\frac{1}{x^{2}}+\frac{1}{(N-x)^{2}}\biggr] \frac{\sigma x(N-x)}{(1+\alpha x)}\biggr\} \, dB(t), \end{aligned}$$
(2.3)

where

$$\begin{aligned} LV(x) =&x\biggl(-\frac{1}{x^{2}}+\frac{1}{(N-x)^{2}}\biggr)\biggl[ \frac{\beta (N-x)}{1+\alpha x}-\mu-\gamma\biggr] \\ &{}+\frac{\sigma^{2}x^{2}(N-x)^{2}}{(1+\alpha x)^{2}} \biggl(\frac{1}{x^{3}}+ \frac{1}{(N-x)^{3}}\biggr) \\ \leq&\frac{\mu+\gamma}{x}+\frac{\beta N}{N-x}+\sigma^{2}N^{2} \biggl(\frac {1}{x}+\frac{1}{N-x}\biggr) \\ \leq& CV(x) \end{aligned}$$

and \(C=(\mu+\gamma)\vee(\beta N)+\sigma^{2}N^{2}\).
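The estimate \(LV(x)\leq CV(x)\) can also be checked numerically. The following sketch (our own, with the parameter values of Section 5; it is only a sanity check, not part of the proof) evaluates both sides on a fine grid in \((0,N)\):

```python
import numpy as np

beta, N, mu, gamma, alpha, sigma = 0.5, 2.0, 0.2, 0.3, 0.5, 0.1

def V(x):
    return 1 / x + 1 / (N - x)

def LV(x):
    drift = beta * (N - x) / (1 + alpha * x) - mu - gamma
    return (x * (-1 / x**2 + 1 / (N - x)**2) * drift
            + sigma**2 * x**2 * (N - x)**2 / (1 + alpha * x)**2
              * (1 / x**3 + 1 / (N - x)**3))

C = max(mu + gamma, beta * N) + sigma**2 * N**2
x = np.linspace(1e-4, N - 1e-4, 100_000)
print(np.all(LV(x) <= C * V(x)))   # expected: True on the whole grid
```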

The proof can now be completed by the same argument as in Theorem 3.1 of [9]; for the completeness of Theorem 2.1 we include the remaining details.

For any \(0\leq t_{1}\leq T\), we have

$$ \int^{\tau_{m}\wedge t_{1}}_{0}dV\bigl(x(t)\bigr) \leq\int ^{\tau_{m}\wedge t_{1}}_{0}CV(x)\, dt+\int^{\tau_{m}\wedge t_{1}}_{0} \biggl\{ \biggl[-\frac{1}{x^{2}}+\frac{1}{(N-x)^{2}}\biggr]\frac{\sigma x(N-x)}{(1+\alpha x)} \biggr\} \, dB(t). $$

Taking the expectation of both sides yields

$$\begin{aligned} E\bigl[V\bigl(I(\tau_{m}\wedge t_{1})\bigr)\bigr]&\leq V \bigl(I(0)\bigr)+CE\int^{\tau_{m}\wedge t_{1}}_{0}V\bigl(I(t)\bigr)\, dt \\ &\leq V\bigl(I(0)\bigr)+C\int^{t_{1}}_{0}EV\bigl(I(t \wedge\tau_{m})\bigr)\, dt. \end{aligned}$$

Gronwall’s inequality yields

$$ E\bigl[V\bigl(I(\tau_{m}\wedge T)\bigr)\bigr]\leq M, $$
(2.4)

where \(M=V(I(0))e^{CT}\). Set \(\Omega_{m}=\{\tau_{m}\leq T\}\) for \(m\geq m_{1}\); by (2.2), we have \(P(\Omega_{m})\geq\epsilon\). Note that for every \(\omega\in\Omega_{m}\), \(I(\tau_{m},\omega)\) equals \(\frac{1}{m}\) or \(N-\frac{1}{m}\). Clearly,

$$V\bigl(I(\tau_{m},\omega)\bigr)\geq m. $$

It follows from (2.4) that

$$\begin{aligned} M&\geq E\bigl[I_{\Omega_{m}}V\bigl(I(\tau_{m}, \omega)\bigr)\bigr] \\ &\geq\epsilon m, \end{aligned}$$

where \(I_{\Omega_{m}}\) is the indicator function of \(\Omega_{m}\). Letting \(m\rightarrow\infty\) leads to the contradiction \(\infty>M\geq\lim_{m\rightarrow\infty}\epsilon m=\infty\). Therefore we obtain \(\tau_{\infty}=\infty\) a.s. This completes the proof of Theorem 2.1. □
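The substitution \(I=e^{u}\) used in the proof also suggests a simple positivity-preserving way to simulate (1.3): apply the Euler-Maruyama method to (2.1) and map back through \(I=e^{u}\). The following sketch (our own; the step size, horizon, and noise seed are chosen only for illustration) produces a path that is positive by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, N, mu, gamma, alpha, sigma = 0.5, 2.0, 0.2, 0.3, 0.5, 0.1
dt, n_steps = 1e-3, 50_000

u = np.log(1.5)                      # u(0) = ln I_0 with I_0 = 1.5
path = np.empty(n_steps)
for k in range(n_steps):
    I = np.exp(u)
    a = (beta * (N - I) / (1 + alpha * I) - (mu + gamma)
         - 0.5 * sigma**2 * (N - I)**2 / (1 + alpha * I)**2)   # drift of (2.1)
    b = sigma * (N - I) / (1 + alpha * I)                      # diffusion of (2.1)
    u += a * dt + b * np.sqrt(dt) * rng.standard_normal()
    path[k] = np.exp(u)              # I(t) = e^{u(t)} is positive automatically

print(path.min(), path.max())
```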

3 Extinction

In this section, we discuss the extinction of \(I(t)\). We first state a lemma taken from [10].

Consider the following stochastic equation:

$$ dX(t)=b\bigl(X(t)\bigr)\, dt+\sigma\bigl(X(t)\bigr)\, dB(t), $$
(3.1)

and assume that the coefficients \(\sigma:J\rightarrow R\), \(b:J\rightarrow R\) satisfy the following conditions:

  (1) \(\sigma^{2}(x)>0\), \(\forall x\in J \);

  (2) \(\forall x\in J\), \(\exists\epsilon>0\) such that \(\int^{x+\epsilon} _{x-\epsilon}\frac{1+|b(y)|}{\sigma^{2}(y)}\, dy<\infty\);

where \(J=(l,r)\), \(-\infty\leq l< r\leq\infty\).

Lemma 3.1

(See [10])

Assume that (1), (2) hold, and let \(X(t)\) be a weak solution of (3.1) in J, with nonrandom initial condition \(X_{0}=x\in J\). Let p be given by

$$p(x)=\int^{x}_{c}e^{-\int^{v}_{c}\frac{2b(y)}{\sigma^{2}(y)}\, dy}\, dv ,\quad c\in J . $$

If \(p(l+)>-\infty\), \(p(r-)=\infty\), then \(P(\lim_{t\rightarrow\infty }X(t)=l)=P(\sup_{t\geq0}X(t)< r)=1\).

Theorem 3.1

If \(R^{s}_{0}:=\frac{N(\beta-\frac{1}{2}\sigma^{2}N)}{\mu+\gamma}<1\), then for any initial value \(I (0) = I_{0}\in(0, N )\), the solution of (1.3) obeys

$$P\Bigl(\lim_{t\rightarrow\infty}I(t)=0\Bigr)=1, $$

that is, the disease will be extinct with probability 1.

Proof

With \(b(x)=\frac{\beta(N-x)x}{1+\alpha x}-(\mu+\gamma) x\), \(\sigma (x)=\frac{\sigma(N-x)x}{1+\alpha x}\), and \(c\in(0,N)\), we have

$$\begin{aligned} \int^{x}_{c}{\frac{2b(y)}{\sigma^{2}(y)}}\, dy =& \frac{2}{\sigma^{2}}\biggl\{ \frac{\beta N-(\mu+\gamma)}{N^{2}}\ln x-\biggl[\frac {\beta N-(\mu+\gamma)}{N^{2}}+\alpha \bigl(\beta+\alpha(\mu+\gamma) \bigr)\biggr]\ln(N-x) \\ &{} -\frac{\frac{(\mu+\gamma)}{N}(N\alpha+1)^{2}}{N-x}\biggr\} +C_{0}. \end{aligned}$$

Clearly, conditions (1) and (2) are satisfied. Therefore the scale function is

$$ p(x)=e^{-c_{0}}\int^{x}_{c}{s^{-\frac{2(\beta N-(\mu+\gamma))}{\sigma ^{2}N^{2}}}(N-s)^{\frac{2[\frac{\beta N-(\mu+\gamma)}{N^{2}}+\alpha (\beta+\alpha(\mu+\gamma))]}{\sigma^{2}}} e^{\frac{2(\mu+\gamma)(N\alpha+1)^{2}}{N\sigma^{2}(N-s)}}}\, ds. $$

Let \(\frac{1}{N-s}=t\), then we have

$$\begin{aligned} p(N-)&=e^{-c_{0}}\int^{\infty}_{\frac{1}{N-c}}{(Nt-1)}^{-\frac{2(\beta N-(\mu+\gamma))}{\sigma^{2}N^{2}}}t^{\frac{2(\beta N-(\mu+\gamma ))}{\sigma^{2}N^{2}}} t^{-\frac{2[\frac{\beta N-(\mu+\gamma)}{N^{2}}+\alpha(\beta+\alpha (\mu+\gamma) )]}{\sigma^{2}}}e^{\frac{2(\mu+\gamma)(N\alpha +1)^{2}t}{N\sigma^{2}}}t^{-2}\, dt \\ &=e^{-c_{0}}\int^{\infty}_{\frac{1}{N-c}}{{(Nt-1)}^{-\frac{2(\beta N-(\mu+\gamma))}{\sigma^{2}N^{2}}} t^{-\frac{2\alpha(\beta+\alpha(\mu+\gamma))}{\sigma^{2}}-2} e^{\frac{2(\mu+\gamma)(N\alpha+1)^{2}t}{N\sigma^{2}}}}\, dt \\ &=\infty. \end{aligned}$$
(3.2)

When \(R^{s}_{0}<1\), it follows that

$$ -p(0+)=e^{-c_{0}}\int^{c}_{0}{s^{-\frac{2(\beta N-(\mu+\gamma))}{\sigma ^{2}N^{2}}}(N-s)^{\frac{2[\frac{\beta N-(\mu+\gamma)}{N^{2}}+\alpha (\beta+\alpha(\mu+\gamma) )]}{\sigma^{2}}} e^{\frac{2(\mu+\gamma)(N\alpha+1)^{2}}{N\sigma^{2}(N-s)}}}\, ds<\infty, $$

that is,

$$p(0+)>-\infty. $$

It can be derived from Lemma 3.1 that

$$P\Bigl(\lim_{t\rightarrow\infty}I(t)=0\Bigr)=1. $$

The proof is completed. □
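The finiteness of \(p(0+)\) under \(R^{s}_{0}<1\) can also be checked numerically. The following sketch (our own, using scipy with the extinction parameters of Section 5 and \(\sigma=0.8\)) evaluates the integrand of \(p\) on \((0,c)\); the exponent of s there is larger than −1, so the singularity at 0 is integrable and quad converges:

```python
import numpy as np
from scipy.integrate import quad

beta, N, mu, gamma, alpha, sigma = 0.5, 2.0, 0.2, 0.3, 0.5, 0.8
c = 1.0                                 # reference point c in (0, N)

a = 2 * (beta * N - (mu + gamma)) / (sigma**2 * N**2)
B = 2 * ((beta * N - (mu + gamma)) / N**2
         + alpha * (beta + alpha * (mu + gamma))) / sigma**2
D = 2 * (mu + gamma) * (N * alpha + 1)**2 / (N * sigma**2)

# Integrand of p(x), i.e. exp{-int_c^s 2b/sigma^2 dy} up to a constant factor.
def scale_density(s):
    return s**(-a) * (N - s)**B * np.exp(D / (N - s))

value, error = quad(scale_density, 0.0, c)
print(a, value)   # a < 1 here, so -p(0+) is finite, as used in Theorem 3.1
```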

In Theorem 3.1 we showed that the disease dies out under the condition \(R^{s}_{0}<1\). In the following, we discuss the case \(R^{s}_{0}=1\).

In (3.1), if \(\sigma(X(t))\equiv1\), then

$$ dX(t)=b\bigl(X(t)\bigr)\, dt+dB(t). $$
(3.3)

Assume that (3.3) has a non-explosive solution which is unique in the sense of probability law.

Lemma 3.2

(See [11])

Assume \(X(t)\) is the solution of (3.3), let

$$\gamma(x)=\int^{x}_{0}e^{2\int^{u}_{0}b(v)\, dv}\, du , \qquad \lambda(x)=\int^{x}_{0}e^{-2\int^{u}_{0}b(v)\, dv}\, du. $$

If

$$\gamma(-\infty)=-\infty, \qquad \gamma(\infty)<\infty \quad \textit{and}\quad \lambda (-\infty)=-\infty, \qquad \lambda(\infty)=\infty, $$

then for any \(z\in R\), \(\lim_{t\uparrow\infty}P(X_{t}< z)=1\). This means that \(X_{t}\rightarrow-\infty\) in the distributional sense.

Theorem 3.2

Suppose \(I(t)\) is the solution of (1.3) with initial value \(I(0)=I_{0} \in(0,N)\), then \(I(t)\rightarrow0\) in probability as \(t \rightarrow\infty\) if \(R^{s}_{0}=1\).

Proof

Let \(X(t)=\frac{1}{\sigma N}\log\frac{I(t)}{(N-I(t))^{N\alpha+1}}\), then

$$\begin{aligned} dX(t) =&\biggl[\frac{N\beta-\mu}{N\sigma}-\frac{(N\alpha+1)\mu}{N\sigma } \frac{I(t)}{N-I(t)} +\frac{\sigma[(N\alpha+1)I(t)^{2}-(N-I(t))^{2}]}{2N(1+\alpha I(t))^{2}}\biggr]\, dt \\ &{}+dB(t), \end{aligned}$$
(3.4)

where \(I=\phi(X)\), and \(X=\phi^{-1}(I)=\frac{1}{N\sigma}\log\frac {I}{(N-I)^{N\alpha+1}}\), \(I\in(0, N)\).
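The point of this transformation is that it reduces (1.3) to an equation of the form (3.3) with unit diffusion coefficient: by Itô's formula, the diffusion coefficient of \(X\) is that of (1.3) multiplied by \(\frac{d\phi^{-1}}{dI}\). A short symbolic check (a sketch of our own in sympy, with our own symbol names):

```python
import sympy as sp

I, N, alpha, sigma = sp.symbols('I N alpha sigma', positive=True)

phi_inv = sp.log(I / (N - I)**(N * alpha + 1)) / (sigma * N)   # X = phi^{-1}(I)
g = sigma * (N - I) * I / (1 + alpha * I)                      # diffusion of (1.3)

# Diffusion coefficient of X(t) = phi^{-1}(I(t)) under Ito's formula.
print(sp.simplify(g * sp.diff(phi_inv, I)))   # expected output: 1
```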

In connection with (3.3), we have

$$b(x)=\frac{N\beta-\mu}{N\sigma}-\frac{(N\alpha+1)\mu}{N\sigma }\frac{I}{N-I}+\frac{\sigma[(N\alpha+1)I^{2}-(N-I)^{2}]}{2N(1+\alpha I)^{2}}. $$

When \(R^{s}_{0}=1\), we have

$$\begin{aligned} e^{\int^{u}_{0} b(x)\, dx} =&C_{0}\phi(u)^{(\frac{N\beta-\mu}{N^{2}\sigma ^{2}}-\frac{1}{2})}\bigl(N-\phi(u) \bigr)^{-[\frac{(N\beta-\mu)(N\alpha +1)}{N^{2}\sigma^{2}} +\frac{(N\alpha+1)\alpha\mu}{N\sigma^{2}}+\frac{1}{2}]}\bigl(1+\alpha \phi(u)\bigr)^{\frac{1}{2}} \\ &{}\times e^{-\frac{(N\alpha+1)^{2}\mu}{N\sigma^{2}}\frac {1}{N-\phi(u)}} \\ =&C_{0}\bigl(N-\phi(u)\bigr)^{-[\frac{N\alpha+2}{2}+\frac{(N\alpha+1)\alpha \mu}{N\sigma^{2}}]} \bigl(1+\alpha\phi(u) \bigr)^{\frac{1}{2}}e^{-\frac{(N\alpha+1)^{2}\mu }{N\sigma^{2}}\frac{1}{N-\phi(u)}}, \end{aligned}$$

where

$$\begin{aligned} C_{0} =&\exp\biggl\{ -\biggl[\biggl(\frac{N\beta-\mu}{N^{2}\sigma^{2}}-\frac{1}{2} \biggr)\ln \phi(0)-\biggl(\frac{(N\beta-\mu)(N\alpha+1)}{N^{2}\sigma^{2}} +\frac{(N\alpha+1)\alpha\mu}{N\sigma^{2}}+ \frac{1}{2}\biggr) \\ &{}\times\ln\bigl(N-\phi (0)\bigr) +\frac{1}{2}\ln\bigl(1+\alpha\phi(0)\bigr)-\frac{(N\alpha+1)^{2}\mu }{N\sigma^{2}} \frac{1}{N-\phi(0)}\biggr]\biggr\} . \end{aligned}$$

Then

$$ \lambda(x)=\int^{x}_{0}e^{-2\int^{u}_{0}b(v)\, dv}\, du = \frac{C_{0}}{\sigma}\int^{\phi(x)}_{\phi(0)}(N-I)^{N\alpha+1+\frac {2(N\alpha+1)\alpha\mu}{N\sigma^{2}}}I^{-1} e^{\frac{2(N\alpha+1)^{2}\mu}{N\sigma^{2}}\frac{1}{N-I}}\, dI. $$

Note that \(\phi(-\infty)=0\) and \(\phi(\infty)=N\); thus

$$ \lambda(\infty) =\frac{C_{0}}{\sigma}\int^{N}_{\phi(0)}(N-I)^{N\alpha+1+\frac {2(N\alpha+1)\alpha\mu}{N\sigma^{2}}}I^{-1} e^{\frac{2(N\alpha+1)^{2}\mu}{N\sigma^{2}}\frac{1}{N-I}}\, dI=\infty $$

and

$$ \lambda(-\infty) =\frac{C_{0}}{\sigma}\int^{0}_{\phi(0)}(N-I)^{N\alpha+1+\frac {2(N\alpha+1)\alpha\mu}{N\sigma^{2}}}I^{-1} e^{\frac{2(N\alpha+1)^{2}\mu}{N\sigma^{2}}\frac{1}{N-I}}\, dI=-\infty. $$

Next we compute \(\gamma(x)\), which is given by

$$ \gamma(x)=\int^{x}_{0}e^{2\int^{u}_{0}b(v)\, dv}\, du =\frac{C_{0}}{\sigma}\int^{\phi(x)}_{\phi(0)}(N-I)^{-N\alpha -3-\frac{2(N\alpha+1)\alpha\mu}{N\sigma^{2}}}(1+ \alpha I)^{2}I^{-1} e^{-\frac{2(N\alpha+1)^{2}\mu}{N\sigma^{2}}\frac{1}{N-I}}\, dI. $$

As \(\phi(-\infty)=0\) and \(\phi(\infty)=N\), we have

$$ \gamma(\infty) =\frac{C_{0}}{\sigma}\int^{N}_{\phi(0)}(N-I)^{-N\alpha -3-\frac{2(N\alpha+1)\alpha\mu}{N\sigma^{2}}}(1+ \alpha I)^{2}I^{-1} e^{-\frac{2(N\alpha+1)^{2}\mu}{N\sigma^{2}}\frac{1}{N-I}}\, dI. $$

Let \(t=\frac{1}{N-I}\), then

$$ \gamma(\infty) =\frac{C_{0}}{\sigma}\int^{\infty}_{\frac{1}{N-\phi(0)}}t^{N\alpha +\frac{2(N\alpha+1)\alpha\mu}{N\sigma^{2}}} \bigl[(1+\alpha N)t-\alpha\bigr]^{2} (Nt-1)^{-1}e^{-\frac{2(N\alpha+1)^{2}\mu t}{N\sigma^{2}}} \, dt<\infty. $$

Moreover,

$$ \gamma(-\infty) =\frac{C_{0}}{\sigma}\int^{0}_{\phi (0)}(N-I)^{-N\alpha-3-\frac{2(N\alpha+1)\alpha\mu}{N\sigma ^{2}}}(1+ \alpha I)^{2}I^{-1} e^{-\frac{2(N\alpha+1)^{2}\mu}{N\sigma^{2}}\frac{1}{N-I}}\, dI=-\infty. $$

Summing up the reasoning above, we have, when \(R^{s}_{0}=1\),

$$\gamma(-\infty)=-\infty, \qquad \gamma(\infty)<\infty \quad \mbox{and}\quad \lambda (-\infty)=-\infty, \qquad \lambda(\infty)=\infty. $$

By Lemma 3.2, \(\lim_{t\uparrow\infty}P(X_{t}< z)=1\) for any \(z\in R\), which means that \(X(t)\rightarrow-\infty\) in the distributional sense. Therefore, \(I(t)\rightarrow0\) in probability as \(t\rightarrow\infty\). This completes the proof. □

4 Ergodic property

Theorem 4.1

Let \(I(t)\) be the solution of (1.3). If \(R^{s}_{0}>1\), then the SIS model has the ergodic property.

Proof

With \(b(x)=\frac{\beta(N-x)x}{1+\alpha x}-(\mu+\gamma) x\), \(\sigma(x)=\frac{\sigma(N-x)x}{1+\alpha x}\), and \(c\in(0,N)\) as above, we compute

$$\begin{aligned}& \int^{c}_{0}\exp\biggl\{ -\int^{s}_{c}{ \frac{2b(\tau)}{\sigma^{2}(\tau)}}\, d\tau \biggr\} \, ds \\& \quad =e^{-c_{0}}\int ^{c}_{0}s^{-\frac{2(\beta N-(\mu+\gamma))}{\sigma^{2}N^{2}}} (N-s)^{\frac{2[\frac{\beta N-(\mu+\gamma)}{N^{2}}+\alpha(\beta +\alpha(\mu+\gamma))]}{\sigma^{2}}}e^{\frac{2(\mu+\gamma)(N\alpha +1)^{2}}{N\sigma^{2}(N-s)}} \, ds. \end{aligned}$$

Let \(t=\frac{N}{N-s}-1\), then

$$\begin{aligned}& \int^{N}_{0}{\frac{1}{\sigma^{2}(s)}\exp\biggl\{ \int ^{s}_{c}{\frac{2b(\tau )}{\sigma^{2}(\tau)}}\, d\tau\biggr\} \, ds} \\& \quad = e^{c_{0}}\int^{N}_{0}(1+\alpha s)^{2}s^{\frac{2(\beta N-(\mu+\gamma ))}{\sigma^{2}N^{2}}-2}(N-s)^{-\frac{2[\frac{\beta N-(\mu+\gamma )}{N^{2}}+\alpha(\beta +\alpha(\mu+\gamma))]}{\sigma^{2}}-2} \\& \qquad {}\times e^{-\frac{2(\mu+\gamma)(N\alpha+1)^{2}}{N\sigma^{2}(N-s)}}\, ds \\& \quad = e^{c_{0}}N^{-\frac{2\alpha(\beta+\alpha(\mu+\gamma) )}{\sigma^{2}}-3}e^{-\frac{2(\mu+\gamma)(N\alpha+1)^{2}}{N^{2}\sigma ^{2}}}\int^{\infty}_{0} \bigl[1+(1+\alpha N)t\bigr]^{2}t^{\frac{2(\beta N-(\mu+\gamma))}{\sigma^{2}N^{2}}-2} \\& \qquad {}\times (t+1)^{\frac{2\alpha(\beta+\alpha(\mu+\gamma))}{\sigma ^{2}}}e^{-\frac{2(\mu+\gamma)(N\alpha+1)^{2}t}{N^{2}\sigma^{2}}}\, dt. \end{aligned}$$

Clearly, under the condition \(R^{s}_{0}>1\), we have

$$ \begin{aligned} &\int^{c}_{0}\exp\biggl\{ -\int ^{s}_{c}{\frac{2b(\tau)}{\sigma^{2}(\tau)}}\, d\tau \biggr\} \, ds= \infty, \\ &\int^{N}_{0}{\frac{1}{\sigma^{2}(s)}\exp\biggl\{ \int^{s}_{c}{\frac{2b(\tau )}{\sigma^{2}(\tau)}}\, d\tau\biggr\} \, ds}<\infty. \end{aligned} $$
(4.1)

The conditions of Theorem 1.16 in [12] follow clearly from (3.2) and (4.1). Therefore the SIS model has the ergodic property, and the invariant density is given by

$$\begin{aligned} \pi(x) =&C_{1}(1+\alpha x)^{2}x^{\frac{2(\beta N-(\mu+\gamma))}{\sigma ^{2}N^{2}}-2}(N-x)^{-\frac{2[\frac{\beta N-(\mu+\gamma)}{N^{2}}+\alpha (\beta+\alpha(\mu+\gamma))]}{\sigma^{2}}-2} \\ &{}\times e^{-\frac{2(\mu+\gamma )(N\alpha+1)^{2}}{N\sigma^{2}(N-x)}}, \quad x\in(0,N), \end{aligned}$$

where \(C_{1}\) is a constant such that \(\int^{N}_{0} \pi(x)\, dx=1\). □
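The invariant density can be evaluated numerically. A minimal sketch (our own; it works with the logarithm of the unnormalized density because, for the Section 5 parameters with \(\sigma=0.1\), the raw values are extremely small) normalizes \(\pi\) on a grid and reports its mass and mean:

```python
import numpy as np

beta, N, mu, gamma, alpha, sigma = 0.5, 2.0, 0.2, 0.3, 0.5, 0.1

a = 2 * (beta * N - (mu + gamma)) / (sigma**2 * N**2)
B = 2 * ((beta * N - (mu + gamma)) / N**2
         + alpha * (beta + alpha * (mu + gamma))) / sigma**2
D = 2 * (mu + gamma) * (N * alpha + 1)**2 / (N * sigma**2)

x = np.linspace(1e-6, N - 1e-6, 200_001)
dx = x[1] - x[0]
log_pi = (2 * np.log(1 + alpha * x) + (a - 2) * np.log(x)
          - (B + 2) * np.log(N - x) - D / (N - x))   # log of the unnormalized density
w = np.exp(log_pi - log_pi.max())                    # rescale before normalizing
pi = w / (w.sum() * dx)                              # grid approximation of pi(x)

print("mass =", (pi * dx).sum(), " mean =", (x * pi * dx).sum())
```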

Remark 4.1

If \(\alpha=0\), then

$$ \pi(x) =C_{1}x^{\frac{2[\beta N-(\mu+\gamma)]}{\sigma^{2}N^{2}}-2}(N-x)^{\frac {-2[\beta N-(\mu+\gamma)]}{\sigma^{2}N^{2}}-2} e^{\frac{-2(\mu+\gamma)}{N\sigma^{2}(N-x)}},\quad x \in(0,N). $$

It can be seen easily that

$$ \begin{aligned} &E(X)=\frac{\beta[2(\beta N-\mu-\gamma)-\sigma^{2}N^{2}]}{2\beta ^{2}-(\mu+\gamma+\beta N)\sigma^{2}}, \\ &\operatorname{Var}(X)= \frac{(\beta N-\mu-\gamma)E(X)}{\beta}-\bigl[E(X)\bigr]^{2}. \end{aligned} $$
(4.2)

Hence, if \(\alpha=0\), the mean and the variance of the stationary distribution of model (1.3) coincide with the results of Theorem 6.3 in [9].
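As a numerical cross-check of (4.2) (our own sketch, for the Section 5 parameters with \(\sigma=0.1\) and \(\alpha=0\)), the closed-form mean and variance can be compared with grid moments of the density in Remark 4.1:

```python
import numpy as np

beta, N, mu, gamma, sigma = 0.5, 2.0, 0.2, 0.3, 0.1

# Closed-form mean and variance from (4.2).
EX = beta * (2 * (beta * N - mu - gamma) - sigma**2 * N**2) / (
    2 * beta**2 - (mu + gamma + beta * N) * sigma**2)
VarX = (beta * N - mu - gamma) * EX / beta - EX**2

# Moments of the alpha = 0 density of Remark 4.1, computed on a grid (log scale).
a = 2 * (beta * N - (mu + gamma)) / (sigma**2 * N**2)
x = np.linspace(1e-6, N - 1e-6, 400_001)
dx = x[1] - x[0]
log_pi = ((a - 2) * np.log(x) - (a + 2) * np.log(N - x)
          - 2 * (mu + gamma) / (N * sigma**2 * (N - x)))
w = np.exp(log_pi - log_pi.max())
pi = w / (w.sum() * dx)
mean = (x * pi * dx).sum()
var = ((x - mean)**2 * pi * dx).sum()

print(EX, mean)     # the two means should agree to several digits
print(VarX, var)    # and so should the variances
```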

5 Simulations

We illustrate our results using the method of [13]. Consider the corresponding discretized equation:

$$I_{k+1}=I_{k}+\biggl[\frac{\beta(N-I_{k})I_{k}}{1+\alpha I_{k}}-( \mu+\gamma )I_{k}\biggr]\Delta t +\sigma \frac{(N-I_{k})I_{k}}{1+\alpha I_{k}}\epsilon_{k}\sqrt{\Delta t}, $$

where \(\epsilon_{k}\), \(k=1,2,\ldots,n\), are independent Gaussian \(N(0,1)\) random variables.

Setting the parameters \(\beta=0.5\), \(N=2\), \(\mu=0.2\), \(\gamma=0.3\), \(\alpha=0.5\), we carry out the simulations in Matlab.
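A minimal Python version of this scheme (our own sketch; the figures in the paper were produced in Matlab, and the step size, horizon, and seed below are chosen only for illustration) reproduces the qualitative behavior reported in Figures 1 and 2:

```python
import numpy as np

def simulate(sigma, I0=1.5, T=100.0, dt=1e-3, seed=1):
    """Euler-Maruyama discretization of (1.3) with the Section 5 parameters."""
    beta, N, mu, gamma, alpha = 0.5, 2.0, 0.2, 0.3, 0.5
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    I = np.empty(n + 1)
    I[0] = I0
    for k in range(n):
        drift = beta * (N - I[k]) * I[k] / (1 + alpha * I[k]) - (mu + gamma) * I[k]
        diff = sigma * (N - I[k]) * I[k] / (1 + alpha * I[k])
        I[k + 1] = I[k] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    return I

print(simulate(0.8)[-1])   # essentially 0: extinction, as in Theorem 3.1
print(simulate(0.1)[-1])   # stays away from 0: the ergodic case of Theorem 4.1
```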

In Figure 1, for the left sub-figure we choose \(\sigma=0.8\), so that the condition \(R^{s}_{0}<1\) of Theorem 3.1 (equivalently, \(\frac{2(\beta N-\mu-\gamma)}{\sigma^{2}N^{2}}<1\)) is satisfied. In agreement with Theorem 3.1, the solution of system (1.3) tends to zero. In the right sub-figure, we choose \(\sigma=0.5\) so that \(R^{s}_{0}=1\). The solution of system (1.3) tends to zero as well, which agrees with Theorem 3.2.
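For reference, the threshold values behind these choices of σ can be computed directly (a small sketch of our own; σ = 0.1 is the value used in Figure 2 below):

```python
beta, N, mu, gamma = 0.5, 2.0, 0.2, 0.3

def R0s(sigma):
    """Stochastic threshold R_0^s of Theorem 3.1."""
    return N * (beta - 0.5 * sigma**2 * N) / (mu + gamma)

for sigma in (0.8, 0.5, 0.1):
    print(f"sigma = {sigma}: R0^s = {R0s(sigma):.2f}")
# sigma = 0.8: R0^s = -0.56 < 1 (extinction, Theorem 3.1)
# sigma = 0.5: R0^s = 1.00     (extinction, Theorem 3.2)
# sigma = 0.1: R0^s = 1.96 > 1 (ergodic case, Theorem 4.1)
```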

Figure 1. Solution of system (1.3) with differing values of \(\sigma=0.8, 0.5\) and initial value \(I(0)=1.5\).

In Figure 2, we choose the parameters as in Theorem 4.1, that is, \(R^{s}_{0}>1\). In the left sub-figure, the solution of system (1.3) fluctuates in a small neighborhood, that is, the disease persists, and the time average of \(I(t)\) confirms the ergodicity. Moreover, there is a stationary distribution (see the histogram on the right in Figure 2).

Figure 2. Solution of system (1.3) with initial value \(I(0)=1.5\) and \(\sigma=0.1\). In the left figure, the blue line represents the solution of system (1.3) and the black line represents the time average of \(I(t)\); the right figure is a histogram of the solution \(I(t)\).

6 Conclusion

In this paper, we have studied an SIS epidemic system subject to environmental white noise. First, we showed that the solution of system (1.3) is global and positive. A key parameter is the stochastic basic reproduction number \(R_{0}^{s}\), which is smaller than its deterministic counterpart \(R_{0}\); moreover, \(R_{0}^{s}\rightarrow R_{0}\) as \(\sigma\rightarrow0\). Theorems 3.1, 3.2, and 4.1 show that the disease becomes extinct if \(R_{0}^{s}\leq1\) and prevails if \(R_{0}^{s}> 1\); thus \(R_{0}^{s}\) is the threshold between extinction and prevalence of the disease. Theorem 4.1 also shows that system (1.3) has the ergodic property if \(R_{0}^{s}> 1\), and we derived the expression for its invariant density. Finally, numerical simulations were carried out to support our results.

References

  1. Gomes, MGM, White, LJ, Medley, GF: The reinfection threshold. J. Theor. Biol. 236, 111-113 (2005)


  2. Wang, W, Ruan, S: Bifurcations in an epidemic model with constant removal rate of the infectives. J. Math. Anal. Appl. 291, 775-793 (2004)


  3. Hethcote, HW: Three basic epidemic models. In: Applied Mathematical Ecology. Springer, Berlin (1989)


  4. Capasso, V, Serio, G: A generalization of the Kermack-McKendrick deterministic epidemic model. Math. Biosci. 42, 43-61 (1978)


  5. May, RM: Stability and Complexity in Model Ecosystems. Princeton University Press, Princeton (2001)


  6. Lin, YG, Jiang, DQ: Long-time behaviour of a stochastic SIR model. Appl. Math. Comput. 236, 1-9 (2014)


  7. Arnold, L: Stochastic Differential Equations: Theory and Applications. Wiley, New York (1972)


  8. Mao, XR: Stochastic Differential Equations and Applications. Horwood, Chichester (1997)


  9. Gray, A, Greenhalgh, D, Hu, L, Mao, XR, Pan, JF: A stochastic differential equation SIS epidemic model. SIAM J. Appl. Math. 71, 876-902 (2011)


  10. Karatzas, I, Shreve, SE: Brownian Motion and Stochastic Calculus. Springer, Berlin (1991)


  11. Sun, X, Wang, Y: Stability analysis of a stochastic logistic model with nonlinear diffusion term. Appl. Math. Model. 32, 2067-2075 (2008)


  12. Kutoyants, AY: Statistical Inference for Ergodic Diffusion Processes. Springer, London (2003)


  13. Higham, DJ: An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 43, 525-546 (2001)



Acknowledgements

The work was supported by the PhD Programs Foundation of Ministry of China (No. 200918), National Science Foundation of China (No. 11371085), and Natural Science Foundation of Changchun Normal University.

Author information


Corresponding author

Correspondence to Daqing Jiang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors have contributed to the manuscript on an equal basis. All authors read and approved the final manuscript.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


About this article


Cite this article

Han, Q., Jiang, D., Lin, S. et al. The threshold of stochastic SIS epidemic model with saturated incidence rate. Adv Differ Equ 2015, 22 (2015). https://doi.org/10.1186/s13662-015-0355-4
