In this section we present a discrete-time SEIRS epidemic model, a discrete analogue of the continuous SEIRS model studied in [3].
The individuals of the population are divided into four epidemiological classes whose sizes, at each time \(t\in \{0,1,2,\dots \}\), are denoted by \(S(t)\) (susceptible), \(E(t)\) (exposed), \(I(t)\) (infectious), and \(R(t)\) (recovered). So, the state vector of the population is \(X(t)=(S(t),E(t),I(t), R(t))\), and the total population is \(N(t)=S(t)+E(t)+I(t)+R(t)\).
Following [27, 29], we consider that each time interval comprises two distinct temporal phases. In the first one, the disease dynamics acts and individuals can change from one epidemiological class to the next, as shown in Fig. 1. Reproduction and survival happen in the second temporal phase. We assume that the disease does not affect the birth process. Concerning survival, to take into account both natural and disease-induced mortality in full generality, we associate a different survival rate with each of the epidemiological classes. The corresponding diagram is included in Fig. 2.
The disease transmission is modeled by a function Φ: the fraction of susceptible individuals that become exposed at time t is \(\Phi (I(t)/N(t))\), where
$$ \Phi :[0,1]\longrightarrow [0,1],\qquad \Phi \in C^{2} \bigl([0,1] \bigr),\qquad \Phi (0)=0, \quad \text{and is increasing.} $$
(2)
A common choice, corresponding to the so-called proportional or standard incidence, is \(\Phi (x)=\beta x\), where \(\beta \in (0,1]\) is the transmission parameter. When infections are modeled as Poisson processes, \(\Phi (x)=1-e^{-\beta x}\) for \(\beta >0\) [29].
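These two choices can be checked numerically against the conditions in (2); the following is a minimal sketch in which the value β = 0.8 is an arbitrary illustration:

```python
import math

beta = 0.8  # hypothetical transmission parameter, for illustration only

def phi_standard(x):
    """Standard incidence: Phi(x) = beta * x."""
    return beta * x

def phi_poisson(x):
    """Poisson-process incidence: Phi(x) = 1 - exp(-beta * x)."""
    return 1.0 - math.exp(-beta * x)

xs = [i / 100.0 for i in range(101)]          # grid on [0, 1]
for phi in (phi_standard, phi_poisson):
    assert phi(0.0) == 0.0                    # Phi(0) = 0
    assert all(phi(x) < phi(y) for x, y in zip(xs, xs[1:]))  # increasing
    assert all(0.0 <= phi(x) <= 1.0 for x in xs)             # maps [0,1] into [0,1]

# The Poisson incidence is concave, so it lies below its tangent at 0,
# a property used later in Theorem 3(b): Phi(x) <= Phi'(0) x
assert all(phi_poisson(x) <= beta * x + 1e-12 for x in xs)
```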
Within the disease dynamics, the transitions between classes are defined by parameters \(\gamma ^{C}\in (0,1)\) for \(C\in \{E,I,R\}\) that represent the fraction of the individuals of class C that pass to the next class per time unit (see Fig. 1).
Henceforth, we use C as a generic letter for an unspecified epidemiological class and call \(\boldsymbol{\mathcal{C}}:=\{S,E,I,R\}\).
Demography is included in the model in a simple form. It is assumed that there is no vertical transmission of the disease, so that all births occur into the susceptible class. The recruitment of individuals to the susceptible class per time unit is a function
$$ B:[0,\infty )\longrightarrow [0,\infty ), \quad B \in C^{1} \bigl([0, \infty)\bigr), $$
(3)
of the total population N. We will use a constant recruitment function as a particular case. Other common choices, proposed in [27, 29], are geometric, Beverton–Holt, and Ricker recruitment functions.
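These families of recruitment functions can be sketched in code; the specific parameterizations and constants below are illustrative assumptions, not values taken from [27, 29]:

```python
import math

def constant_recruitment(N, B=10.0):
    """Constant recruitment: B(N) = B."""
    return B

def beverton_holt(N, a=20.0, b=0.1):
    """Beverton-Holt recruitment B(N) = aN/(1 + bN), bounded above by a/b."""
    return a * N / (1.0 + b * N)

def ricker(N, a=2.0, b=0.01):
    """Ricker recruitment B(N) = aN exp(-bN), bounded above by a/(b*e)."""
    return a * N * math.exp(-b * N)

# All three families are bounded, which is the assumption used in Proposition 1
Ns = [float(i) for i in range(2001)]
assert all(constant_recruitment(N) <= 10.0 for N in Ns)
assert all(beverton_holt(N) <= 20.0 / 0.1 for N in Ns)
assert all(ricker(N) <= 2.0 / (0.01 * math.e) + 1e-9 for N in Ns)
```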
Individuals in all epidemiological classes are subject to natural death, possibly increased by the disease. Parameters \(\sigma ^{C}\in (0,1)\), for \(C\in \boldsymbol{\mathcal{C}}\), represent the fraction of the individuals of class C that survive per time unit. Thus, the fraction of individuals of class C dying per time unit is \(1-\sigma ^{C}\in (0,1)\) (see Fig. 2).
The transitions between epidemiological classes in a time unit are defined by the following map:
$$ T(X)= \begin{pmatrix} S-\Phi (I/N)S+\gamma ^{R}R \\ E+\Phi (I/N)S-\gamma ^{E}E \\ I+\gamma ^{E}E-\gamma ^{I}I \\ R+\gamma ^{I}I-\gamma ^{R}R \end{pmatrix} $$
and the demographic changes by the following map:
$$ D(X)= \bigl(B(N)+\sigma ^{S}S,\sigma ^{E}E,\sigma ^{I}I,\sigma ^{R}R \bigr). $$
We now take both processes into account sequentially: the epidemiological transitions act first, followed by demography. The discrete-time SEIRS epidemic model that we propose can thus be expressed in terms of the maps T and D as
$$ X(t+1)=D \bigl(T \bigl(X(t) \bigr) \bigr), $$
or in a detailed form
$$ \begin{aligned} &S(t+1) = B \bigl(N(t) \bigr)+\sigma ^{S}\gamma ^{R} R(t)+\sigma ^{S} \biggl(1-\Phi \biggl(\frac{I(t)}{N(t)} \biggr) \biggr)S(t), \\ & E(t+1) = \sigma ^{E}\Phi \biggl( \frac{I(t)}{N(t)} \biggr)S(t)+ \sigma ^{E} \bigl(1-\gamma ^{E} \bigr) E(t), \\ & I(t+1) = \sigma ^{I}\gamma ^{E} E(t)+\sigma ^{I} \bigl(1-\gamma ^{I} \bigr) I(t), \\ & R(t+1) = \sigma ^{R}\gamma ^{I} I(t)+\sigma ^{R} \bigl(1-\gamma ^{R} \bigr) R(t). \end{aligned} $$
(4)
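One iteration of system (4) is straightforward to implement, which also gives a quick numerical check of the forward invariance of \(\mathbf{R}_{+}^{4}\); the parameter values in this sketch are illustrative assumptions:

```python
import numpy as np

def seirs_step(X, B, Phi, sigma, gamma):
    """One iteration X(t+1) = D(T(X(t))) of system (4)."""
    S, E, I, R = X
    N = S + E + I + R
    p = Phi(I / N)                       # fraction of susceptibles becoming exposed
    S1 = B(N) + sigma['S'] * (gamma['R'] * R + (1.0 - p) * S)
    E1 = sigma['E'] * (p * S + (1.0 - gamma['E']) * E)
    I1 = sigma['I'] * (gamma['E'] * E + (1.0 - gamma['I']) * I)
    R1 = sigma['R'] * (gamma['I'] * I + (1.0 - gamma['R']) * R)
    return np.array([S1, E1, I1, R1])

# Illustrative parameter values (not taken from the paper)
sigma = {'S': 0.95, 'E': 0.90, 'I': 0.85, 'R': 0.95}
gamma = {'E': 0.30, 'I': 0.20, 'R': 0.05}
B = lambda N: 10.0                       # constant recruitment
Phi = lambda x: 0.6 * x                  # standard incidence

X = np.array([500.0, 10.0, 5.0, 0.0])
trajectory = [X]
for _ in range(300):
    X = seirs_step(X, B, Phi, sigma, gamma)
    trajectory.append(X)
assert all((Y >= 0.0).all() for Y in trajectory)   # orbit stays in R^4_+
```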
The assumptions on the parameters of the model allow us to straightforwardly prove that \(T (\mathbf{R}_{+}^{4} )\subset \mathbf{R}_{+}^{4}\) and also \(D (\mathbf{R}_{+}^{4} )\subset \mathbf{R}_{+}^{4}\). Therefore, \(\mathbf{R}_{+}^{4}\) is forward invariant under the semiflow defined by the discrete-time system (4).
Proposition 1
If the function B is bounded, then system (4) is dissipative.
Proof of Proposition 1
Let \(\hat{B}>0\) be such that \(B(x)\leq \hat{B}\) for all \(x\in {}[ 0,\infty )\), and \(\hat{\sigma }=\max_{C\in \boldsymbol{\mathcal{C}}}\{ \sigma ^{C}\}\in (0,1)\). Then
$$ \begin{aligned}N(t+1) = {}& B \bigl(N(t) \bigr)+ \biggl( \sigma ^{S} \biggl(1-\Phi \biggl( \frac{I(t)}{N(t)} \biggr) \biggr)+\sigma ^{E} \Phi \biggl( \frac{I(t)}{N(t)} \biggr) \biggr) S(t) \\ &{} + \bigl(\sigma ^{E} \bigl(1-\gamma ^{E} \bigr)+\sigma ^{I}\gamma ^{E} \bigr)E(t)+ \bigl(\sigma ^{I} \bigl(1-\gamma ^{I} \bigr)+\sigma ^{R}\gamma ^{I} \bigr)I(t) \\ & {}+ \bigl(\sigma ^{R} \bigl(1-\gamma ^{R} \bigr)+\sigma ^{S}\gamma ^{R} \bigr)R(t) \\ \leq{} & \hat{B}+\max \bigl\{ \sigma ^{S},\sigma ^{E} \bigr\} S(t)+\max \bigl\{ \sigma ^{E}, \sigma ^{I} \bigr\} E(t) \\ &{} +\max \bigl\{ \sigma ^{I},\sigma ^{R} \bigr\} I(t)+\max \bigl\{ \sigma ^{R},\sigma ^{S} \bigr\} R(t) \\ \leq {}& \hat{\sigma }N(t)+\hat{B}.\end{aligned} $$
(5)
We now present some basic properties of the solution of the linear scalar difference equation
$$ x(t+1)=ax(t)+b, $$
(6)
with \(0< a<1\), \(b\geq 0\), that will be used here and later on in the manuscript. The solution to (6) is
$$ x(t)= \biggl(x(0)-\frac{b}{1-a} \biggr)a^{t}+\frac{b}{1-a}, $$
(7)
from where it follows that \(x(t)\) converges monotonically to \(b/(1a)\). In particular, for all \(x(0)\geq 0\), we have
$$ \min \biggl\{ x(0),\frac{b}{1-a} \biggr\} \leq x(t)\leq \max \biggl\{ x(0), \frac{b}{1-a} \biggr\} ,\quad t=0,1,2,\ldots $$
(8)
Therefore, from (5) and (7) it follows that any solution of system (4), with initial conditions \(X(0)\in \mathbf{R}_{+}^{4}\), satisfies that
$$ N(t)\leq \biggl( N(0)-\frac{\hat{B}}{1-\hat{\sigma }} \biggr) \hat{\sigma }^{t}+\frac{\hat{B}}{1-\hat{\sigma }} \underset{t\rightarrow \infty }{ \longrightarrow }\frac{\hat{B}}{1-\hat{\sigma }}. $$
This inequality proves that, for any fixed \(M>\hat{B}/(1-\hat{\sigma })\) and every solution of (4), there is a time t̂ such that \(N(t)\leq M\) for all \(t\geq \hat{t}\), which establishes the dissipativity of system (4). □
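The closed-form solution (7) and the bounds (8) are easy to verify against direct iteration of (6); a short numerical sketch with the illustrative values a = 0.8, b = 3:

```python
# Iterate x(t+1) = a x(t) + b and compare with the closed form (7)
a, b = 0.8, 3.0
limit = b / (1.0 - a)                    # fixed point b/(1-a) = 15

x, xs = 5.0, [5.0]
for _ in range(60):
    x = a * x + b
    xs.append(x)

closed = [(xs[0] - limit) * a ** t + limit for t in range(len(xs))]
assert all(abs(u - v) < 1e-9 for u, v in zip(xs, closed))

# bounds (8): every iterate lies between x(0) and b/(1-a)
assert all(min(xs[0], limit) <= v <= max(xs[0], limit) + 1e-12 for v in xs)

# monotone convergence toward b/(1-a)
assert abs(xs[-1] - limit) < 1e-4
```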
Function B is bounded in the particular case of having a constant, Beverton–Holt, or Ricker recruitment function.
Notice that the proof shows that all nonnegative solutions of system (4) are attracted by the compact set \(K=\{(S,E,I,R)\in \mathbf{R}_{+}^{4}:N\in [0,\hat{B}/(1-\hat{\sigma })] \}\).
We continue the analysis by finding conditions for the eradication or endemicity of the disease.
To consider disease eradication, we try to find an equilibrium of (4) with \(I=0\). We immediately obtain \(E=0\), \(R=0\), and \(S=B(S)+\sigma ^{S} S\). To guarantee the existence of a unique diseasefree equilibrium (DFE), we make the following assumption on the scalar difference equation representing the demography of the population without disease.
Hypothesis 2.1
The equation \(S(t+1) = \sigma ^{S} S(t)+B(S(t))\) possesses a unique positive equilibrium \(S^{*}\) that is hyperbolic and globally asymptotically stable (GAS) in \((0,\infty )\).
If the recruitment function is constant, then Hypothesis 2.1 holds. This also happens, for certain values of the parameters, in the cases of the Beverton–Holt and the Ricker recruitment functions.
Note that if Hypothesis 2.1 is met, then the unique DFE of system (4) is \(X_{0}^{*}=(S^{*},0,0,0)\).
As we will prove, the basic reproduction number \(\mathcal{R}_{0}\) of system (4) determines whether the disease is eradicated or becomes endemic. We use the next-generation method for discrete-time systems developed in [2] to calculate \(\mathcal{R}_{0}\). We consider E and I as infected states, and S and R as uninfected states. The so-called disease-free system is the one associated with the uninfected states, obtained by setting \(E=0\) and \(I=0\):
$$ \begin{aligned} &S(t+1)=B \bigl(S(t)+R(t) \bigr)+\sigma ^{S}\gamma ^{R} R(t)+\sigma ^{S}S(t), \\ &R(t+1)=\sigma ^{R} \bigl(1-\gamma ^{R} \bigr) R(t). \end{aligned} $$
(9)
Proposition 2
If function B is bounded and Hypothesis 2.1 is met, then \((S^{*},0)\) is a GAS equilibrium of system (9) in \(\Omega = \{ (S,R)\in \mathbf{R}_{+}^{2}:S>0 \} \).
Proof of Proposition 2
It is straightforward to show that \((S^{\ast },0)\) is an equilibrium of system (9) and, by linearization, that it is hyperbolic and locally asymptotically stable (LAS).
To prove that \((S^{*},0)\) attracts all points in Ω, we apply Theorem 2.1 in [30].
For that, let \((S_{0},R_{0})\in \Omega \) and define, for \(t=0,1,2,\ldots \) ,
$$\sigma _{t}(S)=B \bigl(S+\bigl(\sigma ^{R}\bigl(1-\gamma ^{R}\bigr)\bigr)^{t}R_{0} \bigr)+ \sigma ^{S}\gamma ^{R}\bigl(\sigma ^{R}\bigl(1-\gamma ^{R}\bigr)\bigr)^{t}R_{0}+ \sigma ^{S}S,$$
which is a continuous map in \(\mathbf{R}_{+}\), and the discrete dynamical process defined by \(\tau _{0}:=I\), the identity map, and \(\tau _{t}:=\sigma _{t-1}\circ \sigma _{t-2}\circ \cdots \circ \sigma _{1}\circ \sigma _{0}\) for \(t\geq 1\).
If \(\{(S(t),R(t)):t\geq 0\}\) is the orbit associated with \((S_{0},R_{0})\) in system (9), we have that \(R(t)=(\sigma ^{R}(1-\gamma ^{R}))^{t} R_{0}\) and \(S(t)=\tau _{t}(S_{0})\).
Since \(0<\sigma ^{R}(1-\gamma ^{R})<1\), we have \(\lim_{t\rightarrow \infty }R(t)=0\), and so we need to show that \(\lim_{t\rightarrow \infty }S(t)=S^{\ast }\).
We now prove that the discrete dynamical process \(\tau _{t}\) (\(t\geq 0\)) is asymptotically autonomous (Definition 2.1 in [30]) and that its limit discrete semiflow, \(\Sigma _{t}=\Sigma \circ \overset{(t)}{\cdots }\circ \Sigma \) (\(t \geq 0\)), is the one generated by the continuous map \(\Sigma (S)=B(S)+\sigma ^{S}S\) in \(\mathbf{R}_{+}\). In order to do so, let \(x\in \mathbf{R}_{+}\), and let \(\{ x_{n} \} _{n=1}^{\infty }\) and \(\{ t_{n} \} _{n=1}^{\infty }\) be sequences in \(\mathbf{R}_{+}\) such that \(\lim_{n\rightarrow \infty }x_{n}=x\) and \(\lim_{n\rightarrow \infty }t_{n}=\infty \). We need to show that \(\lim_{n\rightarrow \infty }\sigma _{t_{n}}(x_{n})=\Sigma (x)\), which follows immediately taking into account that \(0<\sigma ^{R}(1-\gamma ^{R})<1\) and that B is continuous.
As the only Σinvariant subset of \(\mathbf{R}_{+}\) is \(\{S^{\ast }\}\), to complete the proof by applying Theorem 2.1 in [30], we just need to prove that the set \(\{S(t):t\geq 0\}\) is bounded.
Let B̂ be an upper bound of function B. Then, for every \(t\geq 0\), we have \(\sigma _{t}(S)\leq \hat{B}+R_{0}+\sigma ^{S}S\), and so \(\tau _{t}(S_{0})\leq x(t)\), where \(x(t)\) is the solution to
$$\begin{aligned} &x(t+1) =\hat{B}+R_{0}+\sigma ^{S}x(t), \\ &x(0) =S_{0}, \end{aligned}$$
which corresponds to the linear scalar equation (6) with \(a:=\sigma ^{S}\) and \(b:=\hat{B}+R_{0}\). Now, using (8), we have \(\tau _{t}(S_{0})\leq x(t)\leq \max \{ S_{0},(\hat{B}+R_{0})/(1-\sigma ^{S}) \} \), and so
$$ \sup \bigl\{ S(t):t\geq 0 \bigr\} =\sup \bigl\{ \tau _{t}(S_{0}):t \geq 0 \bigr\} \leq \max \bigl\{ S_{0},( \hat{B}+R_{0})/ \bigl(1-\sigma ^{S} \bigr) \bigr\} , $$
as we wanted to show. □
To proceed with the application of the next-generation method, in the equations for the infected compartments we must separate the terms \(\mathcal{F}_{E}\) and \(\mathcal{F}_{I}\), representing new infections, from the terms \(\mathcal{T}_{E}\) and \(\mathcal{T}_{I}\), associated with transitions between compartments:
$$ \begin{aligned} &\mathcal{F}_{E}(X)=\sigma ^{E}\Phi (I/N )S,\qquad \mathcal{F}_{I}(X)=0, \\ &\mathcal{T}_{E}(X)= \sigma ^{E} \bigl(1-\gamma ^{E} \bigr) E,\qquad \mathcal{T}_{I}(X)= \sigma ^{I} \gamma ^{E} E+\sigma ^{I} \bigl(1-\gamma ^{I} \bigr)I. \end{aligned} $$
Now, we can calculate matrices
$$ F= \biggl[\frac{\partial \mathcal{F}_{C}(X_{0}^{\ast })}{\partial D} \biggr]_{C,D\in \{E,I\}}= \begin{bmatrix} 0 & \sigma ^{E}\Phi '(0) \\ 0 & 0 \end{bmatrix} $$
and
$$ T= \biggl[\frac{\partial \mathcal{T}_{C}(X_{0}^{\ast })}{\partial D} \biggr]_{C,D\in \{E,I\}}= \begin{bmatrix} \sigma ^{E}(1-\gamma ^{E}) & 0 \\ \sigma ^{I}\gamma ^{E} & \sigma ^{I}(1-\gamma ^{I}) \end{bmatrix} $$
to obtain the next-generation matrix \(Q=F(\mathrm{Id}-T)^{-1}\) [2], where Id is the identity matrix of order 2. The basic reproduction number \(\mathcal{R}_{0}\) is the spectral radius of Q:
$$ \mathcal{R}_{0}=\rho (Q)= \frac{\sigma ^{E}\sigma ^{I}\gamma ^{E} \Phi '(0)}{ (1-\sigma ^{E}(1-\gamma ^{E}) ) (1-\sigma ^{I}(1-\gamma ^{I}) )}. $$
(10)
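Formula (10) can be cross-checked numerically by computing the spectral radius of \(Q=F(\mathrm{Id}-T)^{-1}\) directly; the parameter values in this sketch are illustrative, with standard incidence so that \(\Phi '(0)=\beta \):

```python
import numpy as np

# Illustrative parameters; Phi(x) = beta*x gives Phi'(0) = beta
sigma_E, sigma_I = 0.90, 0.85
gamma_E, gamma_I = 0.30, 0.20
beta = 0.60

F = np.array([[0.0, sigma_E * beta],
              [0.0, 0.0]])
T = np.array([[sigma_E * (1.0 - gamma_E), 0.0],
              [sigma_I * gamma_E,         sigma_I * (1.0 - gamma_I)]])

# Next-generation matrix Q = F (Id - T)^{-1}; R0 is its spectral radius
Q = F @ np.linalg.inv(np.eye(2) - T)
R0_spectral = max(abs(np.linalg.eigvals(Q)))

# Closed-form expression (10)
R0_formula = (sigma_E * sigma_I * gamma_E * beta
              / ((1.0 - sigma_E * (1.0 - gamma_E))
                 * (1.0 - sigma_I * (1.0 - gamma_I))))
assert abs(R0_spectral - R0_formula) < 1e-9
```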
The next result gives sufficient conditions for disease eradication, both local and global.
Theorem 3
Let system (4) satisfy Hypothesis 2.1. Then

(a)
If \(\mathcal{R}_{0}<1\), then DFE \(X_{0}^{*}\) is LAS.

(b)
If \(\mathcal{R}_{0}< 1\) and \(\Phi ''(x)\leq 0\) for \(x>0\), then DFE \(X_{0}^{*}\) is GAS in
\(\Omega = \{ (S,E,I,R)\in \mathbf{R}_{+}^{4}:S>0 \} \).

(c)
If \(\mathcal{R}_{0}>1\), then DFE \(X_{0}^{*}\) is unstable.
Proof of Theorem 3
(a) and (c) are direct consequences of Theorem 2.1 in [2].
(b) Since \(\Phi \in C^{2}([0,1])\), \(\Phi (0)=0\), and \(\Phi ''(x)\leq 0\), we have \(\Phi (x)\leq \Phi '(0)x\) for \(x\in [0,1]\).
Therefore
$$ \begin{aligned} E(t+1) & \leq \sigma ^{E}\Phi '(0)\frac{I(t)}{N(t)}S(t)+\sigma ^{E} \bigl(1-\gamma ^{E} \bigr) E(t) \\ & \leq \sigma ^{E}\Phi '(0)I(t)+\sigma ^{E} \bigl(1-\gamma ^{E} \bigr) E(t), \end{aligned} $$
and so, using the equation for I in (4),
$$ \begin{aligned} \begin{bmatrix} E(t+1) \\ I(t+1) \end{bmatrix} & \leq \begin{bmatrix} \sigma ^{E}(1-\gamma ^{E}) & \sigma ^{E}\Phi '(0) \\ \sigma ^{I}\gamma ^{E} & \sigma ^{I}(1-\gamma ^{I}) \end{bmatrix} \begin{bmatrix} E(t) \\ I(t) \end{bmatrix} \\ & =(F+T) \begin{bmatrix} E(t) \\ I(t) \end{bmatrix} \leq (F+T)^{t+1} \begin{bmatrix} E(0) \\ I(0) \end{bmatrix}. \end{aligned} $$
Matrix \(F+T\) can be considered the projection matrix of a standard linear matrix model of population dynamics [18], with F and T playing the roles of the fertility and transition matrices, respectively. The net reproductive rate of this model coincides with \(\mathcal{R}_{0}\) and, since \(\rho (T)=\max \{\sigma ^{E}(1-\gamma ^{E}),\sigma ^{I}(1-\gamma ^{I}) \}<1\) and \(\mathcal{R}_{0}<1\), we can apply Theorem 3.3 in [18], which yields \(\rho (F+T)<1\) and, therefore, \((F+T)^{t}\underset{t\to \infty }{\longrightarrow }0\). This proves that \(E(t), I(t)\underset{t\to \infty }{\longrightarrow }0\).
To prove that \(R(t)\) also tends to 0, note that the previous arguments provide \(\alpha \in (0,1)\) and \(K>0\) such that \(I(t)\leq K\alpha ^{t}\). Thus, substituting in the equation for \(R(t)\), we obtain \(R(t+1)\leq K\alpha ^{t} +\sigma ^{R}(1-\gamma ^{R}) R(t)\), which implies that there exist \(\bar{\alpha }\in (\max (\alpha ,\sigma ^{R}(1-\gamma ^{R})),1 )\) and \(\bar{K}>0\) such that \(R(t)\leq \bar{K}\bar{\alpha }^{t} \underset{t\to \infty }{\longrightarrow }0\).
Finally, to prove that \(S(t)\underset{t\to \infty }{\longrightarrow }S^{*}\), we just need to follow the reasoning in the proof of Proposition 2. For \((S_{0},E_{0},I_{0},R_{0})\in \Omega \) and \(t=0,1,2,\ldots \), we define
$$ \begin{aligned} \sigma _{t} (S) = {}& B \bigl(S+E(t)+I(t)+R(t) \bigr)+\sigma ^{S}\gamma ^{R} R(t) \\ & {} +\sigma ^{S} \biggl(1- \Phi \biggl( \frac{I(t)}{S+E(t)+I(t)+R(t)} \biggr) \biggr)S. \end{aligned} $$
The corresponding discrete dynamical process \(\tau _{t}\) (\(t\geq 0\)) is asymptotically autonomous with limit discrete semiflow \(\Sigma _{t}\) (\(t \geq 0\)) generated by the continuous map \(\Sigma (S)=B(S)+\sigma ^{S}S\) in \(\mathbf{R}_{+}\), which coincides with the one in the proof of Proposition 2. □
Note that the assumption \(\Phi ''(x)\leq 0\) for \(x>0\) is met in the case of standard incidence, i.e., if \(\Phi (x)=\beta x\).
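Part (b) of Theorem 3 can be illustrated by simulation: with standard incidence and hypothetical parameter values for which \(\mathcal{R}_{0}<1\), a solution of (4) is driven to the DFE. A sketch (constant recruitment; all values are illustrative):

```python
import numpy as np

def step(X, B, beta, sigma, gamma):
    """One iteration of (4) with standard incidence Phi(x) = beta*x."""
    S, E, I, R = X
    N = S + E + I + R
    p = beta * I / N
    return np.array([B + sigma['S'] * (gamma['R'] * R + (1.0 - p) * S),
                     sigma['E'] * (p * S + (1.0 - gamma['E']) * E),
                     sigma['I'] * (gamma['E'] * E + (1.0 - gamma['I']) * I),
                     sigma['R'] * (gamma['I'] * I + (1.0 - gamma['R']) * R)])

# Illustrative parameters chosen so that R0 < 1
sigma = {'S': 0.95, 'E': 0.90, 'I': 0.85, 'R': 0.95}
gamma = {'E': 0.30, 'I': 0.20, 'R': 0.05}
B, beta = 10.0, 0.30

R0 = (sigma['E'] * sigma['I'] * gamma['E'] * beta
      / ((1.0 - sigma['E'] * (1.0 - gamma['E']))
         * (1.0 - sigma['I'] * (1.0 - gamma['I']))))
assert R0 < 1.0                          # here R0 is roughly 0.58

X = np.array([200.0, 20.0, 20.0, 0.0])
for _ in range(3000):
    X = step(X, B, beta, sigma, gamma)
S, E, I, R = X
assert E + I < 1e-6                                  # disease is eradicated
assert abs(S - B / (1.0 - sigma['S'])) < 1e-3        # S(t) converges to S* = 200
```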
The endemicity of the disease is represented in mathematical terms by the concept of uniform persistence. We use the persistence function \(\rho (S,E,I,R)=E+I\). Thus, system (4) is uniformly persistent [25] if there exists \(\varepsilon >0\) such that \(\liminf_{t\to \infty }(E(t)+I(t))>\varepsilon \) for any solution with \(E(0)+I(0)>0\). If lim inf is substituted by lim sup in the definition, the system is said to be uniformly weakly persistent.
In the next theorem we prove the uniform persistence of system (4) when \(\mathcal{R}_{0}>1\) in the case of a constant recruitment function \(B(N)=B\) and standard incidence \(\Phi (x)=\beta x\), \(\beta \in (0,1]\).
Theorem 4
Let \(\Phi (x)=\beta x\), \(0<\beta \leq 1\), and \(B(N)=B\) be constant in system (4). If \(\mathcal{R}_{0}>1\), then (4) is uniformly persistent.
Proof of Theorem 4
Since all nonnegative solutions of (4) are attracted by a compact set and \(\{X\in \mathbf{R}_{+}^{4}:E+I>0\}\) is forward invariant, Corollary 4.8 in [25] establishes that, to obtain that system (4) is uniformly persistent, it suffices to prove that it is uniformly weakly persistent.
So, let us prove that (4) is uniformly weakly persistent. We argue by contradiction. Suppose that it is not. Then, for any arbitrary \(\varepsilon >0\), there exists a solution \(X(t)\) with \(E(0)+I(0)>0\) and \(\limsup_{t\to \infty }(E(t)+I(t))<\varepsilon \). Thus, there exists some \(t_{0}>0\) such that \(E(t)+I(t)<\varepsilon \) for all \(t\geq t_{0}\).
Also, for \(t\geq t_{0}\), \(R(t+1)< \sigma ^{R} (1-\gamma ^{R})R(t)+\sigma ^{R}\gamma ^{I} \varepsilon \) with \(0<\sigma ^{R} (1-\gamma ^{R})<1\). This implies, iterating the right-hand side from \(t_{0}\) on,
$$ \begin{aligned} R(t) & < \bigl(\sigma ^{R} \bigl(1-\gamma ^{R} \bigr) \bigr)^{t-t_{0}} R(t_{0})+\sigma ^{R}\gamma ^{I} \varepsilon \sum _{i=0}^{t-t_{0}-1} \bigl(\sigma ^{R} \bigl(1-\gamma ^{R} \bigr) \bigr)^{i} \\ & < \bigl(\sigma ^{R} \bigl(1-\gamma ^{R} \bigr) \bigr)^{t-t_{0}} R(t_{0})+ \frac{\sigma ^{R}\gamma ^{I} }{(1-\sigma ^{R})+\gamma ^{R}-(1-\sigma ^{R})\gamma ^{R} } \varepsilon . \end{aligned} $$
Then, as \((\sigma ^{R} (1-\gamma ^{R}) )^{t-t_{0}} \underset{t\to \infty }{\longrightarrow }0\), there exists \(t_{1}>t_{0}\) such that \(R(t)< (1+ \frac{\sigma ^{R}\gamma ^{I} }{(1-\sigma ^{R})+\gamma ^{R}-(1-\sigma ^{R})\gamma ^{R}} )\varepsilon =:\bar{r}\varepsilon \) for \(t\geq t_{1}\).
Thus, for any \(\varepsilon >0\), there exists \(t_{1}>0\) such that
$$ E(t)+I(t)+R(t)\leq (1+\bar{r})\varepsilon \quad \text{for } t\geq t_{1}. $$
(11)
Let us now establish a lower bound for the total population \(N(t)\). Reproducing, with the appropriate changes, the calculations in the proof of Proposition 1, we obtain the following inequality:
$$ N(t+1)\geq \bar{\sigma }N(t)+B, $$
where \(\bar{\sigma }=\min_{C\in \boldsymbol{\mathcal{C}}}\{ \sigma ^{C}\}\in (0,1)\), which implies, considering the difference equation \(x(t+1)=\bar{\sigma }x(t)+B\) and using (8),
$$ N(t)\geq \bar{n}:=\min \bigl\{ N(0),B/(1-\bar{\sigma }) \bigr\} >0. $$
This inequality together with (11) yields
$$ S(t)\geq \bar{n}-(1+\bar{r})\varepsilon \quad \text{for } t\geq t_{1}. $$
(12)
With the help of (11) and (12), we can find the following lower bound for the coefficient of \(I(t)\) in the E equation:
$$ \begin{aligned} \frac{\sigma ^{E}\beta S(t)}{N(t)} & \geq \frac{\sigma ^{E}\beta S(t)}{S(t)+(1+\bar{r})\varepsilon } \geq \frac{\sigma ^{E}\beta (\bar{n}-(1+\bar{r})\varepsilon )}{\bar{n}-(1+\bar{r})\varepsilon +(1+\bar{r})\varepsilon } \\ & = \sigma ^{E}\beta \biggl(1- \frac{1+\bar{r}}{\bar{n}}\varepsilon \biggr) =:G(\varepsilon ). \end{aligned} $$
Let us define the matrix
$$ \bar{P}_{\varepsilon }= \begin{bmatrix} \sigma ^{E}(1-\gamma ^{E}) & G(\varepsilon ) \\ \sigma ^{I}\gamma ^{E} & \sigma ^{I}(1-\gamma ^{I}) \end{bmatrix} $$
which satisfies, for \(t\geq t_{1}\),
$$ \begin{bmatrix} E(t+1) \\ I(t+1) \end{bmatrix} \geq \bar{P}_{\varepsilon } \begin{bmatrix} E(t) \\ I(t) \end{bmatrix}. $$
As \(\bar{P}_{\varepsilon }\) is a primitive matrix, if we can find ε such that \(\rho (\bar{P}_{\varepsilon })>1\), we obtain that \(E(t), I(t)\underset{t\to \infty }{\longrightarrow }\infty \) whenever \(E(0)+I(0)>0\), which is the contradiction we were looking for. As in the proof of Theorem 3, this can be checked by means of the net reproductive rate \(\mathcal{R}_{0,\varepsilon }\) of the model associated with matrix \(\bar{P}_{\varepsilon }\) (Theorem 3.3 in [18]).
Now \(\mathcal{R}_{0,\varepsilon }\) and \(\mathcal{R}_{0}\) satisfy
$$ \mathcal{R}_{0,\varepsilon }=\frac{G(\varepsilon )}{\sigma ^{E}\beta } \mathcal{R}_{0}= \biggl(1-\frac{1+\bar{r}}{\bar{n}}\varepsilon \biggr)\mathcal{R}_{0}. $$
So, if we choose \(\bar{\varepsilon }=\frac{1}{2} (1-\frac{1}{\mathcal{R}_{0}} )\frac{\bar{n}}{1+\bar{r}}\) and take into account that \(\mathcal{R}_{0}>1\), we obtain the required result, which completes the proof:
$$ \mathcal{R}_{0,\bar{\varepsilon }}=\frac{1}{2}(\mathcal{R}_{0}+1)>1. $$
□
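Theorem 4 can likewise be illustrated by simulation: with constant recruitment, standard incidence, and hypothetical parameter values for which \(\mathcal{R}_{0}>1\), the infected compartments of a simulated solution stay bounded away from zero. A sketch (all values are illustrative):

```python
import numpy as np

def step(X, B, beta, sigma, gamma):
    """One iteration of (4) with standard incidence Phi(x) = beta*x."""
    S, E, I, R = X
    N = S + E + I + R
    p = beta * I / N
    return np.array([B + sigma['S'] * (gamma['R'] * R + (1.0 - p) * S),
                     sigma['E'] * (p * S + (1.0 - gamma['E']) * E),
                     sigma['I'] * (gamma['E'] * E + (1.0 - gamma['I']) * I),
                     sigma['R'] * (gamma['I'] * I + (1.0 - gamma['R']) * R)])

# Illustrative parameters chosen so that R0 > 1
sigma = {'S': 0.95, 'E': 0.90, 'I': 0.85, 'R': 0.95}
gamma = {'E': 0.30, 'I': 0.20, 'R': 0.05}
B, beta = 10.0, 1.0

R0 = (sigma['E'] * sigma['I'] * gamma['E'] * beta
      / ((1.0 - sigma['E'] * (1.0 - gamma['E']))
         * (1.0 - sigma['I'] * (1.0 - gamma['I']))))
assert R0 > 1.0                          # here R0 is roughly 1.94

X = np.array([200.0, 0.0, 1.0, 0.0])
infected = []
for t in range(6000):
    X = step(X, B, beta, sigma, gamma)
    if t >= 5000:
        infected.append(X[1] + X[2])     # E(t) + I(t) on a late time window
assert min(infected) > 0.1               # bounded away from 0: persistence
```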