

Competition and coexistence of a stochastic Holling II n-predator one-prey model

Abstract

In this paper, we discuss a stochastic Holling II predator–prey model with n predators competing for one prey. The existence of a positive solution is established by using the comparison theorem. We derive the stochastic break-even concentration \(\tilde{R}_{i}\) of each predator, which determines the outcome of the competition. When the noise intensity of the prey is small, the predator with the lowest stochastic break-even concentration survives and the other predators go extinct. When the noise intensity of the prey is large enough, all species go to extinction. Moreover, if two predators have the same lowest stochastic break-even concentration, they can coexist under some additional conditions. Finally, numerical simulations are given to illustrate the analytical results.

Highlights:

  • The article studies the dynamics of a stochastic predator–prey system with Holling II functional response and n predators.

  • Sufficient conditions for competitive exclusion and coexistence are established.

  • The results show that noise can affect the outcome of the competition.

1 Introduction

Since the pioneering work of Lotka and Volterra, predator–prey models have played an important role in both theory and practice: they help us to understand the relationship between species and their environment, and they have been studied by many scholars, see [1–13]. Most of these works consider a single predator, but in nature multiple species often inhabit the same environment and compete for one prey. In Ref. [14], the authors considered three-dimensional Lotka–Volterra models with two predators competing for one prey in a deterministic environment. Many authors have studied predator–prey models with various functional responses, since the interaction between predators and their prey is nonlinear in the natural world; a classical choice is the Holling II functional response [4, 13, 15]. Competition among predators for the same prey is a common interaction and plays an important role in the real world. Pure and simple competition between two predator species with no interference between rivals has been studied by several authors [14, 16, 17]. The principle of competitive exclusion for two predators competing for a single prey species with nonlinear functional response was discussed in [14]. In [17], the authors studied the global dynamics of predator–prey models with two predators competing for one prey in a uniform deterministic environment. Here we assume that multiple predators compete for one prey with no interference between rivals, and the growth rate of each predator takes the Holling II functional response. The model can be written as follows:

$$ \textstyle\begin{cases} \frac{dS}{dt} =Sr ( 1- \frac{S}{K} ) - \sum_{i=1}^{n} \frac{a_{i} S}{1+S} X_{i}, \\ \frac{d X_{i}}{dt} =- r_{i} X_{i} + \frac{b_{i} S}{1+S} X_{i},\quad i=1,2,\ldots, n, \end{cases} $$
(1.1)

where \({S}({t})\) and \(X_{i}(t)\) represent the population densities of the prey and the ith predator at time t, respectively. r is the intrinsic birth rate of the prey, \(a_{i}\) is the capturing rate of the ith predator, and \(b_{i}\) is the conversion rate of nutrients into reproduction for the ith predator. \(r_{i}\) is the natural death rate of the ith predator, and K measures the environmental carrying capacity for the prey. All parameters are positive constants, and it is obvious that \(a_{i} > b_{i}\).

However, population models are inevitably affected by environmental white noise, which is an important component of reality, and it is more difficult to prove the competitive exclusion principle when environmental white noise is taken into account. In this paper, we present a stochastic Holling II system with a logistic term for the prey and discuss whether the competitive exclusion principle still holds. To the best of our knowledge, few authors have investigated the competitive exclusion principle for this model.

We assume that the environment fluctuations mainly affect the intrinsic rate r and the death rate \(r_{i}\), like

$$r \rightarrow r+ \sigma_{0} \dot{B} (t),\qquad- r_{i} \rightarrow- r_{i} + \sigma_{i} \dot{B} ( t )\quad \mbox{for }i=1,2,\ldots,n, $$

where \(B(t)\) is a standard Brownian motion and \(\sigma_{i}\) (\(i=0,1,\ldots,n\)) are the intensities of the environmental white noise. Corresponding to system (1.1), the stochastic Holling II system can be presented as follows:

$$ \textstyle\begin{cases} dS= [ Sr ( 1- \frac{S}{K} ) - \sum_{i=1}^{n} \frac{a_{i} S}{1+S} X_{i} ] \, dt + \sigma_{0} S\, dB, \\ d X_{i} = ( - r_{i} X_{i} + \frac{b_{i} S}{1+S} X_{i} )\, dt+ \sigma_{i} X_{i}\, dB,\quad i=1,2,\ldots, n. \end{cases} $$
(1.2)

Throughout this paper, unless otherwise specified, let \(( \Omega, \mathcal{F}, \{ \mathcal{F}_{t} \}_{t\geq 0},P )\) be a complete probability space with a filtration \(\{ \mathcal{F}_{t} \}_{t\geq 0}\) satisfying the usual conditions (i.e., it is right continuous and \(\{ \mathcal{F}_{t} \}_{t\geq 0}\) contains all P-null sets), and let \(B(t)\) be the Brownian motion defined on the probability space.

Now, we introduce Itô’s formula for general stochastic differential equations, which will be used throughout this paper. Consider the following n-dimensional stochastic differential equation:

$$ dx=f ( x,t ) \, dt+g ( x,t )\, dw(t) $$
(1.3)

which is defined for \(t\geq t_{0}\) and \(x\in R^{n}\) with the initial value \(x ( t_{0} ) = x_{0}\), where \({f} ( {x},{t} ) =( f_{1} ( x,t ), f_{2} ( x,t ), \ldots, f_{n} ( x,t ) )\) is an n-dimensional vector function, \(g ( x,t ) = ( g_{ij} (x,t))_{n\times l}\) is an \(n\times l\) matrix function, and \(w ( {t} ) =( w_{1} ( t ), w_{2} ( t ),\ldots, w_{l} ( t ) )\) is an l-dimensional standard Brownian motion defined on the above probability space.

Define the differential operator L associated with Eq. (1.3) as follows:

$$L = \frac{\partial}{\partial t} + \sum_{i=1}^{n} f_{i} (x,t) \frac{\partial}{\partial x_{i}} + \frac{1}{2} \sum _{i,j=1}^{n} \sum_{k=1}^{l} g_{ik} (x,t) g_{jk} (x,t) \frac{\partial^{2}}{\partial x_{i}\, \partial x_{j}}. $$

If L acts on a function \(V(x,t)\in C^{2,1} (R^{n} \times R_{+};R)\), then we have

$$LV ( x,t ) = \frac{\partial V}{\partial t} + \sum_{i=1}^{n} f_{i} ( x,t ) \frac{\partial V}{\partial x_{i}} + \frac{1}{2} \sum _{i,j=1}^{n} \sum_{k=1}^{l} g_{ik} ( x,t ) g_{jk} ( x,t ) \frac{\partial^{2} V}{\partial x_{i}\, \partial x_{j}}, $$

where \(V_{t} = \frac{\partial V}{\partial t}\), \(V_{x} = ( \frac{\partial V}{\partial x_{1}},\ldots, \frac{\partial V}{\partial x_{n}} )\), \(V_{xx} = ( \frac{\partial^{2} V}{\partial x_{i} \, \partial x_{j}} )_{n\times n}\). By Itô’s formula, if \(x(t)\in R^{n}\), then

$$dV \bigl( x ( t ),t \bigr) = LV \bigl( x ( t ),t \bigr)\, dt+ V_{x} \bigl( x ( t ),t \bigr) g \bigl( x ( t ),t \bigr)\, dw ( t ). $$
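For example, applying Itô’s formula with \(V ( S ) =\ln S\) to the first equation of system (1.2) gives

$$d \ln S = \frac{1}{S}\, dS- \frac{1}{2 S^{2}} ( dS )^{2} = \Biggl[ r \biggl( 1- \frac{S}{K} \biggr) - \sum_{i=1}^{n} \frac{a_{i}}{1+S} X_{i} - \frac{\sigma_{0}^{2}}{2} \Biggr] \,dt + \sigma_{0}\, dB, $$

an identity that will be used repeatedly in Sect. 3.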

Lemma 1.1

(Strong law of large numbers)

Let \(M= \{ M_{t} \}_{t\geq 0}\) be a real-valued continuous local martingale vanishing at \(t = 0\). Then

$$\lim_{t\rightarrow +\infty} \langle M,M \rangle_{t} =\infty\quad \textit{a.s.}\quad \Rightarrow\quad \lim_{t\rightarrow +\infty} \frac{M_{t}}{ \langle M,M \rangle_{t}} =0 \quad \textit{a.s.} $$

and also

$$\lim_{t\rightarrow +\infty} \sup\frac{ \langle M,M \rangle_{t}}{t} < \infty \quad \textit{a.s.} \quad \Rightarrow\quad \lim_{t\rightarrow +\infty} \frac{M_{t}}{t} =0 \quad \textit{a.s.} $$

This paper is organized as follows. In Sect. 2, we show that there is a unique nonnegative solution of system (1.2) for any positive initial value. In Sect. 3, sufficient conditions for the principle of competitive exclusion are guaranteed. In Sect. 4, we obtain the coexistence of two survival predators. In Sect. 5, we give some simulations to illustrate our analytical results.

2 Existence and uniqueness of a nonnegative solution

Theorem 2.1

For any initial value \(( S ( 0 ), X_{1} ( 0 ), X_{2} ( 0 ),\ldots, X_{n} ( 0 ) ) \in R_{+}^{n+1}\), there is a unique solution \(( S ( t ), X_{1} ( t ), X_{2} ( t ),\ldots, X_{n} ( t ) )\) of system (1.2) on \(t\geq0\), and the solution will remain in \(R_{+}^{n+1}\) with probability one.

Proof

Consider the system

$$ \textstyle\begin{cases} du(t)= [ r ( 1- \frac{1}{K} e^{u(t)} ) - \sum_{i=1}^{n} \frac{a_{i} e^{v_{i} (t)}}{1+ e^{u(t)}} - \frac{\sigma_{0}^{2}}{2} ]\, dt + \sigma_{0}\, dB, \\ d v_{i} ( t ) = ( - r_{i} + \frac{b_{i} e^{u ( t )}}{1+ e^{u ( t )}} - \frac{\sigma_{i}^{2}}{2} ) \, dt+ \sigma_{i} \, dB, \quad i=1,2,\ldots, n, \end{cases} $$
(2.1)

with the initial value \(( u ( 0 ), v_{1} ( 0 ), v_{2} ( 0 ),\ldots, v_{n} ( 0 ) ) = ( \ln S(0), \ln X_{1} (0), \ln X_{2} ( 0 ),\ldots, \ln X_{n} (0) )\). Since the coefficients of system (2.1) are locally Lipschitz continuous, there is a unique local solution \(( u ( t ), v_{1} ( t ), v_{2} ( t ),\ldots, v_{n} ( t ) )\) on \(t \in[0, \tau_{e} )\), where \(\tau_{e}\) denotes the explosion time. It is easy to see that \(( S ( t ), X_{1} ( t ), X_{2} ( t ),\ldots, X_{n} ( t ) ) = ( e^{u ( t )}, e^{v_{1} ( t )},\ldots, e^{v_{n} (t)} )\) is the unique positive local solution of system (1.2) with the initial value \(( S ( 0 ), X_{1} ( 0 ),\ldots, X_{n} ( 0 ) )\) on \([0, \tau_{e} )\). Next, we will use the comparison theorem to show that the positive solution is global, i.e., \(\tau_{e} =\infty\).

Since the solution is positive for \(t \in[0, \tau_{e} )\), we have

$$dS \leq Sr \biggl( 1- \frac{S}{K} \biggr)\, dt + \sigma_{0} S \, dB. $$

Let

$$\Phi= \frac{e^{ ( r- \frac{\sigma_{0}^{2}}{2} ) t+ \sigma_{0} B(t)}}{\frac{1}{S ( 0 )} + \frac{r}{K} \int_{0}^{t} e^{ ( r- \frac{\sigma_{0}^{2}}{2} ) s+ \sigma_{0} B(s)} \,ds}, $$

then \(\Phi(t)\) is the unique solution of the following stochastic differential equation:

$$ \textstyle\begin{cases} d \Phi ( {t} ) =\Phi r ( 1- \frac{\Phi}{K} ) \, dt + \sigma_{0} \Phi \, dB, \\ \Phi ( 0 ) ={S} ( 0 ). \end{cases} $$
(2.2)
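The explicit expression for \(\Phi\) can be derived by the standard linearizing substitution; as a brief sketch, setting \(y=1/ \Phi\) and applying Itô’s formula to (2.2) gives

$$dy= \biggl[ \frac{r}{K} - \bigl( r- \sigma_{0}^{2} \bigr) y \biggr] \,dt- \sigma_{0} y\, dB, $$

a linear stochastic differential equation whose solution, obtained by the variation-of-constants formula, is precisely \(1/\Phi\) with Φ as above.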

By the comparison theorem for stochastic equations, this yields

$${S} ( {t} ) \leq\Phi ( {t} ),\quad t \in [ 0, \tau_{e} ), \mbox{a.s.} $$

Besides, we have

$$d X_{i} \leq ( - r_{i} X_{i} + b_{i} \Phi )\, dt+ \sigma_{i} X_{i}\, dB,\quad i=1,2,\ldots, n. $$

Let

$$\Psi_{i} ( {t} ) = e^{- ( r_{i} + \frac{\sigma_{i}^{2}}{2} ) t+ \sigma_{i} B(t)} \biggl[ X_{i} ( 0 ) + b_{i} \int_{0}^{t} \Phi ( {s} ) e^{ ( r_{i} + \frac{\sigma_{i}^{2}}{2} ) s- \sigma_{i} B ( s )} \,ds \biggr], $$

then \(\Psi_{i} ( {t} )\) is the unique solution of the following stochastic differential equation:

$$ \textstyle\begin{cases} d \Psi_{i} ( {t} ) = ( - r_{i} \Psi_{i} + b_{i} \Phi ) \, dt+ \sigma_{i} \Psi_{i}\, dB, \quad i=1,2,\ldots, n, \\ \Psi_{i} ( 0 ) = X_{i} ( 0 ). \end{cases} $$
(2.3)

By the comparison theorem for stochastic equations, it follows that

$$X_{i} ( t ) \leq\Psi_{i} ( {t} ),\quad t \in [ 0, \tau_{e} ), \mbox{a.s.} $$

Similarly, we get

$$dS \geq \Biggl[ Sr \biggl( 1- \frac{S}{K} \biggr) - \sum_{i=1}^{n} a_{i} \Psi_{i} \Biggr] \,dt+ \sigma_{0} S \,dB $$

and

$$d X_{i} \geq- r_{i} X_{i} \,dt+ \sigma_{i} X_{i} \,dB,\quad i=1,2,\ldots, n. $$

Let \(\phi(t)\) be the solution of the following stochastic differential equation:

$$ \textstyle\begin{cases} d\phi ( t ) = [ r\phi ( 1- \frac{\phi}{K} ) - \sum_{i=1}^{n} a_{i} \Psi_{i} ] \,dt + \sigma_{0} \phi \,dB, \\ \phi ( 0 ) =S(0). \end{cases} $$
(2.4)

And \(\varphi_{i} ( t )\), \(i=1,2,\ldots, n\), is the solution of the equation

$$ \textstyle\begin{cases} d \varphi_{i} ( t ) =- r_{i} \varphi_{i} \,dt + \sigma_{{i}} \varphi_{i} \,dB, \quad i=1,2,\ldots, n, \\ \varphi_{i} ( 0 ) = X_{i} (0). \end{cases} $$
(2.5)

It follows that

$$S ( t ) \geq\phi ( t ),\qquad X_{i} ( t ) \geq \varphi_{i} ( t ),\quad t \in [ 0, \tau_{e} ), \mbox{a.s.} $$

In summary, we have

$$\Phi ( {t} ) \geq S ( t ) \geq\phi ( t ), \qquad \Psi_{i} ( {t} ) \geq X_{i} ( t ) \geq\varphi_{i} ( t ), \quad t \in [ 0, \tau_{e} ), \mbox{a.s.} $$

This completes the proof of the theorem. □

Remark 2.1

Since there is a unique solution \(( S ( t ), X_{1} ( t ), X_{2} ( t ),\ldots, X_{n} ( t ) ) \in R_{+}^{n+1}\) of system (1.2) for any given initial value \(( S ( 0 ), X_{1} ( 0 ), X_{2} ( 0 ),\ldots, X_{n} ( 0 ) ) \in R_{+}^{n+1}\), and \(\Gamma=\{ ( S, X_{1}, X_{2},\ldots, X_{n} ) \in R_{+}^{n+1}:0\leq S\leq K, X_{i} \geq0,i=1,2,\ldots, n\} \) is an invariant set [18], we always assume in what follows that the initial value \(( S ( 0 ), X_{1} ( 0 ), X_{2} ( 0 ),\ldots, X_{n} ( 0 ) ) \in \Gamma\).

3 Competitive exclusion principle in model (1.2)

Define the stochastic break-even concentration

$$\tilde{R}_{i} = \frac{r_{i} + \frac{\sigma_{i}^{2}}{2}}{K b_{i}},\quad i=1,2,\ldots, n. $$
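For illustration only (a minimal numerical sketch of our own; the function names are ours, and the parameter values are those of Sect. 5 with the noise intensities used for Fig. 3), the thresholds \(\tilde{R}_{i}\) and the persistence threshold appearing in Theorem 3.1 below can be evaluated as follows:

# Minimal sketch (our own illustration, not part of the analysis):
# R_i = (r_i + sigma_i^2/2) / (K * b_i) and the persistence threshold
# (r - sigma_0^2/2) / (r * (1 + K)) from Theorem 3.1.

def break_even(r_i, sigma_i, K, b_i):
    return (r_i + sigma_i ** 2 / 2) / (K * b_i)

def persistence_threshold(r, sigma_0, K):
    return (r - sigma_0 ** 2 / 2) / (r * (1 + K))

# Parameter values of Sect. 5 with the noise intensities of Fig. 3.
K, r, sigma_0 = 0.8, 0.5, 0.21
R1 = break_even(r_i=0.15, sigma_i=0.2, K=K, b_i=0.5)
R2 = break_even(r_i=0.10, sigma_i=0.5, K=K, b_i=0.3)

# By Theorem 3.1, the predator with the smallest R_i persists in mean provided
# that value is also below min(1, persistence_threshold(r, sigma_0, K));
# every predator with a larger R_i goes extinct.
print(R1, R2, persistence_threshold(r, sigma_0, K))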

Theorem 3.1

Let \(( S ( t ), X_{1} ( t ), X_{2} ( t ),\ldots, X_{n} ( t ) )\) be the solution of system (1.2) with any initial value \(( S ( 0 ), X_{1} ( 0 ), X_{2} ( 0 ),\ldots, X_{n} ( 0 ) ) \in R_{+}^{n+1}\), then for arbitrary \(i=1,2,\ldots, n\), we get some results as follows:

  1. (i)

    If \(\tilde{R}_{i} >1\), then

    $$\lim_{t\rightarrow +\infty} \sup\frac{\ln X_{i}}{t} \leq K b_{i} ( 1- \tilde{R}_{i} ) < 0,\quad \textit{a.s.}, $$

    i.e., the ith predator goes extinct with probability one.

  2. (ii)

    If \(\tilde{R}_{i} < \tilde{R}_{j}\), then

    $$\lim_{t\rightarrow +\infty} \sup\frac{\ln X_{j}}{t} \leq K b_{j} ( \tilde{R}_{i} - \tilde{R}_{j} ) < 0, \quad \textit{a.s.}, $$

    i.e., the predator \(X_{j}\) goes extinct with probability one.

  3. (iii)

    If \(r- \frac{\sigma_{0}^{2}}{2} >0\), \(\tilde {R}_{i} = \min_{1\leq j\leq n} \{ \tilde{R}_{j} \}\), and \(\tilde{R}_{i} < \min_{i\neq j} \{ \tilde{R}_{j}, \frac{1}{r(1+K)} ( r- \frac{\sigma_{0}^{2}}{2} ),1 \}\), then

    $$\begin{aligned}& \lim_{t\rightarrow +\infty} \inf\frac{1}{t} \int_{0}^{t} X_{i} ( u ) \,du\geq \frac{r ( 1+K ) [ \frac{1}{r ( 1+K )} ( r- \frac{\sigma_{0}^{2}}{2} ) - \tilde{R}_{i} ]}{a_{i}} >0, \\& \lim_{t\rightarrow +\infty} X_{j} =0,\quad i\neq j, \end{aligned}$$

    i.e., only the predator \(X_{i}\) is persistent in mean and all other predators will go extinct with probability one.

    Moreover, if \(r- \frac{\sigma_{0}^{2}}{2} > \frac{r a_{i} K}{r_{i}}\) and \(r_{i} > K a_{i}\), then

    $$\lim_{t\rightarrow +\infty} \inf\frac{1}{t} \int_{0}^{t} S ( u ) \,du\geq\frac{K r_{i} ( r- \frac{\sigma_{0}^{2}}{2} - \frac{r a_{i} K}{r_{i}} )}{r( r_{i} -K a_{i} )} >0, \quad \textit{a.s.}, $$

    i.e., the prey is persistent in mean.

  4. (iv)

    If \(r- \frac{\sigma_{0}^{2}}{2} <0\), then

    $$\lim_{t\rightarrow +\infty} S ( t ) =0,\qquad \lim_{t\rightarrow +\infty} X_{i} ( t ) =0, \quad \textit{a.s.}, $$

    i.e., the prey and the predators all go extinct.

Proof

Adding all the equations of system (1.2), we obtain

$$\begin{aligned} {dS}+ \sum_{i=1}^{n} d X_{i} =& \Biggl[ Sr \biggl( 1- \frac{S}{K} \biggr) - \sum _{i=1}^{n} \frac{(a_{i} - b_{i} )S}{1+S} X_{i} - \sum _{i=1}^{n} r_{i} X_{i} \Biggr] \,dt \\ &{}+ \Biggl( \sigma_{0} S+ \sum_{i=1}^{n} \sigma_{i} X_{i} \Biggr) \,dB. \end{aligned}$$

Integrating this from 0 to t and dividing by t on both sides, we have

$$\begin{aligned}& \frac{1}{t} \Biggl(S+ \sum_{i=1}^{n} {X}_{i} \Biggr)- \frac{1}{t} \Biggl(S(0)+ \sum _{i=1}^{n} {X}_{i} (0)\Biggr) \\& \quad = \frac{1}{t} \int_{0}^{t} Sr \biggl( 1- \frac{S}{K} \biggr) \,du- \frac{1}{ t} \int_{0}^{t} \sum_{i=1}^{n} \frac{(a_{i} - b_{i} )S}{1+S} X_{i} \,du- \sum _{i=1}^{n} \frac{r_{i}}{t} \int_{0}^{t} X_{i} \,du \\& \qquad {}+ \frac{\sigma_{0}}{t} \int_{0}^{t} S\,dB + \sum _{i=1}^{n} \frac{\sigma_{i}}{ t} \int_{0}^{t} X_{i} \,dB \\& \quad \leq rK - \frac{r}{t} \int_{0}^{t} S\,du- \sum _{i=1}^{n} \frac{r_{i}}{t} \int_{0}^{t} X_{i} \,du + \frac{\sigma_{0}}{t} \int_{0}^{t} S\,dB + \sum _{i=1}^{n} \frac{\sigma_{i}}{t} \int_{0}^{t} X_{i} \,dB. \end{aligned}$$

Simple computation shows that

$$ \frac{1}{t} \int_{0}^{t} S\, du+ \sum _{i=1}^{n} \frac{r_{i}}{r} \frac{1}{t} \int_{0}^{t} X_{i} \,du \leq K+ \frac{\alpha ( t )}{r}, $$
(3.1)

where

$$\alpha ( t ) = \frac{\sigma_{0}}{t} \int_{0}^{t} S\,dB + \sum _{i=1}^{n} \frac{\sigma_{i}}{t} \int_{0}^{t} X_{i} \,dB - \frac{1}{t} \Biggl(S+ \sum_{i=1}^{n} {X}_{i} \Biggr)+ \frac{1}{t} \Biggl(S(0)+ \sum _{i=1}^{n} {X}_{i} (0)\Biggr). $$

By using Itô’s formula to the first equation of system (1.2), we get

$$d \ln S = \Biggl[ r \biggl( 1- \frac{S}{K} \biggr) - \sum _{i=1}^{n} \frac{a_{i}}{1+S} X_{i} - \frac{\sigma_{0}^{2}}{2} \Biggr] \,dt + \sigma_{0} \,dB. $$

Integrating this from 0 to t and dividing by t on both sides, we have

$$ \frac{r}{K} \frac{1}{t} \int_{0}^{t} S\,du+ \sum _{i=1}^{n} \frac{a_{i}}{t} \int_{0}^{t} \frac{X_{i}}{1+S} \,du =- \frac{\ln S ( t ) - \ln S ( 0 )}{t} +r- \frac{\sigma_{0}^{2}}{2} + \frac{\sigma_{0} B}{t}. $$
(3.2)

By using Itô’s formula, we have

$$d \ln X_{i} = \biggl( - r_{i} + \frac{b_{i} S}{1+S} - \frac{\sigma_{i}^{2}}{2} \biggr) \,dt+ \sigma_{i} \,dB. $$

Integrating this from 0 to t and dividing by t on both sides, we have

$$ \frac{\ln X_{i}}{t} = \frac{1}{t} \int_{0}^{t} \frac{b_{i} S}{1+S} \,du - r_{i} - \frac{\sigma_{i}^{2}}{2} + \frac{\sigma_{i} B_{i}}{t} + \frac{\ln X_{i} ( 0 )}{t}. $$
(3.3)

(i) If \(\tilde{R}_{i} >1\), then combining (3.1) and (3.3) gives

$$\begin{aligned} \frac{\ln X_{i}}{t} \leq&\frac{b_{i}}{t} \int_{0}^{t} S\,du - r_{i} - \frac{\sigma_{i}^{2}}{2} + \frac{\sigma_{i} B_{i}}{t} + \frac{\ln X_{i} ( 0 )}{t} \\ \leq& b_{i} K- r_{i} - \frac{\sigma_{i}^{2}}{2} + \beta_{i} - b_{i} \sum_{j=1}^{n} \frac{r_{j}}{ r} \frac{1}{t} \int_{0}^{t} X_{j} \,du, \end{aligned}$$
(3.4)

where

$$\beta_{i} (t)=\frac{b_{i} \alpha(t)}{r} + \frac{\sigma_{i} B_{i}}{t} + \frac{\ln X_{i} ( 0 )}{t}. $$

By the strong law of large numbers [19], we have

$$ \lim_{t\rightarrow +\infty} \beta_{i} (t) =0,\quad \mbox{a.s.}, $$
(3.5)

Then it follows from (3.4) and (3.5) that

$$\lim_{t\rightarrow +\infty} \sup\frac{\ln X_{i}}{t} \leq K b_{i} ( 1- \tilde{R}_{i} ),\quad \mbox{a.s.}, $$

which implies

$$\lim_{t\rightarrow +\infty} X_{i} (t)=0,\quad \mbox{a.s.} $$

(ii) From (3.3), we know that for arbitrary j,

$$ \frac{\ln X_{j}}{t} \leq\frac{1}{t} \int_{0}^{t} \frac{b_{j} S}{ 1+S} \,du - r_{j} - \frac{\sigma_{j}^{2}}{2} + \frac{\sigma_{j} B_{j}}{t} + \frac{\ln X_{j} ( 0 )}{t}. $$
(3.6)

Computing \(\mbox{(3.6)}\times b_{i} - \mbox{(3.3)}\times b_{j}\) gives

$$\begin{aligned}& b_{i} \frac{\ln X_{j} (t)}{t} - b_{j} \frac{\ln X_{i} ( t )}{ t} \\& \quad \leq K b_{i} b_{j} ( \tilde{R}_{i} - \tilde{R}_{j} ) + b_{i} \biggl( \frac{\sigma_{j} B_{j}}{t} + \frac{\ln X_{j} ( 0 )}{t} \biggr) - b_{j} \biggl( \frac{\sigma_{i} B_{i}}{t} + \frac{\ln X_{i} ( 0 )}{t} \biggr), \end{aligned}$$

then

$$ \frac{\ln X_{j}}{t} \leq K b_{j} ( \tilde{R}_{i} - \tilde{R}_{j} ) +H_{ij} ( t ), $$
(3.7)

where

$$H_{ij} ( t ) = \frac{b_{j}}{b_{i}} \frac{\ln X_{i} ( t )}{t} + \biggl( \frac{\sigma_{j} B_{j}}{t} + \frac{\ln X_{j} ( 0 )}{t} \biggr) - \frac{b_{j}}{b_{i}} \biggl( \frac{\sigma_{i} B_{i}}{t} + \frac{\ln X_{i} ( 0 )}{t} \biggr). $$

By the strong law of large numbers, we obtain that

$$\lim_{t\rightarrow +\infty} H_{ij} ( t ) =0, \quad \mbox{a.s.} $$

From (3.7), if \(\tilde{R}_{i} < \tilde{R}_{j}\), then

$$\lim_{t\rightarrow +\infty} \sup\frac{\ln X_{j}}{t} \leq K b_{j} ( \tilde{R}_{i} - \tilde{R}_{j} ) < 0,\quad \mbox{a.s.} $$

Then

$$\lim_{t\rightarrow +\infty} X_{j} ( t ) =0,\quad \mbox{a.s.} $$

(iii) Without loss of generality, assume that \(\tilde{R}_{1} = \min_{1\leq i\leq n} \{ \tilde{R}_{i} \}\) and \(\tilde{R}_{1} < \min_{i\neq1} \{ \tilde{R}_{i}, \frac{1}{r(1+K)} ( r- \frac{\sigma_{0}^{2}}{ 2} ),1 \}\). According to Theorem 3.1(ii), we know that for arbitrary \(i=2,\ldots, n\),

$$\lim_{t\rightarrow +\infty} X_{i} ( t ) =0, \quad \mbox{a.s.} $$

Then from (3.3), we have

$$\begin{aligned} \frac{\ln X_{1} (t)}{t} =& \frac{1}{t} \int_{0}^{t} \frac{b_{1} S}{1+S} \,du - r_{1} - \frac{\sigma_{1}^{2}}{2} + \frac{\sigma_{1} B_{1}}{t} + \frac{\ln X_{1} ( 0 )}{t} \\ \geq&\frac{b_{1}}{1+K} \frac{1}{t} \int_{0}^{t} S\,du - r_{1} - \frac{\sigma_{1}^{2}}{2} + \frac{\sigma_{1} B_{1}}{t} + \frac{\ln X_{1} ( 0 )}{t}. \end{aligned}$$
(3.8)

From (3.2),

$$\begin{aligned} \frac{\ln X_{1} (t)}{t} \geq&\frac{b_{1} K}{r(1+K)} \Biggl[ - \frac{a_{1}}{ t} \int_{0}^{t} \frac{X_{1}}{1+S} \,du - \sum _{i=2}^{n} \frac{a_{i}}{t} \int_{0}^{t} \frac{X_{i}}{1+S} \,du \ \\ &{}- \frac{\ln S ( t ) - \ln S ( 0 )}{t} + \biggl( r- \frac{\sigma_{0}^{2}}{2} \biggr) + \frac{\sigma_{0} B}{t} \Biggr] - r_{1} - \frac{\sigma_{1}^{2}}{2} + \frac{\sigma_{1} B_{1}}{t} + \frac{\ln X_{1} ( 0 )}{t} \\ \geq&\frac{b_{1} K}{r ( 1+K )} \biggl[ - \frac{a_{1}}{t} \int_{0}^{t} X_{1} \,du- \frac{\ln S ( t ) - \ln S ( 0 )}{t} + \biggl( r- \frac{\sigma_{0}^{2}}{2} \biggr)+ \frac{\sigma_{0} B}{t} \biggr] \\ &{} - r_{1} - \frac{\sigma_{1}^{2}}{2} + \frac{\sigma_{1} B_{1}}{t} + \frac{\ln X_{1} ( 0 )}{t}. \end{aligned}$$

From (3.3), \(r- \frac{\sigma_{0}^{2}}{2} >0\), \(\tilde{R}_{1} < \min_{i\neq1} \{ \tilde{R}_{i}, \frac{1}{r(1+K)} ( r- \frac{\sigma_{0}^{2}}{ 2} ),1 \}\), Lemma 4 in Ref. [20], and the strong law of large numbers, we obtain

$$\begin{aligned} \lim_{t\rightarrow +\infty} \inf \frac{1}{t} \int_{0}^{t} X_{1} \,du \geq& \frac{\frac{b_{1} K}{r ( 1+K )} ( r- \frac{\sigma_{0}^{2}}{2} ) - ( r_{1} + \frac{\sigma_{1}^{2}}{2} )}{ \frac{a_{1} b_{1} K}{r ( 1+K )}} \\ =& \frac{r ( 1+K ) [ \frac{1}{r ( 1+K )} ( r- \frac{\sigma_{0}^{2}}{2} ) - \tilde{R}_{1} ]}{a_{1}} >0, \quad \mbox{a.s.} \end{aligned}$$

On the other hand, from (3.2) and the assumption we know that

$$\begin{aligned} \frac{r}{K} \frac{1}{t} \int_{0}^{t} S\,du \geq&- \frac{\ln S ( t ) - \ln S ( 0 )}{t} +r- \frac{\sigma_{0}^{2}}{2} - \sum_{i=1}^{n} \frac{a_{i}}{t} \int_{0}^{t} X_{i} \,du + \frac{\sigma _{0} B}{ t} \\ =&- \frac{\ln S ( t ) - \ln S ( 0 )}{t} +r- \frac{\sigma_{0}^{2}}{2} - \frac{a_{1}}{t} \int_{0}^{t} X_{1} \,du + \frac{\sigma_{0} B}{t}. \end{aligned}$$
(3.9)

According to (3.1), we obtain

$$ \biggl( \frac{r}{K} - \frac{r a_{1}}{r_{1}} \biggr) \frac{1}{t} \int_{0}^{t} S\,du \geq- \frac{\ln S ( t ) - \ln S ( 0 )}{t} +r- \frac{\sigma_{0}^{2}}{2} - \frac{r a_{1}}{r_{1}} K- \frac{a_{1}}{r_{1}} \alpha ( t ) + \frac{\sigma_{0} B}{t}. $$
(3.10)

Taking limits on both sides of (3.10) and using the strong law of large numbers for martingales, we have

$$\lim_{t\rightarrow +\infty} \inf \frac{1}{t} \int_{0}^{t} S\,du \geq\frac{K r_{1} ( r- \frac{\sigma_{0}^{2}}{2} - \frac{r a_{1} K}{r_{1}} )}{r ( r_{1} -K a_{1} )} >0 \quad \mbox{a.s.} $$

(iv) From (2.2) and the comparison theorem for stochastic equations, it follows that

$$S ( t ) \leq\Phi ( {t} ), \quad t \geq0, \mbox{a.s.} $$

Applying Itô’s formula to (2.2), we have

$$\begin{aligned} d \ln\Phi ( {t} ) =& \biggl[ r \biggl( 1- \frac{\Phi}{K} \biggr) - \frac{\sigma_{0}^{2}}{2} \biggr] \,dt + \sigma_{0} \,dB \\ \leq& \biggl( r- \frac{\sigma_{0}^{2}}{2} \biggr) \,dt + \sigma_{0} \,dB. \end{aligned}$$

Integrating both sides gives

$$\Phi ( {t} ) \leq\Phi ( 0 ) e^{ ( r- \frac{\sigma_{0}^{2}}{2} ) t+ \sigma_{0} B ( t )}. $$

Since \(r- \frac{\sigma_{0}^{2}}{2} <0\) and \(\lim_{t\rightarrow +\infty} \frac{B ( t )}{t} =0\) a.s., we get \(\lim_{t\rightarrow +\infty} \Phi ( t ) =0\) a.s., and hence \(\lim_{t\rightarrow +\infty} S ( t ) =0\) a.s.

Since the prey S goes to extinction, it follows from (3.3) and (3.6) that

$$\lim_{t\rightarrow +\infty} \sup\frac{\ln X_{i}}{t} \leq- r_{i} - \frac{\sigma_{i}^{2}}{2} < 0,\quad \mbox{a.s.} $$

So

$$\lim_{t\rightarrow +\infty} X_{i} ( t ) =0, \quad \mbox{a.s.} $$

 □

Remark 3.1

In Theorem 3.1(iii), it is an open problem whether the prey is persistent in mean or goes extinct if \(0< r- \frac{\sigma_{0}^{2}}{2} \leq\frac{K a_{i} r}{r_{i}}\).

4 The principle of coexistence in model (1.2)

In this section, we discuss the coexistence of the predators.

From Theorem 3.1(iii), we know that if \(\tilde{R}_{i} = \min_{1\leq j\leq n} \{ \tilde{R}_{j} \}\) and \(\tilde{R}_{i} < \min_{j\neq i} \{ \tilde{R}_{j}, \frac{1}{r(1+K)} ( r- \frac{\sigma_{0}^{2}}{ 2} ),1 \}\), then only the predator \(X_{i}\) is persistent in mean. Suppose now that two predators have the same lowest stochastic break-even concentration; without loss of generality, we let

$$\tilde{R}_{1} = \tilde{R}_{2} < \min_{i\neq1,2} \biggl\{ \tilde{R}_{i}, \frac{1}{r(1+K)} \biggl( r- \frac{\sigma_{0}^{2}}{2} \biggr),1 \biggr\} . $$

From Theorem 3.1(ii), we know that for arbitrary \(i=3,\ldots, n\),

$$\lim_{t\rightarrow +\infty} X_{i} ( t ) =0,\quad \mbox{a.s.} $$

So system (2.1) becomes

$$ \textstyle\begin{cases} du(t)= [ r ( 1- \frac{1}{K} e^{u(t)} ) - \sum_{i=1}^{2} \frac{a_{i} e^{v_{i} (t)}}{1+ e^{u(t)}} - \frac{\sigma_{0}^{2}}{2} ] \,dt + \sigma_{0} \,dB, \\ d v_{1} ( t ) = ( - r_{1} + \frac{b_{1} e^{u ( t )}}{1+ e^{u ( t )}} - \frac{\sigma_{1}^{2}}{2} ) \,dt+ \sigma_{1} \,dB, \\ d v_{2} ( t ) = ( - r_{2} + \frac{b_{2} e^{u ( t )}}{1+ e^{u ( t )}} - \frac{\sigma_{2}^{2}}{2} ) \,dt+ \sigma_{2} \,dB. \end{cases} $$
(4.1)

Assume that \(\frac{b_{1}}{b_{2}} = \frac{\sigma_{1}}{\sigma_{2}}\). Since \(\tilde{R}_{1} = \tilde{R}_{2}\), it follows that

$$d ( b_{2} v_{1} - b_{1} v_{2} ) = \biggl[ - b_{2} \biggl(r_{1} + \frac{\sigma_{1}^{2}}{2} \biggr)+ b_{1} \biggl(r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr) \biggr] \,dt+ ( b_{2} \sigma_{1} - b_{1} \sigma_{2} ) \,dB ( t ) =0. $$

Integrating gives \(b_{2} v_{1} ( t ) - b_{1} v_{2} ( t ) = b_{2} v_{1} ( 0 ) - b_{1} v_{2} ( 0 )\) for all \(t\geq0\), that is, \(e^{v_{2} ( t )} =c ( e^{v_{1} ( t )} )^{\frac{b_{2}}{b_{1}}}\), i.e., \(X_{2} ( t ) =c X_{1} ( t )^{\frac{b_{2}}{b_{1}}}\), where \(c= X_{2} ( 0 ) X_{1} ( 0 )^{- \frac{b_{2}}{b_{1}}}\) is a positive constant. We then get the following system:

$$ \textstyle\begin{cases} du ( t ) = [ r ( 1- \frac{1}{K} e^{u ( t )} ) - \frac{a_{1} e^{v_{1} ( t )}}{1+ e^{u ( t )}} - \frac{c a_{2} ( e^{v_{1} ( t )} )^{\frac{b_{2}}{b_{1}}}}{1+ e^{u ( t )}} - \frac{\sigma_{0}^{2}}{2} ] \,dt + \sigma_{0} \,dB, \\ d v_{1} ( t ) = ( - r_{1} + \frac{b_{1} e^{u ( t )}}{1+ e^{u ( t )}} - \frac{\sigma_{1}^{2}}{2} ) \,dt+ \sigma_{1} \,dB. \end{cases} $$
(4.2)

We now study the coexistence problem via system (4.2), which is equivalent to system (1.2) restricted to the two surviving predators.

Theorem 4.1

Let \(( u ( t ), v_{1} ( t ) )\) be the solution of system (4.2). For every \(t>0\), the distribution of \(( u ( t ), v_{1} ( t ) )\) has a density \(U ( t,x,y )\). If \(\frac{b_{1}}{b_{2}} = \frac{\sigma_{1}}{\sigma _{2}}\), \(\tilde{R}_{1} = \tilde{R}_{2} < \min_{i\neq1,2} \{ \tilde{R}_{i}, \frac{1}{r(1+K)} ( r- \frac{\sigma_{0}^{2}}{2} ),1 \}\), \(\sigma_{0} < \sigma_{1} < \sigma_{2}\), and \(\frac{b_{1} K}{r} ( r- \frac{\sigma_{0}^{2}}{2} ) - ( r_{1} + \frac{\sigma_{1}^{2}}{2} ) ( 1+K ) >0\) hold, then there exists a unique density \(U^{*} ( x,y ) \) such that

$$\lim_{t\rightarrow +\infty} \iint_{R^{2}} \bigl\vert U ( t,x,y ) - U^{*} ( x,y ) \bigr\vert \, dx\, dy=0. $$

The strategy of the proof is as follows.

  • We show that the transition function of the process \(( u ( t ), v_{1} ( t ) )\) is absolutely continuous by using the Hörmander theorem [21];

  • According to support theorems [22–24], we find that the density of the transition function is positive on \(\mathbb{R}^{2}\);

  • We show that the Markov semigroup satisfies the “Foguel alternative”;

  • We exclude sweeping by showing that there exists a Khasminskiĭ function.

In the following, we give the proof of Theorem 4.1 through Lemmas 4.1–4.4 by realizing this strategy.

Lemma 4.1

For each point \(( x_{0}, y_{0} ) \in R^{2}\) and \(t>0\), the transition probability function \(p ( t,x_{0}, y_{0},A ) \) has a continuous density \(k ( t,x,y,x_{0}, y_{0} )\) with respect to the Lebesgue measure.

Proof

The Hörmander theorem [21] for the existence of smooth density of the transition probability for degenerate diffusion processes is used in the proof of this lemma. Let \(a(x)\) and \(b(x)\) be vector fields on \(R^{d}\), then the Lie bracket \([ a,b ]\) is a vector field given by

$$[ a,b ]_{j} ( X ) = \sum_{k=1}^{d} \biggl( a_{k} \frac{\partial b_{j}}{\partial x_{k}} ( X ) - b_{k} \frac{\partial a_{j}}{\partial x_{k}} (X) \biggr). $$

Let

$$a ( \xi,\eta ) = \biggl( r \biggl( 1- \frac{1}{K} e^{\xi} \biggr) - \frac{a_{1} e^{\eta}}{1+ e^{\xi}} - \frac{c a_{2} ( e^{\eta} )^{\frac{b_{2}}{b_{1}}}}{1+ e^{\xi}} - \frac{\sigma_{0}^{2}}{2},- r_{1} + \frac{b_{1} e^{\xi}}{1+ e^{\xi}} - \frac{\sigma_{1}^{2}}{2} \biggr)^{T} $$

and

$$b ( \xi,\eta ) = ( \sigma_{0}, \sigma_{1} )^{T}. $$

Then calculate directly

$$[ a,b ] = \begin{pmatrix} \frac{r \sigma_{0} e^{\xi}}{K} - \frac{\sigma_{0} e^{\xi} ( a_{1} e^{\eta} +c a_{2} ( e^{\eta} )^{\frac{b_{2}}{b_{1}}} )}{ ( 1+ e^{\xi} )^{2}} + \frac{\sigma_{1} ( a_{1} e^{\eta} +c a_{2} \frac{b_{2}}{b_{1}} ( e^{\eta} )^{\frac{b_{2}}{b_{1}}} )}{1+ e^{\xi}} \\ - \frac{\sigma_{0} b_{1} e^{\xi}}{ ( 1+ e^{\xi} )^{2}} \end{pmatrix}. $$

It follows that

$$\begin{vmatrix} \sigma_{0} & \frac{r \sigma_{0} e^{\xi}}{K} - \frac{\sigma_{0} e^{\xi} ( a_{1} e^{\eta} +c a_{2} ( e^{\eta} )^{\frac{b_{2}}{b_{1}}} )}{ ( 1+ e^{\xi} )^{2}} + \frac{\sigma_{1} ( a_{1} e^{\eta} +c a_{2} \frac{b_{2}}{b_{1}} ( e^{\eta} )^{\frac{b_{2}}{b_{1}}} )}{1+ e^{\xi}} \\ \sigma_{1} & - \frac{\sigma_{0} b_{1} e^{\xi}}{ ( 1+ e^{\xi} )^{2}} \end{vmatrix} = - \frac{\sigma_{0}^{2} b_{1} e^{\xi}}{ ( 1+ e^{\xi} )^{2}} - \frac{r \sigma_{0} \sigma_{1} e^{\xi}}{K} + \frac{\sigma_{0} \sigma_{1} e^{\xi} ( a_{1} e^{\eta} +c a_{2} ( e^{\eta} )^{\frac{b_{2}}{b_{1}}} )}{ ( 1+ e^{\xi} )^{2}} - \frac{\sigma_{1}^{2} ( a_{1} e^{\eta} +c a_{2} \frac{b_{2}}{b_{1}} ( e^{\eta} )^{\frac{b_{2}}{b_{1}}} )}{1+ e^{\xi}} < 0, $$

since \(\frac{e^{\xi}}{1+ e^{\xi}} <1\), \(\sigma_{0} < \sigma_{1}\), and \(\frac{b_{2}}{b_{1}} = \frac{\sigma_{2}}{\sigma_{1}} >1\).

So, for every \(( \xi,\eta ) \in R^{2}\), vectors \(b ( \xi,\eta )\) and \([{a},{b}] ( \xi,\eta )\) span the space \(R^{2}\). In view of the Hörmander theorem [21], the transition probability function \(p ( t,x_{0}, y_{0},A ) \) has a continuous density \(k ( t,x,y,x_{0}, y_{0} )\) and \(k\in C^{\infty} ((0,\infty)\times R^{2} \times R^{2} )\). This completes the proof of Lemma 4.1. □

Lemma 4.2

For each \(( x_{0}, y_{0} ) \in R^{2}\) and \(( x,y ) \in R^{2}\), there exists \(T > 0\) such that \(k ( T,x,y, x_{0}, y_{0} ) >0\).

Proof

Now we check the positivity of the kernel k by using support theorems (see [22–24]). For a point \(( x_{0}, y_{0} ) \in R^{2}\) and a function \(\phi\in L^{2} ( [ 0,T ];R )\), consider the following system of integral equations:

$$ \textstyle\begin{cases} x_{\phi} ( t ) = x_{0} + \int_{0}^{t} [ f_{1} ( x_{\phi} ( s ), y_{\phi} ( s ) ) + \sigma_{0} \phi]\,ds, \\ y_{\phi} ( t ) = y_{0} + \int_{0}^{t} [ f_{2} ( x_{\phi} ( s ), y_{\phi} ( s ) ) + \sigma_{1} \phi]\,ds, \end{cases} $$
(4.3)

where \(f_{1} ( x,y ) =r ( 1- \frac{1}{K} e^{x} ) - \frac{a_{1} e^{y}}{1+ e^{x}} - \frac{a_{2} c( e^{y} )^{\frac{b_{2}}{b_{1}}}}{1+ e^{x}} - \frac{\sigma_{0}^{2}}{2}\) and \(f_{2} ( x,y ) =- r_{1} + \frac{b_{1} e^{x}}{1+ e^{x}} - \frac{\sigma_{1}^{2}}{ 2}\).

We denote \({X}= ( {x},{y} )^{{T}}\), \(X_{0} = ( x_{0}, y_{0} )^{T}\), and let \(D_{x,y,\phi}\) be the Fréchet derivative of the function \(h\rightarrow X_{\phi+h} (T)\) from \(L^{2} ( [ 0,T ];R)\) to \(R^{2}\) with \(X_{\phi+h} =[ x_{\phi+h}, y_{\phi+h} ]^{T}\). If for some ϕ the derivative \(D_{x,y,\phi}\) has rank two, then \(k ( T,x,y,x_{0}, y_{0} ) >0\) for \({x}= x_{\phi} (T)\) and \({y}= y_{\phi} (T)\). The derivative \(D_{x,y,\phi}\) can be found by means of the perturbation method for ordinary differential equations. Namely, let \(\Gamma ( t ) =f'( x_{\phi} ( t ), y_{\phi} ( t ) )\), where \(f '\) is the Jacobian matrix of \({f}=[ {f}_{1} ({x},{y} ), {f}_{2} ( {x},{y} ) ]^{T}\). Let \(Q(t,t_{0} )\), \(0\leq t_{0} \leq t\leq T\), be the matrix function such that \(Q(t_{0},t_{0} )=\mathrm{Id}\), \(\frac{\partial Q(t,t_{0} )}{ \partial t} = \Gamma ( t ) Q(t,t_{0} )\), and let \(\boldsymbol{v} = [ \sigma_{0}, \sigma_{1} ]^{T}\). Then

$$D_{x,y,\phi} h= \int_{0}^{T} Q ( T,s ) \boldsymbol{v} h ( s ) \,ds. $$

We check that the rank of \(D_{x,y,\phi}\) is two. Let \(\varepsilon\in(0,T)\) and \(h ( t ) = \mathbf{1}_{ [ T-\varepsilon,T ]} ( t )\), \(t\in [ 0,T ]\), where \(\mathbf{1}_{ [ T-\varepsilon,T ]}\) is the characteristic function of the interval \([ T-\varepsilon,T ]\). Since \({Q} ( T,s ) =\mathrm{Id}+ \Gamma ( T ) ( T-s ) +o ( T-s )\), we obtain

$$D_{x,y,\phi} h=\varepsilon\boldsymbol{v} + \frac{1}{2} \varepsilon^{2} \Gamma ( T ) \boldsymbol{v} +o \bigl( \varepsilon^{2} \bigr). $$

Compute

$$\Gamma ( T ) \boldsymbol{v} = \begin{pmatrix} - \frac{r e^{x}}{K} + \frac{e^{x} ( a_{1} e^{y} +c a_{2} ( e^{y} )^{\frac{b_{2}}{b_{1}}} )}{ ( 1+ e^{x} )^{2}} & - \frac{a_{1} e^{y} +c a_{2} \frac{b_{2}}{b_{1}} ( e^{y} )^{\frac{b_{2}}{b_{1}}}}{1+ e^{x}} \\ \frac{b_{1} e^{x}}{ ( 1+ e^{x} )^{2}} & 0 \end{pmatrix} \begin{pmatrix} \sigma_{0} \\ \sigma_{1} \end{pmatrix} = \begin{pmatrix} \sigma_{0} [ - \frac{r e^{x}}{K} + \frac{e^{x} ( a_{1} e^{y} +c a_{2} ( e^{y} )^{\frac{b_{2}}{b_{1}}} )}{ ( 1+ e^{x} )^{2}} ] - \frac{\sigma_{1} [ a_{1} e^{y} +c a_{2} \frac{b_{2}}{b_{1}} ( e^{y} )^{\frac{b_{2}}{b_{1}}} ]}{1+ e^{x}} \\ \frac{\sigma_{0} b_{1} e^{x}}{ ( 1+ e^{x} )^{2}} \end{pmatrix}. $$

Hence, vectors v and \(\Gamma ( T ) \boldsymbol{v}\) are linearly independent. Thus \(D_{x,y,\phi}\) has rank two.

Next, we prove that for any two points \(X_{0} \in R^{2} \) and \(\mathbf{X} \in R^{2}\), there exist a control function ϕ and \(T>0\) such that \(X_{\phi} (0)= X_{0}\), \(X_{\phi} (T)= \mathbf{X}\). Differentiating system (4.3) yields

$$ \textstyle\begin{cases} x_{\phi} ' ( t ) = f_{1} ( x_{\phi}, y_{\phi} ) + \sigma_{0} \phi, \\ y_{\phi} ' ( t ) = f_{2} ( x_{\phi}, y_{\phi} ) + \sigma_{1} \phi. \end{cases} $$
(4.4)

Let \(z_{\phi} = y_{\phi} - \frac{\sigma_{1}}{\sigma_{0}} x_{\phi}\); then system (4.4) becomes

$$ \textstyle\begin{cases} x_{\phi} ' ( t ) = g_{1} ( x,z ) + \sigma_{0} \phi, \\ z_{\phi} ' ( t ) = g_{2} ( x,z ), \end{cases} $$
(4.5)

where

$$ \textstyle\begin{cases} g_{1} ( x,z ) =r ( 1- \frac{1}{K} e^{x} ) - \frac{a_{1} e^{z} e^{\frac{\sigma_{1}}{\sigma_{0}} x} +c a_{2} ( e^{z} e^{\frac{\sigma_{1}}{\sigma_{0}} x} )^{\frac{b_{2}}{b_{1}}}}{1+ e^{x}} - \frac{\sigma_{0}^{2}}{2}, \\ g_{2} ( x,z ) =- r_{1} + \frac{b_{1} e^{x}}{1+ e^{x}} - \frac{\sigma_{1}^{2}}{2} + \frac{\sigma_{0} \sigma_{1}}{2} - \frac{\sigma_{1}}{\sigma_{0}} r+ \frac{r \sigma_{1}}{\sigma_{0}} \frac{1}{K} e^{x} + \frac{\sigma_{1}}{\sigma_{0}} \frac{a_{1} e^{z} e^{\frac{\sigma_{1}}{\sigma_{0}} x} +c a_{2} ( e^{z} e^{\frac{\sigma_{1}}{\sigma_{0}} x} )^{\frac{b_{2}}{b_{1}}}}{1+ e^{x}}. \end{cases} $$
(4.6)

Now it can be said that for any \(X_{0} \in R \) and \(\mathbf{X} \in R\), there exist a control function ϕ and \(T>0\) such that \((x_{\phi} ( 0 ), z_{\phi} (0))= (x_{0}, z_{0} )\), \((x_{\phi} ( T ), z_{\phi} (T))= (x_{T}, z_{T} )\).

We construct the function ϕ in the following way. First, we find a positive constant T and a differentiable function \(z_{\phi}:[0,T]\rightarrow\mathbb{R}_{+}\) such that \(z_{\phi} (0)= z_{0}\), \(z_{\phi} (T)= z_{T}\), \(z_{\phi} ' ( 0 ) =g_{2} ( x_{0}, z_{0} ) = z_{0}^{d}\), \(z_{\phi} ' ( T ) =g_{2} ( x_{T}, z_{T} ) = z_{T}^{d}\), and

$$ {z}_{\phi}' ({t} ) + r_{1} + \frac{\sigma_{1}^{2}}{2} - \frac{\sigma_{0} \sigma_{1}}{2} + \frac{\sigma_{1}}{\sigma_{0}} r>0\quad \mbox{for }t\in [ 0,T ]. $$
(4.7)

We split the construction of the function \(z_{\phi}\) into three intervals \([0, \varepsilon]\), \([\varepsilon, T - \varepsilon]\), and \([T - \varepsilon, T ]\), where \(0 < \varepsilon< T/2\). First, we can construct a \(C ^{2}\) function \(z_{\phi}:[0,\varepsilon]\rightarrow R\) such that

$$z_{\phi} (0)= z_{0},\qquad z_{\phi} ' ( 0 ) = z_{0}^{d},\qquad z_{\phi} ' ( \varepsilon ) =0 $$

and \(z_{\phi}\) satisfies (4.7) for \(t \in[0,\varepsilon]\). Similarly, we construct a \(C ^{2}\) function \(z_{\phi}:[T-\varepsilon,T]\rightarrow R\) such that

$$z_{\phi} (T)= z_{T},\qquad z_{\phi} ' ( T ) = z_{T}^{d},\qquad z_{\phi} ' ( T- \varepsilon ) =0. $$

Taking T large enough, we can extend the function

$$z_{\phi}: [0,\varepsilon]\cup[T-\varepsilon,T]\rightarrow R $$

to a \(C ^{2}\) function \(z_{\phi}\) defined on the whole interval \([0,T]\) such that \(z_{\phi}\) satisfies (4.7). Therefore, we can find a \(C ^{1}\) function \(x_{\phi}\) which satisfies the second equation of (4.5), and finally we can determine a continuous function ϕ from the first equation of (4.5). The proof of Lemma 4.2 is completed. □

Lemma 4.3

The semigroup \(\{ P(t) \}_{t\geq 0}\) is asymptotically stable or is sweeping with respect to compact sets.

Proof

By virtue of Lemma 4.1, it follows that \(\{ P(t) \}_{t\geq 0}\) is an integral Markov semigroup with a continuous kernel \(k(t,x,y, x_{0}, y_{0} )\) for \(t>0\). From Lemma 4.2, for every \(f\in\mathbb{D}\), we have

$$\int_{0}^{\infty} P ( t ) f\, dt>0\quad \mbox{a.e. on } \mathbb{R}^{2}, $$

where \(\mathbb{D}\) is defined in the Appendix. From Lemma A.1, it follows that the semigroup \(\{ P(t) \}_{t\geq 0} \) is asymptotically stable or is sweeping with respect to compact sets. □

Lemma 4.4

If \(\frac{b_{1}}{b_{2}} = \frac{\sigma_{1}}{ \sigma_{2}}\), \(\tilde{R}_{1} = \tilde{R}_{2} < \min_{i\neq1,2} \{ \tilde{R}_{i}, \frac{1}{r(1+K)} ( r- \frac{\sigma_{0}^{2}}{2} ),1 \}\), \(\sigma_{0} < \sigma_{1} < \sigma_{2}\), and \(\frac{b_{1} K}{r} ( r- \frac{\sigma_{0}^{2}}{2} ) - ( r_{1} + \frac{\sigma_{1}^{2}}{2} ) ( 1+K ) >0\) hold, then the semigroup \(\{ P(t) \}_{t\geq 0}\) is asymptotically stable.

Proof

In order to exclude sweeping, it is sufficient to construct a non-negative \(C ^{2} \)-function V and a closed set \(O\in\Sigma\) such that

$$\sup_{ ( u,v ) \in\mathbb{R}^{2} \setminus O} \mathcal{A}^{*} V ( u,v ) < 0, $$

where \(\mathcal{A}^{*} \) is the adjoint operator of the infinitesimal generator \(\mathcal{A}\) of the semigroup \(\{ P(t) \}_{t\geq 0}\), which is of the form

$$\mathcal{A}^{*} V= \frac{1}{2} \sigma_{0}^{2} \frac{\partial^{2} V}{ \partial x^{2}} + \sigma_{0} \sigma_{1} \frac{\partial^{2} V}{\partial x\, \partial y} + \frac{1}{2} \sigma_{1}^{2} \frac{\partial^{2} V}{ \partial y^{2}} + f_{1} \frac{\partial V}{\partial x} + f_{2} \frac{\partial V}{ \partial y}, $$

where \(f_{1} ( x,y )\) and \(f_{2} ( x,y )\) are defined below (4.3), and such a function V is called a Khasminskiĭ function [16].

Define a \(C ^{2}\)-function

$$\begin{aligned} V ( u, v_{1} ) =&M \biggl[ - \frac{b_{1} K}{r} u- ( 1+K ) v_{1} + \frac{a_{1} b_{1} K e^{v_{1}}}{r r_{1}} + \frac{b_{1} ( r_{2} + \frac{\sigma_{2}^{2}}{2} - \frac{ \sigma_{1}^{2}}{2} ) c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2}} \biggr] \\ &{}+ \frac{[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} ]^{1+\theta}}{1+\theta} = V_{1} ( u, v_{1} ) + V_{2} ( u, v_{1} ), \end{aligned}$$

where

$$\begin{aligned}& M= \frac{2}{\lambda} \max\biggl\{ 2,\sup_{ ( u, v_{1} ) \in R^{2}} \biggl\{ - \frac{r}{4K} e^{2 ( 1+\theta ) u} - \frac{m_{2}}{2 ( 1+K )^{1+\theta}} \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1}} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2}} \biggr]^{1+\theta} \\& \hphantom{M={}}{}+m_{1} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} \biggr\} \biggr\} , \\& \lambda= \biggl( r- \frac{\sigma_{0}^{2}}{2} \biggr) \frac{b_{1} K}{r} - \biggl( r_{1} + \frac{\sigma_{1}^{2}}{2} \biggr) ( 1+K ) >0, \\& m_{1} = \sup_{u\in R} \biggl\{ e^{u} ( r+ r_{1} \biggl( 1\wedge\frac{\sigma_{2}}{\sigma_{1}} \biggr) - \frac{r}{2K} e^{u} \biggr\} , \\& m_{2} = r_{1} \biggl( 1\wedge\frac{\sigma_{2}}{\sigma_{1}} \biggr) - \frac{\theta}{2} ( \sigma_{0} \vee\sigma_{1} \vee \sigma_{2} )^{2},\quad 0< \theta< \frac{2 r_{1} ( 1\wedge\frac{\sigma_{2}}{\sigma_{1}} )}{( \sigma_{0} \vee\sigma_{1} \vee\sigma_{2} )^{2}}, \\& \mathcal{A}^{*} V_{1} =M \biggl[ - \frac{b_{1} K}{r} (r- \frac{r}{K} e^{u} - \frac{a_{1} e^{v_{1}}}{1+ e^{u}} - \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{1+ e^{u}} - \frac{\sigma_{0}^{2}}{2} \biggl(1- \frac{1}{K} e^{u} \biggr)^{2} \\& \hphantom{\mathcal{A}^{*} V_{1} ={}}{}- ( 1+K ) \biggl( - r_{1} + \frac{b_{1} e^{u}}{1+ e^{u}} - \frac{\sigma_{1}^{2}}{2} \biggr) + \frac{a_{1} b_{1} K e^{v_{1}}}{r r_{1}} \biggl( - r_{1} + \frac{b_{1} e^{u}}{1+ e^{u}} \biggr) \\& \hphantom{\mathcal{A}^{*} V_{1} ={}}{}+\biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} - \frac{\sigma_{1}^{2}}{2} \biggr)c\bigl( e^{v_{1}} \bigr)^{\frac{b_{2}}{b_{1}}} \biggl( - r_{1} + \frac{b_{1} e^{u}}{1+ e^{u}} - \frac{\sigma_{1}^{2}}{2} + \frac{b_{2}}{b_{1}} \frac{\sigma_{1}^{2}}{2} \biggr) \biggr] \\& \hphantom{\mathcal{A}^{*} V_{1}}\leq M \biggl[ - \frac{b_{1} K}{r} \biggl(r- \frac{r}{K} e^{u} - a_{1} e^{v_{1}} - c a_{2} \bigl( e^{v_{1}} \bigr)^{\frac{b_{2}}{b_{1}}} - \frac{\sigma_{0}^{2}}{2} \biggr) \\& \hphantom{\mathcal{A}^{*} V_{1} ={}}{}- ( 1+K ) \biggl(- r_{1} + \frac{b_{1} e^{u}}{1+ e^{u}} - \frac{\sigma_{1}^{2}}{2} \biggr)+ \frac{a_{1} b_{1} K e^{v_{1}}}{r r_{1}} \bigl( - r_{1} + b_{1} e^{u} \bigr) \\& \hphantom{\mathcal{A}^{*} V_{1} ={}}{}+\biggl( r_{2} + \frac{\sigma_{2}^{2}}{ 2} - \frac{\sigma_{1}^{2}}{2} \biggr)c\bigl( e^{v_{1}} \bigr)^{\frac{b_{2}}{b_{1}}} \biggl(- r_{1}+ b_{1} e^{u} - \frac{\sigma_{1}^{2}}{2} + \frac{b_{2}}{b_{1}} \frac{\sigma_{1}^{2}}{2} \biggr)\biggr] \\& \hphantom{\mathcal{A}^{*} V_{1}}\leq M \biggl[ -\lambda+ e^{u} \biggl( \frac{a_{1} b_{1} K e^{v_{1}}}{r r_{1}} + b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr)c\bigl( e^{v_{1}} \bigr)^{\frac{b_{2}}{b_{1}}} \biggr)\biggr], \end{aligned}$$

and

$$\begin{aligned} \mathcal{A}^{*} V_{2} \leq&\biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} \\ &{}\times \biggl[ e^{u} \biggl( r- \frac{r}{K} e^{u} \biggr) - \biggl( \frac{r_{1} a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{\sigma_{2}}{\sigma_{1}} \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr) \biggr] \\ &{}+ \frac{\theta}{2} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta-1} \biggl( \sigma_{0} e^{u} + \frac{a_{1} \sigma_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr)^{2} \\ \leq&\biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} \biggl[ e^{u} \biggl( r+ r_{1} \biggl( 1\wedge\frac{\sigma_{2}}{ \sigma_{1}} \biggr) - \frac{r}{K} e^{u} \biggr) \\ &{}- r_{1} \biggl( 1\wedge \frac{\sigma_{2}}{\sigma_{1}} \biggr) \biggl( e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr) \biggr] \\ &{}+ \frac{\theta}{2} ( \sigma_{0} \vee\sigma_{1} \vee \sigma_{2} )^{2} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta+1} \\ \leq&- \biggl[ r_{1} \biggl( 1\wedge\frac{\sigma_{2}}{\sigma_{1}} \biggr) - \frac{\theta}{2} ( \sigma_{0} \vee\sigma_{1} \vee \sigma_{2} )^{2} \biggr] \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta+1} \\ &{}+ \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} \biggl[ m_{1} - \frac{r}{2K} e^{2u} \biggr] \\ \leq&- m_{2} \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta+1} \\ &{}+ m_{1} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} - \frac{r}{2K} e^{(2+\theta)u}, \end{aligned}$$

where \(m_{2} {:=} r_{1} ( 1\wedge\frac{\sigma_{2}}{\sigma_{1}} ) - \frac{\theta}{2} ( \sigma_{0} \vee\sigma_{1} \vee\sigma_{2} )^{2} >0\) according to the definition of θ.

Define a closed set

$$U_{\varepsilon} = \biggl\{ (u, v_{1} )\in\mathbb{R}^{2}: \vert u \vert \leq\log\frac{1}{\varepsilon}, \vert v_{1} \vert \leq\log\frac{1}{\varepsilon} \biggr\} , $$

where \(\varepsilon > 0\) is a sufficiently small number such that

$$\begin{aligned}& 0< \varepsilon< \frac{\lambda}{4 ( \frac{a_{1} b_{1}^{2} K}{r r_{1}} + b_{1} ( r_{2} + \frac{\sigma_{2}^{2}}{2} )c )}, \end{aligned}$$
(4.8)
$$\begin{aligned}& 0< \varepsilon< \frac{m_{2}}{2 ( 1+K )^{\theta+1} M ( \frac{b_{1}^{3} K}{r r_{1}} \vee\frac{b_{1} b_{2} ( r_{2} + \frac{\sigma_{2}^{2}}{2} )}{a_{2}} )}, \end{aligned}$$
(4.9)
$$\begin{aligned}& \frac{a_{1} b_{1}^{2} K\varepsilon}{r r_{1}} + b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr) c \varepsilon^{\frac{b_{2}}{b_{1}}} < \min \biggl\{ \frac{\lambda}{4}, \frac{r}{4KM} \biggr\} , \end{aligned}$$
(4.10)
$$\begin{aligned}& - M \lambda- \frac{r}{4KM} \varepsilon^{- ( 2+\theta )} + K_{1} \leq-1, \end{aligned}$$
(4.11)
$$\begin{aligned}& - M \lambda- \frac{m_{2}}{2 ( 1+K )} \biggl[ \frac{b_{2}}{b_{1} \varepsilon} + \frac{a_{2} c \varepsilon^{- \frac{b_{2}}{b_{1}}}}{b_{2}} \biggr]^{\theta+1} + K_{2} \leq-1. \end{aligned}$$
(4.12)

Denote

$$\begin{aligned}& D_{\varepsilon}^{1} = \bigl\{ ( u, v_{1} ) \in \mathbb{R}^{2}:-\infty< u< \log\varepsilon \bigr\} ,\qquad D_{\varepsilon}^{2} = \bigl\{ ( u, v_{1} ) \in \mathbb{R}^{2}:-\infty< v_{1} < \log\varepsilon \bigr\} , \\& D_{\varepsilon}^{3} = \biggl\{ ( u, v_{1} ) \in \mathbb{R}^{2}:u\geq\log\frac{1}{\varepsilon} \biggr\} ,\qquad D_{\varepsilon}^{4} = \biggl\{ ( u, v_{1} ) \in \mathbb{R}^{2}: v_{1} \geq\log\frac{1}{\varepsilon} \biggr\} , \end{aligned}$$

then \(\mathbb{R}^{2} \setminus U_{\varepsilon} = D_{\varepsilon}^{1} \cup D_{\varepsilon}^{2} \cup D_{\varepsilon}^{3} \cup D_{\varepsilon}^{4}\). Hence we consider four cases as follows.

Case 1. On \(D_{\varepsilon}^{1}\), using the inequality \(e^{v_{1}} \leq1+( e^{v_{1}} )^{\theta+1}\), one can derive that

$$\begin{aligned} \mathcal{A}^{*} V \leq&- \frac{M\lambda}{4} +M \biggl[ - \frac{\lambda}{ 4} + \varepsilon\biggl( \frac{a_{1} b_{1}^{2} K}{r r_{1}} + b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr)c \biggr)\biggr] - \frac{r}{4K} e^{ ( 2+\theta ) u} \\ &{}+ \biggl[ M \biggl( \frac{b_{1}^{3} K}{r r_{1}} \vee\frac{b_{1} b_{2} ( r_{2} + \frac{\sigma_{2}^{2}}{2} )}{a_{2}} \biggr) \varepsilon- \frac{m_{2}}{2 ( 1+K )^{1+\theta}} \biggr] \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1}} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2}} \biggr]^{\theta+1} \\ &{}+ \biggl[ - \frac{M\lambda}{2} + \sup_{ ( u, v_{1} ) \in R^{2}} \biggl\{ - \frac{r}{4K} e^{ ( 2+\theta ) u} - \frac{m_{2}}{2 ( 1+K )^{1+\theta}} \biggl[ \frac{a_{1} e^{v_{1}}}{ b_{1}} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2}} \biggr]^{1+\theta} \\ &{}+ m_{1} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} \biggr\} \biggr]. \end{aligned}$$

By the definition of M, (4.8), and (4.9), we have

$$\mathcal{A}^{*} V\leq- \frac{M\lambda}{4} - \frac{r}{4K} e^{ ( 2+\theta ) u} \leq- \frac{M\lambda}{4} \leq-1. $$

Case 2. On \(D_{\varepsilon}^{2}\), using the inequality \(e^{u} \leq1+ e^{ ( \theta+2 ) u}\), one can derive that

$$\begin{aligned} \mathcal{A}^{*} V \leq&- \frac{M\lambda}{4} +M \biggl[ - \frac{\lambda}{ 4} + \biggl( \frac{a_{1} b_{1}^{2} K}{r r_{1}} \varepsilon+ b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr) c \varepsilon^{\frac{b_{2}}{b_{1}}} \biggr) \biggr] \\ &{}+ \biggl[ - \frac{r}{4K} +M \biggl( \frac{a_{1} b_{1}^{2} K}{ r r_{1}} \varepsilon+ b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr) c \varepsilon^{\frac{b_{2}}{b_{1}}} \biggr) \biggr] e^{ ( 2+\theta ) u} \\ &{}- \frac{m_{2}}{2 ( 1+K )^{1+\theta}} \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1}} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2}} \biggr]^{\theta+1} \\ &{}+ \biggl[ - \frac{M\lambda}{2} + \sup_{ ( u, v_{1} ) \in R^{2}} \biggl\{ - \frac{r}{4K} e^{2 ( 1+\theta )} - \frac{m_{2}}{2 ( 1+K )^{1+\theta}} \biggl[ \frac{a_{1} e^{v_{1}}}{ b_{1}} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2}} \biggr]^{1+\theta} \\ &{}+ m_{1} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} \biggr\} \biggr]. \end{aligned}$$

By the definition of M and (4.10), we have

$$\mathcal{A}^{*} V\leq- \frac{M\lambda}{4} - \frac{m_{2}}{2 ( 1+K )^{1+\theta}} \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1}} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2}} \biggr]^{1+\theta} \leq- \frac{M\lambda}{4} \leq-1. $$

Case 3. On \(D_{\varepsilon}^{3}\),

$$\begin{aligned} \mathcal{A}^{*} V \leq&-M\lambda- \frac{r}{4K} e^{2 ( 1+\theta ) u} + \biggl\{ - \frac{r}{4K} e^{2 ( 1+\theta )} +M e^{u} \biggl( \frac{a_{1} b_{1}^{2} K}{r r_{1}} \varepsilon+ b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr) c \varepsilon^{\frac{b_{2}}{b_{1}}} \biggr) \\ &{}- m_{2} \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{1+\theta} + m_{1} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{ b_{2} ( 1+K )} \biggr]^{\theta} \biggr\} \\ \leq&-M\lambda- \frac{r}{4K} e^{2 ( 1+\theta )} + K_{1}, \end{aligned}$$

where

$$\begin{aligned} K_{1} =& \sup_{ ( u, v_{1} ) \in R^{2}} \biggl\{ - \frac{r}{4K} e^{2 ( 1+\theta )} +M e^{u} \biggl( \frac{a_{1} b_{1}^{2} K}{r r_{1}} \varepsilon+ b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr) c \varepsilon^{\frac{b_{2}}{b_{1}}} \biggr) \\ &{}- m_{2} \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{1+\theta} + m_{1} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{ b_{2} ( 1+K )} \biggr]^{\theta} \biggr\} . \end{aligned}$$

In view of (4.11), we get

$$\mathcal{A}^{*} V\leq-1. $$

Case 4. On \(D_{\varepsilon}^{4}\),

$$\begin{aligned} \mathcal{A}^{*} V \leq&-M\lambda- \frac{m_{2}}{2} \biggl[ \frac{a_{1} e^{v_{1}}}{ b_{1} ( 1+K )} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{1+\theta} \\ &{}+ \biggl\{ - \frac{m_{2}}{ 2} \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{1+\theta} \\ &{}+M e^{u} \biggl( \frac{a_{1} b_{1}^{2} K}{r r_{1}} \varepsilon+ b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr) c \varepsilon^{\frac{b_{2}}{b_{1}}} \biggr) \\ &{}+ m_{1} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} - \frac{r}{2K} e^{2 ( 1+\theta ) u} \biggr\} \\ \leq&-M\lambda- \frac{m_{2}}{2 ( 1+K )} \biggl[ \frac{a_{1}}{b_{1} \varepsilon} + \frac{a_{2} c \varepsilon^{\frac{b_{2}}{b_{1}}}}{b_{2}} \biggr]^{1+\theta} + K_{2}, \end{aligned}$$

where

$$\begin{aligned} K_{2} =& \sup_{ ( u, v_{1} ) \in R^{2}} \biggl\{ - \frac{m_{2}}{2} \biggl[ \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{a_{2} c( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{1+\theta} \\ &{}+M e^{u} \biggl( \frac{a_{1} b_{1}^{2} K}{r r_{1}} \varepsilon+ b_{1} \biggl( r_{2} + \frac{\sigma_{2}^{2}}{2} \biggr) c \varepsilon^{\frac{b_{2}}{b_{1}}} \biggr) \\ &{}+ m_{1} \biggl[ e^{u} + \frac{a_{1} e^{v_{1}}}{b_{1} ( 1+K )} + \frac{c a_{2} ( e^{v_{1}} )^{\frac{b_{2}}{b_{1}}}}{b_{2} ( 1+K )} \biggr]^{\theta} - \frac{r}{2K} e^{2 ( 1+\theta ) u} \biggr\} . \end{aligned}$$

According to (4.12), we have

$$\mathcal{A}^{*} V\leq-1. $$

In summary, we can deduce that

$$\sup_{ ( u, v_{1} ) \in\mathbb{R}^{2} \setminus U_{\varepsilon}} \mathcal{A}^{*} V ( u, v_{1} ) \leq-1, $$

which excludes sweeping; hence, by Lemma 4.3, the semigroup \(\{ P(t) \}_{t\geq 0}\) is asymptotically stable.

This completes the proof. □

Remark 4.1

If more than two predators have the same lowest stochastic break-even concentration, then without loss of generality we may assume that the first k (\(k \geq3\)) predators share the same lowest value \(\tilde{R}_{i}\), that is,

$$\tilde{R}_{1} = \tilde{R}_{2} =\cdots= \tilde{R}_{k} < \min_{i\neq1,2,\ldots,k} \biggl\{ \tilde{R}_{i}, \frac{1}{r ( 1+K )} \biggl( r- \frac{\sigma_{0}^{2}}{2} \biggr),1 \biggr\} . $$

We cannot prove whether they coexist or not, and we leave this as an open problem.

5 Simulation and discussion

In this article, we analyzed the competitive exclusion principle and the coexistence of a stochastic Holling II n-predator one-prey model. The stochastic break-even concentration of each predator determines the outcome of the competition. We find that the predator subject to the lower noise intensity may win the competition. We also show that two predators with the same lowest stochastic break-even concentration can coexist under some additional conditions.

We carry out numerical simulations to illustrate the main theoretical results by using the Milstein higher-order method [21]. Assume that there are two predators competing for one prey in the stochastic model (1.2) and its corresponding deterministic model (1.1).

Set

$$\begin{aligned}& {r}=0.5,\qquad r_{1} =0.15,\qquad r_{2} =0.1,\qquad a_{1} =0.15,\qquad a_{2} =0.16, \\& K=0.8,\qquad b_{1} =0.5,\qquad b_{2} =0.3, \end{aligned}$$

and take the initial value \((S ( 0 ), X_{1} (0),X_{2} (0))= ( 0.6,0.4,0.4 )\).
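Before discussing the figures, we record a minimal sketch of a Milstein discretization of system (1.2) with two predators (our own illustrative code; the step size, time horizon, random seed, and the clipping of tiny values caused by round-off are our choices and are not taken from the paper). All equations are driven by the same Brownian increment, as in (1.2).

import numpy as np

def milstein_simulation(r, K, a, b, r_d, sigma0, sigma, S0, X0,
                        T=200.0, dt=0.001, seed=0):
    """Milstein scheme for the stochastic model (1.2).

    a, b, r_d, sigma, X0 are arrays over the predators; every equation is
    driven by the same Brownian motion B, as in system (1.2).
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    S = np.empty(n_steps + 1)
    X = np.empty((n_steps + 1, len(X0)))
    S[0], X[0] = S0, X0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))          # common noise increment
        s, x = S[k], X[k]
        # drift terms of system (1.2)
        drift_S = s * r * (1 - s / K) - np.sum(a * s / (1 + s) * x)
        drift_X = -r_d * x + b * s / (1 + s) * x
        # Euler part plus the Milstein correction 0.5*sigma^2*state*(dW^2 - dt)
        S[k + 1] = s + drift_S * dt + sigma0 * s * dW \
            + 0.5 * sigma0 ** 2 * s * (dW ** 2 - dt)
        X[k + 1] = x + drift_X * dt + sigma * x * dW \
            + 0.5 * sigma ** 2 * x * (dW ** 2 - dt)
        S[k + 1] = max(S[k + 1], 1e-12)            # guard against round-off
        X[k + 1] = np.maximum(X[k + 1], 1e-12)
    return S, X

# Parameters of Sect. 5 with the noise intensities used for Fig. 3.
S_path, X_path = milstein_simulation(
    r=0.5, K=0.8, a=np.array([0.15, 0.16]), b=np.array([0.5, 0.3]),
    r_d=np.array([0.15, 0.1]), sigma0=0.21, sigma=np.array([0.2, 0.5]),
    S0=0.6, X0=np.array([0.4, 0.4]))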

In Fig. 1, we find that the predator \(X_{1}\) survives and the predator \(X_{2}\) will go to extinction in the deterministic model.

Figure 1

Simulations of the paths \(S(t)\), \(X_{1}(t)\), \(X_{2}(t)\) for the corresponding deterministic system (1.1) with the initial value \((S(0), X_{1}(0), X_{2}(0)) = (0.6, 0.4, 0.4)\)

In Fig. 2, we let \(\sigma_{0} =0.21\), \(\sigma_{1} =0.8\), \(\sigma_{2} =0.6\). We can compute that \(\tilde{R}_{1} \approx1.175>1\) and \(\tilde{R}_{2} \approx1.06>1\). According to Theorem 3.1(i), the two predators will go to extinction eventually. The result is supported in Fig. 2.

Figure 2

System (1.2) with \(\sigma_{0} =0.21\), \(\sigma _{1} =0.8\), \(\sigma_{2} =0.6\)

In Fig. 3, we choose \(\sigma_{0} =0.21\), \(\sigma_{1} =0.2\), \(\sigma_{2} =0.5\). We can compute that \(\frac{1}{r ( 1+K )} ( r- \frac{\sigma_{0}^{2}}{2} ) \approx0.5065\), \(\tilde{R}_{1} \approx0.425<0.5065< \tilde{R}_{2} \approx0.703<1\). According to Theorem 3.1(ii) and (iii), the predator \(X _{1}\) will survive and the predator \(X _{2}\) goes to extinction eventually. The result is supported in Fig. 3.

Figure 3

System (1.2) with \(\sigma_{0} =0.21\), \(\sigma _{1} =0.2\), \(\sigma_{2} =0.5\)

In Fig. 4, we choose \(\sigma_{0} =0.21\), \(\sigma_{1} =0.5\), \(\sigma_{2} =0.2\), \(\frac{1}{r ( 1+K )} ( r- \frac{\sigma_{0}^{2}}{2} ) \approx0.5065\), \(\tilde{R}_{2} \approx0.375<0.5065< \tilde{R}_{1} \approx0.6875<1\). According to Theorem 3.1(ii) and (iii), the predator \(X _{2}\) will survive and the predator \(X _{1}\) goes to extinction eventually. The result is supported in Fig. 4.

Figure 4

System (1.2) with \(\sigma_{0} =0.21\), \(\sigma _{1} =0.5\), \(\sigma_{2} =0.2\)

Comparing Fig. 2, Fig. 3, and Fig. 4, we find that the noise intensities imposed on the predators may alter the destiny of the competing predators.

In Fig. 5, we choose \(\sigma_{0} =0.5\), \(\sigma_{1} =0.4\), \(\sigma_{2} =0.5\), and change r to \(r =0.1\). We can compute that \(r- \frac{\sigma_{0}^{2}}{ 2} <0\). According to Theorem 3.1(iv), the prey and the two predators will all go to extinction eventually. That is, if the noise intensity of the prey is large enough, all species will go to extinction. The result is supported in Fig. 5.

Figure 5

System (1.2) with \(\sigma_{0} =0.5\), \(\sigma _{1} =0.4\), \(\sigma_{2} =0.5\)

In Fig. 6, we choose \(\sigma_{0} =0.02\), \(\sigma_{1} =0.1\), \(\sigma_{2} =0.2\), and change \({b}_{1}\) to \({b}_{1} = 0.3875\) and \({b}_{2}\) to \({b}_{2} =0.775\). We can compute that \(\frac{b_{1} K}{r} ( r- \frac{\sigma_{0}^{2}}{2} ) - ( r_{1} + \frac{\sigma_{1}^{2}}{2} ) ( 1+K ) \approx0.2018>0\). According to Theorem 4.1, the two predators can coexist. Figure 6 confirms it.

Figure 6

System (1.2) with \(\sigma_{0} =0.02\), \(\sigma _{1} =0.1\), \(\sigma_{2} =0.2\)

References

  1. Aziz-Alaoui, M.A., Daher Okiye, M.: Boundedness and global stability for a predator–prey model with modified Leslie–Gower and Holling-type II schemes. Appl. Math. Lett. 16, 1069–1075 (2003)


  2. Hsu, S.-B., Huang, T.-W., Kuang, Y.: Global dynamics of a predator–prey model with Hassell–Varley type functional response. Discrete Contin. Dyn. Syst., Ser. B 10, 857–871 (2008)


  3. Xue, Y., Wang, X.: Stability and local Hopf bifurcation for a predator–prey model with delay. Discrete Dyn. Nat. Soc. 8, 1258–1274 (2012)


  4. Nindjin, A.F., Aziz-Alaoui, M.A., Cadivel, M.: Analysis of a predator–prey model with modified Leslie–Gower and Holling-type II schemes with time delay. Nonlinear Anal., Real World Appl. 7, 1104–1118 (2006)


  5. Xu, R., Chaplain, M.A.: Persistence and global stability in a delayed predator–prey system with Michaelis–Menten type function response. Appl. Math. Comput. 130, 441–455 (2002)


  6. Xu, C.Q., Yuan, S.L., Zhang, T.H.: Average break-even concentration in a simple chemostat model with telegraph noise. Nonlinear Anal. Hybrid Syst. 29, 373–382 (2018)


  7. Zheng, W., Sugie, J.: A necessary and sufficient condition for global asymptotic stability of time-varying Lotka–Volterra predator–prey systems. Nonlinear Anal., Theory Methods Appl. 127, 128–142 (2015)


  8. Morita, Y., Tachibana, K.: An entire solution to the Lotka–Volterra competition-diffusion equations. SIAM J. Math. Anal. 40, 2217–2240 (2009)

    Article  MathSciNet  Google Scholar 

  9. Liu, M., Bai, C.: Optimal harvesting policy for a stochastic predator–prey model. Appl. Math. Lett. 34, 22–26 (2014)

    Article  MathSciNet  Google Scholar 

  10. Xu, C.Q., Yuan, S.L., Zhang, T.H.: Sensitivity analysis and feedback control of noise-induced extinction for competition chemostat model with mutualism. Physica A 505, 891–902 (2018)

    Article  MathSciNet  Google Scholar 

  11. Xiao, D., Ruan, S.: Global dynamics of a ratio-dependent predator–prey system. J. Math. Biol. 43, 268–290 (2001)

    Article  MathSciNet  Google Scholar 

  12. Agiza, N.A., Elabbasy, E.M., El-Metwally, H., Elsadany, A.A.: Chaotic dynamics of a discrete prey–predator model with Holling type II. Nonlinear Anal., Real World Appl. 10, 116–129 (2009)

    Article  MathSciNet  Google Scholar 

  13. Jiang, G., Lu, Q., Qian, L.: Complex dynamics of a Holling type II prey–predator system with state feedback control. Chaos Solitons Fractals 31, 448–461 (2007)

    Article  MathSciNet  Google Scholar 

  14. Llibre, J., Xiao, D.: Global dynamics of a Lotka–Volterra model with two predators competing for one prey. SIAM J. Appl. Math. 74, 434–453 (2014)

    Article  MathSciNet  Google Scholar 

  15. Ji, C.Y., Jiang, D.Q., Shi, N.Z.: Analysis of a predator–prey model with modified Leslie–Gower and Holling-type II schemes with stochastic perturbation. J. Math. Anal. Appl. 359, 482–498 (2009)

    Article  MathSciNet  Google Scholar 

  16. Xu, C.Q., Yuan, S.L.: Competition in the chemostat: a stochastic multi-species model and its asymptotic behavior. Math. Biosci. 280, 1–9 (2016)

    Article  MathSciNet  Google Scholar 

  17. Xiao, Y., Chen, L.: Analysis of a three species eco-epidemiological model. J. Math. Anal. Appl. 258, 733–754 (2001)

    Article  MathSciNet  Google Scholar 

  18. Ji, C.Y., Jiang, D.Q., Shi, N.Z.: The behaivior of an SIR epidemic model with stochastic perturbation. Stoch. Anal. Appl. 30, 755–773 (2012)

    Article  MathSciNet  Google Scholar 

  19. Allen, L.J., Kirupaharan, N.: Asymptotic dynamics of deterministic and stochastic epidemic models with multiple pathogens. Int. J. Numer. Anal. Model. 2(3), 329C344 (2005)

    MathSciNet  MATH  Google Scholar 

  20. Liu, M., Wang, K., Wu, Q.: Survival analysis of stochastic competitive models in a polluted environment and stochastic competitive exclusion princle. Bull. Math. Biol. 73, 1969–2012 (2001)

    Article  Google Scholar 

  21. Bell, D.R.: The Malliavin Calculus. Dover, New York (2006)

    MATH  Google Scholar 

  22. Arous, G.B., Léandre, R.: Décroissance exponentielle du noyau de la chaleur sur la diagonale (II). Probab. Theory Relat. Fields 90(3), 377–402 (1991)

    Article  Google Scholar 

  23. Stroock, D.W., Varadhan, S.R.S.: On the support of diffusion processes with appplications to the strong maximum principle. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Univ. California, Berkeley, Calif., 1970/1971, vol. 3, pp. 333–359 (1972)

    Google Scholar 

  24. Aida, S., Kusuoka, S., Strook, D.: On the support of Wiener functionals. In: Elworthy, K.D., Ikeda, N. (eds.) Asymptotic Problems in Probabilty Theory: Wiener Functionals and Asymptotic. Pitman Research Notes in Mathematical Series, Longman Scient. Tech., vol. 284, pp. 3–34 (1993)

    Google Scholar 

  25. Rudnicki, R.: Long-time behaviour of a stochastic prey–predator model. Stoch. Process. Appl. 108, 93–107 (2003)

    Article  MathSciNet  Google Scholar 

  26. Zhang, Q., Jiang, D.: The coexistence of a stochastic Lotka–Volterra model with two predators competing for one prey. Appl. Math. Comput. 269, 288–300 (2015)

    MathSciNet  Google Scholar 

  27. Higham, D.J.: An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 43, 525–546 (2001)

    Article  MathSciNet  Google Scholar 


Acknowledgements

We would like to express our sincere thanks to Professor Sanling Yuan of the University of Shanghai for Science and Technology for his fruitful help and discussions. This work was partially supported by the Natural Science Foundation of Guangdong Province (no. 2016A030310019 and no. 2016A030307042).

Availability of data and materials

The data used in the manuscript are available, and the parameter ranges are chosen in line with those used in many published papers. The simulations were carried out with MATLAB.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

CZ conceived the study and designed and drafted the manuscript. YL helped to revise the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Chunjuan Zhu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

Let the triple \((X, \Sigma, m)\) be a σ-finite measure space. Denote by D the subset of \(L^{1} = L^{1} ( X,\Sigma, m )\) consisting of all densities, i.e.,

$${D}= \bigl\{ {f}\in L^{1}:f\geq0, \Vert f \Vert =1 \bigr\} . $$

A linear mapping \(P: L^{1} \rightarrow L^{1}\) is called a Markov operator if \(P(D) \subset D\). The Markov operator P is called an integral or kernel operator if there exists a measurable function \(k: {X}\times {X}\rightarrow[0,\infty)\) such that

$$ \int_{X} k ( x,y ) m ( dx ) =1 $$
(A.1)

for all \(y \in{X}\) and

$$Pf ( x ) = \int_{X} k ( x,y ) f(y)m ( dy ) $$

for every density f.
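
As a standard illustration (not part of the original text), take \(X= \mathbb{R}\) with Lebesgue measure and fix \(t>0\). The Gaussian heat kernel

$$k ( x,y ) = \frac{1}{\sqrt{4\pi t}} \exp \biggl( - \frac{ ( x-y )^{2}}{4t} \biggr) $$

satisfies (A.1), and \(Pf ( x ) = \int_{\mathbb{R}} k ( x,y ) f(y)\,dy\) maps densities to densities, so P is an integral Markov operator.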

A family \(\{ P(t) \}_{t\geq 0}\) of Markov operators is called a Markov semigroup if it satisfies the following conditions:

  (i) \(P(0)=\mathrm{Id}\);

  (ii) \(P(t+s)=P(t)P(s)\) for all \(s, t \geq0\);

  (iii) for each \(f\in L^{1}\), the function \(t \mapsto P (t) f\) is continuous with respect to the \(L^{1}\) norm.

A Markov semigroup \(\{ P(t) \}_{t\geq 0}\) is called integral if, for each \(t > 0\), the operator \(P (t)\) is an integral Markov operator.

A density \(f_{*}\) is called invariant if \(P ( t ) f_{*} = f_{*}\) for each \(t>0\). The Markov semigroup \(\{ P(t) \}_{t\geq 0}\) is called asymptotically stable if there is an invariant density \(f_{*}\) such that

$$\lim_{t\rightarrow\infty} \bigl\Vert P ( t ) f- f_{*} \bigr\Vert =0\quad \mbox{for every } f\in D. $$

A Markov semigroup \(\{ P(t) \}_{t\geq 0}\) is called sweeping with respect to a set \(A\in\Sigma\) if, for every \(f\in D\),

$$\lim_{t\rightarrow\infty} \int_{A} P ( t ) f(x)\,m ( dx ) =0. $$

We need the following result concerning asymptotic stability and sweeping, which can be found in [25].

Lemma A.1

Let X be a metric space and Σ be the σ-algebra of Borel sets. Let \(\{ P(t) \}_{t\geq 0}\) be an integral Markov semigroup with a continuous kernel \(k ( t,x,y )\) for \(t > 0\), which satisfies (A.1) for all \(y \in{X}\). We assume that for every \(f\in D\) we have

$$\int_{0}^{\infty} P ( t ) f\, dt>0 \quad \textit{a.e.} $$

Then this semigroup is asymptotically stable or is sweeping with respect to compact sets.

The property that a Markov semigroup \(\{ P(t) \}_{t\geq 0}\) is asymptotically stable or sweeping for a sufficiently large family of sets (e.g., for all compact sets) is called the Foguel alternative.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhu, C., Li, Y. Competition and coexistence of a stochastic Holling II n-predator one-prey model. Adv Differ Equ 2018, 343 (2018). https://doi.org/10.1186/s13662-018-1790-9

