

Dissipative control of a three-species food chain stochastic system with a hidden Markov chain

Abstract

This paper focuses on a three-species food chain system which is formulated as stochastic differential equations with regime switching represented by a hidden Markov chain. First, using the Wonham filter, we estimate the hidden Markov chain from observations of the Markov chain in Gaussian white noise. Then two kinds of special dissipative control strategies are proposed for the given model. That is, under \(H_{\infty}\) control and passive control, sufficient conditions for global asymptotic stability are established, respectively. Finally, numerical examples are given to illustrate the effectiveness of the theoretical results.

1 Introduction

The dynamic relationship between predator and prey has been studied considerably in ecology and mathematical ecology. Because of its widespread occurrence and importance, there is an extensive literature concerned with three-species predator-prey systems (see, e.g., [1-6]). In [2], Freedman et al. discussed the following three-species food chain model:

$$ \textstyle\begin{cases} \dot{x}_{1}(t) = x_{1}(t) [a_{10}-a_{11}x_{1}(t)-a_{12}x_{2}(t) ], \\ \dot{x}_{2}(t) = x_{2}(t) [-a_{20}+a_{21}x_{1}(t)-a_{23}x_{3}(t) ], \\ \dot{x}_{3}(t) = x_{3}(t) [-a_{30}+a_{32}x_{2}(t) ], \end{cases} $$
(1.1)

where \(x_{1}(t),x_{2}(t),x_{3}(t)\) denote the densities of the prey, the predator and the top-predator population at time t, respectively. The parameters are all positive, and \(a_{10},a_{20}\) and \(a_{30}\) are the intrinsic growth rate of the prey \(x_{1}(t)\), the death rate of the predator \(x_{2}(t)\), and the death rate of the top-predator \(x_{3}(t)\), respectively. The coefficient \(a_{11}\) denotes the intra-specific competition of species \(x_{1}(t)\), and \(a_{12},a_{23}\) are the rates of consumption; \(a_{21},a_{32}\) measure the contribution of the victim to the growth of the consumer.
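To fix ideas, the deterministic model (1.1) can be integrated numerically. The following minimal Python sketch uses scipy's solve_ivp; the coefficient values are not prescribed by the model and are borrowed from regime 1 of Example 5.1 below purely for illustration.

```python
from scipy.integrate import solve_ivp

# Coefficients of model (1.1); values borrowed from regime 1 of Example 5.1, for illustration only
a10, a11, a12 = 4.0, 1.0, 2.0
a20, a21, a23 = 1.0, 1.0, 2.0
a30, a32 = 1.0, 1.0

def food_chain(t, x):
    x1, x2, x3 = x
    return [x1 * (a10 - a11 * x1 - a12 * x2),    # prey
            x2 * (-a20 + a21 * x1 - a23 * x3),   # predator
            x3 * (-a30 + a32 * x2)]              # top predator

sol = solve_ivp(food_chain, (0.0, 50.0), [3.0, 4.0, 5.0], max_step=0.01)
print(sol.y[:, -1])   # densities at the final time
```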

Recently, model (1.1) has been studied extensively. For example, Zhou et al. [4] investigated the existence and global stability of the positive periodic solutions of delayed discrete food chains with omnivory. Krikorian [5] considered the Volterra predator-prey model in the three-species case and proved the global properties of its solution. Hsu et al. [6] considered a three-species Lotka-Volterra food web model with omnivory, which is defined as feeding on more than one trophic level. In addition, population systems are inevitably subject to environmental noise in the natural world, and such noise takes various forms. In fact, many papers focus on population systems perturbed by white noise; see [7-10]. In particular, Mao [7] showed that different structures of white noise may have different effects on the population systems, and Mao et al. [8] revealed that environmental noise can suppress a potential population explosion. Different from the existing literature, we assume that all system parameters are disturbed by white noise, so that the parameters of equation (1.1) become

$$\begin{aligned} & a_{10}\rightarrow a_{10}+a_{10} \sigma_{1}\dot{\omega}_{1}(t),\qquad a_{11}\rightarrow a_{11}+a_{11}\sigma_{1}\dot{\omega}_{1}(t),\qquad a_{12}\rightarrow a_{12}+a_{12} \sigma_{1}\dot{\omega}_{1}(t), \\ &a_{20}\rightarrow a_{20}+a_{20} \sigma_{2}\dot{\omega}_{2}(t),\qquad a_{21}\rightarrow a_{21}+a_{21}\sigma_{2}\dot{\omega}_{2}(t),\qquad a_{23}\rightarrow a_{23}+a_{23}\sigma_{2} \dot{\omega}_{2}(t), \\ &a_{30}\rightarrow a_{30}+a_{30} \sigma_{3}\dot{\omega}_{3}(t),\qquad a_{32}\rightarrow a_{32}+a_{32}\sigma_{3}\dot{\omega}_{3}(t), \end{aligned}$$

where \(\dot{\omega}_{i}(t)\) is the white noise, and \(\sigma_{i}\) is a positive constant representing the intensity of the white noise. Then the corresponding random version of equation (1.1) takes the following form:

$$ \textstyle\begin{cases} dx_{1}(t) = x_{1}(t) [a_{10}-a_{11}x_{1}(t)-a_{12}x_{2}(t) ] [dt+\sigma_{1}\,d\omega _{1}(t) ], \\ dx_{2}(t) = x_{2}(t) [-a_{20}+a_{21}x_{1}(t)-a_{23}x_{3}(t) ] [dt+\sigma_{2}\,d\omega _{2}(t) ], \\ dx_{3}(t) = x_{3}(t) [-a_{30}+a_{32}x_{2}(t) ] [dt+\sigma_{3}\,d\omega_{3}(t) ], \end{cases} $$
(1.2)

where \(\omega_{i}(t)\ (i=1,2,3)\) are mutually independent standard Brownian motions with \(\omega_{i}(0)=0\). As shown in [7], if the noise intensity is sufficiently large, the population may become extinct with probability one. In this paper, we assume that the noise is relatively small.

In recent years, stochastic population systems under regime switching have received much attention [11-15]. In order to describe such sudden shifts between different regimes, we introduce a Markov chain into the underlying three-species food chain stochastic model (1.2). Let \(\alpha(t)\) be a right-continuous Markov chain on a finite state space \(S=\{1,2,\ldots,m\}\). The population system under regime switching can therefore be described by the following model:

$$ \textstyle\begin{cases} dx_{1}(t) =x_{1}(t) [a_{10} (\alpha(t) )-a_{11} (\alpha (t) )x_{1}(t)-a_{12} (\alpha(t) )x_{2}(t) ] \\ \phantom{dx_{1}(t)=}{}\times [dt+\sigma_{1} (\alpha(t) )\,d\omega _{1}(t) ], \\ dx_{2}(t) =x_{2}(t) [-a_{20} (\alpha(t) )+a_{21} (\alpha (t) )x_{1}(t)-a_{23} (\alpha(t) )x_{3}(t) ]\\ \phantom{dx_{1}(t)=}{}\times [dt+\sigma_{2} ( \alpha(t) )\,d\omega _{2}(t) ], \\ dx_{3}(t) =x_{3}(t) [-a_{30} (\alpha(t) )+a_{32} (\alpha(t) )x_{2}(t) ] [dt+ \sigma_{3} (\alpha (t) )\,d\omega_{3}(t) ]. \end{cases} $$
(1.3)

We assume that the Markov chain \(\alpha(t)\) is independent of the Brownian motions \(\omega_{i}(t)\). In much of the literature, the Markov chain is observable. However, in practical problems, the Markov chain \(\alpha(t)\) is unobservable; even in the case of two regime environments, it may not be possible to identify whether the environment is the first or the second one. Therefore, it is necessary to consider a hidden Markov chain. In the real world, we cannot see \(\alpha(t)\) directly but can only obtain a noise-corrupted (\(\alpha(t)\) plus noise) observation. Motivated by the studies of Bercu [14] and Tran [15], we assume that the Markov chain is unobservable.

On the other hand, few authors apply dissipative controls to explain biological phenomena in the field of population systems. The dissipative theory of dynamical systems was introduced by Willems [16, 17] and has been of particular interest to researchers in the areas of physics, system theory, and control engineering. As two special cases of dissipative control, passive control [18, 19] and \(H_{\infty}\) control [20, 21] have been widely used in these systems. Therefore, in this paper we use dissipative controls to study the dynamical behavior of a three-species food chain model. Specifically, in order to balance the ecosystem, human beings need to manage and control the populations; thus we take advantage of passive control and \(H_{\infty}\) control to study the persistence of a three-species food chain model.

Motivated by the above discussions, in this paper we investigate the global asymptotic stability of equation (1.3) under \(H_{\infty}\) control and passive control. For such partially observable systems, it is essential to convert them into completely observed ones, which can be done by using a Wonham filter [22-24]. We give only a sketch of the Wonham filter in Section 2. In contrast to the existing results, the new contributions of this article are summarized as follows:

  (i) We use Wonham’s filter to build a stochastic three-species food chain system when the Markov chain is only observable in white noise.

  (ii) We study the global asymptotic stability of the three-species food chain model (1.3) under \(H_{\infty}\) control.

  (iii) We prove the persistence of the three-species food chain model (1.3) under passive control.

In order to obtain nice dynamic properties of equation (1.3), we arrange the content as follows: In Section 2, we give some preliminaries, in which Wonham’s filter is introduced and the partially observable models are converted into completely observed ones. Then in Section 3, we show the global asymptotic stability of the given model under \(H_{\infty}\) control. In Section 4, we consider the global asymptotic stability of the given model under passive control, and numerical examples are provided in Section 5. Finally, the paper is concluded with some further remarks.

2 Preliminaries

In this section, we introduce notations and some results which are necessary for obtaining the main results in the paper. Let \(\alpha(t)\) denote a finite state Markov chain taking values in \(S=\{1,2,\ldots,m\} \) with the generator \(Q =(q_{ij})\in R^{m\times m}\). \(\text{1}_{E}\) denotes the indicator function of the event E. Assume that both the standard Brownian motion \(\omega_{i}\) and the Markov chain \(\alpha(t)\) are defined on a complete filtered probability space \((\Omega, \mathscr {F},P)\) with an associated non-decreasing family of σ-algebras \(\{\mathscr {F}_{t}\}\). Throughout the paper we need the following notation:

$$\begin{aligned} & R_{+}^{3}:= \bigl\{ x=(x_{1},x_{2},x_{3})':x_{i}>0,i=1,2,3 \bigr\} , \\ & p_{k}(t):=\text{1}_{\{\alpha(t)=k\}},\quad k=1,2,\ldots,m, \\ & p(t):= \bigl(p_{1}(t),\ldots,p_{m}(t) \bigr)' \in R^{m}, \\ &\mathscr {F}_{t}^{y}:=\sigma \bigl\{ y(s),0\leq s\leq t \bigr\} , \\ &\varphi_{k}(t):=P \bigl(\alpha(t)=k \bigl\vert \mathscr {F}_{t}^{y} \bigr)=E \bigl[p_{k}(t) \bigr\vert \mathscr {F}_{t}^{y} \bigr],\quad k=1,\ldots,m, \\ &\varphi(t):= \bigl(\varphi_{1}(t),\ldots,\varphi_{m}(t) \bigr)'\in R^{m}, \\ & S_{m}:= \Biggl\{ \varphi=(\varphi_{1},\ldots, \varphi_{m})'\in R^{m}: \varphi_{k} \geq0, \sum_{k=1}^{m} \varphi_{k}=1 \Biggr\} , \\ &\bar{f}(\varphi)=\sum_{k=1}^{m} f(k) \varphi_{k},\quad \varphi\in S_{m}. \end{aligned}$$

Next we recall some results on Wonham’s filter. As suggested in [25], the Markov chain \(\alpha(t)\) is observed through the following differential equation. That is,

$$ dy(t)=f \bigl(\alpha(t) \bigr)\,dt+\beta(t)\,dB(t),\quad y(0)=0, $$
(2.1)

where \(f:S\mapsto R\) is a real-valued function, \(\beta (t):[0,\infty)\mapsto R\) is a continuously differentiable function satisfying \(\inf_{t\geq0}\beta(t)>0\), and \(B(t)\) is a standard Brownian motion independent of \(\omega_{i}\). In (2.1), the Markov chain can only be observed in Gaussian white noise. It has been proved in [22] that the posterior probability \(\varphi(\cdot)\) satisfies the following stochastic differential equations:

$$\begin{aligned} d\varphi_{j}(t) ={}& \Biggl[\sum _{k=1}^{m}q_{kj} \varphi_{k}(t)- \beta ^{-2}(t) \bigl(f(j)-\bar{f} \bigl( \varphi(t) \bigr) \bigr)\bar{f} \bigl(\varphi(t) \bigr)\varphi_{j}(t) \Biggr]\,dt \\ & {}+\beta^{-2}(t) \bigl(f(j)-\bar{f} \bigl(\varphi(t) \bigr) \bigr) \varphi_{j}(t)\,dy(t),\quad j=1,\ldots,m, \end{aligned}$$
(2.2)

where the initial distribution of \(\alpha(t)\) is \(\varphi^{0}=(\varphi _{1}(0),\ldots,\varphi_{m}(0))\). Introduce the one-dimensional innovation process \(d\bar{\omega}(t)=\beta^{-1}(t)(dy(t)-\bar {f}(\varphi(t))\,dt), \bar{\omega}(0)=0\); then equation (2.2) can be rewritten as

$$ d\varphi_{j}(t)=\sum_{k=1}^{m}q_{kj} \varphi_{k}(t)\,dt+\beta^{-1}(t) \bigl(f(j)-\bar {f} \bigl( \varphi(t) \bigr) \bigr)\varphi_{j}(t)\,d\bar{\omega}(t),\quad j=1, \ldots,m. $$

The above equation is equivalent to

$$ d\varphi(t)=Q'\varphi(t)\,dt+\beta^{-1}(t)C(t) \varphi(t)\,d\bar{\omega}(t), $$
(2.3)

where \(C(t)=\operatorname{diag}(f(1),\ldots,f(m))-\bar{f}(\varphi(t))I_{m} \) and \(I_{m}\) is the \(m\times m\) identity matrix.
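For illustration, the observation model (2.1) can be simulated by stepping a continuous-time Markov chain with its generator and accumulating the noisy observation y(t). The sketch below is a first-order (Euler-type) simulation; the two-state generator, the function f and the constant noise intensity β are assumptions chosen to match Example 5.1 below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state chain and observation model (2.1); generator, f and beta are the values
# assumed in Example 5.1 below and are used here purely for illustration
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])   # generator (q_ij)
f = np.array([-1.0, 1.0])                  # f(1), f(2)
beta = 2.0                                 # constant observation noise intensity beta(t)

eps, n_steps = 0.005, 10_000
alpha, y = 0, 0.0                          # current state (0-based index) and observation
path_alpha, path_y = [alpha], [y]

for _ in range(n_steps):
    # first-order approximation: jump to the other state with probability q_{alpha j} * eps
    if rng.random() < Q[alpha, 1 - alpha] * eps:
        alpha = 1 - alpha
    # noisy observation increment, equation (2.1)
    y += f[alpha] * eps + beta * np.sqrt(eps) * rng.standard_normal()
    path_alpha.append(alpha)
    path_y.append(y)
```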

In addition, it should be noticed that equation (1.3) can be written as the following form:

$$ \textstyle\begin{cases} dx_{1} =x_{1} \sum_{k=1}^{m}p_{k}(t) [ (a_{10}(k)-a_{11}(k)x_{1}-a_{12}(k)x_{2} ) (dt+\sigma_{1}(k)\,d\omega _{1}(t) ) ], \\ dx_{2} =x_{2}\sum_{k=1}^{m}p_{k}(t) [ (-a_{20}(k)+a_{21}(k)x_{1}-a_{23}(k)x_{3} ) (dt+\sigma_{2}(k)\,d\omega _{2}(t) ) ], \\ dx_{3} =x_{3}\sum_{k=1}^{m}p_{k}(t) [ (-a_{30}(k)+a_{32}(k)x_{2} ) (dt+\sigma_{3}(k)\,d\omega_{3}(t) ) ]. \end{cases} $$
(2.4)

The solution of equation (2.3) is the well-known Wonham filter \(\varphi(t)\), which is an estimate of the hidden state \(p(t)\). Replacing \(p(t)\) with \(\varphi(t)\) in equation (2.4), we get

$$ \textstyle\begin{cases} dx_{1} =x_{1} \sum_{k=1}^{m}\varphi_{k}(t) [ (a_{10}(k)-a_{11}(k)x_{1}-a_{12}(k)x_{2} ) (dt+\sigma_{1}(k)\,d\omega _{1}(t) ) ], \\ dx_{2} =x_{2}\sum_{k=1}^{m} \varphi_{k}(t) [ (-a_{20}(k)+a_{21}(k)x_{1}-a_{23}(k)x_{3} ) (dt+\sigma_{2}(k)\,d\omega _{2}(t) ) ], \\ dx_{3} =x_{3}\sum_{k=1}^{m} \varphi_{k}(t) [ (-a_{30}(k)+a_{32}(k)x_{2} ) (dt+\sigma_{3}(k)\,d\omega_{3}(t) ) ]. \end{cases} $$
(2.5)

Hence, equations (2.3) and (2.5) are merged into a completely observable stochastic three-species food chain system. For convenience, we further express it in the following matrix form:

$$ \textstyle\begin{cases} dx(t) =\operatorname{diag} (x(t) ) \sum_{k=1}^{m}\varphi_{k}(t)A_{k}(x)\,dt\\ \phantom{dx(t)=}{}+ \operatorname {diag} (x(t) )\sum_{k=1}^{m} \varphi_{k}(t) [B_{k}(x) \varXi(k) ]\,d\omega(t), \\ d\varphi(t)=Q'\varphi(t)\,dt+\beta^{-1}(t)C(t) \varphi(t)\,d\bar{\omega}(t), \end{cases} $$
(2.6)

where

$$\begin{aligned} &A_{11}^{(k)}(x) = a_{10}(k)-a_{11}(k)x_{1}-a_{12}(k)x_{2}, \qquad A_{22}^{(k)}(x) = -a_{20}(k)+a_{21}(k)x_{1}-a_{23}(k)x_{3}, \\ &A_{33}^{(k)}(x) = -a_{30}(k)+a_{32}(k)x_{2}, \qquad A_{k}(x) = \bigl[A_{11}^{(k)}(x),A_{22}^{(k)}(x),A_{33}^{(k)}(x) \bigr]^{T}, \\ &B_{k}(x)= \begin{pmatrix} A_{11}^{(k)}(x) & 0 & 0 \\ 0 & A_{22}^{(k)}(x) & 0\\ 0 & 0 & A_{33}^{(k)}(x) \end{pmatrix} , \qquad\varXi(k)= \begin{pmatrix} \sigma_{1}(k) & 0 & 0 \\ 0 & \sigma_{2}(k) & 0\\ 0 & 0 & \sigma_{3}(k) \end{pmatrix} . \end{aligned}$$

Let

$$\begin{aligned} \mu=a_{10}-\frac{a_{11}}{a_{21}}a_{20}-\frac{a_{12}}{a_{32}}a_{30}. \end{aligned}$$

When \(\mu>0\), equation (2.5) has a positive equilibrium point \(x^{*}=(x_{1}^{*},x_{2}^{*},x_{3}^{*})\), where

$$\begin{aligned} x_{1}^{*}= \frac{a_{10}a_{32}-a_{12}a_{30}}{a_{11}a_{32}},\qquad x_{2}^{*}= \frac {a_{30}}{a_{32}},\qquad x_{3}^{*}=\frac {a_{21}a_{10}a_{32}-a_{21}a_{12}a_{30}-a_{11}a_{20}a_{32}}{a_{11}a_{23}a_{32}}. \end{aligned}$$

Because the equilibrium point (see [7] for the definition of the equilibrium point or the trivial solution) requires both the drift and the diffusion coefficients to vanish at this point, equation (2.5) has a non-zero equilibrium position, and it is easily obtained from this definition.
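For a fixed set of coefficients, μ and the equilibrium point can be computed directly from the formulas above. The sketch below uses hypothetical coefficient values (those of regime 1 in Example 5.1) only to check that μ>0 and to evaluate \(x^{*}\).

```python
# Coefficients assumed only for illustration (regime 1 of Example 5.1)
a10, a11, a12 = 4.0, 1.0, 2.0
a20, a21, a23 = 1.0, 1.0, 2.0
a30, a32 = 1.0, 1.0

mu = a10 - (a11 / a21) * a20 - (a12 / a32) * a30
assert mu > 0, "a positive equilibrium requires mu > 0"

x1_star = (a10 * a32 - a12 * a30) / (a11 * a32)
x2_star = a30 / a32
x3_star = (a21 * a10 * a32 - a21 * a12 * a30 - a11 * a20 * a32) / (a11 * a23 * a32)
print(x1_star, x2_star, x3_star)   # 2.0, 1.0, 0.5 for these values
```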

The completely observable equation (2.6) can be viewed as a diffusion equation, in which the usual diffusion term is replaced by

$$ \begin{pmatrix} \operatorname{diag}(x(t))\sum_{k=1}^{m}\varphi_{k}(t)[B_{k}(x)\varXi(k)] & O \\ O & \beta^{-1}(t)C(t)\varphi(t) \end{pmatrix} $$

and driven by \((\omega(t),\bar{\omega}(t))'\).

Let

$$ \begin{pmatrix} \operatorname{diag}(x(t))\sum_{k=1}^{m}\varphi_{k}(t)[B_{k}(x)\varXi(k)] & O \\ O & \beta^{-1}(t)C(t)\varphi(t) \end{pmatrix} ^{2} = \begin{pmatrix} \mathscr {A}_{1} & O\\ O & \mathscr {A}_{2} \end{pmatrix}, $$

where

$$\begin{aligned} &\mathscr {A}_{1} =\operatorname{diag} \Biggl\{ x_{1}^{2} \Biggl[\sum_{k=1}^{m}\varphi _{k}(t)A_{11}^{(k)}(x)\sigma_{1}( \varphi) \Biggr]^{2}, x_{2}^{2} \Biggl[\sum _{k=1}^{m}\varphi_{k}(t)A_{22}^{(k)}(x) \sigma_{2}(\varphi) \Biggr]^{2}, \\ &\phantom{\mathscr {A}_{1} =} x_{3}^{2} \Biggl[\sum _{k=1}^{m}\varphi_{k}(t)A_{33}^{(k)}(x) \sigma_{3}(\varphi) \Biggr]^{2} \Biggr\} , \\ &\mathscr {A}_{2} = \beta^{-2}(t)C\varphi(C \varphi)'. \end{aligned}$$

For a sufficiently smooth real-valued function \(h:\mathbb {R}_{+}^{3}\times S_{m}\longmapsto\mathbb{R}\), the operator associated with (2.6) is defined as follows:

$$\begin{aligned} \mathscr{L}h(x,\varphi) ={}& \frac{\partial h}{\partial x} \operatorname {diag} \bigl(x(t) \bigr)\sum_{k=1}^{m} \varphi_{k}(t)A_{k}(x) +\frac{\partial h}{\partial x}Q' \varphi(t)+\frac{1}{2}\operatorname{tr} \biggl(\frac{\partial^{2} h}{\partial x^{2}} \mathscr {A}_{1} \biggr) +\frac{1}{2}\operatorname{tr} \biggl( \frac{\partial^{2} h}{\partial\varphi ^{2}}\mathscr {A}_{2} \biggr) \\ = {}&\sum_{i=1}^{3}\frac{\partial h}{\partial x_{i}}x_{i} \sum_{k=1}^{m}\varphi _{k}(t)A_{ii}^{(k)} +\frac{\partial h}{\partial x}Q'\varphi(t)+\frac{1}{2}\operatorname{tr} \biggl( \frac{\partial^{2} h}{\partial\varphi^{2}}\mathscr {A}_{2} \biggr) \\ &{} +\frac{1}{2}\sum_{i=1}^{3} \frac{\partial^{2} h}{\partial x_{i}^{2}}x_{i}^{2} \Biggl[\sum _{k=1}^{m}\varphi_{k}(t)A_{ii}^{(k)}(x) \sigma_{i}(k) \Biggr]^{2}. \end{aligned}$$
(2.7)

If \(h(\cdot)\) is independent of φ, from (2.7) we have

$$ \mathscr{L}h(x,\varphi) = \sum _{i=1}^{3}\frac{\partial h}{\partial x_{i}}x_{i} \sum _{k=1}^{m}\varphi _{k}(t)A_{ii}^{(k)} +\frac{1}{2}\sum_{i=1}^{3} \frac{\partial^{2} h}{\partial x_{i}^{2}}x_{i}^{2} \Biggl[\sum _{k=1}^{m}\varphi_{k}(t)A_{ii}^{(k)}(x) \sigma_{i}(k) \Biggr]^{2}. $$
(2.8)
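The operator (2.8) is easy to evaluate numerically once the regime-dependent coefficients, the filter probabilities and the partial derivatives of h are supplied. The sketch below is one possible implementation for h independent of φ; the coefficient arrays, the test point and the logarithmic test function are hypothetical choices for illustration only.

```python
import numpy as np

def A_diag(x, coeffs):
    """Diagonal entries A_ii^(k) of (2.6) for every regime k.

    coeffs[k] = (a10, a11, a12, a20, a21, a23, a30, a32); returns shape (m, 3)."""
    x1, x2, x3 = x
    return np.array([[a10 - a11 * x1 - a12 * x2,
                      -a20 + a21 * x1 - a23 * x3,
                      -a30 + a32 * x2]
                     for (a10, a11, a12, a20, a21, a23, a30, a32) in coeffs])

def generator_Lh(x, phi, coeffs, sigma, grad_h, hess_h_diag):
    """Evaluate (2.8) for h independent of phi; grad_h and hess_h_diag hold
    dh/dx_i and d^2h/dx_i^2 at x, and sigma has shape (m, 3) with sigma_i(k)."""
    A = A_diag(x, coeffs)                  # shape (m, 3)
    drift_mix = phi @ A                    # sum_k phi_k A_ii^(k)
    noise_mix = phi @ (A * sigma)          # sum_k phi_k A_ii^(k) sigma_i(k)
    x = np.asarray(x, dtype=float)
    return (np.sum(grad_h * x * drift_mix)
            + 0.5 * np.sum(hess_h_diag * x**2 * noise_mix**2))

# Hypothetical two-regime data and test point (coefficient values of Example 5.1)
coeffs = [(4, 1, 2, 1, 1, 2, 1, 1), (3, 2, 1, 2, 2, 1, 2, 2)]
sigma = np.array([[1.5, 3.0, 2.0], [2.5, 2.0, 1.5]])
x, phi = np.array([2.0, 1.0, 0.5]), np.array([0.4, 0.6])
# test function h(x) = sum_i (x_i - ln x_i):  dh/dx_i = 1 - 1/x_i,  d^2h/dx_i^2 = 1/x_i^2
print(generator_Lh(x, phi, coeffs, sigma, 1 - 1 / x, 1 / x**2))
```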

3 \(H_{\infty}\) control

For equation (2.5), we implement the following transformation:

$$ N_{1}=x_{1}-x_{1}^{*}, \qquad N_{2}=x_{2}-x_{2}^{*}, \qquad N_{3}=x_{3}-x_{3}^{*}, $$
(3.1)

that is, substituting (3.1) into equation (2.5) yields

$$ \textstyle\begin{cases} dN_{1} = (N_{1}+x_{1}^{*} )\sum_{k=1}^{m} [\varphi_{k}(t) (-a_{11}(k)N_{1}-a_{12}(k)N_{2} ) (dt+\sigma_{1}(k)\,d\omega(t) ) ], \\ dN_{2} = (N_{2}+x_{2}^{*} )\sum_{k=1}^{m} [\varphi_{k}(t) (a_{21}(k)N_{1}-a_{23}(k)N_{3} ) (dt+\sigma_{2}(k)\,d\omega(t) ) ], \\ dN_{3} = (N_{3}+x_{3}^{*} )\sum_{k=1}^{m} [\varphi_{k}(t) (a_{32}(k)N_{2} ) (dt+\sigma_{3}(k)\,d \omega(t) ) ], \end{cases} $$
(3.2)

where \(N\in\{(N_{1},N_{2},N_{3}):N_{1}+x_{1}^{*}>0,N_{2}+x_{2}^{*}>0,N_{3}+x_{3}^{*}>0\}\). Obviously, the global asymptotic stability in probability of equation (2.5) at the positive equilibrium point \(x^{*}\) is equivalent to the global asymptotic stability in probability of equation (3.2) at the origin \(N^{*}=0\).

Next, we consider the following stochastic nonlinear system with external disturbance input and control:

$$ \textstyle\begin{cases} dN_{1} = (N_{1}+x_{1}^{*} )\sum_{k=1}^{m} [\varphi_{k}(t) (-a_{11}(k)N_{1}-a_{12}(k)N_{2} ) (dt+\sigma_{1}(k)\,d\omega_{1}(t) ) ] \\ \phantom{dN_{1} =}{}+g_{11}(N)u_{11}(N)\,dt+v_{11}(N)\,dt, \\ dN_{2} = (N_{2}+x_{2}^{*} )\sum_{k=1}^{m} [\varphi_{k}(t) (a_{21}(k)N_{1}-a_{23}(k)N_{3} ) (dt+\sigma_{2}(k)\,d\omega_{2}(t) ) ] \\ \phantom{dN_{1} =}{}+g_{22}(N)u_{12}(N)\,dt+v_{12}(N)\,dt, \\ dN_{3} = (N_{3}+x_{3}^{*} )\sum_{k=1}^{m} [\varphi_{k}(t) (a_{32}(k)N_{2} ) (dt+\sigma_{3}(k)\,d \omega_{3}(t) ) ] \\ \phantom{dN_{1} =}{}+g_{33}(N)u_{13}(N)\,dt+v_{13}(N)\,dt. \end{cases} $$
(3.3)

For convenience and simplicity in the following discussion, we introduce the following notations:

$$\begin{aligned} & f_{1}(N)= \begin{pmatrix} (N_{1}+x_{1}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (-a_{11}(k)N_{1}-a_{12}(k)N_{2} ) ] \\ (N_{2}+x_{2}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (a_{21}(k)N_{1}-a_{23}(k)N_{3} ) ] \\ (N_{3}+x_{3}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (a_{32}(k)N_{2} ) ] \end{pmatrix} \overset{ \vartriangle}{=} \begin{pmatrix} (N_{1}+x_{1}^{*})\mathscr {F}_{1}(k)\\ (N_{2}+x_{2}^{*})\mathscr {F}_{2}(k)\\ (N_{3}+x_{3}^{*})\mathscr {F}_{3}(k) \end{pmatrix} , \\ & l_{1}(N)= \begin{pmatrix} (N_{1}+x_{1}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (-a_{11}(k)N_{1}-a_{12}(k)N_{2} )\sigma_{1}(k) ] \\ (N_{2}+x_{2}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (a_{21}(k)N_{1}-a_{23}(k)N_{3} )\sigma_{2}(k) ] \\ (N_{3}+x_{3}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (a_{32}(k)N_{2} )\sigma _{3}(k) ] \end{pmatrix} \overset{ \vartriangle}{=} \begin{pmatrix} (N_{1}+x_{1}^{*})\mathscr {L}_{1}(k)\\ (N_{2}+x_{2}^{*})\mathscr {L}_{2}(k)\\ (N_{3}+x_{3}^{*})\mathscr {L}_{3}(k) \end{pmatrix} , \\ & g_{1}(N)= \begin{pmatrix} g_{11}(N) &0 &0\\ 0 &g_{22}(N) &0\\ 0 &0 &g_{33}(N) \end{pmatrix} ,\qquad u_{1}(N)= \begin{pmatrix} u_{11}(N)\\ u_{12}(N)\\ u_{13}(N) \end{pmatrix} , \\ & h_{1}(N)= \begin{pmatrix} 1 &0 &0\\ 0 &1 &0\\ 0 &0 &1 \end{pmatrix} ,\qquad v_{1}(N)= \begin{pmatrix} v_{11}(N)\\ v_{12}(N)\\ v_{13}(N) \end{pmatrix},\qquad \mathtt{C}_{1}(N)= \begin{pmatrix} N_{1}\\ N_{2}\\ N_{3} \end{pmatrix} ,\qquad N= \begin{pmatrix} N_{1}\\ N_{2}\\ N_{3} \end{pmatrix} . \end{aligned}$$

Therefore, equation (3.3) can be expressed as the following affine system:

$$ \textstyle\begin{cases} dN(t) = [f_{1}(N)+g_{1}(N)u_{1}+h_{1}(N)v_{1} ]\,dt+l_{1}(N)\,dw, \\ z_{1} = (\mathtt{C}_{1}(N),u_{1} )^{T}, \end{cases} $$
(3.4)

where \(f_{1}(0)=l_{1}(0)\equiv0\).

We first give several definitions about equation (3.4). Similar definitions have been given in the literature [20, 21].

Definition 3.1

Let \(\gamma>0\). Equation (3.4) is said to have \(L_{2}\)-gain less than or equal to γ if we can find a state feedback control law \(u_{1}=u_{1}^{*}(t,x)\) which satisfies

$$ E \int_{0}^{\infty} \bigl( \bigl\Vert \mathtt{C}_{1}(N) \bigr\Vert ^{2}+ \bigl\Vert u_{1}^{*} \bigr\Vert ^{2} \bigr)\,dt\leqslant \gamma ^{2}E \int_{0}^{\infty} \Vert v_{1} \Vert ^{2}\,dt, \quad\forall v_{1}\in L_{\infty}^{C}, $$

where \(\Vert \cdot \Vert \) is the Euclidean norm of a vector, and \(L_{\infty}^{C}\) denotes the set of bounded functions satisfying \(\sup_{t} \Vert \cdot \Vert \leq C\).

Definition 3.2

Consider the following stochastic system:

$$ dN(t)=f_{1}(N)\,dt+l_{1}(N) \,dw,\quad N(0)=N_{0}\in R_{+}^{3},\quad f_{1}(0)=l_{1}(0)=0. $$
(3.5)

(1) The solution \(N(t)\equiv0\) of equation (3.5) is said to be stable in probability if for any \(\varepsilon >0\),

$$ \lim_{N_{0}\rightarrow0}P \Bigl(\sup _{t\geq0} \bigl\Vert N(t) \bigr\Vert >\varepsilon \Bigr)=0. $$
(3.6)

(2) The solution \(N(t)\equiv0\) of equation (3.5) is said to be locally asymptotically stable in probability if (3.6) holds and

$$ \lim_{N_{0}\rightarrow0}P \Bigl(\lim _{t\rightarrow\infty}N(t)=0 \Bigr)=1. $$
(3.7)

(3) The solution \(N(t)\equiv0\) of equation (3.5) is said to be globally asymptotically stable in probability if (3.6) holds and

$$ P \Bigl(\lim_{t\rightarrow\infty}N(t)=0 \Bigr)=1. $$
(3.8)

Consider the following stochastic system:

$$ \textstyle\begin{cases} dN(t) = f_{1}(N)\,dt+l_{1}(N)\,dw, \\ y =\mathtt{C}_{1}(N), \end{cases} $$
(3.9)

where \(N(0)=N_{0}\in R_{+}^{3}\).

Definition 3.3

Equation (3.9) is locally zero-state detectable if there exists a neighborhood \(U_{0}\) of 0 such that for all \(N_{0}\in U_{0}\) we have

$$ y(t)\equiv0, \forall t\geq0 \quad\Longrightarrow\quad P \Bigl\{ \lim _{t\rightarrow \infty}N(t)=0,N(0)=N_{0} \Bigr\} =1. $$

If \(U_{0}=R_{+}^{3}\), then equation (3.9) is called zero-state detectable. Equation (3.9) is locally (or globally) zero-state observable if there is a neighborhood \(U_{0}\) of 0 such that for all \(N_{0}\in U_{0}\) (or \(R_{+}^{3}\)), \(y(t)\equiv0\) implies \(N_{0}\equiv0\).

Lemma 3.1

[26]

Consider equation (3.4). Let \(\gamma>0\) and suppose there exists a smooth solution \(V\geq0\) satisfying the Hamilton-Jacobi inequality

$$\begin{aligned} & V_{N}f_{1}(N)+ \frac{1}{2} V_{N} \biggl[\frac{1}{\gamma ^{2}}h_{1}(N)h_{1}^{T}(N)-g_{1}(N)g_{1}^{T}(N) \biggr] V_{N}^{T} \\ &\quad {}+\frac{1}{2}\mathtt{C}_{1}^{T}(N) \mathtt{C}_{1}(N)+\frac{1}{2} \operatorname{Tr} \bigl(l_{1}^{T}(N)V_{NN}l_{1}(N) \bigr) \leq0, \end{aligned}$$
(3.10)

then the closed-loop system (3.4) with the feedback control \(u_{1}=-g_{1}^{T}(N)V_{N}^{T}\) has \(L_{2}\)-gain from \(v_{1}\) to \(z_{1}\) less than or equal to γ.

Lemma 3.2

Suppose there exists a solution \(V\geq0\) to inequality (3.10) and that the system

$$ \textstyle\begin{cases} dN(t)=f_{1}(N)\,dt+l_{1}(N)\,dw, \\ z_{1}= (\mathtt{C}_{1}(N),-g_{1}^{T}(N)V_{N}^{T} )^{T}, \end{cases} $$

is zero-state observable. Then, provided \(V(N)>0\) for \(N\neq0\), the closed-loop system \(dN(t)= [f_{1}(N)-g_{1}(N)g_{1}^{T}(N)V_{N}^{T} ]\,dt+l_{1}(N)\,dw\) is locally asymptotically stable in probability. Additionally, if V is also proper, then this closed-loop system is globally asymptotically stable in probability.

The proof of Lemma 3.2 follows from the corresponding deterministic result, which is described in detail in [27] (pp. 39-42); the stochastic case requires only simple modifications, so the proof is omitted here.

Next, our goal is to design a suitable and simple control so that equation (3.4) is globally asymptotically stable in probability.

Theorem 3.1

For equation (3.4), let \(\gamma>1\) and choose

$$\begin{aligned} &g_{11}=\sqrt{ \biggl(\frac{N_{1}+x_{1}^{*}}{N_{1}} \biggr)^{2} \bigl[x_{1}^{*}\mathscr {L}_{1}^{2}(k)+2N_{1} \mathscr {F}_{1}(k) \bigr]+1+ \bigl(N_{1}+x_{1}^{*} \bigr)^{2}}, \\ &g_{22}=\sqrt{ \biggl(\frac {N_{2}+x_{2}^{*}}{N_{2}} \biggr)^{2} \bigl[x_{2}^{*}\mathscr {L}_{2}^{2}(k)+2N_{2} \mathscr {F}_{2}(k) \bigr]+1+ \bigl(N_{2}+x_{2}^{*} \bigr)^{2}}, \\ &g_{33}=\sqrt{ \biggl(\frac{N_{3}+x_{3}^{*}}{N_{3}} \biggr)^{2} \bigl[x_{3}^{*}\mathscr {L}_{3}^{2}(k)+2N_{3} \mathscr {F}_{3}(k) \bigr]+1+ \bigl(N_{3}+x_{3}^{*} \bigr)^{2}}. \end{aligned}$$

The corresponding control is

$$\begin{aligned} u_{1}(N)=-g_{1}^{T}V_{N}^{T}(N)= \begin{pmatrix} -\frac{\gamma(N_{1}+x_{1}^{*})}{N_{1}}[x_{1}^{*}\mathscr {L}_{1}^{2}(k)+2N_{1}\mathscr {F}_{1}(k)]-\frac{\gamma N_{1}}{N_{1}+x_{1}^{*}}-\gamma N_{1}(N_{1}+x_{1}^{*})\\ -\frac{\gamma(N_{2}+x_{2}^{*})}{N_{2}}[x_{2}^{*}\mathscr {L}_{2}^{2}(k)+2N_{2}\mathscr {F}_{2}(k)]-\frac{\gamma N_{2}}{N_{2}+x_{2}^{*}}-\gamma N_{2}(N_{2}+x_{2}^{*})\\ -\frac{\gamma(N_{3}+x_{3}^{*})}{N_{3}}[x_{3}^{*}\mathscr {L}_{3}^{2}(k)+2N_{3}\mathscr {F}_{3}(k)]-\frac{\gamma N_{3}}{N_{3}+x_{3}^{*}}-\gamma N_{3}(N_{3}+x_{3}^{*}) \end{pmatrix} . \end{aligned}$$

Under this control, the \(L_{2}\) gain of equation (3.4) from \(v_{1}\) to \(z_{1}\) is less than γ.

Proof

Let \(V_{N}=(V_{1},V_{2},V_{3})\); substituting \(f_{1}(N),g_{1}(N),\mathtt {C}_{1}(N),h_{1}(N),l_{1}(N),V_{N},V_{NN}\) into inequality (3.10) yields

$$\begin{aligned} &V_{N}f_{1}(N)+ \frac{1}{2}V_{N} \biggl[\frac{1}{\gamma ^{2}}h_{1}(N)h_{1}^{T}(N)-g_{1}(N)g_{1}^{T}(N) \biggr]V_{N}^{T} \\ &\quad{}+\frac{1}{2}\mathtt{C}_{1}^{T}(N) \mathtt{C}_{1}(N)+\frac{1}{2}\operatorname{Tr} \bigl(l_{1}^{T}(N)V_{NN}l_{1}(N) \bigr) \leqslant0. \end{aligned}$$
(3.11)

Define \(V_{1}=\frac{\gamma N_{1}}{N_{1}+x_{1}^{*}},V_{2}=\frac{\gamma N_{2}}{N_{2}+x_{2}^{*}},V_{3}=\frac{\gamma N_{3}}{N_{3}+x_{3}^{*}}\), then \(V_{NN}=(V_{ij})_{3\times3},i,j=1,2,3\),

$$\begin{aligned} V_{11}=\frac{\gamma x_{1}^{*}}{(N_{1}+x_{1}^{*})^{2}},\qquad V_{22}=\frac{\gamma x_{2}^{*}}{(N_{2}+x_{2}^{*})^{2}}, \qquad V_{33}= \frac{\gamma x_{3}^{*}}{(N_{3}+x_{3}^{*})^{2}},\qquad V_{ij}=0,i\neq j. \end{aligned}$$

Therefore, inequality (3.11) is converted to the following form:

$$\begin{aligned} &\gamma N_{1} \mathscr {F}_{1}(k)+ \gamma N_{2}\mathscr {F}_{2}(k)+ \gamma N_{3} \mathscr {F}_{3}(k)+\frac{1}{2} \biggl( \frac{1}{\gamma^{2}}-g_{11}^{2} \biggr)V_{1}^{2} +\frac{1}{2} \biggl( \frac{1}{\gamma^{2}}-g_{22}^{2} \biggr)V_{2}^{2} \\ &\quad{}+\frac{1}{2} \biggl(\frac{1}{\gamma^{2}}-g_{33}^{2} \biggr)V_{3}^{2}+\frac{1}{2} \bigl(N_{1}^{2}+N_{2}^{2}+N_{3}^{2} \bigr) \\ &\quad{}+\frac{1}{2} \bigl[\gamma x_{1}^{*}\mathscr {L}_{1}^{2}(k)+ \gamma x_{2}^{*}\mathscr {L}_{2}^{2}(k)+\gamma x_{3}^{*}\mathscr {L}_{3}^{2}(k) \bigr]\leqslant0, \end{aligned}$$
(3.12)

where

$$\begin{aligned} &\mathscr {F}_{1}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(-a_{11}(k)N_{1}-a_{12}(k)N_{2} \bigr) \bigr], \\ & \mathscr {L}_{1}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(-a_{11}(k)N_{1}-a_{12}(k)N_{2} \bigr)\sigma_{1}(k) \bigr], \\ &\mathscr {F}_{2}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{21}(k)N_{1}-a_{23}(k)N_{3} \bigr) \bigr], \\ & \mathscr {L}_{2}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{21}(k)N_{1}-a_{23}(k)N_{3} \bigr)\sigma_{2}(k) \bigr], \\ &\mathscr {F}_{3}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{32}(k)N_{2} \bigr) \bigr], \qquad \mathscr {L}_{3}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{32}(k)N_{2} \bigr) \sigma_{3}(k) \bigr]. \end{aligned}$$

Further, inequality (3.12) is equivalent to the following inequality:

$$\begin{aligned} &\gamma N_{1} \mathscr {F}_{1}(k)+ \gamma N_{2}\mathscr {F}_{2}(k)+ \gamma N_{3} \mathscr {F}_{3}(k)+\frac{1}{2} \frac{N_{1}^{2}}{(N_{1}+x_{1}^{*})^{2}} - \frac{1}{2}\gamma^{2}g_{11}^{2} \frac{N_{1}^{2}}{(N_{1}+x_{1}^{*})^{2}} \\ &\quad{}+\frac{1}{2}\frac{N_{2}^{2}}{(N_{2}+x_{2}^{*})^{2}} -\frac{1}{2} \gamma^{2}g_{22}^{2} \frac{N_{2}^{2}}{(N_{2}+x_{2}^{*})^{2}} + \frac{1}{2}\frac{N_{3}^{2}}{(N_{3}+x_{3}^{*})^{2}} -\frac{1}{2} \gamma^{2}g_{33}^{2} \frac{N_{3}^{2}}{(N_{3}+x_{3}^{*})^{2}} \\ &\quad{}+\frac{1}{2} \bigl(N_{1}^{2}+N_{2}^{2}+N_{3}^{2} \bigr) +\frac{1}{2} \bigl[\gamma x_{1}^{*}\mathscr {L}_{1}^{2}(k)+ \gamma x_{2}^{*}\mathscr {L}_{2}^{2}(k)+\gamma x_{3}^{*}\mathscr {L}_{3}^{2}(k) \bigr]\leqslant0. \end{aligned}$$
(3.13)

If the following inequalities are established, then inequality (3.13) is also established:

$$ \begin{aligned}& {-}\frac{1}{2}g_{11}^{2} \frac{N_{1}^{2}}{(N_{1}+x_{1}^{*})^{2}}\gamma^{2}+ \biggl[N_{1}\mathscr {F}_{1}(k)+\frac{1}{2}\gamma x_{1}^{*} \mathscr {L}_{1}^{2}(k) \biggr]\gamma +\frac{1}{2} \frac{N_{1}^{2}}{(N_{1}+x_{1}^{*})^{2}}+\frac{1}{2}N_{1}^{2}\leq0, \\ &{-}\frac{1}{2}g_{22}^{2}\frac{N_{2}^{2}}{(N_{2}+x_{2}^{*})^{2}} \gamma^{2}+ \biggl[N_{2}\mathscr {F}_{2}(k)+ \frac{1}{2}\gamma x_{2}^{*}\mathscr {L}_{2}^{2}(k) \biggr]\gamma +\frac{1}{2}\frac{N_{2}^{2}}{(N_{2}+x_{2}^{*})^{2}}+\frac{1}{2}N_{2}^{2} \leq0, \\ &{}-\frac{1}{2}g_{33}^{2}\frac{N_{3}^{2}}{(N_{3}+x_{3}^{*})^{2}} \gamma^{2}+ \biggl[N_{3}\mathscr {F}_{3}(k)+ \frac{1}{2}\gamma x_{3}^{*}\mathscr {L}_{3}^{2}(k) \biggr]\gamma +\frac{1}{2}\frac{N_{3}^{2}}{(N_{3}+x_{3}^{*})^{2}}+\frac{1}{2}N_{3}^{2} \leq0. \end{aligned} $$
(3.14)

That is,

$$ \begin{aligned} &g_{11}^{2}N_{1}^{2} \gamma^{2}- \bigl(N_{1}+x_{1}^{*} \bigr)^{2} \bigl[2N_{1}\mathscr {F}_{1}(k)+x_{1}^{*} \mathscr {L}_{1}^{2}(k) \bigr]\gamma -N_{1}^{2} \bigl[1+ \bigl(N_{1}+x_{1}^{*} \bigr)^{2} \bigr] \geq0, \\ &g_{22}^{2}N_{2}^{2} \gamma^{2}- \bigl(N_{2}+x_{2}^{*} \bigr)^{2} \bigl[2N_{2} \mathscr {F}_{2}(k)+x_{2}^{*}\mathscr {L}_{2}^{2}(k) \bigr]\gamma -N_{2}^{2} \bigl[1+ \bigl(N_{2}+x_{2}^{*} \bigr)^{2} \bigr]\geq0, \\ &g_{33}^{2}N_{3}^{2} \gamma^{2}- \bigl(N_{3}+x_{3}^{*} \bigr)^{2} \bigl[2N_{3} \mathscr {F}_{3}(k)+x_{3}^{*}\mathscr {L}_{3}^{2}(k) \bigr]\gamma -N_{3}^{2} \bigl[1+ \bigl(N_{3}+x_{3}^{*} \bigr)^{2} \bigr]\geq0. \end{aligned} $$
(3.15)

We have \(\gamma>1\), and \(g_{11},g_{22},g_{33}\) satisfy

$$ \begin{aligned} &\gamma>1\geq\frac{(N_{1}+x_{1}^{*})^{2}[2N_{1}\mathscr {F}_{1}(k)+x_{1}^{*}\mathscr {L}_{1}^{2}(k)]+\sqrt{\Delta_{1}}}{ 2g_{11}^{2}N_{1}^{2}}, \\ &\gamma>1\geq\frac{(N_{2}+x_{2}^{*})^{2}[2N_{2}\mathscr {F}_{2}(k)+x_{2}^{*}\mathscr {L}_{2}^{2}(k)]+\sqrt{\Delta_{2}}}{ 2g_{22}^{2}N_{2}^{2}}, \\ &\gamma>1\geq\frac{(N_{3}+x_{3}^{*})^{2}[2N_{3}\mathscr {F}_{3}(k)+x_{3}^{*}\mathscr {L}_{3}^{2}(k)] +\sqrt{\Delta_{3}}}{2g_{33}^{2}N_{3}^{2}}, \end{aligned} $$
(3.16)

where

$$\begin{aligned} &\Delta_{1}= \bigl(N_{1}+x_{1}^{*} \bigr)^{4} \bigl[2N_{1}\mathscr {F}_{1}(k)+x_{1}^{*} \mathscr {L}_{1}^{2}(k) \bigr]^{2}+4g_{11}^{2}N_{1}^{4} \bigl[1+ \bigl(N_{1}+x_{1}^{*} \bigr)^{2} \bigr], \\ &\Delta _{2}= \bigl(N_{2}+x_{2}^{*} \bigr)^{4} \bigl[2N_{2}\mathscr {F}_{2}(k)+x_{2}^{*} \mathscr {L}_{2}^{2}(k) \bigr]^{2}+4g_{22}^{2}N_{2}^{4} \bigl[1+ \bigl(N_{2}+x_{2}^{*} \bigr)^{2} \bigr], \\ &\Delta_{3}= \bigl(N_{3}+x_{3}^{*} \bigr)^{4} \bigl[2N_{3}\mathscr {F}_{3}(k)+x_{3}^{*} \mathscr {L}_{3}^{2}(k) \bigr]^{2}+4g_{33}^{2}N_{3}^{4} \bigl[1+ \bigl(N_{3}+x_{3}^{*} \bigr)^{2} \bigr]. \end{aligned}$$

Therefore inequality (3.14) is established. Each step of the above derivation is reversible, so we only need \(g_{11},g_{22},g_{33}\) to satisfy inequality (3.15); then inequality (3.12) is true. We choose

$$ \begin{aligned}& g_{11}=\sqrt{ \frac{1}{N_{1}^{2}} \bigl(N_{1}+x_{1}^{*} \bigr)^{2} \bigl[2N_{1}\mathscr {F}_{1}(k)+x_{1}^{*} \mathscr {L}_{1}^{2}(k) \bigr]+1+ \bigl(N_{1}+x_{1}^{*} \bigr)^{2}}, \\ &g_{22}=\sqrt{\frac{1}{N_{2}^{2}} \bigl(N_{2}+x_{2}^{*} \bigr)^{2} \bigl[2N_{2}\mathscr {F}_{2}(k)+x_{2}^{*} \mathscr {L}_{2}^{2}(k) \bigr]+1+ \bigl(N_{2}+x_{2}^{*} \bigr)^{2}}, \\ &g_{33}=\sqrt{\frac{1}{N_{3}^{2}} \bigl(N_{3}+x_{3}^{*} \bigr)^{2} \bigl[2N_{3}\mathscr {F}_{3}(k)+x_{3}^{*} \mathscr {L}_{3}^{2}(k) \bigr]+1+ \bigl(N_{3}+x_{3}^{*} \bigr)^{2}}. \end{aligned} $$
(3.17)

By \(V_{1}=\frac{\gamma N_{1}}{N_{1}+x_{1}^{*}},V_{2}=\frac{\gamma N_{2}}{N_{2}+x_{2}^{*}},V_{3}=\frac{\gamma N_{3}}{N_{3}+x_{3}^{*}}\), we can get

$$\begin{aligned} V(N)={}&\gamma \biggl[ \bigl(N_{1}+x_{1}^{*}-x_{1}^{*} \ln \bigl(N_{1}+x_{1}^{*} \bigr) \bigr)+ \bigl(N_{2}+x_{2}^{*}-x_{2}^{*} \ln \bigl(N_{2}+x_{2}^{*} \bigr) \bigr) \\ &{}+ \bigl(N_{3}+x_{3}^{*}-x_{3}^{*}\ln \bigl(N_{3}+x_{3}^{*} \bigr) \bigr) -\ln \biggl( \frac{e}{x_{1}^{*}} \biggr)^{x_{1}^{*}} \biggl(\frac{e}{x_{2}^{*}} \biggr)^{x_{2}^{*}} \biggl( \frac {e}{x_{3}^{*}} \biggr)^{x_{3}^{*}} \biggr]; \end{aligned}$$
(3.18)

for \(\gamma>1\), we have thus found a V which satisfies inequality (3.11). According to Lemma 3.1, the conclusion is established. □

Remark 3.1

It is worth noting that if \(g_{11},g_{22},g_{33}\) satisfy (3.17), a solution \(V>0\) of inequality (3.11) is obtained. As a result, the control law depends on the corresponding form of \(g_{11},g_{22},g_{33}\).

Next we show that the origin of equation (3.4) is globally asymptotically stable in probability in the absence of an external disturbance signal.

Theorem 3.2

For \(\gamma>1\), equation (3.4) without exogenous disturbance signal is of the following form:

$$ \textstyle\begin{cases} dN(t) = [f_{1}(N)+g_{1}(N)u_{1} ]\,dt+l_{1}(N)\,dw, \\ z_{1} = (\mathtt{C}_{1}(N),u_{1} )^{T}. \end{cases} $$
(3.19)

Under the control \(u_{1}(N)=-g_{1}^{T}(N)V_{N}^{T}(N)\), the closed-loop system

$$ \begin{aligned} dN(t)= \bigl[f_{1}(N)-g_{1}(N)g_{1}^{T}(N)V_{N}^{T}(N) \bigr]\,dt+l_{1}(N)\,dw \end{aligned} $$
(3.20)

is globally asymptotically stable in probability at the point \(N^{*}=0\).

Proof

For the following equation:

$$ \begin{aligned} dN(t)=f_{1}(N) \,dt+l_{1}(N)\,dw, \end{aligned} $$
(3.21)

it is easy to verify that it is zero-state detectable. Since \(V(N)>0\) for all \(N\neq0\), \(V(0)=0\), and V is proper, the conclusion follows from Lemma 3.2. □

The following theorem is obtained by using the original variables instead of the transformed system variables.

Theorem 3.3

For \(\gamma>1\), equation (2.5) with control is of the following form:

$$ \textstyle\begin{cases} dx_{1} =x_{1} \sum_{k=1}^{m}\varphi_{k}(t) [ (a_{10}(k)-a_{11}(k)x_{1}-a_{12}(k)x_{2} ) (dt+\sigma_{1}(k)\,d\omega _{1}(t) ) ] \\ \phantom{dx_{1} =}{}+\hat{g}_{11}(x)\hat{u}_{11}(x)\,dt, \\ dx_{2} =x_{2}\sum_{k=1}^{m} \varphi_{k}(t) [ (-a_{20}(k)+a_{21}(k)x_{1}-a_{23}(k)x_{3} ) (dt+\sigma_{2}(k)\,d\omega _{2}(t) ) ] \\ \phantom{dx_{1} =}{}+ \hat{g}_{22}(x)\hat{u}_{12}(x)\,dt, \\ dx_{3} =x_{3}\sum_{k=1}^{m} \varphi_{k}(t) [ (-a_{30}(k)+a_{32}(k)x_{2} ) (dt+\sigma_{3}(k)\,d\omega_{3}(t) ) ] \\ \phantom{dx_{1} =}{}+ \hat{g}_{33}(x)\hat{u}_{13}(x)\,dt, \end{cases} $$
(3.22)

where \(\hat{u}_{11}(x)=u_{11}(x-x^{*}),\hat{u}_{12}(x)=u_{12}(x-x^{*}),\hat {u}_{13}(x)=u_{13}(x-x^{*}), \hat{u}_{1}=(\hat{u}_{11}(x),\hat{u}_{12}(x), \hat{u}_{13}(x))^{T}\).

Select

$$\begin{aligned} &\hat{g}_{11}(x)=g_{11} \bigl(x-x^{*} \bigr)=\sqrt{ \frac {x_{1}^{2}}{(x_{1}-x_{1}^{*})^{2}} \bigl[2 \bigl(x_{1}-x_{1}^{*} \bigr)\hat{ \mathscr {F}} _{1}(k)+x_{1}^{*}\hat{\mathscr {L}}_{1}^{2}(k) \bigr]+1+x_{1}^{2}}, \\ &\hat{g}_{22}(x)=g_{22} \bigl(x-x^{*} \bigr)=\sqrt{ \frac {x_{2}^{2}}{(x_{2}-x_{2}^{*})^{2}} \bigl[2 \bigl(x_{2}-x_{2}^{*} \bigr)\hat{ \mathscr {F}}_{2}(k)+x_{2}^{*}\hat {\mathscr {L}}_{2}^{2}(k) \bigr]+1+x_{2}^{2}}, \\ &\hat{g}_{33}(x)=g_{33} \bigl(x-x^{*} \bigr)=\sqrt{ \frac {x_{3}^{2}}{(x_{3}-x_{3}^{*})^{2}} \bigl[2 \bigl(x_{3}-x_{3}^{*} \bigr)\hat{ \mathscr {F}}_{3}(k)+x_{3}^{*}\hat {\mathscr {L}}_{3}^{2}(k) \bigr]+1+x_{3}^{2}}, \end{aligned}$$

where

$$\begin{aligned} &\hat{\mathscr {F}}_{1}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{10}(k)-a_{11}(k)x_{1}-a_{12} (k)x_{2} \bigr) \bigr], \\ &\hat{\mathscr {L}}_{1}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{10}(k)-a_{11}(k)x_{1}-a_{12} (k)x_{2} \bigr)\sigma_{1}(k) \bigr], \\ &\hat{\mathscr {F}}_{2}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(-a_{20}(k)+a_{21}(k)x_{1}-a_{23} (k)x_{3} \bigr) \bigr], \\ &\hat{\mathscr {L}}_{2}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(-a_{20}(k)+a_{21}(k)x_{1}-a_{23} (k)x_{3} \bigr)\sigma_{2}(k) \bigr], \\ &\hat{\mathscr {F}}_{3}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(-a_{30}(k)+a_{32}(k)x_{2} \bigr) \bigr], \\ &\hat{\mathscr {L}}_{3}(k)=\sum_{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(-a_{30}(k)+a_{32}(k)x_{2} \bigr) \sigma_{3}(k) \bigr]. \end{aligned}$$

Under the control

$$\begin{aligned} \hat{u}_{1}(x)&=u_{1} \bigl(x-x^{*} \bigr)=-g_{1}^{T} \bigl(x-x^{*} \bigr)V_{x}^{T} \bigl(x-x^{*} \bigr) \\ &= \begin{pmatrix} -\gamma\sqrt{2(x_{1}-x_{1}^{*})\hat{\mathscr {F}}_{1}(k)+x_{1}^{*}\hat{\mathscr {L}}_{1}^{2}(k)+\frac{(x_{1}-x_{1}^{*})^{2}}{x_{1}^{2}}+(x_{1}-x_{1}^{*})^{2}}\\ -\gamma\sqrt{2(x_{2}-x_{2}^{*})\hat{\mathscr {F}}_{2}(k)+x_{2}^{*}\hat{\mathscr {L}}_{2}^{2}(k)+\frac{(x_{2}-x_{2}^{*})^{2}}{x_{2}^{2}}+(x_{2}-x_{2}^{*})^{2}}\\ -\gamma\sqrt{2(x_{3}-x_{3}^{*})\hat{\mathscr {F}}_{3}(k)+x_{3}^{*}\hat{\mathscr {L}}_{3}^{2}(k)+\frac{(x_{3}-x_{3}^{*})^{2}}{x_{3}^{2}}+(x_{3}-x_{3}^{*})^{2}} \end{pmatrix} , \end{aligned}$$

the closed-loop system (3.22) is globally asymptotically stable in probability at the positive equilibrium point \(x^{*}\); therefore, the populations continue to survive.
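As a numerical illustration, the feedback control of Theorem 3.3 can be evaluated at a given state once the filter probabilities are available. The sketch below is an illustrative implementation; the regime data, the equilibrium \(x^{*}=(3,1,2)\), the value \(\gamma=1.5\) and the clipping of the expressions under the square roots are assumptions made only for this example.

```python
import numpy as np

def h_infinity_control(x, x_star, phi, coeffs, sigma, gamma=1.5):
    """Evaluate the feedback control of Theorem 3.3 at the state x (sketch).

    coeffs[k] = (a10, a11, a12, a20, a21, a23, a30, a32) and sigma[k] = (s1, s2, s3)
    in regime k; phi holds the Wonham filter probabilities."""
    x1, x2, x3 = x
    A = np.array([[a10 - a11 * x1 - a12 * x2,
                   -a20 + a21 * x1 - a23 * x3,
                   -a30 + a32 * x2]
                  for (a10, a11, a12, a20, a21, a23, a30, a32) in coeffs])
    F_hat = phi @ A                          # \hat{F}_i mixed over regimes
    L_hat = phi @ (A * np.asarray(sigma))    # \hat{L}_i mixed over regimes
    x, x_star = np.asarray(x, dtype=float), np.asarray(x_star, dtype=float)
    d = x - x_star
    inside = 2 * d * F_hat + x_star * L_hat**2 + d**2 / x**2 + d**2
    # the square roots are only real when these expressions are non-negative;
    # negative values are clipped here purely as a numerical safeguard
    return -gamma * np.sqrt(np.clip(inside, 0.0, None))

# Hypothetical evaluation using the regime data of Example 5.1 and an assumed x*
coeffs = [(4, 1, 2, 1, 1, 2, 1, 1), (3, 2, 1, 2, 2, 1, 2, 2)]
sigma = [(1.5, 3.0, 2.0), (2.5, 2.0, 1.5)]
print(h_infinity_control([3.0, 4.0, 5.0], [3.0, 1.0, 2.0],
                         np.array([0.5, 0.5]), coeffs, sigma))
```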

Remark 3.2

From the above discussion we know that, in the case of a disturbance input, applying a suitable control keeps the originally persistent populations persistent. This provides a theoretical basis for exploiting natural resources rationally without destroying the ecological balance.

4 Passive control

Equation (2.5) has a positive equilibrium point \(x^{*}=(x_{1}^{*},x_{2}^{*},x_{3}^{*})\), so equation (2.5) is equivalent to the following equation:

$$ \textstyle\begin{cases} dx_{1} =x_{1} \sum_{k=1}^{m}\varphi_{k}(t) [ (-a_{11}(k) (x_{1}-x_{1}^{*} )-a_{12}(k) (x_{2}-x_{2}^{*} ) ) (dt+\sigma_{1}(k)\,d\omega_{1}(t) ) ], \\ dx_{2} =x_{2}\sum_{k=1}^{m} \varphi_{k}(t) [ (a_{21}(k) (x_{1}-x_{1}^{*} )-a_{23}(k) (x_{3}-x_{3}^{*} ) ) (dt+\sigma _{2}(k)\,d\omega_{2}(t) ) ], \\ dx_{3} =x_{3}\sum_{k=1}^{m} \varphi_{k}(t) [ (a_{32}(k) (x_{2}-x_{2}^{*} ) ) (dt+\sigma_{3}(k)\,d\omega_{3}(t) ) ]. \end{cases} $$
(4.1)

It is obvious that the global asymptotic stability in probability of equation (2.5) is equivalent to the global asymptotic stability in probability of equation (4.1) at the positive equilibrium point \(x^{*}\).

Equation (4.1) with a control term is expressed as

$$ \textstyle\begin{cases} dx_{1} =x_{1} \sum_{k=1}^{m}\varphi_{k}(t) [ (-a_{11}(k) (x_{1}-x_{1}^{*} )-a_{12}(k) (x_{2}-x_{2}^{*} ) ) (dt+\sigma_{1}(k)\,d\omega_{1}(t) ) ]\\ \phantom{dx_{1} =}{} +g_{21}(x)u_{21}(x), \\ dx_{2} =x_{2}\sum_{k=1}^{m} \varphi_{k}(t) [ (a_{21}(k) (x_{1}-x_{1}^{*} )-a_{23}(k) (x_{3}-x_{3}^{*} ) ) (dt+\sigma _{2}(k)\,d\omega_{2}(t) ) ] \\ \phantom{dx_{1} =}{}+g_{22}(x)u_{22}(x), \\ dx_{3} =x_{3}\sum_{k=1}^{m} \varphi_{k}(t) [ (a_{32}(k) (x_{2}-x_{2}^{*} ) ) (dt+\sigma_{3}(k)\,d\omega_{3}(t) ) ] +g_{23}(x)u_{23}(x), \end{cases} $$
(4.2)

furthermore, equation (4.2) can be expressed in the following matrix form:

$$ \textstyle\begin{cases} dx(t) = [f_{2}(x)+g_{2}(x)u_{2}(x) ]\,dt+l_{2}(x)\,dw, \\ y_{2} =h_{2}(x), \end{cases} $$
(4.3)

where

$$\begin{aligned} &f_{2}(x)= \begin{pmatrix} x_{1}\sum_{k=1}^{m} [\varphi_{k}(t) (-a_{11}(k)(x_{1}-x_{1}^{*})-a_{12}(k)(x_{2}-x_{2}^{*}) ) ] \\ x_{2}\sum_{k=1}^{m} [\varphi_{k}(t) (a_{21}(k)(x_{1}-x_{1}^{*})-a_{23}(k)(x_{3}-x_{3}^{*}) ) ] \\ x_{3}\sum_{k=1}^{m} [\varphi_{k}(t) (a_{32}(k)(x_{2}-x_{2}^{*}) ) ] \end{pmatrix} , \\ &l_{2}(x)= \begin{pmatrix} x_{1}\sum_{k=1}^{m} [\varphi_{k}(t) (-a_{11}(k)(x_{1}-x_{1}^{*})-a_{12}(k)(x_{2}-x_{2}^{*}) )\sigma_{1}(k) ] \\ x_{2}\sum_{k=1}^{m} [\varphi_{k}(t) (a_{21}(k)(x_{1}-x_{1}^{*})-a_{23}(k)(x_{3}-x_{3}^{*}) )\sigma_{2}(k) ] \\ x_{3}\sum_{k=1}^{m} [\varphi_{k}(t) (a_{32}(k)(x_{2}-x_{2}^{*}) )\sigma _{3}(k) ] \end{pmatrix} , \\ &g_{2}(N)= \begin{pmatrix} g_{21}(N) &0 &0\\ 0 &g_{22}(N) &0\\ 0 &0 &g_{23}(N) \end{pmatrix} ,\qquad h_{2}(x)= \begin{pmatrix} x_{1}-x_{1}^{*}\\ 0\\ 0 \end{pmatrix} ,\qquad u_{2}(x)= \begin{pmatrix} u_{21}(x)\\ u_{22}(x)\\ u_{23}(x) \end{pmatrix} . \end{aligned}$$

A function \(s(u_{2},y_{2}):R^{3}\times R^{3}\rightarrow R\) is called a supply rate if it is locally integrable for all input-output pairs satisfying equation (4.3). Then we introduce the notion of passivity to equation (4.3) as follows.

Definition 4.1

Equation (4.3) with the supply rate \(s(u_{2},y_{2})\) is called a dissipative system if there exists a Lyapunov function V defined on \(R^{3}\), called the storage function, such that for all \(x_{0}\) and \(t\geq t_{0}\geq0\) the following dissipative inequality holds:

$$ E \bigl[V \bigl(x(t) \bigr) \bigr]-V \bigl(x(t_{0}) \bigr)\leq E \biggl[ \int_{t_{0}}^{t}s(u_{2},y_{2})\,dt \biggr], $$
(4.4)

where \(x(t_{0})=x_{0}\).

Definition 4.2

Equation (4.3) is called passive if it is dissipative with respect to the supply rate \(s(u_{2},y_{2})=u_{2}^{T}y_{2}\).

Lemma 4.1

[28]

Consider the stochastic nonlinear system

$$ \begin{aligned} dx=f_{2}(x) \,dt+g_{2}(x)u_{2}(x)\,dt+l_{2}(x)\,dw,\quad f_{2}(0)=u_{2}(0)=l_{2}(0)=0. \end{aligned} $$
(4.5)

Assume that the equilibrium point \(x^{*}=0\) of the equation \(dx=f_{2}(x)\,dt+l_{2}(x)\,dw\) is asymptotically stable in probability and that there is a function \(V(x)\geq0\) which is positive semi-definite and, for any \(\varepsilon >0\), satisfies

$$ V_{x} \bigl[f_{2}(x)+g_{2}(x)u_{2}(x) \bigr]+\frac{1}{2}\operatorname{Tr} \bigl(l_{2}^{T}(x)V_{xx}l_{2}(x) \bigr)\leq -\varepsilon \bigl\Vert u_{2}(x) \bigr\Vert ^{2}, $$
(4.6)

at \(x^{*}=0\). Then the equilibrium point \(x^{*}=0\) of equation (4.5) is also asymptotically stable in probability.

Lemma 4.2

[28]

Assume that there exists a solution \(V\geq0\) to the inequality

$$ \begin{aligned} &V_{x}f_{2}(x)+ \frac{1}{2}\operatorname{Tr} \bigl(l_{2}^{T}(x)V_{xx}l_{2}(x) \bigr)\leq-\varepsilon h_{2}^{T}(x)h_{2}(x), \\ &V_{x}g_{2}(x)=h_{2}^{T}(x), \end{aligned} $$
(4.7)

with \(V(0)=0\) and \(V(x)>0\) for \(x\neq0\), and that equation (4.5) is zero-state detectable. Then the equilibrium point \(x^{*}=0\) of the equation \(dx=f_{2}(x)\,dt+l_{2}(x)\,dw\) is asymptotically stable in probability. If V is also proper, then the zero point is globally asymptotically stable in probability.

Theorem 4.1

In equation (4.3), taking \(g_{21}(x)=x_{1},g_{22}(x)=g_{23}(x)\equiv0\), equation (4.3) is strictly output passive.

Proof

In equation (4.2), define the storage function

$$ \begin{aligned} V(x)= \bigl(x_{1}-x_{1}^{*} \ln x_{1} \bigr)+\frac{\hat{a}_{12}}{\check {a}_{21}} \bigl(x_{2}-x_{2}^{*} \ln x_{2} \bigr)+\frac{\hat{a}_{12}\hat{a}_{23}}{\check {a}_{21}\check{a}_{32}} \bigl(x_{3}-x_{3}^{*} \ln x_{3} \bigr), \end{aligned} $$
(4.8)

where \(\hat{a}_{12}=\min\{a_{12}(k)\}, \check{a}_{21}=\max\{a_{21}(k)\} , \hat{a}_{23}=\min\{a_{23}(k)\}, \check{a}_{32}=\max\{a_{32}(k)\}, k=1,2,\ldots, m\). Then we prove that equation (4.3) is dissipative with respect to the strict output supply rate \(s(u_{2},y_{2})=u_{2}^{T}y_{2}-\varepsilon \Vert y_{2} \Vert ^{2}\). By Lemma 4.2, we only need the following conditions to be established:

$$ \begin{aligned} &V_{x}f_{2}(x)+ \frac{1}{2}\operatorname{Tr} \bigl(l_{2}^{T}(x)V_{xx}l_{2}(x) \bigr)\leq-\varepsilon h_{2}^{T}(x)h_{2}(x), \\ &V_{x}g_{2}(x)=h_{2}^{T}(x), \end{aligned} $$
(4.9)

where \(V_{x}=(V_{x_{1}},V_{x_{2}},V_{x_{3}})\). For convenience, let \(V_{x}=(V_{1},V_{2},V_{3})\), and we substitute \(V_{x},f_{2}(x),h_{2}(x),l_{2}(x)\) into the first inequality in equation (4.9), to obtain

$$\begin{aligned} & \bigl(x_{1}-x_{1}^{*} \bigr)\sum _{k=1}^{m} \bigl[ \varphi_{k}(t) \bigl(-a_{11}(k) \bigl(x_{1}-x_{1}^{*} \bigr)-a_{12}(k) \bigl(x_{2}-x_{2}^{*} \bigr) \bigr) \bigr] \\ &\qquad{}+\frac{\hat{a}_{12}}{\check{a}_{21}} \bigl(x_{2}-x_{2}^{*} \bigr)\sum _{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{21}(k) \bigl(x_{1}-x_{1}^{*} \bigr)-a_{23}(k) \bigl(x_{3}-x_{3}^{*} \bigr) \bigr) \bigr] \\ &\qquad{}+\frac{\hat{a}_{12}\hat{a}_{23}}{\check{a}_{21}\check {a}_{32}} \bigl(x_{3}-x_{3}^{*} \bigr)\sum _{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{32}(k) \bigl(x_{2}-x_{2}^{*} \bigr) \bigr) \sigma_{3}(k) \bigr] \\ &\qquad{}+\frac{1}{2}x_{1}^{*} \Biggl\{ \sum _{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(-a_{11}(k) \bigl(x_{1}-x_{1}^{*} \bigr)-a_{12}(k) \bigl(x_{2}-x_{2}^{*} \bigr) \bigr) \sigma_{1}(k) \bigr] \Biggr\} ^{2} \\ &\qquad{}+\frac{1}{2}x_{2}^{*} \Biggl\{ \sum _{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{21}(k) \bigl(x_{1}-x_{1}^{*} \bigr)-a_{23}(k) \bigl(x_{3}-x_{3}^{*} \bigr) \bigr) \sigma_{2}(k) \bigr] \Biggr\} ^{2} \\ &\qquad{}+\frac{1}{2}x_{3}^{*} \Biggl\{ \sum _{k=1}^{m} \bigl[\varphi_{k}(t) \bigl(a_{32}(k) \bigl(x_{2}-x_{2}^{*} \bigr) \bigr) \sigma_{3}(k) \bigr] \Biggr\} ^{2} \\ &\quad\leq-\varepsilon \bigl(x_{1}-x_{1}^{*} \bigr)^{2}. \end{aligned}$$
(4.10)

Simplify this as

$$ -\sum_{k=1}^{m} \bigl[\varphi_{k}(t)a_{11}(k) \bigr] \bigl(x_{1}-x_{1}^{*} \bigr)^{2}\leq-\varepsilon \bigl(x_{1}-x_{1}^{*} \bigr)^{2}. $$
(4.11)

In order to get (4.11), we only need to take \(0<\varepsilon <\sum_{k=1}^{m}[\varphi_{k}(t)a_{11}(k)]\). So \(V(x)\) satisfies the first inequality of (4.9). Moreover, when \(V_{x}(x)\) and \(g_{21}=x_{1},g_{22}=g_{23}\equiv0\) are substituted into the second equality of (4.9), the equality clearly holds. This implies that there exists ε satisfying \(0<\varepsilon <a_{11}\) such that equation (4.3) is strictly dissipative with respect to the output supply rate \(s(u_{2},y_{2})\). □

For equation (4.3), we want the closed-loop system under the state feedback control \(u_{2}\) to be globally asymptotically stable in probability at the positive equilibrium point \(x^{*}=(x_{1}^{*},x_{2}^{*},x_{3}^{*})\), so \(u_{2}\) must satisfy certain conditions. In order to find \(u_{2}\), we substitute (3.1) into equation (4.3), which yields

$$ \textstyle\begin{cases} dN = [\hat{f}_{2}(N)+ \hat{g}_{2}(N)\hat{u}_{2}(N) ]\,dt+\hat{l}_{2}(N)\,dw, \\ \hat{y}_{2} =\hat{h}_{2}(N), \end{cases} $$
(4.12)

where

$$\begin{aligned} &\hat{f}_{2}(N)= \begin{pmatrix} (N_{1}+x_{1}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (-a_{11}(k)N_{1}-a_{12}(k)N_{2} ) ] \\ (N_{2}+x_{2}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (a_{21}(k)N_{1}-a_{23}(k)N_{3} ) ] \\ (N_{3}+x_{3}^{*})\sum_{k=1}^{m} [\varphi_{k}(t)a_{32}(k)N_{2} ] \end{pmatrix} , \\ &\hat{l}_{2}(N)= \begin{pmatrix} (N_{1}+x_{1}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (-a_{11}(k)N_{1}-a_{12}(k)N_{2} )\sigma_{1}(k) ] \\ (N_{2}+x_{2}^{*})\sum_{k=1}^{m} [\varphi_{k}(t) (a_{21}(k)N_{1}-a_{23}(k)N_{3} )\sigma_{2}(k) ] \\ (N_{3}+x_{3}^{*})\sum_{k=1}^{m} [\varphi_{k}(t)a_{32}(k)N_{2}\sigma_{3}(k) ] \end{pmatrix} , \\ &\hat{g}_{1}(N)= \begin{pmatrix} \hat{g}_{21}(N) &0 &0\\ 0 &0 &0\\ 0 &0 &0 \end{pmatrix} , \qquad\hat{h}_{2}(N)= \begin{pmatrix} N_{1}\\ 0\\ 0 \end{pmatrix} , \qquad\hat{u}_{2}(N)= \begin{pmatrix} \hat{u}_{21}(N)\\ \hat{u}_{22}(N)\\ \hat{u}_{23}(N) \end{pmatrix} , \end{aligned}$$

and \(\hat{g}_{21}(N)=g_{21}(N+x^{*}),\hat{u}_{21}(N)=u_{21}(N+x^{*}),\hat {u}_{22}(N)=u_{22}(N+x^{*}), \hat{u} _{23}(N)=u_{23}(N+x^{*})\). It is obvious that the global asymptotic stability in probability of equation (4.3) at the positive equilibrium point \(x^{*}\) is equivalent to the global asymptotic stability in probability of equation (4.12) at the origin \(N^{*}=0\). So the following theorem is obtained.

Theorem 4.2

In equation (4.12), suppose that \(0<\varepsilon <a_{11}, \hat {u}_{22}(N)=\hat{u}_{23}(N)\equiv0\), and \(\hat{u}^{(2)}_{21}(N)<\hat {u}_{21}(N)<\hat{u}^{(1)}_{21}(N), \hat{u}_{21}(0)=0\), where

$$\hat{u}^{(1)}_{21}(N)=\frac{-N_{1}+\sqrt{\Delta}}{2\varepsilon },\qquad \hat{u}^{(2)}_{21}(N)=\frac{-N_{1}-\sqrt{\Delta}}{2\varepsilon }. $$

Then equation (4.12) is asymptotically stable in probability at the origin \(N^{*}=0\). Further, if \(\Vert \hat{u}_{2}(N) \Vert \neq0\), equation (4.12) is globally asymptotically stable in probability at the origin \(N^{*}=0\).

Proof

Define a storage function

$$\begin{aligned} \widehat{V}(N)={}& \bigl(N_{1}+x_{1}^{*}-x_{1}^{*} \ln \bigl(N_{1}+x_{1}^{*} \bigr) \bigr)+\frac{\hat {a}_{12}}{\check{a}_{21}} \bigl(N_{2}+x_{2}^{*}-x_{2}^{*}\ln \bigl(N_{2}+x_{2}^{*} \bigr) \bigr) \\ &{}+\frac{\hat{a}_{12}\hat{a}_{23}}{\check{a}_{21}\check{a}_{32}} \bigl(N_{3}+x_{3}^{*}-x_{3}^{*} \ln \bigl(N_{3}+x_{3}^{*} \bigr) \bigr) -\ln \biggl( \frac{e}{x_{1}^{*}} \biggr)^{x_{1}^{*}} \biggl(\frac{e}{x_{2}^{*}} \biggr)^{x_{2}^{*}} \biggl( \frac{e}{x_{3}^{*}} \biggr)^{x_{3}^{*}}. \end{aligned}$$
(4.13)

Obviously, \(\widehat{V}(N)>0\) for \(N\neq0\) and \(\widehat{V}(0)=0\). Note that \(\widehat {V}_{N}(N)=(\widehat{V}_{1},\widehat{V}_{2},\widehat{V}_{3}),\hat{u}_{2}(N)=\hat {u}_{2},\hat{f}_{2}(N)=\hat{f}_{2}, \hat{g}_{1}(N)=\hat{g}_{1}, \hat{l}_{2}(N)=\hat{l}_{2}\); plugging these into inequality (4.6), we obtain

$$ -\sum_{k=1}^{m} \bigl[\varphi_{k}(t)a_{11}(k) \bigr]N_{1}^{2} +N_{1}\hat{u}_{21}\leq -\varepsilon \hat{u}^{2}_{21}, \qquad 0\leq-\varepsilon \hat{u}^{2}_{22},\qquad 0\leq-\varepsilon \hat{u}^{2}_{23}. $$
(4.14)

By the conditions \(\hat{u}_{22}(N)=\hat{u}_{23}(N)\equiv0\), the second and third inequalities of (4.14) are established. The first inequality of (4.14) transforms into the following inequality:

$$ \begin{aligned} \varepsilon \hat{u}^{2}_{21}+N_{1} \hat{u}_{21}-\sum_{k=1}^{m} \bigl[ \varphi _{k}(t)a_{11}(k) \bigr]N_{1}^{2} \leq0. \end{aligned} $$
(4.15)

Since \(\Delta=N_{1}^{2}+4\varepsilon \sum_{k=1}^{m} [\varphi _{k}(t)a_{11}(k)]N_{1}^{2}\geq0\), setting the left side of inequality (4.15) equal to zero gives

$$ \hat{u}^{(1)}_{21}=\frac{-N_{1}+\sqrt{\Delta}}{2\varepsilon },\qquad \hat{u}^{(2)}_{21}= \frac{-N_{1}-\sqrt{\Delta}}{2\varepsilon }. $$
(4.16)

As long as \(\hat{u}^{(2)}_{21}<\hat{u}_{21}<\hat{u}^{(1)}_{21}\) and \(\hat{u}_{21}(0)=0\), inequality (4.15) is established. Moreover, according to Lemma 4.1, equation (4.12) is asymptotically stable in probability at the origin \(N^{*}=0\).

In addition, the control \(\hat{u}_{2}\) of this theorem satisfies

$$ V_{N} \bigl[ \hat{f}_{2}(N)+ \hat{g}_{2}(N)\hat{u}_{2}(N) \bigr]+ \frac{1}{2} \operatorname{Tr} \bigl(\hat {l}_{2}^{T}(x)V_{NN} \hat{l}_{2}(N) \bigr)\leq-\varepsilon \bigl\Vert \hat{u}_{2}(N) \bigr\Vert ^{2} . $$
(4.17)

If \(\Vert \hat{u}_{2}(N) \Vert ^{2}\neq0\), the storage function \(\hat{V}(N)\) can be used as a Lyapunov function to determine the stability of the system, so the closed-loop system is globally asymptotically stable in probability. □
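Numerically, the admissible range for \(\hat{u}_{21}\) in Theorem 4.2 is obtained directly from the roots of (4.15). The sketch below computes these roots for hypothetical values of \(N_{1}\), φ and \(a_{11}(k)\), and checks that the simple proportional choice \(\hat{u}_{21}=-cN_{1}\) with a small \(c>0\) (which vanishes at \(N_{1}=0\)) lies between them.

```python
import numpy as np

def passive_control_bounds(N1, phi, a11, eps=0.2):
    """Roots of (4.15); an admissible u_21 lies strictly between them (sketch).

    a11[k] = a_11(k); eps is the passivity constant, 0 < eps < sum_k phi_k a_11(k)."""
    a_bar = float(np.dot(phi, a11))
    assert 0 < eps < a_bar, "requires 0 < eps < sum_k phi_k a_11(k)"
    delta = N1**2 + 4 * eps * a_bar * N1**2
    lower = (-N1 - np.sqrt(delta)) / (2 * eps)
    upper = (-N1 + np.sqrt(delta)) / (2 * eps)
    return lower, upper

# Hypothetical data: filter probabilities, a_11(k) per regime, and a state deviation N1
phi = np.array([0.5, 0.5])
a11 = np.array([1.0, 2.0])
N1 = 0.7
lo, hi = passive_control_bounds(N1, phi, a11)
u21 = -0.5 * N1            # a simple admissible choice, vanishing at N1 = 0
assert lo < u21 < hi
print(lo, u21, hi)
```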

Remark 4.1

Equation (4.2) is globally asymptotically stable in probability at the equilibrium point \(x^{*}\). As can be seen from the form of \(g_{2}\), as long as we control the primary producer (the prey \(x_{1}\)), we can achieve control of the entire system.

5 Numerical examples

This section is devoted to a couple of examples which demonstrate the effectiveness of the proposed theory. First, we consider a discrete-time approximation of the Wonham filter. The method used here is similar to that in [29] (pp. 184-191), so we only outline the procedure. Note that we are mainly interested in sample path approximations of the filters. Using the approach based on Clark transformations [30], we transform the stochastic differential equations and design a numerical procedure for the transformed system.

Let \(u_{j}(t):=\ln\varphi_{j}(t),t\geq0,j=1,\ldots,m\), namely, \(\varphi _{j}(t)=e^{u_{j}(t)}\). Applying the Itô formula to equation (2.2), one has

$$\begin{aligned} du_{j}(t) ={}& \biggl[q_{jj}+\sum _{k\neq j}q_{kj} \frac{\varphi_{k}(t)}{\varphi _{j}(t)}- \beta^{-2}(t) \bigl(f(j)-\bar{f} \bigl(\varphi(t) \bigr) \bigr) \bar{f} \bigl(\varphi(t) \bigr) \\ &{}-\frac{1}{2}\beta^{-2}(t) \bigl(f(j)-\bar{f} \bigl( \varphi(t) \bigr) \bigr)^{2} \biggr]\,dt \\ &{} +\beta^{-2}(t) \bigl(f(j)-\bar{f} \bigl(\varphi(t) \bigr) \bigr) \,dy(t),\quad j=1,\ldots,m, \end{aligned}$$
(5.1)

where \(u_{j}(0)=\ln\varphi_{j}(0)\). Then we use Euler-Maruyama type approximations of equations (2.1) and (5.1) to simulate the dynamics of the population system.

Let \(\varepsilon >0\) be the step size; then an Euler-Maruyama type approximation (see [31]) of equation (2.1) is given by

$$ \textstyle\begin{cases} y_{k+1}=y_{k}+ \varepsilon f_{k}^{\varepsilon }(\alpha)+\sqrt{ \varepsilon } \sigma_{k}\xi _{k}, \\ y_{0}=0, \quad \mbox{w.p.1}, \end{cases} $$
(5.2)

where \(\xi_{k}=\frac{\omega(\varepsilon (k+1))-\omega(\varepsilon k)}{\sqrt { \varepsilon }}\), and \(f_{k}^{\varepsilon }(\alpha)\) is f evaluated at a discrete-time Markov chain with state space S.

Discretizing the transformed system (5.1) yields the following algorithm:

$$ \textstyle\begin{cases}u_{k+1}^{j} =u_{k}^{j}+\varepsilon r_{k}^{j}+ \sqrt{ \varepsilon }\beta _{k}^{-2} (f_{k}^{j}- \bar{f}_{k}(\varphi) )\Delta y_{k},\qquad u_{0}^{j}= \log\varphi_{0}^{j}, \\ r_{k}^{j}=q^{jj}+\sum_{i\neq j}q^{ij}\frac{\varphi_{k}^{i}}{\varphi_{k}^{j}} -\beta_{k}^{-2} (f_{k}^{j}-\bar{f}_{k}(\varphi) ) \bar{f}_{k}(\varphi)-\frac {1}{2}\beta_{k}^{-2} (f_{k}^{j}-\bar{f}_{k}(\varphi) )^{2}, \\ \varphi_{k+1}^{j}=\exp (u_{k+1}^{j} ),\qquad \varphi_{0}^{j}=\varphi^{j}(0). \end{cases} $$
(5.3)

Equivalently, we can write the above equations in terms of the white noise \(\xi_{k}\) as follows:

$$ \textstyle\begin{cases}u_{k+1}^{j} =u_{k}^{j}+\varepsilon r_{k}^{j}+ \sqrt{ \varepsilon }\beta _{k}^{-2} (f_{k}^{j}- \bar{f}_{k}(\varphi) )\xi_{k}, \qquad u_{0}^{j}= \log\varphi_{0}^{j}, \\ r_{k}^{j}=q^{jj}+\sum_{i\neq j}q^{ij}\frac{\varphi_{k}^{i}}{\varphi_{k}^{j}} -\beta_{k}^{-2} (f_{k}^{j}-\bar{f}_{k}(\varphi) ) \bar{f}_{k}(\varphi)-\beta _{k}^{-2} (f_{k}^{j}-\bar{f}_{k}(\varphi) )f_{k}(\varphi)\\ \phantom{r_{k}^{j}=}{}-\frac{1}{2}\beta _{k}^{-2} (f_{k}^{j}-\bar{f}_{k}(\varphi) )^{2}, \\ \varphi_{k+1}^{j}=\exp (u_{k+1}^{j} ), \qquad\varphi_{0}^{j}=\varphi^{j}(0). \end{cases} $$
(5.4)
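A minimal Python sketch of the filter recursion is given below. It is a direct Euler discretization of the log-probability equation (5.1), in the spirit of algorithm (5.3), driven by a simulated hidden chain and the observation increments of (5.2); the two-regime data mirror Example 5.1 below, and the renormalisation of φ at every step is a numerical safeguard that the analytic recursion does not require.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two regimes, mirroring Example 5.1 (values assumed for illustration)
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
f = np.array([-1.0, 1.0])
beta = 2.0
eps, n_steps = 0.005, 10_000

alpha = 0                             # hidden chain (0-based index)
phi = np.array([0.5, 0.5])            # filter initial guess
u = np.log(phi)                       # log-probabilities u_j = ln(phi_j)

for _ in range(n_steps):
    # hidden chain and noisy observation increment, as in (5.2)
    if rng.random() < Q[alpha, 1 - alpha] * eps:
        alpha = 1 - alpha
    dy = f[alpha] * eps + beta * np.sqrt(eps) * rng.standard_normal()

    # Euler step for the log-probabilities, cf. (5.1) and (5.3)
    f_bar = float(phi @ f)
    drift = (np.diag(Q)
             + np.array([Q[1 - j, j] * phi[1 - j] / phi[j] for j in range(2)])
             - (f - f_bar) * f_bar / beta**2
             - 0.5 * (f - f_bar)**2 / beta**2)
    u = u + eps * drift + (f - f_bar) / beta**2 * dy
    phi = np.exp(u)
    phi = np.clip(phi, 1e-12, None)
    phi = phi / phi.sum()             # numerical safeguard: keep phi on the simplex
    u = np.log(phi)

print("filter:", phi, "true state:", alpha + 1)
```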

Now we will give three numerical examples to illustrate our results.

Example 5.1

Suppose we have the Markov chain \(\alpha(t)\) on the state space \(S=\{ 1,2\}\) with the generator \(Q= \bigl({\scriptsize\begin{matrix}{} -2 & 2 \cr 1 & -1 \end{matrix}} \bigr) \). The Markov chain can only be observed through \(dy=f(\alpha (t))\,dt+2d\omega\), where \(f(1)=-1 \mbox{ and } f(2)=1\). When \(\alpha(t)=1\),

$$\begin{aligned} &a_{10}(1)=4,\qquad a_{11}(1)=1,\qquad a_{12}(1)=2, \qquad \sigma_{1}(1)=1.5, \\ &a_{20}(1)=1,\qquad a_{21}(1)=1,\qquad a_{23}(1)=2, \qquad \sigma_{2}(1)=3, \\ &a_{30}(1)=1,\qquad a_{32}(1)=1, \qquad\sigma_{3}(1)=2. \end{aligned}$$

When \(\alpha(t)=2\),

$$\begin{aligned} &a_{10}(2)=3,\qquad a_{11}(2)=2,\qquad a_{12}(2)=1, \qquad \sigma_{1}(2)=2.5, \\ &a_{20}(2)=2,\qquad a_{21}(2)=2,\qquad a_{23}(2)=1, \qquad \sigma_{2}(2)=2, \\ &a_{30}(2)=2,\qquad a_{32}(2)=2, \qquad \sigma_{3}(2)=1.5. \end{aligned}$$

Then the species size \(x_{i}(t),i=1,2,3\), and Wonham’s filter \(\varphi (t)\) satisfy

$$ \textstyle\begin{cases}dx_{1} =x_{1} [(4-x_{1}-2x_{2})\varphi_{1}+(3-2x_{1}-x_{2}) \varphi_{2} ]\,dt \\ \phantom{dx_{1} =}{} +x_{1} [1.5(4-x_{1}-2x_{2}) \varphi_{1}+2.5(3-2x_{1}-x_{2}) \varphi_{2} ]\,d\omega_{1}, \\ dx_{2} =x_{2} [(-1+x_{1}-2x_{3}) \varphi_{1}+(-2+2x_{1}-x_{3})\varphi_{2} ]\,dt \\ \phantom{dx_{1} =}{} +x_{2} [3(-1+x_{1}-2x_{3}) \varphi_{1}+2(-2+2x_{1}-x_{3}) \varphi_{2} ]\,d\omega_{2}, \\ dx_{3} =x_{3} [(-1+x_{2}) \varphi_{1}+(-2+2x_{2})\varphi_{2} ]\,dt \\ \phantom{dx_{1} =}{} +x_{3} [2(-1+x_{2})\varphi_{1}+1.5(-2+2x_{2}) \varphi_{2} ]\,d\omega_{3}, \\ d\varphi_{1} = [-2\varphi_{1}+3\varphi_{2}]\,dt+ \frac{1}{2} (f(1)-\bar {f}(\varphi) )\varphi_{1}\,d\bar{ \omega}, \\ d\varphi_{2} = [2\varphi_{1}-3\varphi_{2}]\,dt+ \frac{1}{2} (f(2)-\bar{f}(\varphi ) )\varphi_{2}\,d\bar{ \omega}, \end{cases} $$
(5.5)

where \(\bar{f}(\varphi)=-\varphi_{1}+\varphi_{2}\).

By the method mentioned in [32], a so-called Itô-Taylor expansion can be formed by applying Itô’s result, which is a fundamental tool of stochastic calculus. Truncating the Itô-Taylor expansion at an appropriate point produces Milstein’s method for the first three equations of equation (5.5):

$$ \textstyle\begin{cases} x_{1,k+1} =x_{1,k}+x_{1,k} [(4-x_{1,k}-2x_{2,k})\varphi _{k}^{(1)}+(3-2x_{1,k}-x_{2,k}) \varphi_{k}^{(2)} ]\varepsilon \\ \phantom{x_{1,k+1} =}{}+x_{1,k} [1.5(4-x_{1,k}-2x_{2,k})\varphi _{k}^{(1)}+2.5(3-2x_{1,k}-x_{2,k}) \varphi_{k}^{(2)} ] \sqrt{\varepsilon }\,\xi_{1,k} \\ \phantom{x_{1,k+1} =}{}+\frac{1}{2}x_{1,k}^{2} [1.5(4-x_{1,k}-2x_{2,k}) \varphi _{k}^{(1)}+2.5(3-2x_{1,k}-x_{2,k}) \varphi_{k}^{(2)} ]^{2} (\varepsilon \xi_{1,k}^{2}-\varepsilon ), \\ x_{2,k+1} =x_{2,k}+x_{2,k} [(-1+x_{1,k}-2x_{3,k}) \varphi _{k}^{(1)}+(-2+2x_{1,k}-x_{3,k}) \varphi_{k}^{(2)} ]\varepsilon \\ \phantom{x_{1,k+1} =}{} +x_{2,k} [3(-1+x_{1,k}-2x_{3,k})\varphi _{k}^{(1)}+2(-2+2x_{1,k}-x_{3,k}) \varphi_{k}^{(2)} ] \sqrt{\varepsilon }\,\xi_{2,k} \\ \phantom{x_{1,k+1} =}{} +\frac{1}{2}x_{2,k}^{2} [3(-1+x_{1,k}-2x_{3,k}) \varphi _{k}^{(1)}+2(-2+2x_{1,k}-x_{3,k}) \varphi_{k}^{(2)} ]^{2} (\varepsilon \xi_{2,k}^{2}-\varepsilon ), \\ x_{3,k+1} =x_{3,k}+x_{3,k} [(-1+x_{2,k}) \varphi_{k}^{(1)}+(-2+2x_{2,k}) \varphi_{k}^{(2)} ]\varepsilon \\ \phantom{x_{1,k+1} =}{}+x_{3,k} [2(-1+x_{2,k})\varphi_{k}^{(1)}+1.5(-2+2x_{2,k}) \varphi_{k}^{(2)} ] \sqrt{\varepsilon }\,\xi_{3,k} \\ \phantom{x_{1,k+1} =}{} +\frac{1}{2}x_{3,k}^{2} [2(-1+x_{2,k}) \varphi_{k}^{(1)}+1.5(-2+2x_{2,k}) \varphi_{k}^{(2)} ]^{2} (\varepsilon \xi_{3,k}^{2}-\varepsilon ), \\ \varphi_{k}^{(1)}=\exp (u_{k}^{(1)} ), \\ \varphi_{k}^{(2)}=\exp (u_{k}^{(2)} ), \end{cases} $$
(5.6)

where the initial conditions are \(x_{1}(0)=3\), \(x_{2}(0)=4\), \(x_{3}(0)=5\), \(\varphi_{1}(0)=0.1\), \(\varphi_{2}(0)=0.8\). Taking the step size \(\varepsilon=0.005\), we perform a computer simulation of 10,000 iterations of a sample path of \(x_{i}(t)\), \(i=1,2,3\). The sample paths of the population densities \(x_{i}(t)\) are shown in Figure 1(a), and their corresponding probability density functions (PDFs) are shown in Figure 2(a), (b) and (c), respectively.
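To make the simulation concrete, here is a compact Python sketch of one sample run of scheme (5.6) under the parameters and initial data above. It is not the authors' code: the filter probabilities are advanced by a plain Euler step of the last two equations of (5.5), driven by the innovation computed from a simulated observation path (rather than through the log-transformed recursion \(u_{k}\) used in (5.6)), and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])            # generator of the hidden chain
f = np.array([-1.0, 1.0])              # f(1), f(2)
sigma_obs = 2.0
eps = 0.005
n_steps = 10_000

def drift_diff(x, phi):
    """Drift and diffusion factors of the three species equations in (5.5)."""
    F = np.array([(4 - x[0] - 2*x[1])*phi[0] + (3 - 2*x[0] - x[1])*phi[1],
                  (-1 + x[0] - 2*x[2])*phi[0] + (-2 + 2*x[0] - x[2])*phi[1],
                  (-1 + x[1])*phi[0] + (-2 + 2*x[1])*phi[1]])
    G = np.array([1.5*(4 - x[0] - 2*x[1])*phi[0] + 2.5*(3 - 2*x[0] - x[1])*phi[1],
                  3*(-1 + x[0] - 2*x[2])*phi[0] + 2*(-2 + 2*x[0] - x[2])*phi[1],
                  2*(-1 + x[1])*phi[0] + 1.5*(-2 + 2*x[1])*phi[1]])
    return F, G

x = np.array([3.0, 4.0, 5.0])           # x1(0), x2(0), x3(0)
phi = np.array([0.1, 0.8])              # phi1(0), phi2(0)
alpha = 0                               # hidden state index (0 <-> 1, 1 <-> 2)
path = np.empty((n_steps, 3))

for k in range(n_steps):
    # hidden chain and observation increment, as in the earlier sketch
    if rng.random() < -Q[alpha, alpha] * eps:
        alpha = 1 - alpha
    dy = f[alpha]*eps + sigma_obs*np.sqrt(eps)*rng.standard_normal()

    # Milstein-type update (5.6) for the three species
    F, G = drift_diff(x, phi)
    xi = rng.standard_normal(3)
    x = (x + x*F*eps + x*G*np.sqrt(eps)*xi
         + 0.5*(x**2)*(G**2)*(eps*xi**2 - eps))
    x = np.maximum(x, 1e-8)             # keep densities positive

    # Euler step of the filter equations; drift coefficients taken verbatim from (5.5)
    f_bar = phi @ f
    d_innov = (dy - f_bar*eps) / sigma_obs      # innovation increment d(omega_bar)
    phi = (phi
           + np.array([-2*phi[0] + 3*phi[1],
                        2*phi[0] - 3*phi[1]])*eps
           + 0.5*(f - f_bar)*phi*d_innov)
    phi = np.clip(phi, 1e-12, None)
    phi = phi / phi.sum()               # renormalize to a probability vector

    path[k] = x
```

A histogram of each column of path (after discarding a short burn-in) would correspond to the empirical PDFs reported in Figure 2.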

Figure 1

The sample paths of the three-species densities. (a) The sample paths of \(x_{i}(t)\), \(i=1,2,3\), with the parameter values used in Example 5.1; (b) the sample paths of \(x_{i}(t)\), \(i=1,2,3\), under the control \(\hat{u}_{1}=(-\sqrt{x_{2}}\frac{2(x_{1}-3)}{x_{1}},-\sqrt{x_{3}}\frac{2(x_{2}-1)}{x_{2}}, -\sqrt{x_{1}}\frac{2(x_{3}-2)}{x_{3}})\), with the parameter values used in Example 5.2; (c) the sample paths of \(x_{i}(t)\), \(i=1,2,3\), under the control \(\hat{u}_{2}=(-\sqrt{x_{2}}\frac{2(x_{1}-3)}{x_{1}},0,0)\), with the parameter values used in Example 5.3.

Figure 2

The probability density functions (PDFs) of the three-species densities. (a) The probability density function (PDF) of \(x_{1}(t)\) in Example 5.1; (b) the probability density function (PDF) of \(x_{2}(t)\) in Example 5.1; (c) the probability density function (PDF) of \(x_{3}(t)\) in Example 5.1.

Example 5.2

Building on Example 5.1 and applying Theorem 3.3, we select \(\hat{g}_{1}(x)=(\sqrt{x_{2}},\sqrt{x_{3}}, \sqrt{x_{1}})\) and \(\hat{u}_{1}(x)=(-\sqrt{x_{2}}\frac{2(x_{1}-3)}{x_{1}},-\sqrt{x_{3}}\frac{2(x_{2}-1)}{x_{2}}, -\sqrt{x_{1}}\frac{2(x_{3}-2)}{x_{3}})\); the other parameters are the same as in Example 5.1.

The numerical algorithm is the same as above. The sample paths of the population densities \(x_{i}(t)\) are shown in Figure 1(b), and their corresponding PDFs are shown in Figure 3(a), (b) and (c), respectively.
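As a small illustration (not the authors' code), the feedback law \(\hat{u}_{1}\) and the map \(\hat{g}_{1}\) of this example can be evaluated along the simulated path as sketched below; how exactly they enter the controlled dynamics is fixed by the controlled system of Section 3, which is not reproduced here, so the closing comment only indicates the intended use schematically.

```python
import numpy as np

def u_hat1(x):
    """Feedback control of Example 5.2 evaluated at x = (x1, x2, x3)."""
    x1, x2, x3 = x
    return np.array([-np.sqrt(x2) * 2.0*(x1 - 3.0)/x1,
                     -np.sqrt(x3) * 2.0*(x2 - 1.0)/x2,
                     -np.sqrt(x1) * 2.0*(x3 - 2.0)/x3])

def g_hat1(x):
    """The map g_hat1 selected in Example 5.2."""
    x1, x2, x3 = x
    return np.array([np.sqrt(x2), np.sqrt(x3), np.sqrt(x1)])

# Usage example: evaluate both at the initial state of Example 5.1.
x0 = np.array([3.0, 4.0, 5.0])
print(u_hat1(x0))   # first component is 0 because x1(0) = 3
print(g_hat1(x0))

# Schematic use only: inside the Milstein loop of Example 5.1, the drift of
# the species equations would be augmented by the control contribution built
# from u_hat1 (and g_hat1), in the form prescribed by Theorem 3.3.
```

Example 5.3 is handled analogously, with \(\hat{g}_{2}\) and \(\hat{u}_{2}\) in place of \(\hat{g}_{1}\) and \(\hat{u}_{1}\).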

Figure 3

The probability density functions (PDFs) of the three-species densities. (a) The probability density function (PDF) of \(x_{1}(t)\) in Example 5.2; (b) the probability density function (PDF) of \(x_{2}(t)\) in Example 5.2; (c) the probability density function (PDF) of \(x_{3}(t)\) in Example 5.2.

Example 5.3

Building on Example 5.1 and following Theorem 4.2, we select \(\hat{g}_{2}(x)=(x_{1},0,0)\) and \(\hat{u}_{2}(x)=(-\frac{2x_{2}(x_{1}-3)}{x_{1}^{2}},0,0)\); the other parameters are the same as in Example 5.1.

Using the same numerical algorithm, the sample paths of the population densities \(x_{i}(t)\) are shown in Figure 1(c), and their corresponding PDFs are shown in Figure 4(a), (b) and (c), respectively.

Figure 4

The probability density functions (PDFs) of the three-species densities. (a) The probability density function (PDF) of \(x_{1}(t)\) in Example 5.3; (b) the probability density function (PDF) of \(x_{2}(t)\) in Example 5.3; (c) the probability density function (PDF) of \(x_{3}(t)\) in Example 5.3.

6 Conclusion

In this paper, the global asymptotic stability of a three-species food chain system with external disturbance and a hidden Markov chain is established. Under \(H_{\infty}\) control and passive control, we prove that a suitable and simple control can maintain the persistence of the original system even in the presence of a disturbance input, which is the main contribution of this paper. From a practical point of view, a proper control helps to manage and develop population systems in a reasonable way. Moreover, the robust stability of the given system will be studied in future work.

References

  1. Kratina, P, et al.: Stability and persistence of food webs with omnivory: is there a general pattern? Ecosphere 6, 794-804 (2012)

  2. Freedman, HI, Waltman, P: Mathematical analysis of some three-species food chain models. Math. Biosci. 33, 257-276 (1977)

  3. Ma, HP, et al.: Global stability of positive periodic solutions and almost periodic solutions for a discrete competitive system. Discrete Dyn. Nat. Soc. 2015, 1-13 (2015)

  4. Zhou, SR, et al.: Persistence and global stability of positive periodic solutions of three species food chains with omnivory. J. Math. Anal. Appl. 324, 397-408 (2006)

  5. Krikorian, N: The Volterra model for three species predator-prey systems: boundedness and stability. J. Math. Biol. 7, 117-132 (1979)

  6. Hsu, SB, et al.: Analysis of three species Lotka-Volterra food web models with omnivory. J. Math. Anal. Appl. 426, 659-687 (2015)

  7. Mao, X: Delay population dynamics and environmental noise. Stoch. Dyn. 5, 149-162 (2005)

  8. Mao, X, et al.: Environmental Brownian noise suppresses explosions in population dynamics. Stoch. Process. Appl. 97, 95-110 (2002)

  9. Mao, X, et al.: Asymptotic behavior of the stochastic Lotka-Volterra model. J. Math. Anal. Appl. 287, 141-156 (2003)

  10. Pang, S, et al.: Asymptotic property of stochastic population dynamics. Dyn. Contin. Discrete Impuls. Syst. 15, 603-620 (2008)

  11. Du, NH, et al.: Dynamical behavior of Lotka-Volterra competition systems: non-autonomous bistable case and the effect of telegraph noise. J. Comput. Appl. Math. 170, 399-422 (2004)

  12. Luo, Q, et al.: Stochastic population dynamics under regime switching. J. Math. Anal. Appl. 334, 69-84 (2007)

  13. Takeuchi, Y, et al.: Evolution of predator-prey systems described by a Lotka-Volterra equation under random environment. J. Math. Anal. Appl. 323, 938-957 (2006)

  14. Bercu, B, et al.: Almost sure stabilization for feedback controls of regime-switching linear systems with a hidden Markov chain. IEEE Trans. Autom. Control 54, 2114-2125 (2009)

  15. Tran, K, Yin, G: Hybrid competitive Lotka-Volterra ecosystems with a hidden Markov chain. Control Decis. 1, 51-74 (2014)

  16. Willems, JC: Dissipative dynamical systems, part I: general theory. Arch. Ration. Mech. Anal. 45, 321-351 (1972)

  17. Willems, JC: Dissipative dynamical systems, part II: linear systems with quadratic supply rates. Arch. Ration. Mech. Anal. 45, 352-393 (1972)

  18. Florchinger, P: A passive system approach to feedback stabilization of nonlinear control stochastic systems. SIAM J. Control Optim. 37, 1848-1864 (1999)

  19. Florchinger, P: Stabilization of passive nonlinear stochastic differential systems by bounded feedback. Stoch. Anal. Appl. 21, 1255-1282 (2003)

  20. Zhang, WH, et al.: State feedback \(H_{\infty}\) control for a class of nonlinear stochastic systems. SIAM J. Control Optim. 44, 1973-1991 (2006)

  21. Zhang, WH, et al.: Nonlinear stochastic \(H_{2}/H_{\infty}\) control with state-dependent noise. In: American Control Conference, Portland, USA, June 8-10, 2005 (2005)

  22. Wonham, WM: Some applications of stochastic differential equations to optimal nonlinear filtering. J. Soc. Ind. Appl. Math. Ser. A Control 2, 347-369 (1965)

  23. Yu, L, et al.: Asset allocation for regime-switching market models under partial observation. Dyn. Syst. Appl. 23, 39-62 (2014)

  24. Yin, G, Zhang, Q: Discrete-Time Markov Chains: Two-Time-Scale Methods and Applications. Springer, New York (2005)

  25. Tran, K, Yin, G: Stochastic competitive Lotka-Volterra ecosystems under partial observation: feedback controls for permanence and extinction. J. Franklin Inst. 351, 4039-4064 (2014)

  26. Berman, N, et al.: \(H_{\infty}\)-like control for nonlinear stochastic systems. Syst. Control Lett. 55, 247-257 (2006)

  27. van der Schaft, AJ: \(L_{2}\)-Gain and Passivity Techniques in Nonlinear Control. Springer, Berlin (1999)

  28. Florchinger, P: A passive system approach to feedback stabilization of nonlinear control stochastic systems. SIAM J. Control Optim. 37, 1848-1864 (1999)

  29. Yin, G, Zhang, Q: Discrete-Time Markov Chains: Two-Time-Scale Methods and Applications. Springer, New York (2005)

  30. Malcolm, WP, et al.: On the numerical stability of time-discretized state estimation via Clark transformations. In: Proceedings of the 42nd IEEE Conference on Decision and Control, vol. 2, pp. 1406-1412 (2003)

  31. Kloeden, PE, Platen, E: Numerical Solution of Stochastic Differential Equations, 3rd edn. Springer, Berlin (1999)

  32. Higham, DJ: An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 43, 525-546 (2001)

Acknowledgements

The research is supported by the National Natural Science Foundation of China (Nos. 11661064, 11461053). The authors would like to thank the editors and anonymous reviewers for their valuable comments, which improved the presentation of this paper.

Author information

Corresponding author

Correspondence to Qimin Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Ma, Y., Zhang, Q., Wang, L. et al. Dissipative control of a three-species food chain stochastic system with a hidden Markov chain. Adv Differ Equ 2017, 102 (2017). https://doi.org/10.1186/s13662-017-1160-z

  • DOI: https://doi.org/10.1186/s13662-017-1160-z

Keywords