Dissipative control of a three-species food chain stochastic system with a hidden Markov chain
Advances in Difference Equations volume 2017, Article number: 102 (2017)
Abstract
This paper focuses on a three-species food chain system formulated as stochastic differential equations with regime switching driven by a hidden Markov chain. First, using the Wonham filter, we estimate the hidden Markov chain from observations of the chain corrupted by Gaussian white noise. Then two special kinds of dissipative control strategies are proposed for the resulting model; that is, under \(H_{\infty}\) control and passive control, sufficient conditions for global asymptotic stability are established, respectively. Finally, numerical examples are given to illustrate the effectiveness of the theoretical results.
1 Introduction
The dynamic relationship between predator and prey has been studied extensively in ecology and mathematical ecology. Because of its wide occurrence and importance, there is an extensive literature on three-species predator-prey systems (see, e.g., [1–6]). In [2], Freedman and Waltman discussed the following three-species food chain model:
where \(x_{1}(t),x_{2}(t),x_{3}(t)\) denote the densities of the prey, the predator and the top-predator population at time t, respectively. The parameters are all positive, and \(a_{10},a_{20}\) and \(a_{30}\) are the intrinsic growth rate of the prey \(x_{1}(t)\), the death rate of the predator \(x_{2}(t)\), and the death rate of the top-predator \(x_{3}(t)\), respectively. The coefficient \(a_{11}\) denotes the intra-specific competition of species \(x_{1}(t)\), and \(a_{12},a_{23}\) are the rates of consumption; \(a_{21},a_{32}\) measure the contribution of the victim to the growth of the consumer.
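Written out with these parameters, model (1.1) has the classical Lotka-Volterra food-chain structure implied by the description above:
\[
\begin{aligned}
\dot{x}_{1}(t)&=x_{1}(t)\bigl[a_{10}-a_{11}x_{1}(t)-a_{12}x_{2}(t)\bigr],\\
\dot{x}_{2}(t)&=x_{2}(t)\bigl[-a_{20}+a_{21}x_{1}(t)-a_{23}x_{3}(t)\bigr],\\
\dot{x}_{3}(t)&=x_{3}(t)\bigl[-a_{30}+a_{32}x_{2}(t)\bigr].
\end{aligned}
\]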
Recently, model (1.1) has been studied extensively. For example, Zhou et al. [4] investigated the existence and global stability of positive periodic solutions of delayed discrete food chains with omnivory. Krikorian [5] considered the Volterra predator-prey model in the three-species case and proved global properties of its solutions. Hsu et al. [6] considered a three-species Lotka-Volterra food web model with omnivory, which is defined as feeding on more than one trophic level. In addition, population systems in the natural world are inevitably subject to environmental noise, which comes in various forms. Many papers focus on population systems perturbed by white noise; see [7–10]. In particular, Mao [7] showed that different structures of white noise may have different effects on population systems, and Mao et al. [8] revealed that environmental noise can suppress a potential population explosion. In contrast to the existing literature, we allow the system parameters to be disturbed by white noise, so that the parameters of equation (1.1) become
where \(\dot{\omega}_{i}(t)\) denotes white noise, and \(\sigma_{i}\) is a positive constant representing the intensity of the white noise. The corresponding stochastic version of equation (1.1) then takes the following form:
where \(\omega_{i}(t)\ (i=1,2,3)\) are mutually independent standard Brownian motions with \(\omega_{i}(0)=0\). It was shown in [7] that if the noise intensity is sufficiently large, the population may become extinct with probability one. In this paper, we assume that the noise is relatively small.
In recent years, stochastic population systems under regime switching have received much attention [11–15]. To describe such sudden shifts between different regimes, we introduce a Markov chain into the underlying three-species food chain stochastic model (1.2). Let \(\alpha(t)\) be a right-continuous Markov chain taking values in a finite state space \(S=\{1,2,\ldots,m\}\). The population system under regime switching can therefore be described by the following model:
We assume that the Markov chain \(\alpha(t)\) is independent of the Brownian motions \(\omega_{i}(t)\). In much of the literature, the Markov chain is assumed to be observable. In practical problems, however, \(\alpha(t)\) is often unobservable: even in the case of two regimes, it may not be possible to identify whether the environment is the first or the second one. It is therefore necessary to consider a hidden Markov chain. In the real world, we cannot observe \(\alpha(t)\) directly; we can only obtain a noise-corrupted observation (\(\alpha(t)\) plus noise). Motivated by the studies of Bercu [14] and Tran [15], we assume that the Markov chain is unobservable.
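To fix the notation, equation (1.3) can be read as the stochastic model (1.2) with every coefficient evaluated along the chain; that both the rates and the noise intensities switch with the regime is an assumption of this restatement:
\[
\begin{aligned}
dx_{1}(t)&=x_{1}(t)\bigl[a_{10}(\alpha(t))-a_{11}(\alpha(t))x_{1}(t)-a_{12}(\alpha(t))x_{2}(t)\bigr]\,dt+\sigma_{1}(\alpha(t))x_{1}(t)\,d\omega_{1}(t),\\
dx_{2}(t)&=x_{2}(t)\bigl[-a_{20}(\alpha(t))+a_{21}(\alpha(t))x_{1}(t)-a_{23}(\alpha(t))x_{3}(t)\bigr]\,dt+\sigma_{2}(\alpha(t))x_{2}(t)\,d\omega_{2}(t),\\
dx_{3}(t)&=x_{3}(t)\bigl[-a_{30}(\alpha(t))+a_{32}(\alpha(t))x_{2}(t)\bigr]\,dt+\sigma_{3}(\alpha(t))x_{3}(t)\,d\omega_{3}(t),
\end{aligned}
\]
and (1.2) corresponds to freezing the coefficients at a single regime.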
On the other hand, few authors have applied dissipative controls to explain biological phenomena in population systems. The dissipativity theory of dynamical systems was introduced by Willems [16, 17] and has been of particular interest to researchers in physics, system theory, and control engineering. As two special cases of dissipative control, passive control [18, 19] and \(H_{\infty}\) control [20, 21] have been widely used in these systems. Therefore, in this paper we use dissipative controls to study the dynamical behavior of a three-species food chain model. Specifically, in order to keep the ecosystem in balance, human beings need to manage and control the populations; thus we take advantage of passive control and \(H_{\infty}\) control to study the persistence of a three-species food chain model.
Motivated by the above discussion, in this paper we investigate the global asymptotic stability of equation (1.3) under \(H_{\infty}\) control and passive control. For such partially observable systems, it is essential to convert them into completely observable ones, which can be done by using a Wonham filter [22–24]; we give only a sketch of the Wonham filter in Section 2. In contrast to the existing results, the new contributions of this article are summarized as follows:
- (i) We use Wonham’s filter to build a stochastic three-species food chain system when the Markov chain is only observable in white noise.
- (ii) We study the global asymptotic stability of the three-species food chain model (1.3) under \(H_{\infty}\) control.
- (iii) We prove the persistence of the three-species food chain model (1.3) under passive control.
The rest of the paper is arranged as follows. In Section 2, we give some preliminaries, in which Wonham’s filter is introduced and the partially observable model is converted into a completely observable one. In Section 3, we show the global asymptotic stability of the given model under \(H_{\infty}\) control. In Section 4, we consider the global asymptotic stability of the given model under passive control, and numerical examples are provided in Section 5. Finally, the paper is concluded with some further remarks.
2 Preliminaries
In this section, we introduce notations and some results which are necessary for obtaining the main results in the paper. Let \(\alpha(t)\) denote a finite state Markov chain taking values in \(S=\{1,2,\ldots,m\} \) with the generator \(Q =(q_{ij})\in R^{m\times m}\). \(\text{1}_{E}\) denotes the indicator function of the event E. Assume that both the standard Brownian motion \(\omega_{i}\) and the Markov chain \(\alpha(t)\) are defined on a complete filtered probability space \((\Omega, \mathscr {F},P)\) with an associated non-decreasing family of σ-algebras \(\{\mathscr {F}_{t}\}\). Throughout the paper we need the following notation:
Next we recall some results on Wonham’s filter. As suggested in [25], the Markov chain \(\alpha(t)\) is observed through the following stochastic differential equation:
where \(f:\mathscr {M}\mapsto R\) is a real-valued function, \(\beta (t):[0,\infty)\mapsto R\) is a continuously differentiable function satisfying \(\inf_{t\geq0}\beta(t)>0\), and \(B(t)\) is a standard Brownian motion independent of \(\omega_{i}\). In (2.1), the Markov chain can only be observed in Gaussian white noise. It has been proved in [22] that the posterior probability distribution of \(\alpha(\cdot)\) satisfies the following stochastic differential equations:
where the initial distribution of \(\alpha(t)\) is \(\varphi^{0}=(\varphi _{1}(0),\ldots,\varphi_{m}(0))\). Introduce the one dimensional innovation process \(d\bar{\omega}(t)=\beta^{-1}(t)(dy(t)-\bar {f}(\varphi(t))\,dt), \bar{\omega}(0)=0\), and then equation (2.2) can be rewritten as
The above equation is equivalent to
where \(C(t)=\operatorname{diag}(f(1),\ldots,f(m))-\bar{f}(\varphi(t))I_{m} \) and \(I_{m}\) is the \(m\times m\) identity matrix.
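For orientation, in the standard Wonham-filter notation (see [22, 25]) and with \(C(t)\) as just defined, the observation and filter dynamics take the form
\[
dy(t)=f\bigl(\alpha(t)\bigr)\,dt+\beta(t)\,dB(t),
\qquad
d\varphi(t)=Q^{\prime}\varphi(t)\,dt+\beta^{-1}(t)C(t)\varphi(t)\,d\bar{\omega}(t),
\]
where \(Q^{\prime}\) denotes the transpose of the generator Q; the second equation corresponds to the matrix form (2.3) described above.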
In addition, it should be noticed that equation (1.3) can be written in the following form:
The solution of equation (2.3) is the well-known Wonham filter \(\varphi(t)\), which is an estimate of the hidden state \(p(t)\). Replacing \(p(t)\) with \(\varphi(t)\) in equation (2.4), we get
Hence, equations (2.3) and (2.5) are merged into a completely observable stochastic three-species food chain system. For convenience, we further express it in the following matrix form:
where
Let
When \(\mu>0\), equation (2.5) has a positive equilibrium point \(x^{*}=(x_{1}^{*},x_{2}^{*},x_{3}^{*})\), where
Because an equilibrium point (see [7] for the definition of the equilibrium point, or trivial solution) requires both the drift and the diffusion coefficients to vanish at this point, equation (2.5) has a non-zero equilibrium position, and the equilibrium point of equation (2.5) is easily obtained from this definition.
The completely observable equation (2.6) can be viewed as a diffusion equation, in which the usual diffusion term is replaced by
and driven by \((\omega(t),\bar{\omega}(t))'\).
Let
where
For a sufficiently smooth real-valued function \(h:\mathbb {R}_{+}^{n}\times S_{m}\longmapsto\mathbb{R}\), the operator associated with (2.6) is defined as follows:
If \(h(\cdot)\) is independent of φ, from (2.7) we have
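Under this assumption, the reduced operator acts only through the x-variables; in the standard form for an Itô diffusion (with \(b\) and \(\Sigma\) used here only as shorthand for the drift vector and diffusion matrix of the x-component of (2.6)) it reads
\[
\mathcal{L}h(x)=\sum_{i=1}^{3}b_{i}(x,\varphi)\frac{\partial h(x)}{\partial x_{i}}
+\frac{1}{2}\sum_{i,j=1}^{3}\bigl[\Sigma(x,\varphi)\Sigma^{\prime}(x,\varphi)\bigr]_{ij}\frac{\partial^{2}h(x)}{\partial x_{i}\,\partial x_{j}} .
\]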
3 \(H_{\infty}\) control
For equation (2.5), we apply the following transformation (3.1), which shifts the state to the equilibrium (\(N(t)=x(t)-x^{*}\)):
that is, substituting (3.1) into equation (2.5) yields
where \(N\in\{(N_{1},N_{2},N_{3}):N_{1}+x_{1}^{*}>0,N_{2}+x_{2}^{*}>0,N_{3}+x_{3}^{*}>0\}\). Obviously, the global asymptotic stability in probability of equation (2.5) at the positive equilibrium point \(x^{*}\) is equivalent to the global asymptotic stability in probability of equation (3.2) at the origin \(N^{*}=0\).
Next, we consider the following stochastic nonlinear system with external disturbance input and control:
For convenience and simplicity in the following discussion, we introduce the following notations:
Therefore, equation (3.3) can be expressed as the following affine system:
where \(f(0)=l(0)\equiv0\).
We first give several definitions about equation (3.4). Similar definitions have been given in the literature [20, 21].
Definition 3.1
Let \(\gamma>0\). Equation (3.4) is said to have \(L_{2}\)-gain less than or equal to γ if there exists a state feedback control law \(u_{1}=u_{1}^{*}(t,x)\) which satisfies
where \(\Vert \cdot \Vert \) is the Euclidean norm of a vector, and \(L_{\infty}^{C}\) denotes the set of bounded functions satisfying \(\sup_{t} \Vert \cdot \Vert \leq C\).
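In the stochastic \(H_{\infty}\) literature (see, e.g., [20, 26]), such an \(L_{2}\)-gain requirement is typically written as the inequality
\[
\mathbb{E}\int_{0}^{T}\bigl\Vert z(t)\bigr\Vert ^{2}\,dt
\leq\gamma^{2}\,\mathbb{E}\int_{0}^{T}\bigl\Vert d(t)\bigr\Vert ^{2}\,dt
\quad\text{for all }T>0
\]
and for every admissible disturbance \(d\in L_{\infty}^{C}\), where z denotes the penalized output of (3.4).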
Definition 3.2
Consider the following stochastic system:
(1) The solution \(N(t)\equiv0\) of equation (3.5) is said to be stable in probability if, for any \(\varepsilon >0\),
(2) The solution \(N(t)\equiv0\) of equation (3.5) is said to be locally asymptotically stable in probability if (3.6) holds and
(3) The solution \(N(t)\equiv0\) of equation (3.5) is said to be globally asymptotically stable in probability if (3.6) holds and
Consider the following stochastic system:
where \(N(0)=N_{0}\in R_{+}^{3}\).
Definition 3.3
Equation (3.9) is locally zero-state detectable if there exists a neighborhood \(U_{0}\) of 0 such that, for all \(N_{0}\in U_{0}\), we have
If \(U_{0}=R_{+}^{3}\), then equation (3.9) is called zero-state detectable. In other words, equation (3.9) is locally (or globally) zero-state detectable if there is a neighborhood \(U_{0}\) of 0 such that, for all \(N_{0}\in U_{0}\) (or all \(N_{0}\in R_{+}^{3}\)), \(y(t)\equiv0\) implies \(N(t)\equiv0\).
Lemma 3.1
[26]
Consider equation (3.4). Let \(\gamma>0\) and suppose there exists a smooth solution \(V\geq0\) satisfying the Hamilton-Jacobi inequality
then the closed-loop system (3.4) with the feedback control \(u=-g^{T}(N)V_{N}^{T}\) has \(L_{2}\)-gain from d to z less than or equal to γ.
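For orientation, a Hamilton-Jacobi inequality of this type is usually written, in the notation of the affine system (3.4) and of the functions listed in the proof of Theorem 3.1 below (reading \(\mathtt{C}_{1}\) as the disturbance channel and \(h_{1}\) as the penalized output, both of which are assumptions of this sketch), as
\[
V_{N}f_{1}(N)+\frac{1}{2}\operatorname{tr}\bigl[l_{1}^{T}(N)V_{NN}l_{1}(N)\bigr]
+\frac{1}{2}V_{N}\Bigl[\frac{1}{\gamma^{2}}\mathtt{C}_{1}(N)\mathtt{C}_{1}^{T}(N)-g_{1}(N)g_{1}^{T}(N)\Bigr]V_{N}^{T}
+\frac{1}{2}h_{1}^{T}(N)h_{1}(N)\leq0 ;
\]
see [26] for the precise form used in the stochastic setting.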
Lemma 3.2
Suppose there exists a solution \(V\geq0\) to inequality (3.10) and that the system
is zero-state observable. Then, if \(V(N)>0\) for \(N\neq0\), the closed-loop system \(dN(t)=f_{1}(N)\,dt+l_{1}(N)\,dw-g_{1}(N)g_{1}^{T}(N)V_{N}^{T}\,dt\) is locally asymptotically stable in probability. If, in addition, V is proper, then this closed-loop system is globally asymptotically stable in probability.
The corresponding deterministic result is treated in detail in [27] (pp. 39-42), and the proof of Lemma 3.2 requires only simple modifications of the deterministic argument, so the proof is omitted here.
Next, our goal is to design a suitable and simple control such that equation (3.4) is globally asymptotically stable in probability.
Theorem 3.1
For equation (3.4), let \(\gamma>1\), choose
The corresponding control is
Under this control, the \(L_{2}\) gain of equation (3.4) from \(v_{1}\) to \(z_{1}\) is less than γ.
Proof
Let \(V_{N}=(V_{1},V_{2},V_{3})\); substituting \(f_{1}(N),g_{1}(N),\mathtt {C}_{1}(N),h_{1}(N),l_{1}(N),V_{N},V_{NN}\) into inequality (3.10) yields
Define \(V_{1}=\frac{\gamma N_{1}}{N_{1}+x_{1}^{*}},V_{2}=\frac{\gamma N_{2}}{N_{2}+x_{2}^{*}},V_{3}=\frac{\gamma N_{3}}{N_{3}+x_{3}^{*}}\), then \(V_{NN}=(V_{ij})_{3\times3},i,j=1,2,3\),
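Since \(V_{N}\) is the gradient of V, these components correspond, up to an additive constant, to
\[
V(N)=\gamma\sum_{i=1}^{3}\Bigl[N_{i}-x_{i}^{*}\ln\frac{N_{i}+x_{i}^{*}}{x_{i}^{*}}\Bigr],
\qquad
V_{NN}=\operatorname{diag}\Bigl(\frac{\gamma x_{1}^{*}}{(N_{1}+x_{1}^{*})^{2}},\ \frac{\gamma x_{2}^{*}}{(N_{2}+x_{2}^{*})^{2}},\ \frac{\gamma x_{3}^{*}}{(N_{3}+x_{3}^{*})^{2}}\Bigr),
\]
so the Hessian \(V_{NN}\) is diagonal with \(V_{ii}=\gamma x_{i}^{*}/(N_{i}+x_{i}^{*})^{2}\) and \(V_{ij}=0\) for \(i\neq j\).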
Therefore, inequality (3.11) is converted to the following form:
where
Further, inequality (3.12) is equivalent to the following inequality:
If the following inequality is established, then inequality (3.13) is also established:
That is,
We have \(\gamma>1\), and \(g_{11},g_{22},g_{33}\) satisfy
where
Therefore inequality (3.14) is established. Since each step of the above derivation is reversible, we only need \(g_{11},g_{22},g_{33}\) to satisfy inequality (3.15) for inequality (3.12) to hold. We choose
By \(V_{1}=\frac{\gamma N_{1}}{N_{1}+x_{1}^{*}},V_{2}=\frac{\gamma N_{2}}{N_{2}+x_{2}^{*}},V_{3}=\frac{\gamma N_{3}}{N_{3}+x_{3}^{*}}\), we can get
for \(\gamma>1\); hence we have found a V which satisfies inequality (3.11). According to Lemma 3.1, the conclusion is established. □
Remark 3.1
It is worth noting that if \(g_{11},g_{22},g_{33}\) satisfy equation (3.17), a solution \(V>0\) of inequality (3.11) is obtained. As a result, the control law depends on the particular form of \(g_{11},g_{22},g_{33}\).
Then we show that the zero point of equation (3.4) is globally asymptotically stable in probability without external interference signal.
Theorem 3.2
For \(\gamma>1\), equation (3.4) without exogenous disturbance signal is of the following form:
Under the control \(u_{1}(N)=-g_{1}^{T}(N)V_{N}^{T}(N)\), the closed-loop system
is globally asymptotically stable in probability at the point \(N^{*}=0\).
Proof
For the following equations:
it is easy to verify that it is zero-state detectable. Since \(V(N)>0\) for all \(N\neq0\), \(V(0)=0\), and V is proper, the conclusion follows from Lemma 3.2. □
The following theorem is obtained by returning to the original variables in place of the transformed ones.
Theorem 3.3
For \(\gamma>1\), equation (2.5) with control is of the following form:
where \(\hat{u}_{11}(x)=u_{11}(x-x^{*}),\hat{u}_{12}(x)=u_{12}(x-x^{*}),\hat {u}_{13}(x)=u_{13}(x-x^{*}), \hat{u}_{1}=(\hat{u}_{11}(x),\hat{u}_{12}(x), \hat{u}_{13}(x))^{T}\).
Select
where
Under the control
the closed-loop system is globally asymptotically stable in probability at the positive equilibrium point \(x^{*}\); therefore, the populations continue to survive.
Remark 3.2
From the above discussion we see that, even when there is a disturbance input, applying a suitable control keeps originally persistent populations persistent. This provides a theoretical basis for exploiting natural resources rationally without destroying the ecological balance.
4 Passive control
Equation (2.5) has a positive equilibrium point \(x^{*}=(x_{1}^{*},x_{2}^{*},x_{3}^{*})\), so equation (2.5) is equivalent to the following equation:
It is obvious that the global asymptotic stability in probability of equation (2.5) at the positive equilibrium point \(x^{*}\) is equivalent to that of equation (4.1) at \(x^{*}\).
Equation (4.1) with control term is expressed as
furthermore, equation (4.2) can be expressed in the following matrix form:
where
A function \(s(u_{2},y_{2}):R^{3}\times R^{3}\rightarrow R\) is called a supply rate if it is locally integrable for all input-output pairs satisfying equation (4.3). Then we introduce the notion of passivity to equation (4.3) as follows.
Definition 4.1
Equation (4.3) with the supply rate \(s(u_{2},y_{2})\) is called a dissipative system if there exists a function V defined on \(R^{3}\), called the storage function, such that for all \(x_{0}\) and \(t\geq t_{0}\geq0\) the following dissipation inequality holds:
where \(x(t_{0})=x_{0}\).
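In the stochastic setting this dissipation inequality is usually stated in expectation (a standard form, cf. [18]):
\[
\mathbb{E}\,V\bigl(x(t)\bigr)-V(x_{0})\leq\mathbb{E}\int_{t_{0}}^{t}s\bigl(u_{2}(\tau),y_{2}(\tau)\bigr)\,d\tau .
\]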
Definition 4.2
Equation (4.3) is called passive if it is dissipative with respect to the supply rate \(s(u_{2},y_{2})=u_{2}^{T}y_{2}\).
Lemma 4.1
[28]
For the stochastic nonlinear system
Assume that the equilibrium point \(x^{*}=0\) of the equation \(dx=f_{2}(x)\,dt+l_{2}(x)\,dw\) is asymptotically stable in probability and that there is a function \(V(x)\geq0\) which, for any \(\varepsilon >0\), is positive semi-definite and satisfies
at \(x^{*}=0\). Then equilibrium point \(x^{*}=0\) of equation (4.5) is also asymptotically stable in probability.
Lemma 4.2
[28]
Assuming that there exists a solution \(V\geq0\) to the inequality
with \(V(0)=0\) and \(V(x)>0\) for \(x\neq0\), and that equation (4.5) is zero-state detectable, the equilibrium point \(x^{*}=0\) of the equation \(dx=f_{2}(x)\,dt+l_{2}(x)\,dw\) is asymptotically stable in probability. If V is proper, then the zero point is globally asymptotically stable in probability.
Theorem 4.1
In equation (4.3), taking \(g_{21}(x)=x_{1}\) and \(g_{22}(x)=g_{23}(x)\equiv0\), equation (4.3) is strictly output passive.
Proof
In equation (4.2), define the storage function
where \(\hat{a}_{12}=\min_{k}\{a_{12}(k)\}\) and \(\check{a}_{21}=\max_{k}\{a_{21}(k)\}\), \(k=1,2,\ldots, m\). Then we prove that equation (4.3) is dissipative with respect to the strict output supply rate \(s(u_{2},y_{2})=u_{2}^{T}y_{2}-\varepsilon \Vert y_{2} \Vert ^{2}\). By Lemma 4.2, we only need the following conditions to be established:
where \(V_{x}=(V_{x_{1}},V_{x_{2}},V_{x_{3}})\). For convenience, let \(V_{x}=(V_{1},V_{2},V_{3})\), and we substitute \(V_{x},f_{2}(x),h_{2}(x),l_{2}(x)\) into the first inequality in equation (4.9), to obtain
Simplify this as
To obtain (4.11), we only need to take \(0<\varepsilon <\sum_{k=1}^{m}[\varphi_{k}(t)a_{11}(k)]\), so \(V(x)\) satisfies the first inequality of (4.9). Moreover, substituting \(V_{x}(x)\) and \(g_{21}=x_{1},g_{22}=g_{23}\equiv0\) into the second equality of (4.9), the equality clearly holds. This implies that there exists ε satisfying \(0<\varepsilon <a_{11}\) such that equation (4.3) is strictly dissipative with respect to the output supply rate \(s(u_{2},y_{2})\). □
For equation (4.3), we seek a state feedback control \(u_{2}\) such that the closed-loop system is globally asymptotically stable in probability at the positive equilibrium point \(x^{*}=(x_{1}^{*},x_{2}^{*},x_{3}^{*})\); hence \(u_{2}\) must satisfy certain conditions. In order to find \(u_{2}\), substituting (3.1) into equation (4.3) yields
where
and \(\hat{g}_{21}(N)=g_{21}(N+x^{*}),\hat{u}_{21}(N)=u_{21}(N+x^{*}),\hat {u}_{22}(N)=u_{22}(N+x^{*}), \hat{u} _{23}(N)=u_{23}(N+x^{*})\). It is obvious that the global asymptotic stability in probability of equation (4.3) at the positive equilibrium point \(x^{*}\) is equivalent to the global asymptotic stability in probability of equation (4.12) at the origin \(N^{*}=0\). So the following theorem is obtained.
Theorem 4.2
In equation (4.12), suppose that \(0<\varepsilon <a_{11}, \hat {u}_{22}(N)=\hat{u}_{23}(N)\equiv0\), and \(\hat{u}^{(1)}_{21}(N)<\hat {u}_{21}(N)<\hat{u}^{(2)}_{21}(N), \hat{u}_{21}(0)=0\), where
Then equation (4.12) is asymptotically stable in probability at the equilibrium point \(N^{*}=0\). Further, if \(\Vert \hat{u}_{2}(N) \Vert \neq0\), equation (4.12) is globally asymptotically stable in probability at \(N^{*}=0\).
Proof
Define a storage function
Obviously, \(\widehat{V}(N)>0,\widehat{V}(0)=0\). Note that \(\widehat {V}_{N}(N)=(\widehat{V}_{1},\widehat{V}_{2},\widehat{V}_{3}),\hat{u}_{2}(N)=\hat {u}_{2},\hat{f}_{2}(N)=\hat{f}_{2}, \hat{g}_{1}(N)=\hat{g}_{1}, \hat{l}_{2}(N)=\hat{l}_{2}\) and plug this into inequality (4.6); we obtain
By the conditions \(\hat{u}_{22}(N)=\hat{u}_{23}(N)\equiv0\), the second and third inequalities of (4.14) are established. The first inequality of (4.14) transforms into the following inequality:
Since \(\Delta=N_{1}^{2}+4\varepsilon \sum_{k=1}^{m} [\varphi _{k}(t)a_{11}(k)]N_{1}^{2}\geq0\), setting the left-hand side of inequality (4.15) equal to zero gives
As long as \(\hat{u}^{(1)}_{21}<\hat{u}_{21}<\hat{u}^{(2)}_{21}\) and \(\hat{u}_{21}(0)=0\), inequality (4.15) is established. Moreover, according to Lemma 4.1, equation (4.12) is asymptotically stable in probability at the origin \(N^{*}=0\).
In addition, the control \(\hat{u}_{2}\) of this theorem satisfies
If \(\Vert \hat{u}_{2}(N) \Vert ^{2}\neq0\), the storage function \(\hat{V}(N)\) can be used as a Lyapunov function to determine the stability of the system, so the closed-loop system is globally asymptotically stable in probability. □
Remark 4.1
Equation (4.2) is globally asymptotically stable in probability at the equilibrium point \(x^{*}\). As can be seen from \(g_{2}\), as long as we control the primary producer, we can achieve control of the entire system.
5 Numerical examples
This section is devoted to numerical examples which demonstrate the effectiveness of the proposed theory. First, we consider a discrete-time approximation of the Wonham filter. The method used here is similar to that in [29] (pp. 184-191), so we only outline the procedure. Note that we are mainly interested in sample path approximations of the filters. Using the approach based on Clark transformations [30], we transform the stochastic differential equations and design a numerical procedure for the transformed system.
Let \(u_{j}(t):=\ln\varphi_{j}(t),t\geq0,j=1,\ldots,m\), namely, \(\varphi _{j}(t)=e^{u_{j}(t)}\). Applying the Itô formula to equation (2.2), one has
where \(u_{j}(0)=\ln\varphi_{j}(0)\). Then we use Euler-Maruyama type approximations of equations (2.1) and (5.1) to simulate the dynamics of the population system.
Let \(\varepsilon >0\) be the step size; an Euler-Maruyama type approximation (see [31]) of equation (2.1) is given by
where \(\xi_{k}=\frac{\omega(\varepsilon (k+1))-\omega(\varepsilon k)}{\sqrt { \varepsilon }}\), \(f_{k}^{\varepsilon }(\alpha)=f(\alpha_{k}^{\varepsilon })\), and \(\alpha_{k}^{\varepsilon }\) is a Markov chain with state space S.
Discretizing the transformed system (5.1) yields the following algorithm:
Equivalently, we can write the above equations in terms of the white noise \(\xi_{k}\) as follows:
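As a concrete illustration, the following Python sketch propagates a two-state chain, the noisy observation in (2.1), and a Wonham-filter estimate on a grid of step size eps. It uses an Euler-Maruyama update of the filter written directly in probability coordinates with renormalization rather than the log-transformed recursion above; the generator Q, the observation function f, the noise level beta, and the initial guess phi are placeholders in the spirit of Example 5.1 below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data in the spirit of Example 5.1 (two regimes).
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])      # generator of alpha(t)
f = np.array([-1.0, 1.0])        # observation function f(1), f(2)
beta = 2.0                       # observation noise level beta(t) == 2
eps = 0.005                      # step size
n_steps = 10_000

alpha = 0                        # hidden regime (index 0 <-> state 1)
phi = np.array([0.1, 0.9])       # filter estimate of P(alpha(t) = j | y)
y = 0.0

for _ in range(n_steps):
    # Simulate the hidden Markov chain over one step of length eps.
    p_trans = np.clip(Q[alpha] * eps, 0.0, 1.0)
    p_trans[alpha] = 0.0
    if rng.random() < p_trans.sum():
        alpha = 1 - alpha        # two-state chain: a jump toggles the regime

    # Euler-Maruyama step of the observation dy = f(alpha) dt + beta dB.
    dy = f[alpha] * eps + beta * np.sqrt(eps) * rng.standard_normal()
    y += dy

    # Wonham filter, Euler-Maruyama step in probability coordinates:
    # d phi_j = (Q' phi)_j dt + beta^{-2} phi_j (f(j) - fbar) (dy - fbar dt).
    fbar = float(f @ phi)
    phi = phi + (Q.T @ phi) * eps + phi * (f - fbar) * (dy - fbar * eps) / beta**2
    phi = np.clip(phi, 1e-12, None)
    phi /= phi.sum()             # renormalize to keep a probability vector

print("true regime:", alpha + 1, " filtered probabilities:", phi)
```

The clipping and renormalization step plays the role that the logarithmic (Clark) transformation plays in the scheme above: both keep the computed \(\varphi(t)\) a genuine probability vector in the presence of discretization error.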
Now we will give three numerical examples to illustrate our results.
Example 5.1
Suppose we have the Markov chain \(\alpha(t)\) on the state space \(S=\{ 1,2\}\) with the generator \(Q= \bigl({\scriptsize\begin{matrix}{} -2 & 2 \cr 1 & -1 \end{matrix}} \bigr) \). The Markov chain can only be observed through \(dy=f(\alpha (t))\,dt+2d\omega\), where \(f(1)=-1 \mbox{ and } f(2)=1\). When \(\alpha(t)=1\),
When \(\alpha(t)=2\),
Then the species size \(x_{i}(t),i=1,2,3\), and Wonham’s filter \(\varphi (t)\) satisfy
where \(\bar{f}(\varphi)=-\varphi_{1}+\varphi_{2}\).
By the method mentioned in [32], a so-called Itô-Taylor expansion can be formed by applying Itô’s result, which is a fundamental tool of stochastic calculus. Truncating the Itô-Taylor expansion at an appropriate point produces Milstein’s method for the first three equations of equation (5.5):
where the initial conditions are \(x_{1}(0)=3, x_{2}(0)=4, x_{3}(0)=5, \varphi _{1}(0)=0.1,\varphi_{2}(0)=0.8\). Taking the step size \(\varepsilon =0.005\), we perform a computer simulation of 10,000 iterations of a sample path of \(x_{i}(t),i=1,2,3\). The sample paths of the population density \(x_{i}(t)\) are shown in Figure 1(a), and their corresponding probability density functions (PDFs) are shown in Figure 2(a), (b) and (c), respectively.
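To make the Milstein step itself concrete, the short Python sketch below advances a single equation of the form \(dx=x(a-bx-cy)\,dt+\sigma x\,d\omega\), which has the structure of the first line of (5.5); the coefficients a, b, c, σ are hypothetical placeholders, and in the simulation above the regime-averaged values \(\sum_{k}\varphi_{k}(t)a_{1j}(k)\) produced by the filter would be plugged in at every step.

```python
import numpy as np

def milstein_step(x, a, b, c, y, sigma, eps, dw):
    """One Milstein update for dx = x*(a - b*x - c*y) dt + sigma*x dw.

    For multiplicative noise g(x) = sigma*x, the Milstein correction is
    0.5 * g(x) * g'(x) * (dw**2 - eps) = 0.5 * sigma**2 * x * (dw**2 - eps).
    """
    drift = x * (a - b * x - c * y)
    return x + drift * eps + sigma * x * dw + 0.5 * sigma**2 * x * (dw**2 - eps)

# Example usage with hypothetical coefficients and the step size of Example 5.1.
rng = np.random.default_rng(1)
eps = 0.005
x1, x2 = 3.0, 4.0                  # initial prey and predator densities
for _ in range(10_000):
    dw = np.sqrt(eps) * rng.standard_normal()
    # Hypothetical fixed coefficients; x2 is held constant here purely to
    # illustrate the single-equation update, whereas the full scheme would
    # update all three species and the filter jointly at each step.
    x1 = milstein_step(x1, a=4.0, b=1.0, c=0.5, y=x2, sigma=0.1, eps=eps, dw=dw)
print("x1 after 10,000 steps:", x1)
```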
Figure 1. The sample paths of the three-species densities. (a) The sample paths of \(x_{i}(t), i=1,2,3\), with the parameter values used in Example 5.1; (b) under the control \(\hat{u}_{1}=(-\sqrt{x_{2}}\frac{2(x_{1}-3)}{x_{1}},-\sqrt{x_{3}}\frac {2(x_{2}-1)}{x_{2}}, -\sqrt{x_{1}}\frac{2(x_{3}-2)}{x_{3}})\), the sample paths of \(x_{i}(t), i=1,2,3\), with the parameter values used in Example 5.2; (c) under the control \(\hat{u}_{2}=(-\sqrt{x_{2}}\frac{2(x_{1}-3)}{x_{1}},0,0)\), the sample paths of \(x_{i}(t), i=1,2,3\), with the parameter values used in Example 5.3.
Example 5.2
Based on Example 5.1 and applying Theorem 3.3, we select \(\hat{g}_{1}(x)=(\sqrt{x_{2}},\sqrt{x_{3}}, \sqrt{x_{1}})\) and \(\hat{u}_{1}(x)=(-\sqrt {x_{2}}\frac{2(x_{1}-3)}{x_{1}},-\sqrt{x_{3}}\frac{2(x_{2}-1)}{x_{2}}, -\sqrt{x_{1}}\frac{2(x_{3}-2)}{x_{3}})\); the other parameters are the same as in Example 5.1.
The numerical algorithm is the same as above. The sample paths of the population density \(x_{i}(t)\) are shown in Figure 1(b), and their corresponding PDFs are shown in Figure 3(a), (b) and (c), respectively.
Example 5.3
Based on Example 5.1 and similarly to Theorem 4.2, we select \(\hat {g}_{2}(x)=(x_{1},0,0)\) and \(\hat{u}_{2}(x)=(-\frac{2x_{2}(x_{1}-3)}{x_{1}^{2}},0,0)\); the other parameters are the same as in Example 5.1.
Through the above numerical algorithm, the sample paths of the population density \(x_{i}(t)\) are shown in Figure 1(c), and their corresponding PDFs are shown in Figure 4(a), (b) and (c), respectively.
6 Conclusion
In this paper, the global asymptotic stability of a three-species food chain system with a hidden Markov chain and an external disturbance is obtained. Under \(H_{\infty}\) control and passive control, we prove that a suitable and simple control can sustain the original system’s persistence even though there is a disturbance input, which is the highlight of this paper. From a practical point of view, a proper control helps to manage and reasonably develop population systems. Moreover, the robust stability of the given system will be studied in the near future.
References
Kratina, P, et al.: Stability and persistence of food webs with omnivory: is there a general pattern? Ecosphere 6, 794-804 (2012)
Freedman, HI, Waltman, P: Mathematical analysis of some three-species food chain models. Math. Biosci. 33, 257-276 (1977)
Ma, HP, et al.: Global stability of positive periodic solutions and almost periodic solutions for a discrete competitive system. Discrete Dyn. Nat. Soc. 2015, 1-13 (2015)
Zhou, SR, et al.: Persistence and global stability of positive periodic solutions of three species food chains with omnivory. J. Math. Anal. Appl. 324, 397-408 (2006)
Krikorian, N: The Volterra model for three species predator-prey systems: boundedness and stability. J. Math. Biol. 7, 117-132 (1979)
Hsu, SB, et al.: Analysis of three species Lotka-Volterra food web models with omnivory. J. Math. Anal. Appl. 426, 659-687 (2015)
Mao, X: Delay population dynamics and environmental noise. Stoch. Dyn. 5, 149-162 (2005)
Mao, X, et al.: Environmental Brownian noise suppresses explosions in population dynamics. Stoch. Process. Appl. 97, 95-110 (2002)
Mao, X, et al.: Asymptotic behavior of the stochastic Lotka-Volterra model. J. Math. Anal. Appl. 287, 141-156 (2003)
Pang, S, et al.: Asymptotic property of stochastic population dynamics. Dyn. Contin. Discrete Impuls. Syst. 15, 603-620 (2008)
Du, NH, et al.: Dynamical behavior of Lotka-Volterra competition systems: non-autonomous bistable case and the effect of telegraph noise. J. Comput. Appl. Math. 170, 399-422 (2004)
Luo, Q, et al.: Stochastic population dynamics under regime switching. J. Math. Anal. Appl. 334, 69-84 (2007)
Takeuchi, Y, et al.: Evolution of predator-prey systems described by a Lotka-Volterra equation under random environment. J. Math. Anal. Appl. 323, 938-957 (2006)
Bercu, B, et al.: Almost sure stabilization for feedback controls of regime-switching linear systems with a hidden Markov chain. IEEE Trans. Autom. Control 54, 2114-2125 (2009)
Tran, K, Yin, G: Hybrid competitive Lotka-Volterra ecosystems with a hidden Markov chain. Control Decis. 1, 51-74 (2014)
Willems, JC: Dissipative dynamical systems, part I: general theory. Arch. Ration. Mech. Anal. 45, 321-351 (1972)
Willems, JC: Dissipative dynamical systems, part II: linear systems with quadratic supply rates. Arch. Ration. Mech. Anal. 45, 352-393 (1972)
Florchinger, P: A passive system approach to feedback stabilization of nonlinear control stochastic systems. SIAM J. Control Optim. 37, 1848-1864 (1999)
Florchinger, P: Stabilization of passive nonlinear stochastic differential systems by bounded feedback. Stoch. Anal. Appl. 21, 1255-1282 (2003)
Zhang, WH, et al.: State feedback \(H_{\infty}\) control for a class of nonlinear stochastic systems. SIAM J. Control Optim. 44, 1973-1991 (2006)
Zhang, WH, et al.: Nonlinear stochastic \(H_{2}/H_{\infty}\) control with state-dependent noise. In: American Control Conference, Portland, USA, June 8-10, 2005 (2005)
Wonham, WM: Some applications of stochastic differential equations to optimal nonlinear filtering. J. Soc. Ind. Appl. Math. Ser. A Control 2, 347-369 (1965)
Yu, L, et al.: Asset allocation for regime-switching market models under partial observation. Dyn. Syst. Appl. 23, 39-62 (2014)
Yin, G, Zhang, Q: Discrete-Time Markov Chains. Two-Time-Scale Methods and Applications. Springer, New York (2005)
Tran, K, Yin, G: Stochastic competitive Lotka-Volterra ecosystems under partial observation: feedback controls for permanence and extinction. J. Franklin Inst. 351, 4039-4064 (2014)
Berman, N, et al.: \(H_{\infty}\)-like control for nonlinear stochastic systems. Syst. Control Lett. 55, 247-257 (2006)
van der Schaft, AJ: L2-Gain and Passivity Techniques in Nonlinear Control. Springer, Berlin (1999)
Florchinger, P: A passive system approach to feedback stabilization of nonlinear control stochastic systems. SIAM J. Control Optim. 37, 1848-1864 (1999)
Yin, G, Zhang, Q: Discrete-Time Markov Chains. Two-Time-Scale Methods and Applications. Springer, New York (2005)
Malcolm, WP, et al.: On the numerical stability of time-discretized state estimation via Clark transformations. In: Proceedings of the 42nd IEEE Conference on Decision and Control, vol. 2, pp. 1406-1412 (2003)
Kloeden, PE, Platen, E: Numerical Solution of Stochastic Differential Equations, 3rd edn. Springer, Berlin (1999)
Higham, DJ: An algorithmic introduction to numerical simulation of stochastic differential equations. SIAM Rev. 43, 525-546 (2001)
Acknowledgements
The research is supported by the National Natural Science Foundation of China (Nos. 11661064, 11461053). The authors would like to thank the editors and anonymous reviewers for their valuable comments, which improved the presentation of this paper.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All the authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Ma, Y., Zhang, Q., Wang, L. et al. Dissipative control of a three-species food chain stochastic system with a hidden Markov chain. Adv Differ Equ 2017, 102 (2017). https://doi.org/10.1186/s13662-017-1160-z
DOI: https://doi.org/10.1186/s13662-017-1160-z
Keywords
- food chain stochastic system
- hidden Markov chain
- \(H_{\infty}\) control
- passive control