Boundedness and robust analysis of dynamics of nonlinear diffusion high-order Markovian jump time-delay system
Advances in Difference Equations volume 2018, Article number: 434 (2018)
Abstract
In this paper, we prove the existence and uniqueness of the equilibrium solution of the system by using the M-matrix theory and the topological degree technique, and we study the boundedness and robustness of the dynamics of reaction–diffusion high-order Markovian jump Cohen–Grossberg neural networks (CGNNs) with p-Laplacian diffusion, including the common reaction–diffusion CGNNs. The obtained criteria can be verified with the Matlab LMI toolbox, which makes them suitable for the large-scale computations arising in complex engineering applications. Finally, a numerical example demonstrates the effectiveness of the proposed method.
1 Introduction
Neural networks have long attracted much attention for their wide range of applications [1,2,3,4,5,6,7]. In 1983, Cohen–Grossberg neural networks (CGNNs) were originally proposed in [8]. Since then, CGNNs have gained increasing research attention [9,10,11,12,13,14] due to their extensive applications, such as pattern recognition, image and signal processing, quadratic optimization, and artificial intelligence. The success of these applications depends on a key prerequisite: the system must possess some form of stability. Stability analysis of various neural networks has therefore become a hot research topic [15,16,17,18,19,20,21,22,23,24,25]. In practice, neural networks are often disturbed by environmental noise. The noise may influence the stability of the equilibrium and vary some structural parameters; such abrupt switching is usually modeled by Markov processes. During the recent decade, neural networks with Markovian jumping parameters have been extensively studied because systems with Markovian jumping parameters are useful in modeling abrupt phenomena, such as random failures, changes in the interconnections of subsystems, and operation at different points of a nonlinear plant. Moreover, Markovian jump dynamics have been applied to various complex systems, such as dissipative fault-tolerant control for nonlinear singularly perturbed systems with Markov jumping parameters based on slow state feedback, slow-state-variable feedback stabilization for semi-Markov jump systems with singular perturbations, finite-time nonfragile \(l^{2}\)–\(l^{\infty}\) control for jumping stochastic systems subject to input constraints via an event-triggered mechanism, and so on ([26,27,28,29] and the references therein).
High-order Cohen–Grossberg neural networks, as an important class of dynamical systems, have been the object of intensive analysis by many authors in both theory and applications because high-order neural networks can possess impressive computational, learning, and storage capabilities. In fact, most researchers have focused on low-order neural networks without considering the high-order terms, which yield a faster convergence rate and stronger approximation properties. Indeed, high-order neural networks have been shown to possess stronger approximation properties, impressive computational, storage, and learning capabilities, greater storage capacity, higher fault tolerance, and a faster convergence rate than traditional low-order neural networks. There is a large body of literature on the stability analysis of high-order neural networks [30,31,32,33,34,35,36]. In practical engineering the diffusion phenomenon cannot be avoided in neural network models when electrons move in an asymmetric electromagnetic field, so various reaction–diffusion models have been considered [13, 14, 31, 37, 38]. For example, in [31] the following reaction–diffusion high-order Hopfield neural network was investigated:
In addition, p-Laplacian reaction–diffusion models were recently studied in [37] and [14]. For example, in [37] the following p-Laplacian reaction–diffusion system was investigated:
Note that little of the literature involves both boundedness analysis and robust stability analysis of high-order neural networks, which inspires our current work. In this paper, we present a sufficient condition for the boundedness and robust stability of the reaction–diffusion high-order Markovian jump Cohen–Grossberg neural network with nonlinear Laplacian diffusion. Of course, the existence and uniqueness of the equilibrium solution of the system is first established by employing the M-matrix theory and the topological degree technique.
For convenience, we introduce some standard notation.
- \(Q=(q_{ij})_{n\times n}>0\ (<0)\): a positive (negative) definite matrix, that is, \(y^{T}Qy>0\ (<0)\) for all \(0\neq y\in R^{n}\).
- \(Q=(q_{ij})_{n\times n}\geqslant0\ (\leqslant0)\): a semipositive (seminegative) definite matrix, that is, \(y^{T}Qy\geqslant0\) \((\leqslant 0)\) for all \(y\in R^{n}\).
- \(Q_{1}\geqslant Q_{2}\ (Q_{1}\leqslant Q_{2})\): \(Q_{1}-Q_{2}\) is a semipositive (seminegative) definite matrix.
- \(Q_{1}\succcurlyeq Q_{2} \ (Q_{1}\preccurlyeq Q_{2})\): \(Q_{1}-Q_{2}\) is a nonnegative (nonpositive) matrix.
- \(Q_{1}> Q_{2}\ (Q_{1}< Q_{2})\): \(Q_{1}-Q_{2}\) is a positive (negative) definite matrix.
- \(\lambda_{\max}(\varPhi)\) and \(\lambda_{\min}(\varPhi)\) denote the largest and smallest eigenvalues of a matrix Φ, respectively.
- \(|C|=(|c_{ij}|)_{n\times n}\) for any matrix \(C=(c_{ij})_{n\times n}\); \(|u(t,x)|=(|u_{1}(t,x)|,|u_{2}(t,x)|, \ldots,|u_{n}(t,x)|)^{T}\) for any \(u(t,x)=(u_{1}(t,x),u_{2}(t,x),\ldots,u_{n}(t,x))^{T}\).
- I: the identity matrix of compatible dimension.
- The symmetric terms in a symmetric matrix are denoted by ∗.
Motivated by some methods and results of the related literature [30,31,32,33,34, 39,40,41], we present the existence and uniqueness of the equilibrium solution of the system and study the boundedness and robustness of dynamics of reaction–diffusion high-order Markovian jump Cohen–Grossberg neural networks (CGNNs) with p-Laplacian diffusion, including the common reaction–diffusion CGNNs.
2 Model description and preparation
Consider the following high-order Markovian jump Cohen–Grossberg neural network with nonlinear Laplacian diffusion:
where \(x\in\varOmega\), and Ω is a bounded domain in \(R^{n}\) with smooth boundary ∂Ω of class \(\mathcal{C}^{2}\). The initial value function \(\xi_{i}(s,x)\) is bounded and continuous on \((-\infty, \tau]\times\varOmega\), \(\alpha_{i}\) is the input from outside the network, \(u_{i}(t,x)\) is the state variable of the ith neuron at time t and space variable x, \(a_{i}(u_{i}(t,x))\) represents an amplification function, whereas \(b_{i}(u_{i}(t,x))\) denotes an appropriate behavior function, \(D_{i}=D_{i}(t,x)\geqslant0\) is the diffusion operator, \(f_{i}\) and \(g_{j}\) are activation functions, and \(T_{rij}\) and \(T_{rijl}\) are the first- and second-order synaptic weights of system (2.1) (see, e.g., [42]). \((\breve{\varOmega}, \varUpsilon, \mathbb{P})\) is the given probability space, where Ω̆ is the sample space, ϒ is a σ-algebra of subsets of the sample space, and \(\mathbb{P}\) is the probability measure defined on ϒ. Let \(S=\{1, 2, \ldots, n_{0}\}\), and let \(\{ r(t):[0, +\infty)\to S\}\) be a homogeneous, finite-state, right-continuous Markovian process with generator \(\varPi=(\gamma_{ij})_{n_{0}\times n_{0}}\) and transition probability from mode \(i\in S\) at time t to mode \(j\in S\) at time \(t+\delta\) given by
$$P\bigl(r(t+\delta)=j \mid r(t)=i\bigr)= \textstyle\begin{cases} \gamma_{ij}\delta+o(\delta), & j\neq i, \\ 1+\gamma_{ii}\delta+o(\delta), & j=i, \end{cases} $$
where \(\gamma_{ij}\geqslant0\) is the transition probability rate from i to \(j\ (j\neq i)\), \(\gamma_{ii}=-\sum_{j=1, j\neq i}^{n_{0}}\gamma_{ij}, \delta>0\), and \(\lim_{\delta\to0}o(\delta)/\delta=0\).
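As an illustration of this transition mechanism (not part of the original analysis), the following Python sketch simulates a right-continuous Markovian jump process r(t) from its generator via exponential holding times; the two-mode generator Π below is a hypothetical example.

```python
import numpy as np

# Hypothetical two-mode generator Pi = (gamma_ij): off-diagonal rates are
# nonnegative and gamma_ii = -sum_{j != i} gamma_ij, so each row sums to zero.
Pi = np.array([[-3.0,  3.0],
               [ 2.0, -2.0]])

def simulate_markov_jump(Pi, r0, T, rng):
    """Sample a right-continuous path of r(t) on [0, T].

    The holding time in mode i is exponential with rate -gamma_ii, and the
    next mode j != i is chosen with probability gamma_ij / (-gamma_ii),
    consistent with P(r(t+delta)=j | r(t)=i) = gamma_ij*delta + o(delta).
    """
    t, r = 0.0, r0
    path = [(0.0, r0)]
    n = Pi.shape[0]
    while True:
        t += rng.exponential(1.0 / -Pi[r, r])   # holding time in mode r
        if t >= T:
            return path
        probs = Pi[r].clip(min=0.0)
        probs /= probs.sum()
        r = int(rng.choice(n, p=probs))         # embedded jump chain
        path.append((t, r))

rng = np.random.default_rng(0)
path = simulate_markov_jump(Pi, r0=0, T=10.0, rng=rng)
```

Each entry of `path` records a jump time and the mode entered there, so the path is piecewise constant and right-continuous by construction.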
Let \(u^{*}=(u_{1}^{*},u_{2}^{*},\ldots,u_{n}^{*})^{T}\in R^{n}\) be a constant vector. Then it is not difficult to deduce the following fact:
where
Indeed,
which proves (2.2).
In (2.2), i and j are symmetric, which implies that
and hence
Taking \(g_{l}(u_{l}^{*})=0=g_{j}(u_{j}^{*})\) in (2.5), system (2.1) can be rewritten as follows:
where \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})^{T}\), \(f(u)=(f_{1}(u_{1}), f_{2}(u_{2}),\ldots,f_{n}(u_{n}))^{T}\), \(g(u)=(g_{1}(u_{1}), g_{2}(u_{2}), \ldots, g_{n}(u_{n}))^{T}\) for \(u=(u_{1},u_{2},\ldots,u_{n})^{T}\), the Neumann boundary value \(\frac{\partial u(t,x)}{\partial\nu}=(\frac {\partial u_{1}(t,x)}{\partial\nu},\frac{\partial u_{2}(t,x)}{\partial\nu }, \ldots, \frac{\partial u_{n}(t,x)}{\partial\nu})\) with \(\frac{\partial u_{i}(t,x)}{\partial\nu}=(\frac{\partial u_{i}(t,x)}{\partial x_{1}}, \frac{\partial u_{i}(t,x)}{\partial x_{2}}, \ldots, \frac{\partial u_{i}(t,x)}{\partial x_{n}})^{T}\), the matrix \(D=(D_{i}(t,x))_{n\times m}\), and the vector function \(\varsigma_{0}\) is defined as
Throughout this paper, we assume the following hypotheses:
- (H1) There is a positive constant vector \(M=(M_{1},M_{2},\ldots,M_{n})^{T}\in R^{n}\) such that \(|g_{j}(\cdot)|\leqslant M_{j}\), \(j=1,2,\ldots,n\).
- (H2) There is a real diagonal matrix \(\mathbb{B}=\operatorname{diag}(\tilde{b}_{1},\tilde {b}_{2},\ldots,\tilde{b}_{n})\) such that \(b_{i}(0)=0\) and
$$\frac{b_{i}(s)-b_{i}(r)}{s-r}\geqslant\tilde{b}_{i}>0, \quad \forall s,r\in R, s\neq r, i=1,2,\ldots,n. $$
- (H3) There exist real diagonal matrices \(\overline{A}=\operatorname{diag}(\bar {a}_{1},\bar{a}_{2},\ldots,\bar{a}_{n})\) and \(\underline{A}=\operatorname{diag}(\underline {a}_{1},\underline{a}_{2}, \ldots,\underline{a}_{n})\) such that
$$0< \underline{A}\leqslant A(s)\leqslant\overline{A}. $$
- (H4) There are real diagonal matrices \(F_{1}=\operatorname{diag}(F_{11},F_{12},\ldots,F_{1n})\), \(G_{1}=\operatorname{diag}(G_{11},G_{12},\ldots,G_{1n})\), \(F_{2}=\operatorname{diag}(F_{21},F_{22},\ldots,F_{2n})\), and \(G_{2}=\operatorname{diag}(G_{21},G_{22},\ldots,G_{2n})\) such that
$$F_{1j}\leqslant\frac{f_{j}(s)-f_{j}(t)}{s-t}\leqslant F_{2j},\qquad G_{1j}\leqslant\frac{g_{j}(s)-g_{j}(t)}{s-t}\leqslant G_{2j} $$
with
$$\vert F_{1j} \vert \leqslant F_{2j},\qquad \vert G_{1j} \vert \leqslant G_{2j},\quad \forall j=1,2,\ldots,n. $$
Remark 1
\(F_{1j}\) and \(G_{1j}\) may be negative, which makes our conditions weaker than the corresponding conditions of [43].
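To make the sector condition of (H4) concrete: for the common choice \(f_{j}(s)=\tanh(s)\) (a hypothetical example, not a function fixed by the paper), the difference quotient lies in \((0,1]\), so \(F_{1j}=0\) and \(F_{2j}=1\) are admissible bounds, and \(|F_{1j}|\leqslant F_{2j}\) holds. A small illustrative Python check:

```python
import numpy as np

# Purely illustrative check of (H4) for f(s) = tanh(s): the difference
# quotient (f(s) - f(t))/(s - t) lies in (0, 1], so the sector bounds
# F1 = 0 and F2 = 1 are valid (and |F1| <= F2 as (H4) requires).
F1, F2 = 0.0, 1.0
rng = np.random.default_rng(1)
s = rng.uniform(-5.0, 5.0, 1000)
t = rng.uniform(-5.0, 5.0, 1000)
mask = s != t                      # exclude the (measure-zero) diagonal
q = (np.tanh(s[mask]) - np.tanh(t[mask])) / (s[mask] - t[mask])
assert np.all(F1 <= q) and np.all(q <= F2)
```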
Denote
For any mode \(r(t)=i\in S\), we assume that \(W_{r}, T_{r}\), and \(\tilde {T}_{r}\) are real constant matrices of appropriate dimensions, and \(\Delta W_{r}(t), \Delta T_{r}(t)\), and \(\Delta\tilde{T}_{r}(t)\) are real-valued matrix functions representing time-varying parameter uncertainties satisfying
with
where \(E_{r}, N_{r}, N_{r0}\), and \(\widetilde{N}_{r}\) are real matrices, and \(|K(t)|\leqslant I\) with the identity matrix I.
In the case \(p=2\), system (2.1) becomes the following common reaction–diffusion high-order Markovian jump Cohen–Grossberg neural network:
or
Lemma 2.1
([44])
Let \(\varepsilon>0\) be any given scalar, and let \(\mathcal{M},\mathfrak{E}\), and \(\mathcal{K}\) be matrices of appropriate dimensions. If \(\mathcal{K}^{T}\mathcal{K}\leqslant I\), then we have
Lemma 2.2
(Schur complement [45])
Given matrices \(\mathcal{Q}(t)\), \(\mathcal{S}(t)\), and \(\mathcal{R}(t)\) of appropriate dimensions, where \(\mathcal{Q}(t)=\mathcal{Q}(t)^{T}\) and \(\mathcal{R}(t)=\mathcal{R}(t)^{T}\), we have
if and only if
or
where \(\mathcal{Q}(t)\), \(\mathcal{S}(t)\), and \(\mathcal{R}(t)\) are dependent on t.
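Lemma 2.2 can be sanity-checked numerically: for the hypothetical symmetric blocks below, negative definiteness of the full block matrix coincides with negative definiteness of \(\mathcal{R}\) together with that of its Schur complement. A minimal Python sketch:

```python
import numpy as np

def is_neg_def(M):
    """Negative definiteness via the eigenvalues of a symmetric matrix."""
    return bool(np.all(np.linalg.eigvalsh(M) < 0.0))

# Hypothetical symmetric blocks Q, R and an arbitrary coupling block S.
Q = np.array([[-4.0, 1.0], [1.0, -3.0]])
R = np.array([[-2.0, 0.5], [0.5, -1.0]])
S = np.array([[0.3, -0.2], [0.1, 0.4]])

block = np.block([[Q, S], [S.T, R]])
schur = Q - S @ np.linalg.inv(R) @ S.T   # Schur complement of R
# Lemma 2.2: the block matrix is negative definite iff R < 0 and
# Q - S R^{-1} S^T < 0 (the "or" branch swaps the roles of Q and R).
assert is_neg_def(block) == (is_neg_def(R) and is_neg_def(schur))
```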
Lemma 2.3
(Poincaré integral inequality (see [46]))
Let Ω be a bounded domain of \(R^{n}\) with a smooth boundary ∂Ω of class \(\mathcal{C}^{2}\), and let \(h(x)\) be a real-valued function belonging to \(H_{0}^{1}(\varOmega)\) such that \(\frac {\partial h(x)}{\partial\nu}|_{\partial\varOmega}=0\). Then
where \(\lambda_{1}\) is the smallest positive eigenvalue of the Neumann boundary problem
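In its standard form, as typically stated in this literature, the lemma asserts \(\int_{\varOmega}|\nabla h(x)|^{2}\,dx \geqslant \lambda_{1}\int_{\varOmega}h^{2}(x)\,dx\). A one-dimensional finite-difference illustration on \(\varOmega=(0,\pi)\), where \(\lambda_{1}=1\) for the Neumann problem and \(h\) is a hypothetical mean-zero test function:

```python
import numpy as np

# One-dimensional illustration on Omega = (0, pi): the smallest positive
# Neumann eigenvalue of -h'' = lambda*h is lambda_1 = 1, and the Poincare
# inequality int |h'|^2 >= lambda_1 * int h^2 holds for this mean-zero
# test function with h'(0) = h'(pi) = 0 (a hypothetical choice).
lam1 = 1.0
x = np.linspace(0.0, np.pi, 20001)
dx = x[1] - x[0]
h = np.cos(x) + 0.5 * np.cos(2.0 * x)
dh = np.gradient(h, x)
lhs = np.sum(dh**2) * dx           # ~ int |h'|^2 dx = pi
rhs = lam1 * np.sum(h**2) * dx     # ~ lambda_1 * int h^2 dx = 5*pi/8
assert lhs >= rhs
```

Taking \(h=\cos x\) alone gives equality, reflecting that \(\lambda_{1}\) is sharp.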
3 Main results
Before giving the main results of this paper, we need to present two necessary technical lemmas.
Lemma 3.1
Let \(u^{*}=(u_{1}^{*},u_{2}^{*},\ldots, u_{n}^{*})^{T}\) be an equilibrium point of system (2.6), and let \(\tilde{u}=u-u^{*}\), where \(u=u(t,x)\) is any solution of system (2.6). Then
and
where \(q_{i}>0\) for all i.
Proof
Indeed, since \(u_{i}^{*}\) is a real number,
and hence
On the other hand, the Neumann zero boundary condition yields
□
In addition, from (H4) and Vieta's theorem for one-variable quadratic equations it is not difficult to obtain the following conclusion.
Lemma 3.2
Let \(u^{*}=(u_{1}^{*},u_{2}^{*},\ldots, u_{n}^{*})^{T}\) be an equilibrium point of system (2.6), and let \(\tilde{u}=u-u^{*}\), \(\overline{f}(\tilde{u})=f(u)-f(u^{*})\), and \(\overline{g}(\tilde{u})=g(u)-g(u^{*})\). Then there are two positive definite diagonal matrices \(K_{0}\) and \(K_{1}\) such that
and
where \(u=u(t,x)\) is any solution of system (2.6).
We now give the main result of this paper.
Theorem 3.3
Assume that
Then:
- (a) System (2.1) or system (2.6) has a unique equilibrium point.
- (b) All the solutions of system (2.1) and system (2.6) are bounded.
- (c) If there is a sequence of positive definite diagonal matrices \(P_{r}\) \((r\in S)\), \(K_{0}\), and \(K_{1}\) such that
$$\begin{aligned} \widetilde{\varPhi}_{r}< 0\quad \forall r\in S, \end{aligned}$$
(3.6)
where
with \(\widetilde{\mathcal{A}}_{r}=\mathcal{A}_{r}-2F_{1}K_{0}F_{2}\) and
then the unique equilibrium point of system (2.1) or system (2.6) is globally robustly asymptotically stochastically stable.
Proof
System (2.1) is equivalent to system (2.6). We divide the proof of the theorem into four steps.
Step 1. We first prove that there is at least one equilibrium point for system (2.6).
If \(u^{*}\in R^{n}\) is an equilibrium point of (2.1), then by (2.1) we get
where
Since i and j are symmetric, exchanging i and j results in
and hence
Next, taking
by (H2) and (H4) we get
Moreover, we can rewrite (3.8) in the matrix and vector form:
Let
where \(\mathfrak{R}\in R^{n}\) is a positive vector such that \((\mathbb{B}- (|W_{r}|+|E_{r}||N_{r}|)F_{2} - (|T_{r}|+|E_{r}||N_{r0}|) G_{2}-(\varGamma |\widetilde{T}_{r}|+|E_{r}||\widetilde{N}_{r}|) G_{2} )\mathfrak{R}>0\), since \((\mathbb{B}- (|W_{r}|+|E_{r}||N_{r}|)F_{2} - (|T_{r}|+|E_{r}||N_{r0}|) G_{2}-(\varGamma|\widetilde{T}_{r}|+|E_{r}||\widetilde{N}_{r}|) G_{2} )\) is an M-matrix.
Then Ω̃ is not empty since \(0\in\widetilde{\varOmega}\), and for any \(u\in\partial\widetilde{\varOmega}\),
So
Now the homotopy invariance theorem yields
where \(\operatorname{deg}(h, \widetilde{\varOmega}, 0)\) denotes topological degree. Moreover, topological degree theory tells us that there is at least one solution for \(h(u)=0\) in Ω̃, which implies that there exists at least one equilibrium point \(u^{*}\) for system (2.1).
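The M-matrix property invoked in Step 1 can be checked numerically: a Z-matrix (nonpositive off-diagonal entries) is a nonsingular M-matrix iff all its eigenvalues have positive real part, equivalently iff there is a positive vector \(\mathfrak{R}\) with \(M\mathfrak{R}>0\) componentwise. A sketch with a hypothetical 2×2 matrix standing in for the long expression above:

```python
import numpy as np

def is_nonsingular_M_matrix(M):
    """One classical test: M is a Z-matrix (off-diagonal entries <= 0)
    and every eigenvalue of M has positive real part."""
    off = M - np.diag(np.diag(M))
    if np.any(off > 0.0):
        return False
    return bool(np.all(np.linalg.eigvals(M).real > 0.0))

# Hypothetical 2x2 stand-in for B - (|W_r|+|E_r||N_r|)F_2 - ... in Step 1.
M = np.array([[ 5.0, -1.0],
              [-2.0,  4.0]])
assert is_nonsingular_M_matrix(M)

# Equivalently, there is a positive vector R with M R > 0 componentwise,
# which is exactly the vector used to define the set Omega-tilde above.
R = np.linalg.solve(M, np.ones(2))   # then M R = (1, 1)^T > 0
assert np.all(R > 0.0) and np.all(M @ R > 0.0)
```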
Step 2. We prove that \(u^{*}\) is the unique equilibrium point of system (2.1).
Indeed, if \(v^{*}\) is another equilibrium point of (2.1), then
Since
we have
or
that is,
Since
we get \(|u^{*}-v^{*}|\leqslant0\), and hence \(u^{*}=v^{*}\).
Thus we have proved the existence of the unique equilibrium point of system (2.6), and so conclusion (a) is proved.
Step 3. Next, we prove the boundedness of all the solutions of system (2.1).
First, we note the following fact:
where \(g_{j}=g_{j}(u_{j}(t-\tau(t),x))\), \(g_{l}=g_{l}(u_{l}(t-\tau(t),x))\), and
Moreover, since i and j are symmetric, exchanging i and j results in
So we get
Moreover, by Lemma 3.1 we may rewrite system (2.1) in the following equivalent form:
where \(\tilde{u}=u-u^{*}\), \(\overline{f}_{j}(\tilde {u}_{j}(t,x))=f_{j}(u_{j}(t,x))-f_{j}(u_{j}^{*})\), and \(\overline{g}_{j}(\tilde{u}_{j}(t,x))=g_{j}(u_{j}(t,x))-g_{j}(u_{j}^{*})\).
Let
From Lemma 3.1 and (3.11) we can derive
which, together with the Hölder inequality, implies
and
which in turn implies that
Note that
Define the matrix \(\hat{W}_{r}=(\hat{W}_{rij})_{n\times n}\) as
the matrix \(\hat{T}_{r}=(\hat{T}_{rij})_{n\times n}\) as
and the matrix \(\check{T}_{r}=(\check{T}_{rij})_{n\times n}\) as
Then we have
Since \((\mathbb{B}- (|W_{r}|+|E_{r}||N_{r}|)F_{2} - (|T_{r}|+|E_{r}||N_{r0}|) G_{2}-(\varGamma|\widetilde{T}_{r}|+|E_{r}||\widetilde{N}_{r}|) G_{2} )\) is an M-matrix, there is a positive vector \(Q=(q_{1},q_{2},\ldots,q_{n})^{T}>0\) such that
Let
Let \(a_{*}=\max_{i}\frac{\bar{a}_{i}}{\underline{a}_{i}}\). Then \(a_{*}\geqslant 1\) and
Since
we get
or
So we have
or
Let
The boundedness of \(\xi(\cdot)\) yields that there is \(\delta_{0}>0\) such that
We will prove that
We will prove this by contradiction. Assume that (3.13) does not hold. Then there must exist \(i\in\{1,2,\ldots,n\}\) and \(t_{*}>\delta_{0}\) such that
On the other hand, (3.12) and the definition of \(\kappa_{i}\) result in
which contradicts \(\|u_{i}(t_{*})-u_{i}^{*}\|=q_{i}k_{0}\).
So we have proved the boundedness of all the solutions of system (2.1) and thus obtained conclusion (b).
Step 4. We will prove that the equilibrium point \(u^{*}\) is globally robustly asymptotically stochastically stable.
From (H4) we have
and
with the Neumann boundary value condition
where \(T_{ri}=(T_{rijl})_{n\times n}\), \(T_{ri}^{T}=(T_{rilj})_{n\times n}\), and
Remark 2
If system (3.15) is under the Dirichlet boundary value condition
then we can still derive a formula similar to (3.2):
where
If the Neumann boundary condition \(\frac{\partial u_{i}(t,x)}{\partial x_{j}}=0\) were replaced by the Dirichlet boundary condition \(u_{i}(t,x)|_{\partial\varOmega}=0\) in system (2.1), we could not derive a formula similar to (3.2), since system (2.1) involves \(\alpha_{i}\) (the input from outside the network), so that \(\tilde{u}_{i}= u_{i}-u_{i}^{*}=-u_{i}^{*}\) on ∂Ω, that is, the equilibrium solution \(u_{i}^{*}\) is not necessarily zero. Of course, we could still deal with the Dirichlet boundary problem by employing the Ekeland variational principle and the Lyapunov–Krasovskii functional method [47]. However, system (3.15) does not involve the input \(\alpha_{i}\), which implies that the Dirichlet boundary value problem gives the same results as the Neumann boundary problem.
Obviously, the null solution of system (3.15) is globally asymptotically stable if and only if the unique equilibrium point \(u^{*}\) of system (2.6) is globally asymptotically stable.
Consider the Lyapunov–Krasovskii functional
Moreover, (H2) and (H3) yield
Since the matrices involved are diagonal, we derive
Moreover, (3.14)–(3.15) and Lemma 3.1 yield
where \(\mathcal{L}\) is the weak infinitesimal operator, Γ and \(\widetilde{T}_{r}(t)\) are the matrices defined in (2.8).
Letting
by (3.3)–(3.4) and (3.16) we can get
where \(\widetilde{\mathcal{A}}_{r}=\mathcal{A}_{r}-2F_{1}K_{0}F_{2}\),
and
with \(\widetilde{P}_{r}=P_{r}\overline{A}(|T_{r}|+\varGamma|\widetilde {T}_{r}|+|E_{r}||K(t)|(|N_{r0}|+|\widetilde{N}_{r}|))\),
and
Denote
Applying the Schur complement theorem twice yields
Moreover, Lemma 2.1 yields
Combining (3.19) and (3.17) results in
It follows by the standard Lyapunov functional theory that the null solution of system (2.1) is globally robustly asymptotically stochastically stable. Thus conclusion (c) is proved. □
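The closing Lyapunov argument has a familiar finite-dimensional analogue: for a linear, delay-free, diffusion-free system \(\dot{u}=Au\) with Hurwitz A, solving the Lyapunov equation \(A^{T}P+PA=-I\) for \(P>0\) plays the role of the functional above. A toy Python/SciPy sketch with a hypothetical matrix A (illustrative only, not the paper's LMI (3.6)):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy finite-dimensional analogue of the Lyapunov argument: for a
# hypothetical Hurwitz matrix A, solve A^T P + P A = -I and confirm P > 0,
# which certifies asymptotic stability of du/dt = A u.
A = np.array([[-3.0,  1.0],
              [ 0.5, -2.0]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))   # solves A^T P + P A = -I
assert np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > 0.0)   # P > 0
assert np.linalg.norm(A.T @ P + P @ A + np.eye(2)) < 1e-10
```

In the paper's setting this role is played by the LMI (3.6), checked with the Matlab LMI toolbox.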
Remark 3
To the best of our knowledge, this is the first time that the boundedness of p-Laplacian reaction–diffusion high-order neural networks has been investigated and a robust stability criterion has been derived for such complex systems. In the case \(p=2\), we can further derive a sharper corollary.
4 Applications and analysis
In the case of \(p=2\), system (2.1) reduces to the common reaction–diffusion Cohen–Grossberg neural networks (2.12). Applying Theorem 3.3 and Lemma 2.3 to the Sobolev space \(W_{0}^{1,2}(\varOmega)\) results in the following corollary.
Corollary 4.1
Assume that
Then:
- (a) System (2.12) or system (2.13) has a unique equilibrium point.
- (b) All the solutions of system (2.12) and system (2.13) are bounded.

Suppose, in addition, that there are positive definite diagonal matrices \(P_{r}\) \((r\in S)\), \(K_{0}\), and \(K_{1}\) such that the following condition holds for all \(r\in S\):
with \(\overline{\mathcal{A}}_{r}=\mathcal{A}_{r}-2F_{1}K_{0}F_{2}-2\lambda_{1}DP_{r}\). Then:
- (c) The unique equilibrium point of system (2.12) or system (2.13) is globally robustly asymptotically stochastically stable, where \(\lambda_{1}\) is the smallest positive eigenvalue of the Neumann boundary problem (2.15).
Remark 4
Corollary 4.1 illustrates that the diffusion term plays a role in the stability criterion of the high-order reaction–diffusion system, whereas the influence of the diffusion term was ignored in [31, Thm. 1].
Remark 5
Vieta's theorem for one-variable quadratic equations was flexibly applied within the LMI approach to the robust stability criterion for reaction–diffusion neural networks. To the best of our knowledge, little of the literature on reaction–diffusion neural networks involves such a technique.
Remark 6
To the best of our knowledge, this is the first time that both a boundedness result and a robust stability criterion for reaction–diffusion high-order neural networks have been derived.
If the stochastic factor, the input variable, and the parameter uncertainty are neglected, then (2.12) becomes a deterministic system. Furthermore, letting \(a_{i}(s)\equiv1\), \(b_{i}(s)=\bar{b}_{i}s\), and \(T_{ijl}(t)=0\), system (2.12) reduces to the following reaction–diffusion cellular neural network:
Definition 4.2
For \(T\in(0,\infty]\), \(u=\{(u_{1}(t,x),u_{2}(t,x),\ldots, u_{n}(t,x))\}_{[0,T]}\) is called a mild solution of (4.1) if for any \(i\in\mathcal{N}\triangleq \{1,2,\ldots,n\}\), \(u_{i}(t,\cdot)\in\mathcal{C}([0,T]; L^{2}(\varOmega))\) and the following integral equations hold for \(t\in[0,T]\) and \(x\in\varOmega\):
and
Moreover, if the diffusion phenomenon is ignored, (4.1) degenerates into the following cellular neural network:
For system (4.2), we get the following concise conclusion in the sense of Definition 4.2.
Theorem 4.3
If \(f_{i}\) is Lipschitz continuous with Lipschitz constants \(L_{i}>0\) and \(f_{i}(0)=0\) for all \(i=1,2,\ldots,n\), then system (4.2) is globally exponentially mean-square stable.
To prove Theorem 4.3, we utilize [6, Thm. 5] to derive the following lemma.
Lemma 4.4
Let \(f_{i}\) and \(\sigma_{i}\) be Lipschitz continuous with Lipschitz constants \(L_{i}>0\) and \(T_{i}>0\) for \(i\in\mathcal{N}\), respectively. Let, in addition, \(f_{i}(0)=0=\sigma _{i}(0)\) for \(i\in\mathcal{N}\). Then the following time-delay ordinary differential equations are globally stochastically exponentially mean-square stable:
Proof
Rao and Zhong [6] utilized the Banach fixed point theorem, the Hölder inequality, the Burkholder–Davis–Gundy inequality, and the continuous semigroup of the Laplace operator to derive the global stochastic exponential stability in mean square of the following impulsive stochastic reaction–diffusion cellular neural network with distributed delay:
Letting \(q_{i}=0\) in [6, Thm. 5] means that the diffusion phenomenon is ignored. Furthermore, if the impulse phenomenon is also neglected, then the partial differential equations (4.4) degenerate into the ordinary differential equations (4.2). In [6, (H1)], \(q_{i}=0\) implies that \(\gamma>0\) can be taken large enough if Ω is selected suitably. In [6, (6)], \(G_{i}=0\) (the impulse phenomenon being ignored). So by [6, (6)] we can get
Obviously, \(\kappa\in(0,1)\) if γ is chosen large enough. By [6, Thm. 5], the proof is complete. □
Proof of Theorem 4.3
Now, letting \(h_{ij}=0\) and \(\sigma_{i}(\cdot)=0\) in Lemma 4.4, we can deduce Theorem 4.3 immediately. □
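Theorem 4.3's conclusion can be illustrated numerically on a hypothetical two-neuron instance of (4.2) with \(f_{i}=\tanh\) (Lipschitz constant \(L_{i}=1\), \(f_{i}(0)=0\)) and strong self-inhibition; all parameter values below are assumptions for the sketch, and a forward-Euler simulation shows the state decaying toward zero:

```python
import numpy as np

# Hypothetical two-neuron instance of the delayed cellular network (4.2):
#   du_i/dt = -b_i u_i + sum_j w_ij tanh(u_j(t - tau)),
# with tanh Lipschitz (L_i = 1, tanh(0) = 0) and strong self-inhibition.
b = np.array([3.0, 2.5])
W = np.array([[0.4, -0.3],
              [0.2,  0.5]])
tau, dt, T = 1.0, 0.001, 30.0
d = int(round(tau / dt))
steps = int(round(T / dt))
u = np.zeros((steps + d + 1, 2))
u[: d + 1] = np.array([1.0, -0.8])   # constant initial history on [-tau, 0]
for k in range(d, steps + d):        # forward Euler with delayed argument
    u[k + 1] = u[k] + dt * (-b * u[k] + W @ np.tanh(u[k - d]))
assert np.linalg.norm(u[-1]) < 1e-3  # state has decayed toward zero
```

Here \(b_{i}>\sum_{j}|w_{ij}|L_{j}\), a delay-independent dominance condition, so the decay is observed for any fixed delay.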
Next, we discuss the boundedness of all the mild solutions of system (4.1). We further assume that the initial value \(\xi_{i}(s,x)\) is bounded for all \((s,x)\in[-\tau,0]\times\varOmega\).
Definition 4.5
Model (4.1) is said to be uniformly bounded in \(L^{\infty}\) if for any given \(\tau_{1}>0\) and for all \(t\in[\tau_{1},T]\) with \(T\in(0,\infty)\), we have
Lemma 4.6
Let \(\varOmega\subset \mathbb{R}^{N}(N\in\mathbb{N})\) be a bounded domain with smooth boundary, and let Δ denote the Laplacian in \(L^{s}(\varOmega)\) with domain
for \(s\in(1,\infty)\). Then the operator \(-\Delta+1\) is sectorial and possesses closed fractional powers \((-\Delta+1)^{\eta}, \eta\in(0, 1)\), with dense domain \(D((-\Delta+1)^{\eta})\). Moreover, the following three properties hold.
(i) If \(m\in\{0,1\}, p\in[1,\infty]\), and \(q\in(1,\infty)\), then there exists a constant \(C_{1}>0\) such that, for all \(z\in D((-\Delta+1)^{\eta})\),
(ii) Suppose \(p\in[1,\infty)\). Then the associated heat semigroup \((e^{t\Delta})_{t\geqslant0}\) maps \(L^{p}(\varOmega)\) into \(D((-\Delta +1)^{\eta})\) in \(L^{p}(\varOmega)\), and there exist constants \(C_{2}>0\) and \(\lambda_{2}>0\) such that
for all \(z\in L^{p}(\varOmega)\) and \(t>0\).
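The exponential factor in property (ii) can be visualized spectrally: on \(\varOmega=(0,\pi)\) with Neumann conditions, mean-zero data expanded in the eigenfunctions \(\cos(kx)\) decay under \(e^{t\Delta}\) at least at rate \(\lambda_{2}=1\), the first positive Neumann eigenvalue. A Python illustration with hypothetical coefficients:

```python
import numpy as np

# Spectral sketch on Omega = (0, pi) with Neumann conditions: mean-zero
# data expanded in cos(k x) decay under e^{t*Delta} at least at rate
# lambda_2 = 1, the first positive Neumann eigenvalue. The coefficients
# below are hypothetical.
x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]
coeffs = {1: 0.7, 2: -0.4, 3: 0.2}

def heat(t):
    """e^{t*Delta} applied to the data, mode by mode (k-th mode ~ e^{-k^2 t})."""
    return sum(c * np.exp(-k**2 * t) * np.cos(k * x) for k, c in coeffs.items())

t = 2.0
norm0 = np.sqrt(np.sum(heat(0.0) ** 2) * dx)
normt = np.sqrt(np.sum(heat(t) ** 2) * dx)
assert normt <= np.exp(-1.0 * t) * norm0   # decay at rate lambda_2 = 1
```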
Theorem 4.7
If \(f_{i}\) is Lipschitz continuous with Lipschitz constant \(L_{i}>0\) and \(f_{i}(0)=0\) for all \(i=1,2,\ldots,n\), and \(\|e^{D_{i}t\Delta}\| \leqslant Me^{-\gamma t}\), where \(M>0\) and \(\gamma>0\) are constants, then model (4.1) is uniformly bounded in \(L^{\infty}\).
Proof
Employing the variation-of-constants formula for \(u_{i}\), we derive that, for any \(\tau_{1}>0\),
Letting \(q=2\) and \(\eta\in(\frac{1}{2},\frac{2}{3})\) in Lemma 4.6, we see that
Here \(\lambda_{2}>0\) is the first positive eigenvalue of the Neumann boundary problem
where \(\partial_{\nu}\) denotes differentiation with respect to the outward normal of ∂Ω.
In view of Definition 4.5, we can similarly derive
Similarly, we can utilize the triangle inequality to prove
Combining (4.5)–(4.7) results in
which completes the proof. □
By employing the method similar to that of the proof of Lemma 4.4, we get the following corollary from Theorem 4.7.
Corollary 4.8
If \(f_{i}\) is Lipschitz continuous with Lipschitz constant \(L_{i}>0\) and \(f_{i}(0)=0\) for all \(i=1,2,\ldots,n\), then model (4.2) is uniformly bounded in \(L^{\infty}\).
5 Numerical example
Example 5.1
Consider system (2.1) or (2.6) with the following data. Let \(n=2\), \(S=\{1,2\}\), and rewrite system (2.1) as follows:
where \(\varOmega=[0,1]\times[0,1]\subset R^{2}\), \(p=2.116\), and
Remark 7
Here, we only verify that \(b_{1}(\cdot)\) satisfies (H2). Other functions can be similarly verified to satisfy the corresponding conditions. Obviously, \(b_{1}(0)=0\), and the Lagrange mean value theorem yields
This verifies that \(b_{1}(\cdot)\) satisfies condition (H2).
Next, we propose the following data for system (2.1) or (2.6):
Let \(\alpha_{i}=\frac{1}{2^{i+1}}, \xi_{i}(s,x)= \cos(s^{2}+x^{2}+i), i=1,2; \tau (t)=9.878\cos^{2}t\), and \(\tau=9.878\). Now we can compute by Matlab that
and
so that (3.5) is satisfied.
Moreover, running Matlab LMI toolbox on LMI condition (3.6) results in
Therefore, according to Theorem 3.3, there exists a unique equilibrium point for system (5.1), which is globally robustly asymptotically stochastically stable, and all the solutions of system (5.1) are bounded (see Figs. 1–2).
Remark 8
In [31, Thm. 1], the equilibrium point of system (1.1) is globally exponentially stable in the norm \(\|\cdot\|_{2}\) in the mean square for any time-varying delay \(\tau(t)\) satisfying \(\dot{\tau}(t)\leqslant\eta<1\), whereas the condition \(\dot{\tau}(t)\leqslant\eta<1\) is not necessary in our Theorem 3.3 (see, e.g., Example 5.1). Owing to the ingenious employment of our Lemma 3.2, we obtain robust stability criteria and boundedness results in Theorem 3.3 and Corollary 4.1, whereas no such results appear in [31, Thm. 1]. Motivated by the methods and results of the related literature, we have developed some new methods and results in this paper.
In [50], the stability of periodic solutions for reaction–diffusion high-order Hopfield neural networks with time-varying delays was derived, which provides much beneficial inspiration.
Remark 9
In comparison with [50, Thms. 3.1–3.3], our Theorem 3.3 and Corollary 4.1 give criteria in the form of LMI conditions, which can be verified with the Matlab LMI toolbox; this makes our Theorem 3.3 and Corollary 4.1 more practical than [50, Thms. 3.1–3.3]. In addition, boundedness is not considered in [50], whereas in this paper we present boundedness results.
6 Conclusions and further considerations
To the best of our knowledge, this is the first time that the boundedness of p-Laplacian reaction–diffusion Markovian jump high-order neural networks has been obtained, and the derived robust stability criteria can be verified with the Matlab LMI toolbox, which makes them applicable to the large-scale computations of practical complex engineering. Finally, a numerical example demonstrates the effectiveness of the proposed method.
Under the Lipschitz condition on the active function, Theorem 4.3 and Corollary 4.8 present the stability and boundedness result for system (4.2). So we want to know whether the following system is bounded and stable under similar concise conditions:
This is an interesting problem.
References
Li, X., Song, S.: Stabilization of delay systems: delay-dependent impulsive control. IEEE Trans. Autom. Control 62(1), 406–411 (2017)
Shen, H., Xing, M., Huo, S., Wu, Z., Park, J.: Finite-time H∞ asynchronous state estimation for discrete-time fuzzy Markov jump neural networks with uncertain measurements. Fuzzy Sets Syst. (2018, to appear). https://doi.org/10.1016/j.fss.2018.01.017
Yin, Y., Niu, H., Liu, X.: Adaptive neural network sliding mode control for quad tilt rotor aircraft. Complexity 2017, Article ID 7104708 (2017)
Shao, Y., Chang, P., Lu, C.: Applying two-stage neural network based classifiers to the identification of mixture control chart patterns for an SPC-EPC process. Complexity 2017, Article ID 2323082 (2017)
Li, X., Wu, J.: Sufficient stability conditions of nonlinear differential systems under impulsive control with state-dependent delay. IEEE Trans. Autom. Control 63(1), 306–311 (2018)
Rao, R., Zhong, S.: Stability analysis of impulsive stochastic reaction–diffusion cellular neural network with distributed delay via fixed point theory. Complexity 2017, Article ID 6292597 (2017)
Li, X., Wu, J.: Stability of nonlinear differential systems with state-dependent delayed impulses. Automatica 64, 63–69 (2016)
Cohen, M., Grossberg, S.: Absolute stability and global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. 13, 815–826 (1983)
Li, R., Cao, J., Alsaedi, A., Alsaadi, F.: Exponential and fixed-time synchronization of Cohen–Grossberg neural networks with time-varying delays and reaction–diffusion terms. Appl. Math. Comput. 313, 37–51 (2017)
Song, Q., Zhang, J.: Global exponential stability of impulsive Cohen–Grossberg neural network with time-varying delays. Nonlinear Anal., Real World Appl. 9(2), 500–510 (2008)
Li, K., Song, Q.: Exponential stability of impulsive Cohen–Grossberg neural networks with time-varying delays and reaction-diffusion terms. Neurocomputing 72(1–3), 231–240 (2008)
Du, Y., Zhong, S., Zhou, N., Shi, K., Cheng, J.: Exponential stability for stochastic Cohen–Grossberg BAM neural networks with discrete and distributed time-varying delays. Neurocomputing 127, 144–151 (2014)
Zhu, Q., Cao, J.: Exponential stability analysis of stochastic reaction–diffusion Cohen–Grossberg neural networks with mixed delays. Neurocomputing 74(17), 3084–3091 (2011)
Rao, F., Zhong, S., Pu, Z.: On the role of diffusion factors in stability analysis for p-Laplace dynamical equations involved to BAM Cohen–Grossberg neural network. Neurocomputing 223, 54–62 (2017)
Chen, H., Zhong, S., Liu, X., Li, Y., Shi, K.: Improved results on nonlinear perturbed T–S fuzzy system with mixed delays. J. Franklin Inst. 354(4), 2032–2052 (2017)
Zhang, R., Liu, X., Zeng, D., Zhong, S., Shi, K.: A novel approach to stability and stabilization of fuzzy sampled-data Markovian chaotic systems. Fuzzy Sets Syst. 344, 108–128 (2018)
Song, Q., Yan, H., Zhao, Z., Liu, Y.: Global exponential stability of complex-valued neural networks with both time-varying delays and impulsive effects. Neural Netw. 79, 108–116 (2016)
Shen, H., Li, F., Xu, S., Sreeram, V.: Slow state variables feedback stabilization for semi-Markov jump systems with singular perturbations. IEEE Trans. Autom. Control 63(8), 2709–2714 (2018)
Song, Q., Yu, Q., Zhao, Z., Liu, Y., Alsaadi, F.: Boundedness and global robust stability analysis of delayed complex-valued neural networks with interval parameter uncertainties. Neural Netw. 103, 55–62 (2018)
Zhang, R., Zeng, D., Zhong, S.: Novel master–slave synchronization criteria of chaotic Lur’e systems with time delays using sampled-data control. J. Franklin Inst. 354(12), 4930–4954 (2017)
Chen, H., Zhong, S., Shao, J.: Exponential stability criterion for interval neural networks with discrete and distributed delays. Appl. Math. Comput. 250, 121–130 (2015)
Song, Q., Yan, H., Zhao, Z., Liu, Y.: Global exponential stability of impulsive complex-valued neural networks with both asynchronous time-varying and continuously distributed delays. Neural Netw. 81, 1–10 (2016)
Shen, H., Li, F., Yan, H., Karimi, H., Lam, H.: Finite-time event-triggered \(\mathrm{H}_{\infty}\) control for T–S fuzzy Markov jump systems. IEEE Trans. Fuzzy Syst. 26(5), 3122–3135 (2018)
Zhang, R., Zeng, D., Zhong, S., Yu, Y.: Event-triggered sampling control for stability and stabilization of memristive neural networks with communication delays. Appl. Math. Comput. 310, 57–74 (2017)
Chen, H., Zhong, S., Li, M., Liu, X., Adu-Gyamfi, F.: Stability criteria for T–S fuzzy systems with interval time-varying delays and nonlinear perturbations based on geometric progression delay partitioning method. ISA Trans. 63, 69–77 (2016)
Wang, Z., Shen, L., Xia, J., Shen, H., Wang, J.: Finite-time non-fragile \(l^{2}\)–\(l^{\infty}\) control for jumping stochastic systems subject to input constraints via an event-triggered mechanism. J. Franklin Inst. 355(14), 6371–6389 (2018)
Shen, H., Huo, S., Cao, J., Huang, T.: Generalized state estimation for Markovian coupled networks under round-robin protocol and redundant channels. IEEE Trans. Cybern. (2018, in press). https://doi.org/10.1109/TCYB.2018.2799929
Shen, H., Dai, M., Yan, H., Park, J.: Quantized output feedback control for stochastic semi-Markov jump systems with unreliable links. IEEE Trans. Circuits Syst. II, Express Briefs (2018, in press). https://doi.org/10.1109/TCSII.2018.2801343
Cheng, J., Zhu, H., Zhong, S., Zeng, Y., Dong, X.: Finite-time \(\mathrm{H}_{\infty}\) control for a class of Markovian jump systems with mode-dependent time-varying delays via new Lyapunov functionals. ISA Trans. 52, 768–774 (2013)
Yang, W., Yu, W., Cao, J., Alsaadi, F., Hayat, T.: Almost automorphic solution for neutral type high-order Hopfield BAM neural networks with time-varying leakage delays on time scales. Neurocomputing 267, 241–260 (2017)
Wang, Y., Lin, P., Wang, L.: Exponential stability of reaction–diffusion high-order Markovian jump Hopfield neural networks with time-varying delays. Nonlinear Anal., Real World Appl. 13(3), 1353–1361 (2012)
Zheng, C., Li, N., Cao, J.: Matrix measure based stability criteria for high-order neural networks with proportional delay. Neurocomputing 149, 1149–1154 (2015)
Wang, F., Liu, M.: Global exponential stability of high-order bidirectional associative memory (BAM) neural networks with time delays in leakage terms. Neurocomputing 177, 515–528 (2016)
Aouiti, C., M’hamdi, M., Cherif, F., Alimi, A.: Impulsive generalised high-order recurrent neural networks with mixed delays: stability and periodicity. Neurocomputing 321, 296–307 (2018)
Alimi, A., Aouiti, C., Cherif, F., Dridi, F., M’hamdi, M.: Dynamics and oscillations of generalized high-order Hopfield neural networks with mixed delays. Neurocomputing 321, 274–295 (2018)
Huang, C., Cao, J.: Impact of leakage delay on bifurcation in high-order fractional BAM neural networks. Neural Netw. 98, 223–235 (2018)
Caraballo, T., Herrera-Cobos, M., Marín-Rubio, P.: Asymptotic behaviour of nonlocal p-Laplacian reaction–diffusion problems. J. Math. Anal. Appl. 459(2), 997–1015 (2018)
Chipot, M., Savitska, T.: Nonlocal p-Laplace equations depending on the \(L^{p}\) norm of the gradient. Adv. Differ. Equ. 19(11/12) (2014)
Rao, R., Zhong, S., Pu, Z.: LMI-based robust exponential stability criterion of impulsive integro-differential equations with uncertain parameters via contraction mapping theory. Adv. Differ. Equ. 2017, 19 (2017)
Zhang, Z., Cao, J.: Periodic solutions for complex-valued neural networks of neutral type by combining graph theory with coincidence degree theory. Adv. Differ. Equ. 2018, 261 (2018)
Zhang, W., Li, J.: Almost sure stability of the delayed Markovian jump RDNNs. Adv. Differ. Equ. 2018, 248 (2018)
Xu, B., Liu, X., Liao, X.: Global asymptotic stability of high-order Hopfield type neural networks with time delays. Comput. Math. Appl. 45, 1729–1737 (2003)
Zhang, X., Wu, S., Li, K.: Delay-dependent exponential stability for impulsive Cohen–Grossberg neural networks with time-varying delays and reaction–diffusion terms. Commun. Nonlinear Sci. Numer. Simul. 16, 1524–1532 (2011)
Wang, Y., Xie, L., de Souza, C.E.: Robust control of a class of uncertain nonlinear systems. Syst. Control Lett. 19, 139–149 (1992)
Boyd, S., El Ghaoui, L., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia (1994)
Pan, J., Liu, X., Zhong, S.: Stability criteria for impulsive reaction–diffusion Cohen–Grossberg neural networks with time-varying delays. Math. Comput. Model. 51(9–10), 1037–1050 (2010)
Rao, R., Zhong, S.: Existence of exponential p-stability nonconstant equilibrium of Markovian jumping nonlinear diffusion equations via Ekeland variational principle. Adv. Math. Phys. 2015, Article ID 812150 (2015)
Henry, D.: Geometric Theory of Semilinear Parabolic Equations. Lecture Notes Math., vol. 840. Springer, Berlin (1981)
Horstmann, D., Winkler, M.: Boundedness vs. blow-up in a chemotaxis system. J. Differ. Equ. 215, 52–107 (2005)
Duan, L., Huang, L., Guo, Z., Fang, X.: Periodic attractor for reaction–diffusion high-order Hopfield neural networks with time-varying delays. Comput. Math. Appl. 73(2), 233–245 (2017)
Acknowledgements
The authors wholeheartedly thank the anonymous referees for their constructive suggestions, which improved the quality of this article.
Funding
The research was supported by the National Natural Science Foundation of China (Nos. 61771004 and 61533006), Scientific Research Fund of Sichuan Provincial Education Department (18ZA0082), and the 2018 teaching reform project (2018JG38) of Chengdu Normal University “Financial mathematics course—Extended applications of stochastic process in dynamical system”.
Contributions
Both authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Rao, R., Zhong, S. Boundedness and robust analysis of dynamics of nonlinear diffusion high-order Markovian jump time-delay system. Adv Differ Equ 2018, 434 (2018). https://doi.org/10.1186/s13662-018-1888-0