Research | Open access
Global asymptotic stability of piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time-varying delays
Advances in Difference Equations volume 2016, Article number: 60 (2016)
Abstract
In this paper, the problem of global asymptotic stability analysis is addressed for piecewise homogeneous Markovian jump BAM neural networks with mixed time delays. By constructing a suitable Lyapunov functional, using mode-dependent discrete delays, and applying the linear matrix inequality (LMI) method, a novel sufficient condition is obtained that guarantees the stability of the considered system. A numerical example is provided to demonstrate the feasibility and effectiveness of the proposed results.
1 Introduction
As is well known, bidirectional associative memory (BAM) neural networks were originally introduced by Kosko [1–3]. They are a class of two-layer heteroassociative networks composed of neurons arranged in two layers, the U-layer and the V-layer. Generally speaking, the neurons in one layer are fully interconnected to the neurons in the other layer, while there may be no interconnection among neurons within the same layer. In addition, the addressable memories or patterns of BAM neural networks can be stored and retrieved through a two-way associative search. For these reasons, BAM neural networks have been widely studied in both theory and applications; see [4–13]. It is therefore meaningful and important to study them.
Recently, a great deal of work has been devoted to the stability analysis of dynamical systems [14–25]. It is worth noting that Markovian jump systems have received increasing attention in the mathematics and control research communities, so the study of Markovian jumps is of great theoretical and practical significance. Much work has been done on Markov processes and Markov chains in the literature, and the issues of stability and control have been well investigated; see, for example, [14–20] and the references therein. The stability analysis problem was investigated in [17] for stochastic high-order Markovian jumping neural networks with mixed time delays. In [18], the authors made the first attempt to deal with \(H_{\infty}\) estimation for discrete-time piecewise homogeneous Markov jump linear systems, where the time-varying transition probabilities (TPs) were considered to be finite piecewise homogeneous and their variations were of two types: arbitrary variations and stochastic variations. The \(H_{\infty}\) filtering of piecewise homogeneous Markovian jump nonlinear systems was studied in [19], where a mode-dependent filter was obtained. Very recently, the stochastic stability of piecewise homogeneous Markovian jump neural networks with mixed time delays was investigated in [20], but the time-varying delays in [20] are independent of the Markovian jump mode. To the best of our knowledge, no results have been reported for piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time delays.
This constitutes the motivation for the present research. In this paper, we deal with the stability problem for piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time delays. By employing the Lyapunov method together with mode-dependent discrete delays and some inequality techniques, sufficient conditions are derived for the global asymptotic stability in the mean square of such networks. An illustrative example is also provided to show the effectiveness of the obtained results.
2 Model description and preliminaries
In this paper, we consider BAM neural networks with discrete and distributed time-varying delays described by
with initial values
where \(x(t)=[x_{1}(t),x_{2}(t),\ldots,x_{n}(t)]^{\intercal}\) and \(y(t)=[y_{1}(t),y_{2}(t),\ldots,y_{n}(t)]^{\intercal}\) are the state vectors, n is the number of units in the neural networks, \(C=\operatorname{diag}(c_{1},c_{2},\ldots,c_{n})\) and \(D=\operatorname{diag}(d_{1},d_{2},\ldots,d_{n})\) are diagonal matrices with positive entries \(c_{i}>0\) and \(d_{i}>0\); \(A_{1}=(a_{ij}^{(1)})_{n\times n}\) and \(B_{1}=(b_{ij}^{(1)})_{n\times n}\) are the synaptic connection matrices, \(A_{2}=(a_{ij}^{(2)})_{n\times n}\) and \(B_{2}=(b_{ij}^{(2)})_{n\times n}\) are the discretely delayed connection weight matrices, \(A_{3}=(a_{ij}^{(3)})_{n\times n}\) and \(B_{3}=(b_{ij}^{(3)})_{n\times n}\) are the distributively delayed connection weight matrices, \(f(y)=(f_{1}(y_{1}),f_{2}(y_{2}),\ldots,f_{n}(y_{n}))^{\intercal}\) and \(g(x)=(g_{1}(x_{1}),g_{2}(x_{2}),\ldots,g_{n}(x_{n}))^{\intercal}\) are the activation functions, and \(\tau_{i}(t)\) and \(d_{i}(t)\) (\(i=1,2\)) are the discrete and distributed time-varying delays, respectively, satisfying \(0\leq d_{i}(t)\leq d_{i}\), \(0\leq\dot{d}_{i}(t)\leq d_{iu}\), \(0\leq\tau_{i}(t)\leq\tau_{i}\), \(0\leq\dot{\tau}_{i}(t)\leq \tau_{iu}\) (\(i=1,2\)). The initial value function is \(\phi=(\phi_{1}^{\intercal},\phi_{2}^{\intercal})^{\intercal}\in C_{\digamma_{0}}^{2}([-\mu,0],\Re^{n+n})\), where \(C_{\digamma_{0}}^{2}([-\mu,0],\Re^{n+n})\) denotes the family of all bounded, \(\digamma_{0}\)-measurable, \(C([-\mu,0],\Re^{n+n})\)-valued random variables satisfying \(\|\phi\|=\sup_{-\mu\leq s\leq0}E|\phi(s)|^{2}<\infty\), with E the expectation operator of the stochastic process, and \(\mu\triangleq\max(d,\tau)\), where \(d\triangleq\max(d_{1},d_{2})\), \(\tau\triangleq\max(\tau_{1},\tau_{2})\).
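The displayed system (1) is not reproduced in this extract. Based on the matrix roles defined above, a mixed-delay BAM system of the kind studied here takes the following form (a sketch consistent with the notation, not a verbatim copy of (1)):

```latex
\begin{aligned}
\dot{x}(t) &= -C x(t) + A_{1} f\bigl(y(t)\bigr) + A_{2} f\bigl(y(t-\tau_{1}(t))\bigr)
              + A_{3}\int_{t-d_{1}(t)}^{t} f\bigl(y(s)\bigr)\,ds,\\
\dot{y}(t) &= -D y(t) + B_{1} g\bigl(x(t)\bigr) + B_{2} g\bigl(x(t-\tau_{2}(t))\bigr)
              + B_{3}\int_{t-d_{2}(t)}^{t} g\bigl(x(s)\bigr)\,ds.
\end{aligned}
```

The pairing of \(\tau_{1}\) with the y-layer and \(\tau_{2}\) with the x-layer follows the delayed terms \(y(t-\tau_{1i}(t))\) and \(x(t-\tau_{2i}(t))\) appearing later in the proof of Theorem 3.1.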
The activation functions \(g_{i}(x_{i}(t))\) and \(f_{i}(y_{i}(t))\) (\(i=1,2,\ldots,n\)) are assumed to be nondecreasing, bounded, and globally Lipschitz; that is,
for all \(\xi_{1},\xi_{2}\in\Re\), \(\xi_{1}\neq\xi_{2}\) (\(j=1,2,\ldots,n\)), where \(l_{j}>0\) and \(m_{j}>0\) (\(j=1,2,\ldots,n\)). Denote \(L=\operatorname{diag}(l_{1},l_{2},\ldots,l_{n})\) and \(M=\operatorname{diag}(m_{1},m_{2},\ldots,m_{n})\).
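The displayed conditions (2) and (3) are missing from this extract. For nondecreasing, globally Lipschitz activations with constants \(l_{j}\), \(m_{j}\) as above, the standard sector-type conditions read (a reconstruction; the exact pairing of constants with activations follows the matrices L and M):

```latex
0 \le \frac{g_{j}(\xi_{1})-g_{j}(\xi_{2})}{\xi_{1}-\xi_{2}} \le l_{j},
\qquad
0 \le \frac{f_{j}(\xi_{1})-f_{j}(\xi_{2})}{\xi_{1}-\xi_{2}} \le m_{j},
\qquad j=1,2,\ldots,n.
```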
Now, based on BAM neural networks (1) and fixing a probability space \((\Omega,\digamma,\mathcal{P})\), we introduce the following Markovian jump BAM neural networks with mixed time delays:
For convenience, each possible value of \(r(t)\) is denoted by i, \(i\in S_{1}\), in the following. Then we have
The process \(\{r_{t},t\geq0\}\) is described by a Markov chain with finite state space \(S_{1}=\{1,2,\ldots,s\}\), and its transition probability matrix \(\Pi^{(\delta_{t+h})}\triangleq[\pi_{ij}^{(\delta _{t+h})}]_{s\times s}\) is given by
where \(h>0\) and \(\lim_{h\rightarrow0}o(h)/h=0\); \(\pi_{ij}^{(\delta_{t+h})}>0\) for \(j\neq i\) is the transition rate from mode i at time t to mode j at time \(t+h\), and \(\pi_{ii}^{(\delta_{t+h})}=-\sum_{j=1,j\neq i}^{s}\pi_{ij}^{(\delta_{t+h})}\). In this study, we assume that \(\delta_{t}\) takes values in another finite set \(S_{2}=\{1,2,\ldots,l\}\) with transition probability matrix \(\Lambda\triangleq[q_{mn}]_{l\times l}\) given by
where \(h>0\) and \(\lim_{h\rightarrow0}o(h)/h=0\); \(q_{mn}>0\) for \(m\neq n\) is the transition rate from mode m at time t to mode n at time \(t+h\), and \(q_{mm}=-\sum_{n=1,n\neq m}^{l}q_{mn}\).
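The displayed transition probability expressions are missing from this extract. Given the rates and the \(o(h)\) condition defined above, they take the standard infinitesimal form (a reconstruction from the surrounding definitions):

```latex
\Pr\{r_{t+h}=j \mid r_{t}=i\} =
\begin{cases}
\pi_{ij}^{(\delta_{t+h})}h + o(h), & j \neq i,\\
1 + \pi_{ii}^{(\delta_{t+h})}h + o(h), & j = i,
\end{cases}
\qquad
\Pr\{\delta_{t+h}=n \mid \delta_{t}=m\} =
\begin{cases}
q_{mn}h + o(h), & n \neq m,\\
1 + q_{mm}h + o(h), & n = m.
\end{cases}
```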
Now, we are ready to introduce the notion of homogeneousness.
Definition 2.1
A finite Markov process \(r_{t}\in S_{1}\) is said to be homogeneous (respectively, nonhomogeneous) if for all \(t\geq 0\) the transition probability satisfies \(\Pr\{r_{t+h}=j\mid r_{t}=i\}=\pi_{ij}\) (respectively, \(\Pr\{r_{t+h}=j\mid r_{t}=i\}=\pi_{ij}(t)\)), where \(\pi_{ij}\) (or \(\pi_{ij}(t)\)) denotes a probability function.
Remark 1
In this paper, according to the definitions of homogeneousness and nonhomogeneousness, the Markov chain \(\delta_{t}\) is homogeneous, while the Markov chain \(r_{t}\) is nonhomogeneous.
Next, we introduce several lemmas that will be essential in proving our conclusions in Section 3.
Lemma 2.1
[26]
For any constant matrix \(M>0\), any scalars a and b with \(a< b\), and a vector function \(x(t):[a,b]\rightarrow R^{n}\) such that the integrals concerned are well defined, the following holds:
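The displayed inequality of Lemma 2.1 is missing from this extract; for the stated hypotheses it is the standard Jensen-type integral inequality (a reconstruction):

```latex
\biggl(\int_{a}^{b} x(s)\,ds\biggr)^{\!\intercal} M \biggl(\int_{a}^{b} x(s)\,ds\biggr)
\le (b-a)\int_{a}^{b} x^{\intercal}(s)\, M\, x(s)\,ds.
```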
Lemma 2.2
(Schur complement [27])
Let there be given constant matrices \(Z_{1}\), \(Z_{2}\), \(Z_{3}\), where \(Z_{1}=Z_{1}^{\intercal}\) and \(Z_{2}=Z_{2}^{\intercal}>0\). Then \(Z_{1}+Z_{3}^{\intercal}Z_{2}^{-1}Z_{3}<0\) if and only if \(\bigl [ {\scriptsize\begin{matrix}{} Z_{1} & Z_{3}^{\intercal} \cr Z_{3} & -Z_{2} \end{matrix}} \bigr ]<0\) or \(\bigl [{\scriptsize\begin{matrix}{} -Z_{2} & Z_{3} \cr Z_{3}^{\intercal} & Z_{1} \end{matrix}} \bigr ]<0\).
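The equivalence in Lemma 2.2 can be checked numerically. The following sketch uses small hypothetical matrices (\(Z_{1}=-2I\), \(Z_{2}=I\), \(Z_{3}=0.5I\), not taken from the paper) and verifies that the reduced condition and the block condition are both negative definite:

```python
import numpy as np

# Hypothetical test matrices: Z1 symmetric, Z2 symmetric positive definite.
Z1 = -2.0 * np.eye(2)
Z2 = np.eye(2)
Z3 = 0.5 * np.eye(2)

# "Reduced" condition: Z1 + Z3^T Z2^{-1} Z3 < 0.
reduced = Z1 + Z3.T @ np.linalg.inv(Z2) @ Z3

# Equivalent block condition: [[Z1, Z3^T], [Z3, -Z2]] < 0.
block = np.block([[Z1, Z3.T], [Z3, -Z2]])

# Negative definiteness <=> largest eigenvalue is negative.
max_eig_reduced = np.linalg.eigvalsh(reduced).max()
max_eig_block = np.linalg.eigvalsh(block).max()
print(max_eig_reduced, max_eig_block)  # both negative
```

Here `reduced` equals \(-1.75I\), and the block matrix is likewise negative definite, in agreement with the lemma.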
3 Main results
In this section, a set of conditions is derived to guarantee the global asymptotic stability in the mean square of the BAM neural networks (4).
Theorem 3.1
For any given scalars \(d_{1}\), \(d_{2}\), \(\tau_{1}\), \(\tau_{2}\), and \(\tau_{1u}\), \(\tau_{2u}\), the BAM neural networks in (4) are globally asymptotically stable in the mean square if there exist \(P_{ji,m}>0\), \(Q_{ji,m}=\bigl [ {\scriptsize\begin{matrix}{} Q_{ji,m}^{1} & Q_{ji,m}^{2} \cr \ast& Q_{ji,m}^{3}\end{matrix}} \bigr ]>0\), \(R_{ji,m}=\bigl [ {\scriptsize\begin{matrix}{} R_{ji,m}^{1} & R_{ji,m}^{2} \cr \ast& R_{ji,m}^{3} \end{matrix}} \bigr ]>0\), \(W_{j}=\bigl [ {\scriptsize\begin{matrix}{} W_{j}^{1} & W_{j}^{2} \cr \ast& W_{j}^{3} \end{matrix}} \bigr ]>0\), \(Q_{j}=\bigl [ {\scriptsize\begin{matrix}{} Q_{j}^{1} & Q_{j}^{2} \cr \ast& Q_{j}^{3} \end{matrix}} \bigr ]>0\) (\(j=1,2\)), \(X_{ji,m}>0\), \(Y_{ji,m}>0\), \(E_{ji,m}>0\), \(F_{ji,m}>0\), \(S_{ji,m}\) (\(j=1,2\)), \(X_{i}>0\), \(Y_{i}>0\), \(E_{i}>0\), \(F_{i}>0\) (\(i=3,4\)), and any matrices \(K_{i}\) (\(i=1,2,3,4,5,6\)) with appropriate dimensions such that the following LMIs hold:
where
Proof
Consider the following Lyapunov-Krasovskii functional:
where
Denote \(\eta_{1}(t)=[x^{\intercal}(t),g^{\intercal}(x(t))]^{\intercal}\) and \(\eta_{2}(t)=[y^{\intercal}(t),f^{\intercal}(y(t))]^{\intercal}\).
Let \(\mathcal{L}\) denote the infinitesimal generator of the Markovian process acting on \(V(t,x_{t},y_{t},r_{t},\delta_{t})\) (\(r_{t}=i\), \(\delta_{t}=m\)), defined as follows:
Then, for each \(i\in S_{1}\), \(m\in S_{2}\), the stochastic differential of V along the trajectory of system (4) is given by
Denote
Next, by using a method similar to that of [19] in (21), when \(0<\tau_{1i}(t)<\tau_{1}\) and \(0<\tau_{2i}(t)<\tau_{2}\), Jensen's inequality gives
By a reciprocally convex approach, if the inequality (8) holds, then the following inequality holds:
which implies
Then from equations (24) and (26) we get
It should be noted that when \(\tau_{2i}(t)=0\) or \(\tau_{2i}(t)=\tau_{2}\), we have \(\sigma_{1}(t)=0\) or \(\sigma_{2}(t)=0\), respectively. Thus equation (27) still holds. It is clear that equation (27) implies
where \(\mathcal{X}(t)=[ x^{\intercal}(t) \ x^{\intercal}(t-\tau_{2i}(t)) \ x^{\intercal}(t-\tau _{2}) ]^{\intercal}\),
Similarly, we also have
where \(\mathcal{Y}(t)=[ y^{\intercal}(t) \ y^{\intercal}(t-\tau_{1i}(t)) \ y^{\intercal}(t-\tau _{1}) ]^{\intercal}\),
Using (2) and (3), for any positive diagonal matrices \(K_{j}\) (\(j=1,2,3,4,5,6\)), we have
Here, by Lemma 2.1, the integral terms \(-d_{1}\int _{t-d_{1}}^{t}f^{\intercal}(y(s))E_{1i,m}f(y(s))\,ds\) and \(-d_{2}\int_{t-d_{2}}^{t}g^{\intercal}(x(s))F_{1i,m}g(x(s))\,ds\) can be estimated, respectively, as
Then it follows from (18)-(37) that
Here
Applying the Schur complement shows that (39) is equivalent to (7). We have
which implies \(\mathcal{L}V(t,x_{t},y_{t},i,m)<0\). Thus, the system (4) is globally asymptotically stable in the mean square. This completes the proof. □
Remark 2
In [19], the authors did excellent work on piecewise homogeneous Markovian jump neural networks; the main contribution there is the stochastic stability analysis of a class of continuous-time neural networks with time-varying transition probabilities and mixed time delays. However, there are no results on piecewise homogeneous Markovian jump BAM neural network systems, and from an application point of view the study of such BAM networks is essential.
Specifically, when there is no distributed delay, the system (4) reduces to
Consider the following Lyapunov functional for the above BAM neural networks:
where \(V_{1}(t,x_{t},y_{t},r_{t},\delta_{t})\), \(V_{2}(t,x_{t},y_{t},r_{t},\delta_{t})\), \(V_{3}(t,x_{t},y_{t},r_{t},\delta_{t})\), and \(V_{4}(t,x_{t},y_{t},r_{t},\delta_{t})\) have the same definitions as in equation (16). The following corollary is obtained along lines similar to the proof of Theorem 3.1.
Corollary 3.1
For any given scalars \(\tau_{1}\), \(\tau_{2}\) and \(\tau_{1u}\), \(\tau_{2u}\), the above BAM neural networks without distributed delays are globally asymptotically stable in the mean square if there exist \(P_{ji,m}>0\), \(Q_{ji,m}=\bigl [ {\scriptsize\begin{matrix}{} Q_{ji,m}^{1} & Q_{ji,m}^{2} \cr \ast& Q_{ji,m}^{3} \end{matrix}} \bigr ]>0\), \(R_{ji,m}=\bigl [ {\scriptsize\begin{matrix}{} R_{ji,m}^{1} & R_{ji,m}^{2} \cr \ast& R_{ji,m}^{3} \end{matrix}} \bigr ]>0\), \(W_{j}=\bigl [ {\scriptsize\begin{matrix}{} W_{j}^{1} & W_{j}^{2} \cr \ast& W_{j}^{3} \end{matrix}} \bigr ]>0\), \(Q_{j}=\bigl [ {\scriptsize\begin{matrix}{} Q_{j}^{1} & Q_{j}^{2} \cr \ast& Q_{j}^{3} \end{matrix}} \bigr ]>0 \) (\(j=1,2\)), \(X_{ji,m}>0\), \(Y_{ji,m}>0\), \(S_{ji,m}\) (\(j=1,2\)), \(X_{i}>0\), \(Y_{i}>0\) (\(i=3,4\)), and any matrices \(K_{i}\) (\(i=1,2,3,4,5,6\)) with appropriate dimensions such that the following LMIs hold:
where
4 Examples
In this section, we give a numerical example to show the effectiveness of the derived conditions. Consider the BAM neural networks (4) with the following parameters:
and the activation functions are taken as follows:
In this example, we assume \(\tau_{1}=1.7531\), \(\tau_{2}=1.2551\), \(\tau_{1u}=0.5060\), \(\tau_{2u}=0.6991\), and \(d_{1}=d_{2}=0.8\). The discrete delay \(\tau_{1}(t)=1.2+0.5\cos(t)\), \(\tau_{2}(t)=0.6+0.6\sin(t)\) and the distributed delay \(d_{1}(t)=d_{2}(t)=0.8\cos^{2}(t)\).
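The example's parameter matrices and activation functions are not reproduced in this extract. The following simulation sketch therefore uses hypothetical small connection matrices and `tanh` activations (labeled as such), together with the delay functions quoted above, to illustrate the kind of mean-square-stable behavior the theorem predicts; it integrates the mixed-delay BAM dynamics with forward Euler and approximates the distributed-delay integrals by Riemann sums:

```python
import numpy as np

# Hypothetical parameters for illustration only (NOT the paper's matrices):
# C, D are positive diagonal decay matrices; the connection weights are small
# so the trajectory is expected to decay to the origin.
C = np.eye(2)
D = np.eye(2)
A1 = 0.1 * np.array([[1.0, -0.5], [0.3, 0.2]])
A2 = 0.1 * np.array([[0.4, 0.1], [-0.2, 0.5]])
A3 = 0.1 * np.array([[0.2, 0.0], [0.1, -0.3]])
B1 = 0.1 * np.array([[0.5, 0.2], [-0.1, 0.4]])
B2 = 0.1 * np.array([[0.3, -0.2], [0.2, 0.1]])
B3 = 0.1 * np.array([[0.1, 0.2], [0.0, 0.3]])
f = np.tanh  # hypothetical activations (bounded, Lipschitz)
g = np.tanh

# Delay functions taken from the example in the text.
tau1 = lambda t: 1.2 + 0.5 * np.cos(t)   # discrete delay on the y-state
tau2 = lambda t: 0.6 + 0.6 * np.sin(t)   # discrete delay on the x-state
d1 = lambda t: 0.8 * np.cos(t) ** 2      # distributed delays
d2 = lambda t: 0.8 * np.cos(t) ** 2

dt, T = 0.01, 20.0
n_hist = int(2.0 / dt)          # history buffer covers the max delay (< 2)
N = n_hist + int(T / dt)
X = np.zeros((N, 2)); X[:n_hist] = [-0.7, 1.5]   # constant initial history
Y = np.zeros((N, 2)); Y[:n_hist] = [-0.4, 1.2]

def delayed(arr, k, delay):
    """State value `delay` seconds before grid index k."""
    return arr[max(0, k - int(round(delay / dt)))]

for k in range(n_hist - 1, N - 1):
    t = (k - (n_hist - 1)) * dt
    # Riemann-sum approximation of the distributed-delay integrals.
    m1, m2 = int(round(d1(t) / dt)), int(round(d2(t) / dt))
    int_f = f(Y[k - m1:k]).sum(axis=0) * dt
    int_g = g(X[k - m2:k]).sum(axis=0) * dt
    dx = -C @ X[k] + A1 @ f(Y[k]) + A2 @ f(delayed(Y, k, tau1(t))) + A3 @ int_f
    dy = -D @ Y[k] + B1 @ g(X[k]) + B2 @ g(delayed(X, k, tau2(t))) + B3 @ int_g
    X[k + 1] = X[k] + dt * dx
    Y[k + 1] = Y[k] + dt * dy

final = float(np.linalg.norm(np.concatenate([X[-1], Y[-1]])))
print(final)  # small: the trajectory has decayed toward the origin
```

With these (assumed) strongly diagonally dominant parameters the state decays to the origin, mirroring the asymptotic stability the example is meant to demonstrate.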
The transition probability matrices of the Markov chain \(r_{t}\) are
and the transition probability matrix of the Markov chain \(\delta_{t}\) is
Figure 1 shows the state response of the system in mode (\(r(t)=1\), \(\delta(t)=1\)) with the initial condition \([x_{1}(t),x_{2}(t),y_{1}(t),y_{2}(t)]^{\intercal }=[-0.7,1.5,-0.4,1.2]^{\intercal}\), and Figure 2 shows the state response in mode (\(r(t)=2\), \(\delta(t)=2\)) with the initial condition \([x_{1}(t),x_{2}(t),y_{1}(t),y_{2}(t)]^{\intercal }=[0.8,0.1,-0.3,-0.5]^{\intercal}\). This example demonstrates the effectiveness of the proposed results.
5 Conclusions
In this paper, based on Lyapunov-Krasovskii functionals and some inequality techniques, we have investigated the problem of global asymptotic stability for piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time-varying delays. A sufficient condition has been established in terms of linear matrix inequalities (LMIs), and a numerical example has been given to demonstrate the usefulness of the derived LMI-based stability conditions.
References
Kosko, B: Neural Networks and Fuzzy Systems: A Dynamical System Approach to Machine Intelligence. Prentice-Hall, Englewood Cliffs (1992)
Kosko, B: Adaptive bidirectional associative memories. Appl. Opt. 26, 4947-4960 (1987)
Kosko, B: Bidirectional associative memories. IEEE Trans. Syst. Man Cybern. 18, 49-60 (1988)
Lakshmanan, S, Park, JH, Lee, TH, Jung, HY, Rakkiyappan, R: Stability criteria for BAM neural networks with leakage delays and probabilistic time-varying delays. Appl. Math. Comput. 219, 9408-9423 (2013)
Zhu, QX, Li, XD, Yang, XS: Exponential stability for stochastic reaction-diffusion BAM neural networks with time-varying and distributed delays. Appl. Math. Comput. 217, 6078-6091 (2011)
Park, JH, Park, CH, Kwon, OM, Lee, SM: A new stability criterion for bidirectional associative memory neural networks of neutral-type. Appl. Math. Comput. 199, 716-722 (2008)
Park, JH, Kwon, OM: On improved delay-dependent criterion for global stability of bidirectional associative memory neural networks with time-varying delays. Appl. Math. Comput. 199, 435-446 (2008)
Yang, WG: Existence of an exponential periodic attractor of periodic solutions for general BAM neural networks with time-varying delays and impulses. Appl. Math. Comput. 219, 569-582 (2012)
Bao, HB, Cao, JD: Exponential stability for stochastic BAM networks with discrete and distributed delays. Appl. Math. Comput. 218, 6188-6199 (2012)
Senan, S, Arik, S, Liu, D: New robust stability results for bidirectional associative memory neural networks with multiple time delays. Appl. Math. Comput. 218, 11472-11482 (2012)
Ali, M, Balasubramaniam, P: Robust stability of uncertain fuzzy Cohen-Grossberg BAM neural networks with time-varying delays. Expert Syst. Appl. 36, 10583-10588 (2009)
Zhu, Q, Rakkiyappan, R, Chandrasekar, A: Stochastic stability of Markovian jump BAM neural networks with leakage delays and impulse control. Neurocomputing 136, 136-151 (2014)
Wang, G, Cao, J, Xu, M: Stability analysis for stochastic BAM neural networks with Markovian jumping parameters. Neurocomputing 72(16-18), 3901-3906 (2009)
Zhang, BY, Li, YM: Exponential \(L_{2}-L_{\infty}\) filtering for distributed delay systems with Markovian jumping parameters. Signal Process. 93, 206-216 (2013)
Syed Ali, M, Arik, S, Saravankumar, R: Delay-dependent stability criteria of uncertain Markovian jump neural networks with discrete interval and distributed time-varying delays. Neurocomputing 158, 167-173 (2015)
Wu, ZG, Park, JH, Su, HY, Chu, J: Delay-dependent passivity for singular Markov jump systems with time-delays. Commun. Nonlinear Sci. Numer. Simul. 18, 669-681 (2013)
Liu, Y, Wang, Z, Liu, X: An LMI approach to stability analysis of stochastic high-order Markovian jumping neural networks with mixed time delays. Nonlinear Anal. Hybrid Syst. 2, 110-120 (2008)
Zhang, LX: \(H_{\infty}\) Estimation for discrete-time piecewise homogeneous Markov jump linear systems. Automatica 45, 2570-2576 (2009)
Ding, YC, Zhu, H, Zhong, SM, Zhang, YP, Zeng, Y: \(H_{\infty}\) Filtering for a class of piecewise homogeneous Markovian jump nonlinear systems. Math. Probl. Eng. (2012). doi:10.1155/2012/716474
Wu, Z, Park, JH, Su, HY, Chu, J: Stochastic stability analysis of piecewise homogeneous Markovian jump neural networks with mixed time-delays. J. Franklin Inst. 349, 2136-2150 (2012)
Park, PG, Ko, JW, Jeong, C: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47, 235-238 (2011)
Cheng, J, Wang, H, Chen, S, Liu, Z, Yang, J: Robust delay-derivative-dependent state-feedback control for a class of continuous-time system with time-varying delays. Neurocomputing 173(3), 827-834 (2016)
Cheng, J, Zhong, S, Zhong, Q, Zhu, H, Du, Y: Finite-time boundedness of state estimation for neural networks with time-varying delays. Neurocomputing 129, 257-264 (2014)
Lu, R, Wu, H, Bai, J: New delay-dependent robust stability criteria for uncertain neutral systems with mixed delays. J. Franklin Inst. 351(3), 1386-1399 (2014)
Zhang, D, Cai, WJ, Xie, LH, Wang, QG: Non-fragile distributed filtering for T-S fuzzy systems in sensor networks. IEEE Trans. Fuzzy Syst. 23, 1883-1890 (2015). http://dx.doi.org/10.1109/TFUZZ.2014.2367101
Sun, J, Liu, GP, Chen, J: Delay-dependent stability and stabilization of neutral time-delay systems. Int. J. Robust Nonlinear Control 19, 1364-1375 (2009)
Boyd, S, El Ghaoui, L, Feron, E, Balakrishnan, V: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia (1994)
Acknowledgements
The authors would like to thank the associate editor and the anonymous reviewers for their detailed comments and suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. 61533006), the National Natural Science Foundation of China (Grant No. 61273015), and the Project of the key research base of Humanities and Social Sciences in Universities of Sichuan Province (Grant No. NYJ20150604).
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
WW drafted the manuscript. YD helped to draft manuscript. SZ, JX, and NZ checked the manuscript. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Wen, W., Du, Y., Zhong, S. et al. Global asymptotic stability of piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time-varying delays. Adv Differ Equ 2016, 60 (2016). https://doi.org/10.1186/s13662-016-0758-x