Dissipativity analysis of neutral-type memristive neural network with two additive time-varying and leakage delays
Advances in Difference Equations volume 2019, Article number: 6 (2019)
Abstract
In this paper, we study the dissipativity of neutral-type memristive neural networks (MNNs) with leakage, two additive time-varying, and distributed delays. By constructing a suitable Lyapunov–Krasovskii functional (LKF) and applying integral inequality techniques, linear matrix inequalities (LMIs), and the free-weighting matrix method, we derive new sufficient conditions ensuring the dissipativity of the aforementioned MNNs. Furthermore, globally exponentially attractive and positively invariant sets are also presented. Finally, a numerical simulation is given to illustrate the effectiveness of our results.
1 Introduction
In recent decades, neural networks have been widely applied in many areas, such as automatic control engineering, image processing, associative memory, pattern recognition, and parallel computing [1, 2], so it is highly worthwhile to study them. Motivated by completeness arguments in circuit theory, Chua first proposed the memristor as the fourth fundamental circuit element, alongside the known capacitor, inductor, and resistor [3]. Subsequently, HP researchers realized memristors in nanoscale systems [4]. A memristor is a circuit element with a memory function: its resistance changes slowly with the quantity of electric charge that has passed through it under an applied voltage or current. This working mechanism resembles that of the human brain, which makes the study of MNNs more valuable than has been realized [5, 6].
In the real world, time delays are ubiquitous. They may cause complex dynamical behaviors such as periodic oscillations, dissipation, divergence, and chaos [7, 8]. Hence, the dynamic behaviors of neural networks with time delays have received much attention [9,10,11]. Existing studies on delayed neural networks can be divided into four categories, dealing with constant, time-varying, distributed, and mixed delays. While the majority of the literature concentrates on the first three, simpler cases, mixed delays describe MNNs more faithfully than simple delays [12,13,14,15,16], so MNNs with mixed delays deserve further study.
Dissipativity, a generalization of Lyapunov stability, is a common concept in dynamical systems. It concerns the diverse dynamics of a system, not only its equilibrium behavior: many systems are stable at their equilibrium points, but in some cases the orbits do not converge to an equilibrium point, or the system has no equilibrium point at all. Consequently, dissipative systems play an important role in control, and dissipative system theory provides a framework for the design and analysis of control systems based on energy-related considerations [17]. At present, although there are some studies on the dissipativity of neural networks [18,19,20], most of the literature focuses on their synchronization [21,22,23,24]. For the dissipativity analysis of neural networks, it is essential to find globally exponentially attractive sets. Some researchers have investigated the global dissipativity of neural networks with mixed delays and have given sufficient conditions for obtaining such sets [25, 26]. To the best of our knowledge, however, few studies have considered the dissipativity of neutral-type memristive neural networks with mixed delays.
In this paper, we investigate the dissipativity of neutral-type memristive neural networks with mixed delays. The highlights of our work include:
1. We consider not only two additive time-varying and distributed time delays, but also time-varying leakage delays.
2. We obtain the dissipativity of the system by combining a suitable LKF with the reciprocally convex combination method, integral inequality techniques, and LMIs, which yields delay-dependent dissipativity criteria.
3. Our results are more general than those for ordinary neural networks.
The paper is organized as follows: in Sect. 2, the preliminaries are presented; in Sect. 3, the dissipativity of the neural network model with mixed delays is analyzed; in Sect. 4, a numerical example is given to demonstrate the effectiveness of our analytical results; in Sect. 5, the work is summarized.
2 Neural network model and some preliminaries
Notations
\(R^{n}\) (resp., \(R^{n\times m}\)) is the n-dimensional Euclidean space (resp., the set of \(n\times m\) real matrices); \(X>0\) (resp., \(X\geq 0\)) means that the matrix X is real positive-definite (resp., positive semi-definite). For symmetric matrices A and B, \(A>B\) means that \(A-B\) is positive definite. The superscript T denotes matrix transpose; ∗ denotes the entries below the main diagonal of a symmetric matrix; I and O are the identity and zero matrices, respectively, with appropriate dimensions; \(\operatorname{diag}\{ \ldots \}\) denotes a diagonal matrix; \(\lambda _{\max }(C)\) (resp., \(\lambda _{\min }(C)\)) denotes the maximum (resp., minimum) eigenvalue of matrix C. For any interval \(V\subseteq R\) and set \(S\subseteq R^{k}\) (\(1 \leq k \leq n\)), \(C(V,S)=\{\varphi :V\rightarrow S\text{ is continuous}\}\) and \(C^{1}(V,S)=\{\varphi :V\rightarrow S \text{ is continuously differentiable}\}\); \(\operatorname{co}\{b_{1} , b_{2}\}\) denotes the closure of the convex hull generated by \(b_{1}\) and \(b_{2}\). For constants a, b, we set \(a\vee b = \max \{a, b\}\). \(L_{2}^{n}\) is the space of square-integrable functions on \(R^{+}\) with values in \(R^{n}\), and \(L_{2e}^{n}=\{f:f\text{ is measurable on }R^{+}\text{ and }P_{T}f\in L_{2}^{n}\ \forall T \in R^{+}\}\) is the extended \(L_{2}^{n}\) space, where \((P_{T}f)(t)=f(t)\) if \(t \leq T\), and 0 if \(t>T\). For any functions \(x=\{x(t)\}\), \(y=\{y(t)\}\in L_{2e}^{n}\) and matrix Q, we define \(\langle x,Qy\rangle =\int _{0}^{T} x^{T}(t)Qy(t)\,dt\).
In this paper, we consider the following neutral-type memristor neural network model with leakage, as well as two additive time-varying and distributed time-varying delays:
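The displayed system (2.1) is not reproduced here. Componentwise, and consistently with the vectorized form (2.2) below, it presumably reads (a reconstruction, not a verbatim copy of the original display):
\[
\dot{x}_{i}(t)=-c_{i}x_{i}(t-\eta (t))+\sum_{j=1}^{n}a_{ij}(t)f_{j}(x_{j}(t)) +\sum_{j=1}^{n}b_{ij}(t)f_{j}\bigl(x_{j}(t-\tau _{j1}(t)-\tau _{j2}(t))\bigr) +\sum_{j=1}^{n}d_{ij}(t)\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f_{j}(x_{j}(s))\,ds +e_{i}\dot{x}_{i}(t-h(t))+u_{i}(t),\qquad y_{i}(t)=f_{i}(x_{i}(t)),\quad i=1,2,\ldots ,n.
\]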
where n is the number of cells in the neural network; \(x_{i}(t)\) is the voltage of the capacitor; \(f_{i}(\cdot )\) denotes the neuron activation function of the ith neuron at time t; \(y_{i}\) is the output of the ith neural cell; \(u_{i}(t)\in L_{\infty }\) is the external input of the ith neuron at time t; \(\eta (t)\) denotes the leakage delay, satisfying \(0\leq \eta (t)\leq \eta \); \(\tau _{j1}(t)\) and \(\tau _{j2}(t)\) are two additive time-varying delays, assumed to satisfy \(0\leq \tau _{j1}(t)\leq \tau _{1}<\infty \), \(0\leq \tau _{j2}(t)\leq \tau _{2}<\infty \); \(\delta _{1}(t)\), \(\delta _{2}(t)\) and \(h(t)\) are time-varying delays with \(0\leq \delta _{1}\leq \delta _{1}(t)\leq \delta _{2}(t)\leq \delta _{2}\), \(0 \leq h(t)\leq h\); η, \(\tau _{1}\), \(\tau _{2}\), \(\delta _{1}\), \(\delta _{2}\) and h are nonnegative constants; \(\tau ^{*}=\eta \vee (\delta _{2} \vee (\tau \vee h))\); \(C=\operatorname{diag}(c_{1},c_{2},\ldots,c_{n})\) is the self-feedback connection matrix; \(E=\operatorname{diag}(e_{1},e_{2},\ldots,e_{n})\) is the neutral-type parameter; \(a_{ij}(t)\), \(b_{ij}(t)\), and \(d_{ij}(t)\) represent the memristive-based weights, which are defined as follows:
Here \(\mathbf{W}_{(k)ij}\) denote the memductances of the memristors \(\mathbf{R}_{(k)ij}\), \(k=1,2,3\). In view of the memristor property, we set
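The omitted display presumably takes the usual memristive switching form (a reconstruction):
\[
a_{ij}(t)=\begin{cases} \hat{a}_{ij}, & \vert x_{i}(t)\vert \leq \gamma _{i}, \\ \check{a}_{ij}, & \vert x_{i}(t)\vert > \gamma _{i}, \end{cases}\qquad b_{ij}(t)=\begin{cases} \hat{b}_{ij}, & \vert x_{i}(t)\vert \leq \gamma _{i}, \\ \check{b}_{ij}, & \vert x_{i}(t)\vert > \gamma _{i}, \end{cases}\qquad d_{ij}(t)=\begin{cases} \hat{d}_{ij}, & \vert x_{i}(t)\vert \leq \gamma _{i}, \\ \check{d}_{ij}, & \vert x_{i}(t)\vert > \gamma _{i}. \end{cases}
\]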
where the switching jumps \(\gamma _{i}>0\), and \(\hat{a}_{ij}\), \(\check{a} _{ij}\), \(\hat{b}_{ij}\), \(\check{b}_{ij}\), \(\hat{d}_{ij}\) and \(\check{d} _{ij}\) are known constants determined by the memristances.
Remark 1
In recent years, the dissipativity problem for MNNs has received much attention, and substantial important results on dissipativity have been obtained. However, the works [27, 28] considered only the leakage delay, while [29, 30] considered additive time-varying delays but not distributed delays. In fact, leakage delays and multiple signal transmission delays coexist in MNNs. Because few results on the dissipativity analysis of neutral-type MNNs with multiple time delays can be found in the existing literature, this paper attempts to extend our knowledge in this field by studying the dissipativity of such systems; an example is given to illustrate the effectiveness of our results. The obtained results thus extend the study of the dynamic characteristics of MNNs.
Remark 2
In many real applications, signals transmitted from one point to another may pass through several network segments, which can induce successive delays with different properties owing to variable network transmission conditions. Moreover, when \(\tau _{1}(t)+\tau _{2}(t)\) reaches its maximum, \(\tau _{1}(t)\) and \(\tau _{2}(t)\) do not necessarily reach their maxima at the same time. Therefore, in this paper we treat the two additive delay components in (2.1) separately.
Remark 3
Furthermore, the above system is a switching system whose connection weights vary with its state. While smooth analysis is suitable for studying continuous nonlinear systems, switching nonlinear systems require nonsmooth analysis. It is therefore necessary to introduce some notions from nonsmooth theory, such as differential inclusions and set-valued maps.
Let \(\underline{a}_{ij}=\min \{\hat{a}_{ij}, \check{a}_{ij}\}\), \(\overline{a}_{ij}=\max \{\hat{a}_{ij}, \check{a}_{ij}\}\), \(\underline{b}_{ij}=\min \{\hat{b}_{ij}, \check{b}_{ij}\}\), \(\overline{b}_{ij}=\max \{\hat{b}_{ij}, \check{b}_{ij}\}\), \(\underline{d}_{ij}=\min \{\hat{d}_{ij}, \check{d}_{ij}\}\), \(\overline{d}_{ij}=\max \{\hat{d}_{ij}, \check{d}_{ij}\}\), for \(i,j =1,2,\ldots,n\). By applying the theory of differential inclusions and set-valued maps to system (2.1) [31, 32], it follows that
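The omitted differential inclusion presumably reads, componentwise (a reconstruction consistent with (2.1)):
\[
\dot{x}_{i}(t)\in -c_{i}x_{i}(t-\eta (t)) +\sum_{j=1}^{n}\operatorname{co}[\underline{a}_{ij},\overline{a}_{ij}]f_{j}(x_{j}(t)) +\sum_{j=1}^{n}\operatorname{co}[\underline{b}_{ij},\overline{b}_{ij}]f_{j}\bigl(x_{j}(t-\tau _{j1}(t)-\tau _{j2}(t))\bigr) +\sum_{j=1}^{n}\operatorname{co}[\underline{d}_{ij},\overline{d}_{ij}]\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f_{j}(x_{j}(s))\,ds +e_{i}\dot{x}_{i}(t-h(t))+u_{i}(t).
\]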
Using Filippov’s theorem in [33], there exist \(a_{ij}^{\prime }(t)\in \operatorname{co}[\underline{a}_{ij}, \overline{a}_{ij}]\), \(b_{ij}^{\prime }(t)\in \operatorname{co}[\underline{b}_{ij},\overline{b}_{ij}]\), \(d_{ij}^{\prime }(t)\in \operatorname{co}[\underline{d}_{ij}, \overline{d}_{ij}]\), and \(A=(a_{ij}^{\prime }(t))_{n\times n}\), \(B=(b_{ij}^{\prime }(t))_{n\times n}\), \(D=(d_{ij}^{\prime }(t))_{n\times n} \), such that
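The omitted compact form (2.2) is then, consistently with the vector definitions below (a reconstruction):
\[
\dot{x}(t)=-Cx(t-\eta (t))+Af(x(t))+Bf\bigl(x(t-\tau _{1}(t)-\tau _{2}(t))\bigr) +D\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f(x(s))\,ds+E\dot{x}(t-h(t))+u(t).
\]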
where \(x(t)=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}\), \(x(t-\eta (t))=(x _{1}(t-\eta (t)), x_{2}(t-\eta (t)),\ldots, x_{n}(t-\eta (t)))^{T}\), \(f(x(t))=(f_{1}(x_{1}(t)), f_{2}(x_{2}(t)),\ldots, f_{n}(x_{n}(t)))^{T}\), \(f(x(t-\tau _{1}(t)-\tau _{2}(t)))=(f_{1}(x_{1}(t- \tau _{11}(t)-\tau _{12}(t))), f_{2}(x_{2}(t-\tau _{21}(t)-\tau _{22}(t))),\ldots,f _{n}(x_{n}(t-\tau _{n1}(t)-\tau _{n2}(t))))^{T}\), \(\dot{x}(t-h(t))=(\dot{x} _{1}(t-h(t)), \dot{x}_{2}(t-h(t)),\ldots, \dot{x}_{n}(t-h(t)))^{T}\), \(u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T} \).
To prove our main results, the following assumptions, definitions and lemmas are needed.
Assumption 1
The time-varying delays \(\tau _{1}(t)\), \(\tau _{2}(t)\) and \(\eta (t) \) satisfy \(\vert \dot{\tau }_{1}(t)\vert \leq \mu _{1}\), \(\vert \dot{\tau }_{2}(t)\vert \leq \mu _{2}\), \(\vert \dot{\eta }(t)\vert \leq \mu _{3}\), where \(\mu _{1}\), \(\mu _{2}\) and \(\mu _{3}\) are nonnegative constants, and we denote \(\tau (t)=\tau _{1}(t)+\tau _{2}(t)\), \(\mu =\mu _{1}+\mu _{2} \) and \(\tau =\tau _{1}+\tau _{2}\).
Assumption 2
For all \(\alpha ,\beta \in R\) with \(\alpha \neq \beta \), and \(i=1,2,\ldots,n\), the activation function \(f_{i}\) is bounded and there exist constants \(k_{i}^{-}\) and \(k_{i}^{+} \) such that
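The omitted sector condition is presumably the standard one:
\[
k_{i}^{-}\leq \frac{f_{i}(\alpha )-f_{i}(\beta )}{\alpha -\beta }\leq k_{i}^{+}.
\]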
where we let \(F_{i}=\vert k_{i}^{-}\vert \vee \vert k_{i}^{+}\vert \), \(f=(f_{1},f_{2},\ldots,f_{n})^{T}\), and \(f_{i}(0)=0\) for every \(i\in \{1,2,\ldots,n\}\). For presentation convenience, in the following we denote
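Judging from the way \(K_{1}\) and \(K_{2}\) enter Theorem 3.1, the omitted notation is presumably the standard
\[
K_{1}=\operatorname{diag}\bigl(k_{1}^{-}k_{1}^{+},\ldots ,k_{n}^{-}k_{n}^{+}\bigr),\qquad K_{2}=\operatorname{diag}\biggl(\frac{k_{1}^{-}+k_{1}^{+}}{2},\ldots ,\frac{k_{n}^{-}+k_{n}^{+}}{2}\biggr).
\]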
Assumption 3
\(\phi (t)\in C^{1}([-\tau ^{*},0],R^{n}) \) is the initial function, with the norm
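A natural choice of norm, consistent with its use in the proof of Theorem 3.1, is
\[
\Vert \phi \Vert _{\tau ^{*}}=\sup_{-\tau ^{*}\leq s\leq 0}\max \bigl\{ \vert \phi (s)\vert ,\vert \dot{\phi }(s)\vert \bigr\} .
\]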
Definition 1
Let \(x(t,0,\phi )\) be the solution of neural network (2.2) through \((0,\phi )\), \(\phi \in C^{1} \). Suppose there exists a compact set \(S\subseteq R^{n}\) such that for every \(\phi \in C^{1}\), there exists \(T(\phi )>0\) such that \(x(t,0,\phi )\in S\) whenever \(t\geq T(\phi )\). Then the neural network (2.2) is said to be a globally dissipative system, and S is called a globally attractive set. The set S is called positively invariant if for every \(\phi \in S\), it holds that \(x(t,0,\phi )\in S\) for all \(t\in R_{+}\).
Definition 2
Let S be a globally attractive set of neural network (2.2). The neural network (2.2) is said to be globally exponentially dissipative if there exist a constant \(a>0 \) and a compact set \(S^{*} \supset S\) in \(R^{n}\) such that for every \(\phi \in R^{n} \backslash S^{*} \), there exists a constant \(M(\phi )>0\) such that
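The omitted display is, in the standard form of global exponential dissipativity (cf. [34, 35]):
\[
\inf_{\tilde{x}\in S^{*}}\bigl\vert x(t,0,\phi )-\tilde{x}\bigr\vert \leq M(\phi )e^{-at},\quad t\geq 0.
\]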
Here \(x(t,0,\phi )\in R^{n}\) lies outside \(S^{*}\). The set \(S^{*}\) is called a globally exponentially attractive set.
Lemma 1
([36])
Consider a given matrix \(R>0\). Then, for all continuously differentiable functions \(\omega (\cdot ):[a,b]\rightarrow R^{n}\) such that the considered integrals are well defined, one has
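In the Wirtinger-based form of [36], the omitted inequality reads
\[
\int _{a}^{b}\dot{\omega }^{T}(s)R\dot{\omega }(s)\,ds\geq \frac{1}{b-a}\bigl(\omega (b)-\omega (a)\bigr)^{T}R\bigl(\omega (b)-\omega (a)\bigr) +\frac{3}{b-a}\varOmega ^{T}R\varOmega ,
\]
where \(\varOmega =\omega (b)+\omega (a)-\frac{2}{b-a}\int _{a}^{b}\omega (s)\,ds\).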
Lemma 2
([37])
For any given matrices H, E, a scalar \(\varepsilon >0\) and F with \(F^{T} F\leq I\), the following inequality holds:
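The omitted display is the standard bound
\[
HFE+E^{T}F^{T}H^{T}\leq \varepsilon HH^{T}+\varepsilon ^{-1}E^{T}E.
\]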
Lemma 3
([38])
For any constant matrix \(H\in {R}^{n\times n}\) and two scalars \(b\geq a\geq 0\), the following inequality holds:
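The omitted display is presumably the Jensen-type double-integral inequality of [38]: for \(H>0\) and any vector function ω for which the integrals are well defined,
\[
\biggl(\int _{-b}^{-a}\int _{t+\theta }^{t}\omega (s)\,ds\,d\theta \biggr)^{T}H \biggl(\int _{-b}^{-a}\int _{t+\theta }^{t}\omega (s)\,ds\,d\theta \biggr) \leq \frac{b^{2}-a^{2}}{2}\int _{-b}^{-a}\int _{t+\theta }^{t}\omega ^{T}(s)H\omega (s)\,ds\,d\theta .
\]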
Lemma 4
([39])
Let the functions \(f_{1}(t), f_{2}(t),\ldots, f_{N}(t):R^{m} \rightarrow R\) have positive values in an open subset D of \(R^{m}\). Then the reciprocally convex combination of the \(f_{i}(t)\) over D, taken with weights \(\alpha _{i}>0\), \(\sum_{i}\alpha _{i}=1\), satisfies
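In the standard form of [39], the omitted display reads
\[
\min_{\{\alpha _{i}\,\vert \,\alpha _{i}>0,\ \sum _{i}\alpha _{i}=1\}}\sum_{i}\frac{1}{\alpha _{i}}f_{i}(t) =\sum_{i}f_{i}(t)+\max_{g_{ij}(t)}\sum_{i\neq j}g_{ij}(t),
\]
subject to \(g_{ij}:R^{m}\rightarrow R\), \(g_{ji}(t)=g_{ij}(t)\), and \(\bigl[{\scriptsize\begin{matrix} f_{i}(t) & g_{ij}(t)\cr g_{ij}(t) & f_{j}(t)\end{matrix}}\bigr]\geq 0\).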
3 Main results
In this section, under Assumptions 1–3 and by using the Lyapunov–Krasovskii functional method and the LMI technique, a delay-dependent dissipativity criterion for system (2.2) is derived in the following theorem.
Theorem 3.1
Under Assumptions 1–3, if there exist symmetric positive definite matrices \(P>0\), \({Q_{i}>0}\), \(V_{i} >0 \), \(U_{i}>0\) (\(i=1,2,3\)), \(R_{j}>0\), \(T_{j}>0\) (\(j=1,2,3,4,5\)), \(G_{k}>0\) (\(k=1,2,3,4\)), \(L_{1}>0\), \(L_{2}>0\), \(S_{2}>0\), \(S_{3}>0\), three \(n\times n \) diagonal matrices \(M>0\), \(\beta _{1}>0\), \(\beta _{2}>0 \), and an \(n\times n\) real matrix \(S_{1} \) such that the following LMIs hold:
where \(\varPsi =[\psi ]_{l\times n}\) (\(l,n=1,2,\ldots,25\)); \(\psi _{1,1}=-PM-M ^{T}P+2\alpha P+2Q_{1}+Q_{2}+Q_{3}+R_{1}+R_{2}+R_{3}+R_{4}+R_{5} -4e ^{-2\alpha \tau _{1}}T_{1}-4e^{-2\alpha \tau _{2}}T_{2}-4e^{-2\alpha \tau }T_{3} -4e^{-2\alpha \eta }T_{4}-4e^{-2\alpha h}T_{5}+\eta ^{2}L _{2}-K_{1}\beta _{1}\), \(\psi _{1,2}=-2e^{-\alpha \tau }G_{3}\), \(\psi _{1,3}=-2e ^{-\alpha \tau _{1}}G_{1}\), \(\psi _{1,4}=-2e^{-\alpha \tau _{2}}G_{2}\), \(\psi _{1,5}=PM-2e^{-2\alpha \eta }G_{4}\), \(\psi _{1,6}=e^{-2\alpha h}T _{5}\), \(\psi _{1,7}=-2e^{-2\alpha \tau }(T_{3}+2G_{3})\), \(\psi _{1,8}=-2e ^{-2\alpha \tau _{1}}(T_{1}+2G_{1})\), \(\psi _{1,9}=-2e^{-2\alpha \tau _{2}}(T_{2}+2G_{2})\), \(\psi _{1,10}=-PC+S_{1}C-2e^{-2\alpha \eta }(T _{4}+2G_{4})\), \(\psi _{1,11}=PA-S_{1}A+K_{2}\beta _{1}\), \(\psi _{1,12}=PB-S _{1}B\), \(\psi _{1,13}=M^{T}PM-\alpha PM -\alpha M^{T}P\), \(\psi _{1,14}=-6e ^{-2\alpha \eta }G_{4}\), \(\psi _{1,15}=-6e^{-2\alpha \eta }T_{4}\), \(\psi _{1,16}=-6e^{-2\alpha \tau } T_{3}\), \(\psi _{1,17}= -6e^{-2 \alpha \tau _{1}}T_{1}\), \(\psi _{1,18}=-6e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{1,19}=6e^{-2\alpha \tau }G_{3}\), \(\psi _{1,20}=6e^{-2\alpha \tau _{1}}G_{1}\), \(\psi _{1,21}=6e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{1,22}=PD-S _{1}D\), \(\psi _{1,23}=S_{1}\), \(\psi _{1,24}=PE-S_{1}E\), \(\psi _{2,2}=-e ^{-2\alpha \tau }Q_{1}-4e^{-2\alpha \tau }T_{3}\), \(\psi _{2,7}=-2e^{-2 \alpha \tau }(T_{3}+2G_{3})\), \(\psi _{2,16}=6e^{-2\alpha \tau }G_{3}\), \(\psi _{2,19}=6e^{-2\alpha \tau }T_{3}\), \(\psi _{3,3}=-e^{-2\alpha \tau _{1}}Q_{2}-4e^{-2\alpha \tau _{1}}T_{1}\), \(\psi _{3,8}=-2e^{-2 \alpha \tau _{1}}(T_{1}+2G_{1})\), \(\psi _{3,17}=6e^{-2\alpha \tau _{1}}G _{1}\), \(\psi _{3,20}=6e^{-2\alpha \tau _{1}}T_{1}\), \(\psi _{4,4}=-e^{-2 \alpha \tau _{2}}Q_{3}-4e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{4,9}=-2e ^{-2\alpha \tau _{2}}(T_{2}+2G_{2})\), \(\psi _{4,18}=6e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{4,21}=6e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{5,5}=-e ^{-2\alpha \eta }R_{2}-4e^{-2\alpha \eta }T_{4}\), \(\psi _{5,10}=-2e ^{-2\alpha \eta }(T_{4}+2G_{4})\), \(\psi _{5,13}=-M^{T}PM\), \(\psi _{5,14}=6e ^{-2\alpha \eta }T_{4}\), \(\psi _{5,15}=6e^{-2\alpha \eta }G_{4}\), \(\psi _{6,6}=-e^{-2\alpha h}T_{5}\), \(\psi _{7,7}=-(1-\mu )e^{-2\alpha \tau }R_{3}-4e^{-2\alpha \tau }(2T_{3}+G_{3})-K_{1}\beta _{2}\), \(\psi _{7,12}=K_{2}\beta _{2}\), \(\psi _{7,16}=6e^{-2\alpha \tau }(T_{3}+G _{3})\), \(\psi _{7,19}=6e^{-2\alpha \tau }(T_{3}+G_{3})\), \(\psi _{8,8}=-(1- \mu _{1})e^{-2\alpha \tau _{1}}R_{4}-4e^{-2\alpha \tau _{1}}(2T_{1}+G _{1})\), \(\psi _{8,17}=6e^{-2\alpha \tau _{1}}(T_{1}+G_{1})\), \(\psi _{8,20}=6e ^{-2\alpha \tau _{1}}(T_{1}+G_{1})\), \(\psi _{9,9}=-(1-\mu _{2})e^{-2 \alpha \tau _{2}}R_{5}-4e^{-2\alpha \tau _{2}}(2T_{2}+G_{2})\), \(\psi _{9,18}=6e^{-2\alpha \tau _{2}}(T_{2}+G_{2})\), \(\psi _{9,21}=6e^{-2 \alpha \tau _{2}}(T_{2}+G_{2})\), \(\psi _{10,13}=M^{T}P{C}\), \(\psi _{10,10}=-(1- \mu _{3})e^{-2\alpha \eta }R_{1}-4e^{-2\alpha \eta }(2T_{4}+G_{4})\), \(\psi _{10,14}=6e^{-2\alpha \eta }(T_{4}+G_{4})\), \(\psi _{10,15}=6e^{-2 \alpha \eta }(T_{4}+G_{4})\), \(\psi _{10,23}=-S_{2}C\), \(\psi _{10,24}=-S _{3}C\), \(\psi _{11,11}=(\delta _{2}-\delta _{1})^{2}L_{1}-\beta _{1}\), \(\psi _{11,13}=-M^{T}PA\), \(\psi _{11,23}=S_{2}A\), \(\psi _{11,24}=S_{3}A\), \(\psi _{12,12}=-\beta _{2}\), \(\psi _{12,13}=-M^{T}PB\), \(\psi _{12,23}=S _{2}B\), \(\psi _{12,24}=S_{3}B\), \(\psi _{13,13}=\alpha M^{T}PM-2e^{-2 \alpha \eta }L_{2}\), \(\psi _{13,22}=-M^{T}PD\), 
\(\psi _{13,24}=-M^{T}PE\), \(\psi _{13,25}=-MP\), \(\psi _{14,14}=-12e^{-2\alpha \eta }T_{4}\), \(\psi _{14,15}=-12e^{-2\alpha \eta }G_{4}\), \(\psi _{15,15}=-12e^{-2 \alpha \eta }T_{4}\), \(\psi _{16,16}=-12e^{-2\alpha \tau }T_{3}\), \(\psi _{16,19}=-12e^{-2\alpha \tau }G_{3}\), \(\psi _{17,17}=-12e^{-2 \alpha \tau _{1}}T_{1}\), \(\psi _{17,20}=-12e^{-2\alpha \tau _{1}}G_{1}\), \(\psi _{18,18}=-12e^{-2 \alpha \tau _{2}}T_{2}\), \(\psi _{18,21}=-12e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{19,19}=-12e^{-2\alpha \tau }T_{3}\), \(\psi _{20,20}=-12e^{-2 \alpha \tau _{1}} T_{1}\), \(\psi _{21,21}=-12e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{22,22}=-e^{-2\alpha \delta _{2}}L_{1}\), \(\psi _{22,23}=S_{2}D\), \(\psi _{22,24}=S_{3}D\), \(\psi _{23,23}=\frac{\tau _{1}^{4}}{4}U_{1}+\frac{\tau _{2}^{4}}{4}U_{2} +\frac{\tau ^{4}}{4}U _{3}-S_{2}+\tau _{1}^{2}T_{1}+\tau _{2}^{2}T_{2}+\tau ^{2}T_{3}+\eta ^{2}T _{4}+h^{2}T_{5}\), \(\psi _{23,24}=S_{2}E\), \(\psi _{24,24}=S_{3}E+E^{T}S _{3}+S_{3}\), \(\psi _{25,25}=S_{2}\), \(\varUpsilon _{k}^{T}=[\varGamma _{1k},\varGamma _{2k},\varGamma _{3k},\varGamma _{4k}, \varGamma _{5k},\varGamma _{6k}]^{T}\) (\(k=1,2,3,4\)), \(\varGamma _{11}^{T}=\varGamma _{12}^{T}=\tau _{1}(e_{1}-e_{20})\), \(\varGamma _{13}^{T}=\varGamma _{14}^{T}=\mathbf{0}\), \(\varGamma _{21}^{T}=\varGamma _{22}^{T}=\mathbf{0}\), \(\varGamma _{23}^{T}=\varGamma _{24}^{T}=\tau _{1}(e_{1}-e_{17})\), \(\varGamma _{31}^{T}=\varGamma _{33}^{T}=\tau _{2}(e_{1}-e_{21})\), \(\varGamma _{32} ^{T}=\varGamma _{34}^{T}=\mathbf{0}\), \(\varGamma _{41}^{T}=\varGamma _{43}^{T}=\mathbf{0}\), \(\varGamma _{42}^{T}= \varGamma _{44}^{T}=\tau _{2}(e_{1}-e_{18})\), \(\varGamma _{51}^{T}=\tau (e _{1}-e_{19})\), \(\varGamma _{52}^{T}=\tau _{1}(e_{1}-e_{19})\), \(\varGamma _{53}^{T}=\tau _{2}(e_{1}-e_{19})\), \(\varGamma _{54}^{T}=\varGamma _{61}^{T}=\mathbf{0}\), \(\varGamma _{62}^{T}=\tau _{2}(e_{1}-e_{16})\), \(\varGamma _{63}^{T}=\tau _{1}(e_{1}-e_{16})\), \(\varGamma _{64}^{T}=\tau (e_{1}-e_{19})\), \(e_{i}=[\mathbf{0}_{n\times (i-1)n},\mathbf{I}_{n\times n},\mathbf{0} _{n\times (25-i)n}]\) (\(i=1,2,\ldots,25\)), then the neural network (2.2) is exponentially dissipative, and
is a positively invariant and globally exponentially attractive set, where \(\varGamma _{u}>0\) is a bound of the external input on \(R^{+}\), \(\vert u(t)\vert \leq \varGamma _{u}\). In addition, the exponential dissipativity rate index α enters the expression for this set.
Proof
Consider the following Lyapunov–Krasovskii functional:
where
Calculating the derivative of \(V(t,x(t))\) along the trajectory of neural network (2.2), it can be deduced that
For any matrix \(G_{1}\) such that the block matrix \(\bigl[{\scriptsize\begin{matrix} T_{1} & G_{1}\cr \ast & T_{1}\end{matrix}}\bigr]\) required by Lemma 4 is positive semi-definite, by using Lemmas 1 and 4 we can obtain the following:
where
Similarly, it holds that
where
Applying Lemma 1 and Newton–Leibniz formula, we have
Similarly, it holds that
The second term of Eq. (3.8) can be written as
By Lemma 3, we obtain
Applying Lemma 4, for any matrix \(V_{1}\) such that the corresponding block matrix of Lemma 4 is positive semi-definite, the above inequality becomes:
where
Similarly, by Lemmas 3 and 4, we have
where
By using Assumption 2, we can obtain the following:
which can be compactly written as
Then for any positive diagonal matrices \(\beta _{1}=\operatorname{diag}(\beta _{1s}, \beta _{2s},\ldots,\beta _{ns})\) and \(\beta _{2}=\operatorname{diag}(\tilde{\beta }_{1s},\tilde{\beta }_{2s},\ldots,\tilde{\beta }_{ns})\), the following inequalities hold:
Note that
For any appropriately dimensioned matrix \(S_{1}\), the following is satisfied:
Similarly, we have
In addition, it follows from Lemma 2 that for every \(H\geq 0\), \(N\geq 0\),
From Eqs. (3.2)–(3.27), if we let \(H=S_{2}\), \(N=S _{3}\), we can derive that
where
Considering the vertex cases \(\tau _{1}(t)\in \{0,\tau _{1}\}\) and \(\tau _{2}(t)\in \{0,\tau _{2}\}\), we can get
From Eq. (3.2) it is easy to deduce that
where
and
Then according to the LMI (3.1) and Eq. (3.29), we have
where
Note that \(\phi _{2}\leq 0\) and \(\phi _{2}=0\) if and only if external input \(u=0\). Hence, one may deduce that when \(\vert x(t)\vert >\phi _{1}\), i.e., \(x\notin S\), it holds that
Hence when \(x\notin S \), we finally obtain that
Note that S is a ball; when \({x\notin S}\), taking \(M=\sqrt{\frac{ \lambda _{2}}{\lambda _{1}}}\Vert \phi \Vert _{\tau ^{*}}\),
According to Definition 2, we can get that system (2.2) is globally exponentially dissipative with positively invariant and globally exponentially attractive set S. This completes the proof. □
Remark 4
In the proof of Theorem 3.1, an LMI-based condition for the global exponential dissipativity of system (2.2) was given. It is worth mentioning that, in order to derive the globally exponentially attractive set S and to keep the dissipativity criteria practical, we chose the two special but suitable matrices \(H=S_{2}\) and \(N = S_{3}\) in (3.28). From Theorem 3.1, we see that the globally exponentially attractive set S can be obtained directly from the LMIs.
Remark 5
In Theorem 3.1, we first transform system (2.1) into system (2.2) by using a convex combination technique and Filippov’s theorem. In addition, by accounting for leakage, discrete, and two additive time-varying delays, we introduce double and triple integral terms in the LKF; this problem had not been addressed in [29, 30, 40]. Constructing such double and triple integral terms in the LKF is a recent tool for obtaining less conservative results.
If we take the exponential dissipativity rate index \(\alpha =0\) and replace the exponential-type Lyapunov–Krasovskii functional of Theorem 3.1 accordingly, then we can obtain the following theorem.
Theorem 3.2
Under the same conditions as in Theorem 3.1, system (2.2) is globally dissipative, and the set S given in Theorem 3.1 is positively invariant and globally attractive, if the following LMI holds:
where \(\varTheta =[\varTheta ]_{l\times n}\) (\(l,n=1,2,\ldots,25\)), \(\varTheta _{1,1}=-PM-M ^{T}P+2Q_{1}+Q_{2}+Q_{3}+R_{1}+R_{2}+R_{3}+R_{4}+R_{5} -4T_{1}-4T_{2}-4T _{3}-4T_{4}-4T_{5}+\eta ^{2}L_{2}-K_{1}\beta _{1}\), \(\varTheta _{1,2}=-2G _{3}\), \(\varTheta _{1,3}=-2G_{1}\), \(\varTheta _{1,4}=-2G_{2}\), \(\varTheta _{1,5}=PM-2G _{4}\), \(\varTheta _{1,6}=T_{5}\), \(\varTheta _{1,7}=-2(T_{3}+2G_{3})\), \(\varTheta _{1,8}=-2(T_{1}+2G_{1})\), \(\varTheta _{1,9}=-2(T_{2}+2G_{2})\), \(\varTheta _{1,10}=-PC+S_{1}C-2(T_{4}+2G_{4})\), \(\varTheta _{1,11}={PA-S_{1}A+K_{2} \beta _{1}}\), \(\varTheta _{1,12}=PB-S_{1}B\), \(\varTheta _{1,13}=M^{T}PM\), \(\varTheta _{1,14}=-6G_{4}\), \(\varTheta _{1,15}= -6T_{4}\), \(\varTheta _{1,16}=-6T _{3}\), \(\varTheta _{1,17}=-6T_{1}\), \(\varTheta _{1,18}=-6T_{2}\), \(\varTheta _{1,19}=6G_{3}\), \(\varTheta _{1,20}=6G_{1}\), \(\varTheta _{1,21}=6G_{2}\), \(\varTheta _{1,22}=PD-S_{1}D\), \(\varTheta _{1,23}=-S_{1}\), \(\varTheta _{1,24}=PE-S _{1}E\), \(\varTheta _{2,2}=-Q_{1}-4T_{3}\), \(\varTheta _{2,7}=-2(T_{3}+2G_{3})\), \(\varTheta _{2,18}=6G_{3}\), \(\varTheta _{2,21}=6T_{3}\), \(\varTheta _{3,3}=-Q _{2}-4T_{1}\), \(\varTheta _{3,8}=-2(T_{1}+2G_{1})\), \(\varTheta _{3,19}=6G_{1}\), \(\varTheta _{3,22}=6T_{1}\), \(\varTheta _{4,4}=-Q_{3}-4T_{2}\), \(\varTheta _{4,9}=-2(T _{2}+2G_{2})\), \(\varTheta _{4,20}=6G_{2}\), \(\varTheta _{4,23}=6T_{2}\), \(\varTheta _{5,5}=-R_{2}-4T_{4}\), \(\varTheta _{5,10}=-2(T_{4}+2G_{4})\), \(\varTheta _{5,13}=-M^{T}PM\), \(\varTheta _{5,14}=6T_{4}\), \(\varTheta _{5,15}=6G_{4}\), \(\varTheta _{6,6}=-T_{5}\), \(\varTheta _{7,7}=-(1-\mu )R_{3}-4(2T_{3}+G_{3})-K _{1}\beta _{2}\), \(\varTheta _{7,12}={-K_{2}\beta _{2}}\), \(\varTheta _{7,16}=6(T _{3}+G_{3})\), \(\varTheta _{7,19}=6(T_{3}+G_{3})\), \(\varTheta _{8,8}=-(1-\mu _{1})R_{4}-4(2T_{1}+G_{1})\), \(\varTheta _{8,17}=6(T_{1}+G_{1})\), \(\varTheta _{8,20}=6(T_{1}+G_{1})\), \(\varTheta _{9,9}=-(1-\mu _{2})R_{5}-4(2T _{2}+G_{2})\), \(\varTheta _{9,18}=6(T_{2}+G_{2})\), \(\varTheta _{9,21}=6(T_{2}+G _{2})\), \(\varTheta _{10,10}=-(1-\mu _{3})R_{1}-4(2T_{4}+G_{4})\), \(\varTheta _{10,13}=M ^{T}PC\), \(\varTheta _{10,14}=6(T_{4}+G_{4})\), \(\varTheta _{10,15}=6(T_{4}+G _{4})\), \(\varTheta _{10,23}=-S_{2}C\), \(\varTheta _{10,24}=-S_{3}C\), \(\varTheta _{11,11}=( \delta _{2}-\delta _{1})^{2}L_{1}-\beta _{1}\), \(\varTheta _{11,13}=-M^{T}PA\), \(\varTheta _{11,23}=S_{2}A\), \(\varTheta _{11,24}=-S_{3}A\), \(\varTheta _{12,12}=- \beta _{2}\), \(\varTheta _{12,13}=-M^{T}PB\), \(\varTheta _{12,23}=S_{2}B\), \(\varTheta _{12,24}=-S_{3}B\), \(\varTheta _{13,13}=-2L_{2}\), \(\varTheta _{13,21}=-M ^{T}PD\), \(\varTheta _{13,24}=-M^{T}PE\), \(\varTheta _{13,25}=-2MP\), \(\varTheta _{14,14}=-12T_{4}\), \(\varTheta _{14,15}=-12G_{4}\), \(\varTheta _{15,15}=-12T _{4}\), \(\varTheta _{16,16}=-12T_{3}\), \(\varTheta _{16,19}=-12G_{3}\), \(\varTheta _{17,17}=-12T_{1}\), \(\varTheta _{17,20}=-12G_{1}\), \(\varTheta _{18,18}=-12T _{2}\), \(\varTheta _{18,21}=-12G_{2}\), \(\varTheta _{19,19}=-12T_{3}\), \(\varTheta _{20,20}=-12T_{1}\), \(\varTheta _{21,21}=-12T_{2}\), \(\varTheta _{22,22}=-L _{1}\), \(\varTheta _{22,23}=S_{2}D\), \(\varTheta _{22,24}=-S_{3}D\), \(\varTheta _{23,23}=\frac{ \tau _{1}^{4}}{4}U_{1}+\frac{\tau _{2}^{4}}{4}U_{2} +\frac{\tau ^{4}}{4}U _{3}-S_{2}+\tau _{1}^{2}T_{1}+\tau _{2}^{2}T_{2}+\tau ^{2}T_{3} +\eta ^{2}T_{4}+h^{2}T_{5}\), \(\varTheta _{23,24}=S_{2}E\), \(\varTheta _{24,24}=S_{3}E+E ^{T}S_{3}+S_{3}\), \(\varTheta _{25,25}=S_{2}\), \(\varUpsilon _{k}^{T}=[\varGamma _{1k},\varGamma _{2k},\varGamma _{3k},\varGamma _{4k}, \varGamma _{5k},\varGamma _{6k}]^{T}\) (\(k=1,2,3,4\)), \(\varGamma _{11}^{T}=\varGamma _{12}^{T}=\tau _{1}(e_{1}-e_{20})\), \(\varGamma _{13}^{T}=\varGamma _{14}^{T}=\mathbf{0}\), \(\varGamma _{21}^{T}=\varGamma _{22}^{T}=\mathbf{0}\), \(\varGamma _{23}^{T}=\varGamma _{24}^{T}=\tau _{1}(e_{1}-e_{17})\), \(\varGamma _{31}^{T}=\varGamma _{33}^{T}=\tau _{2}(e_{1}-e_{21})\), \(\varGamma _{32} ^{T}=\varGamma _{34}^{T}=\mathbf{0}\), \(\varGamma _{41}^{T}=\varGamma _{43}^{T}=\mathbf{0}\), \(\varGamma _{42}^{T}= \varGamma _{44}^{T}=\tau _{2}(e_{1}-e_{18})\), \(\varGamma _{51}^{T}= \tau (e_{1}-e_{19})\), \(\varGamma _{52}^{T}=\tau _{1}(e_{1}-e_{19})\), \(\varGamma _{53}^{T}=\tau _{2}(e_{1}-e_{19})\), \(\varGamma _{54}^{T}=\varGamma _{61}^{T}=\mathbf{0}\), \(\varGamma _{62}^{T}=\tau _{2}(e_{1}-e_{16})\), \(\varGamma _{63}^{T}=\tau _{1}(e_{1}-e_{16})\), \(\varGamma _{64}^{T}=\tau (e_{1}-e_{19})\), \(e_{i}=[\mathbf{0}_{n\times (i-1)n},\mathbf{I}_{n\times n},\mathbf{0} _{n\times (25-i)n}]\) (\(i=1,2,\ldots,25\)).
Proof
Replace the exponential-type Lyapunov–Krasovskii functional in Theorem 3.1 by
where
The rest of the proof of Theorem 3.2 is similar to that of Theorem 3.1, so the details are omitted. □
Remark 6
In particular, when \(E=0\) and \(D=0\), system (2.2) reduces to system (4) in [19], which is known to be dissipative. Furthermore, we discuss the global exponential dissipativity of system (2.2), so our model can be regarded as an extension of system (4) in [19].
Remark 7
If \(\tau _{1}(t)+\tau _{2}(t)=\tau (t)\) with \(0\leq \tau (t)\leq \tau \) and \(\vert \dot{\tau }(t)\vert \leq \mu \), \(E=0\) and \(\eta (t)=0\), i.e., system (2.2) is without the two additive time-varying delays, the leakage delay, and the neutral term, then system (2.2) is reduced to the following neural network:
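Under these reductions, the omitted display is presumably (a reconstruction from (2.2)):
\[
\dot{x}(t)=-Cx(t)+Af(x(t))+Bf\bigl(x(t-\tau (t))\bigr)+D\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f(x(s))\,ds+u(t).
\]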
So the reduced system is no longer a neutral-type memristive neural network. The dissipativity of such other types of neural network models has been discussed in [30, 41, 42]. When the corresponding terms are removed, the dissipativity result of Theorem 3.1 can still be obtained via the LMI, so our system is more general.
4 Example and simulation
In this section, we give a numerical example to illustrate the effectiveness of our results.
Example 1
Consider the two-dimensional MNNs (2.1) with the following parameters:
The activation functions are \(f_{1}(s)=\tanh (0.3s) - 0.2\sin (s)\), \(f _{2}(s)=\tanh (0.2s) + 0.3\sin (s)\). Let \(\alpha =0.01\), \(c_{1}=c_{2}=2\), \(e_{1}=e_{2}=0.2\), \(m_{1}=2\), \(m_{2}=3.56\), \(h(t)=0.1\sin (2t) + 0.5\), \(\eta (t) =0.1\sin (2t) + 0.2\), \(\tau _{1}(t)=0.1\sin (t)+0.2\), \(\tau _{2}(t)= 0.1\cos (t) + 0.5\), \(\delta _{1}(t) = 0.4\sin (t) + 0.4\), \(\delta _{2}(t) = 0.4\sin (t) + 0.6\), \(u(t) = (0.5\sin (t), 0.25\cos (t))^{T}\). So \(\eta =0.4\), \(h=0.6\), \(\tau _{1} = 0.3\), \(\tau _{2} = 0.6\), \(\tau = 0.9\), \(\delta _{1} = 0\), \(\delta _{2} = 1\), \(\mu _{1} = 0.1\), \(\mu _{2} = 0.1\), \(\mu = 0.2\). Then \(k_{1}^{-}=-0.2\), \(k_{1}^{+}=0.5\), \(k_{2}^{-}=-0.3\) and \(k_{2}^{+}=0.5\), i.e.,
With the above parameters, using the LMI toolbox in MATLAB, we obtain the following feasible solution to the LMIs in Theorem 3.1:
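The numerical values of the feasible solution are omitted here. As an aside, feasibility of LMIs of this kind can also be checked outside MATLAB; the following minimal Python/CVXPY sketch illustrates the mechanics on a toy stand-in for (3.1) (the matrix A and the single Lyapunov constraint are illustrative assumptions, not the paper's Ψ):

```python
import numpy as np
import cvxpy as cp

# Toy stand-in: find P > 0 with A^T P + P A + 2*alpha*P < 0.
# A is a hypothetical stable matrix, not a parameter of Example 1.
A = np.array([[-2.0, 0.3], [0.1, -1.5]])
alpha = 0.01

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> 1e-6 * np.eye(2),                       # P positive definite
               A.T @ P + P @ A + 2 * alpha * P << -1e-6 * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)              # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status)
print(P.value)
```

The full LMI (3.1) would be assembled in the same way, with one semidefinite constraint per LMI block and all the matrices of Theorem 3.1 as variables.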
Then system (2.1) is a globally exponentially dissipative system, with the set \(S=\{x:\vert x\vert \leq 8.333\}\). Figure 1 shows the trajectories of the neuron states \(x_{1}(t)\) and \(x_{2}(t)\) of the neutral-type MNN (2.1), and Fig. 2 shows these trajectories in three-dimensional space. It can be seen that the neuron states \(x_{1}(t)\) and \(x_{2}(t)\) become periodic when the external inputs of the neutral-type MNN (2.1) are designed as periodic signals. According to Theorem 3.1 and Definition 2, system (2.1) is globally dissipative. Under the same conditions, if we take the external input \(u(t)=0\), then by Theorem 3.2 the invariant set is \(S=\{0\}\) and system (2.1) is globally stable, as shown in Fig. 3.
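For readers who wish to reproduce the simulation, the following Python sketch integrates system (2.1) by a fixed-step Euler scheme with history buffers for the delayed states and for the stored derivative required by the neutral term. The paper's weight matrices appear in an omitted display, so the hat/check values below are hypothetical placeholders; the activation functions, delays, switching jumps, and input follow Example 1.

```python
import numpy as np

# Placeholder memristive weight bounds (the paper's actual matrices are in
# an omitted display); hat-values apply while |x_i| <= gamma_i.
A_hat = np.array([[ 2.0, -0.1], [-5.0,  3.0]]); A_chk = np.array([[ 1.0, -0.1], [-0.2,  2.0]])
B_hat = np.array([[-1.5, -0.1], [-0.2, -2.5]]); B_chk = np.array([[-1.0, -0.1], [-0.2, -2.0]])
D_hat = np.array([[ 0.5,  0.1], [ 0.1,  0.5]]); D_chk = np.array([[ 0.3,  0.1], [ 0.1,  0.3]])
gamma = np.array([2.0, 3.56])        # switching jumps m_1, m_2
C = np.diag([2.0, 2.0])              # self-feedback c_1 = c_2 = 2
E = np.diag([0.2, 0.2])              # neutral parameters e_1 = e_2 = 0.2

def f(x):                            # activation functions of Example 1
    return np.array([np.tanh(0.3 * x[0]) - 0.2 * np.sin(x[0]),
                     np.tanh(0.2 * x[1]) + 0.3 * np.sin(x[1])])

def switched(W_hat, W_chk, x):       # memristive switching, row-wise in x_i
    return np.where((np.abs(x) <= gamma)[:, None], W_hat, W_chk)

dt, T = 0.001, 30.0
steps, hist = int(T / dt), int(1.2 / dt) + 1             # buffer covers tau* = 1
x = np.zeros((steps + hist, 2)); x[:hist] = [0.5, -0.5]  # constant initial function
xd = np.zeros((steps + hist, 2))                         # derivatives for neutral term

for k in range(hist, steps + hist):
    t = (k - hist) * dt
    eta = 0.1 * np.sin(2 * t) + 0.2
    tau = (0.1 * np.sin(t) + 0.2) + (0.1 * np.cos(t) + 0.5)
    h = 0.1 * np.sin(2 * t) + 0.5
    d1, d2 = 0.4 * np.sin(t) + 0.4, 0.4 * np.sin(t) + 0.6
    u = np.array([0.5 * np.sin(t), 0.25 * np.cos(t)])
    xi = x[k - 1]
    A, B = switched(A_hat, A_chk, xi), switched(B_hat, B_chk, xi)
    D = switched(D_hat, D_chk, xi)
    # distributed-delay term: left Riemann sum over [t - d2, t - d1]
    idx = range(k - 1 - int(d2 / dt), k - 1 - int(d1 / dt))
    integ = dt * np.sum([f(x[j]) for j in idx], axis=0)
    dx = (-C @ x[k - 1 - int(eta / dt)] + A @ f(xi)
          + B @ f(x[k - 1 - int(tau / dt)]) + D @ integ
          + E @ xd[k - 1 - int(h / dt)] + u)
    xd[k] = dx
    x[k] = xi + dt * dx

print("max |x(t)| along the trajectory:", np.abs(x[hist:]).max())
```

With weight matrices that satisfy the LMIs of Theorem 3.1, the printed bound should remain within the ball S of the theorem.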
5 Conclusions
This paper has investigated the dissipativity of a neutral-type memristive neural network with two additive time-varying delays, as well as distributed and time-varying leakage delays. By applying novel linear matrix inequalities, a Lyapunov–Krasovskii functional, and the Newton–Leibniz formula, the dissipativity of the system was established. Although the dissipativity of MNNs has been reported before, there are few references on the dissipativity of neutral-type MNNs. We have added neutral terms to the model, which makes it more realistic. Finally, we have given a numerical example to illustrate the effectiveness and exactness of our results. When Markovian jumping is added to this model, how to study the dissipativity of neutral-type MNNs with mixed delays becomes an interesting question; we will extend our work in this direction in the future.
References
Wang, Z., Liu, Y., Liu, X.: On global asymptotic stability of neural networks with discrete and distributed delays. Phys. Lett. A 345(4–6), 299–308 (2005)
Egmont-Petersen, M., Ridder, D.D., Handels, H.: Image processing with neural networks—a review. Pattern Recognit. 35(10), 2279–2301 (2002)
Chua, L.: Memristor-the missing circuit element. IEEE Trans. Circuit Theory 18(5), 507–519 (1971)
Strukov, D.B., Snider, G.S., Stewart, D.R., Williams, R.S.: The missing memristor found. Nature 453(7191), 80–83 (2008)
Cantley, K.D., Subramaniam, A., Stiegler, H.J., Chapman, R.A., Vogel, E.M.: Neural learning circuits utilizing nano-crystalline silicon transistors and memristors. IEEE Trans. Neural Netw. Learn. Syst. 23(4), 565–573 (2012)
Ding, S., Wang, Z., Zhang, H.: Dissipativity analysis for stochastic memristive neural networks with time-varying delays: a discrete-time case. IEEE Trans. Neural Netw. Learn. Syst. 29(3), 618–630 (2018)
Cheng, J., Park, J.H., Cao, J., Zhang, D.: Quantized \({H^{\infty }}\) filtering for switched linear parameter-varying systems with sojourn probabilities and unreliable communication channels. Inf. Sci. 466, 289–302 (2018)
Zhang, D., Cheng, J., Park, J.H., Cao, J.: Robust \({H^{\infty }}\) control for nonhomogeneous Markovian jump systems subject to quantized feedback and probabilistic measurements. J. Franklin Inst. 355(15), 6992–7010 (2018)
Sun, J., Chen, J.: Stability analysis of static recurrent neural networks with interval time-varying delay. Appl. Math. Comput. 221(9), 111–120 (2013)
Sun, Y., Cui, B.T.: Dissipativity analysis of neural networks with time-varying delays. Neurocomputing 168, 741–746 (2015)
Li, C., Feng, G.: Delay-interval-dependent stability of recurrent neural networks with time-varying delay. Neurocomputing 72, 1179–1183 (2009)
Lv, X., Li, X.: Delay-dependent dissipativity of neural networks with mixed non-differentiable interval delays. Neurocomputing 267, 85–94 (2017)
Wei, H., Li, R., Chen, C., Tu, Z.: Extended dissipative analysis for memristive neural networks with two additive time-varying delay components. Neurocomputing 216, 429–438 (2016)
Zeng, X., Xiong, Z., Wang, C.: Hopf bifurcation for neutral-type neural network model with two delays. Appl. Math. Comput. 282, 17–31 (2016)
Xu, C., Li, P., Pang, Y.: Exponential stability of almost periodic solutions for memristor-based neural networks with distributed leakage delays. Neural Comput. 28(12), 1–31 (2016)
Zhang, Y., Gu, D.W., Xu, S.: Global exponential adaptive synchronization of complex dynamical networks with neutral-type neural network nodes and stochastic disturbances. IEEE Trans. Circuits Syst. I, Regul. Pap. 60(10), 2709–2718 (2013)
Brogliato, B., Maschke, B., Lozano, R., Egeland, O.: Dissipative Systems Analysis and Control. Springer, Berlin (2007)
Huang, Y., Ren, S.: Passivity and passivity-based synchronization of switched coupled reaction–diffusion neural networks with state and spatial diffusion couplings. Neural Process. Lett. 5, 1–17 (2017)
Fu, Q., Cai, J., Zhong, S., Yu, Y.: Dissipativity and passivity analysis for memristor-based neural networks with leakage and two additive time-varying delays. Neurocomputing 275, 747–757 (2018)
Willems, J.C.: Dissipative dynamical systems part I: general theory. Arch. Ration. Mech. Anal. 45(5), 321–351 (1972)
Hong, D., Xiong, Z., Yang, C.: Analysis of adaptive synchronization for stochastic neutral-type memristive neural networks with mixed time-varying delays. Discrete Dyn. Nat. Soc. 2018, 8126127 (2018)
Cheng, J., Park, J.H., Karimi, H.R., Shen, H.: A flexible terminal approach to sampled-data exponentially synchronization of Markovian neural networks with time-varying delayed signals. IEEE Trans. Cybern. 48(8), 2232–2244 (2018)
Zhang, D., Cheng, J., Cao, J., Zhang, D.: Finite-time synchronization control for semi-Markov jump neural networks with mode-dependent stochastic parametric uncertainties. Appl. Math. Comput. 344–345, 230–242 (2019)
Zhang, W., Yang, S., Li, C., Zhang, W., Yang, X.: Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control. Neural Netw. 104, 93–103 (2018)
Duan, L., Huang, L.: Global dissipativity of mixed time-varying delayed neural networks with discontinuous activations. Commun. Nonlinear Sci. Numer. Simul. 19(12), 4122–4134 (2014)
Tu, Z., Cao, J., Alsaedi, A., Alsaadi, F.: Global dissipativity of memristor-based neutral type inertial neural networks. Neural Netw. 88, 125–133 (2017)
Manivannan, R., Cao, Y.: Design of generalized dissipativity state estimator for static neural networks including state time delays and leakage delays. J. Franklin Inst. 355, 3990–4014 (2018)
Xiao, J., Zhong, S., Li, Y.: Relaxed dissipativity criteria for memristive neural networks with leakage and time-varying delays. Neurocomputing 171, 708–718 (2016)
Samidurai, R., Sriraman, R.: Robust dissipativity analysis for uncertain neural networks with additive time-varying delays and general activation functions. Math. Comput. Simul. 155, 201–216 (2019)
Lin, W.J., He, Y., Zhang, C., Long, F., Wu, M.: Dissipativity analysis for neural networks with two-delay components using an extended reciprocally convex matrix inequality. Inf. Sci. 450, 169–181 (2018)
Aubin, J.P., Cellina, A.: Differential Inclusions. Springer, Berlin (1984)
Filippov, A.F.: Differential Equations with Discontinuous Righthand Sides (Arscott, F.M., ed. and trans.). Kluwer Academic, Dordrecht (1988)
Filippov, A.F.: Classical solutions of differential equations with multi-valued right-hand side. SIAM J. Control Optim. 5(4), 609–621 (1967)
Song, Q., Cao, J.: Global dissipativity analysis on uncertain neural networks with mixed time-varying delays. Chaos 18, 043126 (2008)
Liao, X., Wang, J.: Global dissipativity of continuous-time recurrent neural networks with time delay. Phys. Rev. E 68(1 Pt 2), 016118 (2003)
Seuret, A., Gouaisbaut, F.: Wirtinger-based integral inequality: application to time-delay systems. Automatica 49, 2860–2866 (2013)
Wang, Z., Liu, Y., Fraser, K., Liu, X.: Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
Kwon, O.M., Lee, S.M., Park, J.H., Cha, E.J.: New approaches on stability criteria for neural networks with interval time-varying delays. Appl. Math. Comput. 218(19), 9953–9964 (2012)
Park, P.G., Ko, J.W., Jeong, C.: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47(1), 235–238 (2011)
Xin, Y., Li, Y., Cheng, Z., Huang, X.: Global exponential stability for switched memristive neural networks with time-varying delays. Neural Netw. 80, 34–42 (2016)
Guo, Z., Wang, J., Yan, Z.: Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 48, 158–172 (2013)
Nagamani, G., Joo, Y.H., Radhika, T.: Delay-dependent dissipativity criteria for Markovian jump neural networks with random delays and incomplete transition probabilities. Nonlinear Dyn. 91(4), 2503–2522 (2018)
Acknowledgements
The authors would like to thank the referees for their valuable comments on an earlier version of this article.
Authors’ information
Email address: youngcp0127@163.com (Cuiping Yang), xiong1601@163.com (Zuoliang Xiong), shineytq@163.com (Tianqing Yang).
Funding
This work is supported by National Natural Science Foundation of China (No. 61563033).
Contributions
All authors contributed equally to the writing of this paper. All authors of the manuscript have read and agreed to its content and are accountable for all aspects of the accuracy and integrity of the manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Yang, C., Xiong, Z. & Yang, T. Dissipativity analysis of neutral-type memristive neural network with two additive time-varying and leakage delays. Adv Differ Equ 2019, 6 (2019). https://doi.org/10.1186/s13662-018-1941-z