
Theory and Modern Applications

Dissipativity analysis of neutral-type memristive neural network with two additive time-varying and leakage delays


In this paper, we study the dissipativity of neutral-type memristive neural networks (MNNs) with leakage, two additive time-varying, and distributed delays. By constructing a suitable Lyapunov–Krasovskii functional (LKF) and applying integral inequality techniques, linear matrix inequalities (LMIs), and the free-weighting matrix method, we derive new sufficient conditions ensuring the dissipativity of the aforementioned MNNs. Furthermore, globally exponentially attractive and positively invariant sets are also presented. Finally, a numerical simulation is given to illustrate the effectiveness of our results.

1 Introduction

In recent decades, neural networks have been widely applied in many areas, such as automatic control engineering, image processing, associative memory, pattern recognition, and parallel computing [1, 2]. It is therefore extremely meaningful to study neural networks. Based on the completeness of circuit theory, Chua first proposed the memristor as the fourth fundamental circuit element besides the known capacitance, inductance, and resistance [3]. Subsequently, HP researchers discovered that memristors exist in nanoscale systems [4]. A memristor is a circuit element with a memory function whose resistance changes slowly with the quantity of electric charge that has passed through it under an applied voltage or current. The working mechanism of a memristor is similar to that of the human brain. Thus, the study of MNNs is more valuable than we have realized [5, 6].

In the real world, time delays are ubiquitous. They may cause complex dynamical behaviors such as periodic oscillations, dissipation, divergence, and chaos [7, 8]. Hence, the dynamic behaviors of neural networks with time delays have received much attention [9,10,11]. Existing studies on delayed neural networks can be divided into four categories, dealing with constant, time-varying, distributed, and mixed delays. While the majority of the literature concentrates on the first three simpler cases, mixed delays describe MNNs more realistically than any single type of delay [12,13,14,15,16]. MNNs with mixed delays are therefore worth further study.

Dissipativity, a generalization of Lyapunov stability, is a common concept in dynamical systems. It concerns the diverse dynamics of a system, not only its equilibrium dynamics. Many systems are stable at their equilibrium points, but in some cases the orbits of a system do not converge to an equilibrium point, or the system has no equilibrium point at all. Consequently, dissipative systems play an important role in the field of control. Dissipative system theory provides a framework for the design and analysis of control systems based on energy-related considerations [17]. Although there are some studies on the dissipativity of neural networks [18,19,20], most existing work focuses on the synchronization of neural networks [21,22,23,24]. For the dissipativity analysis of neural networks, it is essential to find globally exponentially attracting sets. Some researchers have investigated the global dissipativity of neural networks with mixed delays by giving sufficient conditions for obtaining globally exponentially attracting sets [25, 26]. To the best of our knowledge, few studies have considered the dissipativity of neutral-type memristive neural networks with mixed delays.

In this paper, we investigate the dissipativity of neutral-type memristive neural networks with mixed delays. The highlights of our work include:


We consider not only two additive time-varying delays and distributed delays, but also time-varying leakage delays.


We obtain the dissipativity of the system by combining an appropriate LKF with the reciprocally convex combination method, integral inequality techniques, and LMIs, and we establish delay-dependent dissipativity criteria.


Our results are more general than those for the ordinary neural networks.

The paper is organized as follows: in Sect. 2, the preliminaries are presented; in Sect. 3, the dissipative properties of neural network models with mixed delays are analyzed; in Sect. 4, a numerical example is given to demonstrate the effectiveness of our analytical results; in Sect. 5, the work is summarized.

2 Neural network model and some preliminaries


\(R^{n}\) (resp., \(R^{n\times m}\)) is the n-dimensional Euclidean space (resp., the set of \(n\times m\) matrices) with entries from R; \(X>0\) (resp., \(X\geq 0\)) means that the matrix X is real positive-definite (resp., positive semi-definite). When A and B are symmetric matrices, \(A>B\) means that \(A-B\) is a positive definite matrix. The superscript T denotes the transpose of a matrix; the symbol ∗ denotes the entries below the main diagonal of a symmetric matrix; I and O are the identity and zero matrices, respectively, with appropriate dimensions; \(\operatorname{diag}\{ \ldots \}\) denotes a diagonal matrix; \(\lambda _{\max }(C)\) (resp., \(\lambda _{\min }(C)\)) denotes the maximum (resp., minimum) eigenvalue of matrix C. For any interval \(V\subseteq R\), let \(S\subseteq R^{k}\) (\(1 \leq k \leq n\)), \(C(V,S)=\{\varphi :V\rightarrow S\text{ is continuous}\}\) and \(C^{1}(V,S)=\{\varphi :V\rightarrow S \text{ is continuously differentiable}\}\); \(\operatorname{co}\{b_{1} , b_{2}\}\) represents the closure of the convex hull generated by \(b_{1}\) and \(b_{2}\). For constants a, b, we set \(a\vee b = \max \{a, b\}\). Let \(L_{2}^{n}\) be the space of square integrable functions on \(R^{+}\) with values in \(R^{n}\), and let \(L_{2e}^{n}\) be the extended \(L_{2}^{n}\) space defined by \(L_{2e}^{n}=\{f:f\text{ is a measurable function on }R^{+}\text{ with }P_{T}f\in L_{2}^{n}, \forall T \in R^{+}\}\), where \((P_{T}f)(t)=f(t)\) if \(t \leq T\), and 0 if \(t>T\). For any functions \(x=\{x(t)\}\), \(y=\{y(t)\}\in L_{2e}^{n}\) and matrix Q, we define \(\langle x,Qy\rangle =\int _{0}^{T} x^{T}(t)Qy(t)\,dt\).

In this paper, we consider the following neutral-type memristor neural network model with leakage, as well as two additive time-varying and distributed time-varying delays:

$$ \textstyle\begin{cases} \dot{x_{i}}(t)=-c_{i}x_{i}(t-\eta (t))+\sum_{j=1}^{n}a_{ij}(x_{i}(t))f _{j}(x_{j}(t)) +\sum_{j=1}^{n}b_{ij}(x_{i}(t))f_{j}(x_{j}(t-\tau _{j1}(t) \\ \hphantom{\dot{x_{i}}(t)=}{} -\tau _{j2}(t)))+\sum_{j=1}^{n}d_{ij}(x_{i}(t))\int _{t-\delta _{2}(t)} ^{t-\delta _{1}(t)}f_{j}(x_{j}(s))\,ds +e_{i}\dot{x}_{i}(t-h(t))+u_{i}(t), \\ y_{i}(t)=f_{i}(x_{i}(t)), \\ x_{i}(t)=\phi _{i}(t), \quad t\in (-\tau ^{*},0), \end{cases} $$

where n is the number of cells in the neural network; \(x_{i}(t)\) is the voltage of the capacitor; \(f_{i}(\cdot )\) denotes the neuron activation function of the ith neuron at time t; \(y_{i}\) is the output of the ith neural cell; \(u_{i}(t)\in L_{\infty }\) is the external input of the ith neuron at time t; \(\eta (t)\) denotes the leakage delay satisfying \(0\leq \eta (t)\leq \eta \); \(\tau _{j1}(t)\) and \(\tau _{j2}(t)\) are two additive time-varying delays that are assumed to satisfy the conditions \(0\leq \tau _{j1}(t)\leq \tau _{1}<\infty \), \(0\leq \tau _{j2}(t)\leq \tau _{2}<\infty \); \(\delta _{1}(t)\), \(\delta _{2}(t)\) and \(h(t)\) are the time-varying delays with \(0\leq \delta _{1}\leq \delta _{1}(t)\leq \delta _{2}(t)\leq \delta _{2}\), \(0 \leq h(t)\leq h\); η, \(\tau _{1}\), \(\tau _{2}\), \(\delta _{1}\), \(\delta _{2}\) and h are nonnegative constants; \(\tau ^{*}=\eta \vee (\delta _{2} \vee (\tau \vee h))\); \(C=\operatorname{diag}(c_{1},c_{2},\ldots,c_{n})\) is a self-feedback connection matrix; \(E=\operatorname{diag}(e_{1},e_{2},\ldots,e_{n})\) is the neutral-type parameter; \(a_{ij}(x_{i}(t))\), \(b_{ij}(x_{i}(t))\), and \(d_{ij}(x_{i}(t))\) represent the memristive-based weights, which are defined as follows:

$$\begin{aligned}& a_{ij} \bigl(x_{i}(t) \bigr)=\frac{\mathbf{W}_{(1)ij}}{\mathbf{C}_{i}} \times \operatorname {sign}_{ij}, \qquad b_{ij} \bigl(x_{i}(t) \bigr)=\frac{\mathbf{W}_{(2)ij}}{\mathbf{C}_{i}}\times \operatorname {sign}_{ij}, \\& d_{ij} \bigl(x_{i}(t) \bigr)=\frac{\mathbf{W}_{(3)ij}}{\mathbf{C}_{i}}\times \operatorname {sign}_{ij}, \qquad \operatorname{sign}_{ij}= \textstyle\begin{cases} 1, & i\neq j, \\ -1,& i=j. \end{cases}\displaystyle \end{aligned}$$

Here \(\mathbf{W}_{(k)ij}\) denote the memductances of the memristors \(\mathbf{R}_{(k)ij}\), \(k=1,2,3\). In view of the memristor property, we set

$$\begin{aligned}& a_{ij} \bigl(x_{i}(t) \bigr)= \textstyle\begin{cases} \hat{a}_{ij}, & \vert x_{i}(t) \vert \leq \gamma _{i}, \\ \check{a}_{ij}, & \vert x_{i}(t) \vert >\gamma _{i}, \end{cases}\displaystyle \quad\quad b_{ij} \bigl(x_{i}(t) \bigr)= \textstyle\begin{cases} \hat{b}_{ij}, & \vert x_{i}(t) \vert \leq \gamma _{i}, \\ \check{b}_{ij}, & \vert x_{i}(t) \vert >\gamma _{i}, \end{cases}\displaystyle \\& d_{ij} \bigl(x_{i}(t) \bigr)= \textstyle\begin{cases} \hat{d}_{ij}, & \vert x_{i}(t) \vert \leq \gamma _{i}, \\ \check{d}_{ij}, & \vert x_{i}(t) \vert >\gamma _{i}, \end{cases}\displaystyle \end{aligned}$$

where the switching jumps \(\gamma _{i}>0\), \(\hat{a}_{ij}\), \(\check{a} _{ij}\), \(\hat{b}_{ij}\), \(\check{b}_{ij}\), \(\hat{d}_{ij}\) and \(\check{d} _{ij}\) are known constants with respect to memristances.
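The state-dependent switching rule above can be sketched numerically. The levels `A_HAT`, `A_CHECK` and the switching jump `GAMMA` below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Illustrative sketch of the memristive weight a_ij(x_i): it switches
# between two constant levels depending on whether |x_i| exceeds the
# switching jump gamma_i.  All numeric values are assumed for illustration.
GAMMA = 1.0                   # switching jump gamma_i (assumed)
A_HAT, A_CHECK = 0.4, -0.6    # \hat{a}_ij and \check{a}_ij (assumed)

def a_ij(x_i, a_hat=A_HAT, a_check=A_CHECK, gamma=GAMMA):
    """Memristive weight: a_hat when |x_i| <= gamma, a_check otherwise."""
    return a_hat if abs(x_i) <= gamma else a_check

# The convex-hull relaxation used later replaces a_ij(x_i) by any value in
# co[min, max] of the two levels.
a_lower, a_upper = min(A_HAT, A_CHECK), max(A_HAT, A_CHECK)
```

The same pattern applies verbatim to \(b_{ij}\) and \(d_{ij}\).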

Remark 1

In recent years, the dissipativity problem of MNNs has received much attention, and substantial results on dissipativity have been obtained for MNNs. However, the works [27, 28] considered only the leakage delay, while [29, 30] considered additive time-varying delays but not distributed delays. In fact, leakage delays and multiple signal transmission delays coexist in MNNs. Because few results in the existing literature address the dissipativity analysis of neutral-type MNNs with multiple time delays, this paper attempts to extend our knowledge in this field by studying the dissipativity of such systems, and an example is given to demonstrate the effectiveness of our results. The obtained results thus extend the study of the dynamic characteristics of MNNs.

Remark 2

In many real applications, a signal transmitted from one point to another may traverse several network segments, which can induce successive delays with different properties due to variable network transmission conditions. Moreover, when \(\tau _{1}(t)+\tau _{2}(t)\) reaches its maximum, \(\tau _{1}(t)\) and \(\tau _{2}(t)\) do not necessarily reach their maxima at the same time. Therefore, in this paper, we treat the two additive delay components in (2.1) separately.

Remark 3

Furthermore, the above system is a switching system whose connection weights vary with its state. While smooth analysis is suitable for studying continuous nonlinear systems, nonsmooth analysis is needed for switching nonlinear systems. Therefore, it is necessary to introduce some notions from nonsmooth theory, such as differential inclusions and set-valued maps.

Let \(\underline{a}_{ij}=\min \{\hat{a}_{ij}, \check{a}_{ij}\}\), \(\overline{a}_{ij}=\max \{\hat{a}_{ij}, \check{a}_{ij}\}\), \(\underline{b}_{ij}=\min \{\hat{b}_{ij}, \check{b}_{ij}\}\), \(\overline{b}_{ij}=\max \{\hat{b}_{ij}, \check{b}_{ij}\}\), \(\underline{d}_{ij}=\min \{\hat{d}_{ij}, \check{d}_{ij}\}\), \(\overline{d}_{ij}=\max \{\hat{d}_{ij}, \check{d}_{ij}\}\), for \(i,j =1,2,\ldots,n\). By applying the theory of differential inclusions and set-valued maps in system (2.1) [31, 32], it follows that

$$ \textstyle\begin{cases} \dot{x_{i}}(t)\in -c_{i}x_{i}(t-\eta (t))+\sum_{j=1}^{n}\operatorname{co}[ \underline{a}_{ij}, \overline{a}_{ij}]f_{j}(x_{j}(t))+\sum_{j=1}^{n}\operatorname{co}[ \underline{b}_{ij},\overline{b}_{ij}]f_{j}(x_{j}(t \\ \hphantom{\dot{x_{i}}(t)\in}{} -\tau _{j1}(t)-\tau _{j2}(t))) +\sum_{j=1}^{n}\operatorname{co}[\underline{d}_{ij}, \overline{d}_{ij}]\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f_{j}(x_{j}(s))\,ds \\ \hphantom{\dot{x_{i}}(t)\in}{}+{e}_{i}\dot{x}_{i}(t-h(t))+u_{i}(t), \\ y_{i}(t)=f_{i}(x_{i}(t)), \\ x_{i}(t)=\phi _{i}(t), \quad t\in (-\tau ^{*},0). \end{cases} $$

Using Filippov’s theorem in [33], there exist \(a_{ij}^{\prime }(t)\in \operatorname{co}[\underline{a}_{ij}, \overline{a}_{ij}]\), \(b_{ij}^{\prime }(t)\in \operatorname{co}[\underline{b}_{ij},\overline{b}_{ij}]\), \(d_{ij}^{\prime }(t)\in \operatorname{co}[\underline{d}_{ij}, \overline{d}_{ij}]\), and \(A=(a_{ij}^{\prime }(t))_{n\times n}\), \(B=(b_{ij}^{\prime }(t))_{n\times n}\), \(D=(d_{ij}^{\prime }(t))_{n\times n} \), such that

$$ \textstyle\begin{cases} \dot{x}(t)=-Cx(t-\eta (t))+Af(x(t))+Bf(x(t-\tau _{1}(t)-\tau _{2}(t))) \\ \hphantom{\dot{x}(t)=}{} +D\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f(x(s))\,ds+E\dot{x}(t-h(t))+u(t), \\ y(t)=f(x(t)), \\ x(t)=\phi (t), \quad t\in (-\tau ^{*},0), \end{cases} $$

where \(x(t)=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}\), \(x(t-\eta (t))=(x _{1}(t-\eta (t)), x_{2}(t-\eta (t)),\ldots, x_{n}(t-\eta (t)))^{T}\), \(f(x(t))=(f_{1}(x_{1}(t)), f_{2}(x_{2}(t)),\ldots, f_{n}(x_{n}(t)))^{T}\), \(f(x(t-\tau _{1}(t)-\tau _{2}(t)))=(f_{1}(x_{1}(t- \tau _{11}(t)-\tau _{12}(t))), f_{2}(x_{2}(t-\tau _{21}(t)-\tau _{22}(t))),\ldots,f _{n}(x_{n}(t-\tau _{n1}(t)-\tau _{n2}(t))))^{T}\), \(\dot{x}(t-h(t))=(\dot{x} _{1}(t-h(t)), \dot{x}_{2}(t-h(t)),\ldots, \dot{x}_{n}(t-h(t)))^{T}\), \(u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T} \).
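A Filippov selection of system (2.2) can be explored numerically with a simple delay-buffered Euler scheme. The sketch below simulates a hypothetical 2-neuron instance with constant delays and tanh activations; all matrices, delays, and inputs are illustrative assumptions, not the paper's example:

```python
import numpy as np

# Minimal forward-Euler sketch of a 2-neuron instance of system (2.2) with
# constant delays and f = tanh.  All parameter values are assumed.
n, dt, steps = 2, 0.01, 2000
C = np.diag([1.2, 1.0]); A = np.array([[0.2, -0.1], [0.1, 0.3]])
B = np.array([[-0.2, 0.1], [0.05, -0.1]]); D = 0.1 * np.eye(n)
E = np.diag([0.1, 0.1]); u = np.array([0.5, -0.3])
eta, tau, d1, d2, h = 10, 20, 5, 15, 10   # delays measured in steps (assumed)
f = np.tanh
hist = max(eta, tau, d2, h) + 1
x = np.zeros((steps + hist, n)); xdot = np.zeros_like(x)
x[:hist] = 0.1                            # constant initial function phi

for k in range(hist, steps + hist):
    distr = dt * f(x[k - d2:k - d1]).sum(axis=0)   # distributed-delay term
    dx = (-C @ x[k - 1 - eta] + A @ f(x[k - 1]) + B @ f(x[k - 1 - tau])
          + D @ distr + E @ xdot[k - 1 - h] + u)   # neutral term uses xdot
    x[k] = x[k - 1] + dt * dx
    xdot[k] = dx

x_final = x[-1]
```

With bounded activations and a bounded input, the trajectory stays in a bounded region, which is the behavior the dissipativity analysis formalizes.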

To prove our main results, the following assumptions, definitions and lemmas are needed.

Assumption 1

The time-varying delays \(\tau _{1}(t)\), \(\tau _{2}(t)\) and \(\eta (t) \) satisfy \(\vert \dot{\tau }_{1}(t)\vert \leq \mu _{1}\), \(\vert \dot{\tau }_{2}(t)\vert \leq \mu _{2}\), \(\vert \dot{\eta }(t)\vert \leq \mu _{3}\), where \(\mu _{1}\), \(\mu _{2}\) and \(\mu _{3}\) are nonnegative constants, and we denote \(\tau (t)=\tau _{1}(t)+\tau _{2}(t)\), \(\mu =\mu _{1}+\mu _{2} \) and \(\tau =\tau _{1}+\tau _{2}\).

Assumption 2

For all \(\alpha ,\beta \in R\) and \(\alpha \neq \beta \), \(i=1,2,\ldots,n\), the activation function f is bounded and there exist constants \(k_{i}^{-}\) and \(k_{i}^{+} \) such that

$$ k_{i}^{-}\leq \frac{f_{i}(\alpha )-f_{i}(\beta )}{\alpha -\beta } \leq k_{i}^{+}, $$

where \(F_{i}=\vert k_{i}^{-}\vert \vee \vert k_{i}^{+}\vert \), \(f=(f_{1},f_{2},\ldots,f_{n})^{T}\), and \(f_{i}(0)=0\) for every \(i\in \{1,2,\ldots,n\}\). For presentation convenience, in the following we denote

$$ K_{1}=\operatorname{diag} \bigl\{ {k_{1}^{-}k_{1}^{+},k_{2}^{-}k_{2}^{+}, \ldots,k _{n}^{-}k_{n}^{+}} \bigr\} , \quad \quad K_{2}=\operatorname{diag} \biggl\{ {\frac{k_{1}^{-}+k_{1}^{+}}{2}, \frac{k_{2}^{-}+k _{2}^{+}}{2},\ldots, \frac{k_{n}^{-}+k_{n}^{+}}{2}} \biggr\} . $$
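As a concrete instance of Assumption 2, for \(f(s)=\tanh (s)\) the difference quotient lies in the sector \([0,1]\). The sketch below checks this on random pairs and forms the matrices \(K_{1}\), \(K_{2}\) for a hypothetical 2-neuron network:

```python
import numpy as np

# Sector check for f(s) = tanh(s): the quotient (f(a)-f(b))/(a-b) lies in
# [k^-, k^+] = [0, 1].  The 2-neuron K1, K2 below are illustrative.
k_minus, k_plus = 0.0, 1.0
rng = np.random.default_rng(1)
a, b = rng.normal(size=1000), rng.normal(size=1000)
mask = a != b
q = (np.tanh(a[mask]) - np.tanh(b[mask])) / (a[mask] - b[mask])
sector_ok = bool(np.all((q >= k_minus - 1e-9) & (q <= k_plus + 1e-9)))

K1 = np.diag([k_minus * k_plus] * 2)        # diag{k_i^- k_i^+}
K2 = np.diag([(k_minus + k_plus) / 2] * 2)  # diag{(k_i^- + k_i^+)/2}
```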

Assumption 3

\(\phi (t)\in \mathbb{C}^{1}([-\tau ^{*},0],R ^{n}) \) is the initial function with the norm

$$ \Vert \phi \Vert _{\tau ^{*}}=\sup_{s\in [-\tau ^{*},0]} \bigl\{ \bigl\vert \phi (s) \bigr\vert , \bigl\vert \dot{\phi }(s) \bigr\vert \bigr\} . $$

Definition 1

([34, 35])

Let \(x(t,0,\phi )\) be the solution of neural network (2.2) through \((0,\phi )\), \(\phi \in \mathbb{C}^{1} \). Suppose there exists a compact set \(S\subseteq R^{n}\) such that for every \(\phi \in \mathbb{C}^{1}\), there exists \(T(\phi )>0\) such that \(x(t,0,\phi )\in S\) whenever \(t\geq T(\phi )\). Then the neural network (2.2) is said to be a globally dissipative system, and S is called a globally attractive set. The set S is called positively invariant if for every \(\phi \in S\), it holds that \(x(t,0,\phi )\in S\) for all \(t\in R_{+}\).

Definition 2

([34, 35])

Let S be a globally attractive set of neural network (2.2). The neural network (2.2) is said to be globally exponentially dissipative if there exist a constant \(a>0 \) and a compact set \(S^{*} \supset S\) in \(R^{n}\) such that for every \(\phi \in R^{n} \backslash S^{*} \), there exists a constant \(M(\phi )>0\) such that

$$ \inf_{\tilde{x}\in S } \bigl\{ \bigl\vert x(t,0,\phi )-\tilde{x} \bigr\vert :x\in R^{n} \backslash S^{*} \bigr\} \leq M(\phi )e^{-at},\quad t\in R_{+}. $$

Here \(x\in R^{n}\) but \(x\notin S^{*}\). The set \(S^{*}\) is called a globally exponentially attractive set.

Lemma 1


Consider a given matrix \(R>0\). Then, for all continuous functions \(\omega (\cdot ):[a,b]\rightarrow R^{n}\), such that the considered integral is well defined, one has

$$ \int _{a}^{b}\omega ^{T}(u)R\omega (u) \,du \geq \frac{1}{b-a} \biggl[ \int _{a}^{b}\omega (u)\,du \biggr] ^{T}R \biggl[ \int _{a}^{b}\omega (u)\,du \biggr]. $$
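Lemma 1 (Jensen's integral inequality) can be sanity-checked numerically by discretizing the integrals. The matrix R, the function ω, and the interval below are illustrative choices:

```python
import numpy as np

# Numerical check of Jensen's integral inequality for a random R > 0 and a
# smooth vector function omega, via a fine Riemann sum on [a, b].
rng = np.random.default_rng(2)
n, a, b, m = 3, 0.0, 2.0, 4000
M = rng.normal(size=(n, n)); R = M @ M.T + n * np.eye(n)   # R > 0
t = np.linspace(a, b, m); dt = t[1] - t[0]
w = np.stack([np.sin(t), np.cos(2 * t), t]).T              # omega(u), (m, n)

lhs = dt * np.einsum('ti,ij,tj->', w, R, w)   # approx ∫ omega^T R omega du
integral = dt * w.sum(axis=0)                 # approx ∫ omega du
rhs = integral @ R @ integral / (b - a)
```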

Lemma 2


For any given matrices H, E, a scalar \(\varepsilon >0\) and F with \(F^{T} F\leq I\), the following inequality holds:

$$ HFE+(HFE)^{T}\leq \varepsilon HH^{T}+\varepsilon ^{-1}E^{T}E. $$
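Lemma 2 can likewise be verified numerically: for a contraction F (spectral norm at most 1), the gap matrix should be positive semidefinite. Dimensions and ε below are illustrative:

```python
import numpy as np

# Check of Lemma 2: eps*H H^T + (1/eps) E^T E - (HFE + (HFE)^T) >= 0
# for random H, E, a contraction F with F^T F <= I, and eps > 0.
rng = np.random.default_rng(3)
n, eps = 4, 0.7
H, Emat = rng.normal(size=(n, n)), rng.normal(size=(n, n))
F0 = rng.normal(size=(n, n))
F = F0 / (np.linalg.norm(F0, 2) + 1e-9)      # spectral norm < 1, so F^T F <= I
HFE = H @ F @ Emat
gap = eps * H @ H.T + Emat.T @ Emat / eps - (HFE + HFE.T)
min_eig = float(np.linalg.eigvalsh(gap).min())
```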

Lemma 3


For any constant matrix \(H\in {R}^{n\times n}\) and two scalars \(b\geq a\geq 0\), the following inequality holds:

$$ \begin{aligned} &-\frac{(b^{2}-a^{2})}{2} \int _{-b}^{-a} \int ^{t}_{t+\theta }x^{T}(s)Hx(s)\,ds\,d \theta \\ &\quad \leq - \biggl[ \int _{-b}^{-a} \int ^{t}_{t+\theta }x(s)\,ds\,d\theta \biggr] ^{T} H \biggl[ \int _{-b}^{-a} \int ^{t}_{t+\theta }x(s)\,ds\,d\theta \biggr]. \end{aligned} $$
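Lemma 3 is a double-integral Jensen inequality; multiplying both sides by \(-1\) (with \(t=0\)) it reads \(\frac{b^{2}-a^{2}}{2}\iint x^{T}Hx \geq (\iint x)^{T}H(\iint x)\). A discretized check with illustrative H, x, a, b:

```python
import numpy as np

# Numerical check of Lemma 3 with t = 0: for H > 0,
#   ((b^2 - a^2)/2) * I_q >= v^T H v,
# where I_q = ∫_{-b}^{-a} ∫_{theta}^{0} x(s)^T H x(s) ds dtheta and
# v = ∫_{-b}^{-a} ∫_{theta}^{0} x(s) ds dtheta.  Values are illustrative.
rng = np.random.default_rng(4)
n, a, b, m = 2, 0.3, 1.5, 400
M = rng.normal(size=(n, n)); H = M @ M.T + np.eye(n)       # H > 0

thetas = np.linspace(-b, -a, m); dth = thetas[1] - thetas[0]
I_q, v = 0.0, np.zeros(n)
for th in thetas:
    s = np.linspace(th, 0.0, m); ds = s[1] - s[0]
    xs = np.stack([np.sin(3 * s), np.cos(s) + s]).T        # x(s), shape (m, n)
    I_q += dth * ds * np.einsum('ti,ij,tj->', xs, H, xs)
    v += dth * ds * xs.sum(axis=0)
lhs = (b**2 - a**2) / 2 * I_q
rhs = v @ H @ v
```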

Lemma 4


Let the functions \(f_{1}(t), f_{2}(t) ,\ldots, f _{N}(t):R^{m} \rightarrow R\) have positive values in an open subset D of \(R^{m}\) and satisfy

$$ \frac{1}{\alpha _{1}}f_{1}(t)+\frac{1}{\alpha _{2}}f_{2}(t)+ \cdots +\frac{1}{ \alpha _{N}}f_{N}(t):D\rightarrow R, $$

with \(\alpha _{i}>0\) and \(\sum_{i}\alpha _{i}=1\), then the reciprocal convex combination of \(f_{i}(t)\) over the set D satisfies

$$\begin{aligned}& \forall g_{i,j}(t):R^{m}\rightarrow R^{n},\quad \quad g_{i,j}(t)\doteq g_{j,i}(t), \\& \sum_{i}\frac{1}{\alpha _{i}}f_{i}(t)\geq \sum_{i}f_{i}(t)+\sum _{i \neq j}g_{i,j}(t),\quad\quad \begin{bmatrix} f_{i}(t)&g_{i,j}(t)\\ g_{j,i}(t)&f_{j}(t) \end{bmatrix} \geq 0. \end{aligned}$$
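For intuition, the two-term scalar case of Lemma 4 states: if \(f_{1},f_{2}>0\) and \(g^{2}\leq f_{1}f_{2}\) (so the \(2\times 2\) block matrix is positive semidefinite), then \(f_{1}/\alpha +f_{2}/(1-\alpha )\geq f_{1}+f_{2}+2g\) for all \(\alpha \in (0,1)\). A randomized check of this instance:

```python
import numpy as np

# Scalar two-term instance of the reciprocally convex combination lemma:
# f1/alpha + f2/(1-alpha) >= f1 + f2 + 2g whenever g^2 <= f1*f2.
rng = np.random.default_rng(5)
ok = True
for _ in range(1000):
    f1, f2 = rng.uniform(0.1, 5.0, size=2)
    g = rng.uniform(-1, 1) * np.sqrt(f1 * f2)   # ensures the 2x2 block is PSD
    alpha = rng.uniform(0.01, 0.99)
    ok &= bool(f1 / alpha + f2 / (1 - alpha) >= f1 + f2 + 2 * g - 1e-9)
```

The gap equals \((\sqrt{f_{1}(1-\alpha )/\alpha }-\sqrt{f_{2}\alpha /(1-\alpha )})^{2}+2(\sqrt{f_{1}f_{2}}-g)\geq 0\), which is why the bound holds.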

3 Main results

In this section, under Assumptions 1–3 and by using the Lyapunov–Krasovskii functional method and the LMI technique, a delay-dependent dissipativity criterion for system (2.2) is derived in the following theorem.

Theorem 3.1

Under Assumptions 1–3, if there exist symmetric positive definite matrices \(P>0\), \({Q_{i}>0}\), \(V_{i} >0 \), \(U_{i}>0\) (\(i=1,2,3\)), \(R_{j}>0\), \(T_{j}>0\) (\(j=1,2,3,4,5\)), \(G_{k}>0\) (\(k=1,2,3,4\)), \(L_{1}>0\), \(L_{2}>0\), \(S_{2}>0\), \(S_{3}>0\), three \(n\times n \) diagonal matrices \(M>0\), \(\beta _{1}>0\), \(\beta _{2}>0 \), and an \(n\times n\) real matrix \(S_{1} \) such that the following LMIs hold:

$$ \varPhi _{k}=\varPsi -e^{-2\alpha \tau }\varUpsilon _{k}^{T} \begin{bmatrix} U_{1}&V_{1}&0&0&0&0 \\ *&U_{1}&0&0&0&0 \\ *&*&U_{2}&V_{2}&0&0 \\ *&*&*&U_{2}&0&0 \\ *&*&*&*&U_{3}&V_{3} \\ *&*&*&*&*&U_{3} \end{bmatrix} \varUpsilon _{k}< 0 \quad (k=1,2,3,4), $$

where \(\varPsi =[\psi ]_{l\times n}\) (\(l,n=1,2,\ldots,25\)); \(\psi _{1,1}=-PM-M ^{T}P+2\alpha P+2Q_{1}+Q_{2}+Q_{3}+R_{1}+R_{2}+R_{3}+R_{4}+R_{5} -4e ^{-2\alpha \tau _{1}}T_{1}-4e^{-2\alpha \tau _{2}}T_{2}-4e^{-2\alpha \tau }T_{3} -4e^{-2\alpha \eta }T_{4}-4e^{-2\alpha h}T_{5}+\eta ^{2}L _{2}-K_{1}\beta _{1}\), \(\psi _{1,2}=-2e^{-\alpha \tau }G_{3}\), \(\psi _{1,3}=-2e ^{-\alpha \tau _{1}}G_{1}\), \(\psi _{1,4}=-2e^{-\alpha \tau _{2}}G_{2}\), \(\psi _{1,5}=PM-2e^{-2\alpha \eta }G_{4}\), \(\psi _{1,6}=e^{-2\alpha h}T _{5}\), \(\psi _{1,7}=-2e^{-2\alpha \tau }(T_{3}+2G_{3})\), \(\psi _{1,8}=-2e ^{-2\alpha \tau _{1}}(T_{1}+2G_{1})\), \(\psi _{1,9}=-2e^{-2\alpha \tau _{2}}(T_{2}+2G_{2})\), \(\psi _{1,10}=-PC+S_{1}C-2e^{-2\alpha \eta }(T _{4}+2G_{4})\), \(\psi _{1,11}=PA-S_{1}A+K_{2}\beta _{1}\), \(\psi _{1,12}=PB-S _{1}B\), \(\psi _{1,13}=M^{T}PM-\alpha PM -\alpha M^{T}P\), \(\psi _{1,14}=-6e ^{-2\alpha \eta }G_{4}\), \(\psi _{1,15}=-6e^{-2\alpha \eta }T_{4}\), \(\psi _{1,16}=-6e^{-2\alpha \tau } T_{3}\), \(\psi _{1,17}= -6e^{-2 \alpha \tau _{1}}T_{1}\), \(\psi _{1,18}=-6e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{1,19}=6e^{-2\alpha \tau }G_{3}\), \(\psi _{1,20}=6e^{-2\alpha \tau _{1}}G_{1}\), \(\psi _{1,21}=6e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{1,22}=PD-S _{1}D\), \(\psi _{1,23}=S_{1}\), \(\psi _{1,24}=PE-S_{1}E\), \(\psi _{2,2}=-e ^{-2\alpha \tau }Q_{1}-4e^{-2\alpha \tau }T_{3}\), \(\psi _{2,7}=-2e^{-2 \alpha \tau }(T_{3}+2G_{3})\), \(\psi _{2,16}=6e^{-2\alpha \tau }G_{3}\), \(\psi _{2,19}=6e^{-2\alpha \tau }T_{3}\), \(\psi _{3,3}=-e^{-2\alpha \tau _{1}}Q_{2}-4e^{-2\alpha \tau _{1}}T_{1}\), \(\psi _{3,8}=-2e^{-2 \alpha \tau _{1}}(T_{1}+2G_{1})\), \(\psi _{3,17}=6e^{-2\alpha \tau _{1}}G _{1}\), \(\psi _{3,20}=6e^{-2\alpha \tau _{1}}T_{1}\), \(\psi _{4,4}=-e^{-2 \alpha \tau _{2}}Q_{3}-4e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{4,9}=-2e ^{-2\alpha \tau _{2}}(T_{2}+2G_{2})\), \(\psi _{4,18}=6e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{4,21}=6e^{-2\alpha \tau _{2}}T_{2}\), 
\(\psi _{5,5}=-e ^{-2\alpha \eta }R_{2}-4e^{-2\alpha \eta }T_{4}\), \(\psi _{5,10}=-2e ^{-2\alpha \eta }(T_{4}+2G_{4})\), \(\psi _{5,13}=-M^{T}PM\), \(\psi _{5,14}=6e ^{-2\alpha \eta }T_{4}\), \(\psi _{5,15}=6e^{-2\alpha \eta }G_{4}\), \(\psi _{6,6}=-e^{-2\alpha h}T_{5}\), \(\psi _{7,7}=-(1-\mu )e^{-2\alpha \tau }R_{3}-4e^{-2\alpha \tau }(2T_{3}+G_{3})-K_{1}\beta _{2}\), \(\psi _{7,12}=K_{2}\beta _{2}\), \(\psi _{7,16}=6e^{-2\alpha \tau }(T_{3}+G _{3})\), \(\psi _{7,19}=6e^{-2\alpha \tau }(T_{3}+G_{3})\), \(\psi _{8,8}=-(1- \mu _{1})e^{-2\alpha \tau _{1}}R_{4}-4e^{-2\alpha \tau _{1}}(2T_{1}+G _{1})\), \(\psi _{8,17}=6e^{-2\alpha \tau _{1}}(T_{1}+G_{1})\), \(\psi _{8,20}=6e ^{-2\alpha \tau _{1}}(T_{1}+G_{1})\), \(\psi _{9,9}=-(1-\mu _{2})e^{-2 \alpha \tau _{2}}R_{5}-4e^{-2\alpha \tau _{2}}(2T_{2}+G_{2})\), \(\psi _{9,18}=6e^{-2\alpha \tau _{2}}(T_{2}+G_{2})\), \(\psi _{9,21}=6e^{-2 \alpha \tau _{2}}(T_{2}+G_{2})\), \(\psi _{10,13}=M^{T}P{C}\), \(\psi _{10,10}=-(1- \mu _{3})e^{-2\alpha \eta }R_{1}-4e^{-2\alpha \eta }(2T_{4}+G_{4})\), \(\psi _{10,14}=6e^{-2\alpha \eta }(T_{4}+G_{4})\), \(\psi _{10,15}=6e^{-2 \alpha \eta }(T_{4}+G_{4})\), \(\psi _{10,23}=-S_{2}C\), \(\psi _{10,24}=-S _{3}C\), \(\psi _{11,11}=(\delta _{2}-\delta _{1})^{2}L_{1}-\beta _{1}\), \(\psi _{11,13}=-M^{T}PA\), \(\psi _{11,23}=S_{2}A\), \(\psi _{11,24}=S_{3}A\), \(\psi _{12,12}=-\beta _{2}\), \(\psi _{12,13}=-M^{T}PB\), \(\psi _{12,23}=S _{2}B\), \(\psi _{12,24}=S_{3}B\), \(\psi _{13,13}=\alpha M^{T}PM-2e^{-2 \alpha \eta }L_{2}\), \(\psi _{13,22}=-M^{T}PD\), \(\psi _{13,24}=-M^{T}PE\), \(\psi _{13,25}=-MP\), \(\psi _{14,14}=-12e^{-2\alpha \eta }T_{4}\), \(\psi _{14,15}=-12e^{-2\alpha \eta }G_{4}\), \(\psi _{15,15}=-12e^{-2 \alpha \eta }T_{4}\), \(\psi _{16,16}=-12e^{-2\alpha \tau }T_{3}\), \(\psi _{16,19}=-12e^{-2\alpha \tau }G_{3}\), \(\psi _{17,17}=-12e^{-2 \alpha \tau _{1}}T_{1}\), \(\psi _{17,20}=-12e^{-2\alpha \tau _{1}}G_{1}\), \(\psi _{18,18}=-12e^{-2 \alpha \tau _{2}}T_{2}\), \(\psi 
_{18,21}=-12e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{19,19}=-12e^{-2\alpha \tau }T_{3}\), \(\psi _{20,20}=-12e^{-2 \alpha \tau _{1}} T_{1}\), \(\psi _{21,21}=-12e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{22,22}=-e^{-2\alpha \delta _{2}}L_{1}\), \(\psi _{22,23}=S_{2}D\), \(\psi _{22,24}=S_{3}D\), \(\psi _{23,23}=\frac{\tau _{1}^{4}}{4}U_{1}+\frac{\tau _{2}^{4}}{4}U_{2} +\frac{\tau ^{4}}{4}U _{3}-S_{2}+\tau _{1}^{2}T_{1}+\tau _{2}^{2}T_{2}+\tau ^{2}T_{3}+\eta ^{2}T _{4}+h^{2}T_{5}\), \(\psi _{23,24}=S_{2}E\), \(\psi _{24,24}=S_{3}E+E^{T}S _{3}+S_{3}\), \(\psi _{25,25}=S_{2}\), \(\varUpsilon _{k}^{T}=[\varGamma _{1k},\varGamma _{2k},\varGamma _{3k},\varGamma _{4k}, \varGamma _{5k},\varGamma _{6k}]^{T}\) (\(k=1,2,3,4\)), \(\varGamma _{11}^{T}=\varGamma _{12}^{T}=\tau _{1}(e_{1}-e_{20})\), \(\varGamma _{13}^{T}=\varGamma _{14}^{T}=\mathbf{0}\), \(\varGamma _{21}^{T}=\varGamma _{22}^{T}=\mathbf{0}\), \(\varGamma _{23}^{T}=\varGamma _{24}^{T}=\tau _{1}(e_{1}-e_{17})\), \(\varGamma _{31}^{T}=\varGamma _{33}^{T}=\tau _{2}(e_{1}-e_{21})\), \(\varGamma _{32} ^{T}=\varGamma _{34}^{T}=\mathbf{0}\), \(\varGamma _{41}^{T}=\varGamma _{43}^{T}=\mathbf{0}\), \(\varGamma _{42}^{T}= \varGamma _{44}^{T}=\tau _{2}(e_{1}-e_{18})\), \(\varGamma _{51}^{T}=\tau (e _{1}-e_{19})\), \(\varGamma _{52}^{T}=\tau _{1}(e_{1}-e_{19})\), \(\varGamma _{53}^{T}=\tau _{2}(e_{1}-e_{19})\), \(\varGamma _{54}^{T}=\varGamma _{61}^{T}=\mathbf{0}\), \(\varGamma _{62}^{T}=\tau _{2}(e_{1}-e_{16})\), \(\varGamma _{63}^{T}=\tau _{1}(e_{1}-e_{16})\), \(\varGamma _{64}^{T}=\tau (e_{1}-e_{19})\), \(e_{i}=[\mathbf{0}_{n\times (i-1)n},\mathbf{I}_{n\times n},\mathbf{0} _{n\times (25-i)n}]\) (\(i=1,2,\ldots,25\)), then the neural network (2.2) is exponentially dissipative, and

$$\begin{aligned} S&= \biggl\{ x: \vert x \vert \leq \frac{ \vert (P-S_{1}) \vert +\sqrt{ \vert (P-S_{1}) \vert ^{2} + \lambda _{\min }{(Q_{1})}\lambda _{\max }(S_{3})}}{\lambda _{\min }{(Q_{1})}} \varGamma _{u} \biggr\} \end{aligned}$$

is a positively invariant and globally exponentially attractive set, where \(\varGamma _{u}>0\) is a bound of the external input \(u(t)\) on \(R^{+}\), i.e., \(\vert u(t)\vert \leq \varGamma _{u}\). In addition, the constant α appearing in \(\varPhi _{k}\) serves as the exponential dissipativity rate index.
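The radius of the attractive set S is an explicit function of a few scalar quantities. The sketch below evaluates it for hypothetical values of \(\vert P-S_{1}\vert \), \(\lambda _{\min }(Q_{1})\), \(\lambda _{\max }(S_{3})\), and \(\varGamma _{u}\) (all assumed, not from the paper's example):

```python
import numpy as np

# Illustrative evaluation of the attractive-set radius from Theorem 3.1:
#   r = (|P - S1| + sqrt(|P - S1|^2 + lam_min(Q1)*lam_max(S3)))
#       / lam_min(Q1) * Gamma_u
# All four scalar inputs below are assumed values.
p_s1, lam_min_q1, lam_max_s3, gamma_u = 1.5, 0.8, 2.0, 0.5
radius = (p_s1 + np.sqrt(p_s1**2 + lam_min_q1 * lam_max_s3)) \
         / lam_min_q1 * gamma_u
```

Note that the radius grows linearly with the input bound \(\varGamma _{u}\) and shrinks as \(\lambda _{\min }(Q_{1})\) grows, matching the intuition that stronger decay terms yield a tighter absorbing set.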


Consider the following Lyapunov–Krasovskii functional:

$$ V \bigl(t,x(t) \bigr)=\sum_{k=1} ^{6}V_{k}(t), $$


$$\begin{aligned} &V_{1} \bigl(t,x(t) \bigr)= \biggl[x(t)-M \int _{t-\eta }^{t}x(s)\,ds \biggr]^{T}P \biggl[x(t)-M \int _{t-\eta }^{t}x(s)\,ds \biggr], \\ &\begin{aligned} V_{2} \bigl(t,x(t) \bigr)&= \int _{t-\tau }^{t}e^{2\alpha (s-t)}x^{T}(s)Q_{1}x(s) \,ds + \int _{t-\tau _{1}}^{t}e^{2\alpha (s-t)}x^{T}(s)Q_{2}x(s) \,ds \\ &\quad{} + \int _{t-\tau _{2}}^{t}e^{2\alpha (s-t)}x^{T}(s)Q _{3}x(s)\,ds, \end{aligned} \\ & \begin{aligned} V_{3} \bigl(t,x(t) \bigr)&= \int _{t-\eta (t)}^{t}e^{2\alpha (s-t)}x^{T}(s)R_{1}x(s) \,ds + \int _{t-\eta }^{t}e^{2\alpha (s-t)}x^{T}(s)R_{2}x(s) \,ds \\ &\quad{} + \int _{t-\tau (t)}^{t}e^{2\alpha (s-t)}x(s)^{T} R_{3}x(s)\,ds + \int _{t-\tau _{1}(t)}^{t}e^{2\alpha (s-t)}x(s)^{T} R_{4}x(s)\,ds \\ &\quad{} + \int _{t-\tau _{2}(t)}^{t}e^{2\alpha (s-t)}x(s)^{T} R _{5}x(s)\,ds, \end{aligned} \\ & V_{4} \bigl(t,x(t) \bigr)=\tau _{1} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t}e^{2 \alpha (s-t)}\dot{x}^{T}(s)T_{1} \dot{x}(s)\,ds\,d\theta \\ & \hphantom{V_{4} \bigl(t,x(t) \bigr)}\quad{} +\tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)} \dot{x}^{T}(s)T_{2} \dot{x}(s)\,ds\,d\theta \\ & \hphantom{V_{4} \bigl(t,x(t) \bigr)} \quad{} +\tau \int _{-\tau }^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)}\dot{x} ^{T}(s)T_{3}\dot{x}(s)\,ds\,d\theta \\ & \hphantom{V_{4} \bigl(t,x(t) \bigr)} \quad{} +\eta \int _{-\eta }^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)}\dot{x} ^{T}(s)T_{4}\dot{x}(s)\,ds\,d\theta \\ & \hphantom{V_{4} \bigl(t,x(t) \bigr)} \quad{} +h \int _{-h}^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)}\dot{x}^{T}(s)T _{5}\dot{x}(s)\,ds\,d\theta , \\ & \begin{aligned} V_{5} \bigl(t,x(t) \bigr)&=(\delta _{2}- \delta _{1}) \int _{-\delta _{2}}^{-\delta _{1}} \int _{t+\theta }^{t}e^{2\alpha (s-t)}f^{T} \bigl(x(s) \bigr)L_{1}f \bigl(x(s) \bigr)\,ds\,d \theta \\ & \quad{} +\eta \int _{-\eta }^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)}x^{T}(s)L _{2}x(s)\,ds\,d\theta , \end{aligned} \\ & \begin{aligned} V_{6} \bigl(t,x(t) 
\bigr)&=\frac{\tau _{1}^{2}}{2} \int _{-\tau _{1}}^{0} \int _{ \theta }^{0} \int _{t+\lambda }^{t}e^{2\alpha (s-t)}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\lambda \,d\theta \\ & \quad{} +\frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t}e^{2\alpha (s-t)}\dot{x}^{T}(s)U_{2} \dot{x}(s)\,ds\,d \lambda \,d \theta \\ & \quad{} +\frac{\tau ^{2}}{2} \int _{-\tau }^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t}e^{2\alpha (s-t)}\dot{x}^{T}(s)U_{3} \dot{x}(s)\,ds\,d \lambda \,d \theta . \end{aligned} \end{aligned}$$

Calculating the derivative of \(V(t,x(t))\) along the trajectory of neural network (2.2), it can be deduced that

$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{1} \bigl(t,x(t) \bigr)&=2 \biggl[x^{T}(t)- \int _{t-\eta }^{t}x^{T}(s)\,ds \times M \biggr]P \biggl[-Cx \bigl(t-\eta (t) \bigr)+Af \bigl(x(t) \bigr) \\ &\quad{} +Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr)+D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds+E\dot{x} \bigl(t-h(t) \bigr) \\ &\quad{} +u(t)-Mx(t)+Mx(t-\eta ) \biggr], \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{2} \bigl(t,x(t) \bigr)&\leq x^{T}(t)[ Q_{1}+Q_{2}+Q_{3}]x(t) -e^{-2 \alpha \tau }x^{T}(t-\tau )Q_{1}x(t-\tau ) \\ &\quad{} -e^{-2\alpha \tau _{1}}x^{T}(t-\tau _{1})Q_{2}x(t- \tau _{1}) -e^{-2\alpha \tau _{2}}x^{T}(t-\tau _{2})Q_{3}x(t- \tau _{2}) \\ &\quad{} -2\alpha V_{2} \bigl(t,x(t) \bigr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{3} \bigl(t,x(t) \bigr)&\leq x^{T}(t)[R_{1}+R_{2}+R_{3}+R_{4}+R_{5}]x(t) -e^{-2\alpha \eta }x^{T}(t-\eta )R_{2}x(t-\eta ) \\ &\quad{} -(1-\mu _{3})e^{-2\alpha \eta }x^{T} \bigl(t-\eta (t) \bigr)R _{1}x \bigl(t-\eta (t) \bigr) \\ &\quad{} -(1-\mu )e^{-2\alpha \tau }x^{T} \bigl(t-\tau (t) \bigr)R _{3}x \bigl(t-\tau (t) \bigr) \\ &\quad{} -(1-\mu _{1})e^{-2\alpha \tau _{1}}x^{T} \bigl(t- \tau _{1}(t) \bigr)R_{4}x \bigl(t-\tau _{1}(t) \bigr) \\ &\quad{}-(1-\mu _{2})e^{-2\alpha \tau _{2}}x^{T} \bigl(t-\tau _{2}(t) \bigr)R_{5}x \bigl(t-\tau _{2}(t) \bigr) -2 \alpha V_{3} \bigl(t,x(t) \bigr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \dot{V}_{4} \bigl(t,x(t) \bigr)\leq \tau _{1}^{2}\dot{x}^{T}(t)T_{1}\dot{x}(t)- \tau _{1}e^{-2\alpha \tau _{1}} \int _{t-\tau _{1}}^{t}\dot{x}^{T}(s)T_{1} \dot{x}(s)\,ds \\ & \hphantom{\dot{V}_{4} \bigl(t,x(t) \bigr)}\quad{} +\tau _{2}^{2}\dot{x}^{T}(t)T_{2} \dot{x}(t)-\tau _{2}e^{-2\alpha \tau _{2}} \int _{t-\tau _{2}}^{t}\dot{x}^{T}(s)T_{2} \dot{x}(s)\,ds \\ & \hphantom{\dot{V}_{4} \bigl(t,x(t) \bigr)}\quad{} +\tau ^{2}\dot{x}^{T}(t)T_{3} \dot{x}(t)-\tau e^{-2\alpha \tau } \int _{t-\tau }^{t}\dot{x}^{T}(s)T_{3} \dot{x}(s)\,ds \\ & \hphantom{\dot{V}_{4} \bigl(t,x(t) \bigr)}\quad{} +\eta ^{2}\dot{x}^{T}(t)T_{4} \dot{x}(t)-\eta e^{-2\alpha \eta } \int _{t-\eta }^{t}\dot{x}^{T}(s)T_{4} \dot{x}(s)\,ds \\ & \hphantom{\dot{V}_{4} \bigl(t,x(t) \bigr)}\quad{} +h^{2}\dot{x}^{T}(t)T_{5} \dot{x}(t)-he^{-2\alpha h} \int _{t-h}^{t} \dot{x}^{T}(s)T_{5} \dot{x}(s)\,ds-2\alpha V_{4} \bigl(t,x(t) \bigr), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{5} \bigl(t,x(t) \bigr)&\leq (\delta _{2}-\delta _{1})^{2}f^{T} \bigl(x(t) \bigr)L_{1}f \bigl(x(t) \bigr)+ \eta ^{2}x^{T}(t)L_{2}x(t) \\ &\quad{} -e^{-2\alpha \delta _{2}} \bigl(\delta _{2}(t)-\delta _{1}(t) \bigr) \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f^{T} \bigl(x(s) \bigr)L_{1}f \bigl(x(s) \bigr)\,ds \\ &\quad{} -\eta e^{-2\alpha \eta } \int _{t-\eta }^{t}x^{T}(s)L_{2}x(s) \,ds-2\alpha V_{5} \bigl(t,x(t) \bigr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{6} \bigl(t,x(t) \bigr)&\leq \frac{\tau _{1}^{4}}{4}\dot{x}(t)U_{1} \dot{x}(t)+\frac{\tau _{2}^{4}}{4} \dot{x}(t)U_{2}\dot{x}(t) +\frac{\tau ^{4}}{4}\dot{x}(t)U_{3} \dot{x}(t) \\ &\quad{} -\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds \\ &\quad{} -\frac{\tau _{2}^{2}}{2}e^{-2\alpha \tau _{2}} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{2} \dot{x}(s)\,ds \\ &\quad{} -\frac{\tau ^{2}}{2}e^{-2\alpha \tau } \int _{-\tau }^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{3} \dot{x}(s)\,ds-2\alpha V_{6} \bigl(t,x(t) \bigr). \end{aligned} \end{aligned}$$

For any matrix \(G_{1}\) with \(\begin{bmatrix} T_{1}&G_{1} \\ *&T_{1} \end{bmatrix} \geq 0\), by using Lemmas 1 and 4, we can obtain the following:

$$\begin{aligned} &-\tau _{1}e ^{-2\alpha \tau _{1}} \int _{t-\tau _{1}}^{t}\dot{x}^{T}(s)T _{1}\dot{x}(s)\,ds \\ &\quad =-\tau _{1}e^{-2\alpha \tau _{1}} \biggl[ \int _{t-\tau _{1}}^{t-\tau _{1}(t)} \dot{x}^{T}(s)T_{1} \dot{x}(s)\,ds + \int _{t-\tau _{1}(t)}^{t}\dot{x}^{T}(s)T _{1}\dot{x}(s)\,ds \biggr] \\ &\quad \leq e^{-2\alpha \tau _{1}} \biggl\{ -\frac{\tau _{1}}{\tau _{1}-\tau _{1}(t)} \bigl[ \vartheta _{1}^{T}(t)T_{1}\vartheta _{1} +3 \vartheta _{2}^{T}(t)T_{1} \vartheta _{2}(t) \bigr] \\ &\quad\quad{} -\frac{\tau _{1}}{\tau _{1}(t)} \bigl[\vartheta _{3}^{T}(t)T_{1} \vartheta _{3}(t)+3\vartheta _{4}^{T}(t)T_{1} \vartheta _{4}(t) \bigr] \biggr\} \\ &\quad \leq e^{-2\alpha \tau _{1}} \bigl[-\vartheta _{1}^{T}(t)T_{1} \vartheta _{1}(t)-3 \vartheta _{2}^{T}(t)T_{1} \vartheta _{2}(t)- \vartheta _{3}^{T}(t)T_{1} \vartheta _{3}(t) \\ &\quad \quad{} -3\vartheta _{4}^{T}(t)T_{1} \vartheta _{4}(t)-2\vartheta _{1} ^{T}(t)G_{1} \vartheta _{3}(t)-6\vartheta _{2}^{T}(t)G_{1} \vartheta _{4}(t) \bigr], \end{aligned}$$


$$\begin{aligned}& \vartheta _{1}(t)=x \bigl(t-\tau _{1}(t) \bigr)-x(t-\tau _{1}); \\& \vartheta _{2}(t)=x \bigl(t- \tau _{1}(t) \bigr)+x(t-\tau _{1})-\frac{2}{\tau _{1}-\tau _{1}(t)} \int _{t-\tau _{1}}^{t-\tau _{1}(t)}x(s)\,ds; \\& \vartheta _{3}(t)=x(t)-x \bigl(t-\tau _{1}(t) \bigr); \qquad \vartheta _{4}(t)=x(t)+x \bigl(t-\tau _{1}(t) \bigr)- \frac{2}{\tau _{1}(t)} \int _{t-\tau _{1}(t)}^{t}x(s)\,ds. \end{aligned}$$

Similarly, it holds that

$$\begin{aligned}& -\tau _{2} e^{-2\alpha \tau _{2}} \int _{t-\tau _{2}}^{t}\dot{x}^{T}(s)T _{2}\dot{x}(s)\,ds \\& \quad \leq e^{-2\alpha \tau _{2}} \bigl[-\vartheta _{5}^{T}(t)T_{2} \vartheta _{5}(t)-3 \vartheta _{6}^{T}(t)T_{2} \vartheta _{6}(t)- \vartheta _{7}^{T}(t)T_{2} \vartheta _{7}(t)-3\vartheta _{8}^{T}(t)T_{2} \vartheta _{8}(t) \\& \quad \quad{} -2\vartheta _{5}^{T}(t)G_{2} \vartheta _{7}(t)-6\vartheta _{6} ^{T}(t)G_{2} \vartheta _{8}(t) \bigr], \end{aligned}$$
$$\begin{aligned}& -\tau e^{-2\alpha \tau } \int _{t-\tau }^{t}\dot{x}^{T}(s)T_{3} \dot{x}(s)\,ds \\& \quad \leq e^{-2\alpha \tau } \bigl[-\vartheta _{9}^{T}(t)T_{3} \vartheta _{9}(t)-3 \vartheta _{10}^{T}(t)T_{3} \vartheta _{10}(t)- \vartheta _{11}^{T}(t)T _{3}\vartheta _{11}(t)-3\vartheta _{12}^{T}(t)T_{3} \vartheta _{12}(t) \\& \quad \quad{} -2\vartheta _{9}^{T}(t)G_{3} \vartheta _{11}(t)-6\vartheta _{10} ^{T}(t)G_{3} \vartheta _{12}(t) \bigr], \end{aligned}$$
$$\begin{aligned}& -\eta e^{-2\alpha \eta } \int _{t-\eta }^{t}\dot{x}^{T}(s)T_{4} \dot{x}(s)\,ds \\& \quad \leq e^{-2\alpha \eta } \bigl[-\vartheta _{13}^{T}(t)T_{4} \vartheta _{13}(t)-3 \vartheta _{14}^{T}(t)T_{4} \vartheta _{14}(t)- \vartheta _{15}^{T}(t)T _{4}\vartheta _{15}(t)-3\vartheta _{16}^{T}(t)T_{4} \vartheta _{16}(t) \\& \quad \quad{} -2\vartheta _{13}^{T}(t)G_{4} \vartheta _{15}(t)-6\vartheta _{14} ^{T}(t)G_{4} \vartheta _{16}(t) \bigr], \end{aligned}$$
where


$$\begin{aligned}& \vartheta _{5}(t)=x \bigl(t-\tau _{2}(t) \bigr)-x(t-\tau _{2}); \\& \vartheta _{6}(t)=x \bigl(t- \tau _{2}(t) \bigr)+x(t-\tau _{2})-\frac{2}{\tau _{2}-\tau _{2}(t)} \int _{t-\tau _{2}}^{t-\tau _{2}(t)}x(s)\,ds; \\& \vartheta _{7}(t)=x(t)-x \bigl(t-\tau _{2}(t) \bigr); \quad\quad \vartheta _{8}(t)=x(t)+x \bigl(t-\tau _{2}(t) \bigr)- \frac{2}{\tau _{2}(t)} \int _{t-\tau _{2}(t)} ^{t}x(s)\,ds; \\& \vartheta _{9}(t)=x \bigl(t-\tau (t) \bigr)-x(t-\tau ); \\& \vartheta _{10}(t)=x \bigl(t- \tau (t) \bigr)+x(t-\tau )- \frac{2}{\tau -\tau (t)} \int _{t-\tau }^{t-\tau (t)}x(s)\,ds; \quad\quad \vartheta _{11}(t)=x(t)-x \bigl(t-\tau (t) \bigr); \\& \vartheta _{12}(t)=x(t)+x \bigl(t- \tau (t) \bigr)-\frac{2}{\tau (t)} \int _{t-\tau (t)}^{t}x(s)\,ds; \quad\quad \vartheta _{13}(t)=x \bigl(t-\eta (t) \bigr)-x(t-\eta ); \\& \vartheta _{14}(t)=x \bigl(t- \eta (t) \bigr)+x(t-\eta )- \frac{2}{\eta -\eta (t)} \int _{t-\eta }^{t-\eta (t)}x(s)\,ds; \quad\quad \vartheta _{15}(t)=x(t)-x \bigl(t-\eta (t) \bigr); \\& \vartheta _{16}(t)=x(t)+x \bigl(t- \eta (t) \bigr)-\frac{2}{\eta (t)} \int _{t-\eta (t)}^{t}x(s)\,ds. \end{aligned}$$

Applying Lemma 1 and Newton–Leibniz formula, we have

$$\begin{aligned} &-he^{-2\alpha h} \int _{t-h}^{t}\dot{x}^{T}(s)T_{5} \dot{x}(s)\,ds \\ &\quad \leq -e^{-2\alpha h} \biggl[ \int _{t-h}^{t}\dot{x}(s)\,ds \biggr] ^{T}T_{5} \biggl[ \int _{t-h}^{t}\dot{x}(s)\,ds \biggr] \\ &\quad \leq \bigl[x(t)-x(t-h) \bigr]^{T} \bigl[-e^{-2\alpha h}T_{5} \bigr] \bigl[x(t)-x(t-h) \bigr]. \end{aligned}$$
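The Jensen-type bound applied here, \(h\int _{t-h}^{t}\dot{x}^{T}(s)T_{5}\dot{x}(s)\,ds\geq [\int _{t-h}^{t}\dot{x}(s)\,ds]^{T}T_{5}[\int _{t-h}^{t}\dot{x}(s)\,ds]\), can be verified on a discretized integral. The grid size, state dimension, and positive definite matrix below are arbitrary illustrative choices, not the paper's data:

```python
import numpy as np

# Discretized check of the Jensen-type integral inequality:
# h * integral(xdot^T T5 xdot) >= [integral(xdot)]^T T5 [integral(xdot)],
# which holds for any T5 > 0 by the Cauchy-Schwarz inequality.
rng = np.random.default_rng(0)
n, N, h = 3, 200, 0.6
A = rng.standard_normal((n, n))
T5 = A @ A.T + n * np.eye(n)            # symmetric positive definite
xdot = rng.standard_normal((N, n))      # samples of xdot(s) on [t-h, t]
w = h / N                               # uniform Riemann weight
lhs = h * sum(w * (v @ T5 @ v) for v in xdot)   # h * integral of xdot^T T5 xdot
v_int = w * xdot.sum(axis=0)                    # integral of xdot
rhs = v_int @ T5 @ v_int
assert lhs >= rhs - 1e-9
```

The same discrete Cauchy–Schwarz argument underlies all the single-integral bounds in this section.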

Similarly, it holds that

$$\begin{aligned} &{-e^{-2\alpha \delta _{2}} \bigl(\delta _{2}(t)-\delta _{1}(t) \bigr) \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f^{T} \bigl(x(s) \bigr)L_{1}f \bigl(x(s) \bigr)\,ds} \\ &\quad \leq -e^{-2\alpha \delta _{2}} \biggl[ \int _{t-\delta _{2}(t)} ^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds \biggr]^{T} L_{1} \biggl[ \int _{t-\delta _{2}(t)} ^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds \biggr], \end{aligned}$$
$$\begin{aligned} &{-\eta e^{-2\alpha \eta } \int _{t-\eta }^{t}x^{T}(s)L_{2}x(s) \,ds} \\ &\quad \leq -e^{-2\alpha \eta } \biggl[ \int _{t-\eta }^{t}x(s)\,ds \biggr] ^{T}L_{2} \biggl[ \int _{t-\eta }^{t}x(s)\,ds \biggr]. \end{aligned}$$

The second term of Eq. (3.8) can be written as

$$\begin{aligned} &-\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\theta \\ &\quad =-\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{- \tau _{1}(t)} \int _{t+\theta }^{t} \dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d \theta \\ &\quad \quad{} -\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}(t)} ^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\theta . \end{aligned}$$

By Lemma 3, we obtain

$$\begin{aligned} &-\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\theta \\ &\quad \leq -\frac{\tau _{1}^{2}}{\tau _{1}^{2}-\tau _{1}^{2}(t)}e^{-2\alpha \tau _{1}} \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t} \dot{x}(s)\,ds\,d\theta \biggr]^{T}U_{1} \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t} \dot{x}(s)\,ds\,d\theta \biggr] \\ &\quad \quad{} -\frac{\tau _{1}^{2}}{\tau _{1}^{2}(t)}e^{-2\alpha \tau _{1}} \biggl[ \int _{-\tau _{1}(t)}^{0} \int _{t+\theta }^{t} \dot{x}(s)\,ds \,d\theta \biggr]^{T}U_{1} \biggl[ \int _{-\tau _{1}(t)}^{0} \int _{t+ \theta }^{t} \dot{x}(s)\,ds \,d\theta \biggr]. \end{aligned}$$

Applying Lemma 4, for any matrix \(V_{1}\) such that \(\bigl[{\begin{smallmatrix} U_{1} & V_{1} \\ * & U_{1} \end{smallmatrix}}\bigr]\geq 0\), the above inequality becomes:

$$\begin{aligned} &-\frac{\tau _{1}^{2}}{2} e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\theta \\ &\quad \leq e^{-2\alpha \tau _{1}} \biggl\{ - \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr]^{T} U_{1} \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d \theta \biggr] \biggr\} \\ &\quad \quad{} +e^{-2\alpha \tau _{1}} \biggl\{ - \biggl[ \int _{-\tau _{1}(t)} ^{0} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr]^{T} 2V_{1} \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr] \biggr\} \\ &\quad\quad{} +e^{-2\alpha \tau _{1}} \biggl\{ - \biggl[ \int _{-\tau _{1}(t)} ^{0} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr]^{T} U_{1} \biggl[ \int _{-\tau _{1}(t)}^{0} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr] \biggr\} \\ &\quad \leq e^{-2\alpha \tau } \bigl(-\varsigma _{1}^{T}U_{1} \varsigma _{1}-2\varsigma _{1}^{T}V_{1} \varsigma _{2}-\varsigma _{2}^{T} U _{1} \varsigma _{2} \bigr) \\ &\quad =\xi ^{T}(t)e^{-2\alpha \tau } \bigl[-\varGamma _{1}^{T}(t)U_{1}\varGamma _{1}(t)-2 \varGamma _{2}^{T}(t)V_{1}\varGamma _{1}(t)-\varGamma _{2}^{T}(t)U_{1} \varGamma _{2}(t) \bigr] \xi (t), \end{aligned}$$
where


$$\begin{aligned} &\varsigma _{1}= \bigl(\tau _{1}-\tau _{1}(t) \bigr)x(t)- \int _{t-\tau _{1}}^{t-\tau _{1}(t)}x(s)\,ds;\quad\quad \varsigma _{2}=\tau _{1}(t)x(t)- \int _{t-\tau _{1}(t)} ^{t}x(s)\,ds; \\ &\varGamma _{1}(t)= \bigl(\tau _{1}-\tau _{1}(t) \bigr) (e_{1}-e_{20});\quad \quad \varGamma _{2}(t)=\tau _{1}(t) (e_{1}-e_{17}). \end{aligned}$$

Similarly, by Lemmas 3 and 4, we have

$$\begin{aligned}& -\frac{\tau _{2}^{2}}{2} e^{-2\alpha \tau _{2}} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{2} \dot{x}(s)\,ds\,d\theta \\& \quad \leq e^{-2\alpha \tau } \bigl(-\varsigma _{3}^{T}U_{2} \varsigma _{3}-2 \varsigma _{3}^{T}V_{2} \varsigma _{4}-\varsigma _{4}^{T} U_{2} \varsigma _{4} \bigr) \\& \quad =\xi ^{T}(t)e^{-2\alpha \tau } \bigl[-\varGamma _{3}^{T}(t)U_{2}\varGamma _{3}(t)-2 \varGamma _{4}^{T}(t)V_{2}\varGamma _{3}(t)-\varGamma _{4}^{T}(t)U_{2} \varGamma _{4}(t) \bigr] \xi (t), \end{aligned}$$
$$\begin{aligned}& -\frac{\tau ^{2}}{2} e^{-2\alpha \tau } \int _{-\tau }^{0} \int _{t+ \theta }^{t}\dot{x}^{T}(s)U_{3} \dot{x}(s)\,ds\,d\theta \\& \quad \leq e^{-2\alpha \tau } \bigl(-\varsigma _{5}^{T}U_{3} \varsigma _{5}-2 \varsigma _{5}^{T}V_{3} \varsigma _{6}-\varsigma _{6}^{T} U_{3} \varsigma _{6} \bigr) \\& \quad =\xi ^{T}(t)e^{-2\alpha \tau } \bigl[-\varGamma _{5}^{T}(t)U_{3}\varGamma _{5}(t)-2 \varGamma _{6}^{T}(t)V_{3}\varGamma _{5}(t)-\varGamma _{6}^{T}(t)U_{3} \varGamma _{6}(t) \bigr] \xi (t), \end{aligned}$$
where


$$\begin{aligned} &\varsigma _{3}= \bigl(\tau _{2}-\tau _{2}(t) \bigr)x(t)- \int _{t-\tau _{2}}^{t-\tau _{2}(t)}x(s)\,ds;\quad \quad \varsigma _{4}=\tau _{2}(t)x(t)- \int _{t-\tau _{2}(t)} ^{t}x(s)\,ds; \\ &\varGamma _{3}(t)= \bigl(\tau _{2}-\tau _{2}(t) \bigr) (e_{1}-e_{21});\quad \quad \varGamma _{4}(t)=\tau _{2}(t) (e_{1}-e_{18}); \\ &\varsigma _{5}= \bigl(\tau -\tau (t) \bigr)x(t)- \int _{t-\tau }^{t-\tau (t)}x(s)\,ds; \quad \quad \varsigma _{6}=\tau (t)x(t)- \int _{t-\tau (t)}^{t}x(s)\,ds; \\ &\varGamma _{5}(t)= \bigl(\tau -\tau (t) \bigr) (e_{1}-e_{19}); \quad \quad \varGamma _{6}(t)=\tau (t) (e_{1}-e_{16}). \end{aligned}$$

By using Assumption 2, we can obtain the following:

$$ \bigl[f_{i} \bigl(x(t) \bigr)-l_{i}^{-}x(t) \bigr] \bigl[f_{i} \bigl(x(t) \bigr)-l_{i}^{+}x(t) \bigr] \leq 0 \quad (i=1,2,\ldots,n), $$

which can be compactly written as

$$\begin{aligned}& \begin{bmatrix} x(t)\\ f(x(t)) \end{bmatrix} ^{T} \begin{bmatrix} K_{1}&-K_{2}\\ *&I \end{bmatrix} \begin{bmatrix} x(t)\\ f(x(t)) \end{bmatrix} \leq 0, \\& \begin{bmatrix} x(t-\tau _{1}(t)-\tau _{2}(t))\\ f(x(t-\tau _{1}(t)-\tau _{2}(t))) \end{bmatrix} ^{T} \begin{bmatrix} K_{1}&-K_{2}\\ *&I \end{bmatrix} \begin{bmatrix} x(t-\tau _{1}(t)-\tau _{2}(t))\\ f(x(t-\tau _{1}(t)-\tau _{2}(t))) \end{bmatrix} \leq 0. \end{aligned}$$

Then for any positive diagonal matrices \(\beta _{1}=\operatorname{diag}(\beta _{1s}, \beta _{2s},\ldots,\beta _{ns})\) and \(\beta _{2}=\operatorname{diag}(\tilde{\beta }_{1s},\tilde{\beta }_{2s},\ldots,\tilde{\beta }_{ns})\), the following inequalities hold:

$$\begin{aligned}& \begin{bmatrix} x(t)\\ f(x(t)) \end{bmatrix} ^{T} \begin{bmatrix} K_{1}\beta _{1}&-K_{2}\beta _{1}\\ *&\beta _{1} \end{bmatrix} \begin{bmatrix} x(t)\\ f(x(t)) \end{bmatrix} \leq 0, \end{aligned}$$
$$\begin{aligned}& \begin{bmatrix} x(t-\tau _{1}(t)-\tau _{2}(t))\\ f(x(t-\tau _{1}(t)-\tau _{2}(t))) \end{bmatrix} ^{T} \begin{bmatrix} K_{1}\beta _{2}&-K_{2}\beta _{2}\\ *&\beta _{2} \end{bmatrix} \begin{bmatrix} x(t-\tau _{1}(t)-\tau _{2}(t))\\ f(x(t-\tau _{1}(t)-\tau _{2}(t))) \end{bmatrix} \leq 0. \end{aligned}$$
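In the scalar case the sector condition reads \((f(x)-l^{-}x)(f(x)-l^{+}x)\leq 0\); for instance, \(\tanh\) lies in the sector \([0,1]\). A quick numerical illustration (the sector bounds and activation below are illustrative only):

```python
import numpy as np

# Scalar sanity check of the sector condition: for f in the sector [lm, lp],
# the quadratic form lm*lp*x^2 - (lm+lp)*x*f(x) + f(x)^2 is nonpositive,
# since it equals (f(x) - lm*x)(f(x) - lp*x). tanh lies in [0, 1].
lm, lp = 0.0, 1.0
K1, K2 = lm * lp, (lm + lp) / 2.0       # scalar analogues of K1, K2
xs = np.linspace(-5.0, 5.0, 1001)
fs = np.tanh(xs)
q = K1 * xs**2 - 2.0 * K2 * xs * fs + fs**2
assert q.max() <= 1e-12
```

The matrix inequalities above stack this scalar estimate over all \(n\) neurons, with the diagonal weights \(\beta _{1}\), \(\beta _{2}\) left free for the LMI solver.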

Note that

$$ \begin{aligned} &\dot{x}(t) +Cx \bigl(t-\eta (t) \bigr)-Af \bigl(x(t) \bigr)-Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr) \\ &\quad{} -D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds-E\dot{x} \bigl(t-h(t) \bigr)-u(t)=0. \end{aligned} $$

For any appropriately dimensioned matrix \(S_{1}\), the following is satisfied:

$$\begin{aligned}& 2x ^{T}(t)S_{1}\dot{x}(t)+2x^{T}(t)S_{1}Cx \bigl(t-\eta (t) \bigr)-2x^{T}(t)S_{1}Af \bigl(x(t) \bigr) \\& \quad{} -2x^{T}(t)S_{1}Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr)-2x^{T}(t)S_{1}D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds \\& \quad{} -2x^{T}(t)S_{1}E\dot{x} \bigl(t-h(t) \bigr)-2x^{T}(t)S_{1}u(t)=0. \end{aligned}$$

Similarly, we have

$$\begin{aligned} &2 \dot{x}^{T}(t)S_{2}\dot{x}(t)+2\dot{x}^{T}(t)S_{2}Cx \bigl(t-\eta (t) \bigr)-2 \dot{x}^{T}(t)S_{2}Af \bigl(x(t) \bigr) \\ &\quad{} -2\dot{x}^{T}(t)S_{2}Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr)-2\dot{x}^{T}(t)S _{2}D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds \\ &\quad{} -2\dot{x}^{T}(t)S_{2}E\dot{x} \bigl(t-h(t) \bigr)-2 \dot{x}^{T}(t)S_{2}u(t)=0, \end{aligned}$$
$$\begin{aligned} &2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}\dot{x}(t)+2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}Cx \bigl(t- \eta (t) \bigr)-2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}Af \bigl(x(t) \bigr) \\ &\quad{} -2\dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr) -2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}E\dot{x} \bigl(t-h(t) \bigr) \\ &\quad{} -2\dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds -2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}u(t)=0. \end{aligned}$$

In addition, it follows from Lemma 2 that for any matrices \(H>0\), \(N>0\),

$$\begin{aligned} &2\dot{x}^{T}(t)S_{2}u(t)\leq \dot{x}^{T}(t)H \dot{x}(t)+u^{T}(t)S_{2}H ^{-1}S_{2}u(t), \end{aligned}$$
$$\begin{aligned} &2\dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}u(t) \leq \dot{x}^{T} \bigl(t-h(t) \bigr)N \dot{x} \bigl(t-h(t) \bigr)+u^{T}(t)S_{3}N^{-1}S_{3}u(t). \end{aligned}$$
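The two bounds above are completion-of-squares inequalities: for any \(H>0\) and any \(S\), \(2a^{T}Su\leq a^{T}Ha+u^{T}S^{T}H^{-1}Su\). A quick numerical check with random illustrative data, not the paper's matrices:

```python
import numpy as np

# Completion-of-squares check: expanding
# (H^{1/2} a - H^{-1/2} S u)^T (H^{1/2} a - H^{-1/2} S u) >= 0
# gives 2 a^T S u <= a^T H a + u^T S^T H^{-1} S u for any H > 0.
rng = np.random.default_rng(1)
n = 4
a = rng.standard_normal(n)
u = rng.standard_normal(n)
S = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)   # symmetric positive definite
lhs = 2.0 * (a @ S @ u)
rhs = a @ H @ a + u @ S.T @ np.linalg.inv(H) @ S @ u
assert lhs <= rhs + 1e-9
```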

From Eqs. (3.2)–(3.27), if we let \(H=S_{2}\), \(N=S _{3}\), we can derive that

$$\begin{aligned} &\dot{V} \bigl(t,x(t) \bigr)+2\alpha V \bigl(t,x(t) \bigr) \\ & \quad \leq {-x^{T}(t)Q_{1}x(t)}+x^{T}(t)[2P-2S_{1}]u(t)+u^{T}(t)S_{3}u(t)+ \xi ^{T}(t)\varPhi \xi (t), \end{aligned}$$
where


$$\begin{aligned}& \xi (t)= \biggl[ x(t), x(t-\tau ), x(t-\tau _{1}), x(t-\tau _{2}), x(t-\eta ), x(t-h), x \bigl(t-\tau (t) \bigr), \\& \hphantom{\xi (t)=}{}x \bigl(t-\tau _{1}(t) \bigr),x \bigl(t-\tau _{2}(t) \bigr), x \bigl(t-\eta (t) \bigr), f \bigl(x(t) \bigr), f \bigl(x \bigl(t- \tau (t) \bigr) \bigr), \\& \hphantom{\xi (t)=}{} \int _{t-\eta }^{t}x(s)\,ds, \frac{1}{\eta -\eta (t)} \int _{t-\eta } ^{t-\eta (t)}x(s)\,ds, \frac{1}{\eta (t)} \int _{t-\eta (t)}^{t}x(s)\,ds, \\& \hphantom{\xi (t)=}{} \frac{1}{\tau (t)} \int _{t-\tau (t)}^{t}x(s)\,ds, \frac{1}{\tau _{1}(t)} \int _{t-\tau _{1}(t)}^{t}x(s)\,ds, \frac{1}{\tau _{2}(t)} \int _{t-\tau _{2}(t)}^{t}x(s)\,ds, \\& \hphantom{\xi (t)=}{}\frac{1}{\tau -\tau (t)} \int _{t-\tau }^{t-\tau (t)}x(s)\,ds, \frac{1}{ \tau _{1}-\tau _{1}(t)} \int _{t-\tau _{1}}^{t-\tau _{1}(t)}x(s)\,ds, \\& \hphantom{\xi (t)=}{}\frac{1}{\tau _{2}-\tau _{2}(t)} \int _{t-\tau _{2}}^{t-\tau _{2}(t)}x(s)\,ds, \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds, \dot{x}(t), \dot{x} \bigl(t-h(t) \bigr), u(t) \biggr]^{T}, \\& \begin{aligned} \varPhi ={} &\varPsi -e^{-2\alpha \tau } \bigl[-\varGamma _{1}^{T}(t)U_{1}\varGamma _{1}(t)-2 \varGamma _{2}^{T}(t)V_{1}\varGamma _{1}(t)-\varGamma _{2}^{T}(t)U_{1} \varGamma _{2}(t) \\ &{}-\varGamma _{3}^{T}(t)U_{2}\varGamma _{3}(t)-2\varGamma _{4}^{T}(t)V_{2} \varGamma _{3}(t)-\varGamma _{4}^{T}(t)U_{2} \varGamma _{4}(t) \\ &{}-\varGamma _{5}^{T}(t)U_{3}\varGamma _{5}(t)-2\varGamma _{6}^{T}(t)V_{3} \varGamma _{5}(t)-\varGamma _{6}^{T}(t)U_{3} \varGamma _{6}(t) \bigr]. \end{aligned} \end{aligned}$$

Letting \(\tau _{1}(t)\) and \(\tau _{2}(t)\) take their boundary values \(0\), \(\tau _{1}\) and \(0\), \(\tau _{2}\), respectively, we can get

$$ \textstyle\begin{cases} \varPhi _{1} = \varPhi (0,0), \\ \varPhi _{2} = \varPhi (0,\tau _{2}), \\ \varPhi _{3} = \varPhi (\tau _{1},0), \\ \varPhi _{4} = \varPhi (\tau _{1},\tau _{2}). \end{cases} $$

From Eq. (3.2) it is easy to deduce that

$$\begin{aligned} \lambda _{1} \bigl\vert x(t) \bigr\vert ^{2}\leq V \bigl(t,x(t) \bigr)\leq \lambda _{2} \bigl\Vert x(t) \bigr\Vert _{\tau ^{*}}^{2}, \end{aligned}$$
where


$$\begin{aligned} \bigl\Vert x(t) \bigr\Vert _{\tau ^{*}}=\sup_{\theta \in [-\tau ^{*},0]} \bigl\{ \bigl\vert x(t+ \theta ) \bigr\vert , \bigl\vert \dot{x}(t+\theta ) \bigr\vert \bigr\} \end{aligned}$$
and


$$\begin{aligned} &\lambda _{1}=\lambda _{\min }(P), \\ & \begin{aligned} \lambda _{2}&=\lambda _{\max }(P)+\tau \lambda _{\max }(Q_{1}) +\tau _{1} \lambda _{\max }(Q_{2})+\tau _{2}\lambda _{\max }(Q_{3}) \\ &\quad{} +\eta \lambda _{\max }(R_{1})+\eta \lambda _{\max }(R_{2})+\tau \lambda _{\max }(R_{3})+ \tau _{1}\lambda _{\max }(R_{4}) \\ &\quad {}+\tau _{2}\lambda _{\max }(R_{5})+\tau _{1}^{2}\lambda _{\max }(T_{1}) +\tau _{2}^{2}\lambda _{\max }(T_{2})+\tau ^{2}\lambda _{\max }(T_{3}) \\ &\quad {}+\eta ^{2}\lambda _{\max }(T_{4})+h^{2} \lambda _{\max }(T_{5})+\frac{ \tau _{1}^{3}}{2}\lambda _{\max }(U_{1}) +\frac{\tau _{2}^{3}}{2}\lambda _{\max }(U_{2}) \\ &\quad {}+\frac{\tau ^{3}}{2}\lambda _{\max }(U_{3})+\eta ^{2}\lambda _{\max }(L _{2}) +\max _{j\in \{1,2,\ldots,n\}}F_{j}(\delta _{2}-\delta _{1})^{2}\lambda _{\max }(L_{1}). \end{aligned} \end{aligned}$$

Then according to the LMI (3.1) and Eq. (3.29), we have

$$\begin{aligned}& \dot{V} \bigl(t,x(t) \bigr)+2\alpha V \bigl(t,x(t) \bigr) \\& \quad \leq {-x^{T}(t)Q_{1}x(t)}+x^{T}(t)[2P-2S_{1}]u(t)+u^{T}(t)S_{3}u(t) \\& \quad \leq {-\lambda _{\min }(Q_{1})} \bigl\vert x(t) \bigr\vert ^{2}+2 \bigl\vert x(t) \bigr\vert \cdot \bigl\vert (P-S_{1}) \bigr\vert \cdot \bigl\vert u(t) \bigr\vert +\lambda _{\max }(S_{3}) \bigl\vert u(t) \bigr\vert ^{2} \\& \quad \leq {-\lambda _{\min }(Q_{1})} \bigl\vert x(t) \bigr\vert ^{2}+2 \bigl\vert x(t) \bigr\vert \cdot \bigl\vert (P-S_{1}) \bigr\vert \cdot \varGamma _{u}+\lambda _{\max }(S_{3})\varGamma _{u}^{2} \\& \quad \leq {-\lambda _{\min }(Q_{1})} \bigl( \bigl\vert x(t) \bigr\vert -\phi _{1} \bigr) \bigl( \bigl\vert x(t) \bigr\vert - \phi _{2} \bigr), \end{aligned}$$
where


$$\begin{aligned} &\phi _{1}=\frac{ \vert (P-S_{1}) \vert +\sqrt{ \vert (P-S_{1}) \vert ^{2}+{\lambda _{\min }(Q _{1})}\lambda _{\max }(S_{3})}}{{\lambda _{\min }(Q_{1})}}\varGamma _{u}, \\ &\phi _{2}=\frac{ \vert (P-S_{1}) \vert -\sqrt{ \vert (P-S_{1}) \vert ^{2}+{\lambda _{\min }(Q _{1})}\lambda _{\max }(S_{3})}}{{\lambda _{\min }(Q_{1})}}\varGamma _{u}. \end{aligned}$$
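The factorization in the last step above can be checked numerically: with \(\lambda _{q}=\lambda _{\min }(Q_{1})\), \(p=\vert P-S_{1}\vert \), and \(s=\lambda _{\max }(S_{3})\), the quadratic \(-\lambda _{q}r^{2}+2p\varGamma _{u}r+s\varGamma _{u}^{2}\) equals \(-\lambda _{q}(r-\phi _{1})(r-\phi _{2})\). A sketch with illustrative numbers, not values from the example:

```python
import numpy as np

# Verify -lq*r^2 + 2*p*Gu*r + s*Gu^2 == -lq*(r - phi1)*(r - phi2)
# for the roots phi1, phi2 defined in the proof. Numbers are illustrative.
lq, p, s, Gu = 2.0, 0.7, 0.5, 1.3
disc = np.sqrt(p**2 + lq * s)
phi1 = (p + disc) / lq * Gu
phi2 = (p - disc) / lq * Gu
r = np.linspace(0.0, 5.0, 101)
lhs = -lq * r**2 + 2.0 * p * Gu * r + s * Gu**2
rhs = -lq * (r - phi1) * (r - phi2)
assert np.allclose(lhs, rhs)
assert phi2 <= 0.0 <= phi1   # phi2 is never positive, as noted below
```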

Note that \(\phi _{2}\leq 0\), and \(\phi _{2}=0\) if and only if the external input \(u=0\). Hence, one may deduce that when \(\vert x(t)\vert >\phi _{1}\), i.e., \(x\notin S\), it holds that

$$\begin{aligned} &\dot{V} \bigl(t,x(t) \bigr)+2\alpha V \bigl(t,x(t) \bigr)\leq 0, \quad t\in R_{+},\quad\quad {V} \bigl(t,x(t) \bigr) \leq {V}(0,\phi )e^{-2\alpha t}, \quad t\in R_{+}, \\ &\lambda _{1} \bigl\vert x(t,0,\phi ) \bigr\vert ^{2} \leq V \bigl(t,x(t) \bigr)\leq V(0,\phi )e^{-2 \alpha t}\leq \lambda _{2}e^{-2\alpha t} \Vert \phi \Vert _{\tau *}^{2}. \end{aligned}$$

Hence when \(x\notin S \), we finally obtain that

$$\begin{aligned} & \bigl\vert x(t,0,\phi ) \bigr\vert \leq \sqrt{\frac{\lambda _{2}}{\lambda _{1}}} \Vert \phi \Vert _{\tau *}e^{-\alpha t}, \quad t\in R_{+}. \end{aligned}$$

Note that \(S\) is a ball centered at the origin. Hence, when \({x\notin S}\), setting \(M=\sqrt{\frac{ \lambda _{2}}{\lambda _{1}}}\Vert \phi \Vert _{\tau *}\), we have

$$\begin{aligned} \inf_{\tilde{x}\in S} \bigl\{ \bigl\vert x(t,0,\phi )-\tilde{x} \bigr\vert \bigr\} \leq \bigl\vert x(t,0, \phi )-0 \bigr\vert \leq Me^{-\alpha t}, t\in R_{+}. \end{aligned}$$

According to Definition 2, we can get that system (2.2) is globally exponentially dissipative with positively invariant and globally exponentially attractive set S. This completes the proof. □

Remark 4

In the proof of Theorem 3.1, an LMI-based condition for the global exponential dissipativity of system (2.2) was given. It is worth mentioning that, in order to derive the globally exponentially attractive set \(S\) and guarantee the practicability of the dissipativity criteria, we chose the two special but suitable matrices \(H=S_{2}\) and \(N = S_{3}\) in (3.28). From Theorem 3.1, we can see that the globally exponentially attractive set \(S\) is obtained directly from the LMIs.

Remark 5

In Theorem 3.1, we first transform system (2.1) into system (2.2) by using a convex combination technique and Filippov’s theorem. In addition, we introduce double and triple integrals in the LKF to account for the leakage, discrete, and two additive time-varying delays. This problem was not solved in [29, 30, 40]. Constructing double and triple integral terms of this form in the LKF is a recent technique for obtaining less conservative results.

If we take the exponential dissipativity rate index \(\alpha =0\) in Theorem 3.1 and replace the exponential-type Lyapunov–Krasovskii functional accordingly, then we can obtain the following theorem.

Theorem 3.2

Under the same conditions as in Theorem 3.1, system (2.2) is globally dissipative, and the set \(S\) given in Theorem 3.1 is positively invariant and globally attractive if the following LMI holds:

$$ \varPhi _{k}=\varTheta -\varUpsilon _{k}^{T} \begin{bmatrix} U_{1}&V_{1}&0&0&0&0 \\ *&U_{1}&0&0&0&0 \\ *&*&U_{2}&V_{2}&0&0 \\ *&*&*&U_{2}&0&0 \\ *&*&*&*&U_{3}&V_{3} \\ *&*&*&*&*&U_{3} \end{bmatrix} \varUpsilon _{k}< 0 \quad (k=1,2,3,4), $$

where\(\varTheta =[\varTheta ]_{l\times n}\) (\(l,n=1,2,\ldots,25\)), \(\varTheta _{1,1}=-PM-M ^{T}P+2Q_{1}+Q_{2}+Q_{3}+R_{1}+R_{2}+R_{3}+R_{4}+R_{5} -4T_{1}-4T_{2}-4T _{3}-4T_{4}-4T_{5}+\eta ^{2}L_{2}-K_{1}\beta _{1}\), \(\varTheta _{1,2}=-2G _{3}\), \(\varTheta _{1,3}=-2G_{1}\), \(\varTheta _{1,4}=-2G_{2}\), \(\varTheta _{1,5}=PM-2G _{4}\), \(\varTheta _{1,6}=T_{5}\), \(\varTheta _{1,7}=-2(T_{3}+2G_{3})\), \(\varTheta _{1,8}=-2(T_{1}+2G_{1})\), \(\varTheta _{1,9}=-2(T_{2}+2G_{2})\), \(\varTheta _{1,10}=-PC+S_{1}C-2(T_{4}+2G_{4})\), \(\varTheta _{1,11}={PA-S_{1}A+K_{2} \beta _{1}}\), \(\varTheta _{1,12}=PB-S_{1}B\), \(\varTheta _{1,13}=M^{T}PM\), \(\varTheta _{1,14}=-6G_{4}\), \(\varTheta _{1,15}= -6T_{4}\), \(\varTheta _{1,16}=-6T _{3}\), \(\varTheta _{1,17}=-6T_{1}\), \(\varTheta _{1,18}=-6T_{2}\), \(\varTheta _{1,19}=6G_{3}\), \(\varTheta _{1,20}=6G_{1}\), \(\varTheta _{1,21}=6G_{2}\), \(\varTheta _{1,22}=PD-S_{1}D\), \(\varTheta _{1,23}=-S_{1}\), \(\varTheta _{1,24}=PE-S _{1}E\), \(\varTheta _{2,2}=-Q_{1}-4T_{3}\), \(\varTheta _{2,7}=-2(T_{3}+2G_{3})\), \(\varTheta _{2,18}=6G_{3}\), \(\varTheta _{2,21}=6T_{3}\), \(\varTheta _{3,3}=-Q _{2}-4T_{1}\), \(\varTheta _{3,8}=-2(T_{1}+2G_{1})\), \(\varTheta _{3,19}=6G_{1}\), \(\varTheta _{3,22}=6T_{1}\), \(\varTheta _{4,4}=-Q_{3}-4T_{2}\), \(\varTheta _{4,9}=-2(T _{2}+2G_{2})\), \(\varTheta _{4,20}=6G_{2}\), \(\varTheta _{4,23}=6T_{2}\), \(\varTheta _{5,5}=-R_{2}-4T_{4}\), \(\varTheta _{5,10}=-2(T_{4}+2G_{4})\), \(\varTheta _{5,13}=-M^{T}PM\), \(\varTheta _{5,14}=6T_{4}\), \(\varTheta _{5,15}=6G_{4}\), \(\varTheta _{6,6}=-T_{5}\), \(\varTheta _{7,7}=-(1-\mu )R_{3}-4(2T_{3}+G_{3})-K _{1}\beta _{2}\), \(\varTheta _{7,12}={-K_{2}\beta _{2}}\), \(\varTheta _{7,16}=6(T _{3}+G_{3})\), \(\varTheta _{7,19}=6(T_{3}+G_{3})\), \(\varTheta _{8,8}=-(1-\mu _{1})R_{4}-4(2T_{1}+G_{1})\), \(\varTheta _{8,17}=6(T_{1}+G_{1})\), \(\varTheta _{8,20}=6(T_{1}+G_{1})\), \(\varTheta _{9,9}=-(1-\mu _{2})R_{5}-4(2T _{2}+G_{2})\), \(\varTheta 
_{9,18}=6(T_{2}+G_{2})\), \(\varTheta _{9,21}=6(T_{2}+G _{2})\), \(\varTheta _{10,10}=-(1-\mu _{3})R_{1}-4(2T_{4}+G_{4})\), \(\varTheta _{10,13}=M ^{T}PC\), \(\varTheta _{10,14}=6(T_{4}+G_{4})\), \(\varTheta _{10,15}=6(T_{4}+G _{4})\), \(\varTheta _{10,23}=-S_{2}C\), \(\varTheta _{10,24}=-S_{3}C\), \(\varTheta _{11,11}=( \delta _{2}-\delta _{1})^{2}L_{1}-\beta _{1}\), \(\varTheta _{11,13}=-M^{T}PA\), \(\varTheta _{11,23}=S_{2}A\), \(\varTheta _{11,24}=-S_{3}A\), \(\varTheta _{12,12}=- \beta _{2}\), \(\varTheta _{12,13}=-M^{T}PB\), \(\varTheta _{12,23}=S_{2}B\), \(\varTheta _{12,24}=-S_{3}B\), \(\varTheta _{13,13}=-2L_{2}\), \(\varTheta _{13,21}=-M ^{T}PD\), \(\varTheta _{13,24}=-M^{T}PE\), \(\varTheta _{13,25}=-2MP\), \(\varTheta _{14,14}=-12T_{4}\), \(\varTheta _{14,15}=-12G_{4}\), \(\varTheta _{15,15}=-12T _{4}\), \(\varTheta _{16,16}=-12T_{3}\), \(\varTheta _{16,19}=-12G_{3}\), \(\varTheta _{17,17}=-12T_{1}\), \(\varTheta _{17,20}=-12G_{1}\), \(\varTheta _{18,18}=-12T _{2}\), \(\varTheta _{18,21}=-12G_{2}\), \(\varTheta _{19,19}=-12T_{3}\), \(\varTheta _{20,20}=-12T_{1}\), \(\varTheta _{21,21}=-12T_{2}\), \(\varTheta _{22,22}=-L _{1}\), \(\varTheta _{22,23}=S_{2}D\), \(\varTheta _{22,24}=-S_{3}D\), \(\varTheta _{23,23}=\frac{ \tau _{1}^{4}}{4}U_{1}+\frac{\tau _{2}^{4}}{4}U_{2} +\frac{\tau ^{4}}{4}U _{3}-S_{2}+\tau _{1}^{2}T_{1}+\tau _{2}^{2}T_{2}+\tau ^{2}T_{3} +\eta ^{2}T_{4}+h^{2}T_{5}\), \(\varTheta _{23,24}=S_{2}E\), \(\varTheta _{24,24}=S_{3}E+E ^{T}S_{3}+S_{3}\), \(\varTheta _{25,25}=S_{2}\), \(\varUpsilon _{k}^{T}=[\varGamma _{1k},\varGamma _{2k},\varGamma _{3k},\varGamma _{4k}, \varGamma _{5k},\varGamma _{6k}]^{T}\) (\(k=1,2,3,4\)), \(\varGamma _{11}^{T}=\varGamma _{12}^{T}=\tau _{1}(e_{1}-e_{20})\), \(\varGamma _{13}^{T}=\varGamma _{14}^{T}=\mathbf{0}\), \(\varGamma _{21}^{T}=\varGamma _{22}^{T}=\mathbf{0}\), \(\varGamma _{23}^{T}=\varGamma _{24}^{T}=\tau _{1}(e_{1}-e_{17})\), \(\varGamma _{31}^{T}=\varGamma _{33}^{T}=\tau _{2}(e_{1}-e_{21})\), 
\(\varGamma _{32} ^{T}=\varGamma _{34}^{T}=\mathbf{0}\), \(\varGamma _{41}^{T}=\varGamma _{43}^{T}=\mathbf{0}\), \(\varGamma _{42}^{T}= \varGamma _{44}^{T}=\tau _{2}(e_{1}-e_{18})\), \(\varGamma _{51}^{T}= \tau (e_{1}-e_{19})\), \(\varGamma _{52}^{T}=\tau _{1}(e_{1}-e_{19})\), \(\varGamma _{53}^{T}=\tau _{2}(e_{1}-e_{19})\), \(\varGamma _{54}^{T}=\varGamma _{61}^{T}=\mathbf{0}\), \(\varGamma _{62}^{T}=\tau _{2}(e_{1}-e_{16})\), \(\varGamma _{63}^{T}=\tau _{1}(e_{1}-e_{16})\), \(\varGamma _{64}^{T}=\tau (e_{1}-e_{19})\), \(e_{i}=[\mathbf{0}_{n\times (i-1)n},\mathbf{I}_{n\times n},\mathbf{0} _{n\times (25-i)n}]\) (\(i=1,2,\ldots,25\)).


Proof

Replace the exponential-type Lyapunov–Krasovskii functional of Theorem 3.1 by

$$ V \bigl(t,x(t) \bigr)=\sum_{k=1} ^{6}V_{k}(t), $$
where


$$\begin{aligned}& V_{1} \bigl(t,x(t) \bigr)= \biggl[x(t)-M \int _{t-\eta }^{t}x(s)\,ds \biggr]^{T}P \biggl[x(t)-M \int _{t-\eta }^{t}x(s)\,ds \biggr], \\& \begin{aligned} V_{2} \bigl(t,x(t) \bigr)&= \int _{t-\tau }^{t}x^{T}(s)Q_{1}x(s) \,ds + \int _{t-\tau _{1}}^{t}x^{T}(s)Q_{2}x(s) \,ds \\ &\quad{} + \int _{t-\tau _{2}}^{t}x^{T}(s)Q_{3}x(s) \,ds, \end{aligned} \\& \begin{aligned} V_{3} \bigl(t,x(t) \bigr)&= \int _{t-\eta (t)}^{t}x^{T}(s)R_{1}x(s) \,ds + \int _{t- \eta }^{t}x^{T}(s)R_{2}x(s) \,ds \\ &\quad{}+ \int _{t-\tau (t)}^{t}x^{T}(s) R_{3}x(s)\,ds + \int _{t-\tau _{1}(t)}^{t}x^{T}(s) R_{4}x(s)\,ds \\ &\quad{}+ \int _{t-\tau _{2}(t)}^{t}x^{T}(s) R_{5}x(s)\,ds, \end{aligned} \\& \begin{aligned} V_{4} \bigl(t,x(t) \bigr)&=\tau _{1} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s)T_{1} \dot{x}(s)\,ds\,d\theta \\ &\quad{} +\tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)T_{2} \dot{x}(s)\,ds \,d\theta \\ &\quad{} +\tau \int _{-\tau }^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)T_{3} \dot{x}(s)\,ds\,d \theta +\eta \int _{-\eta }^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)T_{4} \dot{x}(s)\,ds\,d\theta \\ &\quad{} +h \int _{-h}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)T_{5} \dot{x}(s)\,ds\,d \theta , \end{aligned} \\& V_{5} \bigl(t,x(t) \bigr)=(\delta _{2}-\delta _{1}) \int _{-\delta _{2}}^{-\delta _{1}} \int _{t+\theta }^{t}f^{T} \bigl(x(s) \bigr)L_{1}f \bigl(x(s) \bigr)\,ds\,d\theta +\eta \int _{-\eta }^{0} \int _{t+\theta }^{t}x^{T}(s)L_{2}x(s) \,ds\,d\theta , \\& \begin{aligned} V_{6} \bigl(t,x(t) \bigr)&=\frac{\tau _{1}^{2}}{2} \int _{-\tau _{1}}^{0} \int _{ \theta }^{0} \int _{t+\lambda }^{t}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d \lambda \,d\theta \\ &\quad{}+\frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{ \theta }^{0} \int _{t+\lambda }^{t}\dot{x}^{T}(s)U_{2} \dot{x}(s)\,ds\,d \lambda \,d \theta \\ &\quad{} +\frac{\tau ^{2}}{2} \int _{-\tau }^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t}\dot{x}^{T}(s)U_{3} \dot{x}(s)\,ds\,d \lambda \,d \theta . \end{aligned} \end{aligned}$$

The rest of the proof of Theorem 3.2 is similar to that of Theorem 3.1, so the details are omitted. □

Remark 6

In particular, when \(E=0\) and \(D=0\), system (2.2) reduces to system (4) in [19], which is shown to be dissipative there. Furthermore, we discuss the global exponential dissipativity of system (2.2), so our model can be regarded as an extension of system (4) in [19].

Remark 7

If \(\tau _{1}(t)+\tau _{2}(t)=\tau (t)\), \(0\leq \tau (t)\leq \tau \), \(\vert \dot{\tau }(t) \vert \leq \mu \), \(E=0\) and \(\eta (t)=0\), i.e., system (2.2) has neither the two additive time-varying delays nor the leakage delay and neutral term, then system (2.2) reduces to the following neural network:

$$ \textstyle\begin{cases} \dot{x}(t)=-Cx(t)+Af(x(t))+Bf(x(t-\tau (t))) \\ \hphantom{\dot{x}(t)=}{} +D\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f(x(s))\,ds+u(t), \\ y(t)=f(x(t)), \\ x(t)=\phi (t), \quad t\in [-\tau ^{*},0]. \end{cases} $$

So the reduced system is no longer a neutral-type memristive neural network. The dissipativity of other types of neural network models has been discussed in [30, 41, 42]. When the corresponding terms are removed, the dissipativity result of Theorem 3.1 can still be obtained by utilizing the LMIs, so our system is more general.
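For intuition, the reduced delayed network can be integrated with a simple fixed-step Euler scheme. The matrices, the constant stand-in delay, and the input below are illustrative only (the distributed-delay term \(D\int f(x(s))\,ds\) is omitted for brevity):

```python
import numpy as np

# Minimal Euler sketch of x' = -C x + A f(x) + B f(x(t - tau)) + u(t)
# with a constant delay and constant initial history. Illustrative
# parameters, not the memristive example's state-dependent coefficients.
n, dt, T = 2, 0.01, 10.0
tau = 0.9                                  # constant stand-in for tau(t)
C = np.diag([2.0, 2.0])
A = np.array([[1.2, 0.3], [0.7, 2.5]])
B = np.array([[0.8, 0.05], [0.3, 0.9]])
f = np.tanh                                # sector-bounded activation

steps = int(T / dt)
d = int(tau / dt)                          # delay measured in steps
x = np.zeros((steps + 1, n))
x[0] = [0.5, -0.5]                         # phi(0); history held constant
for k in range(steps):
    xd = x[max(k - d, 0)]                  # delayed state x(t - tau)
    u = np.array([0.5 * np.sin(k * dt), 0.25 * np.cos(k * dt)])
    x[k + 1] = x[k] + dt * (-C @ x[k] + A @ f(x[k]) + B @ f(xd) + u)
```

With the bounded input, the trajectory stays in a bounded region, in line with the dissipativity result.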

4 Example and simulation

In this section, we give a numerical example to illustrate the effectiveness of our results.

Example 1

Consider the two-dimensional MNNs (2.1) with the following parameters:

$$\begin{aligned}& a_{11} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} 1.2,& \vert x_{1}(t) \vert \leq 1, \\ -1, & \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \quad\quad a_{12} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} 0.3, & \vert x_{1}(t) \vert \leq 1, \\ 0.5,& \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \\ & a_{21} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 0.7,& \vert x_{2}(t) \vert \leq 1, \\ -1, & \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \quad\quad a_{22} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 2.5, & \vert x_{2}(t) \vert \leq 1, \\ -0.3, & \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \\ & b_{11} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} 0.8,& \vert x_{1}(t) \vert \leq 1, \\ 0.2, & \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \quad\quad b_{12} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} 0.05, & \vert x_{1}(t) \vert \leq 1, \\ -0.05,& \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \\ & b_{21} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 0.3, & \vert x_{2}(t) \vert \leq 1 , \\ 1,& \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \quad\quad b_{22} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 0.9,& \vert x_{2}(t) \vert \leq 1, \\ -0.3,& \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \\ & d_{11} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} -0.9, & \vert x_{1}(t) \vert \leq 1, \\ 2, & \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \quad\quad d_{12} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} -0.5,& \vert x_{1}(t) \vert \leq 1, \\ -0.3,& \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \\ & d_{21} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 2, & \vert x_{2}(t) \vert \leq 1, \\ 0.3,& \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \quad\quad d_{22} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 1.5,& \vert x_{2}(t) \vert \leq 1, \\ 1, & \vert x_{2}(t) \vert > 1. \end{cases}\displaystyle \end{aligned}$$

The activation functions are \(f_{1}(s)=\tanh (0.3s) - 0.2\sin (s)\), \(f_{2}(s)=\tanh (0.2s) + 0.3\sin (s)\). Let \(\alpha =0.01\), \(c_{1}=c_{2}=2\), \(e_{1}=e_{2}=0.2\), \(m_{1}=2\), \(m_{2}=3.56\), \(h(t)=0.1\sin (2t) + 0.5\), \(\eta (t) =0.1\sin (2t) + 0.2\), \(\tau _{1}(t)=0.1\sin (t)+0.2\), \(\tau _{2}(t)= 0.1\cos (t) + 0.5\), \(\delta _{1}(t) = 0.4\sin (t) + 0.4\), \(\delta _{2}(t) = 0.4\sin (t) + 0.6\), \(u = [0.5\sin (t); 0.25\cos (t)]^{T}\). So \(\eta =0.4\), \(\bar{h}=0.6\), \(\tau _{1} = 0.3\), \(\tau _{2} = 0.6\), \(\tau = 0.9\), \(\delta _{1} = 0\), \(\delta _{2} = 1\), \(\mu _{1} = 0.1\), \(\mu _{2} = 0.1\), \(\mu = 0.2\). Then \(l_{1}^{-}=-0.2\), \(l_{1}^{+}=0.5\), \(l_{2}^{-}=-0.3\) and \(l_{2}^{+}=0.5\), i.e.,

$$ K_{1}= \begin{bmatrix} -0.1 & 0 \\ 0 & -0.15 \end{bmatrix} ,\quad \quad K_{2}= \begin{bmatrix} 0.15 & 0 \\ 0 & 0.1 \end{bmatrix} . $$
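The matrices \(K_{1}\) and \(K_{2}\) follow the standard construction for sector-bounded activations, \(K_{1}=\operatorname{diag}(l_{i}^{-}l_{i}^{+})\) and \(K_{2}=\operatorname{diag}((l_{i}^{-}+l_{i}^{+})/2)\); only the bounds themselves come from the example. A short numerical check:

```python
import numpy as np

# Rebuild K1, K2 of the example from the sector bounds of f1, f2:
# K1 = diag(l_i^- * l_i^+), K2 = diag((l_i^- + l_i^+)/2).
lm = np.array([-0.2, -0.3])  # lower sector bounds l_1^-, l_2^-
lp = np.array([0.5, 0.5])    # upper sector bounds l_1^+, l_2^+
K1 = np.diag(lm * lp)
K2 = np.diag((lm + lp) / 2)
# Matches the matrices stated in the example:
assert np.allclose(K1, np.diag([-0.1, -0.15]))
assert np.allclose(K2, np.diag([0.15, 0.1]))
```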

With the above parameters, using the LMI toolbox in MATLAB, we obtain the following feasible solution to the LMIs in Theorem 3.1:

$$\begin{aligned} &P = 1.0\times 10^{-11} \begin{bmatrix} 0.0764 & -0.0110\\ -0.0110 & 0.1583 \end{bmatrix} , \quad\quad Q_{1} = 1.0\times 10^{-11} \begin{bmatrix} -0.6182 & -0.0001\\ -0.0001 & -0.6132 \end{bmatrix} , \\ &Q_{2} = 1.0\times 10^{-11} \begin{bmatrix} 0.1918 & 0.0006\\ 0.0006 & 0.2014 \end{bmatrix} ,\quad\quad Q_{3} = 1.0\times 10^{-11} \begin{bmatrix} 0.2056 & -0.0002\\ -0.0002 & 0.2169 \end{bmatrix} , \\ &U_{1} = 1.0\times 10^{-10} \begin{bmatrix} 0.2502 & 0.0004\\ 0.0004 & 0.2525 \end{bmatrix} ,\quad\quad U_{2} = 1.0\times 10^{-10} \begin{bmatrix} 0.1844 & 0.0010\\ 0.0010 & 0.1857 \end{bmatrix} , \\ &U_{3} = 1.0\times 10^{-12} \begin{bmatrix} 0.3907 & 0.0451\\ 0.0451 & 0.4122 \end{bmatrix} ,\quad\quad R_{1} = 1.0\times 10^{-11} \begin{bmatrix} 0.3373 & 0.0191\\ 0.0191 & 0.4588 \end{bmatrix} , \\ &R_{2} = 1.0\times 10^{-11} \begin{bmatrix} 0.3207& -0.0053\\ -0.0053 & 0.3337 \end{bmatrix} ,\quad\quad R_{3} = 1.0\times 10^{-10} \begin{bmatrix} 0.1887 & -0.0002\\ -0.0002 & 0.1863 \end{bmatrix} , \\ &R_{4} = 1.0\times 10^{-11} \begin{bmatrix} 0.2471 & 0.0008\\ 0.0008 & 0.2573 \end{bmatrix} ,\quad\quad R_{5} = 1.0\times 10^{-11} \begin{bmatrix} 0.3801 & 0.0002\\ 0.0002 & 0.3920 \end{bmatrix} , \\ &T_{1} = 1.0\times 10^{-10} \begin{bmatrix} 0.6706 & 0.0008\\ 0.0008 & 0.6752 \end{bmatrix} ,\quad\quad T_{2} = 1.0\times 10^{-11} \begin{bmatrix} 0.3678 & 0.0005\\ 0.0005 & 0.3711 \end{bmatrix} , \\ &T_{3} = 1.0\times 10^{-11} \begin{bmatrix} 0.1644 & 0.0005\\ 0.0005 & 0.1672 \end{bmatrix} ,\quad\quad T_{4} = 1.0\times 10^{-11} \begin{bmatrix} 0.5042 & -0.0251\\ -0.0251 & 0.4591 \end{bmatrix} , \\ &T_{5} = 1.0\times 10^{-12} \begin{bmatrix} -0.4935 & 0.1301\\ 0.1301 & -0.5262 \end{bmatrix} ,\quad\quad G_{1} = 1.0\times 10^{-10} \begin{bmatrix} 0.2745 & 0.0004\\ 0.0004 & 0.2766 \end{bmatrix} , \\ &G_{2} = 1.0\times 10^{-12} \begin{bmatrix} 0.1888 & 0.0007\\ 0.0007 & 0.1930 \end{bmatrix} ,\quad\quad G_{3} = 1.0\times 10^{-12} \begin{bmatrix} -0.3413 & 0.0003\\ 
0.0003 & -0.3296 \end{bmatrix} , \\ &G_{4} = 1.0\times 10^{-12} \begin{bmatrix} -0.5025 & -0.0079\\ -0.0079 & -0.5993 \end{bmatrix} ,\quad\quad L_{1} = 1.0\times 10^{-12} \begin{bmatrix} -0.9111 & -0.3262\\ -0.3262 & -0.8944 \end{bmatrix} , \\ &L_{2} = 1.0\times 10^{-9} \begin{bmatrix} 0.1237 & -0.0008\\ -0.0008 & 0.1207 \end{bmatrix} ,\quad\quad S_{2} = 1.0\times 10^{-12} \begin{bmatrix} -0.4977 & 0.1688\\ 0.1688 & -0.2524 \end{bmatrix} , \\ &S_{3} = 1.0\times 10^{-13} \begin{bmatrix} 0.0904 & -0.4569\\ -0.4569 & -0.7191 \end{bmatrix} ,\quad\quad \beta _{1} = 1.0\times 10^{-9} \begin{bmatrix} 0.1429 & 0\\ 0 & 0.1061 \end{bmatrix} , \\ &\beta _{2} = 1.0\times 10^{-9} \begin{bmatrix} 0.2035 & 0\\ 0 & 0.1523 \end{bmatrix} ,\quad\quad S_{1} = 1.0\times 10^{-11} \begin{bmatrix} 0.2004 & -0.0384\\ -0.0303 & 0.1845 \end{bmatrix} , \\ &V_{1} = \begin{bmatrix} 74.2116 & 0\\ 0 &74.2116 \end{bmatrix} ,\quad\quad V_{2} = \begin{bmatrix} 74.2116 & 0\\ 0 & 74.2116 \end{bmatrix} , \\ & V_{3} = \begin{bmatrix} 74.2116 & 0\\ 0 & 74.2116 \end{bmatrix} . \end{aligned}$$
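As a quick sanity check outside MATLAB, one can verify numerically that the reported matrix \(P\) is symmetric positive definite, as the Lyapunov–Krasovskii construction requires. A minimal sketch with NumPy, using the values copied from the printout above:

```python
import numpy as np

# Reported LMI solution P (scaled by 1e-11), copied from the feasible solution above.
P = 1e-11 * np.array([[0.0764, -0.0110],
                      [-0.0110, 0.1583]])

# P is symmetric with positive trace and positive determinant,
# so both eigenvalues are positive.
eigvals = np.linalg.eigvalsh(P)
print(eigvals.min() > 0)   # → True
```

The same eigenvalue check can be applied to any of the other solution matrices that are required to be positive definite.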

Then system (2.1) is globally exponentially dissipative, with attractive set \(S=\{x:\vert x\vert \leq 8.333\}\). Figure 1 shows the trajectories of the neuron states \(x_{1}(t)\) and \(x_{2}(t)\) of the neutral-type MNN (2.1), and Fig. 2 shows the same trajectories in three-dimensional space. The neuron states \(x_{1}(t)\) and \(x_{2}(t)\) become periodic when the external inputs of (2.1) are designed as periodic signals. By Theorem 3.1 and Definition 2, system (2.1) is globally dissipative. Under the same conditions, if the external input is \(u(t)=0\), then by Theorem 3.2 the positive invariant set reduces to \(S=\{0\}\) and system (2.1) is globally stable, as shown in Fig. 3.
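The qualitative behavior described above (bounded trajectories under a periodic input, convergence to the origin when the input vanishes) can be reproduced on a toy delayed network by direct Euler integration. The parameters below are illustrative assumptions, not the actual coefficients of system (2.1):

```python
import numpy as np

def simulate(periodic_input=True, T=50.0, dt=0.001, tau=0.5):
    """Euler-integrate a hypothetical two-neuron delayed network
        x'(t) = -D x(t) + A f(x(t)) + B f(x(t - tau)) + u(t),  f = tanh.
    D, A, B, tau and the initial history are assumed for illustration only."""
    D = np.diag([1.0, 1.0])
    A = np.array([[0.2, -0.1], [0.1, 0.2]])
    B = np.array([[0.1, 0.05], [-0.05, 0.1]])
    n_delay = int(round(tau / dt))
    steps = int(round(T / dt))
    x = np.zeros((steps + n_delay, 2))
    x[:n_delay + 1] = [0.5, -0.5]          # constant initial history on [-tau, 0]
    for k in range(n_delay, steps + n_delay - 1):
        t = (k - n_delay) * dt
        u = np.array([np.sin(t), np.cos(t)]) if periodic_input else np.zeros(2)
        dx = -D @ x[k] + A @ np.tanh(x[k]) + B @ np.tanh(x[k - n_delay]) + u
        x[k + 1] = x[k] + dt * dx
    return x[n_delay:]

# Periodic input: trajectories settle into a bounded absorbing set (dissipativity).
tail = simulate(periodic_input=True)[-10000:]
print(np.abs(tail).max() < 2.0)                          # → True

# Zero input: trajectories converge to the origin (global stability).
print(np.abs(simulate(periodic_input=False)[-1]).max() < 1e-2)   # → True
```

Since \(\tanh\) is bounded, the chosen weights make the linear decay term dominate, so the trajectory enters and stays in a ball whose radius is determined by the input bound, mirroring the attractive-set estimate of Theorem 3.1.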

Figure 1: State trajectories of \(x_{1}(t)\), \(x_{2}(t)\)

Figure 2: State trajectories of \(x_{1}\), \(x_{2}\) in three-dimensional space

Figure 3: State trajectories of \(x_{1}\), \(x_{2}\) in three-dimensional space when \(u(t)=0\)

5 Conclusions

This paper has investigated the dissipativity of a neutral-type memristive neural network with two additive time-varying delays, as well as distributed and time-varying leakage delays. By applying novel linear matrix inequalities, a Lyapunov–Krasovskii functional and the Newton–Leibniz formula, sufficient conditions for the dissipativity of the system were obtained. Although the dissipativity of MNNs has been reported before, there are few results on the dissipativity of neutral-type MNNs. We have added neutral terms to the model, which makes it more realistic. Finally, we have given a numerical example to illustrate the effectiveness and exactness of our results. When Markovian jumping is added to this model, how to study the dissipativity of neutral-type MNNs with mixed delays becomes an interesting question, and we will extend our work in this direction in the future.


  1. Wang, Z., Liu, Y., Liu, X.: On global asymptotic stability of neural networks with discrete and distributed delays. Phys. Lett. A 345(4–6), 299–308 (2005)


  2. Egmont-Petersen, M., Ridder, D.D., Handels, H.: Image processing with neural networks—a review. Pattern Recognit. 35(10), 2279–2301 (2002)


  3. Chua, L.: Memristor-the missing circuit element. IEEE Trans. Circuit Theory 18(5), 507–519 (1971)


  4. Strukov, D.B., Snider, G.S., Stewart, D.R., Williams, R.S.: The missing memristor found. Nature 453(7191), 80–83 (2008)


  5. Cantley, K.D., Subramaniam, A., Stiegler, H.J., Chapman, R.A., Vogel, E.M.: Neural learning circuits utilizing nano-crystalline silicon transistors and memristors. IEEE Trans. Neural Netw. Learn. Syst. 23(4), 565–573 (2012)


  6. Ding, S., Wang, Z., Zhang, H.: Dissipativity analysis for stochastic memristive neural networks with time-varying delays: a discrete-time case. IEEE Trans. Neural Netw. Learn. Syst. 29(3), 618–630 (2018)


  7. Cheng, J., Park, J.H., Cao, J., Zhang, D.: Quantized \({H^{\infty }}\) filtering for switched linear parameter-varying systems with sojourn probabilities and unreliable communication channels. Inf. Sci. 466, 289–302 (2018)


  8. Zhang, D., Cheng, J., Park, J.H., Cao, J.: Robust \({H^{\infty }}\) control for nonhomogeneous Markovian jump systems subject to quantized feedback and probabilistic measurements. J. Franklin Inst. 355(15), 6992–7010 (2018)


  9. Sun, J., Chen, J.: Stability analysis of static recurrent neural networks with interval time-varying delay. Appl. Math. Comput. 221(9), 111–120 (2013)


  10. Sun, Y., Cui, B.T.: Dissipativity analysis of neural networks with time-varying delays. Neurocomputing 168, 741–746 (2015)


  11. Li, C., Feng, G.: Delay-interval-dependent stability of recurrent neural networks with time-varying delay. Neurocomputing 72, 1179–1183 (2009)


  12. Lv, X., Li, X.: Delay-dependent dissipativity of neural networks with mixed non-differentiable interval delays. Neurocomputing 267, 85–94 (2017)


  13. Wei, H., Li, R., Chen, C., Tu, Z.: Extended dissipative analysis for memristive neural networks with two additive time-varying delay components. Neurocomputing 216, 429–438 (2016)


  14. Zeng, X., Xiong, Z., Wang, C.: Hopf bifurcation for neutral-type neural network model with two delays. Appl. Math. Comput. 282, 17–31 (2016)


  15. Xu, C., Li, P., Pang, Y.: Exponential stability of almost periodic solutions for memristor-based neural networks with distributed leakage delays. Neural Comput. 28(12), 1–31 (2016)


  16. Zhang, Y., Gu, D.W., Xu, S.: Global exponential adaptive synchronization of complex dynamical networks with neutral-type neural network nodes and stochastic disturbances. IEEE Trans. Circuits Syst. I, Regul. Pap. 60(10), 2709–2718 (2013)


  17. Brogliato, B., Maschke, B., Lozano, R., Egeland, O.: Dissipative Systems Analysis and Control. Springer, Berlin (2007)


  18. Huang, Y., Ren, S.: Passivity and passivity-based synchronization of switched coupled reaction–diffusion neural networks with state and spatial diffusion couplings. Neural Process. Lett. 5, 1–17 (2017)


  19. Fu, Q., Cai, J., Zhong, S., Yu, Y.: Dissipativity and passivity analysis for memristor-based neural networks with leakage and two additive time-varying delays. Neurocomputing 275, 747–757 (2018)


  20. Willems, J.C.: Dissipative dynamical systems part I: general theory. Arch. Ration. Mech. Anal. 45(5), 321–351 (1972)


  21. Hong, D., Xiong, Z., Yang, C.: Analysis of adaptive synchronization for stochastic neutral-type memristive neural networks with mixed time-varying delays. Discrete Dyn. Nat. Soc. 2018, 8126127 (2018)


  22. Cheng, J., Park, J.H., Karimi, H.R., Shen, H.: A flexible terminal approach to sampled-data exponentially synchronization of Markovian neural networks with time-varying delayed signals. IEEE Trans. Cybern. 48(8), 2232–2244 (2018)


  23. Zhang, D., Cheng, J., Cao, J., Zhang, D.: Finite-time synchronization control for semi-Markov jump neural networks with mode-dependent stochastic parametric uncertainties. Appl. Math. Comput. 344–345, 230–242 (2019)


  24. Zhang, W., Yang, S., Li, C., Zhang, W., Yang, X.: Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control. Neural Netw. 104, 93–103 (2018)


  25. Duan, L., Huang, L.: Global dissipativity of mixed time-varying delayed neural networks with discontinuous activations. Commun. Nonlinear Sci. Numer. Simul. 19(12), 4122–4134 (2014)


  26. Tu, Z., Cao, J., Alsaedi, A., Alsaadi, F.: Global dissipativity of memristor-based neutral type inertial neural networks. Neural Netw. 88, 125–133 (2017)


  27. Manivannan, R., Cao, Y.: Design of generalized dissipativity state estimator for static neural networks including state time delays and leakage delays. J. Franklin Inst. 355, 3990–4014 (2018)


  28. Xiao, J., Zhong, S., Li, Y.: Relaxed dissipativity criteria for memristive neural networks with leakage and time-varying delays. Neurocomputing 171, 708–718 (2016)


  29. Samidurai, R., Sriraman, R.: Robust dissipativity analysis for uncertain neural networks with additive time-varying delays and general activation functions. Math. Comput. Simul. 155, 201–216 (2019)


  30. Lin, W.J., He, Y., Zhang, C., Long, F., Wu, M.: Dissipativity analysis for neural networks with two-delay components using an extended reciprocally convex matrix inequality. Inf. Sci. 450, 169–181 (2018)


  31. Aubin, J.P., Cellina, A.: Differential Inclusions. Springer, Berlin (1984)


  32. Arscott, F.M.: Differential Equations with Discontinuous Righthand Sides. Kluwer Academic, Amsterdam (1988)


  33. Filippov, A.F.: Classical solutions of differential equations with multi-valued right-hand side. SIAM J. Control Optim. 5(4), 609–621 (1967)


  34. Song, Q., Cao, J.: Global dissipativity analysis on uncertain neural networks with mixed time-varying delays. Chaos 18, 043126 (2008)


  35. Liao, X., Wang, J.: Global dissipativity of continuous-time recurrent neural networks with time delay. Phys. Rev. E 68(1 Pt 2), 016118 (2003)


  36. Seuret, A., Gouaisbaut, F.: Wirtinger-based integral inequality: application to time-delay systems. Automatica 49, 2860–2866 (2013)


  37. Wang, Z., Liu, Y., Fraser, K., Liu, X.: Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)


  38. Kwon, O.M., Lee, S.M., Park, J.H., Cha, E.J.: New approaches on stability criteria for neural networks with interval time-varying delays. Appl. Math. Comput. 218(19), 9953–9964 (2012)


  39. Park, P.G., Ko, J.W., Jeong, C.: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47(1), 235–238 (2011)


  40. Xin, Y., Li, Y., Cheng, Z., Huang, X.: Global exponential stability for switched memristive neural networks with time-varying delays. Neural Netw. 80, 34–42 (2016)


  41. Guo, Z., Wang, J., Yan, Z.: Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 48, 158–172 (2013)


  42. Nagamani, G., Joo, Y.H., Radhika, T.: Delay-dependent dissipativity criteria for Markovian jump neural networks with random delays and incomplete transition probabilities. Nonlinear Dyn. 91(4), 2503–2522 (2018)




Acknowledgements

The authors would like to thank the referees for their valuable comments on an earlier version of this article.

Authors’ information

Cuiping Yang, Zuoliang Xiong, Tianqing Yang.


Funding

This work is supported by National Natural Science Foundation of China (No. 61563033).

Author information




All authors contributed equally to the writing of this paper. All authors of the manuscript have read and agreed to its content and are accountable for all aspects of the accuracy and integrity of the manuscript.

Corresponding author

Correspondence to Zuoliang Xiong.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Yang, C., Xiong, Z. & Yang, T. Dissipativity analysis of neutral-type memristive neural network with two additive time-varying and leakage delays. Adv Differ Equ 2019, 6 (2019).
