On extended dissipativity analysis for neural networks with time-varying delay and general activation functions
Advances in Difference Equations volume 2016, Article number: 79 (2016)
Abstract
We investigate the problem of extended dissipativity analysis for a class of neural networks with time-varying delay. Extended dissipativity generalizes several previously known results, covering the \(H_{\infty}\), passivity, dissipativity, and \(\ell_{2}-\ell_{\infty}\) performance in a unified framework. By introducing a suitable augmented Lyapunov-Krasovskii functional, exploiting sufficient information on the neuron activation functions, and using a new bound inequality, we give sufficient conditions in terms of linear matrix inequalities (LMIs) that guarantee the stability and extended dissipativity of delayed neural networks. Numerical examples illustrate the effectiveness and reduced conservatism of the proposed methods.
1 Introduction
In recent years, neural networks have received extensive attention due to their applications in a variety of areas, such as signal processing, image processing, pattern recognition, associative memory, and optimization problems [1, 2]. Since theoretical analysis is usually a prerequisite for success in applications, numerous investigations have been conducted on the dynamical behaviors of delayed neural networks. It is well known that time delay is always encountered because neural networks are frequently implemented by various hardware circuits, digital or integrated. In addition, time delay is often one of the main sources of poor performance, chaos, and instability. As a result, numerous stability criteria for delayed neural networks have been reported in [3–20].
It is worth pointing out that the performance of a neural network, usually characterized by an input-output relationship, plays an important role in various scenarios, for example, the \(H_{\infty}\) control problem [21–23], passivity and passification problems [24, 25], \(\ell_{2}-\ell_{\infty}\) performance analysis [26], and dissipativity analysis [27–29]. Dissipativity has attracted many researchers’ attention because it not only unifies the \(H_{\infty}\) and passivity performances [30–35] but also provides more flexible robust control designs in practical engineering, such as chemical process control [36] and power converters [37]. Recently, \((\mathcal {Q},\mathcal{S},\mathcal{R})\)-dissipativity was developed in [38] and [39]; however, the \(\ell_{2}-\ell_{\infty}\) performance is not contained in that notion of dissipativity. To overcome this drawback, Zhang et al. [40] proposed a general performance called extended dissipativity, which unifies these performances. Further, the authors of [41] discussed extended dissipativity analysis for continuous-time delay neural networks, and [42] addressed extended dissipativity for discrete-time delay neural networks. In addition, [43, 44] studied dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. However, it should be mentioned that the stability criteria of [40, 41] are conservative: there is still room for improvement because some useful terms are ignored in the Lyapunov-Krasovskii functionals employed in [40, 41]. It is natural to look for an alternative view to reduce the conservatism of stability criteria, which has motivated our research on this issue.
In this paper, we investigate extended dissipativity analysis for neural networks with time-varying delay and general activation functions. The contribution of this paper is as follows. First, by constructing a suitable augmented Lyapunov-Krasovskii functional, we utilize a new bound inequality to reduce the conservatism of the results. Second, the extended dissipativity generalizes several previously known results, encompassing the \(H_{\infty}\), \(\ell_{2}-\ell_{\infty}\), passivity, and dissipativity performances by adjusting weighting matrices in a new performance index. Third, we pay closer attention to the activation functions. Unlike some existing methods [9, 10, 12, 45, 46], which divide the bound of the neuron activation functions into two subintervals directly, we introduce a parameter δ such that \(\lambda_{i}^{\delta}=\lambda _{i}^{-}+\delta(\lambda_{i}^{+}-\lambda_{i}^{-})\) and employ cross terms among the states under the conditions \(\lambda_{i}^{-}\leq\frac {f_{i}(a)-f_{i}(b)}{a-b}\leq\lambda_{i}^{\delta}\) and \(\lambda _{i}^{\delta}\leq\frac{f_{i}(a)-f_{i}(b)}{a-b}\leq\lambda_{i}^{+}\). In addition, for the particular case \(b=0\), the conditions \(\lambda _{i}^{-}\leq\frac{f_{i}(a)}{a}\leq\lambda_{i}^{\delta}\) and \(\lambda _{i}^{\delta}\leq\frac{f_{i}(a)}{a}\leq\lambda_{i}^{+}\) are also taken into full consideration. The derived conditions are formulated in terms of linear matrix inequalities (LMIs) that guarantee the stability and extended dissipativity of delayed neural networks. Numerical examples are presented to show the improvement and effectiveness of the results.
In this presentation, we use the following notation. We denote by \(\mathbb{R}^{n}\) the n-dimensional Euclidean space and by \(\mathbb {R}^{m\times n}\) the set of all \(m\times n\) real matrices. The asterisk ∗ denotes the symmetric part of a symmetric matrix, and \(\operatorname{diag}\{\cdots\}\) denotes a diagonal matrix. The notation \(P>0\) (\(P\geq0\)) means that the matrix P is symmetric positive definite (positive semidefinite). By I and 0 we denote the identity and zero matrices of appropriate dimensions, respectively. The superscript T stands for matrix transposition, \(\operatorname{sym}(A)\) is defined as \(A+A^{T}\), and \(\|\cdot\|\) refers to the Euclidean vector norm and its induced matrix norm. For a real matrix N, \(N^{\perp}\) denotes its orthogonal complement with maximum row rank.
2 Preliminaries
Consider the class of neural networks with time-varying delay described by
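(The displayed model is not reproduced in this version. The following form is consistent with the notation defined below; the placement of the disturbance matrix D and the form of the output equation are assumptions.)

\[ \dot{x}(t)=-Cx(t)+Af\bigl(x(t)\bigr)+Bf\bigl(x(t-h(t))\bigr)+D\omega(t),\qquad y(t)=f\bigl(x(t)\bigr),\qquad x(t)=\phi(t),\ t\in[-h,0], \quad (1) \]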
where \(x(t)=[x_{1}(t),x_{2}(t),\ldots,x_{n}(t)]^{T}\in\mathbb{R}^{n}\), and \(x_{i}(t)\) denotes the state of the ith neuron at time t; \(f(x(t))=[f_{1}(x_{1}(t)),f_{2}(x_{2}(t)),\ldots ,f_{n}(x_{n}(t))]^{T}\in\mathbb{R}^{n}\), and \(f_{i}(x_{i}(t))\) is the activation function of the ith neuron at time t; \(y(t)\) is the output of the neural network; \(C=\operatorname{diag}(c_{1},c_{2},\ldots,c_{n})\) describes the rate with which each neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; A, B, and D denote constant matrices of appropriate dimensions; \(\phi(t)\) is the initial condition; \(h(t)\) is the time-varying delay satisfying \(0\leq h(t)\leq h\), \(\dot {h}(t)\leq\mu<1\); and \(\omega(t)\in\mathbb{R}^{n}\) is the disturbance input belonging to \(\ell_{2}[0,\infty)\).
Assumption 2.1
As assumed in many references, such as [45], the activation function \(f_{i}(\cdot)\) of neural network (1) is continuous and bounded, and there exist constants \(\lambda^{-}_{i}\) and \(\lambda^{+}_{i}\) such that
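(The displayed condition is restored from the sector bounds used throughout the paper.)

\[ \lambda_{i}^{-}\leq\frac{f_{i}(a)-f_{i}(b)}{a-b}\leq\lambda_{i}^{+},\qquad \forall a,b\in\mathbb{R},\ a\neq b,\ i=1,2,\ldots,n. \]

In addition, \(f_{i}(0)=0\) is presumed, as used in the case \(b=0\) in Section 3.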
The following lemmas, definition, and assumption play a key role in deriving the main results of this paper.
Lemma 2.1
([47])
For a given matrix \(M>0\), the following inequality holds for all continuous differentiable functions \(x:[a,b]\rightarrow\mathbb{R}^{n}\):
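(The display is restored in the standard Wirtinger-based form of [47].)

\[ \int_{a}^{b}\dot{x}^{T}(s)M\dot{x}(s)\,ds\geq\frac{1}{b-a}\xi_{1}^{T}(t)M\xi_{1}(t)+\frac{3}{b-a}\xi_{2}^{T}(t)M\xi_{2}(t), \]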
where \(\xi_{1}(t)=x(b)-x(a)\) and \(\xi_{2}(t)=x(b)+x(a)-\frac{2}{b-a}\int ^{b}_{a}x(s)\,ds\).
Lemma 2.2
([14])
For any constant matrices \(N\in\mathbb{R}^{n_{a}\times n_{b}}\), \(X\in \mathbb{R}^{n_{a}\times n_{a}}\), \(Y\in\mathbb{R}^{n_{a}\times n_{b}}\), and \(R\in\mathbb{R}^{n_{b}\times n_{b}}\), with \(\bigl [ {\scriptsize\begin{matrix}{} X & Y \cr * & R\end{matrix}} \bigr ]\geq0\), the following inequality holds for any \(a\in\mathbb{R}^{n_{a}}\) and \(b\in\mathbb{R}^{n_{b}}\):
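\[ -2a^{T}Nb\leq \begin{bmatrix} a\\ b \end{bmatrix}^{T} \begin{bmatrix} X & Y-N\\ * & R \end{bmatrix} \begin{bmatrix} a\\ b \end{bmatrix}. \]

(The display is restored in the standard form of the inequality from [14].)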
Applying this lemma yields the following new integral inequality.
Lemma 2.3
For any constant matrices \(R\in\mathbb{R}^{n\times n}\), \(X\in\mathbb {R}^{2n\times2n}\), and \(Y\in\mathbb{R}^{2n\times n}\) with \(\bigl [ {\scriptsize\begin{matrix}{} X & Y \cr * & R\end{matrix}} \bigr ]\geq0\) and scalars \(b>a>0\) such that the following inequality is well defined, we have:
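(The display of inequality (5) is not reproduced in this version; the following is a reconstruction consistent with the proof below and with the special case stated in Remark 2.1.)

\[ -\int_{a}^{b}\int_{s}^{b}\dot{x}^{T}(u)R\dot{x}(u)\,du\,ds\leq\frac{(b-a)^{2}}{2}\varpi^{T}(t)X\varpi(t)+2(b-a)\varpi^{T}(t)Y[I\ \ {-I}]\varpi(t), \quad (5) \]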
where \(\varpi(t)=[x^{T}(b)\ \int^{b}_{a}\frac{x^{T}(s)}{b-a}\,ds]^{T}\).
Proof
It is easy to see that
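\[ \int_{a}^{b}\int_{s}^{b}\dot{x}(u)\,du\,ds=\int_{a}^{b}\bigl(x(b)-x(s)\bigr)\,ds=(b-a)x(b)-\int_{a}^{b}x(s)\,ds=(b-a)[I\ \ {-I}]\varpi(t). \]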
Therefore, the following equation holds for any \(N_{1}, N_{2}\in \mathbb{R}^{n\times n}\):
where \(N=[N_{1}\ N_{2}]\). Applying Lemma 2.2 yields
To sum up, we have
After a simple rearrangement, we obtain (5). This completes the proof. □
Remark 2.1
Inequality (5) is called an integral inequality. In this paper, it plays a key role in the derivation of delay-dependent stability criteria. If we let \(Y=\frac{2}{b-a}[-R\ R]^{T}\) and \(X=YR^{-1}Y^{T}\), then (5) reduces to \(-\int_{a}^{b}\int_{s}^{b}\dot{x}^{T}(u)R\dot{x}(u)\,du\,ds \leq-\frac{2}{(b-a)^{2}}(\int_{a}^{b}\int_{s}^{b}\dot {x}(u)\,du\,ds)^{T} R(\int_{a}^{b}\int_{s}^{b}\dot{x}(u)\,du\,ds) \), which shows that (5) provides extra freedom in deriving stability criteria and makes it possible to find a tighter bound.
Lemma 2.4
([22])
For any vectors \(x_{1}\), \(x_{2}\), constant matrices \(Q_{i}\), \(i=1,\ldots,4\), and \(S_{i}\), \(i=1,2\), and real scalars \(\alpha\geq0\), \(\beta\geq0\) satisfying \(\alpha+\beta=1\), the following inequality holds:
subject to
Lemma 2.5
([48])
Let \(\zeta\in\mathbb{R} ^{n}\), \(\Phi=\Phi^{T}\in \mathbb{R} ^{n \times n}\), and \(B\in\mathbb{R} ^{m \times n}\) with \(\operatorname{rank}(B)< n\). Then, the following two statements are equivalent:
- (a) \(\zeta^{T}\Phi\zeta<0\), \(B\zeta=0\), \(\zeta\neq0\);
- (b) \((B^{\perp})^{T}\Phi B^{\perp}<0\), where \(B^{\perp}\) is a right orthogonal complement of B.
Definition 2.1
([40])
For given matrices \(\Psi_{1}\), \(\Psi_{2}\), \(\Psi_{3}\), and \(\Psi_{4}\) satisfying Assumption 2.2, system (1) is said to be extended dissipative if for any \(t_{f}\geq0\) and all \(\omega(t)\in\ell _{2}[0,\infty)\), under the zero initial state, the following inequality holds:
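(The display is restored in the form introduced in [40].)

\[ \int_{0}^{t_{f}}J(t)\,dt\geq\sup_{0\leq t\leq t_{f}}y^{T}(t)\Psi_{4}y(t), \quad (6) \]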
where \(J(t)=y^{T}(t)\Psi_{1}y(t)+2y^{T}(t)\Psi_{2}\omega(t)+\omega ^{T}(t)\Psi_{3}\omega(t)\).
Assumption 2.2
For given real symmetric matrices \(\Psi_{1}\), \(\Psi_{2}\), \(\Psi_{3}\), and \(\Psi_{4}\) the following conditions are satisfied:
- (1) \(\Psi_{1}\leq0\), \(\Psi_{3}>0\), and \(\Psi_{4}\geq0\);
- (2) \((\|\Psi_{1}\|+\|\Psi_{2}\|)\cdot\|\Psi_{4}\|=0\).
Remark 2.2
Requiring inequality (6) to hold for general weighting matrices \(\Psi_{1}\), \(\Psi_{2}\), \(\Psi_{3}\), and \(\Psi_{4}\) makes the analysis more involved than for any single index, but it yields a unified treatment: the performance index in (6) is an extended index that recovers more familiar performance measures by setting the weighting matrices \(\Psi_{i}\) (\(i=1,2,3,4\)). More specifically, (6) becomes the \(\ell_{2}-\ell_{\infty}\) performance when \(\Psi_{1}=\Psi_{2}=0\), \(\Psi_{3}=\gamma^{2}I\), and \(\Psi_{4}=I\); the \(H_{\infty}\) performance when \(\Psi_{1}=-I\), \(\Psi_{2}=\Psi_{4}=0\), and \(\Psi_{3}=\gamma^{2}I\); the passivity performance when \(\Psi_{1}=\Psi_{4}=0\), \(\Psi_{2}=I\), and \(\Psi_{3}=\gamma I\); and the \((\mathcal{Q},\mathcal{S},\mathcal{R})\)-dissipativity performance when \(\Psi_{1}=\mathcal{Q}\), \(\Psi_{2}=\mathcal{S}\), \(\Psi_{3}=\mathcal{R}-\alpha I\), and \(\Psi_{4}=0\).
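To make the role of the weighting matrices concrete, the following short sketch (our own illustrative helper, not code from the paper; the function name and signature are hypothetical) assembles \((\Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4})\) for each special case listed above:

```python
import numpy as np

def performance_weights(kind, n, gamma=1.0, alpha=0.0, Q=None, S=None, R=None):
    """Return (Psi1, Psi2, Psi3, Psi4) realizing a classical performance
    index as a special case of the extended dissipativity inequality (6).
    Hypothetical helper for illustration only."""
    I, Z = np.eye(n), np.zeros((n, n))
    if kind == "l2_linf":      # l2-l_infinity performance
        return Z, Z, gamma**2 * I, I
    if kind == "hinf":         # H_infinity performance
        return -I, Z, gamma**2 * I, Z
    if kind == "passivity":    # passivity performance
        return Z, I, gamma * I, Z
    if kind == "qsr":          # (Q,S,R)-dissipativity
        return Q, S, R - alpha * I, Z
    raise ValueError(f"unknown performance index: {kind}")

# Example: weights for the H_infinity index with gamma = 0.9.
Psi1, Psi2, Psi3, Psi4 = performance_weights("hinf", n=2, gamma=0.9)
```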
3 Main results
In this section, new stability criteria for system (1) are derived by using the Lyapunov method and the LMI framework. For simplicity of matrix and vector representation, \(e_{i}\in\mathbb{R}^{9n\times n}\) (\(i=1,\ldots,9\)) are defined as block entry matrices; for example, \(e^{T}_{4}=[0\ 0 \ 0 \ I \ 0 \ 0 \ 0 \ 0 \ 0]\). The other notation is defined as follows:
3.1 Stability analysis
The following theorem is given for system (1) with \(\omega(t)=0\) as the first result.
Theorem 3.1
For given scalars \(0<\delta\leq1\), \(h>0\), and μ and diagonal matrices \(\lambda_{m}=\operatorname{diag}\{\lambda_{1}^{-},\ldots,\lambda_{n}^{-}\}\) and \(\lambda_{M}=\operatorname{diag}\{\lambda_{1}^{+},\ldots,\lambda_{n}^{+}\}\), system (1) with \(\omega(t)=0\) is asymptotically stable if there exist positive definite matrices P, \(Q_{i}\), \(U_{i}\), \(R_{i}\) (\(i=1,2\)), positive diagonal matrices \(K_{i}=\operatorname{diag}(k_{i1},\ldots,k_{in})\) (\(i=1,2\)), \(H_{i}=\operatorname{diag}(h_{i1},\ldots,h_{in})\) (\(i=1,\ldots,4\)), and \(\Pi _{i}=\operatorname{diag}(\pi_{i1},\ldots,\pi_{in})\) (\(i=1,\ldots,6\)), and any matrices \(Y_{i}\) (\(i=1,\ldots,4\)), \(S_{i}\) (\(i=1,2\)), \(F_{i}\) (\(i=1,2\)), and \(X_{i}\) (\(i=1,\ldots,4\)) of appropriate dimensions such that the following conditions hold:
Proof
Let us consider the Lyapunov-Krasovskii functional candidate
where
and
Then, calculating the time derivative of \(V(t,x_{t})\) along the trajectory of system (1) yields
By using Lemma 2.1 we can obtain
Using Jensen's inequality to estimate the \(U_{2}\)-dependent integral term in (13) yields
On the one hand, from Lemma 2.4 it is clear that if there exist matrices \(S_{1}\) and \(S_{2}\) satisfying (9), then the \(U_{1}\)-dependent integral term in (13), the \(R_{1}\)-dependent integral term in (14), and the \(R_{2}\)-dependent integral term in (15) can be estimated as follows:
where \(\alpha=\frac{h(t)}{h}\) and \(\beta=\frac{h-h(t)}{h}\).
On the other hand, according to Lemma 2.3, we obtain
Now, letting \(M=\varpi_{1}^{T}(t)X_{1}\varpi_{1}(t)+\varpi _{4}^{T}(t)X_{4}\varpi_{4}(t)\) and \(Z=\varpi_{2}^{T}(t)X_{2}\varpi _{2}(t)+\varpi_{3}^{T}(t)X_{3}\varpi_{3}(t)\), define the vector-valued function
When \(h(t)=\frac{h}{M+Z}\), we have \(\dot{g}(h(t))=0\), and at this point g attains its minimum value. Hence, the maximum value is attained at the endpoints \(h(t)=0\) and \(h(t)=h\).
Case I: when \(M\geq Z\),
Case II: when \(M< Z\),
In addition, for any matrices \(F_{1}\) and \(F_{2}\) of appropriate dimensions, the following zero equation holds:
Furthermore, by introducing a parameter δ for the bound of the activation function we will consider two subintervals, \(\lambda_{i}^{-}\leq(f_{i}(a)-f_{i}(b))/(a-b)\leq\lambda_{i}^{\delta}\) and \(\lambda_{i}^{\delta}\leq(f_{i}(a)-f_{i}(b))/(a-b)\leq\lambda_{i}^{+}\), where \(\lambda_{i}^{\delta}=\lambda_{i}^{-}+\delta(\lambda _{i}^{+}-\lambda_{i}^{-})\).
Case I: \(\lambda_{i}^{-}\leq\frac{f_{i}(a)-f_{i}(b)}{a-b}\leq\lambda _{i}^{\delta}\).
For Case I, the following conditions hold:
and
Then, for any appropriate diagonal matrices \(H_{i}=\operatorname{diag}\{h_{i1},\ldots ,h_{in}\}>0\), \(i=1,2\), we have:
When \(b=0\), we have \(\lambda_{i}^{-}\leq\frac{f_{i}(a)}{a}\leq\lambda _{i}^{\delta}\) and, for any scalars \(\pi_{1i}>0\), \(i=1,2,\ldots,n\),
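the following sector condition holds (the display is reconstructed from the surrounding text):

\[ \pi_{1i}\bigl[f_{i}\bigl(x_{i}(t)\bigr)-\lambda_{i}^{-}x_{i}(t)\bigr]\bigl[\lambda_{i}^{\delta}x_{i}(t)-f_{i}\bigl(x_{i}(t)\bigr)\bigr]\geq0, \]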
which is equivalent to
where \(\Pi_{1}=\operatorname{diag}\{\pi_{11},\ldots,\pi_{1n}\}\).
Similarly, for any appropriate diagonal matrices \(\Pi_{i}=\operatorname{diag}\{\pi _{i1},\ldots,\pi_{in}\}>0\), \(i=2,3\), we have:
Combining inequalities (11)-(27) gives the upper bound of \(\dot{V}(t,x_{t})\):
Case II: \(\lambda_{i}^{\delta}\leq\frac{f_{i}(a)-f_{i}(b)}{a-b}\leq \lambda_{i}^{+}\).
Case II can be handled similarly to Case I, and we obtain:
where \(H_{3}\), \(H_{4}\), and \(\Pi_{i}\) (\(i=4,\ldots,6\)) are defined in Theorem 3.1.
Combining inequalities (11)-(22) with (29) gives the upper bound of \(\dot{V}(t,x_{t})\):
Since \(\Xi_{[h(t)]}\) depends on \(h(t)\), applying Lemma 2.5 with \(\Gamma\xi(t)=0\) shows that if LMIs (7) and (8) hold, then system (1) with \(\omega(t)=0\) is asymptotically stable. This ends the proof. □
3.2 Extended dissipative analysis
In this section, assuming zero initial conditions, we establish the extended dissipativity condition for all nonzero \(\omega(t)\in\ell_{2}[0,\infty)\).
Theorem 3.2
For given scalars \(0<\delta\leq1\), \(h>0\), and μ, diagonal matrices \(\lambda_{m}=\operatorname{diag}\{\lambda_{1}^{-},\ldots, \lambda_{n}^{-}\}\) and \(\lambda_{M}=\operatorname{diag}\{\lambda_{1}^{+},\ldots,\lambda_{n}^{+}\}\), and matrices \(\Psi_{i}\) (\(i=1,\ldots,4\)) satisfying Assumption 2.2, system (1) is asymptotically stable and extended dissipative if there exist positive definite matrices P, \(Q_{i}\), \(U_{i}\), \(R_{i}\) (\(i=1,2\)), positive diagonal matrices \(K_{i}=\operatorname{diag}(k_{i1},\ldots,k_{in})\) (\(i=1,2\)), \(H_{i}=\operatorname{diag}(h_{i1},\ldots,h_{in})\) (\(i=1,\ldots,4\)), and \(\Pi _{i}=\operatorname{diag}(\pi_{i1},\ldots,\pi_{in})\) (\(i=1,\ldots,6\)), and any matrices \(Y_{i}\) (\(i=1,\ldots,4\)), \(S_{i}\) (\(i=1,2\)), \(F_{i}\) (\(i=1,2\)), and \(X_{i}\) (\(i=1,\ldots,4\)) of appropriate dimensions such that LMI (9) and the following conditions hold:
where
Proof
From (28) and (30) we have \(\dot{V}(t,x_{t})\leq\xi^{T}(t)(\Xi_{[h(t)]}+\Phi_{i}+\Sigma_{j})\xi(t)\) (\(\forall i,j=a,b\)), and it is clear that
where \(\bar{\xi}(t)=[\xi^{T}(t)\ \omega^{T}(t)]^{T}\), and \(J(t)\) is defined in Definition 2.1. By Lemma 2.5, (31) and (32) are equivalent to \(\bar{\xi}^{T}(t)(\bar{\Xi }_{[h(t)]}+\bar{\Phi}_{i}+\bar{\Sigma}_{j})\bar{\xi}(t)<0\) (\(\forall i,j=a,b\)). Therefore, we can obtain
By integrating both sides of this inequality from 0 to \(t\geq0\) we can obtain
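(a reconstruction consistent with the steps below; under the zero initial condition, \(V(0,x_{0})=0\))

\[ \int_{0}^{t}J(s)\,ds\geq V(t,x_{t})-V(0,x_{0})=V(t,x_{t})\geq x^{T}(t)Px(t). \quad (34) \]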
We consider the two cases \(\Psi_{4}=0\) and \(\Psi_{4}>0\) separately: the extended dissipativity condition covers the strict \((\mathcal{Q},\mathcal{S},\mathcal{R})\)-dissipativity condition, the \(H_{\infty}\) performance, and the passivity performance when \(\Psi_{4}=0\), and the \(\ell_{2}-\ell_{\infty}\) performance criterion when \(\Psi_{4}>0\).
On one hand, we consider \(\Psi_{4}=0\) and from (34) we can get that
This implies that the extended dissipativity inequality (6) holds with \(\Psi_{4}=0\).
On the other hand, when \(\Psi_{4}>0\), as mentioned in Assumption 2.2, we have the matrices \(\Psi_{1}=0\), \(\Psi_{2}=0\), and \(\Psi_{3}>0\) in this case. Then, for any \(0\leq t\leq t_{f}\), (34) leads to \(\int _{0}^{t_{f}}J(s)\,ds\geq\int_{0}^{t}J(s)\,ds\geq x^{T}(t)Px(t)\). Therefore, according to (33), we have
From (35) and (36) we get that system (1) is extended dissipative. This completes the proof. □
4 Illustrative examples
In this section, we introduce two examples to illustrate the merits of the derived results.
Example 1
Consider the neural networks (1) with the following parameters:
In this example, D is chosen to be zero for the stability analysis. Our purpose is to estimate the allowable upper bound of the delay h under different μ such that system (1) is globally asymptotically stable. For \(\delta=0.8\), Table 1 shows that the stability criterion in this paper gives much less conservative results than those in [9–12, 41]. In addition, for the case \(\mu=0.8\), \(h=8.2046\), and the initial state \((-0.2,0.2)^{T}\), the stability results are further verified by Figure 1.
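The LMI conditions in Theorems 3.1 and 3.2 are solved numerically with a semidefinite programming solver. As a minimal sketch of how such a feasibility test is run in practice, the following code checks a much simpler classical delay-independent stability criterion (not the paper's Theorem 3.1, whose LMIs are not reproduced here) based on the functional \(V=x^{T}Px+\int_{t-h(t)}^{t}x^{T}(s)Sx(s)\,ds\) and the sector conditions \(0\leq f_{i}(a)/a\leq\lambda_{i}^{+}\); the network data are hypothetical, not Example 1's matrices:

```python
import cvxpy as cp
import numpy as np

# Hypothetical network data (NOT Example 1's matrices).
n = 2
C = np.diag([2.0, 2.0])                  # self-feedback rates
A = np.array([[0.1, -0.3], [0.2, 0.1]])  # connection weights
B = np.array([[0.2, 0.1], [-0.1, 0.2]])  # delayed connection weights
Lam = np.diag([0.4, 0.8])                # sector upper bounds lambda_i^+
mu = 0.5                                 # bound on the delay derivative

P = cp.Variable((n, n), symmetric=True)
S = cp.Variable((n, n), symmetric=True)
t1 = cp.Variable(n, nonneg=True)         # S-procedure multipliers
t2 = cp.Variable(n, nonneg=True)
T1, T2 = cp.diag(t1), cp.diag(t2)

Z = np.zeros((n, n))
# Quadratic form bounding dV/dt in [x; x(t-h(t)); f(x); f(x(t-h(t)))].
M = cp.bmat([
    [-P @ C - C.T @ P + S, Z,             P @ A + Lam @ T1, P @ B],
    [Z,                    -(1 - mu) * S, Z,                Lam @ T2],
    [(P @ A + Lam @ T1).T, Z,             -2 * T1,          Z],
    [(P @ B).T,            (Lam @ T2).T,  Z,                -2 * T2],
])
eps = 1e-6
constraints = [P >> eps * np.eye(n), S >> eps * np.eye(n),
               0.5 * (M + M.T) << -eps * np.eye(4 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("stability LMI feasible:", prob.status == cp.OPTIMAL)
```

Feasibility of this LMI certifies asymptotic stability under the stated simplified criterion; the paper's conditions (7)-(9) refine this idea with the augmented functional and the bound inequality of Lemma 2.3.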
Example 2
In this example, we demonstrate the generality of the extended dissipativity, which unifies popular and important performance measures such as the \(H_{\infty}\), passivity, dissipativity, and \(\ell_{2}-\ell_{\infty}\) performances. Consider the neural networks (1) with the following parameters:
Case I: \(H_{\infty}\) performance. Let \(\Psi_{1}=-I\), \(\Psi _{2}=0\), \(\Psi_{3}=\gamma^{2}I\), and \(\Psi_{4}=0\). The extended dissipativity reduces to standard \(H_{\infty}\) performance. By Theorem 3.2, the allowable \(H_{\infty}\) performance γ can be obtained for the case \(\mu =0.5\) and different δ and h. The relationship among γ, δ, and h is demonstrated in Table 2. For \(\mu=0.5\) and fixed h, we can see from Table 2 that the minimum value of γ becomes smaller when the value of δ increases.
Case II: \(\ell_{2}-\ell_{\infty}\) performance. When we let \(\Psi _{1}=0\), \(\Psi_{2}=0\), \(\Psi_{3}=\gamma^{2}I\), and \(\Psi_{4}=I\), the extended dissipativity becomes the \(\ell_{2}-\ell_{\infty}\) performance. For \(\mu=0.8\), the different values of γ are listed in Table 3 by solving the LMIs in Theorem 3.2 with various values of δ and h. It is easy to see that the best value of δ is 0.7.
Case III: passivity performance. When we let \(\Psi_{1}=0\), \(\Psi _{2}=I\), \(\Psi_{3}=\gamma I\) and \(\Psi_{4}=0\), the passivity performance is obtained. For given \(\mu=0.5\) and \(\delta=0.5\), the maximum values of h with various γ are obtained in Table 4 by solving the LMIs in Theorem 3.2.
Case IV: dissipativity. When we let \(\Psi_{1}=-0.5I\), \(\Psi _{2}=I\), \(\Psi_{3}=2I\), and \(\Psi_{4}=0\), the dissipativity performance is obtained. For given \(\mu=0.5\) and \(\delta=0.1\), the maximum values of h with various γ are obtained in Table 5 by solving the LMIs in Theorem 3.2.
Finally, through Example 1, we conclude that our results improve on the recent work [41] by 3.84% and 3.37% for \(\mu=0.8\) and 0.9, respectively.
5 Conclusions
In this paper, we investigated the problem of extended dissipativity analysis for a class of neural networks with time-varying delay. The extended dissipativity generalizes several previously known results, covering the \(H_{\infty}\), passivity, dissipativity, and \(\ell_{2}-\ell _{\infty}\) performance in a unified framework. By introducing a suitable augmented Lyapunov-Krasovskii functional and exploiting sufficient information on the neuron activation functions together with a new bound inequality, we gave sufficient conditions in terms of linear matrix inequalities (LMIs) that guarantee the stability and extended dissipativity of delayed neural networks. At present, we only give theoretical results; we will try to extend them to real-life applications in the future.
References
Cichocki, A, Unbehauen, R: Neural Networks for Optimization and Signal Processing. Wiley, Chichester (1993)
Watta, PB, Wang, K, Hassoun, MH: Recurrent neural nets as dynamical Boolean systems with applications to associative memory. IEEE Trans. Neural Netw. 8, 1268-1280 (1997)
Wu, Z, Shi, P, Su, H, Chu, J: Delay-dependent exponential stability analysis for discrete-time switched neural networks with time-varying delay. Neurocomputing 74, 1626-1631 (2011)
Kwon, O, Park, J, Lee, S, Cha, E: Analysis on delay-dependent stability for neural networks with time-varying delays. Neurocomputing 103, 114-120 (2013)
Tian, JK, Xiong, WJ, Xu, F: Improved delay-partitioning method to stability analysis for neural networks with discrete and distributed time-varying delays. Appl. Math. Comput. 233, 152-164 (2014)
Rakkiyappan, R, Sakthivel, N, Park, JH, Kwon, OM: Sampled-data state estimation for Markovian jumping fuzzy cellular neural networks with mode-dependent probabilistic time-varying delays. Appl. Math. Comput. 221, 741-769 (2013)
Zhang, H, Wang, Z, Liu, D: Robust stability analysis for interval Cohen-Grossberg neural networks with unknown time-varying delays. IEEE Trans. Neural Netw. 21, 1942-1954 (2009)
Cheng, J, Zhu, H, Zhong, S, Li, G: Novel delay-dependent robust stability criteria for neutral systems with mixed time-varying delays and nonlinear perturbations. Appl. Math. Comput. 219(14), 7741-7763 (2013)
Kwon, OM, Park, JH: Improved delay-dependent stability criterion for neural networks with time-varying delays. Phys. Lett. A 373, 529-535 (2009)
Tian, J, Zhong, S: Improved delay-dependent stability criterion for neural networks with time-varying delay. Appl. Math. Comput. 217, 10278-10288 (2011)
Wang, Y, Yang, C, Zuo, Z: On exponential stability analysis for neural networks with time-varying delays and general activation functions. Commun. Nonlinear Sci. Numer. Simul. 17, 1447-1459 (2012)
Kwon, O, Park, M, Lee, S, Park, J, Cha, E: Stability for neural networks with time-varying delays via some new approaches. IEEE Trans. Neural Netw. Learn. Syst. 24, 181-193 (2013)
Park, JH, Kwon, OM: Further results on state estimation for neural networks of neutral-type with time-varying delay. Appl. Math. Comput. 208, 69-75 (2009)
Moon, YS, Park, PG, Kwon, WH, Lee, YS: Delay-dependent robust stabilization of uncertain state-delayed systems. Int. J. Control 74, 1447-1455 (2001)
Shatyrko, A, Diblík, J, Khusainov, D, Růžičková, M: Stabilization of Lur’e-type nonlinear control systems by Lyapunov-Krasovskii functionals. Adv. Differ. Equ. 2012, 229 (2012)
Zeng, HB, Park, JH, Zhang, CF, Wang, W: Stability and dissipativity analysis of static neural networks with interval time-varying delay. J. Franklin Inst. 352, 1284-1295 (2015)
Feng, JW, Tang, Z, Zhao, Y, Xu, C: Cluster synchronisation of non-linearly coupled Lur’e networks with identical and non-identical nodes and an asymmetrical coupling matrix. IET Control Theory Appl. 7, 2117-2127 (2013)
Tang, Z, Feng, JW, Zhao, Y: Global synchronization of nonlinear coupled complex dynamical networks with information exchanges at discrete-time. Neurocomputing 151, 1486-1494 (2015)
Tang, Z, Park, JH, Lee, TH, Feng, JW: Mean square exponential synchronization for impulsive coupled neural networks with time-varying delays and stochastic disturbances. Complexity (2015). doi:10.1002/cplx.21647
Tang, Z, Park, JH, Lee, TH, Feng, JW: Random adaptive control for cluster synchronization of complex networks with distinct communities. Int. J. Adapt. Control Signal Process. (2015). doi:10.1002/acs.2599
Bara, GI, Boutayeb, M: Static output feedback stabilization with \(H_{\infty}\) performance for linear discrete-time systems. IEEE Trans. Autom. Control 50, 250-254 (2005)
Lee, WI, Lee, SY, Park, PG: Improved criteria on robust stability and \(H_{\infty}\) performance for linear systems with interval time-varying delays via new triple integral functional. Appl. Math. Comput. 243, 570-577 (2014)
Liu, M, Zhang, S, Fan, Z, Zheng, S, Sheng, W: Exponential \(H_{\infty}\) synchronization and state estimation for chaotic systems via a unified model. IEEE Trans. Neural Netw. Learn. Syst. 24, 1114-1126 (2013)
Mahmoud, MS, Ismail, A: Passivity and passification of time-delay systems. J. Math. Anal. Appl. 292, 247-258 (2004)
Xu, M, Zheng, WX, Zou, Y: Passivity analysis of neural networks with time-varying delays. IEEE Trans. Circuits Syst. II 56, 325-329 (2009)
Zhang, L, Shi, P, Boukas, EK, Wang, C: Robust \(\ell_{2}-\ell_{\infty}\) filtering for switched linear discrete time-delay systems with polytopic uncertainties. IET Control Theory Appl. 1, 722-730 (2007)
Wu, Z, Cui, M, Xie, X, Shi, P: Theory of stochastic dissipative systems. IEEE Trans. Autom. Control 56, 1650-1655 (2011)
Han, C, Wu, L, Shi, P, Zeng, Q: On dissipativity of Takagi-Sugeno fuzzy descriptor systems with time-delay. J. Franklin Inst. 349, 3170-3184 (2012)
Wu, ZG, Shi, P, Su, H, Chu, J: Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 24, 345-355 (2013)
Mahmoud, MS, Khan, GD: Dissipativity analysis for discrete stochastic neural networks with Markovian delays and partially known transition matrix. Appl. Math. Comput. 228, 292-310 (2014)
Mahmoud, MS, Nounou, HN: Dissipative analysis and synthesis of time-delay systems. Mediterr. J. Meas. Control 1, 97-108 (2005)
Meisami-Azad, M, Mohammadpour, J, Grigoriadis, KM: Dissipative analysis and control of state-space symmetric systems. Automatica 45, 1574-1579 (2009)
Mahmoud, MS, Nounou, HN, Xia, Y: Robust dissipative control for Internet-based switching systems. J. Franklin Inst. 347, 154-172 (2010)
Mahmoud, MS, Saif, AWA: Dissipativity analysis and design for uncertain Markovian jump systems with time-varying delays. Appl. Math. Comput. 219, 9681-9695 (2013)
Mahmoud, MS, Shi, Y, Al-Sunni, FM: Dissipativity analysis and synthesis of a class of nonlinear systems with time-varying delays. J. Franklin Inst. 346, 570-592 (2009)
Jeltsema, D, Scherpen, JMA: Tuning of passivity-preserving controllers for switched-mode power converters. IEEE Trans. Autom. Control 49, 1333-1344 (2004)
Niu, Y, Wang, X, Lu, J: Dissipative-based adaptive neural control for nonlinear systems. J. Control Theory Appl. 2, 126-130 (2004)
Feng, Z, Lam, J: Stability and dissipativity analysis of distributed delay cellular neural networks. IEEE Trans. Neural Netw. 22, 976-981 (2011)
Wu, ZG, Park, JH, Shu, H, Chu, J: Admissibility and dissipativity analysis for discrete-time singular systems with mixed time-varying delays. Appl. Math. Comput. 218, 7128-7138 (2012)
Zhang, B, Zheng, WX, Xu, S: Filtering of Markovian jump delay systems based on a new performance index. IEEE Trans. Circuits Syst. I 60, 1250-1263 (2013)
Lee, TH, Park, MJ, Park, JH, Kwon, OM, Lee, SM: Extended dissipative analysis for neural networks with time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 25, 1936-1941 (2014)
Feng, Z, Zeng, W: On extended dissipativity of discrete-time neural networks with time delay. IEEE Trans. Neural Netw. Learn. Syst. 26, 3293-3300 (2015)
Zeng, HB, Park, JH, Xia, JW: Further results on dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. Nonlinear Dyn. 79, 83-91 (2015)
Wang, J, Park, JH, Shen, H, Wang, J: Delay-dependent robust dissipativity conditions for delayed neural networks with random uncertainties. Appl. Math. Comput. 221, 710-719 (2013)
Cheng, J, Zhu, H, Ding, Y, Zhong, S, Zhong, Q: Stochastic finite-time boundedness for Markovian jumping neural networks with time-varying delays. Appl. Math. Comput. 242, 281-295 (2014)
Zhang, Y, Yue, D, Tian, E: New stability criteria of neural networks with interval time-varying delays: a piecewise delay method. Appl. Math. Comput. 208, 249-259 (2009)
Seuret, A, Gouaisbaut, F: Wirtinger-based integral inequality: application to time-delay systems. Automatica 49, 2860-2866 (2013)
Skelton, RE, Iwasaki, T, Grigoriadis, KM: A Unified Algebraic Approach to Linear Control Design. Taylor & Francis, New York (1997)
Acknowledgements
The authors would like to thank the editors and the reviewers for their valuable suggestions and comments, which have led to a much improved paper. This work was financially supported by the National Natural Science Foundation of China (No. 61273015, No. 61533006) and the China Scholarship Council.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors drafted the manuscript and read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wang, X., She, K., Zhong, S. et al. On extended dissipativity analysis for neural networks with time-varying delay and general activation functions. Adv Differ Equ 2016, 79 (2016). https://doi.org/10.1186/s13662-016-0769-7
Keywords
- dissipativity
- neural networks
- activation functions
- time delay
- stability