
Dissipative criteria for Takagi–Sugeno fuzzy Markovian jumping neural networks with impulsive perturbations using delay partitioning approach

Abstract

In this work, we investigate the dissipativity of Takagi–Sugeno fuzzy Markovian jumping neural networks with impulsive perturbations via the delay partitioning approach. Using a Lyapunov–Krasovskii functional together with the delay partitioning approach, we derive a set of delay-dependent sufficient criteria for the required results. Furthermore, we state the obtained sufficient conditions in the form of linear matrix inequalities (LMIs), which can be checked by the standard MATLAB LMI toolbox. The main advantage of this work is its reduced conservatism, which stems mainly from the delay partitioning approach. Finally, we provide numerical examples with simulations to demonstrate the applicability of the proposed method.

1 Introduction

In the last twenty years, neural networks have received increasing attention because of their applications in various fields such as signal processing, pattern recognition, optimization problems, associative memories, and so on [1, 9, 30, 31, 42, 43]. In particular, the stability theory of neural networks has become an important topic in both theory and practice, since stability is one of the major problems related to the dynamic behavior of neural networks. Besides, time delays are frequently encountered in hardware implementations of neural networks, and they are a source of instability and poor performance. Hence the stability analysis of neural networks with time delays has received remarkable attention in recent years [12, 35, 42, 43].

In practice, most neural networks are represented by nonlinear models, so it is important and necessary to design an appropriate neural network approach for nonlinear systems. In this regard, the Takagi–Sugeno (T–S) fuzzy model provides an effective approach to complex nonlinear systems in terms of fuzzy sets and linear input–output relations [21, 24]. The main advantage of the T–S fuzzy model is that it allows the analysis and design techniques of linear systems to be applied to nonlinear systems. Thus, many authors extended T–S fuzzy models to describe different types of neural networks with time delays and to establish the stability of the concerned network models [5, 25]. Very recently, Shen et al. [27] obtained an asynchronous state estimator for fuzzy Markovian jumping neural networks with uncertain measurements in a finite time interval. Based on the Lyapunov stability theory and a Wirtinger-based integral inequality, sufficient conditions were constructed in [4] to ensure that the state estimation error system is robustly stable along with a guaranteed dissipative performance of T–S fuzzy neural networks.

On the other side, Markovian jump systems (MJSs), a class of switched systems consisting of an indexed family of subsystems and a Markov chain orchestrating the switching, have received increasing attention [18, 19]. Because of sudden variations, random component failures, and abrupt environmental changes in dynamic systems, MJSs can model aircraft control, solar receiver control, manufacturing systems, networked control systems, power systems, and other practical systems [15, 26, 32]. During the past few years, MJSs have been applied in various disciplines of science and engineering, and a great number of results have been obtained [2, 3, 10, 12, 13, 29]. Based on a generalized double integral inequality, dissipativity conditions were proposed in [12] using a free-matrix-based integral inequality and Finsler's lemma. In [13] the authors studied the passivity and dissipativity of Markovian jump neural networks including two types of additive time-varying delays. In [29] results on the dissipativity of Markovian jump neural networks were established using some generalized integral inequalities. In [3], delay-dependent stability and dissipativity criteria for generalized neural networks with Markovian jump parameters and two delay components were derived using the reciprocally convex approach.

On the other hand, dissipativity is an important concept in dynamical systems, closely related to the intuitive phenomenon of loss or dissipation of energy. Moreover, dissipativity theory gives a framework for the control design and stability analysis of practical control systems under input–output energy-related considerations. Within this framework, many problems were investigated for continuous-time neural networks [12, 13, 23, 35, 37, 41] and discrete-time neural networks [17], but only a few works address the dissipativity of T–S fuzzy neural networks [11, 14]. Also, it is well known that impulsive effects arise in dynamical models in many areas such as medicine, biology, economics, and telecommunications. Roughly speaking, the states of neural networks often undergo rapid disruption and sudden changes at certain moments of time, which leads to impulsive effects. Thus impulses should be taken into account while studying the dissipativity of neural networks, and the corresponding issues have been studied in the literature [20, 22, 44]. However, up to now, dissipativity analysis of Markovian jumping T–S fuzzy neural networks together with impulsive effects has not yet been reported, which is still an open challenge. All this motivated us to derive a new set of dissipativity conditions for fuzzy Markovian jumping neural networks with impulsive perturbations via the delay partitioning approach.

The main contributions of this paper are summarized as follows:

  1. (i)

    Uncertain parameters, Markovian jumping, nonlinearities, time delays, dissipativity conditions, and impulsive perturbations are considered in the framework of stability analysis and design of Takagi–Sugeno fuzzy neural networks.

  2. (ii)

    By employing a proper Lyapunov–Krasovskii functional, the asymptotic stability of the addressed neural networks is verified via less conservative stability conditions.

  3. (iii)

    Some novel uncertain parameters are handled directly in the Lyapunov–Krasovskii functional, which yields sufficient conditions for the asymptotic stability of the designed neural networks.

  4. (iv)

    The importance of the proposed algorithm is illustrated by numerical examples.

2 Problem formulation

Let \(\{r(t),t\geq 0\}\) be a right-continuous Markovian process taking values in the finite space \(S=\{1,2,\ldots,s\}\) with generator \(\varGamma =(\pi _{ij})\) (\(i,j\in S\)) given by

$$\begin{aligned}& \operatorname{Pr}\bigl\{ r(t+\Delta t)=j\mid r(t)=i\bigr\} = \textstyle\begin{cases} \pi _{ij}\Delta t + o(\Delta t), & i\neq j, \\ 1 + \pi _{ii}\Delta t + o(\Delta t), & i=j, \end{cases}\displaystyle \end{aligned}$$

where \(\Delta t\geq 0\), \(\lim_{\Delta t\rightarrow 0}(o(\Delta t)/ \Delta t)=0\), \(\pi _{ij}\geq 0\) for \(j\neq i\) is the transition rate from mode i at time t to mode j at time \(t+\Delta t\), and \(\pi _{ii}=- \sum_{j=1,j\neq i}^{s} \pi _{ij}\).
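For intuition, a sample path of the mode process \(r(t)\) can be generated directly from the generator Γ by drawing exponential holding times. The following Python sketch is purely illustrative (it is not part of the derivations); the two-mode generator supplied at the end is the one used in Example 1 below.

```python
import numpy as np

def simulate_markov_chain(gamma, t_end, i0=0, rng=None):
    """Sample a right-continuous Markov chain with generator `gamma`.

    The holding time in mode i is Exp(-gamma[i, i]); the next mode j != i
    is drawn with probability gamma[i, j] / (-gamma[i, i]).
    Returns the jump times and the visited modes.
    """
    rng = np.random.default_rng() if rng is None else rng
    times, modes = [0.0], [i0]
    t, i = 0.0, i0
    while t < t_end:
        rate = -gamma[i, i]
        if rate <= 0:                    # absorbing mode: no more jumps
            break
        t += rng.exponential(1.0 / rate)
        probs = gamma[i].copy()
        probs[i] = 0.0
        probs /= rate
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        modes.append(i)
    return np.array(times), np.array(modes)

# Two-mode generator from Example 1 below (rows sum to zero):
gamma = np.array([[-7.0, 7.0],
                  [ 5.0, -5.0]])
times, modes = simulate_markov_chain(gamma, t_end=10.0)
print(list(zip(times.round(3), modes)))
```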

Consider the neural networks of Markovian jumping parameters with mixed interval time-varying delays of the following form:

$$\begin{aligned} \begin{gathered} \dot{v}_{i}(t) = -a_{i} \bigl(r(t)\bigr)v_{i}(t) + \sum_{j=1}^{n} w_{1ij}\bigl(r(t)\bigr) f_{j}\bigl(v_{j}(t)\bigr) + \sum_{j=1}^{n} w_{2ij}\bigl(r(t) \bigr) f_{j}\bigl(v_{j}\bigl(t-\tau (t)\bigr)\bigr) \\ \hphantom{\dot{v}_{i}(t) }\quad {} + \sum_{j=1}^{n} w_{3ij}\bigl(r(t)\bigr) \int _{t-d(t)}^{t} f_{j} \bigl(v_{j}(s)\bigr)\,ds + u_{i}(t), \quad t>0, t\neq t_{k}, \\ y_{i}(t)= f_{i}\bigl(v_{i}(t)\bigr), \\ v(t_{k}) = I_{k}\bigl(r(t)\bigr) \bigl(v \bigl(t_{k}^{-}\bigr)\bigr), \quad t=t_{k}, k \in Z_{+}, \end{gathered} \end{aligned}$$
(1)

where \(i=1,2,\ldots,n\), \(v_{i}(t)\) is the state of the ith neuron, \(a_{i}(r(t))>0\) denotes the rate with which the ith cell resets its potential to the resting state when isolated from the other cells and inputs, \(w_{1ij}(r(t))\), \(w_{2ij}(r(t))\), and \(w_{3ij}(r(t))\) are the connection weights at time t, \(u(t)=[u_{1}(t),u_{2}(t),\ldots, u_{n}(t)]^{T}\) is the external input, \(y(t)=[y_{1}(t), y_{2}(t),\ldots, y_{n}(t)]^{T}\) is the output, \(I_{k}(r(t))(\cdot )\) is a constant real matrix acting at the impulse times \(t_{k}\), and \(f_{i}(\cdot )\) stands for the activation function of the ith neuron. In addition, we suppose that the discrete delay \(\tau (t)\) and the distributed delay \(d(t)\) satisfy

$$\begin{aligned}& 0\leq \tau _{1} \leq \tau (t) < \tau _{2},\quad\quad \dot{\tau }(t) \leq \mu _{1}, \quad\quad 0 \leq d(t) \leq d,\quad\quad \dot{d}(t)\leq \mu _{2}, \end{aligned}$$
(2)

where \(\tau _{1}\), \(\tau _{2}\), d, \(\mu _{1}\), \(\mu _{2}\) are constants. We consider system (1) together with the initial condition \(v(s)=\psi (s)\), \(s\in [-\max \{\tau _{2},d\},0]\).

Then we rewrite model (1) as

$$\begin{aligned} \begin{gathered} \dot{v}(t)= -A\bigl(r(t)\bigr)v(t) + W_{1}\bigl(r(t)\bigr) f\bigl(v(t)\bigr) + W_{2}\bigl(r(t) \bigr) f\bigl(v\bigl(t-\tau (t)\bigr)\bigr) \\ \hphantom{\dot{v}(t)} \quad {}+ W_{3}\bigl(r(t)\bigr) \int _{t-d(t)}^{t} f \bigl(v(s)\bigr)\,ds + u(t), \quad t>0, t\neq t_{k}, \\ y(t) = f\bigl(v(t)\bigr), \\ v(t_{k}) = I_{k}\bigl(r(t)\bigr) \bigl(v \bigl(t_{k}^{-}\bigr)\bigr), \quad t=t_{k}, k \in Z_{+}, \end{gathered} \end{aligned}$$
(3)

where \(v(t)=[v_{1}(t),v_{2}(t),\ldots, v_{n}(t)]^{T}\), \(A(r(t))= \operatorname{diag}(a_{1}(r(t)),\ldots,a_{n}(r(t)))\), \(W_{1}(r(t))=(w_{1ij}(r(t)))_{n \times n}\), \(W_{2}(r(t))=(w_{2ij}(r(t)))_{n \times n}\), \(W_{3}(r(t))=(w_{3ij}(r(t)))_{n \times n}\), and \(f(\cdot )=(f _{1}(\cdot ),f_{2}(\cdot ),\ldots,f_{n}(\cdot ))^{T}\).

Let \(v^{*}=(v_{1}^{*},v_{2}^{*},\ldots,v_{n}^{*})\) be an equilibrium point of system (3). In terms of the transformations \(x(\cdot )=v(\cdot )-v^{*}\) and \(f(x(t))=f(x(t)+v ^{*})-f(v^{*})\), system (3) can be written as

$$\begin{aligned} \begin{gathered} \dot{x}(t) = -A\bigl(r(t)\bigr)x(t) + W_{1}\bigl(r(t)\bigr) f\bigl(x(t)\bigr) + W_{2}\bigl(r(t) \bigr) f\bigl(x\bigl(t-\tau (t)\bigr)\bigr) \\ \hphantom{\dot{x}(t)} \quad {}+ W_{3}\bigl(r(t)\bigr) \int _{t-d(t)}^{t} f \bigl(x(s)\bigr)\,ds + u(t), \quad t>0, t\neq t_{k}, \\ y(t) = f\bigl(x(t)\bigr), \\ x(t_{k}) = I_{k}\bigl(r(t)\bigr) \bigl(x \bigl(t_{k}^{-}\bigr)\bigr), \quad t=t_{k}, k \in Z_{+}, \end{gathered} \end{aligned}$$
(4)

where \(x(t)=[x_{1}(t),x_{2}(t),\ldots, x_{n}(t)]^{T}\), \(f(\cdot )=(f _{1}(\cdot ), f_{2}(\cdot ),\ldots, f_{n}(\cdot ))^{T}\).

Further, we express the Markovian jumping neural network with impulsive perturbations by a T–S fuzzy model. We represent the ith rule of this T–S fuzzy model in the following form:

$$\begin{aligned} \begin{gathered} \textbf{IF} \quad v_{1}(t)\text{ is }M_{i1}, v_{2}(t)\text{ is }M_{i2},\ldots, v_{p}(t)\text{ is }M_{ip} \quad \textbf{THEN} \\ \dot{x}(t) = -A_{i}\bigl(r(t)\bigr)x(t) + W_{1i}\bigl(r(t) \bigr) f\bigl(x(t)\bigr) + W_{2i}\bigl(r(t)\bigr) f\bigl(x\bigl(t-\tau (t)\bigr)\bigr) \\ \hphantom{\dot{x}(t)}\quad {} + W_{3i}\bigl(r(t)\bigr) \int _{t-d(t)}^{t} f \bigl(x(s)\bigr)\,ds + u(t), \quad t>0, t\neq t_{k}, \\ y(t) = f\bigl(x(t)\bigr), \\ x(t_{k}) = I_{k}\bigl(r(t)\bigr) \bigl(x \bigl(t_{k}^{-}\bigr)\bigr), \quad t=t_{k}, k \in Z_{+}, i=1,2,\ldots,r, \end{gathered} \end{aligned}$$
(5)

where \(M_{ij}\) are fuzzy sets, \((v_{1}(t),v_{2}(t),\ldots,v_{p}(t))^{T}\) is the premise variable vector, \(x(t)\) denotes the state variable, and r is the number of IF–THEN rules. It is known that system (5) has a unique global solution on \(t\geq 0\) for initial values \(\psi _{x} \in \mathcal{C}([-\max \{\tau _{2},d\},0];\mathbb{R}^{n})\). For convenience, we write \(r(t)=i\), where \(i\in S\), and in the upcoming discussion we denote the system matrices associated with the ith mode by \(A_{i}(r(t))=A_{i}\), \(W_{1i}(r(t))=W_{1i}\), \(W_{2i}(r(t))=W_{2i}\), \(W_{3i}(r(t))=W_{3i}\).

Then the state equation is as follows:

$$\begin{aligned} \begin{gathered} \dot{x}(t) = \sum_{i=1}^{r} \lambda _{i}\bigl(v(t)\bigr) \biggl\{ -A_{i}x(t) + W_{1i}f\bigl(x(t)\bigr) + W_{2i}f\bigl(x\bigl(t-\tau (t) \bigr)\bigr) \\ \hphantom{\dot{x}(t)} \quad {}+ W_{3i} \int _{t-d(t)}^{t} f \bigl(x(s)\bigr)\,ds + u(t) \biggr\} , \quad t>0, t\neq t_{k}, \\ y(t) = f\bigl(x(t)\bigr), \\ x(t_{k}) = I_{k}\bigl(x\bigl(t_{k}^{-} \bigr)\bigr), \quad t=t_{k}, k \in Z_{+}, i=1,2,\ldots,r, \end{gathered} \end{aligned}$$
(6)

where \(\lambda _{i}(v(t)) = \frac{\beta _{i}(v(t))}{\sum_{i=1}^{r} \beta _{i}(v(t))}\), \(\beta _{i}(v(t)) = \prod_{j=1}^{p} M_{ij}(v_{j}(t)) \), and \(M_{ij}(\cdot )\) is the degree of the membership function of \(M_{ij}\). Further, we assume that \(\beta _{i}(v(t)) \geq 0\), \(i = 1,2,\ldots, r\), and \(\sum_{i=1}^{r} \beta _{i}(v(t)) > 0 \) for all \(v(t)\). Therefore \(\lambda _{i}(v(t))\) satisfy \(\lambda _{i}(v(t)) \geq 0\), \(i = 1,\ldots, r\), and \(\sum_{i=1}^{r} \lambda _{i}(v(t)) = 1 \) for any \(v(t)\).
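The normalized weights \(\lambda _{i}(v(t))\) are computed by product inference followed by normalization. A minimal Python sketch, with hypothetical Gaussian membership functions standing in for the fuzzy sets \(M_{ij}\), illustrates the two steps:

```python
import numpy as np

def fuzzy_weights(v, rules):
    """Normalized T-S weights: lambda_i = beta_i / sum_k beta_k,
    where beta_i(v) = prod_j M_ij(v_j) (product inference)."""
    beta = np.array([np.prod([m(vj) for m, vj in zip(rule, v)])
                     for rule in rules])
    total = beta.sum()
    assert total > 0, "at least one rule must fire"
    return beta / total

# Two hypothetical rules over one premise variable, for illustration only.
gauss = lambda c, s: (lambda x: np.exp(-((x - c) / s) ** 2))
rules = [[gauss(-1.0, 1.0)], [gauss(1.0, 1.0)]]
lam = fuzzy_weights([0.3], rules)
print(lam, lam.sum())   # nonnegative weights summing to 1
```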

Based on the previous simple transformation, we rewrite model (6) in the compact form

$$\begin{aligned} \begin{gathered} \dot{x}(t) = -A_{i}x(t) + W_{1i} f\bigl(x(t)\bigr) + W_{2i}f\bigl(x\bigl(t-\tau (t) \bigr)\bigr) \\ \hphantom{\dot{x}(t)} \quad{} + W_{3i} \int _{t-d(t)}^{t} f \bigl(x(s)\bigr)\,ds + u(t), \quad t>0, t\neq t_{k}, \\ y(t) = f\bigl(x(t)\bigr), \\ x(t_{k}) = I_{k}\bigl(x\bigl(t_{k}^{-} \bigr)\bigr), \quad t=t_{k}, k \in Z_{+}, i=1,2,\ldots,r. \end{gathered} \end{aligned}$$
(7)

The following assumptions are needed to prove the required result.

Assumption (H1)

([9])

For all \(j\in \{1,2,\ldots,n\}\), \(f_{j}(0)=0\), and there exist constants \(F_{j}^{-}\) and \(F_{j}^{+}\) such that

$$\begin{aligned}& F_{j}^{-} \leq \frac{f_{j}(\alpha _{1})- f_{j}(\alpha _{2})}{\alpha _{1}- \alpha _{2}} \leq F_{j}^{+} \end{aligned}$$
(8)

for all \(\alpha _{1},\alpha _{2}\in \mathbb{R}\) and \(\alpha _{1} \neq \alpha _{2}\).
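For instance, \(f_{j}(\alpha )=\tanh (\alpha )\) satisfies (8) with \(F_{j}^{-}=0\) and \(F_{j}^{+}=1\), since by the mean value theorem each difference quotient equals \(\operatorname{sech}^{2}(\xi )\in (0,1]\). A quick numerical check (illustrative only; nearly equal pairs are masked to avoid floating-point cancellation):

```python
import numpy as np

rng = np.random.default_rng(0)
a1 = rng.uniform(-5.0, 5.0, 10_000)
a2 = rng.uniform(-5.0, 5.0, 10_000)
mask = np.abs(a1 - a2) > 1e-6
q = (np.tanh(a1[mask]) - np.tanh(a2[mask])) / (a1[mask] - a2[mask])
print(q.min(), q.max())                  # quotients stay inside (0, 1]
assert np.all(q > 0.0) and np.all(q <= 1.0 + 1e-12)
```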

Assumption (H2)

The impulsive times \(t_{k}\) satisfy \(0=t_{0}< t_{1}<\cdots <t_{k} \rightarrow \infty \) and \(\inf_{k\in Z_{+}}\{t_{k}-t_{k-1}\}>0\).

The energy function E associated with system (7) is represented by

$$\begin{aligned}& E(u,y,T)=\langle y,\mathcal{Q}y \rangle _{T} + 2 \langle y, \mathcal{S}u \rangle _{T} + \langle u,\mathcal{R}u \rangle _{T}, \end{aligned}$$
(9)

where

$$\begin{aligned}& \langle y,u \rangle _{T}= \int _{0}^{T} y^{T} u \,dt, \quad T \geq 0. \end{aligned}$$
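Given sampled trajectories of u and y, the quadratic supply \(E(u,y,T)\) in (9) can be approximated by quadrature. The helper below is an illustrative sketch (trapezoidal rule, placeholder trajectories), not part of the theoretical development:

```python
import numpy as np

def energy_supply(t, y, u, Q, S, R):
    """Approximate E(u,y,T) = <y,Qy>_T + 2<y,Su>_T + <u,Ru>_T from
    trajectories y, u of shape (len(t), n) sampled on the grid t."""
    integrand = (np.einsum('ti,ij,tj->t', y, Q, y)
                 + 2.0 * np.einsum('ti,ij,tj->t', y, S, u)
                 + np.einsum('ti,ij,tj->t', u, R, u))
    dt = np.diff(t)                                  # trapezoidal rule
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dt))

# Placeholder data: condition (10) below asks E(u,y,T) >= vartheta <u,u>_T.
t = np.linspace(0.0, 10.0, 1001)
u = np.column_stack([np.sin(t), np.cos(t)])
y = 0.5 * u                                          # hypothetical output
Q, S, R = -0.1 * np.eye(2), np.eye(2), 2.0 * np.eye(2)
print(energy_supply(t, y, u, Q, S, R))
```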

The following definitions and lemmas are needed to prove our results.

Definition 1

Given a scalar \(\vartheta >0\), real constant matrices \(\mathcal{Q}=\mathcal{Q}^{T} \) and \(\mathcal{R}=\mathcal{R}^{T}\), and a matrix \(\mathcal{S}\), model (7) is said to be strictly \((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if, for any \(T \geq 0\), under the zero initial condition, the following condition holds:

$$\begin{aligned}& E(u,y,T) \geq \vartheta \langle u,u \rangle _{T} . \end{aligned}$$
(10)

Definition 2

The proposed neural network model (7) is called passive if there exists a scalar \(\gamma \geq 0\) such that the following inequality holds for all \(t_{f}\geq 0\) under the zero initial condition:

$$\begin{aligned}& 2 \biggl[ \int _{0}^{t_{f}} y^{T}(s)u(s)\,ds \biggr] \geq -\gamma \biggl[ \int _{0}^{t_{f}} u^{T}(s)u(s)\,ds \biggr]. \end{aligned}$$
(11)

Lemma 2.1

([7])

For any scalar \(\tau (t)\geq 0\) and positive-definite matrix \(Q\in R^{n \times n}\), we have the following inequalities:

$$\begin{aligned}& - \int _{t-\tau (t)}^{t} \dot{x}^{T}(s) Q \dot{x}(s) \,ds \leq \tau (t) \zeta ^{T}(t)MQ^{-1}M^{T} \zeta (t) + 2\zeta ^{T}(t)M \bigl[x(t)-x\bigl(t-\tau (t)\bigr)\bigr], \\& \begin{aligned} - \int _{t-\tau }^{t-\tau (t)} \dot{x}^{T}(s) Q \dot{x}(s) \,ds &\leq \bigl( \tau -\tau (t)\bigr) \zeta ^{T}(t)NQ^{-1}N^{T} \zeta (t) \\ &\quad{} + 2\zeta ^{T}(t)N \bigl[x\bigl(t- \tau (t)\bigr)-x(t-\tau ) \bigr],\end{aligned} \end{aligned}$$

where M and N are free weighting matrices of appropriate dimensions, and

$$\begin{aligned} \zeta (t) = & \biggl[ x^{T}(t) \quad x^{T}(t-\tau _{1}) \quad x^{T} \biggl(t-\frac{ \tau _{a}}{N} \biggr) \quad x^{T} \biggl(t-2\frac{\tau _{a}}{N} \biggr)\quad \cdots \quad x^{T} \biggl(t-(N-1)\frac{\tau _{a}}{N} \biggr) \\ & x^{T}(t- \tau _{a}) \quad x^{T}\bigl(t-\tau (t)\bigr) \quad x^{T} \biggl(t- \frac{\tau _{2}}{N} \biggr) \quad x^{T} \biggl(t-2\frac{\tau _{2}}{N} \biggr) \quad \cdots \\ & x ^{T} \biggl(t-(N-1)\frac{\tau _{2}}{N} \biggr) \quad x^{T}(t-\tau _{2}) \quad \dot{x}^{T}(t) \quad f^{T}\bigl(x(t)\bigr) \quad f^{T}\bigl(x\bigl(t-\tau (t)\bigr)\bigr) \\ & \int _{t-\tau _{2}}^{t} x^{T}(s)\,ds \quad \int _{t-\tau _{2}}^{t-\tau _{1}} x^{T}(s)\,ds \quad \int _{t-\tau _{2}}^{t} f^{T}\bigl(x(s)\bigr)\,ds \quad \int _{t-\tau _{2}}^{t-\tau _{1}} f^{T}\bigl(x(s)\bigr)\,ds \\ & \int _{t-d(t)} ^{t} f^{T}\bigl(x(s)\bigr)\,ds \quad x^{T}\bigl(t-d(t)\bigr) \quad u^{T}(t) \biggr]^{T}. \end{aligned}$$

3 Main results

In this section, we establish dissipativity conditions for the fuzzy impulsive Markovian jumping neural network (7) with both discrete and distributed time delays. Using a Lyapunov–Krasovskii functional and the delay-fractioning approach, in the following theorem we provide a new set of delay-dependent dissipativity conditions under impulsive perturbations. For presentation convenience, we denote

$$\begin{aligned}& F_{1} = \operatorname{diag}\bigl(F_{1}^{-}F_{1}^{+}, F_{2}^{-}F_{2}^{+},\ldots, F_{n}^{-}F_{n}^{+}\bigr), \quad\quad F_{2} = \operatorname{diag} \biggl( \frac{F_{1}^{-} + F_{1}^{+}}{2}, \frac{F_{2}^{-} + F_{2}^{+}}{2},\ldots, \frac{F_{n}^{-} + F_{n}^{+}}{2} \biggr), \\& F_{3} = \operatorname{diag} \bigl( F_{1}^{-}, F_{2}^{-},\ldots, F_{n} ^{-} \bigr), \quad\quad F_{4} = \operatorname{diag} \bigl( F_{1}^{+}, F_{2}^{+},\ldots, F_{n}^{+} \bigr). \end{aligned}$$
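In computations, these diagonal matrices are assembled directly from the sector bounds of Assumption (H1); a small illustrative helper:

```python
import numpy as np

def sector_matrices(f_minus, f_plus):
    """F1 = diag(Fj^- Fj^+), F2 = diag((Fj^- + Fj^+)/2),
    F3 = diag(Fj^-), F4 = diag(Fj^+) from the sector bounds."""
    f_minus = np.asarray(f_minus, dtype=float)
    f_plus = np.asarray(f_plus, dtype=float)
    return (np.diag(f_minus * f_plus),
            np.diag((f_minus + f_plus) / 2.0),
            np.diag(f_minus),
            np.diag(f_plus))

# tanh activations satisfy (H1) with Fj^- = 0 and Fj^+ = 1 (cf. Example 1):
F1, F2, F3, F4 = sector_matrices([0.0, 0.0], [1.0, 1.0])
print(F1, F2, sep="\n")   # F1 = 0 and F2 = 0.5*I, as in Example 1
```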

Theorem 3.1

Under Assumptions (H1) and (H2), for given scalars \(\tau _{1}\), \(\tau _{2}\), d, \(\mu _{1}\), and \(\mu _{2}\), the neural network described by (7) is strictly \((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if there exist positive definite matrices \(P_{1i}\), \(P_{i}\) (\(i=2,\ldots,4\)), Q, R, \(S_{i}\) (\(i=1,2,\ldots,7\)), \(T_{i}\) (\(i=1,2\)), positive diagonal matrices \(U_{1}\), \(U_{2}\), and matrices O, \(L_{i}\), \(M_{i}\), \(V _{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:

$$\begin{aligned}& I_{ik}^{T}P_{1j}I_{ik}-P_{1i} < 0, \end{aligned}$$
(12)
$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }M \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(13)
$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }V \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(14)

and

$$\begin{aligned}& Q = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} Q_{11} & Q_{12} & \cdots & Q_{1N} \\ * & Q_{22} & \cdots & Q_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & Q_{NN} \end{array}\displaystyle \right ] \geq 0, \quad\quad R = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} R_{11} & R_{12} & \cdots & R_{1N} \\ * & R_{22} & \cdots & R_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & R_{NN} \end{array}\displaystyle \right ] \geq 0, \end{aligned}$$

where

$$\begin{aligned}& \varPhi = \left [ \textstyle\begin{array}{@{}c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{}} \varPhi _{11} & \varPhi _{12} & \varPhi _{13} & \varPhi _{14} & \varPhi _{15} & 0 & \varPhi _{17} & \varPhi _{18} & 0 & 0 & 0 & 0 & 0 \\ * & \varPhi _{22} & \varPhi _{23} & 0 & 0 & F_{2}U_{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & \varPhi _{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \varPhi _{44} & \mathit{OW}_{1ij} & \mathit{OW}_{2ij} & 0 & 0 & 0 & 0 & \mathit{OW}_{3ij} & 0 & O \\ * & * & * & * & \varPhi _{55} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\mathcal{S} \\ * & * & * & * & * & -U_{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & -T_{1} & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & -T_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & -S_{5} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & * & -S_{6} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & * & * & -S_{7} & 0 & 0 \\ * & * & * & * & * & * & * & * & * & * & * & -(1-\mu _{2})P_{4} & 0 \\ * & * & * & * & * & * & * & * & * & * & * & * & \gamma I-\mathcal{R} \end{array}\displaystyle \right ], \\& \varPhi _{11} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \varPhi _{1} & L_{2}^{T}+Q_{12}-L_{1} & Q_{13} & \cdots & Q_{1N} & V_{1} \\ * & P_{3}-P_{2}-S_{4}-L_{2} & 0 & \cdots & 0 & V_{2} \\ * & * & Q_{22}-Q_{11} & \cdots & Q_{2N}-Q_{1,N-1} & -Q_{1N} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ * & * & * & \cdots & Q_{NN}-Q_{N-1,N-1} & -Q_{N-1,N} \\ * & * & * & \cdots & * & -Q_{NN} \end{array}\displaystyle \right ], \\& \varPhi _{33} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} R_{22}-R_{11} & R_{23}-R_{12} & \cdots & R_{2N}-R_{1,N-1} & -R_{1N} \\ * & R_{33}-R_{22} & \cdots & R_{3N}-R_{2,N-1} & -R_{2N} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ * & * & \cdots & R_{NN}-R_{N-1,N-1} & -R_{N-1,N} \\ * & * & \cdots & * & -R_{NN}-S_{3}-S_{4} \end{array}\displaystyle \right ], \\& \varPhi _{13} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} R_{12} & R_{13} & \cdots & R_{1N} & -M_{1}+S_{3} \\ 0 & 0 & \cdots & 0 & -M_{2} \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \end{array}\displaystyle \right ], \\& \varPhi _{1} = P_{2}+P_{4}+Q_{11}+L_{1}+L_{1}^{T}+R_{11}-S_{3}-\tau _{2}^{2}T_{1}+\tau _{12}^{2}T_{2}-F_{1}U_{1}+\sum_{j=1}^{s}\pi _{ij}P_{1j}, \\& \varPhi _{12} = \bigl[ (V_{1}+M_{1})^{T} \quad (V_{2}+M_{2}+S_{4})^{T} \quad 0 \quad 0 \quad 0 \quad 0 \bigr]^{T}, \quad\quad \varPhi _{22} = -(1-\mu _{1})P_{3}-2S_{4}-F_{1}U_{2}, \\& \varPhi _{23} = [ 0 \quad 0 \quad 0 \quad S_{4} ], \quad\quad \varPhi _{14} = \bigl[ \bigl(P_{1i}-A_{ij}^{T}O^{T}\bigr)^{T} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 \bigr]^{T}, \\& \varPhi _{44} = \frac{\tau _{a}}{N}S_{1}+\tau _{\delta }S_{2}+\tau _{2}^{2}S_{3}+\tau _{12}^{2}S_{4}+\frac{\tau _{2}^{4}}{4}T_{1}+\tau _{s}^{2}T_{2}-O-O^{T}, \\& \varPhi _{15} = [ F_{2}U_{1} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 ]^{T}, \quad\quad \varPhi _{55} = \tau _{2}^{2}S_{5}+\tau _{12}^{2}S_{6}+d^{2}S_{7}-U_{1}-\mathcal{Q}, \\& \varPhi _{17} = \bigl[ \tau _{2}T_{1}^{T} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 \bigr]^{T}, \quad\quad \varPhi _{18} = \bigl[ \tau _{12}T_{2}^{T} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 \bigr]^{T}, \\& L = \bigl[ L_{1}^{T} \quad L_{2}^{T} \quad 0 \quad \cdots \quad 0 \bigr]^{T}, \quad\quad M = \bigl[ M_{1}^{T} \quad M_{2}^{T} \quad 0 \quad \cdots \quad 0 \bigr]^{T}, \quad\quad V = \bigl[ V_{1}^{T} \quad V_{2}^{T} \quad 0 \quad \cdots \quad 0 \bigr]^{T}, \\& \tau _{a} = \frac{\tau _{1}+\tau _{2}}{2}, \quad\quad \tau _{\delta } = \frac{\tau _{2}-\tau _{1}}{2}, \quad\quad \tau _{12} = \tau _{2}-\tau _{1}, \quad\quad \tau _{s} = \frac{1}{2}\bigl(\tau _{2}^{2}-\tau _{1}^{2}\bigr). \end{aligned}$$

Proof

To obtain dissipativity criteria for the fuzzy Markovian jumping impulsive neural networks (7), we examine the Lyapunov–Krasovskii functional

$$\begin{aligned}& \begin{aligned}[b] V\bigl(t,x(t),i\bigr) &= V_{1}\bigl(t,x(t),i\bigr)+V_{2} \bigl(t,x(t),i\bigr)+V_{3}\bigl(t,x(t),i\bigr)+V_{4} \bigl(t,x(t),i\bigr)\\&\quad{} +V _{5}\bigl(t,x(t),i\bigr)+V_{6} \bigl(t,x(t),i\bigr), \end{aligned} \end{aligned}$$
(15)

where

$$\begin{aligned}& \begin{aligned} V_{1}\bigl(t,x(t),i\bigr) &= x^{T}(t)P_{1i} x(t) + \int _{t-\tau _{1}}^{t} x^{T}(s) P_{2} x(s) \,ds + \int _{t-\tau (t)}^{t-\tau _{1}} x^{T}(s) P_{3} x(s) \,ds \\&\quad{} + \int _{t-d(t)} ^{t}x^{T}(s)P_{4}x(s) \,ds,\end{aligned} \\& V_{2}\bigl(t,x(t),i\bigr) = \int _{t-\frac{\tau _{a}}{N}}^{t} \xi _{1}^{T}(s) Q \xi _{1}(s) \,ds + \int _{t-\frac{\tau _{2}}{N}}^{t} \xi _{2}^{T}(s) R \xi _{2}(s) \,ds, \\& V_{3}\bigl(t,x(t),i\bigr) = \int _{-\frac{\tau _{a}}{N}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) S _{1} \dot{x}(s) \,ds \,d\theta + \int _{-\tau _{2}}^{-\tau _{a}} \int _{t+ \theta }^{t} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds \,d\theta , \\& V_{4}\bigl(t,x(t),i\bigr) = \tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) S _{3} \dot{x}(s) \,ds \,d\theta + \tau _{12} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds \,d\theta , \\& \begin{aligned} V_{5}\bigl(t,x(t),i\bigr) &= \tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S _{5} f\bigl(x(s)\bigr) \,ds \,d\theta \\ &\quad {}+ \tau _{12} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S_{6} f\bigl(x(s)\bigr) \,ds \,d\theta \\ &\quad {}+ d \int _{-d}^{0} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S_{7} f\bigl(x(s)\bigr) \,ds \,d\theta , \end{aligned} \\& \begin{aligned} V_{6}\bigl(t,x(t),i\bigr) &= \frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t} \dot{x}^{T}(s) T_{1} \dot{x}(s)\,ds \,d\lambda \,d \theta \\ &\quad {}+ \tau _{s} \int _{-\tau _{2}}^{-\tau _{1}} \int _{\theta }^{0} \int _{t+\lambda }^{t} \dot{x}^{T}(s) T_{2} \dot{x}(s)\,ds \,d\lambda \,d \theta\end{aligned} \end{aligned}$$

with

$$\begin{aligned}& \xi _{1}(t) = \bigl[ x^{T}(t) \quad x^{T}\bigl(t-\tfrac{\tau _{a}}{N}\bigr) \quad \cdots \quad x^{T}\bigl(t-(N-1)\tfrac{\tau _{a}}{N}\bigr) \bigr]^{T}, \\& \xi _{2}(t) = \bigl[ x^{T}(t) \quad x^{T}\bigl(t-\tfrac{\tau _{2}}{N}\bigr) \quad \cdots \quad x^{T}\bigl(t-(N-1)\tfrac{\tau _{2}}{N}\bigr) \bigr]^{T}. \end{aligned}$$

For \(t=t_{k}\), we have

$$\begin{aligned} V_{1}\bigl(t_{k},x(t),j\bigr)- V_{1} \bigl(t_{k}^{-},x(t),i\bigr) = & x^{T}(t_{k})P_{1j}x(t_{k})- x^{T}\bigl(t_{k}^{-}\bigr)P_{1i}x \bigl(t_{k}^{-}\bigr) \\ = & x^{T}\bigl(t_{k}^{-}\bigr)I_{ik}^{T}P_{1j}I_{ik}x \bigl(t_{k}^{-}\bigr)- x^{T}\bigl(t_{k}^{-} \bigr)P _{1i}x\bigl(t_{k}^{-}\bigr) \\ = & x^{T}\bigl(t_{k}^{-}\bigr) \bigl[I_{ik}^{T}P_{1j}I_{ik}-P_{1i} \bigr]x\bigl(t_{k}^{-}\bigr). \end{aligned}$$
(16)

Based on the assumptions and conditions, we know that \(I_{ik}\) is a constant matrix at the moment \(t_{k}\) in the mode i for \(i\in S\), \(k\in Z_{+}\). So, by LMI (12),

$$\begin{aligned}& V_{1}\bigl(t_{k},x(t),j\bigr)- V_{1} \bigl(t_{k}^{-}, x(t),i\bigr) < 0. \end{aligned}$$
(17)

For \(t\in [t_{k-1}, t_{k})\), the weak infinitesimal generator \(\mathcal{L}V(t,x(t),i)\) satisfies

$$\begin{aligned}& \begin{aligned}[b] &\mathcal{L}V_{1}\bigl(t,x(t),i\bigr) \\ & \quad = 2 x^{T}(t)P_{1i} \dot{x}(t) + x^{T}(t)[P_{2}+P_{4}]x(t) + x^{T}(t- \tau _{1}) (P_{3}-P_{2})x(t- \tau _{1}) \\ & \quad \quad{} - \bigl(1-\dot{\tau }(t)\bigr) x^{T}\bigl(t-\tau (t) \bigr)P_{3} x\bigl(t-\tau (t)\bigr) - \bigl(1- \dot{d}(t) \bigr)x^{T}\bigl(t-d(t)\bigr)P_{4}x\bigl(t-d(t)\bigr) \\ & \quad \quad{} + \sum_{j=1}^{s} \pi _{ij} x^{T}(t) P_{1j} x(t), \\ & \quad \leq 2 x^{T}(t)P_{1i}\dot{x}(t) + x^{T}(t)[P_{2}+P_{4}]x(t) + x^{T}(t- \tau _{1}) (P_{3}-P_{2})x(t- \tau _{1}) \\ & \quad \quad{} - (1-\mu _{1}) x^{T}\bigl(t-\tau (t) \bigr)P_{3} x\bigl(t-\tau (t)\bigr) - (1-\mu _{2})x^{T} \bigl(t-d(t)\bigr)P_{4}x\bigl(t-d(t)\bigr) \\ & \quad \quad{} + \sum_{j=1}^{s} \pi _{ij} x^{T}(t) P_{1j} x(t), \end{aligned} \end{aligned}$$
(18)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{2}\bigl(t,x(t),i\bigr) &= \xi _{1}^{T}(t)Q \xi _{1}(t) - \xi _{1}^{T} \biggl(t- \frac{\tau _{a}}{N} \biggr) Q \xi _{1} \biggl(t-\frac{\tau _{a}}{N} \biggr) \\ &\quad{} + \xi _{2}^{T}(t)R\xi _{2}(t) - \xi _{2}^{T} \biggl(t-\frac{\tau _{2}}{N} \biggr) R \xi _{2} \biggl(t-\frac{\tau _{2}}{N} \biggr), \end{aligned} \end{aligned}$$
(19)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{3}\bigl(t,x(t),i\bigr) &= \frac{\tau _{a}}{N} \dot{x}^{T}(t) S_{1} \dot{x}(t) + (\tau _{2}-\tau _{a})\dot{x}^{T}(t) S_{2} \dot{x}(t) \\ &\quad{} - \int _{t-\frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s) S_{1} \dot{x}(s) \,ds - \int _{t-\tau _{2}}^{t-\tau _{a}} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds, \end{aligned} \end{aligned}$$
(20)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{4}\bigl(t,x(t),i\bigr) &= \tau _{2}^{2} \dot{x}^{T}(t)S_{3} \dot{x}(t) - \tau _{2} \int _{t-\tau _{2}}^{t} \dot{x}^{T}(s) S_{3} \dot{x}(s) \,ds + \tau _{12} ^{2} \dot{x}^{T}(t)S_{4} \dot{x}(t) \\ &\quad{} - \tau _{12} \int _{t-\tau _{2}}^{t-\tau _{1}} \dot{x}^{T}(s) S _{4} \dot{x}(s) \,ds, \end{aligned} \end{aligned}$$
(21)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{5}\bigl(t,x(t),i\bigr) &= f^{T}\bigl(x(t) \bigr) \bigl[\tau _{2}^{2} S_{5} + \tau _{12}^{2}S_{6}+d^{2}S_{7} \bigr] f\bigl(x(t)\bigr) - \tau _{2} \int _{t-\tau _{2}}^{t} f^{T}\bigl(x(s)\bigr) S_{5} f\bigl(x(s)\bigr) \,ds\hspace{-20pt} \\ &\quad{} - \tau _{12} \int _{t-\tau _{2}}^{t-\tau _{1}} f^{T}\bigl(x(s)\bigr) S_{6} f\bigl(x(s)\bigr) \,ds-d \int _{t-d(t)}^{t} f^{T}\bigl(x(s) \bigr)S_{7}f\bigl(x(s)\bigr)\,ds, \end{aligned} \end{aligned}$$
(22)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{6}\bigl(t,x(t),i\bigr) &= \frac{\tau _{2}^{4}}{4} \dot{x}^{T}(t) T_{1} \dot{x}(t) - \frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) T _{1} \dot{x}(s) \,ds \,d\theta \\ &\quad{} + \tau _{s}^{2} \dot{x}^{T}(t) T_{2} \dot{x}(t) - \tau _{s} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x}^{T}(s) T _{2} \dot{x}(s) \,ds \,d\theta . \end{aligned} \end{aligned}$$
(23)

Note that

$$\begin{aligned}& - \int _{t-\tau _{2}}^{t-\tau _{a}} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds = - \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds - \int _{t-\tau (t)}^{t-\tau _{a}} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds. \end{aligned}$$

Using Lemma 2.1, we obtain

$$\begin{aligned}& - \int _{t-\frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s)S_{1} \dot{x}(s)\,ds \leq \frac{\tau _{a}}{N} \zeta ^{T}(t)LS_{1}^{-1}L^{T} \zeta (t) + 2 \zeta ^{T}(t)L \biggl[ x(t)-x \biggl(t-\frac{\tau _{a}}{N} \biggr) \biggr], \end{aligned}$$
(24)
$$\begin{aligned}& \begin{aligned}[b] - \int _{t-\tau (t)}^{t-\tau _{a}} \dot{x}^{T}(s)S_{2} \dot{x}(s)\,ds &\leq \bigl(\tau (t)-\tau _{a}\bigr) \zeta ^{T}(t)VS_{2}^{-1}V^{T}\zeta (t) \\ &\quad{} + 2 \zeta ^{T}(t)V \bigl[ x(t-\tau _{a})-x\bigl(t- \tau (t)\bigr) \bigr], \end{aligned} \end{aligned}$$
(25)
$$\begin{aligned}& \begin{aligned}[b] - \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x}^{T}(s)S_{2} \dot{x}(s)\,ds &\leq \bigl(\tau _{2}-\tau (t)\bigr) \zeta ^{T}(t)MS_{2}^{-1}M^{T}\zeta (t) \\ &\quad {}+ 2 \zeta ^{T}(t)M \bigl[ x\bigl(t-\tau (t)\bigr)-x(t-\tau _{2}) \bigr]. \end{aligned} \end{aligned}$$
(26)

Applying the lemma in [8] and the Newton–Leibniz formula

$$\begin{aligned}& \int _{t-\tau _{2}}^{t} \dot{x}(s)\,ds = x(t)- x(t-\tau _{2}), \end{aligned}$$

we have

$$\begin{aligned} \begin{aligned}[b] - \tau _{2} \int _{t-\tau _{2}}^{t} \dot{x}^{T}(s) S_{3} \dot{x}(s)\,ds &\leq - \biggl[ \int _{t-\tau _{2}}^{t}\dot{x}(s) \,ds \biggr]^{T} S_{3} \biggl[ \int _{t-\tau _{2}}^{t} \dot{x}(s) \,ds \biggr] \\ &\leq - \bigl[ x(t) - x(t-\tau _{2}) \bigr]^{T} S_{3} \bigl[ x(t) - x(t- \tau _{2}) \bigr]. \end{aligned} \end{aligned}$$
(27)

Note that

$$\begin{aligned}& \int _{t-\tau _{2}}^{t-\tau _{1}}\dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds = \int _{t-\tau _{2}}^{t-\tau (t)}\dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds + \int _{t-\tau (t)}^{t-\tau _{1}}\dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds. \end{aligned}$$

The lemma in [6] gives

$$\begin{aligned} \bigl[\tau _{2}-\tau (t) \bigr] \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x} ^{T}(s) S_{4} \dot{x}(s)\,ds \geq & \biggl[ \int _{t-\tau _{2}}^{t-\tau (t)}\dot{x}(s) \,ds \biggr]^{T} S _{4} \biggl[ \int _{t-\tau _{2}}^{t-\tau (t)}\dot{x}(s) \,ds \biggr] \\ \geq & \bigl[ x\bigl(t-\tau (t)\bigr) - x(t-\tau _{2}) \bigr]^{T} S_{4} \bigl[ x\bigl(t- \tau (t)\bigr) - x(t-\tau _{2}) \bigr]. \end{aligned}$$

Since \(\tau _{2}-\tau (t) \leq \tau _{2}-\tau _{1}\), we have

$$\begin{aligned}& [\tau _{2}-\tau _{1} ] \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x} ^{T}(s) S_{4} \dot{x}(s)\,ds \geq \bigl[ x\bigl(t-\tau (t)\bigr) - x(t-\tau _{2}) \bigr]^{T} S_{4} \bigl[ x\bigl(t- \tau (t) \bigr) - x(t-\tau _{2}) \bigr], \end{aligned}$$

and thus

$$\begin{aligned}& \begin{aligned}[b] & - [\tau _{2}-\tau _{1} ] \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x} ^{T}(s) S_{4} \dot{x}(s)\,ds \\&\quad \leq - \bigl[ x\bigl(t-\tau (t)\bigr) - x(t- \tau _{2}) \bigr]^{T} S_{4} \bigl[ x\bigl(t- \tau (t)\bigr) - x(t-\tau _{2}) \bigr]. \end{aligned} \end{aligned}$$
(28)

Similarly, we have

$$\begin{aligned}& \begin{aligned}[b] & - [\tau _{2}-\tau _{1} ] \int _{t-\tau (t)}^{t-\tau _{1}} \dot{x} ^{T}(s) S_{4} \dot{x}(s)\,ds \\&\quad \leq - \bigl[ x(t-\tau _{1}) - x \bigl(t-\tau (t)\bigr) \bigr]^{T} S_{4} \bigl[ x(t- \tau _{1}) - x\bigl(t-\tau (t)\bigr) \bigr] \end{aligned} \end{aligned}$$
(29)

and

$$\begin{aligned}& \begin{aligned}[b] - \tau _{2} \int _{t-\tau _{2}}^{t} f^{T}\bigl(x(s)\bigr) S_{5} f\bigl(x(s)\bigr) \,ds &\leq - \biggl( \int _{t-\tau _{2}}^{t} f\bigl(x(s)\bigr)\,ds \biggr)^{T} \\ &\quad {}\times S_{5} \biggl( \int _{t-\tau _{2}}^{t} f\bigl(x(s)\bigr)\,ds \biggr), \end{aligned} \end{aligned}$$
(30)
$$\begin{aligned}& \begin{aligned}[b] - \tau _{12} \int _{t-\tau _{2}}^{t-\tau _{1}} f^{T}\bigl(x(s)\bigr) S_{6} f\bigl(x(s)\bigr) \,ds &\leq - \biggl( \int _{t-\tau _{2}}^{t-\tau _{1}} f\bigl(x(s)\bigr)\,ds \biggr)^{T} \\ &\quad {}\times S_{6} \biggl( \int _{t-\tau _{2}}^{t-\tau _{1}} f\bigl(x(s)\bigr)\,ds \biggr), \end{aligned} \end{aligned}$$
(31)
$$\begin{aligned}& \begin{aligned}[b] -d \int _{t-d(t)}^{t} f^{T}\bigl(x(s) \bigr)S_{7}f\bigl(x(s)\bigr)\,ds &\leq - \biggl( \int _{t-d(t)}^{t} f\bigl(x(s)\bigr)\,ds \biggr)^{T} \\ &\quad {}\times S_{7} \biggl( \int _{t-d(t)}^{t} f\bigl(x(s)\bigr)\,ds \biggr), \end{aligned} \end{aligned}$$
(32)
$$\begin{aligned}& \begin{aligned}[b] - \frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) T_{1} \dot{x}(s) \,ds \,d\theta &\leq - \biggl( \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}(s) \,ds \,d \theta \biggr)^{T} \\ &\quad {}\times T_{1} \biggl( \int _{-\tau _{2}}^{0} \int _{t+\theta } ^{t} \dot{x}(s) \,ds \,d\theta \biggr) \\ &\leq - \biggl( \tau _{2} x(t) - \int _{t-\tau _{2}}^{t}x(s)\,ds \biggr)^{T} \\ &\quad {}\times T_{1} \biggl( \tau _{2} x(t) - \int _{t-\tau _{2}}^{t}x(s)\,ds \biggr), \end{aligned} \end{aligned}$$
(33)
$$\begin{aligned}& \begin{aligned}[b] - \tau _{s} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x} ^{T}(s) T_{2} \dot{x}(s) \,ds \,d\theta &\leq - \biggl( \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x}(s) \,ds \,d\theta \biggr)^{T} \\ &\quad {}\times T_{2} \biggl( \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+ \theta }^{t} \dot{x}(s) \,ds \,d\theta \biggr) \\ &\leq - \biggl( \tau _{12} x(t) - \int _{t-\tau _{2}}^{t-\tau _{1}} x(s)\,ds \biggr)^{T} \\ &\quad {}\times T_{2} \biggl( \tau _{12} x(t) - \int _{t-\tau _{2}}^{t- \tau _{1}} x(s)\,ds \biggr). \end{aligned} \end{aligned}$$
(34)
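The bounds (27)–(34) are Jensen-type estimates: for \(S>0\) and an interval of length τ, \(\tau \int \dot{x}^{T}(s)S\dot{x}(s)\,ds \geq (\int \dot{x}(s)\,ds)^{T}S(\int \dot{x}(s)\,ds)\). A scalar numerical illustration (trapezoidal weights and random smooth integrands, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
tau, n = 2.0, 2001
s = np.linspace(0.0, tau, n)
ds = s[1] - s[0]
w = np.full(n, ds)
w[0] = w[-1] = ds / 2.0       # trapezoidal weights: sum(w) == tau
S = 1.7                       # any scalar S > 0

for _ in range(5):
    c = rng.normal(size=4)    # random smooth stand-in for xdot(s)
    xdot = c[0] + c[1] * np.sin(s) + c[2] * np.cos(2 * s) + c[3] * s
    lhs = tau * S * np.sum(w * xdot ** 2)   # tau * int xdot * S * xdot ds
    rhs = S * np.sum(w * xdot) ** 2         # (int xdot ds) * S * (int xdot ds)
    assert lhs >= rhs - 1e-9                # Jensen / Cauchy-Schwarz
    print(f"{lhs:12.4f} >= {rhs:12.4f}")
```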

For positive diagonal matrices \(U_{1}\) and \(U_{2}\), it follows from Assumption (H1) that

$$\begin{aligned}& 0 \leq - \begin{bmatrix} x(t) \\ f(x(t)) \end{bmatrix}^{T} \begin{bmatrix} F_{1}U_{1} & -F_{2}U_{1} \\ * & U_{1} \end{bmatrix} \begin{bmatrix} x(t) \\ f(x(t)) \end{bmatrix}, \end{aligned}$$
(35)
$$\begin{aligned}& 0 \leq - \begin{bmatrix} x(t-\tau (t)) \\ f(x(t-\tau (t))) \end{bmatrix}^{T} \begin{bmatrix} F_{1}U_{2} & -F_{2}U_{2} \\ * & U_{2} \end{bmatrix} \begin{bmatrix} x(t-\tau (t)) \\ f(x(t-\tau (t))) \end{bmatrix}. \end{aligned}$$
(36)

On the other side, for any matrix O of appropriate dimensions, from system (7) we have

$$\begin{aligned}& \begin{aligned}[b] & 2 \dot{x}^{T}(t) O \times \biggl[ -A_{ij} x(t) +W_{1ij} f\bigl(x(t)\bigr)+ W _{2ij} f\bigl(x\bigl(t-\tau (t) \bigr)\bigr) \\&\quad{} + W_{3ij} \int _{t-d(t)}^{t} f\bigl(x(s)\bigr) \,ds+ u(t)- \dot{x}(t) \biggr] = 0. \end{aligned} \end{aligned}$$
(37)

Combining (18)–(37), we can obtain

$$\begin{aligned}& \mathcal{L}V\bigl(t,x(t),i\bigr) + \vartheta u^{T}(t)u(t) - y^{T}(t)\mathcal{Q}y(t) - 2 y^{T}(t)\mathcal{S} u(t) - u^{T}(t) \mathcal{R} u(t) \\& \quad \leq \zeta ^{T}(t) \biggl\{ \varPhi + \frac{\tau _{a}}{N} LS_{1}^{-1}L ^{T} + \bigl(\tau _{2}- \tau (t)\bigr)MS_{2}^{-1}M^{T} + \bigl(\tau (t)- \tau _{a}\bigr)VS _{2}^{-1}V^{T} \biggr\} \zeta (t). \end{aligned}$$
(38)

By the conditions of Theorem 3.1, if \(\zeta (t)\neq 0\), then we have

$$\begin{aligned}& \mathcal{L}V\bigl(t,x(t),i\bigr)< 0. \end{aligned}$$
(39)

For \(t\in [t_{k-1},t_{k}]\), in view of (16) and (39), we have

$$\begin{aligned}& V\bigl(t_{k},x(t),j\bigr) < V\bigl(t_{k}^{-},x(t),i \bigr) < V\bigl(t_{k-1},x(t),i\bigr). \end{aligned}$$
(40)

By a similar proof and mathematical induction we can ensure that (40) is true for all i, j, \(r(0)=i_{0}\in S\), \(k\in Z_{+}\):

$$\begin{aligned}& V\bigl(t_{k},x(t),j\bigr) < V\bigl(t_{k}^{-},x(t),i \bigr) < V\bigl(t_{k},x(t),i\bigr) < \cdots < V\bigl(t _{0},x(t),i_{0}\bigr). \end{aligned}$$
(41)

It follows from (38) that

$$\begin{aligned}& \mathcal{L}V\bigl(t,x(t),i\bigr) + \vartheta u^{T}(t)u(t) - y^{T}(t)\mathcal{Q}y(t) - 2 y^{T}(t)\mathcal{S} u(t) - u^{T}(t) \mathcal{R} u(t) \\& \quad \leq \zeta ^{T}(t) \biggl\{ \varPhi + \frac{\tau _{a}}{N} LS_{1}^{-1}L ^{T} + \bigl(\tau _{2}- \tau (t)\bigr)MS_{2}^{-1}M^{T} + \bigl(\tau (t)- \tau _{a}\bigr)VS _{2}^{-1}V^{T} \biggr\} \zeta (t). \end{aligned}$$
(42)

Let

$$\begin{aligned}& \varPi = \varPhi + \frac{\tau _{a}}{N} LS_{1}^{-1}L^{T} + \bigl(\tau _{2}-\tau (t)\bigr)MS _{2}^{-1}M^{T} + \bigl(\tau (t)-\tau _{a}\bigr)VS_{2}^{-1}V^{T}. \end{aligned}$$
(43)

Then, applying the lemma in [8] to (43), we obtain the following inequalities:

$$\begin{aligned}& \varPhi + \frac{\tau _{a}}{N}LS_{1}^{-1}L^{T} + ( \tau _{2}-\tau _{a})MS_{2} ^{-1}M^{T}< 0, \end{aligned}$$
(44)
$$\begin{aligned}& \varPhi + \frac{\tau _{a}}{N}LS_{1}^{-1}L^{T} + ( \tau _{2}-\tau _{a})VS_{2} ^{-1}V^{T}< 0. \end{aligned}$$
(45)

Using Schur complements on (44)–(45), we obtain the LMIs (13)–(14) of Theorem 3.1. Since \(\varPi < 0\), we easily get

$$\begin{aligned}& y^{T}(t)\mathcal{Q}y(t) + 2 y^{T}(t)\mathcal{S} u(t) + u^{T}(t) \mathcal{R} u(t) > \mathcal{L}V\bigl(t,x(t),i\bigr) + \vartheta u^{T}(t)u(t). \end{aligned}$$
(46)

Integrating this inequality from 0 to T and using the zero initial conditions, we get

$$\begin{aligned}& E(u,y,T) \geq \vartheta \langle u,u \rangle _{T} + V(T) - V(0) \geq \vartheta \langle u,u \rangle _{T} \end{aligned}$$
(47)

for all \(T \geq 0\). Hence condition (10) holds, and the proposed model (7) is \((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative in the sense of Definition 1. □
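The passage from (44)–(45) to the LMIs (13)–(14) is the standard Schur-complement step: for \(S \succ 0\) and a scalar \(a>0\), \(\varPhi + aLS^{-1}L^{T} < 0\) holds if and only if \(\left[\begin{smallmatrix} \varPhi & aL \\ * & -aS \end{smallmatrix}\right] < 0\). The sketch below tests this kind of LMI feasibility with cvxpy on a small toy instance (an illustration only; the full 13-block Φ of Theorem 3.1 would be assembled analogously):

```python
import cvxpy as cp
import numpy as np

n = 3
rng = np.random.default_rng(2)
L = rng.normal(size=(n, n))    # plays the role of the slack matrix L
a = 0.5                        # stands in for tau_a / N
eps = 1e-3                     # strictness margin

Phi = cp.Variable((n, n), symmetric=True)
S = cp.Variable((n, n), symmetric=True)

# Schur-complement form of  Phi + a * L @ inv(S) @ L.T << 0  with S >> 0:
M = cp.bmat([[Phi,     a * L],
             [a * L.T, -a * S]])
constraints = [S >> eps * np.eye(n), M << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)             # 'optimal' indicates feasibility

# Verify the eliminated (pre-Schur) form numerically:
Phi_v, S_v = Phi.value, S.value
reduced = Phi_v + a * L @ np.linalg.solve(S_v, L.T)
print(np.max(np.linalg.eigvalsh(reduced)))   # negative, as expected
```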

Remark 1

The LKF \(V_{3}(t,x(t),i)\) plays an important role in reducing the conservatism for time-varying delay systems; in its infinitesimal generator \(\mathcal{L}V_{3}(t,x(t),i)\), the cross terms \(-\int _{t- \frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s)S_{1}\dot{x}(s)\,ds\), \(-\int _{t-\tau (t)}^{t-\tau _{a}} \dot{x}^{T}(s)S_{2}\dot{x}(s)\,ds\), and \(-\int _{t-\tau _{2}}^{t-\tau (t)} \dot{x}^{T}(s)S_{2}\dot{x}(s)\,ds\) are bounded as follows:

$$\begin{aligned}& - \int _{t-\frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s)S_{1} \dot{x}(s)\,ds \leq \frac{\tau _{a}}{N} \zeta ^{T}(t)LS_{1}^{-1}L^{T} \zeta (t) + 2 \zeta ^{T}(t)L \biggl[ x(t)-x \biggl(t-\frac{\tau _{a}}{N} \biggr) \biggr], \\& \begin{gathered} - \int _{t-\tau (t)}^{t-\tau _{a}} \dot{x}^{T}(s)S_{2} \dot{x}(s)\,ds \\\quad \leq \bigl(\tau (t)-\tau _{a}\bigr) \zeta ^{T}(t)VS_{2}^{-1}V^{T}\zeta (t) + 2 \zeta ^{T}(t)V \bigl[ x(t-\tau _{a})-x\bigl(t-\tau (t)\bigr) \bigr],\end{gathered} \\& \begin{gathered} - \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x}^{T}(s)S_{2} \dot{x}(s)\,ds \\\quad \leq \bigl(\tau _{2}-\tau (t)\bigr) \zeta ^{T}(t)MS_{2}^{-1}M^{T}\zeta (t) + 2 \zeta ^{T}(t)M \bigl[ x\bigl(t-\tau (t)\bigr)-x(t-\tau _{2}) \bigr].\end{gathered} \end{aligned}$$

Finally, to reduce the conservatism of the constructed dissipativity conditions, the convexity of the matrix function in the cross terms is exploited. This treatment differs from the approaches used in [12, 35, 37, 41] and may ensure a better feasible region for the dissipativity conditions. Thus, owing to a tighter bounding of the time derivative of the LKF and a low number of slack variables, the obtained dissipativity condition is less conservative than those in [12, 35, 37, 41].

Remark 2

Very recently, many researchers have endeavored to reduce the conservatism of dissipativity conditions for delayed neural networks. A free-matrix-based integral inequality technique was constructed using a set of slack variables, which can be handled via convex optimization algorithms [37]. Improved dissipativity criteria for delayed neural networks were investigated in [35, 41] using the LKF approach. In [12] the authors developed a Wirtinger double integral inequality, which was used to analyze the dissipativity behavior of continuous-time neural networks involving Markovian jumping parameters under Finsler's lemma. Using a delay-fractioning approach, the designed dissipativity condition is much less conservative than those in the existing works, and the derived results can ensure the dissipativity of the proposed delayed neural networks. Hence the delay-partitioning method is widely applied and exposes the potential of reducing conservatism. However, to the best of the authors' knowledge, dissipativity analysis of fuzzy Markovian jumping neural networks with discrete and distributed time-varying delays and impulses has not been investigated yet, which shows the novelty of our developed methods.

Remark 3

Consider the Markovian jumping neural network without fuzzy rules and impulsive effects of the following form:

$$\begin{aligned}& \begin{gathered} \dot{x}(t) = -A_{i}x(t) + W_{1i} f\bigl(x(t)\bigr) + W_{2i}f\bigl(x\bigl(t-\tau (t) \bigr)\bigr) \\ \hphantom{\dot{x}(t)}\quad{} + W_{3i} \int _{t-d(t)}^{t} f \bigl(x(s)\bigr)\,ds + u(t), \quad t>0, \\ y(t) = f\bigl(x(t)\bigr). \end{gathered} \end{aligned}$$
(48)

From Theorem 3.1 we obtain the following corollary for the dissipativity analysis of the Markovian jumping neural network (48).

Corollary 3.2

Under Assumption (H1) and given scalars \(\tau _{1}\), \(\tau _{2}\), d, \(\mu _{1}\), and \(\mu _{2}\), the neural network model (48) is strictly \((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if there exist positive definite matrices \(P_{1i}\), \(P_{i}\) (\(i=2,\ldots,4\)), Q, R, \(S_{i}\) (\(i=1,2,\ldots,7\)), \(T_{i}\) (\(i=1,2\)), positive diagonal matrices \(U_{1}\), \(U_{2}\), and matrices O, \(L_{i}\), \(M_{i}\), \(V_{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:

$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }M \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(49)
$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }V \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0. \end{aligned}$$
(50)

Proof

The proof is similar to that of Theorem 3.1 and therefore is omitted. □

Remark 4

When Markovian jumping parameters are not present, that is, the Markov chain \(\{r(t),t\geq 0\}\) takes only the value 1 (i.e., \(S=\{1\}\)), system (48) becomes the following neural network model:

$$\begin{aligned}& \begin{gathered} \dot{x}(t) = -Ax(t) + W_{1} f\bigl(x(t) \bigr) + W_{2}f\bigl(x\bigl(t-\tau (t)\bigr)\bigr) \\ \hphantom{\dot{x}(t)}\quad{} + W_{3} \int _{t-d(t)}^{t} f \bigl(x(s)\bigr)\,ds + u(t), \quad t>0, \\ y(t) = f\bigl(x(t)\bigr). \end{gathered} \end{aligned}$$
(51)

For system (51), we obtain the following corollary by Theorem 3.1 and Corollary 3.2.

Corollary 3.3

Based on Assumption (H1) and given scalars \(\tau _{1}\), \(\tau _{2}\), d, \(\mu _{1}\), and \(\mu _{2}\), the neural network (51) is strictly \((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if there exist positive definite matrices \(P_{1}\), \(P_{i}\) (\(i=2,\ldots,4\)), Q, R, \(S_{i}\) (\(i=1,2,\ldots,7\)), \(T_{i}\) (\(i=1,2\)), positive diagonal matrices \(U_{1}\), \(U_{2}\), and matrices O, \(L_{i}\), \(M_{i}\), \(V_{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:

$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }M \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(52)
$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }V \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(53)

and

$$\begin{aligned}& Q = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} Q_{11} & Q_{12} & \cdots & Q_{1N} \\ * & Q_{22} & \cdots & Q_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & Q_{NN} \end{array}\displaystyle \right ] \geq 0, \quad\quad R = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} R_{11} & R_{12} & \cdots & R_{1N} \\ * & R_{22} & \cdots & R_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & R_{NN} \end{array}\displaystyle \right ] \geq 0, \end{aligned}$$

where

$$\begin{aligned}& \varPhi = \left [ \textstyle\begin{array}{@{}c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{\ }c@{}} \varPhi _{11} & \varPhi _{12} & \varPhi _{13} & \varPhi _{14} & \varPhi _{15} & 0 & \varPhi _{17} & \varPhi _{18} & 0 & 0 & 0 & 0 & 0 \\ * & \varPhi _{22} & \varPhi _{23} & 0 & 0 & F_{2}U_{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & \varPhi _{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \varPhi _{44} & \mathit{OW}_{1} & \mathit{OW}_{2} & 0 & 0 & 0 & 0 & \mathit{OW}_{3} & 0 & O \\ * & * & * & * & \varPhi _{55} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\mathcal{S} \\ * & * & * & * & * & -U_{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & -T_{1} & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & -T_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & -S_{5} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & * & -S_{6} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & * & * & -S_{7} & 0 & 0 \\ * & * & * & * & * & * & * & * & * & * & * & -(1-\mu _{2})P_{4} & 0 \\ * & * & * & * & * & * & * & * & * & * & * & * & \gamma I-\mathcal{R} \end{array}\displaystyle \right ], \\& \varPhi _{11} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \varPhi _{1} & L_{2}^{T}+Q_{12}-L_{1} & Q_{13} & \cdots & Q_{1N} & V_{1} \\ * & P_{3}-P_{2}-S_{4}-L_{2} & 0 & \cdots & 0 & V_{2} \\ * & * & Q_{22}-Q_{11} & \cdots & Q_{2N}-Q_{1,N-1} & -Q_{1N} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ * & * & * & \cdots & Q_{NN}-Q_{N-1,N-1} & -Q_{N-1,N} \\ * & * & * & \cdots & * & -Q_{NN} \end{array}\displaystyle \right ], \\& \varPhi _{33} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} R_{22}-R_{11} & R_{23}-R_{12} & \cdots & R_{2N}-R_{1,N-1} & -R_{1N} \\ * & R_{33}-R_{22} & \cdots & R_{3N}-R_{2,N-1} & -R_{2N} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ * & * & \cdots & R_{NN}-R_{N-1,N-1} & -R_{N-1,N} \\ * & * & \cdots & * & -R_{NN}-S_{3}-S_{4} \end{array}\displaystyle \right ], \\& \varPhi _{13} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} R_{12} & R_{13} & \cdots & R_{1N} & -M_{1}+S_{3} \\ 0 & 0 & \cdots & 0 & -M_{2} \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \end{array}\displaystyle \right ], \\& \varPhi _{1} = P_{2}+P_{4}+Q_{11}+L_{1}+L_{1}^{T}+R_{11}-S_{3}-\tau _{2}^{2}T_{1}+\tau _{12}^{2}T_{2}-F_{1}U_{1}, \\& \varPhi _{12} = \bigl[ (V_{1}+M_{1})^{T} \quad (V_{2}+M_{2}+S_{4})^{T} \quad 0 \quad 0 \quad 0 \quad 0 \bigr]^{T}, \quad\quad \varPhi _{22} = -(1-\mu _{1})P_{3}-2S_{4}-F_{1}U_{2}, \\& \varPhi _{23} = [ 0 \quad 0 \quad 0 \quad S_{4} ], \quad\quad \varPhi _{14} = \bigl[ \bigl(P_{1}-A^{T}O^{T}\bigr)^{T} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 \bigr]^{T}, \\& \varPhi _{44} = \frac{\tau _{a}}{N}S_{1}+\tau _{\delta }S_{2}+\tau _{2}^{2}S_{3}+\tau _{12}^{2}S_{4}+\frac{\tau _{2}^{4}}{4}T_{1}+\tau _{s}^{2}T_{2}-O-O^{T}, \\& \varPhi _{15} = [ F_{2}U_{1} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 ]^{T}, \quad\quad \varPhi _{55} = \tau _{2}^{2}S_{5}+\tau _{12}^{2}S_{6}+d^{2}S_{7}-U_{1}-\mathcal{Q}, \\& \varPhi _{17} = \bigl[ \tau _{2}T_{1}^{T} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 \bigr]^{T}, \quad\quad \varPhi _{18} = \bigl[ \tau _{12}T_{2}^{T} \quad 0 \quad 0 \quad 0 \quad 0 \quad 0 \bigr]^{T}, \\& L = \bigl[ L_{1}^{T} \quad L_{2}^{T} \quad 0 \quad \cdots \quad 0 \bigr]^{T}, \quad\quad M = \bigl[ M_{1}^{T} \quad M_{2}^{T} \quad 0 \quad \cdots \quad 0 \bigr]^{T}, \quad\quad V = \bigl[ V_{1}^{T} \quad V_{2}^{T} \quad 0 \quad \cdots \quad 0 \bigr]^{T}, \\& \tau _{a} = \frac{\tau _{1}+\tau _{2}}{2}, \quad\quad \tau _{\delta } = \frac{\tau _{2}-\tau _{1}}{2}, \quad\quad \tau _{12} = \tau _{2}-\tau _{1}, \quad\quad \tau _{s} = \frac{1}{2}\bigl(\tau _{2}^{2}-\tau _{1}^{2}\bigr). \end{aligned}$$

Proof

To prove the dissipativity criteria for the recurrent neural networks (51), we define the following Lyapunov–Krasovskii functional:

$$\begin{aligned}& \begin{aligned}[b] V\bigl(t,x(t)\bigr) &= V_{1}\bigl(t,x(t)\bigr)+V_{2} \bigl(t,x(t)\bigr)+V_{3}\bigl(t,x(t)\bigr)+V_{4}\bigl(t,x(t) \bigr) \\ &\quad{} +V _{5}\bigl(t,x(t)\bigr)+V_{6}\bigl(t,x(t)\bigr), \end{aligned} \end{aligned}$$
(54)

where

$$\begin{aligned}& \begin{aligned} V_{1}\bigl(t,x(t)\bigr) & = x^{T}(t)P_{1} x(t) + \int _{t-\tau _{1}}^{t} x^{T}(s) P_{2} x(s) \,ds + \int _{t-\tau (t)}^{t-\tau _{1}} x^{T}(s) P_{3} x(s) \,ds \\ &\quad{} + \int _{t-d(t)} ^{t}x^{T}(s)P_{4}x(s) \,ds,\end{aligned} \\& V_{2}\bigl(t,x(t)\bigr) = \int _{t-\frac{\tau _{a}}{N}}^{t} \xi _{1}^{T}(s) Q \xi _{1}(s) \,ds + \int _{t-\frac{\tau _{2}}{N}}^{t} \xi _{2}^{T}(s) R \xi _{2}(s) \,ds, \\& V_{3}\bigl(t,x(t)\bigr) = \int _{-\frac{\tau _{a}}{N}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) S _{1} \dot{x}(s) \,ds \,d\theta + \int _{-\tau _{2}}^{-\tau _{a}} \int _{t+ \theta }^{t} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds \,d\theta , \\& V_{4}\bigl(t,x(t)\bigr) = \tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) S _{3} \dot{x}(s) \,ds \,d\theta + \tau _{12} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds \,d\theta , \\& \begin{aligned} V_{5}\bigl(t,x(t)\bigr)& = \tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S _{5} f\bigl(x(s)\bigr) \,ds \,d\theta + \tau _{12} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S_{6} f\bigl(x(s)\bigr) \,ds \,d\theta \\ &\quad {} + d \int _{-d}^{0} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S_{7} f\bigl(x(s)\bigr) \,ds \,d\theta , \end{aligned} \\& \begin{aligned} V_{6}\bigl(t,x(t)\bigr) &= \frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t} \dot{x}^{T}(s) T_{1} \dot{x}(s)\,ds \,d\lambda \,d \theta \\ &\quad{} + \tau _{s} \int _{-\tau _{2}}^{-\tau _{1}} \int _{\theta }^{0} \int _{t+\lambda }^{t} \dot{x}^{T}(s) T_{2} \dot{x}(s)\,ds \,d\lambda \,d \theta . \end{aligned} \end{aligned}$$

Then, using the same proof as in Theorem 3.1, we get the result. □

Remark 5

If the distributed delay is not considered in system (51), then the recurrent neural network is rewritten as

$$\begin{aligned} \begin{gathered} \dot{x}(t) = -Ax(t) + W_{1} f\bigl(x(t) \bigr) + W_{2}f\bigl(x\bigl(t-\tau (t)\bigr)\bigr) + u(t), \quad t>0, \\ y(t) = f\bigl(x(t)\bigr). \end{gathered} \end{aligned}$$
(55)

The dissipativity condition for the delayed neural network (55) is given as follows.

Corollary 3.4

Under Assumption (H1) and given scalars \(\tau _{1}\), \(\tau _{2}\), and \(\mu _{1}\), the neural network (55) is \((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if there exist positive definite matrices \(P_{1}\), \(P_{i}\) (\(i=2,\ldots,3\)), Q, R, \(S_{i}\) (\(i=1,2,\ldots,6\)), \(T _{i}\) (\(i=1,2\)), positive diagonal matrices \(U_{1}\), \(U_{2}\), and matrices O, \(L_{i}\), \(M_{i}\), \(V_{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:

$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }M \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(56)
$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }V \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(57)

and

$$\begin{aligned}& Q = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} Q_{11} & Q_{12} & \cdots & Q_{1N} \\ * & Q_{22} & \cdots & Q_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & Q_{NN} \end{array}\displaystyle \right ] \geq 0, \quad\quad R = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} R_{11} & R_{12} & \cdots & R_{1N} \\ * & R_{22} & \cdots & R_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & R_{NN} \end{array}\displaystyle \right ] \geq 0, \end{aligned}$$

where

$$\begin{aligned}& \varPhi = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \varPhi _{11} & \varPhi _{12} & \varPhi _{13} & \varPhi _{14} & \varPhi _{15} & 0 & \varPhi _{17} & \varPhi _{18} & 0 & 0 & 0 \\ * & \varPhi _{22} & \varPhi _{23} & 0 & 0 & F_{2}U_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & \varPhi _{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \varPhi _{44} & \mathit{OW}_{1} & \mathit{OW}_{2} & 0 & 0 & 0 & 0 & O \\ * & * & * & * & \varPhi _{55} & 0 & 0 & 0 & 0 & 0 & -\mathcal{S} \\ * & * & * & * & * & -U_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & -T_{1} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & -T_{2} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & -S_{5} & 0 & 0 \\ * & * & * & * & * & * & * & * & * & -S_{6} & 0 \\ * & * & * & * & * & * & * & * & * & * & \gamma I-\mathcal{R} \end{array}\displaystyle \right ], \\& \varPhi _{1} = P_{2} + Q_{11} + L_{1} + L_{1}^{T} + R_{11}-S_{3}- \tau _{2}^{2}T_{1}+ \tau _{12}^{2}T_{2}-F_{1}U_{1}, \\& \varPhi _{55} = \tau _{2}^{2}S_{5}+ \tau _{12}^{2}S_{6}-U_{1}-\mathcal{Q}, \end{aligned}$$

and the other elements are as in Corollary 3.3.

Proof

This proof is similar to that of Corollary 3.3 and therefore is omitted. □

Remark 6

As a particular case of dissipativity, we get passivity criteria for system (55) by taking \(\mathcal{Q} = 0\), \(\mathcal{S} = I\), and \(\mathcal{R} =2\gamma I\) in Corollary 3.4. The following corollary, obtained from Corollary 3.4, describes the passivity conditions for system (55).

Corollary 3.5

Under Assumption (H1) and given scalars \(\tau _{1}\), \(\tau _{2}\), and \(\mu _{1}\), the neural network (55) is passive if there exist positive definite matrices \(P_{1}\), \(P_{i}\) (\(i=2,\ldots,3\)), Q, R, \(S _{i}\) (\(i=1,2,\ldots,6\)), \(T_{i}\) (\(i=1,2\)), positive diagonal matrices \(U_{1}\), \(U_{2}\), and matrices O, \(L_{i}\), \(M_{i}\), \(V_{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:

$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }M \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(58)
$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} \varPhi & \frac{\tau _{a}}{N}L & \tau _{\delta }V \\ * & -S_{1} & 0 \\ * & * & -S_{2} \end{array}\displaystyle \right ] < 0, \end{aligned}$$
(59)

and

$$\begin{aligned}& Q = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} Q_{11} & Q_{12} & \cdots & Q_{1N} \\ * & Q_{22} & \cdots & Q_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & Q_{NN} \end{array}\displaystyle \right ] \geq 0, \quad\quad R = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} R_{11} & R_{12} & \cdots & R_{1N} \\ * & R_{22} & \cdots & R_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & R_{NN} \end{array}\displaystyle \right ] \geq 0, \end{aligned}$$

where

$$\begin{aligned}& \varPhi = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \varPhi _{11} & \varPhi _{12} & \varPhi _{13} & \varPhi _{14} & \varPhi _{15} & 0 & \varPhi _{17} & \varPhi _{18} & 0 & 0 & 0 \\ * & \varPhi _{22} & \varPhi _{23} & 0 & 0 & F_{2}U_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & \varPhi _{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \varPhi _{44} & \mathit{OW}_{1} & \mathit{OW}_{2} & 0 & 0 & 0 & 0 & O \\ * & * & * & * & \varPhi _{55} & 0 & 0 & 0 & 0 & 0 & -I \\ * & * & * & * & * & -U_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & -T_{1} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & -T_{2} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & -S_{5} & 0 & 0 \\ * & * & * & * & * & * & * & * & * & -S_{6} & 0 \\ * & * & * & * & * & * & * & * & * & * & -\gamma I \end{array}\displaystyle \right ], \\& \begin{gathered} \varPhi _{1} = P_{2} + Q_{11} + L_{1} + L_{1}^{T} + R_{11}-S_{3}- \tau _{2}^{2}T_{1}+ \tau _{12}^{2}T_{2}-F_{1}U_{1}, \\ \varPhi _{55} = \tau _{2}^{2}S_{5}+ \tau _{12}^{2}S_{6}-U_{1}. \end{gathered} \end{aligned}$$

Proof

The proof directly follows from Corollary 3.4. □

4 Numerical examples

In this section, we analyze several numerical examples to demonstrate the effectiveness of the proposed methods.

Example 1

Consider the fuzzy impulsive neural network (7) with a two-mode Markovian jumping process and the following parameters:

  • Mode 1:

    $$\begin{aligned}& A_{11} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \quad\quad A_{21} = \begin{bmatrix} 2.3 & 0 \\ 0 & 2.3 \end{bmatrix}, \quad\quad W_{111} = \begin{bmatrix} 0.40 & 0.12 \\ 0.22 & 0.72 \end{bmatrix}, \quad\quad W_{121} = \begin{bmatrix} 0.53 & 0.10 \\ 0.21 & 0.73 \end{bmatrix}, \\& W_{211} = \begin{bmatrix} 0.6 & 0.5 \\ 0.2 & 0.1 \end{bmatrix}, \quad\quad W_{221} = \begin{bmatrix} 0.71 & 0.41 \\ 0.21 & 0.11 \end{bmatrix}, \quad\quad W_{311} = \begin{bmatrix} 0.3 & 0.1 \\ 0.1 & 0.2 \end{bmatrix}, \quad\quad W_{321} = \begin{bmatrix} 0.31 & 0.11 \\ 0.11 & 0.12 \end{bmatrix}. \end{aligned}$$
  • Mode 2:

    $$\begin{aligned}& A_{12} = \begin{bmatrix} 2.5 & 0 \\ 0 & 2.5 \end{bmatrix}, \quad\quad A_{22} = \begin{bmatrix} 2.15 & 0 \\ 0 & 2.15 \end{bmatrix}, \quad\quad W_{112} = \begin{bmatrix} 0.4 & 0.15 \\ 0.1 & 0.5 \end{bmatrix}, \quad\quad W_{122} = \begin{bmatrix} 0.48 & 0.17 \\ 0.12 & 0.3 \end{bmatrix}, \\& W_{212} = \begin{bmatrix} 0.6 & 0.4 \\ 0.1 & 0.15 \end{bmatrix}, \quad\quad W_{222} = \begin{bmatrix} 0.65 & 0.35 \\ 0.12 & 0.14 \end{bmatrix}, \quad\quad W_{312} = \begin{bmatrix} 0.3 & 0.2 \\ 0.3 & 0.1 \end{bmatrix}, \quad\quad W_{322} = \begin{bmatrix} 0.15 & 0.21 \\ 0.15 & 0.12 \end{bmatrix}. \end{aligned}$$

We choose

$$\begin{aligned}& \mathcal{Q}= \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}, \quad\quad \mathcal{S}= \begin{bmatrix} 0.3 & 0 \\ 0.2 & 1 \end{bmatrix}, \quad\quad \mathcal{R}= \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}, \quad\quad I_{k} = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.3 \end{bmatrix}. \end{aligned}$$

We consider the activation functions \(f_{1}(x) = f_{2}(x) = \tanh (x)\). Assumption (H1) is satisfied with \(F_{1}^{-}=0\), \(F_{1}^{+}=1\), \(F _{2}^{-}=0\), and \(F_{2}^{+}=1\). Thus

$$\begin{aligned}& F_{1} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad\quad F_{2} = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}. \end{aligned}$$

Let \(r(t)\) be a right-continuous Markov chain taking values in \(S = \{1, 2\}\) with generator \(\varGamma = \bigl[ \begin{smallmatrix} -7 & 7 \\ 5 & -5 \end{smallmatrix} \bigr]\), and let the membership functions for rules 1 and 2 be \(\mu _{1}(\theta (t))=\sin ^{2}(x_{1}-0.5)\) and \(\mu _{2}(\theta (t))= \cos ^{2}(x_{1}-0.5)\). Then, via the MATLAB LMI control toolbox, for \(N=2\), the LMIs given in Theorem 3.1 are feasible. Thus we conclude from Theorem 3.1 that the neural network (7) with mixed time-varying delays and impulsive effects is dissipative. The simulation results for the state responses of system (7) with two Markovian jumping modes (\(i = 1,2\)) are given in Fig. 1, and Fig. 2 illustrates the mode transitions. In Table 1, we list the maximum allowable upper bound for the delays \(\tau _{2}=d\) with different values of \(\mu _{1}\), \(\mu _{2}\).
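For readers who wish to reproduce Fig. 1 qualitatively, the following forward-Euler sketch combines the fuzzy blending (6), the Markov mode process, and the impulsive resets. The step size, constant delays, zero input, initial history, and impulse times \(t_{k}=k\) are hypothetical choices made only for illustration; the system data are those of Example 1.

```python
import numpy as np

rng = np.random.default_rng(3)
h, T = 1e-3, 10.0                     # hypothetical step size and horizon
tau = d = 0.5                         # hypothetical constant delays
lag = int(round(tau / h))
steps = int(round(T / h)) + lag       # first `lag` samples are the history

# Example 1 data, keyed by (rule, mode):
A  = {(1,1): np.diag([2.0, 2.0]),  (2,1): np.diag([2.3, 2.3]),
      (1,2): np.diag([2.5, 2.5]),  (2,2): np.diag([2.15, 2.15])}
W1 = {(1,1): np.array([[0.40,0.12],[0.22,0.72]]), (2,1): np.array([[0.53,0.10],[0.21,0.73]]),
      (1,2): np.array([[0.40,0.15],[0.10,0.50]]), (2,2): np.array([[0.48,0.17],[0.12,0.30]])}
W2 = {(1,1): np.array([[0.60,0.50],[0.20,0.10]]), (2,1): np.array([[0.71,0.41],[0.21,0.11]]),
      (1,2): np.array([[0.60,0.40],[0.10,0.15]]), (2,2): np.array([[0.65,0.35],[0.12,0.14]])}
W3 = {(1,1): np.array([[0.30,0.10],[0.10,0.20]]), (2,1): np.array([[0.31,0.11],[0.11,0.12]]),
      (1,2): np.array([[0.30,0.20],[0.30,0.10]]), (2,2): np.array([[0.15,0.21],[0.15,0.12]])}
Ik = 0.3 * np.eye(2)                  # impulse gain from Example 1
gamma = np.array([[-7.0, 7.0], [5.0, -5.0]])
f = np.tanh

x = np.tile([0.5, -0.4], (steps + 1, 1))   # hypothetical constant history
mode = 1
impulse_every = int(round(1.0 / h))        # hypothetical impulses at t_k = k

for k in range(lag, steps):
    if rng.random() < -gamma[mode-1, mode-1] * h:   # Markov mode switch
        mode = 2 if mode == 1 else 1
    lam1 = np.sin(x[k, 0] - 0.5) ** 2               # rule weights, sum to 1
    lam2 = 1.0 - lam1
    dist = h * f(x[k-lag:k]).sum(axis=0)   # approx. int_{t-d}^{t} f(x(s)) ds
    dx = np.zeros(2)
    for lam, r in ((lam1, 1), (lam2, 2)):
        dx += lam * (-A[r, mode] @ x[k] + W1[r, mode] @ f(x[k])
                     + W2[r, mode] @ f(x[k-lag]) + W3[r, mode] @ dist)
    x[k+1] = x[k] + h * dx                          # u(t) = 0 here
    if (k + 1) % impulse_every == 0:
        x[k+1] = Ik @ x[k+1]                        # impulsive reset

print(x[::1000])   # sampled states: trajectories settle, cf. Fig. 1
```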

Figure 1 Simulation results of the T–S fuzzy Markovian jumping neural networks

Figure 2 Mode transitions \(r(t)\)

Table 1 Maximum upper bound for delays \(\tau _{2}=d\) with different μ (\(\mu _{1}=\mu _{2}\))

Example 2

Consider the Markovian jumping neural network (48) with parameters

$$\begin{aligned}& A_{1} = \begin{bmatrix} 2.1 & 0 \\ 0 & 2.3 \end{bmatrix}, \quad\quad A_{2} = \begin{bmatrix} 2.2 & 0 \\ 0 & 2.3 \end{bmatrix}, \quad\quad W_{11} = \begin{bmatrix} 0.3 & 0.6 \\ 0.2 & 0.4 \end{bmatrix}, \quad\quad W_{12} = \begin{bmatrix} 0.3 & 0.1 \\ 0.2 & 0.5 \end{bmatrix}, \\& W_{21} = \begin{bmatrix} 0.2 & 0.7 \\ 0.4 & 0.3 \end{bmatrix}, \quad\quad W_{22} = \begin{bmatrix} 0.1 & 0.8 \\ 0.6 & 1.1 \end{bmatrix}, \quad\quad W_{31} = \begin{bmatrix} 0.4 & 0.4 \\ 0.2 & 0.6 \end{bmatrix}, \quad\quad W_{32} = \begin{bmatrix} 0.3 & 0.2 \\ 0.5 & 0.4 \end{bmatrix}, \\& \mathcal{Q} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad\quad \mathcal{S} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \quad\quad \mathcal{R} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \quad\quad \varGamma = \begin{bmatrix} -2 & 2 \\ 3 & -3 \end{bmatrix}, \end{aligned}$$

and the activation functions \(f_{1}(x) = f_{2}(x) = \tanh (x)\). Choosing \(N=2\), \(\mu _{1}=0.5\), and \(\mu _{2}=0.1\) and using the MATLAB LMI toolbox with Corollary 3.2, we obtain the maximum allowable upper bounds of \(\tau _{2}\) and d for various values of \(\tau _{1}\), listed in Table 2. This implies that the Markovian jumping neural network (48) is dissipative in the sense of Definition 1.

Table 2 Maximum upper bound of \(\tau _{2}=d\) when \(\mu _{1}=0.5\), \(\mu _{2}=0.1\)

Example 3

Consider the neural network (51) with the following parameters:

$$\begin{aligned}& A = \begin{bmatrix} 4 & 0 \\ 0 & 3 \end{bmatrix}, \quad\quad W_{1} = \begin{bmatrix} 3.2 & 0.4 \\ 4 & 3.6 \end{bmatrix}, \quad\quad W_{2} = \begin{bmatrix} 2.2 & 1.2 \\ 1.2 & 4 \end{bmatrix}, \quad\quad W_{3} = \begin{bmatrix} 1 & 0.4 \\ 0.6 & 1 \end{bmatrix}, \\& \mathcal{Q} = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.3 \end{bmatrix}, \quad\quad \mathcal{S} = \begin{bmatrix} 5 & 0 \\ 0 & 6 \end{bmatrix}, \quad\quad \mathcal{R} = \begin{bmatrix} 0.6 & 0.4 \\ 2.1 & 0.3 \end{bmatrix}, \quad\quad f_{1}(x) = f_{2}(x) = \tanh (x). \end{aligned}$$

For this neural network, we compute the maximum allowable delay values \(\tau _{2}\) and d for different values of μ and given \(\tau _{1}\). We can see from Table 3 that the condition presented in Corollary 3.3 still ensures the dissipativity of this model. The results in [12, 35, 37, 41] are not applicable to this system, as a time-varying distributed delay is involved.

Table 3 Allowable upper bound of \(\tau _{2}\) and d for different \(\tau _{1}\) and μ (\(\mu _{1}=\mu _{2}\))

Example 4

Consider the neural networks (55) with the following coefficient matrices:

$$\begin{aligned}& A = \begin{bmatrix} 2 & 0 \\ 0 & 1.5 \end{bmatrix}, \quad\quad W_{1} = \begin{bmatrix} 1.2 & 1 \\ 0.2 & 0.3 \end{bmatrix}, \quad\quad W_{2} = \begin{bmatrix} 0.8 & 0.4 \\ 0.2 & 0.1 \end{bmatrix}, \\& \mathcal{Q} = \begin{bmatrix} 0.9 & 0 \\ 0 & 0.9 \end{bmatrix}, \quad\quad \mathcal{S} = \begin{bmatrix} 0.5 & 0 \\ 0.3 & 1 \end{bmatrix}, \quad\quad \mathcal{R} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}. \end{aligned}$$

Here we choose \(F_{1}^{-}=-0.1\), \(F_{1}^{+}=0.9\), \(F_{2}^{-}=-0.1\), and \(F_{2}^{+}=0.9\). Thus

$$\begin{aligned}& F_{1} = \begin{bmatrix} -0.09 & 0 \\ 0 & -0.09 \end{bmatrix}, \quad\quad F_{2} = \begin{bmatrix} 0.4 & 0 \\ 0 & 0.4 \end{bmatrix}. \end{aligned}$$

We assume that \(\tau _{1}=0\) and \(\tau _{2}=0.4\) for different values of \(\mu _{1}\). The optimal dissipativity performances γ are calculated by the method of Corollary 3.4 and listed in Table 4. We can observe that our dissipativity condition provides a less conservative result in comparison with the existing works [12, 35, 37, 41].

Table 4 Optimal dissipativity performance γ for different \(\mu _{1}\)

Example 5

Consider the neural networks (55) with the following parameters:

$$\begin{aligned}& A= \begin{bmatrix} 1.4 & 0 \\ 0 & 1.5 \end{bmatrix}, \quad\quad W_{1} = \begin{bmatrix} 1.2 & 1 \\ 1.2 & 1.3 \end{bmatrix}, \quad\quad W_{2} = \begin{bmatrix} 0.2 & 0.5 \\ 0.3 & 0.8 \end{bmatrix}. \end{aligned}$$

Moreover, the activation functions are chosen as \(g_{i}(x_{i})=0.5( \vert x_{i}+1 \vert - \vert x_{i}-1 \vert )\), \(i=1,2\). The allowable upper bounds of \(\tau _{2}\) when \(\tau _{1}=0\) for various values of \(\mu _{1}\), obtained by Corollary 3.5, are listed in Table 5. We easily see that the passivity results obtained in our work are less conservative than those in [25, 36, 39, 42].

Table 5 Maximum upper bound of \(\tau _{2}\) for different values of \(\mu _{1}\) (Example 5)

Example 6

Consider the neural network (55) with the following parameters:

$$\begin{aligned}& A= \begin{bmatrix} 2.2 & 0 \\ 0 & 1.8 \end{bmatrix}, \quad\quad W_{1} = \begin{bmatrix} 1.2 & 1 \\ 0.2 & 0.3 \end{bmatrix}, \quad\quad W_{2} = \begin{bmatrix} 0.8 & 0.4 \\ 0.2 & 0.1 \end{bmatrix}, \end{aligned}$$

\(F_{1}^{-}=F_{2}^{-}=0\) and \(F_{1}^{+}=F_{2}^{+}=1\). For different values of \(\mu _{1}\), the allowable upper bounds of \(\tau _{2}\) when \(\tau _{1}=0\), computed by Corollary 3.5, are listed in Table 6 together with the results presented in [16, 35, 37, 38, 40, 41].

Table 6 Maximum upper bound of \(\tau _{2}\) for different values of \(\mu _{1}\) (Example 6)

Example 7

Consider the neural network (55) with the following parameters studied in [13, 28, 30, 33, 34]:

$$\begin{aligned}& A= \begin{bmatrix} 1.5 & 0 \\ 0 & 0.7 \end{bmatrix}, \quad\quad W_{1} = \begin{bmatrix} 0.0503 & 0.0454 \\ 0.0987 & 0.2075 \end{bmatrix}, \quad\quad W_{2} = \begin{bmatrix} 0.2381 & 0.9320 \\ 0.0388 & 0.5062 \end{bmatrix}, \end{aligned}$$

\(f_{1}(x) = 0.3\tanh (x)\), \(f_{2}(x) = 0.8 \tanh (x)\), \(F_{1}^{-}=F_{2} ^{-}=0\), \(F_{1}^{+}=0.3\), and \(F_{2}^{+}=0.8\). By applying Corollary 3.5 and solving the resulting LMIs with the MATLAB LMI toolbox, the maximum allowable upper bounds of \(\tau _{2}\) for different values of \(\mu _{1}\) when \(\tau _{1}=0\) are computed and listed in Table 7. We can observe from Table 7 that the passivity condition proposed in this paper provides less conservative results than those in [13, 28, 30, 33, 34].

Table 7 Maximum upper bound of \(\tau _{2}\) for different values of \(\mu _{1}\) (Example 7)

5 Conclusion

In this paper, we have studied the problem of dissipativity conditions for Takagi–Sugeno fuzzy Markovian jumping neural networks with impulsive perturbations using the delay partitioning method. By constructing a proper LKF and using the LMI framework together with the delay-fractioning approach, we have established a set of sufficient conditions ensuring that the considered fuzzy Markovian neural networks are \((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative. Finally, several numerical examples are given to illustrate the effectiveness of the proposed dissipativity theory. Moreover, our results show that the developed method yields less conservative results than some other works. Furthermore, the problem of finite-time extended dissipativity conditions for stochastic T–S fuzzy singular Markovian jump systems with randomly occurring uncertainties and time delays using the delay partitioning approach remains untreated and will be the topic of our future work.

References

  1. Bao, H., Park, J.H., Cao, J.: Exponential synchronization of coupled stochastic memristor-based neural networks with time-varying probabilistic delay coupling and impulsive delay. IEEE Trans. Neural Netw. Learn. Syst. 27, 190–201 (2016)


  2. Boyd, S., Ghaoui, L.E., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in Systems and Control Theory. SIAM, Philadelphia (1994)


  3. Chen, G., Xia, J., Zhuang, G.: Delay-dependent stability and dissipativity analysis of generalized neural networks with Markovian jump parameters and two delay components. J. Franklin Inst. 353, 2137–2158 (2016)


  4. Choi, H.D., Ahn, C.K., Shi, P., Lim, M.T., Song, M.K.: \(L_{2}\)–\(L_{\infty }\) filtering for Takagi–Sugeno fuzzy neural networks based on Wirtinger-type inequalities. Neurocomputing 153, 117–125 (2015)


  5. Jian, J., Wan, P.: Global exponential convergence of fuzzy complex-valued neural networks with time-varying delays and impulsive effects. Fuzzy Sets Syst. 338, 23–39 (2018)


  6. Kwon, O.M., Park, J.H., Lee, S.M., Cha, E.J.: New augmented Lyapunov–Krasovskii functional approach to stability analysis of neural networks with time-varying delays. Nonlinear Dyn. 76, 221–236 (2014)


  7. Lakshmanan, S., Senthilkumar, T., Balasubramaniam, P.: Improved results on robust stability of neutral systems with mixed time-varying delays and nonlinear perturbations. Appl. Math. Model. 35, 5355–5368 (2011)


  8. Liu, J., Gu, Z., Tian, E.: A new approach to \(H_{\infty }\) filtering for linear time-delay systems. J. Franklin Inst. 349, 184–200 (2012)


  9. Liu, Y., Wang, Z., Liu, X.: Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Netw. 19(5), 667–675 (2006)


  10. Maharajan, C., Raja, R., Cao, J., Ravi, G., Rajchakit, G.: Global exponential stability of Markovian jumping stochastic impulsive uncertain BAM neural networks with leakage, mixed time delays, and α-inverse Hölder activation functions. Adv. Differ. Equ. 2018, 113 (2018)


  11. Muralisankar, S., Gopalakrishnan, N., Balasubramaniam, P.: An LMI approach for global robust dissipativity analysis of T–S fuzzy neural networks with interval time-varying delays. Expert Syst. Appl. 39, 3345–3355 (2012)


  12. Nagamani, G., Joo, Y.H., Radhika, T.: Delay-dependent dissipativity criteria for Markovian jump neural networks with random delays and incomplete transition probabilities. Nonlinear Dyn. 91, 2503–2522 (2018)


  13. Nagamani, G., Radhika, T.: Dissipativity and passivity analysis of Markovian jump neural networks with two additive time-varying delays. Neural Process. Lett. 44, 571–592 (2016)


  14. Pan, Y., Zhou, Q., Lu, Q., Wu, C.: New dissipativity condition of stochastic fuzzy neural networks with discrete and distributed time-varying delays. Neurocomputing 162, 267–272 (2015)


  15. Pandiselvi, S., Raja, R., Zhu, Q., Rajchakit, G.: A state estimation \(H_{\infty }\) issue for discrete-time stochastic impulsive genetic regulatory networks in the presence of leakage, multiple delays and Markovian jumping parameters. J. Franklin Inst. 355, 2735–2761 (2018)


  16. Radhika, T., Nagamani, G., Zhu, Q., Ramasamy, S., Saravanakumar, R.: Further results on dissipativity analysis for Markovian jump neural networks with randomly occurring uncertainties and leakage delays. Neural Comput. Appl. 23, 1–15 (2018)


  17. Raja, R., Karthik Raja, U., Samidurai, R., Leelamani, A.: Dissipativity of discrete-time BAM stochastic neural networks with Markovian switching and impulses. J. Franklin Inst. 350, 3217–3247 (2013)


  18. Raja, R., Karthik Raja, U., Samidurai, R., Leelamani, A.: Improved stochastic dissipativity of uncertain discrete-time neural networks with multiple delays and impulses. Int. J. Mach. Learn. Cybern. 6, 289–305 (2015)


  19. Raja, R., Sakthivel, R., Marshal Anthoni, S.: Dissipativity of discrete-time BAM stochastic neural networks with Markovian switching and impulses. J. Franklin Inst. 350, 3217–3247 (2011)


  20. Raja, R., Zhu, Q., Senthilraj, S., Samidurai, R.: Improved stability analysis of uncertain neutral type neural networks with leakage delays and impulsive effects. Appl. Math. Comput. 266, 1050–1069 (2015)


  21. Sakthivel, R., Saravanakumar, T., Ma, Y.-K., Marshal Anthoni, S.: Finite-time resilient reliable sampled-data control for fuzzy systems with randomly occurring uncertainties. Fuzzy Sets Syst. 329, 1–18 (2017)


  22. Sakthivel, R., Saravanakumar, T., Sathishkumar, M.: Non-fragile reliable control synthesis of the sugarcane borer. IET Syst. Biol. 11, 139–143 (2017)


  23. Saravanakumar, T., Sakthivel, R., Selvaraj, P., Marshal Anthoni, S.: Dissipative analysis for discrete-time systems via fault-tolerant control against actuator failures. Complexity 21, 579–592 (2016)


  24. Senan, S.: An analysis of global stability of Takagi–Sugeno fuzzy Cohen–Grossberg neural networks with time delays. Neural Process. Lett. 10, 1–12 (2018)


  25. Senthilraj, S., Raja, R., Zhu, Q., Samidurai, R.: Effects of leakage delays and impulsive control in dissipativity analysis of Takagi–Sugeno fuzzy neural networks with randomly occurring uncertainties. J. Franklin Inst. 354, 3574–3593 (2017)


  26. Senthilraj, S., Raja, R., Zhu, Q., Samidurai, R., Yao, Z.: Delay-interval-dependent passivity analysis of stochastic neural networks with Markovian jumping parameters and time delay in the leakage term. Nonlinear Anal. Hybrid Syst. 22, 262–275 (2016)


  27. Shen, H., Xiang, M., Huo, S., Wu, Z.G., Park, J.H.: Finite-time \(H_{\infty }\) asynchronous state estimation for discrete-time fuzzy Markov jump neural networks with uncertain measurements. Fuzzy Sets Syst. 356, 113–128 (2019)


  28. Shi, K., Zhong, S., Zhu, H., Liu, X., Zeng, Y.: New delay-dependent stability criteria for neutral-type neural networks with mixed random time-varying delays. Neurocomputing 168, 896–907 (2015)


  29. Shu, Y., Liu, X.G., Qiu, S., Wang, F.: Dissipativity analysis for generalized neural networks with Markovian jump parameters and time-varying delay. Nonlinear Dyn. 89, 2125–2140 (2017)


  30. Song, Q.: Exponential stability of recurrent neural networks with both time-varying delays and general activation functions via LMI approach. Neurocomputing 71, 2823–2830 (2008)


  31. Song, Q., Yan, H., Zhao, Z., Liu, Y.: Global exponential stability of complex-valued neural networks with both time-varying delays and impulsive effects. Neural Netw. 79, 108–116 (2016)


  32. Sowmiya, C., Raja, R., Cao, J., Rajchakit, G., Alsaedi, A.: Enhanced robust finite-time passivity for Markovian jumping discrete-time BAM neural networks with leakage delay. Adv. Differ. Equ. 2017, 318 (2017)


  33. Sun, J., Liu, G.P., Chen, J., Rees, D.: Improved stability criteria for neural networks with time-varying delay. Phys. Lett. A 373, 342–348 (2009)


  34. Tian, J., Zhong, S.: Improved delay-dependent stability criterion for neural networks with time-varying delay. Appl. Math. Comput. 217, 10278–10288 (2011)


  35. Wu, Z., Park, J., Su, H., Chu, J.: Robust dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. Nonlinear Dyn. 69, 1323–1332 (2012)


  36. Xu, S., Zheng, W.X., Zou, Y.: Passivity analysis of neural networks with time-varying delays. IEEE Trans. Circuits Syst. II, Express Briefs 56(4), 325–329 (2009)


  37. Zeng, H.B., He, Y., Shi, P., Wu, M., Xiao, S.P.: Dissipativity analysis of neural networks with time-varying delays. Neurocomputing 168, 741–746 (2015)


  38. Zeng, H.B., He, Y., Wu, M., Xiao, H.Q.: Improved conditions for passivity of neural networks with a time-varying delay. IEEE Trans. Cybern. 44, 785–792 (2014)


  39. Zeng, H.B., He, Y., Wu, M., Xiao, S.P.: Passivity analysis for neural networks with a time-varying delay. Neurocomputing 74, 730–734 (2011)


  40. Zeng, H.B., Park, J.H., Shen, H.: Robust passivity analysis of neural networks with discrete and distributed delays. Neurocomputing 149, 1092–1097 (2015)


  41. Zeng, H.B., Park, J.H., Xia, J.W.: Further results on dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. Nonlinear Dyn. 79, 83–91 (2015)


  42. Zhang, B., Xu, S., Lam, J.: Relaxed passivity conditions for neural networks with time-varying delays. Neurocomputing 142, 299–306 (2014)


  43. Zhang, C.K., He, Y., Jiang, L., Lin, W.J., Wu, M.: Delay-dependent stability analysis of neural networks with time-varying delay: a generalized free-weighting-matrix approach. Appl. Math. Comput. 294, 102–120 (2017)


  44. Zhu, J., Sun, J.: Stability of quaternion-valued impulsive delay difference systems and its application to neural networks. Neurocomputing 284, 63–69 (2018)



Acknowledgements

The authors express their sincere gratitude to the editors for the careful reading of the original manuscript.

Availability of data and materials

Data sharing is not applicable to this paper, as no data sets were generated or analyzed during the current study.

Funding

This work was jointly supported by the National Natural Science Foundation of China (61773217) and the Construct Program of the Key Discipline in Hunan Province.

Author information


Contributions

All three authors contributed equally to this work. They all read and approved the final version of the manuscript.

Corresponding author

Correspondence to Quanxin Zhu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Nirmala, V.J., Saravanakumar, T. & Zhu, Q. Dissipative criteria for Takagi–Sugeno fuzzy Markovian jumping neural networks with impulsive perturbations using delay partitioning approach. Adv Differ Equ 2019, 140 (2019). https://doi.org/10.1186/s13662-019-2085-5

