In this section, we establish dissipativity conditions for the fuzzy impulsive Markovian jumping neural network (7) with both discrete and distributed time delays. Using a Lyapunov–Krasovskii functional and a delay-fractioning approach, the following theorem provides a set of novel delay-dependent dissipativity conditions under impulsive perturbations. For presentation convenience, we denote
$$\begin{aligned}& F_{1} = \operatorname{diag}\bigl(F_{1}^{-}F_{1}^{+}, F_{2}^{-}F_{2}^{+},\ldots, F_{n}^{-}F_{n}^{+}\bigr), \quad\quad F_{2} = \operatorname{diag} \biggl( \frac{F_{1}^{-} + F_{1}^{+}}{2}, \frac{F_{2}^{-} + F_{2}^{+}}{2},\ldots, \frac{F_{n}^{-} + F_{n}^{+}}{2} \biggr), \\& F_{3} = \operatorname{diag} \bigl( F_{1}^{-}, F_{2}^{-},\ldots, F_{n} ^{-} \bigr), \quad\quad F_{4} = \operatorname{diag} \bigl( F_{1}^{+}, F_{2}^{+},\ldots, F_{n}^{+} \bigr). \end{aligned}$$
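As a quick numerical illustration (not part of the proof), the four diagonal matrices can be assembled directly from the activation sector bounds \(F_{k}^{-}\), \(F_{k}^{+}\); the two-neuron values below are made up for the example.

```python
import numpy as np

# Illustrative construction of F1-F4 from hypothetical sector bounds
# F_k^- <= (f_k(a)-f_k(b))/(a-b) <= F_k^+ for a two-neuron network.
F_minus = np.array([0.0, -0.2])
F_plus = np.array([0.5, 0.8])

F1 = np.diag(F_minus * F_plus)           # diag(F_k^- F_k^+)
F2 = np.diag((F_minus + F_plus) / 2.0)   # diag((F_k^- + F_k^+)/2)
F3 = np.diag(F_minus)                    # diag(F_k^-)
F4 = np.diag(F_plus)                     # diag(F_k^+)

print(F1.diagonal(), F2.diagonal())
```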
Theorem 3.1
Under Assumptions (H1)
and (H2), for given scalars
\(\tau _{1}\), \(\tau _{2}\), d, \(\mu _{1}\), and
\(\mu _{2}\), the neural network described by (7) is strictly
\((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if there exist positive definite matrices
\(P_{1i}\), \(P_{i}\) (\(i=2,\ldots,4\)), Q, R, \(S_{i}\) (\(i=1,2,\ldots,7\)), \(T_{i}\) (\(i=1,2\)), positive diagonal matrices
\(U_{1}\), \(U_{2}\), and matrices
O, \(L_{i}\), \(M_{i}\), \(V _{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:
$$\begin{aligned}& I_{ik}^{T}P_{1j}I_{ik}-P_{1i} < 0, \end{aligned}$$
(12)
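For given data, condition (12) can be checked numerically by an eigenvalue test for negative definiteness; the matrices \(I_{ik}\), \(P_{1i}\), \(P_{1j}\) below are hypothetical 2-by-2 stand-ins, not data from the model.

```python
import numpy as np

# Minimal numeric check of the jump condition I^T P_1j I - P_1i < 0
# for illustrative 2x2 matrices (not taken from system (7)).
I_ik = np.array([[0.5, 0.0], [0.1, 0.4]])   # impulsive gain at t_k
P_1i = np.eye(2)
P_1j = np.array([[2.0, 0.0], [0.0, 1.5]])

M = I_ik.T @ P_1j @ I_ik - P_1i
is_negative_definite = bool(np.all(np.linalg.eigvalsh(M) < 0.0))
print(is_negative_definite)
```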
(13)
(14)
and
where
Proof
To obtain dissipativity criteria for the fuzzy Markovian jumping impulsive neural networks (7), we examine the Lyapunov–Krasovskii functional
$$\begin{aligned}& \begin{aligned}[b] V\bigl(t,x(t),i\bigr) &= V_{1}\bigl(t,x(t),i\bigr)+V_{2} \bigl(t,x(t),i\bigr)+V_{3}\bigl(t,x(t),i\bigr)+V_{4} \bigl(t,x(t),i\bigr)\\&\quad{} +V _{5}\bigl(t,x(t),i\bigr)+V_{6} \bigl(t,x(t),i\bigr), \end{aligned} \end{aligned}$$
(15)
where
$$\begin{aligned}& \begin{aligned} V_{1}\bigl(t,x(t),i\bigr) &= x^{T}(t)P_{1i} x(t) + \int _{t-\tau _{1}}^{t} x^{T}(s) P_{2} x(s) \,ds + \int _{t-\tau (t)}^{t-\tau _{1}} x^{T}(s) P_{3} x(s) \,ds \\&\quad{} + \int _{t-d(t)} ^{t}x^{T}(s)P_{4}x(s) \,ds,\end{aligned} \\& V_{2}\bigl(t,x(t),i\bigr) = \int _{t-\frac{\tau _{a}}{N}}^{t} \xi _{1}^{T}(s) Q \xi _{1}(s) \,ds + \int _{t-\frac{\tau _{2}}{N}}^{t} \xi _{2}^{T}(s) R \xi _{2}(s) \,ds, \\& V_{3}\bigl(t,x(t),i\bigr) = \int _{-\frac{\tau _{a}}{N}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) S _{1} \dot{x}(s) \,ds \,d\theta + \int _{-\tau _{2}}^{-\tau _{a}} \int _{t+ \theta }^{t} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds \,d\theta , \\& V_{4}\bigl(t,x(t),i\bigr) = \tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) S _{3} \dot{x}(s) \,ds \,d\theta + \tau _{12} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds \,d\theta , \\& \begin{aligned} V_{5}\bigl(t,x(t),i\bigr) &= \tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S _{5} f\bigl(x(s)\bigr) \,ds \,d\theta \\ &\quad {}+ \tau _{12} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S_{6} f\bigl(x(s)\bigr) \,ds \,d\theta \\ &\quad {}+ d \int _{-d}^{0} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S_{7} f\bigl(x(s)\bigr) \,ds \,d\theta , \end{aligned} \\& \begin{aligned} V_{6}\bigl(t,x(t),i\bigr) &= \frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t} \dot{x}^{T}(s) T_{1} \dot{x}(s)\,ds \,d\lambda \,d \theta \\ &\quad {}+ \tau _{s} \int _{-\tau _{2}}^{-\tau _{1}} \int _{\theta }^{0} \int _{t+\lambda }^{t} \dot{x}^{T}(s) T_{2} \dot{x}(s)\,ds \,d\lambda \,d \theta\end{aligned} \end{aligned}$$
with
For \(t=t_{k}\), we have
$$\begin{aligned} V_{1}\bigl(t_{k},x(t),j\bigr)- V_{1} \bigl(t_{k}^{-},x(t),i\bigr) = & x^{T}(t_{k})P_{1j}x(t_{k})- x^{T}\bigl(t_{k}^{-}\bigr)P_{1i}x \bigl(t_{k}^{-}\bigr) \\ = & x^{T}\bigl(t_{k}^{-}\bigr)I_{ik}^{T}P_{1j}I_{ik}x \bigl(t_{k}^{-}\bigr)- x^{T}\bigl(t_{k}^{-} \bigr)P _{1i}x\bigl(t_{k}^{-}\bigr) \\ = & x^{T}\bigl(t_{k}^{-}\bigr) \bigl[I_{ik}^{T}P_{1j}I_{ik}-P_{1i} \bigr]x\bigl(t_{k}^{-}\bigr). \end{aligned}$$
(16)
Based on the assumptions, \(I_{ik}\) is a constant matrix at the impulse moment \(t_{k}\) in mode i for \(i\in S\), \(k\in N\), so by (12) and (16) we have
$$\begin{aligned}& V_{1}\bigl(t_{k},x(t),j\bigr)- V_{1} \bigl(t_{k}^{-}, x(t),i\bigr) < 0. \end{aligned}$$
(17)
For \(t\in [t_{k-1}, t_{k})\), computing the weak infinitesimal generator \(\mathcal{L}V(t,x(t),i)\) along the trajectories of (7), we obtain
$$\begin{aligned}& \begin{aligned}[b] &\mathcal{L}V_{1}\bigl(t,x(t),i\bigr) \\ & \quad = 2 x^{T}(t)P_{1i} \dot{x}(t) + x^{T}(t)[P_{2}+P_{4}]x(t) + x^{T}(t- \tau _{1}) (P_{3}-P_{2})x(t- \tau _{1}) \\ & \quad \quad{} - \bigl(1-\dot{\tau }(t)\bigr) x^{T}\bigl(t-\tau (t) \bigr)P_{3} x\bigl(t-\tau (t)\bigr) - \bigl(1- \dot{d}(t) \bigr)x^{T}\bigl(t-d(t)\bigr)P_{4}x\bigl(t-d(t)\bigr) \\ & \quad \quad{} + \sum_{j=1}^{N} \pi _{ij} x^{T}(t) P_{1j} x(t), \\ & \quad \leq 2 x^{T}(t)P_{1i}\dot{x}(t) + x^{T}(t)[P_{2}+P_{4}]x(t) + x^{T}(t- \tau _{1}) (P_{3}-P_{2})x(t- \tau _{1}) \\ & \quad \quad{} - (1-\mu _{1}) x^{T}\bigl(t-\tau (t) \bigr)P_{3} x\bigl(t-\tau (t)\bigr) - (1-\mu _{2})x^{T} \bigl(t-d(t)\bigr)P_{4}x\bigl(t-d(t)\bigr) \\ & \quad \quad{} + \sum_{j=1}^{N} \pi _{ij} x^{T}(t) P_{1j} x(t), \end{aligned} \end{aligned}$$
(18)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{2}\bigl(t,x(t),i\bigr) &= \xi _{1}^{T}(t)Q \xi _{1}(t) - \xi _{1}^{T} \biggl(t- \frac{\tau _{a}}{N} \biggr) Q \xi _{1} \biggl(t-\frac{\tau _{a}}{N} \biggr) \\ &\quad{} + \xi _{2}^{T}(t)R\xi _{2}(t) - \xi _{2}^{T} \biggl(t-\frac{\tau _{2}}{N} \biggr) R \xi _{2} \biggl(t-\frac{\tau _{2}}{N} \biggr), \end{aligned} \end{aligned}$$
(19)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{3}\bigl(t,x(t),i\bigr) &= \frac{\tau _{a}}{N} \dot{x}^{T}(t) S_{1} \dot{x}(t) + (\tau _{2}-\tau _{a})\dot{x}^{T}(t) S_{2} \dot{x}(t) \\ &\quad{} - \int _{t-\frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s) S_{1} \dot{x}(s) \,ds - \int _{t-\tau _{2}}^{t-\tau _{a}} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds, \end{aligned} \end{aligned}$$
(20)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{4}\bigl(t,x(t),i\bigr) &= \tau _{2}^{2} \dot{x}^{T}(t)S_{3} \dot{x}(t) - \tau _{2} \int _{t-\tau _{2}}^{t} \dot{x}^{T}(s) S_{3} \dot{x}(s) \,ds + \tau _{12} ^{2} \dot{x}^{T}(t)S_{4} \dot{x}(t) \\ &\quad{} - \tau _{12} \int _{t-\tau _{2}}^{t-\tau _{1}} \dot{x}^{T}(s) S _{4} \dot{x}(s) \,ds, \end{aligned} \end{aligned}$$
(21)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{5}\bigl(t,x(t),i\bigr) &= f^{T}\bigl(x(t) \bigr) \bigl[\tau _{2}^{2} S_{5} + \tau _{12}^{2}S_{6}+d^{2}S_{7} \bigr] f\bigl(x(t)\bigr) - \tau _{2} \int _{t-\tau _{2}}^{t} f^{T}\bigl(x(s)\bigr) S_{5} f\bigl(x(s)\bigr) \,ds\hspace{-20pt} \\ &\quad{} - \tau _{12} \int _{t-\tau _{2}}^{t-\tau _{1}} f^{T}\bigl(x(s)\bigr) S_{6} f\bigl(x(s)\bigr) \,ds-d \int _{t-d(t)}^{t} f^{T}\bigl(x(s) \bigr)S_{7}f\bigl(x(s)\bigr)\,ds, \end{aligned} \end{aligned}$$
(22)
$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V_{6}\bigl(t,x(t),i\bigr) &= \frac{\tau _{2}^{4}}{4} \dot{x}^{T}(t) T_{1} \dot{x}(t) - \frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) T _{1} \dot{x}(s) \,ds \,d\theta \\ &\quad{} + \tau _{s}^{2} \dot{x}^{T}(t) T_{2} \dot{x}(t) - \tau _{s} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x}^{T}(s) T _{2} \dot{x}(s) \,ds \,d\theta . \end{aligned} \end{aligned}$$
(23)
Note that
$$\begin{aligned}& - \int _{t-\tau _{2}}^{t-\tau _{a}} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds = - \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds - \int _{t-\tau (t)}^{t-\tau _{a}} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds. \end{aligned}$$
Using Lemma 2.1, we obtain
$$\begin{aligned}& - \int _{t-\frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s)S_{1} \dot{x}(s)\,ds \leq \frac{\tau _{a}}{N} \zeta ^{T}(t)LS_{1}^{-1}L^{T} \zeta (t) + 2 \zeta ^{T}(t)L \biggl[ x(t)-x \biggl(t-\frac{\tau _{a}}{N} \biggr) \biggr], \end{aligned}$$
(24)
$$\begin{aligned}& \begin{aligned}[b] - \int _{t-\tau (t)}^{t-\tau _{a}} \dot{x}^{T}(s)S_{2} \dot{x}(s)\,ds &\leq \bigl(\tau (t)-\tau _{a}\bigr) \zeta ^{T}(t)VS_{2}^{-1}V^{T}\zeta (t) \\ &\quad{} + 2 \zeta ^{T}(t)V \bigl[ x(t-\tau _{a})-x\bigl(t- \tau (t)\bigr) \bigr], \end{aligned} \end{aligned}$$
(25)
$$\begin{aligned}& \begin{aligned}[b] - \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x}^{T}(s)S_{2} \dot{x}(s)\,ds &\leq \bigl(\tau _{2}-\tau (t)\bigr) \zeta ^{T}(t)MS_{2}^{-1}M^{T}\zeta (t) \\ &\quad {}+ 2 \zeta ^{T}(t)M \bigl[ x\bigl(t-\tau (t)\bigr)-x(t-\tau _{2}) \bigr]. \end{aligned} \end{aligned}$$
(26)
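For completeness, bounds of the form (24)–(26) can be seen as completed-square estimates: for any matrix \(L\) and \(S_{1}>0\),

$$\begin{aligned} 0 \leq{}& \int _{t-\frac{\tau _{a}}{N}}^{t} \bigl( \dot{x}(s)+S_{1}^{-1}L^{T}\zeta (t) \bigr)^{T} S_{1} \bigl( \dot{x}(s)+S_{1}^{-1}L^{T}\zeta (t) \bigr) \,ds \\ ={}& \int _{t-\frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s) S_{1} \dot{x}(s) \,ds + 2 \zeta ^{T}(t)L \biggl[ x(t)-x \biggl(t-\frac{\tau _{a}}{N} \biggr) \biggr] + \frac{\tau _{a}}{N} \zeta ^{T}(t)LS_{1}^{-1}L^{T}\zeta (t), \end{aligned}$$

and rearranging yields (24); the bounds (25) and (26) follow in the same way with \(V\) and \(M\) on the corresponding integration intervals.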
Applying the lemma in [8] and the Newton–Leibniz formula
$$\begin{aligned}& \int _{t-\tau _{2}}^{t} \dot{x}(s)\,ds = x(t)- x(t-\tau _{2}), \end{aligned}$$
we have
$$\begin{aligned} \begin{aligned}[b] - \tau _{2} \int _{t-\tau _{2}}^{t} \dot{x}^{T}(s) S_{3} \dot{x}(s)\,ds &\leq - \biggl[ \int _{t-\tau _{2}}^{t}\dot{x}(s) \,ds \biggr]^{T} S_{3} \biggl[ \int _{t-\tau _{2}}^{t} \dot{x}(s) \,ds \biggr] \\ &\leq - \bigl[ x(t) - x(t-\tau _{2}) \bigr]^{T} S_{3} \bigl[ x(t) - x(t- \tau _{2}) \bigr]. \end{aligned} \end{aligned}$$
(27)
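The Jensen-type bound used in (27) can be sanity-checked on a discretized signal; the signal \(v\) (a stand-in for \(\dot{x}\)) and the matrix \(S_{3}\) below are arbitrary test data.

```python
import numpy as np

# Discretized check of the Jensen-type bound behind (27):
#   tau2 * int v(s)^T S3 v(s) ds >= (int v(s) ds)^T S3 (int v(s) ds).
# v and S3 are arbitrary test data; the bound holds for any S3 > 0.
tau2 = 1.5
n_steps = 2000
dt = tau2 / n_steps
ts = np.arange(n_steps) * dt
v = np.vstack([np.sin(3 * ts), np.cos(2 * ts)])   # stand-in for xdot(s)
S3 = np.array([[2.0, 0.3], [0.3, 1.0]])           # S3 > 0

quad = np.einsum('it,ij,jt->t', v, S3, v).sum() * dt   # int v^T S3 v ds
total = v.sum(axis=1) * dt                             # int v ds
jensen_holds = bool(tau2 * quad >= total @ S3 @ total)
print(jensen_holds)
```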
Note that
$$\begin{aligned}& \int _{t-\tau _{2}}^{t-\tau _{1}}\dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds = \int _{t-\tau _{2}}^{t-\tau (t)}\dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds + \int _{t-\tau (t)}^{t-\tau _{1}}\dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds. \end{aligned}$$
The lemma in [6] gives
$$\begin{aligned} \bigl[\tau _{2}-\tau (t) \bigr] \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x} ^{T}(s) S_{4} \dot{x}(s)\,ds \geq & \biggl[ \int _{t-\tau _{2}}^{t-\tau (t)}\dot{x}(s) \,ds \biggr]^{T} S _{4} \biggl[ \int _{t-\tau _{2}}^{t-\tau (t)}\dot{x}(s) \,ds \biggr] \\ \geq & \bigl[ x\bigl(t-\tau (t)\bigr) - x(t-\tau _{2}) \bigr]^{T} S_{4} \bigl[ x\bigl(t- \tau (t)\bigr) - x(t-\tau _{2}) \bigr]. \end{aligned}$$
Since \(\tau _{2}-\tau (t) \leq \tau _{2}-\tau _{1}\), we have
$$\begin{aligned}& [\tau _{2}-\tau _{1} ] \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x} ^{T}(s) S_{4} \dot{x}(s)\,ds \geq \bigl[ x\bigl(t-\tau (t)\bigr) - x(t-\tau _{2}) \bigr]^{T} S_{4} \bigl[ x\bigl(t- \tau (t) \bigr) - x(t-\tau _{2}) \bigr], \end{aligned}$$
and thus
$$\begin{aligned}& \begin{aligned}[b] & - [\tau _{2}-\tau _{1} ] \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x} ^{T}(s) S_{4} \dot{x}(s)\,ds \\&\quad \leq - \bigl[ x\bigl(t-\tau (t)\bigr) - x(t- \tau _{2}) \bigr]^{T} S_{4} \bigl[ x\bigl(t- \tau (t)\bigr) - x(t-\tau _{2}) \bigr]. \end{aligned} \end{aligned}$$
(28)
Similarly, we have
$$\begin{aligned}& \begin{aligned}[b] & - [\tau _{2}-\tau _{1} ] \int _{t-\tau (t)}^{t-\tau _{1}} \dot{x} ^{T}(s) S_{4} \dot{x}(s)\,ds \\&\quad \leq - \bigl[ x(t-\tau _{1}) - x \bigl(t-\tau (t)\bigr) \bigr]^{T} S_{4} \bigl[ x(t- \tau _{1}) - x\bigl(t-\tau (t)\bigr) \bigr] \end{aligned} \end{aligned}$$
(29)
and
$$\begin{aligned}& \begin{aligned}[b] - \tau _{2} \int _{t-\tau _{2}}^{t} f^{T}\bigl(x(s)\bigr) S_{5} f\bigl(x(s)\bigr) \,ds &\leq - \biggl( \int _{t-\tau _{2}}^{t} f\bigl(x(s)\bigr)\,ds \biggr)^{T} \\ &\quad {}\times S_{5} \biggl( \int _{t-\tau _{2}}^{t} f\bigl(x(s)\bigr)\,ds \biggr), \end{aligned} \end{aligned}$$
(30)
$$\begin{aligned}& \begin{aligned}[b] - \tau _{12} \int _{t-\tau _{2}}^{t-\tau _{1}} f^{T}\bigl(x(s)\bigr) S_{6} f\bigl(x(s)\bigr) \,ds &\leq - \biggl( \int _{t-\tau _{2}}^{t-\tau _{1}} f\bigl(x(s)\bigr)\,ds \biggr)^{T} \\ &\quad {}\times S_{6} \biggl( \int _{t-\tau _{2}}^{t-\tau _{1}} f\bigl(x(s)\bigr)\,ds \biggr), \end{aligned} \end{aligned}$$
(31)
$$\begin{aligned}& \begin{aligned}[b] -d \int _{t-d(t)}^{t} f^{T}\bigl(x(s) \bigr)S_{7}f\bigl(x(s)\bigr)\,ds &\leq - \biggl( \int _{t-d(t)}^{t} f\bigl(x(s)\bigr)\,ds \biggr)^{T} \\ &\quad {}\times S_{7} \biggl( \int _{t-d(t)}^{t} f\bigl(x(s)\bigr)\,ds \biggr), \end{aligned} \end{aligned}$$
(32)
$$\begin{aligned}& \begin{aligned}[b] - \frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) T_{1} \dot{x}(s) \,ds \,d\theta &\leq - \biggl( \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}(s) \,ds \,d \theta \biggr)^{T} \\ &\quad {}\times T_{1} \biggl( \int _{-\tau _{2}}^{0} \int _{t+\theta } ^{t} \dot{x}(s) \,ds \,d\theta \biggr) \\ &\leq - \biggl( \tau _{2} x(t) - \int _{t-\tau _{2}}^{t}x(s)\,ds \biggr)^{T} \\ &\quad {}\times T_{1} \biggl( \tau _{2} x(t) - \int _{t-\tau _{2}}^{t}x(s)\,ds \biggr), \end{aligned} \end{aligned}$$
(33)
$$\begin{aligned}& \begin{aligned}[b] - \tau _{s} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x} ^{T}(s) T_{2} \dot{x}(s) \,ds \,d\theta &\leq - \biggl( \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x}(s) \,ds \,d\theta \biggr)^{T} \\ &\quad {}\times T_{2} \biggl( \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+ \theta }^{t} \dot{x}(s) \,ds \,d\theta \biggr) \\ &\leq - \biggl( \tau _{12} x(t) - \int _{t-\tau _{2}}^{t-\tau _{1}} x(s)\,ds \biggr)^{T} \\ &\quad {}\times T_{2} \biggl( \tau _{12} x(t) - \int _{t-\tau _{2}}^{t- \tau _{1}} x(s)\,ds \biggr). \end{aligned} \end{aligned}$$
(34)
For positive diagonal matrices \(U_{1}\) and \(U_{2}\), it follows from Assumption (H1) that
(35)
(36)
On the other hand, for any matrix O of appropriate dimensions, from system (7) we have
$$\begin{aligned}& \begin{aligned}[b] & 2 \dot{x}^{T}(t) O \times \biggl[ -A_{ij} x(t) +W_{1ij} f\bigl(x(t)\bigr)+ W _{2ij} f\bigl(x\bigl(t-\tau (t) \bigr)\bigr) \\&\quad{} + W_{3ij} \int _{t-d(t)}^{t} f\bigl(x(s)\bigr) \,ds+ u(t)- \dot{x}(t) \biggr] = 0. \end{aligned} \end{aligned}$$
(37)
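The identity (37) holds for any O because the bracketed term is the residual of system (7) and vanishes along its trajectories; a quick numeric check with hypothetical 2-dimensional data:

```python
import numpy as np

# The slack identity (37) is zero because substituting xdot = RHS of (7)
# makes the bracket vanish for any O. All data below are illustrative.
rng = np.random.default_rng(1)
A, W1, W2, W3, O = (rng.standard_normal((2, 2)) for _ in range(5))
x, x_del, f_int, u = (rng.standard_normal(2) for _ in range(4))
f = np.tanh  # stand-in activation; f_int stands for int f(x(s)) ds

rhs = -A @ x + W1 @ f(x) + W2 @ f(x_del) + W3 @ f_int + u
xdot = rhs                                  # along trajectories of (7)
slack = 2.0 * xdot @ O @ (rhs - xdot)       # left side of (37)
print(abs(slack) < 1e-12)
```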
Combining (18)–(37), we can obtain
$$\begin{aligned}& \mathcal{L}V\bigl(t,x(t),i\bigr) + \vartheta u^{T}(t)u(t) - y^{T}(t)\mathcal{Q}y(t) - 2 y^{T}(t)\mathcal{S} u(t) - u^{T}(t) \mathcal{R} u(t) \\& \quad \leq \zeta ^{T}(t) \biggl\{ \varPhi + \frac{\tau _{a}}{N} LS_{1}^{-1}L ^{T} + \bigl(\tau _{2}- \tau (t)\bigr)MS_{2}^{-1}M^{T} + \bigl(\tau (t)- \tau _{a}\bigr)VS _{2}^{-1}V^{T} \biggr\} \zeta (t). \end{aligned}$$
(38)
By the conditions of Theorem 3.1, if \(\zeta (t)\neq 0\), then we have
$$\begin{aligned}& \mathcal{L}V\bigl(t,x(t),i\bigr)< 0. \end{aligned}$$
(39)
For \(t\in [t_{k-1},t_{k}]\), in view of (16) and (39), we have
$$\begin{aligned}& V\bigl(t_{k},x(t),j\bigr) < V\bigl(t_{k}^{-},x(t),i \bigr) < V\bigl(t_{k-1},x(t),i\bigr). \end{aligned}$$
(40)
By a similar argument and mathematical induction, we can ensure that (40) holds for all i, j, \(r(0)=i_{0}\in S\), \(k\in N\):
$$\begin{aligned}& V\bigl(t_{k},x(t),j\bigr) < V\bigl(t_{k}^{-},x(t),i \bigr) < V\bigl(t_{k-1},x(t),i\bigr) < \cdots < V\bigl(t _{0},x(t),i_{0}\bigr). \end{aligned}$$
(41)
In view of (38), let
$$\begin{aligned}& \varPi = \varPhi + \frac{\tau _{a}}{N} LS_{1}^{-1}L^{T} + \bigl(\tau _{2}-\tau (t)\bigr)MS _{2}^{-1}M^{T} + \bigl(\tau (t)-\tau _{a}\bigr)VS_{2}^{-1}V^{T}. \end{aligned}$$
(43)
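Note that Π in (43) is affine in \(\tau (t)\in [\tau _{a}, \tau _{2}]\), so negative definiteness at the two endpoints (conditions (44) and (45)) carries over to the whole interval; a small numerical illustration with 2-by-2 stand-in matrices:

```python
import numpy as np

# Pi(tau) = A + tau*B is affine in tau on [tau_a, tau_2], so checking
# negative definiteness at the two endpoints suffices. A and B below are
# illustrative 2x2 stand-ins, not the actual LMI blocks of (43).
A = -3.0 * np.eye(2)                        # constant part of Pi
B = np.array([[0.5, 0.2], [0.2, -0.3]])     # symmetric coefficient of tau

def Pi(tau):
    return A + tau * B

def is_nd(M):
    return bool(np.all(np.linalg.eigvalsh(M) < 0.0))

tau_a, tau_2 = 0.2, 1.0
endpoints_nd = is_nd(Pi(tau_a)) and is_nd(Pi(tau_2))
interior_nd = all(is_nd(Pi(t)) for t in np.linspace(tau_a, tau_2, 11))
print(endpoints_nd, interior_nd)
```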
Then, applying the lemma in [8] to (43), we obtain the following inequalities:
$$\begin{aligned}& \varPhi + \frac{\tau _{a}}{N}LS_{1}^{-1}L^{T} + ( \tau _{2}-\tau _{a})MS_{2} ^{-1}M^{T}< 0, \end{aligned}$$
(44)
$$\begin{aligned}& \varPhi + \frac{\tau _{a}}{N}LS_{1}^{-1}L^{T} + ( \tau _{2}-\tau _{a})VS_{2} ^{-1}V^{T}< 0. \end{aligned}$$
(45)
Applying the Schur complement to (44)–(45), we obtain the LMIs of Theorem 3.1. Since \(\varPi < 0\), we easily get
$$\begin{aligned}& y^{T}(t)\mathcal{Q}y(t) + 2 y^{T}(t)\mathcal{S} u(t) + u^{T}(t) \mathcal{R} u(t) > \mathcal{L}V\bigl(t,x(t),i\bigr) + \vartheta u^{T}(t)u(t). \end{aligned}$$
(46)
Integrating this inequality from 0 to T and using the zero initial conditions, we get
$$\begin{aligned}& E(y,u,T) \geq \vartheta \langle u,u \rangle _{T} + V(T) - V(0) \geq \vartheta \langle u,u \rangle _{T} \end{aligned}$$
(47)
for all \(T \geq 0\). Hence, if condition (11) holds, then the proposed model (7) is \((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative in the sense of Definition 1. □
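As an aside, the supply-rate inequality (47) can be illustrated on a scalar toy system; the weights \(\mathcal{Q}=-1\), \(\mathcal{S}=0\), \(\mathcal{R}=3\), \(\vartheta =0.5\) and the dynamics below are illustrative only and are not produced by Theorem 3.1.

```python
import numpy as np

# Toy scalar check of (47): for xdot = -a*x + w*tanh(x) + u, y = tanh(x),
# zero initial condition, verify E(y,u,T) >= theta * <u,u>_T with the
# illustrative weights Q = -1, S = 0, R = 3 and theta = 0.5.
a, w = 1.0, 0.3
Qw, Sw, Rw, theta = -1.0, 0.0, 3.0, 0.5

dt, T = 1e-3, 10.0
x, E, uu = 0.0, 0.0, 0.0
for k in range(int(T / dt)):
    u = np.sin(k * dt)
    y = np.tanh(x)
    E += (Qw * y * y + 2.0 * Sw * y * u + Rw * u * u) * dt
    uu += u * u * dt
    x += (-a * x + w * np.tanh(x) + u) * dt   # forward Euler step

print(E >= theta * uu)
```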
Remark 1
The LKF \(V_{3}(t,x(t),i)\) plays an important role in reducing the conservatism for the time-varying delay system. In its derivative \(\dot{V}_{3}(t,x(t),i)\), the cross terms \(-\int _{t- \frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s)S_{1}\dot{x}(s)\,ds\), \(-\int _{t-\tau (t)}^{t-\tau _{a}} \dot{x}^{T}(s)S_{2}\dot{x}(s)\,ds\), and \(-\int _{t-\tau _{2}}^{t-\tau (t)} \dot{x}^{T}(s)S_{2}\dot{x}(s)\,ds\) are bounded as follows:
$$\begin{aligned}& - \int _{t-\frac{\tau _{a}}{N}}^{t} \dot{x}^{T}(s)S_{1} \dot{x}(s)\,ds \leq \frac{\tau _{a}}{N} \zeta ^{T}(t)LS_{1}^{-1}L^{T} \zeta (t) + 2 \zeta ^{T}(t)L \biggl[ x(t)-x \biggl(t-\frac{\tau _{a}}{N} \biggr) \biggr], \\& \begin{gathered} - \int _{t-\tau (t)}^{t-\tau _{a}} \dot{x}^{T}(s)S_{2} \dot{x}(s)\,ds \\\quad \leq \bigl(\tau (t)-\tau _{a}\bigr) \zeta ^{T}(t)VS_{2}^{-1}V^{T}\zeta (t) + 2 \zeta ^{T}(t)V \bigl[ x(t-\tau _{a})-x\bigl(t-\tau (t)\bigr) \bigr],\end{gathered} \\& \begin{gathered} - \int _{t-\tau _{2}}^{t-\tau (t)} \dot{x}^{T}(s)S_{2} \dot{x}(s)\,ds \\\quad \leq \bigl(\tau _{2}-\tau (t)\bigr) \zeta ^{T}(t)MS_{2}^{-1}M^{T}\zeta (t) + 2 \zeta ^{T}(t)M \bigl[ x\bigl(t-\tau (t)\bigr)-x(t-\tau _{2}) \bigr].\end{gathered} \end{aligned}$$
Finally, to reduce the conservatism of the constructed dissipativity conditions, the convexity of the matrix function with respect to the cross terms is exploited. This treatment differs from the approaches used in [12, 35, 37, 41] and may ensure a larger feasible region for the dissipativity conditions. Thus, owing to a tighter bounding of the time derivative of the LKF and a smaller number of slack variables, the obtained dissipativity condition is less conservative than those in [12, 35, 37, 41].
Remark 2
Very recently, many researchers have focused on reducing the conservatism of dissipativity conditions for delayed neural networks. In [37] a free-matrix-based integral inequality technique was constructed by introducing a set of slack variables, and the resulting conditions can be solved via convex optimization algorithms. In addition, improved dissipativity criteria for delayed neural networks were established in [35, 41] using the LKF approach. In [12] the authors developed a Wirtinger-type double integral inequality and used it, together with Finsler's lemma, to analyze the dissipativity behavior of continuous-time neural networks with Markovian jumping parameters. Owing to the delay-fractioning approach, the dissipativity condition designed here is much less conservative than those in the existing works, and the derived results ensure the dissipativity of the proposed delayed neural networks. Hence the delay-partitioning method is widely applied and exposes its potential for reducing conservatism. However, to the best of the authors' knowledge, the dissipativity of fuzzy Markovian jumping neural networks with discrete and distributed time-varying delays and impulses has not yet been investigated, which shows the novelty of the methods developed here.
Remark 3
Consider the Markovian jumping neural network without fuzzy and impulsive effects of the following form:
$$\begin{aligned}& \begin{gathered} \dot{x}(t) = -A_{i}x(t) + W_{1i} f\bigl(x(t)\bigr) + W_{2i}f\bigl(x\bigl(t-\tau (t) \bigr)\bigr) \\ \hphantom{\dot{x}(t)}\quad{} + W_{3i} \int _{t-d(t)}^{t} f \bigl(x(s)\bigr)\,ds + u(t), \quad t>0, t\neq t_{k}, \\ y(t) = f\bigl(x(t)\bigr). \end{gathered} \end{aligned}$$
(48)
By Theorem 3.1, we obtain the following corollary on the dissipativity of the Markovian jumping neural network (48).
Corollary 3.2
Under Assumption (H1)
and given scalars
\(\tau _{1}\), \(\tau _{2}\), d, \(\mu _{1}\), and
\(\mu _{2}\), the neural network model (48) is strictly
\((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if there exist positive definite matrices
\(P_{1i}\), \(P_{i}\) (\(i=2,\ldots,4\)), Q, R, \(S_{i}\) (\(i=1,2,\ldots,7\)), \(T_{i}\) (\(i=1,2\)), positive diagonal matrices
\(U_{1}\), \(U_{2}\), and matrices
O, \(L_{i}\), \(M_{i}\), \(V_{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:
(49)
(50)
Proof
The proof is similar to that of Theorem 3.1 and therefore is omitted. □
Remark 4
When the Markovian jumping parameters are absent, that is, the Markov chain \(\{r(t),t\geq 0\}\) takes only the single value 1 (i.e., \(S=\{1\}\)), system (48) reduces to the following neural network model:
$$\begin{aligned}& \begin{gathered} \dot{x}(t) = -Ax(t) + W_{1} f\bigl(x(t) \bigr) + W_{2}f\bigl(x\bigl(t-\tau (t)\bigr)\bigr) \\ \hphantom{\dot{x}(t)}\quad{} + W_{3} \int _{t-d(t)}^{t} f \bigl(x(s)\bigr)\,ds + u(t), \quad t>0, t\neq t_{k}, \\ y(t) = f\bigl(x(t)\bigr). \end{gathered} \end{aligned}$$
(51)
For system (51), we obtain the following corollary by Theorem 3.1 and Corollary 3.2.
Corollary 3.3
Based on Assumption (H1)
and given scalars
\(\tau _{1}\), \(\tau _{2}\), d, \(\mu _{1}\), and
\(\mu _{2}\), the neural network (51) is strictly
\((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if there exist positive definite matrices
\(P_{1}\), \(P_{i}\) (\(i=2,\ldots,4\)), Q, R, \(S_{i}\) (\(i=1,2,\ldots,7\)), \(T_{i}\) (\(i=1,2\)), positive diagonal matrices
\(U_{1}\), \(U_{2}\), and matrices
O, \(L_{i}\), \(M_{i}\), \(V_{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:
(52)
(53)
and
where
Proof
To prove the dissipativity criteria for the recurrent neural networks (51), we define the following Lyapunov–Krasovskii functional:
$$\begin{aligned}& \begin{aligned}[b] V\bigl(t,x(t)\bigr) &= V_{1}\bigl(t,x(t)\bigr)+V_{2} \bigl(t,x(t)\bigr)+V_{3}\bigl(t,x(t)\bigr)+V_{4}\bigl(t,x(t) \bigr) \\ &\quad{} +V _{5}\bigl(t,x(t)\bigr)+V_{6}\bigl(t,x(t)\bigr), \end{aligned} \end{aligned}$$
(54)
where
$$\begin{aligned}& \begin{aligned} V_{1}\bigl(t,x(t)\bigr) & = x^{T}(t)P_{1} x(t) + \int _{t-\tau _{1}}^{t} x^{T}(s) P_{2} x(s) \,ds + \int _{t-\tau (t)}^{t-\tau _{1}} x^{T}(s) P_{3} x(s) \,ds \\ &\quad{} + \int _{t-d(t)} ^{t}x^{T}(s)P_{4}x(s) \,ds,\end{aligned} \\& V_{2}\bigl(t,x(t)\bigr) = \int _{t-\frac{\tau _{a}}{N}}^{t} \xi _{1}^{T}(s) Q \xi _{1}(s) \,ds + \int _{t-\frac{\tau _{2}}{N}}^{t} \xi _{2}^{T}(s) R \xi _{2}(s) \,ds, \\& V_{3}\bigl(t,x(t)\bigr) = \int _{-\frac{\tau _{a}}{N}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) S _{1} \dot{x}(s) \,ds \,d\theta + \int _{-\tau _{2}}^{-\tau _{a}} \int _{t+ \theta }^{t} \dot{x}^{T}(s) S_{2} \dot{x}(s) \,ds \,d\theta , \\& V_{4}\bigl(t,x(t)\bigr) = \tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s) S _{3} \dot{x}(s) \,ds \,d\theta + \tau _{12} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} \dot{x}^{T}(s) S_{4} \dot{x}(s) \,ds \,d\theta , \\& \begin{aligned} V_{5}\bigl(t,x(t)\bigr)& = \tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S _{5} f\bigl(x(s)\bigr) \,ds \,d\theta + \tau _{12} \int _{-\tau _{2}}^{-\tau _{1}} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S_{6} f\bigl(x(s)\bigr) \,ds \,d\theta \\ &\quad {} + d \int _{-d}^{0} \int _{t+\theta }^{t} f^{T}\bigl(x(s)\bigr) S_{7} f\bigl(x(s)\bigr) \,ds \,d\theta , \end{aligned} \\& \begin{aligned} V_{6}\bigl(t,x(t)\bigr) &= \frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t} \dot{x}^{T}(s) T_{1} \dot{x}(s)\,ds \,d\lambda \,d \theta \\ &\quad{} + \tau _{s} \int _{-\tau _{2}}^{-\tau _{1}} \int _{\theta }^{0} \int _{t+\lambda }^{t} \dot{x}^{T}(s) T_{2} \dot{x}(s)\,ds \,d\lambda \,d \theta . \end{aligned} \end{aligned}$$
Then, using the same proof as in Theorem 3.1, we get the result. □
Remark 5
If the distributed delay is not considered in system (51), then the recurrent neural network is rewritten as
$$\begin{aligned} \begin{gathered} \dot{x}(t) = -Ax(t) + W_{1} f\bigl(x(t) \bigr) + W_{2}f\bigl(x\bigl(t-\tau (t)\bigr)\bigr) + u(t), \quad t>0, t \neq t_{k}, \\ y(t) = f\bigl(x(t)\bigr). \end{gathered} \end{aligned}$$
(55)
The dissipativity condition for the delayed neural network (55) is constructed as follows.
Corollary 3.4
Under Assumption (H1)
and given scalars
\(\tau _{1}\), \(\tau _{2}\), and
\(\mu _{1}\), the neural network (55) is
\((\mathcal{Q}, \mathcal{S}, \mathcal{R})\)–ϑ-dissipative if there exist positive definite matrices
\(P_{1}\), \(P_{i}\) (\(i=2,\ldots,3\)), Q, R, \(S_{i}\) (\(i=1,2,\ldots,6\)), \(T _{i}\) (\(i=1,2\)), positive diagonal matrices
\(U_{1}\), \(U_{2}\), and matrices
O, \(L_{i}\), \(M_{i}\), \(V_{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:
(56)
(57)
and
where
$$\begin{aligned}& \varPhi = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \varPhi _{11} & \varPhi _{12} & \varPhi _{13} & \varPhi _{14} & \varPhi _{15} & 0 & \varPhi _{17} & \varPhi _{18} & 0 & 0 & 0 \\ * & \varPhi _{22} & \varPhi _{23} & 0 & 0 & F_{2}U_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & \varPhi _{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \varPhi _{44} & \mathit{OW}_{1} & \mathit{OW}_{2} & 0 & 0 & 0 & 0 & O \\ * & * & * & * & \varPhi _{55} & 0 & 0 & 0 & 0 & 0 & -\mathcal{S} \\ * & * & * & * & * & -U_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & -T_{1} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & -T_{2} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & -S_{5} & 0 & 0 \\ * & * & * & * & * & * & * & * & * & -S_{6} & 0 \\ * & * & * & * & * & * & * & * & * & * & \gamma I-\mathcal{R} \end{array}\displaystyle \right ], \\& \varPhi _{11} = P_{2} + Q_{11} + L_{1} + L_{1}^{T} + R_{11}-S_{3}- \tau _{2}^{2}T_{1}+ \tau _{12}^{2}T_{2}-F_{1}U_{1}, \\& \varPhi _{55} = \tau _{2}^{2}S_{5}+ \tau _{12}^{2}S_{6}-U_{1}-\mathcal{Q}, \end{aligned}$$
and the other elements are as in Corollary 3.3.
Proof
This proof is similar to that of Corollary 3.3 and therefore is omitted. □
Remark 6
As a particular case of dissipativity, we obtain passivity criteria for system (55) by taking \(\mathcal{Q} = 0\), \(\mathcal{S} = I\), and \(\mathcal{R} =2\gamma I\) in Corollary 3.4. The following corollary, obtained from Corollary 3.4, describes the passivity conditions for system (55).
Corollary 3.5
Under Assumption (H1)
and given scalars
\(\tau _{1}\), \(\tau _{2}\), and
\(\mu _{1}\), the neural network (55) is passive if there exist positive definite matrices
\(P_{1}\), \(P_{i}\) (\(i=2,\ldots,3\)), Q, R, \(S _{i}\) (\(i=1,2,\ldots,6\)), \(T_{i}\) (\(i=1,2\)), positive diagonal matrices
\(U_{1}\), \(U_{2}\), and matrices
O, \(L_{i}\), \(M_{i}\), \(V_{i}\) (\(i=1,2\)) of appropriate dimensions such that the following LMIs hold:
(58)
(59)
and
where
$$\begin{aligned}& \varPhi = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \varPhi _{11} & \varPhi _{12} & \varPhi _{13} & \varPhi _{14} & \varPhi _{15} & 0 & \varPhi _{17} & \varPhi _{18} & 0 & 0 & 0 \\ * & \varPhi _{22} & \varPhi _{23} & 0 & 0 & F_{2}U_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & \varPhi _{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & \varPhi _{44} & \mathit{OW}_{1} & \mathit{OW}_{2} & 0 & 0 & 0 & 0 & O \\ * & * & * & * & \varPhi _{55} & 0 & 0 & 0 & 0 & 0 & -I \\ * & * & * & * & * & -U_{2} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & -T_{1} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & -T_{2} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & -S_{5} & 0 & 0 \\ * & * & * & * & * & * & * & * & * & -S_{6} & 0 \\ * & * & * & * & * & * & * & * & * & * & -\gamma I \end{array}\displaystyle \right ], \\& \begin{gathered} \varPhi _{11} = P_{2} + Q_{11} + L_{1} + L_{1}^{T} + R_{11}-S_{3}- \tau _{2}^{2}T_{1}+ \tau _{12}^{2}T_{2}-F_{1}U_{1}, \\ \varPhi _{55} = \tau _{2}^{2}S_{5}+ \tau _{12}^{2}S_{6}-U_{1}. \end{gathered} \end{aligned}$$
Proof
The proof directly follows from Corollary 3.4. □