Graph theory
A graph \(\mathtt{G = (V, E)}\) is composed of two sets: \(\mathtt{V}\) is the set of nodes, and \(\mathtt{E}\subseteq \mathtt{V}\times \mathtt{V} \) is the set of edges. Each graph \(\mathtt{G = (V, E)}\) corresponds to a unique nonnegative matrix \(A=(a_{ij})_{N\times N}\in R^{N\times N}\), where \(a_{ij}>0 \) indicates a connection between node i and node j, \((i,j)\in \mathtt{V}\times \mathtt{V}\). The union of two graphs \(\mathtt{G}_{1} = (\mathtt{V}, \mathtt{E}_{1})\) and \(\mathtt{G}_{2} = (\mathtt{V}, \mathtt{E}_{2})\) is defined by \(\mathtt{G}_{1}\cup \mathtt{G}_{2} = (\mathtt{V}, \mathtt{E}_{1}\cup \mathtt{E}_{2})\).
Given a graph \(\mathtt{G = (V, E)} \) and a nonempty subset \(\mathtt{N \subseteq V} \), the neighbors of \(\mathtt{N}\) are defined as the set \(\mathtt{M (N, G)} = \{ {j \in \mathtt{V} \backslash \mathtt{N} \mid \exists i \in \mathtt{N}},\text{ such that }(i, j) \in \mathtt{E} \} \). If \(\mathtt{N}\) is a singleton, then \(\mathtt{M (N, G)} \) is the neighbor set of a single node.
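As a concrete illustration, the neighbor set \(\mathtt{M(N, G)}\) can be computed directly from the edge set. The following Python sketch (the graph here is illustrative, not one from the paper) implements the definition above:

```python
def neighbors(N, V, E):
    """M(N, G) = { j in V \\ N : there exists i in N with (i, j) in E }."""
    return {j for (i, j) in E if i in N and j in V - N}

V = {1, 2, 3, 4}
E = {(1, 2), (2, 3), (3, 4)}
print(neighbors({1}, V, E))     # {2}: neighbor set of the singleton {1}
print(neighbors({1, 2}, V, E))  # {3}
```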
If \(\bigcup^{m}_{i=1}\mathtt{G}_{i} \) contains a spanning tree (a detailed introduction is provided in [19]), then the sequence of graphs \((\mathtt{G}_{i})^{m}_{i=1} \) is called jointly connected. If the sets \(\mathtt{V}_{k}\) satisfy \(\mathtt{V}_{k} \subseteq \mathtt{V}\ (1 \leq k \leq m + 1) \) and \(\mathtt{V}_{k+1} \subseteq \mathtt{V}_{k} \cup \mathtt{M}( \mathtt{V}_{k}, \mathtt{G}_{k} ) \), where \(\mathtt{V}_{1} \) is a singleton and \(\mathtt{V}_{m+1} = \mathtt{V} \), then the sequence of graphs \((\mathtt{G}_{i})^{m}_{i=1} \) is called sequentially connected; it is also T-sequentially connected with period \(T=m\). For more properties of the jointly connected and sequentially connected cases, refer to [18, 19].
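Sequential connectedness can be tested mechanically: since any admissible \(\mathtt{V}_{k+1}\) is contained in \(\mathtt{V}_{k} \cup \mathtt{M}(\mathtt{V}_{k}, \mathtt{G}_{k})\), it suffices to start from each singleton and grow maximally. A minimal Python sketch, with graphs given as edge sets (the example sequence is illustrative):

```python
def neighbors(N, E):
    # M(N, G) restricted to the edge set E
    return {j for (i, j) in E if i in N and j not in N}

def sequentially_connected(V, graphs):
    """Check whether the sequence of edge sets graphs = [E_1, ..., E_m] is
    sequentially connected: some singleton V_1 grows to V_{m+1} = V under
    the maximal choice V_{k+1} = V_k | M(V_k, G_k)."""
    for root in V:
        S = {root}
        for E in graphs:
            S = S | neighbors(S, E)
        if S == V:
            return True
    return False

V = {1, 2, 3}
print(sequentially_connected(V, [{(1, 2)}, {(2, 3)}]))  # True: {1}->{1,2}->{1,2,3}
print(sequentially_connected(V, [{(2, 3)}, {(1, 2)}]))  # False: the order matters
```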
Remark 2.1
Note that all the graphs \(\mathtt{G}_{i}\) share a common node set. In addition, if the sequence of graphs \((\mathtt{G}_{i})^{m}_{i=1} \) is T-sequentially connected, then the information of the nodes spreads along the spanning tree of the sequence of graphs \((\mathtt{G}_{i})^{m}_{i=1} \). Moreover, if a sequence of graphs \((\mathtt{G}_{i})^{m}_{i=1} \) is sequentially connected, then it is also jointly connected, but the converse is not true.
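The last claim of the remark can be checked on a small example: over \(\mathtt{V}=\{1,2,3\}\), the sequence with edge sets \(\{(2,3)\}\) followed by \(\{(1,2)\}\) is jointly connected (the union contains a spanning tree rooted at node 1) but not sequentially connected. A sketch, testing joint connectedness via reachability in the union graph (illustrative code, not from the paper):

```python
def neighbors(N, E):
    return {j for (i, j) in E if i in N and j not in N}

def sequentially_connected(V, graphs):
    # Grow maximally from each singleton root, as in the definition above.
    for root in V:
        S = {root}
        for E in graphs:
            S = S | neighbors(S, E)
        if S == V:
            return True
    return False

def jointly_connected(V, graphs):
    """The union of the edge sets contains a spanning tree iff every node
    is reachable from some common root in the union graph."""
    union = set().union(*graphs)
    for root in V:
        seen, stack = {root}, [root]
        while stack:
            i = stack.pop()
            for (a, b) in union:
                if a == i and b not in seen:
                    seen.add(b)
                    stack.append(b)
        if seen == V:
            return True
    return False

V = {1, 2, 3}
seq = [{(2, 3)}, {(1, 2)}]
print(jointly_connected(V, seq), sequentially_connected(V, seq))  # True False
```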
Model description
Let \(\mathtt{V}^{x} = \mathtt{V}^{y} = \{ 1, 2,\ldots, N \} \) be the sets of nodes. In this paper, we study a system of two bidirectionally interconnected MNNs, each consisting of N NNs. Each NN corresponds to a node, and the state equations of the ith and jth NNs are given by
$$\begin{aligned} &\frac{dx_{is}(t)}{dt}= -c_{s}x_{is}(t)+\sum _{l=1}^{n}a_{sl}f_{l} \bigl(y_{jl}(t)\bigr)+ \sum_{l=1}^{n}b_{sl}f_{l}^{\tau } \bigl(y_{jl}(t-\tau _{1})\bigr)+I_{s}(t)+u_{is}(t), \end{aligned}$$
(2.1)
$$\begin{aligned} &\frac{dy_{js}(t)}{dt}= -\mathfrak{c}_{s}y_{js}(t)+\sum _{l=1}^{n} \mathfrak{a}_{sl}g_{l} \bigl(x_{il}(t)\bigr)+\sum_{l=1}^{n} \mathfrak{b}_{sl}g_{l}^{ \tau }\bigl(x_{il}(t- \tau _{2})\bigr)+\mathfrak{I}_{s}(t)+v_{js}(t), \end{aligned}$$
(2.2)
or in compact forms
$$\begin{aligned} &\frac{dx_{i}(t)}{dt}=-Cx_{i}(t)+Af\bigl(y_{j}(t) \bigr)+Bf^{\tau }\bigl(y_{j}(t-\tau _{1}) \bigr)+I(t)+u_{i}(t), \end{aligned}$$
(2.3)
$$\begin{aligned} &\frac{dy_{j}(t)}{dt}=-\mathfrak{C}y_{j}(t)+\mathfrak{A}g \bigl(x_{i}(t)\bigr)+ \mathfrak{B}g^{\tau }\bigl(x_{i}(t- \tau _{2})\bigr)+\mathfrak{I}(t)+v_{j}(t), \end{aligned}$$
(2.4)
where \(i\in \mathtt{V}^{x} \), \(j\in \mathtt{V}^{y} \), \(t\ge t_{0} \), \(x_{i}(t)=(x_{i1}(t),x_{i2}(t),\ldots,x_{in}(t)) \in R^{n} \) and \(y_{j}(t)=(y_{j1}(t),y_{j2}(t),\ldots, y_{jn}(t)) \in R^{n} \) are the state vectors of the ith NN and the jth NN of the two MNNs, respectively. \(C=\operatorname{diag} \{ c_{1},c_{2},\ldots,c_{n} \}\), \(c_{i}>0\), and \(\mathfrak{C}=\operatorname{diag} \{ \mathfrak{c}_{1},\mathfrak{c}_{2},\ldots, \mathfrak{c}_{n} \}\), \(\mathfrak{c}_{i}>0\), are the self-inhibitions of the two sets of neurons. \(I(t) \in R^{n} \) and \(\mathfrak{I}(t)\in R^{n} \) represent the inputs or biases, and \(u_{i} \), \(v_{j} \) are the control inputs of the two sets of neurons, respectively. \(\tau _{1}>0 \), \(\tau _{2}>0 \) are the transmission delays, and we let \(\tau =\max \{ \tau _{1},\tau _{2} \} \). \(f(y_{j}(t)) = (f_{1}(y_{j1}(t)),f_{2}(y_{j2}(t)),\ldots,f_{n}(y_{jn}(t)))^{T} \), \(g(x_{i}(t))=(g_{1}(x_{i1}(t)),g_{2}(x_{i2}(t)), \ldots,g_{n}(x_{in}(t)))^{T} \), \(f^{\tau }(y_{j}(t-\tau _{1}))=(f^{\tau }_{1}(y_{j1}(t-\tau _{1})),f^{ \tau }_{2}(y_{j2}(t-\tau _{1})),\ldots,f^{\tau }_{n}(y_{jn}(t-\tau _{1})))^{T} \), and \(g^{\tau }(x_{i}(t-\tau _{2}))=(g^{\tau }_{1}(x_{i1}(t-\tau _{2})),g^{ \tau }_{2}(x_{i2}(t-\tau _{2})), \ldots,g^{\tau }_{n}(x_{in}(t-\tau _{2})))^{T} \) are the activation functions, and \(A = [a_{ij} ] _{n \times n}\), \(\mathfrak{A} = [\mathfrak{a}_{ij} ] _{n \times n}\), \(B = [b_{ij}]_{n \times n} \), \(\mathfrak{B} = [\mathfrak{b}_{ij}]_{n \times n} \) are the connection weight matrices and the delayed connection weight matrices of the two NNs.
Given an impulsive instant sequence \(\{ t_{1},t_{2},t_{3},\ldots \} \) satisfying \(0 < t_{k} < t_{k+1}\ (k \in Z^{+}) \) and \(t_{k} \to \infty \) as \(k \to \infty \), we choose the distributed impulsive controllers of the ith and jth nodes as
$$\begin{aligned} &u_{i}(t)=\sum_{k=1}^{+\infty } \Biggl( \sum_{j=1}^{N}d_{ij}(t) \varGamma (t)x_{j}(t)-x_{i}(t) \Biggr)\delta (t-t_{k}), \end{aligned}$$
(2.5)
$$\begin{aligned} &v_{j}(t)=\sum_{k=1}^{+\infty } \Biggl( \sum_{i=1}^{N} \mathfrak{d}_{ji}(t)\varLambda (t)y_{i}(t)-y_{j}(t) \Biggr)\delta (t-t_{k}), \end{aligned}$$
(2.6)
where \(\delta (\cdot )\) is the Dirac delta function; \(\varGamma (t_{k}) = \operatorname{diag} \{ \gamma _{1}(t_{k}),\gamma _{2}(t_{k}), \ldots,\gamma _{n}(t_{k}) \} \) and \(\varLambda (t_{k}) = \operatorname{diag} \{ \lambda _{1}(t_{k}),\lambda _{2}(t_{k}), \ldots,\lambda _{n}(t_{k}) \} \) are the coupling gains between nodes, with \(\gamma _{i}(t_{k})>0 \) and \(\lambda _{i}(t_{k})>0 \); \(D(t_{k})=(d_{ij}(t_{k}))_{N \times N} \) and \(\mathfrak{D}(t_{k})=(\mathfrak{d}_{ji}(t_{k}))_{N \times N} \) are the impulsive coupling matrices with
$$\begin{aligned} &\textstyle\begin{cases} d_{ij}(t_{k})\ge 0, &i \ne j, \\ \sum_{j=1}^{N}d_{ij}(t_{k})=1, & \forall i \in \mathtt{V}^{x}, \end{cases}\displaystyle \end{aligned}$$
(2.7)
$$\begin{aligned} &\textstyle\begin{cases} \mathfrak{d}_{ij}(t_{k})\ge 0, & i \ne j, \\ \sum_{j=1}^{N}\mathfrak{d}_{ij}(t_{k})=1,& \forall i \in \mathtt{V}^{y}. \end{cases}\displaystyle \end{aligned}$$
(2.8)
In algebraic graph theory, \(D(t_{k} )\) and \(\mathfrak{D}(t_{k} )\) are the adjacency matrices of the directed weighted graphs \(\mathtt{G}_{k}^{x} \) and \(\mathtt{G}_{k}^{y} \), respectively.
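Conditions (2.7) and (2.8) say that \(D(t_{k})\) and \(\mathfrak{D}(t_{k})\) have nonnegative off-diagonal entries and unit row sums. A quick numerical check of these conditions (the matrix below is purely illustrative):

```python
import numpy as np

def satisfies_coupling_conditions(D):
    """Check (2.7)/(2.8): d_ij >= 0 for i != j and each row sums to 1."""
    D = np.asarray(D, dtype=float)
    off_diag = D - np.diag(np.diag(D))
    return bool(np.all(off_diag >= 0) and np.allclose(D.sum(axis=1), 1.0))

D = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
print(satisfies_coupling_conditions(D))  # True
```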
Under impulsive controllers (2.5) and (2.6) with conditions (2.7) and (2.8), the controlled systems (2.1) and (2.2) can be written in the following forms:
$$\begin{aligned} &\textstyle\begin{cases} \frac{dx_{i}(t)}{dt}=-Cx_{i}(t)+Af(y_{j}(t))+Bf^{\tau }(y_{j}(t-\tau _{1}))+I(t),& t \ne t_{k}, \\ x_{i}(t^{+}_{k})=\sum_{j=1}^{N}d_{ij}(t_{k})\varGamma (t_{k})x_{j}(t_{k}), \end{cases}\displaystyle \end{aligned}$$
(2.9)
$$\begin{aligned} & \textstyle\begin{cases} \frac{dy_{j}(t)}{dt}=-\mathfrak{C}y_{j}(t)+\mathfrak{A}g(x_{i}(t))+ \mathfrak{B}g^{\tau }(x_{i}(t-\tau _{2}))+\mathfrak{I}(t), & t \ne t_{k}, \\ y_{j}(t^{+}_{k})=\sum_{i=1}^{N}\mathfrak{d}_{ji}(t_{k})\varLambda (t_{k})y_{i}(t_{k}), \end{cases}\displaystyle \end{aligned}$$
(2.10)
where \(x_{i}(t^{-}_{k})=x_{i}(t_{k})\), \(y_{j}(t^{-}_{k})=y_{j}(t_{k})\).
We use \(D( t_{k} ) \) and \(\mathfrak{D}( t_{k} ) \) to describe the coupling topology of NNs (2.9) and (2.10) at the impulsive instant \(t_{k}\). The graphs \(\mathtt{G}^{x}_{k}\) and \(\mathtt{G}^{y}_{k}\) correspond to the matrices \(D( t_{k} )\) and \(\mathfrak{D}( t_{k} )\), respectively. The coupling matrices \(D( t_{k} ) \) and \(\mathfrak{D}( t_{k} ) \) can switch periodically at the impulsive instants \(t_{k}\). It should be noted that \(\mathtt{G}^{x}_{k} \) and \(\mathtt{G}^{y}_{k} \) may contain self-loops, that is, \(d_{ii}(t_{k})>0\) and \(\mathfrak{d}_{ii}(t_{k} )>0\) may hold.
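To see what one impulsive update in (2.9) does, stack the node states as rows of a matrix \(X\in R^{N\times n}\); then \(x_{i}(t_{k}^{+})=\sum_{j}d_{ij}(t_{k})\varGamma (t_{k})x_{j}(t_{k})\) becomes \(X^{+}=D(t_{k})\,X\,\varGamma (t_{k})\). Since the rows of \(D(t_{k})X\) are convex combinations of the rows of \(X\), and the gains here are below 1, the pairwise spread of the states contracts. A sketch with illustrative numbers (not parameters from the paper):

```python
import numpy as np

N, n = 3, 2
rng = np.random.default_rng(0)
X = rng.standard_normal((N, n))        # states x_1, ..., x_N before the impulse

D = np.array([[0.5, 0.5, 0.0],         # row-stochastic coupling matrix, cf. (2.7)
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
Gamma = np.diag([0.9, 0.9])            # coupling gains gamma_i(t_k), here < 1

X_plus = D @ X @ Gamma                 # x_i(t_k^+) = sum_j d_ij Gamma x_j(t_k)

def spread(X):
    # largest pairwise 1-norm distance between node states
    return max(np.abs(X[i] - X[j]).sum() for i in range(len(X)) for j in range(len(X)))

print(spread(X_plus) <= 0.9 * spread(X))  # True: the impulse contracts the errors
```

The contraction follows because the rows of \(D X\) lie in the convex hull of the rows of \(X\) (so their pairwise distances do not grow) and \(\varGamma\) then scales every coordinate by at most \(\gamma_{\max}=0.9\).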
Remark 2.2
From (2.7) and (2.8) we can see that the couplings of the two MNNs occur only at the impulsive instants \(t_{k}\); at all other times each node of the two MNNs evolves independently. In addition, there is an interaction between the corresponding nodes of the two MNNs, and hence an interaction between the two MNNs themselves. How to synchronize the two networks under suitable conditions is the problem discussed below.
Remark 2.3
The controllers used in [24–26] are all designed for single-layer networks, whereas the system in this paper is a bidirectional MNNs system, in which the coupling between nodes at the impulsive instants must be taken into account, so that type of controller may not be suitable here. The controllers used in this paper can be regarded as a generalization of that kind of controller to MNNs.
Note that when the NNs in MNNs (2.9) and (2.10) satisfy
$$\begin{aligned} &\lim_{t \to \infty } \bigl\Vert x_{i}(t)-x_{j}(t) \bigr\Vert =0,\quad i,j \in \mathtt{V}^{x}, \\ &\lim_{t \to \infty } \bigl\Vert y_{i}(t)-y_{j}(t) \bigr\Vert =0,\quad i,j \in \mathtt{V}^{y}, \end{aligned}$$
then MNNs (2.9) and (2.10) are said to be globally synchronized.
Let \(\gamma _{\max }=\max_{k \ge 1, 1\le i\le n} \{ \gamma _{i}(t_{k} ) \} \) and \(\lambda _{\max } = \max_{k \ge 1, 1\le i\le n} \{ \lambda _{i}(t_{k} ) \} \); then \(\gamma =-\ln (\gamma _{\max })\) and \(\lambda =-\ln (\lambda _{\max })\) are called the coupling strengths of \(\mathtt{G}^{x}_{k}\) and \(\mathtt{G}^{y}_{k}\).
For the continuous activation functions \(f_{i}(x) \), \(g_{i}(x) \), \(f^{\tau }_{i}(x) \), and \(g^{\tau }_{i}(x) \), we make the following assumption.
Assumption 1
There exist \(p_{i}>0 \), \(p^{\tau }_{i}>0 \) and \(l_{j}>0 \), \(l^{\tau }_{j}>0 \) such that
$$\begin{aligned} & \bigl\vert f_{i}(x)-f_{i}(y) \bigr\vert \leq p_{i} \vert x-y \vert , \\ & \bigl\vert f^{\tau }_{i}(x)-f^{\tau }_{i}(y) \bigr\vert \leq p^{\tau }_{i} \vert x-y \vert , \\ & \bigl\vert g_{j}(x)-g_{j}(y) \bigr\vert \leq l_{j} \vert x-y \vert , \\ & \bigl\vert g^{\tau }_{j}(x)-g^{\tau }_{j}(y) \bigr\vert \leq l^{\tau }_{j} \vert x-y \vert , \end{aligned}$$
for any x, \(y \in R \) and \(i=1,2,\ldots,n\), \(j=1,2,\ldots,n \). Denote \(P=\operatorname{diag} \{ p_{1},p_{2},\ldots,p_{n} \} \), \(P^{\tau }=\operatorname{diag} \{ p^{\tau }_{1},p^{\tau }_{2},\ldots,p^{\tau }_{n} \} \) and \(L=\operatorname{diag} \{ l_{1},l_{2},\ldots,l_{n} \} \), \(L^{\tau }=\operatorname{diag} \{ l^{\tau }_{1},l^{\tau }_{2},\ldots,l^{\tau }_{n} \} \).
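For instance, if the activations are taken as \(f_{i}=\tanh\) (a common choice in the literature; this is an assumption for illustration, not a specification of the paper), then Assumption 1 holds with \(p_{i}=1\), since \(|\tanh'(x)|\leq 1\). A crude numerical check of the Lipschitz bound via secant slopes:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
secant_slopes = np.abs(np.diff(np.tanh(x)) / np.diff(x))
p_i = secant_slopes.max()              # empirical Lipschitz estimate for tanh
print(p_i <= 1.0)                      # True: consistent with p_i = 1
```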
Definitions and properties
We now recall some concepts related to convex sets [19]. Given a set \(M \subseteq R^{n}\), the convex hull of M is defined as
$$\begin{aligned} \overline{\operatorname{co}}(M)= \Biggl\{ \sum_{i=1}^{k}a_{i}x_{i}: \sum_{i=1}^{k}a_{i}=1,a_{i} \ge 0,x_{i}\in M,k\ge 1 \Biggr\} . \end{aligned}$$
We define the diameter of M as
$$\begin{aligned} \operatorname{diam}(M)=\sup_{x,y\in M} \Vert x-y \Vert . \end{aligned}$$
Given two sets \(M_{1}, M_{2} \subseteq R^{n}\), define
$$\begin{aligned} M_{1}+M_{2}= \{ x+y:x\in M_{1},y\in M_{2} \}. \end{aligned}$$
It is obvious that \(\operatorname{diam}(M_{1} + M_{2}) \leq \operatorname{diam}(M_{1}) + \operatorname{diam}(M_{2})\).
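The diameter inequality follows from the triangle inequality: for \(x+y, x'+y'\in M_{1}+M_{2}\) we have \(\Vert (x+y)-(x'+y')\Vert \leq \Vert x-x'\Vert +\Vert y-y'\Vert \). A small numerical illustration in the 1-norm, on random finite sets (purely illustrative):

```python
import numpy as np
from itertools import combinations

def diam(M):
    # diameter of a finite set M in the 1-norm
    return max((np.abs(x - y).sum() for x, y in combinations(M, 2)), default=0.0)

rng = np.random.default_rng(1)
M1 = list(rng.standard_normal((4, 2)))
M2 = list(rng.standard_normal((5, 2)))
M_sum = [x + y for x in M1 for y in M2]          # Minkowski sum M1 + M2

print(diam(M_sum) <= diam(M1) + diam(M2))        # True
```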
Let \(e_{ij}(t)=x_{i}(t)-x_{j}(t)\), \(i,j \in \mathtt{V}^{x} \), and \(\mathfrak{e}_{ij}(t)=y_{i}(t)-y_{j}(t)\), \(i,j \in \mathtt{V}^{y} \). The state errors between the NNs of (2.9) and (2.10) are defined as \(\Vert e_{ij}(t) \Vert \) and \(\Vert \mathfrak{e}_{ij}(t) \Vert \), where \(\Vert \cdot \Vert \) denotes the 1-norm. Besides, let \(\Vert \tilde{e}_{ij}(t) \Vert =\sup_{\theta _{1}\in [- \tau,0]} \Vert e_{ij}(t+\theta _{1}) \Vert \), \(\Vert \tilde{\mathfrak{e}}_{ij}(t) \Vert =\sup_{\theta _{2} \in [-\tau,0]} \Vert \mathfrak{e}_{ij}(t+\theta _{2}) \Vert \), \(h_{k}=t_{k+1}-t_{k} \), \(h_{\inf }=\inf_{k\in Z^{+}} \{ h_{k} \} \), \(h_{\sup }=\sup_{k\in Z^{+}} \{ h_{k} \} \), and \(0\leq \tau \leq h_{\inf }\).
Lemma 2.1
For any \(i,j\in \{ 1, 2,\ldots, N \} \) and \(t\in ( t_{k},t_{k+1} ]\), \(k\in Z^{+} \), we have
$$\begin{aligned} & \bigl\Vert e_{ij}(t) \bigr\Vert \leq \bigl\Vert \tilde{e}_{ij}\bigl(t^{+}_{k}\bigr) \bigr\Vert \exp \bigl\{ -r(t-t_{k}) \bigr\} , \end{aligned}$$
(2.11)
$$\begin{aligned} & \bigl\Vert \mathfrak{e}_{ij}(t) \bigr\Vert \leq \bigl\Vert \tilde{\mathfrak{e}}_{ij}\bigl(t^{+}_{k}\bigr) \bigr\Vert \exp \bigl\{ -r(t-t_{k}) \bigr\} , \end{aligned}$$
(2.12)
where \(r \ne 0 \) satisfies
$$\begin{aligned} &r-c_{\min }+ \Vert AP \Vert + \bigl\Vert BP^{\tau } \bigr\Vert \exp \{ r\tau \} \leq 0, \end{aligned}$$
(2.13)
$$\begin{aligned} &r-\mathfrak{c}_{\min }+ \Vert \mathfrak{A}L \Vert + \bigl\Vert \mathfrak{B}L^{\tau } \bigr\Vert \exp \{ r\tau \} \leq 0, \end{aligned}$$
(2.14)
in which \(c_{\min }=\min_{1\leq i \leq n} \{ c_{i} \} \), \(\mathfrak{c}_{\min }=\min_{1\leq i \leq n} \{ \mathfrak{c}_{i} \} \).
Proof
Let
$$\begin{aligned} &\mathbb{V}(t)= \bigl\Vert e_{ij}(t) \bigr\Vert \exp \bigl\{ r(t-t_{k}) \bigr\} , \\ &\mathfrak{V}(t)= \bigl\Vert \mathfrak{e}_{ij}(t) \bigr\Vert \exp \bigl\{ r(t-t_{k}) \bigr\} , \end{aligned}$$
and \(\mathbb{W}(t)=\max \{ \mathbb{V}(t),\mathfrak{V}(t) \} \).
According to (2.9) and (2.10), when \(t\in (t_{k},t_{k+1})\), we have
$$\begin{aligned} \frac{de_{ij}(t)}{dt}={}&\frac{dx_{i}(t)}{dt}-\frac{dx_{j}(t)}{dt} \\ ={}&{-}C\bigl(x_{i}(t)-x_{j}(t)\bigr)+A \bigl[f \bigl(y_{j}(t)\bigr)-f\bigl(y_{i}(t)\bigr) \bigr] \\ &{} +B \bigl[f^{\tau }\bigl(y_{j}(t-\tau _{1}) \bigr)-f^{\tau }\bigl(y_{i}(t-\tau _{1})\bigr) \bigr], \\ \frac{d\mathfrak{e}_{ij}(t)}{dt} ={}&\frac{dy_{i}(t)}{dt}- \frac{dy_{j}(t)}{dt} \\ ={}&{-}\mathfrak{C}\bigl(y_{i}(t)-y_{j}(t)\bigr)+\mathfrak{A} \bigl[g\bigl(x_{j}(t)\bigr)-g\bigl(x_{i}(t)\bigr) \bigr] \\ &{} +\mathfrak{B} \bigl[g^{\tau }\bigl(x_{j}(t-\tau _{2})\bigr)-g^{\tau }\bigl(x_{i}(t- \tau _{2})\bigr) \bigr]. \end{aligned}$$
Then
$$\begin{aligned} \frac{d\mathbb{V}(t)}{dt}={}&\operatorname{sign}\bigl(e_{ij}(t) \bigr)^{T}\frac{de_{ij}}{dt}\exp \bigl\{ r(t-t_{k}) \bigr\} +r\mathbb{V}(t) \\ ={}&{-}\operatorname{sign}\bigl(e_{ij}(t)\bigr)^{T}C \bigl(x_{i}(t)-x_{j}(t)\bigr)\exp \bigl\{ r(t-t_{k}) \bigr\} \\ &{} +r\mathbb{V}(t)+\operatorname{sign}\bigl(e_{ij}(t) \bigr)^{T}A \bigl[f\bigl(y_{j}(t)\bigr)-f \bigl(y_{i}(t)\bigr) \bigr]\exp \bigl\{ r(t-t_{k}) \bigr\} \\ &{} +\operatorname{sign}\bigl(e_{ij}(t)\bigr)^{T}B \bigl[f^{\tau }\bigl(y_{j}(t-\tau _{1}) \bigr)-f^{ \tau }\bigl(y_{i}(t-\tau _{1})\bigr) \bigr] \exp \bigl\{ r(t-t_{k}) \bigr\} \\ \leq{}& (r-c_{\mathrm{min}})\mathbb{V}(t)+ \Vert AP \Vert \bigl\Vert y_{i}(t)-y_{j}(t) \bigr\Vert \exp \bigl\{ r(t-t_{k}) \bigr\} \\ &{} + \bigl\Vert BP^{\tau } \bigr\Vert \bigl\Vert y_{i}(t- \tau _{1})-y_{j}(t- \tau _{1}) \bigr\Vert \exp \bigl\{ r(t-t_{k}) \bigr\} \\ \leq{}& (r-c_{\mathrm{min}})\mathbb{V}(t)+ \Vert AP \Vert \mathfrak{V}(t)+ \bigl\Vert BP^{\tau } \bigr\Vert \mathfrak{V}(t-\tau _{1}) \exp \{ r \tau \} \\ \leq{}& \bigl(r-c_{\mathrm{min}}+ \Vert AP \Vert \bigr)\mathbb{W}(t)+ \bigl\Vert BP^{\tau } \bigr\Vert \mathbb{W}(t-\tau _{1})\exp \{ r \tau \}. \end{aligned}$$
Let
$$ \tilde{\mathbb{W}}(t)=\sup_{\theta \in [-\tau,0]}\mathbb{W}(t+ \theta ). $$
Then one has
$$ \tilde{\mathbb{W}}(t)\ge \mathbb{W}(t)\ge 0 . $$
(2.15)
From (2.13) and (2.15) we have
$$\begin{aligned} \frac{d\mathbb{V}(t)}{dt}&=\operatorname{sign}\bigl(e_{ij}(t) \bigr)^{T}\frac{de_{ij}}{dt}\exp \bigl\{ r(t-t_{k}) \bigr\} +r\mathbb{V}(t) \\ &\leq \bigl(r-c_{\mathrm{min}}+ \Vert AP \Vert \bigr)\mathbb{W}(t)+ \bigl\Vert BP^{\tau } \bigr\Vert \mathbb{W}(t-\tau _{1})\exp \{ r \tau \} \\ &\leq \bigl(r-c_{\mathrm{min}}+ \Vert AP \Vert + \bigl\Vert BP^{\tau } \bigr\Vert \exp \{ r\tau \} \bigr)\tilde{\mathbb{W}}(t) \\ &\leq 0. \end{aligned}$$
(2.16)
Let \(\tilde{\mathbb{V}}(t)=\sup_{\theta _{1} \in [-\tau,0]} \mathbb{V}(t+\theta _{1})\). From (2.16), using the Lyapunov–Razumikhin method, we get
$$\begin{aligned} \frac{d\tilde{\mathbb{V}}(t)}{dt}\leq 0, \quad\forall t\in (t_{k},t_{k+1}), \end{aligned}$$
and therefore
$$\begin{aligned} \mathbb{V}(t)\leq \tilde{\mathbb{V}}(t)\leq \tilde{\mathbb{V}} \bigl(t^{+}_{k}\bigr)= \bigl\Vert \tilde{e}_{ij} \bigl(t^{+}_{k}\bigr) \bigr\Vert , \end{aligned}$$
hence
$$\begin{aligned} \bigl\Vert e_{ij}(t) \bigr\Vert \leq \bigl\Vert \tilde{e}_{ij}\bigl(t^{+}_{k}\bigr) \bigr\Vert \exp \bigl\{ -r(t-t_{k}) \bigr\} . \end{aligned}$$
Similarly we can get
$$\begin{aligned} \frac{d\mathfrak{V}(t)}{dt}&=\operatorname{sign}\bigl(\mathfrak{e}_{ij}(t) \bigr)^{T} \frac{d\mathfrak{e}_{ij}}{dt}\exp \bigl\{ r(t-t_{k}) \bigr\} +r \mathfrak{V}(t) \\ &\leq \bigl(r-\mathfrak{c}_{\mathrm{min}}+ \Vert \mathfrak{A}L \Vert + \bigl\Vert \mathfrak{B}L^{\tau } \bigr\Vert \exp \{ r\tau \} \bigr) \tilde{ \mathbb{W}}(t) \\ &\leq 0 \end{aligned}$$
(2.17)
and
$$\begin{aligned} \bigl\Vert \mathfrak{e}_{ij}(t) \bigr\Vert \leq \bigl\Vert \tilde{\mathfrak{e}}_{ij}\bigl(t^{+}_{k} \bigr) \bigr\Vert \exp \bigl\{ -r(t-t_{k}) \bigr\} , \end{aligned}$$
which means that (2.11) and (2.12) hold. □
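In practice, the decay rate \(r\) in (2.13) can be computed numerically: the left side is strictly increasing in \(r\), so the largest admissible \(r>0\) is the root of \(r-c_{\min }+ \Vert AP \Vert + \Vert BP^{\tau } \Vert e^{r\tau }=0\), which can be found by bisection. The constants below (\(c_{\min }\), \(\Vert AP \Vert \), \(\Vert BP^{\tau } \Vert \), \(\tau \)) are illustrative placeholders, not values from the paper:

```python
import math

def largest_r(c_min, nAP, nBPtau, tau, iters=80):
    """Largest r > 0 with r - c_min + nAP + nBPtau * exp(r * tau) <= 0,
    i.e. satisfying (2.13); returns None if no positive r exists."""
    phi = lambda r: r - c_min + nAP + nBPtau * math.exp(r * tau)
    if phi(0.0) > 0:
        return None
    lo, hi = 0.0, 1.0
    while phi(hi) < 0:            # expand until [lo, hi] brackets the root
        hi *= 2.0
    for _ in range(iters):        # bisection: keep phi(lo) <= 0 < phi(hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) <= 0 else (lo, mid)
    return lo

r = largest_r(c_min=3.0, nAP=0.5, nBPtau=0.5, tau=0.1)
print(r is not None and r > 0)    # a positive decay rate exists for these constants
```

The same routine applies to (2.14) with \(\mathfrak{c}_{\min }\), \(\Vert \mathfrak{A}L \Vert \), and \(\Vert \mathfrak{B}L^{\tau } \Vert \); a common \(r\) is the minimum of the two roots.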
Remark 2.4
The result of Lemma 2.1 is important in the subsequent proofs: it relates \(\Vert e_{ij}(t) \Vert \) and \(\Vert \mathfrak{e}_{ij}(t) \Vert \) across adjacent impulsive intervals, so by iterating over the intervals we can trace these errors back to the time \(t=t_{1}\).