Consider the following model of switched multilayer neural networks [3]:

where x(t)={[{x}_{1}(t)\phantom{\rule{0.5em}{0ex}}\cdots \phantom{\rule{0.5em}{0ex}}{x}_{n}(t)]}^{T}\in {R}^{n} is the state vector, z(t)\in {R}^{p} is a linear combination of the states, A=diag\{-{a}_{1},\dots ,-{a}_{n}\}\in {R}^{n\times n} ({a}_{k}>0, k=1,\dots ,n) is the self-feedback matrix, W\in {R}^{n\times n} and V\in {R}^{n\times n} are the connection weight matrices, \varphi (x(t))={[{\varphi}_{1}(x(t))\phantom{\rule{0.5em}{0ex}}\cdots \phantom{\rule{0.5em}{0ex}}{\varphi}_{n}(x(t))]}^{T}:{R}^{n}\to {R}^{n} is the nonlinear function vector satisfying the global Lipschitz condition with Lipschitz constant {L}_{\varphi}>0, J(t)\in {R}^{n} is an external input vector, and H\in {R}^{p\times n} is a known constant matrix. *α* is a switching signal which takes its values in the finite set \mathcal{I}=\{1,2,\dots ,N\}. The matrices ({A}_{\alpha},{W}_{\alpha},{V}_{\alpha},{H}_{\alpha}) are allowed to take values, at an arbitrary time, in the finite set \{({A}_{1},{W}_{1},{V}_{1},{H}_{1}),\dots ,({A}_{N},{W}_{N},{V}_{N},{H}_{N})\}. Throughout this paper, we assume that the switching rule *α* is not known a priori and its instantaneous value is available in real time. Define the indicator function \xi (t)={({\xi}_{1}(t),{\xi}_{2}(t),\dots ,{\xi}_{N}(t))}^{T}, where

{\xi}_{i}(t)=\{\begin{array}{cc}1,\hfill & \text{when the switched system is described by the }i\text{th mode }({A}_{i},{W}_{i},{V}_{i},{H}_{i}),\hfill \\ 0,\hfill & \text{otherwise},\hfill \end{array}

with i=1,\dots ,N. By using this indicator function, the model of the switched multilayer neural networks (1)-(2) can be written as

where {\sum}_{i=1}^{N}{\xi}_{i}(t)=1 holds under any switching rule. Let \gamma >0 be a prescribed level of disturbance attenuation. In this paper, for a given \kappa >0, we derive criteria ensuring that the switched multilayer neural network (3)-(4) with J(t)=0 is exponentially stable (\parallel x(t)\parallel \le \mathrm{\Upsilon}exp(-\frac{\kappa}{2}t) for some \mathrm{\Upsilon}>0) and

\underset{t\ge 0}{sup}\{exp(\kappa t){z}^{T}(t)z(t)\}<{\gamma}^{2}{\int}_{0}^{\mathrm{\infty}}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma ,

(5)

under zero-initial conditions for all nonzero J(t)\in {L}_{2}[0,\mathrm{\infty}), where {L}_{2}[0,\mathrm{\infty}) is the space of square integrable vector functions over [0,\mathrm{\infty}).
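The mode-selection mechanism of the switched model can be sketched in a few lines. The two-mode data, the tanh activation, and the switching rule below are hypothetical placeholders for illustration, not values from this paper:

```python
import numpy as np

def xi(alpha_t, N):
    """Indicator vector xi(t): entry alpha_t is 1, all others 0."""
    e = np.zeros(N)
    e[alpha_t - 1] = 1.0
    return e

def switched_rhs(x, t, alpha, modes, phi, J):
    """Right-hand side of model (3): dx/dt = A_i x + W_i phi(V_i x) + J(t),
    where i = alpha(t) selects the active mode."""
    A, W, V = modes[alpha(t) - 1]
    return A @ x + W @ phi(V @ x) + J(t)

# Hypothetical two-mode data (A_i = diag(-a_k) with a_k > 0); tanh is a
# globally Lipschitz activation with L_phi = 1.
modes = [(-2.0 * np.eye(2), 0.3 * np.eye(2), np.eye(2)),
         (-3.0 * np.eye(2), 0.2 * np.eye(2), np.eye(2))]
alpha = lambda t: 1 if int(t) % 2 == 0 else 2
dx = switched_rhs(np.array([-1.5, 0.5]), 0.0, alpha, modes,
                  np.tanh, lambda t: np.zeros(2))
```

Note that \xi (t) always has exactly one nonzero entry, which is why {\sum}_{i=1}^{N}{\xi}_{i}(t)=1 under any switching rule.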

A set of generalized {\mathcal{H}}_{2} exponential stability criteria for the switched multilayer neural network (3)-(4) is derived in the following theorem.

**Theorem 1** *For given* \gamma >0 *and* \kappa >0, *the switched multilayer neural network* (3)-(4) *is generalized* {\mathcal{H}}_{2} *exponentially stable if*

*for* i=1,\dots ,N, *where* {\lambda}_{min}(\cdot ) *denotes the minimum eigenvalue of its matrix argument and* *P* *satisfies the Lyapunov inequality* {A}_{i}^{T}P+P{A}_{i}<-{k}_{i}I.

*Proof* We consider the Lyapunov function V(t)=exp(\kappa t){x}^{T}(t)Px(t). The time derivative of the function along the trajectory of (3) satisfies

\dot{V}(t)<\sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t)\{-{x}^{T}(t)[{k}_{i}I-\kappa P]x(t)+2{x}^{T}(t)P{W}_{i}\varphi ({V}_{i}x(t))+2{x}^{T}(t)PJ(t)\}.

(10)

Applying Young’s inequality [20], we have 2{x}^{T}(t)P{W}_{i}\varphi ({V}_{i}x(t))\le {x}^{T}(t)PPx(t)+{\varphi}^{T}({V}_{i}x(t)){W}_{i}^{T}{W}_{i}\varphi ({V}_{i}x(t))\le {\parallel P\parallel}^{2}{\parallel x(t)\parallel}^{2}+{L}_{\varphi}^{2}{\parallel {W}_{i}\parallel}^{2}{\parallel {V}_{i}\parallel}^{2}{\parallel x(t)\parallel}^{2} and 2{x}^{T}(t)PJ(t)\le {x}^{T}(t)P{P}^{T}x(t)+{J}^{T}(t)J(t)\le {\parallel P\parallel}^{2}{\parallel x(t)\parallel}^{2}+{\parallel J(t)\parallel}^{2}. Substituting these inequalities into (10), we have

\begin{array}{rcl}\dot{V}(t)& <& \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t)\{-[{k}_{i}-(2+\kappa ){\parallel P\parallel}^{2}-{L}_{\varphi}^{2}{\parallel {W}_{i}\parallel}^{2}{\parallel {V}_{i}\parallel}^{2}]{\parallel x(t)\parallel}^{2}+{\parallel J(t)\parallel}^{2}\}\\ =& -\sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t)[{k}_{i}-(2+\kappa ){\parallel P\parallel}^{2}-{L}_{\varphi}^{2}{\parallel {W}_{i}\parallel}^{2}{\parallel {V}_{i}\parallel}^{2}]{\parallel x(t)\parallel}^{2}\\ +\sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){\parallel J(t)\parallel}^{2}.\end{array}

(11)
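The two Young-type bounds used above can be sanity-checked numerically. The random matrices and the tanh nonlinearity (with {L}_{\varphi}=1 and \varphi (0)=0) below are illustrative assumptions, not data from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal(n)
P = rng.standard_normal((n, n)); P = (P + P.T) / 2     # symmetric P
W = rng.standard_normal((n, n))
V = rng.standard_normal((n, n))
L_phi = 1.0                                            # tanh is 1-Lipschitz, tanh(0) = 0
phi = np.tanh(V @ x)

norm = lambda M: np.linalg.norm(M, 2)                  # spectral (induced 2-) norm

# Young's inequality: 2 x^T P W phi <= x^T P P x + phi^T W^T W phi
lhs = 2 * x @ P @ W @ phi
young = x @ P @ P @ x + phi @ W.T @ W @ phi
# Norm bound used in (11): ... <= (||P||^2 + L_phi^2 ||W||^2 ||V||^2) ||x||^2
bound = (norm(P)**2 + L_phi**2 * norm(W)**2 * norm(V)**2) * (x @ x)
```

Here `lhs <= young <= bound` holds for any such data, which is the chain substituted into (10).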

If the following condition is satisfied,

{k}_{i}-(2+\kappa ){\parallel P\parallel}^{2}-{L}_{\varphi}^{2}{\parallel {W}_{i}\parallel}^{2}{\parallel {V}_{i}\parallel}^{2}>0,

(12)

for i=1,\dots ,N, we have

\begin{array}{rcl}\dot{V}(t)& <& \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){\parallel J(t)\parallel}^{2}\\ =& exp(\kappa t){\parallel J(t)\parallel}^{2}.\end{array}

(13)

The following inequalities

{\parallel {W}_{i}\parallel}^{2}<\frac{{k}_{i}-(2+\kappa ){\parallel P\parallel}^{2}}{{L}_{\varphi}^{2}{\parallel {V}_{i}\parallel}^{2}},\phantom{\rule{1em}{0ex}}{\parallel P\parallel}^{2}<\frac{{k}_{i}}{2+\kappa},

(14)

for i=1,\dots ,N, imply the condition (12); these inequalities are precisely the conditions (6) and (7). Under the zero-initial condition, we have V(t){|}_{t=0}=0 and V(t)\ge 0. Define

\mathrm{\Phi}(t)=V(t)-{\int}_{0}^{t}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma .

(15)

Then, for any nonzero J(t), we obtain

\begin{array}{rcl}\mathrm{\Phi}(t)& =& V(t)-V(t){|}_{t=0}-{\int}_{0}^{t}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma \\ =& {\int}_{0}^{t}[\dot{V}(\sigma )-exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )]\phantom{\rule{0.2em}{0ex}}d\sigma .\end{array}

From (13), we have \mathrm{\Phi}(t)<0. This implies

V(t)<{\int}_{0}^{t}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma .

The condition (9) implies

\begin{array}{rcl}exp(\kappa t){z}^{T}(t)z(t)& =& \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){x}^{T}(t){H}_{i}^{T}{H}_{i}x(t)\\ \le & \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){\parallel {H}_{i}\parallel}^{2}{\parallel x(t)\parallel}^{2}\\ \le & {\gamma}^{2}\sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){\lambda}_{min}(P){\parallel x(t)\parallel}^{2}\\ \le & {\gamma}^{2}\sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){x}^{T}(t)Px(t)\\ =& {\gamma}^{2}V(t)\\ <& {\gamma}^{2}{\int}_{0}^{t}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma \\ \le & {\gamma}^{2}{\int}_{0}^{\mathrm{\infty}}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma .\end{array}

(16)

Taking the supremum over t\ge 0 leads to (5). This completes the proof. □
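A minimal numerical check of the norm-based criteria of Theorem 1 might look as follows; the function name and the single-mode sample data are hypothetical, chosen only so that all conditions hold:

```python
import numpy as np

def theorem1_conditions(P, modes, kappa, k, L_phi):
    """Check, for every mode i, the Lyapunov inequality A_i^T P + P A_i < -k_i I
    together with the norm conditions (12)/(14):
    ||P||^2 < k_i/(2+kappa)  and
    ||W_i||^2 < (k_i - (2+kappa)||P||^2) / (L_phi^2 ||V_i||^2)."""
    norm = lambda M: np.linalg.norm(M, 2)
    ok = True
    for (A, W, V), k_i in zip(modes, k):
        lyap = np.max(np.linalg.eigvalsh(A.T @ P + P @ A + k_i * np.eye(len(P)))) < 0
        c_P = norm(P)**2 < k_i / (2 + kappa)
        c_W = norm(W)**2 < (k_i - (2 + kappa) * norm(P)**2) / (L_phi**2 * norm(V)**2)
        ok = ok and lyap and c_P and c_W
    return bool(ok)

# Hypothetical single-mode data for which all three conditions hold.
n = 2
ok = theorem1_conditions(0.5 * np.eye(n),
                         [(-12.0 * np.eye(n), 0.5 * np.eye(n), np.eye(n))],
                         kappa=0.5, k=[10.0], L_phi=1.0)
```

All of the quantities involved are scalar norms or eigenvalues, so the criteria are checkable without any optimization.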

**Corollary 1** *When* J(t)=0, *the conditions* (6)-(9) *ensure that the switched multilayer neural network* (3)-(4) *is exponentially stable*.

*Proof* When J(t)=0, from (13), \dot{V}(t)<0 for all x(t)\ne 0. Thus, for any t>0, it follows that

exp(\kappa t){\parallel x(t)\parallel}^{2}{\lambda}_{min}(P)\le exp(\kappa t){x}^{T}(t)Px(t)=V(t)<V(0)={x}^{T}(0)Px(0).

(17)

Finally, we have

\parallel x(t)\parallel <\sqrt{\frac{{x}^{T}(0)Px(0)}{{\lambda}_{min}(P)}}exp(-\frac{\kappa}{2}t).

(18)

This completes the proof. □

In the next theorem, we establish a new set of LMI criteria for the generalized {\mathcal{H}}_{2} exponential stability of the switched multilayer neural network (3)-(4). These LMI criteria can be checked readily via standard numerical algorithms [21, 22].

**Theorem 2** *For given level* \gamma >0 *and* \kappa >0, *the switched multilayer neural network* (3)-(4) *is generalized* {\mathcal{H}}_{2} *exponentially stable if there exist a symmetric positive definite matrix* *P* *and a positive scalar* *ϵ* *such that*

\left[\begin{array}{ccc}{A}_{i}^{T}P+P{A}_{i}+\kappa P+\u03f5{L}_{\varphi}^{2}{V}_{i}^{T}{V}_{i}& P{W}_{i}& P\\ {W}_{i}^{T}P& -\u03f5I& 0\\ P& 0& -I\end{array}\right]<0,

(19)

{H}_{i}^{T}{H}_{i}<{\gamma}^{2}P,

(20)
*for* i=1,\dots ,N.
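Once a candidate pair (P,\u03f5) is available, the LMIs (19)-(20) reduce to eigenvalue tests that can be verified directly. A sketch, with hypothetical mode data and candidate values chosen only for illustration:

```python
import numpy as np

def lmi_19_20_hold(P, eps, A, W, V, H, kappa, gamma, L_phi):
    """Eigenvalue test of the LMIs for one mode: the block matrix of (19)
    must be negative definite, and (20) requires H^T H < gamma^2 P."""
    n = len(P)
    Z, I = np.zeros((n, n)), np.eye(n)
    M11 = A.T @ P + P @ A + kappa * P + eps * L_phi**2 * V.T @ V
    M = np.block([[M11,      P @ W,    P],
                  [W.T @ P,  -eps * I, Z],
                  [P,        Z,        -I]])
    lmi19 = np.max(np.linalg.eigvalsh(M)) < 0
    lmi20 = np.max(np.linalg.eigvalsh(H.T @ H - gamma**2 * P)) < 0
    return bool(lmi19 and lmi20)

# Hypothetical mode data and candidate (P, eps); for illustration only.
n = 2
feasible = lmi_19_20_hold(np.eye(n), 1.0,
                          A=-5.0 * np.eye(n), W=0.1 * np.eye(n),
                          V=np.eye(n), H=0.5 * np.eye(n),
                          kappa=0.5, gamma=1.0, L_phi=1.0)
```

Searching for (P,\u03f5) in the first place is a semidefinite feasibility problem solvable by the standard interior-point tools cited in [21, 22].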

*Proof* Consider the Lyapunov function V(t)=exp(\kappa t){x}^{T}(t)Px(t). From the Lipschitz condition on *φ*, we have \u03f5[{L}_{\varphi}^{2}{x}^{T}(t){V}_{i}^{T}{V}_{i}x(t)-{\varphi}^{T}({V}_{i}x(t))\varphi ({V}_{i}x(t))]\ge 0 for any \u03f5>0. By using this inequality, the time derivative of V(t) along the trajectory of (3) satisfies

\begin{array}{rcl}\dot{V}(t)& =& \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t)\{{x}^{T}(t)[{A}_{i}^{T}P+P{A}_{i}+\kappa P]x(t)\\ +2{x}^{T}(t)P{W}_{i}\varphi ({V}_{i}x(t))+2{x}^{T}(t)PJ(t)\}\\ \le & \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t)\{{x}^{T}(t)[{A}_{i}^{T}P+P{A}_{i}+\kappa P]x(t)\\ +2{x}^{T}(t)P{W}_{i}\varphi ({V}_{i}x(t))+2{x}^{T}(t)PJ(t)\\ +\u03f5[{L}_{\varphi}^{2}{x}^{T}(t){V}_{i}^{T}{V}_{i}x(t)-{\varphi}^{T}({V}_{i}x(t))\varphi ({V}_{i}x(t))]\}\\ =& \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){\left[\begin{array}{c}x(t)\\ \varphi ({V}_{i}x(t))\\ J(t)\end{array}\right]}^{T}\left[\begin{array}{ccc}{A}_{i}^{T}P+P{A}_{i}+\kappa P+\u03f5{L}_{\varphi}^{2}{V}_{i}^{T}{V}_{i}& P{W}_{i}& P\\ {W}_{i}^{T}P& -\u03f5I& 0\\ P& 0& -I\end{array}\right]\\ \times \left[\begin{array}{c}x(t)\\ \varphi ({V}_{i}x(t))\\ J(t)\end{array}\right]+\sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){J}^{T}(t)J(t).\end{array}

(21)

If the LMI (19) is satisfied, we have

\begin{array}{rcl}\dot{V}(t)& <& \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){J}^{T}(t)J(t)\\ =& exp(\kappa t){J}^{T}(t)J(t).\end{array}

(22)

Under the zero-initial condition, one has V(t){|}_{t=0}=0 and V(t)\ge 0. Define

\mathrm{\Phi}(t)=V(t)-{\int}_{0}^{t}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma .

(23)

Then, for any nonzero J(t), we obtain

\begin{array}{rcl}\mathrm{\Phi}(t)& =& V(t)-V(t){|}_{t=0}-{\int}_{0}^{t}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma \\ =& {\int}_{0}^{t}[\dot{V}(\sigma )-exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )]\phantom{\rule{0.2em}{0ex}}d\sigma .\end{array}

From (22), we have \mathrm{\Phi}(t)<0. This implies

V(t)<{\int}_{0}^{t}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma .

The LMI (20) implies

\begin{array}{rcl}exp(\kappa t){z}^{T}(t)z(t)& =& \sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){x}^{T}(t){H}_{i}^{T}{H}_{i}x(t)\\ <& {\gamma}^{2}\sum _{i=1}^{N}{\xi}_{i}(t)exp(\kappa t){x}^{T}(t)Px(t)\\ =& {\gamma}^{2}V(t)\\ <& {\gamma}^{2}{\int}_{0}^{t}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma \\ \le & {\gamma}^{2}{\int}_{0}^{\mathrm{\infty}}exp(\kappa \sigma ){J}^{T}(\sigma )J(\sigma )\phantom{\rule{0.2em}{0ex}}d\sigma .\end{array}

(24)

Taking the supremum over t\ge 0 leads to (5). This completes the proof. □

**Corollary 2** *When* J(t)=0, *the LMI conditions* (19)-(20) *ensure that the switched multilayer neural network* (3)-(4) *is exponentially stable*.

*Proof* When J(t)=0, from (22), we have

\dot{V}(t)<0,\phantom{\rule{1em}{0ex}}\mathrm{\forall}x(t)\ne 0.

(25)

Thus, for any t>0, it follows that

exp(\kappa t){\parallel x(t)\parallel}^{2}{\lambda}_{min}(P)\le exp(\kappa t){x}^{T}(t)Px(t)=V(t)<V(0)={x}^{T}(0)Px(0).

(26)

Finally, we have

\parallel x(t)\parallel <\sqrt{\frac{{x}^{T}(0)Px(0)}{{\lambda}_{min}(P)}}exp(-\frac{\kappa}{2}t).

(27)

This completes the proof. □

**Example 1** Consider the switched neural network (3)-(4), where

Applying Theorem 2 with \gamma =0.52, we obtain

P=\left[\begin{array}{cc}3.7405& 0.0474\\ 0.0474& 3.7059\end{array}\right],\phantom{\rule{2em}{0ex}}\u03f5=6.2769.
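As a quick numerical sanity check (not part of the original example), the reported *P* can be confirmed to be symmetric positive definite, as Theorem 2 requires:

```python
import numpy as np

# The matrix P reported for Example 1; verify symmetry and positive
# definiteness via its smallest eigenvalue.
P = np.array([[3.7405, 0.0474],
              [0.0474, 3.7059]])
lam_min = float(np.min(np.linalg.eigvalsh(P)))
```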

The switching signal \alpha \in \{1,2\} is given by

\alpha =\{\begin{array}{cc}1,\hfill & 2k\le t\le 2k+1,k\in Z,\hfill \\ 2,\hfill & \text{otherwise},\hfill \end{array}

where *Z* denotes the set of nonnegative integers. Figure 1 shows the state trajectories when x(0)={[-1.5\phantom{\rule{0.5em}{0ex}}0.5]}^{T} and each {J}_{i}(t) (i=1,2) is white noise.
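A rough forward-Euler simulation of this switching rule can illustrate the exponential decay when J(t)=0. The mode matrices below are hypothetical stand-ins for the example's data (which are not reproduced here), with tanh as the Lipschitz activation:

```python
import numpy as np

def alpha(t):
    """Switching signal of the example: mode 1 on [2k, 2k+1), mode 2 otherwise."""
    return 1 if int(np.floor(t)) % 2 == 0 else 2

# Hypothetical stable mode matrices (A_i, W_i, V_i); illustration only.
modes = {1: (-4.0 * np.eye(2), 0.2 * np.eye(2), np.eye(2)),
         2: (-3.0 * np.eye(2), 0.3 * np.eye(2), np.eye(2))}

dt, T = 1e-3, 6.0
x = np.array([-1.5, 0.5])                         # x(0) as in Figure 1
for step in range(int(T / dt)):
    A, W, V = modes[alpha(step * dt)]
    x = x + dt * (A @ x + W @ np.tanh(V @ x))     # forward Euler, J(t) = 0
```

With both modes stable, the state norm decays toward zero regardless of the switching instants, consistent with Corollary 2.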