In this paper \(\mathbb{R}_{0}^{+}= [0,\infty )\), \(\mathbb{R}^{n}\) is the n-dimensional vector space over the real numbers; \(\mathbb {R}^{m\times n}\) will be used for the set of all \(m\times n\) matrices, \(\mathrm {I}_{n\times n}\) is the \(n\times n\) identity matrix; \(0_{m\times n}\) is an \(m\times n\) matrix filled with zeros; a superscript T marks the transpose of a vector or a matrix; and \(\vec{\mathrm{e}}_{k,n}\) is the unit vector along the kth coordinate direction in an n-dimensional space. Subscripts n and \(n\times n\), which indicate the dimension of the space or the matrix, will be dropped whenever they are clear from the context. The Euclidean norm of a vector \(a\in\mathbb{R}^{n}\) will be written as \(\vert a\vert \), so
$$\vert a\vert = \Biggl(\sum_{i=1}^{n}a_{i}^{2} \Biggr)^{\frac{1}{2}} $$
and for a square matrix \(A\in\mathbb{R}^{n\times n}\), \(\vert A\vert \) will be the operator norm induced by the Euclidean vector norm. Recall that
$$\vert A\vert = \bigl(\lambda_{\max} \bigl(A^{T}A \bigr) \bigr)^{\frac{1}{2}}, $$
where \(\lambda_{\max}\) is the largest eigenvalue of \(A^{\mathrm{T}}A\). We will write \(\mathcal{C}_{n,\tau}\) for the Banach space \(\mathcal {C} ( [-\tau,0 ],\mathbb{R}^{n} )\) of continuous functions from \([-\tau,0 ]\) to \(\mathbb {R}^{n}\) with norm
$$\Vert x\Vert _{\infty}=\sup_{s\in [-\tau,0 ]} \bigl\{ \bigl\vert x (s )\bigr\vert \bigr\} $$
and use \(\mathcal{C}_{n,\tau}^{1}\) for the Banach space \(\mathcal{C}^{1} ( [-\tau,0 ],\mathbb{R}^{n} )\) of continuously differentiable functions from \([-\tau,0 ]\) to \(\mathbb{R}^{n}\) with norm
$$\Vert x\Vert _{\infty,1}=\sup_{s\in [-\tau ,0 ]} \max \bigl\{ \bigl\vert x (s )\bigr\vert ,\bigl\vert \dot {x} (s )\bigr\vert \bigr\} . $$
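As a quick numerical sanity check of the induced matrix norm above, the following sketch (an illustration with an arbitrary matrix, assuming NumPy) computes \(\vert A\vert \) from \(\lambda_{\max} (A^{T}A )\) and compares it with the library's spectral norm.

```python
# Minimal check (illustrative): the induced 2-norm equals sqrt(lambda_max(A^T A)).
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                    # arbitrary example matrix

lam_max = np.max(np.linalg.eigvalsh(A.T @ A))   # eigenvalues of the symmetric matrix A^T A
op_norm = np.sqrt(lam_max)

print(op_norm, np.linalg.norm(A, 2))            # both give the spectral (operator) norm
```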
We will also need the time shift operator, which acts on time-dependent functions and is given by
$$\mathcal{T}_{t}x=s\mapsto x (s+t ). $$
For a function f with domain X, the function g with domain \(Y\subset X\) that coincides with f on Y will be denoted by \(f\vert _{Y} \). As is usual in the literature on differential equations with delay, we will use the abbreviated notation \(x_{t}\) for the time shifted function x, restricted to the domain \([-\tau,0 ]\), so
$$x_{t}=\mathcal{T}_{t}x\vert _{ [-\tau,0 ]}. $$
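The following small sketch (an assumed scalar example, not from the paper) illustrates the shift operator \(\mathcal{T}_{t}\) and the segment notation \(x_{t}\).

```python
# Illustration of T_t and of the segment x_t = (T_t x)|_[-tau,0].
import numpy as np

tau = 1.0
x = np.sin                                  # example trajectory s -> x(s)

def shift(x, t):
    """The time shift: (T_t x)(s) = x(s + t)."""
    return lambda s: x(s + t)

def segment(x, t, num=5):
    """Sample the segment x_t, i.e. the shifted function restricted to [-tau, 0]."""
    s = np.linspace(-tau, 0.0, num)
    return s, shift(x, t)(s)

print(segment(x, 2.0))                      # samples x(2 - tau), ..., x(2)
```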
In this paper we will consider a Lur’e system of neutral type with indirect control,
$$\begin{aligned}& \frac{d}{dt} \bigl[x (t )-Dx (t-\tau ) \bigr]=A_{1}x (t )+A_{2}x (t-\tau )+bf \bigl(\sigma (t ) \bigr),\quad t\ge t_{0}, \end{aligned}$$
(1)
$$\begin{aligned}& \frac{d}{dt}\sigma (t )=c^{\mathrm{T}}x (t )-\rho f \bigl(\sigma (t ) \bigr), \quad t\ge t_{0}, \end{aligned}$$
(2)
$$\begin{aligned}& x_{t_{0}}=\phi \end{aligned}$$
(3)
with \(\phi\in\mathcal{C}_{n,\tau}\), \(A_{1},A_{2},D\in\mathbb {R}^{n\times n}\), \(b,c\in\mathbb{R}^{n}\), \(\rho,\tau\in\mathbb{R}\), \(f\in\mathcal {C} (\mathbb{R},\mathbb{R} )\) such that \(\rho>0\), \(\tau>0\), \(\vert D\vert <1\), and
$$ k_{1}\sigma^{2}\le\sigma f (\sigma )\le k_{2} \sigma ^{2}, $$
(4)
where \(k_{1},k_{2}\in\mathbb{R}\), \(k_{2}>k_{1}>0\), and (4) is required to hold for all \(\sigma\in\mathbb{R}\). This is a special case of the more general autonomous neutral functional-differential equation
$$ \frac{d}{dt} \bigl[x (t )-Dx (t-\tau ) \bigr]=F (x_{t} ), $$
(5)
where \(D\in\mathbb{R}^{n\times n}\) and \(F\in\mathcal{C} (\mathcal{C}_{n,\tau},\mathbb{R}^{n} )\) with initial condition
$$ x_{t_{0}}=\phi, $$
(6)
where \(\phi\in\mathcal{C}_{n,\tau}^{1}\). If we need to refer to a specific solution of (5) with (6) then we will use the notation \(x_{ \langle t_{0},\phi \rangle}\).
Definition 1
A pair \((x,\sigma )\in\mathcal{C} ( [t_{0}-\tau,\infty ),\mathbb{R}^{n} )\times\mathcal{C} ( [t_{0},\infty ),\mathbb{R} )\) is a solution of (1), (2), (3) on \([t_{0},\infty )\) if x satisfies (3) and the pair satisfies the system (1) and (2).
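For intuition, a minimal fixed-step explicit Euler sketch of (1), (2) is given below. All data (the matrices, the sector nonlinearity f, the initial function, and the step size) are illustrative assumptions rather than values from the paper, and the scheme assumes τ is an integer multiple of the step.

```python
# Illustrative Euler simulation of the neutral Lur'e system (1)-(2) (not the paper's method).
import numpy as np

A1 = np.array([[-2.0, 0.0], [0.0, -3.0]])
A2 = 0.1 * np.eye(2)
D = 0.2 * np.eye(2)                          # |D| < 1
b = np.array([1.0, 0.0])
c = np.array([0.5, 0.5])
rho, tau = 1.0, 1.0
f = lambda s: 0.5 * s + 0.3 * np.tanh(s)     # satisfies (4) with k1 = 0.5, k2 = 0.8

h = 1e-3
N = int(round(tau / h))                      # delay measured in steps
phi = lambda s: np.array([1.0, -1.0])        # constant initial function on [-tau, 0]

x = [phi(-tau + k * h) for k in range(N + 1)]   # samples of x on [t0 - tau, t0]
sigma = 0.5
y = x[-1] - D @ x[0]                         # y(t) = x(t) - D x(t - tau)

for _ in range(5 * N):                       # integrate over five delay intervals
    x_now, x_del = x[-1], x[-N - 1]
    y = y + h * (A1 @ x_now + A2 @ x_del + b * f(sigma))   # Euler step for (1)
    sigma = sigma + h * (c @ x_now - rho * f(sigma))       # Euler step for (2)
    x.append(y + D @ x[-N])                  # x(t+h) = y(t+h) + D x(t+h-tau)

print(x[-1], sigma)
```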
As discussed in [23], p.169, there are two natural families of metrics or measures for stability in this case: one based on x alone and another based on x and its derivative. A general theory of stability in two metrics or measures was first given in [24] and extended in [25]; see also [26, 27]. We use the definition of measure given in [26].
Definition 2
A function \(h\in\mathcal{C} (\mathbb{R}_{0}^{+}\times X,\mathbb {R}_{0}^{+} )\), where X is a Banach space, is called a measure in X if
$$\inf_{ (t,x )\in\mathbb{R}_{0}^{+}\times X}h (t,x )=0 $$
and the set of all measures in X is denoted by \(\Gamma (X )\).
Note the large difference in meaning conveyed by the subtle difference in terminology between a ‘measure in X’ and a ‘measure on X’.
Definition 3
For given \(h_{0}\in\Gamma (\mathcal{C}_{n,\tau}^{1} )\) and \(h\in\Gamma (\mathcal{C}_{n,\tau}^{1} )\) the solution \(x_{ \langle t_{0},\phi \rangle}\) of (5) with (6) is \((h_{0},h )\)
stable if
$$\forall\epsilon>0 \exists\delta>0 \forall\psi\in\mathcal {C}_{n,\tau} : h_{0} (t_{0},\phi-\psi )\le\delta \Rightarrow h (t,x_{ \langle t_{0},\phi \rangle t}-x_{ \langle t_{0},\psi \rangle t} )\le\epsilon. $$
Definition 4
For given \(h_{0}\in\Gamma (\mathcal{C}_{n,\tau}^{1} )\) and \(h\in\Gamma (\mathcal{C}_{n,\tau}^{1} )\) the solution \(x_{ \langle t_{0},\phi \rangle}\) of (5) with (6) is \((h_{0},h )\)
asymptotically
stable if it is \((h_{0},h )\)
stable and
$$\begin{aligned}& \exists\delta>0 \forall\epsilon>0 \exists T>t_{0} \forall t\ge T \forall\psi\in\mathcal{C}_{n,\tau} : \\& h_{0} (t_{0},\phi-\psi )\le\delta\Rightarrow h (t,x_{ \langle t_{0},\phi \rangle t}-x_{ \langle t_{0},\psi \rangle t} )\le\epsilon \end{aligned}$$
or equivalently if it is \((h_{0},h )\)
stable and
$$\exists\delta>0 \forall\psi\in\mathcal{C}_{n,\tau} : h_{0} (t_{0},\phi-\psi )\le\delta\Rightarrow\lim_{t\rightarrow\infty}h (t,x_{ \langle t_{0},\phi \rangle t}-x_{ \langle t_{0},\psi \rangle t} )=0. $$
Definition 5
For given \(h_{0}\in\Gamma (\mathcal{C}_{n,\tau}^{1} )\) and \(h\in\Gamma (\mathcal{C}_{n,\tau}^{1} )\) the solution \(x_{ \langle t_{0},\phi \rangle}\) of (5) with (6) is \((h_{0},h )\)
exponentially
stable (following, for instance, [27, 28]) if
$$\begin{aligned}& \exists\delta>0 \exists K>0 \exists\lambda>0 \forall t\ge t_{0} \forall\psi\in \mathcal{C}_{n,\tau} : \\& h_{0} (t_{0},\phi-\psi )\le\delta\Rightarrow h (t,x_{ \langle t_{0},\phi \rangle t}-x_{ \langle t_{0},\psi \rangle t} )\le Kh_{0} (t_{0}, \phi-\psi )e^{-\lambda (t-t_{0} )}. \end{aligned}$$
Definition 6
For given \(h_{0}\in\Gamma (\mathcal{C}_{n,\tau}^{1} )\) and \(h\in\Gamma (\mathcal{C}_{n,\tau}^{1} )\) the solution \(x_{ \langle t_{0},\phi \rangle}\) of (5) with (6) is \((h_{0},h )\)
globally asymptotically
stable if
$$\forall\epsilon>0 \forall\psi\in\mathcal{C}_{n,\tau} \exists T>t_{0} \forall t\ge T : h (t,x_{ \langle t_{0},\phi \rangle t}-x_{ \langle t_{0},\psi \rangle t} )\le \epsilon. $$
Definition 7
We call the zero solution \(x:t\mapsto0_{n\times1}\), \(\sigma:t\mapsto0\) of (1), (2) stable if it is \((h_{0},h )\) stable for \(h_{0} (t,\phi )=\Vert \phi \Vert _{\infty}\) and \(h (t, \langle x_{t},\sigma_{t} \rangle )=\sqrt{\vert x_{t} (0 )\vert ^{2}+\vert \sigma _{t} (0 )\vert ^{2}}\).
Definition 8
We call the zero solution \(x:t\mapsto0_{n\times1}\), \(\sigma:t\mapsto0\) of (1), (2) asymptotically stable if it is \((h_{0},h )\) asymptotically stable for \(h_{0} (t,\phi )=\Vert \phi \Vert _{\infty}\) and \(h (t, \langle x_{t},\sigma_{t} \rangle )=\sqrt{\vert x_{t} (0 )\vert ^{2}+\vert \sigma _{t} (0 )\vert ^{2}}\).
Definition 9
We call the zero solution \(x:t\mapsto0_{n\times1}\), \(\sigma:t\mapsto0\) of (1), (2) globally asymptotically stable if it is \((h_{0},h )\) globally asymptotically stable for \(h_{0} (t,\phi )=\Vert \phi \Vert _{\infty}\) and \(h (t, \langle x_{t},\sigma_{t} \rangle )=\sqrt{\vert x_{t} (0 )\vert ^{2}+\vert \sigma _{t} (0 )\vert ^{2}}\).
Definition 10
We call the zero solution \(x:t\mapsto0_{n\times1}\), \(\sigma:t\mapsto0\) of (1), (2) globally asymptotically stable in metric
\(\mathcal{C}^{1}\) if it is \((h_{0},h )\) globally asymptotically stable for \(h_{0} (t,\phi )=\Vert \phi \Vert _{\infty}\) and
$$h \bigl(t, \langle x_{t},\sigma_{t} \rangle \bigr)=\max \bigl(\sqrt{\bigl\vert x_{t} (0 )\bigr\vert ^{2}+\bigl\vert \sigma _{t} (0 )\bigr\vert ^{2}},\sqrt{\bigl\vert \dot{x}_{t} (0 )\bigr\vert ^{2}+\bigl\vert \dot{ \sigma}_{t} (0 )\bigr\vert ^{2}} \bigr). $$
Definition 11
The system (1), (2) is called absolutely stable if the zero solution of the system (1), (2) is globally asymptotically stable for an arbitrary function \(f (\sigma )\) that satisfies (4).
To investigate the system (1), (2) we use a Lyapunov-Krasovskii functional of the form
$$\begin{aligned} V [x,\sigma,t ] = & x^{T} (t )Hx (t ) \\ &{} +\int_{s=t-\tau}^{t}e^{-\zeta (t-s )} \bigl\{ x^{T} (s )G_{1}x (s )+\dot{x}^{T} (s )G_{2}\dot{x} (s ) \bigr\} \,ds \\ &{} +\beta\int_{w=0}^{\sigma (t )}f (w )\,dw, \end{aligned}$$
(7)
where \(H,G_{1},G_{2}\in\mathbb{R}^{n\times n}\) and \(\beta,\zeta\in \mathbb{R}\) with \(\beta>0\), \(\zeta>0\).
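The next sketch (with illustrative weights and parameters, assuming NumPy; not data from the paper) shows how the functional (7) can be evaluated on a sampled history segment by trapezoidal quadrature.

```python
# Illustrative evaluation of the Lyapunov-Krasovskii functional (7) on a sampled segment.
import numpy as np

tau, zeta, beta = 1.0, 0.5, 1.0
H, G1, G2 = np.eye(2), np.eye(2), 0.5 * np.eye(2)
f = lambda w: 0.5 * w + 0.3 * np.tanh(w)          # a nonlinearity in the sector (4)

def trapezoid(vals, grid):
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))

def V(x_seg, xdot_seg, s_grid, sigma_t):
    """x_seg[i] ~ x(t + s_grid[i]), xdot_seg[i] ~ x'(t + s_grid[i]), s_grid in [-tau, 0]."""
    # e^{-zeta (t - s)} = e^{zeta u} with u = s - t in [-tau, 0]
    integrand = np.array([np.exp(zeta * u) * (xx @ G1 @ xx + xd @ G2 @ xd)
                          for u, xx, xd in zip(s_grid, x_seg, xdot_seg)])
    delay_term = trapezoid(integrand, s_grid)
    w = np.linspace(0.0, sigma_t, 201)
    lur_e_term = beta * trapezoid(f(w), w)        # beta * int_0^{sigma(t)} f(w) dw
    return float(x_seg[-1] @ H @ x_seg[-1]) + delay_term + lur_e_term

# example history: x(s) = (cos s, sin s) on [t - tau, t], sigma(t) = 0.3
s = np.linspace(-tau, 0.0, 101)
x_seg = np.stack([np.cos(s), np.sin(s)], axis=1)
xdot_seg = np.stack([-np.sin(s), np.cos(s)], axis=1)
print(V(x_seg, xdot_seg, s, 0.3))
```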
We define the matrix
$$ S [A_{1},A_{2},b,c,\rho,\tau,H,G_{1},G_{2}, \beta,\zeta ]= \begin{bmatrix} S_{11} & S_{12} & S_{13} & S_{14}\\ S_{12}^{\mathrm{T}} & S_{22} & S_{23} & S_{24}\\ S_{13}^{\mathrm{T}} & S_{23}^{\mathrm{T}} & S_{33} & S_{34}\\ S_{14}^{\mathrm{T}} & S_{24}^{\mathrm{T}} & S_{34}^{\mathrm{T}} & S_{44} \end{bmatrix}, $$
(8)
where
$$ \begin{aligned} &S_{11}=-A_{1}^{T}H-HA_{1}-G_{1}-A_{1}^{T}G_{2}A_{1}, \qquad S_{12}=-HA_{2}-A_{1}^{T}G_{2}A_{2}, \\ &S_{13}=-HD-A_{1}^{T}G_{2}D, \qquad S_{14}=-Hb-A_{1}^{T}G_{2}b- \frac {1}{2}\beta c, \\ &S_{22}=e^{-\zeta\tau}G_{1}-A_{2}^{T}G_{2}A_{2}, \qquad S_{23}=-A_{2}^{T}G_{2}D, \qquad S_{24}=-A_{2}^{T}G_{2}b, \\ &S_{33}=e^{-\zeta\tau}G_{2}-D^{T}G_{2}D, \qquad S_{34}=-D^{T}G_{2}b,\qquad S_{44}=\beta \rho-b^{T}G_{2}b. \end{aligned} $$
(9)
In [13, 14] a general theorem was proved that provides sufficient conditions for absolute stability, together with estimates of the exponential decay of the solutions of the system (1), (2), when the elements of the matrices \(A_{1}\) and \(A_{2}\) are only known to lie in given intervals. When \(A_{1}\) and \(A_{2}\) are known exactly, the following theorem is an immediate consequence.
Theorem 1
Let
\(\vert D\vert <1\), \(\rho,\tau>0\)
and suppose that there exist positive definite matrices
\(G_{1}\), \(G_{2}\), H, and constants
\(\zeta>0\), \(\beta>0\)
such that the matrix
\(S [A_{1},A_{2},b,c,\rho,\tau,H,G_{1},G_{2},\beta,\zeta ]\)
is positive definite. Then the system (1), (2) is absolutely stable in metric \(\mathcal{C}^{1}\).
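The hypothesis of Theorem 1 can be checked numerically. The sketch below (all system data and the candidate \(H\), \(G_{1}\), \(G_{2}\), β, ζ are illustrative assumptions, not values from the paper) assembles S from (8), (9) and tests positive definiteness; since, for fixed ζ, the entries of S are affine in \(H\), \(G_{1}\), \(G_{2}\) and β, such a certificate could in practice be sought with an LMI solver.

```python
# Illustrative check of the positive definiteness condition of Theorem 1.
import numpy as np

def build_S(A1, A2, D, b, c, rho, tau, H, G1, G2, beta, zeta):
    b = b.reshape(-1, 1); c = c.reshape(-1, 1)
    e = np.exp(-zeta * tau)
    S11 = -A1.T @ H - H @ A1 - G1 - A1.T @ G2 @ A1
    S12 = -H @ A2 - A1.T @ G2 @ A2
    S13 = -H @ D - A1.T @ G2 @ D
    S14 = -H @ b - A1.T @ G2 @ b - 0.5 * beta * c
    S22 = e * G1 - A2.T @ G2 @ A2
    S23 = -A2.T @ G2 @ D
    S24 = -A2.T @ G2 @ b
    S33 = e * G2 - D.T @ G2 @ D
    S34 = -D.T @ G2 @ b
    S44 = np.array([[beta * rho - (b.T @ G2 @ b).item()]])
    return np.block([
        [S11,   S12,   S13,   S14],
        [S12.T, S22,   S23,   S24],
        [S13.T, S23.T, S33,   S34],
        [S14.T, S24.T, S34.T, S44],
    ])

def is_pd(M):
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) > 0))

# illustrative system data and candidate certificate
A1 = np.array([[-2.0, 0.0], [0.0, -3.0]]); A2 = 0.1 * np.eye(2)
D = 0.2 * np.eye(2); b = np.array([1.0, 0.0]); c = np.array([0.5, 0.5])
rho, tau, beta, zeta = 1.0, 1.0, 1.0, 0.5
H, G1, G2 = np.eye(2), np.eye(2), 0.5 * np.eye(2)

S = build_S(A1, A2, D, b, c, rho, tau, H, G1, G2, beta, zeta)
print(np.linalg.norm(D, 2) < 1, is_pd(H), is_pd(G1), is_pd(G2), is_pd(S))
```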
Corollary 1
Let
\(\vert D\vert <1\), \(\rho,\tau>0\)
and suppose that there exist positive definite matrices
\(G_{1}\), \(G_{2}\), H, and constants
\(0<\lambda<1\), \(\beta>0\)
such that the matrix
\(\tilde{S} [A_{1},A_{2},b,c,\rho,\tau ,H,G_{1},G_{2},\beta,\lambda ]\)
given by
\(S_{ij}\)
for
\((i,j)\notin\{ (2,2), (3,3) \}\)
and
\(\tilde{S}_{22} = \lambda G_{1}-A_{2}^{T}G_{2}A_{2}\), \(\tilde{S}_{33} = \lambda G_{2}-D^{T}G_{2}D\)
is positive definite. Then the system (1), (2) is absolutely stable in metric \(\mathcal{C}^{1}\)
for all finite delays
τ.
Proof
For each τ this follows from Theorem 1 by taking \(\zeta= -\tau^{-1} \log\lambda>0\), so that \(e^{-\zeta\tau}=\lambda\). □
Note 1
In this corollary there are no conditions on the delay other than \(\tau>0\).
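A small illustrative check of the substitution used in the proof: with \(\zeta=-\tau^{-1}\log\lambda\) one has \(e^{-\zeta\tau}=\lambda\) for every \(\tau>0\), so the matrix S of (8), (9) reduces to \(\tilde{S}\) regardless of the delay.

```python
# The choice zeta = -log(lambda)/tau gives e^{-zeta*tau} = lambda for every tau > 0.
import numpy as np

lam = 0.9                                   # any 0 < lambda < 1
for tau in (0.1, 1.0, 10.0):
    zeta = -np.log(lam) / tau               # zeta > 0 since lam < 1
    assert abs(np.exp(-zeta * tau) - lam) < 1e-12
```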
In analogy with the definition of exponential stability in terms of two measures, we can use the existence of a Lyapunov-Krasovskii functional with specific properties to define a new type of stability. The definition is based on the inequality
$$ \frac{d}{dt}V [x,t ]\le-\gamma V [x,t ]. $$
(10)
Definition 12
A system is stable with respect to the functional
V
with exponent
\(\gamma>0\) if inequality (10) holds for the total derivative of the functional \(V [x,t ]\) along any solution \(x:t\mapsto x (t )\) of the system.
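Integrating inequality (10), for instance via Grönwall's lemma, shows that along such solutions the functional decays at least exponentially,
$$V [x,t ]\le V [x,t_{0} ]e^{-\gamma (t-t_{0} )},\quad t\ge t_{0}, $$
so the exponent γ quantifies the guaranteed decay rate of V.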
For some systems it can be worthwhile to examine whether the system can be stabilized by adding a specific type of linear state feedback.
Definition 13
A system is stabilizable with respect to functional
V
and state feedback of a given type if adding state feedback of that type results in a system that is stable with respect to the functional V with some exponent \(\gamma>0\).
To illustrate the use of these definitions, in the next two sections we apply them first to a linear system with delay and then to a scalar nonlinear neutral system with indirect control.