In this section, we design an adaptive actuator failure compensation control scheme based on the backstepping technique and neural networks. The control objective is to construct an adaptive controller such that the system output y follows a desired reference signal \(y_{d}\).
3.1 Adaptive tracking control design
Before implementing the design, a set of uncertain constants is defined as
$$\begin{aligned}& W_{i}=M_{i}\Vert \theta_{i} \Vert _{2}^{2}=M_{i}\theta_{i}^{T} \theta_{i}, \quad i=1, \ldots,n, \end{aligned}$$
(6)
where \(\theta_{i}=[\theta_{i,1},\ldots,\theta_{i,N_{i}}]^{T}\) and \(N_{i}\) are the weight vector and the number of neurons in the ith hidden layer, respectively. The operator \(\Vert \cdot \Vert _{2}\) represents the Euclidean norm of a column vector. From this definition, \(W_{i}\) is an unknown constant. The adaptive parameter \(\hat{W}_{i}\) is utilized to estimate \(W_{i}\), and the estimation error is \(\tilde{W} _{i} = W_{i}-\hat{W}_{i}\). Compared with the adaptation mechanisms utilized in the literature, the number of parameters updated online is thus reduced from \(\Sigma_{i=1}^{n}M_{i}\) to n.
The coordinate transformations are defined as follows:
$$ \begin{gathered} {z}_{1}={x}_{1}-{y}_{d}, \\ {z}_{i}={x}_{i}-\alpha_{i-1}-y_{d}^{(i-1)}, \quad i=2,\ldots,n, \end{gathered} $$
(7)
where \(\alpha_{i-1}\) is an intermediate controller.
Denote
$$\begin{aligned} \check{u}_{j}=\frac{\operatorname{sign}(b_{j})}{{\sigma }_{j}}\hat{ \tau}^{T}v, \end{aligned}$$
(8)
where \(\hat{\tau}=[\hat{\tau}_{1},\hat{\tau}_{2,1},\ldots, \hat{\tau}_{2,m}]^{T}\) is the estimate of \({\tau}\in R^{1+m}\) specified later, and \(v=[\alpha_{n}+y_{d}^{(n)},{\sigma}_{1},\ldots,{\sigma }_{m}]^{T}\).
To ensure the boundedness of the jumping size of the Lyapunov function at failure instants, we design the adaptation laws for updating parameter estimators with projection operation.
The adaptive laws are selected as
$$\begin{aligned}& \dot{\hat{W}}_{i}=\frac{l}{2a_{i}^{2}}z_{i}^{2}-{\wp}_{1} \hat{W}_{i}, \quad i=1,2,\ldots,n, \end{aligned}$$
(9)
$$\begin{aligned}& \dot{\hat{\varepsilon}}=\sum_{i=1}^{n}hz_{i} \tanh\frac{z_{i}}{ \varsigma}-{\wp}_{2}\hat{\varepsilon}, \end{aligned}$$
(10)
$$\begin{aligned}& \dot{\hat{\tau}}=\operatorname{Proj}(-\Gamma_{\tau}vz_{n}), \quad \hat{\tau}(0)\in \Omega_{\tau}, \end{aligned}$$
(11)
where h, \({\wp}_{1}\), and \({\wp}_{2}\) are positive design constants, \(\Gamma_{\tau} \in R^{(1+m)\times(1+m)}\) is chosen to be symmetric and positive definite (for convenience, we can select \(\Gamma_{\tau}=\operatorname{diag}\{r_{1},r_{2},\dots,r_{1+m}\}\) with positive constants \(r_{i}\)), \(\operatorname{Proj}(\cdot)\) denotes the projection operator, and \(\Omega_{\tau}\) is a known convex compact set given by
$$\begin{aligned} \Omega_{\tau}=\biggl\{ (\gamma_{1}, \gamma_{2,1},\ldots,\gamma _{2,m})\Big| \frac{1}{mb _{M}} \leq\gamma_{1}\leq \frac{1}{b_{0}\kappa_{0}} \mbox{ and } \vert \gamma_{2,j}\vert \leq \frac{b _{M}u_{fM}}{b_{0}\kappa_{0}}, j=1,\ldots,m \biggr\} ; \end{aligned}$$
(12)
\(\alpha_{i}\) is the virtual controller designed as
$$\begin{aligned} \alpha_{i}=-{\lambda}_{i}{z}_{i}- \frac {1}{2a_{i}^{2}}z_{i}\hat{W} _{i}-\hat{\varepsilon}\tanh \frac{z_{i}}{\varsigma}, \end{aligned}$$
(13)
where \(\lambda_{i}\), \(a_{i}\), l, and ς are positive design constants.
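To make the structure of the scheme concrete, the following Python sketch evaluates the virtual controller (13) and performs one Euler step of the adaptive laws (9)-(11). All numeric values, helper names, and the Euler discretization are illustrative assumptions, not part of the scheme itself; because \(\Omega_{\tau}\) in (12) is a box, the projection reduces to a componentwise clip.

```python
import numpy as np

def alpha_i(z_i, W_hat_i, eps_hat, lam=2.0, a=1.0, varsigma=0.1):
    # Virtual controller (13): -lam*z - z*W_hat/(2a^2) - eps_hat*tanh(z/varsigma)
    return -lam * z_i - z_i * W_hat_i / (2 * a**2) - eps_hat * np.tanh(z_i / varsigma)

def euler_step(z, W_hat, eps_hat, tau_hat, v, box_lo, box_hi,
               l=1.0, a=1.0, h=1.0, varsigma=0.1, wp1=0.05, wp2=0.05,
               Gamma=None, dt=1e-3):
    # One Euler step of the adaptive laws (9)-(11); dt and gains are assumptions.
    if Gamma is None:
        Gamma = np.eye(len(tau_hat))                       # Gamma_tau = diag{r_i}, r_i = 1
    W_hat = W_hat + dt * (l / (2 * a**2) * z**2 - wp1 * W_hat)                        # (9)
    eps_hat = eps_hat + dt * (np.sum(h * z * np.tanh(z / varsigma)) - wp2 * eps_hat)  # (10)
    tau_raw = tau_hat + dt * (-Gamma @ v * z[-1])                                     # (11)
    tau_hat = np.clip(tau_raw, box_lo, box_hi)             # Proj onto the box Omega_tau
    return W_hat, eps_hat, tau_hat

# One illustrative step with assumed values
z = np.array([0.5, -0.2])
W_hat, eps_hat = np.zeros(2), 0.0
tau_hat = np.array([0.3, 0.0, 0.0])
v = np.array([1.0, 0.2, -0.1])
box_lo = np.array([0.1, -1.0, -1.0])
box_hi = np.array([0.5, 1.0, 1.0])
W_hat, eps_hat, tau_hat = euler_step(z, W_hat, eps_hat, tau_hat, v, box_lo, box_hi)
```

For a general convex compact set, a smooth projection operator as in [17] replaces the clip; the clip is valid here only because (12) constrains each component independently.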
With the developed projection-based tuning function approach, we further propose a new piecewise Lyapunov function analysis to establish the closed-loop system stability.
3.2 Stability analysis
Theorem 1
Consider the closed-loop adaptive system consisting of the nonlinear plant (1) with actuator failures (2)-(3) and the proposed control scheme (8)-(13), where the nonstrict-feedback nonlinear system (1) satisfies Assumptions 1-4. Then:
(1) all the signals of the closed-loop system are bounded;
(2) the tracking error converges to a small neighborhood of zero.
Proof
Consider the following Lyapunov function candidate:
$$\begin{aligned} \bar{V}_{k} =&\frac{1}{2}\sum _{i=1}^{n}z_{i}^{2}+ \frac{1}{2l} \tilde{W}^{2}+\frac{1}{2h}\tilde{ \varepsilon}^{2}+\frac{1}{2}\sum_{j=1}^{m} \vert b_{j}\vert \kappa_{j,k}(t)\tilde{\tau }^{T}\Gamma_{\tau}^{-1} \tilde{\tau}, \end{aligned}$$
(14)
where \(\hat{\varepsilon}\) is the estimate of ε, \(\tilde{\varepsilon} = \varepsilon-\hat{\varepsilon}\) refers to the estimation error, and \(\tilde{\tau} = \tau- \hat{\tau}\) is the estimation error of τ. We define \(\tau(t)\in R^{1+m}\) by
$$\begin{aligned}& \tau_{1}(t)=\frac{1}{\sum_{j=1}^{m}\vert b_{j}\vert \kappa_{j,k}(t)}, \end{aligned}$$
(15)
$$\begin{aligned}& \tau_{2,j}(t)=-\frac{b_{j}u_{fj,k}(t)}{\sum_{j=1}^{m}\vert b_{j}\vert \kappa _{j,k}(t)}, \end{aligned}$$
(16)
where \(j=1,\ldots,m\).
Note that all \(\Re_{1}\), \(\Re_{2} \in\Omega_{\tau}\) satisfy
$$\begin{aligned} \Vert \Re_{1}-\Re_{2}\Vert \leq\sqrt{\biggl( \frac{1}{b_{0}\kappa _{0}}-\frac{1}{mb _{M}}\biggr)^{2}+\frac{4mb_{M}^{2}u_{fM}^{2}}{b_{0}^{2}\kappa_{0}^{2}}}= \hbar. \end{aligned}$$
From the related properties of a projection operator in [17] we can derive that \(\hat{\tau}(t)\in\Omega_{\tau}\). We get
$$\begin{aligned} \bigl\Vert \tilde{\tau}(t)\bigr\Vert =\bigl\Vert \tau (t)- \hat{\tau}(t)\bigr\Vert \leq\hbar. \end{aligned}$$
(17)
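Since \(\Omega_{\tau}\) in (12) is a box, the bound ħ is simply its Euclidean diameter: the two farthest points are opposite corners. The following numerical check, with illustrative parameter values that are assumptions rather than plant data, confirms the closed-form expression for ħ.

```python
import numpy as np
from itertools import product

# Illustrative parameter values (assumptions, not from the plant):
m, b_M, b_0, kappa_0, u_fM = 3, 2.0, 0.5, 0.4, 1.5

g1_lo, g1_hi = 1.0 / (m * b_M), 1.0 / (b_0 * kappa_0)   # range of gamma_1 in (12)
g2_bd = b_M * u_fM / (b_0 * kappa_0)                     # bound on |gamma_{2,j}| in (12)

# hbar as in (17)
hbar = np.sqrt((g1_hi - g1_lo) ** 2
               + 4 * m * b_M**2 * u_fM**2 / (b_0**2 * kappa_0**2))

# Diameter of the box Omega_tau: maximum distance over all corner pairs
corners = [np.array(c) for c in product([g1_lo, g1_hi],
                                        *([[-g2_bd, g2_bd]] * m))]
diameter = max(np.linalg.norm(c1 - c2) for c1 in corners for c2 in corners)
```

The \((\gamma_{1}\)-range\()^{2}\) term and the \(4m\) factor in ħ arise exactly from the one \(\gamma_{1}\) side and the m sides of length \(2b_{M}u_{fM}/(b_{0}\kappa_{0})\).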
The time derivative of \(\bar{V}_{k}\) is
$$\begin{aligned} \dot{\bar{V}}_{k} =&z_{1}(z_{2}+ \alpha_{1}+\varphi_{1})+\sum_{i=2} ^{n-1}z_{i}({z}_{i+1}+\alpha_{i}+ \varphi_{i}-\dot{\alpha}_{i-1}) \\ &{}+\dot{\phi}+z_{n}\Biggl(\sum_{j=1}^{m} \vert b_{j}\vert \kappa _{j,k}(t)\hat{\tau} ^{T}v+\sum_{j=1}^{m}b_{j}{ \sigma}_{j}u_{fj,k}(t)+\varphi_{n}- \dot{ \alpha}_{n-1}-y_{d}^{(n)}\Biggr), \end{aligned}$$
(18)
where
$$\begin{aligned} \phi=\frac{1}{2l}\tilde{W}^{2}+\frac{1}{2h}\tilde{ \varepsilon}^{2}+ \frac{1}{2}\sum_{j=1}^{m} \vert b_{j}\vert \kappa _{j,k}(t)\tilde{ \tau}^{T} \Gamma_{\tau}^{-1}\tilde{\tau}. \end{aligned}$$
It follows that
$$\begin{aligned} \dot{\phi} =&-\frac{1}{l}\tilde{W}\dot{\hat{W}}- \frac{1}{h} \tilde{\varepsilon}\dot{\hat{\varepsilon}}-\sum _{j=1}^{m}\vert b_{j}\vert \kappa_{j,k}(t)\tilde{\tau}^{T}\Gamma^{-1}_{\tau} \dot{\hat{\tau}} \\ & {}+\sum_{j=1}^{m}\frac{\vert b_{j}\vert \dot{\kappa_{j,k}}(t)}{2} \tilde{\tau}^{T}\Gamma^{-1}_{\tau}\tilde{\tau}+\sum _{j=1}^{m}\vert b _{j}\vert \kappa_{j,k}(t)\tilde{\tau}^{T}\Gamma^{-1}_{\tau} \dot {\tau}. \end{aligned}$$
(19)
Define unknown functions as
$$\begin{aligned}& \begin{aligned}[b] &P_{i}\bigl(x,\hat{W}, \hat{\varepsilon},y_{d},\ldots,y_{d}^{(i-1)}\bigr) \\ &\quad =\frac{\partial\alpha_{i-1}}{\partial\hat{W}}\sum_{j=1}^{i-1} \frac{l}{2a _{j}^{2}}z_{j}^{2}-z_{i} \frac{l}{2a_{i}^{2}}\sum_{j=2}^{i}\biggl\vert z_{j}\frac{ \partial\alpha_{j-1}}{\partial\hat{W}}\biggr\vert -\frac{\partial \alpha_{i-1}}{ \partial\hat{W}}{ \wp}_{1}\hat{W} \\ &\quad\quad{} +\frac{\partial\alpha_{i-1}}{\partial\hat{\varepsilon}}\sum_{j=1} ^{i-1}hz_{j}\tanh\frac{z_{j}}{\varsigma}+h\tanh\frac{z_{i}}{\varsigma } \sum_{j=2}^{i}z_{j} \frac{\partial\alpha_{j-1}}{\partial \hat{\varepsilon}}-\frac{\partial\alpha_{i-1}}{\partial \hat{\varepsilon}}{\wp}_{2}\hat{\varepsilon}, \end{aligned} \end{aligned}$$
(20)
$$\begin{aligned}& f_{1}(x)=\varphi_{1}(x), \end{aligned}$$
(21)
$$\begin{aligned}& \begin{aligned}[b] f_{i}\bigl(x,\hat{W},\hat{\varepsilon},y_{d}, \ldots,y_{d}^{(i-1)}\bigr)&=z _{i-1}+ \varphi_{i}-\sum_{j=1}^{i-1} \frac{\partial\alpha_{i-1}}{ \partial{x_{j}}}(x_{j+1}+\varphi_{j}) \\ &\quad{}-\sum_{j=1}^{i-1} \frac{\partial\alpha_{i-1}}{\partial{y_{d}}^{(j-1)}}y_{d}^{(j)}-P _{i}\bigl(x, \hat{W},\hat{\varepsilon},y_{d},\ldots,y_{d}^{(i-1)} \bigr), \quad i= 2, \ldots,n. \end{aligned} \end{aligned}$$
(22)
From (9)-(10) and (20)-(22) we have
$$\begin{aligned} \sum_{i=2}^{n}z_{i} \biggl(P_{i}-\frac{\partial\alpha _{i-1}}{\partial \hat{W}}\dot{\hat{W}}-\frac{\partial\alpha_{i-1}}{\partial \hat{\varepsilon}}\dot{ \hat{\varepsilon}}\biggr) \leq0. \end{aligned}$$
(24)
Moreover, according to (8) and (17), we have
$$\begin{aligned} \sum_{j=1}^{m}\vert b_{j}\vert \kappa _{j,k}(t)\tau^{T}v+\sum _{j=1}^{m}b_{j} { \sigma}_{j}u_{fj,k}(t)-\alpha_{n}-y_{d}^{(n)}=0. \end{aligned}$$
(25)
It follows that
$$\begin{aligned} \dot{\bar{V}}_{k}(t)\leq\sum _{i=1}^{n}z_{i}(\alpha _{i}+f_{i})- \sum_{j=1} ^{m}\vert b_{j} \vert \kappa_{j,k}(t)\tilde{\tau }^{T}vz_{n}+\dot{ \phi}. \end{aligned}$$
(26)
Since the functions \({f}_{i}\) contain the unknown functions \(\varphi_{i}\), they cannot be implemented in practice. According to Lemma 1, for any given constant \(\varepsilon_{i}>0\), there exists a neural network \(\theta^{T}_{i}\Re_{i}\) such that
$$\begin{aligned} \vert z_{i}\vert f_{i} \leq \frac {1}{2a_{i}^{2}}z_{i}^{2}\Vert \theta_{i} \Vert ^{2} \Re_{i}^{T}\Re_{i}+ \frac{1}{2}a_{i}^{2}+\vert z_{i}\vert \varepsilon_{i}. \end{aligned}$$
(27)
Combining with the condition \(\Re_{i}^{T}\Re_{i} \leq M_{i}\) and Lemma 2, we have
$$\begin{aligned} \vert z_{i}\vert f_{i}(x) \leq \frac {1}{2a_{i}^{2}}z_{i}^{2}W+\frac{1}{2}a_{i} ^{2}+z_{i}\varepsilon\tanh\frac{z_{i}}{\varsigma}+0.2785\varsigma \varepsilon, \end{aligned}$$
(28)
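The constant 0.2785 in (28) comes from the standard inequality \(0 \leq \vert z\vert - z\tanh(z/\varsigma) \leq 0.2785\varsigma\) for any ς > 0 (Lemma 2), which lets the nondifferentiable term \(\vert z_{i}\vert \varepsilon\) be replaced by the smooth term \(z_{i}\varepsilon\tanh(z_{i}/\varsigma)\). A quick numerical check of this inequality (the grid and the value of ς are illustrative assumptions):

```python
import numpy as np

# Check 0 <= |z| - z*tanh(z/sigma) <= 0.2785*sigma over a dense grid.
sigma = 0.3
z = np.linspace(-10, 10, 200001)
gap = np.abs(z) - z * np.tanh(z / sigma)
```

The supremum of the gap is attained near \(z \approx 0.639\varsigma\) and is slightly below 0.2785ς, so the bound used in (28) holds with margin.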
where
$$\begin{aligned}& W=\max\bigl\{ \Vert \theta_{1}\Vert ^{2} \Re _{1}^{T}\Re_{1},\ldots, \Vert \theta_{n} \Vert ^{2}\Re_{n}^{T} \Re_{n}\bigr\} , \end{aligned}$$
(29)
$$\begin{aligned}& \varepsilon=\max\{\varepsilon_{1},\ldots,\varepsilon_{n} \}. \end{aligned}$$
(30)
Substituting (28) and (11) into (26), we have
$$\begin{aligned} \dot{\bar{{V}}}_{k} \leq&-\sum _{i=1}^{n}\lambda _{i}z_{i}^{2}+ \sum_{i=1}^{n}\frac{1}{2a_{i}^{2}}z_{i}^{2} \tilde{W}+\sum_{i=1}^{n} \frac{1}{2}a_{i}^{2}+\sum _{i=1}^{n}z_{i}\tilde{\varepsilon}\tanh \frac{z _{i}}{\varsigma}+\sum_{i=1}^{n}0.2785 \varsigma\varepsilon \\ & {}-\sum_{j=1}^{m}\vert b_{j}\vert \kappa_{j,k}(t)\tilde {\tau}^{T}vz _{n}+\dot{\phi}. \end{aligned}$$
(31)
By Assumption 5 and by (15) and (16) we get
$$\begin{aligned}& \Vert \dot{\tau}_{1}\Vert \leq \frac{ mb_{M}d_{1}}{b_{0}^{2}\kappa_{0}^{2}}, \end{aligned}$$
(32)
$$\begin{aligned}& \Vert \dot{\tau}_{2,j}\Vert \leq\frac {mb_{M}^{2}d_{2}+mb_{M}^{2}u_{fM}d _{1}}{b_{0}^{2}\kappa_{0}^{2}},\quad j=1,\ldots,m. \end{aligned}$$
(33)
It follows that
$$\begin{aligned} \bigl\Vert \dot{\tau}(t)\bigr\Vert \leq\frac {mb_{M}d_{1}}{b_{0}^{2}\kappa_{0}^{2}}+ \frac{mb _{M}^{2}d_{2}+mb_{M}^{2}u_{fM}d_{1}}{b_{0}^{2}\kappa_{0}^{2}}=\varpi . \end{aligned}$$
(34)
According to (9), (10), and (12), we have
$$\begin{aligned} \dot{\bar{V}}_{k} \leq&-\sum _{i=1}^{n}\lambda _{i}z_{i}^{2}- \frac{{\wp} _{1}}{2l}\tilde{W}^{2}-\frac{{\wp}_{2}}{2h}\tilde{ \varepsilon}^{2}+\frac{mb _{M}}{2}\tau_{M}^{2}+ \frac{{\wp}_{1}}{2l}W^{2}+\frac{{\wp}_{2}}{2h} \varepsilon^{2}+ \sum_{i=1}^{n}\biggl(\frac{1}{2}a_{i}^{2}+0.2785 \varsigma \varepsilon\biggr)+\varpi_{0} \\ & {}-\sum_{j=1}^{m}\frac{\vert b_{j}\vert \kappa _{j,k}(t)}{2\lambda_{\max}( \Gamma_{\tau}^{-1})} \tilde{\tau}^{T}\Gamma_{\tau}^{-1} \tilde{\tau}, \end{aligned}$$
(35)
where
$$\begin{aligned} \varpi_{0}=\frac{mb_{M}d_{1}}{2}\hbar^{2}\bigl\Vert \Gamma _{\tau}^{-1}\bigr\Vert _{F}+mb _{M}\hbar\bigl\Vert \Gamma_{\tau}^{-1}\bigr\Vert _{F}\varpi. \end{aligned}$$
(36)
Then, it follows that
$$\begin{aligned} \dot{\bar{V}}_{k}(t) \leq-A\bar{V}_{k}+B, \end{aligned}$$
(37)
where
$$\begin{aligned}& A=\min\biggl\{ 2\lambda_{i}, {\wp}_{1}, {\wp}_{2}, \frac{1}{\lambda _{\max}( \Gamma_{\tau}^{-1})}\biggr\} , \end{aligned}$$
(38)
$$\begin{aligned}& \begin{aligned}[b] B&=\frac{{\wp}_{1}}{2l}W^{2}+ \frac{{\wp}_{2}}{2h}\varepsilon^{2}+\sum_{i=1}^{n} \biggl(\frac{1}{2}a_{i}^{2}+0.2785\varsigma\varepsilon\biggr)+ \frac{mb _{M}}{2}\tau_{M}^{2}+\varpi_{0}, \\ &\quad t_{k} \leq t \leq t_{k+1}, k=0,1, \ldots. \end{aligned} \end{aligned}$$
(39)
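Inequality (37) is a standard comparison inequality: whenever \(\dot{\bar{V}}_{k} \leq -A\bar{V}_{k}+B\) with A, B > 0, the trajectory satisfies \(\bar{V}_{k}(t) \leq (\bar{V}_{k}(t_{k})-B/A)e^{-A(t-t_{k})}+B/A\) and hence decays exponentially into a residual ball of radius B/A. The following sketch illustrates this numerically with Euler integration of the worst case (equality in (37)); all numeric values are assumptions.

```python
import math

# Worst case of (37): dV/dt = -A*V + B, so V(t) <= (V0 - B/A)*exp(-A*t) + B/A.
A, B, V0 = 2.0, 0.5, 10.0
dt, T = 1e-4, 10.0

V, t = V0, 0.0
while t < T:
    V += dt * (-A * V + B)   # forward Euler step of the comparison ODE
    t += dt

envelope = (V0 - B / A) * math.exp(-A * T) + B / A  # exponential envelope
```

Forward Euler contracts at least as fast as the exact flow here (since \(1-A\,dt \leq e^{-A\,dt}\)), so the simulated V stays under the envelope and settles near B/A = 0.25.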
To establish the closed-loop system stability for all time in the presence of actuator failures or faults, we consider the overall Lyapunov function defined as
$$\begin{aligned} V(t)= \bar{V}_{k}(t), \quad t \in[t_{k},t_{k+1}), k=0,1,\ldots, \end{aligned}$$
(40)
where \(t_{0}=0\) is the initial time instant. Note that \(V(t)\) is not a continuous function: it experiences a sudden jump at each failure instant \(t_{k+1}\) (\(k=0,1,\ldots\)). The jumping size at instant \(t_{k+1}\) is computed as
$$\begin{aligned}& V\bigl(t_{k+1}^{+}\bigr)-V \bigl(t_{k+1}^{-}\bigr) \\& \quad=\sum_{j=1}^{m}\frac{\vert b_{j}\vert \kappa _{j,k+1}(t_{k+1}^{+})}{2} \tilde{\tau}^{T}\bigl(t_{k+1}^{+}\bigr) \Gamma_{\tau}^{-1}\tilde{\tau}\bigl(t_{k+1} ^{+}\bigr) \\& \quad\quad {}-\sum_{j=1}^{m} \frac{\vert b_{j}\vert \kappa _{j,k}(t_{k+1}^{-})}{2} \tilde{\tau}^{T}\bigl(t_{k+1}^{-} \bigr)\Gamma_{\tau}^{-1}\tilde{\tau}\bigl(t_{k+1} ^{-}\bigr) \\& \quad \leq\frac{mb_{M}}{2}\bigl\Vert \Gamma_{\tau}^{-1} \bigr\Vert _{F}\hbar^{2}=\Lambda. \end{aligned}$$
(41)
Let \(H(t)=e^{At}V(t)\). Then it follows that
$$\begin{aligned} \dot{H}=Ae^{At}V(t)+e^{At}\dot{V}(t)\leq Be^{At}. \end{aligned}$$
(42)
Integrating both sides of (42) over \([t_{k},t_{k+1})\), we have
$$\begin{aligned} H\bigl(t_{k+1}^{-}\bigr)\leq H \bigl(t_{k}^{+}\bigr)+ \int ^{t_{k+1}}_{t_{k}}Be^{At}\,dt. \end{aligned}$$
(43)
From (41)-(43) we have
$$\begin{aligned} H\bigl(t_{k+1}^{+}\bigr) \leq H \bigl(t_{k}^{+}\bigr)+ \int ^{t_{k+1}}_{t_{k}}Be^{At}\,dt+e ^{At_{k+1}}\Lambda. \end{aligned}$$
(44)
We denote by \(\aleph(t, T)\) the number of jumps of the overall Lyapunov function \(V(t)\) during \((t,T)\) for \(t \geq0\). Let \(T^{\flat} = \min_{k}\{t_{k+1} - t_{k}\}\), \(k = 0, 1,\ldots\) . Then it follows that
$$\begin{aligned}& \begin{aligned}[b] H\bigl(T^{-}\bigr)&=H \bigl[t^{+}_{\aleph(0,T)}\bigr]+ \int^{T}_{t_{\aleph(0,T)}}Be^{At}\,dt \\ &\leq V(0)+ \int^{T}_{0}Be^{At}\,dt+e^{AT} \sum_{k=1}^{\aleph(0,T)}e ^{A(t_{k}-T)}\Lambda, \end{aligned} \end{aligned}$$
(45)
$$\begin{aligned}& \begin{aligned} &\aleph(t_{k}, T)T^{\flat} \leq T-t_{k}, \quad k = 1, \ldots,\aleph(0, T) \\ &t_{k}-T \leq-\aleph(t_{k}, T)T^{\flat}. \end{aligned} \end{aligned}$$
(46)
Then we have
$$\begin{aligned} e^{AT}\sum_{k=1}^{\aleph(0,T)}e^{A(t_{k}-T)} \Lambda \leq& e^{AT} \sum_{k=1}^{\aleph(0,T)}e^{-A\aleph(t_{k},T)T^{\flat}} \Lambda \\ =&\frac{1-e^{-AT^{\flat}\aleph(0,T)}}{1-e^{-AT^{\flat}}}e^{AT} \Lambda. \end{aligned}$$
(47)
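The last equality in (47) is a finite geometric series. A quick check, where the convention \(\aleph(t_{k},T)=\aleph(0,T)-k\) (the jumps remaining after \(t_{k}\)) and all numeric values are illustrative assumptions consistent with (46):

```python
import math

# Geometric-sum identity behind (47):
#   sum_{k=1}^{N} exp(-A*T_flat*(N-k)) = (1 - exp(-A*T_flat*N)) / (1 - exp(-A*T_flat))
A, T_flat, N = 1.3, 0.7, 25

lhs = sum(math.exp(-A * T_flat * (N - k)) for k in range(1, N + 1))
rhs = (1 - math.exp(-A * T_flat * N)) / (1 - math.exp(-A * T_flat))
```

The closed form is what bounds the accumulated jump contribution by \(\Lambda/(1-e^{-AT^{\flat}})\) independently of the number of failures.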
It follows that
$$\begin{aligned}& V\bigl(T^{-}\bigr) \leq\biggl(V(0)- \frac{B}{A}-\frac{\Lambda }{1-e^{-AT^{\flat}}}\biggr)e ^{-AT}+\biggl( \frac{B}{A}+\frac{\Lambda}{1-e^{-AT^{\flat}}}\biggr), \quad \forall T > 0. \end{aligned}$$
(48)
Then all closed-loop signals are bounded. Note that \(\Sigma_{i=1}^{n}z_{i}^{2} \leq2V(t)\), and let \(T \rightarrow\infty\). The bound of the tracking error can be derived as
$$\begin{aligned} \lim_{t\rightarrow\infty}\bigl\Vert z(t)\bigr\Vert \leq \sqrt{\frac{2B}{A}+\frac{2 \Lambda}{1-e^{-AT^{\flat}}}}. \end{aligned}$$
(49)
□
Remark 2
Inequality (28) makes a vital contribution to the backstepping design because it establishes the relation between \(x_{i}\) and \(z_{i}\), which makes the backstepping-based design procedure viable.
Remark 3
The adaptive parameters \(\hat{W}_{i}\), ε̂, τ̂ are utilized to estimate \(W_{i}\), ε, τ, respectively, and \(\tilde{W}_{i} = W _{i}-\hat{W}_{i}\), \(\tilde{\varepsilon}=\varepsilon-\hat{\varepsilon}\), and \(\tilde{\tau}=\tau-\hat{\tau}\) denote the estimation errors. Note that the failure parameter \(\kappa_{j,k}\) is allowed to be time varying during each time interval \([t_{k}, t_{k+1})\) for \(k = 0,1,\ldots\) , and \(b_{j}\) (\(j=1,\ldots,m\)) are unknown control coefficients. Stability cannot be ensured if the effects of the intervals \([t_{k}, t_{k+1})\) and of \(b_{j}\) are not accounted for in the Lyapunov function.
Remark 4
The failure-related parameter τ contained in the Lyapunov function (14) undergoes a sudden jump at each unknown time instant \(t_{k}\); consequently, the decrease of the Lyapunov function shown in (37) is valid only on the time interval \([t_{k}, t_{k+1})\), during which the Lyapunov function \(\bar{V}_{k}(t)\) is differentiable. To establish the closed-loop system stability under actuator failures or faults, we consider the overall Lyapunov function defined in (40).