
Theory and Modern Applications

Iterative learning control for a class of discrete-time singular systems

Abstract

This paper is concerned with the iterative learning control problem for a class of discrete-time singular systems. According to the characteristics of these systems, a closed-loop PD-type learning algorithm is proposed and a convergence condition for the algorithm is established. It is shown that the algorithm guarantees that the system output converges to the desired trajectory on the whole time interval. Moreover, the presented algorithm is also suitable for discrete-time singular systems with state delay. Finally, the validity of the presented algorithm is verified by two numerical examples.

1 Introduction

Iterative learning control (ILC) is an effective control strategy for achieving perfect trajectory tracking for repetitive systems over a finite time interval (see [1, 2]). The basic idea of ILC is to improve the current tracking performance by fully utilizing past control experience. Since the iterative learning algorithm was first proposed by Arimoto et al. [3], it has attracted extensive attention in the field of control theory, and many efforts have been devoted to the development of ILC in recent years (see [4–8] and the references therein).

Singular systems differ essentially from normal systems in many aspects, since singular models preserve the structure of physical systems and can capture impulsive elements. Many practical engineering problems lead to singular system models, for example circuit systems, large-scale systems, constrained mechanical systems and robotic systems (see [9, 10]). Hitherto, many significant results based on the theory of normal systems have been successfully extended to singular systems, and the related research has been published (see [9–13] and the references therein). Meanwhile, some work has been reported on ILC for singular systems, but most of it has focused on continuous-time singular systems (see [14–17]). For instance, reference [14] analyzed the convergence of D-type and PD-type closed-loop learning algorithms for linear singular systems in the sense of the Frobenius norm. Based on the Weierstrass canonical form of singular systems, reference [15] proposed a P-type ILC algorithm for the fast subsystems with impulse. In [16], the ILC technique was applied to a class of singular systems with state delay, and the convergence of the algorithm and the possibility of state tracking were analyzed. Based on the nonsingular transformation method, a PD-type algorithm was designed in [17] to study the state tracking problem for a class of singular systems. Very recently, reference [18] applied the ILC strategy to a class of discrete singular systems and gave a detailed convergence analysis of the algorithm using the λ-norm.

On the other hand, it should be pointed out that most of the singular systems studied in the above-mentioned works rely on the assumption that the matrix \(A_{22}\) is nonsingular (see [16–18]), which implies that the systems are impulse-free (for continuous-time singular systems) or causal (for discrete-time singular systems). However, in many practical singular system models, the matrix \(A_{22}\) may be singular. Motivated by the above discussion, the ILC problem for a class of discrete-time singular systems is further considered in this paper. According to the characteristics of these systems, a closed-loop PD-type learning algorithm is proposed and a convergence condition for the algorithm is established. It is worth pointing out that the algorithm presented in this paper is able to eliminate the non-causality of discrete-time singular systems. Under the action of the algorithm, uniform convergence of the output tracking error is guaranteed with the aid of the λ-norm. Furthermore, the result is extended to discrete-time singular systems with state delay. In the end, two numerical examples are given to support the theoretical analysis.

Throughout this paper, I denotes the identity matrix with appropriate dimensions. For a given vector or matrix X, \(\Vert X \Vert \) denotes its Euclidean norm. For a discrete system, \(t \in [0,T]\) denotes the integer sequence \(t = 0,1,2,\ldots, T\) . For a function h: \([0,T] \to {R} ^{n}\) and a real number \(0<\lambda < 1\), \({\Vert h \Vert _{\lambda }}\) denotes the λ-norm defined by \({\Vert h \Vert _{\lambda }} = \sup_{t \in [0,T]} \{ {{\lambda^{t}}\Vert {h(t)} \Vert } \} \).
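For computation, the λ-norm is straightforward to evaluate. The following sketch (our own NumPy helper, not part of the paper; the convention that row t of the array holds h(t) is ours) illustrates the definition:

```python
import numpy as np

def lambda_norm(h, lam):
    """lambda-norm of a discrete signal: sup over t in [0, T] of lam**t * ||h(t)||.

    h is a (T+1) x n array whose row t is h(t); lam satisfies 0 < lam < 1.
    """
    t = np.arange(h.shape[0])
    return np.max(lam ** t * np.linalg.norm(h, axis=1))
```

For the constant signal \(h(t) \equiv (1,1)\) on \(t = 0,\ldots,4\) and \(\lambda = 0.5\), the supremum is attained at \(t=0\) and equals \(\sqrt{2}\).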

2 Problem description

Consider the following discrete-time singular system:

$$ \textstyle\begin{cases} {E{x_{k}} ( {t + 1} ) = A{x_{k}} ( t ) + B{u_{k}} ( t ) },\\ {{y_{k}}(t) = C{x_{k}} ( t ) }, \end{cases} $$
(1)

where k denotes the iteration index, \(t \in [0,T]\) denotes the time index, \(E\in {{{R}}^{n\times n}}\) is a singular matrix and \(\operatorname{rank}({E})= {q} < n\). \(x_{k}(t) \in {R}^{n}\), \({u_{k}}(t) \in {R}^{m}\), \({y_{k}}(t) \in {R}^{r}\) represent the state, control input and output of the system, respectively. A, B and C are real matrices with appropriate dimensions.

Definition 1

([9])

The system (1) is said to be regular if there exists a complex constant \({s_{0}}\) such that \(\det ({s_{0}}E - A) \ne 0\).
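Since \(\det (sE - A)\) is a polynomial in s, the pair \((E,A)\) is regular exactly when this determinant is not identically zero, so regularity can be probed by sampling a few random values of s: a nonzero determinant at any single point certifies regularity. A minimal sketch (our own helper, a numerical probe rather than a proof):

```python
import numpy as np

def is_regular(E, A, trials=10, seed=0):
    """Probe regularity of the pencil (E, A): the pair is regular iff
    det(s*E - A) is not the zero polynomial in s, tested at random points."""
    rng = np.random.default_rng(seed)
    for s in rng.uniform(-10.0, 10.0, trials):
        if abs(np.linalg.det(s * E - A)) > 1e-9:
            return True   # one nonzero value certifies regularity
    return False          # (numerically) zero at all sampled points

# The pair from Example 1 in Section 5 is regular: det(sE - A) = -(s - 1)^2.
E1 = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
A1 = np.array([[1., 0., 0.], [0., 1., 0.], [0.2, -0.3, 1.]])
```

The tolerance 1e-9 and the sampling interval are our own choices; for pencils that are nearly singular, a scale-aware threshold would be preferable.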

Before giving our ILC law, basic assumptions for the system (1) are first given as follows.

Assumption 1

For the given desired output trajectory \(y_{d} {(t)}\), there exists a desired control input \({u_{d}}(t)\) such that

$$ \textstyle\begin{cases} {E{x_{d}} ( {t + 1} ) = A{x_{d}} ( t ) + B{u_{d}} ( t ) }, \\ {{y_{d}}(t) = C{x_{d}} ( t ) }, \end{cases} $$

where \(x_{d}{(t)}\) is the desired state.

Assumption 2

The initial resetting condition holds for all iterations, i.e.,

$$ {x _{k}}(0) = {x_{d}}(0), \quad k = 0,1,2, \ldots, $$

where \({x_{d}}(0)\) is the initial value of the desired state.

Assumption 3

([9])

The system (1) is regular, controllable and observable.

Given a desired output trajectory \(y_{d}{(t)}\), the target of this paper is to design an appropriate learning algorithm and generate the control sequence \(u_{k}{(t)}\), such that the system output \(y_{k}{(t)}\) can track the desired trajectory \(y_{d}{(t)}\) as the iteration number increases.

3 Convergence analysis of the algorithm

In this paper, we adopt the following closed-loop PD-type learning algorithm:

$$ {u_{k+1}}(t) = {u_{k}}(t) + \Gamma {e_{k+1}}(t + 1)+K{e_{k+1}}(t), $$
(2)

where \({\Gamma }, K\in {{{R}}^{m\times r}}\) are the learning gain matrices, and \({e_{k}}(t) = {y_{d}}(t) - {y_{k}}(t)\) is the output tracking error at the kth iteration.

Theorem 1

Consider the system (1) satisfying Assumptions 1-3. If there exists a gain matrix \({\Gamma } \in {{R}^{{m} \times {r}}}\) such that the matrix \(E + B\Gamma C\) is nonsingular and

$$ \rho = \Vert {I - \Gamma C\tilde{B}} \Vert < 1, $$
(3)

where \(\tilde{B} ={(E + B\Gamma C)^{ - 1}}B\), then the system output \({y_{k}}(t)\) converges to the desired trajectory \({y_{d}}(t)\) on the time interval \([0,T+1]\) under the algorithm (2), i.e., \(\mathop{\lim }_{k \to \infty } {y_{k}}(t) = {y_{d}}(t), t \in [0,T+1]\).

Proof

Denote \(\Delta {x_{k}}(t) = {x_{d}}(t) - {x_{k}}(t)\), \(\Delta {u_{k}}(t) = {u_{d}}(t) - {u_{k}}(t)\). From (1), (2) and Assumption 1, we have

$$ E\Delta {x_{k}} ( {t + 1} ) = A\Delta {x_{k}} ( t ) + B\Delta {u_{k}} ( t ) $$
(4)

and

$$\begin{aligned} \Delta {u_{k}}(t) &= \Delta {u_{k - 1}}(t) - \Gamma {e_{k}}(t + 1)-K {e_{k}}(t) \\ &= \Delta {u_{k - 1}}(t) - \Gamma C\Delta {x_{k}}(t + 1)-KC \Delta {x_{k}}(t). \end{aligned}$$
(5)

Substituting (5) into (4) results in

$$ E\Delta {x_{k}} ( {t + 1} ) = (A-BKC)\Delta {x_{k}} ( t ) + B{\Delta {u_{k - 1}}(t) - B\Gamma C\Delta {x_{k}}(t + 1)}, $$

that is,

$$ ( {E + B\Gamma C} ) \Delta {x_{k}} ( {t + 1} ) = (A-BKC) \Delta {x_{k}} ( t ) + B\Delta {u_{k - 1}}(t). $$

Since the matrix \(E + B\Gamma C\) is nonsingular, we further obtain

$$ \Delta {x_{k}} ( {t + 1} ) = \tilde{A}\Delta {x_{k}} ( t ) + \tilde{B}\Delta {u_{k - 1}}(t), $$
(6)

where

$$ \tilde{A} = { ( {E + B\Gamma C} ) ^{ - 1}}(A-BKC), \qquad \tilde{B} = { ( {E + B \Gamma C} ) ^{ - 1}}B. $$

Taking the Euclidean norm on both sides of (6) gives

$$\begin{aligned} \bigl\Vert {\Delta {x_{k}}(t + 1)} \bigr\Vert \le & \Vert {\tilde{A}} \Vert \bigl\Vert {\Delta{x_{k}}(t)} \bigr\Vert +\Vert {\tilde{B}} \Vert \bigl\Vert {\Delta {u_{k - 1}}(t)} \bigr\Vert \\ =&{c_{1}} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert + {c_{2}} \bigl\Vert {\Delta {u_{k - 1}}(t)} \bigr\Vert , \end{aligned}$$
(7)

where \({c_{1}} = \Vert {\tilde{A}} \Vert , {c_{2}} = \Vert {\tilde{B}} \Vert \). Noting that \(\Vert \Delta {x_{k}}(0) \Vert = 0\) by Assumption 2, for \(t \ge 1\), we can obtain

$$ \begin{aligned} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert &\le {c_{1}} \bigl\Vert {\Delta {x_{k}}(t-1)} \bigr\Vert + {c_{2}} \bigl\Vert {\Delta {u_{k - 1}}(t-1)} \bigr\Vert \\ &\le c_{1}^{t} \bigl\Vert {\Delta{x_{k}}(0)}\bigr\Vert + \sum_{s = 0}^{t - 1} {c_{1}^{t - s - 1}{c_{2}} \bigl\Vert {\Delta{u_{k - 1}}(s)} \bigr\Vert } \\ &= \sum_{s = 0}^{t - 1} {c_{1}^{t - s - 1}{c_{2}} \bigl\Vert {\Delta {u_{k - 1}}(s)} \bigr\Vert }. \end{aligned} $$

Multiplying both sides of the above inequality by \({\lambda^{t}}\), where λ is chosen such that \(0 <\lambda {c_{1}}< 1\), yields

$$ \begin{aligned} {\lambda^{t}} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert &\le \sum_{s = 0}^{t - 1} {{{(\lambda {c_{1}})}^{t - s - 1}}\lambda {c_{2}} { \lambda^{s}} \bigl\Vert {\Delta {u_{k - 1}}(s)} \bigr\Vert } \\ &\le \sum_{s = 0}^{t - 1} {{{(\lambda {c_{1}})}^{t - s - 1}}\lambda {c_{2}}\sup _{t \in [0,T - 1]} \bigl\{ {{\lambda^{t}} \bigl\Vert {\Delta {u_{k - 1}}(t)} \bigr\Vert } \bigr\} } \\ &\le \sum_{s = 0}^{t - 1} {{{(\lambda {c_{1}})} ^{t - s - 1}}\lambda {c_{2}} {{\Vert {\Delta {u_{k - 1}}} \Vert } _{\lambda }}} \\ &\le \frac{{1 - {{(\lambda {c_{1}})}^{T}}}}{{1 - \lambda {c_{1}}}}\lambda {c_{2}} {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }}. \end{aligned} $$

Applying the definition of the λ-norm to the above expression results in

$$ {\Vert {\Delta {x_{k}}} \Vert _{\lambda }} \le \lambda {c_{3}} {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }}, $$
(8)

where

$$ {c_{3}} = \frac{{1 - {{(\lambda {c_{1}})}^{T}}}}{{1 - \lambda {c_{1}}}}{c_{2}}. $$

It follows from (5) and (6) that

$$ \begin{aligned} \Delta {u_{k}}(t) &= \Delta {u_{k - 1}}(t) - \Gamma C\Delta {x_{k}}(t + 1)-KC\Delta {x_{k}}(t) \\ &= (I - \Gamma C\tilde{B})\Delta {u_{k - 1}}(t)-(KC+ \Gamma C\tilde{A}) \Delta {x_{k}} ( t ) . \end{aligned} $$

Taking the Euclidean norm on both sides of the above equation and combining with (3) yields

$$ \bigl\Vert {\Delta {u_{k}}(t)} \bigr\Vert \le \rho \bigl\Vert { \Delta {u_{k - 1}}(t)} \bigr\Vert + {c_{4}} \bigl\Vert { \Delta {x_{k}}(t)} \bigr\Vert , $$

where \({c_{4}} = \Vert K C+{\Gamma C\tilde{A}} \Vert \). Combining with (8), we can derive

$$\begin{aligned} {\Vert {\Delta {u_{k}}} \Vert _{\lambda }} \le & \rho {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} + {c_{4}} {\Vert {\Delta {x_{k}}} \Vert _{\lambda }} \\ \le & \rho {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} + \lambda {c_{3}} {c_{4}} {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} \\ =& \hat{\rho } {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }}, \end{aligned}$$
(9)

where \(\hat{\rho }= \rho + \lambda {c_{3}}{c_{4}}\). Since \(0 \le \rho < 1\) by (3), it is possible to choose λ sufficiently small so that \(\hat{\rho }< 1\). Therefore, (9) is a contraction in \({\Vert {\Delta {u_{k}}} \Vert _{\lambda }}\), and we have

$$ \mathop{\lim }_{k \to \infty } {\Vert {\Delta {u_{k}}} \Vert _{\lambda }} = 0. $$
(10)

It follows from (8) and (10) that

$$ \mathop{\lim }_{k \to \infty } {\Vert {\Delta {x_{k}}} \Vert _{\lambda }} = 0. $$

Since \(0<\lambda <1\), we have \({\lambda }^{T}\le \lambda^{t}\le 1\) for \(t\in [0, T]\). Furthermore, we have

$$ {\lambda^{T}}\sup_{t \in [0,T]} { \bigl\Vert \Delta {{x_{k}}}(t) \bigr\Vert }\le \sup_{t \in [0,T]} \bigl\{ {{\lambda^{t}} \bigl\Vert \Delta{{x_{k}}}(t)\bigr\Vert } \bigr\} = \Vert \Delta {{x_{k}}} \Vert _{\lambda }, $$

therefore

$$ \sup_{t \in [0,T]} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert \le {\lambda^{ - T}} {\Vert {\Delta {x_{k}}} \Vert _{\lambda }}. $$

It is obvious that \(\mathop{\lim }_{k \to \infty } \sup_{t \in [0,T]} \Vert {\Delta {x_{k}}(t)} \Vert = 0\), that is,

$$ \mathop{\lim }_{k \to \infty } \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert = 0, \quad t \in [0,T]. $$

Recalling (7), we can obtain

$$ \mathop{\lim }_{k \to \infty } \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert = 0, \quad t \in [0,T+1]. $$

Therefore, we have

$$ \mathop{\lim }_{k \to \infty } {y_{k}}(t) = {y_{d}}(t), \quad t \in [0,T + 1]. $$

This completes the proof. □
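In practice, the hypotheses of Theorem 1 can be checked numerically for a candidate gain Γ. A sketch (our own helper, not from the paper; we use the induced 2-norm, while any matrix norm consistent with the Euclidean vector norm serves the same purpose):

```python
import numpy as np

def check_convergence_condition(E, B, C, Gamma):
    """Verify the hypotheses of Theorem 1 for a candidate gain Gamma:
    (i)  E + B*Gamma*C nonsingular,
    (ii) rho = ||I - Gamma*C*Btilde|| < 1, with Btilde = (E + B*Gamma*C)^{-1} B.
    Returns (condition_holds, rho)."""
    M = E + B @ Gamma @ C
    if abs(np.linalg.det(M)) < 1e-12:
        return False, np.inf              # E + B*Gamma*C (numerically) singular
    Btilde = np.linalg.solve(M, B)        # (E + B*Gamma*C)^{-1} B without an explicit inverse
    rho = np.linalg.norm(np.eye(Gamma.shape[0]) - Gamma @ C @ Btilde, 2)
    return rho < 1.0, rho
```

For the matrices of Example 2 in Section 5 (\(E = \operatorname{diag}(1,0)\), \(B=C=I\) and the gain Γ given there), this returns ρ ≈ 0.6009, matching the value reported there.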

4 Extension to systems with state delay

In this section, we further extend the result of Theorem 1 to a discrete-time singular system with state delay, which is described by

$$ \textstyle\begin{cases} {E{x_{k}} ( {t + 1} ) =A{x_{k}} ( t ) +{{D}} {x_{k}} ( {t - {\tau }} ) + B{u_{k}} ( t ) }, \\ {{y_{k}}(t) = C{x_{k}} ( t ) }, \end{cases} $$
(11)

where Ï„ is a known positive integer time delay. For \(t \in [ - \tau ,0]\), \({x_{k}}(t) = {\varphi_{k}}(t)\) and \({\varphi_{k}}(t)\) is the initial function of the system.

Basic assumptions for the system (11) are given for further analysis.

Assumption 4

For the given desired output trajectory \(y_{d} {(t)}\), there exists a desired control input \({u_{d}}(t)\) such that

$$ \textstyle\begin{cases} {E{x_{d}} ( {t + 1} ) = A{x_{d}} ( t ) + {{D}} {x_{d}} ( {t - {\tau }} ) + B{u_{d}} ( t ) }, \\ {{y_{d}}(t) = C{x_{d}} ( t ) }, \end{cases} $$

where \(x_{d}{(t)}\) is the desired state.

Assumption 5

The initial resetting condition holds for all iterations, i.e.,

$$ {\varphi_{k}}(t) = {\varphi_{d}}(t) , \quad t \in [ - \tau ,0], k = 0,1,2,\ldots , $$

where \({\varphi_{d}}(t)\) is the desired initial function.

Assumption 6

The system (11) is regular, controllable and observable.

Theorem 2

Consider the system (11) satisfying Assumptions 4-6. If there exists a gain matrix \({\Gamma } \in {{R}^{{m} \times {r}}}\) such that the matrix \(E + B\Gamma C\) is nonsingular and the convergence condition (3) holds, then the system output \({y_{k}}(t)\) converges to the desired trajectory \({y_{d}}(t)\) on the time interval \([0,T+1]\) under the algorithm (2), i.e., \(\mathop{\lim } _{k \to \infty } {y_{k}}(t) = {y_{d}}(t), t\in [0,T+1]\).

Proof

Repeating the procedure from (4) to (6), we get

$$ \Delta {x_{k}} ( {t + 1} ) = \tilde{A}\Delta {x_{k}} ( t ) + {{{\tilde{D}}}\Delta } {x_{k}} ( {t - {\tau }} ) + \tilde{B}\Delta {u_{k - 1}}(t), $$
(12)

where \({\tilde{D}} = { ( {E + B\Gamma C} ) ^{ - 1}}{D}\). Taking the Euclidean norm on both sides of (12) results in

$$\begin{aligned} \bigl\Vert {\Delta {x_{k}}(t + 1)} \bigr\Vert \le& \Vert {\tilde{A}} \Vert \bigl\Vert {\Delta{x_{k}}(t)} \bigr\Vert + { \Vert {{{\tilde{D}}}} \Vert } \bigl\Vert {\Delta {x_{k}}(t -{\tau})} \bigr\Vert + \Vert {\tilde{B}} \Vert \bigl\Vert {\Delta {u_{k - 1}}(t)}\bigr\Vert \\ =&c_{1} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert +c_{5} { \bigl\Vert {\Delta {x_{k}}(t -{\tau })} \bigr\Vert } + {c_{2}} \bigl\Vert {\Delta {u_{k - 1}}(t)} \bigr\Vert , \end{aligned}$$
(13)

where \({c_{5}} = \Vert {{{\tilde{D}}}} \Vert \). From Assumption 5, we know

$$ \bigl\Vert \Delta {x_{k}}(t) \bigr\Vert = \bigl\Vert {\varphi_{d}}(t) - {\varphi_{k}}(t) \bigr\Vert = 0, \quad t \in [ - \tau ,0]. $$
(14)

Noting that \(\Vert \Delta {x_{k}}(0) \Vert = 0\) by (14), for \(t \ge 1\), we can derive

$$ \begin{aligned} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert &\le {c_{1}} \bigl\Vert {\Delta {x_{k}}(t-1)} \bigr\Vert + {c_{5}} { \bigl\Vert {\Delta {x_{k}}(t -1- {\tau })} \bigr\Vert } + {c_{2}} \bigl\Vert {\Delta{u_{k - 1}}(t-1)} \bigr\Vert \\ &\le c_{1}^{t} \bigl\Vert {\Delta {x_{k}}(0)} \bigr\Vert + \sum_{s = 0}^{t - 1} {c_{1}^{t - s - 1} \bigl\{ {{c_{5}} { \bigl\Vert { \Delta {x_{k}}(s - {\tau })} \bigr\Vert } + {c_{2}} \bigl\Vert {\Delta {u_{k - 1}}(s)} \bigr\Vert } \bigr\} } \\ &= \sum_{s = 0}^{t - 1} {c_{1}^{t - s - 1} \bigl\{ {{c_{5}} { \bigl\Vert {\Delta {x_{k}}(s - {\tau })} \bigr\Vert } + {c_{2}} \bigl\Vert {\Delta {u_{k - 1}}(s)} \bigr\Vert } \bigr\} }. \end{aligned} $$

Multiplying both sides of the above inequality by \(\lambda^{t}\) (\(0 <\lambda {c_{1}}< 1\)) and combining with (14) gives

$$\begin{aligned} &\lambda^{t} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert \\ &\quad \le \sum_{s = 0}^{t - 1} {{{(\lambda {c_{1}})}^{t - s - 1}}\lambda \bigl\{ {{c_{5}} {{ \lambda^{{\tau }}} {\lambda^{s - {\tau }}} \bigl\Vert {\Delta {x_{k}}(s - {\tau })} \bigr\Vert + {c_{2}} { \lambda^{s}} \bigl\Vert {\Delta {u_{k - 1}}(s)} \bigr\Vert } } \bigr\} } \\ &\quad \le \sum_{s = 0}^{t - 1} {{{( \lambda{c_{1}})}^{t - s - 1}}\lambda \Bigl\{ {{c_{5}} {{ \lambda^{{\tau }}}\sup_{t \in [ - {\tau },T - {\tau }]} \bigl\{ {{ \lambda^{t}} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert } \bigr\} + {c_{2}}\sup_{t \in [0,T - 1]} \bigl\{ {{ \lambda^{t}} \bigl\Vert {\Delta {u_{k - 1}}(t)} \bigr\Vert } \bigr\} } } \Bigr\} } \\ &\quad = \sum_{s = 0}^{t - 1} {{{(\lambda {c_{1}})}^{t - s - 1}}\lambda \Bigl\{ {{c_{5}} {{ \lambda^{{\tau }}} \sup_{t \in [0,T - {\tau }]} \bigl\{ {{ \lambda^{t}} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert } \bigr\} + {c_{2}}\sup_{t \in [0,T - 1]} \bigl\{ {{ \lambda^{t}} \bigl\Vert {\Delta {u_{k - 1}}(t)} \bigr\Vert } \bigr\} } } \Bigr\} } \\ &\quad \le \sum_{s = 0}^{t - 1} {{{( \lambda {c_{1}})}^{t - s - 1}}\lambda \bigl\{ {{c_{5}} { \lambda^{ \tau }} {{\Vert {\Delta {x_{k}}} \Vert }_{\lambda }}} + {c_{2}} {{\Vert {\Delta {u_{k - 1}}} \Vert }_{\lambda }}} \bigr\} \\ &\quad \le \frac{{1 - {{(\lambda {c_{1}})}^{T}}}}{{1 - \lambda {c_{1}}}}\lambda \bigl\{ c_{5} \lambda^{\tau } {{ \Vert {\Delta{x_{k}}}\Vert }_{\lambda }} + c_{2} \Vert \Delta u_{k - 1} \Vert _{\lambda } \bigr\} . \end{aligned}$$

Applying the definition of the λ-norm, the above expression becomes

$$ {\Vert {\Delta {x_{k}}} \Vert _{\lambda }} \le \frac{{1 - {{( \lambda {c_{1}})}^{T}}}}{{1 - \lambda {c_{1}}}}\lambda \bigl\{ c_{5} \lambda^{\tau } \Vert \Delta x_{k} \Vert _{\lambda} + c_{2} \Vert \Delta{u_{k - 1}} \Vert _{\lambda} \bigr\} . $$
(15)

Choosing λ such that

$$ {\lambda^{\tau + 1}} {c_{5}}\frac{{1 - {{(\lambda {c_{1}})}^{T}}}}{ 1 - \lambda c_{1}} < 1 $$

holds, we further have

$$ {\Vert {\Delta {x_{k}}} \Vert _{\lambda }} \le \lambda {c_{6}} {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }}, $$
(16)

where

$$ {c_{6}} = \frac{{ \frac{{1 - {{(\lambda {c_{1}})}^{T}}}}{{1 - \lambda {c_{1}}}}}}{{1 - {\lambda^{\tau + 1}}{c_{5}}\frac{{1 - {{(\lambda {c_{1}})}^{T}}}}{{1 - \lambda {c_{1}}}}}}{c_{2}}. $$

It follows from (5) and (12) that

$$\begin{aligned} \begin{aligned} \Delta {u_{k}}(t) &= \Delta {u_{k - 1}}(t) - \Gamma C\Delta {x_{k}}(t + 1)-K C\Delta {x_{k}}(t) \\ &= (I -\Gamma C\tilde{B})\Delta {u_{k - 1}}(t) - (KC+\Gamma C\tilde{A})\Delta {x_{k}} ( t ) + {\Gamma C{{ \tilde{D}}}\Delta } {x_{k}} ( {t - {\tau }} ) . \end{aligned} \end{aligned}$$

Taking the Euclidean norm on both sides of the above expression and combining with (3) yield

$$ \bigl\Vert {\Delta {u_{k}}(t)} \bigr\Vert \le \rho \bigl\Vert { \Delta {u_{k - 1}}(t)} \bigr\Vert + {c_{4}} \bigl\Vert { \Delta {x_{k}}(t)} \bigr\Vert + {c_{7}} { \bigl\Vert { \Delta {x_{k}}(t - {\tau })} \bigr\Vert }, $$

where \({c_{7}} = \Vert {\Gamma C{{\tilde{D}}}} \Vert \). Combining with (14) and (16), we can derive

$$\begin{aligned} {\Vert {\Delta {u_{k}}} \Vert _{\lambda }} \le & \rho {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} + {c_{4}} {\Vert {\Delta {x_{k}}} \Vert _{\lambda }} + {c_{7}} {\sup_{t \in [0,T]} \bigl\{ {{ \lambda^{t}} \bigl\Vert {\Delta {x_{k}}(t -{\tau })} \bigr\Vert } \bigr\} } \\ =& \rho {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} + {c_{4}} {\Vert {\Delta {x_{k}}} \Vert _{\lambda }} + {c_{7}} {{\lambda^{{\tau }}} \sup_{t \in [ - {\tau },T - {\tau }]} \bigl\{ {{\lambda^{t}} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert } \bigr\} } \\ =& \rho {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} + {c_{4}} {\Vert {\Delta {x_{k}}} \Vert _{\lambda }} + {c_{7}} {{\lambda^{{\tau }}} \sup_{t \in [0,T - {\tau }]} \bigl\{ {{\lambda^{t}} \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert } \bigr\} } \\ \le & \rho {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} + \bigl( {c_{4}} + {c_{7}} {\lambda^{\tau }} \bigr){ \Vert {\Delta {x_{k}}} \Vert _{\lambda }} \\ \le & \rho {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} + \lambda {c_{6}} \bigl({c_{4}} + {c_{7}} { \lambda^{\tau }} \bigr){\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }} \\ =& \tilde{\rho } {\Vert {\Delta {u_{k - 1}}} \Vert _{\lambda }}, \end{aligned}$$
(17)

where \(\tilde{\rho }= \rho + \lambda {c_{4}}{c_{6}}+ {\lambda^{\tau +1 }}{c_{6}} {c_{7}}\). Since \(0 \le \rho < 1\) by (3), it is possible to choose λ sufficiently small so that \(\tilde{\rho }< 1\). Therefore, (17) is a contraction in \({\Vert {\Delta {u_{k}}} \Vert _{\lambda }}\), then we have

$$ \mathop{\lim }_{k \to \infty } {\Vert {\Delta {u_{k}}} \Vert _{\lambda }} = 0. $$
(18)

Similarly, it follows from (16) and (18) that

$$ \mathop{\lim }_{k \to \infty } \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert = 0, \quad t \in [0,T]. $$

Recalling (13), we can obtain

$$ \mathop{\lim }_{k \to \infty } \bigl\Vert {\Delta {x_{k}}(t)} \bigr\Vert = 0, \quad t \in [0,T+1]. $$

Therefore, we have

$$ \mathop{\lim }_{k \to \infty } {y_{k}}(t) = {y_{d}}(t), \quad t \in [0,T + 1]. $$

This completes the proof. □

Remark 1

For the discrete singular delay system (11), when the closed-loop PD-type learning algorithm (2) is applied, the delay term \({\Delta {x_{k}}(t-\tau )}\) can be bounded in terms of \({\Delta {x_{k}}(t)}\) with the aid of Assumption 5 and the λ-norm.

5 Numerical examples

In this section, two numerical examples are constructed to demonstrate the validity of the presented closed-loop PD-type learning algorithm.

Example 1

Consider the following discrete-time singular system:

$$ \textstyle\begin{cases} {E{x_{k}} ( {t + 1} ) = A{x_{k}} ( t ) + B{u_{k}} ( t ) }, \\ {{y_{k}}(t) = C{x_{k}} ( t ) }, \end{cases} $$

where \(t \in [0,14]\), and

$$\begin{aligned}& E= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1& 0& 0 \\ 0& 1& 0 \\ 0& 0& 0 \end{array}\displaystyle \right ] ,\qquad A= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1& 0& 0 \\ 0& 1& 0 \\ 0.2& -0.3& 1 \end{array}\displaystyle \right ] , \\& B= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -0.5& -2 \\ 1& 0 \\ 1& 0.2 \end{array}\displaystyle \right ] , \qquad C= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} 1& -1& 0.5 \\ 0& 0.2& 1 \end{array}\displaystyle \right ] . \end{aligned}$$

According to the algorithm (2), take the gain matrices

$$ \Gamma =\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1& 0 \\ 0& 2 \end{array}\displaystyle \right ], \qquad K =\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -1& 0 \\ 0& 0.2 \end{array}\displaystyle \right ], $$

furthermore, we can compute that

$$ \tilde{B} ={(E + B\Gamma C)^{ - 1}}B=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1.0881& -0.2720 \\ 0.1036& -0.0259 \\ 0.2383& 0.4404 \end{array}\displaystyle \right ] . $$

Then we have \(\rho = \Vert {I- \Gamma C\tilde{B}} \Vert =0.6076< 1\), i.e., the convergence condition (3) holds. Take the given desired output trajectory as

$$ {y_{d}}(t)= \left [ \textstyle\begin{array}{@{}c@{}} y^{(1)}_{d}(t) \\ y^{(2)}_{d}(t) \end{array}\displaystyle \right ] = \left [ \textstyle\begin{array}{@{}c@{}} 0.01t(5t-14) \\ 0.02t(t-10) \end{array}\displaystyle \right ] . $$

Set the initial state and the initial input

$$ {x_{k}}(0) =[0\quad 0\quad 0]^{\mbox{T}}, \qquad u_{0}(t) = [0 \quad 0]^{\mbox{T}}. $$

Figures 1 and 2 show the tracking performance of the system outputs \(y^{(1)} _{k}(t)\) and \(y^{(2)}_{k}(t)\) with respect to the desired trajectories at the 7th, 10th and 16th iterations. From Figure 3, we see that the maximum tracking errors \(e^{(1)}_{k}(t)\) and \(e^{(2)}_{k}(t)\) tend to zero as the iteration number increases under the closed-loop PD-type learning algorithm (2).

Figure 1

The tracking performance of the system output \(\pmb{y^{(1)}_{k}(t)}\) to the desired trajectory \(\pmb{y^{(1)}_{d}(t)}\) at different iterations by using the learning algorithm (2).

Figure 2

The tracking performance of the system output \(\pmb{y^{(2)}_{k}(t)}\) to the desired trajectory \(\pmb{y^{(2)}_{d}(t)}\) at different iterations by using the learning algorithm (2).

Figure 3

The maximum tracking error versus iteration number.
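Example 1 can be reproduced numerically. In the sketch below (our own code, not the authors'), the closed-loop law (2) is resolved causally by substituting it into (1), which gives \(( E + B\Gamma C ) x_{k+1}(t+1) = (A - BKC)x_{k+1}(t) + Bu_{k}(t) + B\Gamma y_{d}(t+1) + BK y_{d}(t)\); recomputed intermediate values may differ slightly in the last digits from those printed above.

```python
import numpy as np

# Matrices of Example 1.
E = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
A = np.array([[1., 0., 0.], [0., 1., 0.], [0.2, -0.3, 1.]])
B = np.array([[-0.5, -2.], [1., 0.], [1., 0.2]])
C = np.array([[1., -1., 0.5], [0., 0.2, 1.]])
G = np.array([[1., 0.], [0., 2.]])    # learning gain Gamma
K = np.array([[-1., 0.], [0., 0.2]])  # learning gain K

T = 14
t = np.arange(T + 2)                  # outputs are tracked on t = 0, ..., T+1
yd = np.vstack([0.01 * t * (5 * t - 14), 0.02 * t * (t - 10)])

M = np.linalg.inv(E + B @ G @ C)      # nonsingular by the choice of Gamma
u = np.zeros((2, T + 1))              # u_0(t) = 0
for k in range(200):                  # iterations of the learning law (2)
    x = np.zeros((3, T + 2))          # x_k(0) = 0 at every iteration
    for s in range(T + 1):
        # causal resolution of the closed-loop dynamics
        x[:, s + 1] = M @ ((A - B @ K @ C) @ x[:, s] + B @ u[:, s]
                           + B @ G @ yd[:, s + 1] + B @ K @ yd[:, s])
    e = yd - C @ x                    # tracking error of the current iteration
    u = u + G @ e[:, 1:] + K @ e[:, :-1]   # update law (2)

max_err = np.max(np.abs(e))           # tends to zero as k grows
```

The number of iterations (200) is our own choice, taken large enough that the error has settled well below plotting resolution.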

Example 2

Consider the following discrete-time singular system with state delay:

$$ \textstyle\begin{cases} E{x_{k}(t + 1) = A{x_{k}}(t) + {D} {x_{k}}(t - {\tau }) + B{u_{k}}(t)}, \\ y_{k}(t) = Cx_{k} ( t ) , \end{cases} $$
(19)

where \(t \in [0,14]\), the time delay \({\tau } = 1\), and

$$ \begin{aligned} {E} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1& 0 \\ 0& 0 \end{array}\displaystyle \right ] ,\qquad {A} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1& 0.1 \\ 0.5& 0 \end{array}\displaystyle \right ] , \qquad {D} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 0& 0.1 \\ 0.1& 0.2 \end{array}\displaystyle \right ] ,\qquad B =C= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1& 0 \\ 0& 1 \end{array}\displaystyle \right ] . \end{aligned} $$

By Lemma 1 in [19], we know that the system (19) is noncausal. According to the algorithm (2), take the gain matrices

$$ \Gamma =\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 2& 3 \\ 0& 2 \end{array}\displaystyle \right ] , \qquad K = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 0.1& 0 \\ 0& 0.1 \end{array}\displaystyle \right ], $$

we further have

$$ \tilde{B}={(E + B\Gamma C)^{-1}}B =\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 0.3333& -0.5 \\ 0& 0.5 \end{array}\displaystyle \right ] , $$

so \(\rho = \Vert {I - \Gamma C\tilde{B}} \Vert =0.6009< 1\), i.e., the convergence condition (3) is satisfied. Take the given desired output trajectory as

$$ {y_{d}}(t)= \left [ \textstyle\begin{array}{@{}c@{}} y^{(1)}_{d}(t) \\ y^{(2)}_{d}(t) \end{array}\displaystyle \right ] = \left [ \textstyle\begin{array}{@{}c@{}} \cos (0.4t) \\ \mathrm{{e}}^{0.1t}-1 \end{array}\displaystyle \right ] . $$

Set the initial state and the initial input

$$ {x_{k}}(t) = \left [ \textstyle\begin{array}{@{}c@{}} 1+t \\ 2t \end{array}\displaystyle \right ] , \quad t \in [ -1,0], \qquad u_{0}(t) =\left [ \textstyle\begin{array}{@{}c@{}} 0 \\ 0 \end{array}\displaystyle \right ] . $$

Correspondingly, the simulation results are shown in Figures 4-6. From Figures 4 and 5, it is obvious that the trajectories \(y^{(1)}_{k}(t)\) and \(y ^{(2)}_{k}(t)\) at the 11th iteration can follow the desired ones. From Figure 6, we can see that the uniform convergence of the output tracking error is guaranteed under the action of closed-loop PD-type learning algorithm (2).

Figure 4

The tracking performance of the system output \(\pmb{y^{(1)}_{k}(t)}\) to the desired trajectory \(\pmb{y^{(1)}_{d}(t)}\) at different iterations by using the learning algorithm (2).

Figure 5

The tracking performance of the system output \(\pmb{y^{(2)}_{k}(t)}\) to the desired trajectory \(\pmb{y^{(2)}_{d}(t)}\) at different iterations by using the learning algorithm (2).

Figure 6

The maximum tracking error versus iteration number.
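The delayed Example 2 can be reproduced in the same way; the only change to the causal resolution is the extra term \(Dx_{k+1}(t-\tau )\) on the right-hand side. Again this is our own sketch, not the authors' code:

```python
import numpy as np

# Matrices of Example 2 (state delay tau = 1).
E = np.array([[1., 0.], [0., 0.]])
A = np.array([[1., 0.1], [0.5, 0.]])
D = np.array([[0., 0.1], [0.1, 0.2]])
B = np.eye(2)
C = np.eye(2)
G = np.array([[2., 3.], [0., 2.]])    # learning gain Gamma
K = 0.1 * np.eye(2)                   # learning gain K

T, tau = 14, 1
t = np.arange(T + 2)
yd = np.vstack([np.cos(0.4 * t), np.exp(0.1 * t) - 1.0])

M = np.linalg.inv(E + B @ G @ C)
phi = lambda s: np.array([1.0 + s, 2.0 * s])  # initial function on [-tau, 0]
u = np.zeros((2, T + 1))
for k in range(200):
    x = np.zeros((2, T + 2 + tau))    # column i holds x(i - tau)
    for s in range(tau + 1):
        x[:, s] = phi(s - tau)        # x(-1) and x(0), reset every iteration
    for s in range(T + 1):
        i = s + tau                   # column index of x(s)
        x[:, i + 1] = M @ ((A - B @ K @ C) @ x[:, i] + D @ x[:, i - tau]
                           + B @ u[:, s] + B @ G @ yd[:, s + 1]
                           + B @ K @ yd[:, s])
    e = yd - C @ x[:, tau:]           # error on t = 0, ..., T+1
    u = u + G @ e[:, 1:] + K @ e[:, :-1]   # update law (2)

max_err = np.max(np.abs(e))           # tends to zero as k grows
```

Note that the noncausal second row of (19) is handled automatically here: inverting \(E + B\Gamma C\) is exactly the mechanism by which the algorithm eliminates the non-causality.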

6 Conclusion

In this paper, the problem of iterative learning control is investigated for a class of discrete-time singular systems. A closed-loop PD-type learning algorithm is adopted for such systems, and a convergence condition for the algorithm is established. We show that the algorithm ensures that the output tracking error converges to zero on the whole time interval. The result is further extended to discrete-time singular systems with state delay. Finally, two numerical examples are constructed to illustrate the effectiveness of the presented algorithm.

References

  1. Bien, Z, Xu, JX: Iterative Learning Control: Analysis, Design, Integration and Applications. Kluwer Academic Publishers, Dordrecht (1998)

  2. Xu, JX, Tan, Y: Linear and Nonlinear Iterative Learning Control. Springer, Berlin (2003)

  3. Arimoto, S, Kawamura, S, Miyazaki, F: Bettering operation of robots by learning. J. Robot. Syst. 1(2), 123-140 (1984)

  4. Sun, M, Wang, D: Initial shift issues on discrete-time iterative learning control with system relative degree. IEEE Trans. Autom. Control 48(1), 144-148 (2003)

  5. Moore, KL, Chen, Y, Bahl, V: Monotonically convergent iterative learning control for linear discrete-time systems. Automatica 41(9), 1529-1537 (2005)

  6. Hou, Z, Xu, JX, Zhong, H: Freeway traffic control using iterative learning control-based ramp metering and speed signaling. IEEE Trans. Veh. Technol. 56(2), 466-477 (2007)

  7. Zhu, Q: Iterative learning control design for linear discrete-time systems with multiple high-order internal models. Automatica 62, 65-76 (2015)

  8. Meng, D, Moore, KL: Robust iterative learning control for nonrepetitive uncertain systems. IEEE Trans. Autom. Control 62(2), 907-913 (2017)

  9. Dai, L: Singular Control Systems. Springer, New York (1989)

  10. Duan, GR: Analysis and Design of Descriptor Linear Systems. Springer, New York (2010)

  11. Xu, S, Lam, J, Zou, Y, Li, J: Robust admissibility of time-varying singular systems with commensurate time delays. Automatica 45(11), 2714-2717 (2009)

  12. Wu, L, Shi, P, Gao, H: State estimation and sliding mode control of Markovian jump singular systems. IEEE Trans. Autom. Control 55(5), 1213-1219 (2010)

  13. Zheng, G, Bejarano, FJ: Observer design for linear singular time-delay systems. Automatica 80(6), 1-9 (2017)

  14. Piao, FX, Zhang, QL: Iterative learning control for linear singular systems. Control Decis. 22(3), 349-356 (2007)

  15. Piao, FX, Zhang, QL, Wang, ZF: Iterative learning control for a class of singular systems. Acta Autom. Sin. 33(6), 658-659 (2007)

  16. Xie, SL, Xie, ZD, Liu, YQ: Iterative learning control algorithm for state tracking of singular systems with delay. Syst. Eng. Electron. 21(5), 10-16 (1999)

  17. Tian, SP, Zhou, XJ: State tracking algorithm for a class of singular ILC systems. J. Syst. Sci. Math. Sci. 32(6), 731-738 (2012)

  18. Tian, S, Liu, Q, Dai, X, Zhang, J: A PD-type iterative learning control algorithm for singular discrete systems. Adv. Differ. Equ. 2016, Article ID 321 (2016)

  19. Liao, F, Cao, M, Hu, Z, An, P: Design of an optimal preview controller for linear discrete-time causal descriptor systems. Int. J. Control 85(10), 1616-1624 (2012)


Acknowledgements

The authors would like to express their gratitude to the anonymous reviewers for their valuable suggestions that have improved the quality of this paper. This work was supported by the National Natural Science Foundation of China (Nos. 61374104, 61773170) and the Natural Science Foundation of Guangdong Province of China (No. 2016A030313505).

Author information

Contributions

All the authors contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Senping Tian.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Gu, P., Tian, S. & Liu, Q. Iterative learning control for a class of discrete-time singular systems. Adv Differ Equ 2018, 13 (2018). https://doi.org/10.1186/s13662-018-1471-8
