
Theory and Modern Applications

Networked iterative learning control for discrete-time systems with stochastic packet dropouts in input and output channels

Abstract

The paper develops a derivative-type (D-type) networked iterative learning control (NILC) scheme for repetitive discrete-time systems with packet dropouts occurring stochastically in the input and output communication channels. The scheme generates the sequential recursive-mode control inputs by mending each dropped instant-wise output with the synchronous desired output, while it drives the plant by refreshing each dropped instant-wise control input with the consensus-instant control input used at the previous iteration. By adopting statistical techniques, the convergence of the developed NILC scheme is derived for linear and nonlinear systems, respectively. The derivations show that under certain conditions the mathematical expectations of the stochastic tracking errors converge to zero in the sense of the 1-norm. Numerical simulations exhibit the effectiveness and validity of the scheme.

1 Introduction

In biology, psychology, sociology, as well as in philosophy, the notion of ‘learning’ has been acknowledged as one of the intelligent capabilities that enable an individual to obtain food and fit the environment so as to survive and evolve persistently. It denotes a process by which an intelligent agent acquires knowledge or experience from its perception and cognition of the environment and then acts on the environment so as to improve its behavior performance the next time. Benefiting from advancing computer technology, learning algorithms have been embedded into the control programming of robotic manipulators to track desired trajectories. The pioneering contribution is the iterative learning control (ILC) invented in the 1980s, whose scheme utilizes the historical tracking discrepancy to modify the control command so that the upgraded command may drive the repetitive system to track a predetermined desired trajectory [1]. Surveying the existing investigations, ILC has been acknowledged as one of the most effective intelligent control strategies for a repetitive system operated over a fixed time interval, owing to its modest requirement for system information and its guarantee of precise tracking [2–8].

Along with the development of internet services, some efficacious control schemes can be networked for higher efficiency and lower cost, which forms networked control systems (NCSs). However, constrained by the physical features of wired or wireless communication devices, such as limited bandwidth or temporal oscillation of the network, embedding the communication network into the traditional control loop may incur communication delays and packet dropouts, which deteriorate the control performance [9–13]. In terms of communication delays, a usual approach is to replace the delayed data with the data captured at the last sampling instant when the delay is within one sampling step length [9–11]. In treating packet dropout, the method is to replace the dropped data with the latest captured data [12, 13]. It has been shown that the aforementioned handling methods work satisfactorily under the assumption that the probabilities of communication delay and packet dropout are constrained appropriately.

Inspired by the handling methods for NCSs, investigations have emerged that embed the network into the conventional ILC system while addressing communication delays and/or packet dropouts. In detail, a D-type NILC strategy has been considered for a class of linear time-invariant (LTI) multiple-input-multiple-output (MIMO) systems, where both the packet dropout and the communication delay of the system output are considered [14]. Reference [15] has addressed a proportional-type (P-type) NILC for a class of nonlinear systems with random packet losses happening in both the input and the output communication channels, where the term ‘packet losses’ in fact refers to communication delays. The handling methods for the delayed data in [14, 15] substitute the one-step-delayed data with the data captured at the last sampling instant, exactly as in the conventional NCSs [9–11]. As this replacement mechanism for communication-delayed data is of one-step-ahead mode, it to some extent does not match the ILC scheme, which is an exact time point-to-point mapping along the iteration direction. As shown in [14, 15], the tracking error is asymptotically upper-bounded but nonzero when communication delays occur. In addition, reference [16] has developed a P-type NILC scheme for a class of nonlinear systems with stochastic delays occurring in both the system output and the control input communication channels, where the delayed data is replaced with the synchronous data of the previous iteration. It has been shown that the proposed NILC scheme can drive the system to track the desired trajectory precisely as the iteration goes on.

Regarding communication data dropouts, the paper [17] has proposed a D-type NILC scheme for a class of LTI MIMO systems with packet dropout in the output channel and has deduced the convergence by a Kalman filtering approach. Further work [18, 19] has considered a D-type NILC algorithm for a general case in which only part of the system output data stochastically drops while the rest is successfully transmitted; the learning gain is induced by minimizing the trace of the input error covariance matrix or is assigned in the sense of mean square. Besides, the literature [20, 21] has presented D-type NILC schemes for a class of discrete-time systems with packet dropout occurring in the output channel; in particular, [20] has analyzed the convergence on the basis of exponential stability for asynchronous dynamical systems, while [21] has derived the learning performance based on a 2-D model. Further relevant work [22] has adopted an H-infinity measurement to assess the tracking performance of NILC schemes for systems with packet dropout occurring in the output channel. Recently, [23] has developed a D-type NILC algorithm for a class of single-input-single-output (SISO) systems with the system output packet dropout modeled as a 0-1 Bernoulli-type Markov chain along the iteration axis. Under the assumption that for a fixed sampling instant the number of successive packet losses is less than a constant, the learning gain has been designed as an iteration-decreasing sequence and the convergence has been deduced by stochastic approximation and optimization techniques. Further work [24] has considered the NILC design for nonlinear systems with unknown control direction and system output packet dropouts. It is recalled that the handling strategy for dropped data proposed in [17–24] is equivalent to replacing the dropped data with the synchronous desired output signal.
Meanwhile, [25] has developed an NILC scheme by replacing the dropped output with the successfully captured latest synchronous output.

It is observed, however, that the literature [17–25] only considers the packet dropout occurring in the output communication channel. As a matter of fact, the packet dropout occurs not only in the output communication channel but possibly also in the input communication channel. Under this circumstance, the synchronous desired-signal replacement in the existing literature [17–24] is hardly adoptable for the input dropout, as the desired input is unavailable but pursued. Nevertheless, it is worth recalling that the learning capability of the ILC principally benefits from the time point-to-point compensation for the control input along the iteration direction rather than the time axis. Thus, replacing the dropped data with the captured latest synchronous data would be feasible for dealing with the dropped input. This motivates the paper.

This paper develops a D-type NILC strategy for discrete-time systems with both stochastic input and output packet dropouts. The strategy mends the dropped instant-wise output with the synchronous desired output, while it refreshes the dropped instant-wise input with the consensus-instant input used at the previous iteration. By means of statistical techniques, the convergence of the developed NILC scheme is derived for linear and nonlinear systems, respectively, which shows that under certain conditions the mathematical expectations of the stochastic tracking errors converge to zero in the sense of the 1-norm.

The rest of the paper is organized as follows. In Section 2, the D-type NILC scheme is formulated and some notations are presented. Section 3 analyzes the convergence of the proposed NILC scheme for linear systems, and Section 4 addresses the convergence characteristic of the proposed NILC scheme imposed on a kind of affine nonlinear systems. The effectiveness and validity are numerically simulated in Section 5, and Section 6 concludes the paper.

2 NILC algorithm and notations

Let \((X,F,P)\) be a probability space and \(p \in [0,1]\) be a constant, where \(X = \{ 0,1\}\) is the sample space, \(F = \{ \emptyset,\{ 0\},\{ 1\},\{ 0,1\} \}\) is the set of events, and P is a probability measure on F satisfying \(P(\emptyset ) = 0\), \(P(\{ 0\} ) = p\), \(P(\{ 1\} ) = 1 - p\) and \(P(\{ 0,1\} ) = 1\), respectively. A stochastic variable ξ is said to be subject to the 0-1 Bernoulli distribution if ξ is defined on \((X,F,P)\) and satisfies \(\xi (0) = 0\) and \(\xi (1) = 1\). Denote by \(E\{ \xi \}\) the mathematical expectation of the stochastic variable ξ; then \(E\{ \xi \} = P(\xi = 1) = 1 - p\). Let \(x = (x_{1}, \ldots,x_{n})^{ \top}\) and \(y = (y_{1}, \ldots,y_{n})^{ \top} \in R^{n}\) be two n-dimensional real vectors. The partial order relation is defined as \(x \prec y\) if and only if \(x_{i} \le y_{i}\) for all \(i = 1,2, \ldots,n\). Let \(H = (h_{ij})_{m \times n} \in R^{m \times n}\) be a real matrix. Denote \(| x | = (| x_{1} |,| x_{2} |, \ldots,| x_{n} |)^{ \top}\), \(| H | = (| h_{ij} |)_{m \times n}\), \(\|x \|_{1} = \sum_{i = 1}^{n} | x_{i} |\) and \(\| H \|_{1} = \max_{1 \le j \le n}\sum_{i = 1}^{m} | h_{ij} |\).
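As a quick numeric sanity check of the notation above, the following sketch (with arbitrary illustrative values of p, x and H) verifies that the empirical mean of a 0-1 Bernoulli variable with dropout probability p approaches \(1 - p\), and evaluates the vector and matrix 1-norms exactly as defined:

```python
import numpy as np

rng = np.random.default_rng(0)

# 0-1 Bernoulli variable with dropout probability p: P(xi = 0) = p.
p = 0.3
samples = rng.random(200_000) >= p          # equals 1 with probability 1 - p
empirical_mean = samples.mean()             # should approach E{xi} = 1 - p

# 1-norms as defined above: vector -> sum of absolute entries,
# matrix -> maximum absolute column sum.
def vec_norm1(x):
    return np.abs(x).sum()

def mat_norm1(H):
    return np.abs(H).sum(axis=0).max()

x = np.array([1.0, -2.0, 3.0])
H = np.array([[1.0, -4.0],
              [-2.0, 1.0]])
```

Here `vec_norm1(x)` gives 6 and `mat_norm1(H)` gives 5 (the second column sums to \(|-4| + |1| = 5\)).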

Consider a class of repetitive discrete-time single-input-single-output (SISO) systems described as follows:

$$ \left \{ \textstyle\begin{array}{l} x_{k}(t + 1) = f(x_{k}(t),u_{k}(t)), \quad t \in S^{ -}, \\ y_{k}(t) = g(x_{k}(t),u_{k}(t)), \quad t \in S, \end{array}\displaystyle \right . $$
(1)

where the subscript \(k = 1,2, \ldots \) denotes the iteration index, t refers to the discrete-time variable with \(S^{ -} = \{ 0,1,2, \ldots,N - 1\}\) and \(S = \{ 0,1,2, \ldots,N\}\). \(x_{k}(t) \in R^{n}\), \(u_{k}(t) \in R\) and \(y_{k}(t) \in R\) are n-dimensional state, scalar input and scalar output at the kth iteration, respectively. \(f( \cdot, \cdot )\) and \(g( \cdot, \cdot )\) are functions of the state and input variables.

In the system (1), the control input \(u_{k}(t)\) is generated by an ILC update law and transmitted from the ILC unit to the actuator via the input communication channel for driving the system, while the system output \(y_{k}(t)\) is simultaneously transferred through the output communication channel from the sensor to the ILC unit for data updating; this mode is regarded as a networked iterative learning control paradigm, abbreviated as NILC. The diagram of the NILC is illustrated in Figure 1. In the schematic diagram, \(\tilde{u}_{k}(t)\) denotes the signal transmitted from the ILC unit to the actuator via the input net channel; its stochastic dropout is regarded as a random on/off switch. \(u_{k}(t)\) refers to the control command of the actuator for driving the system, which is composed of \(\tilde{u}_{k}(t)\) and \(u_{k - 1}(t)\) in a switch mode. In detail, when the ILC signal \(\tilde{u}_{k}(t)\) at instant t is successfully captured by the actuator, the signal \(\tilde{u}_{k}(t)\) is directly taken as \(u_{k}(t)\) for driving the system, while when the ILC signal \(\tilde{u}_{k}(t)\) at instant t is dropped, the actuator borrows the input data \(u_{k - 1}(t)\) used at the previous iteration for driving the system.

Figure 1. Schematic diagram of NILC.

Mathematically, the control input \(u_{k}(t)\) of the actuator is represented as follows:

$$ \begin{aligned} &u_{1}(t) = \tilde{u}_{1}(t),\quad t \in S^{ -}, \mbox{given as a test signal}, \\ &u_{k}(t) = \omega_{k,t}\tilde{u}_{k}(t) + [1 - \omega_{k,t}]u_{k - 1}(t), \quad t \in S^{ -}, k = 2,3, \ldots, \end{aligned} $$
(2)

where for all \(t \in S^{ -}\) and \(k = 2,3, \ldots \) , \(\omega_{k,t}\) is a stochastic variable subject to 0-1 Bernoulli distribution. Here, \(\omega_{k,t} = 1\) means that the signal \(\tilde{u}_{k}(t)\) is successfully transmitted while \(\omega_{k,t} = 0\) marks that the signal \(\tilde{u}_{k}(t)\) is dropped.

Analogously, \(y_{k}(t)\) refers to the system output, which is transmitted to the ILC unit for data updating through the output channel; its stochastic dropout is considered as a random off/on switch. Meanwhile, \(\tilde{y}_{k}(t)\) is the candidate signal for the ILC updating, which is either the system output \(y_{k}(t)\) or the desired signal \(y_{d}(t)\) depending on the success of the data communication. Namely, when the system output \(y_{k}(t)\) is successfully transferred to the ILC unit, it is adopted for data updating, while when the system output \(y_{k}(t)\) is dropped, the ILC unit utilizes the saved desired output signal for the new command generation. Thus, the signal \(\tilde{y}_{k}(t)\) is formulated as follows:

$$ \tilde{y}_{k}(t) = \alpha_{k,t}y_{k}(t) + [1 - \alpha_{k,t}]y_{d}(t),\quad t \in S^{ +}, k = 1,2, \ldots, $$
(3)

where \(S^{ +} = \{ 1,2, \ldots,N\}\), \(y_{d}(t)\) is the desired output, and for all \(t \in S^{ +}\) and \(k = 1,2, \ldots \) , \(\alpha_{k,t}\) is a stochastic variable subject to 0-1 Bernoulli distribution.
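The two switches (2) and (3) translate directly into code. The following minimal sketch, with hypothetical signal values, shows the per-instant replacement on lifted arrays (entry t of `omega`/`alpha` equal to 1 means the packet at instant t arrived):

```python
import numpy as np

def actuator_input(u_tilde_k, u_prev, omega_k):
    # Input-channel switch (2): keep the transmitted command where
    # omega_{k,t} = 1, otherwise reuse the input applied at iteration k-1.
    return omega_k * u_tilde_k + (1 - omega_k) * u_prev

def ilc_feedback(y_k, y_d, alpha_k):
    # Output-channel switch (3): keep the measured output where
    # alpha_{k,t} = 1, otherwise substitute the stored desired output.
    return alpha_k * y_k + (1 - alpha_k) * y_d

u_tilde = np.array([1.0, 2.0, 3.0])
u_prev  = np.array([0.5, 0.5, 0.5])
omega   = np.array([1, 0, 1])            # the input packet at t = 1 is dropped
u_applied = actuator_input(u_tilde, u_prev, omega)

y_meas = np.array([0.9, 1.8, 3.1])
y_des  = np.array([1.0, 2.0, 3.0])
alpha  = np.array([0, 1, 1])             # the output packet at t = 0 is dropped
y_feedback = ilc_feedback(y_meas, y_des, alpha)
```

With these values, `u_applied` is `[1.0, 0.5, 3.0]` (the dropped middle command falls back to the previous iteration) and `y_feedback` is `[1.0, 1.8, 3.1]` (the dropped first output is replaced by the desired value).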

Remark 1

As shown in (2) and (3), the handling methods for packet dropouts are different from those in [11, 12]. The handling methods for packet dropouts in [11, 12] can be described as follows:

$$\left \{ \textstyle\begin{array}{l} u_{k}(t) = \omega_{k,t}\tilde{u}_{k}(t) + [1 - \omega_{k,t}]u_{k}(t - 1), \\ \tilde{y}_{k}(t) = \alpha_{k,t}y_{k}(t) + [1 - \alpha_{k,t}]\tilde{y}_{k}(t - 1). \end{array}\displaystyle \right . $$

Meanwhile, the replacement mechanism in (2) is also different from that in (3). This benefits from the characteristic of the ILC, and it is expected that the replacement algorithms (2) and (3) show better performance.

It is noted that in the concerned NILC profile of Figure 1, the status of a communicated data packet is either dropped or successfully captured, which is modeled as a 0-1 Bernoulli stochastic variable. It is well known that the occurrences of data packet dropout at two different iterations are independent of each other. Thus, the following assumption is made.

(A1):

Assume that the stochastic variable \(\omega_{k,t}\) is independent of the variable \(\omega_{l,s}\) for all \(k \ne l\), \(s,t \in S^{ -}\). Meanwhile, assume that the stochastic variable \(\alpha_{k,t}\) is independent of the variable \(\alpha_{l,s}\) for all \(k \ne l\), \(s,t \in S^{ +}\). Besides, assume that \(\alpha_{k,t}\) is independent of \(\omega_{l,s}\) for all \(k = 1,2, \ldots\) , \(l = 2,3, \ldots\) , \(t \in S^{ +}\) and \(s \in S^{ -}\).

Moreover, for simplifying the analysis, the following assumption is introduced.

(A2):

Assume that the probabilities of packet dropout in the input and output channels are ω̄ and ᾱ, respectively; mathematically,

$$\begin{aligned}& P\{ \omega_{k,t} = 0\} = \bar{\omega},\quad 0 \le \bar{\omega} < 1, \mbox{for } t \in S^{ -}, k = 2,3, \ldots, \\& P\{ \alpha_{k,t} = 0\} = \bar{\alpha},\quad 0 \le \bar{\alpha} < 1, \mbox{for } t \in S^{ +}, k = 1,2, \ldots. \end{aligned}$$

Since for given k, t, \(\omega_{k,t}\) and \(\alpha_{k,t}\) are stochastic variables subject to 0-1 Bernoulli distribution, it is easy to calculate the expectations of those stochastic variables as follows:

$$\begin{aligned}& E\{ \omega_{k,t}\} = P\{ \omega_{k,t} = 1\} = 1 - \bar{ \omega},\quad 0 \le \bar{\omega} < 1, \mbox{for } t \in S^{ -}, k = 2,3, \ldots, \\& E\{ \alpha_{k,t}\} = P\{ \alpha_{k,t} = 1\} = 1 - \bar{ \alpha},\quad 0 \le \bar{\alpha} < 1, \mbox{for } t \in S^{ +}, k = 1,2, \ldots. \end{aligned}$$

Based on the formulations (2) and (3), a derivative-type (D-type) NILC updating law is constructed in the form of

$$ \tilde{u}_{k + 1}(t) = \tilde{u}_{k}(t) + \Gamma \delta \tilde{y}_{k}(t + 1), \quad t \in S^{ -}, k = 1,2, \ldots, $$
(4)

where \(\delta \tilde{y}_{k}(t + 1) = y_{d}(t + 1) - \tilde{y}_{k}(t + 1)\) and Γ denotes the learning gain.
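The one-step-ahead alignment in (4) — the input at instant t is corrected by the output error at instant t + 1 — is easy to get wrong in code. A minimal sketch with illustrative values (gain \(\Gamma = 0.5\), \(N = 4\)) is:

```python
import numpy as np

GAMMA = 0.5                      # illustrative learning gain
N = 4

# Outputs are stored for t = 0..N; inputs for t = 0..N-1.
y_d     = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_tilde = np.array([0.0, 0.8, 1.9, 2.5, 4.2])
u_tilde_k = np.zeros(N)

# D-type law (4): u~_{k+1}(t) = u~_k(t) + Gamma * (y_d(t+1) - y~_k(t+1)).
delta_y_tilde = y_d[1:] - y_tilde[1:]        # errors at instants 1..N
u_tilde_next = u_tilde_k + GAMMA * delta_y_tilde
```

The slicing `[1:]` implements the shift: the correction applied to \(u(t)\) uses the error at \(t + 1\), so the error at \(t = 0\) is never used, matching the index sets \(S^{-}\) and \(S^{+}\).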

In order to analyze the convergent characteristics of the proposed NILC scheme (4) with (2) and (3), the lifting technique is used and a set of denotations are introduced as follows:

$$\begin{aligned}& u_{k} = \bigl[u_{k}(0),u_{k}(1), \ldots,u_{k}(N - 1)\bigr]^{ \top} \in R^{N},\qquad \tilde{u}_{k} = \bigl[\tilde{u}_{k}(0),\tilde{u}_{k}(1), \ldots,\tilde{u}_{k}(N - 1)\bigr]^{ \top} \in R^{N}, \\& u_{d} = \bigl[u_{d}(0),u_{d}(1), \ldots,u_{d}(N - 1)\bigr]^{ \top} \in R^{N},\qquad y_{k} = \bigl[y_{k}(1),y_{k}(2), \ldots,y_{k}(N)\bigr]^{ \top} \in R^{N}, \\& \tilde{y}_{k} = \bigl[\tilde{y}_{k}(1), \tilde{y}_{k}(2), \ldots,\tilde{y}_{k}(N) \bigr]^{ \top} \in R^{N},\qquad y_{d} = \bigl[y_{d}(1),y_{d}(2), \ldots,y_{d}(N) \bigr]^{ \top} \in R^{N}, \\& \delta y_{k} = y_{d} - y_{k},\qquad \delta \tilde{y}_{k} = y_{d} - \tilde{y}_{k},\qquad \delta u_{k} = u_{d} - u_{k},\qquad \delta \tilde{u}_{k} = u_{d} - \tilde{u}_{k}, \\& \Omega_{k} = \operatorname{diag}(\omega_{k,0}, \omega_{k,1}, \ldots,\omega_{k,N - 1}) \in R^{N \times N},\qquad \Omega = \operatorname{diag}(\bar{\omega},\bar{\omega}, \ldots,\bar{\omega} ) \in R^{N \times N}, \\& \Lambda_{k} = \operatorname{diag}(\alpha_{k,1}, \alpha_{k,2}, \ldots,\alpha_{k,N}) \in R^{N \times N},\qquad \Lambda = \operatorname{diag}(\bar{\alpha},\bar{\alpha}, \ldots,\bar{\alpha} ) \in R^{N \times N}. \end{aligned}$$

Thus, equations (2) and (3) are, respectively, lifted as

$$ \begin{aligned} &u_{1} = \tilde{u}_{1}, \\ &u_{k} = \Omega_{k}\tilde{u}_{k} + (I - \Omega_{k})u_{k - 1}, \quad k = 2,3, \ldots, \end{aligned} $$
(5)

and

$$ \tilde{y}_{k} = \Lambda_{k}y_{k} + (I - \Lambda_{k})y_{d},\quad k = 1,2, \ldots, $$
(6)

where I is an identity matrix with appropriate dimension.

Moreover, the D-type NILC update law (4) is lifted as

$$ \tilde{u}_{k + 1} = \tilde{u}_{k} + \Gamma \delta \tilde{y}_{k}. $$
(7)

The following lemmas are useful in this paper.

Lemma 1

Let \(\{ e_{k}\}_{k = 1}^{\infty}\), \(\{ \sigma_{k}\}_{k = 1}^{\infty}\) and \(\{ \varphi_{k}\}_{k = 1}^{\infty}\) be nonnegative sequences, which satisfy \(e_{k + 1} \le \sum_{i = 1}^{k} \sigma_{i}e_{k - i + 1} + \varphi_{k}\), \(\sigma = \sum_{i = 1}^{\infty} \sigma_{i} < 1\) and \(\lim_{k \to \infty} \varphi_{k} = 0\). Then \(\lim_{k \to \infty} e_{k} = 0\).

Proof

First, we prove that the nonnegative sequence \(\{ e_{k}\}_{k = 1}^{\infty}\) is bounded. Since the sequence \(\{ \varphi_{k}\}_{k = 1}^{\infty}\) is nonnegative and satisfies \(\lim_{k \to \infty} \varphi_{k} = 0\) and \(\sigma = \sum_{i = 1}^{\infty} \sigma_{i} < 1\), there exists a positive integer \(K_{1}\) such that \(\varphi_{k} + \sigma < 1\) for all \(k \ge K_{1}\). Let \(C = \max \{ e_{1},e_{2}, \ldots,e_{K_{1}},1\}\). We claim that \(e_{k} \le C\) for all \(k \ge K_{1} + 1\), i.e., that the nonnegative sequence \(\{ e_{k}\}_{k = 1}^{\infty}\) is bounded. The claim is proved by induction.

For \(k = K_{1} + 1\), a direct computation shows that

$$e_{K_{1} + 1} \le \sum_{i = 1}^{K_{1}} \sigma_{i}e_{K_{1} - i + 1} + \varphi_{K_{1}} \times 1 \le (\sigma + \varphi_{K_{1}})C \le C. $$

Now assume that for \(K_{1} + 1 \le k \le K\), \(e_{k} \le C\).

Next we are to show that it is true for \(k = K + 1\). By \(e_{K_{1} + 1} \le C\) and the induction hypothesis, a direct calculation shows that

$$\begin{aligned} e_{K + 1} &\le \sum_{i = 1}^{K} \sigma_{i}e_{K - i + 1} + \varphi_{K} \times 1 \le (\sigma + \varphi_{K})\max \{ e_{1}, \ldots,e_{K_{1}},e_{K_{1} + 1}, \ldots,e_{K},1\} \\ &\le \max \{ e_{1}, \ldots,e_{K_{1}},e_{K_{1} + 1}, \ldots,e_{K},1\} \le C. \end{aligned}$$

Since \(\sigma = \sum_{i = 1}^{\infty} \sigma_{i} < 1\) and \(\lim_{k \to \infty} \varphi_{k} = 0\), for any \(\varepsilon > 0\) there exists a positive integer \(K_{\varepsilon} (K_{\varepsilon} \ge K_{1})\) such that

$$\sum_{j = 1}^{\infty} \sigma_{K_{\varepsilon} + j} < \frac{1 - \sigma}{ C}\frac{\varepsilon}{2}\quad \mbox{and}\quad \varphi_{K_{\varepsilon} + i} < \frac{\varepsilon}{2}(1 - \sigma )\quad \mbox{for all } i = 1,2, \ldots. $$

For \(k \ge K_{\varepsilon} + 1\), we have

$$\begin{aligned} e_{k + 1} \le& \sigma_{1}e_{k} + \sigma_{2}e_{k - 1} + \cdots + \sigma_{K_{\varepsilon}} e_{k - K_{\varepsilon} + 1} + \sigma_{K_{\varepsilon} + 1}e_{k - K_{\varepsilon}} + \cdots + \sigma_{k}e_{1} + \varphi_{k} \\ \le& \sigma_{1}e_{k} + \sigma_{2}e_{k - 1} + \cdots + \sigma_{K_{\varepsilon}} e_{k - K_{\varepsilon} + 1} + (\sigma_{K_{\varepsilon} + 1} + \cdots + \sigma_{k})C + \frac{\varepsilon}{2}(1 - \sigma ) \\ \le& \sigma_{1}e_{k} + \sigma_{2}e_{k - 1} + \cdots + \sigma_{K_{\varepsilon}} e_{k - K_{\varepsilon} + 1} + \frac{1 - \sigma}{ C} \frac{\varepsilon}{2}C + \frac{\varepsilon}{2}(1 - \sigma ) \\ \le& \sigma_{1}e_{k} + \sigma_{2}e_{k - 1} + \cdots + \sigma_{K_{\varepsilon}} e_{k - K_{\varepsilon} + 1} + \varepsilon (1 - \sigma ). \end{aligned}$$

Taking superior limit on both sides of the above inequality yields

$$\begin{aligned} \lim_{k \to \infty} \sup e_{k + 1} \le& \sigma_{1} \lim_{k \to \infty} \sup e_{k} + \sigma_{2}\lim _{k \to \infty} \sup e_{k - 1} + \cdots + \sigma_{K_{\varepsilon}} \lim_{k \to \infty} \sup e_{k - K_{\varepsilon} + 1} + \varepsilon (1 - \sigma ) \\ \le& (\sigma_{1} + \sigma_{2} + \cdots + \sigma_{K_{\varepsilon}} )\lim_{k \to \infty} \sup e_{k} + \varepsilon (1 - \sigma ) \\ \le& \sigma \lim_{k \to \infty} \sup e_{k} + \varepsilon (1 - \sigma ). \end{aligned}$$

The above inequality leads to

$$\lim_{k \to \infty} \sup e_{k} \le \varepsilon. $$

Consequently

$$\lim_{k \to \infty} e_{k} = 0. $$

This completes the proof. □
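The contraction in Lemma 1 can be observed numerically. The sketch below uses the illustrative choices \(\sigma_{i} = 0.8 \cdot 0.5^{i}\) (so \(\sigma = \sum_{i=1}^{\infty} \sigma_{i} = 0.8 < 1\)) and \(\varphi_{k} = 2 \cdot 0.9^{k} \to 0\), and iterates the recursion with equality taken as the worst case:

```python
import numpy as np

# Illustrative data satisfying the hypotheses of Lemma 1.
K = 400
sigma = np.array([0.8 * 0.5**i for i in range(1, K + 1)])   # sum = 0.8 < 1
phi = np.array([2.0 * 0.9**k for k in range(1, K + 1)])     # tends to 0

# Iterate e_{k+1} = sum_{i=1}^{k} sigma_i * e_{k-i+1} + phi_k, starting from e_1 = 1.
e = [1.0]
for k in range(1, K):
    conv = sum(sigma[i - 1] * e[k - i] for i in range(1, k + 1))
    e.append(conv + phi[k - 1])
```

The sequence first grows (since \(\varphi_{1} = 1.8\)) and then decays geometrically toward zero, as the lemma asserts.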

Lemma 2

Let \(\{ \phi_{k}\}_{k = 1}^{\infty}\), \(\{ \lambda_{k}\}_{k = 1}^{\infty}\) and \(\{ \Phi_{k}\}_{k = 1}^{\infty}\) be nonnegative sequences, which satisfy (i) \(\lim_{k \to \infty} \phi_{k} = 0\), \(\lim_{k \to \infty} \lambda_{k} = 0\) and \(\lim_{k \to \infty} \Phi_{k} = 0\), (ii) \(\sum_{k = 1}^{\infty} \phi_{k}\) is bounded. Then

$$\lim_{k \to \infty} \Biggl(\sum_{i = 1}^{k} \phi_{i}\lambda_{k - i + 1} + \Phi_{k}\Biggr) = 0. $$

Proof

From \(\lim_{k \to \infty} \lambda_{k} = 0\), it follows that the nonnegative sequence \(\{ \lambda_{k}\}_{k = 1}^{\infty}\) is bounded. Let \(C = \sup_{k = 1,2, \ldots} \{ \lambda_{k}\}\) and \(\phi = \sum_{k = 1}^{\infty} \phi_{k}\). Since the sequence \(\{ \phi_{k}\}_{k = 1}^{\infty}\) is nonnegative and assumption (ii) holds, for any \(\varepsilon > 0\) there exists a positive integer \(K_{1}\) such that \(\sum_{k = K_{1} + 1}^{\infty} \phi_{k} < \frac{\varepsilon}{3C}\). In addition, from assumption (i) it is immediate that there exists a positive integer \(K_{2}\) (\(K_{2} > K_{1}\)) such that \(\lambda_{k - K_{1} + 1} < \frac{\varepsilon}{3\phi}\) for all \(k - K_{1} + 1 > K_{2}\). Further, the assumptions that \(\{ \Phi_{k} \}_{k = 1}^{\infty}\) is nonnegative and \(\lim_{k \to \infty} \Phi_{k} = 0\) imply that there exists a positive integer \(K_{3}\) such that \(\Phi_{k} < \frac{\varepsilon}{3}\) for all \(k > K_{3}\).

Thus, for all \(k > \max \{ K_{1} + K_{2} - 1, K_{3}\}\), we have

$$\begin{aligned} \sum_{i = 1}^{k} \phi_{i} \lambda_{k - i + 1} + \Phi_{k} =& \phi_{1} \lambda_{k} + \phi_{2}\lambda_{k - 1} + \cdots + \phi_{K_{1}}\lambda_{k - K_{1} + 1} + \phi_{K_{1} + 1} \lambda_{k - K_{1}} + \cdots + \phi_{k}\lambda_{1} + \Phi_{k} \\ \le& (\phi_{1} + \phi_{2} + \cdots + \phi_{K_{1}}) \frac{\varepsilon}{3\phi} + C \Biggl( \sum_{k = K_{1} + 1}^{\infty} \phi_{k} \Biggr) + \Phi_{k} \\ < & \phi \times \frac{\varepsilon}{3\phi} + C \times \frac{\varepsilon}{3C} + \frac{\varepsilon}{3} = \varepsilon. \end{aligned}$$

Consequently

$$\lim_{k \to \infty} \Biggl( \sum_{i = 1}^{k} \phi_{i}\lambda_{k - i + 1} + \Phi_{k} \Biggr) = 0. $$

This completes the proof. □
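Lemma 2 can likewise be checked numerically. The sketch below uses the illustrative sequences \(\phi_{i} = 0.5^{i}\) (summable), \(\lambda_{k} = 1/k\) and \(\Phi_{k} = 0.8^{k}\), all tending to zero, and evaluates the convolution at a large index:

```python
import numpy as np

K = 500
phi = np.array([0.5**i for i in range(1, K + 1)])     # summable, -> 0
lam = np.array([1.0 / k for k in range(1, K + 1)])    # -> 0
Phi_K = 0.8**K                                        # -> 0

# Evaluate sum_{i=1}^{K} phi_i * lambda_{K-i+1} + Phi_K at k = K.
s = sum(phi[i - 1] * lam[K - i] for i in range(1, K + 1)) + Phi_K
```

The dominant terms have small \(i\) (where \(\lambda_{K - i + 1} \approx 1/K\)) or large \(i\) (where \(\phi_{i}\) is negligible), so `s` is already far below its early values, consistent with the limit being zero.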

3 Convergence analysis for LTI SISO systems

For a real system, it is well known that in a neighborhood of an operating point the dynamics can be approximated by a linear system. This section considers a class of repetitive discrete-time LTI SISO systems of the form

$$ \left \{ \textstyle\begin{array}{l} x_{k}(t + 1) = Ax_{k}(t) + Bu_{k}(t),\quad t \in S^{ -}, \\ y_{k}(t) = Cx_{k}(t),\quad t \in S, \end{array}\displaystyle \right . $$
(8)

where A, B and C are matrices of appropriate dimensions. In particular, CB is assumed to be nonzero; under this assumption it is easy to check that for a given desired output \(y_{d}(t)\), \(t \in S\), there exist a desired state \(x_{d}(t)\), \(t \in S\), and a desired control input \(u_{d}(t)\), \(t \in S^{ -}\), such that

$$ \left \{ \textstyle\begin{array}{l} x_{d}(t + 1) = Ax_{d}(t) + Bu_{d}(t),\quad t \in S^{ -}, \\ y_{d}(t) = Cx_{d}(t), \quad t \in S. \end{array}\displaystyle \right . $$
(9)

The dynamic systems (8) and (9) can be, respectively, lifted as

$$ y_{k} = Hu_{k} + Gx_{k}(0), $$
(10)

and

$$ y_{d} = Hu_{d} + Gx_{d}(0). $$
(11)

Here

$$\begin{aligned}& G = \bigl[ (CA)^{ \top},\bigl(CA^{2}\bigr)^{ \top}, \ldots,\bigl(CA^{N}\bigr)^{ \top} \bigr]^{ \top} \in R^{N \times n}, \\& H = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} CB & & & & \\ CAB & CB & & & \\ CA^{2}B & CAB & CB & & \\ \vdots & \vdots & \vdots & \ddots & \\ CA^{N - 1}B & CA^{N - 2}B & CA^{N - 3}B & \cdots & CB \end{array}\displaystyle \right ] \in R^{N \times N}. \end{aligned}$$
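For concreteness, the lifted matrices G and H can be constructed and cross-checked against the time-domain recursion (8); the system matrices A, B, C below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Illustrative LTI SISO system for building the lifted model (10).
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.5]])
N = 5

# G stacks C A^t for t = 1..N; H is lower triangular with entries C A^{i-j} B.
G = np.vstack([C @ np.linalg.matrix_power(A, t) for t in range(1, N + 1)])
H = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        H[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

# Cross-check: the lifted output y = H u + G x0 must match simulating (8).
rng = np.random.default_rng(2)
u = rng.standard_normal(N)
x0 = rng.standard_normal(2)
x = x0.copy()
y_sim = []
for t in range(N):
    x = A @ x + B[:, 0] * u[t]
    y_sim.append(float(C @ x))
y_lift = H @ u + G @ x0
```

The diagonal of H is the Markov parameter CB, which is why the theorem requires \(CB \ne 0\): a zero diagonal would make H singular and no desired input \(u_{d}\) would exist.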

Theorem 1

Assume that the proposed NILC scheme (4) with (2) and (3) is applied to the system (8) and that the initial state is resettable, namely, \(x_{k}(0) = x_{d}(0)\) for all \(k = 1,2, \ldots \) . Then the expectation \(E\{\| \delta y_{k}\|_{1}\}\) of the tracking error \(\| \delta y_{k}\|_{1}\) converges to zero as the iteration goes on if the inequality \(\rho_{1} = \| E\{ | I - \Gamma \Lambda_{k}H\Omega_{k} |\} \|_{1} + (1 - \bar{\alpha} )\bar{\omega} | \Gamma |\| H \|_{1} < 1\) holds.

Remark 2

By the assumptions (A1) and (A2), \(\| E\{ | I - \Gamma \Lambda_{k}H\Omega_{k} |\} \|_{1}\) is a constant independent of k; hence \(\rho_{1}\) is a constant independent of k.

Remark 3

For a given k (\(k \ge 2\)), \(\| \delta y_{k} \|_{1} = \sum_{t = 1}^{N} | y_{d}(t) - y_{k}(t) |\) is a nonnegative stochastic variable depending on \(\{ \alpha_{i,t}:t \in S^{ +},1 \le i \le k - 1\}\) and \(\{ \omega_{i,t}:t \in S^{ -},1 \le i \le k\}\). Thus, \(E\{ \| \delta y_{k} \|_{1}\}\) can be understood as the expectation of the stochastic variable \(\| \delta y_{k} \|_{1}\), or as the first-order deviation of the stochastic output \(y_{k}(t)\) (\(t \in S\)) from the desired output \(y_{d}(t)\) (\(t \in S\)).

Proof

From (6), (7), (10), (11) and \(x_{k}(0) = x_{d}(0)\), it follows that

$$ \delta \tilde{u}_{k + 1} = \delta \tilde{u}_{k} - \Gamma \delta \tilde{y}_{k} = \delta \tilde{u}_{k} - \Gamma \Lambda_{k}\delta y_{k} = \delta \tilde{u}_{k} - \Gamma \Lambda_{k}H\delta u_{k}. $$
(12)

By (5), we have

$$ \begin{aligned} &\delta u_{1} = \delta \tilde{u}_{1}, \\ &\delta u_{k} = \Omega_{k}\delta \tilde{u}_{k} + (I - \Omega_{k})\delta u_{k - 1},\quad k = 2,3, \ldots. \end{aligned} $$
(13)

By backwardly iterating (13), we find

$$ \delta u_{k} = \Omega_{k}\delta \tilde{u}_{k} + \sum_{i = 2}^{k - 1} \Biggl[ \prod _{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr] \Omega_{i}\delta \tilde{u}_{i} + \Biggl[ \prod _{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\delta \tilde{u}_{1}. $$
(14)

Substituting (14) into (12) shows

$$\begin{aligned} \delta \tilde{u}_{k + 1} =& [ I - \Gamma \Lambda_{k}H \Omega_{k} ]\delta \tilde{u}_{k} - \sum _{i = 2}^{k - 1} \Gamma \Lambda_{k}H \Biggl[ \prod_{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr]\Omega_{i}\delta \tilde{u}_{i} \\ &{}- \Gamma \Lambda_{k}H \Biggl[ \prod_{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\delta \tilde{u}_{1}. \end{aligned}$$
(15)

By (15) and a direct computation, we have

$$\begin{aligned} \begin{aligned}[b] \vert \delta \tilde{u}_{k + 1} \vert \prec{}& \vert I - \Gamma \Lambda_{k}H\Omega_{k} \vert \vert \delta \tilde{u}_{k} \vert + \sum_{i = 2}^{k - 1} \vert \Gamma \vert \Lambda_{k}\vert H \vert \Biggl[ \prod _{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr]\Omega_{i}\vert \delta \tilde{u}_{i} \vert \\ &{}+ \vert \Gamma \vert \Lambda_{k} \vert H \vert \Biggl[ \prod_{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\vert \delta \tilde{u}_{1} \vert . \end{aligned} \end{aligned}$$
(16)

It is obvious that the assumption (A2) results in

$$E\{ \Omega_{i}\} = I - \Omega,\qquad E\{ I - \Omega_{k - j}\} = \Omega \quad \mbox{and}\quad E\{ \Lambda_{k}\} = I - \Lambda. $$

Thus, calculating the expectation to both sides of (16) and taking the assumption (A1) into account yield

$$\begin{aligned} E\bigl\{ \vert \delta \tilde{u}_{k + 1} \vert \bigr\} \prec& E\bigl\{ \vert I - \Gamma \Lambda_{k}H\Omega_{k} \vert \bigr\} E\bigl\{ \vert \delta \tilde{u}_{k} \vert \bigr\} + \sum _{i = 2}^{k - 1} \vert \Gamma \vert (I - \Lambda ) \vert H \vert \Omega^{k - i}(I - \Omega )E\bigl\{ \vert \delta \tilde{u}_{i} \vert \bigr\} \\ &{}+ \vert \Gamma \vert (I - \Lambda )\vert H \vert \Omega^{k - 1}E\bigl\{ \vert \delta \tilde{u}_{1} \vert \bigr\} . \end{aligned}$$
(17)

Taking the 1-norm on both sides of (17) and using the property \(\| E\{ | \delta \tilde{u}_{k} |\} \|_{1} = E\{ \| \delta \tilde{u}_{k}\|_{1}\}\), we obtain

$$\begin{aligned} E\bigl\{ \Vert \delta \tilde{u}_{k + 1} \Vert _{1}\bigr\} \le& \bigl\Vert E\bigl\{ \vert I - \Gamma \Lambda_{k}H \Omega_{k} \vert \bigr\} \bigr\Vert _{1} E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} + (1 - \bar{\alpha} )| \Gamma |\Vert H \Vert _{1}\bar{ \omega}^{k - 1}E\bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} \\ &{}+ \sum_{i = 2}^{k - 1} (1 - \bar{\alpha} ) (1 - \bar{\omega} )| \Gamma |\Vert H \Vert _{1}\bar{ \omega}^{k - i}E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1}\bigr\} \\ \le& \bigl\Vert E\bigl\{ \vert I - \Gamma \Lambda_{k}H \Omega_{k} \vert \bigr\} \bigr\Vert _{1} E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} + (1 - \bar{\alpha} )| \Gamma |\Vert H\Vert _{1}\bar{\omega}^{k}E \bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} \\ &{}+ \sum_{i = 1}^{k - 1} (1 - \bar{\alpha} ) (1 - \bar{\omega} )| \Gamma |\Vert H \Vert _{1} \bar{ \omega}^{k - i}E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1}\bigr\} \\ \le& \sum_{i = 1}^{k} \sigma_{i}e_{k - i + 1} + \varphi_{k}, \end{aligned}$$
(18)

where \(e_{k} = E\{ \| \delta \tilde{u}_{k} \|_{1}\}\), \(\varphi_{k} = (1 - \bar{\alpha} ) | \Gamma |\| H \|_{1} \bar{\omega}^{k}E\{ \| \delta \tilde{u}_{1} \|_{1}\}\), \(\sigma_{1} =\| E\{ | I - \Gamma \Lambda_{k}H\Omega_{k} |\} \|_{1}\) and \(\sigma_{i} = (1 - \bar{\alpha} )(1 - \bar{\omega} ) | \Gamma |\| H \|_{1} \bar{\omega}^{i - 1}\) for \(i = 2,3, \ldots \) .

From the assumption \(\rho_{1} = \| E\{ | I - \Gamma \Lambda_{k}H\Omega_{k} |\} \|_{1} + (1 - \bar{\alpha} )\bar{\omega} | \Gamma |\| H \|_{1} = \sum_{i = 1}^{\infty} \sigma_{i} < 1\), \(\lim_{k \to \infty} \varphi_{k} = 0\) and Lemma 1, the inequality (18) gives rise to

$$ \lim_{k \to \infty} E\bigl\{ \| \delta \tilde{u}_{k} \|_{1}\bigr\} = 0. $$
(19)

By (10), (11), (14), and \(x_{k}(0) = x_{d}(0)\), we have

$$ \delta y_{k} = H\delta u_{k} = H\Omega_{k} \delta \tilde{u}_{k} + \sum_{i = 2}^{k - 1} H \Biggl[ \prod_{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr]\Omega_{i}\delta \tilde{u}_{i} + H \Biggl[ \prod_{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\delta \tilde{u}_{1}. $$
(20)

From the equality (20), a direct computation shows

$$\begin{aligned} \vert \delta y_{k} \vert \prec& \vert H \vert \Omega_{k} \vert \delta \tilde{u}_{k} \vert + \sum _{i = 2}^{k - 1} \vert H \vert \Biggl[ \prod _{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr]\Omega_{i} \vert \delta \tilde{u}_{i} \vert \\ &{}+ \vert H \vert \Biggl[ \prod_{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\vert \delta \tilde{u}_{1} \vert . \end{aligned}$$
(21)

Taking the expectation on both sides of (21) and taking the assumptions (A1) and (A2) into account, we get

$$\begin{aligned} E\bigl\{ \vert \delta y_{k} \vert \bigr\} \prec& \vert H \vert (I - \Omega )E\bigl\{ \vert \delta \tilde{u}_{k} \vert \bigr\} + \sum_{i = 2}^{k - 1} \vert H \vert \Omega^{k - i}(I - \Omega )E\bigl\{ \vert \delta \tilde{u}_{i} \vert \bigr\} \\ &{}+ \vert H \vert \Omega^{k - 1}E\bigl\{ \vert \delta \tilde{u}_{1} \vert \bigr\} . \end{aligned}$$
(22)

Taking the 1-norm on both sides of (22), we obtain

$$\begin{aligned} E\bigl\{ \Vert \delta y_{k} \Vert _{1}\bigr\} \le& \Vert H \Vert _{1} \Biggl( (1 - \bar{\omega} )E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} \\ &{}+ \sum _{i = 1}^{k - 1}\bar{\omega}^{k - i}(1 - \bar{ \omega} )E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1} \bigr\} + \bar{\omega}^{k}E\bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} \Biggr) \\ \le& \Vert H \Vert _{1} \Biggl( \sum _{i = 1}^{k} \phi_{i}\lambda_{k - i + 1} + \Phi_{k} \Biggr), \end{aligned}$$
(23)

where \(\phi_{i} = (1 - \bar{\omega} )\bar{\omega}^{i - 1}\), \(\lambda_{i} = \| E\{ |\delta \tilde{u}_{i} |\} \|_{1}\) and \(\Phi_{k} = \bar{\omega}^{k}\| E\{ | \delta \tilde{u}_{1} |\} \|_{1}\).

By (19), (23), and Lemma 2, we obtain

$$\lim_{k \to \infty} E\bigl\{ \| \delta y_{k} \|_{1}\bigr\} = 0. $$

This completes the proof. □

Corollary 1

Assume that the proposed NILC scheme (4) with (2) and (3) is applied to the LTI system (8) and the initial state is resettable, that is, \(x_{k}(0) = x_{d}(0)\) for \(k = 1,2, \ldots \) . Then the expectation \(E\{ \| \delta y_{k} \|_{1}\}\) of the tracking error \(\| \delta y_{k} \|_{1}\) converges to zero as the iteration number approaches infinity if the conditions \(\| A \|_{1} < 1\) and

$$\tilde{\rho}_{1} = E\bigl\{ | 1 - \Gamma \alpha_{k,1}CB \omega_{k,0} |\bigr\} + (1 - \bar{\alpha} ) \bigl[ (1 - \bar{\omega} )\| A \|_{1} + \bar{\omega} \bigr]\frac{| \Gamma |\| B \|_{1}\| C \|_{1}}{1 - \| A \|_{1}} < 1 $$

are satisfied.

Remark 4

It is evident that under the assumption \(CB \ne 0\) the learning gain Γ can be chosen so as to guarantee the inequality \(0 < \Gamma CB < 1\), which implies that

$$E\bigl\{ \vert 1 - \alpha_{k,1}\omega_{k,0}\Gamma CB \vert \bigr\} = E\{ 1 - \alpha_{k,1}\omega_{k,0}\Gamma CB\} = 1 - (1 - \bar{\alpha} ) (1 - \bar{\omega} )\Gamma CB. $$

Thus, it is not difficult to compute that

$$\tilde{\rho}_{1} = 1 - (1 - \bar{\alpha} )| \Gamma | \bigl( | CB | + \| B \|_{1}\| C \|_{1} \bigr) \biggl( 1 - \frac{\| B \|_{1}\| C \|_{1}}{(| CB | + \| B \|_{1}\| C \|_{1})(1 - \| A \|_{1})} - \bar{ \omega} \biggr). $$

Recall that \(0 \le \bar{\alpha} < 1\). Therefore, the convergent condition \(\tilde{\rho}_{1} < 1\) can be guaranteed if the proportional learning gain Γ is properly chosen and the dropout probability of the input data is constrained as \(\bar{\omega} < 1 - \| B \|_{1}\| C \|_{1} (| CB | + \| B \|_{1}\| C \|_{1})^{ - 1}(1 - \| A \|_{1})^{ - 1}\), which means that the input data must not drop too frequently.
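The bound on ω̄ above is easy to evaluate numerically. The sketch below computes it with exact rational arithmetic for an illustrative second-order system whose matrices reappear in Example 1 of Section 5; the helper name `input_dropout_bound` is ours, not the paper's.

```python
from fractions import Fraction

def input_dropout_bound(norm_A, norm_B, norm_C, abs_CB):
    # upper bound on the input dropout probability from Remark 4:
    # omega_bar < 1 - ||B||_1 ||C||_1 (|CB| + ||B||_1 ||C||_1)^{-1} (1 - ||A||_1)^{-1}
    bc = norm_B * norm_C
    return 1 - bc / ((abs_CB + bc) * (1 - norm_A))

# illustrative data: A = [[1/6, 1/5], [1/5, 1/6]], B = [1/2, 1/2]^T, C = [1, 1]
norm_A = Fraction(1, 6) + Fraction(1, 5)   # ||A||_1 = 11/30 (max column sum)
norm_B = Fraction(1, 2) + Fraction(1, 2)   # ||B||_1 = 1
norm_C = Fraction(1)                       # ||C||_1 = 1 (max column sum of [1, 1])
abs_CB = Fraction(1)                       # |CB| = 1

bound = input_dropout_bound(norm_A, norm_B, norm_C, abs_CB)
print(bound)   # 4/19, so the input may drop with probability at most 4/19
```

With these matrices the bound evaluates to 4/19, which matches the admissible range of ω̄ found for Example 1 below.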

Remark 5

It is observed that the inequality (18) reduces to

$$E\bigl\{ \| \delta \tilde{u}_{k + 1}\|_{1}\bigr\} \le \bigl\| E\bigl\{ | I - \Gamma \Lambda_{k}H\Omega_{k} |\bigr\} \bigr\| _{1}E\bigl\{ \| \delta \tilde{u}_{k} \|_{1}\bigr\} $$

for the case when \(\bar{\alpha} = 0\) and \(\bar{\omega} = 0\). This implies that the expectation of the input error converges to zero monotonically if the input and output data drop with zero probability. In particular, for the case when the input and output data do not drop at all, the input error in the sense of the 1-norm is monotonically convergent. This coincides with the existing conclusion in [7].

Remark 6

As shown in (15), \(\delta \tilde{u}_{k + 1}\) involves all the past signals \(\{ \delta \tilde{u}_{i}:1 \le i \le k\}\) and its dynamics is quite complex.

4 Convergence characteristics of nonlinear systems

In practice, the dynamics of many systems are nonlinear owing to Coulomb friction, saturation, or dead zones in the devices. This section considers a class of affine nonlinear systems described by

$$ \left \{ \textstyle\begin{array}{l} x_{k}(t + 1) = f(x_{k}(t)) + Bu_{k}(t),\quad t \in S^{ -}, \\ y_{k}(t) = Cx_{k}(t),\quad t \in S, \end{array}\displaystyle \right . $$
(24)

where \(x_{k}(t) \in R^{n}\), \(u_{k}(t) \in R\) and \(y_{k}(t) \in R\) are the n-dimensional state, scalar input, and scalar output, respectively, \(f( \cdot )\) is a nonlinear function, and CB is assumed to be nonzero. Under this assumption it is easy to check that, for a given desired output \(y_{d}(t)\), \(t \in S\), there exist a desired state \(x_{d}(t)\), \(t \in S\) and a desired control input \(u_{d}(t)\), \(t \in S^{ -}\) such that

$$ \left \{ \textstyle\begin{array}{l} x_{d}(t + 1) = f(x_{d}(t)) + Bu_{d}(t), \quad t \in S^{ -}, \\ y_{d}(t) = Cx_{d}(t), \quad t \in S, \end{array}\displaystyle \right . $$
(25)

i.e., \(y_{d}(t)\) is realizable.

(A3):

Assume that the nonlinear function \(f(z)\) is uniformly globally Lipschitz with respect to z, i.e., for all \(z_{1}, z_{2} \in R^{n}\), there exists a positive constant \(L_{f}\) such that

$$\bigl\Vert f(z_{1}) - f(z_{2}) \bigr\Vert _{1} \le L_{f} \Vert z_{1} - z_{2} \Vert _{1}. $$

To analyze the convergence characteristics of the proposed NILC scheme (4) with (2) and (3) for the nonlinear system (24), the lifting technique is used and the following notations are introduced:

$$\begin{aligned}& x_{k} = \bigl[\bigl(x_{k}(0)\bigr)^{ \top}, \bigl(x_{k}(1)\bigr)^{ \top}, \ldots,\bigl(x_{k}(N - 1)\bigr)^{ \top} \bigr]^{ \top} \in R^{nN}, \\& x_{k}^{ +} = \bigl[\bigl(x_{k}(1) \bigr)^{ \top},\bigl(x_{k}(2)\bigr)^{ \top}, \ldots, \bigl(x_{k}(N)\bigr)^{ \top} \bigr]^{ \top} \in R^{nN}, \\& x_{d} = \bigl[\bigl(x_{d}(0)\bigr)^{ \top}, \bigl(x_{d}(1)\bigr)^{ \top}, \ldots,\bigl(x_{d}(N - 1)\bigr)^{ \top} \bigr]^{ \top} \in R^{nN}, \\& x_{d}^{ +} = \bigl[\bigl(x_{d}(1) \bigr)^{ \top},\bigl(x_{d}(2)\bigr)^{ \top}, \ldots, \bigl(x_{d}(N)\bigr)^{ \top} \bigr]^{ \top} \in R^{nN}, \\& f(x_{k}) = \bigl[\bigl(f\bigl(x_{k}(0)\bigr) \bigr)^{ \top},\bigl(f\bigl(x_{k}(1)\bigr)\bigr)^{ \top}, \ldots,\bigl(f\bigl(x_{k}(N - 1)\bigr)\bigr)^{ \top} \bigr]^{ \top} \in R^{nN}, \\& f(x_{d}) = \bigl[\bigl(f\bigl(x_{d}(0)\bigr) \bigr)^{ \top},\bigl(f\bigl(x_{d}(1)\bigr)\bigr)^{ \top}, \ldots,\bigl(f\bigl(x_{d}(N - 1)\bigr)\bigr)^{ \top} \bigr]^{ \top} \in R^{nN}, \\& \overline{B} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} B & & \\ & \ddots & \\ & & B \end{array}\displaystyle \right ] \in R^{nN \times N},\qquad \overline{C} = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} C & & \\ & \ddots & \\ & & C \end{array}\displaystyle \right ] \in R^{N \times nN}. \end{aligned}$$

Thus, (24) and (25) are, respectively, rewritten as

$$ \left \{ \textstyle\begin{array}{l} x_{k}^{ +} = f(x_{k}) + \overline{B}u_{k}, \\ y_{k} = \overline{C}x_{k}^{ +}, \end{array}\displaystyle \right . $$
(26)

and

$$ \left \{ \textstyle\begin{array}{l} x_{d}^{ +} = f(x_{d}) + \overline{B}u_{d}, \\ y_{d} = \overline{C}x_{d}^{ +}. \end{array}\displaystyle \right . $$
(27)
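As a numerical sanity check of the lifted form (26), the block matrices \(\overline{B}\) and \(\overline{C}\) can be assembled with Kronecker products. The sketch below uses a small arbitrary horizon, a random input sequence, and an illustrative Lipschitz drift; all variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 2, 5                        # state dimension and a small horizon, for illustration
B = np.array([0.5, 0.5])
C = np.array([1.0, 1.0])

def f(x):
    # an illustrative Lipschitz drift (any Lipschitz f works for the identity below)
    return np.array([np.sin(x[1]) / 3.0, np.cos(x[0]) / 3.0])

# simulate (24) once with a random input sequence
u = rng.standard_normal(N)
x = [np.zeros(n)]
for t in range(N):
    x.append(f(x[t]) + B * u[t])

# lifted super-vectors and block-diagonal matrices
x_plus = np.concatenate(x[1:])                     # [x(1)^T, ..., x(N)^T]^T
f_vec  = np.concatenate([f(xi) for xi in x[:N]])   # [f(x(0))^T, ..., f(x(N-1))^T]^T
B_bar  = np.kron(np.eye(N), B.reshape(n, 1))       # nN x N block diagonal of B
C_bar  = np.kron(np.eye(N), C.reshape(1, n))       # N x nN block diagonal of C
y_vec  = C_bar @ x_plus                            # lifted output of (26)

assert np.allclose(x_plus, f_vec + B_bar @ u)      # lifted state equation of (26)
```

The Kronecker product with the identity is simply a compact way to build the block-diagonal \(\overline{B}\) and \(\overline{C}\) defined above.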

Theorem 2

Assume that the proposed NILC scheme (4) with (2) and (3) is applied to the nonlinear system (24) and the initial state is resettable, i.e., \(x_{k}(0) = x_{d}(0)\) for \(k = 1,2, \ldots \) . Then, under the assumptions (A1), (A2), and (A3), the expectation \(E\{ \| \delta y_{k} \|_{1}\}\) of the tracking error \(\| \delta y_{k} \|_{1}\) converges to zero as the iteration tends to infinity if \(L_{f} < 1\) and \(\rho_{2} = \| E\{ | I - \Gamma \Lambda_{k}\overline{C}\overline{B}\Omega_{k} |\} \|_{1} + (1 - \bar{\alpha} )| \Gamma |\| \overline{B} \|_{1}\| \overline{C} \|_{1} (\bar{\omega} + \frac{L_{f}}{1 - L_{f}}) < 1\) are satisfied.

Proof

Equations (6), (7), (26), (27), and \(x_{k}(0) = x_{d}(0)\) give rise to

$$\begin{aligned} \delta \tilde{u}_{k + 1} =& \delta \tilde{u}_{k} - \Gamma \delta \tilde{y}_{k} = \delta \tilde{u}_{k} - \Gamma \Lambda_{k}\delta y_{k} \\ = &\delta \tilde{u}_{k} - \Gamma \Lambda_{k}\overline{C} \bigl(x_{d}^{ +} - x_{k}^{ +}\bigr) \\ =& \delta \tilde{u}_{k} - \Gamma \Lambda_{k}\overline{C} \overline{B}\delta u_{k} - \Gamma \Lambda_{k}\overline{C} \bigl[f(x_{d}) - f(x_{k})\bigr]. \end{aligned}$$
(28)

Substituting (14) into (28) yields

$$\begin{aligned} \delta \tilde{u}_{k + 1} =& (I - \Gamma \Lambda_{k} \overline{C}\overline{B}\Omega_{k})\delta \tilde{u}_{k} - \sum_{i = 2}^{k - 1} \Gamma \Lambda_{k} \overline{C}\overline{B} \Biggl[ \prod_{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr]\Omega_{i}\delta \tilde{u}_{i} \\ &{}- \Gamma \Lambda_{k}\overline{C}\overline{B} \Biggl[ \prod _{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\delta \tilde{u}_{1} - \Gamma \Lambda_{k}\overline{C} \bigl( f(x_{d}) - f(x_{k}) \bigr). \end{aligned}$$
(29)

By (29) and a direct computation, we get

$$\begin{aligned} \vert \delta \tilde{u}_{k + 1} \vert \prec& \vert I - \Gamma \Lambda_{k}\overline{C}\overline{B}\Omega_{k} \vert \vert \delta \tilde{u}_{k} \vert + \sum _{i = 2}^{k - 1} \vert \Gamma \vert \Lambda_{k}\vert \overline{C}\overline{B} \vert \Biggl[ \prod _{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr]\Omega_{i} \vert \delta \tilde{u}_{i} \vert \\ &{}+ \vert \Gamma \vert \Lambda_{k}\vert \overline{C} \overline{B} \vert \Biggl[ \prod_{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\vert \delta \tilde{u}_{1} \vert \\ &{}+ \vert \Gamma \vert \Lambda_{k}\vert \overline{C} \vert \bigl\vert f(x_{d}) - f(x_{k}) \bigr\vert . \end{aligned}$$
(30)

Calculating the expectation on both sides of (30) and taking the assumptions (A1) and (A2) into consideration, we obtain

$$\begin{aligned} E\bigl\{ \vert \delta \tilde{u}_{k + 1} \vert \bigr\} \prec& E\bigl\{ \vert I - \Gamma \Lambda_{k}\overline{C}\overline{B} \Omega_{k} \vert \bigr\} E\bigl\{ \vert \delta \tilde{u}_{k} \vert \bigr\} \\ &{}+ \sum_{i = 2}^{k - 1} \vert \Gamma \vert (I - \Lambda )\vert \overline{C}\overline{B} \vert \Omega^{k - i}(I - \Omega )E\bigl\{ \vert \delta \tilde{u}_{i} \vert \bigr\} \\ &{}+ \vert \Gamma \vert (I - \Lambda )\vert \overline{C}\overline{B} \vert \Omega^{k - 1}E\bigl\{ \vert \delta \tilde{u}_{1} \vert \bigr\} + \vert \Gamma \vert (I - \Lambda )\vert \overline{C} \vert E\bigl\{ \bigl\vert f(x_{d}) - f(x_{k}) \bigr\vert \bigr\} . \end{aligned}$$
(31)

Taking 1-norm on both sides of (31) and taking the property \(\| E\{ | \delta \tilde{u}_{k} |\} \|_{1} = E\{ \| \delta \tilde{u}_{k} \|_{1}\}\) into account, we have

$$\begin{aligned} E\bigl\{ \Vert \delta \tilde{u}_{k + 1} \Vert _{1}\bigr\} \le& \bigl\Vert E\bigl\{ \vert I - \Gamma \Lambda_{k}\overline{C} \overline{B}\Omega_{k} \vert \bigr\} \bigr\Vert _{1}E \bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} \\ &{}+ \sum_{i = 2}^{k - 1} (1 - \bar{\alpha} ) (1 - \bar{\omega} )| \Gamma |\Vert \overline{C}\overline{B} \Vert _{1} \bar{\omega}^{k - i} E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1} \bigr\} \\ &{}+ (1 - \bar{\alpha} )| \Gamma |\Vert \overline{C}\overline{B} \Vert _{1}\bar{\omega}^{k - 1}E\bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} \\ &{}+ (1 - \bar{\alpha} )| \Gamma |\Vert \overline{C} \Vert _{1}\bigl\Vert E\bigl\{ \bigl\vert f(x_{d}) - f(x_{k}) \bigr\vert \bigr\} \bigr\Vert _{1} \\ \le& \bigl\Vert E\bigl\{ \vert I - \Gamma \Lambda_{k} \overline{C}\overline{B}\Omega_{k} \vert \bigr\} \bigr\Vert _{1}E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1} \bigr\} \\ &{}+ \sum_{i = 1}^{k - 1} (1 - \bar{\alpha} ) (1 - \bar{\omega} )| \Gamma |\Vert \overline{C}\overline{B} \Vert _{1} \bar{\omega}^{k - i}E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1}\bigr\} \\ &{}+ (1 - \bar{\alpha} )| \Gamma |\Vert \overline{C}\overline{B} \Vert _{1}\bar{\omega}^{k}E\bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} \\ &{}+ (1 - \bar{\alpha} )| \Gamma |\Vert \overline{C} \Vert _{1}\bigl\Vert E\bigl\{ \bigl\vert f(x_{d}) - f(x_{k}) \bigr\vert \bigr\} \bigr\Vert _{1}. \end{aligned}$$
(32)

From the fact that \(\| E\{ | f(x_{d}) - f(x_{k}) |\} \|_{1} = E\{ \| f(x_{d}) - f(x_{k}) \|_{1}\}\) and assumption (A3), we get

$$ \bigl\Vert E\bigl\{ \bigl\vert f(x_{d}) - f(x_{k}) \bigr\vert \bigr\} \bigr\Vert _{1} \le L_{f}E\bigl\{ \Vert x_{d} - x_{k} \Vert _{1}\bigr\} . $$
(33)

Substituting (33) into (32), one arrives at

$$\begin{aligned} E\bigl\{ \Vert \delta \tilde{u}_{k + 1}\Vert _{1}\bigr\} \le& \bigl\Vert E\bigl\{ \vert I - \Gamma \Lambda_{k}\overline{C} \overline{B}\Omega_{k} \vert \bigr\} \bigr\Vert _{1} E \bigl\{ \Vert \delta \tilde{u}_{k}\Vert _{1}\bigr\} \\ &{}+ \sum_{i = 1}^{k - 1} (1 - \bar{\alpha} ) (1 - \bar{\omega} ) | \Gamma |\Vert \overline{C}\overline{B} \Vert _{1} \bar{\omega}^{k - i}E\bigl\{ \Vert \delta \tilde{u}_{i}\Vert _{1}\bigr\} \\ &{}+ (1 - \bar{\alpha} )| \Gamma |\Vert \overline{C}\overline{B} \Vert _{1}\bar{\omega}^{k}E\bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} \\ &{}+ (1 - \bar{\alpha} )| \Gamma |\Vert \overline{C} \Vert _{1}L_{f}E\bigl\{ \Vert x_{d} - x_{k} \Vert _{1}\bigr\} . \end{aligned}$$
(34)

By (26), (27), and (14), we have

$$\begin{aligned} x_{d}^{ +} - x_{k}^{ +} =& f(x_{d}) - f(x_{k}) + \overline{B}\Omega_{k} \delta \tilde{u}_{k} + \sum_{i = 2}^{k - 1} \overline{B} \Biggl[ \prod_{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr]\Omega_{i}\delta \tilde{u}_{i} \\ &{}+ \overline{B} \Biggl[ \prod_{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\delta \tilde{u}_{1}. \end{aligned}$$
(35)

By (35), we obtain

$$\begin{aligned} \bigl\vert x_{d}^{ +} - x_{k}^{ +} \bigr\vert \prec& \bigl\vert f(x_{d}) - f(x_{k}) \bigr\vert + \vert \overline{B} \vert \Omega_{k} \vert \delta \tilde{u}_{k} \vert + \sum_{i = 2}^{k - 1} \vert \overline{B} \vert \Biggl[ \prod_{j = 0}^{k - 1 - i} (I - \Omega_{k - j}) \Biggr]\Omega_{i} \vert \delta \tilde{u}_{i} \vert \\ &{}+ \vert \overline{B} \vert \Biggl[ \prod_{j = 0}^{k - 2} (I - \Omega_{k - j}) \Biggr]\vert \delta \tilde{u}_{1} \vert . \end{aligned}$$
(36)

Taking the expectation on both sides of (36) and taking the assumptions (A1) and (A2) into consideration, we have

$$\begin{aligned} E\bigl\{ \bigl\vert x_{d}^{ +} - x_{k}^{ +} \bigr\vert \bigr\} \prec& E\bigl\{ \bigl\vert f(x_{d}) - f(x_{k}) \bigr\vert \bigr\} + \vert \overline{B} \vert (I - \Omega )E\bigl\{ \vert \delta \tilde{u}_{k} \vert \bigr\} \\ &{}+ \sum_{i = 2}^{k - 1} \vert \overline{B} \vert \Omega^{k - i}(I - \Omega )E\bigl\{ \vert \delta \tilde{u}_{i} \vert \bigr\} + \vert \overline{B} \vert \Omega^{k - 1}E\bigl\{ \vert \delta \tilde{u}_{1} \vert \bigr\} . \end{aligned}$$
(37)

Computing the 1-norm on both sides of (37) leads to

$$\begin{aligned} E\bigl\{ \bigl\Vert x_{d}^{ +} - x_{k}^{ +} \bigr\Vert _{1}\bigr\} \le& \bigl\Vert E\bigl\{ \bigl\vert f(x_{d}) - f(x_{k}) \bigr\vert \bigr\} \bigr\Vert _{1} + (1 - \bar{\omega} )\Vert \overline{B} \Vert _{1} E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} \\ &{}+ \sum_{i = 2}^{k - 1} (1 - \bar{\omega} ) \bar{\omega}^{k - i}\Vert \overline{B} \Vert _{1} E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1}\bigr\} + \bar{ \omega}^{k - 1} \Vert \overline{B} \Vert _{1} E\bigl\{ \Vert \delta \tilde{u}_{1}\Vert _{1}\bigr\} . \end{aligned}$$
(38)

Substituting (33) into (38) yields

$$\begin{aligned} E\bigl\{ \bigl\Vert x_{d}^{ +} - x_{k}^{ +} \bigr\Vert _{1}\bigr\} \le& L_{f}E\bigl\{ \Vert x_{d} - x_{k} \Vert _{1}\bigr\} + (1 - \bar{ \omega} )\Vert \overline{B} \Vert _{1} E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} \\ &{}+ \sum_{i = 2}^{k - 1} (1 - \bar{\omega} ) \bar{\omega}^{k - i} \Vert \overline{B} \Vert _{1} E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1}\bigr\} + \bar{ \omega}^{k - 1} \Vert \overline{B} \Vert _{1} E\bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} . \end{aligned}$$
(39)

From \(E\{ \| x_{d} - x_{k}\|_{1}\} \le E\{ \| x_{d}^{ +} - x_{k}^{ +} \|_{1}\}\) and (39), one obtains

$$\begin{aligned} E\bigl\{ \bigl\Vert x_{d}^{ +} - x_{k}^{ +} \bigr\Vert _{1}\bigr\} \le& (1 - \bar{\omega} ) \frac{\Vert \overline{B} \Vert _{1}}{1 - L_{f}} E \bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} + \sum_{i = 1}^{k - 1}(1 - \bar{\omega} )\bar{ \omega}^{k - i} \frac{\Vert \overline{B} \Vert _{1}}{1 - L_{f}} E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1}\bigr\} \\ &{} + \bar{ \omega}^{k} \frac{\Vert \overline{B} \Vert _{1}}{1 - L_{f}} E\bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} . \end{aligned}$$
(40)

Furthermore, from (34) and (40), it follows that

$$\begin{aligned} E\bigl\{ \Vert \delta \tilde{u}_{k + 1} \Vert _{1}\bigr\} \le& \biggl( \bigl\Vert E\bigl\{ \vert I - \Gamma \Lambda_{k} \overline{C}\overline{B}\Omega_{k} \vert \bigr\} \bigr\Vert _{1} + (1 - \bar{\alpha} ) (1 - \bar{\omega} )| \Gamma | \frac{L_{f} \Vert \overline{B} \Vert _{1} \Vert \overline{C} \Vert _{1}}{1 - L_{f}} \biggr)E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1} \bigr\} \\ &{}+ \sum_{i = 2}^{k - 1}\frac{(1 - \bar{\alpha} )(1 - \bar{\omega} )| \Gamma |\Vert \overline{B} \Vert _{1}\Vert \overline{C} \Vert _{1}}{1 - L_{f}} \bar{\omega}^{k - i}E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1}\bigr\} \\ &{}+ \frac{(1 - \bar{\alpha} )| \Gamma |\Vert \overline{B} \Vert _{1}\Vert \overline{C} \Vert _{1}}{1 - L_{f}}\bar{\omega}^{k - 1}E \bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} \\ \le& \biggl( \bigl\Vert E\bigl\{ \vert I - \Gamma \Lambda_{k} \overline{C}\overline{B}\Omega_{k} \vert \bigr\} \bigr\Vert _{1} + (1 - \bar{\alpha} ) (1 - \bar{\omega} )| \Gamma | \frac{L_{f}\Vert \overline{B} \Vert _{1}\Vert \overline{C} \Vert _{1}}{1 - L_{f}} \biggr)E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} \\ &{}+ \sum_{i = 1}^{k - 1}\frac{(1 - \bar{\alpha} )(1 - \bar{\omega} )| \Gamma |\Vert \overline{B} \Vert _{1}\Vert \overline{C} \Vert _{1}}{1 - L_{f}} \bar{\omega}^{k - i}E\bigl\{ \Vert \delta \tilde{u}_{i} \Vert _{1}\bigr\} \\ &{}+ \frac{(1 - \bar{\alpha} )| \Gamma |\Vert \overline{B} \Vert _{1}\Vert \overline{C} \Vert _{1}}{1 - L_{f}}\bar{\omega}^{k}E \bigl\{ \Vert \delta \tilde{u}_{1} \Vert _{1}\bigr\} . \end{aligned}$$
(41)

From the convergent condition \(\rho_{2} < 1\), inequality (41), and Lemma 1, we have

$$ \lim_{k \to \infty} E\bigl\{ \Vert \delta \tilde{u}_{k} \Vert _{1}\bigr\} = 0. $$
(42)

By (40), (42), and Lemma 2, we get

$$ \lim_{k \to \infty} E\bigl\{ \bigl\Vert x_{d}^{ +} - x_{k}^{ +} \bigr\Vert _{1}\bigr\} = 0. $$
(43)

From (26) and (27), it follows that

$$ \delta y_{k} = \overline{C}\bigl(x_{d}^{ +} - x_{k}^{ +} \bigr). $$
(44)

By (44) and simple computation, we have

$$ E\bigl\{ \Vert \delta y_{k} \Vert _{1}\bigr\} \le \Vert \overline{C} \Vert _{1}E\bigl\{ \bigl\Vert x_{d}^{ +} - x_{k}^{ +} \bigr\Vert _{1} \bigr\} . $$
(45)

Equations (43) and (45) reduce to

$$\lim_{k \to \infty} E\bigl\{ \Vert \delta y_{k} \Vert _{1}\bigr\} = 0. $$

This completes the proof. □

5 Numerical simulations

To exhibit the effectiveness of the proposed learning scheme, simulations are carried out for a linear and a nonlinear system, respectively, where the tracking error is formulated as \(\| \delta y_{k}\|_{1} = \sum_{t = 0}^{30} | y_{d}(t) - y_{k}(t) |\). Since the derivation evaluates the tracking error in a statistical sense by mathematical expectation, the numerical experiments are repeated for 500 runs. Here, the term 'one run' means that the NILC-driven system operates for 60 iterations, by which an almost perfect tracking is achieved. Namely, the expectation of the system output \(y_{k}(t)\) is computed as \(E\{ y_{k}(t)\} = \frac{1}{500}\sum_{m = 1}^{500} y_{k}^{(m)}(t)\) and the expectation of the tracking error is formulated as \(E\{ \| \delta y_{k} \|_{1}\} = \frac{1}{500}\sum_{m = 1}^{500} \sum_{t = 0}^{30} | y_{d}(t) - y_{k}^{(m)}(t) |\), where the superscript (m) marks the run order.
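The averaging just described can be sketched as a small Monte Carlo helper; `run_once` is a placeholder for one NILC-driven run, and both names are ours, not the paper's.

```python
import numpy as np

def tracking_error(y_d, y):
    # ||delta y_k||_1 = sum_t |y_d(t) - y_k(t)|, as defined in the text
    return np.abs(y_d - y).sum()

def expected_error(run_once, y_d, runs=500, iters=60, seed=0):
    """Monte Carlo estimate of E{||delta y_k||_1} for k = 1..iters.

    run_once(rng, iters) stands in for one NILC-driven run and must return
    an (iters, len(y_d)) array holding the output of every iteration.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros(iters)
    for _ in range(runs):
        ys = run_once(rng, iters)
        acc += np.array([tracking_error(y_d, y) for y in ys])
    return acc / runs          # averaged over the runs, iteration by iteration
```

With `runs=500` and `iters=60` this reproduces the averaging used for Figures 4, 5, 8, and 9.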

Example 1

Consider a second-order linear system as follows:

$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{}} x_{1,k}(t + 1) \\ x_{2,k}(t + 1) \end{array}\displaystyle \right ] = \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} \frac{1}{6} & \frac{1}{5} \\ \frac{1}{5} & \frac{1}{6} \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{@{}c@{}} x_{1,k}(t) \\ x_{2,k}(t) \end{array}\displaystyle \right ] + \left [ \textstyle\begin{array}{@{}c@{}} \frac{1}{2} \\ \frac{1}{2} \end{array}\displaystyle \right ]u_{k}(t), \quad t \in S^{ -} = \{ 0,1,2, \ldots,29\}, \\& y_{k}(t) = x_{1,k}(t) + x_{2,k}(t), \quad t \in S = \{ 0,1,2, \ldots,30\}, \\& x_{1,k}(0) = 0, \qquad x_{2,k}(0) = 0. \end{aligned}$$
(46)

The desired trajectory is chosen as \(y_{d}(t) = \sin (\frac{\pi}{15}t)\), \(t \in S\). The initial control signal is set as \(u_{1}(t) = 0\) for \(t \in S^{ -}\). For the proposed NILC scheme (4) with (2) and (3), the convergent factor is computed as \(\tilde{\rho}_{1} = 1 - \Gamma (1 - \bar{\alpha} )(\frac{8}{19} - 2\bar{\omega} )\) under the assumption that the learning gain is restricted to \(\Gamma \in (0,1]\). Thus, the convergent condition \(\tilde{\rho}_{1} < 1\) in Corollary 1 holds if the probabilities satisfy \(0 \le \bar{\alpha} < 1\) and \(0 \le \bar{\omega} < \frac{4}{19}\), respectively.

Set the learning gain as \(\Gamma = 0.4\). Choose three groups of probabilities as \(\mathrm{P}_{1}\): \(\bar{\alpha} = 0\), \(\bar{\omega} = 0\); \(\mathrm{P}_{2}\): \(\bar{\alpha} = 0.2\), \(\bar{\omega} = 0.1\); and \(\mathrm{P}_{3}\): \(\bar{\alpha} = 0.4\), \(\bar{\omega} = 0.2\), respectively. It is verified that the convergent factors for the three groups of probabilities in Corollary 1 are \(\tilde{\rho}_{1}(\mathrm{P}_{1}) = \frac{79}{95} < 1\), \(\tilde{\rho}_{1}(\mathrm{P}_{2}) = \frac{2\text{,}207}{2\text{,}375} < 1\) and \(\tilde{\rho}_{1}(\mathrm{P}_{3}) = \frac{2\text{,}363}{2\text{,}375} < 1\), respectively, so the convergent conditions are satisfied. Figures 2 and 3 depict the outputs for the three groups of probabilities at the third and seventh iterations, respectively, where the dashed curves represent the desired outputs, the solid ones denote the outputs for \(\mathrm{P}_{1}\): \(\bar{\alpha} = 0\), \(\bar{\omega} = 0\), the dot-dash ones plot the outputs for \(\mathrm{P}_{2}\): \(\bar{\alpha} = 0.2\), \(\bar{\omega} = 0.1\) and the circle-solid ones present the outputs for \(\mathrm{P}_{3}\): \(\bar{\alpha} = 0.4\), \(\bar{\omega} = 0.2\), respectively.
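The three convergent factors quoted above follow from the closed form \(\tilde{\rho}_{1} = 1 - \Gamma (1 - \bar{\alpha} )(\frac{8}{19} - 2\bar{\omega} )\) derived for this example. A minimal check with exact rational arithmetic (the helper name `rho1_tilde` is ours):

```python
from fractions import Fraction

def rho1_tilde(gamma, alpha_bar, omega_bar):
    # convergent factor of Example 1: 1 - Gamma (1 - alpha_bar)(8/19 - 2 omega_bar)
    return 1 - gamma * (1 - alpha_bar) * (Fraction(8, 19) - 2 * omega_bar)

gamma = Fraction(2, 5)                          # Gamma = 0.4
groups = {
    "P1": (Fraction(0), Fraction(0)),           # alpha_bar = 0,   omega_bar = 0
    "P2": (Fraction(1, 5), Fraction(1, 10)),    # alpha_bar = 0.2, omega_bar = 0.1
    "P3": (Fraction(2, 5), Fraction(1, 5)),     # alpha_bar = 0.4, omega_bar = 0.2
}
for name, (a, w) in groups.items():
    print(name, rho1_tilde(gamma, a, w))
# P1 -> 79/95, P2 -> 2207/2375, P3 -> 2363/2375, all strictly below 1
```

The printed fractions coincide with the values quoted in the text.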

Figure 2: Outputs of linear system at the 3rd iteration.

Figure 3: Outputs of linear system at the 7th iteration.

It is observed that the outputs for larger dropout probabilities exhibit stronger stochastic oscillations and slower tracking. Figure 4 displays the expectations of the outputs at the seventh iteration, while Figure 5 shows the expectations of the tracking errors, which convey that the expectations of the tracking errors under the proposed NILC scheme (4) with (2) and (3) converge to zero very well.

Figure 4: Expectations of outputs of linear system at the 7th iteration.

Figure 5: Expectations of tracking errors of linear system.

Example 2

Consider a nonlinear system modeled as

$$\begin{aligned}& \left [ \textstyle\begin{array}{@{}c@{}} x_{1,k}(t + 1) \\ x_{2,k}(t + 1) \end{array}\displaystyle \right ] = \left [ \textstyle\begin{array}{@{}c@{}} \frac{1}{3}\sin (x_{2,k}(t)) \\ \frac{1}{3}\cos (x_{1,k}(t)) \end{array}\displaystyle \right ] + \left [ \textstyle\begin{array}{@{}c@{}} \frac{1}{2} \\ \frac{1}{2} \end{array}\displaystyle \right ]u_{k}(t),\quad t \in S^{ -} = \{ 0,1,2, \ldots,29\}, \\& y_{k}(t) = x_{1,k}(t) + x_{2,k}(t),\quad t \in S = \{ 0,1,2, \ldots,30\}, \\& x_{1,k}(0) = 0,\qquad x_{2,k}(0) = 0. \end{aligned}$$
(47)

The desired trajectory is set as \(y_{d}(t) = \sin ( \frac{\pi}{15}t )\), \(t \in S\). The control signal at the first iteration is set as \(u_{1}(t) = 0\) for \(t \in S^{ -}\). It is calculated that the Lipschitz constant of the function \(f(x_{1,k}(t),x_{2,k}(t)) = [\frac{1}{3}\sin (x_{2,k}(t)), \frac{1}{3}\cos (x_{1,k}(t))]^{ \top}\) is \(L_{f} = \frac{1}{3}\). Under the assumption that the learning gain Γ is confined within the range \((0,1]\), the convergent factor is formulated as \(\rho_{2} = 1 - \Gamma (1 - \bar{\alpha} )(0.5 - 2\bar{\omega} )\). This means that \(\rho_{2} < 1\) holds if the probabilities are restricted as \(0 \le \bar{\alpha} < 1\) and \(0 \le \bar{\omega} < 0.25\).

Choose the learning gain as \(\Gamma = 0.6\) and set three groups of probabilities as \(\mathrm{P}_{1}\): \(\bar{\alpha} = 0\), \(\bar{\omega} = 0\); \(\mathrm{P}_{2}\): \(\bar{\alpha} = 0.2\), \(\bar{\omega} = 0.1\); and \(\mathrm{P}_{3}\): \(\bar{\alpha} = 0.4\), \(\bar{\omega} = 0.2\), respectively. It is not difficult to verify that the convergent conditions in Theorem 2 hold for the three groups of probabilities, with \(\rho_{2}(\mathrm{P}_{1}) = 0.7 < 1\), \(\rho_{2}(\mathrm{P}_{2}) = 0.856 < 1\) and \(\rho_{2}(\mathrm{P}_{3}) = 0.964 < 1\), respectively. Figures 6 and 7 give the outputs at the third and sixth iterations, respectively, where the dashed curves mark the desired outputs, the solid ones are the outputs for \(\mathrm{P}_{1}\): \(\bar{\alpha} = 0\), \(\bar{\omega} = 0\), the dot-dash ones express the outputs for \(\mathrm{P}_{2}\): \(\bar{\alpha} = 0.2\), \(\bar{\omega} = 0.1\) and the circle-solid ones represent the outputs for \(\mathrm{P}_{3}\): \(\bar{\alpha} = 0.4\), \(\bar{\omega} = 0.2\), respectively.
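Since the scheme equations (2)-(4) are not reproduced in this section, the sketch below implements our reading of the abstract: a dropped output sample is mended with the synchronous desired output, a dropped input sample is refreshed with the input applied at the previous iteration, and the D-type law updates with the error at the next time instant. It simulates a single run of system (47) under group \(\mathrm{P}_{2}\) and should be taken as an illustrative sketch, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 30                                   # control on t = 0..29, output on t = 0..30
B = np.array([0.5, 0.5])
Gamma = 0.6                              # learning gain of Example 2
alpha_bar, omega_bar = 0.2, 0.1          # dropout probabilities (group P2)

y_d = np.sin(np.pi / 15.0 * np.arange(N + 1))   # desired trajectory

def f(x):
    # nonlinear drift of system (47)
    return np.array([np.sin(x[1]) / 3.0, np.cos(x[0]) / 3.0])

def run_plant(u):
    # one pass of system (47) from the resettable initial state x_k(0) = 0
    x, y = np.zeros(2), np.zeros(N + 1)
    for t in range(N):
        x = f(x) + B * u[t]
        y[t + 1] = x.sum()               # y = x1 + x2
    return y

u_applied = np.zeros(N)                  # u_1(t) = 0
errors = []
for k in range(60):
    y = run_plant(u_applied)
    errors.append(np.abs(y_d - y).sum())          # ||delta y_k||_1
    # output channel: a dropped sample (prob. alpha_bar) is mended with y_d
    y_tilde = np.where(rng.random(N + 1) < alpha_bar, y_d, y)
    # D-type update (our reading of scheme (4)): uses the error at t + 1
    u_next = u_applied + Gamma * (y_d[1:] - y_tilde[1:])
    # input channel: a dropped sample (prob. omega_bar) keeps the old input
    u_applied = np.where(rng.random(N) < omega_bar, u_applied, u_next)

print(errors[0], errors[-1])             # the tracking error shrinks markedly
```

Averaging such runs over 500 seeds and over groups \(\mathrm{P}_{1}\)-\(\mathrm{P}_{3}\) reproduces the qualitative behavior reported in Figures 6-9.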

Figure 6: Outputs of nonlinear system at the 3rd iteration.

Figure 7: Outputs of nonlinear system at the 6th iteration.

It is seen that the outputs approach the desired trajectory as the iteration number increases, though the outputs for larger dropout probabilities exhibit stochastic perturbations. Figure 8 plots the expectations of the outputs at the sixth iteration over the operation time interval, while Figure 9 depicts the expectations of the tracking errors along the iteration direction.

Figure 8: Expectations of outputs of nonlinear system at the 6th iteration.

Figure 9: Expectations of tracking errors of nonlinear system.

From Figures 2-9, it is found that the proposed NILC scheme (4) with the compensations (2) and (3) drives both the linear and the nonlinear systems to track the desired trajectory almost perfectly in a statistical sense.

6 Conclusion

In this paper, a D-type NILC scheme is developed for discrete-time systems with appropriate mending mechanisms for dropped input and output data. Under the assumption that the stochastic data dropouts obey 0-1 Bernoulli-type distributions, and by assessing the tracking performance in the form of mathematical expectation, the zero-error convergence of the NILC is derived for SISO linear and affine nonlinear time-invariant systems, respectively. Both the theoretical derivations and the numerical simulations show that the proposed NILC scheme enables linear and affine nonlinear time-invariant systems to track the desired trajectory well, though the stochastic dropouts may disturb the tracking behavior. Investigations of networked ILC systems subject to noise and parameter uncertainties remain challenging topics for future work.

References

  1. Arimoto, S, Kawamura, S, Miyazaki, F: Bettering operation of robots by learning. J. Robot. Syst. 1(2), 123-140 (1984)

    Article  Google Scholar 

  2. Bristow, DA, Tharayil, M, Alleyne, AG: A survey of iterative learning control. IEEE Control Syst. Mag. 26(3), 96-114 (2006)

    Article  Google Scholar 

  3. Chen, Y, Moore, KL, Yu, J, Zhang, T: Iterative learning control and repetitive control in hard disk drive industry - a tutorial. Int. J. Adapt. Control Signal Process. 22(4), 325-343 (2008)

    Article  MATH  Google Scholar 

  4. Mi, CT, Lin, H, Zhang, Y: Iterative learning control of antilock braking of electric and hybrid vehicles. IEEE Trans. Veh. Technol. 54(2), 486-494 (2005)

    Article  Google Scholar 

  5. Chen, Y, Wen, C, Gong, Z, Sun, M: An iterative learning controller with initial state learning. IEEE Trans. Autom. Control 44(2), 371-376 (1999)

    Article  MathSciNet  MATH  Google Scholar 

  6. Park, KH, Bien, Z: Intervalized iterative learning control for monotonic convergence in the sense of sup-norm. Int. J. Control 78(15), 1218-1227 (2005)

    Article  MathSciNet  MATH  Google Scholar 

  7. Ruan, XE, Wang, Q: Convergence properties of iterative learning control processes in the sense of the Lebesgue-P norm. Asian J. Control 14(4), 1095-1107 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  8. Bu, XH, Hou, ZS: Adaptive iterative learning control for linear systems with binary-valued observations. IEEE Trans. Neural Netw. Learn. Syst. (2016). doi:10.1109/TNNLS.2016.2616885.

    Google Scholar 

  9. Krtolica, R, Ozguner, U, Chan, H, Goktas, H, Winkelman, J, Liubakka, M: Stability of linear feedback-systems with random communication delays. Int. J. Control 59(4), 925-953 (1994)

    Article  MathSciNet  MATH  Google Scholar 

  10. Yang, FW, Wang, ZD, Hung, YS, Gani, M: H-infinity control for networked systems with random communication delays. IEEE Trans. Autom. Control 51(3), 511-518 (2006)

    Article  Google Scholar 

  11. Wen, DL, Yang, GH: Dynamic output feedback H-infinity control for networked control systems with quantisation and random communication delays. Int. J. Syst. Sci. 42(10), 1723-1734 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  12. Wu, J, Chen, TW: Design of networked control systems with packet dropouts. IEEE Trans. Autom. Control 52(7), 1314-1319 (2007)

    Article  MathSciNet  Google Scholar 

  13. Wang, D, Wang, JL, Wang, W: H-infinity controller design of networked control systems with Markov packet dropouts. IEEE Trans. Syst. Man Cybern. Syst. 43(3), 689-697 (2013)

    Article  Google Scholar 

  14. Liu, C, Xu, J, Wu, J: Iterative learning control for remote control systems with communication delay and data dropout. Math. Probl. Eng. 2012, Article ID 705474 (2012)

    MathSciNet  MATH  Google Scholar 

  15. Bu, XH, Yu, FS, Hou, ZS, Wang, FZ: Iterative learning control for a class of nonlinear systems with random packet losses. Nonlinear Anal., Real World Appl. 14(1), 567-580 (2013)

    Article  MathSciNet  MATH  Google Scholar 

  16. Liu, J, Ruan, XE: Networked iterative learning control approach for nonlinear systems with random communication delay. Int. J. Syst. Sci. 47(16), 3960-3969 (2016)

    Article  MathSciNet  MATH  Google Scholar 

  17. Ahn, HS, Chen, Y, Moore, KL: Intermittent iterative learning control. In: Proceedings of the 2006 IEEE International Conference on Intelligent Control, Munich, Germany, pp. 144-149 (2006)

    Google Scholar 

  18. Ahn, HS, Moore, KL, Chen, YQ: Discrete-time intermittent iterative learning control with independent data dropouts. In: Proceedings of 17th IFAC World Congress, Seoul, Korea, pp. 12442-12447 (2008)

    Google Scholar 

  19. Ahn, HS, Moore, KL, Chen, YQ: Stability of discrete-time iterative learning control with random data dropouts and delayed controlled signals in networked control systems. In: Int. Conf. Control., Autom., Robot. Vis., ICARCV Hanoi, Vietnam, pp. 757-762 (2008)

    Google Scholar 

  20. Bu, XH, Hou, ZS: Stability of iterative learning control with data dropouts via asynchronous dynamical system. Int. J. Autom. Comput. 8, 29-36 (2011)

    Article  Google Scholar 

  21. Bu, XH, Hou, ZS, Jin, ST, Chi, RH: An iterative learning control design approach for networked control systems with data dropouts. Int. J. Robust Nonlinear Control 26(1), 91-109 (2016)

  22. Bu, XH, Hou, ZS, Yu, FS, Wang, FZ: H∞ iterative learning controller design for a class of discrete-time systems with data dropouts. Int. J. Syst. Sci. 45(9), 1902-1912 (2014)

  23. Shen, D, Wang, YQ: Iterative learning control for networked stochastic systems with random packet losses. Int. J. Control 88(5), 959-968 (2015)

  24. Shen, D, Wang, YQ: ILC for networked nonlinear systems with unknown control direction through random lossy channel. Syst. Control Lett. 77, 30-39 (2015)

  25. Liu, J, Ruan, XE: Networked iterative learning control for linear-time-invariant systems with random packet losses. In: Proceedings of the 35th Chinese Control Conference, Chengdu, China, pp. 38-43 (2016)

Acknowledgements

The authors sincerely appreciate the support of the National Natural Science Foundation of China under Grants No. F010114-60974140 and 61273135.

Author information

Corresponding author

Correspondence to Xiaoe Ruan.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly to the writing of this article. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Liu, J., Ruan, X. Networked iterative learning control for discrete-time systems with stochastic packet dropouts in input and output channels. Adv Differ Equ 2017, 53 (2017). https://doi.org/10.1186/s13662-017-1103-8


Keywords