
Theory and Modern Applications

Iterative learning control for MIMO parabolic partial difference systems with time delay


In this paper, the iterative learning control (ILC) technique is extended to multi-input multi-output (MIMO) systems governed by parabolic partial difference equations with time delay. Two types of ILC algorithms are presented for systems with state delay and input delay, respectively. Sufficient conditions for tracking error convergence are established under suitable assumptions. A detailed convergence analysis, based on the discrete Gronwall inequality and the discrete Green formula, is given for systems with time-varying uncertain coefficients. Numerical results show the effectiveness of the proposed ILC algorithms.

1 Introduction

Systems governed by partial difference equations form an important class of dynamic systems, which first arose from the numerical solution of partial differential equations. In fact, the state space model of a partial difference system can describe many natural laws, such as the logistic model with spatial migration, processes in mathematical physics (the discrete heat equation), and engineering technology (image processing, digital signal processing, circuit systems) (see [1, 2] and the references therein). There are many excellent results on partial difference equations/systems, including existence/nonexistence of solutions [3, 4], stability [5], oscillation [6], positivity of solutions [7], etc. From a practical viewpoint, since the first step in implementing control for distributed parameter systems modeled by partial differential equations is to discretize the system variables, studying the control of partial difference systems is of great value.

Iterative learning control (ILC) is an intelligent control method that imitates human learning behavior [8]. For a repeatable system on a given finite time interval, ILC uses the tracking objective together with previous input and output information to improve the system performance as the iteration number increases. The ILC algorithm is simple, yet it can deal with many complex systems with nonlinear and uncertain characteristics [9, 10]. As an effective control algorithm, ILC has been used to track a given target in many types of systems, including ordinary difference/differential systems [11, 12], partial differential systems or distributed parameter systems [13, 14], impulsive systems [15], stochastic systems [16], fractional systems [17], etc. However, there are few results on ILC for partial difference systems.

Motivated by the above, in this paper we investigate the ILC problem for MIMO parabolic partial difference equations with delay. A P-type ILC scheme and an ILC scheme with a time delay parameter are proposed for systems with state delay and input delay, respectively. Convergence conditions are derived using discrete forms of Gronwall's inequality and Green's formula. It is shown that a suitable choice of the learning gain guarantees convergence of the tracking error between the given desired output and the actual output.

Compared with the current literature, the main features of this work are summarized as follows: (1) It can handle MIMO parabolic partial difference systems. Although Refs. [18, 19] studied ILC for partial difference systems, those systems are single-input single-output (SISO). Because a MIMO system involves multiple input and output variables, whereas a SISO system has only one of each, the mathematical analysis of a MIMO system is more complex than that of a SISO system. (2) The systems include time delay in the state and in the input. The ILC of systems with time delay is studied in [12, 20], but those systems are stated by ordinary difference equations, which differs from this paper. The system in this paper is governed by partial difference equations and simultaneously involves three different indices: time, space, and iteration; therefore the convergence analysis is more complex. (3) We use methods of partial difference equations, which have been applied to the stability analysis of partial difference systems [2, 46], instead of the Lyapunov method or linear matrix inequalities (for multidimensional dynamic systems) [21, 22].

The rest of the paper is arranged as follows. In Sect. 2, we present the problem formulation and some preliminaries. Section 3 provides the ILC design and a rigorous convergence analysis. In Sect. 4, simulation results are illustrated. Finally, conclusions are drawn in Sect. 5.

2 ILC system description

In this paper, we consider the following two classes of parabolic-type partial difference systems, which run a given task repeatedly on a finite time interval \([0,J]\). The first class is the system with time delay in the state:

$$\begin{aligned} &\Delta_{2}{\mathbf {Z}}_{k}(x,s)={\mathbf {D}}(s)\Delta_{1}^{2}{\mathbf {Z}}_{k}(x-1,s)+{\mathbf {A}}(s){\mathbf {Z}}_{k}(x,s)+{\mathbf {A}}_{\tau}(s){\mathbf {Z}}_{k}(x,s-\tau)+{\mathbf {B}}(s){\mathbf {U}}_{k}(x,s), \quad (1a) \\ &{\mathbf {Y}}_{k}(x,s)={\mathbf {C}}(s){\mathbf {Z}}_{k}(x,s)+{\mathbf {G}}(s){\mathbf {U}}_{k}(x,s). \quad (1b) \end{aligned}$$

The second class is the system with time delay in the input, that is,

$$\begin{aligned} &\Delta_{2}{\mathbf {Z}}_{k}(x,s)={\mathbf {D}}(s)\Delta_{1}^{2}{\mathbf {Z}}_{k}(x-1,s)+{\mathbf {A}}(s){\mathbf {Z}}_{k}(x,s)+{\mathbf {B}}(s){\mathbf {U}}_{k}(x,s)+{\mathbf {B}}_{\tau}(s){\mathbf {U}}_{k}(x,s-\tau), \quad (2a) \\ &{\mathbf {Y}}_{k}(x,s)={\mathbf {C}}(s){\mathbf {Z}}_{k}(x,s)+{\mathbf {G}}(s){\mathbf {U}}_{k}(x,s)+{\mathbf {G}}_{\tau}(s){\mathbf {U}}_{k}(x,s-\tau). \quad (2b) \end{aligned}$$

In systems (1a)–(1b) and (2a)–(2b), k is the iteration index. \(\mathbf {Z}_{k}\in \mathbb{R}^{n}\), \(\mathbf {U}_{k}\in\mathbb{R}^{m}\), and \(\mathbf {Y}_{k}\in\mathbb {R}^{l}\) denote the system state vector, the input vector, and the output vector, respectively. \(x,s\) are the spatial and time discrete variables, respectively, \(1\leq x\leq I\), \(0\leq s\leq J\), where \(I,J\) are given integers. \(\mathbf {A}(s),{\mathbf {A}}_{\tau}(s)\in\mathbb {R}^{n\times n}\), \(\mathbf {B}(s),{\mathbf {B}}_{\tau}(s)\in\mathbb{R}^{n\times m}\), \(\mathbf {C}(s)\in\mathbb{R}^{l\times n}\), \(\mathbf {G}(s),{\mathbf {G}}_{\tau}(s)\in\mathbb{R}^{l\times m}\) are uncertain bounded real matrices for all \(0\leq s\leq J\), and \(\mathbf {D}(s)\) is a positive bounded diagonal matrix for all \(0\leq s\leq J\), written as

$$\begin{aligned} \mathbf {D}(s)=\operatorname{diag}\bigl\{ d_{1}(s),d_{2}(s),\ldots,d_{n}(s)\bigr\} ,\quad 0< p_{i}\leq d_{i}(s)< \infty, \end{aligned}$$

where \(p_{i}\) is a known constant for \(i=1,2,\ldots,n\), and τ is a known time delay. The corresponding boundary and initial conditions of systems (1a)–(1b) and (2a)–(2b) will be given later. In the two systems, the partial differences are defined as usual, i.e.,

$$\begin{aligned} &\Delta_{2}{\mathbf {Z}}_{k}(x,s)={\mathbf {Z}}_{k}(x,s+1)-{\mathbf {Z}}_{k}(x,s), \quad (3a) \\ &\Delta_{1}{\mathbf {Z}}_{k}(x,s)={\mathbf {Z}}_{k}(x+1,s)-{\mathbf {Z}}_{k}(x,s), \quad (3b) \\ &\Delta_{1}^{2}{\mathbf {Z}}_{k}(x-1,s)=\Delta_{1}\bigl(\Delta_{1}{\mathbf {Z}}_{k}(x-1,s)\bigr)={\mathbf {Z}}_{k}(x+1,s)-2{\mathbf {Z}}_{k}(x,s)+{\mathbf {Z}}_{k}(x-1,s), \quad (3c) \end{aligned}$$

where (3a) is the first order difference scheme for time variable s, (3b) and (3c) are the first order and the second order difference schemes for space variable x, respectively.
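These three difference schemes translate directly into code. The following minimal Python sketch implements them on a sample grid; the grid sizes and the test array are our own choices, not from the paper.

```python
import numpy as np

def delta2(Z, x, s):
    """First-order forward difference in time s, per (3a)."""
    return Z[x, s + 1] - Z[x, s]

def delta1(Z, x, s):
    """First-order forward difference in space x, per (3b)."""
    return Z[x + 1, s] - Z[x, s]

def delta1_sq(Z, x, s):
    """Second-order central difference in space, per (3c):
    Delta_1^2 Z(x-1, s) = Z(x+1, s) - 2 Z(x, s) + Z(x-1, s)."""
    return Z[x + 1, s] - 2 * Z[x, s] + Z[x - 1, s]

# sample grid: Z[x, s] for 0 <= x <= I+1, 0 <= s <= J
I, J = 5, 4
Z = np.arange((I + 2) * (J + 1), dtype=float).reshape(I + 2, J + 1)

# (3c) equals the difference of two consecutive first-order space differences
assert np.isclose(delta1_sq(Z, 2, 1), delta1(Z, 2, 1) - delta1(Z, 1, 1))
```

The assertion checks the identity \(\Delta_{1}^{2}{\mathbf {Z}}(x-1,s)=\Delta_{1}{\mathbf {Z}}(x,s)-\Delta_{1}{\mathbf {Z}}(x-1,s)\) used repeatedly in the proofs below.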

The control objective of this paper is to design an ILC controller, based on the measurable system output \(\mathbf {Y}_{k}(x,s)\), that tracks the given desired target \(\mathbf {Y}_{d}(x,s)\), so that the tracking error \({\mathbf {e}}_{k}(x,s)\) vanishes as the iteration number k tends to infinity, that is,

$$ \lim_{k\rightarrow\infty} \mathbf {Y}_{k}(x,s)=\mathbf {Y}_{d}(x,s). $$

For convenience, some notations used in this paper are defined as follows.

(1) The norm \(\|\cdot\|\) is defined as \(\|{\mathbf {A}}\|=\sqrt{\lambda _{\max}({\mathbf {A}}^{\mathrm{T}}\mathbf {A})},{\mathbf {A}}\in\mathbb{R}^{n\times n}\), where \(\lambda_{\max}\) denotes the maximum eigenvalue. If \({\mathbf {A}}(s):\{0,1,2,\ldots,J\}\rightarrow\mathbb{R}^{n\times n}\), then \(\|{\mathbf {A}}\|=\sqrt{\lambda_{\max_{0\leq s\leq J }}({\mathbf {A}(s)}^{\mathrm{T}}\mathbf {A}(s))} \), where \(\lambda_{\max_{0\leq s\leq J }}\) indicates the maximum eigenvalue of \({\mathbf {A}(s)}^{\mathrm{T}}\mathbf {A}(s)\ ({0\leq s\leq J })\). We will simply write \(\|{\mathbf {A}}\|^{2}\) as \(\bar{\lambda}_{A}\).

(2) For \({\mathbf {f}}(x,s)\in\mathbb{R}^{n}\), \(0\leq x\leq I, 0\leq s\leq J\), the \(\mathbf {L}^{2}\)-norm of \({\mathbf {f}}(x,s)\) is defined as \(\|{\mathbf {f}}(\cdot,s)\|^{2}_{\mathbf {L}^{2}}=\sum_{x=1}^{I}({\mathbf {f}(x,s)}^{\mathrm{T}}{\mathbf {f}}(x,s))\). For a given constant \(\lambda>0\), the \((\mathbf {L}^{2},\lambda)\)-norm of \({\mathbf {f}}(x,s)\) can be defined as

$$ \Vert {\mathbf {f}} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}=\sup_{0\leq s\leq J} \bigl\{ \bigl\Vert {\mathbf {f}}(\cdot,s) \bigr\Vert ^{2}_{\mathbf {L}^{2}} \lambda^{s}\bigr\} =\sup_{0\leq s\leq J}\Biggl\{ \sum _{x=1}^{I}\bigl({\mathbf {f}(x,s)}^{\mathrm{T}} \mathbf {f}(x,s)\bigr)\lambda ^{s}\Biggr\} . $$
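The \(\mathbf {L}^{2}\)- and \((\mathbf {L}^{2},\lambda)\)-norms transcribe directly into code; in the following sketch the array layout (`f[x-1, s]` storing \({\mathbf {f}}(x,s)\)) is our own convention.

```python
import numpy as np

def l2_norm_sq(f, s):
    """||f(., s)||^2_{L^2} = sum_{x=1}^{I} f(x, s)^T f(x, s);
    f has shape (I, J+1, n), with f[x-1, s] the vector f(x, s)."""
    return float(np.sum(f[:, s, :] ** 2))

def l2_lambda_norm_sq(f, lam):
    """||f||^2_{(L^2, lambda)} = sup_{0<=s<=J} { ||f(., s)||^2_{L^2} * lambda^s }."""
    J = f.shape[1] - 1
    return max(l2_norm_sq(f, s) * lam ** s for s in range(J + 1))

# small sanity check: I=4, J=2, n=2, all entries equal to 1
f = np.ones((4, 3, 2))
assert l2_norm_sq(f, 0) == 8.0
```

For \(0<\lambda<1\) the factor \(\lambda^{s}\) discounts later time instants, which is the standard device that makes the Gronwall-type estimates in Sect. 3 contract.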

(3) For \(\mathbf {f}_{k}(x,s)\in\mathbb{R}^{n},\xi \geq 1\), the norm \(\|\mathbf {f}_{k}\|^{2}_{(\mathbf {L}^{2},\lambda(\xi))}\) (which satisfies the three defining properties of a norm) is defined as follows:

$$ \Vert \mathbf {f}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda(\xi))}=\sup _{0\leq s\leq J}\Biggl\{ \sum_{x=1}^{I} \bigl({\mathbf {f}_{k}(x,s)}^{\mathrm{T}}\mathbf {f}_{k}(x,s)\bigr)\lambda ^{s} \xi^{k}\Biggr\} , $$

If \(\xi=1\), then \(\|\mathbf {f}_{k}\|^{2}_{(\mathbf {L}^{2},\lambda(1))}=\|\mathbf {f}_{k}\| ^{2}_{(\mathbf {L}^{2},\lambda)}\).

(4) According to the Rayleigh–Ritz theorem, for a symmetric matrix \({\mathbf {A}}\in\mathbb{R}^{n\times n}\), we have \(\lambda_{1}x^{\mathrm{T}}x\leq x^{\mathrm{T}}{\mathbf {A}}x \leq \lambda_{n}x^{\mathrm{T}}x\) for \(x\in\mathbb{R}^{n}\), where \(\lambda_{i}\ (i=1,2,\ldots,n,\lambda _{1}\leq \cdots \leq \lambda_{n})\) are the eigenvalues of the square matrix A. A similar result holds for a time-varying matrix: letting \({\mathbf {A}(j)}\in\mathbb{R}^{n\times n}\ (0\leq j\leq J)\), we can obtain \(\lambda_{\min_{0\leq j\leq J }}(\mathbf {A}(j))x^{\mathrm{T}}x\leq x^{\mathrm{T}}{\mathbf {A}{(j)}}x \leq \lambda_{\max_{0\leq j\leq J }}(\mathbf {A}{(j)})x^{\mathrm{T}}x\), where \(\lambda_{\min_{0\leq j\leq J }}(\mathbf {A}{(j)})\) and \(\lambda_{\max_{0\leq j\leq J }} ({\mathbf {A}}{(j)})\) denote the minimum and maximum eigenvalues of the square matrix \({\mathbf {A}}(j), 0\leq j\leq J\), respectively.

The following lemmas will be used in later sections.

Lemma 1

(Discrete Gronwall’s inequality, [2, 5])

Let \(\{v(x)\}\), \(\{B(x)\}\), and \(\{D(x)\}\) be real sequences defined for \(x\geq 0\) which satisfy

$$ v(x+1)\leq B(x)v(x)+D(x),\quad B(x)\geq 0, x\geq 0. $$


Then
$$ v(s)\leq \prod_{x=0}^{s-1}B(x)v(0)+\sum _{x=0}^{s-1}D(x)\prod _{i=x+1}^{s-1} B(i), \quad s\geq 0. $$
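Lemma 1 can be checked numerically; the sketch below builds a sequence that satisfies the hypothesis with equality (so the bound holds exactly) and verifies it at every index. All numerical choices are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 20
B = rng.uniform(0.0, 1.5, S)    # B(x) >= 0, as the lemma requires
D = rng.uniform(-1.0, 1.0, S)

# a sequence saturating the hypothesis: v(x+1) = B(x) v(x) + D(x)
v = np.empty(S + 1)
v[0] = 1.0
for x in range(S):
    v[x + 1] = B[x] * v[x] + D[x]

def gronwall_bound(s, v0, B, D):
    """Right-hand side of Lemma 1:
    prod_{x<s} B(x) * v(0) + sum_{x<s} D(x) * prod_{x<i<s} B(i)."""
    total = np.prod(B[:s]) * v0
    for x in range(s):
        total += D[x] * np.prod(B[x + 1:s])
    return total

for s in range(S + 1):
    assert v[s] <= gronwall_bound(s, v[0], B, D) + 1e-9
```

Because the recursion saturates the hypothesis, the bound is attained with equality here; for any sequence merely satisfying the inequality, the same right-hand side is an upper bound.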

Lemma 2

(Discrete Green’s formula for vector)

Under the zero boundary value condition, i.e., \({\mathbf {Z}}_{k}(0,s)=0={\mathbf {Z}}_{k}(I+1,s)\), for system (1a)–(1b) we have

$$\begin{aligned} \sum_{x=1}^{I}{{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){ \Delta}_{1}^{2}{{\mathbf {Z}}}_{k}(x-1,s)=-{\sum _{x=0}^{I}}\bigl(\Delta_{1}{{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}}\bigl(\Delta_{1}{{\mathbf {Z}}}_{k}(x,s)\bigr). \end{aligned}$$


Proof
In view of the boundary condition \({\mathbf {Z}}_{k}(0,s)=0={\mathbf {Z}}_{k}(I+1,s)\), we can obtain that

$$\begin{aligned} & \sum_{x=1}^{I}{{{\mathbf {Z}}}_{k}^{\mathrm{T}}}(x,s){\Delta}_{1}^{2}{{ \mathbf {Z}}}_{k}(x-1,s) \\ &\quad=\sum_{x=1}^{I}\bigl[{{{\mathbf {Z}}}_{k}^{\mathrm{T}}}(x,s){{{\mathbf {Z}}}_{k}}(x+1,s)-2{{{\mathbf {Z}}}_{k}^{\mathrm{T}}}(x,s){{{\mathbf {Z}}}_{k}}(x,s)+{{{\mathbf {Z}}}_{k}^{\mathrm{T}}}(x,s){{{\mathbf {Z}}}_{k}}(x-1,s)\bigr] \\ &\quad={{{\mathbf {Z}}}_{k}^{\mathrm{T}}}(I+1,s)\Delta_{1}{{\mathbf {Z}}}_{k}(I,s)-{{\mathbf {Z}}}_{k}^{\mathrm{T}}(1,s)\Delta_{1}{{\mathbf {Z}}}_{k}(0,s)-\sum_{x=1}^{I} \bigl(\Delta _{1}{{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}}\bigl( \Delta_{1}{{\mathbf {Z}}}_{k}(x,s)\bigr) \\ &\quad =-{\sum_{x=0}^{I}}\bigl( \Delta_{1}{{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}}\bigl( \Delta _{1}{{\mathbf {Z}}}_{k}(x,s)\bigr). \end{aligned}$$

This completes the proof of Lemma 2. □
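The identity of Lemma 2 is easy to verify numerically for a random state with zero boundary values; the following sketch is our own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
I, n = 8, 3

# interior values Z(1..I); zero boundary Z(0) = Z(I+1) = 0
Z = np.zeros((I + 2, n))
Z[1:I + 1] = rng.standard_normal((I, n))

d1 = Z[1:] - Z[:-1]                     # Delta_1 Z(x), x = 0..I
d1sq = Z[2:] - 2 * Z[1:-1] + Z[:-2]     # Delta_1^2 Z(x-1), x = 1..I

# left-hand side: sum_{x=1}^{I} Z(x)^T Delta_1^2 Z(x-1)
lhs = sum(Z[x] @ d1sq[x - 1] for x in range(1, I + 1))
# right-hand side: -sum_{x=0}^{I} (Delta_1 Z(x))^T (Delta_1 Z(x))
rhs = -sum(d1[x] @ d1[x] for x in range(0, I + 1))
assert np.isclose(lhs, rhs)
```

The right-hand side is a negative sum of squares, which is exactly what makes the \(\Omega_{2}\) term in the proof of Proposition 1 nonpositive.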

3 ILC design and convergence analysis

In this section, we propose our iterative learning control algorithms, establish a sufficient condition for the convergence of the algorithm, and provide a rigorous proof. First, we consider the case of the system with time delay in state (i.e., system (1a)–(1b)) in the following Sect. 3.1.

3.1 System with time delay in state

For system (1a)–(1b), we assume the corresponding initial and boundary conditions as follows:

$$\begin{aligned} &{\mathbf {Z}}_{k}(x,s)={\mathbf {\varphi}}_{0}(x,s),\quad 1\leq x \leq I,-\tau \leq s\leq 0, \end{aligned}$$
$$\begin{aligned} &{\mathbf {Z}}_{k}(0,s)=0={\mathbf {Z}}_{k}(I+1,s),\quad -\tau \leq s \leq J, \end{aligned}$$

for \(k=1,2,\ldots \) .

To find a control input sequence \(\mathbf {U}_{k+1}(x,s)\) for system (1a)–(1b), we propose the following P-type iterative learning control algorithm:

$$ {\mathbf {U}}_{k+1}(x,s)={\mathbf {U}}_{k}(x,s)+{\mathbf {\Gamma}}(s){\mathbf {e}}_{k}(x,s), $$

where \(\mathbf {e}_{k}(x,s)=\mathbf {Y}_{d}(x,s)-{\mathbf {Y}}_{k}(x,s)\) is the kth output error corresponding to the kth input \(\mathbf {U}_{k}(x,s)\), and \(\mathbf {\Gamma}(s)\) is the gain matrix in the learning process. Thus, for system (1a)–(1b), (4) is transformed into

$$ \lim_{k\rightarrow\infty} \mathbf {e}_{k}(x,s)=0,\quad 1\leq x \leq I, 0\leq s\leq J. $$
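The interaction of the learning law (7) with the plant (1a)–(1b) can be sketched numerically. The following Python sketch iterates a scalar (\(n=m=l=1\)) instance of the system; all numerical values (the coefficients `D`, `A`, `A_tau`, `B`, `C`, `G`, the gain `Gamma`, and the desired output) are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# illustrative scalar coefficients -- our own choices, not from the paper
D, A, A_tau, B, C, G = 0.1, -0.2, 0.05, 1.0, 1.0, 1.0
Gamma = 0.9                 # then (1 - G*Gamma)^2 = 0.01, so 2*rho < 1 holds
I, J, tau = 10, 15, 2

x_grid = np.arange(1, I + 1)
Yd = np.outer(np.sin(np.pi * x_grid / (I + 1)), np.ones(J + 1))  # desired Y_d(x, s)

def run_system(U):
    """One pass of (1a)-(1b) with zero initial/boundary state; U has shape (I, J+1)."""
    Z = np.zeros((I + 2, J + tau + 1))        # column s+tau holds time s, s = -tau..J
    for c in range(tau, tau + J):             # time s = c - tau runs from 0 to J-1
        lap = Z[2:, c] - 2 * Z[1:-1, c] + Z[:-2, c]     # Delta_1^2 Z(x-1, s)
        Z[1:-1, c + 1] = ((1 + A) * Z[1:-1, c] + D * lap
                          + A_tau * Z[1:-1, c - tau] + B * U[:, c - tau])
    return C * Z[1:-1, tau:] + G * U          # output Y_k(x, s), s = 0..J

U = np.zeros((I, J + 1))
for k in range(50):                           # P-type law (7): U <- U + Gamma * e
    e = Yd - run_system(U)
    U = U + Gamma * e

print("max |e| after 50 iterations:", np.abs(Yd - run_system(U)).max())
```

With this gain the direct-feedthrough part of the error map contracts by the factor \(1-G\Gamma=0.1\) per iteration, and the strictly causal state contribution is nilpotent over the finite horizon, so the tracking error decays to numerical precision.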

For simplicity of presentation, we denote

$$\begin{aligned} &\bar{{\mathbf {Z}}}_{k}(x,s)={\mathbf {Z}}_{k+1}(x,s)-{\mathbf {Z}}_{k}(x,s), \\ &\bar{{\mathbf {U}}}_{k}(x,s)={\mathbf {U}}_{k+1}(x,s)-{\mathbf {U}}_{k}(x,s), \\ &\bar{{\mathbf {Y}}}_{k}(x,s)={\mathbf {Y}}_{k+1}(x,s)-{\mathbf {Y}}_{k}(x,s). \end{aligned}$$

Then, based on the kth and the \((k+1)\)th learning system of (1a)–(1b), we have

$$\begin{aligned} &\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)={\mathbf {D}}(s)\Delta_{1}^{2}\bar{{\mathbf {Z}}}_{k}(x-1,s)+{\mathbf {A}}(s)\bar{{\mathbf {Z}}}_{k}(x,s)+{\mathbf {A}}_{\tau}(s)\bar{{\mathbf {Z}}}_{k}(x,s-\tau)+{\mathbf {B}}(s)\bar{{\mathbf {U}}}_{k}(x,s), \quad (9a) \\ &\bar{{\mathbf {Y}}}_{k}(x,s)={\mathbf {C}}(s)\bar{{\mathbf {Z}}}_{k}(x,s)+{\mathbf {G}}(s)\bar{{\mathbf {U}}}_{k}(x,s). \quad (9b) \end{aligned}$$

In order to derive the convergence conditions of the ILC algorithm described by (7), we give the following proposition.

Proposition 1

Under the initial and boundary conditions given in (5), (6), for \(\bar {{\mathbf {Z}}}_{k}(x,s)\) (\(0\leq s\leq J\)) in (9a)–(9b), we have

$$\begin{aligned} &\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s+1)\bar{{\mathbf {Z}}}_{k}(x,s+1) \\ &\quad \leq c_{1}\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s)\bar {{\mathbf {Z}}}_{k}(x,s) \\ &\qquad{}+c_{2}\sum_{x=1}^{I}{\bar{{ \mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s-\tau)\bar{{\mathbf {Z}}}_{k}(x,s- \tau)+c_{3}\sum_{x=1}^{I}{\bar{{ \mathbf {U}}}_{k}}^{\mathrm{T}}(x,s)\bar {{\mathbf {U}}}_{k}(x,s), \end{aligned}$$

where \(c_{1},c_{2},c_{3}\) are positive bounded constants that will be given later.


Proof
By (9a) and the definition of partial difference (3a), we have

$$\begin{aligned} \bar{{\mathbf {Z}}}_{k}(x,s+1)={}&{\mathbf {D}}(s)\Delta_{1}^{2} \bar{{\mathbf {Z}}}_{k}(x-1,s)+\bigl({\mathbf {I}}+{\mathbf {A}}(s)\bigr)\bar{{\mathbf {Z}}}_{k}(x,s) \\ &{}+{\mathbf {A}}_{\tau}(s)\bar{{\mathbf {Z}}}_{k}(x,s-\tau) +{\mathbf {B}}(s)\bar{{\mathbf {U}}}_{k}(x,s). \end{aligned}$$

Here and in later sections, I denotes the identity matrix.

Multiplying both sides of (11) from the left by \({\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s)\), we have

$$\begin{aligned} {\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s)\bar{{\mathbf {Z}}}_{k}(x,s+1)={}&{\bar {{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){ \mathbf {D}}(s)\Delta_{1}^{2}\bar{{\mathbf {Z}}}_{k}(x-1,s)+{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bigl({\mathbf {I}}+{\mathbf {A}}(s) \bigr)\bar{{\mathbf {Z}}}_{k}(x,s) \\ & {}+{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){\mathbf {A}}_{\tau}(s)\bar{{\mathbf {Z}}}_{k}(x,s-\tau)+ {\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){\mathbf {B}}(s)\bar{{\mathbf {U}}}_{k}(x,s). \end{aligned}$$

On the other hand, from (3a), we have

$$\begin{aligned} &\bigl(\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s) \bigr)^{\mathrm{T}}\bigl(\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s) \bigr) \\ &\quad=\bigl(\bar{{\mathbf {Z}}}_{k}(x,s+1)-\bar{{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}}\bigl(\bar{{\mathbf {Z}}}_{k}(x,s+1)- \bar{{\mathbf {Z}}}_{k}(x,s)\bigr) \\ &\quad = {\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s+1)\bar{{\mathbf {Z}}}_{k}(x,s+1)-2{\bar {{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s+1) \bar{{\mathbf {Z}}}_{k}(x,s) + {\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {Z}}}_{k}(x,s). \end{aligned}$$

Rearranging (13) yields

$$\begin{aligned} &{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s+1)\bar{{\mathbf {Z}}}_{k}(x,s+1) \\ &\quad = \bigl(\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}}\bigl(\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)\bigr)+2\bar{{\mathbf {Z}}}_{k}^{\mathrm{T}}(x,s){ \bar{{\mathbf {Z}}}_{k}}(x,s+1) - {\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {Z}}}_{k}(x,s). \end{aligned}$$

Then, substituting (12) into (14), we have

$$\begin{aligned} {\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s+1)\bar{{\mathbf {Z}}}_{k}(x,s+1) ={}&\bigl(\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}}\bigl(\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)\bigr)+2{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){ \mathbf {D}}(s)\Delta _{1}^{2}\bar{{\mathbf {Z}}}_{k}(x-1,s) \\ & {}+2{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bigl({\mathbf {I}}+{\mathbf {A}}(s)\bigr)\bar{{\mathbf {Z}}}_{k}(x,s)+2{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){ \mathbf {A}}_{\tau}(s)\bar {{\mathbf {Z}}}_{k}(x,s-\tau) \\ & {}+2{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){\mathbf {B}}(s)\bar{{\mathbf {U}}}_{k}(x,s) -{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {Z}}}_{k}(x,s). \end{aligned}$$

Summing both sides of (15) from \(x=1\) to I, we get

$$\begin{aligned} &\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s+1)\bar{{\mathbf {Z}}}_{k}(x,s+1) \\ &\quad=\sum_{x=1}^{I}\bigl( \Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}}\bigl( \Delta_{2}\bar {{\mathbf {Z}}}_{k}(x,s)\bigr)+2\sum _{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){ \mathbf {D}}(s)\Delta_{1}^{2}\bar{{\mathbf {Z}}}_{k}(x-1,s) \\ & \qquad{}+\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bigl({\mathbf {I}}+2{\mathbf {A}}(s)\bigr)\bar{{ \mathbf {Z}}}_{k}(x,s)+2\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){\mathbf {A}}_{\tau}(s) \bar{{\mathbf {Z}}}_{k}(x,s-\tau) \\ & \qquad{}+2\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){\mathbf {B}}(s)\bar {{\mathbf {U}}}_{k}(x,s) \\ &\quad=\Omega_{1}+\Omega_{2}+\Omega_{3}+ \Omega_{4}+\Omega_{5}, \end{aligned}$$

where \(\Omega_{i}\ (i=1,2,\ldots,5)\) are the first through fifth terms on the right-hand side of the first equality in (16), respectively. We will estimate each \(\Omega_{i}\) separately.

For \(\Omega_{1}\), by (9a) and (3c), we have

$$\begin{aligned} \Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)={}&{\mathbf {D}}(s)\bar{{\mathbf {Z}}}_{k}(x+1,s) +\bigl({\mathbf {A}}(s)-2{\mathbf {D}}(s)\bigr)\bar{{\mathbf {Z}}}_{k}(x,s)+{\mathbf {D}}(s){\bar{\mathbf {Z}}}_{k}(x-1,s) \\ & {}+{\mathbf {A}}_{\tau}(s)\bar{{\mathbf {Z}}}_{k}(x,s-\tau)+{\mathbf {B}}(s)\bar{{\mathbf {U}}}_{k}(x,s). \end{aligned}$$

Using the elementary inequality \((\sum_{i=1}^{N}z_{i})^{2}\leq N\sum_{i=1}^{N}z_{i}^{2}\), one can show that

$$\begin{aligned} \Omega_{1}={}&\sum_{x=1}^{I} \bigl(\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}} \bigl(\Delta_{2}\bar{{\mathbf {Z}}}_{k}(x,s)\bigr) \\ ={}&\sum_{x=1}^{I}\bigl[\bigl({\mathbf {D}}(s) \bar{{\mathbf {Z}}}_{k}(x+1,s) +\bigl({\mathbf {A}}(s)-2{\mathbf {D}}(s)\bigr)\bar{{ \mathbf {Z}}}_{k}(x,s) \\ &{}+{\mathbf {D}}(s){\bar{\mathbf {Z}}}_{k}(x-1,s)+{ \mathbf {A}}_{\tau}(s)\bar{{\mathbf {Z}}}_{k}(x,s-\tau) \\ & {}+{\mathbf {B}}(s)\bar{{\mathbf {U}}}_{k}(x,s)\bigr)^{\mathrm{T}}\bigr] \bigl[\bigl({\mathbf {D}}(s)\bar{{\mathbf {Z}}}_{k}(x+1,s) +\bigl({\mathbf {A}}(s)-2{ \mathbf {D}}(s)\bigr)\bar{{\mathbf {Z}}}_{k}(x,s) +{\mathbf {D}}(s){\bar {\mathbf {Z}}}_{k}(x-1,s) \\ & {}+{\mathbf {A}}_{\tau}(s)\bar{{\mathbf {Z}}}_{k}(x,s-\tau)+{\mathbf {B}}(s)\bar{{\mathbf {U}}}_{k}(x,s)\bigr)\bigr] \\ \leq {}&5\bar{\lambda}_{\mathbf {D}}\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{T}(x+1,s)\bar{{\mathbf {Z}}}_{k}(x+1,s)+5 \bar{\lambda}_{A-2D}\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s)\bar{{\mathbf {Z}}}_{k}(x,s) \\ & {}+5\bar{\lambda}_{A_{\tau}} \sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s - \tau)\bar{{\mathbf {Z}}}_{k}(x,s - \tau)+5\bar{\lambda}_{B}\sum _{x=1}^{I}{\bar{{\mathbf {U}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {U}}}_{k}(x,s)\\ &{}+5\bar{\lambda}_{D}\sum _{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x - 1,s)\bar {{\mathbf {Z}}}_{k}(x - 1,s), \end{aligned}$$


where
$$\begin{aligned} &\bar{\lambda}_{D}=\lambda_{\max_{0\leq s\leq J}} \bigl(\mathbf {D}^{\mathrm{T}}(s){\mathbf {D}}(s)\bigr),\qquad \bar{\lambda}_{A-2D}= \lambda_{\max _{0\leq s\leq J}} \bigl(\bigl(\mathbf {A}(s)-2\mathbf {D}(s)\bigr)^{\mathrm{T}}\bigl( \mathbf {A}(s)-2\mathbf {D}(s)\bigr) \bigr), \\ &\bar{\lambda}_{B}=\lambda_{\max_{0\leq s\leq J}}\bigl(\mathbf {B}^{\mathrm{T}}(s)\mathbf {B}(s)\bigr),\qquad \bar{\lambda}_{A_{\tau}}= \lambda_{\max_{0\leq s\leq J}}\bigl(\mathbf {A}_{\tau}^{\mathrm{T}}(s)\mathbf {A}_{\tau}(s)\bigr). \end{aligned}$$

Furthermore, boundary condition (6) implies

$$\begin{aligned} &\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{T}(x+1,s)\bar{{\mathbf {Z}}}_{k}(x+1,s) \leq \sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{T}(x,s)\bar{{\mathbf {Z}}}_{k}(x,s) , \\ &\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{T}(x-1,s)\bar{{\mathbf {Z}}}_{k}(x-1,s) \leq \sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{T}(x,s)\bar{{\mathbf {Z}}}_{k}(x,s). \end{aligned}$$


Hence
$$\begin{aligned} \Omega_{1} \leq {}&(5{\bar{\lambda}_{A-2D}}+10{\bar{ \lambda}_{D}})\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s)\bar{{\mathbf {Z}}}_{k}(x,s)+5{ \bar{\lambda}_{A_{\tau}}}\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s-\tau)\bar{{\mathbf {Z}}}_{k}(x,s-\tau) \\ & {}+5{\bar{\lambda}_{B}}\sum_{x=1}^{I}{ \bar{{\mathbf {U}}}_{k}}^{\mathrm{T}}(x,s)\bar{{\mathbf {U}}}_{k}(x,s). \end{aligned}$$

By Lemma 2 and the positive definiteness of \(D(s)\), we have

$$\begin{aligned} \Omega_{2}=2\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){\mathbf {D}}(s)\Delta _{1}^{2}\bar{{\mathbf {Z}}}_{k}(x-1,s) \leq -2d_{\min}\sum_{x=0}^{I} \bigl(\Delta _{1}\bar{{\mathbf {Z}}}_{k}(x,s) \bigr)^{\mathrm{T}}\bigl(\Delta_{1}\bar{{\mathbf {Z}}}_{k}(x,s) \bigr)\leq 0, \end{aligned}$$

where \(d_{\min}=\min_{0\leq s\leq J}\{d_{1}(s),d_{2}(s),\ldots ,d_{n}(s)\}>0\) exists because \(0< p_{i}\leq d_{i}(s)\) for each i.

For \(\Omega_{3}\sim\Omega_{5}\), using the inequality \(y^{\mathrm{T}}H^{\mathrm{T}}Lz\leq \frac{1}{2} (y^{\mathrm{T}}H^{\mathrm{T}}Hy+z^{\mathrm{T}}L^{\mathrm{T}}Lz)\) (\(H\in\mathbb{R}^{n\times m},L \in\mathbb{R}^{n\times l},y\in\mathbb{R}^{m},z\in\mathbb {R}^{l}\)), we have

$$\begin{aligned} &\Omega_{3}=\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bigl({\mathbf {I}}+2{\mathbf {A}}(s) \bigr)\bar{{\mathbf {Z}}}_{k}(x,s) \leq g\sum _{x=1}^{I}{\bar {{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {Z}}}_{k}(x,s), \end{aligned}$$
$$\begin{aligned} &\Omega_{4}=2\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){\mathbf {A}}_{\tau}(s) \bar{{\mathbf {Z}}}_{k}(x,s-\tau) \\ &\phantom{\Omega_{4}}\leq \bar{\lambda}_{A_{\tau}}\sum _{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {Z}}}_{k}(x,s)+\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s-\tau)\bar{{\mathbf {Z}}}_{k}(x,s-\tau), \end{aligned}$$
$$\begin{aligned} &\Omega_{5} = 2\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s){\mathbf {B}}(s)\bar{{\mathbf {U}}}_{k}(x,s) \leq \bar{\lambda}_{B}\sum _{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {Z}}}_{k}(x,s)+\sum_{x=1}^{I}{ \bar{{\mathbf {U}}}_{k}}^{\mathrm{T}}(x,s)\bar{{\mathbf {U}}}_{k}(x,s), \end{aligned}$$

where the constant \(g=\lambda_{\max_{0\leq s\leq J}}[({\mathbf {I}}+2{\mathbf {A}}(s))^{\mathrm{T}}({\mathbf {I}}+2{\mathbf {A}}(s))]+1\).

Finally, substituting (17)–(21) into (16), we obtain

$$\begin{aligned} &\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s+1)\bar{{\mathbf {Z}}}_{k}(x,s+1) \\ &\quad \leq (5\bar{\lambda}_{A-2D}+10{\bar{\lambda}}_{D}+g+ \bar {\lambda}_{A_{\tau}}+\bar{\lambda}_{B})\sum _{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {Z}}}_{k}(x,s) \\ &\qquad{}+(5\bar{\lambda}_{A_{\tau}}+1)\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s-\tau)\bar{{\mathbf {Z}}}_{k}(x,s-\tau)+(5\bar{\lambda}_{B}+1)\sum _{x=1}^{I}{\bar{{\mathbf {U}}}_{k}}^{\mathrm{T}}(x,s) \bar{{\mathbf {U}}}_{k}(x,s) \\ &\quad \leq c_{1}\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s)\bar {{\mathbf {Z}}}_{k}(x,s) +c_{2}\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s-\tau)\bar{{\mathbf {Z}}}_{k}(x,s- \tau) \\ & \qquad{}+c_{3}\sum_{x=1}^{I}{\bar{{ \mathbf {U}}}_{k}}^{\mathrm{T}}(x,s)\bar{{\mathbf {U}}}_{k}(x,s), \end{aligned}$$

where \(c_{1}=5\bar{\lambda}_{A-2D}+10{\bar{\lambda}}_{D}+g+\bar{\lambda}_{A_{\tau}}+\bar{\lambda}_{B}\), \(c_{2}=5\bar{\lambda}_{A_{\tau}}+1\), \(c_{3}=5\bar{\lambda}_{B}+1\).

This completes the proof of Proposition 1. □

With the help of the above technical lemmas and Proposition 1, the following theorem establishes convergence conditions for the partial difference system with time delay in state described by (1a)–(1b).

Theorem 1

Suppose that the gain matrix \(\mathbf {\Gamma}(s)\) in algorithm (7) satisfies

$$\begin{aligned} \bigl\Vert \bigl(\mathbf {I}-\mathbf {G}(s)\mathbf {\Gamma}(s)\bigr) \bigr\Vert ^{2} \leq \rho,\quad 2\rho < 1, 0\leq s\leq J. \end{aligned}$$

Then, under the initial and boundary conditions (5), (6), the output error of system (1a)–(1b) converges to zero in the \(\mathbf{L}^{2}\)-norm sense, that is,

$$ \lim_{k\rightarrow\infty} \bigl\Vert \mathbf {e}_{k}(\cdot,s) \bigr\Vert _{\mathbf {L}_{2}}^{2}=0,\quad 0\leq s\leq J. $$
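Condition (23) is straightforward to check numerically for candidate gains. The following sketch computes ρ for a sample pair \(\mathbf {G}(s)\), \(\mathbf {\Gamma}(s)\); the matrices and the choice \(\mathbf {\Gamma}(s)\approx 0.9\,\mathbf {G}(s)^{-1}\) are our own illustration, not from the paper.

```python
import numpy as np

def gain_condition_rho(G_list, Gamma_list):
    """rho = max_s lambda_max((I - G(s)Gamma(s))^T (I - G(s)Gamma(s)));
    condition (23) asks for 2*rho < 1."""
    rho = 0.0
    for G, Gamma in zip(G_list, Gamma_list):
        M = np.eye(G.shape[0]) - G @ Gamma
        rho = max(rho, float(np.linalg.eigvalsh(M.T @ M).max()))
    return rho

# illustrative 2x2 example (values are ours, not from the paper)
G = np.array([[1.0, 0.2], [0.0, 1.0]])
Gamma = 0.9 * np.linalg.inv(G)     # a natural choice: Gamma(s) proportional to G(s)^{-1}
rho = gain_condition_rho([G], [Gamma])
assert 2 * rho < 1                 # (23) holds, so Theorem 1 guarantees convergence
```

When \(\mathbf {G}(s)\) is invertible, taking \(\mathbf {\Gamma}(s)=\gamma\,\mathbf {G}(s)^{-1}\) with \(0<\gamma<1\) gives \(\rho=(1-\gamma)^{2}\), so (23) holds whenever \(2(1-\gamma)^{2}<1\).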


Proof
According to algorithm (7), we have

$$\begin{aligned} {\mathbf {e}}_{k+1}(x,s)&= {\mathbf {e}}_{k}(x,s)+{\mathbf {Y}}_{k}(x,s) -{\mathbf {Y}}_{k+1}(x,s) \\ &={\mathbf {e}}_{k}(x,s) +{\mathbf {C}}(s) \bigl({\mathbf {Z}}_{k}(x,s)-{ \mathbf {Z}}_{k+1}(x,s)\bigr) +{\mathbf {G}}(s) \bigl({\mathbf {U}}_{k}(x,s)-{ \mathbf {U}}_{k+1}(x,s)\bigr) \\ &={\mathbf {e}}_{k}(x,s)-{\mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s)-{ \mathbf {G}}(s){\mathbf {\Gamma}}(s){\mathbf {e}}_{k}(x,s) \\ &=\bigl({\mathbf {I}}-{\mathbf {G}}(s){\mathbf {\Gamma}}(s)\bigr){\mathbf {e}}_{k}(x,s)-{ \mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s) \\ &={\hat{\mathbf {G}}}(s){\mathbf {e}}_{k}(x,s)-{\mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s), \end{aligned}$$

where \({\hat{\mathbf {G}}}(s)={\mathbf {I}}-{\mathbf {G}}(s){\mathbf {\Gamma}}(s)\).

Multiplying the transpose of (25) by (25) itself, we have

$$\begin{aligned} {{\mathbf {e}}_{k+1}^{\mathrm{T}}}(x,s){\mathbf {e}}_{k+1}(x,s)={}& \bigl({\hat{\mathbf {G}}}(s){\mathbf {e}}_{k}(x,s)-{\mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s)\bigr)^{\mathrm{T}} \bigl({\hat{\mathbf {G}}}(s){\mathbf {e}}_{k}(x,s)-{\mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s)\bigr) \\ ={} &{\mathbf {e}}_{k}^{\mathrm{T}}(x,s){\hat{\mathbf {G}}}^{\mathrm{T}}(s){\hat{\mathbf {G}}}(s){\mathbf {e}}_{k}(x,s)-2{\mathbf {e}}_{k}^{\mathrm{T}}(x,s){{\hat{\mathbf {G}}}}^{\mathrm{T}}(s){\mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s) \\ & {}+\bar{{\mathbf {Z}}}_{k}^{\mathrm{T}}(x,s){{\mathbf {C}}}^{\mathrm{T}}(s){\mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s) \\ \leq {}&2\bigl({\mathbf {e}}_{k}^{\mathrm{T}}(x,s){{\hat{\mathbf {G}}}}^{\mathrm{T}}(s){\hat{\mathbf {G}}}(s){\mathbf {e}}_{k}(x,s)+\bar{{\mathbf {Z}}}_{k}^{\mathrm{T}}(x,s){{\mathbf {C}}}^{\mathrm{T}}(s){\mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s)\bigr) \\ \leq {}&2\bigl(\rho{ \mathbf {e}}_{k}^{\mathrm{T}}(x,s){\mathbf {e}}_{k}(x,s)+\bar {\lambda}_{C}\bar{{\mathbf {Z}}}_{k}^{\mathrm{T}}(x,s)\bar{{\mathbf {Z}}}_{k}(x,s)\bigr), \end{aligned}$$

where \(\rho={\lambda}_{\max_{0\leq s\leq J}} ({\hat{{\mathbf {G}}}^{\mathrm{T}}(s)}\hat{\mathbf {G}}(s) )\) and \({\bar{\lambda}_{C}}=\lambda_{\max_{0\leq s\leq J}} (\mathbf {C}^{\mathrm{T}}(s){\mathbf {C}}(s) )\).

Summing both sides of (26) from \(x=1\) to I, we get

$$\begin{aligned} \sum_{x=1}^{I}{{\mathbf {e}}_{k+1}^{\mathrm{T}}}(x,s){ \mathbf {e}}_{k+1}(x,s) \leq 2\rho\sum_{x=1}^{I}{ \mathbf {e}}_{k}^{\mathrm{T}}(x,s){\mathbf {e}}_{k}(x,s)+2\bar{ \lambda}_{C}\sum_{x=1}^{I}\bar{{ \mathbf {Z}}}_{k}^{\mathrm{T}}(x,s)\bar{{\mathbf {Z}}}_{k}(x,s). \end{aligned}$$

Multiplying both sides of (27) by \(\lambda^{s}\ (0<\lambda<1)\) and using the definition of \(\|\cdot\|_{\mathbf{L}^{2}}^{2}\), we have

$$\begin{aligned} \bigl\Vert {\mathbf {e}}_{k+1}(\cdot,s) \bigr\Vert _{\mathbf {L}^{2}}^{2} \lambda^{s} &\leq 2\rho \bigl\Vert {\mathbf {e}}_{k}(\cdot,s) \bigr\Vert _{\mathbf {L}^{2}}^{2}\lambda^{s}+2\bar{\lambda}_{C} \bigl\Vert \bar{\mathbf {Z}}_{k}(\cdot,s) \bigr\Vert _{\mathbf {L}^{2}}^{2}\lambda^{s} \\ &\leq 2\rho \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} +2\bar{ \lambda}_{C} \Vert \bar{\mathbf {Z}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}, \end{aligned}$$


Taking the supremum over \(0\leq s\leq J\) on the left-hand side of (28) yields
$$\begin{aligned} \Vert {\mathbf {e}}_{k+1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} \leq 2 \rho \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} +2\bar{ \lambda}_{C} \Vert \bar{\mathbf {Z}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}. \end{aligned}$$

We rewrite the conclusion of Proposition 1 as follows:

$$\begin{aligned} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,s+1) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}\leq c_{1} \bigl\Vert \bar {{\mathbf {Z}}}_{k}(\cdot,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}+c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,s-\tau) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}+c_{3} \bigl\Vert \bar{{\mathbf {U}}}_{k}(\cdot ,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}. \end{aligned}$$

Then, by Lemma 1, we obtain

$$\begin{aligned} &\bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}} \\ &\quad \leq c_{1}^{s} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,0) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}+ \sum_{t=0}^{s-1} \bigl(c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t-\tau) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}+c_{3} \bigl\Vert \bar{{\mathbf {U}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}} \bigr)c_{1}^{s-t-1}. \end{aligned}$$

According to (5), the initial state is the same for every iteration, so \(\bar{{\mathbf {Z}}}_{k}(\cdot,0)=0\) and we have

$$\begin{aligned} &\bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}} \\ &\quad \leq \sum_{t=0}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t-\tau) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}+\sum _{t=0}^{s-1}c_{3} \bigl\Vert \bar{{\mathbf {U}}}_{k}(\cdot ,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}. \end{aligned}$$

We estimate \(\sum_{t=0}^{s-1}c_{2}\|\bar{{\mathbf {Z}}}_{k}(\cdot,t-\tau)\| ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}\) as follows:

If \(s\leq \tau\), then \(t-\tau<0\) for all \(0\leq t\leq s-1\), so by (5),

$$\sum_{t=0}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t-\tau) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}=0,\quad s\leq \tau; $$

If \(s>\tau\), then

$$\begin{aligned} &\sum_{t=0}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t-\tau) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1} \\ &\quad =\sum_{t=0}^{\tau}c_{2} \bigl\Vert \bar{\mathbf {Z}}_{k}(\cdot,t-\tau) \bigr\Vert ^{2}_{ {\mathbf {L}} ^{2}} c_{1}^{s-t-1}+\sum _{t=\tau+1}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot ,t-\tau) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1} \\ &\quad =\sum_{t=\tau+1}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t-\tau) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1} \leq \sum _{t=0}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}. \end{aligned}$$

Thus, from (32) and (33), we obtain

$$\begin{aligned} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,{s}) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}} \leq \sum_{t=0}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}+\sum _{t=0}^{s-1}c_{3} \bigl\Vert \bar{{\mathbf {U}}}_{k}(\cdot ,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}. \end{aligned}$$

On the other hand, by iterative learning control scheme (7) again, we have

$$\begin{aligned} \bar{\mathbf {U}}_{k}(x,t) ={\mathbf {U}}_{k+1}(x,t)-{\mathbf {U}}_{k}(x,t)=\mathbf {\Gamma }(t)\mathbf {e}_{k}(x,t), \end{aligned}$$

which yields

$$\begin{aligned} {\bar{{\mathbf {U}}}_{k}}^{\mathrm{T}}(x,s)\bar{{\mathbf {U}}}_{k}(x,s)=\bigl(\mathbf {\Gamma }(s){\mathbf {e}}_{k}(x,s) \bigr)^{\mathrm{T}} \bigl(\mathbf {\Gamma}(s){\mathbf {e}}_{k}(x,s)\bigr)\leq \bar{ \lambda}_{\Gamma}{ \mathbf {e}}_{k}^{\mathrm{T}}(x,s){\mathbf {e}}_{k}(x,s), \end{aligned}$$

where \({\bar{\lambda}_{\Gamma}}=\lambda_{\max_{0\leq s\leq J}} (\mathbf {\Gamma}^{\mathrm{T}}(s)\mathbf {\Gamma}(s) )\). Hence

$$\begin{aligned} \bigl\Vert \bar{{\mathbf {U}}}_{k}(\cdot,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}\leq {\bar{\lambda}_{\Gamma}} \bigl\Vert {{\mathbf {e}}}_{k}(\cdot,{s}) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}. \end{aligned}$$

Then, by (34) and (36), we obtain

$$\begin{aligned} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}} \leq \sum_{t=0}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}+\sum _{t=0}^{s-1}c_{3}{\bar{\lambda}_{\Gamma}} \bigl\Vert {{\mathbf {e}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}. \end{aligned}$$

Multiplying both sides of (37) by \(\lambda^{s}\) \((0<\lambda<1)\), and taking λ small enough that \(\lambda(c_{1}+c_{2})<1\), we get

$$\begin{aligned} & \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}\lambda^{s} \\ &\quad \leq \sum_{t=0}^{s-1}c_{2} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}\lambda^{s}+ \sum_{t=0}^{s-1}c_{3}{\bar{\lambda }_{\Gamma}} \bigl\Vert {{\mathbf {e}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{1}^{s-t-1}\lambda^{s} \\ &\quad = \sum_{t=0}^{s-1}c_{2} \bigl( \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}\lambda^{t} \bigr)c_{1}^{s-t-1} \lambda^{s-t}+\sum_{t=0}^{s-1} c_{3}{\bar{\lambda}_{\Gamma}} \bigl( \bigl\Vert {{\mathbf {e}}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}} \lambda^{t} \bigr)c_{1}^{s-t-1}\lambda^{s-t} \\ &\quad \leq \sum_{t=0}^{s-1}c_{2}c_{1}^{s-t-1} \lambda^{s-t} \Vert \bar{{\mathbf {Z}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}+\sum_{t=0}^{s-1}c_{3}{ \bar{\lambda }_{\Gamma}}c_{1}^{s-t-1}\lambda^{s-t} \Vert {{\mathbf {e}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)} \\ &\quad = c_{2}\lambda\sum_{t=0}^{s-1}(c_{1} \lambda)^{s-t-1} \Vert \bar{{\mathbf {Z}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}+c_{3}\bar{\lambda}_{\Gamma}\lambda \sum_{t=0}^{s-1}(c_{1} \lambda)^{s-t-1} \Vert {{\mathbf {e}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)} \\ &\quad = c_{2}\lambda\frac{1-(c_{1}\lambda)^{s}}{1-c_{1}\lambda} \Vert \bar {{\mathbf {Z}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}+c_{3}\bar{ \lambda}_{\Gamma }\lambda\frac{1-(c_{1}\lambda)^{s}}{1-c_{1}\lambda} \Vert {{\mathbf {e}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)} \\ &\quad \leq \frac{c_{2}\lambda}{1-c_{1}\lambda} \Vert \bar{{\mathbf {Z}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}+ \frac{c_{3}\bar{\lambda}_{\Gamma}\lambda }{1-c_{1}\lambda} \Vert {{\mathbf {e}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}, \end{aligned}$$

then we have

$$\begin{aligned} \biggl(1-\frac{c_{2}\lambda}{1-c_{1}\lambda}\biggr) \Vert \bar{{\mathbf {Z}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)} \leq \frac{c_{3}\lambda\bar{\lambda}_{\Gamma }}{1-c_{1}\lambda} \Vert {{\mathbf {e}}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda )}, \end{aligned}$$

that is, since \(1-\frac{c_{2}\lambda}{1-c_{1}\lambda}=\frac{1-\lambda(c_{1}+c_{2})}{1-c_{1}\lambda}\),
$$\begin{aligned} \Vert \bar{{\mathbf {Z}}}_{k} \Vert ^{2}_{{ (\mathbf {L}}^{2},\lambda)} \leq \frac {c_{3}\lambda\bar{\lambda}_{\Gamma} }{1-\lambda(c_{1}+c_{2})} \Vert {{\mathbf {e}}}_{k} \Vert ^{2}_{{ (\mathbf {L}}^{2},\lambda)}. \end{aligned}$$

Therefore, substituting (39) into (29), we have

$$\begin{aligned} \Vert {\mathbf {e}}_{k+1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} & \leq 2\rho \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} +2\bar{\lambda}_{C} \Vert {\bar{{\mathbf {Z}}}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} \\ &\leq 2\rho \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf{L}^{2},\lambda)} +\lambda\frac{2 c_{3}{{\bar{\lambda}}_{C}}\bar{\lambda }_{\Gamma}}{1-\lambda(c_{1}+c_{2})} \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda )} \\ &= \biggl[2\rho+\lambda\frac{2 c_{3}{\bar{\lambda }_{C}\bar{\lambda}_{\Gamma}}}{1-\lambda(c_{1}+c_{2})}\biggr] \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}. \end{aligned}$$

Let \(\delta=2\rho+\lambda\frac{2 c_{3}{\bar{\lambda }_{C}\bar{\lambda}_{\Gamma}}}{1-\lambda(c_{1}+c_{2})}\). Because \(2\rho<1\), we can take λ small enough such that \(\delta<1\). Rewrite (40) as

$$\begin{aligned} \Vert {\mathbf {e}}_{k+1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}\leq \delta \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}. \end{aligned}$$

Then, from (41), we have

$$\begin{aligned} \Vert {\mathbf {e}}_{k+1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}\leq \delta^{k} \Vert {\mathbf {e}}_{1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}. \end{aligned}$$

Selecting ξ such that \(\xi>1\) and \(\delta\xi<1\), and multiplying both sides of (42) by \(\xi^{k}\), we obtain

$$\begin{aligned} \Vert {\mathbf {e}}_{k+1} \Vert _{(\mathbf {L}^{2},\lambda(\xi))}^{2}\leq ( \delta\xi )^{k} \Vert {\mathbf {e}}_{1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}\leq \Vert {\mathbf {e}}_{1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}. \end{aligned}$$

Furthermore,
$$\begin{aligned} \bigl\Vert {\mathbf {e}}_{k+1}(\cdot,s) \bigr\Vert _{\mathbf {L}^{2}}^{2}&= \Biggl(\sum_{x=1}^{I}{{\mathbf {e}}_{k+1}}^{\mathrm{T}}(x,s){\mathbf {e}}_{k+1}(x,s){ \lambda^{s}} {\xi^{k}}\Biggr){\lambda ^{-s}} { \xi^{-k}} \\ &\leq \Vert {\mathbf {e}}_{k+1} \Vert _{(\mathbf {L}^{2},\lambda(\xi))}^{2}{ \xi ^{-k}} {\lambda^{-s}} \\ &\leq \Vert {\mathbf {e}}_{1} \Vert _{(\mathbf {L}^{2},\lambda)}^{2}{ \xi ^{-k}} {\lambda^{-J}} \\ &\leq {\xi^{-k}} {\lambda^{-J}}\sup _{0\leq s\leq J}\sum_{x=1}^{I}{{\mathbf {e}}_{1}}^{\mathrm{T}}(x,s){\mathbf {e}}_{1}(x,s). \end{aligned}$$

Noting that \(\xi>1\) and that \(I\), \(J\), λ are bounded in (44), we obtain

$$ \lim_{k\rightarrow\infty} \bigl\Vert {\mathbf {e}}_{k}(\cdot,s) \bigr\Vert _{\mathbf {L}^{2}}^{2}=0,\quad 0\leq s\leq J. $$

This completes the proof of Theorem 1. □
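The weighted norm that drives the whole argument is simple to compute. Below is a minimal sketch, assuming the definition \(\Vert f\Vert^{2}_{(\mathbf{L}^{2},\lambda)}=\max_{0\leq s\leq J}\lambda^{s}\sum_{x=1}^{I}f^{\mathrm{T}}(x,s)f(x,s)\) used implicitly in (43)–(44); the data are random placeholders.

```python
import numpy as np

# Sketch of the weighted (L^2, lambda) norm, assuming
# ||f||^2_{(L2,lam)} = max_{0<=s<=J} lam^s * sum_x f(x,s)^T f(x,s).
def l2_lambda_norm_sq(f, lam):
    """f has shape (I, J+1, n): space x time x components."""
    per_time = np.sum(f ** 2, axis=(0, 2))   # sum over space and components
    s = np.arange(f.shape[1])
    return np.max(lam ** s * per_time)

# Consistency check: the plain L^2 norm at a fixed time s0 is bounded by
# lam^{-s0} times the weighted norm, which is how (44) recovers pointwise decay.
rng = np.random.default_rng(1)
f = rng.standard_normal((10, 21, 2))
lam = 0.5
w = l2_lambda_norm_sq(f, lam)
s0 = 7
plain = np.sum(f[:, s0, :] ** 2)
print(plain <= w * lam ** (-s0) + 1e-12)
```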

Next, we consider system (2a)–(2b) in Sect. 3.2.

3.2 System with time delay in input

We assume the corresponding boundary value and initial value conditions of system (2a)–(2b) to be

$$\begin{aligned} &{\mathbf {Z}}_{k}(x,0)={{\mathbf {\varphi}}(x,0)},\quad 1\leq x\leq I, \end{aligned}$$
$$\begin{aligned} &{\mathbf {Z}}_{k}(0,s)=0={\mathbf {Z}}_{k}(I+1,s),\quad 0\leq s \leq J, \end{aligned}$$

for \(k=1,2,\ldots \) .

For system (2a)–(2b), we propose the iterative learning control scheme

$$ \mathbf {U}_{k+1}(x,s)=\mathbf {U}_{k}(x,s)+\mathbf { \Gamma}_{\tau}(s)\mathbf {e}_{k}(x,s+\tau), $$

where \(-\tau \leq s\leq J-\tau\).
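In code, the anticipatory update (48) simply shifts the error signal forward by τ before applying the gain. A minimal sketch follows; the array shapes and the gain schedule are illustrative assumptions.

```python
import numpy as np

# Sketch of the anticipatory update (48):
#   U_{k+1}(x,s) = U_k(x,s) + Gamma_tau(s) e_k(x, s+tau).
def ilc_update_input_delay(U, e, Gamma_tau, tau):
    """U: (I, J+1, m) input; e: (I, J+1, m) error; Gamma_tau: callable s -> (m, m)."""
    I, J1, m = U.shape
    U_next = U.copy()
    for s in range(J1 - tau):          # only times with s + tau <= J are corrected
        G = Gamma_tau(s)
        for x in range(I):
            U_next[x, s] = U[x, s] + G @ e[x, s + tau]
    return U_next

# Usage on dummy data: with zero tracking error the input is left unchanged.
U = np.zeros((5, 11, 2))
e = np.zeros((5, 11, 2))
U1 = ilc_update_input_delay(U, e, lambda s: np.eye(2), tau=3)
print(np.allclose(U1, U))
```

For \(s>J-\tau\) no error sample \(e_{k}(x,s+\tau)\) exists inside the horizon, so those input samples are left unchanged here; under the input-delay structure they only influence outputs beyond \(s=J\) anyway.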

Theorem 2

If the gain matrix \(\mathbf {\Gamma}_{\tau}(s)\) in algorithm (48) satisfies

$$\begin{aligned} \bigl\Vert \mathbf {I}-\mathbf {G}_{\tau}(s)\mathbf {\Gamma}_{\tau}(s) \bigr\Vert ^{2} \leq \rho_{\tau},\quad 2\rho_{\tau} < 1, \forall s \in [0,J], \end{aligned}$$

then, under the initial setting (46) and boundary value (47), the output error of system (2a)–(2b) converges to zero in the mean \(\mathbf {L}^{2}\) norm, that is,

$$ \lim_{k\rightarrow\infty} \bigl\Vert \mathbf {e}_{k}(\cdot,s) \bigr\Vert _{\mathbf {L}^{2}}^{2}=0,\quad 0\leq s\leq J. $$

Proof
According to algorithm (48) with \(-\tau \leq s\leq J-\tau\), we have

$$\begin{aligned} &{\mathbf {e}}_{k+1}(x,s+\tau) \\ &\quad= {\mathbf {e}}_{k}(x,s+\tau)+{\mathbf {Y}}_{k}(x,s+\tau) -{\mathbf {Y}}_{k+1}(x,s+\tau) \\ &\quad= {\mathbf {e}}_{k}(x,s+\tau) +{\mathbf {C}}(s+\tau) \bigl({\mathbf {Z}}_{k}(x,s+\tau)-{\mathbf {Z}}_{k+1}(x,s+\tau)\bigr) -{\mathbf {G}}_{\tau}(s+\tau)\bar{{\mathbf {U}}}_{k}(x,s) \\ &\quad={\mathbf {e}}_{k}(x,s+\tau)-{\mathbf {C}}(s+\tau){\bar{\mathbf {Z}}}_{k}(x,s+\tau )-{\mathbf {G}}_{\tau}(s+\tau){\mathbf {\Gamma}}_{\tau}(s+\tau){\mathbf {e}}_{k}(x,s+\tau ) \\ &\quad =\bigl[{\mathbf {I}}-{\mathbf {G}}_{\tau}(s+\tau){\mathbf {\Gamma}}_{\tau}(s+\tau)\bigr]{ \mathbf {e}}_{k}(x,s+\tau)-{\mathbf {C}}(s+\tau){\bar{\mathbf {Z}}}_{k}(x,s+ \tau) \\ &\quad \triangleq {\hat{\mathbf {G}}_{\tau}}(s+\tau){\mathbf {e}}_{k}(x,s+ \tau)-{\mathbf {C}}(s+\tau){\bar{\mathbf {Z}}}_{k}(x,s+\tau), \end{aligned}$$

that is,

$$\begin{aligned} {\mathbf {e}}_{k+1}(x,s)={\hat{\mathbf {G}}_{\tau}}(s){\mathbf {e}}_{k}(x,s)-{\mathbf {C}}(s){\bar{\mathbf {Z}}}_{k}(x,s), \quad 0\leq s\leq J. \end{aligned}$$

Then, from (52), we have

$$\begin{aligned} \Vert {\mathbf {e}}_{k+1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} \leq 2 \rho_{\tau} \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} +2\bar{\lambda}_{C} \Vert \bar{\mathbf {Z}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)}, \end{aligned}$$

where \(\rho_{\tau}=\max_{0\leq s\leq J}\lambda_{\max} (\hat{\mathbf {G}}_{\tau}^{\mathrm{T}}(s)\hat{\mathbf {G}}_{\tau}(s) )\).

On the other hand, similar to Proposition 1, we can obtain

$$\begin{aligned} &\sum_{x=1}^{I}{\bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s+1)\bar{{\mathbf {Z}}}_{k}(x,s+1) \\ &\quad \leq c_{4}\sum_{x=1}^{I}{ \bar{{\mathbf {Z}}}_{k}}^{\mathrm{T}}(x,s)\bar{{\mathbf {Z}}}_{k}(x,s)+c_{5} \sum_{x=1}^{I}{\bar{{\mathbf {U}}}_{k}}^{\mathrm{T}}(x,s-\tau)\bar {{\mathbf {U}}}_{k}(x,s- \tau), \end{aligned}$$

where \(c_{4},c_{5}\) are positive bounded constants.
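The Gronwall-type unrolling applied next turns a one-step recursion such as (54) into a summed bound: if \(z(s+1)\leq c_{4}z(s)+f(s)\), then \(z(s)\leq c_{4}^{s}z(0)+\sum_{t=0}^{s-1}f(t)c_{4}^{s-t-1}\). A numerical check, with arbitrary illustrative constants:

```python
import numpy as np

# Check of the discrete Gronwall unrolling: run the recursion with equality
# (the worst case of the inequality) and compare against the closed-form bound.
rng = np.random.default_rng(2)
c4 = 1.3
S = 30
f = rng.uniform(0.0, 1.0, S)

z = np.empty(S + 1)
z[0] = 0.7
for s in range(S):
    z[s + 1] = c4 * z[s] + f[s]

ok = True
for s in range(S + 1):
    bound = c4 ** s * z[0] + sum(f[t] * c4 ** (s - t - 1) for t in range(s))
    ok &= z[s] <= bound + 1e-9
print(ok)
```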

Using Lemma 1 again for (54) and noting (46), we conclude

$$\begin{aligned} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}} &\leq c_{4}^{s} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,0) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}+c_{5} \sum_{t=0}^{s-1} \bigl\Vert \bar{{\mathbf {U}}}_{k}(\cdot,t-\tau) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{4}^{s-t-1} \\ & \leq c_{4}^{s} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,0) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}+c_{5} \bar{\lambda}_{\Gamma_{\tau}}\sum _{t=0}^{s-1} \bigl\Vert {\mathbf {e}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{4}^{s-t-1} \\ & = c_{5}\bar{\lambda}_{\Gamma_{\tau}}\sum _{t=0}^{s-1} \bigl\Vert {\mathbf {e}}_{k}(\cdot ,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{4}^{s-t-1}, \end{aligned}$$

where the last equality uses \(\bar{{\mathbf {Z}}}_{k}(\cdot,0)=0\).

Multiplying both sides of (55) by \(\lambda^{s}\), we get

$$\begin{aligned} \bigl\Vert \bar{{\mathbf {Z}}}_{k}(\cdot,s) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}\lambda^{s} & \leq c_{5} \bar{\lambda}_{\Gamma_{\tau}}\sum_{t=0}^{s-1} \bigl\Vert {\mathbf {e}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}}c_{4}^{s-t-1} \lambda^{s} \\ & = c_{5}\bar{\lambda}_{\Gamma_{\tau}}\sum _{t=0}^{s-1} \bigl( \bigl\Vert {\mathbf {e}}_{k}(\cdot,t) \bigr\Vert ^{2}_{{ \mathbf {L}}^{2}} \lambda^{t} \bigr)c_{4}^{s-t-1}\lambda ^{s-t} \\ & \leq c_{5}\bar{\lambda}_{\Gamma_{\tau}} \Vert {\mathbf {e}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}\sum _{t=0}^{s-1}c_{4}^{s-t-1} \lambda^{s-t} \\ & \leq \frac{c_{5}\lambda\bar{\lambda}_{\Gamma_{\tau}}}{1-c_{4}\lambda} \Vert {\mathbf {e}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}. \end{aligned}$$

Substituting (56) into (53), we have

$$\begin{aligned} \Vert {\mathbf {e}}_{k+1} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} \leq 2 \rho_{\tau} \Vert {\mathbf {e}}_{k} \Vert ^{2}_{(\mathbf {L}^{2},\lambda)} +\frac{2\lambda\bar{\lambda}_{C}c_{5}\bar{\lambda}_{\Gamma_{\tau}}}{1-c_{4}\lambda} \Vert {\mathbf {e}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}. \end{aligned}$$

By the condition of Theorem 2, \(2\rho_{\tau}<1\), we can find λ small enough such that

$$\delta_{1}=2\rho_{\tau}+ \frac{2\lambda\bar{\lambda}_{C}c_{5}\bar{\lambda}_{\Gamma_{\tau }}}{1-c_{4}\lambda}< 1. $$
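The existence of such a λ can be demonstrated by a direct scan; the constants below are illustrative assumptions satisfying \(2\rho_{\tau}<1\).

```python
# Finding a lambda that makes delta_1 < 1. All constants are illustrative.
rho_tau, lam_C, c4, c5, lam_Gt = 0.4, 1.5, 1.2, 2.0, 1.8

def delta1(lam):
    # delta_1 = 2*rho_tau + 2*lam*lam_C*c5*lam_Gt / (1 - c4*lam)
    return 2 * rho_tau + 2 * lam * lam_C * c5 * lam_Gt / (1 - c4 * lam)

# scan decreasing lambda values until delta_1 < 1 (possible since 2*rho_tau < 1)
lam = next(l for l in [10 ** -m for m in range(1, 12)]
           if c4 * l < 1 and delta1(l) < 1)
print(lam, delta1(lam))
```

As λ shrinks, the second term vanishes and \(\delta_{1}\to 2\rho_{\tau}<1\), which is exactly the continuity argument used in the proofs.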

Then, similar to Theorem 1, we can obtain that

$$\begin{aligned} \Vert {\mathbf {e}}_{k+1} \Vert ^{2}_{(\mathbf{L}^{2},\lambda)} \leq \delta_{1} \Vert {\mathbf {e}}_{k} \Vert ^{2}_{({ \mathbf {L}}^{2},\lambda)}. \end{aligned}$$

In the end, we have

$$ \lim_{k\rightarrow\infty} \bigl\Vert {\mathbf {e}}_{k}(\cdot,s) \bigr\Vert _{\mathbf {L}^{2}}^{2}=0,\quad 0\leq s\leq J. $$

This completes the proof of Theorem 2. □

Remark 1

Since the discrete spatial and time variables \(x=0,1,\ldots,I\), \(s=0,1,2,\ldots,J\) range over bounded sets (\(I\), \(J\) bounded), one can easily show that (4) holds by the conclusions of Theorems 1 and 2. That is, the actual (iterative) output can completely track the desired output as the iteration number tends to infinity for systems (1a)–(1b) and (2a)–(2b).

4 Numerical simulations

To illustrate the effectiveness of the algorithms, we give two examples for systems (1a)–(1b) and (2a)–(2b), respectively. First, considering system (1a)–(1b), let the system state, the control input, and the output be

$$\begin{aligned} {\mathbf {Z}}(x,s)= \begin{bmatrix} Z_{1}(x,s) \\ Z_{2}(x,s) \end{bmatrix},\qquad {\mathbf {U}}(x,s)= \begin{bmatrix} U_{1}(x,s) \\ U_{2}(x,s) \end{bmatrix},\qquad {\mathbf {Y}}(x,s)= \begin{bmatrix} Y_{1}(x,s) \\ Y_{2}(x,s) \end{bmatrix}. \end{aligned}$$

The space and time variables satisfy \((x,s)\in[0,10]\times[0,200]\), and the time delay is \(\tau=5\). The coefficient matrices and the gain matrix are as follows:

$$\begin{aligned} &{\mathbf {D}}= \begin{bmatrix} 0.3 & 0 \\ 0 & 0.1 \end{bmatrix},\qquad {\mathbf {A}}= \begin{bmatrix} 0.3-0.2e^{-4s} & 0.1 \\ 0 & 0.3-\frac{1}{8+s} \end{bmatrix},\qquad {\mathbf {A}}_{\tau}= \begin{bmatrix} 0.2 & 0 \\ 0.1 & 0.15 \end{bmatrix}, \\ &{\mathbf {B}}= \begin{bmatrix} 0.25 & 0 \\ 0.12 & 0.4-\frac{1}{2+s} \end{bmatrix},\qquad {\mathbf {C}}= \begin{bmatrix} 0.2 & 0.12 \\ 0.01 & 0.3 \end{bmatrix}, \\ &{\mathbf {G}}= \begin{bmatrix} 0.8+0.2e^{-4s} & 0 \\ 0 & 0.7+0.4e^{-4s} \end{bmatrix},\qquad \mathbf {\Gamma}= \begin{bmatrix} 1.1+0.2e^{-8s} & 0.02 \\ 0 & 1.2+0.1e^{-8s} \end{bmatrix}. \end{aligned}$$

The desired trajectory is

$${\mathbf {Y}}_{d}(x,s)=\bigl({Y}_{d1}(x,s),Y_{d2}(x,s) \bigr)=\biggl(0.02s\sin\biggl(\frac{ x}{11}\pi\biggr),2\cos\biggl( \frac{(10-x)}{2} \pi\biggr) \bigl(1-e^{\frac{-0.01xs}{2}}\bigr)\biggr). $$

From \({\hat{\mathbf {G}}}(s)={\mathbf {I}}-{\mathbf {G}}(s){\mathbf {\Gamma}}(s)\), we can easily calculate that \(\rho<0.5\), which meets the condition of Theorem 1.
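This check is easy to reproduce numerically. The sketch below uses our reading of the matrices above (in particular, the negative exponent signs are an assumption) and scans all integer \(s\in[0,200]\):

```python
import numpy as np

# Checking the gain condition of Theorem 1 for the example:
# rho = max_s lambda_max(G_hat(s)^T G_hat(s)) with G_hat(s) = I - G(s) Gamma(s),
# which must satisfy 2*rho < 1. Matrix entries follow our reading of the example.
def G(s):
    return np.diag([0.8 + 0.2 * np.exp(-4 * s), 0.7 + 0.4 * np.exp(-4 * s)])

def Gamma(s):
    return np.array([[1.1 + 0.2 * np.exp(-8 * s), 0.02],
                     [0.0, 1.2 + 0.1 * np.exp(-8 * s)]])

J = 200
rho = 0.0
for s in range(J + 1):
    G_hat = np.eye(2) - G(s) @ Gamma(s)
    rho = max(rho, np.linalg.eigvalsh(G_hat.T @ G_hat).max())
print(rho < 0.5)
```

With the entries as read here, the maximum is attained near \(s=0\) and ρ stays comfortably below 0.5.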

Figures 1 and 3 show the desired surfaces, and Figs. 2 and 4 show the output surfaces at the 20th iteration. Figures 5 and 6 show the corresponding error surfaces at the 20th iteration. Figure 9 shows the \({\mathbf {L}}^{2}\) error convergence history over the iterations; the maximum values of the twentieth-iteration errors are \(2.0615\times10^{-7}\) and \(1.9855\times 10^{-6}\), respectively. Therefore, the iterative learning algorithm (7) is effective for system (1a)–(1b).
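To make the simulation procedure concrete, the following self-contained sketch runs the full ILC loop of Theorem 1 on a small toy state-delay system of the same structure. The state recursion and all matrices are illustrative assumptions (not the example above), chosen so that the dynamics are stable and \(\rho=\Vert\mathbf{I}-\mathbf{G}\mathbf{\Gamma}\Vert^{2}=0.04<0.5\).

```python
import numpy as np

# Toy MIMO parabolic partial difference system with state delay (assumed form):
#   Z(x,s+1) = Z(x,s) + D*Lap_x Z(x,s) + A Z(x,s) + A_tau Z(x,s-tau) + B U(x,s)
#   Y(x,s)   = C Z(x,s) + G U(x,s),     zero boundary and zero initial values.
I_pts, J, tau, n_iter = 10, 40, 3, 15
D = np.diag([0.2, 0.1])          # diffusion coefficients (stable: d <= 1/2)
A = -0.1 * np.eye(2)             # state matrix
A_tau = 0.05 * np.eye(2)         # delayed-state matrix
B = 0.1 * np.eye(2)              # input matrix
C = 0.1 * np.eye(2)              # output matrix
G = np.eye(2)                    # direct feedthrough
Gam = 0.8 * np.eye(2)            # learning gain: I - G@Gam = 0.2*I, so rho = 0.04

x = np.arange(1, I_pts + 1)[:, None]
s = np.arange(J + 1)[None, :]
# smooth illustrative desired trajectory satisfying the zero boundary values
Yd = np.stack([0.1 * s * np.sin(np.pi * x / (I_pts + 1)) / J,
               np.sin(np.pi * x / (I_pts + 1)) * (1 - np.exp(-0.05 * s))], axis=-1)

U = np.zeros((I_pts, J + 1, 2))
errs = []
for k in range(n_iter):
    Z = np.zeros((I_pts + 2, J + 1, 2))      # rows 0 and I+1: zero boundary
    for t in range(J):
        lap = Z[2:, t] - 2 * Z[1:-1, t] + Z[:-2, t]      # spatial 2nd difference
        Zd = Z[1:-1, t - tau] if t >= tau else np.zeros((I_pts, 2))
        Z[1:-1, t + 1] = (Z[1:-1, t] + lap @ D.T + Z[1:-1, t] @ A.T
                          + Zd @ A_tau.T + U[:, t] @ B.T)
    Y = Z[1:-1] @ C.T + U @ G.T
    e = Yd - Y
    errs.append(np.abs(e).max())
    U = U + e @ Gam.T                        # P-type law (7): U_{k+1} = U_k + Gam e_k
print(errs[0], errs[-1])
```

With these choices the recorded maximum tracking errors decay roughly geometrically over the iterations, mirroring the behavior reported in Figs. 9 and 10.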

Figure 1

Desired surface \(y_{d1}(x,s)\)

Figure 2

Output surface \(y_{k1}(x,s)\)

Figure 3

Desired surface \(y_{d2}(x,s)\)

Figure 4

Output surface \(y_{k2}(x,s)\ (k=20)\)

Figure 5

Error \(e_{k1}(x,s)\) for state delay

Figure 6

Error \(e_{k2}(x,s)\) for state delay

Secondly, we consider the system described by (2a)–(2b) with input time delay. Let

$$\begin{aligned} {\mathbf {B}}_{\tau}= \begin{bmatrix} 0.2 & 0 \\ 0.1 & 0.15 \end{bmatrix},\qquad {\mathbf {C}}= \begin{bmatrix} 0.3 & 0.1 \\ 0.1 & 0.4 \end{bmatrix},\qquad \mathbf {\Gamma}_{\tau}= \begin{bmatrix} 1+0.4e^{-5s} & 0.01 \\ 0 & 1.15+0.1e^{-4s} \end{bmatrix}, \end{aligned}$$

and select time delay \(\tau=8\) and \(\mathbf {G}_{\tau}=\mathbf {G}\); the rest are the same as those in system (1a)–(1b). Figures 7 and 8 describe the two error surfaces at the twentieth iteration; the maximum values of the twentieth-iteration errors are \(0.8856\times10^{-15}\) and \(0.8913\times10^{-15}\), respectively. At the same time, the data in Figs. 9 and 10 show that the tracking errors are already acceptable at the 10th iteration. The two error curves in Figs. 9 and 10 also demonstrate the efficacy of the proposed algorithms (7) and (48).

Figure 7

Error \(e_{k1}(x,s)\) for input delay

Figure 8

Error \(e_{k2}(x,s)\) for input delay

Figure 9

Iterations-error for state delay

Figure 10

Iterations-error for input delay

5 Conclusions

In this paper, the ILC problem for parabolic partial difference systems with time delay has been studied. Convergence results were proved for two different time-delay cases, and simulation studies illustrate the applicability of the theoretical results.


  1. Roesser, R.P.: A discrete state-space model for linear image processing. IEEE Trans. Autom. Control 20(1), 1–10 (1975)

  2. Cheng, S.S.: Partial Difference Equations. Advances in Discrete Mathematics and Applications, vol. 3. Taylor & Francis, London (2003)

  3. Cheng, S.S.: Sturmian theorems for hyperbolic partial difference equations. J. Differ. Equ. Appl. 2(4), 375–387 (1996)

  4. Wong, P.J.Y., Agarwal, R.P.: Nonexistence of unbounded nonoscillatory solutions of partial difference equations. J. Math. Anal. Appl. 214(2), 503–523 (1997)

  5. Xie, S.L., Cheng, S.S.: Stability criteria for parabolic type partial difference equations. J. Comput. Appl. Math. 75(1), 57–66 (1996)

  6. Zhang, B.G., Tian, C.J.: Oscillation criteria of a class of partial difference equation with delays. Comput. Math. Appl. 48(1), 291–303 (2004)

  7. Liu, S.T., Guan, X.P., Jun, Y.: Nonexistence of positive solutions of a class of nonlinear delay partial difference equations. J. Math. Anal. Appl. 234(2), 361–371 (1999)

  8. Arimoto, S., Kawamura, S., Miyazaki, F.: Bettering operation of robots by learning. J. Robot. Syst. 1(2), 123–140 (1984)

  9. Sun, M.X., Wang, D.W.: Iterative learning control design for uncertain dynamic systems with delayed states. Dyn. Control 10(4), 341–357 (2000)

  10. Sun, M.X., Wang, D.W.: Initial condition issues on iterative learning control for nonlinear systems with time delay. Int. J. Syst. Sci. 32(11), 1365–1375 (2001)

  11. Zhu, Q., Hu, G.-D., Liu, W.-Q.: Iterative learning control design method for linear discrete-time uncertain systems with iteratively periodic factors. IET Control Theory Appl. 9(15), 2305–2311 (2015)

  12. Li, X.-D., Chow, T.W.S., Ho, J.K.L.: 2D system theory based iterative learning control for linear continuous systems with time delays. IEEE Trans. Circuits Syst. I, Regul. Pap. 52(7), 1421–1430 (2005)

  13. He, W., Meng, T.T., Huang, D.Q., Li, X.F.: Adaptive boundary iterative learning control for an Euler–Bernoulli beam system with input constraint. IEEE Trans. Neural Netw. Learn. Syst. 29(5), 1539–1549 (2018)

  14. Dai, X.S., Xu, C., Tian, S.P., Li, Z.L.: Iterative learning control for MIMO second-order hyperbolic distributed parameter systems with uncertainties. Adv. Differ. Equ. 2016(1), 94 (2016)

  15. Liu, S.D., Wang, J.R., Wei, W.: A study on iterative learning control for impulsive differential equations. Commun. Nonlinear Sci. Numer. Simul. 24(1), 4–10 (2015)

  16. Shen, D., Xu, J.-X.: A novel Markov chain based ILC analysis for linear stochastic systems under general data dropouts environments. IEEE Trans. Autom. Control 62(11), 5850–5857 (2018)

  17. Li, Y., Jiang, W.: Fractional order nonlinear systems with delay in iterative learning control. Appl. Math. Comput. 257(15), 546–552 (2015)

  18. Dai, X., Mei, S., Tian, S.: D-type iterative learning control for a class of parabolic partial difference systems. Trans. Inst. Meas. Control 40(10), 3105–3114 (2018)

  19. Dai, X., Tian, S., Guo, Y.: Iterative learning control for discrete parabolic distributed parameter systems. Int. J. Autom. Comput. 12(3), 316–322 (2015)

  20. Liang, C., Wang, J., Feckan, M.: A study on ILC for linear discrete systems with single delay. J. Differ. Equ. Appl. 24(3), 358–374 (2018)

  21. Meng, D.Y., Jia, Y.M., Du, J.P., Yu, F.: Robust iterative learning control design for uncertain time-delay systems based on a performance index. IET Control Theory Appl. 4(5), 759–772 (2010)

  22. Cichy, B., Galkowski, K., Rogers, E.: Iterative learning control for spatio-temporal dynamics using Crank–Nicholson discretization. Multidimens. Syst. Signal Process. 23(1), 185–208 (2012)


Acknowledgements
The authors gratefully acknowledge the financial support of the National Natural Science Foundation of China (Grant Nos. 61863004, 61364006, 61563005), the Natural Science Foundation of Guangxi (Grant No. 2017GXNSFAA198179), and the Key Laboratory of Industrial Process Intelligent Control Technology of Guangxi Higher Education Institutes Director Foundation (No. IPICT-2016-02).

Author information

Authors and Affiliations



This work was carried out in collaboration among all authors. XD raised these interesting problems in this research. XM, YZ, and GX proved the theorems, interpreted the results, and wrote the article. The numerical example was given by XY. All authors defined the research theme, read, and approved the manuscript.

Corresponding author

Correspondence to Xisheng Dai.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Dai, X., Tu, X., Zhao, Y. et al. Iterative learning control for MIMO parabolic partial difference systems with time delay. Adv Differ Equ 2018, 344 (2018).


Keywords
  • Iterative learning control
  • Parabolic partial difference systems
  • Time delay
  • Convergence