Identification and control of delayed unstable and integrative LTI MIMO systems using pattern search methods
Advances in Difference Equations volume 2013, Article number: 331 (2013)
Abstract
In this paper, we develop a multimodel adaptive control strategy to be applied to delay compensation schemes for stable/unstable LTI MIMO systems. The only requirement is that the delay be bounded and decoupled from the control strategy. The delay identification problem is formulated as an optimization problem and framed within the abstract definition of the Generalized Pattern Search Method (GPSM). Taking advantage of the global convergence analysis available for the GPSM, we analyse the stability and the delay identification capabilities of the proposed approach. Simulation examples show the usefulness of the proposed strategy, proving that the scheme is capable of identifying the delay and stabilizing the system even when the delay is large. The capabilities of the approach are tested on a second-order delayed unstable process, a MIMO unstable system and an irrigation channel model. Additionally, simulation examples on an irrigation channel with time-varying delay are presented.
1 Introduction
The external delay in a process causes the output signal to be delayed with respect to the input. If a control strategy has to be designed for such a system, the presence of the delay makes the task more difficult, especially when the rational part of the system is unstable [1, 2]. Various strategies have been used to counteract the delay effect; the tuning of PID controllers is perhaps the most widely used. In [3–6], different tuning rules for stable/unstable systems with delay can be found. The disadvantage of such techniques is that they only work well when the delay is small compared with the time constant of the system [7].
For systems where the delay is dominant (i.e., the delay is large compared with the time constant of the system), different control strategies such as delay compensation schemes (DCS) should be used, the most well known being the Smith predictor and its modifications for unstable and integrative systems [8–11]. Generally, these approaches have to be applied offline to the nominal parameters of the system, known beforehand. An additional shortcoming is the lack of robustness to small uncertainties in the delay. In practice, when using a DCS for an unstable system, it is very difficult to ensure closed-loop stability in the presence of delay uncertainty. This makes the DCS design considerably more difficult than its stable counterpart. Consequently, much effort has been dedicated in recent years to the design of controllers for systems with delay uncertainty.
Recently, a framework focused on the identification and control of systems with delay uncertainty has been proposed both for stable [12–14] and integrative [15] systems. The approach is based on the classical SP and a multimodel scheme. The multimodel scheme contains a battery of time-varying models which are updated using a modification rule. Each model possesses the same rational component but a different delay value. The algorithm compares the mismatch between the actual system and each model and selects, at each time interval, the one that best describes the behaviour of the actual system, providing online identification of the delay while simultaneously ensuring closed-loop stability. The way in which the delay varies is determined by a heuristic optimization; this allows both the delay identification and the control of the system. Additionally, this approach leads to a robustly stable closed-loop system while achieving great performance for systems with unknown long delays.
In this sense, the work presented in [12] is extended in this paper to potentially unstable and integrative systems. In this case, the control scheme is based on the modified Smith predictor (MoSP) introduced in [8], and the optimization is framed within a Pattern Search Method (PSM) [16]. It is worth emphasizing that component-by-component delay identification in an unstable MIMO system is a difficult task, since it is impractical to estimate the delays from open-loop experiments.
Control-oriented model identification methods have been of great interest in the process control community, and there are different approaches to the identification problem, such as the design of optimal controllers based on particle swarm optimization [17] or LMI optimization [18–21]. The drawback of all these works is that the optimization is done offline. Recently, a systematic closed-loop parametric online identification method based on a step response test and online weighted least-squares optimization was proposed in [22] for integrative and unstable processes. However, that work does not address the control strategy; therefore, the modification of controller parameters must be done offline.
In this paper, the delay identification is formulated as an optimization problem, which is solved using the so-called Pattern Search Method (PSM). The PSM has been used in mathematics and optimization theory [23, 24], but its use in control is rather limited, with only a few works on the subject [25, 26]. Moreover, in these two works there is neither a convergence analysis nor a formal framing within the PSM. This contrasts with the present work, where the proposed PSM is framed correctly within the generalized PSM [16], and therefore inherits its general convergence properties. Thus, analytical stability properties can be formulated for this approach adequately and easily, since previous results and concepts can be used for this purpose.
The PSM is implemented for practical purposes on the modified Smith predictor (MoSP) [8], and it is complemented with a multimodel scheme running in parallel [27]. The multimodel scheme contains the trial points (a battery of models), which are updated through time by a modification rule called an exploratory move in the PSM context. Each model possesses the same rational component but a different value for the delay. After an exploratory move, the algorithm compares the mismatch between the actual system and each model and selects, at each time interval, the one that best describes the behaviour of the system, providing an online estimation of the delay. An advantage of the multimodel approach is that the PSM relies only on simple mathematical operations, which makes its implementation relatively straightforward.
The proposed approach is tested on different system models and under different conditions: a second-order delayed unstable process, where an uncertainty in the rational component of the system is taken into account; a MIMO unstable system; and an irrigation channel model, which is an integrative MIMO system. Additionally, the simulations are extended to time-varying delays, showing the potential and applicability of the proposed approach.
The paper is organized as follows. Section 2 states the problem formulation. Section 3 presents the proposed control scheme framed on the GPSM. The stability analysis is performed in Section 4. Simulation examples are presented in Section 5. Finally, Section 6 summarizes the main conclusions.
2 Problem formulation
Let us consider the following MIMO transfer function model:
where {G}^{df}(s) is a matrix containing the rational component of the system, H(s)=({H}_{ij}(s))=({e}^{-{h}_{ij}s}), for i,j=1,2,\dots ,n, is a matrix containing only delays, and • denotes the Schur (or componentwise) product [28]. The following assumptions are made about system (1).
Assumption 1 The rational transfer function matrix {G}^{df}(s) is proper, known, and has no unstable pole-zero cancellations.
Assumption 2 The delay between each input/output component lies in a known compact interval. That is, there exist two known matrices \overline{H}=({\overline{h}}_{ij}), \underline{H}=({\underline{h}}_{ij})\in {\mathbb{R}}^{n\times n} such that {\underline{h}}_{ij}\le {h}_{ij}\le {\overline{h}}_{ij}, \mathrm{\forall}i,j.
Assumption 1 is feasible in many control problems, where a nominal model of the system is available beforehand and does not possess unstable pole-zero cancellations. However, the delay may be unknown or even time-varying, and hence it has to be estimated [29]. Note that there is no assumption on the stability of {G}^{df}(s), which may be unstable. Assumption 2 will be used in the proposed pattern-search-based algorithm to estimate the delay in Section 3, and it is feasible in many practical control problems, where bounds on the delay are known.
2.1 Modified Smith predictor
The MoSP [8] is shown in Figure 1. {\hat{G}}^{df}(s)•\hat{H} and {G}^{df}(s)•H are the transfer functions of the plant model and the actual plant, respectively. This structure has three controllers, which are designed for different objectives. Compensator {K}^{1}=diag({K}_{i}^{1}), where i=1,2,\dots ,n, in the inner loop, is designed to pre-stabilise {G}^{df} in the unstable case; C=diag({C}_{i}), i=1,2,\dots ,n, is used to take care of reference tracking; and {K}^{2}=diag({K}_{i}^{2}), i=1,2,\dots ,n, is used for disturbance rejection. The controllers can be designed in different ways, for instance, using robust control techniques. We clarify that the controller design is not the focus of the present work.
We omit, for notational simplicity, the argument s in the following equations. Thus, the Laplace transform with zero initial conditions of the closed-loop response obtained from Figure 1 is given by
where \chi =(I+{K}^{2}({G}^{df}•H)), \hat{\chi}=(I+{K}^{2}({\hat{G}}^{df}•\hat{H})). If Assumption 1 holds (the model perfectly matches the plant, {\hat{G}}^{df}(s)={G}^{df}(s)) and \hat{H}=H, then (2) reduces to the simplified transfer function
Hence, the system with internal delay that appears in Eq. (2) becomes a system with external delay in Eq. (3). Therefore, this is precisely a topology that decouples the delay from the control strategy, making the system easier to control, since compensators {K}^{1}, {K}^{2} and C are designed regardless of the delay (i.e., based only on the rational component of the system, which is known beforehand). Otherwise, if the delay is not known beforehand, the exact compensation cannot be performed even though the rational component is known, and the closed loop in Eq. (2) can be unstable.
The problem faced corresponds to the case when the matrix delay H is unknown, and our objective is to obtain an estimate \hat{H} of the matrix delay to be used in the MoSP structure depicted in Figure 1 in order to guarantee the stability of the system. The delay is estimated by formulating the identification problem as an optimization problem solved online by the PSM, which is briefly explained below.
2.2 Generalized pattern search method (GPSM)
The GPSM was proposed in [16] for derivative-free unconstrained optimization (minimization in this case) of continuously differentiable convex functions J:{\mathbb{R}}^{n}\to \mathbb{R}.
The GPSM consists of a sequence of iterations {\hat{H}}_{k}^{\mathrm{nom}}, k\in \mathbb{N}. At each iteration, a number of trial steps \mathrm{\Delta}{h}_{k}^{n} are added to the iterate {\hat{H}}_{k}^{\mathrm{nom}} to obtain a number of trial points {\hat{H}}_{k}^{n}={\hat{H}}_{k}^{\mathrm{nom}}+\mathrm{\Delta}{h}_{k}^{n} at iteration k. The objective function J is evaluated at these trial points through a series of exploratory moves, a procedure in which the trial points are evaluated and the values obtained are compared with J({\hat{H}}_{k}^{\mathrm{nom}}). Then the trial step \mathrm{\Delta}{h}_{k}^{\ast} associated with the minimum value of J({\hat{H}}_{k}^{\mathrm{nom}}+\mathrm{\Delta}{h}_{k}^{n}) satisfying J({\hat{H}}_{k}^{\mathrm{nom}}+\mathrm{\Delta}{h}_{k}^{n})-J({\hat{H}}_{k}^{\mathrm{nom}})\le 0 is chosen to generate the next iterate {\hat{H}}_{k+1}^{\mathrm{nom}}={\hat{H}}_{k}^{\mathrm{nom}}+\mathrm{\Delta}{h}_{k}^{\ast}. The trial steps \mathrm{\Delta}{h}_{k}^{n} are generated using a step length parameter {\mathrm{\Delta}}_{k}\in {\mathbb{R}}_{+}^{n}, which is also updated through time depending on the value of \mathrm{\Delta}{h}_{k-1}^{n}. The evolution of the trial points establishes the convergence properties of the algorithm. A full PSM explanation can be found in [16].
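As a toy illustration of the iteration just described, the following sketch (all names and values hypothetical, not the paper's algorithm) uses coordinate directions as patterns, accepts the best decreasing exploratory move, and contracts the step length after an unsuccessful iteration:

```python
import numpy as np

def pattern_search(J, h0, delta0, eta=0.5, n_iter=40):
    """Minimal pattern-search sketch: coordinate directions as patterns,
    simple-decrease acceptance, step-length contraction on failure."""
    h = np.asarray(h0, dtype=float)
    delta = float(delta0)
    for _ in range(n_iter):
        best_h, best_J = h, J(h)
        # exploratory moves: evaluate J at h +/- delta along each coordinate
        for i in range(h.size):
            for sign in (1.0, -1.0):
                trial = h.copy()
                trial[i] += sign * delta
                J_trial = J(trial)
                if J_trial < best_J:
                    best_h, best_J = trial, J_trial
        if best_h is h:
            delta *= eta      # unsuccessful iteration: contract step length
        else:
            h = best_h        # successful iteration: accept the best move
    return h

# toy delay-mismatch objective with its minimum at h = (2, 5)
J = lambda h: float(np.sum((h - np.array([2.0, 5.0])) ** 2))
h_est = pattern_search(J, h0=[0.0, 0.0], delta0=1.0)
```

The contraction factor `eta` plays the role of the reduction factor of Section 3.3: as the step length shrinks, the trial points collapse towards the current iterate.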
3 Proposed control scheme using PSM
The basic structure of the proposed scheme is depicted in Figure 2. The MoSP and the PSM are complemented with four elements: a set of trial points; an objective function for evaluating the potential behaviour of each model; a switching logic, which monitors the index periodically and decides the best model to be used in the control law; and the switching mechanism, which is intended to reduce the possible mismatch between the nominal and the actual output of the system. The following subsections consider in detail the different elements of the proposed architecture and how they fit into the GPSM.
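To illustrate the switching logic on toy signals (all signals and values here are hypothetical, not the paper's examples), the following sketch runs a bank of candidate delay models against a delayed output over one window and keeps the candidate with the smallest accumulated squared error:

```python
import numpy as np

# Toy multimodel switching sketch: a battery of candidate delay models
# is compared against the "plant" output over one residence window, and
# the candidate with the smallest accumulated error is selected.
t = np.linspace(0.0, 20.0, 2001)
dt = t[1] - t[0]
r = np.sin(0.7 * t) + 0.4 * np.sin(1.9 * t)       # aperiodic-looking reference
true_delay = 1.7
y = np.interp(t - true_delay, t, r, left=0.0)     # delayed plant output

candidates = np.linspace(0.0, 3.0, 31)            # battery of delay models
costs = np.array([np.sum((y - np.interp(t - h, t, r, left=0.0)) ** 2) * dt
                  for h in candidates])           # windowed cost per model
best = candidates[int(np.argmin(costs))]          # switching decision
```

The `argmin` over the windowed costs is the switching decision: the selected delay is the one whose model output best reproduces the measured output over the last window.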
3.1 The patterns
The proposed architecture is composed of patterns, which are directions in the search space starting from the nominal delay {\hat{H}}_{k}=({\hat{h}}_{{k}_{ij}})\in {\mathbb{R}}^{n\times n}. The patterns are modified at each iteration to obtain a new set of trial points. These trial points are time-varying and automatically adjusted by the algorithm, as prescribed by the PSM framework. The trial steps are defined by
where \mathrm{\Delta}{h}_{k}^{p,l}\in {\mathbb{R}}^{n\times n}, for p=1,2,\dots ,{N}_{m} and l=1,2,\dots ,n, where {N}_{m} is the number of models ({N}_{m} should be odd, since the nominal model should always be evaluated); {B}^{l}\in {\mathbb{R}}^{n\times 1} is the basis vector having a one in the l th position and zeros elsewhere; the step length parameter {\mathrm{\Delta}}_{k}={[\mathrm{\Delta}{h}_{k}^{1},\mathrm{\Delta}{h}_{k}^{2},\dots ,\mathrm{\Delta}{h}_{k}^{{N}_{m}}]}^{T}\in {\mathbb{R}}^{1\times {N}_{m}} is adjusted by the algorithm, with its values defined initially by the designer; and {C}^{p,l}\in {\mathbb{R}}^{{N}_{m}\times n} is the constant matrix having a one in the (p th, l th) position and zeros elsewhere. In this way, the trial points take the form (5)-(7).
It can be seen that the trial points are generated by adding the patterns, which in turn are generated by the step length parameter {\mathrm{\Delta}}_{k}, to the nominal model {\hat{H}}_{k}. Note that the zero step (no change in the nominal model) and both positive and negative directions of change for {\hat{H}}_{k} are all included, as explained in the definition of {\mathrm{\Delta}}_{k} in Section 3.3. In this way, it is ensured that the whole search space is evaluated.
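A minimal sketch of how such a battery of trial delays could be built around the nominal value, assuming a scalar delay and the quadratic step lengths of Section 3.3 (function name and values hypothetical):

```python
import numpy as np

def trial_delays(h_nom, gamma_k, n_models):
    """Battery of trial delays around the nominal one: n_models must be
    odd so the zero step (nominal model) is included, and steps grow
    quadratically, l**2 * gamma_k, in both directions."""
    assert n_models % 2 == 1, "the nominal model must always be evaluated"
    half = (n_models - 1) // 2
    steps = [np.sign(l) * l ** 2 * gamma_k for l in range(-half, half + 1)]
    return h_nom + np.array(steps, dtype=float)

# 7 candidate delays around a nominal delay of 10: the mesh is finer
# near the nominal value and coarser farther away
battery = trial_delays(10.0, gamma_k=0.5, n_models=7)
```

The middle entry of `battery` is the unchanged nominal delay, matching the requirement that the zero step always be evaluated.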
3.2 Objective function
The second element of the proposed scheme is an objective function aimed at evaluating the behaviour of all the trial points {\hat{H}}_{k}^{(p,l)}. The suggested objective function is
where Q={Q}^{T}>0, {e}^{(p,l)}(\tau )=y(\tau )-{\hat{y}}^{(p,l)}(\tau ) is the output error, y(\tau ) is the output vector of the plant at the instant t=\tau, while {\hat{y}}^{(p,l)}(\tau ) denotes the (vector) output of each model {\hat{H}}^{(p,l)}. Notice that both y(t) and {\hat{y}}^{(p,l)}(t) depend on the reference signal r(t), and so does (8); this dependence is expressed in (8) implicitly through t. {T}_{\mathrm{res}} is the so-called residence time and defines the time window over which the delay model {\hat{H}}^{(p,l)} is evaluated. It can also be seen that the objective function (8) differs from the one proposed in [16], because it is time-dependent. However, this is not an obstacle to framing the present approach within the GPSM, as explained in Section 3.4.
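A discrete-time sketch of evaluating the windowed cost (8) for one model, assuming uniformly sampled scalar signals and Q = 1 (all names and signals hypothetical):

```python
import numpy as np

def window_cost(y, y_hat, t, k, T_res):
    """Windowed squared-error cost over [k*T_res, (k+1)*T_res),
    with scalar weighting Q = 1; y and y_hat are sampled plant and
    model outputs on the uniform time grid t."""
    mask = (t >= k * T_res) & (t < (k + 1) * T_res)
    e = y[mask] - y_hat[mask]          # output error e(t) = y(t) - y_hat(t)
    dt = t[1] - t[0]                   # uniform sampling step assumed
    return float(np.sum(e ** 2) * dt)  # Riemann approximation of the integral

t = np.linspace(0.0, 10.0, 1001)
y = np.sin(t)                          # toy plant output
y_hat = np.sin(t - 0.3)                # toy model output with delay mismatch
cost = window_cost(y, y_hat, t, k=1, T_res=5.0)   # cost on the window [5, 10)
```

A perfectly matched model yields zero cost on every window, which is the property exploited in Section 3.4 to locate the actual delay.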
The Laplace transform of the output error {e}^{(p,l)} with zero initial conditions is given by
where \mathrm{\Omega}=((I+({K}^{1}+C){\hat{G}}^{df}-C{\hat{G}}^{df}•{\hat{H}}^{(p,l)}){\hat{\chi}}^{-1}\chi +C{G}^{df}•H). It is readily seen from (9) that the error is zero when {\hat{H}}^{(p,l)}=H. According to this fact, the search for the minimum of the function (8) leads to an estimate of the actual matrix delay. Thus, the identification problem is converted into an optimization one. It is important to notice that (8) is a continuously differentiable function, since it is defined as the square of the difference of two functions that are continuously differentiable. Also, (8) satisfies the compactness condition stated in [16]. On the other hand, the simple decrease condition (convexity) cannot be guaranteed to be satisfied.
3.3 Proposed pattern search method
The PSM monitors the objective function at time instants that are integer multiples of the residence time and selects the nominal delay, which is the best estimate of the actual one. The nominal delay is used within each time interval to generate the control law. The initial trial point {\hat{H}}_{k=1}^{\mathrm{nom}} and {\mathrm{\Delta}}_{k=1} are selected by the designer using Assumption 2. From this moment onwards, the PSM can be formally expressed as Algorithm 3.
The trial points (5)-(7) are compared (line 13 of Algorithm 3) with the nominal model using the performance index (8). In this way, the element associated with the lowest value of the objective function is obtained, and this represents the best delay estimate. Next, the new trial points are generated based on {\mathrm{\Delta}}_{k}, which is given by
where the reduction factor matrix is given by {\mathrm{\Gamma}}_{k}={\eta}^{k}{\mathrm{\Gamma}}_{0}
for 0<\eta <1. Furthermore, {lim}_{k\to \infty}{\mathrm{\Gamma}}_{k}=0. Notice that, since {\mathrm{\Delta}}_{k} contains positive and negative values, the trial points (5) and (7) are defined as additions to and subtractions from the nominal model. These positive and negative values, along with the zero value in the central position of (10) and adequate values for {N}_{m} and {\mathrm{\Gamma}}_{0}, allow a dense search in the delay space (i.e., the complete search space can be explored to find the minimum of the objective function (8)).
The models are not equally spaced in the delay space, since the mesh is more refined near the nominal delay. To show this, the separation between consecutive patterns can be calculated according to Eq. (10). Consider only the positive values for {\mathrm{\Delta}}_{k} (the separation for the negative part is identical due to symmetry): {\mathrm{\Delta}}_{k}={[{l}^{2}{\mathrm{\Gamma}}_{k}]}_{l=0}^{l=(\frac{{N}_{m}-1}{2})}. Then, the separation between two consecutive patterns is \delta {\mathrm{\Delta}}_{k}={(l+1)}^{2}{\mathrm{\Gamma}}_{k}-{l}^{2}{\mathrm{\Gamma}}_{k}=(2l+1){\mathrm{\Gamma}}_{k} with l=0,1,\dots ,(\frac{{N}_{m}-1}{2})-1. Notice that \delta {\mathrm{\Delta}}_{k} increases as l increases, which means that the patterns become more separated as they move away from the nominal model. The largest separation between patterns is \delta {\mathrm{\Delta}}_{k,\mathrm{max}}=({N}_{m}-2){\mathrm{\Gamma}}_{k}, while the minimum separation is \delta {\mathrm{\Delta}}_{k,\mathrm{min}}={\mathrm{\Gamma}}_{k}, at each iteration k.
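The spacing derived above can be checked numerically; the following sketch (values hypothetical) reproduces the separations (2l+1)Γ_k and their extreme values:

```python
import numpy as np

# With step lengths l**2 * Gamma_k, consecutive separations are
# (2l + 1) * Gamma_k, so the mesh refines near the nominal delay.
gamma_k, n_m = 0.1, 9
half = (n_m - 1) // 2
positive_steps = np.array([l ** 2 * gamma_k for l in range(half + 1)])
separations = np.diff(positive_steps)
assert np.allclose(separations, [(2 * l + 1) * gamma_k for l in range(half)])
assert np.isclose(separations.max(), (n_m - 2) * gamma_k)  # largest gap
assert np.isclose(separations.min(), gamma_k)              # smallest gap
```

For Γ_k = 0.1 and N_m = 9, the positive steps are 0, 0.1, 0.4, 0.9 and 1.6, with gaps 0.1, 0.3, 0.5 and 0.7, confirming both extreme-separation formulas.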
In this approach, the values of η, {\mathrm{\Gamma}}_{0} and {N}_{m} are fixed by the designer in relation to the desired convergence time, the cycle time and the architecture of the processor. For practical purposes, the approach has the advantage of being easily implementable in real systems, such as microcontrollers, low-cost chips or similar programmable devices. Its operation is simple, which opens up a wide range of possible applications.
3.4 Convergence results of the identification scheme
This section states the convergence results for Algorithm 3, which guarantee the identification of the actual matrix delay. The proof is divided into two steps. Firstly, the conditions under which (8) has a unique global minimum at {\hat{H}}_{k}^{\mathrm{nom}}=H are stated. Secondly, it is shown that Algorithm 3 is able to asymptotically find the global minimum of the function J(\hat{H},t).
From (8), it can be seen that J(\hat{H},t)=0 when \hat{H}=H, and J(\hat{H},t)\ge 0, since its integrand is non-negative. Thus, the global minimum is attained when J(\hat{H},t)=0. The following Assumption 3 will be used subsequently.
Assumption 3 Fix an arbitrary {T}_{\mathrm{res}}>0. The reference signals {r}_{1}(t),{r}_{2}(t),\dots ,{r}_{n}(t) satisfy the following conditions:
and
for all \lambda ,{\lambda}_{i},{\gamma}_{i}\in [\underline{h},\overline{h}]\cap (0,\infty ) for i=1,2,\dots ,n, \overline{h}=\max {\overline{h}}_{ij}, \underline{h}=\min {\underline{h}}_{ij}, j\ne i, {\epsilon}_{j}\in \{0,1\} and t\in \mathcal{I}\subseteq [k{T}_{\mathrm{res}},(k+1){T}_{\mathrm{res}}), k\in \mathbb{N}, for at least one connected interval \mathcal{I} of positive measure.
The meaning and role played by Assumption 3 in the delay identification are pointed out in the proof of Lemma 1. Basically, the interpretation of (13) is that the reference signals cannot be periodic, and that the different sums of them cannot be equal to another reference signal or a delayed version of it. This interpretation allows us to generate suitable reference signals easily in practice, even though Eq. (13) looks complicated.
Lemma 1 The function (8) has a unique global minimum at \hat{H}=H for all t\ge 0, provided that the reference signals {r}_{1}(t),{r}_{2}(t),\dots ,{r}_{n}(t) satisfy Assumption 3 for a given {T}_{\mathrm{res}}>0.
Proof The proof follows the same lines as that of Lemma 2 in [12]. □
In conclusion, if compensation between the different components is not possible due to Assumption 3, then the actual matrix delay is the unique global minimum for the objective function (8).
Lemma 1 states that the minimum of J(\hat{H},t) is unique if the reference signals {r}_{j}(t) satisfy Assumption 3. The approach presents the peculiarity that the global minimum is always the same, but for each time window the function J(\hat{H},t) may take a different form. However, the same ideas and concepts from the GPSM can still be applied to this problem.
Next, we establish that the proposed algorithm is able to find the global minimum of the function J(\hat{H},t). Lemma 1 guarantees that, under Assumption 3, J(\hat{H},t) has a unique global minimum, but there may also be local minima, while the GPSM is designed for functions satisfying a simple decrease condition (convex functions). Fortunately, the original GPSM given in [16] is extended in [30] to functions with multiple local minima. This is exploited here by making the search dense, according to the ideas in [30], which can be achieved in the presented algorithm by making the parameter {\mathrm{\Gamma}}_{0} very close to zero and {N}_{m} sufficiently large.
Now, we can formulate the following result on delay identification based on the dense construction of patterns according to the ideas in [30] and the global convergence results stated in [16].
Theorem 1 Consider the delay system given by (1), satisfying Assumptions 1 and 2. Then the PSM-based Algorithm 3, through models (5)-(7), can identify the actual matrix delay, provided that the reference signals {r}_{1}(t),{r}_{2}(t),\dots ,{r}_{n}(t) satisfy Assumption 3 for a value of {T}_{\mathrm{res}}, {\mathrm{\Gamma}}_{0} is sufficiently close to zero and {N}_{m} is sufficiently large.
The proof of Theorem 1 relies on two basic features. The first is that the mesh generated by the patterns is dense in the search space; this allows obtaining an estimate of the delay lying in a neighbourhood of the actual delay in which there is no local optimum except the global one. The second is that the algorithm is proven to converge to the actual delay by contradiction.
Proof If the initial estimate is the actual matrix delay, then the delay is identified and the theorem is proven. Thus, consider that {\hat{H}}_{0}\ne H. The particularity of this pattern search problem is that the objective function to be optimized is time-varying, since it changes at each residence time. In this way, we may consider each of the objective functions (8) evaluated at each residence time multiple, t=k{T}_{\mathrm{res}}, to define the family of functions {J}_{k}(\hat{H})={\int}_{(k-1){T}_{\mathrm{res}}}^{k{T}_{\mathrm{res}}}{(y(\tau )-\hat{y}(\tau ,\hat{H}))}^{2}\phantom{\rule{0.2em}{0ex}}d\tau, which satisfy {J}_{k}(\hat{H})=0 if and only if \hat{H}=H. Lemma 1 ensures that all these functions possess a common unique global minimum provided that Assumption 3 holds. However, each of these functions may possess a number of local minima.
Thus, one of the following two cases will arise:

(a) All the functions {J}_{k}(\hat{H}) have the actual matrix delay as their only local minimum. Then, any new iterate {\hat{H}}_{1} belongs to an interval where there is no minimum other than the actual delay. This would be the best situation, since the optimization process is not threatened by the presence of local minima.

(b) At least one of the functions {J}_{k}(\hat{H}) has additional local minima distinct from the global minimum. In this case, let K\subseteq \mathbb{N} denote the set indexing the functions having local minima and {C}_{k}, with k\in K, the set of all delays that are local minima of the function {J}_{k}(\hat{H}), excluding the global optimum at the actual matrix delay.
Hence, {J}_{k}(\hat{H})>0 for all \hat{H}\in {C}_{k} and all k\in K. Moreover, we can define \theta =\inf \{{J}_{k}(\hat{H}):\hat{H}\in {C}_{k},k\in K\}>0, which exists and is finite and positive. The parameter θ defines a neighbourhood, I, around the actual delay, where all the functions {J}_{k}(\hat{H}) have no local minima except the global one. In addition, denote by λ the diameter of the set I. Thus, if the patterns define a dense subset of the search space in such a way that the patterns cover all the space, 2{(\frac{{N}_{m}-1}{2})}^{2}{\mathrm{\Gamma}}_{0}\ge \overline{H}-\underline{H} (i.e., the spread of the patterns given by the total amplitude of {\mathrm{\Delta}}_{0} is larger than the uncertainty range given by Assumption 2), and the separation between them is sufficiently small, \parallel \delta {\mathrm{\Delta}}_{0,\mathrm{max}}\parallel =({N}_{m}-2)\parallel {\mathrm{\Gamma}}_{0}\parallel <\lambda, which is achieved with a sufficiently large {N}_{m} and a sufficiently small \parallel {\mathrm{\Gamma}}_{0}\parallel, then for any initial estimate {\hat{H}}_{0}, there exists an estimate {\hat{H}}_{1} such that J({\hat{H}}_{1})<\theta. This implies that the iteration provides an estimate within a neighbourhood I of the actual matrix delay, where all the functions have no minimum other than the actual one.
In conclusion, the iterate {\hat{H}}_{1} is always guaranteed to belong to an interval where the actual matrix delay, H, is the only optimum. This is the first feature of the proof. Notice that it is not necessary to compute the location of the minima of the functions {J}_{k}(\hat{H}), since they are only used to ensure that such an estimate {\hat{H}}_{1} exists. Once the estimate {\hat{H}}_{1} belongs to I, so do all the following iterates, i.e., {\hat{H}}_{k}\in I for all k\ge 1.
Notice that the sequence of estimates {\{{\hat{H}}_{k}\}}_{k=0}^{\infty} converges to a constant finite limit, since {lim}_{k\to \infty}{\mathrm{\Gamma}}_{k}={lim}_{k\to \infty}{\eta}^{k}{\mathrm{\Gamma}}_{0}=0 for 0<\eta <1 and, therefore, {lim}_{k\to \infty}{\mathrm{\Delta}}_{k}=0 in Eq. (10), which makes all the patterns collapse asymptotically into a single one. Hence, no oscillatory behaviour is asymptotically possible for {\hat{H}}_{k}, and convergence to a constant value is guaranteed.
Now, the convergence of the estimates to the actual matrix delay is proven by contradiction. Thus, assume that {\hat{H}}_{k}\to {H}_{\ast}\in I with {H}_{\ast}\ne H, which means that the estimates converge to a delay value different from the actual one. In this way, there exists \epsilon >0 such that \parallel H-{H}_{\ast}\parallel >2\epsilon. It will be proven that this leads to a contradiction. Recall that Lemma 1 ensures that the global minimum is unique and located at the actual matrix delay, provided that Assumption 3 holds, which implies that {J}_{k}({H}_{\ast})>0 for all k.
The convergence {\hat{H}}_{k}\to {H}_{\ast} implies one of the two situations below:

(i) There is a finite {k}_{0}\in \mathbb{N} such that {\hat{H}}_{k}={H}_{\ast} for all k\ge {k}_{0}\ge 1, which means that the limit {H}_{\ast} is reached in finite time.

(ii) For a prescribed finite \epsilon >0, there is a finite {k}_{0}={k}_{0}(\epsilon )\in \mathbb{N} such that for all k\ge {k}_{0}\ge 1, \parallel {H}_{\ast}-{\hat{H}}_{k}\parallel \le \epsilon, which means that the limit {H}_{\ast} is reached asymptotically.
These two situations will be considered separately:

(i) This case requires that {\hat{H}}_{{k}_{0}}={\hat{H}}_{{k}_{0}+1}={H}_{\ast}. However, since in particular {J}_{{k}_{0}}({H}_{\ast})>0, and there are no local minima of any function {J}_{k} in I (they are avoided by its definition), there is a direction in the search space along which {J}_{{k}_{0}}(\hat{H}) decreases, since the minimum is located at H\in I, i.e., {J}_{{k}_{0}}(H)=0. Taking into account that the patterns define a dense mesh in the search space, there is a direction in \mathrm{\Delta}{h}_{k}^{p,l}, which defines the patterns through Eq. (4), near this decreasing direction, and a value 1\le l\le (\frac{{N}_{m}-1}{2}) such that the step length parameter {\mathrm{\Delta}}_{{k}_{0}}={l}^{2}{\mathrm{\Gamma}}_{{k}_{0}} given by Eq. (10) defines a pattern {\hat{H}}_{{k}_{0}}^{(p,l)} with J({\hat{H}}_{{k}_{0}}^{(p,l)})<J({\hat{H}}_{{k}_{0}}). Consequently, {\hat{H}}_{{k}_{0}+1}={\hat{H}}_{{k}_{0}}^{(p,l)}\ne {\hat{H}}_{{k}_{0}}, which contradicts the initial assumption that {\hat{H}}_{{k}_{0}}={\hat{H}}_{{k}_{0}+1}={H}_{\ast}.

(ii) The condition \parallel {H}_{\ast}-{\hat{H}}_{k}\parallel \le \epsilon for a sufficiently small finite \epsilon >0 defines a neighbourhood L of {H}_{\ast}\ne H where the estimates lie, {\hat{H}}_{k}\in L, for all k\ge {k}_{0}. However, this fact, along with \parallel H-{H}_{\ast}\parallel >2\epsilon, implies that

2\epsilon <\parallel H-{H}_{\ast}\parallel =\parallel H-{\hat{H}}_{k}+{\hat{H}}_{k}-{H}_{\ast}\parallel \le \parallel H-{\hat{H}}_{k}\parallel +\parallel {\hat{H}}_{k}-{H}_{\ast}\parallel \le \parallel H-{\hat{H}}_{k}\parallel +\epsilon \quad (14)

\Rightarrow \quad \parallel H-{\hat{H}}_{k}\parallel >\epsilon. \quad (15)

Therefore, there is a finite \underline{\mu}(\epsilon )>0 satisfying {J}_{k}(\hat{H})\ge \underline{\mu}>0 for all \hat{H}\in L. Since the functions {J}_{k}(\hat{H}) are continuous with respect to \hat{H}, there exists a value {\hat{H}}_{{k}_{0}}^{m}\notin L such that {J}_{k}({\hat{H}}_{k}^{m})={\mu}_{1} for some 0<{\mu}_{1}<\underline{\mu}. Hence, since {\hat{H}}_{{k}_{0}}^{m}\notin L, then \parallel {H}_{\ast}-{\hat{H}}_{{k}_{0}}^{m}\parallel >\epsilon.
Notice that, since the estimates belong to the interval $I$, where there are no local minima, and $\|\delta\Delta_{0,\max}\| = (N_m-2)\|\Gamma_0\| < \lambda$, the distance between the estimate and $H_*$ is strictly smaller than $\|\delta\Delta_{k,\max}\| = (N_m-2)\eta^{k}\|\Gamma_0\| < \lambda$ at each step $k$, which is strictly smaller than $(\frac{N_m-1}{2})\eta^{k}\|\Gamma_0\|$, the range defined by the step length through Eq. (10); i.e., there are patterns outside the region $L$. Thus, the patterns define a dense mesh in the search space, and there is a direction $\Delta h_k^{p,l}$, which defines the patterns through Eqs. (5)-(7), and a value $1 \le l \le (\frac{N_m-1}{2})$ such that the parameter step length $\Delta_{k_0} = l^{2}\Gamma_{k_0}$ defines a pattern $\hat{H}_{k}^{(p,l)}$ near a value of the delay $\hat{H}_{k}^{m}$ satisfying $J_k(\hat{H}_{k}^{m}) = \mu_1$ for some $0 < \mu_1 < \underline{\mu}$, since the patterns go beyond the boundary of $L$. Then, $J(\hat{H}_{k_0}^{(p,l)}) < J(\hat{H})$ for all $\hat{H} \in L$. Consequently, $\hat{H}_{k_0+1} = \hat{H}_{k_0}^{(p,l)} \notin L$, which contradicts the initial assumption that $\hat{H}_k \in L$ for, in particular, $k = k_0 + 1$.
In conclusion, there is a contradiction with the initial assumption that $\hat{H}_k \to H_*$ with $H_* \ne H$; hence, the sequence of iterations converges to the actual matrix delay $H$, proving the theorem. □

Notice that the identification result has been established thanks to the fact that the identification problem is formulated within a GPSM, taking advantage of this technique in its application to control theory. Also, note that Theorem 1 requires $\Gamma_0$ to be sufficiently close to zero and $N_m$ to be sufficiently large. However, it has been observed in simulation examples that finite values are sufficient for practical applications, as shown in Section 5.
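To make the pattern-polling mechanism discussed above concrete, the following minimal sketch implements a generalized pattern search for a scalar delay. The cost function, initial values, contraction rate and function names are illustrative assumptions, not the paper's Algorithm 3 or its matrix formulation.

```python
def gps_identify_delay(cost, h0, gamma0, n_m=7, eta=0.5, iters=40):
    """Minimal generalized-pattern-search sketch for a scalar delay.

    `cost` plays the role of the residual J_k; `h0`, `gamma0`, `n_m`
    and `eta` are hypothetical stand-ins for the nominal delay, the
    reduction factor, the number of models and the contraction rate.
    """
    h_hat, gamma = float(h0), float(gamma0)
    for _ in range(iters):
        # Poll a symmetric mesh of patterns around the current estimate,
        # analogous to step lengths l * Gamma_k with 1 <= l <= (n_m-1)/2.
        half = (n_m - 1) // 2
        candidates = [h_hat + sign * l * gamma
                      for l in range(1, half + 1) for sign in (+1, -1)]
        best = min(candidates, key=cost)
        if cost(best) < cost(h_hat):
            h_hat = best          # successful poll: accept the pattern
        else:
            gamma *= eta          # unsuccessful poll: contract the mesh
    return h_hat

# Toy residual with its minimum at a hypothetical actual delay of 13.6 s
h_est = gps_identify_delay(lambda h: (h - 13.6) ** 2, h0=23.0, gamma0=4.0)
```

The contract-on-failure rule is what makes the mesh asymptotically dense around the estimate, which is the property the convergence argument of Theorem 1 relies on.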
3.5 Extension to time-varying delays
In the results presented above, the delays are assumed to be time-invariant. The original formulation of $\Gamma_k$, given in (11), implies that $\lim_{k\to\infty}\Gamma_k = 0$, as discussed in Section 3.3. Thus, for time-varying delay systems, the estimation is not possible, because the search space is asymptotically reduced to a single point. However, simulation results show that a small modification of Algorithm 3 allows the proposed approach to be extended to the case in which the delay is time-varying. The modification is made on the reduction factor matrix and is given by the substitution of Eq. (11) by the following Eq. (16):

where $\underline{\Gamma} > 0$ is the lower bound of the reduction factor matrix. As can be seen, Eq. (16) is easily implementable in Algorithm 3. This condition implies that the step-length parameter (10) is only reduced down to a certain positive value (defined by $\underline{\Gamma}$). In this way, different models (5)-(7) are always generated in such a way that the whole search space can be evaluated regardless of the potential time evolution of the delays. Thus, Algorithm 3 is flexible enough to handle the identification of time-varying delays. Note that, for the time-varying delay case, the precision of the estimation is determined by the value of $\underline{\Gamma}$.
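The lower-bounded update can be sketched as follows for a scalar reduction factor; the names and the values of the contraction rate and of the bound are illustrative assumptions, while Eq. (16) itself operates on the reduction factor matrix.

```python
def update_reduction_factor(gamma_prev, eta=0.5, gamma_lower=0.1):
    """Eq. (16)-style update, sketched for a scalar reduction factor:
    contract as in Eq. (11) but never below the lower bound, so the
    mesh keeps a nonzero width and can keep tracking delay changes.
    The scalar setting and the values of eta / gamma_lower are
    illustrative assumptions."""
    return max(eta * gamma_prev, gamma_lower)

# The factor decreases geometrically and then saturates at the bound
gamma = 4.0
history = []
for _ in range(8):
    gamma = update_reduction_factor(gamma)
    history.append(gamma)
```

Because the factor saturates instead of vanishing, candidate models at a fixed minimum distance from the nominal delay are generated forever, at the cost of an estimation precision limited by the bound.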
4 Stability analysis
This section states the stability properties of the closed loop. We use the identification properties of the proposed algorithm, stated in Theorem 1, to show that the nominal delay converges to a neighbourhood of the actual matrix delay in finite time, so that the closed loop eventually becomes a time-invariant system. The stability theorem is formulated as follows.

Theorem 2 The closed-loop system depicted in Figure 2, obtained from Eqs. (1), (5)-(7) through Algorithm 3, is stable provided that Assumptions 1 and 2 hold, $\Gamma_0$ is sufficiently close to zero, $N_m$ is sufficiently large, and $(C + K^{1})$ stabilizes $G^{df}(s)$.
Proof The proof is made by contradiction. If the output is unbounded, then the input signal behaves as a non-periodic signal satisfying Assumption 3 for any value of $T_{\mathrm{res}}$. Therefore, Theorem 1 guarantees that the nominal delay converges to the actual matrix delay, and hence $\lim_{t\to\infty}\hat{H}(t) = H$, implying $\lim_{t\to\infty}\hat{h}_{ij}(t) = h_{ij}$ for all $i, j$. Consequently, there exist finite $t_{ij} \in \mathbb{R}$ such that $|\hat{h}_{ij}(t) - h_{ij}| \le \epsilon_{ij}$ for all $t \ge t_{ij}$ and any prescribed $\epsilon_{ij} > 0$. Thus, denote $\epsilon = \max_{ij}\epsilon_{ij}$ and $t^* = \max_{ij}t_{ij}$, and consider the state-space realization of the closed-loop system given by Eq. (2):

where the delays $h$, $\hat{h}(t)$ represent the matrix delays in a state-space description, while $A_0$ and $A_1$ are appropriate matrices. Furthermore, $A_0$ is the state-space description of the perfectly compensated delay given by the closed-loop system of Eq. (3), and it is therefore stable by design through the compensators $C$ and $K^{1}$. However, since Theorem 1 guarantees that the delay is identified, $\|x(t-h) - x(t-\hat{h}(t))\| \to 0$ for all $t \ge t_1 \ge t^*$, and there is $\delta = \delta(\epsilon)$ such that $\|x(t-h) - x(t-\hat{h}(t))\| \le \delta$ for all $t \ge t_1$.
The BIBO stability of Eq. (17) can be deduced from the autonomous system (i.e., the system with $r(t) = 0$). Thus, the solution to Eq. (17) is given by:

Thus, upper-bounding Eq. (18) leads to:

for $t \ge t_1$, since the matrix $A_0$ is a stability matrix, the system is linear, there is no finite escape time on the finite interval $[0, t_1]$, and the entries of the matrix $A_1$ are bounded, since it is the realization of a finite transfer function given by Eq. (2). Thus, all the signals in the closed-loop system are bounded. Hence, the state is bounded, which contradicts the initial assumption that the output, and therefore the state, diverges. □
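Since Eqs. (17) and (18) are not reproduced above, the bound invoked in the proof can be sketched in a plausible reconstructed form consistent with the surrounding text; the constants $K$, $\rho$ and the exact shape of the perturbation term are assumptions, not the paper's exact expressions.

```latex
% Autonomous closed loop (r(t) = 0), a plausible form of Eq. (17):
%   \dot{x}(t) = A_0 x(t) + A_1 \left[ x(t-h) - x(t-\hat h(t)) \right]
% Variation-of-constants solution (an Eq. (18) analogue), for t \ge t_1:
x(t) = e^{A_0 (t - t_1)} x(t_1)
     + \int_{t_1}^{t} e^{A_0 (t - s)} A_1
       \bigl[ x(s - h) - x(s - \hat h(s)) \bigr] \, ds .
% With \|e^{A_0 t}\| \le K e^{-\rho t} (A_0 a stability matrix) and
% \|x(s - h) - x(s - \hat h(s))\| \le \delta for s \ge t_1, upper-bounding:
\| x(t) \| \le K e^{-\rho (t - t_1)} \| x(t_1) \|
            + \frac{K \, \| A_1 \| \, \delta}{\rho} .
```

Under this reading, the state is bounded by a decaying term plus a constant proportional to the residual delay mismatch δ, which matches the boundedness conclusion of the proof.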
The proof is straightforward because the scheme has been correctly framed within the PSM, thereby inheriting its convergence properties, and because the MoSP exhibits a minimum robust behaviour. Note that, since the delay is identified, the output tends to the perfectly compensated system output.
5 Simulation examples
In this section, we examine the performance of the proposed scheme in four simulation scenarios. (i) The first scenario shows that complete knowledge of the system is not necessary; we therefore suppose that the unstable system has an uncertainty of 6% in its parameters. (ii) The second scenario shows the effectiveness of the scheme in the delay identification, using a $2\times 2$ MIMO system with unstable poles. (iii) The proposed approach is tested on an irrigation channel model, which is modelled as an integrative MIMO system. (iv) Finally, simulations are presented for a time-varying delay system.
5.1 Second-order delayed unstable processes
The process is given by
We will assume an error of 6% in the parameters of the plant model, as shown below
and $C = \frac{0.0625s + 0.35}{0.25s}$, $K^{1} = 12.468$ and $K^{2} = 1.414$. For this simulation, we used $T_{\mathrm{res}} = 1.2$ seconds, $\gamma_{ij} = 0.1$ and $\Delta_k = [2, 4, 8, 12, 16, 20, 23]$.
A comparison with the MoSP [8] is made. First, note that the nominal delay used in the MoSP, $\hat{h}_0^{\mathrm{nom}} = 9$ seconds, is lower than the initial nominal delay used in our approach, $\hat{h}_0^{\mathrm{nom}} = 23$ seconds; this was necessary because, with a delay of $\hat{h}_0^{\mathrm{nom}} = 23$ seconds, the MoSP becomes unstable.
Figure 4 clearly shows that the scheme is able to tackle the uncertainty in the delay, which leads to remarkable performance. The results also demonstrate that the proposed scheme behaves robustly under uncertainties in the modelling parameters of the plant.

Figure 5 shows the value taken by the nominal delay $\hat{H}_k^{\mathrm{nom}}$ in the control law at each time interval. It is initialized at 23 seconds and reaches $\hat{H}_k^{\mathrm{nom}} = 13.6$ at 16 seconds, a convergence time that is quite small considering the length of the real delay. Thus, good delay identification and good performance are achieved with a step as the reference signal.

Good results are obtained even though the number of models ($N_m = 7$) is small and the value $\gamma_{ij} = 0.1$ is quite large in comparison with the requirements of Theorem 1.
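The residual evaluation underlying the multimodel selection can be sketched as follows; the unit-gain first-order plant, the sampling time and the candidate grid are illustrative assumptions rather than the second-order unstable process above.

```python
import numpy as np

def multimodel_delay_residuals(u, y, candidate_delays, dt=0.1):
    """Sketch of the multiple-model evaluation behind the scheme:
    each candidate delay defines one model, and its residual J is the
    squared output error accumulated over the evaluation window.
    The unit-gain first-order lag used as the delay-free model is an
    illustrative assumption, not the paper's plant."""
    residuals = {}
    for h in candidate_delays:
        shift = int(round(h / dt))
        y_model = np.zeros_like(y)
        for k in range(1, len(y)):
            u_delayed = u[k - shift] if k >= shift else 0.0
            # Forward-Euler step of dy/dt = u(t - h) - y(t)
            y_model[k] = y_model[k - 1] + dt * (u_delayed - y_model[k - 1])
        residuals[h] = float(np.sum((y - y_model) ** 2))
    return residuals

# Synthetic "measurements": step response generated with a delay of 2.0 s
dt, shift_true = 0.1, 20            # 20 samples = 2.0 s at dt = 0.1
u, y = np.ones(200), np.zeros(200)
for k in range(1, 200):
    ud = u[k - shift_true] if k >= shift_true else 0.0
    y[k] = y[k - 1] + dt * (ud - y[k - 1])

J = multimodel_delay_residuals(u, y, candidate_delays=[0.0, 1.0, 2.0, 3.0])
h_best = min(J, key=J.get)          # the candidate with the lowest residual
```

The candidate whose residual vanishes over the window is taken as the nominal delay, mirroring how the scheme selects among the $N_m$ models after each $T_{\mathrm{res}}$ interval.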
5.2 MIMO case
Complete knowledge of the delay-free part of the plant is assumed. The rational component of the considered plant is given by
the matrix associated with the real delay of the plant is
$\gamma_{ij} = 0.01$, $\Delta_k = [2, 4, 8, 12, 16, 20, 23]$, $T_{\mathrm{res}} = 1.2$ seconds, and the controllers are given by

The system's outputs are shown in Figures 6 and 7 for channels 1 and 2, respectively. The outputs are compared with those of a scheme having a model delay error of 10%. Good results were obtained, since a fixed delay error above 15% makes the system unstable (although this is not shown in Figure 6). Notice that finite values of $\gamma_{ij}$ and $\Delta_k$ are again sufficient to perform the delay identification.

The nominal delay matrix at 25 seconds is

which is the same as (23). The evolution of the delay through time is shown in Figure 8.

As can be noticed, the convergence time is small in comparison with the channel delays, which preserves the system's stability.
5.3 Integrative case
In this section, the proposed approach is applied to an integrative MIMO irrigation channel model. The delay-free matrix of the irrigation channel is given by
A modelling error of 3% is taken into account to show the robustness of the proposed approach
and the actual delay is given by
C is given by
and $K^{1} = \operatorname{diag}(8, 8, 8)$ and $K^{2} = \operatorname{diag}(1.1, 1.1, 1.1)$. The way these values of the $K^{1}$ and $K^{2}$ controllers are selected is explained in detail in [8].

The initial parameters used in the algorithm are: $T_{\mathrm{res}} = 2.5$ seconds, $\hat{H}_k^{\mathrm{nom}} = \operatorname{diag}(0, 0, 0)$ seconds, and $\Delta_k = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22]$. Note that the number of models, $N_m = 11$, is low.
Figure 9 shows the simulated water-level errors and flows over the upstream gates for the three pools, in response to a large step change in the flow out of pool 3. Figure 9(a) shows the simulation of a PI controller based on [8] (using the same compensators as the proposed scheme), where the model delay is constant, $\hat{H}_k^{\mathrm{nom}} = \operatorname{diag}(6.27, 7.14, 14.94)$ (corresponding to a model delay error of ±20%). Figure 9(b) shows the output obtained with the proposed scheme, where the delay model is unknown.

As expected, the simulation shows that, in all three pools, better performance is achieved by the proposed scheme when the time delay has a modelling error. Although the two control strategies are not under the same conditions (the proposed scheme adjusts the controller through time, while the other strategy is always fixed), the comparison is useful to show the effectiveness of the proposed scheme.

Figure 10 shows the delay evolution for the three pools; the obtained delay matrix is $\hat{H}_{k\to\infty} = \operatorname{diag}(5.2, 8.6, 12.4)$. The identification is very precise even though the input signal is a step, and the convergence is fast: all the delays are identified within 20.4 minutes.
5.4 Time-varying delay
In this case, the pure delay term is given by
these are time-varying delays, whose variations are shown in Figure 11(d), (e) and (f). The delay varies arbitrarily in order to test the performance of the algorithm. The following scenario was simulated: at time 0 minutes, the water levels were 27.50 m, 24.85 m and 22.15 m for pools 1, 2 and 3, respectively; at time 20 minutes, the setpoint for the water level in pool 3 was reduced from 22.15 m to 22.10 m; and at time 150 minutes, it was increased from 22.10 m to 22.20 m.

Figure 11(a), (b) and (c) show the water levels for pools 1, 2 and 3, respectively. The proposed scheme is compared with the PI controller proposed in [8], where the delays are fixed at $\hat{H} = \operatorname{diag}(11.02, 10.08, 9.1)$. As expected, the simulation shows that, in all three pools, better performance is achieved by the proposed scheme for time-varying delays.

Figure 11(d), (e) and (f) show the delay evolution for pools 1, 2 and 3, respectively. The identification is very precise, and Algorithm 3 is able to follow the time evolution of the delays, identifying both abrupt and continuous changes in the delay.
6 Conclusion
This paper has presented a delay identification strategy that can be applied to delay compensation control schemes for stable/unstable MIMO systems. The main objectives are the identification of the delay and the guarantee of closed-loop stability, which is usually difficult for unstable systems. The approach is formulated as an optimization problem and then framed within the generalized pattern search method, inheriting its convergence properties, which is a novelty both in control theory and in mathematics. The optimization has been implemented online using a multiple-model scheme, which is also a novel implementation of pattern search methods.

Although the convergence results require technical conditions that may seem difficult to meet, the generation of such reference signals can be accomplished easily in practice. Therefore, the simplification of the technical requirements on the input signal remains an open research question.

Finally, it has been shown that the proposed approach is robustly stable even when the rational component of the system has a 6% parameter error, which provides versatility and makes the scheme suitable for implementation in real systems. Moreover, the simulation results showed that the identification is achieved with great precision. Additionally, simulation results were presented for a time-varying delay case, corroborating that good performance in practical situations requires the online readjustment of the model time delays. In the authors' opinion, pattern search methods constitute a powerful optimization technique for control-oriented applications, which could be extended in the future to the case where the delay is time-varying or to combined parametric and delay identification.
References
1. Dong Y, Wei J: Output feedback stabilization of nonlinear discrete-time systems with time-delay. Adv. Differ. Equ. 2012, 2012(73):1-11.
2. Wei F, Cai Y: Existence, uniqueness and stability of the solution to neutral stochastic functional differential equations with infinite delay under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 2013(51):1-12.
3. Alcántara S, Pedret C, Vilanova R, Zhang WD: Simple analytical min-max model matching approach to robust proportional-integrative-derivative tuning with smooth set-point response. Ind. Eng. Chem. Res. 2010, 49(2):690-700.
4. Lee Y, Lee J, Park S: PID controller tuning for integrating and unstable processes with time delay. Chem. Eng. Sci. 2000, 55:3481-3493.
5. Ntogramatzidis L, Ferrante A: Exact tuning of PID controllers in control feedback design. IET Control Theory Appl. 2011, 5(4):565-578.
6. Shi D, Wang J, Ma L: Design of balanced proportional-integral-derivative controllers based on bilevel optimisation. IET Control Theory Appl. 2011, 5(1):84-92.
7. Zhang WD, Sun YX: Modified Smith predictor for controlling integrator/time delay processes. Ind. Eng. Chem. Res. 1996, 35:2769-2772.
8. Majhi S, Atherton DP: Modified Smith predictor and controller for processes with time delay. Control Theory Appl. 1999, 146:359-366.
9. Meng D, Jia Y, Du J, Yu F: Learning control for time-delay systems with iteration-varying uncertainty: a Smith predictor-based approach. IET Control Theory Appl. 2011, 4(12):2707-2718.
10. Normey-Rico JE, Camacho EF: Control of Dead-Time Processes. Springer, Berlin; 2007.
11. De Paor AM: A modified Smith predictor and controller for unstable processes with time delay. Int. J. Control 1985, 41(4):1025-1036.
12. Herrera J, Ibeas A, Alcantara S, de la Sen M: Multimodel-based techniques for the identification and adaptive control of delayed multi-input multi-output systems. IET Control Theory Appl. 2011, 5(1):188-202.
13. Garcia CA, Ibeas A, Herrera J, Vilanova R: Inventory control for the supply chain: an adaptive control approach based on the identification of the lead-time. Omega 2012, 40:314-327.
14. Herrera J, Ibeas A, Alcantara S, Vilanova R: Multimodel-based techniques for the identification of the delay in MIMO systems. Proceedings of the 2010 American Control Conference, Marriott Waterfront, Baltimore, MD, USA, June 30-July 2, 2010.
15. Herrera J, Ibeas A, Alcantara S, de la Sen M: Identification and control of integrative MIMO systems using pattern search algorithms: an application to irrigation channels. Eng. Appl. Artif. Intell. 2013, 26:334-346.
16. Torczon V: On the convergence of pattern search algorithms. SIAM J. Optim. 1997, 7(1):1-25.
17. Pan I, Das S, Gupta A: Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay. ISA Trans. 2011, 50:21-27.
18. Dong Y, Liu J: Exponential stabilization of uncertain nonlinear time-delay systems. Adv. Differ. Equ. 2012, 2012(180):1-15.
19. Gouaisbaut F, Dambrine M, Richard JP: Robust control of delay systems: a sliding mode control design via LMI. Syst. Control Lett. 2002, 46:219-230.
20. Sangapate P: New sufficient conditions for the asymptotic stability of discrete time-delay systems. Adv. Differ. Equ. 2012, 2012(28):1-8.
21. Xiang M, Xiang Z: Reliable control of positive switched systems with time-varying delays. Adv. Differ. Equ. 2013, 2013(25):1-15.
22. Liu T, Gao F: Closed-loop step response identification of integrating and unstable processes. Chem. Eng. Sci. 2010. doi:10.1016/j.ces.2010.01.013
23. Bogani C, Gasparo MG, Papini A: Generalized pattern search methods for a class of nonsmooth optimization problems with structure. J. Comput. Appl. Math. 2009, 229:283-293.
24. Liu L, Zhang X: Generalized pattern search methods for linearly equality constrained optimization problems. Appl. Math. Comput. 2006, 181:527-535.
25. Jelali M: Estimation of valve stiction in control loops using separable least-squares and global search algorithms. J. Process Control 2008, 18:632-642.
26. Negenborn RR, Leirens S, De Schutter B, Hellendoorn J: Supervisory nonlinear MPC for emergency voltage control using pattern search. Control Eng. Pract. 2009, 17:841-848.
27. Ibeas A, de la Sen M: Artificial intelligence and graph theory tools for describing switched linear control systems. Appl. Artif. Intell. 2006, 20(9):703-741.
28. Bernstein DS: Matrix Mathematics. Princeton University Press, Princeton; 2005.
29. Marchetti G, Scali C, Lewin DR: Identification and control of open-loop unstable processes by relay methods. Automatica 2001, 37:2049-2055.
30. Audet C, Dennis JE: Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim. 2006, 17:188-217.
Acknowledgements
This work was partially supported by the Spanish Ministry of Economy and Competitiveness through grant DPI201230651, by the Basque Government (Gobierno Vasco) through grant IE37810 and by the University of the Basque Country through grant UFI 11/07.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Herrera, J., Ibeas, A., de la Sen, M. et al. Identification and control of delayed unstable and integrative LTI MIMO systems using pattern search methods. Adv Differ Equ 2013, 331 (2013). https://doi.org/10.1186/1687-1847-2013-331