
# A time optimal control problem of some linear switching controlled ordinary differential equations

*Advances in Difference Equations*
**volume 2012**, Article number: 52 (2012)

## Abstract

This article studies a time optimal control problem $(\mathcal{P})$ for a class of switching controlled systems. We first prove the existence of time optimal controls for the problem $(\mathcal{P})$. We then derive the bang-bang property of these time optimal controls by utilizing the Pontryagin maximum principle.

**AMS Classification**: 49K20; 35J65.

## 1 Introduction

Let *A* be an *n* × *n* matrix. Let $\vec{b}_1$ and $\vec{b}_2$ be two different vectors in $\mathbb{R}^n$, with *n* ≥ 1, and write *B* for the *n* × 2 matrix $(\vec{b}_1, \vec{b}_2)$. We define

and

Consider the following controlled system:

$$x'(t) = A x(t) + B u(t), \quad t > 0; \qquad x(0) = x_0 \in \mathbb{R}^n, \tag{1.1}$$

where the control function $u(\cdot) = (u_1(\cdot), u_2(\cdot))^T \in \mathcal{U}$. In this system, $Bu(t)$ can be rewritten as $u_1(t)\vec{b}_1 + u_2(t)\vec{b}_2$, where $\vec{b}_1$ and $\vec{b}_2$ are treated as two different controllers, while $u_1(\cdot)$ and $u_2(\cdot)$ are treated as controls. Here, the controls $u_1(\cdot)$ and $u_2(\cdot)$ satisfy the following property:

$$u_1(t)\, u_2(t) = 0 \quad \text{for almost every } t \ge 0, \tag{1.2}$$

and system (1.1) is then called a switching controlled system. Condition (1.2) ensures that, at almost every instant of time, at most one of the controllers $\vec{b}_1$ and $\vec{b}_2$ is active. Such switching controlled systems model a large class of problems in applied science.
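To make the setup concrete, the following is a minimal numerical sketch of system (1.1) under a control obeying the switching constraint (1.2). The matrices `A` and `B`, the particular switching rule, and the forward-Euler scheme are all illustrative assumptions, not data from the article:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # eigenvalues ±i, so Re(λ) = 0
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])           # columns play the roles of b1 and b2

def switching_control(t):
    """A control obeying (1.2): at most one component is active at a time."""
    if int(t) % 2 == 0:
        return np.array([1.0, 0.0])  # controller b1 active
    return np.array([0.0, 1.0])      # controller b2 active

def simulate(x0, T, dt=1e-3):
    """Forward-Euler integration of x'(t) = A x(t) + B u(t)."""
    x, t = np.array(x0, dtype=float), 0.0
    while t < T:
        u = switching_control(t)
        assert u[0] * u[1] == 0.0        # switching constraint (1.2)
        assert np.linalg.norm(u) <= 1.0  # norm constraint on the control
        x = x + dt * (A @ x + B @ u)
        t += dt
    return x

x_final = simulate([1.0, 0.0], T=4.0)
```

The scheme is first-order only; it serves to illustrate the constraint structure rather than to compute optimal controls.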

The purpose of this article is to study a time optimal control problem for the switching controlled system (1.1). We begin by introducing the problem under study. To this end, we define the two sets

$$\tilde{U} \equiv \left\{ v = (v_1, v_2)^T \in \mathbb{R}^2 \;:\; v_1 v_2 = 0 \ \text{and} \ \|v\|_{\mathbb{R}^2} \le 1 \right\} \tag{1.3}$$

and

$$\mathcal{U}_{\mathrm{ad}} \equiv \left\{ u(\cdot) \in \mathcal{U} \;:\; u(t) \in \tilde{U} \ \text{for almost every } t \ge 0 \right\}. \tag{1.4}$$
Here, $\|\cdot\|_{\mathbb{R}^2}$ stands for the Euclidean norm on $\mathbb{R}^2$. (We will utilize the notation $\langle \cdot, \cdot \rangle_{\mathbb{R}^2}$ for the Euclidean inner product on $\mathbb{R}^2$.) The time optimal control problem studied in this article then reads:

$$(\mathcal{P}): \qquad \min \left\{ T > 0 \;:\; x(T; x_0, u) = 0, \ u(\cdot) \in \mathcal{U}_{\mathrm{ad}} \right\}.$$
Throughout this article, we denote by $x(\cdot\,; x_0, u)$, with $u(\cdot) = (u_1(\cdot), u_2(\cdot))^T$, the solution of Equation (1.1) corresponding to the initial datum $x_0$ and the control $u(\cdot)$. Consequently, $x(T; x_0, u)$ stands for the state of the solution $x(\cdot\,; x_0, u)$ at time $T$.
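Since Equation (1.1) is linear, the solution just introduced admits the standard variation-of-constants representation, which shows in particular that $x(T; x_0, u)$ depends affinely on the control; this fact underlies the perturbation arguments of Section 2:

```latex
x(t; x_0, u) \;=\; e^{tA} x_0 \;+\; \int_0^t e^{(t-s)A} B\, u(s)\, ds, \qquad t \ge 0.
```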

In the problem $(\mathcal{P})$, the number

$$T^* \equiv \inf \left\{ T > 0 \;:\; x(T; x_0, u) = 0, \ u(\cdot) \in \mathcal{U}_{\mathrm{ad}} \right\}$$

is called the optimal time, while a control $u^*(\cdot)$ in the set $\mathcal{U}_{\mathrm{ad}}$ with the property that $x(T^*; x_0, u^*) = 0$ is called an optimal control. The problem asks for a control $u^*(\cdot)$ in the constraint control set $\mathcal{U}_{\mathrm{ad}}$ that drives the solution $x(\cdot\,; x_0, u^*)$ from the initial state $x_0$ to the origin of $\mathbb{R}^n$ in the shortest time.

Next, we present the main results obtained in this study.

**Theorem 1.1**. *The problem $(\mathcal{P})$ has at least one optimal control, provided that the Kalman rank condition holds for A and B and that* Re *λ* ≤ 0 *for each eigenvalue λ of A*.

**Theorem 1.2**. *When the Kalman rank condition holds for A and B, any optimal control $u^*(\cdot)$ to $(\mathcal{P})$ has the bang-bang property:*

$$\|u^*(t)\|_{\mathbb{R}^2} = 1 \quad \text{for almost every } t \in [0, T^*]. \tag{1.5}$$
**Remark 1.3**. *(i) The Kalman rank condition holds for A and B if and only if*

$$\operatorname{rank}\left( B, AB, \ldots, A^{n-1}B \right) = n.$$
*(ii) Since any optimal control $u^*(\cdot)$ to the problem $(\mathcal{P})$ is a switching control, the statement that the bang-bang property (1.5) holds for $u^*(\cdot)$ is equivalent to the statement that, for almost every t* ∈ [0, *T**], $u^*(t)$ *is one of the four vertices of the domain* $\{(v_1, v_2)^T \in \mathbb{R}^2 : |v_1 + v_2| \le 1 \ \text{and} \ |v_1 - v_2| \le 1\}$.

*(iii) The condition that* Re *λ* ≤ 0 *for each eigenvalue λ of A guarantees the existence of a time optimal control under the control constraint* $\|u(t)\|_{\mathbb{R}^2} \le 1$ *for almost every t, even in the case where the switching constraint disappears* (*see* [1, 2]).
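The rank condition in Remark 1.3(i) is straightforward to check numerically. A minimal sketch, with illustrative matrices `A` and `B` that are assumptions of this example rather than data from the article:

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the Kalman controllability matrix (B, AB, ..., A^(n-1) B)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])

controllable = kalman_rank(A, B) == A.shape[0]   # True: rank condition holds
```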

In the classical time optimal control problem, the control constraint set is convex, and the existence of optimal controls can be obtained by weak convergence methods. In our article, the control set of the problem $(\mathcal{P})$ is not convex, so the problem cannot be handled by the methods used in most past studies (see [3–5]). Here, we utilize an idea from relaxed control theory to prove the existence theorem. Finally, we make use of the Pontryagin maximum principle and a unique continuation property to obtain the bang-bang property of time optimal controls for this problem.

With regard to time optimal control problems governed by ordinary or partial differential equations without a switching constraint on the controls, there is a large literature; we refer the reader to the related articles [2, 6–9].

The rest of the article is structured as follows: Section 2 presents the proof of Theorem 1.1; Section 3 provides the proof of Theorem 1.2.

## 2 The existence of time optimal controls

We prove the existence result, namely, Theorem 1.1, as follows.

*Proof*. Write $\operatorname{co}\tilde{U}$ for the convex hull of the set $\tilde{U}$. Then, it is clear that

$$\operatorname{co}\tilde{U} = \left\{ (v_1, v_2)^T \in \mathbb{R}^2 \;:\; |v_1 + v_2| \le 1 \ \text{and} \ |v_1 - v_2| \le 1 \right\}.$$

Now, we define another constraint control set:

$$\tilde{\mathcal{U}}_{\mathrm{ad}} \equiv \left\{ u(\cdot) \in \mathcal{U} \;:\; u(t) \in \operatorname{co}\tilde{U} \ \text{for almost every } t \ge 0 \right\}.$$
Since the Kalman rank condition holds for *A* and *B*, and Re *λ* ≤ 0 for each eigenvalue *λ* of *A*, we can utilize the same argument as in [[1], Theorem 2.6] to conclude that system (1.1) is exactly null controllable with the control constraint set $\tilde{\mathcal{U}}_{\mathrm{ad}}$.
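The convex hull $\operatorname{co}\tilde{U}$ is the diamond described in Remark 1.3(ii). As a quick numerical sanity check (illustrative only, not part of the proof), one can verify that the two inequalities $|v_1 + v_2| \le 1$ and $|v_1 - v_2| \le 1$ cut out exactly the set $|v_1| + |v_2| \le 1$, whose extreme points are the four vertices $(\pm 1, 0)$ and $(0, \pm 1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1.5, 1.5, size=(10_000, 2))   # random test points

in_diamond = np.abs(pts[:, 0]) + np.abs(pts[:, 1]) <= 1.0
in_square = (np.abs(pts[:, 0] + pts[:, 1]) <= 1.0) & \
            (np.abs(pts[:, 0] - pts[:, 1]) <= 1.0)

# The two descriptions agree, since max(|a+b|, |a-b|) = |a| + |b|.
agree = np.array_equal(in_diamond, in_square)    # True
```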

Next, we consider a new time optimal control problem:

$$(\tilde{\mathcal{P}}): \qquad \tilde{T}^* \equiv \inf \left\{ T > 0 \;:\; x(T; x_0, u) = 0, \ u(\cdot) \in \tilde{\mathcal{U}}_{\mathrm{ad}} \right\},$$

where $\tilde{T}^*$ denotes the optimal time for this problem.

Because the control set $\operatorname{co}\tilde{U}$ is convex, we can utilize the classical weak convergence method to prove that the problem $(\tilde{\mathcal{P}})$ has at least one solution (see, for instance, [[1], Theorem 3.1]). Namely, there exists at least one control $\tilde{u}(\cdot) \in \tilde{\mathcal{U}}_{\mathrm{ad}}$ such that the corresponding solution $x(\cdot\,; x_0, \tilde{u})$ of Equation (1.1) satisfies $x(\tilde{T}^*; x_0, \tilde{u}) = 0$.

Let

$$\mathcal{A} \equiv \left\{ u(\cdot) \in \tilde{\mathcal{U}}_{\mathrm{ad}} \;:\; x(\tilde{T}^*; x_0, u) = 0 \right\}$$

denote the set of optimal controls to the problem $(\tilde{\mathcal{P}})$. One can easily check that $\mathcal{A}$ is a convex, nonempty subset of $L^\infty(0, +\infty; \mathbb{R}^2)$; moreover, it is compact in the weak\* topology. Therefore, we can apply the Krein-Milman theorem to obtain an extreme point $\tilde{u}^*(\cdot)$ of the set $\mathcal{A}$.

Now, we claim that *for almost every* $t \in [0, \tilde{T}^*]$, $\tilde{u}^*(t) \equiv (\tilde{u}_1^*(t), \tilde{u}_2^*(t))^T$ *belongs to* $\tilde{U}$. Here is the argument: in order to prove that $\tilde{u}^*(t) \in \tilde{U}$, it suffices to show that, for almost every $t \in [0, \tilde{T}^*]$, the following two equalities hold:

$$\left| \tilde{u}_1^*(t) + \tilde{u}_2^*(t) \right| = 1$$

and

$$\left| \tilde{u}_1^*(t) - \tilde{u}_2^*(t) \right| = 1.$$

(Indeed, these two equalities force $\tilde{u}_1^*(t)\,\tilde{u}_2^*(t) = 0$ and $\|\tilde{u}^*(t)\|_{\mathbb{R}^2} = 1$.)
Seeking a contradiction, we suppose that the above statement is not true. Then there would exist a number *ε* with 0 < *ε* < 1, and a measurable subset $F \subset [0, \tilde{T}^*]$ of positive measure, such that one of the following two statements holds:

$$\left| \tilde{u}_1^*(t) + \tilde{u}_2^*(t) \right| \le 1 - \varepsilon \quad \text{for almost every } t \in F, \tag{2.1}$$

or

$$\left| \tilde{u}_1^*(t) - \tilde{u}_2^*(t) \right| \le 1 - \varepsilon \quad \text{for almost every } t \in F. \tag{2.2}$$
In the case where (2.1) holds, we define a functional $I_F : L^\infty(F) \to \mathbb{R}^n$ by setting

$$I_F(\alpha) \equiv \int_F e^{(\tilde{T}^* - s)A} B\, \vec{\alpha}(s)\, ds,$$

where $\vec{\alpha}(\cdot)$ is the vector-valued function over *F* defined by $\vec{\alpha}(s) = (\alpha(s), \alpha(s))^T$ for almost every *s* ∈ *F*. It is clear that $I_F$ is a bounded linear operator from $L^\infty(F)$ to $\mathbb{R}^n$. Since $L^\infty(F)$ is an infinite-dimensional space while $\mathbb{R}^n$ is finite-dimensional, the kernel of $I_F$ is not trivial. Namely, there exists a function $\beta(\cdot)$ with the following properties: it belongs to $L^\infty(F)$; it is non-trivial; it satisfies $\|\beta(\cdot)\|_{L^\infty(F)} \le 1$; and $I_F(\beta) = 0$. Let $\vec{\beta}(s) = (\beta(s), \beta(s))^T$ over *F*. We extend this function $\vec{\beta}(\cdot)$ to [0, +∞) by setting it to take the value $(0, 0)^T$ over [0, +∞) \ *F*; we still denote the extension by $\vec{\beta}(\cdot)$. Then, we construct two control functions as follows:

$$v(t) \equiv \tilde{u}^*(t) + \frac{\varepsilon}{2}\,\vec{\beta}(t) \quad \text{and} \quad w(t) \equiv \tilde{u}^*(t) - \frac{\varepsilon}{2}\,\vec{\beta}(t), \qquad t \in [0, +\infty).$$
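The finite-dimensionality argument just used (a bounded linear map from the infinite-dimensional space $L^\infty(F)$ into $\mathbb{R}^n$ must have a nontrivial kernel) can be illustrated numerically: any discretization of such a map with more sample points than $n$ has a nontrivial null space. Everything below (the matrix $A$, the set $F$, the horizon) is an illustrative assumption for this sketch:

```python
import numpy as np

# For A = [[0, 1], [-1, 0]] the matrix exponential e^{tA} is a rotation.
def expA(t):
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

b_sum = np.array([1.0, 1.0])   # stands in for b1 + b2: B (a, a)^T = a (b1 + b2)
T, m = 1.0, 20                 # horizon and number of sample points in F
s = np.linspace(0.2, 0.8, m)   # F = [0.2, 0.8], discretized
ds = (s[-1] - s[0]) / m

# Columns discretize alpha -> integral over F of e^{(T-s)A} B (alpha, alpha)^T ds,
# a linear map from R^20 into R^2, so its null space is nontrivial.
M = np.column_stack([ds * expA(T - sj) @ b_sum for sj in s])

# A nontrivial beta with M @ beta ~ 0, taken from the SVD null space of M.
beta = np.linalg.svd(M)[2][-1]
beta = beta / np.max(np.abs(beta))   # normalize so that ||beta||_inf = 1
residual = np.linalg.norm(M @ beta)  # vanishes up to rounding
```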
We will prove that both $v(\cdot)$ and $w(\cdot)$ belong to $\mathcal{A}$. Since

$$x(\tilde{T}^*; x_0, v) = x(\tilde{T}^*; x_0, \tilde{u}^*) + \frac{\varepsilon}{2} I_F(\beta) \quad \text{and} \quad x(\tilde{T}^*; x_0, w) = x(\tilde{T}^*; x_0, \tilde{u}^*) - \frac{\varepsilon}{2} I_F(\beta),$$

while $I_F(\beta) = 0$ and $x(\tilde{T}^*; x_0, \tilde{u}^*) = 0$, it follows at once that

$$x(\tilde{T}^*; x_0, v) = x(\tilde{T}^*; x_0, w) = 0. \tag{2.3}$$
Thus, it remains to show that $v(\cdot)$ and $w(\cdot)$ belong to $\tilde{\mathcal{U}}_{\mathrm{ad}}$, namely, that for almost every *t* ∈ [0, +∞), $v(t)$ and $w(t)$ are in the set $\operatorname{co}\tilde{U}$. For *t* ∈ [0, +∞), there are only two possibilities: it belongs either to [0, +∞) \ *F* or to *F*.

When *t* ∈ [0, +∞) \ *F*, we have that $\vec{\beta}(t) = 0$. Consequently, it holds that $v(t) = w(t) = \tilde{u}^*(t)$. Along with the fact that $\tilde{u}^*(\cdot)$ belongs to $\tilde{\mathcal{U}}_{\mathrm{ad}}$, this indicates that $v(t) = w(t) \in \operatorname{co}\tilde{U}$ for almost all *t* ∈ [0, +∞) \ *F*.

When *t* ∈ *F*, we observe that

$$v_1(t) + v_2(t) = \tilde{u}_1^*(t) + \tilde{u}_2^*(t) + \varepsilon\,\beta(t)$$

and

$$v_1(t) - v_2(t) = \tilde{u}_1^*(t) - \tilde{u}_2^*(t). \tag{2.4}$$

On the other hand, by (2.1) and the facts that $\|\beta(\cdot)\|_{L^\infty(F)} \le 1$ and $\tilde{u}^*(t) \in \operatorname{co}\tilde{U}$, one can easily check that

$$\left| \tilde{u}_1^*(t) + \tilde{u}_2^*(t) + \varepsilon\,\beta(t) \right| \le (1 - \varepsilon) + \varepsilon = 1$$

and

$$\left| \tilde{u}_1^*(t) - \tilde{u}_2^*(t) \right| \le 1.$$

These, together with (2.4), yield that $v(t) \in \operatorname{co}\tilde{U}$ for almost every *t* ∈ *F*. Similarly, we can derive that $w(t) \in \operatorname{co}\tilde{U}$ for almost every *t* ∈ *F*.

Therefore, we have proved that, for almost every *t* ∈ [0, +∞), both $v(t)$ and $w(t)$ belong to the set $\operatorname{co}\tilde{U}$. Combined with (2.3), this shows that

$$v(\cdot) \in \mathcal{A} \quad \text{and} \quad w(\cdot) \in \mathcal{A}. \tag{2.5}$$

However, it is obvious that $\tilde{u}^*(t) = \frac{1}{2}v(t) + \frac{1}{2}w(t)$, while $v(\cdot) \ne w(\cdot)$ because $\beta(\cdot)$ is non-trivial. Along with (2.5), this contradicts the fact that $\tilde{u}^*(\cdot)$ is an extreme point of $\mathcal{A}$.

In the case where (2.2) holds, we can utilize the same argument as above (with $\vec{\beta}(s) = (\beta(s), -\beta(s))^T$ in place of $(\beta(s), \beta(s))^T$) to reach a contradiction with the fact that $\tilde{u}^*(\cdot)$ is an extreme point of $\mathcal{A}$.

Thus, we have proved that $\tilde{u}^*(t) \in \tilde{U}$ for almost every $t \in [0, \tilde{T}^*]$. In summary, we conclude that the above claim stands.

Next, we define another control function $\bar{u}(\cdot)$ by setting

$$\bar{u}(t) \equiv \begin{cases} \tilde{u}^*(t), & t \in [0, \tilde{T}^*], \\ (0, 0)^T, & t \in (\tilde{T}^*, +\infty). \end{cases}$$
By the above claim, we can easily find that $\bar{u}(\cdot)$ belongs to $\mathcal{U}_{\mathrm{ad}}$ and is an optimal control for the problem $(\tilde{\mathcal{P}})$. Since $T^*$ is the optimal time for the problem $(\mathcal{P})$, we deduce from the facts that $x(\tilde{T}^*; x_0, \bar{u}) = 0$ and $\bar{u}(\cdot) \in \mathcal{U}_{\mathrm{ad}}$ that

$$T^* \le \tilde{T}^*.$$
However, it is clear that $\mathcal{U}_{\mathrm{ad}} \subset \tilde{\mathcal{U}}_{\mathrm{ad}}$. Thus, we necessarily have

$$\tilde{T}^* \le T^*.$$
Therefore, it holds that

$$T^* = \tilde{T}^* \quad \text{and} \quad x(T^*; x_0, \bar{u}) = 0.$$
This indicates that $\bar{u}(\cdot)$ is a time optimal control to the problem $(\mathcal{P})$. Hence, we have completed the proof of Theorem 1.1.

## 3 The bang-bang property

This section is devoted to proving Theorem 1.2.

*Proof.* Let $u^*(\cdot) = (u_1^*(\cdot), u_2^*(\cdot))^T \in \mathcal{U}_{\mathrm{ad}}$ be a time optimal control for the problem $(\mathcal{P})$. We aim to show that $u^*(\cdot)$ has the bang-bang property (1.5). By classical arguments, one obtains Pontryagin's maximum principle for the problem $(\mathcal{P})$ (see [10, 11]). Namely, there exists a multiplier $\xi_0 \in \mathbb{R}^n$, with $\|\xi_0\|_{\mathbb{R}^n} = 1$, such that the following maximum principle holds:

$$\left\langle B^T \psi(t),\, u^*(t) \right\rangle_{\mathbb{R}^2} = \max_{v \in \tilde{U}} \left\langle B^T \psi(t),\, v \right\rangle_{\mathbb{R}^2} \quad \text{for almost every } t \in [0, T^*], \tag{3.1}$$

where $\psi(t)$ is the solution of the following adjoint equation:

$$\psi'(t) = -A^T \psi(t), \quad t \in [0, T^*]; \qquad \psi(T^*) = \xi_0. \tag{3.2}$$
Then, by the Kalman rank condition and the unique continuation property of solutions to (3.2), we obtain that

$$B^T \psi(t) \ne 0 \quad \text{for almost every } t \in [0, T^*]. \tag{3.3}$$

Besides, it follows from (1.3), namely, the definition of $\tilde{U}$, that $v \in \tilde{U}$ if and only if $-v \in \tilde{U}$. This, together with (3.3) and (3.1), immediately gives the inequality:

$$\left\langle B^T \psi(t),\, u^*(t) \right\rangle_{\mathbb{R}^2} > 0 \quad \text{for almost every } t \in [0, T^*]. \tag{3.4}$$
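The maximum condition (3.1), combined with the symmetry of $\tilde{U}$, selects the control pointwise: for a nonzero $q = B^T\psi(t)$, the maximum of $\langle q, v\rangle_{\mathbb{R}^2}$ over $\tilde{U}$ equals $\max(|q_1|, |q_2|)$ and is attained at one of the four vertices $(\pm 1, 0)$, $(0, \pm 1)$, which is precisely the bang-bang selection. A small sketch (the value of `q` is a hypothetical placeholder):

```python
import numpy as np

VERTICES = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
            np.array([0.0, 1.0]), np.array([0.0, -1.0])]

def maximizer_over_U(q):
    """Maximize <q, v> over U = { v in R^2 : v1 v2 = 0, ||v|| <= 1 }.

    On each segment of the 'cross' U, the linear functional is maximized
    at an endpoint, so it suffices to compare the four vertices.
    """
    return max(VERTICES, key=lambda v: float(q @ v))

q = np.array([0.3, -0.7])     # hypothetical value of B^T psi(t)
v_star = maximizer_over_U(q)  # (0, -1): the active controller is b2
value = float(q @ v_star)     # 0.7 = max(|q1|, |q2|)
```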
Next, we define subsets $E_k$, *k* = 1, 2, ..., by setting

$$E_k \equiv \left\{ t \in [0, T^*] \;:\; \|u^*(t)\|_{\mathbb{R}^2} \le 1 - \frac{1}{k} \right\}.$$
By contradiction, we suppose that $u^*(\cdot)$ did not have the bang-bang property, namely, that (1.5) did not hold for $u^*(\cdot)$. Then, there would exist a natural number *k* such that $m(E_k) > 0$, where *m* denotes the Lebesgue measure. Therefore, we could find a number *C* > 1 such that

$$C\, \|u^*(t)\|_{\mathbb{R}^2} \le 1 \quad \text{for almost every } t \in E_k.$$
Now, we construct another control $\bar{u}(\cdot)$ in the following manner:

$$\bar{u}(t) \equiv \begin{cases} C\, u^*(t), & t \in E_k, \\ u^*(t), & t \in [0, T^*] \setminus E_k. \end{cases}$$
It is obvious that $\bar{u}(\cdot) \in \mathcal{U}_{\mathrm{ad}}$, since multiplying by *C* preserves the switching constraint and $C\,\|u^*(t)\|_{\mathbb{R}^2} \le 1$ on $E_k$. However, by the construction of $\bar{u}(\cdot)$ and by (3.4), we can easily obtain the inequality:

$$\left\langle B^T \psi(t),\, \bar{u}(t) \right\rangle_{\mathbb{R}^2} = C \left\langle B^T \psi(t),\, u^*(t) \right\rangle_{\mathbb{R}^2} > \left\langle B^T \psi(t),\, u^*(t) \right\rangle_{\mathbb{R}^2} \quad \text{for almost every } t \in E_k,$$

which leads to a contradiction with the maximum condition (3.1).

In summary, we conclude that the proof of Theorem 1.2 has been completed.

## References

1. Evans LC: *An Introduction to Mathematical Optimal Control Theory*. [http://math.berkeley.edu/evans/]
2. Phung KD, Wang G, Zhang X: On the existence of time optimal controls for linear evolution equations. *Discret Contin Dyn Syst Ser B* 2007, **8**: 925–941.
3. Barbu V: Volume 190. Academic Press, Inc; 1993.
4. Barbu V, Precupanu T: *Convexity and Optimization in Banach Spaces*. D Reidel, Dordrecht; 1986.
5. Pontryagin LS, Boltyanskii VG, Gamkrelidze RV: *The Mathematical Theory of Optimal Processes*. Wiley, New York; 1962.
6. Fattorini HO: Time optimal control of solutions of operational differential equations. *J SIAM Control* 1964, **2**: 54–59.
7. LaSalle JP: *The Time Optimal Control Problem*. Volume 5. Contributions to the Theory of Nonlinear Oscillations, Princeton University Press, Princeton; 1960: 1–24.
8. Wang G, Wang L: The bang-bang principle of time optimal controls for the heat equation with internal controls. *Syst Control Lett* 2007, **56**: 709–713. doi:10.1016/j.sysconle.2007.06.001
9. Wang G: *L*^∞-null controllability for the heat equation and its consequences for the time optimal control problem. *SIAM J Control Optim* 2008, **47**: 1701–1720. doi:10.1137/060678191
10. Barbu V: *Mathematical Methods in Optimization of Differential Systems*. Kluwer Academic Publishers, Dordrecht; 1994.
11. Li X, Yong J: *Optimal Control Theory for Infinite Dimensional Systems*. Birkhäuser, Boston; 1995.

## Acknowledgements

The authors would like to thank professor Gengsheng Wang for his valuable suggestions on this article. This work was partially supported by the National Natural Science Foundation of China under Grant No. 10971158 and the Natural Science Foundation of Ningbo under Grant No. 2010A610096.

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

GZ provided the questions and solved the existence Theorem for the optimal control. BM gave the proof for the bang-bang principle of the optimal control. All authors read and approved the final manuscript.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Zheng, G., Ma, B. A time optimal control problem of some linear switching controlled ordinary differential equations.
*Adv Differ Equ* **2012**, 52 (2012). https://doi.org/10.1186/1687-1847-2012-52
