A time optimal control problem of some linear switching controlled ordinary differential equations
Advances in Difference Equations volume 2012, Article number: 52 (2012)
Abstract
This article studies a time optimal control problem for a class of switching controlled systems. We first prove the existence of time optimal controls for this problem. Then, we derive the bang-bang property of time optimal controls by utilizing the Pontryagin maximum principle.
AMS Classification: 49K20; 35J65.
1 Introduction
Let A be an n × n matrix, let b1 and b2 be two different vectors in ℝⁿ with n ≥ 1, and write B for the n × 2 matrix (b1, b2). Consider the following controlled system:

$$x'(t) = Ax(t) + Bu(t), \quad t > 0; \qquad x(0) = x_0 \in \mathbb{R}^n, \tag{1.1}$$

where the control function is u(·) = (u1(·), u2(·))^T. In this system, we can rewrite Bu(t) as b1·u1(t) + b2·u2(t); thus b1 and b2 are treated as two different controllers, while u1(·) and u2(·) are treated as the corresponding controls. The controls u1(·) and u2(·) are required to satisfy

$$u_1(t)\cdot u_2(t) = 0 \quad \text{for almost every } t \in [0, +\infty), \tag{1.2}$$

and system (1.1) is then called a switching controlled system. Condition (1.2) ensures that, at almost every instant of time, at most one of the controllers b1 and b2 is active. Switching controlled systems of this kind model a large class of problems in applied science.
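To fix ideas, the following Python sketch simulates one trajectory of a system of the form (1.1) under a control obeying the switching condition (1.2). The matrix A, the vectors b1 and b2, the initial state x0, and the particular switching control are illustrative assumptions only, not data taken from this article.

```python
# A minimal simulation sketch of a switching controlled system of the form (1.1)-(1.2).
# All concrete data below (A, b1, b2, x0, and the control) are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # an example 2 x 2 matrix
b1 = np.array([0.0, 1.0])             # first controller
b2 = np.array([1.0, 0.0])             # second controller
B = np.column_stack([b1, b2])         # the n x 2 matrix B = (b1, b2)

def u(t):
    """A switching control: at every time at most one component is nonzero
    (so u1(t) * u2(t) = 0), and |u(t)| <= 1."""
    return np.array([1.0, 0.0]) if t < 2.0 else np.array([0.0, -0.5])

def rhs(t, x):
    return A @ x + B @ u(t)           # x'(t) = A x(t) + B u(t)

x0 = np.array([1.0, -0.5])
sol = solve_ivp(rhs, (0.0, 4.0), x0, max_step=1e-2)
print("state at t = 4:", sol.y[:, -1])
```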
The purpose of this article is to study a time optimal control problem for the switching controlled system (1.1). We begin by introducing the problem under study. To this end, we define the two sets

$$\mathcal{U} \triangleq \big\{v = (v_1, v_2)^T \in \mathbb{R}^2;\ |v| \le 1 \text{ and } v_1\cdot v_2 = 0\big\} \tag{1.3}$$

and

$$\mathcal{U}_{ad} \triangleq \big\{u(\cdot) = (u_1(\cdot), u_2(\cdot))^T : [0, +\infty) \to \mathbb{R}^2 \text{ measurable};\ u(t) \in \mathcal{U} \text{ for almost every } t \in [0, +\infty)\big\}.$$

Here, | · | stands for the Euclidean norm in ℝ². (We will utilize the notation ⟨·, ·⟩ to represent the Euclidean inner product in ℝ².) Then, the time optimal control problem studied in this article, denoted by (TP), reads:

$$(TP)\qquad \text{minimize } T \text{ over all } T > 0 \text{ and } u \in \mathcal{U}_{ad} \text{ such that } x(T; x_0, u) = 0.$$
Throughout this article, x(·; x0, u), with u(·) = (u1(·), u2(·))^T, denotes the solution of Equation (1.1) corresponding to the initial datum x0 and the control u(·). Consequently, x(T; x0, u) stands for the state of the solution x(·; x0, u) at time T.
In problem (TP), the number

$$T^* \triangleq \inf\big\{T > 0;\ x(T; x_0, u) = 0 \text{ for some } u \in \mathcal{U}_{ad}\big\} \tag{1.4}$$

is called the optimal time, while a control u*(·) in the set $\mathcal{U}_{ad}$ with the property that x(T*; x0, u*) = 0 is called an optimal control. Problem (TP) thus asks for a control u*(·) in the constraint control set $\mathcal{U}_{ad}$ that steers the solution x(·; x0, u*) from the initial state x0 to the origin of ℝⁿ in the shortest time.
Next, we present the main results obtained in this study.
Theorem 1.1. Problem (TP) has at least one optimal control provided that the Kalman rank condition holds for A and B, and Re λ ≤ 0 for each eigenvalue λ of A.
Theorem 1.2. When the Kalman rank condition holds for A and B, any optimal control u* to problem (TP) has the bang-bang property:

$$|u^*(t)| = 1 \quad \text{for almost every } t \in [0, T^*]. \tag{1.5}$$
Remark 1.3. (i) The Kalman rank condition holds for A and B if and only if

$$\operatorname{rank}\,\big(B, AB, \ldots, A^{n-1}B\big) = n$$

(a numerical check of this condition is sketched right after this remark).
(ii) Since any optimal control u*(·) to problem (TP) is a switching control, the statement that the bang-bang property (1.5) holds for the optimal control u*(·) is equivalent to the statement that, for almost every t ∈ [0, T*], u*(t) is one of the four vertices of the domain {(v1, v2)^T ∈ ℝ²; |v1 + v2| ≤ 1 and |v1 − v2| ≤ 1}.
(iii) The condition that Re λ ≤ 0 for each eigenvalue λ of A guarantees the existence of time optimal controls under the control constraint |u(t)| ≤ 1 for almost every t, even in the case where the switching constraint disappears (see [1, 2]).
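As a concrete illustration of item (i), the following Python sketch checks the Kalman rank condition for a pair (A, B); the particular matrices are illustrative assumptions, not taken from this article.

```python
# A numerical check of the Kalman rank condition of Remark 1.3(i):
# the condition holds if and only if rank [B, AB, ..., A^(n-1)B] = n.
import numpy as np

def kalman_rank_condition(A, B):
    """Return True if rank [B, AB, ..., A^(n-1)B] equals n."""
    n = A.shape[0]
    K = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(K) == n

# Illustrative example: A is nilpotent (so Re(lambda) = 0 for every eigenvalue),
# and the rank condition holds, matching the hypotheses of Theorem 1.1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.column_stack([[0.0, 1.0], [1.0, 0.0]])   # B = (b1, b2)
print(kalman_rank_condition(A, B))              # prints: True
```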
In the classical time optimal control problem, the control constraint set is convex, and the existence of optimal controls can be obtained by weak convergence methods. In our article, the control set $\mathcal{U}$ in problem (TP) is not convex, so the problem cannot be treated by the methods used in most earlier studies (see [3–5]). Here, we utilize an idea from relaxed control theory to prove the existence theorem. Finally, we make use of the Pontryagin maximum principle and a unique continuation property to obtain the bang-bang property of time optimal controls for this problem.
With regard to time optimal control problems governed by ordinary or partial differential equations without the switching constraint on controls, there is already an extensive literature. We would like to quote the related articles [2, 6–9].
The rest of the article is structured as follows: Section 2 presents the proof of Theorem 1.1; Section 3 provides the proof of Theorem 1.2.
2 The existence of time optimal controls
We prove the existence result, namely, Theorem 1.1, as follows.
Proof. Write $\mathrm{co}\,\mathcal{U}$ for the convex hull of the set $\mathcal{U}$. Then, it is clear that

$$\mathrm{co}\,\mathcal{U} = \big\{(v_1, v_2)^T \in \mathbb{R}^2;\ |v_1 + v_2| \le 1 \text{ and } |v_1 - v_2| \le 1\big\}.$$
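The identity above can be sanity-checked numerically: convex combinations of points of $\mathcal{U}$ stay inside the square {|v1 + v2| ≤ 1, |v1 − v2| ≤ 1}, and the square's four vertices lie in $\mathcal{U}$. The sketch below is a random sampling check of these two inclusions, not a proof.

```python
# Random sampling check (not a proof) of co U = {|v1+v2| <= 1, |v1-v2| <= 1},
# where U = {v in R^2 : |v| <= 1 and v1*v2 = 0} as in (1.3).
import numpy as np

rng = np.random.default_rng(0)

def sample_U(m):
    """Sample m points of U: each point lies on one of the two coordinate segments."""
    pts = np.zeros((m, 2))
    pts[np.arange(m), rng.integers(0, 2, size=m)] = rng.uniform(-1.0, 1.0, size=m)
    return pts

# convex combinations of pairs of points of U stay in the square
lam = rng.uniform(size=1000)[:, None]
V = lam * sample_U(1000) + (1.0 - lam) * sample_U(1000)
assert np.all(np.abs(V[:, 0] + V[:, 1]) <= 1.0 + 1e-12)
assert np.all(np.abs(V[:, 0] - V[:, 1]) <= 1.0 + 1e-12)

# the four vertices of the square belong to U, hence the square is contained in co U
vertices = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
assert np.all(np.linalg.norm(vertices, axis=1) <= 1.0)
assert np.all(vertices[:, 0] * vertices[:, 1] == 0.0)
print("sampling checks passed")
```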
Now, we define another constraint control set:

$$\widetilde{\mathcal{U}}_{ad} \triangleq \big\{u(\cdot) : [0, +\infty) \to \mathbb{R}^2 \text{ measurable};\ u(t) \in \mathrm{co}\,\mathcal{U} \text{ for almost every } t \in [0, +\infty)\big\}.$$
Since the Kalman rank condition holds and Re λ ≤ 0 for each eigenvalue λ of A, we can utilize the same argument as in [1, Theorem 2.6] to get that system (1.1) is exactly null controllable with controls constrained to the set $\mathrm{co}\,\mathcal{U}$.
Next, we consider a new time optimal control problem:

$$(\widetilde{TP})\qquad \text{minimize } T \text{ over all } T > 0 \text{ and } u \in \widetilde{\mathcal{U}}_{ad} \text{ such that } x(T; x_0, u) = 0.$$

We denote by $\widetilde{T}$ the optimal time for this problem.
Because the control set $\mathrm{co}\,\mathcal{U}$ is convex, we can utilize the classical weak convergence method to prove that problem $(\widetilde{TP})$ has at least one solution (see, for instance, [1, Theorem 3.1]). Namely, there exists at least one control $u \in \widetilde{\mathcal{U}}_{ad}$ such that the corresponding solution of Equation (1.1) holds the property $x(\widetilde{T}; x_0, u) = 0$.
Let

$$\mathcal{A} \triangleq \big\{u \in \widetilde{\mathcal{U}}_{ad};\ x(\widetilde{T}; x_0, u) = 0\big\}.$$
Then, one can easily check that $\mathcal{A}$ is a convex, nonempty subset of L∞(0, +∞; ℝ²); moreover, it is compact in the weak* topology. Therefore, we can apply the Krein-Milman theorem to get an extreme point $\hat{u}(\cdot) = (\hat{u}_1(\cdot), \hat{u}_2(\cdot))^T$ of the set $\mathcal{A}$.
Now, we claim that $\hat{u}(t)$ belongs to $\mathcal{U}$ for almost every $t \in [0, \widetilde{T}]$. Here is the argument: in order to prove this statement, it suffices to show that, for almost every time $t \in [0, \widetilde{T}]$, the following two equalities stand:

$$|\hat{u}_1(t) + \hat{u}_2(t)| = 1$$

and

$$|\hat{u}_1(t) - \hat{u}_2(t)| = 1.$$

(Indeed, these two equalities force $\hat{u}(t)$ to be one of the four vertices of $\mathrm{co}\,\mathcal{U}$, each of which belongs to $\mathcal{U}$.)
Seeking a contradiction, we suppose that the above-mentioned statement does not hold. Then there would exist a number ε with 0 < ε < 1 and a measurable subset $F \subset [0, \widetilde{T}]$ with positive measure such that one of the following two statements stands:

$$|\hat{u}_1(t) + \hat{u}_2(t)| \le 1 - \varepsilon \quad \text{for almost every } t \in F \tag{2.1}$$

and

$$|\hat{u}_1(t) - \hat{u}_2(t)| \le 1 - \varepsilon \quad \text{for almost every } t \in F. \tag{2.2}$$
In the case where (2.1) holds, we define a functional $I_F : L^\infty(F) \to \mathbb{R}^n$ by setting

$$I_F(\beta) \triangleq \int_F e^{A(\widetilde{T}-s)} B\, \widetilde{\beta}(s)\, ds,$$

where $\widetilde{\beta}$ is the vector-valued function over F defined by $\widetilde{\beta}(s) \triangleq (\beta(s), \beta(s))^T$ for almost every s ∈ F. It is clear that $I_F$ is a bounded linear operator from $L^\infty(F)$ to ℝⁿ. Since $L^\infty(F)$ is an infinite dimensional space and ℝⁿ is a finite dimensional space, the kernel of $I_F$ is not trivial. Namely, there exists a function β(·) with the following properties: it belongs to $L^\infty(F)$; it is non-trivial; it satisfies $|\beta(t)| \le \varepsilon/2$ for almost every t ∈ F; and it is such that $I_F(\beta) = 0$. Let $\widetilde{\beta}(t) \triangleq (\beta(t), \beta(t))^T$ over F. We extend this function over [0, +∞) by setting it to take the value (0, 0)^T over [0, +∞) \ F, and we still denote this extension by $\widetilde{\beta}$. Then, we construct two control functions as follows:

$$v(\cdot) \triangleq \hat{u}(\cdot) + \widetilde{\beta}(\cdot) \quad \text{and} \quad w(\cdot) \triangleq \hat{u}(\cdot) - \widetilde{\beta}(\cdot).$$

(A discrete numerical illustration of this kernel construction is sketched after the proof of the claim.)
We will prove that both v(·) and w(·) belong to $\mathcal{A}$. Since $\hat{u}(\cdot)$ is an extreme point of $\mathcal{A}$ (so that, in particular, $x(\widetilde{T}; x_0, \hat{u}) = 0$), and since $I_F(\beta) = 0$, it follows at once from the variation of constants formula that

$$x(\widetilde{T}; x_0, v) = x(\widetilde{T}; x_0, w) = 0. \tag{2.3}$$

Thus, it remains to show that v(·) and w(·) belong to $\widetilde{\mathcal{U}}_{ad}$, namely, that for almost every t ∈ [0, +∞), v(t) and w(t) are in the set $\mathrm{co}\,\mathcal{U}$. With regard to t ∈ [0, +∞), there are only two possibilities: it belongs to either [0, +∞) \ F or F.
When t ∈ [0, +∞) \ F, we have that $v(t) = w(t) = \hat{u}(t)$. Along with the fact that $\hat{u}(\cdot)$, being an extreme point of $\mathcal{A}$, belongs to $\widetilde{\mathcal{U}}_{ad}$, this indicates that $v(t), w(t) \in \mathrm{co}\,\mathcal{U}$ for almost all t ∈ [0, +∞) \ F.
When t ∈ F, we observe that

$$v_1(t) + v_2(t) = \hat{u}_1(t) + \hat{u}_2(t) + 2\beta(t), \qquad v_1(t) - v_2(t) = \hat{u}_1(t) - \hat{u}_2(t), \tag{2.4}$$

and

$$w_1(t) + w_2(t) = \hat{u}_1(t) + \hat{u}_2(t) - 2\beta(t), \qquad w_1(t) - w_2(t) = \hat{u}_1(t) - \hat{u}_2(t).$$

On the other hand, one can easily check that

$$|\hat{u}_1(t) + \hat{u}_2(t)| + 2|\beta(t)| \le (1 - \varepsilon) + \varepsilon = 1 \quad \text{for almost every } t \in F$$

and

$$|\hat{u}_1(t) - \hat{u}_2(t)| \le 1 \quad \text{for almost every } t \in F.$$

These, together with (2.4), yield that $v(t) \in \mathrm{co}\,\mathcal{U}$ for almost every t ∈ F. Similarly, we can derive that $w(t) \in \mathrm{co}\,\mathcal{U}$ for almost every t ∈ F.
Therefore, we have proved that, for almost every t ∈ [0, +∞), both v(t) and w(t) belong to the set $\mathrm{co}\,\mathcal{U}$. Combined with (2.3), this shows that

$$v(\cdot) \in \mathcal{A} \quad \text{and} \quad w(\cdot) \in \mathcal{A}. \tag{2.5}$$

However, it is obvious that $v(\cdot) \neq w(\cdot)$ and $\hat{u}(\cdot) = \tfrac{1}{2}\big(v(\cdot) + w(\cdot)\big)$. Along with (2.5), this contradicts the fact that $\hat{u}(\cdot)$ is an extreme point of $\mathcal{A}$.
In the case where (2.2) holds, we can utilize the same arguments as above (with $\widetilde{\beta}(s) \triangleq (\beta(s), -\beta(s))^T$) to get a contradiction to the fact that $\hat{u}(\cdot)$ is an extreme point of $\mathcal{A}$.
Thus, we have proved that $\hat{u}(t) \in \mathcal{U}$ for almost every $t \in [0, \widetilde{T}]$. In summary, we conclude that the above-mentioned claim stands.
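The kernel construction used in the proof of the claim can be illustrated in a discrete setting: replace $L^\infty(F)$ by piecewise constant functions on m > n subintervals of F, so that $I_F$ becomes an n × m matrix, which necessarily has a nontrivial null space. In the Python sketch below, all concrete data (A, b1, b2, the time playing the role of $\widetilde{T}$, the interval playing the role of F, and m) are illustrative assumptions.

```python
# A discrete illustration of the kernel argument for I_F.
# With piecewise constant beta on m > n pieces of F, I_F becomes an n x m matrix M,
# and any beta in the null space of M satisfies I_F(beta) = 0, so perturbing the
# control by +/- (beta, beta)^T on F leaves the state at time T unchanged.
import numpy as np
from scipy.linalg import expm, null_space
from scipy.integrate import quad

A = np.array([[0.0, 1.0], [0.0, 0.0]])          # illustrative data
b1, b2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
T = 2.0                                          # plays the role of the relaxed optimal time
F = (0.5, 1.5)                                   # an interval playing the role of the set F
m = 5                                            # m > n = 2 pieces
edges = np.linspace(F[0], F[1], m + 1)

def column(lo, hi):
    """Integral over [lo, hi] of e^{A(T-s)} (b1 + b2) ds, computed component-wise."""
    f = lambda s, i: (expm(A * (T - s)) @ (b1 + b2))[i]
    return np.array([quad(f, lo, hi, args=(i,))[0] for i in range(2)])

M = np.column_stack([column(edges[j], edges[j + 1]) for j in range(m)])   # n x m
beta = null_space(M)[:, 0]       # a nontrivial piecewise constant element of ker I_F
print("I_F(beta) =", M @ beta)   # numerically (0, 0): the terminal state is unchanged
```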
Next, we define another control function by setting

$$\bar{u}(t) \triangleq \begin{cases} \hat{u}(t), & t \in [0, \widetilde{T}],\\ (0, 0)^T, & t \in (\widetilde{T}, +\infty). \end{cases}$$

By the above-mentioned claim, we can easily find that $\bar{u}(\cdot)$ belongs to $\mathcal{U}_{ad}$, and that $\bar{u}(\cdot)$ is an optimal control for problem $(\widetilde{TP})$. Since T* is the optimal time for problem (TP), from the facts that $\bar{u} \in \mathcal{U}_{ad}$ and $x(\widetilde{T}; x_0, \bar{u}) = 0$, we deduce that

$$T^* \le \widetilde{T}.$$

However, it is clear that $\widetilde{T} \le T^*$, because $\mathcal{U}_{ad} \subset \widetilde{\mathcal{U}}_{ad}$. Thus, we necessarily have

$$T^* = \widetilde{T}.$$

Therefore, it holds that

$$x(T^*; x_0, \bar{u}) = 0.$$

This indicates that $\bar{u}(\cdot)$ is a time optimal control for problem (TP). Hence, we have completed the proof of Theorem 1.1.
3 The bang-bang property
This section is devoted to proving Theorem 1.2.
Proof. Let u*(·) = (u1*(·), u2*(·))^T be a time optimal control for problem (TP). We aim to show that u*(·) holds the bang-bang property (1.5). By classical arguments, we can get Pontryagin's maximum principle for problem (TP) (see [10, 11]). Namely, there exists a multiplier ξ0 in ℝⁿ, with ξ0 ≠ 0, such that the following maximum principle stands:

$$\langle B^T\psi(t), u^*(t)\rangle = \max_{v \in \mathcal{U}} \langle B^T\psi(t), v\rangle \quad \text{for almost every } t \in [0, T^*], \tag{3.1}$$
where ψ(·) is the solution of the following adjoint equation:

$$\psi'(t) = -A^T \psi(t), \quad t \in [0, T^*]; \qquad \psi(T^*) = \xi_0. \tag{3.2}$$
Then, by the Kalman rank condition, we obtain the unique continuation property

$$B^T\psi(t) \neq (0, 0)^T \quad \text{for almost every } t \in [0, T^*]. \tag{3.3}$$

(Indeed, t ↦ B^Tψ(t) is analytic; if it vanished on a set of positive measure, it would vanish identically, which, by the Kalman rank condition, would force ξ0 = 0.)
Besides, it follows from (1.3), namely, the definition of $\mathcal{U}$, that, for η ∈ ℝ²,

$$\max_{v \in \mathcal{U}} \langle \eta, v\rangle = 0 \quad \text{if and only if} \quad \eta = (0, 0)^T.$$

This, together with (3.3) and (3.1), immediately gives the inequality:

$$\langle B^T\psi(t), u^*(t)\rangle > 0 \quad \text{for almost every } t \in [0, T^*]. \tag{3.4}$$
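The way (3.1), (3.3), and (3.4) force the bang-bang property can be seen pointwise: for g = B^Tψ(t) ≠ 0, the maximizer of ⟨g, v⟩ over $\mathcal{U}$ is a unit vector supported on a single coordinate axis. The following Python sketch evaluates this maximizer along an adjoint trajectory; the matrices A and B, the time T*, and the multiplier ξ0 are illustrative assumptions.

```python
# Pointwise illustration of how (3.1) and (3.3) yield |u*(t)| = 1:
# for g = B^T psi(t) != 0, the maximizer of <g, v> over
# U = {v in R^2 : |v| <= 1 and v1*v2 = 0} is sign(g_i) e_i for the index i with larger |g_i|.
import numpy as np
from scipy.linalg import expm

def maximizer_over_U(g):
    """argmax of <g, v> over U; assumes g != 0."""
    i = int(np.argmax(np.abs(g)))
    v = np.zeros(2)
    v[i] = 1.0 if g[i] >= 0 else -1.0
    return v

A = np.array([[0.0, 1.0], [0.0, 0.0]])           # illustrative data
B = np.column_stack([[0.0, 1.0], [1.0, 0.0]])
T_star, xi0 = 3.0, np.array([1.0, -2.0])

for t in np.linspace(0.0, T_star, 7):
    psi = expm(A.T * (T_star - t)) @ xi0         # psi solves (3.2): psi' = -A^T psi, psi(T*) = xi0
    u_star = maximizer_over_U(B.T @ psi)         # the value selected by the maximum condition (3.1)
    print(f"t = {t:.2f}  u*(t) = {u_star}  |u*(t)| = {np.linalg.norm(u_star):.1f}")
```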
Next, we define subsets $E_k$, k = 1, 2, ..., by setting

$$E_k \triangleq \big\{t \in [0, T^*];\ |u^*(t)| \le 1 - 1/k\big\}.$$
By contradiction, we suppose that u*(·) did not have the bang-bang property, namely, that (1.5) did not hold for u*(·). Then, there would exist a natural number k such that $m(E_k) > 0$, where m denotes the Lebesgue measure. Therefore, we could find a positive number C > 1 such that

$$C\,(1 - 1/k) \le 1.$$
Now, we construct another control $\widetilde{u}(\cdot)$ in the following manner:

$$\widetilde{u}(t) \triangleq \begin{cases} C\, u^*(t), & t \in E_k,\\ u^*(t), & t \in [0, T^*] \setminus E_k. \end{cases}$$

It is obvious that $\widetilde{u}(t) \in \mathcal{U}$ for almost every t ∈ [0, T*]. However, by the construction of $\widetilde{u}(\cdot)$ and by (3.4), we can easily obtain the inequality:

$$\langle B^T\psi(t), \widetilde{u}(t)\rangle = C\,\langle B^T\psi(t), u^*(t)\rangle > \langle B^T\psi(t), u^*(t)\rangle \quad \text{for almost every } t \in E_k,$$

which, since $m(E_k) > 0$, leads to a contradiction with (3.1).
This completes the proof of Theorem 1.2.
References
1. Evans LC: An Introduction to Mathematical Optimal Control Theory. [http://math.berkeley.edu/evans/]
2. Phung KD, Wang G, Zhang X: On the existence of time optimal controls for linear evolution equations. Discrete Contin Dyn Syst Ser B 2007, 8: 925–941.
3. Barbu V: Analysis and Control of Nonlinear Infinite Dimensional Systems. Mathematics in Science and Engineering, Volume 190. Academic Press, Inc; 1993.
4. Barbu V, Precupanu T: Convexity and Optimization in Banach Spaces. D Reidel, Dordrecht; 1986.
5. Pontryagin LS, Boltyanskii VG, Gamkrelidze RV: The Mathematical Theory of Optimal Processes. Wiley, New York; 1962.
6. Fattorini HO: Time optimal control of solutions of operational differential equations. SIAM J Control 1964, 2: 54–59.
7. LaSalle JP: The time optimal control problem. In Contributions to the Theory of Nonlinear Oscillations, Volume 5. Princeton University Press, Princeton; 1960:1–24.
8. Wang G, Wang L: The bang-bang principle of time optimal controls for the heat equation with internal controls. Syst Control Lett 2007, 56: 709–713. 10.1016/j.sysconle.2007.06.001
9. Wang G: L∞-null controllability for the heat equation and its consequences for the time optimal control problem. SIAM J Control Optim 2008, 47: 1701–1720. 10.1137/060678191
10. Barbu V: Mathematical Methods in Optimization of Differential Systems. Kluwer Academic Publishers, Dordrecht; 1994.
11. Li X, Yong J: Optimal Control Theory for Infinite Dimensional Systems. Birkhäuser, Boston; 1995.
Acknowledgements
The authors would like to thank Professor Gengsheng Wang for his valuable suggestions on this article. This work was partially supported by the National Natural Science Foundation of China under Grant No. 10971158 and the Natural Science Foundation of Ningbo under Grant No. 2010A610096.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
GZ posed the problem and proved the existence theorem for time optimal controls. BM gave the proof of the bang-bang property of time optimal controls. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Zheng, G., Ma, B. A time optimal control problem of some linear switching controlled ordinary differential equations. Adv Differ Equ 2012, 52 (2012). https://doi.org/10.1186/1687-1847-2012-52
Keywords
- switching control
- Pontryagin maximum principle
- bang-bang property
- controlled linear ordinary differential equations