Our approach to finding the optimal control is based on the definition of a saddle point given below, taken from [20] with slight changes to suit our problem.
Definition 4.1
-
(i)
If the pair is optimal, then there exists a saddle point of the game over the interval with respect to , if
for all and , where and are nonempty sets of admissible controls.
-
(ii)
The upper value of the game at any path and time is defined by
and the lower value of the game is
and the game has a value if the upper and lower values coincide, that is, if
The objective is to find the optimal admissible controls, and , such that satisfies Definition 4.1 for and .
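In generic notation, writing J for the value of the game, u for the minimizing control, v for the maximizing control, and 𝒰, 𝒱 for the admissible sets (placeholder symbols used only for illustration), the saddle-point condition in Definition 4.1(i) takes the standard form

$$ J(u^{*}, v) \;\le\; J(u^{*}, v^{*}) \;\le\; J(u, v^{*}) \qquad \text{for all } u \in \mathcal{U},\ v \in \mathcal{V}, $$

so that neither player can improve the outcome by deviating unilaterally from the pair $(u^{*}, v^{*})$.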
Theorem 4.1 (Bellman principle of optimality)
If is optimal over the interval starting at an initial state , then is necessarily optimal over the subinterval for any dt such that .
For the proof of the above theorem, refer to [15].
Applying Theorem 4.1 and Definition 4.1 to the value of the game , we have that
(5)
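In the same generic notation (with V the value of the game, ℒ the running cost, and x the state, again used only as placeholders), the recursion obtained here takes the familiar dynamic-programming form

$$ V(x,t) \;=\; \min_{u}\,\max_{v}\; \mathbb{E}\Big[\, \mathcal{L}(x,u,v)\,dt \;+\; V\big(x+dx,\; t+dt\big) \Big] $$

(with max–min in place of min–max for the lower value): the value at time t equals the cost incurred over the next instant plus the expected value at the state reached at time t+dt.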
We need to calculate the expectation of the function . Approximating the function using Taylor’s formula, we have
Ignoring higher-order terms and letting , we get
(6)
Substituting the stochastic equation (4) into equation (6) and applying the rules of Itô calculus, we can write the function as
(7)
Taking the expectation of equation (7), we have
(8)
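For a one-dimensional illustration with generic drift b(x,t) and diffusion σ(x,t) standing in for the specific dynamics of equation (4), the Taylor/Itô step that produces (7) and (8) amounts to

$$ dJ = J_t\,dt + J_x\,dx + \tfrac{1}{2}J_{xx}\,(dx)^{2}, \qquad dx = b\,dt + \sigma\,dW, $$

and, using $\mathbb{E}[dW] = 0$ and $(dW)^{2} = dt$,

$$ \mathbb{E}[\,dJ\,] = \Big( J_t + b\,J_x + \tfrac{1}{2}\sigma^{2} J_{xx} \Big)\,dt . $$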
Substituting equation (8) into equation (5) yields
(9)
The above equation is a Bellman equation similar to the one in [21]; it is a parabolic differential equation that admits simple solutions only for some simple processes and utility functions. Since solving the Bellman equation directly is not always easy, in this paper we adopt the idea of [22] instead. From the Bellman equation we can solve for the optimum values and by taking the derivatives with respect to and ,
(10)
As for the maximizer , we have
(11)
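Purely as an illustration of how such first-order conditions typically look, if the running cost were quadratic in the controls, say $\mathcal{L} = q(x) + \tfrac{1}{2}u^{\top}R_u u - \tfrac{1}{2}v^{\top}R_v v$, with both controls entering the drift through a matrix $G$ (an assumed structure, not necessarily the exact form of our model), then setting the derivatives to zero would give

$$ u^{*} = -R_u^{-1} G^{\top} \nabla_x J, \qquad v^{*} = R_v^{-1} G^{\top} \nabla_x J, $$

so the minimizer pushes against the gradient of the value function while the maximizer pushes along it.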
Substituting the values of and into ℒ in equation (9) and collecting like terms yields the expression
(12)
Equation (12) is a nonlinear second-order partial differential equation (PDE), and solving it is challenging because it is nonlinear and high dimensional. As assumed in [13], there is a connection between the controls and the variance of the Brownian noise. Considering the difference in our control weights, we have the following cases:
-
(i)
implies that more weight is on the minimizing control than on the maximizing control variable.
-
(ii)
implies more weight on the maximizing control than on the minimizing control variable.
-
(iii)
implies that the weights of the controls are equal, hence an ideal situation for a minimax optimal control.
The intuition we get from [13] is that the higher the variance, the lower the weight of the controls (hence ‘cheap’ controls), and vice versa. In our case we want to strike a balance such that both players attain their optima. The variance of the Brownian noise here is given by , therefore we want to attain a situation whereby for all and , where the difference of the control coefficients equals the variance of the noise. Our assumption on the balancing parameter differs from the one suggested by other authors, as in [13] and [15], where the balancing term is just a constant parameter. In our case, the balancing variable depends on t so that the equality is attained at every time instant, since the variance terms differ with time.
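Under the same illustrative quadratic structure as above (an assumption made only for exposition, not the paper's exact formulation), one plausible form of this balancing condition is

$$ \sigma(x,t)\,\sigma(x,t)^{\top} \;=\; \lambda(t)\, G\big( R_u^{-1} - R_v^{-1} \big) G^{\top}, $$

where the time-dependent balancing variable $\lambda(t)$ enforces the equality at every instant even though the variance changes with time.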
Suppose that
(13)
We determine all the partial derivatives of the new value function given in equation (13),
(14)
and
(15)
Therefore, substituting (13), (14), and (15) into the nonlinear PDE given in (12), and taking into consideration the assumption that for all , we have
(16)
which yields a second order quasilinear PDE with the boundary condition given as
(17)
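The transformation in (13) is of the logarithmic type commonly used in path-integral control; assuming for illustration that it takes the standard form $V(x,t) = -\lambda(t)\log \psi(x,t)$, the spatial derivatives in (14) and (15) are of the form

$$ V_x = -\lambda\,\frac{\psi_x}{\psi}, \qquad V_{xx} = -\lambda\left( \frac{\psi_{xx}}{\psi} - \frac{\psi_x^{2}}{\psi^{2}} \right), $$

and under the balancing assumption the terms quadratic in $\psi_x/\psi$ cancel, which is what reduces the nonlinear PDE (12) to the quasilinear PDE (16).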
If a solution to equation (16) exists, then we have the results given below.
Theorem 4.2 If satisfies equation (16), then the transformed control optimums are given as
and
for the value
where
satisfies
One observes that is now positive while , because the problem has been transformed from a minimax to a maximin problem. The PDE in (16) is difficult to solve in both variables x and t, therefore in this paper we resort to transforming the above PDE into an ODE, for which a solution can be obtained in most cases. Consider a one-dimensional problem, thus , and fix t; the equation then depends on x alone. This leads to a nonlinear ODE, and before solving it we make the following assumptions.
(A:1)
-
(i)
, and are nonnegative functions.
-
(ii)
is Lipschitz continuous for all and .
-
(iii)
and are also continuous and bounded functions for all .
Let
Multiplying throughout by , we have
(18)
where
and
For transformation and simplicity, we represent the following functions as and .
This yields the following first order ODE:
(19)
which gives the equation
(20)
Given the following conditions:
(A:2)
-
(i)
is a compact and bounded set.
-
(ii)
is bounded.
-
(iii)
, and
By the Lipschitz condition in (A:1), we have
(21)
we know that
(22)
For the equation , we therefore have
(23)
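The role of the Lipschitz condition here is the standard one: writing the first-order ODE generically as $y'(x) = F(x, y(x))$ (placeholder notation), the bound

$$ |F(x, y_1) - F(x, y_2)| \le K\,|y_1 - y_2| $$

on the compact set of (A:2), together with the boundedness assumptions, is exactly what the Picard–Lindelöf theorem requires to guarantee an integral curve through the given terminal condition.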
Hence a solution exists, with the terminal condition given by
(24)
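As a purely numerical illustration of this one-dimensional case, the sketch below integrates a generic first-order nonlinear ODE of the form (20) backward from a terminal condition; the coefficient functions alpha and beta, the terminal point x_T, and the terminal value y_T are hypothetical placeholders rather than the paper's actual data.

```python
# Numerical sketch: solve a generic first-order nonlinear ODE
#     y'(x) = alpha(x) * y(x) + beta(x) * y(x)**2
# backward from a terminal condition y(x_T) = y_T.
# alpha, beta, x_T and y_T are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def alpha(x):
    return 0.5 * np.cos(x)          # placeholder coefficient

def beta(x):
    return -0.1 * (1.0 + x ** 2)    # placeholder coefficient

def rhs(x, y):
    # Right-hand side F(x, y); Lipschitz in y on any bounded set.
    return alpha(x) * y + beta(x) * y ** 2

x_T, y_T = 1.0, 1.0                  # terminal point and terminal value
sol = solve_ivp(rhs, (x_T, 0.0), [y_T], dense_output=True, rtol=1e-8)

xs = np.linspace(0.0, x_T, 101)
ys = sol.sol(xs)[0]                  # approximate solution on [0, x_T]
print(ys[:5])
```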
In summary we have the following results.
Theorem 4.3
Consider a special case for the equation
for a one-dimensional problem and for
Then, assuming that (A:1) and (A:2) hold, at least one solution exists.
The solution in (23) is not necessarily unique; to attain uniqueness, further boundary conditions for the ODE must be given. For a one-dimensional problem at least one solution exists, while for the equation is a PDE, which is difficult to solve.
4.1 Iterative optimal control estimates
From Theorem 4.3, consider the estimated value function to be given as
(25)
where
and
The expectation of the value function is driven by stochastic differential equation (19). The function in equation (25) is the probability density function of the transitions, and the function will be defined later.
Due to the presence of noise in the problem, the future paths and the future control values cannot be known with certainty. This does not mean we have to give up: we may instead estimate the future paths, and hence the future control values, in order to attain the optima, since the controls depend on the path.
The continuous time interval is divided into small, equal subintervals to obtain discrete paths, assuming the trajectory is not distorted in any way; that is, let
Suppose the transition between the paths is given by
(26)
The above equation is the probability density function for the sample path from to . The transitions of the sample paths are Markovian as they depend solely on the current path () at time . Following the work of [15] closely, we take the noise term to be Gaussian distributed with mean zero and variance as given earlier. Therefore,
(27)
for , which is the change in time t.
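For concreteness, the sketch below generates discrete sample paths whose one-step transitions are Gaussian with mean given by the drift and variance proportional to the time step, in the spirit of equation (27); the drift f and diffusion sigma used here are hypothetical placeholders, not the model in equation (4).

```python
# Sketch: sample discrete paths
#     x_{i+1} = x_i + f(x_i) * dt + sigma(x_i) * sqrt(dt) * N(0, 1),
# so each transition density is Gaussian with mean x_i + f(x_i)*dt
# and variance sigma(x_i)**2 * dt.  f and sigma are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return -x                    # placeholder drift

def sigma(x):
    return 0.5                   # placeholder diffusion coefficient

def sample_paths(x0, dt, n_steps, n_paths):
    """Return an array of shape (n_paths, n_steps + 1) of discretized sample paths."""
    paths = np.empty((n_paths, n_steps + 1))
    paths[:, 0] = x0
    for i in range(n_steps):
        x = paths[:, i]
        noise = rng.standard_normal(n_paths)
        paths[:, i + 1] = x + f(x) * dt + sigma(x) * np.sqrt(dt) * noise
    return paths

paths = sample_paths(x0=1.0, dt=0.01, n_steps=100, n_paths=1000)
print(paths.shape)
```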
Hence we have the following results given as a lemma.
Lemma 4.1 From both Theorem 4.2 and Theorem 4.3, and assuming that the transitions are given by equation (27), we give the iterative optimal controls as
and
for the estimated value function
where
and
Proof From Theorem 4.2, suppose that the solution is given as an estimated iterative value function in equation (25). Consider the discrete paths of the optimal trajectory given as
(28)
Now, substituting equation (27) into equation (25), we have
(29)
for
where
We know that
and
Therefore, we have
(30)
Let
and
Similarly, extending that representation to other functions, we may express the iterative value of the game as
(31)
Applying Theorem 4.2, we obtain the optimal iterative control estimates, which completes the proof. □
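To indicate how Lemma 4.1 could be used in practice, the following sketch computes a sample-based estimate of the value function and of an iterative control update by exponentially weighting sampled paths by their accumulated cost, in the style of path-integral control; the cost function state_cost, the weighting parameter lam, and the control extraction rule are illustrative assumptions, not the exact formulas of the lemma.

```python
# Sketch: path-integral style estimate using sampled paths such as those
# produced by the sampling sketch above.  state_cost, lam and the control
# update rule are illustrative assumptions.
import numpy as np

def state_cost(x):
    return x ** 2                          # placeholder running state cost

def iterative_control_estimate(paths, dt, lam=1.0):
    """Estimate the value and a first-step control correction from sampled paths.

    Each path is weighted by exp(-S/lam), where S is its accumulated state cost;
    the control correction is the weighted average of the first-step increments.
    """
    costs = state_cost(paths[:, 1:]).sum(axis=1) * dt        # accumulated cost per path
    shifted = costs - costs.min()                            # shift for numerical stability
    weights = np.exp(-shifted / lam)
    weights /= weights.sum()
    value_estimate = -lam * np.log(np.mean(np.exp(-shifted / lam))) + costs.min()
    first_step = paths[:, 1] - paths[:, 0]                   # first-step increments
    control_correction = np.sum(weights * first_step) / dt   # weighted increment per unit time
    return value_estimate, control_correction

# Example usage with the hypothetical paths from the sampling sketch above:
# value, u0 = iterative_control_estimate(paths, dt=0.01)
```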