Non-zero sum differential games of anticipated forward-backward stochastic differential delayed equations under partial information and application
Advances in Difference Equations volume 2017, Article number: 383 (2017)
Abstract
This paper is concerned with a non-zero sum differential game problem of an anticipated forward-backward stochastic differential delayed equation under partial information. We establish a maximum principle and a verification theorem for the Nash equilibrium point by virtue of the duality and convex variation approach. We study a linear-quadratic system under partial information and present an explicit form of the Nash equilibrium point. We derive the filtering equations and prove the existence and uniqueness for the Nash equilibrium point. As an application, we solve a time-delayed pension fund management problem with nonlinear expectation to measure the risk and obtain the explicit solution.
1 Introduction
The general nonlinear backward stochastic differential equations (BSDEs) were first developed by Pardoux and Peng [1] and have been widely applied in optimal control, mathematical finance, and related fields (see Peng [2, 3], Karoui et al. [4]). The classical Black-Scholes option pricing formula in the financial market can also be deduced by virtue of the BSDE theory. If a BSDE is coupled with a forward stochastic differential equation (SDE), it is called a forward-backward stochastic differential equation (FBSDE). The fundamental research on FBSDEs has been surveyed by Ma and Yong [5]. FBSDEs are widely encountered in applications such as stochastic recursive utility (see, e.g., Wang and Wu [6]), financial optimization problems with large investors (see Cvitanic and Ma [7]), and asset pricing problems with differential utilities (see Antonelli et al. [8]). The classical stochastic Hamiltonian system in the stochastic control field is also a form of FBSDE (see, e.g., Yong and Zhou [9]).
In reality, there are many phenomena that have the nature of past-dependence, i.e., their behavior depends not only on the situation at the present time but also on their past history. Such models are described by stochastic differential delayed equations (SDDEs), which are a natural generalization of classical SDEs and have been widely studied in engineering, life science, finance, and other fields (see, for example, the population growth model in Mohammed [10], Arriojas et al. [11]). Chen and Wu [12] first studied a stochastic control problem based on an SDDE. When introducing the adjoint equation, they needed a new type of BSDE, introduced by Peng and Yang [13] for the general nonlinear case and called anticipated BSDEs (ABSDEs). The anticipated term, defined through a conditional expectation, can be regarded as a predicted value of the future state. It can be applied to the insider trading market, describing asset prices influenced by insiders (see, e.g., Øksendal and Sulem [14], Kyle [15]). Moreover, a class of BSDEs with time-delayed generators (BSDDEs) has also been studied in the stochastic control field (see Wu and Wang [16], Shi and Wang [17], Wu and Shu [18]). Recently, Chen and Wu [19] and Huang et al. [20] studied a linear-quadratic (LQ) case based on a coupled SDDE and ABSDE, called the anticipated forward-backward stochastic differential delayed equation (AFBSDDE).
Game theory has been pervading economic theory and attracts more and more research attention. It was first introduced by Von Neumann and Morgenstern [21]. Nash [22] made the fundamental contribution to non-cooperative games and gave the classical notion of the Nash equilibrium point. In recent years, many articles on stochastic differential game problems driven by stochastic differential equations have appeared. Researchers consider strategies of multiple players rather than one player and try to find an equilibrium point rather than an optimal control. These problems are more complicated than the classical control problems but much closer to social and behavioral science. Yu [23] solved the LQ game problem for a forward-backward system. Øksendal and Sulem [24] and Hui and Xiao [25] studied the maximum principle of forward-backward systems. Chen and Yu [26] studied the maximum principle for an SDDE case, while Shi and Wang [17] and Wu and Shu [18] discussed BSDDE cases.
In reality, instead of complete information, there are many cases where the controller can obtain only partial information; mathematically, this means that the control variable is adapted to a smaller filtration. Motivated by this phenomenon, Xiong and Zhou [27] dealt with a mean-variance problem in the financial market in which the investor’s optimal portfolio is based only on the stock price process he observes. This assumption of partial information is indeed natural in the financial market. Recently, Wu and Wang [16] and Wu and Shu [18] also considered the partial information case.
To the best of our knowledge, research on general AFBSDDEs and their wide applications in mathematical finance is quite lacking in the literature. Recently, Huang and Shi [28] discussed the optimal control problem for an AFBSDDE system. Our work is distinguished from the above-mentioned ones in the following aspects. First, we study the stochastic differential game problem with multiple players rather than the control problem with only one controller; we aim to find the equilibrium point rather than the optimal control. Second, we consider the case where the diffusion coefficient can contain control variables and the control domain is convex. Third, we deal with the system under partial information, where the information available to the players is partial, which can be seen as a generalization of the complete information case. Fourth, we derive the filtering equations of an LQ system and obtain worthwhile results on the existence and uniqueness of the equilibrium point. Finally, as an application, we solve a financial problem by virtue of the theoretical results we obtain.
The rest of this paper is organized as follows. In Section 2, we give some necessary notions and state some preliminary results. In Section 3, we establish a necessary condition (maximum principle) and a sufficient condition (verification theorem) for the Nash equilibrium point. In Section 4, we study a linear-quadratic game problem under partial information. We derive the filtering equations and prove the existence and uniqueness for the Nash equilibrium point. In Section 5, we solve a pension fund management problem with nonlinear expectation and obtain the explicit solution.
2 Preliminary results
Throughout this article, we denote by \(\mathbb{R}^{k}\) the k-dimensional Euclidean space; by \(\mathbb{R}^{k\times l}\) the collection of \(k\times l\) matrices. For a given Euclidean space, we denote by \(\langle\cdot,\cdot\rangle\) (resp. \(\vert \cdot \vert \)) the scalar product (resp. norm). The superscript τ denotes the transpose of vectors or matrices.
Let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq0},\mathbb{P})\) be a complete filtered probability space equipped with a \((d+\bar{d})\)-dimensional, \(\mathcal{F}_{t}\)-adapted standard Brownian motion \((W(\cdot),\bar{W}(\cdot))\), where \(\mathcal{F}=\mathcal{F}_{T}\). \(\mathbb{E}^{\mathcal{F}_{t}}[\cdot]=\mathbb{E}[\cdot|\mathcal{F}_{t}]\) denotes the conditional expectation with respect to the filtration \(\mathcal{F}_{t}\), and \(f_{x}(\cdot)\) denotes the partial derivative of a function \(f(\cdot)\) with respect to x. Let \(T>0\) be the finite time horizon and \(0<\delta<T\) be the constant time delay. Moreover, we denote by \(\mathbb{C}([-\delta,0];\mathbb{R}^{k})\) the space of uniformly bounded continuous functions on \([-\delta,0]\), by \(\mathbb{L}^{p}_{\mathcal{F}}(\Omega;\mathbb{R}^{k})\) the space of \(\mathcal{F}\)-measurable random variables ξ satisfying \(\mathbb{E}\vert \xi \vert ^{p}<\infty\) for \(p\geq1\), and by \(\mathbb{L}^{p}_{\mathcal{F}}(r,s;\mathbb{R}^{k})\) the space of \(\mathbb{R}^{k}\)-valued \(\mathcal{F}_{t}\)-adapted processes \(\varphi(\cdot)\) satisfying \(\mathbb{E}\int_{r}^{s}\vert \varphi(t)\vert ^{p}\,dt<\infty\) for \(p\geq1\).
We consider the following AFBSDDE:
Here, the processes \(x^{v}(\cdot)\), \(y^{v}(\cdot)\), \(z^{v}(\cdot)\), \(\bar{z}^{v}(\cdot)\) are defined on \(\Omega\times[-\delta,T]\), \(\Omega\times[0,T+\delta]\), \(\Omega\times[0,T]\), and \(\Omega\times[0,T]\), respectively; \(b: \Omega\times[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow\mathbb{R}^{n}\), \(\sigma: \Omega\times[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow\mathbb{R}^{n\times d}\), \(\bar{\sigma}: \Omega\times[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow\mathbb{R}^{n\times\bar{d}}\), \(f: \Omega\times[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{m\times d}\times\mathbb{R}^{m\times\bar{d}}\times\mathbb{R}^{m}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow\mathbb{R}^{m}\), \(G:\Omega\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) are given continuous maps, \(x^{v}_{\delta}(t)=x^{v}(t-\delta)\), \(y^{v}_{\delta^{+}}(t)=\mathbb{E}^{\mathcal{F}_{t}}[y^{v}(t+\delta)]\), \(\xi(\cdot)\in\mathbb{C}([-\delta,0];\mathbb{R}^{n})\) is the initial path of \(x^{v}(\cdot)\), and \(\varphi(\cdot)\in\mathbb{L}^{2}_{\mathcal{F}}(T,T+\delta;\mathbb{R}^{m})\) is the terminal path of \(y^{v}(\cdot)\). For simplicity, we omit the notation of ω in each process.
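For orientation, a coupled system of this type, built from the maps and the delayed and anticipated arguments just introduced, takes the following generic form (a sketch of the structure of (1), written under the assumption that the published display is of this shape up to notation):
\[
\begin{cases}
dx^{v}(t)=b(t,x^{v}(t),x^{v}_{\delta}(t),v_{1}(t),v_{2}(t))\,dt+\sigma(t,x^{v}(t),x^{v}_{\delta}(t),v_{1}(t),v_{2}(t))\,dW(t)\\
\phantom{dx^{v}(t)=}{}+\bar{\sigma}(t,x^{v}(t),x^{v}_{\delta}(t),v_{1}(t),v_{2}(t))\,d\bar{W}(t),\quad t\in[0,T],\\
-dy^{v}(t)=f(t,x^{v}(t),y^{v}(t),z^{v}(t),\bar{z}^{v}(t),y^{v}_{\delta^{+}}(t),v_{1}(t),v_{2}(t))\,dt-z^{v}(t)\,dW(t)-\bar{z}^{v}(t)\,d\bar{W}(t),\quad t\in[0,T],\\
x^{v}(t)=\xi(t),\quad t\in[-\delta,0],\qquad y^{v}(T)=G(x^{v}(T)),\qquad y^{v}(t)=\varphi(t),\quad t\in(T,T+\delta].
\end{cases}
\]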
Let \(U_{i}\) be a nonempty convex subset of \(\mathbb{R}^{k_{i}}\), \(\mathcal{G}_{t}^{i}\subseteq\mathcal{F}_{t}\) be a given sub-filtration representing the information available to player i, and \(v_{i}(\cdot)\) be the control process of player i (\(i=1,2\)). We denote by \(\mathcal{U}_{\mathrm{ad}}^{i}\) the set of \(U_{i}\)-valued \(\mathcal{G}^{i}_{t}\)-adapted control processes \(v_{i}(\cdot)\in\mathbb{L}_{\mathcal{G}^{i}}^{2}(0,T;\mathbb{R}^{k_{i}})\), which is called the admissible control set for player i (\(i=1,2\)). \(\mathcal{U}_{\mathrm{ad}}=\mathcal{U}_{\mathrm{ad}}^{1}\times\mathcal{U}_{\mathrm{ad}}^{2}\) is called the set of admissible controls for the two players. We also introduce the following assumption:
-
H1.
Functions b, σ, σ̄ are continuously differentiable in \((x,x_{\delta},v_{1},v_{2})\), f is continuously differentiable in \((x,y,z,\bar{z},y_{\delta^{+}},v_{1},v_{2})\), G is continuously differentiable in x. All the partial derivatives of b, σ, σ̄, f, G are uniformly bounded.
Then we have the following existence and uniqueness result which can be found in [12, 13].
Theorem 2.1
If \(v_{1}(\cdot)\) and \(v_{2}(\cdot)\) are admissible controls and assumption H1 holds, AFBSDDE (1) admits a unique solution \((x(\cdot),y(\cdot),z(\cdot),\bar{z}(\cdot))\in\mathbb{L}_{ \mathcal{F}}^{2}(-\delta,T;\mathbb{R}^{n})\times\mathbb{L}_{ \mathcal{F}}^{2}(0,T+\delta;\mathbb{R}^{m}) \times\mathbb{L}_{ \mathcal{F}}^{2}(0,T;\mathbb{R}^{m\times d})\times\mathbb{L}_{ \mathcal{F}}^{2}(0,T;\mathbb{R}^{m\times\bar{d}})\).
The players have their own preferences which are described by the following cost functionals:
Here, \(l_{i}: \Omega\times[0,T]\times\mathbb{R}^{n}\times \mathbb{R}^{m}\times\mathbb{R}^{m\times d}\times\mathbb{R}^{m\times \bar{d}}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow \mathbb{R}\), \(\Phi_{i}: \Omega\times\mathbb{R}^{n}\rightarrow \mathbb{R}\), \(\gamma_{i}: \Omega\times\mathbb{R}^{m}\rightarrow \mathbb{R}\) (\(i=1,2\)) are given continuous maps. \(l_{i}\), \(\Phi_{i}\), and \(\gamma_{i}\) satisfy the following condition:
-
H2.
Functions \(l_{i}\), \(\Phi_{i}\), and \(\gamma_{i}\) are continuously differentiable with respect to \((x,y,z,\bar{z},v_{1},v _{2})\), x, and y, respectively. Moreover, there exists a positive constant C such that the partial derivatives of \(l_{i}\), \(\Phi_{i}\), and \(\gamma_{i}\) are bounded by \(C(1+\vert x\vert +\vert y\vert +\vert z\vert +\vert \bar{z}\vert +\vert v_{1}\vert +\vert v _{2}\vert )\), \(C(1+\vert x\vert )\), and \(C(1+\vert y\vert )\), respectively.
Now we suppose that each player hopes to maximize his cost functional \(J_{i}(v_{1}(\cdot),v_{2}(\cdot))\) by selecting a suitable admissible control \(v_{i}(\cdot)\) (\(i=1,2\)). The problem is to find an admissible control \((u_{1}(\cdot),u_{2}(\cdot))\in\mathcal{U}_{\mathrm {ad}}\) such that
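(in the standard formulation, which is what condition (2) is understood to express)
\[
J_{1}\bigl(u_{1}(\cdot),u_{2}(\cdot)\bigr)=\sup_{v_{1}(\cdot)\in\mathcal{U}_{\mathrm{ad}}^{1}}J_{1}\bigl(v_{1}(\cdot),u_{2}(\cdot)\bigr),\qquad
J_{2}\bigl(u_{1}(\cdot),u_{2}(\cdot)\bigr)=\sup_{v_{2}(\cdot)\in\mathcal{U}_{\mathrm{ad}}^{2}}J_{2}\bigl(u_{1}(\cdot),v_{2}(\cdot)\bigr),
\]
i.e., neither player can improve his own functional by a unilateral deviation.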
If we can find an admissible control \((u_{1}(\cdot),u_{2}(\cdot))\) satisfying (2), then we call it a Nash equilibrium point. In what follows, we aim to establish the necessary and sufficient condition for the Nash equilibrium point subject to this game problem.
3 Maximum principle
In this section, we will establish a necessary condition (maximum principle) and a sufficient condition (verification theorem) for problem (2).
Let \((u_{1}(\cdot),u_{2}(\cdot))\) be an equilibrium point of the game problem, and let \((v_{1}(\cdot),v_{2}(\cdot))\in\mathbb{L}_{\mathcal{G}^{1}}^{2}(0,T; \mathbb{R}^{k_{1}})\times\mathbb{L}_{\mathcal{G}^{2}}^{2}(0,T; \mathbb{R}^{k_{2}})\) be such that \((u_{1}(\cdot)+v_{1}(\cdot),u_{2}(\cdot)+v_{2}(\cdot))\in\mathcal{U}_{\mathrm{ad}}\). Then, for any \(0\leq\epsilon\leq1\), we take the variational controls \(u_{1}^{\epsilon}(\cdot)=u_{1}(\cdot)+\epsilon v_{1}(\cdot)\) and \(u_{2}^{\epsilon}(\cdot)=u_{2}(\cdot)+\epsilon v_{2}(\cdot)\). Because both \(U_{1}\) and \(U_{2}\) are convex, \((u_{1}^{\epsilon}(\cdot),u_{2}^{\epsilon}(\cdot))\) is also in \(\mathcal{U}_{\mathrm{ad}}\). For simplicity, we denote by \((x^{u_{1}^{\epsilon}}(\cdot),y^{u_{1}^{\epsilon}}(\cdot),z^{u_{1}^{\epsilon}}(\cdot),\bar{z}^{u_{1}^{\epsilon}}(\cdot))\), \((x^{u_{2}^{\epsilon}}(\cdot),y^{u_{2}^{\epsilon}}(\cdot),z^{u_{2}^{\epsilon}}(\cdot),\bar{z}^{u_{2}^{\epsilon}}(\cdot))\), and \((x(\cdot),y(\cdot),z(\cdot),\bar{z}(\cdot))\) the state trajectories of system (1) corresponding to the controls \((u_{1}^{\epsilon}(\cdot),u_{2}(\cdot))\), \((u_{1}(\cdot),u_{2}^{\epsilon}(\cdot))\), and \((u_{1}(\cdot),u_{2}(\cdot))\), respectively.
The following lemma gives an estimation of \((x(\cdot),y(\cdot),z( \cdot),\bar{z}(\cdot))\).
Lemma 3.1
Let H1 hold. For \(i=1,2\),
Proof
Applying Itô’s formula to \(\vert x^{u_{i}^{\epsilon}}(t)-x(t)\vert ^{2}\) and using Gronwall’s inequality, we draw the conclusion. □
For notational simplicity, we set \(\zeta(t)=\zeta(t,x(t),x_{\delta }(t),u_{1}(t),u_{2}(t))\) for \(\zeta=b, \sigma, \bar{\sigma}\); \(f(t)=f(t,x(t),y(t), z(t),\bar{z}(t),y_{\delta^{+}}(t),u_{1}(t),u _{2}(t))\), and \(l_{i}(t)=l_{i}(t,x(t),y(t), z(t), \bar{z}(t),u_{1}(t),u_{2}(t))\) (\(i=1,2\)).
We introduce the following variational equations:
Next, set
Then we can obtain the following two lemmas by using Lemma 3.1. The technique is classical (see Chen and Wu [12]); thus we omit the details and only state the main results for simplicity.
Lemma 3.2
Let H1 hold. For \(i=1,2\),
Lemma 3.3
Let H1 and H2 hold. For \(i=1,2\),
We introduce the adjoint equation as
This equation is also an AFBSDDE. By the existence and uniqueness result in [12, 13], we know that (5) admits a unique solution \((p_{i}(t),q_{i}(t),k_{i}(t),\bar{k}_{i}(t))\) (\(i=1,2\)).
Define the Hamiltonian function \(H_{i}\) (\(i=1,2\)) by
Then (5) can be rewritten as a stochastic Hamiltonian system of the following type:
where \(H_{i}(t)=H_{i}(t,x(t),y(t),z(t),\bar{z}(t),x_{\delta}(t),y _{\delta^{+}}(t),u_{1}(t),u_{2}(t);p_{i}(t),q_{i}(t),k_{i}(t), \bar{k}_{i}(t))\).
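A plausible explicit form of \(H_{i}\), with the pairing of the adjoint processes with b, σ, σ̄, f and the sign convention inferred from the linear-quadratic expression \(\mathcal{B}_{i}(t)\) in Section 4 (this is an assumption, not the published display), is
\[
\begin{aligned}
H_{i}(t,x,y,z,\bar{z},x_{\delta},y_{\delta^{+}},v_{1},v_{2};p_{i},q_{i},k_{i},\bar{k}_{i})
={}&\langle q_{i},b(t,x,x_{\delta},v_{1},v_{2})\rangle+\langle k_{i},\sigma(t,x,x_{\delta},v_{1},v_{2})\rangle+\langle\bar{k}_{i},\bar{\sigma}(t,x,x_{\delta},v_{1},v_{2})\rangle\\
&-\langle p_{i},f(t,x,y,z,\bar{z},y_{\delta^{+}},v_{1},v_{2})\rangle+l_{i}(t,x,y,z,\bar{z},v_{1},v_{2}).
\end{aligned}
\]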
Theorem 3.1
Let H1 and H2 hold. Suppose that \((u_{1}(\cdot),u_{2}(\cdot))\) is an equilibrium point of our problem and \((x(\cdot),y(\cdot),z(\cdot), \bar{z}(\cdot))\) is the corresponding state trajectory. Then we have
for any \(v_{i}\in U_{i}\) a.e., where \((p_{i}(\cdot),q_{i}(\cdot),k _{i}(\cdot),\bar{k}_{i}(\cdot))\) (\(i=1,2\)) is the solution of the adjoint equation (5).
Proof
Applying Itô’s formula to \(\langle q_{1}(\cdot),x_{1}^{1}(\cdot )\rangle\), we get
Noticing the initial and terminal conditions, we have
Similarly, we also have
Applying Itô’s formula to \(\langle p_{1}(\cdot),y_{1}^{1}(\cdot )\rangle\), we obtain
Noticing the initial and terminal conditions, we have
Substituting (9) into (4) leads to
for any \(v_{1}(\cdot)\) such that \(u_{1}(\cdot)+v_{1}(\cdot)\in \mathcal{U}_{\mathrm {ad}}^{1}\).
We set
for \(0\leq t\leq T\), and \(\bar{v}_{1}(\cdot)\in\mathcal{U} _{\mathrm {ad}}^{1}\). Then we have
Letting \(\epsilon\rightarrow0\), we get
for any admissible control \(\bar{v}_{1}(\cdot)\in\mathcal{U}_{\mathrm {ad}} ^{1}\).
Furthermore, we set \(\bar{v}_{1}(t)=v_{1}1_{A}+u_{1}(t)1_{\Omega\setminus A}\) for any \(v_{1}\in U_{1}\) and \(A\in\mathcal{G}_{t}^{1}\). Then it is obvious that \(\bar{v}_{1}(\cdot)\) defined above is an admissible control.
So \(\mathbb{E}\langle H_{1v_{1}}(t),\bar{v}_{1}(t)-u_{1}(t)\rangle= \mathbb{E}[1_{A}\langle H_{1v_{1}}(t),v_{1}-u_{1}(t)\rangle]\leq0 \) for any \(A\in\mathcal{G}_{t}^{1}\). This implies
for any \(v_{1}\in U_{1}\). Repeating the same process for the case \(i=2\), we can show that the corresponding inequality also holds for any \(v_{2}\in U_{2}\). □
Remark 3.1
If \((u_{1}(\cdot),u_{2}(\cdot))\) is an equilibrium point of the non-zero sum differential game and \((u_{1}(t),u_{2}(t))\) is an interior point of \(U_{1}\times U_{2}\) for all \(t\in[0,T]\), then the inequality in Theorem 3.1 is equivalent to the following equation:
Proof
It is obvious that the “⇐” part holds. For the “⇒” part, we assume that the Nash equilibrium point \((u_{1}(\cdot),u_{2}(\cdot))\) takes values in the interior of \(U_{1}\times U_{2}\) a.e. on \([0,T]\). Then, for \((\omega,t)\in\Omega\times[0,T]\) and \(i=1,2\), there exists a closed ball \(\bar{B}_{u_{i}(t)}(k)\subset U_{i}\), where \(u_{i}(t)\) is the center and \(k>0\) denotes the radius. For any \(\eta\in\mathbb{R}^{k_{i}}\) with \(\vert \eta \vert =1\), both \(v_{i}=u_{i}(t)+k\eta\) and \({v}'_{i}=u_{i}(t)-k\eta\) belong to \(\bar{B}_{u_{i}(t)}(k)\). Then, applying (6) to both \(v_{i}\) and \(v'_{i}\), we have \(\langle\mathbb{E}[H_{iv_{i}}(t)|\mathcal{G}_{t}^{i}],k\eta\rangle=0\). From the arbitrariness of η, we get \(\mathbb{E}[H_{iv_{i}}(t)|\mathcal{G}_{t}^{i}]=0\) a.e., which finishes the proof. □
On the other hand, we aim to establish a sufficient maximum principle, called the verification theorem, for the equilibrium point under some concavity assumptions on \(H_{i}\). Here, assumption H2 can be relaxed to the following:
-
H3.
Functions \(l_{i}\), \(\Phi_{i}\), and \(\gamma_{i}\) are differentiable with respect to \((x,y,z,\bar{z},v_{1},v_{2})\), x, and y, respectively, satisfying the condition that for each \((v_{1}( \cdot),v_{2}(\cdot))\in\mathcal{U}_{\mathrm {ad}}\), \(l_{i}(\cdot,x^{v}(t),y ^{v}(t),z^{v}(t),\bar{z}^{v}(t),v_{1}(t),v_{2}(t))\in\mathbb{L}^{1} _{\mathcal{F}}(0,T;\mathbb{R})\), and \(l_{i\phi}(\cdot,x^{v}(\cdot),y ^{v}(\cdot),z^{v}(\cdot), \bar{z}^{v}(\cdot), v_{1}(\cdot), v_{2}(\cdot))\in\mathbb{L}^{2}_{\mathcal{F}}(0,T;\mathbb{R})\) for \(\phi=x,y,z,\bar{z},v_{i}\) (\(i=1,2\)).
Theorem 3.2
Let H1 and H3 hold. Let \((u_{1}(\cdot),u_{2}(\cdot))\in\mathcal{U} _{\mathrm {ad}}^{1}\times\mathcal{U}_{\mathrm {ad}}^{2}\) be given and \((x(\cdot),y( \cdot), z(\cdot),\bar{z}(\cdot))\) be the corresponding trajectory. Setting
Suppose
are concave functions respectively, and \(G(x)=M_{T}x\), \(M_{T}\in \mathbb{R}^{m\times n}\), \(\forall x\in\mathbb{R}^{n}\). If condition (6) holds, then \((u_{1}(\cdot),u_{2}(\cdot))\) is an equilibrium point.
Proof
For any \(v_{1}(\cdot)\in\mathcal{U}_{\mathrm {ad}}^{1}\), let \((x^{v_{1}}( \cdot),y^{v_{1}}(\cdot),z^{v_{1}}(\cdot),\bar{z}^{v_{1}}(\cdot))\) be the trajectory corresponding to the control \((v_{1}(\cdot),u_{2}( \cdot))\in\mathcal{U}_{\mathrm {ad}}\). We consider
with
where \(\Theta(t)=(x(t),y(t),z(t),\bar{z}(t))\) and \(\Theta^{v_{1}}(t)=(x ^{v_{1}}(t),y^{v_{1}}(t),z^{v_{1}}(t),\bar{z}^{v_{1}}(t))\).
Since \(\gamma_{1}\) is concave in y,
Applying Itô’s formula to \(\langle p_{1}(\cdot),y^{v_{1}}(\cdot )-y(\cdot)\rangle\) and taking expectation, we get
where \(f(t)=f(t,\Theta(t),\mathbb{E}^{\mathcal{F}_{t}}[y_{\delta^{+}}(t)],u _{1}(t),u_{2}(t))\), and \(f^{v_{1}}(t)=f(t,\Theta^{v_{1}}(t), \mathbb{E}^{\mathcal{F}_{t}}[y_{\delta^{+}}^{v_{1}}(t)], v_{1}(t), u _{2}(t))\).
Since \(\Phi_{1}\) is concave in x,
Applying Itô’s formula to \(\langle q_{1}(\cdot),x^{v_{1}}(\cdot )-x(\cdot)\rangle\) and taking expectation, we get
where \(b(t)=b(t,x(t),x_{\delta}(t),u_{1}(t),u_{2}(t))\) and \(b^{v_{1}}(t)=b(t,x^{v_{1}}(t),x^{v_{1}}_{\delta}(t),v_{1}(t),u_{2}(t))\), etc.
Moreover, we have
Note that
due to the fact that \(x^{v_{1}}(t)=x(t)=\xi(t)\) for any \(t\in[- \delta,0)\) and \(H_{1x_{\delta}}(t)=0\) for any \(t\in(T,{T+\delta}]\).
Similarly, we have
due to the fact that \(y^{v_{1}}(t)=y(t)=\varphi(t)\) for any \(t\in(T,T+\delta]\) and \(H_{1y_{\delta^{+}}}(t)=0\) for any \(t\in[-\delta,0)\).
By the concavity of \(H_{1}\), we derive that
From the necessary condition (6), it follows that
Repeating the same process to deal with the case \(i=2\), we can draw the desired conclusion. □
In conclusion, with the help of Theorems 3.1 and 3.2, we can formally solve for the Nash equilibrium point \((u_{1}(\cdot),u_{2}(\cdot))\): we first use the necessary condition to obtain a candidate equilibrium point and then use the verification theorem to check whether the candidate is indeed an equilibrium point. Let us now discuss a linear-quadratic case.
4 A linear-quadratic case
In this section, we study a linear-quadratic case, which can be seen as a special case of the general system discussed in Section 3, and aim to give the unique Nash equilibrium point explicitly. For notational simplicity, we suppose the dimensions of the Brownian motions are \(d=\bar{d}=1\); the notations are the same as in the previous sections unless otherwise specified.
Consider a linear game system with delayed and anticipated states:
where all the coefficients are bounded, deterministic matrices defined on \([0,T]\), \(\xi(\cdot)\in\mathbb{C}([-\delta,0];\mathbb{R}^{n})\), and \(\varphi(\cdot)\in\mathbb{L}^{2}_{\mathcal{F}}(T,T+\delta;\mathbb{R}^{m})\). For any given \((v_{1}(\cdot),v_{2}(\cdot))\in\mathcal{U}_{\mathrm{ad}}\), it is easy to verify that (13) admits a unique solution \((x^{v}(\cdot),y^{v}(\cdot),z^{v}(\cdot),\bar{z}^{v}(\cdot))\). Here, we consider only the case where \(x^{v}(\cdot)\) is driven by one Brownian motion \(W(\cdot)\), just for notational simplicity; all the techniques and proofs are similar in the general case.
In addition, two players aim to maximize their index functionals for \(i=1,2\):
where \(O_{i}(\cdot)\), \(P_{i}(\cdot)\), \(Q_{i}(\cdot)\), \(\bar{Q}_{i}( \cdot)\) are bounded deterministic non-positive symmetric matrices, \(R_{i}(\cdot)\) is a bounded deterministic negative symmetric matrix, \(R_{i}^{-1}(\cdot)\) is bounded, \(M_{i}\), \(N_{i}\) are deterministic non-positive symmetric matrices for \(i=1,2\).
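With these weights, the index functionals are presumably of the quadratic form below (the pairing of \(M_{i}\) with the terminal forward state and of \(N_{i}\) with the initial backward state, as well as the factor \(\frac{1}{2}\), are assumptions):
\[
J_{i}(v_{1}(\cdot),v_{2}(\cdot))=\frac{1}{2}\mathbb{E}\biggl[\int_{0}^{T}\bigl(\langle O_{i}(t)x^{v}(t),x^{v}(t)\rangle+\langle P_{i}(t)y^{v}(t),y^{v}(t)\rangle+\langle Q_{i}(t)z^{v}(t),z^{v}(t)\rangle+\langle\bar{Q}_{i}(t)\bar{z}^{v}(t),\bar{z}^{v}(t)\rangle+\langle R_{i}(t)v_{i}(t),v_{i}(t)\rangle\bigr)\,dt+\langle M_{i}x^{v}(T),x^{v}(T)\rangle+\langle N_{i}y^{v}(0),y^{v}(0)\rangle\biggr],\quad i=1,2.
\]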
According to Theorem 3.1, the Hamiltonian function is given by
If \((u_{1}(\cdot),u_{2}(\cdot))\) is the Nash equilibrium point, then
where \(\hat{q}_{i}(t)=\mathbb{E}[q_{i}(t)|{\mathcal{G}_{t}}]\) for \(i=1,2\), etc., and \((p_{i}(\cdot),q_{i}(\cdot),k_{i}(\cdot))\) is the solution of the following adjoint equation:
We note that the setting \(\mathcal{G}_{t}\subseteq\mathcal{F}_{t}\) is very general. In order to get an explicit expression of the equilibrium point, we suppose \(\mathcal{G}_{t}=\sigma\{W(s);0\leq s\leq t\}\) in the rest of this section.
We denote the filtering of the state process \(x(t)\) by \(\hat{x}(t)=\mathbb{E}[x(t)|{\mathcal{G}_{t}}]\), etc., and note that \(\mathbb{E}[y({t+\delta})|\mathcal{G}_{t}]=\mathbb{E}\{\mathbb{E}[y({t+\delta})|\mathcal{G}_{t+\delta}]|\mathcal{G}_{t}\}=\mathbb{E}[\hat{y}({t+\delta})|\mathcal{G}_{t}]\). By Theorem 8.1 in Liptser and Shiryayev [29] and Theorem 5.7 (the Kushner-FKK equation) in Xiong [30], we can get the state filtering equation for (13):
where \(\mathcal{B}_{i}(t)=B_{i}^{\tau}(t)\hat{q}_{i}(t)+D_{i}^{ \tau}(t)\hat{k}_{i}(t)-H_{i}^{\tau}(t)\hat{p}_{i}(t)\), and the adjoint filtering equation for (15) satisfying
From Theorems 3.1 and 3.2, it is easy to see that \((u_{1}(\cdot),u_{2}(\cdot))\) is an equilibrium point for the above linear-quadratic game problem if and only if \((u_{1}(\cdot),u_{2}(\cdot))\) satisfies expression (14) with \((\hat{x},\hat{y},\hat{z},\hat{p}_{i},\hat{q}_{i},\hat{k}_{i})\) (\(i=1,2\)) being the solution of the coupled triple-dimensional filtering AFBSDDE (16)-(17) (TFBSDDE for short). Then the existence and uniqueness of the equilibrium point is equivalent to the existence and uniqueness of the solution of the TFBSDDE.
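For concreteness, combining the first-order condition of Remark 3.1 with the notation \(\mathcal{B}_{i}(t)\) introduced above, expression (14) is presumably of the form (up to the sign convention chosen for the Hamiltonian)
\[
u_{i}(t)=-R_{i}^{-1}(t)\bigl(B_{i}^{\tau}(t)\hat{q}_{i}(t)+D_{i}^{\tau}(t)\hat{k}_{i}(t)-H_{i}^{\tau}(t)\hat{p}_{i}(t)\bigr)=-R_{i}^{-1}(t)\mathcal{B}_{i}(t),\qquad i=1,2.
\]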
However, TFBSDDE (16)-(17) is rather complicated. Fortunately, in some particular cases, we can make some transformations to link it with a double-dimensional filtering AFBSDDE, called a DFBSDDE. Now we present our result in the following.
-
H4.
The dimension of x is equal to that of y: \(n=m\), \(\bar{G}(t)\equiv0\) and coefficients \(B_{i}(t)=B_{i}\), \(D_{i}(t)=D_{i}\), \(H _{i}(t)=H_{i}\) are independent of time t for any \(i=1,2\).
Theorem 4.1
Under H4, we assume that one of the following conditions holds true:
-
(a)
\(D_{1}=D_{2}=H_{1}=H_{2} \equiv0\) and \(B_{i}R^{-1}_{i}B ^{\tau}_{i}S=SB_{i}R^{-1}_{i}B^{\tau}_{i}\) (\(i=1,2\));
-
(b)
\(B _{1}=B_{2}=H_{1}=H_{2} \equiv0\) and \(D_{i}R^{-1}_{i}D^{\tau} _{i}S=SD_{i}R^{-1}_{i}D^{\tau}_{i}\) (\(i=1,2\));
-
(c)
\(B_{1}=B _{2}=D_{1}=D_{2} \equiv0\) and \(H_{i}R^{-1}_{i}H^{\tau}_{i}S=SH _{i}R^{-1}_{i}H^{\tau}_{i}\) (\(i=1,2\)),
where \(S^{\tau}\) stands for each of \(A(\cdot)\), \(\bar{A}(\cdot)\), \(C(\cdot)\), \(\bar{C}(\cdot)\), \(E(\cdot)\), \(F(\cdot)\), \(\bar{F}(\cdot)\), \(G(\cdot)\), \(M_{T}\), \(O_{i}(\cdot)\), \(P_{i}(\cdot)\), \(Q_{i}(\cdot)\), \(M_{i}\), \(N_{i}\). Then \((u_{1}(\cdot),u_{2}(\cdot))\) given by (14) is the unique Nash equilibrium point.
Proof
We only prove (a). The same method can be used to get (b) and (c). From the above discussion, we need to prove only that there exists a unique solution of the coupled TFBSDDE (16)-(17). In the case that \(D_{1}=D_{2}=H _{1}=H_{2}\equiv0\), it becomes
Now we consider another DFBSDDE:
From the commutation relation between matrices, we notice that, if \((\hat{x},\hat{y},\hat{z},\hat{p}_{i}, \hat{q}_{i}, \hat{k}_{i})\) (\(i=1,2\)) is a solution of (18), then \((\tilde{x},\tilde{y},\tilde{z},\tilde{p},\tilde{q},\tilde{k})\) solves (19), where
On the other hand, if \((\tilde{x},\tilde{y},\tilde{z},\tilde{p}, \tilde{q},\tilde{k})\) is a solution of (19), we can let \(\hat{x}(t)=\tilde{x}(t)\), \(\hat{y}(t)=\tilde{y}(t)\), \(\hat{z}(t)=\tilde{z}(t)\). From the existence and uniqueness result of SDDE and ABSDE (see [12, 13]), we can get \((\hat{p}_{i}(t),\hat{q}_{i}(t), \hat{k}_{i}(t))\) from the following filtering AFBSDDE:
We let
By Itô’s formula and the uniqueness result of the solution of the SDDE and ABSDE for fixed \((\hat{x}(\cdot),\hat{y}(\cdot),\hat{z}( \cdot))\), we have
Then \((\hat{x},\hat{y},\hat{z},\hat{p}_{i},\hat{q}_{i},\hat{k}_{i})\) (\(i=1,2\)) is a solution of (18). Moreover, the existence and uniqueness of (19) is equivalent to the existence and uniqueness of (18). According to the monotonicity condition in [19, 28], it is easy to check that DFBSDDE (19) satisfies the condition and it has a unique solution. So TFBSDDE (18) admits a unique solution. We complete the proof. □
5 An example in finance
This section is devoted to studying a pension fund management problem under partial information with a time-delayed surplus arising in the financial market, which naturally motivates the above theoretical research. The financial market is the Black-Scholes market, while the pension fund management framework comes from Federico [31]. To be closer to reality, we study this problem in the case when the performance criterion \(J_{i}(v_{1}(\cdot),v_{2}(\cdot))\) involves a measure of risk. If we interpret risk in the sense of a convex risk measure, it can be represented by a nonlinear expectation called the g-expectation, which can also be used to represent a nonlinear human preference in behavioral economics (see [3, 32–35] and the recent articles [25, 36, 37]). Now we introduce it in detail.
In the following, we consider only the one-dimensional case, just for simplicity of notation. First, we give the definition of a convex risk measure and its connection with the g-expectation.
Definition 5.1
([32])
Let \(\mathbb{F}\) be the family of all lower bounded \(\mathcal{F}_{T}\)-measurable random variables. A convex risk measure on \(\mathbb{F}\) is a functional \(\rho: \mathbb{F}\rightarrow\mathbb{R}\) such that
-
(a)
(convexity) \(\rho(\lambda X_{1}+(1-\lambda)X_{2})\leq\lambda \rho(X_{1})+(1-\lambda)\rho(X_{2})\), \(X_{1},X_{2}\in\mathbb{F}\), \(\lambda\in(0,1)\),
-
(b)
(monotonicity) if \(X_{1}\leq X_{2}\) a.e., then \(\rho(X_{1})\geq \rho(X_{2})\), \(X_{1},X_{2}\in\mathbb{F}\),
-
(c)
(translation invariance) \(\rho(X+m)=\rho(X)-m\), \(X\in \mathbb{F}\), \(m\in\mathbb{R}\).
The convex risk measure is a useful tool widely applied in the measurement of financial positions. For the financial interpretation (see, e.g., [34]), property (a) in Definition 5.1 means that the risk of a diversified position is not more than the weighted average of the individual risks; (b) means that if portfolio \(X_{2}\) is better than \(X_{1}\) under almost all scenarios, then the risk of \(X_{2}\) should be less than the risk of \(X_{1}\); (c) implies that the addition of a sure amount of capital reduces the risk by the same amount. It is also a generalization of the concept of a coherent risk measure in [38]. Here, if \(\rho(X)\leq0\), then the position X is called acceptable, and \(-\rho(X)\) represents the maximal amount that investors can withdraw without changing the acceptability of X. If \(\rho(X)\geq0\), then X is called unacceptable, and \(\rho(X)\) represents the minimal extra wealth that investors have to add to the position X to make it acceptable.
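As a concrete textbook illustration of Definition 5.1 (not taken from this paper), the following sketch evaluates the entropic risk measure \(\rho(X)=\theta^{-1}\log\mathbb{E}[e^{-\theta X}]\) by Monte Carlo and checks properties (a)-(c) numerically; the distributions and parameter values are hypothetical.

```python
import numpy as np

# Entropic risk measure rho(X) = (1/theta) * log E[exp(-theta*X)],
# a standard example of a convex risk measure.
def entropic_risk(samples, theta=1.0):
    a = -theta * samples
    m = a.max()                                   # stabilized log-mean-exp
    return (m + np.log(np.mean(np.exp(a - m)))) / theta

rng = np.random.default_rng(0)
X = rng.normal(0.05, 0.2, size=200_000)           # hypothetical terminal position samples
Y = rng.normal(-0.02, 0.3, size=200_000)

rho_X, rho_Y = entropic_risk(X), entropic_risk(Y)
# (c) translation invariance: rho(X + m) = rho(X) - m
print(np.isclose(entropic_risk(X + 0.1), rho_X - 0.1))
# (b) monotonicity: X + 0.1 >= X pathwise, so its risk is not larger
print(entropic_risk(X + 0.1) <= rho_X)
# (a) convexity: the risk of a 50/50 mix is at most the average of the risks
print(entropic_risk(0.5 * X + 0.5 * Y) <= 0.5 * rho_X + 0.5 * rho_Y)
```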
Consider the following BSDE:
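In the simplest one-dimensional setting driven by \(W(\cdot)\) (which is how (22) is read here; in the application below the backward equation also carries a \(\bar{z}\,d\bar{W}\) term), such a BSDE has the form
\[
y(t)=\xi+\int_{t}^{T}g\bigl(s,y(s),z(s)\bigr)\,ds-\int_{t}^{T}z(s)\,dW(s),\qquad t\in[0,T],
\]
with terminal value ξ and generator g; in Peng’s construction the g-expectation of Definition 5.2 is then obtained from the initial value \(y(0)\).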
Under certain assumptions, (22) has a unique solution \((y(\cdot),z(\cdot))\). If we also set \(g(t,y, z)|_{z=0}\equiv0\), we can give the following definition.
Definition 5.2
For each \(\xi\in\mathcal{F}_{T}\), we call
the generalized expectation (g-expectation) of ξ related to g.
The well-known Allais [39] and Ellsberg [40] paradoxes indicate that the classical von Neumann-Morgenstern linear expected utility theory (here we mean that the linear expectation \(\mathbb{E}\) is used) cannot exactly express people’s subjective preferences or criteria involving risk. One naturally tries to replace \(\mathbb{E}\) by some kind of nonlinear expectation. From Definition 5.2, the g-expectation \(\mathcal{E}_{g}(\cdot)\) based on the BSDE possesses all the properties that \(\mathbb{E}\) has, except linearity (see [3]). It can be seen as a (subjective) nonlinear preference and is closely related to the stochastic differential utility (see, e.g., [4]). It is obvious that when \(g(\cdot)=0\), \(\mathcal{E}_{g}\) reduces to the classical expectation \(\mathbb{E}\).
Here, we present the g-expectation as a nonlinear measurement of risk and give the connection between the convex risk measure and the g-expectation as follows (see [33, 41] for more details).
Definition 5.3
The risk \(\rho(\xi)\) of the random variable \(\xi\in\mathcal{L} ^{2}_{\mathcal{F}}(\Omega;\mathbb{R})\) (ξ can be regarded as a financial position in the financial market) is defined by
where \(\mathcal{E}_{g}[\cdot]\) is defined in Definition 5.2 with ξ replaced by −ξ. Here, g is independent of y and is convex with respect to z.
Assume that there are two assets in the financial market in which the pension fund managers can invest:
where \(S_{1}(\cdot)\) is the price process of a risky asset and \(S_{0}(\cdot)\) is the price process of a risk-free asset, \(\mu(\cdot)\) is the appreciation rate of the risky asset, \(r(\cdot)\) is the risk-free interest rate, and \(\sigma(\cdot)\) is the volatility coefficient. We assume that \(\mu(\cdot)\), \(r(\cdot)\), and \(\sigma(\cdot)\) are deterministic bounded coefficients and that \(\sigma^{-1}(\cdot)\) is bounded.
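In the standard Black-Scholes specification assumed here (consistent with the coefficients just described and with \(\mathcal{G}_{t}=\mathcal{F}_{t}^{W}\) below), the asset prices evolve as
\[
dS_{0}(t)=r(t)S_{0}(t)\,dt,\qquad dS_{1}(t)=S_{1}(t)\bigl[\mu(t)\,dt+\sigma(t)\,dW(t)\bigr],\qquad S_{0}(0)>0,\ S_{1}(0)>0.
\]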
Suppose that there are two pension fund managers (players) investing together in the risk-free and risky assets. In the real financial market, it is reasonable for the investors to make decisions based on the historical prices of the risky asset \(S_{1}(\cdot)\). So the observable filtration can be set as \(\mathcal{G}_{t}=\sigma\{S_{1}(s)|0\leq s\leq t\}\), and it is clear that \(\mathcal{G}_{t}=\mathcal{F}_{t}^{W}=\sigma\{W(s)|0\leq s\leq t\}\). The pension fund wealth \(x(\cdot)\) can be modeled by
Here, \(\pi(t)\) denotes the amount of the portfolio invested in the risky asset at time t, and \(\alpha(x(t)-x(t-\delta))\) represents the surplus premium paid to fund members, or their capital transfusions, depending on the performance of the fund growth during the past period, with parameter \(\alpha>0\) (see, e.g., [16, 17]). Meanwhile, there is an instantaneous consumption rate \(c_{i}(t)\) for manager i (\(i=1,2\)). We assume that the value of \(x(\cdot)\) is affected not only by the risky asset but also by some practical phenomena such as the physical inaccessibility of some economic parameters, inaccuracies in measurement, insider trading, information asymmetry, etc. (see, e.g., [42, 43]). Here, \(\bar{\sigma}(\cdot)\) represents the instantaneous volatility caused by these unobservable factors, and \(\mathcal{F}_{t}^{\bar{W}}\) represents the unobservable filtration generated by \(\bar{W}(\cdot)\). We let \(x(t)\) be adapted to the filtration \(\mathcal{F}_{t}\) generated by the Brownian motion \((W(\cdot),\bar{W}(\cdot))\) and the control processes \(c_{i}(t)\) (\(i=1,2\)) be adapted to the observation filtration \(\mathcal{G}_{t}\subseteq\mathcal{F}_{t}\).
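To illustrate how such delayed dynamics behave, the following sketch simulates, by an Euler-Maruyama scheme with a history buffer, a wealth equation of the type just described. The drift and diffusion specification, the portfolio and consumption rules, and all parameter values are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np

# Euler-Maruyama sketch of a delayed wealth equation of the type described above
# (an assumed specification, not the paper's displayed dynamics):
#   dx(t) = [r x(t) + pi(t)(mu - r) - alpha (x(t) - x(t - delta)) - c1(t) - c2(t)] dt
#           + pi(t) sigma dW(t) + sigma_bar dWbar(t),   x(t) = x0 on [-delta, 0].
T, delta, dt = 1.0, 0.1, 0.001
r, mu, sigma, sigma_bar, alpha = 0.03, 0.08, 0.2, 0.05, 0.5
x0 = 1.0
n_steps, lag = int(T / dt), int(delta / dt)

rng = np.random.default_rng(1)
# path[k] approximates x(k*dt - delta); the first `lag` entries are the initial path.
path = np.full(lag + n_steps + 1, x0)

for k in range(n_steps):
    xk, xk_delay = path[k + lag], path[k]      # x(t) and x(t - delta) at t = k*dt
    pi_k = 0.3 * xk                            # hypothetical portfolio rule
    c1_k, c2_k = 0.02 * xk, 0.03 * xk          # hypothetical consumption rates
    drift = r * xk + pi_k * (mu - r) - alpha * (xk - xk_delay) - c1_k - c2_k
    dW, dWbar = rng.normal(0.0, np.sqrt(dt), size=2)
    path[k + lag + 1] = xk + drift * dt + pi_k * sigma * dW + sigma_bar * dWbar

print("simulated terminal wealth x(T):", path[-1])
```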
The control process \(c_{i}(\cdot)\) (\(i=1,2\)) is called admissible for manager i if \(c_{i}(t)>0\), \(c_{i}(\cdot)\) is adapted to the filtration \(\mathcal{G}_{t}\), and \(c_{i}(\cdot)\in\mathbb{L}^{2}(0,T;\mathbb{R})\). The family of admissible controls \((c_{1}(\cdot),c_{2}(\cdot))\) is denoted by \(\mathcal{C}_{1}\times\mathcal{C}_{2}\).
We assume that the insurance company hopes for more terminal capital with less risk and more consumption \(c_{i}(\cdot)\). According to Definitions 5.1 and 5.3, we can define the cost functional as
where \(K_{i}\), \(L_{i}\) are positive constants representing the different extents of preference of the two managers, β is a discount factor, and \(1-\gamma\in(0,1)\) is a constant called the Arrow-Pratt index of risk aversion. Here, we take g to be linear: \(g(\cdot, y(\cdot), z(\cdot))=g(\cdot)z(\cdot)\), where \(g(\cdot)\) is a deterministic bounded coefficient.
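Since g is linear in z, the corresponding g-expectation reduces to a classical expectation under an exponential reweighting: for the generator \(g(t)z\), a standard computation (under this linearity assumption) gives
\[
\mathcal{E}_{g}[\xi]=\mathbb{E}\bigl[\Gamma(T)\xi\bigr],\qquad d\Gamma(t)=g(t)\Gamma(t)\,dW(t),\quad \Gamma(0)=1,
\]
so the risk term in the cost functional is a Girsanov-type weighted expectation; this is consistent with the exponential-martingale structure of \(\hat{p}_{i}(\cdot)\) obtained below.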
Then our problem is naturally to find an equilibrium point \((c_{1} ^{*}(\cdot),c_{2}^{*}(\cdot))\in\mathcal{C}_{1}\times\mathcal{C} _{2}\) such that
Then our problem can be reformulated as
and
Now we will apply the theoretical results obtained in Section 3 to solve the above game problem. The Hamiltonian function is in the form of
where the adjoint process \((p_{i}(\cdot),q_{i}(\cdot),k_{i}(\cdot), \bar{k}_{i}(\cdot))\) satisfies
Then we use the necessary maximum principle (Theorem 3.1) to find a candidate equilibrium point:
where \(\hat{q}_{i}(t)=\mathbb{E}[q_{i}(t)|\mathcal{G}_{t}]\) (\(i=1,2\)).
Now we have to deal with \(\hat{q}_{i}(t)\), the optimal filtering of \(q_{i}(t)\) on the observation \(\mathcal{G}_{t}\). We also set \(\hat{p}_{i}(t)=\mathbb{E}[p_{i}(t)|\mathcal{G}_{t}]\). Note that
Then, by Theorem 8.1 in [30], we have
From (28), we can derive the explicit expression of \(\hat{p}_{i}(t)\) as
which is a \(\mathcal{G}_{t}\)-exponential martingale.
By Theorem 5.1 in [13], we can prove \(\hat{q}_{i}(t) \geq0\), \(t\in[0,T]\). Thus \(c^{*}_{i}(t)>0\) for all \(t\in[0,T]\). Next, we will solve the anticipated BSDE of \(\hat{q}_{i}(t)\) recursively. This method can also be found in [44, 45].
(1) When \(t\in[T-\delta,T]\), the ABSDE in (28) becomes a standard BSDE (without anticipation):
Obviously, we have
From Proposition 5.3 in [4], \((\hat{q}_{i}(t), \hat{k}_{i}(t))\) is Malliavin differentiable and \(\{D_{t}\hat{q}_{i}(t); T-\delta\leq t\leq T\}\) provides a version of \(\{\hat{k}_{i}(t);T-\delta\leq t\leq T\}\), i.e.,
(2) If we have solved ABSDE (28) on the interval \([T-n\delta,T-(n-1)\delta]\) (\(n=1,2,\ldots\)), and the solution \(\{( \hat{q}_{i}(t),\hat{k}_{i}(t));T-n\delta\leq t\leq T-(n-1)\delta\}\) is Malliavin differentiable, then we continue to consider the solvability on the next interval \([T-(n+1)\delta,T-n\delta]\), where we can rewrite ABSDE (28) as follows:
We note that \(\{(\hat{q}_{i}(s+\delta),\hat{k}_{i}(s+\delta));t\leq s\leq T-n\delta\}\) has already been solved and is Malliavin differentiable. So the same argument shows that \(\{(\hat{q}_{i}(t),\hat{k}_{i}(t));T-(n+1)\delta\leq t\leq T-n\delta\}\) is Malliavin differentiable, and
for any \(t\in[T-(n+1)\delta,T-n\delta]\), \(i=1,2\).
We notice that all the conditions of the verification theorem (Theorem 3.2) are satisfied; hence Theorem 3.2 implies that \((c_{1}^{*}(\cdot),c_{2}^{*}(\cdot))\) given by (27) is an equilibrium point.
Proposition 5.1
The investment problem (23)-(24) admits an equilibrium point \((c_{1}^{*}(\cdot), c_{2}^{*}(\cdot))\) which is defined by (27).
6 Conclusions
To the author’s best knowledge, this article is the first attempt to study the non-zero sum differential game problem of AFBSDDEs under partial information. Throughout this paper, there are four distinguishing features worth highlighting. First, we considered a time-delayed system, which has wide applications and can explain various past-dependent situations. Second, we studied the game problem with multiple players and aimed to find the Nash equilibrium point rather than the optimal control. We established a necessary condition (maximum principle) and a sufficient condition (verification theorem) by virtue of the duality and convex variation approach. Third, we discussed an LQ system under the partial information condition. Applying the stochastic filtering formula, we derived the filtering equations and proved the existence and uniqueness of their solution and of the corresponding Nash equilibrium point. Fourth, we solved a pension fund management problem with a nonlinear expectation measuring the risk (convex risk measure) and obtained the explicit solution.
References
Pardoux, E, Peng, S: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55-61 (1990)
Peng, S: Backward stochastic differential equations and applications to optimal control. Appl. Math. Optim. 27, 125-144 (1993)
Peng, S: Backward SDE and related g-expectation. In: Backward Stochastic Differential Equations, pp. 141-159 (1997)
Karoui, NE, Peng, S, Quenez, MC: Backward stochastic differential equations in finance. Math. Finance 7, 1-71 (1997)
Ma, J, Yong, J: Forward-Backward Stochastic Differential Equations and Their Applications. Springer, New York (1999)
Wang, G, Wu, Z: The maximum principles for stochastic recursive optimal control problems under partial information. IEEE Trans. Autom. Control 54, 1230-1242 (2009)
Cvitanic, J, Ma, J: Hedging options for a large investor and forward-backward SDE’s. Ann. Appl. Probab. 6, 370-398 (1996)
Antonelli, F, Barucci, E, Mancino, ME: Asset pricing with a forward-backward stochastic differential utility. Econ. Lett. 72, 151-157 (2001)
Yong, J, Zhou, X: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York (1999)
Mohammed, SEA: Stochastic differential systems with memory: theory, examples and applications. In: Stochastic Analysis and Related Topics VI. Progress in Probability, vol. 42, pp. 1-77. Birkhäuser, Boston (1998)
Arriojas, M, Hu, Y, Mohammed, SEA, Pap, G: A delayed Black and Scholes formula. Stoch. Anal. Appl. 25, 471-492 (2006)
Chen, L, Wu, Z: Maximum principle for the stochastic optimal control problem with delay and application. Automatica 46, 1074-1080 (2010)
Peng, S, Yang, Z: Anticipated backward stochastic differential equations. Ann. Probab. 37, 877-902 (2009)
Øksendal, B, Sulem, A: Optimal Control of Predictive Mean-Field Equations and Applications to Finance. Springer, Norway (2015)
Kyle, AS: Continuous auctions and insider trading. Econometrica 53, 1315-1336 (1985)
Wu, S, Wang, G: Optimal control problem of backward stochastic differential delay equation under partial information. Syst. Control Lett. 82, 71-78 (2015)
Shi, J, Wang, G: A nonzero sum differential game of BSDE with time-delayed generator and applications. IEEE Trans. Autom. Control 61(7), 1959-1964 (2016)
Wu, S, Shu, L: Non-zero sum differential games of backward stochastic differential delay equations under partial information. Asian J. Control 19(1), 316-324 (2017)
Chen, L, Wu, Z: A type of generalized forward-backward stochastic differential equations and applications. Chin. Ann. Math., Ser. B 32, 279-292 (2011)
Huang, J, Li, X, Shi, J: Forward-backward linear quadratic stochastic optimal control problem with delay. Syst. Control Lett. 61, 623-630 (2012)
Von Neumann, J, Morgenstern, O: The Theory of Games and Economic Behavior. Princeton University Press, Princeton (1944)
Nash, J: Non-cooperative games. Ann. Math. 54, 286-295 (1951)
Yu, Z: Linear-quadratic optimal control and nonzero-sum differential game of forward-backward stochastic system. Asian J. Control 14, 173-185 (2012)
Øksendal, B, Sulem, A: Forward-backward stochastic differential games and stochastic control under model uncertainty. J. Optim. Theory Appl. 161, 22-55 (2014)
Hui, E, Xiao, H: Maximum principle for differential games of forward-backward stochastic systems with applications. J. Math. Anal. Appl. 386, 412-427 (2012)
Chen, L, Yu, Z: Maximum principle for nonzero-sum stochastic differential game with delays. IEEE Trans. Autom. Control 60, 1422-1426 (2015)
Xiong, J, Zhou, X: Mean-variance portfolio selection under partial information. SIAM J. Control Optim. 46, 156-175 (2007)
Huang, J, Shi, J: Maximum principle for optimal control of fully coupled forward-backward stochastic differential delayed equations. ESAIM, Contrôle Optim. Calc. Var. 18, 1073-1096 (2012)
Liptser, RS, Shiryayev, AN: Statistics of Random Processes. Springer, New York (1977)
Xiong, J: An Introduction to Stochastic Filtering Theory. Oxford University Press, Oxford (2008)
Federico, S: A stochastic control problem with delay arising in a pension fund model. Finance Stoch. 15, 421-459 (2011)
Föllmer, H, Schied, A: Convex measures of risk and trading constraints. Finance Stoch. 6, 429-447 (2002)
Gianin, ER: Risk measures via g-expectations. Insur. Math. Econ. 39, 19-34 (2006)
Frittelli, M, Gianin, ER: Putting order in risk measures. J. Bank. Finance 26, 1473-1486 (2002)
Peng, S: Nonlinear Expectations, Nonlinear Evaluations and Risk Measures: Stochastic Methods in Finance. Springer, Berlin (2004)
An, TTK, Øksendal, B: A maximum principle for stochastic differential games with g-expectation and partial information. Stochastics 84, 137-155 (2012)
Yong, J: Optimality variational principle for controlled forward-backward stochastic differential equations with mixed initial-terminal conditions. SIAM J. Control Optim. 48, 4119-4156 (2010)
Artzner, P, Delbaen, F, Eber, J, Heath, D: Coherent measures of risk. Math. Finance 9, 203-228 (1999)
Allais, M: Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école américaine. Econometrica 21, 503-546 (1953)
Ellsberg, D: Risk, ambiguity, and the Savage axioms. Q. J. Econ. 75, 643-669 (1961)
Jiang, L: Convexity, translation invariance and subadditivity for g-expectations and related risk measures. Ann. Appl. Probab. 18, 245-258 (2008)
Lakner, P: Utility maximization with partial information. Stoch. Process. Appl. 56, 247-273 (1995)
Huang, J, Wang, G, Wu, Z: Optimal premium policy of an insurance firm: full and partial information. Insur. Math. Econ. 47, 208-215 (2010)
Yu, Z: The stochastic maximum principle for optimal control problems of delay systems involving continuous and impulse controls. Automatica 48, 2420-2432 (2012)
Menoukeu Pamen, O: Optimal control for stochastic delay systems under model uncertainty: a stochastic differential game approach. J. Optim. Theory Appl. 167, 998-1031 (2015)
Acknowledgements
This work was supported by the Natural Science Foundation of China (No. 61573217, No. 11601285), the Natural Science Foundation of Shandong Province (No. ZR2016AQ13), the National High-level Personnel of Special Support Program of China, and the Chang Jiang Scholar Program of Chinese Education Ministry. The author would like to thank Prof. Zhen Wu for his valuable suggestions.
Author information
Contributions
All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The author declares that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhuang, Y. Non-zero sum differential games of anticipated forward-backward stochastic differential delayed equations under partial information and application. Adv Differ Equ 2017, 383 (2017). https://doi.org/10.1186/s13662-017-1438-1
DOI: https://doi.org/10.1186/s13662-017-1438-1