

Non-zero sum differential games of anticipated forward-backward stochastic differential delayed equations under partial information and application

Abstract

This paper is concerned with a non-zero sum differential game problem for an anticipated forward-backward stochastic differential delayed equation under partial information. We establish a maximum principle and a verification theorem for the Nash equilibrium point by means of the duality and convex variation approach. We then study a linear-quadratic system under partial information, present an explicit form of the Nash equilibrium point, derive the associated filtering equations, and prove the existence and uniqueness of the Nash equilibrium point. As an application, we solve a time-delayed pension fund management problem in which the risk is measured by a nonlinear expectation and obtain an explicit solution.

1 Introduction

General nonlinear backward stochastic differential equations (BSDEs) were first developed by Pardoux and Peng [1] and have been widely applied in optimal control, mathematical finance, and related fields (see Peng [2, 3], El Karoui et al. [4]). The classical Black-Scholes option pricing formula can also be deduced by virtue of BSDE theory. If a BSDE is coupled with a forward stochastic differential equation (SDE), the resulting system is called a forward-backward stochastic differential equation (FBSDE). The fundamental theory of FBSDEs has been surveyed by Ma and Yong [5]. FBSDEs arise widely in applications such as stochastic recursive utility (see, e.g., Wang and Wu [6]), financial optimization problems with large investors (see Cvitanic and Ma [7]), and asset pricing with differential utilities (see Antonelli et al. [8]). The classical stochastic Hamiltonian system in the stochastic control field is also a type of FBSDE (see, e.g., Yong and Zhou [9]).

In many classical models, phenomena exhibit past-dependence: their behavior depends not only on the present state but also on their past history. Such models are described by stochastic differential delayed equations (SDDEs), which are a natural generalization of classical SDEs and have been widely studied in engineering, life science, finance, and other fields (see, for example, the population growth models in Mohammed [10], Arriojas et al. [11]). Chen and Wu [12] first studied a stochastic control problem based on SDDEs. When introducing the adjoint equation, one needs a new type of BSDE, introduced by Peng and Yang [13] for the general nonlinear case and called anticipated BSDEs (ABSDEs). The anticipated term, defined through a conditional expectation, can be regarded as a predicted value of the future state. It can be applied to the insider trading market, where the asset price is influenced by insiders (see, e.g., Øksendal and Sulem [14], Kyle [15]). Moreover, a class of BSDEs with time-delayed generators (BSDDEs) has also been studied in the stochastic control field (see Wu and Wang [16], Shi and Wang [17], Wu and Shu [18]). Recently, Chen and Wu [19] and Huang et al. [20] studied a linear-quadratic (LQ) problem based on a coupled SDDE and ABSDE, called an anticipated forward-backward stochastic differential delayed equation (AFBSDDE).

Game theory pervades economic theory and attracts increasing research attention. It was first introduced by von Neumann and Morgenstern [21]. Nash [22] made the fundamental contribution to non-cooperative games and gave the classical notion of the Nash equilibrium point. In recent years, many articles on stochastic differential games driven by stochastic differential equations have appeared. Researchers consider strategies of multiple players rather than a single player and look for an equilibrium point rather than an optimal control. Such problems are more complicated than classical control problems but much closer to social and behavioral science. Yu [23] solved the LQ game problem for a forward-backward system. Øksendal and Sulem [24] and Hui and Xiao [25] studied the maximum principle for forward-backward systems. Chen and Yu [26] studied the maximum principle for the SDDE case, while Shi and Wang [17] and Wu and Shu [18] discussed the BSDDE case.

In reality, instead of complete information, there are many cases where the controller can only obtain partial information; mathematically, this means that the control variable is adapted to a smaller filtration. Based on this phenomenon, Xiong and Zhou [27] dealt with a mean-variance problem in a financial market in which the investor's optimal portfolio is based only on the stock process he observes. This assumption of partial information is indeed natural in the financial market. Recently, Wu and Wang [16] and Wu and Shu [18] also considered the partial information case.

To the best of our knowledge, research on general AFBSDDEs and their applications in mathematical finance is still quite limited. Recently, Huang and Shi [28] discussed an optimal control problem for an AFBSDDE system. Our work is distinguished from the above-mentioned ones in the following aspects. First, we study a stochastic differential game with multiple players rather than a control problem with a single controller; we aim to find an equilibrium point rather than an optimal control. Second, we consider the case where the diffusion coefficients may contain the control variables and the control domain is convex. Third, we deal with the system under partial information, i.e., the information available to the players is described by sub-filtrations, which generalizes the complete information case. Fourth, we derive the filtering equations of an LQ system and obtain results on the existence and uniqueness of the equilibrium point. Finally, as an example, we solve a financial problem by virtue of the theoretical results obtained.

The rest of this paper is organized as follows. In Section 2, we give some necessary notions and state some preliminary results. In Section 3, we establish a necessary condition (maximum principle) and a sufficient condition (verification theorem) for the Nash equilibrium point. In Section 4, we study a linear-quadratic game problem under partial information; we derive the filtering equations and prove the existence and uniqueness of the Nash equilibrium point. In Section 5, we solve a pension fund management problem with nonlinear expectation and obtain the explicit solution.

2 Preliminary results

Throughout this article, we denote by \(\mathbb{R}^{k}\) the k-dimensional Euclidean space; by \(\mathbb{R}^{k\times l}\) the collection of \(k\times l\) matrices. For a given Euclidean space, we denote by \(\langle\cdot,\cdot\rangle\) (resp. \(\vert \cdot \vert \)) the scalar product (resp. norm). The superscript τ denotes the transpose of vectors or matrices.

Let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq0},\mathbb{P})\) be a complete filtered probability space equipped with a \((d+\bar{d})\)-dimensional, \(\mathcal{F}_{t}\)-adapted standard Brownian motion \((W(\cdot),\bar{W}(\cdot))\), where \(\mathcal{F}=\mathcal{F}_{T}\). \(\mathbb{E}^{\mathcal{F}_{t}}[\cdot]=\mathbb{E}[\cdot|\mathcal{F}_{t}]\) denotes the conditional expectation with respect to \(\mathcal{F}_{t}\), and \(f_{x}(\cdot)\) denotes the partial derivative of a function \(f(\cdot)\) with respect to x. Let \(T>0\) be the finite time horizon and \(0<\delta<T\) be the constant time delay. Moreover, for any \(p\geq1\), we denote by \(\mathbb{C}([-\delta,0];\mathbb{R}^{k})\) the space of uniformly bounded continuous functions on \([-\delta,0]\), by \(\mathbb{L}^{p}_{\mathcal{F}}(\Omega;\mathbb{R}^{k})\) the space of \(\mathcal{F}\)-measurable random variables ξ satisfying \(\mathbb{E}\vert \xi \vert ^{p}<\infty\), and by \(\mathbb{L}^{p}_{\mathcal{F}}(r,s;\mathbb{R}^{k})\) the space of \(\mathbb{R}^{k}\)-valued \(\mathcal{F}_{t}\)-adapted processes \(\varphi(\cdot)\) satisfying \(\mathbb{E}\int_{r}^{s}\vert \varphi(t)\vert ^{p}\,dt<\infty\).

We consider the following AFBSDDE:

$$ \textstyle\begin{cases} dx^{v}(t) =b (t,x^{v}(t),x^{v}_{\delta}(t),v_{1}(t),v_{2}(t) )\,dt+ \sigma (t,x^{v}(t),x^{v}_{\delta}(t),v_{1}(t),v_{2}(t) )\,dW(t) \\ \hphantom{dx^{v}(t) =}{}+\bar{ \sigma} (t,x^{v}(t),x^{v}_{\delta}(t),v_{1}(t),v_{2}(t) )\,d\bar{W}(t), \\ -dy^{v}(t) =f (t,x^{v}(t),y^{v}(t),z^{v}(t), \bar{z}^{v}(t),y^{v} _{\delta^{+}}(t),v_{1}(t),v_{2}(t) )\,dt \\ \hphantom{-dy^{v}(t) =}{} -z^{v}(t)\,dW(t)-\bar{z}^{v}(t)\,d\bar{W}(t),\quad t\in[0,T], \\ x^{v}(t) =\xi(t),\quad t\in[- \delta,0], \\ y^{v}(T) =G (x^{v}(T) ), \quad\quad y^{v}(t)= \varphi(t),\quad t \in(T,T+\delta]. \end{cases} $$
(1)

Here, the solution processes are \(x^{v}: \Omega\times[-\delta,T]\rightarrow\mathbb{R}^{n}\), \(y^{v}: \Omega\times[0,T+\delta]\rightarrow\mathbb{R}^{m}\), \(z^{v}: \Omega\times[0,T]\rightarrow\mathbb{R}^{m\times d}\), \(\bar{z}^{v}: \Omega\times[0,T]\rightarrow\mathbb{R}^{m\times\bar{d}}\), and \(b: \Omega\times[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow\mathbb{R}^{n}\), \(\sigma: \Omega\times[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow\mathbb{R}^{n\times d}\), \(\bar{\sigma}: \Omega\times[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow\mathbb{R}^{n\times\bar{d}}\), \(f: \Omega\times[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{m\times d}\times\mathbb{R}^{m\times\bar{d}}\times\mathbb{R}^{m}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow\mathbb{R}^{m}\), \(G:\Omega\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) are given continuous maps. Moreover, \(x^{v}_{\delta}(t)=x^{v}(t-\delta)\), \(y^{v}_{\delta^{+}}(t)=\mathbb{E}^{\mathcal{F}_{t}}[y^{v}(t+\delta)]\), \(\xi(\cdot)\in\mathbb{C}([-\delta,0];\mathbb{R}^{n})\) is the initial path of \(x^{v}(\cdot)\), and \(\varphi(\cdot)\in\mathbb{L}^{2}_{\mathcal{F}}(T,T+\delta;\mathbb{R}^{m})\) is the terminal path of \(y^{v}(\cdot)\). For simplicity, we omit the notation ω in each process.

Let \(U_{i}\) be a nonempty convex subset of \(\mathbb{R}^{k_{i}}\), \(\mathcal{G}_{t}^{i}\subseteq\mathcal{F}_{t}\) be a given sub-filtration which represents the information available to the player i, and \(v_{i}(\cdot)\) be the control process of player i (\(i=1,2\)). We denote by \(\mathcal{U}_{\mathrm {ad}}^{i}\) the set of \(U_{i}\)-valued \(\mathcal{G}^{i} _{t}\)-adapted control processes \(v_{i}(\cdot)\in\mathbb{L}_{ \mathcal{G}^{i}}^{2}(0,T;\mathbb{R}^{k_{i}})\), and it is called the admissible control set for player i (\(i=1,2\)). \(\mathcal{U}_{\mathrm {ad}}= \mathcal{U}_{\mathrm {ad}}^{1}\times\mathcal{U}_{\mathrm {ad}}^{2}\) is called the set of admissible controls for the two players. We also introduce the following assumption:

  1. H1.

    Functions b, σ, σ̄ are continuously differentiable in \((x,x_{\delta},v_{1},v_{2})\), f is continuously differentiable in \((x,y,z,\bar{z},y_{\delta^{+}},v_{1},v_{2})\), G is continuously differentiable in x. All the partial derivatives of b, σ, σ̄, f, G are uniformly bounded.

Then we have the following existence and uniqueness result which can be found in [12, 13].

Theorem 2.1

If \(v_{1}(\cdot)\) and \(v_{2}(\cdot)\) are admissible controls and assumption H1 holds, AFBSDDE (1) admits a unique solution \((x(\cdot),y(\cdot),z(\cdot),\bar{z}(\cdot))\in\mathbb{L}_{ \mathcal{F}}^{2}(-\delta,T;\mathbb{R}^{n})\times\mathbb{L}_{ \mathcal{F}}^{2}(0,T+\delta;\mathbb{R}^{m}) \times\mathbb{L}_{ \mathcal{F}}^{2}(0,T;\mathbb{R}^{m\times d})\times\mathbb{L}_{ \mathcal{F}}^{2}(0,T;\mathbb{R}^{m\times\bar{d}})\).
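For intuition, note that once the admissible controls are fixed, the forward component of (1) is a standard SDDE and can be approximated by an Euler-Maruyama scheme that stores the path on \([-\delta,0]\) in order to evaluate the delayed state. The following minimal sketch (in Python, with hypothetical scalar coefficients and the controls frozen inside the coefficient functions) only illustrates this forward discretization; it is not a solver for the coupled forward-backward system.

```python
import numpy as np

def simulate_forward_sdde(b, sigma, sigma_bar, xi, T=1.0, delta=0.1, n_steps=1000, seed=0):
    """Euler-Maruyama scheme for dx = b dt + sigma dW + sigma_bar dWbar with delayed state x(t - delta).

    b, sigma, sigma_bar: callables (t, x, x_delay) -> float, with the controls already frozen inside
    xi: callable on [-delta, 0] giving the initial path of x
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    lag = int(round(delta / dt))              # number of grid points per delay length
    x = np.empty(n_steps + lag + 1)           # grid covers [-delta, T]
    for k in range(lag + 1):                  # initial path on [-delta, 0]
        x[k] = xi(-delta + k * dt)
    for n in range(n_steps):
        t = n * dt
        x_now, x_del = x[n + lag], x[n]       # x(t) and x(t - delta)
        dW, dWbar = rng.normal(0.0, np.sqrt(dt), size=2)
        x[n + lag + 1] = (x_now
                          + b(t, x_now, x_del) * dt
                          + sigma(t, x_now, x_del) * dW
                          + sigma_bar(t, x_now, x_del) * dWbar)
    return x[lag:]                            # path of x on [0, T]

# toy example: linear drift with delay feedback and constant volatilities (all hypothetical)
path = simulate_forward_sdde(
    b=lambda t, x, xd: -0.5 * x + 0.2 * xd,
    sigma=lambda t, x, xd: 0.3,
    sigma_bar=lambda t, x, xd: 0.1,
    xi=lambda t: 1.0,
)
```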

The players have their own preferences which are described by the following cost functionals:

$$ \begin{aligned} J_{i} \bigl(v_{1}( \cdot),v_{2}(\cdot) \bigr)& =\mathbb{E} \biggl[ \int_{0}^{T}l_{i} \bigl(t,x ^{v}(t),y^{v}(t),z^{v}(t),\bar{z}^{v}(t),v_{1}(t),v_{2}(t) \bigr)\,dt+\Phi _{i} \bigl(x^{v}(T) \bigr) \\ &\quad{} +\gamma_{i} \bigl(y^{v}(0) \bigr) \biggr]. \end{aligned} $$

Here, \(l_{i}: \Omega\times[0,T]\times\mathbb{R}^{n}\times \mathbb{R}^{m}\times\mathbb{R}^{m\times d}\times\mathbb{R}^{m\times \bar{d}}\times\mathbb{R}^{k_{1}}\times\mathbb{R}^{k_{2}}\rightarrow \mathbb{R}\), \(\Phi_{i}: \Omega\times\mathbb{R}^{n}\rightarrow \mathbb{R}\), \(\gamma_{i}: \Omega\times\mathbb{R}^{m}\rightarrow \mathbb{R}\) (\(i=1,2\)) are given continuous maps. \(l_{i}\), \(\Phi_{i}\), and \(\gamma_{i}\) satisfy the following condition:

  1. H2.

    Functions \(l_{i}\), \(\Phi_{i}\), and \(\gamma_{i}\) are continuously differentiable with respect to \((x,y,z,\bar{z},v_{1},v _{2})\), x, and y, respectively. Moreover, there exists a positive constant C such that the partial derivatives of \(l_{i}\), \(\Phi_{i}\), and \(\gamma_{i}\) are bounded by \(C(1+\vert x\vert +\vert y\vert +\vert z\vert +\vert \bar{z}\vert +\vert v_{1}\vert +\vert v _{2}\vert )\), \(C(1+\vert x\vert )\), and \(C(1+\vert y\vert )\), respectively.

Now we suppose that each player hopes to maximize his cost functional \(J_{i}(v_{1}(\cdot),v_{2}(\cdot))\) by selecting a suitable admissible control \(v_{i}(\cdot)\) (\(i=1,2\)). The problem is to find an admissible control \((u_{1}(\cdot),u_{2}(\cdot))\in\mathcal{U}_{\mathrm {ad}}\) such that

$$ \textstyle\begin{cases} J_{1} (u_{1}(\cdot),u_{2}(\cdot) )=\sup_{v_{1}(\cdot)\in\mathcal{U}_{\mathrm{ad}}^{1}}J_{1} (v_{1}(\cdot),u_{2}(\cdot) ), \\ J_{2} (u_{1}(\cdot),u_{2}(\cdot) )=\sup_{v_{2}(\cdot)\in\mathcal{U}_{\mathrm{ad}}^{2}}J_{2} (u_{1}(\cdot),v_{2}(\cdot) ). \end{cases} $$
(2)

If we can find an admissible control \((u_{1}(\cdot),u_{2}(\cdot))\) satisfying (2), then we call it a Nash equilibrium point. In what follows, we aim to establish necessary and sufficient conditions for a Nash equilibrium point of this game problem.

3 Maximum principle

In this section, we will establish a necessary condition (maximum principle) and a sufficient condition (verification theorem) for problem (2).

Let \((u_{1}(\cdot),u_{2}(\cdot))\) be an equilibrium point of the game problem and \((v_{1}(\cdot),v_{2}(\cdot))\in\mathbb{L}_{\mathcal{G}^{1}}^{2}(0,T;\mathbb{R}^{k_{1}})\times\mathbb{L}_{\mathcal{G}^{2}}^{2}(0,T;\mathbb{R}^{k_{2}})\) be such that \((u_{1}(\cdot)+v_{1}(\cdot),u_{2}(\cdot)+v_{2}(\cdot))\in\mathcal{U}_{\mathrm {ad}}\). Then, for any \(0\leq\epsilon\leq1\), we take the variational controls \(u_{1}^{\epsilon}(\cdot)=u_{1}(\cdot)+\epsilon v_{1}(\cdot)\) and \(u_{2}^{\epsilon}(\cdot)=u_{2}(\cdot)+\epsilon v_{2}(\cdot)\). Because both \(U_{1}\) and \(U_{2}\) are convex, \((u_{1}^{\epsilon}(\cdot),u_{2}^{\epsilon}(\cdot))\) is also in \(\mathcal{U}_{\mathrm {ad}}\). For simplicity, we denote by \((x^{u_{1}^{\epsilon}}(\cdot),y^{u_{1}^{\epsilon}}(\cdot),z^{u_{1}^{\epsilon}}(\cdot),\bar{z}^{u_{1}^{\epsilon}}(\cdot))\), \((x^{u_{2}^{\epsilon}}(\cdot),y^{u_{2}^{\epsilon}}(\cdot),z^{u_{2}^{\epsilon}}(\cdot),\bar{z}^{u_{2}^{\epsilon}}(\cdot))\), and \((x(\cdot),y(\cdot),z(\cdot),\bar{z}(\cdot))\) the corresponding state trajectories of system (1) with controls \((u_{1}^{\epsilon}(\cdot),u_{2}(\cdot))\), \((u_{1}(\cdot),u_{2}^{\epsilon}(\cdot))\), and \((u_{1}(\cdot),u_{2}(\cdot))\).

The following lemma gives an estimate of the distance between the perturbed state trajectories and \((x(\cdot),y(\cdot),z(\cdot),\bar{z}(\cdot))\).

Lemma 3.1

Let H1 hold. For \(i=1,2\),

$$\begin{aligned}& \sup_{0\leq t\leq T}\mathbb{E} \bigl\vert x^{u_{i}^{\epsilon }}(t)-x(t) \bigr\vert ^{2} \leq C\epsilon^{2}, \qquad \sup _{0\leq t\leq T}\mathbb{E} \bigl\vert y^{u_{i}^{\epsilon }}(t)-y(t) \bigr\vert ^{2} \leq C\epsilon^{2}, \\& \mathbb{E} \int_{0}^{T} \bigl\vert z^{u_{i}^{\epsilon}}(t)-z(t) \bigr\vert ^{2}\,dt\leq C \epsilon^{2}, \qquad \mathbb{E} \int_{0}^{T} \bigl\vert \bar{z}^{u_{i}^{\epsilon}}(t)- \bar {z}(t) \bigr\vert ^{2}\,dt \leq C \epsilon^{2}. \end{aligned}$$

Proof

Applying Itô’s formula to \(\vert x^{u_{i}^{\epsilon }}(t)-x(t)\vert ^{2}\) and using Gronwall’s inequality, we obtain the conclusion. □

For notational simplicity, we set \(\zeta(t)=\zeta(t,x(t),x_{\delta }(t),u_{1}(t),u_{2}(t))\) for \(\zeta=b, \sigma, \bar{\sigma}\); \(f(t)=f(t,x(t),y(t), z(t),\bar{z}(t),y_{\delta^{+}}(t),u_{1}(t),u _{2}(t))\), and \(l_{i}(t)=l_{i}(t,x(t),y(t), z(t), \bar{z}(t),u_{1}(t),u_{2}(t))\) (\(i=1,2\)).

We introduce the following variational equations:

$$ \textstyle\begin{cases} dx_{i}^{1}(t) = [b_{x}(t)x_{i}^{1}(t)+b_{x_{\delta}}(t)x_{i}^{1}(t- \delta)+b_{v_{i}}(t)v_{i}(t) ]\,dt \\ \hphantom{dx_{i}^{1}(t) = }{} + [\sigma_{x}(t)x_{i}^{1}(t)+ \sigma_{x_{\delta}}(t)x_{i}^{1}(t-\delta)+ \sigma_{v_{i}}(t)v_{i}(t) ]\,dW(t) \\ \hphantom{dx_{i}^{1}(t) = }{} + [\bar{\sigma}_{x}(t)x_{i}^{1}(t)+\bar{ \sigma}_{x_{\delta}}(t)x _{i}^{1}(t-\delta)+\bar{ \sigma}_{v_{i}}(t)v_{i}(t) ]\,d\bar{W}(t), \\ -dy _{i}^{1}(t) = \{f_{x}(t)x_{i}^{1}(t)+f_{y}(t)y_{i}^{1}(t)+f_{z}(t)z _{i}^{1}(t)+f_{\bar{z}}(t)\bar{z}_{i}^{1}(t) \\ \hphantom{-dy _{i}^{1}(t) = }{}+\mathbb{E}^{ \mathcal{F}_{t}} [f_{y_{\delta^{+}}}(t)y_{i}^{1}(t+ \delta) ]+f_{v_{i}}(t)v _{i}(t) \}\,dt \\ \hphantom{-dy _{i}^{1}(t) = }{} -z_{i}^{1}(t)\,dW(t)-\bar{z}_{i}^{1}(t)\,d\bar{W}(t), \quad t\in[0,T], \\ x_{i}^{1}(t) =0,\quad t\in[-\delta,0], \\ y _{i}^{1}(T) =G_{x} (x(T) )x_{i}^{1}(T), \quad\quad y_{i}^{1}(t)=0, \quad t\in(T,T+ \delta]\ (i=1,2). \end{cases} $$
(3)

Next, set

$$ \phi_{i}^{\epsilon}(t)=\frac{\phi^{u_{i}^{\epsilon}}(t)-\phi(t)}{ \epsilon}-\phi_{i}^{1}(t) \quad \text{for } \phi=x,y,z,\bar{z}\ (i=1,2). $$

Then we can get the following two lemmas by using Lemma 3.1. The technique is classical (see Chen and Wu [12]). Thus we omit the details and only state the main result for simplicity.

Lemma 3.2

Let H1 hold. For \(i=1,2\),

$$\begin{aligned}& \lim_{\epsilon\rightarrow0}\sup_{0\leq t\leq T} \mathbb{E} \bigl\vert x_{i}^{\epsilon}(t) \bigr\vert ^{2}=0, \qquad \lim _{\epsilon\rightarrow0}\sup_{0\leq t\leq T} \mathbb{E} \bigl\vert y_{i}^{\epsilon}(t) \bigr\vert ^{2}=0, \\& \lim_{\epsilon\rightarrow0}\mathbb{E} \int_{0}^{T} \bigl\vert z_{i}^{ \epsilon}(t) \bigr\vert ^{2}\,dt=0, \qquad \lim_{\epsilon\rightarrow0}\mathbb{E} \int_{0}^{T} \bigl\vert \bar{z} _{i}^{\epsilon}(t) \bigr\vert ^{2}\,dt=0. \end{aligned}$$

Lemma 3.3

Let H1 and H2 hold. For \(i=1,2\),

$$ \begin{aligned}[b] &\mathbb{E} \int_{0}^{T} \bigl[l_{ix}^{\tau}(t) x_{i}^{1}(t)+l_{iy}^{\tau}(t) y_{i}^{1}(t)+l_{iz}^{\tau}(t) z_{i}^{1}(t)+l_{i\bar{z}}^{\tau}(t) \bar{z}_{i}^{1}(t)+l_{iv_{i}}^{\tau}(t) v_{i}(t) \bigr]\,dt \\ &\quad{} + \mathbb{E} \bigl[\Phi_{ix}^{\tau} \bigl(x(T) \bigr)x_{i}^{1}(T) \bigr]+\gamma_{iy}^{\tau} \bigl(y(0) \bigr)y_{i}^{1}(0)\leq0. \end{aligned} $$
(4)

We introduce the adjoint equation as

$$ \textstyle\begin{cases} dp_{i}(t) = [f_{y}^{\tau}(t)p_{i}(t)+f_{y_{\delta^{+}}}^{\tau}(t- \delta)p_{i}(t-\delta)-l_{iy}(t) ]\,dt \\ \hphantom{dp_{i}(t) =}{} + [f_{z}^{\tau}(t)p_{i}(t)-l _{iz}(t) ]\,dW(t)+ [f_{\bar{z}}^{\tau}(t)p_{i}(t)-l_{i\bar{z}}(t) ]\,d\bar{W}(t), \\ -dq_{i}(t) = \{b_{x}^{\tau}(t)q_{i}(t)+ \sigma_{x}^{ \tau}(t)k_{i}(t)+\bar{ \sigma}_{x}^{\tau}(t)\bar{k}_{i}(t)-f_{x} ^{\tau}(t)p_{i}(t) \\ \hphantom{-dq_{i}(t) =}{} +\mathbb{E}^{\mathcal{F}_{t}} [b_{x_{\delta}} ^{\tau}(t+ \delta)q_{i}(t+\delta)+\sigma_{x_{\delta}}^{\tau}(t+ \delta)k_{i}(t+\delta) \\ \hphantom{-dq_{i}(t) =}{} +\bar{\sigma}_{x_{\delta}}^{\tau}(t+ \delta)\bar{k}_{i}(t+ \delta) ]+l_{ix}(t) \}\,dt-k_{i}(t)\,dW(t)-\bar{k} _{i}(t)\,d\bar{W}(t), \\ p_{i}(0) =-\gamma_{y} (y(0) ),\quad\quad p_{i}(t)=0, \quad t\in [-\delta,0), \\ q_{i}(T) =-G_{x}^{\tau} (x(T) )p_{i}(T)+ \Phi_{ix} (x(T) ), \\ q_{i}(t) =k_{i}(t)=\bar{k}_{i}(t)=0,\quad t \in(T,T+\delta ]\ (i=1,2). \end{cases} $$
(5)

This equation is also an AFBSDDE. By the existence and uniqueness result in [12, 13], we know that (5) admits a unique solution \((p_{i}(t),q_{i}(t),k_{i}(t),\bar{k}_{i}(t))\) (\(i=1,2\)).

Define the Hamiltonian function \(H_{i}\) (\(i=1,2\)) by

$$ \begin{aligned} & H_{i}(t,x,y,z,\bar{z},x_{\delta},y_{\delta^{+}},v_{1},v_{2};p _{i},q_{i},k_{i},\bar{k}_{i}) \\ &\quad = \bigl\langle q_{i},b(t,x,x_{\delta},v _{1},v_{2}) \bigr\rangle + \bigl\langle k_{i}, \sigma(t,x,x_{\delta},v_{1},v_{2}) \bigr\rangle + \bigl\langle \bar{k}_{i},\bar{\sigma}(t,x,x_{\delta},v_{1},v _{2}) \bigr\rangle \\ &\quad\quad{} - \bigl\langle p_{i},f(t,x,y,z,\bar{z},y_{\delta^{+}},v_{1},v_{2}) \bigr\rangle +l _{i}(t,x,y,z,\bar{z},v_{1},v_{2}) . \end{aligned} $$

Then (5) can be rewritten as a stochastic Hamiltonian system of the following type:

$$ \textstyle\begin{cases} dp_{i}(t) = [-H_{iy}(t)-H_{iy_{\delta^{+}}}(t- \delta) ]\,dt-H_{iz}(t)\,dW(t)-H _{i\bar{z}}(t)\,d\bar{W}(t), \\ -dq_{i}(t) = \{H_{ix}(t)+\mathbb{E}^{ \mathcal{F}_{t}} [H_{ix_{\delta}}(t+\delta) ] \}\,dt-k_{i}(t)\,dW(t)- \bar{k}_{i}(t)\,d\bar{W}(t),\quad t\in[0,T], \\ p_{i}(0) =-\gamma_{y} (y(0) ),\quad\quad p_{i}(t)=0,\quad t\in [-\delta,0), \\ q_{i}(T) =-G_{x}^{\tau} (x(T) )p _{i}(T)+\Phi_{ix} (x(T) ), \\ q_{i}(t) =k_{i}(t)=\bar{k}_{i}(t)=0, \quad t \in(T,T+\delta ], \end{cases} $$

where \(H_{i}(t)=H_{i}(t,x(t),y(t),z(t),\bar{z}(t),x_{\delta}(t),y _{\delta^{+}}(t),u_{1}(t),u_{2}(t);p_{i}(t),q_{i}(t),k_{i}(t), \bar{k}_{i}(t))\).

Theorem 3.1

Let H1 and H2 hold. Suppose that \((u_{1}(\cdot),u_{2}(\cdot))\) is an equilibrium point of our problem and \((x(\cdot),y(\cdot),z(\cdot), \bar{z}(\cdot))\) is the corresponding state trajectory. Then we have

$$ \mathbb{E} \bigl[ \bigl\langle H_{iv_{i}}(t),v_{i}-u_{i}(t) \bigr\rangle |\mathcal{G} ^{i}_{t} \bigr]\leq0\quad(i=1,2) $$
(6)

for any \(v_{i}\in U_{i}\) a.e., where \((p_{i}(\cdot),q_{i}(\cdot),k _{i}(\cdot),\bar{k}_{i}(\cdot))\) (\(i=1,2\)) is the solution of the adjoint equation (5).

Proof

Applying Itô’s formula to \(\langle q_{1}(\cdot),x_{1}^{1}(\cdot )\rangle\), we get

$$\begin{aligned} & \mathbb{E} \bigl\langle -G_{x}^{\tau} \bigl(x(T) \bigr)p_{1}(T)+ \Phi_{1x} \bigl(x(T) \bigr),x_{1} ^{1}(T) \bigr\rangle \\ &\quad =\mathbb{E} \int_{0}^{T} \bigl[ \bigl\langle f_{x}^{\tau}(t)p _{1}(t),x_{1}^{1}(t) \bigr\rangle + \bigl\langle b_{x_{\delta}}^{\tau}(t)q_{1}(t),x _{1}^{1}(t- \delta) \bigr\rangle \\ &\quad\quad{} - \bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[b_{x_{\delta}}^{\tau}(t+ \delta)q_{1}(t+\delta) \bigr],x_{1}^{1}(t) \bigr\rangle + \bigl\langle \sigma_{x_{\delta}}^{\tau}(t)k_{1}(t)+ \bar{\sigma}_{x_{\delta}} ^{\tau}(t)\bar{k}_{1}(t),x_{1}^{1}(t- \delta) \bigr\rangle \\ &\quad\quad{} - \bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[\sigma_{x_{\delta}}^{\tau}(t+ \delta)k_{1}(t+\delta)+\bar{\sigma}_{x_{\delta}}^{\tau}(t+ \delta )\bar{k}_{1}(t+\delta) \bigr],x_{1}^{1}(t) \bigr\rangle \\ &\quad\quad{} + \bigl\langle q_{1}(t),b_{v_{1}}(t)v_{1}(t) \bigr\rangle + \bigl\langle k_{1}(t), \sigma_{v_{1}}(t)v_{1}(t) \bigr\rangle \\ &\quad\quad{} + \bigl\langle \bar{k}_{1}(t),\bar{\sigma}_{v_{1}}(t)v_{1}(t) \bigr\rangle - \bigl\langle l_{1x}(t),x_{1}^{1}(t) \bigr\rangle \bigr]\,dt. \end{aligned}$$
(7)

Noticing the initial and terminal conditions, we have

$$ \begin{aligned} & \mathbb{E} \int_{0}^{T} \bigl[ \bigl\langle b_{x_{\delta}}^{\tau}(t)q_{1}(t),x _{1}^{1}(t- \delta) \bigr\rangle - \bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[b _{x_{\delta}}^{\tau}(t+ \delta)q_{1}(t+\delta) \bigr],x_{1}^{1}(t) \bigr\rangle \bigr] \,dt \\ &\quad =\mathbb{E} \int_{0}^{T} \bigl\langle b_{x_{\delta}}^{\tau}(t)q _{1}(t),x_{1}^{1}(t-\delta) \bigr\rangle \,dt- \mathbb{E} \int_{\delta}^{T+ \delta} \bigl\langle b_{x_{\delta}}^{\tau}(t)q_{1}(t),x_{1}^{1}(t- \delta ) \bigr\rangle \,dt \\ &\quad =\mathbb{E} \int_{0}^{\delta} \bigl\langle b_{x_{\delta}} ^{\tau}(t)q_{1}(t),x_{1}^{1}(t-\delta) \bigr\rangle \,dt-\mathbb{E} \int_{T} ^{T+\delta} \bigl\langle b_{x_{\delta}}^{\tau}(t)q_{1}(t),x_{1}^{1}(t- \delta) \bigr\rangle \,dt \\ &\quad =0. \end{aligned} $$

Similarly, we also have

$$ \begin{aligned} &\mathbb{E} \int_{0}^{T} \bigl[ \bigl\langle \sigma_{x_{\delta}}^{\tau}(t)k_{1}(t)+\bar{ \sigma}_{x_{\delta}}^{\tau}(t)\bar{k}_{1}(t),x_{1}^{1}(t- \delta) \bigr\rangle \\ &\quad{} - \bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[\sigma_{x_{\delta}} ^{\tau}(t+\delta)k_{1}(t+\delta)+\bar{\sigma}_{x_{\delta}}^{ \tau}(t+ \delta)\bar{k}_{1}(t+\delta) \bigr],x_{1}^{1}(t) \bigr\rangle \bigr] \,dt=0. \end{aligned} $$

Applying Itô’s formula to \(\langle p_{1}(\cdot),y_{1}^{1}(\cdot )\rangle\), we obtain

$$ \begin{aligned}[b] & \mathbb{E} \bigl\langle p_{1}(T),G_{x} \bigl(x(T) \bigr)x_{1}^{1}(T) \bigr\rangle + \bigl\langle \gamma_{y} \bigl(y(0) \bigr),y_{1}^{1}(0) \bigr\rangle \\ &\quad =\mathbb{E} \int_{0}^{T} \bigl[ \bigl\langle f_{y_{\delta^{+}}}^{\tau}(t-\delta)p_{1}(t-\delta),y_{1} ^{1}(t) \bigr\rangle - \bigl\langle p_{1}(t), \mathbb{E}^{\mathcal{F}_{t}} \bigl[f_{y _{\delta^{+}}}(t)y_{1}^{1}(t+ \delta) \bigr] \bigr\rangle \\ &\quad\quad{} - \bigl\langle p_{1}(t),f_{x}(t)x_{1}^{1}(t)+f_{v_{1}}(t)v_{1}(t) \bigr\rangle - \bigl\langle l_{1y}(t),y_{1}^{1}(t) \bigr\rangle \\ &\quad\quad{} - \bigl\langle l_{1z}(t),z_{1}^{1}(t) \bigr\rangle - \bigl\langle l_{1\bar{z}}(t), \bar{z}_{1}^{1}(t) \bigr\rangle \bigr]\,dt. \end{aligned} $$
(8)

Noticing the initial and terminal conditions, we have

$$ \begin{aligned} & \mathbb{E} \int_{0}^{T} \bigl[ \bigl\langle f_{y_{\delta^{+}}}^{\tau}(t- \delta)p _{1}(t- \delta),y_{1}^{1}(t) \bigr\rangle - \bigl\langle p_{1}(t),\mathbb{E}^{ \mathcal{F}_{t}} \bigl[f_{y_{\delta^{+}}}(t)y_{1}^{1}(t+ \delta) \bigr] \bigr\rangle \bigr] \,dt \\ &\quad =\mathbb{E} \int_{0}^{T} \bigl\langle f_{y_{\delta^{+}}}^{\tau}(t- \delta)p_{1}(t-\delta),y_{1}^{1}(t) \bigr\rangle \,dt-\mathbb{E} \int_{ \delta}^{T+\delta} \bigl\langle f_{y_{\delta^{+}}}^{\tau}(t- \delta)p _{1}(t-\delta),y_{1}^{1}(t) \bigr\rangle \,dt \\ &\quad =\mathbb{E} \int_{0}^{\delta } \bigl\langle f_{y_{\delta^{+}}}^{\tau}(t- \delta)p_{1}(t-\delta),y_{1} ^{1}(t) \bigr\rangle \,dt-\mathbb{E} \int_{T}^{T+\delta} \bigl\langle f_{y_{\delta ^{+}}}^{\tau}(t- \delta)p_{1}(t-\delta),y_{1}^{1}(t) \bigr\rangle \,dt \\ &\quad =0. \end{aligned} $$

From (7) and (8), we have

$$\begin{aligned} & \mathbb{E} \bigl\langle \Phi_{1x} \bigl(x(T) \bigr),x_{1}^{1}(T) \bigr\rangle + \bigl\langle \gamma _{y} \bigl(y(0) \bigr),y_{1}^{1}(0) \bigr\rangle \\ &\quad =\mathbb{E} \int_{0}^{T} \bigl[ \bigl\langle q _{1}(t),b_{v_{1}}(t)v_{1}(t) \bigr\rangle + \bigl\langle k_{1}(t),\sigma_{v_{1}}(t)v _{1}(t) \bigr\rangle + \bigl\langle \bar{k}_{1}(t),\bar{\sigma}_{v_{1}}(t)v_{1}(t) \bigr\rangle \\ &\quad\quad{} - \bigl\langle p_{1}(t),f_{v_{1}}(t)v_{1}(t) \bigr\rangle - \bigl\langle l_{1y}(t),y _{1}^{1}(t) \bigr\rangle \\ &\quad\quad{} - \bigl\langle l_{1z}(t),z_{1}^{1}(t) \bigr\rangle - \bigl\langle l_{1\bar{z}}(t), \bar{z}_{1}^{1}(t) \bigr\rangle - \bigl\langle l_{1x}(t),x_{1}^{1}(t) \bigr\rangle \bigr]\,dt. \end{aligned}$$
(9)

Substituting (9) into (4) leads to

$$ \mathbb{E} \int_{0}^{T} \bigl\langle H_{1v_{1}}(t),v_{1}(t) \bigr\rangle \,dt\leq0 $$

for any \(v_{1}(\cdot)\) such that \(u_{1}(\cdot)+v_{1}(\cdot)\in \mathcal{U}_{\mathrm {ad}}^{1}\).

We set

$$u_{1}(s)+v_{1}(s)= \textstyle\begin{cases} u_{1}(s),& s\notin[t,t+\epsilon],\\ \bar{v}_{1}(s),& s \in[t,t+\epsilon], \end{cases} $$

for any \(0\leq t< t+\epsilon\leq T\) and \(\bar{v}_{1}(\cdot)\in\mathcal{U}_{\mathrm {ad}}^{1}\). Then we have

$$ \frac{1}{\epsilon}\mathbb{E} \int_{t}^{t+\epsilon} \bigl\langle H_{1v_{1}}(s), \bar{v}_{1}(s)-u_{1}(s) \bigr\rangle \,ds\leq0. $$

Letting \(\epsilon\rightarrow0\), we get

$$ \mathbb{E} \bigl\langle H_{1v_{1}}(t),\bar{v}_{1}(t)-u_{1}(t) \bigr\rangle \leq0 $$

for any admissible control \(\bar{v}_{1}(\cdot)\in\mathcal{U}_{\mathrm {ad}} ^{1}\).

Furthermore, we set \(\bar{v}_{1}(t)=v_{1} 1_{A}+u_{1}(t)1_{\Omega-A}\) for any \(v_{1}\in U_{1}\) and \(A\in\mathcal{G}_{t}^{1}\), then it is obvious that \(\bar{v}_{1}(\cdot)\) defined above is an admissible control.

So \(\mathbb{E}\langle H_{1v_{1}}(t),\bar{v}_{1}(t)-u_{1}(t)\rangle= \mathbb{E}[1_{A}\langle H_{1v_{1}}(t),v_{1}-u_{1}(t)\rangle]\leq0 \) for any \(A\in\mathcal{G}_{t}^{1}\). This implies

$$ \mathbb{E} \bigl[ \bigl\langle H_{1v_{1}}(t),v_{1}-u_{1}(t) \bigr\rangle |\mathcal{G} _{t}^{1} \bigr]\leq0,\quad \textit{a.e.} $$

for any \(v_{1}\in U_{1}\). Repeating the same argument for the case \(i=2\), we can show that the corresponding inequality also holds for any \(v_{2}\in U_{2}\). □

Remark 3.1

If \((u_{1}(\cdot),u_{2}(\cdot))\) is an equilibrium point of the non-zero sum differential game and \((u_{1}(t),u_{2}(t))\) is an interior point of \(U_{1}\times U_{2}\) for all \(t\in[0,T]\), then the inequality in Theorem 3.1 is equivalent to the following equation:

$$ \mathbb{E} \bigl[H_{iv_{i}}(t)|\mathcal{G}_{t}^{i} \bigr]=0 \quad\textit{a.e. } (i=1,2). $$

Proof

It is obvious that the “⇐” part holds. For the “⇒” part, we assume that the Nash equilibrium point \((u_{1}(\cdot),u_{2}(\cdot))\) takes values in the interior of \(U_{1}\times U_{2}\) a.e. for all \(t\in[0,T]\). Then, for \((\omega,t)\in\Omega\times[0,T]\) and \(i=1,2\), there exists a closed ball \(\bar{B}_{u_{i}(t)}(k)\subset U_{i}\), where \(u_{i}(t)\) is the center and \(k>0\) denotes the radius. For any \(\eta\in\mathbb{R}^{k_{i}}\) with \(\vert \eta \vert =1\), both \(v_{i}=u_{i}(t)+k\eta\) and \({v}'_{i}=u_{i}(t)-k\eta\) belong to \(\bar{B}_{u_{i}(t)}(k)\). Applying (6) to \(v_{i}\) and to \({v}'_{i}\) gives \(\mathbb{E}[\langle H_{iv_{i}}(t),k\eta\rangle|\mathcal{G}_{t}^{i}]\leq0\) and \(\mathbb{E}[\langle H_{iv_{i}}(t),-k\eta\rangle|\mathcal{G}_{t}^{i}]\leq0\), hence \(\mathbb{E}[H_{iv_{i}}(t)|\mathcal{G}_{t}^{i}]k\eta=0\). From the arbitrariness of η, we get \(\mathbb{E}[H_{iv_{i}}(t)|\mathcal{G}_{t}^{i}]=0\) a.e., which finishes the proof. □

On the other hand, we aim to establish a sufficient maximum principle, called the verification theorem, for the equilibrium point under some concavity assumptions on \(H_{i}\). In this case, assumption H2 can be relaxed to the following:

  1. H3.

    Functions \(l_{i}\), \(\Phi_{i}\), and \(\gamma_{i}\) are differentiable with respect to \((x,y,z,\bar{z},v_{1},v_{2})\), x, and y, respectively, satisfying the condition that for each \((v_{1}( \cdot),v_{2}(\cdot))\in\mathcal{U}_{\mathrm {ad}}\), \(l_{i}(\cdot,x^{v}(t),y ^{v}(t),z^{v}(t),\bar{z}^{v}(t),v_{1}(t),v_{2}(t))\in\mathbb{L}^{1} _{\mathcal{F}}(0,T;\mathbb{R})\), and \(l_{i\phi}(\cdot,x^{v}(\cdot),y ^{v}(\cdot),z^{v}(\cdot), \bar{z}^{v}(\cdot), v_{1}(\cdot), v_{2}(\cdot))\in\mathbb{L}^{2}_{\mathcal{F}}(0,T;\mathbb{R})\) for \(\phi=x,y,z,\bar{z},v_{i}\) (\(i=1,2\)).

Theorem 3.2

Let H1 and H3 hold. Let \((u_{1}(\cdot),u_{2}(\cdot))\in\mathcal{U}_{\mathrm {ad}}^{1}\times\mathcal{U}_{\mathrm {ad}}^{2}\) be given and \((x(\cdot),y(\cdot), z(\cdot),\bar{z}(\cdot))\) be the corresponding trajectory. Define

$$ \begin{gathered} {H}_{1}^{v_{1}}(t) = H_{1} \bigl(t,x(t),y(t),z(t),\bar{z}(t),x_{\delta}(t),y _{\delta^{+}}(t),v_{1}(t),u_{2}(t);p_{1}(t),q_{1}(t),k_{1}(t), \bar{k} _{1}(t) \bigr), \\ {H}_{2}^{v_{2}}(t) = H_{2} \bigl(t,x(t),y(t),z(t), \bar{z}(t),x _{\delta}(t),y_{\delta^{+}}(t),u_{1}(t),v_{2}(t);p_{2}(t),q_{2}(t),k _{2}(t),\bar{k}_{2}(t) \bigr). \end{gathered} $$

Suppose

$$ \begin{gathered} (x,y,z,\bar{z},x_{\delta},y_{\delta^{+}},v_{i}) \mapsto H_{i}^{v _{i}}(t)\quad(i=1,2), \\ x\mapsto \Phi_{i}(x) \quad (i=1,2), \\ y\mapsto \gamma_{i}(y) \quad (i=1,2) \end{gathered} $$

are concave functions respectively, and \(G(x)=M_{T}x\), \(M_{T}\in \mathbb{R}^{m\times n}\), \(\forall x\in\mathbb{R}^{n}\). If condition (6) holds, then \((u_{1}(\cdot),u_{2}(\cdot))\) is an equilibrium point.

Proof

For any \(v_{1}(\cdot)\in\mathcal{U}_{\mathrm {ad}}^{1}\), let \((x^{v_{1}}( \cdot),y^{v_{1}}(\cdot),z^{v_{1}}(\cdot),\bar{z}^{v_{1}}(\cdot))\) be the trajectory corresponding to the control \((v_{1}(\cdot),u_{2}( \cdot))\in\mathcal{U}_{\mathrm {ad}}\). We consider

$$ J_{1} \bigl(v_{1}(\cdot),u_{2}(\cdot) \bigr)-J_{1} \bigl(u_{1}(\cdot),u_{2}(\cdot) \bigr)=A+B+C, $$

with

$$ \begin{gathered} A =\mathbb{E} \int_{0}^{T} \bigl[l_{1} \bigl(t, \Theta^{v_{1}}(t),v_{1}(t),u_{2}(t) \bigr)-l _{1} \bigl(t,\Theta(t),u_{1}(t),u_{2}(t) \bigr) \bigr]\,dt, \\ B =\mathbb{E} \bigl[\Phi_{1} \bigl(x ^{v_{1}}(T) \bigr)- \Phi_{1} \bigl(x(T) \bigr) \bigr], \\ C =\gamma_{1} \bigl(y^{v_{1}}(0) \bigr)-\gamma _{1} \bigl(y(0) \bigr), \end{gathered} $$

where \(\Theta(t)=(x(t),y(t),z(t),\bar{z}(t))\) and \(\Theta^{v_{1}}(t)=(x ^{v_{1}}(t),y^{v_{1}}(t),z^{v_{1}}(t),\bar{z}^{v_{1}}(t))\).

Since \(\gamma_{1}\) is concave in y, we have

$$ C\leq\gamma_{1y}^{\tau} \bigl(y(0) \bigr) \bigl(y^{v_{1}}(0)-y(0) \bigr). $$

Applying Itô’s formula to \(\langle p_{1}(\cdot),y^{v_{1}}(\cdot )-y(\cdot)\rangle\) and taking expectation, we get

$$ \begin{aligned}[b] C&\leq \mathbb{E} \int_{0}^{T} \bigl[- \bigl\langle p_{1}(t),f^{v_{1}}(t)-f(t) \bigr\rangle - \bigl\langle H_{1y}(t)+H_{1y_{\delta^{+}}}(t-\delta),y^{v_{1}}(t)-y(t) \bigr\rangle \\ &\quad{} - \bigl\langle H_{1z}(t),z^{v_{1}}(t)-z(t) \bigr\rangle - \bigl\langle H _{1\bar{z}}(t),\bar{z}^{v_{1}}(t)-\bar{z}(t) \bigr\rangle \bigr]\,dt \\ &\quad{} - \mathbb{E} \bigl\langle p_{1}(T),M_{T} \bigl(x^{v_{1}}(T)-x(T) \bigr) \bigr\rangle , \end{aligned} $$
(10)

where \(f(t)=f(t,\Theta(t),\mathbb{E}^{\mathcal{F}_{t}}[y_{\delta^{+}}(t)],u _{1}(t),u_{2}(t))\), and \(f^{v_{1}}(t)=f(t,\Theta^{v_{1}}(t), \mathbb{E}^{\mathcal{F}_{t}}[y_{\delta^{+}}^{v_{1}}(t)], v_{1}(t), u _{2}(t))\).

Since \(\Phi_{1}\) is concave in x,

$$ B\leq\mathbb{E}\Phi_{1x}^{\tau} \bigl(x(T) \bigr) \bigl(x^{v_{1}}(T)-x(T) \bigr). $$

Applying Itô’s formula to \(\langle q_{1}(\cdot),x^{v_{1}}(\cdot )-x(\cdot)\rangle\) and taking expectation, we get

$$ \begin{aligned}[b] B&\leq \mathbb{E} \int_{0}^{T} \bigl[ \bigl\langle q_{1}(t),b^{v_{1}}(t)-b(t) \bigr\rangle + \bigl\langle k_{1}(t),\sigma^{v_{1}}(t)-\sigma(t) \bigr\rangle \\ &\quad{} + \bigl\langle \bar{k}_{1}(t),\bar{\sigma}^{v_{1}}(t)- \bar{ \sigma}(t) \bigr\rangle - \bigl\langle H_{1x}(t)+ \mathbb{E}^{\mathcal{F}_{t}} \bigl[H_{1x_{\delta }}(t+\delta) \bigr],x^{v_{1}}(t)-x(t) \bigr\rangle \bigr]\,dt \\ &\quad{} +\mathbb{E} \bigl\langle M _{T}^{\tau}p_{1}(T),x^{v_{1}}(T)-x(T) \bigr\rangle , \end{aligned} $$
(11)

where \(b(t)=b(t,x(t),x_{\delta}(t),u_{1}(t),u_{2}(t))\) and \(b^{v_{1}}(t)=b(t,x^{v_{1}}(t),x^{v_{1}}_{\delta}(t),v_{1}(t),u_{2}(t))\), etc.

Moreover, we have

$$ \begin{aligned}[b] A& =\mathbb{E} \int_{0}^{T} \bigl[H_{1}^{v_{1}}(t)-H_{1}(t) \bigr]\,dt-\mathbb{E} \int_{0}^{T} \bigl[ \bigl\langle q_{1}(t),b^{v_{1}}(t)-b(t) \bigr\rangle \\ &\quad{} + \bigl\langle k _{1}(t),\sigma^{v_{1}}(t)-\sigma(t) \bigr\rangle + \bigl\langle \bar{k}_{1}(t),\bar{ \sigma}^{v_{1}}(t)- \bar{\sigma}(t) \bigr\rangle \\ &\quad{} - \bigl\langle p_{1}(t),f ^{v_{1}}(t)-f(t) \bigr\rangle \bigr]\,dt. \end{aligned} $$
(12)

From (10)-(12), we can obtain

$$ \begin{aligned} & J_{1} \bigl(v_{1}( \cdot),u_{2}(\cdot) \bigr)-J_{1} \bigl(u_{1}( \cdot),u_{2}(\cdot) \bigr) \\ &\quad =A+B+C \\ &\quad \leq\mathbb{E} \int_{0}^{T} \bigl[ \bigl(H_{1}^{v_{1}}(t)-H_{1}(t) \bigr)- \bigl\langle H _{1x}(t)+\mathbb{E}^{\mathcal{F}_{t}} \bigl[H_{1x_{\delta}}(t+\delta) \bigr],x ^{v_{1}}(t)-x(t) \bigr\rangle \\ & \quad\quad{} - \bigl\langle H_{1y}(t)+H_{1y_{\delta^{+}}}(t- \delta),y^{v_{1}}(t)-y(t) \bigr\rangle - \bigl\langle H_{1z}(t),z^{v_{1}}(t)-z(t) \bigr\rangle \\ & \quad\quad{} - \bigl\langle H_{1\bar{z}}(t),\bar{z}^{v_{1}}(t)- \bar{z}(t) \bigr\rangle \bigr]\,dt. \end{aligned} $$

Note that

$$ \begin{aligned} & \mathbb{E}\biggl[ \int_{0}^{T} \bigl\langle H_{1x_{\delta}}(t),x^{v_{1}}(t- \delta )-x(t-\delta) \bigr\rangle \,dt- \int_{0}^{T} \bigl\langle \mathbb{E}^{\mathcal{F} _{t}}\bigl[ H_{1x_{\delta}}(t+\delta)\bigr],x^{v_{1}}(t)-x(t) \bigr\rangle \,dt\biggr] \\ &\quad = \mathbb{E} \int_{-\delta}^{0} \bigl\langle H_{1x_{\delta}}(t+ \delta),x ^{v_{1}}(t)-x(t) \bigr\rangle \,dt-\mathbb{E} \int_{T-\delta}^{T} \bigl\langle H _{1x_{\delta}}(t+ \delta),x^{v_{1}}(t)-x(t) \bigr\rangle \,dt \\ &\quad =0, \end{aligned} $$

due to the fact that \(x^{v_{1}}(t)=x(t)=\xi(t)\) for any \(t\in[- \delta,0)\) and \(H_{1x_{\delta}}(t)=0\) for any \(t\in(T,{T+\delta}]\).

Similarly, we have

$$ \begin{aligned} & \mathbb{E} \int_{T}^{T+\delta} \bigl\langle H_{1y_{\delta^{+}}}(t- \delta),y ^{v_{1}}(t)-y(t) \bigr\rangle \,dt-\mathbb{E} \int_{0}^{\delta} \bigl\langle H_{1y _{\delta^{+}}}(t- \delta),y^{v_{1}}(t)-y(t) \bigr\rangle \,dt \\ &\quad =0, \end{aligned} $$

due to the fact that \(y^{v_{1}}(t)=y(t)=\varphi(t)\) for any \(t\in(T,T+\delta]\) and \(H_{1y_{\delta^{+}}}(t)=0\) for any \(t\in[-\delta,0)\).

By the concavity of \(H_{1}\), we derive that

$$ \begin{aligned} J_{1} \bigl(v_{1}( \cdot),u_{2}(\cdot) \bigr)-J_{1} \bigl(u_{1}( \cdot),u_{2}(\cdot) \bigr) & \leq\mathbb{E} \int_{0}^{T} \bigl\langle H_{1v_{1}}(t),v_{1}(t)-u_{1}(t) \bigr\rangle \,dt \\ &=\mathbb{E} \int_{0}^{T}\mathbb{E} \bigl[ \bigl\langle H_{1v_{1}}(t),v _{1}(t)-u_{1}(t) \bigr\rangle | \mathcal{G}_{t}^{1} \bigr]\,dt. \end{aligned} $$

From the necessary condition (6), it follows that

$$ J_{1} \bigl(u_{1}(\cdot),u_{2}(\cdot) \bigr)= \sup_{v_{1}(\cdot)\in\mathcal{U}_{\mathrm{ad}}^{1}}J_{1} \bigl(v_{1}(\cdot),u_{2}(\cdot) \bigr). $$

Repeating the same process to deal with the case \(i=2\), we can draw the desired conclusion. □

In conclusion, with the help of Theorems 3.1 and 3.2, we can formally solve for the Nash equilibrium point \((u_{1}(\cdot),u_{2}(\cdot))\): we first use the necessary condition to obtain a candidate equilibrium point and then use the verification theorem to check whether the candidate is indeed an equilibrium. Let us now discuss a linear-quadratic case.

4 A linear-quadratic case

In this section, we study a linear-quadratic case, which can be seen as a special case of the general system discussed in Section 3, and aim to give the unique Nash equilibrium point explicitly. For notational simplicity, we suppose that the Brownian motions are one-dimensional, \(d=\bar{d}=1\), and the notations are the same as in the previous sections unless otherwise specified.

Consider a linear game system with delayed and anticipated states:

$$ \textstyle\begin{cases} dx^{v}(t) = [A(t)x^{v}(t)+\bar{A}(t)x^{v}_{\delta}(t)+B_{1}(t)v_{1}(t)+B _{2}(t)v_{2}(t) ]\,dt \\ \hphantom{dx^{v}(t) =}{}+ [C(t)x^{v}(t)+\bar{C}(t)x^{v}_{\delta}(t)+D _{1}(t)v_{1}(t)+D_{2}(t)v_{2}(t) ]\,dW(t), \\ -dy^{v}(t) = [E(t)x^{v}(t)+F(t)y ^{v}(t)+G(t)z^{v}(t)+ \bar{G}(t)\bar{z}^{v}(t) \\ \hphantom{-dy^{v}(t) =}{}+\bar{F}(t)y^{v}_{ \delta^{+}}(t)+H_{1}(t)v_{1}(t)+H_{2}(t)v_{2}(t) ]\,dt-z^{v}(t)\,dW(t) \\ \hphantom{-dy^{v}(t) =}{} - \bar{z}^{v}(t)\,d\bar{W}(t),\quad t\in[0,T], \\ x^{v}(t) =\xi(t), \quad t\in[-\delta,0], \\ y^{v}(T) =M_{T}x^{v}(T), \quad\quad y^{v}(t)= \varphi(t),\quad t\in(T,T+\delta], \end{cases} $$
(13)

where all the coefficients are bounded, deterministic matrix-valued functions defined on \([0,T]\), \(\xi(\cdot)\in\mathbb{C}([-\delta,0];\mathbb{R}^{n})\), \(\varphi(\cdot)\in\mathbb{L}^{2}_{\mathcal{F}}(T,T+\delta;\mathbb{R}^{m})\). For any given \((v_{1}(\cdot),v_{2}(\cdot))\in\mathcal{U}_{\mathrm {ad}}\), it is easy to see that (13) admits a unique solution \((x^{v}(\cdot),y^{v}(\cdot),z^{v}(\cdot),\bar{z}^{v}(\cdot))\). Here, we only consider the case where \(x^{v}(\cdot)\) is driven by the single Brownian motion \(W(\cdot)\), just for notational simplicity; all the techniques and proofs are similar in the general case.

In addition, the two players aim to maximize their cost functionals, for \(i=1,2\):

$$\begin{aligned} J_{i} \bigl(v_{1}(\cdot),v_{2}(\cdot) \bigr)& =\frac{1}{2}\mathbb{E} \biggl[ \int_{0}^{T} \bigl[ \bigl\langle O_{i}(t)x^{v}(t),x^{v}(t) \bigr\rangle + \bigl\langle P_{i}(t)y^{v}(t),y^{v}(t) \bigr\rangle \\ &\quad{} + \bigl\langle Q_{i}(t)z^{v}(t),z^{v}(t) \bigr\rangle + \bigl\langle \bar{Q}_{i}(t)\bar{z}^{v}(t), \bar{z}^{v}(t) \bigr\rangle + \bigl\langle R_{i}(t)v_{i}(t),v_{i}(t) \bigr\rangle \bigr]\,dt \\ &\quad{} + \bigl\langle M_{i}x^{v}(T),x^{v}(T) \bigr\rangle + \bigl\langle N_{i}y^{v}(0),y^{v}(0) \bigr\rangle \biggr], \end{aligned}$$

where \(O_{i}(\cdot)\), \(P_{i}(\cdot)\), \(Q_{i}(\cdot)\), \(\bar{Q}_{i}(\cdot)\) are bounded deterministic symmetric negative semi-definite matrix-valued functions, \(R_{i}(\cdot)\) is a bounded deterministic symmetric negative definite matrix-valued function with \(R_{i}^{-1}(\cdot)\) bounded, and \(M_{i}\), \(N_{i}\) are deterministic symmetric negative semi-definite matrices for \(i=1,2\).

Following Section 3, the Hamiltonian functions are given by

$$ \begin{aligned} & \mathbb{H}_{i}(t,x,y,z, \bar{z},x_{\delta},y_{\delta^{+}},v_{1},v _{2};p_{i},q_{i},k_{i}) \\ &\quad = \bigl\langle q_{i},A(t)x+\bar{A}(t)x_{\delta}+B _{1}(t)v_{1}+B_{2}(t)v_{2} \bigr\rangle \\ &\quad\quad{} + \bigl\langle k_{i},C(t)x+\bar{C}(t)x _{\delta}+D_{1}(t)v_{1}+D_{2}(t)v_{2} \bigr\rangle - \bigl\langle p_{i},E(t)x+F(t)y+G(t)z+ \bar{G}(t)\bar{z} \\ &\quad\quad{} +\bar{F}(t)y_{\delta^{+}}+H_{1}(t)v_{1}+H_{2}(t)v _{2} \bigr\rangle +\frac{1}{2} \bigl[ \bigl\langle O_{i}(t)x,x \bigr\rangle + \bigl\langle P_{i}(t)y,y \bigr\rangle + \bigl\langle Q_{i}(t)z,z \bigr\rangle \\ &\quad\quad{} + \bigl\langle \bar{Q}_{i}(t) \bar{z},\bar{z} \bigr\rangle + \bigl\langle R_{i}(t)v_{i},v_{i} \bigr\rangle \bigr]. \end{aligned} $$

If \((u_{1}(\cdot),u_{2}(\cdot))\) is a Nash equilibrium point, then the maximum principle (Theorem 3.1 and Remark 3.1) yields

$$ \begin{aligned} u_{i}(t)=-R^{-1}_{i}(t) \bigl[B^{\tau}_{i}(t)\hat{q}_{i}(t)+D^{\tau}_{i}(t) \hat{k}_{i}(t)-H^{\tau}_{i}(t)\hat{p}_{i}(t) \bigr],\quad t\in[0,T], \end{aligned} $$
(14)

where \(\hat{q}_{i}(t)=\mathbb{E}[q_{i}(t)|{\mathcal{G}_{t}}]\) for \(i=1,2\), etc., both players are assumed to observe the same sub-filtration \(\mathcal{G}_{t}:=\mathcal{G}_{t}^{1}=\mathcal{G}_{t}^{2}\subseteq\mathcal{F}_{t}\), and \((p_{i}(\cdot),q_{i}(\cdot),k_{i}(\cdot))\) is the solution of the following adjoint equation:

$$ \textstyle\begin{cases} dp_{i}(t) = [F^{\tau}(t)p_{i}(t)+\bar{F}^{\tau}(t- \delta)p_{i}(t- \delta)-P_{i}(t)y(t) ]\,dt+ [G^{\tau}(t)p_{i}(t) \\ \hphantom{dp_{i}(t) =}{}-Q_{i}(t)z(t) ]\,dW(t)+ [ \bar{G}^{\tau}(t)p_{i}(t)- \bar{Q}_{i}(t)\bar{z}(t) ]\,d\bar{W}(t), \\ -dq _{i}(t) = \{A^{\tau}(t)q_{i}(t)+C^{\tau}(t)k_{i}(t)-E^{\tau}(t)p _{i}(t)+\mathbb{E}^{\mathcal{F}_{t}} [\bar{A}^{\tau}(t+ \delta)q_{i}(t+ \delta) \\ \hphantom{-dq _{i}(t) =}{} +\bar{C}^{\tau}(t+\delta)k_{i}(t+\delta) ]+O_{i}(t)x(t) \}\,dt-k_{i}(t)\,dW(t),\quad t\in[0,T], \\ p_{i}(0) =-N_{i}y(0), \quad\quad p_{i}(t)=0, \quad t\in [-\delta,0), \\ q_{i}(T) =-M_{T}p_{i}(T)+M_{i}x(T),\quad\quad q _{i}(t)=0,\quad t\in(T,T+\delta ]\ (i=1,2). \end{cases} $$
(15)
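Expression (14) is simply the first-order condition of Remark 3.1 applied to the quadratic Hamiltonian \(\mathbb{H}_{i}\): differentiating in \(v_{i}\) and projecting onto \(\mathcal{G}_{t}\) gives \(R_{i}(t)u_{i}(t)+B_{i}^{\tau}(t)\hat{q}_{i}(t)+D_{i}^{\tau}(t)\hat{k}_{i}(t)-H_{i}^{\tau}(t)\hat{p}_{i}(t)=0\). A minimal symbolic check of this algebra in the scalar case (all coefficients replaced by hypothetical one-dimensional symbols) can be done as follows.

```python
import sympy as sp

# scalar stand-ins for the coefficients at a fixed time t (purely illustrative)
q, k, p, B, D, H, R, v = sp.symbols('q k p B D H R v', real=True)

# the v_i-dependent part of the LQ Hamiltonian; the remaining terms do not involve v_i
Ham = q * B * v + k * D * v - p * H * v + sp.Rational(1, 2) * R * v**2

# first-order condition dHam/dv = 0, as in Remark 3.1
u = sp.solve(sp.diff(Ham, v), v)[0]
print(u)                                                    # (-B*q - D*k + H*p)/R
assert sp.simplify(u + (B * q + D * k - H * p) / R) == 0    # matches (14) after conditioning on G_t
```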

We note that the setting \(\mathcal{G}_{t}\subseteq\mathcal{F}_{t}\) is very general. In order to get an explicit expression of the equilibrium point, we suppose \(\mathcal{G}_{t}=\sigma\{W(s);0\leq s\leq t\}\) in the rest of this section.

We denote the filtering of the state process \(x(t)\) by \(\hat{x}(t)=\mathbb{E}[x(t)|{\mathcal{G}_{t}}]\), etc., and note that \(\mathbb{E}[y({t+\delta})|\mathcal{G}_{t}]=\mathbb{E}\{\mathbb{E}[y({t+\delta})|\mathcal{G}_{t+\delta}]|\mathcal{G}_{t}\}=\mathbb{E}[\hat{y}({t+\delta})|\mathcal{G}_{t}]\). By Theorem 8.1 in Liptser and Shiryayev [29] and Theorem 5.7 (the Kushner-FKK equation) in Xiong [30], we can get the state filtering equation for (13):

$$ \textstyle\begin{cases} d\hat{x}(t) = [ A(t) \hat{x}(t)+\bar{A}(t)\hat{x}_{\delta}(t)-\sum_{i=1}^{2}B_{i}(t)R_{i}^{-1}(t) \mathcal{B}_{i}(t) ]\,dt \\ \hphantom{ d\hat{x}(t) =}{}+ [ C(t) \hat{x}(t)+\bar{C}(t)\hat{x}_{\delta}(t)-\sum_{i=1}^{2}D_{i}(t)R_{i} ^{-1}(t)\mathcal{B}_{i}(t) ]\,dW(t), \\ -d\hat{y}(t) = \{E(t)\hat{x}(t)+F(t) \hat{y}(t)+G(t)\hat{z}(t)+\bar{G}(t) \hat{\bar{z}}(t)+\bar{F}(t) \mathbb{E}^{\mathcal{G}_{t}} [\hat{y}(t+\delta) ] \\ \hphantom{-d\hat{y}(t) =}{} -\sum_{i=1}^{2}H _{i}(t)R_{i}^{-1}(t) \mathcal{B}_{i}(t) \}\,dt-\hat{z}(t)\,dW(t), \quad t \in[0,T], \\ \hat{x}(t) =\xi(t),\quad t\in[-\delta,0], \\ \hat{y}(T) =M_{T}\hat{x}(T), \quad\quad \hat{y}(t)=\hat{\varphi}(t),\quad t \in(T,T+ \delta], \end{cases} $$
(16)

where \(\mathcal{B}_{i}(t)=B_{i}^{\tau}(t)\hat{q}_{i}(t)+D_{i}^{ \tau}(t)\hat{k}_{i}(t)-H_{i}^{\tau}(t)\hat{p}_{i}(t)\), and the adjoint filtering equation for (15) satisfying

$$ \textstyle\begin{cases} d\hat{p}_{i}(t) = [F^{\tau}(t)\hat{p}_{i}(t)+\bar{F}^{\tau}(t- \delta)\hat{p}_{i}(t-\delta)-P_{i}(t)\hat{y}(t) ]\,dt \\ \hphantom{d\hat{p}_{i}(t) =}{} + [G^{\tau}(t) \hat{p}_{i}(t)-Q_{i}(t) \hat{z}(t) ]\,dW(t), \\ -d\hat{q}_{i}(t) = \{A^{ \tau}(t)\hat{q}_{i}(t)+C^{\tau}(t) \hat{k}_{i}(t)-E^{\tau}(t) \hat{p}_{i}(t)+ \mathbb{E}^{\mathcal{G}_{t}} [\bar{A}^{\tau}(t+\delta ) \hat{q}_{i}(t+\delta) \\ \hphantom{-d\hat{q}_{i}(t) =}{} +\bar{C}^{\tau}(t+\delta)\hat{k}_{i}(t+ \delta) ]+O_{i}(t)\hat{x}(t) \}\,dt-\hat{k}_{i}(t)\,dW(t),\quad t\in[0,T], \\ \hat{p}_{i}(0) =-N_{i}y(0), \quad\quad \hat{p}_{i}(t)=0, \quad t\in [- \delta,0), \\ \hat{q}_{i}(T) =-M_{T}\hat{p}_{i}(T)+M_{i} \hat{x}(T), \quad\quad \hat{q}_{i}(t)=0,\quad t\in(T,T+\delta ]\ (i=1,2). \end{cases} $$
(17)

From Theorems 3.1 and 3.2, it is easy to see that \((u_{1}(\cdot),u_{2}(\cdot))\) is an equilibrium point of the above linear-quadratic game problem if and only if \((u_{1}(\cdot),u_{2}(\cdot))\) satisfies expression (14) with \((\hat{x},\hat{y},\hat{z},\hat{p}_{i},\hat{q}_{i},\hat{k}_{i})\) (\(i=1,2\)) being the solution of the coupled triple-dimensional filtering AFBSDDE (16)-(17) (TFBSDDE for short). Hence the existence and uniqueness of the equilibrium point is equivalent to the existence and uniqueness of the solution of the TFBSDDE.

However, TFBSDDE (16)-(17) is rather complicated. Fortunately, in some particular cases, we can make a transformation linking it with a double-dimensional filtering AFBSDDE, called a DFBSDDE. We present our result in the following.

  1. H4.

    The dimension of x is equal to that of y: \(n=m\), \(\bar{G}(t)\equiv0\) and coefficients \(B_{i}(t)=B_{i}\), \(D_{i}(t)=D_{i}\), \(H _{i}(t)=H_{i}\) are independent of time t for any \(i=1,2\).

Theorem 4.1

Under H4, we assume that one of the following conditions holds true:

  1. (a)

    \(D_{1}=D_{2}=H_{1}=H_{2} \equiv0\) and \(B_{i}R^{-1}_{i}B ^{\tau}_{i}S=SB_{i}R^{-1}_{i}B^{\tau}_{i}\) (\(i=1,2\));

  2. (b)

    \(B _{1}=B_{2}=H_{1}=H_{2} \equiv0\) and \(D_{i}R^{-1}_{i}D^{\tau} _{i}S=SD_{i}R^{-1}_{i}D^{\tau}_{i}\) (\(i=1,2\));

  3. (c)

    \(B_{1}=B _{2}=D_{1}=D_{2} \equiv0\) and \(H_{i}R^{-1}_{i}H^{\tau}_{i}S=SH _{i}R^{-1}_{i}H^{\tau}_{i}\) (\(i=1,2\)),

where the commutation relation is required for \(S^{\tau}\) ranging over \(A(\cdot),\bar{A}(\cdot),C(\cdot),\bar{C}(\cdot), E(\cdot),F(\cdot),\bar{F}(\cdot),G(\cdot),M_{T},O_{i}(\cdot),P_{i}(\cdot),Q_{i}(\cdot),M_{i},N_{i}\). Then \((u_{1}(\cdot), u_{2}(\cdot))\) given by (14) is the unique Nash equilibrium point.

Proof

We only prove (a). The same method can be used to get (b) and (c). From the above discussion, we need to prove only that there exists a unique solution of the coupled TFBSDDE (16)-(17). In the case that \(D_{1}=D_{2}=H _{1}=H_{2}\equiv0\), it becomes

$$\begin{aligned} \textstyle\begin{cases} d\hat{x}(t) = [ A(t) \hat{x}(t)+\bar{A}(t)\hat{x}_{\delta}(t)-\sum_{i=1}^{2}B_{i}R_{i}^{-1}B_{i}^{\tau} \hat{q}_{i}(t) ]\,dt \\ \hphantom{d\hat{x}(t) =}{} + [C(t) \hat{x}(t)+\bar{C}(t)\hat{x}_{\delta}(t) ]\,dW(t), \\ -d\hat{y}(t) = \{E(t) \hat{x}(t)+F(t)\hat{y}(t)+G(t)\hat{z}(t)+\bar{F}(t) \mathbb{E}^{ \mathcal{G}_{t}} [\hat{y}(t+\delta) ] \}\,dt-\hat{z}(t)\,dW(t), \\ d\hat{p} _{i}(t) = [F^{\tau}(t)\hat{p}_{i}(t)+ \bar{F}^{\tau}(t-\delta) \hat{p}_{i}(t-\delta)-P_{i}(t) \hat{y}(t) ]\,dt \\ \hphantom{d\hat{p} _{i}(t) =}{} + [G^{\tau}(t)\hat{p} _{i}(t)-Q_{i}(t) \hat{z}(t) ]\,dW(t), \\ -d\hat{q}_{i}(t) = \{A^{\tau}(t) \hat{q}_{i}(t)+C^{\tau}(t) \hat{k}_{i}(t)-E^{\tau}(t)\hat{p}_{i}(t)+ \mathbb{E}^{\mathcal{G}_{t}} [\bar{A}^{\tau}(t+\delta) \hat{q}_{i}(t+ \delta) \\ \hphantom{-d\hat{q}_{i}(t) =}{} +\bar{C}^{\tau}(t+\delta)\hat{k}_{i}(t+\delta) ]+O_{i}(t) \hat{x}(t) \}\,dt-\hat{k}_{i}(t)\,dW(t), \quad t\in[0,T], \\ \hat{x}(t) = \xi(t),\quad t\in[-\delta,0]; \quad\quad \hat{y}(T)= M_{T} \hat{x}(T), \quad\quad \hat{y}(t)=\hat{\varphi}(t),\quad t\in (T,T+\delta], \\ \hat{p}_{i}(0) =-N_{i}y(0), \quad\quad \hat{p}_{i}(t)=0, \quad t\in [-\delta,0 ), \\ \hat{q} _{i}(T) =-M_{T}\hat{p}_{i}(T)+M_{i} \hat{x}(T), \quad\quad \hat{q}_{i}(t)= \hat{k}_{i}(t)=0,\quad t\in(T,T+ \delta ]. \end{cases}\displaystyle \end{aligned}$$
(18)

Now we consider another DFBSDDE:

$$\begin{aligned} \textstyle\begin{cases} d\tilde{x}(t) = [ A(t) \tilde{x}(t)+\bar{A}(t)\tilde{x}_{\delta}(t)- \tilde{q}(t) ]\,dt+ [C(t)\tilde{x}(t)+\bar{C}(t)\tilde{x}_{\delta}(t) ]\,dW(t), \\ -d\tilde{y}(t) = \{E(t)\tilde{x}(t)+F(t)\tilde{y}(t)+G(t) \tilde{z}(t)+ \bar{F}(t)\mathbb{E}^{\mathcal{G}_{t}} [\tilde{y}(t+ \delta) ] \}\,dt- \tilde{z}(t)\,dW(t), \\ d\tilde{p}(t) = [F^{\tau}(t) \tilde{p}(t)+\bar{F}^{\tau}(t- \delta)\tilde{p}(t-\delta)-\sum_{i=1} ^{2}B_{i}R^{-1}_{i}B^{\tau}_{i}P_{i}(t) \tilde{y}(t) ]\,dt \\ \hphantom{d\tilde{p}(t) =}{} + [G^{ \tau}(t)\tilde{p}(t)-\sum_{i=1}^{2}B_{i}R^{-1}_{i}B^{\tau}_{i}Q_{i}(t) \tilde{z}(t) ]\,dW(t), \\ -d\tilde{q}(t) = \{A^{\tau}(t)\tilde{q}(t)+C ^{\tau}(t) \tilde{k}(t)-E^{\tau}(t)\tilde{p}(t)+\bar{A}^{\tau}(t+ \delta) \mathbb{E}^{\mathcal{G}_{t}} [\tilde{q}(t+\delta) ] \\ \hphantom{-d\tilde{q}(t) =} {} + \bar{C}^{\tau}(t+\delta)\mathbb{E}^{\mathcal{G}_{t}} [\tilde{k}(t+ \delta) ]+\sum_{i=1}^{2}B_{i}R^{-1}_{i}B^{\tau}_{i}O_{i}(t) \tilde{x}(t) \}\,dt-\tilde{k}(t)\,dW(t), \\ \tilde{x}(t) =\xi(t),\quad t\in[-\delta ,0]; \quad\quad \tilde{y}(T)=M_{T} \tilde{x}(T), \quad\quad \tilde{y}(t)=\hat{\varphi}(t), \quad t\in (T,T+\delta], \\ \tilde{p}(0) =-\sum_{i=1}^{2}B_{i}R^{-1} _{i}B^{\tau}_{i}N_{i}\tilde{y}(0),\quad\quad \tilde{p}(t)=0,\quad t\in [- \delta,0 ), \\ \tilde{q}(T) =\sum_{i=1}^{2}B_{i}R^{-1}_{i}B^{\tau} _{i}M_{i}\tilde{x}(T)-M_{T}\tilde{p}(T),\quad\quad \tilde{q}(t)=\tilde{k}(t)=0, \quad t\in(T,T+\delta ]. \end{cases}\displaystyle \end{aligned}$$
(19)

From the commutation relation between matrices, we notice that, if \((\hat{x},\hat{y},\hat{z},\hat{p}_{i}, \hat{q}_{i}, \hat{k}_{i})\) (\(i=1,2\)) is a solution of (18), then \((\tilde{x},\tilde{y},\tilde{z},\tilde{p},\tilde{q},\tilde{k})\) solves (19), where

$$ \textstyle\begin{cases} \tilde{x}(t) =\hat{x}(t),\quad\quad \tilde{y}(t)=\hat{y}(t),\quad\quad \tilde{z}(t)= \hat{z}(t), \\ \tilde{p}(t) =B_{1}R^{-1}_{1}B^{\tau}_{1} \hat{p}_{1}(t)+B _{2}R^{-1}_{2}B^{\tau}_{2} \hat{p}_{2}(t), \\ \tilde{q}(t) =B_{1}R ^{-1}_{1}B^{\tau}_{1} \hat{q}_{1}(t)+B_{2}R^{-1}_{2}B^{\tau}_{2} \hat{q}_{2}(t), \\ \tilde{k}(t) =B_{1}R^{-1}_{1}B^{\tau}_{1} \hat{k} _{1}(t)+B_{2}R^{-1}_{2}B^{\tau}_{2} \hat{k}_{2}(t). \end{cases} $$

On the other hand, if \((\tilde{x},\tilde{y},\tilde{z},\tilde{p}, \tilde{q},\tilde{k})\) is a solution of (19), we can let \(\hat{x}(t)=\tilde{x}(t)\), \(\hat{y}(t)=\tilde{y}(t)\), \(\hat{z}(t)=\tilde{z}(t)\). From the existence and uniqueness result of SDDE and ABSDE (see [12, 13]), we can get \((\hat{p}_{i}(t),\hat{q}_{i}(t), \hat{k}_{i}(t))\) from the following filtering AFBSDDE:

$$ \textstyle\begin{cases} d\hat{p}_{i}(t) = [F^{\tau}(t) \hat{p}_{i}(t)+\bar{F}^{\tau}(t-\delta)\hat{p}_{i}(t-\delta)-P_{i}(t)\hat{y}(t) ]\,dt+ [G^{\tau}(t) \hat{p}_{i}(t) \\ \hphantom{d\hat{p}_{i}(t) =}{} -Q_{i}(t)\hat{z}(t) ]\,dW(t), \\ -d\hat{q}_{i}(t) = \{A^{\tau}(t)\hat{q}_{i}(t)+C^{\tau}(t) \hat{k}_{i}(t)-E^{\tau}(t)\hat{p}_{i}(t)+ \bar{A}^{\tau}(t+\delta)\mathbb{E}^{\mathcal{G}_{t}} [ \hat{q}_{i}(t+\delta) ] \\ \hphantom{-d\hat{q}_{i}(t) =}{} +\bar{C}^{\tau}(t+\delta)\mathbb{E}^{\mathcal{G}_{t}} [ \hat{k}_{i}(t+\delta) ]+O_{i}(t)\hat{x}(t) \}\,dt \\ \hphantom{-d\hat{q}_{i}(t) =}{} - \hat{k}_{i}(t)\,dW(t),\quad t\in[0,T], \\ \hat{p}_{i}(0) =-N_{i}\hat{y}(0), \quad\quad \hat{p}_{i}(t)=0, \quad t\in [-\delta,0), \\ \hat{q}_{i}(T) =-M_{T} \hat{p}_{i}(T)+M_{i} \hat{x}(T), \quad\quad \hat{q}_{i}(t)=\hat{k}_{i}(t)=0, \quad t\in(T,T+ \delta ]. \end{cases} $$

We let

$$ \textstyle\begin{cases} \bar{p}(t) =B_{1}R^{-1}_{1}B^{\tau}_{1} \hat{p}_{1}(t)+B_{2}R^{-1} _{2}B^{\tau}_{2} \hat{p}_{2}(t), \\ \bar{q}(t) =B_{1}R^{-1}_{1}B^{ \tau}_{1} \hat{q}_{1}(t)+B_{2}R^{-1}_{2}B^{\tau}_{2} \hat{q}_{2}(t), \\ \bar{k}(t) =B_{1}R^{-1}_{1}B^{\tau}_{1} \hat{k}_{1}(t)+B_{2}R^{-1} _{2}B^{\tau}_{2} \hat{k}_{2}(t). \end{cases} $$
(20)

By Itô’s formula and the uniqueness result of the solution of the SDDE and ABSDE for fixed \((\hat{x}(\cdot),\hat{y}(\cdot),\hat{z}( \cdot))\), we have

$$ \textstyle\begin{cases} \tilde{p}(t)=\bar{p}(t) =B_{1}R^{-1}_{1}B^{\tau}_{1} \hat{p}_{1}(t)+B _{2}R^{-1}_{2}B^{\tau}_{2} \hat{p}_{2}(t), \\ \tilde{q}(t)=\bar{q}(t) =B _{1}R^{-1}_{1}B^{\tau}_{1} \hat{q}_{1}(t)+B_{2}R^{-1}_{2}B^{\tau} _{2}\hat{q}_{2}(t), \\ \tilde{k}(t)=\bar{k}(t) =B_{1}R^{-1}_{1}B^{ \tau}_{1} \hat{k}_{1}(t)+B_{2}R^{-1}_{2}B^{\tau}_{2} \hat{k}_{2}(t). \end{cases} $$
(21)

Then \((\hat{x},\hat{y},\hat{z},\hat{p}_{i},\hat{q}_{i},\hat{k}_{i})\) (\(i=1,2\)) is a solution of (18). Consequently, the existence and uniqueness of a solution of (19) is equivalent to that of (18). According to the monotonicity condition in [19, 28], it is easy to check that DFBSDDE (19) satisfies this condition and hence has a unique solution. So TFBSDDE (18) admits a unique solution, which completes the proof. □

5 An example in finance

This section is devoted to studying a pension fund management problem under partial information with a time-delayed surplus term arising from the financial market, which naturally motivates the above theoretical research. The financial market is the Black-Scholes market, while the pension fund management framework comes from Federico [31]. To get closer to reality, we study this problem in the case when the performance criterion \(J_{i}(v_{1}(\cdot),v_{2}(\cdot))\) involves a measure of risk. If we interpret risk in the sense of a convex risk measure, it can be represented by a nonlinear expectation called the g-expectation, which can also be used to represent a nonlinear human preference in behavioral economics (see [3, 32–35] and the recent articles [25, 36, 37]). Now we introduce it in detail.

In the following, we only consider the one-dimensional case just for simplicity of notations. First, we give the definition of convex risk measure and its connection with g-expectation.

Definition 5.1

([32])

Let \(\mathbb{F}\) be the family of all lower bounded \(\mathcal{F}_{T}\)-measurable random variables. A convex risk measure on \(\mathbb{F}\) is a functional \(\rho: \mathbb{F}\rightarrow\mathbb{R}\) such that

  1. (a)

    (convexity) \(\rho(\lambda X_{1}+(1-\lambda)X_{2})\leq\lambda \rho(X_{1})+(1-\lambda)\rho(X_{2})\), \(X_{1},X_{2}\in\mathbb{F}\), \(\lambda\in(0,1)\),

  2. (b)

    (monotonicity) if \(X_{1}\leq X_{2}\) a.e., then \(\rho(X_{1})\geq \rho(X_{2})\), \(X_{1},X_{2}\in\mathbb{F}\),

  3. (c)

    (translation invariance) \(\rho(X+m)=\rho(X)-m\), \(X\in \mathbb{F}\), \(m\in\mathbb{R}\).

The convex risk measure is a useful tool widely applied in the measurement of financial positions. For the financial interpretation (see, e.g., [34]), property (a) in Definition 5.1 means that the risk of a diversified position is not more than the weighted average of the individual risks; (b) means that if portfolio \(X_{2}\) is better than \(X_{1}\) under almost all scenarios, then the risk of \(X_{2}\) should not be greater than the risk of \(X_{1}\); (c) implies that the addition of a sure amount of capital reduces the risk by the same amount. The convex risk measure is also a generalization of the concept of coherent risk measure in [38]. Here, if \(\rho(X)\leq0\), then position X is called acceptable, and \(-\rho(X)\) represents the maximal amount that investors can withdraw without changing the acceptability of X. If \(\rho(X)\geq0\), then X is called unacceptable and \(\rho(X)\) represents the minimal extra capital that investors have to add to the position X to make it acceptable.

Consider the following BSDE:

$$ \textstyle\begin{cases} -dy(t) =g (t,y(t),z(t) )\,dt-z(t)\,dW(t), \\ y(T) =\xi. \end{cases} $$
(22)

Under certain assumptions, (22) has a unique solution \((y(\cdot),z(\cdot))\). If, in addition, \(g(t,y,z)|_{z=0}\equiv0\), we can make the following definition.

Definition 5.2

([3, 35])

For each \(\xi\in\mathcal{F}_{T}\), we call

$$ \mathcal{E}_{g}(\xi)\triangleq y(0) $$

the generalized expectation (g-expectation) of ξ related to g.

The well-known Allais [39] and Ellsberg [40] paradoxes indicate that the classical von Neumann-Morgenstern linear expected utility theory (here we mean that the linear expectation \(\mathbb{E}\) is used) cannot fully express people’s subjective preferences or criteria involving risk. One therefore naturally tries to replace \(\mathbb{E}\) by some kind of nonlinear expectation. From Definition 5.2, the g-expectation \(\mathcal{E}_{g}(\cdot)\) based on the BSDE possesses all the properties that \(\mathbb{E}\) has, except linearity (see [3]). It can be seen as a (subjective) nonlinear preference and is closely related to the stochastic differential utility (see, e.g., [4]). It is obvious that when \(g(\cdot)=0\), \(\mathcal{E}_{g}\) reduces to the classical expectation \(\mathbb{E}\).

Here, we use the g-expectation as a nonlinear measure of risk and give the connection between the convex risk measure and the g-expectation as follows (see [33, 41] for more details).

Definition 5.3

The risk \(\rho(\xi)\) of the random variable \(\xi\in\mathcal{L} ^{2}_{\mathcal{F}}(\Omega;\mathbb{R})\) (ξ can be regarded as a financial position in the financial market) is defined by

$$ \rho(\xi)\triangleq\mathcal{E}_{g}[-\xi]=y(0), $$

where \(\mathcal{E}_{g}[\cdot]\) is defined in Definition 5.2 with ξ replaced by −ξ. Here, g is independent of y and is convex with respect to z.
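
For illustration, consider the linear driver \(g(t,y,z)=g(t)z\) with \(g(\cdot)\) deterministic and bounded, which is precisely the case used in the application below. Then the linear BSDE (22) can be solved explicitly (see, e.g., [4]), and the risk measure becomes a weighted expectation:

$$ \rho(\xi)=\mathcal{E}_{g}[-\xi]=\mathbb{E} \bigl[\Gamma(T) (-\xi) \bigr], \quad\quad \Gamma(T)=\exp \biggl\{ \int_{0}^{T}g(s)\,dW(s)-\frac{1}{2} \int_{0}^{T}g ^{2}(s)\,ds \biggr\} . $$

Since \(\Gamma(T)>0\) and \(\mathbb{E}[\Gamma(T)]=1\), the properties of Definition 5.1 can be checked directly in this case; for instance, \(\rho(\xi+m)=\mathbb{E}[\Gamma(T)(-\xi-m)]=\rho(\xi)-m\) gives the translation invariance (c).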

Assume that there are two assets in the financial market available to the pension fund managers:

$$ \textstyle\begin{cases} dS_{0}(t) = r(t)S_{0}(t)\,dt, \\ dS_{1}(t) = \mu(t)S_{1}(t)\,dt+\sigma(t)S _{1}(t)\,dW(t), \\ S_{0}(0) = 1,\quad\quad S_{1}(0)> 0, \end{cases} $$

where \(S_{1}(\cdot)\) is the price of the risky asset and \(S_{0}( \cdot)\) is the price of the risk-free asset, \(r(\cdot)\) is the interest rate, \(\mu(\cdot)\) is the appreciation rate of the risky asset, and \(\sigma(\cdot)\) is its volatility coefficient. We assume that \(\mu(\cdot)\), \(r(\cdot)\), and \(\sigma(\cdot)\) are deterministic and bounded, and that \(\sigma^{-1}(\cdot)\) is also bounded.

Suppose that there are two pension fund managers (players) who jointly invest in the risk-free and risky assets. In a real financial market, it is reasonable for the managers to make decisions based on the historical prices of the risky asset \(S_{1}(\cdot)\). So the observable filtration can be set as \(\mathcal{G}_{t}=\sigma\{S_{1}(s)|0 \leq s\leq t\}\); since \(\mu(\cdot)\) and \(\sigma(\cdot)\) are deterministic and \(\sigma^{-1}(\cdot)\) is bounded, \(W(\cdot)\) can be recovered from the observed prices via \(W(t)=\int_{0}^{t}\sigma^{-1}(s) (dS_{1}(s)/S_{1}(s)-\mu(s)\,ds )\), so that \(\mathcal{G}_{t}=\mathcal{F} _{t}^{W}=\sigma\{W(s)|0\leq s\leq t\}\). The pension fund wealth \(x(\cdot)\) can be modeled by

$$ \textstyle\begin{cases} dx(t) = (r(t)x(t)+ ( \mu(t)-r(t) )\pi(t)-\alpha (x(t)-x(t-\delta) )-c _{1}(t)-c_{2}(t) )\,dt \\ \hphantom{dx(t) =}{} +\pi(t)\sigma(t)\,dW(t)+\bar{\sigma}(t)\,d\bar{W}(t), \\ x(0) =x_{0}>0, \quad\quad x(t)=0,\quad t\in[-\delta,0). \end{cases} $$
(23)

Here, we denote by \(\pi(t)\) the amount of the portfolio invested in the risky asset at time t, and \(\alpha(x(t)-x(t-\delta))\) represents the surplus premium to fund members, or their capital injections, depending on the performance of the fund growth during the past period with parameter \(\alpha>0\) (see, e.g., [16, 17]). Meanwhile, there is an instantaneous consumption rate \(c_{i}(t)\) for manager i (\(i=1,2\)). We assume that the value of \(x(\cdot)\) is affected not only by the risky asset, but also by some practical phenomena such as the physical inaccessibility of some economic parameters, inaccuracies in measurement, insider trading, information asymmetry, etc. (see, e.g., [42, 43]). Here, \(\bar{\sigma}(\cdot)\) represents the instantaneous volatility caused by these unobservable factors, and \(\mathcal{F}_{t}^{\bar{W}}\) represents the unobservable filtration generated by \(\bar{W}(\cdot)\). We require \(x(t)\) to be adapted to the filtration \(\mathcal{F}_{t}\) generated by the Brownian motion \((W(\cdot),\bar{W}(\cdot))\), and the control processes \(c_{i}(t)\) (\(i=1,2\)) to be adapted to the observation filtration \({\mathcal{G}_{t}\subseteq\mathcal{F}_{t}}\).

The control process \(c_{i}(\cdot)\) (\(i=1,2\)) is called admissible for manager i if \(c_{i}(t)>0\) for all t, \(c_{i}(\cdot)\) is adapted to the filtration \(\mathcal{G}_{t}\), and \(c_{i}(\cdot)\in\mathbb{L}^{2}(0,T; \mathbb{R})\). The family of admissible control pairs \((c_{1}(\cdot),c _{2}(\cdot))\) is denoted by \(\mathcal{C}_{1}\times\mathcal{C}_{2}\).
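
As an illustration (not part of the analysis below), the following is a minimal Euler-Maruyama sketch of the delayed wealth dynamics (23). All parameter values and the frozen portfolio and consumption rates are hypothetical; the only point is to show how the delay term \(x(t-\delta)\) and the initial condition \(x(t)=0\) on \([-\delta,0)\) enter such a discretization.

    import numpy as np

    # Minimal Euler-Maruyama sketch of the delayed wealth dynamics (23) with
    # hypothetical constant coefficients and frozen controls.
    T, delta, N = 1.0, 0.1, 1000
    dt = T / N
    d = int(round(delta / dt))                 # delay measured in grid steps
    r, mu, sigma, sigma_bar = 0.03, 0.08, 0.2, 0.1
    alpha, x0 = 0.2, 1.0
    pi, c1, c2 = 0.5, 0.02, 0.02               # frozen portfolio and consumption rates

    rng = np.random.default_rng(1)
    dW = rng.normal(0.0, np.sqrt(dt), N)       # observable noise W
    dWb = rng.normal(0.0, np.sqrt(dt), N)      # unobservable noise W-bar

    # store x on the extended index set {-d, ..., 0, ..., N}; x = 0 before time 0
    x = np.zeros(N + d + 1)
    x[d] = x0
    for n in range(N):
        xt, xt_delay = x[n + d], x[n]          # x(t) and x(t - delta)
        drift = r * xt + (mu - r) * pi - alpha * (xt - xt_delay) - c1 - c2
        x[n + d + 1] = xt + drift * dt + pi * sigma * dW[n] + sigma_bar * dWb[n]

    print(x[d:][:5])                            # wealth path on [0, T]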

We assume that the pension fund managers hope for more terminal capital with less risk and more consumption \(c_{i}(\cdot)\). According to Definitions 5.1 and 5.3, we can define the cost functional as

$$ J^{g}_{i} \bigl(c_{1}( \cdot),c_{2}(\cdot) \bigr)=-K_{i}\mathcal{E}_{g} \bigl[-x(T) \bigr]+ \mathbb{E} \int_{0}^{T}e^{-\beta t}L_{i} \frac{c_{i}(t)^{\gamma}}{ \gamma}\,dt, \quad i=1,2, $$
(24)

where \(K_{i}\), \(L_{i}\) are positive constants representing the different extents of the two managers’ preferences, β is a discount factor, and \(1-\gamma\in(0,1)\) is a constant called the Arrow-Pratt index of risk aversion. Here, we take g to be of the linear form \(g(\cdot, y(\cdot), z( \cdot))=g(\cdot)z(\cdot)\), where \(g(\cdot)\) is a deterministic bounded coefficient.

Then our problem is naturally to find an equilibrium point \((c_{1} ^{*}(\cdot),c_{2}^{*}(\cdot))\in\mathcal{C}_{1}\times\mathcal{C} _{2}\) such that

$$ \textstyle\begin{cases} J_{1}^{g} (c_{1}^{*}(\cdot),c_{2}^{*}( \cdot) ) =\sup_{c_{1}\in\mathcal{C}_{1}}J_{1}^{g} (c_{1}(\cdot),c _{2}^{*}( \cdot) ), \\ J_{2}^{g} (c_{1}^{*}( \cdot),c_{2}^{*}(\cdot) ) =\sup_{c_{2}\in\mathcal {C}_{2}}J_{2}^{g} (c_{1}^{*}(\cdot),c_{2}( \cdot) ). \end{cases} $$

Then our problem can be reformulated as

$$ \textstyle\begin{cases} dx(t) = (r(t)x(t)+ ( \mu(t)-r(t) )\pi(t)-\alpha (x(t)-x(t-\delta) ) \\ \hphantom{dx(t) =}{}-c_{1}(t)-c_{2}(t) )\,dt+\pi(t)\sigma(t)\,dW(t)+\bar{ \sigma}(t)\,d\bar{W}(t), \\ -dy(t) =g(t)z(t)\,dt-z(t)\,dW(t),\quad t\in[0,T], \\ x(0) = x_{0}, \quad\quad x(t)=0,\quad t\in[-\delta,0), \\ y(T) =-x(T), \end{cases} $$
(25)

and

$$ J^{g}_{i} \bigl(c_{1}( \cdot),c_{2}(\cdot) \bigr)=\mathbb{E} \int_{0}^{T}e^{- \beta t}L_{i} \frac{c_{i}(t)^{\gamma}}{\gamma}\,dt-K_{i}y(0), \quad i=1,2. $$
(26)
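
Before proceeding, we record a short consequence of the explicit solution of linear BSDEs (see, e.g., [4]): since the driver in (25) is \(g(t)z(t)\) with \(g(\cdot)\) deterministic and \(y(T)=-x(T)\), we have \(y(0)=-\mathbb{E}[\Gamma(T)x(T)]\), where \(\Gamma(T)\) is as in the worked example after Definition 5.3. Hence the cost functional (26) can equivalently be written as

$$ J^{g}_{i} \bigl(c_{1}( \cdot),c_{2}(\cdot) \bigr)=\mathbb{E} \int_{0}^{T}e^{- \beta t}L_{i} \frac{c_{i}(t)^{\gamma}}{\gamma}\,dt+K_{i}\mathbb{E} \bigl[\Gamma(T)x(T) \bigr], \quad i=1,2, $$

so each manager maximizes expected discounted consumption utility plus a Γ-weighted terminal wealth. This is consistent with the adjoint process \(p_{i}(\cdot)\) introduced below, which satisfies \(p_{i}(T)=K_{i}\Gamma(T)\).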

Now we will apply the theoretical results obtained in Section 3 to solve the above game problem. The Hamiltonian function is in the form of

$$\begin{aligned} & H_{i} \bigl(t,x(t),y(t),z(t),x_{\delta}(t),c_{1}(t),c_{2}(t);p_{i}(t),q _{i}(t),k_{i}(t),\bar{k}_{i}(t) \bigr) \\ &\quad =q_{i}(t) \bigl[r(t)x(t)+ \bigl(\mu(t)-r(t) \bigr) \pi(t)- \alpha \bigl(x(t)-x(t-\delta) \bigr)-c_{1}(t)-c_{2}(t) \bigr] \\ & \quad\quad{} +k_{i}(t)\pi(t)\sigma(t)+\bar{k}_{i}(t) \bar{ \sigma}(t)-p_{i}(t)g(t)z(t)+e ^{-\beta t}L_{i} \frac{c_{i}(t)^{\gamma}}{\gamma}, \end{aligned}$$

where the adjoint process \((p_{i}(\cdot),q_{i}(\cdot),k_{i}(\cdot), \bar{k}_{i}(\cdot))\) satisfies

$$ \textstyle\begin{cases} dp_{i}(t) =g(t)p_{i}(t)\,dW(t), \\ -dq_{i}(t) = \{ (r(t)-\alpha )q_{i}(t)+ \alpha\mathbb{E}^{\mathcal{F}_{t}} [q_{i}(t+\delta) ] \}\,dt-k_{i}(t)\,dW(t)- \bar{k}_{i}(t)\,d\bar{W}(t), \\ p_{i}(0) =K_{i}, \\ q_{i}(T) =p_{i}(T), \quad\quad q_{i}(t)=k_{i}(t)= \bar{k}_{i}(t)=0,\quad t\in(T,T+\delta]\ (i=1,2). \end{cases} $$

Then we use the necessary maximum principle (Theorem 3.1) to find a candidate equilibrium point:

$$ \begin{gathered} c_{1}^{*}(t)= \bigl(L_{1}^{-1}e^{\beta t}\hat{q}_{1}(t) \bigr)^{ \frac{1}{\gamma-1}}, \\ c_{2}^{*}(t)= \bigl(L_{2}^{-1}e^{\beta t} \hat{q} _{2}(t) \bigr)^{\frac{1}{\gamma-1}}, \end{gathered} $$
(27)

where \(\hat{q}_{i}(t)=\mathbb{E}[q_{i}(t)|\mathcal{G}_{t}]\) (\(i=1,2\)).
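For the reader’s convenience, the candidate (27) can be read off from the first-order condition in the necessary maximum principle: since \(c_{i}(\cdot)\) is \(\mathcal{G}_{t}\)-adapted, setting the conditional expectation of \(\partial H_{i}/\partial c_{i}\) given \(\mathcal{G}_{t}\) to zero gives

$$ \mathbb{E} \bigl[-q_{i}(t)+e^{-\beta t}L_{i}c_{i}(t)^{\gamma-1} \mid \mathcal{G}_{t} \bigr]=0, $$

and since the term \(e^{-\beta t}L_{i}c_{i}(t)^{\gamma-1}\) is \(\mathcal{G}_{t}\)-measurable, this reduces to \(e^{-\beta t}L_{i}c_{i}(t)^{\gamma-1}=\hat{q}_{i}(t)\), which yields the expressions in (27).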

Now we have to deal with \(\hat{q}_{i}(t)\), the optimal filter of \(q_{i}(t)\) with respect to the observation filtration \(\mathcal{G}_{t}\). We also set \(\hat{p}_{i}(t)=\mathbb{E}[p_{i}(t)|\mathcal{G}_{t}]\). Note that

$$ \begin{aligned} \mathbb{E} \bigl\{ \mathbb{E} \bigl[q_{i}(t+ \delta)|\mathcal{F}_{t} \bigr]|\mathcal{G} _{t} \bigr\} = \mathbb{E} \bigl[q_{i}(t+\delta)|\mathcal{G}_{t} \bigr]= \mathbb{E} \bigl\{ \mathbb{E} \bigl[q_{i}(t+\delta)| \mathcal{G}_{t+\delta} \bigr]|\mathcal{G}_{t} \bigr\} =\mathbb{E} \bigl[\hat{q}_{i}(t+ \delta)|\mathcal{G}_{t} \bigr]. \end{aligned} $$

Then, by Theorem 8.1 in [30], we have

$$ \textstyle\begin{cases} d\hat{p}_{i}(t) =g(t) \hat{p}_{i}(t)\,dW(t), \\ -d\hat{q}_{i}(t) = \{ (r(t)- \alpha ) \hat{q}_{i}(t)+\alpha\mathbb{E}^{\mathcal{G}_{t}} [\hat{q} _{i}(t+\delta) ] \}\,dt-\hat{k}_{i}(t)\,dW(t),\quad t \in[0,T], \\ \hat{p} _{i}(0) =K_{i}, \\ \hat{q}_{i}(T) =\hat{p}_{i}(T), \quad\quad \hat{q}_{i}(t)= \hat{k}_{i}(t)=0,\quad t\in(T,T+\delta]\ (i=1,2). \end{cases} $$
(28)

From (28), we can derive the explicit expression of \(\hat{p}_{i}(t)\) as

$$ \hat{p}_{i}(t)=K_{i}\exp \biggl\{ \int_{0}^{t}g(s)\,dW(s)-\frac{1}{2} \int_{0} ^{t}g^{2}(s)\,ds \biggr\} >0, \quad t\in[0,T], $$

which is a \(\mathcal{G}_{t}\)-exponential martingale.

By Theorem 5.1 in [13], we can prove \(\hat{q}_{i}(t)\geq0\), \(t\in[0,T]\); in fact, the explicit expressions derived below show that \(\hat{q}_{i}(t)>0\), so \(c^{*}_{i}(t)\) in (27) is well defined and strictly positive for all \(t\in[0,T]\). Next, we solve the anticipated BSDE for \(\hat{q}_{i}(t)\) recursively. This method can also be found in [44, 45].

(1) When \(t\in[T-\delta,T]\), the ABSDE in (28) becomes a standard BSDE (without anticipation):

$$ \hat{q}_{i}(t)=\hat{p}_{i}(T)+ \int_{t}^{T} \bigl(r(s)-\alpha \bigr) \hat{q}_{i}(s)\,ds- \int_{t}^{T}\hat{k}_{i}(s)\,dW(s),\quad t \in[T-\delta,T]. $$

Obviously, we have

$$ \hat{q}_{i}(t)=\exp \biggl\{ \int_{t}^{T} \bigl(r(s)-\alpha \bigr)\,ds \biggr\} \mathbb{E}^{ \mathcal{G}_{t}} \bigl[\hat{p}_{i}(T) \bigr]=\exp \biggl\{ \int_{t}^{T} \bigl(r(s)-\alpha \bigr)\,ds \biggr\} \hat{p}_{i}(t),\quad t\in[T-\delta,T]. $$

From Proposition 5.3 in [4], \((\hat{q}_{i}(t), \hat{k}_{i}(t))\) is Malliavin differentiable and \(\{D_{t}\hat{q}_{i}(t); T-\delta\leq t\leq T\}\) provides a version of \(\{\hat{k}_{i}(t);T-\delta\leq t\leq T\}\), i.e.,

$$ \hat{k}_{i}(t)=D_{t}\hat{q}_{i}(t)=\exp \biggl\{ \int_{t}^{T} \bigl(r(s)-\alpha \bigr)\,ds \biggr\} D_{t}\hat{p}_{i}(t),\quad t\in[T-\delta,T]. $$

(2) If we have solved ABSDE (28) on the interval \([T-n\delta,T-(n-1)\delta]\) (\(n=1,2,\ldots\)), and the solution \(\{( \hat{q}_{i}(t),\hat{k}_{i}(t));T-n\delta\leq t\leq T-(n-1)\delta\}\) is Malliavin differentiable, then we continue to consider the solvability on the next interval \([T-(n+1)\delta,T-n\delta]\), where we can rewrite ABSDE (28) as follows:

$$ \hat{q}_{i}(t)=\hat{q}_{i}(T-n\delta)+ \int_{t}^{T-n\delta} \bigl\{ \bigl(r(s)- \alpha \bigr) \hat{q}_{i}(s)+\alpha\mathbb{E}^{\mathcal{G}_{s}} \bigl[\hat{q} _{i}(s+\delta) \bigr] \bigr\} \,ds- \int_{t}^{T-n\delta}\hat{k}_{i}(s)\,dW(s). $$

We note that \(\{(\hat{q}_{i}(s+\delta),\hat{k}_{i}(s+\delta));t \leq s\leq T-n\delta\}\) has already been solved and is Malliavin differentiable. So the same argument shows that \(\{(\hat{q}_{i}(t), \hat{k}_{i}(t));T-(n+1)\delta\leq t\leq T-n\delta\}\) is Malliavin differentiable, and

$$\begin{aligned}& \begin{aligned} \hat{q}_{i}(t)& =\exp \biggl\{ \int_{t}^{T-n\delta} \bigl(r(s)-\alpha \bigr)\,ds \biggr\} \mathbb{E}^{\mathcal{G}_{t}} \bigl[\hat{q}_{i}(T-n\delta) \bigr] \\ &\quad{} +\alpha \int _{t}^{T-n\delta}\exp \biggl\{ \int_{t}^{s} \bigl(r(\eta)-\alpha \bigr)\,d\eta \biggr\} \mathbb{E}^{\mathcal{G}_{t}} \bigl[\hat{q}_{i}(s+\delta) \bigr] \,ds, \end{aligned} \\& \begin{aligned} \hat{k}_{i}(t)& =\exp \biggl\{ \int_{t}^{T-n\delta} \bigl(r(s)-\alpha \bigr)\,ds \biggr\} \mathbb{E}^{\mathcal{G}_{t}} \bigl[D_{t}\hat{q}_{i}(T-n\delta) \bigr] \\ &\quad{} +\alpha \int_{t}^{T-n\delta}\exp \biggl\{ \int_{t}^{s} \bigl(r(\eta)-\alpha \bigr)\,d\eta \biggr\} \mathbb{E}^{\mathcal{G}_{t}} \bigl[D_{t}\hat{q}_{i}(s+ \delta) \bigr]\,ds \end{aligned} \end{aligned}$$

for any \(t\in[T-(n+1)\delta,T-n\delta]\), \(i=1,2\).
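
Since \(\hat{p}_{i}(\cdot)\) is a \(\mathcal{G}_{t}\)-martingale, the above recursion implies that \(\hat{q}_{i}(t)=\varphi(t)\hat{p}_{i}(t)\) for a deterministic function φ satisfying (almost everywhere) \(\varphi'(t)=-(r(t)-\alpha)\varphi(t)-\alpha\varphi(t+\delta)\) with \(\varphi(T)=1\) and \(\varphi(t)=0\) for \(t\in(T,T+\delta]\). The following is a minimal numerical sketch of this backward recursion with hypothetical constant coefficients; it is meant only as an illustration of how the equilibrium consumption (27) can be evaluated along a simulated path, not as part of the theoretical analysis.

    import numpy as np

    # Minimal sketch (hypothetical parameter values) of the filtered adjoint
    # equation (28) with constant coefficients.  Writing
    # q_hat(t) = phi(t) * p_hat(t) and using that p_hat is a G_t-martingale,
    # phi solves the deterministic delay ODE
    #     phi'(t) = -(r - alpha) * phi(t) - alpha * phi(t + delta),
    # with phi(T) = 1 and phi(t) = 0 for t in (T, T + delta].
    T, delta = 1.0, 0.1        # horizon and delay (delta divides T here)
    r, alpha = 0.03, 0.2       # interest rate and delay parameter in (23)
    beta, gamma = 0.05, 0.5    # discount factor and utility exponent in (24)
    K1, L1, g = 1.0, 1.0, 0.3  # preference weights and linear driver coefficient

    N = 1000                   # time steps, chosen so that delta is a multiple of dt
    dt = T / N
    d = int(round(delta / dt)) # delay measured in grid steps

    # phi on the extended grid [0, T + delta]; zero beyond T, one at T
    phi = np.zeros(N + d + 1)
    phi[N] = 1.0
    for n in range(N - 1, -1, -1):   # backward Euler step for the delay ODE
        dphi = -(r - alpha) * phi[n + 1] - alpha * phi[n + 1 + d]
        phi[n] = phi[n + 1] - dphi * dt

    # one simulated path of the observable Brownian motion W gives p_hat_1(t)
    rng = np.random.default_rng(0)
    dW = rng.normal(0.0, np.sqrt(dt), N)
    W = np.concatenate(([0.0], np.cumsum(dW)))
    t_grid = np.linspace(0.0, T, N + 1)
    p_hat = K1 * np.exp(g * W - 0.5 * g ** 2 * t_grid)  # exponential martingale

    # candidate equilibrium consumption (27) for manager 1 along this path
    q_hat = phi[: N + 1] * p_hat
    c1_star = (np.exp(beta * t_grid) * q_hat / L1) ** (1.0 / (gamma - 1.0))
    print(c1_star[:5])

Here only the W-driven component of \(\hat{p}_{i}\) appears, in line with (28); replacing \(K_{1}\), \(L_{1}\) by \(K_{2}\), \(L_{2}\) gives \(c_{2}^{*}\) along the same path.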

We notice that all the conditions of the verification theorem (Theorem 3.2) are satisfied; hence Theorem 3.2 implies that \((c_{1}^{*}(\cdot),c_{2}^{*}(\cdot))\) given by (27) is an equilibrium point.

Proposition 5.1

The pension fund investment problem (23)-(24) admits an equilibrium point \((c_{1}^{*}(\cdot), c_{2}^{*}(\cdot))\), which is given by (27).

6 Conclusions

To the author’s best knowledge, this article is the first attempt to study the non-zero sum differential game problem of AFBSDDEs under partial information. Four distinguishing features of this paper are worth highlighting. First, we considered a time-delayed system, which has wide applications and can describe various past-dependent situations. Second, we studied a game problem with multiple players and aimed to find the Nash equilibrium point rather than an optimal control; we established a necessary condition (maximum principle) and a sufficient condition (verification theorem) by virtue of the duality and convex variation approach. Third, we discussed an LQ system under partial information; applying the stochastic filtering formula, we derived the filtering equations and proved the existence and uniqueness of their solutions and of the corresponding Nash equilibrium point. Fourth, we solved a pension fund management problem with a nonlinear expectation (convex risk measure) to measure the risk and obtained the explicit solution.

References

  1. Pardoux, E, Peng, S: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55-61 (1990)

  2. Peng, S: Backward stochastic differential equations and applications to optimal control. Appl. Math. Optim. 27, 125-144 (1993)

  3. Peng, S: Backward SDE and related g-expectation. In: Backward Stochastic Differential Equations, pp. 141-159 (1997)

  4. Karoui, NE, Peng, S, Quenez, MC: Backward stochastic differential equations in finance. Math. Finance 7, 1-71 (1997)

  5. Ma, J, Yong, J: Forward-Backward Stochastic Differential Equations and Their Applications. Springer, New York (1999)

  6. Wang, G, Wu, Z: The maximum principles for stochastic recursive optimal control problems under partial information. IEEE Trans. Autom. Control 54, 1230-1242 (2009)

  7. Cvitanic, J, Ma, J: Hedging options for a large investor and forward-backward SDE’s. Ann. Appl. Probab. 6, 370-398 (1996)

  8. Antonellia, F, Barucci, E, Mancinoc, ME: Asset pricing with a forward-backward stochastic differential utility. Econ. Lett. 72, 151-157 (2001)

  9. Yong, J, Zhou, X: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York (1999)

  10. Mohammed, SEA: Stochastic differential systems with memory: theory, examples and applications. In: Stochastic Analysis and Related Topics VI. Progress in Probability, vol. 42, pp. 1-77. Birkhäuser, Boston (1998)

  11. Arriojas, M, Hu, Y, Mohammed, SEA, Pap, G: A delayed Black and Scholes formula. Stoch. Anal. Appl. 25, 471-492 (2006)

  12. Chen, L, Wu, Z: Maximum principle for the stochastic optimal control problem with delay and application. Automatica 46, 1074-1080 (2010)

  13. Peng, SG, Yang, Z: Anticipated backward stochastic differential equations. Ann. Probab. 37, 877-902 (2009)

  14. Øksendal, B, Sulem, A: Optimal Control of Predictive Mean-Field Equations and Applications to Finance. Springer, Norway (2015)

  15. Kyle, AS: Continuous auctions and insider trading. Econometrica 53, 1315-1336 (1985)

  16. Wu, S, Wang, G: Optimal control problem of backward stochastic differential delay equation under partial information. Syst. Control Lett. 82, 71-78 (2015)

  17. Shi, J, Wang, G: A nonzero sum differential game of BSDE with time-delayed generator and applications. IEEE Trans. Autom. Control 61(7), 1959-1964 (2016)

  18. Wu, S, Shu, L: Non-zero sum differential games of backward stochastic differential delay equations under partial information. Asian J. Control 19(1), 316-324 (2017)

  19. Chen, L, Wu, Z: A type of generalized forward-backward stochastic differential equations and applications. Chin. Ann. Math., Ser. B 32, 279-292 (2011)

  20. Huang, J, Li, X, Shi, J: Forward-backward linear quadratic stochastic optimal control problem with delay. Syst. Control Lett. 61, 623-630 (2012)

  21. Von Neumann, J, Morgenstern, O: The Theory of Games and Economic Behavior. Princeton University Press, Princeton (1944)

  22. Nash, J: Non-cooperative games. Ann. Math. 54, 286-295 (1951)

  23. Yu, Z: Linear-quadratic optimal control and nonzero-sum differential game of forward-backward stochastic system. Asian J. Control 14, 173-185 (2012)

  24. Øksendal, B, Sulem, A: Forward-backward stochastic differential games and stochastic control under model uncertainty. J. Optim. Theory Appl. 161, 22-55 (2014)

  25. Hui, E, Xiao, H: Maximum principle for differential games of forward-backward stochastic systems with applications. J. Math. Anal. Appl. 386, 412-427 (2012)

  26. Chen, L, Yu, Z: Maximum principle for nonzero-sum stochastic differential game with delays. IEEE Trans. Autom. Control 60, 1422-1426 (2015)

  27. Xiong, J, Zhou, X: Mean-variance portfolio selection under partial information. SIAM J. Control Optim. 46, 156-175 (2007)

  28. Huang, J, Shi, J: Maximum principle for optimal control of fully coupled forward-backward stochastic differential delayed equations. ESAIM, Contrôle Optim. Calc. Var. 18, 1073-1096 (2012)

  29. Liptser, RS, Shiryayev, AN: Statistics of Random Processes. Springer, New York (1977)

  30. Xiong, J: An Introduction to Stochastic Filtering Theory. Oxford University Press, Oxford (2008)

  31. Federico, S: A stochastic control problem with delay arising in a pension fund model. Finance Stoch. 15, 421-459 (2011)

  32. Föllmer, H, Schied, A: Convex measure of risk and trading constraints. Finance Stoch. 2, 429-447 (2002)

  33. Gianin, ER: Risk measures via g-expectations. Insur. Math. Econ. 39, 19-34 (2006)

  34. Frittelli, M, Gianin, ER: Putting order in risk measures. J. Bank. Finance 26, 1473-1486 (2002)

  35. Peng, S: Nonlinear Expectations, Nonlinear Evaluations and Risk Measures: Stochastic Methods in Finance. Springer, Berlin (2004)

  36. An, TTK, Øksendal, B: A maximum principle for stochastic differential games with g-expectation and partial information. Stochastics 84, 137-155 (2012)

  37. Yong, J: Optimality variational principle for controlled forward-backward stochastic differential equations with mixed initial-terminal conditions. SIAM J. Control Optim. 48, 4119-4156 (2010)

  38. Artzner, P, Delbaen, F, Eber, J, Heath, D: Coherent measures of risk. Math. Finance 9, 203-228 (1999)

  39. Allais, M: Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’ecole Americaine. Econometrica 21, 503-546 (1953)

  40. Ellsberg, D: Risk, ambiguity, and the Savage axioms. Q. J. Econ. 75, 643-669 (1961)

  41. Jiang, L: Convexity, translation invariance and subadditivity for g-expectations and related risk measures. Ann. Appl. Probab. 18, 245-258 (2008)

  42. Lakner, P: Utility maximization with partial information. Stoch. Process. Appl. 56, 247-273 (1995)

  43. Huang, J, Wang, G, Wu, Z: Optimal premium policy of an insurance firm: full and partial information. Insur. Math. Econ. 47, 208-215 (2010)

  44. Yu, Z: The stochastic maximum principle for optimal control problems of delay systems involving continuous and impulse controls. Automatica 48, 2420-2432 (2012)

  45. Menoukeu Pamen, O: Optimal control for stochastic delay systems under model uncertainty: a stochastic differential game approach. J. Optim. Theory Appl. 167, 998-1031 (2015)

Acknowledgements

This work was supported by the Natural Science Foundation of China (No. 61573217, No. 11601285), the Natural Science Foundation of Shandong Province (No. ZR2016AQ13), the National High-level Personnel of Special Support Program of China, and the Chang Jiang Scholar Program of Chinese Education Ministry. The author would like to thank Prof. Zhen Wu for his valuable suggestions.

Author information

Contributions

All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yi Zhuang.

Ethics declarations

Competing interests

The author declares that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Zhuang, Y. Non-zero sum differential games of anticipated forward-backward stochastic differential delayed equations under partial information and application. Adv Differ Equ 2017, 383 (2017). https://doi.org/10.1186/s13662-017-1438-1
