Theory and Modern Applications

Backward reachability approach to state-constrained stochastic optimal control problem for jump-diffusion models

Abstract

In this paper, we consider the stochastic optimal control problem for jump-diffusion models with state constraints. In general, the value function of such a problem is only a discontinuous viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation, since regularity cannot be guaranteed at the boundary of the state constraint. By adapting stochastic target theory, we obtain an equivalent representation of the original value function by means of the backward reachable set. We then show that this backward reachable set can be characterized by the zero-level set of the auxiliary value function of an unconstrained stochastic control problem, which includes two additional unbounded controls as a consequence of the martingale representation theorem. We prove that the auxiliary value function is the unique continuous viscosity solution of the associated HJB equation, a second-order nonlinear partial integro-differential equation (PIDE). Our paper thus provides an explicit way to characterize the original (possibly discontinuous) value function as the zero-level set of the continuous solution of the auxiliary HJB equation. The proof of existence and uniqueness requires new techniques due to the unbounded control sets and the singularity of the corresponding Lévy measure in the nonlocal operator of the HJB equation.

1 Introduction

Let B and Ñ be a standard Brownian motion and an E-marked compensated Poisson random process, respectively, which are mutually independent. The problem studied in this paper is to minimize the following objective functional over \(u \in \mathcal{U}_{t,T}\):

$$\begin{aligned} J(t,a;u) = \mathbb{E} \biggl[ \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s + m \bigl(x_{T}^{t,a;u} \bigr) \biggr], \end{aligned}$$
(1.1)

subject to the \(\mathbb{R}^{n}\)-dimensional stochastic differential equation (SDE)

$$\begin{aligned} \textstyle\begin{cases} \mathrm{d} x_{s}^{t,a;u} = f(s,x_{s}^{t,a;u},u_{s})\,\mathrm{d} s + \sigma (s,x_{s}^{t,a;u},u_{s}) \,\mathrm{d} B_{s} \\ \hphantom{\mathrm{d} x_{s}^{t,a;u} =}{} + \int _{E} \chi (s,x_{s-}^{t,a;u},u_{s},e) \tilde{N}(\mathrm{d} e, \mathrm{d} s), \\ x_{t}^{t,a;u} = a, \end{cases}\displaystyle \end{aligned}$$
(1.2)

and the state constraint (Γ is a non-empty closed subset of \(\mathbb{R}^{n}\))

$$\begin{aligned} x_{s}^{t,a;u} \in \Gamma ,\quad \forall s \in [t,T], \mathbb{P}\text{-a.s.} \end{aligned}$$
(1.3)

The precise problem formulation is given in Sect. 2.2. The associated value function for (1.1) is defined by

$$\begin{aligned} V(t,a) := \inf_{u \in \mathcal{U}_{t,T}} \bigl\{ J(t,a;u)| x_{s}^{t,a;u} \in \Gamma , \mathbb{P}\text{-a.s.,} \forall s \in [t,T] \bigr\} . \end{aligned}$$
(1.4)

The problem in (1.4) can then be referred to as the stochastic optimal control problem for jump-diffusion systems with state constraints.
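To make the controlled dynamics concrete, the following is a minimal numerical sketch (not part of the paper's analysis) of an Euler-Maruyama discretization for a one-dimensional instance of (1.2). The E-marked compensated Poisson integral is approximated by a compound Poisson sum (intensity `lam`, i.i.d. marks from `mark_sampler`) minus a Monte Carlo estimate of its compensator; all coefficient choices are hypothetical toy inputs.

```python
import numpy as np

def simulate_path(a, t, T, u, f, sigma, chi, lam, mark_sampler,
                  n_steps=1000, rng=None):
    """Euler-Maruyama discretization of a scalar jump-diffusion of the
    form (1.2), with the E-marked compensated Poisson integral replaced
    by a compound Poisson sum minus an estimate of its compensator."""
    rng = np.random.default_rng(0) if rng is None else rng
    dt = (T - t) / n_steps
    path = np.empty(n_steps + 1)
    path[0] = x = a
    for k in range(n_steps):
        s = t + k * dt
        dB = rng.normal(0.0, np.sqrt(dt))
        # jumps of the Poisson random measure N in (s, s + dt]
        jump_sum = sum(chi(s, x, u, mark_sampler(rng))
                       for _ in range(rng.poisson(lam * dt)))
        # Monte Carlo estimate of the compensator int_E chi(s,x,u,e) pi(de)
        comp = (lam * np.mean([chi(s, x, u, mark_sampler(rng))
                               for _ in range(64)]) if lam > 0 else 0.0)
        x = x + f(s, x, u) * dt + sigma(s, x, u) * dB + jump_sum - comp * dt
        path[k + 1] = x
    return path

def satisfies_constraint(path, in_gamma):
    """Check the state constraint (1.3) along a discrete sample path."""
    return all(in_gamma(x) for x in path)
```

Note that `satisfies_constraint` checks (1.3) only on the discrete time grid of a single sample path; the constraint in (1.3) is required pathwise, \(\mathbb{P}\)-a.s., for all continuous times.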

The main results of the paper can be summarized as follows:

  • The first main result is that the value function in (1.4) can be equivalently represented by the zero-level set of W (see Theorems 3.1 and 3.2), i.e.,

    $$\begin{aligned} V(t,a) & = \inf \bigl\{ b \geq 0~| (a,b) \in \mathcal{R}_{t}^{\Gamma } \bigr\} = \inf \bigl\{ b \geq 0| W(t,a,b) = 0 \bigr\} , \end{aligned}$$
    (1.5)

    where \(\mathcal{R}_{t}^{\Gamma}\) is the backward reachable set of the stochastic target problem with state constraints (see (1.7)), and W (defined in (3.7)) is a continuous value function of the auxiliary stochastic control problem that includes unbounded control sets \(\mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\);

  • The second main result is that the auxiliary value function W is the unique continuous viscosity solution of the following Hamilton-Jacobi-Bellman (HJB) equation with suitable boundary conditions (see Theorems 4.1 and 5.1), where time and state arguments are suppressed:

    $$\begin{aligned} 0 = {}& {-} \partial _{t} W + \mathop{\sup _{u \in U}}_{\alpha \in \mathbb{R}^{r}, \beta \in G^{2}} \left \{ - \left \langle D W, \begin{bmatrix} f(u) \\ -l(u) \end{bmatrix} \right \rangle - \frac{1}{2} \operatorname{Tr} \left ( \begin{bmatrix} \sigma \sigma ^{\top}(u) & \sigma (u) \alpha \\ (\sigma (u) \alpha )^{\top }& \alpha ^{\top }\alpha \end{bmatrix} D^{2} W \right )\right . \\ & \left .{} - \int _{E} \left [ W \bigl(t,a + \chi (u,e), b+ \beta (e) \bigr) - W(t,a,b) - \left \langle DW, \begin{bmatrix} \chi (u,e) \\ \beta (e) \end{bmatrix} \right \rangle \right ]\pi (\mathrm{d} e) \right \} \\ & {} - d(a,\Gamma ), \end{aligned}$$
    (1.6)

    which is a second-order nonlinear partial integro-differential equation (PIDE) involving the two unbounded control variables \((\alpha ,\beta ) \in \mathbb{R}^{r} \times G^{2}\).

  • The first and second main results imply that we can characterize the original value function (1.4) using (1.5) and the solution of (1.6).

(Deterministic and stochastic) control problems with state constraints have been studied extensively in the literature; see [1, 7, 14, 15, 20, 22, 25, 27, 28, 33, 34] and the references therein. In particular, as discussed in [11, 15, 28, 33], under some conditions, the value function of the state-constrained stochastic control problem is only a discontinuous viscosity solution of the associated constrained HJB equation with a complex boundary condition on \(\partial \Gamma \) (the boundary of Γ), since regularity cannot be guaranteed on \(\partial \Gamma \). Moreover, the references above did not study an equivalent representation of the corresponding value function as a continuous function, their control spaces are bounded, and they considered only deterministic systems or SDEs in a Brownian setting without jumps. Viability theory for deterministic and stochastic systems can be viewed as an alternative approach to state-constrained problems [19, 35], and its extension to jump-diffusion models was studied in [31, 39]. However, these works focus only on the viability property of the state constraint (without optimizing the objective functional), their control spaces are bounded, and some additional technical assumptions (e.g., see [31, (H.3)]) are essentially required.

Recently, the state-constrained problem was studied via the backward reachability approach in [11]. One remarkable feature of [11] is that it provides an explicit method for characterizing the original (possibly discontinuous) value function in terms of the zero-level set of the auxiliary value function, which is continuous since it is induced from an unconstrained control problem. However, the model in [11] is an SDE driven by Brownian motion without jumps, a special case of (1.2). Moreover, the HJB equation in [11] is a local equation, again a special case of (1.6) without the nonlocal integral term (the second line of (1.6)). The aim of this paper is to generalize the results of [11] to jump-diffusion systems. As discussed below, these generalizations are not straightforward due to the jumps in (1.2) and the presence of the nonlocal operator in the HJB equation (1.6).

Our first main result given in (1.5) is obtained based on the stochastic target theory. In particular, using the equivalence relationship between stochastic optimal control and stochastic target problems, we show (1.5) (see Theorems 3.1 and 3.2), where \(\mathcal{R}_{t}^{\Gamma}\) is the backward reachable set with the state constraint given by

$$\begin{aligned} \mathcal{R}_{t}^{\Gamma} := {}&\bigl\{ (a,b) \in \mathbb{R}^{n} \times \mathbb{R}|\exists (u,\alpha ,\beta ) \in \mathcal{U}_{t,T} \times \mathcal{A}_{t,T} \times \mathcal{B}_{t,T} \text{ such that} \\ &{} y_{T;t,a,b}^{u,\alpha ,\beta} \geq m \bigl(x_{T}^{t,a;u} \bigr), \mathbb{P}\text{-a.s. and } x_{s}^{t,a;u} \in \Gamma , \forall s \in [t,T], \mathbb{P}\text{-a.s.} \bigr\} , \end{aligned}$$
(1.7)

with \((y_{s;t,a,b}^{u,\alpha ,\beta})_{s \in [t,T]}\) being an auxiliary state process controlled by additional control processes \((\alpha ,\beta ) \in \mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\) taking values in unbounded control spaces. Here, the main technical tool for showing the equivalence in (1.5) via (1.7) is the martingale representation theorem for general Lévy processes, through which the additional (unbounded) controls \((\alpha ,\beta ) \in \mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\) are introduced. It should be mentioned that [11] also used the result of [13] (where only \((u,\alpha ) \in \mathcal{U}_{t,T} \times \mathcal{A}_{t,T}\) appeared), and we extend [11] to jump-diffusion models. This extension is not straightforward, since we have to obtain new estimates for SDEs with jumps (Lemmas 2.1 and 3.1) and for the additional control variables \((\alpha ,\beta ) \in \mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\) (Remark 3.1). Beyond the application of the martingale representation theorem, these steps are essential to prove (1.5) and the properties of W in (1.5) (see Theorems 3.1 and 3.2 as well as the results in Sect. 3.3). Moreover, (1.5) in Theorem 3.2 relies on the existence of optimal controls for jump-diffusion systems, and our paper presents a new existence result for general optimal control problems with jump diffusions (see Theorem A.1 in the Appendix), which has not been reported in the existing literature. We note that the existence results for optimal controls in [11, 38] apply only to linear SDEs without jumps.

The second main result of this paper shows that the auxiliary value function W is the unique continuous viscosity solution of the HJB equation in (1.6) (see Theorems 4.1 and 5.1), where W is defined in Sect. 3.2 (see (3.7)). Therefore, using the solution of (1.6), the explicit characterization of the original value function V in (1.4) is obtained through (1.5). The proofs of existence and uniqueness of the viscosity solution in Theorems 4.1 and 5.1 must differ from those for the case without jumps in [11]. Specifically, Theorem 4.1, the existence of the viscosity solution of (1.6), requires the dynamic programming principle and the application of Itô's formula for general Lévy-type stochastic integrals to test functions. Unlike [11, Theorem 4.3], Theorem 4.1 has to handle two different stochastic integrals (with respect to the Brownian motion and the compensated Poisson process) and their quadratic variations to obtain the desired inequalities in the definition of viscosity solutions. Such an extended existence analysis is not presented in [11, Theorem 4.3], and our paper provides a different proof in Theorem 4.1.

Regarding the proof of uniqueness of the viscosity solution in Theorem 5.1, the approach developed for the case without jumps in [11, Theorem 4.6] (that also relies on [10, 17]) cannot be directly adopted, since the HJB equation in (1.6) includes the local term (the first line of (1.6)) and the nonlocal (integral) operator in terms of the singular Lévy measure π (the second line of (1.6)), where the latter is induced due to jump diffusions. Note also that in classical stochastic optimal control problems with jump diffusions without state constraints (\(\Gamma = \mathbb{R}^{n}\)), the corresponding control space is assumed to be a compact set [18, 31, 32, 35]. Hence, their approaches cannot be used directly to prove the uniqueness of the viscosity solution for the HJB equation in (1.6).

Based on the discussion above, we need to develop a new approach to prove the uniqueness of the viscosity solution for the HJB equation in (1.6). Our strategy for proving uniqueness in Theorem 5.1 is to use the equivalent definition of viscosity solutions in terms of (super- and sub-)jets, where the nonlocal integral operator is decomposed into the singular part, handled with the test function, and the nonsingular part, handled with jets (see Lemma 6.3). We then obtain the desired result for the nonlocal singular part with the help of the regularity of test functions and the estimates in Remark 3.1. The unboundedness of \(\beta \in G^{2}\) in the nonlocal nonsingular part is resolved by an appropriate construction of the comparison function Ψ in (6.12) and proper estimates of the doubling variables in Ψ based on [21, Proposition 3.7]. In addition, we convert the second-order local part (the first line of (1.6)) into an equivalent spectral-radius form by which the unboundedness with respect to \(\alpha \in \mathbb{R}^{r}\) can be handled (see Lemma 6.1). Combining all these steps, we obtain the comparison principle for viscosity sub- and supersolutions (see Theorem 5.1), which implies the uniqueness of the viscosity solution for (1.6) (see Corollary 5.1).

The rest of the paper is organized as follows. The notation and the precise problem statement are given in Sect. 2. In Sect. 3, we obtain the equivalent representation of (1.4) given in (1.5). In Sect. 4, we show that the auxiliary value function W is the continuous viscosity solution of the HJB equation in (1.6). The uniqueness of the viscosity solution for (1.6) is presented in Sect. 5, and its proof is provided in Sect. 6. The Appendix provides several different conditions for the existence of optimal controls for jump-diffusion systems.

2 Notation and problem statement

2.1 Notation

Let \(\mathbb{R}^{n}\) be the n-dimensional Euclidean space. For \(x,y \in \mathbb{R}^{n}\), \(x^{\top}\) denotes the transpose of x, \(\langle x,y \rangle \) is the inner product, and \(|x| := \langle x, x \rangle ^{1/2}\). Let \(\mathbb{S}^{n}\) be the set of \(n \times n\) symmetric matrices. Let \(\operatorname{Tr} (A)\) be the trace operator for a square matrix \(A \in \mathbb{R}^{n \times n}\). Let \(\|\cdot \|_{F}\) be the Frobenius norm, i.e., \(\|A\|_{F} := \operatorname{Tr} (AA^{\top})^{1/2}\) for \(A \in \mathbb{R}^{n \times m}\). Let \(I_{n}\) be the \(n \times n\) identity matrix. Throughout the paper, the exact value of a positive constant C may vary from line to line; it depends on the coefficients in Assumptions 1, 2, and 3, the terminal time T, and the initial condition, but is independent of the choice of control.

Let \((\Omega , \mathcal{F}, \mathbb{P})\) be a complete probability space with the natural filtration \(\mathbb{F} :=\{\mathcal{F}_{s}, t \leq s \leq T\}\) generated by the following two mutually independent stochastic processes and augmented by all the \(\mathbb{P}\)-null sets in \(\mathcal{F}\): (i) an r-dimensional standard Brownian motion B defined on \([t,T]\) and (ii) an E-marked right-continuous Poisson random measure (process) N defined on \(E \times [t,T]\), where \(E := \bar{E} \setminus \{0\}\), with \(\bar{E} \subset \mathbb{R}^{l}\) a Borel subset of \(\mathbb{R}^{l}\), is equipped with its Borel σ-field \(\mathcal{B}(E)\). The intensity measure of N is denoted by \(\hat{\pi}(\mathrm{d} e,\mathrm{d} t) := \pi (\mathrm{d} e) \,\mathrm{d} t\), satisfying \(\pi (E) < \infty \), and \(\{\tilde{N}(A,(t,s]) := (N-\hat{\pi})(A,(t,s])\}_{s \in (t,T]}\) is the associated compensated \(\mathcal{F}_{s}\)-martingale random (Poisson) measure of N for any \(A \in \mathcal{B}(E)\). Here, π is a σ-finite Lévy measure on \((E,\mathcal{B}(E))\), which satisfies \(\int _{E} (1 \wedge |e|^{2}) \pi (\mathrm{d} e) < \infty \).

We introduce the following spaces:

  • \(L^{p}(\Omega ,\mathcal{F}_{s};\mathbb{R}^{n})\), \(s \in [t,T]\), \(p \geq 1\): the space of \(\mathcal{F}_{s}\)-measurable \(\mathbb{R}^{n}\)-valued random vectors, satisfying \(\|x\|_{L^{p}} := \mathbb{E}[|x|^{p}]^{\frac{1}{p}} < \infty \);

  • \(\mathcal{L}_{\mathbb{F}}^{p}(t,T;\mathbb{R}^{n})\), \(t \in [0,T]\), \(p \geq 1\): the space of \(\mathbb{F}\)-predictable \(\mathbb{R}^{n}\)-valued random processes, satisfying \(\|x\|_{\mathcal{L}_{\mathbb{F}}^{p}} := \mathbb{E}[\int _{t}^{T} |x_{s}|^{p} \,\mathrm{d} s ]^{\frac{1}{p}} < \infty \);

  • \(G^{2}(E,\mathcal{B}(E),\pi ;\mathbb{R}^{n})\): the space of square integrable functions \(k:E \rightarrow \mathbb{R}^{n}\) such that k satisfies \(\|k\|_{G^{2}} := (\int _{E} |k(e)|^{2} \pi (\mathrm{d} e))^{\frac{1}{2}} < \infty \), where π is a σ-finite Lévy measure on \((E,\mathcal{B}(E))\). \(G^{2}(E,\mathcal{B}(E),\pi ;\mathbb{R}^{n})\) is a Hilbert space [2, page 9];

  • \(\mathcal{G}^{2}_{\mathbb{F}}(t,T,\pi ;\mathbb{R}^{n})\), \(t \in [0,T]\): the space of stochastic processes \(k:\Omega \times [t,T] \times E \rightarrow \mathbb{R}^{n}\) such that k is a \(\mathcal{P} \times \mathcal{B}(E)\)-measurable and \(\mathbb{R}^{n}\)-valued \(\mathbb{F}\)-predictable stochastic process, which satisfies \(\|k\|_{\mathcal{G}^{2}_{\mathbb{F}}} := \mathbb{E}[\int _{t}^{T} \int _{E} |k_{s}(e)|^{2} \pi (\mathrm{d} e) \,\mathrm{d} s]^{\frac{1}{2}} < \infty \), where \(\mathcal{P}\) denotes the σ-algebra of \(\mathbb{F}\)-predictable subsets of \(\Omega \times [0,T]\). Note that \(\mathcal{G}^{2}_{\mathbb{F}}(t,T,\pi ;\mathbb{R}^{n})\) is a Hilbert space [2, Lemma 4.1.3];

  • \(C([0,T] \times \mathbb{R}^{n})\): the set of \(\mathbb{R}\)-valued continuous functions on \([0,T] \times \mathbb{R}^{n}\);

  • \(C_{p}([0,T] \times \mathbb{R}^{n})\), \(p \geq 1\): the set of \(\mathbb{R}\)-valued continuous functions \(f \in C([0,T] \times \mathbb{R}^{n})\) satisfying \(|f(t,x)| \leq C (1 + |x|^{p})\);

  • \(C_{b}^{l,r}([0,T] \times \mathbb{R}^{n})\), \(l,r \geq 1\): the set of \(\mathbb{R}\)-valued continuous functions on \([0,T] \times \mathbb{R}^{n}\) such that for \(f \in C_{b}^{l,r}([0,T] \times \mathbb{R}^{n})\), \(\partial _{t}^{l} f\) and \(D^{r} f\) exist and are continuous and uniformly bounded, where \(\partial _{t}^{l} f\) is the lth-order partial derivative of f with respect to \(t \in [0,T]\), and \(D^{r} f\) is the rth-order derivative of f in \(x \in \mathbb{R}^{n}\).

2.2 Problem statement

We consider the following stochastic differential equation (SDE) driven by both B and Ñ:

$$\begin{aligned} \textstyle\begin{cases} \mathrm{d} x_{s}^{t,a;u} = f(s,x_{s}^{t,a;u},u_{s})\,\mathrm{d} s + \sigma (s,x_{s}^{t,a;u},u_{s}) \,\mathrm{d} B_{s} \\ \hphantom{\mathrm{d} x_{s}^{t,a;u} =}{} + \int _{E} \chi (s,x_{s-}^{t,a;u},u_{s},e) \tilde{N}(\mathrm{d} e,\mathrm{d} s), \\ x_{t}^{t,a;u} = a, \end{cases}\displaystyle \end{aligned}$$
(2.1)

where \(x_{s}^{t,a;u} \in \mathbb{R}^{n}\) is the value of the state at time s, and \(u_{s} \in U\) is the value of the control at time s with U being the space of control values, which is a compact subset of \(\mathbb{R}^{m}\). The set of admissible controls is denoted by \(\mathcal{U}_{t,T} := \mathcal{L}_{\mathbb{F}}^{2}(t,T;U)\).

Assumption 1

\(f: [0,T] \times \mathbb{R}^{n} \times U \rightarrow \mathbb{R}^{n}\), \(\sigma : [0,T] \times \mathbb{R}^{n} \times U \rightarrow \mathbb{R}^{n \times r}\) and \(\chi : [0,T] \times \mathbb{R}^{n} \times U \times E \rightarrow \mathbb{R}^{n}\) are continuous in \((t,x,u) \in [0,T] \times \mathbb{R}^{n} \times U\) and satisfy the following conditions with a constant \(L > 0\): for \(x,x^{\prime }\in \mathbb{R}^{n}\),

$$\begin{aligned} & \bigl\vert f(t,x,u) - f \bigl(t,x^{\prime},u \bigr) \bigr\vert + \bigl\vert \sigma (t,x,u) - \sigma \bigl(t,x^{ \prime},u \bigr) \bigr\vert \\ &\quad {}+ \bigl\Vert \chi (t,x,u,\cdot ) - \chi \bigl(t,x^{\prime},u, \cdot \bigr) \bigr\Vert _{G^{2}} \leq L \bigl\vert x-x^{\prime} \bigr\vert , \\ & \bigl\vert f(t,x,u) \bigr\vert + \bigl\vert \sigma (t,x,u) \bigr\vert + \bigl\Vert \chi (t,x,u,\cdot ) \bigr\Vert _{G^{2}} \leq L \bigl(1 + \vert x \vert \bigr). \end{aligned}$$

Lemma 2.1

Suppose that Assumption 1 holds. Then the following results hold:

  1. (i)

    For any \(a \in \mathbb{R}^{n}\) and \(u \in \mathcal{U}_{t,T}\), there is a unique \(\mathbb{F}\)-adapted càdlàg process such that (2.1) holds;

  2. (ii)

    For any \(a,a^{\prime }\in \mathbb{R}^{n}\), \(u \in \mathcal{U}_{t,T}\), and \(t,t^{\prime }\in [0,T]\) with \(t \leq t^{\prime}\), there exists a constant \(C>0\) such that (a) \(\mathbb{E} [ \sup_{s \in [t,T]} |x_{s}^{t,a;u}|^{2} ] \leq C(1+|a|^{2})\), (b) \(\mathbb{E} [\sup_{s \in [t,T]} |x_{s}^{t,a;u} - x_{s}^{t,a^{ \prime};u}|^{2} ] \leq C|a-a^{\prime}|^{2}\), and (c) \(\mathbb{E} [ \sup_{s \in [t^{\prime},T]} |x_{s}^{t,a;u} - x_{s}^{t^{ \prime},a;u}|^{2} ] \leq C(1+|a|^{2})|t^{\prime}-t|\).

Proof

We only need to prove part (ii)-(c), as the proofs of the other parts can be found in [2, Chap. 6].

Note that \(x_{s}^{t,a;u} = x_{s}^{t^{\prime}, x_{t^{\prime}}^{t,a;u};u}\) for all \(s \in [t^{\prime},T]\). Using (ii)-(a), (ii)-(b) and Kunita’s formula for general Lévy-type stochastic integrals (see [2, Theorem 4.4.23]), it follows that

$$\begin{aligned} \mathbb{E} \Bigl[\sup_{s \in [t^{\prime},T]} \bigl\vert x_{s}^{t^{\prime}, x_{t^{ \prime}}^{t,a;u};u} - x_{s}^{t^{\prime},a;u} \bigr\vert ^{2} \Bigr] \leq C \mathbb{E} \bigl[ \bigl\vert x_{t^{\prime}}^{t,a;u} - a \bigr\vert ^{2} \bigr] \leq C \bigl(1+ \vert a \vert ^{2} \bigr) \bigl\vert t^{ \prime}-t \bigr\vert . \end{aligned}$$

This completes the proof. □

The objective functional is given by

$$\begin{aligned} J(t,a;u) = \mathbb{E} \biggl[ \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s + m \bigl(x_{T}^{t,a;u} \bigr) \biggr]. \end{aligned}$$
(2.2)

Let \(\Gamma \subset \mathbb{R}^{n}\) be the non-empty and closed set, which captures the state constraint. Then the state-constrained stochastic control problem for jump-diffusion systems considered in this paper is as follows:

$$\begin{aligned} & \inf_{u \in \mathcal{U}_{t,T}} J(t,a;u)~ \text{subject to (2.1) and } x_{s}^{t,a;u} \in \Gamma ,~ \text{$\forall s \in [t,T]$, $\mathbb{P}$-a.s.} \end{aligned}$$

We introduce the value function for the above problem:

$$\begin{aligned} V(t,a) := \inf_{u \in \mathcal{U}_{t,T}} \bigl\{ J(t,a;u)| x_{s}^{t,a;u} \in \Gamma ,~ \text{$\mathbb{P}$-a.s.,}~ \forall s \in [t,T] \bigr\} . \end{aligned}$$
(2.3)

Note that (2.3) implicitly requires \(a \in \Gamma \) for the initial state of the SDE in (2.1). The following assumptions are imposed for (2.2).

Assumption 2

  1. (i)

    \(l: [0,T] \times \mathbb{R}^{n} \times U \rightarrow \mathbb{R}\) and \(m:\mathbb{R}^{n} \rightarrow \mathbb{R}\) are continuous in \((t,x,u) \in [0,T] \times \mathbb{R}^{n} \times U\), and satisfy the following conditions with a constant \(L > 0\): for \(x,x^{\prime }\in \mathbb{R}^{n}\),

    $$\begin{aligned} &\bigl\vert l(t,x,u) - l \bigl(t,x^{\prime},u \bigr) \bigr\vert + \bigl\vert m(x) - m \bigl(x^{\prime} \bigr) \bigr\vert \leq L \bigl\vert x-x^{ \prime} \bigr\vert , \\ &\bigl\vert l(t,x,u) \bigr\vert + \bigl\vert m(x) \bigr\vert \leq L \bigl(1 + \vert x \vert \bigr); \end{aligned}$$
  2. (ii)

    l and m are nonnegative functions, i.e., \(l,m \geq 0\).

Remark 2.1

In view of (ii) of Assumption 2, \(J(t,a;u) \geq 0\) for any \((t,a,u) \in [0,T] \times \mathbb{R}^{n} \times \mathcal{U}_{t,T}\), which implies that \(V(t,a) \geq 0\) for \((t,a) \in [0,T] \times \mathbb{R}^{n}\).
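As a numerical sanity check on Remark 2.1, one can estimate \(J(t,a;u)\) by plain Monte Carlo: when l and m are nonnegative, every sampled cost is nonnegative, so any such estimate is as well. The sketch below is purely illustrative and not part of the paper's argument; it uses a hypothetical constant control, an Euler scheme without the jump term, and a left-endpoint quadrature for the running cost.

```python
import numpy as np

def monte_carlo_cost(a, t, T, u, f, sigma, l, m,
                     n_paths=500, n_steps=200, seed=0):
    """Monte Carlo estimate of the objective J(t,a;u) in (2.2) for a
    constant control u, using an Euler scheme (jump term omitted) and
    a left-endpoint quadrature for the running cost l."""
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    total = 0.0
    for _ in range(n_paths):
        x, running = a, 0.0
        for k in range(n_steps):
            s = t + k * dt
            running += l(s, x, u) * dt
            x += f(s, x, u) * dt + sigma(s, x, u) * rng.normal(0.0, np.sqrt(dt))
        total += running + m(x)  # nonnegative whenever l, m >= 0
    return total / n_paths
```

With \(l \equiv 1\), \(m \equiv 0\), and \(\sigma \equiv 0\), the estimate reduces to the deterministic value \(T - t\), matching (2.2) exactly in this degenerate case.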

3 Characterization of V

In this section, we convert the original problem in (2.3) into the stochastic target problem for jump-diffusion systems with state constraints. Then we show that (2.3) can be characterized by the backward reachable set of the stochastic target problem, which is equivalent to the zero-level set of the auxiliary value function.

3.1 Equivalent stochastic target problem via backward reachability approach

We first introduce an auxiliary SDE associated with the objective functional in (2.2):

$$\begin{aligned} \textstyle\begin{cases} \,\mathrm{d} y_{s; t,a,b}^{u,\alpha ,\beta} = - l(s,x_{s}^{t,a;u},u_{s}) \,\mathrm{d} s + \alpha _{s}^{\top }\,\mathrm{d} B_{s} + \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s),& s \in (t,T], \\ y_{t; t,a,b}^{u,\alpha ,\beta} = b, \end{cases}\displaystyle \end{aligned}$$
(3.1)

where \(b \in \mathbb{R}\), \(u \in \mathcal{U}_{t,T}\), \(\alpha \in \mathcal{L}_{\mathbb{F}}^{2}(t,T;\mathbb{R}^{r}) =: \mathcal{A}_{t,T}\) and \(\beta \in \mathcal{G}_{\mathbb{F}}^{2}(t,T,\pi ;\mathbb{R}) =: \mathcal{B}_{t,T}\).
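A key property of (3.1), used repeatedly below (e.g., when taking expectations in the proof of Lemma 3.2), is that the stochastic integrals \(\int \alpha ^{\top} \mathrm{d} B\) and \(\int \int \beta \,\tilde{N}\) are mean-zero martingales, so \(\mathbb{E}[y_{T;t,a,b}^{u,\alpha ,\beta}] = b - \mathbb{E}[\int _{t}^{T} l \,\mathrm{d} s]\) for any admissible \((\alpha ,\beta )\). The following is a minimal simulation sketch (not from the paper) verifying the zero-mean property of a compensated Poisson integral; the integrand, intensity, and mark law are hypothetical choices.

```python
import numpy as np

def compensated_poisson_integral(beta, lam, mark_sampler, t, T,
                                 n_paths=20000, seed=0):
    """Sample int_t^T int_E beta(e) N~(de, ds) for a deterministic
    integrand beta, approximating the marked Poisson random measure by
    a compound Poisson process with intensity lam and i.i.d. marks."""
    rng = np.random.default_rng(seed)
    # compensator lam * (T - t) * E[beta(e)], estimated by Monte Carlo
    comp = lam * (T - t) * np.mean([beta(mark_sampler(rng))
                                    for _ in range(100000)])
    samples = np.empty(n_paths)
    for i in range(n_paths):
        n_jumps = rng.poisson(lam * (T - t))
        samples[i] = sum(beta(mark_sampler(rng))
                         for _ in range(n_jumps)) - comp
    return samples
```

The sample mean over many paths should be close to zero regardless of the choice of β, which is exactly the martingale (zero-expectation) property exploited in Sect. 3.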

Lemma 3.1

Suppose that Assumptions 1 and 2 hold. Then:

  1. (i)

    For any \((u,\alpha ,\beta ) \in \mathcal{U}_{t,T} \times \mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\) and \((a,b) \in \mathbb{R}^{n+1}\), there is a unique \(\mathbb{F}\)-adapted càdlàg process such that (3.1) holds;

  2. (ii)

    For any \((u,\alpha ,\beta ) \in \mathcal{U}_{t,T} \times \mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\), \((a,b) \in \mathbb{R}^{n+1}\), and \((a^{\prime},b^{\prime}) \in \mathbb{R}^{n+1}\), there exists a constant \(C> 0\) such that (a) \(\mathbb{E} [ \sup_{s \in [t,T]} | y_{s;t,a,b}^{u,\alpha , \beta} - y_{s;t,a^{\prime},b^{\prime}}^{u,\alpha ,\beta} |^{2} ] \leq C (|a-a^{\prime}|^{2} + |b-b^{\prime}|^{2})\) and (b) \(\lim_{t^{\prime }\rightarrow t} \mathbb{E} [ | y_{T;t,a,b}^{u, \alpha ,\beta} - y_{T;t^{\prime},a,b}^{u,\alpha ,\beta}|^{2} ]^{ \frac{1}{2}} = 0\) for \(t^{\prime }\in [0,T]\).

Proof

The proof of parts (i) and (ii)-(a) is analogous to that of Lemma 2.1, so we prove only part (ii)-(b). Without loss of generality, assume \(t^{\prime }\geq t\). Observe that

$$\begin{aligned} y_{T;t,a,b}^{u,\alpha ,\beta} - y_{T;t^{\prime},a,b}^{u,\alpha , \beta} ={} &{-} \int _{t}^{T} \bigl[ l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) - l \bigl(s,x_{s}^{t^{\prime},a;u},u_{s} \bigr) \bigr] \,\mathrm{d} s \\ & {} - \int _{t}^{t^{\prime}} l \bigl(s,x_{s}^{t^{\prime},a;u},u_{s} \bigr) \,\mathrm{d} s + \int _{t}^{t^{\prime}} \alpha _{s}^{\top } \,\mathrm{d} B_{s} + \int _{t}^{t^{ \prime}} \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s). \end{aligned}$$

By Assumptions 1 and 2, and using Lemma 2.1, we have

$$\begin{aligned} & \mathbb{E} \bigl[ \bigl\vert y_{T;t,a,b}^{u,\alpha ,\beta} - y_{T;t^{\prime},a,b}^{u, \alpha ,\beta} \bigr\vert ^{2} \bigr]^{\frac{1}{2}} \\ & \quad \leq C \sqrt{t^{\prime }- t} + \mathbb{E} \biggl[ \biggl\vert \int _{t}^{t^{ \prime}} \alpha _{s}^{\top } \,\mathrm{d} B_{s} \biggr\vert ^{2} \biggr]^{ \frac{1}{2}} + \mathbb{E} \biggl[ \biggl\vert \int _{t}^{t^{\prime}} \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s) \biggr\vert ^{2} \biggr]^{ \frac{1}{2}}. \end{aligned}$$

Notice that as \((\alpha ,\beta ) \in \mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\),

$$\begin{aligned} & \mathbb{E} \biggl[ \biggl\vert \int _{t}^{t^{\prime}} \alpha _{s}^{ \top } \,\mathrm{d} B_{s} \biggr\vert ^{2} \biggr] = \mathbb{E} \biggl[ \int _{t}^{t^{ \prime}} \vert \alpha _{s} \vert ^{2} \,\mathrm{d} s \biggr] \rightarrow 0\quad \text{as }t^{\prime }\rightarrow t, \end{aligned}$$

and similarly using Kunita’s formula [2, Theorem 4.4.23],

$$\begin{aligned} & \mathbb{E} \biggl[ \biggl\vert \int _{t}^{t^{\prime}} \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s) \biggr\vert ^{2} \biggr] \leq C \mathbb{E} \biggl[ \int _{t}^{t^{\prime}} \int _{E} \bigl\vert \beta _{s}(e) \bigr\vert ^{2} \pi (\mathrm{d} e) \,\mathrm{d} s \biggr] \rightarrow 0\quad \text{as }t^{\prime }\rightarrow t. \end{aligned}$$

Hence, part (ii)-(b) follows. This completes the proof. □

Remark 3.1

We may impose bounds (dependent on the initial state a of (2.1)) on the additional control variables \((\alpha ,\beta ) \in \mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\). In particular, let \(\tilde{J}(t,a;u) := \int _{t}^{T} l(s,x_{s}^{t,a;u},u_{s}) \,\mathrm{d} s + m(x_{T}^{t,a;u})\). Since \(\tilde{J} \in L^{2}(\Omega ,\mathcal{F}_{T};\mathbb{R})\), in view of the martingale representation theorem [2, Theorem 5.3.5], there exist unique \((\alpha ,\beta ) \in \mathcal{A}_{t,T} \times \mathcal{B}_{t,T}\) such that

$$\begin{aligned} \tilde{J}(t,a;u) = J(t,a;u) + \int _{t}^{T} \alpha _{s}^{\top } \,\mathrm{d} B_{s} + \int _{t}^{T} \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s), \end{aligned}$$

which implies

$$\begin{aligned} & \int _{t}^{T} \alpha _{s}^{\top } \,\mathrm{d} B_{s} + \int _{t}^{T} \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s) \\ & \quad = \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s + m \bigl(x_{T}^{t,a;u} \bigr) - \mathbb{E} \biggl[ \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s + m \bigl(x_{T}^{t,a;u} \bigr) \biggr]. \end{aligned}$$

Then from (i) of Assumption 2, the estimates in (ii) of Lemma 2.1, and the fact that Ñ and B are mutually independent, we have

$$\begin{aligned} \Vert \alpha \Vert _{\mathcal{L}^{2}_{\mathbb{F}}}^{2} \leq C \bigl(1+ \vert a \vert ^{2} \bigr),~ \Vert \beta \Vert _{\mathcal{G}_{\mathbb{F}}^{2}}^{2} \leq C \bigl(1+ \vert a \vert ^{2} \bigr). \end{aligned}$$

Hence, without loss of generality, we may restrict attention to controls \((\alpha ,\beta )\) that are square integrable and bounded in the \(\mathcal{L}^{2}_{\mathbb{F}}\) and \(\mathcal{G}_{\mathbb{F}}^{2}\) norms.

For any function \(m:\mathbb{R}^{n} \rightarrow \mathbb{R}\), let us define the epigraph of m:

$$\begin{aligned} \mathcal{E}(m) := \bigl\{ (x,y) \in \mathbb{R}^{n} \times \mathbb{R}| y \geq m(x) \bigr\} . \end{aligned}$$

Then we have the following equivalent expression of the value function in (2.3) in terms of the stochastic target problem with state constraints. Below, we drop \({t,T}\) in \(\mathcal{U}_{t,T}\), \(\mathcal{A}_{t,T}\) and \(\mathcal{B}_{t,T}\) to simplify the notation.

Lemma 3.2

Assume that Assumptions 1 and 2 hold. Then:

$$\begin{aligned} V(t,a) ={}& \inf \bigl\{ b \geq 0| \exists (u,\alpha ,\beta ) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B} \textit{ such that} \\ & \bigl(x_{T}^{t,a;u},y_{T;t,a,b}^{u,\alpha ,\beta} \bigr) \in \mathcal{E}(m), \mathbb{P}\textit{-a.s. and }x_{s}^{t,a;u} \in \Gamma , \forall s \in [t,T], \mathbb{P}\textit{-a.s.} \bigr\} \end{aligned}$$
(3.2)

Remark 3.2

Note that (3.2) is the stochastic target problem for jump-diffusion systems with state constraints; see [12–14, 30, 36].

Proof of Lemma 3.2

It is easy to see that

$$\begin{aligned} V(t,a) = {}& \inf \bigl\{ b \geq 0| \exists u \in \mathcal{U} \text{ such that } \\ & b \geq J(t,a;u) \text{ and } x_{s}^{t,a;u} \in \Gamma , \forall s \in [t,T], \mathbb{P}\text{-a.s.} \bigr\} \end{aligned}$$
(3.3)

As discussed in [13] and [11], we consider the following two statements: for \(b \geq 0\),

  1. (a)

    There exists \(u \in \mathcal{U}\) such that \(b \geq J(t,a;u)\) and \(x_{s}^{t,a;u} \in \Gamma \) for \(s \in [t,T]\), \(\mathbb{P}\)-a.s.;

  2. (b)

    There exist \((u,\alpha ,\beta ) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\) such that \(y_{T; t,a,b}^{u,\alpha ,\beta} \geq m(x_{T}^{t,a;u})\), \(\mathbb{P}\)-a.s. and \(x_{s}^{t,a;u} \in \Gamma \) for \(s \in [t,T]\), \(\mathbb{P}\)-a.s.

Note that (a) corresponds to (3.3), while (3.2) is equivalent to (b). Hence it remains to show the equivalence between (a) and (b).

First, from (b), there exist \((u,\alpha ,\beta ) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\) such that \(y_{T; t,a,b}^{u,\alpha ,\beta} \geq m(x_{T}^{t,a;u})\) and by (3.1),

$$\begin{aligned} b \geq m \bigl(x_{T}^{t,a;u} \bigr) + \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s - \int _{t}^{T} \alpha _{s}^{\top } \,\mathrm{d} B_{s} - \int _{t}^{T} \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s). \end{aligned}$$
(3.4)

Since the stochastic integrals \(\int _{t}^{r} \alpha _{s}^{\top }\,\mathrm{d} B_{s}\) and \(\int _{t}^{r} \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s)\) in (3.4) are \(\mathcal{F}_{r}\)-martingales, by taking the expectation of (3.4), we get \(b \geq J(t,a;u)\). Hence, (b) implies (a).

On the other hand, let \(\tilde{J}(t,a;u) := \int _{t}^{T} l(s,x_{s}^{t,a;u},u_{s}) \,\mathrm{d} s + m(x_{T}^{t,a;u})\). Since \(\tilde{J} \in L^{2}(\Omega ,\mathcal{F}_{T};\mathbb{R})\), in view of the martingale representation theorem [2, Theorem 5.3.5], there exist unique \((\tilde{\alpha},\tilde{\beta} ) \in \mathcal{A} \times \mathcal{B}\) such that

$$\begin{aligned} \tilde{J}(t,a;u) = J(t,a;u) + \int _{t}^{T} \tilde{\alpha}_{s}^{\top } \,\mathrm{d} B_{s} + \int _{t}^{T} \int _{E} \tilde{\beta}_{s}(e) \tilde{N}( \, \mathrm{d} e,\mathrm{d} s). \end{aligned}$$

Then from (a), for \(b \geq 0\),

$$\begin{aligned} b \geq J(t,a;u) ={}& \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s + m \bigl(x_{T}^{t,a;u} \bigr) - \int _{t}^{T} \tilde{\alpha}_{s}^{\top } \,\mathrm{d} B_{s} \\ &{} - \int _{t}^{T} \int _{E} \tilde{\beta}_{s} (e) \tilde{N}( \mathrm{d} e,\mathrm{d} s), \end{aligned}$$

which, together with (3.1), shows that \(y_{T; t,a,b}^{u,\tilde{\alpha},\tilde{\beta}} \geq m(x_{T}^{t,a;u})\). Hence, (a) implies (b). This completes the proof. □

We now introduce the backward reachable set

$$\begin{aligned} \mathcal{R}_{t}^{\Gamma} :={}& \bigl\{ (a,b) \in \mathbb{R}^{n} \times \mathbb{R}|\exists (u,\alpha ,\beta ) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B} \text{ such that} \\ &{} \bigl(x_{T}^{t,a;u},y_{T;t,a,b}^{u,\alpha ,\beta} \bigr) \in \mathcal{E}(m), \mathbb{P}\text{-a.s. and } x_{s}^{t,a;u} \in \Gamma , \forall s \in [t,T],\mathbb{P}\text{-a.s.} \bigr\} \end{aligned}$$
(3.5)

Clearly, based on Lemma 3.2, we have the following result:

Theorem 3.1

Assume that Assumptions 1 and 2 hold. For any \((t,a) \in [0,T] \times \mathbb{R}^{n}\),

$$\begin{aligned} V(t,a) = \inf \bigl\{ b \geq 0| (a,b) \in \mathcal{R}_{t}^{\Gamma} \bigr\} . \end{aligned}$$
(3.6)

Remark 3.3

From Theorem 3.1, we observe that the value function in (2.3) can be characterized by the backward reachable set \(\mathcal{R}_{t}^{\Gamma}\). In the next subsection, we focus on an explicit characterization of \(\mathcal{R}_{t}^{\Gamma}\) as the zero-level set of the value function for the unconstrained auxiliary stochastic control problem.

3.2 Characterization of backward reachable set

Let

$$\begin{aligned} \bar{J}(t,a,b;u,\alpha ,\beta ) = \mathbb{E} \biggl[ \max \bigl\{ m \bigl(x_{T}^{t,a;u} \bigr) - y_{T;t,a,b}^{u,\alpha ,\beta}, 0 \bigr\} + \int _{t}^{T} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr], \end{aligned}$$

where we introduce a distance-type function \(d(\cdot ,\Gamma )\) from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{+}\) satisfying

$$\begin{aligned} d(x,\Gamma ) = 0\quad \text{if and only if}\quad x \in \Gamma . \end{aligned}$$

Then the auxiliary value function \(W: [0,T] \times \mathbb{R}^{n} \times \mathbb{R} \rightarrow \mathbb{R}\) can be defined as follows:

$$\begin{aligned} W(t,a,b) & := \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A},\beta \in \mathcal{B}} \bar{J}(t,a,b;u,\alpha ,\beta ),\quad \text{subject to (2.1) and (3.1).} \end{aligned}$$
(3.7)

Remark 3.4

Note that (3.7) does not have any state constraints. Moreover, from (3.7), we have \(W(T,a,b) = \max \{m(a) - b, 0\}\).

Assumption 3

\(d(x,\Gamma )\) is nonnegative, Lipschitz continuous in x with the Lipschitz constant L, and satisfies the linear growth condition in x.

Remark 3.5

One example of \(d(x,\Gamma )\) is \(d(x,\Gamma ) = \inf_{y \in \Gamma} | x - y|\), which clearly satisfies Assumption 3.
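For a concrete instance of this example, take \(\Gamma = [0,1]^{n}\). The projection formula below is our own illustration (the box \(\Gamma\) and the helper name are our choices, not the paper's): it realizes \(d(x,\Gamma ) = \inf_{y \in \Gamma}|x-y|\) and exhibits the properties required in Assumption 3, in particular \(d(x,\Gamma )=0\) exactly on Γ.

```python
import math

def dist_to_box(x, lo=0.0, hi=1.0):
    """Euclidean distance from x in R^n to the box Gamma = [lo, hi]^n,
    computed via coordinatewise projection onto the box."""
    proj = [min(max(xi, lo), hi) for xi in x]
    return math.sqrt(sum((xi - pi) ** 2 for xi, pi in zip(x, proj)))

inside = dist_to_box([0.5, 0.25])   # x lies in Gamma, so d(x, Gamma) = 0
outside = dist_to_box([2.0, 0.5])   # closest point of Gamma is (1.0, 0.5)
```

Nonnegativity, 1-Lipschitz continuity, and linear growth in x all follow from the projection representation.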

The following theorem shows the equivalent expression of V in terms of the zero-level set of W.

Theorem 3.2

Suppose that Assumptions 1-3 hold and that there exists an optimal control attaining the infimum of the auxiliary optimal control problem in (3.7). Then:

  1. (i)

    The reachable set can be obtained by

    $$\begin{aligned} \mathcal{R}_{t}^{\Gamma }= \bigl\{ (a,b) \in \mathbb{R}^{n} \times \mathbb{R}| W(t,a,b) = 0 \bigr\} ,\quad \forall t \in [0,T]; \end{aligned}$$
  2. (ii)

    The value function V in (2.3) can be characterized by the zero-level set of W: for \((t,a) \in [0,T] \times \mathbb{R}^{n}\),

    $$\begin{aligned} V(t,a) = \inf \bigl\{ b \geq 0 | (a,b) \in \mathcal{R}_{t}^{\Gamma } \bigr\} = \inf \bigl\{ b \geq 0 | W(t,a,b) = 0 \bigr\} . \end{aligned}$$
    (3.8)

Remark 3.6

  1. (i)

    In Sects. 4 and 5, we show that W is a unique viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation. Hence, from Theorem 3.2 (particularly (3.8)), the value function of the state-constrained problem V in (2.3) can be obtained by solving the HJB equation of W.

  2. (ii)

    Theorem 3.2 relies on the existence of optimal controls for (3.7). In the Appendix, we provide several different conditions under which (3.7) admits an optimal solution.
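The representation (3.8) also suggests a simple numerical recipe once W is available (say, computed on a \((a,b)\)-grid by solving the associated HJB equation): scan b upward and return the first value at which W vanishes. The sketch below is a toy illustration with our own grid and tolerance choices; it uses the terminal-time profile \(W(T,a,b) = \max \{m(a)-b,0\}\) from Remark 3.4 as a stand-in for a solved W, for which the level-set formula indeed recovers \(V(T,a) = m(a)\).

```python
def level_set_inf(W_of_b, b_max=10.0, db=0.01, tol=1e-10):
    """Approximate inf{ b >= 0 : W(b) = 0 } by scanning b upward
    on a uniform grid and returning the first (near-)zero of W."""
    n = int(round(b_max / db)) + 1
    for i in range(n):
        b = i * db
        if W_of_b(b) <= tol:
            return b
    return float("inf")  # zero-level set not reached on the grid

# Terminal-time profile W(T, a, b) = max(m(a) - b, 0); the level-set
# representation then gives V(T, a) = m(a).
m_a = 2.0
V_T_a = level_set_inf(lambda b: max(m_a - b, 0.0))
```

The grid resolution `db` controls the accuracy of the recovered infimum; a bisection search would be more efficient when W is monotone in b.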

Proof of Theorem 3.2

From (3.6) in Theorem 3.1, we see that (ii) follows from (i). Hence, we prove (i). Recall \(\mathcal{R}_{t}^{\Gamma}\) defined in (3.5):

$$\begin{aligned} \mathcal{R}_{t}^{\Gamma} :={} &\bigl\{ (a,b) \in \mathbb{R}^{n} \times \mathbb{R}|\exists (u,\alpha ,\beta ) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\text{ such that} \\ &{} \bigl(x_{T}^{t,a;u},y_{T;t,a,b}^{u,\alpha ,\beta} \bigr) \in \mathcal{E}(m), \mathbb{P}\text{-a.s. and } x_{s}^{t,a;u} \in \Gamma , \forall s \in [t,T], \mathbb{P}\text{-a.s.} \bigr\} , \end{aligned}$$

and let \(\bar{\mathcal{R}}_{t}^{\Gamma }:= \{(a,b) \in \mathbb{R}^{n} \times \mathbb{R}| W(t,a,b) = 0 \}\). We will show that \(\mathcal{R}_{t}^{\Gamma }\subseteq \bar{\mathcal{R}}_{t}^{\Gamma}\) and \(\mathcal{R}_{t}^{\Gamma }\supseteq \bar{\mathcal{R}}_{t}^{\Gamma}\) for \(t \in [0,T]\).

Fix \((a,b) \in \mathcal{R}_{t}^{\Gamma}\). By definition, there exist \((u,\alpha ,\beta ) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\) such that

$$\begin{aligned} \max \bigl\{ m \bigl(x_{T}^{t,a;u} \bigr) - y_{T;t,a,b}^{u,\alpha ,\beta}, 0 \bigr\} = 0\quad \text{and}\quad d \bigl(x_{s}^{t,a;u},\Gamma \bigr) = 0,\quad \forall s \in [t,T], \mathbb{P}\text{-a.s.} \end{aligned}$$

This implies that \(W(t,a,b) = 0\) for \(t \in [0,T]\); hence, \(\mathcal{R}_{t}^{\Gamma }\subseteq \bar{\mathcal{R}}_{t}^{\Gamma}\) for \(t \in [0,T]\).

Suppose that \((a,b) \in \bar{\mathcal{R}}_{t}^{\Gamma}\), i.e., \(W(t,a,b) = 0\). Then, by the existence of an optimal control assumed in the statement, there exist \((\bar{u},\bar{\alpha},\bar{\beta}) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\) such that

$$\begin{aligned} W(t,a,b) = \mathbb{E} \biggl[ \max \bigl\{ m \bigl(x_{T}^{t,a;\bar{u}} \bigr) - y_{T;t,a,b}^{ \bar{u},\bar{\alpha},\bar{\beta}}, 0 \bigr\} + \int _{t}^{T}\,d \bigl(x_{s}^{t,a; \bar{u}}, \Gamma \bigr) \,\mathrm{d} s \biggr] = 0. \end{aligned}$$

Since the max function is nonnegative and \(d(x,\Gamma ) \geq 0\) by Assumption 3, the random variable \(\max \{ m(x_{T}^{t,a;\bar{u}}) - y_{T;t,a,b}^{\bar{u},\bar{\alpha}, \bar{\beta}}, 0 \} + \int _{t}^{T} d(x_{s}^{t,a;\bar{u}},\Gamma ) \,\mathrm{d} s\) is nonnegative \(\mathbb{P}\)-a.s. As its expectation equals \(W(t,a,b) = 0\), it must vanish almost surely. Hence, we must have

$$\begin{aligned} \max \bigl\{ m \bigl(x_{T}^{t,a;\bar{u}} \bigr) - y_{T;t,a,b}^{\bar{u},\bar{\alpha}, \bar{\beta}}, 0 \bigr\} + \int _{t}^{T} d \bigl(x_{s}^{t,a;\bar{u}}, \Gamma \bigr) \,\mathrm{d} s =0,\quad \mathbb{P}\text{-a.s.}, \end{aligned}$$

which, together with the nonnegativity of \(d(x,\Gamma )\), leads to

$$\begin{aligned} \bigl(x_{T}^{t,a;\bar{u}},y_{T;t,a,b}^{\bar{u},\bar{\alpha},\bar{\beta}} \bigr) \in \mathcal{E}(m)\quad \text{and}\quad x_{s}^{t,a;\bar{u}} \in \Gamma ,\quad \forall s \in [t,T],\mathbb{P}\text{-a.s.} \end{aligned}$$

This shows that \(\bar{\mathcal{R}}_{t}^{\Gamma }\subseteq \mathcal{R}_{t}^{\Gamma}\) for \(t \in [0,T]\). This completes the proof. □

3.3 Properties of W

We provide some useful properties of W in (3.7).

Proposition 3.1

Assume that Assumptions 1-3 hold. Then for \((a,b) \in \mathbb{R}^{n} \times \mathbb{R}\), \(t \in [0,T]\), and \(\tau > 0\) with \(t + \tau \leq T\), the auxiliary value function W satisfies the following dynamic programming principle (DPP):

$$\begin{aligned} W(t,a,b) & = \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A},\beta \in \mathcal{B}} \mathbb{E} \biggl[ \int _{t}^{t+ \tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s + W \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+ \tau ;t,a,b}^{u,\alpha ,\beta} \bigr) \biggr]. \end{aligned}$$

Proof

Let us define

$$\begin{aligned} \widehat{W}(t,a,b) := \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A},\beta \in \mathcal{B}} \mathbb{E} \biggl[ \int _{t}^{t+ \tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s + W \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+ \tau ;t,a,b}^{u,\alpha ,\beta} \bigr) \biggr]. \end{aligned}$$

We prove \(W(t,a,b) \geq \widehat{W}(t,a,b)\) and \(W(t,a,b) \leq \widehat{W}(t,a,b)\). Notice that for \(t^{\prime }\geq t\), it follows that

$$\begin{aligned} x_{s}^{t,a;u} = x_{s}^{t^{\prime},x_{t^{\prime}}^{t,a;u};u},\qquad y_{s; t,a,b}^{u, \alpha ,\beta} = y_{s; t^{\prime },x_{t^{\prime}}^{t,a;u}, y_{t^{ \prime};t,a,b}^{u,\alpha ,\beta}}^{u,\alpha ,\beta},\quad \forall s \in \bigl[t^{ \prime },T \bigr]. \end{aligned}$$

Hence, with \(t^{\prime }= t+\tau \),

$$\begin{aligned} & \bar{J}(t,a,b;u,\alpha ,\beta ) \\ & \quad = \mathbb{E} \biggl[ \max \bigl\{ m \bigl(x_{T}^{t+\tau ,x_{t+\tau}^{t,a;u};u} \bigr) - y_{T; t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u,\alpha , \beta}}^{u,\alpha ,\beta}, 0 \bigr\} \\ &\qquad {} + \int _{t+\tau}^{T} d \bigl(x_{s}^{t+\tau ,x_{t+\tau}^{t,a;u};u}, \Gamma \bigr) \,\mathrm{d} s + \int _{t}^{t+\tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr] \\ &\quad = \mathbb{E} \biggl[ \bar{J} \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u, \alpha ,\beta};u,\alpha ,\beta \bigr) + \int _{t}^{t+\tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr]. \end{aligned}$$

We can easily deduce that

$$\begin{aligned} \bar{J}(t,a,b;u,\alpha ,\beta ) \geq \mathbb{E} \biggl[ \int _{t}^{t+ \tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s + W \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+ \tau ;t,a,b}^{u,\alpha ,\beta} \bigr) \biggr], \end{aligned}$$

which, by taking the infimum with respect to \((u,\alpha ,\beta ) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\), leads to

$$\begin{aligned} W(t,a,b) \geq \widehat{W}(t,a,b). \end{aligned}$$

On the other hand, by the measurable selection theorem (see [37] and [6, Theorem 8.1.3]), for any \(\epsilon > 0\), there exists a tuple \((u^{\epsilon},\alpha ^{\epsilon},\beta ^{\epsilon}) \in \mathcal{U}_{t+ \tau ,T} \times \mathcal{A}_{t+\tau ,T} \times \mathcal{B}_{t+\tau ,T}\) such that

$$\begin{aligned} W \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u,\alpha ,\beta} \bigr) + \epsilon \geq \bar{J} \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u, \alpha ,\beta};u^{\epsilon}, \alpha ^{\epsilon},\beta ^{\epsilon} \bigr). \end{aligned}$$
(3.9)

Here, we apply [6, Theorem 8.1.3] to get (3.9). Specifically, let \((\Theta ,\mathcal{M})\) be the measurable space, where \(\Theta := [t+\tau ,T] \times \Omega \) and \(\mathcal{M} := \mathcal{B}([t+\tau ,T]) \otimes \mathcal{F} \), with \(\mathcal{B}([t+\tau ,T])\) being the Borel σ-algebra generated by subintervals of \([t+\tau ,T]\). Let \(\mathcal{X}_{t+\tau ,T} := \mathcal{U}_{t+\tau ,T} \times \mathcal{A}_{t+\tau ,T} \times \mathcal{B}_{t+\tau ,T}\), and note that \(\mathcal{X}_{t+\tau ,T}\) is a separable Hilbert space. For \((s,\omega ) \in \Theta \), define the set-valued map Ξ from Θ to closed subsets of \(\mathcal{X}_{t+\tau ,T}\) by \(\Xi (s,\omega ) := \{(u_{s,T},\alpha _{s,T},\beta _{s,T}) := (u, \alpha ,\beta ) \in \mathcal{X}_{s,T}| W(s,x_{s}^{t,a;u}, y_{s;t,a,b}^{u, \alpha ,\beta}) + \epsilon \geq \bar{J}(s,x_{s}^{t,a;u}, y_{s;t,a,b}^{u, \alpha ,\beta};u,\alpha ,\beta ) \} \). For \((s,\omega ) \in \Theta \), \(\Xi (s,\omega )\) is a nonempty closed subset of \(\mathcal{X}_{t+\tau ,T}\) by the definition of W and the continuity of the involved functions under Assumptions 1-3. Then, in view of the measurable selection theorem in [6, Theorem 8.1.3], there exists a tuple \((u^{\epsilon},\alpha ^{\epsilon},\beta ^{\epsilon}) \in \mathcal{X}_{t+ \tau ,T} = \mathcal{U}_{t+\tau ,T} \times \mathcal{A}_{t+\tau ,T} \times \mathcal{B}_{t+\tau ,T} \) such that (3.9) holds.

For any \((u,\alpha ,\beta ) \in \mathcal{U}_{t,t+\tau} \times \mathcal{A}_{t,t+ \tau} \times \mathcal{B}_{t,t+\tau}\), define

$$\begin{aligned} u_{s}^{\prime }& := \textstyle\begin{cases} u_{s}, & s \in [t,t+\tau ), \\ u_{s}^{\epsilon}, & s \in [t+\tau ,T], \end{cases}\displaystyle \qquad \alpha _{s}^{\prime }:= \textstyle\begin{cases} \alpha _{s}, & s \in [t,t+\tau ), \\ \alpha _{s}^{\epsilon}, & s \in [t+\tau ,T], \end{cases}\displaystyle \\ \beta _{s}^{\prime}(e) &:= \textstyle\begin{cases} \beta _{s}(e), & s \in [t,t+\tau ), \\ \beta _{s}^{\epsilon}(e), & s \in [t+\tau ,T]. \end{cases}\displaystyle \end{aligned}$$

Clearly, \((u^{\prime},\alpha ^{\prime},\beta ^{\prime }) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\). We then have

$$\begin{aligned} W(t,a,b) & \leq \bar{J} \bigl(t,a,b;u^{\prime},\alpha ^{\prime}, \beta ^{ \prime} \bigr) \\ & = \mathbb{E} \biggl[ \bar{J} \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u, \alpha ,\beta};u^{\epsilon},\alpha ^{\epsilon}, \beta ^{\epsilon} \bigr) + \int _{t}^{t+\tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr] \\ & \leq \mathbb{E} \biggl[ \int _{t}^{t+\tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s + W \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u,\alpha , \beta} \bigr) \biggr] + \epsilon , \end{aligned}$$

which, since \((u,\alpha ,\beta ) \in \mathcal{U}_{t,t+\tau} \times \mathcal{A}_{t,t+ \tau} \times \mathcal{B}_{t,t+\tau}\) and \(\epsilon > 0\) are arbitrary, implies

$$\begin{aligned} W(t,a,b) \leq \widehat{W}(t,a,b). \end{aligned}$$

This completes the proof. □
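The control concatenation \((u^{\prime},\alpha ^{\prime},\beta ^{\prime})\) used above (follow the given control on \([t,t+\tau )\), then switch to the ε-optimal one) is elementary; a discrete-time sketch with hypothetical control paths, purely for illustration:

```python
def concatenate(u_first, u_second, switch_index):
    """Follow u_first before switch_index and u_second afterwards,
    mirroring the pasting of controls at time t + tau in the DPP proof."""
    return u_first[:switch_index] + u_second[switch_index:]

u = [0.1, 0.2, 0.3, 0.4]        # control used on [t, t + tau)
u_eps = [9.0, 9.0, 9.5, 9.5]    # epsilon-optimal control on [t + tau, T]
u_prime = concatenate(u, u_eps, 2)
```

The same pasting is applied componentwise to \(\alpha\) and \(\beta\); measurability and admissibility of the concatenated control are what the measurable selection argument guarantees.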

Lemma 3.3

Suppose that Assumptions 1-3 hold. Then for \(t \in [0,T]\), there exists a constant \(C>0\) such that

  1. (i)

    \(|W(t,a,b)| \leq C(1 + |a|)\) for any \((a,b) \in \mathbb{R}^{n} \times [0,\infty )\);

  2. (ii)

    W is Lipschitz continuous in \((a,b)\), uniformly in \(t\): for \((a,b), (a^{\prime},b^{\prime}) \in \mathbb{R}^{n} \times \mathbb{R}\), \(|W(t,a,b) - W(t,a^{\prime},b^{\prime})| \leq C(|a - a^{\prime}| + |b - b^{\prime}|)\);

  3. (iii)

    W is continuous in \(t \in [0,T]\).

Proof

In view of the definition of W, when \(b \in [0,\infty )\), with \(\alpha =0\) and \(\beta =0\),

$$\begin{aligned} W(t,a,b) & \leq \mathop{\inf_{u \in \mathcal{U}}} \mathbb{E} \biggl[ \max \bigl\{ m \bigl(x_{T}^{t,a;u} \bigr) - y_{T;t,a,b}^{u,0,0}, 0 \bigr\} + \int _{t}^{T} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr] \\ & \leq \mathop{\inf_{u \in \mathcal{U}}} \mathbb{E} \biggl[ \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s + m \bigl(x_{T}^{t,a;u} \bigr) + \int _{t}^{T} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr], \end{aligned}$$

where the second inequality follows from \(b \geq 0\) and the nonnegativity of l and m in (ii) of Assumption 2. Then the linear growth of W in a in the statement of (i) follows from Assumptions 1, 2, and 3, and (ii) of Lemma 2.1.

Note that \(|\inf f(x) - \inf g(x)| \leq \sup |f(x) - g(x)|\) and \(|\sup f(x) - \sup g(x)| \leq \sup |f(x) - g(x)|\). From Assumptions 1, 2, and 3, and using the Hölder inequality,

$$\begin{aligned} \bigl\vert W(t,a,b) - W \bigl(t,a^{\prime}, b^{\prime} \bigr) \bigr\vert \leq {}& C \mathop{\sup_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A}, \beta \in \mathcal{B}} \biggl\{ \mathbb{E} \bigl[ \bigl\vert x_{T}^{t,a;u} - x_{T}^{t,a^{ \prime};u} \bigr\vert ^{2} \bigr]^{\frac{1}{2}} + \mathbb{E} \bigl[ \bigl\vert y_{T;t,a,b}^{u, \alpha ,\beta} - y_{T;t,a^{\prime},b^{\prime}}^{u,\alpha ,\beta} \bigr\vert ^{2} \bigr]^{\frac{1}{2}} \\ &{} + \mathbb{E} \biggl[ \int _{t}^{T} \bigl\vert x_{s}^{t,a;u} - x_{s}^{t,a^{ \prime};u} \bigr\vert ^{2} \,\mathrm{d} s \biggr]^{\frac{1}{2}} \biggr\} \\ \leq{} & C \bigl( \bigl\vert a - a^{\prime } \bigr\vert + \bigl\vert b - b^{\prime} \bigr\vert \bigr). \end{aligned}$$

Notice that to obtain the last inequality, we have used (ii) of Lemmas 2.1 and 3.1, the compactness of U, and the fact that, by Remark 3.1, the controls \((\alpha ,\beta )\) can be restricted to be bounded in the \(\mathcal{G}_{\mathbb{F}}^{2}\) and \(\mathcal{L}_{\mathbb{F}}^{2}\) senses. This shows (ii).

For the continuity of W in \(t \in [0,T]\) in (iii), let \(t,t+\tau \in [0,T]\) with \(\tau > 0\). By applying a similar technique to the above and using (ii) of Lemma 2.1, we have

$$\begin{aligned} & \bigl\vert W(t+\tau ,a,b) - W(t,a,b) \bigr\vert \\ & \quad \leq C \mathop{\sup_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A}, \beta \in \mathcal{B}} \biggl\{ \mathbb{E} \bigl[ \bigl\vert x_{T}^{t+\tau ,a;u} - x_{T}^{t,a;u} \bigr\vert ^{2} \bigr]^{\frac{1}{2}} + \mathbb{E} \bigl[ \bigl\vert y_{T;t+ \tau ,a,b}^{u,\alpha ,\beta} - y_{T;t,a,b}^{u,\alpha ,\beta} \bigr\vert ^{2} \bigr]^{\frac{1}{2}} \\ & \qquad {}+ \mathbb{E} \biggl[ \int _{t+\tau}^{T} \bigl\vert x_{s}^{t+ \tau ,a;u} - x_{s}^{t,a;u} \bigr\vert ^{2} \,\mathrm{d} s \biggr]^{\frac{1}{2}} + \mathbb{E} \biggl[ \int _{t}^{t+\tau} \bigl(1 + \bigl\vert x_{s}^{t,a;u} \bigr\vert ^{2} \bigr) \, \mathrm{d} s \biggr]^{\frac{1}{2}} \biggr\} \\ &\quad \leq C \Bigl( \tau ^{\frac{1}{2}} + \mathop{\sup_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A},\beta \in \mathcal{B}} \mathbb{E} \bigl[ \bigl\vert y_{T;t+ \tau ,a,b}^{u,\alpha ,\beta} - y_{T;t,a,b}^{u, \alpha ,\beta} \bigr\vert ^{2} \bigr]^{\frac{1}{2}} \Bigr). \end{aligned}$$
(3.10)

From Remark 3.1, we may take \((\alpha ,\beta )\) bounded in the \(\mathcal{G}_{\mathbb{F}}^{2}\) and \(\mathcal{L}_{\mathbb{F}}^{2}\) senses. We then apply (ii) of Lemma 3.1 to get \(\lim_{\tau \downarrow 0} \mathop{\sup_{u \in \mathcal{U}}}_{ \alpha \in \mathcal{A},\beta \in \mathcal{B}} \mathbb{E} [ |y_{T;t+ \tau ,a,b}^{u,\alpha ,\beta} - y_{T;t,a,b}^{u,\alpha ,\beta}|^{2} ]^{\frac{1}{2}} = 0\) in (3.10). This, together with (3.10), implies \(|W(t+\tau ,a,b) - W(t,a,b)| \rightarrow 0\) as \(\tau \downarrow 0\). This completes the proof. □
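The elementary bound \(|\inf_{x} f(x) - \inf_{x} g(x)| \leq \sup_{x} |f(x) - g(x)|\), invoked at the start of the Lipschitz estimate above, can be checked on any finite sample of values (the numbers below are arbitrary stand-ins for costs over a finite set of controls):

```python
# Finite "control grids" standing in for f and g; the values are arbitrary.
f = [3.0, 1.5, 2.2, 4.0]
g = [2.5, 1.9, 2.0, 3.1]

lhs = abs(min(f) - min(g))                    # |inf f - inf g|, here ~0.4
rhs = max(abs(a - b) for a, b in zip(f, g))   # sup |f - g|, here ~0.9
```

The bound holds because a minimizer of one function is a near-minimizer of the other whenever the two functions are uniformly close.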

Lemma 3.4

Suppose that Assumptions 1-3 hold. If \(b \leq 0\), then we have \(W(t,a,b) = W_{0}(t,a) - b\), where \(W_{0}:[0,T] \times \mathbb{R}^{n} \rightarrow \mathbb{R}\) is the value function of the following problem:

$$\begin{aligned} W_{0}(t,a) := \inf_{u \in \mathcal{U}} \biggl\{ J(t,a;u) + \mathbb{E} \int _{t}^{T} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr\} . \end{aligned}$$

Proof

By definition of W, for any \((t,a,b) \in [0,T] \times \mathbb{R}^{n} \times \mathbb{R}\), we have

$$\begin{aligned} W(t,a,b) & = \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A},\beta \in \mathcal{B}} \mathbb{E} \biggl[ \max \bigl\{ m \bigl(x_{T}^{t,a;u} \bigr) - y_{T;t,a,b}^{u,\alpha ,\beta}, 0 \bigr\} + \int _{t}^{T} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr] \\ & \geq \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A}, \beta \in \mathcal{B}} \mathbb{E} \biggl[ m \bigl(x_{T}^{t,a;u} \bigr) - y_{T;t,a,b}^{u, \alpha ,\beta} + \int _{t}^{T} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr] \\ & = \inf_{u \in \mathcal{U}} \mathbb{E} \biggl[m \bigl(x_{T}^{t,a;u} \bigr) - b + \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s + \int _{t}^{T} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr] \\ & = W_{0}(t,a) - b, \end{aligned}$$

where we have used the fact that \(\mathbb{E}[\int _{t}^{T} \alpha _{s}^{\top }\,\mathrm{d} B_{s}] = 0\) and \(\mathbb{E}[\int _{t}^{T} \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s)] = 0\), as the stochastic integrals with respect to B and Ñ are martingales.

On the other hand, when \((\alpha ,\beta ) = (0,0) \in \mathcal{A} \times \mathcal{B}\), since \(b \leq 0\), and l and m are nonnegative,

$$\begin{aligned} \max \bigl\{ m \bigl(x_{T}^{t,a;u} \bigr) - y_{T;t,a,b}^{u,0,0}, 0 \bigr\} = m \bigl(x_{T}^{t,a;u} \bigr) - b + \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s \geq 0. \end{aligned}$$

Hence, with \((\alpha ,\beta ) = (0,0) \in \mathcal{A} \times \mathcal{B}\), it follows that

$$\begin{aligned} W(t,a,b) & \leq \inf_{u \in \mathcal{U}}\mathbb{E} \biggl[ m \bigl(x_{T}^{t,a;u} \bigr) - b + \int _{t}^{T} l \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \,\mathrm{d} s + \int _{t}^{T} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s \biggr] \\ & = W_{0}(t,a) - b. \end{aligned}$$

This completes the proof. □

Based on (3.7) (see Remark 3.4) and Lemma 3.4, W satisfies the following boundary conditions:

Lemma 3.5

Suppose that Assumptions 1, 2, and 3 hold. Then W satisfies the following boundary conditions:

$$\begin{aligned} \textstyle\begin{cases} W(T,a,b) = \max \{ m(a) - b,0 \},& (a,b) \in \mathbb{R}^{n} \times [0, \infty ), \\ W(t,a,0) = W_{0}(t,a),& (t,a) \in [0,T) \times \mathbb{R}^{n}. \end{cases}\displaystyle \end{aligned}$$

4 Characterization of W via viscosity solution of Hamilton-Jacobi-Bellman equation

Based on Theorem 3.2 and Remark 3.6, it is necessary to study the characterization of the auxiliary value function W in (3.7) in order to solve the original state-constrained control problem in (2.3). In this section and Sects. 5-6, we provide the characterization of W by showing that W is a unique continuous viscosity solution of the associated HJB equation.

As seen from (3.7), the auxiliary value function depends on the augmented dynamical system on \(\mathbb{R}^{n+1}\). We introduce the following notation:

$$\begin{aligned} &\widehat{f}(t,a,u) := \begin{bmatrix} f(t,a,u) \\ -l(t,a,u) \end{bmatrix},\qquad \widehat{ \sigma}(t,a,u,\alpha ) := \begin{bmatrix} \sigma (t,a,u) \\ \alpha ^{\top } \end{bmatrix}, \\ &\widehat{\chi}(t,a,u,e,\beta ) := \begin{bmatrix} \chi (t,a,u,e) \\ \beta (e) \end{bmatrix},\qquad \widehat{a} = \begin{bmatrix} a \\ b \end{bmatrix}, \end{aligned}$$

where \(\widehat{\sigma}:[0,T] \times \mathbb{R}^{n} \times U \times \mathbb{R}^{p} \rightarrow \mathbb{R}^{ (n+1) \times p}\) and \(\widehat{\chi}: [0,T] \times \mathbb{R}^{n} \times U \times E \times G^{2}(E,\mathcal{B}(E),\pi ; \mathbb{R}) \rightarrow \mathbb{R}^{n+1}\). Let \(\mathcal{O} := [0,T) \times \mathbb{R}^{n} \times (0,\infty )\), \(\bar{\mathcal{O}} := [0,T] \times \mathbb{R}^{n} \times [0,\infty )\), and \(G^{2} := G^{2}(E,\mathcal{B}(E), \pi ;\mathbb{R})\).

The HJB equation with the boundary conditions (see Lemma 3.5) is introduced below, which is the second-order nonlinear partial integro-differential equation (PIDE):

$$\begin{aligned} \textstyle\begin{cases} - \partial _{t} W(t,a,b) + H(t,a,b,(W,DW,D^{2}W)(t,a,b)) = 0, & (t,a,b) \in \mathcal{O}, \\ W(T,a,b) = \max \{ m(a) - b,0 \},& (a,b) \in \mathbb{R}^{n} \times [0, \infty ), \\ W(t,a,0) = W_{0}(t,a),& (t,a) \in [0,T) \times \mathbb{R}^{n}, \end{cases}\displaystyle \end{aligned}$$
(4.1)

where the Hamiltonian \(H:\bar{\mathcal{O}} \times \mathbb{R} \times \mathbb{R}^{n+1} \times \mathbb{S}^{n+1} \rightarrow \mathbb{R}\) is defined by

$$\begin{aligned} & H \bigl(t,a,b,W,DW,D^{2}W \bigr) \\ &\quad := \mathop{\sup_{u \in U}}_{\alpha \in \mathbb{R}^{p}, \beta \in G^{2}} \biggl\{ - \bigl\langle D W (t,a,b), \widehat{f}(t,a,u) \bigr\rangle - \frac{1}{2} \operatorname{Tr} \bigl(\widehat{\sigma} \widehat{\sigma}^{\top}(t,a,u, \alpha ) D^{2} W (t,a,b) \bigr) \\ &\qquad {} - \int _{E} \bigl[ W \bigl(t,a+\chi (t,a,u,e),b+\beta (e) \bigr) - W(t,a,b) \\ &\qquad {} - \bigl\langle DW(t,a,b), \widehat{\chi}(t,a,u,e,\beta ) \bigr\rangle \bigr]\pi (\mathrm{d} e) \biggr\} - d(a,\Gamma ). \end{aligned}$$

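The nonlocal term in H can be seen at work on a toy example. For a finite (atomic) Lévy measure π and the quadratic test function \(\phi (a) = a^{2}\), the compensated integrand \(\phi (a+e) - \phi (a) - \phi '(a)e\) collapses to \(e^{2}\), so the integral equals \(\int _{E} e^{2} \pi (\mathrm{d} e)\) independently of a. The one-dimensional sketch below, with our own choice of π (not the paper's model), checks this by direct summation:

```python
def compensated_jump_term(phi, dphi, a, jumps_and_weights):
    """Evaluate int_E [phi(a+e) - phi(a) - dphi(a)*e] pi(de) for a
    finite atomic Levy measure pi given as (jump size, mass) pairs."""
    return sum(w * (phi(a + e) - phi(a) - dphi(a) * e)
               for e, w in jumps_and_weights)

phi = lambda a: a * a
dphi = lambda a: 2.0 * a
pi = [(1.0, 1.0), (-1.0, 1.0)]  # unit masses at e = +1 and e = -1

val = compensated_jump_term(phi, dphi, 3.0, pi)  # -> 2.0 for any a
```

For a genuine Lévy measure with a singularity at the origin, the same compensated structure is what makes the integral finite for \(C^{2}\) test functions, since the integrand is \(O(e^{2})\) near zero.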
The notion of viscosity solutions for (4.1) is given as follows [8, 9]:

Definition 1

A real-valued function \(W \in C(\bar{\mathcal{O}} )\) is said to be a viscosity subsolution (resp. supersolution) of (4.1) if

  1. (i)

    \(W(T,a,b) \leq \max \{m(a) - b,0\}\) (resp. \(W(T,a,b) \geq \max \{m(a) - b,0\}\)) for \((a,b) \in \mathbb{R}^{n} \times [0,\infty )\) and \(W(t,a,0) \leq W_{0}(t,a)\) (resp. \(W(t,a,0) \geq W_{0}(t,a) \)) for \((t,a) \in [0,T) \times \mathbb{R}^{n}\);

  2. (ii)

    For all test functions \(\phi \in C_{b}^{1,3}(\bar{\mathcal{O}}) \cap C_{2}(\bar{\mathcal{O}})\), the following inequality holds at the global maximum (resp. minimum) point \((t,a,b) \in \mathcal{O}\) of \(W-\phi \):

    $$\begin{aligned} & - \partial _{t} \phi (t,a,b) + H \bigl(t,a,b, \bigl(\phi , D \phi , D^{2} \phi \bigr) (t,a,b) \bigr) \leq 0 \\ &\quad \bigl(\text{resp. } {-} \partial _{t} \phi (t,a,b) + H \bigl(t,a,b, \bigl(\phi , D \phi , D^{2} \phi \bigr) (t,a,b) \bigr) \geq 0 \bigr). \end{aligned}$$

A real-valued function \(W \in C(\bar{\mathcal{O}})\) is said to be a viscosity solution of (4.1) if it is both a viscosity subsolution and a viscosity supersolution of (4.1).

The existence of the viscosity solution for (4.1) can be stated as follows:

Theorem 4.1

Suppose that Assumptions 1-3 hold. Then the auxiliary value function W defined in (3.7) is a continuous viscosity solution of the HJB equation in (4.1).

Proof of Theorem 4.1

Let us first prove the subsolution property. In view of Lemma 3.3, \(W \in C([0,T] \times \mathbb{R}^{n+1})\). Also, from Lemma 3.5, W satisfies (i) of Definition 1.

We prove (ii) of Definition 1. Let \(\phi \in C_{b}^{1,3}(\bar{\mathcal{O}})\) be a test function such that

$$\begin{aligned} (W-\phi ) (t,a,b) = \max_{(\bar{t},\bar{a},\bar{b}) \in \mathcal{O}} (W- \phi ) (\bar{t},\bar{a}, \bar{b}), \end{aligned}$$

and without loss of generality, we may assume that \(W(t,a,b) = \phi (t,a,b)\). This implies \(W(\bar{t},\bar{a},\bar{b}) \leq \phi (\bar{t},\bar{a},\bar{b})\) for \((\bar{t},\bar{a},\bar{b}) \in \mathcal{O}\) and \((\bar{t},\bar{a},\bar{b}) \neq (t,a,b)\).

By using the DPP in Proposition 3.1 with \(t,t+\tau \in [0,T]\) and \(\tau > 0\),

$$\begin{aligned} \phi (t,a,b) & = W(t,a,b) \\ & = \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A},\beta \in \mathcal{B}} \mathbb{E} \biggl[ \int _{t}^{t+\tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s + W \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u, \alpha ,\beta} \bigr) \biggr], \end{aligned}$$

which implies

$$\begin{aligned} \phi (t,a,b) - \mathbb{E} \biggl[ \int _{t}^{t+\tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s + \phi \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u, \alpha ,\beta} \bigr) \biggr] \leq 0. \end{aligned}$$

By applying Itô's formula for Lévy-type stochastic integrals [2, Theorem 4.4.7],

$$\begin{aligned} & - \mathbb{E} \biggl[ \int _{t}^{t+\tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s + \int _{t}^{t+\tau} \partial _{t} \phi \bigl(s,x_{s}^{t,a;u}, y_{s;t,a,b}^{u, \alpha ,\beta} \bigr) \,\mathrm{d} s \biggr] \\ &\quad {} - \mathbb{E} \biggl[ \int _{t}^{t+\tau} \bigl\langle D \phi \bigl(s,x_{s}^{t,a;u}, y_{s;t,a,b}^{u,\alpha ,\beta} \bigr), \widehat{f} \bigl(s,x_{s}^{t,a;u},u_{s} \bigr) \bigr\rangle \,\mathrm{d} s \biggr] \\ &\quad {} - \frac{1}{2} \mathbb{E} \biggl[ \int _{t}^{t+\tau} \operatorname{Tr} \bigl( \widehat{\sigma}\widehat{\sigma}^{\top} \bigl(s,x_{s}^{t,a;u},u_{s}, \alpha _{s} \bigr) D^{2} \phi \bigl(s,x_{s}^{t,a;u}, y_{s;t,a,b}^{u,\alpha ,\beta} \bigr) \bigr) \,\mathrm{d} s \biggr] \\ &\quad {} - \mathbb{E} \biggl[ \int _{t}^{t+\tau} \int _{E} \bigl[ \phi \bigl(s,x_{s}^{t,a;u} + \chi \bigl(s,x_{s}^{t,a;u},u_{s},e \bigr), y_{s;t,a,b}^{u,\alpha ,\beta} + \beta _{s}(e) \bigr) - \phi \bigl(s,x_{s}^{t,a;u}, y_{s;t,a,b}^{u,\alpha ,\beta} \bigr) \\ &\quad {} - \bigl\langle D \phi \bigl(s,x_{s}^{t,a;u}, y_{s;t,a,b}^{u, \alpha ,\beta} \bigr), \widehat{\chi} \bigl(s,x_{s}^{t,a;u},u_{s},e, \beta _{s}(e) \bigr) \bigr\rangle \bigr] \pi (\mathrm{d} e) \, \mathrm{d} s \biggr] \leq 0, \end{aligned}$$

where we have used the fact that the expectations of the stochastic integrals with respect to B and Ñ are zero, since they are martingales.

Multiplying the above by \(\frac{1}{\tau}\) and then letting \(\tau \downarrow 0\), we have

$$\begin{aligned} & - \partial _{t} \phi (t,a,b) + H^{\prime } \bigl(t,a,b, \bigl(\phi ,D\phi ,D^{2} \phi \bigr) (t,a,b); u,\alpha ,\beta \bigr) \leq 0, \end{aligned}$$

where

$$\begin{aligned} & H^{\prime } \bigl(t,a,b, \bigl(\phi ,D\phi ,D^{2}\phi \bigr) (t,a,b); u,\alpha , \beta \bigr) \\ &\quad := - d(a,\Gamma ) - \bigl\langle D \phi (t,a,b), \widehat{f}(t,a,u) \bigr\rangle - \frac{1}{2} \operatorname{Tr} \bigl(\widehat{\sigma} \widehat{ \sigma}^{\top}(t,a,u, \alpha ) D^{2} \phi (t,a,b) \bigr) \\ &\qquad {} - \int _{E} \bigl[ \phi \bigl(t, a + \chi (t,a,u,e),b + \beta (e) \bigr) \\ &\qquad {} - \phi (t,a,b) - \bigl\langle D \phi (t,a,b), \widehat{ \chi}(t,a,u,e, \beta ) \bigr\rangle \bigr]\pi (\mathrm{d} e). \end{aligned}$$
(4.2)

By taking the supremum with respect to \((u,\alpha ,\beta ) \in U \times \mathbb{R}^{p} \times G^{2}\), in view of the definition of H, we obtain

$$\begin{aligned} - \partial _{t} \phi (t,a,b) + H \bigl(t,a,b, \bigl(\phi ,D \phi ,D^{2} \phi \bigr) (t,a,b) \bigr) \leq 0, \end{aligned}$$
(4.3)

which shows that W is a viscosity subsolution of (4.1).

We now prove, by contradiction, the supersolution property. It is easy to see that W satisfies the boundary inequalities in (i) of Definition 1.

Suppose that \(\phi \in C_{b}^{1,3}(\bar{\mathcal{O}})\) is a test function satisfying the following property:

$$\begin{aligned} (W-\phi ) (t,a,b) = \min_{(\bar{t},\bar{a}, \bar{b}) \in \mathcal{O} } (W-\phi ) (\bar{t},\bar{a}, \bar{b}), \end{aligned}$$

and without loss of generality, we may assume \(W(t,a,b) = \phi (t,a,b)\). This implies that \(W(\bar{t},\bar{a},\bar{b}) \geq \phi (\bar{t},\bar{a},\bar{b})\) for \((\bar{t},\bar{a},\bar{b}) \in \mathcal{O}\) and \((\bar{t},\bar{a},\bar{b}) \neq (t,a,b)\).

Let us assume that W is not a viscosity supersolution. Then there exists a constant \(\theta >0\) such that

$$\begin{aligned} - \partial _{t} \phi (t,a,b) + H \bigl(t,a,b, \bigl(\phi , D \phi , D^{2} \phi \bigr) (t,a,b) \bigr) \leq - \theta < 0. \end{aligned}$$

Recall the definition of \(H^{\prime}\) in (4.2) and note that \(H^{\prime }\leq \sup_{u \in U, \alpha \in \mathbb{R}^{p}, \beta \in G^{2}} H^{\prime }= H\). Then for any \((u,\alpha ,\beta ) \in U \times \mathbb{R}^{p} \times G^{2}\), we have

$$\begin{aligned} - \partial _{t} \phi (t,a,b) + H^{\prime } \bigl(t,a,b, \bigl(\phi ,D\phi ,D^{2} \phi \bigr) (t,a,b); u,\alpha , \beta \bigr) \leq - \theta < 0. \end{aligned}$$
(4.4)

On the other hand, the DPP in Proposition 3.1 implies

$$\begin{aligned} \phi (t,a,b) & = W(t,a,b) \\ & \geq \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A}, \beta \in \mathcal{B}} \mathbb{E} \biggl[ \int _{t}^{t+\tau} d \bigl(x_{s}^{t,a;u}, \Gamma \bigr) \,\mathrm{d} s + \phi \bigl(t+\tau ,x_{t+\tau}^{t,a;u}, y_{t+\tau ;t,a,b}^{u, \alpha ,\beta} \bigr) \biggr], \end{aligned}$$

and for each \(\epsilon > 0 \), there exist \((u^{\epsilon},\alpha ^{\epsilon},\beta ^{\epsilon}) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\) such that

$$\begin{aligned} - \epsilon \tau \leq \phi (t,a,b) - \mathbb{E} \biggl[ \int _{t}^{t+ \tau} d \bigl(x_{s}^{t,a;u^{\epsilon}}, \Gamma \bigr) \,\mathrm{d} s + \phi \bigl(t+\tau ,x_{t+ \tau}^{t,a;u^{\epsilon}}, y_{t+\tau ;t,a,b}^{u^{\epsilon},\alpha ^{ \epsilon},\beta ^{\epsilon}} \bigr) \biggr] . \end{aligned}$$
(4.5)

As in the viscosity subsolution case, we apply Itô's formula to (4.5) and then multiply by \(\frac{1}{\tau}\). Since (4.4) holds for any \((u,\alpha ,\beta ) \in U \times \mathbb{R}^{p} \times G^{2}\), by letting \(\tau \downarrow 0\) and noting the arbitrariness of ϵ, we have

$$\begin{aligned} 0 & \leq - \partial _{t} \phi (t,a,b) + H^{\prime } \bigl(t,a,b, \bigl(\phi ,D \phi ,D^{2}\phi \bigr) (t,a,b); u,\alpha , \beta \bigr) \leq - \theta . \end{aligned}$$

This leads to the desired contradiction, since \(\theta > 0\). Hence, W is a viscosity supersolution. This, together with (4.3), shows that W is a continuous viscosity solution of (4.1). This completes the proof. □

5 Uniqueness of viscosity solution

We state the comparison principle of viscosity subsolution and supersolution, whose proof is reported in Sect. 6.

Theorem 5.1

Suppose that Assumptions 1–3 hold. Let \(\underline{W} \in C(\bar{\mathcal{O}})\) be a viscosity subsolution of the HJB equation in (4.1), and \(\overline{W} \in C(\bar{\mathcal{O}})\) a viscosity supersolution of (4.1), where both \(\underline{W}\) and \(\overline{W}\) satisfy the linear growth condition in \(a \in \mathbb{R}^{n}\). Then

$$\begin{aligned} \underline{W}(t,a,b) \leq \overline{W}(t,a,b),\quad \forall (t,a,b) \in \bar{\mathcal{O}}. \end{aligned}$$
(5.1)

Based on Theorems 4.1 and 5.1, we state the following main result:

Corollary 5.1

Let Assumptions 1–3 hold. Then the auxiliary value function W in (3.7) is the unique continuous viscosity solution of the HJB equation in (4.1).

Proof

Note first that, in view of Theorem 4.1, the auxiliary value function W in (3.7) is a continuous viscosity solution of the HJB equation in (4.1). As for uniqueness, by Lemma 3.3, W satisfies the linear growth condition required in Theorem 5.1. Since W is a viscosity solution of (4.1) (see Theorem 4.1), by Definition 1 it is both a viscosity subsolution and a viscosity supersolution, so the comparison principle in Theorem 5.1 applies. The uniqueness then follows from Theorem 5.1. This completes the proof. □

5.1 Concluding remarks

We have studied the state-constrained stochastic optimal control problem for jump-diffusion systems. Our main results are Theorems 3.2, 4.1, and 5.1, where we have shown that the original value function V in (2.3) can be characterized by the zero-level set of the auxiliary value function W in (3.7) (see (3.8)). Note that W can be obtained by solving the associated HJB equation in (4.1), since W is the unique continuous viscosity solution of (4.1).

One potential direction for future research is the two-player stochastic game framework, for which we would need to generalize Theorem 3.2 using the notion of nonanticipative strategies. The state-constrained problem with general BSDE (backward SDE) type recursive objective functionals would also be an interesting avenue to pursue. Applications to various mathematical finance problems will be studied in the near future.

6 Proof of Theorem 5.1

This section is devoted to the proof of Theorem 5.1.

6.1 Equivalent definitions of viscosity solutions

To prove the uniqueness, we first provide two equivalent definitions of Definition 1. The HJB equation in (4.1) can be rewritten as follows:

$$\begin{aligned} \textstyle\begin{cases} \sup_{u \in U} \{ \sup_{ \alpha \in \mathbb{R}^{r}} H^{(1)}(t,a,(\partial _{t} W,DW,D^{2}W)(t,a,b); u,\alpha ) \\ \quad{} + \sup_{\beta \in G^{2} }H^{(2)}(t,a,b,(W,DW)(t,a,b); u,\beta ) \} = 0,& (t,a,b) \in \mathcal{O}, \\ W(T,a,b) = \max \{ m(a) - b,0 \},& (a,b) \in \mathbb{R}^{n} \times [0, \infty ), \\ W(t,a,0) = W_{0}(t,a),& (t,a) \in [0,T) \times \mathbb{R}^{n}, \end{cases}\displaystyle \end{aligned}$$
(6.1)

where, with

$$\begin{aligned} D^{2} W = \begin{bmatrix} D^{2} W_{(11)} & D^{2} W_{(12)} \\ (D^{2} W_{(12)})^{\top } & D^{2} W_{(22)} \end{bmatrix}, \end{aligned}$$

$$\begin{aligned} & H^{(1)} \bigl(t,a, \bigl(\partial _{t} W, DW,D^{2}W \bigr); u,\alpha \bigr) \\ &\quad := - \partial _{t} W - d(a,\Gamma ) - \bigl\langle D W, \widehat{f}(t,a,u) \bigr\rangle - \frac{1}{2} \operatorname{Tr} \bigl( \sigma \sigma ^{\top }(t,a,u) D^{2} W_{(11)} \bigr) \\ &\qquad {} - \alpha ^{\top }\sigma ^{\top }(t,a,u) D^{2} W_{(12)} - \frac{1}{2} \vert \alpha \vert ^{2} D^{2} W_{(22)}, \end{aligned}$$

and

$$\begin{aligned} & H^{(2)} \bigl(t,a,b,(W,DW) (t,a,b); u,\beta \bigr) \\ &\quad := - \int _{E} \bigl[ W \bigl(t,a + \chi (t,a,u,e), b + \beta (e) \bigr) - W(t,a,b) \\ &\qquad {}- \bigl\langle D W(t,a,b), \widehat{\chi}(t,a,u,e,\beta ) \bigr\rangle \bigr] \pi (\mathrm{d} e). \end{aligned}$$

To avoid the possibility of \(\sup_{\alpha \in \mathbb{R}^{r}} H^{(1)}=\infty \) due to the unboundedness of α, we have the following result. The proof is analogous to that for [11, Lemma 4.1, Remark 4.5] and [17, Sect. 2.3].

Lemma 6.1

\(H^{(1)}\) can be expressed as

$$\begin{aligned} & \sup_{\alpha \in \mathbb{R}^{r}} H^{(1)} \bigl(t,a, \bigl(\partial _{t} W, DW,D^{2}W \bigr); u,\alpha \bigr) = \Lambda ^{+} \bigl(\mathcal{G}_{\psi} \bigl(t,a, \bigl(\partial _{t} W, DW,D^{2}W \bigr);u \bigr) \bigr), \end{aligned}$$

where \(\Lambda ^{+}(A) := \sup_{|v|=1} \langle A v, v \rangle = \sup_{v \neq 0} \frac{\langle A v, v \rangle }{|v|^{2}} \), i.e., the largest eigenvalue of \(A \in \mathbb{S}^{n}\), and

$$\begin{aligned} & \mathcal{G}_{\psi} \bigl(t,a, \bigl(\partial _{t} W, DW,D^{2}W \bigr);u \bigr) := \begin{bmatrix} \mathcal{G}_{(11)} & \psi (b) \mathcal{G}_{(12)} \\ \psi (b) \mathcal{G}_{(12)}^{\top }& \psi ^{2}(b) \mathcal{G}_{(22)} \end{bmatrix} \end{aligned}$$

with \(\psi :[0,\infty ) \rightarrow [0,\infty )\) being a continuous function and

$$\begin{aligned} &\mathcal{G}_{(11)} := - \partial _{t} W - d(a,\Gamma ) - \bigl\langle D W, \widehat{f}(t,a,u) \bigr\rangle - \frac{1}{2} \operatorname{Tr} \bigl(\sigma \sigma ^{\top }(t,a,u) D^{2} W_{(11)} \bigr), \\ &\mathcal{G}_{(12)} := - \frac{1}{2} \bigl(\sigma ^{\top }(t,a,u) D^{2} W_{(12)} \bigr)^{ \top},\qquad \mathcal{G}_{(22)} := - \frac{1}{2} D^{2} W_{(22)} I_{r}. \end{aligned}$$
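Outside the proofs, it may help to see why the supremum over the unbounded control α in \(H^{(1)}\) can remain finite: the α-terms form a concave quadratic whenever the coefficient \(D^{2}W_{(22)}\) of \(-\frac{1}{2}|\alpha |^{2}\) is positive, so the supremum admits a closed form. A minimal one-dimensional numerical sketch, with hypothetical scalars c, v, m standing in for the α-independent part, \(\sigma ^{\top }D^{2}W_{(12)}\), and \(D^{2}W_{(22)}\):

```python
import numpy as np

# Toy 1-D version of sup over alpha of H^(1):
#   H(alpha) = c - alpha * v - 0.5 * alpha**2 * m,
# with hypothetical values; m > 0 makes the quadratic concave in alpha.
c, v, m = 1.3, -0.7, 2.0

def H(alpha):
    return c - alpha * v - 0.5 * alpha**2 * m

# Closed form: the maximizer is alpha* = -v/m, giving c + v**2 / (2*m).
analytic = c + v**2 / (2.0 * m)

# Brute-force check over a fine grid of alpha values.
grid = np.linspace(-10.0, 10.0, 200_001)
numeric = H(grid).max()
print(analytic, numeric)
```

This is only an illustration of the finiteness mechanism; Lemma 6.1 itself encodes the same supremum through the eigenvalue operator \(\Lambda ^{+}\).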

Remark 6.1

From Lemma 6.1, the HJB equation in (6.1) is equivalent to

$$\begin{aligned} \textstyle\begin{cases} \sup_{u \in U} \{ \Lambda ^{+}(\mathcal{G}_{\psi}(t,a,( \partial _{t} W,DW,D^{2}W)(t,a,b);u)) \\ \quad {}+ \sup_{\beta \in G^{2} }H^{(2)}(t,a,b,(W,DW)(t,a,b); u,\beta ) \} = 0, & (t,a,b) \in \mathcal{O}, \\ W(T,a,b) = \max \{ m(a) - b,0 \},& (a,b) \in \mathbb{R}^{n} \times [0, \infty ), \\ W(t,a,0) = W_{0}(t,a),& (t,a) \in [0,T) \times \mathbb{R}^{n}. \end{cases}\displaystyle \end{aligned}$$
(6.2)

We will use (6.2) to prove the comparison principle in Theorem 5.1 with \(\psi (b) := \frac{1}{2}e^{\frac{1}{2}b}\) for \(b \in [0,\infty )\).

For \(\delta > 0\), let \(E_{\delta} := \{ e \in E| |e| < \delta \}\); hence, \(E = E_{\delta} \cup E_{\delta}^{C}\). We then define

$$\begin{aligned} & H^{(2)} \bigl(t,a,b,(W,DW); u,\beta \bigr) \\ &\quad = H^{(21)}_{\delta} \bigl(t,a,b,(W,DW);u,\beta \bigr) + H^{(22)}_{\delta} \bigl(t,a,b,(W,DW);u, \beta \bigr), \end{aligned}$$

where

$$\begin{aligned} & H^{(21)}_{\delta} \bigl(t,a,b,(W,DW);u,\beta \bigr) \\ &\quad := - \int _{E_{\delta}} \bigl[ W \bigl(t,a + \chi (t,a,u,e), b + \beta (e) \bigr) - W(t,a,b) \\ &\qquad {} - \bigl\langle D W(t,a,b), \widehat{\chi}(t,a,u,e,\beta ) \bigr\rangle \bigr] \pi (\mathrm{d} e), \end{aligned}$$

and

$$\begin{aligned} & H^{(22)}_{\delta} \bigl(t,a,b,(W,DW);u,\beta \bigr) \\ &\quad := - \int _{E_{\delta}^{C}} \bigl[ W \bigl(t,a + \chi (t,a,u,e), b + \beta (e) \bigr) - W(t,a,b) \\ &\qquad {} - \bigl\langle D W(t,a,b), \widehat{\chi}(t,a,u,e,\beta ) \bigr\rangle \bigr] \pi (\mathrm{d} e). \end{aligned}$$

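The role of the splitting \(E = E_{\delta} \cup E_{\delta}^{C}\) is to isolate the singularity of the Lévy measure at the origin: on \(E_{\delta}\) the compensated increment is of second order in e, so the integral stays finite even though π itself is not. The following illustrative snippet (not part of the analysis) checks this in one dimension for an assumed α-stable-type density \(|e|^{-1-\alpha}\) and a smooth function:

```python
import numpy as np

# Hypothetical 1-D illustration of the small-jump part of H^(21)_delta:
# Levy density |e|^(-1-alpha) with alpha = 0.5 (singular at 0), smooth
# W(x) = cos(x), evaluated at x = 0.3.  The compensated integrand
# W(x+e) - W(x) - W'(x) e is O(e^2) near 0, so its product with the density
# is O(|e|^(1-alpha)), integrable on {|e| < delta}.
alpha, x = 0.5, 0.3

def compensated_small_jumps(delta, n=200_000):
    e = np.linspace(1e-8, delta, n)      # positive jumps; negatives by symmetry
    dens = e ** (-1.0 - alpha)
    pos = (np.cos(x + e) - np.cos(x) + np.sin(x) * e) * dens
    neg = (np.cos(x - e) - np.cos(x) - np.sin(x) * e) * dens
    h = e[1] - e[0]
    return np.sum(pos + neg) * h         # crude Riemann sum

I_half = compensated_small_jumps(0.5)
I_small = compensated_small_jumps(0.05)
print(I_half, I_small)  # the small-jump contribution shrinks as delta -> 0
```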
From [8, 9, 18, 31, 32] (see [9, Proposition 1]), we have the following first equivalent definition of Definition 1:

Lemma 6.2

Suppose that W is a viscosity subsolution (resp. supersolution) of the HJB equation in (6.2). Then the following conditions are necessary and sufficient:

  1. (i)

    \(W(T,a,b) \leq \max \{m(a) - b,0\}\) (resp. \(W(T,a,b) \geq \max \{m(a) - b,0\}\)) for \((a,b) \in \mathbb{R}^{n} \times [0,\infty )\) and \(W(t,a,0) \leq W_{0}(t,a)\) (resp. \(W(t,a,0) \geq W_{0}(t,a) \)) for \((t,a) \in [0,T) \times \mathbb{R}^{n}\);

  2. (ii)

    For all \(\delta \in (0,1)\) and test functions \(\phi \in C_{b}^{1,3}(\bar{\mathcal{O}}) \cap C_{2}(\bar{\mathcal{O}}) \), the following inequality holds at the global maximum (resp. minimum) point \((t,a,b) \in \mathcal{O}\) of \(W-\phi \):

    $$\begin{aligned} & \sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl( \mathcal{G}_{\psi} \bigl(t,a, \bigl( \partial _{t} \phi , D \phi ,D^{2}\phi \bigr) (t,a,b);u \bigr) \bigr) \\ & \qquad {} + \sup_{\beta \in G^{2} } \bigl\{ H^{(21)}_{\delta} \bigl(t,a,b,( \phi ,D\phi ) (t,a,b);u,\beta \bigr) \\ &\qquad {} + H^{(22)}_{\delta} \bigl(t,a,b,(W,D\phi ) (t,a,b);u, \beta \bigr) \bigr\} \Bigr\} \leq 0 \\ &\quad \Bigl( \textit{resp. } \sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl(\mathcal{G}_{ \psi} \bigl(t,a, \bigl(\partial _{t} \phi , D\phi ,D^{2}\phi \bigr) (t,a,b);u \bigr) \bigr) \\ &\quad \qquad {} + \sup_{\beta \in G^{2} } \bigl\{ H^{(21)}_{\delta} \bigl(t,a,b,( \phi ,D\phi ) (t,a,b);u,\beta \bigr) \\ &\quad \qquad {} + H^{(22)}_{\delta} \bigl(t,a,b,(W,D\phi ) (t,a,b);u, \beta \bigr) \bigr\} \Bigr\} \geq 0 \Bigr). \end{aligned}$$

The definition of parabolic superjet and subjet is given as follows [21]:

Definition 2

  1. (i)

    For \(W(t,\widehat{a})\), the superjet of W at the point \((t,\widehat{a}) \in \mathcal{O}\) is defined by

    $$\begin{aligned} &\mathcal{P}^{1,2,+} W(t,\widehat{a})\\ &\quad := \biggl\{ (q,p,P) \in \mathbb{R} \times \mathbb{R}^{n+1} \times \mathbb{S}^{n+1} | \\ &\qquad {} W \bigl(t^{\prime},\widehat{a}^{\prime} \bigr) \leq W(t,\widehat{a}) + q \bigl(t^{\prime}-t \bigr) + \bigl\langle p, \widehat{a}^{\prime }- \widehat{a} \bigr\rangle \\ &\qquad {}+ \frac{1}{2} \bigl\langle P \bigl(\widehat{a}^{\prime }- \widehat{a} \bigr), \widehat{a}^{\prime }- \widehat{a} \bigr\rangle + o \bigl( \bigl\vert t^{ \prime}-t \bigr\vert + \bigl\vert \widehat{a}^{\prime }- \widehat{a} \bigr\vert ^{2} \bigr),~ \text{as $(t^{\prime},\widehat{a}^{\prime}) \rightarrow (t,\widehat{a})$} \biggr\} . \end{aligned}$$
  2. (ii)

    The closure of \(\mathcal{P}^{1,2,+} W(t,\widehat{a})\) is defined by

    $$\begin{aligned} \overline{\mathcal{P}}^{1,2,+}W(t,\widehat{a}) :={} & \Bigl\{ (q,p,P) \in \mathbb{R} \times \mathbb{R}^{n+1} \times \mathbb{S}^{n+1} | \\ & {} (q,p,P) = \lim_{n \rightarrow \infty} (q_{n},p_{n},P_{n}) \text{ with } (q_{n},p_{n},P_{n}) \in \mathcal{P}^{1,2,+} W(t_{n}, \widehat{a}_{n}) \\ & {} \text{and } \lim_{n \rightarrow \infty} \bigl(t_{n}, \widehat{a}_{n},W(t_{n}, \widehat{a}_{n}) \bigr) = \bigl(t,\widehat{a},W(t, \widehat{a}) \bigr) \Bigr\} . \end{aligned}$$
  3. (iii)

    For \(W(t,\widehat{a})\), the subjet of W at the point \((t,\widehat{a}) \in \mathcal{O}\) and its closure are defined by

    $$\begin{aligned} \mathcal{P}^{1,2,-} W(t,\widehat{a}) := - \mathcal{P}^{1,2,+} \bigl(-W(t, \widehat{a}) \bigr),\qquad \overline{\mathcal{P}}^{1,2,-} W(t, \widehat{a}) := - \overline{\mathcal{P}}^{1,2,+} \bigl(-W(t,\widehat{a}) \bigr). \end{aligned}$$
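For a smooth function the superjet is easy to exhibit: if W is \(C^{1,2}\), then \((\partial _{t} W, DW, P)(t,\widehat{a}) \in \mathcal{P}^{1,2,+} W(t,\widehat{a})\) for every \(P \geq D^{2}W(t,\widehat{a})\), since enlarging P only increases the right-hand side of the defining inequality. A small numerical sanity check on a hypothetical one-dimensional example:

```python
import numpy as np

# Hypothetical smooth W(t, x) = t + x - x^2; at (t, x) = (0, 0) we have
# dW/dt = 1, dW/dx = 1, and D^2 W = -2.
def W(t, x):
    return t + x - x**2

q, p, P = 1.0, 1.0, 0.0  # P = 0 >= -2, so (q, p, P) should lie in the superjet

# Superjet inequality with the o-term dropped; this is exact here because the
# Taylor remainder of W at (0,0) with P = 0 is -x^2 <= 0.
rng = np.random.default_rng(0)
ts = rng.uniform(-0.1, 0.1, 1000)
xs = rng.uniform(-0.1, 0.1, 1000)
lhs = W(ts, xs)
rhs = W(0.0, 0.0) + q * ts + p * xs + 0.5 * P * xs**2
ok = bool(np.all(lhs <= rhs + 1e-12))
print(ok)
```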

Using Definition 2 and Lemma 6.2, we have the following second equivalent definition of Definition 1 (see [8, 32], [31, Lemma 3.5], [9, Proposition 1], and [38, Lemmas 5.4 and 5.5, Chap. 4]):

Lemma 6.3

Suppose that W is a viscosity subsolution (resp. supersolution) of the HJB equation in (6.2). Then the following conditions are necessary and sufficient:

  1. (i)

    \(W(T,a,b) \leq \max \{m(a) - b,0\}\) (resp. \(W(T,a,b) \geq \max \{m(a) - b,0\}\)) for \((a,b) \in \mathbb{R}^{n} \times [0,\infty )\) and \(W(t,a,0) \leq W_{0}(t,a)\) (resp. \(W(t,a,0) \geq W_{0}(t,a) \)) for \((t,a) \in [0,T) \times \mathbb{R}^{n}\);

  2. (ii)

    For all \(\delta \in (0,1)\) and test functions \(\phi \in C_{b}^{1,3}(\bar{\mathcal{O}}) \cap C_{2}(\bar{\mathcal{O}})\) with the local maximum (resp. minimum) point \((t,a,b) \in \mathcal{O}\) of \(W-\phi \), if \((q,p,P) \in \overline{\mathcal{P}}^{1,2,+}W(t,a,b)\) (resp. \((q,p,P) \in \overline{\mathcal{P}}^{1,2,-}W(t,a,b)\)) with \(p = D \phi (t,a,b)\) and \(P = D^{2} \phi (t,a,b)\), then the following inequality holds:

    $$\begin{aligned} &\sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl( \mathcal{G}_{\psi} \bigl(t,a,(q,p,P);u \bigr) \bigr) \\ &\qquad{} + \sup_{\beta \in G^{2} } \bigl\{ H^{(21)}_{\delta} \bigl(t,a,b,( \phi ,D \phi ) (t,a,b);u,\beta \bigr) \\ &\qquad{} + H^{(22)}_{\delta} \bigl(t,a,b,W(t,a,b),p;u,\beta \bigr) \bigr\} \Bigr\} \leq 0 \\ & \quad \Bigl( \textit{resp. } \sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl(\mathcal{G}_{ \psi} \bigl(t,a,(q,p,P);u \bigr) \bigr) \\ &\quad \qquad {} + \sup_{\beta \in G^{2} } \bigl\{ H^{(21)}_{\delta} \bigl(t,a,b,( \phi ,D \phi ) (t,a,b);u,\beta \bigr) \\ &\quad \qquad {} + H^{(22)}_{\delta} \bigl(t,a,b,W(t,a,b),p;u,\beta \bigr) \bigr\} \Bigr\} \geq 0 \Bigr). \end{aligned}$$

Remark 6.2

Lemma 6.3 is introduced due to the singularity of the Lévy measure at zero, which appears in the nonlocal operator \(H_{\delta}^{(21)}\). We will see that, thanks to the regularity of the test function, one can pass to the limit in \(H_{\delta}^{(21)}\) around the singular point of the measure.

6.2 Strict viscosity subsolution

Lemma 6.4

Suppose that \(\underline{W}\) is a viscosity subsolution of (6.2). For \(\nu > 0\), let

$$\begin{aligned} \underline{W}_{\nu}(t,a,b) := \underline{W}(t,a,b) + \nu \gamma (t,b), \end{aligned}$$

where

$$\begin{aligned} \gamma (t,b) := -(T-t) - \bigl(1-e^{-b} \bigr). \end{aligned}$$

Then \(\underline{W}_{\nu}\) is a strict viscosity subsolution of (6.2) in the sense that ≤0 is replaced by \(\leq - \frac{\nu}{8}\) in Definition 1.

Proof

We first verify the boundary conditions of \(\underline{W}_{\nu}\). Note that as \(b \in [0,\infty )\) and \(\nu > 0\),

$$\begin{aligned} \underline{W}_{\nu}(T,a,b) = \underline{W}(T,a,b) - \nu \bigl(1-e^{-b} \bigr) \leq \max \bigl\{ m(a) - b,0 \bigr\} , \end{aligned}$$

and by Lemma 3.5

$$\begin{aligned} \underline{W}_{\nu}(t,a,0) &= \underline{W}(t,a,0) - \nu (T-t) \leq W_{0}(t,a). \end{aligned}$$

Now, let \(\phi _{\nu} \in C_{b}^{1,3}(\bar{\mathcal{O}})\) be the test function such that

$$\begin{aligned} (\underline{W}_{\nu}- \phi _{\nu}) (t,a,b) = \max _{(t^{\prime},a^{ \prime},b^{\prime}) \in \mathcal{O}}(\underline{W}_{\nu}- \phi _{\nu}) \bigl(t^{ \prime},a^{\prime},b^{\prime} \bigr). \end{aligned}$$

Then from (6.2) and Definition 1, it is necessary to show that

$$\begin{aligned} & \sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl(\mathcal{G}_{\psi} \bigl(t,a, \bigl( \partial _{t} \phi _{\nu}, D \phi _{\nu}, D^{2} \phi _{\nu} \bigr) (t,a,b);u \bigr) \bigr) \\ &\quad {}+ \sup_{\beta \in G^{2} }H^{(2)} \bigl(t,a,b,(\phi _{\nu},D\phi _{ \nu}) (t,a,b); u,\beta \bigr) \Bigr\} \leq - \frac{\nu}{8}. \end{aligned}$$
(6.3)

By defining

$$\begin{aligned} \underline{\phi}(t,a,b) := - \nu \gamma (t,b) + \phi _{\nu}(t,a,b), \end{aligned}$$

it is easy to see that \(\underline{\phi} \in C_{b}^{1,3}(\bar{\mathcal{O}})\) and

$$\begin{aligned} (\underline{W}_{\nu}- \phi _{\nu}) (t,a,b) & = \underline{W}(t,a,b) - \bigl(- \nu \gamma (t,b) + \phi _{\nu}(t,a,b) \bigr) \\ & = \underline{W}(t,a,b) - \underline{\phi}(t,a,b). \end{aligned}$$

Then

$$\begin{aligned} \max_{(t^{\prime},a^{\prime},b^{\prime}) \in \mathcal{O}}( \underline{W}_{\nu}- \phi _{\nu}) \bigl(t^{\prime},a^{\prime},b^{\prime} \bigr) & = (\underline{W}_{\nu}- \phi _{\nu}) (t,a,b) \\ & = (\underline{W} - \underline{\phi}) (t,a,b) = \max_{(t^{\prime},a^{\prime},b^{\prime}) \in \mathcal{O}}( \underline{W} - \underline{\phi}) \bigl(t^{\prime},a^{\prime},b^{\prime} \bigr). \end{aligned}$$
(6.4)

Since \(\phi _{\nu} = \underline{\phi} + \nu \gamma \), \(\Lambda ^{+}\) is subadditive and positively homogeneous, and \(H^{(2)}\) is linear in \(\phi _{\nu}\) and \(D \phi _{\nu}\),

$$\begin{aligned} & \sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl( \mathcal{G}_{\psi} \bigl(t,a, \bigl( \partial _{t} \phi _{\nu}, D \phi _{\nu}, D^{2} \phi _{\nu} \bigr) (t,a,b);u \bigr) \bigr) \\ &\quad + \sup_{\beta \in G^{2} }H^{(2)} \bigl(t,a,b,(\phi _{\nu},D\phi _{ \nu}) (t,a,b); u,\beta \bigr) \Bigr\} \leq I^{(1)} + I^{(2)}, \end{aligned}$$

where

$$\begin{aligned}& \begin{aligned} I^{(1)}:= {}& \sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl(\mathcal{G}_{\psi} \bigl(t,a, \bigl( \partial _{t} \underline{\phi} , D \underline{\phi} , D^{2} \underline{\phi} \bigr) (t,a,b);u \bigr) \bigr) \\ &{} + \sup_{\beta \in G^{2} }H^{(2)} \bigl(t,a,b,( \underline{\phi},D \underline{\phi}) (t,a,b); u,\beta \bigr) \Bigr\} , \end{aligned} \\& \begin{aligned} I^{(2)} :={}& \nu \sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl(\mathcal{G}_{ \psi} \bigl(t,a, \bigl(\partial _{t} \gamma , D \gamma , D^{2} \gamma \bigr) (t,b);u \bigr) \bigr) \\ &{} + \sup_{\beta \in G^{2} }H^{(2)} \bigl(t,a,b,(\gamma ,D \gamma ) (t,b); u,\beta \bigr) \Bigr\} . \end{aligned} \end{aligned}$$

We now provide the estimate of \(I^{(1)}\) and \(I^{(2)}\). First, since \(\underline{W}\) is the viscosity subsolution and \(\underline{\phi}\) is the corresponding test function in view of (6.4), we have

$$\begin{aligned} I^{(1)} \leq 0. \end{aligned}$$
(6.5)

For \(I^{(2)}\), we observe that

$$\begin{aligned} & H^{(2)} \bigl(t,a,b,(\gamma ,D \gamma ) (t,b); u,\beta \bigr) \\ &\quad = \int _{E} \bigl[ \bigl(1-e^{-(b + \beta (e))} \bigr) - \bigl(1-e^{-b} \bigr) - e^{-b} \beta (e) \bigr] \pi ( \mathrm{d} e). \end{aligned}$$

Since \(b \in [0,\infty )\) and \(\beta \in G^{2}\), the integrand equals \(e^{-b}(1-e^{-\beta (e)}-\beta (e)) \leq 0\) (as \(1-e^{-s} \leq s\) for all \(s \in \mathbb{R}\)) and vanishes at \(\beta (e) \equiv 0\); hence

$$\begin{aligned} \sup_{\beta \in G^{2}} H^{(2)} \bigl(t,a,b,(\gamma ,D \gamma ) (t,b); u, \beta \bigr) = 0. \end{aligned}$$
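The equality above rests on the elementary inequality \(1 - e^{-s} \leq s\), which makes the factored integrand \(e^{-b}(1-e^{-s}-s)\) nonpositive with its maximum 0 at \(s = 0\). A quick numerical confirmation (illustrative only):

```python
import numpy as np

# g(s) = 1 - exp(-s) - s is the s-dependent factor of the integrand of
# H^(2) for gamma; the claim sup_beta H^(2) = 0 reduces to g <= 0 with
# equality exactly at s = 0 (g is concave with g(0) = g'(0) = 0).
s = np.linspace(-5.0, 5.0, 100_001)
g = 1.0 - np.exp(-s) - s

g_max = g.max()
argmax_s = s[g.argmax()]
print(g_max, argmax_s)  # maximum ~0, attained near s = 0
```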

Recall \(\psi (b) = \frac{1}{2}e^{\frac{1}{2}b}\) for \(b \in [0,\infty )\). In the definition of \(\mathcal{G}_{\psi}\),

$$\begin{aligned} &\mathcal{G}_{(11)}= -1 - e^{-b} l(t,a,u) - d(a,\Gamma ),\qquad \psi (b) \mathcal{G}_{(12)} = 0, \\ &\psi ^{2}(b)\mathcal{G}_{(22)}= - \frac{1}{2 \times 4} e^{b} e^{-b}I_{r} = -\frac{1}{8} I_{r}. \end{aligned}$$

Note that since l and \(d(a,\Gamma )\) are nonnegative, and \(b \in [0,\infty )\), we have \(\mathcal{G}_{(11)} \leq -1\). Then we can show that

$$\begin{aligned} & \Lambda ^{+} \bigl(\mathcal{G}_{\psi} \bigl(t,a, \bigl( \partial _{t} \gamma , D \gamma , D^{2} \gamma \bigr) (t,b);u \bigr) \bigr) \\ &\quad = \Lambda ^{+} \left ( \begin{bmatrix} -1 - e^{-b} l(t,a,u) - d(a,\Gamma ) & 0 \\ 0 & -\frac{1}{8} I_{r} \end{bmatrix} \right ) \leq - \frac{1}{8}, \end{aligned}$$

which implies

$$\begin{aligned} I^{(2)} \leq - \frac{\nu}{8}. \end{aligned}$$
(6.6)

Then (6.5) and (6.6) lead to (6.3). This completes the proof. □
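The eigenvalue bound used in the last step can be confirmed numerically: since \(\mathcal{G}_{(11)} \leq -1\) and the lower-right block is \(-\frac{1}{8} I_{r}\), the largest eigenvalue of the block-diagonal matrix is exactly \(-\frac{1}{8}\). An illustrative check with hypothetical values for l, \(d(a,\Gamma )\), b, and r:

```python
import numpy as np

# Hypothetical data: l(t,a,u) = 0.4, d(a, Gamma) = 1.2, b = 0.7, r = 3.
l_val, dist, b, r = 0.4, 1.2, 0.7, 3

g11 = -1.0 - np.exp(-b) * l_val - dist   # G_(11) <= -1
G = np.zeros((1 + r, 1 + r))
G[0, 0] = g11
G[1:, 1:] = -(1.0 / 8.0) * np.eye(r)     # psi^2(b) G_(22) = -(1/8) I_r

# Lambda^+ = largest eigenvalue of the symmetric matrix G
lam_plus = np.linalg.eigvalsh(G).max()
print(lam_plus)  # -0.125, i.e. <= -1/8 as used in (6.6)
```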

6.3 Proof of Theorem 5.1

We continue to prove the uniqueness. For \(\eta > 0\) and \(\nu > 0\), let

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda} (t,a,b) &:= \underline{W}_{\nu}(t,a,b) - \overline{W}(t,a,b) - 2 \eta e^{-\lambda t} \bigl(1 + \vert a \vert ^{2} + b \bigr), \end{aligned}$$
(6.7)

where \(\lambda > 0\) will be specified later. Then it is necessary to show that

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda} (t,a,b) \leq 0,\quad \forall (t,a,b) \in \bar{\mathcal{O}}, \end{aligned}$$
(6.8)

since by letting \(\eta \downarrow 0\) and then \(\nu \downarrow 0\), the desired result in (5.1) holds, i.e.,

$$\begin{aligned} \underline{W}(t,a,b) \leq \overline{W}(t,a,b),\quad \forall (t,a,b) \in \bar{ \mathcal{O}}. \end{aligned}$$

Assume that (6.8) is not true, i.e., \(\Psi _{\nu ;\eta ,\lambda} (t,a,b) > 0\) for some \((t,a,b) \in \bar{\mathcal{O}}\). Consider

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda}(\tilde{t},\tilde{a},\tilde{b}) = \max _{(t,a,b) \in \bar{\mathcal{O}}} \Psi _{\nu ;\eta ,\lambda}(t,a,b) > 0, \end{aligned}$$
(6.9)

where the maximum exists, since \(\underline{W}_{\nu}\) and \(\overline{W}\) satisfy the linear growth condition (\(\log (1+b)\) also satisfies the linear growth condition) and \(e^{-\lambda t}\) is decreasing. Note that \((\tilde{t},\tilde{a},\tilde{b})\) depends on \((\nu ,\eta ,\lambda )\), i.e., \((\tilde{t},\tilde{a},\tilde{b}) := (\tilde{t}_{\nu ;\eta ,\lambda}, \tilde{a}_{\nu ;\eta ,\lambda},\tilde{b}_{\nu ;\eta ,\lambda})\).

Suppose that \(\tilde{t} = T\). Then in view of (6.7) and the definition of \(\underline{W}_{\nu}\),

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda}(T,\tilde{a},\tilde{b}) = & \underline{W}(T, \tilde{a}, \tilde{b}) + \nu \gamma (T,\tilde{b}) - \overline{W}(T, \tilde{a},\tilde{b}) - 2 \eta e^{-\lambda T} \bigl(1+ \vert \tilde{a} \vert ^{2} + \tilde{b} \bigr) \leq 0, \end{aligned}$$

which contradicts (6.9). Hence, \(\tilde{t} < T\). Similarly, when \(\tilde{b}=0\), we have

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda}(\tilde{t},\tilde{a},0) & = \underline{W}( \tilde{t}, \tilde{a},0) - \nu (T-\tilde{t}) - \overline{W}(\tilde{t}, \tilde{a},0) - 2 \eta e^{-\lambda t} \bigl(1+ \vert \tilde{a} \vert ^{2} \bigr) \leq 0, \end{aligned}$$

which again contradicts (6.9). Hence, \(\tilde{b} > 0\). This implies that \((\tilde{t},\tilde{a},\tilde{b}) \in \mathcal{O}\).

After doubling the variables of Ψ, we consider

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda}^{\kappa} (t,a,b,\breve{a},\breve{b}) = & \widehat{\Psi}_{\nu ;\eta ,\lambda}(t,a,b,\breve{a},\breve{b}) - \kappa \zeta (a,b, \breve{a},\breve{b}), \end{aligned}$$

where \(\kappa >0\) and

$$\begin{aligned} &\begin{aligned} \widehat{\Psi}_{\nu ;\eta ,\lambda}(t,a,b,\breve{a},\breve{b}) :={}& \underline{W}_{\nu}(t,a,b) - \overline{W}(t,\breve{a},\breve{b}) - \eta e^{-\lambda t} \bigl(1+ \vert a \vert ^{2} + b \bigr) \\ &{} - \eta e^{-\lambda t} \bigl(1+ \vert \breve{a} \vert ^{2} + \breve{b} \bigr) - \frac{\eta e^{-\lambda t} }{2} \bigl( \vert a - \tilde{a} \vert ^{2} + (b- \tilde{b}) \bigr) - \frac{1}{2} \vert t-\tilde{t} \vert ^{2}, \end{aligned} \\ & \zeta (a,b,\breve{a},\breve{b}) := \frac{1}{2} \bigl( \vert a - \breve{a} \vert ^{2} + \vert b-\breve{b} \vert ^{2} \bigr). \end{aligned}$$

Since \(\widehat{\Psi}_{\nu ;\eta ,\lambda}(t,a,b,a,b) \leq \Psi _{\nu ; \eta ,\lambda}(t,a,b)\) and \(\widehat{\Psi}_{\nu ;\eta ,\lambda}(\tilde{t},\tilde{a},\tilde{b}, \tilde{a},\tilde{b}) = \Psi _{\nu ;\eta ,\lambda}(\tilde{t},\tilde{a}, \tilde{b})\),

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda}(\tilde{t},\tilde{a},\tilde{b}) = \max _{(t,a,b) \in \mathcal{O}} \Psi _{\nu ;\eta ,\lambda}(t,a,b) = \max _{(t,a,b) \in \mathcal{O}} \widehat{\Psi}_{\nu ;\eta ,\lambda}(t,a,b,a,b). \end{aligned}$$
(6.10)

We consider \((t^{\prime}_{\kappa},a^{\prime}_{\kappa}, b^{\prime}_{\kappa}, \breve{a}^{\prime}_{\kappa}, \breve{b}^{\prime}_{\kappa} )\) such that

$$\begin{aligned} & \Psi _{\nu ;\eta ,\lambda}^{\kappa} \bigl(t^{\prime}_{\kappa},a^{\prime}_{ \kappa}, b^{\prime}_{\kappa}, \breve{a}^{\prime}_{\kappa}, \breve{b}^{ \prime}_{\kappa} \bigr) \\ & \quad = \max_{(t,a, b,\breve{a}, \breve{b}) \in \mathcal{O} \times \mathbb{R}^{n} \times (0,\infty )} \bigl\{ \widehat{\Psi}_{\nu ; \eta ,\lambda}(t,a,b, \breve{a},\breve{b}) - \kappa \zeta (a,b, \breve{a},\breve{b}) \bigr\} , \end{aligned}$$

which exists since \(-\Psi _{\nu ;\eta ,\lambda}^{\kappa}\) is coercive. Then from [21, Proposition 3.7],

$$\begin{aligned} \textstyle\begin{cases} \lim_{\kappa \rightarrow \infty} \kappa \zeta (a^{\prime}_{\kappa}, b^{ \prime}_{\kappa}, \breve{a}^{\prime}_{\kappa}, \breve{b}^{\prime}_{ \kappa}) = 0, \\ \lim_{\kappa \rightarrow \infty} \Psi _{\nu ;\eta ,\lambda}^{\kappa}(t^{ \prime}_{\kappa}, a^{\prime}_{\kappa}, b^{\prime}_{\kappa}, \breve{a}^{ \prime}_{\kappa}, \breve{b}^{\prime}_{\kappa}) = \widehat{\Psi}_{\nu ; \eta ,\lambda}(t^{\prime},a^{\prime}, b^{\prime}, \breve{a}^{\prime }, \breve{b}^{\prime}) \\ \hphantom{\lim_{\kappa \rightarrow \infty} \Psi _{\nu ;\eta ,\lambda}^{\kappa}(t^{ \prime}_{\kappa}, a^{\prime}_{\kappa}, b^{\prime}_{\kappa}, \breve{a}^{ \prime}_{\kappa}, \breve{b}^{\prime}_{\kappa})} = \max_{\zeta (a,b, \breve{a},\breve{b}) = 0} \widehat{\Psi}_{\nu ;\eta ,\lambda}(t,a, b, \breve{a}, \breve{b}), \\ \lim_{\kappa \rightarrow \infty} \zeta (a^{\prime}_{\kappa}, b^{ \prime}_{\kappa}, \breve{a}^{\prime}_{\kappa}, \breve{b}^{\prime}_{ \kappa}) = \zeta (a^{\prime},b^{\prime},\breve{a}^{\prime},\breve{b}^{ \prime}) = 0. \end{cases}\displaystyle \end{aligned}$$

This, together with (6.10), implies that as \(\kappa \rightarrow \infty \),

$$\begin{aligned} \textstyle\begin{cases} \vert a^{\prime}_{\kappa} - \breve{a}^{\prime}_{\kappa} \vert ^{2},~ \vert b^{\prime}_{ \kappa} - \breve{b}^{\prime}_{\kappa} \vert ^{2} \rightarrow 0, \\ \frac{\kappa}{2} \vert a^{\prime}_{\kappa} - \breve{a}^{\prime}_{\kappa} \vert ^{2},~ \frac{\kappa}{2} \vert b^{\prime}_{\kappa} - \breve{b}^{\prime}_{\kappa} \vert ^{2} \rightarrow 0, \\ t^{\prime}_{\kappa} \rightarrow \tilde{t}, ~ a^{\prime}_{\kappa}, \breve{a}^{\prime}_{\kappa} \rightarrow \tilde{a},~ b^{\prime}_{ \kappa},\breve{b}^{\prime}_{\kappa} \rightarrow \tilde{b}. \end{cases}\displaystyle \end{aligned}$$
(6.11)
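The limits in (6.11) are an instance of the standard penalization effect behind the doubling of variables: as \(\kappa \rightarrow \infty \), the maximizers of the penalized functional collapse onto the diagonal and the penalty \(\kappa \zeta \) vanishes. A self-contained numerical illustration with a hypothetical concave objective:

```python
import numpy as np

# Hypothetical doubled objective Psi_hat(a, a_breve) = -(a-1)^2 - (a_breve+1)^2
# with penalty (kappa/2)(a - a_breve)^2.  Maximizing over the diagonal
# a = a_breve gives the value -2, attained at a = 0.
grid = np.linspace(-3.0, 3.0, 1201)
A, B = np.meshgrid(grid, grid, indexing="ij")

def penalized_max(kappa):
    val = -(A - 1.0) ** 2 - (B + 1.0) ** 2 - 0.5 * kappa * (A - B) ** 2
    i, j = np.unravel_index(val.argmax(), val.shape)
    return val[i, j], 0.5 * kappa * (grid[i] - grid[j]) ** 2

v10, pen10 = penalized_max(10.0)
v1000, pen1000 = penalized_max(1000.0)
print(v1000, pen1000)  # value approaches -2 and the penalty term vanishes
```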

For simplicity, we denote \((t^{\prime},a^{\prime}, b^{\prime}, \breve{a}^{\prime }, \breve{b}^{ \prime }) := (t^{\prime}_{\kappa},a^{\prime}_{\kappa}, b^{\prime}_{ \kappa}, \breve{a}^{\prime}_{\kappa}, \breve{b}^{\prime}_{\kappa} )\).

We let

$$\begin{aligned} &h_{\eta ,\lambda}(t,a,b):= \eta e^{-\lambda t} \bigl(1+ \vert a \vert ^{2} + b \bigr) + \frac{1}{2} \vert t-\tilde{t} \vert ^{2} + \eta e^{-\lambda t} \frac{1}{2} \bigl( \vert a - \tilde{a} \vert ^{2} + (b-\tilde{b}) \bigr), \\ &\widehat{h}_{\eta ,\lambda}(t,\breve{a},\breve{b}) := \eta e^{- \lambda t} \bigl(1+ \vert \breve{a} \vert ^{2} +\breve{b} \bigr), \\ &\zeta _{\kappa}(a,b,\breve{a},\breve{b}) := \frac{\kappa}{2} \bigl( \vert a - \breve{a} \vert ^{2} + \vert b-\breve{b} \vert ^{2} \bigr). \end{aligned}$$

Then

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda}^{\kappa} (t,a,b,\breve{a}, \breve{b}) ={}& \bigl( \underline{W}_{\nu}(t,a,b) - h_{\eta ,\lambda}(t,a,b) \bigr) \\ &{} - \bigl(\overline{W}(t,\breve{a},\breve{b}) + \widehat{h}_{\eta ,\lambda}(t,\breve{a}, \breve{b}) \bigr) - \zeta _{\kappa}(a,b,\breve{a},\breve{b}). \end{aligned}$$
(6.12)

We invoke Crandall-Ishii’s lemma in [21, Theorem 8.3 and Remark 2.7] from which there exist

$$\begin{aligned} \textstyle\begin{cases} q+\widehat{q} = \partial _{t} \zeta _{\kappa}(a^{\prime}, b^{\prime}, \breve{a}^{\prime }, \breve{b}^{\prime }) = 0, \\ (q + \partial _{t} h_{\eta ,\lambda}, D_{(a,b)}(h_{\eta ,\lambda} + \zeta _{\kappa}), P + D_{(a,b)}^{2} h_{\eta ,\lambda})(t^{\prime}, a^{ \prime}, b^{\prime}) \in \overline{\mathcal{P}}^{1,2,+}\underline{W}_{ \nu}(t^{\prime}, a^{\prime}, b^{\prime}), \\ (- \widehat{q} - \partial _{t} \widehat{h}_{\eta ,\lambda}, - D_{( \breve{a},\breve{b})}( \widehat{h}_{\eta ,\lambda} + \zeta _{\kappa}), -\widehat{P} - D_{(\breve{a},\breve{b})}^{2} \widehat{h}_{\eta , \lambda})(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime}) \in \overline{\mathcal{P}}^{1,2,-}\overline{W}(t^{\prime}, \breve{a}^{ \prime}, \breve{b}^{\prime}), \end{cases}\displaystyle \end{aligned}$$

such that

$$\begin{aligned} -3 \kappa \begin{bmatrix} I_{n+1} & 0 \\ 0 & I_{n+1} \end{bmatrix} \leq \begin{bmatrix} P & 0 \\ 0 & \widehat{P} \end{bmatrix} \leq 3 \kappa \begin{bmatrix} I_{n+1} & -I_{n+1} \\ -I_{n+1} & I_{n+1} \end{bmatrix}. \end{aligned}$$
(6.13)

Straightforward computation yields

$$\begin{aligned} \textstyle\begin{cases} \partial _{t} h_{\eta ,\lambda}(t,a,b) = -\eta \lambda e^{- \lambda t}(1+ \vert a \vert ^{2}+b) + (t-\tilde{t}) - \frac{\eta \lambda e^{-\lambda t}}{2} ( \vert a - \tilde{a} \vert ^{2} + (b-\tilde{b}) ), \\ \partial _{t} \widehat{h}_{\eta ,\lambda}(t,\breve{a},\breve{b}) = - \eta \lambda e^{-\lambda t}(1 + \vert \breve{a} \vert ^{2} +\breve{b}), \\ D_{(a,b)} h_{\eta ,\lambda}(t,a,b) = \begin{bmatrix} 2 \eta e^{-\lambda t} a + \eta e^{-\lambda t} (a - \tilde{a}) \\ \frac{3}{2} \eta e^{-\lambda t} \end{bmatrix},\\ D_{(\breve{a},\breve{b})} \widehat{h}_{\eta ,\lambda}(t, \breve{a},\breve{b}) = \begin{bmatrix} 2 \eta e^{-\lambda t} \breve{a} \\ \eta e^{-\lambda t} \end{bmatrix}, \\ D_{(a,b)}^{2} h_{\eta ,\lambda}(t,a,b) = \begin{bmatrix} 3 \eta e^{-\lambda t} I_{n} & 0 \\ 0 & 0 \end{bmatrix},\\ D_{(\breve{a},\breve{b})}^{2} \widehat{h}_{\eta , \lambda}(t,\breve{a},\breve{b}) = \begin{bmatrix} 2 \eta e^{-\lambda t} I_{n} & 0 \\ 0 & 0 \end{bmatrix}, \\ D_{(a,b)} \zeta _{\kappa}(t,a,b,\breve{a},\breve{b}) = \begin{bmatrix} \kappa (a - \breve{a}) \\ \kappa (b - \breve{b}) \end{bmatrix},\\ D_{(\breve{a},\breve{b})} \zeta _{\kappa}(t,a,b, \breve{a},\breve{b}) = \begin{bmatrix} - \kappa (a - \breve{a}) \\ - \kappa (b - \breve{b}) \end{bmatrix}. \end{cases}\displaystyle \end{aligned}$$
(6.14)
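The entries in (6.14) can be verified mechanically. A symbolic check for the scalar case n = 1 (so that \(D_{(a,b)}\) reduces to a pair of partial derivatives), with the symbols t_t, a_t, b_t standing for \(\tilde{t}\), \(\tilde{a}\), \(\tilde{b}\):

```python
import sympy as sp

t, a, b, eta, lam, t_t, a_t, b_t = sp.symbols("t a b eta lam t_t a_t b_t",
                                              real=True)
E = eta * sp.exp(-lam * t)

# h_{eta,lambda} in the scalar case n = 1
h = (E * (1 + a**2 + b) + sp.Rational(1, 2) * (t - t_t)**2
     + E / 2 * ((a - a_t)**2 + (b - b_t)))

# Entries claimed in (6.14):
dt_claim = (-eta * lam * sp.exp(-lam * t) * (1 + a**2 + b) + (t - t_t)
            - eta * lam * sp.exp(-lam * t) / 2 * ((a - a_t)**2 + (b - b_t)))
da_claim = 2 * E * a + E * (a - a_t)
db_claim = sp.Rational(3, 2) * E
daa_claim = 3 * E

checks = [
    sp.simplify(sp.diff(h, t) - dt_claim) == 0,
    sp.simplify(sp.diff(h, a) - da_claim) == 0,
    sp.simplify(sp.diff(h, b) - db_claim) == 0,
    sp.simplify(sp.diff(h, a, 2) - daa_claim) == 0,
    sp.simplify(sp.diff(h, a, b)) == 0,
    sp.simplify(sp.diff(h, b, 2)) == 0,
]
print(all(checks))  # True
```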

Below, we use the superscript ′ in the above derivatives when they are evaluated at \((t^{\prime},a^{\prime}, b^{\prime}, \breve{a}^{\prime }, \breve{b}^{ \prime })\) (e.g. \(\partial _{t} h_{\eta ,\lambda}^{\prime }:= \partial _{t} h_{\eta , \lambda}(t^{\prime},a^{\prime},b^{\prime})\)).

From Lemmas 6.3 and 6.4, there exists \(\phi \in C_{b}^{1,3}(\bar{\mathcal{O}}) \cap C_{2}(\bar{\mathcal{O}})\) such that

$$\begin{aligned} &\sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl( \mathcal{G}_{\psi} \bigl(t^{\prime},a^{ \prime}, \bigl(q + \partial _{t} h_{\eta ,\lambda}^{\prime}, D_{(a,b)} \bigl(h_{ \eta ,\lambda}^{\prime }+ \zeta ^{\prime}_{\kappa} \bigr), P + D_{(a,b)}^{2} h_{\eta ,\lambda}^{\prime} \bigr); u \bigr) \bigr) \\ &\quad {} + \sup_{\beta \in G^{2} } \bigl\{ H^{(21)}_{\delta} \bigl(t^{ \prime}, a^{\prime}, b^{\prime},(\phi ,D \phi ) \bigl(t^{\prime}, a^{\prime}, b^{\prime} \bigr);u,\beta \bigr) \\ &\quad{} + H^{(22)}_{\delta} \bigl(t^{\prime}, a^{\prime}, b^{\prime}, \underline{W}_{\nu} \bigl(t^{\prime},a^{\prime}, b^{\prime} \bigr), D_{(a,b)} \bigl(h_{ \eta ,\lambda}^{\prime }+ \zeta ^{\prime}_{\kappa} \bigr) ;u,\beta \bigr) \bigr\} \Bigr\} \leq - \frac{\nu}{8}, \end{aligned}$$

and

$$\begin{aligned} &\sup_{u \in U} \Bigl\{ \Lambda ^{+} \bigl( \mathcal{G}_{\psi} \bigl(t^{\prime}, \breve{a}^{\prime}, \bigl(- \widehat{q} - \partial _{t} \widehat{h}_{\eta , \lambda}^{\prime}, - D_{(\breve{a},\breve{b})} \bigl( \widehat{h}_{\eta , \lambda}^{\prime }+ \zeta _{\kappa}^{\prime} \bigr), -\widehat{P} - D_{( \breve{a},\breve{b})}^{2} \widehat{h}_{\eta ,\lambda}^{\prime} \bigr); u \bigr) \bigr) \\ &\quad{} + \sup_{\beta \in G^{2} } \bigl\{ H^{(21)}_{\delta} \bigl(t^{ \prime}, \breve{a}^{\prime}, \breve{b}^{\prime},( \phi ,D \phi ) \bigl(t^{ \prime}, \breve{a}^{\prime}, \breve{b}^{\prime} \bigr);u,\beta \bigr) \\ &\quad{} + H^{(22)}_{\delta} \bigl(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime}, \overline{W} \bigl(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime} \bigr),- D_{(\breve{a},\breve{b})} \bigl( \widehat{h}_{\eta , \lambda}^{\prime }+ \zeta _{\kappa}^{\prime} \bigr);u,\beta \bigr) \bigr\} \Bigr\} \geq 0. \end{aligned}$$

Then, using \(\sup_{x} \{ f(x)\} - \sup_{x} \{ g(x)\} \leq \sup_{x} \{ f(x) - g(x)\}\), we have

$$\begin{aligned} \Upsilon ^{(1)} + \Upsilon ^{(2)} + \Upsilon ^{(3)} \geq \frac{\nu}{8}, \end{aligned}$$

where

$$\begin{aligned}& \begin{aligned} \Upsilon ^{(1)} :={}& \sup_{u \in U} \bigl\{ \Lambda ^{+} \bigl( \mathcal{G}_{\psi} \bigl(t^{\prime}, \breve{a}^{\prime}, \bigl(- \widehat{q} - \partial _{t} \widehat{h}_{\eta ,\lambda}^{\prime}, - D_{(\breve{a}, \breve{b})} \bigl( \widehat{h}_{\eta ,\lambda}^{\prime }+ \zeta _{\kappa}^{ \prime} \bigr), -\widehat{P} - D_{(\breve{a},\breve{b})}^{2} \widehat{h}_{ \eta ,\lambda}^{\prime} \bigr); u \bigr) \bigr) \\ & {} - \Lambda ^{+} \bigl(\mathcal{G}_{\psi} \bigl(t^{\prime},a^{ \prime}, \bigl(q + \partial _{t} h_{\eta ,\lambda}^{\prime}, D_{(a,b)} \bigl(h_{ \eta ,\lambda}^{\prime }+ \zeta ^{\prime}_{\kappa} \bigr), P + D_{(a,b)}^{2} h_{\eta ,\lambda}^{\prime} \bigr); u \bigr) \bigr) \bigr\} , \end{aligned} \\& \begin{aligned} \Upsilon ^{(2)} :={}& \sup_{u \in U, \beta \in G^{2} } \bigl\{ H^{(21)}_{ \delta} \bigl(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime},(\phi ,D \phi ) \bigl(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime} \bigr);u,\beta \bigr) \\ &{}- H^{(21)}_{\delta} \bigl(t^{\prime}, a^{\prime}, b^{\prime},( \phi ,D \phi ) \bigl(t^{\prime}, a^{\prime}, b^{\prime} \bigr);u,\beta \bigr) \bigr\} , \end{aligned} \\& \begin{aligned} \Upsilon ^{(3)} :={}& \sup_{u \in U, \beta \in G^{2} } \bigl\{ H^{(22)}_{ \delta} \bigl(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime}, \overline{W} \bigl(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime} \bigr),- D_{( \breve{a},\breve{b})} \bigl( \widehat{h}_{\eta ,\lambda}^{\prime }+ \zeta _{ \kappa}^{\prime} \bigr);u,\beta \bigr) \\ &{} - H^{(22)}_{\delta} \bigl(t^{\prime}, a^{\prime}, b^{\prime}, \underline{W}_{\nu} \bigl(t^{\prime},a^{\prime}, b^{\prime} \bigr), D_{(a,b)} \bigl(h_{\eta , \lambda}^{\prime }+ \zeta ^{\prime}_{\kappa} \bigr) ;u,\beta \bigr) \bigr\} . \end{aligned} \end{aligned}$$

We obtain the estimates of \(\Upsilon ^{(1)}\), \(\Upsilon ^{(2)}\), and \(\Upsilon ^{(3)}\) separately in (6.15), (6.21), and (6.26) below. These estimates show that for any \(\lambda \geq \max \{C_{2}, C_{4}\}\), where \(C_{2}\) and \(C_{4}\) are given below, we have

$$\begin{aligned} \frac{\nu}{8} \leq \lim_{\eta \downarrow 0} \lim _{\kappa \rightarrow \infty} \lim_{\delta \downarrow 0} \bigl\{ \Upsilon ^{(1)} + \Upsilon ^{(2)} + \Upsilon ^{(3)} \bigr\} \leq 0, \end{aligned}$$

which leads to the desired contradiction, since \(\nu > 0\) from Lemma 6.4. Hence, (6.8) holds, and we have the comparison principle in (5.1).

6.4 Estimate of \(\Upsilon ^{(1)}\)

Using the definition of \(\mathcal{G}_{\psi}\), we write

$$\begin{aligned} & \mathcal{G}_{\psi} \bigl(t^{\prime}, \breve{a}^{\prime}, \bigl(- \widehat{q} - \partial _{t} \widehat{h}_{\eta ,\lambda}^{\prime}, - D_{(\breve{a}, \breve{b})} \bigl( \widehat{h}_{\eta ,\lambda}^{\prime }+ \zeta _{\kappa}^{ \prime} \bigr), -\widehat{P} - D_{(\breve{a},\breve{b})}^{2} \widehat{h}_{ \eta ,\lambda}^{\prime} \bigr);u \bigr) = \widehat{\mathcal{G}}^{(1)}_{\psi} + \widehat{ \mathcal{G}}^{(2)}_{ \psi} + \widehat{\mathcal{G}}^{(3)}_{\psi}, \end{aligned}$$

where

$$\begin{aligned} &\widehat{\mathcal{G}}^{(1)}_{\psi} := \mathcal{G}_{\psi} \biggl(t^{\prime}, \breve{a}^{\prime}, \biggl(- \widehat{q} - \frac{1}{2}\partial _{t} \widehat{h}_{\eta ,\lambda}^{\prime}, - D_{(\breve{a},\breve{b})} \bigl( \widehat{h}_{\eta ,\lambda}^{\prime }+ \zeta _{\kappa}^{\prime} \bigr),0 \biggr);u \biggr), \\ &\widehat{\mathcal{G}}^{(2)}_{\psi}:= \mathcal{G}_{\psi} \bigl(t^{\prime}, \breve{a}^{\prime},(0,0,- \widehat{P});u \bigr), \\ &\widehat{\mathcal{G}}^{(3)}_{\psi}:= \mathcal{G}_{\psi} \biggl(t^{\prime}, \breve{a}^{\prime}, \biggl(- \frac{1}{2}\partial _{t} \widehat{h}_{\eta , \lambda}^{\prime},0,- D_{(\breve{a},\breve{b})}^{2} \widehat{h}_{ \eta ,\lambda}^{\prime} \biggr);u \biggr), \end{aligned}$$

and

$$\begin{aligned} & \mathcal{G}_{\psi} \bigl(t^{\prime},a^{\prime}, \bigl(q + \partial _{t} h_{ \eta ,\lambda}^{\prime}, D_{(a,b)} \bigl(h_{\eta ,\lambda}^{\prime }+ \zeta ^{\prime}_{\kappa} \bigr), P + D_{(a,b)}^{2} h_{\eta ,\lambda}^{ \prime} \bigr); u \bigr) = \mathcal{G}^{(1)}_{\psi} + \mathcal{G}^{(2)}_{\psi} + \mathcal{G}^{(3)}_{ \psi}, \end{aligned}$$

where

$$\begin{aligned} & \mathcal{G}^{(1)}_{\psi} := \mathcal{G}_{\psi} \biggl(t^{\prime},a^{\prime}, \biggl(q + \frac{1}{2} \partial _{t} h_{\eta ,\lambda}^{\prime}, D_{(a,b)} \bigl(h_{ \eta ,\lambda}^{\prime }+ \zeta ^{\prime}_{\kappa} \bigr),0 \biggr);u \biggr), \\ & \mathcal{G}^{(2)}_{\psi} := \mathcal{G}_{\psi} \bigl(t^{\prime},a^{\prime},(0,0, P); u \bigr), \\ & \mathcal{G}^{(3)}_{\psi} := \mathcal{G}_{\psi} \biggl(t^{\prime},a^{\prime}, \biggl( \frac{1}{2} \partial _{t} h_{\eta ,\lambda}^{\prime}, 0, D_{(a,b)}^{2} h_{\eta ,\lambda}^{\prime} \biggr); u \biggr). \end{aligned}$$

Then, using the subadditivity of \(\Lambda ^{+}\) (in particular, \(\Lambda ^{+}(A) - \Lambda ^{+}(B) \leq \Lambda ^{+}(A-B)\)), we have

$$\begin{aligned} \Upsilon ^{(1)} & := \sup_{u \in U} \bigl\{ \Lambda ^{+} \bigl( \widehat{\mathcal{G}}^{(1)}_{\psi} + \widehat{\mathcal{G}}^{(2)}_{ \psi} + \widehat{ \mathcal{G}}^{(3)}_{\psi} \bigr) - \Lambda ^{+} \bigl( \mathcal{G}^{(1)}_{\psi} + \mathcal{G}^{(2)}_{\psi} + \mathcal{G}^{(3)}_{ \psi} \bigr) \bigr\} \\ & \leq \sup_{u \in U} \Lambda ^{+} \bigl(\widehat{ \mathcal{G}}^{(1)}_{\psi} + \widehat{\mathcal{G}}^{(2)}_{\psi} + \widehat{\mathcal{G}}^{(3)}_{ \psi} - \bigl( \mathcal{G}^{(1)}_{\psi} + \mathcal{G}^{(2)}_{\psi} + \mathcal{G}^{(3)}_{\psi} \bigr) \bigr) \\ & \leq \Upsilon ^{(11)} + \Upsilon ^{(12)} + \Upsilon ^{(13)}, \end{aligned}$$

where

$$\begin{aligned} &\Upsilon ^{(11)}:= \sup_{u \in U} \Lambda ^{+} \bigl( \widehat{\mathcal{G}}^{(1)}_{\psi} - \mathcal{G}^{(1)}_{\psi} \bigr),\qquad \Upsilon ^{(12)} := \sup_{u \in U} \Lambda ^{+} \bigl( \widehat{\mathcal{G}}^{(2)}_{\psi} - \mathcal{G}^{(2)}_{\psi} \bigr), \\ &\Upsilon ^{(13)}:= \sup_{u \in U} \Lambda ^{+} \bigl( \widehat{\mathcal{G}}^{(3)}_{\psi} - \mathcal{G}^{(3)}_{\psi} \bigr). \end{aligned}$$
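The two elementary matrix inequalities used in the chain above can be checked directly: with \(\Lambda ^{+}\) the largest-eigenvalue functional, \(\Lambda ^{+}(A) = \max_{|y|=1} y^{\top} A y\) (cf. Lemma 6.1 and [26, Example 5.6.6]),

$$\begin{aligned} \Lambda ^{+}(A+B) & = \max_{ \vert y \vert =1} y^{\top}(A+B) y \leq \max_{ \vert y \vert =1} y^{\top}A y + \max_{ \vert y \vert =1} y^{\top}B y = \Lambda ^{+}(A) + \Lambda ^{+}(B), \\ \Lambda ^{+}(A) & = \Lambda ^{+} \bigl((A-B)+B \bigr) \leq \Lambda ^{+}(A-B) + \Lambda ^{+}(B), \end{aligned}$$

so that \(\Lambda ^{+}(A) - \Lambda ^{+}(B) \leq \Lambda ^{+}(A-B)\), which justifies the first inequality in the estimate of \(\Upsilon ^{(1)}\), while the subadditivity applied twice justifies the second.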

The estimates of \(\Upsilon ^{(1i)}\), \(i=1,2,3\), are obtained in (6.16), (6.19), and (6.20), respectively, below; they show that for any \(\lambda \geq \max \{C_{2}, C_{4}\}\), where \(C_{2}\) and \(C_{4}\) are given below,

$$\begin{aligned} \lim_{\kappa \rightarrow \infty} \Upsilon ^{(1)} \leq \lim_{\kappa \rightarrow \infty} \bigl\{ \Upsilon ^{(11)} + \Upsilon ^{(12)} + \Upsilon ^{(13)} \bigr\} \leq 0. \end{aligned}$$
(6.15)

6.4.1 Estimate of \(\Upsilon ^{(11)}\)

By definition,

$$\begin{aligned} \widehat{\mathcal{G}}_{\psi}^{(1)} &= \begin{bmatrix} \widehat{q} + \frac{1}{2}\partial _{t} \widehat{h}_{\eta ,\lambda}^{ \prime }- d(\breve{a}^{\prime},\Gamma ) + \langle D_{(\breve{a}, \breve{b})}(\widehat{h}_{\eta ,\lambda}^{\prime }+ \zeta _{\kappa}^{ \prime}), \widehat{f}(t^{\prime},\breve{a}^{\prime},u) \rangle & 0 \\ 0 & 0 \end{bmatrix}, \\ \mathcal{G}_{\psi}^{(1)} &= \begin{bmatrix} -q - \frac{1}{2}\partial _{t} h_{\eta ,\lambda}^{\prime }- d(a^{ \prime},\Gamma ) - \langle D_{(a,b)}(h_{\eta ,\lambda}^{\prime }+ \zeta _{\kappa}^{\prime}), \widehat{f}(t^{\prime},a^{\prime},u) \rangle & 0 \\ 0 & 0 \end{bmatrix}, \end{aligned}$$

which implies (note that \(\widehat{q} + q = 0\))

$$\begin{aligned} \Upsilon ^{(11)} ={}& \sup_{u \in U} \max \biggl\{ \frac{1}{2} \bigl( \partial _{t} \widehat{h}_{ \eta ,\lambda}^{\prime }+ \partial _{t} h_{\eta ,\lambda}^{\prime} \bigr) + \bigl\langle D_{(\breve{a},\breve{b})} \bigl(\widehat{h}_{\eta ,\lambda}^{ \prime }+ \zeta _{\kappa}^{\prime} \bigr), \widehat{f} \bigl(t^{\prime}, \breve{a}^{ \prime},u \bigr) \bigr\rangle \\ &{} + \bigl\langle D_{(a,b)} \bigl(h_{\eta ,\lambda}^{\prime }+ \zeta _{\kappa}^{\prime} \bigr), \widehat{f} \bigl(t^{\prime},a^{\prime},u \bigr) \bigr\rangle + \bigl(d \bigl(a^{\prime},\Gamma \bigr) - d \bigl( \breve{a}^{\prime}, \Gamma \bigr) \bigr) ,0 \biggr\} . \end{aligned}$$

We have

$$\begin{aligned} & \frac{1}{2} \bigl(\partial _{t} \widehat{h}_{\eta ,\lambda}^{\prime }+ \partial _{t} h_{\eta ,\lambda}^{\prime} \bigr) \\ &\quad = - \frac{\eta}{2} \lambda e^{-\lambda t^{\prime}} \bigl(1+ \bigl\vert a^{\prime} \bigr\vert ^{2} + b^{\prime} \bigr) + \bigl(t^{\prime}-\tilde{t} \bigr) - \frac{\eta \lambda e^{-\lambda t^{\prime}} }{4} \bigl( \bigl\vert a^{\prime }- \tilde{a} \bigr\vert ^{2} + \bigl\vert b^{\prime }-\tilde{b} \bigr\vert ^{4} \bigr) \\ & \qquad{} - \frac{\eta}{2} \lambda e^{-\lambda t^{\prime}} \bigl(1 + \bigl\vert \breve{a}^{\prime} \bigr\vert ^{2} + \breve{b}^{\prime } \bigr) \rightarrow - \eta \lambda e^{-\lambda \tilde{t}} \bigl(1 + \vert \tilde{a} \vert ^{2} + \tilde{b} \bigr)\quad \text{as }\kappa \rightarrow \infty \text{ due to (6.11)}, \end{aligned}$$

and using the Cauchy-Schwarz inequality together with Assumptions 1 and 2,

$$\begin{aligned} & \bigl\langle D_{(\breve{a},\breve{b})} \bigl(\widehat{h}_{\eta ,\lambda}^{ \prime }+ \zeta _{\kappa}^{\prime} \bigr), \widehat{f} \bigl(t^{\prime}, \breve{a}^{ \prime},u \bigr) \bigr\rangle + \bigl\langle D_{(a,b)} \bigl(h_{\eta ,\lambda}^{\prime }+ \zeta _{\kappa}^{\prime} \bigr), \widehat{f} \bigl(t^{\prime},a^{\prime},u \bigr) \bigr\rangle \\ &\qquad \leq C_{1} \eta e^{-\lambda t^{\prime}} \bigl( 1 + \bigl\vert a^{\prime} \bigr\vert ^{2} + \bigl\vert \breve{a}^{\prime} \bigr\vert ^{2} + b^{\prime }+ \breve{b}^{\prime} \bigr) ~ \rightarrow C_{2} \eta e^{-\lambda \tilde{t}} \bigl(1 + \vert \tilde{a} \vert ^{2} + \tilde{b} \bigr), \\ &\quad \text{as $\kappa \rightarrow \infty $ due to (6.11)}. \end{aligned}$$

Moreover, from Assumption 3,

$$\begin{aligned} \bigl\vert d \bigl(a^{\prime},\Gamma \bigr) - d \bigl( \breve{a}^{\prime},\Gamma \bigr) \bigr\vert \leq C \bigl\vert a^{ \prime }- \breve{a}^{\prime} \bigr\vert \rightarrow 0 \quad \text{as }\kappa \rightarrow \infty \text{ due to (6.11)}. \end{aligned}$$

Hence,

$$\begin{aligned} \lim_{\kappa \rightarrow \infty} \Upsilon ^{(11)} \leq \max \bigl\{ (- \lambda + C_{2})\eta e^{-\lambda \tilde{t}} \bigl(1+ \vert \tilde{a} \vert ^{2} + \tilde{b} \bigr),0 \bigr\} , \end{aligned}$$

and for any \(\lambda > 0\) with \(\lambda \geq C_{2}\), we have

$$\begin{aligned} \lim_{\kappa \rightarrow \infty} \Upsilon ^{(11)} \leq 0. \end{aligned}$$
(6.16)

6.4.2 Estimate of \(\Upsilon ^{(12)}\)

By definition,

$$\begin{aligned} &\widehat{\mathcal{G}}^{(2)}_{\psi}= \begin{bmatrix} \frac{1}{2} \operatorname{Tr} (\sigma \sigma ^{\top }(t^{\prime},\breve{a}^{ \prime},u) \widehat{P}_{(11)} ) & \frac{1}{2} \psi (\breve{b}^{ \prime}) \widehat{P}_{(12)}^{\top }\sigma (t^{\prime},\breve{a}^{ \prime},u) \\ \frac{1}{2} \psi (\breve{b}^{\prime})\sigma ^{\top}(t^{\prime}, \breve{a}^{\prime},u) \widehat{P}_{(12)} & \frac{1}{2} \psi ^{2}( \breve{b}^{\prime}) \widehat{P}_{(22)} I_{r} \end{bmatrix}, \\ &\mathcal{G}^{(2)}_{\psi}= \begin{bmatrix} - \frac{1}{2} \operatorname{Tr} (\sigma \sigma ^{\top }(t^{\prime},a^{\prime},u ) P_{(11)} ) & - \frac{1}{2} \psi (b^{\prime}) P_{(12)}^{\top } \sigma (t^{\prime},a^{\prime},u) \\ - \frac{1}{2} \psi (b^{\prime}) \sigma ^{\top}(t^{\prime},a^{\prime},u) P_{(12)} & - \frac{1}{2} \psi ^{2}(b^{\prime}) P_{(22)} I_{r} \end{bmatrix}. \end{aligned}$$

Let

$$\begin{aligned} \Delta := \begin{bmatrix} \sigma ^{\top }(t^{\prime}, a^{\prime},u) & 0 \\ 0 & \psi (b^{\prime}) \end{bmatrix},\qquad \breve{\Delta} := \begin{bmatrix} \sigma ^{\top }(t^{\prime}, \breve{a}^{\prime},u) & 0 \\ 0 & \psi (\breve{b}^{\prime}) \end{bmatrix}. \end{aligned}$$

Using (6.13) and Assumption 1, together with the Cauchy-Schwarz inequality, we can show that for any \(z \in \mathbb{R}^{r+1}\),

$$\begin{aligned} z^{\top} \begin{bmatrix} \Delta & \breve{\Delta} \end{bmatrix} \begin{bmatrix} P & 0 \\ 0 & \widehat{P} \end{bmatrix} \begin{bmatrix} \Delta ^{\top} \\ \breve{\Delta}^{\top} \end{bmatrix} z & \leq 3 \kappa z^{\top} \begin{bmatrix} \Delta & \breve{\Delta} \end{bmatrix} \begin{bmatrix} I_{n+1} & -I_{n+1} \\ -I_{n+1} & I_{n+1} \end{bmatrix} \begin{bmatrix} \Delta ^{\top} \\ \breve{\Delta}^{\top} \end{bmatrix} z \\ & \leq 3 \kappa \bigl\Vert \Delta - \breve{\Delta} \bigr\Vert _{F}^{2} \vert z \vert ^{2} \leq 3 \kappa C^{2} \bigl( \bigl\vert a^{\prime }- \breve{a}^{\prime} \bigr\vert ^{2} + \bigl\vert b^{\prime }- \breve{b}^{\prime} \bigr\vert ^{2} \bigr) \vert z \vert ^{2}. \end{aligned}$$
(6.17)
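The middle steps of (6.17) rest on two purely algebraic facts: the identity \([\Delta \ \breve{\Delta}] \begin{bmatrix} I & -I \\ -I & I \end{bmatrix} [\Delta \ \breve{\Delta}]^{\top} = (\Delta - \breve{\Delta})(\Delta - \breve{\Delta})^{\top}\), and the Frobenius bound \(z^{\top} M M^{\top} z \leq \Vert M \Vert _{F}^{2} |z|^{2}\). A small numerical sanity check of these two facts (random matrices, hypothetical dimensions \(n=3\), \(r=2\); not part of the proof) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 3, 2  # hypothetical dimensions for the sanity check

# D and Db play the roles of the (r+1) x (n+1) matrices Delta, Delta-breve
D = rng.standard_normal((r + 1, n + 1))
Db = rng.standard_normal((r + 1, n + 1))

# Identity: [D  Db] [[I, -I], [-I, I]] [D  Db]^T = (D - Db)(D - Db)^T
I = np.eye(n + 1)
J = np.block([[I, -I], [-I, I]])
stacked = np.hstack([D, Db])
lhs = stacked @ J @ stacked.T
rhs = (D - Db) @ (D - Db).T
assert np.allclose(lhs, rhs)

# Frobenius bound: z^T (D-Db)(D-Db)^T z = |(D-Db)^T z|^2 <= ||D-Db||_F^2 |z|^2
z = rng.standard_normal(r + 1)
quad = z @ rhs @ z
bound = np.linalg.norm(D - Db, "fro") ** 2 * (z @ z)
assert quad <= bound + 1e-12
```

Combining the identity with the matrix inequality from (6.13) gives exactly the chain displayed in (6.17).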

For \(j \in \{1,\ldots ,r\}\), let \(z_{(j)} := \begin{bmatrix} \hat{z}_{(j)}^{\top }& z_{j} \end{bmatrix} ^{\top }\in \mathbb{R}^{r+1}\), where \(z_{j} \in \mathbb{R}\) and \(\hat{z}_{(j)}\) is an r-dimensional vector with the jth entry being \(\hat{z} \in \mathbb{R}\) and all other entries being zero, i.e., \(\hat{z}_{(j)} := [0\ \cdots \ 0\ \hat{z}\ 0 \ \cdots \ 0]^{\top}\). Then

$$\begin{aligned} & \frac{1}{2} z_{(j)}^{\top } \bigl(\Delta P \Delta ^{\top }+ \breve{\Delta} \widehat{P} \breve{ \Delta}^{\top} \bigr) z_{(j)} \\ &\quad = \frac{1}{2} \hat{z}^{2} \bigl( \sigma ^{\top } \bigl(t^{\prime}, \breve{a}^{\prime},u \bigr) \widehat{P}_{(11)} \sigma \bigl(t^{\prime},\breve{a}^{ \prime},u \bigr) + \sigma ^{\top } \bigl(t^{\prime},a^{\prime},u \bigr) P_{(11)} \sigma \bigl(t^{\prime},a^{\prime},u \bigr) \bigr)_{jj} \\ & \qquad{} + \hat{z} \bigl( \psi \bigl(\breve{b}^{\prime} \bigr) \widehat{P}_{(12)}^{ \top }\sigma \bigl(t^{\prime}, \breve{a}^{\prime},u \bigr) + \psi \bigl(b^{\prime} \bigr) P_{(12)}^{ \top }\sigma \bigl(t^{\prime},a^{\prime},u \bigr) \bigr)_{j} z_{j} \\ & \qquad {}+ \frac{1}{2} z_{j}^{2} \bigl( \psi ^{2} \bigl(\breve{b}^{\prime} \bigr) \widehat{P}_{(22)} + \psi ^{2} \bigl(b^{\prime} \bigr) P_{(22)} \bigr) \\ &\quad \leq \frac{3}{2} \kappa C^{2} \bigl( \bigl\vert a^{\prime }- \breve{a}^{\prime} \bigr\vert ^{2} + \bigl\vert b^{\prime }- \breve{b}^{\prime} \bigr\vert ^{2} \bigr) \bigl(\hat{z}^{2} +z_{j}^{2} \bigr), \end{aligned}$$
(6.18)

where the inequality follows from (6.17). In (6.18) and below, \((\cdot )_{j}\) and \((\cdot )_{jj}\) denote the jth component of a vector and the (j,j) entry of a matrix, respectively.

Let

$$\begin{aligned} y := \begin{bmatrix} \hat{z} & y_{2}^{\top } \end{bmatrix}^{\top},\qquad y_{2} := \begin{bmatrix} z_{1} & \cdots & z_{r} \end{bmatrix}^{\top}. \end{aligned}$$

Using (6.18), we can show that

$$\begin{aligned} & y^{\top } \bigl( \widehat{\mathcal{G}}^{(2)}_{\psi} - \mathcal{G}^{(2)}_{ \psi} \bigr) y \\ & \quad = \frac{1}{2} \sum_{j=1}^{r} \hat{z}^{2} \bigl( \sigma ^{\top } \bigl(t^{ \prime}, \breve{a}^{\prime},u \bigr) \widehat{P}_{(11)} \sigma \bigl(t^{\prime}, \breve{a}^{\prime},u \bigr) + \sigma ^{\top } \bigl(t^{\prime},a^{\prime},u \bigr) P_{(11)} \sigma \bigl(t^{\prime},a^{\prime},u \bigr) \bigr)_{jj} \\ & \qquad{} + \sum_{j=1}^{r} \hat{z} \bigl( \psi \bigl(\breve{b}^{\prime} \bigr) \widehat{P}_{(12)}^{\top } \sigma \bigl(t^{\prime},\breve{a}^{\prime},u \bigr) + \psi \bigl(b^{\prime} \bigr) P_{(12)}^{\top }\sigma \bigl(t^{\prime},a^{\prime},u \bigr) \bigr)_{j} z_{j} \\ & \qquad{} + \frac{1}{2} \sum_{j=1}^{r} z_{j}^{2} \bigl( \psi ^{2} \bigl( \breve{b}^{\prime} \bigr) \widehat{P}_{(22)} + \psi ^{2} \bigl(b^{\prime} \bigr) P_{(22)} \bigr) \\ &\quad \leq \frac{3}{2} \kappa C^{2} \bigl( \bigl\vert a^{\prime }- \breve{a}^{\prime} \bigr\vert ^{2} + \bigl\vert b^{\prime }- \breve{b}^{\prime} \bigr\vert ^{2} \bigr) r \vert y \vert ^{2}, \end{aligned}$$

which, together with the arbitrariness of \(\hat{z}\) and \(z_{j}\), \(j \in \{1,\ldots ,r\}\), leads to

$$\begin{aligned} \max_{ \vert y \vert =1} y^{\top } \bigl( \widehat{ \mathcal{G}}^{(2)}_{\psi} - \mathcal{G}^{(2)}_{\psi} \bigr) y \leq \frac{3}{2} r \kappa C^{2} \bigl( \bigl\vert a^{ \prime }- \breve{a}^{\prime} \bigr\vert ^{2} + \bigl\vert b^{\prime }- \breve{b}^{\prime} \bigr\vert ^{2} \bigr). \end{aligned}$$

Hence, in view of (6.11) and the definition of \(\Lambda ^{+}\) (see Lemma 6.1 and [26, Example 5.6.6]), we have

$$\begin{aligned} \lim_{\kappa \rightarrow \infty} \Upsilon ^{(12)} \leq 0. \end{aligned}$$
(6.19)

6.4.3 Estimate of \(\Upsilon ^{(13)}\)

By definition, we have

$$\begin{aligned} &\widehat{\mathcal{G}}^{(3)}_{\psi}= \begin{bmatrix} \frac{1}{2}\partial _{t} \widehat{h}_{\eta ,\lambda}^{\prime }+ \eta e^{- \lambda t^{\prime}} \operatorname{Tr} (\sigma \sigma ^{\top }(t^{\prime}, \breve{a}^{\prime},u) ) & 0 \\ 0 & 0_{r \times r} \end{bmatrix}, \\ &\mathcal{G}^{(3)}_{\psi} = \begin{bmatrix} - \frac{1}{2}\partial _{t} h_{\eta ,\lambda}^{\prime }- \frac{3}{2} \eta e^{-\lambda t^{\prime}} \operatorname{Tr} (\sigma \sigma ^{\top }(t^{ \prime},a^{\prime},u ) ) & 0 \\ 0 & 0_{r \times r} \end{bmatrix}, \end{aligned}$$

which implies

$$\begin{aligned} \Upsilon ^{(13)} = {}& \sup_{u \in U} \max \biggl\{ \frac{1}{2} \bigl(\partial _{t} \widehat{h}_{\eta ,\lambda}^{\prime }+ \partial _{t} h_{\eta ,\lambda}^{ \prime} \bigr) + \eta e^{-\lambda t^{\prime}} \operatorname{Tr} \bigl(\sigma \sigma ^{ \top } \bigl(t^{\prime},\breve{a}^{\prime},u \bigr) \bigr) \\ &{} + \frac{3}{2}\eta e^{-\lambda t^{\prime}} \operatorname{Tr} \bigl( \sigma \sigma ^{\top } \bigl(t^{\prime},a^{\prime},u \bigr) \bigr),0 \biggr\} . \end{aligned}$$

Note that from Assumption 1,

$$\begin{aligned} & \biggl\vert \eta e^{-\lambda t^{\prime}} \operatorname{Tr} \bigl(\sigma \sigma ^{ \top } \bigl(t^{\prime},\breve{a}^{\prime},u \bigr) \bigr) + \frac{3 \eta e^{-\lambda t^{\prime}}}{2} \operatorname{Tr} \bigl(\sigma \sigma ^{ \top } \bigl(t^{\prime},a^{\prime},u \bigr) \bigr) \biggr\vert \\ & \quad = \biggl\vert \eta e^{-\lambda t^{\prime}} \bigl\Vert \sigma \bigl(t^{\prime},\breve{a}^{ \prime},u \bigr) \bigr\Vert _{F}^{2} + \frac{3 \eta e^{-\lambda t^{\prime}}}{2} \bigl\Vert \sigma \bigl(t^{\prime},a^{\prime},u \bigr) \bigr\Vert _{F}^{2} \biggr\vert \\ &\quad \leq C_{3} \eta e^{-\lambda t^{\prime}} \bigl(1 + \bigl\vert a^{\prime } \bigr\vert ^{2} + \bigl\vert \breve{a}^{\prime} \bigr\vert ^{2} \bigr) ~ \rightarrow C_{4} \eta e^{- \lambda \tilde{t}} \bigl(1 + \vert \tilde{a} \vert ^{2} \bigr)\quad \text{as }\kappa \rightarrow \infty \text{ due to (6.11)}, \end{aligned}$$

and as shown above,

$$\begin{aligned} & \frac{1}{2} \bigl(\partial _{t} \widehat{h}_{\eta ,\lambda}^{\prime }+ \partial _{t} h_{\eta ,\lambda}^{\prime} \bigr) \rightarrow - \eta \lambda e^{- \lambda \tilde{t}} \bigl(1 + \vert \tilde{a} \vert ^{2} + \tilde{b} \bigr)\quad \text{as }\kappa \rightarrow \infty \text{ due to (6.11)}. \end{aligned}$$

Hence,

$$\begin{aligned} \lim_{\kappa \rightarrow \infty} \Upsilon ^{(13)} & \leq \max \bigl\{ (C_{4} - \lambda ) \eta e^{-\lambda \tilde{t}} \bigl(1 + \vert \tilde{a} \vert ^{2} + \tilde{b} \bigr), 0 \bigr\} , \end{aligned}$$

and if we choose \(\lambda > 0\) with \(\lambda \geq C_{4}\), then

$$\begin{aligned} \lim_{\kappa \rightarrow \infty} \Upsilon ^{(13)} & \leq 0. \end{aligned}$$
(6.20)

6.5 Estimate of \(\Upsilon ^{(2)}\)

In view of the definition of \(H^{(21)}_{\delta}\),

$$\begin{aligned} \Upsilon ^{(2)} = \sup_{u \in U, \beta \in G^{2}} \bigl\{ \Upsilon ^{(21)} + \Upsilon ^{(22)} \bigr\} , \end{aligned}$$

where

$$\begin{aligned} \Upsilon ^{(21)}:={}& {-} \int _{E_{\delta}} \left [ \phi ^{\prime} \bigl(t^{ \prime}, \breve{a}^{\prime }+ \chi \bigl(t^{\prime}, \breve{a}^{\prime},u,e \bigr), \breve{b}^{\prime }+ \beta (e) \bigr) - \phi ^{\prime} \bigl(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime} \bigr)\vphantom{\begin{bmatrix} \chi (t^{\prime}, \breve{a}^{\prime},u,e) \\ \beta (e) \end{bmatrix}} \right . \\ &\left .{} - \left \langle D \phi ^{\prime} \bigl(t^{\prime}, \breve{a}^{\prime}, \breve{b}^{\prime} \bigr), \begin{bmatrix} \chi (t^{\prime}, \breve{a}^{\prime},u,e) \\ \beta (e) \end{bmatrix} \right \rangle \right ] \pi (\mathrm{d} e), \\ \Upsilon ^{(22)}:={}& \int _{E_{\delta}} \left [ \phi \bigl(t^{\prime},a^{ \prime }+ \chi \bigl(t^{\prime},a^{\prime},u,e \bigr), b^{\prime }+ \beta (e) \bigr) - \phi \bigl(t^{\prime},a^{\prime},b^{\prime} \bigr)\vphantom{\begin{bmatrix} \chi (t^{\prime},a^{\prime},u,e) \\ \beta (e) \end{bmatrix}} \right . \\ &{} -\left . \left \langle D \phi \bigl(t^{\prime},a^{\prime},b^{\prime} \bigr), \begin{bmatrix} \chi (t^{\prime},a^{\prime},u,e) \\ \beta (e) \end{bmatrix} \right \rangle \right ] \pi (\mathrm{d} e). \end{aligned}$$

Let \(\chi ^{\prime}(u,e) := \chi (t^{\prime},a^{\prime},u,e)\). By the second-order Taylor expansion with integral remainder and the uniform boundedness of \(D^{2}\phi \), together with \((x+y)^{2} \leq 2x^{2} + 2y^{2}\), we have

$$\begin{aligned} \Upsilon ^{(22)} \leq{}& \int _{E_{\delta}} \int _{0}^{1} (1-z) \bigl\Vert D^{2} \phi \bigl(t^{\prime},a^{\prime }+ z \chi ^{\prime}(u,e), b^{ \prime }+ z \beta (e) \bigr) \bigr\Vert _{F} \\ &{}\times \bigl( \bigl\vert \chi ^{\prime}(u,e) \bigr\vert + \bigl\vert \beta (e) \bigr\vert \bigr)^{2} \,\mathrm{d} z \,\pi ( \mathrm{d} e) \\ \leq{}& C \biggl( \int _{E_{\delta}} \bigl\vert \chi ^{\prime}(u,e) \bigr\vert ^{2} \pi (\mathrm{d} e) + \int _{E_{\delta}} \bigl\vert \beta (e) \bigr\vert ^{2} \pi ( \mathrm{d} e) \biggr). \end{aligned}$$

Then the regularity of χ in Assumption 1 and the fact that \(\beta \in G^{2}(E,\mathcal{B}(E),\pi ;\mathbb{R})\) can be taken to be bounded (see Remark 3.1) imply that \(\lim_{\delta \downarrow 0} \Upsilon ^{(22)} \leq 0\). The same argument, applied to \(\phi ^{\prime}\), shows that \(\lim_{\delta \downarrow 0} \Upsilon ^{(21)} \leq 0\).

Hence, we have

$$\begin{aligned} \lim_{\delta \downarrow 0} \Upsilon ^{(2)} \leq 0. \end{aligned}$$
(6.21)
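The second-order Taylor bound behind the estimate of \(\Upsilon ^{(22)}\) above, namely \(|\phi (x+h) - \phi (x) - \langle D\phi (x), h \rangle | \leq \frac{1}{2} \sup \Vert D^{2}\phi \Vert \, |h|^{2}\), can be sanity-checked numerically on a sample smooth function (a hypothetical φ, not the test function of the proof):

```python
import numpy as np

# Hypothetical smooth phi: R^2 -> R with globally bounded Hessian:
# phi(x) = sin(x1) + cos(x2), so D^2 phi = diag(-sin x1, -cos x2) and
# its operator norm is at most 1 everywhere.
phi = lambda x: np.sin(x[0]) + np.cos(x[1])
grad = lambda x: np.array([np.cos(x[0]), -np.sin(x[1])])

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.standard_normal(2)
    h = rng.standard_normal(2)
    remainder = abs(phi(x + h) - phi(x) - grad(x) @ h)
    # Taylor with integral remainder: |R| <= (1/2) * sup||D^2 phi|| * |h|^2
    assert remainder <= 0.5 * 1.0 * (h @ h) + 1e-12
```

In the proof, this quadratic remainder is what makes the integrals over \(E_{\delta}\) vanish as \(\delta \downarrow 0\), since the small-jump part of the Lévy measure integrates \(|\chi |^{2}\) and \(|\beta |^{2}\).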

6.6 Estimate of \(\Upsilon ^{(3)}\)

Recall (6.12):

$$\begin{aligned} \Psi _{\nu ;\eta ,\lambda}^{\kappa} (t,a,b,\breve{a},\breve{b}) ={}& \bigl( \underline{W}_{\nu}(t,a,b) - h_{\eta ,\lambda}(t,a,b) \bigr) \\ &{} - \bigl(\overline{W}(t,\breve{a},\breve{b}) + \widehat{h}_{\eta ,\lambda}(t,\breve{a}, \breve{b}) \bigr) - \zeta _{\kappa}(a,b,\breve{a},\breve{b}), \end{aligned}$$

from which we have

$$\begin{aligned} \underline{W}_{\nu}(t,a,b) - \overline{W}(t, \breve{a},\breve{b}) ={} & \Psi _{\nu ;\eta ,\lambda}^{\kappa} (t,a,b, \breve{a},\breve{b}) \\ &{} + h_{\eta ,\lambda}(t,a,b) + \widehat{h}_{\eta ,\lambda}(t, \breve{a}, \breve{b}) + \zeta _{\kappa}(a,b,\breve{a},\breve{b}). \end{aligned}$$
(6.22)

Let \(\chi ^{\prime}(u,e) := \chi (t^{\prime},a^{\prime},u,e)\) and \(\breve{\chi}^{\prime}(u,e) := \chi (t^{\prime},\breve{a}^{\prime},u,e)\). Since \((t^{\prime}, a^{\prime}, b^{\prime},\breve{a}^{\prime},\breve{b}^{ \prime})\) is the maximum point of \(\Psi _{\nu ;\eta ,\lambda}^{\kappa}\), it follows from (6.22) and the definition of \(\Upsilon ^{(3)}\) that

$$\begin{aligned} \Upsilon ^{(3)} & \leq \sup_{u \in U, \beta \in G^{2} } \bigl\{ \Upsilon ^{(31)} + \Upsilon ^{(32)} + \Upsilon ^{(33)} \bigr\} , \end{aligned}$$

where

$$\begin{aligned}& \begin{aligned} \Upsilon ^{(31)} :={}& \int _{E_{\delta}^{C}} \bigl[h_{\eta ,\lambda} \bigl(t^{ \prime}, a^{\prime }+ \chi ^{\prime}(u,e),b^{\prime }+ \beta (e) \bigr) - h_{ \eta ,\lambda} \bigl(t^{\prime},a^{\prime},b^{\prime} \bigr) \bigr] \pi (\mathrm{d} e) \\ &{} - \int _{E_{\delta}^{C}} \left \langle D_{(a,b)} h_{\eta ,\lambda}^{ \prime}, \begin{bmatrix} \chi ^{\prime}(u,e) \\ \beta (e) \end{bmatrix} \right \rangle \pi (\mathrm{d} e), \end{aligned} \\& \begin{aligned} \Upsilon ^{(32)} :={}& \int _{E_{\delta}^{C}} \bigl[\widehat{h}_{ \eta ,\lambda} \bigl(t^{\prime},\breve{a}^{\prime }+ \breve{\chi}^{\prime}(u,e), \breve{b}^{\prime }+ \beta (e) \bigr) - \widehat{h}_{\eta ,\lambda} \bigl(t^{ \prime},\breve{a}^{\prime},\breve{b}^{\prime} \bigr) \bigr] \pi (\mathrm{d} e) \\ &{} - \int _{E_{\delta}^{C}} \left \langle D_{(\breve{a},\breve{b})} \widehat{h}_{\eta ,\lambda}^{\prime}, \begin{bmatrix} \breve{\chi}^{\prime}(u,e) \\ \beta (e) \end{bmatrix} \right \rangle \pi (\mathrm{d} e), \end{aligned} \\& \begin{aligned} \Upsilon ^{(33)} :={}& \int _{E_{\delta}^{C}} \bigl[ \zeta _{\kappa} \bigl(a^{ \prime }+ \chi ^{\prime}(u,e), b^{\prime }+ \beta (e), \breve{a}^{ \prime }+ \breve{\chi}^{\prime}(u,e), \breve{b}^{\prime }+ \beta (e) \bigr) - \zeta _{\kappa} \bigl(a^{\prime},b^{\prime},\breve{a}^{\prime}, \breve{b}^{ \prime} \bigr) \bigr] \pi (\mathrm{d} e) \\ & {} - \int _{E_{\delta}^{C}} \left \langle D_{(a,b)} \zeta ^{\prime}_{ \kappa}, \begin{bmatrix} \chi ^{\prime}(u,e) \\ \beta (e) \end{bmatrix} \right \rangle \pi (\mathrm{d} e) + \int _{E_{\delta}^{C}} \left \langle - D_{( \breve{a},\breve{b})} \zeta _{\kappa}^{\prime}, \begin{bmatrix} \breve{\chi}^{\prime}(u,e) \\ \beta (e) \end{bmatrix} \right \rangle \pi (\mathrm{d} e). \end{aligned} \end{aligned}$$

From (6.14), we can show that

$$\begin{aligned} \Upsilon ^{(31)} ={}& \int _{E_{\delta}^{C}} \int _{0}^{1} (1-z) \operatorname{Tr} \left ( \begin{bmatrix} 3 \eta e^{-\lambda t^{\prime}} I_{n} & 0 \\ 0 & 0 \end{bmatrix} \right . \\ &{} \times \left . \begin{bmatrix} \chi ^{\prime }(\chi ^{\prime})^{\top}(u,e) & \chi ^{\prime}(u,e) \beta ^{\top}(e) \\ \beta (e) (\chi ^{\prime})^{\top}(u,e) & \beta (e) \beta ^{\top}(e) \end{bmatrix} \right ) \, \mathrm{d} z\, \pi (\mathrm{d} e) \\ \leq {}& Cn \eta e^{-\lambda t^{\prime}} \bigl(1 + \bigl\vert a^{\prime} \bigr\vert ^{2} \bigr), \end{aligned}$$
(6.23)

and similarly,

$$\begin{aligned} \Upsilon ^{(32)} = {}& \int _{E_{\delta}^{C}} \int _{0}^{1} (1-z) \operatorname{Tr} \left ( \begin{bmatrix} 2 \eta e^{-\lambda t^{\prime}} I_{n} & 0 \\ 0 & 0 \end{bmatrix} \right . \\ &{} \times \left . \begin{bmatrix} \breve{\chi}^{\prime }(\breve{\chi}^{\prime})^{\top}(u,e) & \breve{\chi}^{\prime}(u,e) \beta ^{\top}(e) \\ \beta (e) (\breve{\chi}^{\prime})^{\top}(u,e) & \beta (e) \beta ^{ \top}(e) \end{bmatrix} \right ) \,\mathrm{d} z\, \pi (\mathrm{d} e) \\ \leq{} & Cn \eta e^{-\lambda t^{\prime}} \bigl(1 + \bigl\vert \breve{a}^{\prime} \bigr\vert ^{2} \bigr). \end{aligned}$$
(6.24)

Moreover, using (6.14) and Assumption 1,

$$\begin{aligned} \Upsilon ^{(33)} & = \frac{\kappa}{2} \int _{E_{\delta}^{C}} \bigl\vert \chi ^{ \prime}(u,e) - \breve{\chi}^{\prime}(u,e) \bigr\vert ^{2} \pi (\mathrm{d} e) \\ & \leq \frac{\kappa}{2} \bigl\vert a^{\prime }- \breve{a}^{\prime} \bigr\vert ^{2} \rightarrow 0\quad \text{as }\kappa \rightarrow \infty \text{ due to (6.11)}. \end{aligned}$$
(6.25)
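The equality in (6.25) can be traced as follows. Assuming the standard quadratic doubling function \(\zeta _{\kappa}(a,b,\breve{a},\breve{b}) = \frac{\kappa}{2}(|a-\breve{a}|^{2} + |b-\breve{b}|^{2})\) (which is consistent with (6.17) and (6.25)), the same \(\beta (e)\) shifts \(b^{\prime}\) and \(\breve{b}^{\prime}\) simultaneously, so the \(b\)-part of \(\zeta _{\kappa}\) is unchanged and

$$\begin{aligned} & \zeta _{\kappa} \bigl(a^{\prime }+ \chi ^{\prime}(u,e), b^{\prime }+ \beta (e), \breve{a}^{\prime }+ \breve{\chi}^{\prime}(u,e), \breve{b}^{\prime }+ \beta (e) \bigr) - \zeta _{\kappa} \bigl(a^{\prime},b^{\prime},\breve{a}^{\prime},\breve{b}^{\prime} \bigr) \\ &\quad = \kappa \bigl\langle a^{\prime }- \breve{a}^{\prime}, \chi ^{\prime}(u,e) - \breve{\chi}^{\prime}(u,e) \bigr\rangle + \frac{\kappa}{2} \bigl\vert \chi ^{\prime}(u,e) - \breve{\chi}^{\prime}(u,e) \bigr\vert ^{2}. \end{aligned}$$

The inner-product term is offset exactly by the two first-order terms in the definition of \(\Upsilon ^{(33)}\), since \(D_{(a,b)} \zeta ^{\prime}_{\kappa} = - D_{(\breve{a},\breve{b})} \zeta _{\kappa}^{\prime}\), leaving only the quadratic term integrated over \(E_{\delta}^{C}\).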

Hence, (6.23)-(6.25), together with (6.11), imply that

$$\begin{aligned} \lim_{\eta \downarrow 0} \lim_{\kappa \rightarrow \infty} \lim_{ \delta \downarrow 0} \Upsilon ^{(3)} \leq 0. \end{aligned}$$
(6.26)

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Notes

  1. We mention that our comparison function should be different from that for the case without jumps in [11] to deal with both the local and nonlocal parts of the HJB equation in (1.6).

  2. Since the initial time in this paper is \(t \in [0,T]\), the martingale representation theorem is applied with initial time \(t \in [0,T]\).

  3. In the Appendix, we discuss the existence of optimal controls for jump-diffusion systems under some mild assumptions of the coefficients.

  4. Note that [6, Theorem 8.1.3] is stated as follows. Let \(\mathcal{X}\) be a complete separable metric space, \((\Theta ,\mathcal{M})\) a measurable space, Ξ a measurable set-valued map from Θ to closed non-empty subsets of \(\mathcal{X}\). Then there exists a measurable selection of Ξ (see [6, Definition 8.1.2]).

  5. Note that \(\widehat{a} = \begin{bmatrix} a^{\top }& b \end{bmatrix} ^{\top }\in \mathbb{R}^{n+1}\), by which we denote \(W(t,\widehat{a}) := W(t,a,b)\) and \(W(t^{\prime},\widehat{a}^{\prime}) := W(t^{\prime},a^{\prime},b^{ \prime})\).

References

  1. Altarovici, A., Bokanowski, O., Zidani, H.: A general Hamilton-Jacobi framework for non-linear state-constrained control problems. ESAIM Control Optim. Calc. Var. 19, 337–357 (2013)

    Article  MathSciNet  MATH  Google Scholar 

  2. Applebaum, D.: Lévy Processes and Stochastic Calculus, 2nd edn. Cambridge University Press, Cambridge (2009)

    Book  MATH  Google Scholar 

  3. Aubin, J.P.: Viability Theory. Birkhäuser, Basel (1991)

    MATH  Google Scholar 

  4. Aubin, J.P., Bayen, A.M., Saint-Pierre, P.: Viability Theory, 2nd edn. Springer, Berlin (2011)

    Book  MATH  Google Scholar 

  5. Aubin, J.P., Da Prato, G.: The viability theorem for stochastic differential inclusions. Stoch. Anal. Appl. 16(1), 1–15 (1998)

    Article  MathSciNet  MATH  Google Scholar 

  6. Aubin, J.P., Frankowska, H.: Set-Valued Analysis. Birkhäuser, Basel (1990)

    MATH  Google Scholar 

  7. Bardi, M., Koike, S., Soravia, P.: Pursuit-evation games with state constraints: dynamic programming and discrete-time approximations. Discrete Contin. Dyn. Syst. 6(2), 361–380 (2000)

    Article  MATH  Google Scholar 

  8. Barles, G., Buckdahn, R., Backward, P.E.: Stochastic differential equations and integral-partial differential equations. Stoch. Stoch. Rep. 60, 57–83 (1997)

    Article  MathSciNet  MATH  Google Scholar 

  9. Barles, G., Imbert, C.: Second-order elliptic integro-differential equations: viscosity solutions’ theory revisited. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 25(3), 567–585 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  10. Bokanowski, O., Bruder, B., Maroso, S., Zidani, H.: Numerical approximation for a superreplication problem under gamma constraints. SIAM J. Numer. Anal. 47(3), 2289–2320 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  11. Bokanowski, O., Picarelli, A., State-Constrained, Z.H.: Stochastic optimal control problems via reachability approach. SIAM J. Control Optim. 54(5), 2568–2593 (2016)

    Article  MathSciNet  MATH  Google Scholar 

  12. Bouchard, B.: Stochastic targets with mixed diffusion processes and viscosity solutions. In: Stochastic Processes and Their Applications, vol. 101, pp. 273–302 (2002)

    MATH  Google Scholar 

  13. Bouchard, B., Dang, N.M.: Optimal control versus stochastic target problems: an equivalence result. Syst. Control Lett. 61, 343–346 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  14. Bouchard, B., Elie, R., Imbert, C.: Optimal control under stochastic target constraints. SIAM J. Control Optim. 48(5), 3501–3531 (2010)

    Article  MathSciNet  MATH  Google Scholar 

  15. Bouchard, B., Weak, N.M.: Dynamic programming for generalized state constraints. SIAM J. Control Optim. 50(6), 3344–3373 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  16. Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Springer, Berlin (2011)

    Book  MATH  Google Scholar 

  17. Brüder, B.: Super-replication of European Options with a Derivative Asset under Constrained Finite Variation Strategies (2005). Hal-00012183

  18. Buckdahn, R., Hu, Y., Li, J.: Stochastic representation for solutions of Isaacs’ type integral-partial differential equations. In: Stochastic Processes and Their Applications, vol. 121, pp. 2715–2750 (2011)

    MATH  Google Scholar 

  19. Buckdahn, R., Peng, S., Quincampoix, M., Rainer, C.: Existence of stochastic control under state constraints. C. R. Acad. Sci., Sér. 1 Math. 327(1), 17–22 (1998)

    MathSciNet  MATH  Google Scholar 

  20. Capuzzo-Dolcetta, I., Lions, P.L.: Hamilton-Jacobi equations with state constraints. Trans. Am. Math. Soc. 318(2), 643–683 (1990)

    Article  MathSciNet  MATH  Google Scholar 

  21. Crandall, M.G., Ishii, H., Lions, P.L.: User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. 27, 1–67 (1992)

    Article  MathSciNet  MATH  Google Scholar 

  22. Frankowska, H., Mazzola, M.: Discontinuous solutions of Hamilton-Jacobi-Bellman equation under state constraints. Calc. Var. Partial Differ. Equ. 46, 725–747 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  23. Hanane, B.G., Mezerdi, B.: The relaxed stochastic maximum principle in optimal control of diffusions with controlled jumps. Afr. Stat. 12(2), 1287–1312 (2017)

    MathSciNet  MATH  Google Scholar 

  24. Haussmann, U.G., Lepeltier, J.P.: On the existence of optimal controls. SIAM J. Control Optim. 28(4), 851–902 (1990)

    Article  MathSciNet  MATH  Google Scholar 

  25. Hermosilla, C., Vinter, R., Zidani, H.: Hamilton-Jacobi-Bellman equations for optimal control processes with convex state constraints. Syst. Control Lett. 109, 30–36 (2017)

    Article  MathSciNet  MATH  Google Scholar 

  26. Horn, R., Johnson, C.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2013)

    MATH  Google Scholar 

  27. Ishii, H., Loreti, P.: A class of stochastic optimal control problems with state constraint. Indiana Univ. Math. J. 51(5), 1167–1196 (2002)

    Article  MathSciNet  MATH  Google Scholar 

  28. Katsoulakis, M.A.: Viscosity solutions of second order fully nonlinear elliptic equations with state constraints. Indiana Univ. Math. J. 43(2), 493–519 (1994)

    Article  MathSciNet  MATH  Google Scholar 

  29. Kushner, H.: Jump-diffusions with controlled jumps: existence and numerical methods. J. Math. Anal. Appl. 249, 179–198 (2000)

    Article  MathSciNet  MATH  Google Scholar 

  30. Moreau, L.: Stochastic target problem with controlled loss in jump diffusions models. SIAM J. Control Optim. 49(6), 2577–2607 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  31. Peng, S., Zhu, X.H.: The viability property of controlled jump diffusion processes. Acta Math. Sin. 24(8), 1351–1368 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  32. Pham, H.: Optimal stopping of controlled jump diffusion processes: a viscosity solution approach. J. Math. Syst. Estim. Control 8(1), 1–27 (1998)

    MathSciNet  Google Scholar 

  33. Soner, H.M.: Optimal control with state constraint I. SIAM J. Control Optim. 24(3), 552–561 (1986)

    Article  MathSciNet  MATH  Google Scholar 

  34. Soner, H.M.: Optimal control with state-space constraint II. SIAM J. Control Optim. 24(6), 1110–1122 (1986)

    Article  MathSciNet  MATH  Google Scholar 

  35. Soner, H.M.: Optimal control of jump-Markov processes and viscosity solutions. In: Fleming, W., Lions, P.L. (eds.) Stochastic Differential Systems, Stochastic Control Theory and Applications. The IMA Volumes in Mathematics and Its Applications., vol. 10, pp. 501–511. Springer, Berlin (1988)

    Chapter  MATH  Google Scholar 

  36. Soner, H.M., Stochastic, T.N.: Target problems, dynamic programming and viscosity solutions. SIAM J. Control Optim. 41(2), 404–424 (2002)

    Article  MathSciNet  MATH  Google Scholar 

  37. Wagner, D.H.: Survey of measurable selection theorems. SIAM J. Control Optim. 15(5), 859–903 (1977)

    Article  MathSciNet  MATH  Google Scholar 

  38. Yong, J., Zhou, X.Y.: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, Berlin (1999)

    Book  MATH  Google Scholar 

  39. Zhu, X.H., Liu, G.Z.: Viability property of jump diffusion processes on manifolds. Acta Math. Appl. Sin. 32(2), 349–354 (2016)

Acknowledgements

The author would like to thank the two anonymous referees for their careful reading and suggestions, which have helped to improve the paper. The author particularly thanks Reviewer 2, who provided detailed comments on the application of the measurable selection theorem in Proposition 3.1 and pointed out some errors in the proof of Lemma 6.4 in an earlier version of the manuscript.

Funding

This work was supported in part by the Technology Innovation Program (20018112) funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea), in part by the National Research Foundation of Korea (NRF) Grant funded by the Ministry of Science and ICT, South Korea (NRF-2021R1A2C2094350, NRF-2017R1A5A1015311), and in part by Institute of Information & communications Technology Planning and Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01373).

Author information

Contributions

JM: problem formulation, analysis, writing, and editing. The author read and approved the final manuscript.

Corresponding author

Correspondence to Jun Moon.

Ethics declarations

Competing interests

The author declares no competing interests.

Appendix:  Existence of optimal controls for jump-diffusion systems

In Theorem 3.2, an additional assumption of the existence of optimal controls for the auxiliary optimal control problem in (3.7) is needed. Here, we show that a certain class of stochastic optimal control problems for jump-diffusion systems with unbounded control sets admits an optimal control. The proof of the main result in this appendix (see Theorem A.1) extends the case of SDEs in a Brownian setting without jumps studied in [11, Appendix A] and [38, Theorem 5.2, Chap. 2] to the framework of jump-diffusion systems.

As in (3.7), consider

$$\begin{aligned} \begin{aligned} &W(t,a,b) := \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A},\beta \in \mathcal{B}} \overline{J}(t,a,b;u,\alpha , \beta ), \\ &\overline{J}(t,a,b;u,\alpha ,\beta ) = \mathbb{E} \biggl[ \rho _{2} \bigl(x_{T}^{t,a;u}, y_{T;t,a,b}^{u,\alpha ,\beta} \bigr) + \int _{t}^{T} \rho _{1} \bigl(s,x_{s}^{t,a;u}, u_{s} \bigr) \, \mathrm{d} s \biggr], \end{aligned} \end{aligned}$$
(A.1)

and subject to (we recall (2.1) and (3.1))

$$\begin{aligned} & \textstyle\begin{cases} \mathrm{d} x_{s}^{t,a;u} = f(s,x_{s}^{t,a;u},u_{s})\,\mathrm{d} s + \sigma (s,x_{s}^{t,a;u},u_{s}) \,\mathrm{d} B_{s} \\ \hphantom{\mathrm{d} x_{s}^{t,a;u} = }{} + \int _{E} \chi (s,x_{s-}^{t,a;u},u_{s},e) \tilde{N}(\mathrm{d} e,\mathrm{d} s), \quad x_{t}^{t,a;u} = a, \\ \mathrm{d} y_{s; t,a,b}^{u,\alpha ,\beta} = - l(s,x_{s}^{t,a;u},u_{s}) \,\mathrm{d} s + \alpha _{s}^{\top }\,\mathrm{d} B_{s} + \int _{E} \beta _{s}(e) \tilde{N}(\mathrm{d} e,\mathrm{d} s),\quad y_{t; t,a,b}^{u,\alpha ,\beta} = b. \end{cases}\displaystyle \end{aligned}$$
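For concreteness, the controlled pair \((x, y)\) above can be simulated with a first-order Euler–Maruyama scheme in which the compensated Poisson integral is approximated by a compound-Poisson sum. In the sketch below, the coefficients f, σ, χ, l, the jump intensity, and the mark distribution are hypothetical stand-ins chosen only to make the example self-contained and Lipschitz in the sense of Assumption 4; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

t0, T, N = 0.0, 1.0, 2000
dt = (T - t0) / N
a, b = 1.0, 0.0        # initial conditions x_t = a, y_t = b
u = 0.5                # a fixed constant control, for illustration only
lam = 2.0              # jump intensity; marks e drawn from N(0, 1)

# Hypothetical Lipschitz coefficients (stand-ins, not the paper's data)
f = lambda s, x, u: -x + u           # drift
sigma = lambda s, x, u: 0.3          # diffusion
chi = lambda s, x, u, e: 0.1 * e     # jump amplitude; mean-zero marks make
                                     # the compensator contribution vanish
l = lambda s, x, u: x ** 2 + u ** 2  # running cost

x, y = a, b
for k in range(N):
    s = t0 + k * dt
    # y accumulates minus the running cost (taking alpha = 0, beta = 0)
    y -= l(s, x, u) * dt
    # Brownian increment and compound-Poisson jump increment for x
    dB = rng.normal(0.0, np.sqrt(dt))
    n_jumps = rng.poisson(lam * dt)
    jumps = sum(chi(s, x, u, rng.standard_normal()) for _ in range(n_jumps))
    x += f(s, x, u) * dt + sigma(s, x, u) * dB + jumps
```

Averaging the terminal values \((x_T, y_T)\) over many such sample paths would give a Monte Carlo estimate of the cost \(\overline{J}\) in (A.1) for this fixed \((u, \alpha, \beta)\).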

Assumption 4

  1. (i)

    For \(\iota := f, \sigma , \chi , l\) with \(\iota = \begin{bmatrix} \iota _{1}^{\top }& \cdots & \iota _{n}^{\top}\end{bmatrix} ^{\top}\), ι satisfies Assumptions 1 and 2, and is independent of x. Moreover, \(\iota _{i}\), \(i=1,\ldots ,n\), is convex and Lipschitz continuous in u with the Lipschitz constant L;

  2. (ii)

    \(\rho _{1}\) and \(\rho _{2}\) are convex, nondecreasing and bounded from below;

  3. (iii)

    \(U \subset \mathbb{R}^{m}\) is a compact and convex set.

Note that Assumption 4 is different from that in [11, Appendix A] and [38, Theorem 5.2, Chap. 2]. We have the following result:

Theorem A.1

Suppose that Assumption 4 holds. Then (A.1) admits an optimal solution \((\widehat{u}, \widehat{\alpha}, \widehat{\beta}) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\), i.e.,

$$\begin{aligned} W(t,a,b) = \overline{J}(t,a,b;\widehat{u}, \widehat{\alpha}, \widehat{\beta}) = \mathop{\inf_{u \in \mathcal{U}}}_{\alpha \in \mathcal{A},\beta \in \mathcal{B}} \overline{J}(t,a,b;u, \alpha , \beta ). \end{aligned}$$

Proof

Since \(\rho _{1}\) and \(\rho _{2}\) are bounded from below, (A.1) is well defined. Let \(\{(\widehat{u}_{k}, \widehat{\alpha}_{k}, \widehat{\beta}_{k})\}_{k \geq 1} \subset \mathcal{U} \times \mathcal{A} \times \mathcal{B}\) be a minimizing sequence of controls, i.e., \(\overline{J}(t,a,b; \widehat{u}_{k}, \widehat{\alpha}_{k}, \widehat{\beta}_{k}) \xrightarrow{k \rightarrow \infty} W(t,a,b)\). Note that \(\mathcal{L}_{\mathbb{F}}^{2}\) and \(\mathcal{G}_{\mathbb{F}}^{2}\) are Hilbert spaces. Also, from Remark 3.1, \(\{(\widehat{\alpha}_{k},\widehat{\beta}_{k})\}_{k \geq 1}\) can be restricted to a sequence of controls bounded in the \(\mathcal{L}_{\mathbb{F}}^{2}\) and \(\mathcal{G}_{\mathbb{F}}^{2}\) norms, and U is compact by (iii) of Assumption 4. Hence, in view of [16, Theorem 3.18], we can extract a subsequence \(\{(\widehat{u}_{k_{i}},\widehat{\alpha}_{k_{i}},\widehat{\beta}_{k_{i}})\}_{i \geq 1}\) of \(\{(\widehat{u}_{k}, \widehat{\alpha}_{k}, \widehat{\beta}_{k}) \}_{k \geq 1}\) such that

$$\begin{aligned} (\widehat{u}_{k_{i}},\widehat{\alpha}_{k_{i}},\widehat{ \beta}_{k_{i}}) \xrightarrow{i \rightarrow \infty} (\widehat{u}, \widehat{\alpha}, \widehat{\beta})\quad \text{weakly in } \mathcal{L}_{\mathbb{F}}^{2} \times \mathcal{L}_{\mathbb{F}}^{2} \times \mathcal{G}_{\mathbb{F}}^{2}. \end{aligned}$$

Then for each \(\epsilon > 0\), there exists \(i^{\prime}\) such that for any \(i \geq i^{\prime}\),

$$\begin{aligned} \overline{J}(t,a,b;\widehat{u}_{k_{i}},\widehat{\alpha}_{k_{i}}, \widehat{\beta}_{k_{i}}) \leq W(t,a,b) + \frac{\epsilon}{2}. \end{aligned}$$
(A.2)

By Mazur’s lemma [16, Corollary 3.8], there exist convex combinations of the subsequence above,

$$\begin{aligned} (\widetilde{u}_{k_{i}},\widetilde{ \alpha}_{k_{i}},\widetilde{\beta}_{k_{i}}) &:= \sum _{p \geq 1} \theta _{k_{i} p}(\widehat{u}_{k_{i} + p}, \widehat{\alpha}_{k_{i} + p},\widehat{\beta}_{k_{i} + p}), \quad \theta _{k_{i} p} \geq 0, \sum_{p\geq 1} \theta _{k_{i} p} = 1, \end{aligned}$$
(A.3)

such that

$$\begin{aligned} (\widetilde{u}_{k_{i}},\widetilde{ \alpha}_{k_{i}},\widetilde{\beta}_{k_{i}}) \xrightarrow{i \rightarrow \infty} (\widehat{u}, \widehat{\alpha}, \widehat{\beta})\quad \text{strongly in } \mathcal{L}_{\mathbb{F}}^{2} \times \mathcal{L}_{\mathbb{F}}^{2} \times \mathcal{G}_{\mathbb{F}}^{2}, \end{aligned}$$
(A.4)

where \((\widehat{u}, \widehat{\alpha},\widehat{\beta}) \in \mathcal{U} \times \mathcal{A} \times \mathcal{B}\). Then from (A.3) and (i) of Assumption 4, we have

$$\begin{aligned} x_{s}^{t,a;\widetilde{u}_{k_{i}}} \preceq \sum_{p \geq 1} \theta _{k_{i}p} x_{s}^{t,a;\widehat{u}_{k_{i}+p}},\qquad y_{s;t,a,b}^{\widetilde{u}_{k_{i}}, \widetilde{\alpha}_{k_{i}}, \widetilde{\beta}_{k_{i}}} \leq \sum_{p \geq 1} \theta _{k_{i} p} y_{s;t,a,b}^{\widehat{u}_{k_{i}+p}, \widehat{\alpha}_{k_{i}+p}, \widehat{\beta}_{k_{i}+p}},\quad s \in [t,T], \end{aligned}$$

where \(\preceq \) denotes the componentwise inequality. Using the Lipschitz property of f, σ, χ, and l in u (see (i) of Assumption 4) and the proof of Lemma 2.1, (A.4) implies that the following sequence converges strongly in the \(\mathcal{L}_{\mathbb{F}}^{\infty}\)-norm sense:

$$\begin{aligned} \bigl(x_{s}^{t,a;\widetilde{u}_{k_{i}}}, y_{s;t,a,b}^{\widetilde{u}_{k_{i}}, \widetilde{\alpha}_{k_{i}}, \widetilde{\beta}_{k_{i}}} \bigr) \xrightarrow{i \rightarrow \infty} \bigl(x_{s}^{t,a;\widehat{u}}, y_{s;t,a,b}^{ \widehat{u}, \widehat{\alpha}, \widehat{\beta}} \bigr). \end{aligned}$$

By continuity of \(\overline{J}\), for each \(\epsilon > 0\), there exists \(i^{\prime \prime}\) such that for any \(i \geq i^{\prime \prime}\),

$$\begin{aligned} \overline{J}(t,a,b;\widehat{u},\widehat{\alpha},\widehat{\beta}) \leq \overline{J}(t,a,b; \widetilde{u}_{k_{i}},\widetilde{ \alpha}_{k_{i}}, \widetilde{\beta}_{k_{i}}) + \frac{\epsilon}{2}. \end{aligned}$$

This, together with (ii) of Assumption 4 and (A.2), shows that for any \(i \geq \max \{i^{\prime}, i^{\prime \prime}\}\),

$$\begin{aligned} \overline{J}(t,a,b;\widehat{u},\widehat{\alpha},\widehat{\beta}) & \leq \overline{J}(t,a,b; \widetilde{u}_{k_{i}},\widetilde{\alpha}_{k_{i}}, \widetilde{\beta}_{k_{i}}) + \frac{\epsilon}{2} \\ & \leq \mathbb{E} \biggl[ \rho _{2} \biggl( \sum _{p \geq 1} \theta _{k_{i}p} x_{T}^{t,a;\widehat{u}_{k_{i}+p}}, \sum_{p \geq 1} \theta _{k_{i} p} y_{T;t,a,b}^{ \widehat{u}_{k_{i}+p}, \widehat{\alpha}_{k_{i}+p}, \widehat{\beta}_{k_{i}+p}} \biggr) \\ &\quad {} + \int _{t}^{T} \rho _{1} \biggl(s, \sum_{p \geq 1} \theta _{k_{i}p} x_{s}^{t,a; \widehat{u}_{k_{i}+p}}, \sum_{p \geq 1} \theta _{k_{i} p}\widehat{u}_{k_{i} + p,s} \biggr) \,\mathrm{d} s \biggr] + \frac{\epsilon}{2} \\ & \leq \sum_{p \geq 1} \theta _{k_{i} p} \overline{J}(t,a,b; \widehat{u}_{k_{i} + p},\widehat{\alpha}_{k_{i} + p}, \widehat{\beta}_{k_{i} + p}) + \frac{\epsilon}{2} \\ & \leq W(t,a,b) + \epsilon . \end{aligned}$$

Since ϵ is arbitrary, we have the desired result. This completes the proof. □
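The crux of the proof is the passage from weak to strong convergence via Mazur's lemma. The following small numerical sketch (a toy illustration of our own, not part of the paper) shows the phenomenon in \(L^{2}(0,1)\): the orthogonal sequence \(u_{k}(t) = \sin (2\pi k t)\) converges weakly to 0 while keeping norm \(1/\sqrt{2}\), yet its Cesàro (convex) combinations converge to 0 strongly.

```python
import numpy as np

# Discretize L^2(0,1) on midpoints; for these frequencies the quadrature
# reproduces the continuous inner products exactly (up to float rounding).
N = 4096
dt = 1.0 / N
t = (np.arange(N) + 0.5) * dt

def l2_norm(g):
    return np.sqrt(np.sum(g * g) * dt)

def u(k):
    return np.sin(2.0 * np.pi * k * t)

# Each u_k has L^2-norm 1/sqrt(2), yet u_k -> 0 only weakly: the norms
# do not shrink, so there is no strong convergence along the sequence.
norm_uk = l2_norm(u(50))  # = 1/sqrt(2), independent of k

# Mazur's lemma: convex combinations converge strongly to the weak limit.
# Here the u_k are pairwise orthogonal, so ||(1/K) sum_{k<=K} u_k||^2 = 1/(2K).
def cesaro_norm(K):
    return l2_norm(sum(u(k) for k in range(1, K + 1)) / K)
```

With these choices, `cesaro_norm(K)` equals \(1/\sqrt{2K}\), which tends to 0 as K grows, mirroring the role of the combinations \((\widetilde{u}_{k_i}, \widetilde{\alpha}_{k_i}, \widetilde{\beta}_{k_i})\) in (A.3)–(A.4).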

Remark A.1

As in [11, Appendix A], we can also use the following assumption in Theorem A.1 instead of Assumption 4:

  1. (i)

    \(f(s,x,u) = A_{s} x + B_{s} u\), \(\sigma (s,x,u) = C_{s} x + D_{s} u\), \(\chi (s,x,u,e) = E_{s} x + F_{s} u + r_{s}(e)\) and \(l(s,x,u) = H_{s} x + K_{s} u\), where A, B, C, D, E, F, r, H and K are deterministic and bounded coefficients with appropriate dimensions;

  2. (ii)

    \(\rho _{1}\) and \(\rho _{2}\) are convex and bounded from below;

  3. (iii)

    \(U \subset \mathbb{R}^{m}\) is a compact and convex set.

Unlike the case of SDEs driven by Brownian motion alone, relatively few results exist on the existence of optimal controls for jump-diffusion systems. Some results based on the relaxed-control approach can be found in [23, 29]. It would be interesting to study the existence of optimal controls for jump-diffusion systems in the original strong sense, as was done for SDEs driven by Brownian motion in [24].

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Moon, J. Backward reachability approach to state-constrained stochastic optimal control problem for jump-diffusion models. Adv Cont Discr Mod 2022, 68 (2022). https://doi.org/10.1186/s13662-022-03747-z

