A regression-based Monte Carlo method to solve two-dimensional forward backward stochastic differential equations
Advances in Difference Equations volume 2021, Article number: 207 (2021)
Abstract
The purpose of this paper is to investigate the numerical solution of two-dimensional forward backward stochastic differential equations (FBSDEs). Based on the Fourier cos-cos transform, approximations of the conditional expectations and their errors are studied via conditional characteristic functions. A new numerical scheme is proposed that uses the least-squares regression-based Monte Carlo method to solve for the initial value of the FBSDEs. Finally, a numerical experiment in European option pricing is carried out to test the efficiency and stability of this scheme.
1 Introduction
In this paper, we consider the numerical solutions to the two-dimensional decoupled forward backward stochastic differential equations (FBSDEs):
$$ X_{t} = x_{0} + \int _{0}^{t}\mu (X_{s})\,ds + \int _{0}^{t}\sigma (X_{s})\,dW_{s}, \quad 0\leq t\leq T, $$
(1)
$$ Y_{t} = g(X_{T}) + \int _{t}^{T} f(s,Y_{s},Z_{s})\,ds - \int _{t}^{T} Z_{s}\,dW_{s}, \quad 0\leq t\leq T, $$
(2)
where \(X_{t}=(X^{1}_{t},X^{2}_{t})^{*}, 0\leq t \leq T\), is a two-dimensional forward component and \(Y_{t}, 0\leq t \leq T\), is a one-dimensional backward component. \(\mu (X_{t}) =(\mu _{1}(X^{1}_{t}),\mu _{2}(X^{2}_{t}))^{*}, \sigma (X_{t}) =\operatorname{diag}(\sigma _{1}(X^{1}_{t}),\sigma _{2}(X^{2}_{t}))\) are the drift and volatility terms. \(W_{t}=(W^{1}_{t},W^{2}_{t})^{*}, 0 \leq t \leq T\), is a standard two-dimensional Brownian motion defined on a filtered probability space \((\Omega ,\mathcal{F},\mathbf{P}, (\mathcal{F}_{t})_{0\leq t\leq T})\), where \(\mathcal{F}_{t}\) is the filtration generated by \(W_{t}\). Here, the operator \((\cdot )^{*}\) denotes the transpose of a vector.
Under standard conditions on f and g, Pardoux and Peng [1] proved that there exists a unique solution to nonlinear FBSDEs. However, it is often difficult to obtain analytic solutions, so numerical schemes are crucial. The key issue in such schemes is how to discretize the conditional expectations. Up to now, there have been many methods to solve this problem. Zhang et al. [2] constructed a sparse-grid Gauss–Hermite quadrature rule and hierarchical sparse-grid interpolation to approximate these conditional expectations. Fu et al. [3] gave a method of spectral sparse grid approximations to deal with high-dimensional conditional expectations. It is known that the Fourier transform is an important tool in option pricing, not only in the SDE framework but also in the ODE framework. With the Fourier cos transform, one can convert a conditional expectation into a series form; by constructing a function basis from the truncated series, one can approximate the conditional expectation. More efficient methods and fast algorithms follow this idea; for details, one can refer to [4–6]. These schemes are applied in many fields, such as option pricing [7–10], portfolio optimization [11, 12], and so on. Ruijter and Oosterlee [13] extended the Fourier cos method to two-dimensional FBSDEs, named the Fourier cos-cos method, and gave a numerical scheme for pricing European and Bermudan options under the GBM model and Heston stochastic volatility. Recently, Meng and Ding [14] investigated a Fourier sin-sin method, named the modified Fourier sin-sin method, to price rainbow options within two-dimensional BSDEs; numerical experiments showed that its convergence and efficiency were as expected. Inspired by this literature, we extend the idea to solving two-dimensional FBSDEs by using the Fourier cos-cos transform and least-squares Monte Carlo regression to obtain the numerical solution to FBSDEs (1) and (2).
This work supplements our previous work in [15] and can be extended to high-dimensional FBSDEs.
The paper is organized as follows. In Sect. 2, some assumptions on the FBSDEs are given to ensure the existence of a solution. For the discretization of forward equation (1), we use the classical Euler scheme, as used by Zhao et al. [16]; for backward equation (2), we use the theta scheme. In Sect. 3, we give the approximations of the conditional expectations arising from the discretization of backward equation (2), together with their error analysis. In Sect. 4, we present a numerical scheme based on least-squares Monte Carlo regression and provide an option-pricing example as a numerical experiment. In Sect. 5, we conclude our investigation.
2 Discretization of FBSDEs
In this section, we denote by \(L_{T}^{2}(\mathbf{R}^{2})\) the set of \(\mathcal{F}_{T}\)-measurable random variables \(X: \Omega \to \mathbf{R}^{2}\) which are square integrable, and by \(\mathcal{H}_{T}^{2}(\mathbf{R})\) the set of predictable processes \(\eta: \Omega \times [0, T] \to \mathbf{R}\) such that
$$ \mathbb{E} \biggl[ \int _{0}^{T} \vert \eta _{t} \vert ^{2}\,dt \biggr] < \infty , $$
where \(\vert \cdot \vert \) is the standard Euclidean norm. The terminal condition \(Y_{T}\) in equation (2) is \(\mathcal{F}_{T}\)-measurable and square integrable. We make the following assumptions:

(A1)
The function \(g(x)\) is globally Lipschitz continuous.

(A2)
The functions \(\mu (x)\) and \(\sigma (x)\) are uniformly Lipschitz continuous and satisfy a linear growth condition.

(A3)
The generator \(f(t,y,z)\) satisfies the following continuity condition:
$$ \bigl\vert f(t_{2},y_{2},z_{2}) - f(t_{1},y_{1},z_{1}) \bigr\vert \leq C_{f} \bigl( \vert t_{2}-t_{1} \vert ^{1/2} + \vert y_{2}-y_{1} \vert + \vert z_{2}-z_{1} \vert \bigr) $$for any \((t_{2},y_{2},z_{2}),(t_{1},y_{1},z_{1})\in [0,T]\times \mathbf{R} \times \mathbf{R}^{2}\), where \(C_{f}>0\) is a constant.
Assumptions (A1), (A2), and (A3) guarantee the existence and uniqueness of the solution \((X_{t},Y_{t},Z_{t})\) to FBSDEs (1)–(2). We are now in a position to discretize FBSDEs (1) and (2) by using the Euler scheme. Given a partition \(\Delta: 0 = t_{0}< t_{1}<\cdots <t_{M}=T\) with time steps \(\Delta t_{m} = t_{m}-t_{m-1}\), denote \(X_{m}= X_{t_{m}}, Y_{m}= Y_{t_{m}}, Z_{m}= Z_{t_{m}}\), and \(\Delta W_{m} = W_{t_{m}}-W_{t_{m-1}}\). The classical Euler discretization for FSDE (1) is
$$ X^{\Delta }_{m} = X^{\Delta }_{m-1} + \mu \bigl(X^{\Delta }_{m-1}\bigr)\Delta t_{m} + \sigma \bigl(X^{\Delta }_{m-1}\bigr)\Delta W_{m} $$
for \(m=1, \ldots , M\). On the time interval \([t_{m-1}, t_{m}]\), we rewrite BSDE (2) in the following form:
$$ Y_{t_{m-1}} = Y_{t_{m}} + \int _{t_{m-1}}^{t_{m}} f(s,Y_{s},Z_{s})\,ds - \int _{t_{m-1}}^{t_{m}} Z_{s}\,dW_{s}. $$
(3)
Considering that \(Y_{t}\) is an \((\mathcal{F}_{t})\)-adapted process, we take conditional expectations on both sides of equation (3) with respect to the filtration \(\mathcal{F}_{t_{m-1}}\) and obtain the iteration backward equation
where \(\mathbb{E}_{m-1}^{x}[\cdot ] = \mathbb{E} [\cdot \mid X^{\Delta }_{m-1}=x ]\). Multiplying both sides of (3) by \(\Delta W^{*}_{m}\) and taking conditional expectations, we have
Applying the theta discretization method to (4) and (5), we obtain a discrete solution \((Y^{\Delta }_{m-1}, Z^{\Delta }_{m-1})\) that approximates the solution \((Y_{t_{m-1}},Z_{t_{m-1}})\) of BSDE (2):
Here, \(\theta _{1}\) and \(\theta _{2}\) are two parameters of the theta discretization scheme. As a consequence of the Feynman–Kac theorem, the terminal values \(Y_{M}\) and \(Z_{M}\) are both deterministic functions of \(X^{\Delta }_{M}\), i.e., \(Y_{M} = g(X_{M})\) and \(Z_{M} = \nabla g(X_{{M}})\cdot \sigma (X_{{M}})\), where ∇ is the gradient operator with respect to the argument. Combining equations (6) and (7), we see that the solution \((Y^{\Delta }_{m-1}, Z^{\Delta }_{m-1})\) is represented through conditional expectations of the forms
for some function \(\upsilon (x)\). Among these, the first conditional expectation is one-dimensional and the second is two-dimensional. Motivated by the successful use of the Fourier cos-cos method for two-dimensional BSDEs, we use the Fourier transform to obtain approximation expressions for the above conditional expectations.
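As an illustration, the forward Euler discretization above can be sketched in a few lines of Python. This is a minimal sketch: the GBM-type drift and volatility coefficients, the parameter values, and the function names are illustrative choices, not fixed by the text.

```python
import numpy as np

def euler_paths(x0, mu, sigma, T, M, L, seed=0):
    """Simulate L Euler paths of the 2-D forward SDE
    X_m = X_{m-1} + mu(X_{m-1}) dt + sigma(X_{m-1}) dW_m,
    where mu(x) returns a length-2 drift vector and sigma(x) a 2x2
    diagonal volatility matrix (independent Brownian components)."""
    rng = np.random.default_rng(seed)
    dt = T / M
    X = np.empty((M + 1, L, 2))
    X[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=(M, L, 2))
    for m in range(M):
        for l in range(L):
            x = X[m, l]
            X[m + 1, l] = x + mu(x) * dt + sigma(x) @ dW[m, l]
    return X, dW

# Illustrative GBM coefficients: mu_j(x_j) = mu_j * x_j, diagonal sigma.
drift = lambda x: np.array([0.05, 0.05]) * x
vol = lambda x: np.diag(np.array([0.2, 0.2]) * x)

X, dW = euler_paths(x0=np.array([1.0, 1.0]), mu=drift, sigma=vol,
                    T=1.0, M=16, L=2000)
```

The double loop keeps the update visually close to the displayed scheme; in practice the inner loop over paths would be vectorized.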
3 Approximation of conditional expectation and error analysis
In this section, we give the approximations of the conditional expectations \(U(x),V(x)\) and their error analysis. First, we consider \(U(x)\). Let \(p(y\mid x)\) denote the conditional density function of \(X_{m}^{\Delta }\) given \(X_{m-1}^{\Delta }=x\). The symbol \(\sum \sum {}'\) in the theorems below means that the first term of each summation is weighted by one-half, and \(\operatorname{Re}\{\cdot \} \) denotes the real part of a complex number.
Theorem 3.1
Let \(\varphi (w_{1},w_{2}\mid x_{1},x_{2})\) be the conditional characteristic function of \(p(y_{1},y_{2}\mid x_{1}, x_{2})\), and denote \(\varphi (w_{1},w_{2}\mid 0,0)=\varphi _{\mathrm{levy}}(w_{1},w_{2})\). Then, for any rectangular area \(D=[a_{1}, b_{1}]\times [a_{2}, b_{2}] \subset \mathbf{R}^{2}\), the conditional expectation \(U(x)\) has the following expansion:
where
And
is a Fourier cosine coefficient of \(\upsilon (y_{1},y_{2})\).
Proof
For a truncated finite integration region D, we have
Applying the Fourier cos-cos transform to \(p(y_{1},y_{2}\mid x_{1},x_{2})\) on D, we have
where \(A_{k_{1},k_{2}}(x_{1},x_{2})\) is the Fourier cosine coefficient of \(p(y_{1},y_{2}\mid x_{1},x_{2})\):
Then we have
With the product-to-sum formula
$$ \cos u \cos v = \tfrac{1}{2} \bigl[ \cos (u+v) + \cos (u-v) \bigr], $$
the integral in equation (9) is transformed into the following form:
Substituting the above equation into (10), we obtain the form of equation (8). □
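To make Theorem 3.1 concrete, the following sketch evaluates a conditional expectation \(U(x)\) with a truncated cos-cos expansion. It is a minimal sketch under stated assumptions: the two components are taken conditionally independent and Gaussian (the log-price transition of a GBM), so the two-dimensional density coefficients factorize into one-dimensional terms recovered from the conditional characteristic functions; the test payoff, interval, and all parameter values are illustrative.

```python
import numpy as np

def cos_cos_expectation(v, phi1, phi2, ab1, ab2, N=48, ngrid=600):
    """U(x) = E[v(Y1,Y2) | x] via a 2-D Fourier cosine expansion.

    phi1, phi2: conditional characteristic functions of the two
    (independent) components; ab1 = (a1, b1), ab2 = (a2, b2) give the
    truncated rectangle D.  With independent components the density
    coefficients A_{k1,k2} factorise as F1[k1] * F2[k2]."""
    (a1, b1), (a2, b2) = ab1, ab2
    k = np.arange(N)
    u1, u2 = k * np.pi / (b1 - a1), k * np.pi / (b2 - a2)
    F1 = np.real(phi1(u1) * np.exp(-1j * u1 * a1))
    F2 = np.real(phi2(u2) * np.exp(-1j * u2 * a2))
    # Fourier cosine coefficients V_{k1,k2} of v on D (trapezoid rule).
    y1, y2 = np.linspace(a1, b1, ngrid), np.linspace(a2, b2, ngrid)
    w1 = np.full(ngrid, (b1 - a1) / (ngrid - 1)); w1[[0, -1]] *= 0.5
    w2 = np.full(ngrid, (b2 - a2) / (ngrid - 1)); w2[[0, -1]] *= 0.5
    C1 = np.cos(np.outer(u1, y1 - a1))            # shape (N, ngrid)
    C2 = np.cos(np.outer(u2, y2 - a2))
    Vk = (2 / (b1 - a1)) * (2 / (b2 - a2)) * \
        (C1 * w1) @ v(y1[:, None], y2[None, :]) @ (C2 * w2).T
    # The interval-length factors cancel between density and payoff
    # coefficients, leaving U = sum' sum' F1[k1] F2[k2] Vk[k1,k2].
    h = np.ones(N); h[0] = 0.5                    # primed-sum 1/2 weights
    return float((np.outer(h * F1, h * F2) * Vk).sum())

# Gaussian log-price transition of a GBM: Y_j | x_j ~ N(m_j, s^2).
mu_, sig, dt, x = 0.05, 0.2, 0.1, (0.0, 0.0)
m = [x[j] + (mu_ - 0.5 * sig**2) * dt for j in (0, 1)]
s = sig * np.sqrt(dt)
phi = [lambda w, mj=mj: np.exp(1j * w * mj - 0.5 * s**2 * w**2) for mj in m]
ab = [(mj - 10 * s, mj + 10 * s) for mj in m]
U = cos_cos_expectation(lambda y1, y2: np.cos(y1) * (1 + y2),
                        phi[0], phi[1], ab[0], ab[1])
# The separable test payoff has the closed-form expectation below.
exact = np.exp(-0.5 * s**2) * np.cos(m[0]) * (1 + m[1])
```

The separable test payoff is chosen only so that the result can be checked in closed form; the routine itself does not use separability of v.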
Next, we consider the twodimensional conditional expectation
The difficulty with \(V(x)\) lies in dealing with the Brownian increment \(\Delta W_{m}\). Since the components of \(\Delta W_{m}\) are independent, we can handle them separately. Denote
and assume that the given condition is \((X^{1,\Delta }_{m-1},X^{2,\Delta }_{m-1})=(x_{1},x_{2})\). Then, using the forward scheme for equation (1), we can rewrite it in the form
We find that
and
These integrals are similar to \(U(x)\) and can be calculated by the method of Theorem 3.1. Next, we directly give the expansions of \(V_{1}(x)\) and \(V_{2}(x)\).
Theorem 3.2
Under the assumptions of Theorem 3.1, for any rectangular area \(D=[a_{1}, b_{1}]\times [a_{2}, b_{2}] \subset \mathbf{R}^{2}\), the components of conditional expectation \(V(x)\) have the following expansions:
where
are Fourier cosine coefficients of \(\upsilon (y_{1},y_{2})\rho _{x_{j}}(y_{j})\), \(j=1,2\).
Remark 1
There are various approaches to approximating conditional expectations, such as polynomial basis functions, the Malliavin approach, and Monte Carlo sequence convergence (see [17–22]). Most of them consider time-space approximation, which requires much more computation time, especially for high-dimensional conditional expectations. The results in Theorems 3.1 and 3.2 show that the cosine expansions retain much information and offer advantages in dealing with high-dimensional FBSDEs.
Theorems 3.1 and 3.2 suggest how to approximate the conditional expectations. For suitable integers \(N_{1},N_{2}\), the conditional expectations \(U(x),V(x)\) can be approximated by the truncated sums
with the error
and
with the error
for \(j=1,2\).
Now, we give the error analysis of the approximations. Ruijter and Oosterlee [13] pointed out that the coefficients \(A_{k_{1},k_{2}}(x_{1},x_{2})\) usually decay faster than \(B_{k_{1},k_{2}}\). Thus, we find that the error \(\epsilon _{2}(x)\) converges exponentially in \(N_{1}\) and \(N_{2}\) for density functions in the class \(C^{\infty }([a_{1}, b_{1}]\times [a_{2}, b_{2}])\), i.e.,
for some positive constants \(P_{1}\) and \(\nu \), where \(N=\min \{N_{1},N_{2}\}\). If a density function has a discontinuity in one of its derivatives, then the error \(\epsilon _{2}(x)\) shows algebraic convergence, i.e.,
for some positive constants \(P_{2},\beta \) with \(\beta \geq N\), where \(N=\min \{N_{1},N_{2}\}\). On the other hand, according to [23], \(B_{k_{1},k_{2}}\) exhibits at least algebraic convergence, which gives the algebraic convergence rate of the Fourier series, i.e., for suitable positive constants \(N,n,P,Q\), we have
After interchanging the summation and integration, we rewrite \(\epsilon _{3}(x)\) in another form:
It then follows that
From (11)–(15), with a properly chosen truncation of the integration range, the overall error \(\epsilon (x)\) converges. By the same method, the overall errors \(\widetilde{\epsilon }_{j}(x)\ (j=1,2)\) also converge.
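The contrast between the exponential decay of the density coefficients and the algebraic decay of the payoff coefficients can be observed numerically. The sketch below computes one-dimensional Fourier cosine coefficients (the building block of the two-dimensional ones in the independent-component case) for a smooth Gaussian density and for a call-type payoff with a kink; the particular functions, interval, and thresholds are illustrative.

```python
import numpy as np

a, b, n = -5.0, 5.0, 4001
y = np.linspace(a, b, n)
w = np.full(n, (b - a) / (n - 1)); w[[0, -1]] *= 0.5   # trapezoid weights
k = np.arange(256)
C = np.cos(np.pi * np.outer(k, y - a) / (b - a))

def cos_coeffs(f):
    """Fourier cosine coefficients (2/(b-a)) * int_a^b f(y) cos(...) dy."""
    return (2.0 / (b - a)) * C @ (w * f)

density = np.exp(-0.5 * y**2) / np.sqrt(2.0 * np.pi)   # C^infinity density
payoff = np.maximum(y - 1.0, 0.0)                      # kink at the strike

A_k = np.abs(cos_coeffs(density))   # decays (near) exponentially in k
B_k = np.abs(cos_coeffs(payoff))    # decays only algebraically, ~ k^{-2}
```

The high-order coefficients of the smooth density fall below quadrature precision, while those of the kinked payoff remain orders of magnitude larger, which is the behavior behind the error bounds for \(\epsilon _{2}(x)\) and \(\epsilon _{3}(x)\) above.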
Therefore, if we choose a suitable region \([a_{1},b_{1}]\times [a_{2},b_{2}]\), the approximation errors can be well controlled, and we can use approximations (11) and (12) as substitutes for the conditional expectations. The key is the choice of basis functions. In the next section, we state our basis functions and employ the least-squares Monte Carlo regression method to solve FBSDEs (1) and (2) numerically.
4 Numerical experiment
In this section, we give the numerical scheme for FBSDEs (1) and (2) based on least-squares Monte Carlo regression and perform a numerical experiment in pricing a European option. In the following, we give the basis functions and the corresponding coefficients \(\alpha _{j,m}\) at time \(t_{m}\). The approximations of \(Y_{m},Z_{m}\) are represented by the truncated expansions.
First, we state the numerical scheme. Given the values \((Y_{m},Z_{m})\), we carry out the following least-squares regressions, using finite-dimensional basis functions, to approximate \(Y_{m-1}\) and \(Z_{m-1}\) at each time step:
Notice that if \(\theta =1\) in (16), the scheme reduces to the representation in Gobet et al. [24]. Many numerical experiments show that the theta scheme is of second-order convergence when \(\theta =0.5\); following this idea, we also take \(\theta = 0.5\). As for the choice of basis functions, Gobet et al. use hypercubes and global polynomials to test effectiveness and stability under the assumption that the assets follow geometric Brownian motions (GBMs). However, they only give one-dimensional numerical experiments, and we want to examine the stability and efficiency in a higher-dimensional setting. From Sect. 3, we can use conditional characteristic functions to express the basis functions.
Next, we specify the basis functions following Theorems 3.1 and 3.2. We assume that the underlying assets follow GBMs, i.e., \(\mu _{j}(x_{j}) = \mu _{j} x_{j}\) and \(\sigma _{j}(x_{j}) = \sigma _{j} x_{j}\) for \(j=1,2\). Then the basis functions with respect to \(U(x)\) are given by
and for \(V_{j}(x)\ (j=1,2)\), the basis functions are given by
Here,
for each \(k_{j} = 0,1,2,\ldots ,N-1\), \(j=1,2\). We combine the Monte Carlo method with Picard iterations to implement the procedure:

Simulations. Generate L independent simulations
$$ \bigl(X^{\Delta }_{m,l}\bigr)_{2 \leq m \leq M+1,1\leq l\leq L},\qquad (\Delta W_{m,l})_{1 \leq m \leq M-1,1\leq l\leq L}. $$
Initialization. The algorithm is initialized with \(Y^{\Delta }_{M,l}=g(X^{\Delta }_{M,l})\). At each step, the value \((Y^{\Delta }_{m},Z^{\Delta }_{m})\), represented via basis functions and the corresponding coefficients, is already known, and the coefficients of the basis functions at time \(t_{m-1}\) are computed by the least-squares method.

Backward iteration. Assume that \(Y^{\Delta ,I,I}_{m,L}\) is built with L simulations. Denote \(\alpha ^{r,i,I}_{m}=(\alpha ^{r,i,I}_{1,m},\alpha ^{r,i,I}_{2,m})\) and \(\widetilde{\Phi }_{m,k}(x)=(\widetilde{\Phi }^{1}_{m,k}(x), \widetilde{\Phi }^{2}_{m,k}(x))\). The symbol ⋆ denotes elementwise multiplication. Then perform the Picard iterations:

The initialization \(i= 0\) of the Picard iterations is set as \((Y^{\Delta ,0,I}_{m-1,l},Z^{\Delta ,0,I}_{m-1,l})=(0,0)\), i.e., \(\alpha ^{r,0,I}_{j,m} = 0, j=0,1,2\).

For \(i = 1,2,\ldots ,I\), the coefficients \(\alpha ^{r,i,I}_{j,m}\) are obtained iteratively as the argmin over \((\alpha _{0,m},\alpha _{1,m},\alpha _{2,m})\) of the quantity
$$\begin{aligned} &\frac{1}{L}\sum_{l=1}^{L} \Biggl[ Y^{\Delta }_{m,l} +0.5f\bigl(t_{m},Y^{ \Delta }_{m,l},Z^{\Delta }_{m,l} \bigr)\Delta t_{m} - 0.5Z^{\Delta }_{m,l} \Delta W_{m,l} \\ &\quad{}+ 0.5 f \Biggl(t_{m-1},\sum_{k_{1}=0}^{N-1} \sum_{k_{2}=0}^{N-1}{}' \alpha ^{1,i-1,I}_{0,m}\Phi _{m,k}(x),\sum _{k_{1}=0}^{N-1}\sum_{k_{2}=0}^{N-1}{}' \alpha ^{r,i-1,I}_{m}\star \widetilde{\Phi }_{m,k}(x) \Biggr)\Delta t_{m} \\ &\quad{}- \sum_{k_{1}=0}^{N-1} \sum _{k_{2}=0}^{N-1}{}'\alpha ^{1,i,I}_{0,m} \Phi _{m,k}(x) - 0.5 \Biggl(\sum_{k_{1}=0}^{N-1} \sum_{k_{2}=0}^{N-1}{}' \alpha ^{r,i,I}_{m}\star \widetilde{\Phi }_{m,k}(x) \Biggr) \Delta W_{m,l} \Biggr]^{2}. \end{aligned}$$

Take \(\alpha ^{r}_{j,m} = \alpha ^{r,I,I}_{j,m}\). Use the coefficients \(\alpha ^{r}_{j,m}\), \(j = 1,\ldots ,6\), to compute \(Y^{\Delta }_{m-1}\) and \(Z^{\Delta }_{m-1}\).

Initial value. Compute the initial value \((Y^{\Delta }_{0}, Z^{\Delta }_{0})\).
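A minimal sketch of the backward least-squares Monte Carlo loop follows. To stay short it makes several simplifying assumptions that differ from the scheme above: the explicit case \(\theta =1\) (so no Picard iteration is needed), a plain polynomial basis \(\{1, x_{1}, x_{2}\}\) in place of the cosine basis, and an illustrative linear driver \(f(t,y,z)=-ry\) with terminal condition \(g(x)=x_{1}+x_{2}\), for which the exact initial value \(Y_{0}=e^{(\mu -r)T}(x^{1}_{0}+x^{2}_{0})\) is available for checking.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_, sig, r = 0.05, 0.2, 0.03        # illustrative parameters
T, M, L = 1.0, 20, 5000
dt = T / M

# Euler paths of two independent GBMs started at X_0 = (1, 1).
X = np.ones((M + 1, L, 2))
for m in range(M):
    dW = rng.normal(0.0, np.sqrt(dt), size=(L, 2))
    X[m + 1] = X[m] * (1.0 + mu_ * dt + sig * dW)

def basis(x):                        # design matrix [1, x1, x2]
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1]])

Y = X[M, :, 0] + X[M, :, 1]          # terminal condition g(X_T)
for m in range(M, 1, -1):
    target = (1.0 - r * dt) * Y      # explicit theta = 1 step for f = -r*y
    coef, *_ = np.linalg.lstsq(basis(X[m - 1]), target, rcond=None)
    Y = basis(X[m - 1]) @ coef       # regression estimate of E_{m-1}[...]
Y0 = float(np.mean((1.0 - r * dt) * Y))  # X_0 deterministic: plain average
```

With \(\theta =0.5\) and a z-dependent driver, the regression target would also contain the \(Z\) terms and the Picard loop of the algorithm above; the structure of each regression step, however, is the same.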
Now we test the algorithm on an example: pricing a European option. We run the scheme \(S=50\) times and collect the value \(Y^{\Delta ,S}_{0}\) each time. The simulated values are denoted by \(\{Y^{\Delta ,S}_{0,s}: s=1,\ldots ,50 \}\), and their mean is denoted by
Following the literature [8], we choose \(a_{1}=a_{2}=a\) and \(b_{1}=b_{2}=b\), where
and \(\xi ^{i}_{j}\) denotes the jth cumulant of the random variable \(X^{i}_{T}\). Denote by \(e_{Y}= \vert Y_{0}-Y^{\Delta }_{0} \vert \) the absolute error of the numerical solution of Y. In the experiment, we present an application of our scheme to a financial problem, i.e., pricing a European option and its hedging strategy. We consider the pricing of a basket call option in the Black–Scholes model for an investor holding two kinds of assets. Denote by \(p_{t}\) the bond price and by \(X_{t}=(X^{1}_{t},X^{2}_{t})\) the prices of two independent stocks, which satisfy
with the initial conditions \(p_{0}=p, X_{0}=x_{0}=(x^{1}_{0},x^{2}_{0}), t\in [0,T]\). At time t, an investor has wealth \(y_{t}\) in hand. He invests \(\pi ^{i}_{t}\ (i=1,2)\) in the ith stock and \(y_{t}-(\pi ^{1}_{t}+\pi ^{2}_{t})\) in the bond. The processes \(y_{t}\) and \(\pi ^{i}_{t}\ (i=1,2)\) satisfy the following SDE:
Denote \(z^{i}_{t}=\sigma _{i}\pi ^{i}_{t}\ (i=1,2)\), then \((y_{t},z_{t})\) satisfies
with the terminal condition \(y_{T}=\max \{\sqrt{X^{1}_{T}X^{2}_{T}}-K,0\}\). If \(\mu _{i}=\mu ,\sigma _{i}=\sigma \), then the analytic solution is given by a two-dimensional Black–Scholes formula. In our numerical experiment, we set
The absolute error \(e_{Y}\) of experiments is listed in Table 1.
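The cumulant-based truncation interval can be sketched as follows. The rule \([a,b]=[\xi _{1}-L\sqrt{\xi _{2}+\sqrt{\xi _{4}}},\ \xi _{1}+L\sqrt{\xi _{2}+\sqrt{\xi _{4}}}]\) with scale \(L=10\) is an assumption here (a standard choice in the COS literature), not a formula fixed by the text above; the GBM parameters are illustrative.

```python
import numpy as np

def truncation_interval(xi1, xi2, xi4, Ltrunc=10.0):
    """COS-type truncation interval [a, b] built from cumulants:
    a, b = xi1 -/+ Ltrunc * sqrt(xi2 + sqrt(xi4))."""
    half = Ltrunc * np.sqrt(xi2 + np.sqrt(xi4))
    return xi1 - half, xi1 + half

# Cumulants of log X_T for a GBM with X_0 = 1, mu = 0.05, sigma = 0.2, T = 1:
mu_, sig, T = 0.05, 0.2, 1.0
xi1 = (mu_ - 0.5 * sig**2) * T     # mean of log X_T
xi2 = sig**2 * T                   # variance of log X_T
xi4 = 0.0                          # Gaussian: fourth cumulant vanishes
a, b = truncation_interval(xi1, xi2, xi4)
```

For a Gaussian log-price, ten standard deviations leave a negligible tail mass outside \([a,b]\), which is what the error analysis of Sect. 3 requires of the truncated region.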
From this table, we find that the error is acceptable. Generally speaking, as \(M,N\) increase, the scheme remains stable but the computation time grows. In this example, \(N=20\) and \(M=17\) already give an acceptable error.
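For the equal-parameter case, the analytic reference value can be computed in closed form, because the geometric mean \(\sqrt{X^{1}_{T}X^{2}_{T}}\) of two independent GBMs with equal volatility is itself lognormal, reducing the payoff to a one-dimensional Black–Scholes computation. The sketch below shows this reduction; the parameter values are illustrative, not the ones used in Table 1.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def geometric_basket_call(x1, x2, K, r, sig, T):
    """European call on sqrt(X1_T * X2_T) for two independent GBMs with
    equal volatility sig, priced under the risk-neutral measure.
    log G_T = (log X1_T + log X2_T) / 2 is Gaussian with
    mean m = log sqrt(x1*x2) + (r - sig^2/2) T and variance v = sig^2 T/2."""
    m = 0.5 * (math.log(x1) + math.log(x2)) + (r - 0.5 * sig**2) * T
    v = 0.5 * sig**2 * T
    d1 = (m + v - math.log(K)) / math.sqrt(v)
    d2 = d1 - math.sqrt(v)
    # E[max(G - K, 0)] for lognormal G, discounted back to time 0.
    return math.exp(-r * T) * (math.exp(m + 0.5 * v) * norm_cdf(d1)
                               - K * norm_cdf(d2))

price = geometric_basket_call(x1=1.0, x2=1.0, K=1.0, r=0.03, sig=0.2, T=1.0)
```

The reduced volatility \(\sigma /\sqrt{2}\) of the geometric mean is what makes the basket option cheaper than a single-asset call with the same strike.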
5 Conclusion
In this paper, we extend the Fourier cos transform, combined with conditional characteristic functions, to propose a numerical method for high-dimensional FBSDEs. In this method, the Fourier cos-cos transform is used to deal with two kinds of conditional expectations. Following the error analysis in [13], we show that the errors in approximating the conditional expectations are well controlled in theory. The numerical experiment also shows that the scheme is efficient and stable.
Availability of data and materials
Not applicable.
References
Pardoux, E., Peng, S.G.: Backward stochastic differential equations and quasilinear parabolic partial differential equations. In: Stochastic Partial Differential Equations and Their Applications, vol. 176, pp. 200–217. Springer, Berlin (1992)
Zhang, G., Gunzburger, M., Zhao, W.: A sparse-grid method for multi-dimensional backward stochastic differential equations. J. Comput. Math. 31, 221–248 (2013)
Fu, Y., Zhao, W., Zhou, T.: Efficient spectral sparse grid approximations for solving multi-dimensional forward backward SDEs. Discrete Contin. Dyn. Syst., Ser. B 22, 3439–3458 (2017)
Ding, D., U, S.C.: Efficient option pricing methods based on Fourier series expansions. J. Math. Res. Expo. 31, 12–22 (2011)
Fang, F., Oosterlee, C.W.: Pricing early-exercise and discrete barrier options by Fourier-cosine series expansions. Numer. Math. 114, 27–62 (2009)
Yang, Y., Su, W., Zhang, Z.: Estimating the discounted density of the deficit at ruin by Fourier cosine series expansion. Stat. Probab. Lett. 146, 147–155 (2019)
Chan, T.L.R.: Hedging and pricing early-exercise options with complex Fourier series expansion. N. Am. J. Econ. Finance 2019, Article ID 100973 (2019)
Ibrahim, S.N.I., Ng, T.W.: Fourier-based approach for power options valuation. Malaysian J. Math. Sci. 13, 31–40 (2019)
Lin, S., He, X.J.: A regime switching fractional Black–Scholes model and European option pricing. Commun. Nonlinear Sci. Numer. Simul. 85, Article ID 105222 (2020)
Ma, J., Wang, H.: Convergence rates of moving mesh methods for moving boundary partial integro-differential equations from regime-switching jump-diffusion Asian option pricing. J. Comput. Appl. Math. 370, Article ID 112598 (2020)
Drapeau, S., Luo, P., Xiong, D.: Characterization of fully coupled FBSDE in terms of portfolio optimization. Electron. J. Probab. 25, 1–26 (2020)
Xie, B., Yu, Z.: An exploration of \(L_{p}\)-theory for forward-backward stochastic differential equations with random coefficients on small durations. J. Math. Anal. Appl. 483(2), Article ID 123642 (2020)
Ruijter, M.J., Oosterlee, C.W.: Two-dimensional Fourier cosine series expansion method for pricing financial options. SIAM J. Sci. Comput. 34, 642–671 (2012)
Meng, Q.J., Ding, D.: An efficient pricing method for rainbow options based on two-dimensional modified sine-sine series expansions. Int. J. Comput. Math. 90, 1096–1113 (2013)
Ding, D., Li, X., Liu, Y.: A regression-based numerical scheme for backward stochastic differential equations. Comput. Stat. 32, 1357–1373 (2017)
Zhao, W., Chen, L., Peng, S.G.: A new kind of accurate numerical method for backward stochastic differential equations. SIAM J. Sci. Comput. 28, 1563–1581 (2006)
Bender, C., Zhang, J.: Time discretization and Markovian iteration for coupled FBSDEs. Ann. Appl. Probab. 18, 143–177 (2008)
Bouchard, B., Ekeland, I., Touzi, N.: On the Malliavin approach to Monte Carlo approximation of conditional expectations. Finance Stoch. 8, 45–71 (2004)
Bouchard, B., Touzi, N.: Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations. Stoch. Process. Appl. 111, 175–206 (2004)
Crimaldi, I., Pratelli, L.: Convergence results for conditional expectations. Bernoulli 11, 737–745 (2005)
Li, Y., Yang, J., Zhao, W.: Convergence error estimates of the Crank–Nicolson scheme for solving decoupled FBSDEs. Sci. China Math. 60, 923–948 (2017)
Sun, Y., Zhao, W.: New second-order schemes for forward backward stochastic differential equations. East Asian J. Appl. Math. 8, 399–421 (2018)
Boyd, J.P.: Chebyshev & Fourier Spectral Methods. Dover, Mineola (2001)
Gobet, E., Lemor, J.P., Warin, X.: A regression-based Monte Carlo method to solve backward stochastic differential equations. Ann. Appl. Probab. 15, 2172–2202 (2005)
Acknowledgements
We would like to thank the editor for handling this paper and the referees for their significant suggestions.
Funding
The work is supported by the National Natural Science Foundation of China (81960618, 61773217), Ministry of Education of Humanities and Social Science Project (17YJC840015), Hubei Key Laboratory of Applied Mathematics (AM201807), Research Project of Hubei Provincial Department of Education (B2020341), Hunan Provincial Science and Technology Project Foundation (2019RS1033), the Scientific Research Fund of Hunan Provincial Education Department (18A013), and Research Project of College of Engineering and Technology Yangtze University (2019KY01, 2020KY07).
Author information
Authors and Affiliations
Contributions
All authors contributed equally to this article. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Abbreviations
Not applicable.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Li, X., Wu, Y., Zhu, Q. et al. A regression-based Monte Carlo method to solve two-dimensional forward backward stochastic differential equations. Adv Differ Equ 2021, 207 (2021). https://doi.org/10.1186/s13662-021-03361-5