In this section, we give a brief description of the CFDS6 and POD techniques, and then derive the construction of the ECFDS6 based on POD for solving the ODPE.
The construction of the sixth-order compact finite difference scheme for the ODPE
Firstly, consider the following initial and boundary value problem:
$$ \textstyle\begin{cases} \frac{{\partial u}}{{\partial t}} - a\frac{{{\partial ^{2}}u}}{{\partial{x^{2}}}} = 0, \quad 0 < x < L, 0 \le t \le T, \\ u(0,t) = g_{1}(t),\qquad u(L,t) = g_{2}(t), \quad 0 \le t \le T, \\ u(x,0) = \varphi(x),\quad 0 \le x \le L, \end{cases} $$
(1)
where \(g_{1}(t)\), \(g_{2}(t)\), and \(\varphi(x)\) are given sufficiently smooth functions. Let h be the spatial step in the x-direction and τ the time step, and write \(x_{j} = (j - 1)h\)
\((j = 1,2,3, \ldots,J)\), \(t_{n} = n\tau \)
\((n = 0,1,2, \ldots,N-1)\), \(u_{j}^{n} \approx u(x_{j},t_{n})\).
Compact finite difference schemes fall into two broad categories. The main idea of the first category is to apply the central difference to the governing partial differential equation and then repeatedly replace the higher-order derivatives in the truncation error with lower-order derivatives obtained from the partial differential equation itself; this is the traditional explicit finite difference approach. The basic idea of the second category is that all the spatial derivatives in the governing PDEs are obtained by solving a system of linear equations [21,22,23]. In this paper, we choose the second way to build a high-order compact finite difference scheme for a parabolic equation.
Because the parabolic equation (1) contains only a second-order spatial derivative, we give the compact finite difference scheme only for second-order derivatives. Next, we derive the CFDS6 for an ODPE. For the second-order derivative at an interior node \(x_{j}\), the sixth-order formula can be written as follows:
$$ \frac{2}{{11}}u''_{j - 1} + u''_{j} + \frac{2}{{11}}u''_{j + 1} = \frac{3}{{11{h^{2}}}} \biggl( \frac{{{u_{j - 2}} + 16{u_{j - 1}}}}{4} - \frac{{17}}{2}u_{j} + \frac{{{u_{j + 2}} + 16{u_{j + 1}}}}{4} \biggr). $$
(2)
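As a quick numerical sanity check of the interior relation (2), the following sketch (our illustration; the test function \(u(x)=\sin x\) and the stencil location are arbitrary choices, not part of the scheme) evaluates both sides of (2) with the exact values \(u'' = -\sin x\) inserted, so the residual is the truncation error alone:

```python
import numpy as np

# Residual of the interior relation (2) for u(x) = sin(x), u''(x) = -sin(x),
# evaluated with exact values on a five-point stencil; it should shrink like h^6.
def residual(h, xc=0.7):
    x = xc + h * np.arange(-2.0, 3.0)          # x_{j-2}, ..., x_{j+2}
    u, upp = np.sin(x), -np.sin(x)
    # left-hand side of (2): (2/11) u''_{j-1} + u''_j + (2/11) u''_{j+1}
    lhs = (2/11) * upp[1] + upp[2] + (2/11) * upp[3]
    # right-hand side of (2)
    rhs = 3 / (11 * h**2) * ((u[0] + 16*u[1]) / 4
                             - (17/2) * u[2]
                             + (u[4] + 16*u[3]) / 4)
    return abs(lhs - rhs)

r1, r2 = residual(0.2), residual(0.1)
ratio = r1 / r2
print(r1, r2, ratio)    # halving h divides the residual by about 2^6 = 64
```

The factor of roughly \(2^{6}\) between the two residuals is consistent with the claimed sixth-order accuracy of (2).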
At the leftmost boundary point \(x_{1}\), a sixth-order formula can be given as follows:
$$ \begin{aligned}[b] &u''_{1} + \frac{{126}}{{11}}u''_{2} \\ &\quad = \frac{1}{{{h^{2}}}} \biggl( \frac{{2077}}{{157}}{u_{1}} - \frac{{2943}}{{110}}{u_{2}} + \frac{{573}}{{44}}{u_{3}} + \frac{{167}}{{99}}{u_{4}} - \frac{{18}}{{11}}{u_{5}} + \frac{{57}}{{110}}{u_{6}} - \frac{{131}}{{1980}}{u_{7}} \biggr). \end{aligned} $$
(3)
At the second left boundary point \({x_{2}}\), the sixth-order formula is given as follows:
$$ \begin{aligned}[b] &\frac{{11}}{{128}}{u''_{1}} + {u''_{2}} + \frac{{11}}{{128}}{u''_{3}} \\ &\quad = \frac{1}{{{h^{2}}}} \biggl( \frac{{585}}{{512}}{u_{1}} - \frac{{141}}{{64}}{u_{2}} + \frac{{459}}{{512}}{u_{3}} + \frac{9}{{32}}{u_{4}} - \frac{{81}}{{512}}{u_{5}} + \frac{3}{{64}}{u_{6}} - \frac{3}{{512}}{u_{7}} \biggr). \end{aligned} $$
(4)
According to the symmetry, at the second right boundary point \({x_{J - 1}}\), the sixth-order formula is
$$ \begin{aligned}[b] & \frac{{11}}{{128}} {u''_{J}} + {u''_{J - 1}} + \frac{{11}}{{128}}{u''_{J - 2}} \\ & \quad = \frac{1}{{{h^{2}}}} \biggl( \frac{{585}}{{512}}{u_{J}} - \frac{{141}}{{64}}{u_{J - 1}} + \frac{{459}}{{512}}{u_{J - 2}} + \frac{9}{{32}}{u_{J - 3}} \\ &\qquad {} - \frac{{81}}{{512}}{u_{J - 4}} + \frac{3}{{64}}{u_{J - 5}} - \frac{3}{{512}}{u_{J - 6}} \biggr). \end{aligned} $$
(5)
Similarly, at the rightmost boundary point \({x_{J}}\), a sixth-order formula is
$$ \begin{aligned}[b] &{u''_{J}} + \frac{{126}}{{11}}{u''_{J - 1}} \\ &\quad = \frac{1}{{{h^{2}}}} \biggl( \frac{{2077}}{{157}}{u_{J}} - \frac{{2943}}{{110}}{u_{J - 1}} + \frac{{573}}{{44}}{u_{J - 2}} + \frac{{167}}{{99}}{u_{J - 3}} \\ &\qquad {} - \frac{{18}}{{11}}{u_{J - 4}} + \frac{{57}}{{110}}{u_{J - 5}} - \frac{{131}}{{1980}}{u_{J - 6}} \biggr). \end{aligned} $$
(6)
Note that the scheme of Eqs. (2)–(6) can be written as
$$ \mathbf{{AU}}'' = \mathbf{{BU}}, $$
(7)
where
$$ \mathbf{U} = (u_{1},u_{2},\ldots,u_{J-1},u_{J})^{T}, $$
$$ \mathbf{A} = \left[ \begin{array}{ccccccc} 1 & \frac{126}{11} & & & & & \\ \frac{11}{128} & 1 & \frac{11}{128} & & & & \\ & \frac{2}{11} & 1 & \frac{2}{11} & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & \frac{2}{11} & 1 & \frac{2}{11} & \\ & & & & \frac{11}{128} & 1 & \frac{11}{128} \\ & & & & & \frac{126}{11} & 1 \end{array} \right]_{J \times J}, $$
$$ \mathbf{B} = \frac{1}{h^{2}} \left[ \begin{array}{cccccccc} \frac{2077}{157} & -\frac{2943}{110} & \frac{573}{44} & \frac{167}{99} & -\frac{18}{11} & \frac{57}{110} & -\frac{131}{1980} & \\ \frac{585}{512} & -\frac{141}{64} & \frac{459}{512} & \frac{9}{32} & -\frac{81}{512} & \frac{3}{64} & -\frac{3}{512} & \\ \frac{3}{44} & \frac{12}{11} & -\frac{51}{22} & \frac{12}{11} & \frac{3}{44} & & & \\ & \ddots & \ddots & \ddots & \ddots & \ddots & & \\ & & \frac{3}{44} & \frac{12}{11} & -\frac{51}{22} & \frac{12}{11} & \frac{3}{44} & \\ & -\frac{3}{512} & \frac{3}{64} & -\frac{81}{512} & \frac{9}{32} & \frac{459}{512} & -\frac{141}{64} & \frac{585}{512} \\ & -\frac{131}{1980} & \frac{57}{110} & -\frac{18}{11} & \frac{167}{99} & \frac{573}{44} & -\frac{2943}{110} & \frac{2077}{157} \end{array} \right]_{J \times J}. $$
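The matrices A and B can be assembled directly from the stencils (2)–(6). The NumPy sketch below is our illustration (the grid size and test function are arbitrary choices); where the printed coefficients disagree between the formulas and the matrix B (573/44 versus 574/44, and 459/512 versus 585/512), the variant whose row sum vanishes on constant functions is used:

```python
import numpy as np

def assemble_AB(J, h):
    """A and B of Eq. (7), filled row by row from the stencils (2)-(6)."""
    A, B = np.zeros((J, J)), np.zeros((J, J))
    for j in range(2, J - 2):                      # interior rows, Eq. (2)
        A[j, j-1:j+2] = [2/11, 1, 2/11]
        B[j, j-2:j+3] = np.array([3/44, 12/11, -51/22, 12/11, 3/44]) / h**2
    # one-sided boundary closures, Eqs. (3)-(6)
    c1 = np.array([2077/157, -2943/110, 573/44, 167/99, -18/11, 57/110, -131/1980])
    c2 = np.array([585/512, -141/64, 459/512, 9/32, -81/512, 3/64, -3/512])
    A[0, :2] = [1, 126/11];           B[0, :7] = c1 / h**2
    A[1, :3] = [11/128, 1, 11/128];   B[1, :7] = c2 / h**2
    A[-2, -3:] = [11/128, 1, 11/128]; B[-2, -7:] = c2[::-1] / h**2
    A[-1, -2:] = [126/11, 1];         B[-1, -7:] = c1[::-1] / h**2
    return A, B

J, L = 41, 1.0
h = L / (J - 1)
x = np.linspace(0.0, L, J)
A, B = assemble_AB(J, h)

u = np.sin(np.pi * x)                  # test function with u'' = -pi^2 sin(pi x)
upp = np.linalg.solve(A, B @ u)        # U'' from A U'' = B U
err = np.max(np.abs(upp + np.pi**2 * u))
print(err)                             # small; the one-sided closures dominate
```

Since A is banded, in practice it can be factored once and reused at every time level; the dense solve above merely keeps the sketch short.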
As mentioned above, the parabolic equation in (1) has been converted by the compact scheme (2)–(6) into an initial value problem for a system of ordinary differential equations (ODEs). The fourth-order Runge–Kutta (RK4) scheme is then applied to integrate the time-dependent governing ODEs,
$$ \frac{{d\mathbf {U}}}{{dt}} = R(\mathbf {U}), $$
(8)
where R denotes a spatial differential operator. Assuming that the value of \(\mathbf {U}^{n}\) at \({t_{n}}\) is given, the numerical solution \(\mathbf {U}^{n + 1}\) at \({t_{n + 1}} = {t_{n}} + \tau\) is obtained through the following operations:
$$ \textstyle\begin{cases} \mathbf {k}_{0} = \tau\cdot R(\mathbf {U}^{n}),\qquad \mathbf {U}_{1} = \mathbf {U}^{n} + \mathbf {k}_{0}/2, \\ \mathbf {k}_{1} = \tau\cdot R(\mathbf {U}_{1}),\qquad \mathbf {U}_{2} = \mathbf {U}^{n} + \mathbf {k}_{1}/2, \\ \mathbf {k}_{2} = \tau\cdot R(\mathbf {U}_{2}),\qquad \mathbf {U}_{3} = \mathbf {U}^{n} + \mathbf {k}_{2}, \\ \mathbf {k}_{3} = \tau\cdot R(\mathbf {U}_{3}),\qquad \mathbf {K}=\mathbf {k}_{0} + 2\mathbf {k}_{1} + 2\mathbf {k}_{2} + \mathbf {k}_{3}, \\ \mathbf {U}^{n + 1} = \mathbf {U}^{n} + \frac{1}{6}\mathbf {K}. \end{cases} $$
(9)
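The update (9) is the classical RK4 step. A small self-check (our illustration; the linear model problem \(R(\mathbf{U}) = \lambda\mathbf{U}\) is an assumed stand-in chosen because its exact solution is known) confirms the expected one-step error of order \(\tau^{5}\):

```python
import numpy as np

def rk4_step(R, U, tau):
    # one step of Eq. (9)
    k0 = tau * R(U);  U1 = U + k0 / 2
    k1 = tau * R(U1); U2 = U + k1 / 2
    k2 = tau * R(U2); U3 = U + k2
    k3 = tau * R(U3)
    K = k0 + 2*k1 + 2*k2 + k3
    return U + K / 6

lam = -2.0                        # model problem dU/dt = lam * U
R = lambda U: lam * U
U0 = np.array([1.0, 0.5])

e1 = np.max(np.abs(rk4_step(R, U0, 0.1) - U0 * np.exp(lam * 0.1)))
e2 = np.max(np.abs(rk4_step(R, U0, 0.05) - U0 * np.exp(lam * 0.05)))
ratio = e1 / e2
print(e1, e2, ratio)              # halving tau divides the error by about 2^5 = 32
```

Because the scheme is explicit, in the full method the step τ must also respect the stability restriction imposed by the stiff eigenvalues of the discrete second-derivative operator.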
Using the sixth-order compact difference scheme in Eq. (7), the second derivative appearing in the operator \(R(\mathbf {U})\) is obtained at each time level. Then the numerical solution at \({t_{n + 1}}\) follows from the RK4 method. Thus, once the initial value is known, we can advance the solution to any time step by iterating.
The establishment of the ECFDS6 based on the POD technique
In this section, we use the POD technique to build the ECFDS6. For further details not described here, please refer to [15, 24,25,26]; the POD method also admits a variety of interpretations, see [9,10,11]. As described in the Introduction, the main goal of POD is to seek a set of orthogonal vectors, generated by applying a singular value decomposition (SVD) to the sample space, called the optimal basis functions. By using only the first M of these basis functions, the samples can be expressed optimally. Here POD is used to compute the optimal basis: we need a set of snapshots, and we use the SVD to construct the optimal basis from them.
We suppose that there are d samples (usually called snapshots) \({\mathbf{{s}}^{{{1}}}},{\mathbf{{s}}^{{{2}}}}, \ldots , {\mathbf{{s}}^{{{d}}}}\), which can be arranged into a matrix \(\mathbf{{S}} = ({\mathbf{{s}}^{1}},{\mathbf{{s}}^{2}}, \ldots ,{\mathbf{{s}} ^{d}})\), where \(\mathbf{{s}}^{i} \in{R^{J}}\)
\((i = 1,2, \ldots,d)\). Then \(\mathbf{{S}} \in{R^{J \times d}}\), and \(\mathbf{{S}}{\mathbf{{S}} ^{T}} \in{R^{J \times J}}\) is a \(J \times J\) positive semidefinite matrix. Applying the SVD to the matrix S:
$$ \mathbf{S} = \mathbf{G} \left[ \begin{array}{cc} \mathbf{D}_{r} & 0 \\ 0 & 0 \end{array} \right] \mathbf{V}^{T}, $$
(10)
where \(\mathbf {G} = (\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2}, \ldots, \boldsymbol{\alpha}_{J})\) is a \(J \times J\) matrix whose columns are the orthogonal eigenvectors of \(\mathbf {S}{\mathbf {S} ^{T}}\). Both G and \({\mathbf {V}_{d \times d}}\) are orthogonal matrices, and \(\mathbf {D}_{r} = \operatorname{diag}({\lambda_{1}},{\lambda_{2}}, \ldots,{\lambda_{r}})\), where the singular values \({\lambda_{i}}\)
\((i = 1,2, \ldots,r)\) satisfy \({\lambda_{1}} \ge {\lambda_{2}} \ge\cdots\ge{\lambda_{r}} > 0\).
Denote the d columns of S by \({\boldsymbol{\beta}^{l}} = {(s_{1}^{l},s _{2}^{l}, \ldots,s_{J}^{l})^{T}}\)
\((l = 1,2, \ldots,d)\); the projection \({P_{M}}\) is defined as follows:
$$ {P_{M}}\bigl(\boldsymbol{\beta}^{l}\bigr) = \sum_{i = 1}^{M} {\bigl({\boldsymbol{\alpha} _{i}},{\boldsymbol{\beta}^{l}}\bigr)} {\boldsymbol{\alpha}_{i}}, $$
(11)
where \(0 < M \le d\) and \(( \cdot, \cdot)\) represents the inner product of vectors; then we can obtain the following result [24]:
$$ \bigl\Vert {\boldsymbol{\beta}^{l}} - {P_{M}}\bigl({\boldsymbol{\beta}^{l}}\bigr) \bigr\Vert _{{2}} \le{\lambda _{M + 1}}, $$
(12)
where \(\Vert \cdot\Vert_{{2}}\) is the standard Euclidean norm of a vector. Hence \({\boldsymbol{\alpha}_{1}},{\boldsymbol{\alpha}_{2}}, \ldots,\boldsymbol{\alpha}_{M}\) are a group of optimal POD basis vectors, which form the basis matrix \(\boldsymbol{\alpha} = (\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2}, \ldots, \boldsymbol{\alpha}_{M})\). It should be pointed out that the basis matrix fulfills the orthogonality condition, i.e., \({{\boldsymbol{\alpha}}^{T}} {\boldsymbol{\alpha}} = \mathbf {I}\) (I is the \(M \times M\) identity matrix).
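Both the orthogonality condition and the projection bound (12) are easy to verify numerically. In the sketch below (our illustration; a synthetic random snapshot matrix and the sizes J, d, M stand in for real data), the POD basis is taken as the first M left singular vectors of S:

```python
import numpy as np

# POD basis by SVD and a numerical check of the projection bound (12);
# the random snapshot matrix and the sizes J, d, M are arbitrary choices here.
rng = np.random.default_rng(0)
J, d, M = 50, 10, 4
S = rng.standard_normal((J, d))                      # snapshot matrix S

G, sing, Vt = np.linalg.svd(S, full_matrices=True)   # S = G [D_r 0; 0 0] V^T
alpha = G[:, :M]                                     # basis matrix (first M columns)

ortho = alpha.T @ alpha                              # should equal the M x M identity

# P_M applied column by column, Eq. (11): P_M(beta) = sum_i (alpha_i, beta) alpha_i
P = alpha @ (alpha.T @ S)
col_errs = np.linalg.norm(S - P, axis=0)             # ||beta^l - P_M(beta^l)||_2
print(col_errs.max(), sing[M])                       # bound (12): errors <= lambda_{M+1}
```

Note that `sing[M]` is the (M+1)-th singular value \(\lambda_{M+1}\), since NumPy returns the singular values in non-increasing order.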
In the following, the procedure for establishing the ECFDS6 for the parabolic equation via the POD basis is described.
If U in Eq. (7) is replaced by
$$ \mathbf {U}^{*} = \boldsymbol{\alpha} \mathbf {V} = {\boldsymbol{\alpha} _{J \times M}} {\mathbf {V}_{M \times1}}, $$
(13)
we have
$$ {\mathbf {V}}^{\prime\prime} = {{\boldsymbol{\alpha}}^{T}} { \mathbf {A}^{  1}} {\mathbf {B}\boldsymbol{\alpha}} \mathbf {V}, $$
(14)
and, noting that \({{\boldsymbol{\alpha}}^{T}}{\boldsymbol{\alpha}} = \mathbf {I}\) and letting \({\mathbf {V}_{{0}}}= {\boldsymbol{\alpha} ^{T}}{\mathbf{{U}}^{n}}\), the RK4 scheme for the reduced solution is given as follows:
$$ \textstyle\begin{cases} {\overline{\mathbf {k}} _{0}} = \tau\cdot R({\mathbf {V}_{0}}),\qquad{\mathbf {V}_{1}} = {\mathbf {V}_{0}} + {\overline{\mathbf {k}} _{0}}/2, \\ {\overline{\mathbf {k}} _{1}} = \tau\cdot R({\mathbf {V}_{1}}),\qquad {\mathbf {V}_{2}} = {\mathbf {V}_{0}} + {\overline{\mathbf {k}} _{1}}/2, \\ {\overline{\mathbf {k}} _{2}} = \tau\cdot R({\mathbf {V}_{2}}),\qquad {\mathbf {V}_{3}} = {\mathbf {V}_{0}} + {\overline{\mathbf {k}} _{2}}, \\ {\overline{\mathbf {k}} _{3}} = \tau\cdot R({\mathbf {V}_{3}}),\qquad \overline{\mathbf {K}}={\overline{\mathbf {k}} _{0}} + 2{\overline{\mathbf {k}}_{1}} + 2{\overline{\mathbf {k}} _{2}} + {\overline{\mathbf {k}} _{3}}, \\ {\mathbf {V}^{n + 1}} = {\mathbf {V}_{{0}}} + \frac{1}{6}\overline{\mathbf {K}}. \end{cases} $$
(15)
Once the reduced solution \({\mathbf {V}^{n + 1}}\) has been obtained from Eq. (15), the global solution follows as \({\mathbf {U}^{n + 1}} = {\boldsymbol{\alpha}} {\mathbf {V}^{n + 1}}\). The procedure of the ECFDS6 for the parabolic equation is summarized as follows:

Step 1. Generate the snapshot matrix S from experiments or numerical simulations.

Step 2. Formulate the optimal POD basis matrix α by the SVD method.

Step 3. Apply Eq. (14) to work out the reduced second-order derivative \(\mathbf {V}''\).

Step 4. Obtain the reduced solution by solving Eq. (15).

Step 5. Expand the reduced solution by applying \({\mathbf {U}^{n + 1}} = {\boldsymbol{\alpha}}{ \mathbf {V}^{n + 1}}\).
It is easy to see from the above algorithm that the ECFDS6 needs to solve only an \(M \times M\) system (Eq. (14)) at each iteration, whereas the CFDS6 solves a \(J \times J\) system (Eq. (7)) at each iteration. In general, M is much smaller than J, so the ECFDS6 needs less computational time than the CFDS6. Applying the whole procedure, we complete the calculation from \({t^{n}}\) to \({t^{n + 1}}\). Moreover, since a sixth-order compact scheme is used to discretize the space variable, our algorithm is of sixth-order accuracy in space.
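The five steps can be walked through end to end on problem (1). In the sketch below all concrete choices are ours, not the paper's: a = 1, \(\varphi(x) = \sin(\pi x)\), homogeneous boundary data, snapshots sampled from the known exact solution \(e^{-\pi^{2}t}\sin(\pi x)\) in place of step 1's simulations, and M = 1 because those snapshots are numerically rank one. The reduced system that is marched in time is then 1 × 1 instead of J × J:

```python
import numpy as np

def assemble_AB(J, h):
    # A and B of Eq. (7), filled row by row from the stencils (2)-(6)
    A, B = np.zeros((J, J)), np.zeros((J, J))
    for j in range(2, J - 2):
        A[j, j-1:j+2] = [2/11, 1, 2/11]
        B[j, j-2:j+3] = np.array([3/44, 12/11, -51/22, 12/11, 3/44]) / h**2
    c1 = np.array([2077/157, -2943/110, 573/44, 167/99, -18/11, 57/110, -131/1980])
    c2 = np.array([585/512, -141/64, 459/512, 9/32, -81/512, 3/64, -3/512])
    A[0, :2] = [1, 126/11];           B[0, :7] = c1 / h**2
    A[1, :3] = [11/128, 1, 11/128];   B[1, :7] = c2 / h**2
    A[-2, -3:] = [11/128, 1, 11/128]; B[-2, -7:] = c2[::-1] / h**2
    A[-1, -2:] = [126/11, 1];         B[-1, -7:] = c1[::-1] / h**2
    return A, B

# Step 1: snapshots, here sampled from the exact solution exp(-pi^2 t) sin(pi x)
J, Lx, a, d = 21, 1.0, 1.0, 20
h = Lx / (J - 1)
x = np.linspace(0.0, Lx, J)
S = np.array([np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
              for t in np.linspace(0.0, 0.05, d)]).T        # J x d matrix

# Step 2: optimal POD basis by SVD (one mode suffices for rank-one snapshots)
G, sing, _ = np.linalg.svd(S, full_matrices=False)
M = 1
alpha = G[:, :M]                        # J x M basis matrix, alpha^T alpha = I

# Step 3: reduced M x M operator of Eq. (14)
A, B = assemble_AB(J, h)
Dxx = np.linalg.solve(A, B)             # A^{-1} B  (J x J, built once)
Dxx_red = alpha.T @ Dxx @ alpha         # M x M

# Step 4: reduced RK4 march, Eq. (15), starting from V_0 = alpha^T U^0
tau, N = 1e-3, 100
R = lambda V: a * (Dxx_red @ V)
V = alpha.T @ np.sin(np.pi * x)
for n in range(N):
    k0 = tau * R(V);  V1 = V + k0 / 2
    k1 = tau * R(V1); V2 = V + k1 / 2
    k2 = tau * R(V2); V3 = V + k2
    k3 = tau * R(V3)
    V = V + (k0 + 2*k1 + 2*k2 + k3) / 6

# Step 5: expand the reduced solution back to the full grid
U = alpha @ V
T = N * tau
err = np.max(np.abs(U - np.exp(-np.pi**2 * T) * np.sin(np.pi * x)))
print(Dxx_red.shape, err)               # a 1 x 1 system instead of 21 x 21
```

The saving claimed above shows up directly: each RK4 stage multiplies by the tiny \(M \times M\) matrix `Dxx_red` rather than operating with the full \(J \times J\) operator.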