As mentioned before, the main interest of this paper is to propose an implicit method for solving FDEs within a predictor–corrector numerical scheme. The analysis therefore focuses on the implicit FAM22 method developed above.
3.1 Order of the method
Definition 1
([5])
The general formulation of a fractional linear multistep method (FLMM) for the solution of Equation (5) is given by
$$\begin{aligned} \sum_{j=0}^{i}\alpha _{j}y_{i-j}=h^{\alpha }\sum_{j=0}^{i} \beta _{j}f(t_{i-j},y_{i-j}), \end{aligned}$$
(19)
where \(\alpha _{j}\) and \(\beta _{j}\) are real parameters and α denotes the fractional order.
Definition 2
([10])
If \(C_{0}=C_{1}=\cdots =C_{q}=0\) and \(C_{q+1} \neq 0\), the linear multistep method is said to be of order q. The constants \(C_{q}\) are computed by the formula
$$\begin{aligned} C_{q}=\sum_{j=0}^{k} \biggl[\frac{j^{q} \alpha _{j}}{q!}- \frac{j^{q-1}\beta _{j}}{(q-1)!} \biggr], \quad q=0,1,2,\ldots, \end{aligned}$$
(20)
where k is the step number of the method and \(\alpha _{j}\) and \(\beta _{j}\) are the coefficients of the method. It is important to note that the method’s error constant is \(C_{q+1}\).
Proof
To investigate the order of the implicit FAM22 method in Equation (18), we first obtain \(\alpha _{j}\) and \(\beta _{j}\) by comparing Equations (18) and (19). Therefore, we have
$$\begin{aligned} \begin{aligned} &\alpha _{0}=-1,\qquad \beta _{0}= \frac{1}{\Gamma (\alpha )} \biggl( \frac{-(i)^{\alpha }}{\alpha } + \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1} \biggr), \\ &\alpha _{1}=1,\qquad \beta _{1}= \frac{1}{\Gamma (\alpha )} \biggl( \frac{(i+1)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1} \biggr). \end{aligned} \end{aligned}$$
(21)
Substituting Equation (21) into (20) will give
$$\begin{aligned} \begin{aligned} &C_{0}=\sum _{j=0}^{2}\alpha _{j}=0, \\ &C_{1}=\sum_{j=0}^{2}(j\alpha _{j}-\beta _{j})=0, \\ &C_{2}=\sum_{j=0}^{2}\biggl( \frac{j^{2}\alpha _{j}}{2!}-j\beta _{j}\biggr)=0, \\ &C_{3}=\sum_{j=0}^{2}\biggl( \frac{j^{3}\alpha _{j}}{3!}- \frac{j^{2}\beta _{j}}{2!}\biggr)=-\frac{1}{12}. \end{aligned} \end{aligned}$$
(22)
Therefore, the implicit method is proven to be of order 2 with error constant \(C_{3}=-\frac{1}{12}\). □
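The order conditions in Equation (20) can be checked mechanically in exact rational arithmetic. As an illustrative sketch (the function name `order_constant` is our own), the code below evaluates \(C_q\) for arbitrary coefficients and applies it to the classical trapezoidal rule, which is the \(\alpha \to 1\) limit of the FAM22 coefficients in Equation (21) (there \(\beta_0, \beta_1 \to 1/2\)); it reproduces \(C_0=C_1=C_2=0\) and the same error constant \(C_3=-\frac{1}{12}\):

```python
from fractions import Fraction
from math import factorial

def order_constant(q, alpha, beta):
    """Evaluate C_q of Equation (20) for a linear multistep method
    with coefficients alpha_j, beta_j, j = 0, ..., k."""
    total = Fraction(0)
    for j, (a, b) in enumerate(zip(alpha, beta)):
        total += Fraction(a) * Fraction(j ** q, factorial(q))
        if q >= 1:  # the beta term is absent for q = 0
            total -= Fraction(b) * Fraction(j ** (q - 1), factorial(q - 1))
    return total

# Trapezoidal-rule coefficients (alpha -> 1 limit of Equation (21))
alpha_j = [Fraction(-1), Fraction(1)]
beta_j = [Fraction(1, 2), Fraction(1, 2)]
constants = [order_constant(q, alpha_j, beta_j) for q in range(4)]
# constants == [0, 0, 0, -1/12]: order 2, error constant -1/12
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in deciding whether a constant is exactly zero.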
3.2 Order of accuracy
To begin with, the implicit method of FAM22 is given in Equation (18), and for simplicity, let
$$\begin{aligned} \begin{aligned} &A =\frac{(i+1)^{\alpha }}{\alpha }, \qquad B= \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1}, \\ &C= \frac{-(i)^{\alpha }}{\alpha },\qquad D= \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1}. \end{aligned} \end{aligned}$$
(23)
Hence, we will obtain
$$\begin{aligned} \begin{aligned} y(t_{i+1})= y(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} \bigl[ ( A + B ) F_{i+1}+ (C + D ) F_{i} \bigr]. \end{aligned} \end{aligned}$$
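As a quick sanity check on the grouping in Equation (23): at the classical limit \(\alpha = 1\) the combined weights \(A+B\) and \(C+D\) both reduce to \(\frac{1}{2}\) for every step index i, so, since \(\Gamma(1)=1\) and \(h^{\alpha}=h\), the scheme collapses to the trapezoidal rule. A minimal sketch in exact arithmetic (the helper `combined_weights` is ours):

```python
from fractions import Fraction

def combined_weights(i, a=1):
    """A + B and C + D of Equation (23) for integer order a (here a = 1)."""
    A = Fraction((i + 1) ** a, a)
    B = Fraction(i ** (a + 1) - (i + 1) ** (a + 1), a + 1)
    C = Fraction(-(i ** a), a)
    D = Fraction((i + 1) ** (a + 1) - i ** (a + 1), a + 1)
    return A + B, C + D

# Both weights equal 1/2 regardless of the step index i
half = (Fraction(1, 2), Fraction(1, 2))
all_half = all(combined_weights(i) == half for i in range(100))
```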
Next, by Taylor expansion,
$$\begin{aligned} \begin{aligned} y(t_{i+1})=y(t_{i}) + hy^{\prime }(t_{i}) + \frac{h^{2}}{2!} y^{\prime \prime }( \theta ), \end{aligned} \end{aligned}$$
(24)
and, for the case of an initial value problem,
$$\begin{aligned} \begin{aligned} y^{\prime }(t_{i}) \approx f\bigl(t_{i},y(t_{i})\bigr). \end{aligned} \end{aligned}$$
(25)
Therefore, the local truncation error denoted as \(e_{i+1}\) can be obtained as follows:
$$\begin{aligned} \begin{aligned} e_{i+1}=y(t_{i+1})-y(t_{i}) - \frac{h^{\alpha }}{\Gamma (\alpha )} \bigl[ ( A + B ) y^{\prime }(t_{i+1}) \bigr]- \frac{h^{\alpha }}{\Gamma (\alpha )} \bigl[ (C + D ) y^{\prime }(t_{i}) \bigr]. \end{aligned} \end{aligned}$$
(26)
Expanding Equation (26) by implementing the Taylor expansion in Equations (24) and (25) will give
$$\begin{aligned} \begin{aligned} e_{i+1}={}&y(t_{i}) + hy^{\prime }(t_{i}) + \frac{h^{2}}{2!} y^{\prime \prime }( \theta )-y(t_{i}) - \frac{h^{\alpha }}{\Gamma (\alpha )} ( A + B ) \biggl[ y^{\prime }(t_{i}) + hy^{\prime \prime }(t_{i}) + \frac{h^{2}}{2!} y^{\prime \prime \prime }(\theta ) \biggr] \\ &{}-\frac{h^{\alpha }}{\Gamma (\alpha )} \bigl[ (C + D ) y^{\prime }(t_{i}) \bigr] + O \bigl(h^{3}\bigr). \end{aligned} \end{aligned}$$
(27)
Based on Equation (27), the local truncation error \(e_{i+1}\) is \(O(h^{3})\). Since the local truncation error of a method of order n of accuracy is \(O(h^{n+1})\), where h is the step size, it can be concluded that the proposed method is of second-order accuracy.
3.3 Convergence analysis
Theorem 1
([11])
Let \(f(t,y)\) be Lipschitz continuous at all points \((t,y)\) in the region R [10] given by
$$\begin{aligned} a\leq t \leq b, \quad -\infty < y< \infty, \end{aligned}$$
(28)
such that a and b are finite. Suppose that there exists a constant L such that, for every t, y, and \(y^{*}\) for which the points \((t,y)\) and \((t,y^{*})\) are both in R, we have
$$\begin{aligned} \bigl\vert f(t,y)-f\bigl(t,y^{*}\bigr) \bigr\vert \leq L \bigl\vert y-y^{*} \bigr\vert . \end{aligned}$$
(29)
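As an illustration of the condition in Equation (29): the right-hand side \(f(t,y)=\sin y\) (our own example, not taken from the paper) is Lipschitz on any such region with \(L=1\), since \(|\sin y - \sin y^{*}| \leq |y-y^{*}|\) by the mean value theorem. A quick numerical spot-check:

```python
import math
import random

def f(t, y):
    # Example right-hand side, Lipschitz in y with constant L = 1
    return math.sin(y)

L = 1.0
random.seed(0)
samples = [(random.uniform(0, 1), random.uniform(-50, 50), random.uniform(-50, 50))
           for _ in range(1000)]
# Check |f(t, y) - f(t, y*)| <= L |y - y*| at every sampled pair
lipschitz_ok = all(abs(f(t, y) - f(t, ys)) <= L * abs(y - ys) + 1e-12
                   for t, y, ys in samples)
```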
Theorem 2
([1, 11, 12])
A linear multistep method is said to be convergent if, for all initial value problems subject to the hypothesis of Theorem 1 with \(t\in [a,b]\) and \(0<\alpha <1\), we have that
$$\begin{aligned} \bigl\vert y-y^{*} \bigr\vert \leq K t^{\alpha -1}h^{p}, \end{aligned}$$
where K is a constant depending only on α and p, with \(p \in (0,1)\) as stated in [12], and
$$\begin{aligned} \lim_{h\rightarrow 0} y_{i}=y^{*}(t_{i}). \end{aligned}$$
Proof
In the first step of this convergence analysis, we recall the proposed method in Equation (18), namely
$$\begin{aligned} \begin{aligned} y(t_{i+1})={}& y(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} \biggl[ \biggl( \frac{(i+1)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1} \biggr) F_{i+1} \\ & {}+\biggl( \frac{-(i)^{\alpha }}{\alpha } + \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1} \biggr) F_{i} \biggr]. \end{aligned} \end{aligned}$$
Based on the above equation, let
$$\begin{aligned} \begin{aligned} &P =\frac{(i+1)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1}, \\ &Q= \frac{-(i)^{\alpha }}{\alpha } + \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1}. \end{aligned} \end{aligned}$$
(30)
For the next step, substituting Equation (30) into Equation (18) will give the following:
i. The exact form of the system is given by
$$\begin{aligned} \begin{aligned} &y^{*}(t_{i+1})-y^{*}(t_{i})= \frac{h^{\alpha }}{\Gamma (\alpha )} ( P ) F_{i+1}^{*} + \frac{h^{\alpha }}{\Gamma (\alpha )} ( Q ) F_{i}^{*} -\frac{1}{12} h^{3}y^{*(3)}( \xi ). \end{aligned} \end{aligned}$$
(31)
ii. The approximate form of the system is
$$\begin{aligned} \begin{aligned} y(t_{i+1})-y(t_{i})= \frac{h^{\alpha }}{\Gamma (\alpha )} ( P ) F_{i+1} + \frac{h^{\alpha }}{\Gamma (\alpha )} ( Q ) F_{i}. \end{aligned} \end{aligned}$$
(32)
Subtracting Equation (31) from (32) will give
$$\begin{aligned} \begin{aligned} y(t_{i+1})-y^{*}(t_{i+1})={}&y(t_{i})-y^{*}(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} (P) \bigl[f(t_{i+1},y_{i+1})-f \bigl(t^{*}_{i+1},y^{*}_{i+1}\bigr)\bigr] \\ &{}+\frac{h^{\alpha }}{\Gamma (\alpha )} (Q) \bigl[f(t_{i},y_{i})-f \bigl(t^{*}_{i},y^{*}_{i}\bigr)\bigr] \\ &{}-\frac{1}{12} h^{3}y^{*(3)}(\xi ). \end{aligned} \end{aligned}$$
(33)
Let
$$\begin{aligned} \begin{aligned} & \vert d_{i+1} \vert = \bigl\vert y_{i+1}-y^{*}_{i+1} \bigr\vert ,\qquad \vert d_{i} \vert = \bigl\vert y_{i}-y^{*}_{i} \bigr\vert . \end{aligned} \end{aligned}$$
(34)
In the next step, we apply the Lipschitz condition of Theorem 1 to the bracketed differences in Equation (33), together with the notation in Equation (34); the Lipschitz constant L is subsequently absorbed into the constant K of Theorem 2. Therefore, we have
$$\begin{aligned} \begin{aligned} \biggl(1-\frac{h^{\alpha }P}{\Gamma (\alpha )} \biggr) \vert d_{i+1} \vert \leq \biggl(1+\frac{h^{\alpha }Q}{\Gamma (\alpha )} \biggr) \vert d_{i} \vert -\frac{1}{12}h^{3}y^{*(3)}(\xi ). \end{aligned} \end{aligned}$$
(35)
Rewriting Equation (35) based on Theorem 2, we obtain
$$\begin{aligned} \begin{aligned} \bigl(1-Kh^{\alpha } \bigr) \vert d_{i+1} \vert \leq \bigl(1+Kh^{ \alpha } \bigr) \vert d_{i} \vert - \frac{1}{12}h^{3}y^{*(3)}( \xi ). \end{aligned} \end{aligned}$$
(36)
For h sufficiently small, i.e., as \(h\rightarrow 0\), and with the initial error tending to 0, Equation (36) shows that \(| d_{i+1} | \leq | d_{i} |\); hence \(y_{i+1}\rightarrow y_{i+1}^{*}\) and \(y_{i}\rightarrow y_{i}^{*}\). As a result, Theorem 2 is satisfied, and the implicit FAM22 method has been proven to converge. □
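The contraction behaviour in Equation (36) can also be seen numerically: the per-step amplification factor \((1+Kh^{\alpha })/(1-Kh^{\alpha })\) tends to 1 from above as \(h\rightarrow 0\), while the perturbation term \(\frac{1}{12}h^{3}|y^{*(3)}(\xi )|\) vanishes even faster. A small sketch (K and α below are illustrative values, not taken from the paper):

```python
def amplification(h, K=1.0, alpha=0.5):
    """Per-step factor from Equation (36): (1 + K h^alpha) / (1 - K h^alpha)."""
    g = K * h ** alpha
    assert g < 1, "requires h small enough that K h^alpha < 1"
    return (1 + g) / (1 - g)

# Factors for h = 1e-2, 1e-3, ..., 1e-7: they decrease monotonically toward 1
factors = [amplification(10.0 ** (-k)) for k in range(2, 8)]
monotone_to_one = all(a > b > 1.0 for a, b in zip(factors, factors[1:]))
```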