This section explains in detail two spectral algorithms for the numerical solution of two kinds of space fractional linear advection-dispersion problems. First, we select a unified double Tchebyshev expansion as basis functions, and then apply two well-known spectral methods, namely the collocation and Petrov-Galerkin methods.
4.1 Choice of the basis functions
We choose the following two families of orthogonal polynomials:
$$\begin{aligned}& \phi_{i}(x)=x(\ell-x) T_{i}^{\ell}(x),\quad i=0,1,2, \ldots, \end{aligned}$$
(4.1)
$$\begin{aligned}& \psi_{j}(t)=T_{j}^{\tau}(t), \quad j=0,1,2, \ldots. \end{aligned}$$
(4.2)
It is not difficult to show that the polynomials \(\{\phi_{i}(x)\} _{i\ge 0}\) are linearly independent and orthogonal with respect to the weight function \(w(x)=\frac{1}{x^{5/2}(\ell-x)^{5/2}}\) on \([0,\ell]\). Moreover, each of these polynomials fulfills the boundary conditions (3.6). The orthogonality relation for \(\{\phi_{i}(x)\}_{i\ge 0}\) is
$$ \int_{0}^{\ell}\frac{\phi_{i}(x) \phi_{j}(x)\, dx}{x^{5/2}(\ell-x)^{5/2}} =h_{i}= \textstyle\begin{cases} \frac{\pi}{\kappa_{i}},&i=j,\\ 0,&i\neq j. \end{cases} $$
(4.3)
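The orthogonality relation (4.3) can be checked numerically. Under the substitution \(x=\ell(1+\cos\theta)/2\) one has \(T_{i}^{\ell}(x)=\cos(i\theta)\) and \(x(\ell-x)=(\ell/2)^{2}\sin^{2}\theta\), so the weighted integral collapses to \(\int_{0}^{\pi}\cos(i\theta)\cos(j\theta)\,d\theta\), independently of \(\ell\). A minimal sketch, assuming the standard Chebyshev normalization \(h_{0}=\pi\), \(h_{i}=\pi/2\) for \(i\ge1\) (i.e. \(\kappa_{0}=1\), \(\kappa_{i}=2\)):

```python
import math

def weighted_inner(i, j, n=100000):
    # Numerical check of (4.3): with x = ell*(1+cos(theta))/2 the integrand
    # phi_i*phi_j / (x^{5/2}(ell-x)^{5/2}) dx reduces exactly to
    # cos(i*theta)*cos(j*theta) dtheta on [0, pi] (midpoint rule below).
    dt = math.pi / n
    return sum(math.cos(i * (m + 0.5) * dt) * math.cos(j * (m + 0.5) * dt)
               for m in range(n)) * dt
```

In particular, the computed \(h_{i}\) does not depend on \(\ell\).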
Now, we define the following two spaces:
$$\begin{aligned}& V= \bigl\{ {y\in H_{w(x,t)}^{2}(\Omega):y(0,t)=y( \ell,t)=0, t\in[0,\tau ]} \bigr\} , \\& V_{M} = \operatorname{span} \bigl\{ \phi_{i}(x) \psi_{j}(t): i,j=0,1,\ldots,M \bigr\} , \end{aligned}$$
(4.4)
where \(H_{w(x,t)}^{2}(\Omega)\), \(\Omega=(0,\ell)\times(0,\tau]\) is the Sobolev space defined in [16], and
$$w(x,t)=\frac{1}{x^{5/2}(\ell-x)^{5/2}t^{1/2}(\tau-t)^{1/2}}. $$
Now, let \(g(x,t)\in V\). Then this function can be expanded in the following double expansion:
$$ g(x,t)=\sum_{i=0}^{\infty}\sum _{j=0}^{\infty} c_{ij} \phi_{i}(x) \psi_{j}(t), $$
(4.5)
and the coefficients \(c_{ij}\) are given by the formula
$$ c_{ij}=\frac{1}{h_{i}h_{j}} \int_{0}^{\tau} \int_{0}^{\ell}\frac {g(x,t) \phi_{i}(x) \psi_{j}(t)}{x^{5/2}(\ell-x)^{5/2}t^{1/2}(\tau-t)^{1/2}}\,dx\,dt. $$
(4.6)
Numerically, \(g(x,t)\) can be approximated by the truncated double series
$$ g(x,t)\approx g_{M}(x,t)=\sum_{i=0}^{M} \sum_{j=0}^{M} c_{ij} \phi_{i}(x) \psi_{j}(t). $$
(4.7)
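As an illustration of (4.6), the coefficients of a simple member of \(V\) can be computed by quadrature. The sketch below takes the sample function \(g(x,t)=x(\ell-x)\,t=\phi_{0}(x)\,t\); since \(t=\frac{\tau}{2}(\psi_{0}(t)+\psi_{1}(t))\), the only nonzero coefficients should be \(c_{00}=c_{01}=\tau/2\). The normalizations \(h_{0}=\pi\), \(h_{j}=\pi/2\) for \(j\ge1\) in both variables are an assumption carried over from the classical Chebyshev weight.

```python
import math

ell, tau = 2.0, 3.0

def coeff(i, j, n=4000):
    # c_ij from (4.6) for the sample g(x,t) = x(ell-x)*t = phi_0(x)*t.
    # After x = ell(1+cos th)/2 and t = tau(1+cos ph)/2, the weighted
    # integrand factorizes into cos(i th) times t(ph)*cos(j ph).
    hi = math.pi if i == 0 else math.pi / 2
    hj = math.pi if j == 0 else math.pi / 2
    d = math.pi / n
    sx = sum(math.cos(i * (m + 0.5) * d) for m in range(n)) * d
    st = sum(tau * (1 + math.cos((m + 0.5) * d)) / 2 * math.cos(j * (m + 0.5) * d)
             for m in range(n)) * d
    return sx * st / (hi * hj)
```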
In the following, we are going to state and prove three important theorems concerned with the basis functions \(\phi_{i}(x)\) and \(\psi_{j}(t)\). In the first, we give a new formula for the fractional derivatives of the polynomials \(\phi_{i}(x)\) in the sense of Riemann-Liouville, while the second gives the Riesz fractional derivatives for the same basis. In the third theorem, an integral formula for \(\psi_{j}(t)\) is given.
Theorem 1
Let \(\alpha \in(1,2)\). Then the following fractional derivative (in the Riemann-Liouville sense) relation is valid:
$$ \begin{aligned}[b] {}_{0}^{R}D_{x}^{\alpha} \phi_{i}(x) ={}&i\sum_{k=0}^{i } \frac{(-1)^{i+k+1} (k+1) (i+k-1)! \Gamma(\alpha-k-1) \sin(\pi (\alpha-k))}{\pi (\frac{1}{2} )_{k} (i-k)! \ell^{k-1}} \\ &\times \biggl( x^{-\alpha +k+1}-\frac{(k+2)}{(-\alpha+k+2) \ell} x^{-\alpha +k+2} \biggr).\end{aligned} $$
(4.8)
Proof
The power form representation for \(T_{i}^{\ell}(x)\) given in (2.11) enables one to write \(\phi_{i}(x)\) as
$$ \phi_{i}(x)=i\sum_{k=0}^{i} \frac{(-1)^{i-k} (i+k-1)! 2^{2k}}{ (2k)! (i-k)! \ell^{k-1}}x^{k+1} -i\sum_{k=0}^{i} \frac{(-1)^{i-k} (i+k-1)! 2^{2k}}{ (2k)! (i-k)! \ell^{k}}x^{k+2}. $$
(4.9)
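The power form (4.9) can be validated against the trigonometric definition \(T_{i}^{\ell}(x)=\cos(i\arccos(2x/\ell-1))\). A minimal sketch (valid for \(i\ge1\), since the leading factor \(i\) makes the formula degenerate at \(i=0\)):

```python
import math

def T_shift(i, x, ell):
    # shifted Tchebyshev polynomial T_i^ell(x) on [0, ell]
    return math.cos(i * math.acos(2 * x / ell - 1))

def phi_power(i, x, ell):
    # power form (4.9) of phi_i(x) = x(ell-x) T_i^ell(x), i >= 1
    s1 = sum((-1) ** (i - k) * math.factorial(i + k - 1) * 4 ** k
             / (math.factorial(2 * k) * math.factorial(i - k) * ell ** (k - 1))
             * x ** (k + 1) for k in range(i + 1))
    s2 = sum((-1) ** (i - k) * math.factorial(i + k - 1) * 4 ** k
             / (math.factorial(2 * k) * math.factorial(i - k) * ell ** k)
             * x ** (k + 2) for k in range(i + 1))
    return i * (s1 - s2)

ell = 2.0
```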
If the operator \({}_{0}^{R}D_{x}^{\alpha}\) is applied to both sides of (4.9), and making use of the formula
$${}_{0}^{R}D_{x}^{\alpha} x^{k}=\frac{k!}{\Gamma (1+k-\alpha )}x^{k-\alpha }, $$
then we get
$$ \begin{aligned}[b] {}_{0}^{R}D_{x}^{\alpha} \phi_{i}(x)={}& i\sum_{k=0}^{i} \frac{(-1)^{i-k} (i+k-1)! 2^{2k} (k+1)!}{ (2k)! (i-k)! \Gamma(k+2-\alpha) \ell^{k-1}}x^{k+1-\alpha}\\ & -i\sum_{k=0}^{i} \frac{(-1)^{i-k} (i+k-1)! 2^{2k} (k+2)!}{ (2k)! (i-k)! \Gamma(k+3-\alpha) \ell^{k}}x^{k+2-\alpha}. \end{aligned}$$
(4.10)
If we make use of the relation
$$\Gamma(\xi)\Gamma(1-\xi)=\frac{\pi}{\sin(\xi \pi)}, $$
then after performing some rather lengthy manipulations, we get the desired equation (4.8). □
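The "rather lengthy manipulations" amount to applying the reflection formula with \(\xi=\alpha-k-1\), which gives \(\frac{1}{\Gamma(k+2-\alpha)}=-\frac{\Gamma(\alpha-k-1)\sin(\pi(\alpha-k))}{\pi}\), together with the identity \((2k)!=4^{k}\,k!\,(\frac{1}{2})_{k}\). The sketch below checks numerically that the coefficient of \(x^{k+1-\alpha}\) in (4.10) coincides with the corresponding coefficient in (4.8):

```python
import math

def poch_half(k):
    # Pochhammer symbol (1/2)_k = Gamma(k + 1/2) / Gamma(1/2)
    return math.gamma(k + 0.5) / math.gamma(0.5)

def coeff_410(i, k, alpha, ell):
    # coefficient of x^{k+1-alpha} in the first sum of (4.10)
    return (i * (-1) ** (i - k) * math.factorial(i + k - 1) * 4 ** k
            * math.factorial(k + 1)
            / (math.factorial(2 * k) * math.factorial(i - k)
               * math.gamma(k + 2 - alpha) * ell ** (k - 1)))

def coeff_48(i, k, alpha, ell):
    # the same coefficient as it appears in (4.8), after the reflection formula
    return (i * (-1) ** (i + k + 1) * (k + 1) * math.factorial(i + k - 1)
            * math.gamma(alpha - k - 1) * math.sin(math.pi * (alpha - k))
            / (math.pi * poch_half(k) * math.factorial(i - k) * ell ** (k - 1)))
```

Note that \(\Gamma(\alpha-k-1)\) is evaluated at negative non-integer arguments for \(k\ge1\), which `math.gamma` supports.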
Note 3
By writing \(a_{k}\lessapprox b_{k}\), we mean that there exists a generic constant C such that \(a_{k}< C\, b_{k}\) for large k.
Lemma 1
For \(x\in[0,1]\) and \(0<\alpha<1\), we have
$$ \big| {}_{0}^{R}D_{x}^{\alpha} \phi_{i}(x)\big|\lessapprox i. $$
(4.11)
Proof
From relation (2.7) and knowing that \(\phi_{i}(0)=0\), we have
$$\bigl({}_{0}^{R}D_{x}^{\alpha} \phi_{i} \bigr) (x)= \bigl({}_{0}^{C}D_{x}^{\alpha} \phi_{i} \bigr) (x), $$
so it suffices to prove the lemma for Caputo’s definition:
$$ \bigl({}_{0}^{C}D_{x}^{\alpha} \phi_{i} \bigr) (x)=\frac{1}{\Gamma (1-\alpha)} \int_{0}^{x}(x-\tau )^{-\alpha} \phi'_{i}(\tau)\,d\tau. $$
(4.12)
Noting that \(\phi'_{i}(x)=(1-2x)\cos(i \theta)+i\sqrt{x-x^{2}} \sin (i \theta)\), where \(\theta=\arccos(2x-1)\), and since \(\sqrt{x-x^{2}}\leq\frac{1}{2}\), we get
$$\big| {}_{0}^{C}D_{x}^{\alpha} \phi_{i}(x)\big|\leq\frac{1+\frac{i}{2}}{\Gamma (1-\alpha)} \int_{0}^{x}(x-\tau)^{-\alpha}\,d\tau= \frac{x^{1-\alpha }(1+\frac{i}{2})}{\Gamma (2-\alpha)}, $$
and since \(x^{1-\alpha}\leq1\), then the lemma is proved. □
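The key bound in the proof, \(|\phi'_{i}(x)|\le 1+\frac{i}{2}\) on \([0,1]\) (with \(\ell=1\)), can be probed numerically by central differences; a sketch (the tolerance absorbs finite-difference error near the endpoints):

```python
import math

def phi(i, x):
    # phi_i on [0,1]: x(1-x) T_i(2x-1), with T_i(cos a) = cos(i a)
    a = max(-1.0, min(1.0, 2 * x - 1))
    return x * (1 - x) * math.cos(i * math.acos(a))

def max_abs_derivative(i, n=2000, h=1e-6):
    # max |phi_i'| over an interior grid, via central differences
    return max(abs(phi(i, m / n + h) - phi(i, m / n - h)) / (2 * h)
               for m in range(1, n))
```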
Lemma 2
For \(x\in[0,1]\) and \(1<\gamma<2\), we have
$$ \big| {}_{0}^{R}D_{x}^{\gamma} \phi_{i}(x)\big|\lessapprox i^{2}. $$
(4.13)
Proof
The proof of this lemma is similar to the proof of Lemma 1. □
Theorem 2
For \(\beta\in(0,2)\setminus\{1\}\), the following Riesz fractional derivative relation is valid:
$$ \begin{aligned}[b]\frac{\partial ^{\beta} \phi_{i}(x)}{\partial |x|^{\beta}}& =i \sum _{k=0}^{i }\frac{(i + k-1 )! (k+1)! \Gamma(\beta- k - 2) \sec(\frac{\pi \beta}{2}) \sin(\pi(\beta- k))}{2 \pi (2k)! (i - k)! \ell^{k}} \\ &\quad{} \times (-4)^{k} \bigl((\ell-x)^{1 + k - \beta} \bigl(-(2 + k) x + \ell \beta \bigr) + (-1)^{i} x^{1 + k - \beta} \bigl((2 + k) (-\ell+ x) +\ell \beta \bigr) \bigr). \end{aligned} $$
(4.14)
Proof
Based on the two analytic forms of \(T_{i}^{\ell}(x)\) given in (2.11) and (2.12) and if we perform similar manipulations as in the proof of Theorem 1, then we get the desired result. □
Similarly, we can prove the following estimates for the Riesz fractional derivatives of \(\phi_{i}(x)\).
Lemma 3
For \(x\in[0,1]\) and \(0<\beta<1\), we have
$$ \biggl\vert \frac{\partial ^{\beta} \phi_{i}(x)}{\partial |x|^{\beta}} \biggr\vert \lessapprox i. $$
(4.15)
Lemma 4
For \(x\in[0,1]\) and \(1<\gamma<2\), we have
$$ \biggl\vert \frac{\partial ^{\gamma}\phi_{i}(x)}{\partial |x|^{\gamma}} \biggr\vert \lessapprox i^{2}. $$
(4.16)
Theorem 3
If \(\psi_{j}(t)\) is defined as in (4.2), then the following integral formula holds:
$$ \int_{0}^{t}\psi_{j}(s)\,ds=j\sum _{k=0}^{j} \frac{(-1)^{j-k} (j+k-1)! 2^{2k}}{(k+1) (j-k)! (2k)! \tau^{k}}t^{k+1}. $$
(4.17)
Proof
The proof is easily obtained by integrating relation (2.11) over the interval \([0,t]\). □
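Formula (4.17) can be confirmed by comparing it with a direct quadrature of \(\psi_{j}(s)=\cos(j\arccos(2s/\tau-1))\); as with (4.9), the leading factor \(j\) restricts the formula to \(j\ge1\) (for \(j=0\) the integral is simply \(t\)). A sketch:

```python
import math

def psi(j, s, tau):
    # psi_j(s) = T_j^tau(s) on [0, tau]
    a = max(-1.0, min(1.0, 2 * s / tau - 1))
    return math.cos(j * math.acos(a))

def integral_417(j, t, tau):
    # closed form (4.17), valid for j >= 1
    return j * sum((-1) ** (j - k) * math.factorial(j + k - 1) * 4 ** k
                   / ((k + 1) * math.factorial(j - k) * math.factorial(2 * k)
                      * tau ** k) * t ** (k + 1) for k in range(j + 1))

def integral_quad(j, t, tau, n=100000):
    # midpoint-rule quadrature of psi_j over [0, t]
    h = t / n
    return sum(psi(j, (m + 0.5) * h, tau) for m in range(n)) * h
```

For instance, \(j=1\) reproduces \(\int_{0}^{t}(2s/\tau-1)\,ds=t^{2}/\tau-t\).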
4.2 Numerical algorithms for handling equation (3.11)
This section is devoted to describing in detail two numerical algorithms for handling equation (3.11). The first algorithm depends on the application of the typical collocation method, while the second depends on the application of the Petrov-Galerkin method. The main idea behind the two proposed algorithms is based on making use of Theorems 1 and 3 along with the application of the collocation and Petrov-Galerkin methods in order to transform equation (3.11) into a system of linear or nonlinear algebraic equations in the unknown expansion coefficients \(c_{ij}\). The linear system is solved using Gauss elimination, and the nonlinear one is solved via Newton’s iterative method.
4.2.1 The collocation approach
Consider the following approximate solution of equation (3.11):
$$ g_{M}(x,t)=\sum_{i=0}^{M}\sum _{j=0}^{M} c_{ij} \phi_{i}(x) \psi_{j}(t). $$
(4.18)
Now, let \(v(x,t)\) and \(k(x,t)\in V\). Then these two functions can be expanded in the following double expansions:
$$\begin{aligned}& v(x,t)=\sum_{p=0}^{\infty}\sum _{q=0}^{\infty} v_{p q} T_{p}^{\ell}(x) T_{q}^{\tau}(t), \end{aligned}$$
(4.19)
$$\begin{aligned}& k(x,t)=\sum_{p=0}^{\infty}\sum _{q=0}^{\infty} k_{p q} T_{p}^{\ell}(x) T_{q}^{\tau}(t), \end{aligned}$$
(4.20)
and the coefficients \(v_{p q}\) and \(k_{p q}\) are given by the formulas
$$\begin{aligned}& v_{p q}=\frac{1}{h_{p}h_{q}} \int_{0}^{\tau} \int_{0}^{\ell}\frac {v(x,t) T_{p}^{\ell}(x) T_{q}^{\tau}(t)}{x^{1/2}(\ell -x)^{1/2}t^{1/2}(\tau-t)^{1/2}}\,dx\,dt, \end{aligned}$$
(4.21)
$$\begin{aligned}& k_{p q}=\frac{1}{h_{p}h_{q}} \int_{0}^{\tau} \int_{0}^{\ell}\frac {k(x,t) T_{p}^{\ell}(x) T_{q}^{\tau}(t)}{x^{1/2}(\ell -x)^{1/2}t^{1/2}(\tau-t)^{1/2}}\,dx\,dt. \end{aligned}$$
(4.22)
Now consider the following approximations:
$$ v(x,s)\approx v_{M}(x,s)=\sum_{p=0}^{M} \sum_{q=0}^{M} v_{p q} T_{p}^{\ell}(x) T_{q}^{\tau}(s), $$
(4.23)
and
$$ k(x,s)\approx k_{M}(x,s)=\sum_{p=0}^{M} \sum_{q=0}^{M} k_{p q} T_{p}^{\ell}(x) T_{q}^{\tau}(s). $$
(4.24)
In order to apply the collocation method, and due to (4.18), we note that the residual of (3.11) takes the form
$$ \begin{aligned}[b] R(x,t)={}&\sum_{i=0}^{M} \sum_{j=0}^{M}c_{ij} \phi_{i}(x) \psi_{j}(t)+ \sum_{i=0}^{M} \sum_{j=0}^{M} c_{ij} {}_{0}^{R}D_{x}^{\beta} \phi_{i}(x) \int_{0}^{t} v(x,s) \psi_{j}(s)\,ds \\ &- \sum_{i=0}^{M}\sum _{j=0}^{M} c_{ij} {}_{0}^{R}D_{x}^{\gamma} \phi_{i}(x) \int_{0}^{t} k(x,s) \psi_{j}(s) \,ds-F(x,t,g), \end{aligned} $$
(4.25)
and therefore
$$ \begin{aligned}[b] R(x,t)={}&\sum_{i=0}^{M} \sum_{j=0}^{M}c_{ij} \phi_{i}(x) \psi_{j}(t)+ \sum_{i,j=0}^{M} \sum_{p,q=0}^{M} c_{ij} v_{p q} T_{p}^{\ell}(x) {}_{0}^{R}D_{x}^{\beta} \phi_{i}(x) \int_{0}^{t} T_{q}^{\tau}(s) \psi_{j}(s)\,ds \\ &- \sum_{i,j=0}^{M}\sum _{p,q=0}^{M} c_{ij} k_{p q} T_{p}^{\ell}(x) {}_{0}^{R}D_{x}^{\gamma} \phi_{i}(x) \int_{0}^{t} T_{q}^{\tau}(s) \psi_{j}(s)\,ds-F(x,t,g). \end{aligned} $$
(4.26)
Now, the two power form representations of the polynomials \(\phi_{i}(x)\) and \(\psi_{j}(t)\), together with relations (2.10), (4.8) and (4.17), enable us to write the residual \(R(x,t)\) in the following form:
$$\begin{aligned} R(x,t)={}&\sum_{i=0}^{M} \sum_{j=0}^{M}\sum _{k=0}^{i}\sum_{s=0}^{j} i j c_{ij} \frac{(-1)^{i+j-k-s} 2^{2k} 2^{2s} (i+k-1)! (j+s-1)!}{(2k)! (2s)! (i-k)! (j-s)! \tau^{s} \ell^{k}} (\ell-x)x^{k+1} t^{s} \\ &+ \sum_{i,j=0}^{M}\sum _{p,q=0}^{M} \sum_{k=0}^{i} \sum_{s=0}^{p} i p c_{i j} v_{p q} \\ &\times\frac{(-1)^{i+k+p-s+1} (k+1) 2^{2s} (p+s-1)! (i+k-1)! \Gamma(\beta-k-1) \sin(\pi (\beta-k))}{2 \sqrt{\pi} (p-s)! (2s)! (i-k)! \Gamma(k+\frac{1}{2}) \ell^{s+k-1}} \\ & \times \biggl( x^{-\beta+s +k+1}-\frac{(k+2)}{(-\beta+k+2) \ell} x^{-\beta+s+k+2} \biggr) \\ & \times \Biggl( \zeta\sum_{s=0}^{\zeta} \frac{(-1)^{\zeta-s} (\zeta+s-1)! 2^{2s}}{(s+1) (\zeta-s)! (2s)! \tau^{s}}t^{s+1}+\eta\sum_{s=0}^{\eta} \frac{(-1)^{\eta-s} (\eta+s-1)! 2^{2s}}{(s+1) (\eta-s)! (2s)! \tau^{s}}t^{s+1} \Biggr) \\ &-\sum_{i,j=0}^{M}\sum _{p,q=0}^{M}\sum_{k=0}^{i} \sum_{s=0}^{p} i p c_{i j} k_{p q} \\ &\times \frac{(-1)^{i+k+p-s+1} (k+1) 2^{2s} (p+s-1)! (i+k-1)! \Gamma(\gamma-k-1) \sin(\pi (\gamma -k))}{2 \sqrt{\pi} (p-s)! (2s)! (i-k)! \ell^{s+k-1} \Gamma(k+\frac{1}{2})} \\ & \times \biggl( x^{-\gamma+s +k+1}-\frac{(k+2)}{(-\gamma+k+2) \ell} x^{-\gamma+s+k+2} \biggr) \\ & \times \Biggl( \zeta\sum_{s=0}^{\zeta} \frac{(-1)^{\zeta-s} (\zeta+s-1)! 2^{2s}}{(s+1) (\zeta-s)! (2s)! \tau^{s}}t^{s+1} +\eta\sum_{s=0}^{\eta} \frac{(-1)^{\eta-s} (\eta+s-1)! 2^{2s}}{(s+1) (\eta-s)! (2s)! \tau^{s}}t^{s+1} \Biggr) \\ & -F(x,t,g), \end{aligned}$$
(4.27)
where \(\zeta=|q-j|\) and \(\eta=q+j\). Now, we apply the typical collocation method to equation (4.27). In fact, if this equation is collocated at the following set of points: \(\{ (\frac{i \ell}{M+2},\frac{j \tau}{M+2} ): 1\leq i,j\leq M+1 \}\), then we obtain a system of algebraic equations of dimension \((M+1)^{2}\) in the unknown expansion coefficients \(\{c_{ij}: 0\leq i,j\leq M\}\). This linear algebraic system is solved by the Gaussian elimination procedure or by any suitable solver, while the nonlinear system is solved with the aid of Newton’s iterative method. Hence, the desired spectral solution can be obtained.
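A structural sketch of this step in Python (the matrix `A` and right-hand side `b` below are hypothetical stand-ins for the assembled collocation system; only the node layout and the count of equations follow the text):

```python
import numpy as np

# Collocation nodes of Section 4.2.1 for truncation order M (a sketch)
M, ell, tau = 4, 1.0, 1.0
nodes = [(i * ell / (M + 2), j * tau / (M + 2))
         for i in range(1, M + 2) for j in range(1, M + 2)]
# one collocation equation per unknown coefficient c_ij:
assert len(nodes) == (M + 1) ** 2

# For a linear F, collocating the residual (4.27) at these nodes yields
# a square system A c = b; a random diagonally dominant A stands in here.
rng = np.random.default_rng(0)
n = (M + 1) ** 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
c = np.linalg.solve(A, b)   # the Gauss elimination step of the text
```

For a nonlinear \(F\), this linear solve would be replaced by Newton iterations on the collocated residual.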
4.2.2 Petrov-Galerkin approach
This section is devoted to introducing an alternative algorithm for finding a spectral solution to equation (3.11). The main advantage of employing the Petrov-Galerkin method is its flexibility in choosing test functions, since we can choose a suitable set of functions that is not identical to the set of trial functions. Now, we choose the test functions to be
$$\rho_{mn}(x,t)=x^{m} t^{n}. $$
Then the application of the Petrov-Galerkin method leads to
$$ \int_{0}^{\tau} \int_{0}^{\ell}R(x,t) \rho_{mn}(x,t)\,dx \,dt=0,\quad 1\leq m,n\leq M+1, $$
(4.28)
where \(R(x,t)\) is defined as in (4.27).
Now, equation (4.28) can be written alternatively as
$$\begin{aligned} &\sum_{i=0}^{M} \sum_{j=0}^{M}\sum _{k=0}^{i}\sum_{s=0}^{j} i j c_{ij} \frac{(-1)^{i+j-k-s} 2^{2k+2s} (i+k-1)! (j+s-1)!\ell^{m+3} \tau^{n+1}}{(2k)! (2s)! (i-k)! (j-s)! (k+m+2) (k+m+3) (n+s+1)} \\ &\qquad{}+ \sum_{i,j=0}^{M}\sum _{p,q=0}^{M} \sum_{k=0}^{i} \sum_{s=0}^{j} i j p c_{i j} v_{p q} \\&\qquad\times{} \frac{(-1)^{i+j-s+k+1} (k+1) (i+k-1)! \Gamma(\beta-k-1) \sin(\pi (\beta -k)) (j+s-1)! 2^{2s}}{(s+1)(j-s)! (2s)! \tau^{s } \sqrt{\pi} (i-k)! \ell^{k-1} \Gamma(k+\frac{1}{2})} \\ &\qquad{}\times\sum_{s=0}^{p} \frac {(-1)^{p-s} 2^{2s} (p+s-1)! }{(p-s)! (2s)! } \\ &\qquad{}\times \Biggl( \zeta\sum_{s=0}^{\zeta} \frac{(-1)^{\zeta-s} (\zeta+s-1)! 2^{2s}\tau^{n+2}}{(s+1) (\zeta-s)! (2s)! (2+n+s)} +\eta\sum_{s=0}^{\eta} \frac{(-1)^{\eta-s} (\eta+s-1)! 2^{2s}\tau^{n+2}}{(s+1) (\eta-s)! (2s)! (2+n+s)} \Biggr) \\ &\qquad{}\times \biggl(\frac{\ell^{-\beta+k+m+s+2}}{-\beta +k+m+s+2}-\frac{(k+2)\ell^{-\beta+k+m+s+2}}{(-\beta +k+2) (-\beta+k+m+s+3)} \biggr) \\ &\qquad{}-\sum_{i,j=0}^{M}\sum _{p,q=0}^{M}\sum_{k=0}^{i} \sum_{s=0}^{j} i j p c_{i j} k_{p q} \\ &\qquad{}\times \frac{(-1)^{i+k-s+1} (k+1) (i+k-1)! (j+s-1)! 2^{2s} \Gamma(\gamma-k-1) \sin(\pi (\gamma -k))}{(s+1) (j-s)! (2s)! \tau^{s} \sqrt{\pi} (i-k)! \ell^{s+k-1} \Gamma(k+\frac{1}{2})} \\ &\qquad{}\times\sum_{s=0}^{p} \frac{(-1)^{p-s} 2^{2s} (p+s-1)! }{(p-s)! (2s)! } \\ &\qquad{}\times \Biggl( \zeta\sum_{s=0}^{\zeta} \frac{(-1)^{\zeta-s} (\zeta+s-1)! 2^{2s}\tau^{n+2}}{(s+1) (\zeta-s)! (2s)! (n+s+2)} +\eta\sum_{s=0}^{\eta} \frac{(-1)^{\eta-s} (\eta+s-1)! 2^{2s}\tau^{n+2}}{(s+1) (\eta-s)! (2s)! (n+s+2)} \Biggr) \\ &\qquad{}\times \biggl( \frac{\ell^{-\gamma+k+m+s+2}}{-\gamma +k+m+s+2}-\frac{(k+2)\ell^{-\gamma+k+m+s+2}}{(-\gamma +k+2) (-\gamma+k+m+s+3)} \biggr) -F_{m,n}=0, \end{aligned}$$
(4.29)
where \(F_{m,n}= \int_{0}^{\tau} \int_{0}^{\ell}F(x,t,g) \rho_{mn}(x,t) \,dx\,dt\). Hence, a system of equations of dimension \({(M+1)^{2}}\) in the unknown expansion coefficients \(c_{ij}\) is generated. This system can be efficiently solved by the Gauss elimination technique, and hence an approximate spectral solution can be obtained.
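Each spatial factor in (4.29) comes from a monomial moment of the residual; for instance, multiplying \(x^{-\beta+s+k+1}\) by the test factor \(x^{m}\) and integrating over \([0,\ell]\) gives \(\ell^{-\beta+k+m+s+2}/(-\beta+k+m+s+2)\). A quick numerical sanity check of this monomial identity (the sample exponents are illustrative):

```python
import math

def moment_closed(q, ell):
    # closed form of the moment integral: int_0^ell x^q dx, q > -1
    return ell ** (q + 1) / (q + 1)

def moment_quad(q, ell, n=200000):
    # midpoint-rule quadrature of the same integral
    h = ell / n
    return sum(((m + 0.5) * h) ** q for m in range(n)) * h
```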
4.3 Numerical algorithms for handling equation (3.23)
In this section, we describe how the collocation and Petrov-Galerkin procedures can be employed to handle equation (3.23). As in Section 4.2, they are combined with Theorems 2 and 3 to convert equation (3.23) into a system of linear algebraic equations in the unknown expansion coefficients \(c_{ij}\).
4.3.1 The collocation approach
Now, consider the following approximate spectral solution of equation (3.23):
$$ g_{M}(x,t)=\sum_{i=0}^{M}\sum _{j=0}^{M} c_{ij} \phi_{i}(x) \psi_{j}(t). $$
(4.30)
In order to apply the collocation method, and due to (4.30), we note that the residual of (3.23) takes the form
$$ \begin{aligned}[b] R(x,t)={}&\sum_{i=0}^{M} \sum_{j=0}^{M}c_{ij} \phi_{i}(x) \psi_{j}(t)+ \kappa_{\beta} \sum _{i=0}^{M}\sum _{j=0}^{M} c_{ij} \frac{\partial ^{\beta} \phi_{i}(x)}{\partial |x|^{\beta}} \int _{0}^{t}\psi_{j}(s)\,ds \\ &- \kappa_{\gamma} \sum_{i=0}^{M} \sum_{j=0}^{M} c_{ij} \frac{\partial ^{\gamma} \phi_{i}(x)}{\partial |x|^{\gamma}} \int _{0}^{t}\psi_{j}(s)\,ds-F(x,t,g). \end{aligned} $$
(4.31)
The latter formula, with the aid of the two power form representations of the polynomials \(\phi_{i}(x)\) and \(\psi_{j}(t)\) and relations (4.14) and (4.17), enables one to write the residual \(R(x,t)\) in the following form:
$$\begin{aligned}[b] R(x,t)={}&\sum_{i=0}^{M} \sum_{j=0}^{M}\sum _{k=0}^{i}\sum_{s=0}^{j} i j c_{ij} \frac{(-1)^{j-s+k} (i+k-1)! (j+s-1)! 2^{2s} \sqrt {\pi}}{\Gamma(k+\frac{1}{2}) (i-k)! (j-s)! k! (2s)! \ell^{k} \tau^{s}} x (\ell-x)^{k+1} t^{s} \\ &+ \kappa_{\beta} \sum_{i=0}^{M} \sum_{j=0}^{M}\sum _{k=0}^{i} i c_{ij} \frac{(i + k - 1)! \Gamma(2 + k) \Gamma(\beta- k - 2) \sec(\frac{\pi \beta}{2}) \sin(\pi(\beta- k))}{2 \pi (2k)! (i - k)! \ell^{k}} \\ & \times \biggl( (-4)^{k} \bigl((-1)^{2k} (\ell-x)^{1 + k - \beta} \bigl(-(2 + k) x +\ell \beta \bigr) + (-1)^{i} x^{1 + k - \beta} \bigl((2 + k) (-\ell+ x) + \ell \beta \bigr) \bigr) \biggr) \\ &\times j\sum_{s=0}^{j} \frac{(-1)^{j-s} (j+s-1)! 2^{2s}}{(s+1) (j-s)! (2s)! \tau ^{s}}t^{s+1} \\ &-\kappa_{\gamma} \sum_{i=0}^{M} \sum_{j=0}^{M}\sum _{k=0}^{i} i c_{ij} \frac{(i + k - 1)! \Gamma(2 + k) \Gamma(\gamma- k - 2) \sec(\frac{\pi \gamma}{2}) \sin(\pi(\gamma- k))}{2 \pi (2k)! (i - k)! \ell^{k}} \\ & \times \biggl((-4)^{k} \bigl((-1)^{2k} (\ell-x)^{1 + k - \gamma} \bigl(-(2 + k) x +\ell \gamma \bigr) + (-1)^{i} x^{1 + k - \gamma} \bigl((2 + k) (-\ell+ x) +\ell \gamma \bigr) \bigr) \biggr) \\ &\times j\sum_{s=0}^{j} \frac{(-1)^{j-s} (j+s-1)! 2^{2s}}{(s+1) (j-s)! (2s)! \tau ^{s}}t^{s+1} -F(x,t,g). \end{aligned}$$
(4.32)
Now, we apply the typical collocation method. In fact, the residual \(R(x,t)\) is enforced to vanish at the following set of points: \(\{ (\frac{i \ell}{M+2},\frac{j \tau}{M+2} ): 1\leq i,j\leq M+1 \}\). Then we obtain a linear algebraic system of dimension \((M+1)^{2}\) in the unknown expansion coefficients \(\{c_{ij}: 0\leq i,j\leq M\}\), which is solved by a Gauss elimination solver.
4.3.2 Petrov-Galerkin approach
To apply the Petrov-Galerkin method to equation (3.23), we choose the test functions to be
$$\rho_{mn}(x,t)=x^{m} t^{n}, $$
and therefore the application of the Petrov-Galerkin method leads to
$$ \int_{0}^{\tau} \int_{0}^{\ell}R(x,t) \rho_{mn}(x,t)\,dx \,dt=0,\quad 1\leq m,n\leq M+1, $$
(4.33)
where \(R(x,t)\) is defined as in (4.32). Now, equation (4.33) can be written alternatively as
$$\begin{aligned}& \sum_{i=0}^{M} \sum_{j=0}^{M}\sum _{k=0}^{i}\sum_{s=0}^{j}i j c_{ij} \frac{(-1)^{j-s+k} (i+k-1)! (j+s-1)! 2^{2s} \sqrt{\pi}}{\Gamma(k+\frac{1}{2}) (i-k)! (j-s)! k! (2s)! \ell^{k} \tau^{s}} \int_{0}^{\ell} \int_{0}^{\tau} x^{m+1} ( \ell-x)^{k+1} t^{n+s}\,dt\,dx \\& \qquad{}+\kappa_{\beta}\sum _{i=0}^{M} \sum_{j=0}^{M} \sum_{k=0}^{i}\sum _{s=0}^{j} i j c_{ij} \\& \qquad{}\times\frac{(i + k - 1)! \Gamma(2 +k) \Gamma(\beta-k-2) \sec(\frac{\pi \beta}{2}) \sin(\pi (\beta- k)) {(-1)^{j-s} (j+s-1)! 2^{2s}}}{2 \pi (2k)! (i - k)! \ell^{k} (s+1) (j-s)! (2s)! \tau^{s}} \\& \qquad{}\times \int_{0}^{\ell} \int_{0}^{\tau} (-4)^{k} \bigl((-1)^{2k} ( \ell-x)^{1 + k - \beta} \bigl(-(2 + k) x +\ell \beta \bigr) + (-1)^{i} x^{1 + k - \beta} \bigl((2 + k) (-\ell+ x) +\ell \beta \bigr) \bigr) x^{m} t^{n+s+1}\,dt\,dx \\& \qquad{}-\kappa_{\gamma}\sum _{i=0}^{M} \sum_{j=0}^{M} \sum_{k=0}^{i}\sum _{s=0}^{j} i j c_{ij} \\& \qquad{}\times\frac{(i + k - 1)! \Gamma(2 +k) \Gamma(\gamma-k-2) \sec(\frac{\pi \gamma}{2}) \sin(\pi (\gamma- k)) {(-1)^{j-s} (j+s-1)! 2^{2s}}}{2 \pi (2k)! (i - k)! \ell^{k} (s+1) (j-s)! (2s)! \tau^{s}} \\& \qquad{}\times \int_{0}^{\ell} \int_{0}^{\tau} (-4)^{k} \bigl((-1)^{2k} ( \ell-x)^{1 + k - \gamma} \bigl(-(2 + k) x +\ell \gamma \bigr) + (-1)^{i} x^{1 + k - \gamma} \bigl((2 + k) (-\ell+ x) +\ell \gamma \bigr) \bigr) x^{m} t^{n+s+1}\,dt\,dx \\& \qquad{}-F_{m,n}=0, \end{aligned}$$
(4.34)
where
$$F_{m,n}= \int_{0}^{\tau} \int_{0}^{\ell}F(x,t,g) \rho_{mn}(x,t) \,dx\,dt. $$
Equation (4.34) generates a linear system of dimension \((M+1)^{2}\) in the unknown expansion coefficients \(c_{ij}\), which is efficiently solved using a Gauss elimination solver.