

A new class of operational matrices method for solving fractional neutral pantograph differential equations

Abstract

This paper uses new operational matrices of fractional integration to solve a class of fractional neutral pantograph delay differential equations. A fractional-order function space in which the exact solution lies is constructed, and a set of orthogonal basis functions is given. Using these bases, we reduce the fractional delay differential equation to a system of algebraic equations and obtain the approximate solution. Finally, we derive the Legendre operational matrix of fractional integration as an example and demonstrate the efficiency of the method.

1 Introduction

Fractional calculus is a generalization of classical calculus to arbitrary order. In recent years it has been applied extensively in fields such as viscoelastic mechanics, power-law fractal networks, and electronic circuits [1–3]. Owing to the memory and non-local character of fractional derivatives, many scholars use fractional differential equations to model complex phenomena more faithfully. On the other hand, operational matrices of fractional calculus, viewed as linear transformations, have received wide attention. Some operational matrices are obtained by approximating the integrals of orthogonal polynomials; others are obtained indirectly by means of other operational matrices. With the help of these matrices and orthogonal polynomials, a fractional differential or integral equation can be reduced to a system of algebraic equations, from which an approximate solution is obtained. Much research in this field has emerged, such as Legendre operational matrices [1–3], Chebyshev operational matrices [4], and block pulse operational matrices [5].

Fractional delay differential equations arise in many applications, such as automatic control, long transmission lines, economics, and biology [6]. The fractional pantograph equation, a particular class of fractional delay differential equations, plays an important role in explaining various phenomena [7, 8] and has therefore attracted a great deal of attention. The authors of [9] studied the existence of solutions of nonlinear fractional pantograph equations when the order of the derivative is smaller than 1. Yang and Huang [10] studied spectral-collocation methods for fractional pantograph delay-integro-differential equations, whereas Rahimkhani, Ordokhani, and Babolian [7] gave a numerical solution of fractional pantograph differential equations using a generalized fractional-order Bernoulli wavelet. However, only a few papers are devoted to the approximate solution of fractional neutral pantograph differential equations.

In this paper, we consider the fractional neutral pantograph differential equation

$$ D^{\alpha}_{C}y(x)=a(x)y(px)+b(x)D^{\gamma}_{C}y(px)+d(x)y(x)+g(x), $$
(1)

subject to the initial condition

$$ y(0)=0, $$
(2)

where \(0<p<1\), \(0<\gamma\leq\alpha\leq1\), and \(x\in[0,1]\); \(y(x)\) is the unknown function; \(a,b,d,g\in C[0,1]\) are known functions; and the fractional derivatives are understood in the Caputo sense. For simplicity of the proofs, we set \(y(0)=0\); otherwise we let \(v(x)=y(x)-y(0)\) and solve the fractional differential equation for the unknown function \(v(x)\), which satisfies \(v(0)=0\).

This paper is devoted to deriving a class of operational matrices based on different types of fractional orthogonal polynomials and to solving fractional neutral pantograph delay differential equations with them. In contrast to previous work, we construct fractional orthogonal polynomials suited to the order of the fractional differential or integral equation and thereby obtain better operational matrices. Thus we can find approximate solutions with high accuracy.

We organize this paper as follows: In Sect. 2, we give some basic definitions and useful lemmas. In Sect. 3, we introduce operational matrices of fractional integration based on fractional orthogonal basis functions. In Sect. 4, we give the numerical algorithms. In Sect. 5, a fractional Legendre operational matrix is given. In Sect. 6, two numerical examples are given to illustrate the applicability and accuracy of the proposed method.

2 Preliminaries

In this section, some preliminary results about the fractional integral and the Caputo fractional differential operator are recalled [11]. We then introduce the space in which the approximate solution of the fractional differential equation is sought. Throughout the paper, we assume \(\alpha>0\).

Definition 2.1

The Riemann–Liouville (R–L) fractional integral operator \(J_{0}^{\alpha}\) is given by

$$J_{0}^{\alpha}u(x)=\frac{1}{\Gamma(\alpha)} \int _{0}^{x}(x-s)^{\alpha-1}u(s)\,\mathrm{d}s, $$

where \(\Gamma(\alpha)=\int_{0}^{\infty}x^{\alpha-1}e^{-x}\,\mathrm{d}x\).
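As a quick numerical check of Definition 2.1 (our illustration, not part of the original paper), recall the closed form \(J_{0}^{\alpha}x^{k}=\frac{\Gamma(k+1)}{\Gamma(k+\alpha+1)}x^{k+\alpha}\); the following Python sketch compares it with the integral computed by quadrature (the helper name rl_integral is ours).

```python
# Sketch: evaluate the R-L fractional integral of Definition 2.1 by quadrature and
# compare it with the closed form for a monomial. Assumes SciPy is available.
from scipy.integrate import quad
from scipy.special import gamma

def rl_integral(u, alpha, x):
    """(J_0^alpha u)(x) = 1/Gamma(alpha) * int_0^x (x-s)^(alpha-1) u(s) ds."""
    # the 'alg' weight handles the integrable singularity (x-s)^(alpha-1) at s = x
    val, _ = quad(u, 0.0, x, weight='alg', wvar=(0.0, alpha - 1.0))
    return val / gamma(alpha)

alpha, k, x = 0.8, 3, 0.7
numeric = rl_integral(lambda s: s**k, alpha, x)
closed = gamma(k + 1) / gamma(k + alpha + 1) * x**(k + alpha)
print(numeric, closed)   # the two values agree to quadrature accuracy
```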

Lemma 2.1

For any \(u(x)\in L^{2}[0,1]\), \(J_{0}^{\alpha}u(x)\in L^{2}[0,1]\) holds.

Proof

$$\begin{aligned} &\int_{0}^{1}\bigl(J^{\alpha}_{0}u(x) \bigr)^{2}\,\mathrm{d}x \\ &\quad=\frac{1}{\Gamma ^{2}(\alpha)} \int_{0}^{1} \biggl( \int_{0}^{x}(x-t)^{\alpha -1}u(t)\,\mathrm{d}t \biggr)^{2}\,\mathrm{d}x \\ &\quad\leq \frac{1}{\Gamma^{2}(\alpha)} \int_{0}^{1} \biggl( \int _{0}^{x}(x-t)^{\alpha-1}\,\mathrm{d}t \biggr)\cdot \biggl( \int _{0}^{x}(x-t)^{\alpha-1}u^{2}(t)\, \mathrm{d}t \biggr)\,\mathrm{d}x \\ &\quad\leq \frac{1}{\alpha\Gamma^{2}(\alpha)} \int_{0}^{1} \biggl( \int _{0}^{x}(x-t)^{\alpha-1}u^{2}(t)\, \mathrm{d}t \biggr)\,\mathrm{d}x. \end{aligned}$$

Note that \(u\in L^{2}[0,1]\), and we can derive that \(u^{2}\in L^{1}[0,1]\) and \(\int_{0}^{x}(x-t)^{\alpha-1}u^{2}(t)\,\mathrm{d}t\in L^{1}[0,1]\). Therefore,

$$\int_{0}^{1}\bigl(J^{\alpha}_{0}u(x) \bigr)^{2}\,\mathrm{d}x< +\infty. $$

So \(J_{0}^{\alpha}u\in L^{2}[0,1]\) holds. □

Definition 2.2

The Caputo fractional differential operator \(D_{C}^{\alpha}\) is given by

$$D_{C}^{\alpha}u(x)=\frac{1}{\Gamma(n-\alpha)} \int _{0}^{x}(x-s)^{n-\alpha-1}u^{(n)}(s) \,\mathrm{d}s,\quad n-1< \alpha < n,n=\lceil\alpha\rceil. $$
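For later reference (a standard identity, not stated explicitly in the original but used implicitly in Sects. 4 and 5): for \(0<\alpha<1\) and \(u(x)=x^{k}\) with \(k\geq1\),

$$D_{C}^{\alpha}x^{k}=\frac{1}{\Gamma(1-\alpha)} \int_{0}^{x}(x-s)^{-\alpha}k s^{k-1}\,\mathrm{d}s=\frac{\Gamma(k+1)}{\Gamma(k-\alpha+1)}x^{k-\alpha}, $$

while \(D_{C}^{\alpha}c=0\) for any constant c, and \(D_{C}^{\alpha}J_{0}^{\alpha}u=u\) for every continuous u.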

Definition 2.3

Define the inner product and the norm \(\Vert \cdot \Vert \) on \(C[0,1]\), where \(\rho(x)\) is a fixed positive weight function on \([0,1]\), by

$$\begin{aligned} &\bigl(u(x),v(x)\bigr)= \int_{0}^{1}u(x)\cdot v(x)\cdot\rho(x)\,\mathrm{d}x, \\ &\bigl\Vert u(x) \bigr\Vert =\sqrt{\bigl(u(x),u(x)\bigr)}= \biggl( \int_{0}^{1}\bigl(u(x)\bigr)^{2}\cdot\rho (x)\,\mathrm{d}x \biggr)^{1/2},\quad \forall u(x),v(x)\in C[0,1]. \end{aligned}$$

Definition 2.4

Define \(C^{\alpha}[0,1]\triangleq\{J_{0}^{\alpha}u(x) \mid u\in C[0,1]\}\), equipped with the inner product and the norm \(\Vert \cdot \Vert _{\alpha}\) given by

$$\begin{aligned} &\bigl(u(x),v(x)\bigr)_{\alpha}= \int_{0}^{1}D^{\alpha}_{C}u(x)\cdot D^{\alpha}_{C}v(x)\rho(x)\,\mathrm{d}x, \\ &\bigl\Vert u(x) \bigr\Vert _{\alpha}=\sqrt{\bigl(u(x),u(x) \bigr)_{\alpha}}= \biggl( \int_{0}^{1}\bigl(D^{\alpha}_{C}u(x) \bigr)^{2}\cdot\rho(x)\,\mathrm{d}x \biggr)^{1/2},\quad \forall u(x),v(x)\in C^{\alpha}[0,1]. \end{aligned}$$

We can find that

$$\bigl(u(x),u(x)\bigr)_{\alpha}= \int_{0}^{1}\bigl(D^{\alpha}_{C}u(x) \bigr)^{2}\rho (x)\,\mathrm{d}x\geq0, $$

and \(u(x)=0\) when \((u(x),u(x))_{\alpha}=0\). In fact, there exists a \(u_{1}(x)\in C[0,1]\) such that \(u(x)=J_{0}^{\alpha}u_{1}(x)\). That is,

$$\int_{0}^{1}\bigl(D^{\alpha}_{C}u(x) \bigr)^{2}\rho(x)\,\mathrm{d}x= \int _{0}^{1}u_{1}^{2}(x)\rho(x) \,\mathrm{d}x=0. $$

Therefore we get \(u_{1}(x)\equiv0\) and \(u(x)=J_{0}^{\alpha }u_{1}(x)\equiv0\). It is easy to see that the other conditions of the inner product definition are also true. Hence, \(C^{\alpha}[0,1]\) is an inner product space.

Denote by X the set of all polynomials on \([0,1]\); then X is a dense subset of \(C[0,1]\). Denote \(J_{0}^{\alpha}X=\{J_{0}^{\alpha}P \mid P\in X\}\), and let \(\tilde{H}\) be the completion of the inner product space \((C^{\alpha}[0,1], \Vert \cdot \Vert _{\alpha})\).

Lemma 2.2

(1) X is a dense subset of \((L^{2}[0,1], \Vert \cdot \Vert )\);

(2) \(J_{0}^{\alpha}X\) is a dense subset of \((\tilde{H}, \Vert \cdot \Vert _{\alpha})\).

Proof

(1) Take any \(u(x)\in C[0,1]\) and \(\varepsilon>0\). By the Weierstrass approximation theorem there exists a \(p\in X\) such that \(\Vert u-p \Vert _{C}<\varepsilon\). We have

$$\Vert u-p \Vert =\sqrt{(u-p,u-p)}= \biggl( \int_{0}^{1}(u-p)^{2}\cdot\rho(x)\,\mathrm{d}x \biggr)^{1/2}\leq A \Vert u-p \Vert _{C}< A \varepsilon, $$

where \(A=(\int_{0}^{1}\rho(x)\,\mathrm{d}x)^{1/2}\). Therefore X is a dense subset of \((C[0,1], \Vert \cdot \Vert )\). Since \((C[0,1], \Vert \cdot \Vert )\) is dense in \((L^{2}[0,1], \Vert \cdot \Vert )\), X is a dense subset of \((L^{2}[0,1], \Vert \cdot \Vert )\).

(2) For \(u(x)\in C^{\alpha}[0,1]\), there exists a \(u_{1}(x)\in C[0,1]\) such that \(u(x)=J_{0}^{\alpha}u_{1}(x)\). For any \(\varepsilon>0\), there exists a \(p\in X\) such that \(\Vert u_{1}-p \Vert _{C}<\varepsilon\). We have

$$\begin{aligned} \bigl\Vert u(x)-J_{0}^{\alpha}p \bigr\Vert _{\alpha}&= \biggl( \int _{0}^{1}\bigl[D_{C}^{\alpha} \bigl(u-J_{0}^{\alpha}p\bigr)\bigr]^{2}\cdot\rho(x)\, \mathrm {d}x \biggr)^{1/2} \\ &= \biggl( \int_{0}^{1}\bigl[D_{C}^{\alpha} \bigl(J_{0}^{\alpha }u_{1}-J_{0}^{\alpha}p \bigr)\bigr]^{2}\cdot\rho(x)\,\mathrm{d}x \biggr)^{1/2} \\ &= \biggl( \int_{0}^{1}(u_{1}-p)^{2}\cdot \rho(x)\,\mathrm{d}x \biggr)^{1/2} \\ &\leq \Vert u_{1}-p \Vert _{C}\cdot \biggl( \int_{0}^{1}\rho(x)\,\mathrm {d}x \biggr)^{1/2} < A\varepsilon, \end{aligned}$$

where \(A=(\int_{0}^{1}\rho(x)\,\mathrm{d}x)^{1/2}\). Therefore \(J_{0}^{\alpha}X\) is a dense subset of \((C^{\alpha}[0,1], \Vert \cdot \Vert _{\alpha})\). Since \((C^{\alpha}[0,1], \Vert \cdot \Vert _{\alpha})\) is dense in \((\tilde{H}, \Vert \cdot \Vert _{\alpha})\), \(J_{0}^{\alpha}X\) is a dense subset of \((\tilde{H}, \Vert \cdot \Vert _{\alpha})\). □

Lemma 2.3

\(\tilde{H}\subseteq L^{2}[0,1]\).

Proof

For any \(\tilde{f}\in\tilde{H}\), there exists a sequence \(\{f_{i}\}\subset C[0,1]\) such that \(\tilde{f}_{i}\triangleq J_{0}^{\alpha}f_{i}\in C^{\alpha}[0,1]\) and

$$ \Vert \tilde{f}_{i}-\tilde{f} \Vert _{\alpha}\rightarrow 0\quad (i \rightarrow\infty). $$

Therefore \(\Vert \tilde{f}_{i}-\tilde{f}_{j} \Vert _{\alpha }\rightarrow0\ (i,j\rightarrow\infty)\), that is,

$$ \lim_{i,j\rightarrow\infty} \int_{0}^{1}\bigl[D_{C}^{\alpha}( \tilde {f}_{i}-\tilde{f}_{j})\bigr]^{2}\rho(x)\, \mathrm{d}x=\lim_{i,j\rightarrow \infty} \int_{0}^{1}(f_{i}-f_{j})^{2} \rho(x)\,\mathrm{d}x=0. $$

Then \(\lim_{i,j\rightarrow\infty} \Vert f_{i}-f_{j} \Vert =0\). Because the space \(L^{2}[0,1]\) is complete, there exists an \(f\in L^{2}[0,1]\) such that \(f=\lim_{i\rightarrow\infty}f_{i}\). We have

$$\begin{aligned} &\tilde{f}=\lim_{i\rightarrow\infty}\tilde{f}_{i}=\lim _{i\rightarrow\infty}J_{0}^{\alpha}f_{i}=J_{0}^{\alpha} \Bigl(\lim_{i\rightarrow\infty}f_{i}\Bigr)=J_{0}^{\alpha}f, \\ &\int_{0}^{1}(\tilde{f})^{2}\,\mathrm{d}x = \int_{0}^{1}\bigl(J^{\alpha}_{0}f(x) \bigr)^{2}\,\mathrm{d}x \\ &\phantom{\int_{0}^{1}(\tilde{f})^{2}\,\mathrm{d}x}=\frac {1}{\Gamma^{2}(\alpha)} \int_{0}^{1} \biggl( \int_{0}^{x}(x-t)^{\alpha -1}f(t)\,\mathrm{d}t \biggr)^{2}\,\mathrm{d}x \\ &\phantom{\int_{0}^{1}(\tilde{f})^{2}\,\mathrm{d}x}\leq\frac{1}{\Gamma^{2}(\alpha)} \int_{0}^{1} \biggl( \int _{0}^{x}(x-t)^{\alpha-1}\,\mathrm{d}t \biggr)\cdot \biggl( \int _{0}^{x}(x-t)^{\alpha-1}f^{2}(t)\, \mathrm{d}t \biggr)\,\mathrm{d}x \\ &\phantom{\int_{0}^{1}(\tilde{f})^{2}\,\mathrm{d}x}=\frac{1}{\alpha\Gamma^{2}(\alpha)} \int_{0}^{1}x^{\alpha}\cdot \biggl( \int_{0}^{x}(x-t)^{\alpha-1}f^{2}(t)\, \mathrm{d}t \biggr)\,\mathrm{d}x. \end{aligned}$$

Note that \(f\in L^{2}[0,1]\), and we can derive that \(f^{2}\in L^{1}[0,1]\) and \(\int_{0}^{x}(x-t)^{\alpha-1}f^{2}(t)\,\mathrm{d}t\in L^{1}[0,1]\). Therefore,

$$\int_{0}^{1}\bigl(J^{\alpha}_{0}f(x) \bigr)^{2}\,\mathrm{d}x< +\infty $$

and

$$\tilde{f}=J_{0}^{\alpha}f\in L^{2}[0,1]. $$

Thus we get \(\tilde{H}\subseteq L^{2}[0,1]\). □

Remark 1

Because \(C^{\alpha}[0,1]=J^{\alpha}_{0}C[0,1]\) is dense in \(J^{\alpha}_{0}L^{2}[0,1]\) and \(J^{\alpha}_{0}L^{2}[0,1]\) is complete, we can derive that \(\tilde{H}= J^{\alpha}_{0}L^{2}[0,1]\).

Lemma 2.4

Assume \(h(x)\in\tilde{H}\). Then, for any \(\gamma>0\), we can derive that \(J_{0}^{\gamma}h(x)\in\tilde{H}\).

Proof

By the assumption, there exists an \(h_{1}(x)\in L^{2}[0,1]\) such that \(h(x)=J_{0}^{\alpha}h_{1}(x)\). Therefore

$$J_{0}^{\gamma}h(x)=J_{0}^{\gamma+\alpha}h_{1}(x)=J_{0}^{\alpha }J_{0}^{\gamma}h_{1}(x). $$

By Lemma 2.1, we have \(J_{0}^{\gamma}h_{1}(x)\in L^{2}[0,1]\). So we can get \(J_{0}^{\alpha}J_{0}^{\gamma }h_{1}(x)\in\tilde{H}\), that is, \(J_{0}^{\gamma}h(x)\in\tilde{H}\). □

Lemma 2.5

Assume \(\{u_{n}(x)\}\subset X\) is an orthogonal basis of \(L^{2}[0,1]\). Then \(\{\tilde{u}_{n}(x)\}=\{J_{0}^{\alpha}u_{n}(x)\}\) is an orthogonal basis of \(\tilde{H}\). For any \(g\in\tilde{H}\), we have \(g=\sum_{n=0}^{\infty}C_{n}\tilde{u}_{n}\), where \(C_{n}=\frac{(g,\tilde{u}_{n})_{\alpha}}{ \Vert \tilde{u}_{n} \Vert _{\alpha}^{2}}\).

Proof

By the assumption, we have

$$\begin{aligned} (\tilde{u}_{n},\tilde{u}_{m})_{\alpha}&= \int_{0}^{1}\bigl(D_{C}^{\alpha } \tilde{u}_{n}\bigr)\cdot\bigl(D_{C}^{\alpha} \tilde{u}_{m}\bigr)\cdot\rho (x)\,\mathrm{d}x\\ &= \int_{0}^{1}u_{n}(x)\cdot u_{m}(x)\cdot\rho (x)\,\mathrm{d}x \textstyle\begin{cases} =0, & n\neq m; \\ \neq0, & n=m. \end{cases}\displaystyle \end{aligned}$$

For any \(\tilde{f}\in\tilde{H}\), according to the proof of the above lemma, there exists a \(f\in L^{2}[0,1]\) so that \(\tilde{f}=J_{0}^{\alpha}f\).

Assume \((\tilde{f},\tilde{u}_{n})_{\alpha}=0\) for all n. Then we can derive that

$$ \int_{0}^{1}\bigl(D_{C}^{\alpha} \tilde{f}\bigr) \bigl(D_{C}^{\alpha}\tilde {u}_{n} \bigr)\rho(x)\,\mathrm{d}x= \int_{0}^{1}f u_{n}\rho(x)\, \mathrm{d}x=0. $$

Since \(\{u_{n}(x)\}\) is an orthogonal basis of \(L^{2}[0,1]\), \(f=0\) a.e. Then we get \(\tilde{f}=0\), so \(\{\tilde{u}_{n}(x)\}\) is an orthogonal basis of \(\tilde{H}\).

For any \(g\in\tilde{H}\), write \(g=\sum_{n=0}^{\infty}C_{n}\tilde{u}_{n}\). Then

$$(g,\tilde{u}_{m})_{\alpha}=\Biggl(\sum _{n=0}^{\infty}C_{n}\tilde {u}_{n}, \tilde{u}_{m}\Biggr)_{\alpha}=C_{m}( \tilde{u}_{m},\tilde {u}_{m})_{\alpha}=C_{m} \Vert \tilde{u}_{m} \Vert _{\alpha}^{2}. $$

So we have \(C_{m}=\frac{(g,\tilde{u}_{m})_{\alpha}}{ \Vert \tilde {u}_{m} \Vert _{\alpha}^{2}}\). □
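As a small numerical illustration of the expansion in Lemma 2.5 (our sketch; it anticipates the shifted Legendre choice \(u_{n}=R_{n}\), \(\rho(x)=1\) of Sect. 5), note that for \(g=J_{0}^{\alpha}f\) the coefficients reduce to ordinary weighted \(L^{2}\) projections of \(f=D_{C}^{\alpha}g\):

```python
# Sketch: coefficients C_n = (g, u~_n)_alpha / ||u~_n||_alpha^2 for g(x) = x^(alpha+1),
# with u_n = R_n (shifted Legendre) and rho = 1, so C_n = (2n+1) * int_0^1 D_C^alpha g(x) R_n(x) dx.
import numpy as np
from scipy.special import gamma
from numpy.polynomial.legendre import Legendre

alpha = 0.5
Dg = lambda x: gamma(alpha + 2) * x        # D_C^alpha x^(alpha+1) = Gamma(alpha+2) x (closed form)

z, w = np.polynomial.legendre.leggauss(32) # Gauss-Legendre rule on [-1, 1]
x, w = 0.5 * (z + 1.0), 0.5 * w            # mapped to [0, 1]

C = [(2*n + 1) * np.sum(w * Dg(x) * Legendre.basis(n)(2*x - 1)) for n in range(4)]
print(C)                                   # only C_0 and C_1 are nonzero for this g
```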

3 Operational matrices based on the orthogonal basis functions

An operational matrix can be obtained by using a set of orthogonal basis functions. Now, according to Lemma 2.5, we can get new orthogonal bases and their corresponding operational matrices.

Denoting

$$\tilde{U}_{m}(x)= \begin{pmatrix} \tilde{u}_{0}(x) \\ \tilde{u}_{1}(x) \\ \vdots\\ \tilde{u}_{m-1}(x) \end{pmatrix}, $$

where \(\{u_{n}(x)\}\subset X\) is an orthogonal basis of \(L^{2}[0,1]\) and \(\tilde{u}_{n}(x)=J^{\alpha}_{0}u_{n}(x)\) \((n=0,1,2,\ldots)\). According to Lemma 2.5, \(\{\tilde{u}_{n}(x)\}\) is an orthogonal basis of \(\tilde{H}\).

Lemma 3.1

Let \(\tilde{U}_{m}(x)=(\tilde{u}_{0}(x),\tilde{u}_{1}(x),\ldots,\tilde{u}_{m-1}(x))^{T}\) and \(0< p\leq1\). Then there exists an \(m\times m\) matrix A such that

$$ \tilde{U}_{m}(px)\simeq A \tilde{U}_{m}(x), $$
(3)

where

$$A= \begin{pmatrix} a_{00} & a_{01} & \ldots& a_{0,m-1} \\ a_{10} & a_{11} & \ldots& a_{1,m-1} \\ \vdots& \vdots& \ldots& \vdots\\ a_{m-1,0} & a_{m-1,1} & \ldots& a_{m-1,m-1} \end{pmatrix} $$

with \(a_{ij}=\frac{(\tilde{u}_{i}(px),\tilde{u}_{j}(x))_{\alpha}}{\int _{0}^{1}{u}_{j}^{2}(x)\rho(x)\,\mathrm{d}x}\).

Proof

$$\begin{aligned} \tilde{U}_{m}(px)&=\bigl(\tilde{u}_{0}(px), \tilde{u}_{1}(px),\ldots,\tilde{u}_{m-1}(px) \bigr)^{T} \\ &\simeq \Biggl(\sum_{j=0}^{m-1}a_{0j} \tilde{u}_{j}(x),\sum_{j=0}^{m-1}a_{1j} \tilde{u}_{j}(x),\ldots,\sum_{j=0}^{m-1}a_{m-1,j} \tilde{u}_{j}(x) \Biggr)^{T} \\ &= \begin{pmatrix} a_{00} & a_{01} & \ldots& a_{0,m-1} \\ a_{10} & a_{11} & \ldots& a_{1,m-1} \\ \vdots& \vdots& \ldots& \vdots\\ a_{m-1,0} & a_{m-1,1} & \ldots& a_{m-1,m-1} \end{pmatrix} \begin{pmatrix} \tilde{u}_{0}(x) \\ \tilde{u}_{1}(x) \\ \vdots\\ \tilde{u}_{m-1}(x) \end{pmatrix} \\ &=A\cdot\tilde{U}_{m}(x). \end{aligned}$$

Noting that

$$\begin{aligned} \bigl(\tilde{u}_{i}(px),\tilde{u}_{j}(x) \bigr)_{\alpha}&\simeq \Biggl(\sum_{k=0}^{m-1}a_{ik} \tilde{u}_{k}(x),\tilde {u}_{j}(x) \Biggr)_{\alpha}\\ &= a_{ij}\bigl(\tilde{u}_{j}(x),\tilde {u}_{j}(x) \bigr)_{\alpha}=a_{ij} \int_{0}^{1}{u}_{j}^{2}(x) \rho(x)\,\mathrm{d}x, \end{aligned}$$

we can derive that

$$a_{ij}=\frac{(\tilde{u}_{i}(px),\tilde{u}_{j}(x))_{\alpha}}{\int _{0}^{1}{u}_{j}^{2}(x)\rho(x)\,\mathrm{d}x}. $$

 □

Theorem 3.1

Let \(J_{0}^{\alpha}\) be the R–L fractional integral operator. Then

$$ J_{0}^{\alpha}\tilde{U}_{m}(x)\simeq K_{m}^{\alpha}\cdot\tilde{U}_{m}(x), $$
(4)

where

$$K_{m}^{\alpha}=\bigl(J_{0}^{\alpha} \tilde{U}_{m}(x),\tilde {U}_{m}^{T}(x) \bigr)_{\alpha}\cdot D^{-1} $$

and

$$\begin{aligned} \bigl(J_{0}^{\alpha}\tilde{U}_{m}(x), \tilde{U}_{m}^{T}(x)\bigr)_{\alpha }=\bigl( \bigl(J_{0}^{\alpha}\tilde{u}_{i}(x), \tilde{u}_{j}(x)\bigr)_{\alpha }\bigr)_{i,j=0}^{m-1},\qquad D= \begin{pmatrix} v_{0} & & & \\ & v_{1} & & \\ & & \ddots& \\ & & & v_{m-1} \end{pmatrix} . \end{aligned}$$

Proof

Let \(K_{m}^{\alpha}=(k_{ij})_{i,j=0}^{m-1}\) be the operational matrix of fractional integration for \(\tilde{U}_{m}(x)\), i.e., the matrix satisfying \(J_{0}^{\alpha}\tilde{U}_{m}(x)\simeq K_{m}^{\alpha}\tilde{U}_{m}(x)\). We have

$$\begin{aligned} \bigl(J_{0}^{\alpha}\tilde{U}_{m}(x), \tilde{U}_{m}^{T}(x)\bigr)_{\alpha}&=\left ( \begin{pmatrix} J_{0}^{\alpha}\tilde{u}_{0}(x) \\ J_{0}^{\alpha}\tilde{u}_{1}(x) \\ \vdots\\ J_{0}^{\alpha}\tilde{u}_{m-1}(x) \end{pmatrix}, \bigl(\tilde{u}_{0}(x),\tilde{u}_{1}(x),\ldots,\tilde{u}_{m-1}(x)\bigr) \right )_{\alpha} \\ &\simeq \left ( \begin{pmatrix} \sum_{j=0}^{m-1}k_{0j}\tilde{u}_{j}(x)\\ \sum_{j=0}^{m-1}k_{1j}\tilde{u}_{j}(x) \\ \vdots\\ \sum_{j=0}^{m-1}k_{m-1,j}\tilde{u}_{j}(x) \end{pmatrix} ,\bigl(\tilde{u}_{0}(x), \tilde{u}_{1}(x),\ldots,\tilde{u}_{m-1}(x)\bigr) \right )_{\alpha} \\ &= \begin{pmatrix} k_{00}v_{0} & k_{01}v_{1} & \ldots& k_{0,m-1}v_{m-1} \\ k_{10}v_{0} & k_{11}v_{1} & \ldots& k_{1,m-1}v_{m-1} \\ \vdots& \vdots& \ldots& \vdots\\ k_{m-1,0}v_{0} & k_{m-1,1}v_{1} & \ldots& k_{m-1,m-1}v_{m-1} \end{pmatrix} \\ &=K_{m}^{\alpha}\cdot \begin{pmatrix} v_{0} & & & \\ & v_{1} & & \\ & & \ddots& \\ & & & v_{m-1} \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} &K_{m}^{\alpha}= \begin{pmatrix} k_{00} & k_{01} & \ldots& k_{0,m-1} \\ k_{10} & k_{11} & \ldots& k_{1,m-1} \\ \vdots& \vdots& \ldots& \vdots\\ k_{m-1,0} & k_{m-1,1} & \ldots& k_{m-1,m-1} \end{pmatrix},\\ &D= \begin{pmatrix} v_{0} & & & \\ & v_{1} & & \\ & & \ddots& \\ & & & v_{m-1} \end{pmatrix} , \end{aligned}$$

with

$$ v_{j}=\bigl(\tilde{u}_{j}(x),\tilde{u}_{j}(x) \bigr)_{\alpha}= \int _{0}^{1}{u}_{j}^{2}(x) \rho(x)\,\mathrm{d}x. $$

Then

$$K_{m}^{\alpha}=\bigl(J_{0}^{\alpha} \tilde{U}_{m}(x),\tilde {U}_{m}^{T}(x) \bigr)_{\alpha}\cdot D^{-1}. $$

 □

4 Numerical algorithms

In this section, we consider the problem given in (1) and (2).

Lemma 4.1

Assume that \(D^{\alpha}_{C}y(x)\) is absolutely continuous and that its derivative lies in \(L^{2}[0,1]\). Then \(D^{\alpha}_{C}y(x)\in\tilde{H}\).

Proof

Let \(v(x)=D^{\alpha}_{C}y(x)\). Using \(v'(x)\in L^{2}[0,1]\) and Lemma 2.1, one has \(D^{\alpha}_{C}v(x)=J_{0}^{1-\alpha}v'(x)\in L^{2}[0,1]\). Therefore \(v(x)=J_{0}^{\alpha}D^{\alpha}_{C}v(x)\in J_{0}^{\alpha}L^{2}[0,1]=\tilde{H}\). □

Now we expand \(D^{\alpha}_{C}y(x)\) in the fractional-order orthogonal polynomials \(\{\tilde{u}_{n}(x)\}\), together with a constant term \(c_{0}\) (the \(\tilde{u}_{n}\) all vanish at \(x=0\), so a constant cannot be represented by them alone):

$$ D^{\alpha}_{C}y(x)\simeq c_{0}+C^{T} \tilde{U}_{m}(x), $$
(5)

where the coefficient vector C and \(\tilde{U}_{m}(x)\) are given by

$$ C=[c_{1},c_{2},\ldots,c_{m}]^{T},\qquad \tilde {U}_{m}(x)=\bigl[\tilde{u}_{0}(x), \tilde{u}_{1}(x),\ldots,\tilde {u}_{m-1}(x) \bigr]^{T}. $$

Based on (5), (4), (3) and the properties of the Caputo derivative, we have

$$\begin{aligned} &y(x)\simeq J^{\alpha}_{0}c_{0}+C^{T}J^{\alpha }_{0} \tilde{U}_{m}(x)\simeq\frac{c_{0}}{\Gamma(\alpha +1)}x^{\alpha}+C^{T}K^{\alpha}_{m} \tilde{U}_{m}(x), \end{aligned}$$
(6)
$$\begin{aligned} & D^{\gamma}_{C}y(x)=D^{\gamma}_{C} \bigl[J^{\alpha}_{0}D^{\alpha }_{C}y(x) \bigr]=J^{\alpha-\gamma}_{0}D^{\alpha}_{C}y(x)\simeq J^{\alpha-\gamma}_{0}\bigl[c_{0}+C^{T} \tilde{U}_{m}(x)\bigr] \\ &\phantom{D^{\gamma}_{C}y(x)}\simeq \frac{c_{0}}{\Gamma(\alpha-\gamma+1)}x^{\alpha-\gamma }+C^{T}K^{\alpha-\gamma}_{m} \tilde{U}_{m}(x) \end{aligned}$$
(7)

and

$$\begin{aligned} D^{\gamma}_{C}y(px)&\simeq \frac{c_{0}}{\Gamma(\alpha -\gamma+1)}(px)^{\alpha-\gamma}+C^{T}K^{\alpha-\gamma}_{m} \tilde {U}_{m}(px) \\ &\simeq\frac{c_{0}}{\Gamma(\alpha-\gamma +1)}(px)^{\alpha-\gamma}+C^{T}K^{\alpha-\gamma}_{m}A \tilde{U}_{m}(x). \end{aligned}$$
(8)

By using (6) and (3), we get

$$\begin{aligned} y(px)\simeq \frac{c_{0}}{\Gamma(\alpha+1)}(px)^{\alpha }+C^{T}K^{\alpha}_{m} \tilde{U}_{m}(px) \simeq\frac {c_{0}}{\Gamma(\alpha+1)}(px)^{\alpha}+C^{T}K^{\alpha}_{m}A \tilde {U}_{m}(x). \end{aligned}$$
(9)

Substituting (5), (6), (8) and (9) into Eq. (1), we obtain

$$\begin{aligned} c_{0}+ C^{T}\tilde{U}_{m}(x)={}&a(x) \biggl[ \frac{c_{0}}{\Gamma(\alpha+1)}(px)^{\alpha}+C^{T}K^{\alpha }_{m}A \tilde{U}_{m}(x) \biggr]+b(x)\biggl[ \frac{c_{0}}{\Gamma (\alpha-\gamma+1)}(px)^{\alpha-\gamma} \\ &{}+ C^{T}K^{\alpha-\gamma}_{m}A\tilde {U}_{m}(x) \biggr]+d(x) \biggl[ \frac{c_{0}}{\Gamma(\alpha +1)}x^{\alpha}+C^{T}K^{\alpha}_{m} \tilde{U}_{m}(x) \biggr]+g(x). \end{aligned}$$
(10)

Collocating (10) at the points \(\{x \mid {u}_{m}(x)=0\}\) yields a system of algebraic equations from which the coefficient \(c_{0}\) and the coefficient vector C can be computed; the approximate solution of (1) under the initial condition (2) is then recovered from (6).
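A minimal computational sketch of this collocation step follows (our illustration, not the authors' code). It assumes that callables/matrices U (evaluating \(\tilde{U}_{m}(x)\)), K_a (\(K^{\alpha}_{m}\)), K_ag (\(K^{\alpha-\gamma}_{m}\)) and A have already been built, e.g. as in Sect. 5, and that \(m+1\) collocation nodes are supplied; the paper collocates at the zeros of \(u_{m}(x)\).

```python
# Sketch of the collocation system obtained from (10); the helper names and the
# requirement of m+1 nodes for the m+1 unknowns (c_0, C) are our assumptions.
import numpy as np
from scipy.special import gamma

def solve_pantograph(a, b, d, g, alpha, gam, p, U, K_a, K_ag, A, nodes):
    """Collocate Eq. (10) at 'nodes' and solve the linear system for c_0 and C."""
    m = U(nodes[0]).size
    assert len(nodes) == m + 1, "need m+1 nodes for the m+1 unknowns"
    M, rhs = np.zeros((m + 1, m + 1)), np.zeros(m + 1)
    for r, x in enumerate(nodes):
        u = U(x)                                    # (u~_0(x), ..., u~_{m-1}(x))
        Au = A @ u                                  # approximates U~_m(p x), cf. (3)
        # coefficient of c_0 in (10), all terms moved to the left-hand side
        M[r, 0] = (1.0
                   - a(x) * (p * x)**alpha / gamma(alpha + 1)
                   - b(x) * (p * x)**(alpha - gam) / gamma(alpha - gam + 1)
                   - d(x) * x**alpha / gamma(alpha + 1))
        # coefficients of the vector C, cf. (6)-(9)
        M[r, 1:] = u - a(x) * (K_a @ Au) - b(x) * (K_ag @ Au) - d(x) * (K_a @ u)
        rhs[r] = g(x)
    c = np.linalg.solve(M, rhs)
    return c[0], c[1:]                              # c_0 and C; y follows from (6)
```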

5 Fractional-order Legendre operational matrix

In this section, we derive a new fractional-order Legendre operational matrix based on the results of Sect. 3; in Sect. 6 we will use it to solve two fractional-order equations.

Definition 5.1

The Legendre polynomials \(\{L_{k}(x):k=0,1,2,\ldots\}\) are the eigenfunctions of the Sturm–Liouville problem,

$$\bigl(\bigl(1-x^{2}\bigr)L_{k}^{\prime}(x) \bigr)^{\prime}+k(k+1)L_{k}(x)=0,\quad x\in[-1,1]. $$

Shifted Legendre polynomials can be defined on the interval \([0,1]\) by introducing the variable \(x=2z-1\). Let the shifted Legendre polynomials \(L_{k}(2z-1)\) be denoted by \(R_{k}(z)\). It is easy to see that

$$\int_{0}^{1}R_{i}(z)\cdot R_{j}(z)\,\mathrm{d}z=\frac{1}{2i+1}\delta_{ij}, $$

where

$$\delta_{ij}= \textstyle\begin{cases} 0, & i\neq j, \\ 1, & i=j. \end{cases} $$

The analytic form of the shifted Legendre polynomials \(R_{i}(z)\) of degree i is given by

$$R_{i}(z)=\sum_{k=0}^{i}(-1)^{i+k} \frac{(i+k)!}{(i-k)!}\frac{z^{k}}{(k!)^{2}}. $$

In order to solve αth-order differential equations (\(\alpha>0\)), we define new fractional-order Legendre functions. Let

$$\tilde{U}_{m}(x)=\bigl(J_{0}^{\alpha}R_{0}(x),J_{0}^{\alpha}R_{1}(x),\ldots,J_{0}^{\alpha}R_{m-1}(x)\bigr)^{T}=\bigl(\tilde{u}_{0}(x),\tilde{u}_{1}(x),\ldots,\tilde{u}_{m-1}(x)\bigr)^{T}, $$

where

$$\begin{aligned} \tilde{u}_{i}(x) &= J_{0}^{\alpha}R_{i}(x) \\ &= \sum_{k=0}^{i}(-1)^{i+k} \frac{(i+k)!}{(i-k)!}\frac{J_{0}^{\alpha }x^{k}}{(k!)^{2}} \\ &= \sum_{k=0}^{i}(-1)^{i+k} \frac{(i+k)!}{(i-k)!}\frac {1}{(k!)^{2}}\frac{k!}{\Gamma(k+\alpha+1)}\cdot x^{\alpha+k} \\ &=\sum_{k=0}^{i}(-1)^{i+k} \frac{(i+k)!}{(i-k)!(k!)\Gamma(k+\alpha +1)}\cdot x^{\alpha+k}. \end{aligned}$$
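The closed form above translates directly into code; the following sketch (ours, with the hypothetical helper name frac_legendre_U) evaluates the fractional-order basis \(\tilde{u}_{i}(x)=J_{0}^{\alpha}R_{i}(x)\) at a point.

```python
# Sketch: evaluate (u~_0(x), ..., u~_{m-1}(x)) from the closed form derived above.
import numpy as np
from math import factorial
from scipy.special import gamma

def frac_legendre_U(m, alpha):
    """Return U(x) giving the vector of fractional-order Legendre functions at a scalar x."""
    def U(x):
        vals = np.zeros(m)
        for i in range(m):
            vals[i] = sum((-1)**(i + k) * factorial(i + k)
                          / (factorial(i - k) * factorial(k) * gamma(k + alpha + 1))
                          * x**(alpha + k)
                          for k in range(i + 1))
        return vals
    return U
```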

By Lemma 3.1 (with weight \(\rho(x)=1\)), we get \(\tilde{U}_{m}(px)\simeq A\tilde{U}_{m}(x)\) and \(A=(a_{ij})_{i,j=0}^{m-1}\) with \(a_{ij}=\frac{(\tilde{u}_{i}(px),\tilde{u}_{j}(x))_{\alpha}}{\int_{0}^{1}{u}_{j}^{2}(x)\rho(x)\,\mathrm{d}x}\). Noting that \(D^{\alpha}_{C}[\tilde{u}_{i}(px)]=p^{\alpha}R_{i}(px)\), we therefore obtain

$$\begin{aligned} a_{ij}&=\frac{p^{\alpha}\int_{0}^{1}{R}_{i}(px){R}_{j}(x)\,\mathrm {d}x}{\int_{0}^{1}{R}_{j}^{2}(x)\,\mathrm{d}x} \\ &=(2j+1)p^{\alpha} \int_{0}^{1}{R}_{i}(px){R}_{j}(x) \,\mathrm{d}x. \end{aligned}$$

By Theorem 3.1, we get

$$v_{j}=\bigl(\tilde{u}_{j}(x),\tilde{u}_{j}(x) \bigr)_{\alpha}= \int _{0}^{1}{R}_{j}^{2}(x)\, \mathrm{d}x=\frac{1}{2j+1} $$

and

$$D= \begin{pmatrix} 1 & & & \\ & \frac{1}{3} & & \\ & & \ddots& \\ & & & \frac{1}{2m-1} \end{pmatrix}. $$

Let \(G=(J_{0}^{\alpha}\tilde{U}_{m}(x),\tilde{U}_{m}^{T}(x))_{\alpha}=((J_{0}^{\alpha}\tilde{u}_{i}(x),\tilde{u}_{j}(x))_{\alpha})_{i,j=0}^{m-1}=(g_{ij})_{i,j=0}^{m-1}\). Since \(D_{C}^{\alpha}J_{0}^{\alpha}\tilde{u}_{i}(x)=\tilde{u}_{i}(x)\) and \(D_{C}^{\alpha}\tilde{u}_{j}(x)=R_{j}(x)\), we have

$$\begin{aligned} g_{ij}&=\bigl(J_{0}^{\alpha} \tilde{u}_{i}(x),\tilde{u}_{j}(x)\bigr)_{\alpha} \\ &= \int_{0}^{1}\tilde{u}_{i}(x){R}_{j}(x) \,\mathrm{d}x \\ &=\sum_{k=0}^{i}(-1)^{i+k} \frac{(i+k)!}{(i-k)!(k!)\Gamma(k+\alpha +1)}\cdot \int_{0}^{1}x^{\alpha+k}{R}_{j}(x)\, \mathrm{d}x, \end{aligned}$$

while

$$\begin{aligned} \int_{0}^{1}x^{\alpha+k}{R}_{j}(x)\, \mathrm{d}x&=\sum_{l=0}^{j}(-1)^{j+l} \frac{(j+l)!}{(j-l)!(l!)^{2}} \int _{0}^{1}x^{\alpha+k}\cdot x^{l} \,\mathrm{d}x \\ &=\sum_{l=0}^{j}(-1)^{j+l} \frac{(j+l)!}{(j-l)!(l!)^{2}}\frac {1}{\alpha+k+l+1}; \end{aligned}$$

therefore

$$g_{ij}=\sum_{k=0}^{i}\sum _{l=0}^{j}(-1)^{i+k+j+l} \frac {(i+k)!(j+l)!}{(i-k)!(k!)(j-l)!(l!)^{2}(\alpha+k+l+1)\Gamma(k+\alpha+1)}. $$

Then we get the fractional-order Legendre operational matrix,

$$K_{m}^{\alpha}=G\cdot D^{-1}. $$
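Collecting the formulas for \(g_{ij}\) and \(D\), the matrix \(K_{m}^{\alpha}=G\cdot D^{-1}\) can be assembled as in the following sketch (ours; frac_legendre_K is a hypothetical name).

```python
# Sketch: the fractional-order Legendre operational matrix K_m^alpha = G D^{-1},
# with g_ij given by the double sum above and D^{-1} = diag(2j+1).
import numpy as np
from math import factorial
from scipy.special import gamma

def frac_legendre_K(m, alpha):
    G = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            s = 0.0
            for k in range(i + 1):
                for l in range(j + 1):
                    s += ((-1)**(i + k + j + l) * factorial(i + k) * factorial(j + l)
                          / (factorial(i - k) * factorial(k) * factorial(j - l)
                             * factorial(l)**2 * (alpha + k + l + 1) * gamma(k + alpha + 1)))
            G[i, j] = s
    return G @ np.diag([2*j + 1 for j in range(m)])    # right-multiplication by D^{-1}
```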

6 Numerical examples

Example 1

Consider the following fractional neutral pantograph differential equation [7]:

$$ \textstyle\begin{cases} D^{\gamma}_{C}y(x)=-y(x)+0.1y(0.8x)+0.5D^{\gamma}_{C}y(0.8x)+g(x), \quad \text{$x\in(0,1]$,} \\ y(0)=0, & \end{cases} $$

with \(0<\gamma\leq1\).

The exact solution of this problem for \(\gamma=0.8\) is \(x^{3.8}\), with \(g(x)=2.211894885744887 x^{3}+0.9571706039258614 x^{3.8}\). By the present method we compute the approximate solutions and the absolute errors between the exact and approximate solutions for \(m=5,8,10\); they are displayed in Figs. 1, 2 and 3. The maximum absolute errors on the interval \([0,1]\) are \(7.65934\times10^{-6}\), \(2.69025\times10^{-7}\) and \(5.5087\times10^{-8}\) for \(m=5,8,10\), attained at the points \(x=0.195015\), \(0.0932709\) and \(0.0634485\), respectively.
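For concreteness, the sketches from Sects. 4 and 5 can be combined into a possible end-to-end run of this example (here the highest order is \(\gamma=0.8\), so in the notation of (1) \(\alpha=\gamma\) and \(K^{\alpha-\gamma}_{m}\) reduces to the identity matrix); the choice of collocation nodes below is ours and need not coincide with the zeros of \(u_{m}\) used in the paper.

```python
# Illustrative run of Example 1 with the helper sketches above (assumed, not the authors' code).
import numpy as np
from scipy.special import gamma

m, alpha, p = 8, 0.8, 0.8
U, A, K_a = frac_legendre_U(m, alpha), delay_matrix(m, alpha, p), frac_legendre_K(m, alpha)
K_0 = np.eye(m)                                  # K^{alpha-gamma} with gamma = alpha

a = lambda x: 0.1
b = lambda x: 0.5
d = lambda x: -1.0
g = lambda x: 2.211894885744887 * x**3 + 0.9571706039258614 * x**3.8

nodes = 0.5 * (np.polynomial.legendre.leggauss(m + 1)[0] + 1.0)   # m+1 points in (0, 1)
c0, C = solve_pantograph(a, b, d, g, alpha, alpha, p, U, K_a, K_0, A, nodes)

y = lambda x: c0 * x**alpha / gamma(alpha + 1) + C @ (K_a @ U(x))  # Eq. (6)
print(abs(y(0.5) - 0.5**3.8))                     # error of the approximation at x = 0.5
```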

Figure 1. The comparison of the approximate solution and the exact solution when \(m=5\)

Figure 2. The comparison of the approximate solution and the exact solution when \(m=8\)

Figure 3. The comparison of the approximate solution and the exact solution when \(m=10\)

Denote \(Ly=D^{\alpha}_{C}y(x)-a(x)y(px)-b(x)D^{\gamma}_{C}y(px)-d(x)y(x)\), so that (1) is equivalent to \(Ly=g\) and \((Ly-g)(x_{i})=0\), where \(\{x_{i}\}\) are the collocation nodes. The approximate solution \(y_{n}\) satisfies \((Ly_{n}-g)(x_{i})=0\), and hence \((L(y-y_{n}))(x_{i})=0\). Since the residual \(L(y-y_{n})\) oscillates between the nodes, the error \(y-y_{n}\) may oscillate as well. The number of oscillations of the absolute error is related to the number of nodes \(n+1\), so it may grow as n increases. This explains the oscillations of the errors in Figs. 1(b), 2(b) and 3(b).

When \(\gamma=1\) and \(g(x)=(0.32x-0.5)e^{-0.8x}+e^{-x}\), the exact solution is \(xe^{-x}\). The absolute errors are displayed in Fig. 4 and compared with those from Ref. [7] in Table 1. These results show that our method has higher accuracy.

Figure 4. The absolute error curve with \(m=8\)

Table 1. The comparison of the absolute errors with the Bernoulli wavelet method on the interval \([0,1]\)

Example 2

Consider the following fractional neutral pantograph differential equation [7]:

$$ \textstyle\begin{cases} D^{\gamma}_{C}y(x)=0.75y(x)+y(0.5x)+D^{\gamma _{1}}_{C}y(0.5x)+0.5D^{\gamma}_{C}y(0.5x)+g(x), \quad\text{$x\in (0,1]$,} \\ y(0)=0, & \end{cases} $$

with \(0<\gamma_{1}<\gamma\leq1\).

The exact solution of this problem for \(\gamma=0.7\) and \(\gamma_{1}=0.3\) is \(x^{3.1}\), with \(g(x)=2.06871 x^{2.4}-0.208387 x^{2.8}-0.866629 x^{3.1}\). By the present method we compute the approximate solutions and absolute errors for \(m=5,10,12\); they are displayed in Figs. 5, 6 and 7. The maximum absolute errors on the interval \([0,1]\) are 0.0031029, \(2.98692\times10^{-5}\) and \(7.82099\times10^{-6}\) for \(m=5,10,12\), respectively.

Figure 5. The comparison of the approximate solution and the exact solution when \(m=5\)

Figure 6. The comparison of the approximate solution and the exact solution when \(m=10\)

Figure 7. The comparison of the approximate solution and the exact solution when \(m=12\)

7 Conclusion

In this paper, a method for constructing a class of operational matrices of fractional integration is proposed. In this way we can obtain different operational matrices \(K_{m}^{\alpha}\) based on different orthogonal polynomials. Using these matrices together with the collocation method, we obtain approximate solutions of fractional neutral pantograph delay differential equations. As an example, we derive a fractional Legendre operational matrix and solve two fractional neutral pantograph differential equations with it. The numerical results indicate that the method is efficient.

References

  1. Abbasbandy, S., Kazem, S., Alhuthali, M., Alsulami, H.: Application of the operational matrix of fractional-order Legendre functions for solving the time-fractional convection–diffusion equation. Appl. Math. Comput. 266, 31–40 (2015)


  2. Kashkari, B.S.H., Syam, M.I.: Fractional-order Legendre operational matrix of fractional integration for solving the Riccati equation with fractional order. Appl. Math. Comput. 290, 281–291 (2016)


  3. Sahu, P.K., Ray, S.S.: Legendre spectral collocation method for Fredholm integro-differential-difference equation with variable coefficients and mixed conditions. Appl. Math. Comput. 268, 575–580 (2015)


  4. Atabakzadeh, M.H., Akrami, M.H., Erjaee, G.H.: Chebyshev operational matrix method for solving multi-order fractional ordinary differential equations. Appl. Math. Model. 37, 8903–8911 (2013)


  5. Li, Y.L., Sun, N.: Numerical solution of fractional differential equations using the generalized block pulse operational matrix. Comput. Math. Appl. 62, 1046–1054 (2011)


  6. Magin, R.L.: Fractional calculus models of complex dynamics in biological tissues. Comput. Math. Appl. 59, 1586–1593 (2010)


  7. Rahimkhani, P., Ordokhani, Y., Babolian, E.: Numerical solution of fractional pantograph differential equations by using generalized fractional-order Bernoulli wavelet. J. Comput. Appl. Math. 309, 493–510 (2017)


  8. Sedaghat, S., Ordokhani, Y., Dehghan, M.: Numerical solution of the delay differential equations of pantograph type via Chebyshev polynomials. Commun. Nonlinear Sci. Numer. Simul. 17, 4815–4830 (2012)


  9. Balachandran, K., Kiruthika, S.: Existence of solutions of nonlinear fractional pantograph equations. Acta Math. Sci. 33, 712–720 (2013)


  10. Yang, Y., Huang, Y.Q.: Spectral-collocation methods for fractional pantograph delay-integrodifferential equations. Adv. Math. Phys. 2013, 1–14 (2013)


  11. Diethelm, K.: The Analysis of Fractional Differential Equations: An Application-Oriented Exposition Using Differential Operators of Caputo Type. Springer, Heidelberg (2010)



Acknowledgements

The work was supported by the National Natural Science Foundation of China (Grant No. 11501150).

Author information


Contributions

All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Lei Shi.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Shi, L., Ding, X., Chen, Z. et al. A new class of operational matrices method for solving fractional neutral pantograph differential equations. Adv Differ Equ 2018, 94 (2018). https://doi.org/10.1186/s13662-018-1536-8

