Theory and Modern Applications

# Modified differential transform method for solving linear and nonlinear pantograph type of differential and Volterra integro-differential equations with proportional delays

## Abstract

In this study, a hybrid technique for improving the differential transform method (DTM), namely the modified differential transform method (MDTM), expressed as a combination of the differential transform method, the Laplace transform, and the Padé approximant (LPDTM), is employed for the first time to ascertain exact solutions of linear and nonlinear pantograph-type differential and Volterra integro-differential equations (DEs and VIDEs) with proportional delays. The advantage of this method lies in its simple and reliable procedure: it solves the equations directly without requiring large computational work, perturbation, or linearization, it enlarges the domain of convergence, and it leads to the exact solution. To validate the reliability and efficiency of the method, several examples and numerical results are provided.

## 1 Introduction

Mathematical modeling of various phenomena in science and engineering, such as biological population management, chemistry, physics, physiological, pharmaceutical, and chemical kinetics, medicine, infectious diseases, economy, nonlinear dynamical systems, communication networks, number theory, electrodynamics, the navigational control of ships and aircraft, control problems, and electronic systems, leads to one of the most important kinds of delay differential equations (DDEs), namely the pantograph equation [1–7]. The term pantograph was first used by Ockendon and Tayler [8], who modeled and redesigned the current collection system for an electric locomotive.

In general, it is difficult to solve these kinds of DDEs analytically. Therefore, the literature contains valuable efforts focused on analytical and numerical methods for solving the pantograph type of DEs and VIDEs with proportional delays (see, e.g., [9–21] and the references therein). The differential transform method (DTM) is an analytical-numerical technique introduced for the first time by Zhou [22] to study electrical circuits. Like any subject in mathematics, the DTM has grown over a period of time. In 1999, Chen and Ho [23] developed this method for partial differential equations with two independent variables. In 2004, Ayaz [24] extended the two-dimensional DTM into the three-dimensional DTM and used it to solve linear and nonlinear partial differential equations. In 2005, Arikoglu and Ozkol [25] used the differential transform method for integro-differential equations. With the advent of fractional calculus, the differential transform method was also modified to solve derivative and integral problems of any order. In 2007, Arikoglu and Ozkol [26] proposed a numerical-analytical method similar to the DTM, called the fractional differential transform method (FDTM), which they used to solve fractional differential equations. In 2008, Odibat and Momani [27] introduced the generalized differential transform method (GDTM), based on the differential transform method, the generalized Taylor formula, and the Caputo fractional derivative, and used it to solve fractional partial differential equations. In the same year, Momani and Erturk [28] showed how to improve the accuracy of the convergent series solutions obtained by the differential transform method, introducing the modified differential transform method (MDTM), and Chang [29] used the DTM for one-dimensional nonlinear functions.
In 2009, Keskin [30, 31] introduced the reduced form of the DTM as the reduced differential transform method (RDTM), which Keskin and Oturanc [32, 33] used to solve partial differential equations and fractional differential equations. During recent years, many authors have used this method for solving various types of equations. For example, differential-algebraic equations [34–36], Volterra integral equations [37–39], integro-differential equations [40–43], and fractional differential equations [44–47] have been solved using this method. This technique is highly efficient and powerful in obtaining exact solutions and approximate solutions of the mathematical modeling of many problems, gives the solution in the form of rapidly convergent successive approximations, and is capable of handling linear and nonlinear equations in a similar manner. Moreover, comparison of this method with other analytical methods available in the literature shows that, although the results of these methods are the same, the RDTM is easier, more convenient, and more reliable [48–55].

In this paper, we present the application of the modified differential transform method (MDTM) as a hybrid approach for improving the convergence rate of DTM truncated series solutions, combining the DTM, the Laplace transform, and the Padé approximant. The series solutions obtained by the differential transform method, even if they contain a large number of terms, may converge only in a limited region. The Laplace–Padé differential transform method (LPDTM) expands the domain of convergence of the truncated power series and often leads to the exact solution. To improve the convergent series solution obtained by the DTM, we apply the Laplace transform to it and then, by forming its Padé approximant, convert the transformed series into a meromorphic function. Finally, to obtain the analytical solution, we take the inverse Laplace transform of the function obtained from the Padé approximant. In the light of the above, we will study the exact solution of linear and nonlinear pantograph type of DEs and VIDEs with proportional delays

\begin{aligned} & F \bigl(t,u(\alpha _{0}t),u^{\prime }( \alpha _{1}t),\ldots , u^{(n)}( \alpha _{n}t) \bigr)=0, \\ \end{aligned}
(1.1)
\begin{aligned} & G \bigg(t,u(\alpha _{0}t),u^{\prime }(\alpha _{1}t),\ldots , u^{(n)}( \alpha _{n}t), \int _{0}^{lt} K \bigl(t,\xi ,u(\beta _{0}\xi ),u^{\prime }( \beta _{1}\xi ),\ldots , u^{(m)}(\beta _{m}\xi ) \bigr)\,\mathrm{d}\xi \bigg) \\ &\quad =0, \\ & 0\leq t\leq T,\qquad \alpha _{i}, \beta _{j}, l \in (0,1),\quad i=0,1, \ldots ,n,\qquad j=0,1,\ldots ,m,\qquad m< n, \end{aligned}
(1.2)

where u is the unknown function and the functions F, G, and K are analytic in the domain of interest.

The rest of this study is organized as follows: In Sect. 2, the main idea behind the Padé approximant is introduced. In Sect. 3, we briefly introduce the modified differential transform method (MDTM) as a combined form of the DTM with the Laplace transform and the Padé approximant. In Sect. 4, we prove several important theorems. In Sect. 5, we apply the MDTM to obtain the exact solutions of linear and nonlinear pantograph type of DEs and VIDEs with proportional delays. Finally, we offer a summary and conclusion in Sect. 6.

## 2 Padé approximant

The best approximation of a function by a rational function of a given order is the Padé approximant. Under this technique, the approximant's power series agrees with the power series of the function it approximates.

Let $$u(t)$$ be an analytic function with the Maclaurin series:

$$u(t)=\sum_{k=0}^{\infty } c_{k} t^{k},\quad 0\leq t\leq T.$$
(2.1)

Then the Padé approximant to $$u(t)$$ is a rational fraction which we denote as $$[\frac{m}{n} ]$$ and is defined by [56, 57]

$$\biggl[\frac{m}{n} \biggr] = \frac{a_{0}+a_{1} t+a_{2} t^{2}+\cdots +a_{m} t^{m}}{b_{0}+b_{1} t+b_{2} t^{2}+\cdots +b_{n} t^{n}},$$
(2.2)

There are $$m+1$$ independent numerator coefficients and, after the normalization $$b_{0}=1$$, n independent denominator coefficients, making $$m+n+1$$ unknown coefficients in all. We have

$$u(t)- \biggl[\frac{m}{n} \biggr](t)=O \bigl(t^{m+n+1} \bigr).$$
(2.3)

From Eq. (2.3), we have

$$u(t)\sum_{j=0}^{n} b_{j} t^{j}-\sum_{i=0}^{m} a_{i} t^{i}=O \bigl(t^{m+n+1} \bigr).$$
(2.4)

From Eq. (2.4), we get the following algebraic linear systems:

\begin{aligned} &c_{m}b_{1}+\cdots +c_{m-n+1}b_{n}=-c_{m+1}, \\ &c_{m+1}b_{1}+\cdots +c_{m-n+2}b_{n}=-c_{m+2}, \\ &\vdots \\ &c_{m+n-1}b_{1}+\cdots +c_{m}b_{n}=-c_{m+n}, \end{aligned}
(2.5)

and

\begin{aligned} &a_{0}=c_{0}, \\ &a_{1}=c_{1}+b_{1} c_{0}, \\ &a_{2}=c_{2}+b_{1} c_{1}+b_{2} c_{0}, \\ &a_{3}=c_{3}+b_{1}c_{2}+b_{2} c_{1} +b_{3} c_{0}, \\ & \vdots \\ &a_{n}= c_{n}+ \sum_{k=1}^{n} b_{k} c_{n-k}. \end{aligned}
(2.6)

We determine first all the denominator coefficients $$b_{j}$$, $$1\leq j\leq n$$, from Eq. (2.5). Then, we calculate the numerator coefficients $$a_{i}$$, $$0\leq i \leq m$$, from Eq. (2.6).
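This two-stage procedure can be checked computationally. The following sketch (in Python with exact rational arithmetic; the helper names `pade` and `gauss_solve` are ours, not from any library) solves system (2.5) for the denominator coefficients and evaluates formulas (2.6) for the numerator, here for the Maclaurin coefficients of $$e^{-t}$$:

```python
from fractions import Fraction
from math import factorial

def gauss_solve(A, rhs):
    """Solve A x = rhs by Gauss-Jordan elimination over exact rationals."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def pade(c, m, n):
    """[m/n] Pade approximant of sum_k c[k] t^k; returns (a, b) with b[0] = 1."""
    c = [Fraction(x) for x in c] + [Fraction(0)] * (m + n + 1)
    # System (2.5): row i is  c_{m+i} b_1 + ... + c_{m+i-n+1} b_n = -c_{m+i+1}
    A = [[c[m + i + 1 - j] if m + i + 1 - j >= 0 else Fraction(0)
          for j in range(1, n + 1)] for i in range(n)]
    b = [Fraction(1)] + gauss_solve(A, [-c[m + i + 1] for i in range(n)])
    # Formulas (2.6): a_i = c_i + sum_{k=1..i} b_k c_{i-k}
    a = [sum(b[k] * c[i - k] for k in range(min(i, n) + 1)) for i in range(m + 1)]
    return a, b

# Maclaurin coefficients of e^{-t}: c_k = (-1)^k / k!
a, b = pade([Fraction((-1) ** k, factorial(k)) for k in range(5)], 2, 2)
```

For $$m=n=2$$ this reproduces the classical $$[\frac{2}{2}]$$ approximant $$\frac{1-t/2+t^{2}/12}{1+t/2+t^{2}/12}$$ of $$e^{-t}$$.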

### Remark 2.1

For a fixed value of $$m+n+1$$, the error in Eq. (2.3) is smallest when the degree of the numerator of (2.2) is one greater than, or equal to, the degree of the denominator.

The advantage of the Padé approximant is that it often gives a better approximation than the truncated Taylor series. The Taylor series may fail to converge, whereas the Padé approximant can enlarge the domain of convergence of the solution or even recover the exact solution.

## 3 Summary of the method

This section presents the definitions and preliminary operations of the modified differential transform method that are needed to understand the method.

### Definition 3.1

The differential transform of $$u(t)$$ is defined as

$$\mathcal{U}(k)=\frac{1}{k!} \biggl[ \frac{\mathrm{d}^{k}}{\mathrm{d} t^{k}}u(t) \biggr]_{t=t_{0}},$$
(3.1)

where $$u(t)$$ is the original function, assumed analytic and continuously differentiable with respect to time t in the domain of interest, and $$\mathcal{U}(k)$$ denotes the transformed function.

### Definition 3.2

The differential inverse transform of $$\mathcal{U}(k)$$ is determined as

$$u(t)=\sum_{k=0}^{\infty } \mathcal{U}(k) (t-t_{0})^{k}.$$
(3.2)

Then, consolidating Eqs. (3.2) and (3.1) yields

$$u(t)=\sum_{k=0}^{\infty } \frac{1}{k!} \biggl[ \frac{\mathrm{d}^{k}}{\mathrm{d} t^{k}}u(t) \biggr]_{t=t_{0}}(t-t_{0})^{k}.$$
(3.3)

With the help of the above definitions, and to illustrate the basic idea of the DTM, consider the following form of nonlinear ordinary differential equations:

$$\frac{\mathrm{d}u(t)}{\mathrm{d}t}=f \bigl(u(t),t \bigr),\quad t\geq 0,$$
(3.4)

with the following initial condition:

$$u(0)=c,$$
(3.5)

where $$f(u(t),t)$$ indicates a nonlinear smooth function.

After applying the DTM definition on both sides of Eq. (3.4), we can write the following iteration formula:

$$(k+1)\mathcal{U}(k+1)=\mathcal{F} \bigl(\mathcal{U}(0), \ldots , \mathcal{U}(k),k \bigr),\quad k\geq 0,$$
(3.6)

where $$\mathcal{F}(\mathcal{U}(0), \ldots ,\mathcal{U}(k),k)$$ is the differential transform of $$f(u(t),t)$$.

Implementing the aforesaid method to the initial condition (3.5), we have

$$\mathcal{U}(0)=c.$$
(3.7)

To obtain the remaining coefficients, we plug Eq. (3.7) into Eq. (3.6), and by a simple iterative calculation we get the subsequent $$\mathcal{U}(k)$$ values. Then, using the inverse transformation of the set of values $$\{\mathcal{U}(k)\}^{n}_{k=0}$$, the approximate solution can be written as follows:

$$\tilde{u}_{n}(t)=\sum _{k=0}^{n}\mathcal{U}(k) (t-t_{0})^{k}.$$
(3.8)

Thus, the exact solution of the considered equation can be gained by

$$u(t)=\lim_{n\rightarrow \infty }\tilde{u}_{n}(t).$$
(3.9)
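As a minimal illustration of the recursion (3.6)–(3.8), take $$f(u,t)=u$$ (our own illustrative choice, not an example from later sections), so that $$\mathcal{F}(\mathcal{U}(0),\ldots ,\mathcal{U}(k),k)=\mathcal{U}(k)$$ and (3.6) reads $$(k+1)\mathcal{U}(k+1)=\mathcal{U}(k)$$:

```python
from fractions import Fraction
from math import factorial

# Illustrative case of (3.4)-(3.8): f(u, t) = u with initial condition c = 1,
# so the recursion (3.6) becomes (k+1) U(k+1) = U(k).
def dtm_coefficients(c, N):
    U = [Fraction(c)]                # U(0) = c from (3.7)
    for k in range(N):
        U.append(U[k] / (k + 1))     # U(k+1) from (3.6)
    return U

U = dtm_coefficients(1, 6)
```

The truncated solution (3.8) built from these coefficients is the degree-6 Taylor polynomial of $$e^{t}$$, the exact solution of $$u^{\prime }=u$$, $$u(0)=1$$.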

The series solutions derived from the DTM, even if they contain a large number of terms, may converge only in a limited region. To improve the convergent series solution obtained by the DTM and enlarge its domain of convergence, we apply the Laplace–Padé method through the following steps.

• Step 1: Apply the Laplace transformation with respect to t to the obtained power series (3.8).

• Step 2: Replace s by $$\frac{1}{t}$$ in the resulting equation.

• Step 3: Create the Padé approximant of order $$[\frac{m}{n} ]$$ for the transformed series, converting it into a meromorphic function.

### Remark 3.3

m and n are chosen arbitrarily, but they should be smaller than the order of the power series. In this step, the Padé approximant enlarges the domain of the truncated series solution, which yields better convergence and accuracy.

• Step 4: Replace t by $$\frac{1}{s}$$.

• Step 5: Apply the inverse Laplace transformation with respect to s to obtain the exact or approximate solution.
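These five steps can be traced symbolically. The sketch below (Python with the sympy library; the helper `pade`, which matches Taylor coefficients as in Sect. 2, is our own, not a sympy routine) runs the steps on the truncated series $$1-t+\frac{t^{2}}{2}-\frac{t^{3}}{6}$$ and recovers $$e^{-t}$$, anticipating Example 5.1:

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

def pade(poly, x, m, n):
    """[m/n] Pade approximant: match Taylor coefficients through x^(m+n)."""
    a = sp.symbols(f"a0:{m + 1}")
    b = sp.symbols(f"b1:{n + 1}")
    num = sum(a[i] * x**i for i in range(m + 1))
    den = 1 + sum(b[j - 1] * x**j for j in range(1, n + 1))
    diff = sp.expand(num - poly * den)
    sol = sp.solve([diff.coeff(x, k) for k in range(m + n + 1)],
                   [*a, *b], dict=True)[0]
    return sp.together((num / den).subs(sol))

u3 = 1 - t + t**2 / 2 - t**3 / 6                  # truncated DTM solution (3.8)
F = sp.laplace_transform(u3, t, s, noconds=True)  # Step 1: 1/s - 1/s^2 + 1/s^3 - 1/s^4
P = sp.expand(F.subs(s, 1 / t))                   # Step 2: t - t^2 + t^3 - t^4
R = pade(P, t, 1, 1)                              # Step 3: t/(1 + t)
u_exact = sp.inverse_laplace_transform(
    sp.simplify(R.subs(t, 1 / s)), s, t)          # Steps 4-5
```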

Table 1 contains the basic mathematical operations carried out by DTM.

## 4 Main results

The principal aim of this section is to establish the fundamental theorems of this paper.

### Theorem 4.1

If $$w(t) =u(\alpha t)$$, then $$\mathcal{W}(k)=\alpha ^{k} \mathcal{U}(k)$$ where $$\alpha \in (0,1)$$.

### Proof

From Eq. (3.1), we get

\begin{aligned} \mathcal{W}(k) &=\frac{1}{k!} \biggl[ \frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}} =\frac{1}{k!} \biggl[ \frac{d^{k}}{dt^{k}} u( \alpha t) \biggr]_{t=t_{0}} \\ &= \frac{1}{k!} \biggl[\alpha ^{k} \frac{d^{k}}{d\hat{t}^{k}} u(\hat{t}) \biggr]_{\hat{t}=t_{0}}= \frac{1}{k!} \alpha ^{k} k! \mathcal{U}(k), \end{aligned}

where $$\hat{t}=\alpha t$$, then

$$\mathcal{W}(k)=\alpha ^{k} \mathcal{U}(k).$$

□
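Theorem 4.1 can be verified numerically on a test case. In the sketch below (Python with sympy; the test function and the value of α are arbitrary choices of ours), the Taylor coefficients of $$u(\alpha t)$$ are compared with $$\alpha ^{k}\mathcal{U}(k)$$:

```python
import sympy as sp

t = sp.symbols("t")
alpha = sp.Rational(1, 2)      # an arbitrary test value in (0, 1)
u = sp.exp(t) * sp.cos(t)      # an arbitrary analytic test function

N = 8
Uk = [sp.expand(sp.series(u, t, 0, N).removeO()).coeff(t, k) for k in range(N)]
w = u.subs(t, alpha * t)       # w(t) = u(alpha t)
Wk = [sp.expand(sp.series(w, t, 0, N).removeO()).coeff(t, k) for k in range(N)]
```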

### Theorem 4.2

If $$w(t) =u(\alpha _{1}t) v(\alpha _{2}t)$$, then, for $$\alpha _{1}, \alpha _{2}\in (0,1)$$, we have

$$\mathcal{W}(k)=\sum_{r=0}^{k} \alpha _{1}^{r} \alpha _{2}^{k-r} \mathcal{U}(r) \mathcal{V}(k-r),\quad k\geq 0.$$

### Proof

From Eq. (3.1), we get

\begin{aligned} \mathcal{W}(k) &=\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}} =\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} \bigl(u(\alpha _{1}t) v(\alpha _{2}t) \bigr) \biggr]_{t=t_{0}} \\ &=\frac{1}{k!} \biggl[\sum_{r=0}^{k} \binom{k}{r} \alpha _{1}^{r} \frac{d^{r}}{d\hat{t}^{r}} u(\hat{t})\, \alpha _{2}^{k-r} \frac{d^{k-r}}{d\tilde{t}^{k-r}} v(\tilde{t}) \biggr]_{\hat{t},\tilde{t}=t_{0}} \\ &=\frac{1}{k!}\sum_{r=0}^{k} \frac{k!}{r!(k-r)!} \alpha _{1}^{r} r!\, \mathcal{U}(r)\, \alpha _{2}^{k-r} (k-r)!\, \mathcal{V}(k-r), \end{aligned}

where $$\hat{t}=\alpha _{1} t$$ and $$\tilde{t}=\alpha _{2} t$$, then

$$\mathcal{W}(k)=\sum_{r=0}^{k} \alpha _{1}^{r} \alpha _{2}^{k-r} \mathcal{U}(r) \mathcal{V}(k-r).$$

□

### Theorem 4.3

If $$w(t)=\frac{d^{r}}{dt^{r}} u(\alpha t)$$, then $$\mathcal{W}(k)=\alpha ^{k+r} \frac{(k+r) !}{k !} \mathcal{U}(k+r)$$.

### Proof

From Eq. (3.1), we have

\begin{aligned} \mathcal{W}(k) &=\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}}= \frac{1}{k!} \biggl[\frac{d^{k+r}}{dt^{k+r}} u( \alpha t) \biggr]_{t=t_{0}}= \frac{1}{k!} \biggl[\alpha ^{k+r} \frac{d^{k+r}}{d\hat{t}^{k+r}} u( \hat{t}) \biggr]_{\hat{t}=t_{0}} \\ &=\frac{1}{k!} \alpha ^{k+r}(k+r) ! \mathcal{U}(k+r), \end{aligned}

therefore

$$\mathcal{W}(k)=\alpha ^{k+r} \frac{(k+r) !}{k !} \mathcal{U}(k+r).$$

□

### Theorem 4.4

If $$w(t) =\frac{d^{m}}{dt^{m}} u(\alpha _{1}t) \frac{d^{n}}{dt^{n}} v( \alpha _{2}t)$$, then

$$\mathcal{W}(k)=\sum_{r=0}^{k} \alpha _{1}^{r+m} \alpha _{2}^{k-r+n} \frac{(r+m)!(k-r+n)!}{r! (k-r)!} \mathcal{U}(r+m) \mathcal{V}(k-r+n),\quad k\geq 0.$$

### Proof

From Eq. (3.1), we have

\begin{aligned} \mathcal{W}(k) &=\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}} =\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} \biggl(\frac{d^{m}}{dt^{m}} u(\alpha _{1}t) \frac{d^{n}}{dt^{n}} v(\alpha _{2}t) \biggr) \biggr]_{t=t_{0}} \\ &=\frac{1}{k!} \biggl[\sum_{r=0}^{k} \binom{k}{r} \frac{d^{r}}{dt^{r}} \biggl(\frac{d^{m}}{dt^{m}} u(\alpha _{1}t) \biggr) \frac{d^{k-r}}{dt^{k-r}} \biggl(\frac{d^{n}}{dt^{n}} v(\alpha _{2}t) \biggr) \biggr]_{t=t_{0}} \\ &=\frac{1}{k!} \biggl[\sum_{r=0}^{k} \binom{k}{r} \alpha _{1}^{r+m} \frac{d^{r+m}}{d\hat{t}^{r+m}} u(\hat{t})\, \alpha _{2}^{k-r+n} \frac{d^{k-r+n}}{d\tilde{t}^{k-r+n}} v(\tilde{t}) \biggr]_{\hat{t},\tilde{t}=t_{0}} \\ &=\frac{1}{k!}\sum_{r=0}^{k} \frac{k!}{r!(k-r)!} \alpha _{1}^{r+m} (r+m)!\, \mathcal{U}(r+m)\, \alpha _{2}^{k-r+n} (k-r+n)!\, \mathcal{V}(k-r+n), \end{aligned}

where $$\hat{t}=\alpha _{1} t$$ and $$\tilde{t}=\alpha _{2} t$$, then

$$\mathcal{W}(k)=\sum_{r=0}^{k} \alpha _{1}^{r+m} \alpha _{2}^{k-r+n} \frac{(r+m)!(k-r+n)!}{r! (k-r)!} \mathcal{U}(r+m) \mathcal{V}(k-r+n).$$

□

### Theorem 4.5

If $$w(t)=\int _{0}^{lt}u(\alpha \xi ) \,\mathrm{d}\xi$$, then $$\mathcal{W}(k)=\frac{1}{k} l^{k} \alpha ^{k-1} \mathcal{U}(k-1)$$ for $$k\geq 1$$.

### Proof

\begin{aligned} \mathcal{W}(k) =\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}}= \frac{1}{k!} \biggl[l \frac{d^{k-1}}{dt^{k-1}} u(l \alpha t) \biggr]_{t=t_{0}}= \frac{1}{k!} \bigl[l (k-1)! (l\alpha )^{k-1}\mathcal{U}(k-1) \bigr], \end{aligned}

therefore

$$\mathcal{W}(k)=\frac{1}{k} l^{k} \alpha ^{k-1} \mathcal{U}(k-1).$$

□
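Theorem 4.5 admits the same kind of numerical check. The sketch below (Python with sympy; the choices $$u=e^{\xi }$$, $$l=\frac{1}{2}$$, $$\alpha =\frac{1}{3}$$ are arbitrary test values of ours) compares the Taylor coefficients of the integral with the stated formula:

```python
import sympy as sp

t, xi = sp.symbols("t xi")
l, alpha = sp.Rational(1, 2), sp.Rational(1, 3)  # arbitrary test values in (0, 1)
u = sp.exp(xi)                                    # test function with U(k) = 1/k!

# w(t) = integral of u(alpha*xi) over xi from 0 to l*t
w = sp.integrate(u.subs(xi, alpha * xi), (xi, 0, l * t))

N = 7
Wk = [sp.expand(sp.series(w, t, 0, N).removeO()).coeff(t, k) for k in range(N)]
Uk = [sp.Rational(1, sp.factorial(k)) for k in range(N)]
```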

### Theorem 4.6

If $$w(t)=\int _{0}^{lt}u(\alpha _{1}\xi ) v(\alpha _{2}\xi )\,\mathrm{d} \xi$$, then

$$\mathcal{W}(k)=\frac{1}{k}\sum_{r=0}^{k-1} l^{k} \alpha _{1}^{r} \alpha _{2}^{k-r-1} \mathcal{U}(r) \mathcal{V}(k-r-1),\quad k\geq 1.$$

### Proof

\begin{aligned} \mathcal{W}(k) &=\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}} =\frac{1}{k!} \biggl[l \frac{d^{k-1}}{dt^{k-1}} \bigl(u(l\alpha _{1}t) v(l\alpha _{2}t) \bigr) \biggr]_{t=t_{0}} \\ &=\frac{1}{k!} \biggl[l \sum_{r=0}^{k-1} \binom{k-1}{r} (l\alpha _{1})^{r} \frac{d^{r}}{d\hat{t}^{r}} u(\hat{t})\, (l\alpha _{2})^{k-r-1} \frac{d^{k-r-1}}{d\tilde{t}^{k-r-1}} v(\tilde{t}) \biggr]_{\hat{t},\tilde{t}=t_{0}} \\ &=\frac{1}{k!}\, l \sum_{r=0}^{k-1} \frac{(k-1)!}{r!(k-r-1)!}\, l^{r} \alpha _{1}^{r} r!\, \mathcal{U}(r)\, l^{k-r-1} \alpha _{2}^{k-r-1} (k-r-1)!\, \mathcal{V}(k-r-1), \end{aligned}

where $$\hat{t}=l\alpha _{1} t$$ and $$\tilde{t}=l\alpha _{2} t$$, then

$$\mathcal{W}(k)=\frac{1}{k}\sum_{r=0}^{k-1} l^{k} \alpha _{1}^{r} \alpha _{2}^{k-r-1} \mathcal{U}(r) \mathcal{V}(k-r-1).$$

□

### Theorem 4.7

If $$w(t)=v(\beta t)\int _{0}^{lt}u_{1}(\alpha _{1}\xi ) u_{2}(\alpha _{2} \xi )\,\mathrm{d}\xi$$, then

$$\mathcal{W}(k)=\sum_{r=0}^{k-1}\sum _{s=0}^{k-r-1}\frac{1}{k-r} l^{k-r} \beta ^{r} \alpha _{1}^{s} \alpha _{2}^{k-r-s-1} \mathcal{V}(r) \mathcal{U}_{1}(s) \mathcal{U}_{2}(k-r-s-1),\quad k\geq 1.$$

### Proof

Let $$y(t)=\int _{0}^{lt}u_{1}(\alpha _{1}\xi ) u_{2}(\alpha _{2}\xi ) \,\mathrm{d}\xi$$. By the Leibniz rule we have

$$\frac{d^{k}}{dt^{k}} w(t)=\frac{d^{k}}{dt^{k}} \bigl(v(\beta t) y(t) \bigr)=\sum_{r=0}^{k} \binom{k}{r} \beta ^{r} \frac{d^{r}}{d\bar{t}^{r}} v(\bar{t}) \frac{d^{k-r}}{dt^{k-r}} y(t),$$

where $$\bar{t}=\beta t$$, and from the proof of Theorem 4.6 we have

\begin{aligned} \frac{d^{k-r}}{dt^{k-r}} y(t) &=l \frac{d^{k-r-1}}{dt^{k-r-1}} \bigl(u_{1}(l\alpha _{1}t) u_{2}(l\alpha _{2}t) \bigr) \\ &=l \sum_{s=0}^{k-r-1} \binom{k-r-1}{s} (l\alpha _{1})^{s} \frac{d^{s}}{d\hat{t}^{s}} u_{1}(\hat{t})\, (l\alpha _{2})^{k-r-s-1} \frac{d^{k-r-s-1}}{d\tilde{t}^{k-r-s-1}} u_{2}(\tilde{t}), \end{aligned}

where $$\hat{t}=l\alpha _{1} t$$ and $$\tilde{t}=l\alpha _{2} t$$, then

\begin{aligned} \mathcal{W}(k) ={}&\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}} \\ ={}&\frac{1}{k!}\sum_{r=0}^{k}\sum_{s=0}^{k-r-1} \binom{k}{r} \binom{k-r-1}{s} l^{k-r} \beta ^{r} \alpha _{1}^{s} \alpha _{2}^{k-r-s-1} \\ &{}\times r!\, \mathcal{V}(r)\, s!\, \mathcal{U}_{1}(s)\, (k-r-s-1)!\, \mathcal{U}_{2}(k-r-s-1), \end{aligned}

therefore

$$\mathcal{W}(k)=\sum_{r=0}^{k-1}\sum _{s=0}^{k-r-1}\frac{1}{k-r} l^{k-r} \beta ^{r} \alpha _{1}^{s} \alpha _{2}^{k-r-s-1} \mathcal{V}(r) \mathcal{U}_{1}(s) \mathcal{U}_{2}(k-r-s-1).$$

□

### Theorem 4.8

If $$w(t)=\int _{0}^{lt} \frac{d^{m}}{dt^{m}} u(\alpha _{1}\xi ) \frac{d^{n}}{dt^{n}} v(\alpha _{2}\xi )\,\mathrm{d}\xi$$, then for $$k\geq 1$$

$$\mathcal{W}(k)=\frac{1}{k}\sum_{r=0}^{k-1} \frac{(r+m)! (k-r+n-1)!}{r! (k-r-1)!} l^{k+m+n} \alpha _{1}^{r+m} \alpha _{2}^{k-r+n-1} \mathcal{U}(r+m) \mathcal{V}(k-r+n-1).$$

### Proof

\begin{aligned} \mathcal{W}(k) ={}&\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}} =\frac{1}{k!} \biggl[l \frac{d^{k-1}}{dt^{k-1}} \biggl(\frac{d^{m}}{dt^{m}} u(l\alpha _{1}t) \frac{d^{n}}{dt^{n}} v(l\alpha _{2}t) \biggr) \biggr]_{t=t_{0}} \\ ={}&\frac{1}{k!} \biggl[l \sum_{r=0}^{k-1} \binom{k-1}{r} (l\alpha _{1})^{r+m} \frac{d^{r+m}}{d\hat{t}^{r+m}} u(\hat{t})\, (l\alpha _{2})^{k-r+n-1} \frac{d^{k-r+n-1}}{d\tilde{t}^{k-r+n-1}} v(\tilde{t}) \biggr]_{\hat{t},\tilde{t}=t_{0}} \\ ={}&\frac{1}{k!}\, l \sum_{r=0}^{k-1} \frac{(k-1)!}{r!(k-r-1)!} (l\alpha _{1})^{r+m} (r+m)!\, \mathcal{U}(r+m) \\ &{}\times (l\alpha _{2})^{k-r+n-1} (k-r+n-1)!\, \mathcal{V}(k-r+n-1), \end{aligned}

where $$\hat{t}=l\alpha _{1} t$$ and $$\tilde{t}=l\alpha _{2} t$$, then

$$\mathcal{W}(k)=\frac{1}{k}\sum_{r=0}^{k-1} \frac{(r+m)! (k-r+n-1)!}{r! (k-r-1)!} l^{k+m+n} \alpha _{1}^{r+m} \alpha _{2}^{k-r+n-1} \mathcal{U}(r+m) \mathcal{V}(k-r+n-1).$$

□

### Theorem 4.9

If $$w(t)=\frac{d^{\lambda }}{dt^{\lambda }} v(\beta t)\int _{0}^{lt} \frac{d^{m}}{dt^{m}} u_{1}(\alpha _{1}\xi ) \frac{d^{n}}{dt^{n}} u_{2}( \alpha _{2}\xi )\,\mathrm{d}\xi$$, then for $$k\geq 1$$

\begin{aligned} \mathcal{W}(k)={}&\sum_{r=0}^{k-1}\sum _{s=0}^{k-r-1}\frac{1}{k-r} \frac{(r+\lambda )! (s+m)! (k-r-s+n-1)!}{r! s! (k-r-s-1)!} \\ &{}\times l^{k-r+m+n} \beta ^{r+\lambda } \alpha _{1}^{s+m} \alpha _{2}^{k-r-s+n-1} \mathcal{V}(r+\lambda ) \mathcal{U}_{1}(s+m) \mathcal{U}_{2}(k-r-s+n-1). \end{aligned}

### Proof

Let $$y(t)=\int _{0}^{lt} \frac{d^{m}}{dt^{m}}u_{1}(\alpha _{1}\xi ) \frac{d^{n}}{dt^{n}} u_{2}(\alpha _{2}\xi )\,\mathrm{d}\xi$$. By the Leibniz rule we have

$$\frac{d^{k}}{dt^{k}} w(t)=\frac{d^{k}}{dt^{k}} \biggl(\frac{d^{\lambda }}{dt^{\lambda }} v(\beta t)\, y(t) \biggr)=\sum_{r=0}^{k} \binom{k}{r} \beta ^{r+\lambda } \frac{d^{r+\lambda }}{d\bar{t}^{r+\lambda }} v(\bar{t}) \frac{d^{k-r}}{dt^{k-r}} y(t),$$

where $$\bar{t}=\beta t$$, and from the proof of Theorem 4.8 we have

\begin{aligned} \frac{d^{k-r}}{dt^{k-r}} y(t) ={}&l \frac{d^{k-r-1}}{dt^{k-r-1}} \biggl[\frac{d^{m}}{dt^{m}} u_{1}(l\alpha _{1}t) \frac{d^{n}}{dt^{n}} u_{2}(l\alpha _{2}t) \biggr] \\ ={}&l \sum_{s=0}^{k-r-1} \binom{k-r-1}{s} (l\alpha _{1})^{s+m} \frac{d^{s+m}}{d\hat{t}^{s+m}} u_{1}(\hat{t}) \\ &{}\times (l\alpha _{2})^{k-r-s+n-1} \frac{d^{k-r-s+n-1}}{d\tilde{t}^{k-r-s+n-1}} u_{2}(\tilde{t}), \end{aligned}

where $$\hat{t}=l\alpha _{1} t$$ and $$\tilde{t}=l\alpha _{2} t$$, then

\begin{aligned} \mathcal{W}(k) ={}&\frac{1}{k!} \biggl[\frac{d^{k}}{dt^{k}} w(t) \biggr]_{t=t_{0}} \\ ={}&\frac{1}{k!}\sum_{r=0}^{k}\sum_{s=0}^{k-r-1} \binom{k}{r} \binom{k-r-1}{s} l^{k-r+m+n} \beta ^{r+\lambda } \alpha _{1}^{s+m} \alpha _{2}^{k-r-s+n-1} \\ &{}\times (r+\lambda )!\, \mathcal{V}(r+\lambda )\, (s+m)!\, \mathcal{U}_{1}(s+m)\, (k-r-s+n-1)!\, \mathcal{U}_{2}(k-r-s+n-1), \end{aligned}

therefore

\begin{aligned} \mathcal{W}(k)={}&\sum_{r=0}^{k-1}\sum _{s=0}^{k-r-1}\frac{1}{k-r} \frac{(r+\lambda )! (s+m)! (k-r-s+n-1)!}{r! s! (k-r-s-1)!} \\ &{}\times l^{k-r+m+n} \beta ^{r+\lambda } \alpha _{1}^{s+m} \alpha _{2}^{k-r-s+n-1} \mathcal{V}(r+\lambda ) \mathcal{U}_{1}(s+m) \mathcal{U}_{2}(k-r-s+n-1). \end{aligned}

□

## 5 Applications

The principal aim of this section is to apply the modified differential transform method to a class of linear and nonlinear pantograph type of DEs and VIDEs with proportional delays. We present the following examples to illustrate the accuracy of the presented method and to compare it with previously reported results.

### Example 5.1

As the first example, we consider the following linear pantograph equation:

\begin{aligned} u^{\prime }(t)=-u(t)+\frac{1}{10}u \biggl( \frac{t}{5} \biggr)-\frac{1}{10} e^{- \frac{t}{5}}, \quad 0\leq t \leq 1, \end{aligned}
(5.1)

subject to the initial condition

\begin{aligned} u(0)=1. \end{aligned}
(5.2)

By using Table 1 and Theorem 4.1, Eq. (5.1) transforms to the following recurrence relations:

\begin{aligned} (k+1)\mathcal{U}(k+1)=-\mathcal{U}(k)+\frac{1}{10} \biggl(\frac{1}{5} \biggr)^{k} \mathcal{U}(k)- \frac{1}{10}\frac{(\frac{-1}{5})^{k}}{k!}, \end{aligned}
(5.3)

and from the initial condition (5.2), we write

$$\mathcal{U}(0)=1.$$
(5.4)

Substituting Eq. (5.4) in Eq. (5.3) recursively we derive the following results:

\begin{aligned} \begin{aligned}&\mathcal{U}(1)=-1,\qquad \mathcal{U}(2)=\frac{1}{2!},\qquad \mathcal{U}(3)= \frac{-1}{3!}, \\ & \mathcal{U}(4)=\frac{1}{4!},\qquad \mathcal{U}(5)= \frac{-1}{5!},\qquad \ldots . \end{aligned} \end{aligned}
(5.5)

Therefore, from (3.2) we have

\begin{aligned} u(t)=\sum_{k=0}^{\infty } \mathcal{U}(k)t^{k}=1-t+\frac{t^{2}}{2!}- \frac{t^{3}}{3!}+ \frac{t^{4}}{4!}-\frac{t^{5}}{5!}+\cdots . \end{aligned}
(5.6)
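The recursion (5.3)–(5.4) is easy to run with exact rational arithmetic; the following sketch (Python, our own illustration) confirms the coefficients (5.5), i.e. $$\mathcal{U}(k)=\frac{(-1)^{k}}{k!}$$, the Maclaurin coefficients of $$e^{-t}$$:

```python
from fractions import Fraction
from math import factorial

# Recurrence (5.3) with U(0) = 1 from (5.4)
U = [Fraction(1)]
for k in range(8):
    rhs = (-U[k]
           + Fraction(1, 10) * Fraction(1, 5) ** k * U[k]
           - Fraction(1, 10) * Fraction(-1, 5) ** k / factorial(k))
    U.append(rhs / (k + 1))
```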

To improve the truncated series solution (5.6) of Eq. (5.1), we implement the LPDTM on the third-order approximate solution

\begin{aligned} \tilde{u}_{3}(t)=\sum_{k=0}^{3} \mathcal{U}(k)t^{k}=1-t+ \frac{t^{2}}{2!}-\frac{t^{3}}{3!}. \end{aligned}
(5.7)

Applying the Laplace transformation with respect to t to $$\tilde{u}_{3}(t)$$ yields

$$\mathcal{L} \bigl[u(t) \bigr]=\frac{1}{s}-\frac{1}{s^{2}}+ \frac{1}{s^{3}}- \frac{1}{s^{4}}.$$
(5.8)

Setting $$s=\frac{1}{t}$$, we have

$$\mathcal{L} \bigl[u(t) \bigr]=t-t^{2}+t^{3}-t^{4}.$$
(5.9)

For $$m\geq 1$$, $$n\geq 1$$, and $$m+n\leq 4$$, all of the $$[\frac{m}{n} ](t)$$-Padé approximants of Eq. (5.9) yield

$$\biggl[\frac{m}{n} \biggr](t)=\frac{t}{1+t}.$$
(5.10)

In this step, by writing $$\frac{1}{s}$$ instead of t in Eq. (5.10) we obtain

$$\biggl[\frac{m}{n} \biggr](t)=\frac{1}{s+1}.$$
(5.11)

Finally, using the inverse Laplace transform on the Padé approximants (5.11), we arrive at the improved solution that corresponds to the exact solution $$u(t)=e^{-t}$$.

### Example 5.2

As the second example, we consider the following linear pantograph equation:

\begin{aligned} u^{\prime \prime }(t)=4 e^{-\frac{t}{2}} \sin \biggl( \frac{t}{2} \biggr) u \biggl( \frac{t}{2} \biggr),\quad 0\leq t\leq 1, \end{aligned}
(5.12)

subject to the initial conditions

\begin{aligned} u(0)=1, \qquad u^{\prime }(0)=-1. \end{aligned}
(5.13)

By using Table 1, Theorems 4.1 and 4.2, Eq. (5.12) transforms to the following recurrence relations:

\begin{aligned} &(k+1) (k+2)\mathcal{U}(k+2) \\ &\quad =4 \sum _{r_{2}=0}^{k} \sum_{r_{1}=0}^{r_{2}} \biggl(\frac{-1}{2} \biggr)^{r_{1}} \biggl( \frac{1}{2} \biggr)^{k-r_{1}} \frac{1}{r_{1}!(r_{2}-r_{1})!}\sin \biggl({ \frac{(r_{2}-r_{1})\pi }{2}} \biggr) \mathcal{U}(k-r_{2}), \end{aligned}
(5.14)

and from the initial conditions (5.13), we write

$$\mathcal{U}(0)=1,\qquad \mathcal{U}(1)=-1.$$
(5.15)

Consequently, we find

\begin{aligned} \mathcal{U}(2)=0,\qquad \mathcal{U}(3)=\frac{1}{3},\qquad \mathcal{U}(4)= \frac{-1}{6}, \qquad \mathcal{U}(5)=\frac{1}{30}, \qquad \ldots . \end{aligned}
(5.16)

Therefore, from (3.2) we have

\begin{aligned} u(t)=\sum_{k=0}^{\infty } \mathcal{U}(k)t^{k}=1-t+\frac{t^{3}}{3}- \frac{t^{4}}{6}+ \frac{t^{5}}{30}-\cdots . \end{aligned}
(5.17)
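The recurrence (5.14) can likewise be evaluated exactly. In the sketch below (Python, our own illustration), $$\sin (\frac{j\pi }{2} )$$ is replaced by its integer values $$0,1,0,-1$$, and the computed coefficients match (5.16):

```python
from fractions import Fraction
from math import factorial

def sin_half_pi(j):
    # sin(j*pi/2) takes only the values 0, 1, 0, -1
    return [0, 1, 0, -1][j % 4]

# Recurrence (5.14) with U(0) = 1, U(1) = -1 from (5.15)
U = [Fraction(1), Fraction(-1)]
for k in range(6):
    total = Fraction(0)
    for r2 in range(k + 1):
        for r1 in range(r2 + 1):
            total += (Fraction(-1, 2) ** r1 * Fraction(1, 2) ** (k - r1)
                      * Fraction(sin_half_pi(r2 - r1),
                                 factorial(r1) * factorial(r2 - r1))
                      * U[k - r2])
    U.append(4 * total / ((k + 1) * (k + 2)))
```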

To improve the truncated series solution (5.17) of Eq. (5.12), we implement the LPDTM on the third-order approximate solution

\begin{aligned} \tilde{u}_{3}(t)=\sum_{k=0}^{3} \mathcal{U}(k)t^{k}=1-t+ \frac{t^{3}}{3}. \end{aligned}
(5.18)

Applying the Laplace transformation with respect to t to $$\tilde{u}_{3}(t)$$ yields

$$\mathcal{L} \bigl[u(t) \bigr]=\frac{1}{s}-\frac{1}{s^{2}}+ \frac{2}{s^{4}}.$$
(5.19)

Setting $$s=\frac{1}{t}$$, we have

$$\mathcal{L} \bigl[u(t) \bigr]=t-t^{2}+2t^{4}.$$
(5.20)

For $$m\geq 1$$, $$n\geq 1$$, and $$m+n\leq 4$$, all of the $$[\frac{m}{n} ](t)$$-Padé approximants of Eq. (5.20) yield

$$\biggl[\frac{m}{n} \biggr](t)=\frac{t+t^{2}}{1+2t+2t^{2}}.$$
(5.21)

In this step, by writing $$\frac{1}{s}$$ instead of t in Eq. (5.21) we obtain

$$\biggl[\frac{m}{n} \biggr](t)=\frac{s+1}{s^{2}+2s+2}= \frac{s+1}{(s+1)^{2}+1}.$$
(5.22)

Finally, using the inverse Laplace transform on the Padé approximants (5.22), we arrive at an improved solution that corresponds to the exact solution $$u(t)=e^{-t} \cos (t)$$.

### Example 5.3

As the third example, we consider the following nonlinear pantograph equation:

\begin{aligned} u^{\prime \prime }(t)=u(t)-\frac{8}{t^{2}}u^{2} \biggl(\frac{t}{2} \biggr),\quad 0 \leq t\leq 1, \end{aligned}
(5.23)

subject to the initial conditions

\begin{aligned} u(0)=0, \qquad u^{\prime }(0)=1. \end{aligned}
(5.24)

First, we rewrite the equation as follows:

\begin{aligned} t^{2} u^{\prime \prime }(t)=t^{2} u(t)-8u^{2} \biggl(\frac{t}{2} \biggr). \end{aligned}
(5.25)

By using Table 1 and Theorem 4.2, Eq. (5.25) transforms to the following recurrence relations:

\begin{aligned} &\sum_{r=0}^{k} \delta (r-2) (k-r+1) (k-r+2)\mathcal{U}(k-r+2) \\ &\quad =\sum_{r=0}^{k} \delta (r-2) \mathcal{U}(k-r)-8\sum_{r=0}^{k} \biggl( \frac{1}{2} \biggr)^{k}\mathcal{U}(r)\mathcal{U}(k-r), \end{aligned}
(5.26)

and from the initial conditions (5.24), we write

$$\mathcal{U}(0)=0,\qquad \mathcal{U}(1)=1.$$
(5.27)

Substituting Eq. (5.27) in Eq. (5.26) recursively we derive the following results:

\begin{aligned} \begin{aligned}&\mathcal{U}(2)=-1,\qquad \mathcal{U}(3)=\frac{1}{2!},\qquad \mathcal{U}(4)= \frac{-1}{3!}, \\ &\mathcal{U}(5)=\frac{1}{4!}, \qquad \mathcal{U}(6)= \frac{-1}{5!}, \qquad \ldots . \end{aligned} \end{aligned}
(5.28)

Therefore, from (3.2) we have

\begin{aligned} u(t)=\sum_{k=0}^{\infty } \mathcal{U}(k)t^{k}=t-t^{2}+\frac{t^{3}}{2!}- \frac{t^{4}}{3!}+\frac{t^{5}}{4!}-\frac{t^{6}}{5!}+\cdots . \end{aligned}
(5.29)

To improve the truncated series solution (5.29) of Eq. (5.23), we implement the LPDTM on the third-order approximate solution

\begin{aligned} \tilde{u}_{3}(t)=\sum_{k=0}^{3} \mathcal{U}(k)t^{k}=t-t^{2}+ \frac{t^{3}}{2!}. \end{aligned}
(5.30)

Applying the Laplace transformation with respect to t to $$\tilde{u}_{3}(t)$$ yields

$$\mathcal{L} \bigl[u(t) \bigr]=\frac{1}{s^{2}}-\frac{2}{s^{3}}+ \frac{3}{s^{4}}.$$
(5.31)

Setting $$s=\frac{1}{t}$$, we have

$$\mathcal{L} \bigl[u(t) \bigr]=t^{2}-2t^{3}+3t^{4}.$$
(5.32)

For $$m\geq 1$$, $$n\geq 1$$, and $$m+n\leq 4$$, all of the $$[\frac{m}{n} ](t)$$-Padé approximants of Eq. (5.32) yield

$$\biggl[\frac{m}{n} \biggr](t)=\frac{t^{2}}{1+2t+t^{2}}.$$
(5.33)

In this step, by writing $$\frac{1}{s}$$ instead of t in Eq. (5.33) we obtain

$$\biggl[\frac{m}{n} \biggr](t)=\frac{1}{s^{2}+2s+1}= \frac{1}{(s+1)^{2}}.$$
(5.34)

Finally, using the inverse Laplace transform on the Padé approximants (5.34), we arrive at an improved solution that corresponds to the exact solution $$u(t)=te^{-t}$$.

### Example 5.4

As the fourth example, we consider the following nonlinear pantograph equation:

\begin{aligned} u^{\prime \prime \prime } \biggl(\frac{t}{2} \biggr)-2u^{\prime \prime } \biggl(\frac{t}{2} \biggr)u^{ \prime } \biggl(\frac{t}{2} \biggr)+\frac{1}{4}u(t)= \frac{1}{4}\cosh \biggl(\frac{t}{2} \biggr),\quad 0\leq t\leq 1, \end{aligned}
(5.35)

subject to the initial conditions

\begin{aligned} u(0)=0,\qquad u^{\prime }(0)=2,\qquad u^{\prime \prime }(0)=0. \end{aligned}
(5.36)

By using Table 1, Theorems 4.3 and 4.4, Eq. (5.35) transforms to the following recurrence relations:

\begin{aligned} & \biggl(\frac{1}{2} \biggr)^{k+3}(k+1) (k+2) (k+3)\mathcal{U}(k+3) \\ &\qquad {}-2\sum_{r=0}^{k} \biggl( \frac{1}{2} \biggr)^{k+3}(r+2) (r+1) (k-r+1)\mathcal{U}(r+2) \mathcal{U}(k-r+1)+\frac{1}{4}\mathcal{U}(k) \\ &\quad =\frac{1}{8} \biggl(\frac{(\frac{1}{2})^{k}+(\frac{-1}{2})^{k}}{k!} \biggr), \end{aligned}
(5.37)

and from the initial conditions (5.36), we write

$$\mathcal{U}(0)=0,\qquad \mathcal{U}(1)=2,\qquad \mathcal{U}(2)=0.$$
(5.38)

Substituting Eq. (5.38) into Eq. (5.37) recursively, we derive the following results:

\begin{aligned} \mathcal{U}(3)=\frac{1}{3}, \qquad \mathcal{U}(4)=0, \qquad \mathcal{U}(5)= \frac{1}{60},\qquad \mathcal{U}(6)=0,\qquad \ldots . \end{aligned}
(5.39)

Therefore, from (3.2) we have

\begin{aligned} u(t)=\sum_{k=0}^{\infty } \mathcal{U}(k)t^{k}=2t+\frac{t^{3}}{3}+ \frac{t^{5}}{60}+ \cdots . \end{aligned}
(5.40)

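The recurrence (5.37) can be iterated exactly as well. Note that the series (5.40) has no constant term, so the sketch below (plain Python with exact fractions, a hypothetical verification) takes $$\mathcal{U}(0)=0$$, consistent with the exact solution $$u(t)=e^{t}-e^{-t}$$; solving (5.37) for $$\mathcal{U}(k+3)$$ at each step reproduces (5.39):

```python
from fractions import Fraction
from math import factorial

half = Fraction(1, 2)
U = [Fraction(0), Fraction(2), Fraction(0)]     # U(0) = 0, U(1) = 2, U(2) = 0
for k in range(0, 4):                           # produces U(3) .. U(6)
    # right-hand side of (5.37): transform of (1/4) cosh(t/2)
    rhs = Fraction(1, 8) * (half**k + (-half)**k) / factorial(k)
    # nonlinear convolution term of (5.37); every index involved is already known
    nonlin = sum(half**(k + 3) * (r + 2) * (r + 1) * (k - r + 1)
                 * U[r + 2] * U[k - r + 1] for r in range(k + 1))
    lead = half**(k + 3) * (k + 1) * (k + 2) * (k + 3)   # coefficient of U(k+3)
    U.append((rhs + 2 * nonlin - Fraction(1, 4) * U[k]) / lead)
# U(3) = 1/3, U(4) = 0, U(5) = 1/60, U(6) = 0, cf. (5.39)
```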
To improve the series solution (5.40) of Eq. (5.35), we apply the LPDTM to the third-order approximate solution

\begin{aligned} \tilde{u}_{3}(t)=\sum_{k=0}^{3} \mathcal{U}(k)t^{k}=2t+ \frac{t^{3}}{3}. \end{aligned}
(5.41)

Applying the Laplace transform with respect to t to the truncated series yields

$$\mathcal{L} \bigl[u(t) \bigr]=\frac{2}{s^{2}}+\frac{2}{s^{4}}.$$
(5.42)

Setting $$s=\frac{1}{t}$$, we have

$$\mathcal{L} \bigl[u(t) \bigr]=2t^{2}+2t^{4}.$$
(5.43)

For $$m\geq 1$$, $$n\geq 1$$, and $$m+n\leq 4$$, all of the $$[\frac{m}{n} ](t)$$-Padé approximants of Eq. (5.43) yield

$$\biggl[\frac{m}{n} \biggr](t)=\frac{2t^{2}}{1-t^{2}}.$$
(5.44)

Next, replacing t by $$\frac{1}{s}$$ in Eq. (5.44), we obtain

$$\biggl[\frac{m}{n} \biggr](t)=\frac{2}{s^{2}-1}= \frac{1}{s-1}- \frac{1}{s+1}.$$
(5.45)

Finally, using the inverse Laplace transform on the Padé approximants (5.45), we arrive at an improved solution that corresponds to the exact solution $$u(t)=e^{t}-e^{-t}$$.

### Example 5.5

As the fifth example, we consider the following linear VIDE with proportional delay:

\begin{aligned} &u^{\prime \prime }(t)+u \biggl(\frac{t}{2} \biggr)- \frac{3}{4}u(t)- \int _{0}^{t} \xi u(\xi ) \,\mathrm{d}\xi \\ &\quad =- \frac{11}{4} \sin (t)+t\cos (t)+\sin { \biggl( \frac{t}{2} \biggr)},\quad 0\leq t \leq 1, \end{aligned}
(5.46)

subject to the initial conditions

\begin{aligned} u(0)=0,\qquad u^{\prime }(0)=1. \end{aligned}
(5.47)

By using Table 1, Theorems 4.1, 4.5 and 4.6, Eq. (5.46) transforms to the following recurrence relations:

\begin{aligned} &(k+1) (k+2)\mathcal{U}(k+2)+ \biggl(\frac{1}{2} \biggr)^{k}\mathcal{U}(k)- \frac{3}{4}\mathcal{U}(k)- \frac{1}{k}\sum_{r=0}^{k-1} \delta (r-1) \mathcal{U}(k-r-1) \\ &\quad =-\frac{11}{4} \frac{1}{k!}\sin \biggl({\frac{k\pi }{2}} \biggr)+\sum_{r=0}^{k} \delta (r-1) \frac{1}{(k-r)!}\cos { \biggl(\frac{(k-r)\pi }{2} \biggr)} \\ &\qquad {}+ \frac{(\frac{1}{2})^{k}}{k!}\sin \biggl({\frac{k\pi }{2}} \biggr), \end{aligned}
(5.48)

and from the initial conditions (5.47), we write

$$\mathcal{U}(0)=0,\qquad \mathcal{U}(1)=1.$$
(5.49)

Substituting Eq. (5.49) into Eq. (5.48) recursively, we derive the following results:

\begin{aligned} \mathcal{U}(2)=0,\qquad \mathcal{U}(3)=\frac{-1}{3!},\qquad \mathcal{U}(4)=0,\qquad \mathcal{U}(5)=\frac{1}{5!},\qquad \ldots . \end{aligned}
(5.50)

Therefore, from (3.2) we have

\begin{aligned} u(t)=\sum_{k=0}^{\infty } \mathcal{U}(k)t^{k}=t-\frac{t^{3}}{3!}+ \frac{t^{5}}{5!}+ \cdots . \end{aligned}
(5.51)

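Iterating the transformed equation with exact rational arithmetic reproduces (5.50). In the sketch below (plain Python, a hypothetical verification) the differential transform of the Volterra term $$\int_{0}^{t}\xi u(\xi )\,\mathrm{d}\xi$$ is taken as $$\mathcal{U}(k-2)/k$$ for $$k\geq 2$$: multiplication by ξ shifts the index up by one, and integration shifts it once more and divides by k:

```python
from fractions import Fraction
from math import factorial

half = Fraction(1, 2)
sink = [0, 1, 0, -1]        # sin(k*pi/2) as k runs mod 4
cosk = [1, 0, -1, 0]        # cos(k*pi/2) as k runs mod 4

U = [Fraction(0), Fraction(1)]                  # initial data (5.49)
for k in range(0, 5):                           # produces U(2) .. U(6)
    # both sine terms of the right-hand side share the factor sin(k*pi/2)/k!
    rhs = (Fraction(-11, 4) + half**k) * sink[k % 4] / factorial(k)
    if k >= 1:                                  # delta(r-1) picks r = 1 in the cosine sum
        rhs += Fraction(cosk[(k - 1) % 4], factorial(k - 1))
    volterra = U[k - 2] / k if k >= 2 else Fraction(0)
    U.append((rhs - half**k * U[k] + Fraction(3, 4) * U[k] + volterra)
             / ((k + 1) * (k + 2)))
# U = Taylor coefficients of sin(t): 0, 1, 0, -1/6, 0, 1/120, 0, cf. (5.50)
```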
To improve the series solution (5.51) of Eq. (5.46), we apply the LPDTM to the fourth-order approximate solution

\begin{aligned} \tilde{u}_{4}(t)=\sum_{k=0}^{4} \mathcal{U}(k)t^{k}=t- \frac{t^{3}}{3!}. \end{aligned}
(5.52)

Applying the Laplace transform with respect to t to the truncated series yields

$$\mathcal{L} \bigl[u(t) \bigr]=\frac{1}{s^{2}}-\frac{1}{s^{4}}.$$
(5.53)

Setting $$s=\frac{1}{t}$$, we have

$$\mathcal{L} \bigl[u(t) \bigr]=t^{2}-t^{4}.$$
(5.54)

For $$m\geq 1$$, $$n\geq 1$$, and $$m+n\leq 4$$, all of the $$[\frac{m}{n} ](t)$$-Padé approximants of Eq. (5.54) yield

$$\biggl[\frac{m}{n} \biggr](t)=\frac{t^{2}}{1+t^{2}}.$$
(5.55)

Next, replacing t by $$\frac{1}{s}$$ in Eq. (5.55), we obtain

$$\biggl[\frac{m}{n} \biggr](t)=\frac{1}{s^{2}+1}.$$
(5.56)

Finally, using the inverse Laplace transform on the Padé approximants (5.56), we arrive at an improved solution that corresponds to the exact solution $$u(t)=\sin (t)$$.

### Example 5.6

As the sixth example, we consider the following nonlinear VIDE with proportional delay:

\begin{aligned} u^{\prime }(t)u(t)-u \biggl(\frac{t}{2} \biggr)- \frac{3}{2}u \biggl(\frac{t}{2} \biggr) \int _{0}^{t} u(\xi )u \biggl( \frac{\xi }{2} \biggr) \,\mathrm{d}\xi =0,\qquad 0\leq t\leq 1, \end{aligned}
(5.57)

subject to the initial condition

\begin{aligned} u(0)=1. \end{aligned}
(5.58)

By using Table 1 and Theorems 4.1, 4.2 and 4.7, Eq. (5.57) transforms to the following recurrence relations:

\begin{aligned} &\sum_{r=0}^{k} (r+1) \mathcal{U}(r+1)\mathcal{U}(k-r)- \biggl(\frac{1}{2} \biggr)^{k} \mathcal{U}(k) \\ &\qquad {}-\frac{3}{2} \sum_{r=0}^{k-1} \sum_{s=0}^{k-r-1} \frac{1}{k-r} \biggl( \frac{1}{2} \biggr)^{k-s-1}\mathcal{U}(r)\mathcal{U}(s) \mathcal{U}(k-r-s-1)=0, \end{aligned}
(5.59)

and from the initial condition (5.58), we write

$$\mathcal{U}(0)=1.$$
(5.60)

Substituting Eq. (5.60) into Eq. (5.59) recursively, we derive the following results:

\begin{aligned} \begin{aligned}&\mathcal{U}(1)=1,\qquad \mathcal{U}(2)=\frac{1}{2!},\qquad \mathcal{U}(3)= \frac{1}{3!}, \\ & \mathcal{U}(4)=\frac{1}{4!},\qquad \mathcal{U}(5)= \frac{1}{5!},\qquad \ldots . \end{aligned} \end{aligned}
(5.61)

Therefore, from (3.2) we have

\begin{aligned} u(t)=\sum_{k=0}^{\infty } \mathcal{U}(k)t^{k}=1+t+\frac{t^{2}}{2!}+ \frac{t^{3}}{3!}+ \frac{t^{4}}{4!}+\frac{t^{5}}{5!}+\cdots . \end{aligned}
(5.62)

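The nonlinear recurrence (5.59) can likewise be iterated exactly. Solving each step for $$\mathcal{U}(k+1)$$, whose coefficient in the first sum is $$(k+1)\mathcal{U}(0)=k+1$$, reproduces the coefficients $$1/k!$$ of $$e^{t}$$; a minimal plain-Python sketch:

```python
from fractions import Fraction

half = Fraction(1, 2)
U = [Fraction(1)]                               # U(0) = 1, initial condition (5.60)
for k in range(0, 6):                           # produces U(1) .. U(6)
    # known part of sum_r (r+1) U(r+1) U(k-r): every term except r = k
    lin = sum((r + 1) * U[r + 1] * U[k - r] for r in range(k))
    # triple-product (delayed Volterra) term of (5.59)
    trip = sum(Fraction(1, k - r) * half**(k - s - 1) * U[r] * U[s] * U[k - r - s - 1]
               for r in range(k) for s in range(k - r))
    U.append((half**k * U[k] + Fraction(3, 2) * trip - lin) / (k + 1))
# U(k) = 1/k!, the Taylor coefficients of exp(t), cf. (5.61)
```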
To improve the series solution (5.62) of Eq. (5.57), we apply the LPDTM to the third-order approximate solution

\begin{aligned} \tilde{u}_{3}(t)=\sum_{k=0}^{3} \mathcal{U}(k)t^{k}=1+t+ \frac{t^{2}}{2!}+\frac{t^{3}}{3!}. \end{aligned}
(5.63)

Applying the Laplace transform with respect to t to the truncated series yields

$$\mathcal{L} \bigl[u(t) \bigr]=\frac{1}{s}+\frac{1}{s^{2}}+ \frac{1}{s^{3}}+ \frac{1}{s^{4}}.$$
(5.64)

Setting $$s=\frac{1}{t}$$, we have

$$\mathcal{L} \bigl[u(t) \bigr]=t+t^{2}+t^{3}+t^{4}.$$
(5.65)

For $$m\geq 1$$, $$n\geq 1$$, and $$m+n\leq 4$$, all of the $$[\frac{m}{n} ](t)$$-Padé approximants of Eq. (5.65) yield

$$\biggl[\frac{m}{n} \biggr](t)=\frac{t}{1-t}.$$
(5.66)

Next, replacing t by $$\frac{1}{s}$$ in Eq. (5.66), we obtain

$$\biggl[\frac{m}{n} \biggr](t)=\frac{1}{s-1}.$$
(5.67)

Finally, using the inverse Laplace transform on the Padé approximants (5.67), we arrive at an improved solution that corresponds to the exact solution $$u(t)=e^{t}$$.

### Example 5.7

Lastly, we consider the following nonlinear VIDE with proportional delay:

\begin{aligned} \frac{1}{4}u^{\prime } \biggl( \frac{t}{2} \biggr)+\frac{1}{8} u(t)u \biggl( \frac{t}{2} \biggr)-u^{ \prime \prime } \biggl(\frac{t}{2} \biggr) \int _{0}^{\frac{t}{2}} u(\xi )u^{\prime }( \xi ) \,\mathrm{d}\xi =0,\quad 0\leq t\leq 1, \end{aligned}
(5.68)

subject to the initial conditions

\begin{aligned} u(0)=1,\qquad u^{\prime }(0)=-1. \end{aligned}
(5.69)

By using Table 1, Theorems 4.2, 4.3, 4.8 and 4.9, Eq. (5.68) transforms to the following recurrence relations:

\begin{aligned} & \biggl(\frac{1}{2} \biggr)^{k+3}(k+1) \mathcal{U}(k+1)+\sum_{r=0}^{k} \biggl( \frac{1}{2} \biggr)^{k-r+3} \mathcal{U}(r)\mathcal{U}(k-r) \\ &\quad {}-\sum_{r=0}^{k-1}\sum _{s=0}^{k-r-1} \frac{(r+2)! (k-r-s)!}{(k-r)! r!} \biggl( \frac{1}{2} \biggr)^{k+2}\mathcal{U}(r+2) \mathcal{U}(s) \mathcal{U}(k-r-s)=0, \end{aligned}
(5.70)

and from the initial conditions (5.69), we write

$$\mathcal{U}(0)=1,\qquad \mathcal{U}(1)=-1.$$
(5.71)

Substituting Eq. (5.71) into Eq. (5.70) recursively, we derive the following results:

\begin{aligned} \mathcal{U}(2)=\frac{1}{2!},\qquad \mathcal{U}(3)=\frac{-1}{3!},\qquad \mathcal{U}(4)=\frac{1}{4!},\qquad \mathcal{U}(5)=\frac{-1}{5!},\qquad \ldots . \end{aligned}
(5.72)

Therefore, from (3.2) we have

\begin{aligned} u(t)=\sum_{k=0}^{\infty } \mathcal{U}(k)t^{k}=1-t+\frac{t^{2}}{2!}- \frac{t^{3}}{3!}+ \frac{t^{4}}{4!}-\frac{t^{5}}{5!}+\cdots . \end{aligned}
(5.73)

To improve the series solution (5.73) of Eq. (5.68), we apply the LPDTM to the third-order approximate solution

\begin{aligned} \tilde{u}_{3}(t)=\sum_{k=0}^{3} \mathcal{U}(k)t^{k}=1-t+ \frac{t^{2}}{2!}-\frac{t^{3}}{3!}. \end{aligned}
(5.74)

Applying the Laplace transform with respect to t to the truncated series yields

$$\mathcal{L} \bigl[u(t) \bigr]=\frac{1}{s}-\frac{1}{s^{2}}+ \frac{1}{s^{3}}- \frac{1}{s^{4}}.$$
(5.75)

Setting $$s=\frac{1}{t}$$, we have

$$\mathcal{L} \bigl[u(t) \bigr]=t-t^{2}+t^{3}-t^{4}.$$
(5.76)

For $$m\geq 1$$, $$n\geq 1$$, and $$m+n\leq 4$$, all of the $$[\frac{m}{n} ](t)$$-Padé approximants of Eq. (5.76) yield

$$\biggl[\frac{m}{n} \biggr](t)=\frac{t}{1+t}.$$
(5.77)

Next, replacing t by $$\frac{1}{s}$$ in Eq. (5.77), we obtain

$$\biggl[\frac{m}{n} \biggr](t)=\frac{1}{s+1}.$$
(5.78)

Finally, using the inverse Laplace transform on the Padé approximants (5.78), we arrive at an improved solution that corresponds to the exact solution $$u(t)=e^{-t}$$.

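The Laplace–Padé stage is identical in every example above, so it can be packaged once. The sketch below (sympy; `lpdtm` is our own hypothetical helper, using the $$[\frac{1}{1}]$$ approximant, one member of the family quoted in the text) recovers the exact solutions of Examples 5.6 and 5.7 from the truncations (5.63) and (5.74):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

def lpdtm(truncation, m, n):
    """Laplace transform -> s = 1/t -> [m/n] Pade -> t = 1/s -> inverse Laplace."""
    F = sp.expand(sp.laplace_transform(truncation, t, s, noconds=True).subs(s, 1 / t))
    a = sp.symbols(f'a0:{m + 1}')
    b = sp.symbols(f'b1:{n + 1}')
    p = sum(a[i] * t**i for i in range(m + 1))
    q = 1 + sum(b[j] * t**(j + 1) for j in range(n))   # normalised so q(0) = 1
    # match the coefficients of t^0 .. t^(m+n) in F*q - p
    eqs = [sp.expand(F * q - p).coeff(t, k) for k in range(m + n + 1)]
    sol = sp.solve(eqs, list(a) + list(b))
    G = sp.cancel((p / q).subs(sol).subs(t, 1 / s))
    return sp.inverse_laplace_transform(G, s, t)

u56 = lpdtm(1 + t + t**2 / 2 + t**3 / 6, 1, 1)   # Eq. (5.63) -> exp(t)
u57 = lpdtm(1 - t + t**2 / 2 - t**3 / 6, 1, 1)   # Eq. (5.74) -> exp(-t)
```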
Comparing our results with the solutions obtained in [13, 14, 17, 58, 59], we see that they coincide.

Results for Examples 5.1–5.7 are reported in Figs. 1–7 and Tables 2–8, respectively. In these tables, the terms $$u_{E}$$, $$u_{n,R}$$ and $$e(u)$$ stand for the exact solution, the nth-order approximate solution of the DTM, and their absolute error, respectively.

## 6 Conclusion

Pantograph equations with proportional delays are usually difficult to solve analytically, and in many cases approximate solutions are required. In this work, the modified differential transform method, a combination of the differential transform method with Laplace transforms and the Padé approximant (LPDTM), is effectively used to find the exact solutions of linear and nonlinear pantograph type of differential and Volterra integro-differential equations (DEs and VIDEs) with proportional delays. The main advantage of this method is that it combines two of the strongest techniques for producing a fast convergent series solution of pantograph equations. Furthermore, as the examples show, the results confirm the reliability and efficiency of the method: it requires little effort to achieve the results and is a promising and powerful method, compared with other methods, for solving many linear and nonlinear pantograph type of DEs and VIDEs with proportional delays arising in mathematical physics. With some modifications, the method can be extended easily to fractional delay differential equations; this is our aim for future work.


## References

1. Farah, N., Seadawy, A.R., Ahmad, S., et al.: Interaction properties of soliton molecules and Painleve analysis for nano bioelectronics transmission model. Opt. Quantum Electron. 52, 329 (2020)

2. Ali, S., Younis, M.: Rogue wave solutions and modulation instability with variable coefficient and harmonic potential. Front. Phys. 7, 255 (2020)

3. Younis, M.: Optical solitons in $$(n+1)$$ dimensions with Kerr and power law nonlinearities. Mod. Phys. Lett. B 31(15), 1750186 (2017)

4. Arif, A., Younis, M., Imran, M., et al.: Solitons and lump wave solutions to the graphene thermophoretic motion system with a variable heat transmission. Eur. Phys. J. Plus 134, 303 (2019)

5. Sardar, A., Husnine, S.M., Rizvi, S.T.R., Younis, M., Ali, K.: Multiple travelling wave solutions for electrical transmission line model. Nonlinear Dyn. 82(3), 1317–1324 (2015)

6. Rizvia, S.T.R., Younis, M., Baleanu, D., Iqbal, H.: Lump and rogue wave solutions for the Broer–Kaup–Kupershmidt system. Chin. J. Phys. 68, 19–27 (2020)

7. Younis, M., Yousaf, U., Ahmed, N., Rizvi, S.T.R., Iqbal, M.S., Baleanu, D.: Investigation of electromagnetic wave structures for a coupled model in antiferromagnetic spin-ladder medium. Front. Phys. (2020). https://doi.org/10.3389/fphy.2020.00372

8. Ockendon, J.R., Tayler, A.B.: The dynamics of a current collection system for an electric locomotive. Proc. R. Soc. A 322(1551), 447–468 (1971)

9. Sedaghat, S., Ordokhani, Y., Dehghan, M.: Numerical solution of the delay differential equations of pantograph type via Chebyshev polynomials. Commun. Nonlinear Sci. Numer. Simul. 17, 4815–4830 (2012)

10. Yu, Z.H.: Variational iteration method for solving the multi-pantograph delay equation. Phys. Lett. A 372(43), 6475–6479 (2008)

11. Feng, X.: An analytic study on the multi-pantograph delay equations with variable coefficients. Bull. Math. Soc. Sci. Math. Roum. 56(104), 205–215 (2013)

12. Ahmed, I., Kumam, P., Abubakar, J., Borisut, P., Sitthithakerngkiet, K.: Solutions for impulsive fractional pantograph differential equation via generalized anti-periodic boundary condition. Adv. Differ. Equ. 2020, 477 (2020)

13. Hou, C.C., Simos, T.E., Famelis, I.-T.: Neural network solution of pantograph type differential equations. Math. Methods Appl. Sci. 43(6), 3369–3374 (2020)

14. Trif, D.: Direct operatorial tau method for pantograph-type equations. Appl. Math. Comput. 219, 2194–2203 (2012)

15. Sezer, M., Yalçinbaş, S., Gülsu, M.: A Taylor polynomial approach for solving generalized pantograph equations with nonhomogeneous term. Int. J. Comput. Math. 85(7), 1055–1063 (2008)

16. Cakir, M., Arslan, D.: The Adomian decomposition method and the differential transform method for numerical solution of multi-pantograph delay differential equations. Appl. Math. 6, 1332–1343 (2015)

17. Ezz-Eldien, S.S., Doha, E.H.: Fast and precise spectral method for solving pantograph type Volterra integro-differential equations. Numer. Algorithms 81, 57–77 (2019)

18. Yuzbasi, S., Karacayir, M.: A numerical approach for solving high-order linear delay Volterra integro-differential equations. Int. J. Comput. Methods 15(3), Article ID 1850042 (2018)

19. Rebenda, J., Šmarda, Z.: A differential transformation approach for solving functional differential equations with multiple delays. Commun. Nonlinear Sci. Numer. Simul. 48, 246–257 (2017)

20. Rameh, R.B., Cherry, E.M., Weber dos Santos, R.: Single-variable delay-differential equation approximations of the Fitzhugh–Nagumo and Hodgkin–Huxley models. Commun. Nonlinear Sci. Numer. Simul. 82, Article ID 105066 (2020)

21. Chena, X., Wang, L.: The variational iteration method for solving a neutral functional-differential equation with proportional delays. Comput. Math. Appl. 59, 2696–2702 (2010)

22. Zhou, J.K.: Differential Transformation and Its Application for Electrical Circuits. Huazhong University Press, Wuhan (1986)

23. Chen, C.K., Ho, S.H.: Solving partial differential equations by two-dimensional differential transform method. Appl. Math. Comput. 106(2–3), 171–179 (1999)

24. Ayaz, F.: Solutions of the system of differential equations by differential transform method. Appl. Math. Comput. 147(2), 547–567 (2004)

25. Arikoglu, A., Ozkol, I.: Solution of boundary value problems for integro-differential equations by using differential transform method. Appl. Math. Comput. 168(2), 1145–1158 (2005)

26. Arikoglu, A., Ozkol, I.: Solution of fractional differential equations by using differential transform method. Chaos Solitons Fractals 34(5), 1473–1481 (2007)

27. Odibat, Z., Momani, S.: A generalized differential transform method for linear partial differential equations of fractional order. Appl. Math. Lett. 21, 194–199 (2008)

28. Momani, S.M., Erturk, V.S.: Solutions of nonlinear oscillators by the modified differential transform method. Comput. Math. Appl. 55, 833–842 (2008)

29. Chang, S.-H., Chang, I.L.: A new algorithm for calculating one-dimensional differential transform of nonlinear functions. Appl. Math. Comput. 195, 799–808 (2008)

30. Keskin, Y., Oturanc, G.: Reduced differential transform method for partial differential equations. Int. J. Nonlinear Sci. Numer. Simul. 10(6), 741–749 (2009)

31. Keskin, Y.: Ph.D. Thesis, Selcuk University, in Turkish (2010)

32. Keskin, Y., Oturanc, G.: The reduced differential transform method: a new approach to fractional partial differential equations. Nonlinear Sci. Lett. A 1(2), 207–217 (2010)

33. Keskin, Y., Oturanc, G.: The reduced differential transformation method for solving linear and nonlinear wave equations. Iran. J. Sci. Technol. 34(2), 113–122 (2010)

34. Ayaz, F.: Applications of differential transform method to differential-algebraic equations. Appl. Math. Comput. 152(3), 649–657 (2004)

35. Benhammouda, B., Vazquez-Leal, H.: Analytical solution of a nonlinear index-three DAEs system modelling a slider-crank mechanism. Discrete Dyn. Nat. Soc. 2015, Article ID 206473 (2015)

36. Benhammouda, B.: Solution of nonlinear higher-index Hessenberg DAEs by Adomian polynomials and differential transform method. SpringerPlus 4, Article ID 648 (2015)

37. Celik, E., Tabatabaei, K.: Solving a class of Volterra integral equation systems by the differential transform method. Int. J. Nonlinear Sci. 16(1), 87–91 (2013)

38. Odibat, Z.: Differential transform method for solving Volterra integral equation with separable kernels. Math. Comput. Model. 48, 1144–1149 (2008)

39. Moosavi Noori, S.R., Taghizadeh, N.: Application of reduced differential transform method for solving two-dimensional Volterra integral equations of the second kind. Appl. Appl. Math. 14(2), 1003–1019 (2019)

40. Moosavi Noori, S.R., Taghizadeh, N.: Study on solving two-dimensional linear and nonlinear Volterra partial integro-differential equations by reduced differential transform method. Appl. Appl. Math. 15(1), 394–407 (2020)

41. Arikoglu, A., Ozkol, I.: Solutions of integral and integro-differential equation systems by using differential transform method. Comput. Math. Appl. 56, 2411–2417 (2008)

42. Tari, A., Shahmorad, S.: Differential transform method for the system of two-dimensional nonlinear Volterra integro-differential equations. Comput. Math. Appl. 61, 2621–2629 (2011)

43. Moghadam, M.M., Saeedi, H.: Application of differential transform for solving the Volterra integro-partial equations. Iran. J. Sci. Technol. 34(1), 59–70 (2010)

44. Eslami, M., Taleghani, S.A.: Differential transform method for conformable fractional partial differential equations. Iran. J. Numer. Anal. Optim. 9(2), 17–29 (2019)

45. Secer, A., Akinlar, M.A., Cevikel, A.: Efficient solutions of systems of fractional PDEs by the differential transform method. Adv. Differ. Equ. 2012, 188 (2012)

46. Deepanjan, D.: The generalized differential transform method for solution of a free vibration linear differential equation with fractional derivative damping. J. Appl. Math. Comput. Mech. 18(2), 19–29 (2019)

47. Acan, O., Al Qurashi, M.M., Baleanu, D.: Reduced differential transform method for solving time and space local fractional partial differential equations. J. Nonlinear Sci. Appl. 10(10), 5230–5238 (2017)

48. Durur, H., Ilhan, E., Bulut, H.: Novel complex wave solutions of the $$(2+ 1)$$-dimensional hyperbolic nonlinear Schrödinger equation. Fractal Fract. 4(3), 41 (2020)

49. Gao, W., Senel, M., Yel, G., Baskonus, H.M., Senel, B.: New complex wave patterns to the electrical transmission line model arising in network system. AIMS Math. 5(3), 1881–1892 (2020)

50. Hajipour, M., Jajarmi, A., Baleanu, D.: On the accurate discretization of a highly nonlinear boundary value problem. Numer. Algorithms 79(3), 679–695 (2018)

51. Al-Refai, M.: Maximum principles for nonlinear fractional differential equations in reliable space. Prog. Fract. Differ. Appl. 6(2), 95–99 (2020)

52. Singh, J., Kumar, D., Hammouch, Z., Atangana, A.: A fractional epidemiological model for computer viruses pertaining to a new fractional derivative. Appl. Math. Comput. 316, 504–515 (2018)

53. Gao, W., Veeresha, P., Prakasha, D.G., Senel, B., Baskonus, H.M.: Iterative method applied to the fractional nonlinear systems arising in thermoelasticity with Mittag-Leffler kernel. Fractals (2020). https://doi.org/10.1142/S0218348X2040040X

54. Yel, G., Baskonus, H.M., Gao, W.: New dark–bright soliton in the shallow water wave model. AIMS Math. 5(4), 4027–4044 (2020)

55. García Guirao, J.L., Baskonus, H.M., Kumar, A.: Regarding new wave patterns of the newly extended nonlinear $$(2 + 1)$$-dimensional Boussinesq equation with fourth order. Mathematics 8(3), 341 (2020)

56. Baker, G.A.: Essentials of Padé Approximants. Academic Press, San Diego (1975)

57. Benhammouda, B., Vazquez-Leal, H., Sarmiento-Reyes, A.: Modified reduced differential transform method for partial differential algebraic equations. J. Appl. Math. 2014, Article ID 279481 (2014)

58. Bahsi, M.M., Çevik, M.: Numerical solution of pantograph-type delay differential equations using perturbation-iteration algorithms. J. Appl. Math. 2015, Article ID 139821 (2015)

59. Yuzbasi, S.: Laguerre approach for solving pantograph-type Volterra integro-differential equations. Appl. Math. Comput. 232, 1183–1199 (2014)


## Author information


### Contributions

The authors declare that the study was realized in collaboration with equal responsibility. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Seyyedeh Roodabeh Moosavi Noori.

## Ethics declarations


### Competing interests

The authors declare that they have no competing interests.


## Rights and permissions


Moosavi Noori, S.R., Taghizadeh, N. Modified differential transform method for solving linear and nonlinear pantograph type of differential and Volterra integro-differential equations with proportional delays. Adv Differ Equ 2020, 649 (2020). https://doi.org/10.1186/s13662-020-03107-9