A new generalized Jacobi Galerkin operational matrix of derivatives: two algorithms for solving fourth-order boundary value problems
Advances in Difference Equations volume 2016, Article number: 22 (2016)
Abstract
This paper reports a novel Galerkin operational matrix of derivatives of some generalized Jacobi polynomials. This matrix is utilized for solving fourth-order linear and nonlinear boundary value problems. Two algorithms based on the Galerkin and collocation spectral methods are developed for obtaining new approximate solutions of linear and nonlinear fourth-order two-point boundary value problems. The key idea of the two proposed algorithms is to convert the differential equations, together with their boundary conditions, into systems of linear or nonlinear algebraic equations, which can be solved efficiently by suitable numerical solvers. The convergence analysis of the suggested generalized Jacobi expansion is carefully discussed. Some illustrative examples are given to demonstrate the high accuracy and effectiveness of the two proposed algorithms. The resulting approximate solutions are very close to the analytical solutions, and they are more accurate than those obtained by other existing techniques in the literature.
1 Introduction
Spectral methods are global methods. The main idea behind them is to approximate solutions of differential equations by truncated series of orthogonal polynomials. Spectral methods play prominent roles in various applications such as fluid dynamics. The three most commonly used versions are the tau, collocation, and Galerkin methods (see, for example, [1–8]). The choice of spectral method for a given equation depends on the type of the differential equation and on the type of the boundary conditions that govern it.
In the collocation approach, the test functions are Dirac delta functions centered at special collocation points; this approach requires the differential equation to be satisfied exactly at the collocation points. In the tau method, the residual function is expanded as a series of orthogonal polynomials, and the boundary conditions are then imposed as constraints; the tau approach has the advantage that it can be applied to problems with complicated boundary conditions. In the Galerkin method, the test functions are chosen such that each of them satisfies the underlying boundary conditions of the given differential equation.
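In weighted-residual form, all three variants impose conditions of the following standard type on the residual \(r(x)\) of the equation (a generic summary, not specific to this paper):
\[
\int_{a}^{b} r(x)\,\psi_{j}(x)\,w(x)\,dx=0,\qquad j=0,1,\ldots,N,
\]
with \(\psi_{j}(x)=\delta(x-x_{j})\) for collocation (so that \(r(x_{j})=0\)), \(\psi_{j}\) an orthogonal polynomial with the boundary conditions appended as extra constraints for the tau method, and \(\psi_{j}\) a basis function that already satisfies the boundary conditions for the Galerkin method.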
There is extensive work in the literature on the numerical solution of high-order boundary value problems (BVPs). The great interest in such problems is due to their importance in various fields of applied science; for example, a large number of problems in physics and fluid dynamics are described by problems of this kind. In this respect, there is a huge number of articles handling both high odd- and high even-order BVPs. For example, in the sequence of papers [5, 9–11], the authors obtained numerical solutions for even-order BVPs by applying the Galerkin method. The main idea for obtaining these solutions is to construct suitable basis functions satisfying the underlying boundary conditions of the given differential equation and then to apply the Galerkin method to convert each equation into a system of algebraic equations. The algorithms suggested in these articles are suitable for handling one- and two-dimensional linear even-order BVPs. The Galerkin and Petrov-Galerkin methods have the advantage that their application to linear problems enables one to investigate carefully the resulting systems, especially their complexities and condition numbers.
There are many algorithms in the literature for handling fourth-order boundary value problems. For example, Bernardi et al. [12] suggested some spectral approximations for handling two-dimensional fourth-order problems. In the two leading articles of Shen [13, 14], direct solvers for fourth-order two-point boundary value problems were developed; the suggested algorithms are based on constructing compact combinations of Legendre and Chebyshev polynomials together with the application of the Galerkin method. Many other techniques have been used for solving fourth-order BVPs; for example, the variational iteration method is applied in [15], the non-polynomial sextic spline method in [16], the quintic non-polynomial spline method in [17], and the Galerkin method in [18, 19]. Theorems listing the conditions for the existence and uniqueness of solutions of such problems are thoroughly discussed in the important book of Agarwal [20].
The approach of employing operational matrices of differentiation and integration is an important technique for solving various kinds of differential and integral equations. Its main advantages are its simplicity of application and its ability to handle linear as well as nonlinear differential equations. There are a large number of articles in the literature in this direction. For example, the authors in [6] employed the tau operational matrices of derivatives of Chebyshev polynomials of the second kind for handling the singular Lane-Emden type equations, and some other studies [21, 22] employ tau operational matrices of derivatives for solving the same type of equations. The operational matrices of shifted Chebyshev, shifted Jacobi, generalized Laguerre, and other kinds of polynomials have been employed for solving some fractional problems (see, for example, [23–27]). In addition, in the two recent papers of Abd-Elhameed [28, 29], two Galerkin operational matrices were introduced and used for solving, respectively, sixth-order two-point BVPs and Lane-Emden equations.
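As a minimal illustration of the idea, using the monomial basis rather than the polynomials considered in this paper, differentiating the basis vector amounts to multiplying it by a constant matrix:
\[
\frac{d}{dx}
\begin{pmatrix}1\\ x\\ x^{2}\\ x^{3}\end{pmatrix}
=
\begin{pmatrix}
0&0&0&0\\
1&0&0&0\\
0&2&0&0\\
0&0&3&0
\end{pmatrix}
\begin{pmatrix}1\\ x\\ x^{2}\\ x^{3}\end{pmatrix},
\]
so any linear differential operator applied to a truncated expansion is represented by a matrix acting on the vector of expansion coefficients, and the differential equation becomes an algebraic system.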
In this paper, our main aim is fourfold:
- Establishing a novel Galerkin operational matrix of derivatives of some generalized Jacobi polynomials.
- Investigating the convergence analysis of the suggested generalized Jacobi expansion.
- Employing the introduced operational matrix of derivatives to solve linear fourth-order BVPs numerically, based on the application of the Galerkin method.
- Employing the introduced operational matrix of derivatives to solve nonlinear fourth-order BVPs, based on the application of the collocation method.
The paper is organized as follows. Section 2 is devoted to presenting an overview of classical Jacobi and generalized Jacobi polynomials. Section 3 is concerned with deriving the Galerkin operational matrix of derivatives of some generalized Jacobi polynomials. In Section 4, we implement and present two numerical algorithms for handling linear and nonlinear fourth-order BVPs, based on the application of the generalized Jacobi Galerkin operational matrix method (GJGOMM) for linear problems and the generalized Jacobi collocation operational matrix method (GJCOMM) for nonlinear problems. The convergence analysis of the generalized Jacobi expansion is discussed in detail in Section 5. Numerical examples, including some discussions and comparisons, are given in Section 6 to test the efficiency, accuracy, and applicability of the suggested algorithms. Finally, conclusions are reported in Section 7.
2 An overview of classical Jacobi and generalized Jacobi polynomials
The classical Jacobi polynomials \(P_{n}^{(\alpha,\beta)}(x)\) associated with the real parameters (\(\alpha>-1\), \(\beta>-1\)) (see [30] and [31]) are a sequence of polynomials defined on \([-1,1]\). Define the normalized orthogonal polynomials \(R_{n}^{(\alpha,\beta)}(x)\) (see [32])
and define the shifted normalized Jacobi polynomials on \([a,b]\) as
The polynomials \(\tilde{R}^{(\alpha ,\beta )}_{n}(x)\) are orthogonal on \([a,b]\) with respect to the weight function \((b-x)^{\alpha } (x-a)^{\beta }\), in the sense that
where
It should be noted here that the Legendre polynomials are a particular case of the Jacobi polynomials. In fact, \(R_{n}^{(0,0)}(x)=L_{n}(x)\), where \(L_{n}(x)\) is the standard Legendre polynomial of degree n.
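For orientation, the normalization and shift typically used in this family of papers have the following form (an assumption here, since the displayed definitions are not reproduced above, and the paper's own equations may differ in the normalization constant):
\[
R_{n}^{(\alpha,\beta)}(x)=\frac{P_{n}^{(\alpha,\beta)}(x)}{P_{n}^{(\alpha,\beta)}(1)},\qquad
\tilde{R}_{n}^{(\alpha,\beta)}(x)=R_{n}^{(\alpha,\beta)}\!\left(\frac{2x-a-b}{b-a}\right),
\]
which is consistent with \(R_{n}^{(0,0)}(x)=L_{n}(x)\) and with the substitution \(x\mapsto\frac{2x-a-b}{b-a}\) used later in the proof of Theorem 1.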
Let \(w^{\alpha,\beta}(x)=(b-x)^{\alpha}(x-a)^{\beta}\). We denote by \(L^{2}_{w^{\alpha,\beta}}(a,b)\) the weighted \(L^{2}\) space with inner product \((u,v)_{w^{\alpha,\beta}}=\int_{a}^{b}u(x)\,v(x)\,w^{\alpha,\beta}(x)\,dx\) and the associated norm \(\|u\|_{w^{\alpha,\beta}}=(u,u)^{\frac{1}{2}}_{w^{\alpha,\beta}}\). The definition of the shifted Jacobi polynomials will now be extended to include the cases in which α and/or \(\beta\le-1\). Assume that \(\ell,m\in\mathbb{Z}\), and define
It is worth noting here that in the case \([a,b]=[-1,1]\), the polynomials defined in (4) are the so-called generalized Jacobi polynomials \(J_{i}^{(\ell,m)}(x)\), defined by Guo et al. in [33]. The symmetric generalized Jacobi polynomials \(J^{(-n,-n)}_{i}(x)\) can be expressed explicitly in terms of the Legendre polynomials, while the symmetric shifted generalized Jacobi polynomials \(\tilde{J}^{(-n,-n)}_{i}(x)\) can be expressed in terms of the shifted Legendre polynomials. These results are given in the following two lemmas.
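Before stating them, it may help to recall the structure of generalized Jacobi polynomials with negative integer parameters in the convention of [33] (an orientation only; the exact normalization in (4) may differ):
\[
J_{i}^{(-k,-l)}(x)\propto(1-x)^{k}(1+x)^{l}\,P_{i-k-l}^{(k,l)}(x),\qquad k,l\in\mathbb{N},\ i\ge k+l,
\]
so that, in particular, the symmetric polynomials \(J_{i}^{(-2,-2)}(x)\) carry the factor \((1-x^{2})^{2}\), and their shifted counterparts on \([a,b]\) carry the factor \((x-a)^{2}(b-x)^{2}\), which is exactly what is needed to satisfy the boundary conditions of a fourth-order problem.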
Lemma 1
For every nonnegative integer n, and for all \(i\ge2n\), one has
and in particular,
Proof
For the proof of Lemma 1, see [9]. □
Now, Lemma 2 is a direct consequence of Lemma 1.
Lemma 2
For every nonnegative integer n, for all \(i\ge2n\), one has
and in particular,
The following lemma is also of interest in the sequel.
Lemma 3
The following integral formula holds:
Proof
Lemma 3 follows by integrating equation (8) (for the case \([a,b]=[-1,1]\)) with the aid of the following integral formula:
 □
3 Generalized Jacobi Galerkin operational matrix of derivatives
In this section, a novel operational matrix of derivatives will be developed. For this purpose, we choose the following set of basis functions:
It is easy to see that the set of polynomials \(\{\phi_{i}(x): i=0,1,2,\ldots\}\) is linearly independent. Moreover, these polynomials are orthogonal on \([a,b]\) with respect to the weight function \(w(x)=\frac{1}{(x-a)^{2} (b-x)^{2}}\), in the sense that
Let \(H_{w}^{r}(I)\) (\(r=0,1,2,\ldots\)) denote the weighted Sobolev spaces, whose inner products and norms are denoted by \((\cdot,\cdot)_{r,w}\) and \(\|\cdot\|_{r,w}\), respectively (see [4]). To account for the homogeneous boundary conditions, we define
where \(I=(a,b)\).
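A minimal numerical sketch of a basis with these properties is given below. It assumes, as suggested by the weight \(w(x)\) above and by the proof of Theorem 2, that the basis in (11) is of the form \(\phi_{i}(x)=(x-a)^{2}(b-x)^{2}\tilde{R}_{i}^{(2,2)}(x)\) up to a normalization constant; the paper's exact definition may differ.

```python
import numpy as np
from scipy.special import eval_jacobi

def phi(i, x, a=0.0, b=1.0):
    """Candidate basis function of the type chosen in (11).

    Assumed form: phi_i(x) = (x-a)^2 (b-x)^2 * Rtilde_i^{(2,2)}(x).  Each phi_i
    vanishes together with its first derivative at x = a and x = b, which is
    what a Galerkin basis needs for fourth-order problems with homogeneous
    boundary conditions, and the family is orthogonal with respect to
    w(x) = 1 / ((x-a)^2 (b-x)^2)."""
    t = (2.0 * x - a - b) / (b - a)                              # affine map [a, b] -> [-1, 1]
    jac = eval_jacobi(i, 2, 2, t) / eval_jacobi(i, 2, 2, 1.0)    # normalized Jacobi(2,2) part
    return (x - a) ** 2 * (b - x) ** 2 * jac
```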
Define the following subspace of \(H_{0,w}^{2}(I)\):
Any function \(f(x)\in H_{0,w}^{2}(I)\) can be expanded as
where
Assume that \(f(x)\) in equation (12) can be approximated as
where
Now, we are going to state and prove the main theorem, from which a novel Galerkin operational matrix of derivatives will be introduced.
Theorem 1
If the polynomials \(\phi_{i}(x)\) are selected as in (11), then for all \(i\ge1\), one has
where \(\eta_{i}(x)\) is given by
Proof
The key idea is to prove Theorem 1 on \([-1,1]\); the proof on the general interval \([a,b]\) then follows easily by a change of variable. Now, we intend to prove the relation
where
and
To prove (18), it is sufficient to prove that the following identity holds, up to a constant:
where
Indeed
If we make use of Lemma 3, then the latter equation, after performing some manipulations, is turned into the relation
where
After performing some rather lengthy manipulations on the right-hand side of (21), equation (19) is obtained.
Now, if x in (18) is replaced by \(\frac{2x-a-b}{b-a}\), then after performing some manipulations, we get
where \(\eta_{i}(x)\) is given by
and this completes the proof of Theorem 1. □
Now, with the aid of Theorem 1, the first derivative of the vector \(\boldsymbol {\Phi}(x)\) defined in (15) can be expressed in matrix form:
where \(\boldsymbol {\eta}(x)= (\eta_{0}(x),\eta_{1}(x),\dots,\eta _{N}(x) )^{T}\), and \(H= (h_{ij} )_{0\leqslant i,j\leqslant N}\) is an \((N+1)\times(N+1)\) matrix whose nonzero elements can be given explicitly from equation (16) by
For example, for \(N=5\), the operational matrix H is the following \((6\times6)\) matrix:
Corollary 1
The second-, third- and fourth-order derivatives of the vector \(\boldsymbol {\Phi}(x)\) are given, respectively, by
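Assuming the first-derivative relation takes the matrix form \(\boldsymbol{\Phi}'(x)=H\,\boldsymbol{\Phi}(x)+\boldsymbol{\eta}(x)\) (with H and \(\boldsymbol{\eta}(x)\) as above; the paper's exact equations may carry additional constants), the higher-order relations follow by repeated differentiation:
\[
\begin{aligned}
\boldsymbol{\Phi}''(x)&=H^{2}\boldsymbol{\Phi}(x)+H\boldsymbol{\eta}(x)+\boldsymbol{\eta}'(x),\\
\boldsymbol{\Phi}'''(x)&=H^{3}\boldsymbol{\Phi}(x)+H^{2}\boldsymbol{\eta}(x)+H\boldsymbol{\eta}'(x)+\boldsymbol{\eta}''(x),\\
\boldsymbol{\Phi}^{(4)}(x)&=H^{4}\boldsymbol{\Phi}(x)+H^{3}\boldsymbol{\eta}(x)+H^{2}\boldsymbol{\eta}'(x)+H\boldsymbol{\eta}''(x)+\boldsymbol{\eta}'''(x).
\end{aligned}
\]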
4 Two algorithms for fourth-order two-point BVPs
In this section, we are interested in developing two numerical algorithms for solving both linear and nonlinear fourth-order two-point BVPs. The Galerkin operational matrix of derivatives introduced in Section 3 is employed for this purpose. The linear equations are handled by the Galerkin method, while the nonlinear equations are handled by the typical collocation method.
4.1 Linear fourth-order BVPs
Consider the linear fourth-order boundary value problem
subject to the homogeneous boundary conditions
If \(u(x)\) is approximated as
then, making use of equations (23)-(26), the following approximations for \(u^{(\ell)}(x)\), \(1\le\ell\le4\), are obtained:
where
If we substitute equations (29)-(31) into equation (27), then the residual, \(r(x)\), of this equation can be written
The application of the Galerkin method (see [4]) yields the following \((N+1)\) linear equations in the unknown expansion coefficients, \(c_{i}\), namely
Thus equation (33) generates a set of \((N+1)\) linear equations which can be solved for the unknown components of the vector C, and hence the approximate spectral solution \(u_{N}(x)\) given in (29) can be obtained.
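The assembly step can be sketched as follows. This is a schematic illustration of converting (33) into the linear system for the vector C, not the paper's exact formulation: the callables L_phi (the differential operator of (27) applied to \(\phi_{i}\)), phi, the source term g, and the weight w are placeholders whose concrete forms come from (11), (16), and (27).

```python
import numpy as np

def galerkin_system(L_phi, phi, g, w, a, b, N, nquad=64):
    """Assemble the (N+1) x (N+1) Galerkin system  A c = rhs  by requiring the
    residual of the linear BVP to be orthogonal, in the weighted inner product,
    to every basis function phi_j, j = 0, ..., N."""
    t, wq = np.polynomial.legendre.leggauss(nquad)     # Gauss-Legendre rule on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)              # map quadrature nodes to [a, b]
    wq = 0.5 * (b - a) * wq
    A = np.empty((N + 1, N + 1))
    rhs = np.empty(N + 1)
    for j in range(N + 1):
        test = phi(j, x) * w(x) * wq                   # weighted test-function samples
        rhs[j] = np.sum(g(x) * test)
        for i in range(N + 1):
            A[j, i] = np.sum(L_phi(i, x) * test)
    return A, rhs

# With A, rhs in hand:  c = np.linalg.solve(A, rhs)  and  u_N(x) = sum_i c[i] * phi(i, x).
```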
Remark 1
It should be noted that the problem (27), governed by the nonhomogeneous boundary conditions
can easily be transformed to a problem similar to (27)-(28) (see [10]).
4.2 Solution of nonlinear fourth-order two-point BVPs
Consider the following nonlinear fourth-order boundary value problem:
governed by the homogeneous boundary conditions
If \(u^{(\ell)}(x)\), \(0\le\ell\le4\), are approximated as in (29)-(31), then the following nonlinear equations in the unknown vector C can be obtained:
An approximate solution \(u_{N}(x)\) can be obtained by employing the typical collocation method. For this purpose, equation (37) is collocated at \((N+1)\) points. These points may be taken to be the zeros of the polynomial \(\tilde{R}^{(2,2)}_{N+1}(x)\), or chosen in any other suitable way. Hence, a set of \((N+1)\) nonlinear equations in the expansion coefficients \(c_{i}\) is generated. This nonlinear system can be solved with the aid of a suitable solver, such as the well-known Newton iterative method, and the corresponding approximate solution \(u_{N}(x)\) is then obtained, as sketched below.
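The following sketch illustrates the collocation step under the stated choice of nodes; residual(c, x) is a placeholder for the residual of (37) evaluated at the points x for a coefficient vector c, and its concrete form depends on the operational matrices of Section 3.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.optimize import fsolve

def solve_by_collocation(residual, N, a=0.0, b=1.0):
    """GJCOMM-style step: collocate the nonlinear residual at the zeros of the
    shifted Jacobi polynomial R~^{(2,2)}_{N+1} and solve the resulting algebraic
    system by a Newton-type iteration (here via SciPy's fsolve)."""
    t, _ = roots_jacobi(N + 1, 2, 2)           # zeros of P_{N+1}^{(2,2)} on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)      # shift the nodes to [a, b]
    c0 = np.zeros(N + 1)                       # initial guess for the iteration
    return fsolve(lambda c: residual(c, x), c0)
```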
5 Convergence analysis of the approximate expansion
In this section, the convergence analysis of the suggested generalized Jacobi approximate solution is investigated. We state and prove a theorem showing that the expansion (12) of a function \(f(x)=(x-a)^{2} (b-x)^{2} G(x)\in H_{0,w}^{2}(I)\), where \(G(x)\) has a bounded fourth derivative, converges uniformly to \(f(x)\).
Theorem 2
A function \(f(x)=(x-a)^{2} (b-x)^{2} G(x)\in H_{0,w}^{2}(I)\), with \(w(x) =\frac{1}{(x-a)^{2} (b-x)^{2}}\) and \(|G^{(4)}(x)|\leqslant M\), can be expanded as an infinite sum of the basis functions in (12). This series converges uniformly to \(f(x)\), and the coefficients in (12) satisfy the inequality
Proof
From equation (13), one has
and with the aid of equation (11), the coefficients \(c_{i}\) may be written alternatively in the form
Making use of Lemma 2, the polynomials \(\tilde{J}^{(-2,-2)}_{i}(x)\) can be expanded in terms of the shifted Legendre polynomials, and so the coefficients \(c_{i}\) take the form
If the last relation is integrated by parts four times, then the repeated application of equation (10) yields
where \(I^{(4)}(x)\) is given by
which can be written as
and then the coefficients \(c_{i}\) take the form
Now, making use of the substitution \(\frac{2x-a-b}{b-a}=\cos \theta\) enables one to put the coefficients \(c_{i}\) in the form
Taking into consideration the assumption \(\vert G^{(4)}(x)\vert \le M\), we have
From a Bernstein-type inequality (see [34]), it is easy to see that
and hence (44) together with the last inequality leads to the estimation
Finally, it is easy to show that for all \(i\ge6\),
This completes the proof of the theorem. □
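For reference, the Legendre specialization of the Bernstein-type inequality of [34] invoked in the proof is commonly stated as follows (the exact constant used above may be written slightly differently):
\[
\bigl|L_{n}(\cos\theta)\bigr|\le\Bigl(\frac{2}{\pi n\sin\theta}\Bigr)^{1/2},\qquad 0<\theta<\pi,\ n\ge1.
\]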
6 Numerical results and discussions
In this section, the two algorithms proposed in Section 4 are applied to solve linear and nonlinear fourth-order two-point boundary value problems. The numerical results confirm that the two algorithms are very efficient and accurate.
Example 1
Consider the fourth-order linear boundary value problem (see [35]):
The exact solution of (46) is
Table 1 lists the maximum absolute errors E resulting from the application of GJGOMM for various values of N, while Table 2 displays a comparison between the relative errors obtained by the two methods developed in [35], namely the first-order method (1OM) and the second-order methods (2OMs), and the relative errors resulting from the application of GJGOMM.
Example 2
Consider the following fourth-order nonlinear boundary value problem (see [36, 37]):
The exact solution of the above problem is
In Table 3, we list the maximum absolute errors obtained by GJCOMM for various values of N. Let \(E_{1},E_{2},E_{3}\), and \(E_{4}\) denote the maximum absolute errors when the selected collocation points are, respectively, the zeros of the shifted Legendre polynomial \(L^{*}_{N+1}(x)\), the shifted Chebyshev polynomials of the first and second kinds \(T^{*}_{N+1}(x)\) and \(U^{*}_{N+1}(x)\), and the shifted symmetric Jacobi polynomial \(\tilde{R}^{(2,2)}_{N+1}(x)\). Figures 1 and 2 display a comparison between the maximum absolute errors resulting from the application of GJCOMM for \(N=4\) and 6, respectively. Table 3 and Figures 1 and 2 show that the best choice among these collocation points is obtained when the selected points are the zeros of the polynomial \(\tilde{R}^{(2,2)}_{N+1}(x)\). Table 4 displays a comparison between the errors obtained by the application of GJCOMM for \(N=4\) and the errors resulting from the application of the three methods developed in [36, 37]. This comparison confirms that our results are more accurate than those obtained in [36, 37].
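For reproducibility, the four families of collocation points compared above can be generated as in the following sketch (using SciPy; the interval \([a,b]\) and the shift convention are assumptions):

```python
import numpy as np
from scipy.special import roots_legendre, roots_chebyt, roots_chebyu, roots_jacobi

def collocation_points(kind, N, a=0.0, b=1.0):
    """Zeros of the degree-(N+1) polynomial of the requested family, shifted to [a, b]."""
    roots = {
        "legendre":   roots_legendre(N + 1)[0],      # zeros of L_{N+1}
        "chebyshev1": roots_chebyt(N + 1)[0],        # zeros of T_{N+1}
        "chebyshev2": roots_chebyu(N + 1)[0],        # zeros of U_{N+1}
        "jacobi22":   roots_jacobi(N + 1, 2, 2)[0],  # zeros of P_{N+1}^{(2,2)}
    }[kind]
    return 0.5 * (b - a) * roots + 0.5 * (a + b)
```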
Example 3
Consider the following fourth-order nonlinear boundary value problem (see [37]):
where \(g(x)=-x^{10}+4 x^{9}-4 x^{8}-4 x^{7}+8 x^{6}-4 x^{4}+120 x-48\). The exact solution of the above problem is
In Table 5, we list the maximum absolute errors obtained by GJCOMM for various values of N, where E denotes the maximum pointwise error when the selected collocation points are the zeros of the polynomial \(\tilde{R}^{(2,2)}_{N+1}(x)\). Moreover, Table 6 displays a comparison between the errors obtained by the application of GJCOMM and the method developed in [37] for the case \(N=2\). The comparison confirms that our results are more accurate than those obtained in [37].
Example 4
Consider the following nonlinear fourth-order boundary value problem (see [38]):
with the exact solution \(y(x) =\sinh(x)+1\).
In Table 7, the absolute errors are listed for various values of N. In order to compare the absolute errors obtained by applying GJCOMM with those obtained by applying the RKHSM in [38], we list the absolute errors obtained by the RKHSM in the last column of this table. The table shows that the approximate solution of problem (49) obtained by GJCOMM is highly efficient and more accurate than that obtained by the RKHSM [38].
7 Concluding remarks
In this article, a novel operational matrix of derivatives of certain generalized Jacobi polynomials is derived and used for obtaining spectral solutions of linear and nonlinear fourth-order two-point boundary value problems. Two spectral methods, namely the Galerkin and collocation methods, are employed for this purpose. The main advantages of the introduced algorithms are their simplicity of application and their high accuracy, since highly accurate approximate solutions can be achieved using a small number of terms of the suggested expansion. The numerical results are convincing, and the resulting approximate solutions are very close to the exact ones.
References
Rashidinia, J, Ghasemi, M: B-spline collocation for solution of two-point boundary value problems. J. Comput. Appl. Math. 235(8), 2325-2342 (2011)
Elbarbary, EME: Efficient Chebyshev-Petrov-Galerkin method for solving second-order equations. J. Sci. Comput. 34(2), 113-126 (2008)
Julien, K, Watson, M: Efficient multi-dimensional solution of PDEs using Chebyshev spectral methods. J. Comput. Phys. 228, 1480-1503 (2009)
Canuto, C, Hussaini, MY, Quarteroni, A, Zang, TA: Spectral Methods in Fluid Dynamics. Springer, Berlin (1988)
Doha, EH, Abd-Elhameed, WM: Efficient spectral-Galerkin algorithms for direct solution of second-order equations using ultraspherical polynomials. SIAM J. Sci. Comput. 24, 548-571 (2002)
Doha, EH, Abd-Elhameed, WM, Youssri, YH: Second kind Chebyshev operational matrix algorithm for solving differential equations of Lane-Emden type. New Astron. 23/24, 113-117 (2013)
Bhrawy, AH, Hafez, RM, Alzaidy, JF: A new exponential Jacobi pseudospectral method for solving high-order ordinary differential equations. Adv. Differ. Equ. 2015, 152 (2015)
Doha, EH, Bhrawy, AH, Abd-Elhameed, WM: Jacobi spectral Galerkin method for elliptic Neumann problems. Numer. Algorithms 50(1), 67-91 (2009)
Doha, EH, Abd-Elhameed, WM, Bhrawy, AH: New spectral-Galerkin algorithms for direct solution of high even-order differential equations using symmetric generalized Jacobi polynomials. Collect. Math. 64(3), 373-394 (2013)
Doha, EH, Abd-Elhameed, WM, Bassuony, MA: New algorithms for solving high even-order differential equations using third and fourth Chebyshev-Galerkin methods. J. Comput. Phys. 236, 563-579 (2013)
Doha, EH, Abd-Elhameed, WM, Bhrawy, AH: Efficient spectral ultraspherical-Galerkin algorithms for the direct solution of 2nth-order linear differential equations. Appl. Math. Model. 33, 1982-1996 (2009)
Bernardi, C, Giuseppe, C, Maday, Y: Some spectral approximations of two-dimensional fourth-order problems. Math. Comput. 59(199), 63-76 (1992)
Shen, J: Efficient spectral-Galerkin method I. Direct solvers of second-and fourth-order equations using Legendre polynomials. SIAM J. Sci. Comput. 15(6), 1489-1505 (1994)
Shen, J: Efficient spectral-Galerkin method II. Direct solvers of second-and fourth-order equations using Chebyshev polynomials. SIAM J. Sci. Comput. 16(1), 74-87 (1995)
Noor, MA, Mohyud-Din, ST: An efficient method for fourth-order boundary value problems. Comput. Math. Appl. 54(7), 1101-1111 (2007)
Khan, A, Khandelwal, P: Non-polynomial sextic spline approach for the solution of fourth-order boundary value problems. Appl. Math. Comput. 218(7), 3320-3329 (2011)
Lashien, IF, Ramadan, MA, Zahra, WK: Quintic nonpolynomial spline solutions for fourth order two-point boundary value problem. Commun. Nonlinear Sci. Numer. Simul. 14(4), 1105-1114 (2009)
Doha, EH, Bhrawy, AH: Efficient spectral-Galerkin algorithms for direct solution of fourth-order differential equations using Jacobi polynomials. Appl. Numer. Math. 58(8), 1224-1244 (2008)
Doha, EH, Bhrawy, AH: A Jacobi spectral Galerkin method for the integrated forms of fourth-order elliptic differential equations. Numer. Methods Partial Differ. Equ. 25(3), 712-739 (2009)
Agarwal, RP: Boundary Value Problems for Higher-Order Differential Equations. World Scientific, Singapore (1986)
Öztürk, Y, Gülsu, M: An operational matrix method for solving Lane-Emden equations arising in astrophysics. Math. Methods Appl. Sci. 37(15), 2227-2235 (2014)
Bhardwaj, A, Pandey, RK, Kumar, N, Dutta, G: Solution of Lane-Emden type equations using Legendre operational matrix of differentiation. Appl. Math. Comput. 218(14), 7629-7637 (2012)
Bhrawy, AH, Zaky, MA: Numerical simulation for two-dimensional variable-order fractional nonlinear cable equation. Nonlinear Dyn. 80(1-2), 101-116 (2015)
Bhrawy, AH, Taha, TM, Alzahrani, EO, Baleanu, D, Alzahrani, AA: New operational matrices for solving fractional differential equations on the half-line. PLoS ONE 10(5), e0126620 (2015). doi:10.1371/journal.pone.0126620
Saadatmandi, A, Dehghan, M: A new operational matrix for solving fractional-order differential equations. Comput. Math. Appl. 59(3), 1326-1336 (2010)
Maleknejad, K, Basirat, B, Hashemizadeh, E: A Bernstein operational matrix approach for solving a system of high order linear Volterra-Fredholm integro-differential equations. Math. Comput. Model. 55(3), 1363-1372 (2012)
Zhu, L, Fan, Q: Solving fractional nonlinear Fredholm integro-differential equations by the second kind Chebyshev wavelet. Commun. Nonlinear Sci. Numer. Simul. 17(6), 2333-2341 (2012)
Abd-Elhameed, WM: On solving linear and nonlinear sixth-order two point boundary value problems via an elegant harmonic numbers operational matrix of derivatives. Comput. Model. Eng. Sci. 101(3), 159-185 (2014)
Abd-Elhameed, WM: New Galerkin operational matrix of derivatives for solving Lane-Emden singular-type equations. Eur. Phys. J. Plus 130, 52 (2015)
Abramowitz, M, Stegun, IA: Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables. Dover, New York (2012)
Andrews, GE, Askey, R, Roy, R: Special Functions. Cambridge University Press, Cambridge (1999)
Doha, EH, Abd-Elhameed, WM, Ahmed, HM: The coefficients of differentiated expansions of double and triple Jacobi polynomials. Bull. Iran. Math. Soc. 38(3), 739-766 (2012)
Guo, B-Y, Shen, J, Wang, L-L: Optimal spectral-Galerkin methods using generalized Jacobi polynomials. J. Sci. Comput. 27(1-3), 305-322 (2006)
Chow, Y, Gatteschi, L, Wong, R: A Bernstein-type inequality for the Jacobi polynomial. Proc. Am. Math. Soc. 121(3), 703-709 (1994)
Xu, L: The variational iteration method for fourth order boundary value problems. Chaos Solitons Fractals 39(3), 1386-1394 (2009)
Wazwaz, AM: The numerical solution of special fourth-order boundary value problems by the modified decomposition method. Int. J. Comput. Math. 79(3), 345-356 (2002)
Singh, R, Kumar, J, Nelakanti, G: Approximate series solution of fourth-order boundary value problems using decomposition method with Green’s function. J. Math. Chem. 52(4), 1099-1118 (2014)
Geng, F: A new reproducing kernel Hilbert space method for solving nonlinear fourth-order boundary value problems. Appl. Math. Comput. 213, 163-169 (2009)
Acknowledgements
The authors are grateful to the referees for their valuable comments and suggestions which have improved the manuscript in its present form.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The authors declare that they carried out all the work in this manuscript and read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Abd-Elhameed, W.M., Ahmed, H.M. & Youssri, Y.H. A new generalized Jacobi Galerkin operational matrix of derivatives: two algorithms for solving fourth-order boundary value problems. Adv Differ Equ 2016, 22 (2016). https://doi.org/10.1186/s13662-016-0753-2