

Chebyshev reproducing kernel method: application to two-point boundary value problems

Abstract

In this paper, a new implementation of the reproducing kernel method is proposed in order to obtain an accurate numerical solution of two-point boundary value problems with Dirichlet boundary conditions. Based on reproducing kernel theory, reproducing kernel functions with polynomial form are constructed in the reproducing kernel spaces spanned by the Chebyshev basis polynomials. A convergence analysis and an error estimate for the present method in the \(L_{w}^{2}\) space are also discussed. The numerical solutions obtained by this method are compared with the exact solutions. The results reveal that the proposed method is quite efficient and accurate.

1 Introduction

Boundary value problems (BVPs) associated with different kinds of differential equations play important roles in modeling a wide variety of natural phenomena. Therefore, these problems have attracted the attention of many researchers. Two-point boundary value problems associated with second order differential equations arise in a wide variety of problems in science and engineering. Many approaches for solving ordinary boundary value problems numerically are available [1–12]. Recently, reproducing kernel methods (RKMs) have been used to solve a variety of BVPs [13–24].

According to these references, we see that the implementation of the reproducing kernel method for solving a problem consists of four stages.

First, we carefully identify a solution space. An inappropriate choice is an obstacle to achieving the desired solution.

Second, we construct the reproducing kernel function. In all of the above mentioned papers, this function is constructed by solving a boundary value problem and a subsequent linear system of equations. Explicit formulas for two kinds of reproducing kernel functions are introduced in [25].

Third, we produce a set of orthonormal basis functions for the solution space by using the kernel function, a boundary operator, a dense sequence of nodal points in the domain of the solution space, and the Gram-Schmidt orthogonalization process.

Finally, we represent the exact solution of the problem as an infinite sum of the orthonormal basis functions obtained in the previous stage. A truncation of this series to N terms is used as an approximate solution.

Here, we consider the following second order two-point boundary value problems with the Dirichlet boundary conditions:

$$ \left \{ \textstyle\begin{array}{l} u''+p(x)u'+q(x)u=f(x) , \quad a\leq x \leq b, \\ u(a)= \alpha, \\ u(b)=\beta, \end{array}\displaystyle \right . $$
(1)

where \(p, q \in C^{2}(a,b)\) and \(f \in L_{w}^{2}[a,b]\) are sufficiently regular given functions such that equation (1) admits a unique solution, and α, β are finite constants. Without loss of generality, we can assume that the boundary conditions in equation (1) are homogeneous [13]; a standard substitution that achieves this is recalled below, at the end of this section. In this paper, based on reproducing kernel theory, reproducing kernels with polynomial form are constructed and a computational method is described in order to obtain an accurate numerical solution with polynomial form of equation (1) in the reproducing kernel spaces spanned by the Chebyshev basis polynomials. The paper is organized as follows. In the following section, a closed form of the Chebyshev orthonormal basis polynomials which independently satisfy the homogeneous boundary conditions on \([a,b]\) is introduced, and a reproducing kernel with polynomial form is constructed. In Section 3, our method, the Chebyshev reproducing kernel method (C-RKM), is introduced, and a convergence analysis and an error estimate for the present method in the \(L_{w}^{2}\) space are discussed. Examples are given to illustrate the applicability and accuracy in Section 4, and a few conclusions are presented in Section 5.
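For the reader's convenience, one standard substitution achieving homogeneous conditions (a routine step, stated here as a reminder rather than taken from the paper) is

$$ v(x)=u(x)- \biggl( \alpha+\frac{\beta-\alpha}{b-a}(x-a) \biggr), $$

so that \(v(a)=v(b)=0\) and v satisfies an equation of the same form as (1) with the modified right-hand side \(f(x)-p(x)\frac{\beta-\alpha}{b-a}-q(x) (\alpha+\frac{\beta-\alpha}{b-a}(x-a) )\).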

2 Basis functions and polynomial reproducing kernel function

2.1 Basis functions

The well-known shifted Chebyshev polynomials of the first kind in x are defined on \([a,b]\) and can be determined with the aid of the following recurrence formula:

$$\begin{aligned}& T_{0}(x)=1,\qquad T_{1}(x)= \frac{2x-(a+b)}{b-a}, \\& T_{n}(x)= 2 \biggl( \frac{2x-(a+b)}{b-a} \biggr)T_{n-1}(x)-T_{n-2}(x), \quad n=2,3,\ldots . \end{aligned}$$

The orthogonality condition is

$$ \langle T_{n},T_{m} \rangle= \int_{a}^{b} w_{[a,b]}(x) T_{n}(x) T_{m}(x) \,dx = \left \{ \textstyle\begin{array}{l@{\quad}l} 0, & n\neq m, \\ \frac{(b-a) \pi}{2}, & n=m=0, \\ \frac{(b-a) \pi}{4} , & n=m\neq0, \end{array}\displaystyle \right . $$
(2)

where

$$ w_{[a,b]}(x)=\frac{1}{\sqrt{1-(\frac{2x-a-b}{b-a})^{2}}}. $$
(3)
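To make the construction concrete, the following sketch (ours, not part of the paper) evaluates the shifted Chebyshev polynomials by the recurrence above and checks the orthogonality relation (2) numerically; the Gauss-Chebyshev quadrature rule and the values \(a=0\), \(b=1\), \(N=200\) are illustrative assumptions.

```python
# Illustrative sketch: shifted Chebyshev polynomials on [a, b] via the recurrence,
# plus a numerical check of the orthogonality relation (2) using Gauss-Chebyshev
# quadrature for the weighted integral.
import numpy as np

def T(n, x, a=0.0, b=1.0):
    """Shifted Chebyshev polynomial T_n on [a, b] via the three-term recurrence."""
    s = (2 * x - (a + b)) / (b - a)            # map [a, b] -> [-1, 1]
    Tprev, Tcur = np.ones_like(s), s           # T_0, T_1
    if n == 0:
        return Tprev
    for _ in range(2, n + 1):
        Tprev, Tcur = Tcur, 2 * s * Tcur - Tprev
    return Tcur

def inner_w(f, g, a=0.0, b=1.0, N=200):
    """Weighted inner product of f and g on [a, b] by Gauss-Chebyshev quadrature."""
    t = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))   # nodes in (-1, 1)
    x = (a + b) / 2 + (b - a) / 2 * t
    return (b - a) / 2 * np.pi / N * np.sum(f(x) * g(x))

a, b = 0.0, 1.0
print(inner_w(lambda x: T(2, x), lambda x: T(3, x)))   # ~0, since n != m
print(inner_w(lambda x: T(0, x), lambda x: T(0, x)))   # ~(b - a) * pi / 2
print(inner_w(lambda x: T(4, x), lambda x: T(4, x)))   # ~(b - a) * pi / 4
```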

In solving boundary value problems, the use of basis functions that independently satisfy the boundary conditions is helpful. So we construct Chebyshev basis functions that independently satisfy the homogeneous boundary conditions

$$ u(a)=u(b)=0. $$
(4)

Lemma 2.1

[26]

The functions defined by

$$ \varphi_{n}(x)=\left \{ \textstyle\begin{array}{l@{\quad}l} T_{n}(x)-T_{0}(x),&n \textit{ is even}, \\ T_{n}(x)-T_{1}(x),&n \textit{ is odd}, \end{array}\displaystyle \right .\quad n\geq2, $$
(5)

have the property

$$\varphi_{n}(a)=\varphi_{n}(b)=0, $$

for all n, and the basis functions defined by equation (5) are complete in the function space satisfying the boundary conditions (4).

Proposition 2.1

Let \(\{\varphi_{n}\}_{n=2}^{\infty}\) be the basis functions defined by equation (5); then the Gram-Schmidt process gives corresponding orthonormal basis functions \(\{h_{n}\}_{n=2}^{\infty}\) such that \(h_{i}\), for \(i=2,3,\ldots \) , has the following closed form:

$$ h_{i}(x)=2\sqrt{\frac{(i-1)}{(i+1)(b-a)\pi}} \left \{ \textstyle\begin{array}{l@{\quad}l} T_{i}(x)-\frac{2}{i-1}\sum_{k=1}^{\frac{i-2}{2}}T_{2k}(x)-\frac {1}{i-1},&i \textit{ is even}, \\ T_{i}(x)-\frac{2}{i-1}\sum_{k=1}^{\frac{i-1}{2}}T_{2k-1}(x),&i \textit { is odd}. \end{array}\displaystyle \right . $$
(6)

Proof

The result follows from Lemma 2.1 by induction on i, which completes the proof. □
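To make Proposition 2.1 concrete, the following continuation of the sketch after equation (3) (it reuses the assumed helpers T and inner_w defined there) builds the closed form (6) and numerically checks the boundary values and the orthonormality; the interval \([0,1]\) is again an illustrative choice.

```python
def h(i, x, a=0.0, b=1.0):
    """Orthonormal basis polynomial h_i of the closed form (6); requires i >= 2."""
    c = 2 * np.sqrt((i - 1) / ((i + 1) * (b - a) * np.pi))
    if i % 2 == 0:                                                       # i even
        s = sum(T(2 * k, x, a, b) for k in range(1, (i - 2) // 2 + 1))
        return c * (T(i, x, a, b) - 2 / (i - 1) * s - 1 / (i - 1))
    s = sum(T(2 * k - 1, x, a, b) for k in range(1, (i - 1) // 2 + 1))   # i odd
    return c * (T(i, x, a, b) - 2 / (i - 1) * s)

a, b = 0.0, 1.0
print(h(4, np.array([a, b])))                          # ~[0, 0]: h_i vanishes at both endpoints
print(inner_w(lambda x: h(3, x), lambda x: h(3, x)))   # ~1
print(inner_w(lambda x: h(3, x), lambda x: h(5, x)))   # ~0
```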

2.2 Polynomial reproducing kernel function

Definition 2.1

[13, 27]

Let \((\mathcal{H}, \langle \cdot,\cdot \rangle_{\mathcal{H}} )\) be a Hilbert space of real-valued functions on a nonempty set \(\mathcal{X}\). A function \(R:\mathcal{X}\times\mathcal{X}\longrightarrow\mathbb{R}\) is said to be the reproducing kernel function of \(\mathcal{H}\) if and only if

  1. \(R(x,\cdot)\in\mathcal{H}\), \(\forall x\in\mathcal{X}\);

  2. \(\langle\varphi(\cdot),R(x,\cdot) \rangle_{\mathcal{H}}=\varphi(x)\), \(\forall\varphi\in\mathcal{H}\), \(\forall x \in\mathcal{X}\) (reproducing property).

Also, a Hilbert space of functions \((\mathcal{H}, \langle \cdot,\cdot \rangle_{\mathcal{H}} )\) that possesses a reproducing kernel R is a reproducing kernel Hilbert space (RKHS); we denote it by \((\mathcal{H}, \langle \cdot,\cdot \rangle_{\mathcal{H}},R )\). In the following we often denote by \(R_{y}\) the function \(R(y,\cdot):t\longmapsto R(y,t)\).

Theorem 2.1

[28], Theorem 3.7

Every finite-dimensional inner product space is complete. More precisely, let M be a finite-dimensional subspace of an inner product space V. Then:

  1. each bounded sequence in M has a subsequence that converges to a point in M;

  2. M is closed;

  3. M is complete;

  4. if \(\{x_{1}, x_{2}, \ldots, x_{n}\}\) is a basis for M, \(y_{k}=\sum_{i=1}^{n} \alpha_{ki}x_{i}\), and \(y=\sum_{i=1}^{n} \alpha_{i}x_{i}\), then \(y_{k} \rightarrow y\) if and only if \(\alpha_{ki} \rightarrow\alpha_{i}\) for \(i=1, 2, \ldots, n\).

Theorem 2.2

[13], Theorem 1.1.2

If \(\mathcal{H}\) is an n-dimensional Hilbert space of functions on an abstract set X and \(\{e_{i}\}_{i=1}^{n}\) is an orthonormal basis of \(\mathcal{H}\), then, for any fixed \(y \in X\),

$$ R_{y}(x)=\sum_{i=1}^{n}e_{i}(x) \bar{e}_{i}(y) $$
(7)

is the reproducing kernel function of \(\mathcal{H}\).

Let \(\Pi_{w}^{m}[a,b]\) be the weighted inner product space of polynomials on \([a,b]\) with real coefficients and degree less than or equal to m with inner product

$$\langle u,v \rangle_{\Pi_{w}^{m}}= \int_{a}^{b} w_{[a,b]}(x) u(x) v(x) \,dx, \quad \forall u,v \in\Pi_{w}^{m} [a,b], $$

with \(w_{[a,b]}(x)\) defined by equation (3), and the norm

$$\|u\|_{\Pi_{w}^{m}}=\sqrt{ \langle u, u \rangle_{\Pi_{w}^{m}}},\quad \forall u \in\Pi_{w}^{m} [a,b]. $$

Since \(L^{2}_{w}[a,b]=\{f\mid\int_{a}^{b}w_{[a,b]}(x)|f(x)|^{2}\,dx< \infty\}\), it can easily be shown that, for any fixed m, \(\Pi_{w}^{m}[a,b]\) is a subspace of \(L^{2}_{w}[a,b]\) and, \(\forall u, v \in\Pi_{w}^{m} [a,b]\), \(\langle u,v \rangle_{\Pi_{w}^{m}}= \langle u,v \rangle_{L^{2}_{w}}\).

Theorem 2.3

The function space \(\Pi_{w}^{m}[a,b]\), equipped with the inner product and norm defined above, is a reproducing kernel Hilbert space.

Proof

It is clear that \(\Pi_{w}^{m}[a,b]\) is a finite-dimensional inner product space, so by Theorems 2.1 and 2.2, \(\Pi_{w}^{m}[a,b]\) is a RKHS, which completes the proof. □

For practical use of the RKM method, it is necessary to define a closed subspace of \(\Pi_{w}^{m}[a,b]\) by imposing required homogeneous boundary conditions on it.

Definition 2.2

Let

$${}^{o}\Pi_{w}^{m}[a,b]=\bigl\{ u \mid u \in \Pi_{w}^{m}[a,b],u(a)=u(b)=0 \bigr\} . $$

Similarly to the proof of Theorem 2.3, using equation (5), one can show that the function space \({}^{o}\Pi_{w}^{m}[a,b]\) is a reproducing kernel Hilbert space.

According to Theorem 2.2 and Proposition 2.1, the polynomial reproducing kernel function \(R_{y}^{m}(x)\) of \({}^{o}\Pi_{w}^{m}[a,b]\) is given by

$$ {R}_{y}^{m}(x)=\sum _{i=2}^{m} h_{i}(x) h_{i}(y). $$
(8)

Equation (8) shows that the polynomial reproducing kernel function \(R_{y}^{m}(x)\) not only can easily be constructed as a finite sum of basis functions, but also that this kernel function and the associated reproducing kernel Hilbert space \(\Pi_{w}^{m}[a,b]\) can easily be updated by increasing m.
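As a quick illustration (continuing the sketch above, which supplies h and inner_w), the kernel (8) can be assembled directly and the reproducing property of Definition 2.1 checked numerically; the test element u and the point y below are arbitrary choices of ours.

```python
def R(m, x, y, a=0.0, b=1.0):
    """Polynomial reproducing kernel (8): R_y^m(x) = sum_{i=2}^m h_i(x) h_i(y)."""
    return sum(h(i, x, a, b) * h(i, y, a, b) for i in range(2, m + 1))

m, y = 8, 0.37
u = lambda x: h(3, x) + 0.5 * h(6, x)                  # an element of the space, u(0) = u(1) = 0
print(inner_w(u, lambda x: R(m, x, y)), "vs", u(y))    # <u, R_y^m> should reproduce u(y)
```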

Theorem 2.4

[13], Theorem 1.3.5

The reproducing kernel space \({}^{o}\Pi_{w}^{m}[a,b]\) is a closed subspace of \(\Pi_{w}^{m}[a,b]\).

3 The Chebyshev reproducing kernel method (C-RKM)

3.1 Representation of exact solution in \({}^{o}\Pi_{w}^{m}[a,b]\)

Here, we develop a polynomial reproducing kernel computational method for solving equation (1). We assume that the solution of the problem exists and is unique. Also let the problem be transformed into the following operator form:

$$ \left \{ \textstyle\begin{array}{l} (\mathbb{L}u)(x)=f(x) ,\quad a\leq x \leq b, \\ u(a)=u(b)=0, \end{array}\displaystyle \right . $$
(9)

where

$$\begin{aligned}& \mathbb{L}:=\frac{d^{2}}{dx^{2}}+p(x)\frac{d}{dx}+q(x), \\& \mathbb{L}:{}^{o}\Pi_{w}^{m}[a,b] \longrightarrow L_{w}^{2}[a,b], \end{aligned}$$

is a bounded linear operator. We shall give the representation of an analytical solution of equation (9) in the space \({}^{o}\Pi _{w}^{m}[a,b]\). Let \(R_{y}^{m}(x)\) be the polynomial reproducing kernel function of \({}^{o}\Pi_{w}^{m}[a,b]\). For any fixed \(x_{i} \in[a,b]\), put

$$ \psi_{i}^{m}(x)=\mathbb{L}^{*}R_{x_{i}}^{m}(x)= \mathbb {L}_{y}R_{y}^{m}(x)|_{y=x_{i}}, $$
(10)

where \(\mathbb{L}^{*}\) is the adjoint operator of \(\mathbb{L}\) and the subscript y in \(\mathbb{L}_{y}\) indicates that the operator acts on the variable y. It is clear that, for any fixed m and \(x_{i} \in[a,b]\), \(\psi_{i}^{m} \in {}^{o}\Pi_{w}^{m}[a,b]\).
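Continuing the same sketch, \(\psi_{i}^{m}\) can be formed exactly as equation (10) prescribes: apply \(\mathbb{L}\) to the kernel in its second argument and evaluate at the node. The y-derivatives below are approximated by central differences purely for illustration (an exact implementation would differentiate the Chebyshev basis); p and q stand for whatever coefficient functions the problem supplies.

```python
def psi(x_i, m, p, q, x, a=0.0, b=1.0, eps=1e-4):
    """Sketch of equation (10): psi_i^m(x) = L_y R_y^m(x) evaluated at y = x_i."""
    Ry = lambda yy: R(m, x, yy, a, b)
    d1 = (Ry(x_i + eps) - Ry(x_i - eps)) / (2 * eps)               # approximates d/dy R_y^m(x)
    d2 = (Ry(x_i + eps) - 2 * Ry(x_i) + Ry(x_i - eps)) / eps**2    # approximates d^2/dy^2 R_y^m(x)
    return d2 + p(x_i) * d1 + q(x_i) * Ry(x_i)
```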

Theorem 3.1

For \(m \geq2\), let \(\{x_{i}\}_{i=0}^{m-2}\) be any \((m-1)\)-distinct points in \((a,b)\), then \(\{\psi_{i}^{m}\}_{i=0}^{m-2}\) is a basis for \({}^{o}\Pi_{w}^{m}[a,b]\).

Proof

For each fixed \(u \in{}^{o}\Pi_{w}^{m}[a,b]\), let

$$\bigl\langle u(\cdot) , \psi_{i}^{m}(\cdot) \bigr\rangle _{{}^{o}\Pi_{w}^{m}}=0,\quad i=0,1,\ldots, m-2, $$

which means that, for \(i=0,1,2,\ldots,m-2\),

$$0= \bigl\langle u(\cdot) , \mathbb{L}_{y}R_{y}^{m}( \cdot)|_{y=x_{i}} \bigr\rangle _{{}^{o}\Pi_{w}^{m}}=\mathbb{L}_{y} \bigl\langle u(\cdot),R_{y}^{m}(\cdot) \bigr\rangle _{{}^{o}\Pi_{w}^{m}}|_{y=x_{i}}= \mathbb{L}u(x_{i}). $$

So from the existence of \(\mathbb{L}^{-1}\),

$$u(x_{i})=0,\quad i=0,1,2,\ldots,m-2. $$

Since also \(u(a)=u(b)=0\), the polynomial u of degree at most m vanishes at \(m+1\) distinct points, so \(u \equiv0\). Therefore, \(\{\psi_{i}^{m}\}_{i=0}^{m-2}\) is a complete system for \({}^{o}\Pi _{w}^{m}[a,b]\), which completes the proof. □

Theorem 3.1 shows that, in our method (C-RKM), a finite sequence of nodal points is sufficient. Hence the implementation of C-RKM does not require a dense sequence of nodal points.

The orthonormal system \(\{\bar{\psi}_{i}^{m}\}_{i=0}^{m-2}\) of \({}^{o}\Pi _{w}^{m}[a,b]\) can be obtained from \(\{\psi_{i}^{m}\}_{i=0}^{m-2}\) by the Gram-Schmidt orthogonalization process,

$$ \bar{\psi}_{i}^{m} (x)=\sum _{k=0}^{i} \beta_{ik}^{m} \psi_{k}^{m}(x), $$
(11)

where \(\beta_{ik}^{m}\) are orthogonalization coefficients.

Theorem 3.2

Suppose that \(u_{m}\) is the unique exact solution of equation (9) in \({}^{o}\Pi_{w}^{m}[a,b]\). Let \(\{x_{i}\}_{i=0}^{m-2}\) be any \((m-1)\)-distinct points in \((a,b)\), then

$$ u_{m}(x)=\sum_{i=0}^{m-2} \sum_{k=0}^{i}\beta_{ik}^{m}f(x_{k}) \bar{\psi }_{i}^{m}(x). $$
(12)

Proof

Since \(u_{m}\in {}^{o}\Pi_{w}^{m}[a,b]\) by Theorem 3.1 we have

$$u_{m}(x)=\sum_{i=0}^{m-2} \bigl\langle u_{m}(\cdot),\bar{\psi }_{i}^{m}(\cdot) \bigr\rangle _{{}^{o}\Pi_{w}^{m}}\bar{\psi}_{i}^{m}(x). $$

On the other hand, using equations (10), (11), and the fact that \(u_{m}\) is the unique exact solution of equation (9) in \({}^{o}\Pi_{w}^{m}[a,b]\), we have

$$\begin{aligned} u_{m}(x) & =\sum_{i=0}^{m-2} \bigl\langle u_{m}(\cdot),\bar{\psi }_{i}^{m}(\cdot) \bigr\rangle _{{}^{o}\Pi_{w}^{m}}\bar{\psi}_{i}^{m}(x) \\ &=\sum_{i=0}^{m-2} \Biggl\langle u_{m}(\cdot),\sum_{k=0}^{i} \beta_{ik}^{m}\psi _{k}^{m}(\cdot) \Biggr\rangle _{{}^{o}\Pi_{w}^{m}}\bar{\psi}_{i}^{m}(x) \\ & =\sum_{i=0}^{m-2}\sum _{k=0}^{i}\beta_{ik}^{m} \bigl\langle u_{m}(\cdot),\psi_{k}^{m}(\cdot) \bigr\rangle _{{}^{o}\Pi_{w}^{m}}\bar{\psi }_{i}^{m}(x) \\ & =\sum_{i=0}^{m-2}\sum _{k=0}^{i}\beta_{ik}^{m} \bigl\langle u_{m}(\cdot), \mathbb{L}_{y}R_{y}^{m}( \cdot) \bigr\rangle _{{}^{o}\Pi _{w}^{m}}|_{y=x_{k}}\bar{\psi}_{i}^{m}(x) \\ & =\sum_{i=0}^{m-2}\sum _{k=0}^{i}\beta_{ik}^{m} \mathbb{L}_{y} \bigl\langle u_{m}(\cdot),R_{y}^{m}( \cdot) \bigr\rangle _{{}^{o}\Pi _{w}^{m}}|_{y=x_{k}}\bar{\psi}_{i}^{m}(x) \\ & =\sum_{i=0}^{m-2} \sum _{k=0}^{i}\beta_{ik}^{m} \mathbb{L}_{y} u_{m}(y)|_{y=x_{k}} \bar { \psi}_{i}^{m}(x) \\ & =\sum_{i=0}^{m-2}\sum _{k=0}^{i}\beta_{ik}^{m}f(x_{k}) \bar{\psi }_{i}^{m}(x), \end{aligned}$$

which completes the proof. □
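The construction of Theorems 3.1 and 3.2 can be turned into a short algorithm: evaluate \(\mathbb{L}h_{i}\) at the nodes to get the coordinates of the \(\psi_{i}^{m}\) in the orthonormal h-basis, orthonormalize these coordinate vectors (this is exactly the Gram-Schmidt step (11), since the \(h_{i}\) are orthonormal, so inner products in \({}^{o}\Pi_{w}^{m}[a,b]\) reduce to Euclidean dot products), and assemble \(u_{m}\) via (12). The self-contained sketch below does this with NumPy's Chebyshev class so that the derivatives in \(\mathbb{L}h_{i}\) are exact; the test problem, its coefficients, and the nodes are our own illustrative assumptions, not taken from the paper.

```python
# A sketch of the C-RKM steps (10)-(12) under the stated assumptions.
import numpy as np
from numpy.polynomial import Chebyshev

def h_poly(i, a, b):
    """Orthonormal basis polynomial h_i of equation (6), as a Chebyshev series on [a, b]."""
    coef = np.zeros(i + 1)
    coef[i] = 1.0
    if i % 2 == 0:
        for k in range(1, (i - 2) // 2 + 1):
            coef[2 * k] -= 2.0 / (i - 1)
        coef[0] -= 1.0 / (i - 1)
    else:
        for k in range(1, (i - 1) // 2 + 1):
            coef[2 * k - 1] -= 2.0 / (i - 1)
    scale = 2.0 * np.sqrt((i - 1) / ((i + 1) * (b - a) * np.pi))
    return Chebyshev(scale * coef, domain=[a, b])

def crkm_solve(p, q, f, a, b, m, nodes):
    """Return the approximation u_m of equation (12) for u'' + p u' + q u = f, u(a) = u(b) = 0."""
    hs = [h_poly(i, a, b) for i in range(2, m + 1)]
    # B[k, j] = (L h_{j+2})(x_k): coordinates of psi_k^m in the orthonormal h-basis, cf. (10).
    B = np.array([[hj.deriv(2)(x) + p(x) * hj.deriv(1)(x) + q(x) * hj(x)
                   for hj in hs] for x in nodes])
    # Gram-Schmidt (11) on the rows of B: rows of Q are the psibar_i, Beta holds the beta_ik.
    n = len(nodes)
    Q, Beta = np.zeros_like(B), np.zeros_like(B)
    for i in range(n):
        v, e = B[i].copy(), np.zeros(n)
        e[i] = 1.0
        for j in range(i):
            proj = Q[j] @ v
            v -= proj * Q[j]
            e -= proj * Beta[j]
        nrm = np.linalg.norm(v)
        Q[i], Beta[i] = v / nrm, e / nrm
    A = Beta @ np.array([f(x) for x in nodes])     # A_i = sum_k beta_ik f(x_k), as in (12)
    ucoef = Q.T @ A                                # coordinates of u_m in the h-basis
    return lambda x: sum(c * hj(x) for c, hj in zip(ucoef, hs))

# Illustrative problem (assumed, not from the paper): exact solution u(x) = x(1 - x)e^x.
a, b, m = 0.0, 1.0, 10
p = lambda x: np.exp(x)
q = lambda x: 1.0 + x
uex = lambda x: x * (1 - x) * np.exp(x)
f = lambda x: (np.exp(x) * (-3 * x - x ** 2)               # u''
               + p(x) * np.exp(x) * (1 - x - x ** 2)       # p(x) u'
               + q(x) * uex(x))                            # q(x) u
nodes = [(i + 0.3) / m for i in range(m - 1)]              # (m - 1) distinct points in (a, b)
um = crkm_solve(p, q, f, a, b, m, nodes)
xs = np.linspace(a, b, 11)
print(np.max(np.abs(um(xs) - uex(xs))))                    # maximum absolute error on a test grid
```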

Theorem 3.3

[24] If \(u\in{}^{o}\Pi_{w}^{m}[a,b]\), then \(|u(x)| \leq C \|u\|_{{}^{o}\Pi _{w}^{m}}\) and \(|u^{(k)}(x)| \leq C \|u\|_{{}^{o}\Pi_{w}^{m}}\) for \(1\leq k \leq m-1\), where C is a constant.

3.2 Convergence analysis and error estimation in \({}^{o}L_{w}^{2}[a,b]\)

3.2.1 Convergence analysis

Let, in equation (9), \(\mathbb{L}:{}^{o}L_{w}^{2}[a,b] \longrightarrow L_{w}^{2}[a,b]\), be a bounded linear operator where

$${}^{o}L_{w}^{2}[a,b]=\bigl\{ u \mid u \in L_{w}^{2}[a,b],u(a)=u(b)=0 \bigr\} . $$

We assume that, for any integer \(m \geq2\), \(\{x_{i}\}_{i=0}^{m-2}\) are any \((m-1)\)-distinct points in \((a,b)\). Let \(u\in{}^{o}L_{w}^{2}[a,b]\) and \(u_{m} \in{}^{o}\Pi_{w}^{m}[a,b]\) be the exact and approximate solutions of the problem, respectively. We discuss the convergence of the approximate solutions constructed in equation (12).

Theorem 3.4

Let \(u \in{}^{o}L_{w}^{2}[a,b]\) be the exact solution of equation (1) and \(u_{m} \in {}^{o}\Pi_{w}^{m}[a,b]\) in equation (12) be the approximation of u, then

$$\|u_{m}-u\|_{{}^{o}L_{w}^{2}} \longrightarrow0,\quad m \longrightarrow \infty. $$

Moreover, the sequence \(\|u_{m}-u\|_{{}^{o}L_{w}^{2}}\) is monotonically decreasing in m.

Proof

From Lemma 2.1, Proposition 2.1, and equation (10), it follows that

$$u(x)=\sum_{i=2}^{\infty} \langle u,h_{i} \rangle _{{}^{o}L_{w}^{2}}h_{i}(x), $$

and, for any integer m,

$$\bigl\langle h_{j},\psi_{i}^{m} \bigr\rangle _{{}^{o}L_{w}^{2}}=0,\quad i=0,1,\ldots, m-2, j=m+1, m+2, \ldots. $$

We have

$$\begin{aligned} \bigl\langle h_{j},\psi_{i}^{m} \bigr\rangle _{{}^{o}L_{w}^{2}} &= \bigl\langle h_{j}(\cdot),\mathbb{L}_{y}R_{y}^{m}(\cdot)|_{y=x_{i}} \bigr\rangle _{{}^{o}L_{w}^{2}} \\ & = \Biggl\langle h_{j}(\cdot),\sum_{k=2}^{m}h_{k}( \cdot)\mathbb{L}_{y}h_{k}(y)|_{y=x_{i}} \Biggr\rangle _{{}^{o}L_{w}^{2}} \\ & =\sum_{k=2}^{m} \bigl\langle h_{j}(\cdot),h_{k}(\cdot) \bigr\rangle _{{}^{o}L_{w}^{2}} \mathbb{L}_{y}h_{k}(y)|_{y=x_{i}}=0,\quad j=m+1, m+2, \ldots. \end{aligned}$$

Let \(\Psi_{m}^{\bot}=\overline{\operatorname{Span} \{h_{i}\}_{i=m+1}^{\infty}}\). So

$$u_{m}-u \in\Psi_{m}^{\bot}, $$

and we have

$$\Vert u_{m}-u\Vert _{{}^{o}L_{w}^{2}}=\Biggl\Vert \sum _{i=m+1}^{\infty} \langle u_{m}-u,h_{i} \rangle_{{}^{o}L_{w}^{2}}h_{i}\Biggr\Vert _{{}^{o}L_{w}^{2}}. $$

Thus

$$\|u_{m}-u\|_{{}^{o}L_{w}^{2}} \longrightarrow0,\quad m \longrightarrow \infty. $$

In addition

$$\begin{aligned} \Vert u_{m}-u\Vert _{{}^{o}L_{w}^{2}}^{2} & = \Biggl\Vert \sum_{i=m+1}^{\infty} \langle u_{m}-u,h_{i} \rangle _{{}^{o}L_{w}^{2}}h_{i}\Biggr\Vert _{{}^{o}L_{w}^{2}}^{2} \\ & =\sum_{i=m+1}^{\infty}\bigl( \langle u_{m}-u,h_{i} \rangle _{{}^{o}L_{w}^{2}}\bigr)^{2}. \end{aligned}$$

Clearly, \(\|u_{m}-u\|_{{}^{o}L_{w}^{2}}\) is monotonically decreasing in m, which completes the proof. □

Theorem 3.5

[13], Theorem 1.3.4

If \(u_{m}(x)\) converges to \(u(x)\) in the sense of \(\|\cdot\|_{{}^{o}L_{w}^{2}}\), then \(u_{m}^{(k)}(x)\) converges to \(u^{(k)}(x)\) uniformly for \(0 \leq k \leq m-1\).

3.2.2 Error analysis

Theorem 3.6

For \(m \geq2\), let \(x_{0}^{(m)} < x_{1}^{(m)} < \cdots< x_{m-2}^{(m)}\) be any \((m-1)\)-distinct points in \((a,b)\), and let \(u_{m}\in {}^{o}\Pi_{w}^{m}[a,b]\) in equation (12) and \(u \in {}^{o}L_{w}^{2}[a,b]\) be the approximate and the exact solution of equation (1), respectively. If \(p, q, f \in C^{m-1}[a,b]\), \(\lim_{m \rightarrow\infty}x_{0}^{(m)}=a\), and \(\lim_{m \rightarrow \infty}x_{m-2}^{(m)}=b\), then

$$\|\varepsilon_{m}\|_{{}^{o}L_{w}^{2}}=\|u-u_{m} \|_{{}^{o}L_{w}^{2}} \leq \bar{C} \sqrt{\frac{(b-a)\pi}{2}} \hslash_{m}^{m-1}, $$

where \(\bar{C}\) is a constant and \(\hslash_{m} = \max_{0\leq i \leq m-3} \{|x_{i+1}^{(m)}-x_{i}^{(m)}|\}\).

Proof

Let, in equation (12), \(A_{i}^{m}=\sum_{k=0}^{i}\beta_{ik}^{m}f(x_{k}^{(m)})\). Note here that

$$\mathbb{L}u_{m}(x)=\sum_{i=0}^{m-2}A_{i}^{m} \mathbb{L}\bar{\psi}_{i}^{m}(x) $$

and

$$\begin{aligned} (\mathbb{L}u_{m}) \bigl(x_{j}^{(m)}\bigr) & =\sum _{i=0}^{m-2}A_{i}^{m} \bigl\langle \mathbb{L}\bar{\psi}_{i}^{m}, R_{x_{j}^{(m)}}^{m} \bigr\rangle _{\Pi_{w}^{m}} \\ & =\sum_{i=0}^{m-2}A_{i}^{m} \bigl\langle \bar{\psi}_{i}^{m}, \mathbb {L}^{*}R_{x_{j}^{(m)}}^{m} \bigr\rangle _{L_{w}^{2}} \\ & =\sum_{i=0}^{m-2}A_{i}^{m} \bigl\langle \bar{\psi}_{i}^{m},\psi _{j}^{m} \bigr\rangle _{\Pi_{w}^{m}}. \end{aligned}$$

Therefore,

$$\begin{aligned} \sum_{j=0}^{n} \beta_{nj}^{m}(\mathbb{L}u_{m}) \bigl(x_{j}^{(m)}\bigr) & =\sum_{i=0}^{m-2}A_{i}^{m} \Biggl\langle \bar{\psi}_{i}^{m}, \sum _{j=0}^{n}\beta_{nj}^{m} \psi_{j}^{m} \Biggr\rangle _{\Pi_{w}^{m}} \\ & =\sum_{i=0}^{m-2}A_{i}^{m} \bigl\langle \bar{\psi}_{i}^{m}, \bar{\psi }_{n}^{m} \bigr\rangle _{\Pi_{w}^{m}} \\ & =A_{n}^{m}. \end{aligned}$$
(13)

In equation (13), by induction on n, we have

$$(\mathbb{L}u_{m}) \bigl(x_{j}^{(m)}\bigr)=f \bigl(x_{j}^{(m)}\bigr),\quad j=0,1,\ldots, m-2. $$

Let

$$r_{m}=f-\mathbb{L}u_{m}. $$

Obviously,

$$r_{m} \in C^{m-1}[a,b], \quad r_{m} \bigl(x_{j}^{(m)}\bigr)=0,\quad j=0,1,\ldots,m-2. $$

On the interval \([x_{i}^{(m)},x_{i+1}^{(m)}]\) the application of Rolle’s theorem to \(r_{m}(x)\) yields

$$r'_{m}\bigl(x_{i}^{(1)}\bigr)=0,\quad x_{i}^{(1)}\in\bigl(x_{i}^{(m)},x_{i+1}^{(m)} \bigr), i=0,\ldots,m-3. $$

On the interval \([x_{i}^{(1)},x_{i+1}^{(1)}]\) the application of Rolle’s theorem to \(r'_{m}(x)\) yields

$$r''_{m}\bigl(x_{i}^{(2)} \bigr)=0,\quad x_{i}^{(2)}\in\bigl(x_{i}^{(1)},x_{i+1}^{(1)} \bigr), i=0,\ldots,m-4. $$

Applying Rolle’s theorem successively to \(r_{m}^{(j)}(x)\), we have

$$r_{m}^{(j+1)}\bigl(x_{i}^{(j+1)}\bigr)=0, \quad x_{i}^{(j+1)}\in\bigl(x_{i}^{(j)},x_{i+1}^{(j)} \bigr), $$

for \(j=2,3,\ldots,m-4\), \(i=0,1,\ldots, m-j-3\).

Putting

$$\hslash_{m} = \max_{0\leq i \leq m-3} \bigl\{ \bigl\vert x_{i+1}^{(m)}-x_{i}^{(m)}\bigr\vert \bigr\} , \qquad \hslash_{m}^{(j)} = \max_{0\leq i \leq m-j-3} \bigl\{ \bigl\vert x_{i+1}^{(j)}-x_{i}^{(j)} \bigr\vert \bigr\} , $$

for \(j=1,2,\ldots,m-3\), clearly, for \(j=1,2,\ldots,m-3\), there exist constants \(c_{j}\) such that

$$\hslash_{m}^{(j)}\leq c_{j} \hslash_{m}\leq(b-a). $$

Suppose that \(l(x)\) is the polynomial of degree one that interpolates the function \(r_{m}^{(m-3)}(x)\) at \(x_{0}^{(m-3)}\), \(x_{1}^{(m-3)}\). It is clear that \(l(x)\equiv 0\). Also, for all \(x \in [x_{0}^{(m-3)},x_{1}^{(m-3)}]\), there exist \(\eta_{0} \in [x_{0}^{(m-3)},x_{1}^{(m-3)}]\) and a constant \(d_{0}\) such that

$$\begin{aligned}& r_{m}^{(m-3)}(x)=r_{m}^{(m-3)}(x)-l(x)= \frac{r_{m}^{(m-1)}(\eta _{0})}{2!}\bigl(x-x_{0}^{(m-3)}\bigr) \bigl(x-x_{1}^{(m-3)}\bigr), \\& \bigl\vert r_{m}^{(m-3)}(x)\bigr\vert \leq d_{0} \hslash_{m}^{2}. \end{aligned}$$

On the interval \([x_{i}^{(m)}, x_{i+1}^{(m)}]\), \(i = 0,1,\ldots,m-3\), noting that

$$r_{m}^{(m-4)}(x)= \int_{x_{i}^{(m-4)}}^{x}r_{m}^{(m-3)}(s) \,ds, $$

there exist constants \(a_{i}\) such that

$$\bigl\vert r_{m}^{(m-4)}(x)\bigr\vert \leq\bigl\Vert r_{m}^{(m-3)}(x)\bigr\Vert _{\infty}\bigl\vert x-x_{i}^{(m-4)}\bigr\vert \leq a_{i} \hslash_{m}^{3}. $$

It turns out that

$$\bigl\Vert r_{m}^{(m-4)}(x)\bigr\Vert _{\infty} \leq a_{0} \hslash_{m}^{3},\quad x \in \bigl[x_{0}^{(m)},x_{m-2}^{(m)}\bigr], $$

where \(a_{0}\) is a constant. By following the above process, there exists a real constant C such that

$$ \bigl\Vert r_{m}(x)\bigr\Vert _{\infty}=\max _{x \in[a,b]}\bigl\vert r_{m}(x)\bigr\vert \leq C \hslash_{m}^{m-1}, $$
(14)

because \(\lim_{m \rightarrow\infty}x_{0}^{(m)}=a\), \(\lim_{m \rightarrow\infty}x_{m-2}^{(m)}=b\).

According to equations (14) and (2), we have

$$\|r_{m}\|_{L^{2}_{w}}=\sqrt{ \int_{a}^{b}w_{[a,b]}(x) \bigl\vert r_{m}(x)\bigr\vert ^{2} \,dx} \leq C \sqrt{ \frac{(b-a)\pi}{2}} \hslash_{m}^{m-1}. $$

Noting that

$$\varepsilon_{m}=\mathbb{L}^{-1}r_{m}, $$

there exists a constant d such that

$$\Vert \varepsilon_{m}\Vert _{L_{w}^{2}}=\bigl\Vert \mathbb{L}^{-1}r_{m}\bigr\Vert _{L_{w}^{2}} \leq\bigl\Vert \mathbb{L}^{-1}\bigr\Vert _{L_{w}^{2}} \cdot \Vert r_{m}\Vert _{L_{w}^{2}} \leq d \sqrt{\frac{(b-a)\pi}{2}} \hslash_{m}^{m-1}. $$

The proof is completed by putting \(\bar{C}=d\). □

Corollary 3.1

Let \(\hslash_{m}=O (m^{-1} )\). The sequence \(\|\varepsilon _{m}\|_{L_{w}^{2}}\) is monotonically decreasing in m,

$$\|\varepsilon_{m}\|_{L_{w}^{2}}=O \bigl(m^{-m+1} \bigr), $$

and

$$\|\varepsilon_{m}\|_{L_{w}^{2}} \longrightarrow0,\quad m \longrightarrow \infty. $$

Corollary 3.2

If \(\varepsilon^{(k)}_{m}(x)=u^{(k)}(x)-u^{(k)}_{m}(x)\), \(1 \leq k \leq m-1\), then

$$\bigl\Vert \varepsilon ^{(k)}_{m}\bigr\Vert _{{}^{o}L_{w}^{2}}=\bigl\Vert u^{(k)}-u^{(k)}_{m}\bigr\Vert _{{}^{o}L_{w}^{2}} \leq\bar{D}\hslash_{m}^{m-1}, $$

where \(\bar{D}\) is a constant.

4 Numerical examples

In this section, some numerical examples are considered to illustrate the performance and accuracy of the C-RKM. The results obtained by the C-RKM are compared with the exact solution of each example and are found to be in good agreement. All symbolic and numerical computations were performed using Mathematica 10.

Example 4.1

[24] Consider the following two-point boundary value problem:

$$ \left \{ \textstyle\begin{array}{l} u'' +200e^{x}u'+300\sin(x)u=f(x), \quad 0 \leq x \leq1, \\ u(0)=0, \qquad u(1)=\sinh(1), \end{array}\displaystyle \right . $$
(15)

where \(f(x)\) is given such that the exact solution of this problem is \(u(x)=\sinh(x)\). The C-RKM is applied to this example with \(m=2,4,6,8,9\) and \(x_{i}=\frac{(i+0.3)}{m}\), \(i=0,1,\ldots,m-2\). The absolute errors \(|u(t_{i})-u_{m}(t_{i})|\), \(t_{i}=0.1i\), \(i=1,2,\ldots,9\), and their \(L^{2}_{w}\) norm \(\|\varepsilon_{m}\|_{{}^{o}L_{w}^{2}}\) can be found in Table 1. We see that the accuracies are \(O(10^{-3})\) for \(m=2\), \(O(10^{-5})\) for \(m=4\), \(O(10^{-8})\) for \(m=6\), \(O(10^{-11})\) for \(m=8\), and \(O(10^{-12})\) for \(m=9\), which confirms the convergence of the C-RKM and that the order of the error is \(O(m^{-m+1})\).

Table 1 Numerical results for Example 4.1
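As a rough, independent cross-check of this example, the sketch below reuses the illustrative crkm_solve routine given after Theorem 3.2: the right-hand side is built from the stated exact solution \(u(x)=\sinh(x)\), the boundary conditions are homogenized by subtracting \(g(x)=x\sinh(1)\) (the substitution recalled in Section 1), and the maximum absolute error is reported at the points of Table 1. This is only an approximation of the authors' Mathematica 10 computation, not a reproduction of it.

```python
import numpy as np

a, b, m = 0.0, 1.0, 8
p = lambda x: 200 * np.exp(x)
q = lambda x: 300 * np.sin(x)
uex = lambda x: np.sinh(x)                                          # stated exact solution
f = lambda x: np.sinh(x) + p(x) * np.cosh(x) + q(x) * np.sinh(x)    # f = u'' + p u' + q u
g = lambda x: x * np.sinh(1.0)                                      # matches u(0) = 0, u(1) = sinh(1)
ft = lambda x: f(x) - (p(x) * np.sinh(1.0) + q(x) * g(x))           # RHS for v = u - g (g'' = 0)

nodes = [(i + 0.3) / m for i in range(m - 1)]                       # the nodes used in the paper
vm = crkm_solve(p, q, ft, a, b, m, nodes)                           # solves the homogenized problem
um = lambda x: vm(x) + g(x)
ts = 0.1 * np.arange(1, 10)
print(np.max(np.abs(um(ts) - uex(ts))))                             # absolute error at the Table 1 points
```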

The effect of the number of nodal points, \(m-1\), on the numerical values of the absolute error functions \(|u^{(i)}-u_{m}^{(i)}|\), \(i=0,1,2\), of the C-RKM is discussed next. Figures 1-12 give the relevant data for Example 4.1, where the number of nodal points is 2, 4, 6, and 8. It is observed that increasing m reduces the numerical values of the absolute error function for the numerical solutions and all their numerical derivatives up to order two, and correspondingly improves the accuracy of the obtained numerical results. This agrees with the convergence and error analysis: the error is monotonically decreasing in m, so more accurate solutions are obtained by increasing m. Moreover, a rapid decline of the absolute errors with increasing m can easily be seen in these figures.

Figure 1: The absolute error of \(u_{3}(x)\) for the solutions of Example 4.1.

Figure 2: The absolute error of \(u_{5}(x)\) for the solutions of Example 4.1.

Figure 3: The absolute error of \(u_{7}(x)\) for the solutions of Example 4.1.

Figure 4: The absolute error of \(u_{9}(x)\) for the solutions of Example 4.1.

Figure 5: The absolute error of \(u'_{3}(x)\) for the solutions of Example 4.1.

Figure 6: The absolute error of \(u'_{5}(x)\) for the solutions of Example 4.1.

Figure 7: The absolute error of \(u'_{7}(x)\) for the solutions of Example 4.1.

Figure 8: The absolute error of \(u'_{9}(x)\) for the solutions of Example 4.1.

Figure 9: The absolute error of \(u''_{3}(x)\) for the solutions of Example 4.1.

Figure 10: The absolute error of \(u''_{5}(x)\) for the solutions of Example 4.1.

Figure 11: The absolute error of \(u''_{7}(x)\) for the solutions of Example 4.1.

Figure 12: The absolute error of \(u''_{9}(x)\) for the solutions of Example 4.1.

It is worth noting that the numerical solution obtained in this example is very accurate, although the number of basis functions in the expansion of the obtained result is very low.

Example 4.2

[7]

Consider the following nonhomogeneous two-point boundary value problem:

$$ \left \{ \textstyle\begin{array}{l} u''+(1-x)u'+2u=(1+2x-x^{2})\sin(x), \quad 0 \leq x \leq1, \\ u(0)=1, \qquad u(1)=0, \end{array}\displaystyle \right . $$
(16)

The exact solution of this problem is \(u(x)=(1-x)\cos(x)\). It is solved by the spectral second kind Chebyshev wavelets (SSKCW) algorithm in [7]. Here, the C-RKM is applied to this problem with \(m=5,6,7,9,11\) and \(x_{i}=\frac{1}{2} (\cos(\frac{(i+1)\pi}{m})+1 )\), \(i=0,1,\ldots,m-2\). In solving this problem, one of the main advantages of both algorithms is that highly accurate approximate solutions are obtained using a small number of basis functions, n, in the spectral expansion. In [7], \(n=2^{k}(M+1)\), where k is the dilation parameter and M is the order of the second kind Chebyshev polynomials. In the C-RKM, \(n=m-1\), where m is the number of nodal points. The maximum absolute errors E obtained by the C-RKM and SSKCW [7] are given in Table 2. Observe that, for the same n, the results of the C-RKM are more accurate than those of SSKCW. To illustrate the rate of convergence of the C-RKM and SSKCW for Example 4.2, the maximum absolute errors are plotted in Figure 13, where n is 4, 5, 6, 8, and 10. It can easily be seen that, as n increases, both algorithms converge, but the C-RKM converges faster than SSKCW. Hence, for this problem, the C-RKM gives better accuracy than the SSKCW algorithm. Figure 14 gives the order of the error for Example 4.2, where the number of nodal points ranges from 1 to 10. This figure is in agreement with the results of the convergence and error analysis.
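For completeness, an analogous illustrative driver for Example 4.2 (again reusing the crkm_solve sketch from Section 3): the nodes are the \(x_{i}=\frac{1}{2}(\cos(\frac{(i+1)\pi}{m})+1)\) stated above, and the boundary conditions \(u(0)=1\), \(u(1)=0\) are homogenized by subtracting \(g(x)=1-x\).

```python
import numpy as np

a, b, m = 0.0, 1.0, 9
p = lambda x: 1.0 - x
q = lambda x: 2.0
uex = lambda x: (1 - x) * np.cos(x)                        # exact solution
f = lambda x: (1 + 2 * x - x ** 2) * np.sin(x)
g = lambda x: 1.0 - x                                      # matches u(0) = 1, u(1) = 0
ft = lambda x: f(x) - (p(x) * (-1.0) + q(x) * g(x))        # RHS for v = u - g (g' = -1, g'' = 0)

nodes = [0.5 * (np.cos((i + 1) * np.pi / m) + 1) for i in range(m - 1)]
vm = crkm_solve(p, q, ft, a, b, m, nodes)
um = lambda x: vm(x) + g(x)
xs = np.linspace(a, b, 101)
print(np.max(np.abs(um(xs) - uex(xs))))                    # maximum absolute error, cf. Table 2
```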

Figure 13: The maximum absolute error for Example 4.2.

Figure 14: The order of the error of \(u_{m}(x)\), \(m=2,3,\ldots,11\), for Example 4.2.

Table 2 Numerical results for Example 4.2

Summarizing, a reproducing kernel method is proposed in order to obtain accurate numerical solutions of two-point boundary value problems with Dirichlet boundary conditions. Chebyshev basis polynomials are used, and a convergence analysis is discussed. The numerical solutions obtained by this method are compared with the exact solutions, and the results reveal that the proposed method is quite efficient and accurate.

5 Conclusions

In the method of this paper, the essential reproducing property of the kernel is retained while solving two-point boundary value problems. In fact, by increasing m (the number of nodal points), the solution space and the associated reproducing kernel function are improved. Also, the absolute error of the approximate solution decreases rapidly with m. This allows using fewer nodal points and applying the numerically unstable Gram-Schmidt process only a moderate number of times.

References

  1. Heath, MT: Scientific Computing an Introductory Survey. McGraw-Hill, New York (2002)


  2. Caglar, H, Caglar, N, Elfaituri, K: B-spline interpolation compared with finite difference, finite element and finite volume methods which applied to two-point boundary value problems. Appl. Math. Comput. 175, 72-79 (2006)


  3. Chun, C, Sakthivel, R: Homotopy perturbation technique for solving two-point boundary value problems-comparison with other methods. Comput. Phys. Commun. 181, 1021-1024 (2010)


  4. Tat, CK, Majid, ZA, Suleiman, M, Senu, N: Solving linear two-point boundary value problems by direct Adams Moulton method. Appl. Math. Sci. 99, 4921-4929 (2012)


  5. Taiwo, OA: Exponential fitting for the solution of two-point boundary value problems with cubic spline collocation tau-method. Int. J. Comput. Math. 79, 299-306 (2002)


  6. Jang, B: Two-point boundary value problems by the extended Adomian decomposition method. Comput. Appl. Math. 219, 253-262 (2008)


  7. Abd-Elhameed, WM, Doha, EH, Youssri, YH: New spectral second kind Chebyshev wavelets algorithm for solving linear and nonlinear second-order differential equations involving singular and Bratu type equations. Abstr. Appl. Anal. 2013, Article ID 715756 (2013)


  8. Abd-Elhameed, WM: An elegant operational matrix based on harmonic numbers: effective solutions for linear and nonlinear fourth-order two point boundary value problems. Nonlinear Anal., Model. Control 21(4), 448-464 (2016)


  9. Abd-Elhameed, WM, Ahmed, HM, Youssri, YH: A new generalized Jacobi Galerkin operational matrix of derivatives: two algorithms for solving fourth-order boundary value problems. Adv. Differ. Equ. 2016, 22 (2016)


  10. Doha, EH, Abd-Elhameed, WM, Youssri, YH: New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov-Galerkin method. J. Adv. Res. 6, 673-686 (2015)


  11. Abd-Elhameed, WM, Doha, EH, Youssri, YH: New wavelets collocation method for solving second-order multipoint boundary value problems using Chebyshev polynomials of third and fourth kinds. Abstr. Appl. Anal. 2013, Article ID 542839 (2013)


  12. Abd-Elhameed, WM: On solving linear and nonlinear sixth-order two point boundary value problems via an elegant harmonic numbers operational matrix of derivatives. Comput. Model. Eng. Sci. 101(3), 159-185 (2014)


  13. Cui, M, Lin, Y: Nonlinear Numerical Analysis in Reproducing Kernel Space. Nova Science Publisher, New York (2009)


  14. Mohammadi, M, Mokhtari, R: Solving the generalized regularized long wave equation on the basis of a reproducing kernel space. J. Comput. Appl. Math. 235, 4003-4014 (2011)


  15. Al-Smadi, M, Abu Arqub, O, Shawagfeh, N, Momani, S: Numerical investigations for systems of second-order periodic boundary value problems using reproducing kernel method. Appl. Math. Comput. 291, 137-148 (2016)


  16. Abu Arqub, O, Al-Smadi, M, Momani, S, Hayat, T: Application of reproducing kernel algorithm for solving second-order, two-point fuzzy boundary value problems. Soft Comput. (2016). doi:10.1007/s00500-016-2262-3


  17. Al-Smadi, M, Abu Arqub, O, Momani, S: A computational method for two-point boundary value problems of fourth-order mixed integro-differential equations. Math. Probl. Eng. 2013, Article ID 832074 (2013)


  18. Abu Arqub, O, Al-Smadi, M: Numerical algorithm for solving two-point, second-order periodic boundary value problems for mixed integro-differential equations. Appl. Math. Comput. 243, 911-922 (2014)


  19. Geng, FZ: Solving singular second order three-point boundary value problems using reproducing kernel Hilbert space method. Appl. Math. Comput. 215, 2095-2102 (2009)


  20. Niu, J, Lin, Y, Cui, M: A novel approach to calculation of reproducing kernel on infinite interval and applications to boundary value problems. Abstr. Appl. Anal. 2013, Article ID 959346 (2013)


  21. Niu, J, Lin, YZ, Cui, M: Approximate solutions to three-point boundary value problems with two-space integral condition for parabolic equations. Abstr. Appl. Anal. 2012, Article ID 414612 (2012)


  22. Niu, J, Lin, YZ, Zhang, CP: Numerical solution of nonlinear three-point boundary value problem on the positive half-line. Math. Methods Appl. Sci. 35, 1601-1610 (2012)


  23. Lin, YZ, Niu, J, Cui, M: A numerical solution to nonlinear second order three-point boundary value problems in the reproducing kernel space. Appl. Math. Comput. 218, 7362-7368 (2012)


  24. Li, XY, Wu, BY: Error estimation for the reproducing kernel method to solve linear boundary value problems. Comput. Appl. Math. 243, 10-15 (2013)


  25. Moradi, E, Babolian, E, Javadi, S: The explicit formulas for reproducing kernel of some Hilbert spaces. Miskolc Math. Notes 16, 1041-1053 (2015)


  26. Boyd, JP: Chebyshev and Fourier Spectral Methods, 2nd edn. Dover, New York (2001)


  27. Babolian, E, Javadi, Sh, Moradi, E: New implementation of reproducing kernel Hilbert space method for solving a class of functional integral equations. Commun. Numer. Anal. 2014, Article ID cna-00205 (2014)


  28. Deutsch, F: Best Approximation in Inner Product Spaces. Springer, New York (2001)



Acknowledgements

The authors would like to thank the referee for valuable comments and suggestions, which improved the paper in its present form.


Corresponding author

Correspondence to S Abbasbandy.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed extensively to the work presented in this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Khaleghi, M., Babolian, E. & Abbasbandy, S. Chebyshev reproducing kernel method: application to two-point boundary value problems. Adv Differ Equ 2017, 26 (2017). https://doi.org/10.1186/s13662-017-1089-2
