We are now in a position to construct new operational matrices. Operational matrices of derivatives and integrals are frequently used in the literature to solve fractional-order differential equations. In this section we construct, with proofs, four new operational matrices; they act as the building blocks of the proposed method.

### Theorem 3

*The fractional integration of order*
*σ*
*of the function vector*
\(\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf{t})\) (*as defined in* (17)) *is given by*

$$ I^{\sigma}\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf {t})= \mathbf{P}_{(N\times N)}^{(\sigma,\omega)}\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}( \mathbf{t}), $$

*where*
\(\mathbf{P}_{(N\times N)}^{(\sigma,\omega)}\)
*is an operational matrix for fractional*-*order integration and is given as*

$$ \mathbf{P}_{(N\times N)}^{(\sigma,\omega)}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \Omega_{(0,0)} & \Omega_{(0,1)} & \cdots& \Omega_{(0,n)} \\ \Omega_{(1,0)} & \Omega_{(1,1)} & \cdots& \Omega_{(1,n)} \\ \vdots& \vdots& \ddots& \vdots\\ \Omega_{(n,0)} & \Omega_{(n,1)} & \cdots& \Omega_{(n,n)} \end{array}\displaystyle \right ], $$

(21)

*where*

$$ \Omega_{(r,s)}=\mathbf{w}_{(r,n)}\mathbf{w}_{(s,n)}\sum _{k^{\prime }=0}^{s}\sum _{l^{\prime}=0}^{n-s}\sum_{k=0}^{r} \sum_{l=0}^{n-r}\frac{\overbrace{\Delta_{(s,k^{\prime},l^{\prime})}}\Delta _{(r,k,l,\sigma ,\omega)}^{\prime}\omega^{l+r-k+\sigma+1}}{(l+l^{\prime }+s+r-k-k^{\prime}+\sigma+1)}, $$

*where*
\(\overbrace{\Delta_{(s,k^{\prime},l^{\prime})}}\)
*is as defined in* (11), *and*

$$ \Delta_{(r,k,l,\sigma,\omega)}^{\prime}=\frac{\overbrace{\Delta _{(r,k,l)}}\Gamma(l+r-k+1)}{\Gamma(l+r-k+1+\sigma)\omega^{l+r-k}}. $$

### Proof

Consider the general element of (17) and apply the fractional integral of order *σ*; we obtain

$$ I^{\sigma} \boldsymbol {\phi}_{r,n}(t)= \mathbf{w}_{(r,n)}\sum_{k=0}^{r}\sum_{l=0}^{n-r}\overbrace{ \Delta_{(r,k,l)}}\frac{I^{\sigma }t^{l+r-k}}{\omega^{l+r-k}}. $$

(22)

Using the definition of fractional-order integration we may write

$$ I^{\sigma} \boldsymbol {\phi}_{r,n}(t)= \mathbf{w}_{(r,n)}\sum_{k=0}^{r}\sum_{l=0}^{n-r}\Delta_{(r,k,l,\sigma,\omega)}^{\prime }t^{l+r-k+\sigma}, $$

(23)

where \(\Delta_{(r,k,l,\sigma,\omega)}^{\prime}=\frac{\overbrace {\Delta _{(r,k,l)}}\Gamma(l+r-k+1)}{\Gamma(l+r-k+1+\sigma)\omega^{l+r-k}}\). We can approximate \(t^{l+r-k+\sigma}\) with normalized Bernstein polynomials as follows:

$$ t^{l+r-k+\sigma}=\sum_{s=0}^{n}c_{(r,s)} \boldsymbol {\phi}_{s,n}(t),\quad \text{where } c_{(r,s)}= \int_{0}^{\omega}t^{l+r-k+\sigma} \boldsymbol { \phi}_{s,n}(t)\,dt. $$

(24)

Using equation (12), we can write

$$ c_{(r,s)}=\mathbf{w}_{(s,n)}\sum_{k^{\prime}=0}^{s} \sum_{l^{\prime }=0}^{n-s}\frac{\overbrace{\Delta_{(s,k^{\prime},l^{\prime })}}}{\omega ^{l^{\prime}+s-k^{\prime}}} \int_{0}^{\omega}t^{l+l^{\prime }+s+r-k-k^{\prime}+\sigma}\,dt. $$

(25)

After further simplification, we get

$$ c_{(r,s)}=\mathbf{w}_{(s,n)}\sum _{k^{\prime}=0}^{s}\sum_{l^{\prime }=0}^{n-s} \frac{\overbrace{\Delta_{(s,k^{\prime},l^{\prime })}}\omega ^{l+r-k+\sigma+1}}{(l+l^{\prime}+s+r-k-k^{\prime}+\sigma+1)}. $$

(26)

Using (26) and (24) in (23) we get

$$ I^{\sigma}\boldsymbol {\phi}_{r,n}(t)=\sum _{s=0}^{n}\mathbf {w}_{(r,n)} \mathbf{w}_{(s,n)}\sum_{k^{\prime}=0}^{s} \sum_{l^{\prime }=0}^{n-s}\sum _{k=0}^{r}\sum_{l=0}^{n-r} \frac{\overbrace{\Delta _{(s,k^{\prime},l^{\prime})}}\Delta_{(r,k,l,\sigma,\omega )}^{\prime }\omega^{l+r-k+\sigma+1}}{(l+l^{\prime}+s+r-k-k^{\prime}+\sigma+1)} \boldsymbol {\phi}_{s,n}(t). $$

(27)

Using the notation

$$ \Omega_{(r,s)}=\mathbf{w}_{(r,n)}\mathbf{w}_{(s,n)}\sum _{k^{\prime }=0}^{s}\sum _{l^{\prime}=0}^{n-s}\sum_{k=0}^{r} \sum_{l=0}^{n-r}\frac{\overbrace{\Delta_{(s,k^{\prime},l^{\prime})}}\Delta _{(r,k,l,\sigma ,\omega)}^{\prime}\omega^{l+r-k+\sigma+1}}{(l+l^{\prime }+s+r-k-k^{\prime}+\sigma+1)}, $$

and evaluating for \(r=0,1,\ldots,n\) and \(s=0,1,\ldots,n\) completes the proof of the theorem. □
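The closed-form entries above require the weights \(\mathbf{w}_{(r,n)}\) and the quantities Δ of (11)–(12), which are defined earlier in the paper. As an independent numerical sanity check, the same matrix can be assembled by orthonormalizing the Bernstein basis on \([0,\omega]\) and projecting \(I^{\sigma}\boldsymbol{\phi}_{r,n}\) back onto the basis. The sketch below does exactly that; the helper names (`onb`, `frac_int_matrix`) are ours, not the paper's, and this is a cross-check rather than the closed-form construction:

```python
import numpy as np
from math import comb, gamma

def onb(n, omega):
    """Rows: ascending monomial coefficients of an orthonormal
    Bernstein-type basis phi_{0,n},...,phi_{n,n} on [0, omega]."""
    B = np.zeros((n + 1, n + 1))
    for r in range(n + 1):
        for l in range(n - r + 1):
            # B_{r,n}(t) = C(n,r)(t/w)^r (1 - t/w)^(n-r), expanded in monomials
            B[r, r + l] = comb(n, r) * comb(n - r, l) * (-1) ** l / omega ** (r + l)
    i = np.arange(n + 1)
    # Gram matrix of monomials: <t^i, t^j> = omega^(i+j+1)/(i+j+1)
    G = omega ** (i[:, None] + i[None, :] + 1) / (i[:, None] + i[None, :] + 1)
    # Gram-Schmidt via Cholesky of the Bernstein Gram matrix
    return np.linalg.solve(np.linalg.cholesky(B @ G @ B.T), B)

def frac_int_matrix(n, omega, sigma):
    """P with I^sigma phi_r ~ sum_s P[r,s] phi_s (projection onto the basis)."""
    Phi = onb(n, omega)
    P = np.zeros((n + 1, n + 1))
    for r in range(n + 1):
        for s in range(n + 1):
            v = 0.0
            for m in range(n + 1):
                for mp in range(n + 1):
                    # I^sigma t^m = Gamma(m+1)/Gamma(m+1+sigma) t^(m+sigma);
                    # then <t^(m+sigma), t^mp> = omega^(m+mp+sigma+1)/(m+mp+sigma+1)
                    v += (Phi[r, m] * Phi[s, mp]
                          * gamma(m + 1) / gamma(m + 1 + sigma)
                          * omega ** (m + mp + sigma + 1) / (m + mp + sigma + 1))
            P[r, s] = v
    return P
```

For \(\sigma=1\) and \(u(t)\equiv 1\), the product \(\mathbf{H}^{T}\mathbf{P}\boldsymbol{\mho}(t)\) reproduces the exact antiderivative \(t\), which gives a quick sanity check of the construction.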

### Theorem 4

*The fractional derivative of order*
*σ*
*of the function vector*
\(\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf{t})\) (*as defined in* (17)) *is given by*

$$ D^{\sigma}\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf {t})= \mathbf{D}_{(N\times N)}^{(\sigma,\omega)}\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}( \mathbf{t}), $$

*where*
\(\mathbf{D}_{(N\times N)}^{(\sigma,\omega)}\)
*is defined as*

$$ \mathbf{D}_{(N\times N)}^{(\sigma,\omega)}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \Omega_{(0,0)} & \Omega_{(0,1)} & \cdots& \Omega_{(0,n)} \\ \Omega_{(1,0)} & \Omega_{(1,1)} & \cdots& \Omega_{(1,n)} \\ \vdots& \vdots& \ddots& \vdots\\ \Omega_{(n,0)} & \Omega_{(n,1)} & \cdots& \Omega_{(n,n)} \end{array}\displaystyle \right ], $$

(28)

*where*

$$ \Omega_{(r,s)}=\mathbf{w}_{(r,n)}\mathbf{w}_{(s,n)}\sum _{k^{\prime }=0}^{s}\sum _{l^{\prime}=0}^{n-s}\sum_{k=0}^{r} \sum_{l=0}^{n-r}\frac{\overbrace{\Delta_{(s,k^{\prime},l^{\prime})}}\Delta _{(r,k,l,\sigma ,\omega)}^{\prime\prime}\omega^{l+r-k-\sigma+1}}{(l+l^{\prime }+s+r-k-k^{\prime}-\sigma+1)} $$

*and*

$$ \Delta_{(r,k,l,\sigma,\omega)}^{\prime\prime}= \textstyle\begin{cases} \frac{\overbrace{\Delta_{(r,k,l)}}\Gamma(l+r-k+1)}{\Gamma (l+r-k+1-\sigma )\omega^{l+r-k}} & \textit{if} \ l+r-k \geq\sigma, \\ 0 & \textit{if} \ l+r-k < \sigma. \end{cases} $$

(29)

### Proof

On application of the derivative of order *σ* to a general element of (17), we may write

$$ D^{\sigma}\boldsymbol {\phi}_{r,n}(t)= \mathbf{w}_{(r,n)}\sum_{k=0}^{r}\sum _{l=0}^{n-r}\overbrace{ \Delta_{(r,k,l)}}\frac{D^{\sigma}t^{l+r-k}}{\omega^{l+r-k}}. $$

(30)

Using the definition of the fractional-order derivative we can easily write

$$ D^{\sigma}\boldsymbol {\phi}_{r,n}(t)= \mathbf{w}_{(r,n)}\sum_{k=0}^{r}\sum_{l=0}^{n-r} \Delta^{\prime\prime }_{(r,k,l,\sigma,\omega)}t^{l+r-k-\sigma}, $$

(31)

where

$$ \Delta_{(r,k,l,\sigma,\omega)}^{\prime\prime}= \textstyle\begin{cases} \frac{\overbrace{\Delta_{(r,k,l)}}\Gamma(l+r-k+1)}{\Gamma (l+r-k+1-\sigma )\omega^{l+r-k}} & \mbox{if $l+r-k \geq\sigma$}, \\ 0 & \mbox{if $l+r-k < \sigma$}. \end{cases} $$

(32)
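To make the truncation in (32) concrete, take \(\sigma=\frac{1}{2}\): a monomial with exponent \(l+r-k=0\) is annihilated, while higher exponents survive with a Gamma-function weight. For instance,

$$ D^{1/2}1=0, \qquad D^{1/2}t=\frac{\Gamma(2)}{\Gamma(3/2)}t^{1/2}=\frac{2\sqrt{t}}{\sqrt{\pi}}. $$

This is the Caputo-type convention, under which the fractional derivative of a constant vanishes.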

We can approximate \(t^{l+r-k-\sigma}\) with the Bernstein polynomials as follows:

$$ \begin{aligned} & t^{l+r-k-\sigma}=\sum _{s=0}^{n}c_{(r,s)} \boldsymbol { \phi}_{s,n}(t), \\ & c_{(r,s)}= \int_{0}^{\omega}t^{l+r-k-\sigma} \boldsymbol { \phi}_{s,n}(t)\,dt. \end{aligned} $$

(33)

Using equation (12), we can write

$$ c_{(r,s)}=\mathbf{w}_{(s,n)}\sum _{k^{\prime}=0}^{s}\sum_{l^{\prime }=0}^{n-s} \frac{\overbrace{\Delta_{(s,k^{\prime},l^{\prime })}}}{\omega ^{l^{\prime}+s-k^{\prime}}} \int_{0}^{\omega}t^{l+l^{\prime }+s+r-k-k^{\prime}-\sigma}\,dt. $$

(34)

After further simplification we get

$$ c_{(r,s)}=\mathbf{w}_{(s,n)}\sum _{k^{\prime}=0}^{s}\sum_{l^{\prime }=0}^{n-s} \frac{\overbrace{\Delta_{(s,k^{\prime},l^{\prime })}}\omega ^{l+r-k-\sigma+1}}{(l+l^{\prime}+s+r-k-k^{\prime}-\sigma+1)}. $$

(35)

Using (35) and (33) in (31) we get

$$ D^{\sigma} \boldsymbol {\phi}_{r,n}(t)=\sum _{s=0}^{n}\mathbf {w}_{(r,n)} \mathbf{w}_{(s,n)}\sum_{k^{\prime}=0}^{s} \sum_{l^{\prime }=0}^{n-s}\sum _{k=0}^{r}\sum_{l=0}^{n-r} \frac{\overbrace{\Delta _{(s,k^{\prime},l^{\prime})}}\Delta_{(r,k,l,\sigma,\omega )}^{\prime \prime}\omega^{l+r-k-\sigma+1}}{(l+l^{\prime}+s+r-k-k^{\prime }-\sigma +1)}\boldsymbol {\phi}_{s,n}(t). $$

(36)

Using the notation

$$ \Omega_{(r,s)}=\mathbf{w}_{(r,n)}\mathbf{w}_{(s,n)}\sum _{k^{\prime }=0}^{s}\sum _{l^{\prime}=0}^{n-s}\sum_{k=0}^{r} \sum_{l=0}^{n-r}\frac{\overbrace{\Delta_{(s,k^{\prime},l^{\prime})}}\Delta _{(r,k,l,\sigma ,\omega)}^{\prime\prime}\omega^{l+r-k-\sigma+1}}{(l+l^{\prime }+s+r-k-k^{\prime}-\sigma+1)}, $$

and evaluating for \(r=0,1,\ldots,n\), and \(s=0,1,\ldots,n\), we complete the proof of the theorem. □
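The derivative matrix can likewise be cross-checked numerically by applying \(D^{\sigma}t^{m}=\frac{\Gamma(m+1)}{\Gamma(m+1-\sigma)}t^{m-\sigma}\) term by term and dropping the terms with \(m<\sigma\), exactly as in (32). The sketch below assumes the \(\boldsymbol{\phi}_{r,n}\) are the orthonormalized Bernstein polynomials of (11)–(12); the helper names are ours:

```python
import numpy as np
from math import comb, gamma

def onb(n, omega):
    """Rows: ascending monomial coefficients of an orthonormal
    Bernstein-type basis on [0, omega] (Gram-Schmidt via Cholesky)."""
    B = np.zeros((n + 1, n + 1))
    for r in range(n + 1):
        for l in range(n - r + 1):
            B[r, r + l] = comb(n, r) * comb(n - r, l) * (-1) ** l / omega ** (r + l)
    i = np.arange(n + 1)
    G = omega ** (i[:, None] + i[None, :] + 1) / (i[:, None] + i[None, :] + 1)
    return np.linalg.solve(np.linalg.cholesky(B @ G @ B.T), B)

def frac_der_matrix(n, omega, sigma):
    """D with D^sigma phi_r ~ sum_s D[r,s] phi_s, truncating t^m for m < sigma."""
    Phi = onb(n, omega)
    D = np.zeros((n + 1, n + 1))
    for r in range(n + 1):
        for s in range(n + 1):
            v = 0.0
            for m in range(n + 1):
                if m < sigma:          # eq. (32): D^sigma t^m = 0 when m < sigma
                    continue
                for mp in range(n + 1):
                    # D^sigma t^m = Gamma(m+1)/Gamma(m+1-sigma) t^(m-sigma);
                    # <t^(m-sigma), t^mp> = omega^(m+mp-sigma+1)/(m+mp-sigma+1)
                    v += (Phi[r, m] * Phi[s, mp]
                          * gamma(m + 1) / gamma(m + 1 - sigma)
                          * omega ** (m + mp - sigma + 1) / (m + mp - sigma + 1))
            D[r, s] = v
    return D
```

For integer \(\sigma\) and a polynomial \(u\) in the span of the basis, the result is exact: with \(\sigma=1\) and \(u(t)=t^{2}\), \(\mathbf{H}^{T}\mathbf{D}\boldsymbol{\mho}(t)\) reproduces \(2t\).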

The operational matrices developed in the previous theorems suffice to solve FDEs with initial conditions. Here, however, we are interested in the approximate solution of FDEs under more complicated boundary conditions, so we need further operational matrices to handle such conditions effectively.

The following matrix plays an important role in the numerical simulation of fractional differential equations with variable coefficients.

### Theorem 5

*For a given function*
\(f\in C[0,\omega]\), *and*
\(u=\mathbf{H}_{\mathbf {N}}^{\mathbf{T}}\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf {N}}(\mathbf{t})\), *the product of*
\(f(t)\)
*and the*
*σ*-*order fractional derivative of the function*
\(u(t) \)
*can be written in matrix form as*

$$ f(t)D^{\sigma}u(t)=\mathbf{H}_{\mathbf{N}}^{\mathbf{T}}\mathbf {Q}_{(N\times N)}^{(f,\sigma,\omega)}\boldsymbol {\mho}^{\boldsymbol {\omega }}_{\mathbf{N}}( \mathbf{t}), $$

*where*
\(\mathbf{Q}_{(N\times N)}^{(f,\sigma,\omega)}=\mathbf {D}_{(N\times N)}^{(\sigma,\omega)}\mathbf{R}_{(N\times N)}^{(f,\omega)}\), *and*
\(\mathbf{D}_{(N\times N)}^{(\sigma,\omega)}\)
*is an operational matrix for fractional*-*order derivative and*

$$ \mathbf{R}_{(N\times N)}^{(f,\omega)}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \Omega_{(0,0)} & \Omega_{(0,1)} & \cdots& \Omega_{(0,n)} \\ \Omega_{(1,0)} & \Omega_{(1,1)} & \cdots& \Omega_{(1,n)} \\ \vdots& \vdots& \ddots& \vdots\\ \Omega_{(n,0)} & \Omega_{(n,1)} & \cdots& \Omega_{(n,n)} \end{array}\displaystyle \right ], $$

(37)

*where*

$$ \Omega_{(r,s)}=\sum_{q=0}^{n}d_{q} \Theta_{(q,r,s)},\quad r,s=0,1,\ldots,n, $$

*and the entries*
\(\Theta_{(q,r,s)}\)
*are defined as in Theorem* 1, *and*
\(d_{q}\)
*are the spectral coefficients of the function*
\(f(t)\).

### Proof

Applying Theorem 4 we can write

$$ D^{\sigma}u(t)=\mathbf{H}_{\mathbf{N}}^{\mathbf{T}} \mathbf {D}_{(N\times N)}^{(\sigma,\omega)}\boldsymbol {\mho}^{\boldsymbol {\omega }}_{\mathbf{N}}( \mathbf{t}) $$

(38)

and

$$ f(t)D^{\sigma}u(t)=\mathbf{H}_{\mathbf{N}}^{\mathbf{T}} \mathbf {D}_{(N\times N)}^{(\sigma,\omega)}\overbrace{\boldsymbol { \mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf{t})}, $$

(39)

where

$$ \overbrace{\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf {t})}=\bigl[f(t) \boldsymbol {\phi}_{0,n}(t),f(t)\boldsymbol {\phi}_{1,n}(t),\ldots, f(t) \boldsymbol {\phi}_{n,n}(t)\bigr]^{T}. $$

Consider the general element of \(\overbrace{\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf{t})}\), and approximate it with normalized Bernstein polynomials as

$$ f(t)\boldsymbol {\phi}_{r,n}(t)=\sum _{s=0}^{n}c_{s}^{r}\boldsymbol { \phi}_{s,n}(t), $$

(40)

where \(c_{s}^{r}\) can be calculated as

$$ c_{s}^{r}= \int_{0}^{\omega}f(t)\boldsymbol {\phi}_{r,n}(t) \boldsymbol {\phi} _{s,n}(t)\,dt. $$

(41)

Now as \(f(t)\in C[0,\omega]\), we can approximate it with normalized Bernstein polynomials as

$$ \begin{aligned} & f(t)=\sum_{q=0}^{n}d_{q} \boldsymbol {\phi}_{q,n}(t), \\ & d_{q}= \int_{0}^{\omega}f(t)\boldsymbol {\phi}_{q,n}(t) \,dt. \end{aligned} $$

(42)

Using equation (42) in (41), we get

$$ c_{s}^{r}=\sum_{q=0}^{n}d_{q} \int_{0}^{\omega} \boldsymbol {\phi}_{q,n}(t)\boldsymbol {\phi}_{r,n}(t)\boldsymbol {\phi}_{s,n}(t)\,dt. $$

(43)

In view of Theorem 1 we obtain the following estimate:

$$ c_{s}^{r}=\sum_{q=0}^{n}d_{q} \Theta_{(q,r,s)}. $$

(44)

Using (44) in (40), we obtain

$$ f(t)\boldsymbol {\phi}_{r,n}(t)=\sum _{s=0}^{n}\sum_{q=0}^{n}d_{q} \Theta _{(q,r,s)}\boldsymbol {\phi}_{s,n}(t). $$

(45)

Evaluating (45) for \(s=0,1,\ldots, n\) and \(r=0,1,\ldots, n\) we can write

$$ \left [ \textstyle\begin{array}{@{}c@{}} f(t)\boldsymbol {\phi}_{0,n}(t) \\ f(t)\boldsymbol {\phi}_{1,n}(t) \\ \vdots\\ f(t)\boldsymbol {\phi}_{n,n}(t) \end{array}\displaystyle \right ] =\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \Omega_{(0,0)} & \Omega_{(0,1)} & \cdots& \Omega_{(0,n)} \\ \Omega_{(1,0)} & \Omega_{(1,1)} & \cdots& \Omega_{(1,n)} \\ \vdots& \vdots& \ddots& \vdots\\ \Omega_{(n,0)} & \Omega_{(n,1)} & \cdots& \Omega_{(n,n)} \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{@{}c@{}} \boldsymbol {\phi}_{0,n}(t) \\ \boldsymbol {\phi}_{1,n}(t) \\ \vdots\\ \boldsymbol {\phi}_{n,n}(t) \end{array}\displaystyle \right ] , $$

(46)

where \(\Omega_{(r,s)}=\sum_{q=0}^{n}d_{q}\Theta_{(q,r,s)}\). In simplified notation, we can write

$$ \overbrace{\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf {t})}=\mathbf{R}_{(N\times N)}^{(f,\omega)}\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}( \mathbf{t}). $$

(47)

Using (47) in (39) we get the desired result. The proof is complete. □
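For a polynomial \(f\) lying in the span of the basis, the expansion \(f=\sum_{q}d_{q}\boldsymbol{\phi}_{q,n}\) is exact, so the entry \(\Omega_{(r,s)}=\sum_{q}d_{q}\Theta_{(q,r,s)}\) reduces to \(\int_{0}^{\omega}f(t)\boldsymbol{\phi}_{r,n}(t)\boldsymbol{\phi}_{s,n}(t)\,dt\). The sketch below computes \(\mathbf{R}^{(f,\omega)}\) directly from that integral (a numerical cross-check with our own helper names, not the paper's closed form):

```python
import numpy as np
from math import comb

def onb(n, omega):
    """Rows: ascending monomial coefficients of an orthonormal
    Bernstein-type basis on [0, omega] (Gram-Schmidt via Cholesky)."""
    B = np.zeros((n + 1, n + 1))
    for r in range(n + 1):
        for l in range(n - r + 1):
            B[r, r + l] = comb(n, r) * comb(n - r, l) * (-1) ** l / omega ** (r + l)
    i = np.arange(n + 1)
    G = omega ** (i[:, None] + i[None, :] + 1) / (i[:, None] + i[None, :] + 1)
    return np.linalg.solve(np.linalg.cholesky(B @ G @ B.T), B)

def mult_matrix(n, omega, fcoef):
    """R with f(t) phi_r(t) ~ sum_s R[r,s] phi_s(t); fcoef are the
    ascending monomial coefficients of a polynomial f on [0, omega]."""
    Phi = onb(n, omega)
    R = np.zeros((n + 1, n + 1))
    for r in range(n + 1):
        for s in range(n + 1):
            # Omega_{(r,s)} = int_0^omega f(t) phi_r(t) phi_s(t) dt,
            # accumulated monomial by monomial
            v = 0.0
            for i, fi in enumerate(fcoef):
                for m in range(n + 1):
                    for mp in range(n + 1):
                        v += (fi * Phi[r, m] * Phi[s, mp]
                              * omega ** (i + m + mp + 1) / (i + m + mp + 1))
            R[r, s] = v
    return R
```

The theorem's \(\mathbf{Q}\) is then the product of the derivative matrix with this \(\mathbf{R}\). A quick sanity check: for \(f\equiv 1\) the matrix \(\mathbf{R}\) reduces to the identity, since the basis is orthonormal.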

Since one of our aims in this paper is to solve FDEs under different types of local and non-local boundary conditions, some complications arise; to handle them we will use the operational matrix developed in the next theorem.

### Theorem 6

*Let*
*f*
*be a function of the form*
\(f(t)=at^{n^{\prime}}\)
*where*
\(a\in \mathbb{R}\)
*and*
\(n^{\prime}\in\mathbb{N}\). *Then, for any function*
\(u(t)\in C[0,\omega]\) *with* \(u(t)=\mathbf{H}_{\mathbf{N}}^{\mathbf{T}}\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf{t})\) *and any* \(\tau\leq \omega\), *the product of*
\({}_{0}I_{\tau}^{\sigma}u(t)\,dt\)
*and*
\(f(t)\)
*can be written in matrix form as*

$$ f(t) _{0}I_{\tau}^{\sigma}u(t)\,dt= \mathbf{H}_{\mathbf{N}}^{\mathbf {T}}\mathbf{W}_{(N\times N)}^{(\sigma,\tau ,a,n',\omega)} \boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf{t}), $$

*where*

$$ \mathbf{W}_{(N\times N)}^{(\sigma,\tau,a,n',\omega)}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} \Omega_{(0,0)} & \Omega_{(0,1)} & \cdots& \Omega_{(0,n)} \\ \Omega_{(1,0)} & \Omega_{(1,1)} & \cdots& \Omega_{(1,n)} \\ \vdots& \vdots& \ddots& \vdots\\ \Omega_{(n,0)} & \Omega_{(n,1)} & \cdots& \Omega_{(n,n)} \end{array}\displaystyle \right ]. $$

(48)

*The entries of the matrix are defined by*

$$ \Omega_{(i,j)}=a\Lambda_{(i,\sigma,\tau,\omega)}\mathbf{w}_{(j,n)} \sum_{p=0}^{j}\sum _{q=0}^{n-j}\overbrace{\Delta _{(j,p,q)}} \frac{\omega^{n^{\prime}+1}}{(q+j-p+n^{\prime}+1)}, $$

*where*

$$ \Lambda_{(i,\sigma,\tau,\omega)}=\mathbf{w}_{(i,n)}\sum _{k=0}^{i}\sum _{l=0}^{n-i}\overbrace{\Delta_{(i,k,l)}} \frac{\Gamma (l+i-k+1)\tau ^{l+i-k+\sigma}}{\Gamma(l+i-k+\sigma+1)\omega^{l+i-k}}. $$

### Proof

Consider \(u(t)=\mathbf{H}_{\mathbf{N}}^{\mathbf{T}}\boldsymbol {\mho}^{\boldsymbol {\omega}}_{\mathbf{N}}(\mathbf{t})\). Calculating the *σ*-order integral from 0 to *τ*, we get

$$ {}_{0}I_{\tau}^{\sigma}u(t)\,dt={}_{0}I_{\tau}^{\sigma} \mathbf {H}_{\mathbf{N}}^{\mathbf{T}}\boldsymbol {\mho}^{\boldsymbol {\omega }}_{\mathbf{N}}( \mathbf{t}) =\sum_{i=0}^{n} c_{i} {}_{0} I_{\tau}^{\sigma}\boldsymbol { \phi}_{i,n}(t). $$

(49)

Using (12) we may write

$$ \begin{aligned} _{0}I_{\tau}^{\sigma}u(t) \,dt& =\sum_{i=0}^{n}c_{i} \mathbf{w}_{(i,n)}\sum_{k=0}^{i} \sum_{l=0}^{n-i}\overbrace{ \Delta_{(i,k,l)}} {}_{0}I_{\tau}^{\sigma} \frac{t^{l+i-k}}{\omega^{l+i-k}} \\ & =\sum_{i=0}^{n}c_{i} \mathbf{w}_{(i,n)}\sum_{k=0}^{i} \sum_{l=0}^{n-i}\overbrace{ \Delta_{(i,k,l)}}\frac{\Gamma(l+i-k+1)\tau ^{l+i-k+\sigma}}{\Gamma(l+i-k+\sigma+1)\omega^{l+i-k}}. \end{aligned} $$

(50)

Using the notation

$$ \Lambda_{(i,\sigma,\tau,\omega)}=\mathbf{w}_{(i,n)}\sum _{k=0}^{i}\sum _{l=0}^{n-i}\overbrace{\Delta_{(i,k,l)}} \frac{\Gamma (l+i-k+1)\tau ^{l+i-k+\sigma}}{\Gamma(l+i-k+\sigma+1)\omega^{l+i-k}}, $$

we can write

$$ at^{n^{\prime}} _{0}I_{\tau}^{\sigma}u(t)\,dt=\sum _{i=0}^{n}c_{i}a\Lambda _{(i,\sigma,\tau,\omega)}t^{n^{\prime}}. $$

(51)

Now \(a\Lambda_{(i,\sigma,\tau,\omega)}t^{n^{\prime}}\) can be approximated with Bernstein polynomials as follows:

$$ a\Lambda_{(i,\sigma,\tau,\omega)}t^{n^{\prime}}=\sum _{j=0}^{n}d_{(i,j)}\boldsymbol { \phi}_{j,n}(t), $$

(52)

where \(d_{(i,j)}\) can be calculated as

$$\begin{aligned} d_{(i,j)} =& a\Lambda_{(i,\sigma,\tau,\omega)} \mathbf{w}_{(j,n)}\sum_{p=0}^{j} \sum_{q=0}^{n-j}\overbrace{ \Delta_{(j,p,q)}}\int_{0}^{\omega}\frac{x^{q+j+n^{\prime}-p}}{\omega^{q+j-p}}\,dx \\ =& a\Lambda_{(i,\sigma,\tau,\omega)}\mathbf{w}_{(j,n)}\sum _{p=0}^{j}\sum _{q=0}^{n-j}\overbrace{\Delta_{(j,p,q)}} \frac{\omega ^{n^{\prime}+1}}{(q+j-p+n^{\prime}+1)}. \end{aligned}$$

(53)

Using the notation \(\Omega_{(i,j)}=d_{(i,j)}\) and equation (52) in (51) we get the desired result. The proof is complete. □
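Because \(\Omega_{(i,j)}=a\Lambda_{(i,\sigma,\tau,\omega)}d_{j}\) separates into a factor depending only on \(i\) and one depending only on \(j\), the matrix \(\mathbf{W}\) is rank one: the outer product of the vector of values \((I^{\sigma}\boldsymbol{\phi}_{i,n})(\tau)\) with the projection coefficients of \(a t^{n^{\prime}}\). A hedged numerical sketch (helper names are ours, assuming the \(\boldsymbol{\phi}_{i,n}\) are the orthonormalized Bernstein polynomials):

```python
import numpy as np
from math import comb, gamma

def onb(n, omega):
    """Rows: ascending monomial coefficients of an orthonormal
    Bernstein-type basis on [0, omega] (Gram-Schmidt via Cholesky)."""
    B = np.zeros((n + 1, n + 1))
    for r in range(n + 1):
        for l in range(n - r + 1):
            B[r, r + l] = comb(n, r) * comb(n - r, l) * (-1) ** l / omega ** (r + l)
    i = np.arange(n + 1)
    G = omega ** (i[:, None] + i[None, :] + 1) / (i[:, None] + i[None, :] + 1)
    return np.linalg.solve(np.linalg.cholesky(B @ G @ B.T), B)

def w_matrix(n, omega, sigma, tau, a, n_prime):
    """W with f(t) * (I^sigma u from 0 to tau) = H^T W Phi(t), f(t) = a t^n_prime."""
    Phi = onb(n, omega)
    # Lambda_i = (I^sigma phi_i)(tau), using I^sigma t^m = G(m+1)/G(m+1+sigma) t^(m+sigma)
    lam = np.array([sum(Phi[i, m] * gamma(m + 1) / gamma(m + 1 + sigma)
                        * tau ** (m + sigma) for m in range(n + 1))
                    for i in range(n + 1)])
    # d_j = a <t^n_prime, phi_j> = a sum_m Phi[j,m] omega^(n'+m+1)/(n'+m+1)
    d = np.array([a * sum(Phi[j, m] * omega ** (n_prime + m + 1) / (n_prime + m + 1)
                          for m in range(n + 1)) for j in range(n + 1)])
    return np.outer(lam, d)    # rank one: W[i, j] = Lambda_i * d_j
```

As a sanity check, take \(\sigma=1\), \(\tau=\omega=1\), \(f(t)=2t\), and \(u\equiv 1\): then \({}_{0}I_{1}^{1}u=1\), so \(\mathbf{H}^{T}\mathbf{W}\boldsymbol{\mho}(t)\) reproduces \(2t\) exactly.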