A computational method based on the generalized Lucas polynomials for fractional optimal control problems

Abstract

Nonorthogonal polynomials have many useful properties: they can serve as basis functions for spectral methods, they are easy to generate, they exhibit exponential rates of convergence, they have fewer terms and produce smaller computational errors than some orthogonal families, and they generate several important classes of basic polynomials. In this regard, this paper presents a new indirect numerical method for solving fractional optimal control problems based on the generalized Lucas polynomials. Along the way, the operational matrices of the left and right Caputo fractional derivatives are derived for these polynomials. Based on the Pontryagin maximum principle, the necessary optimality conditions for this problem reduce to a two-point boundary value problem. The main efficient feature of the proposed method is that it converts the problem under consideration into a system of algebraic equations, which greatly reduces the computational cost and CPU time. To demonstrate the efficiency, applicability, and simplicity of the proposed method, several examples are solved, and the obtained results are compared with those obtained by other methods.

Introduction and background

Fractional optimal control problems (FOCPs) are generalizations of classical optimal control problems (OCPs) in which the dynamic constraints, the performance index, or both include at least one fractional derivative term. In recent years, this class of problems has received much attention, since many real-world phenomena can be modeled by fractional differential equations (FDEs) much better than by integer-order ones. There is a growing body of literature recognizing the importance of FOCPs, like [14]. It should also be emphasized that obtaining exact analytical solutions for nonlinear FOCPs is difficult, and in most cases impossible. Therefore, there is a critical need for numerical methods to solve these problems.

Although a number of numerical methods have been extensively used for solving FOCPs, considerable attention is still directed toward finding new alternatives. Numerical methods for solving FOCPs may be classified into two main categories: indirect and direct methods. Indirect methods are generally based on the generalization of the Pontryagin maximum principle (PMP) to FOCPs and usually require the numerical solution of the two-point boundary value problem resulting from the related necessary optimality conditions. Direct methods, in contrast, discretize the FOCP first and then optimize. Among indirect numerical methods, Agrawal [5, 6] introduced a general formulation and solution scheme for FOCPs involving the Riemann–Liouville and Caputo operators. Sweilam et al. [7] studied two distinct numerical methods based on Chebyshev polynomials for solving FOCPs in the sense of Caputo. Pooseh et al. [8] obtained the necessary optimality conditions for FOCPs with free terminal time. Moreover, one can refer to the Legendre spectral collocation method [9], Bessel collocation method [10], Jacobi spectral collocation method [11], fractional Chebyshev pseudospectral method [12], Legendre wavelet collocation method [13], variational iteration method [14], and predictor–corrector method [15]. For direct numerical methods, one can mention Legendre orthogonal polynomials [16], Bernoulli polynomials [17], shifted Legendre orthogonal polynomials [18], wavelet methods [19, 20], Boubaker polynomials [21], shifted Chebyshev schemes [22], Hermite polynomials [23], Bernoulli wavelet bases [24], fractional-order Dickson functions [25], generalized shifted Legendre polynomials [26], and generalized Bernoulli polynomials [27].

The above literature review indicates that many researchers have widely used orthogonal basis functions to obtain approximate solutions of FOCPs, but little attention has been directed toward nonorthogonal polynomials such as the Fibonacci and Lucas polynomials. The two main advantages of Lucas polynomials over shifted Legendre and shifted Chebyshev polynomials for approximating an arbitrary function defined on \([0, 1]\) are as follows:

  • The Lucas polynomials have fewer terms than the shifted Legendre and shifted Chebyshev polynomials; for example, the sixth Lucas polynomial has four terms, whereas the sixth shifted Legendre and shifted Chebyshev polynomials have seven terms each, and this difference grows as the degree of the polynomials increases. Therefore, Lucas polynomials take less CPU time than shifted Legendre and shifted Chebyshev polynomials to approximate an arbitrary function.

  • The coefficients of the individual terms in Lucas polynomials are smaller than the corresponding coefficients in shifted Legendre and shifted Chebyshev polynomials. Since computational errors in products are related to the size of the coefficients of the individual terms, using Lucas polynomials reduces computational errors.

Recently, owing to these advantages, Lucas polynomials have received growing attention in the literature. The authors in [28, 29] established numerical algorithms based on Lucas polynomials and generalized Lucas polynomials (GLPs) to solve multiterm fractional differential equations. In [30] the GLPs are utilized to obtain numerical solutions of fractional initial value problems. Oruç provided numerical solutions for nonlinear sinh-Gordon and generalized Benjamin–Bona–Mahony–Burgers equations based on a hybridization method of Fibonacci and Lucas polynomials [31, 32]. Dehestani et al. solved variable-order fractional reaction-diffusion and sub-diffusion equations using Lucas multiwavelet functions [33]. The authors in [34] applied Lucas wavelets to solve fractional Fredholm–Volterra integro-differential equations. In [35], a numerical optimization method based on fractional Lucas functions is developed for evaluating the approximate solution of multidimensional fractional differential equations. Kumar et al. used normalized Lucas wavelets to solve Lane–Emden and pantograph equations [36]. Ali et al. [37] numerically solved multidimensional Burgers-type equations using Lucas polynomials. Furthermore, the authors in [38] investigated the GLPs to solve certain types of fractional pantograph differential equations numerically.

Up to now, great attention has been paid to numerical solutions of fractional differential equations taking Lucas polynomials as basis functions. This gives us a strong motivation to test their ability to solve FOCPs and to introduce an efficient numerical method. The principal aim of this research is to construct an indirect numerical method for solving FOCPs using the GLPs, whose remarkable features include the high accuracy of the obtained approximate solutions. To this end, first, we establish the necessary optimality conditions for FOCPs and obtain the operational matrices. Then, we use the necessary optimality conditions, the spectral collocation method, and the operational matrices based on the GLPs to reduce the given problem to a nonlinear (or linear) system of algebraic equations that can be solved by the Newton iterative technique. Numerical test examples are also given to illustrate the accuracy and simplicity of the proposed method.

This article is organized in the following way. Some preliminaries of fractional calculus are presented in Sect. 2. The problem formulation and also the necessary optimality conditions are introduced in Sect. 3. Section 4 is devoted to introducing the GLPs and some of their properties. In Sect. 5, the GLPs operational matrices of the integer and Caputo fractional derivatives are determined. The proposed scheme is described in Sect. 6 to solve the given FOCP, and numerical examples are considered to show the efficiency of the new approach in Sect. 7. Finally, the conclusions and remarks are given in Sect. 8.

Some basic preliminaries of fractional calculus

In this section, we recall some notation and definitions for the Caputo fractional derivatives, the Riemann–Liouville fractional derivatives, and fractional integrals. These concepts are standard in the literature on fractional differential equations and are used frequently (see, for instance, [39, 40]).

Definition 2.1

Assume that \(\mathcal{G}:[0,T]\to \mathbb{R} \) is a function and \(\alpha >0 \) is the order of derivative or integral. For \(\tau \in [0,T]\), we define

  • The left-side and right-side Caputo fractional derivatives by

    $$ {}^{C}_{0}{D}_{\tau}^{\alpha}\mathcal{G}(\tau )= \frac{1}{\Gamma (p-\alpha )} \int _{0}^{\tau }(\tau -s)^{p-\alpha -1} \mathcal{G}^{(p)}(s)\,ds $$
    (2.1)

    and

    $$ {}^{C}_{\tau}{D}_{T}^{\alpha}\mathcal{G}(\tau )= \frac{(-1)^{p}}{\Gamma (p-\alpha )}\biggl( \int _{\tau}^{T} (s-\tau )^{p- \alpha -1} \mathcal{G}^{(p)}(s)\,ds\biggr), $$
    (2.2)

    respectively;

  • The left-side and right-side Riemann–Liouville fractional derivatives by

    $$ {}_{0}{D}_{\tau}^{\alpha}\mathcal{G}( \tau )= \frac{1}{\Gamma (p-\alpha )} \frac{d^{p}}{d\tau ^{p}}\biggl( \int _{0}^{ \tau }(\tau -s)^{p-\alpha -1} \mathcal{G}(s)\,ds\biggr) $$
    (2.3)

    and

    $$ {}_{\tau}{D}_{T}^{\alpha}\mathcal{G}(\tau )= \frac{(-1)^{p}}{\Gamma (p-\alpha )} \frac{d^{p}}{d\tau ^{p}}\biggl( \int _{ \tau}^{T} (s-\tau )^{p-\alpha -1} \mathcal{G}(s)\,ds\biggr), $$
    (2.4)

    respectively;

  • The left-side and right-side Riemann–Liouville fractional integrals by

    $$ {}_{0}{I}_{\tau}^{\alpha}\mathcal{G}( \tau )= \frac{1}{\Gamma (\alpha )} \int _{0}^{\tau }(\tau -s)^{\alpha -1} \mathcal{G}(s)\,ds $$
    (2.5)

    and

    $$ {}_{\tau}{I}_{T}^{\alpha}\mathcal{G}(\tau )= \frac{1}{\Gamma (\alpha )} \int _{\tau}^{T} (s-\tau )^{\alpha -1} \mathcal{G}(s)\,ds, $$
    (2.6)

    respectively;

where \(\Gamma (\cdot ) \) denotes the gamma function and \(p=[\alpha ]+1 \) (\([\alpha ]\) is the integer part of α).

The Caputo and Riemann–Liouville fractional derivatives are linked with each other as follows:

$$ {}^{C}_{0}{D}_{\tau}^{\alpha}\mathcal{G}(\tau )={}_{0}{D}_{\tau}^{ \alpha}\mathcal{G}( \tau )-\sum_{i=0}^{p-1} \frac {\mathcal{G}^{(i)}(0)}{\Gamma (i-\alpha +1)} \tau ^{i-\alpha} $$
(2.7)

and

$$ {}^{C}_{\tau}{D}_{T}^{\alpha}\mathcal{G}(\tau )={}_{\tau}{D}_{T}^{ \alpha}\mathcal{G}( \tau )-\sum_{i=0}^{p-1} \frac {\mathcal{G}^{(i)}(T)}{\Gamma (i-\alpha +1)}(T- \tau )^{i-\alpha}. $$
(2.8)

As a consequence, if \(\mathcal{G} \) and \(\mathcal{G}^{(k)}\), \(k=1,2,\ldots ,p-1\), vanish at \(\tau =0 \), then

$$ {}_{0}{D}_{\tau}^{\alpha}\mathcal{G}( \tau )={}^{C}_{0}{D}_{\tau}^{ \alpha} \mathcal{G}(\tau ), $$
(2.9)

and if they vanish at \(\tau =T \), then

$$ {}_{\tau}{D}_{T}^{\alpha} \mathcal{G}(\tau )={}^{C}_{\tau}{D}_{T}^{ \alpha} \mathcal{G}(\tau ). $$
(2.10)

The Caputo fractional derivatives of power functions are given by

$$ {}^{C}_{0}{D}_{\tau}^{\alpha} \tau ^{\beta} = \textstyle\begin{cases} 0, & \beta \in \mathbb{N}_{0} \text{ and } \beta < \lceil \alpha \rceil , \\ \frac {\Gamma (\beta +1)}{\Gamma (\beta +1-\alpha )}\tau ^{\beta - \alpha},& \beta \in \mathbb{N}_{0} \text{ and } \beta \geq \lceil \alpha \rceil , \\ & \text{or } \beta \notin \mathbb{N} \text{ and } \beta >\lfloor \alpha \rfloor , \end{cases} $$
(2.11)

and

$$ {}^{C}_{\tau}{D}_{T}^{\alpha}(T- \tau )^{\beta} = \textstyle\begin{cases} 0, & \beta \in \mathbb{N}_{0} \text{ and } \beta < \lceil \alpha \rceil , \\ \frac {\Gamma (\beta +1)}{\Gamma (\beta +1-\alpha )}(T-\tau )^{\beta - \alpha}, & \beta \in \mathbb{N}_{0} \text{ and } \beta \geq \lceil \alpha \rceil , \\ & \text{or } \beta \notin \mathbb{N} \text{ and } \beta >\lfloor \alpha \rfloor , \end{cases} $$
(2.12)

where \(\lfloor \alpha \rfloor \) and \(\lceil \alpha \rceil \) are the largest integer less than or equal to α and the smallest integer greater than or equal to α, respectively. Also \(\mathbb{N} _{0}=\{0,1,2,\ldots \} \) and \(\mathbb{N}=\{1,2,3,\ldots \} \).
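The power rules (2.11)–(2.12) are the computational workhorse of the operational matrices derived later in the paper. As a quick illustration (our own Python sketch, not part of the original article), the left rule reproduces the classical half-derivative identity \({}^{C}_{0}{D}_{\tau}^{1/2}\,\tau =2\sqrt{\tau /\pi}\):

```python
from math import gamma, ceil, pi

def caputo_power(beta, alpha, tau):
    """Left Caputo derivative of tau**beta of order alpha > 0, via the
    closed form (2.11); beta is assumed to be a nonnegative number."""
    if float(beta).is_integer() and beta < ceil(alpha):
        return 0.0                      # polynomial terms below ceil(alpha) vanish
    return gamma(beta + 1) / gamma(beta + 1 - alpha) * tau ** (beta - alpha)

tau = 0.81
print(caputo_power(1, 0.5, tau))        # Gamma(2)/Gamma(1.5) * tau**0.5
print(2 * (tau / pi) ** 0.5)            # the same value, written classically
print(caputo_power(0, 0.5, tau))        # constants are annihilated: 0.0
```

The case \(\beta \notin \mathbb{N}\), \(\beta >\lfloor \alpha \rfloor \) is handled by the same second branch.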

Theorem 2.2

Let \(\alpha \in (0,1) \) and \(f,g:[0,T] \to \mathbb{R} \) be two functions of class \(C^{1} \). Then the formula for fractional integration by parts reads as follows [41]:

$$\begin{aligned} \int _{0}^{T} f(\tau ) {}^{C}_{0}{D}_{\tau}^{\alpha}g( \tau )\,d\tau = \int _{0}^{T} g(\tau ) {}_{\tau}{D}_{T}^{\alpha}f( \tau )\,d\tau +\bigl[{}_{ \tau}{I}_{T}^{1-\alpha}f( \tau ) g(\tau )\bigr]_{0}^{T}. \end{aligned}$$
(2.13)

Necessary optimality conditions for FOCPs

In this study, we consider a class of FOCPs in the sense of Caputo as follows:

$$\begin{aligned} &\operatorname{Min} \mathfrak{J}(\mathfrak{W}) = \int _{0}^{T} \mathcal{F}\bigl( \tau ,\mathfrak{V}( \tau ),\mathfrak{W}(\tau )\bigr)\,d\tau , \\ &\text{subject to}:\quad {}^{C}_{0} {\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}(\tau )=\mathcal{G}\bigl(\tau ,\mathfrak{V}(\tau ), \mathfrak{W}(\tau )\bigr), \\ & \mathfrak{V}(0) = \mathfrak{V}_{0}, \end{aligned}$$
(3.1)

where \(0<\alpha \leq 1 \), \(\mathfrak{V} \in \mathbb{R}^{n} \), \(\mathfrak{W} \in \mathbb{R}^{s} \), \(\mathcal{F}:\mathbb{R} \times \mathbb{R}^{n} \times \mathbb{R}^{s} \to \mathbb{R} \), and \(\mathcal{G}:\mathbb{R} \times \mathbb{R}^{n} \times \mathbb{R}^{s} \to \mathbb{R}^{n} \). The scalar function \(\mathcal{F} \) and the vector function \(\mathcal{G} \) are generally nonlinear and assumed to be differentiable; \(\mathfrak{V}(\tau ) \) and \(\mathfrak{W}(\tau ) \) are the state and control variables, respectively. Obviously, when \(\alpha =1 \), this problem reduces to a standard OCP.

In 2014, a general formulation of FOCPs in the sense of Caputo was presented by Pooseh et al. In order to obtain the necessary optimality conditions for problem (3.1), we follow the method of [8]. First, the Hamiltonian scalar function is defined as

$$\begin{aligned} &\mathcal{H}\bigl(\tau ,\mathfrak{V}(\tau ),\mathfrak{W}(\tau ), \lambda ( \tau )\bigr) \\ &\quad =\mathcal{F}\bigl(\tau ,\mathfrak{V}(\tau ),\mathfrak{W}( \tau )\bigr)+ \lambda ^{T}(\tau )\mathcal{G}\bigl(\tau ,\mathfrak{V}( \tau ),\mathfrak{W}( \tau )\bigr), \end{aligned}$$
(3.2)

where \(\lambda (\tau )\) is a Lagrange multiplier. Then the necessary optimality conditions for problem (3.1) are determined by the following theorem [8].

Theorem 3.1

If \((\mathfrak{V}(\tau ),\mathfrak{W}(\tau )) \) is a minimizer of (3.1), then there exists a co-state vector \(\lambda (\tau ) \) for which the triple \((\mathfrak{V}(\tau ),\mathfrak{W}(\tau ),\lambda (\tau )) \) satisfies the following relations:

$$\begin{aligned} &{}^{C}_{0}{\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}(\tau )= \frac{\partial \mathcal{H}}{ \partial \lambda}\bigl(\tau , \mathfrak{V}(\tau ), \mathfrak{W}(\tau ), \lambda (\tau )\bigr), \\ &{}_{\tau}{\mathfrak{D}}_{T}^{\alpha} \lambda (\tau )= \frac{\partial \mathcal{H}}{ \partial \mathfrak{V}}\bigl(\tau , \mathfrak{V}(\tau ),\mathfrak{W}( \tau ), \lambda (\tau )\bigr), \\ & \frac{\partial \mathcal{H}}{ \partial \mathfrak{W}}\bigl(\tau , \mathfrak{V}(\tau ),\mathfrak{W}(\tau ), \lambda (\tau )\bigr)=0, \\ &\bigl[\lambda (\tau )\bigr]_{\tau =T}=0 \end{aligned}$$
(3.3)

for all \(\tau \in [0,T]\), where \(\mathcal{H} \) is described by (3.2).

The generalized Lucas polynomials and their properties

Lucas polynomials \(L_{n}(\tau ) \) of degree n defined over \([0,1]\), originally studied by Bicknell in 1970, can be generated through the following recurrence relation [42]:

$$\begin{aligned} & L_{j+2}(\tau )=\tau L_{j+1}(\tau )+L_{j}(\tau ), \quad j\geq 0, \\ & L_{0}(\tau )=2,\qquad L_{1}(\tau )=\tau . \end{aligned}$$
(4.1)

Also, the Binet form of the Lucas polynomials is given by [42]

$$ L_{j}(\tau )= \frac{(\tau +\sqrt{\tau ^{2}+4})^{j}+(\tau -\sqrt{\tau ^{2}+4})^{j}}{2^{j}},\quad j\geq 0. $$

Moreover, these polynomials can be represented by the following explicit form as well [29]:

$$ L_{j}(\tau )=j \sum_{i=0}^{\lfloor \frac{j}{2}\rfloor} \frac{1}{j-i} \binom{j-i}{i}\tau ^{j-2i},\quad j\geq 1. $$
(4.2)
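These equivalent representations are convenient for testing an implementation. The short Python sketch below (our own illustration, not part of the paper) checks that the recurrence (4.1), the Binet form, and the explicit sum (4.2) agree, and evaluates \(L_{6}(\tau )=\tau ^{6}+6\tau ^{4}+9\tau ^{2}+2\), whose four terms are mentioned in the introduction:

```python
from math import comb, isclose, sqrt

def lucas_rec(n, t):
    """L_n(t) via the recurrence (4.1)."""
    prev, cur = 2.0, t                  # L_0, L_1
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, t * cur + prev
    return cur

def lucas_binet(n, t):
    """L_n(t) via the Binet form."""
    r = sqrt(t * t + 4)
    return ((t + r) ** n + (t - r) ** n) / 2 ** n

def lucas_explicit(n, t):
    """L_n(t) via the explicit sum (4.2), valid for n >= 1."""
    return n * sum(comb(n - i, i) / (n - i) * t ** (n - 2 * i)
                   for i in range(n // 2 + 1))

t = 0.37
for n in range(1, 9):
    assert isclose(lucas_rec(n, t), lucas_binet(n, t), rel_tol=1e-12)
    assert isclose(lucas_rec(n, t), lucas_explicit(n, t), rel_tol=1e-12)
print(lucas_rec(6, t))   # L_6 = t^6 + 6 t^4 + 9 t^2 + 2: only four terms
```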

It has been shown that these polynomials satisfy the following properties:

  • \(L_{n}(\tau )=\mathtt{F}_{n+1}(\tau )+\mathtt{F}_{n-1}(\tau )\),

  • \(\tau L_{n}(\tau )=\mathtt{F}_{n+2}(\tau )-\mathtt{F}_{n-2}(\tau )\),

  • \(L_{-n}(\tau )=(-1)^{n}L_{n}(\tau )\),

  • \(\frac{dL_{n}(\tau )}{d\tau}=\frac{n}{\tau ^{2}+4}(\tau L_{n}(\tau )+2L_{n-1}( \tau ))\),

  • \(L_{n}(0)=1+(-1)^{n}\),

  • \(L_{n}(1)=\mathcal{L}_{n}\),

where \(\mathtt{F}_{n}(\tau )\) denotes the Fibonacci polynomial of order n and \(\mathcal{L}_{n}\) is the nth Lucas number [42].

Besides, if a and b are nonzero real constants, the sequence of generalized Lucas polynomials (GLPs) defined over \([0,1]\) is given by the following recurrence relation [29]:

$$\begin{aligned} & \mu _{j+2}^{a,b}(\tau )=a\tau \mu _{j+1}^{a,b}(\tau )+b\mu _{j}^{a,b}( \tau ),\quad j\geq 0, \\ & \mu _{0}^{a,b}(\tau )=2,\qquad \mu _{1}^{a,b}( \tau )=a \tau . \end{aligned}$$
(4.3)

In this regard, the first few GLPs \(\mu _{j}^{a,b}(\tau ) \) can be computed as follows:

$$\begin{aligned} &\mu _{0}^{a,b}(\tau )=2, \qquad \mu _{1}^{a,b}( \tau )=a \tau , \\ &\mu _{2}^{a,b}(\tau )=a^{2} \tau ^{2}+2b, \\ & \mu _{3}^{a,b}( \tau )=a^{3} \tau ^{3}+3ab\tau , \\ &\mu _{4}^{a,b}(\tau )=a^{4}\tau ^{4}+4a^{2}b\tau ^{2}+2b^{2}, \\ & \mu _{5}^{a,b}(\tau )=a^{5} \tau ^{5}+5a^{3}b \tau ^{3}+5ab^{2}\tau . \end{aligned}$$

It is also shown that the GLPs can be described with two equivalent forms [29]

$$ \mu _{j}^{a,b}(\tau ) = \textstyle\begin{cases} 2, & j=0, \\ j {\sum_{n=0}^{\lfloor \frac{j}{2}\rfloor} \frac{a^{j-2n}b^{n}\binom{j-n}{n}}{j-n}\tau ^{j-2n}}, & j\geq 1, \end{cases} $$
(4.4)

and

$$ \mu _{j}^{a,b}(\tau ) = \textstyle\begin{cases} 2, & j=0, \\ 2j {\sum_{m=0}^{j} \frac{a^{m}b^{\frac{j-m}{2}}\xi _{j+m}\binom{\frac{j+m}{2}}{\frac{j-m}{2}}}{j+m} \tau ^{m}}, & j\geq 1, \end{cases} $$
(4.5)

where

$$ \xi _{l} = \textstyle\begin{cases} 1, & l \text{ even}, \\ 0, & l \text{ odd}. \end{cases} $$
(4.6)
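The recurrence (4.3) and the power form (4.5) are easy to cross-check numerically. The following Python sketch (our own illustration, not part of the paper) confirms that they agree:

```python
from math import comb, isclose

def glp_rec(j, a, b, t):
    """mu_j^{a,b}(t) from the recurrence (4.3)."""
    prev, cur = 2.0, a * t
    if j == 0:
        return prev
    for _ in range(j - 1):
        prev, cur = cur, a * t * cur + b * prev
    return cur

def glp_form_45(j, a, b, t):
    """mu_j^{a,b}(t) from the power form (4.5); the xi factor is enforced
    by skipping the terms with j+m odd."""
    if j == 0:
        return 2.0
    total = 0.0
    for m in range(j + 1):
        if (j + m) % 2:
            continue
        total += (2 * j * a ** m * b ** ((j - m) // 2)
                  * comb((j + m) // 2, (j - m) // 2) / (j + m)) * t ** m
    return total

a, b, t = 2.0, 3.0, 0.4
for j in range(9):
    assert isclose(glp_rec(j, a, b, t), glp_form_45(j, a, b, t), rel_tol=1e-12)
print(glp_rec(4, a, b, t))   # equals a^4 t^4 + 4 a^2 b t^2 + 2 b^2
```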

The Binet form for these polynomials is [29]

$$\begin{aligned} &\mu _{j}^{a,b}(\tau )= \frac{(a\tau +\sqrt{a^{2}\tau ^{2}+4b})^{j}+(a\tau -\sqrt{a^{2}\tau ^{2}+4b})^{j}}{2^{j}}, \quad j\geq 0. \end{aligned}$$

It is worth mentioning that, for special values of a and b, the GLPs reduce to several well-known polynomials; some specific cases are listed in Table 1 [29].

Table 1 The relation between the GLPs and some other polynomials

Any continuous function \(W(\tau ) \) defined over \([0 , 1 ] \) can be expanded in terms of the GLPs in the following form [29]:

$$ W(\tau )=\sum_{i=0}^{\infty}c_{i} \mu _{i}^{a,b}(\tau ). $$
(4.7)

By truncating the infinite series in equation (4.7), it can be written as follows:

$$ W(\tau )\approx W_{N}(\tau )=\sum_{i=0}^{N}c_{i} \mu _{i}^{a,b}(\tau )=C^{T} \Phi (\tau ), $$
(4.8)

where

$$ C^{T}=[c_{0},c_{1},\ldots ,c_{N}] $$
(4.9)

and

$$ \Phi (\tau )=\bigl[\mu _{0}^{a,b}(\tau ),\mu _{1}^{a,b}(\tau ),\ldots , \mu _{N}^{a,b}( \tau )\bigr]^{T}. $$
(4.10)

Now, the following two theorems state the convergence and error estimate of the generalized Lucas expansion.

Theorem 4.1

Suppose that \(h(\tau )\) is defined over \([0,1]\) and \(\vert h^{(j)}(0) \vert \leq K^{j}\), \(j \geq 0\), where K is a positive constant. Also, suppose that \(h(\tau )\) has the expansion \(h(\tau )=\sum_{j=0}^{\infty}c_{j} \mu _{j}^{a,b}(\tau ) \); then it holds that

  1. 1.

    \(\vert c_{j} \vert \leq \frac{ \vert a \vert ^{-j} K^{j} \cosh (2 \vert a \vert ^{-1} b^{\frac{1}{2}} K)}{j!} \);

  2. 2.

    The series converges absolutely.

Proof

The proof is given in [29]. □

Theorem 4.2

If \(h(\tau )\) satisfies the assumptions stated in Theorem 4.1 and \(e_{N}(\tau )= \sum_{j=N+1}^{\infty}c_{j} \mu _{j}^{a,b}(\tau ) \), then the global error estimate is given as follows:

$$ \bigl\vert e_{N}(\tau ) \bigr\vert < \frac{2 e^{K(1+\sqrt{1+a^{-2}b})}\cosh (2K(1+\sqrt{1+a^{-2}b}))(1+\sqrt{1+a^{-2}b})^{N+1}}{(N+1)!}. $$

Proof

The proof is given in [29]. □

Operational matrices of the GLPs

This section is devoted to deriving operational matrices of derivatives for the GLPs. Based on the GLPs vector \(\Phi (\tau ) \) mentioned in equation (4.10), we can determine the operational matrix of integer derivative as follows [29, 30]:

$$ \frac{d\Phi (\tau )}{d\tau}=\mathcal{S}^{(1)}\Phi (\tau ), $$
(5.1)

where \(\mathcal{S}^{(1)}=(\mathcal{S}_{ij}^{(1)}) \) is the \((N+1)\times (N+1) \) operational matrix of the first derivative, whose elements are given explicitly by

$$ \mathcal{S}_{ij}^{(1)} = \textstyle\begin{cases} (-1)^{\frac{i-j-1}{2}}iab^{\frac{i-j-1}{2}}\delta _{j}, & i>j \text{ and } (i+j) \text{ odd}, \\ 0, & \text{otherwise}, \end{cases} $$

where

$$ \delta _{m} = \textstyle\begin{cases} \frac{1}{2}, & m=0, \\ 1, & \text{otherwise}. \end{cases} $$
(5.2)

Equation (5.1) enables one to obtain \(\frac{d^{i} \Phi (\tau )}{d\tau ^{i}} \) for \(i\geq 1\) as

$$ \frac{d^{i} \Phi (\tau )}{d\tau ^{i}}=\mathcal{S}^{(i)}\Phi (\tau )=\bigl( \mathcal{S}^{(1)}\bigr)^{i} \Phi (\tau ). $$
(5.3)
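These entries can be validated numerically. The following Python sketch (our own cross-check, not the paper's code) builds \(\mathcal{S}^{(1)}\) entrywise, with the sign convention that reproduces \(\frac{d\mu _{1}}{d\tau}=\frac{a}{2}\mu _{0}\), and confirms \(\Phi '(\tau )=\mathcal{S}^{(1)}\Phi (\tau )\) at a sample point:

```python
import numpy as np

def phi_and_dphi(tau, N, a, b):
    """Phi(tau) and its derivative, both propagated through the recurrence (4.3)."""
    p, dp = np.empty(N + 1), np.empty(N + 1)
    p[0], dp[0] = 2.0, 0.0
    if N >= 1:
        p[1], dp[1] = a * tau, a
    for j in range(2, N + 1):
        p[j] = a * tau * p[j - 1] + b * p[j - 2]
        dp[j] = a * p[j - 1] + a * tau * dp[j - 1] + b * dp[j - 2]
    return p, dp

def S1(N, a, b):
    """Entries (-1)^((i-j-1)/2) * i * a * b^((i-j-1)/2) * delta_j for i > j
    with i+j odd, where delta_0 = 1/2 and delta_m = 1 otherwise."""
    S = np.zeros((N + 1, N + 1))
    for i in range(1, N + 1):
        for j in range(i):
            if (i + j) % 2 == 1:
                k = (i - j - 1) // 2
                S[i, j] = (-1) ** k * i * a * b ** k * (0.5 if j == 0 else 1.0)
    return S

N, a, b, tau = 7, 2.0, 3.0, 0.6
p, dp = phi_and_dphi(tau, N, a, b)
err = np.max(np.abs(S1(N, a, b) @ p - dp))
print(err)   # agreement to roughly machine precision
```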

Now, we derive the left Caputo fractional derivative operational matrix of order α for the GLPs, which generalizes the integer-order differentiation matrix. This matrix is given in the next theorem [29, 30].

Theorem 5.1

Let \(\Phi (\tau ) \) be the GLPs vector defined in equation (4.10); then, for any \(\alpha >0\), we have

$$ {}^{C}_{0}{D}^{\alpha}_{\tau} \Phi (\tau )= \frac{d^{\alpha} \Phi (\tau )}{d\tau ^{\alpha}}=\tau ^{-\alpha} \mathcal{S}^{(\alpha )} \Phi (\tau ), $$
(5.4)

where \(\mathcal{S}^{(\alpha )} \) is the \((N+1)\times (N+1) \) lower triangular generalized Lucas operational matrix of order α for the left Caputo fractional derivative. This matrix is obtained explicitly in the form

$$ \mathcal{S}^{(\alpha )}= \begin{bmatrix} 0 & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots & & \vdots \\ \gamma _{\alpha }(\lceil \alpha \rceil ,0) & \cdots & \gamma _{\alpha }(\lceil \alpha \rceil ,\lceil \alpha \rceil ) & 0 & \cdots & 0 \\ \vdots & & \vdots & \ddots & & \vdots \\ \gamma _{\alpha }(i,0) & \cdots & \gamma _{\alpha }(i,\lceil \alpha \rceil ) & \cdots & \gamma _{\alpha }(i,i) & 0 \\ \vdots & & \vdots & & \vdots & \ddots \\ \gamma _{\alpha }(N,0) & \cdots & \gamma _{\alpha }(N,\lceil \alpha \rceil ) & \cdots & \gamma _{\alpha }(N,i) & \gamma _{\alpha }(N,N) \end{bmatrix}, $$
(5.5)

where

$$ \gamma _{\alpha }(i,j)=\sum_{n=\lceil \alpha \rceil}^{i} \frac{(-1)^{\frac{n-j}{2}}in!\xi _{i+n}\xi _{j+n} \delta _{j}b^{\frac{i-j}{2}}(\frac{i+n}{2}-1)!}{(\frac{i-n}{2})!(\frac{n-j}{2})!(\frac{j+n}{2})!\Gamma (1+n-\alpha )}, $$

also, \(\xi _{i} \) and \(\delta _{j} \) are given in equations (4.6) and (5.2).
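As a numerical sanity check of Theorem 5.1 (our own illustration in Python, rather than the paper's Maple), one can build \(\mathcal{S}^{(\alpha )}\) from \(\gamma _{\alpha}(i,j)\) and compare \(\tau ^{-\alpha}\mathcal{S}^{(\alpha )}\Phi (\tau )\) with the Caputo derivative of each \(\mu _{j}^{a,b}\) computed term by term from the power rule (2.11):

```python
import numpy as np
from math import gamma, ceil, factorial

def glp_coeffs(j, a, b):
    """Power-basis coefficients (ascending degree) of mu_j^{a,b}, via (4.3)."""
    prev, cur = np.array([2.0]), np.array([0.0, a])
    if j == 0:
        return prev
    for _ in range(j - 1):
        nxt = np.zeros(len(cur) + 1)
        nxt[1:] = a * cur               # multiply by a*tau
        nxt[:len(prev)] += b * prev     # add b * previous polynomial
        prev, cur = cur, nxt
    return cur

def gamma_alpha(i, j, alpha, b):
    """gamma_alpha(i, j) from Theorem 5.1; the vanishing xi factors are
    enforced by skipping the corresponding parities."""
    delta = 0.5 if j == 0 else 1.0
    s = 0.0
    for n in range(ceil(alpha), i + 1):
        if n < j or (i + n) % 2 or (j + n) % 2:
            continue
        s += ((-1) ** ((n - j) // 2) * i * factorial(n) * delta
              * b ** ((i - j) // 2) * factorial((i + n) // 2 - 1)
              / (factorial((i - n) // 2) * factorial((n - j) // 2)
                 * factorial((j + n) // 2) * gamma(1 + n - alpha)))
    return s

N, a, b, alpha, tau = 6, 2.0, 3.0, 0.5, 0.7
S = np.array([[gamma_alpha(i, j, alpha, b) for j in range(N + 1)]
              for i in range(N + 1)])
phi = np.array([np.polyval(glp_coeffs(j, a, b)[::-1], tau) for j in range(N + 1)])
# Reference: left Caputo derivative of each mu_j, term by term via (2.11).
ref = np.array([sum(c * gamma(k + 1) / gamma(k + 1 - alpha) * tau ** (k - alpha)
                    for k, c in enumerate(glp_coeffs(j, a, b))
                    if k >= ceil(alpha))
                for j in range(N + 1)])
err = np.max(np.abs(tau ** (-alpha) * (S @ phi) - ref))
print(err)   # agreement to roughly machine precision
```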

Moreover, we can express the right Caputo fractional derivative operational matrix of order α of the GLPs vector \(\Phi (\tau ) \) in the following form:

$$ {}^{C}_{\tau}{D}^{\alpha}_{l} \Phi (\tau )= \mathcal{S}_{(\alpha )} \Phi (\tau ), $$
(5.6)

which is constructed with the help of the following lemmas.

Lemma 5.2

Let \(\Phi (\tau )\) and \(T_{N}(\tau )=[1,\tau ,\ldots ,\tau ^{N}] ^{T}\) be the vectors of generalized Lucas and Taylor polynomials, respectively; then \(\Phi (\tau )=AT_{N}(\tau ) \), where \(A=(a_{i+1,j+1})_{i,j=0}^{N}\) is a lower triangular \((N+1)\times (N+1)\) matrix, and

$$ a_{i+1,j+1} = \textstyle\begin{cases} 2, & i=j=0, \\ \frac{2ia^{j}b^{\frac{i-j}{2}}\xi _{i+j+2}\binom{\frac{i+j}{2}}{\frac{i-j}{2}}}{(i+j)},& i\geq j, i\neq 0, \\ 0,& \textit{otherwise} , \end{cases} $$

where \(\xi _{i} \) is mentioned in equation (4.6).

Proof

Regarding the definition expressed in (4.5), we have

$$ \Phi (\tau )=AT_{N}(\tau ), $$
(5.7)

where

$$ A= \begin{bmatrix} 2 & 0 & 0 & \cdots & 0 \\ 0 & a & 0 & \cdots & 0 \\ 2b & 0 & a^{2} & \cdots & 0 \\ 0 & 3ab & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 2b^{\frac{N}{2}}\xi _{N+2} & \frac{2Nab^{\frac{N-1}{2}}\xi _{N+3}\binom{\frac{N+1}{2}}{\frac{N-1}{2}}}{N+1} & \frac{2Na^{2}b^{\frac{N-2}{2}}\xi _{N+4}\binom{\frac{N+2}{2}}{\frac{N-2}{2}}}{N+2} & \cdots & a^{N} \end{bmatrix}. $$

So, the desired result is obtained. □

Lemma 5.3

Suppose that \(\Phi (\tau )\) and \(T_{N}(\tau ) \) are defined in Lemma 5.2. Then there is a lower triangular matrix L such that \(T_{N}(l-\tau )=LT_{N}(\tau ) \), where \(T_{N}(l-\tau )=[1,l-\tau ,\ldots ,(l-\tau )^{N}]^{T}\), and the entries of the matrix L are given in the form

$$ L_{i+1,j+1} = \textstyle\begin{cases} (-1)^{j}l^{i-j}\binom{i}{j},& i\geq j, \\ 0, & \textit{otherwise}. \end{cases} $$

Moreover, we can conclude that \(\Phi (\tau )=AL^{-1}T_{N}(l-\tau ) \).

Proof

Using the binomial expansion of \((l-\tau )^{i}\), we have

$$ (l-\tau )^{i}=\sum_{j=0}^{i}(-1)^{j} \binom{i}{j}l^{i-j}\tau ^{j}. $$

So, we get the following relation:

$$ T_{N}(l-\tau )=LT_{N}(\tau ), $$
(5.8)

where

$$ L= \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ l & -1 & 0 & \cdots & 0 \\ l^{2} & -2l & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l^{N} & -\binom{N}{1}l^{N-1} & \binom{N}{2}l^{N-2} & \cdots & (-1)^{N} \end{bmatrix}. $$
(5.9)

Now, using Lemma 5.2 and relation (5.8), we obtain \(\Phi (\tau )=AL^{-1}T_{N}(l-\tau ) \), which completes the proof. □

Note: As a consequence of Lemma 5.3, we have

$$\begin{aligned} & {}^{C}_{\tau}{D}_{l}^{\alpha} \Phi (\tau )=AL^{-1} {}^{C}_{\tau}{D}_{l}^{ \alpha}T_{N}(l- \tau )=AL^{-1}\bigl[{}^{C}_{\tau}{D}_{l}^{\alpha}1,{}^{C}_{ \tau}{D}_{l}^{\alpha}(l- \tau ),\ldots ,{}^{C}_{\tau}{D}_{l}^{\alpha}(l- \tau )^{N}\bigr]^{T}. \end{aligned}$$
(5.10)

Now, by taking the right Caputo fractional derivative operator of the vector \(T_{N}(l-\tau ) \) and using (2.12), we obtain

$$\begin{aligned} {}^{C}_{\tau}{D}_{l}^{\alpha}T_{N}(l-\tau ) &= \begin{bmatrix} 0 \\ \frac{\Gamma (2)}{\Gamma (2-\alpha )}(l-\tau )^{1-\alpha} \\ \vdots \\ \frac{\Gamma (N+1)}{\Gamma (N+1-\alpha )}(l-\tau )^{N-\alpha} \end{bmatrix} \\ &= \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & \frac{\Gamma (2)}{\Gamma (2-\alpha )}(l-\tau )^{-\alpha} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{\Gamma (N+1)}{\Gamma (N+1-\alpha )}(l-\tau )^{-\alpha} \end{bmatrix} T_{N}(l-\tau )=\boldsymbol{M}T_{N}(l-\tau ). \end{aligned}$$

Furthermore, it can be written as follows:

$$\begin{aligned} {}^{C}_{\tau}{D}_{l}^{\alpha}T_{N}(l- \tau )=\boldsymbol{M}T_{N}(l- \tau )=\boldsymbol{M}LT_{N}(\tau )=\boldsymbol{M}LA^{-1}\Phi ( \tau ). \end{aligned}$$
(5.11)

Therefore, substituting (5.11) into (5.10) yields that

$$ {}^{C}_{\tau}{D}_{l}^{\alpha} \Phi (\tau )=AL^{-1}MLA^{-1}\Phi (\tau )= \mathcal{S}_{(\alpha )} \Phi (\tau ). $$
(5.12)

Equation (5.12) thus provides a convenient way to calculate the right Caputo fractional derivative operational matrix of the GLPs vector \(\Phi (\tau ) \).
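The construction of Lemmas 5.2–5.3 and (5.12) is straightforward to check numerically. The sketch below (our own Python illustration, not the paper's code) builds A, L, and \(\boldsymbol{M}\) and verifies \(\mathcal{S}_{(\alpha )}\Phi (\tau )=-\Phi '(\tau )\) at \(\alpha =1\), where the right Caputo derivative reduces to \(-d/d\tau \):

```python
import numpy as np
from math import comb, gamma

def build_A(N, a, b):
    """Lower triangular A with Phi(tau) = A T_N(tau) (Lemma 5.2)."""
    A = np.zeros((N + 1, N + 1))
    A[0, 0] = 2.0
    for i in range(1, N + 1):
        for j in range(i + 1):
            if (i + j) % 2 == 0:        # xi_{i+j+2} vanishes for odd i+j
                A[i, j] = (2 * i * a ** j * b ** ((i - j) // 2)
                           * comb((i + j) // 2, (i - j) // 2) / (i + j))
    return A

def build_L(N, l):
    """T_N(l - tau) = L T_N(tau) (Lemma 5.3)."""
    return np.array([[(-1) ** j * l ** (i - j) * comb(i, j) if i >= j else 0.0
                      for j in range(N + 1)] for i in range(N + 1)])

def build_M(N, alpha, l, tau):
    """Diagonal matrix M(tau) obtained from the power rule (2.12)."""
    diag = [0.0] + [gamma(k + 1) / gamma(k + 1 - alpha) * (l - tau) ** (-alpha)
                    for k in range(1, N + 1)]
    return np.diag(diag)

def phi_dphi(tau, N, a, b):
    """Phi(tau) and Phi'(tau) from the recurrence (4.3)."""
    p, dp = [2.0, a * tau], [0.0, a]
    for j in range(2, N + 1):
        p.append(a * tau * p[-1] + b * p[-2])
        dp.append(a * p[-2] + a * tau * dp[-1] + b * dp[-2])
    return np.array(p), np.array(dp)

N, a, b, l, tau = 6, 2.0, 3.0, 1.0, 0.35
A, L = build_A(N, a, b), build_L(N, l)
S_r = A @ np.linalg.inv(L) @ build_M(N, 1.0, l, tau) @ L @ np.linalg.inv(A)
p, dp = phi_dphi(tau, N, a, b)
err = np.max(np.abs(S_r @ p + dp))      # S_(1) Phi should equal -Phi'
print(err)   # close to machine precision
```

Note that, unlike the integer-order matrix, \(\mathcal{S}_{(\alpha )}\) depends on τ through \(\boldsymbol{M}(\tau )\).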

Description of the proposed method

Herein, we will concentrate on the numerical solution of the FOCP defined in (3.1) by applying the operational matrices for the GLPs and the spectral collocation technique. To do this, first, the necessary optimality conditions for the problem are attained from Theorem 3.1 as

$$\begin{aligned} &{}^{C}_{0}{\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}(\tau )= \frac{\partial \mathcal{H}}{ \partial \lambda}\bigl(\tau , \mathfrak{V}(\tau ), \mathfrak{W}(\tau ), \lambda (\tau )\bigr), \end{aligned}$$
(6.1)
$$\begin{aligned} &{}_{\tau}{\mathfrak{D}}_{T}^{\alpha} \lambda (\tau )= \frac{\partial \mathcal{H}}{ \partial \mathfrak{V}}\bigl(\tau , \mathfrak{V}(\tau ),\mathfrak{W}( \tau ), \lambda (\tau )\bigr), \end{aligned}$$
(6.2)
$$\begin{aligned} &\frac{\partial \mathcal{H}}{ \partial \mathfrak{W}}\bigl(\tau , \mathfrak{V}(\tau ),\mathfrak{W}(\tau ), \lambda (\tau )\bigr)=0, \end{aligned}$$
(6.3)
$$\begin{aligned} & \mathfrak{V}(0)=\mathfrak{V}_{0},\qquad \lambda (T)=0. \end{aligned}$$
(6.4)

It should be mentioned that, in practice, we compute an expression for \(\mathfrak{W}(\tau ) \) in terms of \(\mathfrak{V}(\tau ) \) and \(\lambda (\tau ) \) from the condition given in (6.3) in a very straightforward manner. Also, we can replace \({}_{\tau}{\mathfrak{D}}_{T}^{\alpha}\lambda (\tau ) \) with \({}^{C}_{\tau}{\mathfrak{D}}_{T}^{\alpha}\lambda (\tau ) \) by using (2.10). Thus, we can rewrite the above-mentioned system in the following form:

$$\begin{aligned} &{}^{C}_{0}{\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}(\tau )= \mathcal{M}\bigl(\tau ,\mathfrak{V}(\tau ),\lambda (\tau ) \bigr), \end{aligned}$$
(6.5)
$$\begin{aligned} &{}^{C}_{\tau}{\mathfrak{D}}_{T}^{\alpha} \lambda (\tau )=\mathcal{W}\bigl( \tau ,\mathfrak{V}(\tau ),\lambda (\tau )\bigr), \\ & \mathfrak{V}(0)=\mathfrak{V}_{0}, \qquad \lambda (T)=0, \end{aligned}$$
(6.6)

where \(\mathcal{M}(\tau ,\mathfrak{V}(\tau ),\lambda (\tau )) \) and \(\mathcal{W}(\tau ,\mathfrak{V}(\tau ),\lambda (\tau )) \) are known functions. Now, we can approximate \(\mathfrak{V}(\tau ) \) and \(\lambda (\tau ) \) as

$$\begin{aligned} \begin{aligned} &\mathfrak{V}(\tau )\approx \mathfrak{V}_{N}(\tau )= \sum_{i=0}^{N} \mathfrak{V}_{i} \mu _{i}^{a,b}(\tau ) =V^{T}\Phi (\tau ), \\ &\lambda ( \tau )\approx \lambda _{N}(\tau )=\sum_{i=0}^{N} \lambda _{i} \mu _{i}^{a,b}(\tau )=\Lambda ^{T}\Phi (\tau ), \end{aligned} \end{aligned}$$
(6.7)

where

$$ V^{T}=[\mathfrak{V}_{0},\mathfrak{V}_{1},\ldots ,\mathfrak{V}_{N}],\qquad \Lambda ^{T}=[\lambda _{0}, \lambda _{1},\ldots ,\lambda _{N}] $$

are unknown vectors which should be determined. By virtue of Sect. 5, the functions \({}^{C}_{0}{\mathfrak{D}}_{\tau}^{\alpha}\mathfrak{V}(\tau ) \) and \({}^{C}_{\tau}{\mathfrak{D}}_{T}^{\alpha}\lambda (\tau ) \) can be approximated in the following manner:

$$ {}^{C}_{0}{\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}(\tau )\approx \tau ^{-\alpha}V^{T} \mathcal{S}^{(\alpha )}\Phi (\tau ),\qquad {}^{C}_{\tau}{ \mathfrak{D}}_{T}^{\alpha}\lambda (\tau ) \approx \Lambda ^{T} \mathcal{S}_{(\alpha )}\Phi (\tau ). $$
(6.8)

In addition, the boundary conditions expressed in (6.4) yield

$$ \mathfrak{V}(0)=V^{T}\Phi (0)=\mathfrak{V}_{0},\qquad \lambda (T)=\Lambda ^{T}\Phi (T)=0. $$
(6.9)

Substituting (6.7) and (6.8) into (6.5) and (6.6), the residuals of these equations can be computed as follows:

$$\begin{aligned} &R(\tau )=\tau ^{-\alpha}V^{T} \mathcal{S}^{(\alpha )}\Phi (\tau )- \mathcal{M}\bigl(\tau ,V^{T}\Phi (\tau ),\Lambda ^{T}\Phi (\tau )\bigr), \\ &\tilde{R}(\tau )=\Lambda ^{T} \mathcal{S}_{(\alpha )} \Phi (\tau )- \mathcal{W}\bigl(\tau ,V^{T}\Phi (\tau ),\Lambda ^{T}\Phi (\tau )\bigr). \end{aligned}$$
(6.10)

The spectral collocation technique is based on forcing the residuals to vanish at selected collocation nodes. Although several choices of nodes are possible, here we use the following ones:

$$ t_{j}=\frac{T}{2}-\frac{T}{2}\cos \biggl( \frac{\pi}{N}j\biggr),\quad j=0,1, \ldots ,N, $$
(6.11)

where \(t_{j}\), \(j=0,1,\ldots ,N \), are the shifted Chebyshev–Gauss–Lobatto points in the interval \([0,T] \). Since there are \((2N+2) \) unknown coefficients \(\mathfrak{V}_{j} \) and \(\lambda _{j} \) (\(j=0,1,\ldots ,N\)), we must construct an associated system of \((2N+2) \) algebraic equations. For this purpose, the first equation of (6.10) is collocated at the nodes \(t_{i}\), \(i=1,\ldots ,N \), and the second equation of (6.10) is collocated at the nodes \(t_{k}\), \(k=0,1,\ldots ,N-1 \), as follows:

$$\begin{aligned} &R(t_{i})=0,\quad i=1,2,\ldots ,N, \\ &\tilde{R}(t_{k})=0,\quad k=0,1,\ldots ,N-1. \end{aligned}$$
(6.12)

Hence, the above-mentioned system contains 2N algebraic equations. Now, equations given in (6.12) together with the boundary conditions (6.9) form a nonlinear (or linear) system of algebraic equations in the unknown coefficients \(\mathfrak{V}_{j} \) and \(\lambda _{j} \) (\(j=0,1,\ldots ,N\)) that has \((2N+2) \) equations and \((2N+2) \) unknowns. We can utilize the Newton iterative technique for solving this system; then, by determining \(\mathfrak{V}_{j} \) and \(\lambda _{j} \) (\(j=0,1,\ldots ,N\)), the desired approximate solutions can be calculated from (6.7).

Numerical experiments

In this section, we present some examples to test the performance and efficiency of the proposed method. These examples are selected from the literature for their importance and frequent use as benchmarks, and they cover a variety of FOCPs. In Examples 1, 2, 3, and 5, we consider a linear time-invariant system with a quadratic performance index; Example 5 is a practical example with engineering applications. Moreover, a nonlinear time-varying system with a quadratic performance index is presented in Example 4. For these examples, the exact solutions when \(\alpha =1\) are known, so we can compare them with the approximate solutions obtained by the proposed method. The numerical simulations are implemented in MAPLE 18 with Digits = 20. All computations are performed on a Core i5 laptop with 6 GB of RAM and a 1.80 GHz CPU, which shows that the method imposes no special memory requirements. In the following examples, the parameter N denotes the number of the GLPs.

Example 1

([14, 15, 18, 23, 43])

Consider the following FOCP:

$$\begin{aligned} \operatorname{Min} \mathfrak{J}(\mathfrak{W}) = \frac{1}{2} \int _{0}^{1} \bigl( \mathfrak{V}^{2}( \tau )+\mathfrak{W}^{2}(\tau )\bigr)\,d\tau \end{aligned}$$

subject to

$$\begin{aligned}& {}^{C}_{0} {\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}(\tau ) =- \mathfrak{V}(\tau )+\mathfrak{W}(\tau ), \\& \mathfrak{V}(0) = 1. \end{aligned}$$

The exact solution to this problem when \(\alpha =1\) is as follows:

$$\begin{aligned}& \mathfrak{V}^{*}(\tau ) =\cosh (\sqrt{2}\tau )+\theta \sinh (\sqrt{2} \tau ), \\& \mathfrak{W}^{*}(\tau ) =(1+\sqrt{2}\theta )\cosh (\sqrt{2}\tau )+( \sqrt{2}+\theta )\sinh (\sqrt{2}\tau ), \end{aligned}$$

where \(\theta =- \frac {\cosh (\sqrt{2})+\sqrt{2}\sinh (\sqrt{2})}{\sqrt{2}\cosh (\sqrt{2})+\sinh (\sqrt{2})}\). The minimum value of the performance index \(\mathfrak{J}\) when \(\alpha =1\) is \(\mathfrak{J}^{*} = 0.1929092980932\). In Fig. 1, the approximate values and the absolute errors of \(\mathfrak{J}\) for some values of N when \(\alpha =1\) are plotted. Figures 2 and 3 compare the exact and approximate solutions of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\), respectively, for various values of α with \(N=8\). Tables 2 and 3 compare the approximate solutions obtained by our method for various values of α at selected values of τ with the exact solutions. In addition, the CPU time for various values of α is included in Table 2. From these results, it is clear that the approximate solutions at \(\alpha =1\) are in very good agreement with the corresponding exact solutions. Furthermore, as α approaches 1, the approximate solutions of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) converge to the exact solutions. The approximate values of \(\mathfrak{J}\) at \(\alpha =0.8\), 0.9, and 1 for the proposed method and several other numerical methods are listed in Table 4. Table 5 compares the approximate values of \(\mathfrak{J}\) and the CPU time obtained by the methods in [44, 45] and by the proposed method; it is clear that our method requires significantly less CPU time. In Table 5, \(M_{1}\), \(M_{2}\), and \(N_{1}\) denote the orders of the Bernoulli polynomials, Taylor polynomials, and block-pulse functions, respectively. The absolute errors of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) when \(\alpha =1\) and \(N=4\), 6, 8, and 10 are shown in Figs. 4 and 5. These figures also illustrate the fast convergence rate of the proposed method, since the errors decay rapidly as the number of GLPs increases.
Moreover, Table 6 reports the maximum absolute errors of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) and the absolute errors of \(\mathfrak{J}\) given by the proposed method in comparison with the methods in [18, 23] at \(\alpha =1\) and \(N=4\), 6, 8, and 10. The obtained results show that the errors, especially for the control variable \(\mathfrak{W}(\tau )\), are smaller for the proposed method than for the methods in [18, 23]. From these tables and figures, it can be seen that the state and control variables are accurately approximated by our method.
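The stated exact solution can be checked numerically. The Python sketch below (illustrative only, not the paper's MAPLE code) evaluates θ, verifies that the optimal control vanishes at \(\tau =1\) (a consequence of \(\mathfrak{W}=-\lambda \) and the transversality condition \(\lambda (1)=0\)), and recovers \(\mathfrak{J}^{*}\) by composite Simpson quadrature; the `simpson` helper is ours.

```python
import math

sqrt2 = math.sqrt(2.0)
theta = -((math.cosh(sqrt2) + sqrt2 * math.sinh(sqrt2))
          / (sqrt2 * math.cosh(sqrt2) + math.sinh(sqrt2)))

def V(t):  # stated optimal state (alpha = 1)
    return math.cosh(sqrt2 * t) + theta * math.sinh(sqrt2 * t)

def W(t):  # stated optimal control (alpha = 1)
    return ((1.0 + sqrt2 * theta) * math.cosh(sqrt2 * t)
            + (sqrt2 + theta) * math.sinh(sqrt2 * t))

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

# performance index evaluated at the stated optimal pair
J = simpson(lambda t: 0.5 * (V(t) ** 2 + W(t) ** 2), 0.0, 1.0)
```

With these definitions, `V(0.0)` returns the initial condition 1, `W(1.0)` is zero up to round-off, and `J` agrees with the reported value \(\mathfrak{J}^{*} = 0.1929092980932\).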

Figure 1

Graphs of the approximate values and the absolute errors of \(\mathfrak{J}\) when \(\alpha =1\) for some values of N in Example 1

Figure 2

Graphs of the exact and numerical solutions of \(\mathfrak{V}(\tau )\) for various values of α in Example 1

Figure 3

Graphs of the exact and numerical solutions of \(\mathfrak{W}(\tau )\) for \(\alpha =1\) in Example 1

Figure 4

Graphs of the absolute errors of \(\mathfrak{V}(\tau )\) when \(\alpha =1\) for some values of N in Example 1

Figure 5

Graphs of the absolute errors of \(\mathfrak{W}(\tau )\) when \(\alpha =1\) for some values of N in Example 1

Table 2 Approximate solutions of \(\mathfrak{V}(\tau )\) for various values of α where \(N=8 \) along with CPU time in Example 1
Table 3 Approximate solutions of \(\mathfrak{W}(\tau )\) for various values of α where \(N=8 \) in Example 1
Table 4 The results obtained for \(\mathfrak{J}\) with \(\alpha =0.8, 0.9\), and 1 via several numerical schemes for Example 1
Table 5 The results obtained for \(\mathfrak{J}\) and CPU time with \(\alpha =1\) via several numerical schemes for Example 1
Table 6 A comparison between the results obtained by our method with those obtained in [18, 23] with various values of N for Example 1

Example 2

Consider the following FOCP:

$$\begin{aligned} \operatorname{Min} \mathfrak{J}(\mathfrak{W}) = \frac{1}{2} \int _{0}^{1} \bigl[\bigl( \mathfrak{V}(\tau )- \tau ^{\alpha +1}\bigr)^{2}+\bigl(\mathfrak{W}(\tau )-\tau ^{ \alpha +1}-\tau \Gamma (\alpha +2)\bigr)^{2}\bigr]\,d\tau , \end{aligned}$$

subject to

$$\begin{aligned}& {}^{C}_{0} {\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}(\tau ) =- \mathfrak{V}(\tau )+\mathfrak{W}(\tau ), \\& \mathfrak{V}(0) = 0. \end{aligned}$$

For any value of \(\alpha > 0 \), the exact solution to this problem is

$$ \mathfrak{V}^{*}(\tau )=\tau ^{\alpha +1}, \qquad \mathfrak{W}^{*}( \tau )=\tau ^{\alpha +1}+\tau \Gamma (\alpha +2). $$

The minimum value of the performance index is \(\mathfrak{J}^{*} = 0\) for any \(\alpha >0\). Figure 6 compares the exact and approximate solutions of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) for \(N=8\) and \(\alpha =0.5\), 0.7, 0.9, and 1. In Fig. 7, we plot the state variable \(\mathfrak{V}(\tau )\) and the control variable \(\mathfrak{W}(\tau )\) for \(\alpha =0.5\) and some values of N along with the exact solutions. The absolute errors of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) for various values of α are listed in Tables 7 and 8, which also include the CPU time and the absolute errors of \(\mathfrak{J}\), respectively. From these results, it is worth noting that the approximate solutions obtained by the proposed method are in excellent agreement with the exact solutions.
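The exactness of this pair for every \(\alpha >0\) rests on the Caputo power rule \({}^{C}_{0}\mathfrak{D}_{\tau}^{\alpha}\,\tau ^{\alpha +1}=\Gamma (\alpha +2)\,\tau \), which makes the dynamics hold identically. The Python sketch below (illustrative, with helper names of our own) checks the power rule against the integral definition of the Caputo derivative, using the substitution \(w=(t-s)^{1-\alpha}\) to remove the weak singularity before applying Simpson quadrature.

```python
import math

def caputo_power(alpha, beta, t):
    """Caputo power rule for 0 < alpha < 1 and beta > 0:
    D^alpha t^beta = Gamma(beta+1)/Gamma(beta-alpha+1) * t**(beta-alpha)."""
    return math.gamma(beta + 1.0) / math.gamma(beta - alpha + 1.0) * t ** (beta - alpha)

def caputo_numeric(df, alpha, t, n=4000):
    """Caputo derivative of order alpha in (0,1) from the integral definition
    D^alpha f(t) = (1/Gamma(1-alpha)) * int_0^t f'(s) (t-s)^(-alpha) ds.
    The substitution w = (t-s)^(1-alpha) removes the weak singularity; the
    smooth integrand is then handled by composite Simpson (n must be even)."""
    p = 1.0 / (1.0 - alpha)
    wmax = t ** (1.0 - alpha)
    h = wmax / n
    def g(w):
        return df(max(t - w ** p, 0.0))  # clamp guards tiny negative round-off
    s = g(0.0) + g(wmax) + sum((4 if i % 2 else 2) * g(i * h) for i in range(1, n))
    return s * h / 3.0 / (1.0 - alpha) / math.gamma(1.0 - alpha)
```

For instance, with \(\alpha =0.5\) and \(f(s)=s^{1.5}\) (so \(f'(s)=1.5\,s^{0.5}\)), `caputo_numeric` reproduces \(\Gamma (\alpha +2)\,t\) at \(t=0.8\) to about four decimal places.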

Figure 6

Graphs of the exact and numerical solutions for various values of α in Example 2

Figure 7

Graphs of the exact and numerical solutions when \(\alpha =0.5\) for some values of N in Example 2

Table 7 The results obtained for the absolute errors of \(\mathfrak{V}(\tau )\) with various values of α along with CPU time where \(N=8\) for Example 2
Table 8 The results obtained for the absolute errors of \(\mathfrak{W}(\tau )\) and \(\mathfrak{J}\) with various values of α where \(N=8\) for Example 2

Example 3

([14, 21, 25])

Consider the following FOCP:

$$\begin{aligned} \operatorname{Min} \mathfrak{J}(\mathfrak{W}) = \frac{1}{2} \int _{0}^{1} \bigl( \mathfrak{V}_{1}^{2}( \tau )+\mathfrak{V}_{2}^{2}(\tau )+ \mathfrak{W}^{2}( \tau )\bigr)\,d\tau \end{aligned}$$

subject to

$$\begin{aligned}& {}^{C}_{0} {\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}_{1}(\tau ) =- \mathfrak{V}_{1}(\tau )+ \mathfrak{V}_{2}(\tau )+\mathfrak{W}(\tau ), \\& {}^{C}_{0} {\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}_{2}(\tau ) =-2 \mathfrak{V}_{2}(\tau ), \\& \mathfrak{V}_{1}(0) = 1, \qquad \mathfrak{V}_{2}(0)= 1. \end{aligned}$$

We obtain the exact solution to this problem when \(\alpha =1\) as follows:

$$\begin{aligned} & \mathfrak{V}_{1}^{*}(\tau )=- \frac{3}{2} e^{-2\tau}+(\sqrt{2}+1) \theta _{1} e^{-\sqrt{2}\tau}+(-\sqrt{2}+1)\theta _{2} e^{\sqrt{2} \tau}, \\ & \mathfrak{V}_{2}^{*}(\tau )=e^{-2\tau}, \\ &\mathfrak{W}^{*}(\tau )=\frac{1}{2} e^{-2\tau}-\theta _{1} e^{-\sqrt{2} \tau}-\theta _{2} e^{\sqrt{2}\tau}, \end{aligned}$$

where \(\theta _{1}= \frac {e^{-2}\sqrt{2}+5e^{\sqrt{2}}-e^{-2}}{2(e^{-\sqrt{2}}\sqrt{2}+e^{\sqrt{2}}\sqrt{2}-e^{-\sqrt{2}}+e^{\sqrt{2}})}\) and \(\theta _{2}= \frac {e^{-2}\sqrt{2}-5e^{-\sqrt{2}}+e^{-2}}{2(e^{-\sqrt{2}}\sqrt{2}+e^{\sqrt{2}}\sqrt{2}-e^{-\sqrt{2}}+e^{\sqrt{2}})}\). The minimum value of the performance index \(\mathfrak{J}\) when \(\alpha =1\) is \(\mathfrak{J}^{*} =0.4319872403\). Figure 8 compares the exact and approximate solutions of \(\mathfrak{V}_{1}(\tau )\), \(\mathfrak{V}_{2}(\tau )\), and \(\mathfrak{W}(\tau )\) for various values of α with \(N=8\). From this figure, it is clear that the approximate solutions when \(\alpha =1\) are in very good agreement with the corresponding exact solutions. Furthermore, as α approaches 1, the approximate solutions of \(\mathfrak{V}_{1}(\tau )\), \(\mathfrak{V}_{2}(\tau )\), and \(\mathfrak{W}(\tau )\) converge to the exact solutions. Table 9 reports the absolute errors of \(\mathfrak{V}_{1}(\tau )\), \(\mathfrak{V}_{2}(\tau )\), and \(\mathfrak{W}(\tau )\) obtained by the proposed method in comparison with the method in [25] at \(\alpha =1\) for \(N=5\) and 8. The results show that the approximate solutions of the proposed method are more accurate than those of the method in [25]. Moreover, the absolute errors of \(\mathfrak{V}_{1}(\tau )\), \(\mathfrak{V}_{2}(\tau )\), and \(\mathfrak{W}(\tau )\) for \(N=11\) and \(\alpha =1\) are shown in Fig. 9. These results also illustrate the fast convergence rate of the proposed method, since the errors decay rapidly as the number of GLPs increases. The approximate values of \(\mathfrak{J}\) at \(\alpha =0.5\), 0.8, 0.9, 0.99, and 1 for the proposed method and the methods in [14, 21], together with the CPU time for various values of α, are included in Table 10. From these tables and figures, it can be seen that the state and control variables are accurately approximated by the proposed method.
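One useful structural observation is that the stated expressions satisfy the first state equation at \(\alpha =1\) identically, regardless of the particular values of \(\theta _{1}\) and \(\theta _{2}\) (which are fixed only by the boundary and transversality data). The illustrative Python sketch below evaluates the residual of \(\dot{\mathfrak{V}}_{1}=-\mathfrak{V}_{1}+\mathfrak{V}_{2}+\mathfrak{W}\) at several points using analytic derivatives; the helper names are ours.

```python
import math

s2 = math.sqrt(2.0)
den = 2.0 * (s2 * math.exp(-s2) + s2 * math.exp(s2) - math.exp(-s2) + math.exp(s2))
th1 = (s2 * math.exp(-2.0) + 5.0 * math.exp(s2) - math.exp(-2.0)) / den
th2 = (s2 * math.exp(-2.0) - 5.0 * math.exp(-s2) + math.exp(-2.0)) / den

def V1(t):  # stated optimal state V1 at alpha = 1
    return (-1.5 * math.exp(-2.0 * t) + (s2 + 1.0) * th1 * math.exp(-s2 * t)
            + (1.0 - s2) * th2 * math.exp(s2 * t))

def V1dot(t):  # analytic derivative of V1
    return (3.0 * math.exp(-2.0 * t) - s2 * (s2 + 1.0) * th1 * math.exp(-s2 * t)
            + s2 * (1.0 - s2) * th2 * math.exp(s2 * t))

def V2(t):  # stated optimal state V2 at alpha = 1
    return math.exp(-2.0 * t)

def Wc(t):  # stated optimal control at alpha = 1
    return 0.5 * math.exp(-2.0 * t) - th1 * math.exp(-s2 * t) - th2 * math.exp(s2 * t)

def residual(t):
    """Residual of V1' = -V1 + V2 + W; vanishes for ANY th1, th2."""
    return V1dot(t) - (-V1(t) + V2(t) + Wc(t))
```

The second state equation is immediate, since \(\dot{\mathfrak{V}}_{2}=-2e^{-2\tau}=-2\mathfrak{V}_{2}\).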

Figure 8

Graphs of the exact and numerical solutions for various values of α in Example 3

Figure 9

Graphs of the absolute errors when \(\alpha =1\) and \(N=12\) in Example 3

Table 9 The comparison of absolute errors with the method in [25] for \(N=5\) and 8 in Example 3
Table 10 A comparison between the results obtained by our method with those obtained in [14, 21] for \(\mathfrak{J}\) and the CPU time with some values of α for Example 3

Example 4

([48, 49])

Consider the following FOCP:

$$\begin{aligned} \operatorname{Min} \mathfrak{J}(\mathfrak{W}) = \int _{0}^{1} \biggl[\bigl(\mathfrak{V}( \tau )- \tau ^{2}\bigr)^{2}+\biggl(\mathfrak{W}(\tau )-\tau e^{-\tau}+ \frac{1}{2}e^{\tau ^{2}-\tau}\biggr)^{2} \biggr]\,d\tau \end{aligned}$$

subject to

$$\begin{aligned}& {}^{C}_{0} {\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}(\tau ) =e^{ \mathfrak{V}(\tau )}+2 e^{\tau}\mathfrak{W}(\tau ), \\& \mathfrak{V}(0) = 0. \end{aligned}$$

The exact solution to this problem when \(\alpha =1\) is as follows:

$$ \mathfrak{V}^{*}(\tau )=\tau ^{2},\qquad \mathfrak{W}^{*}( \tau )= \tau e^{-\tau}-\frac{1}{2}e^{\tau ^{2}-\tau}. $$

The minimum value of the performance index \(\mathfrak{J}\) when \(\alpha =1\) is \(\mathfrak{J}^{*} = 0\). Figure 10 compares the exact and approximate solutions of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) for various values of α with \(N=6\). From this figure, it is clear that the approximate solutions for \(\alpha =1\) are in very good agreement with the corresponding exact solutions. Furthermore, as α approaches 1, the approximate solutions of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) converge to the exact solutions. The absolute errors of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) at \(\alpha =1\) and \(N=5\) are shown in Fig. 11. Moreover, Table 11 reports the absolute errors of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) obtained by our method in comparison with the method in [48] at \(\alpha =1\) and \(N=5\). Table 12 lists the maximum absolute errors of \(\mathfrak{V}(\tau )\) and \(\mathfrak{W}(\tau )\) and the absolute errors of \(\mathfrak{J}\) given by the proposed method in comparison with the method in [49] at \(\alpha =1\) and \(N=6\). The obtained results show that the errors, especially for \(\mathfrak{W}(\tau )\), are smaller for the proposed method than for the methods in [48, 49]. From these tables and figures, it can be seen that the state and control variables are accurately approximated by the proposed method.
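For this nonlinear example, the stated pair satisfies the dynamics at \(\alpha =1\) identically: \(\dot{\mathfrak{V}}=2\tau \) and \(e^{\mathfrak{V}}+2e^{\tau}\mathfrak{W}=e^{\tau ^{2}}+2\tau -e^{\tau ^{2}}=2\tau \), while the cost integrand vanishes, giving \(\mathfrak{J}^{*}=0\). A quick illustrative check in Python (helper names are ours):

```python
import math

def V(t):  # stated optimal state at alpha = 1
    return t * t

def W(t):  # stated optimal control at alpha = 1
    return t * math.exp(-t) - 0.5 * math.exp(t * t - t)

def residual(t):
    """Residual of dV/dt = exp(V) + 2*exp(t)*W at alpha = 1.
    Since dV/dt = 2t analytically, the residual should vanish."""
    return 2.0 * t - (math.exp(V(t)) + 2.0 * math.exp(t) * W(t))
```

Evaluating `residual` at any \(\tau \in [0,1]\) returns zero up to floating-point round-off.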

Figure 10

Graphs of exact and numerical solutions for various values of α in Example 4

Figure 11

Graphs of the absolute errors when \(\alpha =1\) and \(N=5\) in Example 4

Table 11 A comparison between the absolute errors obtained by our method with those obtained in [48] with \(N=5\) for Example 4
Table 12 A comparison between the absolute errors obtained by our method with those obtained in [49] with \(N=6\) for Example 4

Example 5

Consider the vibration of a mass-spring-damper system subjected to an external force. In particular, we aim to examine step forcing functions, impulses, and the response to harmonic excitations. Motors, rotating machinery, and similar equipment typically produce periodic motions of structures that induce vibrations in nearby mechanical devices and structures [50]. Here, an actuator provides the control force \(F(\tau ) = b\mathfrak{W}(\tau )\), where b is a constant. Summing the forces, the equation for the forced vibration of the system in Fig. 12 becomes

$$ m \ddot{ \mathfrak{V}}(\tau )+c \dot{\mathfrak{V}}(\tau )+k \mathfrak{V}(\tau )=b \mathfrak{W}(\tau ), $$

where m, c, and k are constants. We recall that the mass-spring-damper system can be used to model the response of many dynamic systems and to study the elasticity and mechanical behavior of nonlinear and viscoelastic materials. Depending on the number and arrangement (parallel or series combination) of its elements (mass, spring, and damper), the mass-spring-damper system has various practical applications, including but not limited to vehicle suspension systems, vibrations of buildings on viscoelastic-like foundations, simulation of the motion of tendons and muscle deformations, and computer animation. With the specific application of the linear regulator problem in vibration suppression, extracted from [23], we obtain the following FOCP:

$$\begin{aligned} \operatorname{Min} \mathfrak{J}(\mathfrak{W}) = \frac{1}{2} \int _{0}^{1} \bigl( \mathfrak{V}_{1}^{2}( \tau )+a \mathfrak{W}^{2}(\tau )\bigr)\,d\tau \end{aligned}$$

subject to

$$\begin{aligned}& {}^{C}_{0} {\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}_{1}(\tau ) = \mathfrak{V}_{2}(\tau ), \\& {}^{C}_{0} {\mathfrak{D}}_{\tau}^{\alpha} \mathfrak{V}_{2}(\tau ) =- \frac{k}{m} \mathfrak{V}_{1}( \tau )-\frac{c}{m} \mathfrak{V}_{2}( \tau )+\frac{b}{m} \mathfrak{W}(\tau ), \\& \mathfrak{V}_{1}(0) = 1, \qquad \mathfrak{V}_{2}(0)= 1. \end{aligned}$$
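The two fractional state equations above are the usual first-order reduction of the oscillator equation, obtained by setting \(\mathfrak{V}_{1}=\mathfrak{V}\) and \(\mathfrak{V}_{2}=\dot{\mathfrak{V}}\) when \(\alpha =1\). As an illustrative sanity check of this reduction (not part of the paper's method), the Python sketch below integrates the uncontrolled case \(\mathfrak{W}\equiv 0\) with \(c=2\) and \(a=b=m=k=1\) (the values chosen below) by classical RK4; with these parameters the system is critically damped and, for the initial data \(\mathfrak{V}_{1}(0)=\mathfrak{V}_{2}(0)=1\), the response is \(\mathfrak{V}(\tau )=(1+2\tau )e^{-\tau}\).

```python
import math

m, c, k, b = 1.0, 2.0, 1.0, 1.0  # parameter choice used in this example

def f(x1, x2, w):
    """Right-hand side of the first-order reduction of
    m*v'' + c*v' + k*v = b*w, with x1 = v and x2 = v'."""
    return x2, -(k / m) * x1 - (c / m) * x2 + (b / m) * w

def rk4(x1, x2, w, h, steps):
    """Classical 4th-order Runge-Kutta for the 2-state system (constant w)."""
    for _ in range(steps):
        k1 = f(x1, x2, w)
        k2 = f(x1 + 0.5 * h * k1[0], x2 + 0.5 * h * k1[1], w)
        k3 = f(x1 + 0.5 * h * k2[0], x2 + 0.5 * h * k2[1], w)
        k4 = f(x1 + h * k3[0], x2 + h * k3[1], w)
        x1 += h / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0])
        x2 += h / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1])
    return x1, x2

# Uncontrolled (w = 0) response from V1(0) = V2(0) = 1: critically damped,
# v(tau) = (1 + 2*tau) * exp(-tau).
x1, x2 = rk4(1.0, 1.0, 0.0, 1.0e-3, 1000)
```

At \(\tau =1\) the integrator reproduces \(v(1)=3e^{-1}\) and \(\dot{v}(1)=-e^{-1}\) to high accuracy.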

Choosing \(c=2\) and \(a=b=m=k=1\), we obtain the exact solution when \(\alpha =1\) as follows:

$$\begin{aligned}& \mathfrak{V}_{1}^{*}(\tau ) =\biggl(\frac {2{,}448{,}542{,}446{,}934}{574{,}274{,}351{,}289}e^{-\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}- \frac {1{,}056{,}415{,}030{,}945}{26{,}501{,}017{,}876{,}847}e^{\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}\biggr) \\& \hphantom{\mathfrak{V}_{1}^{*}(\tau ) =}{}\times \sin \biggl(\frac {1}{2} \sqrt{-2+2\sqrt{2}}\tau \biggr) \\& \hphantom{\mathfrak{V}_{1}^{*}(\tau ) =}{} +\biggl(\frac {182{,}130{,}319{,}402}{2{,}268{,}083{,}818{,}399}e^{\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}+\frac {717{,}441{,}983{,}179}{780{,}083{,}809{,}860}e^{-\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau} \biggr) \\& \hphantom{\mathfrak{V}_{1}^{*}(\tau ) =}{}\times \cos \biggl(\frac {1}{2}\sqrt{-2+2\sqrt{2}}\tau \biggr), \\& \mathfrak{V}_{2}^{*}(\tau ) =\biggl(-\frac {4{,}619{,}908{,}248{,}187}{905{,}327{,}915{,}158}e^{-\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}- \frac {247{,}515{,}980{,}953}{3{,}080{,}802{,}211{,}846}e^{\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}\biggr) \\& \hphantom{\mathfrak{V}_{2}^{*}(\tau ) =}{}\times \sin \biggl(\frac {1}{2} \sqrt{-2+2\sqrt{2}}\tau \biggr) \\& \hphantom{\mathfrak{V}_{2}^{*}(\tau ) =}{}+\biggl(\frac {107{,}822{,}929{,}289}{1{,}538{,}469{,}380{,}682}e^{\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}+\frac {1{,}430{,}646{,}451{,}393}{1{,}538{,}469{,}380{,}682}e^{-\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau} \biggr) \\& \hphantom{\mathfrak{V}_{2}^{*}(\tau ) =}{}\times \cos \biggl(\frac {1}{2}\sqrt{-2+2\sqrt{2}}\tau \biggr), \\& \mathfrak{W}^{*}(\tau ) =\biggl(-\frac {1{,}038{,}973{,}168{,}371}{1{,}369{,}025{,}535{,}412}e^{-\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}- \frac {476{,}880{,}084{,}019}{1{,}486{,}948{,}346{,}550}e^{\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}\biggr) \\& \hphantom{\mathfrak{W}^{*}(\tau ) =}{}\times \sin \biggl(\frac {1}{2} \sqrt{-2+2\sqrt{2}}\tau \biggr) \\& \hphantom{\mathfrak{W}^{*}(\tau ) =}{} +\biggl(\frac {4{,}059{,}677{,}169{,}262}{15{,}559{,}760{,}490{,}977}e^{\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau}-\frac {584{,}626{,}462{,}642}{1{,}035{,}676{,}735{,}491}e^{-\frac {1}{2}\sqrt{2+2\sqrt{2}}\tau} \biggr) \\& \hphantom{\mathfrak{W}^{*}(\tau ) =}{}\times \cos \biggl(\frac {1}{2}\sqrt{-2+2\sqrt{2}}\tau \biggr). \end{aligned}$$

The minimum value of the performance index \(\mathfrak{J}\) when \(\alpha =1\) is \(\mathfrak{J}^{*} =0.6631296243\). In Fig. 13, the approximate values and the absolute errors of \(\mathfrak{J}\) for some values of N when \(\alpha =1\) are plotted. Figure 14 compares the exact and approximate solutions of \(\mathfrak{V}_{1}(\tau )\), \(\mathfrak{V}_{2}(\tau )\), and \(\mathfrak{W}(\tau )\) for various values of α at \(N=8\). In Table 13, the absolute errors of \(\mathfrak{V}_{1}(\tau )\), \(\mathfrak{V}_{2}(\tau )\), and \(\mathfrak{W}(\tau )\) for \(N=4\) and 10 at \(\alpha =1\), along with the CPU time, are listed. Moreover, the absolute errors of \(\mathfrak{V}_{1}(\tau )\), \(\mathfrak{V}_{2}(\tau )\), and \(\mathfrak{W}(\tau )\) when \(\alpha =1\) and \(N=11\) are shown in Fig. 15. These results also illustrate the fast convergence rate of the proposed method, since the errors decay rapidly as the number of GLPs increases.

Figure 12

(a) Schematic of the forced mass-damper system assuming no friction on the surface and (b) free body diagram of the system of part (a) [50] in Example 5

Figure 13

Graphs of the approximate values and absolute errors of \(\mathfrak{J}\) when \(\alpha =1\) for some values of N in Example 5

Figure 14

Graphs of the exact and numerical solutions for various values of α in Example 5

Figure 15

Graphs of the absolute errors when \(\alpha =1\) and \(N=11\) in Example 5

Table 13 The absolute errors obtained by our method with \(N=4,10\) and \(\alpha =1\) along with the CPU time for Example 5

Conclusions and remarks

In this paper, we established an accurate and efficient scheme to solve a class of FOCPs. By applying the GLPs, deriving the operational matrices of fractional derivatives, and using the necessary optimality conditions, we reduced the main problem to the simpler one of solving a system of algebraic equations. The proposed scheme was illustrated on several test examples, and the results demonstrate that it is reliable, simple, and accurate for solving FOCPs. In future work, we plan to apply the proposed scheme to delay FOCPs; the method can also be extended to variable-order FOCPs.

Availability of data and materials

Data sharing is not applicable to this study.

References

  1. Bushnaq, S., Saeed, T., Torres, D.F.M., Zeb, A.: Control of COVID-19 dynamics through a fractional-order model. Alex. Eng. J. 60(4), 3587–3592 (2021)

  2. Dong, N.P., Long, H.V., Khastan, A.: Optimal control of a fractional order model for granular SEIR epidemic with uncertainty. Commun. Nonlinear Sci. Numer. Simul. 88, 1–39 (2020)

  3. Naik, P.A., Zu, J., Owolabi, K.M.: Global dynamics of a fractional order model for the transmission of HIV epidemic with optimal control. Chaos Solitons Fractals 138, 1–24 (2020)

  4. Shi, R., Li, Y., Wang, C.: Stability analysis and optimal control of a fractional-order model for African swine fever. Virus Res. 288, 1–24 (2020)

  5. Agrawal, O.P.: A general formulation and solution scheme for fractional optimal control problems. Nonlinear Dyn. 38(1), 323–337 (2004)

  6. Agrawal, O.P.: A formulation and numerical scheme for fractional optimal control problems. J. Vib. Control 14(9–10), 1291–1299 (2008)

  7. Sweilam, N.H., Al-Ajami, T.M., Hoppe, R.H.W.: Numerical solution of some types of fractional optimal control problems. Sci. World J. 2013, Article ID 306237 (2013)

  8. Pooseh, S., Almeida, R., Torres, D.F.M.: Fractional order optimal control problems with free terminal time. J. Ind. Manag. Optim. 10(2), 363–381 (2014)

  9. Sweilam, N.H., Al-Ajami, T.M.: Legendre spectral-collocation method for solving some types of fractional optimal control problems. J. Adv. Res. 6(3), 393–403 (2015)

  10. Tohidi, E., Saberi Nik, H.: A Bessel collocation method for solving fractional optimal control problems. Appl. Math. Model. 39(2), 455–465 (2015)

  11. Yang, Y., Zhang, J., Liu, H., Vasilev, A.O.: An indirect convergent Jacobi spectral collocation method for fractional optimal control problems. Math. Methods Appl. Sci. 44(4), 2806–2824 (2021)

  12. Habibli, M., Noori Skandari, M.H.: Fractional Chebyshev pseudospectral method for fractional optimal control problems. Optim. Control Appl. Methods 40(3), 558–572 (2019)

  13. Kumar, N., Mehra, M.: Legendre wavelet collocation method for fractional optimal control problems with fractional Bolza cost. Numer. Methods Partial Differ. Equ. 37(2), 1693–1724 (2021)

  14. Alizadeh, A., Effati, S.: An iterative approach for solving fractional optimal control problems. J. Vib. Control 24(1), 18–36 (2018)

  15. Jajarmi, A., Baleanu, D.: On the fractional optimal control problems with a general derivative operator. Asian J. Control 23(2), 1062–1071 (2021)

  16. Lotfi, A., Yousefi, S.A., Dehghan, M.: Numerical solution of a class of fractional optimal control problems via the Legendre orthonormal basis combined with the operational matrix and the Gauss quadrature rule. J. Comput. Appl. Math. 250, 143–160 (2013)

  17. Keshavarz, E., Ordokhani, Y., Razzaghi, M.: A numerical solution for fractional optimal control problems via Bernoulli polynomials. J. Vib. Control 22(18), 3889–3903 (2016)

  18. Ezz-Eldien, S.S., Doha, E.H., Baleanu, D., Bhrawy, A.H.: A numerical approach based on Legendre orthonormal polynomials for numerical solutions of fractional optimal control problems. J. Vib. Control 23(1), 16–30 (2017)

  19. Heydari, M.H., Hooshmandasl, M.R., Maalek Ghaini, F.M., Cattani, C.: Wavelets method for solving fractional optimal control problems. Appl. Math. Comput. 286, 139–154 (2016)

  20. Sahu, P.K., Saha Ray, S.: Comparison on wavelets techniques for solving fractional optimal control problems. J. Vib. Control 24(6), 1185–1201 (2018)

  21. Rabiei, K., Ordokhani, Y., Babolian, E.: The Boubaker polynomials and their application to solve fractional optimal control problems. Nonlinear Dyn. 88(2), 1013–1026 (2017)

  22. Abdelhakem, M., Moussa, H., Baleanu, D., El-Kady, M.: Shifted Chebyshev schemes for solving fractional optimal control problems. J. Vib. Control 25(15), 2143–2150 (2019)

  23. Yari, A.: Numerical solution for fractional optimal control problems by Hermite polynomials. J. Vib. Control 27(5–6), 698–716 (2021)

  24. Barikbin, Z., Keshavarz, E.: Solving fractional optimal control problems by new Bernoulli wavelets operational matrices. Optim. Control Appl. Methods 41(4), 1188–1210 (2020)

  25. Dehestani, H., Ordokhani, Y.: A spectral framework for the solution of fractional optimal control and variational problems involving Mittag-Leffler nonsingular kernel. J. Vib. Control 28(3–4), 260–275 (2022)

  26. Hassani, H., Tenreiro Machado, J.A., Mehrabi, S.: An optimization technique for solving a class of nonlinear fractional optimal control problems: application in cancer treatment. Appl. Math. Model. 93, 868–884 (2021)

  27. Hassani, H., Tenreiro Machado, J.A., Hosseini Asl, M.K., Dahaghin, M.S.: Numerical solution of nonlinear fractional optimal control problems using generalized Bernoulli polynomials. Optim. Control Appl. Methods 42(4), 1045–1063 (2021)

  28. Abd-Elhameed, W.M., Youssri, Y.H.: Spectral solutions for fractional differential equations via a novel Lucas operational matrix of fractional derivatives. Rom. J. Phys. 61(5–6), 795–813 (2016)

  29. Abd-Elhameed, W.M., Youssri, Y.H.: Generalized Lucas polynomial sequence approach for fractional differential equations. Nonlinear Dyn. 89(2), 1341–1355 (2017)

  30. Mokhtar, M.M., Mohamed, A.S.: Lucas polynomials semi-analytic solution for fractional multi-term initial value problems. Adv. Differ. Equ. 2019(1), 1 (2019)

  31. Oruç, Ö.: A new algorithm based on Lucas polynomials for approximate solution of 1D and 2D nonlinear generalized Benjamin–Bona–Mahony–Burgers equation. Comput. Math. Appl. 74(12), 3042–3057 (2017)

  32. Oruç, Ö.: A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation. Commun. Nonlinear Sci. Numer. Simul. 57, 14–25 (2018)

  33. Dehestani, H., Ordokhani, Y., Razzaghi, M.: A novel direct method based on the Lucas multiwavelet functions for variable-order fractional reaction-diffusion and subdiffusion equations. Numer. Linear Algebra Appl. 28(2), e2346 (2021)

  34. Dehestani, H., Ordokhani, Y., Razzaghi, M.: Combination of Lucas wavelets with Legendre–Gauss quadrature for fractional Fredholm–Volterra integro-differential equations. J. Comput. Appl. Math. 382, 113070 (2021)

  35. Dehestani, H., Ordokhani, Y., Razzaghi, M.: Fractional Lucas optimization method for evaluating the approximate solution of the multi-dimensional fractional differential equations. Eng. Comput. 38, 481–495 (2022)

  36. Kumar, R., Koundal, R., Srivastava, K., Baleanu, D.: Normalized Lucas wavelets: an application to Lane-Emden and pantograph differential equations. Eur. Phys. J. Plus 135(11), 1–24 (2020)

  37. Ali, I., Haq, S., Nisar, K.S., Baleanu, D.: An efficient numerical scheme based on Lucas polynomials for the study of multidimensional Burgers-type equations. Adv. Differ. Equ. 2021(1), 1 (2021)

  38. Youssri, Y.H., Abd-Elhameed, W.M., Mohamed, A.S., Sayed, S.M.: Generalized Lucas polynomial sequence treatment of fractional pantograph differential equation. Int. J. Appl. Comput. Math. 7(2), 1–16 (2021)

  39. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations, vol. 204. Elsevier, Amsterdam (2006)

  40. Podlubny, I.: Fractional Differential Equations, vol. 198. Elsevier, Amsterdam (1999)

  41. Agrawal, O.P.: Fractional variational calculus in terms of Riesz fractional derivatives. J. Phys. A, Math. Theor. 40(24), 6287–6303 (2007)

  42. Koshy, T.: Fibonacci and Lucas Numbers with Applications, vol. 2. Wiley, New York (2019)

  43. Agrawal, O.P.: General formulation for the numerical solution of optimal control problems. Int. J. Control 50(2), 627–638 (1989)

  44. Mashayekhi, S., Razzaghi, M.: An approximate method for solving fractional optimal control problems by hybrid functions. J. Vib. Control 24(9), 1621–1631 (2018)

  45. Yonthanthum, W., Rattana, A., Razzaghi, M.: An approximate method for solving fractional optimal control problems by the hybrid of block-pulse functions and Taylor polynomials. Optim. Control Appl. Methods 39(2), 873–887 (2018)

  46. Akbarian, T., Keyanpour, M.: A new approach to the numerical solution of fractional order optimal control problems. Appl. Appl. Math. 8(2), 523–534 (2013)

  47. Singha, N., Nahak, C.: An efficient approximation technique for solving a class of fractional optimal control problems. J. Optim. Theory Appl. 174(3), 785–802 (2017)

  48. Dehestani, H., Ordokhani, Y., Razzaghi, M.: Fractional-order Bessel wavelet functions for solving variable order fractional optimal control problems with estimation error. Int. J. Syst. Sci. 51(6), 1032–1052 (2020)

  49. Heydari, M.H., Avazzadeh, Z.: A new wavelet method for variable-order fractional optimal control problems. Asian J. Control 20(5), 1804–1817 (2018)

  50. Inman, D.J.: Vibration with Control. Wiley, New York (2017)

Acknowledgements

Not applicable.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to M. H. Heydari.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Karami, S., Fakharzadeh Jahromi, A. & Heydari, M.H. A computational method based on the generalized Lucas polynomials for fractional optimal control problems. Adv Cont Discr Mod 2022, 64 (2022). https://doi.org/10.1186/s13662-022-03737-1


Keywords

  • Generalized Lucas polynomials
  • Fractional optimal control problems
  • Spectral collocation method
  • Operational matrices
  • Caputo fractional derivative
  • Pontryagin’s maximum principle