There are several definitions of fractional derivatives. The Riemann–Liouville, the Grünwald–Letnikov, and the Caputo definitions of a fractional derivative of order $\alpha$ of a function $f(t)$ are given by

$$ {}_aD_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dt^n} \int_a^t \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau, \quad n-1 < \alpha < n, \tag{2.1a} $$

$$ {}_aD_t^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{k=0}^{[(t-a)/h]} (-1)^k \binom{\alpha}{k} f(t-kh), \tag{2.1b} $$

$$ {}_aD_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_a^t \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau, \quad n-1 < \alpha < n, \tag{2.1c} $$

where $\Gamma(\cdot)$ is Euler's gamma function, $[x]$ means the integer part of $x$, and $h$ is the time step increment.
On the other hand, it is possible to generalize several results based on transforms, yielding expressions such as the Fourier expression

$$ \mathscr{F}\{D^{\alpha} f(t)\} = (j\omega)^{\alpha}\, \mathscr{F}\{f(t)\}, \tag{2.2} $$

where $\omega$ and $\mathscr{F}$ represent the Fourier variable and operator, respectively, and $j = \sqrt{-1}$.
These expressions demonstrate that fractional derivatives have memory, contrary to integer-order derivatives, which are local operators. There is a long-standing, still ongoing discussion about the pros and cons of the different definitions. These debates are outside the scope of this paper but, in short, while the Riemann–Liouville definition involves initial conditions of fractional order, the Caputo counterpart requires integer-order initial conditions, which are easier to apply (the Caputo initial conditions are often loosely said to have "physical meaning"). The Grünwald–Letnikov formulation is frequently adopted in numerical algorithms because it inspires a discrete-time calculation algorithm, based on the approximation of the time increment through the sampling period.
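The discrete-time Grünwald–Letnikov scheme just mentioned can be sketched as follows. This is a minimal illustration, not the algorithm used in the paper: the function name and the recurrence for the binomial coefficients are our own, and the sum is truncated at the available signal history.

```python
def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Truncated Grunwald-Letnikov approximation of D^alpha f at time t.

    The limit h -> 0 is replaced by a fixed step h (the sampling period)
    and the sum is truncated at the available history [0, t].
    """
    n = int(t / h)   # number of past samples entering the sum
    coeff = 1.0      # (-1)^k * binom(alpha, k) for k = 0
    acc = 0.0
    for k in range(n + 1):
        acc += coeff * f(t - k * h)
        # recurrence: next coefficient = current * (k - alpha) / (k + 1)
        coeff *= (k - alpha) / (k + 1)
    return acc / h**alpha
```

As a sanity check, for $f(t) = t$ the half-derivative at $t = 1$ is $2/\sqrt{\pi} \approx 1.128$, which the sketch reproduces for small $h$.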
We verify that a fractional derivative requires an infinite number of samples and captures, therefore, all the signal history, contrary to what happens with integer-order derivatives, which are merely local operators [27]. This fact motivates the evaluation of calculation strategies based on delayed signal samples and leads to the study presented in this paper. In this line of thought, we can think of concentrating the delayed samples into a finite number of points that somehow "average" a given number of sampling instants (see Figure 1).
The concept of time-delayed samples for representing the signal memory can be formulated analytically as

$$ D^{\alpha} f(t) \approx \sum_{k=1}^{N} a_k\, f(t - \tau_k), \tag{2.3} $$

where $a_k$ and $\tau_k$, $k = 1, \ldots, N$, are the weight coefficients and the corresponding delays, and $N$ is the order of the approximation.
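Evaluating the right-hand side of (2.3) for a given set of weights and delays is straightforward; a minimal sketch follows, where the function name is illustrative and the weights and delays are assumed to come from a separate optimization such as the GA discussed later.

```python
def delayed_sample_approximation(f, t, weights, delays):
    """Weighted sum of delayed signal samples, following the form of (2.3):
    D^alpha f(t) ~ sum_k a_k * f(t - tau_k)."""
    return sum(a * f(t - tau) for a, tau in zip(weights, delays))
```

As a sanity check, the weights $(1/h, -1/h)$ with delays $(0, h)$ recover the first-order backward difference, an integer-order special case of this form.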
Before continuing we must mention that, although based on distinct premises, expression (2.3), inspired by the interpretation of fractional derivatives proposed in [27], is somehow a subset of the interesting multiscaling functional equation proposed by Nigmatullin in [28]. Besides, while in [28] complex values are allowed, in the present case we are restricted to real values for the parameters. In fact, expression (2.3) adopts the well-known time-delay operator, usual in control system theory, following the Laplace expression $\mathscr{L}\{f(t-\tau)\} = e^{-s\tau}\, \mathscr{L}\{f(t)\}$, where $s$ and $\mathscr{L}$ represent the Laplace variable and operator, respectively.
Another aspect that deserves attention is that, while stability and causality may impose restrictions on the parameters in (2.3), it was decided not to impose a priori any restriction on their numerical values in the optimization procedure developed in the sequel. For example, concerning the delays, it seems infeasible to "guess" future values of the signal, since only the past is available for signal processing; even so, it is important to analyze the values that emerge without establishing any limitation a priori. Nevertheless, in a second phase, stability and causality will be addressed.
The development of an algorithm for the calculation of $a_k$ and $\tau_k$, $k = 1, \ldots, N$, given the approximation and fractional orders $N$ and $\alpha$, respectively, can be established either in the time or in the frequency domain. In this paper we adopt the Fourier expression (2.2) with null initial conditions, leading to

$$ (j\omega)^{\alpha} \approx \sum_{k=1}^{N} a_k\, e^{-j\omega\tau_k}. \tag{2.4} $$
The parameters $a_k$ and $\tau_k$ can be optimized in the perspective of the functional

$$ J = \sum_{i=1}^{m} \left| (j\omega_i)^{\alpha} - \sum_{k=1}^{N} a_k\, e^{-j\omega_i\tau_k} \right|^2, \tag{2.5} $$

where $i$ represents an index of the sampling frequencies $\omega_i$ within the bandwidth under analysis and $m$ denotes the total number of sampling frequencies. Therefore, the quality of the approximation depends not only on the orders $N$ and $\alpha$, but also on the bandwidth.
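The functional (2.5) is inexpensive to evaluate numerically; a sketch with NumPy follows (symbol and function names are illustrative):

```python
import numpy as np

def functional_J(weights, delays, alpha, omegas):
    """Squared-error functional in the spirit of (2.5): compares (j*w)^alpha
    with the frequency response of the delayed-sample sum over the m
    sampling frequencies in `omegas`."""
    w = np.asarray(omegas, dtype=float)
    a = np.asarray(weights, dtype=float)
    tau = np.asarray(delays, dtype=float)
    target = (1j * w) ** alpha
    # rows: one delay term per weight; columns: one sampling frequency each
    approx = (a[:, None] * np.exp(-1j * np.outer(tau, w))).sum(axis=0)
    return float(np.sum(np.abs(target - approx) ** 2))
```

As a sanity check, for $\alpha = 0$ a single weight $a_1 = 1$ with delay $\tau_1 = 0$ makes the functional vanish, since the target reduces to unity.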
For the optimization of $J$ in (2.5) a genetic algorithm (GA) is adopted. GAs are a class of computational techniques for finding approximate solutions to optimization and search problems [29, 30]. A GA simulates a population of candidate solutions that evolves computationally towards better solutions. Once the genetic representation and the fitness function are defined, the GA initializes a population randomly and then improves it through the repetitive application of mutation, crossover, and selection operators. During the successive iterations, part or all of the population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, in which fitter solutions (as measured by the fitness function) are more likely to be selected. The GA terminates either when the maximum number of generations has been produced or when a satisfactory fitness level has been reached.
The pseudocode of a standard GA is as follows.

(1) Choose the initial population.

(2) Evaluate the fitness of each individual in the population.

(3) Repeat:

    (a) Select the best-ranking individuals to reproduce.

    (b) Breed a new generation through crossover and mutation and give birth to offspring.

    (c) Evaluate the fitness of the offspring individuals.

    (d) Replace the worst-ranked part of the population with the offspring.

(4) Until termination.
A common complementary technique, often adopted to speed up the convergence, denoted elitism, is the process of carrying the best individuals unchanged into the next generation.
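A minimal GA of the type outlined above, applied to the minimization of the functional (2.5), could be sketched as follows. The population size, operator choices, bounds, and elitism fraction are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def run_ga(alpha=0.5, N=3, pop_size=60, generations=150, seed=0, omegas=None):
    """Sketch of a GA minimizing J = sum_i |(j w_i)^alpha - sum_k a_k e^{-j w_i tau_k}|^2.
    Each individual stacks the N weights followed by the N delays."""
    rng = np.random.default_rng(seed)
    w = np.linspace(0.1, 1.0, 20) if omegas is None else np.asarray(omegas)

    def fitness(ind):
        a, tau = ind[:N], ind[N:]
        target = (1j * w) ** alpha
        approx = (a[:, None] * np.exp(-1j * np.outer(tau, w))).sum(axis=0)
        return np.sum(np.abs(target - approx) ** 2)

    dim = 2 * N
    # random initialization, with no a priori restriction on the values
    pop = rng.uniform(-2.0, 2.0, size=(pop_size, dim))
    for _ in range(generations):
        pop = pop[np.argsort([fitness(ind) for ind in pop])]  # rank: best first
        elite = pop[: pop_size // 5].copy()                   # elitism: keep best 20%
        children = []
        while len(children) < pop_size - len(elite):
            i, j = rng.integers(0, pop_size // 2, size=2)     # select from best-ranking half
            u = rng.random(dim)
            child = u * pop[i] + (1 - u) * pop[j]             # arithmetic crossover
            child += rng.normal(0.0, 0.05, size=dim)          # Gaussian mutation
            children.append(child)
        pop = np.vstack([elite, children])                    # replace worst-ranked part
    costs = [fitness(ind) for ind in pop]
    k = int(np.argmin(costs))
    return pop[k], float(costs[k])
```

The arithmetic crossover and Gaussian mutation are one common choice for real-coded individuals; rank-based selection from the best half plays the role of the fitness-based selection described above.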
We observe that no restriction has been introduced a priori on the numerical values of the parameters that result from the optimization procedure. It is well known that one of the advantages of GAs over classical optimization techniques is precisely their ability to handle such situations easily. One technique is simply to substitute "not suitable" elements of the GA population by new, randomly generated ones. Furthermore, during the generation of the GA elements it is straightforward to impose restrictions. As mentioned previously, in a first phase no limitation is considered, in order to reveal more clearly the pattern that emerges freely from the time-delay algorithm. With the preliminary results at hand, in a second phase, several restrictions are considered and the optimization GA is executed again.
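The substitution of "not suitable" elements by randomly generated ones can be sketched as follows, here taking causality (non-negative delays) as the restriction imposed in the second phase; the bounds and the layout of the individuals are illustrative assumptions.

```python
import numpy as np

def repair_population(pop, N, rng, tau_max=2.0):
    """Replace individuals with non-causal (negative) delays by fresh random
    ones; each row stores N weights followed by N delays."""
    for i in range(len(pop)):
        if np.any(pop[i, N:] < 0.0):
            pop[i, :N] = rng.uniform(-2.0, 2.0, N)       # new random weights
            pop[i, N:] = rng.uniform(0.0, tau_max, N)    # new causal delays
    return pop
```

Individuals that already satisfy the restriction are left untouched, so the repair step does not disturb feasible candidates.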