

On q-BFGS algorithm for unconstrained optimization problems

Abstract

Variants of the Newton method are very popular for solving unconstrained optimization problems. The study of the global convergence of the BFGS method has also made good progress. The q-gradient reduces to its classical version when q approaches 1. In this paper, we propose a quantum Broyden–Fletcher–Goldfarb–Shanno (q-BFGS) algorithm in which the Hessian approximation is constructed using the q-gradient and a descent direction is found at each iteration. The algorithm is implemented by applying the independent parameter q in the Armijo–Wolfe conditions to compute a step length that guarantees a decrease in the objective function value. The global convergence is established without a convexity assumption on the objective function. Further, the proposed method is verified on numerical test problems, and the results are depicted through performance profiles.

1 Introduction

Several numerical methods have been developed extensively for solving unconstrained optimization problems. The gradient descent method is one of the simplest and most commonly used methods in the field of optimization [1]. This method is globally convergent, but it suffers from a slow convergence rate as the iterate approaches the minimizer. In order to improve the convergence rate, optimizers use the Newton method [1], which is one of the most popular methods due to its quadratic convergence. A major disadvantage of the Newton method is its slowness or non-convergence when the starting point is not taken close to an optimum; it also requires the inverse of the Hessian at every iteration, which is rather costly. The components of the Hessian matrix are constructed using the classical derivative, and the Hessian is required to be positive definite at every iteration. In quasi-Newton methods, instead of computing the actual Hessian, an approximation of the Hessian is used [1]. These methods use only first derivatives to build the approximation, so their computational cost is low.

The Broyden–Fletcher–Goldfarb–Shanno (BFGS) method is a quasi-Newton method for solving unconstrained nonlinear optimization problems that arose from the independent work of Broyden [2], Fletcher [3], Goldfarb [4], and Shanno [5]. Since the 1970s the BFGS method has become more and more popular, and today it is accepted as one of the best quasi-Newton methods. Over the years, many attempts have been made to improve the performance of quasi-Newton methods [6–18].

The global convergence of the BFGS method has been studied by several authors [5, 12, 19–21] under a convexity assumption on the objective function. An example was given in [22] showing that the standard BFGS method may fail for non-convex functions with an inexact line search [12]. A modified BFGS method was developed that converges globally without a convexity assumption on the objective function [23]. In that reference, Li and Fukushima were concerned with the open problem of whether the BFGS method with inexact line search converges globally when applied to non-convex unconstrained optimization problems; they proposed a cautious BFGS update and proved that the method with either a Wolfe-type or an Armijo-type line search converges globally if the function to be minimized has a Lipschitz continuous gradient. The q-calculus, also known as quantum calculus, has gained a lot of interest in various fields such as mathematics [24], physics [25], quantum theory [26], statistical mechanics [27] and signal processing [28], where the q-derivative is employed. The q-derivative is also known as the Jackson derivative, as the concept was first introduced by Jackson [29]; it was further studied in the case of q-difference equations by Carmichael [30], Mason [31], Adams [32] and Trjitzinsky [33]. The word quantum usually refers to the smallest discrete quantity of some physical property and comes from the Latin word “quantus”, which literally means “how many”. In mathematics, quantum calculus is referred to as calculus without limits, and it replaces the classical derivative by a difference operator.

A q-version of the steepest descent method was first developed in the field of optimization to solve single-objective nonlinear unconstrained problems; the method was able to escape many local minima and reach the global minimum [34]. The q-LMS (least mean squares) algorithm was proposed by employing the q-gradient to compute the secant of the cost function instead of the tangent [28]; the algorithm takes larger steps towards the optimum and achieves a higher convergence rate. An improved version of the q-LMS algorithm was developed based on a new class of stochastic q-gradient methods; the approach achieves a high convergence rate by utilizing the concept of error correlation energy and signal normalization [35]. Global optimization using the q-gradient was further studied in [36], where the parameter q is a dilation used to control the degree of localness of the search, and several multimodal functions were solved. Furthermore, a modified Newton method based on a deterministic scheme using the q-derivative was proposed [37, 38]. Recently, a MATHEMATICA package for q-series and partition theory applications has been developed [39].

A sequence \(\{q^{k}\}\) is introduced instead of a fixed positive number q in the Newton and limited memory BFGS schemes of [37, 40]. For sufficiently large k, the resulting q-Hessian becomes almost identical to the exact Hessian of the objective function. The concept of the q-gradient, in contrast to the classical gradient used in the q-least mean squares algorithm [28], provides extra freedom to control the performance of the algorithm, and we adopt it in our proposed method.

In this article, we propose a method using the q-derivative for solving unconstrained optimization problems. This algorithm differs from the classical BFGS algorithm in that the search process moves from global in the beginning to local at the end. We utilize an independent parameter \(q \in (0,1)\) in the Armijo–Wolfe conditions for finding the step length. The proposed algorithm with the Armijo–Wolfe line search is globally convergent for general objective functions. We then compare the new approach with an existing method.

This paper is organized as follows: In the next section, we give preliminary ideas about the q-calculus. In Sect. 3, we present the q-BFGS (quantum Broyden–Fletcher–Goldfarb–Shanno) method using q-calculus. In Sect. 4, the global convergence of the proposed algorithm is proved. In Sect. 5, we report some numerical experiments. Finally, we present a conclusion in the last section.

2 Preliminaries

In this section, we present some basic definitions of q-calculus. For a given value of q, the q-integer \([n]_{q}\) [41] is defined by

$$ [n]_{q} = \textstyle\begin{cases} \frac{1 - q^{n}}{ 1 - q}, & q \ne 1, \\ n, & q=1, \end{cases} $$

for \(n \in \mathbb{N}\). The q-derivative \(D_{q} [f]\) [42] of a function \(f : \mathbb{R} \to \mathbb{R}\) is given by

$$ D_{q}[f](x) = \frac{ f(x) - f(qx)}{( 1- q)x}, $$

where the scalar \(q\in (0, 1)\) and \(x\ne 0\); we set \(D_{q}[f](0) = f'(0)\), provided \(f'(0)\) exists. Note that

$$ \lim_{q \to 1} D_{q} [f](x) = \lim_{q \to 1} \frac{f(x)- f(qx)}{ (1-q) x} = \frac{{\mathrm{d}} f(x)}{ {\mathrm{d}} x}, $$

if f is differentiable. The q-derivative of a function of the form \(x^{n}\) is

$$ D_{q,x} \bigl[x^{n}\bigr] = \textstyle\begin{cases} \frac{1-q^{n}}{ 1-q}x^{n-1},& q \ne 1, \\ n x^{n-1},& q=1. \end{cases} $$
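To make these definitions concrete, the following short Python sketch (ours, not part of the paper) evaluates the defining quotient of the q-derivative and checks it against the closed form \(D_{q}[x^{n}] = [n]_{q} x^{n-1}\); the test function and the values of x and q are arbitrary choices for illustration.

```python
# Illustrative sketch of the scalar q-derivative; f, x and q are arbitrary choices.

def q_derivative(f, x, q):
    """D_q[f](x) = (f(x) - f(qx)) / ((1 - q) x), for x != 0 and 0 < q < 1."""
    return (f(x) - f(q * x)) / ((1.0 - q) * x)

def q_integer(n, q):
    """[n]_q = (1 - q**n) / (1 - q), for q != 1."""
    return (1.0 - q ** n) / (1.0 - q)

f = lambda t: t ** 3                      # test function t^n with n = 3
x, q = 2.0, 0.5

print(q_derivative(f, x, q))              # 7.0, from the defining quotient
print(q_integer(3, q) * x ** 2)           # 7.0, from D_q[x^3] = [3]_q x^2
print(q_derivative(f, x, 0.9999))         # ~11.999, close to the classical derivative 3x^2 = 12
```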

Let \(f(x)\) be a continuous function on \([a, b]\). Then there exist \(\hat{q} \in (0, 1)\) and \(x \in (a,b)\) [43] such that

$$ f(b) - f(a) = \bigl(D_{q} [f]\bigr) (x) (b-a), $$
(1)

for \(q \in (\hat{q}, 1) \cup (1, \hat{q}^{-1})\). The q-partial derivative of a function \(f : \mathbb{R}^{n} \to \mathbb{R}\) at \(x\in \mathbb{R}^{n}\) with respect to \(x_{i}\), where scalar \(q \in (0,1)\), is given as [34]

$$ D_{q, x_{i}} [f](x) = \textstyle\begin{cases} \frac{1}{ (1-q) x_{i}} [ f (x_{1}, x_{2}, \dots , x_{i-1}, x_{i}, x_{i+1}, \dots , x_{n} ) \\ \quad {}- f (x_{1}, x_{2}, \dots , x_{i-1}, q x_{i},x_{i+1}, \dots , x_{n} ) ], & x_{i}\ne 0, \\ \frac{\partial }{\partial x_{i}} f (x_{1}, x_{2}, \dots , x_{i-1}, 0, x_{i+1}, \dots , x_{n} ), & x_{i}=0, \\ \frac{\partial }{\partial x_{i}} f (x_{1}, x_{2}, \dots , x_{i-1}, x_{i}, x_{i+1}, \dots , x_{n} ), & q=1. \end{cases} $$

We now choose the parameter q as a vector, that is,

$$ q = (q_{1}, \dots , q_{i},\dots , q_{n})^{T} \in \mathbb{R}^{n}. $$

Then the q-gradient vector [34] of f is

$$ \nabla _{q} f(x)^{T} = \begin{bmatrix} D_{q_{1}, x_{1}} [f](x) & \cdots & D_{q_{i}, x_{i}} [f](x) & \cdots & D_{q_{n}, x_{n}} [f](x) \end{bmatrix} . $$
(2)

Let \(\{ q^{k}_{i} \}\) be a real sequence defined by

$$ q^{k+1}_{i} = 1- \frac{q^{k}_{i}}{(k+1)^{2}}, $$
(3)

for each \(i=1,\dots ,n\), where \(k \in \{0\}\cup \mathbb{N}\) and \(q^{0}_{i} \in (0, 1)\) is a fixed starting number. The sequence \(\{q^{k}\}\) converges to \((1,\dots ,1)^{T}\) as \(k \to \infty \) [38], and thus the q-gradient reduces to its classical version. For the sake of convenience, we denote the q-gradient vector of f at \(x^{k}\) by

$$ g_{q^{k}} \bigl(x^{k}\bigr)=\nabla _{q^{k}} f \bigl(x^{k}\bigr). $$
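As an illustration, here is a minimal Python sketch (not part of the paper) that generates the sequence defined by Eq. (3) from a starting vector with components in (0, 1); the starting value 0.32 matches the one used later in Example 2.

```python
import numpy as np

def q_sequence(q0, num_iters):
    """Generate q^0, ..., q^{num_iters} from Eq. (3): q^{k+1} = 1 - q^k / (k + 1)^2."""
    q = np.asarray(q0, dtype=float)
    seq = [q.copy()]
    for k in range(num_iters):
        q = 1.0 - q / (k + 1) ** 2
        seq.append(q.copy())
    return seq

seq = q_sequence([0.32, 0.32], 30)
print(seq[1])     # [0.68 0.68]
print(seq[30])    # approximately [0.9989 0.9989]; the q-gradient approaches the classical gradient
```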

Example 1

Consider the function \(f : \mathbb{R}^{2} \to \mathbb{R}\) given by \(f(x) = x_{1}^{2} x_{2} + x_{2}^{2}\). Then the q-gradient is given as

$$ \nabla _{q^{k}} f(x)^{T} = \begin{bmatrix} (1+q^{k}_{1}) x_{1} x_{2} & x_{1}^{2} + (1 + q^{k}_{2}) x_{2} \end{bmatrix} . $$
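The analytic q-gradient of Example 1 can be reproduced numerically from definition (2). The Python sketch below is ours, for illustration only; the point x = (2, 3) and the vector q = (0.5, 0.5) are arbitrary choices. It evaluates each q-partial derivative from its defining quotient and compares the result with the closed form above.

```python
import numpy as np

def q_gradient(f, x, q):
    """q-gradient of f at x, per Eq. (2); q is a vector with components in (0, 1).
    For x_i = 0 the classical partial derivative is used (here via a central difference)."""
    x = np.asarray(x, dtype=float)
    q = np.asarray(q, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        if x[i] != 0.0:
            xq = x.copy()
            xq[i] = q[i] * x[i]
            g[i] = (f(x) - f(xq)) / ((1.0 - q[i]) * x[i])
        else:
            h = 1e-8
            xp, xm = x.copy(), x.copy()
            xp[i] += h
            xm[i] -= h
            g[i] = (f(xp) - f(xm)) / (2.0 * h)
    return g

f = lambda x: x[0] ** 2 * x[1] + x[1] ** 2          # the function of Example 1
x = np.array([2.0, 3.0])
q = np.array([0.5, 0.5])

print(q_gradient(f, x, q))                           # [9.  8.5]
print([(1 + q[0]) * x[0] * x[1],                     # analytic first component:  (1 + q_1) x_1 x_2
       x[0] ** 2 + (1 + q[1]) * x[1]])               # analytic second component: x_1^2 + (1 + q_2) x_2
```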

We focus our attention on solving the following unconstrained optimization problem:

$$ \min_{x \in \mathbb{R}^{n}} f(x), $$
(4)

where \(f : \mathbb{R}^{n} \to \mathbb{R}\) is continuously q-differentiable. In the next section, we present the q-BFGS algorithm.

3 On q-BFGS algorithm

The q-BFGS method for solving the optimization problem (4) generates a sequence \(\{x^{k}\}\) by the following iterative scheme:

$$ x^{k+1} = x^{k} + \alpha _{k} d_{q^{k}}^{k},$$
(5)

for \(k\in \{0\}\cup \mathbb{N}\), where \(\alpha _{k}\) is the step length and \(d_{q^{k}}^{k}\) is the q-BFGS descent direction obtained by solving the following equation:

$$ g_{q^{0}}^{0} = -W^{0} d_{q^{0}}^{0}, $$

and, for \(k\ge 1\), we have

$$ g_{q^{k}}^{k} = - W^{k} d_{q^{k}}^{k}, $$
(6)

where \(W^{k}\) is the q-quasi-Newton Hessian approximation. The sequence \(\{W^{k}\}\) satisfies the following secant equation:

$$ W^{k+1} s^{k}=y^{k}, $$

where \(y^{k}=g_{q^{k}}^{k+1} - g_{q^{k}}^{k}\). We refer to the famous BFGS (Broyden [2], Fletcher [3], Goldfarb [4], Shanno [5]) update formula in the context of q-calculus as q-BFGS. Thus, the Hessian approximation \(W^{k}\) is updated by the q-BFGS formula:

$$ W^{k+1} = W^{k} - \frac{W^{k} s^{k} (s^{k})^{T} W^{k} }{ (s^{k})^{T} W^{k} s^{k} } + \frac{y^{k}( y^{k})^{T} }{ (y^{k})^{T} s^{k}}, $$
(7)

where \(s^{k} = x^{k+1}- x^{k}\). A good property of Eq. (7) is that \(W^{k+1}\) inherits the positive definiteness of \(W^{k}\) as long as \((y^{k})^{T} s^{k} >0\), just as in the classical BFGS update. The condition \((y^{k})^{T} s^{k} >0\) is guaranteed to hold if the step length \(\alpha _{k}\) is determined by an exact line search or a Wolfe-type inexact line search. For computing the step length, the modified Armijo–Wolfe line search conditions based on the q-gradient are presented as

$$ f \bigl( x^{k} + \alpha _{k} d_{q^{k}}^{k} \bigr) \le f\bigl(x^{k}\bigr) + \sigma _{1} \alpha _{k} \bigl( d_{q^{k}}^{k} \bigr)^{T} g_{q^{k}}^{k} $$
(8)

and

$$ \nabla _{q} f \bigl( x^{k} + \alpha _{k} d_{q^{k}}^{k} \bigr)^{T} d_{q^{k}}^{k} \ge \sigma _{2} \bigl( d_{q^{k}}^{k} \bigr)^{T} g_{q^{k}}^{k}, $$
(9)

where \(0<\sigma _{1}<\sigma _{2}<1\). The first condition (8) is called the Armijo condition; it ensures a sufficient reduction of the objective function. The second condition (9) is called the curvature condition; it rules out unacceptably short step lengths.
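As an illustrative aid, the following Python sketch implements a simple backtracking strategy for conditions (8)–(9); it is not the authors' routine, and the parameter values (σ1, σ2, ρ) are placeholder assumptions. The Armijo condition (8) is enforced by backtracking, while the curvature condition (9) is only checked and reported in this simplified version.

```python
def armijo_wolfe_step(f, q_grad, x, d, g,
                      sigma1=1e-4, sigma2=0.9, rho=0.5, max_backtracks=30):
    """Simple backtracking sketch for the modified Armijo-Wolfe conditions (8)-(9).

    f       : objective function
    q_grad  : callable returning the q-gradient at a point (for the current q^k)
    x, d, g : current iterate, search direction, q-gradient at x (numpy arrays)
    Returns a step length alpha satisfying the Armijo condition (8), together with a
    flag telling whether the curvature condition (9) also holds at that alpha.
    """
    gTd = float(g @ d)                    # directional derivative; negative for a descent direction
    alpha = 1.0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= f(x) + sigma1 * alpha * gTd:   # condition (8)
            break
        alpha *= rho                                          # backtrack
    curvature_ok = float(q_grad(x + alpha * d) @ d) >= sigma2 * gTd   # condition (9)
    return alpha, curvature_ok
```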

An Armijo-type line search alone does not ensure the condition \((y^{k})^{T} s^{k} >0\), and hence \(W^{k+1}\) may not be positive definite even if \(W^{k}\) is positive definite. In order to ensure the positive definiteness of \(W^{k+1}\), the condition \((y^{k})^{T} s^{k} >0\) is sometimes used to decide whether or not \(W^{k}\) is updated. More specifically, we use the following cautious update due to [23]:

$$ W^{k+1} = \textstyle\begin{cases} W^{k} - \frac{W^{k} s^{k}(s^{k} )^{T} W^{k}}{ (s^{k})^{T} W^{k} s^{k} } + \frac{y^{k} (y^{k} )^{T} }{ (y^{k} )^{T} s^{k}}, & \frac{(y^{k})^{T} s^{k}}{\lVert s^{k} \rVert ^{2}} >\epsilon \lVert g_{q^{k}}^{k} \rVert ^{\beta }, \\ W^{k}, & \mbox{otherwise}, \end{cases} $$
(10)

where ϵ and β are positive constants.
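A minimal Python sketch of the cautious update (10) is given below; it is our illustration, not the authors' code, and the default values of ϵ and β are placeholders.

```python
import numpy as np

def cautious_q_bfgs_update(W, s, y, g_norm, eps=1e-6, beta=1.0):
    """Cautious q-BFGS update of Eq. (10): apply the BFGS formula (7) only when
    (y^T s) / ||s||^2 > eps * ||g||^beta, otherwise return W unchanged."""
    sy = float(y @ s)
    if sy / float(s @ s) > eps * g_norm ** beta:
        Ws = W @ s
        return W - np.outer(Ws, Ws) / float(s @ Ws) + np.outer(y, y) / sy
    return W
```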

It is not difficult to see from (10) that the updated matrix \(W^{k}\) is symmetric and positive definite for all k, which in turn implies that \(\{ f(x^{k}) \}\) is a non-increasing sequence when the modified Armijo–Wolfe line search conditions (8) and (9) are used. On the basis of the above theory, we present the q-BFGS method as Algorithm 1. In the next section, we investigate the global convergence of Algorithm 1.

Algorithm 1

q-BFGS algorithm (given as a figure in the original article)
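Since Algorithm 1 is given only as a figure, the following Python sketch (the authors' experiments use MATLAB) reconstructs its main loop from the description in the text, reusing the q_gradient, armijo_wolfe_step and cautious_q_bfgs_update sketches above. The choices \(W^{0}=I_{n}\), the stopping rule \(\lVert g_{q}(x^{k})\rVert \le 10^{-6}\) and the 400-iteration cap follow Sect. 5; the single q-gradient evaluation per iteration and all parameter defaults are simplifying assumptions, not the authors' exact implementation.

```python
import numpy as np

def q_bfgs(f, x0, q0=0.5, tol=1e-6, max_iters=400):
    """Sketch of the q-BFGS main loop assembled from Eqs. (3), (5), (6) and (8)-(10)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    q = np.full(n, q0)                         # q^0 with components in (0, 1)
    W = np.eye(n)                              # initial Hessian approximation W^0 = I_n
    g = q_gradient(f, x, q)
    for k in range(max_iters):
        if np.linalg.norm(g) <= tol:
            break
        d = np.linalg.solve(W, -g)             # Eq. (6): W^k d^k = -g_{q^k}^k
        alpha, _ = armijo_wolfe_step(f, lambda z: q_gradient(f, z, q), x, d, g)
        x_new = x + alpha * d                  # Eq. (5)
        q = 1.0 - q / (k + 1) ** 2             # Eq. (3): q^k -> (1, ..., 1)^T
        g_new = q_gradient(f, x_new, q)        # simplification: one q-gradient per iteration,
        s, y = x_new - x, g_new - g            # reused both in y^k and for the next direction
        W = cautious_q_bfgs_update(W, s, y, np.linalg.norm(g))   # Eq. (10)
        x, g = x_new, g_new
    return x, f(x)

# Usage on the Rosenbrock function of Example 4, with the starting point from the text.
rosen = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
x_star, f_star = q_bfgs(rosen, [4.0, -4.0])
print(x_star, f_star)
```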

4 Global convergence

In this section, we present the global convergence of Algorithm 1 under the following two assumptions.

Assumption 1

The objective function \(f(x)\) has a lower bound on the level set

$$ \Omega = \bigl\{ x\in \mathbb{R}^{n} \mid f(x) \le f \bigl( x^{0} \bigr) \bigr\} , $$

where \(x^{0}\) is the starting point of Algorithm 1.

Assumption 2

Let the function f be continuously q-differentiable on Ω, and suppose that there exists a constant \(L>0\) such that \(\lVert g_{q^{k}}(x) - g_{q^{k}}(y) \rVert \le L \lVert x-y \rVert \) for each \(x,y\in \Omega \).

Since \(\{ f(x^{k}) \}\) is a non-increasing sequence, it is clear that the sequence \(\{ x^{k} \}\) generated by Algorithm 1 is contained in Ω. We present the index set as

$$ K := \biggl\{ j : \frac{ (y^{j})^{T} s^{j}}{ \lVert s^{j} \rVert ^{2}} > \epsilon \bigl\lVert g_{q^{j}}^{j} \bigr\rVert ^{\beta } \biggr\} . $$

We can again express (10) as

$$ W^{k+1} = \textstyle\begin{cases} W^{k} - \frac{W^{k} s^{k} (s^{k})^{T} W^{k} }{( s^{k} )^{T} W^{k} s^{k} } + \frac{y^{k}(y^{k} )^{T} }{(y^{k} )^{T} s^{k} },& k \in K, \\ W^{k}, & \mbox{otherwise}. \end{cases}$$
(11)

The following lemma is used to prove the global convergence of Algorithm 1 within the context of q-calculus.

Lemma 3

Let f satisfy Assumption 1 and Assumption 2, and let \(\{x^{k}\}\) be generated by Algorithm 1 with \(q^{k}_{i} \in (0 , 1)\), where \(i=1,\dots ,n\). If there are positive constants \(\gamma _{1}\) and \(\gamma _{2}\) such that the inequalities

  1. \(\lVert W^{k} s^{k} \rVert \le \gamma _{1} \lVert s^{k} \rVert \),

  2. \((s^{k} )^{T} W^{k} s^{k} \ge \gamma _{2} \lVert s^{k} \rVert ^{2}\),

hold for infinitely many k, then we have

$$ \liminf_{k\to \infty } \bigl\lVert g_{q^{k}}\bigl(x^{k} \bigr) \bigr\rVert =0. $$
(12)

Proof

Since \(s^{k} = \alpha _{k} d_{q^{k}}^{k}\), using Part 1 of this lemma, and (6), we have

$$ \bigl\lVert g_{q^{k}}^{k} \bigr\rVert \le \gamma _{1} \bigl\lVert d_{q^{k}}^{k} \bigr\rVert $$
(13)

and

$$ \gamma _{2} \bigl\lVert d_{q^{k}}^{k} \bigr\rVert \le \bigl\lVert g_{q^{k}}^{k} \bigr\rVert . $$
(14)

From (13) and (14), we get

$$ \gamma _{1} \bigl\lVert d_{q^{k}}^{k} \bigr\rVert \ge \bigl\lVert g_{q^{k}}\bigl(x^{k} \bigr) \bigr\rVert \ge \gamma _{2} \bigl\lVert d_{q^{k}}^{k} \bigr\rVert . $$
(15)

Substituting \(s^{k} = \alpha _{k} d_{q^{k}}^{k}\) in Part 2, we get

$$ \bigl( d_{q^{k}}^{k} \bigr)^{T} W^{k} d_{q^{k}}^{k} \ge \gamma _{2} \bigl\lVert d_{q^{k}}^{k} \bigr\rVert ^{2}. $$
(16)

We first consider the case where the Armijo-type line search (8) is used with backtracking parameter ρ. If \(\alpha _{k} \ne 1\), then the previous trial step \(\rho ^{-1}\alpha _{k}\) was rejected, so we have

$$ \sigma _{1} \rho ^{-1} \alpha _{k} g_{q^{k}} \bigl( x^{k} \bigr)^{T} d_{q^{k}}^{k} < f \bigl( x^{k} + \rho ^{-1} \alpha _{k} d_{q^{k}}^{k} \bigr) - f\bigl(x^{k}\bigr). $$
(17)

From the q-mean value theorem, there is a \(\theta _{k} \in (0,1)\) such that

$$ f \bigl( x^{k} + \rho ^{-1} \alpha _{k} d_{q^{k}}^{k} \bigr) - f\bigl( x^{k}\bigr) = \rho ^{-1} \alpha _{k} g_{q^{k}} \bigl( x^{k} + \theta _{k} \rho ^{-1} \alpha _{k} d_{q^{k}}^{k} \bigr)^{T} d_{q^{k}}^{k}, $$

that is,

$$ \begin{aligned} f \bigl( x^{k} + \rho ^{-1} \alpha _{k} d_{q^{k}}^{k} \bigr) - f \bigl(x^{k}\bigr) ={}& \rho ^{-1} \alpha _{k}g_{q^{k}} \bigl(x^{k}\bigr)^{T} d_{q^{k}}^{k} \\ &{} + \rho ^{-1}\alpha _{k} \bigl(g_{q^{k}} \bigl( x^{k} + \theta _{k} \rho ^{-1} \alpha _{k} d_{q^{k}}^{k} \bigr) - g_{q^{k}} \bigl(x^{k}\bigr) \bigr)^{T} d_{q^{k}}^{k}. \end{aligned} $$

From Assumption 2, we get

$$ f \bigl( x^{k} + \rho ^{-1} \alpha _{k} d_{q^{k}}^{k} \bigr) - f \bigl(x^{k} \bigr) \le \rho ^{-1} \alpha _{k} g_{q^{k}} \bigl(x^{k} \bigr)^{T} d_{q^{k}}^{k} + L \rho ^{-2}\alpha _{k}^{2} \bigl\lVert d_{q_{k}}^{k} \bigr\rVert ^{2}. $$
(18)

From (17) and (18), we get for any \(k \in K\)

$$ \alpha _{k} \ge \frac{ - (1 - \sigma _{1})\rho g_{q^{k}} (x^{k})^{T} d_{q^{k}}^{k}}{ L \lVert d_{q^{k}}^{k} \rVert ^{2}}. $$

Since \(-g_{q^{k}}(x^{k})= W^{k} d_{q^{k}}^{k}\),

$$ \alpha _{k} \ge \frac{ (1-\sigma _{1}) \rho ( d_{q^{k}}^{k} )^{T} W^{k} d_{q^{k}}^{k} }{ L \lVert d_{q^{k}}^{k} \rVert ^{2}}. $$

Using (16) in the above inequality we get

$$ \alpha _{k} \ge \min \bigl\{ 1, (1-\sigma _{1}) \gamma _{2} L^{-1} \rho \bigr\} > 0. $$
(19)

We now consider the case where the Wolfe-type line search (9) is used. From (9) and Assumption 2, we get

$$ (\sigma _{2}-1) g_{q^{k}} \bigl(x^{k} \bigr)^{T} d_{q^{k}}^{k} \le \bigl( g_{q^{k}} \bigl( x^{k} + \alpha _{k} d_{q^{k}}^{k} \bigr) - g_{q^{k}} \bigl( x^{k}\bigr) \bigr)^{T} d_{q^{k}}^{k} \le L \alpha _{k} \bigl\lVert d_{q^{k}}^{k} \bigr\rVert ^{2}. $$

This implies that

$$ \alpha _{k} \ge \frac{ ( \sigma _{2} - 1) g_{q^{k}} (x^{k} )^{T} d_{q^{k}}^{k} }{ L \lVert d_{q^{k}}^{k} \rVert ^{2}}. $$

Since \(-g_{q^{k}}^{k} = W^{k}d_{q^{k}}^{k}\),

$$ \alpha _{k} \ge \frac{(1 - \sigma _{2})( d_{q^{k}}^{k} )^{T} W^{k} d_{q^{k}}^{k}}{ L \lVert d_{q^{k}}^{k} \rVert ^{2}}. $$

Since \(( d_{q^{k}}^{k} )^{T} W^{k} d_{q^{k}}^{k} \ge \gamma _{2} \lVert d_{q^{k}}^{k} \rVert ^{2}\) by (16),

$$ \alpha _{k}\ge \min \bigl\{ 1, (1 - \sigma _{2}) \gamma _{2} L^{-1} \rho \bigr\} >0. $$
(20)

The inequalities (19) together with (20) show that \(\{ \alpha _{k}\}_{ k\in K}\) is bounded below away from zero when we use the Armijo–Wolfe line search conditions. Moreover,

$$ \begin{aligned} \sum_{k=0}^{\infty } \bigl[ f\bigl(x^{k}\bigr) - f\bigl(x^{k+1} \bigr) \bigr] & = \lim_{j \to \infty } \sum_{k=0}^{j} \bigl[f\bigl(x^{k}\bigr) - f\bigl(x^{k+1}\bigr) \bigr] \\ & = f\bigl(x^{0}\bigr) - \lim_{ j \to \infty } f \bigl(x^{j+1}\bigr), \end{aligned} $$

that is,

$$ \sum_{k=0}^{ \infty } \bigl[ f \bigl(x^{k}\bigr) - f\bigl(x^{k+1}\bigr) \bigr] = f \bigl(x^{0}\bigr)-f\bigl(x^{*}\bigr), $$

where \(f(x^{*})\) denotes the limit of the non-increasing sequence \(\{f(x^{k})\}\), which exists by Assumption 1. This gives

$$ \sum_{k=0}^{\infty } \bigl[ f \bigl(x^{k}\bigr) - f\bigl(x^{k+1}\bigr) \bigr] < \infty . $$

This together with (8) gives

$$ -\sum_{k=0}^{\infty } \alpha _{k}\bigl(g_{q^{k}}^{k}\bigr)^{T} d_{q^{k}}^{k} < \infty . $$

Since \(g_{q^{k}}^{k} = -W^{k} d_{q^{k}}^{k}\),

$$ \lim_{k\to \infty } \bigl( d_{q^{k}}^{k} \bigr)^{T} W^{k} d_{q^{k}}^{k} = - \lim_{k \to \infty } g_{q^{k}} \bigl(x^{k} \bigr)^{T} d_{q^{k}}^{k} = 0. $$

This together with (15) and (16) implies (12). □

Lemma 3 indicates that to prove the global convergence of Algorithm 1, it suffices to show that there are positive constants \(\gamma _{1}\) and \(\gamma _{2}\) such that Part 1 and Part 2 hold for infinitely many k. For this purpose, we require the following lemma which may be proved in the light of [20, Theorem 2.1].

Lemma 4

If there are positive constants M and m such that, for each \(k\ge 0\),

$$ \frac{\lVert y^{k} \rVert ^{2}}{(s^{k})^{T} y^{k}}\le M, \qquad \frac{(s^{k})^{T} y^{k}}{\lVert s^{k}\rVert ^{2}}\ge m, $$
(21)

then there exist constants \(\gamma _{1}\) and \(\gamma _{2}\) such that, for any positive integer t, Part 1 and Part 2 of Lemma 3 hold for at least \(\lceil \frac{t}{2}\rceil \) values of \(k\in \{1,\dots ,t\}\).

From Lemma 3 and Lemma 4, we now prove the global convergence for Algorithm 1.

Theorem 5

Let f satisfy Assumption 1 and Assumption 2, and let \(\{x^{k}\}\) be generated by Algorithm 1. Then Eq. (12) holds.

Proof

If K is finite, then \(W^{k}\) remains constant after a finite number of iterations. Since each \(W^{k}\) is symmetric and positive definite, there are constants \(\gamma _{1}\) and \(\gamma _{2}\) such that Part 1 and Part 2 of Lemma 3 hold for all sufficiently large k, and the conclusion follows from Lemma 3. If K is infinite, suppose, for the sake of contradiction, that (12) is not true. Then there exists a positive constant δ such that, for all k,

$$ \bigl\lVert g_{q^{k}}^{k}\bigr\rVert > \delta . $$
(22)

Since, by the definition of K and (22), \((y^{k})^{T}s^{k} > \epsilon \delta ^{\beta }\lVert s^{k} \rVert ^{2}\) for each \(k\in K\),

$$ \frac{1}{(y^{k})^{T}s^{k}}\le \frac{1}{\epsilon \delta ^{\beta }\lVert s^{k}\rVert ^{2}}. $$

From Assumption 2, we know that \(\lVert y^{k} \rVert ^{2} \le L^{2}\lVert s^{k} \rVert ^{2}\). Thus, we get

$$ \frac{\lVert y^{k}\rVert ^{2}}{(y^{k})^{T}s^{k}}\le \frac{\lVert y^{k} \rVert ^{2}}{\epsilon \delta ^{\beta }\lVert s^{k}\rVert ^{2}}. $$

Combining the last two inequalities, we get

$$ \frac{\lVert y^{k} \rVert ^{2}}{(y^{k})^{T}s^{k}} \le M, $$

for each \(k\in K\), where \(M =\frac{L^{2}}{ \epsilon \delta ^{\beta }}\). Moreover, by the definition of K and (22), \(\frac{(s^{k})^{T} y^{k}}{\lVert s^{k}\rVert ^{2}} > \epsilon \delta ^{\beta }\) for each \(k\in K\), so both bounds in (21) hold along K with \(m=\epsilon \delta ^{\beta }\). Applying Lemma 4 to the matrix subsequence \(\{ W^{k}\}_{k\in K}\), we conclude that Part 1 and Part 2 of Lemma 3 hold for infinitely many k, and Lemma 3 then yields (12), which contradicts (22). Hence there exists a subsequence of \(\{ x^{k} \}\) converging to a q-critical point of (4). As \(k \to \infty \), since \(q^{k}\) approaches \((1,1,\ldots , 1)^{T}\), a q-critical point eventually approximates a critical point. If the objective function f is convex, then every local minimum point is a global minimum point; since the sequence \(\{f(x^{k})\}\) converges, every accumulation point of \(\{x^{k}\}\) is then a global optimal solution of (4). This completes the proof. □

The above theorem establishes the global convergence of the q-BFGS algorithm without a convexity assumption on the objective function.

5 Numerical experiments

This section reports some numerical experiments with Algorithm 1. We tested it on test problems taken from [44]. Our numerical experiments were performed on a laptop with an Intel(R) Core(TM) i3-4005U CPU (1.70 GHz) and 4 GB RAM, using MATLAB (2017a).

We used the condition

$$ \bigl\lVert g_{q}\bigl(x^{k}\bigr) \bigr\rVert \le 10^{-6}, $$

as the stopping criterion. The program also stops if the total number of iterations exceeds 400. For each problem we choose the initial matrix \(W^{0}=I_{n}\), where \(I_{n}\) is the identity matrix. First, we compute the q-gradient of the following problem when the parameter q is not fixed.

Example 2

Consider a function \(f : \mathbb{R}^{2} \to \mathbb{R}\) such that

$$ f(x) = e^{x_{1}} + \log x_{2}. $$

We compute the q-gradient vector at \(x=(2, 3)^{T}\) and \(x=(-4, 5)^{T}\), using the sequence \(\{q^{k}\}\) given by (3) with the initial vector

$$ q^{0} =(0.32, 0.32)^{T}. $$

Tables 1 and 2 show the computed values of \(f(x)\), \(f(q^{k}x)\) and the q-gradient, where \(g_{q^{k}}(1)\) and \(g_{q^{k}}(2)\) denote the first and second components of the q-gradient. We obtain the q-gradient as

$$ g_{q^{30}}(x) = \begin{bmatrix} 7.3811 \\ 0.3335 \end{bmatrix} ,\qquad g_{q^{30}}(x) = \begin{bmatrix} 0.018355 \\ 0.200108 \end{bmatrix} , $$

for \(x=(2, 3)^{T}\) and \(x=(-4, 5)^{T}\), respectively, where

$$ q^{30} = (0.9989,0.9989), $$

at \(k= 30\). From Fig. 1, one can observe that \(q_{1}^{k}\) and \(q_{2}^{k} \in (0,1)\) for \(k=1,\dots ,30\). As \(q^{k}\to (1,1)\), the q-gradient reduces to the classical gradient. For this case, we have

$$ g(x)= \begin{bmatrix} 7.3891 \\ 0.3333 \end{bmatrix} ,\qquad g(x)= \begin{bmatrix} 0.018315 \\ 0.200000 \end{bmatrix} , $$

for \(x=(2, 3)^{T}\) and \(x=(-4, 5)^{T}\), respectively.

Figure 1: Graphical representation of the q-gradient values in Tables 1 and 2

Table 1 Computational details of Example 2 for sequence \(\{q^{k}\}\)
Table 2 Computational details of Example 2 for sequence \(\{q^{k}\}\)

Example 3

Consider an unconstrained objective function [45] \(f : \mathbb{R} \to \mathbb{R}\) such that

$$ f(x) = -xe^{-x}. $$

This function has the unique minimizer \(x^{*} = 1\). We run Algorithm 1 with a starting point \(x^{0}=9.0\) and get the minimum function value

$$ f\bigl(x^{*}\bigr) = -0.367875, $$

at the minimizer \(x^{*} = 1.0\) in 7 iterations, as can be seen in Fig. 2. With the different starting points 15, 17 and 19, the algorithm converges to the solution points 1.00, 0.9999 and 0.9998 in 7, 4 and 5 iterations, respectively.

Figure 2: Iteration points of Example 3

Example 4

Consider an unconstrained optimization function \(f : \mathbb{R}^{2} \to \mathbb{R}\) such that

$$ f (x_{1}, x_{2} ) = 100 \bigl(x_{2} - x_{1}^{2} \bigr)^{2} + (1-x_{1} )^{2}. $$

The Rosenbrock function is a non-convex function introduced by Rosenbrock in 1960. We consider this function to measure the performance of Algorithm 1. In this case, the starting point

$$ x^{0}=(4, -4)^{T}, $$

is taken. The algorithm converges in 20 iterations to the minimum function value

$$ f\bigl(x^{*}\bigr) = 0.0039936, $$

at minimizer

$$ x^{*}= (0.9822, 0.9587)^{T}, $$

while, when we run the algorithm used in the methodology of [23], it converges in 29 iterations to the minimum function value

$$ f\bigl(x^{*}\bigr) = 1.1145 \times 10^{-15}, $$

at minimizer

$$ x^{*} = (0.9998, 0.9996)^{T}, $$

as shown in Fig. 3. Figure 3(a) shows that our proposed method takes larger steps towards convergence due to the q-gradient.

Figure 3: Comparison of Algorithm 1 with [23] on Example 4

With different starting points, we compare our algorithm with that of [23] on the Rosenbrock function. The numerical results are shown in Tables 3 and 4, where the columns ‘it’, ‘fe’ and ‘ge’ indicate the total number of iterations, function evaluations and gradient evaluations, respectively. Note that the total number of q-gradient evaluations for q-BFGS and the total number of gradient evaluations for BFGS use the same notation.

Table 3 Numerical results of Example 4
Table 4 Numerical results of Example 4

Dolan and Moré [46] presented an appropriate technique to demonstrate performance profiles, which is a statistical tool for comparing solvers. The performance ratio is defined as

$$ \rho _{p,s}=\frac{r_{(p,s )}}{\min \{r_{(p,s)} : 1\le s\le n_{s}\}}, $$
(23)

where \(r_{(p,s)}\) refers to the number of iterations, function evaluations or (q-)gradient evaluations required by solver s on problem p, \(n_{s}\) is the number of solvers, and \(n_{p}\) is the number of problems in the test set. The cumulative distribution function is expressed as

$$ P_{s}( \tau ) = \frac{1}{n_{p}} \operatorname{size} \{ p : \rho _{(p,s)} \le \tau \} , $$
(24)

where \(P_{s}(\tau )\) is the probability that the performance ratio \(\rho _{(p,s)}\) is within a factor τ of the best possible ratio. That is, for each method being analyzed, we plot the fraction \(P_{s}(\tau )\) of problems for which that method is within a factor τ of the best. We use this tool to show the performance of Algorithm 1. Figure 4(a), Fig. 4(b) and Fig. 4(c) show that the q-BFGS method solves about 82%, 59% and 89% of the Rosenbrock test problems with the least number of iterations, function evaluations and gradient evaluations, respectively.
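For reference, here is a small Python sketch (ours, not part of the paper) that computes the profile data of Eqs. (23)–(24) from a matrix of solver costs; the cost matrix in the usage example is hypothetical, and plotting is left out.

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More performance profiles, cf. Eqs. (23)-(24).
    costs : array of shape (n_p, n_s); costs[p, s] is e.g. the iteration count of
            solver s on problem p (use np.inf where a solver fails).
    taus  : 1-D array of tau values.
    Returns P with P[j, s] = fraction of problems on which solver s is within a
    factor taus[j] of the best solver."""
    costs = np.asarray(costs, dtype=float)
    n_p, _ = costs.shape
    ratios = costs / costs.min(axis=1, keepdims=True)     # rho_{p,s}
    return np.array([(ratios <= tau).sum(axis=0) / n_p for tau in taus])

# Hypothetical iteration counts: 3 problems (rows) x 2 solvers (columns).
costs = [[10, 12],
         [20, 15],
         [30, 60]]
print(performance_profile(costs, np.array([1.0, 1.5, 2.0])))
# Row j gives the fractions for tau = 1, 1.5, 2; e.g. at tau = 1 solver 0 is best
# on 2 of the 3 problems and solver 1 on the remaining one.
```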

Figure 4: Performance profiles based on the number of iterations, the number of function evaluations and the number of gradient evaluations given in Tables 3 and 4

Example 5

Consider an unconstrained optimization problem \(f : \mathbb{R}^{2} \to \mathbb{R}\) such that

$$ f (x_{1}, x_{2} ) = 2 + (x_{1} - 2)^{2} + (x_{2} -2)^{2}. $$

With the starting point \(x^{0}=(0.5 , 0.5)^{T}\), the algorithm converges to \(x^{*}=(2 , 2)^{T}\), as shown in Fig. 5(a). The global minimum of this function can also be observed in Fig. 5(b).

Figure 5: Visualization of Example 5

Example 6

Consider the following Rastrigin function:

$$ f(x) = 20 + x_{1}^{2} + x_{2}^{2} - 10 (\cos 2\pi x_{1} + \cos 2 \pi x_{2}). $$

The Rastrigin function is non-convex. Its visualization over the region from −1 to 1, which contains many local minima, is shown in Fig. 6(a), and the global optimum at \((0 , 0)^{T}\) together with the successive iterative points is shown in Fig. 6(b). The numerical results for this function are shown in Table 5.

Figure 6: Visualization of Example 6

Table 5 Numerical results of Example 6

We have used the 19 test problems shown in Table 6, whose columns give the problem number, the problem’s name and the starting point, respectively. Table 7 shows the computational results of the q-BFGS and BFGS [23] methods on small-scale test problems. Figure 7(a), Fig. 7(b) and Fig. 7(c) show that the q-BFGS method solves about 95%, 79% and 90% of the test problems with the least number of iterations, function evaluations and gradient evaluations, respectively. Therefore, we conclude that q-BFGS performs better than the BFGS method of [23], requiring fewer iterations, function evaluations and gradient evaluations.

Figure 7: Performance profiles based on the number of iterations, the number of function evaluations and the number of gradient evaluations given in Table 7

Table 6 Set of Test Problems
Table 7 Comparison details with [23]

6 Conclusion

We have proposed a q-BFGS update and shown that the method converges globally under the Armijo–Wolfe line search conditions. In the limiting case, the proposed method behaves like the classical BFGS method; the existence of second-order partial derivatives at every point is not required, and first-order q-differentiability of the function is sufficient to prove the global convergence of the proposed method. The q-gradient enables the q-BFGS quasi-Newton search process to explore a more diverse set of directions and to take larger steps towards convergence. The reported numerical results show that the proposed method is efficient in comparison with the existing method for solving unconstrained optimization problems. However, other modified BFGS methods using the q-derivative are yet to be studied.

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Mishra, S.K., Ram, B.: Introduction to Unconstrained Optimization with R. Springer, Singapore (2019). https://doi.org/10.1007/978-981-15-0894-3


  2. Broyden, C.G.: The convergence of a class of double-rank minimization algorithms. IMA J. Appl. Math. 6(1), 76–90 (1970). https://doi.org/10.1093/imamat/6.1.76


  3. Fletcher, R.: A new approach to variable metric algorithms. Comput. J. 13(3), 317–322 (1970). https://doi.org/10.1093/comjnl/13.3.317

  4. Goldfarb, D.: A family of variable metric methods derived by variational means. Math. Comput. 24(109), 23–26 (1970). https://doi.org/10.1090/S0025-5718-1970-0258249-6

  5. Shanno, D.F.: Conditioning of quasi-Newton methods for function minimization. Math. Comput. 24(111), 647–650 (1970). https://doi.org/10.1090/S0025-5718-1970-0274029-X

  6. Salim, M.S., Ahmed, A.R.: A quasi-Newton augmented Lagrangian algorithm for constrained optimization problems. J. Intell. Fuzzy Syst. 35(2), 2373–2382 (2018). https://doi.org/10.3233/JIFS-17899


  7. Hedayati, V., Samei, M.E.: Positive solutions of fractional differential equation with two pieces in chain interval and simultaneous Dirichlet boundary conditions. Bound. Value Probl. 2019, 141 (2019). https://doi.org/10.1186/s13661-019-1251-8


  8. Dixon, L.C.W.: Variable metric algorithms: necessary and sufficient conditions for identical behavior of nonquadratic functions. J. Optim. Theory Appl. 10, 34–40 (1972). https://doi.org/10.1007/BF00934961


  9. Samei, M.E., Yang, W.: Existence of solutions for k-dimensional system of multi-term fractional q-integro-differential equations under anti-periodic boundary conditions via quantum calculus. Math. Methods Appl. Sci. 43(7), 4360–4382 (2020). https://doi.org/10.1002/mma.6198


  10. Powell, M.J.D.: On the convergence of the variable metric algorithm. IMA J. Appl. Math. 7(1), 21–36 (1971). https://doi.org/10.1093/imamat/7.1.21


  11. Ahmadi, A., Samei, M.E.: On existence and uniqueness of solutions for a class of coupled system of three term fractional q-differential equations. J. Adv. Math. Stud. 13(1), 69–80 (2020)


  12. Dai, Y.H.: Convergence properties of the BFGS algorithm. SIAM J. Optim. 13(3), 693–701 (2002). https://doi.org/10.1137/S1052623401383455

  13. Samei, M.E., Hedayati, V., Rezapour, S.: Existence results for a fraction hybrid differential inclusion with Caputo–Hadamard type fractional derivative. Adv. Differ. Equ. 2019, 163 (2019). https://doi.org/10.1186/s13662-019-2090-8


  14. Samei, M.E., Hedayati, V., Ranjbar, G.K.: The existence of solution for k-dimensional system of Langevin Hadamard-type fractional differential inclusions with 2k different fractional orders. Mediterr. J. Math. 17, 37 (2020). https://doi.org/10.1007/s00009-019-1471-2


  15. Aydogan, M., Baleanu, D., Aguilar, J.F.G., Rezapour, S., Samei, M.E.: Approximate endpoint solutions for a class of fractional q-differential inclusions by computational results. Fractals 28, 2040029 (2020). https://doi.org/10.1142/S0218348X20400290

  16. Baleanu, D., Darzi, R., Agheli, B.: Fractional hybrid initial value problem featuring q-derivatives. Acta Math. Univ. Comen. 88, 229–238 (2019)


  17. Baleanu, D., Shiri, B.: Collocation methods for fractional differential equations involving non-singular kernel. Chaos Solitons Fractals 116, 136–145 (2018). https://doi.org/10.1016/j.chaos.2018.09.020


  18. Shiri, B., Baleanu, D.: System of fractional differential algebraic equations with applications. Chaos Solitons Fractals 120, 203–212 (2019). https://doi.org/10.1016/j.chaos.2019.01.028


  19. Byrd, R., Nocedal, J., Yuan, Y.: Global convergence of a class of quasi-Newton methods on convex problems. SIAM J. Numer. Anal. 24(5), 1171–1189 (1987)


  20. Byrd, R.H., Nocedal, J.: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 26(3), 727–739 (1989). https://doi.org/10.1137/0726042


  21. Wei, Z., Li, G.Y., Qi, L.: New quasi-Newton methods for unconstrained optimization problems. Appl. Math. Comput. 175(2), 1156–1188 (2006). https://doi.org/10.1016/j.amc.2005.08.027


  22. Mascarenhas, W.F.: The BFGS method with exact line searches fails for non-convex objective functions. Math. Program. 99(1), 49–61 (2004). https://doi.org/10.1007/s10107-003-0421-7

  23. Li, D.H., Fukushima, M.: On the global convergence of the BFGS method for nonconvex unconstrained optimization problems. SIAM J. Optim. 11(4), 1054–1064 (2001). https://doi.org/10.1137/S1052623499354242


  24. Cieśliński, J.L.: Improved q-exponential and q-trigonometric functions. Appl. Math. Lett. 24(12), 2110–2114 (2011). https://doi.org/10.1016/j.aml.2011.06.009


  25. Ernst, T.: A method for q-calculus. J. Nonlinear Math. Phys. 10(4), 487–525 (2003). https://doi.org/10.2991/jnmp.2003.10.4.5


  26. Tariboon, J., Ntouyas, S.K.: Quantum calculus on finite intervals and applications to impulsive difference equations. Adv. Differ. Equ. 2013, 282 (2013). https://doi.org/10.1186/1687-1847-2013-282


  27. Borges, E.P.: A possible deformed algebra and calculus inspired in nonextensive thermostatistics. Phys. A, Stat. Mech. Appl. 340(1–3), 95–101 (2004). https://doi.org/10.1016/j.physa.2004.03.082


  28. Al-Saggaf, U.M., Moinuddin, M., Arif, M., Zerguine, A.: The q-least mean squares algorithm. Signal Process. 111, 50–60 (2015)


  29. Jackson, F.H.: On q-definite integrals. Q. J. Pure Appl. Math. 41(15), 193–203 (1910)


  30. Carmichael, R.D.: The general theory of linear q-difference equations. Am. J. Math. 34, 147–168 (1912)


  31. Mason, T.E.: On properties of the solution of linear q-difference equations with entire function coefficients. Am. J. Math. 37, 439–444 (1915)

  32. Adams, C.R.: On the linear partial q-difference equation of general type. Trans. Am. Math. Soc. 31, 360–371 (1929)


  33. Trjitzinsky, W.J.: Analytic theory of linear q-difference equations. Acta Math. 61, 1–38 (1933)


  34. Sterroni, A.C., Galski, R.L., Ramos, F.M.: The q-gradient vector for unconstrained continuous optimization problems. In: Operations Research Proceedings 2010, pp. 365–370. Springer, Berlin (2010). https://doi.org/10.1007/978-3-642-20009-0_58


  35. Diqsa, A., Khan, S., Naseem, I., Togneri, R., Bennamoun, M.: Enhanced q-least mean square. Circuits Syst. Signal Process. 38(10), 4817–4839 (2019). https://doi.org/10.1007/s00034-019-01091-4


  36. Gouvêa, E.J., Regis, R.G., Soterroni, A.C., Scarabello, M.C., Ramos, F.M.: Global optimization using the q-gradients. Eur. J. Oper. Res. 251(3), 727–738 (2016). https://doi.org/10.1016/j.ejor.2016.01.001


  37. Chakraborty, S.K., Panda, G.: q-Line search scheme for optimization problem (2017). arXiv:1702.01518

  38. Chakraborty, S.K., Panda, G.: Newton like line search method using q-calculus. In: Mathematics and Computing: Third International Conference, Communications in Computer and Information Science, ICMC 2017, Haldia, India, pp. 196–208 (2017). https://doi.org/10.1007/978-981-10-4642-1_17


  39. Ablinger, J., Uncu, A.K.: qFunctions—a Mathematica package for q-series and partition theory applications (2019). arXiv:1910.12410

  40. Lai, K.K., Mishra, S.K., Panda, G., Chakraborty, S.K., Samei, M.E., Ram, B.: A limited memory q-BFGS algorithm for unconstrained optimization problems. J. Appl. Math. Comput. 63, 1–2 (2020). https://doi.org/10.1007/s12190-020-01432-6


  41. Aral, A., Gupta, V., Agarwal, R.P.: Applications of q-Calculus in Operator Theory. Springer, New York (2013). https://doi.org/10.1007/978-1-4614-6946-9


  42. Jackson, F.H.: On q-functions and a certain difference operator. Trans. R. Soc. Edinb. 46, 253–281 (1909)


  43. Rajković, P.M., Stanković, M.S., Marinković, S.D.: Mean value theorems in q-calculus. Mat. Vesn. 54, 171–178 (2002)


  44. Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7(1), 17–41 (1981)


  45. Yuan, Y.X.: A modified BFGS algorithm for unconstrained optimization. IMA J. Numer. Anal. 11(3), 325–332 (1991). https://doi.org/10.1093/imanum/11.3.325

  46. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002). https://doi.org/10.1007/s101070100263



Acknowledgements

The authors would like to thank the editors and the anonymous reviewers for their constructive comments and suggestions that have helped to improve the present paper. The fourth author was supported by Bu-Ali Sina University. This research was supported by the Science and Engineering Research Board (Grant No. DST-SERB- MTR-2018/000121) and the University Grants Commission (IN) (Grant No. UGC-2015-UTT-59235).

Funding

Not applicable.

Author information


Contributions

The authors declare that the study was realized in collaboration with equal responsibility. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mohammad Esmael Samei.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Mishra, S.K., Panda, G., Chakraborty, S.K. et al. On q-BFGS algorithm for unconstrained optimization problems. Adv Differ Equ 2020, 638 (2020). https://doi.org/10.1186/s13662-020-03100-2



  • DOI: https://doi.org/10.1186/s13662-020-03100-2
