Throughout this paper, let \(\mathcal{H}\) be a real Hilbert space with an inner product \(\langle \cdot , \cdot \rangle \) and the induced norm \(\| \cdot \|\). Let \(\mathbb{R}\) and \(\mathbb{N}\) be the set of real numbers and the set of positive integers, respectively. Let I denote the identity operator on \(\mathcal{H}\). The symbols ⇀ and → denote the weak and strong convergence, respectively.
In this work, we are interested in solving convex minimization problems of the following form:
$$ \mathop {\operatorname {minimize}}_{x \in \mathcal{H}} \psi _{1}(x)+\psi _{2}(x), $$
(1)
where \(\psi _{1} : \mathcal{H}\to \mathbb{R} \) is a convex and differentiable function whose gradient is \(L\)-Lipschitz continuous, and \(\psi _{2} :\mathcal{H}\to \mathbb{R}\cup \{\infty \} \) is a proper, lower semicontinuous, and convex function. A point \(x \in \mathcal{H}\) is a solution of problem (1) if and only if it satisfies the fixed point equation of the forward–backward operator
$$ x = \underbrace{\operatorname {prox}_{\alpha \psi _{2}}}_{\text{backward step}} \underbrace{ \bigl(x - \alpha \nabla \psi _{1}(x) \bigr)}_{\text{forward step}}, $$
(2)
where \(\alpha >0\), \(\operatorname {prox}_{\alpha \psi _{2}}\) denotes the proximity operator of \(\alpha \psi _{2}\), and \(\nabla \psi _{1}\) denotes the gradient of \(\psi _{1}\).
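For concreteness, the following is a minimal NumPy sketch of one evaluation of the forward–backward operator in (2) for the common model instance \(\psi _{1}(x)=\frac{1}{2}\|Ax-b\|^{2}\) and \(\psi _{2}(x)=\lambda \|x\|_{1}\); the names A, b, lam, and the helper soft_threshold are illustrative and not from this paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward_operator(x, A, b, lam, alpha):
    # One evaluation of prox_{alpha*psi_2}(x - alpha * grad(psi_1)(x)) for
    # psi_1(x) = 0.5*||Ax - b||^2 (grad = A^T(Ax - b)) and psi_2(x) = lam*||x||_1.
    grad = A.T @ (A @ x - b)                              # forward (gradient) step
    return soft_threshold(x - alpha * grad, alpha * lam)  # backward (proximal) step
```

In this toy setting, a point x solves (1) exactly when forward_backward_operator(x, A, b, lam, alpha) returns x itself, for any alpha > 0, which is the fixed point characterization (2).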
In recent years, various iterative algorithms for solving the convex minimization problem of the sum of two convex functions have been introduced and studied; see [1, 4, 7–10, 14–16, 18, 21, 25] for instance.
One popular iterative algorithm, the forward–backward splitting (FBS) algorithm [8, 16], is defined as follows: let \(x_{1} \in \mathcal{H}\) and set
$$ x_{n+1}=\operatorname {prox}_{c_{n}\psi _{2}} \bigl(x_{n}-c_{n} \nabla \psi _{1}(x_{n}) \bigr),\quad \forall n\in \mathbb{N}, $$
(3)
where \(0 < c_{n} < 2/L\).
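A minimal sketch of iteration (3), reusing soft_threshold and the toy instance introduced above and assuming the constant stepsize \(c_{n}=1/L\):

```python
def fbs(A, b, lam, iters=500):
    # Forward-backward splitting (3) with constant stepsize c_n = 1/L < 2/L,
    # where L = ||A^T A||_2 is the Lipschitz constant of grad(psi_1).
    L = np.linalg.norm(A.T @ A, 2)
    c = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - c * (A.T @ (A @ x - b)), c * lam)
    return x
```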
In 2005, Combettes and Wajs [8] introduced the following relaxed forward–backward splitting (R-FBS) algorithm: let \(\varepsilon \in (0,\min (1,\frac{1}{L})) \), \(x_{1} \in \mathbb{R}^{N}\), and set
$$ y_{n}= x_{n}-c_{n}\nabla \psi _{1}(x_{n}),\qquad x_{n+1}=x_{n}+ \beta _{n} \bigl(\operatorname {prox}_{c_{n}\psi _{2}}(y_{n})-x_{n} \bigr), \quad \forall n\in \mathbb{N}, $$
(4)
where \(c_{n}\in [\varepsilon , \frac{2}{L}-\varepsilon ] \) and \(\beta _{n}\in [\varepsilon ,1] \).
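Continuing the same sketch, iteration (4) adds a relaxation step; here a constant relaxation \(\beta _{n}=\beta \) and the stepsize \(c_{n}=1/L\) are assumed for simplicity.

```python
def relaxed_fbs(A, b, lam, beta=0.8, iters=500):
    # Relaxed forward-backward splitting (4) with c_n = 1/L and beta_n = beta.
    L = np.linalg.norm(A.T @ A, 2)
    c = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        y = x - c * (A.T @ (A @ x - b))                  # forward step
        x = x + beta * (soft_threshold(y, c * lam) - x)  # relaxed backward step
    return x
```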
To accelerate the forward–backward splitting algorithm, an inertial technique is often employed, and various inertial algorithms have been introduced and studied to improve convergence behavior; see [3, 6, 11, 26] for example. Beck and Teboulle [3] introduced the fast iterative shrinkage-thresholding algorithm (FISTA) for solving problem (1). FISTA is defined as follows: let \(x_{1}=y_{0}\in \mathbb{R}^{N}\), \(t_{1}=1 \), and set
$$ \textstyle\begin{cases} t_{n+1}=\frac{1+\sqrt{1+4t_{n}^{2}}}{2}, \qquad \alpha _{n}= \frac{t_{n}-1}{t_{n+1}}, \\ y_{n}=\operatorname {prox}_{\frac{1}{L}\psi _{2}}(x_{n}-\frac{1}{L}\nabla \psi _{1}(x_{n})), \\ x_{n+1} =y_{n} +\alpha _{n}(y_{n}-y_{n-1}), \quad n \in \mathbb{N}. \end{cases} $$
(5)
Note that \(\alpha _{n} \) is called the inertial parameter, which controls the momentum term \(y_{n}-y_{n-1} \).
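A sketch of FISTA (5) under the same toy instance, again reusing soft_threshold and taking the stepsize \(1/L\):

```python
def fista(A, b, lam, iters=500):
    # FISTA (5): forward-backward step followed by inertial extrapolation.
    L = np.linalg.norm(A.T @ A, 2)
    x = np.zeros(A.shape[1])
    y_prev = x.copy()                                    # y_0 = x_1
    t = 1.0
    for _ in range(iters):
        y = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        alpha = (t - 1.0) / t_next
        x = y + alpha * (y - y_prev)                     # momentum y_n - y_{n-1}
        y_prev, t = y, t_next
    return x
```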
Observe that both the FBS and FISTA algorithms require the gradient of \(\psi _{1}\) to be Lipschitz continuous, and their stepsizes depend on the Lipschitz constant L, which is often difficult to estimate in practice.
In 2016, Cruz and Nghia [9] proposed a linesearch technique for selecting the stepsize which is independent of the Lipschitz constant L. Roughly speaking, the procedure \(\operatorname{Linesearch}(x,\sigma ,\theta ,\delta )\) starts from a trial stepsize \(\sigma >0\) and repeatedly multiplies it by \(\theta \in (0,1)\) until a stopping criterion, controlled by \(\delta \), relating the change of \(\nabla \psi _{1}\) along the forward–backward step to the length of that step is satisfied; a sketch is given below.
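The following NumPy sketch illustrates such a backtracking rule; the exact stopping inequality used in [9] may be stated slightly differently, so the criterion below should be read as an assumed, representative choice. The callables grad_psi1 and prox_psi2(v, c) (the proximity operator of \(c\psi _{2}\)) are placeholders supplied by the user.

```python
import numpy as np

def linesearch(x, sigma, theta, delta, grad_psi1, prox_psi2):
    # Backtracking stepsize rule in the spirit of [9] (sketch): shrink c by the
    # factor theta until the change of grad(psi_1) along the forward-backward
    # step is at most delta times the length of that step.
    c = sigma
    g = grad_psi1(x)
    while True:
        z = prox_psi2(x - c * g, c)                      # trial forward-backward point
        if c * np.linalg.norm(grad_psi1(z) - g) <= delta * np.linalg.norm(z - x):
            return c
        c *= theta
```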
The forward–backward splitting algorithm in which the stepsize \(c_{n}\) is generated by the above linesearch was also introduced by Cruz and Nghia [9] and is defined as follows:
(FBSL). Let \(x_{1} \in \mathcal{H}\), \(\sigma >0\), \(\delta \in (0, 1/2)\), and \(\theta \in (0,1)\). For \(n \geq 1\), let
$$ x_{n+1}=\operatorname {prox}_{c_{n}\psi _{2}} \bigl(x_{n} -c_{n}\nabla \psi _{1}(x_{n}) \bigr), $$
where \(c_{n}:= \operatorname{Linesearch} (x_{n},\sigma , \theta , \delta )\).
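Using the linesearch sketch above, FBSL can be written as the following sketch (the linesearch is restarted from σ at every iteration, as in the definition):

```python
def fbsl(x1, sigma, theta, delta, grad_psi1, prox_psi2, iters=500):
    # FBSL sketch: forward-backward step with a backtracked stepsize.
    x = x1
    for _ in range(iters):
        c = linesearch(x, sigma, theta, delta, grad_psi1, prox_psi2)
        x = prox_psi2(x - c * grad_psi1(x), c)
    return x
```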
Moreover, they also proposed an accelerated algorithm with an inertial term as follows.
(FISTAL). Let \(x_{0}=x_{1} \in \mathcal{H}\), \(\alpha _{0}=\sigma > 0\), \(\delta \in (0, 1/2)\), \(\theta \in (0,1)\), and \(t_{1}=1\). For \(n \geq 1\), let
$$\begin{aligned}& t_{n+1} = \frac{1 + \sqrt{1+4t_{n}^{2}}}{2}, \qquad \alpha _{n}= \frac{t_{n}-1}{t_{n+1}}, \\& y_{n} = x_{n} + \alpha _{n}(x_{n}-x_{n-1}), \\& x_{n+1} =\operatorname {prox}_{c_{n}\psi _{2}} \bigl(y_{n} - c_{n}\nabla \psi _{1}(y_{n}) \bigr), \end{aligned}$$
where \(c_{n}:= \operatorname{Linesearch} (y_{n},c_{n-1}, \theta , \delta ) \).
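A corresponding sketch of FISTAL, using the same linesearch; here the first trial stepsize is taken to be σ, which is an assumption since only \(\alpha _{0}=\sigma \) is specified above.

```python
def fistal(x1, sigma, theta, delta, grad_psi1, prox_psi2, iters=500):
    # FISTAL sketch: inertial extrapolation followed by a forward-backward step,
    # with each linesearch started from the previous stepsize c_{n-1}.
    x_prev = x1.copy()
    x = x1.copy()
    t, c = 1.0, sigma                                    # first trial stepsize (assumed)
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        alpha = (t - 1.0) / t_next
        y = x + alpha * (x - x_prev)                     # inertial step
        c = linesearch(y, c, theta, delta, grad_psi1, prox_psi2)
        x_prev, x = x, prox_psi2(y - c * grad_psi1(y), c)
        t = t_next
    return x
```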
Over the past decade, various fixed point algorithms for nonexpansive operators have been introduced and studied for solving convex minimization problems such as problem (1); see [11, 13, 17, 23]. In 2011, Phuengrattana and Suantai [23] introduced a new fixed point algorithm, known as the SP-iteration, and showed that it has a better convergence rate than the Ishikawa [13] and Mann [17] iterations. The SP-iteration for a nonexpansive operator S is defined as follows:
$$ \begin{aligned} &v_{n}=(1-\beta _{n})x_{n}+\beta _{n}Sx_{n}, \\ &y_{n}=(1-\gamma _{n})v_{n}+\gamma _{n}Sv_{n}, \\ &x_{n+1} = (1-\theta _{n})y_{n}+\theta _{n}Sy_{n}, \quad n\in \mathbb{N}, \end{aligned} $$
where \(x_{1} \in \mathcal{H}\), \(\{\beta _{n}\}\), \(\{\gamma _{n}\} \), and \(\{\theta _{n}\} \) are sequences in \((0,1) \).
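A sketch of the SP-iteration with constant parameters \(\beta _{n}=\beta \), \(\gamma _{n}=\gamma \), \(\theta _{n}=\theta \in (0,1)\) is given below; in the setting of problem (1), a natural choice for the nonexpansive operator S is the forward–backward operator (2) with \(\alpha \in (0,2/L)\), for example forward_backward_operator from the first sketch with A, b, lam fixed.

```python
def sp_iteration(x1, S, beta, gamma, theta, iters=500):
    # SP-iteration for a nonexpansive operator S with constant parameters.
    x = x1
    for _ in range(iters):
        v = (1 - beta) * x + beta * S(x)
        y = (1 - gamma) * v + gamma * S(v)
        x = (1 - theta) * y + theta * S(y)
    return x
```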
Motivated by these works, we combine the ideas of the SP-iteration, the FBS algorithm, and a linesearch technique to propose a new accelerated algorithm for convex minimization problems, which can be applied to image restoration problems. We obtain weak convergence theorems in Hilbert spaces under suitable conditions.