On some accelerated optimization algorithms based on fixed point and linesearch techniques for convex minimization problems with applications

Abstract

In this paper, we introduce and study a new accelerated algorithm, based on the forward–backward method and the SP-algorithm, for solving a convex minimization problem of the sum of two convex and lower semicontinuous functions in a Hilbert space. Under suitable control conditions, a weak convergence theorem for the proposed algorithm is established via a fixed point approach. Moreover, we choose the stepsize of our algorithm to be independent of the Lipschitz constant of the gradient of the objective function by using a linesearch technique, and a weak convergence result of the resulting algorithm is analyzed. As applications, we apply the proposed algorithms to image restoration problems and compare their convergence behavior with other well-known algorithms in the literature. In our experiments, the proposed algorithms are more efficient than the others.

1 Introduction

Throughout this paper, let \(\mathcal{H}\) be a real Hilbert space with inner product \(\langle \cdot , \cdot \rangle \) and induced norm \(\| \cdot \|\). Let \(\mathbb{R}\) and \(\mathbb{N}\) be the set of real numbers and the set of positive integers, respectively. Let I denote the identity operator on \(\mathcal{H}\). The symbols \(\rightharpoonup \) and → denote weak and strong convergence, respectively.

In this work, we are interested in solving the convex minimization problems of the following form:

$$ \mathop {\operatorname {minimize}}_{x \in \mathcal{H}} \psi _{1}(x)+\psi _{2}(x), $$
(1)

where \(\psi _{1} : \mathcal{H}\to \mathbb{R} \) is a convex and differentiable function with an L-Lipschitz continuous gradient and \(\psi _{2} :\mathcal{H}\to \mathbb{R}\cup \{\infty \} \) is a proper, lower semi-continuous, and convex function. A point x is a solution of problem (1) if and only if it satisfies the fixed point equation of the forward–backward operator

$$ x = \underbrace{\operatorname {prox}_{\alpha \psi _{2}}}_{\text{backward step}} \underbrace{ \bigl(x - \alpha \nabla \psi _{1}(x) \bigr)}_{\text{forward step}}, $$
(2)

where \(\alpha >0\), \(\operatorname {prox}_{\psi _{2}}\) is the proximity operator of \(\psi _{2}\), and \(\nabla \psi _{1}\) stands for the gradient of \(\psi _{1}\).

In recent years, various iterative algorithms for solving a convex minimization problem of the sum of two convex functions have been introduced and studied by many mathematicians; see [1, 4, 7–10, 14–16, 18, 21, 25] for instance.

One of the most popular iterative algorithms, the forward–backward splitting (FBS) algorithm [8, 16], is defined as follows: let \(x_{1} \in \mathcal{H}\) and set

$$ x_{n+1}=\operatorname {prox}_{c_{n}\psi _{2}} \bigl(x_{n}-c_{n} \nabla \psi _{1}(x_{n}) \bigr),\quad \forall n\in \mathbb{N}, $$
(3)

where \(0 < c_{n} < 2/L\).
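To make the iteration concrete, the following minimal numpy sketch runs (3) for the illustrative choice \(\psi _{1}(x)=\frac{1}{2}\|Ax-y\|_{2}^{2}\) and \(\psi _{2}(x)=\lambda \|x\|_{1}\), for which the proximity operator is soft-thresholding; the matrix A, the function names, and the constant stepsize \(c_{n}=1/L\) are our own assumptions for illustration, not part of the original algorithm statement.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximity operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fbs(A, y, lam, n_iter=200):
    """Forward-backward splitting (3) for psi1(x) = 0.5*||Ax - y||_2^2, psi2(x) = lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad psi1(x) = A^T (Ax - y)
    c = 1.0 / L                            # constant stepsize c_n = c in (0, 2/L)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                      # forward (gradient) step
        x = soft_threshold(x - c * grad, c * lam)     # backward (proximal) step
    return x
```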

In 2005, Combettes and Wajs [8] introduced the following relaxed forward–backward splitting (R-FBS) algorithm: let \(\varepsilon \in (0,\min (1,\frac{1}{L})) \), \(x_{1} \in \mathbb{R}^{N}\), and set

$$ y_{n}= x_{n}-c_{n}\nabla \psi _{1}(x_{n}),\qquad x_{n+1}=x_{n}+ \beta _{n} \bigl(\operatorname {prox}_{c_{n}\psi _{2}}(y_{n})-x_{n} \bigr), \quad \forall n\in \mathbb{N}, $$
(4)

where \(c_{n}\in [\varepsilon , \frac{2}{L}-\varepsilon ] \) and \(\beta _{n}\in [\varepsilon ,1] \).
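A minimal sketch of the relaxed iteration (4), written against generic handles for \(\nabla \psi _{1}\) and \(\operatorname {prox}_{c\psi _{2}}\) (the callables grad_psi1 and prox_psi2 are our own illustrative interface), could read:

```python
def r_fbs(grad_psi1, prox_psi2, x1, cs, betas, n_iter=200):
    """Relaxed forward-backward splitting (4).

    cs[n] in [eps, 2/L - eps] and betas[n] in [eps, 1] are supplied by the caller;
    prox_psi2(z, c) is assumed to return prox_{c*psi2}(z).
    """
    x = x1.copy()
    for n in range(n_iter):
        y = x - cs[n] * grad_psi1(x)                    # forward step y_n
        x = x + betas[n] * (prox_psi2(y, cs[n]) - x)    # relaxed backward step x_{n+1}
    return x
```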

To accelerate the forward–backward splitting algorithm, an inertial technique is often employed, and various inertial algorithms have been introduced and studied to improve the convergence behavior of such methods; see [3, 6, 11, 26] for example. In 2009, Beck and Teboulle [3] introduced the fast iterative shrinkage-thresholding algorithm (FISTA) for solving problem (1). FISTA is defined as follows: let \(x_{1}=y_{0}\in \mathbb{R}^{N}\), \(t_{1}=1 \), and set

$$ \textstyle\begin{cases} t_{n+1}=\frac{1+\sqrt{1+4t_{n}^{2}}}{2}, \qquad \alpha _{n}= \frac{t_{n}-1}{t_{n+1}}, \\ y_{n}=\operatorname {prox}_{\frac{1}{L}\psi _{2}}(x_{n}-\frac{1}{L}\nabla \psi _{1}(x_{n})), \\ x_{n+1} =y_{n} +\alpha _{n}(y_{n}-y_{n-1}), \quad n \in \mathbb{N}. \end{cases} $$
(5)

Note that \(\alpha _{n} \) is called an inertial parameter which controls the momentum \(y_{n}-y_{n-1} \).
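Under the same illustrative LASSO-type choice of \(\psi _{1}\) and \(\psi _{2}\) as in the FBS sketch above (and reusing its soft_threshold helper), FISTA (5) can be sketched as follows:

```python
import numpy as np

def fista(A, y, lam, n_iter=200):
    """FISTA (5) for psi1(x) = 0.5*||Ax - y||_2^2, psi2(x) = lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])               # x_1
    y_prev = x.copy()                      # y_0
    t = 1.0                                # t_1
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        alpha = (t - 1.0) / t_next                          # inertial parameter alpha_n
        grad = A.T @ (A @ x - y)
        y_curr = soft_threshold(x - grad / L, lam / L)      # proximal gradient step y_n
        x = y_curr + alpha * (y_curr - y_prev)              # momentum step x_{n+1}
        y_prev, t = y_curr, t_next
    return x
```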

Note that both the FBS and FISTA algorithms require the Lipschitz continuity of the gradient of \(\psi _{1}\), and their stepsizes depend on the Lipschitz constant L, which is often difficult to compute in practice.

In 2016, Cruz and Nghia [9] proposed a linesearch technique for selecting the stepsize which is independent of the Lipschitz constant L. Their linesearch technique is given by the following process:

Linesearch \((x,\sigma ,\theta ,\delta )\) (the backtracking procedure of [9]; a sketch is given below).
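Since the original figure is not reproduced here, the following sketch reconstructs the backtracking rule from [9] and from the stopping inequality (33) used later in the proof of Theorem 4.2; the callable interface (grad_psi1, prox_psi2) is our own assumption.

```python
import numpy as np

def linesearch(x, sigma, theta, delta, grad_psi1, prox_psi2):
    """Backtracking stepsize rule of Cruz and Nghia [9] (sketch).

    Starting from c = sigma, shrink c by the factor theta until
        c * ||grad_psi1(z) - grad_psi1(x)|| <= delta * ||z - x||,
    where z = prox_{c*psi2}(x - c*grad_psi1(x)).
    """
    c = sigma
    while True:
        z = prox_psi2(x - c * grad_psi1(x), c)   # forward-backward trial point
        if c * np.linalg.norm(grad_psi1(z) - grad_psi1(x)) <= delta * np.linalg.norm(z - x):
            return c
        c *= theta
```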

The forward–backward splitting algorithm in which the stepsize \(c_{n}\) is generated by the above linesearch was also introduced by Cruz and Nghia [9] and is defined as follows:

(FBSL). Let \(x_{1} \in \mathcal{H}\), \(\sigma >0\), \(\delta \in (0, 1/2)\), and \(\theta \in (0,1)\). For \(n \geq 1\), let

$$ x_{n+1}=\operatorname {prox}_{c_{n}\psi _{2}} \bigl(x_{n} -c_{n}\nabla \psi _{1}(x_{n}) \bigr), $$

where \(c_{n}:= \operatorname{Linesearch} (x_{n},\sigma , \theta , \delta )\).

Moreover, they also proposed an accelerated algorithm with an inertial term as follows.

(FISTAL). Let \(x_{0}=x_{1} \in \mathcal{H}\), \(\alpha _{0}=\sigma > 0\), \(\delta \in (0, 1/2)\), \(\theta \in (0,1)\), and \(t_{1}=1\). For \(n \geq 1\), let

$$\begin{aligned}& t_{n+1} = \frac{1 + \sqrt{1+4t_{n}^{2}}}{2}, \qquad \alpha _{n}= \frac{t_{n}-1}{t_{n+1}}, \\& y_{n} = x_{n} + \alpha _{n}(x_{n}-x_{n-1}), \\& x_{n+1} =\operatorname {prox}_{c_{n}\psi _{2}} \bigl(y_{n} - c_{n}\nabla \psi _{1}(y_{n}) \bigr), \end{aligned}$$

where \(c_{n}:= \operatorname{Linesearch} (y_{n},c_{n-1}, \theta , \delta ) \).

Over the past decade, various fixed point algorithms for nonexpansive operators have been introduced and studied for solving convex minimization problems of the form (1); see [11, 13, 17, 23]. In 2011, Phuengrattana and Suantai [23] introduced a new fixed point algorithm known as the SP-iteration and showed that it has a better convergence rate than the Ishikawa [13] and Mann [17] iterations. The SP-iteration for a nonexpansive operator S is defined as follows:

$$ \begin{aligned} &v_{n}=(1-\beta _{n})x_{n}+\beta _{n}Sx_{n}, \\ &y_{n}=(1-\gamma _{n})v_{n}+\gamma _{n}Sv_{n}, \\ &x_{n+1} = (1-\theta _{n})y_{n}+\theta _{n}Sy_{n}, \quad n\in \mathbb{N}, \end{aligned} $$

where \(x_{1} \in \mathcal{H}\), \(\{\beta _{n}\}\), \(\{\gamma _{n}\} \), and \(\{\theta _{n}\} \) are sequences in \((0,1) \).
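A minimal sketch of the SP-iteration for a generic nonexpansive operator S (passed as a callable, with the parameter sequences supplied as lists; these interface choices are ours) is:

```python
def sp_iteration(S, x1, betas, gammas, thetas, n_iter=100):
    """SP-iteration [23] for a nonexpansive operator S."""
    x = x1
    for n in range(n_iter):
        v = (1 - betas[n]) * x + betas[n] * S(x)
        y = (1 - gammas[n]) * v + gammas[n] * S(v)
        x = (1 - thetas[n]) * y + thetas[n] * S(y)
    return x
```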

Motivated by these works, we combine the idea of SP-iteration, FBS algorithm, and a linesearch technique to propose a new accelerated algorithm for a convex minimization problem which can be applied to solve the image restoration problems. We obtain weak convergence theorems in Hilbert spaces under some suitable conditions.

2 Preliminaries

In this section, we give some definitions and basic properties for proving our results in the next sections.

Let \(\psi : \mathcal{H} \to \mathbb{R}\cup \{\infty \} \) be a proper, lower semi-continuous, and convex function. The proximity (or proximal) operator [2, 19] of ψ, denoted by \(\operatorname {prox}_{\psi }\), is defined as follows: for each \(x \in \mathcal{H}\), \(\operatorname {prox}_{\psi }x\) is the unique solution of the minimization problem

$$ \mathop {\operatorname {minimize}}_{y\in \mathcal{H}} \psi (y) + \frac{1}{2} \Vert x - y \Vert ^{2}. $$
(6)

The proximity operator can be formulated in the equivalent form

$$ \operatorname {prox}_{\psi } = (I + \partial \psi )^{-1} : \mathcal{H} \rightarrow \mathcal{H}, $$
(7)

where ∂ψ is the subdifferential of ψ defined by

$$ \partial \psi (x) := \bigl\{ u \in \mathcal{H} : \psi (x) + \langle u, y - x \rangle \leq \psi (y) , \forall y \in \mathcal{H} \bigr\} , \quad \forall x \in \mathcal{H}. $$

Moreover, we have the following useful fact:

$$ \frac{x - \operatorname {prox}_{\alpha \psi }(x) }{\alpha }\in \partial \psi \bigl(\operatorname {prox}_{ \alpha \psi }(x) \bigr),\quad \forall x \in \mathcal{H}, \alpha >0. $$
(8)

Note that the subdifferential operator ∂ψ is maximal monotone (see [5] for more details), and the solutions of (1) are characterized as fixed points of the forward–backward operator:

$$ x \in \operatorname {Argmin}(\psi _{1}+\psi _{2}) \quad \Longleftrightarrow\quad x=\operatorname {prox}_{c \psi _{2}}(I-c\nabla \psi _{1}) (x), $$

where \(c>0\). If \(0< c< \frac{2}{L} \), we know that \(\operatorname{prox}_{c\psi _{2}}(I-c\nabla \psi _{1}) \) is a nonexpansive operator.

An operator \(S : \mathcal{H} \rightarrow \mathcal{H}\) is said to be Lipschitz continuous if there exists \(L > 0\) such that

$$ \Vert Sx - Sy \Vert \leq L \Vert x - y \Vert , \quad \forall x, y \in \mathcal{H}.$$

If S is 1-Lipschitz continuous, then S is called a nonexpansive operator. A point \(x\in \mathcal{H} \) is called a fixed point of S if \(x=Sx \). The set of all fixed points of S is denoted by \(\operatorname {Fix}(S)\).

The operator \(I - S\) is said to be demiclosed at zero if, whenever a sequence \(\{x_{n}\}\) in \(\mathcal{H}\) converges weakly to x and \(\{x_{n} - Sx_{n}\}\) converges strongly to 0, it follows that \(x \in \operatorname {Fix}(S)\). It is known [22] that if S is a nonexpansive operator, then \(I - S\) is demiclosed at zero. Let \(S : \mathcal{H} \rightarrow \mathcal{H}\) be a nonexpansive operator and \(\{S_{n} : \mathcal{H} \rightarrow \mathcal{H}\}\) be a sequence of nonexpansive operators such that \(\emptyset \neq \operatorname {Fix}(S) \subset \bigcap_{n=1}^{\infty } \operatorname {Fix}(S_{n})\). Then \(\{S_{n}\}\) is said to satisfy NST-condition (I) with S [20] if for each bounded sequence \(\{x_{n}\}\) in \(\mathcal{H}\),

$$ \lim_{n\rightarrow \infty } \Vert x_{n}-S_{n}x_{n} \Vert = 0 \quad \text{implies} \quad \lim_{n\rightarrow \infty } \Vert x_{n}-Sx_{n} \Vert = 0. $$

Let \(x, y \in \mathcal{H}\) and \(t \in [0, 1]\). The following inequalities hold on \(\mathcal{H}\):

$$\begin{aligned}& \bigl\Vert tx +(1-t)y \bigr\Vert ^{2} =t \Vert x \Vert ^{2}+(1-t) \Vert y \Vert ^{2}-t(1-t) \Vert x-y \Vert ^{2}, \end{aligned}$$
(9)
$$\begin{aligned}& \Vert x\pm y \Vert ^{2} = \Vert x \Vert ^{2}\pm 2\langle x,y\rangle + \Vert y \Vert ^{2}. \end{aligned}$$
(10)

The following lemmas are crucial for our main results.

Lemma 2.1

([6])

Let \(\psi _{1} : \mathcal{H} \to \mathbb{R} \) be a convex and differentiable function with an L-Lipschitz continuous gradient, and let \(\psi _{2} : \mathcal{H} \to \mathbb{R}\cup \{\infty \} \) be a proper lower semi-continuous and convex function. Let \(S_{n} := \operatorname {prox}_{c_{n}\psi _{2}}(I - c_{n}\nabla \psi _{1})\) and \(S := \operatorname {prox}_{c\psi _{2}}(I - c\nabla \psi _{1})\), where \(c_{ n}, c \in (0,2/L)\) with \(c_{n} \rightarrow c\) as \(n \rightarrow \infty \). Then \(\{ S_{n}\}\) satisfies NST-condition (I) with S.

Lemma 2.2

([24])

If \(f : \mathcal{H} \to \mathbb{R}\cup \{\infty \} \) is a proper, lower semi-continuous, and convex function, then the graph of ∂f defined by \(\operatorname{Gph}(\partial f):= \{(x,y)\in \mathcal{H}\times \mathcal{H} : y\in \partial f(x)\} \) is demiclosed, i.e., if the sequence \(\{(x_{k}, y_{k})\} \) in \(\operatorname{Gph}(\partial f)\) satisfies \(x_{k}\rightharpoonup x \) and \(y_{k}\to y \), then \((x,y) \in \operatorname{Gph}(\partial f)\).

Lemma 2.3

([12])

Let \(\psi _{1}, \psi _{2}:\mathcal{H}\to \mathbb{R}\cup \{\infty \}\) be two proper, lower semi-continuous, and convex functions. Then, for any \(x\in \mathcal{H}\) and \(c_{2}\geq c_{1}>0 \), we have

$$\begin{aligned} \frac{c_{2}}{c_{1}} \bigl\Vert x - \operatorname {prox}_{c_{1}\psi _{2}} \bigl(x-c_{1}\nabla \psi _{1}(x) \bigr) \bigr\Vert &\geq \bigl\Vert x - \operatorname {prox}_{c_{2}\psi _{2}} \bigl(x-c_{2}\nabla \psi _{1}(x) \bigr) \bigr\Vert \\ &\geq \bigl\Vert x - \operatorname {prox}_{c_{1}\psi _{2}} \bigl(x-c_{1}\nabla \psi _{1}(x) \bigr) \bigr\Vert . \end{aligned}$$

Lemma 2.4

([11])

Let \(\{a_{n}\} \) and \(\{t_{n}\} \) be two sequences of nonnegative real numbers such that

$$ a_{n+1}\leq (1+t_{n})a_{n}+t_{n}a_{n-1},\quad \forall n\in \mathbb{N}. $$

Then \(a_{n+1}\leq M \cdot \prod_{j=1}^{n}(1+2t_{j})\), where \(M= \max \{a_{1}, a_{2}\}\). Moreover, if \(\sum_{n=1}^{\infty }t_{n}<\infty \), then \(\{a_{n}\} \) is bounded.

Lemma 2.5

([27])

Let \(\{a_{n}\}\) and \(\{b_{n}\}\) be two sequences of nonnegative real numbers such that \(a_{n+1}\leq a_{n}+b_{n}\) for all \(n \in \mathbb{N}\). If \(\sum_{n=1}^{\infty }b_{n}< \infty \), then \(\lim_{n\to \infty }a_{n} \) exists.

Lemma 2.6

([22])

Let \(\{x_{n}\} \) be a sequence in \(\mathcal{H} \) such that there exists a nonempty set \(\Omega \subset \mathcal{H} \) satisfying:

  1. (i)

    For every \(p\in \Omega \), \(\lim_{n\to \infty }\|x_{n}-p\| \) exists;

  2. (ii)

    \(\omega _{w}(x_{n})\subset \Omega \),

where \(\omega _{w}(x_{n}) \) is the set of all weak-cluster points of \(\{x_{n}\}\). Then \(\{x_{n}\} \) converges weakly to a point in Ω.

3 The SP-forward–backward splitting based on a fixed point algorithm

In this section, we introduce a new accelerated algorithm by using FBS and SP-iteration with the inertial technique to solve a convex minimization problem of the sum of two convex functions \(\psi _{1}\) and \(\psi _{2} \), where

  • \(\psi _{1} :\mathcal{H}\to \mathbb{R} \) is a convex and differentiable function with an L-Lipschitz continuous gradient;

  • \(\psi _{2} :\mathcal{H}\to \mathbb{R}\cup \{\infty \} \) is a proper lower semi-continuous and convex function;

  • \(\Omega := \operatorname {Argmin}(\psi _{1}+\psi _{2})\neq \emptyset \).

Now, we are ready to prove the convergence theorem of Algorithm 1 (SP-FBS).

Algorithm 1 SP-forward–backward splitting (SP-FBS). (The iteration is stated as (11) in the proof of Theorem 3.1; a sketch is given below.)
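A minimal sketch of Algorithm 1, assuming it coincides with the inertial SP-iteration (11) written out in the proof of Theorem 3.1, with \(S_{n}=\operatorname {prox}_{c_{n}\psi _{2}}(I-c_{n}\nabla \psi _{1})\) and with the generic callables grad_psi1 and prox_psi2 as our own interface, is:

```python
def sp_fbs(grad_psi1, prox_psi2, x1, alphas, betas, gammas, thetas, cs, n_iter=200):
    """SP-forward-backward splitting (Algorithm 1), written as the inertial SP-iteration (11)."""
    x_prev, x = x1.copy(), x1.copy()                 # x_0 = x_1 (assumed initialization)
    for n in range(n_iter):
        S_n = lambda z: prox_psi2(z - cs[n] * grad_psi1(z), cs[n])   # S_n = prox_{c_n psi2}(I - c_n grad psi1)
        u = x + alphas[n] * (x - x_prev)                             # inertial step u_n
        v = (1 - betas[n]) * u + betas[n] * S_n(u)                   # v_n
        y = (1 - gammas[n]) * v + gammas[n] * S_n(v)                 # y_n
        x_prev, x = x, (1 - thetas[n]) * y + thetas[n] * S_n(y)      # x_{n+1}
    return x
```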

Theorem 3.1

Let \(\{x_{n}\} \) be the sequence generated by Algorithm 1. Assume that the sequences \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\), \(\{\gamma _{n}\}\), \(\{\theta _{n}\}\), and \(\{c_{n}\}\) satisfy the following conditions:

  1. (C1)

    \(\gamma _{n}, \theta _{n} \in [0,1]\), \(\beta _{n}\in [a,b]\subset (0,1)\);

  2. (C2)

    \(\alpha _{n}\geq 0\), \(\sum_{n=1}^{\infty }\alpha _{n} <\infty \);

  3. (C3)

    \(0 < c_{n}, c < 2/L\) such that \(\lim_{n\to \infty } c_{n} = c\).

Then the following statements hold:

  1. (i)

    \(\|x_{n+1}-p^{*}\|\leq M \cdot \prod_{j=1}^{n}(1+2\alpha _{j}) \), where \(M=\max \{\|x_{1}-p^{*}\|, \|x_{2}-p^{*}\|\} \) and \(p^{*}\in \Omega \).

  2. (ii)

    \(\{x_{n}\} \) converges weakly to a point in Ω.

Proof

For each \(n\in \mathbb{N} \), set \(S_{n} := \operatorname {prox}_{c_{n}\psi _{2}}(I-c_{n}\nabla \psi _{1}) \) and \(S := \operatorname {prox}_{c\psi _{2}}(I-c\nabla \psi _{1}) \). Then the sequence \(\{x_{n}\} \) generated by Algorithm 1 is the same as that generated by the following inertial SP-iteration:

$$ \begin{aligned} &u_{n}= x_{n}+\alpha _{n}(x_{n}-x_{n-1}), \\ &v_{n}=(1-\beta _{n})u_{n}+\beta _{n}S_{n}u_{n}, \\ &y_{n}=(1-\gamma _{n})v_{n}+\gamma _{n}S_{n}v_{n}, \\ &x_{n+1} = (1-\theta _{n})y_{n}+\theta _{n}S_{n}y_{n}. \end{aligned} $$
(11)

By condition (C3), we know that \(S_{n}\) and S are nonexpansive operators with \(\bigcap_{n=1}^{\infty } \operatorname {Fix}(S_{n})= \operatorname {Fix}(S) = \operatorname {Argmin}(\psi _{1}+ \psi _{2}):=\Omega \). By Lemma 2.1, we obtain that \(\{ S_{n}\}\) satisfies NST-condition (I) with S.

(i) Let \(p^{*}\in \Omega \). By (11), we have

$$ \bigl\Vert u_{n}-p^{*} \bigr\Vert \leq \bigl\Vert x_{n}-p^{*} \bigr\Vert +\alpha _{n} \Vert x_{n}-x_{n-1} \Vert $$
(12)

and

$$ \bigl\Vert v_{n}-p^{*} \bigr\Vert \leq (1-\beta _{n}) \bigl\Vert u_{n}-p^{*} \bigr\Vert + \beta _{n} \bigl\Vert S_{n}u_{n}-p^{*} \bigr\Vert \leq \bigl\Vert u_{n}-p^{*} \bigr\Vert . $$
(13)

Similarly, we get that

$$ \bigl\Vert y_{n}-p^{*} \bigr\Vert \leq \bigl\Vert v_{n}-p^{*} \bigr\Vert \quad \text{and}\quad \bigl\Vert x_{n+1}-p^{*} \bigr\Vert \leq \bigl\Vert y_{n}-p^{*} \bigr\Vert . $$
(14)

From (12), (13), and (14), we get

$$\begin{aligned} \bigl\Vert x_{n+1}-p^{*} \bigr\Vert &\leq \bigl\Vert y_{n}-p^{*} \bigr\Vert \\ &\leq \bigl\Vert v_{n}-p^{*} \bigr\Vert \\ &\leq \bigl\Vert u_{n}-p^{*} \bigr\Vert \\ &\leq \bigl\Vert x_{n}-p^{*} \bigr\Vert +\alpha _{n} \Vert x_{n}-x_{n-1} \Vert . \end{aligned}$$
(15)

This implies that

$$ \bigl\Vert x_{n+1}-p^{*} \bigr\Vert \leq (1+\alpha _{n}) \bigl\Vert x_{n}-p^{*} \bigr\Vert + \alpha _{n} \bigl\Vert x_{n-1}-p^{*} \bigr\Vert . $$
(16)

Applying Lemma 2.4, we get \(\|x_{n+1}-p^{*}\|\leq M \cdot \prod_{j=1}^{n}(1+2\alpha _{j}) \), where \(M=\max \{\|x_{1}-p^{*}\|, \|x_{2}-p^{*}\|\} \).

(ii) It follows from (i) and condition (C2) that \(\{x_{n}\} \) is bounded. Since \(\{x_{n}\}\) is bounded and \(\sum_{n=1}^{\infty }\alpha _{n} <\infty \), we have \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|<\infty \). By (15) and Lemma 2.5, we obtain that \(\lim_{n\to \infty }\|x_{n}-p^{*}\| \) exists. By (10), we have

$$ \bigl\Vert u_{n}-p^{*} \bigr\Vert ^{2} \leq \bigl\Vert x_{n}-p^{*} \bigr\Vert ^{2}+\alpha _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2} +2\alpha _{n} \bigl\Vert x_{n}-p^{*} \bigr\Vert \Vert x_{n}-x_{n-1} \Vert . $$
(17)

From (9), we also have

$$\begin{aligned} \bigl\Vert v_{n}-p^{*} \bigr\Vert ^{2} &=(1-\beta _{n}) \bigl\Vert u_{n}-p^{*} \bigr\Vert ^{2}+\beta _{n} \bigl\Vert S_{n}u_{n}-p^{*} \bigr\Vert ^{2} \\ &\quad {} -\beta _{n}(1-\beta _{n}) \Vert u_{n}-S_{n}u_{n} \Vert ^{2} \\ &\leq \bigl\Vert u_{n}-p^{*} \bigr\Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert u_{n}-S_{n}u_{n} \Vert ^{2}. \end{aligned}$$
(18)

By (14), (17), and (18), we obtain

$$\begin{aligned} \bigl\Vert x_{n+1}-p^{*} \bigr\Vert ^{2}&\leq \bigl\Vert y_{n}-p^{*} \bigr\Vert ^{2} \\ &\leq \bigl\Vert v_{n}-p^{*} \bigr\Vert ^{2} \\ &\leq \bigl\Vert u_{n}-p^{*} \bigr\Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert u_{n}-S_{n}u_{n} \Vert ^{2} \\ &\leq \bigl\Vert x_{n}-p^{*} \bigr\Vert ^{2}+\alpha _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2} +2 \alpha _{n} \bigl\Vert x_{n}-p^{*} \bigr\Vert \Vert x_{n}-x_{n-1} \Vert \\ &\quad {} -\beta _{n}(1-\beta _{n}) \Vert u_{n}-S_{n}u_{n} \Vert ^{2}. \end{aligned}$$
(19)

Since \(0< a\leq \beta _{n}\leq b<1\), \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1} \|<\infty \), and \(\lim_{n\to \infty }\|x_{n}-p^{*}\| \) exists, the above inequality implies \(\lim_{n\to \infty }\|u_{n}-S_{n}u_{n}\|=0 \). Since \(\{u_{n}\} \) is bounded and \(\{ S_{n}\}\) satisfies NST-condition (I) with S, we have \(\lim_{n\to \infty }\|u_{n}-Su_{n}\|=0 \). By the demiclosedness of \(I-S \), we have \(\omega _{w}(u_{n})\subset \operatorname {Fix}(S) =\Omega \). Since \(\|u_{n}-x_{n}\|=\alpha _{n}\|x_{n}-x_{n-1}\|\to 0\), we have \(\omega _{w}(x_{n})\subset \omega _{w}(u_{n})\subset \operatorname {Fix}(S) = \Omega \). By Lemma 2.6, we conclude that \(\{x_{n}\} \) converges weakly to a point in Ω. This completes the proof. □

Remark 3.2

If we set \(\alpha _{n}=0\), \(S_{n}=S \) for all \(n\in \mathbb{N} \), then Algorithm 1 is reduced to the SP-algorithm [23]:

$$\begin{aligned}& v_{n} =(1-\beta _{n})x_{n}+\beta _{n}Sx_{n}, \\& y_{n} =(1-\gamma _{n})v_{n}+\gamma _{n}Sv_{n}, \\& x_{n+1} = (1-\theta _{n})y_{n}+\theta _{n}Sy_{n}, \end{aligned}$$

where \(\beta _{n},\gamma _{n},\theta _{n}\in (0,1) \).

Remark 3.3

If we set \(\alpha _{n}=\gamma _{n}=\theta _{n}=0\) for all \(n\in \mathbb{N} \), then Algorithm 1 is reduced to the Krasnosel’skii–Mann algorithm [8]:

$$ x_{n+1}=(1-\beta _{n})x_{n}+\beta _{n}S_{n}x_{n},\quad n\geq 1, $$

where \(\beta _{n}\in (0,1) \).

4 The SP-forward–backward splitting algorithm with linesearch technique

In this section, we introduce a new accelerated algorithm by using the inertial and linesearch technique to solve a convex minimization problem of the sum of two convex functions \(\psi _{1}\) and \(\psi _{2} \), where

  1. (B1)

    \(\psi _{1}:\mathcal{H}\to \mathbb{R}\) and \(\psi _{2}:\mathcal{H}\to \mathbb{R}\cup \{\infty \}\) are two proper, lower semi-continuous, and convex functions and \(\Omega := \operatorname {Argmin}(\Psi :=\psi _{1}+\psi _{2})\neq \emptyset \);

  2. (B2)

    \(\psi _{1} \) is differentiable on \(\mathcal{H}\). The gradient \(\nabla \psi _{1} \) is uniformly continuous on \(\mathcal{H} \).

We note that assumption (B2) is weaker than the Lipschitz continuity assumption on \(\nabla \psi _{1}\).

Lemma 4.1

([9])

Let \(\{x_{n}\} \) be a sequence generated by the following algorithm:

$$ x_{n+1}=\operatorname {prox}_{c_{n}\psi _{2}} \bigl(x_{n}-c_{n} \nabla \psi _{1}(x_{n}) \bigr), $$

where \(c_{n}:= \operatorname {Linesearch}(x_{n},\sigma , \theta , \delta ) \). Then, for each \(n\geq 1 \) and \(p \in \mathcal{H} \),

$$\begin{aligned} \Vert x_{n}-p \Vert ^{2}- \Vert x_{n+1}-p \Vert ^{2} &\geq 2c_{n} \bigl[( \psi _{1}+\psi _{2}) (x_{n+1}) -(\psi _{1}+\psi _{2}) (p) \bigr] \\ &\quad {} +(1-2\delta ) \Vert x_{n+1}-x_{n} \Vert ^{2}. \end{aligned}$$

Now, we are ready to prove the convergence theorem of Algorithm 2 (SP-FBSL).

Algorithm 2 SP-forward–backward splitting with linesearch (SP-FBSL). (The iteration follows the inertial SP-steps with the linesearch stepsizes \(c^{1}_{n}\), \(c^{2}_{n}\), \(c^{3}_{n}\) used in the proof of Theorem 4.2; a sketch is given below.)
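A minimal sketch of Algorithm 2, assuming the structure used in the proof of Theorem 4.2 (an inertial step followed by three relaxed forward–backward steps whose stepsizes \(c^{1}_{n}, c^{2}_{n}, c^{3}_{n}\) come from the linesearch), is given below. It reuses the linesearch sketch from Sect. 1, and restarting each linesearch call from σ is our own assumption.

```python
def sp_fbsl(grad_psi1, prox_psi2, x1, alphas, betas, gammas, thetas,
            sigma=10.0, theta_ls=0.9, delta=0.1, n_iter=200):
    """SP-forward-backward splitting with linesearch (Algorithm 2), sketch."""
    def fb_step(z):
        c = linesearch(z, sigma, theta_ls, delta, grad_psi1, prox_psi2)  # stepsize from the linesearch
        return prox_psi2(z - c * grad_psi1(z), c)
    x_prev, x = x1.copy(), x1.copy()                      # x_0 = x_1 (assumed initialization)
    for n in range(n_iter):
        u = x + alphas[n] * (x - x_prev)                  # inertial step u_n
        v = (1 - betas[n]) * u + betas[n] * fb_step(u)    # uses c^1_n
        y = (1 - gammas[n]) * v + gammas[n] * fb_step(v)  # uses c^2_n
        x_prev, x = x, (1 - thetas[n]) * y + thetas[n] * fb_step(y)   # uses c^3_n
    return x
```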

Theorem 4.2

Let \(\{x_{n}\} \) be the sequence generated by Algorithm 2. If \(\{\gamma _{n}\},\{\theta _{n}\}\subset [0,1]\), \(\beta _{n}\in [a,b]\subset (0,1)\), \(\alpha _{n}\geq 0 \) for all \(n\in \mathbb{N} \) and \(\sum_{n=1}^{\infty }\alpha _{n}<\infty \), then \(\{x_{n}\} \) converges weakly to a point in Ω.

Proof

We denote

$$\begin{aligned}& \bar{u_{n}}:=\operatorname {prox}_{c^{1}_{n}\psi _{2}} \bigl(u_{n}-c^{1}_{n} \nabla \psi _{1}(u_{n}) \bigr),\qquad \bar{v_{n}}:= \operatorname {prox}_{c^{2}_{n}\psi _{2}} \bigl(v_{n}-c^{2}_{n} \nabla \psi _{1}(v_{n}) \bigr), \quad \text{and} \\& \bar{y_{n}}:=\operatorname {prox}_{c^{3}_{n}\psi _{2}} \bigl(y_{n}-c^{3}_{n} \nabla \psi _{1}(y_{n}) \bigr). \end{aligned}$$

Let \(p^{*} \in \Omega \). Applying Lemma 4.1, we have, for any \(n\in \mathbb{N} \) and \(p \in \mathcal{H} \),

$$\begin{aligned}& \Vert u_{n}-p \Vert ^{2}- \Vert \bar{u_{n}}-p \Vert ^{2} \geq 2c^{1}_{n} \bigl[\Psi ( \bar{u_{n}}) -\Psi (p) \bigr]+(1-2\delta ) \Vert \bar{u_{n}}-u_{n} \Vert ^{2}, \end{aligned}$$
(20)
$$\begin{aligned}& \Vert v_{n}-p \Vert ^{2}- \Vert \bar{v_{n}}-p \Vert ^{2} \geq 2c^{2}_{n} \bigl[\Psi ( \bar{v_{n}}) -\Psi (p) \bigr]+(1-2\delta ) \Vert \bar{v_{n}}-v_{n} \Vert ^{2}, \end{aligned}$$
(21)
$$\begin{aligned}& \Vert y_{n}-p \Vert ^{2}- \Vert \bar{y_{n}}-p \Vert ^{2} \geq 2c^{3}_{n} \bigl[\Psi ( \bar{y_{n}}) -\Psi (p) \bigr]+(1-2\delta ) \Vert \bar{y_{n}}-y_{n} \Vert ^{2}. \end{aligned}$$
(22)

Putting \(p=p^{*} \) in (20)–(22), we have

$$ \bigl\Vert \bar{u_{n}}-p^{*} \bigr\Vert \leq \bigl\Vert u_{n}-p^{*} \bigr\Vert , \qquad \bigl\Vert \bar{v_{n}}-p^{*} \bigr\Vert \leq \bigl\Vert v_{n}-p^{*} \bigr\Vert \quad \text{and} \quad \bigl\Vert \bar{y_{n}}-p^{*} \bigr\Vert \leq \bigl\Vert y_{n}-p^{*} \bigr\Vert .$$

So, we obtain

$$\begin{aligned} \bigl\Vert x_{n+1}-p^{*} \bigr\Vert &= \bigl\Vert (1-\theta _{n}) \bigl(y_{n}-p^{*} \bigr) +\theta _{n} \bigl( \bar{y_{n}}-p^{*} \bigr) \bigr\Vert \\ &\leq (1-\theta _{n}) \bigl\Vert y_{n}-p^{*} \bigr\Vert +\theta _{n} \bigl\Vert \bar{y_{n}}-p^{*} \bigr\Vert \\ &\leq \bigl\Vert y_{n}-p^{*} \bigr\Vert . \end{aligned}$$
(23)

Similarly, we get

$$ \bigl\Vert y_{n}-p^{*} \bigr\Vert \leq \bigl\Vert v_{n}-p^{*} \bigr\Vert \quad \text{and}\quad \bigl\Vert v_{n}-p^{*} \bigr\Vert \leq \bigl\Vert u_{n}-p^{*} \bigr\Vert . $$
(24)

From (23) and (24), we obtain

$$\begin{aligned} \bigl\Vert x_{n+1}-p^{*} \bigr\Vert &\leq \bigl\Vert u_{n}-p^{*} \bigr\Vert \\ &= \bigl\Vert x_{n} +\alpha _{n}(x_{n}-x_{n-1}) -p^{*} \bigr\Vert \\ &\leq \bigl\Vert x_{n}-p^{*} \bigr\Vert +\alpha _{n} \Vert x_{n}-x_{n-1} \Vert \\ &\leq (1+\alpha _{n}) \bigl\Vert x_{n}-p^{*} \bigr\Vert +\alpha _{n} \bigl\Vert x_{n-1}-p^{*} \bigr\Vert . \end{aligned}$$
(25)

By Lemma 2.4, this implies that \(\{x_{n}\} \) is bounded, and hence \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|<\infty \) since \(\sum_{n=1}^{\infty }\alpha _{n}<\infty \). Since \(u_{n}-x_{n}=\alpha _{n}(x_{n}-x_{n-1})\), it follows that

$$ \lim_{n\to \infty } \Vert u_{n}-x_{n} \Vert = 0. $$
(26)

By (25) and Lemma 2.5, \(\lim_{n\to \infty }\|x_{n}-p^{*}\| \) exists and \(\lim_{n\to \infty }\|x_{n}-p^{*}\|= \lim_{n\to \infty }\|u_{n}-p^{*} \|\).

Next, we show that \(\omega _{w}(x_{n})\subset \Omega \). Let \(x\in \omega _{w}(x_{n}) \), i.e., there exists a subsequence \(\{x_{n_{k}}\} \) of \(\{x_{n}\} \) such that \(x_{n_{k}}\rightharpoonup x \). By (26), we have \(u_{n_{k}}\rightharpoonup x \).

From (23), (24), and (9), we have

$$\begin{aligned} \bigl\Vert x_{n+1}-p^{*} \bigr\Vert ^{2} &\leq \bigl\Vert v_{n}-p^{*} \bigr\Vert ^{2} \\ &= (1-\beta _{n}) \bigl\Vert u_{n}-p^{*} \bigr\Vert ^{2}+\beta _{n} \bigl\Vert \bar{u_{n}}-p^{*} \bigr\Vert ^{2} -\beta _{n}(1-\beta _{n}) \Vert u_{n}- \bar{u_{n}} \Vert ^{2} \\ &\leq \bigl\Vert u_{n}-p^{*} \bigr\Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert u_{n}-\bar{u_{n}} \Vert ^{2} \\ &= \bigl\Vert x_{n} +\alpha _{n}(x_{n}-x_{n-1}) -p^{*} \bigr\Vert ^{2}-\beta _{n}(1- \beta _{n}) \Vert u_{n}-\bar{u_{n}} \Vert ^{2} \\ &\leq \bigl\Vert x_{n}-p^{*} \bigr\Vert ^{2}+\alpha _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}+2 \alpha _{n} \bigl\Vert x_{n}-p^{*} \bigr\Vert \Vert x_{n}-x_{n-1} \Vert \\ &\quad {} -\beta _{n}(1-\beta _{n}) \Vert u_{n}- \bar{u_{n}} \Vert ^{2}. \end{aligned}$$
(27)

Since \(0< a\leq \beta _{n}\leq b <1 \), \(\lim_{n\to \infty }\|x_{n}-p^{*}\| \) exists, and \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|<\infty \), the above inequality implies

$$ \lim_{n\to \infty } \Vert u_{n}- \bar{u_{n}} \Vert =0.\quad \text{Hence } \bar{u_{n_{k}}} \rightharpoonup x. $$
(28)

Now, let us split our further analysis into two cases.

Case 1. Suppose that the sequence \(\{c^{1}_{n_{k}}\} \) does not converge to 0. Without loss of generality, there exists \(c>0 \) such that \(c^{1}_{n_{k}}\geq c>0 \). By (B2), we have

$$ \lim_{n\to \infty } \bigl\Vert \nabla \psi _{1}(u_{n})-\nabla \psi _{1}( \bar{u_{n}}) \bigr\Vert =0. $$
(29)

From (8), we get

$$ \frac{u_{n_{k}}-\bar{u_{n_{k}}}}{c^{1}_{n_{k}}} +\nabla \psi _{1}( \bar{u_{n_{k}}})- \nabla \psi _{1}(u_{n_{k}})\in \partial \psi _{2}( \bar{u_{n_{k}}}) +\nabla \psi _{1}(\bar{u_{n_{k}}})= \partial \Psi ( \bar{u_{n_{k}}}). $$
(30)

By (28)–(30), it follows from Lemma 2.2 that \(0\in \partial \Psi (x) \), that is, \(x\in \Omega \).

Case 2. Suppose that the sequence \(\{c^{1}_{n_{k}}\} \) converges to 0. Define \(\widehat{c^{1}_{n_{k}}} = \frac{c^{1}_{n_{k}}}{\theta }>c^{1}_{n_{k}}>0\) and

$$ \widehat{u_{n_{k}}}:=\operatorname {prox}_{\widehat{c^{1}_{n_{k}}}\psi _{2}} \bigl(u_{n_{k}}- \widehat{c^{1}_{n_{k}}}\nabla \psi _{1}(u_{n_{k}}) \bigr). $$

By Lemma 2.3, we have

$$ \Vert u_{n_{k}} -\widehat{u_{n_{k}}} \Vert \leq \frac{\widehat{c^{1}_{n_{k}}}}{c^{1}_{n_{k}}} \Vert u_{n_{k}} - \bar{u_{n_{k}}} \Vert =\frac{1}{\theta } \Vert u_{n_{k}} -\bar{u_{n_{k}}} \Vert . $$
(31)

Since \(\|u_{n_{k}} -\bar{u_{n_{k}}}\|\to 0 \), we have \(\|u_{n_{k}} -\widehat{u_{n_{k}}}\|\to 0 \). By (B2), we have

$$ \lim_{k\to \infty } \bigl\Vert \nabla \psi _{1}(u_{n_{k}})-\nabla \psi _{1}( \widehat{u_{n_{k}}}) \bigr\Vert =0. $$
(32)

It follows from the definition of Linesearch that

$$ \widehat{c^{1}_{n_{k}}} \bigl\Vert \nabla \psi _{1}(u_{n_{k}})-\nabla \psi _{1}( \widehat{u_{n_{k}}}) \bigr\Vert >\delta \Vert u_{n_{k}}- \widehat{u_{n_{k}}} \Vert . $$
(33)

By (32) and (33), we get

$$ \lim_{k\to \infty } \frac{ \Vert u_{n_{k}}-\widehat{u_{n_{k}}} \Vert }{\widehat{c^{1}_{n_{k}}}}=0. $$
(34)

From (8), we get

$$ \frac{u_{n_{k}}-\widehat{u_{n_{k}}}}{\widehat{c^{1}_{n_{k}}}} + \nabla \psi _{1}( \widehat{u_{n_{k}}})- \nabla \psi _{1}(u_{n_{k}}) \in \partial \psi _{2}(\widehat{u_{n_{k}}}) +\nabla \psi _{1}( \widehat{u_{n_{k}}})= \partial \Psi ( \widehat{u_{n_{k}}}). $$
(35)

Since \(u_{n_{k}}\rightharpoonup x \) and \(\|u_{n_{k}} -\widehat{u_{n_{k}}}\|\to 0 \), we have \(\widehat{u_{n_{k}}}\rightharpoonup x \). By (34) and (35), it follows from Lemma 2.2 that \(0\in \partial \Psi (x) \), that is, \(x\in \Omega \). Therefore, \(\omega _{w}(x_{n})\subset \Omega \). Using Lemma 2.6, we obtain that \(x_{n} \rightharpoonup \bar{x}\) for some \(\bar{x} \in \Omega \). This completes the proof. □

5 Application in image restoration problems

In this section, we apply the convex minimization problem (1) to image restoration problems. We analyze and compare the efficiency of the SP-FBS and SP-FBSL algorithms with the FBS, R-FBS, FISTA, FBSL, and FISTAL algorithms. All experiments and visualizations are performed in MATLAB on a laptop computer (Intel Core i5, 4.00 GB RAM, Windows 8, 64-bit).

The image restoration problem is a basic linear inverse problem of the form

$$ Ax = y + \varepsilon , $$
(36)

where \(A\in \mathbb{R}^{M\times N} \) and \(y \in \mathbb{R}^{M} \) are known, ε is an unknown noise, and \(x \in \mathbb{R}^{N}\) is the true image to be estimated. To approximate the original image in (36), we need to minimize the value of ε by using the LASSO model [28]:

$$ \min_{x \in \mathbb{R}^{N}} \biggl\{ \frac{1}{2} \Vert Ax-y \Vert _{2}^{2} + \lambda \Vert x \Vert _{1} \biggr\} , $$
(37)

where λ is a positive parameter, \(\|\cdot \|_{1} \) is the \(l_{1} \)-norm, and \(\|\cdot \|_{2}\) is the Euclidean norm. It is noted that problem (1) can be applied to LASSO model (37) by setting

$$ \psi _{1}(x) = \frac{1}{2} \Vert y - Ax \Vert _{2}^{2}\quad \text{and}\quad \psi _{2}(x) = \lambda \Vert x \Vert _{1},$$

where y represents the observed image and \(A = RW \), where R is the kernel matrix and W is the 2-D fast Fourier transform.
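The following toy wiring (with a random stand-in for A and a synthetic sparse signal, not the actual deblurring data) illustrates how (37) plugs into the sketches above through the pair grad_psi1, prox_psi2; soft_threshold and sp_fbsl are the hypothetical helpers defined earlier.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))                 # stand-in for the blurring operator A = RW
x_true = np.zeros(100)
x_true[:5] = 1.0                                   # sparse ground truth
y = A @ x_true + 1e-4 * rng.standard_normal(50)    # observation with small noise

lam = 1e-4
grad_psi1 = lambda x: A.T @ (A @ x - y)               # gradient of psi1(x) = 0.5*||Ax - y||_2^2
prox_psi2 = lambda z, c: soft_threshold(z, c * lam)   # prox of c*psi2, psi2(x) = lam*||x||_1

# Run the SP-FBSL sketch from Sect. 4 with summable inertial parameters.
x_rec = sp_fbsl(grad_psi1, prox_psi2, np.zeros(100),
                alphas=[0.5 ** n for n in range(1, 201)],
                betas=[0.5] * 200, gammas=[0.5] * 200, thetas=[0.5] * 200)
```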

We take two RGB test images (Wat Chedi Luang and antique kitchen, of sizes \(256\times 256 \) and \(512\times 512 \), respectively) and use the peak signal-to-noise ratio (PSNR) in decibels (dB) [28] as the image quality measure, which is formulated as follows:

$$ \operatorname{PSNR}(x_{k}) = 10\log _{10} \biggl( \frac{M\cdot 255^{2}}{ \Vert x_{k}- x \Vert ^{2}_{2}} \biggr), $$

where M is the number of image samples, and x is the original image.
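Assuming pixel intensities on the 0–255 scale, a direct numpy transcription of this formula is:

```python
import numpy as np

def psnr(x_k, x_true):
    """Peak signal-to-noise ratio in dB; x_true is the original image on the 0-255 scale."""
    M = x_true.size                               # number of image samples
    err = np.sum((x_k - x_true) ** 2)             # ||x_k - x||_2^2 (assumed nonzero)
    return 10.0 * np.log10(M * 255.0 ** 2 / err)
```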

Next, we present three scenarios of blurring processes with noise level \(10^{-4}\) in Table 1; the original and blurred images are shown in Fig. 1.

Figure 1 Deblurring of the Wat Chedi Luang and Antique kitchen

Table 1 Details of blurring processes

Next, we test the recovery performance of the studied algorithms on the two images (Wat Chedi Luang and antique kitchen) by setting the parameters as in (38) and choosing the blurred images as the starting points. The maximum number of iterations for all methods is fixed at 200. In the LASSO model (37), the regularization parameter is taken as \(\lambda = 10^{-4}\). The parameters of the studied algorithms are chosen as follows:

$$\begin{aligned} & c_{n}=\frac{1}{L},\qquad \sigma =10 ,\qquad \delta = 0.1, \qquad \theta = 0.9,\qquad \beta _{n}= \gamma _{n}= \theta _{n}= \frac{0.99n}{n+1}, \\ & \alpha _{n}= \textstyle\begin{cases} \frac{n}{n+1} &\text{if } 1\leq n \leq \mathcal{M}, \\ \frac{1}{2^{n}} & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(38)

where \(\mathcal{M} \) is a large positive number which depends on the number of iterations.
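For reproducibility, the parameter sequences in (38) can be generated as below; the cut-off \(\mathcal{M}=150\) is only an illustrative choice (the paper leaves \(\mathcal{M}\) as a large number depending on the iteration count), and the switch to \(1/2^{n}\) keeps \(\sum_{n}\alpha _{n}<\infty \) as required by the convergence theorems.

```python
def parameters_38(n_iter=200, M_cut=150):
    """Parameter sequences of (38): beta_n = gamma_n = theta_n = 0.99*n/(n+1), alpha_n as in (38)."""
    betas = [0.99 * n / (n + 1) for n in range(1, n_iter + 1)]
    alphas = [n / (n + 1) if n <= M_cut else 0.5 ** n for n in range(1, n_iter + 1)]
    return alphas, betas
```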

The obtained results for deblurring the test images (scenarios I–III) are presented in Figs. 2–7. We observe from Figs. 2–8 that, when the iteration number is fixed at 200, the PSNR values of the SP-FBSL and SP-FBS algorithms are slightly higher than those of the other algorithms.

Figure 2 PSNR at the 200th iteration of the FBS, R-FBS, FISTA, FBSL, FISTAL, SP-FBS, and SP-FBSL algorithms for deblurring (scenario I) of the Wat Chedi Luang

Figure 3 PSNR at the 200th iteration of the FBS, R-FBS, FISTA, FBSL, FISTAL, SP-FBS, and SP-FBSL algorithms for deblurring (scenario II) of the Wat Chedi Luang

Figure 4 PSNR at the 200th iteration of the FBS, R-FBS, FISTA, FBSL, FISTAL, SP-FBS, and SP-FBSL algorithms for deblurring (scenario III) of the Wat Chedi Luang

Figure 5 PSNR at the 200th iteration of the FBS, R-FBS, FISTA, FBSL, FISTAL, SP-FBS, and SP-FBSL algorithms for deblurring (scenario I) of the Antique kitchen

Figure 6 PSNR at the 200th iteration of the FBS, R-FBS, FISTA, FBSL, FISTAL, SP-FBS, and SP-FBSL algorithms for deblurring (scenario II) of the Antique kitchen

Figure 7 PSNR at the 200th iteration of the FBS, R-FBS, FISTA, FBSL, FISTAL, SP-FBS, and SP-FBSL algorithms for deblurring (scenario III) of the Antique kitchen

Figure 8 The graphs of PSNR of the algorithms: (a)–(c) for the “Wat Chedi Luang” image and (d)–(f) for the “Antique kitchen” image

6 Conclusions

In this work, we propose an inertial SP-forward–backward splitting (SP-FBS) algorithm for solving convex minimization problems. We prove that a sequence generated by the SP-FBS algorithm converges weakly to a solution of problem (1) under the assumption that the gradient of the smooth part of the objective function is Lipschitz continuous; in this case the stepsize of the algorithm depends on the Lipschitz constant. Moreover, we remove the Lipschitz continuity assumption by using the linesearch technique of Cruz and Nghia [9] and propose an inertial SP-forward–backward splitting algorithm with linesearch (SP-FBSL) for the same problem. We also prove that a sequence generated by SP-FBSL converges weakly to a minimizer of the sum of the two convex functions under suitable control conditions. Finally, we present numerical experiments of the studied algorithms on image restoration problems. From these experiments, we see that our algorithms are more efficient than the well-known algorithms in [3, 8, 9, 16].

Availability of data and materials

Contact the author for data requests.

References

  1. Aremu, K.O., Izuchukwu, C., Grace, O.N., Mewomo, O.T.: Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim. 17(4), 2161–2180 (2021). https://doi.org/10.3934/jimo.2020063

  2. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)

  3. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)

  4. Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont (1997)

  5. Burachik, R.S., Iusem, A.N.: Set-Valued Mappings and Enlargements of Monotone Operator. Springer, New York (2007)

  6. Bussaban, L., Suantai, S., Kaewkhao, A.: A parallel inertial S-iteration forward–backward algorithm for regression and classification problems. Carpath. J. Math. 36, 35–44 (2020)

  7. Combettes, P.L., Pesquet, J.C.: A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 1, 564–574 (2007)

  8. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)

  9. Cruz, J.Y.B., Nghia, T.T.A.: On the convergence of the forward–backward splitting method with linesearches. Optim. Methods Softw. 31, 1209–1238 (2016)

  10. Dunn, J.C.: Convexity, monotonicity, and gradient processes in Hilbert space. J. Math. Anal. Appl. 53, 145–158 (1976)

  11. Hanjing, A., Suantai, S.: A fast image restoration algorithm based on a fixed point and optimization. Mathematics 8, 378 (2020). https://doi.org/10.3390/math8030378

  12. Huang, Y., Dong, Y.: New properties of forward–backward splitting and a practical proximal-descent algorithm. Appl. Math. Comput. 237, 60–68 (2014)

  13. Ishikawa, S.: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147–150 (1974)

  14. Kankam, K., Pholasa, N., Cholamjiak, P.: On convergence and complexity of the modified forward–backward method involving new linesearches for convex minimization. Math. Methods Appl. Sci. 42, 1352–1362 (2019)

  15. Lin, L.J., Takahashi, W.: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16, 429–453 (2012)

  16. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)

  17. Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)

  18. Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 4, 154–158 (1970)

  19. Moreau, J.J.: Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math. 255, 2897–2899 (1962)

  20. Nakajo, K., Shimoji, K., Takahashi, W.: On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal., Theory Methods Appl. 71(1–2), 112–119 (2009)

  21. Okeke, C.C., Izuchukwu, C.: A strong convergence theorem for monotone inclusion and minimization problems in complete CAT(0) spaces. Optim. Methods Softw. 34(6), 1168–1183 (2019)

  22. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)

  23. Phuengrattana, W., Suantai, S.: On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 235, 3006–3014 (2011). https://doi.org/10.1016/j.cam.2010.12.022

  24. Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)

  25. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 17, 877–898 (1976)

  26. Suantai, S., Kankam, K., Cholamjiak, P.: A novel forward–backward algorithm for solving convex minimization problem in Hilbert spaces. Mathematics 8, 42 (2020). https://doi.org/10.3390/math8010042

  27. Tan, K., Xu, H.K.: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301–308 (1993)

  28. Thung, K., Raveendran, P.: A survey of image quality measures. In: Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December, pp. 1–4 (2009)

Acknowledgements

This work was supported by Fundamental Fund 2022, Chiang Mai University and Thailand Science Research and Innovation under the project IRN62W0007. This research has also received funding support from the NSRI via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183]. We also would like to thank Nakhon Phanom University and Rajamangala University of Technology Isan for partial financial support.

Funding

This work was supported by Fundamental Fund 2022, Chiang Mai University, Thailand Science Research and Innovation under the project IRN62W0007, NSRI [grant number B05F640183], Rajamangala University of Technology Isan and Nakhon Phanom University.

Author information

Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Adisak Hanjing.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Yatakoat, P., Suantai, S. & Hanjing, A. On some accelerated optimization algorithms based on fixed point and linesearch techniques for convex minimization problems with applications. Adv Cont Discr Mod 2022, 25 (2022). https://doi.org/10.1186/s13662-022-03698-5
