On some accelerated optimization algorithms based on fixed point and linesearch techniques for convex minimization problems with applications
Advances in Continuous and Discrete Models volume 2022, Article number: 25 (2022)
Abstract
In this paper, we introduce and study a new accelerated algorithm based on the forward–backward and SP-algorithm for solving a convex minimization problem of the sum of two convex and lower semicontinuous functions in a Hilbert space. Under suitable control conditions, a weak convergence theorem of the proposed algorithm based on a fixed point is established. Moreover, we choose the stepsize of our algorithm to be independent of the Lipschitz constant of the gradient of the objective function by using a linesearch technique, and a weak convergence result of the proposed algorithm is then analyzed. As applications, we apply the proposed algorithm to image restoration problems and compare its convergence behavior with other well-known algorithms in the literature. In our experiments, the proposed algorithms are more efficient than the others.
1 Introduction
Throughout this paper, let \(\mathcal{H}\) be a real Hilbert space with an inner product \(\langle \cdot , \cdot \rangle \) and the induced norm \(\| \cdot \|\). Let \(\mathbb{R}\) and \(\mathbb{N}\) be the set of real numbers and the set of positive integers, respectively. Let I denote the identity operator on \(\mathcal{H}\). The symbols ⇀ and → denote the weak and strong convergence, respectively.
In this work, we are interested in solving the convex minimization problems of the following form:
where \(\psi _{1} : \mathcal{H}\to \mathbb{R} \) is a convex and differentiable function with an L-Lipschitz continuous gradient and \(\psi _{2} :\mathcal{H}\to \mathbb{R}\cup \{\infty \} \) is a proper lower semi-continuous and convex function. If x is a solution of problem (1), then x is characterized by the fixed point equation of the forward–backward operator
where \(\alpha >0\), \(\operatorname {prox}_{\psi _{2}}\) is the proximity operator of \(\psi _{2}\), and \(\nabla \psi _{1}\) stands for the gradient of \(\psi _{1}\).
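For concreteness, problem (1) and the fixed-point characterization just described take the standard form (the article's own displays are assumed to agree with this formulation)
\[ \min_{x\in \mathcal{H}} \bigl\{\psi _{1}(x)+\psi _{2}(x)\bigr\} \quad\text{and}\quad x = \operatorname {prox}_{\alpha \psi _{2}}\bigl(x-\alpha \nabla \psi _{1}(x)\bigr). \]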
In recent years, various iterative algorithms for solving a convex minimization problem of the sum of two convex functions were introduced and studied by many mathematicians, see [1, 4, 7–10, 14–16, 18, 21, 25] for instance.
One of the popular iterative algorithms, called forward–backward splitting (FBS) algorithm [8, 16], is defined by the following: let \(x_{1} \in \mathcal{H}\) and set
where \(0 < c_{n} < 2/L\).
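For concreteness, here is a minimal Python sketch of the FBS iteration, written for generic callables; all names (`forward_backward`, `grad_psi1`, `prox_psi2`) are illustrative and not taken from the article.

```python
import numpy as np

def forward_backward(x0, grad_psi1, prox_psi2, step, n_iter=200):
    """Forward-backward splitting: x_{n+1} = prox_{c_n psi2}(x_n - c_n * grad psi1(x_n)).

    grad_psi1 : callable x -> gradient of the smooth term psi1 at x
    prox_psi2 : callable (v, c) -> prox_{c psi2}(v) for the nonsmooth term psi2
    step      : callable n -> c_n, with 0 < c_n < 2/L when grad psi1 is L-Lipschitz
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        c = step(n)
        x = prox_psi2(x - c * grad_psi1(x), c)  # forward (gradient) step, then backward (proximal) step
    return x
```

With \(\psi _{2}=\lambda \|\cdot \|_{1}\), for example, `prox_psi2` is componentwise soft-thresholding; this specialization is used in Sect. 5.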
In 2005, Combettes and Wajs [8] introduced the relaxed forward–backward splitting (R-FBS) algorithm, which is defined as follows: let \(\varepsilon \in (0,\min (1,\frac{1}{L})) \), \(x_{1} \in \mathbb{R}^{N}\), and set
where \(c_{n}\in [\varepsilon , \frac{2}{L}-\varepsilon ] \) and \(\beta _{n}\in [\varepsilon ,1] \).
To accelerate the forward–backward splitting algorithm, an inertial technique is employed. Accordingly, various inertial algorithms have been introduced and studied in order to accelerate the convergence behavior of such methods, see [3, 6, 11, 26] for example. Beck and Teboulle [3] introduced a fast iterative shrinkage-thresholding algorithm (FISTA) for solving problem (1). FISTA is defined by the following: let \(x_{1}=y_{0}\in \mathbb{R}^{N}\), \(t_{1}=1 \) and set
Note that \(\alpha _{n} \) is called an inertial parameter which controls the momentum \(y_{n}-y_{n-1} \).
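For reference, the standard FISTA updates, written so as to match the stated initialization \(x_{1}=y_{0}\), \(t_{1}=1\) and the momentum term \(y_{n}-y_{n-1}\) (the indexing in the article's own display is assumed to agree), are
\[ y_{n} = \operatorname {prox}_{\frac{1}{L}\psi _{2}}\Bigl(x_{n}-\tfrac{1}{L}\nabla \psi _{1}(x_{n})\Bigr),\qquad t_{n+1} = \frac{1+\sqrt{1+4t_{n}^{2}}}{2},\qquad \alpha _{n} = \frac{t_{n}-1}{t_{n+1}},\qquad x_{n+1} = y_{n}+\alpha _{n}(y_{n}-y_{n-1}). \]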
It is observed that both the FBS and FISTA algorithms require the Lipschitz continuity of the gradient of \(\psi _{1}\), and the stepsize depends on the Lipschitz constant L, which is generally not easy to compute in practice.
In 2016, Cruz and Nghia [9] proposed a linesearch technique for selecting the stepsize which is independent of the Lipschitz constant L. Their linesearch technique is given by the following process:
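A Python sketch of a backtracking linesearch in the spirit of Cruz and Nghia [9] is given below. The stopping test shown (shrink \(c\) while \(c\|\nabla \psi _{1}(J(x,c))-\nabla \psi _{1}(x)\| > \delta \|J(x,c)-x\|\), where \(J(x,c)=\operatorname {prox}_{c\psi _{2}}(x-c\nabla \psi _{1}(x))\)) is one common variant and is our assumption about the precise form used in the article.

```python
import numpy as np

def linesearch(x, sigma, theta, delta, grad_psi1, prox_psi2):
    """Backtracking linesearch (sketch): start from c = sigma and shrink by theta until
    c * ||grad psi1(J(x, c)) - grad psi1(x)|| <= delta * ||J(x, c) - x||,
    where J(x, c) = prox_{c psi2}(x - c * grad psi1(x))."""
    c = sigma
    gx = grad_psi1(x)
    j = prox_psi2(x - c * gx, c)
    while c * np.linalg.norm(grad_psi1(j) - gx) > delta * np.linalg.norm(j - x):
        c *= theta                         # shrink the trial stepsize
        j = prox_psi2(x - c * gx, c)       # recompute the forward-backward point
    return c
```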

The forward–backward splitting algorithm in which the stepsize \(c_{n}\) is generated by the above linesearch was introduced by Cruz and Nghia [9] and defined by the following:
(FBSL). Let \(x_{1} \in \mathcal{H}\), \(\sigma >0\), \(\delta \in (0, 1/2)\), and \(\theta \in (0,1)\). For \(n \geq 1\), let
where \(c_{n}:= \operatorname{Linesearch} (x_{n},\sigma , \theta , \delta )\).
Moreover, they also proposed an accelerated algorithm with an inertial term as follows.
(FISTAL). Let \(x_{0}=x_{1} \in \mathcal{H}\), \(\alpha _{0}=\sigma > 0\), \(\delta \in (0, 1/2)\), \(\theta \in (0,1)\), and \(t_{1}=1\). For \(n \geq 1\), let
where \(c_{n}:= \operatorname{Linesearch} (y_{n},c_{n-1}, \theta , \delta ) \).
Over the past decade, various fixed point algorithms for nonexpansive operators have been introduced and studied for solving the convex minimization problem (1); see [11, 13, 17, 23]. In 2011, Phuengrattana and Suantai [23] introduced a new fixed point algorithm known as the SP-iteration and showed that it has a better convergence rate than the Ishikawa [13] and Mann [17] iterations. The SP-iteration for a nonexpansive operator S is defined as follows:
where \(x_{1} \in \mathcal{H}\), \(\{\beta _{n}\}\), \(\{\gamma _{n}\} \), and \(\{\theta _{n}\} \) are sequences in \((0,1) \).
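Concretely, for a nonexpansive operator S the SP-iteration takes the three-step averaged form
\[ \begin{aligned} z_{n} &= (1-\beta _{n})x_{n}+\beta _{n}Sx_{n},\\ y_{n} &= (1-\gamma _{n})z_{n}+\gamma _{n}Sz_{n},\\ x_{n+1} &= (1-\theta _{n})y_{n}+\theta _{n}Sy_{n}, \end{aligned} \]
where the assignment of \(\beta _{n}\), \(\gamma _{n}\), \(\theta _{n}\) to the individual steps is our reading and should be checked against [23].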
Motivated by these works, we combine the ideas of the SP-iteration, the FBS algorithm, and a linesearch technique to propose a new accelerated algorithm for convex minimization problems which can be applied to image restoration problems. We obtain weak convergence theorems in Hilbert spaces under suitable conditions.
2 Preliminaries
In this section, we give some definitions and basic properties for proving our results in the next sections.
Let \(\psi : \mathcal{H} \to \mathbb{R}\cup \{\infty \} \) be a proper, lower semi-continuous, and convex function. The proximity (or proximal) operator [2, 19] of ψ, denoted by \(\operatorname {prox}_{\psi }\), is defined as follows: for each \(x \in \mathcal{H}\), \(\operatorname {prox}_{\psi }x\) is the unique solution of the minimization problem
The proximity operator can be formulated in the equivalent form
where ∂ψ is the subdifferential of ψ defined by
Moreover, we have the following useful fact:
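In their usual explicit form, these objects read
\[ \operatorname {prox}_{\psi }x = \operatorname*{argmin}_{u\in \mathcal{H}}\Bigl\{\psi (u)+\tfrac{1}{2}\|u-x\|^{2}\Bigr\},\qquad \operatorname {prox}_{\psi } = (I+\partial \psi )^{-1}, \]
\[ \partial \psi (x) = \bigl\{z\in \mathcal{H} : \psi (y)\geq \psi (x)+\langle z, y-x\rangle \ \text{for all } y\in \mathcal{H}\bigr\}, \]
and the useful fact referred to above is, presumably, the standard characterization
\[ p = \operatorname {prox}_{c\psi }(x)\quad \Longleftrightarrow \quad \frac{x-p}{c}\in \partial \psi (p),\qquad c>0. \]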
Note that the subdifferential operator ∂ψ is maximal monotone (see [5] for more details) and the solution of (1) is a fixed point of the following operator:
where \(c>0\). If \(0< c< \frac{2}{L} \), we know that \(\operatorname{prox}_{c\psi _{2}}(I-c\nabla \psi _{1}) \) is a nonexpansive operator.
An operator \(S : \mathcal{H} \rightarrow \mathcal{H}\) is said to be Lipschitz continuous if there exists \(L > 0\) such that
If S is 1-Lipschitz continuous, then S is called a nonexpansive operator. A point \(x\in \mathcal{H} \) is called a fixed point of S if \(x=Sx \). The set of all fixed points of S is denoted by \(\operatorname {Fix}(S)\).
The operator \(I - S\) is said to be demiclosed at zero if, for any sequence \(\{x_{n}\}\) in \(\mathcal{H}\) converging weakly to x such that \(\{x_{n} - Sx_{n}\}\) converges strongly to 0, we have \(x \in \operatorname {Fix}(S)\). It is known [22] that if S is a nonexpansive operator, then \(I - S\) is demiclosed at zero. Let \(S : \mathcal{H} \rightarrow \mathcal{H}\) be a nonexpansive operator and \(\{S_{n} : \mathcal{H} \rightarrow \mathcal{H}\}\) be a sequence of nonexpansive operators such that \(\emptyset \neq \operatorname {Fix}(S) \subset \bigcap_{n=1}^{\infty } \operatorname {Fix}(S_{n})\). Then \(\{S_{n}\}\) is said to satisfy NST-condition (I) with S [20] if, for each bounded sequence \(\{x_{n}\}\) in \(\mathcal{H}\), \(\lim_{n\to \infty }\|x_{n}-S_{n}x_{n}\|=0\) implies \(\lim_{n\to \infty }\|x_{n}-Sx_{n}\|=0\).
Let \(x, y \in \mathcal{H}\) and \(t \in [0, 1]\). The following inequalities hold on \(\mathcal{H}\):
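Two standard identities of this type, commonly used in this kind of analysis (our selection; the article's display may list a different subset), are
\[ \|x+y\|^{2} \leq \|x\|^{2}+2\langle y, x+y\rangle ,\qquad \|tx+(1-t)y\|^{2} = t\|x\|^{2}+(1-t)\|y\|^{2}-t(1-t)\|x-y\|^{2}. \]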
The following lemmas are crucial for our main results.
Lemma 2.1
([6])
Let \(\psi _{1} : \mathcal{H} \to \mathbb{R} \) be a convex and differentiable function with an L-Lipschitz continuous gradient, and let \(\psi _{2} : \mathcal{H} \to \mathbb{R}\cup \{\infty \} \) be a proper lower semi-continuous and convex function. Let \(S_{n} := \operatorname {prox}_{c_{n}\psi _{2}}(I - c_{n}\nabla \psi _{1})\) and \(S := \operatorname {prox}_{c\psi _{2}}(I - c\nabla \psi _{1})\), where \(c_{ n}, c \in (0,2/L)\) with \(c_{n} \rightarrow c\) as \(n \rightarrow \infty \). Then \(\{ S_{n}\}\) satisfies NST-condition (I) with S.
Lemma 2.2
([24])
If \(f : \mathcal{H} \to \mathbb{R}\cup \{\infty \} \) is a proper, lower semi-continuous, and convex function, then the graph of ∂f defined by \(\operatorname{Gph}(\partial f):= \{(x,y)\in \mathcal{H}\times \mathcal{H} : y\in \partial f(x)\} \) is demiclosed, i.e., if the sequence \(\{(x_{k}, y_{k})\} \) in \(\operatorname{Gph}(\partial f)\) satisfies \(x_{k}\rightharpoonup x \) and \(y_{k}\to y \), then \((x,y) \in \operatorname{Gph}(\partial f)\).
Lemma 2.3
([12])
Let \(\psi _{1}, \psi _{2}:\mathcal{H}\to \mathbb{R}\cup \{\infty \}\) be two proper, lower semi-continuous, and convex functions. Then, for any \(x\in \mathcal{H}\) and \(c_{2}\geq c_{1}>0 \), we have
Lemma 2.4
([11])
Let \(\{a_{n}\} \) and \(\{t_{n}\} \) be two sequences of nonnegative real numbers such that
Then \(a_{n+1}\leq M \cdot \prod_{j=1}^{n}(1+2t_{j})\), where \(M= \max \{a_{1}, a_{2}\}\). Moreover, if \(\sum_{n=1}^{\infty }t_{n}<\infty \), then \(\{a_{n}\} \) is bounded.
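The recursion assumed on \(\{a_{n}\}\) and \(\{t_{n}\}\) is, in our reading of [11], of the form
\[ a_{n+1} \leq (1+t_{n})a_{n}+t_{n}a_{n-1}\quad \text{for all } n\geq 2 . \]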
Lemma 2.5
([27])
Let \(\{a_{n}\}\) and \(\{b_{n}\}\) be two sequences of nonnegative real numbers such that \(a_{n+1}\leq a_{n}+b_{n}\) for all \(n \in \mathbb{N}\). If \(\sum_{n=1}^{\infty }b_{n}< \infty \), then \(\lim_{n\to \infty }a_{n} \) exists.
Lemma 2.6
([22])
Let \(\{x_{n}\} \) be a sequence in \(\mathcal{H} \) such that there exists a nonempty set \(\Omega \subset \mathcal{H} \) satisfying:
- (i) For every \(p\in \Omega \), \(\lim_{n\to \infty }\|x_{n}-p\| \) exists;
- (ii) \(\omega _{w}(x_{n})\subset \Omega \),
where \(\omega _{w}(x_{n}) \) is the set of all weak-cluster points of \(\{x_{n}\}\). Then \(\{x_{n}\} \) converges weakly to a point in Ω.
3 The SP-forward–backward splitting based on a fixed point algorithm
In this section, we introduce a new accelerated algorithm by using FBS and SP-iteration with the inertial technique to solve a convex minimization problem of the sum of two convex functions \(\psi _{1}\) and \(\psi _{2} \), where
- \(\psi _{1} :\mathcal{H}\to \mathbb{R} \) is a convex and differentiable function with an L-Lipschitz continuous gradient;
- \(\psi _{2} :\mathcal{H}\to \mathbb{R}\cup \{\infty \} \) is a proper lower semi-continuous and convex function;
- \(\Omega := \operatorname {Argmin}(\psi _{1}+\psi _{2})\neq \emptyset \).
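Algorithm 1 (SP-FBS) is stated in the article as a boxed algorithm. Based on its description as an inertial SP-iteration built from the forward–backward operators \(S_{n}=\operatorname {prox}_{c_{n}\psi _{2}}(I-c_{n}\nabla \psi _{1})\) (see the proof of Theorem 3.1), a plausible Python sketch is given below; the ordering of the parameters within the three averaged steps is our assumption (\(\beta _{n}\) is placed first so that condition (C1) matches its role in the proof), and all identifiers are illustrative.

```python
import numpy as np

def sp_fbs(x1, grad_psi1, prox_psi2, c, alpha, beta, gamma, theta, n_iter=200):
    """Sketch of an inertial SP forward-backward scheme (SP-FBS); the step ordering is assumed.

    c, alpha, beta, gamma, theta : callables n -> c_n, alpha_n, beta_n, gamma_n, theta_n
    chosen to satisfy conditions (C1)-(C3) of Theorem 3.1.
    """
    def S(v, cn):                                           # forward-backward operator S_n
        return prox_psi2(v - cn * grad_psi1(v), cn)

    x_prev = x_curr = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        cn = c(n)
        u = x_curr + alpha(n) * (x_curr - x_prev)           # inertial extrapolation
        v = (1 - beta(n)) * u + beta(n) * S(u, cn)          # first averaged step
        w = (1 - gamma(n)) * v + gamma(n) * S(v, cn)        # second averaged step
        x_next = (1 - theta(n)) * w + theta(n) * S(w, cn)   # third averaged step
        x_prev, x_curr = x_curr, x_next
    return x_curr
```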
Now, we are ready to prove the convergence theorem of Algorithm 1 (SP-FBS).
Theorem 3.1
Let \(\{x_{n}\} \) be the sequence generated by Algorithm 1. Assume that the sequences \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\), \(\{\gamma _{n}\}\), \(\{\theta _{n}\}\), and \(\{c_{n}\}\) satisfy the following conditions:
- (C1) \(\gamma _{n}, \theta _{n} \in [0,1]\), \(\beta _{n}\in [a,b]\subset (0,1)\);
- (C2) \(\alpha _{n}\geq 0\), \(\sum_{n=1}^{\infty }\alpha _{n} <\infty \);
- (C3) \(c_{n}, c \in (0, 2/L)\) such that \(\lim_{n\to \infty } c_{n} = c\).
Then the following statements hold:
- (i) \(\|x_{n+1}-p^{*}\|\leq M \cdot \prod_{j=1}^{n}(1+2\alpha _{j}) \), where \(M=\max \{\|x_{1}-p^{*}\|, \|x_{2}-p^{*}\|\} \) and \(p^{*}\in \Omega \).
- (ii) \(\{x_{n}\} \) converges weakly to a point in Ω.
Proof
For each \(n\in \mathbb{N} \), set \(S_{n} := \operatorname {prox}_{c_{n}\psi _{2}}(I-c_{n}\nabla \psi _{1}) \) and \(S := \operatorname {prox}_{c\psi _{2}}(I-c\nabla \psi _{1}) \). Then the sequence \(\{x_{n}\} \) generated by Algorithm 1 is the same as that generated by the following inertial SP-iteration:
By condition (C3), we know that \(S_{n}\) and S are nonexpansive operators with \(\bigcap_{n=1}^{\infty } \operatorname {Fix}(S_{n})= \operatorname {Fix}(S) = \operatorname {Argmin}(\psi _{1}+ \psi _{2}):=\Omega \). By Lemma 2.1, we obtain that \(\{ S_{n}\}\) satisfies NST-condition (I) with S.
(i) Let \(p^{*}\in \Omega \). By (11), we have
and
Similarly, we get that
From (12), (13), and (14), we get
This implies that
Applying Lemma 2.4, we get \(\|x_{n+1}-p^{*}\|\leq M \cdot \prod_{j=1}^{n}(1+2\alpha _{j}) \), where \(M=\max \{\|x_{1}-p^{*}\|, \|x_{2}-p^{*}\|\} \).
(ii) It follows from (i) that \(\{x_{n}\} \) is bounded. This implies \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|<\infty \). By (15) and Lemma 2.5, we obtain that \(\lim_{n\to \infty }\|x_{n}-p^{*}\| \) exists. By (10), we have
From (9), we also have
By (14), (17), and (18), we obtain
Since \(0< a\leq \beta _{n}\leq b<1\), \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1} \|<\infty \) and \(\lim_{n\to \infty }\|x_{n}-p^{*}\| \) exists, the above inequality implies \(\lim_{n\to \infty }\|u_{n}-S_{n}u_{n}\|=0 \). Since \(\{u_{n}\} \) is bounded and \(\{ S_{n}\}\) satisfies NST-condition (I) with S, we have \(\lim_{n\to \infty }\|u_{n}-Su_{n}\|=0 \). By the demiclosedness of \(I-S \), we have \(\omega _{w}(u_{n})\subset \operatorname {Fix}(S) =\Omega \). Since \(\lim_{n\to \infty }\|u_{n}-x_{n}\|=0\), we have \(\omega _{w}(x_{n})\subset \omega _{w}(u_{n})\subset \operatorname {Fix}(S) = \Omega \). By Lemma 2.6, we can conclude that \(\{x_{n}\} \) converges weakly to a point in Ω. This completes the proof. □
Remark 3.2
If we set \(\alpha _{n}=0\), \(S_{n}=S \) for all \(n\in \mathbb{N} \), then Algorithm 1 is reduced to the SP-algorithm [23]:
where \(\beta _{n},\gamma _{n},\theta _{n}\in (0,1) \).
Remark 3.3
If we set \(\alpha _{n}=\gamma _{n}=\theta _{n}=0\) for all \(n\in \mathbb{N} \), then Algorithm 1 is reduced to the Krasnosel’skii–Mann algorithm [8]:
where \(\beta _{n}\in (0,1) \).
4 The SP-forward–backward splitting algorithm with linesearch technique
In this section, we introduce a new accelerated algorithm by using the inertial and linesearch technique to solve a convex minimization problem of the sum of two convex functions \(\psi _{1}\) and \(\psi _{2} \), where
- (B1) \(\psi _{1}:\mathcal{H}\to \mathbb{R}\) and \(\psi _{2}:\mathcal{H}\to \mathbb{R}\cup \{\infty \}\) are two proper, lower semi-continuous, and convex functions and \(\Omega := \operatorname {Argmin}(\Psi :=\psi _{1}+\psi _{2})\neq \emptyset \);
- (B2) \(\psi _{1} \) is differentiable on \(\mathcal{H}\) and the gradient \(\nabla \psi _{1} \) is uniformly continuous on \(\mathcal{H} \).
We note that assumption (B2) is weaker than the Lipschitz continuity assumption on \(\nabla \psi _{1}\).
Lemma 4.1
([9])
Let \(\{x_{n}\} \) be a sequence generated by the following algorithm:
where \(c_{n}:= \operatorname {Linesearch}(x_{n},\sigma , \theta , \delta ) \). Then, for each \(n\geq 1 \) and \(p \in \mathcal{H} \),
Now, we are ready to prove the convergence theorem of Algorithm 2 (SP-FBSL).
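Algorithm 2 (SP-FBSL) is likewise stated as a boxed algorithm. Guided by the proof below, which works with three stepsizes \(c^{1}_{n}\), \(c^{2}_{n}\), \(c^{3}_{n}\), a plausible sketch replaces each fixed stepsize of SP-FBS by a linesearch call; the exact arguments passed to the linesearch and the step ordering are our assumptions, and the sketch reuses the `linesearch` routine from the sketch in Sect. 1.

```python
import numpy as np

def sp_fbsl(x1, grad_psi1, prox_psi2, sigma, theta_ls, delta,
            alpha, beta, gamma, theta, n_iter=200):
    """Sketch of SP-FBS with each substep's stepsize chosen by backtracking (SP-FBSL).

    `linesearch` is the backtracking routine sketched in Sect. 1; theta_ls is its shrink factor,
    while alpha, beta, gamma, theta are the inertial/averaging parameters of Theorem 4.2.
    """
    def S(v, c):
        return prox_psi2(v - c * grad_psi1(v), c)

    x_prev = x_curr = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        u = x_curr + alpha(n) * (x_curr - x_prev)                        # inertial extrapolation
        c1 = linesearch(u, sigma, theta_ls, delta, grad_psi1, prox_psi2)
        v = (1 - beta(n)) * u + beta(n) * S(u, c1)
        c2 = linesearch(v, sigma, theta_ls, delta, grad_psi1, prox_psi2)
        w = (1 - gamma(n)) * v + gamma(n) * S(v, c2)
        c3 = linesearch(w, sigma, theta_ls, delta, grad_psi1, prox_psi2)
        x_next = (1 - theta(n)) * w + theta(n) * S(w, c3)
        x_prev, x_curr = x_curr, x_next
    return x_curr
```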
Theorem 4.2
Let \(\{x_{n}\} \) be the sequence generated by Algorithm 2. If \(\{\gamma _{n}\},\{\theta _{n}\}\subset [0,1]\), \(\beta _{n}\in [a,b]\subset (0,1)\), \(\alpha _{n}\geq 0 \) for all \(n\in \mathbb{N} \) and \(\sum_{n=1}^{\infty }\alpha _{n}<\infty \), then \(\{x_{n}\} \) converges weakly to a point in Ω.
Proof
We denote
Let \(p^{*} \in \Omega \). Applying Lemma 4.1, we have, for any \(n\in \mathbb{N} \) and \(p \in \mathcal{H} \),
Putting \(p=p^{*} \) in (20)–(22), we have
So, we obtain
Similarly, we get
This implies by Lemma 2.4 that \(\{x_{n}\} \) is bounded, and hence \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|<\infty \). It follows that
By (25) and Lemma 2.5, \(\lim_{n\to \infty }\|x_{n}-p^{*}\| \) exists and \(\lim_{n\to \infty }\|x_{n}-p^{*}\|= \lim_{n\to \infty }\|u_{n}-p^{*} \|\).
Next, we show that \(\omega _{w}(x_{n})\subset \Omega \). Let \(x\in \omega _{w}(x_{n}) \), i.e., there exists a subsequence \(\{x_{n_{k}}\} \) of \(\{x_{n}\} \) such that \(x_{n_{k}}\rightharpoonup x \). By (26), we have \(u_{n_{k}}\rightharpoonup x \).
From (23), (24), and (9), we have
Since \(0< a\leq \beta _{n}\leq b <1 \), \(\lim_{n\to \infty }\|x_{n}-p^{*}\| \) exists, and \(\sum_{n=1}^{\infty }\alpha _{n}\|x_{n}-x_{n-1}\|<\infty \), the above inequality implies
Now, let us split our further analysis into two cases.
Case 1. Suppose that the sequence \(\{c^{1}_{n_{k}}\} \) does not converge to 0. Without loss of generality, we may assume that there exists \(c>0 \) such that \(c^{1}_{n_{k}}\geq c>0 \) for all k. By (B2), we have
From (8), we get
By (28)–(30), it follows from Lemma 2.2 that \(0\in \partial \Psi (x) \), that is, \(x\in \Omega \).
Case 2. Suppose that the sequence \(\{c^{1}_{n_{k}}\} \) converges to 0. Define \(\widehat{c^{1}_{n_{k}}} = \frac{c^{1}_{n_{k}}}{\theta }>c^{1}_{n_{k}}>0\) and
By Lemma 2.3, we have
Since \(\|u_{n_{k}} -\bar{u_{n_{k}}}\|\to 0 \), we have \(\|u_{n_{k}} -\widehat{u_{n_{k}}}\|\to 0 \). By (B2), we have
It follows from the definition of Linesearch that
From (8), we get
Since \(u_{n_{k}}\rightharpoonup x \) and \(\|u_{n_{k}} -\widehat{u_{n_{k}}}\|\to 0 \), we have \(\widehat{u_{n_{k}}}\rightharpoonup x \). By (34) and (35), it follows from Lemma 2.2 that \(0\in \partial \Psi (x) \), that is, \(x\in \Omega \). Therefore, \(\omega _{w}(x_{n})\subset \Omega \). Using Lemma 2.6, we obtain that \(x_{n} \rightharpoonup \bar{x}\) for some \(\bar{x} \in \Omega \). This completes the proof. □
5 Application in image restoration problems
In this section, we apply the convex minimization problem (1) to image restoration problems. We analyze and compare the efficiency of the SP-FBS and SP-FBSL algorithms with that of the FBS, R-FBS, FISTA, FBSL, and FISTAL algorithms. All experiments and visualizations are performed on a laptop computer (Intel Core-i5/4.00 GB RAM/Windows 8/64-bit) with MATLAB.
The image restoration problem is a basic linear inverse problem of the form
where \(A\in \mathbb{R}^{M\times N} \) and \(y \in \mathbb{R}^{M} \) are known, ε is an unknown noise, and \(x \in \mathbb{R}^{N}\) is the true image to be estimated. To approximate the original image in (36), we minimize the effect of the noise ε by using the LASSO model [28]:
where λ is a positive parameter, \(\|\cdot \|_{1} \) is the \(l_{1} \)-norm, and \(\|\cdot \|_{2}\) is the Euclidean norm. It is noted that problem (1) can be applied to LASSO model (37) by setting
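Consistent with the LASSO model (37), the natural (and presumably intended) choice is
\[ \psi _{1}(x) = \tfrac{1}{2}\|Ax-y\|_{2}^{2},\qquad \psi _{2}(x) = \lambda \|x\|_{1},\qquad \text{so that}\quad \nabla \psi _{1}(x) = A^{T}(Ax-y), \]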
where y represents the observed image and \(A = RW \), where R is the kernel matrix and W is 2-D fast Fourier transform.
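As a usage illustration, the setting above can be plugged into the `forward_backward` sketch from Sect. 1. Here \(A\) is represented by a plain random matrix purely for illustration (in the experiments it is the composition \(A=RW\) of the blurring kernel with the 2-D FFT), and all names and values are ours, not the article's.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 200))     # stand-in for the blurring operator A = RW
y = rng.standard_normal(100)            # stand-in for the observed (blurred, noisy) image
lam = 1e-4                              # regularization parameter lambda

grad_psi1 = lambda x: A.T @ (A @ x - y)                                     # gradient of 0.5*||Ax - y||_2^2
prox_psi2 = lambda v, c: np.sign(v) * np.maximum(np.abs(v) - c * lam, 0.0)  # soft-thresholding = prox of c*lam*||.||_1

L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad_psi1: largest singular value of A, squared
x_rec = forward_backward(np.zeros(200), grad_psi1, prox_psi2, step=lambda n: 1.0 / L)
```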
We take two RGB test images (Wat Chedi Luang and antique kitchen with sizes \(256\times 256 \) and \(512\times 512 \), respectively) and use the peak signal-to-noise ratio (PSNR) in decibel (dB) [28] as the image quality measure, which is formulated as follows:
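A common formulation consistent with the description, assuming 8-bit images with peak intensity 255 (our assumption), is
\[ \operatorname {PSNR}(x_{n}) = 10\log_{10}\Bigl(\frac{255^{2}}{\mathrm{MSE}}\Bigr),\qquad \mathrm{MSE} = \frac{1}{M}\|x_{n}-x\|_{2}^{2}, \]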
where M is the number of image samples, and x is the original image.
Next, we present three scenarios of blurring processes and noise \(10^{-4}\) in Table 1; the original images and the blurred images are shown in Fig. 1.
Next, we test the performance of the studied algorithms in recovering the two images (Wat Chedi Luang and antique kitchen) by setting the parameters as in (38) and by choosing the blurred images as the starting points. The maximum iteration number for all methods is fixed at 200. In the LASSO model (37), the regularization parameter is taken as \(\lambda = 10^{-4}\). Details of the parameters for the studied algorithms are chosen as follows:
where \(\mathcal{M} \) is a large positive number which depends on the number of iterations.
The obtained results for deblurring the test images (scenarios I–III) are presented in Figs. 2–7. We observe from Figs. 2–8 that, when the iteration number is fixed at 200, the PSNR values of the SP-FBSL and SP-FBS algorithms are slightly higher than those of the other algorithms.
6 Conclusions
In this work, we propose an inertial SP-forward–backward splitting (SP-FBS) algorithm for solving convex minimization problems. We prove that a sequence generated by the SP-FBS algorithm converges weakly to a solution of problem (1) under the assumption that the gradient of the smooth part of the objective function is Lipschitz continuous; in this case the stepsize of the algorithm depends on the Lipschitz constant of that gradient. Moreover, we remove the Lipschitz continuity assumption by using the linesearch technique of Cruz and Nghia [9] and propose an inertial SP-forward–backward splitting algorithm with linesearch (SP-FBSL) to solve the convex minimization problem. We also prove that a sequence generated by SP-FBSL converges weakly to a minimizer of the sum of the two convex functions under suitable control conditions. Finally, we present numerical experiments of the studied algorithms for solving image restoration problems. From our experiments, we see that our algorithms are more efficient than the well-known algorithms in [3, 8, 9, 16].
Availability of data and materials
Contact the author for data requests.
References
Aremu, K.O., Izuchukwu, C., Grace, O.N., Mewomo, O.T.: Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim. 17(4), 2161–2180 (2021). https://doi.org/10.3934/jimo.2020063
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)
Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont (1997)
Burachik, R.S., Iusem, A.N.: Set-Valued Mappings and Enlargements of Monotone Operator. Springer, New York (2007)
Bussaban, L., Suantai, S., Kaewkhao, A.: A parallel inertial S-iteration forward–backward algorithm for regression and classification problems. Carpath. J. Math. 36, 35–44 (2020)
Combettes, P.L., Pesquet, J.C.: A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 1, 564–574 (2007)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
Cruz, J.Y.B., Nghia, T.T.A.: On the convergence of the forward–backward splitting method with linesearches. Optim. Methods Softw. 31, 1209–1238 (2016)
Dunn, J.C.: Convexity, monotonicity, and gradient processes in Hilbert space. J. Math. Anal. Appl. 53, 145–158 (1976)
Hanjing, A., Suantai, S.: A fast image restoration algorithm based on a fixed point and optimization. Mathematics 8, 378 (2020). https://doi.org/10.3390/math8030378
Huang, Y., Dong, Y.: New properties of forward–backward splitting and a practical proximal-descent algorithm. Appl. Math. Comput. 237, 60–68 (2014)
Ishikawa, S.: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147–150 (1974)
Kankam, K., Pholasa, N., Cholamjiak, P.: On convergence and complexity of the modified forward–backward method involving new linesearches for convex minimization. Math. Methods Appl. Sci. 42, 1352–1362 (2019)
Lin, L.J., Takahashi, W.: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16, 429–453 (2012)
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 4, 154–158 (1970)
Moreau, J.J.: Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Sér. A Math. 255, 2897–2899 (1962)
Nakajo, K., Shimoji, K., Takahashi, W.: On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal., Theory Methods Appl. 71(1–2), 112–119 (2009)
Okeke, C.C., Izuchukwu, C.: A strong convergence theorem for monotone inclusion and minimization problems in complete CAT(0) spaces. Optim. Methods Softw. 34(6), 1168–1183 (2019)
Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967)
Phuengrattana, W., Suantai, S.: On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 235, 3006–3014 (2011). https://doi.org/10.1016/j.cam.2010.12.022
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 17, 877–898 (1976)
Suantai, S., Kankam, K., Cholamjiak, P.: A novel forward–backward algorithm for solving convex minimization problem in Hilbert spaces. Mathematics 8, 42 (2020). https://doi.org/10.3390/math8010042
Tan, K., Xu, H.K.: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301–308 (1993)
Thung, K., Raveendran, P.: A survey of image quality measures. In: Proceedings of the International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December, pp. 1–4 (2009)
Acknowledgements
This work was supported by Fundamental Fund 2022, Chiang Mai University and Thailand Science Research and Innovation under the project IRN62W0007. This research has also received funding support from the NSRI via the program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B05F640183]. We also would like to thank Nakhon Phanom University and Rajamangala University of Technology Isan for partial financial support.
Funding
This work was supported by Fundamental Fund 2022, Chiang Mai University, Thailand Science Research and Innovation under the project IRN62W0007, NSRI [grant number B05F640183], Rajamangala University of Technology Isan and Nakhon Phanom University.
Author information
Authors and Affiliations
Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Yatakoat, P., Suantai, S. & Hanjing, A. On some accelerated optimization algorithms based on fixed point and linesearch techniques for convex minimization problems with applications. Adv Cont Discr Mod 2022, 25 (2022). https://doi.org/10.1186/s13662-022-03698-5
DOI: https://doi.org/10.1186/s13662-022-03698-5
MSC
- 47H10
- 47J25
- 65K05
- 90C30
Keywords
- Convex minimization problems
- Fixed points
- Forward–backward algorithms
- Image restoration problems