A parallel Tseng’s splitting method for solving common variational inclusion applied to signal recovery problems
Advances in Difference Equations volume 2021, Article number: 492 (2021)
Abstract
In this work we propose an accelerated algorithm that combines various techniques, such as inertial proximal algorithms, Tseng’s splitting algorithm, and more, for solving the common variational inclusion problem in real Hilbert spaces. We establish a strong convergence theorem for the algorithm under standard and suitable assumptions and illustrate the applicability and advantages of the new scheme for the signal recovery problem arising in compressed sensing.
1 Introduction
Let \(\mathcal{H}\) be a real Hilbert space with inner product \(\langle \cdot,\cdot \rangle \) and induced norm \(\|\cdot \|\). We are interested in the variational inclusion problem (VIP), which is to find \(\bar{u}\in \mathcal{H}\) such that
$$\begin{aligned} 0\in F(\bar{u})+G(\bar{u}), \end{aligned}$$
where \(F:\mathcal{H}\rightarrow \mathcal{H}\) is a single-valued mapping and \(G:\mathcal{H}\rightarrow 2^{\mathcal{H}}\) is a multivalued mapping. The solution set of VIP (1.1) is denoted by \((F+G)^{-1}(0)\). VIP (1.1) includes as particular cases many mathematical problems, such as variational inequalities, the split feasibility problem, convex minimization problems, and linear inverse problems, with applications to machine learning, statistical modeling, image processing, and signal recovery; see [5–7, 21]. Many splitting algorithms have been introduced and improved to find a solution of VIP (1.1); one of the best known is the forward-backward splitting algorithm, see [14] for more details. It is well known that VIP (1.1) is equivalent to the fixed point equation \(\bar{u}=J^{G}_{\gamma }(I-\gamma F)\bar{u}\), where \(J^{G}_{\gamma }\) is the resolvent operator of G defined by \(J^{G}_{\gamma }=(I+\gamma G)^{-1}\) with \(\gamma >0\). This naturally leads to the following forward-backward splitting algorithm, proposed in [1]:
$$\begin{aligned} u_{k+1}=J^{G}_{\gamma } \bigl(u_{k}-\gamma F(u_{k}) \bigr), \end{aligned}$$
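As a minimal numerical sketch (a toy instance of our own choosing, not part of the original presentation), the forward-backward iteration \(u_{k+1}=J^{G}_{\gamma }(u_{k}-\gamma F(u_{k}))\) can be coded with an abstract resolvent. Here we take G to be the normal cone of a box, so that its resolvent is simply the projection:

```python
import numpy as np

def forward_backward(F, resolvent, u0, gamma, iters=200):
    """Iterate u_{k+1} = J^G_gamma(u_k - gamma * F(u_k))."""
    u = u0.copy()
    for _ in range(iters):
        u = resolvent(u - gamma * F(u))
    return u

# Toy instance: F(u) = A u - b (monotone and Lipschitz) and G the normal
# cone of the box [0, 0.5]^2, whose resolvent J^G_gamma is the projection.
A = np.diag([2.0, 4.0])
b = np.array([2.0, 4.0])
F = lambda u: A @ u - b
J = lambda x: np.clip(x, 0.0, 0.5)   # resolvent of the normal cone of the box
u_star = forward_backward(F, J, np.zeros(2), gamma=0.2)
# Each separable quadratic attains its constrained minimum on [0, 0.5] at 0.5.
```

For a general maximal monotone G one only has to supply the corresponding resolvent \(J^{G}_{\gamma }=(I+\gamma G)^{-1}\); the box projection above is the special case where G is a normal cone.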
In 2015, O’Donoghue and Candès [19] showed that the forward-backward splitting algorithm (1.2), which reduces to the proximal gradient algorithm for convex optimization problems, may require many iterations when F is the gradient of a convex and differentiable function. Finding ways to speed up the convergence of such algorithms is therefore important. Earlier, in 1964, the inertial extrapolation technique, also called the heavy ball method, was introduced by Polyak [20] to accelerate iterative algorithms. Later on, inertial extrapolation was applied to VIP (1.1) and improved by many mathematicians, see [2, 15, 18]. The inertial proximal algorithm combines the inertial technique with the forward-backward algorithm. The following inertial proximal algorithm has been proposed by Moudafi and Oliny [17]:
$$\begin{aligned} u_{k+1}=J^{G}_{\gamma _{k}} \bigl(u_{k}+\xi _{k}(u_{k}-u_{k-1})-\gamma _{k} F(u_{k}) \bigr), \end{aligned}$$
where \(\{\gamma _{k}\}\) is a positive real sequence. Under a condition stated in terms of the sequence \(\{u_{k}\}\) and the parameter \(\xi _{k}\), together with a cocoercivity assumption on F with respect to the solution set, weak convergence of the iterative sequence was established. To obtain strong convergence, Cholamjiak et al. [4] introduced a Halpern-type forward-backward splitting algorithm (HTFBSA) involving the inertial technique in a Hilbert space. The algorithm is generated by a fixed element \(w\in \mathcal{H}\) and
where \(\{a_{k}\}\) and \(\{b_{k}\}\) are sequences in \([0,1]\). After that, Yambangwai et al. [27] extended the HTFBSA to the following modified viscosity inertial forward-backward splitting algorithm (MVIFBSA):
where φ is a ρ-contractive mapping on \(\mathcal{H}\).
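The common inertial ingredient of the algorithms above is the extrapolation \(w_{k}=u_{k}+\xi _{k}(u_{k}-u_{k-1})\). As a hedged sketch (toy operator, resolvent, and parameters are our own choices for illustration), an inertial proximal iteration in the spirit of [17] looks like:

```python
import numpy as np

def inertial_proximal(F, resolvent, u0, gamma, xi, iters=300):
    """w_k = u_k + xi*(u_k - u_{k-1});  u_{k+1} = J^G_gamma(w_k - gamma*F(u_k))."""
    u_prev, u = u0.copy(), u0.copy()
    for _ in range(iters):
        w = u + xi * (u - u_prev)                 # inertial extrapolation
        u_prev, u = u, resolvent(w - gamma * F(u))
    return u

# Toy instance with G = 0 (resolvent = identity): the scheme reduces to the
# heavy-ball method for the quadratic whose gradient is F.
A = np.diag([2.0, 4.0])
b = np.array([2.0, 4.0])
u_star = inertial_proximal(lambda u: A @ u - b, lambda x: x,
                           np.zeros(2), gamma=0.2, xi=0.3)
```

The momentum term reuses the previous displacement \(u_{k}-u_{k-1}\) at essentially no extra cost per iteration, which is the source of the acceleration.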
Other developments and modifications of the forward-backward splitting algorithm have been introduced to speed up convergence. A well-known modification is Tseng’s splitting algorithm [24]. This algorithm uses an adaptive line-search rule for the parameter \(\gamma _{k}\) and converges weakly in a real Hilbert space. Recently, Gibali and Thong [8] presented two additional extensions of the forward-backward splitting algorithm; these modifications, presented next, are inspired by the Mann and viscosity techniques.
Strong convergence of the above two algorithms is established under Lipschitz continuity and monotonicity of the operator F.
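To make Tseng’s forward-backward-forward structure concrete, the following sketch (a toy instance of our own; the adaptive stepsize rule is written in the spirit of the line-search-free updates used in [8], stated here as an assumption rather than a quotation of their algorithms) applies the method to a skew-symmetric operator, i.e., a monotone Lipschitz F that is not cocoercive:

```python
import numpy as np

def tseng(F, resolvent, u0, gamma0=1.0, lam=0.5, iters=300):
    """Tseng's forward-backward-forward step with an adaptive stepsize:
    s_k     = J^G_{gamma_k}(u_k - gamma_k * F(u_k))
    u_{k+1} = s_k - gamma_k * (F(s_k) - F(u_k))
    gamma_{k+1} = min(gamma_k, lam * ||u_k - s_k|| / ||F(u_k) - F(s_k)||)."""
    u, gamma = u0.copy(), gamma0
    for _ in range(iters):
        Fu = F(u)
        s = resolvent(u - gamma * Fu)
        Fs = F(s)
        u_next = s - gamma * (Fs - Fu)            # forward correction step
        d = np.linalg.norm(Fu - Fs)
        if d > 1e-12:                             # line-search-free stepsize update
            gamma = min(gamma, lam * np.linalg.norm(u - s) / d)
        u = u_next
    return u

# Skew-symmetric F: monotone and 1-Lipschitz with unique zero at the origin,
# but NOT cocoercive -- the plain forward-backward iteration expands here.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
u_star = tseng(lambda u: A @ u, lambda x: x, np.array([1.0, 1.0]))
```

Because F here is a rotation, the plain forward step \(u-\gamma F(u)\) never shrinks the norm (it expands it), while the extra forward correction does contract; this is exactly the regime where Tseng-type methods are needed.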
While the discussion so far has focused on a single variational inclusion problem (1.1), many real-world problems require finding a solution that fulfils several constraints. These constraints can be reformulated via a nonlinear functional model, and thus in this work we focus on the common variational inclusion problem (CVIP). The CVIP consists of finding a point \(\bar{u}\in \mathcal{H}\) such that
$$\begin{aligned} 0\in F_{i}(\bar{u})+G_{i}(\bar{u}) \quad \text{for all } i=1,2,\dots, K, \end{aligned}$$
where \(F_{i}:\mathcal{H}\rightarrow \mathcal{H}\) are single-valued mappings and \(G_{i}:\mathcal{H}\rightarrow 2^{\mathcal{H}}\) are multivalued mappings for all \(i =1,2, \dots, K\). We assume that the solution set of system (1.9) is nonempty. Recently, Yambangwai et al. [26] studied an image restoration problem in which several blur filters were considered; the mathematical model used there is the common variational inclusion problem. A parallel inertial forward-backward splitting algorithm for solving this problem was introduced and analyzed. Several results on parallel algorithms for the common variational inclusion problem and related problems have been reported, see [3, 9–13, 23].
Inspired by the above works, we focus on the common variational inclusion problem and present a new modified Tseng’s splitting algorithm for solving it, with strong convergence in real Hilbert spaces.
The paper is organized as follows. We first recall some basic definitions and results in Sect. 2. The new algorithms and their analysis are introduced in Sect. 3. In Sect. 4 we consider as an application a signal recovery problem with several blurred filters, and compare and illustrate computational advantages of the method. Final remarks and conclusions are given in Sect. 5.
2 Preliminaries
In what follows, recall that \(\mathcal{H}\) is a real Hilbert space. Let C be a nonempty, closed, and convex subset of \(\mathcal{H}\). We denote by ⇀ and → weak and strong convergence, respectively. We next collect some necessary definitions and lemmas for proving our main results.
Definition 2.1
Let \(G: \mathcal{H}\rightarrow 2^{\mathcal{H}}\) be a multivalued mapping. Then G is said to be

(i)
monotone if for all \((x, u), (y, v)\in \operatorname{graph}(G)\) (the graph of mapping G)
$$\begin{aligned} \langle u-v, x-y\rangle \geq 0, \end{aligned}$$ 
(ii)
maximal monotone if there is no proper monotone extension of \(\operatorname{graph}(G)\).
Lemma 2.2
([25])
Let \(\{a_{k}\}\) and \(\{c_{k}\}\) be nonnegative sequences of real numbers such that \(\sum_{k=1}^{\infty }c_{k}<\infty \), and let \(\{b_{k}\}\) be a sequence of real numbers such that \(\limsup_{k\rightarrow \infty } b_{k}\leq 0\). If there exists \(k_{0}\in \mathbb{N}\) such that, for any \(k\geq k_{0}\),
$$\begin{aligned} a_{k+1}\leq (1-\delta _{k})a_{k}+\delta _{k}b_{k}+c_{k}, \end{aligned}$$
where \(\{\delta _{k}\}\) is a sequence in \((0,1)\) such that \(\sum_{k=1}^{\infty } \delta _{k}=\infty \), then \(\lim_{k\rightarrow \infty } a_{k}=0\).
Lemma 2.3
([16])
Let \(\{\Xi _{k}\}\) be a sequence of real numbers such that there exists a subsequence \(\{\Xi _{k_{j}}\}_{j\geq 0}\) of \(\{\Xi _{k}\}\) satisfying \(\Xi _{k_{j}}<\Xi _{k_{j}+1}\) for all \(j\geq 0\). Define a sequence of integers \(\{\psi (k)\}_{k\geq k^{*}}\) by
$$\begin{aligned} \psi (k):=\max \{ j\leq k: \Xi _{j}<\Xi _{j+1} \}. \end{aligned}$$
Then \(\{\psi (k)\}_{k\geq k^{*}}\) is a nondecreasing sequence such that \(\lim_{k\rightarrow \infty }\psi (k)=\infty \), and for all \(k\geq k^{*}\), we have that \(\Xi _{\psi (k)}\leq \Xi _{\psi (k)+1}\) and \(\Xi _{k}\leq \Xi _{\psi (k)+1}\).
3 Main result
In this section we present our new parallel inertial Tseng-type algorithm (PITTA) for solving (1.9). For the convergence analysis of the proposed method, we make the following assumptions for all \(i=1, 2,\dots, K\).
Assumption 1
\(\mathcal{H}\) is a real Hilbert space, \(F_{i}: \mathcal{H}\to \mathcal{H}\) is an \(\mathcal{L}_{i}\)-Lipschitz continuous and monotone mapping, and \(G_{i}: \mathcal{H}\to 2^{\mathcal{H}}\) is a maximal monotone operator.
Assumption 2
\(\Phi:= \bigcap_{i=1}^{K}(F_{i} + G_{i})^{-1}(0)\) is nonempty.
Assumption 3
\(\{\xi _{k}\}\subset [0, \xi )\), \(\{b_{k}\}\subset (b^{*}, b^{\prime })\subset (0, 1-a_{k})\) for some \(\xi > 0, b^{*} >0, b^{\prime } >0\), and \(\{a_{k}\}\subset (0, 1)\) satisfies \(\lim_{k\rightarrow \infty }a_{k}=0\) and \(\sum_{k=1}^{\infty }a_{k}=\infty \).
Assumption 4
\(\varphi: \mathcal{H}\to \mathcal{H}\) is a ρ-contractive mapping.
Next the algorithm is presented.
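For illustration only, the following sketch shows the type of parallel inertial Tseng iteration we have in mind. The index selection \(i_{k}\in \operatorname{arg\,max}_{i}\|t_{k}^{i}-r_{k}\|\), the specific parameter sequences, and the final viscosity–Mann combination are our own illustrative assumptions and need not coincide with the precise statement of Algorithm PITTA:

```python
import numpy as np

def pitta_sketch(Fs, resolvents, phi, u0, u1, lam=0.5, gamma0=0.2, iters=5000):
    """Schematic parallel inertial Tseng-type iteration (illustrative only):
    r_k   = u_k + xi_k (u_k - u_{k-1})                     (inertial step)
    s_k^i = J^{G_i}_{gamma_k^i}(r_k - gamma_k^i F_i(r_k))  (parallel, i = 1..K)
    t_k^i = s_k^i - gamma_k^i (F_i(s_k^i) - F_i(r_k))
    tbar  = t_k^{i_k}, i_k maximizing ||t_k^i - r_k||      (assumed choice)
    u_{k+1} = a_k phi(u_k) + b_k tbar + (1-a_k-b_k) u_k    (assumed combination)."""
    K = len(Fs)
    gammas = [gamma0] * K
    u_prev, u = np.asarray(u0, float), np.asarray(u1, float)
    for k in range(1, iters + 1):
        a_k = 1.0 / (k + 1)              # a_k -> 0 with sum a_k = infinity
        b_k = 0.5 * (1.0 - a_k)
        xi_k = 1.0 / (k + 1) ** 2        # small inertial weight
        r = u + xi_k * (u - u_prev)
        ts = []
        for i in range(K):               # the K Tseng steps are independent
            Fr = Fs[i](r)
            s = resolvents[i](r - gammas[i] * Fr)
            Fsi = Fs[i](s)
            ts.append(s - gammas[i] * (Fsi - Fr))
            d = np.linalg.norm(Fr - Fsi)
            if d > 1e-12:                # line-search-free stepsize update
                gammas[i] = min(gammas[i], lam * np.linalg.norm(r - s) / d)
        tbar = max(ts, key=lambda t: np.linalg.norm(t - r))
        u_prev, u = u, a_k * phi(u) + b_k * tbar + (1.0 - a_k - b_k) * u
    return u

# Two monotone operators sharing the common zero z = (1, -1), with G_i = 0
# and the contraction phi(u) = u/2.
A1 = np.diag([1.0, 2.0])
A2 = np.array([[2.0, 1.0], [1.0, 2.0]])
z = np.array([1.0, -1.0])
Fs = [lambda u, A=A1: A @ (u - z), lambda u, A=A2: A @ (u - z)]
u_star = pitta_sketch(Fs, [lambda x: x, lambda x: x], lambda u: 0.5 * u,
                      np.zeros(2), np.zeros(2))
```

The K Tseng steps in the inner loop are independent, which is what makes the scheme parallelizable over the operator pairs \((F_{i}, G_{i})\).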
Lemma 3.1
Assume that Assumptions 1–4 hold. Then any sequence \(\{\gamma _{k}^{i}\}\) in Algorithm PITTA is nonincreasing and converges to some \(\gamma _{i}\) such that \(\min \lbrace \gamma _{1}^{i}, \frac{\lambda _{i}}{\mathcal{L}_{i}} \rbrace \leq \gamma _{i}\) for all \(i=1,2,\dots,K\).
Proof
See [8, Lemma 5]. □
Lemma 3.2
Let \(u\in \Phi \). Then under Assumptions 1–4, we have, for all \(i=1,2,\dots,K\),
and
where \(\varrho _{k}^{i}=\lambda _{i} \frac{\gamma _{k}^{i}}{\gamma _{k+1}^{i}}\).
Proof
In the same manner as [8, Lemma 6], we obtain that inequalities (3.1) and (3.2) hold. □
Lemma 3.3
Suppose that \(\lim_{k\rightarrow \infty }\|r_{k}-s_{k}^{i}\|=0\) for all \(i=1,2,\dots,K\). If there exists a weakly convergent subsequence \(\{r_{k_{j}}\}\) of \(\{r_{k}\}\), then under Assumptions 1–4, we have that the limit of \(\{r_{k_{j}}\}\) belongs to Φ.
Proof
The proof is similar to the proof of [8, Lemma 7]. □
With the above results we are now ready for the main convergence theorem.
Theorem 3.4
Suppose that \(\lim_{k\rightarrow \infty }\frac{\xi _{k}}{a_{k}}\|u_{k}-u_{k-1}\|=0\). Then under Assumptions 1–4, we have \(u_{k}\to \mu \) as \(k\to \infty \), where \(\mu = P_{\Phi }\circ \varphi (\mu )\).
Proof
First, since \(\lim_{k\to \infty } [1- (\varrho _{k}^{i} )^{2} ]=1-\lambda _{i}^{2}>0\), one can find \(m_{i}\in \mathbb{N}\) such that \(1- (\varrho _{k}^{i} )^{2}>0\) for all \(k\geq m_{i}\); hence this holds for every i whenever \(k\geq k_{0}:= \max_{i=1,2,\dots,K} m_{i}\). Let \(u\in \Phi \); from (3.1), we get
for all \(k\geq k_{0}\). Next, we divide the proof into the following claims.
Claim 1
\(\{u_{k}\}\) is a bounded sequence.
Since the sequence \(\lbrace \frac{\xi _{k}}{a_{k}}\|u_{k}-u_{k-1}\| \rbrace \) converges to 0, there exists a constant \(M_{*}\geq 0\) such that, for all \(k\in \mathbb{N}\),
From the definition of \(r_{k}\) and combining (3.3) and (3.4), we obtain, for all \(k\geq k_{0}\),
From the definition of \(i_{k}\), we get, for all \(k\geq k_{0}\),
and
By Assumption 4 and using (3.6), the following relation is obtained for all \(k\geq k_{0}\):
This leads to the conclusion that \(\|u_{k+1}-u\|\leq \max \lbrace \|u_{k_{0}}-u\|, \frac{\|\varphi (u)-u\|+M_{*}}{1-\rho } \rbrace \) for any \(k\geq k_{0}\). Consequently, the sequence \(\{u_{k}\}\) is bounded. In addition, \(\{\varphi (u_{k})\}\) is also bounded. Since Φ is a closed and convex set, \(P_{\Phi }\circ \varphi \) is a ρ-contractive mapping. Hence we can find a unique \(\mu \in \Phi \) with \(\mu = P_{\Phi }\circ \varphi (\mu )\) by the Banach fixed point theorem. We also get that, for any \(u\in \Phi \),
Now, for each \(k\in \mathbb{N}\), set \(\Xi _{k}:=\|u_{k}-\mu \|^{2}\).
Claim 2
There is \(M_{0}>0\) such that
for all \(k\geq k_{0}\).
Applying (3.6), we have, for all \(k\geq k_{0}\),
for some \(M_{0}>0\). For any \(k\geq k_{0}\), it follows from the assumption on φ and (3.8) that
Therefore, Claim 2 is obtained.
Claim 3
There is \(\bar{M}>0\) such that
for all \(k\geq k_{0}\).
Indeed, set \(c_{k}=(1-b_{k})u_{k} + b_{k}\bar{t}_{k}\). From inequality (3.5) and the definition of \(r_{k}\), we have
and
for all \(k\geq k_{0}\). Hence, from the assumption on φ, and (3.2), (3.9), and (3.10), we obtain, for all \(k\geq k_{0}\),
for \(\bar{M}:= \sup_{k\in \mathbb{N}} \lbrace \|u_{k}-\mu \|, \xi \|u_{k}-u_{k-1}\| \rbrace > 0\). Recall that our task is to show that \(u_{k}\to \mu \), which is equivalent to showing that \(\Xi _{k}\to 0\) as \(k\to \infty \).
Claim 4
\(\Xi _{k}\to 0\) as \(k\to \infty \).
The proof is divided into the following two cases.
Case a. There exists \(N\in \mathbb{N}\) such that \(\Xi _{k+1}\leq \Xi _{k}\) for all \(k\geq N\). Since each term \(\Xi _{k}\) is nonnegative, the sequence \(\{\Xi _{k}\}\) is convergent. Since \(\lim_{k\rightarrow \infty } a_{k}=0\) and \(\{b_{k}\}\subset (b^{*}, b^{\prime })\subset (0,1)\), Claim 2 gives
Indeed, we immediately get
In addition, from the definition of \(\bar{t}_{k}\) and by using the triangle inequality, the following inequalities are obtained:
and
for all \(i=1,2,\dots,K\). It follows from inequality (3.2) that
for all \(i=1,2,\dots,K\). Since \(\lim_{k\to \infty } [1- (\varrho _{k}^{i} )^{2} ]=1-\lambda _{i}^{2}>0\), it follows from (3.11) and (3.12) that
for all \(i=1,2,\dots,K\). Note that, for each \(k\in \mathbb{N}\),
Consequently, since \(\lim_{k\rightarrow \infty }a_{k}=0\) and by (3.14), \(\lim_{k\rightarrow \infty }\|u_{k+1}-u_{k}\|=0\). Next observe that, since \(\{u_{k}\}\) is bounded, there is \(w\in \mathcal{H}\) such that \(u_{k_{j}}\rightharpoonup w\) as \(j\to \infty \) for some subsequence \(\{u_{k_{j}}\}\) of \(\{u_{k}\}\). By (3.12), we get \(r_{k_{j}}\rightharpoonup w\) as \(j\to \infty \). Then Lemma 3.3 together with (3.13) implies that \(w\in \Phi \). From (3.7), it is straightforward to show that
Since \(\lim_{k\rightarrow \infty }\|u_{k+1}-u_{k}\|=0\), the following result is obtained:
Applying Lemma 2.2 to the inequality from Claim 3, we can conclude that \(\lim_{k\rightarrow \infty }\Xi _{k}= 0\).
Case b. There exists a subsequence with \(k_{n}\geq n\) and \(\Xi _{k_{n}}<\Xi _{k_{n}+1}\) for all \(n\in \mathbb{N}\). According to Lemma 2.3, \(\Xi _{\psi (k)}\leq \Xi _{\psi (k)+1}\) for all \(k\geq k^{*}\), where \(\psi: \mathbb{N}\rightarrow \mathbb{N}\) is defined by (2.1) and \(k^{*}\in \mathbb{N}\). Together with Claim 2, this implies, for all \(k\geq \max \{k_{0}, k^{*}\}\), that
Similar to Case a, since \(a_{k}\to 0\) as \(k\to \infty \), we obtain
Furthermore, an argument similar to the one used in Case a shows that
Finally, from the inequality \(\Xi _{\psi (k)}\leq \Xi _{\psi (k)+1}\) and by Claim 3, for all \(k\geq \max \{k_{0}, k^{*}\}\), we obtain
Some simple calculations yield
From this it follows that \(\limsup_{k\rightarrow \infty }\Xi _{\psi (k)+1}\leq 0\). Thus, \(\lim_{k\rightarrow \infty }\Xi _{\psi (k)+1}=0\). In addition, by Lemma 2.3,
Hence, we can conclude that \(u_{k}\) converges strongly to μ. □
4 Numerical illustrations
In this section we consider a signal recovery problem in compressed sensing that involves several blur filters. The classical problem involving a single filter is phrased as follows:
$$\begin{aligned} b = Hx+\varepsilon, \end{aligned}$$
where \(x\in \mathbb{R}^{N}\) is the original signal, \(b\in \mathbb{R}^{M}\) is the observed signal with noise ε, and \(H\in \mathbb{R}^{M\times N}\) (\(M < N\)) is a filter matrix. Clearly, solving system (4.1) is equivalent to solving the following regularized least squares problem:
$$\begin{aligned} \min_{x\in \mathbb{R}^{N}} \biggl\{ \frac{1}{2}\|Hx-b\|_{2}^{2}+\eta \|x\|_{1} \biggr\}, \end{aligned}$$
where \(\eta > 0\) is a regularization parameter. Next, let \(g(x) = \frac{1}{2}\|Hx-b\|_{2}^{2}\) and \(h(x) = \eta \|x\|_{1}\); then \(\nabla g(x) = H^{t}(Hx-b)\) is monotone and \(\|H\|_{2}^{2}\)-Lipschitz continuous. Moreover, \(\partial h(x)\), the subdifferential of h at x, is maximal monotone, see [22]. In addition, by Proposition 3.1(iii) of [5], x is a solution to problem (4.2) ⇔ \(0\in \nabla g(x)+\partial h(x)\) ⇔ \(x=\operatorname{prox}_{\eta h}(I-\eta \nabla g)(x)\) for any \(\eta >0\), where \(\operatorname{prox}_{\eta h}(x) = \operatorname{arg}\min_{u\in \mathbb{R}^{N}} \lbrace h(u)+\frac{1}{2\eta }\|x-u\|^{2} \rbrace \).
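Since \(\operatorname{prox}_{\gamma h}\) for \(h=\eta \|\cdot \|_{1}\) is the componentwise soft-thresholding operator, the fixed point characterization above can be turned into a short proximal gradient sketch (a minimal illustration on toy data of our own, not the algorithms tested below):

```python
import numpy as np

def soft_threshold(x, tau):
    """prox of tau * ||.||_1: componentwise shrinkage toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(H, b, eta, iters=500):
    """x_{k+1} = prox_{gamma*eta*||.||_1}(x_k - gamma * H^T (H x_k - b)),
    with the fixed step gamma = 1/||H||_2^2 (reciprocal Lipschitz constant)."""
    gamma = 1.0 / np.linalg.norm(H, 2) ** 2
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - gamma * H.T @ (H @ x - b), gamma * eta)
    return x

# Sanity check: for H = I the minimizer of (1/2)||x - b||^2 + eta*||x||_1
# is exactly soft_threshold(b, eta).
x_hat = proximal_gradient(np.eye(3), np.array([3.0, 0.5, -2.0]), eta=1.0)
```

With K filters, the model below asks for a single x serving all pairs \((H_{i}, b_{i})\) simultaneously, which is where the parallel scheme of Sect. 3 enters.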
Here we consider the following model for the signal recovery problem consisting of various filters:
where, for all \(i = 1, 2, 3, \ldots, K\), \(H_{i}\) is a filter matrix, \(b_{i}\) is an observed signal, and \(\eta _{i} > 0\). Problem (4.3) can be seen as problem (1.9) through the following settings: \(\mathcal{H}=\mathbb{R}^{N}\), \(F_{i}(\cdot ) = \nabla (\frac{1}{2}\|H_{i}(\cdot )-b_{i}\|_{2}^{2} )\), and \(G_{i}(\cdot ) = \partial ( \eta _{i}\|\cdot \|_{1} )\) for all \(i = 1, 2, 3, \ldots, K\).
For the experiments in this section, we choose the signal size to be \(N = 1024\) and \(M = 512\), and the original signal x is generated by the uniform distribution in \([-2, 2]\) with m nonzero elements. We measure the restoration accuracy by the mean-squared error \(\operatorname{MSE}_{k} = \frac{1}{N}\|u_{k}-x\|_{2}^{2}\), stop the iteration when \(\operatorname{MSE}_{k}<5\times 10^{-5}\), and suppose
for all \(k\in \mathbb{N}\). In the first part, we solve problem (4.2) by considering different components within PITTA (Algorithm 3) with \(K=1\): \(\lambda _{1}, \gamma _{1}^{1}, \varphi (\cdot ), \bar{\xi }_{k}, b_{k}\), and \(a_{k}\). Let H be the Gaussian matrix generated by the MATLAB routine \(\operatorname{randn}(M, N)\), let the observation b be generated by white Gaussian noise with signal-to-noise ratio SNR=40, and let \(\eta =1\). The initial points \(u_{0}, u_{1}\) are generated by the command \(\operatorname{randn}(N, 1)\).
Case 1. We compare the performance of the algorithm with different parameters \(\lambda _{1}\) by setting \(\gamma _{1}^{1} = 7.55, \varphi (\cdot ) = \frac{1}{2}(\cdot ), \bar{\xi }_{k} = \frac{1}{\|u_{k}-u_{k-1}\|^{4}+(k+1)^{4}}, a_{k} = \frac{1}{10(k+1)}\), and \(b_{k} =\frac{1}{2}(1-a_{k})\). The results are presented in Table 1.
Case 2. We compare the performance of the algorithm with different parameters \(\gamma _{1}^{1}\) by setting \(\lambda _{1} = 0.95\) and taking \(\varphi (\cdot ), \bar{\xi }_{k}, a_{k} \), and \(b_{k}\) as in Case 1. The results are presented in Table 2.
Case 3. We compare the performance of the algorithm with different mappings \(\varphi (\cdot )\) by setting \(\lambda _{1} = 0.95\), \(\gamma _{1}^{1} = 0.01\), and taking \(\bar{\xi }_{k}, a_{k} \), and \(b_{k}\) as in Case 1. The results are presented in Table 3.
Case 4. We compare the performance of the algorithm with different parameters \(\bar{\xi }_{k}\) by setting \(\lambda _{1} = 0.95\), \(\gamma _{1}^{1} = 0.01\), \(\varphi (\cdot ) = \frac{1}{10}\cos (\cdot )\), and taking \(a_{k} \) and \(b_{k}\) as in Case 1. The results are presented in Table 4.
Case 5. We compare the performance of the algorithm with different parameters \(b_{k}\) by setting \(\lambda _{1} = 0.95\), \(\gamma _{1}^{1} = 0.01\), \(\varphi (\cdot ) = \frac{1}{10}\cos (\cdot )\), \(\bar{\xi }_{k} = \frac{1}{(k+1)^{1.1}\|u_{k}-u_{k-1}\|}\), and taking \(a_{k} \) as in Case 1. The results are presented in Table 5.
Case 6. We compare the performance of the algorithm with different parameters \(a_{k}\) by setting \(\lambda _{1} = 0.95\), \(\gamma _{1}^{1} = 0.01\), \(\varphi (\cdot ) = \frac{1}{10}\cos (\cdot )\), \(\bar{\xi }_{k} = \frac{1}{(k+1)^{1.1}\|u_{k}-u_{k-1}\|}\), and \(b_{k} = \frac{99}{100}(1-a_{k})\). The results are presented in Table 6.
We notice that in all six cases above, selecting \(a_{k} = \frac{1}{k+1}\) for all \(k\in \mathbb{N}\) and setting \(b_{k}, \bar{\xi }_{k}, \lambda _{1}, \gamma _{1}^{1}\), and \(\varphi (\cdot )\) as in Case 6 yield the best results.
In the next experiment, we compare the performance of MTTA (Algorithm 1), VTTA (Algorithm 2), HTFBSA, MVIFBSA, and PITTA for solving problem (4.2) with one filter, that is, \(K=1\). We suppose that \(H, b, \eta, u_{0}\), and \(u_{1}\) are the same as in the first part and select \(a_{k} = \frac{1}{k+1}\) for all \(k\in \mathbb{N}\). We take \(b_{k}, \bar{\xi }_{k}, \lambda _{1}, \gamma _{1}^{1}\), and \(\varphi (\cdot )\) as in Case 6. For MTTA and VTTA, let \(\lambda _{1} = 0.95\) and \(\gamma _{1}^{1} =0.01\). Define w by using \(\operatorname{randn}(N, 1)\) for HTFBSA. Further, for any \(k\in \mathbb{N}\), we select \(\gamma _{k} = \frac{1}{2\|H\|_{2}^{2}}\) for HTFBSA and MVIFBSA. The results are presented in Table 7 and Figs. 1 and 2.
Based on the above results, we can see that our proposed algorithm requires less computation time and fewer iterations than the other four algorithms.
The final experiment considers PITTA for solving (4.3) with multiple inputs \(H_{i}\), and we compare it with the parallel monotone hybrid algorithm (PMHA) of Suantai et al. [23]. Gaussian matrices are generated by the MATLAB routine \(\operatorname{randn}(M, N)\). The observation \(b_{i}\) is generated by white Gaussian noise with signal-to-noise ratio SNR=40, \(\eta _{i}=1\), \(\lambda _{i}=0.95\), and \(\gamma _{1}^{i} = 0.01\) for all \(i = 1, 2, 3\). We select \(a_{k} = \frac{1}{k+1}\) and take \(u_{0}, u_{1}\), \(\varphi (\cdot )\), \(b_{k}\), and \(\bar{\xi }_{k}\) as in Case 6 for all \(k\in \mathbb{N}\). Further, for any \(k\in \mathbb{N}\) and all \(i = 1, 2, 3\), we select \(\alpha _{k}^{i} = 0.75\) and \(S_{i}(\cdot ) = \operatorname{prox}_{\frac{\|\cdot \|_{1}}{\|H_{i}\|_{2}^{2}}}(I- \frac{1}{\|H_{i}\|_{2}^{2}}F_{i})(\cdot )\) for PMHA. The results are presented in Tables 8, 9 and Figs. 3–8.
From the above one can observe that incorporating all three Gaussian matrices (\(H_{1}, H_{2}\), and \(H_{3}\)) into PITTA is more effective with respect to time and number of iterations than involving only one or two of them. PITTA also requires fewer iterations than PMHA.
5 Discussion
In this work we study the common variational inclusion problem (CVIP) and propose an inertial Tseng’s splitting algorithm for solving it. A parallel iterative method is presented, and under standard assumptions we establish its strong convergence in real Hilbert spaces. An intensive numerical investigation, with comparisons to several related schemes, is presented for a signal recovery problem involving several filters. Our work extends and generalizes related works in the literature and also demonstrates practical potential.
Availability of data and materials
Contact the authors for data requests.
References
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, vol. 408. Springer, New York (2011)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Cholamjiak, P., Hieu, D.V., Cho, Y.J.: Relaxed forward-backward splitting methods for solving variational inclusions and applications. J. Sci. Comput. (2021). https://doi.org/10.1007/s10915-021-01608-7
Cholamjiak, W., Cholamjiak, P., Suantai, S.: An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 20, 42 (2018)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forwardbackward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)
Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)
Duchi, J., Singer, Y.: Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res. 10, 2899–2934 (2009)
Gibali, A., Thong, D.V.: Tseng type methods for solving inclusion problems and its applications. Calcolo 55(4), 49 (2018)
Hieu, D.V., Anh, P.K., Muu, L.D.: Modified forward-backward splitting method for variational inclusions. 4OR 19(1), 127–151 (2021)
Hieu, D.V., Anh, P.K., Muu, L.D., Strodiot, J.J.: Iterative regularization methods with new stepsize rules for solving variational inclusions. J. Appl. Math. Comput. (2021). https://doi.org/10.1007/s12190-021-01534-9
Hieu, D.V., Cho, Y.J., Xiao, Y., Kumam, P.: Modified extragradient method for pseudomonotone variational inequalities in infinite dimensional Hilbert spaces. Vietnam J. Math. (2020). https://doi.org/10.1007/s10013-020-00447-7
Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73(1), 197–217 (2016)
Hieu, D.V., Reich, S., Anh, P.K., Ha, N.H.: A new proximal-like algorithm for solving split variational inclusion problems. Numer. Algorithms (2021). https://doi.org/10.1007/s11075-021-01135-4
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)
Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)
Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16(7–8), 899–912 (2008)
Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155, 447–454 (2003)
Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
O’Donoghue, B., Candès, E.J.: Adaptive restart for accelerated gradient schemes. Found. Comput. Math. 15(3), 715–732 (2015)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–17 (1964)
Raguet, H., Fadili, J., Peyré, G.: A generalized forwardbackward splitting. SIAM J. Imaging Sci. 6, 1199–1226 (2013)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
Suantai, S., Kankam, K., Cholamjiak, P., Cholamjiak, W.: A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comput. Appl. Math. (2021). https://doi.org/10.1007/s40314-021-01530-6
Tseng, P.: A modified forwardbackward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(1), 240–256 (2002)
Yambangwai, D., Khan, S.A., Dutta, H., Cholamjiak, W.: Image restoration by advanced parallel inertial forward-backward splitting methods. Soft Comput. (2021). https://doi.org/10.1007/s00500-021-05596-6
Yambangwai, D., Suantai, S., Dutta, H., Cholamjiak, W.: Viscosity modification with inertial forward-backward splitting methods for solving inclusion problems. In: Zeki Sarıkaya, M., Dutta, H., Ocak Akdemir, A., Srivastava, H. (eds.) Mathematical Methods and Modelling in Applied Sciences. ICMRS 2019. Lecture Notes in Networks and Systems, vol. 123. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-43002-3_14
Acknowledgements
This research was partially supported by Chiang Mai University. W. Cholamjiak would like to thank the University of Phayao, Thailand. T. Mouktonglang would like to thank the Faculty of Science, Chiang Mai University.
Funding
Chiang Mai University, Thailand.
Contributions
The authors equally conceived of the study, participated in its design and coordination, drafted the manuscript, participated in the sequence alignment, and read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Suparatulatorn, R., Cholamjiak, W., Gibali, A. et al. A parallel Tseng’s splitting method for solving common variational inclusion applied to signal recovery problems. Adv Differ Equ 2021, 492 (2021). https://doi.org/10.1186/s13662-021-03647-8