

Hybrid variational model based on alternating direction method for image restoration

Abstract

The total variation model is widely used in image deblurring and denoising because it preserves image edges. However, it usually causes staircase effects. To overcome this shortcoming, we combine second-order total variation regularization with total variation regularization and propose a hybrid total variation model. The new model not only reduces the staircase effect but also preserves the edges of the image well. The alternating direction method of multipliers (ADMM) is employed to solve the proposed model. Numerical results show that our proposed model recovers more details and attains higher visual quality than some current state-of-the-art methods.

1 Introduction

Image restoration, which mainly includes image deblurring and image denoising, is one of the most fundamental problems in imaging science. It plays an important role in many mid-level and high-level image-processing areas such as medical imaging, remote sensing, machine identification, and astronomy [1,2,3,4]. The image restoration problem can usually be expressed in the following form:

$$ \textstyle\begin{array}{l} g=Hf+\eta, \end{array} $$
(1.1)

where \(f\in R^{n^{2}}\) is the original \(n\times n\) image, \(H\in R^{n ^{2}\times n^{2}}\) is a blurring operator, \(\eta\in R^{n^{2}}\) is additive white Gaussian noise, and \(g\in R^{n^{2}}\) is the degraded (observed) image.
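As a concrete illustration of the degradation model (1.1), the following NumPy sketch blurs an image by circular (periodic) convolution with a kernel and adds white Gaussian noise; the particular kernel, noise level, and function names are illustrative assumptions rather than part of the model.

```python
import numpy as np

def degrade(f, kernel, noise_std, seed=0):
    """Simulate g = H f + eta, with H a periodic (circular) blur."""
    # Embed the kernel in an image-sized array and center it at (0, 0) so that
    # multiplication in the Fourier domain acts like the blurring operator H.
    psf = np.zeros_like(f)
    k0, k1 = kernel.shape
    psf[:k0, :k1] = kernel
    psf = np.roll(psf, (-(k0 // 2), -(k1 // 2)), axis=(0, 1))
    otf = np.fft.fft2(psf)                                  # eigenvalues of H
    blurred = np.real(np.fft.ifft2(otf * np.fft.fft2(f)))   # H f
    eta = noise_std * np.random.default_rng(seed).standard_normal(f.shape)
    return blurred + eta                                    # g = H f + eta

# Example: 9x9 average blur, mild noise, random 256x256 test image.
f = np.random.default_rng(1).random((256, 256))
g = degrade(f, np.ones((9, 9)) / 81.0, noise_std=1e-2)
```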

It is well known that the image restoration problem is usually ill posed. An efficient way to overcome ill-posedness is to add regularization terms to the objective function, which is known as a regularization method. There are two famous regularization methods: Tikhonov regularization [5] and total variation (TV) regularization [6]. The Tikhonov regularization method tends to make images overly smooth and often fails to preserve important image attributes such as sharp edges. The total variation regularization method, first introduced by Rudin et al. [6], preserves edges well while removing noise; it solves

$$ \min_{f} \Vert {Hf - g} \Vert _{2}^{2} + \alpha { \Vert f \Vert _{\mathrm{TV}}}, $$
(1.2)

where \({ \Vert \cdot \Vert _{2}}\) denotes the Euclidean norm, \({ \Vert \cdot \Vert _{\mathrm{TV}}}\) is the discrete total variation regularization term, and α is a positive regularization parameter that controls the tradeoff between the two terms. To define the discrete TV norm, we first introduce the discrete gradient of f:

$$ {(\nabla f)_{i,j}} = \bigl((\nabla f)_{i,j}^{x},( \nabla f)_{i,j}^{y} \bigr) $$

with

$$\begin{aligned} (\nabla f)_{i,j}^{x}=\textstyle\begin{cases} f_{i+1,j}-f_{i,j} & \text{if } i < n, \\ f_{1,j}-f_{n,j} & \text{if } i = n, \end{cases}\displaystyle \qquad (\nabla f)_{i,j}^{y}=\textstyle\begin{cases} f_{i,j+1}-f_{i,j} & \text{if } j < n, \\ f_{i,1}-f_{i,n} & \text{if } j = n, \end{cases}\displaystyle \end{aligned}$$

for \(i,j =1,\ldots,n\). Here \(\nabla:\Re^{n^{2}}\rightarrow\Re^{n^{2}}\) denotes the discrete gradient operator, \(f_{i,j}\) refers to the \(((j-1)n+i)\)th entry of the vector f, which is the \((i,j)\)th pixel location of the image; see [7]. Then the discrete TV of f is defined by

$$\begin{aligned} { \Vert f \Vert _{\mathrm{TV}}} = \sum_{1 \le i,j \le n} { \sqrt{ {{ \bigl\vert {(\nabla f)_{i,j}^{x}} \bigr\vert }^{2}} + {{ \bigl\vert {(\nabla f)_{i,j} ^{y}} \bigr\vert }^{2}}} }. \end{aligned}$$
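For illustration, the following sketch evaluates this discrete isotropic TV norm with the periodic forward differences defined above; the function name and array conventions are our own.

```python
import numpy as np

def tv_norm(f):
    """Discrete isotropic TV of a 2-D image with periodic boundary conditions."""
    dx = np.roll(f, -1, axis=0) - f   # (nabla f)^x: f_{i+1,j} - f_{i,j}, wrapping at i = n
    dy = np.roll(f, -1, axis=1) - f   # (nabla f)^y: f_{i,j+1} - f_{i,j}, wrapping at j = n
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))
```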

Due to the nonlinearity and nondifferentiability of the total variation function, model (1.2) is difficult to solve. To solve this problem more effectively, many methods have been proposed for total-variation-based image restoration in recent years [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. Among these methods, Rudin et al. [6] proposed a time-marching scheme, and Vogel et al. [7] put forward a fixed-point iteration method. The time-marching scheme converges slowly, especially when the iterate is close to the solution set, and the fixed-point iteration becomes difficult to apply as the blurring kernel grows larger. Based on the dual formulation, Chambolle [15] proposed a gradient algorithm for the total variation denoising problem. Based on variable separation and penalty techniques, Wang et al. [16] proposed the fast total variation deconvolution (FTVd) method. By introducing an auxiliary variable to replace the nondifferentiable part of model (1.2), the TV model (1.2) can be rewritten as the following minimization problem:

$$ \min_{f,\omega} \Vert {Hf - g} \Vert _{2}^{2} + \alpha{ \Vert \omega \Vert _{2}} + \frac{\beta}{2} \Vert { \omega- \nabla f} \Vert _{2}^{2}, $$

where β is a penalty parameter. Experimental results verify the effectiveness of the FTVd method, but in the computation the penalty parameter β needs to approach infinity, which causes numerical instability. To avoid driving the penalty parameter to infinity, Chan et al. [28] proposed the alternating direction method of multipliers (ADMM) to solve model (1.2). By defining the augmented Lagrange function, the image restoration model (1.2) can be transformed into the following form:

$$ \min_{f,\omega} \Vert {Hf - g} \Vert _{2}^{2} + \alpha{ \Vert \omega \Vert _{2}} + \langle {\lambda,\omega - \nabla f} \rangle + \frac{\beta}{2} \Vert {\omega- \nabla f} \Vert _{2}^{2}, $$

where λ is a Lagrange multiplier. The experimental results show that the ADMM method is robust and fast, and has a good restoration effect.

More recently, to overcome the shortcoming of the TV norm of f in model (1.2), Huang et al. [29] proposed a fast total variation minimization method for image restoration as follows:

$$ \min_{f,u} \Vert {Hf - g} \Vert _{2}^{2} + {\alpha_{1}} \Vert {f - u} \Vert _{2}^{2} + {\alpha_{2}} { \Vert u \Vert _{\mathrm{TV}}}, $$
(1.3)

where \(\alpha_{1}\), \(\alpha_{2}\) are positive regularization parameters. Compared with model (1.2), model (1.3) adds the term \(\| f-u\|_{2}^{2}\). The experimental results show that the modified TV minimization model preserves edges very well during image restoration. Based on model (1.3), Liu et al. [30] proposed the following minimization model:

$$ \min_{f,u} \Vert Hf-g \Vert _{2}^{2}+\alpha_{1} \Vert f-u \Vert _{2}^{2}+ \alpha_{2} \Vert f \Vert _{\mathrm{TV}}+\alpha_{3} \Vert u \Vert _{\mathrm{TV}}, $$
(1.4)

where \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) are positive regularization parameters. Liu et al. [30] adopted the split Bregman method and Chambolle projection algorithm to solve the minimization model (1.4). Numerical results illustrated the effectiveness of their model.

Although total variation regularization preserves sharp edges very well, it also causes staircase effects [31, 32]. To overcome this kind of staircase effect, some high-order total variation models [33,34,35,36,37,38,39] and fractional-order total variation models [40,41,42,43,44] have been introduced. It has been shown that high-order TV regularization can remove the staircase effect and preserve edges well in the process of image restoration.

To better suppress the staircase effect while preserving edges in image processing, we combine the TV norm and the second-order TV norm and introduce a new hybrid variational model as follows:

$$ \min_{f,u} \Vert Hf-g \Vert _{2}^{2}+\alpha_{1} \Vert f-u \Vert _{2}^{2}+ \alpha_{2} \bigl\Vert \nabla^{2}f \bigr\Vert _{2}+\alpha_{3} \Vert \nabla u \Vert _{2}, $$
(1.5)

where \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) are positive regularization parameters, \(\|\nabla u\|_{2}\) is the TV norm of u, and \(\|\nabla^{2}f\|_{2}\) is the second-order TV norm of f, whose definition is analogous to that of the TV norm:

$$\begin{aligned}& \bigl(\nabla^{2}f \bigr)_{i,j}= \bigl((\nabla f)_{i,j}^{x,x}, (\nabla f)_{i,j}^{x,y},( \nabla f)_{i,j}^{y,x},(\nabla f)_{i,j}^{y,y} \bigr), \\& \bigl\Vert \nabla^{2}f \bigr\Vert =\sum _{1 \le i,j \le n}{\sqrt{{{ \bigl\vert {( \nabla f)_{i,j}^{x,x}} \bigr\vert }^{2}} + {{ \bigl\vert {(\nabla f)_{i,j}^{x,y}} \bigr\vert }^{2}}+ \bigl\vert (\nabla f)_{i,j}^{y,x} \bigr\vert ^{2}+ \bigl\vert (\nabla f)_{i,j} ^{y,y} \bigr\vert ^{2}} }, \end{aligned}$$

where \((\nabla f)_{i,j}^{x,x}\), \((\nabla f)_{i,j}^{x,y}\), \((\nabla f)_{i,j} ^{y,x}\), \((\nabla f)_{i,j}^{y,y}\) denote the second-order differences of the \(((j-1)n+i)\)th entry of the vector f. For more detail about the second-order differences, we refer to [45]. By using the second-order TV regularization together with the TV regularization, the edges in the restored image can be preserved quite well, and the staircase effect is reduced simultaneously.
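As an illustration, the following sketch computes one common choice of periodic second-order differences and the resulting second-order TV value; the exact stencils are an assumption here, since the paper defers the precise definitions to [45].

```python
import numpy as np

def second_order_tv(f):
    """Second-order TV with periodic boundaries (one common choice of stencils)."""
    def fwd(a, ax):   # forward difference with wrap-around
        return np.roll(a, -1, axis=ax) - a
    def bwd(a, ax):   # backward difference with wrap-around
        return a - np.roll(a, 1, axis=ax)
    dxx = bwd(fwd(f, 0), 0)   # (nabla f)^{x,x}
    dxy = fwd(fwd(f, 0), 1)   # (nabla f)^{x,y}
    dyx = fwd(fwd(f, 1), 0)   # (nabla f)^{y,x}
    dyy = bwd(fwd(f, 1), 1)   # (nabla f)^{y,y}
    return np.sum(np.sqrt(dxx ** 2 + dxy ** 2 + dyx ** 2 + dyy ** 2))
```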

The rest of this paper is organized as follows. In Sect. 2, we propose our alternating iterative algorithm to solve model (1.5). In Sect. 3, we give some numerical results to demonstrate the effectiveness of the proposed algorithm. Finally, concluding remarks are given in Sect. 4.

2 The alternating iterative algorithm

In this section, we use an alternating iterative algorithm to solve (1.5). Based on the variable separation technique [16], the minimization problem (1.5) can be split into a deblurring step and a denoising step, which are decoupled and solved alternately during the restoration process. The deblurring step is defined as

$$ \arg\min_{f} \Vert Hf-g \Vert _{2}^{2}+ \alpha_{1} \Vert f-u \Vert _{2}^{2}+ \alpha_{2} \bigl\Vert \nabla^{2}f \bigr\Vert _{2}. $$
(2.1)

The denoising step is defined as

$$ \arg\min_{u} \alpha_{1} \Vert u-f \Vert _{2}^{2}+ \alpha _{3} \Vert \nabla u \Vert _{2}. $$
(2.2)

We adopt the alternating direction method of multipliers to solve these two subproblems.

2.1 The deblurring step

The ADMM is notably stable and converges quickly, and it avoids driving the penalty parameter to infinity, so we employ it to solve the minimization problem (2.1). Since the objective function of (2.1) is nondifferentiable, we introduce an auxiliary variable ω and transform the unconstrained optimization problem (2.1) into the following equivalent constrained optimization problem:

$$ \arg\min_{f,\omega} \Vert Hf-g \Vert _{2}^{2}+ \alpha _{1} \Vert f-u \Vert _{2}^{2}+ \alpha_{2} \Vert \omega \Vert _{2} \quad \text{s.t. } \omega= \nabla^{2}f. $$
(2.3)

For the constrained optimization problem (2.3), its augmented Lagrange function is defined by

$$\begin{aligned} {L_{\mathrm{A}}}(f,\omega,\lambda_{1}) =& \Vert Hf-g \Vert _{2}^{2}+{\alpha_{1}} \Vert {f - u} \Vert _{2}^{2} + {\alpha_{2}} { \Vert \omega \Vert _{2}} \\ &{}+ \bigl\langle {\lambda_{1} ,\omega- \nabla^{2}f} \bigr\rangle + \frac{\beta_{1} }{2} \bigl\Vert {\omega- \nabla^{2}f} \bigr\Vert _{2}^{2}, \end{aligned}$$
(2.4)

where \(\lambda_{1}\) is a Lagrange multiplier, which removes the need for the penalty parameter to tend to infinity, and \(\beta_{1}\) is a positive penalty parameter. The alternating minimization of problem (2.4) can then be expressed as follows:

$$ \textstyle\begin{cases} (f^{k+1},\omega^{k+1})=\arg\min_{f,\omega}L_{A}(f,\omega, \lambda_{1}^{k}), \\ \lambda_{1}^{k+1}=\lambda_{1}^{k}+\beta_{1}(\omega^{k+1}-\nabla^{2}f ^{k+1}). \end{cases} $$
(2.5)

Based on the classical ADMM, starting from \(u = u^{k}\), \(\omega= \omega^{k}\), and \(\lambda_{1}=\lambda_{1}^{k}\), the iterative scheme is implemented via the following subproblems:

$$ f^{k+1}=\arg\min_{f} \Vert Hf-g \Vert _{2}^{2}+\alpha_{1} \bigl\Vert f-u^{k} \bigr\Vert _{2}^{2}+ \bigl\langle \lambda_{1}^{k},\omega^{k}-\nabla^{2}f \bigr\rangle +\frac{\beta_{1}}{2} \bigl\Vert \omega^{k}-\nabla^{2}f \bigr\Vert _{2}^{2}, $$
(2.6)

$$ \omega^{k+1}=\arg\min_{\omega} \alpha_{2} \Vert \omega \Vert _{2}+ \bigl\langle \lambda_{1}^{k},\omega-\nabla^{2}f^{k+1} \bigr\rangle +\frac{\beta_{1}}{2} \bigl\Vert \omega-\nabla^{2}f^{k+1} \bigr\Vert _{2}^{2}, $$
(2.7)

$$ \lambda_{1}^{k+1}=\lambda_{1}^{k}+\beta_{1} \bigl(\omega^{k+1}-\nabla^{2}f^{k+1} \bigr). $$
(2.8)

Based on the optimality conditions, the solution of (2.6) is given by the equation

$$ \bigl(2H^{T}H+\beta_{1}\nabla^{2^{T}} \nabla^{2}+2\alpha_{1}I \bigr)f=2H^{T}g+2 \alpha_{1}u^{k}+\beta_{1}\nabla^{2^{T}} \biggl(\omega^{k}+\frac{\lambda_{1} ^{k}}{\beta_{1}} \biggr), $$
(2.9)

where \(\nabla^{2^{T}}\) is the adjoint operator of \(\nabla^{2}\). Under the periodic boundary condition, \(H^{T}H\) and \(\nabla^{2^{T}}\nabla ^{2}\) are block circulant matrices with circulant blocks [46, 47], so \(H^{T}H\) and \(\nabla^{2^{T}}\nabla^{2}\) can be diagonalized by the two-dimensional discrete Fourier transform. The Fourier transform of f is denoted by \(\mathcal{F}(f)\), and \(\mathcal{F}^{-1}(\cdot)\) denotes the inverse Fourier transform. Using the Fourier transform, the solution for f can be written as follows:

$$ {f^{k + 1}} = {\mathcal{F}^{ - 1}}(\gamma), $$

where

$$ \gamma= \frac{{\mathcal{F}(2H^{T}g+2{\alpha_{1}}{u^{k}} + \beta_{1} {\nabla^{2^{T}}}({\omega^{k}} + \frac{\lambda_{1}^{k}}{\beta_{1}}))}}{ {\mathcal{F}(2{\alpha_{1}}I + \beta_{1} \nabla^{2^{T}}\nabla^{2}+2H ^{T}H)}}, $$

where the division is understood componentwise in the Fourier domain.
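A hedged NumPy sketch of this f-update: under periodic boundary conditions every operator in (2.9) is diagonalized by the 2-D FFT, so the linear system reduces to a componentwise division in the Fourier domain. The helper psf2otf and the chosen second-difference stencils are our own assumptions; the update itself follows (2.9).

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small stencil/PSF to `shape`, center it at (0, 0), and FFT it."""
    big = np.zeros(shape)
    k0, k1 = psf.shape
    big[:k0, :k1] = psf
    big = np.roll(big, (-(k0 // 2), -(k1 // 2)), axis=(0, 1))
    return np.fft.fft2(big)

def f_update(g, u, omega, lam1, blur_psf, alpha1, beta1):
    """Solve (2.9) for f in the Fourier domain (periodic boundary conditions).

    omega and lam1 are stacked as (4, n, n), one channel per second-order
    difference direction (xx, xy, yx, yy)."""
    shape = g.shape
    H = psf2otf(blur_psf, shape)
    # Assumed discretization of the second-order differences.
    stencils = [np.array([[1.0], [-2.0], [1.0]]),       # xx
                np.array([[1.0, -1.0], [-1.0, 1.0]]),   # xy
                np.array([[1.0, -1.0], [-1.0, 1.0]]),   # yx
                np.array([[1.0, -2.0, 1.0]])]           # yy
    rhs = 2.0 * np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(g))) + 2.0 * alpha1 * u
    denom = 2.0 * np.abs(H) ** 2 + 2.0 * alpha1
    for s, d in enumerate(stencils):
        D = psf2otf(d, shape)
        # beta1 * (nabla^2)^T (omega + lam1 / beta1), accumulated direction by direction
        rhs += beta1 * np.real(np.fft.ifft2(np.conj(D) * np.fft.fft2(omega[s] + lam1[s] / beta1)))
        denom += beta1 * np.abs(D) ** 2
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```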

The subproblem for ω can be written as

$$ \omega^{k+1}=\arg\min_{\omega} \biggl\{ \alpha_{2} \Vert \omega \Vert _{2}+ \frac{\beta_{1}}{2} \biggl\Vert \omega- \biggl(\nabla^{2}f^{k+1}- \frac{ \lambda_{1}^{k}}{\beta_{1}} \biggr) \biggr\Vert _{2}^{2} \biggr\} , $$

and the solution can be explicitly obtained using the following two-dimensional shrinkage operator [16, 48]:

$$ {\omega^{k + 1}} = \max \biggl\{ { \biggl\Vert { \nabla^{2}{f^{k + 1}} - \frac{ {{\lambda_{1}^{k}}}}{\beta_{1} }} \biggr\Vert _{2} - \frac{{{\alpha_{2}}}}{ \beta_{1} },0} \biggr\} \frac{{\nabla^{2}{f^{k + 1}} - \frac{{{\lambda _{1}^{k}}}}{\beta_{1} }}}{{ \Vert {\nabla^{2}{f^{k + 1}} - \frac{ {{\lambda_{1}^{k}}}}{\beta_{1} }} \Vert _{2}}}, $$
(2.10)

where we follow the convention that \(0\cdot(0/0)=0\).
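A minimal sketch of the isotropic two-dimensional shrinkage (2.10), with the four second-difference channels stacked along the first axis and the per-pixel magnitude taken over channels; variable names are ours.

```python
import numpy as np

def shrink(z, thresh):
    """Isotropic 2-D shrinkage (2.10): z has shape (channels, n, n)."""
    mag = np.sqrt(np.sum(z ** 2, axis=0))                           # per-pixel norm over channels
    scale = np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-12)  # encodes the 0*(0/0)=0 convention
    return scale * z                                                # broadcasts over channels

# omega_new = shrink(nabla2_f_new - lam1 / beta1, alpha2 / beta1)
```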

Finally, we update \(\lambda_{1}\) by

$$ \lambda_{1}^{k+1}=\lambda_{1}^{k}+ \eta\beta_{1} \bigl(\omega^{k+1}-\nabla ^{2}f^{k+1} \bigr), $$
(2.11)

where η is a relaxation parameter, and \(\eta\in(0,(\sqrt{5} + 1)/2)\).

The algorithm of the deblurring step is summarized in Algorithm 1.

Algorithm 1 (Alternating direction minimization method for solving subproblem (2.1))

The main steps are as follows:

1. Input the observed image g, the blurring operator H, the current estimate u, and the parameters \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{1}\), η; initialize \(\omega^{0}\) and \(\lambda_{1}^{0}\).
2. For \(k = 0, 1, 2, \ldots\) until the stopping criterion is met:
   (i) compute \(f^{k+1}\) from (2.9) via the fast Fourier transform;
   (ii) compute \(\omega^{k+1}\) by the shrinkage formula (2.10);
   (iii) update the multiplier \(\lambda_{1}^{k+1}\) by (2.11).
3. Output the deblurred image \(f^{k+1}\).

2.2 The denoising step

Subproblem (2.2) is a classical TV regularization problem for image denoising, which can be solved by the Chambolle projection algorithm. However, the Chambolle projection algorithm requires a large amount of computation in practice and can be numerically unstable. To overcome these disadvantages, in this paper we adopt the alternating direction method of multipliers to solve subproblem (2.2).

The solution process for subproblem (2.2) is the same as for subproblem (2.1). First, introducing an auxiliary variable v, problem (2.2) can be transformed into the following constrained minimization problem:

$$ \min_{u,v} {\alpha_{1}} \bigl\Vert {u - f^{k+1}} \bigr\Vert _{2}^{2} + { \alpha_{3}} { \Vert v \Vert _{2}} \quad \text{s.t. } v= \nabla u. $$
(2.12)

Second, to apply the alternating direction method of multipliers to model (2.12), we define its augmented Lagrangian function

$$ {L_{\mathrm{A}}}(u,v,\lambda_{2}) = { \alpha_{1}} \bigl\Vert {u - f^{k+1}} \bigr\Vert _{2}^{2} + {\alpha_{3}} { \Vert v \Vert _{2}} + \langle {\lambda_{2},v - \nabla u} \rangle + \frac{\beta_{2} }{2} \Vert {v - \nabla u} \Vert _{2}^{2}, $$
(2.13)

where \(\beta_{2}\) is a positive penalty parameter, and \(\lambda_{2}\) is a Lagrange multiplier.

The variables u and v are coupled, so we separate this problem into two subproblems and adopt the alternating minimization method. The two subproblems are given as follows.

The “u-subproblem” for v fixed:

$$ \min_{u}\alpha_{1} \bigl\Vert u-f^{k+1} \bigr\Vert _{2}^{2}+\frac{\beta_{2}}{2} \bigl\Vert v ^{k}-\nabla u \bigr\Vert _{2}^{2}- \bigl\langle \lambda_{2}^{k},\nabla u \bigr\rangle . $$
(2.14)

The “v-subproblem” for u fixed:

$$ \min_{v}\frac{\beta_{2}}{2} \bigl\Vert v-\nabla u^{k} \bigr\Vert _{2}^{2}+ \bigl\langle \lambda_{2}^{k},v \bigr\rangle +\alpha_{3} \Vert v \Vert _{2}. $$
(2.15)

Subproblem (2.14) can be equivalently rewritten as

$$ \min_{u}\alpha_{1} \bigl\Vert u-f^{k+1} \bigr\Vert _{2}^{2}+\frac{\beta_{2}}{2} \biggl\Vert v ^{k}- \biggl(\nabla u-\frac{\lambda_{2}^{k}}{\beta_{2}} \biggr) \biggr\Vert _{2}^{2}, $$
(2.16)

and the minimization problem (2.16) can be solved by the following equation:

$$ \bigl(2{\alpha_{1}}I + \beta_{2} { \nabla^{T}}\nabla \bigr)u = 2{\alpha_{1}} {f^{k + 1}} + \beta_{2} {\nabla^{T}} {v ^{k}} + { \nabla^{T}} {\lambda _{2}^{k}}. $$
(2.17)

Under periodic boundary conditions, \(\nabla^{T}\nabla\) is a block circulant matrix, so it can be diagonalized by the two-dimensional discrete Fourier transform, and (2.17) can be solved efficiently in the Fourier domain.
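Analogously to the deblurring step, a sketch of the u-update (2.17): with periodic boundaries the system is solved by a componentwise division in the Fourier domain, using the eigenvalues of the forward-difference operators. The array layout mirrors the earlier sketches and is an assumption.

```python
import numpy as np

def u_update(f_new, v, lam2, alpha1, beta2):
    """Solve (2.17) in the Fourier domain under periodic boundary conditions.

    v and lam2 are stacked as (2, n, n): channel 0 = x-differences, channel 1 = y-differences."""
    n0, n1 = f_new.shape
    # Impulse responses of the periodic forward differences u_{i+1,j}-u_{i,j} and u_{i,j+1}-u_{i,j}.
    hx = np.zeros((n0, n1))
    hx[0, 0], hx[-1, 0] = -1.0, 1.0
    hy = np.zeros((n0, n1))
    hy[0, 0], hy[0, -1] = -1.0, 1.0
    Dx, Dy = np.fft.fft2(hx), np.fft.fft2(hy)
    # Right-hand side of (2.17): 2*alpha1*f + grad^T(beta2*v + lam2).
    rhs = (2.0 * alpha1 * f_new
           + np.real(np.fft.ifft2(np.conj(Dx) * np.fft.fft2(beta2 * v[0] + lam2[0])))
           + np.real(np.fft.ifft2(np.conj(Dy) * np.fft.fft2(beta2 * v[1] + lam2[1]))))
    denom = 2.0 * alpha1 + beta2 * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```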

Next, the minimization of (2.15) with respect to v is equivalent to the minimization problem

$$ \min_{v}\frac{\beta_{2}}{2} \biggl\Vert v- \biggl(\nabla u^{k+1}-\frac{ \lambda_{2}^{k}}{\beta_{2}} \biggr) \biggr\Vert _{2}^{2}+\alpha_{3} \Vert v \Vert _{2}, $$
(2.18)

and the solution of (2.18) can be explicitly obtained by the two-dimensional shrinkage:

$$ v^{k+1}=\max \biggl\{ \biggl\Vert \nabla u^{k+1}- \frac{\lambda_{2}^{k}}{\beta _{2}} \biggr\Vert _{2}-\frac{\alpha_{3}}{\beta_{2}},0 \biggr\} \frac{\nabla u ^{k+1}-\frac{\lambda_{2}^{k}}{\beta_{2}}}{ \Vert \nabla u^{k+1}-\frac{ \lambda_{2}^{k}}{\beta_{2}} \Vert _{2}}. $$
(2.19)

The Lagrange multiplier \(\lambda_{2}\) is updated as follows:

$$ {\lambda_{2}^{k + 1}} = {\lambda_{2}^{k}} + \eta\beta_{2} \bigl({v^{k + 1}} - \nabla{u^{k + 1}} \bigr), $$
(2.20)

where η is a relaxation parameter.

The algorithm of the denoising step is written in Algorithm 2.

Algorithm 2 (Alternating direction minimization method for solving subproblem (2.2))

The main steps are as follows:

1. Input the image \(f^{k+1}\) from the deblurring step and the parameters \(\alpha_{1}\), \(\alpha_{3}\), \(\beta_{2}\), η; initialize \(v^{0}\) and \(\lambda_{2}^{0}\).
2. For \(k = 0, 1, 2, \ldots\) until the stopping criterion is met:
   (i) compute \(u^{k+1}\) from (2.17) via the fast Fourier transform;
   (ii) compute \(v^{k+1}\) by the shrinkage formula (2.19);
   (iii) update the multiplier \(\lambda_{2}^{k+1}\) by (2.20).
3. Output the denoised image \(u^{k+1}\).

3 Numerical experiments

This section presents numerical examples that illustrate the performance of our proposed algorithm on image restoration problems. In the following experiments, we compare our proposed method (HTV) with the FastTV [29] and FNDTV [30] methods. All experiments are performed under Windows 7 and MATLAB 2012a on a desktop with an Intel Core i5 CPU at 2.50 GHz and 4 GB of memory. The quality of the restoration results of the different methods is compared quantitatively using the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). Suppose g, \(f^{0}\), and u are the observed image, the ideal image, and the restored image, respectively. Then the blurred signal-to-noise ratio (BSNR), MSE, PSNR, and SSIM are defined as follows:

$$\begin{aligned}& \mathrm{BSNR} = 20\log_{10}\frac{ \Vert g \Vert _{2}}{ \Vert \eta \Vert _{2}}, \\& \mathrm{MSE} =\frac{1}{{{n^{2}}}}\sum_{i = 0}^{{n^{2}} - 1} {{{ \bigl({f ^{0}}(i) - u(i) \bigr)}^{2}}}, \\& \mathrm{PSNR} = 20\log_{10}\frac{\mathrm{MAX}_{{f^{0}}}}{{\sqrt{\mathrm{MSE}} }}, \\& \mathrm{SSIM} =\frac{(2\mu_{f^{0}}\mu_{u}+c_{1})(2\sigma_{f^{0}u}+c_{2})}{( \mu_{f^{0}}^{2}+\mu_{u}^{2}+c_{1})(\sigma_{f^{0}}^{2}+\sigma_{u}^{2}+c _{2})}, \end{aligned}$$

where η is the additive noise vector, \(n^{2}\) is the number of pixels of the image, \(\mathrm{MAX}_{f^{0}}\) is the maximum possible pixel value of \(f^{0}\), \(\mu_{f^{0}}\) and \(\mu_{u}\) are the mean values of \(f^{0}\) and u, \(\sigma_{f^{0}}^{2}\) and \(\sigma_{u}^{2}\) are the variances of \(f^{0}\) and u, respectively, \(\sigma_{f^{0}u}\) is the covariance of \(f^{0}\) and u, and \(c_{1}\) and \(c_{2}\) are stabilizing constants for near-zero denominator values. We also use the SSIM index map to reveal areas of high and low similarity between two images; the whiter the SSIM index map, the closer the two images. Further details on SSIM can be found in the pioneering work [49].
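For reference, a small sketch of these quality measures as defined above. The SSIM here is the single global statistic given by the formula (the SSIM index map instead evaluates it over local windows), and the constants \(c_{1}\), \(c_{2}\) follow the usual convention of [49] for images in the range [0, 255], which is an assumption.

```python
import numpy as np

def bsnr(g, eta):
    """Blurred signal-to-noise ratio of the observation g with noise eta."""
    return 20.0 * np.log10(np.linalg.norm(g) / np.linalg.norm(eta))

def psnr(f0, u, max_val=255.0):
    """Peak signal-to-noise ratio between the ideal image f0 and the restored image u."""
    mse = np.mean((f0 - u) ** 2)
    return 20.0 * np.log10(max_val / np.sqrt(mse))

def global_ssim(f0, u, max_val=255.0):
    """Global SSIM statistic from the formula above (not the windowed index map)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2   # usual stabilizing constants
    mu1, mu2 = f0.mean(), u.mean()
    var1, var2 = f0.var(), u.var()
    cov = np.mean((f0 - mu1) * (u - mu2))
    return ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / ((mu1 ** 2 + mu2 ** 2 + c1) * (var1 + var2 + c2))
```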

Four test images, “Cameraman”, “Lena”, “Baboon”, and “Man”, which are commonly used in the literature, are shown in Fig. 1. We test three kinds of blur: Gaussian blur, average blur, and motion blur. These blurring kernels can be built with the MATLAB function “fspecial”. The additive noise is Gaussian in all experiments, and we add white Gaussian noise with different BSNR levels to the blurred images. In all tests, the stopping criterion is that the relative difference between successive iterations of the restored image satisfies the inequality

$$ \frac{{ \Vert {{f^{k + 1}} - {f^{k}}} \Vert _{2}} }{{ \Vert {{f^{k}}} \Vert _{2}}} \le1 \times{10^{ - 4}}, $$

where \(f^{k}\) is the computed image at the kth iteration of the tested method. In the following experiments, for our proposed method we fix \(\alpha_{2}=1.3e{-}2\) throughout, \(\alpha_{1}=1e{-}4\) for Gaussian and average blur and \(3e{-}4\) for motion blur, and \(\alpha_{3}=1e{-}4\) for Gaussian and average blur and \(2e{-}4\) for motion blur. For the parameters of FastTV and FNDTV, we refer to [29, 30]. The parameters of every compared method are tuned over many runs until the best PSNR values are obtained.
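For illustration, the relative-change stopping test can be implemented as below; the tolerance \(10^{-4}\) is the one used in the experiments, while the surrounding loop structure is schematic.

```python
import numpy as np

def converged(f_new, f_old, tol=1e-4):
    """Relative change between successive restored images, as in the stopping criterion."""
    return np.linalg.norm(f_new - f_old) / max(np.linalg.norm(f_old), 1e-12) <= tol
```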

Figure 1

Test images

Figure 2 shows the experiment for Gaussian blur. We select the “Cameraman” image \((256\times256)\), shown in Fig. 1(a), as the test image. The “Cameraman” image degraded by a \(9\times9\) Gaussian blur kernel and noise with \(\mathrm{BSNR}=35\) is shown in Fig. 2(a). The images recovered by FastTV, FNDTV, and our method are shown in Fig. 2(b)–(d). To demonstrate the effectiveness of our method more intuitively, we enlarge part of each restored image; the enlarged parts are shown in Fig. 2(e)–(h). We also show the SSIM index maps of the images restored by the three methods in Fig. 2(i)–(l). The SSIM map of the image restored by the proposed method is slightly whiter than the SSIM maps for FastTV and FNDTV. The PSNR and SSIM values for these methods are reported in Table 1; both values for our proposed method are higher than those of FastTV and FNDTV. We also plot the SSIM value versus the iteration number for the three methods in Fig. 3; our method achieves a higher SSIM than the other two methods within a few iterations. In addition, the restoration results for the other images are reported in terms of PSNR and SSIM in Table 1, where again both the PSNR and SSIM of the images restored by our method are higher than those obtained by FastTV and FNDTV.

Figure 2

Results of different methods when restoring the blurred and noisy image “Cameraman” degraded by a \(9\times9\) Gaussian blur kernel and noise with \(\mathrm{BSNR}=35\): (a) blurred and noisy image; (b) restored image by FastTV; (c) restored image by FNDTV; (d) restored image by our method; (e) zoomed part of (a); (f) zoomed part of (b); (g) zoomed part of (c); (h) zoomed part of (d); (i) SSIM index map of the corrupted image; (j) SSIM index map of the recovered image by FastTV; (k) SSIM index map of the recovered image by FNDTV; (l) SSIM index map of the recovered image by our method

Figure 3

Changes of the SSIM value versus the iteration number for the three methods under Gaussian blur

Table 1 Experimental results for different images and different blur kernels, \(\mathrm{BSNR}=35\)

Figure 4 shows the experiment for the “Lenna” image of size \(256\times256\) degraded by average blur with length 9 and noise with \(\mathrm{BSNR}=35\). The degraded “Lenna” image is shown in Fig. 4(a). The images recovered by FastTV, FNDTV, and our method are shown in Fig. 4(b)–(d). More precisely, Fig. 4(e)–(h) displays the same regions of special interest, zoomed in to compare the performance of the three methods. It is not difficult to observe that the proposed method alleviates the staircase phenomenon better. In addition, the SSIM maps of the images restored by the three methods are shown in Fig. 4(i)–(l); the SSIM map obtained by the proposed method is slightly whiter than the maps of the other two methods. In Fig. 5, we plot the SSIM value versus the iteration number for the three methods; our method requires fewer iterations and attains higher SSIM values than the other two methods. These experiments demonstrate the ability of our proposed method to suppress blocky artifacts while preserving edge details. We also report the PSNR and SSIM values of these methods in Table 1; the values for our proposed method are higher than those of FastTV and FNDTV.

Figure 4

Results of different methods when restoring blurred and noisy image “Lenna” degraded by average blur with length 9 and a noise with \(\mathrm{BSNR}=35\): (a) blurred and noisy image; (b) restored image by Fast-TV; (c) restored image by FNDTV; (d) restored image by our method; (e) zoomed part of (a); (f) zoomed part of (b); (g) zoomed part of (c); (h) zoomed part of (d); (i) SSIM index map of the corrupted image; (j) SSIM index map of the recovered image by FastTV; (k) SSIM index map of the recovered image by FNDTV; (l) SSIM index map of the recovered image by our method

Figure 5

Changes of the SSIM value versus the iteration number for the three methods under average blur

The experiments with motion blur are shown in Figs. 6 and 8. We perform two groups of experiments with different motion angles (theta): one with a severe blur, shown in Fig. 8, and one with a slight blur, shown in Fig. 6. The images recovered by the three methods are shown in Figs. 6 and 8(b)–(d), and the enlarged parts are shown in Figs. 6 and 8(e)–(h). We also show the SSIM index maps of the restored images in Figs. 6 and 8(j)–(l); the SSIM map of the image restored by the proposed method is slightly whiter than the SSIM maps for FastTV and FNDTV. In Figs. 7 and 9, we plot the SSIM value versus the iteration number for the FastTV method, the FNDTV method, and our method; our method attains higher visual quality and recovers more details than the FastTV and FNDTV methods. The PSNR and SSIM values are listed in Tables 1 and 2; both values for the proposed method are much better than those provided by FastTV and FNDTV.

Figure 6

Results of different methods when restoring blurred and noisy image “Man” degraded by motion blur with \(\mathrm{len}=20\) and \(\mathrm{theta}=20\) and a noise with \(\mathrm{BSNR}=35\): (a) blurred and noisy image; (b) restored image by Fast-TV; (c) restored image by FNDTV; (d) restored image by our method; (e) zoomed part of (a); (f) zoomed part of (b); (g) zoomed part of (c); (h) zoomed part of (d); (i) SSIM index map of the corrupted image; (j) SSIM index map of the recovered image by FastTV; (k) SSIM index map of the recovered image by FNDTV; (l) SSIM index map of the recovered image by our method

Figure 7

Changes of the SSIM value versus the iteration number for the three methods under motion blur with \(\mathrm{theta}=20\)

Figure 8

Results of different methods when restoring blurred and noisy image “Baboon” degraded by motion blur with \(\mathrm{len}=10\) and \(\mathrm{theta} =100\) and a noise with \(\mathrm{BSNR}=35\): (a) blurred and noisy image; (b) restored image by FastTV; (c) restored image by FNDTV; (d) restored image by our method; (e) zoomed part of (a); (f) zoomed part of (b); (g) zoomed part of (c); (h) zoomed part of (d); (i) SSIM index map of the corrupted image; (j) SSIM index map of the recovered image by FastTV; (k) SSIM index map of the recovered image by FNDTV; (l) SSIM index map of the recovered image by our method

Figure 9

Changes of the SSIM value versus the iteration number for the three methods under motion blur with \(\mathrm{theta}=100\)

Table 2 Experimental results for different images and different blur kernels, \(\mathrm{BSNR}=40\)

The numerical results of the three methods in terms of PSNR and SSIM are summarized in Tables 1 and 2. From these tables it is not difficult to see that the PSNR and SSIM of the images restored by our proposed method are higher than those obtained by FastTV and FNDTV.

4 Conclusion

In this paper, we propose a hybrid total variation model and employ the alternating direction method of multipliers to solve it. Experimental results demonstrate that the proposed model obtains better restorations than some existing restoration methods and achieves better visual quality than the two compared methods.

References

1. Hajime, T., Hayashi, T., Nishi, T.: Application of digital image analysis to pattern formation in polymer systems. J. Appl. Phys. 59(11), 3627–3643 (1986)
2. Chen, M., Xia, D., Han, J., Liu, Z.: An analytical method for reducing metal artifacts in X-ray CT images. Math. Probl. Eng. 2019, Article ID 2351878 (2019)
3. Chen, M., Li, G.: Forming mechanism and correction of CT image artifacts caused by the errors of three system parameters. J. Appl. Math. 2013, Article ID 545147 (2013)
4. Chen, Y., Guo, Y., Wang, Y., et al.: Denoising of hyperspectral images using nonconvex low rank matrix approximation. IEEE Trans. Geosci. Remote Sens. 55(9), 5366–5380 (2017)
5. Tikhonov, A.N., Arsenin, V.Y.: Solutions of Ill-Posed Problems. Winston and Sons, Washington (1977)
6. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)
7. Vogel, C.R., Oman, M.E.: Iterative methods for total variation denoising. SIAM J. Sci. Comput. 17, 227–238 (1996)
8. Zhu, J.G., Hao, B.B.: A new noninterior continuation method for solving a system of equalities and inequalities. J. Appl. Math. 2014, Article ID 592540 (2014)
9. Yu, J., Li, M., Wang, Y., He, G.: A decomposition method for large-scale box constrained optimization. Appl. Math. Comput. 231(12), 9–15 (2014)
10. Han, C., Feng, T., He, G., Guo, T.: Parallel variable distribution algorithm for constrained optimization with nonmonotone technique. J. Appl. Math. 2013, Article ID 295147 (2013)
11. Sun, L., He, G., Wang, Y.: An accurate active set Newton algorithm for large scale bound constrained optimization. Appl. Math. 56(3), 297–314 (2011)
12. Zheng, F., Han, C., Wang, Y.: Parallel SSLE algorithm for large scale constrained optimization. Appl. Math. Comput. 217(12), 5277–5384 (2011)
13. Zhu, J., Hao, B.: A new smoothing method for solving nonlinear complementarity problems. Open Math. 17(1), 21–38 (2019)
14. Tian, Z., Tian, M., Gu, C.: An accelerated Jacobi gradient based iterative algorithm for solving Sylvester matrix equations. Filomat 31(8), 2381–2390 (2017)
15. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1–2), 89–97 (2004)
16. Wang, Y., Yang, J., Yin, W., et al.: A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 1(3), 248–272 (2008)
17. Zhang, R.Y., Xu, F.F., Huang, J.C.: Reconstructing local volatility using total variation. Acta Math. Sin. Engl. Ser. 33(2), 263–277 (2017)
18. Bai, Z.B., Dong, X.Y., Yin, C.: Existence results for impulsive nonlinear fractional differential equation with mixed boundary conditions. Bound. Value Probl. 2016, 63 (2016)
19. Wang, Z.: A numerical method for delayed fractional-order differential equations. J. Appl. Math. 2013, 256071 (2013)
20. Wang, Z., Huang, X., Zhou, J.P.: A numerical method for delayed fractional-order differential equations: based on G-L definition. Appl. Math. Inf. Sci. 7(2), 525–529 (2013)
21. Zhang, Y.L., Lv, K.B., et al.: Modeling gene networks in Saccharomyces cerevisiae based on gene expression profiles. Comput. Math. Methods Med. 2015, Article ID 621264 (2015)
22. Lu, X., Wang, H.X., Wang, X.: On Kalman smoothing for wireless sensor networks systems with multiplicative noises. J. Appl. Math. 2012, 203–222 (2012)
23. Ding, S.F., Huang, H.J., Xu, X.Z., et al.: Polynomial smooth twin support vector machines. Appl. Math. Inf. Sci. 8(4), 2063–2071 (2014)
24. Goldfarb, D., Yin, W.: Second-order cone programming methods for total variation based image restoration. SIAM J. Sci. Comput. 27(2), 622–645 (2005)
25. Han, C.Y., Zheng, F.Y., Guo, T.D., He, G.P.: Parallel algorithms for large-scale linearly constrained minimization problem. Acta Math. Appl. Sin. Engl. Ser. 30(3), 707–720 (2014)
26. Hao, B.B., Zhu, J.G.: Fast L1 regularized iterative forward backward splitting with adaptive parameter selection for image restoration. J. Vis. Commun. Image Represent. 44, 139–147 (2017)
27. Morini, B., Porcelli, M., Chan, R.H.: A reduced Newton method for constrained linear least squares problems. J. Comput. Appl. Math. 233, 2200–2212 (2010)
28. Chan, R.H., Tao, M., Yuan, X.: Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers. SIAM J. Imaging Sci. 6(1), 680–697 (2013)
29. Huang, Y.M., Ng, M.K., Wen, Y.W.: A fast total variation minimization method for image restoration. Multiscale Model. Simul. 7(2), 774–795 (2008)
30. Liu, J., Huang, T.Z., et al.: An efficient variational method for image restoration. Abstr. Appl. Anal. 2013, 213536 (2013)
31. Chambolle, A., Lions, P.L.: Image recovery via total variation minimization and related problems. Numer. Math. 76, 167–188 (1997)
32. Chan, T., Marquina, A., Mulet, P.: High-order total variation-based image restoration. SIAM J. Sci. Comput. 22(2), 503–516 (2000)
33. Lv, X., Song, Y., Wang, S., et al.: Image restoration with a high-order total variation minimization method. Appl. Math. Model. 37, 8210–8224 (2013)
34. Liu, G., Huang, T., Liu, J.: High-order TVL1-based images restoration and spatially adapted regularization parameter selection. Comput. Math. Appl. 67, 2015–2026 (2014)
35. Zhu, G.J., Li, K., Hao, B.B.: Image restoration by a mixed high-order total variation and l1 regularization model. Math. Probl. Eng. 2018, Article ID 6538610 (2018)
36. Lysaker, M., Tai, X.C.: Iterative image restoration combining total variation minimization and a second-order functional. Int. J. Comput. Vis. 66(1), 5–18 (2006)
37. Zhu, J., Liu, K., Hao, B.: Restoration of remote sensing images based on nonconvex constrained high-order total variation regularization. J. Appl. Remote Sens. 13(2), 022006 (2019)
38. You, Y.L., Kaveh, M.: Fourth-order partial differential equations for noise removal. IEEE Trans. Image Process. 9(10), 1723–1730 (2000)
39. Hajiaboli, M.R.: An anisotropic fourth-order diffusion filter for image noise removal. Int. J. Comput. Vis. 92(2), 177–191 (2011)
40. Chen, D., Chen, Y., Xue, D.: Fractional-order total variation image denoising based on proximity algorithm. Appl. Math. Comput. 257, 537–545 (2015)
41. Wang, Z., Xie, Y., Lu, J., et al.: Stability and bifurcation of a delayed generalized fractional-order prey–predator model with interspecific competition. Appl. Math. Comput. 347, 360–369 (2019)
42. Wang, X., Wang, Z., Huang, X., et al.: Dynamic analysis of a fractional-order delayed SIR model with saturated incidence and treatment functions. Int. J. Bifurcat. Chaos 28(14), 1850180 (2018)
43. Jiang, C., Zhang, F., Li, T.: Synchronization and antisynchronization of N-coupled fractional-order complex systems with ring connection. Math. Methods Appl. Sci. 41(3), 2625–2638 (2018)
44. Ren, Z., He, C., Zhang, Q.: Fractional order total variation regularization for image super-resolution. Signal Process. 93(9), 2408–2421 (2013)
45. Wu, C., Tai, X.C.: Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 3(3), 300–339 (2010)
46. Ng, M.K., Chan, R.H., Tang, W.C.: A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J. Sci. Comput. 21(3), 851–866 (1999)
47. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice Hall, New York (2011)
48. Yang, J.F., et al.: A fast algorithm for edge-preserving variational multichannel image restoration. SIAM J. Imaging Sci. 2, 569–592 (2011)
49. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004)


Acknowledgements

The authors would like to thank the referees for their valuable comments and suggestions.

Funding

This work was supported by the National Key Research and Development Program of China (No. 2017YFC1405600), by the Training Program of the Major Research Plan of the National Science Foundation of China (No. 91746104), by the National Science Foundation of China (Nos. 61101208, 11326186), by the Qingdao Postdoctoral Science Foundation (No. 2016114), by the Project of Shandong Province Higher Educational Science and Technology Program (No. J17KA166), and by the Joint Innovative Center for Safe and Effective Mining Technology and Equipment of Coal Resources, Shandong Province of China and SDUST Research Fund (No. 2014TDJH102).

Author information


Contributions

All authors worked together to produce the results and read and approved the final manuscript.

Corresponding author

Correspondence to Binbin Hao.

Ethics declarations

Competing interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhu, J., Li, K. & Hao, B. Hybrid variational model based on alternating direction method for image restoration. Adv Differ Equ 2019, 34 (2019). https://doi.org/10.1186/s13662-018-1908-0

