A Generalized Elastic Net Regularization with Smoothed l0 Penalty

1. Introduction

Compressive sensing (CS) has emerged as a very active research field and has brought about great changes in signal processing in recent years, with broad applications such as compressed imaging, analog-to-information conversion, and biosensors [1] [2] [3] . Meanwhile, ${\mathcal{l}}_{0}$ norm based signal recovery is attractive in compressed sensing because it can facilitate exact recovery of sparse signals with very high probability [4] [5] . Mathematically, the problem can be presented as

$\underset{x\in {R}^{N}}{\mathrm{min}}{\Vert x\Vert}_{0},\quad \text{subject to }Ax=y,$ (1)

where $y\in {R}^{m}$ , $A\in {R}^{m\times N}$ is a measurement matrix, ${\Vert \cdot \Vert}_{2}$ denotes the Euclidean norm, and ${\Vert x\Vert}_{0}$ , formally a quasi-norm, denotes the number of nonzero components of $x={\left({x}_{1},{x}_{2},\cdots ,{x}_{N}\right)}^{\text{T}}\in {R}^{N}$ .

We can then solve the unconstrained ${\mathcal{l}}_{0}$ regularization problem

$\underset{x\in {R}^{N}}{\mathrm{min}}\left\{\frac{1}{2}{\Vert Ax-y\Vert}_{2}^{2}+\lambda {\Vert x\Vert}_{0}\right\},$ (2)

where $\lambda >0$ is a regularization parameter.

A natural approach to this problem is to solve its convex relaxation, the ${\mathcal{l}}_{1}$ regularization problem [6] [7] , as follows:

$\underset{x\in {R}^{N}}{\mathrm{min}}\left\{\frac{1}{2}{\Vert Ax-y\Vert}_{2}^{2}+\lambda {\Vert x\Vert}_{1}\right\},$ (3)

where ${\Vert x\Vert}_{1}={\displaystyle {\sum}_{i=1}^{N}}|{x}_{i}|$ is the ${\mathcal{l}}_{1}$ norm. Undoubtedly, the ${\mathcal{l}}_{1}$ regularization

has many applications [8] [9] and can be solved by many classic algorithms, such as the iterative soft thresholding algorithm [7] and LARS [10] . An effective regression method, the Lasso [11] , is closely related to the ${\mathcal{l}}_{1}$ regularization as well. In 2005, Zou and Hastie proposed the following model, called the elastic net regularization [12] :
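For concreteness, the iterative soft thresholding algorithm for (3) can be sketched as follows. This is a minimal NumPy illustration, not the implementation compared against in Section 4; the constant step size $1/{\Vert A\Vert}_{2}^{2}$ is a standard choice assumed here:

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding: S_t(z) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative soft thresholding for min 0.5*||Ax - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Each iteration is a gradient step on the smooth data-fit term followed by the proximal map of the ${\mathcal{l}}_{1}$ penalty.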

$\underset{x\in {R}^{N}}{\mathrm{min}}\left\{\frac{1}{2}{\Vert Ax-y\Vert}_{2}^{2}+{\lambda}_{1}{\Vert x\Vert}_{1}+{\lambda}_{2}{\Vert x\Vert}_{2}^{2}\right\},$ (4)

where ${\lambda}_{1},{\lambda}_{2}>0$ are two regularization parameters. It has been shown in many papers that the elastic net regularization outperforms the Lasso in prediction accuracy. Candès proved that as long as A satisfies the RIP condition with a suitable constant, ${\mathcal{l}}_{1}$ minimization yields the same solution as ${\mathcal{l}}_{0}$ minimization [13] . So in general, the ${\mathcal{l}}_{1}$ regularization problem can be regarded as an approximation to the ${\mathcal{l}}_{0}$ regularization. Therefore, we shall consider a generalized elastic net regularization problem with an ${\mathcal{l}}_{0}$ penalty:

$\underset{x\in {R}^{N}}{\mathrm{min}}\left\{\frac{1}{2}{\Vert Ax-y\Vert}_{2}^{2}+{\lambda}_{1}{\Vert x\Vert}_{0}+{\lambda}_{2}{\Vert x\Vert}_{2}^{2}\right\}.$ (5)

Unfortunately, the ${\mathcal{l}}_{0}$ norm minimization problem is NP-hard [14] . Hence, exploiting the sparsity of the solution x, we instead solve the following generalized elastic net regularization with a smoothed ${\mathcal{l}}_{0}$ penalty:

$\underset{x\in {R}^{N}}{\mathrm{min}}\left\{\frac{1}{2}{\Vert Ax-y\Vert}_{2}^{2}+{\lambda}_{1}{\Vert x\Vert}_{0,\delta}+{\lambda}_{2}{\Vert x\Vert}_{2}^{2}\right\},$ (6)

where ${\Vert x\Vert}_{0,\delta}={\displaystyle {\sum}_{i=1}^{N}}\frac{{x}_{i}^{2}}{{x}_{i}^{2}+\delta}$ and $\delta >0$ is a parameter that approaches zero in order to approximate ${\Vert x\Vert}_{0}$ .
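A quick numerical illustration of how ${\Vert x\Vert}_{0,\delta}$ approaches ${\Vert x\Vert}_{0}$ as $\delta$ shrinks (an illustrative Python snippet; the vector and the values of $\delta$ are arbitrary):

```python
import numpy as np

def smoothed_l0(x, delta):
    # ||x||_{0,delta} = sum_i x_i^2 / (x_i^2 + delta)
    return float(np.sum(x**2 / (x**2 + delta)))

x = np.array([0.0, 0.0, 1.5, -0.3, 0.0, 2.0])  # ||x||_0 = 3
for delta in (1e-1, 1e-3, 1e-6):
    print(delta, smoothed_l0(x, delta))        # approaches 3 as delta -> 0
```

Each summand is close to 1 when $|{x}_{i}|\gg \sqrt{\delta}$ and close to 0 when ${x}_{i}=0$, which is why the sum counts the nonzero entries in the limit.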

In this paper, we propose an iterative algorithm for recovering sparse vectors that substitutes a smooth function for the ${\mathcal{l}}_{0}$ penalty [15] . By adding an ${\mathcal{l}}_{2}$ term, we can prove that the algorithm is convergent using algebraic methods. In the experimental part, we compare the algorithm with the ${\mathcal{l}}_{1}$ iterative soft thresholding algorithm (IST) [16] . The output results show an outstanding success of the new method.

The rest of this paper is organized as follows. We develop the new algorithm in Section 2 and prove its convergence in Section 3. Experiments on accuracy and efficiency are reported in Section 4. Finally, we conclude this paper in Section 5.

2. Problem Reformulation

The reconstruction method discussed in this paper directly approaches the ${\mathcal{l}}_{0}$ norm and obtains its minimal solution with a suitably designed objective function. We denote by ${C}_{\delta}\left(x,{\lambda}_{1},{\lambda}_{2}\right)$ the objective function of the minimization problem (6):

${C}_{\delta}\left(x,{\lambda}_{1},{\lambda}_{2}\right)=\frac{1}{2}{\Vert Ax-y\Vert}_{2}^{2}+{\lambda}_{1}{\displaystyle \underset{i=1}{\overset{N}{\sum}}}\frac{{x}_{i}^{2}}{{x}_{i}^{2}+\delta}+{\lambda}_{2}{\Vert x\Vert}_{2}^{2}.$ (7)

Our goal is to minimize this objective function. For any $\delta >0$ and ${\lambda}_{1},{\lambda}_{2}>0$ , the objective is coercive (because of the ${\lambda}_{2}{\Vert x\Vert}_{2}^{2}$ term), so the minimization problem has a solution. A minimizer $\stackrel{^}{x}$ of (7) satisfies the first-order optimality condition

${A}^{\text{T}}\left(A\stackrel{^}{x}-y\right)+{\left[\frac{2{\lambda}_{1}{\stackrel{^}{x}}_{i}\delta}{{\left({\stackrel{^}{x}}_{i}{}^{2}+\delta \right)}^{2}}\right]}_{1\le i\le N}+2{\lambda}_{2}\stackrel{^}{x}=0.$ (8)

Then we can present the following iterative algorithm (Algorithm 1) to solve the above minimization problem: starting from an initial vector ${x}^{0}$ , the next iterate ${x}^{k+1}$ is obtained by freezing the denominators of (8) at the current iterate ${x}^{k}$ and solving the resulting linear system

$\left({A}^{\text{T}}A+2{\lambda}_{2}{I}_{N}+2{\lambda}_{1}\delta {D}_{k}\right){x}^{k+1}={A}^{\text{T}}y,\quad {D}_{k}=\text{diag}{\left(\frac{1}{{\left({\left({x}_{i}^{k}\right)}^{2}+\delta \right)}^{2}}\right)}_{1\le i\le N},$ (9)

which can be written componentwise as

${A}^{\text{T}}\left(A{x}^{k+1}-y\right)+{\left[\frac{2{\lambda}_{1}\delta {x}_{i}^{k+1}}{{\left({\left({x}_{i}^{k}\right)}^{2}+\delta \right)}^{2}}\right]}_{1\le i\le N}+2{\lambda}_{2}{x}^{k+1}=0.$ (10)

Since ${A}^{\text{T}}A+2{\lambda}_{2}{I}_{N}+2{\lambda}_{1}\delta {D}_{k}$ is positive definite, ${x}^{k+1}$ is well defined at every step.

3. Convergence of the Algorithm

In this section, we prove that the algorithm is convergent. We start from Lemma 1 [17] , whose inequality can be deduced directly using the mean value theorem.

Lemma 1. Given $\delta >0$ , then the inequality

$\frac{{x}^{2}}{{x}^{2}+\delta}-\frac{{y}^{2}}{{y}^{2}+\delta}-\frac{2\delta \left(x-y\right)y}{{\left({x}^{2}+\delta \right)}^{2}}\ge \frac{\delta {\left(x-y\right)}^{2}}{{\left({x}^{2}+\delta \right)}^{2}}$ (11)

holds for any $x,y\in R$ .

Proof. Denote $f\left(t\right)=\frac{t}{t+\delta}$ for $t\ge 0$ . By the mean value theorem,

$f\left({x}^{2}\right)-f\left({y}^{2}\right)={f}^{\prime}\left(\xi \right)\left({x}^{2}-{y}^{2}\right)=\frac{\delta \left({x}^{2}-{y}^{2}\right)}{{\left(\xi +\delta \right)}^{2}},\quad \text{where }\xi \text{ lies between }{x}^{2}\text{ and }{y}^{2}.$ (12)

If ${x}^{2}>{y}^{2}$ , then $\xi <{x}^{2}$ and ${x}^{2}-{y}^{2}>0$ ; if ${x}^{2}<{y}^{2}$ , then $\xi >{x}^{2}$ and ${x}^{2}-{y}^{2}<0$ . In either case (and trivially when ${x}^{2}={y}^{2}$ ),

$\frac{{x}^{2}}{{x}^{2}+\delta}-\frac{{y}^{2}}{{y}^{2}+\delta}\ge \frac{\delta \left({x}^{2}-{y}^{2}\right)}{{\left({x}^{2}+\delta \right)}^{2}}=\frac{2\delta \left(x-y\right)y+\delta {\left(x-y\right)}^{2}}{{\left({x}^{2}+\delta \right)}^{2}},$ (13)

where we used ${x}^{2}-{y}^{2}=2\left(x-y\right)y+{\left(x-y\right)}^{2}$ . Moving the first term on the right-hand side of (13) to the left yields exactly the inequality (11). $\square $

The next Lemma proves that the sequence ${x}^{\left(k\right)}$ drives the function ${C}_{\delta}\left(x,{\lambda}_{1},{\lambda}_{2}\right)$ downhill.
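The inequality of Lemma 1 can also be checked numerically at random points. The following Python snippet is a quick sanity check, not part of the proof; the sample size and $\delta$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5
x = rng.standard_normal(10_000)
y = rng.standard_normal(10_000)

# left- and right-hand sides of inequality (11), evaluated elementwise
lhs = x**2/(x**2 + delta) - y**2/(y**2 + delta) - 2*delta*(x - y)*y/(x**2 + delta)**2
rhs = delta*(x - y)**2/(x**2 + delta)**2
print(bool(np.all(lhs >= rhs - 1e-12)))  # True
```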

Lemma 2. For any $\delta >0$ and ${\lambda}_{1},{\lambda}_{2}>0$ , let ${x}^{k+1}$ be the solution of (9) for $k=1,2,3,\cdots $ . Then we have

${\Vert A{x}^{k}-A{x}^{k+1}\Vert}_{2}^{2}\le 2\left({C}_{\delta}\left({x}^{k}\mathrm{,}{\lambda}_{1}\mathrm{,}{\lambda}_{2}\right)-{C}_{\delta}\left({x}^{k+1}\mathrm{,}{\lambda}_{1}\mathrm{,}{\lambda}_{2}\right)\right)\mathrm{.}$ (14)

Furthermore,

${\Vert {x}^{k}-{x}^{k+1}\Vert}_{2}^{2}\le c\left({C}_{\delta}\left({x}^{k},{\lambda}_{1},{\lambda}_{2}\right)-{C}_{\delta}\left({x}^{k+1},{\lambda}_{1},{\lambda}_{2}\right)\right),$ (15)

where c is a positive constant that depends on ${\lambda}_{2}$ .

Proof.

$\begin{array}{c}{C}_{\delta}\left({x}^{k},{\lambda}_{1},{\lambda}_{2}\right)-{C}_{\delta}\left({x}^{k+1},{\lambda}_{1},{\lambda}_{2}\right)={\lambda}_{1}{\displaystyle \underset{i=1}{\overset{N}{\sum}}}\left(\frac{{\left({x}_{i}^{k}\right)}^{2}}{{\left({x}_{i}^{k}\right)}^{2}+\delta}-\frac{{\left({x}_{i}^{k+1}\right)}^{2}}{{\left({x}_{i}^{k+1}\right)}^{2}+\delta}\right)\\ \text{}+{\lambda}_{2}({\Vert {x}^{k}\Vert}_{2}^{2}-{\Vert {x}^{k+1}\Vert}_{2}^{2})+\frac{1}{2}\left({\Vert A{x}^{k}-y\Vert}_{2}^{2}-{\Vert A{x}^{k+1}-y\Vert}_{2}^{2}\right)\\ ={\lambda}_{1}{\displaystyle \underset{i=1}{\overset{N}{\sum}}}\left(\frac{{\left({x}_{i}^{k}\right)}^{2}}{{\left({x}_{i}^{k}\right)}^{2}+\delta}-\frac{{\left({x}_{i}^{k+1}\right)}^{2}}{{\left({x}_{i}^{k+1}\right)}^{2}+\delta}\right)\\ \text{}+\frac{1}{2}{\Vert A{x}^{k}-A{x}^{k+1}\Vert}_{2}^{2}+{\lambda}_{2}{\Vert {x}^{k}-{x}^{k+1}\Vert}_{2}^{2}\\ \text{}+2{\lambda}_{2}{\left({x}^{k}-{x}^{k+1}\right)}^{\text{T}}{x}^{k+1}+{\left(A{x}^{k}-A{x}^{k+1}\right)}^{\text{T}}\left(A{x}^{k+1}-y\right).\end{array}$ (16)

Using (9), the last term in (16) can be simplified:

$\begin{array}{c}{\left(A{x}^{k}-A{x}^{k+1}\right)}^{\text{T}}\left(A{x}^{k+1}-y\right)={\left({x}^{k}-{x}^{k+1}\right)}^{\text{T}}{A}^{\text{T}}\left(A{x}^{k+1}-y\right)\\ ={\left({x}^{k}-{x}^{k+1}\right)}^{\text{T}}\left(-2{\lambda}_{2}{x}^{k+1}-{\left[\frac{2{\lambda}_{1}\delta {x}_{i}^{k+1}}{{\left({\left({x}_{i}^{k}\right)}^{2}+\delta \right)}^{2}}\right]}_{1\le i\le N}\right)\\ =-{\displaystyle \underset{i=1}{\overset{N}{\sum}}}\frac{2{\lambda}_{1}\delta {x}_{i}^{k+1}\left({x}_{i}^{k}-{x}_{i}^{k+1}\right)}{{\left({\left({x}_{i}^{k}\right)}^{2}+\delta \right)}^{2}}-2{\lambda}_{2}{\left({x}^{k}-{x}^{k+1}\right)}^{\text{T}}{x}^{k+1}.\end{array}$ (17)

Substituting (17) into (16) and using (11) yields

$\begin{array}{c}{C}_{\delta}\left({x}^{k},{\lambda}_{1},{\lambda}_{2}\right)-{C}_{\delta}\left({x}^{k+1},{\lambda}_{1},{\lambda}_{2}\right)\\ ={\lambda}_{1}{\displaystyle \underset{i=1}{\overset{N}{\sum}}}\left(\frac{{\left({x}_{i}^{k}\right)}^{2}}{{\left({x}_{i}^{k}\right)}^{2}+\delta}-\frac{{\left({x}_{i}^{k+1}\right)}^{2}}{{\left({x}_{i}^{k+1}\right)}^{2}+\delta}-\frac{2\delta {x}_{i}^{k+1}\left({x}_{i}^{k}-{x}_{i}^{k+1}\right)}{{\left({\left({x}_{i}^{k}\right)}^{2}+\delta \right)}^{2}}\right)\\ \text{}+\frac{1}{2}{\Vert A{x}^{k}-A{x}^{k+1}\Vert}_{2}^{2}+{\lambda}_{2}{\Vert {x}^{k}-{x}^{k+1}\Vert}_{2}^{2}\\ \ge {\displaystyle \underset{i=1}{\overset{N}{\sum}}}\frac{\delta {\lambda}_{1}{\left({x}_{i}^{k}-{x}_{i}^{k+1}\right)}^{2}}{{\left({\left({x}_{i}^{k}\right)}^{2}+\delta \right)}^{2}}+\frac{1}{2}{\Vert A{x}^{k}-A{x}^{k+1}\Vert}_{2}^{2}+{\lambda}_{2}{\Vert {x}^{k}-{x}^{k+1}\Vert}_{2}^{2}.\end{array}$ (18)

Since ${\displaystyle {\sum}_{i=1}^{N}}\frac{\delta {\lambda}_{1}{\left({x}_{i}^{k}-{x}_{i}^{k+1}\right)}^{2}}{{\left({\left({x}_{i}^{k}\right)}^{2}+\delta \right)}^{2}}\ge 0$ for any ${x}^{k}$ and ${x}^{k+1}$ , we obtain (14) and (15) from (18) with $c=\frac{1}{{\lambda}_{2}}$ . $\square $

Lemma 3. ( [18] , Theorem 3.1) Let $P\left(z,\stackrel{\xaf}{w}\right)=0$ be given, and let $Q\left(z,\left(\stackrel{\xaf}{a}\right),\left(\stackrel{\xaf}{c}\right)\right)=0$ be its corresponding highest-order system of equations. If $Q\left(z,\left(\stackrel{\xaf}{a}\right),\left(\stackrel{\xaf}{c}\right)\right)=0$ has only the trivial solution $z=0$ , then $P\left(z,\stackrel{\xaf}{w}\right)=0$ has $\beta ={\displaystyle {\prod}_{i=1}^{m}}{q}_{i}$ solutions, where ${q}_{i}$ is the degree of ${P}_{i}$ .

Theorem 1. For any $\delta >0$ and ${\lambda}_{1},{\lambda}_{2}>0$ , the iterative solutions ${x}^{k}$ in (9) converge to a point ${x}^{*}$ , that is, ${\mathrm{lim}}_{k\to \infty}{x}^{k}={x}^{*}$ , and ${x}^{*}$ is a critical point of (6).

Proof. We first note that the sequence ${x}^{k}$ is bounded: by Lemma 2 the values ${C}_{\delta}\left({x}^{k},{\lambda}_{1},{\lambda}_{2}\right)$ are non-increasing, and the objective is coercive. Let ${x}^{{k}_{i}}$ be a convergent subsequence of ${x}^{k}$ with limit point ${x}^{*}$ . By (15), the sequence ${x}^{{k}_{i}+1}$ also converges to ${x}^{*}$ . Replacing ${x}^{k}$ , ${x}^{k+1}$ with ${x}^{{k}_{i}}$ , ${x}^{{k}_{i}+1}$ in (10) and letting $i\to \infty $ yields

${\left[\frac{2{\lambda}_{1}\delta {x}_{i}^{*}}{{\left({\left({x}_{i}^{*}\right)}^{2}+\delta \right)}^{2}}\right]}_{1\le i\le N}+{A}^{\text{T}}\left(A{x}^{*}-y\right)+2{\lambda}_{2}{x}^{*}=0.$ (19)

And this implies that the limit of every convergent subsequence of ${x}^{k}$ satisfies the optimality condition (8). In order to prove the convergence of the sequence ${x}^{k}$ , we need to prove that the limit point set M, which contains all the limit points of convergent subsequences of ${x}^{k}$ , is a finite set. So we have to prove that the following equation has finitely many solutions:

${\left[\frac{2{\lambda}_{1}\delta {u}_{i}}{{\left({u}_{i}^{2}+\delta \right)}^{2}}\right]}_{1\le i\le N}+{A}^{\text{T}}\left(Au-y\right)+2{\lambda}_{2}u=0,$ (20)

where $u={\left({u}_{1},{u}_{2},\cdots ,{u}_{N}\right)}^{\text{T}}\in {R}^{N}$ . We can rewrite (20) as follows:

${\left[\frac{2{\lambda}_{1}\delta {u}_{i}}{{\left({u}_{i}^{2}+\delta \right)}^{2}}\right]}_{1\le i\le N}+\left({A}^{\text{T}}A+2{\lambda}_{2}{I}_{N}\right)u-{A}^{\text{T}}y=0,$ (21)

where ${I}_{N}$ is the $N\times N$ identity matrix. It is obvious that ${A}^{\text{T}}A+2{\lambda}_{2}{I}_{N}$ is a positive definite matrix and ${A}^{\text{T}}y\in {R}^{N}$ . Then (21) can be rewritten as the following equation:

$2{\lambda}_{1}\delta u+B\left(\left({A}^{\text{T}}A+2{\lambda}_{2}{I}_{N}\right)u-{A}^{\text{T}}y\right)=0,$ (22)

where B is an $N\times N$ diagonal matrix with diagonal entries ${B}_{ii}={\left({u}_{i}^{2}+\delta \right)}^{2}$ , $i=1,2,\cdots ,N$ . We denote ${A}^{\text{T}}A+2{\lambda}_{2}{I}_{N}={\left({a}_{ij}\right)}_{N\times N}$ and ${A}^{\text{T}}y={\left({q}_{1},{q}_{2},\cdots ,{q}_{N}\right)}^{\text{T}}$ . Then

$(\begin{array}{l}2{\lambda}_{1}\delta {u}_{1}+\left({a}_{11}{u}_{1}+{a}_{12}{u}_{2}+\cdots +{a}_{1N}{u}_{N}-{q}_{1}\right){\left({u}_{1}^{2}+\delta \right)}^{2}=0,\hfill \\ 2{\lambda}_{1}\delta {u}_{2}+\left({a}_{21}{u}_{1}+{a}_{22}{u}_{2}+\cdots +{a}_{2N}{u}_{N}-{q}_{2}\right){\left({u}_{2}^{2}+\delta \right)}^{2}=0,\hfill \\ \mathrm{}\cdots \cdots \cdots \cdots \cdots \cdots \hfill \\ 2{\lambda}_{1}\delta {u}_{N}+\left({a}_{N1}{u}_{1}+{a}_{N2}{u}_{2}+\cdots +{a}_{NN}{u}_{N}-{q}_{N}\right){\left({u}_{N}^{2}+\delta \right)}^{2}=0.\hfill \end{array}$ (23)

Since (22) is equivalent to (23), it suffices to show that (23) has finitely many solutions. According to Lemma 3, if the highest-order system of (23), displayed in (24), has only the trivial solution, then (23) has finitely many solutions.

$(\begin{array}{l}\left({a}_{11}{u}_{1}+{a}_{12}{u}_{2}+\cdots +{a}_{1N}{u}_{N}\right){u}_{1}^{4}=0,\hfill \\ \left({a}_{21}{u}_{1}+{a}_{22}{u}_{2}+\cdots +{a}_{2N}{u}_{N}\right){u}_{2}^{4}=0,\hfill \\ \mathrm{}\cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \hfill \\ \left({a}_{N1}{u}_{1}+{a}_{N2}{u}_{2}+\cdots +{a}_{NN}{u}_{N}\right){u}_{N}^{4}=0.\hfill \end{array}$ (24)

We now prove that the system (24) has only the trivial solution. Suppose, for contradiction, that $u={\left({u}_{1},{u}_{2},\cdots ,{u}_{s},0,\cdots ,0\right)}^{\text{T}}\in {R}^{N}$ is a nonzero solution of (24) with ${u}_{i}\ne 0$ for $i=1,2,\cdots ,s$ , $1\le s\le N$ (after reordering the coordinates if necessary). Dividing the i-th equation of (24) by ${u}_{i}^{4}$ for $i=1,\cdots ,s$ gives

$C\stackrel{~}{u}=0,$ (25)

where $\stackrel{~}{u}={\left({u}_{1},\cdots ,{u}_{s}\right)}^{\text{T}}$ and $C={\left({a}_{ij}\right)}_{s\times s}$ is the $s\times s$ leading principal submatrix of the positive definite matrix ${A}^{\text{T}}A+2{\lambda}_{2}{I}_{N}$ ; therefore C is positive definite as well. So ${u}_{i}=0$ for $i=1,2,\cdots ,s$ , which contradicts the assumption that ${u}_{i}\ne 0$ for $i=1,2,\cdots ,s$ .

Therefore, the system (24) has only the trivial solution, and so Equation (20) has finitely many solutions. Since every limit point of a convergent subsequence of ${x}^{\left(k\right)}$ satisfies Equation (20), the limit point set M is a finite set. Combining this with ${\Vert {x}^{\left(k+1\right)}-{x}^{\left(k\right)}\Vert}_{2}\to 0$ as $k\to \infty $ , we obtain that the sequence ${x}^{\left(k\right)}$ is convergent and its limit ${x}^{*}$ is a critical point of problem (6). $\square $

4. Numerical Experiments

In this section, we present some numerical experiments to show the efficiency and accuracy of Algorithm 1 (IAGENR-L0) for sparse vector recovery. We compare the performance of Algorithm 1 with the ${\mathcal{l}}_{1}$ IST algorithm [16] . In the tests, the matrix A has size $100\times 250$ , i.e., $m=100$ and $N=250$ . All the experiments were performed in Matlab, and all the experimental results were averaged over 100 independent trials for various sparsity levels s.

The experimental results consist of two parts: the first focuses on comparing the two algorithms in accuracy; the second on their efficiency. In the experiments, the mean squared error (MSE) between the original vector and the recovered result is recorded as

$\text{MSE}={\Vert {x}^{k}-{x}^{0}\Vert}_{2}^{2}/N.$ (26)

4.1. Comparison on the Accuracy

The matrix $A\in {R}^{100\times 250}$ and the original s-sparse vector ${x}^{0}\in {R}^{250}$ were generated randomly according to the standard Gaussian distribution, with the sparsity s varying over 2, 4, 6, 8, …, 48. The locations of the nonzero elements were generated randomly. The parameters were set as $\delta ={10}^{-6}$ , ${\lambda}_{1}={10}^{-3}$ and ${\lambda}_{2}={10}^{-5}$ . All the other parameters of the two algorithms were set to be the same. The results are shown in Figure 1.
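The data generation and the error measure (26) can be sketched as follows. This is an illustrative Python version (the paper's experiments were run in Matlab), shown here for a single sparsity level:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, s = 100, 250, 8                            # one sparsity level out of 2, 4, ..., 48
A = rng.standard_normal((m, N))                  # Gaussian measurement matrix
x0 = np.zeros(N)
support = rng.choice(N, size=s, replace=False)   # random locations of the nonzeros
x0[support] = rng.standard_normal(s)             # Gaussian nonzero values
y = A @ x0                                       # noiseless measurements

def mse(xk):
    # MSE = ||x^k - x^0||_2^2 / N, as in (26)
    return float(np.sum((xk - x0) ** 2) / N)
```

In the full experiment this setup is repeated over 100 independent trials per sparsity level and the MSE values are averaged.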

Figure 1 shows that the convergence error MSE of the two algorithms eventually stabilizes for each sparsity s. We can also observe that the MSE of IAGENR-L0 is lower than that of IST, which demonstrates that our algorithm is not only convergent, but also outperforms IST in accuracy.

4.2. Comparison on the Efficiency

In this subsection, we focus on the speed of the two algorithms. We conducted various experiments to test the effectiveness of the proposed algorithm. Table 1 reports the numerical results of the two algorithms for recovering vectors at different sparsity levels. From the results, we can see that IAGENR-L0 performs much better than IST in both efficiency and accuracy.

Figure 1. Comparison of the convergence error $\text{MSE}={\Vert {x}^{k}-{x}^{0}\Vert}_{2}^{2}/N$ for both IAGENR-L0 and IST.

Table 1. Iteration times of IAGENR-L0 and IST for different sparsity levels.

5. Conclusion

In this paper, we considered an iterative algorithm for solving the generalized elastic net regularization problem with a smoothed ${\mathcal{l}}_{0}$ penalty for recovering sparse vectors. A detailed proof of convergence of the iterative algorithm was given in Section 3 using algebraic methods. Additionally, the numerical experiments in Section 4 show that our iterative algorithm is convergent and performs better than IST in recovering sparse vectors.

References

[1] Donoho, D.L. (2006) Compressed Sensing. IEEE Transactions on Information Theory, 52, 1289-1306.

[2] Candès, E.J., Romberg, J. and Tao, T. (2006) Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Transactions on Information Theory, 52, 489-509.

[3] Duarte, M.F. and Eldar, Y.C. (2011) Structured Compressed Sensing: From Theory to Applications. IEEE Transactions on Signal Processing, 59, 4053-4085.

[4] Lu, Z. (2014) Iterative Hard Thresholding Methods for Regularized Convex Cone Programming. Mathematical Programming, 147, 125-154.

https://doi.org/10.1007/s10107-013-0714-4

[5] Candès, E.J. and Tao, T. (2005) Decoding by Linear Programming. IEEE Transactions on Information Theory, 51, 4203-4215.

[6] Chen, S.S., Donoho, D.L. and Saunders, M.A. (1998) Atomic Decomposition by Basis Pursuit. SIAM Journal on Scientific Computing, 20, 33-61.

https://doi.org/10.1137/S1064827596304010

[7] Daubechies, I., Defrise, M. and De Mol, C. (2004) An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Communications on Pure and Applied Mathematics, 57, 1413-1457.

[8] Zou, H. (2006) The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association, 101, 1418-1429.

https://doi.org/10.1198/016214506000000735

[9] Meinshausen, N. and Yu, B. (2009) Lasso-Type Recovery of Sparse Representations for High-Dimensional Data. Annals of Statistics, 37, 246-270.

https://doi.org/10.1214/07-AOS582

[10] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004) Least Angle Regression. The Annals of Statistics, 32, 407-451.

[11] Tibshirani, R. (1996) Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society Series B (Methodological), 58, 267-288.

[12] Zou, H. and Hastie, T. (2005) Regularization and Variable Selection via the Elastic Net. Journal of the Royal Statistical Society Series B (Statistical Methodology), 67, 301-320.

[13] Kordas, G. (2015) A Neurodynamic Optimization Method for Recovery of Compressive Sensed Signals with Globally Converged Solution Approximating to L0 Minimization. IEEE Transactions on Neural Networks and Learning Systems, 26, 1363-1374.

[14] Natarajan, B.K. (1995) Sparse Approximate Solutions to Linear Systems. SIAM Journal on Computing, 24, 227-234.

https://doi.org/10.1137/S0097539792240406

[15] Xiao, Y.H. and Song, H.N. (2012) An Inexact Alternating Directions Algorithm for Constrained Total Variation Regularized Compressive Sensing Problems. Journal of Mathematical Imaging and Vision, 44, 114-127.

[16] Daubechies, I., Defrise, M. and De Mol, C. (2004) An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Communications on Pure and Applied Mathematics, 57, 1413-1457.

[17] Lai, M.J., Xu, Y.Y. and Yin, W.T. (2013) Improved Iteratively Reweighted Least Squares for Unconstrained Smoothed Lq Minimization. SIAM Journal on Numerical Analysis, 51, 927-957.

[18] Garcia, C.B. and Li, T.Y. (1980) On the Number of Solutions to Polynomial Systems of Equations. SIAM Journal on Numerical Analysis, 17, 540-546.

https://doi.org/10.1137/0717046