Improvement of Forward-Backward Pursuit Algorithm Based on Weak Selection


1. Introduction

Compressed Sensing (CS) [1] [2] [3] is a signal processing theory that exploits the sparsity of a signal to combine sampling and compression. CS theory breaks through the traditional Nyquist sampling limit and achieves efficient signal acquisition, making high-speed, high-quality information transmission possible. Since it was proposed, CS theory has attracted wide research interest, and it is now widely used in signal processing, wireless communication, medical imaging, and other fields.

The main research topics in CS theory include sparse signal transformation, measurement matrix design, and signal reconstruction algorithms. The common reconstruction algorithms fall into two categories. The first is convex relaxation based on the ${l}_{1}$ norm, which approximates the original signal by converting the non-convex problem into a convex one. It mainly includes Basis Pursuit (BP) [4], which substitutes the ${l}_{1}$ norm for the ${l}_{0}$ norm at rather high computational cost; Iterative Thresholding (IT) [5] and Iterative Hard Thresholding (IHT) [6], which are derived from a related optimization problem; and Two Stage Thresholding (TST) [7], which adopts a two-stage threshold strategy to improve reconstruction performance. The second category is greedy pursuit based on the ${l}_{0}$ norm, which approximates the original signal by local optimization. The earliest examples are Matching Pursuit (MP) [8] and Orthogonal Matching Pursuit (OMP) [9]; later, algorithms based on the backward strategy, such as Compressive Sampling Matching Pursuit (CoSaMP) [10] and Subspace Pursuit (SP) [11], were proposed.

On the basis of the TST algorithm, N. B. Karahanoglu and H. Erdogan proposed Forward-Backward Pursuit (FBP) in reference [12], which requires no prior information about the sparsity. FBP uses a fixed forward step to select atoms and a fixed backward step to delete atoms, expanding and then pruning the support set in each iteration. Hence, FBP does not require the sparsity K as a priori information, in contrast to SP and CoSaMP. Additionally, the backward step of FBP can remove possibly misplaced atoms from the support set, which is an advantage over purely forward greedy algorithms such as OMP. In practice, however, the major issue of FBP is its time complexity: the forward and backward step sizes are fixed in every iteration and do not adapt to the signal during the iterative process. The resulting low execution efficiency leads to longer reconstruction times and affects reconstruction accuracy.

To address these issues of FBP, this paper proposes a forward-backward pursuit algorithm based on weak selection (SWFBP), which introduces a threshold strategy into FBP to compute the forward step size, making the forward step flexible. Moreover, in the first few iterations most of the selected atoms are correct and do not need the backward strategy, so these atoms are incorporated into the support set directly. Through these two modifications, SWFBP obtains flexible forward and backward steps, which lowers the computational complexity and improves reconstruction performance. Simulation results show that SWFBP is a more efficient greedy pursuit algorithm than FBP while still requiring no prior information about the sparsity.

2. Compressed Sensing Theory and Reconstruction Algorithm

2.1. Compressed Sensing Theory

The core of CS theory is to choose an appropriate measurement matrix that projects a high-dimensional sparse signal into a low-dimensional space, and then to use a reconstruction algorithm to recover the original high-dimensional signal from the low-dimensional projection values. Specifically, suppose that $x$ is a K-sparse, N-dimensional digital signal; that is, only K entries of $x$ are nonzero. We are interested in the case $K\ll N$ . The signal is measured by the following equation:

$y=\Phi x$ (1)

where $y$ is an M-dimensional vector of measurement values and $\Phi $ is an $M\ast N$ matrix (M < N) called the measurement matrix. Reference [13] states that if the signal $x$ is sparse, it can be reconstructed from a small number of linear projections via optimization algorithms. The original sparse signal reconstruction problem is mathematically described as follows:

$\mathrm{min}{\Vert x\Vert}_{0}$ subject to $y=\Phi x$ (2)

However, the ${l}_{0}$ minimization problem is NP-hard, whereas greedy pursuit algorithms provide a favorable tool for finding approximate solutions. Reference [14] points out that when the measurement matrix $\Phi $ satisfies the restricted isometry property (RIP), Equation (2) can be solved by the following ${l}_{1}$ optimization problem:

$\mathrm{min}{\Vert x\Vert}_{1}$ subject to $y=\Phi x$ (3)
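As a concrete illustration of the sensing model in Equation (1), the following sketch (in Python with NumPy; the sizes N = 256, M = 102, K = 20 are simply the values used later in Section 3.1, and the random seed is arbitrary) builds a K-sparse signal and takes M < N Gaussian measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 102, 20        # signal length, measurements, sparsity

# K-sparse signal: K nonzero Gaussian entries at random positions
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Gaussian measurement matrix Phi (M x N) and measurements y = Phi x
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

print(y.shape)   # -> (102,): far fewer measurements than the N = 256 samples
```

Reconstruction then amounts to recovering the 20 nonzero entries of $x$ from the 102 entries of $y$.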

2.2. Forward-Backward Pursuit Algorithm

The FBP algorithm is a greedy pursuit algorithm for the ${l}_{0}$ norm minimization problem that builds the signal support set through forward and backward steps. It has received wide attention due to its high reconstruction accuracy and because it requires no prior information about the sparsity. The first stage of FBP is the forward step, which expands the support set with the indices of the $\alpha $ largest-magnitude elements of ${\Phi}^{\text{T}}{r}^{l-1}$ , where $\alpha $ is called the forward step size. The second stage computes the orthogonal projection coefficients $w$ by solving ${\mathrm{min}}_{w}{\Vert y-{\Phi}_{{\stackrel{\u02dc}{T}}^{l}}w\Vert}_{2}$ and removes the indices of the $\beta $ smallest-magnitude coefficients; analogously, $\beta $ is referred to as the backward step size. The forward step size is chosen larger than the backward step size, so the support estimate grows by $\alpha -\beta $ atoms at each iteration.

When the sparsity level of the signal is relatively small, the residual decreases steadily through the iterations until it meets the termination condition ${\Vert {r}^{l}\Vert}_{2}\le \epsilon {\Vert y\Vert}_{2}$ , where $\epsilon $ is the termination threshold of iteration. When the sparsity level is large, the algorithm exits the iteration via the condition $\left|{T}^{l}\right|\ge {l}_{\mathrm{max}}$ . A schematic diagram of the FBP algorithm is depicted in Figure 1, and its pseudo-code is given in Algorithm 1.

Algorithm 1: FBP Algorithm

Input: Measurement matrix $\Phi $ , measurement vector $y$

Define: Forward step $\alpha $ , backward step $\beta $ , largest number of iterations ${l}_{\mathrm{max}}$ , termination threshold of iteration $\epsilon $

Initialize: ${T}^{0}=\varnothing $ , ${r}^{0}=y$ , $l=0$

while true do

$l=l+1$

Figure 1. Description of the FBP algorithm.

Forward update:

${T}_{f}=\underset{J:\left|J\right|=\alpha}{\mathrm{arg}\mathrm{max}}{\Vert {\Phi}_{J}^{T}{r}^{l-1}\Vert}_{1}$

${\stackrel{\u02dc}{T}}^{l}={T}^{l-1}\cup {T}_{f}$

$w=\underset{w}{\mathrm{arg}\mathrm{min}}{\Vert y-{\Phi}_{{\stackrel{\u02dc}{T}}^{l}}w\Vert}_{2}$

Backward update:

${T}_{b}=\underset{J:\left|J\right|=\beta}{\mathrm{arg}\mathrm{min}}{\Vert {w}_{J}\Vert}_{1}$

${T}^{l}={\stackrel{\u02dc}{T}}^{l}-{T}_{b}$

Projection:

$w=\underset{w}{\mathrm{arg}\mathrm{min}}{\Vert y-{\Phi}_{{T}^{l}}w\Vert}_{2}$

${r}^{l}=y-{\Phi}_{{T}^{l}}w$

if ${\Vert {r}^{l}\Vert}_{2}\le \epsilon {\Vert y\Vert}_{2}$ or $\left|{T}^{l}\right|\ge {l}_{\mathrm{max}}$ then

break

end if

end while

$\stackrel{\u02dc}{x}=0$

${\stackrel{\u02dc}{x}}_{{T}^{l}}=w$

Output: $\stackrel{\u02dc}{x}$
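To make the pseudo-code above concrete, the following Python/NumPy function is a minimal sketch of Algorithm 1. The recovery logic follows the listing; the support-size cap `K_max` (standing in for the paper's $\left|{T}^{l}\right|\ge {l}_{\mathrm{max}}$ test) and the `max_iter` safeguard are implementation choices of this sketch, not part of the original listing:

```python
import numpy as np

def fbp(Phi, y, alpha=4, beta=2, K_max=None, eps=1e-6, max_iter=100):
    """Sketch of Forward-Backward Pursuit (Algorithm 1)."""
    M, N = Phi.shape
    if K_max is None:
        K_max = M // 2                    # assumed cap on the support size
    T = np.array([], dtype=int)           # support estimate T^0
    r = y.copy()                          # residual r^0 = y
    w = np.zeros(0)
    for _ in range(max_iter):
        # Forward step: indices of the alpha largest |Phi^T r|
        corr = np.abs(Phi.T @ r)
        corr[T] = -np.inf                 # do not reselect current atoms
        Tf = np.argsort(corr)[-alpha:]
        T_tilde = np.union1d(T, Tf)
        # Orthogonal projection onto the enlarged support
        w, *_ = np.linalg.lstsq(Phi[:, T_tilde], y, rcond=None)
        # Backward step: drop the beta smallest-magnitude coefficients
        Tb = np.argsort(np.abs(w))[:beta]
        T = np.delete(T_tilde, Tb)
        # Final projection and residual update
        w, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        r = y - Phi[:, T] @ w
        if np.linalg.norm(r) <= eps * np.linalg.norm(y) or len(T) >= K_max:
            break
    x_hat = np.zeros(N)
    x_hat[T] = w
    return x_hat
```

With $\alpha =4$ and $\beta =2$ , the support grows by two atoms per iteration, exactly as described above.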

2.3. Forward-Backward Pursuit Algorithm Based on Weak Selection

The basic framework of the SWFBP algorithm is similar to that of FBP. In the first few iterations, atoms are selected by the weak selection strategy; since most of these atoms are correct and do not need the backward strategy, they are joined to the support set directly. In the subsequent iterations, the forward support set is first expanded by the threshold strategy, and the forward step $\alpha $ equals the length of this set. Then the orthogonal projection of the measurement vector $y$ onto this support set is calculated. Next, $\alpha $ is used to compute the backward step $\beta $ , and the indices of the $\beta $ smallest orthogonal projection coefficients are deleted. Last, the new projection coefficients are calculated and the residual is updated. The termination condition is the same as in FBP. A schematic diagram of the SWFBP algorithm is depicted in Figure 2, and the pseudo-code of SWFBP is given in Algorithm 2.

Figure 2. Description of the SWFBP algorithm.

Algorithm 2: SWFBP Algorithm

Input: Measurement matrix $\Phi $ , measurement vector $y$

Define: Threshold parameter $\theta $ , value of step difference $\delta $ , largest number of iterations ${l}_{\mathrm{max}}$ , threshold of iteration numbers $\gamma $ , termination threshold of iteration $\epsilon $

Initialize: ${T}^{0}=\varnothing $ , ${r}^{0}=y$ , $l=0$

while true do

$l=l+1$

Forward update:

$u=abs\left({\Phi}^{T}{r}^{l-1}\right)$

$J=\text{find}(u\ge \theta \ast \mathrm{max}(u))$

the forward step $\alpha =\text{length}(J)$

if $l\le \gamma $ then

${T}^{l}={T}^{l-1}\cup J$

else

Backward update:

${\stackrel{\u02dc}{T}}^{l}={T}^{l-1}\cup J$

$w=\underset{w}{\mathrm{arg}\mathrm{min}}{\Vert y-{\Phi}_{{\stackrel{\u02dc}{T}}^{l}}w\Vert}_{2}$

the backward step $\beta =\alpha -\delta $

${T}_{b}=\underset{J:\left|J\right|=\beta}{\mathrm{arg}\mathrm{min}}{\Vert {w}_{J}\Vert}_{1}$

${T}^{l}={\stackrel{\u02dc}{T}}^{l}-{T}_{b}$

end if

Projection:

$w=\underset{w}{\mathrm{arg}\mathrm{min}}{\Vert y-{\Phi}_{{T}^{l}}w\Vert}_{2}$

${r}^{l}=y-{\Phi}_{{T}^{l}}w$

if ${\Vert {r}^{l}\Vert}_{2}\le \epsilon {\Vert y\Vert}_{2}$ or $\left|{T}^{l}\right|\ge {l}_{\mathrm{max}}$ then

break

end if

end while

$\stackrel{\u02dc}{x}=0$

${\stackrel{\u02dc}{x}}_{{T}^{l}}=w$

Output: $\stackrel{\u02dc}{x}$
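Correspondingly, a minimal Python/NumPy sketch of Algorithm 2 might look as follows. As in the FBP sketch above, the support-size cap `K_max` and the `max_iter` safeguard are assumptions of this illustration, and the default `gamma = 2` (the number of pure weak-selection iterations) is an assumed value, since the paper does not fix it:

```python
import numpy as np

def swfbp(Phi, y, theta=0.95, delta=2, gamma=2, K_max=None, eps=1e-6,
          max_iter=100):
    """Sketch of SWFBP (Algorithm 2): weak-selection forward step,
    backward step beta = alpha - delta, skipped for the first gamma
    iterations."""
    M, N = Phi.shape
    if K_max is None:
        K_max = M // 2                    # assumed cap on the support size
    T = np.array([], dtype=int)
    r = y.copy()
    w = np.zeros(0)
    for l in range(1, max_iter + 1):
        # Weak selection: all atoms within theta of the peak correlation
        u = np.abs(Phi.T @ r)
        J = np.flatnonzero(u >= theta * u.max())  # forward step alpha = len(J)
        if l <= gamma:
            T = np.union1d(T, J)          # early atoms join the support directly
        else:
            T_tilde = np.union1d(T, J)
            w, *_ = np.linalg.lstsq(Phi[:, T_tilde], y, rcond=None)
            beta = max(len(J) - delta, 0)  # backward step beta = alpha - delta
            Tb = np.argsort(np.abs(w))[:beta]
            T = np.delete(T_tilde, Tb)
        # Projection and residual update
        w, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        r = y - Phi[:, T] @ w
        if np.linalg.norm(r) <= eps * np.linalg.norm(y) or len(T) >= K_max:
            break
    x_hat = np.zeros(N)
    x_hat[T] = w
    return x_hat
```

Note that, unlike FBP, both step sizes here depend on the data: $\alpha $ is however many atoms pass the threshold $\theta $ in the current iteration.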

3. Simulations and Analysis

The simulation environment was MATLAB R2015b. We experiment with one-dimensional sparse signals (Gaussian sparse signals and 0 - 1 sparse signals) and the two-dimensional international standard test image lena.bmp, using the FBP and SWFBP algorithms. The entries of the measurement matrix $\Phi $ obey a Gaussian distribution.

3.1. Reconstruction of One-Dimensional Sparse Signal

The experimental parameters are set as follows: length of the sparse signal N = 256, number of samples M = 102, sparsity level K = 20, largest number of iterations ${l}_{\mathrm{max}}=10$ , termination threshold of iteration $\epsilon ={10}^{-6}$ , threshold parameter $\theta =0.95$ , and step difference $\delta =2$ . For the FBP algorithm, the forward step is $\alpha =4$ , 10, 20 in turn, with the corresponding backward step $\beta =2$ , 8, 18. For the SWFBP algorithm, the relationship between $\alpha $ and $\beta $ is $\beta =\alpha -2$ . For each combination of (K, M), 500 independent experiments were run to compare the exact reconstruction frequency, the average recovery time, and the average normalized mean-squared error (ANMSE). The condition for exact reconstruction is:

${\Vert x-\stackrel{^}{x}\Vert}_{2}/{\Vert x\Vert}_{2}\le {10}^{-8}$ (4)

ANMSE is calculated by Equation (5):

$\text{ANMSE}=\frac{1}{500}{\displaystyle \underset{i=1}{\overset{500}{\sum}}{\Vert {x}_{i}-{\stackrel{^}{x}}_{i}\Vert}_{2}^{2}/{\Vert {x}_{i}\Vert}_{2}^{2}}$ (5)

In Equation (5), ${\stackrel{^}{x}}_{i}$ denotes the reconstructed signal for ${x}_{i}$ .
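The two evaluation criteria in Equations (4) and (5) translate directly into code; a small Python/NumPy sketch:

```python
import numpy as np

def exact_reconstruction(x, x_hat, tol=1e-8):
    """Equation (4): relative l2 error at most 1e-8 counts as exact."""
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x) <= tol

def anmse(xs, x_hats):
    """Equation (5): average normalized MSE over the 500 trials."""
    errs = [np.linalg.norm(x - xh) ** 2 / np.linalg.norm(x) ** 2
            for x, xh in zip(xs, x_hats)]
    return sum(errs) / len(errs)
```

The exact reconstruction frequency is then simply the fraction of the 500 trials for which `exact_reconstruction` returns true.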

Figure 3 shows the reconstruction results of SWFBP $\left(\beta =\alpha -2\right)$ and FBP $\left(\alpha =4,\beta =2;\alpha =10,\beta =8;\alpha =20,\beta =18\right)$ for Gaussian sparse signals. From Figure 3, the exact reconstruction frequency of SWFBP is higher than that of FBP under both varying sparsity and varying numbers of samples. For example, when the sparsity level K = 40, the exact reconstruction frequency of SWFBP is 90% while FBP's is 85%. In recovery time, SWFBP improves greatly over FBP, and the effect becomes more obvious as K increases; specifically, when $K\ge 45$ , the recovery time of SWFBP is 50% to 60% of FBP's. For ANMSE, there is little difference between SWFBP and FBP as $\alpha $ increases, but in general SWFBP's is lower than FBP's.

Figure 4 shows the reconstruction results of SWFBP $\left(\beta =\alpha -2\right)$ and FBP $\left(\alpha =4,\beta =2;\alpha =10,\beta =8;\alpha =20,\beta =18\right)$ for 0 - 1 sparse signals. Similar to the test above, the exact reconstruction frequency of SWFBP is not lower than FBP's, and there is also little difference in ANMSE. This is because the nonzero elements of a 0 - 1 sparse signal are all identical, which greatly degrades the reconstruction performance of algorithms that select atoms according to their correlation with the residual. Even so, the recovery time of SWFBP improves significantly.

3.2. Threshold Parameter

The parameters involved in the SWFBP algorithm are $\alpha $ , $\beta $ , $\theta $ and $\delta $ , where $\alpha $ and $\beta $ are mainly determined by $\theta $ . In this part, we experiment with different values of $\theta $ for SWFBP, under the condition $\beta =\alpha -\delta $ with $\delta =2$ .

Figure 3. Reconstruction results for Gaussian sparse signal. (a) Comparison of the exact reconstruction frequency under different K; (b) Comparison of the exact reconstruction frequency under different M; (c) Comparison of average recovery time; (d) Comparison of ANMSE.

Figure 4. Reconstruction results for 0 - 1 sparse signal. (a) Comparison of the exact reconstruction frequency under different K; (b) Comparison of the exact reconstruction frequency under different M; (c) Comparison of average recovery time; (d) Comparison of ANMSE.

Figure 5 shows the reconstruction results of the SWFBP algorithm for $\theta $ equal to 0.65, 0.75, 0.85 and 0.95. From Figure 5, as $\theta $ increases, both the exact reconstruction frequency and the average recovery time increase. In other words, a larger $\theta $ yields a higher exact reconstruction frequency, but at the cost of more recovery time.

3.3. Reconstruction of Two-Dimensional Image

To further illustrate the reconstruction performance of the SWFBP algorithm, in this part we experiment with the 256 × 256 two-dimensional standard image Lena. The parameters are set the same as in Section 3.1. Two indicators are chosen to judge the reconstruction performance: recovery time and Peak Signal-to-Noise Ratio (PSNR), where PSNR is calculated by the following two formulas:

$\text{PSNR}=20\ast \mathrm{lg}\left(255/\sqrt{MSE}\right)$ (6)

$\text{MSE}=\frac{1}{M\ast N}{\displaystyle \underset{m=1}{\overset{M}{\sum}}{\displaystyle \underset{n=1}{\overset{N}{\sum}}{\left(\stackrel{^}{x}\left(m,n\right)-x\left(m,n\right)\right)}^{2}}}$ (7)
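Equations (6) and (7) can be sketched in Python/NumPy as follows (assuming an 8-bit image, so the peak value is 255):

```python
import numpy as np

def psnr(x, x_hat):
    """PSNR per Equations (6)-(7) for an 8-bit image (peak value 255)."""
    # Equation (7): mean squared error over all M x N pixels
    mse = np.mean((x_hat.astype(float) - x.astype(float)) ** 2)
    # Equation (6): 20 * log10(255 / sqrt(MSE)); undefined for mse == 0
    return 20 * np.log10(255.0 / np.sqrt(mse))
```

A higher PSNR indicates a reconstruction closer to the original image.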

Figure 6 shows the curves of recovery time and PSNR under different numbers of samples. It can be seen that SWFBP improves on FBP in both respects. By introducing the threshold strategy, the number of iterations decreases, so the recovery time is shortened greatly and remains under 3 seconds throughout. When the number of samples is 70, SWFBP has a 1 dB PSNR gain over FBP, and its recovery time is only 40% of FBP's.

Figure 5. (a) Comparison of exact reconstruction frequency under different threshold $\theta $ ; (b) Comparison of average recovery time under different threshold $\theta $ .

Figure 6. (a) Comparison of recovery time for two-dimensional image; (b) Comparison of PSNR for two-dimensional image.

Figure 7. Reconstruction images for Lena by FBP and SWFBP at M/N = 0.4. (a) Original image; (b) SWFBP $\left(\beta =\alpha -2\right)$ ; (c) FBP $\left(\alpha =4,\beta =2\right)$ ; (d) FBP $\left(\alpha =10,\beta =8\right)$ ; (e) FBP $\left(\alpha =20,\beta =18\right)$ .

Table 1. Comparison of recovery time and PSNR under different sampling rate.

Figure 7 shows the reconstructed images of Lena at sampling rate M/N = 0.4. As shown in Figure 7, the reconstruction by SWFBP is visually clearer than FBP's, i.e. the improved algorithm achieves higher accuracy in sparse image approximation.

Table 1 shows the recovery time and PSNR under different sampling rates for the FBP and SWFBP algorithms. From the data in Table 1, we can conclude that SWFBP has the advantage in both recovery time and PSNR.

4. Conclusion

In this paper, the SWFBP algorithm was proposed on the basis of the FBP algorithm. By applying a simple threshold strategy to FBP, SWFBP makes the forward and backward steps flexible, combining high reconstruction accuracy with the flexible atom selection mechanism of weak selection. The simulation results demonstrate that SWFBP provides better performance at lower computational cost than FBP. In particular, when the sparse signal obeys a Gaussian distribution, the exact reconstruction frequency and accuracy of SWFBP exceed FBP's, and when the sparse signal obeys a 0 - 1 distribution, they are comparable to FBP's. Reconstruction results for two-dimensional images show that the reconstruction performance of SWFBP improves while its recovery time decreases markedly. Nevertheless, the atom selection method is still not well matched to the signal, and we hope to perfect the atom selection mechanism in future work.

Acknowledgements

This work was supported by the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20130031110032 and No. 20130031110033).

References

[1] Donoho, D.L. (2006) Compressed Sensing. IEEE Transactions on Information Theory, 52, 1289-1306.

https://doi.org/10.1109/TIT.2006.871582

[2] Tsaig, Y. and Donoho, D.L. (2006) Extensions of Compressed Sensing. Signal Processing, 86, 549-571.

https://doi.org/10.1016/j.sigpro.2005.05.029

[3] Candes, E.J. and Wakin, M.B. (2008) An Introduction to Compressive Sampling. IEEE Signal Processing Magazine, 25, 21-30.

https://doi.org/10.1109/MSP.2007.914731

[4] Chen, S.S., Donoho, D.L. and Saunders, M.A. (1998) Atomic Decomposition by Basis Pursuit. SIAM Journal on Scientific Computing, 20, 33-61.

https://doi.org/10.1137/S1064827596304010

[5] Daubechies, I., Defrise, M. and De Mol, C. (2004) An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Communications on Pure and Applied Mathematics, 57, 1413-1457.

https://doi.org/10.1002/cpa.20042

[6] Blumensath, T. and Davies, M.E. (2009) Iterative Hard Thresholding for Compressed Sensing. Applied and Computational Harmonic Analysis, 27, 265-274.

https://doi.org/10.1016/j.acha.2009.04.002

[7] Maleki, A. and Donoho, D.L. (2010) Optimally Tuned Iterative Reconstruction Algorithms for Compressed Sensing. IEEE Journal of Selected Topics in Signal Processing, 4, 330-341.

https://doi.org/10.1109/JSTSP.2009.2039176

[8] Mallat, S.G. and Zhang, Z. (1993) Matching Pursuit with Time-Frequency Dictionaries. IEEE Transactions on Signal Processing, 41, 3397-3415.

https://doi.org/10.1109/78.258082

[9] Tropp, J.A. and Gilbert, A.C. (2007) Signal Recovery from Random Measurements via Orthogonal Matching Pursuit. IEEE Transactions on Information Theory, 53, 4655-4666.

https://doi.org/10.1109/TIT.2007.909108

[10] Needell, D. and Tropp, J.A. (2009) CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples. Applied and Computational Harmonic Analysis, 26, 301-321.

https://doi.org/10.1016/j.acha.2008.07.002

[11] Dai, W. and Milenkovic, O. (2009) Subspace Pursuit for Compressive Sensing Signal Reconstruction. IEEE Transactions on Information Theory, 55, 2230-2249.

https://doi.org/10.1109/TIT.2009.2016006

[12] Karahanoglu, N.B. and Erdogan, H. (2013) Compressed Sensing Signal Recovery via Forward-Backward Pursuit. Digital Signal Processing, 23, 1539-1548.

https://doi.org/10.1016/j.dsp.2013.05.007

[13] Candes, E.J., Romberg, J. and Tao, T. (2006) Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Transactions on Information Theory, 52, 489-509.

https://doi.org/10.1109/TIT.2005.862083

[14] Candes, E.J. (2008) The Restricted Isometry Property and Its Implications for Compressed Sensing. Comptes Rendus Mathematique, 346, 589-592.

https://doi.org/10.1016/j.crma.2008.03.014