Received 25 April 2016; accepted 15 May 2016; published 29 July 2016
Image processing is one of the major areas governing the current multimedia age. To process an image effectively and extract information from it, the image must be in an appropriate format and legible for a machine to process and interpret. This is especially important in the medical field because of the information the images carry, and as a result image reconstruction has become important. In general, image reconstruction is an inverse problem that recovers the original image from a version damaged by ageing, atmospheric interference or other physical degradation. One of the challenges is to reconstruct the image as close as possible to the original. Image denoising is a form of image reconstruction that eliminates additional noise components from images. The noise present in images varies significantly and is often introduced while converting images from analog to digital format.
The basic measure of the quality of an image is the signal-to-noise ratio (SNR). It is a physical measure of the sensitivity of an imaging system and is obtained by applying the 20 log rule,

SNR = 20 log10 (Asignal / Anoise) dB (1)

where Asignal and Anoise are the root-mean-square amplitudes of the signal and the noise respectively.
According to the industry standard, an SNR of 32.04 dB indicates a good quality image, while 20 dB is acceptable quality. Several denoising methods exist in the literature and are categorized as filtering based approaches, wavelet based approaches, principal component analysis (PCA) based approaches, independent component analysis based approaches and sparse coding approaches. All these methods are suitable for denoising; however, they have their limitations. Wavelet based methods are fast, but they exhibit very high dependency on the image components and are therefore not considered adaptive. The PCA and sparse coding techniques rely on the statistical properties of the data.
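The 20 log rule can be sketched as a short computation. This is a minimal illustration, assuming the SNR is taken over the root-mean-square amplitudes of the signal and the noise; the function name `snr_db` is ours, not from the paper.

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """20-log rule: SNR in dB from the RMS amplitudes of signal and noise."""
    rms_signal = np.sqrt(np.mean(np.square(signal.astype(np.float64))))
    rms_noise = np.sqrt(np.mean(np.square(noise.astype(np.float64))))
    return 20.0 * np.log10(rms_signal / rms_noise)

# A signal whose RMS amplitude is 40x that of its noise sits at ~32 dB,
# the "good quality" threshold cited above.
clean = np.full(100, 40.0)
noise = np.ones(100)
print(round(snr_db(clean, noise), 2))  # 32.04
```

A ratio of 40 gives 20 log10(40) ≈ 32.04 dB, matching the industry-standard figure quoted above.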
A modified sparse coding algorithm exploits maximum kurtosis as the sparseness-maximizing criterion. A fixed variance term of the sparse coefficients provides information of fixed capacity, and a determinative basis function is used to improve the convergence speed. A fringe pattern denoising algorithm reduces fringes on the principle of dimensionality reduction and is applied to interferometry images. Noise reduction based on Krylov iterative solvers is performed by updating a preconditioner based on incomplete factorizations, at a global computational cost higher than the number of image pixels. A fast image recovery algorithm splits the problem into two steps, deblurring and denoising: the Fourier transform is used to deblur the images and the algorithm then denoises them. A hybrid regularizer based denoising and deblurring algorithm concentrates on eliminating the staircase effect while preserving edge details. A local and non-local circulant similarity based image denoising method is a patch based approach that considers the self similarity of images while denoising. A similar method works specifically on grayscale images, where the self similarity is obtained by a cyclic shift called the circulant similarity. A curvelet transformation technique for denoising digital images uses the Finite RIdgelet Transform (FRIT) to solve the problem of mapping curves; since appropriate application of FRIT is mandatory, a quadrant based division of the image is carried out. This method works best on medical images compared to regular images. A shearlet domain based image denoising technique has also been presented.
An effective multi-scale and multi-direction analysis methodology has been proposed that also performs image smoothening. A genetic programming based robust noise removal technique operates in two stages, a noise detection stage and a noise removal stage: genetic programming performs the noise detection, and noise removal is carried out by image smoothening. A variation based noise removal method detects noise from the differences in the frequencies of the pixels. Wavelet domain image denoising based on statistical models has been proposed, and a similar method works in the curvelet domain.
Based on the above discussion, it is observed that most of the approaches consider one particular type of noise and denoise it. In real-time scenarios, multiple types of noise may affect the same image, and the proposed approach handles this issue. The distribution of noise may also be irregular. Hence it is mandatory to identify the parts affected by noise and denoise them, rather than denoising the entire image.
The proposed approach identifies the noise and performs pixel based corrections rather than block based corrections; hence multiple and irregular noise distributions are handled well. As the corrections are based on the environment of the pixel under scrutiny, adverse effects on the image are reduced to a large extent. Experiments were conducted using salt and pepper noise, Gaussian noise and a combination of both distributed in the image. The proposed technique worked effectively in identifying the noise components, and the denoising module also exhibited high efficiency in reducing the noise, providing images with high PSNR values (maximum of 30 dB).
The rest of the paper is organized as follows. The proposed metaheuristic based denoising method is presented in the next section. The result is presented in Section 3 and the paper is concluded in the last section.
2. Metaheuristic Based Noise Identification and Image Denoising Using Adaptive Block Selection Based Filtering
Denoising images has become a major requirement, especially in the medical domain. A problem with current denoising techniques is that they operate on a single type of noise; however, in real time, images converted to digital format tend to contain a combination of noises. The locality based neighborhood filtering technique is an effective way of producing a noise-free image.
The input image is initially converted to grayscale (a 2-dimensional image). To assess the effectiveness of the system, noise is introduced into the input image. The noise is introduced in terms of separate grids, and each grid contains a different noise at a different level. For the current evaluation, we have used salt and pepper and Gaussian noise.
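The grid-wise corruption described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grid size, noise levels, and the alternating assignment of noise type to cells are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_grid_noise(img, grid=64, amount=0.05, sigma=10.0):
    """Split a grayscale image into grid x grid cells and corrupt each cell
    with either salt-and-pepper or Gaussian noise, alternating by cell."""
    out = img.astype(np.float64).copy()
    for i, r in enumerate(range(0, img.shape[0], grid)):
        for j, c in enumerate(range(0, img.shape[1], grid)):
            cell = out[r:r + grid, c:c + grid]
            if (i + j) % 2 == 0:
                # salt and pepper: drive a fraction of pixels to 0 or 255
                mask = rng.random(cell.shape) < amount
                cell[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
            else:
                # Gaussian: additive zero-mean noise on every pixel
                cell += rng.normal(0.0, sigma, size=cell.shape)
    return np.clip(out, 0, 255).astype(np.uint8)

noisy = add_grid_noise(np.full((128, 128), 128, dtype=np.uint8))
print(noisy.shape, noisy.dtype)
```

Each cell carries exactly one noise type, mimicking the irregular, region-wise noise distribution the method is designed to handle.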
The proposed approach is divided into two phases. The first phase deals in identifying the noise using Particle Swarm Optimization (PSO) and the next phase deals with applying appropriate techniques to eliminate the noise. The architecture for the denoising approach is presented in Figure 1.
The grayscale noisy image passed to the application is initially segmented into n × n blocks. The actual size of division (n) depends on the type of image and the level of granularity of the noise contained in it, and is determined by the user by trial and error. The intensity variations contained in each block are identified. These data serve as the search space for PSO to operate on. Images containing salt and pepper and Gaussian
Figure 1. MH based denoising-architecture.
noise are used as the training data. PSO is applied on this search space to identify the type of noise contained in the block. The block is then passed to the noise elimination phase, where adaptive block selection is performed. Real-time images do not have a uniform distribution of noise; hence, this type of selection becomes mandatory. The noisy components are filtered and smoothened to obtain the final denoised image.
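The per-block intensity variances that form the PSO search space can be computed as below. A minimal sketch assuming the image dimensions are truncated to a multiple of n; the function name `block_variances` is ours.

```python
import numpy as np

def block_variances(img, n):
    """Intensity variance of every n x n block; the resulting grid of
    variances is the search-space data extracted from one image."""
    h, w = (img.shape[0] // n) * n, (img.shape[1] // n) * n
    blocks = img[:h, :w].astype(np.float64)
    # reshape to (rows, cols, n, n) so each [i, j] entry is one block
    blocks = blocks.reshape(h // n, n, w // n, n).swapaxes(1, 2)
    return blocks.var(axis=(2, 3))

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(block_variances(img, 4).shape)  # (2, 2)
```

Salt-and-pepper corruption drives block variance sharply up (isolated 0/255 outliers), while Gaussian corruption raises it more uniformly, which is what makes variance a usable feature for distinguishing the two.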
2.1. PSO Based Noise Identification
The noise identification phase uses a constant size block selection for the process. The block is always maintained as a square matrix of size n × n. The intensity variations of the images are considered as the base data for the classification process. These sample images, along with a block from the current image, are used as the search space. The initialization of particles in the search space is the first step performed by PSO. This distribution is performed using a uniform random distribution model. The initial velocity for the particles is calculated using

Vi ~ U(−|bup − blo|, |bup − blo|) (2)
where Vi is the velocity of particle i; bup and blo are the upper and lower bounds of the search space respectively.
The particle best (pbest) and global best (gbest) values are identified. Using the initial velocity obtained from Equation (2), particle acceleration is triggered. New pbest and gbest values are calculated using

Vi,d ← ω Vi,d + φp rp (Pi,d − Xi,d) + φg rg (gd − Xi,d) (3)

Xi,d ← Xi,d + Vi,d (4)
where Pi,d and gd are the particle best and the global best values; rp and rg are random numbers; Xi,d is the current particle position; and the parameters ω, φp and φg are selected by the user. PSO operates on a continuous domain, hence particle movement is continuous. Due to the discrete nature of the proposed approach, this continuous movement is discretized as follows

Pik ← Njk, where j = argminj |Pik − Njk| (5)
where Pik refers to particle i’s current location in dimension k, and Njk refers to the kth dimension of node Nj. This process helps to effectively identify the type of noise contained in the image block. The corresponding denoising mechanism can then be applied to the block to eliminate the noise.
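A generic global-best PSO loop following the update equations above can be sketched as follows. This is a sketch, not the authors' implementation: the toy fitness function (matching a candidate variance to an observed block variance) is an illustrative stand-in for the training-data comparison the paper describes, and the parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, b_lo, b_up, n_particles=20, iters=60,
                 w=0.7, phi_p=1.5, phi_g=1.5):
    """Plain global-best PSO over a 1-D interval [b_lo, b_up]."""
    x = rng.uniform(b_lo, b_up, n_particles)      # initial positions
    span = abs(b_up - b_lo)
    v = rng.uniform(-span, span, n_particles)     # Equation (2) initialization
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()]                 # global best
    for _ in range(iters):
        rp, rg = rng.random(n_particles), rng.random(n_particles)
        # Equations (3)-(4): velocity and position update
        v = w * v + phi_p * rp * (pbest - x) + phi_g * rg * (g - x)
        x = np.clip(x + v, b_lo, b_up)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()]
    return g

# Toy fitness: squared distance to an observed block variance of 42.
observed = 42.0
best = pso_minimize(lambda t: (t - observed) ** 2, 0.0, 100.0)
print(round(best))
```

In the paper's setting the fitness would instead score a block's variance profile against the labeled salt-and-pepper and Gaussian training samples, and the discretization of Equation (5) would snap each particle to the nearest training node.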
The noise is introduced into the original image to obtain the noisy image. Salt and pepper and Gaussian noise are considered for this approach. The image is read in blocks; for every block, one of the two noises is applied, and the final noisy image is obtained as shown in Figure 2.
The variance between pixels in every block is considered as the base for PSO, and particle distribution is carried out at this point. The sample space is made up of the variances of several noisy images corrupted with either salt and pepper or Gaussian noise. PSO identifies the noise component contained in the image block, and the corresponding denoising schemes are applied.
Figure 2. (a) Original image (b) Noisy image (Salt & pepper and Gaussian noise).
This process is carried out on every block of the noisy image, and the corresponding denoising mechanisms are applied according to the type of noise. Figures 3(a)-(c) show the stages of denoising the image; it is noticed from Figure 3 that every block is processed independently and the final denoised image (c) is obtained.
2.2. Denoising Mechanisms
This sub-section presents the two denoising mechanisms used to eliminate salt and pepper and Gaussian noise. Denoising is done in two major phases: the filtering phase and the image-smoothing phase. Filtering modifies an image to enhance its features; it can either emphasize certain features or eliminate other components of the image.
Here, we consider a common degradation model as given below

y = Kx + b (6)
where K is the blurring matrix; y is the observed image; b is an unknown noise vector and x is the unknown real image. Estimating x from the observed blurred and noisy image y is a severely ill-posed problem. The underlying idea is to minimize the following regularization model,

min over x of (1/2)||Kx − y||2^2 + λΦ(x) (7)
where ||·||2 represents the Euclidean norm; Φ is usually called the regularization function and λ is a positive regularization parameter, which provides a tradeoff between fidelity to the measurements and noise sensitivity. Clearly, a small λ favors a small solution residual norm at the cost of a large solution semi-norm, while a large λ has the opposite effect. The most common form of regularization is the ℓ1 regularization, in which one searches for the solution of

min over x of (1/2)||Kx − y||2^2 + λ||x||1 (8)
where ||·||1 stands for the sum of the absolute values of the components.
This is a neighborhood based operation that considers blocks of constant size, analyzes them and applies the filtering process. The filtering is carried out using a median based filtering technique, which is based on spatial 2D filtering: every pixel is analyzed and its neighborhood details are obtained. In general, noise does not occur as single pixels; it can span multiple blocks. Existing median based filtering techniques consider only specific block entries and replace all the pixels in that block. This leads to a problem when the selected block contains only noise data: the noise is interpreted as normal data and hence tends to be ignored. With the adaptive block selection method, the blocks are identified adaptively, without pertaining to any fixed boundaries. The blocks are identified along with their surrounding background components, which helps identify the components to be replaced. This ensures that filtering is performed with minimum loss of image clarity.
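The selective replacement idea can be sketched as a median filter that only rewrites pixels flagged as noisy, leaving the rest untouched. This is a minimal sketch: the adaptive boundary detection itself is not reproduced here, so the `flag` mask stands in for the pixels the detector marks as noise.

```python
import numpy as np

def selective_median(img, flag, k=3):
    """Replace only flagged (noisy) pixels with the median of their
    k x k neighborhood; unflagged pixels keep their original value."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode='reflect')
    out = img.astype(np.float64).copy()
    for r, c in zip(*np.nonzero(flag)):
        # padded[r:r+k, c:c+k] is the k x k window centered at (r, c)
        out[r, c] = np.median(padded[r:r + k, c:c + k])
    return out.astype(img.dtype)

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                       # an isolated salt pixel
flag = img == 255
print(selective_median(img, flag)[2, 2])  # 100
```

Because only flagged pixels are rewritten, clean regions pass through unchanged, which is the property that keeps the loss of image clarity to a minimum.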
Figure 3. Denoising stages initial (a), mid (b) final (c).
Even with the adaptive block selection component in place, it was identified that several moderately large noise components had been classified as part of the original image. This effect is rectified by a smoothing technique. Smoothing, also known as low pass filtering, is used to eliminate spatial frequency noise from digital images. In general, smoothing can be carried out effectively if multiple copies of the same image are available; however, this is not feasible in most cases. Hence, smoothing usually uses either reconstruction based methodologies or enhancement methodologies to denoise the images. Reconstruction based methodologies often require prior knowledge about the degradation process undergone by the image. Only images from certain specific applications carry information about the degradation process, so this method is not suitable for general applications. Enhanced filtering improves the image on the basis of human or machine interpretability. This technique is more adaptable and hence can be used in general applications. Such methods are heuristic and problem oriented.
The proposed approach identifies sudden variances of pixels in the image and reduces them. This method concentrates on salt and pepper noise reduction. In this category of noise, each corrupted pixel equals either the maximum or the minimum gray level. Hence, if a pixel exhibits maximum or minimum intensity, it is considered as noise and needs to be smoothened; otherwise the pixel is considered a normal pixel belonging to the original image. The smoothing rule is given as follows
kn = (1/M^2) Σ ωnj μnj, j = 1, …, M^2 (9)

where kn is the final pixel intensity; ωnj and μnj refer to the intensities of the pixel values that exhibit the least variance compared to the pixel under analysis, and M refers to a single block dimension.
An image with salt and pepper noise exhibits high fluctuation with reference to the center of the block. The neighboring elements are collected based on their intensity correspondence with the pixel exhibiting the variance. Block based smoothing is performed on the selected pixels, which guarantees that the signal to noise ratio is kept as high as possible even after eliminating the noise components.
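The extreme-intensity rule above can be sketched as follows. This is an illustrative reconstruction under the stated rule (pixels at 0 or 255 are treated as noise); averaging over the non-extreme neighbors is our simplification of the least-variance neighbor selection, not the authors' exact formula.

```python
import numpy as np

def smooth_extremes(img, k=3):
    """Treat pixels at the gray-scale extremes (0 or 255) as salt-and-pepper
    noise and replace them with the mean of their non-extreme neighbors."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode='reflect')
    out = img.astype(np.float64).copy()
    for r, c in zip(*np.nonzero((img == 0) | (img == 255))):
        block = padded[r:r + k, c:c + k].ravel()
        valid = block[(block > 0) & (block < 255)]  # neighbors that are not extreme
        if valid.size:
            out[r, c] = valid.mean()
    return out.astype(np.uint8)

img = np.full((5, 5), 120, dtype=np.uint8)
img[1, 1], img[3, 3] = 0, 255
print(smooth_extremes(img)[1, 1], smooth_extremes(img)[3, 3])  # 120 120
```

Excluding the extreme values from the neighborhood average prevents a cluster of noisy pixels from dragging the replacement value toward the noise.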
Figures 4(a)-(c) show the phases of denoising the image. The noisy image is passed through a modified median filter, which identifies high variations and converts the affected pixels according to their neighboring pixel intensities. This acts as the stage 1 denoising mechanism, represented in Figure 4(b). Even after this stage, some closely grouped noise pixels tend to remain in the image. Image smoothing is then performed to obtain the final denoised image (Figure 4(c)).
Gaussian noise, unlike salt and pepper noise, is spread across the entire image and increases or decreases the intensity of the pixels. Hence the noise cannot be isolated from the image (Figure 5(a)); it can be observed only from the large variation in pixel intensities. These variations need to be smoothened in order to eliminate the accumulated noise. A Gaussian filter is applied to the image to smoothen it (Figure 5(b)), and a Wiener filter is applied to eliminate tiny noise elements. The image is then sharpened to obtain the final denoised image (Figure 5(c)).
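The Gaussian-noise pipeline above can be sketched with SciPy's `gaussian_filter` and `wiener`. A sketch under assumptions: the parameter values are illustrative, and the sharpening step is implemented as a standard unsharp mask since the paper does not specify its sharpening method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import wiener

def denoise_gaussian(img, sigma=1.0, wiener_size=3, sharpen=0.5):
    """Gaussian smoothing, then Wiener filtering for residual noise,
    then unsharp masking to restore edge contrast."""
    x = img.astype(np.float64)
    smoothed = gaussian_filter(x, sigma=sigma)          # step 1: smoothen
    filtered = wiener(smoothed, mysize=wiener_size)     # step 2: tiny noise
    blurred = gaussian_filter(filtered, sigma=sigma)
    sharpened = filtered + sharpen * (filtered - blurred)  # step 3: sharpen
    return np.clip(sharpened, 0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
clean = np.full((64, 64), 100.0)
noisy = clean + rng.normal(0, 20, clean.shape)
out = denoise_gaussian(noisy)
print(out.std() < noisy.std())
```

The intensity spread of the output is far smaller than that of the noisy input, reflecting the smoothing of the pixel-intensity variations that carry the Gaussian noise.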
Figure 4. Image denoising with salt and pepper noise. (a) Noisy image, (b) Denoising stage 1, (c) Final denoised image.
3. Results and Discussion
The performance of the proposed approach is evaluated using standard images (Lena, Baboon and Peppers). Lena is a standard benchmark image containing the required shade variations with smooth transitions. The Baboon image contains large and sudden variations in intensity, which have a very high probability of being misclassified as noise; hence, to assess the efficiency of the detection process on an image with high variations, the Baboon image is used. The Peppers image contains large areas of similar color tones of varying sizes, which makes the adaptive division complicated; it was introduced to assess the efficiency of the block selection mechanism.
Noise is introduced into these images at various levels, and the PSNR of the noisy and denoised images is measured and used as the evaluation parameter.
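The PSNR used throughout the evaluation can be computed as below. A minimal sketch assuming 8-bit grayscale images (peak value 255); the function name is ours.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two grayscale images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110  # one pixel off by ten gray levels
print(round(psnr(a, b), 2))
```

Higher PSNR means the test image is closer to the reference, which is why the gap between noisy-image PSNR and denoised-image PSNR is used as the figure of merit.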
Figures 6-8 present the peak SNR values obtained by denoising the Lena image with salt and pepper noise, Gaussian noise and a combination of the two. It is observed from Figures 6-8 that the performance of the proposed approach is encouraging. For salt and pepper noise, the difference in PSNR between the noisy and denoised images is quite good, and a similar result is observed for the combination noise. On Gaussian noise the proposed approach exhibited only slightly improved performance, as shown in Figure 7, while the combination noise was handled effectively and exhibited a marked increase in PSNR values, as shown in Figure 8. Since the proposed approach identifies salt and pepper noise more efficiently than Gaussian noise, lower PSNR is exhibited in Figure 7. In the mixed noise, the noise values are averaged out and hence PSNR improvements can be observed.
Figure 5. Image denoising with Gaussian noise. (a) Noisy image, (b) Denoising stage 1. (c) Final denoised image.
Figure 6. PSNR (Salt and pepper noise).
Figure 7. PSNR (Gaussian noise).
Figure 8. PSNR (Salt & pepper and Gaussian noise).
Figures 9-11 show the noise reduction level of the images for salt and pepper noise, Gaussian noise and a combination of the two. The level of reduction starts at 1% and grows roughly linearly, reaching up to 160% for salt and pepper noise, while Gaussian noise shows very low levels of reduction. For the combination noise, the reduction level increases with the noise level; however, after a particular noise level it becomes constant and exhibits only small fluctuations. The reduction rate is calculated by taking the difference between the PSNR values of the denoised and the noisy image and expressing it as a percentage of the noisy image's PSNR.
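The reduction rate defined above can be sketched as a one-line computation. The 12 dB and 31.2 dB figures below are illustrative values chosen to reproduce the 160% maximum, not measurements from the paper.

```python
def reduction_rate(psnr_noisy: float, psnr_denoised: float) -> float:
    """Percentage PSNR gain of the denoised image relative to the noisy one."""
    return (psnr_denoised - psnr_noisy) / psnr_noisy * 100.0

# A noisy image at 12 dB restored to 31.2 dB corresponds to a 160% gain,
# the largest reduction level reported above.
print(round(reduction_rate(12.0, 31.2)))  # 160
```

Note that the rate is negative whenever denoising lowers the PSNR, which is exactly the negative-quadrant behavior reported for the low-noise Baboon results below.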
The appropriately defined large boundaries in Lena enable effective operation of the noise detection module. Salt and pepper noise makes absolute modifications of pixels; hence it was effectively identified and eliminated.
Gaussian noise distorts pixels rather than making absolute modifications, hence it is much more difficult to identify and correct compared to salt and pepper noise. Better detection was observed at low noise levels, but as the noise level increases, the noise blends with the actual image and detection becomes challenging.
Combination noise deals with both detecting the noise type and applying corrective actions. Reduction in the noise levels exhibits the efficiency of the noise identification and the correction process.
Figure 9. Noise reduction % (Salt and pepper noise).
Figure 10. Noise reduction % (Gaussian noise).
Figure 11. Noise reduction % (Salt & pepper and Gaussian noise).
Figures 12-17 show the peak SNR values of the noisy image and the denoised image and the noise reduction levels by the proposed approach.
The Baboon image contains huge frequency variations. Hence, in Figure 12 it is observed that the denoised image achieves a lower PSNR value than the noisy image, and the corresponding noise reduction depiction lies in the negative quadrant. This is due to the intrinsic frequency variations contained in the actual image. However, as the noise intensity starts to rise, it is noticed that the proposed approach performs effectively, reaching a noise reduction rate of 140%.
Figures 18-23 show the peak SNR values and the noise reduction rates for the Peppers image. Due to the regular and lower intrinsic variations contained in the image, the noise reduction algorithm gains an upper hand from the beginning.
Figures 24-26 show the noise levels and the noisy and denoised images of Lena. It is observed that the initial noise levels do not have a great impact on the quality of the image, as the noise is distributed within a specific region. However, as the intensity of the noise increases, an impact on the clarity is observed, and when it reaches 0.55, the maximum level used in our experiments, the legibility of the image is completely lost. It is nevertheless noticed that the denoised image has been reconstructed with most of the mandatory details.
Figure 12. PSNR (Salt and pepper noise).
Figure 13. PSNR (Gaussian noise).
Figure 14. PSNR (Salt & pepper and Gaussian noise).
Figure 15. Noise reduction % (Salt and pepper noise).
Figure 16. Noise reduction % (Gaussian noise).
Figure 17. Noise reduction % (Salt & pepper and Gaussian noise).
Figure 18. PSNR (Salt and pepper noise).
Figure 19. PSNR (Gaussian noise).
Figure 20. PSNR (Salt & pepper and Gaussian noise).
Figure 21. Noise Reduction % (Salt and pepper noise).
Figure 22. Noise reduction % (Gaussian noise).
Figure 23. Noise reduction % (Salt & pepper and Gaussian noise).
Figure 24. Lena (Noisy image vs. Denoised image) (Salt and pepper).
Figure 25. Lena (Noisy image vs. Denoised image) (Gaussian noise).
Figures 27-29 show the images of Baboon subject to denoising. This image plays a vital role in identifying
Figure 26. Lena (Noisy image vs. Denoised image) (Salt & pepper and Gaussian noise).
Figure 27. Baboon (Noisy image vs. Denoised Image) (Salt and pepper).
the accuracy of the denoising algorithms due to its highly fluctuating nature. The spatial domain pixels of Baboon exhibit large variations. From the denoised images, it could be observed that even with high pixel variations, the basic details of the image are preserved during the denoising phase.
Figure 28. Baboon (Noisy image vs. Denoised Image) (Gaussian noise).
Figures 30-32 show the Fruits images subject to denoising. The specialty of this image is that the pixel variations are very low, but the image has a large number of edges associated with it. It could be observed from the results that the edges are preserved to the maximum extent.
Figure 29. Baboon (Noisy image vs. Denoised image) (Salt & pepper and Gaussian noise).
Figure 30. Fruits (Noisy image vs. Denoised Image) (Salt and pepper).
4. Conclusion

This paper presents an effective method that performs denoising to provide a denoised image with acceptable quality. It is built on the principle that the noise contained in an image is not always constant: it varies, and a single image can contain several types of noise. This method uses Particle Swarm
Figure 31. Fruits (Noisy image vs. Denoised image) (Gaussian).
Optimization to identify the noise contained in the image and applies appropriate methods to treat it. The noise identification and denoising components were found to perform efficiently: the noise identification component effectively identified noise even at a noise level of 0.01, and the denoising module achieved noise reduction levels as high as 160%.
Figure 32. Fruits (Noisy image vs. Denoised image) (Salt & pepper and Gaussian noise).
Limitations of our approach include the fixed grid size. We assume that the noise distributed in the image occupies a single defined grid, whose size is set in the application. Since the grid size is predefined, patches can be seen in the output denoised images, although the SNR is better than with previous approaches.
Our future work will provide dynamic grid size selection methods to identify the noise distribution and filter out grids of various sizes containing specific noise, thereby avoiding the appearance of patches in the image. Domain specific block selection can also be performed by adapting our algorithm to particular sets of image patterns, further enhancing PSO performance and thereby resulting in better SNR values. The proposed approach uses a plug-in model: PSO is used as a multi-class classifier, and any type of noise can be integrated by supplying the appropriate training data. Hence application based fine-tuning of the detection process is possible.