A New Method of Multi-Focus Image Fusion Using Laplacian Operator and Region Optimization


1. Introduction

Image fusion is one of the most important techniques for extracting and integrating as much information as possible for image analysis tasks such as surveillance, target tracking, target detection and face recognition [1] [2]. Image fusion is often applied to multi-focus image processing. Because of the limited depth of field of an optical lens, objects outside the focused region are blurred in the process of optical imaging [3]. Multi-focus image fusion is an effective technique for obtaining a fully focused image: it integrates the focused areas from images captured at different focus depths. So far, many multi-focus image fusion algorithms have been proposed. They can be divided into two categories: spatial domain fusion and transform domain fusion [4].

In the transform domain, multi-scale decomposition closely resembles the coarse-to-fine way in which the human visual system and computer vision interpret a scene, and it introduces no blocking effect in the fusion process [5]. For these reasons, this class of algorithms has received wide attention and dominates current research on multi-focus image fusion. Work on multiscale image fusion currently focuses on two aspects: the multiscale analysis tools and the fusion rules. In recent years, researchers have proposed many tools for multiscale analysis of images, including the pyramid transform, the wavelet transform and other multiscale geometric analysis methods.

Spatial domain methods fuse images according to the spatial feature information of image pixels [6]. Since a single pixel cannot represent spatial feature information, a block-based method is generally used. This approach works well on texture-rich regions. However, in flat regions it is prone to misjudgment, the block size is difficult to select, and small discontinuous patches appear at image edges, producing a serious blocking effect. To address these shortcomings of block-based fusion, several improvements have been proposed. V. Aslantas and R. Kurban proposed a differential evolution algorithm to determine the size of the segmented image blocks, which to some extent solved the problem of choosing the block size [6]. A. Goshtasby et al. compute each block of the fused image as a weighted sum of the corresponding sub-blocks, introducing a weighting factor for each corresponding block in the source images [7]. H. Hariharan et al. defined the focal connectivity of the same focal plane and segmented the source images according to this connectivity [8]. In addition to these spatial domain algorithms, many scholars have recently proposed fusion methods based on focus region detection.

The literature shows that a key problem for spatial domain fusion algorithms is how to measure the sharpness, or saliency level, of blocks or regions. To address this problem, a new spatial domain multi-focus image fusion method based on the Laplacian operator and region optimization is proposed. Evaluating the saliency level of regions is the main part of this paper. Methods for evaluating image saliency include the Tenengrad gradient function [9], the Laplacian gradient function [10], the sum modulus difference (SMD) function [11], the energy gradient function [12], and so on. The images are first processed with the best of these saliency measures to obtain a rough focused region. The focused region is then optimized according to the focusing connectivity of the focal plane and edge detection. Finally, the multi-focus images are fused using the final decision map.

2. Materials and Methodology

2.1. Materials

To demonstrate the superiority of the proposed fusion method, three sets of images are selected for multi-focus image fusion, as shown in Figures 1(a)-(c). The images in the top row are focused mainly on the foreground, while the images in the bottom row are focused mainly on the background. To evaluate the performance of the fusion method, the proposed method is compared with several current mainstream multi-focus image fusion methods based on DWT [13], NSCT [14], OPT [15] and LP [16]. All experiments are carried out in MATLAB R2016a.

2.2. The Evaluation of Image Saliency

In no-reference image quality evaluation [17], the saliency of an image is an important index of image quality, and it agrees well with human subjective perception: an image with low saliency appears blurred. In this paper, the Laplacian gradient [10] is used.

The Laplacian operator is an important tool in image processing: it is an edge-point detection operator independent of edge direction, and it is a second-order differential operator. For a continuous two-variable function f (x, y), the Laplacian operation is defined as

${\nabla}^{2}f={\partial}^{2}f/\partial {x}^{2}+{\partial}^{2}f/\partial {y}^{2}$ (1)

For digital images, the Laplacian operation can be simplified as

$g\left(i,j\right)=4f\left(i,j\right)-f\left(i+1,j\right)-f\left(i-1,j\right)-f\left(i,j+1\right)-f\left(i,j-1\right)$ (2)

At the same time the above formula can be expressed as a convolution form, that is

$g\left(i,j\right)={\displaystyle \underset{r=-k}{\overset{k}{\sum}}{\displaystyle \underset{s=-l}{\overset{l}{\sum}}f\left(i-r,j-s\right)H\left(r,s\right)}}$ (3)

In the above formula, $i,j=0,1,2,\cdots ,N-1$ ; k = 1, l = 1, H(r, s) can take a lot of values, one of which is

${H}_{1}=\left[\begin{array}{ccc}0& 1& 0\\ 1& -4& 1\\ 0& 1& 0\end{array}\right]$
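The paper's experiments use MATLAB; as an illustrative Python/NumPy sketch (the function name is ours), Formula 3 with the template H1 reduces to a single 2-D convolution:

```python
import numpy as np
from scipy.ndimage import convolve

# The 3x3 Laplacian template H1 from the text. Its center is -4, so convolving
# yields the negative of Formula 2; the sign is immaterial because the saliency
# measure uses only the magnitude |g|.
H1 = np.array([[0, 1, 0],
               [1, -4, 1],
               [0, 1, 0]], dtype=float)

def laplacian_response(img):
    """g(i, j): convolution of the image with the Laplacian template (Formula 3)."""
    return convolve(img.astype(float), H1, mode='nearest')
```

A flat image produces a zero response everywhere, while sharp detail produces large magnitudes, which is exactly the property the saliency measure below exploits.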

Experiments show that the higher the image saliency, the larger the sum over the response matrix obtained after processing with the Laplacian operator. Therefore, the image saliency D(f) based on the Laplacian gradient function is defined as follows:


Figure 1. Images for multi-focus image fusion. (a) Backgammon; (b) Clock; (c) Lab. In each set, the upper image is foreground-focused and the lower image is background-focused.

$D\left(f\right)={\displaystyle {\sum}_{y}{\displaystyle {\sum}_{x}\left|g\left(x,y\right)\right|}}\text{\hspace{1em}}\left(g\left(x,y\right)>T\right)$ (4)

Here, g (x, y) is the Laplacian response at pixel (x, y) and T is a threshold.

By using the value of D(f), it is easy to distinguish images of different clarity. Next, it is applied to the saliency decision for different regions of an image. Accordingly, the region saliency of an image can be defined as:

${D}_{I}\left(i,j\right)=D\left(I\left(i-n:i+n,j-n:j+n\right)\right)$ (5)

where D is the saliency function based on the Laplacian gradient operator, D_{I} is the saliency matrix of image I, and $\left(2n+1\right)\times \left(2n+1\right)$ is the size of the processing template.

In multi-focus image processing, we can obtain the saliency matrices (D_{I}_{1}, D_{I}_{2}) of the differently focused images and then derive a decision matrix (M_{decision}) by comparing them:

${M}_{\text{decision}}=\left({D}_{{I}_{1}}\ge {D}_{{I}_{2}}\right)$ (6)

For various reasons, the decision map contains some noise and erroneous judgments, which degrade the quality of image fusion. The erroneous judgments are discussed later in the article.
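Formulas 4-6 together turn the Laplacian response into a per-pixel decision. A minimal Python/NumPy sketch (function names are ours, and the windowed sum is implemented here via a uniform filter, an assumption about an implementation detail the paper does not specify):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

H1 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def region_saliency(img, n=4, T=0.0):
    """D_I(i, j): Laplacian saliency of the (2n+1)x(2n+1) window at each pixel (Formulas 4-5)."""
    g = np.abs(convolve(img.astype(float), H1, mode='nearest'))
    g = np.where(g > T, g, 0.0)              # threshold of Formula 4
    w = 2 * n + 1
    return uniform_filter(g, size=w) * w * w  # windowed sum of |g|

def decision_map(img1, img2, n=4, T=0.0):
    """M_decision: 1 where image 1 is locally at least as salient (Formula 6)."""
    return region_saliency(img1, n, T) >= region_saliency(img2, n, T)
```

On a pair of images where each is sharp in a different half, the map selects the sharp half of each, up to a narrow band around the seam.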

2.3. Region Optimization

The first decision map obtained often contains noise and misjudged areas that need to be corrected. Most methods use morphological processing to solve this problem, but morphological processing often damages the boundary. H. Hariharan et al. [18] defined the focusing connectivity of the same focal plane; most of the noise and misjudged areas can be corrected according to it:

${M}_{\text{DF-decision}}={\text{Delete}}_{\text{Larea}}\left({M}_{\text{decision}}\right)$ (7)

The function Delete_{Larea} deletes the smaller connected areas, which contain most of the noise and misjudged areas.
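A minimal sketch of such a Delete_{Larea} step, assuming "smaller" means below a pixel-count threshold (the threshold and function name are ours), built on connected-component labeling:

```python
import numpy as np
from scipy.ndimage import label

def delete_small_areas(decision, min_size):
    """Formula 7 (sketch): remove connected regions smaller than min_size pixels,
    for both the foreground (1) and background (0) of the binary decision map."""
    out = decision.astype(bool).copy()
    for value in (True, False):
        lbl, _ = label(out == value)
        sizes = np.bincount(lbl.ravel())
        small = sizes < min_size
        small[0] = False                  # label 0 is the complement, not a region
        out[small[lbl]] = not value       # flip small regions to the other class
    return out
```

Running both passes (foreground then background) removes isolated specks of either class while leaving large focused regions intact.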

At this stage, an important problem remains: erroneous judgments adhering to the focus edge are not removed by the above method. When the Laplacian method is applied to the edges of multi-focus images, edge information often causes interference, as in the case of Figure 2. Because the black part outweighs the white part in their corresponding templates, point A and the points around it turn black; this follows from Formula 2, where f is processed with a 3 × 3 template. Misjudgments can also arise for other reasons. We therefore propose a focus edge optimization method based on edge detection. Edge detection first finds the edges of the original images; a template then scans along these edges and corrects the area inside the template. In Formula 8, g is an edge detection function and h is the correction function: if one side of the edge is dominated by one element, the whole side is set to that element. Here A is a decision map, B is an edge map, and C is the optimized decision map.

$\left[\begin{array}{ccccc}\text{1}& 1& 1& \text{0}& 0\\ \text{1}& \text{1}& \overline{)\text{0}}& \text{0}& \text{0}\\ \text{1}& \text{1}& \overline{)\text{1}}& \text{0}& \text{0}\\ \text{1}& \text{1}& \text{0}& \text{0}& \text{0}\\ \text{1}& \text{0}& \text{0}& \text{0}& \text{0}\end{array}\right]\begin{array}{c}\stackrel{f}{\to}A=\left[\begin{array}{ccccc}\text{1}& 1& 1& \text{0}& 0\\ \text{1}& \text{1}& \overline{)1}& \text{0}& \text{0}\\ \text{1}& \text{1}& \overline{)\text{0}}& \text{0}& \text{0}\\ \text{1}& \text{1}& \text{0}& \text{0}& \text{0}\\ \text{1}& \text{0}& \text{0}& \text{0}& \text{0}\end{array}\right]\\ \stackrel{g}{\to}B=\left[\begin{array}{ccccc}\text{1}& 1& 1& \text{0}& \text{0}\\ 0& \text{1}& \text{0}& \text{0}& \text{0}\\ 0& 0& \text{\hspace{0.17em}}1\text{\hspace{0.17em}}& \text{0}& \text{0}\\ 0& \text{1}& \text{0}& \text{0}& \text{0}\\ \text{1}& \text{0}& \text{0}& \text{0}& \text{0}\end{array}\right]\end{array}\}\stackrel{h}{\to}C=\left[\begin{array}{ccccc}\text{1}& \text{1}& 1& \text{0}& \text{0}\\ \text{1}& \text{1}& \overline{)\text{0}}& \text{0}& \text{0}\\ \text{1}& \text{1}& \overline{)\text{1}}& \text{0}& \text{0}\\ \text{1}& \text{1}& \text{0}& \text{0}& \text{0}\\ \text{1}& \text{0}& \text{0}& \text{0}& \text{0}\end{array}\right]$ (8)
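The h step can be sketched in a deliberately simplified, horizontal-only form (the paper scans a template along the detected edge; the window half-width, the tie-breaking rule and all names here are our assumptions):

```python
import numpy as np

def edge_optimize(decision, edges, half=3):
    """Simplified sketch of h (Formula 8): at each edge pixel, force the columns
    on each side of the edge, within a small window, to that side's majority
    value, snapping the decision boundary onto the detected edge."""
    out = decision.astype(int).copy()
    for i, j in zip(*np.nonzero(edges)):
        left = slice(max(j - half, 0), j)
        right = slice(j + 1, min(j + half + 1, out.shape[1]))
        for side in (left, right):
            vals = out[i, side]
            if vals.size:
                out[i, side] = int(vals.mean() > 0.5)  # majority (ties go to 0)
    return out.astype(bool)
```

In the spirit of Formula 8, a misjudged pixel adjacent to the true edge is flipped back to the value dominating its side of the edge.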

2.4. Multi-Focus Image Fusion

Image fusion is carried out according to the final decision map (D_{final}). Then, the fused image f (x, y) could be expressed as:

$f\left(x,y\right)={D}_{\text{final}}\times {f}_{1}\left(x,y\right)+\left(1-{D}_{\text{final}}\right)\times {f}_{2}\left(x,y\right)$ (9)

This means the fused image is composed of the focused regions of images f_{1} (x, y) and f_{2} (x, y). Through these steps, a fully focused fused image is obtained.
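Formula 9 is a straightforward per-pixel blend controlled by the binary decision map; a minimal NumPy sketch (function name is ours):

```python
import numpy as np

def fuse_two(f1, f2, d_final):
    """Formula 9: take pixels from f1 where the decision map is 1, else from f2."""
    d = d_final.astype(float)
    return d * f1 + (1.0 - d) * f2
```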

For more than two images, the form of the decision map must be changed: it stores the serial number (index) of the most salient image at each position. Figure 3 is a schematic map of a decision map in the fusion of four multi-focus images. During fusion, each pixel of the fused image is assigned according to this index value. The fused image f (x, y) can be expressed as:

Figure 2. Example image of edge information interference.

Figure 3. Schematic map of a decision map in the process of fusion of four multi-focus images.

$f\left(x,y\right)={f}_{{D}_{\text{final}}\left(x,y\right)}\left(x,y\right)$ (10)
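Formula 10 amounts to per-pixel indexing into a stack of source images; a minimal NumPy sketch (function name is ours):

```python
import numpy as np

def fuse_many(images, d_final):
    """Formula 10: at each pixel, take the value from the source image whose
    index is stored in the decision map."""
    stack = np.stack(images)                      # shape (K, M, N)
    idx = d_final[np.newaxis]                     # shape (1, M, N), integer indices
    return np.take_along_axis(stack, idx, axis=0)[0]
```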

2.5. Evaluation Index System

The performance of a fusion algorithm can be evaluated subjectively and objectively. Because subjective evaluation depends heavily on human visual characteristics, it is difficult to distinguish fused images that look nearly identical. Therefore, one subjective evaluation method and four objective evaluation methods are adopted in this article.

1) Subjective evaluation method

a) Comparison of residual maps

The residual map displays the difference between two images. We can judge the fusion effect by observing the residual maps of the different methods. The residual map I_{r} between the source image and the fused image is defined as follows:

${I}_{r}={I}_{\text{origin}}-{I}_{\text{fusion}}+\mathrm{max}\left({I}_{\text{origin}}\right)/2$ (11)
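Formula 11 offsets the difference image by half the maximum gray level, so mid-gray indicates no difference; a minimal NumPy sketch (function name is ours):

```python
import numpy as np

def residual_map(origin, fused):
    """Formula 11: difference image shifted by half the maximum gray level,
    so identical regions appear as uniform mid-gray."""
    o = origin.astype(float)
    return o - fused.astype(float) + o.max() / 2.0
```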

2) Objective evaluation methods

a) Mutual information (MI)

The greater the sum of the mutual information between the fused image and the source images, the richer the information the fused image obtains from the sources, and the better the fusion effect. The MI between the source images and the fused image is defined as follows:

$MI={\displaystyle \underset{k=0}{\overset{L}{\sum}}{\displaystyle \underset{i=0}{\overset{L}{\sum}}{p}_{AF}\left(i,k\right){\mathrm{log}}_{2}\frac{{p}_{AF}\left(i,k\right)}{{p}_{A}\left(i\right){p}_{F}\left(k\right)}}}+{\displaystyle \underset{k=0}{\overset{L}{\sum}}{\displaystyle \underset{j=0}{\overset{L}{\sum}}{p}_{BF}\left(j,k\right){\mathrm{log}}_{2}\frac{{p}_{BF}\left(j,k\right)}{{p}_{B}\left(j\right){p}_{F}\left(k\right)}}}$ (12)

Here, p_{A}, p_{B} and p_{F} are the normalized gray histograms of A, B and F; p_{AF} (i, k) and p_{BF} (j, k) are the joint gray histograms between the fused image and each source image; L is the number of intensity levels.
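One summand of Formula 12 (the MI between a single source image and the fused image) can be computed from the joint and marginal histograms; a sketch assuming a configurable number of gray-level bins (names are ours):

```python
import numpy as np

def mutual_information(a, f, bins=256):
    """MI between source image a and fused image f from gray-level histograms
    (one summand of Formula 12)."""
    joint, _, _ = np.histogram2d(a.ravel(), f.ravel(), bins=bins)
    p_af = joint / joint.sum()        # joint distribution
    p_a = p_af.sum(axis=1)            # marginal of a
    p_f = p_af.sum(axis=0)            # marginal of f
    nz = p_af > 0                     # avoid log(0)
    denom = np.outer(p_a, p_f)[nz]
    return float(np.sum(p_af[nz] * np.log2(p_af[nz] / denom)))
```

The full index of Formula 12 is then mutual_information(A, F) + mutual_information(B, F).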

b) Peak signal to noise ratio (PSNR)

PSNR is the most common and widely used objective measure of image quality. The larger the PSNR, the less the distortion is represented. The PSNR is calculated as follows:

$PSNR=10\cdot {\mathrm{log}}_{10}\left(MA{X}_{I}^{2}/\left(\frac{1}{mn}{\displaystyle \underset{i=0}{\overset{m-1}{\sum}}{\displaystyle \underset{j=0}{\overset{n-1}{\sum}}{\Vert I\left(i,j\right)-K\left(i,j\right)\Vert}^{2}}}\right)\right)$ (13)

where I represents one of the source images, K represents the fused image, m and n are the image dimensions, and MAX_{I} is the maximum possible pixel value of the image.
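Formula 13 in code form, guarding the identical-image case (names are ours; MAX_I defaults to 255 for 8-bit images):

```python
import numpy as np

def psnr(ref, fused, max_i=255.0):
    """Formula 13: peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    if mse == 0:
        return float('inf')          # identical images: no distortion
    return 10.0 * np.log10(max_i ** 2 / mse)
```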

c) Spatial frequencies (SF)

SF reflects the change of the pixel gray level of the image in space. To some extent, SF can reflect the clarity of images. SF is defined as follows:

$SF=\sqrt{\frac{1}{M\times N}{\displaystyle \underset{i=1}{\overset{M}{\sum}}{\displaystyle \underset{j=2}{\overset{N}{\sum}}{\left[I\left(i,j\right)-I\left(i,j-1\right)\right]}^{2}}}+\frac{1}{M\times N}{\displaystyle \underset{i=2}{\overset{M}{\sum}}{\displaystyle \underset{j=1}{\overset{N}{\sum}}{\left[I\left(i,j\right)-I\left(i-1,j\right)\right]}^{2}}}}$ (14)

where I (i, j) represents the image, and M and N are the numbers of rows and columns of the image.
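Formula 14 is the root-mean-square of the horizontal and vertical first differences; a minimal NumPy sketch (function name is ours):

```python
import numpy as np

def spatial_frequency(img):
    """Formula 14: RMS of horizontal and vertical first differences."""
    i = img.astype(float)
    rf = np.sum((i[:, 1:] - i[:, :-1]) ** 2)   # horizontal (row-direction) differences
    cf = np.sum((i[1:, :] - i[:-1, :]) ** 2)   # vertical (column-direction) differences
    m, n = i.shape
    return np.sqrt((rf + cf) / (m * n))
```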

d) Edge intensity (EI)

EI is a measure of the local change intensity of the image in the normal direction along the edge, and also reflects the image sharpness to some extent. Its formula is expressed as:

$EI=\frac{1}{M\times N}{\displaystyle \underset{i=1}{\overset{M}{\sum}}{\displaystyle \underset{j=1}{\overset{N}{\sum}}\sqrt{{I}_{x}^{2}\left(i,j\right)+{I}_{y}^{2}\left(i,j\right)}}}$ (15)

where I_{x} (i, j) and I_{y} (i, j) represent the horizontal and vertical gradients of the image.
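Formula 15 averages the gradient magnitude over the image. The paper does not specify the gradient operator; the Sobel operator used below is a common choice and an assumption on our part (the function name is also ours):

```python
import numpy as np
from scipy.ndimage import sobel

def edge_intensity(img):
    """Formula 15: mean gradient magnitude, here from Sobel responses."""
    i = img.astype(float)
    gx = sobel(i, axis=1)   # horizontal gradient I_x
    gy = sobel(i, axis=0)   # vertical gradient I_y
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))
```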

3. Results

3.1. Laplacian Gradient

To test the accuracy of the above method, the algorithm is implemented in MATLAB. The experiment uses the Lena image, of size 512 × 512 pixels. Four defocused images are then generated by Gaussian blurring with radii of 2.5, 5, 7.5 and 10, respectively. The five images Lena, Lena 2.5, Lena 5, Lena 7.5 and Lena 10 are shown in Figures 4(a)-(e).

The five images were tested with the image saliency assessment method based on the Laplacian gradient to obtain the corresponding D(f) values. The data show


Figure 4. Initial and blurred Lena images. (a) Initial Lena image, D(f) = 1.0000; (b) Lena image blurred with a Gaussian radius of 2.5, D(f) = 0.1117; (c) Lena image blurred with a Gaussian radius of 5, D(f) = 0.0920; (d) Lena image blurred with a Gaussian radius of 7.5, D(f) = 0.0842; (e) Lena image blurred with a Gaussian radius of 10, D(f) = 0.0797.

that this method is very sensitive to blur. Contrast experiments are performed using the group of multi-focus images in Figure 1(a); the results are shown in Figure 5(a).

From Figures 5(a)-(d), one can clearly see that these fusion methods perform differently on the same multi-focus images. On close inspection, the fused images obtained with Tenengrad and SMD are not clear, and there are many residuals in Figure 5(c) and Figure 5(d). Moreover, the object edges are fuzzy in the decision maps of Tenengrad, SMD and the energy gradient. Compared with the actual scene, the decision maps obtained by these methods also contain obvious false information. In contrast, the fused image obtained with the saliency assessment method based on the Laplacian gradient is subjectively better: its residual information is smaller than that of the other methods, which means the method transfers almost all focus information to the fused image. Good preprocessing also greatly simplifies the later operations, especially edge optimization.

3.2. Region Optimization

Next, the focusing connectivity of the same focal plane and the focus edge optimization method based on edge detection are applied in turn to the initial decision map. The obvious interference has been removed from the decision map in Figure 6(e), and the edge is smoother in the decision map after edge optimization in Figure 6(f). This is shown more clearly in Figures 7(a)-(d).

3.3. Multi-Focus Image Fusion

The whole process can be summarized as follows. First, we choose a set of multi-focus images (Figure 6(a) and Figure 6(b)) and compute the corresponding saliency maps (Figure 6(c)), from which an initial decision map (Figure 6(d)) is obtained. Next, the focusing connectivity of the same focal plane is used to remove most of the noise and misjudged areas, and the edge correction method is used to optimize the decision map (Figure 6(e)). Finally,


Figure 5. Decision map of different methods. (a) Decision map using the image saliency assessment method based on the Laplacian gradient; (b) Decision map using the Tenengrad method; (c) Decision map using the SMD method; (d) Decision map using the energy gradient method.


Figure 6. Results of multi-focus image fusion. (a) multi-focus images with foreground focus; (b) multi-focus images with background focus; (c) the edge map; (d) the initial decision map; (e) the decision map without obvious interference; (f) the final decision map; (g) the fused image.


Figure 7. Images of local magnification. (a) one of images in Figure 1(a) marked the processing area; (b) partial decision map before edge optimization; (c) partial edge map; (d) partial decision map after edge optimization.

the multi-focus images are fused according to the final decision map (Figure 6(f)). It is clear from Figure 7(d) that the edge is smoother and consistent with the actual scene. The result is a globally clear fused image (Figure 6(g)).

4. Discussions

4.1. Subjective Evaluation

The fused images and corresponding residual maps of Backgammon, Clock and Lab obtained with the different methods are shown in Figures 8(a)-(j), Figures 9(a)-(j) and Figures 10(a)-(j). From Figures 8(a)-(e), all five algorithms produce fused images, but it is very difficult to distinguish some of the fusion results by visual observation alone. A better way to evaluate the visual quality of the fused images is to compare their residual maps, shown in Figures 8(f)-(j). The comparison is clear: the residual maps obtained by DWT, NSCT, OPT and LP contain more residual information, while the proposed method leaves less.

Figure 8(c) and Figure 9(c) show that the images fused with OPT are not clear enough. Many fusion errors exist at the right edge of the Gobang box in Figure 8(a) and Figure 8(b) and on the surface of the rear clock in Figure 9(a), Figure 9(b) and Figure 9(d). Among the residual maps, there is more residual information on the surface of the clock in Figure 10(f), Figure 10(h) and Figure 10(i). Compared with the other methods, the method proposed in this paper has better subjective performance.

4.2. Objective Evaluation

In the last part, the residual maps are used to compare the different image fusion methods. In order to further verify the performance of the proposed method, the objective quality evaluation is carried out. Objective evaluation indicators have been introduced above, including MI, PSNR, SF and EI. The evaluation results are shown in Table 1.

From the data in Table 1, the evaluation results are clear. For the source image "Backgammon", most of the four index values of the proposed method are noticeably higher than those of the other methods, and the same holds for "Clock" and "Lab". Given the meanings of these evaluation measures, this shows that the fused image obtained by the proposed method contains more information and has higher definition.

5. Conclusions

This paper presents an improved algorithm for multi-focus image fusion based


Figure 8. The fused images and corresponding residual maps of Backgammon using different methods. (a) Fused image of DWT; (b) Fused image of NSCT; (c) Fused image of OPT; (d) Fused image of LP; (e) Fused image of the proposed method; (f) Residual map of DWT; (g) Residual map of NSCT; (h) Residual map of OPT; (i) Residual map of LP; (j) Residual map of the proposed method.


Figure 9. The fused images and corresponding residual maps of Clock using different methods. (a) Fused image of DWT; (b) Fused image of NSCT; (c) Fused image of OPT; (d) Fused image of LP; (e) Fused image of the proposed method; (f) Residual map of DWT; (g) Residual map of NSCT; (h) Residual map of OPT; (i) Residual map of LP; (j) Residual map of the proposed method.


Figure 10. The fused images and corresponding residual maps of Lab using different methods. (a) Fused image of DWT; (b) Fused image of NSCT; (c) Fused image of OPT; (d) Fused image of LP; (e) Fused image of the proposed method; (f) Residual map of DWT; (g) Residual map of NSCT; (h) Residual map of OPT; (i) Residual map of LP; (j) Residual map of the proposed method.

Table 1. Quantitative indexes of the fusion results.

on the Laplacian operator and region optimization. The algorithm contains two innovations: the evaluation of image saliency based on the Laplacian gradient, and focus-area and edge optimization based on the connectedness of the focused region and edge detection.

The evaluation of image saliency based on the Laplacian gradient performs well in distinguishing image clarity and facilitates the extraction of precise focus areas. At the same time, focus-area and edge optimization makes the focused region more accurate. The subjective and objective evaluations show that the proposed algorithm is effective for multi-focus image fusion and performs better than the other four representative fusion algorithms. Many experiments have been carried out, and the algorithm still needs improvement in edge detection: more accurate edge detection would bring better fusion results.

Acknowledgements

This work is partially supported by the Hubei Provincial Department of Education, National Natural Science Foundation of China (11571041) and Natural Science Foundation of Hubei Province (2013CFA053).

References

[1] Sankaranarayanan, G., Veeraraghavan, A. and Chellappa, R. (2008) Object Detection, Tracking and Recognition for Multiple Smart Cameras. Proceedings of the IEEE, 96, 1606-1624. https://doi.org/10.1109/JPROC.2008.928758

[2] Stathaki, T. (2008) Image Fusion: Algorithms and Applications. Academic Press, Cambridge.

[3] Rahman, M.A., Liu, S., Wong, C.Y., Lin, S.C.F., Liu, S.C. and Kwok, N.M. (2017) Multi-Focal Image Fusion Using Degree of Focus and Fuzzy Logic. Digital Signal Processing, 60, 1-19. https://doi.org/10.1016/j.dsp.2016.08.004

[4] Balasubramaniam, P. and Ananthi, V.P. (2014) Image Fusion Using Intuitionistic Fuzzy Sets. Information Fusion, 20, 21-30. https://doi.org/10.1016/j.inffus.2013.10.011

[5] Piella, G. (2002) A Region-Based Multiresolution Image Fusion Algorithm. International Conference on Information Fusion, Annapolis, 8-11 July 2002, 1557-1564. https://doi.org/10.1109/ICIF.2002.1021002

[6] Aslantas, V. and Kurban, R. (2010) Fusion of Multi-Focus Images Using Differential Evolution Algorithm. Expert Systems with Applications, 37, 8861-8870. https://doi.org/10.1016/j.eswa.2010.06.011

[7] Rattá, G.A., Vega, J., Murari, A. and Contributors, J.E. (2007) Image Fusion: Advances in the State of the Art. Information Fusion, 8, 114-118. https://doi.org/10.1016/j.inffus.2006.04.001

[8] Hariharan, H., Gribok, A., Abidi, M.A. and Koschan, A. (2006) Image Fusion and Enhancement via Empirical Mode Decomposition. Journal of Pattern Recognition Research, 1, 16-32. https://doi.org/10.13176/11.6

[9] Yu, M.Y., Han, M.L., Cheng, Y.S. and Wei, T.A. (2011) Autofocusing Algorithm Comparison in Bright Field Microscopy for Automatic Vision Aided Cell Micromanipulation. IEEE International Conference on Nano/Molecular Medicine and Engineering, Hong Kong, 5-9 December 2010, 88-92.

[10] Raghunandana, R., Manikantan, K., Murthy, N.N. and Ramachandran, S. (2012) Face Recognition Using DWT Thresholding Based Feature Extraction with Laplacian-Gradient Masking as a Pre-Processing Technique. Proceedings of Cube International Information Technology Conference, Pune, 3-5 September 2012, 82-89.

[11] Choi, K.S., Lee, J.S. and Ko, S.J. (2002) New Autofocusing Technique Using the Frequency Selective Weighted Median Filter for Video Cameras. IEEE Transactions on Consumer Electronics, 45, 820-827. https://doi.org/10.1109/30.793616

[12] Guo, J.B., Feng, H.J., Wang, L., Peng, Q.J. and Li, X.F. (2016) Design of Focusing Window Based on Energy Function of Gradient. Infrared Technology, 38, 197-202.

[13] Li, S. and Yang, B. (2008) Multifocus Image Fusion by Combining Curvelet and Wavelet Transform. Pattern Recognition Letters, 29, 1295-1301. https://doi.org/10.1016/j.patrec.2008.02.002

[14] Bhatnagar, G., Wu, Q.M.J. and Liu, Z. (2013) Directive Contrast Based Multimodal Medical Image Fusion in NSCT Domain. IEEE Transactions on Multimedia, 15, 1014-1024. https://doi.org/10.1109/TMM.2013.2244870

[15] Jie-Feng, X.U., Ai-Guo, L.I. and Qin, Z. (2006) Image Fusion Algorithm Based on Orthogonal Polynomial Transform. Microelectronics & Computer, 23, 93-95.

[16] Wang, W. and Chang, F. (2011) A Multi-Focus Image Fusion Method Based on Laplacian Pyramid. Journal of Computers, 6, 2559-2566. https://doi.org/10.4304/jcp.6.12.2559-2566

[17] Choi, M.G., Jung, J.H. and Jeon, J.W. (2009) No-Reference Image Quality Assessment Using Blur and Noise. International Journal of Electrical & Electronics Engineering, 3, 184-188.

[18] Hariharan, H., Koschan, A. and Abidi, M. (2007) Multifocus Image Fusion by Establishing Focal Connectivity. IEEE International Conference on Image Processing, 3, 321-324.