Study on Focusing of Area Array Camera by Using Frequency of Images
Abstract: Focusing of an area array camera is an important step in building a high-precision imaging camera, and its testing method deserves special study. In this paper, a camera focusing method is introduced in which the defocus depth of the camera is calculated from the frequency spectrum of a defocused image. The method is especially suitable for focusing area array cameras, and it avoids the complicated work of adjusting the focal plane of an area array camera during the focusing process.

1. Introduction

Two kinds of methods, DFF (Depth from Focus) [1]-[5] and DFD (Depth from Defocus), have been described for measuring the defocus distance of a camera and for rapid autofocusing of a camera system. The DFF method relies on a search algorithm: by analyzing the quality of a series of images, we obtain the sharpness of the images at different focus positions and then calculate the optimal position of the optical focal plane of the camera by fitting the defocus curve. In the actual adjustment process, the collimator target should be parallel to the focal plane. For line-array CCD cameras this can be achieved by adjusting the horizontal direction of the target; the process is simple and convenient, so the method is widely used for focal plane adjustment of line-array cameras. For area array cameras, however, the method has some limitations. To ensure parallelism, the target must be adjusted in two dimensions, which is difficult to carry out, and the algorithm requires many images, so the workload is heavy. Therefore, the DFF method is not well suited to area array cameras.

This paper describes the DFD method for adjusting the focal plane of a camera. Using only two images taken with different camera parameters, such as lens position, focal length, or aperture diameter, we can obtain the depth information of the camera from the frequency distribution curves of the images and then directly calculate the defocus distance by the DFD method. Focusing of the camera system is then realized by adjusting the thickness of the gasket at the lens position. Although the method is based on analyzing the frequency domain of the images, the final result does not need to be derived in frequency space. The method is therefore simple and well suited to area array cameras. The DFD method is explained in detail below, and the operating procedure is demonstrated.

2. The Principle of DFD Method

According to geometrical optics function and Figure 1, we have

$D/2R=f/\left(v-f\right)$ (1)

R is positive when the receiver is behind the focal plane and negative when the receiver is in front of it. In a practical optical system, the Point Spread Function (PSF) of the imaging system is not the ideal Airy pattern, because of system design and fabrication deviations. If the system transfer function is close to the diffraction limit and the system is lossless, the energy distribution can be described by a Gaussian distribution [1]

$h\left(x,y\right)=1/\left(2\pi {\delta }^{2}\right)\cdot \mathrm{exp}\left[-\left({x}^{2}+{y}^{2}\right)/2{\delta }^{2}\right]$ (2)

and

$\iint h\left(x,y\right)dxdy=1$ (3)
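As a quick numerical check, the Gaussian PSF of Equation (2), sampled on a discrete grid, should sum to approximately 1 as Equation (3) requires. A minimal Python sketch (the grid size and the value of $\delta$ are chosen arbitrarily for illustration):

```python
import numpy as np

def gaussian_psf(size, delta):
    """Sampled Gaussian PSF of Equation (2) on a size x size grid."""
    half = size // 2
    x = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(x, x)
    return np.exp(-(xx**2 + yy**2) / (2 * delta**2)) / (2 * np.pi * delta**2)

h = gaussian_psf(51, delta=3.0)
print(h.sum())  # close to 1, the discrete analogue of Equation (3)
```

For $\delta$ of a few pixels the discrete sum agrees with the continuous integral to high accuracy, since the grid covers many standard deviations of the Gaussian.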

According to the conclusions of many experiments [2], $\delta$ is proportional to R, i.e.

Figure 1. Sketch for imaging of optical imaging system. L: Lens, f: Focal Length, R: Blur circle Radius, D: Aperture Diameter, v: Focusing image distance.

$\delta =\alpha R$ For $\alpha >0$ (4)

where $\alpha$ is a constant of proportionality characteristic of the measured camera. In most practical cases, $\alpha =1/\sqrt{2}$ is a good approximation.

Therefore

$\delta =R/\sqrt{2}$ (5)

The imaging system can be regarded as a linear system, so the imaging process can be expressed [6] as

${I}_{i}\left({x}_{i},{y}_{i}\right)=\iint {I}_{g}\left({x}_{0},{y}_{0}\right)\cdot h\left({x}_{i}-{x}_{0},{y}_{i}-{y}_{0}\right)d{x}_{0}d{y}_{0}={I}_{g}\left(x,y\right)\ast h\left(x,y\right)$ (6)

where ${I}_{g}$ is the energy distribution of the image through the ideal optical system and ${I}_{i}$ is the energy distribution of the image through the real optical system. Applying the Fourier transform to Equation (6), we have

${G}_{i}\left(\xi ,\eta \right)={G}_{g}\left(\xi ,\eta \right)\cdot H\left(\xi ,\eta \right)={G}_{g}\left(\xi ,\eta \right)\cdot \mathrm{exp}\left[-2{\pi }^{2}{\delta }^{2}\left({\xi }^{2}+{\eta }^{2}\right)\right]$ (7)

where ${G}_{g}$ is the frequency distribution of the image through the ideal optical system and ${G}_{i}$ is the frequency distribution of the image through the real optical system. Because $H\left(\xi ,\eta \right)$ is circularly symmetric, the real radial frequency distribution can be expressed as

$D\left(r\right)=1/\left(2\pi r\right)\underset{0}{\overset{2\pi }{\int }}|{G}_{i}\left(r,\theta \right)|d\theta =1/\left(2\pi r\right)\mathrm{exp}\left(-2{\pi }^{2}{\delta }^{2}{r}^{2}\right)\underset{0}{\overset{2\pi }{\int }}|{G}_{g}\left(r,\theta \right)|d\theta$ (8)

where r is the radius in the radial frequency distribution of the image. From the radial frequency distribution of the blurred image, we obtain the defocus distance of the camera as follows. Using two blurred images taken with different focusing image distances, ${\nu }_{1}$ and ${\nu }_{2}$, we evaluate the radial frequency distribution of each image at r = a. Taking the ratio of the two, we have

${D}_{1}\left(a\right)/{D}_{2}\left(a\right)=\mathrm{exp}\left(-2{\pi }^{2}{a}^{2}{\delta }_{1}^{2}\right)/\mathrm{exp}\left(-2{\pi }^{2}{a}^{2}{\delta }_{2}^{2}\right)$ (9)

Equation (9) appears to require calculating the full frequency distribution; in fact, we only need to calculate $D\left(r\right)$,

$D\left(r\right)=\frac{1}{360}\underset{\theta =0}{\overset{359}{\sum }}|\underset{m=0}{\overset{M-1}{\sum }}\underset{n=0}{\overset{N-1}{\sum }}f\left(m,n\right)\mathrm{exp}\left(-j2\pi mr\mathrm{cos}\theta /M\right)\mathrm{exp}\left(-j2\pi nr\mathrm{sin}\theta /N\right)|$ (10)

where $f\left(m,n\right)$ is the energy distribution of the image, m and n are the pixel indices along the X-axis and Y-axis, and M and N are the image sizes in the x and y directions. From Equations (9) and (10), the quantity ${\delta }_{1}^{2}-{\delta }_{2}^{2}$ can be obtained. The distance d is defined by
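Equation (10) can be evaluated directly, without computing the full two-dimensional spectrum: the 2-D DFT of the image is sampled only along a circle of radius r and its magnitude is averaged over 360 directions. A minimal Python sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def radial_spectrum(img, r):
    """D(r) of Equation (10): magnitude of the 2-D DFT of img sampled
    along a circle of radius r, averaged over 360 directions."""
    M, N = img.shape
    m = np.arange(M)[:, None]  # pixel index along x
    n = np.arange(N)[None, :]  # pixel index along y
    total = 0.0
    for theta in np.deg2rad(np.arange(360)):
        u, v = r * np.cos(theta), r * np.sin(theta)  # frequency sample on the circle
        kernel = np.exp(-2j * np.pi * (m * u / M + n * v / N))
        total += abs((img * kernel).sum())
    return total / 360.0
```

As a sanity check, for an image containing a single unit pixel at the origin, the DFT magnitude is 1 at every frequency, so `radial_spectrum` returns 1 for any r.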

$d={v}_{2}-{v}_{1}$ (11)

Using Equations (1), (5), (9) and (11), we obtain the image distance ${\nu }_{1}$

${v}_{1}=-\left({\delta }_{1}^{2}-{\delta }_{2}^{2}\right)4{f}^{2}/\left({D}^{2}d\right)+f-d/2$ (12)

where D is the aperture diameter.
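Putting Equations (9), (11) and (12) together: given the measured spectrum ratio ${D}_{1}\left(a\right)/{D}_{2}\left(a\right)$, we first solve Equation (9) for ${\delta }_{1}^{2}-{\delta }_{2}^{2}$ and then substitute into Equation (12). A sketch in Python (the function name and argument names are ours):

```python
import math

def defocus_v1(ratio, a, f, D, d):
    """Recover the image distance v1 from the ratio D1(a)/D2(a) of the
    radial spectra of two images taken at image distances v1 and v2 = v1 + d."""
    # Equation (9): ratio = exp(-2*pi^2*a^2*(delta1^2 - delta2^2))
    delta_sq_diff = -math.log(ratio) / (2 * math.pi**2 * a**2)
    # Equation (12)
    return -delta_sq_diff * 4 * f**2 / (D**2 * d) + f - d / 2
```

A round trip through Equations (1) and (5) (computing ${\delta }_{1}$ and ${\delta }_{2}$ from assumed image distances and forming the ratio) reproduces the assumed ${\nu }_{1}$ exactly.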

3. Programs

3.1. The Program of DFD Method

An area array camera is used as an example to illustrate the process of fixing the focus of the camera by the DFD method. The setup for measuring the camera is shown in Figure 2. The camera is mounted on a tripod with special tooling, and a collimator is placed in front of it. A target is placed at the infinity focal plane of the collimator; its shape is shown in Figure 3. The target is divided into 60 equal sectors of alternating black and white stripes at 6-degree intervals.

The optical axes of the camera and the collimator should coincide so that the image of the target stays on the image surface of the CMOS at all times. The target position of the collimator is adjusted to simulate defocus. The focal position of the collimator is taken as the origin of coordinates in the experiment: with the target at the focal position of the collimator, motion of the target away from the camera is defined as positive and motion toward the camera as negative. Defocused images are acquired as the target is moved from −2 mm to 3 mm in 0.5 mm steps. The image signal of the camera is acquired from the video output of the imaging system.

Figure 2. Program of the DFD measurement.

Figure 3. Fan-shaped target.

3.2. Results

Figures 4-9 show the images of the target, magnified 8 times by the optical system and defocused at different positions, with the high-frequency content concentrated at the center.

According to the results shown in Table 1, the optimal position of the focal plane of the camera is conjugate with the target at 1 mm. We need to adjust the focal plane so that it is conjugate with the target at 0 mm; therefore the thickness of the gaskets should be reduced by 0.02 mm.

Comparing the defocus distances obtained by the DFD method with those obtained by the traditional method, the DFD results differ from the traditional results by less than 0.05 mm, which satisfies the accuracy requirements.

The radial frequency distribution graphs are shown in Figure 10. The frequency value at the target position of 1 mm is the largest in the graph, so this target position is the location of the optimal focal plane, which is consistent with the result obtained by the traditional method. This confirms the correctness of the algorithm.
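The cross-check in the paragraph above amounts to picking the scan position whose image has the largest radial-spectrum value. As a trivial sketch (the spectrum values below are made up for illustration, not measured data):

```python
# Target positions of the scan (mm) and the D(r) value at a fixed radius r
# for each image; the spectrum values are illustrative only.
positions = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
spectra = [0.11, 0.15, 0.22, 0.35, 0.52, 0.71, 0.90, 0.64, 0.41]
best = positions[spectra.index(max(spectra))]
print(best)  # 1.0: the optimal focal plane is conjugate with the target at 1 mm
```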

According to the results, the measurement accuracy is within the camera's depth of focus, so the result given by the DFD method is reliable and effective.

Figure 4. The target at 2 mm.

Figure 5. The target at 1.5 mm.

Figure 6. The target at 1 mm.

Figure 7. The target at 0.5 mm.

Figure 8. The target at 0 mm.

Figure 9. The target at −0.5 mm.

Table 1. Comparing the results of defocusing distance by traditional method and by DFD method.

4. Conclusion

Based on the focusing method described above, the DFD method is much simpler and more feasible to operate than the traditional method. The DFD method can improve measurement efficiency and can be broadly applied to area array cameras.

Cite this paper: Ma, L., Li, C., Wang, D., Zhao, Y., Jin, Z. and Liu, Z. (2021) Study on Focusing of Area Array Camera by Using Frequency of Images. Optics and Photonics Journal, 11, 394-401. doi: 10.4236/opj.2021.118028.
References

[1]   Pei, X., Feng, H., Li, Q. and Xu, Z. (2003) A Depth from Defocus Auto-Focusing Method Based on Frequency Analysis. Opto-Electronic Engineering, 30, 62-65.

[2]   Subbarao, M. and Surya, G. (1994) Depth from Defocus: A Spatial Domain Approach. International Journal of Computer Vision, 13, 271-294. https://doi.org/10.1007/BF02028349

[3]   Kim, S.K., Paik, S.R. and Park, J.K. (1998) Simultaneous Out-of-Focus Blur Estimation and Restoration for Digital Auto-Focusing System. IEEE Transactions on Consumer Electronics, 44, 1071-1075. https://doi.org/10.1109/30.713236

[4]   Subbarao, M. and Wei, T. (1992) Depth from Defocus and Rapid Autofocusing: A Practical Approach. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Champaign, Illinois, June 1992. https://doi.org/10.1109/CVPR.1992.223176

[5]   Subbarao, M. and Surya, G. (1992) Application of Spatial-Domain Convolution/Deconvolution Transform for Determining Distance from Image Defocus. Proceedings of SPIE Conference, OE/TECHNOLOGY’92, Boston, November 1992, Vol. 1822, 159-167.

[6]   Lai, S. and Fu, C. (1992) A Generalized Depth Estimation Algorithm with a Single Image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 405-411. https://doi.org/10.1109/34.126803
