Improving the Autoregressive Modeling Method in Random Noise Suppression of GPR Data Using Undecimated Discrete Wavelet Transform


1. Introduction

Geophysical methods are used to study the characteristics of geological structures: to distinguish their layers, find the elastic coefficients of each layer, evaluate the dynamic parameters of surface layers, investigate the behavior of surface layers during earthquakes for construction design, and locate resources such as hydrocarbons, metal ores, and groundwater.

Ground penetrating radar (GPR) is an active-source geophysical tool which uses high-frequency electromagnetic waves to study near-surface layers. It was first used in 1956 and has seen increasing use since 1970. GPR instruments have been commercially available since the 1980s and have become popular over the past decade. The GPR method beams very-high-frequency (12.5 - 2300 MHz) electromagnetic waves into the Earth, which are reflected upon contact with various underground materials and the relatively distinct boundaries between them. Such radar reflections arise from differences in electrical properties (dielectric constant and conductivity) among the materials through which the electromagnetic waves pass. The electromagnetic waves from the GPR pass through materials with low electrical conductivity, but are strongly absorbed by conductive components such as clay, organic acidic soils, and materials saturated with salt water [1] .

The GPR resolution varies with depth, from centimeters at a few meters below the surface to meters at a hundred meters of depth. It also depends on the contrast in electrical properties between the target and its surroundings, the geometry of the target, the applied bandwidth, etc., and can be high enough to distinguish subtle layering in shallow structures or buried objects [2] .

The wavelet transform is widely used in signal processing, especially in image and signal compression and in noise suppression, and it allows an accurate understanding of signal properties. Contrary to the Fourier transform, where each coefficient involves all times, so that a phase is necessary to isolate transient events by canceling or amplifying over large periods, wavelet coefficients are already local and easy to interpret. The wavelet transform separates overlapping signal components in space and time through adjustable and adaptive wavelets. The multitude of available discrete wavelets makes them ideal for adaptive systems, such as digital computers, that adjust to the signal [3] .

Autoregression is based on fitting a suitable model to data, which yields more information than the original dataset alone. It is important to correctly estimate the autoregressive (AR) model parameters, as appropriate parameters are essential in the iterative parametric methods used to estimate the spectra of random signals. Since autoregressive models are easy to use, useful techniques have been introduced to estimate appropriate model parameters. This method can also be used to study the effects of parameters, or sets of variables, on each other [4] .

2. Method

The GPR signal (commonly including noise) in the receiver, X(t), can be represented as (1):

$X\left(t\right)=s\left(t\right)\ast w\left(t\right)+n\left(t\right)$ (1)

where the reflectivity series, s(t), is convolved with the source wavelet, w(t), and the noise, n(t), is added and has to be removed. Since it is impossible to remove all the noise, noise suppression aims to make X(t) as similar to s(t) as possible [5] .
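As a small numerical illustration of the signal model in (1), the sketch below builds a noisy trace from a sparse reflectivity series. The reflector positions, the bell-shaped stand-in wavelet, and the noise level are all illustrative assumptions, not values from this study.

```python
import numpy as np

# Sketch of X(t) = s(t) * w(t) + n(t), Eq. (1).
rng = np.random.default_rng(0)

s = np.zeros(200)                        # sparse reflectivity series s(t)
s[[40, 90, 150]] = [1.0, -0.7, 0.5]

t = np.arange(-20, 21)
w = np.exp(-0.05 * t**2)                 # simple bell-shaped stand-in for w(t)

clean = np.convolve(s, w, mode="same")   # s(t) * w(t)
noise = 0.1 * rng.standard_normal(clean.size)
x = clean + noise                        # recorded noise-contaminated trace X(t)
```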

In this study, we use the new automated noise suppression technique in wavelet and f-x domains to remove the random noise in the GPR data. First, we discuss autoregression and then introduce the f-x and wavelet domains and the necessary concepts in noise suppression. Finally, we will apply these methods to synthetic and real GPR data and compare the results.

3. Wavelet Transform

Wavelets are wave-shaped mathematical functions with zero mean and finite duration, as opposed to sine functions, which theoretically extend to infinity. Wavelet transforms and series have proved efficient in the analysis of a wide range of signals and phenomena. (2) formulates the wavelet expansion

$f\left(t\right)=\underset{k}{{\displaystyle \sum}}\underset{j}{{\displaystyle \sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{a}_{j,k}{\Psi}_{j,k}\left(t\right)$ (2)

where j and k are integer indices and ${\Psi}_{j,k}\left(t\right)$ are the orthogonal wavelet expansion functions [3] . The magnitudes of the ${a}_{j,k}$ coefficients in (2) decay rapidly for a wide range of signals. This property, sometimes called the unconditional basis property, explains why wavelets are useful in compressing images and signals, and in noise suppression.

3.1. Discrete Wavelet Transform

Similar to the Fourier transform, the wavelet transform has both continuous and discrete forms. A number of issues, namely redundancy, an infinite number of wavelets, and the lack of analytical solutions, make direct use of the continuous wavelet transform difficult. The discrete wavelet transform was introduced to overcome these issues, as it uses orthogonal wavelets (no redundancy) obtained through dilation and translation of an appropriate mother wavelet.

Linearly decomposing signals or functions in the following form allows for better analysis

$f\left(t\right)={\displaystyle \underset{l}{\sum}{a}_{l}{\psi}_{l}\left(t\right)}$ (3)

In the wavelet domain, ${\psi}_{l}\left(t\right)$ is generated from a mother wavelet through dilation and translation, as defined in (4).

${\psi}_{j,k}\left(t\right)={2}^{\frac{j}{2}}\psi \left({2}^{j}t-k\right)$ (4)

An exact framework for studying the wavelet transform is multiresolution analysis, defined as a nested sequence of subspaces of L2(R) in Hilbert space. In order to use multiresolution analysis, we define a scale function, similar to the mother wavelet, as [3]

${\varphi}_{j,k}\left(t\right)={2}^{\frac{j}{2}}\varphi \left({2}^{j}t-k\right)$ (5)

We then define any f (t) as [3]

$f\left(t\right)={\displaystyle \underset{k}{\sum}{c}_{{j}_{0}}}\left(k\right){\varphi}_{{j}_{0},k}\left(t\right)+{\displaystyle \underset{k}{\sum}{\displaystyle \underset{j={j}_{0}}{\overset{\infty}{\sum}}{d}_{j}\left(k\right){\psi}_{j,k}\left(t\right)}}$ (6)

where ${c}_{{j}_{0}}\left(k\right)$ and ${d}_{j}\left(k\right)$ are the discrete wavelet transform coefficients of $f\left(t\right)$ , and ${j}_{0}$ represents the coarsest scale, whose space is spanned by the ${\varphi}_{{j}_{0},k}\left(t\right)$ elements. For a high enough resolution, signal samples are very similar to the scale coefficients. The discrete wavelet transform is similar to the Fourier series but with more flexibility and efficiency, and just like the Fourier transform, it is useful in representing periodic signals. In contrast with Fourier methods, however, it can also handle non-periodic signals with excellent results.
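A one-level orthogonal decomposition in the spirit of (6) can be written out by hand. The sketch below uses the Haar wavelet, which is an assumed choice here (the text does not fix a particular mother wavelet); the signal is split into scale (approximation) and wavelet (detail) coefficients that represent it exactly.

```python
import numpy as np

# One level of an orthogonal (Haar) discrete wavelet transform.
def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    cA = (even + odd) / np.sqrt(2)   # scale coefficients c(k)
    cD = (even - odd) / np.sqrt(2)   # wavelet (detail) coefficients d(k)
    return cA, cD

def haar_idwt(cA, cD):
    # Inverse transform: interleave the reconstructed even/odd samples.
    x = np.empty(2 * cA.size)
    x[0::2] = (cA + cD) / np.sqrt(2)
    x[1::2] = (cA - cD) / np.sqrt(2)
    return x

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
cA, cD = haar_dwt(sig)
rec = haar_idwt(cA, cD)              # perfect reconstruction
```

Because the Haar basis is orthonormal, the transform also preserves energy, which is why thresholding small coefficients removes little of the signal.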

3.2. Undecimated Discrete Wavelet Transform

The undecimated discrete wavelet transform is not as popular as its decimated counterpart. Figure 1 shows the simplest filter bank of the undecimated discrete wavelet transform. The left-hand side of the diagram in Figure 1 is called the analysis section, and the right-hand side the reconstruction section.

In Figure 1, the signal, S, is first filtered by a high-pass decomposition filter, H, to create the cD1 coefficients, which are then filtered by a high-pass reconstruction filter, H’, to generate the details (D1). S is also decomposed by a low-pass filter, L, to produce the cA1 coefficients, which are finally filtered by a low-pass reconstruction filter, L’, to create the approximation (A1).
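A minimal sketch of the Figure 1 filter bank without decimation is shown below, using Haar-style filters (an assumed choice; the text does not fix the filters). Both branches keep the full signal length, and the branch outputs A1 and D1 sum back to S, illustrating the trivial reconstruction step for this filter pair.

```python
import numpy as np

# One-level undecimated (stationary) Haar-style filter bank.
def undecimated_haar(s):
    s = np.asarray(s, dtype=float)
    prev = np.roll(s, 1)              # periodic (circular) boundary handling
    cA1 = (s + prev) / 2.0            # low-pass branch, no downsampling
    cD1 = (s - prev) / 2.0            # high-pass branch, no downsampling
    return cA1, cD1

s = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 4.0])
A1, D1 = undecimated_haar(s)
# For this choice of filters the synthesis step is trivial: A1 + D1 == S.
```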

3.3. Random Noise Suppression by Using the Autoregressive Vector Operator in the Wavelet Domain

In order to increase the signal to noise ratio (S/N) of a multi-component signal, we first consider a vector operator, ${\stackrel{^}{A}}_{j}$ , computed by autoregression of the noisy data; the forward noise-suppression estimate then follows as [13]

Figure 1. Filter bank of the first order undecimated discrete wavelet transform [6] .

${\stackrel{^}{g}}_{k}^{f}=\underset{j=1}{\overset{M}{{\displaystyle \sum}}}{\stackrel{^}{A}}_{j}{g}_{k-j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=M+1,\cdots ,N$ (7)

Similarly, the backward noise suppression estimate is given by

${\stackrel{^}{g}}_{k}^{b}=\underset{j=1}{\overset{M}{{\displaystyle \sum}}}{\stackrel{^}{A}}_{j}^{*}{g}_{k+j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=1,\cdots ,N-M$ (8)

where ${\stackrel{^}{A}}_{j}^{*}$ represents the complex conjugate of the autoregressive operator. The final value of the denoised data, obtained by averaging the forward and backward estimates, is

${\stackrel{^}{g}}_{k}^{t}=\frac{{\stackrel{^}{g}}_{k}^{f}+{\stackrel{^}{g}}_{k}^{b}}{2}$ (9)
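A rough sketch of Equations (7)-(9) follows, simplified to a scalar, real-valued trace (the operator in the text is a vector one, so this is an illustration, not the paper's implementation). The AR operator is fit by least squares, each sample is predicted forward and backward, and the two predictions are averaged; the order M, the test signal, and the noise level are assumed example values.

```python
import numpy as np

def ar_denoise(g, M=4):
    """Scalar sketch of the forward/backward AR estimates, Eqs. (7)-(9)."""
    g = np.asarray(g, dtype=float)
    N = g.size
    # Least-squares fit of A_j in g_k ~= sum_j A_j g_{k-j}.
    G = np.column_stack([g[M - j:N - j] for j in range(1, M + 1)])
    A, *_ = np.linalg.lstsq(G, g[M:N], rcond=None)

    fwd = np.full(N, np.nan)
    bwd = np.full(N, np.nan)
    for k in range(M, N):                          # Eq. (7), forward
        fwd[k] = np.dot(A, g[k - M:k][::-1])
    for k in range(0, N - M):                      # Eq. (8), backward (real data)
        bwd[k] = np.dot(A, g[k + 1:k + M + 1])
    both = ~np.isnan(fwd) & ~np.isnan(bwd)
    return np.where(both, (fwd + bwd) / 2.0,       # Eq. (9), average
                    np.where(np.isnan(fwd), bwd, fwd))

rng = np.random.default_rng(1)
t = np.arange(300)
clean = np.sin(2 * np.pi * t / 25)                 # predictable event
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = ar_denoise(noisy, M=4)
```

At the edges, only one of the two estimates exists, so the sketch falls back to whichever prediction is available.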

First, the noisy signal is transformed into the wavelet domain, where the AR filtering is applied to the wavelet coefficients. After filtering, the inverse wavelet transform is applied, which generates the denoised version of the original signal.

4. The f-x Domain

4.1. General Basics of the Predictive Filter in the f-x Domain

Prediction in the f-x domain is a successful method for removing random noise from seismic data [7] , as linear events can be fully predicted using a Wiener prediction filter [7] . A good prediction filter may also be used to interpolate missing data in the absence of wide gaps; for large gaps, an average of filters should be used. The main problem is obtaining an autocorrelation function for sparse data. Canales [7] used the Burg technique, which handles missing data in the same well-known way as it handles the missing end points.

In general, a signal in the time domain becomes a complex-valued signal in the frequency domain, from which information such as the phase, amplitude, and energy spectra can be extracted. (10) gives the energy distribution as a function of frequency [8] .

$E=\frac{1}{2\text{\pi}}{\displaystyle \underset{-\infty}{\overset{+\infty}{\int}}{\left|x\left(f\right)\right|}^{2}\text{d}f}$ (10)

Since linear events in the input signal are represented in the f-x domain as sinusoids in position, we discuss the prediction filter in this domain. This method assumes that the traces are composed of delayed impulses, as shown in Figure 2.

The f-x prediction filter is widely applicable to noise reduction. In this method, we assume the trends existing in the data to be linear; if they are not, we can divide the seismic section into shorter windows that satisfy the linearity condition. Let us assume a seismic trace, $U\left(x,t\right)$ , to be a train of impulses with various amplitudes [7] .

$U\left(x,t\right)={\displaystyle \underset{j=1}{\overset{N}{\sum}}{A}_{j}\delta \left(t-{g}_{j}\left(x\right)\right)}$ (11)

where t is time, x is the lateral position, A_{j} is the amplitude of the j^{th} impulse, and g_{j}(x) is the delay function containing the shapes of the events. By applying a

Figure 2. Schematics of the delayed traces [7] .

Fourier transform with respect to time to (11), we have

$U\left(x,\omega \right)={\displaystyle \underset{j=1}{\overset{N}{\sum}}{A}_{j}{\text{e}}^{-i\omega {g}_{j}\left(x\right)}}$ (12)

where ω is the angular frequency. Since exponential functions can be rewritten as sines and cosines, the seismic traces in (12) are in fact a set of sines and cosines as functions of ω and x. Because f-x filters only predict linear data, the events in question have to be linear with respect to x, and therefore the g_{j}(x) functions must be linear [7] .

By assuming linear delay functions, g_{j}(x) = b_{j}x, U(x, ω) can be written as (13) in the frequency domain.

$U\left(x,\omega \right)={\displaystyle \underset{j=1}{\overset{N}{\sum}}{C}_{j}{\text{e}}^{-i\omega {b}_{j}x}}$ (13)

where C_{j} is a complex constant determined by the source power and reflection coefficients, and b_{j} is the slope of the linear event. (13) shows that linearity results in a U(x, ω) which is a perfect sinusoidal function of x, and hence periodic and predictable. Therefore, the signal is a predictable exponential function of x in the f-x domain [9] .
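The predictability claim behind (13) can be checked numerically: at a fixed angular frequency ω, a single linear event is the complex sinusoid U(x) = C·exp(-iωbx), so a one-tap complex prediction filter reproduces each trace from its neighbour exactly. The values of C, b, ω, and the trace spacing below are assumed for illustration.

```python
import numpy as np

# One linear event at a fixed frequency: U(x) = C * exp(-i*w*b*x), Eq. (13).
w, b, dx = 2 * np.pi * 50.0, 1e-3, 1.0     # angular frequency, slope, spacing
x = np.arange(32) * dx
U = (0.8 + 0.3j) * np.exp(-1j * w * b * x)

# Least-squares fit of the one-tap prediction filter a with U[k] ~= a * U[k-1].
a = np.vdot(U[:-1], U[1:]) / np.vdot(U[:-1], U[:-1])
pred = a * U[:-1]                           # predicted traces 2..N
```

The fitted tap equals exp(-iωb·dx), the constant phase rotation from one trace to the next, which is exactly why a linear event is fully predictable in f-x.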

4.2. Wiener Filter

The Wiener filter is an efficient, stable, linear filter which is applied to noisy images. It requires the assumption that both the signal and the noise are second-order stationary processes; accordingly, the noise is assumed to be zero-mean. The method is based on minimizing the sum of squared differences between the desired and the actual output signals. The Wiener filter cannot reconstruct frequency components that are contaminated with noise and simply suppresses them, and it is slow in its direct form. To improve the filter’s speed, we can apply an inverse FFT to obtain the impulse response [10] .
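A minimal frequency-domain Wiener filter sketch follows. With known signal and noise power spectra (here the signal spectrum is taken as an oracle, an assumption made purely for illustration), the gain is H(f) = Ps(f)/(Ps(f) + Pn(f)); components buried in noise are suppressed rather than recovered, as noted above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
t = np.arange(n)
clean = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
noisy = clean + 0.5 * rng.standard_normal(n)

Ps = np.abs(np.fft.rfft(clean)) ** 2     # signal power spectrum (oracle)
Pn = np.full_like(Ps, 0.25 * n)          # flat power of white noise, var 0.25
H = Ps / (Ps + Pn)                       # Wiener gain, 0 <= H <= 1
denoised = np.fft.irfft(H * np.fft.rfft(noisy), n)
```

Bins where the signal power is negligible get a gain near zero, so the white noise spread across the spectrum is almost entirely removed while the two sinusoidal components pass nearly untouched.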

5. Autoregressive Vector

5.1. Definition

Classic models of time series are divided into stationary and nonstationary. Autoregression (AR) is a class of classic stationary time series models which we discuss in this study.

A limitation of the models considered so far is that they impose a one-way relationship: the predicted variables are affected by the predictor variables, but not vice versa. In many cases, however, the variables affect each other, and such two-way relationships are allowed in the framework of autoregressive vectors, where all variables are treated symmetrically [11] .

5.2. Mathematical Framework

Autoregression

In autoregressive models, a variable is predicted using a linear combination of its previous values. The term “autoregression” refers to regressing a variable against itself [12] . An autoregressive model is defined as

${y}_{t}=c+{a}_{1}{y}_{t-1}+\cdots +{a}_{p}{y}_{t-p}+{\epsilon}_{t}$ (14)

where c is a constant, ε_{t} is white noise (with zero mean and variance ${\sigma}_{\epsilon}^{2}$ ), and the a_{i}’s are the model parameters. In this form, y_{t} is called a p-th order autoregressive model, or AR(p). The structure of the first-order autoregressive model, AR(1), shown in (15), is simple, useful, and applicable to a wide range of problems [6] .

${y}_{t}={a}_{1}{y}_{t-1}+{\epsilon}_{t}$ (15)

where we assume:

1) the residuals have zero mean:

$E\left[{\epsilon}_{i,t}\right]=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}i=1,2$ (16)

2) the errors are not autocorrelated:

$E\left[{\epsilon}_{i,t}{\epsilon}_{j,\tau}\right]=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}t\ne \tau $ (17)

Autoregressive models are remarkably flexible in handling a wide range of time series. Figure 3 shows the main steps of autoregressive vector analysis.
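The AR(1) model in (15) and the assumptions in (16)-(17) can be checked numerically: simulating y_t = a1·y_{t-1} + ε_t and estimating a1 from the lag-1 autocorrelation (the Yule-Walker estimate) recovers the parameter. The value a1 = 0.7 and the series length are assumed example values.

```python
import numpy as np

rng = np.random.default_rng(3)
a1, n = 0.7, 50_000
eps = rng.standard_normal(n)            # white noise eps_t, zero mean

y = np.zeros(n)
for t in range(1, n):                   # Eq. (15): y_t = a1*y_{t-1} + eps_t
    y[t] = a1 * y[t - 1] + eps[t]

# Yule-Walker: for AR(1), a1 equals the lag-1 autocorrelation.
a1_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
```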

5.3. Interpreting Vector Autoregressive Models

Autoregressive models do not allow us to comment on causal relationships, especially since they are generally designed to process unknown time series. Causal interpretations require structural economic models. However, autoregression does allow the interactions between the variables to be interpreted [4] .

In summary, the advantages of the autoregressive vector are:

1) Forecasting a set of related variables where no explicit interpretation is required.

2) Testing whether or not one variable is useful in predicting another.

3) Impulse response analysis, where the response of one variable to an abrupt but temporary change in another variable is analyzed.

4) Forecast error variance decomposition, where part of the forecast variance of one variable is attributed to the effects of other variables [4] .

Figure 3. Flowchart for the main steps in the autoregressive vector analysis [11] .

5.4. Random Noise Reduction Using Autoregressive Vector Operator

In order to increase the signal to noise ratio (S/N) of a multicomponent signal, we first calculate the autoregressive vector operator, ${\stackrel{^}{A}}_{j}$ , for the noisy data. Then, the forward estimate of the denoised data is given as [13]

${\stackrel{^}{g}}_{k}^{f}=\underset{j=1}{\overset{M}{{\displaystyle \sum}}}{\stackrel{^}{A}}_{j}{g}_{k-j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=M+1,\cdots ,N$ (18)

Similarly, the backward estimate of the denoised data is

${\stackrel{^}{g}}_{k}^{b}=\underset{j=1}{\overset{M}{{\displaystyle \sum}}}{\stackrel{^}{A}}_{j}^{*}{g}_{k+j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}k=1,\cdots ,N-M$ (19)

${\stackrel{^}{A}}_{j}^{*}$ is the complex conjugate of the autoregressive vector operator. The final estimate of the denoised data is the average of the forward and backward estimates:

${\stackrel{^}{g}}_{k}^{t}=\frac{{\stackrel{^}{g}}_{k}^{f}+{\stackrel{^}{g}}_{k}^{b}}{2}$ (20)

6. Applying Autoregressive Filters to GPR Data in the f-x and Wavelet Domains

6.1. Applying the Method on Synthetic GPR Data

As we know, GPR is an instrument that uses electromagnetic waves, through transmitter and receiver antennas, to determine the depth and trend of anomalies. The waves emitted from the transmitter antenna arrive at the target, are reflected, and are received by the receiver; these steps can be simulated to produce synthetic data. Since we aim to study random noise, it has to be added to the resulting section. In order to generate synthetic data, we first assume an Earth model with arbitrary coefficients. Due to the similarity of electromagnetic and seismic waves, modeling synthetic GPR and seismic data is similar. We have used the Ricker wavelet, which is formulated as [14]

$\omega \left(t\right)=\left(1-2{\text{\pi}}^{2}{f}^{2}{t}^{2}\right)\mathrm{exp}\left(-{\text{\pi}}^{2}{f}^{2}{t}^{2}\right)$ (21)

where ω(t) is the Ricker wavelet, with t and f the time and the central frequency of the wavelet, respectively. The Ricker wavelet is symmetric in time and has zero mean ( $\underset{-\infty}{\overset{+\infty}{\int}}\omega \left(\tau \right)\text{d}\tau =0$ ).

The synthetic signal can be formulated as

$X\left(t\right)=S\left(t\right)\ast \omega \left(t\right)+n\left(t\right)$ (22)

where the reflectivity series, S(t), is convolved with the Ricker wavelet, ω(t), and n(t) is the random noise added to the input signal. Figure 4 compares the application of the autoregressive and wavelet-domain autoregressive filters to a noisy section created in the MATLAB environment.
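The synthetic-data step can be sketched as below: the Ricker wavelet of (21) is built and convolved with a sparse reflectivity series per (22), with random noise added. The 250 MHz centre frequency, the sampling interval, and the reflector positions are assumed example values, not the parameters used in this study.

```python
import numpy as np

def ricker(f, dt, half_len):
    """Ricker wavelet, Eq. (21): (1 - 2*pi^2*f^2*t^2) * exp(-pi^2*f^2*t^2)."""
    t = np.arange(-half_len, half_len + dt, dt)
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a), t

f = 250e6                                # assumed 250 MHz centre frequency
dt = 0.1e-9                              # assumed 0.1 ns sampling interval
w, tw = ricker(f, dt, 10e-9)

s = np.zeros(500)                        # reflectivity series S(t)
s[[120, 260, 400]] = [1.0, -0.6, 0.4]
rng = np.random.default_rng(4)
x = np.convolve(s, w, mode="same") + 0.05 * rng.standard_normal(s.size)  # Eq. (22)
```

A quick sanity check on the wavelet: it peaks at 1 at t = 0 and its samples sum to essentially zero, matching the zero-mean property quoted above.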

To study the efficiency of the autoregressive filter in the wavelet domain, we created a synthetic GPR section with a sampling rate of 40 ns and added Gaussian white noise, as shown in Figure 4(b). In Figure 4(b), the layers, especially the narrower ones, are partly obscured, and the noise has made the boundaries so vague that the bulges and trends of the layers are damaged and noisy throughout the section.

6.2. Applying the Autoregressive Filter on GPR Data in the f-x Domain

We apply the autoregressive filter in the f-x space to denoise the section as

Figure 4. (a) Synthetic GPR section; (b) noisy section; (c) filtered by autoregression; (d) filtered by autoregression in the wavelet domain.

shown in Figure 4(c). As we can see in Figure 4(c), after applying the filter, the noise is removed and the layers are more visible; however, many of the boundaries have become murky. Structures with low amplitude are most affected by the denoising, as opposed to the high-amplitude structures, which remain more visible; therefore, the reconstruction of the layers has not been efficient.

6.3. Applying the AR Filter in the Wavelet Domain

Here we have used the undecimated discrete wavelet transform in applying the autoregressive filter to the noisy section in the wavelet domain. We note that the applied filter is linear, which is a great advantage in noise reduction due to the linearity of the wavelet space.

The section denoised by this method is shown in Figure 4(d) and looks promising, since the layer boundaries are distinguishable and there is a logical, smooth trend throughout the section. Both the low- and high-amplitude structures are better defined compared to the autoregressive filter in the f-x domain. We also note that the layer trends are efficiently reconstructed.

6.4. Applying the Method on Real Data

Here, we apply the autoregressive model to real data, first in the f-x and then in the wavelet domain, in order to remove random noise, and we then compare the results. We show traces #450 onward in Figure 6 for better comparison.

We first suppress the noise in the f-x domain using the autoregressive filter on the noisy real data shown in Figure 5(a), where the boundaries and traces (especially after trace #450) are difficult to distinguish and the bulges fade into the noise. As shown in Figure 5(b), the autoregressive filter in the f-x domain has properly reduced the noise, and the layer trends are visible.

As mentioned before, the noise is random and does not have a specific source. As we can see in Figure 5(a), the right (and especially the lower-right) portion of the section is very noisy, as noise has covered all the trends and layers. Figure 5(c) shows the denoised section after applying the autoregressive filter in the wavelet domain. As we can see in Figure 5(c), the trends are well defined, especially in the lower-right corner of the section, and the previously vague parts can now be distinguished to a much greater extent. The method has therefore successfully reduced the noise.

6.5. Comparing the Application of AR in the Wavelet and f-x Domains

Considering the previous sections on the wavelet and f-x domains, as well as the better performance of the filter in the wavelet domain, here we compare denoising on the synthetic and real data in both domains. By comparing Figure 5(b) and Figure 5(c), we notice the better performance of the filter in the wavelet domain. Especially at the later arrival times, which correspond to greater depths, the noise reduction is more evident, and the boundaries are better represented and more distinguishable. Also, by comparing

Figure 5. The GPR data section; (a) raw data; (b) after applying autoregression in the f-x domain; (c) after applying autoregression in the wavelet domain.

Figure 6. The GPR data section from trace 450 to the end; (a) raw data; (b) after applying autoregression in the f-x domain; (c) after applying autoregression in the wavelet domain.

Figure 6(b) and Figure 6(c), we can see that the wavelet domain has again performed much better in reducing the noise and retrieving the real signal (the layer trends are more visible). Overall, the autoregressive filter in the wavelet domain has done a better job of retrieving the signal due to the linearity of the wavelet domain.

7. Conclusions

As discussed above, various factors such as telephone networks, power lines, utility poles, etc. contaminate GPR data. Since the goal of this study was to increase the signal to noise ratio, a method was chosen that damages the signal as little as possible while reducing the noise. In this study, the noise suppression procedure was applied to both synthetic and real GPR data in the f-x and wavelet domains using the autoregressive filter. As we have seen, noise reduction improves the interpretation of the data, and the autoregressive filter yields good results in both the f-x and wavelet domains. Autoregression in the wavelet domain leads to better results than in the f-x domain, due to the local nature of the wavelet transform and the linearity imposed on the events at different scales.

We should also note that, in contrast with the f-x domain, the wavelet domain is linear, just like the autoregressive filter itself, which is why the filter does not work as well in the f-x domain.

References

[1] Annan, A.P. (2001) Ground Penetrating Radar. Workshop Notes, September, 192.

[2] Daniels, D.J. (2004) Ground Penetrating Radar.

[3] Burrus, S.C. and Gopinath, R.A. (2013) Wavelets and Wavelet Transforms. Rice Univ. Houston, Texas, 1-281.

[4] Leonard, M. and Kennett, B.L. (2008) Multicomponent Autoregressive Techniques for the Analysis of Seismograms. Physics of the Earth and Planetary Interiors, 113.

[5] Oskooi, B., Julayusefi, M. and Goudarzi, A. (2015) GPR Noise Reduction Based on Wavelet Thresholdings. Arabian Journal of Geosciences, 2937-2951.

https://doi.org/10.1007/s12517-014-1339-5

[6] Kim, J.H. and Cho, S.J. (2007) Removal of Ringing Noise in GPR Data by Signal Processing. Geosciences Journal, 11, 75-81.

https://doi.org/10.1007/BF02910382

[7] Canales, L.L. (1984) Random Noise Reduction. 54th Ann. Mtg. Society of Exploration Geophysicists, 3.

https://doi.org/10.1190/1.1894168

[8] Yilmaz, O. (2000) Seismic Data Analysis: Processing, Inversion, and Interpretation of Seismic Data. Society of Exploration Geophysicists, 1-7.

[9] Hashemi Amroabadi, M. (2010) Data Denoising in Analog & Digital Domain. Thesis Presented to Ryerson University, 11.

[10] Harrison, M.P. (n.d.) f-x Linear Prediction Filtering of Seismic Images. 147-165.

[11] Luetkepohl, H. (2011) Vector Autoregressive Models. Economics Departments, Institutes and Research Centers in the World. Retrieved from IDEAS.

https://ideas.repec.org/p/eui/euiwps/eco2011-30.html

[12] Fuss, R. (2007) Vector Autoregressive Models. Albert-Ludwigs-University of Freiburg.

[13] Naghizadeh, M. and Sacchi, M. (2012) Multicomponent f-x Seismic Random Noise Attenuation via Vector Autoregressive Operators. Geophysics, 99, 15.

https://doi.org/10.1190/geo2011-0198.1

[14] Wang, Y. (2015) Frequencies of the Ricker Wavelet. Geophysics, 80, A31-A37.

https://doi.org/10.1190/geo2014-0441.1