Cone Bearing Estimation Utilizing a Hybrid HMM and IFM Smoother Filter Formulation


1. Introduction

Cone Penetration Test (CPT) [1] [2] [3] [4] is extensively used in geotechnical engineering to determine the *in-situ* subsurface stratigraphy and to estimate geotechnical parameters of the soils present. Geotechnical engineers use CPT to characterize and quantify soil properties and groundwater conditions so that infrastructure (e.g., bridges, roads, buildings) construction requirements can be determined. In CPT, a cone penetration test rig pushes a steel cone vertically into the ground at a standard rate, and data are recorded at regular intervals during penetration. The cone penetrometer has electronic sensors to measure penetration resistance at the tip (*q _{m}*) and friction along the shaft (friction sleeve) during penetration. A CPT probe equipped with a pore-water pressure sensor is called a piezo-cone (CPTU cone). CPT penetrometers with other sensors, such as a seismic sensor, are also used.

For piezo-cones with the filter element right behind the cone tip (*i.e.*, the u2 position), it is standard practice to correct the recorded tip resistance for the impact of the pore pressure on the back of the cone tip. This corrected cone tip resistance is normally referred to as *q _{t}*, but in this paper, we focus on an additional correction that should be made to address the averaging that takes place when performing CPT to obtain the actual cone tip resistance values.

Boulanger and DeJong [5] outlined the distortions which occur when obtaining *q _{m}* measurements and proposed an “inverse” algorithm by which the effects of the distortion could be optimally removed. In their work, Boulanger and DeJong incorrectly described the distortions as a convolution operation (Equations (1), (2), (10), (12), (13), and (15)). In fact, the tip-bearing distortions are an averaging process [2] [5] in which cone tip values measured at a particular depth are affected by values above and below the depth of interest. This averaging or smoothing reduces soil layer resolution and prevents the identification of thin layers, which is critical for liquefaction assessment. The averaging/smoothing effect is subsequently described, along with a proposed algorithm which combines the Bayesian recursive estimation Hidden Markov Model (HMM) filter with Iterative Forward Modelling (IFM) parameter estimation in a smoother formulation for optimal estimation of the true cone tip resistance values, *q _{t}*.

Figure 1. Schematic and terminology for cone penetrometer ( [1] ).

2. Mathematical Background

2.1. Cone Penetration Testing Model

When performing CPT the layers above and below the cone tip affect the measured tip resistance as illustrated in Figure 2.

The measured cone penetration tip resistance *q _{m}* can then be described as

$\begin{array}{l}{q}_{m}\left(d\right)={\displaystyle \underset{j=1}{\overset{60\times \left(\frac{{d}_{c}}{\Delta}\right)}{\sum}}{w}_{c}\left(j\right)\times {q}_{t}\left({\Delta}_{qt}+j\right)}+v\left(d\right)\\ {\Delta}_{qt}=\left(d-{\Delta}_{wc}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Delta}_{wc}=30\times \left(\frac{{d}_{c}}{\Delta}\right)\end{array}$ (1)

where

*d*: the cone depth;

*d _{c}*: the cone tip diameter;

Δ: the *q _{t}* sampling rate;

*q _{m}*(*d*): the measured cone tip resistance at depth *d*;

*q _{t}*(*d*): the true cone tip resistance at depth *d*;

*w _{c}*(*j*): the cone penetration averaging (blurring) function;

*v*(*d*): additive noise, generally taken to be white with a Gaussian pdf.

In Equation (1) it is assumed that *w _{c}* averages the true cone tip resistance over 60 cone diameters (30 cone diameters above and 30 below the cone tip).
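The averaging in Equation (1) can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it blurs a synthetic two-layer *q _{t}* profile with a normalized uniform window standing in for *w _{c}*. The actual *w _{c}* of Equation (2) varies with the surrounding soil, which is why the distortion is an averaging process rather than a fixed convolution; the profile values and window length here are illustrative assumptions.

```python
import numpy as np

def blur_qt(q_t, n_half):
    """Equation (1) sketch: each q_m sample is a weighted average of the
    true q_t over n_half samples above and below the cone tip."""
    n = len(q_t)
    q_m = np.empty(n)
    for k in range(n):
        # window clipped at the profile boundaries, then renormalized
        lo, hi = max(0, k - n_half), min(n, k + n_half)
        w = np.ones(hi - lo) / (hi - lo)   # uniform stand-in for w_c(j)
        q_m[k] = np.dot(w, q_t[lo:hi])
    return q_m

# Two-layer test profile: a thin stiff layer embedded in a soft background.
q_t = np.full(200, 1.0)
q_t[90:110] = 10.0                  # 20-sample stiff layer
q_m = blur_qt(q_t, n_half=30)
print(q_m[100])                     # the true peak of 10.0 is attenuated
```

The attenuated peak illustrates the thin-layer effect: the measured value at the centre of the stiff layer is far below the true *q _{t}* because the window spans the soft material above and below.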

The cone penetration averaging function *w _{c}* for varying
${q}_{t,{z}^{\prime}}/{q}_{t,{z}^{\prime}=0}$ ratios is illustrated in Figure 3. As outlined by Boulanger and DeJong [5], *w _{c}* is defined as

Figure 2. Schematic of thin layer effect for a sand layer embedded in a clay layer [5].

Figure 3. Normalized cone penetration filter versus normalized depth from the cone tip ( ${q}_{t,{z}^{\prime}}/{q}_{t,{z}^{\prime}=0}=0.01,0.10,10$ and 100) [5].

${w}_{c}=\frac{{w}_{1}{w}_{2}}{{\displaystyle \sum {w}_{1}{w}_{2}}}$ (2a)

${w}_{1}=\frac{{C}_{1}}{1+\left|{\left(\frac{{z}^{\prime}}{{{z}^{\prime}}_{50}}\right)}^{{m}_{z}}\right|}$ (2b)

${w}_{2}=\sqrt{\frac{2}{1+{\left(\frac{{q}_{t,{z}^{\prime}}}{{q}_{t,{z}^{\prime}=0}}\right)}^{{m}_{q}}}}$ (2c)
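Equations (2a)-(2c) can be sketched directly. In the snippet below the shape parameters *C*_{1}, *z′*_{50}, *m _{z}* and *m _{q}* are illustrative placeholders, not the calibrated values of [5]:

```python
import numpy as np

def cone_filter_wc(z_norm, qt_ratio, z50=4.0, mz=3.0, mq=2.0, C1=1.0):
    """Normalized averaging filter w_c of Equations (2a)-(2c).

    z_norm   : depths z' normalized by cone diameter (0 at the tip)
    qt_ratio : ratio q_{t,z'} / q_{t,z'=0} at each depth
    z50, mz, mq, C1 : shape parameters (illustrative values only)
    """
    w1 = C1 / (1.0 + np.abs(z_norm / z50) ** mz)        # Equation (2b)
    w2 = np.sqrt(2.0 / (1.0 + qt_ratio ** mq))          # Equation (2c)
    w12 = w1 * w2
    return w12 / w12.sum()       # Equation (2a): normalize to unit sum

z = np.linspace(-30, 30, 61)     # +/- 30 cone diameters about the tip
w_c = cone_filter_wc(z, qt_ratio=np.ones_like(z))
print(w_c.sum())                 # weights sum to one by construction
```

With a uniform *q _{t}* ratio the filter peaks at the cone tip and decays symmetrically with distance, matching the qualitative shape shown in Figure 3.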

2.2. Bayesian Recursive Estimation

Bayesian Recursive Estimation (BRE) is a filtering technique based on state-space, time-domain formulations of physical problems [6] [7]. Application of this filter type requires that the dynamics of the system and measurement model, which relates the noisy measurements to the system state equations, be describable in a mathematical representation and probabilistic form that uniquely define the system behaviour. The potentially nonlinear discrete stochastic equation describing the system dynamics is defined as follows:

${x}_{k}={f}_{k-1}\left({x}_{k-1},{u}_{k-1}\right)\leftrightarrow p\left({x}_{k}|{x}_{k-1}\right)$ (3)

In Equation (3), the vector function *f _{k−1}* relates the state vector *x _{k−1}* and the process noise *u _{k−1}* to the state at index *k*; equivalently, the system dynamics are characterized by the transitional density $p\left({x}_{k}|{x}_{k-1}\right)$. The measurement model is defined as follows:

${z}_{k}={h}_{k}\left({x}_{k},{v}_{k}\right)\leftrightarrow p\left({z}_{k}|{x}_{k}\right)$ (4)

In Equation (4), *h _{k}* depends upon the index *k*, the state *x _{k}*, and the measurement noise *v _{k}*; equivalently, the measurement model is characterized by the likelihood $p\left({z}_{k}|{x}_{k}\right)$.

BRE is a two step process consisting of prediction and update. In the prediction step the system equation defined by Equation (3) is used to obtain the prior PDF of the state at time *k* using the Chapman-Kolmogorov equation, which is given as

$p\left({x}_{k}|{z}_{1:k-1}\right)={\displaystyle \int p\left({x}_{k}|{x}_{k-1}\right)p\left({x}_{k-1}|{z}_{1:k-1}\right)\text{d}{x}_{k-1}}$ (5)

The update step then computes the posterior PDF from the predicted PDF and the newly available measurement as follows:

$p\left({x}_{k}|{z}_{1:k}\right)=\frac{p\left({z}_{k}|{x}_{k}\right)p\left({x}_{k}|{z}_{1:k-1}\right)}{p\left({z}_{k}|{z}_{1:k-1}\right)}$ (6)

The recurrence Equations (5) and (6) form the basis for the optimal Bayesian solution. The BRE of the posterior density generates an exact solution when the state-space equations fit a Kalman Filter (KF) formulation or a Hidden Markov Model (HMM); otherwise, BRE approximates the posterior PDF numerically using Particle Filters (PF).

2.3. Hidden Markov Model (HMM) Filter

The HMM filter (also termed a grid-based filter) has a discrete state-space representation and has a finite number of states. In the HMM filter the posterior PDF is represented by the delta function approximation as follows:

$p\left({x}_{k-1}|{z}_{1:k-1}\right)={\displaystyle {\sum}_{i=1}^{{N}_{s}}{w}_{k-1\backslash k-1}^{i}\delta \left({x}_{k-1}-{x}_{k-1}^{i}\right)}$ (7)

where
${x}_{k-1}^{i}$ and
${w}_{k-1|k-1}^{i}$,
$i=1,\cdots ,{N}_{s}$, represent the fixed discrete states and associated conditional probabilities, respectively, at index *k* − 1, and *N _{s}* is the number of discrete states utilized. The governing equations for the HMM filter are derived by substituting Equation (7) into Equations (5) and (6). This substitution results in the HMM prediction and update equations which are outlined in Table 1.
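A minimal sketch of one prediction/update cycle of the grid-based (HMM) filter follows from the delta-function representation above; the two-state transition matrix and likelihood values below are illustrative, not taken from the paper:

```python
import numpy as np

def hmm_filter_step(w_prev, P, lik):
    """One prediction/update cycle of the grid-based (HMM) filter.

    w_prev : conditional probabilities w_{k-1|k-1} over the N_s states
    P      : P[i, j] = p(x_k = s_i | x_{k-1} = s_j), transition matrix
    lik    : lik[i] = p(z_k | x_k = s_i), measurement likelihood
    """
    w_pred = P @ w_prev              # prediction (Chapman-Kolmogorov, Eq. (5))
    w_post = lik * w_pred            # update: weight by likelihood (Eq. (6))
    return w_post / w_post.sum()     # normalize

# Toy example: two states, sticky transitions, measurement favours state 1.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
w = np.array([0.5, 0.5])
lik = np.array([0.2, 0.8])
w = hmm_filter_step(w, P, lik)
print(w)                             # posterior mass shifts toward state 1
```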

Table 1. HMM governing equations.

2.4. Iterative Forward Modelling

Iterative forward modeling (IFM) is a parameter estimation technique based upon iteratively adjusting model parameters until a user-specified cost function is minimized. The desired parameter estimates are those which minimize this cost function. The IFM technique utilized within the *q _{t}* estimation algorithm is the downhill simplex method (DSM), originally developed by Nelder and Mead [8]. The multidimensional DSM has the important property of not requiring derivatives of function evaluations, and it can minimize nonlinear functions of more than one independent variable. Although it is not the most efficient optimization procedure, the DSM is versatile, robust and simple to implement. A simplex is the most elementary geometric figure of a given dimension: a line in one dimension, a triangle in two dimensions, a tetrahedron in three, etc.; therefore, in an *N*-dimensional space the simplex is defined by *N* + 1 vertices.

The DSM starts with *N* + 1 vertices that form the initial simplex. The initial simplex vertices are chosen so that the simplex occupies a good portion of the solution space. In addition, a scalar cost function must be specified at each vertex of the simplex. The general idea of the minimization is to keep the minimum within the simplex during the optimization while decreasing the volume of the simplex. The DSM searches for the minimum of the cost function by taking a series of steps, each time moving a point in the simplex away from where the cost function is largest. The simplex moves in space by variously reflecting, expanding, contracting, or shrinking. The simplex size is continuously changed and mostly diminished, so that finally it is small enough to contain the minimum with the desired accuracy. The DSM incorporates the following basic steps:

1) Specify initial simplex vertices.

2) Specify the cost function at each vertex of the simplex.

3) Compare the cost function for each vertex and determine the lowest error “best” and highest error “worst” vertices.

4) Sequentially locate first the reflected, then if necessary the expanded, and then if necessary the contracted vertex, calculating for each the corresponding cost function and comparing it to that of the worst vertex; if at any step the cost function of the new trial vertex is less than the value at the worst vertex, the trial vertex replaces the current worst vertex.

5) If the process in step 4 does not yield a lower error value than the previous worst, then the other vertices are shrunk towards the best vertex.

6) At each stage of shrinking, the distances between vertices are calculated and compared to a set tolerance value to check if the simplex has become sufficiently small for termination of the estimation; when the test criterion is reached, the previous best vertex becomes the solution.

7) At each stage of shrinking, the cost function values at the vertices are compared to a set minimum value to check if the error residual has become sufficiently small for termination of the estimation; when the test criterion is reached, the previous best vertex becomes the solution.
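The steps above can be sketched as a compact Nelder-Mead implementation. This is an illustrative version with the standard reflection/expansion/contraction/shrink coefficients (1, 2, 0.5, 0.5) and a simplex-size termination test, not the exact variant used in the *q _{t}* estimation algorithm:

```python
import numpy as np

def nelder_mead(f, simplex, tol=1e-8, max_iter=500):
    """Minimal downhill simplex (Nelder-Mead) sketch following steps 1-7.

    simplex : (N+1, N) array of initial vertices (step 1)
    """
    simplex = np.asarray(simplex, dtype=float)
    for _ in range(max_iter):
        # steps 2-3: evaluate cost at each vertex, order best to worst
        order = np.argsort([f(v) for v in simplex])
        simplex = simplex[order]
        best, worst = simplex[0], simplex[-1]
        if np.max(np.abs(simplex - best)) < tol:
            break                            # step 6: simplex small enough
        centroid = simplex[:-1].mean(axis=0)
        xr = centroid + (centroid - worst)             # step 4: reflection
        if f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)   # step 4: expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = centroid + 0.5 * (worst - centroid)   # step 4: contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                      # step 5: shrink
                simplex[1:] = best + 0.5 * (simplex[1:] - best)
    return simplex[0]

# Minimize a 2-D quadratic with minimum at (1, 2).
cost = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
x_min = nelder_mead(cost, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(x_min)
```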

3. *q _{t}HMM*

The *q _{t}* optimal filter estimation technique is referred to as the *q _{t}HMM* algorithm.

3.1. *q _{t}HMM* Algorithm Formulation

The HMM portion of the *q _{t}HMM* algorithm first implements a forward-depth formulation (
${\stackrel{^}{q}}_{k}^{F}$ ), where the filter recurses through the data above the cone tip ( $j=1$ to
$30\times \left(\frac{{d}_{c}}{\Delta}\right)$ in (1)). Next a backward-depth formulation (
${\stackrel{^}{q}}_{k}^{B}$ ) is implemented, where the filter recurses through the data below the cone tip (
$j=30\times \left(\frac{{d}_{c}}{\Delta}\right)$ to
$60\times \left(\frac{{d}_{c}}{\Delta}\right)$ in (1)) starting at the final *q _{m}* value. The optimal smoothed estimate for *q _{t}* at index *k* is then obtained by combining the two:

${\stackrel{^}{q}}_{k}^{t}=\left({\stackrel{^}{q}}_{k}^{F}+{\stackrel{^}{q}}_{k}^{B}\right)/2$ (12)

Since the structure of the backward-depth *q _{t}HMM* filter is similar to that of the forward-depth filter, only the forward-depth formulation is outlined here. The forward-depth measurement equation for each discrete state *i* is given as

$\begin{array}{l}{z}_{k}^{i}={\displaystyle \underset{j=1}{\overset{30\times \left(\frac{{d}_{c}}{\Delta}\right)}{\sum}}{w}_{c}\left(j\right)\times {q}_{k}^{i}\left({\Delta}_{qt}+j\right)}+{v}_{k}\\ {\Delta}_{qt}=\left(d-{\Delta}_{wc}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Delta}_{wc}=30\times \left(\frac{{d}_{c}}{\Delta}\right)\end{array}$ (13)

The transitional probabilities (*i.e.*,
$p\left({x}_{k}^{i}|{x}_{k-1}^{j}\right)$ or
$p\left({q}_{k}^{i}|{q}_{k-1}^{j}\right)$ ) for each HMM state (*i.e.*, discrete cone tip value ${q}_{k}^{i}$ ) are set equal, since there is equal probability of moving from the current cone tip value to any other value within the specified discretization range with lower bound *q _{tL}*. The likelihood of the measurement given each discrete state is defined as

$p\left({z}_{k}|{q}_{k}^{i}\right)=\frac{1}{\sqrt{2\pi}\sigma}{\text{e}}^{\left[-\frac{{\left({q}_{m}\left(d\right)-{z}_{k}^{i}\right)}^{2}}{2{\sigma}^{2}}\right]}$ (14)

where *σ*^{2} is the variance of the measurement noise. The HMM forward-depth estimated *q _{t}HMM* cone tip values (
${\stackrel{^}{q}}_{k}^{F}$ ) are calculated as follows:

${\stackrel{^}{q}}_{k}^{F}={\displaystyle {\sum}_{i=1}^{{N}_{s}}{w}_{k|k}^{i}{q}_{k}^{i}}$ (15)
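Equations (14) and (15) can be sketched as follows; the candidate states, measurement value and noise level below are illustrative assumptions:

```python
import numpy as np

def gaussian_likelihood(q_m, z_i, sigma):
    """Equation (14): likelihood of the measured q_m given each state's
    modelled measurement z_i, under white Gaussian noise of std sigma."""
    return np.exp(-((q_m - z_i) ** 2) / (2.0 * sigma ** 2)) / (
        np.sqrt(2.0 * np.pi) * sigma)

def posterior_mean(weights, q_states):
    """Equation (15): estimate as the probability-weighted sum of the
    discrete cone tip states."""
    return float(np.dot(weights, q_states))

# Three candidate q_t states (illustrative units) and a measurement of 4.8.
q_states = np.array([1.0, 5.0, 10.0])
lik = gaussian_likelihood(4.8, q_states, sigma=1.0)
w = lik / lik.sum()              # uniform prior -> normalized posterior
est = posterior_mean(w, q_states)
print(est)                       # pulled strongly toward the 5.0 state
```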

3.2. *q _{t}HMM* Test Bed Example

The previously outlined *q _{t}HMM* algorithm was subjected to extensive test bed simulations. Unfortunately, it was concluded that this formulation would not work due to the significant challenges in estimating the unknown *w _{c}* averaging function parameters.

Figure 5 illustrates values of *q _{m}* and the forward-depth estimates ${\stackrel{^}{q}}_{k}^{F}$.

Figure 4. Values of *q _{t}*,

Figure 5. Values of *q _{m}* (black series) and

those of the true *q _{t}* values (

3.3. Incorporation of IFM into the *q _{t}HMM* Algorithm

IFM is incorporated into the *q _{t}HMM* algorithm to address the poor test bed results. In this case, initial estimates for the layered soil parameters are specified and then iteratively refined by the DSM until the misfit between the modelled and measured *q _{m}* values is minimized.

Figure 6. Significant instability in the estimates of *q _{t}* when using the Boulanger and DeJong inversion estimation algorithm [5].

As an example, assuming that a 10 cm^{2} cone (with a diameter of 36 mm) is utilized for the sounding, the extent of the *w _{c}* averaging window (equal to 60 cone diameters) is approximately 2 m, and the depth interval below the cone for the forward-depth analysis is approximately 1 m. Assuming that a maximum of three possible layers exists within this depth interval, only the *q _{t}* values of these layers need to be estimated.
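The layer parameterization described above can be sketched as a cost function that the DSM would minimize. The parameter vector (two interface indices plus three layer *q _{t}* magnitudes), the depth-invariant blur window and all numerical values below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def layered_qt(params, n):
    """Build a candidate q_t profile of n samples from an IFM parameter
    vector [b1, b2, q1, q2, q3]: two interface indices and three layer
    q_t magnitudes (a presumed maximum of three layers)."""
    b1, b2, q1, q2, q3 = params
    q_t = np.empty(n)
    q_t[: int(b1)] = q1
    q_t[int(b1): int(b2)] = q2
    q_t[int(b2):] = q3
    return q_t

def cost(params, q_m_obs, window):
    """Sum-of-squares misfit between the blurred candidate profile and
    the observed q_m values (the quantity the DSM would minimize)."""
    q_t = layered_qt(params, len(q_m_obs))
    # np.convolve is a depth-invariant stand-in for the state-dependent
    # w_c averaging of Equation (1), used here purely for illustration
    q_m_model = np.convolve(q_t, window, mode="same")
    return np.sum((q_m_model - q_m_obs) ** 2)

# Sanity check: the generating parameters give zero misfit.
window = np.ones(9) / 9.0
true = [20, 40, 1.0, 8.0, 2.0]
q_m_obs = np.convolve(layered_qt(true, 60), window, mode="same")
print(cost(true, q_m_obs, window))
```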

3.4. *q _{t}HMM*

The performance of the *q _{t}HMM* algorithm was evaluated with test bed simulations.

A second test bed simulation of the performance of the *q _{t}HMM* algorithm was also carried out.

Figure 7. Schematic outlining the proposed IFM portion of the *q _{t}HMM* algorithm.


Figure 8. TEST BED 1 (a) Specified *q _{t}* values (grey line), derived


Figure 9. TEST BED 2 (a) Specified *q _{t}* values (grey line), derived

(black line in Figure 9(a)). Using the *q _{t}HMM*

4. Conclusions

Cone penetrometer testing (CPT) is an effective, fast and relatively inexpensive system for determining the *in-situ* subsurface stratigraphy and estimating geotechnical parameters of the soils present. When performing CPT, the layers above and below the cone tip affect the measured tip resistance. The extent of this issue can be significant and is dependent upon variable *in-situ* soil properties (*i.e.*, it is site specific). As a result, a specific soil (with a specific *q _{t}* value) can generate significantly different tip resistance readings (*q _{m}*) depending on the properties of the adjacent soil layers.

This paper has outlined an algorithm which utilizes a hybrid HMM and IFM filter for the purpose of obtaining true CPT cone tip bearing values from blurred measured values. The *q _{t}* estimation algorithm is referred to as the *q _{t}HMM* algorithm.

References

[1] Lunne, T., Robertson, P.K. and Powell, J.J.M. (1997) Cone Penetration Testing in Geotechnical Practice. Taylor & Francis, London.

[2] Robertson, P.K. (1990) Soil Classification Using the Cone Penetration Test. Canadian Geotechnical Journal, 27, 151-158.

https://doi.org/10.1139/t90-014

[3] (2017) ASTM D6067/D6067M-17 Standard Practice for Using the Electronic Piezocone Penetrometer Tests for Environmental Site Characterization and Estimation of Hydraulic Conductivity. Soil and Rock, 4, 324-333.

[4] Cai, G.J., Liu, S.Y., Tong, L.Y. and Du, G.Y. (2006) General Factors Affecting Interpretation for the Piezocone Penetration Test (CPTU) Data. Journal of Engineering Geology, 14, 632-636.

[5] Boulanger, R.W. and DeJong, J.T. (2018) Inverse Filtering Procedure to Correct Cone Penetration Data for Thin-Layer and Transition Effects. In: Hicks, M.A., Pisano, F. and Peuchen, J., Eds., Cone Penetration Testing 2018, CRC Press, London, 25-44.

[6] Arulampalam, M.S., Maskell, S., Gordon, N. and Clapp, T. (2002) A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking. IEEE Transactions on Signal Processing, 50, 174-188.

https://doi.org/10.1109/78.978374

[7] Baziw, E. (2007) Application of Bayesian Recursive Estimation for Seismic Signal Processing. Ph.D. Thesis, University of British Columbia, Vancouver, Canada.

[8] Nelder, J.A. and Mead, R. (1965) A Simplex Method for Function Minimization. The Computer Journal, 7, 308-313.

https://doi.org/10.1093/comjnl/7.4.308

[9] Gibowicz, S.J. and Kijko, A. (1994) An Introduction to Mining Seismology. Academic Press, CA.

[10] Baziw, E., Nedilko, B. and Weir-Jones, I. (2004) Microseismic Event Detection Kalman Filter: Derivation of the Noise Covariance Matrix and Automated First Break Determination for Accurate Source Location Estimation. Pure and Applied Geophysics, 161, 303-329.

https://doi.org/10.1007/s00024-003-2443-8

[11] Baziw, E. (2011) Incorporation of Iterative Forward Modeling into the Principle Phase Decomposition Algorithm for Accurate Source Wave and Reflection Series Estimation. IEEE Transactions on Geoscience and Remote Sensing, 49, 650-660.

https://doi.org/10.1109/TGRS.2010.2058122