Block Iterative STMV Algorithm and Its Application in Multi-Target Detection


1. Introduction

Bearing spectrum estimation is very important in the sonar and radar fields. The original array-based bearing spectrum estimation algorithm is conventional beamforming (CBF), whose azimuth resolution is restricted by the spatial “Fourier threshold”, often termed the “Rayleigh threshold” [1] [2]. Many kinds of high-resolution bearing spectrum estimation algorithms have appeared since the 1970s, when Capon proposed the minimum variance distortionless response (MVDR) beamforming algorithm. MVDR has two manifestations when dealing with wide-band signals. One is the incoherent signal-subspace method (ISM) proposed by Wax, whose azimuth resolution decreases when dealing with correlated sources. The other is the coherent signal-subspace method (CSM) proposed by Wang and Kaveh. By introducing the beam-focusing idea, CSM achieves better estimation performance than ISM and can deal with correlated sources. Based on CSM, Krolik and Swingler proposed the steered minimum variance (STMV) algorithm, which obtains wide-band gains and can deal with correlated sources directly.

Although the STMV algorithm performs well, for a long time it was not widely used because its huge computational cost, dominated by the matrix inversion, places high demands on hardware and restricts its practical application. Swingler [3] proposed the sub-array STMV algorithm, and papers [4] [5] [6] [7] studied the sub-array STMV algorithm further. Zhu [8] proposed an iterative STMV algorithm based on a one-phase regressive filter and the matrix inversion lemma, which reduces the computational cost to 2/M of that of the STMV algorithm (M refers to the number of array elements) while perfectly preserving the characteristics of the STMV algorithm. Inspired by this idea, this paper proposes a block iterative STMV algorithm based on a one-phase regressive filter, the matrix inversion lemma and block matrix inversion, which reduces the computational cost further. Simulation experiments show that this iterative algorithm maintains the high azimuth resolution of STMV and its advantage in multi-target detection, besides reducing the computational cost markedly. The processing results of actual experimental data also verify that this algorithm has good stability.

2. Steered Minimum Variance Beamforming

2.1. Basic Principles of STMV Algorithm

The basic principle of STMV, which is based on the steered covariance matrix (STCM), is to keep the response at the steered bearing angle $\theta $ equal to 1 while rotating the beam and minimizing the total output energy.

Assume that the array output is $x\left(t\right)={[{x}_{1}(t),{x}_{2}(t),\cdots ,{x}_{M}(t)]}^{T}$ and, correspondingly, that the frequency domain output is $X\left({f}_{k}\right)$. For CBF, according to the set azimuth, each sensor output is delayed by ${\tau}_{m}\left(\theta \right)=md\mathrm{cos}\left(\theta \right)/c$, $m=0,\mathrm{...},M-1$. The output after the time delay is

$Y\left({f}_{k},\theta \right)={\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)X\left({f}_{k}\right)$ (1)

where M is the array number, $\theta $ is the bearing angle, d is the interval between the adjacent array, c is the acoustic speed, ${\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)=diag\left(\left[1,{e}^{j2\pi {f}_{k}d\mathrm{cos}\left(\theta \right)/c},\cdots ,{e}^{j2\pi {f}_{k}\left(M-1\right)d\mathrm{cos}\left(\theta \right)/c}\right]\right)$ is the steered matrix.
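As an illustrative numerical sketch of Equation (1), the diagonal steering matrix ${\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)$ can be constructed directly from its definition. The parameter values below (M = 8 sensors, d = 0.15 m, ${f}_{k}$ = 1 kHz, $\theta $ = 80˚) are assumptions made purely for the example, not values from the paper:

```python
import numpy as np

M, d, c = 8, 0.15, 1500.0  # sensors, spacing (m), sound speed (m/s): assumed values

def steering_matrix(fk, theta):
    # T_p(f_k, theta) = diag(1, e^{j2*pi*fk*d*cos(theta)/c}, ...,
    #                        e^{j2*pi*fk*(M-1)*d*cos(theta)/c}), as in Equation (1)
    m = np.arange(M)
    return np.diag(np.exp(1j * 2 * np.pi * fk * m * d * np.cos(theta) / c))

Tp = steering_matrix(1000.0, np.deg2rad(80.0))
# Steering only phase-shifts each channel: every diagonal entry has unit modulus.
assert Tp.shape == (M, M)
assert np.allclose(np.abs(np.diag(Tp)), 1.0)
```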

Assuming that the weight vector of the array is $w={[{w}_{1},{w}_{2},\cdots ,{w}_{M}]}^{T}$, the array output is

$y\left(t,\theta \right)={w}^{H}{\displaystyle \underset{k=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)X\left({f}_{k}\right){e}^{j2\pi {f}_{k}t}}$ (2)

where, h is the highest frequency point, and l is the lowest one.

The average power of the array output is

$\begin{array}{l}P\left(\theta \right)=E\left[y\left(t,\theta \right)y{\left(t,\theta \right)}^{H}\right]\\ ={w}^{H}E\left[\left({\displaystyle \underset{k=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)X\left({f}_{k}\right)}\right){\left({\displaystyle \underset{k=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)X\left({f}_{k}\right)}\right)}^{H}\right]w\end{array}$ (3)

Note that $X\left({f}_{k}\right)$ and $X\left({f}_{m}\right)$ are uncorrelated when T is large enough and $k\ne m$. Therefore

$\begin{array}{c}P\left(\theta \right)={w}^{H}E\left[\left({\displaystyle \underset{k=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)X\left({f}_{k}\right)}\right){\left({\displaystyle \underset{k=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)X\left({f}_{k}\right)}\right)}^{H}\right]w\\ ={w}^{H}E\left({\displaystyle \underset{k=l}{\overset{h}{\sum}}{\displaystyle \underset{m=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)X\left({f}_{k}\right){X}^{H}\left({f}_{m}\right){\stackrel{\dot{}}{T}}_{p}^{H}\left({f}_{m},\theta \right)}}\right)w\\ ={w}^{H}\left({\displaystyle \underset{k=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)E\left[X\left({f}_{k}\right){X}^{H}\left({f}_{k}\right)\right]{\stackrel{\dot{}}{T}}_{p}^{H}\left({f}_{k},\theta \right)}\right)w\\ ={w}^{H}{R}_{stmv}\left(\theta \right)w\end{array}$ (4)

The cross-spectral density matrix (CSDM) $R\left({f}_{k}\right)$ is estimated using the frequency domain outputs ${X}_{n}\left({f}_{k}\right)$ of N snapshots, as shown in Equation (5):

$R\left({f}_{k}\right)=E\left[X\left({f}_{k}\right){X}^{H}\left({f}_{k}\right)\right]={\displaystyle \underset{n=1}{\overset{N}{\sum}}{X}_{n}\left({f}_{k}\right){X}_{n}^{H}\left({f}_{k}\right)}$ (5)

${R}_{stmv}\left(\theta \right)$ is defined as the steered covariance matrix (STCM) corresponding to the bearing angle $\theta $, as shown in Equation (6):

${R}_{stmv}\left(\theta \right)={\displaystyle \underset{k=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)R\left({f}_{k}\right){\stackrel{\dot{}}{T}}_{p}^{H}\left({f}_{k},\theta \right)}$ (6)

Equation (6) also gives the relationship between the STCM and the CSDM.

The optimal weight vector of the beamformer keeps the response equal to 1 in the specified direction while minimizing the power of the array output, i.e.

$\{\begin{array}{l}w=\mathrm{arg}\left[\mathrm{min}\left({w}^{H}{R}_{stmv}^{}w\right)\right]\\ {w}^{H}{L}_{M}=1\end{array}$ (7)

where, ${L}_{M}$ is $M\times 1$ vector of ones.

The weight vector can be obtained as Equation (8) by applying the method of Lagrange multipliers.

$w=\frac{{R}_{stmv}^{-1}\left(\theta \right){L}_{M}}{{L}_{M}{}^{H}{R}_{stmv}^{-1}\left(\theta \right){L}_{M}}$ (8)

Substituting Equation (8) into Equation (4),

${P}_{stmv}\left(\theta \right)=\frac{1}{{L}_{M}{}^{H}{R}_{stmv}^{-1}\left(\theta \right){L}_{M}}$ (9)

The STCM comprehensively utilizes the information at each frequency, and the matrix is invertible provided that the product of the number of frequency points and the number of snapshots exceeds the order of the matrix.
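As an illustrative numerical check of Equations (7)-(9), a randomly generated Hermitian positive-definite matrix is used below as a stand-in for ${R}_{stmv}\left(\theta \right)$ (an assumption made purely for the sketch); the Lagrange-multiplier solution of Equation (8) is verified to satisfy the constraint of Equation (7) and to reproduce the power of Equation (9):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8  # number of sensors (illustrative)

# Random Hermitian positive-definite stand-in for the STCM R_stmv(theta) of Eq. (6).
G = rng.standard_normal((M, 2 * M)) + 1j * rng.standard_normal((M, 2 * M))
R = G @ G.conj().T / (2 * M)

L_M = np.ones(M, dtype=complex)                  # M x 1 vector of ones, Eq. (7)
R_inv = np.linalg.inv(R)
denom = L_M.conj() @ R_inv @ L_M                 # L_M^H R^{-1} L_M (real, positive)
w = R_inv @ L_M / denom                          # optimal weights, Eq. (8)
P = 1.0 / np.real(denom)                         # STMV output power, Eq. (9)

assert np.isclose(w.conj() @ L_M, 1.0)           # distortionless: w^H L_M = 1
assert np.isclose(np.real(w.conj() @ R @ w), P)  # w^H R w equals Eq. (9)
```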

2.2. Principles of Iterative STMV Algorithm

When STMV beams are formed, the average frequency domain output can be obtained by summing the N frequency domain outputs ${X}_{n}\left({f}_{k}\right)$.

$\bar{X}\left({f}_{k}\right)={\displaystyle \underset{n=1}{\overset{N}{\sum}}{X}_{n}\left({f}_{k}\right)}$ (10)

Then STCM ${R}_{stmv}\left(\theta \right)$ can be represented as

${R}_{stmv}\left(\theta \right)={\displaystyle \underset{k=l}{\overset{h}{\sum}}{\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)\bar{X}\left({f}_{k}\right){\bar{X}}^{H}\left({f}_{k}\right){\stackrel{\dot{}}{T}}_{p}^{H}\left({f}_{k},\theta \right)}$ (11)

Defining $U={\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)\bar{X}\left({f}_{k}\right)$, Equation (11) can be represented as

${R}_{stmv}\left(\theta \right)={\displaystyle \underset{k=l}{\overset{h}{\sum}}U{U}^{H}}$ (12)

In this algorithm, the previous data are utilized fully and a one-phase regressive filter is used to integrate the covariance matrix ${R}_{stmv}\left(\theta \right)$; the STCM ${\stackrel{^}{R}}_{stmv}^{}(n)$ estimated using the nth snapshot can then be represented as

${\stackrel{^}{R}}_{stmv}(n)=\alpha {\stackrel{^}{R}}_{stmv}\left(n-1\right)+(1-\alpha ){\displaystyle \underset{k=l}{\overset{h}{\sum}}U{U}^{H}}$ (13)

By matrix inversion lemma,

${(A+x{y}^{H})}^{-1}={A}^{-1}-\frac{{A}^{-1}x{y}^{H}{A}^{-1}}{1+{y}^{H}{A}^{-1}x}$ (14)

Therefore, ${\stackrel{^}{R}}_{stmv}^{-1}(n)$ can be represented as

$\begin{array}{l}{\stackrel{^}{R}}_{stmv}^{-1}(n)={\left[{\tilde{R}}_{stmv}(h-1)+(1-\alpha )U\left({f}_{h}\right){U}^{H}\left({f}_{h}\right)\right]}^{-1}\\ ={\tilde{R}}_{stmv}^{-1}(h-1)-\frac{(1-\alpha ){\tilde{R}}_{stmv}^{-1}(h-1)U\left({f}_{h}\right){U}^{H}\left({f}_{h}\right){\tilde{R}}_{stmv}^{-1}(h-1)}{1+(1-\alpha ){U}^{H}\left({f}_{h}\right){\tilde{R}}_{stmv}^{-1}(h-1)U\left({f}_{h}\right)}\end{array}$ (15)

where ${\tilde{R}}_{stmv}(h-1)=\alpha {\stackrel{^}{R}}_{stmv}\left(n-1\right)+(1-\alpha ){\displaystyle \underset{k=l}{\overset{h-1}{\sum}}U{U}^{H}}$. Obviously, in order to solve ${\stackrel{^}{R}}_{stmv}^{-1}(n)$, we should solve ${\tilde{R}}_{stmv}^{-1}(h-1)$ first. Noting that ${\tilde{R}}_{stmv}(h-1)$ and ${\stackrel{^}{R}}_{stmv}^{}(n)$ have the same form, in order to solve ${\tilde{R}}_{stmv}^{-1}(h-1)$ we should solve ${\tilde{R}}_{stmv}^{-1}(h-2)$ first. By analogy, we can solve ${\stackrel{^}{R}}_{stmv}^{-1}(n)$ by iteration as long as ${\tilde{R}}_{stmv}^{-1}(l)$ is solved. ${\tilde{R}}_{stmv}^{-1}(l)$ can be represented as

$\begin{array}{l}{\tilde{R}}_{stmv}^{-1}(l)={\left[\alpha {\stackrel{^}{R}}_{stmv}\left(n-1\right)+(1-\alpha )U\left({f}_{l}\right){U}^{H}\left({f}_{l}\right)\right]}^{-1}\\ =\frac{1}{\alpha}\left[{\stackrel{^}{R}}_{stmv}^{-1}(n-1)-\frac{(1-\alpha ){\stackrel{^}{R}}_{stmv}^{-1}(n-1)U\left({f}_{l}\right){U}^{H}\left({f}_{l}\right){\stackrel{^}{R}}_{stmv}^{-1}(n-1)}{\alpha +(1-\alpha ){U}^{H}\left({f}_{l}\right){\stackrel{^}{R}}_{stmv}^{-1}(n-1)U\left({f}_{l}\right)}\right]\end{array}$ (16)

Utilizing Equations (15) and (16), ${\stackrel{^}{R}}_{stmv}^{-1}(n)$ can be estimated by $h-l$ iterations.
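The core of the iteration is a rank-one inverse update. The short sketch below verifies the matrix inversion lemma of Equation (14), in the weighted form used by Equation (15), on a randomly generated Hermitian positive-definite matrix and snapshot vector (illustrative stand-ins, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M, alpha = 6, 2.0 / 3.0  # array size and filter constant (illustrative)

# Hermitian positive-definite stand-in for the previous estimate, plus a new
# snapshot vector standing in for U(f_h); both are random illustrative data.
G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_prev = G @ G.conj().T + M * np.eye(M)
U = rng.standard_normal(M) + 1j * rng.standard_normal(M)
R_prev_inv = np.linalg.inv(R_prev)

def rank1_inv_update(A_inv, u, g):
    # (A + g*u*u^H)^{-1} = A^{-1} - g A^{-1} u u^H A^{-1} / (1 + g u^H A^{-1} u),
    # i.e. the matrix inversion lemma of Eq. (14) applied with x = g*u, y = u.
    Au = A_inv @ u
    return A_inv - g * np.outer(Au, Au.conj()) / (1.0 + g * (u.conj() @ Au))

updated = rank1_inv_update(R_prev_inv, U, 1.0 - alpha)
direct = np.linalg.inv(R_prev + (1.0 - alpha) * np.outer(U, U.conj()))
assert np.allclose(updated, direct)  # the update reproduces the direct inverse
```

The update costs O(M^2) per frequency point instead of the O(M^3) of a direct inversion, which is the source of the savings counted in Section 3.2.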

2.3. Principles of Block Iterative STMV Algorithm

2.3.1. Inversion Algorithm of Block Matrix and Complexity Analysis

In [9], the inversion formula for a $2\times 2$ block matrix is elaborated.

Lemma. Assume that $A=\left[\begin{array}{cc}{A}_{11}& {A}_{12}\\ {A}_{12}^{H}& {A}_{22}\end{array}\right]$, where ${A}_{11}$ and ${A}_{22}$ are Hermitian matrices of order m and ${A}_{12}$ is an $m\times m$ matrix. Then

${A}^{-1}=\left[\begin{array}{cc}{A}_{11}^{-1}+{A}_{11}^{-1}{A}_{12}{\left({A}_{22}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{12}\right)}^{-1}{\left({A}_{11}^{-1}{A}_{12}\right)}^{H}& -{A}_{11}^{-1}{A}_{12}{\left({A}_{22}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{12}\right)}^{-1}\\ {\left(-{A}_{11}^{-1}{A}_{12}{\left({A}_{22}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{12}\right)}^{-1}\right)}^{H}& {\left({A}_{22}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{12}\right)}^{-1}\end{array}\right]$ (17)

Noting that the computational cost of a multiplication is tens of times that of an addition, this paper only takes the cost of multiplications into consideration. Assuming that the computational cost of inverting an m-order matrix is $C\left(m\right)$, the computational cost of inverting a $2\times 2$ block matrix is

${x}_{2}=2C\left(m\right)+4{m}^{3}$ (18)
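Equation (17) can be verified numerically. The sketch below builds a random Hermitian positive-definite matrix (an illustrative assumption), inverts it block-wise via the Schur complement ${A}_{22}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{12}$, and compares the result against direct inversion:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4  # block order (illustrative)

# Random 2m x 2m Hermitian positive-definite matrix, split into m x m blocks.
G = rng.standard_normal((2 * m, 2 * m)) + 1j * rng.standard_normal((2 * m, 2 * m))
A = G @ G.conj().T + 2 * m * np.eye(2 * m)
A11, A12, A22 = A[:m, :m], A[:m, m:], A[m:, m:]

A11_inv = np.linalg.inv(A11)
S = A22 - A12.conj().T @ A11_inv @ A12   # Schur complement A22 - A12^H A11^{-1} A12
S_inv = np.linalg.inv(S)
T = A11_inv @ A12

# Assemble the four blocks of Equation (17).
block_inverse = np.block([
    [A11_inv + T @ S_inv @ T.conj().T, -T @ S_inv],
    [(-T @ S_inv).conj().T, S_inv],
])
assert np.allclose(block_inverse, np.linalg.inv(A))
```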

The inversion formula and computational cost of an $n\times n$ block matrix are deduced below.

Theorem 1. Assume that $A=\left[\begin{array}{ccc}{A}_{11}& {A}_{12}& {A}_{13}\\ {A}_{12}^{H}& {A}_{22}& {A}_{23}\\ {A}_{13}^{H}& {A}_{23}^{H}& {A}_{33}\end{array}\right]$, where ${A}_{11}$, ${A}_{22}$ and ${A}_{33}$ are Hermitian matrices of order m and ${A}_{12}$, ${A}_{13}$ and ${A}_{23}$ are $m\times m$ matrices. Then

${A}^{-1}=\left[\begin{array}{cc}{A}_{11}^{-1}+D{E}^{-1}{D}^{H}& D{E}^{-1}\\ {\left(D{E}^{-1}\right)}^{H}& {E}^{-1}\end{array}\right]$ (19)

where $D=\left(-{A}_{11}^{-1}{A}_{12},-{A}_{11}^{-1}{A}_{13}\right)$, $E=\left[\begin{array}{cc}{A}_{22}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{12}& {A}_{23}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{13}\\ {\left({A}_{23}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{13}\right)}^{H}& {A}_{33}-{A}_{13}^{H}{A}_{11}^{-1}{A}_{13}\end{array}\right]$. Equation (19) is named the “combination inversion formula”.

Proof: Noting

$\begin{array}{l}\left[\begin{array}{ccc}{I}_{m}& {O}_{m}& {O}_{m}\\ {\left(-{A}_{11}^{-1}{A}_{12}\right)}^{H}& {I}_{m}& {O}_{m}\\ {\left(-{A}_{11}^{-1}{A}_{13}\right)}^{H}& {O}_{m}& {I}_{m}\end{array}\right]\left[\begin{array}{ccc}{A}_{11}& {A}_{12}& {A}_{13}\\ {A}_{12}^{H}& {A}_{22}& {A}_{23}\\ {A}_{13}^{H}& {A}_{23}^{H}& {A}_{33}\end{array}\right]\left[\begin{array}{ccc}{I}_{m}& -{A}_{11}^{-1}{A}_{12}& -{A}_{11}^{-1}{A}_{13}\\ {O}_{m}& {I}_{m}& {O}_{m}\\ {O}_{m}& {O}_{m}& {I}_{m}\end{array}\right]\\ =\left[\begin{array}{ccc}{A}_{11}& {O}_{m}& {O}_{m}\\ {O}_{m}& {A}_{22}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{12}& {A}_{23}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{13}\\ {O}_{m}& {\left({A}_{23}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{13}\right)}^{H}& {A}_{33}-{A}_{13}^{H}{A}_{11}^{-1}{A}_{13}\end{array}\right]=\left[\begin{array}{cc}{A}_{11}& {O}_{m\times 2m}\\ {O}_{2m\times m}& E\end{array}\right]\end{array}$

Then,

$\begin{array}{c}{A}^{-1}=\left[\begin{array}{cc}{I}_{m}& D\\ {O}_{2m\times m}& {I}_{2m}\end{array}\right]{\left[\begin{array}{cc}{A}_{11}& {O}_{m\times 2m}\\ {O}_{2m\times m}& E\end{array}\right]}^{-1}\left[\begin{array}{cc}{I}_{m}& {O}_{m\times 2m}\\ {D}^{H}& {I}_{2m}\end{array}\right]\\ =\left[\begin{array}{cc}{I}_{m}& D\\ {O}_{2m\times m}& {I}_{2m}\end{array}\right]\left[\begin{array}{cc}{A}_{11}^{-1}& {O}_{m\times 2m}\\ {O}_{2m\times m}& {E}^{-1}\end{array}\right]\left[\begin{array}{cc}{I}_{m}& {O}_{m\times 2m}\\ {D}^{H}& {I}_{2m}\end{array}\right]\\ =\left[\begin{array}{cc}{A}_{11}^{-1}+D{E}^{-1}{D}^{H}& D{E}^{-1}\\ {\left(D{E}^{-1}\right)}^{H}& {E}^{-1}\end{array}\right]\end{array}$

where ${E}^{-1}$ can be derived by Equation (17). The computational cost of inverting a $3\times 3$ block matrix is then

${x}_{3}={x}_{2}+C\left(m\right)+11{m}^{3}$ (20)

Theorem 2. Assume that $A=\left[\begin{array}{cccc}{A}_{11}& {A}_{12}& {A}_{13}& {A}_{14}\\ {A}_{12}^{H}& {A}_{22}& {A}_{23}& {A}_{24}\\ {A}_{13}^{H}& {A}_{23}^{H}& {A}_{33}& {A}_{34}\\ {A}_{14}^{H}& {A}_{24}^{H}& {A}_{34}^{H}& {A}_{44}\end{array}\right]$, where ${A}_{11}$, ${A}_{22}$, ${A}_{33}$ and ${A}_{44}$ are Hermitian matrices of order m and ${A}_{12}$, ${A}_{13}$, ${A}_{14}$, ${A}_{23}$, ${A}_{24}$ and ${A}_{34}$ are $m\times m$ matrices. Then

${A}^{-1}=\left[\begin{array}{cc}{A}_{11}^{-1}+D{E}^{-1}{D}^{H}& D{E}^{-1}\\ {\left(D{E}^{-1}\right)}^{H}& {E}^{-1}\end{array}\right]$ (21)

where $D=\left(-{A}_{11}^{-1}{A}_{12},-{A}_{11}^{-1}{A}_{13},-{A}_{11}^{-1}{A}_{14}\right)$,

$E=\left[\begin{array}{ccc}{A}_{22}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{12}& {A}_{23}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{13}& {A}_{24}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{14}\\ {\left({A}_{23}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{13}\right)}^{H}& {A}_{33}-{A}_{13}^{H}{A}_{11}^{-1}{A}_{13}& {A}_{34}-{A}_{13}^{H}{A}_{11}^{-1}{A}_{14}\\ {\left({A}_{24}-{A}_{12}^{H}{A}_{11}^{-1}{A}_{14}\right)}^{H}& {\left({A}_{34}-{A}_{13}^{H}{A}_{11}^{-1}{A}_{14}\right)}^{H}& {A}_{44}-{A}_{14}^{H}{A}_{11}^{-1}{A}_{14}\end{array}\right]$

The proof is analogous to the above, where ${E}^{-1}$ can be derived by Equation (19). The computational cost of inverting a $4\times 4$ block matrix is then

${x}_{4}={x}_{3}+C\left(m\right)+21{m}^{3}$ (22)

Illustration: In order to derive the inversion formula of an $n\times n$ block matrix, we first solve ${A}_{11}^{-1}$ and then construct the matrices D and E. Noting that E is an $\left(n-1\right)\times \left(n-1\right)$ block matrix, ${E}^{-1}$ can be solved by applying the inversion formula of an $\left(n-1\right)\times \left(n-1\right)$ block matrix. By analogy, ${A}^{-1}$ can finally be solved by applying the “combination inversion formula”. This process is the basis for deriving the inversion computational cost of a block matrix.

In summary, consider the general formula for the computational cost of inverting an $n\times n$ block matrix. The cost satisfies the recursion

${x}_{n}={x}_{n-1}+C\left(m\right)+\frac{5}{2}\left(n-1\right){m}^{3}+\frac{3}{2}{\left(n-1\right)}^{2}{m}^{3}$ (23)

Combining Equations (18), (20) and (22), we can derive the general formula

${x}_{n}=nC\left(m\right)+\frac{1}{2}\left({\left(n-1\right)}^{3}+4{\left(n-1\right)}^{2}+3\left(n-1\right)\right){m}^{3}$ (24)
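The general formula of Equation (24) can be checked against the special cases of Equations (18), (20) and (22). The short sketch below assumes $C\left(m\right)={m}^{3}$, the value used later in Section 3.3:

```python
def x_n(n, m, C=lambda m: m**3):
    # Eq. (24): multiplication count for inverting an n x n block matrix of
    # m-order blocks, with C(m) the cost of one m-order matrix inversion.
    return n * C(m) + ((n - 1)**3 + 4 * (n - 1)**2 + 3 * (n - 1)) * m**3 // 2

m = 3
x2 = 2 * m**3 + 4 * m**3                           # Eq. (18) with C(m) = m^3
assert x_n(2, m) == x2
assert x_n(3, m) == x2 + m**3 + 11 * m**3          # Eq. (20)
assert x_n(4, m) == x_n(3, m) + m**3 + 21 * m**3   # Eq. (22)
```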

2.3.2. Principles of Algorithm

Assume that U is segmented equally into K parts, denoted ${U}_{i}$, $i=1,\cdots ,K$, where the length of each segment is $k=M/K$. The STCM ${R}_{stmv}\left(\theta \right)$ is then segmented equally into $K\times K$ blocks, each of order k. Considering that ${R}_{stmv}\left(\theta \right)$ is a Hermitian matrix, Equation (12) can be represented as

${R}_{stmv}\left(\theta \right)=\left[\begin{array}{cccc}{\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{1}^{H}}& {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{2}^{H}}& \cdots & {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{K}^{H}}\\ {\left({\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{2}^{H}}\right)}^{H}& {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{2}{U}_{2}^{H}}& \cdots & {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{2}{U}_{K}^{H}}\\ \vdots & \vdots & \ddots & \vdots \\ {\left({\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{K}^{H}}\right)}^{H}& {\left({\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{2}{U}_{K}^{H}}\right)}^{H}& \cdots & {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{K}{U}_{K}^{H}}\end{array}\right]$ (25)

Assume that the STCM ${\stackrel{^}{R}}_{stmv}^{}(n)$ estimated using the nth snapshot is segmented equally into $K\times K$ blocks, denoted ${\stackrel{^}{R}}_{stmv}^{ij}(n)$, $i,j=1,\mathrm{...},K$. Considering that ${\stackrel{^}{R}}_{stmv}^{}(n)$ is a Hermitian matrix, Equation (13) can be represented as

$\begin{array}{l}\left[\begin{array}{cccc}{\stackrel{^}{R}}_{stmv}^{11}(n)& {\stackrel{^}{R}}_{stmv}^{12}(n)& \cdots & {\stackrel{^}{R}}_{stmv}^{1K}(n)\\ {\left({\stackrel{^}{R}}_{stmv}^{12}(n)\right)}^{H}& {\stackrel{^}{R}}_{stmv}^{22}(n)& \cdots & {\stackrel{^}{R}}_{stmv}^{2K}(n)\\ \vdots & \vdots & \ddots & \vdots \\ {\left({\stackrel{^}{R}}_{stmv}^{1K}(n)\right)}^{H}& {\left({\stackrel{^}{R}}_{stmv}^{2K}(n)\right)}^{H}& \cdots & {\stackrel{^}{R}}_{stmv}^{KK}(n)\end{array}\right]\\ =\alpha \left[\begin{array}{cccc}{\stackrel{^}{R}}_{stmv}^{11}(n-1)& {\stackrel{^}{R}}_{stmv}^{12}(n-1)& \cdots & {\stackrel{^}{R}}_{stmv}^{1K}(n-1)\\ {\left({\stackrel{^}{R}}_{stmv}^{12}(n-1)\right)}^{H}& {\stackrel{^}{R}}_{stmv}^{22}(n-1)& \cdots & {\stackrel{^}{R}}_{stmv}^{2K}(n-1)\\ \vdots & \vdots & \ddots & \vdots \\ {\left({\stackrel{^}{R}}_{stmv}^{1K}(n-1)\right)}^{H}& {\left({\stackrel{^}{R}}_{stmv}^{2K}(n-1)\right)}^{H}& \cdots & {\stackrel{^}{R}}_{stmv}^{KK}(n-1)\end{array}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(1-\alpha \right)\left[\begin{array}{cccc}{\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{1}^{H}}& {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{2}^{H}}& \cdots & {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{K}^{H}}\\ {\left({\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{2}^{H}}\right)}^{H}& {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{2}{U}_{2}^{H}}& \cdots & {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{2}{U}_{K}^{H}}\\ \vdots & \vdots & \ddots & \vdots \\ {\left({\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{1}{U}_{K}^{H}}\right)}^{H}& {\left({\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{2}{U}_{K}^{H}}\right)}^{H}& \cdots & {\displaystyle \underset{i=l}{\overset{h}{\sum}}{U}_{K}{U}_{K}^{H}}\end{array}\right]\end{array}$ (26)

Applying the block matrix inversion formulas to solve ${\stackrel{^}{R}}_{stmv}^{-1}(n)$, the basic procedure is:

1) Solve ${\left({\stackrel{^}{R}}_{stmv}^{11}(n)\right)}^{-1}$ by iterating Equations (15) and (16);

2) Construct the matrices

$D=\left(-{\left({\stackrel{^}{R}}_{stmv}^{11}(n)\right)}^{-1}{\stackrel{^}{R}}_{stmv}^{12}(n),\mathrm{...},-{\left({\stackrel{^}{R}}_{stmv}^{11}(n)\right)}^{-1}{\stackrel{^}{R}}_{stmv}^{1K}(n)\right)$ (27)

$E=\left[\begin{array}{ccc}{\stackrel{^}{R}}_{stmv}^{22}(n)-{\left({\stackrel{^}{R}}_{stmv}^{12}(n)\right)}^{H}{\left({\stackrel{^}{R}}_{stmv}^{11}(n)\right)}^{-1}{\stackrel{^}{R}}_{stmv}^{12}(n)& \cdots & {\stackrel{^}{R}}_{stmv}^{2K}(n)-{\left({\stackrel{^}{R}}_{stmv}^{12}(n)\right)}^{H}{\left({\stackrel{^}{R}}_{stmv}^{11}(n)\right)}^{-1}{\stackrel{^}{R}}_{stmv}^{1K}(n)\\ \vdots & \ddots & \vdots \\ {\left({\stackrel{^}{R}}_{stmv}^{2K}(n)-{\left({\stackrel{^}{R}}_{stmv}^{12}(n)\right)}^{H}{\left({\stackrel{^}{R}}_{stmv}^{11}(n)\right)}^{-1}{\stackrel{^}{R}}_{stmv}^{1K}(n)\right)}^{H}& \cdots & {\stackrel{^}{R}}_{stmv}^{KK}(n)-{\left({\stackrel{^}{R}}_{stmv}^{1K}(n)\right)}^{H}{\left({\stackrel{^}{R}}_{stmv}^{11}(n)\right)}^{-1}{\stackrel{^}{R}}_{stmv}^{1K}(n)\end{array}\right]$ (28)

3) Apply the block matrix inversion formula to solve ${E}^{-1}$ recursively;

4) Apply the “combination inversion formula” to solve ${R}_{stmv}^{-1}(\theta )$.
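As an illustrative sketch of steps 1)-4), the “combination inversion formula” can be implemented recursively. The random Hermitian matrix below is a stand-in for the STCM, `np.linalg.inv` stands in for the iterative update of step 1), and the sizes M = 12, K = 4 are assumed purely for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

def block_inv(A, k):
    # Invert a Hermitian matrix block-recursively: invert the leading k x k
    # block, form D and the Schur complement E, invert E recursively, then
    # assemble the "combination inversion formula" of Equation (19).
    if A.shape[0] == k:
        return np.linalg.inv(A)          # stand-in for the step-1 iteration
    A11_inv = np.linalg.inv(A[:k, :k])
    D = -A11_inv @ A[:k, k:]             # D = (-A11^{-1} A12, ..., -A11^{-1} A1K)
    E = A[k:, k:] + A[k:, :k] @ D        # E: lower blocks minus A1j^H A11^{-1} A1i
    E_inv = block_inv(E, k)              # recurse on the (K-1) x (K-1) block matrix
    DE = D @ E_inv
    return np.block([[A11_inv + DE @ D.conj().T, DE],
                     [DE.conj().T, E_inv]])

M, K = 12, 4
k = M // K
G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = G @ G.conj().T + M * np.eye(M)       # Hermitian positive-definite stand-in STCM
R_inv = block_inv(R, k)
assert np.allclose(R_inv, np.linalg.inv(R))
```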

3. Analysis of Complexity

3.1. Complexity of Classic STMV Algorithm

Assume that the number of array elements is M, the bandwidth of the signal is B, and the CSDM is estimated from the frequency domain output of N snapshots. The computational cost of STMV is mainly concentrated in two parts:

1) The CSDM is estimated with Equation (5). This requires ${M}^{2}$ multiplications at each frequency point, i.e. ${M}^{2}NB$ multiplications in total;

2) The following calculations are necessary for each beam while estimating the spatial spectrum:

a) The STCM is estimated with Equation (6). This requires $2{M}^{3}$ multiplications at each frequency point, i.e. $2{M}^{3}B$ multiplications in total;

b) The inversion of the STCM needs ${M}^{3}$ multiplications, and estimating the spatial spectrum with Equation (9) needs a further ${M}^{2}+M$ multiplications.

In summary, assuming the number of beams is L, the total number of multiplications is

$N{M}^{2}B+\left[{M}^{3}\left(2B+1\right)+{M}^{2}+M\right]L$ (29)

3.2. Complexity of Iterative STMV Algorithm

Assume that the number of array elements is M and the bandwidth of the signal is B. The computational cost of the iterative STMV algorithm is mainly concentrated in two parts:

1) Solving U with $U={\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)\bar{X}\left({f}_{k}\right)$ requires ${M}^{2}$ multiplications at each frequency point, i.e. ${M}^{2}B$ multiplications in total;

2) Each iteration needs $3{M}^{2}+M$ multiplications at each frequency point when solving ${\stackrel{^}{R}}_{stmv}^{-1}(n)$ according to Equations (15) and (16), i.e. $\left(3{M}^{2}+M\right)B$ multiplications in total; estimating the spatial spectrum with Equation (9) then needs a further ${M}^{2}+M$ multiplications.

In summary, assuming the number of beams is L, the total number of multiplications is

$\left[{M}^{2}\left(4B+1\right)+M\left(B+1\right)\right]L$ (30)

3.3. Complexity of Block Iterative STMV Algorithm

Assume that the number of array elements is M, the bandwidth of the signal is B, and U is segmented equally into K parts ${U}_{i}$ of length $k=M/K$, $i=1,\cdots ,K$. The computational cost of the block iterative STMV algorithm is mainly concentrated in five parts:

1) Solving U with $U={\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)\bar{X}\left({f}_{k}\right)$ requires ${M}^{2}$ multiplications at each frequency point, i.e. ${M}^{2}B$ multiplications in total;

2) Each iteration needs $3{k}^{2}+k$ multiplications at each frequency point when solving ${\left({\stackrel{^}{R}}_{stmv}^{11}(n)\right)}^{-1}$ according to Equations (15) and (16), i.e. $\left(3{k}^{2}+k\right)B$ multiplications in total;

3) In order to construct the matrices D and E, it is necessary to solve ${\stackrel{^}{R}}_{stmv}^{12}(n),\mathrm{...},{\stackrel{^}{R}}_{stmv}^{1K}(n)$, ${\stackrel{^}{R}}_{stmv}^{22}(n),\mathrm{...},{\stackrel{^}{R}}_{stmv}^{2K}(n)$, …, ${\stackrel{^}{R}}_{stmv}^{KK}(n)$, which requires $\left(\frac{K\left(K+1\right)}{2}-1\right){k}^{2}B$ multiplications in total;

4) Applying the block matrix inversion formula to solve ${E}^{-1}$ recursively requires $\left(K-1\right)C\left(k\right)+\frac{1}{2}\left({\left(K-2\right)}^{3}+4{\left(K-2\right)}^{2}+3\left(K-2\right)\right){k}^{3}$ multiplications;

5) Applying the “combination inversion formula” to solve ${R}_{stmv}^{-1}(\theta )$ requires ${\left(K-1\right)}^{2}{k}^{3}+\left(K-1\right){k}^{3}$ multiplications; estimating the spatial spectrum with Equation (9) then needs a further ${M}^{2}+M$ multiplications.

In summary, assuming the number of beams is L, the total number of multiplications is

$\begin{array}{l}\left({M}^{2}B+\left(3{k}^{2}+k\right)B+\left(\frac{K\left(K+1\right)}{2}-1\right){k}^{2}B+\left(K-1\right)C\left(k\right)\right)L\\ +\left(\frac{1}{2}\left({\left(K-2\right)}^{3}+4{\left(K-2\right)}^{2}+3\left(K-2\right)\right){k}^{3}+{\left(K-1\right)}^{2}{k}^{3}+\left(K-1\right){k}^{3}+{M}^{2}+M\right)L\end{array}$ (31)

where $C\left(k\right)$ is the computational cost of inverting a k-order matrix, $C\left(k\right)={k}^{3}$. Considering meanwhile that $k=M/K$, Equation (31) can be simplified to

$\left(\left(\frac{3}{2}+\frac{1}{2K}+\frac{2}{{K}^{2}}\right){M}^{2}B+\frac{1}{K}MB+\left(\frac{1}{2}-\frac{1}{2{K}^{2}}\right){M}^{3}+{M}^{2}+M\right)L$ (32)

Illustration: In fact, considering that ${\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)$ is a diagonal matrix, the cost of solving U with $U={\stackrel{\dot{}}{T}}_{p}\left({f}_{k},\theta \right)\bar{X}\left({f}_{k}\right)$ can be reduced to MB multiplications. The total number of multiplications is then

$\left(\left(\frac{1}{2}+\frac{1}{2K}+\frac{2}{{K}^{2}}\right){M}^{2}B+\left(1+\frac{1}{K}\right)MB+\left(\frac{1}{2}-\frac{1}{2{K}^{2}}\right){M}^{3}+{M}^{2}+M\right)L$ (33)

Assume that the bandwidth of the signal is $B=5\text{\hspace{0.17em}}\text{kHz}$ and a beam is formed every two degrees; that is, the spatial spectrum is estimated at $L=91$ beams. The number of snapshots is $N=4$ and $K=20$. The ratio of the computational cost of the STMV algorithm to that of the block iterative algorithm for various array numbers M is then shown in Figure 1.

Dividing Equation (30) by Equation (29) shows that the computational cost of the iterative algorithm proposed in paper [8] is approximately $2/M$ times that of the STMV algorithm. For ease of analysis, Equations (29) and (33) can be simplified to $2{M}^{3}BL$ and $\frac{\left({M}^{2}B+{M}^{3}\right)L}{2}+\frac{{M}^{2}BL}{2K}$, respectively. Taking both $L\gg N$ and $B\gg M$ into consideration, the ratio of the computational cost of the two algorithms is

$\frac{\frac{\left({M}^{2}B+{M}^{3}\right)L}{2}+\frac{{M}^{2}BL}{2K}}{2{M}^{3}BL}=\frac{1+\frac{M}{B}+\frac{1}{K}}{4M}\approx \frac{1}{4M}$ (34)

Figure 1. Ratio of computational cost of two algorithms.

The computational cost of the algorithm proposed in this paper is $1/\left(4M\right)$ times that of the original algorithm, and 1/8 times that of the iterative algorithm. The computational cost drops dramatically when $M\gg 5$.
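These cost expressions are easy to evaluate directly. The sketch below codes Equations (29) and (33) verbatim and, for the parameter values given above (M = 48, B = 5000, N = 4, L = 91, K = 20), confirms a saving of well over two orders of magnitude, consistent with the $1/\left(4M\right)$ estimate:

```python
# Multiplication counts of Eq. (29) (classic STMV) and Eq. (33) (block
# iterative STMV). Parameter values follow the text above.
def cost_stmv(M, B, N, L):
    return N * M**2 * B + (M**3 * (2 * B + 1) + M**2 + M) * L

def cost_block(M, B, K, L):
    return ((0.5 + 0.5 / K + 2.0 / K**2) * M**2 * B
            + (1.0 + 1.0 / K) * M * B
            + (0.5 - 0.5 / K**2) * M**3 + M**2 + M) * L

M, B, N, L, K = 48, 5000, 4, 91, 20
ratio = cost_stmv(M, B, N, L) / cost_block(M, B, K, L)
assert ratio > 100  # saving of well over two orders of magnitude for M = 48
```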

4. Simulation and Analysis

4.1. Azimuth Resolution

The simulation conditions are as follows. The two source signals and the background noise, defined over the same frequency band 0.05 fs - 0.25 fs, are Gaussian white noise and independent of each other. The signal-to-noise ratio (SNR) of each source is 0 dB, the sampling frequency is fs and the sound speed is 1500 m/s. The receiver is a uniformly spaced linear array of 48 sensors with inter-element spacing d corresponding to the maximum frequency. The length of each snapshot is 0.8 s and the integration time of the beamformer is 3.2 s, with $\alpha =2/3$. The bearing angles of the two sources are assumed in two conditions: 80˚ and 88˚, and 80˚ and 84˚. The spatial spectra estimated by the STMV algorithm and the block iterative STMV algorithm ( $K=6$ ) are shown in Figure 2. For bearing angles of 80˚ and 84˚, both algorithms are simulated independently 5 times and the results are shown in Figure 3.

The simulation results show that, compared to the STMV algorithm, there is no significant change in either azimuth resolution or main lobe width for the block iterative STMV algorithm. The only change is that the side lobes fluctuate slightly. In summary, the azimuth resolution of the block iterative STMV algorithm shows no significant decrease while the computational cost is reduced sharply.

4.2. Ability of Detecting Weak Source

The simulation conditions are as follows. There is an interference source at a bearing angle of 100˚ with SNR = 5 dB, and a weak target source at a bearing angle of 80˚. The simulation covers two conditions, SNR = −15 dB and SNR = −20 dB; the other parameters are the same as before. The spatial spectra estimated by the STMV algorithm and the block iterative STMV algorithm are shown in Figure 4. In the SNR = −20 dB condition, both algorithms are simulated independently 5 times and the results are shown in Figure 5.


Figure 2. Comparison of two algorithms on spatial spectrum. (a) Two sources lie at 80˚ and 88˚; (b) Two sources lie at 80˚ and 84˚.


Figure 3. Comparison of two algorithms on spatial spectrum. (a) STMV algorithm; (b) Block iterative STMV algorithm.


Figure 4. Comparison of two algorithms on spatial spectrum in different SNR. (a) SNR = 5 dB and −15 dB; (b) SNR = 4 dB and −20 dB.


Figure 5. Comparison of two algorithms on spatial spectrum. (a) STMV algorithm; (b) Block iterative STMV algorithm.

The simulation results show that the block iterative STMV algorithm inherits the weak-target detection ability of the STMV algorithm while decreasing the computational cost dramatically. Moreover, because the side lobes of the spatial spectrum are stationary, the weak target source can be detected clearly. This algorithm can thus detect a weak target source under strong coherent interference.

5. The Sea Trial Data Analysis

Sea trial data are analyzed to verify the adaptability and stability of the block iterative STMV algorithm. The trial configuration is shown in Figure 6: a uniformly spaced linear array of 48 sensors is towed behind a towboat sailing in a straight line on an original course of 0˚. Target 2 originally lies at a bearing angle of about 75˚ and sails in the direction of 0˚. Target 4 sails in the direction of the array. During the experiment shipping traffic was heavy; compared with targets 2 and 4, targets 1 and 3 are far away from the towboat, while the radiated noise of targets 2 and 4 is weak.

The spatial spectra estimated with the STMV algorithm and the block iterative STMV algorithm, after the data are filtered with a differential whitening filter, are shown in Figure 7 and Figure 8.

It is not hard to see from Figure 7 that there are five targets, where the target at a bearing angle of about 20˚ is the towboat. Because the target at a bearing angle of about 100˚ (target 3) is far away from the towboat, its bearing changes slowly. The target at a bearing angle of about 80˚ (target 1), whose bearing changes fast, crosses bearings with target 3 at about 40 s, but the two targets can still be discriminated during the short time before and after the crossing moment, which further proves the high azimuth resolution of the block iterative STMV algorithm. Besides, from Figure 8 we can see that there are two weak targets at a bearing angle of about 65˚, whose bearings first approach and then separate.

To further illustrate the weak-target detection ability, the spatial spectra at the 27th, 45th and 126th moments are shown in Figures 9-11. Although the SNR of targets 2 and 4 is about 20 dB lower than that of targets 1 and 3, these two weak targets can still be detected reliably.

Figure 6. Trial frame.

Figure 7. Diagram of STMV.

Figure 8. Diagram of block iterative STMV.

Figure 9. Spatial spectrum at 27 s.

Figure 10. Spatial spectrum at 45 s.

Figure 11. Spatial spectrum at 126 s.

Multiple targets with both strong and weak SNR are detected by the block iterative STMV algorithm in Figures 9-11. Compared with STMV, the directivity index for the targets is improved by 1 - 2 dB, the azimuth discrimination is higher, and the power at background azimuths without targets is suppressed by about 1 dB. The reason is the order of averaging over the continuous sequence: the covariance matrices are summed first in Equation (5) for the STMV algorithm, whereas the spectra are summed first in Equation (10) for the block iterative STMV algorithm; the latter exploits the correlation of the target's radiated signal and therefore achieves better performance. In the simulation, both the target signal and the background noise are uncorrelated Gaussian noise, so the same gains cannot be observed in Figure 4 and Figure 5.
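Equations (5) and (10) appear in earlier sections not reproduced here; the following minimal Python sketch only illustrates the two averaging orders described above. The steering vector, the 4-sensor array, and the diagonal loading (added so that rank-deficient covariance estimates remain invertible) are assumptions for the sake of a self-contained example, not the paper's exact formulation.

```python
import numpy as np

def mvdr_power(R, a, eps=1e-6):
    """MVDR spatial power 1 / (a^H R^{-1} a), with diagonal loading eps."""
    Rinv = np.linalg.inv(R + eps * np.eye(R.shape[0]))
    return 1.0 / np.real(a.conj() @ Rinv @ a)

def power_matrices_first(snapshots, a):
    """STMV-style order: average the covariance matrices over the
    sequence first (cf. Equation (5)), then form one spectrum value."""
    R = sum(np.outer(x, x.conj()) for x in snapshots) / len(snapshots)
    return mvdr_power(R, a)

def power_spectra_first(snapshots, a):
    """Block iterative order: form a spectrum value per snapshot first,
    then average the spectra (cf. Equation (10))."""
    return float(np.mean([mvdr_power(np.outer(x, x.conj()), a)
                          for x in snapshots]))

# hypothetical 4-sensor half-wavelength line array steered to one bearing
rng = np.random.default_rng(0)
M = 4
a = np.exp(-1j * np.pi * np.arange(M) * np.sin(0.3))
snapshots = [rng.standard_normal(M) + 1j * rng.standard_normal(M)
             for _ in range(16)]
print(power_matrices_first(snapshots, a), power_spectra_first(snapshots, a))
```

The two orderings agree for stationary uncorrelated noise but diverge when successive snapshots of a target are correlated, which is the effect the sea trial data exhibit and the Gaussian simulation does not.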

6. Conclusion

First, the computational costs of the STMV algorithm and the iterative STMV algorithm are analyzed; their huge computational cost restricts their application in practical engineering. This paper then proposed the block iterative STMV algorithm, based on the one-phase regressive filter, the matrix inversion lemma and block matrix inversion, and analyzed its computational cost, which is 1/(4M) that of the STMV algorithm. Simulation results show that the algorithm maintains the high azimuth resolution and the weak-target detection advantage of the STMV algorithm. When the algorithm is applied to the sea trial data of a uniformly spaced linear array of 48 sensors, the computational cost is reduced by a factor of about 140. Meanwhile, the algorithm can reliably detect weak targets when several targets with very large power differences and close bearing angles are present. Compared with the STMV algorithm, the block iterative STMV algorithm improves the directivity index for targets by 1 - 2 dB, provides higher azimuth discrimination, and suppresses the background azimuth power by about 1 dB. It also shows good robustness when processing sea trial data, which lays the foundation for its engineering application.

References

[1] Van Trees, L. (2003) Optimum Array Processing. John Wiley & Sons Inc., New York, 48.

[2] Wang, Y.L. (2004) Theory and Algorithm on Spatial Spectrum Estimation. Tsinghua University Press, Beijing. (In Chinese)

[3] Swingler, D. (1999) A Low-Complexity MVDR Beamformer for Use with Short Observation Times. IEEE Transactions on Signal Processing, 47, 1154-1160.
https://doi.org/10.1109/78.752616

[4] Zhou, S.Z. and Du, X.M. (2009) Research and Application of Fast-Convergent Minimum Variance Distortionless Response Algorithm. Chinese Journal of Acoustics, 34, 515-520. (In Chinese)

[5] Lu, Z.X., Zhou, S.Z. and Gao, Y. (2014) Research on SAVSTMV Beamforming Algorithm. Chinese Journal of Acoustics, 33, 477-480. (In Chinese)

[6] Xu, G. and Zhou, S.Z. (2014) Development and Application of MVDR Adaptive Beamforming Technique in Underwater Acoustic. Chinese Journal of Acoustics, 33, 554-558. (In Chinese)

[7] Qian, B., Wang, P. and Wang, X. (2013) Subarray Optimization and Partition Technology in Dimension Reduction Steered Minimum Variance. Journal of Harbin Engineering University, 34, 445-449. (In Chinese)

[8] Zhu, D.Z., et al. (2010) Iterative Algorithm of Steered Minimum Variance and Its Application in Weak Targets Detection. Journal of Shanghai Jiaotong University (Science), 15, 694-701. https://doi.org/10.1007/s12204-010-1071-6

[9] Zhang, X.D. (2004) The Analysis and Application of Matrix. Tsinghua University Press, Beijing. (In Chinese)