On an Application of the Improved Maximum Product Criterion to Inverse Acoustic Scattering in a Layered Medium

Fermin S. Viloche Bazán^{1},
Juliano de Bem Francisco^{1},
Koung Hee Leem^{2},
George Pelekanos^{2},
Vassilios Sevroglou^{3}


1. Introduction

Inverse scattering theory for acoustic waves has been a very active research field in recent years, and its mathematical and modeling techniques are widely used in real-world problems. Acoustic scattering problems arise in a wide range of areas, including medical imaging, ultrasound tomography, nondestructive testing, materials science, and radar. The aim in this research field has been not only to detect but also to identify unknown obstacles through the use of acoustic waves, and a great deal of work has been done on direct scattering problems as well as on inverse ones (see [1] [2] and the references therein).

In particular, inverse scattering problems are exterior problems in which the measurement data are taken outside the scattering obstacles. The paper at hand deals with reconstructions of sound-soft two-dimensional obstacles from time-harmonic acoustic plane waves in a layered background medium. Problems of this type arise in applications where the background is not homogeneous and hence is modeled as a layered medium. This topic was originally investigated by Bondarenko *et al.* [3] within the framework of the factorization method.

As shown in Figure 1, let ${D}_{2}\subset {\mathbb{R}}^{2}$ denote a non-penetrable obstacle, which is an open and bounded domain with a ${C}^{2}$ boundary $\partial {D}_{2}\equiv {S}_{1}$. We also denote the background medium by ${\mathbb{R}}^{2}\backslash {\overline{D}}_{2}$, which is divided by the ${C}^{2}$-boundary ${S}_{0}$ into two connected domains: the bounded homogeneous domain ${D}_{1}$ and the unbounded homogeneous one, ${D}_{0}$ (see Figure 1). Hence the background medium consists of a finite number of homogeneous layers (domains) ${D}_{j}$, where, without loss of generality, we assume that $j=0,1$, i.e. the background medium consists of two layers. Let ${k}_{j}=\omega /{c}_{j}$ be the positive wave number in terms of the frequency $\omega$ and the speed of sound ${c}_{j}$ in the corresponding domain ${D}_{j}$, $j=0,1$. Mathematically, the problem of scattering of time-harmonic acoustic waves in a two-layered medium in ${\mathbb{R}}^{2}$ is described by the following boundary value problem: Determine the total field $u={u}^{inc}+{u}^{sct}$ such that

$\Delta u\left(x\right)+{k}_{0}^{2}u\left(x\right)=0,\quad x\in {D}_{0},$ (1)

$\Delta u\left(x\right)+{k}_{1}^{2}u\left(x\right)=0,\quad x\in {D}_{1},$ (2)

$u\left(x\right)=0,\quad x\in {S}_{1},$ (3)

${u}^{+}\left(x\right)={u}^{-}\left(x\right),\quad \frac{\partial {u}^{+}\left(x\right)}{\partial \nu}-{\mu}_{0}\frac{\partial {u}^{-}\left(x\right)}{\partial \nu}=0,\quad x\in {S}_{0},$ (4)

$\underset{r\to \infty}{\mathrm{lim}}\sqrt{r}\left(\frac{\partial {u}^{sct}\left(x\right)}{\partial r}-i{k}_{0}{u}^{sct}\left(x\right)\right)=0,\quad r:=\left|x\right|,$ (5)

Figure 1. The problem configuration.

where ${u}^{inc}$ and ${u}^{sct}$ denote the incident plane wave field and the corresponding scattered field, respectively. The outward unit normal vector to ${D}_{j}$ is denoted by $\nu$, and ${u}^{+}$, $\frac{\partial {u}^{+}}{\partial \nu}$ (${u}^{-}$, $\frac{\partial {u}^{-}}{\partial \nu}$) denote the limits of $u$, $\frac{\partial u}{\partial \nu}$ on ${S}_{0}$ taken from the outside (inside) of ${S}_{0}$. In addition, ${\mu}_{0}=\frac{{\rho}_{0}}{{\rho}_{1}}$ is a positive constant written in terms of the density ${\rho}_{j}$ of the corresponding medium ${D}_{j}$, $j=0,1$. The Dirichlet boundary condition is applied on ${S}_{1}$ and corresponds to a sound-soft obstacle, whereas the transmission condition is imposed on ${S}_{0}$ and represents the continuity of the medium and the equilibrium of the forces acting on it.

An application of Green’s formula implies that the solution *u* of the direct problem (1)-(5) has the asymptotic behavior [4]

${u}^{sct}\left(x\right)={u}^{\infty}\left(\stackrel{^}{x}\right)\frac{{\text{e}}^{i{k}_{0}r}}{\sqrt{r}}+O\left({r}^{-3/2}\right),\quad r:=\left|x\right|$ (6)

uniformly with respect to $\stackrel{^}{x}=x/\left|x\right|\in \text{S}$ (S is the unit circle) for some analytic function ${u}^{\infty}$, which is called the *far-field pattern* of ${u}^{sct}$ and is defined on S. In the case of the inverse problem, it represents the measured data. In particular, the inverse problem considered here is that of recovering the shape of the imbedded objects from a complete knowledge of the far-field pattern.

The existence, uniqueness and stability of the direct problem, modeled by (1)-(5), have been established by Liu *et al.* in [5]. Most of the theoretical work for the corresponding inverse problem, including a rigorous proof of the factorization method for recovering imbedded obstacles, has been done in [3] within the framework of the factorization method [2] [6], with the use of a mixed reciprocity principle that does not require knowledge of the Green function for the background medium. As a result, the method is more computationally efficient than its counterparts.

Recall that the main idea of the factorization method is that the support of the scattering obstacle is obtained by solving an integral equation of the first kind and noting that the norm of an appropriate indicator function becomes unbounded as a point lying on a rectangular grid containing the scatterer approaches its boundary. Due to the ill-posedness of the corresponding integral equation, Tikhonov regularization with the regularization constant computed via Morozov’s discrepancy principle [7] is employed. It is well known, however, that Morozov’s discrepancy principle is time-consuming and requires *a priori* knowledge of the noise level in the data, something that is often unavailable in real-life applications. Therefore we employ an Improved Maximum Product Criterion (IMPC), developed by Bazán *et al.* [8], which via a fast and efficient algorithm chooses as regularization parameter the critical point associated with the largest local maximum of the product of the regularized solution norm and the corresponding residual norm. The IMPC is an improved version of the Maximum Product Criterion (MPC) that originally appeared in [9]. Moreover, as with MPC, IMPC does not depend on user-specified input parameters (such as a subspace dimension or a truncation parameter) and requires no *a priori* knowledge of the noise level [9]. As mentioned before, IMPC extends the maximum product criterion in a very elegant way, and it is worth mentioning that it has been applied with great success to reconstructing obstacles in acoustics [9] and electromagnetics [8], as well as in elastic scattering applications [10]. In addition to employing IMPC, the introduction of the mixed reciprocity relation avoids the computation of the far field of the Green function by replacing it with the total wave, and therefore renders the resulting algorithm even more attractive from a computational point of view [3].

We organize our paper as follows. In Section 2, using [3] as our main reference, we formulate the two-layered background medium problem and present a mixed reciprocity relation along with a formulation of an appropriate version of the factorization method. In Section 3, after discussing recently introduced indicator functions used for object reconstruction [11] [12], IMPC is revisited in the framework of the factorization method established in Section 2, and evidence is provided on why the method works so well in the various applications tested so far. Using IMPC, the regularization parameter is computed via a fast iterative algorithm which requires no *a priori* knowledge of the noise level in the data. Subsequently, in Section 4, full far-field patterns are generated, and the method is applied to the reconstruction of imbedded obstacles under the assumption that the background medium is given in advance. Concluding remarks are given in Section 5.

2. Formulation of the Problem

As in [3], we let $D={D}_{2}\cup {S}_{1}\cup {D}_{1}$, and we denote the total field by ${u}_{t}={u}^{inc}+{u}^{sct}$, where ${u}^{inc}$ and ${u}^{sct}$ denote the incident plane wave and the corresponding scattered field, respectively. In particular, the scattering of incident plane waves in a two-layered background medium can be formulated as:

$\Delta {u}_{t}+{k}_{0}^{2}{u}_{t}=0,\quad \text{in}\ {D}_{0},$ (7)

$\Delta {u}_{t}+{k}_{1}^{2}{u}_{t}=0,\quad \text{in}\ D,$ (8)

${u}_{t}^{+}={u}_{t}^{-},\quad \frac{\partial {u}_{t}^{+}}{\partial \nu}-{\mu}_{0}\frac{\partial {u}_{t}^{-}}{\partial \nu}=0,\quad \text{on}\ {S}_{0},$ (9)

$\underset{r\to \infty}{\mathrm{lim}}\sqrt{r}\left(\frac{\partial {u}_{t}^{sct}}{\partial r}-i{k}_{0}{u}_{t}^{sct}\right)=0,\quad r:=\left|x\right|,$ (10)

Recall that the fundamental solution of the Helmholtz equation is given by

$\Phi \left(x,y\right)=\frac{i}{4}{H}_{0}^{\left(1\right)}\left(k\left|x-y\right|\right),\quad x\ne y$ (11)

in which ${H}_{0}^{\left(1\right)}$ is the Hankel function of order zero and of the first kind. Now, let ${u}^{\infty}\left(\cdot \mathrm{,}d\right)$ be the far field pattern of the scattered field ${u}^{sct}\left(\cdot \mathrm{,}d\right)$ due to an incident plane wave ${u}^{inc}\left(\cdot \mathrm{,}d\right)$ with the incident direction $d\in \text{S}$. In addition, if the incident field is generated by a point source $\Phi \mathrm{(}\cdot \mathrm{,}z\mathrm{)}$ with source point $z\in {\mathbb{R}}^{2}$, we denote by ${\Phi}^{s}\left(\cdot \mathrm{,}z\right)$ its scattered field and by ${\Phi}^{\infty}\left(\cdot \mathrm{,}z\right)$ its far field pattern. Following Potthast [3] and [13], the following mixed reciprocity relation has been established.

Theorem 1. (Mixed reciprocity relation) For acoustic scattering of plane waves ${u}^{inc}\left(\cdot \mathrm{,}d\right)$, $d\in \text{S}$ and point sources $\Phi \left(\cdot \mathrm{,}z\right)$, $z\in {\mathbb{R}}^{2}$ from an obstacle ${D}_{2}$ we have

${\Phi}^{\infty}\left(\stackrel{^}{x},z\right)=\begin{cases}{u}_{t}\left(z,\stackrel{^}{x}\right), & z\in {D}_{0},\\ {\mu}_{0}{u}_{t}\left(z,\stackrel{^}{x}\right), & z\in D,\end{cases}$ (12)

where ${u}_{t}\left(\cdot \mathrm{,}\stackrel{^}{x}\right)$ solves the transmission problem (7)-(10).

For the proof, we refer the reader to [3].
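For readers who wish to experiment numerically, the fundamental solution (11) is readily evaluated with SciPy's Hankel function. The following is a minimal sketch; the function name is ours, and the wave number $k$ of the respective medium is assumed given:

```python
import numpy as np
from scipy.special import hankel1

def fundamental_solution(x, y, k):
    """Fundamental solution Phi(x, y) of the 2D Helmholtz equation, Eq. (11):
    Phi(x, y) = (i/4) * H_0^(1)(k |x - y|), for x != y."""
    r = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if r == 0.0:
        raise ValueError("Phi(x, y) is singular at x = y")
    return 0.25j * hankel1(0, k * r)
```

Note that $\Phi$ is symmetric in its arguments, since it depends only on $\left|x-y\right|$.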

The far field patterns ${u}^{\infty}\left(\stackrel{^}{x}\mathrm{,}d\right)$, $\stackrel{^}{x}\mathrm{,}d\in \text{S}$ given by (6), define the far field operator $F\mathrm{:}{L}^{2}\left(\text{S}\right)\to {L}^{2}\left(\text{S}\right)$ by

$\left(Fg\right)\left(\stackrel{^}{x}\right)={\displaystyle {\int}_{\text{S}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{u}^{\infty}\left(\stackrel{^}{x}\mathrm{;}d\right)g\left(d\right)\text{d}s\left(d\right)\mathrm{,}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\stackrel{^}{x}\in \text{S}$ (13)

It is well known that the factorization method seeks a regularized solution $g\in {L}^{2}\left(\text{S}\right)$ of the equation

$\left(\tilde{F}{g}_{z}\right)\left(\stackrel{^}{x}\right)=\frac{{\text{e}}^{i\pi /4}}{\sqrt{8\pi {k}_{0}}}{\text{e}}^{-i{k}_{0}\stackrel{^}{x}\cdot z},\quad z\in {\mathbb{R}}^{2}$ (14)

where $\tilde{F}$ is a self-adjoint operator and the right-hand side is chosen as the far-field pattern of the problem-specific Green function. We now proceed with the theoretical background needed for the recovery of the interface, and of the shape and location of the imbedded obstacle.
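In a discrete setting, the operator in (13) and the right-hand side of (14) must be sampled on finitely many directions. The sketch below assumes a rectangle-rule discretization on $n$ equispaced directions (our choice; any quadrature could be used); `far_field_matrix` and `rhs_vector` are our own names, and `U_far` is assumed to hold the far-field samples ${u}^{\infty}\left({\stackrel{^}{x}}_{i};{d}_{j}\right)$:

```python
import numpy as np

def far_field_matrix(U_far):
    """Discretize the far-field operator (13) on n equispaced directions with
    the rectangle rule (spectrally accurate for periodic integrands):
    (F_n)_{ij} = (2*pi/n) * u_inf(xhat_i; d_j)."""
    n = U_far.shape[0]
    return (2.0 * np.pi / n) * U_far

def rhs_vector(z, k0, n):
    """Discretized right-hand side of (14): the far-field pattern of a point
    source located at the sampling point z."""
    angles = 2.0 * np.pi * np.arange(n) / n
    xhat = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # observation directions
    c = np.exp(1j * np.pi / 4) / np.sqrt(8.0 * np.pi * k0)
    return c * np.exp(-1j * k0 * (xhat @ np.asarray(z, dtype=float)))
```

The rectangle rule is the natural choice here because the integrand in (13) is $2\pi$-periodic in the direction angle.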

For recovering the interface, we replace $\tilde{F}$ in Equation (14) by the well-known operator ${\left({F}^{\star}F\right)}^{1/4}$, which characterizes the factorization method as developed by Kirsch in [6]. Hence Equation (14) takes the form

${\left({F}^{\star}F\right)}^{1/4}{g}_{z}=\frac{{\text{e}}^{i\pi /4}}{\sqrt{8\pi {k}_{0}}}{\text{e}}^{-i{k}_{0}\stackrel{^}{x}\cdot z}$ (15)

and the spectral properties of *F* are used for the reconstructions. To be more specific, *F* is normal and compact, which guarantees the existence of a singular system $\left\{{\sigma}_{j}^{c},{u}_{j}^{c},{v}_{j}^{c}\right\}$, $j\in \mathbb{N}$, of *F* with ${v}_{j}^{c}={s}_{j}{u}_{j}^{c}$ and ${s}_{j}\in \mathbb{C}$ with $\left|{s}_{j}\right|=1$; the characterization of the object then depends on a range test, as described in the following theorem due to Kirsch [6].

Theorem 2. For any $z\in {\mathbb{R}}^{2}$ let ${\Phi}_{z}$ denote the right-hand side of Equation (15). Assume that ${k}_{0}^{2}$ is not a Dirichlet eigenvalue of $-\Delta$ in *D*, *i.e.* the corresponding homogeneous problem has only the trivial solution. Then a point $z\in {\mathbb{R}}^{2}$ belongs to *D* if and only if the series

${\displaystyle \sum_{j=1}^{\infty}}\frac{{\left|\left({\Phi}_{z},{v}_{j}^{c}\right)\right|}^{2}}{{\sigma}_{j}^{c}}$ (16)

converges, or equivalently, $z\in D$ if and only if ${\Phi}_{z}$ belongs to the range of the operator ${\left({F}^{\mathrm{*}}F\right)}^{1/4}$.

A consequence of the above result is that if one considers the partial sum of the first *n* terms of the series (16), then for large *n* the partial sum must be large for *z* outside *D*, as the corresponding series fails to converge, and small for *z* inside. The factorization method characterizes the object by plotting a suitable indicator function, which can be defined in several ways. More precisely, one defines an indicator function depending on $z\in {\mathbb{R}}^{2}$ so that the function is relatively large when $z\in D$ and small otherwise. Several indicator functions will be presented and discussed later.

In reconstructing the imbedded obstacle, we employ a variant of the factorization method, called the ${F}_{\#}$-factorization, described in [2] and also successfully used by Bondarenko *et al.* to recover this type of obstacle in [3]. To this end, we replace the left-hand side operator $\tilde{F}$ of Equation (14) by the operator ${F}_{\#}^{1/2}$ with ${F}_{\#}=\left|\mathrm{Re}F\right|+\left|\mathrm{Im}F\right|$, *i.e.* we now obtain the modified integral equation

${F}_{\#}^{1/2}{g}_{z}\left(\stackrel{^}{x}\right)=\frac{{\text{e}}^{i\pi /4}}{\sqrt{8\pi {k}_{0}}}{\text{e}}^{-i{k}_{0}\stackrel{^}{x}\cdot z}\mathrm{,}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}z\in {\mathbb{R}}^{2}\mathrm{,}$ (17)

where the real and imaginary parts of *F* are the self-adjoint operators $\mathrm{Re}F=\frac{1}{2}\left(F+{F}^{*}\right)$ and $\mathrm{Im}F=\frac{1}{2i}\left(F-{F}^{*}\right)$.
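Since $\mathrm{Re}F$ and $\mathrm{Im}F$ are self-adjoint, their absolute values can be computed in the discrete setting by taking the moduli of their eigenvalues. A minimal sketch under this assumption (function names are ours):

```python
import numpy as np

def matrix_abs(A):
    """|A| for a self-adjoint matrix A: keep the eigenvectors and take the
    moduli of the (real) eigenvalues."""
    w, V = np.linalg.eigh(A)
    return (V * np.abs(w)) @ V.conj().T

def f_sharp(F):
    """F_# = |Re F| + |Im F|, with Re F = (F + F*)/2 and Im F = (F - F*)/(2i),
    both of which are self-adjoint."""
    ReF = 0.5 * (F + F.conj().T)
    ImF = (F - F.conj().T) / 2j
    return matrix_abs(ReF) + matrix_abs(ImF)
```

By construction, the resulting matrix is self-adjoint and positive semidefinite, so its eigensystem is well suited to the range test below.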

Next, consider the far-field patterns ${u}_{t}^{\infty}\left(\stackrel{^}{x},d\right)$, $\stackrel{^}{x},d\in \text{S}$, and define the far-field operator ${F}_{0}:{L}^{2}\left(\text{S}\right)\to {L}^{2}\left(\text{S}\right)$ by

$\left({F}_{0}g\right)\left(\stackrel{^}{x}\right)={\displaystyle {\int}_{\text{S}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{u}_{t}^{\infty}\left(\stackrel{^}{x}\mathrm{;}d\right)g\left(d\right)\text{d}s\left(d\right)\mathrm{,}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\stackrel{^}{x}\in \text{S}$ (18)

Then it follows that the scattering operator ${\mathcal{S}}_{0}\mathrm{:}{L}^{2}\left(\text{S}\right)\to {L}^{2}\left(\text{S}\right)$ can be defined by

${\mathcal{S}}_{0}=\mathcal{I}+2i{k}_{0}{\left|\frac{{\text{e}}^{i\pi /4}}{\sqrt{8\pi {k}_{0}}}\right|}^{2}{F}_{0}$ (19)

where $\mathcal{I}$ denotes the identity. It can be shown that ${\mathcal{S}}_{0}$ is unitary [2].

We now formulate the main theorem of the paper, which will allow us to characterize the imbedded obstacle. Its proof can be found in our main reference [3].

Theorem 3. For any $z\in D={D}_{2}\cup {S}_{1}\cup {D}_{1}$, let ${\Phi}_{z}\left(\stackrel{^}{x}\right)$ be defined by ${\Phi}_{z}\left(\stackrel{^}{x}\right)={u}_{t}\left(z,-\stackrel{^}{x}\right)$, $\stackrel{^}{x}\in \text{S}$, where ${u}_{t}\left(\cdot ,-\stackrel{^}{x}\right)$ solves the problem (7)-(10), and assume that ${k}_{1}^{2}$ is not a Dirichlet eigenvalue of $-\Delta$ in ${D}_{2}$. Then a point *z* belongs to ${D}_{2}$ if and only if the series

${\displaystyle \sum_{j=1}^{\infty}}\frac{{\left|\left({\Phi}_{z},{v}_{j}\right)\right|}^{2}}{\left|{\lambda}_{j}\right|}$ (20)

converges, or equivalently, $z\in {D}_{2}$ if and only if ${\Phi}_{z}$ belongs to the range of the operator ${F}_{\#}^{1/2}$. Finally, $\left({\lambda}_{j}\mathrm{,}{v}_{j}\right)$ is the eigensystem of the operator ${F}_{\#}$ which is given by

${F}_{\#}=\left|\mathrm{Re}\left(\left({F}_{0}-F\right){\mathcal{S}}_{0}^{*}\right)\right|+\left|\mathrm{Im}\left(\left({F}_{0}-F\right){\mathcal{S}}_{0}^{*}\right)\right|$ (21)

It is now obvious from the theorem above that the explicit use of the Green function of the background medium (which would correspond to solving a direct problem for each grid point *z*) is avoided thanks to the utilization of the near-field data ${u}^{sct}$ that have already been computed. In addition, as with the interface reconstruction, obstacle reconstructions are also performed through appropriate indicator functions.

3. On Sampling Indicator Functions

In practice, we work with finite dimensional approximations of the linear operator Equation (15) (resp. (17)) and characterize the object through related indicator functions. The following analysis concerns some indicator functions associated with (15), but something similar can be elaborated to determine objects by means of (17). Let ${\left\{{\sigma}_{j},{u}_{j},{v}_{j}\right\}}_{j=1}^{n}$ be a singular system of a finite dimensional approximation ${F}_{n}\in {\mathbb{C}}^{n\times n}$ of the far-field operator *F*, constructed by some numerical method. When using the factorization method, for each sampling point $z\in {\mathbb{R}}^{2}$ we deal with the system of linear equations

${\left({F}_{n}^{*}{F}_{n}\right)}^{1/4}g={r}_{z},$ (22)

where ${F}_{n}^{*}$ denotes the conjugate transpose of ${F}_{n}$ and ${r}_{z}$ denotes the discretized version of the right-hand side in (15), and then we calculate the discrete indicator function

$W\left(z\right)={\left[{\displaystyle \sum_{j=1}^{n}}\frac{{\left|{\alpha}_{j}\right|}^{2}}{{\sigma}_{j}}\right]}^{-1}\doteq {\Vert {g}_{z}\Vert}_{2}^{-2},$ (23)

where ${\alpha}_{j}={v}_{j}^{*}{r}_{z}$ is the Fourier coefficient of ${r}_{z}$ along the *j*th right singular vector of ${F}_{n}$ and ${g}_{z}$ is the unique solution of (22), ${g}_{z}={\left({F}_{n}^{*}{F}_{n}\right)}^{-1/4}{r}_{z}$, where for simplicity this solution is denoted by the same symbol as in the continuous case, see (15).
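In terms of the singular system of ${F}_{n}$, the indicator (23) can be evaluated without explicitly forming ${\left({F}_{n}^{*}{F}_{n}\right)}^{-1/4}$, since ${\left({F}_{n}^{*}{F}_{n}\right)}^{1/4}=V{\Sigma}^{1/2}{V}^{*}$. A minimal sketch (the function name is ours):

```python
import numpy as np

def indicator_W(F_n, r_z):
    """Factorization-method indicator (23):
    W(z) = [ sum_j |alpha_j|^2 / sigma_j ]^{-1} = ||g_z||_2^{-2},
    with alpha_j = v_j^* r_z, using (F_n^* F_n)^{-1/4} = V Sigma^{-1/2} V^*."""
    _, s, Vh = np.linalg.svd(F_n)
    alpha = Vh @ r_z                      # Fourier coefficients of r_z
    return 1.0 / np.sum(np.abs(alpha) ** 2 / s)
```

In a sampling loop, the SVD is computed once and only the matrix-vector product `Vh @ r_z` is repeated for each grid point *z*.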

The factorization method determines the shape of the object by inspecting the sampling points *z* where $W\left(z\right)$ becomes large. The rationale behind such a strategy lies in the fact that ${\Vert {g}_{z}\Vert}_{2}$ remains within reasonable bounds when *z* belongs to the scatterer and becomes large when *z* moves away from the boundary. Actually, because of Theorem 2, for sampling points *z* inside the scatterer it is mandatory that the Fourier coefficients $\left|\left({\Phi}_{z},{v}_{j}^{c}\right)\right|$ decay faster than $\sqrt{{\sigma}_{j}^{c}}$ to ensure convergence of the series (16). This is not the case for *z* outside *D*, because for these sampling points the series diverges and therefore the associated Fourier coefficients should become larger than $\sqrt{{\sigma}_{j}}$ even for small *j*. Most importantly, this observation proves numerically valid for finite approximations of the operator Equation (15), even with noisy far-field data, as one can see in Figure 2, which compares Fourier coefficients for a sampling point inside *D*, denoted as above by ${\alpha}_{j}$, Fourier coefficients for a sampling point outside *D*, denoted by ${\stackrel{^}{\alpha}}_{j}$, and $\sqrt{{\sigma}_{j}}$. This illustration corresponds to the reconstruction of a kite using 64 incident and observation directions, $100\times 100$ sampling points, and far-field data corrupted by additive noise with relative noise level 5%. Data are taken from [9].

Figure 2. Singular values and Fourier coefficients.

From the behavior of the Fourier coefficients described above, it is evident that:

1) For points *z* outside and close to *D*, ${\Vert {g}_{z}\Vert}_{2}$ is necessarily large, as in this case most Fourier coefficients remain above the singular values, so $W\left(z\right)$ becomes very small;

2) As the Fourier coefficients for external points are on average much larger than those for internal points, which generally remain below the singular values, for *z* inside *D* the solution norm ${\Vert {g}_{z}\Vert}_{2}$ will be of moderate size.

Hence, provided the Fourier coefficients behave as described above, $W\left(z\right)$ will be relatively large when *z* belongs to *D* and small for *z* outside. That is, whenever the Fourier coefficients separate into two *well distinguished* groups in terms of size, one group associated with points inside *D* and the other with points outside, several sampling indicator functions can be considered. In fact, a class of sampling indicators that exploit such a separation considers the *q*-norm (*q* = 2 or 1) of the vector $\Omega \alpha$, where $\Omega$ is a diagonal matrix such that, for a chosen natural number $\ell$, $1<\ell \ll n$, ${\Omega}_{i,i}=0$ for $1\le i\le \ell$ and ${\Omega}_{i,i}=1$ otherwise. Then a sampling indicator function can be defined as

${I}_{\text{FC}}\left(z\right)={\Vert \Omega \alpha \Vert}_{q}^{\mu},\text{\hspace{0.17em}}q=2\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{or}\text{\hspace{0.05em}}\text{\hspace{0.17em}}1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mu \ne 0.$ (24)

We recall that the decision to take $n-\ell +1$ Fourier coefficients ${\alpha}_{j}$ in this indicator lies in the fact that ${\Vert \alpha \Vert}_{2}$ remains constant for all sampling points *z*, hence no reconstruction would otherwise be possible. The choice $q=2$, $\mu =1$ leads to a method known as *SVD-tail*, proposed in [14], for which, unlike $W\left(z\right)$, ${I}_{\text{FC}}\left(z\right)$ becomes large for *z* outside *D* and small for *z* inside. Obviously, for ${I}_{\text{FC}}\left(z\right)$ to behave similarly to $W\left(z\right)$ it suffices to take $\mu =-1$. In this work, we have tested the choice $q=1$, which avoids calculating squares and square roots, and our conclusion has been that both choices work relatively well, but show low-contrast profile reconstructions.
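For completeness, the indicator (24) can be sketched as follows; the function name and defaults are ours, and we apply $\Omega$ by dropping the first $\ell$ Fourier coefficients, following the definition of $\Omega$ given above:

```python
import numpy as np

def indicator_FC(F_n, r_z, ell, q=2, mu=-1.0):
    """Indicator (24): I_FC(z) = ||Omega alpha||_q^mu, where Omega zeroes the
    first ell Fourier coefficients; mu = -1 makes the indicator large inside
    D and small outside, mimicking W(z)."""
    _, _, Vh = np.linalg.svd(F_n)
    alpha = Vh @ r_z
    tail = alpha[ell:]                    # Omega applied: drop alpha_1..alpha_ell
    return np.linalg.norm(tail, ord=q) ** mu
```

The choice `q=2, mu=1.0` reproduces the SVD-tail behavior, while the default `mu=-1.0` matches the orientation of $W\left(z\right)$.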

Apart from the indicators in (24), although not explicitly mentioned in the literature, some recently published qualitative methods, referred to as direct sampling methods (DSM), also rely on the separation of Fourier coefficients. As examples we quote the indicators

${I}_{\text{DSM}}\left(z\right)=\left|{r}_{z}^{*}{F}_{n}{r}_{z}\right|=\left|{\left[{\Sigma}^{1/2}\beta \right]}^{*}\left[{\Sigma}^{1/2}\alpha \right]\right|,$ (25)

where $\beta $ is the vector of Fourier coefficients of ${r}_{z}$ with respect to left singular vectors of ${F}_{n}$, $\alpha $ is the vector of Fourier coefficients ${\alpha}_{j}$ defined above and ${\Sigma}^{1/2}$ is the diagonal matrix with diagonal entries equal to $\sqrt{{\sigma}_{j}}$, and

${I}_{\text{DFM1}}\left(z\right)={r}_{z}^{*}{\left({F}_{n}^{*}{F}_{n}\right)}^{1/4}{r}_{z}={\left[{\Sigma}^{1/4}\alpha \right]}^{*}\left[{\Sigma}^{1/4}\alpha \right]={\Vert {\Sigma}^{1/4}\alpha \Vert}_{2}^{2},$ (26)

see [11] [12] for details. We observe in passing that (23) can be regarded as a sampling indicator of a direct sampling method since, like (26), it uses a diagonal weighting matrix,

$W\left(z\right)={\Vert {\Sigma}^{-1/2}\alpha \Vert}_{2}^{-2}\mathrm{.}$ (27)

It is evident that the sampling indicator functions mentioned above avoid solving linear systems and are therefore very efficient in terms of computational effort. From a theoretical point of view, it is known that DSM and DFM1 are equivalent and that the indicators decay as the sampling point *z* moves away from the scatterer, see Theorems 1 and 4 in [15]. What is missing, however, is how the indicators behave for *z* outside and close to *D*, and this may partially be the cause of low-contrast reconstruction profiles as well as of the presence of artifacts such as spurious oscillatory peaks in the corresponding surfaces, although, of course, the numerical results presented in [11] indicate that DFM1 delivers better accuracy than DSM.
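Both indicators admit short implementations from the SVD of ${F}_{n}$; the following sketch (function names are ours) mirrors (25) and (26):

```python
import numpy as np

def indicator_DSM(F_n, r_z):
    """Direct sampling indicator (25): |r_z^* F_n r_z|."""
    return abs(r_z.conj() @ F_n @ r_z)

def indicator_DFM1(F_n, r_z):
    """Indicator (26): r_z^* (F_n^* F_n)^{1/4} r_z = ||Sigma^{1/4} alpha||_2^2
    = sum_j sqrt(sigma_j) |alpha_j|^2."""
    _, s, Vh = np.linalg.svd(F_n)
    alpha = Vh @ r_z
    return float(np.sum(np.sqrt(s) * np.abs(alpha) ** 2))
```

Note that `indicator_DSM` needs no factorization at all, which explains its low cost, while `indicator_DFM1` reuses one SVD across all sampling points.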

3.1. Understanding Drawbacks Attributed to DSM and DFM Indicators

In this section we intend, to some extent, to explain why the graphs of the indicator functions (25)-(26) include spurious oscillatory peaks. For this, it will be sufficient to show, not with proofs or theorems but with strong arguments, that the frequency content of ${I}_{\text{DFM1}}$ is high and that high frequency modes in the indicator are propagated for *z* outside *D*. The oscillatory behavior of the Fourier coefficients seen in Figure 2 is the key piece in our analysis and therefore requires justification. For this, we take *n* equidistantly distributed angles on the unit circle, ${\omega}_{j}=2\pi \left(j-1\right)/n$, $j=1,\cdots ,n$, and set ${t}_{j}=\left(\mathrm{cos}{\omega}_{j},\mathrm{sin}{\omega}_{j}\right)$. With this notation, we see that the discretized version of the right-hand side of the operator Equation (15) reads

${r}_{z}=\frac{{\text{e}}^{i\pi /4}}{\sqrt{8\pi {k}_{0}}}{\left[{\text{e}}^{-i{k}_{0}{t}_{1}\cdot z},{\text{e}}^{-i{k}_{0}{t}_{2}\cdot z},\cdots ,{\text{e}}^{-i{k}_{0}{t}_{n}\cdot z}\right]}^{\text{T}}$

where the superscript T denotes transpose of a vector or matrix, and thus the Fourier coefficient ${\alpha}_{j}$ can be expressed as

${\alpha}_{j}={v}_{j}^{*}{r}_{z}=\frac{{\text{e}}^{i\pi /4}}{\sqrt{8\pi {k}_{0}}}{\displaystyle \sum_{k=1}^{n}}{\overline{V}}_{k,j}{\text{e}}^{-i{k}_{0}{t}_{k}\cdot z},$ (28)

where the bar denotes complex conjugation. This shows that ${\alpha}_{j}$ involves a sum of weighted harmonics, with potentially high frequency content, whose oscillatory weights are the components of the singular vectors ${v}_{j}$. The weights oscillate because the singular vectors ${v}_{j}$ oscillate more strongly as *j* grows, a fact that is well known in connection with discrete inverse problems, see, for example, [16]. This justifies the oscillatory behavior of the ${\alpha}_{j}$ displayed in Figure 2.

To complete our explanation, notice now that because the indicator computes the inner product of the vector ${\Sigma}^{1/4}\alpha$ with itself, the highest frequency in the indicator will be at least twice the highest frequency in $\alpha$. Hence, oscillatory peaks in the indicator become completely natural because the weights $\sqrt[4]{{\sigma}_{j}}$ propagate the high oscillations in $\alpha$, this being more pronounced for *z* outside *D*, in which case $\left|{\alpha}_{j}\right|$ gets larger. To illustrate this, a 3D graph of ${I}_{\text{DFM1}}\left(z\right)$ for the far-field data used to build Figure 2 is presented in Figure 3, where spurious peaks are evident.

We close the section with the observation that the effect of the oscillatory $\left|{\alpha}_{j}\right|$ on $W\left(z\right)$ is not so dramatic, since for *z* outside *D* the weight $1/\sqrt{{\sigma}_{j}}$ implies that ${\Vert {g}_{z}\Vert}_{2}$ gets large, which is good for separating the indicator behavior, while for *z* inside, since on average $\left|{\alpha}_{j}\right|<\sqrt{{\sigma}_{j}}$, see Figure 2, the coefficient $\left|{\alpha}_{j}\right|/\sqrt{{\sigma}_{j}}$ remains relatively small and so ${\Vert {g}_{z}\Vert}_{2}$ remains of moderate size. That is, although, as already commented, the reconstructed profiles obtained with $W\left(z\right)$ suffer from low contrast, spurious oscillatory peaks are mitigated. A similar observation holds true for ${I}_{\text{FC}}\left(z\right)$.

To overcome the above drawbacks, post-filtering strategies have been proposed recently for the above DSM methods [17], whereas for Kirsch’s factorization method (FM) the ad hoc strategy has been to use Tikhonov regularization, often based on Morozov’s generalized discrepancy principle (GDP). However, while filtering did not completely succeed in eliminating artifacts for DSMs, the performance of Tikhonov regularization has been highly satisfactory in several reconstruction problems, especially when the regularization parameter is selected via the Improved Maximum Product Criterion (IMPC). Recall that, unlike GDP, which requires *a priori* knowledge of the noise level in the data, IMPC does not require any user-specified input parameters such as the noise level or others.

To illustrate the superior reconstruction quality of the Tikhonov-regularized method coupled with IMPC over the above DSMs, we normalize the indicators by dividing them by their maximum values and use level curves to recover the profile of the obstacle. Level curves at the values 0.5, 0.6, 0.7, and 0.8 for the indicators ${I}_{\text{DFM1}}\left(z\right)$, $W\left(z\right)$ and the Tikhonov-based approach coupled with IMPC, labeled here W-FP, are displayed in Figure 4.

Figure 3. Mesh version of ${I}_{\text{DFM1}}$.

The behavior of the level curves confirms two facts: first, that the reconstructed profiles obtained by ${I}_{\text{DFM1}}\left(z\right)$ are indeed contaminated by spurious peaks and, second, that the Tikhonov-regularized version of $W\left(z\right)$ coupled with IMPC significantly mitigates the effects of the oscillatory ${\alpha}_{j}$.

In addition to the level curves displayed in Figure 4, to further demonstrate the superior reconstruction quality of W-FP over the other indicators, in Figure 5 we display flat surface plots for ${I}_{\text{DFM1}}\left(z\right)$, $W\left(z\right)$, W-FP and ${I}_{\text{FC}}$, where the last one is denoted by AFC and corresponds to $q=-\mu =1$, $p=10$. Flat surface plots are obtained using the built-in MATLAB function *pcolor*. Although this particular form of visualization hides the oscillating peaks, the same is not true for the background, where small oscillations are evident. The low reconstruction contrast of the profiles obtained with $W\left(z\right)$ and ${I}_{\text{FC}}$ is also evident.

Despite the fact that DSMs are theoretically stable and computationally cheap [11] [15], motivated by their shortcomings and by the superior quality of the reconstructions obtained with W-FP, perhaps the most important observation here is that if the main objective is the quality of the reconstruction, one should choose W-FP. With this motivation, DSMs will no longer be used in this paper, and to justify the use of IMPC, in the next section we revisit IMPC and describe new theoretical properties.

3.2. Revisiting IMPC and GDP

We first revisit a modified version of (23) based on Tikhonov regularization, where the regularization parameter is chosen by the improved maximum product criterion (IMPC) [9]. In this case, the indicator function takes the form

$W\left(z,\lambda \right)={\left[{\displaystyle \underset{j=1}{\overset{n}{\sum}}}\frac{{\sigma}_{j}{\left|{\alpha}_{j}\right|}^{2}}{{\left({\lambda}^{2}+{\sigma}_{j}\right)}^{2}}\right]}^{-1}\doteq {\Vert {g}_{\lambda ,z}\Vert}_{2}^{-2},$ (29)

Figure 4. Level curves for ${I}_{\text{DFM1}}\left(z\right)$, $W\left(z\right)$ and W-FP.

Figure 5. Reconstructed objects using four indicator functions.

with

${g}_{\lambda ,z}=\mathrm{arg}\underset{g\in {\mathbb{C}}^{n}}{\mathrm{min}}\left\{{\Vert {\left({F}_{n}^{*}{F}_{n}\right)}^{1/4}g-{r}_{z}\Vert}_{2}^{2}+{\lambda}^{2}{\Vert g\Vert}_{2}^{2}\right\},$ (30)

where the regularization parameter $\lambda >0$ is a critical point of the product of the regularized solution norm and the corresponding residual norm. Let ${\rho}_{\lambda \mathrm{,}z}$ be the residual vector corresponding to the regularized solution ${g}_{\lambda \mathrm{,}z}$, that is, ${\rho}_{\lambda \mathrm{,}z}={r}_{z}-{\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/4}{g}_{\lambda \mathrm{,}z}$. From the SVD of ${F}_{n}$, it is simple to check that ${g}_{\lambda \mathrm{,}z}$, ${\rho}_{\lambda \mathrm{,}z}$ and the corresponding squared norms are given by

${g}_{\lambda ,z}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\phi}_{i}^{\left[\lambda \right]}\frac{{v}_{i}^{*}{r}_{z}}{\sqrt{{\sigma}_{i}}}{v}_{i},\text{\hspace{1em}}{\rho}_{\lambda ,z}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left(1-{\phi}_{i}^{\left[\lambda \right]}\right){v}_{i}^{*}{r}_{z}{v}_{i},$ (31)

${\Vert {g}_{\lambda ,z}\Vert}_{2}^{2}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\left({\phi}_{i}^{\left[\lambda \right]}\right)}^{2}\frac{{\left|{\alpha}_{i}\right|}^{2}}{{\sigma}_{i}},\text{\hspace{1em}}{\Vert {\rho}_{\lambda ,z}\Vert}_{2}^{2}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\left(1-{\phi}_{i}^{\left[\lambda \right]}\right)}^{2}{\left|{\alpha}_{i}\right|}^{2}$ (32)

where ${\phi}_{i}^{\left[\lambda \right]}$ are the Tikhonov filter factors,

${\phi}_{i}^{\left[\lambda \right]}=\frac{{\sigma}_{i}}{{\sigma}_{i}+{\lambda}^{2}}\approx \{\begin{array}{ll}1,\hfill & \sqrt{{\sigma}_{i}}\gg \lambda \hfill \\ {\sigma}_{i}/{\lambda}^{2},\hfill & \sqrt{{\sigma}_{i}}\ll \lambda \hfill \end{array}$ (33)

It is well known that ${\Vert {g}_{\lambda ,z}\Vert}_{2}^{2}$ and ${\Vert {\rho}_{\lambda ,z}\Vert}_{2}^{2}$ vary monotonically with $\lambda $, the former being decreasing and the latter increasing [16]. Recall that if $\text{r}\left(\lambda \right)=\Vert {\rho}_{\lambda ,z}\Vert $, $\text{s}\left(\lambda \right)=\Vert {g}_{\lambda ,z}\Vert $, IMPC selects as regularization parameter the largest maximum point of

$\Psi \left(\lambda \right)={\left[\text{r}\left(\lambda \right)\right]}^{2}{\left[\text{s}\left(\lambda \right)\right]}^{2},$ (34)

which is simple to compute via a fast fixed-point procedure [8] [9]. It is instructive to notice that

${\Psi}^{\prime}\left(\lambda \right)=\left({\left[\text{r}\left(\lambda \right)\right]}^{2}-{\lambda}^{2}{\left[\text{s}\left(\lambda \right)\right]}^{2}\right){\left\{{\left[\text{s}\left(\lambda \right)\right]}^{2}\right\}}^{\prime}.$ (35)

Hence, since ${\left\{{\left[\text{s}\left(\lambda \right)\right]}^{2}\right\}}^{\prime}<0$ for all $\lambda >0$, as seen in [9], $\stackrel{\u2323}{\lambda}$ is a maximum point of $\Psi $ if and only if $\stackrel{\u2323}{\lambda}$ is a fixed point of $\Phi \left(\lambda \right)=\text{r}\left(\lambda \right)/\text{s}\left(\lambda \right)$ with ${\Phi}^{\prime}\left(\stackrel{\u2323}{\lambda}\right)>1$ [9]. For convergence details and the practical implementation of IMPC the reader is referred to [8].
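Once the SVD of ${F}_{n}$ is available, $\Psi $ in (34) is inexpensive to evaluate through the filter-factor formulas (31)-(33). The following Python/NumPy sketch locates its largest maximum by a brute-force grid search over $\left[\sqrt{{\sigma}_{n}}\mathrm{,}\sqrt{{\sigma}_{1}}\right]$; the synthetic singular values and Fourier coefficients here merely stand in for real far-field data, and the fast fixed-point scheme of [8] [9] achieves the same result at much lower cost.

```python
import numpy as np

# Synthetic stand-ins for the SVD data of F_n: decaying singular
# values sigma_i and Fourier coefficients alpha_i = |v_i^* r_z|.
rng = np.random.default_rng(0)
n = 64
sigma = np.logspace(0, -8, n)                       # sigma_1 > ... > sigma_n > 0
alpha = np.abs(rng.standard_normal(n)) * np.sqrt(sigma)

def psi(lam, sigma, alpha):
    """Psi(lam) = r(lam)^2 * s(lam)^2 built from the filter factors (33)."""
    phi = sigma / (sigma + lam**2)                  # Tikhonov filter factors
    s2 = np.sum(phi**2 * alpha**2 / sigma)          # ||g_{lam,z}||_2^2, cf. (32)
    r2 = np.sum((1.0 - phi)**2 * alpha**2)          # ||rho_{lam,z}||_2^2, cf. (32)
    return r2 * s2

# Brute-force search for the largest maximum on [sqrt(sigma_n), sqrt(sigma_1)].
lams = np.sqrt(np.logspace(np.log10(sigma[-1]), np.log10(sigma[0]), 2000))
vals = np.array([psi(l, sigma, alpha) for l in lams])
lam_impc = lams[np.argmax(vals)]
```

By Theorem 4 below, the maximizer should fall strictly inside $\left(\sqrt{{\sigma}_{n}}\mathrm{,}\sqrt{{\sigma}_{1}}\right)$, which the grid search confirms.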

The goal of this subsection is to give further insight into the theoretical properties of IMPC in order to fully understand why it works so well. A key step is to understand the behavior of $\Psi $ as a function of the sampling point *z*. Figure 6 displays the typical behavior of $\Psi $ for points outside and inside *D*, labeled ${\Psi}_{\text{out}}$ and ${\Psi}_{\text{on}}$, respectively, as well as the corresponding functions $\Phi $. To explain this behavior we assume, as usual, that $\Psi $ has only one maximum point, and we examine its location.

The expressions in (31)-(32) are the basis for our analysis of the first factor in (35). Notice that for *z* outside *D* and $\lambda \approx 0$, ${\Vert {g}_{\lambda \mathrm{,}z}\Vert}_{2}$ becomes close to ${\Vert {g}_{z}\Vert}_{2}$, which is large, because the Fourier coefficients $\left|{\alpha}_{i}\right|$ generally remain above $\sqrt{{\sigma}_{i}}$, see Figure 2. Assume now that $\lambda $ lies not far from $\sqrt{{\sigma}_{n}}$, such that

Figure 6. Typical behavior of the functions $\Psi $ and $\Phi $ for *z* outside (resp. inside) the scatterer. Maximum points of $\Psi $, or equivalently the related fixed points of $\Phi $, are marked with small circles.

$\sqrt{{\sigma}_{n}}\le \cdots \le \sqrt{{\sigma}_{p+1}}\le \lambda <\sqrt{{\sigma}_{p}}\le \cdots \le \sqrt{{\sigma}_{1}}$, with *p* filter factors close to 1 and $n-p$ small filter factors. Then, based on (33) we have

${\lambda}^{2}{\Vert {g}_{\lambda ,z}\Vert}_{2}^{2}\approx {\lambda}^{2}{\displaystyle \underset{i=1}{\overset{p}{\sum}}}\frac{{\left|{\alpha}_{i}\right|}^{2}}{{\sigma}_{i}}+{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\frac{{\sigma}_{i}}{{\lambda}^{2}}{\left|{\alpha}_{i}\right|}^{2}\text{\hspace{1em}}\text{and}\text{\hspace{1em}}{\Vert {\rho}_{\lambda ,z}\Vert}_{2}^{2}\approx {\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}{\left|{\alpha}_{i}\right|}^{2}.$

Thus, for *z* outside *D* and $\lambda $ relatively close to $\sqrt{{\sigma}_{n}}$, that is, $p\approx n$, it is clear that ${\lambda}^{2}{\Vert {g}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}\approx {\lambda}^{2}{\Vert {g}_{z}\Vert}_{2}^{2}$ dominates ${\Vert {\rho}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}$ (because ${\Vert {g}_{z}\Vert}_{2}$ is large), so ${\Psi}^{\prime}\left(\lambda \right)>0$, and $\Psi $ most often increases quickly and dramatically as $\lambda $ moves away from $\sqrt{{\sigma}_{n}}$, until the maximum is reached. From there, as *p* diminishes or, equivalently, as $\lambda $ moves further away from $\sqrt{{\sigma}_{n}}$, ${\Vert {\rho}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}$ dominates ${\lambda}^{2}{\Vert {g}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}$ and so $\Psi $ decreases and becomes small as $\lambda $ approaches $\sqrt{{\sigma}_{1}}$. A similar phenomenon happens when *z* lies inside *D*, with the difference that, as $\lambda $ moves away from $\sqrt{{\sigma}_{n}}$, $\Psi $ increases much more slowly than when *z* is outside *D* because the coefficients $\left|{\alpha}_{i}\right|$ remain below $\sqrt{{\sigma}_{i}}$, see again Figure 2; in this case, the maximum point of $\Psi $ is reached relatively close to $\sqrt{{\sigma}_{1}}$. A simple way to see this is by noting that, since ${\lambda}^{2}{\Vert {g}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}$ can be approximated as

${\lambda}^{2}{\Vert {g}_{\lambda ,z}\Vert}_{2}^{2}\approx {\displaystyle \underset{i=1}{\overset{p}{\sum}}}{\left|{\alpha}_{i}\right|}^{2},$ (36)

a relatively large number of Fourier coefficients in ${\Vert {\rho}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}$ is required for ${\lambda}^{2}{\Vert {g}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}$ to be dominated by ${\Vert {\rho}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}$. That is, the maximum point should be relatively close to $\sqrt{{\sigma}_{1}}$.

Having discussed the behavior of the function $\Psi $ for points inside and outside *D*, we now strengthen the conclusions above with some theoretical properties collected in the following theorem.

Theorem 4. Assume that ${F}_{n}$ is nonsingular with distinct singular values and ${\alpha}_{i}\ne 0$, $i=1,\cdots ,n$. Let $\Phi \left(\lambda \right)=\text{r}\left(\lambda \right)/\text{s}\left(\lambda \right)$. Then for all $\lambda >0$

$\frac{{\lambda}^{2}}{\sqrt{{\sigma}_{1}}}\le \Phi \left(\lambda \right)\le \frac{{\lambda}^{2}}{\sqrt{{\sigma}_{n}}}\mathrm{.}$ (37)

Consequently,

$0<\lambda \le \sqrt{{\sigma}_{n}}\text{\hspace{0.17em}}\Rightarrow 0\le \Phi \left(\lambda \right)\le \lambda ,\text{\hspace{1em}}\lambda \ge \sqrt{{\sigma}_{1}}\Rightarrow \Phi \left(\lambda \right)\ge \lambda .$ (38)

Moreover, irrespective of the sampling point, the function $\Psi $ always has a maximum point $\stackrel{\u2323}{\lambda}$ that is a fixed point of $\Phi $, with $\sqrt{{\sigma}_{n}}<\stackrel{\u2323}{\lambda}<\sqrt{{\sigma}_{1}}$, and this point is unique if ${\Phi}^{\prime}\left(\lambda \right)>\Phi \left(\lambda \right)/\lambda $ for all $\lambda \in \left[\sqrt{{\sigma}_{n}}\mathrm{,}\sqrt{{\sigma}_{1}}\right]$.

Proof: It is well known that the solution to the optimization problem (30) satisfies the regularized normal equations,

$\left[{\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/2}+{\lambda}^{2}I\right]{g}_{\lambda \mathrm{,}z}={\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/4}{r}_{z}$ (39)

where *I* denotes the
$n\times n$ identity matrix. Therefore

${\lambda}^{2}{g}_{\lambda \mathrm{,}z}={\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/4}\left[{r}_{z}-{\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/4}{g}_{\lambda \mathrm{,}z}\right]\mathrm{,}$ (40)

and hence, taking the 2-norm on both sides of the above equality, we see that

$\text{s}\left(\lambda \right)=\frac{1}{{\lambda}^{2}}{\Vert {\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/4}\left[{r}_{z}-{\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/4}{g}_{\lambda \mathrm{,}z}\right]\Vert}_{2}$

implies

$\Phi \left(\lambda \right)={\lambda}^{2}\frac{{\Vert {\rho}_{\lambda \mathrm{,}z}\Vert}_{2}}{{\Vert {\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/4}{\rho}_{\lambda \mathrm{,}z}\Vert}_{2}}\mathrm{.}$ (41)

Since ${F}_{n}^{\mathrm{*}}{F}_{n}$ is non-singular, from the SVD of ${F}_{n}$ and well-known eigenvalue properties of Hermitian matrices, the inequalities in (37) are straightforward consequences of (41), while the inequalities in (38) are consequences of (37). In addition, from (38) it is clear that $\Phi $ has at least one fixed point $\stackrel{\u2323}{\lambda}\in \left[\sqrt{{\sigma}_{n}}\mathrm{,}\sqrt{{\sigma}_{1}}\right]$. It remains to show first that $\stackrel{\u2323}{\lambda}\ne \sqrt{{\sigma}_{1}}$ and $\stackrel{\u2323}{\lambda}\ne \sqrt{{\sigma}_{n}}$, and then that this fixed point maximizes $\Psi $ and is unique. We prove the first part by contradiction. Assume that $\stackrel{\u2323}{\lambda}=\sqrt{{\sigma}_{n}}$. Taking the 2-norm on both sides of (40) and taking (31)-(32) into account, the fixed point equality $\stackrel{\u2323}{\lambda}=\Phi \left(\stackrel{\u2323}{\lambda}\right)$ can be satisfied only when ${\alpha}_{i}=0$ for $i=1,\cdots ,n-1$, or when all singular values of ${F}_{n}$ are equal to ${\sigma}_{n}$. This is a contradiction. Therefore $\stackrel{\u2323}{\lambda}\ne \sqrt{{\sigma}_{n}}$. A similar analysis shows that $\stackrel{\u2323}{\lambda}\ne \sqrt{{\sigma}_{1}}$. That $\stackrel{\u2323}{\lambda}$ maximizes $\Psi $ is a consequence of (35) and the condition ${\Phi}^{\prime}\left(\lambda \right)>\Phi \left(\lambda \right)/\lambda $.

Finally, to prove uniqueness, assume that $\Psi $ has two maximum points, say ${\lambda}_{1}<{\lambda}_{2}$ such that $\Phi \left(\lambda \right)\le {\lambda}_{1}$ for $\sqrt{{\sigma}_{n}}<\lambda \le {\lambda}_{1}$. Following an analysis from [18] it follows that there exists a fixed point $\stackrel{\u2323}{\lambda}$ of $\Phi $ between ${\lambda}_{1}$ and ${\lambda}_{2}$ that minimizes $\Psi $. Using the mean value theorem for derivatives, $\stackrel{\u2323}{\lambda}-{\lambda}_{1}=\Phi \left(\stackrel{\u2323}{\lambda}\right)-\Phi \left({\lambda}_{1}\right)={\Phi}^{\prime}\left(\stackrel{\xaf}{\lambda}\right)\left(\stackrel{\u2323}{\lambda}-{\lambda}_{1}\right)$ with $\stackrel{\xaf}{\lambda}\in \left[\stackrel{\u2323}{\lambda}\mathrm{,}{\lambda}_{1}\right]$. This implies that ${\Phi}^{\prime}\left(\stackrel{\xaf}{\lambda}\right)=1$ and so ${\Phi}^{\prime}\left(\stackrel{\xaf}{\lambda}\right)=1<\Phi \left(\stackrel{\xaf}{\lambda}\right)/\stackrel{\xaf}{\lambda}$ which is a contradiction because $\Phi \left(\stackrel{\xaf}{\lambda}\right)>\stackrel{\xaf}{\lambda}$. This completes the proof. $\square $
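The two-sided bound (37) can be probed numerically. In the Python/NumPy sketch below, synthetic singular values and coefficients stand in for the SVD data of ${F}_{n}$ and ${r}_{z}$, and $\Phi \left(\lambda \right)=\text{r}\left(\lambda \right)/\text{s}\left(\lambda \right)$ is evaluated via the formulas (31)-(32):

```python
import numpy as np

# Synthetic SVD data: singular values in decreasing order and
# coefficients alpha_i standing in for v_i^* r_z.
rng = np.random.default_rng(1)
n = 32
sigma = np.sort(rng.uniform(1e-6, 1.0, n))[::-1]   # sigma_1 >= ... >= sigma_n
alpha = rng.standard_normal(n)

def Phi(lam):
    """Phi(lam) = r(lam)/s(lam) from the SVD formulas (31)-(32)."""
    phi = sigma / (sigma + lam**2)
    s = np.sqrt(np.sum(phi**2 * alpha**2 / sigma))  # s(lam) = ||g_{lam,z}||_2
    r = np.sqrt(np.sum((1.0 - phi)**2 * alpha**2))  # r(lam) = ||rho_{lam,z}||_2
    return r / s

# Check lam^2/sqrt(sigma_1) <= Phi(lam) <= lam^2/sqrt(sigma_n), cf. (37).
ok = all(lam**2 / np.sqrt(sigma[0]) <= Phi(lam) <= lam**2 / np.sqrt(sigma[-1])
         for lam in np.logspace(-3, 1, 200))
```

The bound holds on the whole grid, as (41) predicts.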

Except for inequality (37), the statements of Theorem 4 were demonstrated in [9] [18] by other means. Here, these statements are proven elegantly from (37). In addition, the current approach allows us to explain, to some extent, our discussion above about the location of the maximizing point of $\Psi $. Indeed, using the notation ${\Phi}_{\text{out}}$ and ${\Phi}_{\text{on}}$ introduced in Figure 6, and assuming that the maximum point of ${\Psi}_{\text{on}}$ is unique and achieved at $\lambda =\stackrel{\u2323}{\lambda}$, from (37) it follows that

${\Phi}_{\text{out}}\left(\lambda \right)\le \frac{\lambda}{\sqrt{{\sigma}_{n}}}{\Phi}_{\text{on}}\left(\lambda \right)\text{\hspace{1em}}\text{if}\text{\hspace{0.17em}}\stackrel{\u2323}{\lambda}\le \lambda \le \sqrt{{\sigma}_{1}},$ (42)

${\Phi}_{\text{out}}\left(\lambda \right)\ge \frac{\lambda}{\sqrt{{\sigma}_{1}}}{\Phi}_{\text{on}}\left(\lambda \right)\text{\hspace{1em}}\text{if}\text{\hspace{0.17em}}\sqrt{{\sigma}_{n}}\le \lambda \le \stackrel{\u2323}{\lambda}.$ (43)

In particular, at $\lambda =\stackrel{\u2323}{\lambda}$ we see that (42) implies

${\Phi}_{\text{out}}\left(\stackrel{\u2323}{\lambda}\right)\le \left(\frac{\stackrel{\u2323}{\lambda}}{\sqrt{{\sigma}_{n}}}\right)\stackrel{\u2323}{\lambda}\mathrm{,}$

and this indicates that ${\Phi}_{\text{out}}\left(\stackrel{\u2323}{\lambda}\right)$ may reach a very large value, possibly much larger than ${\Phi}_{\text{on}}\left(\stackrel{\u2323}{\lambda}\right)$. If this is the case, the maximizer of ${\Psi}_{\text{out}}$ must be smaller than $\stackrel{\u2323}{\lambda}$, a fact which, as we have seen, depends on the behavior of the Fourier coefficients. A line of analysis involving derivatives of $\Phi $ can be used to characterize the location of the maximum point of ${\Psi}_{\text{out}}$. In fact, since Fourier coefficients associated with points outside *D* are generally larger than those associated with interior points, to characterize such a location it is reasonable to assume that ${{\Phi}^{\prime}}_{\text{out}}\left(\lambda \right)>{{\Phi}^{\prime}}_{\text{on}}\left(\lambda \right)$. This can be seen as follows. Let $\Xi \left(\lambda \right)={\Phi}_{\text{out}}\left(\lambda \right)-{\Phi}_{\text{on}}\left(\lambda \right)$. It is clear that $\Xi \left(0\right)=0$ and that $\Xi $ is differentiable for $\lambda >0$. Then

$\Xi \left(\lambda \right)-\Xi \left(0\right)={\displaystyle {\int}_{0}^{\lambda}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\Xi}^{\prime}\left(\xi \right)\text{d}\xi \mathrm{,}$ (44)

where ’ denotes differentiation with respect to $\xi $. Based on (44) we obtain the following corollary.

Corollary 1. Assume that ${{\Phi}^{\prime}}_{\text{out}}\left(\lambda \right)>{{\Phi}^{\prime}}_{\text{on}}\left(\lambda \right)$ for all $\lambda \in \left[\mathrm{0,}\sqrt{{\sigma}_{1}}\right]$. Then ${\Phi}_{\text{on}}\left(\lambda \right)<{\Phi}_{\text{out}}\left(\lambda \right)$ and therefore, the maximizer of ${\Psi}_{\text{out}}$ lies on the left of the maximizer of ${\Psi}_{\text{on}}$.

The discussion and analysis developed in this section regarding DSMs and IMPC are relevant in many ways. For example, since we have identified the source of oscillations and spurious peaks in DSM reconstructions, strategies to circumvent the difficulties of DSMs can be devised. Focusing on IMPC, what matters here is that we have provided additional insight to explain why the method works so well and why its reconstructions are potentially of better quality than those obtained by DSMs, although clearly at a higher cost. We emphasize, however, that the cost of IMPC is not a crucial issue for 2D reconstructions, as IMPC is based on a fast fixed-point iteration algorithm. A similar observation holds true for 3D reconstructions, since in this case IMPC is applied in conjunction with a projection technique. It is also worth noting that if ${I}_{\text{FC}}$ is used as a preprocessing step to identify a small region containing the object, applying IMPC to that region can significantly reduce the total cost of the reconstruction. This subject is left for future work.

Recall that GDP chooses as regularization parameter the unique root of the nonlinear equation

$G\left(\lambda \right)={\Vert {\left({\stackrel{\u02dc}{F}}_{n}^{\mathrm{*}}{\stackrel{\u02dc}{F}}_{n}\right)}^{1/4}{g}_{\lambda \mathrm{,}z}-{r}_{z}\Vert}_{2}^{2}-{\delta}_{F}^{2}{\Vert {g}_{\lambda \mathrm{,}z}\Vert}_{2}^{2}=0$ (45)

where ${\delta}_{F}$ is an estimate for $\Vert E\Vert ={\Vert {\left({\stackrel{\u02dc}{F}}_{n}^{\mathrm{*}}{\stackrel{\u02dc}{F}}_{n}\right)}^{1/4}-{\left({F}_{n}^{\mathrm{*}}{F}_{n}\right)}^{1/4}\Vert}_{2}$ such that $\Vert E\Vert \le {\delta}_{F}$. It is well known that *G* is convex for small $\lambda $ and concave for large $\lambda $. As a result, global and monotone convergence of Newton’s method cannot be guaranteed [19]. This difficulty is circumvented through a fast fixed-point algorithm proven to converge irrespective of the chosen initial guess [20].

We close this subsection with a new estimate for the unique root of (45), which is again a consequence of (37).

Corollary 2. Let the unique root of (45) be denoted by ${\lambda}_{\text{GDP}}$. Whenever ${\delta}_{F}=\epsilon \sqrt{{\sigma}_{1}}$ we have

$\sqrt{\epsilon}\sqrt{{\sigma}_{n}}\le {\lambda}_{\text{GDP}}\le \sqrt{\epsilon}\sqrt{{\sigma}_{1}}\mathrm{.}$ (46)

The estimates in (46) significantly improve an earlier estimate reported in Bazán ( [20], Theorem 3.2).
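Corollary 2 also suggests a simple bracketing strategy: when a single operator is used on both sides of (45), the root satisfies $\Phi \left({\lambda}_{\text{GDP}}\right)={\delta}_{F}$, and since r is increasing and s decreasing, $\Phi $ is strictly increasing, so bisection on the interval of (46) finds the root. The Python/NumPy sketch below uses synthetic singular values and coefficients in place of the SVD data of ${\stackrel{\u02dc}{F}}_{n}$; in practice the fast fixed-point algorithm of [20] is preferred.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
sigma = np.sort(rng.uniform(1e-4, 1.0, n))[::-1]   # singular values, decreasing
alpha = rng.standard_normal(n)                     # coefficients v_i^* r_z
eps = 0.05
delta_F = eps * np.sqrt(sigma[0])                  # delta_F = eps * sqrt(sigma_1)

def Phi(lam):
    """Phi(lam) = r(lam)/s(lam) via the filter-factor formulas (31)-(32)."""
    phi = sigma / (sigma + lam**2)
    s = np.sqrt(np.sum(phi**2 * alpha**2 / sigma))
    r = np.sqrt(np.sum((1.0 - phi)**2 * alpha**2))
    return r / s

# G(lam) = 0  <=>  Phi(lam) = delta_F, and Phi is strictly increasing, so
# bisect on the bracket [sqrt(eps*sigma_n), sqrt(eps*sigma_1)] from (46).
lo, hi = np.sqrt(eps * sigma[-1]), np.sqrt(eps * sigma[0])
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Phi(mid) < delta_F else (lo, mid)
lam_gdp = 0.5 * (lo + hi)
```

The computed root lies inside the interval predicted by (46), confirming the corollary on this example.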

4. Numerical Results

In this section, we present several 2D numerical examples to illustrate the effectiveness of IMPC in reconstructing interfaces and imbedded obstacles in a layered medium. To generate discrete data for the reference operator ${F}_{0}$ and the far-field operator *F* we use the boundary integral equation method described in [3]. This yields finite-dimensional approximations ${F}_{\mathrm{0,}n}$ and ${F}_{n}$ given as

${F}_{0,n}={\left[{u}_{t}^{\infty}\left({\theta}_{j},{\theta}_{\ell}\right)\right]}_{j,\ell =1}^{n},\text{\hspace{1em}}{F}_{n}={\left[{u}^{\infty}\left({\theta}_{j},{\theta}_{\ell}\right)\right]}_{j,\ell =1}^{n},$ (47)

where ${\theta}_{j}=2\pi j/n$ and *n* is the number of incident and observation directions around the unit circle. For our examples we set ${k}_{0}=2$, ${k}_{1}=1$ and for the transmission parameter we chose ${\mu}_{0}=0.8$; in all cases we used $n=64$ and the reconstructions were made using a grid of $200\times 200$ equally spaced sampling points on the rectangle $\left[-\mathrm{6,6}\right]\times \left[-\mathrm{6,6}\right]$.

To recover the interface we rely on the ${\left({F}^{\star}F\right)}^{1/4}$ method and use the indicator function (23) for noise-free data as described in (47), and its regularized counterpart (29) for noisy data, with the regularization parameter chosen by IMPC and GDP and calculated using fast fixed-point iterations as described in [8] [10]. Noisy data are generated as

${\stackrel{\u02dc}{F}}_{n}={F}_{n}+\epsilon \mathcal{N}{\Vert {F}_{n}\Vert}_{2}$ (48)

where ${\Vert \text{\hspace{0.05em}}\cdot \text{\hspace{0.05em}}\Vert}_{2}$ denotes the spectral matrix norm, $\mathcal{N}$ is a complex random matrix satisfying ${\Vert \mathcal{N}\Vert}_{2}=1$, and $\epsilon $ gives the relative noise level in the data. On the other hand, the reconstruction of imbedded obstacles is based on a discrete counterpart of the linear operator Equation (17),

${F}_{n,\#}^{1/2}f={r}_{z},$ (49)

where ${r}_{z}$ is a discrete version of function ${\Phi}_{z}$ in Theorem 3 and

${F}_{n,\#}=\left|\mathrm{Re}\left(\left({F}_{n}-{F}_{0,n}\right){S}_{0,n}\right)\right|+\left|\mathrm{Im}\left(\left({F}_{n}-{F}_{0,n}\right){S}_{0,n}\right)\right|$ (50)

with ${S}_{\mathrm{0,}n}$ being obtained from (19) by replacing ${F}_{0}$ by ${F}_{\mathrm{0,}n}$. In (50), the absolute value of a Hermitian matrix *A* with eigenvalue decomposition $A=V\Lambda {V}^{*}$ is given as $\left|A\right|=V\left|\Lambda \right|{V}^{*}$. Also, as usual, $\mathrm{Re}\left(A\right)=\left(A+{A}^{*}\right)/2$ and $\mathrm{Im}\left(A\right)=\left(A-{A}^{*}\right)/\left(2i\right)$. Since ${F}_{n\mathrm{,\#}}$ is Hermitian positive definite, we have ${F}_{n\mathrm{,\#}}^{1/2}={\left({F}_{n\mathrm{,\#}}^{\mathrm{*}}{F}_{n\mathrm{,\#}}\right)}^{1/4}$. Therefore, the indicator functions $W\left(z\right)$ and $W\left(\lambda \mathrm{,}z\right)$ based on (49) are essentially the same as those based on (23) and (29). Noisy data ${\stackrel{\u02dc}{F}}_{n\mathrm{,\#}}$ are generated similarly to (48).
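The matrix operations behind (50) are easily expressed through an eigenvalue decomposition. A minimal Python/NumPy sketch is given below, in which random matrices merely stand in for the discretized operators ${F}_{n}$, ${F}_{\mathrm{0,}n}$ and ${S}_{\mathrm{0,}n}$ (in practice these come from (47) and (19)):

```python
import numpy as np

def herm_abs(A):
    """|A| = V |Lambda| V^* for a Hermitian matrix A."""
    lam, V = np.linalg.eigh(A)
    return (V * np.abs(lam)) @ V.conj().T           # V diag(|lam|) V^*

def build_F_sharp(Fn, F0n, S0n):
    """F_{n,#} per (50): |Re(B)| + |Im(B)| with B = (F_n - F_{0,n}) S_{0,n}."""
    B = (Fn - F0n) @ S0n
    ReB = (B + B.conj().T) / 2                      # Hermitian part
    ImB = (B - B.conj().T) / 2j                     # skew part over 2i
    return herm_abs(ReB) + herm_abs(ImB)

# Random placeholders standing in for the discretized operators.
rng = np.random.default_rng(4)
m = 16
Fn, F0n, S0n = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
                for _ in range(3))
F_sharp = build_F_sharp(Fn, F0n, S0n)
```

By construction the result is Hermitian and positive semidefinite, so its square root can indeed be taken as ${\left({F}_{n\mathrm{,\#}}^{\mathrm{*}}{F}_{n\mathrm{,\#}}\right)}^{1/4}$.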

We report numerical results using noise-free data and noisy data with $\epsilon =0.05$ and $\epsilon =0.1$, that is, data with relative noise levels of 5% and 10%, respectively. In this sense, the numerical experiments performed here complement what is reported in [3], where reconstructions are based on ${F}_{n}$ and ${F}_{n\mathrm{,\#}}$ without considering additive noise. Reconstructions based on noise-free data are labeled *W*, while reconstructions based on noisy data are labeled W-FP when we use IMPC and W-GDP when we use Morozov’s generalized discrepancy principle. For the implementation of GDP we use the noise level estimate ${\delta}_{F}=1.5{\Vert E\Vert}_{2}$, see (45).
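The noise model (48) is straightforward to reproduce. A minimal Python/NumPy sketch follows (the random matrix here merely stands in for $\mathcal{N}$; any complex matrix normalized to unit spectral norm works):

```python
import numpy as np

def add_noise(F, eps, rng):
    """Perturb F as in (48): F~ = F + eps * N * ||F||_2 with ||N||_2 = 1."""
    N = rng.standard_normal(F.shape) + 1j * rng.standard_normal(F.shape)
    N /= np.linalg.norm(N, 2)                  # normalize to unit spectral norm
    return F + eps * N * np.linalg.norm(F, 2)  # relative noise level eps

rng = np.random.default_rng(3)
F = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
F_noisy = add_noise(F, 0.05, rng)
```

By construction, the spectral-norm relative perturbation $\Vert \stackrel{\u02dc}{F}-F\Vert _{2}/\Vert F\Vert _{2}$ equals $\epsilon $ exactly.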

Example 1. The interface is given by a rounded square and the imbedded obstacle is given by the union of an ellipse and a kite, see Figure 7. Both obstacles are assumed to be sound-soft.

Results for noise level 5% shown in Figure 8 indicate that both GDP and IMPC produce good quality interface reconstructions (first line) and that the three indicators give comparable object reconstructions, with the observation

Figure 7. Profile of interface and objects.

Figure 8. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.

that the quality of the W-FP reconstruction looks better than that of W-GDP (second line). Reconstructions using data with noise level 10% are displayed in Figure 9. Again we see that the three indicators provide good quality interface reconstructions and comparable quality object reconstructions, although with the background better resolved with IMPC than with GDP.

Example 2. In this example, the interface and obstacle are the same as in Example 1, but now the kite is assumed to be sound-hard. Reconstructions using data with noise level 10% are displayed in Figure 10. Again we see that the interface reconstructions are of good quality and that the reconstructions of the sound-soft object are visually of better quality than those of the sound-hard one. As explained in [3], this is because the values of the indicator functions are much smaller for the sound-hard obstacle than for the sound-soft one, as these values depend on the physical property under consideration.

Example 3. The interface is composed of one rounded square and a circle and the imbedded obstacles are an ellipse and a kite, see Figure 11. Both objects are assumed to be sound-soft.

Figure 9. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.

Figure 10. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles. Obstacle is assumed to be sound-hard.

Figure 11. Profile of interface and objects.

Figure 12 shows reconstructions using data with noise levels 5% (lines 1 and 2) and 10% (lines 3 and 4). As in Example 1, we see that the interface reconstructions for both noise levels are of comparable quality, while the reconstruction of the objects obtained with W-GDP appears to degrade when the noise level increases to 10%. Interestingly, the interface seems to be part of the background in the reconstruction of the objects. In any case, the best object reconstructions for both noise levels are obtained with W-FP. This confirms once again what is often seen when IMPC is compared to GDP in solving other inverse scattering problems, see, for instance, [8] [9] [11] [21].

Example 4. In this example, the interface and object are as in Example 3 but the kite is assumed to be sound-hard. The reconstructions are shown in Figure 13. As we can see, while interface reconstructions are robust to noise, as in the examples above, a certain lack of resolution seems to be present in the kite reconstruction, just as in Example 2 where we dealt with the sound-hard case. Other than that, note again that when reconstructing the objects, the interface seems to be part of the background.

Example 5. The interface is composed of two rounded squares, as displayed in Figure 14, and the imbedded obstacles are an ellipse and a kite, assuming as in Example 4 that the ellipse is sound-soft and the kite is sound-hard. Reconstructions using data with noise level 10% are displayed in Figure 15. Once again, we see that the interface reconstructions are of good quality, while the reconstructions of the sound-hard object are as in Examples 2 and 4, that is, with slightly poor resolution.

Figure 12. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.

Figure 13. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.

Figure 14. Interface and imbedded obstacles.

Figure 15. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.

5. Conclusion

We have presented an updated version of the Improved Maximum Product Criterion (IMPC) which, combined with a mixed reciprocity relation, is used for the reconstruction of two-dimensional obstacles in a layered medium. Numerical results indicate that the method yields accurate and robust reconstructions, and comparison with some recently developed direct factorization methods illustrates its superiority in terms of reconstruction quality. The extension to 3D reconstructions is a topic for future research, in which a significant reduction of the computational cost will be a primary task. This can be achieved by using an indicator function that relies on the separation of the Fourier coefficients as a preprocessing step to identify a small region containing the object, and then obtaining the reconstruction by applying IMPC there.

References

[1] Colton, D. and Kress, R. (1998) Inverse Acoustic and Electromagnetic Scattering Theory. 2nd Edition, Springer, Berlin.

https://doi.org/10.1007/978-3-662-03537-5

[2] Kirsch, A. and Grinberg, N. (2008) The Factorization Method for Inverse Problems. Oxford Lecture Series in Mathematics and its Applications Vol. 36, Oxford University Press, Oxford.

https://doi.org/10.1093/acprof:oso/9780199213535.003.0005

[3] Bondarenko, O., Kirsch, A. and Liu, X. (2013) The Factorization Method for Inverse Acoustic Scattering in a Layered Medium. Inverse Problems, 29, Article ID: 045010.

https://doi.org/10.1088/0266-5611/29/4/045010

[4] Colton, D. and Haddar, H. (2005) An Application of the Reciprocity Gap Functional in Inverse Scattering Theory. Inverse Problems, 21, 383-398.

https://doi.org/10.1088/0266-5611/21/1/023

[5] Liu, X., Zhang, B. and Hu, G. (2010) Uniqueness in the Inverse Scattering Problem in a Piecewise Homogeneous Medium. Inverse Problems, 26, Article ID: 015002.

https://doi.org/10.1088/0266-5611/26/1/015002

[6] Kirsch, A. (1998) Characterization of the Shape of the Scattering Obstacle by the Spectral Data of the Far-Field Operator. Inverse Problems, 14, 1489-1512.

https://doi.org/10.1088/0266-5611/14/6/009

[7] Colton, D., Piana, M. and Potthast, R. (1997) A Simple Method Using Morozov’s Discrepancy Principle for Solving Inverse Scattering Problems. Inverse Problems, 13, 1477-1493.

https://doi.org/10.1088/0266-5611/13/6/005

[8] Bazán, F.S.V., Francisco, J.B., Leem, K.H. and Pelekanos, G. (2015) Using the Linear Sampling Method and an Improved Maximum Product Criterion for the Solution of the Electromagnetic Inverse Medium Problem. Journal of Computational and Applied Mathematics, 273, 61-75.

https://doi.org/10.1016/j.cam.2014.06.003

[9] Bazán, F.S.V., Francisco, J.B., Leem, K.H. and Pelekanos, G. (2012) A Maximum Product Criterion as a Tikhonov Parameter Choice Rule for Kirsch’s Factorization Method. Journal of Computational and Applied Mathematics, 236, 4264-4275.

https://doi.org/10.1016/j.cam.2012.05.008

[10] Bazán, F.S.V., Francisco, J.B., Leem, K.H., Pelekanos, G. and Sevroglou, V. (2017) A Numerical Reconstruction Method in Inverse Elastic Scattering. Inverse Problems in Science and Engineering, 25, 1577-1600.

https://doi.org/10.1080/17415977.2016.1273919

[11] Leem, K.H., Liu, J. and Pelekanos, G. (2018) Two Direct Factorization Methods for Inverse Scattering Problems. Inverse Problems, 34, Article ID: 125004.

https://doi.org/10.1088/1361-6420/aae15e

[12] Liu, X. (2017) A Novel Sampling Method for Multiple Multiscale Targets from Scattering Amplitudes at a Fixed Frequency. Inverse Problems, 33, Article ID: 085011.

https://doi.org/10.1088/1361-6420/aa777d

[13] Potthast, R. (2001) Point Sources and Multipoles in Inverse Scattering Theory. Chapman and Hall/CRC, London.

https://doi.org/10.1201/9781420035483

[14] Fares, M., Gratton, S. and Toint, P. (2009) SVD-Tail: A New Linear-Sampling Reconstruction Method for Inverse Scattering Problems. Inverse Problems, 25, Article ID: 095013.

https://doi.org/10.1088/0266-5611/25/9/095013

[15] Harris, I. and Kleefeld, A. (2019) Analysis of New Direct Sampling Indicators for Far-Field Measurements. Inverse Problems, 35, Article ID: 054002.

https://doi.org/10.1088/1361-6420/ab08be

[16] Hansen, P.C. (1998) Rank-Deficient and Discrete Ill-Posed Problems. SIAM, Philadelphia.

https://doi.org/10.1137/1.9780898719697

[17] Liu, K. (2016) Two Effective Post-Filtering Strategies for Improving Direct Sampling Methods. Applicable Analysis, 96, 502-515.

https://doi.org/10.1080/00036811.2016.1204441

[18] Bazán, F.S.V. and Francisco, J.B. (2009) An Improved Fixed-Point Algorithm for Determining a Tikhonov Regularization Parameter. Inverse Problems, 25, Article ID: 045007.

https://doi.org/10.1088/0266-5611/25/4/045007

[19] Lu, S., Pereverzev, S.V., Shao, Y. and Tautenhahn, U. (2010) On the Generalized Discrepancy Principle for Tikhonov Regularization in Hilbert Scales. Journal of Integral Equations and Applications, 22, 483-517.

https://doi.org/10.1216/JIE-2010-22-3-483

[20] Bazán, F.S.V. (2015) Simple and Efficient Determination of the Tikhonov Regularization Parameter Chosen by the Generalized Discrepancy Principle for Discrete Ill-Posed Problems. Journal of Scientific Computing, 63, 163-184.

https://doi.org/10.1007/s10915-014-9888-z

[21] Bazán, F.S.V., Kleefeld, A., Leem, K.H. and Pelekanos, G. (2016) Sampling Method Based Projection Approach for the Reconstruction of 3D Acoustic Penetrable Scatterers. Linear Algebra and its Applications, 495, 289-323.

https://doi.org/10.1016/j.laa.2015.12.020