Applications of Symmetric Circulant Matrices to Isotropic Markov Chain Models and Electrical Impedance Tomography


1. Introduction

Electrical impedance tomography (EIT) and electrical capacitance tomography (ECT) are emerging engineering modalities designed to solve inverse problems by measuring electric signals on the periphery of the body and reconstructing the electromagnetic properties within it. In its simplest 2D disk form, the problem reduces to relating the vector of voltages to the vector of currents through a matrix, as in Ohm’s law. It has been proven that this Ohm’s matrix is symmetric circulant. Although some properties of these matrices have recently been mentioned in several papers, including [1] and [2] , no systematic study has been conducted, including computation of eigenvectors and eigenvalues. The problem of computing the elements of a symmetric circulant matrix with equidistant electrodes gives rise to a new special function. We also suggest an application of symmetric circulants to modeling a special class of isotropic Markov chains.

2. Properties Overview

In a symmetric Toeplitz matrix, the elements of each diagonal parallel to the main diagonal are the same. A circulant matrix, or simply a circulant, is a special Toeplitz matrix that does not change upon a forward shift of its elements; a concise discussion of circulant matrices and their properties is found in [3] [4] . In matrix form, we can express the shift as multiplication by a permutation matrix [5] . The $n\times n$ shift matrix $\mathcal{S}=\left\{{s}_{ij}\right\}$ is such that ${s}_{n1}=1,\text{}{s}_{i,i+1}=1$ for $i=1,\cdots ,n-1,$ and 0 elsewhere. For example, the $4\times 4$ shift matrix $\left(n=4\right)$ has the form

$\mathcal{S}=\left[\begin{array}{cccc}0& 1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\\ 1& 0& 0& 0\end{array}\right]\mathrm{.}$ (1)

Davis [5] proves that a square matrix $A$ is circulant if and only if

$\mathcal{S}A=A\mathcal{S}\mathrm{,}$

or equivalently

$A={\mathcal{S}}^{-1}A\mathcal{S}.$ (2)

Hereafter, bold upper case denotes matrices and bold lower case denotes vectors. The elements of an $n\times n$ circulant matrix are defined by the $n$ elements of its first row. The following properties of circulant matrices are well known: 1) a linear combination of circulants is a circulant, 2) the product of circulants is a circulant, 3) the inverse of a circulant is a circulant, 4) circulants have the same eigenvectors, 5) circulants commute. The proofs and other properties of circulants can be found in the books cited above.
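Davis’ criterion (2) and the properties above are easy to check numerically. The following Python/NumPy sketch (the helper names `shift_matrix` and `circulant` are ours, not from the paper) builds the shift matrix and a circulant from its first row and verifies that they commute.

```python
import numpy as np

def shift_matrix(n):
    # n x n forward-shift permutation matrix S: s_{i,i+1} = 1, s_{n1} = 1
    S = np.zeros((n, n))
    S[np.arange(n - 1), np.arange(1, n)] = 1.0
    S[n - 1, 0] = 1.0
    return S

def circulant(first_row):
    # each row is the cyclic shift of the previous one (last element wraps to the front)
    c = np.asarray(first_row, dtype=float)
    return np.array([np.roll(c, k) for k in range(len(c))])

S = shift_matrix(4)
A = circulant([1.0, 2.0, 3.0, 4.0])
print(np.allclose(S @ A, A @ S))   # Davis' criterion (2): True for a circulant
```

Note that `circulant([0, 1, 0, 0])` reproduces the shift matrix (1) itself, since $\mathcal{S}$ is a circulant with first row $(0,1,0,0)$.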

The object of the present paper is the symmetric circulant matrix, or Symmetric Circulant (SC) for short. This very special class of Toeplitz matrices emerges in problems of discrete periodic convolution with a symmetric/spherical kernel [6] [7] . The properties of SC matrices are scattered through the literature; below we summarize these properties with emphasis on the computation of their eigenvalues and eigenvectors.

Below are examples of SC matrices for $n=4,5$ and 6.

$\left[\begin{array}{cccc}a& b& c& b\\ b& a& b& c\\ c& b& a& b\\ b& c& b& a\end{array}\right],\text{}\left[\begin{array}{ccccc}a& b& c& c& b\\ b& a& b& c& c\\ c& b& a& b& c\\ c& c& b& a& b\\ b& c& c& b& a\end{array}\right],\text{}\left[\begin{array}{cccccc}a& b& c& d& c& b\\ b& a& b& c& d& c\\ c& b& a& b& c& d\\ d& c& b& a& b& c\\ c& d& c& b& a& b\\ b& c& d& c& b& a\end{array}\right].$

Since a SC matrix is Toeplitz and symmetric, the elements parallel to the main diagonal are the same on both sides of the diagonal. Since a SC matrix is circulant, each row is obtained by a cyclic shift of the previous row: the last element of the previous row moves to the first column, and the rest of the elements follow. The following theorem collects particular properties of a SC matrix (the proofs are either elementary or can be found elsewhere).

Theorem 1. The following holds:

1) The sum of elements in each row and column of a SC matrix is the same.

2) A linear combination of SC matrices is a SC matrix.

3) The inverse of a SC matrix is a SC matrix.

4) The product of SC matrices is a SC matrix.

5) Matrix $T$ is SC if and only if ${T}_{ij}={t}_{\left|i-j\right|}$ and the sum of elements in each row is the same.

6) The $n\times n$ SC matrix is defined by $\lceil \left(n+1\right)/2\rceil $ distinct elements, where $\lceil \text{\hspace{0.17em}}\rceil $ denotes the ceiling function (rounding up to the nearest integer).

7) SC matrices of the same size have the same eigenvectors.

8) SC matrices commute: if ${T}_{1}$ and ${T}_{2}$ are SC matrices, then ${T}_{1}{T}_{2}={T}_{2}{T}_{1}$ .

Eigenvalues and Eigenvectors of Symmetric Circulant

The eigenvalues and eigenvectors of SC matrices are real because the matrix is symmetric. If ${t}_{0},{t}_{1},\cdots ,{t}_{n-1}$ are the elements of the first row, then the eigenvalues are expressed as the linear combinations:

${\nu}_{i}={\displaystyle \underset{k=0}{\overset{n-1}{\sum}}}{t}_{k}\mathrm{cos}\left(\frac{2\text{\pi}i}{n}k\right),\text{}i=1,2\cdots ,n\mathrm{.}$ (3)

Using elementary trigonometry one can show that for even $n$ the largest and the smallest eigenvalues are simple, while the rest have multiplicity two. For odd $n$ the largest eigenvalue is simple and the rest have multiplicity two. The last eigenvalue $\left(i=n\right)$ corresponds to the constant eigenvector $\frac{1}{\sqrt{n}}\left(1,1,\cdots ,1\right)'$ . As follows from the previous theorem,

${t}_{n-i}={t}_{i},\text{}i=1,2,\cdots ,n-1,$

so the number of distinct eigenvalues is $k=\lceil \left(n+1\right)/2\rceil .$

As was shown in [6] and [8] , the $i$ th eigenvector corresponding to the eigenvalue (3) of any symmetric circulant matrix is

${p}_{i}=\sqrt{\frac{2}{n}}\mathrm{cos}\left(\frac{2\text{\pi}i}{n}k-\frac{\text{\pi}}{4}\right),\text{}i=1,2\cdots ,n\mathrm{,}$ (4)

where $k={\left(0,1,\cdots ,n-1\right)}^{\prime}$ . These vectors have unit norm and are pairwise orthogonal. As a word of caution, several authors, including [9] and [10] , mistakenly reported the formula for the eigenvectors as proportional to $\mathrm{cos}\left(2\text{\pi}ik/n\right)$ . The adjustment $-\text{\pi}/4$ in formula (4) is due to the presence of repeated eigenvalues in symmetric circulants. Many properties of SC matrices, such as those listed in Theorem 1, follow from the fact that SC matrices share the eigenvectors given by formula (4).
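Formulas (3) and (4) can be verified numerically. The sketch below (in Python/NumPy; the particular first-row values are our arbitrary illustration) builds an $8\times 8$ SC matrix and checks that each pair $\left({\nu}_{i},{p}_{i}\right)$ is indeed a unit-norm eigenpair.

```python
import numpy as np

n = 8
# a symmetric first row t_k = t_{n-k}; the values are arbitrary illustrations
half = np.array([5.0, 3.0, 2.0, 1.0, 0.5])        # t_0, ..., t_4
t = np.concatenate([half, half[-2:0:-1]])         # mirror to get t_5, t_6, t_7
T = np.array([np.roll(t, i) for i in range(n)])   # SC matrix

k = np.arange(n)
for i in range(1, n + 1):
    nu = np.sum(t * np.cos(2 * np.pi * i * k / n))                   # eigenvalue (3)
    p = np.sqrt(2 / n) * np.cos(2 * np.pi * i * k / n - np.pi / 4)   # eigenvector (4)
    assert np.allclose(T @ p, nu * p)          # (nu, p) is an eigenpair of T
    assert np.isclose(np.linalg.norm(p), 1.0)  # unit norm
```

The set of values produced by (3) also matches the spectrum returned by a generic symmetric eigensolver, repeated eigenvalues included.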

3. Applications

The symmetric circulant matrix is not just a curious mathematical object; it has several applications. For example, SC matrices are used in physics [6] and in image processing to describe Karhunen-Loève type rotations of image templates [9] [11] . Generally, the SC matrix may be useful for modeling rotation-invariant systems in equilibrium. This claim is illustrated by the following examples.

3.1. Isotropic Markov Chain

The Markov chain model describes a stochastic process, $\left\{{X}_{0},{X}_{1},{X}_{2},\cdots \right\}$ , where the random variable ${X}_{t}$ at time $t$ may be in one of $n$ states $\left\{{x}_{i},i=1,2,\cdots ,n\right\}$ . In a Markov chain, the probability distribution at time $t$ is completely specified by the state at time $t-1$ ; in other words, $\mathrm{Pr}\left({X}_{t}|{X}_{0},{X}_{1},\cdots ,{X}_{t-1}\right)=\mathrm{Pr}\left({X}_{t}|{X}_{t-1}\right)$ . The conditional probabilities are called transition probabilities,

$\mathrm{Pr}\left({X}_{t}={x}_{j}|{X}_{t-1}={x}_{i}\right)={p}_{ij},$

which reads as the probability of moving to state ${x}_{j}$ given that the chain was in state ${x}_{i}$ at the previous time. Note that these probabilities do not depend on time $t$ . There are many excellent books on Markov chain models; see [12] [13] [14] among them.

Definition 2. A Markov chain model is called isotropic if the transition probability, ${p}_{ij}$ , is a function of $\left|i-j\right|$ .

The transition probability matrix of an isotropic Markov chain is symmetric and consequently doubly stochastic; that is, the sum of the elements in each column is 1. An isotropic Markov chain can be used to describe a random walk on a circle or globe as a model of virus or rumor spread. To be specific, we talk in terms of a random walk. Let $n$ points/states be specified by $n$ vector coordinates $\left\{{x}_{i}\mathrm{,}i=1,\cdots ,n\right\}$ . We assume that the probability, ${p}_{ij}$ , of walking from state $i$ to state $j$ is a function of the distance $\Vert {x}_{i}-{x}_{j}\Vert $ . A natural assumption is that ${p}_{ij}$ is a decreasing function of the distance. Clearly, the transition probability matrix is specified by $n$ numbers ${t}_{0},{t}_{1},\cdots ,{t}_{n-1}.$

Theorem 3. The matrix of transition probabilities of an isotropic Markov chain model is a symmetric circulant matrix.

Proof. We simply refer to Theorem 1 #5, noting that the sum of the elements in each row is 1 because each row is an exhaustive set of transition probabilities. ∎

An important question concerns the stationary state of an isotropic Markov chain: the probability distribution of each state after an infinite number of steps, i.e., the long-run behavior of the Markov chain, $\mathrm{Pr}\left(X={x}_{j}\right)={\pi}_{j}$ , $j=1,2,\cdots ,n$ . These probabilities are called stationary probabilities.

Theorem 4. The stationary probability distribution of an isotropic Markov chain with positive transition probabilities is uniform, ${\pi}_{j}=1/n$ .

Proof. The long-run (stationary) probability row vector, $\pi =\left({\pi}_{1},{\pi}_{2},\cdots ,{\pi}_{n}\right)$ , is the solution of the equation $\pi P=\pi $ , where $P$ is the $n\times n$ matrix of transition probabilities [15] . Since $P$ is symmetric for an isotropic Markov chain, we conclude that $\pi $ is the eigenvector corresponding to the eigenvalue 1. Since ${t}_{k}>0$ for all $k=0,1,\cdots ,n-1$ , it follows from (3) that the maximum eigenvalue is simple and attained at $i=n$ : ${v}_{n}={\displaystyle {\sum}_{k=0}^{n-1}}{t}_{k}=1.$ The eigenvector corresponding to this value is

${p}_{n}=\sqrt{\frac{2}{n}}\mathrm{cos}\left(2\text{\pi}k-\frac{\text{\pi}}{4}\right)=\sqrt{\frac{2}{n}}\mathrm{cos}\left(\frac{\text{\pi}}{4}\right)1=\frac{1}{\sqrt{n}}{\left(1,1,\cdots ,1\right)}^{\prime}\mathrm{,}$

which means that the stationary probability of each state is the same, $1/n$ . In fact, one can prove that the stationary probabilities are equal whenever the matrix of transition probabilities is symmetric. ∎
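Theorem 4 can be illustrated numerically. The sketch below builds a hypothetical 6-state isotropic chain (the transition probabilities are our invented example, chosen to decrease with cyclic distance) and checks that the uniform distribution is stationary and that powers of $P$ converge to it.

```python
import numpy as np

n = 6
# hypothetical transition probabilities decreasing in cyclic distance |i-j|;
# the first row is symmetric (t_k = t_{n-k}) and sums to 1
t = np.array([0.40, 0.20, 0.08, 0.04, 0.08, 0.20])
P = np.array([np.roll(t, i) for i in range(n)])   # SC transition matrix

pi = np.full(n, 1.0 / n)
print(np.allclose(pi @ P, pi))    # the uniform distribution is stationary: True

# long-run behavior: P^m converges to the matrix with all entries 1/n
print(np.allclose(np.linalg.matrix_power(P, 50), 1.0 / n))   # True
```

The rate of convergence is governed by the second-largest eigenvalue modulus of $P$, which formula (3) makes explicit for an isotropic chain.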

Returning to the interpretation of a virus epidemic on the globe: if the probability of infection is positive for each pair of locations $\left({x}_{j},{x}_{i}\right)$ , then sooner or later everybody gets infected, and in the long run each location is equally likely to be affected.

3.2. Laplace Equation on a Disk and Generalized Ohm’s Law

The symmetric circulant matrix appears in Ohm’s law on a disk. From elementary physics, we know that by Ohm’s law the difference of voltages at the ends of a wire, $V$ , is the product of the resistivity of the wire, $\rho $ , and the current $C$ ,

$V=\rho C.$ (5)

What is Ohm’s law on a disk when currents and voltages are measured at a finite number $L$ of electrodes on the perimeter? Apparently, the vectors of voltages and currents must be connected through an $L\times L$ matrix with $\rho $ as the resistivity coefficient, since the disk is homogeneous. We term this matrix the Ohm’s matrix, and as we show below, this matrix is symmetric circulant if the electrodes are placed at angles $\eta +2\text{\pi}i/L$ , $i=1,2,\cdots ,L$ , where $\eta $ is an arbitrary offset angle.

The following formulation may be viewed as a simplified scheme of electrical impedance tomography (EIT), in which electromagnetic properties, such as resistivity and permittivity within the body, are reconstructed from measurements of current and voltage on the periphery of the body; more detail can be found in [16] [17] [18] [19] .

We consider a homogeneous disk of radius $R$ with resistivity $\rho $ . The current is applied at the circle with density $J\left(\theta \right)$ , $0<\theta <2\text{\pi}$ . The system is in equilibrium, so ${\int}_{0}^{2\text{\pi}}J\left(\theta \right)\text{d}\theta =0$ . The resulting voltage $V$ , as a function of radius $r$ and angle $\theta $ , in polar coordinates is governed by the Laplace equation:

${\nabla}^{2}V\left(r,\theta \right)=0,\text{}0\le r\le R,\text{}0<\theta \le 2\text{\pi}$ (6)

with the Neumann boundary condition

$\frac{1}{R}{\frac{\text{d}V}{\text{d}r}|}_{r=R}=J\left(\theta \right).$

In reality, the current is applied at a finite number of electrodes and is zero elsewhere on the circle. Typically, $L$ electrodes (usually an even number) are located on the circle, centered at angles ${\theta}_{i}=2\text{\pi}i/L$ , $i=1,2,\cdots ,L$ , with the same half-width, $w$ , measured in radians (the width of the electrodes is fairly small, so they do not overlap). To simplify, we shall assume that the current density flux is uniform over each electrode (the so-called shunt model [20] ). These assumptions specify the step function $J\left(\theta \right)$ as follows:

$J\left(\theta \right)=\begin{cases}{J}_{i}=\text{const}, & \theta \in \left[{\theta}_{i}-w,{\theta}_{i}+w\right],\text{}i=1,2,\cdots ,L,\\ 0, & \text{elsewhere.}\end{cases}$

As shown in [1] , the potential $V$ at angle $\theta $ and distance $r$ from the center in the finite-electrode EIT system is given by the equation

$V\left(r,\theta \right)=\frac{2\rho R}{\text{\pi}}{\displaystyle \underset{i=1}{\overset{L}{\sum}}}{J}_{i}\left[{\displaystyle \underset{n=1}{\overset{\infty}{\sum}}}{\left(\frac{r}{R}\right)}^{n}\frac{1}{{n}^{2}}\mathrm{sin}\left(nw\right)\mathrm{cos}\left(n\left(\theta -{\theta}_{i}\right)\right)\right].$ (7)

Two examples of the potential distribution on a disk computed with this formula are shown in Figure 1 with $L=4$ , $R=1$ and $w=2/\left(5\text{\pi}\right).$ All currents are the same in absolute value but differ in sign (inward arrows indicate negative current and outward arrows indicate positive current). In the example at left, the currents at opposite electrodes have opposite signs, and in the example at right the currents are the same. Clearly, the two current scenarios create quite different voltage distributions in the disk.

In Ohm’s law, we relate the voltage and current on the periphery of the conductor. From (7), we derive how voltages and currents at the electrodes are related, letting $r=R$ and $\theta ={\theta}_{j}$ , which yields

${V}_{j}=\frac{\rho}{\text{\pi}w}{\displaystyle \underset{i=1}{\overset{L}{\sum}}}{C}_{i}\left[{\displaystyle \underset{n=1}{\overset{\infty}{\sum}}}\frac{1}{{n}^{2}}\mathrm{sin}\left(nw\right)\mathrm{cos}\left(n\left({\theta}_{j}-{\theta}_{i}\right)\right)\right],$

where ${C}_{i}=\left(2wR\right){J}_{i}$ is the current at electrode $i$ (the integral of the current density over the width of the electrode), and $C={\left({C}_{1},\cdots ,{C}_{L}\right)}^{\prime}$ is the $L\times 1$ vector of currents.

Finally, in matrix form, Ohm’s law on a homogeneous disk with resistivity $\rho $ and $L$ equidistant electrodes placed at angles ${\theta}_{i}=2\text{\pi}i/L$ for $i=1,2,\cdots ,L$ is given by

$V=\rho TC\mathrm{,}$ (8)

where $T$ is the $L\times L$ matrix with entries

${T}_{ij}=\frac{1}{\text{\pi}w}{\displaystyle \underset{n=1}{\overset{\infty}{\sum}}}\frac{1}{{n}^{2}}\mathrm{sin}\left(nw\right)\mathrm{cos}\left(\frac{2\text{\pi}n}{L}\left(i-j\right)\right).$ (9)

Compared to Ohm’s law on a wire (5), Ohm’s law on a disk (8) contains an $L\times L$ matrix. This matrix (9) will be referred to as the Ohm’s matrix because it relates the vectors of voltages and currents. Below, we prove that this matrix is symmetric circulant.
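The entries (9) are straightforward to compute by truncating the series, which converges like $1/{n}^{2}$ . The following sketch (the function name `ohm_matrix` and the truncation length are our choices) builds $T$ for $L=8$, $w=0.1$ and confirms numerically the SC structure proven in Theorem 5 below.

```python
import numpy as np

def ohm_matrix(L, w, n_terms=20000):
    # entries (9): T_ij = (1/(pi*w)) * sum_n sin(n*w) * cos(2*pi*n*(i-j)/L) / n^2,
    # with the series truncated at n_terms
    n = np.arange(1, n_terms + 1)
    t = np.array([np.sum(np.sin(n * w) * np.cos(2 * np.pi * n * k / L) / n**2)
                  for k in range(L)]) / (np.pi * w)
    return np.array([[t[abs(i - j)] for j in range(L)] for i in range(L)])

T = ohm_matrix(L=8, w=0.1)
print(np.allclose(T, T.T))                                   # symmetric
print(np.allclose(T.sum(axis=1), T.sum(axis=1)[0]))          # equal row sums
print(np.allclose(T, [np.roll(T[0], i) for i in range(8)]))  # circulant
```

All three checks print `True`, in agreement with Theorem 1 #5: the entries depend only on $|i-j|$ and the row sums coincide.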

Theorem 5. The $L\times L$ Ohm’s matrix with elements specified by (9) is a symmetric circulant matrix.

Proof. It is easy to see that $T$ is symmetric, since cosine is an even function. The fact that the sum of the elements in each row is constant follows from two observations: (a) $\mathrm{cos}\left(\frac{2\text{\pi}n}{L}\left(i-j\right)\right)$ depends only on $\left(i-j\right)\mathrm{mod}L$ , because cosine has period $2\text{\pi}$ ; and (b) the set $\left\{\left(i-j\right)\mathrm{mod}L,\text{}j=1,2,\cdots ,L\right\}$ equals $\left\{0,1,\cdots ,L-1\right\}$ and does not depend on $i$ . Therefore,

$\underset{j=1}{\overset{L}{\sum}}\mathrm{cos}\left(\frac{2\text{\pi}n}{L}\left(i-j\right)\right)={\underset{j=0}{\overset{L-1}{\sum}}}\mathrm{cos}\left(\frac{2\text{\pi}n}{L}j\right)$

does not depend on $i$ . Since ${T}_{ij}$ is a function of $\left|i-j\right|$ , it follows from Theorem 1 #5 that matrix (9) is a SC matrix. ∎

Figure 1. The voltage distribution on a homogeneous disk with four electrodes (shown on the circle as wide black segments). Arrows indicate the current supplied at the electrodes: all currents are the same in absolute value but differ by sign. Inward arrows indicate negative and outward arrows indicate positive current.

The Ohm’s matrix possesses unique properties and has been studied previously in connection with electrical impedance tomography [1] [21] [22] [23] . As we know from the previous section, matrix $T$ has ${L}_{u}=\lceil \left(L+1\right)/2\rceil $ unique elements, denoted ${t}_{0},{t}_{1},\cdots ,{t}_{{L}_{u}-1}$ . Their computation gives rise to a new special function, defined below.

Definition 6. Function

$D\left(x\right)=\frac{1}{2\text{\pi}}{\displaystyle \underset{n=1}{\overset{\infty}{\sum}}}\frac{1}{{n}^{2}}\mathrm{sin}\left(2\text{\pi}xn\right)$ (10)

$=-{\displaystyle {\int}_{0}^{x}}\mathrm{ln}\mathrm{sin}\left(\text{\pi}t\right)\text{d}t-x\mathrm{ln}2,\text{}0<x<1$ (11)

is called the D-function.

Note that we provide two forms of the function: the first through a Fourier series and the second through an integral. The equality follows from the known identity ${\sum}_{n=1}^{\infty}{n}^{-1}\mathrm{cos}\left(2\text{\pi}zn\right)=-\mathrm{ln}2-\mathrm{ln}\mathrm{sin}\left(\text{\pi}z\right)$ after integration from 0 to $x$ . Up to scaling, the $D$ -function is the Clausen function, $D\left(x\right)={\mathrm{Cl}}_{2}\left(2\text{\pi}x\right)/\left(2\text{\pi}\right)$ , but it does not appear to reduce to more elementary functions. The function takes zero values at 0, 1/2 and 1 and is odd around $x=1/2$ . Setting the derivative ${D}^{\prime}\left(x\right)=-\mathrm{ln}\mathrm{sin}\left(\text{\pi}x\right)-\mathrm{ln}2$ to zero shows that it attains its maximum at $x=1/6$ and its minimum at $x=5/6$ .
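The agreement between the series form (10) and the integral form (11) can be confirmed numerically. In the sketch below (function names are ours), the logarithmic singularity of $\mathrm{ln}\mathrm{sin}\left(\text{\pi}t\right)$ at $t=0$ is split off analytically so that a Gauss-Legendre rule sees only a smooth integrand.

```python
import numpy as np

def D_series(x, n_terms=200000):
    # Fourier form (10), truncated
    n = np.arange(1, n_terms + 1)
    return np.sum(np.sin(2 * np.pi * x * n) / n**2) / (2 * np.pi)

def D_integral(x, deg=60):
    # integral form (11); split ln sin(pi t) = ln(sin(pi t)/(pi t)) + ln(pi t),
    # where int_0^x ln(pi t) dt = x ln(pi x) - x is taken exactly and the
    # remaining smooth part is handled by Gauss-Legendre quadrature
    nodes, weights = np.polynomial.legendre.leggauss(deg)
    tt = 0.5 * x * (nodes + 1.0)                 # map [-1, 1] -> [0, x]
    smooth = 0.5 * x * np.sum(weights * np.log(np.sin(np.pi * tt) / (np.pi * tt)))
    return -(smooth + x * np.log(np.pi * x) - x) - x * np.log(2)

for x in (0.1, 1 / 6, 0.25, 0.5):
    assert abs(D_series(x) - D_integral(x)) < 1e-5   # the two forms agree
```

The quadrature split is valid for $x$ away from 1, where $\mathrm{sin}\left(\text{\pi}t\right)$ has its second zero.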

By direct examination, it is easy to see that

${t}_{k}=\frac{1}{w}D\left(\frac{k}{L}+\frac{w}{2\text{\pi}}\right)-\frac{1}{w}D\left(\left|\frac{k}{L}-\frac{w}{2\text{\pi}}\right|\right)\text{sign}\left(\frac{k}{L}-\frac{w}{2\text{\pi}}\right),$ (12)

$k=0,1,\cdots ,{L}_{u}-1.$ For example, when the number of electrodes $L=8$ , we have that matrix

$T=\left[\begin{array}{cccccccc}{t}_{0}& {t}_{1}& {t}_{2}& {t}_{3}& {t}_{4}& {t}_{3}& {t}_{2}& {t}_{1}\\ {t}_{1}& {t}_{0}& {t}_{1}& {t}_{2}& {t}_{3}& {t}_{4}& {t}_{3}& {t}_{2}\\ {t}_{2}& {t}_{1}& {t}_{0}& {t}_{1}& {t}_{2}& {t}_{3}& {t}_{4}& {t}_{3}\\ {t}_{3}& {t}_{2}& {t}_{1}& {t}_{0}& {t}_{1}& {t}_{2}& {t}_{3}& {t}_{4}\\ {t}_{4}& {t}_{3}& {t}_{2}& {t}_{1}& {t}_{0}& {t}_{1}& {t}_{2}& {t}_{3}\\ {t}_{3}& {t}_{4}& {t}_{3}& {t}_{2}& {t}_{1}& {t}_{0}& {t}_{1}& {t}_{2}\\ {t}_{2}& {t}_{3}& {t}_{4}& {t}_{3}& {t}_{2}& {t}_{1}& {t}_{0}& {t}_{1}\\ {t}_{1}& {t}_{2}& {t}_{3}& {t}_{4}& {t}_{3}& {t}_{2}& {t}_{1}& {t}_{0}\end{array}\right]$ (13)

is completely specified by 5 unique elements. In Figure 2, we show this matrix as an image for $w=0.1$ (left) and the values of ${t}_{0},{t}_{1},{t}_{2},{t}_{3},{t}_{4}$ for $w=0.05$ , 0.1 and 0.2 (right).

When $w=0.1$ , we can roughly approximate $T$ by the identity matrix, so the disk and wire Ohm’s laws become virtually equivalent: the diagonal elements of the SC matrix are dominant. There is only a weak dependence of ${t}_{k}$ on $w$ for $k>0$ .
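Formula (12) can be checked against the direct series definition of ${t}_{k}$ obtained from (9) with $i-j=k$ . In the sketch below we use the odd extension of $D$ , $D\left(-x\right)=-D\left(x\right)$ , which makes the sign factor in (12) implicit; the function name and truncation lengths are our choices.

```python
import numpy as np

def D(x, n_terms=200000):
    # D-function (10), extended to negative arguments via oddness: D(-x) = -D(x)
    n = np.arange(1, n_terms + 1)
    return np.sign(x) * np.sum(np.sin(2 * np.pi * abs(x) * n) / n**2) / (2 * np.pi)

L, w = 8, 0.1
n = np.arange(1, 200001)
for k in range(5):                                  # L_u = 5 unique elements for L = 8
    # t_k directly from the series (9) with i - j = k
    direct = np.sum(np.sin(n * w) * np.cos(2 * np.pi * n * k / L) / n**2) / (np.pi * w)
    # t_k from formula (12), written via the odd extension of D
    via_D = (D(k / L + w / (2 * np.pi)) + D(w / (2 * np.pi) - k / L)) / w
    assert abs(direct - via_D) < 1e-4
```

The identity behind (12) is the product-to-sum formula $\mathrm{sin}\left(nw\right)\mathrm{cos}\left(2\text{\pi}nk/L\right)=\frac{1}{2}\left[\mathrm{sin}\left(2\text{\pi}n\left(\frac{k}{L}+\frac{w}{2\text{\pi}}\right)\right)+\mathrm{sin}\left(2\text{\pi}n\left(\frac{w}{2\text{\pi}}-\frac{k}{L}\right)\right)\right]$ summed term by term.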

The SC matrix reflects the physical principles of Ohm’s law on a disk with equidistant electrodes:

1) It does not matter how the electrodes are indexed (i.e., which electrode is #1) because matrix $T$ is circulant.

2) Ohm’s law (8) can be rewritten as $C=\sigma {T}^{-1}V$ , where $\sigma $ is the conductivity, because matrix ${T}^{-1}$ is a SC matrix.

3.3. Resistivity Estimation in a Homogeneous Tank

In this section, we apply the generalized Ohm’s law on a disk to reconstruct the

Figure 2. Matrix (13) shown as an image and unique t elements with three values of ω.

Figure 3. Left panel: EIT experiment with a tank filled with gelatin. The current is injected at 16 equally spaced electrodes (the color of the clips does not matter), and the voltage is measured at the same electrodes; 15 current patterns are applied at 17 frequencies. Right panel: Estimates of the resistivity of gelatin using the generalized Ohm’s law on a disk, with standard error bars.

electric resistivity of gelatin in a lab tank using EIT hardware (see Figure 3). Currents are injected at $L=16$ equally spaced electrodes at 17 different frequencies varying from 10 kHz to 3359 kHz. The diameter of the tank was 20 cm and the width of each electrode was 1 cm, so $w=0.5/10=0.05$ radians (the 16 electrodes cover about $25\%$ of the tank circumference). For each frequency, 15 sinusoidal patterns (vectors) of currents are injected and the resulting voltages are measured; more detail about the hardware design is found in [24] .

Let $V$ and $C$ be $16\times 15$ matrices of voltages and currents (the $i$ th column of $C$ is the $i$ th pattern). Then, by Ohm’s law, the matrix of voltages can be expressed through a linear regression model with an unknown coefficient,

$V=\rho AC+\epsilon ,$

where $A=T$ is the Ohm’s matrix (9) and $\epsilon $ is the $16\times 15$ matrix of independent and identically distributed errors with zero mean and constant variance ${\sigma}^{2}$ . The least squares estimate of $\rho $ minimizes ${\Vert V-\rho AC\Vert}^{2}$ and can be written in matrix form as

$\stackrel{^}{\rho}=\frac{\text{tr}\left({V}^{\prime}AC\right)}{\text{tr}\left({C}^{\prime}{A}^{2}C\right)}\mathrm{.}$

The estimate of the variance is

$\text{var}\left(\stackrel{^}{\rho}\right)=\frac{{\stackrel{^}{\sigma}}^{2}}{\text{tr}\left({C}^{\prime}{A}^{2}C\right)}\mathrm{,}$

where ${\stackrel{^}{\sigma}}^{2}={\Vert V-\stackrel{^}{\rho}AC\Vert}^{2}/\left(16\times 15-1\right)$ is an unbiased estimate of ${\sigma}^{2}$ [25] . The results of estimating the gelatin resistivity, with $SE=\sqrt{\mathrm{var}\left(\stackrel{^}{\rho}\right)}$ bars, for the 17 frequencies estimated separately are shown in the right panel of Figure 3. Note that 1) the resistivity estimate drops with increasing frequency and 2) the uncertainty of the resistivity estimate (SE) is about 1 cm/S.
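As a sanity check of the estimator, the sketch below runs the least squares formulas on synthetic data: a hypothetical diagonally dominant SC matrix stands in for $A$ (the values are our invention, not the tank measurements), and $V$ is generated from Ohm’s law plus small noise.

```python
import numpy as np

rng = np.random.default_rng(0)
L, npat = 16, 15

# hypothetical diagonally dominant SC matrix playing the role of A = T
half = np.array([2.0, 0.5, 0.2, 0.1, 0.05, 0.03, 0.02, 0.01, 0.01])  # t_0..t_8
t = np.concatenate([half, half[-2:0:-1]])          # mirror so that t_k = t_{L-k}
A = np.array([np.roll(t, i) for i in range(L)])

rho_true = 3.0
C = rng.standard_normal((L, npat))                 # 15 synthetic current patterns
V = rho_true * A @ C + 0.01 * rng.standard_normal((L, npat))   # Ohm's law + noise

# least squares estimate and its standard error, as in the formulas above
rho_hat = np.trace(V.T @ A @ C) / np.trace(C.T @ A @ A @ C)
sigma2_hat = np.sum((V - rho_hat * A @ C)**2) / (L * npat - 1)
se = np.sqrt(sigma2_hat / np.trace(C.T @ A @ A @ C))
print(rho_hat, se)   # rho_hat is close to rho_true = 3.0
```

With 240 observations and a single unknown coefficient, the estimate recovers $\rho $ to well within one standard error.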

References

[1] Demidenko, E., Hartov, A. and Paulsen, K. (2004) Statistical Estimation of Resistance/Conductance by Electrical Impedance Tomography Measurements. IEEE Transactions on Medical Imaging, 23, 829-838.

https://doi.org/10.1109/TMI.2004.827965

[2] Fang, W. and Cumberbatch, E. (2005) Matrix Properties of Data from Electrical Capacitance Tomography. Journal of Engineering Mathematics, 51, 127-146.

https://doi.org/10.1007/s10665-004-1589-4

[3] Bellman, R. (1995) Introduction to Matrix Analysis. SIAM, Philadelphia.

[4] Graybill, F.A. (1969) Matrices with Applications in Statistics. Wadsworth, Belmont.

[5] Davis, P.J. (1979) Circulant Matrices. Wiley, New York.

[6] Berlin, T.H. and Kac, M. (1952) The Spherical Model of a Ferromagnet. Physical Review, 86, 821-835.

https://doi.org/10.1103/PhysRev.86.821

[7] Van Loan, C. (1992) Computational Frameworks for the Fast Fourier Transform. SIAM, Philadelphia.

https://doi.org/10.1137/1.9781611970999

[8] Park, R.-H. (2002) Comments on “Optimal Approximation of Uniformly Rotated Images: Relationship between Karhunen-Loève Expansion and Discrete Cosine Transform”. IEEE Transactions on Image Processing, 11, 332-334.

https://doi.org/10.1109/83.988965

[9] Uenohara, M. and Kanade, T. (1998) Optimal Approximation of Uniformly Rotated Images: Relationship to Karhunen-Loève Expansion and Discrete Cosine Transform. IEEE Transactions on Image Processing, 7, 116-119.

https://doi.org/10.1109/83.650856

[10] Gray, R.M. (2006) Toeplitz and Circulant Matrices: A Review. Foundations and Trends in Communication Theory, 2, 155-239.

https://doi.org/10.1561/0100000006

[11] Jogan, M., Zagar, E. and Leonardis, A. (2003) Karhunen-Loève Expansion of a Set of Rotated Templates. IEEE Transactions on Image Processing, 12, 817-825.

https://doi.org/10.1109/TIP.2003.813141

[12] Minh, D.L. (2001) Applied Probability Models. Duxbury, Pacific Grove.

[13] Ross, S.M. (1993) Introduction to Probability Models. 5th Edition, Academic Press, San Diego.

[14] Norris, J.R. (1997) Markov Chains. Cambridge University Press, Cambridge.

https://doi.org/10.1017/CBO9780511810633

[15] Taylor, H.M. and Karlin, S. (1998) An Introduction to Stochastic Modeling. 3rd Edition, Academic Press, San Diego.

[16] Borcea, L. (2002) Electrical Impedance Tomography. Inverse Problems, 18, 99-136.

https://doi.org/10.1088/0266-5611/18/6/201

[17] Cheney, M., Isaacson, D. and Newell, J.C. (1999) Electrical Impedance Tomography. SIAM Review, 41, 85-101.

https://doi.org/10.1137/S0036144598333613

[18] Holder, D.S. (2005) Electrical Impedance Tomography. Methods, History and Applications. Institute of Physics, Bristol and Philadelphia.

[19] Isaacson, D. (1986) Distinguishability of Conductivities by Electric Current Computed Tomography. IEEE Transactions on Medical Imaging, 5, 91-95.

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4307752

[20] Cheng, K.-S., Isaacson, D., Newell, J.C. and Gisser, D.G. (1989) Electrode Models for Electric-Current Computed Tomography. IEEE Transactions on Biomedical Engineering, 36, 918-924.

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=35300

https://doi.org/10.1109/10.35300

[21] Demidenko, E. (2006) Separable Laplace Equation, Magic Toeplitz Matrix, and Generalized Ohm’s Law. Applied Mathematics and Computation, 181, 1313-1327.

https://doi.org/10.1016/j.amc.2006.02.030

[22] Demidenko, E., Borsic, A., Wan, Y.Q., Halter, R.J. and Hartov, A. (2011) Statistical Estimation of EIT Electrode Contact Impedance Using a Magic Toeplitz Matrix. IEEE Transactions on Biomedical Engineering, 58, 2194-2201.

https://doi.org/10.1109/TBME.2011.2125790

[23] Demidenko, E. (2011) An Analytic Solution to the Homogeneous EIT Problem on the 2D Disk and Its Application to Estimation of Electrode Contact Impedances. Physiological Measurement, 32, 1453-1471.

http://iopscience.iop.org/article/10.1088/0967-3334/32/9/008/pdf

https://doi.org/10.1088/0967-3334/32/9/008

[24] Hartov, A., Mazzarese, R., Reiss, F., Kerner, T., Osterman, S., Williams, D. and Paulsen, K. (2000) A Multichannel Continuously Selectable Multi-Frequency Electrical Impedance Spectroscopy Measurement System. IEEE Transactions on Biomedical Engineering, 47, 49-58.

https://doi.org/10.1109/10.817619