Mean Absolute Deviations about the Mean, the Cut Norm and Taxicab Correspondence Analysis

1. Introduction

Optimization has two faces: minimization of a loss function or maximization of a gain function. The following two well-known dispersion measures, the variance (s^{2}) and the mean absolute deviations about the median (LAD), are optimal because each minimizes a different loss function

${s}^{2}=\frac{\sum_{i=1}^{n}{\left({y}_{i}-\bar{y}\right)}^{2}}{n}\le \frac{\sum_{i=1}^{n}{\left({y}_{i}-c\right)}^{2}}{n}$ (1)

and

$LAD=\frac{\sum_{i=1}^{n}\left|{y}_{i}-median\right|}{n}\le \frac{\sum_{i=1}^{n}\left|{y}_{i}-c\right|}{n},$ (2)

where ${y}_{1},{y}_{2},\cdots,{y}_{n}$ is a sample of $n$ values and $c$ is an arbitrary real number. To our knowledge, no optimality property is known for the mean absolute deviations about the mean defined by

$d=\frac{\sum_{i=1}^{n}\left|{y}_{i}-\bar{y}\right|}{n},$ (3)

even though it has been studied for modeling purposes in several papers; see, among others, [1] [2] [3]. [1] and [2] essentially compare the dispersion measures d and s in the statistical literature, with a clear preference for d because of its simple interpretability. The authors in [3] compare the statistics d and LAD with the Gini dispersion measure and conclude that “The downside of using (d and LAD) is that robustness is achieved by omitting the information on the intra-group variability”.

The inequality $LAD\le d\le s$ is well known and is a corollary of the Lyapunov inequality; see for instance [4]. The first part $LAD\le d$ follows from (2), and the second part $d\le s$ follows from $n\left({s}^{2}-{d}^{2}\right)=\sum_{i=1}^{n}{w}_{i}^{2}-n{\bar{w}}^{2}\ge 0$, where ${w}_{i}=\left|{y}_{i}-\bar{y}\right|$, so that $\sum_{i=1}^{n}{w}_{i}^{2}=n{s}^{2}$ and $\bar{w}=d$.
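As a numerical illustration, the chain $LAD\le d\le s$ can be checked directly; the following Python sketch (the sample values are arbitrary) computes the three measures with the divisor n used in (1)-(3):

```python
# Numerical check of the chain LAD <= d <= s for an arbitrary sample.
from statistics import mean, median

def dispersion_measures(y):
    """Return (LAD, d, s) for a sample y, with divisor n as in (1)-(3)."""
    n = len(y)
    ybar, med = mean(y), median(y)
    lad = sum(abs(v - med) for v in y) / n
    d = sum(abs(v - ybar) for v in y) / n
    s = (sum((v - ybar) ** 2 for v in y) / n) ** 0.5
    return lad, d, s

y = [2.0, 3.5, 5.0, 7.0, 30.0]   # a skewed sample with one large value
lad, d, s = dispersion_measures(y)
assert lad <= d <= s
```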

The statistic d is the dispersion measure used in taxicab correspondence analysis (TCA), an L_{1} variant of correspondence analysis (CA); see [5]. One explanation for the robustness of d is the boundedness of the relative contribution of a point; see [6] - [11]. This paper provides further details on d, relating it to the cut norm and to balanced 2-blocks seriation for double-centered data. [11] argued that sparse contingency tables are often better visualized by TCA; here, we present an analysis of a 0-1 affinity matrix, comparing Figure 3 and Figure 4, where TCA produces a much more interpretable map than CA. We see that repetition and experience play an indispensable and illuminating role in data analysis.

This paper is organized as follows. In Section 2, we show the optimality of the d, s^{2} and LAD statistics based on maximizing gain functions; moreover, d beats s^{2} and LAD with respect to the relative contribution of a point (a robustness measure, based on geometry, used in French data analysis circles). This results from Lemma 1, which states that for a centered vector nd equals twice its cut norm. Sections 3 and 4 generalize the optimality result of d to double-centered and triple-centered arrays; balanced 2-blocks seriation of a matrix, with an application to TCA, is discussed in Section 3. We conclude in Section 5.

2. Optimality of d

We consider the centered vector $x=y-\bar{y}{1}_{n}$, where ${1}_{n}$ is the vector of n ones. Let $I=\left\{1,2,\cdots,n\right\}$ and let $I=S\cup \bar{S}$ be a binary partition of I. We have

$\sum_{i\in I}{x}_{i}=0=\sum_{i\in S}{x}_{i}+\sum_{i\in \bar{S}}{x}_{i};$

from which we deduce

$\sum_{i\in S}{x}_{i}=-\sum_{i\in \bar{S}}{x}_{i}.$ (4)

We define the cut norm of a centered vector $x$ to be

$\|x\|_{\square}=\underset{S}{\max}\sum_{i\in S}{x}_{i}=\sum_{i\in {S}_{opt}}{x}_{i}=-\sum_{i\in {\bar{S}}_{opt}}{x}_{i},$

where ${S}_{opt}=\left\{i:{x}_{i}\ge 0\text{ for }i=1,2,\cdots,n\right\}$. By casting the computation of d as a combinatorial maximization problem, we have the following main result describing the optimality of the d-statistic over all elements of the power set of I.

Lemma 1: (2-equal-parts property) $nd=2\|x\|_{\square}\ge 2\sum_{i\in S}{x}_{i}$ for all $S\subset I$.

Proof:

$\begin{array}{c}nd=\sum_{i=1}^{n}\left|{x}_{i}\right|=\sum_{i\in {S}_{opt}}{x}_{i}-\sum_{i\in {\bar{S}}_{opt}}{x}_{i}\\ =2\|x\|_{\square}\quad \text{by (4)}\\ \ge 2\sum_{i\in S}{x}_{i}\quad \text{for all }S\subset I.\end{array}$
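Lemma 1 can be verified by exhaustive search over the power set of I; a small Python sketch (the sample values are arbitrary):

```python
# Brute-force check of Lemma 1: for a centered vector x, n*d = 2*||x||_cut,
# and S_opt = {i : x_i >= 0} attains the maximum over all subsets S.
from itertools import chain, combinations
from statistics import mean

def cut_norm_vector(x):
    """Cut norm of a centered vector: max over subsets S of sum_{i in S} x_i."""
    idx = range(len(x))
    subsets = chain.from_iterable(combinations(idx, r) for r in range(len(x) + 1))
    return max(sum(x[i] for i in S) for S in subsets)

y = [1.0, 4.0, 6.0, 9.0, 15.0]
x = [v - mean(y) for v in y]              # centered vector
n = len(y)
d = sum(abs(v) for v in x) / n            # mean absolute deviations about the mean
assert abs(n * d - 2 * cut_norm_vector(x)) < 1e-12
# The maximizing subset is the set of nonnegative entries:
assert abs(cut_norm_vector(x) - sum(v for v in x if v >= 0)) < 1e-12
```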

Corollary 1: $d\ge {x}^{\prime}u/n$ for all $u\in {\left\{-1,1\right\}}^{n}$.

Proof: Defining ${u}_{opt}\left(i\right)=1$ if $i\in {S}_{opt}$ and ${u}_{opt}\left(i\right)=-1$ if $i\in {\bar{S}}_{opt}$, we get $d={x}^{\prime}{u}_{opt}/n\ge {x}^{\prime}u/n$.

Corollary 2: $LAD\ge {\left(y-median\,{1}_{n}\right)}^{\prime}u/n$ for all $u\in {\left\{-1,1\right\}}^{n}$.

Corollary 2 shows that LAD has a second optimality property. We emphasize that the optimizing function in (2) is a univariate loss function of $c\in \mathbb{R}$, while the optimizing function in Corollary 2 is a multivariate gain function of $u\in {\left\{-1,1\right\}}^{n}$.

A similar result also holds for the variance in (1), based on the Cauchy-Schwarz inequality; it is stated in Lemma 2.

Lemma 2: $s=\|\left(y-\bar{y}{1}_{n}\right)/\sqrt{n}\|_{2}\ge {\left(y-\bar{y}{1}_{n}\right)}^{\prime}u/\sqrt{n}$ for ${u}^{\prime}u=1$.

We note that Corollaries 1 and 2 and Lemma 2 are particular cases of the Hölder inequality; see [11].

Definition 1: We define the relative contribution of an element ${y}_{i}$ to d, s^{2} and LAD, respectively, to be

$R{C}_{d}\left({y}_{i}\right)=\frac{\left|{y}_{i}-\bar{y}\right|}{nd},$

$R{C}_{{s}^{2}}\left({y}_{i}\right)=\frac{{\left|{y}_{i}-\bar{y}\right|}^{2}}{n{s}^{2}},$

$R{C}_{LAD}\left({y}_{i}\right)=\frac{\left|{y}_{i}-median\right|}{n\,LAD}.$

Then the following inequalities hold:

$0\le R{C}_{d}\left({y}_{i}\right)\le 0.5,$

$0\le R{C}_{{s}^{2}}\left({y}_{i}\right)<1,$

$0\le R{C}_{LAD}\left({y}_{i}\right)\le 1;$

from which we conclude that, based on the relative contribution criterion, d is the most robust of the three dispersion measures, because its relative contributions are bounded above by 0.5.

We note that the inequality $0\le R{C}_{{s}^{2}}\left({y}_{i}\right)<1$ is a weaker variant of the Laguerre-Samuelson inequality; see for instance [13], an MS thesis presenting nine different proofs.

We have

Definition 2: An element ${x}_{i}={y}_{i}-\bar{y}$ is a heavyweight if $R{C}_{d}\left({y}_{i}\right)=0.5$; that is, $\left|{x}_{i}\right|=\left|{y}_{i}-\bar{y}\right|=nd/2$.

We note that a heavyweight element attains the upper bound of $R{C}_{d}\left({y}_{i}\right)$, but it never attains the upper bounds of $R{C}_{{s}^{2}}\left({y}_{i}\right)$ and $R{C}_{LAD}\left({y}_{i}\right)$.
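The contrast between the three bounds can be seen on a sample with a single extreme point; such a point is a heavyweight for d, with relative contribution exactly 0.5, while its contributions to s^{2} and LAD approach, without attaining, their bounds. A small sketch (the sample values are arbitrary):

```python
# Relative contributions (Definition 1) on a sample whose extreme point is a
# heavyweight for d: four small values and one large one.
from statistics import mean, median

def relative_contributions(y):
    """Return the lists RC_d, RC_{s^2}, RC_LAD for each element of y."""
    ybar, med = mean(y), median(y)
    nd = sum(abs(v - ybar) for v in y)
    ns2 = sum((v - ybar) ** 2 for v in y)
    nlad = sum(abs(v - med) for v in y)
    rc_d = [abs(v - ybar) / nd for v in y]
    rc_s2 = [(v - ybar) ** 2 / ns2 for v in y]
    rc_lad = [abs(v - med) / nlad for v in y]
    return rc_d, rc_s2, rc_lad

y = [0.0, 0.0, 0.0, 1.0, 1000.0]
rc_d, rc_s2, rc_lad = relative_contributions(y)
assert abs(max(rc_d) - 0.5) < 1e-9         # the outlier is a heavyweight for d
assert max(rc_s2) < 1 and max(rc_lad) < 1  # the other bounds are not attained
```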

3. 2-Way Interactions of a Correspondence Matrix

Let $P=\left({p}_{ij}\right)$ be a correspondence matrix; that is, ${p}_{ij}\ge 0$ for $i\in I$ and $j\in J=\left\{1,2,\cdots,m\right\}$, and $\sum_{j=1}^{m}\sum_{i=1}^{n}{p}_{ij}=1$. As usual, we define ${p}_{i\ast}=\sum_{j=1}^{m}{p}_{ij}$ and ${p}_{\ast j}=\sum_{i=1}^{n}{p}_{ij}$. Let ${P}_{1}=\left({x}_{ij}={p}_{ij}-{p}_{i\ast}{p}_{\ast j}\right)$ for $i\in I$ and $j\in J$; then ${P}_{1}$ is the residual matrix of $P$ with respect to the independence model $\left({p}_{i\ast}{p}_{\ast j}\right)$. In the jargon of statistics, the cell ${x}_{ij}$ represents the multiplicative 2-way interaction of the cell $\left(i,j\right)\in I\times J$. ${P}_{1}$ is double-centered:

${P}_{1}{1}_{m}={0}_{n}\quad \text{and}\quad {P}_{1}^{\prime}{1}_{n}={0}_{m}.$ (5)

From (5) we get

$\sum_{i\in S}{x}_{ij}=-\sum_{i\in \bar{S}}{x}_{ij}\quad \text{for }j\in J,$ (6)

$\sum_{j\in T}{x}_{ij}=-\sum_{j\in \bar{T}}{x}_{ij}\quad \text{for }i\in I,$ (7)

for $T\subset J$. From (6) and (7), we get

$\sum_{j\in T}\sum_{i\in S}{x}_{ij}=-\sum_{j\in T}\sum_{i\in \bar{S}}{x}_{ij}$ (8)

$=-\sum_{i\in S}\sum_{j\in \bar{T}}{x}_{ij}$ (9)

$=\sum_{i\in \bar{S}}\sum_{j\in \bar{T}}{x}_{ij}.$ (10)

We define the cut norm of ${P}_{1}$ to be

$\|{P}_{1}\|_{\square}=\underset{S,T}{\max}\sum_{j\in T}\sum_{i\in S}{x}_{ij}=\sum_{j\in {T}_{opt}}\sum_{i\in {S}_{opt}}{x}_{ij}.$

The cut norm $\|{P}_{1}\|_{\square}$ is a well-known quantity in theoretical computer science because of its relationship to the famous Grothendieck inequality, which is based on $\|{P}_{1}\|_{\infty \to 1}$; see, among others, [14].

The matrix ${P}_{1}$ can be considered the starting point of taxicab correspondence analysis, an L_{1} variant of correspondence analysis; see [5]. The optimization criterion in TCA of $P$ or ${P}_{1}$ is based on the taxicab matrix norm, a combinatorial optimization problem:

$\begin{array}{c}{\delta}_{1}=\|{P}_{1}\|_{\infty \to 1}=\|{P}_{1}^{\prime}\|_{\infty \to 1}\\ =\underset{u\in {\mathbb{R}}^{m}}{\max}\frac{\|{P}_{1}u\|_{1}}{\|u\|_{\infty}}=\underset{v\in {\mathbb{R}}^{n}}{\max}\frac{\|{P}_{1}^{\prime}v\|_{1}}{\|v\|_{\infty}}=\underset{u\in {\mathbb{R}}^{m},v\in {\mathbb{R}}^{n}}{\max}\frac{{v}^{\prime}{P}_{1}u}{\|u\|_{\infty}\|v\|_{\infty}},\\ =\max \|{P}_{1}u\|_{1}\quad \text{subject to }u\in {\left\{-1,+1\right\}}^{m},\\ =\max \|{P}_{1}^{\prime}v\|_{1}\quad \text{subject to }v\in {\left\{-1,+1\right\}}^{n},\end{array}$

$=\max {v}^{\prime}{P}_{1}u\quad \text{subject to }u\in {\left\{-1,+1\right\}}^{m},v\in {\left\{-1,+1\right\}}^{n},$ (11)

$={v}_{1}^{\prime}{P}_{1}{u}_{1}.$ (12)

In data analysis, the vectors ${v}_{1}$ and ${u}_{1}$ are interpreted as the first taxicab principal axes and ${\delta}_{1}$ as the first taxicab dispersion. We compute the projections of the rows (resp. the columns) of ${P}_{1}$ on the first taxicab principal axis ${u}_{1}$ (resp. ${v}_{1}$) as

${a}_{1}={P}_{1}{u}_{1},$ (13)

${b}_{1}={P}_{1}^{\prime}{v}_{1}.$ (14)

Equation (12) implies

${v}_{1}=sign\left({a}_{1}\right),$ (15)

${u}_{1}=sign\left({b}_{1}\right),$ (16)

named the transition formulas; see [5] and [11]. We also note the following identities:

${1}_{n}^{\prime}{a}_{1}=0\quad \text{and}\quad {\delta}_{1}=\|{a}_{1}\|_{1},$ (17)

${1}_{m}^{\prime}{b}_{1}=0\quad \text{and}\quad {\delta}_{1}=\|{b}_{1}\|_{1}.$ (18)
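The transition formulas (13)-(16) suggest an alternating ascent for computing the first axis: start from a trial axis u, apply (13) and (15), then (14) and (16), and stop when ${\delta}_{1}=\|{b}_{1}\|_{1}$ in (18) no longer increases. The sketch below is illustrative (the multi-start strategy, one start per row, is our choice and not necessarily the exact algorithm of [5]); it finds a local maximum of criterion (11):

```python
# Alternating ascent on the transition formulas (13)-(16) for the first
# taxicab principal axis of a double-centered matrix P1 (a sketch).
def sign(t):
    return 1 if t >= 0 else -1

def matvec(A, u):
    return [sum(x * y for x, y in zip(row, u)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def first_taxicab_axis(P1, max_iter=100):
    """Return (delta_1, u_1, v_1) for a double-centered matrix P1."""
    best = (0.0, None, None)
    for row in P1:                        # start from the sign pattern of a row
        u = [sign(t) for t in row]
        delta = 0.0
        for _ in range(max_iter):
            a = matvec(P1, u)             # (13): row projections
            v = [sign(t) for t in a]      # (15)
            b = matvec(transpose(P1), v)  # (14): column projections
            u = [sign(t) for t in b]      # (16)
            nd = sum(abs(t) for t in b)   # (18): delta_1 = ||b_1||_1
            if nd <= delta + 1e-12:
                break                     # no further increase: converged
            delta = nd
        if delta > best[0]:
            best = (delta, u, v)
    return best

delta1, u1, v1 = first_taxicab_axis([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])
assert delta1 == 8
```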

Using the above results, we get the following

Lemma 3: (4-equal-parts property) ${\delta}_{1}=\|{P}_{1}\|_{\infty \to 1}=4\|{P}_{1}\|_{\square}\ge 4\sum_{j\in T}\sum_{i\in S}{x}_{ij}$ for all $S\subset I$ and $T\subset J$.

In data analysis, Lemma 3 implies a balanced 2-blocks seriation of ${P}_{1}$; see the example in Section 3.1. The subsets ${T}_{opt}$ and ${S}_{opt}$ are positively associated and $\|{P}_{1}\|_{\square}=\sum_{j\in {T}_{opt}}\sum_{i\in {S}_{opt}}{x}_{ij}$; similarly, the subsets ${\bar{T}}_{opt}$ and ${\bar{S}}_{opt}$ are positively associated and $\|{P}_{1}\|_{\square}=\sum_{j\in {\bar{T}}_{opt}}\sum_{i\in {\bar{S}}_{opt}}{x}_{ij}$. In contrast, the subsets ${\bar{T}}_{opt}$ and ${S}_{opt}$ are negatively associated and $\|{P}_{1}\|_{\square}=-\sum_{j\in {\bar{T}}_{opt}}\sum_{i\in {S}_{opt}}{x}_{ij}$; similarly, the subsets ${T}_{opt}$ and ${\bar{S}}_{opt}$ are negatively associated and $\|{P}_{1}\|_{\square}=-\sum_{j\in {T}_{opt}}\sum_{i\in {\bar{S}}_{opt}}{x}_{ij}$. [15] presents an interesting overview of seriation and block seriation.
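For small matrices, Lemma 3 can be verified exhaustively; the sketch below enumerates all subset pairs (S, T) for the cut norm and all sign vectors u for the $\infty \to 1$ norm (the hard-coded test matrix is arbitrary but double-centered; the brute force is exponential and feasible only for tiny tables):

```python
# Exhaustive verification of Lemma 3 on a small double-centered matrix:
# delta_1 = ||P1||_{inf->1} = 4 * ||P1||_cut.
from itertools import chain, combinations, product

def subsets(k):
    return chain.from_iterable(combinations(range(k), r) for r in range(k + 1))

def cut_norm_matrix(X):
    """max over subset pairs (S, T) of the block sum."""
    n, m = len(X), len(X[0])
    return max(sum(X[i][j] for i in S for j in T)
               for S in subsets(n) for T in subsets(m))

def inf_to_1_norm(X):
    """max over sign vectors u of ||X u||_1."""
    n, m = len(X), len(X[0])
    return max(sum(abs(sum(X[i][j] * u[j] for j in range(m))) for i in range(n))
               for u in product((-1, 1), repeat=m))

X = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]   # rows and columns sum to zero
assert inf_to_1_norm(X) == 4 * cut_norm_matrix(X)
```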

Analogously to Definitions 1 and 2, we get

Definition 3: The relative contribution of the row i to ${\delta}_{1}$ (respectively of the column j to ${\delta}_{1}$) is

$R{C}_{{\delta}_{1}}\left(row\text{ }i\right)=\frac{\left|{a}_{1}\left(i\right)\right|}{{\delta}_{1}}\quad \text{and}\quad R{C}_{{\delta}_{1}}\left(col\text{ }j\right)=\frac{\left|{b}_{1}\left(j\right)\right|}{{\delta}_{1}}.$

We have

$0\le R{C}_{{\delta}_{1}}\left(row\text{ }i\right)\le 0.5\quad \text{and}\quad 0\le R{C}_{{\delta}_{1}}\left(col\text{ }j\right)\le 0.5.$

Definition 4: 1) On the first taxicab principal axis, the row i is heavyweight if $R{C}_{{\delta}_{1}}\left(row\text{ }i\right)=0.5$, and the column j is heavyweight if $R{C}_{{\delta}_{1}}\left(col\text{ }j\right)=0.5$.

2) On the first taxicab principal axis, the cell $\left(i,j\right)$ is heavyweight if and only if both row i and column j are heavyweights; in this case $R{C}_{{\delta}_{1}}\left({p}_{ij}-{p}_{i\ast}{p}_{\ast j}\right)=\frac{\left|{p}_{ij}-{p}_{i\ast}{p}_{\ast j}\right|}{{\delta}_{1}}=0.25$.

For an application of Definitions 3 and 4 see [6].

Using Wedderburn’s rank-1 reduction rule, see [11], we construct the 2nd residual matrix ${P}_{2}=\left({x}_{ij}={p}_{ij}-{p}_{i\ast}{p}_{\ast j}-\frac{{a}_{1}\left(i\right){b}_{1}\left(j\right)}{{\delta}_{1}}\right)$, which is also double-centered, and repeat the above procedure. After $k=rank\left({P}_{1}\right)$ iterations, we decompose the correspondence matrix $P$ into $\left(k+1\right)$ bilinear parts

${p}_{ij}={p}_{i\ast}{p}_{\ast j}+\sum_{\alpha =1}^{k}\frac{{a}_{\alpha}\left(i\right){b}_{\alpha}\left(j\right)}{{\delta}_{\alpha}},$

named the taxicab singular value decomposition; it can be rewritten, similarly to the data reconstruction formula in correspondence analysis (CA), as

${p}_{ij}={p}_{i\ast}{p}_{\ast j}\left(1+\sum_{\alpha =1}^{k}\frac{{f}_{\alpha}\left(i\right){g}_{\alpha}\left(j\right)}{{\delta}_{\alpha}}\right),$

where in TCA

${f}_{\alpha}\left(i\right)={a}_{\alpha}\left(i\right)/{p}_{i\ast}\quad \text{and}\quad {g}_{\alpha}\left(j\right)={b}_{\alpha}\left(j\right)/{p}_{\ast j}.$ (19)

We note that Equations (5) through (18) are valid for higher residual correspondence matrices ${P}_{\alpha}$ for $\alpha =\mathrm{1,}\cdots \mathrm{,}k$.
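The deflation loop above can be sketched end to end on a toy correspondence matrix: double-center, extract an axis (here by exhaustive search over u, which is exact but exponential, unlike the iterative algorithms of the TCA literature), subtract the rank-1 term as in the construction of ${P}_{2}$, and check the reconstruction of $P$. Function and variable names are illustrative:

```python
# Sketch of the taxicab singular value decomposition by Wedderburn rank-1
# deflation on a toy correspondence matrix.
from itertools import product

def taxicab_svd(P, k):
    """Return the bilinear terms (delta, a, b) and the reconstruction of P."""
    n, m = len(P), len(P[0])
    r = [sum(row) for row in P]                              # p_{i*}
    c = [sum(P[i][j] for i in range(n)) for j in range(m)]   # p_{*j}
    X = [[P[i][j] - r[i] * c[j] for j in range(m)] for i in range(n)]
    terms = []
    for _ in range(k):
        u = max(product((-1, 1), repeat=m),
                key=lambda u: sum(abs(sum(X[i][j] * u[j] for j in range(m)))
                                  for i in range(n)))
        a = [sum(X[i][j] * u[j] for j in range(m)) for i in range(n)]   # (13)
        v = [1 if t >= 0 else -1 for t in a]                            # (15)
        b = [sum(X[i][j] * v[i] for i in range(n)) for j in range(m)]   # (14)
        delta = sum(abs(t) for t in a)                                  # (17)
        if delta < 1e-12:
            break                         # rank exhausted: residual is zero
        terms.append((delta, a, b))
        # Wedderburn deflation: P_{alpha+1} = P_alpha - a b' / delta
        X = [[X[i][j] - a[i] * b[j] / delta for j in range(m)]
             for i in range(n)]
    recon = [[r[i] * c[j] + sum(a[i] * b[j] / d for d, a, b in terms)
              for j in range(m)] for i in range(n)]
    return terms, recon

P = [[0.2, 0.1], [0.1, 0.2], [0.2, 0.2]]   # entries sum to 1
terms, recon = taxicab_svd(P, 2)
assert all(abs(recon[i][j] - P[i][j]) < 1e-9 for i in range(3) for j in range(2))
```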

CA and TCA satisfy an important invariance property: columns (or rows) with identical profiles (conditional probabilities) receive identical factor scores ${g}_{\alpha}\left(j\right)$ (or ${f}_{\alpha}\left(i\right)$ ). The factor scores are used in the graphical displays. Moreover, merging of identical profiles does not change the results of the data analysis: This is named the principle of equivalent partitioning by [16]; it includes the famous invariance property named principle of distributional equivalence, on which [17] developed CA.

In the next subsections we apply taxicab correspondence analysis (TCA) to two data sets. The first is a small contingency table taken from [18], for which we present the details of the computation of the first two principal dimensions; in particular, we highlight the balanced 2-blocks seriation of the residual matrices ${P}_{\alpha}$ for $\alpha =1,2$ during the computation of each principal dimension. The second is a network affinity matrix from [19], who applied CA to explore it visually; on this data set we compare the CA and TCA maps, highlighting the robustness of the TCA map to rare observations on the second principal dimension.

The theory of CA can be found, among others, in [17] [20] - [25]; the recent book by [18] presents a panoramic review of CA and related methods.

3.1. Selikoff’s Asbestos Data Set

Table 1, taken from [18], is a contingency table Y of size $5\times 4$ cross-classifying 1117 New York workers with occupational exposure to asbestos; the workers are classified according to the number of years of exposure (five categories) and the asbestos grade diagnosed (four categories). Figure 1 and Figure 2 display the maps obtained by CA and TCA; there is almost no difference between them. Here, we present the details of the computation for TCA. Table 2 presents the residual correspondence table ${P}_{1}$ with respect to the independence model, where we see a diagonal 2-blocks seriation of ${P}_{1}$: ${S}_{opt}=\left\{\text{row 1},\text{row 2}\right\}$ is positively associated with ${T}_{opt}=\left\{\text{column 1}\right\}$ and the cut norm $\|{P}_{1}\|_{\square}=\left(0.1181+0.0151\right)=0.1332$; similarly, ${\bar{S}}_{opt}=\left\{\text{row 3},\text{row 4},\text{row 5}\right\}$ is positively associated with ${\bar{T}}_{opt}=\left\{\text{column 2},\text{column 3},\text{column 4}\right\}$ and $\|{P}_{1}\|_{\square}=\sum_{j\in {\bar{T}}_{opt}}\sum_{i\in {\bar{S}}_{opt}}{x}_{ij}=0.0087+\cdots +0.0202=0.1332$. Note that the elements of the positively associated diagonal blocks are in majority positive, while the elements of the negatively associated off-diagonal blocks are in majority negative. The last three columns and the last three rows of Table 2 display the principal axes (${v}_{1}$ and ${u}_{1}$), the coordinates of the projected points (${a}_{1}$ and ${b}_{1}$) and the TCA factor scores (${f}_{1}$ and ${g}_{1}$).

Table 3 shows the 2nd residual correspondence matrix ${P}_{2}$, where we note that its first column is zero, because by Definition 4(1) column 1 is a heavyweight in ${P}_{1}$: $R{C}_{{\delta}_{1}}\left(\text{G0}\right)=0.5$, see [6]. We see that columns 3 and 4 are positively associated with rows 1 and 5; similarly, column 2 is positively associated with rows 2, 3 and 4. The diagonal balanced 2-blocks seriation in Table 3 is difficult to interpret; however, the map in Figure 2 is interpretable: it shows a Guttman effect, known as a horseshoe or parabola.

Figure 1. CA map of asbestos exposure data.

Table 1. Selikoff’s Asbestos contingency table of size $5\times 4$.

Table 2. Balanced 2-blocks seriation of ${P}_{1}=\left({p}_{ij}-{p}_{i\ast}{p}_{\ast j}\right)$ of size $5\times 4$.

Table 3. Balanced 2-blocks seriation of ${P}_{2}=\left({p}_{ij}-{p}_{i\ast}{p}_{\ast j}-\frac{{a}_{1}\left(i\right){b}_{1}\left(j\right)}{{\delta}_{1}}\right)$ of size $5\times 4$.

Figure 2. TCA map of asbestos exposure data.

3.2. Western Hemisphere Countries and Their Memberships in Trade and Treaty Organizations

Table 4 presents a two-mode affiliation network matrix $Z=\left({z}_{ij}\right)$ of size $22\times 15$ taken from [19]. The 22 rows represent countries and the 15 columns regional trade and treaty organizations, described in Appendix A. Country i is a member of organization j if ${z}_{ij}=1$, and ${z}_{ij}=0$ means country i is not a member of organization j. [19] visualized these data by correspondence analysis, see Figure 3, which is quite cluttered. She interpreted the first two principal dimensions by examining the factor scores of the countries and summarized the results in three points:

1) The first dimension contrasts South American countries and organizations on the one hand, and Central American countries and organizations on the other hand.

2) The second dimension clearly distinguishes Canada and the United States (both North American countries) along with NAFTA from other countries and organizations. In CA, the relative contribution of Canada (resp. US) to the second axis is $R{C}_{{\sigma}_{2}^{2}}\left(\text{Canada}\right)=R{C}_{{\sigma}_{2}^{2}}\left(\text{US}\right)=0.409$, and $R{C}_{{\sigma}_{2}^{2}}\left(\text{NAFTA}\right)=0.821$, where ${\sigma}_{2}^{2}$ is the variance, also named inertia, of the second principal dimension.

3) Organizations (SELA, OAS, and IDB) are in the center because they have membership profiles that are similar to the marginal profile: almost all countries belong to (SELA, OAS, and IDB), see Table 4.

Figure 4 provides the TCA map, which is much more interpretable than the corresponding CA map in Figure 3; we see that, in addition to the three points mentioned by [19], the South American countries are divided into two

Figure 3. CA map of Western Hemisphere affinity network.

Table 4. Sociomatrix of American countries and their memberships.

Figure 4. TCA map of Western Hemisphere affinity network.

groups: northern countries (Venezuela, Bolivia, Peru and Ecuador) and southern countries (Brazil, Uruguay, Argentina, Paraguay and Chile). Furthermore, the contributions of the points Canada, the United States and NAFTA to the second axis are not substantial compared to CA: $R{C}_{{\delta}_{2}}\left(\text{Canada}\right)=R{C}_{{\delta}_{2}}\left(\text{US}\right)=0.088$ and $R{C}_{{\delta}_{2}}\left(\text{NAFTA}\right)=0.10$. This shows the robustness of TCA, due to the robustness of the $\delta $ statistic following Definition 1.

It is well known that CA is very sensitive to certain particularities of a data set; how to identify and handle these is an open problem. For contingency tables, [12] enumerated three such particularities under the umbrella of sparse contingency tables: rare observations, zero-block structure and relatively high-valued cells. A row or column category is considered rare if its marginal probability is quite small. This data set has three rare observations (NAFTA, Canada and USA), which determine the 2nd dimension of CA.

3.3. Maximal Interaction Two-Mode Clustering of Continuous Data

[26] discussed maximum interaction two-mode clustering of continuous data. By generalizing their objective function, we show that the results of this section can be considered a particular robust L_{1} variant of their approach. Let $Y=\left({y}_{ij}\right)$ be a 2-way array for $i\in I,j\in J$. As usual, we define, for instance, ${\bar{y}}_{\ast j}=\sum_{i=1}^{n}\frac{{y}_{ij}}{n}$ and ${\bar{y}}_{\ast \ast}=\sum_{j=1}^{m}\sum_{i=1}^{n}\frac{{y}_{ij}}{mn}$. Let $X=\left({x}_{ij}\right)$ be the additively double-centered array, where

${x}_{ij}={y}_{ij}-{\bar{y}}_{i\ast}-{\bar{y}}_{\ast j}+{\bar{y}}_{\ast \ast}.$

In the jargon of statistics, the cell ${x}_{ij}$ represents the additive 2-way interaction of the cell $\left(i,j\right)\in I\times J$. The matrix $X$ is double-centered, and it satisfies Equations (6) through (10). Let $I={\cup}_{\alpha =1}^{r}{S}_{\alpha}$ be an r-partition of I and $J={\cup}_{\beta =1}^{c}{T}_{\beta}$ be a c-partition of J. We consider the following maximization of the overall interaction problem for $p\ge 1$:

${f}_{p}\left({S}_{\alpha},{T}_{\beta}:\alpha =1,\cdots,r\text{ and }\beta =1,\cdots,c\right)=\sum_{\alpha =1}^{r}\sum_{\beta =1}^{c}\left|{S}_{\alpha}\right|\left|{T}_{\beta}\right|{g}_{p}\left(\alpha,\beta \right),$

where $\left|{S}_{\alpha}\right|$ is the cardinality of the set ${S}_{\alpha}$ and

${g}_{p}\left(\alpha,\beta \right)={\left|\frac{\sum_{i\in {S}_{\alpha}}\sum_{j\in {T}_{\beta}}{x}_{ij}}{\left|{S}_{\alpha}\right|\left|{T}_{\beta}\right|}\right|}^{p}.$

When $p=2$, maximizing ${f}_{2}\left({S}_{\alpha},{T}_{\beta}:\alpha =1,\cdots,r\text{ and }\beta =1,\cdots,c\right)$, named the maximal overall interaction, is the criterion computed in [26]. When $p=1$ and $r=c=2$, the maximum of ${f}_{1}\left({S}_{\alpha},{T}_{\beta}:\alpha =1,2\text{ and }\beta =1,2\right)$ equals $\|X\|_{\infty \to 1}=4\|X\|_{\square}$ by Lemma 3, which is the criterion computed in TCA.
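The equality of the maximum of ${f}_{1}$ (with $r=c=2$) and $\|X\|_{\infty \to 1}$ can be checked by brute force on a small additively double-centered array; in the sketch below the $3\times 3$ array Y is arbitrary:

```python
# Exhaustive check that max f_1 over 2x2 partitions equals ||X||_{inf->1}
# for an additively double-centered array (Section 3.3 with p = 1, r = c = 2).
from itertools import chain, combinations, product

def double_center(Y):
    """x_ij = y_ij - ybar_i* - ybar_*j + ybar_**."""
    n, m = len(Y), len(Y[0])
    rm = [sum(row) / m for row in Y]
    cm = [sum(Y[i][j] for i in range(n)) / n for j in range(m)]
    g = sum(rm) / n
    return [[Y[i][j] - rm[i] - cm[j] + g for j in range(m)] for i in range(n)]

def subs(k):
    return chain.from_iterable(combinations(range(k), r) for r in range(k + 1))

def f1_max(X):
    """max over binary partitions of sum over the 4 blocks of |block sum|."""
    n, m = len(X), len(X[0])
    best = 0.0
    for S in subs(n):
        Sbar = [i for i in range(n) if i not in S]
        for T in subs(m):
            Tbar = [j for j in range(m) if j not in T]
            val = sum(abs(sum(X[i][j] for i in A for j in B))
                      for A in (S, Sbar) for B in (T, Tbar))
            best = max(best, val)
    return best

def inf_to_1(X):
    n, m = len(X), len(X[0])
    return max(sum(abs(sum(X[i][j] * u[j] for j in range(m))) for i in range(n))
               for u in product((-1, 1), repeat=m))

X = double_center([[1, 2, 6], [4, 6, 8], [7, 7, 1]])
assert abs(f1_max(X) - inf_to_1(X)) < 1e-9
```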

4. Triple-Centered Arrays

To motivate our subject, we start with an example. Let $Y=\left({y}_{ijk}\right)$ be a 3-way array for $i\in I,j\in J$ and $k\in K=\left\{1,2,\cdots,t\right\}$. As usual, we define, for instance, ${\bar{y}}_{ij\ast}=\sum_{k=1}^{t}{y}_{ijk}/t$, ${\bar{y}}_{\ast j\ast}=\sum_{k=1}^{t}\sum_{i=1}^{n}\frac{{y}_{ijk}}{tn}$ and ${\bar{y}}_{\ast \ast \ast}=\sum_{k=1}^{t}\sum_{j=1}^{m}\sum_{i=1}^{n}\frac{{y}_{ijk}}{tmn}$. Let $X=\left({x}_{ijk}\right)$ be the triple-centered array, where

${x}_{ijk}={y}_{ijk}-{\bar{y}}_{ij\ast}-{\bar{y}}_{i\ast k}-{\bar{y}}_{\ast jk}+{\bar{y}}_{i\ast \ast}+{\bar{y}}_{\ast j\ast}+{\bar{y}}_{\ast \ast k}-{\bar{y}}_{\ast \ast \ast}.$

In the jargon of statistics, the cell ${x}_{ijk}$ represents the additive 3-way interaction of the cell $\left(i\mathrm{,}j\mathrm{,}k\right)\in I\times J\times K$. The tensor $X$ is triple-centered; that is,

$\sum_{k=1}^{t}{x}_{ijk}=\sum_{j=1}^{m}{x}_{ijk}=\sum_{i=1}^{n}{x}_{ijk}=0.$

A generalization of Lemma 3 is

Lemma 4: (8-equal-parts property) The tensor norm

$\begin{array}{c}\|X\|_{\left(\infty,\infty \right)\to 1}=\max \sum_{k\in K}\sum_{j\in J}\sum_{i\in I}{w}_{k}{v}_{j}{u}_{i}{x}_{ijk}\quad \text{subject to }u\in {\left\{-1,+1\right\}}^{n},v\in {\left\{-1,+1\right\}}^{m},w\in {\left\{-1,+1\right\}}^{t}\\ =8\sum_{k\in {W}_{opt}}\sum_{j\in {T}_{opt}}\sum_{i\in {S}_{opt}}{x}_{ijk}\ge 8\sum_{k\in W}\sum_{j\in T}\sum_{i\in S}{x}_{ijk},\end{array}$

where $S\subset I$, $T\subset J$ and $W\subset K$.

The proof is similar to the proof of Lemma 3.

Lemma 4 can easily be generalized to higher-way arrays.
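Both the triple-centering conditions and the 8-equal-parts property of Lemma 4 can be checked by brute force on a tiny $2\times 2\times 2$ array; a sketch (the array entries are arbitrary):

```python
# Brute-force check of the triple-centering conditions and the 8-equal-parts
# property of Lemma 4 on a 2x2x2 array.
from itertools import chain, combinations, product

def triple_center(Y):
    """x_ijk per the additive 3-way interaction formula of Section 4."""
    n, m, t = len(Y), len(Y[0]), len(Y[0][0])
    m_ij = [[sum(Y[i][j]) / t for j in range(m)] for i in range(n)]
    m_ik = [[sum(Y[i][j][k] for j in range(m)) / m for k in range(t)]
            for i in range(n)]
    m_jk = [[sum(Y[i][j][k] for i in range(n)) / n for k in range(t)]
            for j in range(m)]
    m_i = [sum(m_ij[i]) / m for i in range(n)]
    m_j = [sum(m_jk[j]) / t for j in range(m)]
    m_k = [sum(m_ik[i][k] for i in range(n)) / n for k in range(t)]
    g = sum(m_i) / n
    return [[[Y[i][j][k] - m_ij[i][j] - m_ik[i][k] - m_jk[j][k]
              + m_i[i] + m_j[j] + m_k[k] - g
              for k in range(t)] for j in range(m)] for i in range(n)]

def tensor_inf_to_1(X):
    n, m, t = len(X), len(X[0]), len(X[0][0])
    return max(sum(u[i] * v[j] * w[k] * X[i][j][k]
                   for i in range(n) for j in range(m) for k in range(t))
               for u in product((-1, 1), repeat=n)
               for v in product((-1, 1), repeat=m)
               for w in product((-1, 1), repeat=t))

def tensor_cut(X):
    n, m, t = len(X), len(X[0]), len(X[0][0])
    def subs(r):
        return chain.from_iterable(combinations(range(r), q)
                                   for q in range(r + 1))
    return max(sum(X[i][j][k] for i in S for j in T for k in W)
               for S in subs(n) for T in subs(m) for W in subs(t))

X = triple_center([[[1, 5], [2, 3]], [[4, 2], [6, 3]]])
assert all(abs(sum(X[i][j])) < 1e-9 for i in range(2) for j in range(2))
assert abs(tensor_inf_to_1(X) - 8 * tensor_cut(X)) < 1e-9
```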

5. Conclusions

This essay is an attempt to emphasize the following two points.

First, we showed the optimality and robustness of the mean absolute deviations about the mean, its interpretation, and its generalization to higher-way arrays. A key notion in describing its robustness is that the relative contribution of a point is bounded by 50%.

Second, within the framework of TCA, we showed that the three identities ${\delta}_{1}=\|{P}_{1}\|_{\infty \to 1}=4\|{P}_{1}\|_{\square}$ reveal three different but related aspects of TCA: 1) ${\delta}_{1}$, computed in (17) and (18), represents by (19) the mean absolute deviations about the mean statistic; 2) The taxicab norm $\|{P}_{1}\|_{\infty \to 1}$, via (15) and (16), shows that uniform weights are assigned to the columns and the rows; 3) The cut norm $4\|{P}_{1}\|_{\square}$ shows that the computation of each principal dimension of TCA corresponds to a balanced 2-blocks seriation, with equality of the cut norm on the 4 associated blocks.

A list of the principal variables used is provided in Appendix B.

Acknowledgements

We thank the Editor and the referee for their comments. Research of V. Choulakian is funded by the National Science and Engineering Research Council of Canada grant RGPIN-2017-05092. This support is greatly appreciated. The authors thank William Alexander Digout for help in computations.

Appendix A: List of Western Hemisphere Organizations

1) Association of Caribbean States (ACS): Trade group sponsored by the Caribbean Community and Common Market (CARICOM).

2) Latin American Integration Association (ALADI): Free trade organization.

3) Amazon Pact: Promotes development of Amazonian territories.

4) Andean Pact: Promotes development of members through economic and social integration.

5) Caribbean Community and Common Market (CARICOM): Caribbean trade organization; promotes economic development of members.

6) Group of Latin American and Caribbean Sugar Exporting Countries (GEPLACEA): Sugar-producing and exporting countries.

7) Group of Rio: Organization for joint political action.

8) Group of Three (G-3): Trade organization.

9) Inter-American Development Bank (IDB): Promotes development of member nations.

10) South American Common Market (MERCOSUR): Increases economic cooperation in the region.

11) North American Free Trade Agreement (NAFTA): Free trade organization.

12) Organization of American States (OAS): Promotes peace, security, economic, and social development in the Western Hemisphere.

13) Central American Parliament (PARLACÉN): Works for the political integration of Central America.

14) San José Group: Promotes regional economic integration.

15) Latin American Economic System (SELA): Promotes economic and social development of member nations.

Appendix B: A List of the Principal Variables Used

Mean absolute deviations about the mean of a sample $d=\frac{{\displaystyle {\sum}_{i=1}^{n}}\left|{y}_{i}-\stackrel{\xaf}{y}\right|}{n}$

Mean absolute deviations about the median of a sample $LAD=\frac{{\displaystyle {\sum}_{i=1}^{n}}\left|{y}_{i}-median\right|}{n}$

Variance of a sample ${s}^{2}=\frac{{\displaystyle {\sum}_{i=1}^{n}}{\left({y}_{i}-\stackrel{\xaf}{y}\right)}^{2}}{n}$

Cut norm of a centered sample ${\Vert y-\stackrel{\xaf}{y}{1}_{n}\Vert}_{\u22a1}={\mathrm{max}}_{S}{\displaystyle {\sum}_{i\in S}}\left({y}_{i}-\stackrel{\xaf}{y}\right)$, where $S\subset I=\left\{\mathrm{1,2,}\cdots \mathrm{,}n\right\}$

Taxicab operator norm of a double centered matrix ${\delta}_{\alpha}={\Vert {P}_{\alpha}\Vert}_{\infty \to 1}={\mathrm{max}}_{u\in {\mathbb{R}}^{m}}\frac{{\Vert {P}_{\alpha}u\Vert}_{1}}{{\Vert u\Vert}_{\infty}}$

Cut norm of a double centered matrix ${\Vert {P}_{\alpha}\Vert}_{\u22a1}={\mathrm{max}}_{S\mathrm{,}T}{\displaystyle {\sum}_{j\in T}}{\displaystyle {\sum}_{i\in S}}\text{\hspace{0.05em}}{P}_{\alpha}\left(i\mathrm{,}j\right)$, where $T\subset J=\left\{\mathrm{1,2,}\cdots \mathrm{,}m\right\}$

${\delta}_{\alpha}$ is the dispersion value of the αth taxicab principal axis

${f}_{\alpha}\left(i\right)$ is the taxicab principal factor score of row i on the αth principal axis, and ${\delta}_{\alpha}={\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}{p}_{i\ast}\left|{f}_{\alpha}\left(i\right)\right|$

${g}_{\alpha}\left(j\right)$ is the taxicab principal factor score of column j on the αth principal axis, and ${\delta}_{\alpha}={\displaystyle {\sum}_{j=1}^{m}}\text{\hspace{0.05em}}{p}_{\ast j}\left|{g}_{\alpha}\left(j\right)\right|$
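The scalar quantities listed above can be illustrated on a small sample. The following Python sketch, with arbitrary values of our own choosing, computes $d$, $LAD$ and $s$, checks the inequality $LAD\le d\le s$ from the Introduction, and evaluates the cut norm of the centered sample by brute force over all subsets $S$.

```python
# A small numerical sketch of the quantities listed above; the sample
# values are arbitrary.
import itertools
import numpy as np

y = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
n = len(y)
dev = y - y.mean()

d = np.abs(dev).sum() / n                 # mean absolute deviations about the mean
LAD = np.abs(y - np.median(y)).sum() / n  # mean absolute deviations about the median
s = np.sqrt((dev ** 2).sum() / n)         # standard deviation (divisor n)
assert LAD <= d <= s                      # the Lyapounov-type inequality

# Cut norm of the centered sample: brute force over all subsets S of
# {0, ..., n-1}; the maximum is attained at S = {i : y_i > ybar}.
cut = max(dev[list(S)].sum()
          for r in range(n + 1)
          for S in itertools.combinations(range(n), r))
assert np.isclose(cut, dev[dev > 0].sum())
# Since the centered deviations sum to zero, the positive part equals
# half of the total absolute deviation, i.e. n*d/2.
assert np.isclose(cut, n * d / 2)
```

The last assertion makes explicit the link between the cut norm of a centered sample and the statistic $d$: the maximizing subset collects exactly the positive deviations, which amount to $nd/2$.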

References

[1] Pham-Gia, T. and Hung, T.L. (2001) The Mean and Median Absolute Deviations. Mathematical and Computer Modelling, 34, 921-936.

https://doi.org/10.1016/S0895-7177(01)00109-1

[2] Gorard, S. (2015) Introducing the Mean Absolute Deviation ‘Effect’ Size. International Journal of Research & Method in Education, 38, 105-114.

https://doi.org/10.1080/1743727X.2014.920810

[3] Yitzhaki, S. and Lambert, P.J. (2013) The Relationship between the Absolute Deviation from a Quantile and Gini’s Mean Difference. Metron, 71, 97-104.

https://doi.org/10.1007/s40300-013-0015-y

[4] Rohatgi, V.K. (1976) An Introduction to Probability Theory and Mathematical Statistics. John Wiley and Sons, New York.

[5] Choulakian, V. (2006) Taxicab Correspondence Analysis. Psychometrika, 71, 333-345.

https://doi.org/10.1007/s11336-004-1231-4

[6] Choulakian, V. (2008) Taxicab Correspondence Analysis of Contingency Tables with One Heavyweight Column. Psychometrika, 73, 309-319.

https://doi.org/10.1007/s11336-007-9041-0

[7] Choulakian, V. (2008) Multiple Taxicab Correspondence Analysis. Advances in Data Analysis and Classification, 2, 177-206.

https://doi.org/10.1007/s11634-008-0023-6

[8] Choulakian, V. and de Tibeiro, J. (2013) Graph Partitioning by Correspondence Analysis and Taxicab Correspondence Analysis. Journal of Classification, 30, 397-427.

https://doi.org/10.1007/s00357-013-9145-4

[9] Choulakian, V., Allard, J. and Simonetti, B. (2013) Multiple Taxicab Correspondence Analysis of a Survey Related to Health Services. Journal of Data Science, 11, 205-229.

[10] Choulakian, V., Simonetti, B. and Pham-Gia, T. (2014) Some Further Aspects of Taxicab Correspondence Analysis. Statistical Methods and Applications, 23, 401-416.

https://doi.org/10.1007/s10260-014-0259-6

[11] Choulakian, V. (2016) Matrix Factorizations Based on Induced Norms. Statistics, Optimization and Information Computing, 4, 1-14.

https://doi.org/10.19139/soic.v4i1.160

[12] Choulakian, V. (2017) Taxicab Correspondence Analysis of Sparse Two-Way Contingency Tables. Statistica Applicata-Italian Journal of Applied Statistics, 29, 153-179.

[13] Jensen, S.T. (1999) The Laguerre-Samuelson Inequality with Extensions and Applications in Statistics and Matrix Theory. McGill University, Quebec.

https://doi.org/10.1007/978-94-011-4577-0_10

[14] Khot, S. and Naor, A. (2012) Grothendieck-Type Inequalities in Combinatorial Optimization. Communications on Pure and Applied Mathematics, 65, 992-1035.

https://doi.org/10.1002/cpa.21398

[15] Liiv, I. (2010) Seriation and Matrix Reordering Methods: An Historical Overview. Statistical Analysis and Data Mining, 3, 70-91.

https://doi.org/10.1002/sam.10071

[16] Nishisato, S. (1984) Forced Classification: A Simple Application of a Quantification Method. Psychometrika, 49, 25-36.

https://doi.org/10.1007/BF02294203

[17] Benzécri, J.P. (1973) L’Analyse des Données: Vol. 2: L’Analyse des Correspondances. Dunod, Paris.

[18] Beh, E. and Lombardo, R. (2014) Correspondence Analysis: Theory, Practice and New Strategies. Wiley, New York.

https://doi.org/10.1002/9781118762875

[19] Faust, K. (2005) Using Correspondence Analysis for Joint Displays of Affiliation Networks. In: Carrington, P.J., Scott, J. and Wasserman, S., Eds., Models and Methods in Social Network Analysis, Cambridge University Press, Cambridge, 117-147.

https://doi.org/10.1017/CBO9780511811395.007

[20] Benzécri, J.P. (1992) Correspondence Analysis Handbook. Marcel Dekker, New York.

https://doi.org/10.1201/9780585363035

[21] Greenacre, M. (1984) Theory and Applications of Correspondence Analysis. Academic Press, London.

[22] Gifi, A. (1990) Nonlinear Multivariate Analysis. Wiley, New York.

[23] Le Roux, B. and Rouanet, H. (2004) Geometric Data Analysis. From Correspondence Analysis to Structured Data Analysis. Kluwer-Springer, Dordrecht.

https://doi.org/10.1007/1-4020-2236-0

[24] Murtagh, F. (2005) Correspondence Analysis and Data Coding with Java and R. Chapman & Hall/CRC, Boca Raton, FL.

https://doi.org/10.1201/9781420034943

[25] Nishisato, S. (2007) Mutidimensional Nonlinear Descriptive Analysis. Chapman & Hall/CRC, Boca Raton, FL.

https://doi.org/10.1201/9781420011203

[26] Schepers, J., Bock, H.-H. and Van Mechelen, I. (2017) Maximal Interaction Two-Mode Clustering. Journal of Classification, 34, 49-75.

https://doi.org/10.1007/s00357-017-9226-x