High Order Tensor Forms of Growth Curve Models


1. Introduction

The linear regression model, also called the linear model (LM), is one of the most widely used models in statistics. There are many kinds of linear models, including simple linear models, general linear models, generalized linear models, mixed effects linear models and other extended forms of linear models [1] [2] [3] [4] [5] . The growth curve model (GCM) is a special kind of general linear model with applications in many areas, such as the analysis of psychological data [6] . GCMs can be used to handle longitudinal data, missing data, and even the hierarchical multilevel mixed case [2] [3] [5] [7] - [12] . There are also useful variations of GCMs, such as the latent GCMs. The traditional treatment for estimating the parameters of a GCM with mixed effects for a single response factor is to stack all the dependent observations vertically into one very long column vector, usually denoted by y; the design matrices (both the fixed effect design matrix and the random effect design matrix) and the random errors are concatenated accordingly to match the size of y. This treatment makes the related implementations very slow due to the enormous dimensions of the resulting matrices and vectors. Things get even worse with a huge dataset (big data), such as genomic, web-related, image-gallery, or social network data.

In this paper, we first use the generalized inverse of matrices and the singular value decomposition to obtain the minimum-norm estimation of the parameters in the linear model. We then introduce some basic knowledge about tensors before employing tensors to express and extend the multivariate mixed effects linear models. The extended tensor form of the model can also be regarded as a generalization of the GCM.

Let us first begin with some basic linear regression models. Let y be a response variable and ${x}_{1},\cdots ,{x}_{r}$ be explanatory random variables for y. The most general regression model between y and ${x}_{1},\cdots ,{x}_{r}$ is of the form

$y=f\left({x}_{1},\cdots ,{x}_{r}\right)+\epsilon $ (1.1)

where $\epsilon $ is the error term, and f is an unknown regression function. In linear regression model, f is assumed to be a linear function, i.e.,

$y={\beta}_{0}+{\beta}_{1}{x}_{1}+{\beta}_{2}{x}_{2}+\cdots +{\beta}_{r}{x}_{r}+\epsilon $ (1.2)

where all ${\beta}_{i}$ are unknown parameters. Denote $x=\left({x}_{1},\cdots ,{x}_{r}\right)$ , which is called a random vector, and let $P=\left(y,x\right)$ , an $\left(r+1\right)$ -dimensional random vector, which is called an observable vector. Given N observations of P, say ${P}_{i}=\left({y}_{i},{x}_{i1},{x}_{i2},\cdots ,{x}_{ir}\right)$ , $i=1,2,\cdots ,N$ , where ${y}_{i}$ stands for the ith observation of the response variable y and ${x}_{i1},{x}_{i2},\cdots ,{x}_{ir}$ are the corresponding explanatory observations, the sample model of Equation (1.2) turns out to be

${y}_{i}={\beta}_{0}+{\beta}_{1}{x}_{i1}+{\beta}_{2}{x}_{i2}+\cdots +{\beta}_{r}{x}_{ir}+{\epsilon}_{i}$ (1.3)

or equivalently

$y=X\beta +\epsilon $ (1.4)

where $y={\left({y}_{1},{y}_{2},\cdots ,{y}_{N}\right)}^{\text{T}}\in {\mathbb{R}}^{N}$ (here and throughout the paper ${}^{\text{T}}$ stands for the transpose of a matrix/vector) is the sample vector of the response variable y, $X=\left({x}_{ij}\right)\in {\mathbb{R}}^{N\times \left(r+1\right)}$ is the data matrix or the design matrix, whose first column is all ones (for the intercept) and each of whose remaining rows corresponds to an observation of x, $\beta ={\left({\beta}_{0},{\beta}_{1},\cdots ,{\beta}_{r}\right)}^{\text{T}}\in {\mathbb{R}}^{r+1}$ is the regression coefficient vector, which is to be estimated, and $\epsilon ={\left({\epsilon}_{1},{\epsilon}_{2},\cdots ,{\epsilon}_{N}\right)}^{\text{T}}$ is the random error vector. A general linear regression model is an LM (1.4) with the error terms ${\epsilon}_{i}$ satisfying:

1) Zero-mean: ${\rm E}\left[{\epsilon}_{i}\right]=0,\forall i\in \left[N\right]$ , i.e., the expected value of the error term is zero for all the observations.

2) Homoskedasticity: $\text{Var}\left[{\epsilon}_{i}\right]={\sigma}^{2},\forall i\in \left[N\right]$ , i.e., all the error terms are distributed with the same variance.

3) Uncorrelation: $\text{Cov}\left({\epsilon}_{i},{\epsilon}_{j}\right)=0$ for all distinct $i,j$ , i.e., distinct error terms are uncorrelated.

Conditions 1)-3) are called the Gauss-Markov assumptions [13] , and the model (1.4) under the Gauss-Markov assumptions is called the Gauss-Markov model. The variance ${\sigma}^{2}$ reflects the uncertainty of the model. An alternative form of the Gauss-Markov model is

${\rm E}\left[y\right]=X\beta ,\text{\hspace{0.17em}}\text{Cov}\left(\epsilon \right)={\sigma}^{2}{I}_{N}$ (1.5)

where ${I}_{N}$ is the $N\times N$ identity matrix and ${\sigma}^{2}>0$. In order to investigate the general linear model and extend its properties, we recall some known results concerning generalized inverses of matrices.

Let $A\in {\mathbb{R}}^{m\times n}$. The g-inverse of A, denoted ${A}^{g}$ , is a generalized inverse defined as an $n\times m$ matrix satisfying [4] $A{A}^{g}A=A$. An equivalent definition of the g-inverse is that $x={A}^{g}b$ is always a solution to the equation $Ax=b$ whenever $b\in \mathcal{C}\left(A\right)$ , the column space of A. A well known result is that all the solutions to $Ax=b$ (when compatible) are of the form

$x={A}^{g}b+\left(I-{A}^{g}A\right)\omega ,\forall \omega \in {\mathbb{R}}^{n}$. (1.6)

It is easy to verify that when A is invertible, ${A}^{g}={A}^{-1}$ is the unique g-inverse. A g-inverse of a matrix (usually not unique) can be calculated by using the singular value decomposition (SVD).

Lemma 1.1. Let $X\in {\mathbb{R}}^{N\times p}$ have an SVD $X=UD{V}^{\text{T}}$ such that $U\in {\mathbb{R}}^{N\times N}$ and $V\in {\mathbb{R}}^{p\times p}$ are orthogonal matrices, and $D\in {\mathbb{R}}^{N\times p}$ is of the form $D=diag\left({\sigma}_{1},{\sigma}_{2},\cdots ,{\sigma}_{r},0,\cdots ,0\right)$ where $r=rank\left(X\right)\le \mathrm{min}\left(N,p\right)$ and ${\sigma}_{1}\ge {\sigma}_{2}\ge \cdots \ge {\sigma}_{r}>0$. Then

${X}^{g}=V\left[\begin{array}{cc}{D}_{r}^{-1}& *\\ *& *\end{array}\right]{U}^{\text{T}}\in {\mathbb{R}}^{p\times N}$ (1.7)

where * denotes any matrix of suitable size and ${D}_{r}=diag\left({\sigma}_{1},{\sigma}_{2},\cdots ,{\sigma}_{r}\right)$.
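Lemma 1.1 can be checked numerically. Below is a minimal sketch (in numpy) that builds one g-inverse of X from its SVD, taking the * blocks in Equation (1.7) to be zero (which yields the Moore-Penrose pseudoinverse, a particular g-inverse); the helper name `g_inverse` is ours, not from any library.

```python
import numpy as np

def g_inverse(X, tol=1e-10):
    # X = U diag(s) V^T; keep only the nonzero singular values
    U, s, Vt = np.linalg.svd(X)
    r = int(np.sum(s > tol))                 # numerical rank
    D_inv = np.zeros((X.shape[1], X.shape[0]))
    D_inv[:r, :r] = np.diag(1.0 / s[:r])
    return Vt.T @ D_inv @ U.T                # V [D_r^{-1} 0; 0 0] U^T

X = np.array([[1., 2., 3.],
              [2., 4., 6.],                  # second row = 2 * first row
              [1., 0., 1.]])                 # so rank(X) = 2
Xg = g_inverse(X)
print(np.allclose(X @ Xg @ X, X))            # defining property X X^g X = X
```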

The Gauss-Markov Theorem (e.g. Page 51 of [13] ) is stated as:

Lemma 1.2. Suppose that model (1.4) satisfies the Gauss-Markov assumptions and that ${a}^{\text{T}}\beta $ is estimable for a constant vector $a\in {\mathbb{R}}^{r+1}$. Then ${a}^{\text{T}}\stackrel{^}{\beta}$ , with $\stackrel{^}{\beta}={\left({X}^{\text{T}}X\right)}^{g}{X}^{\text{T}}y$ , is the best (minimum variance) linear unbiased estimator (BLUE) of ${a}^{\text{T}}\beta $.

Based on Lemma 1.2, we get

Proposition 1.3. Suppose $rank\left(X\right)=r<\mathrm{min}\left(N,p\right)$ in Equation (1.4) and $X$ satisfies the conditions in Lemma 1.2. Then the estimator of $\beta $ with minimal 2-norm is of the form

$\stackrel{^}{\beta}=V\left[\begin{array}{c}{D}_{r}^{-1}{\tilde{y}}_{1}\\ 0\end{array}\right]$ (1.8)

where $\tilde{y}={U}^{\text{T}}y={\left[{\tilde{y}}_{1}^{\text{T}},{\tilde{y}}_{2}^{\text{T}}\right]}^{\text{T}}$ with ${\tilde{y}}_{1}\in {\mathbb{R}}^{r}$ , ${\tilde{y}}_{2}\in {\mathbb{R}}^{N-r}$.
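A small numerical sketch of Proposition 1.3 (with an assumed random rank-deficient design): the minimum-norm estimator assembled from the SVD agrees with the minimum-norm least-squares solution given by the Moore-Penrose inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 8, 5
X = rng.standard_normal((N, 3)) @ rng.standard_normal((3, p))  # rank 3 < min(N, p)
y = rng.standard_normal(N)

U, s, Vt = np.linalg.svd(X)
r = int(np.sum(s > 1e-10))                  # numerical rank
y_tilde = U.T @ y                           # y~ = U^T y
beta = Vt.T @ np.concatenate([y_tilde[:r] / s[:r], np.zeros(p - r)])

# agrees with the minimum-norm solution from the pseudoinverse
print(np.allclose(beta, np.linalg.pinv(X) @ y))
```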

Proposition 1.3 tells us that we can reach a minimum-norm estimator of $\beta $ by taking the middle factor in the decomposition

${\left({X}^{\text{T}}X\right)}^{g}=V\left[\begin{array}{cc}{G}_{11}& {G}_{12}\\ 0& {G}_{22}\end{array}\right]{V}^{\text{T}}$

in block upper triangular form. Now denote $H:={\left({X}^{\text{T}}X\right)}^{g}{X}^{\text{T}}$. By the Gauss-Markov Theorem, we have

${\Vert \stackrel{^}{\beta}\Vert}^{2}=\langle Hy,Hy\rangle ={y}^{\text{T}}{H}^{\text{T}}Hy={y}^{\text{T}}X{\left[{\left({X}^{\text{T}}X\right)}^{g}\right]}^{2}{X}^{\text{T}}y$

which implies that ${\Vert \stackrel{^}{\beta}\Vert}^{2}={y}^{\text{T}}{\left(X{X}^{\text{T}}\right)}^{g}y$ when the Moore-Penrose inverse is taken as the g-inverse.

The generalized linear model (GLM) is a generalization of the LM [1] . In a GLM some of the basic assumptions of the linear regression model are relaxed. Moreover, the fitted values of the response variables are no longer expressed directly as a linear combination of the parameters, but through a function which is usually called the link function. A GLM consists of independent random components ${y}_{i}$ whose distributions belong to an exponential family, the linear predictor (systematic component) ${\eta}_{i}={X}_{i}^{\text{T}}\beta $ , and a strictly monotone differentiable link function g connecting them via ${\eta}_{i}=g\left({\mu}_{i}\right)$ with ${\mu}_{i}={\rm E}\left[{y}_{i}\right]$. The parameters in a GLM include the regression parameters $\beta $ and the dispersion parameter in the covariance structure, both of which can be estimated by the maximum likelihood method. The estimate of the regression parameters can be computed by the iteration

${\beta}^{\left(m\right)}={\left({X}^{\text{T}}{W}^{\left(m-1\right)}X\right)}^{g}{X}^{\text{T}}{W}^{\left(m-1\right)}{z}^{\left(m-1\right)},\text{\hspace{0.17em}}W=diag\left({W}_{1},{W}_{2},\cdots ,{W}_{N}\right)$

where ${W}_{i}={w}_{i}/\left\{\varphi v\left({\mu}_{i}\right){\left[{g}^{\prime}\left({\mu}_{i}\right)\right]}^{2}\right\}$ with ${w}_{i}$ being a known prior weight,

$\varphi $ the dispersion parameter, $v(\cdot )$ a variance function, g the link function, and $z\in {\mathbb{R}}^{N}$ the working dependent variable with ${z}_{i}={\eta}_{i}+\left({y}_{i}-{\mu}_{i}\right){g}^{\prime}\left({\mu}_{i}\right)$. The moment estimator of the dispersion parameter is

$\stackrel{^}{\varphi}=\frac{1}{N-k-1}{\displaystyle \underset{i=1}{\overset{N}{\sum}}\frac{{w}_{i}{\left({y}_{i}-{\stackrel{^}{\mu}}_{i}\right)}^{2}}{v\left({\stackrel{^}{\mu}}_{i}\right)}}$. (1.9)
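As an illustration, the iteration above can be sketched for a Poisson GLM with log link, where $v(\mu)=\mu$, ${g}^{\prime}(\mu)=1/\mu$, ${w}_{i}=1$ and $\varphi=1$, so that ${W}_{i}={\mu}_{i}$ and ${z}_{i}={\eta}_{i}+({y}_{i}-{\mu}_{i})/{\mu}_{i}$; the helper name and the simulated data are illustrative, not from the paper.

```python
import numpy as np

def irls_poisson(X, y, n_iter=25):
    # beta^(m) = (X^T W X)^g X^T W z, specialized to the Poisson/log case
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                            # working weights W_i = mu_i
        z = eta + (y - mu) / mu           # working response
        WX = X * W[:, None]
        beta = np.linalg.pinv(X.T @ WX) @ (WX.T @ z)
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
beta_true = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true))
print(irls_poisson(X, y))                 # close to beta_true
```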

In order to extend the GLMs to a more general case, we need some knowledge of tensors. In the next section, we will introduce some basic terminology and operations implemented on tensors, especially on low order tensors.

2. The 3-Order Tensors and Their Applications in GLMs

A tensor is an extension of a matrix to higher orders, and it is an important tool for studying high-dimensional arrays. The origin of tensors can be traced back to the early nineteenth century when Cayley studied linear transformation theory and invariant representations. Gauss, Riemann et al. promoted the development of tensors in mathematics. Around the turn of the twentieth century, Ricci and Levi-Civita further developed tensor analysis as the absolute differential calculus and explored its applications [14] . In 1915 Albert Einstein used tensors to describe his general relativity, which made tensor calculus more widely known.

For our convenience we denote $\left[n\right]:=\left\{1,2,\cdots ,n\right\}$ and use $S\left(m,n\right)$ to denote the index set

$S\left(m,n\right):=\left\{\tau =\left({i}_{1},{i}_{2},\cdots ,{i}_{m}\right):{i}_{k}\in \left[n\right],\forall k\in \left[m\right]\right\}$.

Let ${I}_{k}\left(k\in \left[m\right]\right)$ be any positive integer (usually larger than 1). Sometimes we abuse the notation ${I}_{k}$ for the set $\left[{I}_{k}\right]$. Denote $I:={I}_{1}\times {I}_{2}\times \cdots \times {I}_{m}$ ; if each ${I}_{k}$ stands for an index set, then I is the product of the index sets ${I}_{1},{I}_{2},\cdots ,{I}_{m}$. An m-order tensor $\mathcal{A}=\left({A}_{\sigma}\right)$ of size I is an m-way array whose entries are denoted by ${A}_{\sigma}:={A}_{{i}_{1}{i}_{2}\cdots {i}_{m}}$ with $\sigma =\left({i}_{1},{i}_{2},\cdots ,{i}_{m}\right)\in I$. Note that a vector is a 1-order tensor and a matrix is a 2-order or second order tensor. An $m\times n$ tensor is a tensor of order m with ${I}_{1}={I}_{2}=\cdots ={I}_{m}=n$. We denote by ${\mathcal{T}}_{m,n}$ the set of all mth order n-dimensional real tensors. An $m\times n$ tensor $\mathcal{A}$ is called symmetric if ${A}_{\sigma}$ is invariant under any permutation of its index.

An mth order n-dimensional real tensor $\mathcal{A}$ is always associated with an m-order homogeneous polynomial ${f}_{\mathcal{A}}\left(x\right)$ which is defined by

${f}_{\mathcal{A}}\left(x\right):=\mathcal{A}{x}^{m}={\displaystyle \underset{{i}_{1},{i}_{2},\cdots ,{i}_{m}}{\sum}{A}_{{i}_{1}{i}_{2}\cdots {i}_{m}}{x}_{{i}_{1}}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}}}$. (2.10)

$\mathcal{A}$ is called positive semidefinite (psd) if

${f}_{\mathcal{A}}\left(x\right):=\mathcal{A}{x}^{m}\ge 0,\forall x\in {\mathbb{R}}^{n}$, (2.11)

and positive definite (pd) if the inequality in Equation (2.11) is strict for all nonzero $x\in {\mathbb{R}}^{n}$.
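The polynomial ${f}_{\mathcal{A}}(x)$ of Equation (2.10) is a single full contraction of the tensor with m copies of x, which can be sketched for a symmetric 3-order tensor as follows (random data for illustration only).

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n, n))
# symmetrize over all 6 permutations of the indices
A = (A + A.transpose(0, 2, 1) + A.transpose(1, 0, 2) + A.transpose(1, 2, 0)
       + A.transpose(2, 0, 1) + A.transpose(2, 1, 0)) / 6
x = rng.standard_normal(n)

# f_A(x) = sum_{i,j,k} A_ijk x_i x_j x_k
f = np.einsum('ijk,i,j,k->', A, x, x, x)
print(f)
```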

A nonzero psd tensor must be of an even order. Given an r-order tensor $\mathcal{A}\in {\mathbb{R}}^{{n}_{1}\times {n}_{2}\times \cdots \times {n}_{r}}$ and a matrix $U=\left({u}_{ij}\right)\in {\mathbb{R}}^{{n}_{k}\times {J}_{k}}$ for some $k\in \left[r\right]$ , the product of $\mathcal{A}$ with $U$ along the k-mode is the r-order tensor $\mathcal{A}{\times}_{k}U$ defined by

${\left(\mathcal{A}{\times}_{k}U\right)}_{{i}_{1},\cdots ,{i}_{k-1},j,{i}_{k+1},\cdots ,{i}_{r}}={\displaystyle \underset{{i}_{k}=1}{\overset{{n}_{k}}{\sum}}{A}_{{i}_{1},\cdots ,{i}_{k-1},{i}_{k},{i}_{k+1},\cdots ,{i}_{r}}{u}_{{i}_{k}j}},\text{\hspace{0.17em}}j\in \left[{J}_{k}\right]$. (2.12)

Note that $\mathcal{A}{\times}_{k}U$ is compressed into an $\left(r-1\right)$ -order tensor when $U\in {\mathbb{R}}^{{n}_{k}}$ is a column vector ( ${J}_{k}=1$ ). There are two main kinds of tensor decompositions, i.e., the rank-1 decomposition, also called the CP decomposition, and the Tucker decomposition, or HOSVD. The former generalizes the matrix rank-1 decomposition and the latter generalizes the matrix singular value decomposition to the higher order case. A zero tensor is a tensor with all entries being zero. A diagonal tensor is a tensor whose off-diagonal elements are all zero, i.e., ${A}_{{i}_{1},{i}_{2},\cdots ,{i}_{m}}=0$ if ${i}_{1},{i}_{2},\cdots ,{i}_{m}$ are not all identical. Thus an $m\times n$ tensor has n diagonal elements. In this way, we can define similarly (analogously to the matrix case) the identity tensor and a scalar tensor.
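A sketch of the k-mode product in the convention of Equation (2.12), where the mode-k index of $\mathcal{A}$ is contracted against the first index of U; the helper name `mode_product` is ours, and the sizes are illustrative.

```python
import numpy as np

def mode_product(A, U, k):
    # contract mode k of A with the first index of U;
    # tensordot appends the new axis last, so move it back to position k
    out = np.tensordot(A, U, axes=(k, 0))
    return np.moveaxis(out, -1, k)

A = np.arange(24.0).reshape(2, 3, 4)
U = np.ones((3, 5))                 # n_k = 3, J_k = 5
B = mode_product(A, U, 1)
print(B.shape)                      # (2, 5, 4)
```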

For any $i\in \left[n\right]$ , the i-slice of an $m\times n$ tensor $\mathcal{A}=\left({A}_{{i}_{1}{i}_{2}\cdots {i}_{m}}\right)$ along mode k, for any given $k\in \left[m\right]$ , is the $\left(m-1\right)\times n$ tensor $\mathcal{B}$ with

${B}_{{i}_{1},{i}_{2},\cdots ,{i}_{m-1}}={A}_{{i}_{1},\cdots ,{i}_{k-1},i,{i}_{k+1},\cdots ,{i}_{m}}$.

A slice of 3-order tensor $\mathcal{A}=\left({A}_{ijk}\right)\in {\mathbb{R}}^{m\times n\times p}$ along mode-3 is an $m\times n$ matrix $A\left(:,:,k\right)$ with $k\in \left[p\right]$ , and a slice of a 4-order tensor is a 3-order tensor.

Let $\mathcal{A}\in {\mathbb{R}}^{m\times {n}_{1}\times p},\mathcal{B}\in {\mathbb{R}}^{{n}_{1}\times n\times p}$ be two 3-order tensors. The slice-wise product of $\mathcal{A}$ and $\mathcal{B}$ , denoted by $\mathcal{C}=\mathcal{A}\ast \mathcal{B}\in {\mathbb{R}}^{m\times n\times p}$ , is defined by $C\left(:,:,k\right)=A\left(:,:,k\right)B\left(:,:,k\right)$ for all $k\in \left[p\right]$. This multiplication can be used to build a regression model

$\mathcal{A}=\mathcal{X}\ast \mathcal{B}+\mathcal{E}$ (2.13)

where $A\left(:,:,k\right)$ is the matrix consisting of n sample points of size m in class k, $X\left(:,:,k\right)$ is the design matrix corresponding to the kth class, $\mathcal{B}$ is a parameter tensor, and $\mathcal{E}$ is the error tensor (there are ${n}_{1}$ observations in each class in this situation).
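The slice-wise product can be sketched directly from its definition; the sizes and the helper name below are illustrative.

```python
import numpy as np

def slicewise(A, B):
    # C(:,:,k) = A(:,:,k) B(:,:,k) for every mode-3 slice k
    m, n1, p = A.shape
    _, n, _ = B.shape
    C = np.empty((m, n, p))
    for k in range(p):
        C[:, :, k] = A[:, :, k] @ B[:, :, k]
    return C

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3, 2))
B = rng.standard_normal((3, 5, 2))
C = slicewise(A, B)
print(C.shape)      # (4, 5, 2)
```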

Let k be a positive integer. The k-moment of a random variable x is defined as the expectation of ${x}^{k}$ , i.e., ${m}_{x}^{\left(k\right)}:={\rm E}\left({x}^{k}\right)$. The traditional extension of moments to the multivariate case is done by an iterated vectorization. This technique is employed not only in the definition of moments but also in other definitions such as that of a characteristic function. By introducing the tensor form into these definitions, we find that the expressions become much easier to handle than the classical ones. In the next section, we will introduce the tensor form of all these definitions.

Let $x={\left({x}_{1},{x}_{2},\cdots ,{x}_{n}\right)}^{\text{T}}$ be a random vector. Denote by ${x}^{m}$ the symmetric rank-one m-order tensor with

${x}_{\sigma}^{m}={x}_{{i}_{1}}{x}_{{i}_{2}}\cdots {x}_{{i}_{m}},\text{\hspace{0.17em}}\forall \sigma :=\left({i}_{1},{i}_{2},\cdots ,{i}_{m}\right)\in S\left(m,n\right)$.

${x}^{m}$ is called the rank-1 tensor generated by $x$ , and it is symmetric. It is shown by Comon et al. [15] that a real tensor $\mathcal{A}$ (with size ${I}_{1}\times {I}_{2}\times \cdots \times {I}_{m}$ ) can always be decomposed into the form

$\mathcal{A}={\displaystyle \underset{j=1}{\overset{r}{\sum}}{\alpha}_{1}^{\left(j\right)}\times {\alpha}_{2}^{\left(j\right)}\times \cdots \times {\alpha}_{m}^{\left(j\right)}}$ (2.14)

where ${\alpha}_{i}^{\left(j\right)}\in {\mathbb{R}}^{{I}_{i}}$ for all $j\in \left[r\right],i\in \left[m\right]$. The smallest positive integer r is called the rank of $\mathcal{A}$ , denoted by $rank\left(\mathcal{A}\right)$. We note that Equation (2.14) can also be used to define the tensor product of two matrices, which will be used in our next work on the covariance of random matrices. Note that the tensor product of two rank-one matrices is

$\left({\alpha}_{1}\times {\beta}_{1}\right)\times \left({\alpha}_{2}\times {\beta}_{2}\right)={\alpha}_{1}\times {\alpha}_{2}\times {\beta}_{1}\times {\beta}_{2}$.

Now consider two matrices $A\in {\mathbb{R}}^{m\times n},B\in {\mathbb{R}}^{p\times q}$. Then write $A,B$ in a rank-1 decomposition, i.e.,

$A={\displaystyle \underset{j=1}{\overset{{R}_{1}}{\sum}}{\alpha}_{1}^{\left(j\right)}\times {\beta}_{1}^{\left(j\right)}},\text{\hspace{0.17em}}B={\displaystyle \underset{k=1}{\overset{{R}_{2}}{\sum}}{\alpha}_{2}^{\left(k\right)}\times {\beta}_{2}^{\left(k\right)}}$.

The Tucker decomposition factors the original tensor into a product of a core tensor and a number of unitary matrices along different modes [15] , so an N-order tensor $\mathcal{A}$ can be decomposed into

$\mathcal{A}=S{\times}_{1}{U}_{1}{\times}_{2}{U}_{2}\cdots {\times}_{N}{U}_{N}$ (2.15)

where $S$ is the core tensor, and ${U}_{1},{U}_{2},\cdots ,{U}_{N}$ are unitary matrices.
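A hedged sketch of the Tucker/HOSVD factorization (2.15): the factor matrices are taken from the SVDs of the mode unfoldings and the core is obtained by projection. The helpers are ours, and the mode product follows the convention of Equation (2.12) (mode index contracted with the first index of U), so reconstruction uses the transposed factors.

```python
import numpy as np

def mode_unfold(A, k):
    # mode-k unfolding: mode-k fibres become columns (earlier indices fastest)
    return np.reshape(np.moveaxis(A, k, 0), (A.shape[k], -1), order='F')

def mode_product(A, U, k):
    # mode-k index of A contracts with the first index of U
    return np.moveaxis(np.tensordot(A, U, axes=(k, 0)), -1, k)

def hosvd(A):
    # factor matrices from SVDs of the unfoldings; core by projection
    Us = [np.linalg.svd(mode_unfold(A, k), full_matrices=False)[0]
          for k in range(A.ndim)]
    S = A
    for k, U in enumerate(Us):
        S = mode_product(S, U, k)
    return S, Us

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4, 5))
S, Us = hosvd(A)
A_rec = S
for k, U in enumerate(Us):
    A_rec = mode_product(A_rec, U.T, k)     # A = S x_1 U_1^T x_2 U_2^T x_3 U_3^T
print(np.allclose(A_rec, A))                # exact reconstruction
```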

Example 2.1. Let $\mathcal{X}$ be a $2\times 2\times 2$ tensor defined by

$X\left(:,:,1\right)=\left[\begin{array}{cc}1& 3\\ 2& 4\end{array}\right],\text{\hspace{0.17em}}X\left(:,:,2\right)=\left[\begin{array}{cc}5& 7\\ 6& 8\end{array}\right]$.

Then, ordering the columns so that earlier indices vary fastest, the unfolded matrices along the 1-mode, 2-mode and 3-mode are respectively

${X}_{1}=\left[\begin{array}{cccc}1& 3& 5& 7\\ 2& 4& 6& 8\end{array}\right],\text{\hspace{0.17em}}{X}_{2}=\left[\begin{array}{cccc}1& 2& 5& 6\\ 3& 4& 7& 8\end{array}\right],\text{\hspace{0.17em}}{X}_{3}=\left[\begin{array}{cccc}1& 2& 3& 4\\ 5& 6& 7& 8\end{array}\right]$.
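The unfoldings of Example 2.1 can be reproduced with a few lines of numpy, under the convention that earlier indices vary fastest along the columns (conventions differ across the literature, so other column orderings are also in use).

```python
import numpy as np

X = np.zeros((2, 2, 2))
X[:, :, 0] = [[1, 3], [2, 4]]
X[:, :, 1] = [[5, 7], [6, 8]]

def unfold(A, k):
    # mode-k fibres become columns; order='F' makes earlier indices vary fastest
    return np.reshape(np.moveaxis(A, k, 0), (A.shape[k], -1), order='F')

print(unfold(X, 0))   # [[1. 3. 5. 7.] [2. 4. 6. 8.]]
print(unfold(X, 1))   # [[1. 2. 5. 6.] [3. 4. 7. 8.]]
print(unfold(X, 2))   # [[1. 2. 3. 4.] [5. 6. 7. 8.]]
```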

3. Application of 3-Order Tensors in GLMs

The growth curve model (GCM) is one of the GLMs, introduced by Wishart in 1938 [16] to study the growth of animals and plants across different groups. It is a kind of generalized multivariate analysis of variance model and has been widely used in modern medicine, agriculture, biology, etc. GCM originally referred to a wide array of statistical models for repeated measures data [2] [14] . The contemporary use of GCM allows the estimation of inter-object variability in intra-object patterns of change over time, such as time trends, time paths, growth curves or latent trajectories [17] . The trajectories are the primary focus of analysis in most cases, whereas in others they may represent just one part of a much broader longitudinal model. The most basic GCMs contain fixed and random effects that best capture the collection of individual trajectories over time. In a GCM, the fixed effects represent the mean trajectory obtained by pooling all individuals, and the random effects represent the variance of the individual trajectories around these group means. For example, the fixed effects in a linear trajectory are estimates of the mean intercept and mean slope that define the underlying trajectory of the entire sample, and the random effects are estimates of the between-person variability in the individual intercepts and slopes. Smaller random effects imply that the parameters defining the trajectories are more similar across the sample of individuals; in the extreme situation where the random effects equal 0, all individuals are governed by precisely the same trajectory parameters (i.e., there is a single trajectory shared by all individuals). In contrast, larger random effects imply greater individual differences in the magnitude of the trajectory parameters around the mean values.

The analysis of a GCM focuses on the functional relationship among ordered responses. Conventional GCM methods apply to growth data and to other analogs such as dose-response data (indexed by dose), location-response data (indexed by distance), or response-surface data (indexed by two or more variables such as latitude and longitude). The GCM methods mainly focus on longitudinal observations on a one-dimensional characteristic even though they may also be used in multidimensional cases [2] .

A general GCM can be indicated by

$Y=XBT+E$ (3.16)

where $Y\in {\mathbb{R}}^{N\times p}$ is the random response matrix whose rows are mutually independent and whose columns correspond to the response variables ordered according to $d={\left[{d}_{1},{d}_{2},\cdots ,{d}_{p}\right]}^{\text{T}}$ ; $X\in {\mathbb{R}}^{N\times q}$ is the fixed design matrix with $r:=rank\left(X\right)\le q\le N$ ; the matrix $B\in {\mathbb{R}}^{q\times m}$ is a fixed parameter matrix whose entries are the regression coefficients; $T\in {\mathbb{R}}^{m\times p}$ is a within-subject design matrix each of whose entries is a fixed function of d; and $E\in {\mathbb{R}}^{N\times p}$ is a random error with matrix normal distribution $E\sim {\mathcal{N}}_{N,p}\left(0,\Sigma ,{I}_{N}\right)$ where $\Sigma \in {\mathbb{R}}^{p\times p}$ is an unknown symmetric positive definite matrix. Suppose the samples corresponding to each object are recorded at p different times (moments) ${d}_{1},{d}_{2},\cdots ,{d}_{p}$. Consider an example of a pattern of children's weight. Plotting the weights against the ages indicates a temporal pattern of growth. A univariate linear model for weight given age could be fitted with a design matrix T expressing the central tendency of the children's weights as a linear or curvilinear function of age. Here T is an example of a within-subject design matrix. If $N>1$ , a separate curve could be fitted for each subject to obtain a separate matrix of regression parameter estimators for each independent sampling unit, ${\stackrel{^}{B}}_{i}={Y}_{i}{T}^{\text{T}}{\left(T{T}^{\text{T}}\right)}^{-1}$ for $i\in \left[N\right]$ with ${Y}_{i}$ the ith row of Y, and a simple average of the N fitted curves is a proper (if not efficient) estimator of the population growth curve, that is,

$\stackrel{^}{B}=\frac{1}{N}{\displaystyle \underset{i=1}{\overset{N}{\sum}}{\stackrel{^}{B}}_{i}}$.

The efficient estimator has the form

$\stackrel{^}{B}={\left({X}^{\text{T}}X\right)}^{-1}{X}^{\text{T}}Y{T}^{\text{T}}{\left(T{T}^{\text{T}}\right)}^{-1}$. (3.17)
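A quick numerical sketch of this estimator on simulated data (two balanced groups and a linear within-subject design; all numbers below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(5)
N, q, m, p = 40, 2, 2, 6
X = np.kron(np.eye(q), np.ones((N // q, 1)))   # two balanced groups of 20
d = np.linspace(1.0, 6.0, p)                   # measurement times
T = np.vstack([np.ones(p), d])                 # within-subject design (m x p)
B = np.array([[10.0, 2.0],                     # group intercepts and slopes
              [12.0, 1.5]])
Y = X @ B @ T + 0.1 * rng.standard_normal((N, p))

# (X^T X)^{-1} X^T  Y  T^T (T T^T)^{-1}
B_hat = np.linalg.inv(X.T @ X) @ X.T @ Y @ T.T @ np.linalg.inv(T @ T.T)
print(B_hat)    # close to B
```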

Suppose the subjects are grouped in a balanced way, i.e., the N observations are clustered into m groups, each containing the same number, say n, of observations. In the simplest case of a single group, $X={l}_{N}$ , the all-ones vector, is the appropriate choice for computing $\stackrel{^}{B}$. The choice of T defines the functional form of the population growth curve by describing a functional relationship between weight and age.

Example 3.1. We record the heights of n boys and n girls whose ages are 2, 3, 4 and 6 years. From the observations we make the assumption that the average height increases linearly with age. The observed data are partitioned into two groups (one for the heights of the n boys and the other for those of the n girls), each consisting of n objects, and $p=4$ with age vector $d={\left[2,3,4,6\right]}^{\text{T}}$. Thus the model for height vs. age is $Y=XBT+E$ where

$X=\left[\begin{array}{cc}{l}_{n}& 0\\ 0& {l}_{n}\end{array}\right],T=\left[\begin{array}{c}{l}_{p}^{\text{T}}\\ {d}^{\text{T}}\end{array}\right]$

where ${l}_{k}\in {\mathbb{R}}^{k}$ is an all-ones vector of dimension k.

Here ${\beta}_{11},{\beta}_{12}$ are respectively the intercept and the slope for girls and ${\beta}_{21},{\beta}_{22}$ are respectively the intercept and the slope for boys, where $B=\left({\beta}_{ij}\right)\in {\mathbb{R}}^{2\times 2}$. We find that it is not easy to investigate the relationship among gender, height, weight and age in this matrix form. In the following we employ the tensor expression to deal with this issue.

Using the notation in tensor theory, we rewrite model (3.16) in form

$Y=B{\times}_{2}T{\times}_{1}{X}^{\text{T}}+E$

or equivalently

$Y=B{\times}_{1}{X}^{\text{T}}{\times}_{2}T+E$ (3.18)

where B is regarded as a second order tensor and $X,T$ as two matrices. Actually, according to Equation (2.12), we have

${\left(B{\times}_{1}{X}^{\text{T}}\right)}_{ij}={\displaystyle \underset{k=1}{\overset{q}{\sum}}{X}_{ik}{B}_{kj}}$.

Similarly we can define $B{\times}_{2}V$. Note that

$B{\times}_{1}U{\times}_{2}V=B{\times}_{2}V{\times}_{1}U$.

Now we extend model (3.16) to a more general form as

$\mathcal{A}=\mathcal{B}{\times}_{1}{X}_{1}^{\text{T}}{\times}_{2}{X}_{2}^{\text{T}}{\times}_{3}{X}_{3}^{\text{T}}+\mathcal{E}$ (3.19)

where $\mathcal{A}\in {\mathbb{R}}^{{n}_{1}\times {n}_{2}\times {n}_{3}}$ and $\mathcal{B}=\left({B}_{ijk}\right)\in {\mathbb{R}}^{{m}_{1}\times {m}_{2}\times {m}_{3}}$ are 3-order tensors, $\mathcal{B}$ being usually an unknown constant parameter tensor or the kernel tensor, $\mathcal{E}$ is the random error tensor, and ${X}_{i}\in {\mathbb{R}}^{{n}_{i}\times {m}_{i}}$ for $i=1,2,3$. Here the tensor-matrix multiplication is defined by Equation (2.12) according to the dimensional coherence along each mode.

The potential applications of Equation (3.19) are obvious. The HOSVD (higher order singular value decomposition) of a 3-order tensor can be regarded as a good example of this model.

Example 3.2. Consider a sequence of 1000 images extracted from a repository of face images of ten individuals, each with 100 face images. Suppose each face image is of size $256\times 256$. Then these images can be stored in a $256\times 256\times 1000$ tensor $\mathcal{A}$. Let $\mathcal{A}$ be decomposed as

$\mathcal{A}=\mathcal{B}{\times}_{1}{U}_{1}^{\text{T}}{\times}_{2}{U}_{2}^{\text{T}}{\times}_{3}{U}_{3}^{\text{T}}$ (3.20)

where

$\mathcal{B}\in {\mathbb{R}}^{16\times 16\times 50},{U}_{1}\in {\mathbb{R}}^{256\times 16},{U}_{2}\in {\mathbb{R}}^{256\times 16},{U}_{3}\in {\mathbb{R}}^{1000\times 50}.$

The decomposition Equation (3.20) yields a set of compressed images, each of size $16\times 16$. If each individual can be characterized by five images (this is called a balanced compression), then the kernel tensor $\mathcal{B}$ consists of 50 compressed images, where each ${U}_{i}$ is a projection matrix along mode-i (i = 1, 2, 3). Specifically, ${U}_{1}$ and ${U}_{2}$ together compress each image into a $16\times 16$ image, while ${U}_{3}$ finds the representative elements (here the 50 images) among a large set of images (the set of 1000 face images).

Analogously to the GCM, we let ${Y}_{ijk}$ be the measured value of index ${I}_{k}$ in class ${C}_{i}$ at time ${T}_{j}$. A tensor $\mathcal{Y}=\left({Y}_{ijk}\right)\in {\mathbb{R}}^{m\times n\times p}$ can be used to express m objects, say ${P}_{1},\cdots ,{P}_{m}$ , each having p indexes ${I}_{1},\cdots ,{I}_{p}$ measured respectively at times ${t}_{1},\cdots ,{t}_{n}$. For each index ${I}_{k},k\in \left[p\right]$ , we have the GCM form:

${Y}_{k}={B}_{k}{\times}_{1}{X}^{\text{T}}{\times}_{2}T+{E}_{k}$ (3.21)

where ${Y}_{k}=\left({y}_{1}^{\left(k\right)},\cdots ,{y}_{n}^{\left(k\right)}\right)\in {\mathbb{R}}^{m\times n}$. Suppose each row of ${Y}_{k}$ stands for a class of individuals, e.g., partitioned by ages. To make things clearer, we consider a concrete example.

Example 3.3. There are 30 persons under a health test, each measured at times ${T}_{1},\cdots ,{T}_{4}$ on 10 indexes such as the lower/higher blood pressures, heartbeat rate, urea, cholesterol, bilirubin, etc. We label these indexes respectively by ${I}_{1},\cdots ,{I}_{10}$. Suppose that the 30 people are partitioned into three groups (denoted by ${C}_{1},{C}_{2},{C}_{3}$ ) with respect to their ages, consisting of 5, 10 and 15 individuals respectively. Denote

$X=\left[\begin{array}{ccc}{l}_{5}& 0& 0\\ 0& {l}_{10}& 0\\ 0& 0& {l}_{15}\end{array}\right],T=\left[\begin{array}{cccc}1& 1& 1& 1\\ {t}_{1}& {t}_{2}& {t}_{3}& {t}_{4}\\ {t}_{1}^{2}& {t}_{2}^{2}& {t}_{3}^{2}& {t}_{4}^{2}\end{array}\right]$

and

${B}_{k}=\left[\begin{array}{ccc}{\beta}_{11k}& {\beta}_{12k}& {\beta}_{13k}\\ {\beta}_{21k}& {\beta}_{22k}& {\beta}_{23k}\\ {\beta}_{31k}& {\beta}_{32k}& {\beta}_{33k}\end{array}\right]$.

Denote by ${Y}_{ijk}$ the measurement of Index ${I}_{k}$ in group ${C}_{i}$ at time ${T}_{j}$. Set $Y\left(:,:,k\right)={Y}_{k}$ , $B\left(:,:,k\right)={B}_{k}$ for $k=1,2,\cdots ,10$. Then we have

$\mathcal{Y}=\mathcal{B}{\times}_{1}{X}^{\text{T}}{\times}_{2}T+\epsilon $ (3.22)

where $\mathcal{Y},\epsilon \in {\mathbb{R}}^{30\times 4\times 10}$ , $X\in {\mathbb{R}}^{30\times 3}$ , $T\in {\mathbb{R}}^{3\times 4}$ , and $\mathcal{B}\in {\mathbb{R}}^{3\times 3\times 10}$ is an unknown constant parameter tensor to be estimated, where $B\left(:,:,k\right)={B}_{k}$ is the parameter matrix corresponding to the kth index model. The model (3.22) can be further extended to manipulate a balanced linear mixed model when multiple responses are measured for balanced clustered subjects (i.e., with the same number of subjects in each cluster).
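The model of Example 3.3 can be assembled slice by slice, or in one contraction, since its index-wise form is ${Y}_{ijk}={\sum}_{a,b}{X}_{ia}{B}_{abk}{T}_{bj}+{\epsilon}_{ijk}$ ; the sketch below uses the group sizes of the example with random illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(6)
sizes, p = [5, 10, 15], 10
times = np.array([1.0, 2.0, 3.0, 4.0])

# block design matrix X: a column of ones per age group
X = np.zeros((sum(sizes), 3))
start = 0
for g, s in enumerate(sizes):
    X[start:start + s, g] = 1.0
    start += s

T = np.vstack([np.ones(4), times, times ** 2])   # 3 x 4 quadratic time design
B = rng.standard_normal((3, 3, p))               # parameter tensor
E = 0.01 * rng.standard_normal((30, 4, p))       # error tensor

# Y_{ijk} = sum_{a,b} X_{ia} B_{abk} T_{bj} + E_{ijk}
Y = np.einsum('ia,abk,bj->ijk', X, B, T) + E
print(Y.shape)    # (30, 4, 10)
```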

4. Tensor Normal Distributions

In multivariate analysis, the correlations between the coordinates of a random vector $x={\left({x}_{1},\cdots ,{x}_{n}\right)}^{\text{T}}$ are represented by the covariance matrix $\Sigma :=\Sigma \left(x\right)$ , which is symmetric and positive semidefinite. When the variables are arrayed as a matrix, say $X=\left({X}_{ij}\right)\in {\mathbb{R}}^{m\times n}$ , which is called a random matrix, the correlation between any pair of entries, say ${X}_{{i}_{1}{j}_{1}}$ and ${X}_{{i}_{2}{j}_{2}}$ of the matrix $X$ , is represented as an entry of the covariance matrix of the vectorization $\text{vec}\left(X\right)$. A matrix normal distribution is defined as follows. Let $\mu \in {\mathbb{R}}^{m\times n}$ , and let $\Sigma \in {\mathbb{R}}^{m\times m},\varphi \in {\mathbb{R}}^{n\times n}$ be two positive definite matrices. A random matrix $X\in {\mathbb{R}}^{m\times n}$ is said to obey a matrix normal distribution, denoted by $X\sim {\mathcal{N}}_{m,n}\left(\mu ,\Sigma ,\varphi \right)$ , if it satisfies the following conditions:

1) ${\rm E}\left[X\right]=\mu $ , i.e., ${\rm E}\left[{X}_{ij}\right]={\mu}_{ij}$ for each $i\in \left[m\right],j\in \left[n\right]$.

2) Each centered row of $X$ obeys the normal distribution ${\left({X}_{i\cdot}-{\mu}_{i\cdot}\right)}^{\text{T}}\sim {\mathcal{N}}_{n}\left(0,\varphi \right)$ for $i\in \left[m\right]$.

3) Each centered column obeys the normal distribution ${X}_{\cdot j}-{\mu}_{\cdot j}\sim {\mathcal{N}}_{m}\left(0,\Sigma \right)$ for $j\in \left[n\right]$.

It can be shown that the matrix normal distribution $X\sim {\mathcal{N}}_{m,n}\left(\mu ,\Sigma ,\varphi \right)$ is equivalent to $\text{vec}\left(X\right)\sim {\mathcal{N}}_{mn}\left(\text{vec}\left(\mu \right),\Sigma \otimes \varphi \right)$ (see e.g. [8] ).
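The equivalence with the vectorized normal distribution suggests a simple way to draw a matrix normal sample: draw the vectorized vector and reshape. A hedged sketch follows; note that the Kronecker order depends on the vectorization convention (column-stacking vec pairs with $\varphi \otimes \Sigma $ , row-stacking with $\Sigma \otimes \varphi $ ), and we use column-stacking below with illustrative parameter matrices.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 3, 2
Sigma = np.array([[2.0, 0.5, 0.0],          # m x m (among-row) covariance
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
phi = np.array([[1.0, 0.4],                 # n x n (among-column) covariance
                [0.4, 2.0]])
mu = np.zeros((m, n))

# covariance of the column-stacked vec(X) is phi kron Sigma
L = np.linalg.cholesky(np.kron(phi, Sigma))
vecX = L @ rng.standard_normal(m * n)
X = vecX.reshape((m, n), order='F') + mu    # undo column-stacking
print(X.shape)    # (3, 2)
```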

We now define the tensor normal distribution. Let $\mathcal{A}=\left({A}_{{i}_{1}{i}_{2}\dots {i}_{m}}\right)\in {\mathcal{T}}_{m}$ be an m-order tensor of size $\mathcal{N}:={N}_{1}\times {N}_{2}\times \cdots \times {N}_{m}$ , each of whose entries is a random variable. Let $\mu =\left({\mu}_{{i}_{1}{i}_{2}\cdots {i}_{m}}\right)$ be an m-order tensor of the same size as that of $\mathcal{A}$ , and ${\Sigma}_{k}\left(k\in \left[m\right]\right)$ be an ${N}_{k}\times {N}_{k}$ positive definite matrix. For convenience, we denote by ${I}^{\left(n\right)}$ the $\left(m-1\right)$ -tuple $\left({i}_{1},{i}_{2},\cdots ,{i}_{n-1},{i}_{n+1},\cdots ,{i}_{m}\right)$ with ${i}_{k}\in \left[{N}_{k}\right]$. We denote by $A\left({I}^{\left(n\right)}\right)$ and $\mu \left({I}^{\left(n\right)}\right)$ , both in ${\mathbb{R}}^{N{}_{n}}$ respectively the corresponding fibre (vector) of $\mathcal{A}$ and $\mu $ , indexed by ${I}^{\left(n\right)}$ , i.e.,

$A\left({I}^{\left(n\right)}\right):=A\left({i}_{1},{i}_{2},\cdots ,{i}_{n-1},:,{i}_{n+1},\cdots ,{i}_{m}\right),\quad \forall {i}_{k}\in \left[{N}_{k}\right],k\in \left[m\right]\backslash \left\{n\right\}.$

$A\left({I}^{\left(n\right)}\right)$ ( $\mu \left({I}^{\left(n\right)}\right)$ resp.) is called a fibre of $\mathcal{A}$ ( $\mu$ resp.) along mode-n indexed by ${I}^{\left(n\right)}$. $\mathcal{A}$ is said to obey a tensor normal distribution with parameters $\left(\mu ,{\Sigma}_{1},\cdots ,{\Sigma}_{m}\right)$, denoted by

$\mathcal{A}\sim {\mathcal{N}}_{T}\left(\mu ,{\Sigma}_{1},\cdots ,{\Sigma}_{m}\right)$

if for any $n\in \left[m\right]$ , we have

$A\left({i}_{1},\cdots ,{i}_{n-1},:,{i}_{n+1},\cdots ,{i}_{m}\right)\sim {\mathcal{N}}_{{N}_{n}}\left(\mu \left({I}^{\left(n\right)}\right),{\Sigma}_{n}\right)$.

$\mathcal{A}$ is said to follow a standard tensor normal distribution if all the ${\Sigma}_{k}$ ’s are identity matrices. A model (2.13) with a tensor normal distribution is called a general tensor normal (GTN) model.

To show an application of the tensor normal distribution, we consider 3-order tensors. For convenience, we use the $\left(i,j\right)$ -value to denote the value related to the ith subject at the jth measurement for any $\left(i,j\right)\in \left[m\right]\times \left[n\right]$. For example, the kth response observation ${Y}_{ijk}$ at $\left(i,j\right)$ represents the kth response value measured on the ith subject at time j. Now let $m,n,p$ be respectively the number of observed subjects, the number of measurements per subject, and the number of responses per observation. Denote by $\mathcal{Y}$ the response tensor with ${Y}_{ijk}$ being the kth response at $\left(i,j\right)$, by $\mathcal{X}$ the covariate tensor with ${X}_{ij:}\in {\mathbb{R}}^{r}$ being the covariate vector at $\left(i,j\right)$ for the fixed effects, and by $\mathcal{U}$ the covariate tensor with ${U}_{ij:}\in {\mathbb{R}}^{q}$ being the covariate vector at $\left(i,j\right)$ for the random effects. Further, for each $k\in \left[p\right]$, denote by ${B}_{k}\in {\mathbb{R}}^{r}$ the coefficient vector of the fixed effects corresponding to the kth response ${Y}_{ijk}$ at each pair $\left(i,j\right)\in \left[m\right]\times \left[n\right]$, and similarly by ${C}_{k}\in {\mathbb{R}}^{q}$ the coefficient vector of the random effects. Now let $B=\left[{B}_{1},\cdots ,{B}_{p}\right]$ and $\gamma =\left[{C}_{1},\cdots ,{C}_{p}\right]$, so that $B\in {\mathbb{R}}^{r\times p}$ and $\gamma \in {\mathbb{R}}^{q\times p}$. We call $\mathcal{X}$ and $\mathcal{U}$ respectively the design tensor for the fixed effects and the design tensor for the random effects. Then we have

$\mathcal{Y}=\mathcal{X}B+\mathcal{U}\gamma +\epsilon$ (4.23)

with $\mathcal{Y}=\left({Y}_{ijk}\right)\in {\mathbb{R}}^{m\times n\times p}$, $\mathcal{X}=\left({X}_{ijk}\right)\in {\mathbb{R}}^{m\times n\times r}$, $B=\left({\beta}_{ij}\right)\in {\mathbb{R}}^{r\times p}$, $\mathcal{U}=\left({U}_{ijk}\right)\in {\mathbb{R}}^{m\times n\times q}$, $\gamma =\left({\gamma}_{ij}\right)\in {\mathbb{R}}^{q\times p}$, and $\epsilon =\left({\epsilon}_{ijk}\right)\in {\mathbb{R}}^{m\times n\times p}$, where ${\epsilon}_{ijk}$ is the error term. Here the tensor-matrix multiplications $\mathcal{X}B$ and $\mathcal{U}\gamma$ are defined by

${\left(\mathcal{X}B\right)}_{ijk}=X\left(i,j,:\right){B}_{k}={\displaystyle \underset{{k}^{\prime}=1}{\overset{r}{\sum}}{X}_{ij{k}^{\prime}}{\beta}_{{k}^{\prime}k}},\forall i\in \left[m\right],j\in \left[n\right],k\in \left[p\right]$

${\left(\mathcal{U}\gamma \right)}_{ijk}=U\left(i,j,:\right){C}_{k}={\displaystyle \underset{{k}^{\prime}=1}{\overset{q}{\sum}}{U}_{ij{k}^{\prime}}{\gamma}_{{k}^{\prime}k}},\forall i\in \left[m\right],j\in \left[n\right],k\in \left[p\right].$
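This entrywise product contracts the last index of the tensor with the first index of the matrix, which can be written compactly with `einsum`; a minimal sketch (the function name `mode3_product` is ours):

```python
import numpy as np

# Sketch of the mode-3 tensor-matrix product in (4.23):
# (XB)_{ijk} = sum_{k'} X_{ijk'} beta_{k'k}.
def mode3_product(X, B):
    # contract the last index of X (size r) with the first index of B
    return np.einsum('ijr,rk->ijk', X, B)
```

The same routine computes $\mathcal{U}\gamma$ by passing $\mathcal{U}$ and $\gamma$, since the contraction pattern is identical.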

Denote $B=\left[{\beta}_{1},\cdots ,{\beta}_{p}\right],\gamma =\left[{\gamma}_{1},\cdots ,{\gamma}_{p}\right]$ , and

${E}_{i}^{\left(1\right)}:=E\left(i,:,:\right),{E}_{j}^{\left(2\right)}:=E\left(:,j,:\right),{E}_{k}^{\left(3\right)}:=E\left(:,:,k\right),\forall i\in \left[m\right],j\in \left[n\right],k\in \left[p\right]$

${E}_{jk}^{\left[1\right]}:=E\left(:,j,k\right),{E}_{ik}^{\left[2\right]}:=E\left(i,:,k\right),{E}_{ij}^{\left[3\right]}:=E\left(i,j,:\right),\forall i\in \left[m\right],j\in \left[n\right],k\in \left[p\right]$

where each matrix ${E}_{l}^{\left(s\right)}$ is called a slice on mode s, and each vector ${E}_{lt}^{\left[s\right]}$ is called a fibre along mode-s. We also use ${E}^{\left[s\right]}$ to denote the set consisting of all fibres of $\epsilon $ along mode-s, and use notation ${E}^{\left[s\right]}\sim P$ to express that each element of ${E}^{\left[s\right]}$ obeys distribution P where P is a distribution function. For example, ${E}^{\left[1\right]}\sim {\mathcal{N}}_{m}\left(0,{I}_{m}\right)$ means that each 1-mode fibre ${E}_{jk}^{\left[1\right]}$ (there are $np$ 1-mode fibres) obeys a standard normal distribution, i.e., ${E}_{jk}^{\left[1\right]}\sim {\mathcal{N}}_{m}\left(0,{I}_{m}\right)$.
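The fibre notation above can be made concrete with a short sketch; `mode_fibres` is our name, not the paper's.

```python
import numpy as np

# Sketch: all fibres of a 3-order tensor E along mode s, following the
# paper's notation E_{jk}^{[1]} = E(:, j, k), etc.
def mode_fibres(E, s):
    """Return the list of fibres of E along mode s (s = 1, 2 or 3)."""
    m, n, p = E.shape
    if s == 1:
        return [E[:, j, k] for j in range(n) for k in range(p)]  # n*p fibres
    if s == 2:
        return [E[i, :, k] for i in range(m) for k in range(p)]  # m*p fibres
    return [E[i, j, :] for i in range(m) for j in range(n)]      # m*n fibres
```

For $E\in {\mathbb{R}}^{m\times n\times p}$ this yields $np$ mode-1 fibres of length $m$, matching the count stated in the text.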

Now for convenience we let $\left({n}_{1},{n}_{2},{n}_{3}\right):=\left(m,n,p\right)$. We assume that

1) $\gamma $ obeys matrix normal distribution $\gamma \sim {\mathcal{N}}_{q,p}\left(0,\Sigma ,\varphi \right)$.

2) The random vectors in are independent with for each.

3) For any with being positive definite of size.

The model (4.23) with conditions 1)-3) is called a 3-order general mixed tensor (GMT) model. We will generalize this model further below. In the following, we first define the standard normal 3-order tensor distribution:

Definition 4.1. Let be a random tensor, i.e., each entry of is a random variable. Let be a constant tensor and be a positive definite matrix for each. Then is said to obey 3-order tensor standard normal (TSN) distribution if for all.

A 3-order random tensor satisfying TSN distribution has the following property:

Theorem 4.2. Let be a 3-order random tensor which obeys the tensor standard normal (TSN) distribution. Then each of its slices obeys a standard matrix normal distribution. Specifically, we have

(4.24)

Proof. □

Note that condition 3) is a generalization of the matrix normal distribution, and we denote it by

.

Note that is a diagonal matrix since both and are diagonal. Write where is the expansion of along the third mode, specifically,

here is the tensor consisting of n identity matrices of size stacking along the third mode, thus and. Then we have

. (4.25)

Now we unfold along mode-3 to get matrix and respectively. Then Equation (4.25) is equivalent to

(4.26)

where and are generated similarly as.

The multivariate linear mixed model (4.23) or (4.25) can be transformed into a general linear model through the vectorization of matrices. Recall that the vectorization of a matrix $A\in {\mathbb{R}}^{m\times n}$ is a vector of dimension $mn$, denoted by $\text{vec}\left(A\right)$, formed by vertically stacking the columns of $A$ in order, that is, $\text{vec}\left(A\right)={\left({a}_{1}^{\text{T}},\cdots ,{a}_{n}^{\text{T}}\right)}^{\text{T}}$

where ${a}_{1},\cdots ,{a}_{n}$ are the column vectors of $A$. The vectorization is closely related to the Kronecker product of matrices. The following lemma (Proposition 1.3.14 on Page 89 of [4]) presents some basic properties of the vectorization and the Kronecker product, and will be used to prove our main result:

Lemma 4.3. Let be matrices of appropriate sizes such that all the operations defined in the following are valid. Then

1).

2).

3).
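The items of the lemma are elided here; as an illustration of the kind of vec/Kronecker identity it collects, the following is a numerical check of the standard relation $\text{vec}\left(AXB\right)=\left({B}^{\text{T}}\otimes A\right)\text{vec}\left(X\right)$ (with column-stacking vec); the helper name `vec` is ours.

```python
import numpy as np

# Numerical check of the standard identity vec(A X B) = (B^T kron A) vec(X),
# using the column-stacking convention for vec.
def vec(M):
    return M.flatten(order='F')  # stack columns vertically

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
```

This identity is what lets a matrix (or unfolded tensor) equation be rewritten as an ordinary linear system in $\text{vec}\left(X\right)$.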

The following property of the multiplication of a tensor with a matrix is shown by Kolda and Bader [18] and will be used to prove our main result.

Lemma 4.4. Let be a real tensor of size, and where. Then if and only if

(4.27)

where are respectively the flattened matrices of and along mode-n.

Proof. Let. Then for any, we have

. (4.28)

From which the result Equation (4.27) is immediate. □

Note that our Formula (4.27) is different from that in Section 2.5 in [18] since the definition of tensor-matrix multiplication is different.

We have the following result for the estimation of the parameter matrix:

Theorem 4.5. Suppose in Equation (4.23). Then the optimal estimation of the parameter matrix in Equation (4.23) is

. (4.29)

Proof. We first write Equation (4.25) in a matrix-vector form by vectorization by using the first item in Lemma 4.3,

(4.30)

where is the sum of two random terms. By the property of the vectorizations (see e.g. [4] ), we know that. By the ordinary least square solution method we get

By using (1) of Lemma 4.3 again (this time in the opposite direction), we get result (4.29). □
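The closed form in Theorem 4.5 is elided here; as a sketch of the underlying least-squares step, the following estimates $B$ in the fixed-effects part $\mathcal{Y}=\mathcal{X}B+\epsilon$ of model (4.23) by flattening the $\left(i,j\right)$ pairs into rows and applying the Moore-Penrose inverse. The function name `estimate_B` is ours, not the paper's.

```python
import numpy as np

# Hedged sketch: least-squares estimate of B in Y = XB + E (fixed effects
# only), where Y is m x n x p, X is m x n x r and B is r x p.  Flattening
# the (i, j) pairs into rows turns the tensor model into ordinary
# multivariate least squares.
def estimate_B(Y, X):
    m, n, p = Y.shape
    r = X.shape[2]
    Xmat = X.reshape(m * n, r)         # one row per (i, j) pair
    Ymat = Y.reshape(m * n, p)
    return np.linalg.pinv(Xmat) @ Ymat  # Moore-Penrose least squares
```

Using the pseudoinverse rather than a plain solve mirrors the paper's use of generalized inverses: the estimate remains well defined (as the minimum-norm least-squares solution) even when the flattened design matrix is rank deficient.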

For any, we denote when, and when (stands for a g-inverse of a matrix). Then can be regarded as the projection from into since. Furthermore, we have. We now end the paper by presenting the following result as a prediction model, which follows directly from Theorem 4.5.

Theorem 4.6. Suppose in Equation (4.23). Then the mean of the response tensor in Equation (4.23) is

(4.31)

where.

Proof. By Theorem 4.5 and Equation (4.26), we have

It follows that

. (4.32)

By employing Lemma 4.4, we get result (4.31). □

Acknowledgements

This research was partially supported by the Hong Kong Research Grant Council (No. PolyU 15301716) and the graduate innovation funding of USTS. We thank the anonymous referees for their careful and thorough reading and their suggestions, which improved the writing of the paper.

References

[1] Bilodeau, M. and Brenner, D. (1999) Theory of Multivariate Statistics. Springer, New York.

[2] Bollen, K.A. and Curran, P.J. (2006) Latent Curve Models: A Structural Equation Perspective. Wiley, Hoboken.

[3] Bryk, A.S. and Raudenbush, S.W. (1987) Application of Hierarchical Linear Models to Assessing Change. Psychological Bulletin, 101, 147-158.

https://doi.org/10.1037/0033-2909.101.1.147

[4] Kollo, T. and von Rosen, D. (2005) Advanced Multivariate Statistics with Matrices. Springer, New York.

https://doi.org/10.1007/1-4020-3419-9

[5] Raudenbush, S.W. and Bryk, A.S. (2002) Hierarchical Linear Models: Applications and Data Analysis Methods. 2nd Edition, Sage Publications, Thousand Oaks, CA.

[6] Bauer, D.J. (2007) Observations on the Use of Growth Mixture Models in Psychological Research. Multivariate Behavioral Research, 42, 757-786.

https://doi.org/10.1080/00273170701710338

[7] Bollen, K.A. and Curran, P.J. (2004) Autoregressive Latent Trajectory (ALT) Models: A Synthesis of Two Traditions. Sociological Methods and Research, 32, 336-383.

https://doi.org/10.1177/0049124103260222

[8] Coffman, D.L. and Millsap, R.E. (2006) Evaluating Latent Growth Curve Models Using Individual Fit Statistics. Structural Equation Modeling, 13, 1-27.

https://doi.org/10.1207/s15328007sem1301_1

[9] Hedeker, D. and Gibbons, R. (2006) Longitudinal Data Analysis. Wiley Inc., New York.

[10] Little, R.J. and Rubin, D.B. (1987) Statistical Analysis with Missing Data. Wiley Inc., New York.

[11] Singer, J.D. (1998) Using SAS Proc Mixed to Fit Multilevel Models, Hierarchical Models, and Individual Growth Models. Journal of Educational and Behavioral Statistics, 23, 323-355.

https://doi.org/10.3102/10769986023004323

[12] Singer, J.D. and Willett, J.B. (2003) Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford University Press, New York.

https://doi.org/10.1093/acprof:oso/9780195152968.001.0001

[13] Hastie, T., Tibshirani, R. and Friedman, J. (2009) The Elements of Statistical Learning. 2nd Edition, Springer, New York.

https://doi.org/10.1007/978-0-387-84858-7

[14] Bollen, K.A. (2007) On the Origins of Latent Curve Models. In: Cudeck, R. and MacCallum, R., Eds., Factor Analysis, at 100, Lawrence Erlbaum Associates, Mahwah, NJ, 79-98.

[15] Comon, P., Golub, G., Lim, L.-H. and Mourrain, B. (2008) Symmetric Tensors and Symmetric Tensor Rank. SIAM Journal on Matrix Analysis and Applications, 30, 1254-1279.

https://doi.org/10.1137/060661569

[16] Wishart, J. (1938) Growth Rate Determinations in Nutrition Studies with the Bacon Pig and Their Analysis. Biometrika, 30, 16-28.

https://doi.org/10.1093/biomet/30.1-2.16

[17] McArdle, J.J. (2009) Latent Variable Modeling of Differences and Changes with Longitudinal Dynamic Structural Analysis. Annual Review of Psychology, 60, 577-605.

https://doi.org/10.1146/annurev.psych.60.110707.163612

[18] Kolda, T.G. and Bader, B.W. (2009) Tensor Decompositions and Applications. SIAM Review, 51, 455-500.

https://doi.org/10.1137/07070111X