Testing the Equality Hypothesis on a Cross-Covariance Matrix


1. Introduction

Tests of covariance matrices in multivariate statistical analysis have wide applications in many fields of research and practice, such as target detection [1] and face recognition [2] . They have attracted considerable interest since the 1940s. However, most existing research on this topic focuses on testing the covariance matrix rather than the cross-covariance matrix. In some circumstances, not all the entries in the covariance matrix are of interest, so testing whether a cross-covariance matrix equals a specified one becomes an important issue. For instance, the problem of testing time-reversibility (see Section 4 for more details) can be transformed into a cross-covariance matrix test. Therefore, like the covariance matrix test, it is of great practical interest to develop methods for the cross-covariance matrix test.

Over the past several years, many types of statistics have been proposed to test various equalities of covariance matrices. The first type is a class of statistics based on the likelihood ratio (LR). Mauchly [3] made one of the earliest attempts with a likelihood-ratio approach. Mauchly's statistic depends on the determinant and the trace of the sample covariance matrix; it requires the sample covariance matrix to be non-singular, which holds with probability one when the sample size is larger than the dimension. Gupta and Xu [4] generalized the likelihood ratio test to non-normal distributions by deriving the asymptotic expansion of the test statistic under the null hypothesis for moderate sample sizes. Later, Jiang et al. [5] proved, with the aid of Selberg integrals, that the likelihood ratio test statistic has an asymptotic normal distribution under two different sets of assumptions. Statistics of the first type can also be extended to high-dimensional data. For instance, Bai et al. [6] used central limit theorems for linear spectral statistics of sample covariance matrices and of random F-matrices, and proposed a modification of the likelihood ratio test to cope with high-dimensional effects. Subsequently, Niu et al. [7] considered testing the mean vector and covariance matrix simultaneously with high-dimensional non-Gaussian data; they applied the central limit theorem for linear spectral statistics of sample covariance matrices and established a new modification of the likelihood ratio test. The second type is a class of statistics based on empirical distance. Let ${Z}_{1}\mathrm{,}{Z}_{2}\mathrm{,}\cdots \mathrm{,}{Z}_{N}$ be a p-dimensional random sample drawn from a normal distribution with mean vector $\mu $ and covariance matrix $\Sigma $ . Nagao [8] proposed the test statistic

${T}_{1}=\frac{n}{2}\text{tr}{\left(S{\Sigma}_{0}^{-1}/n-I\right)}^{2}\mathrm{,}$ (1)

to test the null hypothesis ${H}_{0}:\Sigma ={\Sigma}_{0}$ versus the alternative ${H}_{1}\mathrm{:}\Sigma \ne {\Sigma}_{0}$ , where $S={\displaystyle {\sum}_{i=1}^{N}}\left({Z}_{i}-\stackrel{\xaf}{Z}\right){\left({Z}_{i}-\stackrel{\xaf}{Z}\right)}^{\prime}$ with $\stackrel{\xaf}{Z}={N}^{-1}{\displaystyle {\sum}_{i=1}^{N}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Z}_{i}$ , $n=N-1$ and I is the identity matrix. The null hypothesis is rejected when the observed value of ${T}_{1}$ exceeds the critical value at a pre-assigned level of significance. The third type is based on the largest eigenvalue of the covariance matrix and random matrix theory. For instance, Cai et al. [9] studied the limiting laws of the coherence of an $n\times p$ random matrix in the high-dimensional setting where p can be much larger than n, and then considered testing the structure of the covariance matrix of a high-dimensional Gaussian distribution, where the random matrix plays a crucial role in the construction of the test. The last type is a statistic based on the examination of a fixed column of the sample covariance matrix. Gupta and Bodnar [10] proposed an exact test on the structure of the covariance matrix; their statistic examines a fixed column of the sample covariance matrix and can be applied even when the sample size is much smaller than the dimension.

The statistics mentioned above for the covariance matrix test are applicable when the dispersion matrix has a Wishart distribution or the distribution of the test statistic is derivable, so the asymptotic properties of these statistics can be obtained. In many circumstances, however, the asymptotic distribution of the test statistic is complicated in the absence of strict normality or when the Wishart distribution is unavailable. In this paper, we provide a new method for testing the cross-covariance matrix rather than the covariance matrix, which can be more efficient in problems where the variances are not of interest. Moreover, the proposed test does not rely on the Wishart distribution and can be implemented by a parametric bootstrap scheme.

The proposed statistic is based on the Frobenius norm of the difference between the sample cross-covariance matrix and the given matrix. Theoretically, it can detect any deviation of the cross-covariance matrix from a pre-assigned one. Several numerical examples show that it is more powerful than some competing methods in detecting a cross-covariance matrix that deviates from the pre-assigned matrix.

Recently, tests of time-reversibility (TR) have drawn much attention because time-reversibility is a necessary condition for an independent and identically distributed (i.i.d.) sequence. As is well known, i.i.d. sequences and stationary Gaussian models are time-reversible. In contrast, a linear non-Gaussian process is time-irreversible unless its coefficients satisfy particular constraints [11] . Several TR tests have been proposed as specification checks in model construction [12] [13] [14] [15] . In this paper, the TR test method is based on the copula spectral density kernel (CSDK) proposed by Dette et al. [16] , which is more informative than the traditional spectral density because it captures serial dependence beyond what covariances describe. The CSDK ${\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)$ (defined in (17)) is indexed by a couple $\left({q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}\right)$ of quantile levels, where $\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)\in {\left[\mathrm{0,1}\right]}^{2}$ and ${q}_{{\tau}_{i}}={F}^{-1}\left({\tau}_{i}\right)\left(i=1,2\right)$ , with $F\left(x\right)$ being the one-dimensional marginal cumulative distribution function of a strictly stationary univariate process ${\left\{{x}_{t}\right\}}_{t\in \mathbb{Z}}$ . The time series is pairwise time-reversible if and only if $\Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)=0$ for all $\omega $ and all $\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)\in {\left[\mathrm{0,1}\right]}^{2}$ , where $\Im a$ denotes the imaginary part of a complex number a. Thus the imaginary part of the CSDK equals zero whenever ${\left\{{x}_{t}\right\}}_{t\in \mathbb{Z}}$ is time-reversible, so we can transform the problem of testing pairwise time-reversibility into that of testing whether the imaginary part of the CSDK is zero. From Theorem 3.3 of Dette et al. [16] we derive a covariance matrix ${\Sigma}_{2}\left(\omega \right)$ (defined in (24)) and find that time-reversibility implies that the cross-covariance entry of ${\Sigma}_{2}\left(\omega \right)$ equals a zero matrix. Theoretically, we can therefore transform the problem of testing pairwise time-reversibility into that of testing the specification of a cross-covariance matrix.

Throughout the paper, we denote by $\stackrel{d}{=}$ equality in distribution, and define $\Re a$ and $\Im a$ as the real part and imaginary part of a complex number a, respectively. For matrix notation, ${I}_{m}$ and ${O}_{m}$ denote the $m\times m$ identity matrix and $m\times m$ zero matrix, respectively; $\text{det}\left(M\right)$ and $\text{tr}\left(M\right)$ represent the determinant and trace of the matrix M, respectively; ${\Vert M\Vert}_{F}$ indicates the Frobenius norm of M. ${\chi}^{2}\left(q\right)$ and ${t}_{q}$ denote a chi-square distribution and a Student t distribution with q degrees of freedom, respectively.

The rest of the paper is organized as follows. Section 2 presents the test statistic with the bootstrap scheme in computing the p-value of the cross-covariance matrix test. Section 3 reports empirical results for examining performance of the proposed test by using simulated data. Section 4 illustrates the applications in detecting any deviation from time-reversibility of a time series. Section 5 contains our conclusions.

2. Test Statistic and Its Distributional Approximation

2.1. Test Statistic

Let ${Z}_{1}\mathrm{,}\cdots \mathrm{,}{Z}_{N}$ be an independent sample from ${\mathcal{N}}_{p}\left(\mu \mathrm{,}\Sigma \right)\mathrm{,}p\ge 2$ , a multivariate normal distribution with mean vector $\mu $ and covariance matrix $\Sigma $ , where

${Z}_{i}$ is expressed by $\left(\begin{array}{c}{X}_{i}\\ {Y}_{i}\end{array}\right),i=1,\cdots ,N$ , and $\Sigma $ is a block matrix given by

$\Sigma =\left(\begin{array}{cc}{\Sigma}_{x}& \Gamma \\ {\Gamma}^{\prime}& {\Sigma}_{y}\end{array}\right),$ (2)

with $\Gamma $ being the cross-covariance matrix. In this section, we consider the problem of testing

${H}_{0}\mathrm{:}\Gamma ={\Gamma}_{0}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{versus}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{H}_{1}\mathrm{:}\Gamma \ne {\Gamma}_{0}\mathrm{.}$

The test statistic is constructed from the Frobenius norm of the difference between the sample cross-covariance matrix and the given matrix. In the derivation, no assumption on p, such as $n\ge p$ or $n<p$ , is required. Since the Wishart distribution is not available here, we implement the test with the aid of a parametric bootstrap scheme. We define the test statistic

$T=N{\Vert \frac{1}{N}{S}_{xy}-{\Gamma}_{0}\Vert}_{F}^{2},$ (3)

where

${S}_{xy}={\displaystyle \underset{i=1}{\overset{N}{\sum}}}\left({X}_{i}-\stackrel{\xaf}{X}\right)\cdot {\left({Y}_{i}-\stackrel{\xaf}{Y}\right)}^{\prime},$ (4)

with $\stackrel{\xaf}{X}={N}^{-1}{\displaystyle {\sum}_{i=1}^{N}}\text{\hspace{0.05em}}{X}_{i}$ and $\stackrel{\xaf}{Y}={N}^{-1}{\displaystyle {\sum}_{i=1}^{N}}\text{\hspace{0.05em}}{Y}_{i}$ . In (3),

${\Vert \frac{1}{N}{S}_{xy}-{\Gamma}_{0}\Vert}_{F}^{2}=\text{tr}\left[{\left(\frac{1}{N}{S}_{xy}-{\Gamma}_{0}\right)}^{\text{T}}\left(\frac{1}{N}{S}_{xy}-{\Gamma}_{0}\right)\right]$ , which can detect any deviation

of cross-covariance from the pre-specified matrix ${\Gamma}_{0}$ .
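For illustration, the statistic in (3) can be computed directly from data. The following sketch assumes NumPy arrays X and Y holding the $N$ observations row-wise; the function name is ours, not part of the original derivation.

```python
import numpy as np

def cross_cov_statistic(X, Y, Gamma0):
    """T = N * || S_xy / N - Gamma0 ||_F^2, as in Equation (3)."""
    N = X.shape[0]
    Xc = X - X.mean(axis=0)          # center the X-block
    Yc = Y - Y.mean(axis=0)          # center the Y-block
    S_xy = Xc.T @ Yc                 # sum_i (X_i - Xbar)(Y_i - Ybar)'
    D = S_xy / N - Gamma0            # deviation from the hypothesized matrix
    return N * np.sum(D * D)         # N times the squared Frobenius norm
```

By construction, T equals zero exactly when the sample cross-covariance matrix coincides with the hypothesized matrix, and it grows with any entrywise deviation.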

2.2. Bootstrap Approximation of the Null Distribution

Let ${Z}_{1}\mathrm{,}{Z}_{2}\mathrm{,}\cdots \mathrm{,}{Z}_{N}$ be an independent sample drawn from ${\mathcal{N}}_{p}\left(\mu \mathrm{,}\Sigma \right)$ , and let ${\stackrel{^}{\Sigma}}_{x}$ and ${\stackrel{^}{\Sigma}}_{y}$ be the estimators of the parameters ${\Sigma}_{x}$ and ${\Sigma}_{y}$ . Suppose that the pseudo data set ${Z}_{1}^{\ast}\mathrm{,}\cdots \mathrm{,}{Z}_{N}^{\ast}$ is resampled from ${\mathcal{N}}_{p}\left(\mu \mathrm{,}{\Sigma}^{\ast}\right)$ , where

${\Sigma}^{\ast}=\left(\begin{array}{cc}{\stackrel{^}{\Sigma}}_{x}& \Gamma \\ {\Gamma}^{\prime}& {\stackrel{^}{\Sigma}}_{y}\end{array}\right)\mathrm{,}$ (5)

and ${Z}_{i}^{\ast}$ is expressed by $\left(\begin{array}{c}{X}_{i}^{\ast}\\ {Y}_{i}^{\ast}\end{array}\right),i=1,\cdots ,N$ . The bootstrap statistic is defined as

${T}^{\ast}=N{\Vert \frac{1}{N}{S}_{xy}^{\ast}-{\Gamma}_{0}\Vert}_{F}^{2}\mathrm{,}$ (6)

where

${S}_{xy}^{\ast}={\displaystyle \underset{i=1}{\overset{N}{\sum}}}\left({X}_{i}^{\ast}-{\stackrel{\xaf}{X}}^{\ast}\right)\cdot {\left({Y}_{i}^{\ast}-{\stackrel{\xaf}{Y}}^{\ast}\right)}^{\prime},$ (7)

with ${\stackrel{\xaf}{X}}^{\ast}={N}^{-1}{\displaystyle {\sum}_{i=1}^{N}}\text{\hspace{0.05em}}{X}_{i}^{\ast}$ and ${\stackrel{\xaf}{Y}}^{\ast}={N}^{-1}{\displaystyle {\sum}_{i=1}^{N}}\text{\hspace{0.05em}}{Y}_{i}^{\ast}$ .

To study the bootstrap approximation of the null distribution, we need the following conditions:

Condition (A1) Let $\stackrel{\u02dc}{F}$ be the cumulative distribution function (cdf) of the bootstrap statistic ${T}^{\ast}$ , ${F}_{M}$ be the empirical distribution function of the bootstrap sample ${T}_{1}^{\ast}\mathrm{,}\cdots \mathrm{,}{T}_{M}^{\ast}$ . Let

${W}_{\stackrel{\u02dc}{F}M}\left(t\right)=\sqrt{M}\left\{{F}_{M}\left(t\right)-\stackrel{\u02dc}{F}\left(t\right)\right\}\mathrm{.}$ (8)

We assume that ${W}_{\stackrel{\u02dc}{F}M}\left(t\right)$ in (8) vanishes as t goes to infinity.

Condition (A2) As M tends to infinity, ${W}_{\stackrel{\u02dc}{F}M}$ converges weakly to $B\left(F\right)$ in distribution, provided that $\underset{-\infty <x<+\infty}{\mathrm{sup}}\left|\stackrel{\u02dc}{F}\left(x\right)-F\left(x\right)\right|\to 0$ a.s., where F denotes the cdf of the statistic T and B is the Brownian bridge on [0,1].

Theorem 1. Suppose F is nondegenerate, and let $0<\alpha <1$ be the nominal size of the test. Under conditions (A1) and (A2), if $c\left(\stackrel{\u02dc}{F}\right)$ satisfies

$P\left\{{M}^{1/2}{\text{sup}}_{x}\left|{F}_{M}\left(x\right)-\stackrel{\u02dc}{F}\left(x\right)\right|\le c\left(\stackrel{\u02dc}{F}\right)\right\}\to 1-\alpha \mathrm{,}$ (9)

then

$P\left\{{M}^{1/2}{\text{sup}}_{x}\left|\stackrel{\u02dc}{F}\left(x\right)-F\left(x\right)\right|\le c\left(\stackrel{\u02dc}{F}\right)\right\}\to 1-\alpha \mathrm{.}$ (10)

The result (9) is almost immediate from Corollary 4.2 of Bickel et al. [17] . By Lemma 8.11 of Bickel et al. [17] , ${\text{sup}}_{x}\left|B\left(F\left(x\right)\right)\right|$ has a continuous distribution, and $c\left(\stackrel{\u02dc}{F}\right)$ converges to the $\left(1-\alpha \right)$ -quantile of the law of ${\text{sup}}_{x}\left|B\left(F\left(x\right)\right)\right|$ .

2.3. Algorithm for Calculating Test p-Value

To carry out the parametric bootstrap procedure for the proposed test, the bootstrap p-value is approximated by the following steps.

Step 1. Calculate the observed value ${T}_{obs}$ of the statistic T.

Step 2. Estimate the covariance matrices ${\Sigma}_{x}$ and ${\Sigma}_{y}$ by the sample covariance matrices, say ${\stackrel{^}{\Sigma}}_{x}$ and ${\stackrel{^}{\Sigma}}_{y}$ .

Step 3. Resample from ${\mathcal{N}}_{p}\left(\mu \mathrm{,}{\Sigma}^{\ast}\right)$ and calculate the value of bootstrap statistic ${T}^{\ast}$ , where ${\Sigma}^{\ast}$ is defined in (5).

Step 4. Repeat Step 3 M times, and compute the p-value as $\frac{1}{M}{\displaystyle \underset{i=1}{\overset{M}{\sum}}{\mathbb{I}}_{\left\{{T}_{i}^{\ast}>{T}_{obs}\right\}}}$ , where ${\mathbb{I}}_{A}$ denotes the indicator of the set A, which equals 1 when A occurs and 0 otherwise.
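The four steps above can be sketched as follows. This is a minimal illustration, under the assumptions that the cross-covariance block of (5) is set to ${\Gamma}_{0}$ (its value under the null), that the sample covariance matrices use divisor N, and that the mean is estimated by the sample mean; the function names are ours.

```python
import numpy as np

def cross_cov_test_pvalue(X, Y, Gamma0, M=250, seed=0):
    """Parametric-bootstrap p-value for H0: Gamma = Gamma0 (Steps 1-4)."""
    rng = np.random.default_rng(seed)
    N, p1 = X.shape

    def T_stat(X, Y):                                 # statistic (3)
        Xc, Yc = X - X.mean(0), Y - Y.mean(0)
        D = (Xc.T @ Yc) / N - Gamma0
        return N * np.sum(D * D)

    T_obs = T_stat(X, Y)                              # Step 1
    Sig_x = np.cov(X, rowvar=False, bias=True)        # Step 2: estimate Sigma_x
    Sig_y = np.cov(Y, rowvar=False, bias=True)        # Step 2: estimate Sigma_y
    Sigma_star = np.block([[Sig_x, Gamma0],           # matrix (5), with the
                           [Gamma0.T, Sig_y]])        # cross block at Gamma0
    mu = np.concatenate([X.mean(0), Y.mean(0)])
    T_star = np.empty(M)
    for m in range(M):                                # Steps 3-4: resample M times
        Z = rng.multivariate_normal(mu, Sigma_star, size=N)
        T_star[m] = T_stat(Z[:, :p1], Z[:, p1:])
    return np.mean(T_star > T_obs)                    # bootstrap p-value
```

The null hypothesis is rejected when the returned p-value falls below the chosen significance level.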

3. A Simulation Study

3.1. Comparison Study

We briefly describe the tests compared in the current paper: the modified LR test [18] and the test of Nagao [8] . These two tests are used to test

${H}_{0}:\Sigma ={\Sigma}_{0}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{versus}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{H}_{1}:\Sigma \ne {\Sigma}_{0}.$

Let ${Z}_{1}\mathrm{,}\cdots \mathrm{,}{Z}_{N}$ be an independent sample from ${\mathcal{N}}_{p}\left(\mu \mathrm{,}\Sigma \right)$ ; the modified LR test statistic is based on

${\lambda}_{1}^{\ast}={\text{e}}^{\frac{1}{2}pn}{\left(\text{det}\left(S{\Sigma}_{0}^{-1}\right){\text{e}}^{-\text{tr}\left(S{\Sigma}_{0}^{-1}\right)}\right)}^{\frac{1}{2}n}\mathrm{,}$ (11)

where $S=\left(1/n\right)A$ , $n=N-1$ , $A={\displaystyle {\sum}_{i=1}^{N}}\left({Z}_{i}-\stackrel{\xaf}{Z}\right){\left({Z}_{i}-\stackrel{\xaf}{Z}\right)}^{\prime}$ , and $\stackrel{\xaf}{Z}={N}^{-1}{\displaystyle {\sum}_{i=1}^{N}}\text{\hspace{0.05em}}{Z}_{i}$ . As is known, $A~{W}_{p}\left(N-\mathrm{1,}\Sigma \right)$ , where ${W}_{p}\left(N-\mathrm{1,}\Sigma \right)$ stands for the p-dimensional Wishart distribution with $N-1$ degrees of freedom and covariance matrix $\Sigma $ . Anderson [19] derived the limiting distribution of the modified LR test statistic $-2\mathrm{log}{\lambda}_{1}^{\ast}$ with the help of the Wishart distribution: when $\Sigma ={I}_{p}$ , $-2\mathrm{log}{\lambda}_{1}^{\ast}$ is asymptotically distributed as ${\chi}^{2}\left(\frac{1}{2}p\left(p+1\right)\right)$ , where $\mathrm{log}a$ denotes the natural logarithm. Nagao [8] then proposed the test statistic ${T}_{1}$ (defined in (1)), which can be regarded as a measure of departure from the null hypothesis.
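For reference, both competitor statistics can be evaluated in a few lines. The sketch below assumes ${\Sigma}_{0}$ is invertible and uses the algebraic simplification $-2\mathrm{log}{\lambda}_{1}^{\ast}=n\left(\text{tr}\left(S{\Sigma}_{0}^{-1}\right)-\mathrm{log}\text{det}\left(S{\Sigma}_{0}^{-1}\right)-p\right)$ of (11); the function name is ours.

```python
import numpy as np

def lr_and_nagao(Z, Sigma0):
    """Return (-2*log(lambda1*), T1) for H0: Sigma = Sigma0."""
    N, p = Z.shape
    n = N - 1
    Zc = Z - Z.mean(axis=0)
    S = (Zc.T @ Zc) / n                          # sample covariance S = A/n
    M = S @ np.linalg.inv(Sigma0)                # S * Sigma0^{-1}
    sign, logdet = np.linalg.slogdet(M)          # stable log-determinant
    neg2loglr = n * (np.trace(M) - logdet - p)   # -2 log lambda1* from (11)
    D = M - np.eye(p)
    T1 = (n / 2.0) * np.trace(D @ D)             # Nagao's statistic (1)
    return neg2loglr, T1
```

Both statistics are nonnegative: each eigenvalue $\lambda $ of $S{\Sigma}_{0}^{-1}$ contributes $\lambda -\mathrm{log}\lambda -1\ge 0$ to the LR statistic, and T1 is half of $n\,\text{tr}\left({D}^{2}\right)$ for a symmetric deviation matrix D.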

In what follows, we propose the statistic T (3) to test

${H}_{0}:\Gamma ={O}_{\frac{p}{2}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{versus}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{H}_{1}:\Gamma \ne {O}_{\frac{p}{2}}.$ (12)

So far, no test methods are available for this problem. Thus, we choose statistics $-2\mathrm{log}{\lambda}_{1}^{\ast}$ and ${T}_{1}$ to test

${H}_{0}:\Sigma ={I}_{p}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{versus}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{H}_{1}:\Sigma \ne {I}_{p},$ (13)

for the comparative study, where the cross-covariance matrix in $\Sigma $ is equal to ${O}_{\frac{p}{2}}$ . Testing the structure of the covariance matrix can also detect the deviation of cross-covariance from the pre-specified matrix. For a pre-specified level of significance $\alpha $ $\left(0<\alpha <1\right)$ , the null hypothesis in (13) is rejected if

$-2\mathrm{log}{\lambda}_{1}^{\ast}>{\chi}_{1-\alpha}^{2}\left(\frac{1}{2}p\left(p+1\right)\right),$ (14)

or

${T}_{1}>{T}_{{1}_{1-\alpha}}\left(n\right),$ (15)

where ${T}_{{1}_{1-\alpha}}\left(n\right)$ denotes the $1-\alpha $ quantile of the empirical distribution of statistic ${T}_{1}$ .

We employ simulated data to evaluate the performance of the proposed statistic T and the statistics $-2\mathrm{log}{\lambda}_{1}^{\ast}$ and ${T}_{1}$ when applied to test the hypotheses (12) and (13) at a significance level of $\alpha =0.05$ . Empirical sizes and powers of the proposed test are computed based on $M=250$ bootstrap resamples and 500 repetitions, and those of tests (14) and (15) are based on 500 Monte Carlo replications. In the simulation study we choose mean vector $\mu ={O}_{p}$ , dimension $p=4$ , and sample size $n\in \left\{\mathrm{50,100,200,400,800}\right\}$ . The results are shown in Table 1.

Results in Table 1 show that the empirical type I error rates of the three statistics are all very close to the pre-specified nominal value. Also, the proposed test has higher empirical power than its counterparts (14) and (15). With increasing sample size, the change in the empirical power of test (15) is barely noticeable, while the performance of the proposed test improves significantly; this means the statistic ${T}_{1}$ is not sensitive to changes in the cross elements of the covariance matrix, whereas the statistic T can detect any deviation of the cross-covariance from the pre-specified matrix ${\Gamma}_{0}$ . Thus, when the variances in the covariance matrix are not of interest, we recommend applying the statistic T to test the equality hypothesis on a cross-covariance matrix.

3.2. Bootstrap Asymptotic Study

In this section, we employ simulated data to investigate whether the performance of the proposed test is sensitive to the diagonal block matrices. For this purpose, we consider two choices of $\Sigma $ : one is

${\Sigma}_{1}=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\end{array}\right),$

and the other is

${\Sigma}_{2}=\left(\begin{array}{cccc}1& 0.5& 0& 0\\ 0.5& 1& 0& 0\\ 0& 0& 1& 0.5\\ 0& 0& 0.5& 1\end{array}\right).$

We use the statistic T to test

${H}_{0}:\Gamma ={O}_{2}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{versus}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{H}_{1}:\Gamma \ne {O}_{2}.$ (16)

For the two covariance matrices mentioned above, we run a simulation with $M=250$ bootstrap resamples and 500 repetitions to obtain the empirical sizes and powers of the proposed test at significance levels $\alpha =0.01,0.05,0.1$ , where we take sample size $n\in \left\{\mathrm{64,128,256,512,1024}\right\}$ and choose mean vector $\mu ={O}_{p}$ . The results are shown in Table 2 and Table 3.

For each sample size $n=128,256,512,1024$ and each nominal size $\alpha =0.01,0.05,0.1$ , Table 2 shows the empirical rejection probabilities of the proposed test. We present the simulation results for ${\Sigma}_{1}$ and ${\Sigma}_{2}$ in the first panel

Table 1. Rejection probabilities of the proposed test, tests (14) and (15) from simulated data.

Table 2. Probability of committing the type I error of the proposed test in testing (16) for two different $\Sigma $ .

Table 3. Empirical rejection probabilities of the proposed test in testing (16) for two different $\Sigma $ .

and the second panel, respectively. We see that the empirical type I error rates in both cases are close to their nominal sizes. For the nominal size $\alpha =0.05$ , Table 3 shows how the empirical rejection probability of the proposed test changes across the five sample sizes $n=64,128,256,512,1024$ . Although the diagonal block matrices differ, the empirical powers in both cases improve significantly with increasing sample size n. Moreover, when $n=512$ , the empirical powers in both cases reach their maximum. From the simulation experiments in Table 2 and Table 3, we find that the proposed test is not sensitive to the diagonal block matrices; this is because the proposed statistic T depends only on the sample cross-covariance matrix. Thus we conclude that the proposed test still achieves good performance when the variance components of the covariance matrix change.

4. An Empirical Application: Testing for Pairwise Time-Reversibility

4.1. Time Reversible Time Series and Prior Specification

A formal statistical definition of pairwise time-reversibility is as follows.

Definition 1. A time series ${\left\{{x}_{t}\right\}}_{t\in \mathbb{Z}}$ is pairwise time reversible if for all positive integers k, the random vectors ${\left({x}_{t}\mathrm{,}{x}_{t+k}\right)}^{\text{T}}$ and ${\left({x}_{t}\mathrm{,}{x}_{t-k}\right)}^{\text{T}}$ have the same joint probability distributions.

Under this definition, one can show that pairwise time-reversibility implies stationarity; likewise, nonstationarity implies time-irreversibility [14] . Clearly, $\left\{{x}_{t}\right\}$ is time-reversible when $\left\{{x}_{t}\right\}$ is i.i.d. Thus, for the study of testing time-reversibility, the pairwise case is generally considered. For instance, Ramsey et al. [12] proposed a pairwise TR test statistic consisting of a sample estimate of the symmetric-bicovariance function, given by the difference between two bicovariances of $\left({x}_{t}\mathrm{,}{x}_{t-k}\right)$ . Later, Chen et al. [11] considered pairwise time-reversibility and proposed a new test based on the symmetry of the distribution of $\left({x}_{t}-{x}_{t-k}\right)$ rather than on moments. More recently, Dette et al. [16] briefly analyzed the pairwise time-reversibility of four different time series models with the aid of quantile-based spectral analysis.

In this section, we primarily focus on testing for pairwise time-reversibility, with a test method based on the copula spectral density kernel (CSDK) proposed by Dette et al. [16] . Let ${\left\{{x}_{t}\right\}}_{t\in \mathbb{Z}}$ be a strictly stationary univariate process; the CSDK is defined as

${\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)=\frac{1}{2\text{\pi}}{\displaystyle \underset{k=-\infty}{\overset{\infty}{\sum}}}{\gamma}_{k}^{U}\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right){\text{e}}^{-\text{i}k\omega}\mathrm{,}\text{\hspace{1em}}\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)\in {\left(\mathrm{0,1}\right)}^{2}\mathrm{,}$ (17)

where ${q}_{{\tau}_{i}}$ is the ${\tau}_{i}$ -quantile of the marginal distribution of the process ${\left\{{x}_{t}\right\}}_{t\in \mathbb{Z}}$ , i.e. ${q}_{{\tau}_{i}}={F}^{-1}\left({\tau}_{i}\right),\mathrm{}i=1,2$ . The copula cross-covariance kernel ${\gamma}_{k}^{U}\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)$ of lag $k\in \mathbb{Z}$ , also introduced by Dette et al. [16] , is defined as

${\gamma}_{k}^{U}\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)\mathrm{:}=\text{Cov}\left({\mathbb{I}}_{\left(\mathrm{0,}{\tau}_{1}\right]}\left({U}_{t}\right)\mathrm{,}{\mathbb{I}}_{\left(\mathrm{0,}{\tau}_{2}\right]}\left({U}_{t-k}\right)\right)\mathrm{,}$ (18)

where ${U}_{t}:=F\left({x}_{t}\right)$ and F denotes the one-dimensional marginal cumulative distribution function of the process ${\left\{{x}_{t}\right\}}_{t\in \mathbb{Z}}$ . Compared with traditional covariances, the copula cross-covariance kernel is well suited to describing a serial copula.
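An empirical analogue of (18) replaces the unknown marginal F by normalized ranks. The following sketch is ours (both the function name and the rank-based plug-in), and it assumes a sample without ties:

```python
import numpy as np

def copula_cross_cov(x, k, tau1, tau2):
    """Empirical copula cross-covariance gamma_k^U(tau1, tau2), cf. (18)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1        # ranks 1..n (no ties assumed)
    U = ranks / n                                # U_t := F(x_t), F via ranks
    a = (U[k:] <= tau1).astype(float)            # I_{(0,tau1]}(U_t)
    b = (U[:n - k] <= tau2).astype(float)        # I_{(0,tau2]}(U_{t-k})
    return np.mean(a * b) - a.mean() * b.mean()  # sample covariance of indicators
```

At lag $k=0$ this reduces to $\mathrm{min}\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)-{\tau}_{1}{\tau}_{2}$ , the copula covariance of a continuous marginal with itself, which provides a simple sanity check.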

The collection of CSDKs for different $\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)$ provides a full characterization of the copulas associated with the pairs $\left({x}_{t}\mathrm{,}{x}_{t-k}\right)$ , and accounts for many important dynamic features of $\left\{{x}_{t}\right\}$ , such as changes in the conditional shape (skewness, kurtosis), time-irreversibility, or dependence in the extremes that the traditional second-order spectra cannot capture [20] .

4.2. Test for Time-Reversibility

In the sequel, we will concentrate on testing for pairwise time-reversibility. Obviously, we have $\left({x}_{t},{x}_{t+k}\right)\stackrel{d}{=}\left({x}_{t},{x}_{t-k}\right)$ for all $k\in \mathbb{Z}$ if and only if $\Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)=0$ for all $\omega $ and all $\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)\in {\left[\mathrm{0,1}\right]}^{2}$ . For the purpose of pairwise time-reversibility test, we consider the problem of testing

${H}_{0}\mathrm{:}\Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{versus}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{H}_{1}\mathrm{:}\Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)\ne \mathrm{0,}$ (19)

for all $\omega \in \left(\mathrm{0,}\text{\pi}\right)$ . The method introduced by Dette et al. [16] for estimating ${\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)$ is first to calculate the rank-based Laplace periodogram (RLP) and then to smooth it to obtain a consistent estimator. Let $\left\{{x}_{1}\mathrm{,}\cdots \mathrm{,}{x}_{n}\right\}$ be observations from a strictly stationary univariate process $\left\{{x}_{t}\right\}$ . Following Dette et al. [16] , we define $\stackrel{^}{b}$ by

$\left({\stackrel{^}{a}}_{n\mathrm{,}R}^{\tau}\left({\omega}_{j\mathrm{,}n}\right)\mathrm{,}{\stackrel{^}{b}}_{n\mathrm{,}R}^{\tau}\left({\omega}_{j\mathrm{,}n}\right)\right)\mathrm{:}=\underset{a\mathrm{,}{b}^{\text{T}}\in {\mathbb{R}}^{3}}{\text{Argmin}}{\displaystyle \underset{t=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\rho}_{\tau}\left({n}^{-1}{R}_{t}^{\left(n\right)}-\left(a\mathrm{,}{b}^{\text{T}}\right){c}_{t}\left({\omega}_{j\mathrm{,}n}\right)\right)\mathrm{,}$ (20)

where

${\rho}_{\tau}\left(x\right)\mathrm{:}=x\left(\tau -{\mathbb{I}}_{\left(-\infty \mathrm{,0}\right]}\left(x\right)\right)\mathrm{,}$ (21)

is the so-called check function [21] , ${c}_{t}\left({\omega}_{j\mathrm{,}n}\right)\mathrm{:}={\left(\mathrm{1,}\mathrm{cos}\left(t{\omega}_{j\mathrm{,}n}\right)\mathrm{,}\mathrm{sin}\left(t{\omega}_{j\mathrm{,}n}\right)\right)}^{\text{T}}$ , ${\omega}_{j\mathrm{,}n}\in {\mathcal{F}}_{n}\mathrm{:}=\left\{2\text{\pi}j/n\mathrm{|}j=\mathrm{1,}\cdots \mathrm{,}\lfloor \frac{n-1}{2}\rfloor -\mathrm{1,}\lfloor \frac{n-1}{2}\rfloor \right\}$ . We extend the definition of ${\stackrel{^}{b}}_{n\mathrm{,}R}^{\tau}\left({\omega}_{j\mathrm{,}n}\right)$ to a piecewise constant function on ${\Omega}_{n}:=\left({\omega}_{j,n}-\frac{\text{2\pi}}{n},{\omega}_{j,n}+\frac{\text{2\pi}}{n}\right]$ as follows:

${\stackrel{^}{b}}_{n\mathrm{,}R}^{\tau}\left(\omega \right)={\stackrel{^}{b}}_{n\mathrm{,}R}^{\tau}\left({\omega}_{j\mathrm{,}n}\right)\mathrm{,}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}{\omega}_{j\mathrm{,}n}-\frac{\text{2\pi}}{n}<\omega <{\omega}_{j,n}+\frac{\text{2\pi}}{n}\mathrm{.}$ (22)

Let $\tau ={\left\{{\tau}_{1}\mathrm{,}{\tau}_{2}\right\}}^{\text{T}}$ . We denote ${\stackrel{^}{b}}_{n\mathrm{,}R}^{\tau}\left(\omega \right)={\left({\stackrel{^}{b}}_{n\mathrm{,}R}^{{\tau}_{1}}{\left(\omega \right)}^{\text{T}}\mathrm{,}{\stackrel{^}{b}}_{n\mathrm{,}R}^{{\tau}_{2}}{\left(\omega \right)}^{\text{T}}\right)}^{\text{T}}$ , where $\sqrt{n}{\stackrel{^}{b}}_{n\mathrm{,}R}^{{\tau}_{1}}\left(\omega \right)={\left({Z}_{11}\left(\omega \right)\mathrm{,}{Z}_{12}\left(\omega \right)\right)}^{\text{T}}$ and $\sqrt{n}{\stackrel{^}{b}}_{n\mathrm{,}R}^{{\tau}_{2}}\left(\omega \right)={\left({Z}_{21}\left(\omega \right)\mathrm{,}{Z}_{22}\left(\omega \right)\right)}^{\text{T}}$ . Then, under standard mixing conditions (Theorem 3.3 of Dette et al. [16] ), for $\omega \in \left(\mathrm{0,}\text{\pi}\right)$ , ${\left({Z}_{11}\left(\omega \right)\mathrm{,}{Z}_{12}\left(\omega \right)\mathrm{,}{Z}_{21}\left(\omega \right)\mathrm{,}{Z}_{22}\left(\omega \right)\right)}^{\text{T}}$ converges to a zero-mean real Gaussian distribution with covariance matrix

${\Sigma}_{4}\left(\omega \right)\mathrm{:}=4\text{\pi}\left(\begin{array}{cccc}{\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{1}}}\left(\omega \right)& 0& \Re {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)& \Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)\\ 0& {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{1}}}\left(\omega \right)& -\Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)& \Re {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)\\ \Re {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)& -\Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)& {\mathfrak{f}}_{{q}_{{\tau}_{2}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)& 0\\ \Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)& \Re {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)& 0& {\mathfrak{f}}_{{q}_{{\tau}_{2}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)\end{array}\right)\mathrm{.}$ (23)

Not all the entries in ${\Sigma}_{4}\left(\omega \right)$ are of interest when testing hypotheses (19), so we consider the block covariance matrix $\text{Cov}\left({Z}_{11}\left(\omega \right)\mathrm{,}{Z}_{22}\left(\omega \right)\right)$ ,

${\Sigma}_{2}\left(\omega \right)\mathrm{:}=4\text{\pi}\left(\begin{array}{cc}{\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{1}}}\left(\omega \right)& \Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)\\ \Im {\mathfrak{f}}_{{q}_{{\tau}_{1}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)& {\mathfrak{f}}_{{q}_{{\tau}_{2}}\mathrm{,}{q}_{{\tau}_{2}}}\left(\omega \right)\end{array}\right)\mathrm{.}$ (24)

Thus, for the test of hypotheses (19), we can transform the problem into one of testing whether the cross-covariance entry in (24) is a one-dimensional zero matrix. Here, we define random variables X and Y as

$X=\text{sgn}\left({\underset{\u02dc}{\stackrel{^}{\mathfrak{f}}}}_{n\mathrm{,}{\tau}_{1}\mathrm{,}{\tau}_{1}}\left(\omega \right)\right)\frac{{Z}_{11}\left(\omega \right)}{\sqrt{4\text{\pi}\left|{\underset{\u02dc}{\stackrel{^}{\mathfrak{f}}}}_{n\mathrm{,}{\tau}_{1}\mathrm{,}{\tau}_{1}}\left(\omega \right)\right|}}\mathrm{,}Y=\text{sgn}\left({\underset{\u02dc}{\stackrel{^}{\mathfrak{f}}}}_{n\mathrm{,}{\tau}_{2}\mathrm{,}{\tau}_{2}}\left(\omega \right)\right)\frac{{Z}_{22}\left(\omega \right)}{\sqrt{4\text{\pi}\left|{\underset{\u02dc}{\stackrel{^}{\mathfrak{f}}}}_{n\mathrm{,}{\tau}_{2}\mathrm{,}{\tau}_{2}}\left(\omega \right)\right|}}\mathrm{,}$

where the smoothed rank-based Laplace periodogram ${\stackrel{^}{\mathfrak{f}}}_{n,{\tau}_{1},{\tau}_{2}}\left(\omega \right)$ is a consistent estimator of the CSDK, and

$\text{sgn}\left({\stackrel{^}{\mathfrak{f}}}_{n,{\tau}_{i},{\tau}_{i}}\left(\omega \right)\right)=\left\{\begin{array}{ll}1,& \text{if}\text{\hspace{0.17em}}{\stackrel{^}{\mathfrak{f}}}_{n,{\tau}_{i},{\tau}_{i}}\left(\omega \right)>0,\\ -1,& \text{if}\text{\hspace{0.17em}}{\stackrel{^}{\mathfrak{f}}}_{n,{\tau}_{i},{\tau}_{i}}\left(\omega \right)<0,\end{array}\right.\text{\hspace{1em}}i=1,2.$ (25)

Let ${Z}_{1},\cdots ,{Z}_{N}$ be an independent sample, where ${Z}_{i}=\left(\begin{array}{c}{X}_{i}\\ {Y}_{i}\end{array}\right),i=1,\cdots ,N$. The test p-value is then approximated by the bootstrap scheme described earlier.
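For hypotheses (19), the procedure therefore reduces to testing whether the $1\times 1$ cross-covariance of the paired sample $\left({X}_{i},{Y}_{i}\right)$ is zero. The following minimal Python sketch illustrates the idea behind statistic (3), the Frobenius norm of the difference between the sample cross-covariance and the hypothesised matrix (here zero), combined with a bootstrap p-value; the function names and the simple pairs bootstrap are our own illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def cross_cov(x, y):
    """Sample cross-covariance of two paired univariate samples."""
    return np.mean((x - x.mean()) * (y - y.mean()))

def bootstrap_pvalue(x, y, b=500, seed=0):
    """Bootstrap p-value for H0: Cov(X, Y) = 0.

    The statistic is the Frobenius norm of the difference between the
    sample cross-covariance (a scalar here, so simply the absolute value)
    and the hypothesised value 0; the bootstrap statistics are centred
    at the observed cross-covariance, in the spirit of statistic (3).
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    c_hat = cross_cov(x, y)
    t_obs = abs(c_hat - 0.0)               # observed distance from H0
    t_boot = np.empty(b)
    for j in range(b):
        idx = rng.integers(0, n, n)        # resample pairs with replacement
        t_boot[j] = abs(cross_cov(x[idx], y[idx]) - c_hat)
    return np.mean(t_boot >= t_obs)        # bootstrap p-value
```

A small p-value indicates that the cross-covariance deviates from zero, i.e. the null hypothesis in (19) is rejected.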

4.3. An Illustrative Example

Because the exact values of the CSDKs are generally unavailable, it is difficult to evaluate our methodology with general models by simulation [20] . We therefore consider two AR(1) models (Models 1 and 2) of the form

${x}_{t}=-0.3{x}_{t-1}+{\epsilon}_{t},$ (26)

since their CSDKs can be computed numerically. In Model 1, ${\epsilon}_{t},t=1,2,\cdots $ are independent $\mathcal{N}\left(\mathrm{0,1}\right)$ -distributed random variables, whereas in Model 2 they are independent and Student t-distributed with 1 degree of freedom. For Model 1, the imaginary component of the CSDK vanishes, reflecting that the process is time-reversible; for Model 2, there exists a time-irreversible impact of extreme values on the central ones [16] .
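The two models can be simulated as in the following sketch; the function name, the burn-in length, and the seeds are our own choices added for illustration:

```python
import numpy as np

def simulate_ar1(n, phi=-0.3, innovations="gaussian", burn=100, seed=0):
    """Simulate x_t = phi * x_{t-1} + eps_t, as in Eq. (26).

    Model 1: standard normal innovations.
    Model 2: Student-t innovations with 1 degree of freedom (Cauchy).
    A burn-in segment is discarded so the series is close to stationarity.
    """
    rng = np.random.default_rng(seed)
    if innovations == "gaussian":
        eps = rng.standard_normal(n + burn)
    else:                                   # Model 2: t with 1 df
        eps = rng.standard_t(df=1, size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]                         # drop the burn-in
```

For the Gaussian case the lag-one autocorrelation of the simulated series should be close to $\phi =-0.3$; for the ${t}_{1}$ case the innovations have no finite moments, which is what produces the discrepancy between tail and central dependence discussed below.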

Recently, Dette et al. [16] and Kley [22] proposed to estimate the CSDKs by smoothing the RLP, which can be defined via quantile regression (QR) or via the clipped time series (CT). The smoothed RLP can be computed with the R package quantspec [22] , which makes the smoothed (QR- or CT-based) RLP a good reference for calculating the test p-value. We use the smoothed QR-based RLP for the pairwise TR test. For each generated pseudo-random time series, we computed the smoothed QR-based RLP using the Epanechnikov kernel with bandwidth $bw=0.07$ . For each generated dataset and each pair $\left({\tau}_{1},{\tau}_{2}\right)$ , the test p-value is computed by the previous bootstrap procedure.
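To make the construction concrete, the following rough Python sketch implements the CT-based variant of the rank Laplace cross-periodogram and its Epanechnikov smoothing. It is only a simplified illustration under our own assumptions (helper names, rank convention, and circular-convolution smoothing are ours); the actual analysis in the paper uses the quantspec package, which should be preferred in practice:

```python
import numpy as np

def ct_rank_periodogram(x, tau1, tau2):
    """CT-based rank Laplace cross-periodogram at the Fourier frequencies.

    The series is replaced by the clipped indicator series
    1{rank(x_t)/n <= tau}, whose DFTs yield the cross-periodogram.
    """
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1      # ranks 1..n
    i1 = (ranks / n <= tau1).astype(float)     # clipped series at tau1
    i2 = (ranks / n <= tau2).astype(float)     # clipped series at tau2
    d1, d2 = np.fft.fft(i1), np.fft.fft(i2)
    return d1 * np.conj(d2) / (2 * np.pi * n)

def smooth_epanechnikov(per, bw=0.07):
    """Smooth a periodogram across Fourier frequencies with an
    Epanechnikov kernel of (relative) bandwidth bw."""
    n = len(per)
    h = max(int(np.floor(bw * n)), 1)          # half-width in frequency bins
    u = np.arange(-h, h + 1) / h
    w = 0.75 * (1 - u ** 2)                    # Epanechnikov weights
    w /= w.sum()
    # circular convolution respects the periodicity of the periodogram
    return np.convolve(np.tile(per, 3), w, mode="same")[n:2 * n]
```

For ${\tau}_{1}={\tau}_{2}$ the cross-periodogram is real and non-negative, while for ${\tau}_{1}\ne {\tau}_{2}$ its imaginary part carries the time-irreversibility information used in the test.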

For each of the two models, we generated 512 and 1024 datasets, each containing pseudo-random time series of lengths $n=1024$ and $n=2048$ , respectively. We set $\tau ={\left(0.1,0.5,0.9\right)}^{\text{T}}$ . Each boxplot is based on 50 realizations of the realized p-value on the log scale. For each pair $\left({\tau}_{1},{\tau}_{2}\right)=\left(0.1,0.9\right),\left(0.1,0.5\right),\left(0.5,0.9\right)$ , the boxplots for the two models are presented in the left, middle, and right panels, respectively. Each boxplot marks the median, the extreme points, and the box formed by the lower and upper quartiles; see Figure 1 and Figure 2.

Next we discuss the simulation results for the AR(1) process. Figure 1 shows that, at the quantile pair $\left(\mathrm{0.1,0.9}\right)$ , the lower quartiles of the boxplots for both models exceed $\mathrm{log}\left(\alpha \right)$ , so the null hypothesis in (19) cannot be rejected; i.e. $P\left({x}_{t}\le {q}_{0.1},{x}_{t+k}\le {q}_{0.9}\right)$ is approximately equal to

Figure 1. Boxplots of the estimated $\mathrm{log}\left(p\right)$ -value for different $\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)$ , and $n=1024$ .

Figure 2. Boxplots of the estimated $\mathrm{log}\left(p\right)$ -value for different $\left({\tau}_{1}\mathrm{,}{\tau}_{2}\right)$ , and $n=2048$ .

$P\left({x}_{t}\le {q}_{0.9},{x}_{t+k}\le {q}_{0.1}\right)$ , which suggests that the AR process with either Gaussian or ${t}_{1}$ -distributed innovations is time-reversible at this pair. When $\left({\tau}_{1},{\tau}_{2}\right)=\left(\mathrm{0.1,0.5}\right)$ or $\left(\mathrm{0.5,0.9}\right)$ , the observations again reflect that the AR process with Gaussian innovations is time-reversible. For ${t}_{1}$ -distributed innovations, however, this phenomenon occurs only at the extreme quantile pair ( ${\tau}_{1}=0.1,{\tau}_{2}=0.9$ ) and does not hold for ${\tau}_{1}=0.5$ with ${\tau}_{2}=0.1$ or ${\tau}_{2}=0.9$ ; one important reason is the marked discrepancy between the tail and central dependence structures when the innovations $\left\{{\epsilon}_{t}\right\}$ of the AR(1) process are non-Gaussian. Figure 2 likewise shows that the AR process with Gaussian innovations is time-reversible, while with ${t}_{1}$ -distributed innovations it appears time-reversible only at the extreme quantiles. In the remaining cases the results indicate that the imaginary part of the CSDK is nonzero, suggesting time-irreversibility.

5. Conclusions

In this paper, we proposed a new statistic (3) for testing the specification of the cross-covariance matrix. The test statistic is based on the Frobenius norm of the difference between the sample cross-covariance matrix and the given matrix. The asymptotic properties of the test statistic were obtained with the help of a bootstrap scheme, and computing the empirical size and power of the proposed test confirmed its validity. The advantage of the proposed statistic is twofold. First, a comparative study showed that its empirical power is clearly superior to that of competing tests in detecting any deviation of the cross-covariance from the pre-assigned matrix. Second, no complex derivation of the distribution of the statistic T is needed; a modest number of simulations suffices to assess the performance of the test.

One remaining challenge is to determine whether the test performs well when the data are high-dimensional; this will be the subject of future work.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (grant number: 11671416).

References

[1] Liu, J., Han, J.W., Zhang, Z.J. and Li, J. (2018) Target Detection Exploiting Covariance Matrix Structures in MIMO Radar. Signal Processing, 154, 174-181. https://doi.org/10.1016/j.sigpro.2018.07.013

[2] Yu, H. and Yang, J. (2001) A Direct LDA Algorithm for High-Dimensional Data with Application to Face Recognition. Pattern Recognition, 34, 2067-2070. https://doi.org/10.1016/S0031-3203(00)00162-X

[3] Mauchly, J.W. (1940) Significance Test for Sphericity of a Normal n-Variate Distribution. The Annals of Mathematical Statistics, 11, 204-209. https://doi.org/10.1214/aoms/1177731915

[4] Gupta, A.K. and Xu, J. (2006) On Some Tests of the Covariance Matrix under General Conditions. Annals of the Institute of Statistical Mathematics, 58, 101-114. https://doi.org/10.1007/s10463-005-0010-z

[5] Jiang, D.D., Jiang, T.F. and Yang, F. (2012) Likelihood Ratio Tests for Covariance Matrices of High-Dimensional Normal Distributions. Journal of Statistical Planning and Inference, 142, 2241-2256. https://doi.org/10.1016/j.jspi.2012.02.057

[6] Bai, Z.D., Jiang, D.D., Zheng, S.R., et al. (2009) Corrections to LRT on Large-Dimensional Covariance Matrix by RMT. The Annals of Statistics, 37, 3822-3840. https://doi.org/10.1214/09-AOS694

[7] Niu, Z.Z., Hu, J.H., Bai, Z.D. and Gao, W. (2019) On LR Simultaneous Test of High-Dimensional Mean Vector and Covariance Matrix under Non-Normality. Statistics and Probability Letters, 145, 338-344. https://doi.org/10.1016/j.spl.2018.10.008

[8] Nagao, H. (1973) On Some Test Criteria for Covariance Matrix. The Annals of Statistics, 1, 700-709. https://doi.org/10.1214/aos/1176342464

[9] Cai, T.T. and Jiang, T.F. (2011) Limiting Laws of Coherence of Random Matrices with Applications to Testing Covariance Structure and Construction of Compressed Sensing Matrices. The Annals of Statistics, 39, 1496-1525. https://doi.org/10.1214/11-AOS879

[10] Gupta, A.K. and Bodnar, T. (2014) An Exact Test about the Covariance Matrix. Journal of Multivariate Analysis, 125, 176-189. https://doi.org/10.1016/j.jmva.2013.12.007

[11] Chen, Y.T., Chou, R.Y. and Kuan, C.M. (2000) Testing Time Reversibility without Moment Restrictions. Journal of Econometrics, 95, 199-218. https://doi.org/10.1016/S0304-4076(99)00036-6

[12] Ramsey, J.B. and Rothman, P. (1996) Time Irreversibility and Business Cycle Asymmetry. Journal of Money, Credit and Banking, 28, 1-21. https://doi.org/10.2307/2077963

[13] Rothman, P. (1999) Higher-Order Residual Analysis for Simple Bilinear and Threshold Autoregressive Models with the TR Test. Nonlinear Time Series Analysis of Economic and Financial Data, 1, 357-367. https://doi.org/10.1007/978-1-4615-5129-4_16

[14] Hinich, M.J. and Rothman, P. (1998) Frequency-Domain Test of Time Reversibility. Macroeconomic Dynamics, 2, 72-88.

[15] Cox, D.R. (1981) Statistical Analysis of Time Series: Some Recent Developments. Scandinavian Journal of Statistics, 8, 110-111.

[16] Dette, H., Hallin, M. and Kley, T. (2015) Of Copulas, Quantiles, Ranks and Spectra: An Approach to Spectral Analysis. Bernoulli, 21, 781-831. https://doi.org/10.3150/13-BEJ587

[17] Bickel, P.J. and Freedman, D.A. (1981) Some Asymptotic Theory for the Bootstrap. The Annals of Statistics, 9, 1196-1217. https://doi.org/10.1214/aos/1176345637

[18] Sugiura, N. and Nagao, H. (1968) Unbiasedness of Some Test Criteria for Equality of One or Two Covariance Matrices. The Annals of Mathematical Statistics, 39, 1686-1692. https://doi.org/10.1214/aoms/1177698150

[19] Anderson, T.W. (2003) An Introduction to Multivariate Statistical Analysis. 3rd Edition, John Wiley and Sons, New York.

[20] Zhang, S.B. (2019) Bayesian Copula Spectral Analysis for Stationary Time Series. Computational Statistics and Data Analysis, 133, 166-179. https://doi.org/10.1016/j.csda.2018.10.001

[21] Koenker, R. (2005) Quantile Regression. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511754098

[22] Kley, T. (2016) Quantile-Based Spectral Analysis in an Object-Oriented Framework and a Reference Implementation in R: The Quantspec Package. Journal of Statistical Software, 70, 1-27. https://doi.org/10.18637/jss.v070.i03