Topological Data Analysis Using Mean Persistence Landscapes in Financial Crashes


1. Introduction

Topological data analysis (TDA) extracts topological features by examining the shape of data through persistent homology to produce topological summaries. Two topological summaries, the persistence barcode [1] [2] and the persistence diagram [3], provide visual representations of persistent topological features. However, these topological summaries lack geometric properties and do not have a unique (Fréchet) mean [4], which makes it difficult to conduct statistical analysis and machine learning. In fact, Bubenik [5] states that effective algorithms do not exist for computing means for a wide variety of examples, though he notes that [6] and [7] have made noteworthy advances in this direction.

In Bubenik [5], the persistence landscape is proposed as an alternative topological summary. Its computation time is lower than that of the persistence barcode and persistence diagram, because the persistence landscape is a sequence of piecewise linear functions. The main advantage of persistence landscapes, however, is that they lie in a separable Banach space, which allows the use of probability theory and random variables. After defining the persistence landscape and its norms, Bubenik [5] introduces the mean (average) persistence landscape, which may be used for statistical analysis and which, again, the persistence diagram and persistence barcode do not admit. He also proves many statistical properties of persistence landscapes, such as convergence, stability, the Central Limit Theorem (CLT), and the Strong Law of Large Numbers (SLLN), which are important for conducting statistical inference. In particular, Bubenik [5] conducts a permutation test using multiple persistence landscapes to obtain a p-value (see Section 2.5, Section 2.6, Section 3.4), something that also cannot be done with persistence diagrams and persistence barcodes. This permutation test and the persistence landscape can be found in [8]. Other notable topological data analysis applications include the discovery of a subgroup of breast cancers [9], an understanding of the topology of the space of natural images [10], brain signals [11], and pre-clinical spinal cord injury [12].

With this alternative topological summary and the ability to conduct statistical inference, we bring our focus to critical transitions in complex dynamical systems, in particular the financial market. Scheffer et al. [13] asserted that predicting critical transitions in a complex dynamical system before they occur is unreliable and very challenging, because the state of the complex system may not fluctuate substantially before reaching a critical threshold. However, they found that for vast classes of systems, early warning signals may exist to indicate when a critical transition is imminent. Even though Scheffer et al. [13] presented examples of early warning signals in ecosystems, time series, climate dynamics, and epileptic seizures, they gave no example of an early warning signal for a financial crash. While Scheffer et al. [13] clarified that some predictability may be achieved by experts, financial markets are in general difficult to predict. Although Scheffer et al. [13] referenced excellent works with financial early warning indicators, such as the VIX (a volatility-based index), systematic relationships in the variance and first-order auto-correlation, and correlation increases across returns in falling markets, these indicators were not directly relevant to our study, which led us to research financial market dynamics further.

Ensor and Koev [14] focused on the multivariate GARCH (MGARCH) and the hierarchical regime switching dynamic covariance (HRSDC) models, both of which examined the covariance structure within and between market sectors for the period January 2, 1998 to December 2001. The HRSDC model provided early detection of several anomalous behaviors, such as the decline of Enron, the unusual returns of Silicon Graphics, and the fall of Lehman Brothers. The detection of these anomalous behaviors is based on the price movement of individual securities when viewed as a system of securities, with correlation within and between sectors. Ensor and Koev [14] therefore demonstrated that a nested model is useful for understanding the correlation structure between different market sectors and how these sectors interact as the market changes between regimes.

While Ensor and Koev's [14] study prompted us to use ETF sectors and is effective at identifying anomalous behaviors, such as the decline of Enron and the fall of Lehman Brothers, our interest is in using TDA to detect early warning signals of financial crashes and in examining, with statistical significance, how topological features change over time. A number of recent studies have explored the use of TDA on financial time series data to detect early warning signals of financial crashes. Gidea [2] analyzed the cross-correlation network of the daily returns (adjusted closing prices) of the Dow Jones Industrial Average (DJIA) stocks, as listed on February 19, 2008, from January 2004 to September 2008, tracked the topological changes when approaching a critical transition, and showed some presence of early signs of a critical transition. Gidea et al. [15], on the other hand, analyzed four major cryptocurrencies (Bitcoin, Ethereum, Litecoin, and Ripple) before the beginning of 2018 and showed that these cryptocurrencies exhibited highly erratic behavior. That paper introduced a method combining TDA with machine learning to understand what happens before a critical transition, using Takens’ theorem, the time-delay embedding theorem, and $C^{1}$-norms of persistence landscapes. While the paper’s analysis is valid, our interest is in stocks and ETF sectors rather than cryptocurrencies.

Alternatively, Gidea and Katz [16] investigated the daily log-returns of four stock indices (DJIA, S&P500, NASDAQ, and Russell 2000) from December 23, 1987 to December 08, 2016, examining the topological properties of these stock indices. Their paper uses a sequence of point cloud data sets with a sliding window. Gidea and Katz [16] provided an excellent framework using persistence diagrams, persistence landscapes, and norms of persistence landscapes, and we were able to replicate all of their results for the 2000 and 2008 crashes. While they demonstrated that the variance, as defined in [17], shows rising trends, we are not convinced that the average spectral densities and auto-correlation function (ACF), with their associated Kendall tau tests, demonstrated trends.

While these papers provide insightful groundwork for TDA in financial markets and cryptocurrencies, such as showing how to use cross-correlation networks to track topological changes, combining TDA with machine learning to understand what happens before a critical transition, and using the norms of persistence landscapes to indicate an approaching critical transition, these financial papers lack statistical inference. We are motivated to explore how topological features change within a given time period for stocks and ETF sectors and to find any statistical significance using a permutation test [5] [8], which we discuss in detail in Section 2.5, Section 2.6, and Section 3.4.

While we acknowledge the previously cited authors, we regard our contribution as an empirical framework that adapts their analytical models to new data sets and expands on them by conducting statistical inference. Similar to Gidea and Katz [16], we investigate the same four major indices (DJIA, S&P500, NASDAQ, and Russell 2000), but we extend our data set to include 10 ETF sectors (Consumer Discretionary, Consumer Staples, Energy, Financials, Health Care, Industrials, Materials, Information Technology, Utilities, and Index) for January 4, 2010 to July 1, 2020, examining their topological features to detect one or more critical transitions. Moreover, we generate several topological summaries and norms of persistence landscapes for $p=1$ and $p=2$, and we conduct statistical inference on how these topological features change over time. In particular, we compare only sliding windows within a sliding step of one day of each other, separately for all the stock indices and for all the ETF sectors. We also compare all the stock indices against the ETF sectors within the same sliding window. Our hypothesis tests will determine, for two groups at a time, whether the means of the topological features are the same either within a sliding step of one day in their respective sliding windows or within the same sliding window. These statistical tests have not, to our knowledge, appeared in previous financial papers and are our main contribution. The remainder of this paper is organized as follows.

In Section 2, we provide background information on algebraic topology, homology, the construction of the Vietoris-Rips complex, persistent homology, topological summaries, norms of persistence landscapes, and statistical inference. In Section 3, we outline our methods for obtaining the data, constructing a sequence of point cloud data sets, applying persistent homology to this sequence, generating topological summaries, and performing statistical inference. In Section 4, we present our findings. In Section 5, we discuss and interpret our results. In Section 6, we conclude the paper.

2. Background

This study presents a topological data analysis of financial time series data. Here we provide background material on four relevant areas: algebraic topology, homology, topological summaries, and norms of persistence landscapes. We apply topological data analysis to a sequence of point cloud data sets to examine their topological properties, where each point cloud is a matrix built from d 1-dimensional time series. For our analysis, the sequence of point cloud data sets, denoted ${X}_{n}$, is shown below:

$\begin{array}{l}{X}_{1}=\left[\begin{array}{c}x\left({t}_{1}\right)\\ x\left({t}_{2}\right)\\ \vdots \\ x\left({t}_{w}\right)\end{array}\right]=\left[\begin{array}{ccc}{x}_{1}^{1}& \cdots & {x}_{1}^{d}\\ {x}_{2}^{1}& \cdots & {x}_{2}^{d}\\ \vdots & \ddots & \vdots \\ {x}_{w}^{1}& \cdots & {x}_{w}^{d}\end{array}\right]\\ \qquad \vdots \\ {X}_{q}=\left[\begin{array}{c}x\left({t}_{q}\right)\\ x\left({t}_{q+1}\right)\\ \vdots \\ x\left({t}_{q+w-1}\right)\end{array}\right]=\left[\begin{array}{ccc}{x}_{q}^{1}& \cdots & {x}_{q}^{d}\\ {x}_{q+1}^{1}& \cdots & {x}_{q+1}^{d}\\ \vdots & \ddots & \vdots \\ {x}_{q+w-1}^{1}& \cdots & {x}_{q+w-1}^{d}\end{array}\right],\end{array}$ (1)

where each point in the sequence is expressed as $x\left({t}_{n}\right)=\left({x}_{n}^{1},{x}_{n}^{2},\cdots ,{x}_{n}^{d}\right)\in {\mathbb{R}}^{d}$, d is the number of 1-dimensional time series (the number of columns), w is the sliding window size for a certain number of trading days ( ${n}_{td}$ ) with a sliding step of one day, and $n=1,2,\cdots ,q$. To obtain q, we subtract one less than the sliding window size, $w-1$, from the total number of days of the daily log returns ( ${n}_{dlr}$ ), so that $q={n}_{dlr}-\left(w-1\right)$ or $q={n}_{dlr}-w+1$. The total number of days of the daily log returns ( ${n}_{dlr}$ ) is the total number of trading days ( ${n}_{td}$ ) minus one, i.e. ${n}_{dlr}={n}_{td}-1$. The formula approximating the daily log returns is discussed in Section 3.1. Thus, every point cloud is a $w\times d$ matrix, where $w>d$ [16]. Note that our method uses a sliding window of size w as in [16]; it does not apply the sliding window embedding theorem or Takens’ theorem. In the next two subsections, we provide background information on algebraic topology and persistent homology, so that for every point cloud we may generate topological summaries and compute the ${L}^{p}$ norms of the corresponding persistence landscapes to conduct statistical inference. For a more in-depth background, we refer readers to [3] [5] [18] [19] [20] [21].
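As a concrete illustration of Equation (1), the following Python sketch (the function name and toy data are ours, not from the paper) builds the sequence of point clouds from a matrix of daily log returns with a sliding window of size w and a sliding step of one day:

```python
import numpy as np

def sliding_point_clouds(log_returns, w):
    """Build the sequence of point clouds X_1, ..., X_q of Equation (1).

    log_returns : (n_dlr, d) array, one column per 1-dimensional time series
    w           : sliding window size
    Returns a list of q = n_dlr - w + 1 point clouds, each a (w, d) matrix
    whose rows are the points x(t_n) in R^d.
    """
    n_dlr, d = log_returns.shape
    q = n_dlr - w + 1                          # sliding step of one day
    return [log_returns[n:n + w] for n in range(q)]

# toy example: d = 4 series over n_dlr = 10 days, window w = 5
rng = np.random.default_rng(0)
returns = rng.normal(size=(10, 4))
clouds = sliding_point_clouds(returns, w=5)
print(len(clouds), clouds[0].shape)            # 6 (5, 4)
```

Consecutive clouds overlap in all but one row, which is what the later comparison of windows "within a sliding step of one day" relies on.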

2.1. Algebraic Topology

To produce topological summaries, we must first construct a Vietoris-Rips filtration for each point cloud in the sequence of point cloud data sets. This requires an understanding of simplices and simplicial complexes, which are defined below [19]:

Definition 2.1 *Let $\left\{{a}_{0},\cdots ,{a}_{n}\right\}$ be a geometrically independent set in ${\mathbb{R}}^{N}$. We define the n-simplex $\sigma$ spanned by ${a}_{0},\cdots ,{a}_{n}$ to be the set of all points x of ${\mathbb{R}}^{N}$ such that:*

$x={\sum}_{i=0}^{n}{t}_{i}{a}_{i},\quad \text{where}\ {\sum}_{i=0}^{n}{t}_{i}=1,$ (2)

*and ${t}_{i}\ge 0$ for all i.*

Definition 2.2 *A simplicial complex K in ${\mathbb{R}}^{N}$ is a collection of simplices in ${\mathbb{R}}^{N}$ such that:*

· Every face of a simplex of K is in K.

· The intersection of any two simplices of K is a face of each of them.

2.2. Homology

In homology, we associate a vector space ${H}_{i}\left(X\right)$ to a space X for each natural number $i\in \left\{0,1,2,\cdots \right\}$; ${H}_{i}\left(X\right)$ counts the number of i-dimensional holes in X. For example, ${H}_{0}\left(X\right)$ counts the number of 0-dimensional holes, that is, the number of connected components of X, while ${H}_{1}\left(X\right)$ counts the number of 1-dimensional holes, that is, the number of loops in X. These algebraic structures are homotopy invariant, meaning they do not change under continuous deformations. Determining the homology of an arbitrary topological space directly is computationally impractical, so instead we approximate the space using simplicial complexes.
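For intuition, the 0-dimensional count is easy to compute directly from a finite point set without any homological machinery. A minimal sketch (our own helper, assuming Euclidean distance and a union-find over the ε-neighborhood graph) that counts connected components, i.e. $\dim {H}_{0}$:

```python
import numpy as np

def betti_0(points, eps):
    """dim H_0 of the graph joining points at distance <= eps, i.e. the
    number of connected components, via a simple union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                parent[find(i)] = find(j)   # merge the two components

    return len({find(i) for i in range(n)})

# two well-separated pairs: two components at a small scale, one at a large scale
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
print(betti_0(pts, 0.2), betti_0(pts, 10.0))  # 2 1
```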

Now that simplicial complexes have been defined, we introduce the p^{th} homology of a simplicial complex K. First, we denote the field with two elements by ${\mathbb{F}}_{2}$. Second, for a given simplicial complex K, we let ${C}_{p}\left(K\right)$ denote the ${\mathbb{F}}_{2}$-vector space with basis given by the p-simplices of K. Third, for any $p\in \left\{1,2,\cdots \right\}$, we define the linear map:

${\partial}_{p}:{C}_{p}\left(K\right)\to {C}_{p-1}\left(K\right):\sigma \mapsto \underset{\tau \subset \sigma ,\tau \in {K}_{p-1}}{{\displaystyle \sum}}\tau .$ (3)

The kernel of ${\partial}_{p}:{C}_{p}\left(K\right)\to {C}_{p-1}\left(K\right)$ is the subgroup ${\partial}_{p}^{-1}\left(0\right)$ of ${C}_{p}\left(K\right)$ and is called the group of p-cycles. The image of ${\partial}_{p+1}:{C}_{p+1}\left(K\right)\to {C}_{p}\left(K\right)$ is the subgroup ${\partial}_{p+1}\left({C}_{p+1}\left(K\right)\right)$ of ${C}_{p}\left(K\right)$ and is called the group of p-boundaries [19].

Definition 2.3 *For any $p\in \left\{0,1,2,\cdots \right\}$, the p^{th} homology of a simplicial complex K is defined by:*

${H}_{p}\left(K\right)=\text{kernel}\left({\partial}_{p}\right)/\text{image}\left({\partial}_{p+1}\right).$ (4)

Its dimension is defined by:

${\beta}_{p}\left(K\right):=\mathrm{dim}{H}_{p}\left(K\right)=\mathrm{dim}\text{kernel}\left({\partial}_{p}\right)-\mathrm{dim}\text{image}\left({\partial}_{p+1}\right),$ (5)

which is called the p^{th} Betti number of K [22].

The p-cycles that are not boundaries represent p-dimensional holes, which the p^{th} Betti number counts. For the p^{th} persistent homology of a filtered simplicial complex K, we apply Definition 2.3 and define:

Definition 2.4 *Let K be a finite simplicial complex, and let ${K}_{1}\subset {K}_{2}\subset \cdots \subset {K}_{l}=K$ be a finite sequence of nested subcomplexes of K. The simplicial complex K with such a sequence of subcomplexes is called a filtered simplicial complex. The p^{th} persistent homology of K is the pair*

$\left({\left\{{H}_{p}\left({K}_{i}\right)\right\}}_{1\le i\le l},{\left\{{f}_{p}^{i,j}\right\}}_{1\le i\le j\le l}\right),$

*where, for $i,j\in \left\{1,\cdots ,l\right\}$ with $i\le j$, ${f}_{p}^{i,j}:{H}_{p}\left({K}_{i}\right)\to {H}_{p}\left({K}_{j}\right)$ are the linear maps induced by the inclusion maps ${K}_{i}\to {K}_{j}$* [22].

The p^{th} persistent homology of a filtered simplicial complex provides more information, through the maps between the subcomplexes, than the homologies of the individual subcomplexes, as explained further in Section 2.2.2. While there are several filtered simplicial complexes, such as the Čech, Alpha, and Delaunay complexes, we chose the Vietoris-Rips complex because it is computationally efficient [22].

2.2.1. Vietoris-Rips Construction

Definition 2.5 *Let $X=\left\{{x}_{1},\cdots ,{x}_{n}\right\}$ be a collection of points in ${\mathbb{R}}^{d}$. Given a distance $\epsilon >0$, $\mathcal{R}\left(X,\epsilon \right)$ denotes the simplicial complex on the n vertices ${x}_{1},\cdots ,{x}_{n}$, where an edge between vertices ${x}_{i}$ and ${x}_{j}$ with $i\ne j$ is included if and only if $d\left({x}_{i},{x}_{j}\right)\le \epsilon$, and, more generally, a k-simplex with vertices ${x}_{{i}_{0}},\cdots ,{x}_{{i}_{k}}$ is included if and only if all of the pairwise distances are at most $\epsilon$. This type of simplicial complex is called a Vietoris-Rips complex [8] [16].*

When $\epsilon <{\epsilon}^{\prime}$, the Vietoris-Rips complexes form a filtration, $\mathcal{R}\left(X,\epsilon \right)\subseteq \mathcal{R}\left(X,{\epsilon}^{\prime}\right)$, which by Definition 2.4 is a filtered simplicial complex. While there are no clear criteria for selecting ${\epsilon}^{\prime}$, [23] used ${\epsilon}^{\prime}=0.05$ in their study. In this study, the Vietoris-Rips complex of ${X}_{n}$ is denoted $\mathcal{R}\left({X}_{n},\epsilon \right)$ and follows Definition 2.5, where ${X}_{n}$ is the sequence of point cloud data sets given by Equation (1). Moreover, the filtration $\mathcal{R}\left({X}_{n},\epsilon \right)\subseteq \mathcal{R}\left({X}_{n},{\epsilon}^{\prime}\right)$ is shown below:

$\begin{array}{l}\mathcal{R}\left({X}_{1},\epsilon \right)\subseteq \mathcal{R}\left({X}_{1},{\epsilon}^{\prime}\right)\\ \qquad \vdots \\ \mathcal{R}\left({X}_{q},\epsilon \right)\subseteq \mathcal{R}\left({X}_{q},{\epsilon}^{\prime}\right),\end{array}$ (6)

where q is the difference between the number of daily log returns and one less than the sliding window size, $\left(w-1\right)$, i.e. $q={n}_{dlr}-w+1$. By Definition 2.4, $\mathcal{R}\left({X}_{n},\epsilon \right)$ is a filtered simplicial complex.
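Because an edge of the Vietoris-Rips complex enters the filtration exactly at the pairwise distance between its endpoints, the 1-skeleton and the nestedness of Equation (6) are easy to sketch in code (an illustrative helper of ours, not part of the paper's pipeline):

```python
import numpy as np
from itertools import combinations

def rips_edges(points, eps):
    """1-skeleton of the Vietoris-Rips complex R(X, eps): a pair of points
    becomes an edge when their distance is at most eps, tagged with the
    distance at which the edge first appears in the filtration."""
    edges = []
    for i, j in combinations(range(len(points)), 2):
        dist = float(np.linalg.norm(points[i] - points[j]))
        if dist <= eps:
            edges.append((dist, i, j))
    return sorted(edges)                     # increasing filtration value

# nestedness: R(X, eps) is contained in R(X, eps') whenever eps < eps'
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
small = {(i, j) for _, i, j in rips_edges(pts, 1.0)}
large = {(i, j) for _, i, j in rips_edges(pts, 2.0)}
print(len(small), len(large), small <= large)  # 2 3 True
```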

2.2.2. Persistent Homology

Using Definition 2.4 and Definition 2.5, it is possible to find the p-dimensional homology of the Vietoris-Rips complex of ${X}_{n}$, denoted ${H}_{p}\left(\mathcal{R}\left({X}_{n},\epsilon \right)\right)$, with coefficients in the field $\mathbb{Z}/2\mathbb{Z}$ for small values of p and for different values of $\epsilon$ [8]. Recall from Section 2.2 that ${H}_{p}\left({K}_{i}\right)$ is a vector space and ${\beta}_{p}\left({K}_{i}\right)$ counts the number of p-dimensional holes. When $\epsilon <{\epsilon}^{\prime}$, we apply Definition 2.4 to the filtration $\mathcal{R}\left({X}_{n},\epsilon \right)\subseteq \mathcal{R}\left({X}_{n},{\epsilon}^{\prime}\right)$, which induces linear maps ${f}_{p}^{i,j}:{H}_{p}\left(\mathcal{R}\left({X}_{n},\epsilon \right)\right)\to {H}_{p}\left(\mathcal{R}\left({X}_{n},{\epsilon}^{\prime}\right)\right)$ as seen below:

$\begin{array}{l}{H}_{p}\left(\mathcal{R}\left({X}_{1},\epsilon \right)\right)\to {H}_{p}\left(\mathcal{R}\left({X}_{1},{\epsilon}^{\prime}\right)\right)\\ \qquad \vdots \\ {H}_{p}\left(\mathcal{R}\left({X}_{q},\epsilon \right)\right)\to {H}_{p}\left(\mathcal{R}\left({X}_{q},{\epsilon}^{\prime}\right)\right),\end{array}$ (7)

where $q={n}_{dlr}-w+1$. Each ${H}_{p}\left(\mathcal{R}\left({X}_{n},\epsilon \right)\right)$ is a vector space whose generators correspond to holes in $\mathcal{R}\left({X}_{n},\epsilon \right)$, and the linear maps ${f}_{p}^{i,j}$ allow us to track these generators from ${H}_{p}\left(\mathcal{R}\left({X}_{n},\epsilon \right)\right)$ to ${H}_{p}\left(\mathcal{R}\left({X}_{n},{\epsilon}^{\prime}\right)\right)$. A suitable basis is selected by applying the Fundamental Theorem of Persistent Homology.

Theorem 2.1 (Fundamental Theorem of Persistent Homology) *There is a choice of basis vectors of ${H}_{p}\left({K}_{i}\right)$ for each $i\in \left\{1,\cdots ,l\right\}$ such that each map ${f}_{p}^{i,j}$ is determined by a bipartite matching of basis vectors* [22].

Given Theorem 2.1, there is a choice of basis vectors of ${H}_{p}\left(\mathcal{R}\left({X}_{n},\epsilon \right)\right)$ such that one may construct a well-defined and unique collection of disjoint half-open intervals, where a generator $x\in {H}_{p}\left(\mathcal{R}\left({X}_{n},\epsilon \right)\right)$ corresponds to a half-open interval $\left[{b}_{i},{d}_{i}\right)$ representing the lifetime of x. The endpoints ${b}_{i}$ and ${d}_{i}$ mark where x first appears and finally disappears, respectively, in the filtration. Specifically, if $x\ne 0$ is not in the image of ${f}_{p}^{{b}_{i}-1,{b}_{i}}$, then x is born at ${b}_{i}$. Conversely, if ${d}_{i}>{b}_{i}$ is the smallest index for which ${f}_{p}^{{b}_{i},{d}_{i}}\left(x\right)=0$, then x dies at ${d}_{i}$. Persistence is determined by a generator’s lifetime: a generator is considered more persistent the longer its interval. If ${f}_{p}^{{b}_{i},j}\left(x\right)\ne 0$ for all $j>{b}_{i}$, then x lives forever, and its lifetime is represented by the interval $\left[{b}_{i},\infty \right)$ [22]. The set of vector spaces ${H}_{p}\left(\mathcal{R}\left({X}_{n},\epsilon \right)\right)$ together with the corresponding linear maps is referred to as a persistence module, which is the foundation for constructing topological summaries.
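For p = 0, the birth-death intervals above can be computed with elementary tools: every component is born at ε = 0 and dies at the edge length where it merges into another component. A hedged sketch (our own Kruskal-style illustration, not the algorithm used by standard persistent homology software):

```python
import numpy as np
from itertools import combinations

def h0_barcode(points):
    """0-dimensional persistence intervals of a Vietoris-Rips filtration:
    every point is born at eps = 0, and a component dies at the length of
    the edge that merges it into another component (union-find over edges
    sorted by length)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((float(np.linalg.norm(points[i] - points[j])), i, j)
                   for i, j in combinations(range(n), 2))
    bars = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                      # a component dies at this scale
            parent[ri] = rj
            bars.append((0.0, dist))
    bars.append((0.0, float('inf')))      # one component lives forever
    return bars

# two clusters of two points: deaths at 0.1, 0.1, 4.9 plus one infinite bar
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
print(h0_barcode(pts))
```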

2.3. Topological Summaries

To visualize, construct, and produce topological summaries, Theorem 2.1 is used to select the basis vectors of ${H}_{p}\left(\mathcal{R}\left({X}_{n},\epsilon \right)\right)$ and the corresponding linear maps ${f}_{p}^{i,j}$; all topological summaries are derived from the resulting persistence modules.

2.3.1. Persistence Module, Persistence Barcode, Persistence Diagram

Definition 2.6 *A persistence module is a vector space ${M}_{a}$ for all $a\in \mathbb{R}$ together with linear maps $M\left(a\le b\right):{M}_{a}\to {M}_{b}$ for all $a\le b$ such that:*

1) $M\left(a\le a\right)$ is the identity map;

2) For all $a\le b\le c,M\left(b\le c\right)\circ M\left(a\le b\right)=M\left(a\le c\right)$.

For additional information about the construction of a persistence module, see [5]. There are three main types of topological summaries associated with a persistence module. The first is called a barcode: a finite collection of disjoint half-open intervals ${I}_{j}$, in which each interval’s endpoints are a birth-death pair, (b) and (d) respectively. In particular, an interval starts at the time of birth (b) and ends at the time of death (d) of a topological feature. The p^{th} barcode is denoted ${B}_{p}=\left\{{I}_{j}\right\}$. A topological feature’s survival, or persistence, is represented by the interval’s length. The second type of topological summary is the p^{th} persistence diagram, denoted ${D}_{p}={\left\{\left({b}_{i},{d}_{i}\right)\right\}}_{i\in {I}_{j}}$, where ${b}_{i}$ and ${d}_{i}$ are the endpoints of the barcode intervals and $-\infty <{b}_{i}<{d}_{i}<\infty $.

Unfortunately, the geometric properties of barcodes and persistence diagrams make the calculation of means and variances difficult, since two barcodes or two persistence diagrams may not have a unique Fréchet mean, which means statistical inference cannot be carried out directly. While the barcode and the persistence diagram are conventional topological summaries, Bubenik [5] showed that the persistence landscape is a better alternative.

2.3.2. Persistence Landscapes and Mean Landscape

Bubenik and Dlotko [18] proved numerous statistical properties of persistence landscapes that we may use for statistical inference, such as stability, convergence, the Central Limit Theorem, and the Strong Law of Large Numbers. The persistence landscape and mean landscape are also used as topological summaries that indicate how persistence changes by examining the number of peaks. First, given a pair of numbers $\left(b,d\right)$ with $b<d$, the piecewise linear (PL) function ${f}_{\left(b,d\right)}:\mathbb{R}\to \left[0,\infty \right)$ is defined by [18]:

${f}_{\left(b,d\right)}\left(x\right)=\begin{cases}0 & \text{if}\ x\notin \left(b,d\right)\\ x-b & \text{if}\ x\in \left(b,\frac{b+d}{2}\right]\\ -x+d & \text{if}\ x\in \left(\frac{b+d}{2},d\right)\end{cases}$ (8)

Second, given a persistence module M, the persistence landscape may be defined as the function $\lambda :\mathbb{N}\times \mathbb{R}\to \mathbb{R}$ given by:

$\lambda \left(k,t\right)=\sup \left(h>0\,|\,\text{rank}\,M\left(t-h\le t+h\right)\ge k\right).$ (9)

Third, given a persistence diagram ${D}_{p}={\left\{\left({b}_{i},{d}_{i}\right)\right\}}_{i\in I}$ with ${b}_{i}<{d}_{i}$ and ${f}_{\left(b,d\right)}\left(t\right)=\mathrm{max}\left(0,\mathrm{min}\left(t-b,d-t\right)\right)$, the persistence landscape may equivalently be defined as follows:

$\lambda \left(k,t\right)=\text{kmax}{\left\{{f}_{\left({b}_{i},{d}_{i}\right)}\left(t\right)\,|\,\left({b}_{i},{d}_{i}\right)\in {D}_{p}\right\}}_{i\in I},$ (10)

where kmax denotes the k^{th} largest element. Using Equation (10) for ${X}_{n}$, the persistence landscape of ${X}_{n}$, denoted $\lambda \left({X}_{n}\right)$, is the following:

$\begin{array}{l}\lambda \left({X}_{1}\right)\left(k,t\right)=\text{kmax}{\left\{{f}_{\left({b}_{i},{d}_{i}\right)}\left(t\right)\,|\,\left({b}_{i},{d}_{i}\right)\in {D}_{p}\left({X}_{1}\right)\right\}}_{i\in I}\\ \qquad \vdots \\ \lambda \left({X}_{q}\right)\left(k,t\right)=\text{kmax}{\left\{{f}_{\left({b}_{i},{d}_{i}\right)}\left(t\right)\,|\,\left({b}_{i},{d}_{i}\right)\in {D}_{p}\left({X}_{q}\right)\right\}}_{i\in I},\end{array}$ (11)

where $q={n}_{dlr}-w+1$. The persistence landscape satisfies the following lemma from [5]:

Lemma 2.2

The persistence landscape has the following properties:

1) ${\lambda}_{k}\left(t\right)\ge 0$,

2) ${\lambda}_{k}\left(t\right)\ge {\lambda}_{k+1}\left(t\right)$, and

3) ${\lambda}_{k}$ is 1-Lipschitz.
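Equations (8)-(10) translate directly into code: evaluate the tent function ${f}_{\left(b,d\right)}\left(t\right)=\mathrm{max}\left(0,\mathrm{min}\left(t-b,d-t\right)\right)$ for each birth-death pair and take the k^{th} largest value at each t. An illustrative sketch (function names are ours):

```python
import numpy as np

def landscape(diagram, ts, k_max=3):
    """Evaluate lambda(k, t) of Equation (10) on a grid of t values.

    diagram : list of (b, d) birth-death pairs with b < d
    ts      : 1-D array of t values
    Returns an array of shape (k_max, len(ts)) whose row k-1 is lambda_k.
    """
    # tent function f_{(b,d)}(t) = max(0, min(t - b, d - t)) for each pair
    tents = np.array([np.maximum(0.0, np.minimum(ts - b, d - ts))
                      for b, d in diagram])
    tents = -np.sort(-tents, axis=0)       # largest value first at each t
    out = np.zeros((k_max, len(ts)))
    k = min(k_max, len(diagram))
    out[:k] = tents[:k]                    # lambda_k = k-th largest tent
    return out

ts = np.linspace(0.0, 4.0, 401)
L = landscape([(0.0, 4.0), (1.0, 3.0)], ts)
print(round(L[0].max(), 6), round(L[1].max(), 6))  # 2.0 1.0
```

The rows satisfy Lemma 2.2: they are non-negative and non-increasing in k by construction.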

From Equation (10), the persistence landscape is obtained and used to calculate the mean landscape, which is defined below:

Definition 2.7 *Let ${Y}_{1},\cdots ,{Y}_{n}$ be independent and identically distributed copies of Y, and let ${\Lambda}^{1},\cdots ,{\Lambda}^{n}$ be the corresponding persistence landscapes. The mean landscape ${\bar{\Lambda}}^{n}$ is given by the pointwise mean; in particular, ${\bar{\Lambda}}^{n}\left(\omega \right)={\bar{\lambda}}^{n}$, where*

${\bar{\lambda}}^{n}\left(k,t\right)=\frac{1}{n}\sum _{i=1}^{n}{\lambda}^{i}\left(k,t\right).$ (12)

Using Equation (12) for ${X}_{n}$, we have the following:

$\begin{array}{l}{\bar{\lambda}}^{n}\left({X}_{1}\right)=\frac{1}{n}\sum _{i=1}^{n}{\lambda}^{i}\left({X}_{1}\right)\\ \qquad \vdots \\ {\bar{\lambda}}^{n}\left({X}_{q}\right)=\frac{1}{n}\sum _{i=1}^{n}{\lambda}^{i}\left({X}_{q}\right),\end{array}$ (13)

where $q={n}_{dlr}-w+1$. The mean landscape is used in Section 2.5 and Section 2.6.
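Since landscapes evaluated on a common grid are just arrays, the pointwise mean of Equation (12) is a one-line reduction; a small sketch with toy values (ours, not data from the paper):

```python
import numpy as np

def mean_landscape(landscapes):
    """Pointwise mean of Equation (12): landscapes is an (n, k_max, n_t)
    array of n individual landscapes sampled on a common grid."""
    return np.mean(landscapes, axis=0)

# two toy landscapes with k_max = 1 on a grid of 5 points
lam1 = np.array([[0.0, 1.0, 2.0, 1.0, 0.0]])
lam2 = np.array([[0.0, 0.0, 1.0, 0.0, 0.0]])
mean = mean_landscape(np.stack([lam1, lam2]))
print(mean[0].tolist())  # [0.0, 0.5, 1.5, 0.5, 0.0]
```

Note that the mean of two landscapes need not itself be the landscape of any single persistence diagram, which is precisely why it is a useful statistic.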

2.4. Norms for Persistence Landscapes

Gidea and Katz [16] applied ${L}^{p}$ norms of the persistence landscapes to identify the signs of a financial crash, which usually occurs within a time of high variance and cross-correlations among stocks or ETFs, and demonstrated that ${L}^{1}$ and ${L}^{2}$ norms of the persistence landscapes of four stock indices exhibited significant rising trends before the financial crashes. We adopt their approach in our study.

Therefore, for real-valued functions on $\mathbb{N}\times \mathbb{R}$ and for $1\le p<\infty $, the p-norms of persistence landscapes are defined as:

${\Vert \lambda \Vert}_{p}={\left[\sum _{k=1}^{\infty}{\int}_{-\infty}^{\infty}{\lambda}_{k}{\left(t\right)}^{p}\text{d}t\right]}^{\frac{1}{p}},$ (14)

and for $p=\infty $,

${\Vert \lambda \Vert}_{\infty}=\underset{k\mathrm{,}t}{\text{sup}}{\lambda}_{k}\left(t\right)\mathrm{.}$ (15)

Applying Equation (14) to our sequence of point cloud data sets ${X}_{n}$ results in:

$\begin{array}{l}{\Vert \lambda \left({X}_{1}\right)\Vert}_{p}={\left[\sum _{k=1}^{\infty}{\int}_{-\infty}^{\infty}{\lambda}_{k}{\left({X}_{1}\right)\left(t\right)}^{p}\text{d}t\right]}^{\frac{1}{p}}\\ \qquad \vdots \\ {\Vert \lambda \left({X}_{q}\right)\Vert}_{p}={\left[\sum _{k=1}^{\infty}{\int}_{-\infty}^{\infty}{\lambda}_{k}{\left({X}_{q}\right)\left(t\right)}^{p}\text{d}t\right]}^{\frac{1}{p}},\end{array}$ (16)

where $q={n}_{dlr}-w+1$.
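The norms in Equations (14)-(16) can be approximated numerically once a landscape is sampled on a grid. The sketch below (Python, with a hypothetical toy landscape; the paper's computations use R) applies the trapezoidal rule level by level, sums the integrals of $\lambda_k(t)^p$ over the levels, and then takes a single $1/p$ power, with the sup norm as the $p=\infty$ case:

```python
import math

def landscape_norm(landscape, p, dt):
    """p-norm of a discretized persistence landscape (Equations (14)-(15)).

    `landscape` is a list of levels lambda_k sampled on a grid with
    spacing `dt`; use p = math.inf for the sup norm.
    """
    if p == math.inf:
        return max(max(level) for level in landscape)
    # Trapezoidal approximation of the integral of lambda_k(t)^p per level.
    total = 0.0
    for level in landscape:
        powers = [v ** p for v in level]
        total += dt * (sum(powers) - 0.5 * (powers[0] + powers[-1]))
    return total ** (1.0 / p)

tri = [[0.0, 0.5, 1.0, 0.5, 0.0]]  # a triangle of height 1 on [0, 2]
print(landscape_norm(tri, 1, dt=0.5))         # 1.0 (area of the triangle)
print(landscape_norm(tri, math.inf, dt=0.5))  # 1.0 (peak height)
```

Since landscape functions are non-negative, no absolute values are needed inside the integral.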

2.5. Statistical Inference: Part I

To compare the topological features between two groups, the persistence landscape is used to conduct a hypothesis test and statistical inference, which require several assumptions provided by [5]. First, the persistence landscapes lie in a separable Banach space ${\mathcal{L}}^{p}\left(\mathcal{S}\right)$ for $1\le p\le \infty $, where $\mathcal{S}=\mathbb{N}\times \mathbb{R}$. Second, Y is a random variable on some underlying probability space $\left(\Omega ,\mathcal{F},P\right)$ with a corresponding landscape $\Lambda $. Third, for $\omega \in \Omega $, $Y\left(\omega \right)$ is a realization of the random variable and $\Lambda \left(\omega \right)=\lambda \left(Y\left(\omega \right)\right):=\lambda $ is the corresponding topological summary statistic. To avoid confusion, we use Y instead of X as a random variable, because our sequence of point cloud data sets uses the variable ${X}_{n}$. In addition, Bubenik [5] proved the convergence of persistence landscapes using the Strong Law of Large Numbers and the Central Limit Theorem, which is essential for setting up our random variables and hypothesis test. Our random variable Y is defined as:

$Y=f\left(\lambda \left(k\mathrm{,}t\right)\right)={\displaystyle \underset{k}{\sum}}{\displaystyle {\int}_{\mathbb{R}}}\text{\hspace{0.05em}}{\lambda}_{k}\left(t\right)\text{d}t\mathrm{,}$ (17)

where $f\in {\mathcal{L}}^{b}\left(\mathcal{S}\right)$ is a continuous linear functional with $\frac{1}{a}+\frac{1}{b}=1$, and Y satisfies the (SLLN) and (CLT) as shown in [5]; thus, given an adequate sample size, Y follows an approximately normal distribution.

The statistical properties and definitions above are used to conduct hypothesis tests with corresponding p-values based on a permutation test. To compare the topological features of two groups ${Y}_{1}$ and ${Y}_{2}$, let ${k}_{1}$ and ${k}_{2}$ be the numbers of samples taken from these groups respectively, and let ${\Lambda}_{1}$ and ${\Lambda}_{2}$ be the corresponding landscapes. The associated sample values of ${Y}_{1}$ and ${Y}_{2}$ are denoted as ${y}_{1}^{1},\cdots ,{y}_{1}^{{k}_{1}}$ and ${y}_{2}^{1},\cdots ,{y}_{2}^{{k}_{2}}$, and the corresponding landscapes of these sample values are labelled as ${\lambda}_{1}^{1},\cdots ,{\lambda}_{1}^{{k}_{1}}$ and ${\lambda}_{2}^{1},\cdots ,{\lambda}_{2}^{{k}_{2}}$. We apply Equation (17) to ${Y}_{1}$ and ${Y}_{2}$, so the functionals of ${Y}_{1}$ and ${Y}_{2}$ are as follows:

$\begin{array}{l}{Y}_{1}=f\left({y}_{1}^{1}\right),\cdots ,f\left({y}_{1}^{{k}_{1}}\right)=f\left({\lambda}_{1}^{1}\left(k,t\right)\right),\cdots ,f\left({\lambda}_{1}^{{k}_{1}}\left(k,t\right)\right)={\displaystyle \underset{i=1}{\overset{{k}_{1}}{\sum}}}{\displaystyle {\int}_{\mathbb{R}}}{\lambda}_{1}^{i}\left(k,t\right)\text{d}t\\ {Y}_{2}=f\left({y}_{2}^{1}\right),\cdots ,f\left({y}_{2}^{{k}_{2}}\right)=f\left({\lambda}_{2}^{1}\left(k,t\right)\right),\cdots ,f\left({\lambda}_{2}^{{k}_{2}}\left(k,t\right)\right)={\displaystyle \underset{i=1}{\overset{{k}_{2}}{\sum}}}{\displaystyle {\int}_{\mathbb{R}}}{\lambda}_{2}^{i}\left(k,t\right)\text{d}t.\end{array}$ (18)

Recall the sample mean is $\bar{Y}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}}{Y}_{i}$, so the sample means of ${Y}_{1}$ and ${Y}_{2}$ are the following:

$\begin{array}{l}{\bar{Y}}_{1}=\frac{1}{{k}_{1}}{\displaystyle \underset{i=1}{\overset{{k}_{1}}{\sum}}}f\left({y}_{1}^{i}\right)=\frac{1}{{k}_{1}}{\displaystyle \underset{i=1}{\overset{{k}_{1}}{\sum}}}f\left({\lambda}_{1}^{i}\left(k,t\right)\right)\\ {\bar{Y}}_{2}=\frac{1}{{k}_{2}}{\displaystyle \underset{i=1}{\overset{{k}_{2}}{\sum}}}f\left({y}_{2}^{i}\right)=\frac{1}{{k}_{2}}{\displaystyle \underset{i=1}{\overset{{k}_{2}}{\sum}}}f\left({\lambda}_{2}^{i}\left(k,t\right)\right),\end{array}$ (19)

where again ${k}_{1}$ and ${k}_{2}$ are the numbers of samples taken from ${Y}_{1}$ and ${Y}_{2}$. We assume that ${\mu}_{1}$ and ${\mu}_{2}$ are the expectations, and thus the population means, of ${Y}_{1}$ and ${Y}_{2}$. Therefore, the statistical hypothesis is:

${H}_{0}:{\mu}_{1}={\mu}_{2}\text{\hspace{1em}}{H}_{a}:{\mu}_{1}\ne {\mu}_{2}.$ (20)

To test the null hypothesis, we use a two-sample permutation test. Let

$t=\frac{\left|{\bar{Y}}_{1}-{\bar{Y}}_{2}\right|}{\sqrt{\frac{Var\left({Y}_{1}\right)}{{k}_{1}}+\frac{Var\left({Y}_{2}\right)}{{k}_{2}}}}.$ (21)

Using Equation (21), values ${t}_{1},\cdots ,{t}_{m}$ of the test statistic are calculated for permutations $s=1,\cdots ,m$, where each permutation randomly reassigns the pooled sample values to the two groups. The observed value of the test statistic is expressed as ${t}_{\text{observed}}$. The p-value is calculated by comparing ${t}_{\text{observed}}$ with each ${t}_{s}$ and averaging the number of times ${t}_{\text{observed}}\le {t}_{s}$. Thus, Equation (21) becomes:

$\begin{array}{l}{t}_{\left\{1,{Y}_{1},{Y}_{2}\right\}}=\frac{\left|{\bar{Y}}_{1}-{\bar{Y}}_{2}\right|}{\sqrt{\frac{Var\left({Y}_{1}\right)}{{k}_{1}}+\frac{Var\left({Y}_{2}\right)}{{k}_{2}}}}\\ \vdots \\ {t}_{\left\{m,{Y}_{1},{Y}_{2}\right\}}=\frac{\left|{\bar{Y}}_{1}-{\bar{Y}}_{2}\right|}{\sqrt{\frac{Var\left({Y}_{1}\right)}{{k}_{1}}+\frac{Var\left({Y}_{2}\right)}{{k}_{2}}}}.\end{array}$ (22)

A general form of Equation (22) is:

${t}_{\left\{s,{Y}_{1},{Y}_{2}\right\}}=\frac{\left|{\bar{Y}}_{1}-{\bar{Y}}_{2}\right|}{\sqrt{\frac{Var\left({Y}_{1}\right)}{{k}_{1}}+\frac{Var\left({Y}_{2}\right)}{{k}_{2}}}}.$ (23)

Hence, using Equation (22) and counting every instance where ${t}_{\text{observed}}\le {t}_{s}$, the p-value is obtained as:

$p{\text{-value}}_{\left\{{Y}_{1},{Y}_{2}\right\}}=\frac{1}{m}{\displaystyle \underset{i=1}{\overset{m}{\sum}}}\mathbb{1}\left({t}_{\text{observed}}\le {t}_{\left\{i,{Y}_{1},{Y}_{2}\right\}}\right),$ (24)

where $\mathbb{1}\left(\cdot \right)$ is the indicator function.

To measure statistical significance, [8] used a significance level $\alpha =0.05$ in their study, which we adopt here. The above assumptions, equations, and definitions may also be applied to compare the topological features of more than two groups.
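The permutation test of Equations (21)-(24) can be sketched as follows (Python; the group values and the number of permutations m are hypothetical, and the paper's computations use R). The statistic of Equation (21) is recomputed after each random reassignment of the pooled sample values, and the p-value is the fraction of permuted statistics at least as large as the observed one:

```python
import random

def permutation_test(y1, y2, m=1000, seed=0):
    """Two-sample permutation test on the statistic of Equation (21).

    Returns the fraction of permutations whose statistic is at least as
    large as the observed one (Equation (24), with an indicator).
    """
    def t_stat(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
        vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
        return abs(ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

    rng = random.Random(seed)
    t_observed = t_stat(y1, y2)
    pooled = list(y1) + list(y2)
    count = 0
    for _ in range(m):
        rng.shuffle(pooled)  # random reassignment of pooled values
        if t_observed <= t_stat(pooled[:len(y1)], pooled[len(y1):]):
            count += 1
    return count / m

# Overlapping groups should give a large p-value ...
print(permutation_test([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 3.5, 4.5]) > 0.05)  # True
# ... and well-separated groups a small one.
print(permutation_test([1.0, 2.0, 3.0, 4.0], [11.0, 12.0, 13.0, 14.0]) < 0.05)  # True
```

In the paper the inputs to such a test are the sample values $f(\lambda^i)$ of the landscape functional from Equation (17), one list per group.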

2.6. Statistical Inference: Part II

Instead of conducting one hypothesis test, multiple hypothesis tests are conducted to determine how the topological features in our sequence of point cloud data sets ${X}_{n}$ change within a particular time frame. The hypothesis tests are performed on all the sliding window matrices within ${X}_{n}$. In particular, two adjacent sliding window matrices are compared, where adjacent means the sliding window matrices differ by a sliding step of one day. For example, the sliding window matrices ${X}_{1}$ and ${X}_{2}$ would be compared, while ${X}_{1}$ and ${X}_{3}$ would not. Therefore, the assumptions, equations, and definitions from Section 2.5 are applied to ${X}_{n}$. When the hypothesis tests are performed, there are $q={n}_{dlr}-w+1$ random variables (see Equation (1)), which is also the size of the sequence of point cloud data sets ${X}_{n}$.

So, we let ${Y}_{1},{Y}_{2},\cdots ,{Y}_{q}$ be random variables, where ${k}_{1},{k}_{2},\cdots ,{k}_{q}$ samples are taken from these groups respectively, and ${\Lambda}_{1},{\Lambda}_{2},\cdots ,{\Lambda}_{q}$ are the corresponding landscapes. The associated sample values of ${Y}_{1},{Y}_{2},\cdots ,{Y}_{q}$ are denoted as ${y}_{1}^{1},\cdots ,{y}_{1}^{{k}_{1}}$, ${y}_{2}^{1},\cdots ,{y}_{2}^{{k}_{2}}$, $\cdots $, ${y}_{q}^{1},\cdots ,{y}_{q}^{{k}_{q}}$, and the corresponding landscapes of these sample values are labelled as ${\lambda}_{1}^{1},\cdots ,{\lambda}_{1}^{{k}_{1}}$, ${\lambda}_{2}^{1},\cdots ,{\lambda}_{2}^{{k}_{2}}$, $\cdots $, ${\lambda}_{q}^{1},\cdots ,{\lambda}_{q}^{{k}_{q}}$. The functional in Equation (17) is used to define the following for ${Y}_{1},{Y}_{2},\cdots ,{Y}_{q}$ :

$\begin{array}{l}{Y}_{1}={\displaystyle \underset{i=1}{\overset{{k}_{1}}{\sum}}}{\displaystyle {\int}_{\mathbb{R}}}{\lambda}_{1}^{i}\left({X}_{1}\right)\text{d}t\\ \vdots \\ {Y}_{q}={\displaystyle \underset{i=1}{\overset{{k}_{q}}{\sum}}}{\displaystyle {\int}_{\mathbb{R}}}{\lambda}_{q}^{i}\left({X}_{q}\right)\text{d}t,\end{array}$ (25)

where $q={n}_{dlr}-w+1$. Recall the sample mean is $\bar{Y}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}}{Y}_{i}$, so the sample means of ${Y}_{1},{Y}_{2},\cdots ,{Y}_{q}$ are as follows:

$\begin{array}{l}{\bar{Y}}_{1}=\frac{1}{{k}_{1}}{\displaystyle \underset{i=1}{\overset{{k}_{1}}{\sum}}}f\left({\lambda}_{1}^{i}\left({X}_{1}\right)\right)\\ \vdots \\ {\bar{Y}}_{q}=\frac{1}{{k}_{q}}{\displaystyle \underset{i=1}{\overset{{k}_{q}}{\sum}}}f\left({\lambda}_{q}^{i}\left({X}_{q}\right)\right),\end{array}$ (26)

where $q={n}_{dlr}-w+1$. We assume that ${\mu}_{1},{\mu}_{2},\cdots ,{\mu}_{q}$ are the expectations, and thus the population means, of ${Y}_{1},{Y}_{2},\cdots ,{Y}_{q}$, and the statistical hypotheses are:

$\begin{array}{l}{H}_{0}:{\mu}_{1}={\mu}_{2}\text{\hspace{1em}}{H}_{a}:{\mu}_{1}\ne {\mu}_{2}\\ \vdots \\ {H}_{0}:{\mu}_{q-1}={\mu}_{q}\text{\hspace{1em}}{H}_{a}:{\mu}_{q-1}\ne {\mu}_{q},\end{array}$ (27)

where $q={n}_{dlr}-w+1$. To test the null hypotheses, we use two-sample permutation tests with statistics,

$\begin{array}{l}{t}_{\left\{{Y}_{1},{Y}_{2}\right\}}=\frac{\left|{\bar{Y}}_{1}-{\bar{Y}}_{2}\right|}{\sqrt{\frac{Var\left({Y}_{1}\right)}{{k}_{1}}+\frac{Var\left({Y}_{2}\right)}{{k}_{2}}}}\\ \vdots \\ {t}_{\left\{{Y}_{q-1},{Y}_{q}\right\}}=\frac{\left|{\bar{Y}}_{q-1}-{\bar{Y}}_{q}\right|}{\sqrt{\frac{Var\left({Y}_{q-1}\right)}{{k}_{q-1}}+\frac{Var\left({Y}_{q}\right)}{{k}_{q}}}},\end{array}$ (28)

where $q={n}_{dlr}-w+1$. Using Equation (28), values ${t}_{1},\cdots ,{t}_{m}$ of each test statistic are calculated for permutations $s=1,\cdots ,m$. The observed value of the test statistic is expressed as ${t}_{\text{observed}}$. The p-value is calculated by comparing ${t}_{\text{observed}}$ with each ${t}_{s}$ and averaging the number of times ${t}_{\text{observed}}\le {t}_{s}$. Using Equation (23), Equation (28) becomes:

$\begin{array}{l}{t}_{\left\{s,{Y}_{1},{Y}_{2}\right\}}=\frac{\left|{\bar{Y}}_{1}-{\bar{Y}}_{2}\right|}{\sqrt{\frac{Var\left({Y}_{1}\right)}{{k}_{1}}+\frac{Var\left({Y}_{2}\right)}{{k}_{2}}}}\\ \vdots \\ {t}_{\left\{s,{Y}_{q-1},{Y}_{q}\right\}}=\frac{\left|{\bar{Y}}_{q-1}-{\bar{Y}}_{q}\right|}{\sqrt{\frac{Var\left({Y}_{q-1}\right)}{{k}_{q-1}}+\frac{Var\left({Y}_{q}\right)}{{k}_{q}}}},\end{array}$ (29)

where $q={n}_{dlr}-w+1$. Hence, using Equation (29) and counting every instance where ${t}_{\text{observed}}\le {t}_{s}$, the p-values are obtained as:

$\begin{array}{l}p{\text{-value}}_{\left\{{Y}_{1},{Y}_{2}\right\}}=\frac{1}{m}{\displaystyle \underset{i=1}{\overset{m}{\sum}}}\mathbb{1}\left({t}_{\text{observed}}\le {t}_{\left\{i,{Y}_{1},{Y}_{2}\right\}}\right)\\ \vdots \\ p{\text{-value}}_{\left\{{Y}_{q-1},{Y}_{q}\right\}}=\frac{1}{m}{\displaystyle \underset{i=1}{\overset{m}{\sum}}}\mathbb{1}\left({t}_{\text{observed}}\le {t}_{\left\{i,{Y}_{q-1},{Y}_{q}\right\}}\right),\end{array}$ (30)

where $q={n}_{dlr}-w+1$. In our study, we also conduct hypothesis tests between two sequences of point cloud data sets, ${X}_{n}^{1}$ and ${X}_{n}^{2}$, within the same sliding window, using the same assumptions, definitions, and results from this section; the only difference is a change in subscripts and superscripts. This case is presented in Section 3.4.
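The adjacent-window sweep of this section amounts to computing the statistic of Equation (28) once per pair of neighboring windows. A minimal Python sketch of the observed statistics (toy groups; the paper's computations use R) is:

```python
def adjacent_t_stats(groups):
    """Observed statistics of Equation (28): one two-sample statistic
    per pair of adjacent sliding windows (group n vs. group n+1)."""
    def t_stat(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
        vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
        return abs(ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

    return [t_stat(groups[n], groups[n + 1]) for n in range(len(groups) - 1)]

# Three hypothetical windows: q = 3 groups give q - 1 = 2 statistics.
groups = [[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
ts = adjacent_t_stats(groups)
print(len(ts))  # 2
print(ts[0])    # 0.0: identical adjacent windows
```

Each entry of the returned list would then be fed to its own permutation test to produce the q − 1 p-values of Equation (30).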

3. Methods

In this section, we describe the methods used to obtain the data and analyze the financial time series using topological data analysis, statistical inference, and RStudio [23]. The data, which were obtained from Yahoo Finance, consist of daily adjusted closing prices (amended for corporate actions such as stock splits and dividends) for four major US stock indices (S&P 500, DJIA, NASDAQ, and Russell 2000) and 10 ETF sectors between January 4, 2010 and July 1, 2020 (2641 trading days). During this time period, a sharp decline in the daily log returns occurred on March 16, 2020. In order to examine this date of interest, we limited our data sets to 1001 trading days ( ${n}_{td}$ ) prior to March 16, 2020 to observe any patterns in the ${L}^{p}$ norms and determine any critical thresholds. To analyze the data, we first approximated the daily log returns of the adjusted closing prices. A return is defined as ${r}_{t}=\frac{{x}_{t}-{x}_{t-1}}{{x}_{t-1}}$, where ${x}_{t}$ is the actual value (adjusted closing price) of the desired stock index or ETF sector. The daily log returns are defined as:

$\mathrm{ln}\left(\frac{{x}_{t}}{{x}_{t-1}}\right)=\mathrm{ln}\left({x}_{t}\right)-\mathrm{ln}\left({x}_{t-1}\right)\approx {r}_{t},$

which approximates the return for small daily changes [24]. Since the daily log returns are forward daily changes, the time frame of the daily log returns is from January 5, 2010 to June 30, 2020.
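The daily log returns can be computed directly from a price series; a small Python sketch with hypothetical prices (the paper obtains adjusted closing prices from Yahoo Finance and works in R):

```python
import math

def daily_log_returns(prices):
    """Forward daily log returns of a series of adjusted closing prices:
    ln(x_t / x_{t-1}), which approximates the simple return
    (x_t - x_{t-1}) / x_{t-1} for small daily changes."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

prices = [100.0, 101.0, 99.0]  # hypothetical adjusted closes
r = daily_log_returns(prices)
print(len(r))                   # one fewer value than prices
print(abs(r[0] - 0.01) < 1e-4)  # ln(1.01) ~ 0.00995, close to the 1% return
```

Because each return pairs day t with day t − 1, a series of $n_{td}$ prices yields $n_{dlr} = n_{td} - 1$ log returns, matching the 1001 trading days and 1000 daily log returns used below.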

3.1. Point Cloud Data

After approximating the daily log returns, we designed two sequences of point cloud data sets, each with a sliding window of $w=50$ and a sliding step of one day, based on the same method found in [16]. The first sequence of point cloud data sets, denoted by ${X}_{n}^{SI}$, examined the four major US stock indices ( $d=4$ ), which resulted in a 50 × 4 matrix for each individual point cloud, for a total of $q={n}_{dlr}-w+1=1000-50+1=951$ point clouds, as seen below from using Equation (1):

$\begin{array}{l}{X}_{1}^{SI}=\left[\begin{array}{c}x\left({t}_{1}\right)\\ x\left({t}_{2}\right)\\ \vdots \\ x\left({t}_{50}\right)\end{array}\right]=\left[\begin{array}{ccc}{x}_{1}^{1}& \cdots & {x}_{1}^{4}\\ {x}_{2}^{1}& \cdots & {x}_{2}^{4}\\ \vdots & \ddots & \vdots \\ {x}_{50}^{1}& \cdots & {x}_{50}^{4}\end{array}\right]\\ \vdots \\ {X}_{951}^{SI}=\left[\begin{array}{c}x\left({t}_{951}\right)\\ x\left({t}_{952}\right)\\ \vdots \\ x\left({t}_{1000}\right)\end{array}\right]=\left[\begin{array}{ccc}{x}_{951}^{1}& \cdots & {x}_{951}^{4}\\ {x}_{952}^{1}& \cdots & {x}_{952}^{4}\\ \vdots & \ddots & \vdots \\ {x}_{1000}^{1}& \cdots & {x}_{1000}^{4}\end{array}\right].\end{array}$ (31)

The second sequence of point cloud data sets, denoted by ${X}_{n}^{ETF}$, examined the 10 ETF sectors ( $d=10$ ), which yielded a 50 × 10 matrix for each individual point cloud, for a total of $q={n}_{dlr}-w+1=951$ point clouds, as seen below from using Equation (1):

$\begin{array}{l}{X}_{1}^{ETF}=\left[\begin{array}{c}x\left({t}_{1}\right)\\ x\left({t}_{2}\right)\\ \vdots \\ x\left({t}_{50}\right)\end{array}\right]=\left[\begin{array}{ccc}{x}_{1}^{1}& \cdots & {x}_{1}^{10}\\ {x}_{2}^{1}& \cdots & {x}_{2}^{10}\\ \vdots & \ddots & \vdots \\ {x}_{50}^{1}& \cdots & {x}_{50}^{10}\end{array}\right]\\ \vdots \\ {X}_{951}^{ETF}=\left[\begin{array}{c}x\left({t}_{951}\right)\\ x\left({t}_{952}\right)\\ \vdots \\ x\left({t}_{1000}\right)\end{array}\right]=\left[\begin{array}{ccc}{x}_{951}^{1}& \cdots & {x}_{951}^{10}\\ {x}_{952}^{1}& \cdots & {x}_{952}^{10}\\ \vdots & \ddots & \vdots \\ {x}_{1000}^{1}& \cdots & {x}_{1000}^{10}\end{array}\right].\end{array}$ (32)
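The construction of Equations (31)-(32) is a sliding window over the matrix of daily log returns; a Python sketch with hypothetical all-zero data reproduces the counts $q=951$, $w=50$, and $d=4$ (the paper's computations use R):

```python
def sliding_point_clouds(returns_matrix, w=50):
    """Sequence of point clouds X_1, ..., X_q (Equations (31)-(32)).

    `returns_matrix` is a list of rows, one per trading day, each holding
    the daily log returns of the d indices; each point cloud is a w x d
    block, and consecutive clouds differ by a sliding step of one day.
    """
    q = len(returns_matrix) - w + 1
    return [returns_matrix[n:n + w] for n in range(q)]

# Hypothetical data: 1000 days of d = 4 index returns (all zeros).
data = [[0.0] * 4 for _ in range(1000)]
clouds = sliding_point_clouds(data, w=50)
print(len(clouds), len(clouds[0]), len(clouds[0][0]))  # 951 50 4
```

Each cloud is treated as 50 points in $\mathbb{R}^{d}$, one point per trading day in the window.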

3.2. Vietoris-Rips Complex and Persistent Homology

Next, we constructed Vietoris-Rips complexes and filtrations for each point cloud in ${X}_{n}^{SI}$ and ${X}_{n}^{ETF}$ using Definitions 2.4 and 2.5, Equation (6), and the R package "TDA" [25]. The Rips filtrations for all the stock indices and all the ETFs are denoted by $R\left({X}_{n}^{SI},\epsilon \right)$ and $R\left({X}_{n}^{ETF},\epsilon \right)$ respectively, for $\epsilon >0$. For the maximum filtration values, we used ${\epsilon}^{\prime SI}=0.055$ and ${\epsilon}^{\prime ETF}=0.08$, based on similar methods found in [16]. Therefore, we obtained the following Rips filtrations:

$R\left({X}_{n}^{SI},\epsilon \right)=R\left({X}_{n}^{SI},0\right)\subset \cdots \subset R\left({X}_{n}^{SI},0.055\right),$ (33)

$R\left({X}_{n}^{ETF},\epsilon \right)=R\left({X}_{n}^{ETF},0\right)\subset \cdots \subset R\left({X}_{n}^{ETF},0.08\right),$ (34)

where $n=1,\cdots ,951$. Based on Equations (6), (33), and (34), we computed only the one-dimensional ( $p=1$ ) homology ${H}_{1}\left(R\left({X}_{n},\epsilon \right)\right)$ with coefficients in the field $\mathbb{Z}/2\mathbb{Z}$ from Equation (7) as follows:

${H}_{1}\left(R\left({X}_{n}^{SI},0\right)\right)\to {H}_{1}\left(R\left({X}_{n}^{SI},0.055\right)\right),$ (35)

${H}_{1}\left(R\left({X}_{n}^{ETF},0\right)\right)\to {H}_{1}\left(R\left({X}_{n}^{ETF},0.08\right)\right),$ (36)

where $n=1,\cdots ,951$. We computed only one-dimensional homology because we are interested in the persistence of loops as they appear in each point cloud during the transition states of the market. From Definition 2.4, the filtrations in Equations (33) and (34) induce sequences of linear maps ${f}_{1}^{{b}_{i},{d}_{i},SI}:{H}_{1}\left(R\left({X}_{n}^{SI},0\right)\right)\to {H}_{1}\left(R\left({X}_{n}^{SI},0.055\right)\right)$ and ${f}_{1}^{{b}_{i},{d}_{i},ETF}:{H}_{1}\left(R\left({X}_{n}^{ETF},0\right)\right)\to {H}_{1}\left(R\left({X}_{n}^{ETF},0.08\right)\right)$. The images of these maps are the persistent homology groups. The collection of vector spaces ${H}_{1}\left(R\left({X}_{n}^{SI}\right)\right)$ and ${H}_{1}\left(R\left({X}_{n}^{ETF}\right)\right)$ along with the corresponding linear maps is a persistence module, which leads us to the topological summaries.
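For intuition about how persistence pairs arise from a Rips filtration, the degree-0 case can be computed in a few lines with union-find: each connected component is born at $\epsilon = 0$ and dies at the edge length that merges it into an older component. This Python sketch is only an illustration of the filtration mechanics; the degree-1 (loop) computations in this paper are done with the R package "TDA", and the three points below are hypothetical:

```python
from itertools import combinations
from math import dist

def h0_barcode(points, max_eps):
    """Degree-0 persistence of a Vietoris-Rips filtration via union-find.

    Edges are processed in order of increasing length; whenever an edge
    joins two distinct components, the younger component dies, producing
    a (birth, death) = (0, eps) bar.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    bars = []
    for eps, i, j in edges:
        if eps > max_eps:  # truncate at the maximum filtration value
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri          # merge: one component dies
            bars.append((0.0, eps))  # (birth, death) of that class
    return bars

pts = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
print(h0_barcode(pts, max_eps=20.0))  # [(0.0, 1.0), (0.0, 9.0)]
```

The truncation by `max_eps` plays the same role as the maximum filtration values 0.055 and 0.08 above: features that have not died by then are simply not recorded in this sketch.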

3.3. Topological Summaries

By modifying the R script in [5], the one-dimensional persistence diagrams, denoted by ${D}_{1}\left({X}_{n}^{SI}\right)={\left\{\left({b}_{i},{d}_{i}\right)\right\}}_{i\in I}$ and ${D}_{1}\left({X}_{n}^{ETF}\right)={\left\{\left({b}_{i},{d}_{i}\right)\right\}}_{i\in I}$, for each point cloud data set were used along with Equations (10) and (11) to produce the analogous one-dimensional persistence landscapes $\lambda \left({X}_{n}^{SI}\right)$ and $\lambda \left({X}_{n}^{ETF}\right)$ as seen below:

$\lambda \left({X}_{1}^{SI}\right),\lambda \left({X}_{2}^{SI}\right),\cdots ,\lambda \left({X}_{951}^{SI}\right),$ (37)

$\lambda \left({X}_{1}^{ETF}\right),\lambda \left({X}_{2}^{ETF}\right),\cdots ,\lambda \left({X}_{951}^{ETF}\right),$ (38)

where $n=1,\cdots ,951$. Next, the norms of the persistence landscapes ${\Vert \lambda \left({X}_{n}^{SI}\right)\Vert}_{p}$ and ${\Vert \lambda \left({X}_{n}^{ETF}\right)\Vert}_{p}$ were computed for $p=1$ and $p=2$ using Equations (14)-(16). The norms of the persistence landscapes and the daily log returns were plotted in juxtaposition, where it is important to remember that a point in the norms of the persistence landscapes refers to a sliding window of 50 trading days in the daily log returns. After generating the topological summaries, the mean landscapes were constructed using Definition 2.7 and Equations (12)-(13) for the time period between July 1, 2019 and July 1, 2020 (253 trading days), or the time frame between July 2, 2019 and June 30, 2020 (252 days for the daily log returns). For this reason, the sequences of point cloud data sets run from ${X}_{749}$ to ${X}_{951}$. Also, recall that the daily log returns are forward daily changes, so the time frame of the daily log returns is from July 2, 2019 to June 30, 2020. We assigned ${\lambda}^{i}\left({X}_{n}^{SI}\right)$ and ${\lambda}^{i}\left({X}_{n}^{ETF}\right)$ to be the corresponding landscapes for all the point clouds in ${X}_{n}^{SI}$ and ${X}_{n}^{ETF}$ to obtain the mean landscapes as seen below:

${\bar{\lambda}}^{k}\left({X}_{n}^{SI}\right)=\frac{1}{k}{\displaystyle \underset{i=1}{\overset{k}{\sum}}}{\lambda}^{i}\left({X}_{n}^{SI}\right),$ (39)

${\bar{\lambda}}^{k}\left({X}_{n}^{ETF}\right)=\frac{1}{k}{\displaystyle \underset{i=1}{\overset{k}{\sum}}}{\lambda}^{i}\left({X}_{n}^{ETF}\right),$ (40)

where $n=749,\cdots ,951$ and k is the number of landscape samples for each point cloud [5]. We are interested in the time period between July 1, 2019 and July 1, 2020, which has 253 trading days, because we wanted to observe market conditions prior to our market decline of interest and see whether we could detect any critical transitions. Therefore, we provide summary statistics for this time period for all the stock indices and all the ETF sectors. The daily log returns, persistence diagrams, persistence landscapes, and mean landscapes for sliding windows of 50 trading days were generated and plotted together for July 2, 2019 to June 30, 2020, but we only highlight specific date ranges near our market fall of interest and the peaks in the norms of the persistence landscapes for all the stock indices and ETF sectors, which is discussed in Section 4.

3.4. Statistical Inference

While the topological summaries were useful for examining topological features, we were also interested in assessing the statistical significance of any changes in these topological features over time. The time period of interest is July 1, 2019 to July 1, 2020, which has 253 trading days.

We make the same assumptions as in Section 2.5 and Section 2.6. Our random variables derive from our two sequences of point cloud data sets, ${X}_{n}^{SI}$ and ${X}_{n}^{ETF}$. Since our time period of interest has 253 trading days (252 daily log returns), each sequence of point cloud data sets has size $q=252-50+1=203$. For this reason, we have 203 random variables in each sequence of point cloud data sets.

For all the stock indices and all the ETF sectors, we let ${Y}_{1}^{SI},\cdots ,{Y}_{203}^{SI}$ and ${Y}_{1}^{ETF},\cdots ,{Y}_{203}^{ETF}$ be the random variables respectively, with ${k}^{SI}$ and ${k}^{ETF}$ samples taken from these groups respectively, and ${\Lambda}_{n}^{SI}$ and ${\Lambda}_{n}^{ETF}$ are the corresponding landscapes respectively, for $n=1,\cdots ,203$. The associated sample values of ${Y}_{n}^{SI}$ and ${Y}_{n}^{ETF}$ are denoted as ${y}_{n}^{1},\cdots ,{y}_{n}^{{k}^{SI}}$ and ${y}_{n}^{1},\cdots ,{y}_{n}^{{k}^{ETF}}$ respectively, and the corresponding landscapes of these sample values are labelled as ${\lambda}_{n}^{1}\left({X}_{n}^{SI}\right),\cdots ,{\lambda}_{n}^{{k}^{SI}}\left({X}_{n}^{SI}\right)$ and ${\lambda}_{n}^{1}\left({X}_{n}^{ETF}\right),\cdots ,{\lambda}_{n}^{{k}^{ETF}}\left({X}_{n}^{ETF}\right)$ respectively.

The functional in Equation (25) is used to define the random variables for all the stock indices and all the ETFs as follows:

${Y}_{n}^{SI}={\displaystyle \underset{i=1}{\overset{{k}^{SI}}{\sum}}}{\displaystyle {\int}_{\mathbb{R}}}{\lambda}_{n}^{i}\left({X}_{n}^{SI}\right)\text{d}t,$ (41)

${Y}_{n}^{ETF}={\displaystyle \underset{i=1}{\overset{{k}^{ETF}}{\sum}}}{\displaystyle {\int}_{\mathbb{R}}}{\lambda}_{n}^{i}\left({X}_{n}^{ETF}\right)\text{d}t,$ (42)

where $n=1,\cdots ,203$ for ${k}^{SI}$ and ${k}^{ETF}$ samples. Recalling the sample mean and using Equation (26), the sample means of the random variables for all the stock indices and ETF sectors are the following:

${\bar{Y}}_{n}^{SI}=\frac{1}{{k}^{SI}}{\displaystyle \underset{i=1}{\overset{{k}^{SI}}{\sum}}}f\left({\lambda}_{n}^{i}\left({X}_{n}^{SI}\right)\right),$ (43)

${\bar{Y}}_{n}^{ETF}=\frac{1}{{k}^{ETF}}{\displaystyle \underset{i=1}{\overset{{k}^{ETF}}{\sum}}}f\left({\lambda}_{n}^{i}\left({X}_{n}^{ETF}\right)\right),$ (44)

where $n=1,\cdots ,203$ for ${k}^{SI}$ and ${k}^{ETF}$ samples. We assume that ${\mu}_{n}^{SI}$ and ${\mu}_{n}^{ETF}$ are the expectations, and thus the population means, of ${Y}_{n}^{SI}$ and ${Y}_{n}^{ETF}$ respectively. We set up three sets of hypothesis tests, each with an analogous p-value based on a permutation test. For our first two sets of statistical hypotheses, we conduct separate hypothesis tests for all the stock indices and for all the ETF sectors within a one-day lag in their respective sliding windows. These statistical hypotheses determine, for two groups at a time, whether the means of the topological features are the same within a one-day lag in their respective sliding windows and point cloud data sets, as seen below:

${H}_{0}:{\mu}_{n}^{SI}={\mu}_{n+1}^{SI}\text{\hspace{1em}}{H}_{a}:{\mu}_{n}^{SI}\ne {\mu}_{n+1}^{SI},$ (45)

${H}_{0}:{\mu}_{n}^{ETF}={\mu}_{n+1}^{ETF}\text{\hspace{1em}}{H}_{a}:{\mu}_{n}^{ETF}\ne {\mu}_{n+1}^{ETF}.$ (46)

For our third set of statistical hypotheses, we also wish to compare all the stock indices against all the ETF sectors within the same sliding windows. Our statistical hypotheses will determine for two groups at a time if the means of topological features are the same within the same sliding window as shown below:

${H}_{0}:{\mu}_{n}^{SI}={\mu}_{n}^{ETF}\text{\hspace{1em}}{H}_{a}:{\mu}_{n}^{SI}\ne {\mu}_{n}^{ETF},$ (47)

where in Equations (45) and (46), $n=1,\cdots ,202$, and in Equation (47), $n=1,\cdots ,203$. To test the null hypotheses found in Equations (45) and (46), we used two-sample permutation tests from Equation (28) to obtain:

${t}_{\left\{{Y}_{n}^{SI},{Y}_{n+1}^{SI}\right\}}=\frac{\left|{\bar{Y}}_{n}^{SI}-{\bar{Y}}_{n+1}^{SI}\right|}{\sqrt{\frac{Var\left({Y}_{n}^{SI}\right)}{{k}^{SI}}+\frac{Var\left({Y}_{n+1}^{SI}\right)}{{k}^{SI}}}},$ (48)

${t}_{\left\{{Y}_{n}^{ETF},{Y}_{n+1}^{ETF}\right\}}=\frac{\left|{\bar{Y}}_{n}^{ETF}-{\bar{Y}}_{n+1}^{ETF}\right|}{\sqrt{\frac{Var\left({Y}_{n}^{ETF}\right)}{{k}^{ETF}}+\frac{Var\left({Y}_{n+1}^{ETF}\right)}{{k}^{ETF}}}},$ (49)

where $n=1,\cdots ,202$ for ${k}^{SI}$ and ${k}^{ETF}$ samples. To test the null hypotheses found in Equation (47), we used a two-sample permutation test from Equation (28) to obtain:

${t}_{\left\{{Y}_{n}^{SI},{Y}_{n}^{ETF}\right\}}=\frac{\left|{\bar{Y}}_{n}^{SI}-{\bar{Y}}_{n}^{ETF}\right|}{\sqrt{\frac{Var\left({Y}_{n}^{SI}\right)}{{k}^{SI}}+\frac{Var\left({Y}_{n}^{ETF}\right)}{{k}^{ETF}}}},$ (50)

where $n=1,\cdots ,203$ for ${k}^{SI}$ and ${k}^{ETF}$ samples. Using Equations (48), (49), and (50), values ${t}_{1},\cdots ,{t}_{m}$ of each test statistic were calculated for permutations $s=1,\cdots ,m$. The observed value of the test statistic is expressed as ${t}_{\text{observed}}$. The p-value is calculated by comparing ${t}_{\text{observed}}$ with each ${t}_{s}$ and averaging the number of times ${t}_{\text{observed}}\le {t}_{s}$. Using Equation (23), Equations (48) and (49) become:

${t}_{\left\{s,{Y}_{n}^{SI},{Y}_{n+1}^{SI}\right\}}=\frac{\left|{\bar{Y}}_{n}^{SI}-{\bar{Y}}_{n+1}^{SI}\right|}{\sqrt{\frac{Var\left({Y}_{n}^{SI}\right)}{{k}^{SI}}+\frac{Var\left({Y}_{n+1}^{SI}\right)}{{k}^{SI}}}},$ (51)

${t}_{\left\{s,{Y}_{n}^{ETF},{Y}_{n+1}^{ETF}\right\}}=\frac{\left|{\bar{Y}}_{n}^{ETF}-{\bar{Y}}_{n+1}^{ETF}\right|}{\sqrt{\frac{Var\left({Y}_{n}^{ETF}\right)}{{k}^{ETF}}+\frac{Var\left({Y}_{n+1}^{ETF}\right)}{{k}^{ETF}}}},$ (52)

Similarly, Equation (50) becomes:

${t}_{\left\{s,{Y}_{n}^{SI},{Y}_{n}^{ETF}\right\}}=\frac{\left|{\bar{Y}}_{n}^{SI}-{\bar{Y}}_{n}^{ETF}\right|}{\sqrt{\frac{Var\left({Y}_{n}^{SI}\right)}{{k}^{SI}}+\frac{Var\left({Y}_{n}^{ETF}\right)}{{k}^{ETF}}}},$ (53)

where in Equations (51) and (52), $n=1,\cdots ,202$, and in Equation (53), $n=1,\cdots ,203$, for ${k}^{SI}$ and ${k}^{ETF}$ samples. Hence, using Equations (51) and (52) and counting every instance where ${t}_{\text{observed}}\le {t}_{s}$, the p-values were obtained as:

$p{\text{-value}}_{\left\{{Y}_{n}^{SI},{Y}_{n+1}^{SI}\right\}}=\frac{1}{m}{\displaystyle \underset{i=1}{\overset{m}{\sum}}}\mathbb{1}\left({t}_{\text{observed}}\le {t}_{\left\{i,{Y}_{n}^{SI},{Y}_{n+1}^{SI}\right\}}\right),$ (54)

$p{\text{-value}}_{\left\{{Y}_{n}^{ETF},{Y}_{n+1}^{ETF}\right\}}=\frac{1}{m}{\displaystyle \underset{i=1}{\overset{m}{\sum}}}\mathbb{1}\left({t}_{\text{observed}}\le {t}_{\left\{i,{Y}_{n}^{ETF},{Y}_{n+1}^{ETF}\right\}}\right),$ (55)

Similarly, using Equation (53) and counting every instance where ${t}_{\text{observed}}\le {t}_{s}$, the p-value was obtained as:

$p{\text{-value}}_{\left\{{Y}_{n}^{SI},{Y}_{n}^{ETF}\right\}}=\frac{1}{m}{\displaystyle \underset{i=1}{\overset{m}{\sum}}}\mathbb{1}\left({t}_{\text{observed}}\le {t}_{\left\{i,{Y}_{n}^{SI},{Y}_{n}^{ETF}\right\}}\right),$ (56)

where in Equations (54) and (55), $n=1,\cdots ,202$, and in Equation (56), $n=1,\cdots ,203$. To evaluate statistical significance, using Equations (41)-(56), a permutation test is completed at a significance level of $\alpha =0.05$ for homology in degree 1 for all our hypothesis tests. Since we are only interested in the number of loops, we look at homology in degree 1. All these hypothesis testing methods were modified from the R script in [5]. After finding the p-values, we plotted the daily log returns together with an indication of which p-values were less than, or greater than or equal to, our significance level for either all the stock indices or all the ETF sectors along a sliding window of 50 trading days.

4. Results

The goal of this study is to detect a statistically relevant critical transition and to characterize any changes in topological features over time. To assess the statistical significance of observed differences in the topological features that change over time, we used a permutation test. For degree 1, we obtained ten sample values of the random variables ${Y}_{n}^{SI}$ and ${Y}_{n}^{ETF}$ as in Equations (41) and (42). Using Equations (43), (45), (48), (51), and (54), the permutation test is implemented with a significance level of $\alpha =0.05$ when comparing all the stock indices in different sliding windows between July 1, 2019 and July 1, 2020. The permutation test yields 164 p-values of 0.0000, 2 p-values of 0.001, and 36 p-values of 1 for homology in degree 1.

Using Equations (44), (46), (49), (52), and (55), the permutation test is conducted with a significance level of $\alpha =0.05$ when comparing all the ETF sectors in different sliding windows between July 1, 2019 and July 1, 2020. The permutation test returns 164 p-values of 0.0000, 4 p-values of 0.001, and 33 p-values of 1 for homology in degree 1. Using Equations (43), (44), (47), (50), (53), and (56), the permutation test is performed with a significance level of $\alpha =0.05$ when comparing all the stock indices and all the ETF sectors in the same sliding windows between July 1, 2019 and July 1, 2020, which results in 199 p-values of 0.0000 and 2 p-values of 0.001 for homology in degree 1.

In order to understand these results, we review the daily log returns, the norms of the persistence landscapes, and the topological summaries of all the stock indices and all the ETF sectors. When reviewing the daily log returns for the DJIA, S&P 500, NASDAQ, and Russell 2000 between January 5, 2010 and June 30, 2020 (see Figure 1), the stock indices range between −0.05 and 0.05 from 2010 to mid-2011, with some positive and negative spikes appearing leading up to 2012. From 2012 to March 2020, the daily log returns once again fall between −0.05 and 0.05. However, from March 2020 to June 2020, the market is highly volatile. Similar patterns are observed for the ETF sectors, but there is a notable spike around 2017, and from March 2020 to June 2020 the ETF sectors are more volatile than the stock indices, as shown in Figure 2.

When we examine the daily log returns of all of the stock indices between January 5, 2010 and June 30, 2020, the minimum daily log return occurs on March 16, 2020, where the Russell 2000 had a return of −0.154, the S&P 500 had a return of −0.1277, and the other stock indices were in between these values. When reviewing the daily log returns for all of the ETF sectors for the same time period, the minimum daily log return also occurs on March 16, 2020, where Information Technology (XLK) had a return of −0.1487, Consumer Staples (XLP) had a return of −0.0702, and the other ETF sectors were in between these values. While March 16, 2020 is not recognized as an official financial crash or meltdown, this

Figure 1. The figures are the daily log returns for all the stock indices from January 5, 2010 to June 30, 2020. The reporting period of this figure contains 2641 trading days from January 4, 2010 to July 1, 2020.

Figure 2. The figures are the daily log returns for all the ETF sectors from January 5, 2010 to June 30, 2020. The reporting period of this figure contains 2641 trading days from January 4, 2010 to July 1, 2020.

date is noteworthy, and warrants closer examination for potential critical transitions prior to this date. Focusing on when the peaks occur, we include summary statistics for July 1, 2019 to July 1, 2020 for all the stock indices and all the ETF sectors in Table 1 and Table 2, respectively.

The norms of the persistence landscapes in homology degree 1 presented in Figure 3 and Figure 4 display all of the stock indices and all of the ETFs respectively, for the ${L}^{1}$ and ${L}^{2}$ norms, over the 1001 trading days prior to March 16, 2020. For the stock indices, the ${L}^{1}$ norms are less than 0.01 between 2017 and 2018 and less than 0.02 between 2018 and 2020, but the greatest ${L}^{1}$ norm occurs in 2020 at approximately 0.08, as seen in Figure 3. The ${L}^{1}$ norms for all of the ETFs have more spikes than those of the stock indices, especially between 2018 and 2020, but the greatest ${L}^{1}$ norm occurs in March 2020 at approximately 0.14, as seen in Figure 4. While the ${L}^{2}$ norms for all of the stock indices and all of the ETFs have similar values, there is a noticeable spike in 2020. However, the ${L}^{2}$ values are not as great as the ${L}^{1}$ values, as shown in Figure 3 and Figure 4.

Figure 3 and Figure 4 highlight the peak in more detail for the time period between January 3, 2020 and June 30, 2020. While critical points are discernible in the month of February 2020, the peaks occurred on February 21, 2020 and March 3, 2020 for all of the stock indices and for all of the ETF sectors respectively, as seen in Figure 3 and Figure 4. Recall that a point on the norms of the persistence landscapes coincides with a sliding window of 50 trading days in the daily log returns, which means the peaks cover February 21, 2020 to May 1, 2020 and March 3, 2020 to May 12, 2020 for all of the stock indices and for all of the ETF

Table 1. Summary statistics for stock indices.

Table 2. Summary statistics for ETF sectors.

Figure 3. The figures are the norms of the persistence landscapes of all the stock indices, where the L^{1} norm (solid line) and the L^{2} norm (dashed line) are shown, and each point in the figure represents a sliding window of 50 trading days. Panel A plots the time frame from June 3, 2016 to March 16, 2020, where the last sliding window is from March 16, 2020 to May 26, 2020, and the reporting period of this figure contains 1001 trading days from June 2, 2016 to May 27, 2020. Panel B plots the time frame from January 3, 2020 to June 30, 2020, where the last sliding window is from April 21, 2020 to June 30, 2020, and the reporting period of this figure contains 76 trading days from January 2, 2020 to July 1, 2020.

Figure 4. The figures are the norms of the persistence landscapes of all the ETF sectors, where the L^{1} norm (solid line) and the L^{2} norm (dashed line) are shown, and each point in the figure represents a sliding window of 50 trading days. Panel A plots the time frame from June 3, 2016 to March 16, 2020, where the last sliding window is from March 16, 2020 to May 26, 2020, and the reporting period of this figure contains 1001 trading days from June 2, 2016 to May 27, 2020. Panel B plots the time frame from January 3, 2020 to June 30, 2020, where the last sliding window is from April 21, 2020 to June 30, 2020, and the reporting period of this figure contains 76 trading days from January 2, 2020 to July 1, 2020.

sectors, respectively. In particular, Figure 5 and Figure 6 emphasize this point, placing the norms of the persistence landscapes and the daily log returns of all of the stock indices and all of the ETF sectors, respectively, next to each other. The sliding windows of 50 trading days of the daily log returns align with the first point and with the maximum values of the norms of the persistence landscapes, as indicated by Figure 5 and Figure 6.
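The correspondence between a point on the norm curves and a 50-day window of log returns can be made concrete with a short sketch. This is an illustrative Python/NumPy fragment under our own naming, not the authors' code: each window of 50 consecutive daily log-return vectors is treated as one point cloud of 50 points in R^d, where d is the number of assets (indices or ETF sectors).

```python
import numpy as np

def log_returns(prices):
    """Daily log returns from a (days x assets) price matrix."""
    return np.diff(np.log(prices), axis=0)

def sliding_point_clouds(returns, w=50, step=1):
    """Yield (start_index, cloud) pairs, where each cloud is a window of
    w consecutive trading days: w points in R^d for d assets."""
    for start in range(0, len(returns) - w + 1, step):
        yield start, returns[start:start + w]
```

Each cloud is then fed to a Vietoris-Rips persistence computation, and its landscape norm becomes one point on the curves in Figure 3 and Figure 4.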

Aside from the norms of the persistence landscapes, we produce topological summaries to represent the persistence of topological features for all the stock indices and for all the ETF sectors between January 3, 2020 and June 30, 2020. Along with these topological summaries (the persistence diagram, the persistence landscape, the mean landscape), we plotted the daily log returns for the corresponding sliding window of 50 trading days shown in Figures 7-12.

Figure 7 and Figure 8 indicate that the daily log returns are centered around zero from January 3, 2020 to February 21, 2020 for all of the stock indices and from January 3, 2020 to February 26, 2020 for all of the ETF sectors. Little persistence is evident in the persistence diagram, and few spikes appear in the persistence landscape and the mean landscape. Figure 9 and Figure 10 illustrate more variability in the daily log returns from March 1, 2020 to April 16, 2020 for all of

Figure 5. The figures are the norms of the persistence landscapes and the daily log returns of all the stock indices for July 2, 2019 to June 30, 2020. Panel A plots the norms of the persistence landscapes, where the L^{1} norm (solid line) and the L^{2} norm (dashed line) are shown, each point in the figure represents a sliding window of 50 trading days, and two dashed lines depict the first and maximum points in this figure. Panel B plots the daily log returns with two sliding windows of 50 trading days (depicted as rectangles) corresponding to the first and maximum points in Panel A. The first sliding window is from July 2, 2019 to September 11, 2019, while the second sliding window is from February 21, 2020 to May 1, 2020. The reporting period of this figure contains 253 trading days from July 1, 2019 to July 1, 2020.

Figure 6. The figures are the norms of the persistence landscapes and the daily log returns of all the ETF sectors for July 2, 2019 to June 30, 2020. Panel A plots the norms of the persistence landscapes, where the L^{1} norm (solid line) and the L^{2} norm (dashed line) are shown, each point in the figure represents a sliding window of 50 trading days, and two dashed lines depict the first and maximum points in this figure. Panel B plots the daily log returns with two sliding windows of 50 trading days (depicted as rectangles) corresponding to the first and maximum points in Panel A. The first sliding window is from July 2, 2019 to September 11, 2019, while the second sliding window is from March 3, 2020 to May 12, 2020. The reporting period of this figure contains 253 trading days from July 1, 2019 to July 1, 2020.

Figure 7. These figures are the daily log returns and topological summaries of all the stock indices from January 3, 2020 to March 16, 2020. Panel A plots the daily log returns for all the stock indices with a sliding window of 50 trading days. Panel B plots the first dimension of the Vietoris-Rips persistence diagram, where the solid black dots represent connected components and the red triangles represent loops. Panel C plots the first dimension of the corresponding persistence landscape. Panel D plots the corresponding mean landscape.

Figure 8. These figures are the daily log returns and topological summaries of all the ETF sectors from January 3, 2020 to March 16, 2020. Panel A plots the daily log returns for all the ETF sectors with a sliding window of 50 trading days. Panel B plots the first dimension of the Vietoris-Rips persistence diagram, where the solid black dots represent connected components and the red triangles represent loops. Panel C plots the first dimension of the corresponding persistence landscape. Panel D plots the corresponding mean landscape.

Figure 9. These figures are the daily log returns and topological summaries of all the stock indices from February 21, 2020 to May 1, 2020. Panel A plots the daily log returns for all the stock indices with a sliding window of 50 trading days. Panel B plots the first dimension of the Vietoris-Rips persistence diagram, where the solid black dots represent connected components and the red triangles represent loops. Panel C plots the first dimension of the corresponding persistence landscape. Panel D plots the corresponding mean landscape.

Figure 10. These figures are the daily log returns and topological summaries of all the ETF sectors from March 3, 2020 to May 12, 2020. Panel A plots the daily log returns for all the ETF sectors with a sliding window of 50 trading days. Panel B plots the first dimension of the Vietoris-Rips persistence diagram, where the solid black dots represent connected components and the red triangles represent loops. Panel C plots the first dimension of the corresponding persistence landscape. Panel D plots the corresponding mean landscape.

Figure 11. These figures are the daily log returns and topological summaries of all the stock indices from March 16, 2020 to May 26, 2020. Panel A plots the daily log returns for all the stock indices with a sliding window of 50 trading days. Panel B plots the first dimension of the Vietoris-Rips persistence diagram, where the solid black dots represent connected components and the red triangles represent loops. Panel C plots the first dimension of the corresponding persistence landscape. Panel D plots the corresponding mean landscape.

Figure 12. These figures are the daily log returns and topological summaries of all the ETF sectors from March 16, 2020 to May 26, 2020. Panel A plots the daily log returns for all the ETF sectors with a sliding window of 50 trading days. Panel B plots the first dimension of the Vietoris-Rips persistence diagram, where the solid black dots represent connected components and the red triangles represent loops. Panel C plots the first dimension of the corresponding persistence landscape. Panel D plots the corresponding mean landscape.

the stock indices and from March 1, 2020 to May 1, 2020 for all of the ETF sectors. Significant persistence is apparent in the persistence diagram and more spikes appear in the persistence landscape and mean landscape.

5. Discussion

From reviewing the norms of the persistence landscapes, the daily log returns, the persistence diagrams, the persistence landscapes, and the mean landscapes for all of the selected dates, it is clear that the loops in the relevant point clouds become more pronounced, resulting in more persistence, which signifies that the stock market is transitioning from a stable state to a more unpredictable, volatile state. Moreover, the ETF sectors demonstrate more volatility than the stock indices. These findings for the stock indices coincide with the findings for the 2000 and 2008 market crashes in [16]. Similar to Gidea and Katz [16], we observe that the L^{1} distances confirm the critical thresholds prior to the 2020 peak and exhibit stronger growth than the L^{2} norms. In other words, the L^{p} norms exhibit strong growth around the emergence of the primary peak.

While the highest peak in the L^{p} norms occurred on February 21, 2020 for all of the stock indices and on March 3, 2020 for all of the ETF sectors, the Coronavirus (COVID-19) outbreak began in 2019 in Wuhan, China, and the first US case was confirmed on January 21, 2020. The most important dates are March 13, 2020, when President Trump declared a national emergency; March 15, 2020, when the Centers for Disease Control and Prevention warned against large gatherings; and March 17, 2020, when COVID-19 was present in all 50 states. The daily log returns for all of the stock indices and for all of the ETF sectors do not include negative values. Yet, there are other dates that could have led to the market decline on March 16, 2020: for example, January 30, 2020, when the World Health Organization (WHO) declared a global health emergency, or the period between February 5, 2020 and February 29, 2020, when the outbreak became an epidemic. While we acknowledge that it is quite difficult to predict a market crash, the norms of the persistence landscapes performed well as an indicator for detecting critical transitions, and the topological summaries confirmed the volatility through the increasing number of loops.

Our hypothesis tests aimed to determine how topological features change over time, notably between July 1, 2019 and July 1, 2020. Our hypothesis tests for all of the stock indices found evidence of a difference in topological features when comparing adjacent sliding windows with a sliding step of one day. In particular, we found for the chosen time frame that the daily log returns of all the stock indices significantly differ in the number of loops. Likewise, our hypothesis tests for all of the ETF sectors found evidence of a difference in topological features when comparing adjacent sliding windows with a sliding step of one day; specifically, for the selected time frame, the daily log returns of all the ETF sectors significantly differ in the number of loops. Our last hypothesis tests, comparing all of the stock indices and all of the ETF sectors within the same sliding window, found inconclusive evidence of a difference in topological features for the entire time frame.
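A permutation test of this kind (following the approach of Bubenik described in the Introduction) can be sketched as follows. This is a simplified Python/NumPy illustration under assumed inputs, not the exact procedure of [8]: each group is an array of landscapes flattened to vectors on a common grid, and the test statistic is the L^{1} distance between the two mean landscapes.

```python
import numpy as np

def permutation_test(A, B, n_perm=1000, seed=0):
    """Two-sample permutation test on landscapes given as flat vectors.

    Statistic: L^1 distance between the two mean landscapes.  The p-value
    is the fraction of label shufflings whose statistic is at least as
    large as the observed one (with the +1 continuity correction).
    """
    rng = np.random.default_rng(seed)
    X = np.vstack([A, B])
    n = len(A)
    stat = np.abs(A.mean(axis=0) - B.mean(axis=0)).sum()
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(X))
        pa, pb = X[idx[:n]], X[idx[n:]]
        if np.abs(pa.mean(axis=0) - pb.mean(axis=0)).sum() >= stat:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Applied to the landscapes of two adjacent sliding windows, a small p-value indicates a significant change in topological features between the windows.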

6. Conclusions

In this paper, we investigated the topological features of four major stock indices and 10 ETF sectors for January 4, 2010 to July 1, 2020. We used two sequences of point cloud data sets with a sliding window, one for all the stock indices and the other for all the ETFs. Both sequences were used to perform TDA through algebraic topology and persistent homology. From there, topological summaries were generated to determine persistence, and the norms of the persistence landscapes were used to detect a critical transition by adapting methods found in [16]. Our goal was to determine how the statistical significance of topological features of stock indices and ETF sectors changes for a specific time frame. We found that between July 1, 2019 and July 1, 2020, there is evidence of a difference in topological features for all the stock indices and all the ETFs. As a result, critical transitions are detected using the norms of the persistence landscapes, and the topological features of stock indices and ETF sectors change over time when comparing two sliding windows with a sliding step of one day.

We conclude with possible directions for future research. Further work could analyze persistence landscapes for homology in degree two, and it would be interesting to study topological features based on higher-degree persistence. It would also be worthwhile to expand the analysis to commodities, futures, and other financial time series. Moreover, topological data analysis could be extended beyond statistical inference to predictive modeling with machine learning.

This table presents summary statistics for all the stock indices. We estimated the mean, standard deviation, variance, skewness, and kurtosis of the daily log returns from July 2, 2019 to June 30, 2020. The reporting period of this table contains 253 trading days from July 1, 2019 to July 1, 2020.

This table presents summary statistics for all the ETF sectors. We estimated the mean, standard deviation, variance, skewness, and kurtosis of the daily log returns from July 2, 2019 to June 30, 2020. The reporting period of this table contains 253 trading days from July 1, 2019 to July 1, 2020.
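The summary statistics in Table 1 and Table 2 are standard moment estimators. A minimal Python/NumPy sketch (our own illustration; the population-moment conventions here may differ from the estimators actually used for the tables) is:

```python
import numpy as np

def summary_stats(returns):
    """Mean, standard deviation, variance, skewness, and kurtosis of one
    series of daily log returns, via moment (population) estimators."""
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    var = r.var()          # population variance (divides by n)
    sd = np.sqrt(var)
    z = (r - mu) / sd      # standardized returns
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4)  # equals 3 for a normal distribution
    return {"mean": mu, "sd": sd, "var": var, "skew": skew, "kurt": kurt}
```

Applying this column by column to the daily log returns reproduces the layout of Table 1 and Table 2.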

Acknowledgements

The authors would like to thank Tracy Volz for helpful discussions in the editing process. The authors would also like to thank the Center of Computational Finance and Economic Systems (https://cofes.rice.edu).

References

[1] Collins, A., Zomorodian, A., Carlsson, G. and Guibas, L.J. (2004) A Barcode Shape Descriptor for Curve Point Cloud Data. Computers & Graphics, 28, 881-894.

https://doi.org/10.1016/j.cag.2004.08.015

[2] Mileyko, Y., Mukherjee, S. and Harer, J. (2011) Probability Measures on the Space of Persistence Diagrams. Inverse Problems, 27, Article ID: 124007.

https://doi.org/10.1088/0266-5611/27/12/124007

[3] Turner, K., Mileyko, Y., Mukherjee, S. and Harer, J. (2014) Fréchet Means for Distributions of Persistence Diagrams. Discrete & Computational Geometry, 52, 44-70.

https://doi.org/10.1007/s00454-014-9604-7

[4] Munch, E., Bendich, P., Turner, K., Mukherjee, S., Mattingly, J. and Harer, J. (2013) Probabilistic Fréchet Means and Statistics on Vineyards.

[5] Nicolau, M., Levine, A.J. and Carlsson, G. (2011) Topology Based Data Analysis Identifies a Subgroup of Breast Cancers with a Unique Mutational Profile and Excellent Survival. Proceedings of the National Academy of Sciences, 108, 7265-7270.

https://doi.org/10.1073/pnas.1102826108

[6] Carlsson, G., Ishkhanov, T., De Silva, V. and Zomorodian, A. (2008) On the Local Behavior of Spaces of Natural Images. International Journal of Computer Vision, 76, 1-12.

https://doi.org/10.1007/s11263-007-0056-x

[7] Wang, Y., Ombao, H. and Chung, M.K. (2019) Statistical Persistent Homology of Brain Signals. IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, 12-17 May 2019, 1125-1129.

https://doi.org/10.1109/ICASSP.2019.8682978

[8] Nielson, J.L., Paquette, J., Liu, A.W., Guandique, C.F., Tovar, C.A., Inoue, T., Irvine, K.-A., Gensel, J.C., Kloke, J., Petrossian, T.C., et al. (2015) Topological Data Analysis for Discovery in Preclinical Spinal Cord Injury and Traumatic Brain Injury. Nature Communications, 6, 8581.

https://doi.org/10.1038/ncomms9581

[9] Scheffer, M., Bascompte, J., Brock, W.A., Brovkin, V., Carpenter, S.R., Dakos, V., Held, H., Van Nes, E.H., Rietkerk, M. and Sugihara, G. (2009) Early-Warning Signals for Critical Transitions. Nature, 461, 53-59.

https://doi.org/10.1038/nature08227

[10] Ensor, K.B. and Koev, G.M. (2014) Computational Finance: Correlation, Volatility, and Markets. Wiley Interdisciplinary Reviews: Computational Statistics, 6, 326-340.

https://doi.org/10.1002/wics.1323

[11] Gidea, M. (2017) Topological Data Analysis of Critical Transitions in Financial Networks. In: International Conference and School on Network Science, Springer, Berlin, 47-59.

https://doi.org/10.1007/978-3-319-55471-6_5

[12] Gidea, M., Goldsmith, D., Katz, Y., Roldan, P., Shmalo, Y., et al. (2020) Topological Recognition of Critical Transitions in Time Series of Cryptocurrencies. Physica A: Statistical Mechanics and Its Applications, 548, Article ID: 123843.

https://doi.org/10.1016/j.physa.2019.123843

[13] Guttal, V., Raghavendra, S., Goel, N. and Hoarau, Q. (2016) Lack of Critical Slowing Down Suggests That Financial Meltdowns Are Not Critical Transitions, Yet Rising Variability Could Signal Systemic Risk. PLoS ONE, 11, e0144198.

https://doi.org/10.1371/journal.pone.0144198

[14] Edelsbrunner, H., Letscher, D. and Zomorodian, A. (2002) Topological Persistence and Simplification. Discrete & Computational Geometry, 28, 511-533.

https://doi.org/10.1007/s00454-002-2885-2

[15] Hatcher, A. (2002) Algebraic Topology. Cambridge University Press, Cambridge.

[16] Munkres, J.R. (1984) Elements of Algebraic Topology. The Benjamin/Cummings Publishing Company, Menlo Park, CA.

[17] Dundas, B.I. (2013) Differential Topology.

[18] Bubenik, P. and Dłotko, P. (2017) A Persistence Landscapes Toolbox for Topological Statistics. Journal of Symbolic Computation, 78, 91-114.

https://doi.org/10.1016/j.jsc.2016.03.009

[19] Otter, N., Porter, M.A., Tillmann, U., Grindrod, P. and Harrington, H.A. (2017) A Roadmap for the Computation of Persistent Homology. EPJ Data Science, 6, 17.

https://doi.org/10.1140/epjds/s13688-017-0109-5

[20] Kovacev-Nikolic, V., Bubenik, P., Nikolić, D. and Heo, G. (2016) Using Persistent Homology and Dynamical Distances to Analyze Protein Binding. Statistical Applications in Genetics and Molecular Biology, 15, 19-38.

https://doi.org/10.1515/sagmb-2015-0057

[21] RStudio Team (2020) RStudio: Integrated Development Environment for R. RStudio, PBC, Boston.

[22] Shumway, R.H. and Stoffer, D.S. (2016) Time Series Analysis and Its Applications. Springer, Berlin.

https://doi.org/10.1007/978-3-319-52452-8

[23] Gidea, M. and Katz, Y. (2018) Topological Data Analysis of Financial Time Series: Landscapes of Crashes. Physica A: Statistical Mechanics and Its Applications, 491, 820-834.

https://doi.org/10.1016/j.physa.2017.09.028

[24] Bubenik, P. (2015) Statistical Topological Data Analysis Using Persistence Landscapes. The Journal of Machine Learning Research, 16, 77-102.

[25] Fasy, B.T., Kim, J., Lecci, F., Maria, C. and Rouvreau, V. (2019) TDA: Statistical Tools for Topological Data Analysis. R Package, CRAN.

https://cran.r-project.org/web/packages/TDA/index.html