Asymptotic Normality of the Nelson-Aalen and the Kaplan-Meier Estimators in Competing Risks
Abstract: This paper studies the asymptotic normality of the Nelson-Aalen and the Kaplan-Meier estimators in a competing risks context in the presence of independent right-censoring. To prove our results, we use Rebolledo's theorem, which makes it possible to apply the central limit theorem to certain types of martingales. From the results obtained, confidence bounds for the hazard and the survival functions are provided.

1. Introduction and Background

The model of competing risks has been widely studied in the literature; see, e.g., Heckman and Honoré  , Commenges  , Com-nougué  , Fine and Gray  , Crowder  , Fermanian  , Latouche  , Geffray  , Belot  , Njamen and Ngatchou (  ,  ), Njamen (  ,  ). In most approaches, the competing risks are assumed to be either all independent or all dependent. Here, the independent component of the potential risks constitutes an independent censoring variable, while the other risks are kept as possibly dependent. This approach is used by Geffray  . Namely, we consider a population in which each subject is exposed to m mutually exclusive competing risks which may be dependent. For $j\in \left\{1,\cdots ,m\right\}$, the failure time from the jth cause is a non-negative random variable (r.v.) ${\tau }_{j}$. The competing risks model postulates that only the smallest failure time is observable; it is given by the r.v. $T=\mathrm{min}\left({\tau }_{1},\cdots ,{\tau }_{m}\right)$ with distribution function (d.f.) denoted by F. The cause of failure associated with T is then indicated by a r.v. $\eta$ which takes the value j if the failure is due to the jth cause, i.e. $\eta =j$ if $T={\tau }_{j}$ for $j\in \left\{1,\cdots ,m\right\}$. The following modeling technique is taken from Njamen and Ngatchou  : we assume that T is, in its turn, at risk of being independently right-censored by a non-negative r.v. C with d.f. G. Consequently, the observable random variables are

$\left(Z=\mathrm{min}\left(T,C\right),\xi =\eta \delta \right),$

where $\delta =1\text{​}\text{​}{1}_{\left\{T\le C\right\}}$ and where $1\text{​}\text{​}{1}_{\left(.\right)}$ denotes the indicator function. As T and C are independent, the r.v. Z has d.f. H given by $1-H=\left(1-F\right)\left(1-G\right)$. Let ${\tau }_{H}=\mathrm{sup}\left\{t:H\left(t\right)<1\right\}$ denote the right-endpoint of H beyond which no observation is possible. The subdistribution functions ${F}^{\left(j\right)}$ pertaining to the different risks or causes of failure are defined for $j=1,\cdots ,m$ and $t\ge 0$ by

${F}^{\left(j\right)}\left(t\right)=ℙ\left[T\le t,\eta =j\right],j=1,\cdots ,m$ (1)

When independence of the different competing risks cannot be assumed, the functions ${F}^{\left(j\right)}$ for $j=1,\cdots ,m$ are the basic estimable quantities.
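As a quick illustration (ours, not from the paper), the subdistribution functions in (1) can be estimated empirically from uncensored data. The sketch below assumes two hypothetical independent exponential risks with rates ${\lambda }_{1},{\lambda }_{2}$, for which ${F}^{\left(j\right)}\left(t\right)=\frac{{\lambda }_{j}}{{\lambda }_{1}+{\lambda }_{2}}\left(1-{\text{e}}^{-\left({\lambda }_{1}+{\lambda }_{2}\right)t}\right)$ in closed form; all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0])      # hypothetical cause-specific rates (assumption)
n = 200_000

tau = rng.exponential(1.0 / lam, size=(n, 2))   # latent failure times tau_1, tau_2
T = tau.min(axis=1)                             # observed failure time T = min(tau_1, tau_2)
eta = tau.argmin(axis=1) + 1                    # cause of failure: eta = j if T = tau_j

def F_j_hat(t, j):
    """Empirical subdistribution F^(j)(t) = P[T <= t, eta = j]."""
    return np.mean((T <= t) & (eta == j))

lam_tot = lam.sum()
t = 0.5
for j in (1, 2):
    closed_form = lam[j - 1] / lam_tot * (1.0 - np.exp(-lam_tot * t))
    print(f"j={j}: empirical {F_j_hat(t, j):.4f} vs closed form {closed_form:.4f}")
```

Note that the relation $F\left(t\right)={\sum }_{j}{F}^{\left(j\right)}\left(t\right)$ holds exactly for the empirical versions, since the causes partition the failures.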

The Kaplan-Meier estimator was developed for situations in which only one cause of failure and independent right-censoring are considered. Aalen and Johansen   were the first to extend the Kaplan-Meier estimator to several causes of failure in the presence of independent censoring. In the present situation, the d.f. F may be consistently estimated by the Kaplan-Meier estimator denoted by ${\stackrel{^}{F}}_{n}$. The subdistribution functions ${F}^{\left(j\right)}$ may be consistently estimated by means of the Aalen-Johansen estimators denoted by ${\stackrel{^}{F}}_{n}^{\left(j\right)}$, for $j=1,\cdots ,m$. Indeed, when the process of the states occupied by an individual in time is a time-inhomogeneous Markov process, Aalen and Johansen   introduced an estimator of the transition probabilities between states in the presence of independent random right-censoring. The competing risks set-up corresponds to the case of a time-inhomogeneous Markov process with only one transient state and several absorbing states (that can be labeled $1,\cdots ,m$ ). Aalen and Johansen   obtained the joint consistency of ${\stackrel{^}{F}}_{n}^{\left(j\right)}$ for ${F}^{\left(j\right)}$, $j=1,\cdots ,m$, uniformly over fixed compact intervals $\left[0,\sigma \right]$ for $\sigma <{\tau }_{H}$, as well as the joint weak convergence of the processes $\sqrt{n}\left({\stackrel{^}{F}}_{n}^{\left(j\right)}-{F}^{\left(j\right)}\right)$ on such intervals.

The asymptotic properties of the Kaplan-Meier estimator of the distribution function have been studied by several authors (see Peterson  , Andersen et al.  , Shorack and Wellner  , Breslow and Crowley  ).

In this paper, on a region where there is at least one observation, we establish asymptotic properties of the Nelson-Aalen and Kaplan-Meier nonparametric estimators of the functions ${\Lambda }^{\ast \left(j\right)}$ and ${S}^{\ast \left(j\right)}$, for $j=1,\cdots ,m$, in the presence of independent right-censoring, in the competing risks framework set out in Njamen and Ngatchou (  ,  ).

The rest of the paper is organized as follows. Section 2 gathers preliminary results and background used in the paper. Section 3 contains our two limit laws: in Section 3.1, we give the limit law of the Nelson-Aalen nonparametric estimator for competing risks as defined in Njamen and Ngatchou   and Njamen  ; in Section 3.2, we give the limit law of the Kaplan-Meier nonparametric estimator in competing risks as defined in Njamen and Ngatchou   and Njamen  . In Section 4, we provide confidence bands, including the Hall-Wellner bands and Nair's equal-precision bands.

2. Preliminaries and Background

For $t\ge 0$, we introduce the following subdistribution functions of H:

${H}^{\left(0\right)}\left(t\right)=ℙ\left[Z\le t,\xi =0\right],$

and

${H}^{\left(1\right)}\left(t\right)=ℙ\left[Z\le t,\xi \ne 0\right]$

and for $j=1,\cdots ,m$

${H}^{\left(1,j\right)}\left(t\right)=ℙ\left[Z\le t,\xi =j\right].$

The relations $F\left(t\right)={\sum }_{j=1}^{m}{F}^{\left(j\right)}\left(t\right)$ and ${H}^{\left(1\right)}\left(t\right)={\sum }_{j=1}^{m}{H}^{\left(1,j\right)}\left(t\right)$ hold for $t\ge 0$ since the different risks are mutually exclusive. The relation $H\left(t\right)={H}^{\left(0\right)}\left(t\right)+{H}^{\left(1\right)}\left(t\right)$ is also valid for $t\ge 0$. The relations that connect the observable distribution functions ${H}^{\left(0\right)}$, ${H}^{\left(1\right)}$ and ${H}^{\left(1,j\right)}$ to the unobservable distributions F, G and ${F}^{\left(j\right)}$ are given by:

${H}^{\left(0\right)}\left(t\right)={\int }_{0}^{t}\left(1-F\right)\text{d}G,$

${H}^{\left(1\right)}\left(t\right)={\int }_{0}^{t}\left(1-{G}^{-}\right)\text{d}F,$

and

${H}^{\left(1,j\right)}\left(t\right)={\int }_{0}^{t}\left(1-{G}^{-}\right)\text{d}{F}^{\left(j\right)}.$

The cumulative hazard function of T and the partial cumulative hazard function of T related to cause j for $j\in \left\{1,\cdots ,m\right\}$ are given for $t\ge 0$ respectively by the following expressions:

$\Lambda \left(t\right)={\int }_{0}^{t}\frac{\text{d}F}{1-{F}^{-}}={\int }_{0}^{t}\frac{\text{d}{H}^{\left(1\right)}}{1-{H}^{-}},$ (2)

${\Lambda }^{\left(1,j\right)}\left(t\right)={\int }_{0}^{t}\frac{\text{d}{F}^{\left(j\right)}}{1-{F}^{-}}={\int }_{0}^{t}\frac{\text{d}{H}^{\left(1,j\right)}}{1-{H}^{-}}.$ (3)
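Identity (2) says that the unobservable representation ${\int }_{0}^{t}\text{d}F/\left(1-{F}^{-}\right)$ and the observable one ${\int }_{0}^{t}\text{d}{H}^{\left(1\right)}/\left(1-{H}^{-}\right)$ define the same cumulative hazard. A small numerical sketch (our own check, assuming $T\sim \text{Exp}\left(\lambda \right)$ and $C\sim \text{Exp}\left(\mu \right)$, so that $\Lambda \left(t\right)=\lambda t$) evaluates both integrals by a Riemann sum:

```python
import numpy as np

lam, mu = 1.0, 0.5              # hypothetical rates: T ~ Exp(lam), C ~ Exp(mu)
t_max = 2.0
s = np.linspace(0.0, t_max, 200_001)
ds = s[1] - s[0]

f = lam * np.exp(-lam * s)                  # density of F
one_minus_G = np.exp(-mu * s)               # 1 - G
one_minus_H = np.exp(-(lam + mu) * s)       # 1 - H = (1-F)(1-G)

# dH^(1) = (1 - G^-) dF in the continuous case, so both integrands reduce to lam here
Lambda_via_H = np.cumsum(one_minus_G * f * ds / one_minus_H)
Lambda_via_F = np.cumsum(f * ds / np.exp(-lam * s))

print(Lambda_via_H[-1], Lambda_via_F[-1], lam * t_max)
```

Both Riemann sums recover $\Lambda \left({t}_{\mathrm{max}}\right)=\lambda {t}_{\mathrm{max}}$ up to discretization error, illustrating that the censoring factor $1-G$ cancels between ${H}^{\left(1\right)}$ and $1-H$.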

Let us now define estimators of the different quantities. Let ${\left({Z}_{i},{\xi }_{i}\right)}_{i=1,\cdots ,n}$ be n independent copies of the random vector $\left(Z,\xi \right)$. We define the empirical counterparts of ${H}^{\left(0\right)}$, ${H}^{\left(1\right)}$, ${H}^{\left(1,j\right)}$ and H, for $j\in \left\{1,\cdots ,m\right\}$, by:

${H}_{n}^{\left(0\right)}\left(t\right)=\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}1\text{​}\text{​}{1}_{\left\{{Z}_{i}\le t,{\xi }_{i}=0\right\}},$

${H}_{n}^{\left(1\right)}\left(t\right)=\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}1\text{​}\text{​}{1}_{\left\{{Z}_{i}\le t,{\xi }_{i}\ne 0\right\}},$

${H}_{n}^{\left(1,j\right)}\left(t\right)=\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}1\text{​}\text{​}{1}_{\left\{{Z}_{i}\le t,{\xi }_{i}=j\right\}},$

${H}_{n}\left(t\right)=\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}1\text{​}\text{​}{1}_{\left\{{Z}_{i}\le t\right\}}.$

The relations ${H}_{n}\left(t\right)={H}_{n}^{\left(0\right)}\left(t\right)+{H}_{n}^{\left(1\right)}\left(t\right)$ and ${H}_{n}^{\left(1\right)}\left(t\right)={\sum }_{j=1}^{m}{H}_{n}^{\left(1,j\right)}\left(t\right)$ are valid for $t\ge 0$. As T is independently randomly right-censored by C, a well-known estimator for F is the Kaplan-Meier estimator defined for $t\ge 0$ by:

${\stackrel{^}{F}}_{n}\left(t\right)=1-\underset{i=1}{\overset{n}{\prod }}\left(1-\frac{1\text{​}\text{​}{1}_{\left\{{Z}_{i}\le t,{\xi }_{i}\ne 0\right\}}}{n\left(1-{H}_{n}^{-}\left({Z}_{i}\right)\right)}\right),$

where the left-continuous modification of any d.f. L is denoted by ${L}^{-}$. The Nelson-Aalen estimators of $\Lambda$ and of ${\Lambda }^{\left(1,j\right)}$ for $j=1,\cdots ,m$ respectively are defined for $t\ge 0$ by:

${\Lambda }_{n}\left(t\right)={\int }_{0}^{t}\frac{\text{d}{H}_{n}^{\left(1\right)}}{1-{H}_{n}^{-}},$ (4)

${\Lambda }_{n}^{\left(1,j\right)}\left(t\right)={\int }_{0}^{t}\frac{\text{d}{H}_{n}^{\left(1,j\right)}}{1-{H}_{n}^{-}}.$ (5)
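The Nelson-Aalen integrals (4)-(5) are step functions jumping at the uncensored observations, with jump $1/\left(n\left(1-{H}_{n}^{-}\left({Z}_{i}\right)\right)\right)$, i.e. one over the size of the risk set when all ${Z}_{i}$ are distinct. A minimal Python sketch (ours, under a hypothetical exponential model where $\Lambda \left(t\right)=\lambda t$):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, n = 1.0, 0.5, 50_000            # hypothetical rates and sample size (assumptions)
T = rng.exponential(1.0 / lam, n)        # failure times
C = rng.exponential(1.0 / mu, n)         # censoring times
Z = np.minimum(T, C)
xi = (T <= C).astype(float)              # 1 = failure observed, 0 = censored

order = np.argsort(Z)
Z_s, xi_s = Z[order], xi[order]
at_risk = n - np.arange(n)               # risk-set size n(1 - H_n^-(Z_(i))), distinct Z's
Lambda_n = np.cumsum(xi_s / at_risk)     # Nelson-Aalen estimator at the ordered Z's

t = 1.0
k = np.searchsorted(Z_s, t, side="right")
print(Lambda_n[k - 1], lam * t)          # estimate vs true cumulative hazard
```

For the cause-specific version ${\Lambda }_{n}^{\left(1,j\right)}$, one would replace the indicator of a failure by the indicator ${\xi }_{i}=j$ in `xi`.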

The Aalen-Johansen estimator for ${F}^{\left(j\right)}$ is defined for $t\ge 0$ by:

${\stackrel{^}{F}}_{n}^{\left(j\right)}\left(t\right)={\int }_{0}^{t}\frac{1-{\stackrel{^}{F}}_{n}^{-}}{1-{H}_{n}^{-}}\text{d}{H}_{n}^{\left(1,j\right)}.$

For all $t\ge 0$, the following equalities hold:

$1-{H}_{n}\left(t\right)=\left(1-{\stackrel{^}{F}}_{n}\left(t\right)\right)\left(1-{\stackrel{^}{G}}_{n}\left(t\right)\right)$

${\Lambda }_{n}\left(t\right)={\int }_{0}^{t}\frac{\text{d}{\stackrel{^}{F}}_{n}}{1-{\stackrel{^}{F}}_{n}^{-}},$

where ${\stackrel{^}{G}}_{n}$, the Kaplan-Meier estimator of G, is defined for $t\ge 0$ by:

${\stackrel{^}{G}}_{n}\left(t\right)=1-\underset{i=1}{\overset{n}{\prod }}\left(1-\frac{1\text{​}\text{​}{1}_{\left\{{Z}_{i}\le t,{\xi }_{i}=0\right\}}}{n\left(1-{H}_{n}^{-}\left({Z}_{i}\right)\right)}\right).$
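The product identity $1-{H}_{n}=\left(1-{\stackrel{^}{F}}_{n}\right)\left(1-{\stackrel{^}{G}}_{n}\right)$ can be checked numerically: when all ${Z}_{i}$ are distinct, each Kaplan-Meier factor removes one individual from the risk set and the two products telescope to the empirical survival of Z. A sketch (our own, on hypothetical simulated data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
T = rng.exponential(1.0, n)          # hypothetical failure times
C = rng.exponential(2.0, n)          # hypothetical censoring times
Z = np.minimum(T, C)
delta = T <= C                       # True = failure observed, False = censored

order = np.argsort(Z)
d = delta[order]
at_risk = n - np.arange(n)           # n(1 - H_n^-(Z_(i))) for distinct Z's

one_minus_F = np.cumprod(1.0 - d / at_risk)        # 1 - F_hat_n at the ordered Z's
one_minus_G = np.cumprod(1.0 - (~d) / at_risk)     # 1 - G_hat_n at the ordered Z's
one_minus_H = 1.0 - np.arange(1, n + 1) / n        # 1 - H_n at the ordered Z's

# With no ties the identity holds exactly, up to floating-point rounding
print(np.max(np.abs(one_minus_H - one_minus_F * one_minus_G)))
```

The identity is exact here because, at the k-th ordered observation, exactly one of the two Kaplan-Meier products contributes the factor $\left(n-k\right)/\left(n-k+1\right)$.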

3. Results

In this section, we continue the works of Njamen and Ngatchou  , Njamen   and Njamen and Ngatchou  . Njamen and Ngatchou (  , p. 9) study the consistency of the Nelson-Aalen nonparametric estimator in competing risks, Njamen (  , pp. 11-12) studies the pointwise convergence and the uniform convergence in probability of the Nelson-Aalen nonparametric estimator in competing risks, and Njamen and Ngatchou (  , p. 13) study the bias and the uniform convergence of the nonparametric estimator of the survival function in a competing risks context; it is also shown there that this estimator is asymptotically unbiased. As the aforementioned authors, we use the martingale approach.

3.1. Limit Law of Nelson-Aalen’s Nonparametric Estimator for Competing Risks

In what follows, we study the asymptotic normality of the Nelson-Aalen nonparametric estimator in competing risks. To this end, for all $j\in \left\{1,\cdots ,m\right\}$ and $t\ge 0$, consider the Nelson-Aalen type cumulative hazard function estimator (Nelson,  ; Aalen,  ; Njamen and Ngatchou,  ) defined by

${\stackrel{^}{\Lambda }}_{n}\left(t\right)={\int }_{0}^{t}\frac{J\left(u\right)}{Y\left(u\right)}\text{d}N\left(u\right),$ (6)

where $J\left(t\right)=1\text{​}\text{​}{1}_{\left\{Y\left(t\right)>0\right\}}$.

The cumulative hazard in a region where there is at least one observation is given, for all $j\in \left\{1,\cdots ,m\right\}$, by (see Njamen,  , p. 9)

${\Lambda }^{\ast \left(j\right)}\left(t\right)={\int }_{0}^{t}{L}^{\ast \left(j\right)}\left(s\right){\lambda }^{\ast \left(j\right)}\left(s\right)\text{d}s,$ (7)

with ${L}_{i}^{\ast \left(j\right)}\left(t\right)=1\text{​}\text{​}{1}_{\left\{{Z}_{i}\ge t\right\}}$, which indicates whether individual i is still at risk just before time t (i.e. has not yet undergone the event). Its estimator was defined in Njamen and Ngatchou (  , p. 7).

The following theorem gives the limit law of the Nelson-Aalen estimator ${\stackrel{^}{\Lambda }}_{n}^{\ast \left(j\right)}$ in competing risks of Njamen (2017, p. 9). This is the first fundamental result of this article.

Theorem 1.

In a region where there is at least one observation, it is assumed that ${F}_{i}^{\ast \left(j\right)}\left(t\right)<1$ for $i\in \left\{1,\cdots ,n\right\}$ and $j\in \left\{1,\cdots ,m\right\}$. Then, for all $t\ge 0$,

$\sqrt{n}\left({\stackrel{^}{\Lambda }}_{n}^{\ast \left(j\right)}\left(t\right)-{\Lambda }^{\ast \left(j\right)}\left(t\right)\right)\stackrel{\mathcal{L}}{\to }{U}_{i}^{\ast \left(j\right)}\left(t\right),$ (8)

where ${U}_{i}^{\ast \left(j\right)}$ is a centered Gaussian martingale such that:

$\left\{\begin{array}{l}{U}_{i}^{\ast \left(j\right)}\left(0\right)=0\\ \mathbb{V}\left({U}_{i}^{\ast \left(j\right)}\left(t\right)\right)={\int }_{0}^{t}\frac{{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)}{{y}_{i}^{\ast \left(j\right)}\left(u\right)}\text{d}u,\end{array}$ (9)

where for all $s\ge 0$,

${y}_{i}^{\ast \left(j\right)}\left(s\right)=\left[1-{F}_{i}^{\ast \left(j\right)}\left(s\right)\right]\left[1-{G}_{i}^{\ast \left(j\right)}\left({s}^{-}\right)\right]$ (10)

with ${G}_{i}^{\ast \left(j\right)}$ standing for the distribution function of ${C}_{i}^{\ast \left(j\right)}$ and ${\alpha }_{i}^{\ast \left(j\right)}$ the instant risk function.
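Theorem 1 can be probed by Monte Carlo simulation. The sketch below (our own illustration, not part of the proof) assumes a single cause with $T\sim \text{Exp}\left(\lambda \right)$ and $C\sim \text{Exp}\left(\mu \right)$, so that $y\left(u\right)={\text{e}}^{-\left(\lambda +\mu \right)u}$, $\alpha \left(u\right)=\lambda$ and the limiting variance in (9) is ${\int }_{0}^{t}\lambda {\text{e}}^{\left(\lambda +\mu \right)u}\text{d}u=\lambda \left({\text{e}}^{\left(\lambda +\mu \right)t}-1\right)/\left(\lambda +\mu \right)$; all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, mu, t, n, reps = 1.0, 0.5, 1.0, 2_000, 400   # illustrative assumptions

def nelson_aalen_at(t, Z, xi):
    """Nelson-Aalen estimator at time t (distinct observation times assumed)."""
    order = np.argsort(Z)
    Z_s, xi_s = Z[order], xi[order]
    at_risk = len(Z) - np.arange(len(Z))
    k = np.searchsorted(Z_s, t, side="right")
    return np.sum(xi_s[:k] / at_risk[:k])

stats = []
for _ in range(reps):
    T = rng.exponential(1.0 / lam, n)
    C = rng.exponential(1.0 / mu, n)
    Z, xi = np.minimum(T, C), (T <= C).astype(float)
    stats.append(np.sqrt(n) * (nelson_aalen_at(t, Z, xi) - lam * t))

v = lam * (np.exp((lam + mu) * t) - 1.0) / (lam + mu)   # limiting variance from (9)
print(np.mean(stats), np.var(stats), v)
```

The replicated values of $\sqrt{n}\left({\stackrel{^}{\Lambda }}_{n}\left(t\right)-\Lambda \left(t\right)\right)$ should be approximately centered with variance close to v.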

To prove this theorem, we need Rebolledo's theorem, stated below, which makes it possible to apply the central limit theorem to certain types of martingales.

Theorem 2. (Rebolledo’s Theorem)

Let ${M}^{n}={\sum }_{i=1}^{n}{M}_{i}$ be a sequence of martingales, where ${M}_{i}={K}_{i}-{A}_{i}$, ${K}_{i}$ denotes a counting process and ${A}_{i}$ its compensator. Consider the processes ${I}_{n}\left(t\right)={\int }_{0}^{t}{f}_{n}\left(s\right)\text{d}{M}^{n}\left(s\right)$ and, for all $\epsilon >0$, ${I}_{n,\epsilon }\left(t\right)={\int }_{0}^{t}\text{ }{f}_{n}\left(s\right)1\text{​}\text{​}{1}_{\left\{|{f}_{n}\left(s\right)|>\epsilon \right\}}\text{d}{M}^{n}\left(s\right)$. Suppose that ${f}_{n}$ and f are ${\mathcal{F}}_{{s}^{-}}$ -predictable and locally bounded processes such that

$\underset{s}{\mathrm{sup}}|{f}_{n}\left(s\right)-f\left(s\right)|\to 0\text{ }\left(n\to \infty \right).$

Suppose also that the processes ${K}_{i},{A}_{i},{f}_{n}$ are bounded. Set, for all $t>0$, $\alpha \left(t\right)={\int }_{0}^{t}\text{ }{f}^{2}\left(s\right)\text{d}s$. If

1) ${〈{I}_{n}〉}_{t}\stackrel{ℙ}{\to }\alpha \left(t\right),\left(n\to \infty \right)$ ;

2) for all $\epsilon >0$, ${〈{I}_{n,\epsilon }〉}_{t}\stackrel{ℙ}{\to }0,\left(n\to \infty \right)$.

Then,

$\left({I}_{n}\left(t\right),t>0\right)⇒\left({\int }_{0}^{t}\text{ }f\left(s\right)\text{d}W\left(s\right),t>0\right),\text{ }\left(n\to \infty \right),$

where $⇒$ denotes the weak convergence in the space of right-continuous functions with left-hand limits (càdlàg functions), endowed with the Skorokhod topology, and where W is a Brownian motion.

To prove Theorem 1, it suffices to check that the conditions of Rebolledo's Theorem are satisfied:

Proof. For all $j\in \left\{1,\cdots ,m\right\}$ and $t\ge 0$, ${M}_{i}^{\ast \left(j\right)}\left(t\right)$ also decomposes into

${M}_{i}^{\ast \left(j\right)}\left(t\right)={K}_{i}^{\ast \left(j\right)}\left(t\right)-{\int }_{0}^{t}\text{ }\text{ }\text{d}{\Lambda }_{i}^{\ast \left(j\right)}\left(s\right),$

which in turn can be written in terms of ${\alpha }_{i}^{\ast \left(j\right)}$ by

${M}_{i}^{\ast \left(j\right)}\left(t\right)={K}_{i}^{\ast \left(j\right)}\left(t\right)-{\int }_{0}^{t}\text{ }\text{ }{\alpha }_{i}^{\ast \left(j\right)}\left(s\right){L}_{i}^{\ast \left(j\right)}\left(s\right)\text{d}s,$

which finally, can be rewritten as

$\text{d}{K}_{i}^{\ast \left(j\right)}\left(t\right)={\alpha }_{i}^{\ast \left(j\right)}\left(t\right){L}_{i}^{\ast \left(j\right)}\left(t\right)\text{d}t+\text{d}{M}_{i}^{\ast \left(j\right)}\left(t\right),$

where $\text{d}{M}_{i}^{\ast \left(j\right)}\left(t\right)$ can be seen as a random noise process. The martingale ${M}_{i}^{\ast \left(j\right)}\left(t\right)$ above represents the difference between the number of failures due to a specific cause j observed in the time interval $\left[0,t\right]$, i.e. ${K}_{i}^{\ast \left(j\right)}\left(t\right)$ (see Njamen,  , p. 6), and the number of failures predicted by the model for the jth cause. This is precisely the Doob-Meyer decomposition of ${K}_{i}^{\ast \left(j\right)}$.

This martingale is used in Fleming and Harrington (  , p. 26) and in Breuils (  , p. 25).

Now, to describe the asymptotic behaviour of the estimators, we define, for all $t\ge 0$ and $j\in \left\{1,\cdots ,m\right\}$:

${N}^{\left(n\right)}\left(t\right)=\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{K}_{i}^{\ast \left(j\right)}\left(t\right),\text{ }\text{ }\text{ }{Y}^{\left(n\right)}\left(t\right)=\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{L}_{i}^{\ast \left(j\right)}\left(t\right),\text{ }\text{ }\text{ }{J}^{\left(n\right)}\left(t\right)=1\text{​}\text{​}{1}_{\left\{{Y}^{\left(n\right)}\left(t\right)>0\right\}}.$

In a subgroup ${A}^{\left(j\right)}$, where there is at least one observation, the survival function of ${Z}_{i}=\mathrm{min}\left({T}_{i},{C}_{i}\right)$ is defined for all $t\ge 0$ by:

${S}_{Z}^{\ast \left(j\right)}\left(t\right)=\left(1-{F}_{i}^{\ast \left(j\right)}\left(t\right)\right)\left(1-{G}_{i}^{\ast \left(j\right)}\left({t}^{-}\right)\right).$

Recall also that ${F}_{i}^{\ast \left(j\right)}$ is the distribution function of ${T}_{i}$, ${G}_{i}^{\ast \left(j\right)}$ that of ${C}_{i}$, and $1-\left(1-{F}_{i}^{\ast \left(j\right)}\right)\left(1-{G}_{i}^{\ast \left(j\right)}\right)$ that of the ${Z}_{i}$ ’s. From the Glivenko-Cantelli theorem, one has:

$\underset{s\in \left[0,t\right]}{\mathrm{sup}}|\frac{{Y}^{\left(n\right)}\left(s\right)}{n}-\left[1-{F}_{i}^{\ast \left(j\right)}\left(s\right)\right]\left[1-{G}_{i}^{\ast \left(j\right)}\left({s}^{-}\right)\right]|\stackrel{ℙ}{\to }0\text{ }\left(n\to \infty \right).$ (11)

Moreover, since

${J}^{\left(n\right)}\left(t\right)=1\text{​}\text{​}{1}_{\left\{{Y}^{\left(n\right)}\left(t\right)>0\right\}},$

one has:

$1-{J}^{\left(n\right)}\left(t\right)=1\text{​}\text{​}{1}_{\left\{{Y}^{\left(n\right)}\left(t\right)=0\right\}}=1\text{​}\text{​}{1}_{\left\{\mathcal{B}\left(n,\left[1-{F}_{i}^{\ast \left(j\right)}\left(t\right)\right]\left[1-{G}_{i}^{\ast \left(j\right)}\left({t}^{-}\right)\right]\right)=0\right\}}\stackrel{ℙ}{\to }0\text{ }\text{ }\text{ }\left(n\to \infty \right),$

from which one obtains (see Theorem 3, p. 11 of Njamen,  ),

${J}^{\left(n\right)}\left(t\right)\stackrel{ℙ}{\to }1\text{ }\left(n\to \infty \right).$

Differentiating the martingale ${M}_{i}^{\ast \left(j\right)}\left(t\right)={K}_{i}^{\ast \left(j\right)}\left(t\right)-{\int }_{0}^{t}\text{ }\text{ }{L}_{i}^{\ast \left(j\right)}\left(s\right){\alpha }_{i}^{\ast \left(j\right)}\left(s\right)\text{d}s$, one has:

$\text{d}{M}_{i}^{\ast \left(j\right)}\left(t\right)=\text{d}{K}_{i}^{\ast \left(j\right)}\left(t\right)-{L}_{i}^{\ast \left(j\right)}\left(t\right){\alpha }_{i}^{\ast \left(j\right)}\left(t\right)\text{d}t,$

and from

$\text{d}{〈{M}_{i}^{\ast \left(j\right)}〉}_{t}=\mathbb{V}ar\left(\text{d}{M}_{i}^{\ast \left(j\right)}\left(t\right)/{\mathcal{F}}_{{t}^{-}}\right),$

one obtains

$\begin{array}{c}\text{d}{〈{M}_{i}^{\ast \left(j\right)}〉}_{t}=\mathbb{V}ar\left(\text{d}{K}_{i}^{\ast \left(j\right)}\left(t\right)-{L}_{i}^{\ast \left(j\right)}\left(t\right){\alpha }_{i}^{\ast \left(j\right)}\left(t\right)\text{d}t/{\mathcal{F}}_{{t}^{-}}\right)\\ =\mathbb{V}ar\left(\text{d}{K}_{i}^{\ast \left(j\right)}\left(t\right)\text{ }/{\mathcal{F}}_{{t}^{-}}\right)={L}_{i}^{\ast \left(j\right)}\left(t\right){\alpha }_{i}^{\ast \left(j\right)}\left(t\right)\text{d}t.\end{array}$

Consequently, the increasing process of

${D}_{t}={\int }_{0}^{t}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}\text{d}{M}_{i}^{\ast \left(j\right)}\left(u\right),\text{\hspace{0.17em}}t\ge 0,$

is given by

${〈D〉}_{t}={\int }_{0}^{t}\frac{{\left({J}^{\left(n\right)}\right)}^{2}\left(u\right)}{{\left({Y}^{\left(n\right)}\right)}^{2}\left(u\right)}\text{d}{〈M〉}_{u},\text{\hspace{0.17em}}t\ge 0.$

Next, for all $t\ge 0$ and $j=\left\{1,\cdots ,m\right\}$, one has

$\begin{array}{c}{〈\sqrt{n}\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{\int }_{0}^{t}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}\text{d}{M}_{i}^{\ast \left(j\right)}\left(u\right)〉}_{t}=\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }n{\int }_{0}^{t}\frac{{\left({J}^{\left(n\right)}\right)}^{2}\left(u\right)}{{\left({Y}^{\left(n\right)}\right)}^{2}\left(u\right)}{L}_{i}^{\ast \left(j\right)}\left(u\right){\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\\ ={\int }_{0}^{t}\text{ }\text{ }n\frac{{\left({J}^{\left(n\right)}\right)}^{2}\left(u\right)}{{\left({Y}^{\left(n\right)}\right)}^{2}\left(u\right)}\underset{i=1}{\overset{n}{\sum }}{L}_{i}^{\ast \left(j\right)}\left(u\right){\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\\ ={\int }_{0}^{t}\text{ }\text{ }n\frac{{\left({J}^{\left(n\right)}\right)}^{2}\left(u\right)}{{\left({Y}^{\left(n\right)}\right)}^{2}\left(u\right)}{Y}^{\left(n\right)}\left(u\right){\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\\ ={\int }_{0}^{t}\text{ }\text{ }n\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u.\end{array}$

Also, for all $t\ge 0$ and for all $j\in \left\{1,\cdots ,m\right\}$, the process

$\sqrt{n}\left({\stackrel{^}{\Lambda }}_{n}^{\ast \left(j\right)}\left(t\right)-{\Lambda }^{\ast \left(j\right)}\left(t\right)\right)=\sqrt{n}\underset{i=1}{\overset{n}{\sum }}{\int }_{0}^{t}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}\text{d}{M}_{i}^{\ast \left(j\right)}\left(u\right)={R}_{n}\left(t\right),$

is a martingale. We apply the central limit theorem for martingales (Rebolledo's Theorem). To this end, we show that the conditions of this theorem are satisfied by ${R}_{n}\left(t\right)$.

One has, for all $i\in \left\{1,\cdots ,n\right\}$,

${〈{R}_{n}〉}_{t}={\int }_{0}^{t}\text{ }\text{ }n\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u,\text{ }\forall j\in \left\{1,\cdots ,m\right\},$

and also by the proof of the Theorem 3 of Njamen (  , p. 11), we have:

$\frac{{Y}^{\left(n\right)}\left(u\right)}{n}\stackrel{ℙ}{\to }\left(1-{F}_{i}^{\ast \left(j\right)}\left(u\right)\right)\left(1-{G}_{i}^{\ast \left(j\right)}\left({u}^{-}\right)\right),\text{ }{J}^{\left(n\right)}\left(u\right)\stackrel{ℙ}{\to }1,\text{ }\left(n\to \infty \right).$

So that, for all $j\in \left\{1,\cdots ,m\right\}$, when $n\to \infty$,

$\begin{array}{l}{〈{R}_{n}〉}_{t}={\int }_{0}^{t}\frac{{J}^{\left(n\right)}\left(u\right)}{\frac{{Y}^{\left(n\right)}\left(u\right)}{n}}{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\\ \stackrel{ℙ}{\to }{\int }_{0}^{t}\frac{{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u}{\left(1-{F}_{i}^{\ast \left(j\right)}\left(u\right)\right)\left(1-{G}_{i}^{\ast \left(j\right)}\left({u}^{-}\right)\right)}=\beta \left(t\right),\text{ }\left(n\to \infty \right),\end{array}$

which is deterministic. Thus, the first condition of Rebolledo's Theorem holds.

To check the second condition, for all $ϵ>0$ and $t\ge 0$, define

${R}_{n,\epsilon }\left(t\right)={\int }_{0}^{t}\sqrt{n}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}1\text{​}\text{​}{1}_{\left\{|\sqrt{n}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}|>ϵ\right\}}\text{d}{M}^{\left(n\right)}\left(u\right),$

where for all $j=1,\cdots ,m$, ${M}^{\left(n\right)}\left(u\right)={\sum }_{i=1}^{n}{M}_{i}^{\ast \left(j\right)}\left(u\right)$.

We have to show that, as $n\to \infty$, ${〈{R}_{n,\epsilon }〉}_{t}$ converges to 0 in probability.

One has, for all $t\ge 0$,

$\begin{array}{c}{〈{R}_{n,\epsilon }〉}_{t}={\int }_{0}^{t}\text{ }\text{ }n\frac{{J}^{\left(n\right)}\left(u\right)}{{\left({Y}^{\left(n\right)}\left(u\right)\right)}^{2}}1\text{​}\text{​}{1}_{\left\{|\sqrt{n}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}|>ϵ\right\}}\text{d}{〈{M}^{\left(n\right)}〉}_{u}\\ ={\int }_{0}^{t}\text{ }\text{ }n\frac{{J}^{\left(n\right)}\left(u\right)}{{\left({Y}^{\left(n\right)}\left(u\right)\right)}^{2}}1\text{​}\text{​}{1}_{\left\{|\sqrt{n}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}|>ϵ\right\}}{Y}^{\left(n\right)}\left(u\right){\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\\ ={\int }_{0}^{t}\text{ }\text{ }n\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}1\text{​}\text{​}{1}_{\left\{|\sqrt{n}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}|>ϵ\right\}}{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\\ \stackrel{ℙ}{\to }0,\text{ }\text{ }\text{ }\text{ }\text{ }\left(n\to \infty \right),\end{array}$

because

$n\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}\stackrel{ℙ}{\to }\frac{1}{\left(1-{F}_{i}^{\ast \left(j\right)}\left(u\right)\right)\left(1-{G}_{i}^{\ast \left(j\right)}\left({u}^{-}\right)\right)},\text{ }\left(n\to \infty \right).$

Then

$\sqrt{n}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}=\frac{1}{\sqrt{n}}n\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}\stackrel{ℙ}{\to }0,\text{ }\text{ }\left(n\to \infty \right).$

Thus, the second condition of Rebolledo's Theorem holds.

The conditions of Rebolledo's Theorem are thus verified and, consequently,

$\left({R}_{n}\left(t\right),t>0\right)⇒\left({\int }_{0}^{t}\text{ }f\left(s\right)\text{d}W\left(s\right),t>0\right),\text{ }\text{ }\text{ }\text{ }\text{ }\left(n\to \infty \right),$

with $\gamma \left(t\right)={\int }_{0}^{t}\text{ }\text{ }{f}^{2}\left(s\right)\text{d}s=\beta \left(t\right)$, the limit variance identified above.

Finally, for all $t>0$, $\sqrt{n}\left({\stackrel{^}{\Lambda }}_{n}^{\ast \left(j\right)}\left(t\right)-{\Lambda }^{\ast \left(j\right)}\left(t\right)\right)={R}_{n}\left(t\right)$ converges in law to the centered Gaussian martingale ${U}_{i}^{\ast \left(j\right)}\left(t\right)$ with variance $\beta \left(t\right)$, which is (8).

This ends the proof of Theorem 1.

The following subsection gives the asymptotic law of the nonparametric Kaplan-Meier estimator of the survival function in the competing risks framework of Njamen and Ngatchou (  , p. 13).

3.2. Limit Law of Kaplan-Meier’s Nonparametric Estimator in Competing Risks

The Kaplan-Meier estimator of the survival function (Kaplan and Meier,  ) is defined by

${\stackrel{^}{S}}_{n}\left(t\right)=\underset{s\le t}{\prod }\left(1-\Delta {\stackrel{^}{\Lambda }}_{n}\left(s\right)\right)=\underset{s\le t}{\prod }\left(1-\frac{{J}^{\left(n\right)}\left(s\right)\Delta {N}^{\left(n\right)}\left(s\right)}{{Y}^{\left(n\right)}\left(s\right)}\right),$

where ${\stackrel{^}{\Lambda }}_{n}\left(t\right)$ is the Nelson-Aalen estimator and where, for any right-continuous process $X\left(t\right)$ with left limits,

$\Delta X\left(t\right)=X\left(t\right)-X\left({t}^{-}\right).$

For all $j=1,\cdots ,m$, an estimator of the variance of ${\stackrel{^}{S}}_{n}^{\left(j\right)}\left(t\right)/{S}^{\ast \left(j\right)}\left(t\right)$, where ${S}^{\ast \left(j\right)}$ is the survival function associated with the subgroup ${A}^{\left(j\right)}$ is given by

${\stackrel{^}{\sigma }}^{\left(j\right)2}\left(t\right)={\int }_{0}^{t}\frac{{J}^{\left(n\right)}\left(s\right)}{{\left({Y}^{\left(n\right)}\right)}^{2}\left(s\right)}\text{d}{N}^{\left(n\right)}\left(s\right).$

The variance of ${\stackrel{^}{S}}_{n}^{\left(j\right)}\left(t\right)/{S}^{\left(j\right)}\left(t\right)$, approximated by that of ${\stackrel{^}{S}}_{n}^{\left(j\right)}\left(t\right)/{S}^{\ast \left(j\right)}\left(t\right)$, is:

$\begin{array}{c}\mathbb{V}\left[\frac{{\stackrel{^}{S}}_{n}^{\left(j\right)}\left(t\right)}{{S}^{\ast \left(j\right)}\left(t\right)}-1\right]=\mathbb{E}\left[〈\frac{{\stackrel{^}{S}}_{n}^{\left(j\right)}}{{S}^{\ast \left(j\right)}}-1〉\left(t\right)\right]\\ ={\int }_{0}^{t}{\left\{\frac{{\stackrel{^}{S}}_{n}^{\left(j\right)}\left({s}^{-}\right)}{{S}^{\ast \left(j\right)}\left(s\right)}\right\}}^{2}×\frac{{J}^{\left(n\right)}\left(s\right)}{{Y}^{\left(n\right)}\left(s\right)}{\alpha }_{i}^{\ast \left(j\right)}\left(s\right)\text{d}s\text{ }\forall i\in \left\{1,\cdots ,n\right\}.\end{array}$ (12)

The estimator of the corresponding variance of ${\stackrel{^}{S}}_{n}^{\left(j\right)}\left(t\right)$ is given by

$\stackrel{^}{\mathbb{V}}\left({\stackrel{^}{S}}_{n}^{\left(j\right)}\left(t\right)\right)={\left[{\stackrel{^}{S}}_{n}^{\left(j\right)}\left(t\right)\right]}^{2}{\stackrel{^}{\sigma }}_{i}^{\left(j\right)2}\left(t\right)\text{ }\forall i\in \left\{1,\cdots ,n\right\}.$ (13)
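With distinct observation times, (13) is the familiar plug-in (Greenwood-type) variance of the product-limit estimator. The sketch below (our own illustration, on hypothetical exponential data with true $S\left(t\right)={\text{e}}^{-t}$) computes the estimator, the variance estimate (13), and a pointwise 95% normal confidence interval:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5_000
T = rng.exponential(1.0, n)          # hypothetical failure times, S(t) = exp(-t)
C = rng.exponential(2.0, n)          # hypothetical censoring times
Z = np.minimum(T, C)
delta = (T <= C).astype(float)

order = np.argsort(Z)
Z_s, d_s = Z[order], delta[order]
Y = n - np.arange(n)                              # at-risk counts Y^(n) at the ordered Z's

S_hat = np.cumprod(1.0 - d_s / Y)                 # product-limit estimator
sigma2 = np.cumsum(d_s / Y**2)                    # plug-in sigma_hat^2 (distinct times)
var_S = S_hat**2 * sigma2                         # variance estimate as in (13)

t = 1.0
k = np.searchsorted(Z_s, t, side="right") - 1
half = 1.96 * np.sqrt(var_S[k])
print(S_hat[k], (S_hat[k] - half, S_hat[k] + half), np.exp(-t))
```

The interval shrinks at rate $1/\sqrt{n}$, consistent with the $\sqrt{n}$ normalization in the limit theorems of this section.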

The following result gives the asymptotic law of the nonparametric Kaplan-Meier estimator and constitutes the second fundamental result of this paper:

Theorem 3.

In an area where there is at least one observation, if we assume that for all $j\in \left\{1,\cdots ,m\right\}$ and $i\in \left\{1,\cdots ,n\right\}$,

1) for all $s\in \left[0,t\right]$,

$n{\int }_{0}^{s}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\stackrel{ℙ}{\to }{\sigma }_{i}^{\ast \left(j\right)2}\left(s\right)\text{ }\left(n\to \infty \right),$

2) for all $\epsilon >0$,

$n{\int }_{0}^{t}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}{\alpha }_{i}^{\ast \left(j\right)}1\text{​}\text{​}{1}_{\left\{\sqrt{n}|\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}|>\epsilon \right\}}\text{d}u\stackrel{ℙ}{\to }0\text{ }\left(n\to \infty \right),$

3) for all $t>0$,

$\sqrt{n}{\int }_{0}^{t}\left(1-{J}^{\left(n\right)}\left(u\right)\right){\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\stackrel{ℙ}{\to }0\text{ }\left(n\to \infty \right).$

Then, for all $t>0$ and $j\in \left\{1,\cdots ,m\right\}$, the nonparametric estimator ${\stackrel{^}{S}}_{n}^{\ast \left(j\right)}$ satisfies

$\sqrt{n}\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{S}^{\ast \left(j\right)}\left(t\right)\right)⇒-{U}_{i}^{\ast \left(j\right)}\left(t\right)×{S}^{\ast \left(j\right)}\left(t\right),\text{ }\left(n\to \infty \right),$

where ${U}_{i}^{\ast \left(j\right)}$ is the centered Gaussian martingale of Theorem 1 and where $⇒$ denotes the weak convergence in the space of right-continuous functions with left-hand limits, endowed with the Skorokhod topology.

Proof. To prove this theorem, it suffices to verify the conditions of Rebolledo's Theorem.

In an area where there is at least one observation, set, for all $j=1,\cdots ,m$ and $i=1,\cdots ,n$,

${\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)=\mathrm{exp}\left(-{\stackrel{˜}{\Lambda }}_{n}^{\ast \left(j\right)}\left(t\right)\right)$

where ${\stackrel{˜}{\Lambda }}_{n}^{\ast \left(j\right)}\left(t\right)={\int }_{0}^{t}\text{ }{J}^{\left(n\right)}\left(u\right){\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u$.

For $t\in \left[0,\tau \left[$ and $\tau >0$, we have for all $j=1,\cdots ,m$ and $i=1,\cdots ,n$,

$\begin{array}{l}{〈\sqrt{n}\left(\frac{{\stackrel{^}{S}}_{n}^{\left(j\right)}}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}}-1\right)〉}_{t}=n{\int }_{0}^{t}\frac{{\stackrel{^}{S}}_{n}^{\left(j\right)}{\left({u}^{-}\right)}^{2}}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}{\left(u\right)}^{2}}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\\ \stackrel{ℙ}{\to }{\sigma }_{i}^{\ast \left(j\right)2}\left(t\right),\text{ }\left(n\to \infty \right).\end{array}$

By the proof of Theorem 3 of Njamen (  , p.11), we deduce that

$\frac{{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left({u}^{-}\right)}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(u\right)}\stackrel{ℙ}{\to }1,\text{ }\left(n\to \infty \right).$

This establishes the first condition of Rebolledo’s theorem.

For the second condition (condition B) of Rebolledo’s theorem, proceeding as in the proof of Theorem 1 above, we find that for all $\epsilon >0$,

$n{\int }_{0}^{t}\frac{{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}{\left({u}^{-}\right)}^{2}}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}{\left(u\right)}^{2}}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}{1}_{\left\{\sqrt{n}\text{ }\text{ }|\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}|>\epsilon \right\}}{\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\to 0,\text{ }\left(n\to \infty \right).$

So, for each $t>0$,

$\sqrt{n}{\int }_{0}^{t}\frac{{\stackrel{^}{S}}_{n}^{\left(j\right)}\left({u}^{-}\right)}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(u\right)}\frac{{J}^{\left(n\right)}\left(u\right)}{{Y}^{\left(n\right)}\left(u\right)}\text{d}{M}^{\left(n\right)}\left(u\right)⇒{U}_{i}^{\ast \left(j\right)}\left(t\right),$

where ${M}^{\left(n\right)}\left(u\right)={\sum }_{i=1}^{n}{M}_{i}^{\ast \left(j\right)}\left(u\right)$.

Finally,

$\sqrt{n}\left(\frac{{\stackrel{^}{S}}_{n}^{\left(j\right)}\left(t\right)}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)}-1\right)⇒-{U}_{i}^{\ast \left(j\right)}\left(t\right).$

Since ${S}^{\ast \left(j\right)}\left(u\right)\le {S}_{n}^{\ast \left(j\right)}\left(u\right)$ for all $u\in \left[0,s\left[$, condition C implies:

$\begin{array}{c}\sqrt{n}\text{ }\text{ }|\frac{{S}^{\ast \left(j\right)}\left(t\right)}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)}-1|\le \sqrt{n}{\int }_{0}^{t}\frac{{S}^{\ast \left(j\right)}\left(u\right)}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(u\right)}\text{d}\left({\Lambda }^{\ast \left(j\right)}-{\stackrel{˜}{\Lambda }}_{n}^{\ast \left(j\right)}\right)\left(u\right)\\ \le \sqrt{n}{\int }_{0}^{t}\left(1-{J}^{\left(n\right)}\left(u\right)\right){\alpha }_{i}^{\ast \left(j\right)}\left(u\right)\text{d}u\\ \stackrel{ℙ}{\to }0\text{ }\left(n\to \infty \right).\end{array}$

As ${\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)\to {S}^{\ast \left(j\right)}\left(t\right)$ when $n\to \infty$, we deduce that:

$\sqrt{n}\left({\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{S}^{\ast \left(j\right)}\left(t\right)\right)\stackrel{ℙ}{\to }0,\text{ }n\to \infty .$

It follows that:

$\begin{array}{l}\sqrt{n}\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{S}^{\ast \left(j\right)}\left(t\right)\right)\\ =\sqrt{n}\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)\right)+\sqrt{n}\left({\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{S}^{\ast \left(j\right)}\left(t\right)\right)\\ =\frac{\sqrt{n}\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)\right)}{{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)}{\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)+\sqrt{n}\left({\stackrel{˜}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{S}^{\ast \left(j\right)}\left(t\right)\right)\\ ⇒-{U}_{i}^{\ast \left(j\right)}\left(t\right){S}^{\ast \left(j\right)}\left(t\right),\text{ }\left(n\to \infty \right).\end{array}$

This ends the proof of the theorem.

4. Confidence Bands for the Survival Function

4.1. Confidence Intervals

For $\alpha \in \left(0,1\right)$, we wish to find two random functions ${b}_{L}$ and ${b}_{U}$ such that $\forall t>0$,

$ℙ\left[{b}_{U}\left(t\right)\ge S\left(t\right)\ge {b}_{L}\left(t\right)\right]=1-\alpha .$

Recall from the previous sections that, for all $j\in \left\{1,\cdots ,m\right\}$, $\sqrt{n}\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{S}^{\ast \left(j\right)}\left(t\right)\right)/{S}^{\ast \left(j\right)}\left(t\right)$ converges in distribution to a centered Gaussian martingale (see Theorem 3 above). As a consequence, ${\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)$ is asymptotically Gaussian, centered at ${S}^{\ast \left(j\right)}\left(t\right)$. Given the above results, the estimated relative standard deviation of ${\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)$, denoted ${\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)}$, is given for all $t\ge 0$ by:

${\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast 2}\left(t\right)=\frac{\stackrel{^}{\mathbb{V}}\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)\right)}{{\left[{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)\right]}^{2}}.$ (14)

Therefore, a confidence interval at level $100\left(1-\alpha \right)%$ can be constructed for all $t\ge 0$ and $j\in \left\{1,\cdots ,m\right\}$ by:

$\left[{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)-{z}_{1-\alpha /2}{\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)}\left(t\right){\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)+{z}_{1-\alpha /2}{\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)}\left(t\right){\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)\right].$ (15)

Here ${z}_{1-\alpha /2}$ is the $1-\alpha /2$ quantile of the standard normal distribution.

A $100\left(1-\alpha \right)%$ confidence interval can also be written, for all $j\in \left\{1,\cdots ,m\right\}$, as:

${\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)±{z}_{\alpha /2}{\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)},$ (16)

where ${z}_{\alpha /2}$ is the upper $\alpha /2$ quantile of the standard normal distribution.
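As a numerical illustration, the pointwise interval (15)-(16) can be computed from a Kaplan-Meier estimate whose variance is estimated by the Greenwood formula. The following Python sketch is only an illustration under these assumptions; the function name and the data layout are not from the paper.

```python
import math

def km_with_ci(times, events, z=1.96):
    """Kaplan-Meier estimate with a Greenwood variance and the plain
    normal confidence interval; `times` are observed durations and
    `events` are flags (1 = failure observed, 0 = censored).
    Illustrative sketch, not the paper's notation."""
    event_times = sorted({t for t, e in zip(times, events) if e})
    s, greenwood, out = 1.0, 0.0, []
    for t in event_times:
        n_risk = sum(1 for u in times if u >= t)                  # at risk at t
        d = sum(1 for u, e in zip(times, events) if e and u == t)  # deaths at t
        s *= 1.0 - d / n_risk                                      # KM step
        if n_risk > d:                                             # Greenwood term
            greenwood += d / (n_risk * (n_risk - d))
        sigma = s * math.sqrt(greenwood)        # estimated sd of S-hat(t)
        out.append((t, s, max(0.0, s - z * sigma), min(1.0, s + z * sigma)))
    return out
```

Note that the plain normal interval has to be truncated to $\left[0,1\right]$ by hand, which is precisely the drawback that motivates transforming the estimator.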

A disadvantage of constructing the confidence interval (CI) with the previous formula is that the bounds may fall outside the interval $\left[0,1\right]$. A solution is to transform ${S}^{\ast \left(j\right)}\left(t\right)$ $\left(j\in \left\{1,\cdots ,m\right\}\right)$ by a continuous, differentiable and invertible function g such that $g\left({S}^{\ast \left(j\right)}\left(t\right)\right)$ ranges over a wider, ideally unbounded, space and is better approximated by a Gaussian random variable. The delta method then allows the standard deviation of the transformed quantity, ${\stackrel{^}{\sigma }}_{g\left({S}_{t}^{*}\right)}^{\ast \left(j\right)}$, to be estimated by ${\stackrel{^}{\sigma }}_{g\left({S}_{t}^{*}\right)}^{\ast \left(j\right)}\left(t\right)={g}^{\prime }\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\right){\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)}\left(t\right)$. The confidence interval associated with the risk level $\alpha$ is then built, for all $j\in \left\{1,\cdots ,m\right\}$, as

${g}^{-1}\left(g\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\right)±{z}_{\alpha /2}{g}^{\prime }\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\right){\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)}\left(t\right)\right).$
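The display above can be implemented directly. In this hedged Python sketch, the function and argument names are illustrative; `g`, `g_prime` and `g_inv` stand for the transformation, its derivative and its inverse, and `sd` for the estimated standard deviation of ${\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)$.

```python
import math

def transformed_ci(s_hat, sd, g, g_prime, g_inv, z=1.96):
    """Delta-method interval g^{-1}( g(S-hat) +/- z * g'(S-hat) * sd ).
    Illustrative sketch of the formula above."""
    half = z * g_prime(s_hat) * sd
    a = g_inv(g(s_hat) - half)
    b = g_inv(g(s_hat) + half)
    return min(a, b), max(a, b)

# example: logit transform, whose inverse maps back into (0, 1)
logit = lambda s: math.log(s / (1.0 - s))
dlogit = lambda s: 1.0 / (s * (1.0 - s))
expit = lambda y: 1.0 / (1.0 + math.exp(-y))
```

With the logit transform, for instance, the returned bounds are guaranteed to lie in $\left(0,1\right)$, since the inverse logit maps the whole real line into that interval.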

The most common transformation is $g\left({S}_{t}^{*}\right)=\mathrm{log}\left[-\mathrm{log}\left({S}_{t}^{*}\right)\right]$, in which case we have, for all $j\in \left\{1,\cdots ,m\right\}$,

${\stackrel{^}{\sigma }}_{\mathrm{log}\left[-\mathrm{log}\left({S}_{t}^{*}\right)\right]}^{\ast \left(j\right)}=\frac{{\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)}}{{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\mathrm{log}{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}}\text{ }\text{and}\text{ }{\left[{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\right]}^{\mathrm{exp}\left(±{z}_{\alpha /2}\frac{{\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)}}{{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\mathrm{log}\left({\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\right)}\right)}.$

Remark 1. It is also possible to use the log, square-root (arcsine) or logit transformations, available in most software and defined respectively, for all $j\in \left\{1,\cdots ,m\right\}$, by

$g\left({S}_{t}^{\ast \left(j\right)}\right)=\mathrm{log}\left[{S}_{t}^{\ast \left(j\right)}\right],\text{ }g\left({S}_{t}^{\ast \left(j\right)}\right)={\mathrm{sin}}^{-1}\left[\sqrt{{S}_{t}^{\ast \left(j\right)}}\right],\text{ }g\left({S}_{t}^{\ast \left(j\right)}\right)=\mathrm{log}\left[\frac{{S}_{t}^{\ast \left(j\right)}}{1-{S}_{t}^{\ast \left(j\right)}}\right].$
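To make the most common case concrete, here is a minimal Python sketch of the $\mathrm{log}\left[-\mathrm{log}\right]$ interval in its power form; the function name is illustrative and `sd` stands for the estimated standard deviation of ${\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)$. By construction, the returned bounds always lie inside $\left(0,1\right)$.

```python
import math

def loglog_ci(s_hat, sd, z=1.96):
    """Confidence interval for a survival probability via the
    complementary log-log transform g(S) = log(-log S).
    Illustrative sketch; `sd` is the estimated sd of S-hat(t)."""
    if not 0.0 < s_hat < 1.0:
        raise ValueError("the transform requires 0 < s_hat < 1")
    # delta-method factor exp(z * sd / (S log S)); note log S < 0 here
    theta = math.exp(z * sd / (s_hat * math.log(s_hat)))
    lo, hi = s_hat ** (1.0 / theta), s_hat ** theta
    return min(lo, hi), max(lo, hi)
```

Raising $0<{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}<1$ to a positive power always stays in $\left(0,1\right)$, which is the practical advantage over the untransformed interval.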

4.2. The Confidence Bands

The challenge now is to find a region containing the survival function with probability $1-\alpha$, that is, a pair of bounds ${b}_{L}\left(t\right)$ and ${b}_{U}\left(t\right)$ which, with probability $1-\alpha$, contain ${S}^{\ast \left(j\right)}\left(t\right)$ for all $t\in \left[{t}_{L},{t}_{U}\right]$ and $j\in \left\{1,\cdots ,m\right\}$. Among the proposed solutions, the two most commonly used are the Hall and Wellner (  ) bands and the Nair (  ) bands (“equal precision bands”). If ${t}_{k}$ is the largest event time observed in the sample, then the Nair bands require $0<{t}_{L}<{t}_{U}\le {t}_{k}$, whereas the Hall-Wellner bands allow ${t}_{L}$ to vanish, i.e. $0\le {t}_{L}<{t}_{U}\le {t}_{k}$. Obtaining these bands is technically involved, and their practical advantage over pointwise intervals is not obvious.

Remark 2. The starting point uses the fact that, for all $j\in \left\{1,\cdots ,m\right\}$, $\sqrt{n}\left(\frac{{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)}{{S}^{\ast \left(j\right)}\left(t\right)}-1\right)$ converges to a centered Gaussian martingale. One then applies a transformation which brings out a Brownian bridge $\left\{{W}^{0}\left(x\right),x\in \left[0,1\right]\right\}$, weighted by $\frac{1}{\sqrt{x\left(1-x\right)}}$ in Nair’s case, in order to retrieve the suitable critical value.

In particular, because of their simultaneous nature, for a given t these bands are wider than the corresponding pointwise CI. In what follows we give the expressions obtained in the absence of transformation.

4.2.1. The Hall-Wellner Confidence Bands

Under the assumption of continuity of the survival functions ${S}^{\ast \left(j\right)}\left(t\right)$ and ${C}^{\ast \left(j\right)}\left(t\right)$, related respectively to the event time and to the censoring time, Hall and Wellner show that, for every $t\in \left[{t}_{L},{t}_{U}\right]$, the simultaneous confidence band at risk level $\alpha$ is given, for all $j=1,\cdots ,m$ and $t\ge 0$, by:

${\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)±{h}_{\alpha }\left({x}_{L},{x}_{U}\right){n}^{\frac{-1}{2}}\left[1+n{\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)2}\left(t\right)\right]{\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right),$ (17)

where ${x}_{L}$ and ${x}_{U}$ are given by

${x}_{i}=\frac{n{\stackrel{^}{\sigma }}_{{S}_{{t}_{i}}^{*}}^{\ast \left(j\right)2}\left(t\right)}{\left(1+n{\stackrel{^}{\sigma }}_{{S}_{{t}_{i}}^{*}}^{\ast \left(j\right)2}\left(t\right)\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}i=L,U$

and ${h}_{\alpha }\left({x}_{L},{x}_{U}\right)$ is the bound satisfying

$\alpha =ℙ\left[\underset{{x}_{L}\le x\le {x}_{U}}{\mathrm{sup}}|{W}^{0}\left(x\right)|>{h}_{\alpha }\left({x}_{L},{x}_{U}\right)\right].$

4.2.2. The Nair Equal Precision Bands

Using a weighted Brownian bridge modifies the bounds of the band appreciably. For $\alpha \in \left(0,1\right)$, $t\in \left[{t}_{L},{t}_{U}\right]$ and all $j\in \left\{1,\cdots ,m\right\}$, they are then given by:

${\stackrel{^}{S}}_{n}^{\ast \left(j\right)}\left(t\right)±{e}_{\alpha }\left({x}_{L},{x}_{U}\right){\stackrel{^}{\sigma }}_{{S}_{t}^{*}}^{\ast \left(j\right)},$ (18)

where ${e}_{\alpha }\left({x}_{L},{x}_{U}\right)$ satisfies

$\alpha =ℙ\left[\underset{{x}_{L}\le x\le {x}_{U}}{\mathrm{sup}}\frac{|{W}^{0}\left(x\right)|}{\sqrt{x\left(1-x\right)}}>{e}_{\alpha }\left({x}_{L},{x}_{U}\right)\right].$
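The critical values ${h}_{\alpha }\left({x}_{L},{x}_{U}\right)$ and ${e}_{\alpha }\left({x}_{L},{x}_{U}\right)$ defined by the two sup-probabilities above are usually read from tables, but they can also be approximated by simulating the (weighted) Brownian bridge. The Python sketch below is a Monte-Carlo illustration under arbitrary grid and sample sizes, not the method used in the paper.

```python
import random

def band_critical_values(x_l, x_u, alpha=0.05, n_grid=200, n_sim=2000, seed=1):
    """Monte-Carlo approximation of h_alpha (Hall-Wellner) and
    e_alpha (Nair): (1-alpha)-quantiles of the sup of |W0(x)| and of
    |W0(x)|/sqrt(x(1-x)) over [x_l, x_u].  Illustrative sketch."""
    rng = random.Random(seed)
    dt = 1.0 / n_grid
    sup_plain, sup_weighted = [], []
    for _ in range(n_sim):
        # Brownian motion sampled on {0, 1/n_grid, ..., 1}
        w, path = 0.0, [0.0]
        for _ in range(n_grid):
            w += rng.gauss(0.0, dt ** 0.5)
            path.append(w)
        w1 = path[-1]
        m_plain = m_weighted = 0.0
        for k, wk in enumerate(path):
            x = k * dt
            if x_l <= x <= x_u:
                bridge = abs(wk - x * w1)          # |W0(x)|
                m_plain = max(m_plain, bridge)
                if 0.0 < x < 1.0:
                    m_weighted = max(m_weighted, bridge / (x * (1 - x)) ** 0.5)
        sup_plain.append(m_plain)
        sup_weighted.append(m_weighted)
    sup_plain.sort()
    sup_weighted.sort()
    idx = int((1 - alpha) * n_sim) - 1
    return sup_plain[idx], sup_weighted[idx]
```

Since the weight $\frac{1}{\sqrt{x\left(1-x\right)}}$ is at least 2 on the interior of $\left[0,1\right]$, the Nair critical value always exceeds the Hall-Wellner one over the same $\left[{x}_{L},{x}_{U}\right]$.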

If we compare (16) and (18), we see that the bounds of the Nair (  ) bands are proportional to the pointwise CI bounds, and simply correspond to an adjustment of the risk level used.

5. Conclusions and Perspectives

In this paper, we have studied the asymptotic normality of Nelson-Aalen and Kaplan-Meier type estimators in the presence of independent right-censoring, as defined in Njamen and Ngatchou (  ,  ) and Njamen  , using Rebolledo’s theorem, which allows the central limit theorem to be applied to certain particular types of martingales. From the results obtained, confidence bounds for the hazard and the survival functions are provided.

As a perspective, real data would allow us to carry out numerical simulations to gauge the robustness of the estimators obtained.

Acknowledgements

We thank the publisher and the referees for their comments, which considerably raised the level of this article.

Cite this paper: Njomen, D. (2019) Asymptotic Normality of the Nelson-Aalen and the Kaplan-Meier Estimators in Competing Risks. Applied Mathematics, 10, 545-560. doi: 10.4236/am.2019.107038.
References

   Heckman, J.J. and Honoré, B.E. (1989) The Identifiability of the Competing Risks Models. Biometrika, 76, 325-330.
https://www.jstor.org/stable/2336666
https://doi.org/10.1093/biomet/76.2.325

   Commenges, D. (2017) Risques compétitifs et modèles multi-états en épidémiologie. Revue d’Épidémiologie et de Santé Publique, 77, 605-611.

   Com-Nougué, C., Guérin, S. and Rey, A. (1999) Estimation des risques associés à des événements multiples. Revue d’épidémiologie et de Santé Publique, 47, 75-85.

   Fine, J.P. and Gray, R.J. (1999) A Proportional Hazards Model for the Subdistribution of a Competing Risk. Journal of the American Statistical Association, 94, 496-509.
https://www.jstor.org/stable/2670170
https://doi.org/10.1080/01621459.1999.10474144

   Crowder, M. (2001) Classical Competing Risks. Chapman and Hall, London.

   Fermanian, J.D. (2003) Nonparametric Estimation of Competing Risks Models with Covariates. Journal of Multivariate Analysis, 85, 156-191.
https://doi.org/10.1016/S0047-259X(02)00069-6

   Latouche, A. (2004) Modèles de régression en présence de compétition. Thèse de doctorat, Université Paris 6, Paris.
https://tel.archives-ouvertes.fr/tel-00129238

   Geffray, S. (2009) Strong Approximations for Dependent Competing Risks with Independent Censoring with Statistical Applications. Test, 18, 76-95.
https://doi.org/10.1007/s11749-008-0113-y

   Belot, A. (2009) Modélisation flexible des données de survie en présence de risques concurrents et apports de la méthode du taux en excès. Thèse de doctorat, Université de la Méditerranée, Marseille.

   Njamen, N.D.A. and Ngatchou, W.J. (2014) Nelson-Aalen and Kaplan-Meier Estimators in Competing Risks. Applied Mathematics, 5, 765-776.
https://doi.org/10.4236/am.2014.54073

   Njamen, N.D.A. and Ngatchou, W.J. (2018) Consistency of the Kaplan-Meier Estimator of the Survival Function in Competing Risks. The Open Statistics and Probability Journal, 9, 1-17.
https://benthamopen.com/TOSPJ/home
https://doi.org/10.2174/1876527001809010001

   Njamen, N.D.A. (2017) Convergence of the Nelson-Aalen Estimator in Competing Risks. International Journal of Statistics and Probability, 6, 9-23.
https://doi.org/10.5539/ijsp.v6n3p9

   Njamen, N.D.A. (2018) Study of the Nonparametric Kaplan-Meier Estimator of the Cumulative Incidence Function in Competing Risks. Journal of Advanced Statistics, 3, 1-13.
https://doi.org/10.22606/jas.2018.31001

   Aalen, O.O. and Johansen, S. (1978) An Empirical Transition Matrix for Non-Homogeneous Markov Chains Based on Censored Observations. Scandinavian Journal of Statistics, 5, 141-150.
https://www.jstor.org/stable/4615704

   Peterson, A.V., Jr. (1977) Expressing the Kaplan-Meier Estimator as a Function of Empirical Subsurvival Functions. Journal of the American Statistical Association, 72, 854-858.

   Andersen, P.K., Borgan, Ø., Gill, R.D. and Keiding, N. (1993) Statistical Models Based on Counting Processes. Springer Series in Statistics, Springer-Verlag, New York.

   Shorack, G.R. and Wellner, J.A. (1986) Empirical Processes with Applications to Statistics. John Wiley and Sons, Inc., New York.

   Breslow, N. and Crowley, J. (1974) A Large Sample Study of the Life Table and Product-Limit Estimates under Random Censorship. The Annals of Statistics, 2, 437-453.
https://www.jstor.org/stable/2958131
https://doi.org/10.1214/aos/1176342705

   Nelson, W. (1972) A Short Life Test for Comparing a Sample with Previous Accelerated Test Results. Technometrics, 14, 175-185.
https://www.jstor.org/stable/1266929
https://doi.org/10.1080/00401706.1972.10488894

   Aalen, O.O. (1978) Nonparametric Inference for a Family of Counting Processes. The Annals of Statistics, 6, 701-726.
https://www.jstor.org/stable/2958850
https://doi.org/10.1214/aos/1176344247

   Fleming, T.R. and Harrington, D.P. (1990) Counting Processes and Survival Analysis. John Wiley and Sons, Hoboken.

   Breuils, C. (2003) Analyse de Durées de Vie: Analyse Séquentielle du Modèle des Risques Proportionnels et Tests d’Homogénéité. Thèse de doctorat, Université de Technologie de Compiègne, Compiègne.
https://tel.archives-ouvertes.fr/tel-00005524

   Kaplan, E.L. and Meier, P. (1958) Nonparametric Estimation from Incomplete Observations. Journal of the American Statistical Association, 53, 457-481.
https://www.jstor.org/stable/2281868
https://doi.org/10.1080/01621459.1958.10501452

   Hall, W.J. and Wellner, J.A. (1980) Confidence Bands for a Survival Curve. Biometrika, 67, 133-143.
https://www.jstor.org/stable/2335326
https://doi.org/10.1093/biomet/67.1.133

   Nair, V.N. (1984) Confidence Bands for Survival Functions with Censored Data: A Comparative Study. Technometrics, 26, 265-275.
https://www.jstor.org/stable/1267553
https://doi.org/10.1080/00401706.1984.10487964
