Maximum Likelihood Estimation for the Pooled Repeated Partly Interval-Censored Observations Logistic Regression Model


1. Introduction

In longitudinal studies, subjects who are likely to progress to a new state during the study are monitored over time. For example, in clinical trials, subjects at high risk of a certain disease are monitored through scheduled follow-up visits. Some subjects complete all of their follow-up visits, and their failure times are recorded exactly. Others, however, miss some of their follow-up visits and may learn upon returning that the event of interest has already occurred. The event times for these subjects are censored within the corresponding person-specific time intervals. Although there are multiple follow-up visiting intervals for each subject, researchers typically use the one particular interval that contains the true unknown failure time unless the failure time was determined exactly. Such data are known as “partly interval-censored failure time data”. There is a sizable literature on partly interval-censored data, including [1] [2] [3] and [4], among others.

Another commonly available data type in longitudinal studies is called pooled repeated observations. Subjects have multiple follow-up visits as usual. At every visit, a subject obtains a binary outcome for the event of interest. All those repeated binary outcomes are pooled together to develop a model that analyzes the effects of time-dependent covariates on the occurrence of the event. [5] and [6] pooled such repeated observations with binary outcomes for the event of interest into a single sample. They then used a logistic regression model to estimate the effects of the risk factors on the occurrence of the event. Each observation interval is considered a mini follow-up study in which the current risk factors are updated to predict events in the interval. Once an individual has an event in a particular interval, all subsequent intervals from that individual are excluded from the analysis.

Now, we define pooled repeated partly interval-censored data. We have pooled repeated observations, but some binary outcomes and covariates are incomplete. They can only be determined with certain unknown probabilities within the corresponding specific follow-up visits. In this case, the analysis of such data requires a new method that combines a model that handles pooled repeated observations without censoring and a method that deals with partly or completely interval-censored data.

The main goal of this study is to estimate the effects of time-dependent covariates on the occurrence of the event of interest (e.g., progression to a disease, becoming a frequent smoker, etc.). We extend the work of [7], who employed a conditional expected score test (CEST) to detect an association between a longitudinal marker and an event with missing binary outcomes, to the estimation problem in which the event of interest has a single progression state and the response is pooled, repeated, and partly interval-censored. We assume that the missing data are missing at random (MAR). Under MAR, there may be systematic differences between the observed and missing data, but these differences can be explained by the observed data. The EM algorithm was originally developed to handle MAR data.

The organization of this paper is as follows. In Section 2, we present a logistic regression model for pooled repeated partly interval-censored data. In Section 3, we provide the details of computation of the MLEs of the regression parameter via EM algorithm and the variance estimation through the missing information principle. Section 4 displays the simulation study results. Section 5 illustrates an application to a real data set. Finally, Section 6 briefly summarizes what we have achieved and also discusses potential extensions of our work.

2. Model

We consider longitudinal studies in which subjects are at risk of an event of interest and have scheduled follow-up visits. Some subjects make all of their follow-up visits, but others miss some of their appointments and come back after the event of interest has occurred. Whenever a subject misses a visit, both the binary outcome for the event of interest and the covariates are missing. Our proposed model estimates the effects of time-dependent covariates on the event of interest.

Let
${T}_{i}$ be the time subject *i* experiences the event of interest,
$i=\mathrm{1,}\cdots \mathrm{,}n$. At the beginning of the study, every subject is assigned to the same follow-up visits,
${t}_{j}$,
$j=\mathrm{1,}\cdots \mathrm{,}M$. Let
${y}_{ij}$ be the indicator of whether or not subject *i* has had the event of interest in the *j*th interval given a subject was event-free through
${t}_{j-1}$ and
${x}_{ij}$, the covariate at time
${t}_{j-1}$. Since we are interested in modeling a binary outcome, we use a logit link to model the probability of event as in [7].

$\text{logit}\left({p}_{ij}\right)=\mathrm{log}\left({p}_{ij}/\left(1-{p}_{ij}\right)\right)=\alpha +{\beta}^{\prime}{x}_{ij}\mathrm{,}$ (1)

where

${p}_{ij}=P\left({y}_{ij}=1|{x}_{ij},{T}_{i}>{t}_{j-1}\right).$ (2)

We construct the full (complete-data) log-likelihood as if there were no missing visits while subjects were in the study.

$l={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.17em}}{\displaystyle \underset{j=1}{\overset{{M}_{i}}{\sum}}}\left[-\mathrm{log}\left(1+\mathrm{exp}\left(\alpha +{\beta}^{\prime}{x}_{ij}\right)\right)+{y}_{ij}\left(\alpha +{\beta}^{\prime}{x}_{ij}\right)\right],$ (3)

where
${M}_{i}$ is the index of the last time subject *i* was in the study.
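As a concrete illustration, the complete-data log-likelihood (3) can be evaluated directly. The following is a Python sketch (the paper's computations were done in R); the per-subject array layout is an assumption for illustration:

```python
import numpy as np

def complete_log_likelihood(alpha, beta, x, y):
    """Complete-data log-likelihood (3) for the pooled logistic model.

    x : list of (M_i, p) covariate arrays, one per subject
    y : list of length-M_i binary outcome arrays
    """
    ll = 0.0
    for x_i, y_i in zip(x, y):
        eta = alpha + x_i @ beta          # linear predictor alpha + beta' x_ij
        # each visit contributes -log(1 + exp(eta)) + y_ij * eta
        ll += np.sum(-np.log1p(np.exp(eta)) + y_i * eta)
    return ll
```

With `alpha = 0` and `beta = 0`, every visit contributes `-log 2`, which gives a quick sanity check on the implementation.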

3. Methods

3.1. Parameter Estimation

Assume that the *i*^{th} subject missed visits after time
${t}_{{L}_{i}}$ and came back at
${t}_{{R}_{i}}$.
${L}_{i}$ is the index of the last time subject *i* made the visit and was event-free.
${R}_{i}$ is the index of the first time subject *i* was observed with the event of interest. Then
${y}_{i{L}_{i}}=0$,
${y}_{i{R}_{i}}=1$, and
${y}_{ij}$ is missing for
${L}_{i}+1\le j\le {R}_{i}-1$. For the subjects who do not miss visits,
${L}_{i}+1={R}_{i}$. Whenever subjects miss visits, their covariate value,
${x}_{ij}$, is also missing. We use the EM algorithm ( [8] ) to estimate the parameters.

E-step: For individuals whose failure times are interval-censored, we need to estimate both ${y}_{ij}$ and ${x}_{ij}$ in the expression (3) for $j\in \left\{{L}_{i}+\mathrm{1,}\cdots \mathrm{,}{R}_{i}-1\right\}$.

${x}_{ij}$ could be continuous or categorical ([9]). We assume that ${x}_{ij}$ follows a linear growth curve with fixed effects, motivated by a real data set, NLSY97. That is,

${x}_{ij}={\theta}_{0i}+{\theta}_{1i}{t}_{j-1}+{\epsilon}_{ij}\mathrm{,}$ (4)

where ${\epsilon}_{ij}\sim N\left(\mathrm{0,}{\sigma}_{\epsilon}^{2}\right)$, $cov\left({\epsilon}_{ij}\mathrm{,}{\epsilon}_{i{j}^{\prime}}\right)=\mathrm{0,}j\ne {j}^{\prime}$. We estimate ${x}_{ij}$ by ${\stackrel{^}{x}}_{ij}={\stackrel{^}{\theta}}_{0i}+{\stackrel{^}{\theta}}_{1i}{t}_{j-1}$ for ${L}_{i}+1\le j\le {R}_{i}-1$, where ${\stackrel{^}{\theta}}_{0i}$ and ${\stackrel{^}{\theta}}_{1i}$ are least squares estimators.
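The per-subject least squares fit and imputation step for model (4) can be sketched as follows; this is an illustrative Python version (the array layout and the NaN placeholder for missing visits are our assumptions):

```python
import numpy as np

def impute_covariates(t, x_obs, missing_idx):
    """Fit x_ij = theta0 + theta1 * t_{j-1} by least squares on the
    observed visits, then predict the missing ones (model (4))."""
    obs = [j for j in range(len(t)) if j not in missing_idx]
    # design matrix [1, t_{j-1}] restricted to the observed visits
    A = np.column_stack([np.ones(len(obs)), np.asarray(t)[obs]])
    theta0, theta1 = np.linalg.lstsq(A, np.asarray(x_obs)[obs], rcond=None)[0]
    # predict the covariate at each missed visit
    return {j: theta0 + theta1 * t[j] for j in missing_idx}
```

For example, with visits at `t = [0, 1, 2, 3, 4]` and observed values lying exactly on the line `2 + 0.5 t`, a missing visit at index 2 is imputed as 3.0.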

If
${x}_{ij}$ is ordinal, we assign numbers to corresponding categories. Then we again assume linear growth curve with fixed effects to estimate the missing
${x}_{ij}$ ’s. Let
${n}_{c}$ be the number of categories for this ordinal variable. For each individual *i*, the observed
${x}_{ij}$ ’s are used in model (4) to compute
${\stackrel{^}{\theta}}_{0i}$ and
${\stackrel{^}{\theta}}_{1i}$. Then we compute
${\stackrel{^}{x}}_{ij}={\stackrel{^}{\theta}}_{0i}+{\stackrel{^}{\theta}}_{1i}{t}_{j-1}$ as usual.

Next, we create ${n}_{c}-1$ thresholds in order to uniquely assign ${\stackrel{^}{x}}_{ij}$ to one of the ${n}_{c}$ categories. Note that ${\stackrel{^}{x}}_{ij}\sim N\left({\theta}_{0i}+{\theta}_{1i}{t}_{j-1}\mathrm{,}{\sigma}_{{\stackrel{^}{x}}_{i}}^{2}\right)$ and ${\stackrel{^}{x}}_{i}=\left\{{\stackrel{^}{x}}_{i\mathrm{,}{L}_{i}+1}\mathrm{,}\cdots \mathrm{,}{\stackrel{^}{x}}_{i\mathrm{,}{R}_{i}-1}\mathrm{,}{x}_{i\mathrm{,}{R}_{i}}\right\}$. We use the quantiles of this normal distribution to define the thresholds. Since we need ${\stackrel{^}{\sigma}}_{{\stackrel{^}{x}}_{i}}^{2}$ to define the thresholds, each subject must have at least three distinct observed covariate values ${x}_{ij}$; otherwise, ${\stackrel{^}{\sigma}}_{{\stackrel{^}{x}}_{i}}^{2}$ would be undefined due to zero degrees of freedom.
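A minimal Python sketch of this thresholding step, assuming the ${n}_{c}-1$ thresholds are placed at the $k/{n}_{c}$ quantiles of the fitted normal distribution (the exact quantile levels are our assumption, not spelled out above):

```python
import numpy as np
from scipy.stats import norm

def assign_category(x_hat, mean, sd, n_c):
    """Map a continuous prediction x_hat to one of n_c ordinal categories,
    using the k/n_c quantiles of N(mean, sd^2) as thresholds (the quantile
    choice is an assumption for illustration)."""
    # n_c - 1 interior thresholds at probabilities 1/n_c, ..., (n_c-1)/n_c
    thresholds = norm.ppf(np.arange(1, n_c) / n_c, loc=mean, scale=sd)
    # the number of thresholds below x_hat determines the category (1-based)
    return 1 + int(np.searchsorted(thresholds, x_hat))
```

A prediction far below the fitted mean falls in category 1, one far above in category ${n}_{c}$, and intermediate values land in between.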

The observed ordinal covariates for some subjects do not cover all of the ordinal categories. Therefore, an ordinal logistic regression model does not work for estimating the missing ordinal covariates. In Appendix 2, we provide a detailed rationale for choosing the fixed-effects model, its extension in a general setting, and the challenges with a random-effects model.

${\stackrel{^}{y}}_{ij}=E\left[{y}_{ij}|{Y}_{i},{\stackrel{^}{x}}_{i},\alpha ,\beta ,{T}_{i}>{t}_{j-1}\right]=P\left[{T}_{i}={t}_{j}|{t}_{{L}_{i}}<{T}_{i}\le {t}_{{R}_{i}},{Y}_{i},{\stackrel{^}{x}}_{i},\alpha ,\beta \right]=\begin{cases}{\stackrel{^}{p}}_{ij}/{D}_{i}&\text{if}\text{\hspace{0.17em}}j={L}_{i}+1,\\ {\stackrel{^}{p}}_{ij}{\displaystyle {\prod}_{o={L}_{i}+1}^{j-1}\left(1-{\stackrel{^}{p}}_{io}\right)}/{D}_{i}&\text{if}\text{\hspace{0.17em}}j\in \left\{{L}_{i}+2,\cdots ,{R}_{i}-1\right\},\\ {\displaystyle {\prod}_{o={L}_{i}+1}^{j-1}\left(1-{\stackrel{^}{p}}_{io}\right)}/{D}_{i}&\text{if}\text{\hspace{0.17em}}j={R}_{i},\end{cases}$ (5)

with common denominator ${D}_{i}={\stackrel{^}{p}}_{i,{L}_{i}+1}+{\displaystyle {\sum}_{k={L}_{i}+2}^{{R}_{i}-1}\left[{\stackrel{^}{p}}_{ik}{\displaystyle {\prod}_{o={L}_{i}+1}^{k-1}\left(1-{\stackrel{^}{p}}_{io}\right)}\right]}+{\displaystyle {\prod}_{o={L}_{i}+1}^{{R}_{i}-1}\left(1-{\stackrel{^}{p}}_{io}\right)}$,

where

${\stackrel{^}{p}}_{ij}=\frac{\mathrm{exp}\left(\alpha +{\beta}^{\prime}{\stackrel{^}{x}}_{ij}\right)}{1+\mathrm{exp}\left(\alpha +{\beta}^{\prime}{\stackrel{^}{x}}_{ij}\right)}$ (6)

This is an extension of a geometric-type experiment, where the probability of success (progression) changes at each follow-up visit, ${t}_{j}$, ${L}_{i}+1\le j\le {R}_{i}$.
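Because the probabilities in (5) share a common denominator, they can be computed by normalizing their numerators. A Python sketch for one interval-censored subject (the vector layout of the fitted probabilities is an assumption):

```python
import numpy as np

def e_step_y(p_hat):
    """Conditional event-time probabilities (5) for one interval-censored
    subject.  p_hat[k] is the fitted hazard for visits
    j = L_i+1, ..., R_i-1; by visit R_i the event is certain to have occurred."""
    p_hat = np.asarray(p_hat, dtype=float)
    surv = np.cumprod(1.0 - p_hat)        # prod_{o <= k} (1 - p_hat_io)
    # numerators for visits L_i+1, ..., R_i-1, then the j = R_i term
    num = np.append(p_hat * np.concatenate(([1.0], surv[:-1])), surv[-1])
    return num / num.sum()                # divide by the common denominator
```

For instance, with two intermediate hazards of 0.5 each, the event times $t_{L_i+1}, t_{L_i+2}, t_{R_i}$ receive probabilities 0.5, 0.25, and 0.25, which sum to 1 as required.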

M-step: We find the values of $\alpha $ and $\beta $ that maximize the expected value of log-likelihood in Equation (3), conditioned on the observed data. Therefore, we have

$\left(\stackrel{^}{\alpha},\stackrel{^}{\beta}\right)=\mathrm{arg}\mathrm{max}{l}_{\alpha ,\beta}|{\stackrel{^}{y}}_{ij},{\stackrel{^}{x}}_{ij},$ (7)

where ${\stackrel{^}{y}}_{ij}={y}_{ij},{\stackrel{^}{x}}_{ij}={x}_{ij}$, if uncensored.

Expressions (5)-(7) are iterated until convergence. As there are no closed-form solutions for $\stackrel{^}{\alpha}$ and $\stackrel{^}{\beta}$, we used the optimization function optim in R to obtain $\left(\stackrel{^}{\alpha},\stackrel{^}{\beta}\right)$.
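A Python analogue of this M-step, using scipy.optimize.minimize in place of R's optim (the function and argument names here are illustrative, not the paper's code):

```python
import numpy as np
from scipy.optimize import minimize

def m_step(x_rows, y_hat, init=None):
    """Maximize the expected log-likelihood (3), with the E-step values
    y_hat in place of the missing y_ij.  Returns (alpha_hat, beta_hat)."""
    x_rows = np.asarray(x_rows, dtype=float)   # one row per (i, j) observation
    y_hat = np.asarray(y_hat, dtype=float)
    p = x_rows.shape[1]

    def neg_ell(theta):
        # negative of the expected log-likelihood (3)
        eta = theta[0] + x_rows @ theta[1:]
        return -np.sum(-np.log1p(np.exp(eta)) + y_hat * eta)

    theta0 = np.zeros(p + 1) if init is None else init
    res = minimize(neg_ell, theta0, method="BFGS")
    return res.x
```

With two observations at $x=-1$ and $x=1$ and expected responses 0.2 and 0.8, the fitted model is saturated, so the maximizer satisfies $\hat{\alpha}=0$ and $\hat{\beta}=\mathrm{logit}(0.8)=\log 4$.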

3.2. Variance Estimation

We apply Louis’ method for variance estimation using the notation in [10]. Following the missing information principle, we compute the observed information by subtracting the missing information from the complete information.

$\begin{array}{c}\frac{-{\partial}^{2}\mathrm{log}P\left(\theta \mathrm{|}W\right)}{\partial {\theta}^{2}}=-{\displaystyle {\int}_{v}}\frac{{\partial}^{2}\mathrm{log}P\left(\theta \mathrm{|}W\mathrm{,}V\right)}{\partial {\theta}^{2}}P\left(V\mathrm{|}\theta \mathrm{,}W\right)\text{d}V\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}-Var\left(\frac{-\partial \mathrm{log}P\left(\theta \mathrm{|}W\mathrm{,}V\right)}{\partial \theta}\right)\mathrm{,}\end{array}$ (8)

where *W* is observed data, *i.e.*, partly interval-censored pooled repeated observations. *V* is latent data, the true unknown counterpart of the interval-censored portion of *W*.
$\theta \mathrm{|}W$ is the observed posterior and
$\theta \mathrm{|}W\mathrm{,}V$ is the augmented posterior.

The details of the expression (8) are provided in Appendix 3.

4. Simulation Study

4.1. Data Simulation

We considered $n=300$ subjects who have $M=7$ follow-up visits each. We generated covariates as follows:

$x{1}_{ij}\sim N\left(5.8+0.3{t}_{j-1}\mathrm{,0.1}\right)\mathrm{,}$

$x{2}_{ij}\sim N\left(0.4+0.15{t}_{j-1}\mathrm{,0.1}\right)\mathrm{.}$

$x{1}_{ij}$ represents a continuous covariate with larger values and faster growth rate over time, while $x{2}_{ij}$ represents one with smaller values and slower growth rate over time.

First, we generate $n=300$ subjects with complete follow-up visits. This constitutes the original complete (OC) pooled repeated data. We randomly choose ${n}_{1}$ subjects out of these; they form the exact data (E), a proper subset of OC. For the remaining ${n}_{2}=n-{n}_{1}$ subjects, we randomly designate some of their follow-up visits as missing, which yields the pooled repeated interval-censored observations. The observed data (O), *i.e.*, the pooled repeated partly interval-censored data, is the mix of the pooled repeated data (E) and the pooled repeated interval-censored data. We considered several values of ${n}_{1}$ and ${n}_{2}$ to cover different proportions of exact data.

We randomly sampled
${L}_{i}$ and
${R}_{i}$ for each patient. Note that for the exact data, we have
${R}_{i}={L}_{i}+1$ and for the pooled repeated interval-censored data,
${R}_{i}\ge {L}_{i}+2$. Then for
$j=\mathrm{1,}\cdots \mathrm{,}{L}_{i}$, we have
${y}_{ij}=0$ and for
$j={R}_{i},\cdots ,M$, we have
${y}_{ij}=1$.
${y}_{ij}$ is missing for
$j={L}_{i}+\mathrm{1,}\cdots \mathrm{,}{R}_{i}-1$ in the pooled repeated interval-censored data.
${y}_{ij}$ is 1 when the *i*^{th} subject at risk at the *j*^{th} visit experiences the event of interest in the *j*^{th} interval.
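The simulation design above can be sketched as follows. Reading "0.1" as the variance, the choice ${n}_{1}=250$, and the uniform distribution of the censoring gap are assumptions made for this illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 300, 7
t = np.arange(M, dtype=float)              # follow-up times t_0, ..., t_{M-1}

# covariates as in Section 4.1 (0.1 read as a variance; an assumption)
x1 = rng.normal(5.8 + 0.3 * t, np.sqrt(0.1), size=(n, M))
x2 = rng.normal(0.4 + 0.15 * t, np.sqrt(0.1), size=(n, M))

n1 = 250                                    # number of exact (E) subjects
exact = rng.permutation(n) < n1

# L_i: index of the last event-free visit; keep L_i <= M - 3 so that a
# censored subject can still have R_i >= L_i + 2 within the study
L = rng.integers(0, M - 2, size=n)
R = np.array([L[i] + 1 if exact[i] else rng.integers(L[i] + 2, M)
              for i in range(n)])

# y_ij: 0 through L_i, missing (NaN) strictly between L_i and R_i, 1 from R_i on
y = np.full((n, M), np.nan)
for i in range(n):
    y[i, :L[i] + 1] = 0.0
    y[i, R[i]:] = 1.0
```

The resulting arrays satisfy ${R}_{i}={L}_{i}+1$ for the exact subjects and ${R}_{i}\ge {L}_{i}+2$ for the interval-censored ones, matching the description above.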

We computed the bias and variance for original complete data, exact data, and observed data based on $B=1500$ replications. In addition, we investigated the power of our test.

4.2. Results

We first considered the case where there was only one attribute in the model. The EM algorithm (Section 3.1) was used for the parameter estimation. The variance of the parameter estimator was calculated using Louis’ method (Section 3.2).

The results are shown in Table 1. For all the different combinations of ${n}_{1}$ and ${n}_{2}$, the proposed estimator based on the observed data produces a smaller bias and a smaller variance than that based on the exact data alone. In particular, for the case of (250, 50), which contains 84% exact data (250 subjects) and only 16% pooled repeated interval-censored data (50 subjects), the proposed estimator still produces a smaller bias and a smaller variance than that based on E alone. We also notice that the more exact data we have, the smaller the bias and variance we get. These results show quite a similar pattern to those in [3], who employed a proportional hazards model with partly interval-censored data. [6] notes that pooled repeated observations logistic regression is close to the time-dependent covariate Cox regression analysis. Therefore, this simulation result coincides with what we expected.

In order to see whether bootstrapping would be of help, we also ran simulations with various pairs of ${n}_{1}$ and ${n}_{2}$ to compare the bootstrap variance with the variances for the O, E, and OC. We considered two covariates: one continuous and one ordinal. Table 2 shows the results. For all pairs of ${n}_{1}$ and ${n}_{2}$, the bootstrap variance for the O is smaller than that for OC, which is supposed to be the smallest. That is, bootstrapping suffers from substantial underestimation, so we do not recommend it in this setting. It is also time-consuming.

Next, we computed the power of the test
${H}_{0}:\beta ={\beta}_{0}$ vs.
${H}_{1}\mathrm{:}\beta \ne {\beta}_{0}$. We considered both a one-dimensional covariate and a two-dimensional covariate vector. We considered 3 sample sizes (100, 200, and 300), and for each of these sample sizes we ran
$B=1000$ replications of the test. The power was calculated as the proportion of times
${H}_{0}$ was rejected at 5% level of significance. Both Figure 1 and Figure 2 show the powers for different values of
${\beta}_{0}$ and different sample sizes. The power curves are symmetric for all the different sample sizes. As a sample size increases or the parameter values are farther apart from the true parameter value (*i.e.*, an effect size increases), the corresponding power increases. From Figure 1, with a sample of size
$n=300$, one can achieve 80% power for the effect size of 0.45. Moreover, for the effect size of 0.55, a sample of size
$n=200$ is enough to achieve 80% power. [11] achieved approximately 80% power in detecting the effect size of 0.75 for the proportional hazards model with a sample of size 300 using current status data. Considering that pooled repeated partly interval-censored data contain more information than current status data, we fully agree with this better power result. The 95% coverage probabilities for different proportions of pooled repeated partly interval-censored data are shown in Table 3.

Table 1. Results for one-dimensional $\beta $, ${\beta}^{true}=3.6$; *B*: bias, ${\sigma}^{2}$: variance.

Table 2. Estimated variance; boot: bootstrap.

Figure 1. Power of the test for one-dimensional $\beta $.

Figure 2. Power of the test for multidimensional $\beta $.

Table 3. The 95% coverage probabilities.

In summary, even a small amount of pooled repeated interval-censored data within O does make our statistical inference more accurate and more powerful.

5. Analysis of NLSY97 Data

For more than 4 decades, the National Longitudinal Surveys (NLS) data have served as an important tool for economists, sociologists, and other researchers. The NLSY97 is a nationally representative sample of approximately 9000 youths who were 12 to 18 years old as of December 31, 1996. The NLSY97 is designed to document the transition from school to work and into adulthood. It collects extensive information about youths’ labor market behavior and educational experiences over time. In addition to educational and labor market experiences, the NLSY97 contains detailed information on many other topics. Some of the areas included in the data are criminal behavior, alcohol, and drug use. For the purpose of illustration of our methods, we use the NLSY97 data from 1997 to 2013 ( [12] ). We illustrate how to analyze the effects of covariates that may affect an adolescent’s smoking behavior.

There are 8984 subjects in the data set. We analyze the 1822 subjects who did not smoke at the beginning of the study in 1997 but had become frequent smokers (smoking on more than 10 days in a month) by the end of 2013. These subjects constitute O. The response variable is defined as

${y}_{ij}=\begin{cases}\mathrm{1,}& \text{a frequent smoker}\\ \mathrm{0,}& \text{not a frequent smoker}\mathrm{.}\end{cases}$ (9)

Exact observations (E) are available in approximately 87.5% of those analyzed. The 1^{st} covariate,
$x{1}_{ij}$, is the number of days an individual drank alcohol in the last 30 days. The 2^{nd} covariate,
$x{2}_{ij}$, is an individual’s self-evaluation of “general state of health”.
$x{2}_{ij}$ is coded as: 1 = excellent, 2 = very good, 3 = good, 4 = fair, and 5 = poor. The covariate effects are estimated by the EM algorithm in Section 3.1. The standard errors of these estimators are computed by Louis’ method in Section 3.2. The results are shown in Table 4. Holding an individual’s self-evaluated health level fixed, each additional day of drinking alcohol during the past 30 days increases the log odds of becoming a frequent smoker by 0.1 (s.e. = 0.002). Furthermore, holding an individual’s amount of drinking fixed, a one-unit rise in health level (*i.e.*, worse health) increases the log odds of becoming a frequent smoker by 0.19 (s.e. = 0.015).

Additionally, we analyzed only E from O in order to see how much the relatively small pooled repeated interval-censored portion helps make the analysis more accurate. Another rationale is that practitioners often analyze only E due to the unavailability of software. The results are shown in Table 5. The parameter estimates are very close to those from O. However, the estimated standard errors are much larger than those from O. This is consistent with the simulation results in Section 4.2. The Wald test statistic for testing $\left({\beta}_{1}\mathrm{,}{\beta}_{2}\right)=\left(\mathrm{0,0}\right)$ is quite large for both E alone and O, so the p-values are nearly 0. Though both tests indicate that the covariates have a statistically significant effect on adolescents’ smoking behavior, O provides much stronger evidence for the effect. Therefore, this data analysis reaffirms that even a small pooled repeated interval-censored portion of O increases the sensitivity of the test.

Table 4. The results of the NLSY97 analysis using the observed data.

Table 5. The results of the NLSY97 analysis using only the exact data.

6. Discussion

We focused on developing a method to estimate the regression parameters and the variance-covariance matrix of those estimators for the pooled repeated partly interval-censored data logistic regression model. We employed the EM algorithm to estimate the parameters and missing information principle to estimate the variance-covariance matrix of those estimators.

Monte Carlo simulation demonstrates acceptable levels of bias, standard error, and power. To our knowledge, this is the first extensive power study for the pooled repeated partly interval-censored data logistic regression model. The simulation results suggest that in practice, one needs a sample of size around 300 to achieve an 80% power of the test to detect a very small effect size (0.45) for the regression parameter of interest. However, one needs a much smaller size, only around 200, for a bit larger effect size (0.55).

There are several potential extensions of our methods. They can also be used when the predetermined follow-up visits are person-dependent. They can further be extended to handle correlated covariates by employing a ridge regression model ([13]), variable selection via lasso regression ([14]), and multiple progression states, owing to the fact that the likelihood factors into a distinct term for each interval ([15]).

Last but not least, we note that there are challenges in including either left-censoring or right-censoring. Refer to Appendix 1 for details.

Acknowledgements

The authors appreciate Dr. Alexis Dinno for introducing the data.

Appendix 1. Right and Left Censoring in the Model

In some special cases, the visiting times of some subjects may involve either right or left censoring. If a subject has not failed at the last visit (${y}_{i{L}_{i}}=0$) and does not return for the remaining interview visits, then the subject’s time to the event of interest is right-censored. In this case, ${L}_{i}={M}_{i}$ and ${R}_{i}=M$. As NLSY predetermined *M* for all subjects, *M* plays the role of $\infty $.

One may want to impute the covariate ${x}_{ij}$ and response ${y}_{ij}$ according to the procedures in Section 3.1. Unfortunately, extrapolating the covariates ${x}_{ij}$ for $j>{L}_{i}$ using the linear growth curve in Section 3.1 may well increase both bias and variance.

If a subject’s first visit is at time *k* and the subject shows the symptoms of the event of interest, then both
${y}_{ij}$ and
${x}_{ij}$ are missing for
$j=\mathrm{1,}\cdots \mathrm{,}k-1$, and
${y}_{ik}=1$. Therefore, the covariate,
${x}_{ij}$ and response,
${y}_{ij}$ should be estimated for
$j\le k-1$ at E-step. We merely have
${L}_{i}=0$,
${R}_{i}=k$, and two observed covariate values
${x}_{i0}$ and
${x}_{ik}$. Therefore, we cannot fit the subject-dependent growth curve to estimate the covariates at the missed visits.

In summary, there is no merit in including individuals whose event times are either left-censored or right-censored when fitting a logistic regression model with pooled repeated observations.

Appendix 2. Imputation of Covariates

In Section 3.1, we assumed that covariates have a linear growth curve with fixed effects. This was motivated by NLSY97 data. In NLSY97, follow-up interviews were relatively far apart (1 year). Additionally, some individuals had no change in their covariate values, e.g., some individuals had no drinking throughout the study. This motivated us to assume that for a given individual, the covariate values are uncorrelated at different follow-up visits, *i.e.*,
$cov\left({\epsilon}_{ij}\mathrm{,}{\epsilon}_{i{j}^{\prime}}\right)=\mathrm{0,}j\ne {j}^{\prime}$.

If the follow-up time intervals are relatively short and there are no constant covariate values for any individual over time, one may adopt a linear growth curve with fixed effects and autocorrelated errors. That is,

${x}_{ij}={\theta}_{0i}+{\theta}_{1i}{t}_{j-1}+{\epsilon}_{ij}\mathrm{,}$ (10)

where
${\epsilon}_{ij}$ is an autoregressive process of order 1, AR(1), with ${\epsilon}_{ij}\sim N\left(\mathrm{0,}{\sigma}_{\epsilon}^{2}\right)$ and $corr\left({\epsilon}_{i\mathrm{,}j}\mathrm{,}{\epsilon}_{i\mathrm{,}j+1}\right)=\rho \mathrm{,}\rho \ne 0$.

[7] and [16] assumed random effects. In a linear growth curve with random effects, all subjects share the same growth curve distribution, which depends on the time points and is correlated within a subject. Consequently, the least squares estimators for this model are the same for all subjects. For example, assume that we obtain
${\stackrel{^}{\theta}}_{0}$ and
${\stackrel{^}{\theta}}_{1}$ for a random effect model. If two subjects
${i}^{\mathrm{*}}$ and
${i}^{\mathrm{**}}$ have missing covariates at a given time point *j*, then they will have the same estimated covariate
${\stackrel{^}{x}}_{ij}={\stackrel{^}{\theta}}_{0}+{\stackrel{^}{\theta}}_{1}{t}_{j}$ for
$i={i}^{*},{i}^{**}$. This may cause a substantial amount of bias.

Appendix 3. Formulas for Computing the Variance in Section 3.2

The variance estimation is based on
${Y}_{i}$, the observed binary outcomes for subject *i* and the variability of missing response,
${y}_{ij}$ conditioned on
${x}_{i}$, where
${x}_{i}=\left\{{x}_{i1}\mathrm{,}{x}_{i2}\mathrm{,}\cdots \mathrm{,}{x}_{i\mathrm{,}{L}_{i}}\mathrm{,}{\stackrel{^}{x}}_{i\mathrm{,}{L}_{i}+1}\mathrm{,}\cdots \mathrm{,}{\stackrel{^}{x}}_{i\mathrm{,}{R}_{i}-1}\mathrm{,}{x}_{i\mathrm{,}{R}_{i}}\right\}$. Let
$\theta =\left(\alpha \mathrm{,}\stackrel{\to}{\beta}\right)$, and
${z}_{i}=\left(\mathrm{1,}{x}_{i}\right)$. Then the complete information matrix in (8) can be computed by

${\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{z}_{i}{z}_{i}^{\text{T}}\frac{\mathrm{exp}\left({\theta}^{\text{T}}{z}_{i}\right)}{{\left(1+\mathrm{exp}\left({\theta}^{\text{T}}{z}_{i}\right)\right)}^{2}}\mathrm{.}$ (11)

The missing information in (8) is computed by Monte Carlo simulations.

$\begin{array}{l}Var\left(\frac{-\partial \mathrm{log}P\left(\theta \mathrm{|}W\mathrm{,}V\right)}{\partial \theta}\right)\\ =E{\left(\frac{-\partial \mathrm{log}P\left(\theta \mathrm{|}W\mathrm{,}V\right)}{\partial \theta}\right)}^{2}-{\left[E\left(\frac{-\partial \mathrm{log}P\left(\theta \mathrm{|}W\mathrm{,}V\right)}{\partial \theta}\right)\right]}^{2}\mathrm{,}\end{array}$ (12)

where

$E\left(\frac{-\partial \mathrm{log}P\left(\theta |W,V\right)}{\partial \theta}\right)\approx \frac{1}{B}{\displaystyle {\sum}_{b=1}^{B}}\left[\frac{-\partial \mathrm{log}P\left(\theta |W,{V}_{b}\right)}{\partial \theta}\right]$

and

$E{\left(\frac{-\partial \mathrm{log}P\left(\theta |W,V\right)}{\partial \theta}\right)}^{2}\approx \frac{1}{B}{\displaystyle {\sum}_{b=1}^{B}}{\left[\frac{-\partial \mathrm{log}P\left(\theta |W,{V}_{b}\right)}{\partial \theta}\right]}^{2}.$
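Putting (8), (11), and (12) together, the observed information can be assembled in a few lines. A Python sketch, assuming the complete-data score vectors from the *B* Monte Carlo imputations have been precomputed:

```python
import numpy as np

def complete_information(z, theta):
    """Complete information matrix (11): sum_i z_i z_i^T p_i (1 - p_i),
    where p_i is the logistic probability at theta' z_i."""
    z = np.asarray(z, dtype=float)
    eta = z @ np.asarray(theta, dtype=float)
    w = np.exp(eta) / (1.0 + np.exp(eta)) ** 2       # p_i (1 - p_i)
    return (z * w[:, None]).T @ z

def missing_information(score_draws):
    """Monte Carlo estimate of (12): the variance of the complete-data
    score over the B imputations V_b of the latent data."""
    s = np.asarray(score_draws)                      # shape (B, dim(theta))
    return np.cov(s, rowvar=False, bias=True)        # 1/B normalization

def observed_information(z, theta, score_draws):
    # missing information principle (8): observed = complete - missing
    return complete_information(z, theta) - missing_information(score_draws)
```

At $\theta = 0$ each observation contributes weight $p(1-p) = 1/4$, and identical score draws yield zero missing information, two quick checks on the formulas.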

References

[1] Gao, F., Zeng, D.L. and Lin, D.Y. (2017) Semiparametric Estimation of the Accelerated Failure Time Model with Partly Interval-Censored Data. Biometrics, 73, 1161-1168.

https://doi.org/10.1111/biom.12700

[2] Zhao, X.Q., Zhao, Q., Sun, J.G. and Kim, J.S. (2008) Generalized Log-Rank Tests for Partly Interval-Censored Failure Time Data. Biometrical Journal, 50, 375-385.

https://doi.org/10.1002/bimj.200710419

[3] Kim, J.S. (2003) Maximum Likelihood Estimation for the Proportional Hazards Model with Partly Interval-Censored Data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65, 489-502.

https://doi.org/10.1111/1467-9868.00398

[4] Huang, J. (1999) Asymptotic Properties of Nonparametric Estimation Based on Partly Interval-Censored Data. Statistica Sinica, 9, 501-519.

[5] Cupples, L.A., D’Agostino, R.B., Anderson, K. and Kannel, W.B. (1988) Comparison of Baseline and Repeated Measure Covariate Techniques in the Framingham Heart Study. Statistics in Medicine, 7, 205-218.

https://doi.org/10.1002/sim.4780070122

[6] D’Agostino, R., Lee, M.L., Belanger, A., Cupples, L.A., Anderson, K. and Kannel, W.B. (1990) Relation of Pooled Logistic Regression to Time Dependent Cox Regression Analysis: The Framingham Heart Study. Statistics in Medicine, 9, 1501-1515.

https://doi.org/10.1002/sim.4780091214

[7] Finkelstein, D.M., Wang, R., Ficociello, L.H. and Schoenfeld, D.A. (2010) A Score Test for Association of a Longitudinal Marker and an Event with Missing Data. Biometrics, 66, 726-732.

https://doi.org/10.1111/j.1541-0420.2009.01326.x

[8] Dempster, A.P., Laird, N.M. and Rubin, D.B. (1977) Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39, 1-22.

https://doi.org/10.1111/j.2517-6161.1977.tb01600.x

[9] Masyn, K.E., Petras, H. and Liu, W. (2014) Growth Curve Models with Categorical Outcomes. In: Bruinsma, G. and Weisburd, D., Eds., Encyclopedia of Criminology and Criminal Justice, Springer, New York, 2013-2025.

https://doi.org/10.1007/978-1-4614-5690-2_404

[10] Tanner, M.A. (1996) Tools for Statistical Inference. 3rd Edition, Springer-Verlag, New York.

https://doi.org/10.1007/978-1-4612-4024-2

[11] Mongoué-Tchokoté, S. and Kim, J.S. (2008) New Statistical Software for the Proportional Hazards Model with Current Status Data. Computational Statistics and Data Analysis, 52, 4272-4286.

https://doi.org/10.1016/j.csda.2008.02.007

[12] Bureau of Labor Statistics, U.S. Department of Labor (2015) National Longitudinal Survey of Youth 1997 Cohort, 1997-2013 (Rounds 1-16) Produced by the National Opinion Research Center, the University of Chicago and Distributed by the Center for Human Resource Research, The Ohio State University. Columbus.

[13] Hoerl, A., Kennard, R. and Baldwin, K. (1975) Ridge Regression: Some Simulations. Communications in Statistics, 4, 105-123.

[14] Tibshirani, R. (1996) Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58, 267-288.

https://doi.org/10.1111/j.2517-6161.1996.tb02080.x

[15] Allison, P.D. (2010) Survival Analysis Using SAS: A Practical Guide. SAS Institute, Cary.

[16] Wulfsohn, M.S. and Tsiatis, A. (1997) A Joint Model for Survival and Longitudinal Data Measured with Error. Biometrics, 53, 330-339.

https://doi.org/10.2307/2533118