Empirical Likelihood Based Longitudinal Data Analysis

1. Introduction

Longitudinal studies are common in areas such as epidemiology, clinical trials, economics, agriculture, and survey sampling. In longitudinal studies, we are interested in the changes in the variables over time as a function of the covariates, generally under the assumption that observations from different individuals are independent. For example, longitudinal studies are used to characterize growth and ageing, to assess the effect of risk factors on human health, and to evaluate the effectiveness of treatments. To obtain an unbiased, efficient, and reliable estimate, we must properly model the correlation between the repeated responses for each individual. However, the modelling of correlation, especially when the responses are discrete, is a challenging task even if the responses are collected over equispaced time points.

The approaches used for the analysis of longitudinal data can be classified as mixed effects models, transitional models, and marginal regression models. A potential disadvantage of mixed effects models is that they rely on parametric assumptions, which may lead to biased parameter estimates when a model is misspecified. Moreover, the estimation of the parameters is challenging when the random effects have a high dimension; it typically involves integrals that do not have an explicit form. Transitional models are more difficult to apply when there are missing data and the repeated measurements are not equally spaced in time. In addition, the interpretation of the regression parameters varies with the order of the serial correlation, and the regression parameter estimates are sensitive to the assumption of time dependence. Because of the aforementioned difficulties in modelling and performing inference, we focus on marginal models in this paper.

We start with a brief review of existing methods for longitudinal data under the framework of generalized linear models (GLMs). The longitudinal observations consist of an outcome random variable ${y}_{it}$ and a p-dimensional vector of covariates ${x}_{it}$, observed for subjects $i=\mathrm{1,}\cdots \mathrm{,}k$ at time points $t=\mathrm{1,}\cdots \mathrm{,}{m}_{i}$. For the ith subject, let ${y}_{i}={\left({y}_{i1}\mathrm{,}\cdots \mathrm{,}{y}_{i{m}_{i}}\right)}^{\text{T}}$ be the response vector, and let ${X}_{i}={\left({x}_{i1}\mathrm{,}{x}_{i2}\mathrm{,}\cdots \mathrm{,}{x}_{it}\mathrm{,}\cdots \mathrm{,}{x}_{i{m}_{i}}\right)}^{\text{T}}$ be the ${m}_{i}\times p$ matrix of covariates. Marginal models for longitudinal data can be extended to the GLM framework. The marginal density of ${y}_{it}$ is assumed to follow an exponential family [1] of the form

$f\left({y}_{it}\right)=\mathrm{exp}\left[\left({y}_{it}{\theta}_{it}-a\left({\theta}_{it}\right)\right)\varphi +b\left({y}_{it},\varphi \right)\right],$ (1)

where ${\theta}_{it}=h\left({\eta}_{it}\right)$, h is a known injective function with ${\eta}_{it}={x}_{it}\beta $, $\beta $ is a $p\times 1$ vector of regression effects of ${x}_{it}$ on ${y}_{it}$, and $a(\ast )$ and $b(\ast )$ are functions that are assumed to be known. The mean and variance of ${y}_{it}$ can be written

$\text{E}\left({y}_{it}\mathrm{|}{x}_{it}\right)={a}^{\prime}\left({\theta}_{it}\right)={\mu}_{it}\mathrm{}\text{and}\mathrm{}\text{Var}\left({y}_{it}\right)={a}^{\prime \prime}\left({\theta}_{it}\right)=v\left({\mu}_{it}\right)\varphi \mathrm{,}$

where $\varphi $ is the unknown over-dispersion parameter and $v(\ast )$ is a known variance function. For simplicity, we set the nuisance scale parameter $\varphi $ to 1 for the rest of this paper.
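As a concrete instance of (1), consider the Poisson case: $\theta =\mathrm{log}\left(\mu \right)$, $a\left(\theta \right)=\mathrm{exp}\left(\theta \right)$, $b\left(y\mathrm{,}\varphi \right)=-\mathrm{log}\left(y\mathrm{!}\right)$, and $\varphi =1$, so that ${a}^{\prime}\left(\theta \right)={a}^{\prime \prime}\left(\theta \right)=\mu $. The following short Python sketch (our illustration, not part of the original development) verifies these identities numerically.

```python
import math

# Sketch (our illustration): the Poisson distribution written in the
# exponential-family form (1) with phi = 1,
#   f(y) = exp(y*theta - a(theta) + b(y)),  theta = log(mu),  a(theta) = exp(theta).
def exp_family_density(y, theta):
    # b(y, phi) = -log(y!) for the Poisson case
    return math.exp(y * theta - math.exp(theta) - math.lgamma(y + 1))

mu = 2.0
theta = math.log(mu)
mean = math.exp(theta)       # a'(theta)  = mu
variance = math.exp(theta)   # a''(theta) = v(mu) = mu

# agrees with the usual Poisson pmf  exp(-mu) * mu^y / y!
direct = math.exp(-mu) * mu**3 / math.factorial(3)
```

Here the mean and the variance function coincide, which is the familiar Poisson property $v\left(\mu \right)=\mu $.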

The generalized estimating equation (GEE) approach [2] is a semiparametric method in which the estimating equations are derived without a full specification of the joint distribution of the observed data. This approach allows the user to specify any structure for the correlation matrix of the outcomes ${y}_{i}$ when estimating the regression parameters. In particular, [2] introduced a “working” correlation structure to obtain consistent and efficient estimators for the regression parameter $\beta $. The estimates of the parameters are obtained by solving

$g\left(\beta \mathrm{,}\stackrel{^}{\alpha}\left(\beta \right)\right)={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{X}_{i}^{\text{T}}{A}_{i}^{1/2}{R}_{i}^{-1}\left(\stackrel{^}{\alpha}\right){A}_{i}^{-1/2}\left({y}_{i}-{\mu}_{i}\right)=\mathrm{0,}$ (2)

where ${A}_{i}$ is an ${m}_{i}\times {m}_{i}$ diagonal matrix with $v\left({\mu}_{it}\right)$ as the tth diagonal element and ${R}_{i}\left(\stackrel{^}{\alpha}\right)$ is the ${m}_{i}\times {m}_{i}$ working correlation matrix of the ${m}_{i}$ repeated measurements. For $j=\mathrm{1,}\cdots \mathrm{,}{m}_{i}$ and ${j}^{\prime}=\mathrm{1,}\cdots \mathrm{,}{m}_{i}$, the ${\left(j\mathrm{,}{j}^{\prime}\right)}^{\text{th}}$ element of ${R}_{i}$ is the known, hypothesized, or estimated correlation. The working correlation may depend on an unknown $s\times 1$ correlation parameter vector $\alpha $. The observation times and correlation matrix may differ from subject to subject, but the correlation matrix ${R}_{i}\left(\alpha \right)$ for the ith subject is fully specified by $\alpha $. Some common working correlation structures are independent, autoregressive of order 1 (AR(1)), equally correlated (EQC), moving average of order 1 (MA(1)), and unstructured.
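For illustration, the common working correlation structures listed above can be generated explicitly. The sketch below (ours, not from the paper, assuming a common cluster size $m$ and a scalar $\alpha $) builds the AR(1), EQC, and MA(1) matrices ${R}_{i}\left(\alpha \right)$.

```python
import numpy as np

def ar1_corr(m, alpha):
    # AR(1): the (j, j') entry is alpha^|j - j'|
    idx = np.arange(m)
    return alpha ** np.abs(idx[:, None] - idx[None, :])

def eqc_corr(m, alpha):
    # equally correlated (exchangeable): every off-diagonal entry equals alpha
    return np.full((m, m), alpha) + (1.0 - alpha) * np.eye(m)

def ma1_corr(m, alpha):
    # MA(1): correlation alpha at lag 1, zero beyond lag 1
    R = np.eye(m)
    off = np.arange(m - 1)
    R[off, off + 1] = alpha
    R[off + 1, off] = alpha
    return R
```

Each function returns an $m\times m$ matrix with unit diagonal, as a working correlation must have.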

It has been demonstrated that in some situations the use of an arbitrary working correlation structure may lead to no solution for $\stackrel{^}{\alpha}$, which may break down the entire GEE methodology (see [3]). In another study, [4] showed that the GEE approach may yield an estimator of $\beta $ that, although consistent, is less efficient than that of the independence estimating equation approach under an arbitrary working correlation structure. To overcome this difficulty, [5] proposed using a stationary lag correlation structure instead of the working correlation matrix.

The estimate for $\beta $ is obtained by solving the following estimating equations:

$g\left(\beta \mathrm{,}\rho \right)={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{X}_{i}^{\text{T}}{A}_{i}{\Sigma}_{i}^{-1}\left(\stackrel{^}{\rho}\right)\left({y}_{i}-{\mu}_{i}\right)=\mathrm{0,}$ (3)

where ${\Sigma}_{i}\left(\stackrel{^}{\rho}\right)={A}_{i}^{1/2}{C}_{i}^{\mathrm{*}}\left(\rho \right){A}_{i}^{1/2}$, with ${C}_{i}^{\mathrm{*}}\left(\rho \right)$ the stationary lag correlation structure for the AR(1), MA(1), or EQC models. The stationary lag correlations can be estimated via the method of moments introduced by [6], who showed that the stationary lag correlation approach produces regression estimates that are consistent and more efficient than those obtained from the independence-assumption-based estimating equation approach (see [4]). Using simulation studies, [7] showed that there is a loss in the efficiency of the GEE estimators when the correlation structure is misspecified. Note, however, that the correlation structure is unknown in practice, and it is better to use a stationary lag correlation structure, which accommodates the AR(1), EQC, and MA(1) structures. We therefore recommend defining a lag correlation structure for the longitudinal responses. When the number of estimating equations (r) exceeds the number of parameters (p) (i.e., $r>p$ ), we have extra information about the parameters that can improve efficiency, but it may not be possible to solve the estimating equations directly. To overcome this problem, [8] proposed an adaptive quadratic inference function based on moment assumptions and the estimated variance; this does not involve direct estimation of the correlation parameter. Moreover, if the covariates are time-dependent, the assumption $\underset{k\to \infty}{\mathrm{lim}}E\left[g\left({\beta}_{0}\mathrm{,}\stackrel{^}{\alpha}\left({\beta}_{0}\right)\right)\right]=0$ might not hold for an arbitrary working correlation structure, and so the GEE estimate of $\beta $ is not necessarily consistent; see [9] [10] [11] [12] and [13]. The GEE estimator of $\beta $ with the independent working correlation is always consistent, so [10] recommended using this correlation as a safe choice. However, ignoring the correlation of the observations may lead to an inefficient estimate of the regression coefficients and an underestimate of the standard errors.

The GEE approach requires only the assumption of the existence of the first two marginal moments and a correlation structure. GEE estimators are consistent and asymptotically normal as long as the mean, variance, and correlation structure are correctly specified. Marginal models have satisfactory performance when the assumptions are satisfied. Misspecification can cause estimates based on marginal models to be inefficient and inconsistent, and inference in this situation can be completely inappropriate. Confidence regions and hypothesis tests are based on asymptotic normality, which may not hold since the finite-sample distribution may not be symmetric. These problems motivate us to investigate the applicability of empirical likelihood (EL), a nonparametric likelihood method, based on a set of GEEs for the parameter of interest.

The EL introduced by [14] has properties similar to those of the parametric likelihood. The EL combines the reliability of nonparametric methods with the flexibility and effectiveness of the likelihood approach. It has many nice properties parallel to those of the parametric likelihood, including the ability to carry out hypothesis tests and construct confidence intervals without estimating the variance. The shape of EL confidence regions automatically reflects the emphasis of the observed data set. The EL method also offers advantages in parameter estimation and the formulation of goodness-of-fit tests. The EL has been successfully applied in areas such as linear models, GLMs, survey sampling, variable selection, survival analysis, and time series. We investigate the use of a nonparametric EL in subject-wise longitudinal data analysis, simultaneously estimating the within-subject correlations using the method of moments of [6]. We use the adjusted EL to avoid computational issues. We explore the asymptotic properties of the proposed method and assess its performance in a large number of simulations. Our approach provides consistent estimators and performs comparably to marginal models when the model assumptions are correct; it is superior to marginal models even when the variance function and correlation structure are misspecified.

The remainder of the paper is organized as follows. In Section 2, we develop the subject-wise EL via a set of GEEs for the parameter of interest and discuss its characteristics. We then introduce an adjusted EL (AEL) inference for longitudinal data; we discuss its characteristics and asymptotic properties in Section 3. In Section 4, we develop an algorithm based on EL principles for the estimation of the regression parameters and the construction of their confidence regions. In Section 5, the performance of the proposed method is assessed via Monte Carlo simulations. The implementation of the proposed method in two real-data examples is discussed in Section 6, and the conclusions are given in Section 7.

2. Empirical-Likelihood-Based Longitudinal Modelling

EL is a nonparametric-likelihood-based approach, introduced by [14], which is an alternative to parametric likelihood and bootstrap methods. This method enables us to fully employ the information available from the data for making asymptotically efficient inference about the population parameters. In this section, we introduce the EL-based longitudinal modelling.

In a seminal paper, [15] introduced the EL for linear models. EL confidence regions for regression coefficients in linear models were studied by [16]. The EL method can also be used to estimate the parameters defined by a set of estimating equations [17]. A comprehensive overview of the EL and its properties can be obtained from [18]. EL methods have attracted increasing attention over the last two decades, and the literature is extensive.

In the longitudinal data analysis framework, [19] applied the EL approach using a subject-wise working independence model. This method ignores the within-subject correlation structure. Similarly, [20] proposed a subject-wise EL for longitudinal data and obtained the asymptotic normality of the maximum EL estimator (MELE) of the regression coefficients; they also did not consider the within-subject correlation structure. It is well known that the working-independence assumption may lead to a loss of efficiency in estimation when within-subject correlation is present. To estimate the within-subject covariance matrices, [21] used the nonparametric sample covariance matrix obtained from the residuals of a GEE under the working-independence assumption. In this work, we show how to incorporate the within-subject correlation structure of the repeated measurements into the EL.

Following [15] and [17], we can extend the EL inference to longitudinal data based on a set of estimating functions $g\left(\beta \mathrm{,}\rho \right)$ given in (3). We incorporate the within-subject correlation structure of the repeated measurements into the EL using the well-known method of moments estimators by [6] for a given value of $\beta $. The profile empirical log-likelihood function of $\beta $ is defined by

$\mathcal{l}\left(\beta \right)=\mathrm{sup}\left[{\displaystyle \underset{i=1}{\overset{k}{\sum}}}\mathrm{log}\left({p}_{i}\right):\mathrm{}{p}_{i}\ge 0,i=1,2,\cdots ,k;\mathrm{}{\displaystyle \underset{i=1}{\overset{k}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{p}_{i}=1,\mathrm{}{\displaystyle \underset{i=1}{\overset{k}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{p}_{i}{g}_{i}\left(\beta \mathrm{,}\rho \right)=0\right]\mathrm{.}$

The EL is maximized when

${\stackrel{^}{p}}_{i}=\frac{1}{k\left\{1+{\stackrel{^}{\lambda}}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\rho \right)\right\}}\mathrm{,}i=\mathrm{1,2,}\cdots \mathrm{,}k\mathrm{,}$ (4)

where the Lagrange multiplier $\stackrel{^}{\lambda}=\stackrel{^}{\lambda}\left(\beta \right)$ is the solution of

${\displaystyle \underset{i=1}{\overset{k}{\sum}}}\frac{{g}_{i}\left(\beta \mathrm{,}\rho \right)}{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\rho \right)}=0.$ (5)

This result leads to the profile empirical log-likelihood function

$\mathcal{l}\left(\beta \right)=-k\mathrm{log}\left(k\right)-{\displaystyle \underset{i=1}{\overset{k}{\sum}}}\mathrm{log}\left(1+{\stackrel{^}{\lambda}}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \mathrm{,}\rho \right)\right)$

and the profile empirical log-likelihood ratio function

${W}_{l}\left(\beta \right)=-{\displaystyle \underset{i=1}{\overset{k}{\sum}}}log\left(k{\stackrel{^}{p}}_{i}\right)={\displaystyle \underset{i=1}{\overset{k}{\sum}}}log\left[1+{\stackrel{^}{\lambda}}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \mathrm{,}\rho \right)\right]\mathrm{.}$ (6)
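To make (4)-(6) concrete, the toy Python sketch below (our own example, with a scalar estimating function standing in for ${g}_{i}\left(\beta \mathrm{,}\rho \right)$) solves (5) by root finding, forms the weights in (4), and confirms that they sum to one, satisfy the moment constraint, and reproduce the identity in (6).

```python
import numpy as np
from scipy.optimize import brentq

# Toy example (ours): scalar values playing the role of g_i(beta, rho)
g = np.array([-1.0, 0.5, 2.0, -0.5, 1.0])
k = len(g)

# Equation (5): sum_i g_i / (1 + lambda * g_i) = 0, with 1 + lambda*g_i > 0
def score(lam):
    return np.sum(g / (1.0 + lam * g))

# the admissible interval for lambda is (-1/max(g), -1/min(g));
# search strictly inside it
lam_hat = brentq(score, -1.0 / g.max() + 1e-6, -1.0 / g.min() - 1e-6)

# Equation (4): the maximizing weights
p_hat = 1.0 / (k * (1.0 + lam_hat * g))

# Equation (6): the two expressions for W_l agree
W_from_p = -np.sum(np.log(k * p_hat))
W_from_lam = np.sum(np.log(1.0 + lam_hat * g))
```

Because $0$ lies strictly inside the convex hull of the toy ${g}_{i}$ values, the root of (5) exists and all weights are positive.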

Under some regularity conditions, we have $2{W}_{l}\left({\beta}_{0}\right)\stackrel{D}{\to}{\chi}_{p}^{2}$ as $k\to \infty $ if

$E\left[g\left({\beta}_{0}\mathrm{,}\stackrel{^}{\rho}\left({\beta}_{0}\right)\right){g}^{\text{T}}\left({\beta}_{0}\mathrm{,}\stackrel{^}{\rho}\left({\beta}_{0}\right)\right)\right]$

is of full rank, where ${\beta}_{0}$ is the true parameter value. This conclusion is similar to that for the parametric likelihood ratio function. The vector $\beta $ can be estimated by minimizing

${W}_{l}\left(\beta \right)={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\mathrm{log}\left(1+{\stackrel{^}{\lambda}}^{\text{T}}\left(\beta \right)g\left(\beta \mathrm{,}\rho \right)\right)$ (7)

with respect to $\beta $. Note that the profile log-likelihood ratio function can be minimized with respect to $\beta $ when $\rho $ is known. In practice, $\rho $ is unknown, but can be consistently estimated using the method of moments by [6].
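For a stationary lag correlation, a moment-type estimator can be computed from the standardized residuals ${\stackrel{~}{y}}_{it}=\left({y}_{it}-{\mu}_{it}\right)/\sqrt{v\left({\mu}_{it}\right)}$. The sketch below is our own rendering in the spirit of the moment estimators of [6] and may differ from the exact estimator there in finite-sample details; it is checked on simulated AR(1) errors.

```python
import numpy as np

def lag_corr_mom(res):
    """Moment-type estimates of the lag correlations rho_l, l = 1, ..., m-1.

    res is a (k, m) array of standardized residuals
    (y_it - mu_it) / sqrt(v(mu_it)); a sketch in the spirit of [6].
    """
    k, m = res.shape
    denom = np.sum(res**2) / (k * m)
    rho = np.empty(m - 1)
    for l in range(1, m):
        num = np.sum(res[:, :-l] * res[:, l:]) / (k * (m - l))
        rho[l - 1] = num / denom
    return rho

# Toy check on simulated stationary AR(1) errors with rho = 0.6
rng = np.random.default_rng(0)
k, m, true_rho = 2000, 5, 0.6
e = np.empty((k, m))
e[:, 0] = rng.standard_normal(k)
for t in range(1, m):
    e[:, t] = true_rho * e[:, t - 1] + np.sqrt(1 - true_rho**2) * rng.standard_normal(k)
rho_hat = lag_corr_mom(e)
```

For AR(1) errors the lag-$\mathcal{l}$ correlation is ${\rho}^{\mathcal{l}}$, so the estimates should be close to $0.6$ and $0.36$ at lags 1 and 2.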

The computation of the profile EL function is a key step in EL applications, and it involves constrained maximization. In some situations, the algorithm may fail because of poor initial values of the parameters. Moreover, the poor accuracy of EL confidence regions has been reported by several authors, including [17] [18] [22] [23] [24] and [25]. In the next subsection we will discuss how to address these problems in the context of longitudinal data.

Adjusted Empirical Likelihood

The computation of the profile EL ratio function ${W}_{l}\left(\beta \right)$ given in (7) is a key step in EL applications. The solution for $\lambda $ must satisfy $\left\{1+{\stackrel{^}{\lambda}}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta ,\stackrel{^}{\rho}\left(\beta \right)\right)\right\}>0$ for all $i=\mathrm{1,}\cdots \mathrm{,}k$. A necessary and sufficient condition for its existence is that the vector $0$ is an interior point of the convex hull of $\left\{{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\mathrm{,}i=\mathrm{1,}\cdots \mathrm{,}k\right\}$. Under some moment conditions on $g\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)$ [18], the convex hull contains $0$ as an interior point with probability 1 as $k\to \infty $. However, when $\beta $ is not close to the true parameter value ${\beta}_{0}$ or when k is small, it is possible that the solution of (5) does not exist. To avoid this problem, [25] introduced the adjusted EL (AEL). The AEL is obtained by adding a pseudo-observation to the data set. It overcomes the difficulties arising when the estimating equations for $\lambda $ have no solution.

Let ${g}_{i}\left(\beta \right)={g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)$ and ${\stackrel{\xaf}{g}}_{k}\left(\beta \right)=\frac{1}{k}{\displaystyle {\sum}_{i=1}^{k}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{g}_{i}\left(\beta \right)$ for any given $\beta $. For some positive constant ${b}_{k}$, we add the artificial observation

${g}_{k+1}\left(\beta \right)=-\frac{{b}_{k}}{k}{\displaystyle \underset{i=1}{\overset{k}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{g}_{i}\left(\beta \right)=-{b}_{k}{\stackrel{\xaf}{g}}_{k}\left(\beta \right)$

with ${b}_{k}=\mathrm{log}\left(k\right)/2$. The adjusted profile empirical log-likelihood ratio function is

$\begin{array}{l}{W}_{l}^{\mathrm{*}}\left(\beta \right)\\ =\mathrm{inf}\left[-{\displaystyle \underset{i=1}{\overset{k+1}{\sum}}}\mathrm{log}\left[\left(k+1\right){p}_{i}\right]:{p}_{i}\ge 0,i=1,2,\cdots ,k+1;{\displaystyle \underset{i=1}{\overset{k+1}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{p}_{i}=1,{\displaystyle \underset{i=1}{\overset{k+1}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{p}_{i}{g}_{i}\left(\beta \right)=0\right]\\ ={\displaystyle \underset{i=1}{\overset{k+1}{\sum}}}log\left[1+{\stackrel{^}{\lambda}}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \right)\right]\end{array}$

with $\stackrel{^}{\lambda}=\stackrel{^}{\lambda}\left(\beta \right)$ being the solution of ${\displaystyle {\sum}_{i=1}^{k+1}}\frac{{g}_{i}\left(\beta \right)}{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta \right)}=0$. Note that $0$ always lies inside the convex hull of $\left\{{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\mathrm{,}i=\mathrm{1,}\cdots \mathrm{,}k+1\right\}$, so the adjusted profile empirical log-likelihood ratio function is well defined after adding the pseudo-value ${g}_{k+1}\left(\beta \right)$. For a wide range of ${b}_{k}$, following [25], we can show that the adjusted profile EL ratio function ${W}_{l}^{\mathrm{*}}\left(\beta \right)$ has the same asymptotic properties as the unadjusted profile EL ratio function ${W}_{l}\left(\beta \right)$. We define the adjusted profile EL estimator of $\beta $ to be the minimizer of

${W}_{l}^{\mathrm{*}}\left(\beta \right)={\displaystyle \underset{i=1}{\overset{k+1}{\sum}}}\left[log\left(1+{\stackrel{^}{\lambda}}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right)\right]$ (8)

with respect to $\beta $.
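A minimal sketch (ours) of the adjustment: append the pseudo-observation ${g}_{k+1}=-{b}_{k}{\stackrel{\xaf}{g}}_{k}$ with ${b}_{k}=\mathrm{log}\left(k\right)/2$ to the k estimating-function values, after which the origin is guaranteed to lie inside the convex hull of the augmented set.

```python
import numpy as np

def ael_augment(G):
    # G: (k, p) array whose i-th row is g_i(beta, rho_hat(beta)).
    # Append the pseudo-observation g_{k+1} = -b_k * gbar_k, b_k = log(k)/2.
    k = G.shape[0]
    b_k = np.log(k) / 2.0
    return np.vstack([G, -b_k * G.mean(axis=0, keepdims=True)])

# Example: all g_i positive, so the original convex hull excludes 0 ...
G = np.array([[0.5], [1.0], [2.0]])
G_adj = ael_augment(G)
# ... but the augmented hull contains 0 in its interior
```

In this example the unadjusted equation for $\lambda $ has no solution, while the augmented set straddles the origin, so the AEL is always well defined.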

The adjustment is particularly useful because the algorithm guarantees a solution even for some undesirable values of $\beta $. The confidence regions constructed via the AEL have better coverage probabilities than those of the regular EL, and the algorithm reliably yields a solution for $\lambda $, particularly when the sample size is small. The improved coverage probability is achieved without resorting to more complex procedures such as Bartlett correction or bootstrap calibration.

In the next section, following [17], we state the results on the distributional properties of the adjusted profile EL estimate $\stackrel{^}{\beta}$. We construct these theorems based on the GEE with lag correlation given in (3), since the GEE estimate of $\beta $ under an arbitrary working correlation structure is not necessarily consistent.

3. Main Results

In this section, we present the first-order asymptotic properties of $\stackrel{^}{\beta}$ and the adjusted profile empirical log-likelihood ratio statistics. We first introduce some notation and regularity conditions that are used in the theorems and lemma.

Regularity Conditions:

A1. $E\left\{g\left({\beta}_{0}\mathrm{,}\stackrel{^}{\rho}\left({\beta}_{0}\right)\right)\right\}=0$, where ${\beta}_{0}$ is the true value of $\beta $ and $g\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)={\displaystyle {\sum}_{i=1}^{k}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{D}_{i}^{\text{T}}{\Sigma}_{i}^{-1}\left(\stackrel{^}{\rho}\right)\left({y}_{i}-{\mu}_{i}\right)$ is the estimating function for $\beta \in {\mathcal{R}}^{p}$ (defined in (3)), with ${D}_{i}=\partial \left\{{{a}^{\prime}}_{i}\left(\theta \right)\right\}/\partial \beta $, ${\Sigma}_{i}\left(\stackrel{^}{\rho}\right)={A}_{i}^{1/2}{C}_{i}^{\mathrm{*}}\left(\stackrel{^}{\rho}\right){A}_{i}^{1/2}$, and ${A}_{i}=diag\left\{{{a}^{\prime \prime}}_{i}\left(\theta \right)\right\}$ for $i=\mathrm{1,2,}\cdots \mathrm{,}k$. Let ${\stackrel{\xaf}{g}}_{k}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)=\frac{1}{k}{\displaystyle {\sum}_{i=1}^{k}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)$ and ${g}_{k+1}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)=-{b}_{k}{\stackrel{\xaf}{g}}_{k}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)$, where ${b}_{k}$ is a positive constant.

A2. $\left\{{a}^{\prime}\left(\theta \right)\right\}$ is three times continuously differentiable and $\left\{{a}^{\prime \prime}\left(\theta \right)\right\}>0$ in ${\Theta}^{\circ}$, where $\Theta $ is the natural parameter space of the exponential family distributions presented in (1) and ${\Theta}^{\circ}$ is the interior of $\Theta $. Also, $h\left(\eta \right)$ is three times continuously differentiable and ${h}^{\prime}\left(\eta \right)>0$.

A3. ${E}_{{\beta}_{0}}\left\{\frac{\partial {g}_{k}\left(\beta \mathrm{,}\rho \right)}{\partial \beta}\right\}$ and ${V}_{k}\left({\beta}_{0}\mathrm{,}\stackrel{^}{\rho}\left({\beta}_{0}\right)\right)={E}_{{\beta}_{0}}\left\{{g}_{k}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right){g}_{k}^{\text{T}}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}$ are positive definite.

A4. The rank of $E\left\{\frac{\partial {g}_{k}\left(\beta \mathrm{,}\rho \right)}{\partial \beta}\right\}$ is p in a neighbourhood of ${\beta}_{0}$.

A5. There exists a function $G\left(y\mathrm{,}X\right)$ such that, in a neighbourhood of ${\beta}_{0}$,

$\left|\frac{\partial {g}_{k}\left(\beta \mathrm{,}\rho \right)}{\partial \beta}\right|<G\left(y\mathrm{,}X\right)\mathrm{,}{\Vert {g}_{k}\left(y\mathrm{,}X\mathrm{,}\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\Vert}^{3}<G\left(y\mathrm{,}X\right)$

with $E\left[G\left(y\mathrm{,}X\right)\right]<\infty $.

Theorem 3.1. Under regularity conditions A1-A5, suppose $\left({y}_{i}\mathrm{,}{X}_{i}\right)\mathrm{,}i=\mathrm{1,2,}\cdots \mathrm{,}k$, is a set of independent and identically distributed random vectors. Let

$2{W}_{l}^{\mathrm{*}}\left(\beta \right)=2{\displaystyle \underset{i=1}{\overset{k+1}{\sum}}}log\left[1+{\stackrel{^}{\lambda}}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right]$ (9)

be the adjusted profile empirical log-likelihood ratio function. Then, as $k\to \infty $, provided that $\stackrel{^}{\rho}\left(\beta \right)$ is a consistent estimator in the neighbourhood of $\beta $ and the correlation matrix of ${y}_{i}$ is ${C}_{i}^{\mathrm{*}}\left(\rho \right)$ defined in (3), ${W}_{l}^{\mathrm{*}}\left(\beta \right)$ attains its minimum value at some point $\stackrel{^}{\beta}$ in the interior of the ball $\Vert \stackrel{^}{\beta}-{\beta}_{0}\Vert <{k}^{-1/3}$, in probability.

This result corresponds to Lemma 1 in [17], which concerns the consistency of the maximum empirical likelihood estimate (MELE) for independent and identically distributed data. Following [17], under the regularity conditions A1-A5, we obtain a subject-wise MELE: as $k\to \infty $, with probability tending to 1, the function ${W}_{l}^{\mathrm{*}}\left(\beta \right)$ has a minimizer within the open ball $\Vert \stackrel{^}{\beta}-{\beta}_{0}\Vert <{k}^{-1/3}$. The proof is similar to that of Lemma 1 in [19], and the details are omitted here.

Theorem 3.2. In addition to the regularity conditions A1-A5, suppose that $\frac{{\partial}^{2}g\left(\beta \mathrm{,}\rho \right)}{\partial \beta \partial {\beta}^{\text{T}}}$ is bounded by some integrable function $G\left(y\mathrm{,}X\right)$ in a neighbourhood of ${\beta}_{0}$. Then, there exists a sequence of adjusted profile EL estimates $\stackrel{^}{\beta}$ of $\beta $ such that

$\sqrt{k}\left(\stackrel{^}{\beta}-{\beta}_{0}\right)\stackrel{D}{\to}N\left(0\mathrm{,}\Delta \right)\mathrm{,}$

where

$\Delta ={\left[{E}_{{\beta}_{0}}{\left\{\frac{\partial g\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)}{\partial \beta}\right\}}^{\text{T}}{\left[{E}_{{\beta}_{0}}\left\{g\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right){g}^{\text{T}}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}\right]}^{-1}{E}_{{\beta}_{0}}\left\{\frac{\partial g\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)}{\partial \beta}\right\}\right]}^{-1}\mathrm{.}$

It is noted that the proof of Theorem 3.2 is similar to the proof of Theorem 1 in [17]. The details are thus omitted here.

Theorem 3.3. Under regularity conditions A1-A5, the adjusted profile empirical log-likelihood ratio statistic $2{W}_{l}^{\mathrm{*}}\left({\beta}_{0}\right)$, where ${\beta}_{0}$ is the true value of $\beta $, is asymptotically chi-squared distributed with p degrees of freedom.

The proof of Theorem 3.3 can be achieved by using similar arguments as those used in the proof of Theorem 2 in [17]. The details are thus omitted here.

4. Algorithm

To implement our method, we need an efficient algorithm. We minimize the profile EL ratio function ${W}_{l}\left(\beta \right)$ with respect to $\beta $ using a Newton-Raphson algorithm. At each Newton-Raphson iteration, we compute the Lagrange multiplier for the updated values of $\beta $ and $\stackrel{^}{\rho}\left(\beta \right)$. We use the modified Newton-Raphson algorithm proposed by [26] to compute the Lagrange multiplier for a given value of the parameter; this method is numerically stable. The algorithm given in Sections 4.1, 4.2, and 4.3 can easily be extended to the AEL by the addition of a pseudo-value ${g}_{k+1}\left(\beta \right)=-{b}_{k}{\stackrel{\xaf}{g}}_{k}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)$, where ${b}_{k}$ is a positive constant.

4.1. Computation of Lagrange Multiplier

The Lagrange multiplier $\lambda $ is estimated by solving the equation

${\displaystyle \underset{i=1}{\overset{k}{\sum}}}\frac{{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)}{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)}=0$

for a given set of vectors ${g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)$, $i=1,2,\cdots ,k$. Note that the above equation is the derivative of R with respect to $\lambda $ for a given $\beta $, where

$R={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\mathrm{log}\left\{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta ,\stackrel{^}{\rho}\left(\beta \right)\right)\right\}.$ (10)

In the EL problem, the solution must satisfy

$1+{\lambda}^{\text{T}}{g}_{i}\left(\beta ,\stackrel{^}{\rho}\left(\beta \right)\right)>0,\mathrm{}i=1,2,\cdots ,k.$

The modified Newton-Raphson algorithm for estimating $\lambda $ for a given value of $\beta $ and $\stackrel{^}{\rho}\left(\beta \right)$ is as follows:

1. Set ${\lambda}^{c}=0$, $c=0$, ${\gamma}^{c}=1$, $\epsilon ={10}^{-8}$, $\rho ={\rho}^{0}$, and $\beta ={\beta}^{0}$.

2. Let ${R}^{\lambda}$ and ${R}^{\lambda \lambda}$ be the first and second partial derivatives of R (given in (10)) with respect to $\lambda $:

${R}^{\lambda}={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\left[\frac{{g}_{i}\left(\beta ,\stackrel{^}{\rho}\left(\beta \right)\right)}{\left\{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta ,\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}\right]\mathrm{,}$

${R}^{\lambda \lambda}=-{\displaystyle \underset{i=1}{\overset{k}{\sum}}}\left[\frac{{g}_{i}\left(\beta ,\stackrel{^}{\rho}\left(\beta \right)\right){g}_{i}^{\text{T}}\left(\beta ,\stackrel{^}{\rho}\left(\beta \right)\right)}{{\left\{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta ,\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}^{2}}\right].$

Compute ${R}^{\lambda}$ and ${R}^{\lambda \lambda}$ for $\lambda ={\lambda}^{c}$ and let $\Delta \left({\lambda}^{c}\right)=-{\left[{R}^{\lambda \lambda}\right]}^{-1}{R}^{\lambda}$.

If $\Vert \Delta \left({\lambda}^{c}\right)\Vert <\epsilon $ stop the algorithm and report ${\lambda}^{c}$; otherwise continue.

3. Calculate ${\delta}^{c}={\gamma}^{c}\Delta \left({\lambda}^{c}\right)$. If $1+{\left({\lambda}^{c}-{\delta}^{c}\right)}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\le 0$ for some i, set ${\gamma}^{c}=\frac{{\gamma}^{c}}{2}$ and go to Step 2.

4. Set ${\lambda}^{c+1}={\lambda}^{c}-{\delta}^{c}$, $c=c+1$, and ${\gamma}^{c+1}={\left(c+1\right)}^{-\frac{1}{2}}$, and go to Step 2. The check in Step 3 guarantees that ${p}_{i}>0$ and that the optimization moves in the right direction.
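The steps above can be sketched in Python as follows (our illustration, not the authors' code; the update is written as the standard Newton step toward the root of ${R}^{\lambda}$, with the step-halving of Step 3 keeping every $1+{\lambda}^{\text{T}}{g}_{i}>0$).

```python
import numpy as np

def solve_lambda(G, tol=1e-8, max_iter=500):
    """Modified Newton-Raphson for the Lagrange multiplier (Section 4.1 sketch).

    G is a (k, p) array with rows g_i(beta, rho_hat(beta)).  Our own
    rendering of the step-halving scheme; details may differ from [26].
    """
    k, p = G.shape
    lam = np.zeros(p)
    gamma = 1.0
    for c in range(max_iter):
        denom = 1.0 + G @ lam                  # 1 + lambda^T g_i, kept positive
        W = G / denom[:, None]
        R_lam = W.sum(axis=0)                  # R^lambda
        R_lamlam = -W.T @ W                    # R^{lambda lambda}
        step = np.linalg.solve(R_lamlam, R_lam)
        if np.linalg.norm(step) < tol:         # stopping rule of Step 2
            break
        delta = gamma * step
        # Step 3: halve the step until every 1 + (lambda - delta)^T g_i > 0
        while np.any(1.0 + G @ (lam - delta) <= 0.0):
            delta /= 2.0
        lam = lam - delta                      # Step 4: Newton step toward the root
        gamma = 1.0 / np.sqrt(c + 2.0)
    return lam
```

On a toy data set with $0$ inside the convex hull of the ${g}_{i}$, the returned $\stackrel{^}{\lambda}$ solves the score equation and yields positive weights summing to one.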

4.2. Algorithm for Optimizing Profile Empirical Likelihood Ratio Function

Let $\stackrel{^}{\lambda}\left(\beta \right)$ be the estimated value of $\lambda $ for a given $\beta $. We minimize the profile EL ratio function defined in (7) over $\beta $. The Newton-Raphson algorithm is as follows:

1. Set $\beta ={\beta}^{0}$, $h=0$, and $\epsilon ={10}^{-8}$.

2. Let $\stackrel{^}{\lambda}\left(\beta \right)$ and $\stackrel{^}{\rho}\left(\beta \right)$ be the estimated values of $\lambda $ and $\rho $ for the current $\beta $.

3. Compute the new estimate of $\beta $ via

${\beta}^{\left(h+1\right)}={\beta}^{\left(h\right)}-{\left\{{W}_{l}^{\beta \beta}\left({\beta}^{h}\right)\right\}}^{-1}\left\{{W}_{l}^{\beta}\left({\beta}^{h}\right)\right\}$ (11)

where ${W}_{l}\left(\beta \right)$ is the profile empirical log-likelihood ratio function defined in (7), with

${W}_{l}^{\beta}=\frac{\partial {W}_{l}\left(\beta \right)}{\partial \beta}$, ${W}_{l}^{\beta \beta}=\frac{{\partial}^{2}{W}_{l}\left(\beta \right)}{\partial \beta \partial {\beta}^{\text{T}}}$.

Note that to compute ${W}_{l}^{\beta}$ and ${W}_{l}^{\beta \beta}$, we need to estimate the Lagrange multiplier $\stackrel{^}{\lambda}\left(\beta \right)$ as in Section 4.1. In practice, $\rho $ is unknown, and the correlations can be consistently estimated using the method of moments of [6].

4. If $max\left|{\beta}^{\left(h+1\right)}-{\beta}^{\left(h\right)}\right|<\epsilon $ stop the algorithm and report ${\beta}^{\left(h+1\right)}$; otherwise set $h=h+1$ and go to Step 3.

The simplified expressions for ${W}_{l}^{\beta}$ and ${W}_{l}^{\beta \beta}$ are as follows. Let ${R}^{\beta}$, ${R}^{\beta \beta}$, and ${R}^{\beta \lambda}$ be the first and second partial derivatives of (10) with respect to $\beta $ and $\lambda $

${R}^{\beta}={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\left[\frac{{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\lambda}{\left\{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}\right]\mathrm{,}$

${R}^{\beta \beta}={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\left\{\left[\frac{{{g}^{\prime \prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right){\lambda}^{\text{T}}}{\left\{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}\right]-\left[\frac{{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\lambda {\lambda}^{\text{T}}{\left[{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right]}^{\text{T}}}{{\left\{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}^{2}}\right]\right\}\mathrm{,}$

and

${R}^{\beta \lambda}={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\left[\frac{\left\{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)-{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\lambda {\left[{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right]}^{\text{T}}}{{\left\{1+{\lambda}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}^{2}}\right]\mathrm{.}$

The first derivative of ${W}_{l}\left(\beta \right)$ with respect to $\beta $ is

$\begin{array}{c}{W}_{l}^{\beta}={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\left[\frac{{\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)+{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\lambda \left(\beta \right)}{\left\{1+{\lambda}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}\right]\\ ={\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]}^{\text{T}}{R}^{\lambda}+{R}^{\beta}\mathrm{.}\end{array}$

Note that for $\lambda =\stackrel{^}{\lambda}\left(\beta \right)$, ${R}^{\lambda}=0$. Therefore,

${W}_{l}^{\beta}={R}^{\beta}\mathrm{.}$ (12)

Similarly, the second derivative of ${W}_{l}\left(\beta \right)$ with respect to $\beta $ is

$\begin{array}{l}{W}_{l}^{\beta \beta}={\displaystyle \underset{i=1}{\overset{k}{\sum}}}\left[\frac{\left\{1+{\lambda}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}\left\{\left[\frac{{\partial}^{2}\lambda \left(\beta \right)}{\partial \beta \partial {\beta}^{\text{T}}}\right]{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)+2{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right){\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]}^{\text{T}}+{{g}^{\prime \prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\lambda \left(\beta \right)\right\}}{{\left\{1+{\lambda}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}^{2}}\right]\\ -{\displaystyle \underset{i=1}{\overset{k}{\sum}}}\left[\frac{\left\{{\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)+{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\lambda \left(\beta \right)\right\}{\left\{{\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]}^{\text{T}}{g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)+{{g}^{\prime}}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\lambda \left(\beta \right)\right\}}^{\text{T}}}{{\left\{1+{\lambda}^{\text{T}}\left(\beta \right){g}_{i}\left(\beta \mathrm{,}\stackrel{^}{\rho}\left(\beta \right)\right)\right\}}^{2}}\right]\\ ={\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]}^{\text{T}}{R}^{\lambda \lambda}\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]+2{\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]}^{\text{T}}{R}^{\lambda \beta}+{R}^{\beta \beta}\mathrm{.}\end{array}$

Following [18], a local quadratic approximation to $R$ leads to

$\left[\frac{\partial \lambda \left(\beta \right)}{\partial \beta}\right]=-{\left({R}^{\lambda \lambda}\right)}^{-1}{R}^{\lambda \beta}\mathrm{,}$

so

${W}_{l}^{\beta \beta}={R}^{\beta \beta}-{R}^{\beta \lambda}{\left({R}^{\lambda \lambda}\right)}^{-1}{R}^{\lambda \beta}\mathrm{.}$ (13)
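With the simplifications (12) and (13), one outer update (11) reduces to plain linear algebra on the blocks of derivatives of $R$. The sketch below is illustrative: the function name, the array shapes, and the toy inputs in the usage are our assumptions, and in practice the blocks would be evaluated at $\stackrel{^}{\lambda}\left(\beta \right)$ as in Section 4.1.

```python
import numpy as np

def profile_newton_step(beta, R_b, R_bb, R_bl, R_ll):
    """One Newton-Raphson update (11) for beta using (12) and (13).

    R_b  : (p,)   first derivative R^beta at lambda = lambda_hat(beta)
    R_bb : (p, p) second derivative R^{beta beta}
    R_bl : (p, q) mixed derivative R^{beta lambda}
    R_ll : (q, q) second derivative R^{lambda lambda}
    """
    W_b = R_b                                              # (12): W_l^beta = R^beta
    W_bb = R_bb - R_bl @ np.linalg.solve(R_ll, R_bl.T)     # (13)
    return beta - np.linalg.solve(W_bb, W_b)
```

Since only linear solves appear, neither ${R}^{\lambda \lambda}$ nor ${W}_{l}^{\beta \beta}$ needs to be inverted explicitly.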

4.3. Construction of Confidence Interval

We use the bisection method to construct the lower and upper confidence limits based on the profile EL ratio for $\beta $. Let $\stackrel{^}{\beta}={\left({\stackrel{^}{\beta}}_{1}\mathrm{,}{\stackrel{^}{\beta}}_{2}\right)}^{\text{T}}$ be the estimate of $\beta $ from Section 4.2, where ${\stackrel{^}{\beta}}_{1}$ is a scalar and ${\stackrel{^}{\beta}}_{2}$ is the $\left(p-1\right)\times 1$ vector of the remaining parameters; we wish to construct a confidence interval for ${\beta}_{1}$.

1. Bracket the lower confidence limit ${\beta}_{\mathrm{1,}L}$ for ${\beta}_{1}$. Set ${L}_{1}={\stackrel{^}{\beta}}_{1}$, ${L}_{2}={\stackrel{^}{\beta}}_{1}-a\times \text{SE}\left({\stackrel{^}{\beta}}_{1}\right)$, and $\epsilon =1\times {10}^{-5}$, where $\text{SE}\left({\stackrel{^}{\beta}}_{1}\right)$ is the standard error of ${\stackrel{^}{\beta}}_{1}$ obtained by any existing method. We choose $a$ such that ${W}_{l}\left({L}_{2}\mathrm{,}{\stackrel{^}{\beta}}_{2}\right)>\left[{\chi}_{\mathrm{1,1}-\alpha}^{2}\right]/2>{W}_{l}\left({L}_{1}\mathrm{,}{\stackrel{^}{\beta}}_{2}\right)$, where ${\chi}_{\mathrm{1,1}-\alpha}^{2}$ is the $\left(1-\alpha \right)\text{th}$ quantile of the ${\chi}^{2}$ distribution with one degree of freedom.

2. Compute the profile empirical log-likelihood ratio values ${W}_{1}=2{W}_{l}\left({L}_{1}\mathrm{,}{\stackrel{^}{\beta}}_{2}\right)$ and ${W}_{2}=2{W}_{l}\left({L}_{2}\mathrm{,}{\stackrel{^}{\beta}}_{2}\right)$.

3. Minimize the profile EL ratio function defined in (7) over ${\beta}_{2}$ for a given ${L}_{new}=\left({L}_{1}+{L}_{2}\right)/2$. Let ${\stackrel{^}{\beta}}_{2new}$ be the new estimate of ${\beta}_{2}$ and ${W}_{new}=2{W}_{l}\left({L}_{new}\mathrm{,}{\stackrel{^}{\beta}}_{2new}\right)$.

4. If ${W}_{new}<{\chi}_{1,1-\alpha}^{2}$, set ${L}_{1}={L}_{new}$ and ${W}_{1}={W}_{new}$ ; else set ${L}_{2}={L}_{new}$ and ${W}_{2}={W}_{new}$.

5. If $\left|{W}_{1}-{W}_{2}\right|<\epsilon $ stop the algorithm and report ${\beta}_{\mathrm{1,}L}={L}_{new}$; otherwise go to Step 3.

We can use this approach to construct the upper confidence limit by setting ${U}_{1}={\stackrel{^}{\beta}}_{1}$ and ${U}_{2}={\stackrel{^}{\beta}}_{1}+a\times \text{SE}\left({\stackrel{^}{\beta}}_{1}\right)$.
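The bisection above can be sketched as follows. The wrapper `profile_w`, which must re-minimize over ${\beta}_{2}$ at each candidate value as in Step 3, and the hard-coded 95% cutoff are illustrative assumptions, not the authors' implementation.

```python
def el_lower_limit(profile_w, beta1_hat, se, a=3.0, eps=1e-5, max_iter=200):
    """Bisection for the lower EL confidence limit of beta_1 at the 95% level.

    profile_w(b) must return 2 * W_l(b, beta2_hat_new), the profile empirical
    log-likelihood ratio with beta_2 re-minimized at beta_1 = b.
    """
    cut = 3.8414588                      # chi^2_{1, 0.95} quantile
    L1 = beta1_hat                       # inner endpoint: W(L1) below the cutoff
    L2 = beta1_hat - a * se              # a chosen so that W(L2) exceeds the cutoff
    Lnew = L2
    for _ in range(max_iter):
        Lnew = 0.5 * (L1 + L2)
        if profile_w(Lnew) < cut:
            L1 = Lnew                    # still inside the confidence region
        else:
            L2 = Lnew                    # outside: move the outer bracket in
        if abs(L1 - L2) < eps:
            break
    return Lnew
```

With a quadratic (Wald-type) profile $2{W}_{l}\left(b\right)={\left\{\left(b-{\stackrel{^}{\beta}}_{1}\right)/\text{SE}\right\}}^{2}$ this recovers the usual ${\stackrel{^}{\beta}}_{1}-1.96\,\text{SE}$ lower limit; for a genuine EL profile the limit need not be symmetric about ${\stackrel{^}{\beta}}_{1}$.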

5. Performance Analysis

In this section, we conduct simulation studies to investigate the performance of our EL-based approach. We compute the coverage probabilities based on the ordinary EL and the AEL and compare them with those of the GEE approach, which is based on a normal approximation. We also compute the coverage probabilities based on the extended empirical likelihood (EEL) of [27] and [28], which expands the EL domain geometrically and improves the coverage probabilities. We generate count and continuous responses with different correlation structures and compare the methods under different working correlation structures.

5.1. Correlation Models for Stationary Count Data

We consider the stationary correlation models for count data discussed by [29] and [30]. The three models used to generate the data are

(i) Poisson Autoregressive Order 1 (AR(1)) Model

Let ${y}_{i1}\sim \text{Poi}\left({\tilde{\mu}}_{i}\right)$, where ${\tilde{\mu}}_{i}=\mathrm{exp}\left({\tilde{x}}_{i}\beta \right)$. The repeated responses follow the AR lag 1 dynamic model given by

${y}_{it}=\rho \ast {y}_{i\mathrm{,}t-1}+{d}_{it}\mathrm{,}t=\mathrm{2,}\cdots \mathrm{,}{m}_{i}\mathrm{.}$ (14)

Given ${y}_{i\mathrm{,}t-1}$, $\rho \ast {y}_{i\mathrm{,}t-1}$ is the binomial thinning operation. That is,

$\rho \ast {y}_{i,t-1}={\displaystyle \underset{j=1}{\overset{{y}_{i,t-1}}{\sum}}}{b}_{j}\left(\rho \right)={z}_{i,t-1},$

where the ${b}_{j}\left(\rho \right)$ are independent and identically distributed $\text{Bernoulli}\left(\rho \right)$ random variables. We assume that ${d}_{it}\sim \text{Poi}\left({\tilde{\mu}}_{i}\left(1-\rho \right)\right)$ and that it is independent of ${z}_{i\mathrm{,}t-1}$. Let ${\tilde{x}}_{i}=\left({\tilde{x}}_{i1}\mathrm{,}\cdots \mathrm{,}{\tilde{x}}_{ip}\right)$ be the vector of time-independent covariates for the ith individual.
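The AR(1) thinning model (14) is straightforward to simulate. In this sketch, the function name and the standard-normal time-independent covariates are our choices (matching the setup used later in the simulations), not the authors' code.

```python
import numpy as np

def simulate_poisson_ar1(k, m, beta, rho, rng):
    """Stationary Poisson AR(1) counts via binomial thinning, model (14)."""
    x = rng.normal(0.0, 1.0, size=(k, len(beta)))   # time-independent covariates
    mu = np.exp(x @ beta)                           # mu_i = exp(x_i beta)
    y = np.empty((k, m), dtype=np.int64)
    y[:, 0] = rng.poisson(mu)                       # y_{i1} ~ Poi(mu_i)
    for t in range(1, m):
        thinned = rng.binomial(y[:, t - 1], rho)    # rho * y_{i,t-1}: sum of Bernoulli(rho)
        y[:, t] = thinned + rng.poisson(mu * (1.0 - rho))   # d_it ~ Poi(mu_i (1 - rho))
    return x, y
```

Thinning preserves the marginal mean, $\text{E}\left({y}_{it}\right)=\rho {\tilde{\mu}}_{i}+{\tilde{\mu}}_{i}\left(1-\rho \right)={\tilde{\mu}}_{i}$ at every t, with lag-h correlation ${\rho}^{h}$ within a subject.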

(ii) Poisson Moving Average Order 1 (MA(1)) Model

The repeated responses follow the MA lag 1 dynamic model given by

${y}_{it}=\rho \ast {d}_{i,t-1}+{d}_{it},\mathrm{}t=2,\cdots ,{m}_{i},$ (15)

where $\rho \ast {d}_{i\mathrm{,}t-1}={\displaystyle {\sum}_{j=1}^{{d}_{i\mathrm{,}t-1}}}{b}_{j}\left(\rho \right)$ is a binomial thinning operation and ${d}_{it}\sim \text{Poi}\left[\frac{{\tilde{\mu}}_{i}}{1+\rho}\right]$, $t=\mathrm{0,}\cdots \mathrm{,}{m}_{i}$, with ${\tilde{\mu}}_{i}=\mathrm{exp}\left({\tilde{x}}_{i}\beta \right)$. Here $t=0$ is the initial time.

(iii) Poisson Equally Correlated Model

Let ${y}_{i0}\sim \text{Poi}\left({\tilde{\mu}}_{i}\right)$ and ${d}_{it}\sim \text{Poi}\left[{\tilde{\mu}}_{i}\left(1-\rho \right)\right]$ for all $t=1,\cdots ,{m}_{i}$. The repeated responses follow the dynamic equicorrelation model given by

${y}_{it}=\rho \ast {y}_{i0}+{d}_{it},\text{ for }t=1,\cdots ,{m}_{i}.$ (16)

We simulated 1000 data sets from each of these three models (AR(1), EQC, and MA(1)) and used the EL-based methods to estimate the parameters under the AR(1), EQC, and MA(1) working correlations as well as the lag correlation. In each simulation we used the parameters $\beta ={\left({\beta}_{1}\mathrm{,}{\beta}_{2}\right)}^{\text{T}}={\left(\mathrm{0.3,0.2}\right)}^{\text{T}}$ and $\rho =0.5$. We considered $k=100$ subjects and $m=4$ time points. For the ith subject, we generated the covariates ${\tilde{x}}_{i}=\left({\tilde{x}}_{i1}\mathrm{,}{\tilde{x}}_{i2}\right)$ from a normal distribution with mean 0 and standard deviation 1. For the analysis, we took the working correlation to be either the true correlation or a lag correlation. We did not consider other values of $\rho $ since the working correlation structure may lead to no solution for $\stackrel{^}{\alpha}$ in some situations.

Table 1 gives the average estimated values of the regression coefficients with the corresponding simulated standard errors in parentheses for the independent, AR(1), EQC, and MA(1) models. We also give the coverage probabilities for ${\beta}_{1}$ and ${\beta}_{2}$ for the 0.95 and 0.99 confidence levels with the average width of the CI in parentheses. The results in Table 1 show that the estimates ${\stackrel{^}{\beta}}_{1}$ and ${\stackrel{^}{\beta}}_{2}$ are close to the true values, and that the widths and coverage probabilities of the intervals based on the EL, EEL, and AEL are similar to those of the GEE. For instance, in the AR(1)/AR(1) case (true model/working correlation structure) the coverage probabilities of ${\stackrel{^}{\beta}}_{1}$ based on the GEE, EL, EEL, and AEL are 0.947, 0.928, 0.937, and 0.937 respectively for the nominal level of 0.95. For ${\stackrel{^}{\beta}}_{2}$, these probabilities are 0.954, 0.934, 0.940, and 0.942 for the same nominal level. The intervals based on the EL show slight undercoverage compared with those based on the GEE, whereas the EEL and AEL give substantially better coverage probabilities and are consistently more accurate than the EL. The results for lag correlations have similar patterns.

5.2. Misspecified Working Correlation Structure

In the simulation studies discussed in Section 5.1 we took the correlation structure used to generate the data as the working correlation in the GEE-based modelling. However, in practice, we do not know the correlation structure of the data. As discussed before, if the working correlation is misspecified, the parameter estimates may lose efficiency [3] [4].

We conducted a simulation study to assess the loss of efficiency. We generated repeated counts with the AR(1) correlation structure given in Section 5.1(i) with $\rho =0.49$ and 0.70 and $m=5$ time points. We used three working correlation structures: EQC, MA(1), and lag correlation. Table 2 gives the results for the GEE, EL, EEL, and AEL.

Table 2 shows that the EL, EEL, and AEL are superior to the GEE when the correlation structure is misspecified. Note that, in this EL-based approach, we can construct CIs without estimating the variance of the parameter of interest. For example, in the AR(1)/EQC case the coverage probabilities of ${\stackrel{^}{\beta}}_{1}$ based on the GEE, EL, EEL, and AEL are 0.917, 0.928, 0.934, and 0.935 respectively for the nominal 0.95 level. For ${\stackrel{^}{\beta}}_{2}$, these probabilities are 0.916, 0.929, 0.937, and 0.937 for the same nominal level. In this situation, the GEE with stationary lag correlation performs better than the GEE with a misspecified working correlation. However, the EL, EEL, and AEL perform as well as the former method, despite being nonparametric methods based on a data-driven likelihood ratio function. We did not consider all possible cases, for instance, a true EQC or MA(1) correlation model, since under different working correlation structures an estimate $\stackrel{^}{\alpha}$ of the correlation parameter may not exist (see [5]).

5.3. Over-Dispersed Stationary Count Data

In this section, we consider the performance of our approach when the variance function is misspecified, in the context of stationary count data. We generate over-dispersed stationary counts ${y}_{it}$ using ${\tilde{\mu}}_{i}={u}_{i}\mathrm{exp}\left({\tilde{x}}_{i}\beta \right)$ for the three models discussed in Section 5.1, where the ${u}_{i}$ are independent random effects with $\text{E}\left({u}_{i}\right)=1$ and $\text{Var}\left({u}_{i}\right)=\omega $. Marginally, we have $\text{E}\left({y}_{it}\right)={\tilde{\mu}}_{i}$ and $\text{Var}\left({y}_{it}\right)={\tilde{\mu}}_{i}\left(1+{\tilde{\mu}}_{i}\omega \right)$. The distribution of u is chosen to be gamma with shape parameter $1/\omega $ and scale parameter $\omega $, which yields these first two moments; $\omega $ is the over-dispersion parameter, set here to $\omega =1/4$. However, the GEE, EL, EEL, and AEL CIs are constructed under the assumption that there is no over-dispersion.
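A sketch of this over-dispersed data-generating mechanism is given below. The gamma shape $1/\omega $ and scale $\omega $ are the parameterization implied by $\text{E}\left({u}_{i}\right)=1$ and $\text{Var}\left({u}_{i}\right)=\omega $, and for simplicity the sketch draws the repeated counts independently given ${\tilde{\mu}}_{i}$, without the serial dependence of the Section 5.1 models.

```python
import numpy as np

def overdispersed_counts(k, m, beta, omega, rng):
    """Over-dispersed counts: mu_i = u_i exp(x_i beta) with a gamma frailty u_i.

    Gamma(shape 1/omega, scale omega) gives E(u_i) = 1 and Var(u_i) = omega,
    hence marginally E(y_it) = mu and Var(y_it) = mu (1 + mu omega).
    """
    x = rng.normal(0.0, 1.0, size=(k, len(beta)))
    u = rng.gamma(shape=1.0 / omega, scale=omega, size=k)
    mu = u * np.exp(x @ beta)
    y = rng.poisson(mu[:, None], size=(k, m))   # independent draws given mu_i
    return x, y
```

With $\beta =0$ the marginal law of each count is negative binomial with mean 1 and variance $1+\omega $, which is how the variance inflation can be checked empirically.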

Table 3 gives the average estimated values of the regression coefficients, the corresponding simulated standard errors in parentheses, the coverage probabilities for ${\beta}_{1}$ and ${\beta}_{2}$ for the 0.95 and 0.99 confidence levels, and the average width of the CI in parentheses for the independent, AR(1), EQC, and MA(1) models. Table 3 shows that when there is over-dispersion, the EL, EEL, and AEL outperform the GEE. In the AR(1)/AR(1) case the coverage probabilities of ${\stackrel{^}{\beta}}_{1}$ based on the GEE, EL, EEL, and AEL are 0.876, 0.916, 0.926, and 0.931 respectively for the nominal 0.95 level. For ${\stackrel{^}{\beta}}_{2}$, these probabilities are 0.891, 0.920, 0.929, and 0.931 for the same nominal level. This indicates that the EL, EEL, and AEL are fairly robust to model misspecification. Note that the construction of the CI based on the EL, EEL, and AEL does not require the estimation of the scale parameter.

5.4. Correlation Models for Continuous Data

In this section, we investigate the performance of our EL approach on a class of stationary and nonstationary correlation models for longitudinal continuous data. The random errors ${\left({\epsilon}_{1}\mathrm{,}{\epsilon}_{2},{\epsilon}_{3}\mathrm{,}{\epsilon}_{4}\right)}^{\text{T}}$ are generated from the multivariate normal distribution with marginal mean 0, marginal variance 1, and auto-correlation coefficient $\rho =0.5$. In this performance analysis, we consider three correlation models: AR(1), MA(1), and equicorrelation (EQC).

(i) AR(1) Structure

For the ith individual and $t=1,\cdots ,{m}_{i}$,

${y}_{it}={x}_{it}\beta +{\u03f5}_{it}\mathrm{,}$ (17)

and we assume that

${\epsilon}_{it}=\rho {\epsilon}_{i\mathrm{,}t-1}+{a}_{it}\mathrm{,}$

with $\left|\rho \right|<1$ and ${a}_{it}\sim N\left(\mathrm{0,1}\right)$.

(ii) MA(1) Structure

The ${\epsilon}_{it}$ in (17) follow the model

${\epsilon}_{it}=\rho {a}_{i\mathrm{,}t-1}+{a}_{it}\mathrm{,}$

where $\rho $ is a suitable scale parameter that does not necessarily satisfy $\left|\rho \right|<1$, and ${a}_{it}\sim N\left(\mathrm{0,1}\right)$.

(iii) Equicorrelation (EQC) Structure

The ${\epsilon}_{it}$ in (17) follow the model

${\epsilon}_{it}=\rho {a}_{i0}+{a}_{it}\mathrm{,}$

where ${a}_{i0}$ is an error value at the initial time, and $\rho $ is a suitable correlation parameter. We assume that

${a}_{it}\sim N\left(\mathrm{0,1}\right)\text{ and }{a}_{i0}\sim N\left(\mathrm{0,1}\right)\mathrm{,}$

and ${a}_{it}$ and ${a}_{i0}$ are independent for all t.
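The three error processes can be generated as below. The stationary initialization of the AR(1) recursion, which keeps its lag-1 autocorrelation at $\rho $, is our assumption; note that the raw MA(1) and EQC recursions have marginal variance $1+{\rho}^{2}$, so their lag-1 correlations are $\rho /\left(1+{\rho}^{2}\right)$ and ${\rho}^{2}/\left(1+{\rho}^{2}\right)$ rather than $\rho $.

```python
import numpy as np

def gaussian_errors(k, m, rho, model, rng):
    """Error vectors (eps_1, ..., eps_m) for model (17) with N(0,1) innovations."""
    a = rng.normal(size=(k, m + 1))              # columns: a_{i0}, a_{i1}, ..., a_{im}
    eps = np.empty((k, m))
    if model == "ar1":                           # eps_t = rho * eps_{t-1} + a_t
        eps[:, 0] = a[:, 1] / np.sqrt(1.0 - rho**2)   # stationary start
        for t in range(1, m):
            eps[:, t] = rho * eps[:, t - 1] + a[:, t + 1]
    elif model == "ma1":                         # eps_t = rho * a_{t-1} + a_t
        for t in range(m):
            eps[:, t] = rho * a[:, t] + a[:, t + 1]
    elif model == "eqc":                         # eps_t = rho * a_0 + a_t
        eps = rho * a[:, :1] + a[:, 1:]
    else:
        raise ValueError(model)
    return eps
```

For $\rho =0.5$ this gives empirical lag-1 correlations near 0.5 (AR(1)), 0.4 (MA(1)), and a constant pairwise correlation near 0.2 (EQC).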

We simulated 1000 data sets from the above models under stationary and nonstationary covariates, using the parameters $\beta ={\left({\beta}_{1},{\beta}_{2}\right)}^{\text{T}}={\left(0.4,0.5\right)}^{\text{T}}$, $\rho =0.5$, and $m=4$. For the ith subject, we generated the covariates ${\tilde{x}}_{i}=\left({\tilde{x}}_{i1}\mathrm{,}{\tilde{x}}_{i2}\right)$ from a normal distribution with mean 0 and standard deviation 1. Table 4 gives the mean estimated values of the regression coefficients, the corresponding simulated standard errors in parentheses, the simulated coverage probabilities for ${\beta}_{1}$ and ${\beta}_{2}$ for the 0.95 and 0.99 confidence levels, and the average width of the CI in parentheses for the independent, AR(1), EQC, and MA(1) models with stationary covariates. Table 5 gives the results for nonstationary covariates.

The coverage probabilities of the intervals based on the EL, EEL, and AEL are similar to those of the GEE. For instance, in the MA(1)/MA(1) case in Table 4 the coverage probabilities of ${\stackrel{^}{\beta}}_{1}$ based on the GEE, EL, EEL, and AEL are 0.955, 0.945, 0.955, and 0.954 respectively for the nominal 0.95 level. For ${\stackrel{^}{\beta}}_{2}$, these probabilities are 0.958, 0.944, 0.948, and 0.951 for the same nominal level. Note that the intervals based on the EL have a slight undercoverage compared with those for the GEE. Also, the EEL and AEL are consistently more accurate than the EL. The lag-correlation-based coverage probabilities have similar patterns.

5.5. Correlation Models for Misspecified Continuous Data

In this section, we compare the performance of the methods when the correlation model for continuous data is misspecified. The stationary and nonstationary correlation models for longitudinal continuous data are generated from (17) with the parameter settings of Section 5.4, but the correlated random errors ${\left({\epsilon}_{1}\mathrm{,}{\epsilon}_{2},{\epsilon}_{3}\mathrm{,}{\epsilon}_{4}\right)}^{\text{T}}$ are generated from the ${\chi}^{2}\left(1\right)-1$ distribution instead of the normal distribution for the three correlation models:

Table 1. Coverage probabilities of regression estimates for count data with stationary covariates for the independent, AR(1), EQC, and MA(1) models.

Table 2. Coverage probabilities of regression estimates for count data with stationary covariates when the working correlation is misspecified for an AR(1) model.

Table 3. Coverage probabilities of regression estimates for over-dispersion of count data with stationary covariates for the independent, AR(1), EQC, and MA(1) models.

Table 4. Coverage probabilities of regression estimates for continuous data with stationary covariates for the independent, AR(1), EQC, and MA(1) models.

· AR(1): ${\epsilon}_{it}=\rho {\epsilon}_{i\mathrm{,}t-1}+{a}_{it}\mathrm{,}t=\mathrm{1,2},\mathrm{3,4}$,

· EQC: ${\epsilon}_{it}=\rho {a}_{i\mathrm{,0}}+{a}_{it}\mathrm{,}t=\mathrm{1,2},\mathrm{3,4}$,

· MA(1): ${\epsilon}_{it}=\rho {a}_{i\mathrm{,}t-1}+{a}_{it}\mathrm{,}t=\mathrm{1,2},\mathrm{3,4}$.

However, the confidence regions for the GEE are constructed under the normality assumption.

Table 6 gives the mean estimated values of the coefficients and the corresponding simulated standard errors in parentheses. It also includes the coverage probabilities for ${\beta}_{1}$ and ${\beta}_{2}$ for the 0.95 and 0.99 confidence levels and the average width of the CI in parentheses for samples of sizes $k=50$ and $k=100$ for the independent, AR(1), EQC, and MA(1) models with stationary covariates. Table 7 gives the results for nonstationary covariates.

When the model is misspecified, the EL, EEL, and AEL outperform the GEE. For example, in the AR(1)/Lag case in Table 6 the coverage probabilities of ${\stackrel{^}{\beta}}_{1}$ based on the GEE, EL, EEL, and AEL are 0.790, 0.918, 0.931, and 0.932 respectively for the nominal 0.95 level. For ${\stackrel{^}{\beta}}_{2}$, these probabilities are 0.801, 0.924, 0.937, and 0.937 for the same nominal level. Note that in the EL setup we do not need to estimate a scale parameter to construct the CI, nor did we model the over-dispersion. Table 7 shows that when the covariates are time-dependent the GEE has substantial undercoverage compared with the results for time-independent covariates, as discussed by [9].

6. Applications

In this section, we illustrate the applicability of our proposed method to two real-world examples.

6.1. Health Care Utilization Study

We consider the longitudinal health care utilization data [5] collected by Eastern Health, St. John’s, Newfoundland, Canada. These longitudinal count data contain complete records for $k=144$ individuals for the $m=4$ years from 1985 to 1988. The response of interest is the number of visits to a physician by each individual during a given year. Information on four covariates, namely, gender, number of chronic conditions, education level, and age, was recorded for each individual. Background information allows us to assume that the response variable, marginally, follows the Poisson distribution and that the repeated counts over the four years are longitudinally correlated. Since the data indicate over-dispersion, we consider a negative binomial model with two variance functions

$\mathrm{var}\left(y\right)=\mu +\alpha \mu $

and

$\mathrm{var}\left(y\right)=\mu +\alpha {\mu}^{2}.$

Thus, the variance function is different from that of the Poisson model, $\mathrm{var}\left(y\right)=\mu $. To confirm the over-dispersion, we test ${H}_{0}:\alpha =0$ against

Table 5. Coverage probabilities of regression estimates for continuous data with nonstationary covariates for the independent, AR(1), EQC, and MA(1) models.

Table 6. Coverage probabilities of regression estimates for misspecified data with stationary covariates for the independent, AR(1), EQC, and MA(1) models (k = 50).

Table 7. Coverage probabilities of regression estimates for misspecified data with nonstationary covariates for the independent, AR(1), EQC, and MA(1) models (k = 100).

${H}_{a}:\alpha >0$ using the likelihood ratio test. The result confirms the presence of over-dispersion in both variance function models.

Our analysis used the GEE with a working correlation matrix (AR(1), EQC, MA(1), or lag correlation) and our EL approach. Table 8 gives the regression parameter estimates and 95% CIs. The gender covariate was coded as 1 for male and 0 for female. Under the AR(1) structure, the estimate of its regression coefficient is ${\stackrel{^}{\beta}}_{1}=-0.1929$, suggesting that females make more visits to physicians. The GEE CI indicates that this variable is significant, but the EL CI does not. The estimated values ${\stackrel{^}{\beta}}_{2}=0.1668$ and ${\stackrel{^}{\beta}}_{4}=0.0308$ suggest that individuals with chronic conditions and older individuals pay more visits to physicians, as expected. The corresponding CIs show that both variables are significant. The education covariate was coded as 1 for less than high school and 0 for higher education. The value ${\stackrel{^}{\beta}}_{3}=-0.4738$ indicates that more educated individuals pay more visits to physicians, suggesting that they are more health-conscious or better able to afford care. The corresponding CIs show that this variable is significant. Table 8 shows that different working correlations lead to slightly different parameter estimates, but the overall conclusion remains the same. Since the data indicate over-dispersion, the GEE-based approach may be inefficient, as shown

Table 8. Regression estimates for health care utilization count data.

in our performance analysis. We conclude that the EL approach is more appropriate for this data set, and the significant variables identified by this approach are more reliable.

6.2. Longitudinal CD4 Cell Counts of HIV Seroconverters

This data set contains 2376 observations of the CD4 cell counts of $k=369$ men infected with HIV [31]. The goal of our analysis is to estimate the average evolution of the CD4 counts over time by considering the effects of AGE, SMOKE (smoking status measured in packs of cigarettes per day), DRUG (yes = 1; no = 0), SEXP (number of sex partners), DEPRESSION (measured by the CESD scale), and YEAR (time since seroconversion). To examine whether there are any interaction effects between the covariates, we included all the two-factor interactions in our model.

This data set records the subject-specific evolution of the CD4 cell counts over time with and without drug use. The cell counts are right-skewed, so the analysis was conducted on square-root-transformed CD4 cell counts, whose distribution is more nearly Gaussian. Tables 9-11 summarize the analysis for the AR(1), EQC, and lag working correlations. The GEE indicates that SMOKE, DRUG, SEXP, AGE × SEXP, SMOKE × DRUG, SMOKE × SEXP, and DRUG × SEXP are significant; under EQC, AGE × SMOKE and AGE × DRUG are also significant. The EL selects SMOKE, DRUG, SEXP, and DRUG × SEXP; under the EQC and lag correlations, AGE × SEXP is also significant. The GEE approach is sensitive to the choice of correlation structure. In this real data set, the true correlation structure is unknown, so the lag correlation approach is appropriate since it can accommodate all three correlation structures. Moreover, the Shapiro-Wilk test shows that the square-root-transformed CD4 cell counts are not normally distributed, so the GEE-based method is not appropriate. We therefore conclude that the EL is a better choice.

Table 9. Estimated coefficients for CD4 data set using AR(1) working correlation.

Table 10. Estimated coefficients for CD4 data set using EQC working correlation.

Table 11. Estimated coefficients for CD4 data set using lag working correlation.

7. Conclusions

Longitudinal data modelling using the GEE approach assumes a working correlation model for the within-subject correlation of the responses. When the working correlation is incorrectly specified, the GEE-based estimates are not necessarily consistent and may lose efficiency; any misspecification can make estimates based on marginal models inefficient and lead to misleading conclusions. Moreover, the construction of confidence regions and hypothesis tests relies on asymptotic normality, which may not hold since the finite-sample distribution may not be symmetric.

Taking these issues into account, we have proposed an EL-based longitudinal modelling approach built on a data-driven likelihood ratio that shares many of the properties of the parametric likelihood. We do not need to specify the complete parametric distribution to perform the inference and can, therefore, use likelihood methods without assuming that the data come from a known family of distributions. We defined the subject-wise profile EL based on a set of GEEs and proposed estimation and confidence-region construction using the EL approach, which has advantages over methods based on normal approximations. We introduced the adjusted EL to avoid computational issues and to improve the coverage probabilities. A major advantage of the EL is that it involves no prior assumptions about the shape of the confidence region, which is data-driven, and its construction does not involve any variance estimation.

The proposed approach yields more efficient estimators than the conventional GEE approach and achieves the same asymptotic properties as [17]. Our performance analysis showed that our method for longitudinal count and continuous responses is comparable to the GEE when the model assumptions are satisfied. For instance, when the working correlation is correctly specified, the coverage probabilities of the intervals based on the EL, EEL, and AEL are similar to those of the GEE. CIs based on the regular EL have slight undercoverage compared with those of the GEE; the coverage probabilities are substantially improved with the EEL and AEL, which are consistently more accurate than the regular EL. When the working correlation is misspecified, the intervals based on the EL, EEL, and AEL are as efficient as those based on the GEE estimator with a stationary lag correlation structure, and all of these outperform the GEE with an incorrect working correlation structure. When the model itself is misspecified, for instance through the marginal variance, our method outperforms the GEE, showing that the EL methods are robust to misspecification. Moreover, the EL-based CI has a data-driven shape, whereas the GEE-based CI is always symmetric because of the normal approximation.

Acknowledgements

The authors’ research was supported by grants from the Natural Sciences and Engineering Research Council of Canada and the Canadian Institutes of Health Research.

References

[1] McCullagh, P. and Nelder, J. (1989) Generalized Linear Models. 2nd Edition, CRC Press, London.

https://doi.org/10.1007/978-1-4899-3242-6

[2] Liang, K.Y. and Zeger, S.L. (1986) Longitudinal Data Analysis Using Generalized Linear Models. Biometrika, 73, 13-22.

https://doi.org/10.1093/biomet/73.1.13

[3] Crowder, M.J. (1995) On Use of a Working Correlation Matrix in Using Generalized Linear Models for Repeated Measures. Biometrika, 82, 407-410.

https://doi.org/10.1093/biomet/82.2.407

[4] Sutradhar, B.C. and Das, K. (1999) On the Efficiency of Regression Estimators in Generalized Linear Models for Longitudinal Data. Biometrika, 86, 459-465.

https://doi.org/10.1093/biomet/86.2.459

[5] Sutradhar, B.C. (2003) An Overview on Regression Models for Discrete Longitudinal Responses. Statistical Science, 18, 377-393.

https://doi.org/10.1214/ss/1076102426

[6] Sutradhar, B.C. and Kovacevic, M. (2000) Analysing Ordinal Longitudinal Survey Data: Generalized Estimating Equations Approach. Biometrika, 87, 837-848.

https://doi.org/10.1093/biomet/87.4.837

[7] Nadarajah, T., Variyath, A.M. and Loredo-Osti, J.C. (2016) Penalized Generalized Quasi-Likelihood Based Variable Selection for Longitudinal Data. In: ISS-2015 Proceedings Volume on Advances in Parametric and Semiparametric Analysis of Multivariate, Time Series, Spatial-Temporal, and Familial-Longitudinal Data, Springer Lecture Notes in Statistics, Springer, Berlin, 233-250.

https://doi.org/10.1007/978-3-319-31260-6_8

[8] Qu, A., Lindsay, B.G. and Li, B. (2000) Improving Estimating Equations Using Quadratic Inference Functions. Biometrika, 87, 823-836.

https://doi.org/10.1093/biomet/87.4.823

[9] Hu, F.-C. (1993) A Statistical Methodology for Analyzing the Causal Health Effect of a Time Dependent Exposure from Longitudinal Data. Dissertation, Harvard School of Public Health, Department of Biostatistics, Boston.

[10] Pepe, M.S. and Anderson, G.L. (1994) A Cautionary Note on Inference for Marginal Regression Models with Longitudinal Data and General Correlated Response Data. Communications in Statistics, Part B Simulation and Computation, 23, 939-951.

https://doi.org/10.1080/03610919408813210

[11] Emond, M.J., Ritz, J. and Oakes, D. (1997) Bias in GEE Estimates from Misspecified Models for Longitudinal Data. Communications in Statistics - Theory and Methods, 26, 15-32.

https://doi.org/10.1080/03610929708831899

[12] Pan, W., Louis, T.A. and Connett, J.E. (2000) A Note on Marginal Linear Regression with Correlated Response Data. American Statistician, 54, 191-195.

https://doi.org/10.1080/00031305.2000.10474544

[13] Diggle, P.J., Heagerty, P.J., Liang, K.-Y. and Zeger, S.L. (2002) The Analysis of Longitudinal Data. 2nd Edition, Oxford Statistical Science, Oxford University Press, Oxford.

[14] Owen, A.B. (1988) Empirical Likelihood Ratio Confidence Interval for a Single Functional. Biometrika, 75, 237-249.

https://doi.org/10.1093/biomet/75.2.237

[15] Owen, A.B. (1991) Empirical Likelihood for Linear Models. The Annals of Statistics, 19, 1725-1747.

https://doi.org/10.1214/aos/1176348368

[16] Chen, S.X. (1994) Empirical Likelihood Confidence Intervals for Linear Regression Coefficients. Journal of Multivariate Analysis, 49, 24-40.

https://doi.org/10.1006/jmva.1994.1011

[17] Qin, J. and Lawless, J. (1994) Empirical Likelihood and General Estimating Equations. Annals of Statistics, 22, 300-325.

https://doi.org/10.1214/aos/1176325370

[18] Owen, A.B. (2001) Empirical Likelihood. Chapman & Hall/CRC, Boca Raton.

https://doi.org/10.1201/9781420036152

[19] You, J., Chen, G. and Zhou, Y. (2006) Block Empirical Likelihood for Longitudinal Partially Linear Regression Models. Canadian Journal of Statistics, 34, 79-96.

https://doi.org/10.1002/cjs.5550340107

[20] Xue, L.G. and Zhu, L.X. (2007) Empirical Likelihood Semiparametric Regression Analysis for Longitudinal Data. Biometrika, 94, 921-937.

https://doi.org/10.1093/biomet/asm066

[21] Wang, S., Qian, L. and Carroll, R.J. (2010) Generalized Empirical Likelihood Methods for Analyzing Longitudinal Data. Biometrika, 97, 79-93.

https://doi.org/10.1093/biomet/asp073

[22] Hall, P. and La Scala, B. (1990) Methodology and Algorithms of Empirical Likelihood. International Statistical Review, 58, 109-127.

https://doi.org/10.2307/1403462

[23] Corcoran, S.A., Davison, A.C. and Spady, R.H. (1995) Reliable Inference from Empirical Likelihood. Technical Report, Nuffield College, University of Oxford, Oxford.

[24] Tsao, M. (2004) Bounds on Coverage Probabilities of the Empirical Likelihood Ratio Confidence Regions. The Annals of Statistics, 32, 1215-1221.

https://doi.org/10.1214/009053604000000337

[25] Chen, J., Variyath, A.M. and Abraham, B. (2008) Adjusted Empirical Likelihood and Its Properties. Journal of Computational and Graphical Statistics, 17, 426-443.

https://doi.org/10.1198/106186008X321068

[26] Chen, J., Sitter, R.R. and Wu, C. (2002) Using Empirical Likelihood Methods to Obtain Range Restricted Weights in Regression Estimators for Surveys. Biometrika, 89, 230-237.

https://doi.org/10.1093/biomet/89.1.230

[27] Tsao, M. (2013) Extending the Empirical Likelihood by Domain Expansion. Canadian Journal of Statistics, 41, 257-274.

https://doi.org/10.1002/cjs.11175

[28] Tsao, M. and Wu, F. (2013) Empirical Likelihood on the Full Parameter Space. The Annals of Statistics, 41, 2176-2196.

https://doi.org/10.1214/13-AOS1143

[29] McKenzie, E. (1988) Some ARMA Models for Dependent Sequences of Poisson Counts. Advances in Applied Probability, 20, 822-835.

https://doi.org/10.1017/S0001867800018395

[30] Sutradhar, B.C. (2011) Dynamic Mixed Models for Familial Longitudinal Data. Springer Series in Statistics, Springer, Berlin.

https://doi.org/10.1007/978-1-4419-8342-8

[31] Zeger, S.L. and Diggle, P.J. (1994) Semiparametric Models for Longitudinal Data with Application to CD4 Cell Numbers in HIV Seroconverters. Biometrics, 50, 689-699.

https://doi.org/10.2307/2532783