A Note on Improving Inference of Relative Risk

1. Introduction

Consider two groups of subjects: an exposure group (i = 1) and a control group (i = 2). Let ${n}_{i}$ be the number of subjects in group i and ${p}_{i}$ be the risk of a specific outcome in group i. Then the random variable ${X}_{i}$ , the number of subjects in group i who experience the outcome, is distributed as Binomial ( ${n}_{i}$ , ${p}_{i}$ ). As defined in [3] and [4] , the relative risk of the outcome in the exposure group versus the control group is $\theta ={p}_{1}/{p}_{2}$ . Note that $\theta $ can be any nonnegative real number. When $\theta <1$ , the exposure being considered is associated with a reduction in risk, whereas $\theta >1$ suggests that the exposure is associated with an increase in risk. $\theta =1$ is generally of interest because it suggests that the exposure has no impact on risk. In general, ${p}_{i}$ is unknown, but we observe ${x}_{i}$ ; hence ${p}_{i}$ can be estimated by ${\stackrel{^}{p}}_{i}={x}_{i}/{n}_{i}$ , and an estimate of the relative risk based on the observed sample is $\stackrel{^}{\theta}={\stackrel{^}{p}}_{1}/{\stackrel{^}{p}}_{2}$ .

Relative risk is a popular measure used in biomedical studies because it is easy to compute and interpret, and it is included in standard statistical software output (e.g., in R and SAS). [5] gives a detailed discussion on the application of relative risk to failure time data. [6] applies relative risk to study populations with differing disease prevalence. [7] compares relative risk with odds ratio, and absolute risk reduction in comparing the effectiveness of certain treatments.

To illustrate the concept of relative risk, let us consider the following example. [1] examined the Physicians’ Health Study, which analyzed whether taking aspirin regularly will reduce cardiovascular disease. Data of the study are reported in Table 1.

Out of 11,037 physicians taking aspirin over the course of the study, 104 of them had heart attacks. Similarly 189 of 11,034 physicians in the placebo group had heart attacks. Based on this dataset, the relative risk of having heart attacks among physicians is

$\stackrel{^}{\theta}=\frac{\left(5+99\right)/\left(5+99+10933\right)}{\left(18+171\right)/\left(18+171+10845\right)}=\frac{104/11037}{189/11034}=0.55.$

Thus, physicians who took aspirin over the course of the study have 0.55 times the risk of having a heart attack as physicians who were in the placebo group. This suggests that taking aspirin is associated with a reduction in the risk of heart attacks among physicians as they are about half as likely to have a heart attack as physicians who did not take aspirin throughout the study.
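The arithmetic above is easy to reproduce; a minimal Python sketch using the counts from Table 1:

```python
# Relative risk point estimate for the aspirin data in Table 1.
x1, n1 = 104, 11037   # heart attacks and group size, aspirin group
x2, n2 = 189, 11034   # heart attacks and group size, placebo group

p1_hat = x1 / n1      # estimated risk in the exposure group
p2_hat = x2 / n2      # estimated risk in the control group
theta_hat = p1_hat / p2_hat

print(round(theta_hat, 2))  # 0.55
```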

Although reporting a point estimate of relative risk is important, it does not provide information about the variations arising from the observed data. Hence, in practice, a $\left(1-\alpha \right)100\%$ confidence interval for $\theta $ is usually reported and recommended (see [8] ). A standard approximated $\left(1-\alpha \right)100\%$ confidence interval for $\theta $ is given in [1] , which is widely implemented in statistical software. [2] proposed an alternate way of approximating a $\left(1-\alpha \right)100\%$ confidence interval for $\theta $ via the likelihood ratio statistic. It is well-known that both methods are not accurate when the sample size is small. In this paper, by adjusting the likelihood ratio statistic obtained by [2] , a new method is proposed to obtain the confidence interval for the relative risk. Simulation results show that the proposed method is extremely accurate even when the sample size is small.

2. Methodology

Let ${X}_{i}$ , i = 1, 2, be independent random variables distributed as Binomial ( ${n}_{i}$ , ${p}_{i}$ ). Then the relative risk is defined as $\theta ={p}_{1}/{p}_{2}$ . A standard estimator of $\theta $ is $\stackrel{^}{\Theta}=\left({X}_{1}/{n}_{1}\right)/\left({X}_{2}/{n}_{2}\right)$ . With realizations ${x}_{1}$ and ${x}_{2}$ , a standard estimate of $\theta $ is $\stackrel{^}{\theta}=\left({x}_{1}/{n}_{1}\right)/\left({x}_{2}/{n}_{2}\right)$ . [1] considered the parameter $\psi =\mathrm{ln}\theta $ . The corresponding estimator of $\psi $ is $\stackrel{^}{\Psi}=\mathrm{ln}\stackrel{^}{\Theta}$ . By applying the delta method, we have

$E\left(\stackrel{^}{\Psi}\right)=E\left(\mathrm{ln}\stackrel{^}{\Theta}\right)\approx \mathrm{ln}\theta =\psi $ and $\mathrm{var}\left(\stackrel{^}{\Psi}\right)=\mathrm{var}\left(\mathrm{ln}\stackrel{^}{\Theta}\right)\approx {\displaystyle {\sum}_{i=1}^{2}\left(\frac{1}{{n}_{i}{p}_{i}}-\frac{1}{{n}_{i}}\right)}.$

Table 1. Cross-classification of aspirin use and heart attack.

Therefore, an estimate of $\psi $ is $\stackrel{^}{\psi}=\mathrm{ln}\stackrel{^}{\theta}$ , and the estimated variance of $\stackrel{^}{\Psi}$ is

$\stackrel{^}{\mathrm{var}}\left(\stackrel{^}{\Psi}\right)\approx {\displaystyle {\sum}_{i=1}^{2}\left(\frac{1}{{n}_{i}{\stackrel{^}{p}}_{i}}-\frac{1}{{n}_{i}}\right)}={\displaystyle {\sum}_{i=1}^{2}\left(\frac{1}{{x}_{i}}-\frac{1}{{n}_{i}}\right)}.$

Hence, when ${n}_{1}$ and ${n}_{2}$ are large, by the Central Limit Theorem, an approximate $\left(1-\alpha \right)100\%$ confidence interval for $\psi =\mathrm{ln}\theta $ is:

$\left(\mathrm{ln}\stackrel{^}{\theta}-{z}_{\alpha /2}\sqrt{\stackrel{^}{\mathrm{var}}\left(\stackrel{^}{\Psi}\right)},\mathrm{ln}\stackrel{^}{\theta}+{z}_{\alpha /2}\sqrt{\stackrel{^}{\mathrm{var}}\left(\stackrel{^}{\Psi}\right)}\right)$

where ${z}_{\alpha /2}$ is the $\left(1-\alpha /2\right){100}^{\text{th}}$ percentile of the standard normal distribution. Since $\psi $ and $\theta $ are in one-to-one correspondence, an approximate $\left(1-\alpha \right)100\%$ confidence interval for $\theta $ is

$\left({\text{e}}^{\mathrm{ln}\stackrel{^}{\theta}-{z}_{\alpha /2}\sqrt{\stackrel{^}{\mathrm{var}}\left(\stackrel{^}{\Psi}\right)}},{\text{e}}^{\mathrm{ln}\stackrel{^}{\theta}+{z}_{\alpha /2}\sqrt{\stackrel{^}{\mathrm{var}}\left(\stackrel{^}{\Psi}\right)}}\right).$

The above interval is directly available in R via the riskratio() function (e.g., in the epitools package).
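The interval above needs only the standard normal quantile and the delta-method variance; a self-contained Python sketch (the function name rr_wald_ci is my own, not from any package):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def rr_wald_ci(x1, n1, x2, n2, alpha=0.05):
    """Delta-method (Wald) confidence interval for the relative risk,
    built on the log scale and exponentiated back, as in the text."""
    theta_hat = (x1 / n1) / (x2 / n2)
    # estimated variance of ln(theta_hat): sum_i (1/x_i - 1/n_i)
    se = sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}
    return exp(log(theta_hat) - z * se), exp(log(theta_hat) + z * se)

lo, hi = rr_wald_ci(104, 11037, 189, 11034)   # aspirin data: roughly (0.43, 0.70)
```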

Since $\stackrel{^}{\Theta}$ is a biased estimator of $\theta $ , [1] suggests using a modified estimator for $\theta $ , which takes the form

$\stackrel{\u02dc}{\Theta}=\frac{\left({X}_{1}+0.5\right)/\left({n}_{1}+0.5\right)}{\left({X}_{2}+0.5\right)/\left({n}_{2}+0.5\right)}$ and $\stackrel{\u02dc}{\Psi}=\mathrm{ln}\stackrel{\u02dc}{\Theta}$ .

The estimated variance of $\stackrel{\u02dc}{\Psi}=\mathrm{ln}\stackrel{\u02dc}{\Theta}$ is

$\stackrel{^}{\mathrm{var}}\left(\stackrel{\u02dc}{\Psi}\right)\approx {\displaystyle {\sum}_{i=1}^{2}\left(\frac{1}{{x}_{i}+0.5}-\frac{1}{{n}_{i}+0.5}\right)}.$

Thus, the corresponding approximate $\left(1-\alpha \right)100\%$ confidence interval for $\theta $ is

$\left({\text{e}}^{\mathrm{ln}\stackrel{\u02dc}{\theta}-{z}_{\alpha /2}\sqrt{\stackrel{^}{\mathrm{var}}\left(\stackrel{\u02dc}{\Psi}\right)}},{\text{e}}^{\mathrm{ln}\stackrel{\u02dc}{\theta}+{z}_{\alpha /2}\sqrt{\stackrel{^}{\mathrm{var}}\left(\stackrel{\u02dc}{\Psi}\right)}}\right)$ .
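The adjusted interval differs from the previous one only in the 0.5 offsets added to each count; a sketch (again, the function name is an illustrative choice of mine):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def rr_adjusted_ci(x1, n1, x2, n2, alpha=0.05):
    """Wald-type interval based on the bias-adjusted estimator:
    0.5 is added to each count, as in the text."""
    theta_tilde = ((x1 + 0.5) / (n1 + 0.5)) / ((x2 + 0.5) / (n2 + 0.5))
    se = sqrt(1 / (x1 + 0.5) - 1 / (n1 + 0.5)
              + 1 / (x2 + 0.5) - 1 / (n2 + 0.5))
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return exp(log(theta_tilde) - z * se), exp(log(theta_tilde) + z * se)

lo, hi = rr_adjusted_ci(104, 11037, 189, 11034)
```

With sample sizes this large, the adjustment barely moves the interval; its effect is felt with small counts.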

[2] proposed to construct an approximate $\left(1-\alpha \right)100\%$ confidence interval for $\theta $ based on the likelihood ratio statistic. Since ${X}_{i}$ are independently distributed as Binomial ( ${n}_{i}$ , ${p}_{i}$ ), the joint log-likelihood function is

$l\left({p}_{1},{p}_{2}\right)=\underset{i=1}{\overset{2}{{\displaystyle \sum}}}\left[{x}_{i}\mathrm{ln}{p}_{i}+\left({n}_{i}-{x}_{i}\right)\mathrm{ln}\left(1-{p}_{i}\right)\right].$

The point $\left({\stackrel{^}{p}}_{1},{\stackrel{^}{p}}_{2}\right)$ that maximizes the log-likelihood function is known as the maximum likelihood estimate (MLE) of $\left({p}_{1},{p}_{2}\right)$ , which can be obtained by solving

$\frac{\partial l\left({p}_{1},{p}_{2}\right)}{\partial {p}_{1}}=0$ and $\frac{\partial l\left({p}_{1},{p}_{2}\right)}{\partial {p}_{2}}=0$ .

In this case, the MLE of $\left({p}_{1},{p}_{2}\right)$ is $\left({\stackrel{^}{p}}_{1},{\stackrel{^}{p}}_{2}\right)=\left(\frac{{x}_{1}}{{n}_{1}},\frac{{x}_{2}}{{n}_{2}}\right)$ . Moreover, for a given $\theta $ value, the point $\left({\stackrel{\u02dc}{p}}_{1},{\stackrel{\u02dc}{p}}_{2}\right)$ that maximizes the log-likelihood function subject to the constraint $\frac{{\stackrel{\u02dc}{p}}_{1}}{{\stackrel{\u02dc}{p}}_{2}}=\theta $ is known as the constrained MLE of $\left({p}_{1},{p}_{2}\right)$ . [2] gives a numerical algorithm to obtain $\left({\stackrel{\u02dc}{p}}_{1},{\stackrel{\u02dc}{p}}_{2}\right)$ . However, by applying the Lagrange multiplier technique, we have the explicit closed form of the constrained MLE:

${\stackrel{\u02dc}{p}}_{1}=\theta {\stackrel{\u02dc}{p}}_{2}$

and

${\stackrel{\u02dc}{p}}_{2}=\frac{\left[\left({n}_{1}+{x}_{2}\right)\theta +\left({n}_{2}+{x}_{1}\right)\right]-\sqrt{{\left[\left({n}_{1}+{x}_{2}\right)\theta +\left({n}_{2}+{x}_{1}\right)\right]}^{2}-4\left({x}_{1}+{x}_{2}\right)\left({n}_{1}+{n}_{2}\right)\theta}}{2\left({n}_{1}+{n}_{2}\right)\theta}.$

The observed likelihood ratio statistic is

$w\left(\theta \right)=2\left[l\left({\stackrel{^}{p}}_{1},{\stackrel{^}{p}}_{2}\right)-l\left({\stackrel{\u02dc}{p}}_{1},{\stackrel{\u02dc}{p}}_{2}\right)\right]$ .

With the regularity conditions given in [9] , Wilks' Theorem can be applied, and hence $W\left(\theta \right)$ is asymptotically distributed as the chi-square distribution with 1 degree of freedom, ${\chi}_{1}^{2}$ . Therefore, the approximate $\left(1-\alpha \right)100\%$ confidence interval for $\theta $ obtained in [2] is

$\left\{\theta :w\left(\theta \right)<{\chi}_{1,\alpha}^{2}\right\}$

where ${\chi}_{1,\alpha}^{2}$ is the $\left(1-\alpha \right){100}^{\text{th}}$ percentile of the ${\chi}_{1}^{2}$ distribution.
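Putting the pieces together, the closed-form constrained MLE and the inversion of the likelihood ratio statistic can be sketched in Python. The helper names, the bisection routine, and the bracketing interval are my own choices; ${\chi}_{1,\alpha}^{2}$ is computed as ${z}_{\alpha /2}^{2}$ to stay within the standard library:

```python
from math import log, sqrt
from statistics import NormalDist

def loglik(p1, p2, x1, n1, x2, n2):
    """Joint binomial log-likelihood l(p1, p2)."""
    return (x1 * log(p1) + (n1 - x1) * log(1 - p1)
            + x2 * log(p2) + (n2 - x2) * log(1 - p2))

def constrained_mle(theta, x1, n1, x2, n2):
    """Closed-form constrained MLE of (p1, p2) subject to p1/p2 = theta."""
    b = (n1 + x2) * theta + (n2 + x1)
    p2 = (b - sqrt(b * b - 4 * (x1 + x2) * (n1 + n2) * theta)) / (2 * (n1 + n2) * theta)
    return theta * p2, p2

def w(theta, x1, n1, x2, n2):
    """Observed likelihood ratio statistic w(theta)."""
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    return 2 * (loglik(x1 / n1, x2 / n2, x1, n1, x2, n2)
                - loglik(p1t, p2t, x1, n1, x2, n2))

def lr_ci(x1, n1, x2, n2, alpha=0.05):
    """Invert {theta : w(theta) < chi2_{1,alpha}} by bisection on each side."""
    cut = NormalDist().inv_cdf(1 - alpha / 2) ** 2   # chi^2_{1,alpha} = z_{alpha/2}^2
    theta_hat = (x1 / n1) / (x2 / n2)
    f = lambda t: w(t, x1, n1, x2, n2) - cut

    def root(lo, hi):
        # plain bisection; f changes sign exactly once on each side of theta_hat
        flo = f(lo)
        for _ in range(200):
            mid = (lo + hi) / 2
            fm = f(mid)
            if flo * fm <= 0:
                hi = mid
            else:
                lo, flo = mid, fm
        return (lo + hi) / 2

    return root(theta_hat / 1000, theta_hat), root(theta_hat, theta_hat * 1000)
```

One can check algebraically that the discriminant under the square root is always nonnegative (it equals a quantity bounded below by $4(n_1-x_1)(n_2-x_2)\theta \ge 0$ after expansion), so the closed form is real for every $\theta > 0$.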

It is well-known that the above methods are not very accurate when the sample size is small. Although $W\left(\theta \right)$ is asymptotically distributed as the ${\chi}_{1}^{2}$ distribution, except in special cases, $E\left[W\left(\theta \right)\right]\ne 1$ , which is the mean of the ${\chi}_{1}^{2}$ distribution. [10] proposed a scale transformation of $W\left(\theta \right)$ such that the mean of the transformed statistic matches the mean of the ${\chi}_{1}^{2}$ distribution. This transformed statistic is known as the Bartlett corrected likelihood ratio statistic. Mathematically, let the Bartlett corrected likelihood ratio statistic be

${W}^{\ast}\left(\theta \right)=\frac{W\left(\theta \right)}{E\left[W\left(\theta \right)\right]}$ .

Then ${W}^{\ast}\left(\theta \right)$ is asymptotically distributed as ${\chi}_{1}^{2}$ distribution and $E\left[{W}^{\ast}\left(\theta \right)\right]=1$ . However, the explicit form of $E\left[W\left(\theta \right)\right]$ is only available in a few well-defined problems.

In this paper, I propose to use the following algorithm to approximate $E\left[W\left(\theta \right)\right]$ and hence the observed Bartlett corrected likelihood ratio statistic ${w}^{\ast}\left(\theta \right)$ .

Note that the key step of the algorithm is Step 4, where we simulate new data from the Binomial distribution with the parameter chosen to be the constrained MLE obtained in Step 2. The reason is that we are trying to obtain the sampling distribution of the likelihood ratio statistic $W\left(\theta \right)$ , which is a function of the $\theta $ value given in Step 2. Hence, the constrained MLE is used in Step 4.
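One way to implement the idea described above, estimating $E\left[W\left(\theta \right)\right]$ by simulating from the Binomial distribution at the constrained MLE, is the following sketch. The function names, the number of simulated datasets M, and the decision to skip degenerate simulated tables (where the likelihood ratio statistic is undefined) are my own illustrative assumptions, not the author's exact algorithm:

```python
import random
from math import log, sqrt

def loglik(p1, p2, x1, n1, x2, n2):
    return (x1 * log(p1) + (n1 - x1) * log(1 - p1)
            + x2 * log(p2) + (n2 - x2) * log(1 - p2))

def constrained_mle(theta, x1, n1, x2, n2):
    b = (n1 + x2) * theta + (n2 + x1)
    p2 = (b - sqrt(b * b - 4 * (x1 + x2) * (n1 + n2) * theta)) / (2 * (n1 + n2) * theta)
    return theta * p2, p2

def w(theta, x1, n1, x2, n2):
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    return 2 * (loglik(x1 / n1, x2 / n2, x1, n1, x2, n2)
                - loglik(p1t, p2t, x1, n1, x2, n2))

def bartlett_w(theta, x1, n1, x2, n2, M=200, seed=1):
    """Sketch: estimate E[W(theta)] by simulating M datasets from
    Binomial(n_i, p~_i) at the constrained MLE, then rescale the
    observed w(theta) so the corrected statistic has mean close to 1."""
    rng = random.Random(seed)
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    sims = []
    for _ in range(M):
        y1 = sum(rng.random() < p1t for _ in range(n1))
        y2 = sum(rng.random() < p2t for _ in range(n2))
        if 0 < y1 < n1 and 0 < y2 < n2:      # keep w well defined
            sims.append(w(theta, y1, n1, y2, n2))
    # divide by the simulated mean of W(theta): the Bartlett correction
    return w(theta, x1, n1, x2, n2) * len(sims) / sum(sims)
```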

As a final note for this section, the method of [2] is computationally intensive because, to obtain the required confidence limits, we need to find the smallest and the largest $\theta $ values such that $w\left(\theta \right)={\chi}_{1,\alpha}^{2}$ . The same needs to be done for the proposed method. In contrast, the methods of [1] have closed-form expressions for the confidence limits, so they are easier to calculate and are available in statistical software.

3. Results

Our first example revisits the dataset discussed in the previous section. Table 2 records the 95% confidence intervals for the relative risk obtained by the methods discussed in this paper. Since the sample sizes are very large, it is not surprising that all the intervals are very close to each other.

As for our second example, the number of divorces during 2006 in a random sample of Army Reserve and Army Guard couples is reported in [11] . The data are presented in Table 3.

The estimated relative risk is $\stackrel{^}{\theta}=\frac{12/324}{7/286}=1.51$ , which indicates that the divorce rate for Army Reserve personnel is higher than the divorce rate of the Army Guard. Table 4 records the 95% confidence intervals for the relative risk obtained by the methods discussed in this paper. Despite the sample sizes being relatively large, the results are still quite different.

Table 2. 95% confidence interval for the relative risk of having heart attacks among physicians taking aspirin versus physicians taking a placebo.

Table 3. The number of divorces during 2006 in a random sample of Army Reserve and Army Guard couples.

For this example, we also calculated the probability that the true relative risk is as extreme or more extreme than the estimated relative risk by the four methods discussed in this paper. The results are plotted in Figure 1(a) for a small true relative risk and in Figure 1(b) for a large true relative risk. The plots clearly show that the four methods give different results, especially when the true relative risk is large.

Hence, it is important to investigate which method is more accurate when the sample size is small. The following simulation studies were performed.

Note that, for a given true relative risk ${\theta}_{0}$ , the proportion of samples with ${\theta}_{0}$ less than the lower confidence limit is known as the lower error proportion, the proportion of samples with ${\theta}_{0}$ larger than the upper confidence limit is known as the upper error proportion, and the proportion of samples with ${\theta}_{0}$ falling within the confidence interval is known as the central coverage proportion. Moreover, the average absolute bias is defined as

$\frac{\left|\text{lower error proportion}-0.025\right|+\left|\text{upper error proportion}-0.025\right|}{2}$ ,

which is a measure of bias of the 95% confidence interval. The nominal values for the lower error proportion, central coverage proportion, upper error proportion, and average absolute bias are 0.025, 0.95, 0.025, and 0, respectively.
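The simulation metrics just defined can be sketched as follows, using the Wald interval as the interval under study. The function name, the handling of zero cells (tables where the Wald interval is undefined are skipped), and the default N are my assumptions:

```python
import random
from math import exp, log, sqrt
from statistics import NormalDist

def coverage_summary(n1, p1, n2, p2, N=2000, alpha=0.05, seed=7):
    """Sketch of the coverage study: simulate N datasets with true
    theta0 = p1/p2, build the Wald interval for each, and tally the
    lower error, central coverage, and upper error proportions
    (nominal 0.025, 0.95, 0.025) plus the average absolute bias."""
    rng = random.Random(seed)
    theta0 = p1 / p2
    z = NormalDist().inv_cdf(1 - alpha / 2)
    low_err = up_err = used = 0
    for _ in range(N):
        x1 = sum(rng.random() < p1 for _ in range(n1))
        x2 = sum(rng.random() < p2 for _ in range(n2))
        if x1 in (0, n1) or x2 in (0, n2):
            continue                      # Wald interval undefined; skip
        used += 1
        th = (x1 / n1) / (x2 / n2)
        se = sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)
        lo, hi = exp(log(th) - z * se), exp(log(th) + z * se)
        if theta0 < lo:
            low_err += 1
        elif theta0 > hi:
            up_err += 1
    le, ue = low_err / used, up_err / used
    cc = 1 - le - ue
    aab = (abs(le - 0.025) + abs(ue - 0.025)) / 2
    return le, cc, ue, aab
```

The same loop, with the interval construction swapped out, covers the other three methods.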

Table 5 records the lower error proportion, central coverage proportion, and upper error proportion for a sample of simulation studies that I have performed. Results for other combinations of ${n}_{1},{p}_{1},{n}_{2}$ and ${p}_{2}$ are very similar and are available upon request.

Table 4. 95% confidence interval for the relative risk of divorce in the Army Reserve versus the Army Guard.

Table 5. Lower error proportion (le), central coverage proportion (cc), upper error proportion (ue), and absolute average bias (aab) of the 95% confidence interval for θ with N = 10,000 and M = 200.

Note: Method 1 = Agresti’s method without adjustment, Method 2 = Agresti’s method with adjustment, Method 3 = Zhou’s method, and Method 4 = Proposed method.

Figure 1. (a) and (b) show probability that the true relative risk is as extreme or more extreme than the estimated relative risk.

From Table 5, the two methods of [1] do not give satisfactory results. While one can argue that they have decent central coverage proportions when the sample sizes are large, they also have asymmetric tail errors. Moreover, although the aim of the adjusted method in [1] is to correct the bias of the standard point estimator, the adjustment has little effect on the central coverage proportion and has an adverse effect on the tail error proportions. The method of [2] gives good central coverage proportions, but the tail errors are asymmetric. The proposed method outperforms the other three methods discussed in this paper regardless of the sample sizes.

4. Conclusion

In this paper, we demonstrated via simulations that the two methods discussed in [1] , which are implemented in most standard statistical software, do not have good central coverage properties, and their tail errors are extremely asymmetric, particularly when the sample sizes are small. Thus, practitioners should interpret confidence intervals obtained from standard statistical software with caution, especially when the sample sizes are small. The likelihood ratio method proposed in [2] has good central coverage but asymmetric tail errors, which is still an improvement over the methods of [1] . In comparison, the proposed modification of the likelihood ratio method outperforms the other three methods in terms of both central coverage and tail error symmetry, even when the sample sizes are small.

References

[1] Agresti, A. (2012) Categorical Data Analysis. 3rd Edition, Wiley, New York.

[2] Zhou, M. (2018) Confidence Intervals for Relative Risk by Likelihood Ratio Test. Biostatistics and Biometrics Open Access Journal, 6, 1-3.

https://doi.org/10.19080/BBOAJ.2018.06.555700

[3] Sistrom, C.L. and Garvan, C.W. (2004) Proportions, Odds, and Risk. Radiology, 230, 12-19.

https://doi.org/10.1148/radiol.2301031028

[4] Di Lorenzo, L., Coco, V., Forte, F., Trinches, G.F., Forte, A.M. and Pappagallo, M. (2014) The Use of Odds Ratio in the Large Population-Based Studies: Warnings to Readings. Muscles Ligaments and Tendons Journal, 4, 90-92.

https://doi.org/10.11138/mltj/2014.4.1.090

[5] Kalbfleisch, J.D. and Prentice, R.L. (2011) The Statistical Analysis of Failure Time Data. 2nd Edition, Wiley, New York.

[6] Tenny, S. and Bhimji, S.S. (2017) Relative Risk. StatPearls.

https://www.ncbi.nlm.nih.gov/books/NBK430824

[7] Schechtman, E. (2002) Odds Ratio, Relative Risk, Absolute Risk Reduction, and the Number Needed to Treat—Which of These Should We Use? Value Health, 5, 431-436.

https://doi.org/10.1046/J.1524-4733.2002.55150.x

[8] Gardner, M.J. and Altman, D.G. (1986) Confidence Intervals Rather Than P-Values: Estimation Rather Than Hypothesis Testing. British Medical Journal (Clinical Research Edition), 292, 746-750.

https://doi.org/10.1136/bmj.292.6522.746

[9] Shao, J. (2009) Mathematical Statistics. 2nd Edition, Springer, New York.

[10] Bartlett, M.S. (1937) Properties of Sufficiency and Statistical Tests. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 160, 268-282.

[11] Kokoska, S. (2011) Introductory Statistics: A Problem-Solving Approach. Freeman, New York.