Consider two groups of subjects: an exposure group ($i = 1$) and a control group ($i = 2$). Let $n_i$ be the number of subjects in group $i$ and $p_i$ be the risk of a specific outcome in group $i$. Then the random variable $X_i$, which is the number of subjects that give the specific outcome in group $i$, is distributed as Binomial$(n_i, p_i)$. As defined in  and  , the relative risk of the outcome in the exposure group versus the control group is $\theta = p_1/p_2$. Note that $\theta$ can be any nonnegative real number. When $\theta < 1$, it suggests that the exposure being considered is associated with a reduction in risk, and $\theta > 1$ suggests that the exposure is associated with an increase in risk. $\theta = 1$ is generally of interest because it suggests that the exposure has no impact on risk. In general, $p_i$ is unknown, but we observe $x_i$. Then $p_i$ can be estimated by $\hat{p}_i = x_i/n_i$, and, therefore, an estimate of the relative risk based on the observed sample is $\hat{\theta} = \hat{p}_1/\hat{p}_2 = (x_1/n_1)/(x_2/n_2)$.
Relative risk is a popular measure in biomedical studies because it is easy to compute and interpret, and it is included in standard statistical software output (e.g., in R and SAS).  gives a detailed discussion of the application of relative risk to failure time data.  applies relative risk to study populations with differing disease prevalence.  compares relative risk with the odds ratio and absolute risk reduction in comparing the effectiveness of certain treatments.
To illustrate the concept of relative risk, consider the following example.  examined the Physicians’ Health Study, which analyzed whether taking aspirin regularly reduces cardiovascular disease. Data from the study are reported in Table 1.
Out of 11,037 physicians taking aspirin over the course of the study, 104 had heart attacks. Similarly, 189 of the 11,034 physicians in the placebo group had heart attacks. Based on this dataset, the estimated relative risk of having a heart attack among physicians is

$\hat{\theta} = \frac{104/11{,}037}{189/11{,}034} \approx 0.55.$
Thus, physicians who took aspirin over the course of the study have 0.55 times the risk of having a heart attack as physicians who were in the placebo group. This suggests that taking aspirin is associated with a reduction in the risk of heart attacks among physicians as they are about half as likely to have a heart attack as physicians who did not take aspirin throughout the study.
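The computation in the example above can be reproduced in a few lines of Python; a minimal sketch using the counts from Table 1:

```python
# Reproducing the relative risk estimate for the aspirin data in Table 1.
x1, n1 = 104, 11037   # heart attacks / physicians, aspirin group
x2, n2 = 189, 11034   # heart attacks / physicians, placebo group

p1_hat = x1 / n1      # estimated risk in the exposure (aspirin) group
p2_hat = x2 / n2      # estimated risk in the control (placebo) group
theta_hat = p1_hat / p2_hat
print(round(theta_hat, 2))  # -> 0.55
```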
Although reporting a point estimate of the relative risk is important, it does not provide information about the variation arising from the observed data. Hence, in practice, a confidence interval for $\theta$ is usually reported and recommended (see  ). A standard approximate confidence interval for $\theta$ is given in  , which is widely implemented in statistical software.  proposed an alternative way of approximating a confidence interval for $\theta$ via the likelihood ratio statistic. It is well known that both methods are not accurate when the sample size is small. In this paper, by adjusting the likelihood ratio statistic obtained by  , a new method is proposed to obtain a confidence interval for the relative risk. Simulation results show that the proposed method is extremely accurate even when the sample size is small.
Let $X_i$, $i = 1, 2$, be independent random variables distributed as Binomial$(n_i, p_i)$. Then the relative risk is defined as $\theta = p_1/p_2$. A standard estimator of $\theta$ is $\hat{\theta} = (X_1/n_1)/(X_2/n_2)$. With realizations $x_1$ and $x_2$, a standard estimate of $\theta$ is $\hat{\theta} = (x_1/n_1)/(x_2/n_2)$.  considered the parameter $\psi = \log\theta$. The corresponding estimator of $\psi$ is $\hat{\psi} = \log\hat{\theta}$. By applying the delta method, we have

$\operatorname{Var}(\hat{\psi}) \approx \frac{1-p_1}{n_1 p_1} + \frac{1-p_2}{n_2 p_2}.$
Table 1. Cross-classification of aspirin use and heart attack.

              Heart attack   No heart attack   Total
Aspirin       104            10,933            11,037
Placebo       189            10,845            11,034
Therefore, an estimate of $\psi$ is $\hat{\psi} = \log\hat{\theta}$, and the estimated variance of $\hat{\psi}$ is

$\widehat{\operatorname{Var}}(\hat{\psi}) = \frac{1-\hat{p}_1}{n_1\hat{p}_1} + \frac{1-\hat{p}_2}{n_2\hat{p}_2} = \frac{1}{x_1} - \frac{1}{n_1} + \frac{1}{x_2} - \frac{1}{n_2}.$
Hence, when $n_1$ and $n_2$ are large, by the Central Limit Theorem, an approximate $(1-\alpha)100\%$ confidence interval for $\psi$ is

$\hat{\psi} \pm z_{\alpha/2}\sqrt{\widehat{\operatorname{Var}}(\hat{\psi})},$

where $z_{\alpha/2}$ is the $(1-\alpha/2)100$th percentile of the standard normal distribution. Since $\psi$ and $\theta$ are in one-to-one correspondence, an approximate $(1-\alpha)100\%$ confidence interval for $\theta$ is

$\exp\left(\hat{\psi} \pm z_{\alpha/2}\sqrt{\widehat{\operatorname{Var}}(\hat{\psi})}\right).$
The above interval is directly available from R using the riskratio() command.
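For readers without R at hand, the standard interval can be sketched in plain Python using only the standard library (the function name `wald_ci_rr` is ours, not from any package):

```python
import math
from statistics import NormalDist

def wald_ci_rr(x1, n1, x2, n2, alpha=0.05):
    """Approximate (1 - alpha) confidence interval for the relative risk,
    obtained by exponentiating the normal-theory interval for psi = log(theta)."""
    psi_hat = math.log((x1 / n1) / (x2 / n2))
    # Estimated variance of psi_hat from the delta method:
    var_hat = 1 / x1 - 1 / n1 + 1 / x2 - 1 / n2
    z = NormalDist().inv_cdf(1 - alpha / 2)   # standard normal percentile
    half = z * math.sqrt(var_hat)
    return math.exp(psi_hat - half), math.exp(psi_hat + half)

# Aspirin data from Table 1:
lo, hi = wald_ci_rr(104, 11037, 189, 11034)
```

For the aspirin data this gives an interval of roughly (0.43, 0.70), consistent with the point estimate 0.55.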
Since $\hat{\theta}$ is a biased estimator of $\theta$,  suggests using a modified estimator of $\psi$, which takes the form

$\tilde{\psi} = \log\left(\frac{(x_1+0.5)/(n_1+0.5)}{(x_2+0.5)/(n_2+0.5)}\right).$
The estimated variance of $\tilde{\psi}$ is

$\widehat{\operatorname{Var}}(\tilde{\psi}) = \frac{1}{x_1+0.5} - \frac{1}{n_1+0.5} + \frac{1}{x_2+0.5} - \frac{1}{n_2+0.5}.$
Thus, the corresponding approximate $(1-\alpha)100\%$ confidence interval for $\theta$ is

$\exp\left(\tilde{\psi} \pm z_{\alpha/2}\sqrt{\widehat{\operatorname{Var}}(\tilde{\psi})}\right).$
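A sketch of the adjusted interval in Python, assuming the adjustment takes the common form of adding 0.5 to each count (that exact form, and the helper name, are our assumptions):

```python
import math
from statistics import NormalDist

def adjusted_wald_ci_rr(x1, n1, x2, n2, alpha=0.05):
    # Bias-adjusted estimate of psi = log(theta); the +0.5 terms are the
    # usual continuity-type adjustment (an assumption about the exact form).
    psi_t = math.log(((x1 + 0.5) / (n1 + 0.5)) / ((x2 + 0.5) / (n2 + 0.5)))
    var_t = 1 / (x1 + 0.5) - 1 / (n1 + 0.5) + 1 / (x2 + 0.5) - 1 / (n2 + 0.5)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * math.sqrt(var_t)
    return math.exp(psi_t - half), math.exp(psi_t + half)

# Aspirin data from Table 1; with counts this large the adjustment
# barely moves the interval.
lo, hi = adjusted_wald_ci_rr(104, 11037, 189, 11034)
```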
 proposed to construct an approximate confidence interval for $\theta$ based on the likelihood ratio statistic. Since $X_1$ and $X_2$ are independently distributed as Binomial$(n_i, p_i)$, the joint log-likelihood function, up to an additive constant, is

$\ell(p_1, p_2) = \sum_{i=1}^{2}\left[x_i\log p_i + (n_i - x_i)\log(1-p_i)\right].$
The point $(\hat{p}_1, \hat{p}_2)$ that maximizes the log-likelihood function is known as the maximum likelihood estimate (MLE) of $(p_1, p_2)$, which can be obtained by solving

$\frac{\partial \ell(p_1,p_2)}{\partial p_1} = 0 \quad\text{and}\quad \frac{\partial \ell(p_1,p_2)}{\partial p_2} = 0.$

In this case, the MLE of $(p_1, p_2)$ is $(\hat{p}_1, \hat{p}_2) = (x_1/n_1, x_2/n_2)$. Moreover, for a given $\theta$ value, the point $(\tilde{p}_1, \tilde{p}_2)$ that maximizes the log-likelihood function subject to the constraint $p_1 = \theta p_2$ is known as the constrained MLE of $(p_1, p_2)$.  gives a numerical algorithm to obtain $(\tilde{p}_1, \tilde{p}_2)$. However, by applying the Lagrange multiplier technique, we have an explicit closed form for the constrained MLE:

$\tilde{p}_2 = \frac{-B - \sqrt{B^2 - 4AC}}{2A}, \qquad \tilde{p}_1 = \theta\tilde{p}_2,$

where $A = \theta(n_1+n_2)$, $B = -\left[\theta(n_1 + x_2) + n_2 + x_1\right]$, and $C = x_1 + x_2$.
The observed likelihood ratio statistic is

$W(\theta) = 2\left[\ell(\hat{p}_1, \hat{p}_2) - \ell(\tilde{p}_1, \tilde{p}_2)\right].$
Under the regularity conditions given in  , Wilks’ Theorem can be applied, and hence $W(\theta)$ is asymptotically distributed as the chi-square distribution with 1 degree of freedom, $\chi^2_1$. Therefore, the approximate $(1-\alpha)100\%$ confidence interval for $\theta$ obtained in  is

$\left\{\theta : W(\theta) \le \chi^2_{1,\alpha}\right\},$

where $\chi^2_{1,\alpha}$ is the $(1-\alpha)100$th percentile of the $\chi^2_1$ distribution.
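The constrained MLE, the likelihood ratio statistic, and the resulting interval can be sketched in Python. The confidence limits are found by bisection on each side of $\hat{\theta}$, since $W(\theta)$ equals 0 at $\hat{\theta}$ and increases monotonically in each direction (all function names are ours):

```python
import math
from statistics import NormalDist

def loglik(p1, p2, x1, n1, x2, n2):
    # Joint binomial log-likelihood (up to an additive constant).
    return (x1 * math.log(p1) + (n1 - x1) * math.log(1 - p1)
            + x2 * math.log(p2) + (n2 - x2) * math.log(1 - p2))

def constrained_mle(theta, x1, n1, x2, n2):
    # Closed-form constrained MLE under p1 = theta * p2: the smaller root
    # of A p2^2 + B p2 + C = 0 from the Lagrange conditions.
    A = theta * (n1 + n2)
    B = -(theta * (n1 + x2) + n2 + x1)
    C = x1 + x2
    p2 = (-B - math.sqrt(B * B - 4 * A * C)) / (2 * A)
    return theta * p2, p2

def lr_stat(theta, x1, n1, x2, n2):
    # Observed likelihood ratio statistic W(theta).
    p1h, p2h = x1 / n1, x2 / n2
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    return 2 * (loglik(p1h, p2h, x1, n1, x2, n2)
                - loglik(p1t, p2t, x1, n1, x2, n2))

def lr_ci(x1, n1, x2, n2, alpha=0.05):
    # Confidence limits: the two roots of W(theta) = chi2_{1, alpha},
    # located by bisection on each side of theta_hat.
    cut = NormalDist().inv_cdf(1 - alpha / 2) ** 2  # chi-square(1) percentile
    th = (x1 / n1) / (x2 / n2)
    g = lambda t: lr_stat(t, x1, n1, x2, n2) - cut
    a, b = th * 1e-3, th          # W decreases toward 0 on this side
    for _ in range(100):
        m = (a + b) / 2
        a, b = (m, b) if g(m) > 0 else (a, m)
    lower = (a + b) / 2
    a, b = th, th * 1e3           # W increases away from theta_hat here
    for _ in range(100):
        m = (a + b) / 2
        a, b = (a, m) if g(m) > 0 else (m, b)
    upper = (a + b) / 2
    return lower, upper

# Aspirin data from Table 1; with samples this large the interval is
# very close to the Wald interval.
lo, hi = lr_ci(104, 11037, 189, 11034)
```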
It is well known that the above methods are not very accurate when the sample size is small. Although $W(\theta)$ is asymptotically distributed as the $\chi^2_1$ distribution, except in special cases, $E[W(\theta)] \ne 1$, which is the mean of the $\chi^2_1$ distribution.  proposed a scale transformation of $W(\theta)$ such that the mean of the transformed statistic is the mean of the $\chi^2_1$ distribution. This transformed statistic is known as the Bartlett corrected likelihood ratio statistic. Mathematically, the Bartlett corrected likelihood ratio statistic is

$W^*(\theta) = \frac{W(\theta)}{E[W(\theta)]}.$

Then $W^*(\theta)$ is asymptotically distributed as the $\chi^2_1$ distribution with $E[W^*(\theta)] = 1$. However, the explicit form of $E[W(\theta)]$ is available only in a few well-defined problems.
In this paper, I propose to use the following algorithm to approximate $E[W(\theta)]$ and hence the observed Bartlett corrected likelihood ratio statistic $W^*(\theta)$.
Note that the key step of the algorithm is Step 4, where we simulate new data from the Binomial distribution with the parameter chosen to be the constrained MLE obtained in Step 2. The reason is that we are trying to obtain the sampling distribution of the likelihood ratio statistic $W(\theta)$, which is a function of the $\theta$ value given in Step 2. Hence, the constrained MLE is used in Step 4.
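The procedure described above — simulating datasets from the binomial model at the constrained MLE and averaging the resulting likelihood ratio statistics to estimate $E[W(\theta)]$ — can be sketched as follows. The helper functions repeat the likelihood computations so the sketch is self-contained, and the counts in the illustration are hypothetical small-sample values, not taken from the examples in this paper:

```python
import math
import random

def loglik(p1, p2, x1, n1, x2, n2):
    # Joint binomial log-likelihood (up to an additive constant).
    return (x1 * math.log(p1) + (n1 - x1) * math.log(1 - p1)
            + x2 * math.log(p2) + (n2 - x2) * math.log(1 - p2))

def constrained_mle(theta, x1, n1, x2, n2):
    # Closed-form constrained MLE under the constraint p1 = theta * p2.
    A = theta * (n1 + n2)
    B = -(theta * (n1 + x2) + n2 + x1)
    C = x1 + x2
    p2 = (-B - math.sqrt(B * B - 4 * A * C)) / (2 * A)
    return theta * p2, p2

def lr_stat(theta, x1, n1, x2, n2):
    # Likelihood ratio statistic W(theta).
    p1h, p2h = x1 / n1, x2 / n2
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    return 2 * (loglik(p1h, p2h, x1, n1, x2, n2)
                - loglik(p1t, p2t, x1, n1, x2, n2))

def bartlett_factor(theta, x1, n1, x2, n2, M=500, seed=1):
    # Approximate E[W(theta)] by simulating M datasets from the binomial
    # model evaluated at the constrained MLE (the key step noted above).
    rng = random.Random(seed)
    p1t, p2t = constrained_mle(theta, x1, n1, x2, n2)
    ws = []
    for _ in range(M):
        y1 = sum(rng.random() < p1t for _ in range(n1))
        y2 = sum(rng.random() < p2t for _ in range(n2))
        if 0 < y1 < n1 and 0 < y2 < n2:   # skip degenerate samples
            ws.append(lr_stat(theta, y1, n1, y2, n2))
    return sum(ws) / len(ws)

# Hypothetical small-sample counts for illustration:
x1, n1, x2, n2 = 5, 20, 10, 20
w = lr_stat(1.0, x1, n1, x2, n2)          # observed W(theta) at theta = 1
b = bartlett_factor(1.0, x1, n1, x2, n2)  # estimated E[W(theta)]
w_star = w / b                            # Bartlett corrected statistic
```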
As a final note in this section, the method by  is computationally intensive because, to obtain the required confidence limits, we need to find the smallest value and the largest value of $\theta$ such that $W(\theta) \le \chi^2_{1,\alpha}$. The same needs to be done for the proposed method, with $W^*(\theta)$ in place of $W(\theta)$. However, the  methods have closed-form expressions for the confidence limits, so they are easier to calculate and are available in statistical software.
Our first example revisits the dataset discussed in the previous section. Table 2 records the 95% confidence intervals for the relative risk obtained by the methods discussed in this paper. Since the sample sizes are very large, it is not surprising that all the intervals are very close to each other.
As for our second example, the number of divorces during 2006 in a random sample of Army Reserve and Army Guard couples is reported in  . The data are presented in Table 3.
The estimated relative risk is , which indicates that the divorce rate for Army Reserve personnel is higher than the divorce rate for the Army Guard. Table 4 records the 95% confidence intervals for the relative risk obtained by the methods discussed in this paper. Despite the sample sizes being relatively large, the results are still quite different.

Table 2. 95% confidence interval for the relative risk of having heart attacks among physicians taking aspirin versus physicians taking a placebo.

Table 3. The number of divorces during 2006 in a random sample of Army Reserve and Army Guard couples.
For this example, we also calculated the probability that the true relative risk is as extreme as or more extreme than the estimated relative risk under each of the four methods discussed in this paper. The results are plotted in Figure 1(a) for small true relative risks and in Figure 1(b) for large true relative risks. The plots clearly show that the four methods give different results, especially when the true relative risk is large.
Hence, it is important to investigate which method is more accurate when the sample size is small. The following simulation studies were performed.
Note that the proportion of samples in which the true $\theta$ is less than the lower confidence limit is known as the lower error proportion, the proportion of samples in which the true $\theta$ is larger than the upper confidence limit is known as the upper error proportion, and the proportion of samples in which the true $\theta$ falls within the confidence interval is known as the central coverage proportion. Moreover, the average absolute bias is defined as

$\text{aab} = \frac{\left|\text{lower error proportion} - 0.025\right| + \left|\text{upper error proportion} - 0.025\right|}{2},$

which is a measure of the bias of the 95% confidence interval. The nominal values for the lower error proportion, central coverage proportion, upper error proportion, and average absolute bias are 0.025, 0.95, 0.025, and 0, respectively.
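The simulation design can be sketched in Python for the standard Wald interval; the sample sizes, risks, and number of replications below are illustrative only, smaller than those used for Table 5:

```python
import math
import random
from statistics import NormalDist

def wald_ci_rr(x1, n1, x2, n2, alpha=0.05):
    # Standard Wald interval for the relative risk via psi = log(theta).
    psi_hat = math.log((x1 / n1) / (x2 / n2))
    var_hat = 1 / x1 - 1 / n1 + 1 / x2 - 1 / n2
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * math.sqrt(var_hat)
    return math.exp(psi_hat - half), math.exp(psi_hat + half)

def coverage_study(n1, n2, p1, p2, N=2000, seed=7):
    # Monte Carlo estimates of the lower/upper error proportions, the
    # central coverage proportion, and the average absolute bias.
    rng = random.Random(seed)
    theta = p1 / p2
    low = up = valid = 0
    for _ in range(N):
        x1 = sum(rng.random() < p1 for _ in range(n1))
        x2 = sum(rng.random() < p2 for _ in range(n2))
        if x1 == 0 or x2 == 0:
            continue                      # interval undefined for zero counts
        valid += 1
        lo, hi = wald_ci_rr(x1, n1, x2, n2)
        if theta < lo:
            low += 1
        elif theta > hi:
            up += 1
    le, ue = low / valid, up / valid
    cc = 1 - le - ue
    aab = (abs(le - 0.025) + abs(ue - 0.025)) / 2
    return le, cc, ue, aab

# Illustrative setting (much smaller than the N = 10,000 used in Table 5):
le, cc, ue, aab = coverage_study(30, 30, 0.3, 0.3)
```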
Table 5 records the lower error proportion, central coverage proportion, and upper error proportion for a sample of the simulation studies that I have performed. Results for other combinations of sample sizes and parameter values are very similar and are available upon request.
Table 4. 95% confidence interval for the relative risk of divorce in the Army Reserve versus the Army Guard.
Table 5. Lower error proportion (le), central coverage proportion (cc), upper error proportion (ue), and average absolute bias (aab) of the 95% confidence interval for θ with N = 10,000 and M = 200.
Note: Method 1 = Agresti’s method without adjustment, Method 2 = Agresti’s method with adjustment, Method 3 = Zhou’s method, and Method 4 = Proposed method.
Figure 1. (a) and (b) show the probability that the true relative risk is as extreme as or more extreme than the estimated relative risk, for small and large true relative risks, respectively.
From Table 5, the two methods by  do not give satisfactory results. While one can argue that they have decent central coverage proportions when the sample sizes are large, they also have asymmetric tail errors. Moreover, although the aim of the adjusted method in  is a bias adjustment to the standard point estimator, it has little effect on the central coverage proportion and an adverse effect on the tail error proportions. The  method gives good central coverage proportions, but the tail errors are asymmetric. The proposed method outperformed the other three methods discussed in this paper regardless of the sample sizes.
In this paper, we demonstrated via simulations that the two methods discussed in  , which are implemented in most standard statistical software, do not have good central coverage properties and have extremely asymmetric tail errors, particularly when the sample sizes are small. Thus, practitioners should interpret confidence intervals obtained from standard statistical software with caution, especially when the sample sizes are small. The likelihood ratio method proposed in  has good central coverage, but the tail errors are asymmetric; it is nevertheless an improvement over the  methods. In comparison, the proposed modification of the likelihood ratio method outperforms the other three methods in terms of both central coverage and tail error symmetry, even when the sample sizes are small.
 Di Lorenzo, L., Coco, V., Forte, F., Trinches, G.F., Forte, A.M. and Pappagallo, M. (2014) The Use of Odds Ratio in the Large Population-Based Studies: Warnings to Readings. Muscles Ligaments and Tendons Journal, 4, 90-92.
 Schechtman, E. (2002) Odds Ratio, Relative Risk, Absolute Risk Reduction, and the Number Needed to Treat—Which of These Should We Use? Value Health, 5, 431-436.
 Gardner, M.J. and Altman, D.G. (1986) Confidence Intervals Rather Than P-Values: Estimation Rather Than Hypothesis Testing. British Medical Journal (Clinical Research Edition), 292, 746-750.