Exact Distribution of Difference of Two Sample Proportions and Its Inferences
Abstract: Comparing two population proportions with a confidence interval can be misleading when the sample sizes are small and the test is based on a normal approximation. In that case, the usual remedy is to collect a large sample, which may not be possible; one example is the study of a rare disease. The main purpose of this paper is to derive a closed formula for the exact distribution of the difference between two independent sample proportions, and to use it for related inferences, such as confidence intervals, regardless of the sample sizes, comparing the results with the existing Wald, Agresti-Caffo, and Score intervals. The derived distribution requires no assumptions and can be used for inferences such as hypothesis tests for two population proportions, whatever the nature of the distribution and the sample sizes. We claim that the exact distribution yields the smallest confidence width among Wald, Agresti-Caffo, and Score, so it is suitable for inference about the difference between population proportions regardless of sample size.

1. Introduction

Comparing two population proportions, especially when the sample sizes are small, is very challenging in statistics and has applications in many fields. Several procedures have been suggested; one of the most popular, used for a long time, is the Wald interval. Owing to its simplicity and convenience, the Wald method is the first that comes to mind for most statisticians. However, the Wald interval has some disadvantages. First, it is based on a normal approximation, which works well only with a large sample, and large samples may be costly in practice. Second, its coverage probability is liberal: with a nominal 95% confidence level, the coverage probability can fall below 0.5 when the sample size is small, and even for large samples the coverage probability remains below the nominal confidence level ( $1-\alpha$ ).

Agresti and Caffo (2000) introduced the Adjusted Wald confidence interval, a slight modification of the Wald interval obtained by adding one success and one failure to each group. They also showed that the coverage probability of the Adjusted Wald interval is considerably better than that of the regular Wald interval. However, the Agresti-Caffo interval is still based on a normal approximation.

Newcombe (1998) examined eleven different methods for comparing two population proportions. Some of them are conservative, like the Score interval, while others are liberal, like the Wald interval.

The main purpose of this paper is to derive a closed formula for the exact distribution of the difference between two independent sample proportions, and to use it for related inferences such as a hypothesis test. The rest of the paper is organized as follows. In Section 2, we derive the closed formula for the exact distribution of the difference between two independent sample proportions and break it into different cases. We obtain the support of the distribution in Section 3. In Section 4, we perform the hypothesis test. In Section 5, we compute the power of the hypothesis test. In Section 6, we compute the confidence interval and compare it to others. In Section 7, we summarize the main findings and conclude.

2. Exact Distribution of Difference of Two Sample Proportions

Let ${X}_{1},{X}_{2},\cdots ,{X}_{m}$ and ${Y}_{1},{Y}_{2},\cdots ,{Y}_{n}$ be iid Bernoulli random samples from two different populations with parameters ${p}_{1}$ and ${p}_{2}$ respectively, and let

${\stackrel{^}{p}}_{1}=\frac{1}{m}\underset{i=1}{\overset{m}{\sum }}\text{ }\text{ }{X}_{i}$ and ${\stackrel{^}{p}}_{2}=\frac{1}{n}\underset{i=1}{\overset{n}{\sum }}\text{ }\text{ }{Y}_{i}$ be the point estimates of the parameters ${p}_{1}$

and ${p}_{2}$ respectively. We denote the difference between ${\stackrel{^}{p}}_{1}$ and ${\stackrel{^}{p}}_{2}$ by D.

To obtain the exact distribution of D, we first derive the probability generating function (pgf) of $W=mn\left(D+1\right)$ in the following lemma.

Lemma

Let $W=mn\left(D+1\right)$, then the pgf of W is given by

${p}_{w}\left(z\right)=\underset{s=0}{\overset{m}{\sum }}\underset{u=0}{\overset{s}{\sum }}\underset{t=0}{\overset{n}{\sum }}\underset{v=0}{\overset{t}{\sum }}{\left(-1\right)}^{s+t+u+v}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ v\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}{z}^{un+vm}$ (1)

Now, let $f\left(\frac{k}{m}-\frac{l}{n}\right)$ denote the probability mass function (pmf) of D at the point $\frac{k}{m}-\frac{l}{n}$, for $k=0,\cdots ,m$ and $l=0,\cdots ,n$.

Theorem

Let $r=gcd\left(m,n\right)$ be the greatest common divisor of m and n, and let ${m}^{\prime }$ and ${n}^{\prime }$ be such that $m=r{m}^{\prime }$ and $n=r{n}^{\prime }$. The pmf of D is given by

$\begin{array}{c}f\left(\frac{k}{m}-\frac{l}{n}\right)={\left(-1\right)}^{k+n-l}\underset{s=0}{\overset{m}{\sum }}{\left(-1\right)}^{s}\left(\begin{array}{c}m\\ s\end{array}\right){p}_{1}^{s}\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{t}\left(\begin{array}{c}n\\ t\end{array}\right){\left(1-{p}_{2}\right)}^{t}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdot \underset{i\in {S}_{m,n}\left(s,t\right)}{\sum }{\left(-1\right)}^{i\left({m}^{\prime }-{n}^{\prime }\right)}\left(\begin{array}{c}s\\ k+i{m}^{\prime }\end{array}\right)\left(\begin{array}{c}t\\ \left(n-l\right)-i{n}^{\prime }\end{array}\right),\end{array}$

for $k=0,\cdots ,m$ and $l=0,\cdots ,n$, where

${S}_{m,n}\left(s,t\right)=\left[\mathrm{max}\left(-\frac{k}{{m}^{\prime }},\frac{\left(n-l\right)-t}{{n}^{\prime }}\right),\mathrm{min}\left(\frac{s-k}{{m}^{\prime }},\frac{\left(n-l\right)}{{n}^{\prime }}\right)\right]\cap ℤ$.
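As a sanity check, the Theorem can be verified numerically. The Python sketch below (the paper's own computations use R; the function names here are ours) implements the closed formula verbatim and compares it against a brute-force enumeration over the two independent binomial counts, using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, gcd

def pmf_theorem(m, n, k, l, p1, p2):
    """pmf of D at the point k/m - l/n, via the closed formula in the Theorem."""
    r = gcd(m, n)
    mp, npr = m // r, n // r                      # m' and n'
    total = Fraction(0)
    for s in range(m + 1):
        for t in range(n + 1):
            # S_{m,n}(s,t): integers i with k+i*m' in [0,s], (n-l)-i*n' in [0,t]
            lo = max(-(k // mp), -((t + l - n) // npr))       # ceil of the max
            hi = min((s - k) // mp, (n - l) // npr)           # floor of the min
            inner = 0
            for i in range(lo, hi + 1):
                sgn = -1 if (i * (mp - npr)) % 2 else 1
                inner += sgn * comb(s, k + i * mp) * comb(t, (n - l) - i * npr)
            if inner:
                sgn_st = -1 if (s + t) % 2 else 1
                total += (sgn_st * comb(m, s) * comb(n, t)
                          * p1**s * (1 - p2)**t * inner)
    return (-1 if (k + n - l) % 2 else 1) * total

def pmf_direct(m, n, k, l, p1, p2):
    """Same probability by brute force: sum of binomial products over all
    (a, b) with a/m - b/n equal to k/m - l/n."""
    target = Fraction(k, m) - Fraction(l, n)
    tot = Fraction(0)
    for a in range(m + 1):
        for b in range(n + 1):
            if Fraction(a, m) - Fraction(b, n) == target:
                tot += (comb(m, a) * p1**a * (1 - p1)**(m - a)
                        * comb(n, b) * p2**b * (1 - p2)**(n - b))
    return tot

# agreement on a case with gcd(m, n) > 1, over all (k, l)
m, n, p1, p2 = 4, 6, Fraction(2, 5), Fraction(1, 3)
for k in range(m + 1):
    for l in range(n + 1):
        assert pmf_theorem(m, n, k, l, p1, p2) == pmf_direct(m, n, k, l, p1, p2)
```

Note that when $gcd\left(m,n\right)>1$, different pairs $\left(k,l\right)$ can map to the same support point, and both functions return the total probability of the point $\frac{k}{m}-\frac{l}{n}$.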

From the Theorem above, we derive the following corollaries corresponding to different relationships between m and n.

Corollary 1

If $gcd\left(m,n\right)=1$, then the exact distribution of D is given by:

$Pr\left(D=\frac{k}{m}-\frac{l}{n}\right)=\left(\begin{array}{c}m\\ k\end{array}\right)\left(\begin{array}{c}n\\ l\end{array}\right){p}_{1}^{k}{p}_{2}^{l}{\left(1-{p}_{1}\right)}^{m-k}{\left(1-{p}_{2}\right)}^{n-l}$

for $\frac{k}{m}-\frac{l}{n}\ne 0$, while $Pr\left(D=0\right)={\left(1-{p}_{1}\right)}^{m}{\left(1-{p}_{2}\right)}^{n}+{p}_{1}^{m}{p}_{2}^{n}$.
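Corollary 1 can be checked the same way. Below is a small sketch (ours, not the paper's R code) verifying both the product form at nonzero points and the boundary term at zero for a coprime pair $\left(m,n\right)$:

```python
from fractions import Fraction
from math import comb, gcd

def exact_pmf_at(m, n, value, p1, p2):
    """Total probability that phat1 - phat2 equals `value`, by enumeration."""
    tot = Fraction(0)
    for a in range(m + 1):
        for b in range(n + 1):
            if Fraction(a, m) - Fraction(b, n) == value:
                tot += (comb(m, a) * p1**a * (1 - p1)**(m - a)
                        * comb(n, b) * p2**b * (1 - p2)**(n - b))
    return tot

m, n = 3, 5                       # gcd(m, n) = 1
p1, p2 = Fraction(1, 2), Fraction(2, 7)
assert gcd(m, n) == 1
for k in range(m + 1):
    for l in range(n + 1):
        v = Fraction(k, m) - Fraction(l, n)
        if v != 0:                # off zero, a single (k, l) contributes
            assert exact_pmf_at(m, n, v, p1, p2) == (
                comb(m, k) * comb(n, l) * p1**k * p2**l
                * (1 - p1)**(m - k) * (1 - p2)**(n - l))
# boundary term: (0, 0) and (m, n) both land on zero
assert exact_pmf_at(m, n, Fraction(0), p1, p2) == (
    (1 - p1)**m * (1 - p2)**n + p1**m * p2**n)
```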

Corollary 2

If $m=n$ and $k=l$, then the exact distribution of D is given by

$Pr\left(D=0\right)={\left(-1\right)}^{n}\underset{s=0}{\overset{n}{\sum }}{\left(-1\right)}^{s}\left(\begin{array}{c}n\\ s\end{array}\right){p}_{1}^{s}\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{t}\left(\begin{array}{c}n\\ t\end{array}\right){\left(1-{p}_{2}\right)}^{t}\underset{u=n-t}{\overset{s}{\sum }}\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}t\\ n-u\end{array}\right)$

Corollary 3

The exact distribution of D is given by

$\begin{array}{l}Pr\left(D=\frac{k}{m}-\frac{l}{n}\right)\\ =\underset{s=0}{\overset{m}{\sum }}\underset{t=0}{\overset{n}{\sum }}\underset{\left(u,v\right)\in {S}_{s,t}}{\sum }{\left(-1\right)}^{s+t+u+\left(k-u\right)\frac{n}{m}+n-l}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ \left(k-u\right)\frac{n}{m}+n-l\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}\end{array}$

for $k=0,\cdots ,m$ and $l=0,\cdots ,n$ where,

$\begin{array}{l}{S}_{s,t}=\left\{\left(u,\left(k-u\right)\frac{n}{m}+n-l\right)\in {ℕ}^{2}:\mathrm{max}\left(0,k+\left(n-l-t\right)\frac{m}{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\le u\le \mathrm{min}\left(s,k+\left(n-l\right)\frac{m}{n}\right)\right\}\end{array}$

Corollary 4

The exact distribution of D is symmetrical about zero if $m=n$ and ${p}_{1}={p}_{2}$.

3. Support of the Distribution

The support of the exact distribution is denoted by $D\left(m,n\right)$. For small values of m and n, it can be derived by hand. For larger values of m and n this is tedious and time-consuming, so software such as R is used.

For $m=n=2$, $D=\frac{k}{2}-\frac{l}{2}$, where $k=0,1,2$ and $l=0,1,2$.

Thus the support for $m=n=2$ is $\left\{-1,-0.5,0,0.5,1\right\}$.
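A minimal sketch of this enumeration (the paper uses R; this Python version is ours):

```python
from fractions import Fraction

def support(m, n):
    """D(m, n): all distinct values k/m - l/n for k = 0..m, l = 0..n."""
    return sorted({Fraction(k, m) - Fraction(l, n)
                   for k in range(m + 1) for l in range(n + 1)})

# matches the m = n = 2 case worked out above
assert support(2, 2) == [Fraction(-1), Fraction(-1, 2), Fraction(0),
                         Fraction(1, 2), Fraction(1)]
```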

The probability mass function of the exact distribution of the difference of two population proportions is plotted in Figure 1 for $m=n$ and ${p}_{1}={p}_{2}$. These graphs support Corollary 4.

Figure 1. Probability mass function of the exact distribution of the difference of two population proportions for $m=n$ and ${p}_{1}={p}_{2}$.

4. Hypothesis Testing

To test ${H}_{0}:{p}_{1}={p}_{2}=p$ against ${H}_{1}:{p}_{1}-{p}_{2}=\delta \ne 0$, we use D as a test statistic. Let $p\left(D=\frac{k}{m}-\frac{l}{n}|{H}_{0}\right)={f}_{0}\left(\frac{k}{m}-\frac{l}{n}\right)$. Then the null distribution of D is given by

$\begin{array}{c}{f}_{0}\left(\frac{k}{m}-\frac{l}{n}\right)={\left(-1\right)}^{k+n-l}\underset{s=0}{\overset{m}{\sum }}{\left(-1\right)}^{s}\left(\begin{array}{c}m\\ s\end{array}\right){p}^{s}\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{t}\left(\begin{array}{c}n\\ t\end{array}\right){\left(1-p\right)}^{t}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdot \underset{i\in {S}_{m,n}\left(s,t\right)}{\sum }{\left(-1\right)}^{i\left({m}^{\prime }-{n}^{\prime }\right)}\left(\begin{array}{c}s\\ k+i{m}^{\prime }\end{array}\right)\left(\begin{array}{c}t\\ \left(n-l\right)-i{n}^{\prime }\end{array}\right),\end{array}$

for $k=0,\cdots ,m$ and $l=0,\cdots ,n$, where

${S}_{m,n}\left(s,t\right)=\left[\mathrm{max}\left(-\frac{k}{{m}^{\prime }},\frac{\left(n-l\right)-t}{{n}^{\prime }}\right),\mathrm{min}\left(\frac{s-k}{{m}^{\prime }},\frac{\left(n-l\right)}{{n}^{\prime }}\right)\right]\cap ℤ$.

The critical region is determined by the critical values:

${c}_{\frac{\alpha }{2}}=\mathrm{max}\left\{d:pr\left(D\le d|{H}_{0}\right)\le \frac{\alpha }{2}\right\}$ and ${c}_{1-\frac{\alpha }{2}}=\mathrm{min}\left\{d:pr\left(D\ge d|{H}_{0}\right)\le \frac{\alpha }{2}\right\}$.

This means that:

$\underset{\left(k,l\right)\in {E}_{\frac{\alpha }{2}}}{\sum }{f}_{0}\left(\frac{k}{m}-\frac{l}{n}\right)\le \frac{\alpha }{2}$ and $\underset{\left(k,l\right)\in {E}_{1-\frac{\alpha }{2}}}{\sum }{f}_{0}\left(\frac{k}{m}-\frac{l}{n}\right)\le \frac{\alpha }{2}$.

where

${E}_{\frac{\alpha }{2}}=\left\{\left(k,l\right)\in {ℕ}^{2}:0\le k\le m,0\le l\le n,\frac{k}{m}-\frac{l}{n}\le {c}_{\frac{\alpha }{2}}\right\}$

and

${E}_{1-\frac{\alpha }{2}}=\left\{\left(k,l\right)\in {ℕ}^{2}:0\le k\le m,0\le l\le n,\frac{k}{m}-\frac{l}{n}\ge {c}_{1-\frac{\alpha }{2}}\right\}$
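The search for the critical values can be sketched as follows. Note that the null distribution still depends on the common value p, which must be supplied (in practice it is typically estimated, e.g. by pooling); that choice, and the function names, are our assumptions, not the paper's:

```python
from fractions import Fraction
from math import comb

def null_pmf(m, n, p):
    """Exact null distribution of D (p1 = p2 = p), by direct enumeration."""
    pmf = {}
    for k in range(m + 1):
        for l in range(n + 1):
            v = Fraction(k, m) - Fraction(l, n)
            pr = (comb(m, k) * p**k * (1 - p)**(m - k)
                  * comb(n, l) * p**l * (1 - p)**(n - l))
            pmf[v] = pmf.get(v, Fraction(0)) + pr
    return pmf

def critical_values(m, n, p, alpha=Fraction(1, 20)):
    """c_{a/2} = max{d : P(D <= d | H0) <= a/2} and
    c_{1-a/2} = min{d : P(D >= d | H0) <= a/2}; None if no d qualifies."""
    pmf = null_pmf(m, n, p)
    vals = sorted(pmf)
    cum, c_lo = Fraction(0), None
    for v in vals:                       # cumulative from the bottom
        cum += pmf[v]
        if cum <= alpha / 2:
            c_lo = v
    cum, c_hi = Fraction(0), None
    for v in reversed(vals):             # cumulative from the top
        cum += pmf[v]
        if cum <= alpha / 2:
            c_hi = v
    return c_lo, c_hi
```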

Example: Gender Discrimination

The table below shows the gender distribution of the promoted files.

Data Source:

https://www2.stat.duke.edu/courses/Spring12/sta101.1/lec/lec14S.pdf.

Here we investigate whether gender discrimination is associated with the promotion of employees; that is, we conduct the following hypothesis test.

${H}_{0}$ : There is no gender discrimination in promotion vs ${H}_{1}$ : There is gender discrimination in promotion.

We run the R program for the exact distribution with $m=24$, $n=24$, ${\stackrel{^}{p}}_{1}=\frac{21}{24}$, and ${\stackrel{^}{p}}_{2}=\frac{14}{24}$, and obtain the test statistic 0.291667 and p-value 0.03286628. Since the p-value is less than $\alpha =0.05$, we reject the null hypothesis and conclude that there is gender discrimination in promotion. However, because the p-value is only slightly less than $\alpha$, the evidence of discrimination is moderate.

5. Power Calculation

If ${c}_{\frac{\alpha }{2}}$ and ${c}_{1-\frac{\alpha }{2}}$ are the left and right critical values and the null hypothesis is rejected for the test statistic $d={\stackrel{^}{p}}_{1}-{\stackrel{^}{p}}_{2}$, then the power of the corresponding hypothesis test is given by:

$1-\beta =2\mathrm{min}\left\{pr\left(D\le d|{H}_{\alpha }\right),pr\left(D\ge d|{H}_{\alpha }\right)\right\}=2\underset{\left(k,l\right)\in {E}_{\alpha }}{\sum }f\left(\frac{k}{m}-\frac{l}{n}\right)$

where

${E}_{\alpha }=\left\{\left(k,l\right)\in {ℕ}^{2}:0\le k\le m,0\le l\le n,\frac{k}{m}-\frac{l}{n}\le d\text{\hspace{0.17em}}\text{ }\text{or}\text{\hspace{0.17em}}\text{ }\frac{k}{m}-\frac{l}{n}\ge d\right\}$
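One concrete reading of the power computation (an assumption on our part; the paper's R program and the displayed formula may differ in details): reject when D falls at or below ${c}_{\frac{\alpha }{2}}$ or at or above ${c}_{1-\frac{\alpha }{2}}$, with the critical values taken from the null distribution as in Section 4, and evaluate the probability of that rejection region under the alternative $\left({p}_{1},{p}_{2}\right)$:

```python
from fractions import Fraction
from math import comb

def dist_pmf(m, n, p1, p2):
    """Exact pmf of D = phat1 - phat2, by direct enumeration."""
    pmf = {}
    for k in range(m + 1):
        for l in range(n + 1):
            v = Fraction(k, m) - Fraction(l, n)
            pr = (comb(m, k) * p1**k * (1 - p1)**(m - k)
                  * comb(n, l) * p2**l * (1 - p2)**(n - l))
            pmf[v] = pmf.get(v, Fraction(0)) + pr
    return pmf

def critical_values(m, n, p, alpha):
    """Critical values from the null distribution (p1 = p2 = p)."""
    pmf = dist_pmf(m, n, p, p)
    vals = sorted(pmf)
    cum, c_lo = Fraction(0), None
    for v in vals:                       # largest d with P(D <= d) <= alpha/2
        cum += pmf[v]
        if cum <= alpha / 2:
            c_lo = v
    cum, c_hi = Fraction(0), None
    for v in reversed(vals):             # smallest d with P(D >= d) <= alpha/2
        cum += pmf[v]
        if cum <= alpha / 2:
            c_hi = v
    return c_lo, c_hi

def power(m, n, p_null, p1, p2, alpha=Fraction(1, 20)):
    """P(reject H0 | p1, p2): mass of the critical region under the alternative."""
    c_lo, c_hi = critical_values(m, n, p_null, alpha)
    pmf = dist_pmf(m, n, p1, p2)
    return sum(pr for v, pr in pmf.items()
               if (c_lo is not None and v <= c_lo)
               or (c_hi is not None and v >= c_hi))
```

At the null itself the rejection probability cannot exceed $\alpha$, since each tail of the critical region carries at most $\frac{\alpha }{2}$ by construction.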

Continuation of the example: Gender Discrimination

In this example, we rejected the null hypothesis at significance level $\alpha =0.05$. We now find the power of the test for ${p}_{1}={\stackrel{^}{p}}_{1}=\frac{21}{24}$, ${p}_{2}={\stackrel{^}{p}}_{2}=\frac{14}{24}$, and $\alpha =0.05$. Running the R program for the power calculation of the exact distribution gives a power of 0.5657226.

6. Confidence Interval

The point estimator of ${p}_{1}-{p}_{2}$ is $D={\stackrel{^}{p}}_{1}-{\stackrel{^}{p}}_{2}$, which is computed from the given samples. Let ${L}_{\alpha /2}$ and ${U}_{\alpha /2}$ be the lower and upper bounds of the $1-\alpha$ confidence interval for ${p}_{1}-{p}_{2}$. We obtain ${L}_{\alpha /2}$ and ${U}_{\alpha /2}$ as follows:

${L}_{\alpha /2}=\mathrm{max}\left\{d:pr\left(D\le d\right)\le \frac{\alpha }{2}\right\}$

${U}_{\alpha /2}=\mathrm{min}\left\{d:pr\left(D\ge d\right)\le \frac{\alpha }{2}\right\}.$

Thus, $\left(1-\alpha \right)100%$ confidence interval for ${p}_{1}-{p}_{2}$ is $\left({L}_{\alpha /2},{U}_{\alpha /2}\right)$.
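A sketch of the interval construction follows. The paper does not state under which $\left({p}_{1},{p}_{2}\right)$ the probabilities above are evaluated; the sketch below plugs in the sample estimates, which is our assumption:

```python
from fractions import Fraction
from math import comb

def dist_pmf(m, n, p1, p2):
    """Exact pmf of D = phat1 - phat2, by direct enumeration."""
    pmf = {}
    for k in range(m + 1):
        for l in range(n + 1):
            v = Fraction(k, m) - Fraction(l, n)
            pr = (comb(m, k) * p1**k * (1 - p1)**(m - k)
                  * comb(n, l) * p2**l * (1 - p2)**(n - l))
            pmf[v] = pmf.get(v, Fraction(0)) + pr
    return pmf

def exact_ci(m, n, x1, x2, alpha=Fraction(1, 20)):
    """(L_{a/2}, U_{a/2}) per the definitions above, with the distribution of D
    evaluated at phat1 = x1/m, phat2 = x2/n (our plug-in assumption)."""
    pmf = dist_pmf(m, n, Fraction(x1, m), Fraction(x2, n))
    vals = sorted(pmf)
    L, U = Fraction(-1), Fraction(1)     # trivial bounds if no point qualifies
    cum = Fraction(0)
    for v in vals:                       # L: largest d with P(D <= d) <= alpha/2
        cum += pmf[v]
        if cum <= alpha / 2:
            L = v
    cum = Fraction(0)
    for v in reversed(vals):             # U: smallest d with P(D >= d) <= alpha/2
        cum += pmf[v]
        if cum <= alpha / 2:
            U = v
    return L, U
```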

The confidence interval offers a relatively easy way to compare two population proportions ( ${p}_{1}-{p}_{2}$ ). We calculate the sample proportions ${\stackrel{^}{p}}_{1}$ and ${\stackrel{^}{p}}_{2}$ from the respective samples and use them to construct a confidence interval with nominal confidence coefficient $1-\alpha$. If the interval does not include 0, we reject the null hypothesis; otherwise, we fail to reject it.


Table 1. 95% confidence interval for Exact, Wald, Agresti-Caffo, and Score.

For this comparison, we constructed confidence intervals, including the respective confidence widths, for Exact, Wald, Agresti-Caffo, and Score with $m=n=20$ and a 95% confidence coefficient (Table 1). The last four columns of Table 1 are the confidence widths for Exact, Wald, Agresti-Caffo, and Score; the Exact interval has the smallest width.

7. Conclusion

Inference about the difference of two population proportions is a basic problem in statistics. The standard Wald interval has been used universally, yet its coverage is persistently chaotic, with unacceptably poor coverage probabilities when the sample sizes are small or when one proportion is very large and the other very small. Several other intervals have been suggested, but their performance is not satisfactory when the sample size is small. We have derived an exact distribution that requires no sample-size conditions, and we have shown that it yields the smallest confidence width among Wald, Agresti-Caffo, and Score, so it is suitable for inference about the difference between population proportions regardless of sample size.

Appendix

Proof of Lemma

If we define ${Z}_{j}=\left(1-{Y}_{j}\right)$, then W can be written as $W=n\underset{i=1}{\overset{m}{\sum }}\text{ }\text{ }{X}_{i}+m\underset{j=1}{\overset{n}{\sum }}\text{ }\text{ }{Z}_{j}$. Since the two samples are independent of each other and the observations in each sample are independent and identically distributed, the pgf of W can be written as ${p}_{w}\left(z\right)=\underset{i=1}{\overset{m}{\prod }}E\left({z}^{n{X}_{i}}\right)\underset{j=1}{\overset{n}{\prod }}E\left({z}^{m{Z}_{j}}\right)$.

Since ${X}_{i}\stackrel{iid}{~}Ber\left({p}_{1}\right)$ for $i=1,\cdots ,m$, then $E\left({z}^{n{X}_{i}}\right)=1-{p}_{1}\left(1-{z}^{n}\right)$ and

$\begin{array}{c}E\left(\underset{i=1}{\overset{m}{\prod }}\text{ }\text{ }{z}^{n{X}_{i}}\right)={\left(1-{p}_{1}\left(1-{z}^{n}\right)\right)}^{m}=\underset{s=0}{\overset{m}{\sum }}{\left(-1\right)}^{s}\left(\begin{array}{c}m\\ s\end{array}\right){p}_{1}^{s}{\left(1-{z}^{n}\right)}^{s}\\ =\underset{s=0}{\overset{m}{\sum }}\underset{u=0}{\overset{s}{\sum }}{\left(-1\right)}^{s+u}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right){p}_{1}^{s}{z}^{un}.\end{array}$ (2)

Similarly, since ${Y}_{j}\stackrel{iid}{~}Ber\left({p}_{2}\right)$ for $j=1,\cdots ,n$, then

$E\left(\underset{j=1}{\overset{n}{\prod }}\text{ }\text{ }{z}^{m{Z}_{j}}\right)=\underset{t=0}{\overset{n}{\sum }}\underset{v=0}{\overset{t}{\sum }}{\left(-1\right)}^{t+v}\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ v\end{array}\right){\left(1-{p}_{2}\right)}^{t}{z}^{vm}.$ (3)

Multiplying the right-hand sides of (2) and (3) yields (1).

Proof of Theorem

Notice that, even though the supports of D and W are different, the corresponding probabilities are equal: $Pr\left(W=kn+\left(n-l\right)m\right)=Pr\left(D=\frac{k}{m}-\frac{l}{n}\right)$ for $k=0,\cdots ,m$ and $l=0,\cdots ,n$. The pmf of W can be obtained from the pgf as follows:

$Pr\left(W=kn+\left(n-l\right)m\right)=\frac{1}{\left(kn+\left(n-l\right)m\right)!}{\frac{{\text{d}}^{kn+\left(n-l\right)m}}{\text{d}{z}^{kn+\left(n-l\right)m}}{p}_{w}\left(z\right)|}_{z=0}.$

Therefore,

$\begin{array}{l}Pr\left(W=kn+\left(n-l\right)m\right)\\ =\underset{s=0}{\overset{m}{\sum }}\underset{u=0}{\overset{s}{\sum }}\underset{t=0}{\overset{n}{\sum }}\underset{v=0}{\overset{t}{\sum }}{\left(-1\right)}^{s+t+u+v}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ v\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}{\delta }_{kn+\left(n-l\right)m}\left(un+vm\right),\end{array}$ (4)

where ${\delta }_{a}\left(x\right)=1$ if $x=a$ and 0 otherwise.

To simplify formula 4, we use the fact that ${\delta }_{kn+\left(n-l\right)m}\left(un+vm\right)=1$ is equivalent to $kn+\left(n-l\right)m=un+vm$ which, in turn, is equivalent to $\left(u-k\right){n}^{\prime }=\left(n-l-v\right){m}^{\prime }$. From this last equality, we conclude that $u-k=i{m}^{\prime }$ and $n-l-v=i{n}^{\prime }$ for some $i\in ℤ$, because ${m}^{\prime }$ and ${n}^{\prime }$ are relatively prime to each other. The values of i are hence obtained by solving the following system:

$\left\{\begin{array}{l}u-k=i{m}^{\prime }\\ \left(n-l\right)-v=i{n}^{\prime }\\ 0\le u\le s\\ 0\le v\le t\\ i\in ℤ\end{array}$

This leads to the simplified system $\left\{\begin{array}{l}-\frac{k}{{m}^{\prime }}\le i\le \frac{s-k}{{m}^{\prime }}\\ \frac{\left(n-l\right)-t}{{n}^{\prime }}\le i\le \frac{\left(n-l\right)}{{n}^{\prime }}\\ i\in ℤ\end{array}$, whose solutions form the set ${S}_{m,n}\left(s,t\right)=\left[\mathrm{max}\left(-\frac{k}{{m}^{\prime }},\frac{\left(n-l\right)-t}{{n}^{\prime }}\right),\mathrm{min}\left(\frac{s-k}{{m}^{\prime }},\frac{\left(n-l\right)}{{n}^{\prime }}\right)\right]\cap ℤ$.

Proof of Corollary 1

Since m and n are relatively prime to each other, ${m}^{\prime }=m$, ${n}^{\prime }=n$, and the index set becomes:

${S}_{m,n}\left(s,t\right)=\left[\mathrm{max}\left(-\frac{k}{m},\frac{\left(n-l\right)-t}{n}\right),\mathrm{min}\left(\frac{s-k}{m},\frac{\left(n-l\right)}{n}\right)\right]\cap ℤ.$

When $\frac{k}{m}-\frac{l}{n}\ne 0$, we have $\left(k,l\right)\notin \left\{\left(0,0\right),\left(m,n\right)\right\}$, hence $-1<\mathrm{max}\left(-\frac{k}{m},\frac{\left(n-l\right)-t}{n}\right)<1$ and $-1<\mathrm{min}\left(\frac{s-k}{m},\frac{\left(n-l\right)}{n}\right)<1$. Therefore ${S}_{m,n}\left(s,t\right)=\left\{0\right\}$. Now, from the Theorem, we get:

$\begin{array}{l}Pr\left(D=\frac{k}{m}-\frac{l}{n}\right)=\underset{s=k}{\overset{m}{\sum }}\underset{t=n-l}{\overset{n}{\sum }}{\left(-1\right)}^{s+t+k+n-l}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ k\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ n-l\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}\\ ={\left(-1\right)}^{k+n-l}\left[\underset{{s}^{\prime }=0}{\overset{m-k}{\sum }}{\left(-1\right)}^{{s}^{\prime }+k}\left(\begin{array}{c}m\\ {s}^{\prime }+k\end{array}\right)\left(\begin{array}{c}{s}^{\prime }+k\\ k\end{array}\right){p}_{1}^{{s}^{\prime }+k}\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\cdot \left[\underset{{t}^{\prime }=0}{\overset{l}{\sum }}{\left(-1\right)}^{{t}^{\prime }+n-l}\left(\begin{array}{c}n\\ {t}^{\prime }+n-l\end{array}\right)\left(\begin{array}{c}{t}^{\prime }+n-l\\ n-l\end{array}\right){\left(1-{p}_{2}\right)}^{{t}^{\prime }+n-l}\right]\\ ={p}_{1}^{k}{\left(1-{p}_{2}\right)}^{n-l}\left[\underset{{s}^{\prime }=0}{\overset{m-k}{\sum }}{\left(-1\right)}^{{s}^{\prime }}\left(\begin{array}{c}m\\ k\end{array}\right)\left(\begin{array}{c}m-k\\ {s}^{\prime }\end{array}\right){p}_{1}^{{s}^{\prime }}\right]\left[\underset{{t}^{\prime }=0}{\overset{l}{\sum }}{\left(-1\right)}^{{t}^{\prime }}\left(\begin{array}{c}n\\ l\end{array}\right)\left(\begin{array}{c}l\\ {t}^{\prime }\end{array}\right){\left(1-{p}_{2}\right)}^{{t}^{\prime }}\right]\\ =\left(\begin{array}{c}m\\ k\end{array}\right)\left(\begin{array}{c}n\\ l\end{array}\right){p}_{1}^{k}{p}_{2}^{l}{\left(1-{p}_{1}\right)}^{m-k}{\left(1-{p}_{2}\right)}^{n-l}\end{array}$

When $\frac{k}{m}-\frac{l}{n}=0$, we have $\left(k,l\right)\in \left\{\left(0,0\right),\left(m,n\right)\right\}$ and hence:

$\begin{array}{c}{S}_{m,n}\left(s,t\right)=\left(\left[\mathrm{max}\left(0,\frac{n-t}{n}\right),\mathrm{min}\left(\frac{s}{m},1\right)\right]\cap ℤ\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\cup \left(\left[\mathrm{max}\left(-1,\frac{-t}{n}\right),\mathrm{min}\left(\frac{s-m}{m},0\right)\right]\cap ℤ\right)\\ =\left(\left[\frac{n-t}{n},\frac{s}{m}\right]\cap ℤ\right)\cup \left(\left[-\frac{t}{n},\frac{s-m}{m}\right]\cap ℤ\right)\end{array}$

For this case, $\frac{-k}{m}$ is either 0 or −1 and $\frac{n-l}{n}$ is either 0 or 1, so from the Theorem we get:

$\begin{array}{l}Pr\left(D=0\right)\\ =\underset{s=k}{\overset{m}{\sum }}\underset{t=n-l}{\overset{n}{\sum }}{\left(-1\right)}^{s+t+k+n-l}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ k\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ n-l\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}\end{array}$

$\begin{array}{l}=\underset{s=0}{\overset{m}{\sum }}\underset{t=n}{\overset{n}{\sum }}{\left(-1\right)}^{s+t+n}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ 0\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ n\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+\underset{s=m}{\overset{m}{\sum }}\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{s+t+m+n-n}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ m\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ n-n\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}\end{array}$

$\begin{array}{l}=\underset{s=0}{\overset{m}{\sum }}{\left(-1\right)}^{s+n+n}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}n\\ n\end{array}\right)\left(\begin{array}{c}n\\ n\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{n}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{m+t+m}\left(\begin{array}{c}m\\ m\end{array}\right)\left(\begin{array}{c}m\\ m\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right){p}_{1}^{m}{\left(1-{p}_{2}\right)}^{t}\\ =\underset{s=0}{\overset{m}{\sum }}{\left(-1\right)}^{s}\left(\begin{array}{c}m\\ s\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{n}+\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{t}\left(\begin{array}{c}n\\ t\end{array}\right){p}_{1}^{m}{\left(1-{p}_{2}\right)}^{t}\\ ={\left(1-{p}_{1}\right)}^{m}{\left(1-{p}_{2}\right)}^{n}+{p}_{1}^{m}{\left(1-\left(1-{p}_{2}\right)\right)}^{n}\\ ={\left(1-{p}_{1}\right)}^{m}{\left(1-{p}_{2}\right)}^{n}+{p}_{1}^{m}{p}_{2}^{n}\end{array}$

Proof of Corollary 2

For $m=n$ and $k=l$, the theorem reduces to,

$Pr\left(D=0\right)={\left(-1\right)}^{n}\underset{s=0}{\overset{n}{\sum }}{\left(-1\right)}^{s}\left(\begin{array}{c}n\\ s\end{array}\right){p}_{1}^{s}\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{t}\left(\begin{array}{c}n\\ t\end{array}\right){\left(1-{p}_{2}\right)}^{t}\underset{i\in {S}_{n,n}\left(s,t\right)}{\sum }\left(\begin{array}{c}s\\ k+in\end{array}\right)\left(\begin{array}{c}t\\ n-k-in\end{array}\right),$

where

${S}_{n,n}\left(s,t\right)=\left[\mathrm{max}\left(-\frac{k}{n},\frac{n-k-t}{n}\right),\mathrm{min}\left(\frac{s-k}{n},\frac{n-k}{n}\right)\right]\cap ℤ.$

Multiplying through by n gives $in\in \left[\mathrm{max}\left(-k,n-k-t\right),\mathrm{min}\left(s-k,n-k\right)\right]$, and hence $k+in\in \left[\mathrm{max}\left(0,n-t\right),\mathrm{min}\left(s,n\right)\right].$

Now we replace $k+in$ by u and obtain the following result:

$\begin{array}{l}Pr\left(D=0\right)\\ ={\left(-1\right)}^{n}\underset{s=0}{\overset{n}{\sum }}{\left(-1\right)}^{s}\left(\begin{array}{c}n\\ s\end{array}\right){p}_{1}^{s}\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{t}\left(\begin{array}{c}n\\ t\end{array}\right){\left(1-{p}_{2}\right)}^{t}\underset{u=\mathrm{max}\left(0,n-t\right)}{\overset{\mathrm{min}\left(s,n\right)}{\sum }}\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}t\\ n-u\end{array}\right)\\ ={\left(-1\right)}^{n}\underset{s=0}{\overset{n}{\sum }}{\left(-1\right)}^{s}\left(\begin{array}{c}n\\ s\end{array}\right){p}_{1}^{s}\underset{t=0}{\overset{n}{\sum }}{\left(-1\right)}^{t}\left(\begin{array}{c}n\\ t\end{array}\right){\left(1-{p}_{2}\right)}^{t}\underset{u=n-t}{\overset{s}{\sum }}\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}t\\ n-u\end{array}\right)\end{array}$.

Proof of Corollary 3

Using the Lemma, the exact distribution of D is given by:

$Pr\left(D=\frac{k}{m}-\frac{l}{n}\right)=\underset{s=0}{\overset{m}{\sum }}\underset{u=0}{\overset{s}{\sum }}\underset{t=0}{\overset{n}{\sum }}\underset{v=0}{\overset{t}{\sum }}{\left(-1\right)}^{s+t+u+v}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ v\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}{\delta }_{kn+\left(n-l\right)m}\left(un+vm\right)$

where ${\delta }_{a}\left(x\right)=1$ if $x=a$ and 0 otherwise. Let us define a set ${H}_{s,t}$ as follows:

$\begin{array}{c}{H}_{s,t}=\left\{\left(u,v\right)\in {ℕ}^{2}:0\le u\le s,0\le v\le t,un+vm=kn+\left(n-l\right)m\right\}\\ =\left\{\left(u,v\right)\in {ℕ}^{2}:0\le u\le s,0\le v\le t,v=\left(k-u\right)\frac{n}{m}+n-l\right\}\\ =\left\{\left(u,\left(k-u\right)\frac{n}{m}+n-l\right)\in {ℕ}^{2}:0\le u\le s,0\le \left(k-u\right)\frac{n}{m}+n-l\le t\right\}\\ =\left\{\left(u,\left(k-u\right)\frac{n}{m}+n-l\right)\in {ℕ}^{2}:0\le u\le s,-t\le \left(u-k\right)\frac{n}{m}+l-n\le 0\right\}\end{array}$

$\begin{array}{l}=\left\{\left(u,\left(k-u\right)\frac{n}{m}+n-l\right)\in {ℕ}^{2}:0\le u\le s,n-l-t\le \left(u-k\right)\frac{n}{m}\le n-l\right\}\\ =\left\{\left(u,\left(k-u\right)\frac{n}{m}+n-l\right)\in {ℕ}^{2}:0\le u\le s,k+\left(n-l-t\right)\frac{m}{n}\le u\le k+\left(n-l\right)\frac{m}{n}\right\}\\ =\left\{\left(u,\left(k-u\right)\frac{n}{m}+n-l\right)\in {ℕ}^{2}:\mathrm{max}\left(0,k+\left(n-l-t\right)\frac{m}{n}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\le u\le \mathrm{min}\left(s,k+\left(n-l\right)\frac{m}{n}\right)\right\}\\ ={S}_{s,t}\end{array}$

Thus, $\begin{array}{l}Pr\left(D=\frac{k}{m}-\frac{l}{n}\right)\\ =\underset{s=0}{\overset{m}{\sum }}\underset{t=0}{\overset{n}{\sum }}\underset{u\in {S}_{s,t}}{\sum }{\left(-1\right)}^{s+t+u+\left(k-u\right)\frac{n}{m}+n-l}\left(\begin{array}{c}m\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ \left(k-u\right)\frac{n}{m}+n-l\end{array}\right){p}_{1}^{s}{\left(1-{p}_{2}\right)}^{t}\end{array}$

Proof of Corollary 4

Using Corollary (3), the exact distribution of D for $m=n$ and ${p}_{1}={p}_{2}$ is given by

$\begin{array}{l}Pr\left(D=\frac{k}{n}-\frac{l}{n}\right)\\ =\underset{s=0}{\overset{n}{\sum }}\underset{t=0}{\overset{n}{\sum }}\underset{u\in {S}_{s,t}}{\sum }{\left(-1\right)}^{s+t+u+\left(k-u\right)\frac{n}{n}+n-l}\left(\begin{array}{c}n\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ \left(k-u\right)\frac{n}{n}+n-l\end{array}\right){p}_{1}^{s}{\left(1-{p}_{1}\right)}^{t}\\ =\underset{s=0}{\overset{n}{\sum }}\underset{t=0}{\overset{n}{\sum }}\underset{u\in {S}_{s,t}}{\sum }{\left(-1\right)}^{s+t+u+k-u+n-l}\left(\begin{array}{c}n\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ k-u+n-l\end{array}\right){p}_{1}^{s}{\left(1-{p}_{1}\right)}^{t}\\ =\underset{s=0}{\overset{n}{\sum }}\underset{t=0}{\overset{n}{\sum }}\underset{u\in {S}_{s,t}}{\sum }{\left(-1\right)}^{s+t+n+k-l}\left(\begin{array}{c}n\\ s\end{array}\right)\left(\begin{array}{c}s\\ u\end{array}\right)\left(\begin{array}{c}n\\ t\end{array}\right)\left(\begin{array}{c}t\\ k-l+n-u\end{array}\right){p}_{1}^{s}{\left(1-{p}_{1}\right)}^{t}\end{array}$

where,

${S}_{s,t}=\left\{\left(u,k-l+n-u\right)\in {ℕ}^{2}:\mathrm{max}\left(0,k-l+n-t\right)\le u\le \mathrm{min}\left(s,k-l+n\right)\right\}$

Since both k and l run from 0 to n, interchanging the two samples leaves the joint distribution unchanged when $m=n$ and ${p}_{1}={p}_{2}$, while sending D to −D. Hence $Pr\left(D=\frac{k}{n}-\frac{l}{n}\right)=Pr\left(D=\frac{l}{n}-\frac{k}{n}\right)$, so D is symmetric about zero.

Cite this paper: Dahal, K. and Amezziane, M. (2020) Exact Distribution of Difference of Two Sample Proportions and Its Inferences. Open Journal of Statistics, 10, 363-374. doi: 10.4236/ojs.2020.103024.
References

   Agresti, A. and Caffo, B. (2000) Simple and Effective Confidence Intervals for Proportions and Differences of Proportions Result from Adding Two Successes and Two Failures. The American Statistician, 54, 280-288.
https://doi.org/10.2307/2685779

   Newcombe, R.G. (1998) Interval Estimation for the Difference between Independent Proportions: Comparison of Eleven Methods. Statistics in Medicine, 17, 873-890.
https://doi.org/10.1002/(SICI)1097-0258(19980430)17:8<873::AID-SIM779>3.0.CO;2-I
