The Behaviour of the Dispersion Matrix of the Information Matrix Test under the Wrong Logistic Regression Model



1. Introduction

The IMT is a test for general misspecification, introduced by [1], who pointed out that the properties of the maximum likelihood estimator and the information matrix can be exploited to yield a family of useful tests for model misspecification. The idea of the IMT is to compare two different estimators of the information matrix in order to assess model fit. The test is based on the information matrix equality, which holds when the model specification is correct; this equality implies the asymptotic equivalence of the Hessian and score forms of Fisher's information matrix [2]. As [1] points out, the IMT is designed to detect the failure of this equality, and such a failure implies model misspecification. [3] discussed the information matrix test and showed that it is useful for binary data models. Several researchers, [4] [5] and [6], examined the behaviour of the asymptotic distribution of the IMT statistic and its dispersion matrix. The idea of the information matrix test is to compare $E\left(\frac{-{\partial}^{2}\mathcal{l}}{\partial \theta \partial {\theta}^{\text{T}}}\right)$ and $E\left(\frac{\partial \mathcal{l}}{\partial \theta}\frac{\partial \mathcal{l}}{\partial {\theta}^{\text{T}}}\right)$, as these differ when the model is mis-specified but not when the model is correct.

[7] pointed out that the covariance matrix of the IMT of [1] can be estimated without computing analytic third derivatives of the density function. [4] showed that the IMT is sensitive to non-normality, and proposed a simple computational procedure which employs the Outer Product of the Gradient (OPG) covariance matrix estimator of the IMT statistic. However, [5] argue that such a procedure may give unreliable inferences, owing to the stochastic nature of the covariance matrix estimator, which uses high sample moments to estimate high population moments. [6] proposed a simple calculation procedure for the test statistic, for general binary data models, which employs the ML covariance matrix estimator instead of the OPG estimator. Moreover, [8] computed and examined the IMT and found it had good power for the logistic model.

Basic Idea of the IMT

Let us consider the density function $f\left({x}_{i}\mathrm{,}\theta \right)$ for an individual observation, where the data are independent and identically distributed, so that we have

$\int f\left(x|\theta \right)\text{d}x=1$

and we consider $\mathcal{l}\left(\theta \right)=\mathrm{log}f\left(x\mathrm{,}\theta \right)$ to be the logarithm of a density function of x dependent upon p parameters $\theta $, so the log-likelihood function in this case is

${\mathcal{l}}_{n}\left(\theta \right)={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\mathrm{log}f\left({x}_{i},\theta \right)$

Now, recalling that the idea of the IMT is to compare two different matrices of expected first and second partial derivatives of ${\mathcal{l}}_{n}\left(\theta \right)$, differentiating the identity $\int f\left(x|\theta \right)\text{d}x=1$ gives

$0={\displaystyle \int}\frac{\partial f\left(x|\theta \right)}{\partial \theta}\text{d}x={\displaystyle \int}\frac{\partial \mathrm{log}f\left(x|\theta \right)}{\partial \theta}f\left(x|\theta \right)\text{d}x=E\left(\frac{\partial \mathrm{log}f\left(x|\theta \right)}{\partial \theta}\right)$ (1)

So, according to the ML method, we have

$E\left(\frac{\partial \mathcal{l}}{\partial \theta}\right)=0.$

Differentiating (1) again we get

$0={\displaystyle \int}\frac{{\partial}^{2}\mathrm{log}f\left(x|\theta \right)}{\partial \theta \partial {\theta}^{\text{T}}}f\left(x|\theta \right)\text{d}x+{\displaystyle \int}\frac{\partial \mathrm{log}f\left(x|\theta \right)}{\partial \theta}\frac{\partial \mathrm{log}f\left(x|\theta \right)}{\partial {\theta}^{\text{T}}}f\left(x|\theta \right)\text{d}x$ (2)

So

$E\left(\frac{{\partial}^{2}\mathcal{l}}{\partial \theta \partial {\theta}^{\text{T}}}\right)+E\left(\frac{\partial \mathcal{l}}{\partial \theta}\frac{\partial \mathcal{l}}{\partial {\theta}^{\text{T}}}\right)=0.$ (3)

When the model is mis-specified, the above quantity will not necessarily equal zero.
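To make the equality in (3) concrete, the following small sketch (our illustration, not part of the paper's procedure, with arbitrary parameter values) checks by Monte Carlo that the outer-product and Hessian forms of the information coincide for a correctly specified one-parameter Bernoulli model on the log-odds scale:

```python
import numpy as np

rng = np.random.default_rng(0)

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

theta = 0.7                      # true parameter (log-odds); illustrative value
p = expit(theta)
y = rng.binomial(1, p, size=200_000)

score = y - p                    # d log f / d theta for the Bernoulli model
outer = np.mean(score**2)        # score (OPG) form of the information
hessian = p * (1 - p)            # minus the expected second derivative

# Under the correct model the two forms of Fisher information agree.
print(outer, hessian)
```

Under misspecification of the success probability the two quantities would drift apart, which is exactly what the IMT exploits.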

Asymptotic Distribution of $\stackrel{^}{\theta}$

The asymptotic distribution of the estimated parameters and the behaviour of the MLE under the wrong model were discussed by [9] and investigated further by [10]. [11] discussed estimation of the parameters of a given regression model. In the limit, for each value of the parameter vector $\theta $,

${n}^{-1}{\mathcal{l}}_{n}\left(\theta \right)\to {\displaystyle \int g\left(Y\right)\mathrm{log}f\left(Y|\theta \right)\text{d}Y}=E\left(\mathrm{log}f\left(Y|\theta \right)\right)$

where $g\left(Y\right)$ denotes the true model and $f\left(Y|\theta \right)$ is the fitted model. Consider also the Kullback-Leibler (KL) divergence from the true model to the approximating model, conditional on X, under the wrong model. In this case $\stackrel{^}{\theta}\to {\theta}^{\mathrm{*}}$, where ${\theta}^{\mathrm{*}}$ is the least false (LF) value. The least false value ${\theta}^{\mathrm{*}}$ minimizes the KL divergence, since its derivative vanishes there:

$E\left(\frac{\partial \mathrm{log}f\left(Y\mathrm{,}\theta \right)}{\partial \theta}\right)\mathrm{=}{\displaystyle \int g\left(Y\right)\frac{\partial \mathrm{log}f\left(Y\mathrm{,}\theta \right)}{\partial \theta}\text{d}Y}=0.$

If we now define

$J=-E\left(\frac{{\partial}^{2}\mathcal{l}}{\partial \theta \partial {\theta}^{\text{T}}}\right)$

and

$K=\mathrm{var}\left(\frac{\partial \mathrm{log}f\left(Y,\theta \right)}{\partial \theta}\right)=E\left(\frac{\partial \mathcal{l}}{\partial \theta}\frac{\partial \mathcal{l}}{\partial {\theta}^{\text{T}}}\right)$

these matrices are identical when the model is correctly specified, i.e. when $g\left(Y\right)=f\left(Y|\theta \right)$ for all Y. As explained in [11], the distribution of $\stackrel{^}{\theta}$ follows from the central limit theorem: there is convergence in distribution

$\sqrt{n}{\bar{U}}_{n}\to {U}^{\prime}\sim {N}_{p}\left(0,K\right)$

where ${\bar{U}}_{n}={n}^{-1}{\displaystyle {\sum}_{i=1}^{n}}u\left({Y}_{i},{\theta}^{*}\right)$, which leads to

$\sqrt{n}\left(\stackrel{^}{\theta}-{\theta}^{\mathrm{*}}\right)\to {J}^{-1}{U}^{\prime}\sim {N}_{p}\left(0,{J}^{-1}K{J}^{-1}\right).$

So the asymptotic distribution of the MLE under the null hypothesis H_{0} is

$\sqrt{n}\left(\stackrel{^}{\theta}-{\theta}_{0}\right)\sim N\left(0,{J}^{-1}\right)$

where ${\theta}_{0}$ is the true value, and the asymptotic distribution of $\stackrel{^}{\theta}$ under the alternative hypothesis H_{1} is

$\sqrt{n}\left(\stackrel{^}{\theta}-{\theta}^{*}\right)\sim N\left(0,{J}^{-1}K{J}^{-1}\right)$

Thus $J=K$ if and only if the correct model is fitted (i.e. under H_{0}).
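The sandwich form $J^{-1}KJ^{-1}$ above can be estimated directly from data. As a hedged sketch (our own code, with illustrative parameter values rather than those used later in the paper), the following fits a deliberately misspecified one-covariate logistic model by Newton-Raphson and forms the Hessian (J) and outer-product (K) estimates at the MLE:

```python
import numpy as np

rng = np.random.default_rng(1)
expit = lambda t: 1.0 / (1.0 + np.exp(-t))

# True model uses x1 and x2; the fitted model omits x2 (misspecified).
n = 5000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = rng.binomial(1, expit(-0.5 + 1.0 * x1 + 1.0 * x2))

X = np.column_stack([np.ones(n), x1])     # design for the fitted model

# Newton-Raphson for the logistic MLE of the (wrong) working model.
theta = np.zeros(2)
for _ in range(25):
    pi = expit(X @ theta)
    grad = X.T @ (y - pi)
    hess = X.T @ (X * (pi * (1 - pi))[:, None])
    theta += np.linalg.solve(hess, grad)

pi = expit(X @ theta)
U = X * (y - pi)[:, None]                          # per-observation scores
J = X.T @ (X * (pi * (1 - pi))[:, None]) / n       # Hessian form
K = U.T @ U / n                                    # outer-product form

# Estimates the asymptotic variance of sqrt(n)(theta_hat - theta*).
sandwich = np.linalg.inv(J) @ K @ np.linalg.inv(J)
print(np.diag(sandwich), np.diag(np.linalg.inv(J)))
```

When the fitted model is correct, the two printed diagonals agree asymptotically; under misspecification only the sandwich form is valid.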

2. The IMT under Missing Covariates for Logistic Regression Model

In this part, we apply the IMT statistic under missing covariates for a logistic regression model. Let ${X}_{i}$ be a p-dimensional vector of covariates drawn from a normal distribution and let ${Y}_{i}$ be binary with

$P\left({Y}_{i}=1|{X}_{i}\right)=\text{expit}\left(\alpha +{\beta}^{\text{T}}{X}_{i}\right)\mathrm{.}$ (4)

In the following we treat the simple case where the fitted model is

$P\left({Y}_{i}=1|{X}_{i}\right)=\text{expit}\left(\alpha +{\beta}_{1}{X}_{1i}\right)$ (5)

for a scalar ${X}_{1}$, while the true model has

$P\left({Y}_{i}=1|{X}_{i}\right)=\text{expit}\left(\alpha +{\beta}_{1}{X}_{1i}+{\beta}_{2}{X}_{2i}\right)\mathrm{,}$ (6)

where ${X}_{2}$ is also a scalar. The log-likelihood contribution of the i^{th} observation $\left({Y}_{i}\mathrm{,}{X}_{i}\right)$ is

$\mathcal{l}\left({Y}_{i}\mathrm{,}{X}_{i}\right)={Y}_{i}\left(\alpha +{\beta}^{\text{T}}{X}_{i}\right)-\mathrm{log}\left(1+\mathrm{exp}\left(\alpha +{\beta}^{\text{T}}{X}_{i}\right)\right)$ (7)

and so,

$\frac{\partial {\mathcal{l}}_{i}}{\partial \alpha}={Y}_{i}-{\pi}_{i};\text{\hspace{0.17em}}\frac{\partial {\mathcal{l}}_{i}}{\partial {\beta}_{1}}=\left({Y}_{i}-{\pi}_{i}\right){X}_{1i}$

and note that we only consider fitting the model with ${X}_{1}$, even if the true model also includes ${X}_{2}$ (i.e. ${\beta}_{2}\ne 0$ ). From this we get:

$\frac{{\partial}^{2}{\mathcal{l}}_{i}}{\partial \theta \partial {\theta}^{\text{T}}}=\left[\begin{array}{cc}-{\pi}_{i}\left(1-{\pi}_{i}\right)& -{\pi}_{i}\left(1-{\pi}_{i}\right){X}_{i}\\ -{\pi}_{i}\left(1-{\pi}_{i}\right){X}_{i}& -{\pi}_{i}\left(1-{\pi}_{i}\right){X}_{i}^{2}\end{array}\right]$

Also,

$\frac{\partial {\mathcal{l}}_{i}}{\partial \theta}\frac{\partial {\mathcal{l}}_{i}}{\partial {\theta}^{T}}=\left[\begin{array}{cc}{\left({Y}_{i}-{\pi}_{i}\right)}^{2}& {\left({Y}_{i}-{\pi}_{i}\right)}^{2}{X}_{i}\\ {\left({Y}_{i}-{\pi}_{i}\right)}^{2}{X}_{i}& {\left({Y}_{i}-{\pi}_{i}\right)}^{2}{X}_{i}^{2}\end{array}\right]$

using,

${\left({Y}_{i}-{\pi}_{i}\right)}^{2}-{\pi}_{i}\left(1-{\pi}_{i}\right)=\left({Y}_{i}-{\pi}_{i}\right)\left(1-2{\pi}_{i}\right)\mathrm{,}$

since ${Y}_{i}^{2}={Y}_{i}$, and so we get

${d}_{g}\left({y}_{i}\mathrm{,}\theta \right)=\left({Y}_{i}-{\pi}_{i}\right)\left(1-2{\pi}_{i}\right)\left[\begin{array}{c}1\\ {X}_{i}\\ {X}_{i}^{2}\end{array}\right]\mathrm{.}$ (8)
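The indicator in (8), and the identity above it, can be coded directly. This is a small illustrative sketch (the function name and parameter values are ours):

```python
import numpy as np

expit = lambda t: 1.0 / (1.0 + np.exp(-t))

def d_g(y, x1, alpha, beta1):
    """Per-observation IMT indicator of Equation (8) for the fitted
    one-covariate logistic model (illustrative sketch)."""
    pi = expit(alpha + beta1 * x1)
    base = (y - pi) * (1 - 2 * pi)
    # Distinct elements of the symmetric 2x2 difference matrix: 1, x, x^2.
    return base[:, None] * np.column_stack([np.ones_like(x1), x1, x1**2])

rng = np.random.default_rng(2)
x1 = rng.normal(size=10)
y = rng.binomial(1, expit(0.2 + 0.8 * x1))
print(d_g(y, x1, 0.2, 0.8).shape)   # (10, 3)
```

The identity $(Y_i-\pi_i)^2-\pi_i(1-\pi_i)=(Y_i-\pi_i)(1-2\pi_i)$ holds exactly for binary $Y_i$, which is why only the common factor and the powers of $x$ need to be stored.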

3. An Alternative Formula for the Variance

In this part we are interested in finding a formula for the variance of the d statistic, even when the model is mis-specified. To perform the IMT we need the mean and variance of

$T=\frac{1}{\sqrt{n}}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{d}_{gi}$

Under H_{0}, $E\left({d}_{gi}\right)=0$, and so the IMT can be written as

${T}^{\text{T}}\text{var}{\left(T\right)}^{-1}T$

which will have a ${\chi}^{2}$-distribution on $\text{rank}\left(\text{var}\left(T\right)\right)$ d.f., as T is asymptotically Normal. However, the test statistic has to be evaluated at the MLE $\stackrel{^}{\theta}$, and this introduces a complication. The MLE $\stackrel{^}{\theta}$ is the solution to

$S=\frac{1}{\sqrt{n}}\nabla \mathcal{l}=\frac{1}{\sqrt{n}}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\nabla {\mathcal{l}}_{i}=\frac{1}{\sqrt{n}}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({y}_{i}-{\pi}_{i}\right)\left[\begin{array}{c}1\\ {X}_{i}\end{array}\right]=0.$

The expression for T is

$T=\frac{1}{\sqrt{n}}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({y}_{i}-{\pi}_{i}\right)\left(1-2{\pi}_{i}\right)\left[\begin{array}{c}1\\ {x}_{i}\\ {x}_{i}^{2}\end{array}\right]$

and this is clearly going to be highly correlated with S. Therefore, the appropriate variance for the IMT is $\mathrm{var}\left(T|S=0\right)$. As T and S are sums of independent elements, the Central Limit Theorem implies that ${\left(T\mathrm{,}S\right)}^{\text{T}}$ is asymptotically Normal, and so we can use

$\mathrm{var}\left(T|S=0\right)=\mathrm{var}\left(T\right)-\mathrm{cov}\left(T,S\right)\mathrm{var}{\left(S\right)}^{-1}\mathrm{cov}{\left(T,S\right)}^{\text{T}}.$ (9)

To work out $\mathrm{var}\left(T|S=0\right)$, note that in this case we can write

$\mathrm{var}\left(T\right)=\mathrm{var}\left(\left[{d}_{g1}+{d}_{g2}+\cdots +{d}_{gn}\right]/\sqrt{n}\right)=\mathrm{var}\left({d}_{g1}\right),$

and similarly

$\mathrm{var}\left(S\right)=\mathrm{var}\left(\nabla {\mathcal{l}}_{1}\right),\mathrm{cov}\left(T,S\right)=\mathrm{cov}\left({d}_{g1},\nabla {\mathcal{l}}_{1}\right).$
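Formula (9) is just the conditional variance of one component of a jointly Normal vector given another. A quick numerical illustration of this (our own sketch, with scalar T and S and arbitrarily chosen covariances) compares the Schur-complement formula with the variance of the residual of T after projecting out S:

```python
import numpy as np

rng = np.random.default_rng(3)

# Joint covariance of (T, S); values chosen arbitrarily for illustration.
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], Sigma, size=500_000)
T, S = z[:, 0], z[:, 1]

# Formula (9): var(T|S=0) = var(T) - cov(T,S) var(S)^{-1} cov(T,S)^T
cond_var = Sigma[0, 0] - Sigma[0, 1] ** 2 / Sigma[1, 1]   # = 1.36 here

# For jointly Normal variables this equals the variance of the residual
# of T after regressing out S.
resid = T - (Sigma[0, 1] / Sigma[1, 1]) * S
print(cond_var, resid.var())
```

The two printed numbers agree up to Monte Carlo error, which is the justification for using (9) once $(T,S)^{\text{T}}$ is asymptotically Normal.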

3.1. The Variance of IMT under Missing Covariates for Logistic Regression Model

We now need to find expressions for $\text{var}\left({d}_{g1}\right)$, $\text{var}\left(\nabla {\mathcal{l}}_{1}\right)$ and $\text{cov}\left({d}_{g1}\mathrm{,}\nabla {\mathcal{l}}_{1}\right)$.

We already have that

${d}_{g}=\left({y}_{i}-{\pi}_{i}\right)\left(1-2{\pi}_{i}\right)\left[\begin{array}{c}1\\ {x}_{i}\\ {x}_{i}^{2}\end{array}\right]$

and

$\nabla {\mathcal{l}}_{i}=\left({y}_{i}-{\pi}_{i}\right)\left[\begin{array}{c}1\\ {x}_{i}\end{array}\right]$

so, the variance is

$\text{var}\left({d}_{g}\right)=E\left({d}_{g}{d}_{g}^{\text{T}}\right)-E\left({d}_{g}\right)E\left({d}_{g}^{\text{T}}\right)$ (10)

and we have

${d}_{g}{d}_{g}^{\text{T}}={\left(y-\pi \right)}^{2}{\left(1-2\pi \right)}^{2}\left[\begin{array}{ccc}1& {x}_{i}& {x}_{i}^{2}\\ {x}_{i}& {x}_{i}^{2}& {x}_{i}^{3}\\ {x}_{i}^{2}& {x}_{i}^{3}& {x}_{i}^{4}\end{array}\right]$ (11)

taking the expectation ${E}_{Y|X}$ followed by ${E}_{X}$, we obtain

$E\left({d}_{g1}\right)={E}_{X}\left[\left({\pi}_{t}-\pi \right)\left(1-2\pi \right)\left[\begin{array}{c}1\\ {x}_{i}\\ {x}_{i}^{2}\end{array}\right]\right]$ (12)

and,

$E\left({d}_{g1}{d}_{g1}^{\text{T}}\right)={E}_{X}\left[\left({\pi}_{t}\left(1-2\pi \right)+{\pi}^{2}\right){\left(1-2\pi \right)}^{2}\left[\begin{array}{ccc}1& X& {X}^{2}\\ X& {X}^{2}& {X}^{3}\\ {X}^{2}& {X}^{3}& {X}^{4}\end{array}\right]\right]\mathrm{.}$ (13)

Now we need to compute $\text{cov}\left({d}_{g}\mathrm{,}\nabla \mathcal{l}\right)$. In fact $E\left(\nabla \mathcal{l}\right)=0$, not only when the model is correct but also when evaluated at the least false value ${\theta}^{\mathrm{*}}$ (under the wrong model), so in this case

$\text{cov}\left({d}_{g1}\mathrm{,}\nabla {\mathcal{l}}_{1}\right)=E\left({d}_{g1}\nabla {\mathcal{l}}_{1}^{\text{T}}\right)\mathrm{.}$

and we have

$\begin{array}{c}{d}_{{g}_{1}}\nabla {\mathcal{l}}_{1}^{\text{T}}=\left(y-\pi \right)\left(1-2\pi \right)\left[\begin{array}{c}1\\ {x}_{i}\\ {x}_{i}^{2}\end{array}\right]\left(y-\pi \right)\left[\begin{array}{cc}1& {x}_{i}\end{array}\right]\\ ={\left(y-\pi \right)}^{2}\left(1-2\pi \right)\left[\begin{array}{cc}1& {x}_{i}\\ {x}_{i}& {x}_{i}^{2}\\ {x}_{i}^{2}& {x}_{i}^{3}\end{array}\right]\end{array}$

then,

$E\left({d}_{{g}_{1}}\nabla {\mathcal{l}}_{1}^{\text{T}}\right)={E}_{X}\left[\left({\pi}_{t}\left(1-2\pi \right)+{\pi}^{2}\right)\left(1-2\pi \right)\left[\begin{array}{cc}1& X\\ X& {X}^{2}\\ {X}^{2}& {X}^{3}\end{array}\right]\right].$ (14)

Now we work out $\text{var}\left(\nabla \mathcal{l}\right)$; as before, since $E\left(\nabla \mathcal{l}\right)=0$, we have

$\text{var}\left(\nabla {\mathcal{l}}_{1}\right)=E\left(\nabla \mathcal{l}\nabla {\mathcal{l}}^{\text{T}}\right)={E}_{X}{E}_{Y|X}\left[\begin{array}{cc}{\left(Y-\pi \right)}^{2}& {\left(Y-\pi \right)}^{2}X\\ {\left(Y-\pi \right)}^{2}X& {\left(Y-\pi \right)}^{2}{X}^{2}\end{array}\right]$

and note that

${E}_{Y|X}{\left(Y-\pi \right)}^{2}={E}_{Y|X}\left(Y\left(1-2\pi \right)+{\pi}^{2}\right)={\pi}_{t}\left(1-2\pi \right)+{\pi}^{2},$

where, ${\pi}_{t}$ is $E\left(Y\right)$ under the true model. So,

$E\left(\nabla \mathcal{l}\nabla {\mathcal{l}}^{\text{T}}\right)={E}_{X}\left[\begin{array}{cc}{\pi}_{t}\left(1-2\pi \right)+{\pi}^{2}& \left({\pi}_{t}\left(1-2\pi \right)+{\pi}^{2}\right)X\\ \left({\pi}_{t}\left(1-2\pi \right)+{\pi}^{2}\right)X& \left({\pi}_{t}\left(1-2\pi \right)+{\pi}^{2}\right){X}^{2}\end{array}\right].$ (15)

Hence, the required variance (9) is

$E\left({d}_{g}{d}_{g}^{\text{T}}\right)-E\left({d}_{g}\right)E\left({d}_{g}^{\text{T}}\right)-E\left({d}_{g}\nabla {\mathcal{l}}^{\text{T}}\right)E{\left(\nabla \mathcal{l}\nabla {\mathcal{l}}^{\text{T}}\right)}^{-1}E\left(\left(\nabla \mathcal{l}\right){d}_{g}^{\text{T}}\right)$ (16)

and we have expressions for each component from (12), (13), (14) and (15). We need to evaluate these components by simulation.

3.2. The Dispersion Matrix under Wrong Model

In fact, some elements of the covariance matrix of the IMT may be linear combinations of others, leading to singularity of the estimated covariance matrix; this point was discussed by [1] and [12]. We are interested in computing $\mathrm{var}\left(T|S=0\right)$, even when the wrong model has been fitted. We will compute each of the components of this variance separately. We see from Section 3.1 that we need to evaluate, e.g.

$E\left(d\right)={E}_{X}\left(\left({\pi}_{t}-\pi \right)\left(1-2\pi \right)\left[\begin{array}{c}1\\ X\\ {X}^{2}\end{array}\right]\right)$

and also,

$E\left(d{d}^{\text{T}}\right)={E}_{X}\left(\left[{\pi}_{t}\left(1-2\pi \right)+{\pi}^{2}\right]{\left(1-2\pi \right)}^{2}\left[\begin{array}{ccc}1& X& {X}^{2}\\ X& {X}^{2}& {X}^{3}\\ {X}^{2}& {X}^{3}& {X}^{4}\end{array}\right]\right)\mathrm{.}$

This cannot be done analytically, so we simulate 5000 values of X and replace $E\left(d\right)$ by the mean over these 5000 values. In evaluating ${\pi}_{t}$ we use the values of the parameters ${\alpha}_{t}$, ${\beta}_{1t}$ and ${\beta}_{2t}$. What do we use for $\pi $? We need to evaluate $\pi \left(\alpha \mathrm{,}{\beta}_{1}\right)$ at the least false values ${\alpha}^{\mathrm{*}}$ and ${\beta}_{1}^{\mathrm{*}}$ of $\alpha $ and ${\beta}_{1}$. So, e.g., the first element of $E\left(d\right)$ is found by simulation from

${E}_{X}\left[\left(\text{expit}\left({\alpha}_{t}+{\beta}_{t1}{X}_{1}+{\beta}_{t2}{X}_{2}\right)-\text{expit}\left({\alpha}^{\mathrm{*}}+{\beta}_{1}^{\mathrm{*}}{X}_{1}\right)\right)\left(1-2\text{expit}\left({\alpha}^{\mathrm{*}}+{\beta}_{1}^{\mathrm{*}}{X}_{1}\right)\right)\right]$

where,

${\alpha}^{\mathrm{*}}=\frac{{\alpha}_{t}+{\beta}_{t2}\left({\mu}_{2}-\rho {\mu}_{1}\right)}{\sqrt{1+{k}^{2}{\beta}_{t2}^{2}{\sigma}^{2}\left(1-{\rho}^{2}\right)}}\mathrm{,}$ (17)

${\beta}_{1}^{\mathrm{*}}=\frac{{\beta}_{t1}+\rho {\beta}_{t2}}{\sqrt{1+{k}^{2}{\beta}_{t2}^{2}{\sigma}^{2}\left(1-{\rho}^{2}\right)}}$ (18)

and X is drawn from a bivariate normal distribution with $\mu =\left({\mu}_{1}\mathrm{,}{\mu}_{2}\right)$ and ${\sigma}_{1}^{2}={\sigma}_{2}^{2}$. The formulae for the least false values ${\alpha}^{\mathrm{*}}$ and ${\beta}_{1}^{\mathrm{*}}$ were derived by [10].
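Formulae (17) and (18) translate directly into code. Note that the constant $k$ is not defined in the text above; in the sketch below we take $k=16\sqrt{3}/\left(15\pi \right)$, the logistic/probit scaling constant commonly used in such approximations, and this choice is an assumption on our part:

```python
import numpy as np

# NOTE: k is not defined in the text; 16*sqrt(3)/(15*pi) is the usual
# logistic/probit scaling constant and is an assumption here.
k = 16 * np.sqrt(3) / (15 * np.pi)

def least_false(alpha_t, b1t, b2t, mu1, mu2, sigma2, rho):
    """Least false values of Equations (17)-(18); sigma2 = sigma^2."""
    denom = np.sqrt(1 + k**2 * b2t**2 * sigma2 * (1 - rho**2))
    alpha_star = (alpha_t + b2t * (mu2 - rho * mu1)) / denom
    beta1_star = (b1t + rho * b2t) / denom
    return alpha_star, beta1_star

# With beta_t2 = 0 the fitted model is correct and the least false
# values reduce to the true parameters.
print(least_false(0.5, 1.0, 0.0, 0.0, 0.0, 2.0, 0.1))   # (0.5, 1.0)
```

The sanity check in the last line reflects the fact that under the correctly specified model there is no bias in the working parameters.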

4. Empirical Variance of IMT

The expression in (16) is the variance V of d at $\stackrel{^}{\theta}$, but we need an estimate, $\stackrel{^}{V}$. Given a sample $\left\{\left({y}_{i}\mathrm{,}{x}_{i1}\right)|i=\mathrm{1,}\cdots \mathrm{,}n\right\}$, how can we estimate V consistently? One candidate is to compute

${d}_{i}=\left({y}_{i}-{\stackrel{^}{\pi}}_{i}\right)\left(1-2{\stackrel{^}{\pi}}_{i}\right)\left[\begin{array}{c}1\\ {x}_{i}\\ {x}_{i}^{2}\end{array}\right],\text{\hspace{0.17em}}i=1,\cdots ,n$

and

$\nabla {\mathcal{l}}_{i}=\left({y}_{i}-{\stackrel{^}{\pi}}_{i}\right)\left[\begin{array}{c}1\\ {x}_{i}\end{array}\right],\text{\hspace{0.17em}}i=1,\cdots ,n$

where, ${\stackrel{^}{\pi}}_{i}$ is the fitted value from the model with just ${x}_{1}$. Now compute

${\stackrel{^}{W}}_{n}=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{d}_{i}{d}_{i}^{\text{T}}-\left(\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{d}_{i}\right)\left(\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{d}_{i}^{\text{T}}\right)$

and

${\stackrel{^}{B}}_{n}=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\left({y}_{i}-{\stackrel{^}{\pi}}_{i}\right)}^{2}\left[\begin{array}{cc}1& {x}_{i}\\ {x}_{i}& {x}_{i}^{2}\end{array}\right],$

${\stackrel{^}{C}}_{n}=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\left({y}_{i}-{\stackrel{^}{\pi}}_{i}\right)}^{2}\left(1-2{\stackrel{^}{\pi}}_{i}\right)\left[\begin{array}{cc}1& {x}_{i}\\ {x}_{i}& {x}_{i}^{2}\\ {x}_{i}^{2}& {x}_{i}^{3}\end{array}\right]$

Then use

$\stackrel{^}{V}={\stackrel{^}{W}}_{n}-{\stackrel{^}{C}}_{n}{\stackrel{^}{B}}_{n}^{-1}{\stackrel{^}{C}}_{n}^{\text{T}}$ (19)

as an estimate of V; we will assess this by simulation.
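Putting the pieces of this section together, $\stackrel{^}{V}$ of (19) can be computed from a sample as follows. This is a sketch of our own; `pihat` stands for the fitted values from the $x_1$-only logistic fit, and in the usage example below we substitute hypothetical probabilities rather than an actual MLE fit:

```python
import numpy as np

def imt_variance_hat(y, x1, pihat):
    """Empirical dispersion estimate V-hat of Equation (19) (sketch).
    pihat: fitted probabilities from the x1-only logistic model."""
    n = len(y)
    Z3 = np.column_stack([np.ones(n), x1, x1**2])   # 1, x, x^2
    Z2 = Z3[:, :2]                                  # 1, x
    r = y - pihat
    d = (r * (1 - 2 * pihat))[:, None] * Z3         # d_i, Eq. (8)
    g = r[:, None] * Z2                             # per-observation score
    dbar = d.mean(axis=0)
    W = d.T @ d / n - np.outer(dbar, dbar)          # W-hat_n
    B = g.T @ g / n                                 # B-hat_n
    C = d.T @ g / n                                 # C-hat_n (3x2)
    return W - C @ np.linalg.inv(B) @ C.T           # Eq. (19)

# Example usage with hypothetical fitted values (illustration only):
rng = np.random.default_rng(5)
x1 = rng.normal(size=1000)
pihat = 1.0 / (1.0 + np.exp(-(0.2 + 0.8 * x1)))
y = rng.binomial(1, pihat)
V = imt_variance_hat(y, x1, pihat)
print(V.shape)   # (3, 3)
```

The matrix shapes mirror the paper: $\stackrel{^}{W}_{n}$ is 3 × 3, $\stackrel{^}{B}_{n}$ is 2 × 2 and $\stackrel{^}{C}_{n}$ is 3 × 2, so $\stackrel{^}{V}$ is 3 × 3.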

5. Simulation Study

This simulation examines the correctness of the form of the dispersion matrix V in (16) and (19). We consider a logistic regression model with two covariates drawn from a bivariate normal distribution with mean zero and covariance matrix $\Sigma $; the true model is

${\pi}_{t}=\text{expit}\left({\alpha}_{t}+{\beta}_{t1}{x}_{1}+{\beta}_{t2}{x}_{2}\right)$

and the fitted model is

$\pi =\text{expit}\left(\alpha +{\beta}_{1}{x}_{1}\right)$

・ We apply the procedure in two cases of the logistic model:

・ the fitted model is the true logistic model (i.e. ${\beta}_{t2}=0$ );

・ the fitted model is mis-specified (i.e. ${\beta}_{t2}\ne 0$ ).

・ We use variances ${\sigma}_{1}^{2}={\sigma}_{2}^{2}=2$ and correlation $\rho =0.1$.

・ We choose several different sets of parameters ${\alpha}_{t}$, ${\beta}_{t1}$ and ${\beta}_{t2}$ to calculate ${\pi}_{t}$.

・ We compute the least false values ${\alpha}^{\mathrm{*}}$ and ${\beta}_{1}^{\mathrm{*}}$ from (17) and (18) to calculate $\pi $.

・ We compute the true variance by simulating ${d}_{i}$ and taking the variance $\text{var}\left(\sqrt{n}\bar{d}\right)={V}_{tr}$.

・ We compute the theoretical variance $\text{var}\left(d\right)={V}_{T}$ at the least false value, calculating $E\left({d}_{1}\right)$ and $E\left({d}_{1}{d}_{1}^{\text{T}}\right)$ as described in Section 3.2.

・ Finally, for each simulation we compute the empirical variance ${V}_{E}$ and take the mean over the simulations.

・ We compare the diagonal elements of the dispersion matrices ${V}_{E}$ and ${V}_{T}$ with ${V}_{tr}$.

・ We use sample sizes $n=500,1000$ and $N=5000$ simulations.
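The steps above can be sketched in code. The following is a compact, hedged version of the "true variance" part of the study (our own implementation, with only 200 replicates for speed rather than the paper's N = 5000, and one illustrative parameter set); ${V}_{E}$ would be obtained per replicate as in Section 4:

```python
import numpy as np

rng = np.random.default_rng(4)
expit = lambda t: 1.0 / (1.0 + np.exp(-t))

def fit_logistic(y, X, iters=25):
    """Newton-Raphson MLE for a logistic model (simple sketch)."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        pi = expit(X @ theta)
        theta += np.linalg.solve(X.T @ (X * (pi * (1 - pi))[:, None]),
                                 X.T @ (y - pi))
    return theta

def one_sim(n, alpha_t, b1t, b2t, rho=0.1, var=2.0):
    """Simulate under the true model, fit the x1-only model,
    and return one draw of T = sqrt(n) * dbar."""
    cov = var * np.array([[1.0, rho], [rho, 1.0]])
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    y = rng.binomial(1, expit(alpha_t + b1t * x[:, 0] + b2t * x[:, 1]))
    X = np.column_stack([np.ones(n), x[:, 0]])
    a, b1 = fit_logistic(y, X)
    pihat = expit(a + b1 * x[:, 0])
    Z3 = np.column_stack([np.ones(n), x[:, 0], x[:, 0]**2])
    d = ((y - pihat) * (1 - 2 * pihat))[:, None] * Z3
    return np.sqrt(n) * d.mean(axis=0)

# "True" variance V_tr: the dispersion of T over repeated simulations
# (200 replicates here for speed; the paper uses N = 5000).
T = np.array([one_sim(500, 0.5, 1.0, 0.5) for _ in range(200)])
V_tr = np.cov(T.T)
print(np.diag(V_tr))
```

The diagonal of `V_tr` is what the tables compare against the empirical and theoretical forms.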

6. Results and Discussion

The results are reported in tables which show the diagonal elements of the variance matrix: ${V}_{E}$ denotes the empirical variance, ${V}_{T}$ the theoretical variance and ${V}_{tr}$ the true variance. The true parameters appear as ${\alpha}_{t}$, ${\beta}_{t1}$ and ${\beta}_{t2}$; $R{n}_{E}$ and $R{n}_{T}$ denote the ranks of the empirical and theoretical covariance matrices respectively. The ratios ${R}_{E}$ and ${R}_{T}$ are $\sqrt{\frac{{V}_{E}}{{V}_{tr}}}$ and $\sqrt{\frac{{V}_{T}}{{V}_{tr}}}$ respectively. $S\mathrm{.}D\left({\pi}_{t}\right)$ denotes the standard deviation of ${\pi}_{t}$ over a sample, where ${\pi}_{t}$ is the true model probability. In our simulation we consider two covariates, so the dispersion matrix of d is a 3 × 3 matrix.

Firstly, we consider the results under the true logistic model. Table 1 shows the simulation results for the diagonal elements of the matrix V, comparing the empirical and theoretical forms with the true variance, using $\rho =0.1$, ${\sigma}_{1}^{2}={\sigma}_{2}^{2}=2$ and sample size $n=500$. Table 2 reports the results for sample size $n=1000$ with the same variances. We can see clearly that all diagonal elements are small in value for both sample sizes, with the first element much closer to zero than the rest. In most cases the ratios are reasonable, meaning that the theoretical and empirical variances are close to the true value. There are some slightly strange ratios, mostly for sample size $n=500$; these may be caused by small values of $S\mathrm{.}D\left({\pi}_{t}\right)$, and otherwise the ratio is close to one. For sample size $n=1000$ the results show much the same pattern, with the ratio close to one, meaning that the formula for the variance works well. In a few cases small values of $S\mathrm{.}D\left({\pi}_{t}\right)$ affected the ratio, with the first two elements more sensitive. Overall, the results are good enough to say that the alternative formula for the variance works well, while the first two elements remain more sensitive, tending to zero.

Secondly, we consider the results when the missing covariate logistic model has been fitted, that is, when the variance of the IMT is computed under H_{1} using the least false values. Table 3 shows the results for sample size $n=500$ and Table 4 for sample size $n=1000$. In general, the behaviour of the ratio

Table 1. Simulation results of the variance ( ${V}_{tr}$ ), ( ${V}_{E}$ ) and ( ${V}_{T}$ ) in case of fitted true model, with sample size $n=500$ and ${\sigma}_{1}^{2}={\sigma}_{2}^{2}=2$.

Table 2. Simulation results of the variance ( ${V}_{tr}$ ), ( ${V}_{E}$ ) and ( ${V}_{T}$ ) in case of fitted true model, with sample size $n=1000$ and ${\sigma}_{1}^{2}={\sigma}_{2}^{2}=2$.

Table 3. Simulation results of the variance ( ${V}_{tr}$ ), ( ${V}_{E}$ ) and ( ${V}_{T}$ ) in case of fitted missing covariates model, with sample size $n=500$ and ${\sigma}_{1}^{2}={\sigma}_{2}^{2}=2$.

Table 4. Simulation results of the variance ( ${V}_{tr}$ ), ( ${V}_{E}$ ) and ( ${V}_{T}$ ) in case of fitted missing covariates model, with sample size $n=1000$ and ${\sigma}_{1}^{2}={\sigma}_{2}^{2}=2$.

is the same as that found in the case ${\beta}_{t2}=0$: both sample sizes give reasonable ratios close to one. A few cases show a low ratio, for the reason discussed before relating to small values of $S\mathrm{.}D\left({\pi}_{t}\right)$.

7. Conclusion

This paper investigated the behaviour of the IMT and computed its covariance matrix under the wrong logistic regression model. As a result, we can see that the alternative formula for the variance gives reasonable results under both the true model and the missing covariate model. From the final form of the variance of the IMT, we can see clearly that it depends on $E\left(d\right)$. As noted, the first two elements of $E\left(d\right)$ may be quite close to zero: under the true model, and at the least false value, $E\left({\pi}_{t}-\pi \right)=E\left(\left({\pi}_{t}-\pi \right)X\right)=0$, these being related to the score equations of the log-likelihood. These elements can therefore lead to singularity of the estimated covariance matrix, and they affect the behaviour of the dispersion matrix of the IMT.

Acknowledgements

I am very grateful to Professor J. N. S. Matthews, School of Mathematics and Statistics, Newcastle University, for his academic support, and to Dr. Hamza M. A. Boauod, (FCOPHTH) (SA), Consultant Ophthalmologist, Eye Department, Klerksdorp Hospital, South Africa, for his financial support. I also thank the referees, associate editor and joint editor for their helpful comments and additional references.

References

[1] White, H. (1982) Maximum Likelihood Estimation of Misspecified Models. Econometrica, 50, 1-25. https://doi.org/10.2307/1912526

[2] Hausman, J.A. (1978) Specification Tests in Econometrics. Econometrica, 46, 1251-1271. https://doi.org/10.2307/1913827

[3] Chesher, A. (1984) Testing for Neglected Heterogeneity. Econometrica, 52, 865-872. https://doi.org/10.2307/1911188

[4] Newey, W.K. (1985) Maximum Likelihood Specification Testing and Conditional Moment Tests. Econometrica, 53, 1047-1070. https://doi.org/10.2307/1911011

[5] Davidson, R. and MacKinnon, J.G. (1984) Convenient Specification Tests for Logit and Probit Models. Journal of Econometrics, 25, 241-262.
https://doi.org/10.1016/0304-4076(84)90001-0

[6] Orme, C. (1988) The Calculation of the Information Matrix Test for Binary Data Models. The Manchester School, 56, 370-376.
https://doi.org/10.1111/j.1467-9957.1988.tb01339.x

[7] Lancaster, T. (1984) Covariance Matrix of the Information Matrix Test. Econometrica, 52, 1051-1053. https://doi.org/10.2307/1911198

[8] Kuss, O. (2002) Global Goodness-of-Fit Tests in Logistic Regression with Sparse Data. Statistics in Medicine, 21, 3789-3801. https://doi.org/10.1002/sim.1421

[9] Matthews, J.N.S. and Badi, N.H. (2015) Inconsistent Treatment Estimates from Mis-Specified Logistic Regression Analyses of Randomized Trials. Statistics in Medicine, 34, 2681-2694. https://doi.org/10.1002/sim.6508

[10] Badi, N.H.S. (2017) Properties of the Maximum Likelihood Estimates and Bias Reduction for Logistic Regression Model. Open Access Library Journal, 4, e3625.

[11] Claeskens, G. and Hjort, N.L. (2008) Model Selection and Model Averaging. Cambridge University Press, Cambridge.

[12] Lin, D.Y. and Wei, L.J. (1991) Goodness-of-Fit Tests for the General Cox Regression Model. Statistica Sinica, 1, 1-17.