AM Vol.12 No.4, April 2021
Kolmogorov-Smirnov APF Test for Inhomogeneous Poisson Processes with Shift Parameter
Abstract: In this article, we study a Kolmogorov-Smirnov type goodness-of-fit test for the inhomogeneous Poisson process with an unknown translation (shift) parameter. The basic hypothesis and the alternative are composite and bear on the intensity measure of the inhomogeneous Poisson process, and the intensity function is regular. For this shift parameter model, we propose a test which is asymptotically partially distribution free and consistent: we show that under the null hypothesis the limit distribution of the test statistic does not depend on the unknown parameter.

1. Introduction

One of the central themes of statistical theory and practice is the construction of goodness-of-fit tests. The problem of constructing goodness-of-fit tests in the i.i.d. case is well studied in [1]. To set up a test that allows, depending on a data set, accepting or rejecting the hypothesis under consideration against a given alternative, a nonparametric study of hypothesis tests is required; a typical example is the goodness-of-fit test, and other important examples for applications are the tests of symmetry, independence and homogeneity. [2] [3] and many other authors have worked in this area, mainly in the minimax approach, which is considered in nonparametric statistics as a good framework for assessing the performance of an estimator.

In classical mathematical statistics, [4] intensively studied the chi-square, Kolmogorov-Smirnov and Cramér-von Mises tests; the Kolmogorov-Smirnov and Cramér-von Mises goodness-of-fit tests are shown to be asymptotically distribution free (i.e. the limit laws of the statistics under the null hypothesis do not depend on the underlying distribution).

[5] recently studied tests of nonparametric hypotheses for the intensity of the inhomogeneous Poisson process. Their study is an extension to Poisson processes of the work of Ingster; [4] studied nonparametric tests for Gaussian white noise models with a noise level $\varepsilon$ tending to 0. [6] presented a review of several results concerning the construction of Kolmogorov-Smirnov-type and Cramér-von Mises-type goodness-of-fit tests for continuous-time processes; as models, they considered a small-noise stochastic differential equation, an ergodic diffusion process, a Poisson process, and self-exciting point processes. [7] [8] considered the shift parameter model and the shift and scale parameter model, and showed that the Cramér-von Mises test is asymptotically distribution free and asymptotically partially distribution free, respectively, and consistent. For each model, they proposed tests which provide the asymptotic size $\alpha$ and described the form of the power function under local alternatives.

In applications, the hypotheses to be tested are often of a more complex nature. The first works on the problem of goodness-of-fit testing of composite hypotheses in classical statistics are due to [9] (see [2]), who proposed to test composite hypotheses in the case where the distribution function under the hypothesis to be tested depends on a multidimensional unknown parameter. The null hypothesis then becomes composite, i.e. it does not determine the distribution of the sample in a unique way. When the parameters are estimated, neither the Kolmogorov-Smirnov test nor the Cramér-von Mises test remains asymptotically distribution free.

It follows that the critical values change from one null hypothesis to another: different values of the parameter give different critical values, often within the same parametric family. The distribution free character is therefore crucial in applications, since then the critical values are calculated only once for any distribution defined under the hypothesis to be tested. To work around this problem, [9] suggested the split sample method. A solution to Durbin's problem, via a martingale transformation of the parametric empirical process, was proposed by [10].

The martingale approach of [10] allows one to build asymptotically distribution free hypothesis tests. It has been used by various authors, including [11] for regression models and [12] for the quantile regression process. In this article, we use an approach similar to that of [10] to construct Kolmogorov-Smirnov type asymptotically distribution free and consistent goodness-of-fit tests.

We will consider the same model as [7]. Dealing with the intensity measure of the Poisson process, we consider the model depending on an unknown translation parameter, with a composite parametric basic hypothesis, and show that the Kolmogorov-Smirnov test is asymptotically parameter free.

2. Statement of the Problem and Auxiliary Results

Suppose that we observe $n$ independent inhomogeneous Poisson processes $X^{(n)}=(X_1,\ldots,X_n)$, where $X_j=\{X_j(t),\,t\in\mathbb{R}\}$, $j=1,\ldots,n$, are trajectories of Poisson processes with the mean function $\Lambda(t)=\mathbf{E}X_j(t)=\int_{-\infty}^{t}\lambda(s)\,\mathrm{d}s$. Here $\lambda(\cdot)\geq 0$ is the corresponding intensity function.

Let us recall the construction of the GoF test of Kolmogorov-Smirnov type in the case of a simple null hypothesis. The class of tests $(\bar\Psi_n)_{n\geq 1}$ of asymptotic size $\varepsilon\in(0,1)$ is

$$\mathcal{K}_\varepsilon=\Big\{\bar\Psi_n:\ \lim_{n\to\infty}\mathbf{E}_0\bar\Psi_n=\varepsilon\Big\}.$$

Suppose that the basic hypothesis is simple, say, $H_0:\Lambda(\cdot)=\Lambda_0(\cdot)$, where $\Lambda_0(\cdot)$ is a known continuous and differentiable function satisfying $\Lambda_0(\infty)<\infty$. The alternative is composite (nonparametric): $H_1:\Lambda(\cdot)\neq\Lambda_0(\cdot)$. Then we can introduce the Kolmogorov-Smirnov (K-S) type statistic

$$\tilde\Gamma_n=\sqrt{\frac{n}{\Lambda_0(\infty)}}\,\sup_t\big|\hat\Lambda_n(t)-\Lambda_0(t)\big|,$$

where $\hat\Lambda_n(t)=\frac{1}{n}\sum_{j=1}^{n}X_j(t)$ is the empirical mean of the Poisson process. It can be verified that under $H_0$ this statistic converges to the following limit:

$$\tilde\Gamma_n\Longrightarrow\Gamma\equiv\sup_{0\leq s\leq 1}|W(s)|,$$

where $W(s)$, $0\leq s\leq 1$, is a standard Wiener process. Therefore the K-S type test $\tilde\Psi_n(X^{(n)})=\mathbb{1}\{\tilde\Gamma_n>c_\varepsilon\}$, with the threshold $c_\varepsilon$ defined by the equation $\mathbf{P}(\Gamma>c_\varepsilon)=\varepsilon$, belongs to $\mathcal{K}_\varepsilon$. This test is asymptotically distribution free (ADF) (see, e.g., [6] [13]). Recall that a test is called ADF if the limit distribution of the test statistic under the hypothesis does not depend on the mean function $\Lambda_0(\cdot)$.
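As a numerical illustration of this construction, the statistic $\tilde\Gamma_n$ and the ADF threshold $c_\varepsilon$ can both be approximated by simulation. The exponential intensity $\lambda_0(t)=e^{-t}\mathbb{1}\{t\geq0\}$ and all sample sizes below are hypothetical choices of ours (any intensity with $\Lambda_0(\infty)<\infty$ would do); this is a sketch, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical null intensity (our choice): lambda0(t) = exp(-t) for t >= 0,
# so Lambda0(t) = 1 - exp(-t) and the total mass is Lambda0(inf) = 1.
mass = 1.0
def Lambda0(t):
    return 1.0 - np.exp(-np.maximum(t, 0.0))

# Simulate n i.i.d. trajectories: N_j ~ Poisson(mass) events per trajectory,
# event positions drawn from the density lambda0 / mass (here Exp(1)).
n = 200
n_events = rng.poisson(mass, size=n)
events = np.sort(rng.exponential(1.0, size=n_events.sum()))

# Empirical mean function and the K-S type statistic on a grid
grid = np.linspace(0.0, 10.0, 2001)
Lambda_hat = np.searchsorted(events, grid, side="right") / n
gamma_n = np.sqrt(n / mass) * np.abs(Lambda_hat - Lambda0(grid)).max()

# Monte Carlo threshold: (1 - eps)-quantile of sup_{0<=s<=1} |W(s)|
M, K, eps = 5000, 1000, 0.05
W = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / K), size=(M, K)), axis=1)
c_eps = np.quantile(np.abs(W).max(axis=1), 1.0 - eps)

reject = gamma_n > c_eps  # decision of the test
```

Because the threshold is a quantile of $\sup|W|$ only, the same $c_\varepsilon$ serves every $\Lambda_0$; this is precisely the ADF property.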

Let us consider the case of the parametric null hypothesis. It can be formulated as follows. We have to test the null hypothesis

$$H_0:\ \Lambda(\cdot)\in\mathcal{L}(\Theta)=\big\{\Lambda(t)=\Lambda_0(t-\vartheta),\ \vartheta\in\Theta,\ t\in\mathbb{R}\big\}$$

against the alternative $H_1:\Lambda(\cdot)\notin\mathcal{L}(\Theta)$. Here $\Lambda_0(\vartheta,\cdot)$ is a known mean function of the Poisson process depending on a finite-dimensional unknown parameter $\vartheta\in\Theta$. Note that under $H_0$ there exists a true value $\vartheta_0\in\Theta$ such that the mean function of the observed Poisson process is $\Lambda(t)=\Lambda_0(\vartheta_0,t)$, $t\in\mathbb{R}$.

The K-S type GoF test can be constructed in a similar way. Introduce the normalized process $\hat u_n(t)\equiv u_n(\hat\vartheta_n,t)=\sqrt{n}\,\big(\hat\Lambda_n(t)-\Lambda_0(\hat\vartheta_n,t)\big)$, $t\in\mathbb{R}$, where $\hat\vartheta_n$ is the maximum likelihood estimator of the unknown parameter $\vartheta$, which is (under the hypothesis $H_0$) consistent and asymptotically normal: $\sqrt{n}(\hat\vartheta_n-\vartheta_0)\Longrightarrow\xi$.

Therefore, if we propose a goodness-of-fit test based on this statistic, say, $\Phi_n(X^{(n)})=\mathbb{1}\{\bar\Gamma_n>c_\varepsilon\}$, then to find the threshold $c_\varepsilon$ such that $\Phi_n\in\mathcal{K}_\varepsilon$ we have to solve the equation $\mathbf{P}_{\vartheta_0}(\Gamma>c_\varepsilon)=\varepsilon$. The goal of this work is to show that if the unknown parameter $\vartheta\in\Theta$ is the shift parameter, then it is possible to construct a test statistic $\hat\Gamma_n$ whose limit distribution does not depend on $\vartheta_0$. The test will be uniformly consistent against another class of alternatives

$$H_1^{\rho}:\ \Lambda(\cdot)\in\mathcal{L}_\rho=\Big\{\Lambda(\cdot):\ \inf_{\vartheta\in\Theta}\sup_t\big|\Lambda(t)-\Lambda_0(t-\vartheta)\big|>\rho\Big\}.$$

Here $\rho>0$ is some given number.

The mean function under the null hypothesis is

$$\Lambda_0(\vartheta,t)=\int_{-\infty}^{t}\lambda_0(s-\vartheta)\,\mathrm{d}s,\qquad t\in\mathbb{R}.$$

The proposed test statistic is

$$\hat\Gamma_n=\sqrt{n}\,\sup_t\big|\hat\Lambda_n(t)-\Lambda_0(\hat\vartheta_n,t)\big|.$$

We show that $\hat\Gamma_n\Longrightarrow\Gamma$, where $\Gamma=\Gamma(\Lambda_0)$, i.e. the distribution of the random variable $\Gamma(\Lambda_0)$ does not depend on $\vartheta_0$. Recall that the function $\Lambda_0(t)$, $t\in\mathbb{R}$, is known, and therefore the solution $c_\varepsilon=c_\varepsilon(\Lambda_0)$ can be calculated before the experiment using, say, numerical simulations.

We are given $n$ independent observations $X^{(n)}=(X_1,\ldots,X_n)$ of inhomogeneous Poisson processes $X_j=\{X_j(t),\,t\in\mathbb{R}\}$ with the mean function $\Lambda(t)=\mathbf{E}X_j(t)$, $t\in\mathbb{R}$. We have to construct a GoF test in the hypothesis testing problem with the parametric null hypothesis $H_0$. More precisely, we suppose that under $H_0$ the mean function $\Lambda(t)$ is absolutely continuous: $\Lambda'(t)=\lambda_0(\vartheta_0,t)$. Here $\vartheta_0$ is the true value, and the intensity function is $\lambda_0(\vartheta_0,t)=\lambda_0(t-\vartheta_0)$, $\vartheta_0\in\Theta$. The set $\Theta=(\alpha,\beta)$, $0<\alpha<\beta<\infty$. Therefore, if we denote $\Lambda_0(t)=\int_{-\infty}^{t}\lambda_0(\nu)\,\mathrm{d}\nu$, $t\in\mathbb{R}$, then the mean function under the null hypothesis is $\Lambda(t)=\Lambda_0(\vartheta_0,t)=\Lambda_0(t-\vartheta_0)$.

It is convenient to use the two different notations $\Lambda_0(\vartheta,t)$ and $\Lambda_0(t)$, and we hope that this will not be misleading.

Therefore, we have the parametric null hypothesis

$$H_0:\ \Lambda(\cdot)\in\mathcal{L}(\Theta),$$

where the parametric family is

$$\mathcal{L}(\Theta)=\big\{\Lambda(\cdot):\ \Lambda(t)=\Lambda_0(t-\vartheta),\ t\in\mathbb{R},\ \vartheta\in\Theta\big\}.$$

Here $\Lambda_0(\cdot)$ is a known absolutely continuous function with the properties $\Lambda_0(-\infty)=0$ and $\Lambda_0(+\infty)<\infty$.

In this work, for the shift family we denote by $\dot\lambda_0(v)$ and $\dot\Lambda_0(v)$ the derivatives of the functions $\lambda_0$ and $\Lambda_0$ with respect to their argument, so that, e.g., $\frac{\partial}{\partial\vartheta}\Lambda_0(t-\vartheta)=-\dot\Lambda_0(t-\vartheta)=-\lambda_0(t-\vartheta)$ ($\vartheta\in\Theta$, $t\in\mathbb{R}$).

We consider the class of tests of asymptotic level $\varepsilon$:

$$\mathcal{K}_\varepsilon=\Big\{\bar\Psi_n:\ \lim_{n\to\infty}\mathbf{E}_\vartheta\bar\Psi_n=\varepsilon\ \text{for all}\ \vartheta\in\Theta\Big\}.$$

The test studied in this work is based on the following statistic of K-S type:

$$\hat\Gamma_n=\sqrt{n}\,\sup_t\big|\hat\Lambda_n(t)-\Lambda_0(t-\hat\vartheta_n)\big|,$$

where $\hat\vartheta_n$ is the MLE.

As we use the asymptotic properties of the MLE ϑ ^ n , we need some regularity conditions.

Conditions $\mathcal{C}$

$\mathcal{C}_1$. The function $\lambda_0(\cdot)\in L_2(\mathbb{R})$ is strictly positive and three times continuously differentiable.

$\mathcal{C}_2$. Its derivatives belong to $L_2(\mathbb{R})$, and the Fisher information

$$\mathrm{I}_n(\vartheta)=n\int_{-\infty}^{+\infty}\frac{\dot\lambda_0^2(t-\vartheta)}{\lambda_0(t-\vartheta)}\,\mathrm{d}t=n\int_{-\infty}^{+\infty}\frac{\dot\lambda_0^2(s)}{\lambda_0(s)}\,\mathrm{d}s\equiv n\,\mathrm{I}_0,$$

where $\mathrm{I}_0>0$ does not depend on $\vartheta$.

$\mathcal{C}_3$. The derivative $\dot\lambda_0(\cdot)\in L_1(\mathbb{R})$.

$\mathcal{C}_4$. For any $\nu>0$ we have

$$\inf_{|\vartheta-\vartheta_0|>\nu}\big\|\lambda_0(\cdot-\vartheta)-\lambda_0(\cdot-\vartheta_0)\big\|>0.$$

Here $\|\cdot\|$ is the usual $L_\infty(\mathbb{R})$ norm, defined as $\|f(\cdot)\|=\sup_t|f(t)|$.

Note that, under these conditions, the MLE $\hat\vartheta_n$ is consistent and asymptotically normal,

$$\sqrt{n}\,(\hat\vartheta_n-\vartheta)\Longrightarrow\mathcal{N}(0,\mathrm{I}_0^{-1}),$$

and the moments converge: for any $p>0$,

$$n^{p/2}\,\mathbf{E}_\vartheta|\hat\vartheta_n-\vartheta|^{p}\longrightarrow\mathbf{E}|\zeta|^{p},\qquad\zeta\sim\mathcal{N}(0,\mathrm{I}_0^{-1}).$$
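This moment convergence is easy to probe by a quick Monte Carlo experiment. The intensity shape is a hypothetical choice of ours: for $\lambda_0(v)=e^{-v^2/2}$ the compensator $\int\lambda_0(s-\vartheta)\,\mathrm{d}s$ does not depend on $\vartheta$, so the MLE maximizes $-\frac12\sum_i(t_i-\vartheta)^2$ and equals the mean of the pooled event times, while $\mathrm{I}_0=\int v^2e^{-v^2/2}\,\mathrm{d}v=\sqrt{2\pi}$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical check of n * E(theta_hat - theta)^2 -> I0^{-1}: for the
# Gaussian-shaped intensity lambda0(v) = exp(-v^2/2) (our choice),
# I0 = sqrt(2*pi) and the MLE is the mean of the pooled event times.
theta0, mass, n, R = 0.0, np.sqrt(2.0 * np.pi), 100, 2000
errors2 = np.empty(R)
for r in range(R):
    m = rng.poisson(mass, size=n).sum()        # total number of events
    events = rng.normal(theta0, 1.0, size=m)   # pooled event positions
    errors2[r] = (events.mean() - theta0) ** 2

scaled = n * errors2.mean()                    # approximates I0^{-1}
```

The empirical value should be close to $\mathrm{I}_0^{-1}=1/\sqrt{2\pi}\approx0.399$ for this particular intensity.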

Moreover, it admits the representation (see [14], Theorem 3.1, page 101)

$$\hat\vartheta_n=\vartheta-\frac{1}{\sqrt{n}}\,\mathrm{I}_0^{-1}\int_{-\infty}^{+\infty}\frac{\dot\lambda_0(t-\vartheta)}{\lambda_0(t-\vartheta)}\,\mathrm{d}W_n(t)+O(n^{-3/4}),\qquad(2.1)$$

where $W_n(t)=\sqrt{n}\,\big(\hat\Lambda_n(t)-\Lambda_0(t-\vartheta)\big)$. For the proofs see [14].
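To make the computation of the MLE concrete, here is a small sketch with the same hypothetical Gaussian-shaped intensity $\lambda_0(v)=e^{-v^2/2}$ (our choice; it satisfies the conditions $\mathcal{C}$). Since the compensator term of the log-likelihood does not depend on $\vartheta$, the MLE maximizes $\sum_i\ln\lambda_0(t_i-\vartheta)=-\frac12\sum_i(t_i-\vartheta)^2+\mathrm{const}$ over the pooled event times $t_i$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical intensity (our choice): lambda0(v) = exp(-v^2/2), total mass
# Lambda0(inf) = sqrt(2*pi); one trajectory has N_j ~ Poisson(mass) events
# with density lambda0/mass, i.e. N(theta0, 1) positions under the shift theta0.
theta0, mass, n = 0.5, np.sqrt(2.0 * np.pi), 400
n_events = rng.poisson(mass, size=n)
events = rng.normal(theta0, 1.0, size=n_events.sum())

# Log-likelihood over a grid (the theta-free compensator term is dropped)
thetas = np.linspace(-1.0, 2.0, 3001)
loglik = -0.5 * ((events[None, :] - thetas[:, None]) ** 2).sum(axis=1)
theta_hat = thetas[np.argmax(loglik)]
```

For this particular shape the grid maximizer agrees (up to grid resolution) with the closed form `events.mean()`, and the error $\hat\vartheta_n-\vartheta_0$ is of order $1/\sqrt{n\,\mathrm{I}_0}$, in accordance with the asymptotic normality stated above.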

3. Main Result

Let us introduce the following random variable:

$$\Gamma_0=\sup_t\Big|W(\Lambda_0(t))-\lambda_0(t)\,\mathrm{I}_0^{-1}\int_{-\infty}^{+\infty}\frac{\dot\lambda_0(s)}{\lambda_0(s)}\,\mathrm{d}W(\Lambda_0(s))\Big|,$$

where $W(\cdot)$ is a standard Wiener process.
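Since $\Gamma_0$ involves only the known function $\Lambda_0$, its $(1-\varepsilon)$-quantile $c_\varepsilon=c_\varepsilon(\Lambda_0)$ can be approximated by Monte Carlo before the experiment. The sketch below discretizes $W(\Lambda_0(\cdot))$ through independent Gaussian increments of variance $\mathrm{d}\Lambda_0$; the intensity $\lambda_0(v)=e^{-v^2/2}$ is again a hypothetical choice of ours, for which $\dot\lambda_0(v)/\lambda_0(v)=-v$ and $\mathrm{I}_0=\sqrt{2\pi}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical intensity (our choice): lambda0(v) = exp(-v^2/2), for which
# dot-lambda0/lambda0 = -v and I0 = int v^2 exp(-v^2/2) dv = sqrt(2*pi).
t = np.linspace(-5.0, 5.0, 1500)
dt = t[1] - t[0]
lam = np.exp(-t ** 2 / 2.0)
I0 = np.sqrt(2.0 * np.pi)

# M replicates of W(Lambda0(.)) via independent increments of variance dLambda0
M, eps = 2000, 0.05
dW = rng.normal(0.0, np.sqrt(lam * dt), size=(M, t.size))
W = np.cumsum(dW, axis=1)
integral = (-t * dW).sum(axis=1)          # int (dot-lambda0/lambda0) dW(Lambda0)
proc = W - np.outer(integral / I0, lam)   # W(Lambda0(t)) - lambda0(t) I0^{-1} * integral
sups = np.abs(proc).max(axis=1)
c_eps = np.quantile(sups, 1.0 - eps)
```

The test then rejects when $\hat\Gamma_n>c_\varepsilon$; its size tends to $\varepsilon$ whatever the true shift $\vartheta_0$, which is the asymptotically parameter free property.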

The main result of this work is the following theorem.

Theorem 3.1. Let the conditions $\mathcal{C}$ be fulfilled. Then the test

$$\hat\Phi_n(X^{(n)})=\mathbb{1}\{\hat\Gamma_n>c_\varepsilon\},\qquad \mathbf{P}(\Gamma_0>c_\varepsilon)=\varepsilon,$$

belongs to the class $\mathcal{K}_\varepsilon$.

Proof.

Let us consider $n$ independent observations $X^{(n)}=(X_1,\ldots,X_n)$ of inhomogeneous Poisson processes $X_j=\{X_j(t),\,t\in\mathbb{R}\}$.

We have to show that $\lim_{n\to\infty}\mathbf{E}_\vartheta\hat\Phi_n(X^{(n)})=\varepsilon$ for all $\vartheta\in\Theta$.

We have

$$\mathbf{E}_\vartheta\hat\Phi_n(X^{(n)})=\mathbf{E}_\vartheta\mathbb{1}\{\hat\Gamma_n>c_\varepsilon\}=\mathbf{P}_\vartheta\Big[\sup_t\sqrt{n}\,\big|\hat\Lambda_n(t)-\Lambda_0(t-\hat\vartheta_n)\big|>c_\varepsilon\Big]=\mathbf{P}_\vartheta\Big[\sup_t|u_n(t)|>c_\varepsilon\Big],$$

where we put $u_n(t)=\sqrt{n}\,\big(\hat\Lambda_n(t)-\Lambda_0(t-\hat\vartheta_n)\big)$.

The parametric empirical process is

$$u_n(t)=\sqrt{n}\,\big(\hat\Lambda_n(t)-\Lambda_0(t-\hat\vartheta_n)\big)=\sqrt{n}\,\big(\hat\Lambda_n(t)-\Lambda_0(t-\vartheta_0)+\Lambda_0(t-\vartheta_0)-\Lambda_0(t-\hat\vartheta_n)\big)=W_n(t)-\sqrt{n}\,\big(\Lambda_0(t-\hat\vartheta_n)-\Lambda_0(t-\vartheta_0)\big).\qquad(3.2)$$

Since the function $\Lambda_0(t-\vartheta)$ is differentiable in $\vartheta$ on $\Theta$, the formula of finite increments applied to $\Lambda_0$ on $[\vartheta_0,\hat\vartheta_n]$ gives

$$\Lambda_0(t-\hat\vartheta_n)-\Lambda_0(t-\vartheta_0)=-\dot\Lambda_0(t-\tilde\vartheta_n)(\hat\vartheta_n-\vartheta_0)+o\big(\dot\Lambda_0(t-\tilde\vartheta_n)(\hat\vartheta_n-\vartheta_0)\big),$$

where $\tilde\vartheta_n$ is an intermediate point between $\vartheta_0$ and $\hat\vartheta_n$.

According to (3.2), we have the representation

$$u_n(t)=W_n(t)+\dot\Lambda_0(t-\tilde\vartheta_n)\,\sqrt{n}\,(\hat\vartheta_n-\vartheta_0)-\sqrt{n}\,o\big(\dot\Lambda_0(t-\tilde\vartheta_n)(\hat\vartheta_n-\vartheta_0)\big)=W_n(t)+\dot\Lambda_0(t-\tilde\vartheta_n)\int h(s-\vartheta_0)\,\mathrm{d}W_n(s)+r_n(t),\qquad(3.3)$$

where

$$r_n(t)=O\big(n^{-1/4}\dot\Lambda_0(t-\tilde\vartheta_n)\big)-\sqrt{n}\,o\big(\dot\Lambda_0(t-\tilde\vartheta_n)(\hat\vartheta_n-\vartheta_0)\big)$$

is the remainder (the function $h$ is introduced just below).

Let us put

$$h(v)=-\,\mathrm{I}_0^{-1}\frac{\dot\lambda_0(v)}{\lambda_0(v)},\qquad W_n(t)=\sqrt{n}\,\big(\hat\Lambda_n(t)-\Lambda_0(t-\vartheta_0)\big),$$

and denote by $\vartheta_0$ the true value. Then relation (2.1) becomes

$$\hat\vartheta_n-\vartheta_0=\frac{1}{\sqrt{n}}\Big(\int h(t-\vartheta_0)\,\mathrm{d}W_n(t)+O(n^{-1/4})\Big)$$

and we have

$$\sqrt{n}\,(\hat\vartheta_n-\vartheta_0)=\int h(t-\vartheta_0)\,\mathrm{d}W_n(t)+O(n^{-1/4}).$$

Therefore,

$$u_n(t)=W_n(t)+\dot\Lambda_0(t-\tilde\vartheta_n)\,v_n+r_n(t),\qquad(3.4)$$

where we have set $v_n=\int h(s-\vartheta_0)\,\mathrm{d}W_n(s)$. Since $\tilde\vartheta_n$ lies between $\vartheta_0$ and the consistent estimator $\hat\vartheta_n$, it also converges to $\vartheta_0$; moreover, $r_n(t)$ converges in probability to $0$. Under these considerations we can rewrite $u_n(t)$ as

$$u_n(t)=W_n(t)+\dot\Lambda_0(t-\vartheta_0)\,v_n.\qquad(3.5)$$

Furthermore, we put

$$\hat u_n(t)=W_n(t)+\dot\Lambda_0(t-\hat\vartheta_n)\,\hat v_n,\qquad(3.6)$$

where $\hat v_n=\int h(s-\hat\vartheta_n)\,\mathrm{d}W_n(s)$.

The intensity function $\lambda(\vartheta_0,t)=\lambda_0(t-\vartheta_0)$ is strictly positive. It is known that the process $W_n(\cdot)$ is asymptotically (in the sense of weak convergence) a Brownian motion composed with $\Lambda(\vartheta_0,t)$, which we denote $W(\Lambda(\vartheta_0,t))$, $\Lambda(\vartheta_0,t)\in[0,\Lambda(\vartheta_0,+\infty)]$. In other words, $W_n(t)$ converges weakly to the process $W(\Lambda_0(t-\vartheta_0))$ in the space $\mathscr{D}[0,\Lambda_0(+\infty)]$.

We introduce the stochastic process

$$\hat u(t)=W(\Lambda_0(t-\vartheta_0))+\dot\Lambda_0(t-\vartheta_0)\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,\mathrm{d}W(\Lambda_0(s-\vartheta_0)).\qquad(3.7)$$

It is easy to see that, if we change the variables $t-\vartheta_0=u$ and $s-\vartheta_0=v$ in the integrals, then we obtain the equality

$$\sup_t|\hat u(t)|=\sup_u\Big|W(\Lambda_0(u))+\dot\Lambda_0(u)\int h(v)\,\mathrm{d}W(\Lambda_0(v))\Big|=\sup_u\Big|W(\Lambda_0(u))-\lambda_0(u)\,\mathrm{I}_0^{-1}\int_{-\infty}^{+\infty}\frac{\dot\lambda_0(v)}{\lambda_0(v)}\,\mathrm{d}W(\Lambda_0(v))\Big|=\Gamma_0.$$

The proof of the theorem rests on the following fundamental lemma.

Lemma 3.2. Let the conditions $\mathcal{C}$ be satisfied. Then the process $\hat u_n(t)$, $t\in\mathbb{R}$, converges weakly in the space $\mathscr{D}(-\infty,+\infty)$ to the process $\hat u(t)$ as $n\to\infty$. Since the mapping $T(x)=\sup_t|x(t)|$ is continuous on $\mathscr{D}(-\infty,+\infty)$ with respect to the Skorohod distance, the random variable $T(\hat u_n)=\sup_t|\hat u_n(t)|$ converges weakly to the random variable $T(\hat u)=\sup_t|\hat u(t)|$. In other words, we have

$$\Gamma_n=\sup_t|\hat u_n(t)|\Longrightarrow\sup_t|\hat u(t)|=\Gamma_0.$$

To prove Lemma 3.2, we need the following lemmas.

Lemma 3.3. Let the conditions $\mathcal{C}$ be satisfied. Then the following convergence holds:

$$\hat u_n(t)-u_n(t)=o(1).$$

Proof of Lemma 3.3. For this, we need two relations:

$$\sup_t\big|\dot\Lambda_0(t-\hat\vartheta_n)-\dot\Lambda_0(t-\vartheta_0)\big|=o(1),\qquad(3.8)$$

$$\int_{-\infty}^{+\infty}\big[h(t-\hat\vartheta_n)-h(t-\vartheta_0)\big]\,\mathrm{d}W_n(t)=o(1).\qquad(3.9)$$

Indeed, for the first relation: since the consistent estimator $\hat\vartheta_n$ converges to the true value $\vartheta_0$ and $\dot\Lambda_0(\cdot)$ is a continuous function, $\dot\Lambda_0(t-\hat\vartheta_n)$ converges in probability to $\dot\Lambda_0(t-\vartheta_0)$ for all $t$. Hence

$$\sup_t\big|\dot\Lambda_0(t-\hat\vartheta_n)-\dot\Lambda_0(t-\vartheta_0)\big|=o(1).$$

Furthermore, by the condition $\mathcal{C}_1$, the function $\dot\Lambda_0(t-\vartheta_0)$ is also bounded. Hence we easily obtain relation (3.8).

Further, for the second relation, we have

$$\mathbf{E}_{\vartheta_0}\Big(\int\big(h(t-\hat\vartheta_n)-h(t-\vartheta_0)\big)\,\mathrm{d}W_n(t)\Big)^2=\mathbf{E}_{\vartheta_0}\int\big(h(t-\hat\vartheta_n)-h(t-\vartheta_0)\big)^2\,\mathrm{d}\Lambda_0(t-\vartheta_0)=\mathbf{E}_{\vartheta_0}\int\big((\hat\vartheta_n-\vartheta_0)\,\dot h(t-\tilde\vartheta_n)\big)^2\,\mathrm{d}\Lambda_0(t-\vartheta_0)\leq C^2\,\mathbf{E}_{\vartheta_0}(\hat\vartheta_n-\vartheta_0)^2\int\mathrm{d}\Lambda_0(t-\vartheta_0),$$

where $C$ bounds $|\dot h|$. Recall that $\mathbf{E}_{\vartheta_0}\big|\sqrt{n}(\hat\vartheta_n-\vartheta_0)\big|^2=\mathrm{I}_0^{-1}+o(1)$, $\Lambda_0(-\infty)=0$ and $\Lambda_0(+\infty)<\infty$; therefore

$$C^2\,\mathbf{E}_{\vartheta_0}(\hat\vartheta_n-\vartheta_0)^2\int\mathrm{d}\Lambda_0(t-\vartheta_0)\xrightarrow[n\to+\infty]{}0.$$

Hence

$$\mathbf{E}_{\vartheta_0}\Big(\int\big(h(t-\hat\vartheta_n)-h(t-\vartheta_0)\big)\,\mathrm{d}W_n(t)\Big)^2\xrightarrow[n\to+\infty]{}0,$$

which proves relation (3.9).

Now we can evaluate the difference $\hat u_n(t)-u_n(t)$. We have

$$\hat u_n(t)-u_n(t)=W_n(t)+\dot\Lambda_0(t-\hat\vartheta_n)\hat v_n-W_n(t)-\dot\Lambda_0(t-\vartheta_0)v_n=\dot\Lambda_0(t-\hat\vartheta_n)\hat v_n-\dot\Lambda_0(t-\vartheta_0)v_n=\dot\Lambda_0(t-\hat\vartheta_n)\big[\hat v_n-v_n\big]+\big[\dot\Lambda_0(t-\hat\vartheta_n)-\dot\Lambda_0(t-\vartheta_0)\big]v_n.$$

Since $\dot\Lambda_0(\cdot-\hat\vartheta_n)$ is a uniformly consistent estimator of $\dot\Lambda_0(\cdot-\vartheta_0)$ on $\mathbb{R}$, we have $\dot\Lambda_0(t-\hat\vartheta_n)-\dot\Lambda_0(t-\vartheta_0)=o(1)$.

Further, relation (3.9) yields

$$\hat v_n-v_n=\int\big[h(s-\hat\vartheta_n)-h(s-\vartheta_0)\big]\,\mathrm{d}W_n(s)=o(1).$$

The relation $\dot\Lambda_0(t-\hat\vartheta_n)=\dot\Lambda_0(t-\vartheta_0)+o(1)<\infty$ implies that $\dot\Lambda_0(t-\hat\vartheta_n)=O(1)$, and

$$\mathbf{E}_{\vartheta_0}(v_n)^2=\mathbf{E}_{\vartheta_0}\Big(\int h(s-\vartheta_0)\,\mathrm{d}W_n(s)\Big)^2=\int h(s-\vartheta_0)^2\,\mathrm{d}\Lambda_0(s-\vartheta_0)<\infty$$

implies that $v_n=O(1)$.

Lemma 3.3 is thus proved.

Lemma 3.4. Let the conditions $\mathcal{C}$ be satisfied. Then the finite-dimensional distributions of the process $\hat u_n(t)$, $t\in\mathbb{R}$, converge to the finite-dimensional distributions of the process $\hat u(t)$ as $n\to\infty$.

Proof of Lemma 3.4. The proof is based on the central limit theorem for stochastic integrals (see, e.g., Kutoyants [14], Theorem 1.1), and we follow the proof of that theorem. In particular, we obtain the convergence, as $n\to\infty$, of the characteristic function $\phi_n(\mu)$ to the characteristic function $\phi_0(\mu)$ of the limit process.

They are defined as follows:

$$\phi_n(\mu)=\mathbf{E}_{\vartheta_0}\exp\{i\mu\,\hat u_n(t)\}=\mathbf{E}_{\vartheta_0}\exp\big\{i\mu\,W_n(t)+i\mu\,\dot\Lambda_0(t-\hat\vartheta_n)\hat v_n\big\},\qquad(3.10)$$

$$\phi_0(\mu)=\mathbf{E}_{\vartheta_0}\exp\{i\mu\,\hat u(t)\}=\mathbf{E}_{\vartheta_0}\exp\Big\{i\mu\,W(\Lambda_0(t-\vartheta_0))+i\mu\,\dot\Lambda_0(t-\vartheta_0)\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,\mathrm{d}W(\Lambda_0(s-\vartheta_0))\Big\}.\qquad(3.11)$$

Indeed, we have

$$W_n(t)=\sqrt{n}\,\big(\hat\Lambda_n(t)-\Lambda_0(t-\vartheta_0)\big)=\sqrt{n}\Big(\frac{1}{n}\sum_{j=1}^{n}X_j(t)-\Lambda_0(t-\vartheta_0)\Big)=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\big[X_j(t)-\Lambda_0(t-\vartheta_0)\big]=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\int_{-\infty}^{+\infty}\mathbb{1}\{s<t\}\,\mathrm{d}\pi_j(s),\qquad(3.12)$$

where we put $\pi_j(t)=X_j(t)-\Lambda_0(t-\vartheta_0)$.

On the other hand, we have

$$\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,\mathrm{d}W_n(s)=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,\mathrm{d}\pi_j(s).\qquad(3.13)$$

Taking into account the expressions (3.12) and (3.13), we obtain the representation of $\hat u_n(t)$:

$$\hat u_n(t)=W_n(t)+\dot\Lambda_0(t-\hat\vartheta_n)\hat v_n=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}\int_{-\infty}^{+\infty}\Big[\mathbb{1}\{s<t\}+\dot\Lambda_0(t-\hat\vartheta_n)\,h(s-\hat\vartheta_n)\Big]\,\mathrm{d}\pi_j(s).\qquad(3.14)$$

Thus, we can calculate the characteristic function as follows:

$$\phi_n(\mu)=\exp\bigg\{n\int_{-\infty}^{+\infty}\Big[\exp\Big\{\frac{i\mu}{\sqrt{n}}\big[\mathbb{1}\{s<t\}+\dot\Lambda_0(t-\hat\vartheta_n)h(s-\hat\vartheta_n)\big]\Big\}-1-\frac{i\mu}{\sqrt{n}}\big[\mathbb{1}\{s<t\}+\dot\Lambda_0(t-\hat\vartheta_n)h(s-\hat\vartheta_n)\big]\Big]\,\lambda_0(s-\vartheta_0)\,\mathrm{d}s\bigg\}.\qquad(3.15)$$

By the Taylor formula

$$e^{i\phi}-1-i\phi=\frac{(i\phi)^2}{2}+o(\phi^2),$$

we have, as $n\to\infty$,

$$\phi_n(\mu)\longrightarrow\exp\bigg\{-\frac{\mu^2}{2}\int_{-\infty}^{+\infty}\Big[\mathbb{1}\{s<t\}+\dot\Lambda_0(t-\vartheta_0)h(s-\vartheta_0)\Big]^2\lambda_0(s-\vartheta_0)\,\mathrm{d}s\bigg\}.\qquad(3.16)$$

This last expression (3.16) is exactly

$$\mathbf{E}_{\vartheta_0}\exp\Big\{i\mu\,W(\Lambda_0(t-\vartheta_0))+i\mu\,\dot\Lambda_0(t-\vartheta_0)\int_{-\infty}^{+\infty}h(s-\vartheta_0)\,\mathrm{d}W(\Lambda_0(s-\vartheta_0))\Big\},$$

which is the characteristic function defined in (3.11).

Therefore, we have the convergence of the one-dimensional distributions. In the general case, the verification of the convergence is entirely similar.

Lemma 3.5. For any $n$ and any $t_1,t_2\in\mathbb{R}$, we have

$$\mathbf{E}_{\vartheta_0}\big|u_n(t_1)-u_n(t_2)\big|^2\leq C\,|t_1-t_2|.$$

Proof of Lemma 3.5. For any $n$ and any $t_1,t_2$ (say $t_1\geq t_2$), we have

$$\mathbf{E}_{\vartheta_0}\big|u_n(t_1)-u_n(t_2)\big|^2=\mathbf{E}_{\vartheta_0}\Big|W_n(t_1)+\dot\Lambda_0(t_1-\vartheta_0)\int h(s-\vartheta_0)\,\mathrm{d}W_n(s)-W_n(t_2)-\dot\Lambda_0(t_2-\vartheta_0)\int h(s-\vartheta_0)\,\mathrm{d}W_n(s)\Big|^2$$

$$\leq 2\,\mathbf{E}_{\vartheta_0}\big|W_n(t_1)-W_n(t_2)\big|^2+2\,\mathbf{E}_{\vartheta_0}\Big|\big[\dot\Lambda_0(t_1-\vartheta_0)-\dot\Lambda_0(t_2-\vartheta_0)\big]\int h(s-\vartheta_0)\,\mathrm{d}W_n(s)\Big|^2$$

$$=2\big(\Lambda_0(t_1-\vartheta_0)-\Lambda_0(t_2-\vartheta_0)\big)+2\big[\dot\Lambda_0(t_1-\vartheta_0)-\dot\Lambda_0(t_2-\vartheta_0)\big]^2\int h(s-\vartheta_0)^2\,\mathrm{d}\Lambda_0(s-\vartheta_0)$$

$$=2\int_{t_2-\vartheta_0}^{t_1-\vartheta_0}\lambda_0(s)\,\mathrm{d}s+2\Big(\int_{t_2-\vartheta_0}^{t_1-\vartheta_0}\dot\lambda_0(\tau)\,\mathrm{d}\tau\Big)^2\int h(s-\vartheta_0)^2\lambda_0(s-\vartheta_0)\,\mathrm{d}s$$

$$\leq 2|t_1-t_2|\sup_s|\lambda_0(s)|+2|t_1-t_2|^2\big(\sup_s|\dot\lambda_0(s)|\big)^2\int h(u)^2\lambda_0(u)\,\mathrm{d}u\leq C|t_1-t_2|+C|t_1-t_2|^2\leq C|t_1-t_2|.$$

Note that the two lemmas above are not sufficient to establish the weak convergence of the process $u_n$ in the space $\mathscr{D}(-\infty,+\infty)$, nor the convergence of the random variable $T(u_n)$. However, the increments of the process $u_n$ being independent, the convergence of $u_n$ on finite intervals $[A,B]$ (that is, convergence in the Skorohod space $\mathscr{D}[A,B]$ of functions on $[A,B]$ without discontinuities of the second kind) follows from ([15], Theorem 6.5.5), that is, from Lemma 3.4 and the following lemma.

Lemma 3.6. For any $\varepsilon>0$, we have

$$\lim_{\kappa\to0}\varlimsup_{n\to\infty}\ \sup_{|t_1-t_2|<\kappa}\mathbf{P}\big\{|u_n(t_1)-u_n(t_2)|>\varepsilon\big\}=0.$$

Proof of Lemma 3.6. By the Bienaymé-Chebyshev inequality and Lemma 3.5, we have

$$\mathbf{P}_{\vartheta_0}\big\{|u_n(t_1)-u_n(t_2)|>\varepsilon\big\}\leq\frac{1}{\varepsilon^2}\,\mathbf{E}_{\vartheta_0}\big|u_n(t_1)-u_n(t_2)\big|^2\leq\frac{C}{\varepsilon^2}\,|t_1-t_2|\leq\frac{C\kappa}{\varepsilon^2}\xrightarrow[\kappa\to0]{}0.$$

Therefore Lemma 3.6 is proved, and with it Lemma 3.2.

So, the last ingredient of the proof of Theorem 3.1 is the following estimate of the tails of the process $u_n(t)$.

Lemma 3.7. Let the conditions $\mathcal{C}$ be satisfied. For any $\varepsilon>0$ there exist $T>0$ and $n_0$ such that for all $n\geq n_0$ we have

$$\mathbf{P}_{\vartheta_0}\Big(\sup_{|s|>T}|u_n(s)|>\varepsilon\Big)\leq\varepsilon.\qquad(3.17)$$

Proof of Lemma 3.7. We have

$$\mathbf{P}_{\vartheta_0}\Big(\sup_{|s|>T}|u_n(s)|>\varepsilon\Big)\leq\mathbf{P}_{\vartheta_0}\Big(\sup_{s>T}|u_n(s)|>\varepsilon\Big)+\mathbf{P}_{\vartheta_0}\Big(\sup_{s<-T}|u_n(s)|>\varepsilon\Big).\qquad(3.18)$$

For the first term we have

$$\mathbf{P}_{\vartheta_0}\Big(\sup_{s>T}|u_n(s)|>\varepsilon\Big)\leq\frac{K}{\varepsilon^2}\,\sup_{s>T}\mathbf{E}_{\vartheta_0}u_n^2(s).$$

Direct calculation allows one to verify that

$$\sup_s\mathbf{E}_{\vartheta_0}\hat u_n^2(s)\leq C_1,$$

where the constant $C_1>0$ does not depend on $n$. Hence

$$\mathbf{P}_{\vartheta_0}\Big(\sup_{s>T}|u_n(s)|>\varepsilon\Big)\leq\frac{K\,C_1}{\varepsilon^2}\xrightarrow[T\to\infty]{}0.$$

For the second term of (3.18) we obtain, in a similar manner, the bound

$$\mathbf{P}_{\vartheta_0}\Big(\sup_{s<-T}|u_n(s)|>\varepsilon\Big)\leq\frac{K\,C_2}{\varepsilon^2}\xrightarrow[T\to\infty]{}0.$$

This convergence allows us to conclude that, for all $n\geq n_0$ with some $n_0$, the estimate (3.17) holds.

Proposition 3.8. Let the conditions $\mathcal{C}$ be satisfied. Then the test

$$\hat\Phi_n(X^{(n)})=\mathbb{1}\{\hat\Gamma_n>c_\varepsilon\}$$

is consistent under the alternative $H_1$, that is,

$$\beta(\hat\Phi_n,\Lambda)\xrightarrow[n\to\infty]{}1,$$

and it is uniformly consistent under the alternative $H_1^{\rho}$, that is,

$$\inf_{\Lambda(\cdot)\in\mathcal{L}_\rho}\beta(\hat\Phi_n,\Lambda)\xrightarrow[n\to\infty]{}1.$$

Proof of Proposition 3.8. Under the alternative $H_1$, the power $\beta(\hat\Phi_n,\Lambda)$ is

$$\beta(\hat\Phi_n,\Lambda)=\mathbf{P}(\text{reject }H_0\mid H_0\text{ is false})=\mathbf{P}(\hat\Gamma_n>c_\varepsilon\mid H_1)=\mathbf{P}_\Lambda(\hat\Gamma_n>c_\varepsilon).$$

We can write

$$\mathbf{P}_\Lambda(\hat\Gamma_n>c_\varepsilon)=\mathbf{P}_\Lambda\Big(\sqrt{n}\,\big\|\hat\Lambda_n(\cdot)-\Lambda_0(\cdot-\hat\vartheta_n)\big\|>c_\varepsilon\Big)\geq\mathbf{P}_\Lambda\Big(\sqrt{n}\,\big\|\Lambda(\cdot)-\Lambda_0(\cdot-\hat\vartheta_n)\big\|-\sqrt{n}\,\big\|\Lambda(\cdot)-\hat\Lambda_n(\cdot)\big\|>c_\varepsilon\Big)$$

$$=\mathbf{P}_\Lambda\Big(\sqrt{n}\,\big\|\Lambda(\cdot)-\hat\Lambda_n(\cdot)\big\|<\sqrt{n}\,\big\|\Lambda(\cdot)-\Lambda_0(\cdot-\hat\vartheta_n)\big\|-c_\varepsilon\Big)=\mathbf{P}_\Lambda\Big(\big\|W_n(\cdot)\big\|<\sqrt{n}\,\big\|\Lambda(\cdot)-\Lambda_0(\cdot-\hat\vartheta_n)\big\|-c_\varepsilon\Big)$$

$$\geq\mathbf{P}_\Lambda\Big(\big\|W_n(\cdot)\big\|<\sqrt{n}\,g-c_\varepsilon\Big)\xrightarrow[n\to\infty]{}\mathbf{P}\Big\{\sup_u|W(u)|<\infty\Big\}=1,$$

where under the alternative $W_n(t)=\sqrt{n}\,(\hat\Lambda_n(t)-\Lambda(t))$ and we have put

$$g=\inf_{\vartheta\in\Theta}\big\|\Lambda(\cdot)-\Lambda_0(\cdot-\vartheta)\big\|>0.$$

Therefore the Kolmogorov-Smirnov type test is consistent against this alternative. The proof presented above also allows us to verify the uniform consistency of the test against the alternative $H_1^{\rho}$.

Indeed, we have

$$\inf_{\Lambda(\cdot)\in\mathcal{L}_\rho}\beta(\hat\Phi_n,\Lambda)\geq\mathbf{P}_\Lambda\Big(\big\|W_n(\cdot)\big\|<\sqrt{n}\,g_\rho-c_\varepsilon\Big)\xrightarrow[n\to\infty]{}1,$$

where

$$g_\rho=\inf_{\Lambda(\cdot)\in\mathcal{L}_\rho}\inf_{\vartheta\in\Theta}\big\|\Lambda(\cdot)-\Lambda_0(\cdot-\vartheta)\big\|\geq\rho>0.$$

Proposition 3.8 is thus proved.
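The growth $\sqrt{n}\,g$ appearing in the proof can be observed numerically. In the sketch below the data come from a hypothetical bimodal alternative (event positions from the mixture $\frac12\mathcal{N}(-1.5,1)+\frac12\mathcal{N}(1.5,1)$, total mass $\sqrt{2\pi}$) that no shift of a Gaussian-shaped null intensity $\lambda_0(v)=e^{-v^2/2}$ can match; all concrete functions and sample sizes are our own choices.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)

Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / np.sqrt(2.0))))
mass = np.sqrt(2.0 * np.pi)  # common total mass of null family and alternative

def gamma_hat(n):
    """K-S statistic sqrt(n) * sup_t |Lambda_hat_n(t) - Lambda0(t - theta_hat)|."""
    m = rng.poisson(mass, size=n).sum()
    # alternative: events from the mixture 0.5 N(-1.5, 1) + 0.5 N(1.5, 1)
    events = np.sort(rng.normal(rng.choice([-1.5, 1.5], size=m), 1.0))
    theta_hat = events.mean()  # MLE under the Gaussian-shaped null family
    grid = np.linspace(-6.0, 6.0, 1201)
    Lambda_hat = np.searchsorted(events, grid, side="right") / n
    Lambda0_fit = mass * Phi(grid - theta_hat)
    return np.sqrt(n) * np.abs(Lambda_hat - Lambda0_fit).max()

g_small, g_large = gamma_hat(100), gamma_hat(1600)
```

With these choices the sup-distance between the alternative and the fitted null is roughly $0.5$, so $\hat\Gamma_n\approx\sqrt{n}\,g$ quickly exceeds any fixed threshold $c_\varepsilon$, which is the consistency stated in Proposition 3.8.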

4. Conclusions

This work is devoted to the Kolmogorov-Smirnov test in the case of observations of non-homogeneous Poisson processes. The main results are obtained in the situation where, under the null hypothesis, the intensity functions of the observed inhomogeneous Poisson processes depend on an unknown parameter.

As the GoF test studied in this work is mainly based on the maximum likelihood estimator (MLE), we present the asymptotic properties of the MLE in the large-sample setting. The conditions for its consistency and asymptotic normality are given.

We have studied the Kolmogorov-Smirnov test for inhomogeneous Poisson processes with a parametric null hypothesis, where the unknown parameter is the translation parameter. The construction of the test is based on the MLE of this parameter, and the main result is that, due to the structure of the statistic, substituting the estimator for the unknown parameter leads to a limit distribution of the test statistic which does not depend on the unknown parameter.

In this work we obtained a Kolmogorov-Smirnov GoF test based on the sup-metric in the case of the translation parameter. It is natural to ask: what happens if we take the $L_2(\mathbb{R})$ metric instead?

Cite this paper: Tanguep, E. and Njomen, D. (2021) Kolmogorov-Smirnov APF Test for Inhomogeneous Poisson Processes with Shift Parameter. Applied Mathematics, 12, 322-335. doi: 10.4236/am.2021.124023.
References

[1]   Lehmann, E.L. and Romano, J.P. (2005) Testing Statistical Hypotheses. 3rd Edition, Springer-Verlag, New York.

[2]   Durbin, J. (1973) Distribution Theory for Tests Based on the Sample Distribution Function. SIAM, Philadelphia.
https://doi.org/10.1137/1.9781611970586

[3]   Mann, H.B. and Wald, A. (1942) On the Choice of the Number of Class Intervals in the Application of Chi-Square Test. Annals of Mathematical Statistics, 13, 306-317.
https://doi.org/10.1214/aoms/1177731569

[4]   Ingster, Yu.I. and Suslina, I.A. (2003) Nonparametric Goodness-of-Fit Testing Under Gaussian Models. Springer-Verlag, New York.
https://doi.org/10.1007/978-0-387-21580-8

[5]   Ingster, Yu.I. and Kutoyants, Yu.A. (2007) Nonparametric Hypothesis Testing for an Intensity of Poisson Process. Mathematical Methods of Statistics, 16, 217-245.
https://doi.org/10.3103/S1066530707030039

[6]   Dachian, S. and Kutoyants, Yu.A. (2007) On the Goodness-of-Fit Tests for Some Continuous Time Processes. In: Vonta, F., Nikulin, M., Limnios, N. and Huber-Carol, C., Eds., Statistical Models and Methods for Biomedical and Technical Systems, Birkhäuser, Boston, 395-413.
https://doi.org/10.1007/978-0-8176-4619-6_27

[7]   Dabye, A.S. (2013) On the Cramér-von Mises Test with Parametric Hypothesis for Poisson Processes. Statistical Inference for Stochastic Processes, 16, 1-13.
https://doi.org/10.1007/s11203-013-9077-y

[8]   Dabye, A.S., Tanguep, W.E.D. and Top, A. (2016) On the Cramér-von Mises Test for Poisson Processes with Scale Parameter. Far East Journal of Theoretical Statistics, 52, 419-441.

[9]   Rao, C.R. (1965) Linear Statistical Inference and Its Applications. Wiley, New York.

[10]   Khmaladze, E. (1981) Martingale Approach in the Theory of Goodness-of-Fit Tests. Theory of Probability and Its Applications, 26, 240-257. (Translated by A.B. Aries)
https://doi.org/10.1137/1126027

[11]   Bai, J. (2002) Testing Parametric Conditional Distributions of Dynamic Models. Boston College, Chestnut Hill, MA.

[12]   Koenker, R. and Xiao, Z. (2002) Inference on the Quantile Regression Process. Econometrica, 70, 1583-1612.
https://doi.org/10.1111/1468-0262.00342

[13]   Darling, D.A. (1955) The Cramér-Smirnov Test in the Parametric Case. Annals of Mathematical Statistics, 26, 1-20.
https://doi.org/10.1214/aoms/1177728589

[14]   Kutoyants, Yu.A. (1998) Statistical Inference for Spatial Poisson Processes. Lecture Notes in Statistics, 134, Springer-Verlag, New York.
https://doi.org/10.1007/978-1-4612-1706-0

[15]   Gihman, I.I. and Skorohod, A.V. (1974) The Theory of Stochastic Processes I. Springer-Verlag, New York.

 
 