The Rate of Asymptotic Normality of Frequency Polygon Density Estimation for Spatial Random Fields
Abstract: This paper investigates the convergence rate of asymptotic normality of frequency polygon estimation of the density function under mixing random fields, which include the strongly mixing condition and some weaker mixing conditions. A Berry-Esseen bound for the frequency polygon is established and the convergence rates of asymptotic normality are derived. In particular, for the optimal bin width ${b}_{opt}=C{\stackrel{^}{n}}^{-1/5}$, it is shown that the convergence rate of asymptotic normality reaches ${\stackrel{^}{n}}^{-\frac{2}{5\left(1+3N\right)}}$ when the mixing coefficient tends to zero exponentially fast.

1. Introduction

Denote the integer lattice points in the N-dimensional Euclidean space by ${Z}^{N}$ for $N\ge 1$. Let $\left\{{X}_{i}:i\in {Z}^{N}\right\}$ be a strictly stationary random field with common density $f\left(x\right)$ on the real line R. Throughout this paper, let $‖i‖={\left({i}_{1}^{2}+{i}_{2}^{2}+\cdots +{i}_{N}^{2}\right)}^{1/2}$, $\stackrel{^}{i}={i}_{1}{i}_{2}\cdots {i}_{N}$, $i\preccurlyeq j$ denote ${i}_{k}\le {j}_{k}$ ( $1\le k\le N$ ) for $i=\left({i}_{1},{i}_{2},\cdots ,{i}_{N}\right)\in {Z}^{N}$ and $j=\left({j}_{1},{j}_{2},\cdots ,{j}_{N}\right)\in {Z}^{N}$, and $1=\left(1,1,\cdots ,1\right)\in {Z}^{N}$. The limit process $n\to \infty$ denotes

$\mathrm{min}\left\{{n}_{i};1\le i\le N\right\}\to \infty \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{n}_{i}/{n}_{j}\le C\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left(1\le i,j\le N\right)$

for some constant $C>0$.

For a set of sites $S\subset {Z}^{N}$, $\mathcal{B}\left(S\right)=\mathcal{B}\left({X}_{i};i\in S\right)$ denotes the σ-field generated by the random variables $\left({X}_{i};i\in S\right)$. $\text{Card}\left(S\right)$ denotes the cardinality of S, and $\text{dist}\left(S,{S}^{\prime }\right)$ denotes the Euclidean distance between S and ${S}^{\prime }$, that is $\text{dist}\left(S,{S}^{\prime }\right)=\mathrm{min}\left\{‖i-j‖;i\in S,j\in {S}^{\prime }\right\}$. We will use the following mixing coefficient

$\begin{array}{c}\alpha \left(\mathcal{B}\left(S\right),\mathcal{B}\left({S}^{\prime }\right)\right)=\mathrm{sup}\left\{|P\left(AB\right)-P\left(A\right)P\left(B\right)|,A\in \mathcal{B}\left(S\right),B\in \mathcal{B}\left({S}^{\prime }\right)\right\}\\ \le Ch\left(\text{Card}\left(S\right),\text{Card}\left({S}^{\prime }\right)\right)\phi \left(\text{dist}\left(S,{S}^{\prime }\right)\right),\end{array}$ (1)

where C is some positive constant, $\phi \left(u\right)↓0$ as $u\to \infty$, and $h\left(n,m\right)$ is a symmetric positive function nondecreasing in each variable.

If $h\equiv 1$, then $\left\{{X}_{i}:i\in {Z}^{N}\right\}$ is called strongly mixing. In Carbon et al., it is assumed that $h$ satisfies either

$h\left(n,m\right)\le \mathrm{min}\left\{n,m\right\},$ (2)

or

$h\left(n,m\right)\le {\left(n+m+1\right)}^{\stackrel{˜}{k}}$ (3)

where $\stackrel{˜}{k}\ge 1$. Conditions (2) and (3) were also used by Neaderhouser and Takahata, respectively, and are weaker than the strong mixing condition.

In recent years, there has been growing interest in statistical problems for random fields, because spatial data are modeled as finite observations of random fields. For asymptotic properties of kernel density estimators for spatial random fields, one can refer to Tran, Hallin et al., Cheng et al., El Machkouri, Wang and Woodroofe, among others. For spatial regression models, see Biau and Cadre, Lu and Chen, Hallin et al., Gao et al., Carbon et al., and Dabo-Niang and Yao.

The purpose of this paper is to investigate the convergence rate of asymptotic normality of frequency polygon estimation of the density function for mixing random fields. The frequency polygon has the advantage of being conceptually and computationally simple. Furthermore, Scott showed that the rate of convergence of the frequency polygon is superior to that of the histogram for smooth densities, and similar to that of kernel estimators. In recent years, the frequency polygon estimator has received increasing attention. For example, key references for non-spatial random variables include Scott, Beirlant et al., Carbon et al., Yang, Xin et al., etc. For spatial random fields, references on the frequency polygon include Carbon, Carbon et al., Bensaïd and Dabo-Niang, and El Machkouri. For continuously indexed random fields, Bensaïd and Dabo-Niang derived the integrated mean squared error of the frequency polygon and the optimal uniform strong rate of convergence. For discretely indexed random fields, Carbon obtained the optimal bin width that asymptotically minimizes the integrated error, together with the rate of uniform convergence; Carbon derived the asymptotic normality of the frequency polygon under the mixing conditions in which the function h in (1) satisfies (2) or (3); El Machkouri established the asymptotic normality of the frequency polygon for strongly mixing random fields (that is, $h\equiv 1$ ). However, the convergence rate of asymptotic normality of the frequency polygon has not been discussed in this literature. In this paper, we prove a Berry-Esseen bound for the frequency polygon and derive the convergence rate of asymptotic normality under weaker mixing conditions, which include the strongly mixing condition.

This paper is organized as follows: the next section presents the main results; Section 3 gives some lemmas, which will be used later; Section 4 provides the proofs of the theorems. Throughout this paper, the letter C denotes positive constants whose values are unimportant and may vary from line to line, but do not depend on $n$.

2. Main Results

Suppose that we observe $\left\{{X}_{i}\right\}$ on a rectangular region $\left\{i:1\preccurlyeq i\preccurlyeq n\right\}$. Consider a partition $\cdots <{x}_{-2}<{x}_{-1}<{x}_{0}<{x}_{1}<{x}_{2}<\cdots$ of the real line into equal intervals ${I}_{k}=\left[\left(k-1\right){b}_{n},k{b}_{n}\right)$ of length ${b}_{n}$, where ${b}_{n}$ is the bin width and $k=0,±1,±2,\cdots$. For $x\in \left[\left({k}_{0}-1/2\right){b}_{n},\left({k}_{0}+1/2\right){b}_{n}\right)$, consider the two adjacent histogram bins ${I}_{{k}_{0}}$ and ${I}_{{k}_{0}+1}$. Denote the numbers of observations falling in these intervals by ${v}_{{k}_{0}}$ and ${v}_{{k}_{0}+1}$, respectively. Then the values of the histogram in these two bins are given by

${f}_{{k}_{0}}={v}_{{k}_{0}}/\left(\stackrel{^}{n}{b}_{n}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}{f}_{{k}_{0}+1}={v}_{{k}_{0}+1}/\left(\stackrel{^}{n}{b}_{n}\right).$ (4)

Thus the frequency polygon estimation of the density function $f\left(x\right)$ is defined as follows

${f}_{n}\left(x\right)=\left(\frac{1}{2}+{k}_{0}-\frac{x}{{b}_{n}}\right){f}_{{k}_{0}}+\left(\frac{1}{2}-{k}_{0}+\frac{x}{{b}_{n}}\right){f}_{{k}_{0}+1}$ (5)

for $x\in \left[\left({k}_{0}-1/2\right){b}_{n},\left({k}_{0}+1/2\right){b}_{n}\right)$.
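To make the construction in (4)-(5) concrete, here is a minimal numerical sketch for one-dimensional data. The function name, sample size and simulation setup are illustrative assumptions, not from the paper:

```python
import numpy as np

def frequency_polygon(x, data, b):
    """Frequency polygon density estimate at a point x with bin width b.

    Bins are I_k = [(k-1)b, kb); for x in [(k0-1/2)b, (k0+1/2)b), the
    estimate linearly interpolates the histogram heights of the two
    adjacent bins I_{k0} and I_{k0+1}, as in (4)-(5).
    """
    data = np.asarray(data)
    n = data.size
    k0 = int(np.floor(x / b + 0.5))  # x lies in [(k0 - 1/2)b, (k0 + 1/2)b)
    # Histogram heights of the two adjacent bins, eq. (4)
    f_k0 = np.sum((data >= (k0 - 1) * b) & (data < k0 * b)) / (n * b)
    f_k1 = np.sum((data >= k0 * b) & (data < (k0 + 1) * b)) / (n * b)
    w = 0.5 + k0 - x / b             # interpolation weight, eq. (5)
    return w * f_k0 + (1.0 - w) * f_k1

rng = np.random.default_rng(0)
sample = rng.normal(size=10_000)
print(frequency_polygon(0.0, sample, b=0.25))  # close to the true density value f(0) ≈ 0.399
```

As ${b}_{n}\to 0$ with $\stackrel{^}{n}{b}_{n}\to \infty$, this estimate is consistent; the theorems below quantify how fast its standardized version approaches normality.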

The curve estimated by the frequency polygon is not smooth, but it tends to a smooth density curve as the interpolation bin width ${b}_{n}$ tends to zero. So we always assume that ${b}_{n}$ tends to zero as $n\to \infty$. In addition, we need the following basic assumptions.

Assumption (A1) The density $f\left(x\right)$ has a bounded derivative. For all $i,j$ and some constant $M>0$,

$|{f}_{j|i}\left(y|x\right)|\le M,$

where ${f}_{j|i}\left(y|x\right)$ is the conditional density of ${X}_{j}$ given ${X}_{i}$.

Assumption (A2) The random field $\left\{{X}_{i}:i\in {Z}^{N}\right\}$ satisfies (1) with $\phi \left(u\right)=O\left({u}^{-\theta }\right)$ for some $\theta >2N$.

Under Assumption (A2), we can take $\beta$ such that $N{\theta }^{-1}<\beta <1/2$; then ${\sum }_{i=1}^{\infty }\text{ }{i}^{N-1}{\phi }^{\beta }\left(i\right)<\infty$. Carefully checking the proof of Theorem 3.1 in Carbon et al., we find that conditions (2) and (3) are not used; in fact, the proof only uses the positive constant $h\left(1,1\right)$. Therefore, by Theorem 3.1 in Carbon et al., we obtain the following result on the asymptotic variance.

Proposition 1 Suppose that Assumptions (A1) and (A2) are satisfied. Then, for $x\in \left[\left({k}_{0}-1/2\right){b}_{n},\left({k}_{0}+1/2\right){b}_{n}\right)$, we have

$\stackrel{^}{n}{b}_{n}\text{Var}\left({f}_{n}\left(x\right)\right)={\sigma }_{n}^{2}\left(x\right)+o\left(1\right),$ (6)

where

${\sigma }_{n}^{2}\left(x\right)=\left[\frac{1}{2}+2{\left({k}_{0}-\frac{x}{{b}_{n}}\right)}^{2}\right]f\left(x\right).$ (7)

It should be noted that, as in Remark 3 of El Machkouri (2013), the asymptotic variance ${\sigma }_{n}^{2}\left(x\right)$ should be $\left(1/2+2{\left({k}_{0}-x/{b}_{n}\right)}^{2}\right)f\left(x\right)$ rather than $\left(1/2+{\left(2{k}_{0}-x/{b}_{n}\right)}^{2}\right)f\left(x\right)$.

Let ${S}_{n}={\left(\stackrel{^}{n}{b}_{n}\right)}^{1/2}\left[{f}_{n}\left(x\right)-E{f}_{n}\left(x\right)\right]{\sigma }_{n}^{-1}\left(x\right)$, let ${F}_{{S}_{n}}\left(u\right)=P\left({S}_{n}<u\right)$, and let $\Phi \left(u\right)$ denote the distribution function of $N\left(0,1\right)$. Now we give our main results as follows.

Theorem 1. Suppose that Assumptions (A1) and (A2) hold. Assume that there exist integers $p={p}_{n}\to \infty$ and $q={q}_{n}\to \infty$ such that

${\tau }_{1,n}\to 0,\text{\hspace{0.17em}}{\tau }_{2,n}\to 0,{\tau }_{3,n}\to 0$ (8)

where ${\tau }_{1,n}=q{p}^{-1},{\tau }_{2,n}={\left(\stackrel{^}{n}{b}_{n}\right)}^{-1/2}{p}^{N}$ and ${\tau }_{3,n}={\left(\stackrel{^}{n}{b}_{n}^{-1}\right)}^{1/2}{q}^{-\theta }h\left(\stackrel{^}{n},{p}^{N}\right)$. Then, for $x$ such that $f\left(x\right)>0$ and as $n\to \infty$, we have

$\underset{u\in R}{\mathrm{sup}}|{F}_{{S}_{n}}\left(u\right)-\Phi \left(u\right)|=O\left({\tau }_{n}\right)$ (9)

where ${\tau }_{n}={\tau }_{1,n}^{1/3}+{\tau }_{2,n}+{\tau }_{3,n}^{1/2}+{\tau }_{4,n}^{1/3}$ and ${\tau }_{4,n}={b}_{n}^{-1}{p}^{N}{q}^{-\theta }$.

Remark 1. In the theorem above, we need not assume that ${\tau }_{4,n}\to 0$, because $0\le {\tau }_{4,n}\le C{\tau }_{2,n}{\tau }_{3,n}\to 0$ by (8).

Theorem 1 provides a general Berry-Esseen bound for the frequency polygon estimator. Specific bounds can be obtained by choosing different ${b}_{n}$, p and q.

Theorem 2. Suppose that Assumptions (A1) and (A2) hold. Let ${b}_{n}=C{\stackrel{^}{n}}^{-\nu }$ for some $\nu \in \left(0,1\right)$. Define ${\eta }_{1}=\frac{\left(1+\nu \right)N}{\left(1-\nu \right)\epsilon }+\frac{2\left(1-\epsilon \right)N}{\left(1+3N\right)\epsilon }$, ${\eta }_{2}={\eta }_{1}+\frac{\left(\epsilon +3N\right)N}{\left(1+3N\right)\epsilon }$ and ${\eta }_{3}={\eta }_{1}+\frac{4N\stackrel{˜}{k}}{\left(1-\nu \right)\epsilon }$ for some $\epsilon \in \left(0,1\right)$.

1) If $h\equiv 1$ and

$\theta \ge \mathrm{max}\left\{2N,{\eta }_{1}\right\},$ (10)

2) or if (2) is satisfied and

$\theta \ge \mathrm{max}\left\{2N,{\eta }_{2}\right\},$ (11)

3) or if (3) is satisfied and

$\theta \ge \mathrm{max}\left\{2N,{\eta }_{3}\right\},$ (12)

then, for $x$ such that $f\left(x\right)>0$ and as $n\to \infty$, we have

$\underset{u\in R}{\mathrm{sup}}|{F}_{{S}_{n}}\left(u\right)-\Phi \left(u\right)|=O\left({\stackrel{^}{n}}^{-\frac{\left(1-\nu \right)\left(1-\epsilon \right)}{2\left(1+3N\right)}}\right)$ (13)

Carbon proved that the optimal bin width for the asymptotic mean squared error is

${b}_{opt}=2{\left(\frac{15}{49{R}_{2}\left(f\right)}\right)}^{1/5}{\stackrel{^}{n}}^{-1/5}$ (14)

where ${R}_{2}\left(f\right)={\int }_{-\infty }^{\infty }{\left[{f}^{″}\left(x\right)\right]}^{2}\text{d}x$, when $\theta >2N+3/2$. For this optimal bin width, it is easy to obtain the following result from Theorem 2.
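For a concrete feel for (14), the sketch below evaluates ${b}_{opt}$ for the standard normal density, for which ${R}_{2}\left(f\right)=3/\left(8\sqrt{\text{π}}\right)$ (a standard calculation); the sample size is an arbitrary illustration, not from the paper:

```python
import math

def optimal_bin_width(n_hat, r2):
    """Bin width minimizing the asymptotic mean integrated squared error,
    b_opt = 2 * (15 / (49 * R2(f)))**(1/5) * n_hat**(-1/5), as in (14)."""
    return 2.0 * (15.0 / (49.0 * r2)) ** 0.2 * n_hat ** (-0.2)

# R2(f) = integral of (f'')^2 for the standard normal density: 3 / (8 * sqrt(pi))
r2_normal = 3.0 / (8.0 * math.sqrt(math.pi))
print(optimal_bin_width(10_000, r2_normal))  # about 0.34
```

As expected from the ${\stackrel{^}{n}}^{-1/5}$ factor, the optimal bin width shrinks slowly as the sample size grows.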

Corollary 1. Suppose that Assumptions (A1) and (A2) hold and $h\equiv 1$. Let ${b}_{n}=C{\stackrel{^}{n}}^{-1/5}$. 1) If

$\theta \ge \mathrm{max}\left\{2N,\frac{3N}{2\epsilon }+\frac{2\left(1-\epsilon \right)N}{\left(1+3N\right)\epsilon }\right\}$ (15)

for some $\epsilon \in \left(0,1\right)$, then, for $x$ such that $f\left(x\right)>0$,

$\underset{u\in R}{\mathrm{sup}}|{F}_{{S}_{n}}\left(u\right)-\Phi \left(u\right)|=O\left({\stackrel{^}{n}}^{-\frac{2\left(1-\epsilon \right)}{5\left(1+3N\right)}}\right).$ (16)

2) If $\phi \left(u\right)$ tends to zero exponentially fast as u tends to infinity, then, for $x$ such that $f\left(x\right)>0$,

$\underset{u\in R}{\mathrm{sup}}|{F}_{{S}_{n}}\left(u\right)-\Phi \left(u\right)|=O\left({\stackrel{^}{n}}^{-\frac{2}{5\left(1+3N\right)}}\right).$ (17)
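For orientation, the exponent in (17) can be evaluated for low dimensions:

```latex
\sup_{u\in\mathbb{R}}\left|F_{S_n}(u)-\Phi(u)\right|
  = O\left(\hat{n}^{-\frac{2}{5(1+3N)}}\right)
  = \begin{cases}
      O\left(\hat{n}^{-1/10}\right), & N=1,\\
      O\left(\hat{n}^{-2/35}\right), & N=2.
    \end{cases}
```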

Remark 2. The asymptotic normality of the frequency polygon under strong mixing conditions was established by Carbon and El Machkouri. As far as we know, however, the convergence rate of asymptotic normality has not been studied. Our results address this gap.

3. Lemmas

In the proofs below, we need to estimate upper bounds for the covariances and variances of dependent variables. The following two lemmas give upper bounds for the covariance and the variance, respectively.

Lemma 1 (Roussas and Ioannides). Suppose that $\xi$ and $\eta$ are $\mathcal{B}\left(S\right)$ -measurable and $\mathcal{B}\left({S}^{\prime }\right)$ -measurable random variables, respectively. If $|\xi |\le {C}_{1}$ a.s. and $|\eta |\le {C}_{2}$ a.s., then

$|E\left(\xi \eta \right)-\left(E\xi \right)\left(E\eta \right)|\le 4{C}_{1}{C}_{2}\alpha \left(dist\left(S,{S}^{\prime }\right)\right).$ (18)

Let

${Y}_{i,k}=I\left(\left(k-1\right){b}_{n}\le {X}_{i}<k{b}_{n}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\stackrel{˜}{Y}}_{i,k}={Y}_{i,k}-E{Y}_{i,k}.$ (19)

Lemma 2 (Gao et al.). Let Assumptions (A1) and (A2) be satisfied. Suppose that the integer vectors $a=\left({a}_{1},{a}_{2},\cdots ,{a}_{N}\right)$, $m=\left({m}_{1},{m}_{2},\cdots ,{m}_{N}\right)$ and $n=\left({n}_{1},{n}_{2},\cdots ,{n}_{N}\right)$ satisfy $0\le {a}_{i}<{a}_{i}+{m}_{i}\le {n}_{i}$ for $1\le i\le N$. Then there exists a positive constant C, not depending on $n$, $a$ and $m$, such that

$E{\left(\underset{a\preccurlyeq i\preccurlyeq a+m}{\sum }{\stackrel{˜}{Y}}_{i,k}\right)}^{2}\le C\stackrel{^}{m}{b}_{n}.$ (20)

Lemma 3 (Lemma 3.7 in Yang). Suppose that $\left\{{\zeta }_{n}:n\ge 1\right\}$ and $\left\{{\eta }_{n}:n\ge 1\right\}$ are two sequences of random variables, $\left\{{\gamma }_{n}:n\ge 1\right\}$ is a sequence of positive constants, and ${\gamma }_{n}\to 0$. If

$\underset{u}{\mathrm{sup}}|{F}_{{\zeta }_{n}}\left(u\right)-\Phi \left(u\right)|=O\left({\gamma }_{n}\right),$ (21)

then for any $\epsilon >0$,

$\underset{u}{\mathrm{sup}}|{F}_{{\zeta }_{n}+{\eta }_{n}}\left(u\right)-\Phi \left(u\right)|=O\left({\gamma }_{n}+\epsilon +P\left(|{\eta }_{n}|\ge \epsilon \right)\right).$ (22)

4. Proofs

Proof of Theorem 1 We use the technique of “small” and “large” blocks, similar to that of Carbon et al. For $x\in \left(\left({k}_{0}-1/2\right){b}_{n},\left({k}_{0}+1/2\right){b}_{n}\right)$, define

${Z}_{i,{k}_{0}}={b}_{n}^{-1/2}\left\{\left(\frac{1}{2}+{k}_{0}-\frac{x}{{b}_{n}}\right){Y}_{i,{k}_{0}}+\left(\frac{1}{2}-{k}_{0}+\frac{x}{{b}_{n}}\right){Y}_{i,{k}_{0}+1}\right\}.$ (23)

and ${\stackrel{˜}{Z}}_{i,{k}_{0}}={Z}_{i,{k}_{0}}-E{Z}_{i,{k}_{0}}$. Then

${S}_{n}\left(x\right)={\stackrel{^}{n}}^{-1/2}\underset{1\preccurlyeq i\preccurlyeq n}{\sum }{\stackrel{˜}{Z}}_{i,{k}_{0}}.$ (24)

Now we divide ${S}_{n}\left(x\right)$ into the sum of large blocks and the sum of small blocks. In this blocking method, we assume $q<p$ and that $p,q$ satisfy (8). Assume that, for some integer vector $r=\left({r}_{1},{r}_{2},\cdots ,{r}_{N}\right)$, we have ${n}_{1}={r}_{1}\left(p+q\right),\cdots ,{n}_{N}={r}_{N}\left(p+q\right)$. If this is not the case, there will be a remainder term in the block splitting, but it does not change the proof much. For $1\preccurlyeq j\preccurlyeq r$, let

$U\left(1,n,j\right)={\stackrel{^}{n}}^{-1/2}\underset{{i}_{k}=\left({j}_{k}-1\right)\left(p+q\right)+1;1\le k\le N}{\overset{\left({j}_{k}-1\right)\left(p+q\right)+p}{\sum }}{\stackrel{˜}{Z}}_{i,{k}_{0}}$

$U\left(2,n,j\right)={\stackrel{^}{n}}^{-1/2}\underset{{i}_{k}=\left({j}_{k}-1\right)\left(p+q\right)+1;1\le k\le N-1}{\overset{\left({j}_{k}-1\right)\left(p+q\right)+p}{\sum }}\text{\hspace{0.17em}}\underset{{i}_{N}=\left({j}_{N}-1\right)\left(p+q\right)+p+1}{\overset{{j}_{N}\left(p+q\right)}{\sum }}{\stackrel{˜}{Z}}_{i,{k}_{0}}$

$U\left(3,n,j\right)={\stackrel{^}{n}}^{-1/2}\underset{{i}_{k}=\left({j}_{k}-1\right)\left(p+q\right)+1;1\le k\le N-2}{\overset{\left({j}_{k}-1\right)\left(p+q\right)+p}{\sum }}\text{\hspace{0.17em}}\underset{{i}_{N-1}=\left({j}_{N-1}-1\right)\left(p+q\right)+p+1}{\overset{{j}_{N-1}\left(p+q\right)}{\sum }}\text{\hspace{0.17em}}\underset{{i}_{N}=\left({j}_{N}-1\right)\left(p+q\right)+1}{\overset{\left({j}_{N}-1\right)\left(p+q\right)+p}{\sum }}{\stackrel{˜}{Z}}_{i,{k}_{0}}$

$U\left(4,n,j\right)={\stackrel{^}{n}}^{-1/2}\underset{{i}_{k}=\left({j}_{k}-1\right)\left(p+q\right)+1;1\le k\le N-2}{\overset{\left({j}_{k}-1\right)\left(p+q\right)+p}{\sum }}\text{\hspace{0.17em}}\underset{{i}_{N-1}=\left({j}_{N-1}-1\right)\left(p+q\right)+p+1}{\overset{{j}_{N-1}\left(p+q\right)}{\sum }}\text{\hspace{0.17em}}\underset{{i}_{N}=\left({j}_{N}-1\right)\left(p+q\right)+p+1}{\overset{{j}_{N}\left(p+q\right)}{\sum }}{\stackrel{˜}{Z}}_{i,{k}_{0}}$

and so on. Note that

$U\left({2}^{N}-1,n,j\right)={\stackrel{^}{n}}^{-1/2}\underset{{i}_{k}=\left({j}_{k}-1\right)\left(p+q\right)+p+1;1\le k\le N-1}{\overset{{j}_{k}\left(p+q\right)}{\sum }}\text{\hspace{0.17em}}\underset{{i}_{N}=\left({j}_{N}-1\right)\left(p+q\right)+1}{\overset{\left({j}_{N}-1\right)\left(p+q\right)+p}{\sum }}{\stackrel{˜}{Z}}_{i,{k}_{0}}.$

Finally

$U\left({2}^{N},n,j\right)={\stackrel{^}{n}}^{-1/2}\underset{{i}_{k}=\left({j}_{k}-1\right)\left(p+q\right)+p+1;1\le k\le N}{\overset{{j}_{k}\left(p+q\right)}{\sum }}{\stackrel{˜}{Z}}_{i,{k}_{0}}.$

For each integer $i\in \left\{1,2,\cdots ,{2}^{N}\right\}$, define

$T\left(i,n\right)=\underset{1\preccurlyeq j\preccurlyeq r}{\sum }U\left(i,n,j\right)$ (25)

and ${B}_{n}={\sum }_{i=2}^{{2}^{N}}\text{ }\text{ }T\left(i,n\right)$. Then

${S}_{n}\left(x\right)=T\left(1,n\right)+{B}_{n}.$ (26)

Enumerate the random variables $\left\{U\left(1,n,j\right):1\preccurlyeq j\preccurlyeq r\right\}$ in an arbitrary manner and refer to them as ${V}_{1},{V}_{2},\cdots ,{V}_{\stackrel{^}{r}}$. Note that $|{V}_{i}|\le C{\stackrel{^}{n}}^{-1/2}{p}^{N}{b}_{n}^{-1/2}$. Using Theorem 4 in Rio or Lemma 4.5 in Carbon et al., there exist independent random variables ${\stackrel{˜}{V}}_{1},{\stackrel{˜}{V}}_{2},\cdots ,{\stackrel{˜}{V}}_{\stackrel{^}{r}}$, where ${\stackrel{˜}{V}}_{i}$ has the same law as ${V}_{i}$, verifying

$\begin{array}{c}E|{V}_{i}-{\stackrel{˜}{V}}_{i}|\le C{\stackrel{^}{n}}^{-1/2}{p}^{N}{b}_{n}^{-1/2}h\left(\left(\stackrel{^}{r}-1\right){p}^{N},{p}^{N}\right)\phi \left(q\right)\\ \le C{\stackrel{^}{n}}^{-1/2}{p}^{N}{b}_{n}^{-1/2}h\left(\stackrel{^}{n},{p}^{N}\right)\phi \left(q\right).\end{array}$ (27)

Let $\stackrel{˜}{T}\left(1,n\right)={\sum }_{i=1}^{\stackrel{^}{r}}\text{ }{\stackrel{˜}{V}}_{i}$ and ${A}_{n}=T\left(1,n\right)-\stackrel{˜}{T}\left(1,n\right)$. Thus

${S}_{n}\left(x\right)=\stackrel{˜}{T}\left(1,n\right)+{A}_{n}+{B}_{n}.$ (28)

By Lemma 3, it is sufficient to show that

$P\left(|{A}_{n}|>{\tau }_{3,n}^{1/2}\right)=O\left({\tau }_{3,n}^{1/2}\right),$ (29)

$P\left(|{B}_{n}|>{\tau }_{1,n}^{1/3}+{\tau }_{4,n}^{1/3}\right)=O\left({\tau }_{1,n}^{1/3}+{\tau }_{4,n}^{1/3}\right),$ (30)

and

$\underset{u\in R}{\mathrm{sup}}|{F}_{\stackrel{˜}{T}\left(1,n\right)}\left(u\right)-\Phi \left(u\right)|=O\left({\tau }_{2,n}\right).$ (31)

Obviously, from (27)

$\begin{array}{c}P\left(|{A}_{n}|>{\tau }_{3,n}^{1/2}\right)\le C{\tau }_{3,n}^{-1/2}\underset{i=1}{\overset{\stackrel{^}{r}}{\sum }}E|{V}_{i}-{\stackrel{˜}{V}}_{i}|\\ \le C{\tau }_{3,n}^{-1/2}\stackrel{^}{r}{\stackrel{^}{n}}^{-1/2}{p}^{N}{b}_{n}^{-1/2}h\left(\stackrel{^}{n},{p}^{N}\right)\phi \left(q\right)\\ \le C{\tau }_{3,n}^{-1/2}{\left(\stackrel{^}{n}{b}_{n}^{-1}\right)}^{1/2}{q}^{-\theta }h\left(\stackrel{^}{n},{p}^{N}\right)\\ =C{\tau }_{3,n}^{1/2},\end{array}$ (32)

which gives (29). Now consider that

$\begin{array}{c}P\left(|{B}_{n}|>{\tau }_{1,n}^{1/3}+{\tau }_{4,n}^{1/3}\right)\le \underset{i=2}{\overset{{2}^{N}}{\sum }}\text{ }P\left(|T\left(i,n\right)|>{\tau }_{1,n}^{1/3}+{\tau }_{4,n}^{1/3}\right)\\ \le C{\left({\tau }_{1,n}^{1/3}+{\tau }_{4,n}^{1/3}\right)}^{-2}\underset{i=2}{\overset{{2}^{N}}{\sum }}\text{ }E{T}^{2}\left(i,n\right).\end{array}$ (33)

Note that

$\begin{array}{l}E{T}^{2}\left(2,n\right)=\stackrel{^}{r}E{U}^{2}\left(2,n,j\right)+\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }\text{Cov}\left(U\left(2,n,j\right),U\left(2,n,{j}^{\prime }\right)\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }=:{\Lambda }_{1}+{\Lambda }_{2}\end{array}$ (34)

By Lemma 2,

${\Lambda }_{1}\le C\stackrel{^}{r}{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}{p}^{N-1}q{b}_{n}\le C{p}^{-1}q=C{\tau }_{1,n}.$ (35)

Define $\mathcal{J}\left(2,n,j\right)=\left\{i:\left({j}_{k}-1\right)\left(p+q\right)+1\le {i}_{k}\le \left({j}_{k}-1\right)\left(p+q\right)+p,1\le k\le N-1,\left({j}_{N}-1\right)\left(p+q\right)+p+1\le {i}_{N}\le {j}_{N}\left(p+q\right)\right\}$. By Lemma 1,

$\begin{array}{l}|{\Lambda }_{2}|\le {\stackrel{^}{n}}^{-1}\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }\text{\hspace{0.17em}}\underset{i\in \mathcal{J}\left(2,n,j\right),{i}^{\prime }\in \mathcal{J}\left(2,n,{j}^{\prime }\right)}{\sum }|Cov\left({\stackrel{˜}{Z}}_{i,{k}_{0}},{\stackrel{˜}{Z}}_{{i}^{\prime },{k}_{0}}\right)|\\ \le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }\text{\hspace{0.17em}}\underset{i\in \mathcal{J}\left(2,n,j\right),{i}^{\prime }\in \mathcal{J}\left(2,n,{j}^{\prime }\right)}{\sum }\phi \left(‖i-{i}^{\prime }‖\right)\\ \le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }\text{\hspace{0.17em}}\underset{i\in \mathcal{J}\left(2,n,j\right),{i}^{\prime }\in \mathcal{J}\left(2,n,{j}^{\prime }\right)}{\sum }\phi \left(‖j-{j}^{\prime }‖q\right)\\ \le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}{p}^{2N-2}{q}^{2}\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }{\left(‖j-{j}^{\prime }‖q\right)}^{-\theta }\end{array}$

$\begin{array}{l}\le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}{p}^{2N-2}{q}^{2}\stackrel{^}{r}{q}^{-\theta }\underset{1\preccurlyeq j\preccurlyeq r}{\sum }{‖j‖}^{-\theta }\\ \le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}{p}^{2N}\stackrel{^}{r}{q}^{-\theta }{\left(q{p}^{-1}\right)}^{2}\\ \le C{b}_{n}^{-1}{p}^{N}{q}^{-\theta }\\ =C{\tau }_{4,n}.\end{array}$ (36)

Combining (34)-(36), we have

$E{T}^{2}\left(2,n\right)\le C\left({\tau }_{1,n}+{\tau }_{4,n}\right).$ (37)

Similarly, $E{T}^{2}\left(i,n\right)\le C\left({\tau }_{1,n}+{\tau }_{4,n}\right)$ for $3\le i\le {2}^{N}$. Thus, we obtain (30) from (33).

Finally, we show (31). Clearly,

$\text{Var}\left(T\left(1,n\right)\right)=\stackrel{^}{r}\text{Var}\left({V}_{1}\right)+\underset{1\le t,{t}^{\prime }\le \stackrel{^}{r},t\ne {t}^{\prime }}{\sum }\text{Cov}\left({V}_{t},{V}_{{t}^{\prime }}\right)$ (38)

Define $\mathcal{J}\left(1,n,j\right)=\left\{i:\left({j}_{k}-1\right)\left(p+q\right)+1\le {i}_{k}\le \left({j}_{k}-1\right)\left(p+q\right)+p,1\le k\le N\right\}$. Recalling (36), we have

$\begin{array}{l}|\underset{1\le t,{t}^{\prime }\le \stackrel{^}{r},t\ne {t}^{\prime }}{\sum }\text{Cov}\left({V}_{t},{V}_{{t}^{\prime }}\right)|\\ \le \underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }|\text{Cov}\left(U\left(1,n,j\right),U\left(1,n,{j}^{\prime }\right)\right)|\\ \le {\stackrel{^}{n}}^{-1}\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }\text{\hspace{0.17em}}\underset{i\in \mathcal{J}\left(1,n,j\right),{i}^{\prime }\in \mathcal{J}\left(1,n,{j}^{\prime }\right)}{\sum }|\text{Cov}\left({\stackrel{˜}{Z}}_{i,{k}_{0}},{\stackrel{˜}{Z}}_{{i}^{\prime },{k}_{0}}\right)|\\ \le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }\text{\hspace{0.17em}}\underset{i\in \mathcal{J}\left(1,n,j\right),{i}^{\prime }\in \mathcal{J}\left(1,n,{j}^{\prime }\right)}{\sum }\phi \left(‖i-{i}^{\prime }‖\right)\end{array}$

$\begin{array}{l}\le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }\text{\hspace{0.17em}}\underset{i\in \mathcal{J}\left(1,n,j\right),{i}^{\prime }\in \mathcal{J}\left(1,n,{j}^{\prime }\right)}{\sum }\phi \left(‖j-{j}^{\prime }‖q\right)\\ \le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}{p}^{2N}\underset{1\preccurlyeq j,{j}^{\prime }\preccurlyeq r,j\ne {j}^{\prime }}{\sum }{\left(‖j-{j}^{\prime }‖q\right)}^{-\theta }\\ \le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}{p}^{2N}\stackrel{^}{r}{q}^{-\theta }\underset{1\preccurlyeq j\preccurlyeq r}{\sum }{‖j‖}^{-\theta }\\ \le C{\stackrel{^}{n}}^{-1}{b}_{n}^{-1}{p}^{2N}\stackrel{^}{r}{q}^{-\theta }\end{array}$ (39)

and by Lemma 2

$\stackrel{^}{r}\text{Var}\left({V}_{1}\right)=\stackrel{^}{r}\text{Var}\left(U\left(1,n,1\right)\right)\le C\stackrel{^}{r}{\stackrel{^}{n}}^{-1}{p}^{N}\le C.$ (40)

Combining (38)-(40) yields that $\text{Var}\left(T\left(1,n\right)\right)=\stackrel{^}{r}\text{Var}\left({V}_{1}\right)+o\left(1\right)$ and $\text{Var}\left(T\left(1,n\right)\right)\le C$, so that $\text{Var}\left({S}_{n}\right)=\text{Var}\left(T\left(1,n\right)+{B}_{n}\right)=\text{Var}\left(T\left(1,n\right)\right)+o\left(1\right)$ from $E{B}_{n}^{2}\to 0$. Hence

$\begin{array}{c}\text{Var}\left(\stackrel{˜}{T}\left(1,n\right)\right)=\stackrel{^}{r}\text{Var}\left({\stackrel{˜}{V}}_{1}\right)=\stackrel{^}{r}\text{Var}\left({V}_{1}\right)\\ =\text{Var}\left(T\left(1,n\right)\right)+o\left(1\right)\\ =\text{Var}\left({S}_{n}\right)+o\left(1\right)\\ ={\sigma }_{n}^{2}\left(x\right)+o\left(1\right).\end{array}$ (41)

Let ${\Delta }_{n}\equiv {\left\{\text{Var}\left(\stackrel{˜}{T}\left(1,n\right)\right)\right\}}^{-3/2}{\sum }_{i=1}^{\stackrel{^}{r}}\text{ }\text{ }E{|{\stackrel{˜}{V}}_{i}|}^{3}$. Note that $\text{Var}\left(\stackrel{˜}{T}\left(1,n\right)\right)\ge {\sigma }_{n}^{2}\left(x\right)/2\ge f\left(x\right)/4$ for $x\in \left(\left({k}_{0}-1/2\right){b}_{n},\left({k}_{0}+1/2\right){b}_{n}\right)$. From (40), we have

${\Delta }_{n}\le C\underset{i=1}{\overset{\stackrel{^}{r}}{\sum }}\text{ }E{|{\stackrel{˜}{V}}_{i}|}^{3}\le C{\left(\stackrel{^}{n}{b}_{n}\right)}^{-1/2}{p}^{N}\stackrel{^}{r}\text{Var}\left({\stackrel{˜}{V}}_{1}\right)\le C{\left(\stackrel{^}{n}{b}_{n}\right)}^{-1/2}{p}^{N}\to 0,$ (42)

which yields (31) by the Berry-Esseen theorem. This completes the proof.

Proof of Theorem 2 In Theorem 1, take $p=\left[{\stackrel{^}{n}}^{\rho }\right]$ and $q=\left[{\stackrel{^}{n}}^{\tau }\right]$, where $\rho =\frac{\left(1-\nu \right)\left(\epsilon +3N\right)}{2N\left(1+3N\right)}$ and $\tau =\frac{\left(1-\nu \right)\epsilon }{2N}$ for $0<\nu <1$ and $0<\epsilon <1$. Note that ${b}_{n}=C{\stackrel{^}{n}}^{-\nu }$. Then

${\tau }_{1,n}=q{p}^{-1}={\stackrel{^}{n}}^{-\frac{3\left(1-\nu \right)\left(1-\epsilon \right)}{2\left(1+3N\right)}},$ (43)

${\tau }_{2,n}={\left(\stackrel{^}{n}{b}_{n}\right)}^{-1/2}{p}^{N}={\stackrel{^}{n}}^{-\frac{\left(1-\nu \right)\left(1-\epsilon \right)}{2\left(1+3N\right)}},$ (44)

${\tau }_{4,n}={b}_{n}^{-1}{p}^{N}{q}^{-\theta }={\stackrel{^}{n}}^{-\left[\theta \tau -\nu -\frac{\left(1-\nu \right)\left(\epsilon +3N\right)}{2\left(1+3N\right)}\right]}.$ (45)

First consider case 1), that is, $h\equiv 1$ and condition (10) holds. In this case, we have

${\tau }_{3,n}={\left(\stackrel{^}{n}{b}_{n}^{-1}\right)}^{1/2}{q}^{-\theta }h\left(\stackrel{^}{n},{p}^{N}\right)={\stackrel{^}{n}}^{-\frac{\left(1-\nu \right)\epsilon \theta -\left(1+\nu \right)N}{2N}}.$ (46)

Condition (10) implies that $\theta \ge \frac{\left(1+\nu \right)N}{\left(1-\nu \right)\epsilon }+\frac{2\left(1-\epsilon \right)N}{\left(1+3N\right)\epsilon }$. Combining this with (45) and (46), we obtain

${\tau }_{3,n}^{1/2}=O\left({\stackrel{^}{n}}^{-\frac{\left(1-\nu \right)\left(1-\epsilon \right)}{2\left(1+3N\right)}}\right),$ (47)

${\tau }_{4,n}^{1/3}=O\left({\stackrel{^}{n}}^{-\frac{\left(1-\nu \right)\left(1-\epsilon \right)}{2\left(1+3N\right)}}\right).$ (48)

From (43), (44), (47) and (48), it is easy to see that

${\tau }_{n}={\tau }_{1,n}^{1/3}+{\tau }_{2,n}+{\tau }_{3,n}^{1/2}+{\tau }_{4,n}^{1/3}=O\left({\stackrel{^}{n}}^{-\frac{\left(1-\nu \right)\left(1-\epsilon \right)}{2\left(1+3N\right)}}\right).$ (49)

This gives the desired result (13). Cases 2) and 3) can be proved by the same method as case 1). This completes the proof.

5. Conclusion

The frequency polygon estimator has the advantage of simple computation. It can save computational cost for large data sets, so it is a valuable method worth studying. In the existing literature, the asymptotic normality of the frequency polygon estimator has been studied, but its convergence rate has not been established. This paper proves a Berry-Esseen bound for the frequency polygon and derives the convergence rate of asymptotic normality under weaker mixing conditions. In particular, for the optimal bin width ${b}_{opt}=C{\stackrel{^}{n}}^{-1/5}$, it is shown that the convergence rate of asymptotic normality reaches ${\stackrel{^}{n}}^{-\frac{2}{5\left(1+3N\right)}}$ when the mixing coefficient tends to zero exponentially fast. These conclusions show that the asymptotic normality of the frequency polygon estimator also has a good convergence rate for dependent samples. Therefore, when the sample size is large, the normal distribution can be used to give a good confidence interval estimate.

Acknowledgements

This research was supported by the Natural Science Foundation of China (11461009) and the Scientific Research Project of the Guangxi Colleges and Universities (KY2015YB345).

Cite this paper: Yang, S. , Yang, X. , Xing, G. and Li, Y. (2018) The Rate of Asymptotic Normality of Frequency Polygon Density Estimation for Spatial Random Fields. Open Journal of Statistics, 8, 962-973. doi: 10.4236/ojs.2018.86064.
References

   Carbon, M., Francq, C. and Tran, L.T. (2010) Asymptotic Normality of Frequency Polygons for Random Fields. Journal of Statistical Planning and Inference, 140, 502-514.
https://doi.org/10.1016/j.jspi.2009.07.028

   Neaderhouser, C.C. (1980) Convergence of Block Spins Defined on Random Fields. Journal of Statistical Physics, 22, 673-684.
https://doi.org/10.1007/BF01013936

   Takahata, H. (1983) On the Rates in the Central Limit Theorem for Weakly Dependent Random Fields. Zeitschrift fur Wahrscheinlichkeitstheorie und verwandte Gebiete, 62, 477-480.

   Tran, L.T. (1990) Kernel Density Estimation on Random Fields. Journal of Multivariate Analysis, 34, 37-53.
https://doi.org/10.1016/0047-259X(90)90059-Q

   Hallin, M., Lu, Z. and Tran, L.T. (2001) Density Estimation for Spatial Linear Processes. Bernoulli, 7, 657-668.
https://doi.org/10.2307/3318731

   Hallin, M., Lu, Z. and Tran, L.T. (2004) Kernel Density Estimation for Spatial Processes: The L1 Theory. Journal of Multivariate Analysis, 88, 61-75.
https://doi.org/10.1016/S0047-259X(03)00060-5

   Cheng, T.L., Ho, H.C. and Lu, X. (2008) A Note on Asymptotic Normality of Kernel Estimation for Linear Random Fields on Z2. Journal of Theoretical Probability, 21, 267-286.
https://doi.org/10.1007/s10959-008-0146-x

   El Machkouri, M. (2011) Asymptotic Normality for the Parzen-Rosenblatt Density Estimator for Strongly Mixing Random Fields. Statistical Inference for Stochastic Processes, 14, 73-84.
https://doi.org/10.1007/s11203-011-9052-4

   El Machkouri, M. (2014) Kernel Density Estimation for Stationary Random Fields. ALEA—Latin American Journal of Probability and Mathematical Statistics, 11, 259-279.

   Wang, Y. and Woodroofe, M. (2014) On the Asymptotic Normality of Kernel Density Estimators for Causal Linear Random Fields. Journal of Multivariate Analysis, 123, 201-213.
https://doi.org/10.1016/j.jmva.2013.09.008

   Biau, G. and Cadre, B. (2004) Nonparametric Spatial Prediction. Statistical Inference for Stochastic Processes, 7, 327-349.
https://doi.org/10.1023/B:SISP.0000049116.23705.88

   Lu, Z. and Chen, X. (2004) Spatial Kernel Regression Estimation: Weak Consistency. Statistics & Probability Letters, 68, 125-136.
https://doi.org/10.1016/j.spl.2003.08.014

   Hallin, M., Lu, Z. and Tran, L.T. (2004) Local Linear Spatial Regression. Annals of Statistics, 32, 2469-2500.
https://doi.org/10.1214/009053604000000850

   Gao, J., Lu, Z. and Tjøstheim, D. (2006) Estimation in Semi-Parametric Spatial Regression. Annals of Statistics, 34, 1395-1435.
https://doi.org/10.1214/009053606000000317

   Carbon, M., Francq, C. and Tran, L.T. (2007) Kernel Regression Estimation for Random Fields. Journal of Statistical Planning and Inference, 137, 778-798.
https://doi.org/10.1016/j.jspi.2006.06.008

   Dabo-Niang, S. and Yao, A.F. (2007) Kernel Regression Estimation for Continuous Spatial Processes. Mathematical Methods of Statistics, 16, 298-317.
https://doi.org/10.3103/S1066530707040023

   Scott, D.W. (1985) Frequency Polygons: Theory and Application. Journal of the American Statistical Association, 80, 348-354.
https://doi.org/10.1080/01621459.1985.10478121

   Beirlant, J., Berlinet, A. and Györfi, L. (1999) On Piecewise Linear Density Estimators. Statistica Neerlandica, 53, 287-308.
https://doi.org/10.1111/1467-9574.00113

   Carbon, M., Garel, B. and Tran, L.T. (1997) Frequency Polygons for Weakly Dependent Processes. Statistics & Probability Letters, 33, 1-13.
https://doi.org/10.1016/S0167-7152(96)00104-6

   Yang, X. (2015) Frequency Polygon Estimation of Density Function for Dependent Samples. Journal of the Korean Statistical Society, 44, 530-537.
https://doi.org/10.1016/j.jkss.2015.01.006

   Xing, G.D., Yang, S.C. and Liang, X. (2015) On the Uniform Consistency of Frequency Polygons for ψ-Mixing Samples. Journal of the Korean Statistical Society, 44, 179-186.
https://doi.org/10.1016/j.jkss.2014.07.001

   Bensaïd, N. and Dabo-Niang, S. (2010) Frequency Polygons for Continuous Random Fields. Statistical Inference for Stochastic Processes, 13, 55-80.
https://doi.org/10.1007/s11203-009-9038-7

   El Machkouri, M. (2013) On the Asymptotic Normality of Frequency Polygons for Strongly Mixing Spatial Processes. Statistical Inference for Stochastic Processes, 16, 193-206.
https://doi.org/10.1007/s11203-013-9086-x

   Carbon, M. (2006) Polygone des fréquences pour des champs aléatoires. Comptes Rendus Mathematique, 342, 693-696.
https://doi.org/10.1016/j.crma.2006.02.019

   Roussas, G.G. and Ioannides, D.A. (1987) Moment Inequalities for Mixing Sequences of Random Variables. Stochastic Analysis and Applications, 5, 61-120.
https://doi.org/10.1080/07362998708809108

   Gao, J., Lu, Z. and Tjøstheim, D. (2008) Moment Inequalities for Spatial Processes. Statistics and Probability Letters, 78, 687-697.
https://doi.org/10.1016/j.spl.2007.09.032

   Yang, S.C. (2003) Uniformly Asymptotic Normality of the Regression Weighted Estimator for Negatively Associated Samples. Statistics and Probability Letters, 62, 101-110.
https://doi.org/10.1016/S0167-7152(02)00427-3

   Rio, E. (1995) The Functional Law of the Iterated Logarithm for Stationary Strongly Mixing Sequences. Annals of Probability, 23, 1188-1203.
https://doi.org/10.1214/aop/1176988179

   Carbon, M., Tran, L.T. and Wu, B. (1997) Kernel Density Estimation for Random Fields (Density Estimation for Random Fields). Statistics & Probability Letters, 36, 115-125.
https://doi.org/10.1016/S0167-7152(97)00054-0

   Carbon, M., Hallin, M. and Tran, L.T. (1996) Kernel Density Estimation for Random Fields: The L1 Theory. Journal of Nonparametric Statistics, 6, 157-170.
https://doi.org/10.1080/10485259608832669
