Let $Y$ be a scalar response variable and $X$ be an explanatory variable in regression. We consider the nonparametric regression model
$$Y = g(X) + \varepsilon, \qquad (1)$$
where $g$ is an unknown nonparametric regression function and $\varepsilon$ is a noise variable; given $X$, the errors are assumed to be independent and identically distributed. We consider model (1) with the explanatory variable $X$ measured with error and $Y$ measured exactly. That is, instead of the true $X$, the surrogate variable $W$ is observed. Throughout we assume
$$E(Y \mid X, W) = E(Y \mid X), \qquad (2)$$
which is always satisfied if, for example, $W$ is a function of $X$ and some independent noise (see e.g. ).
The nonparametric regression model (1) in the presence of errors in covariates has attracted considerable attention in the literature and is by now well understood; see Carroll et al. for an excellent source of references for various approaches. However, these works mostly focus on specifying the error model structure between the true variables $X$ and the surrogate variables $W$ (e.g. the classical error structure and the Berkson error structure). In practice, the relationship between the surrogate variables and the true variables can be considerably more complicated than the classical or Berkson structural equations usually assumed. This situation presents serious difficulties for valid statistical inference. A common solution is to use validation data to infer the missing information about the relationship between $W$ and $X$.
We consider settings where some validation data are available for relating $X$ and $W$. To be specific, we assume that independent validation data $\{(X_i, W_i),\ i = 1, \ldots, n\}$ are available in addition to the independent primary data $\{(Y_i, W_i),\ i = 1, \ldots, N\}$. Recently, several approaches to statistical inference based on surrogate data and a validation sample have become available (see, for example, , - , among others). However, these approaches are not applicable to the nonparametric regression measurement error model with a validation data set: the models considered by the above-referenced authors are parametric or semiparametric, while model (1) is nonparametric. With the help of validation data, , and developed estimation methods for the nonparametric regression model (1) with measurement error. However, assumes that the response $Y$, but not the covariate $X$, is measured with error; the method proposed by cannot be extended to the case where the explanatory variable $X$ is a vector; and the approach proposed by is computationally burdensome.
In this paper, without specifying any structural equations, an orthogonal series method is proposed to estimate $g$ with the help of validation data. As explained in Section 2, we estimate $g$ by solving the following Fredholm equation of the first kind,
$$(Tg)(w) := \int_0^1 g(x) f_{XW}(x, w)\, dx = E(Y \mid W = w) f_W(w) =: m(w). \qquad (3)$$
Here, we propose an orthogonal series estimator of $T$ using the validation data. Using a similar approach, we estimate $m$ based on the primary data set. An estimator of $g$ is then obtained by the Tikhonov regularization method.
This paper is arranged as follows. In Section 2, we define the orthogonal series estimation method. In Section 3, we state the convergence rates of the proposed estimator. Simulation results are reported in Section 4, and a brief discussion is given in Section 5. Proofs of the theorems are presented in the Appendix.
2. Model and Series Estimation
2.1. The Model and Regularized Solution

Recall model (1) and the assumptions below it. Assume that, in addition to the primary data set consisting of $N$ independent and identically distributed observations $\{(Y_i, W_i),\ i = 1, \ldots, N\}$ from model (1), a validation data set consisting of $n$ independent and identically distributed observations $\{(X_i, W_i),\ i = 1, \ldots, n\}$ is available. Furthermore, we suppose that $X$ and $W$ are both real-valued random variables. The extension to random vectors complicates the notation but does not affect the main ideas and results. Without loss of generality, let the supports of $X$ and $W$ both be contained in $[0, 1]$ (otherwise, one can carry out monotone transformations of $X$ and $W$).
Let $f_{XW}(x, w)$ and $f_W(w)$ denote respectively the joint density of $(X, W)$ and the marginal density of $W$. Then, according to (2), we have
$$E(Y \mid W = w) f_W(w) = \int_0^1 g(x) f_{XW}(x, w)\, dx =: m(w). \qquad (4)$$
Define the operator $T: L^2[0, 1] \to L^2[0, 1]$ as
$$(T\varphi)(w) = \int_0^1 \varphi(x) f_{XW}(x, w)\, dx,$$
so that Equation (4) is equivalent to the operator equation
$$Tg = m. \qquad (5)$$
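For completeness, (4) follows from (1) and (2) by the standard conditioning argument:
$$E(Y \mid W = w) f_W(w) = E\{E(Y \mid X, W) \mid W = w\} f_W(w) = E\{g(X) \mid W = w\} f_W(w) = \int_0^1 g(x) f_{XW}(x, w)\, dx,$$
where the first equality is the tower property, the second uses (2) together with $E(Y \mid X) = g(X)$ from (1), and the third writes the conditional expectation as an integral against $f_{X|W}(x \mid w) = f_{XW}(x, w)/f_W(w)$.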
According to Equation (5), the function $g$ is the solution of a Fredholm integral equation of the first kind. This inverse problem is known to be ill-posed and requires a regularization method. A variety of regularization schemes are available in the literature (see e.g. ), but we focus in this paper on the Tikhonov regularized solution
$$g_\alpha = \arg\min_{g} \left\{ \|Tg - m\|^2 + \alpha \|g\|^2 \right\}, \qquad (6)$$
where $\alpha > 0$ in the penalization term $\alpha\|g\|^2$ is the regularization parameter.
We define the adjoint operator $T^*$ of $T$,
$$(T^*\psi)(x) = \int_0^1 \psi(w) f_{XW}(x, w)\, dw,$$
where $\psi \in L^2[0, 1]$. Then the regularized solution (6) can be written equivalently as
$$g_\alpha = (\alpha I + T^* T)^{-1} T^* m. \qquad (7)$$
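The passage from (6) to (7) is the usual first-order condition: the Tikhonov functional is quadratic, so setting its derivative to zero gives
$$T^*(T g_\alpha - m) + \alpha g_\alpha = 0 \quad \Longleftrightarrow \quad (\alpha I + T^* T) g_\alpha = T^* m,$$
and $\alpha I + T^* T$ is invertible for every $\alpha > 0$ because $T^* T$ is positive semidefinite.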
2.2. Orthogonal Series Estimation
In order to compute the solution (7), we need to estimate $T$, $T^*$ and $m$. In this paper, we consider the orthogonal series method. Under the regularity conditions in Section 3, the density function $f_{XW}$ and the function $m$ may be approximated to any desired accuracy by truncated orthogonal series,
$$f_{XW}(x, w) \approx \sum_{j=1}^{K} \sum_{k=1}^{K} a_{jk}\, p_j(x)\, p_k(w), \qquad m(w) \approx \sum_{k=1}^{K} b_k\, p_k(w).$$
Here, $\{p_j\}$ is an orthonormal basis of $L^2[0, 1]$, which may be trigonometric, polynomial, spline, wavelet, and so on. A discussion of different bases and their properties can be found in the literature (see e.g. ). To be specific, here and in what follows we consider the normalized Legendre polynomials on $[0, 1]$, which can be obtained through Rodrigues' formula
$$p_j(x) = \frac{\sqrt{2j + 1}}{j!} \frac{d^j}{dx^j} \left\{ (x^2 - x)^j \right\}. \qquad (8)$$
The integer $K$ is a truncation point, which is the main smoothing parameter in the approximating series, and $a_{jk}$ and $b_k$ represent the generalized Fourier coefficients of $f_{XW}$ and $m$, respectively.
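As an illustration, the basis in (8) can be evaluated numerically through the identity $p_j(x) = \sqrt{2j + 1}\, P_j(2x - 1)$, where $P_j$ is the classical Legendre polynomial on $[-1, 1]$. A minimal sketch (the function names are ours):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def p(j, x):
    """Normalized shifted Legendre polynomial p_j on [0, 1]:
    p_j(x) = sqrt(2j + 1) * P_j(2x - 1)."""
    coef = np.zeros(j + 1)
    coef[j] = 1.0                      # selects the degree-j polynomial P_j
    return np.sqrt(2 * j + 1) * leg.legval(2 * np.asarray(x) - 1, coef)

# Orthonormality check on [0, 1] by Gauss-Legendre quadrature
# (exact here, since the integrands are low-degree polynomials).
u, w = leg.leggauss(10)
x, w = (u + 1) / 2, w / 2              # map nodes/weights from [-1, 1] to [0, 1]
for j in range(4):
    for k in range(4):
        ip = np.sum(w * p(j, x) * p(k, x))
        assert abs(ip - (1.0 if j == k else 0.0)) < 1e-12
```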
Note that $a_{jk} = E\{p_j(X) p_k(W)\}$ and $b_k = E\{Y p_k(W)\}$. Intuitively, we can obtain the estimators of $a_{jk}$, $b_k$, $f_{XW}$ and $m$ by
$$\hat a_{jk} = \frac{1}{n} \sum_{i=1}^{n} p_j(X_i) p_k(W_i), \qquad \hat b_k = \frac{1}{N} \sum_{i=1}^{N} Y_i p_k(W_i),$$
$$\hat f_{XW}(x, w) = \sum_{j=1}^{K} \sum_{k=1}^{K} \hat a_{jk}\, p_j(x)\, p_k(w), \qquad \hat m(w) = \sum_{k=1}^{K} \hat b_k\, p_k(w),$$
respectively. The operators $T$ and $T^*$ can then be consistently estimated by
$$(\hat T \varphi)(w) = \int_0^1 \varphi(x) \hat f_{XW}(x, w)\, dx, \qquad (\hat T^* \psi)(x) = \int_0^1 \psi(w) \hat f_{XW}(x, w)\, dw.$$
Consequently, the estimator of $g$ is given by
$$\hat g_\alpha = (\alpha I + \hat T^* \hat T)^{-1} \hat T^* \hat m. \qquad (9)$$
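A convenient way to compute (9) is to work entirely in the coefficient space of the Legendre basis: with $\hat A = (\hat a_{jk})$, the estimated operator $\hat T$ acts on a coefficient vector $c$ as $\hat A^{\mathsf T} c$ and $\hat T^*$ as $\hat A d$, so (9) reduces to a $K \times K$ linear system. The sketch below (reusing the helper `p` from the previous snippet; data layout and names are our assumptions) illustrates this:

```python
import numpy as np

def basis(K, x):
    """(K, len(x)) matrix with rows p_0(x), ..., p_{K-1}(x)."""
    return np.stack([p(j, x) for j in range(K)])

def series_coefficients(Xv, Wv, Wp, Yp, K, alpha):
    """Sketch of estimator (9) in Legendre coefficient space.
    Xv, Wv: validation sample of (X, W); Wp, Yp: primary sample of (W, Y)."""
    A = basis(K, Xv) @ basis(K, Wv).T / len(Xv)   # A[j, k] ~ a_jk = E p_j(X) p_k(W)
    b = basis(K, Wp) @ np.asarray(Yp) / len(Yp)   # b[k]   ~ b_k  = E Y p_k(W)
    # (alpha I + T*T) c = T* m  becomes  (alpha I + A A') c = A b
    c = np.linalg.solve(alpha * np.eye(K) + A @ A.T, A @ b)
    return c

def g_hat(c, x):
    """Evaluate the estimate: g_hat(x) = sum_j c_j p_j(x)."""
    return c @ basis(len(c), x)
```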
3. Theoretical Properties
The main objective of this section is to derive the statistical properties of the estimator proposed in Section 2.2. For this purpose, we assume:
Assumption 1. 1) The support of $(X, W)$ is contained in $[0, 1]^2$; 2) The joint density $f_{XW}$ of $(X, W)$ is square integrable w.r.t. the Lebesgue measure on $[0, 1]^2$.
This is a sufficient condition for $T$ to be a Hilbert-Schmidt operator and therefore compact (see ). As a consequence of compactness, a singular value decomposition exists. Let $\{\lambda_j\}_{j \ge 1}$ be the sequence of nonzero singular values of $T$, and $\{\phi_j\}_{j \ge 1}$ and $\{\psi_j\}_{j \ge 1}$ two orthonormal sequences such that (see )
$$T \phi_j = \lambda_j \psi_j, \qquad T^* \psi_j = \lambda_j \phi_j, \qquad j = 1, 2, \ldots$$
We define the $b$-regularity space $\Phi_b$ for the operator $T$ as
$$\Phi_b = \left\{ \varphi \in L^2[0, 1] : \sum_{j=1}^{\infty} \frac{\langle \varphi, \phi_j \rangle^2}{\lambda_j^{2b}} < \infty \right\}, \qquad b > 0.$$
Here and below, we denote by $\langle \cdot, \cdot \rangle$ the scalar product in $L^2[0, 1]$.
Assumption 2. We have $g \in \Phi_b$ for some $b > 0$.
We then obtain the following result (see  ).
Proposition 3.1. Suppose Assumptions 1 and 2 hold. Then we have $\|g_\alpha - g\|^2 = O\left(\alpha^{\min(b, 2)}\right)$, where $g_\alpha = (\alpha I + T^* T)^{-1} T^* m$.
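The order in Proposition 3.1 can be read off the singular system. In the basis $\{\phi_j\}$ we have $g_\alpha = \sum_j \lambda_j^2 (\alpha + \lambda_j^2)^{-1} \langle g, \phi_j \rangle \phi_j$, so that
$$\|g_\alpha - g\|^2 = \sum_j \frac{\alpha^2}{(\alpha + \lambda_j^2)^2} \langle g, \phi_j \rangle^2 = \sum_j \frac{\alpha^2 \lambda_j^{2b}}{(\alpha + \lambda_j^2)^2} \cdot \frac{\langle g, \phi_j \rangle^2}{\lambda_j^{2b}} \le \sup_{0 < \lambda \le \lambda_1} \frac{\alpha^2 \lambda^{2b}}{(\alpha + \lambda^2)^2} \sum_j \frac{\langle g, \phi_j \rangle^2}{\lambda_j^{2b}},$$
where the last sum is finite under Assumption 2. An elementary maximization shows the supremum is $O(\alpha^{\min(b, 2)})$: for $b < 2$ the maximizer is at $\lambda^2 = \alpha b/(2 - b)$, giving order $\alpha^b$, while for $b \ge 2$ one has $\lambda^{2b}/(\alpha + \lambda^2)^2 \le \lambda^{2b - 4} \le \lambda_1^{2b - 4}$, giving order $\alpha^2$.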
In order to obtain the rate of convergence for $\hat g_\alpha$, we impose the following additional conditions:
Assumption 3. 1) The joint density $f_{XW}$ is $r$-times continuously differentiable on $[0, 1]^2$; 2) The function $m$ is $s$-times continuously differentiable on $[0, 1]$.
Assumption 4. The function $E(Y^2 \mid W = w)$ is bounded uniformly on $[0, 1]$.
Assumption 5. 1) $\alpha \to 0$ and $K \to \infty$; 2) $K^{-2r}/\alpha \to 0$, $K^{-2s}/\alpha \to 0$, $K^2/(n\alpha) \to 0$ and $K/(N\alpha) \to 0$, as $n \to \infty$, $N \to \infty$.
Theorem 3.1. Suppose Assumptions 1 - 5 hold. Let $\delta_{n,N} = K^{-2r} + K^{-2s} + K^2/n + K/N$. Then we have
$$\|\hat g_\alpha - g\|^2 = O_p\left( \frac{\delta_{n,N}}{\alpha} + \alpha^{\min(b, 2)} \right). \qquad (10)$$
In (10), the term $K^{-2r} + K^{-2s}$ arises from the bias of $\hat T$ and $\hat m$ caused by truncating the series approximations of $f_{XW}$ and $m$. This truncation bias decreases as $K$ increases. The terms $K/N$ and $K^2/n$ are respectively induced by random surrogate sampling errors and random validation sampling errors in the estimates of the generalized Fourier coefficients $b_k$ and $a_{jk}$. By Theorem 3.1, it is easy to obtain the following corollary.
Corollary 3.1. Suppose the assumptions of Theorem 3.1 are satisfied. Let $K$ be chosen to minimize $\delta_{n,N}$ and let $\alpha = \delta_{n,N}^{1/(\min(b,2) + 1)}$, so that the two terms in (10) balance. Then we have
$$\|\hat g_\alpha - g\|^2 = O_p\left( \delta_{n,N}^{\min(b,2)/(\min(b,2) + 1)} \right).$$
The proofs of all the results are reported in the Appendix.
4. Simulation Studies
In this section, we conduct simulation studies of the finite-sample performance of the proposed estimator. First, for comparison, we consider the standard Nadaraya-Watson estimator based on the primary data set. It should be pointed out that the Nadaraya-Watson estimator computed from the true covariates can serve as a gold standard in the simulation study, even though it is practically unachievable due to measurement errors. Second, the performance of an estimator $\hat g$ is assessed using the square root of average squared errors (RASE),
$$\mathrm{RASE}(\hat g) = \left\{ \frac{1}{n_0} \sum_{k=1}^{n_0} \left( \hat g(x_k) - g(x_k) \right)^2 \right\}^{1/2},$$
where $\{x_k,\ k = 1, \ldots, n_0\}$ are grid points at which $\hat g$ is evaluated.
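In code, with the grid values precomputed, RASE is a one-liner (a sketch; the names are ours):

```python
import numpy as np

def rase(g_hat_vals, g_vals):
    """Square root of the average squared error over the evaluation grid."""
    d = np.asarray(g_hat_vals) - np.asarray(g_vals)
    return np.sqrt(np.mean(d ** 2))
```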
We considered model (1) with the regression function $g$ given by curves (a) and (b) below, part of whose specification involves the density of a normal variable. To perform this simulation, we generate $W$ from its marginal density $f_W$ and $X$ from the conditional density $f_{X|W}$, and set $Y = g(X) + \varepsilon$ according to model (1). The densities $f_W$ and $f_{X|W}$, chosen in the beta family, are indexed by a parameter $\theta$ (in fact, the greater the value of $\theta$, the smaller the variance of the measurement error). Simulations were run over a range of validation and primary data sizes, with $n$ and $N$ linked through fixed ratios. For each case, 500 simulated data sets were generated for each sample size combination $(n, N)$.
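The exact design constants were lost in extraction, so the snippet below is only a stand-in generator with the same shape: a beta marginal for $W$, a beta conditional for $X \mid W$ whose concentration parameter $\theta$ plays the role described above (larger $\theta$, smaller measurement-error variance), and a placeholder regression curve. Every numerical choice in it is an assumption, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2024)

def g(x):
    # Placeholder target curve; the paper's curves (a) and (b) differ.
    return np.sin(2 * np.pi * x)

def simulate(n, N, theta, sigma=0.1):
    """Stand-in generator: validation pairs (X, W) and primary pairs (W, Y).
    W ~ Beta(2, 2); X | W ~ Beta(theta*W + 1, theta*(1 - W) + 1), so X
    concentrates around W as theta grows; Y = g(X) + eps as in model (1)."""
    def xw(size):
        W = rng.beta(2.0, 2.0, size)
        X = rng.beta(theta * W + 1.0, theta * (1.0 - W) + 1.0)
        return X, W
    Xv, Wv = xw(n)                           # validation data
    Xp, Wp = xw(N)                           # primary data (X unobserved)
    Yp = g(Xp) + rng.normal(0.0, sigma, N)
    return (Xv, Wv), (Wp, Yp)
```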
To implement our method (9), the regularization parameter $\alpha$ and the truncation parameter $K$ must be chosen. Here, we select $\alpha$ and $K$ by minimizing the following two-dimensional cross-validation score,
$$\mathrm{CV}(\alpha, K) = \sum_{i=1}^{N} \left\{ Y_i - \hat E^{(-i)}_{\alpha, K}(Y \mid W_i) \right\}^2,$$
where $\hat E^{(-i)}_{\alpha, K}(Y \mid W_i)$ denotes the prediction of $Y_i$ computed from the solutions based on (9) after deleting the $i$th primary observation $(Y_i, W_i)$. In addition, for the Nadaraya-Watson estimator, we used the standard normal kernel, and the bandwidth was selected by the leave-one-out CV approach. In all graphs, to illustrate the performance of an estimator, we show the estimated curves corresponding to the first (Q1), second (Q2) and third (Q3) quartiles of the ordered RASEs. The target curve is always represented by a solid curve.
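A direct, brute-force implementation of the two-dimensional search loops over a grid of $(\alpha, K)$ pairs and, for each $i$, refits (9) without the $i$th primary observation. Predicting $Y_i$ from $W_i$ through $\hat E(Y \mid W_i) = (\hat T \hat g)(W_i) / \hat f_W(W_i)$, with all unknowns replaced by series estimates, is our reading of the criterion, not a formula from the paper:

```python
import numpy as np
from itertools import product

def cv_select(alphas, Ks, Xv, Wv, Wp, Yp):
    """Leave-one-out CV over a grid of (alpha, K); reuses basis() and
    series_coefficients() from the estimation sketch above."""
    Wp, Yp = np.asarray(Wp), np.asarray(Yp)
    best, best_score = None, np.inf
    for alpha, K in product(alphas, Ks):
        A = basis(K, Xv) @ basis(K, Wv).T / len(Xv)  # represents T-hat
        fw = basis(K, Wv).mean(axis=1)               # Fourier coeffs of f_W-hat
        score = 0.0
        for i in range(len(Yp)):
            keep = np.arange(len(Yp)) != i
            c = series_coefficients(Xv, Wv, Wp[keep], Yp[keep], K, alpha)
            pk = basis(K, Wp[i:i + 1])[:, 0]         # p_k(W_i)
            pred = (A.T @ c) @ pk / max(fw @ pk, 1e-3)
            score += (Yp[i] - pred) ** 2
        if score < best_score:
            best, best_score = (alpha, K), score
    return best
```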
Figure 1 shows the true regression curve and the quartile curves of the 500 estimates under different values of $\theta$, for a fixed sample size, in example (a). From this figure, we clearly see that the proposed estimator performs very well in this study. Taking the measurement error levels into account, as the variance of the measurement error decreases, the proposed estimator tends to have smaller bias at the peaks of the regression curve.
Figure 2 illustrates the way in which the estimator improves as the sample size increases. We compare the results obtained when estimating curve (b) under different sample sizes for a fixed $\theta$. We see clearly that, as the sample size increases, the quality of the estimator improves significantly.
Table 1 compares, for various sample sizes, the results obtained for estimating curves (a) and (b) under the different values of $\theta$. The RASEs, evaluated at 27 grid points of $x$, are presented. Our results show that the proposed estimator outperforms the naive Nadaraya-Watson estimator. Also, the performance of the proposed estimator improves considerably (i.e., the corresponding RASEs decrease) as the sample sizes increase. For any nonparametric method in a measurement error regression problem, the quality of the estimator also depends on the discrepancy of the observed sample; that is, the performance of the estimator depends on the variance of the measurement error. Here, we compare the results for different values of $\theta$. As expected, Table 1 shows that the effect of this variance on the estimator's performance is evident.
Figure 1. Estimation of regression function (a) for samples of a given size, under three values of $\theta$ (left, middle and right panels). The solid curve is the target curve.
Table 1. RASE comparison for the proposed estimator and the Nadaraya-Watson estimator, under various sample sizes and values of $\theta$.
Figure 2. Estimation of regression function (b) under increasing sample sizes (left, middle and right panels), for a fixed $\theta$. The solid curve is the target curve.
5. Discussion

In this paper, we have proposed a new method for estimating nonparametric regression models when the explanatory variable is measured with error, under the assumption that a proper validation data set is available. The validation data set allows us to estimate the joint density of the true variable and the surrogate variable via an orthogonal series method. In practice, our proposed method can be extended to multidimensional cases in which $X$ is a $p$-variate explanatory variable. When the dimension of $X$, and hence of $W$, is large, the curse of dimensionality may arise in the multivariate density estimation of $f_{XW}$. In this case, the exponential series estimator proposed by ensures the positiveness of the estimated density. After obtaining the exponential series estimator of $f_{XW}$, we can obtain results similar to those in the previous sections. The asymptotic theory in this setting remains to be pursued in future research.
Acknowledgements

This work was supported by grant GJJ160927 and the Natural Science Foundation of Jiangxi Province of China under grant number 20142BAB211018.
Appendix

Proofs of Theorem 3.1 and Corollary 3.1
We first present some lemmas that are needed to prove the main theorem.
Lemma 7.1. Suppose Assumptions 1 and 3(1) hold. Then
$$\|\hat T - T\|_{HS}^2 = O_p\left( \frac{K^2}{n} + K^{-2r} \right),$$
where $\|\cdot\|_{HS}$ denotes the Hilbert-Schmidt norm, i.e.:
$$\|\hat T - T\|_{HS}^2 = \int_0^1 \int_0^1 \left\{ \hat f_{XW}(x, w) - f_{XW}(x, w) \right\}^2 dx\, dw.$$
Proof of Lemma 7.1. According to Lemma A1 of Wu , the truncation bias of the series approximation satisfies
$$\sum_{j > K \text{ or } k > K} a_{jk}^2 = O\left( K^{-2r} \right).$$
Note that the Legendre polynomials in (8) are orthonormal and complete on $[0, 1]$. Then
$$\|\hat T - T\|_{HS}^2 = \sum_{j=1}^{K} \sum_{k=1}^{K} (\hat a_{jk} - a_{jk})^2 + \sum_{j > K \text{ or } k > K} a_{jk}^2.$$
By $E(\hat a_{jk}) = a_{jk}$ and $\mathrm{Var}(\hat a_{jk}) \le E\{p_j^2(X) p_k^2(W)\}/n \le C/n$, we have
$$E \sum_{j=1}^{K} \sum_{k=1}^{K} (\hat a_{jk} - a_{jk})^2 \le \frac{C K^2}{n},$$
where we have used the fact that $f_{XW}$ is uniformly bounded on $[0, 1]^2$. By Chebyshev's inequality, we then have $\sum_{j=1}^{K} \sum_{k=1}^{K} (\hat a_{jk} - a_{jk})^2 = O_p(K^2/n)$. The desired result follows immediately.
Lemma 7.2. Suppose Assumptions 1, 3 and 4 hold. Let $\delta_{n,N} = K^{-2r} + K^{-2s} + K^2/n + K/N$ be as in Theorem 3.1. Then
$$\|\hat m - \hat T g\|^2 = O_p\left( \delta_{n,N} \right).$$
Proof of Lemma 7.2. Note that $Tg = m$. By the triangle inequality and Jensen's inequality, we have
$$\|\hat m - \hat T g\|^2 \le 2 \|\hat m - m\|^2 + 2 \|(\hat T - T) g\|^2.$$
Since $\|(\hat T - T) g\|^2 \le \|\hat T - T\|_{HS}^2 \|g\|^2$, Lemma 7.1 gives $\|(\hat T - T) g\|^2 = O_p(K^2/n + K^{-2r})$. Following the proof of Lemma 7.1, under Assumptions 3(2) and 4 we can show that $\|\hat m - m\|^2 = O_p(K/N + K^{-2s})$. Then we obtain the result in Lemma 7.2.
Proof of Theorem 3.1. Define $A_\alpha = (\alpha I + T^* T)^{-1}$ and $\hat A_\alpha = (\alpha I + \hat T^* \hat T)^{-1}$. Notice that $m = Tg$, so that $g_\alpha = A_\alpha T^* T g$. Then we have
$$\hat g_\alpha - g = \hat A_\alpha \hat T^* (\hat m - \hat T g) + \left( \hat A_\alpha \hat T^* \hat T g - g \right).$$
The second right-hand side term can itself be decomposed into two components:
$$\hat A_\alpha \hat T^* \hat T g - g = \left( \hat A_\alpha \hat T^* \hat T - A_\alpha T^* T \right) g + (g_\alpha - g).$$
Actually, since $\hat A_\alpha \hat T^* \hat T = I - \alpha \hat A_\alpha$ and $A_\alpha T^* T = I - \alpha A_\alpha$, the identity $A_\alpha - \hat A_\alpha = \hat A_\alpha (\hat T^* \hat T - T^* T) A_\alpha$ gives:
$$\hat A_\alpha \hat T^* \hat T - A_\alpha T^* T = \alpha (A_\alpha - \hat A_\alpha) = \alpha \hat A_\alpha (\hat T^* \hat T - T^* T) A_\alpha.$$
From the properties of the norm, we have
$$\|\hat g_\alpha - g\|^2 \le 3 \left\| \hat A_\alpha \hat T^* (\hat m - \hat T g) \right\|^2 + 3 \left\| \alpha \hat A_\alpha (\hat T^* \hat T - T^* T) A_\alpha g \right\|^2 + 3 \|g_\alpha - g\|^2.$$
Let us consider the first term. We have
$$\left\| \hat A_\alpha \hat T^* (\hat m - \hat T g) \right\|^2 \le \left\| \hat A_\alpha \hat T^* \right\|^2 \left\| \hat m - \hat T g \right\|^2.$$
The first norm squared is equal to the largest eigenvalue of $\hat A_\alpha \hat T^* \hat T \hat A_\alpha$. These eigenvalues are of the form $\hat\lambda_j^2/(\alpha + \hat\lambda_j^2)^2$ and are therefore smaller than $1/(4\alpha)$. It follows from Lemma 7.2 that
$$\left\| \hat A_\alpha \hat T^* (\hat m - \hat T g) \right\|^2 = O_p\left( \frac{\delta_{n,N}}{\alpha} \right). \qquad (11)$$
Next, we consider the term $\|\alpha \hat A_\alpha (\hat T^* \hat T - T^* T) A_\alpha g\|^2$. Note that
$$\hat T^* \hat T - T^* T = \hat T^* (\hat T - T) + (\hat T^* - T^*) T.$$
We have $\|\alpha \hat A_\alpha\| \le 1$, $\|\hat A_\alpha \hat T^*\| \le 1/(2\sqrt{\alpha})$, $\|T A_\alpha\| \le 1/(2\sqrt{\alpha})$ and $\|A_\alpha\| \le 1/\alpha$ (see ). According to Lemma 7.1, $\|\hat T - T\|^2$ and $\|\hat T^* - T^*\|^2$ are $O_p(K^2/n + K^{-2r})$. Hence
$$\left\| \alpha \hat A_\alpha (\hat T^* \hat T - T^* T) A_\alpha g \right\|^2 = O_p\left( \frac{K^2/n + K^{-2r}}{\alpha} \right) = O_p\left( \frac{\delta_{n,N}}{\alpha} \right). \qquad (12)$$
The third term, identical to $\|g_\alpha - g\|^2$, is the regularity bias of the Tikhonov scheme. By Proposition 3.1, we have
$$\|g_\alpha - g\|^2 = O\left( \alpha^{\min(b, 2)} \right). \qquad (13)$$
Combining (11), (12) and (13) gives the desired result of Theorem 3.1.
Proof of Corollary 3.1. By Theorem 3.1, the proof of Corollary 3.1 is straightforward and is omitted.