Variance Estimation for High-Dimensional Varying Index Coefficient Models

ABSTRACT

This paper studies the re-adjusted cross-validation method and a semiparametric regression model called the varying index coefficient model. We use the profile spline modal estimator method to estimate the coefficients of the parameter part of the Varying Index Coefficient Model (VICM), while the unknown function part uses the B-spline to expand. Moreover, we combine the above two estimation methods under the assumption of high-dimensional data. The results of data simulation and empirical analysis show that for the varying index coefficient model, the re-adjusted cross-validation method is better in terms of accuracy and stability than traditional methods based on ordinary least squares.


KEYWORDS

High-Dimensional Data, Refitted Cross-Validation, Varying Index Coefficient Models, Variance Estimation


1. Introduction

In this paper, the variance to be estimated is the residual variance of the model. Variance estimation has been studied extensively in statistical modeling. Most existing approaches are simple two-stage methods: in the first stage, the important variables in the model are chosen by a variable selection method; in the second stage, the variance is estimated by ordinary least squares. For the first stage, the traditional selection criteria are the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Both are likelihood-based: the model with the smallest AIC or BIC value is selected, and the variables it contains are taken as the optimal subset. However, this kind of best-subset selection is neither continuous nor stable, so the variance estimated by the traditional approach tends to be inflated. Moreover, with the development of technology, high-dimensional data now arise in all aspects of life. As the number of candidate variables grows, the number of models that must be compared under these two criteria grows exponentially, so the traditional approach cannot be applied to high-dimensional data.

In past research, many important variable selection methods, such as the LASSO (Least Absolute Shrinkage and Selection Operator) and SCAD (Smoothly Clipped Absolute Deviation), have been proposed. The LASSO was proposed by Tibshirani (1996) [1] ; for more details, see Fan and Peng (2004), Zhao and Yu (2006), Bunea (2007), Zhang and Huang (2008), Lv and Fan (2009), Fan and Lv (2011), and Kim (2008) [2] - [8] . In this method, a penalty term is added to the ordinary least squares criterion, shrinking some coefficients to 0 so that the corresponding variables are excluded from the model. Another variable selection tool is the DS (Dantzig Selector). It was first proposed by Candes and Tao (2005) [9] and can easily be recast as a linear program. Fan and Lv (2008) [10] rank the marginal correlations between the covariates and the response, and then select the variables with the largest correlation coefficients to complete the variable selection. This method is called SIS (Sure Independence Screening). Later studies extended SIS to the iterative SIS (ISIS) method: regression analysis is performed using the variables selected by SIS and the response, the response is then replaced by the regression residuals, and the SIS step is applied again for a new round of variable selection; these steps are repeated until all the important variables are selected. For details, see Fan et al. (2009) [11] . After screening out the important variables, the second step of the simple two-stage method generally uses ordinary least squares. However, in order to overcome the curse of dimensionality, many scholars have studied variance estimation for high-dimensional data. Fan et al. proposed the refitted cross-validation (RCV) method in 2012 to improve the simple two-stage approach, and proved that the variance estimated by this method is stable and accurate. Zhao et al. (2014) [12] studied the variance estimation of linear models under certain assumptions. Reid, Tibshirani and Friedman (2016) [13] studied residual variance estimation in LASSO regression and performed extensive simulations; they found that a residual-sum-of-squares variance estimator based on a data-adaptive choice of the regularization parameter performs well in finite samples.
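For concreteness, the SIS/ISIS screening loop described above can be sketched in a few lines. The following Python sketch is illustrative only (not code from any of the cited papers); the screening size, iteration count, and toy data are assumptions:

```python
import numpy as np

def sis(X, r, k):
    """One SIS pass: rank covariates by |marginal correlation| with r
    and keep the k most correlated ones."""
    corr = np.abs([np.corrcoef(X[:, j], r)[0, 1] for j in range(X.shape[1])])
    return np.argsort(corr)[::-1][:k]

def isis(X, Y, k=3, n_iter=2):
    """Iterative SIS: screen, regress Y on the variables selected so far,
    replace the response by the residuals, and screen again."""
    selected = []
    r = Y
    for _ in range(n_iter):
        rest = np.array([j for j in range(X.shape[1]) if j not in selected])
        picks = rest[sis(X[:, rest], r, k)]
        selected.extend(int(j) for j in picks)
        Xs = X[:, selected]
        coef, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
        r = Y - Xs @ coef                     # residuals become the response
    return sorted(selected)

# Toy data (assumed): only columns 0, 5 and 10 actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
Y = 3 * X[:, 0] + 2 * X[:, 5] + 2 * X[:, 10] + rng.normal(size=200)
chosen = isis(X, Y)
```

Because the second pass screens the residuals rather than the raw response, variables masked by strong earlier signals get a chance to enter, which is the point of the iterative refinement.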

A well-behaved variance estimation method can improve the prediction accuracy of a model and better explain socio-economic phenomena. However, it is just as important to choose a suitable regression model, and there is a large literature on regression models. When the data dimension is low, parametric and nonparametric models are sufficient. But as the dimension increases, the more flexible semiparametric models become more suitable. The literature on semiparametric models has mostly focused on introducing new model classes, such as partially linear models and additive models. Hastie and Tibshirani (1993) [14] proposed the varying coefficient model (VCM), which has been widely used in practical applications. In addition, some scholars have studied the single index coefficient model (SICM). The varying coefficient single index model (VICSIM) was proposed by Wong et al. (2008) [15] . Ma and Song (2014) [16] first proposed the varying index coefficient model, which overcomes problems that the varying coefficient model cannot solve. Most scholars apply variable selection methods such as SIS, LASSO, and SCAD to the parametric part, while the methods used for nonparametric estimation are kernel estimation, local linear kernel estimation, and spline functions. For semiparametric regression models such as the partially linear regression model, the varying coefficient model, and the single-index model, the parametric part is estimated by profile least squares estimation (PLSE), while the nonparametric part still uses the nonparametric methods above. For example, Xue and Liang (2010) [17] used a PLSE method based on kernel estimation for the nonparametric part of the single-index model. However, there is still little literature on the varying index coefficient model, and the related estimation algorithms mainly combine profile least squares estimation with B-splines. Lv et al. (2016) [18] improved the PLSE by proposing a robust estimation procedure combining modal regression with B-splines, and established the large-sample properties of the parameter estimates. There, the unknown coefficient $\beta $ is estimated by the profile spline modal estimator method (PSME). Moreover, in order to obtain the asymptotic distribution of the unknown functions ${m}_{l}\left(Z\right)$ , they also proposed a two-stage method based on local linear kernel estimation.

The rest of the paper is organized as follows. In Section 2, we briefly introduce the varying index coefficient model, including the estimation method, the statistical inference of the coefficients, and the RCV estimation of the model. In Section 3, simulation studies are conducted to evaluate the finite sample performance of the proposed methods. In Section 4, a real data set is analyzed to compare the proposed methods with the existing methods. A discussion is given in Section 5.

2. Methodology

2.1. Varying Index Coefficient Models

Semiparametric models are widely used in regression analysis, especially the varying coefficient model (VCM) proposed by Hastie and Tibshirani in 1993, which has been applied extensively to real data. An important feature of the varying coefficient model is that the coefficients of its covariates are governed by smooth functions, which can capture nonlinear effects. The varying coefficient model takes the form:

$Y={\displaystyle \underset{l=1}{\overset{p}{\sum}}{m}_{l}\left(Z\right){X}_{l}+\epsilon}$ (2.1)

where Y is a response variable, $X={\left({X}_{1},\cdots ,{X}_{p}\right)}^{\text{T}}$ and $Z\in \left[0,1\right]$ (for simplicity) are explanatory covariates, $m(\cdot )={\left({m}_{1}(\cdot ),\cdots ,{m}_{p}(\cdot )\right)}^{\text{T}}$ is a p-dimensional vector of unknown coefficient functions, and the model error $\epsilon $ is independent of $\left(X,Z\right)$ with mean zero and finite variance ${\sigma}^{2}$ . The varying coefficient model (2.1) faces two challenges with today’s complex data. First, a single index variable Z may have only a weak effect on Y, so the interaction between Z and X is difficult to detect. Second, in many complex situations Z is multi-dimensional, for example when studying the joint effects of chemical constituents, so the coefficient functions ${m}_{l}\left(Z\right)$ in the VCM suffer from the curse of dimensionality. To overcome these two problems, Ma and Song proposed the varying index coefficient model (VICM), which takes the form:

$Y=m\left(Z,X,\beta \right)+\epsilon ={\displaystyle \underset{l=1}{\overset{d}{\sum}}{m}_{l}\left({Z}^{\text{T}}{\beta}_{l}\right){X}_{l}+\epsilon}$ (2.2)

where ${\beta}_{l}={\left({\beta}_{l1},\cdots ,{\beta}_{lp}\right)}^{\text{T}}$ is the loading coefficient vector of Z and ${\beta}_{lk}$ is the coefficient of ${Z}_{k}$ in Z. The introduction of the varying index coefficient model was motivated by Ma and Song’s study of a biomedical project on factors affecting children’s growth rates.

2.2. Estimation Procedure for the VICM

The estimation of the varying index coefficient model has two main parts: one is the estimation of the parametric component $\beta $ , and the other is the estimation of the functional coefficients ${m}_{l}\left({u}_{l}\right)$ of the nonparametric component. In this paper, the unknown coefficient $\beta $ is estimated by the profile spline modal estimator method (PSME). Once $\beta $ is fixed, the unknown functional coefficient ${m}_{l}\left({u}_{l}\right)$ is estimated using B-splines. The specific estimation procedure for the varying index coefficient model is as follows.

Let $\left\{\left({X}_{i},{Z}_{i},{Y}_{i}\right),1\le i\le n\right\}$ be the independent and identically distributed samples from model (2.2). Our main interest is to estimate the coefficient vectors ${\beta}_{l}$ and the non-parametric functions ${m}_{l}(\cdot )$ for $l=1,\cdots ,d$ . The estimation of ${\beta}_{l}$ and ${m}_{l}(\cdot )$ in VICM is equivalent to maximizing

$\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\varphi}_{{h}_{1}}}\left\{{Y}_{i}-{\displaystyle \underset{l=1}{\overset{d}{\sum}}{m}_{l}\left({Z}_{i}^{\text{T}}{\beta}_{l}\right){X}_{il}}\right\}$ (2.3)

subject to the constraints $\Vert {\beta}_{l}\Vert =1$ and ${\beta}_{l1}>0$ , where ${\varphi}_{{h}_{1}}\left(t\right)={h}_{1}^{-1}\varphi \left(t/{h}_{1}\right)$ , $\varphi (\cdot )$ is a kernel density function symmetric about 0, and ${h}_{1}$ is a bandwidth that determines the degree of robustness of the estimate. We use the standard normal density for $\varphi (\cdot )$ throughout this paper to simplify the calculation. We approximate the nonparametric functions by a basis expansion. Specifically, we approximate ${m}_{l}(\cdot )$ by B-spline basis functions because they have bounded support and are numerically stable. Let ${B}_{q}\left(u\right)={\left({B}_{1,q}\left(u\right),\cdots ,{B}_{{J}_{n},q}\left(u\right)\right)}^{\text{T}}$ be the B-spline basis functions of order $q\left(q\ge 2\right)$ , where ${J}_{n}={N}_{n}+q$ and ${N}_{n}$ is the number of interior knots for a knot sequence

${\xi}_{1}=\cdots ={\xi}_{q}=0<{\xi}_{q+1}<\cdots <{\xi}_{{N}_{n}+q}<1={\xi}_{{N}_{n}+q+1}=\cdots ={\xi}_{{N}_{n}+2q}$ ,

where ${N}_{n}$ increases along with the sample size n. Consider the distances between neighboring knots, ${H}_{i}={\xi}_{i}-{\xi}_{i-1}$ and $H={\mathrm{max}}_{1\le i\le {N}_{n}+1}\left\{{H}_{i}\right\}$ . Then, there exists a constant ${C}_{0}$ such that $\frac{H}{{\mathrm{min}}_{1\le i\le {N}_{n}+1}\left\{{H}_{i}\right\}}<{C}_{0}$ , ${\mathrm{max}}_{1\le i\le {N}_{n}}\left\{{H}_{i+1}-{H}_{i}\right\}=o\left({N}_{n}^{-1}\right)$ . Let ${U}_{l}\left({\beta}_{l}\right)={Z}^{\text{T}}{\beta}_{l}$ ; without loss of generality, we assume that ${U}_{l}\left({\beta}_{l}\right)$ is confined to the compact set $\left[0,1\right]$ . Then, the nonparametric functions ${m}_{l}\left({u}_{l}\right)$ can be approximated by

${m}_{l}\left({u}_{l}\right)\approx {B}_{q}{\left({u}_{l}\right)}^{\text{T}}{\lambda}_{l}\left(\beta \right)$ , $l=1,\cdots ,d$ (2.4)

where ${\lambda}_{l}\left(\beta \right)={\left({\lambda}_{s,l}\left(\beta \right):1\le s\le {J}_{n}\right)}^{\text{T}}$ . Let $\lambda \left(\beta \right)={\left({\lambda}_{1}{\left(\beta \right)}^{\text{T}},\cdots ,{\lambda}_{d}{\left(\beta \right)}^{\text{T}}\right)}^{\text{T}}$ . Based on the above approximation, the objective function (2.3) becomes

$\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\varphi}_{h2}}\left\{{Y}_{i}-{\displaystyle \underset{l=1}{\overset{d}{\sum}}{\displaystyle \underset{s=1}{\overset{{J}_{n}}{\sum}}{B}_{s,q}\left({U}_{il}\left({\beta}_{l}\right)\right){\lambda}_{s,l}{X}_{il}}}\right\}$ . (2.5)
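For intuition, the modal objectives (2.3) and (2.5) simply average a rescaled kernel over the residuals; maximizing them pulls the bulk of the residuals toward zero. A minimal Python sketch (the bandwidth and toy data below are assumptions made for illustration, not values from the paper):

```python
import numpy as np

def phi_h(t, h):
    """Rescaled standard-normal kernel phi_h(t) = h^{-1} phi(t / h)."""
    return np.exp(-0.5 * (t / h) ** 2) / (h * np.sqrt(2 * np.pi))

def modal_objective(y, fitted, h):
    """Sample average of phi_h over the residuals, as in (2.3)/(2.5).
    Maximizing it concentrates the residual distribution around zero."""
    return phi_h(y - fitted, h).mean()

# Toy check: the objective is larger for a close fit than a poor one.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
good = modal_objective(y, y + 0.1 * rng.normal(size=100), h=0.5)
bad = modal_objective(y, y + 5.0, h=0.5)
```

Unlike the squared-error criterion, large residuals contribute almost nothing to this objective, which is the source of the estimator's robustness.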

Subsequently, we estimate the parameter vectors ${\beta}_{l}$ and the nonparametric functions ${m}_{l}(\cdot )$ in two steps below.

Step 1. Given $\beta $ , we obtain estimate $\stackrel{^}{\lambda}\left(\beta \right)$ of $\lambda \left(\beta \right)$ by maximizing the objective function (2.5). Then, the estimator of ${m}_{l}\left({u}_{l}\right)$ can be obtained by

${\stackrel{^}{m}}_{l}\left({u}_{l},\beta \right)={\displaystyle \underset{s=1}{\overset{{J}_{n}}{\sum}}{B}_{s,q}\left({u}_{l}\right){\stackrel{^}{\lambda}}_{s,l}\left(\beta \right)}={B}_{q}{\left({u}_{l}\right)}^{\text{T}}{\stackrel{^}{\lambda}}_{l}\left(\beta \right)$ . (2.6)
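The approximation (2.4) and the estimator (2.6) rest on evaluating the ${J}_{n}={N}_{n}+q$ B-spline basis functions at the index values. A small Python sketch using scipy (the order $q=4$ and ${N}_{n}=5$ interior knots are assumed illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.interpolate import BSpline

q = 4                                            # spline order (degree q - 1)
N_n = 5                                          # number of interior knots
interior = np.linspace(0, 1, N_n + 2)[1:-1]
# Clamped knot sequence: q copies of 0, interior knots, q copies of 1.
knots = np.concatenate([np.zeros(q), interior, np.ones(q)])

J_n = N_n + q                                    # number of basis functions
u = np.linspace(0, 1 - 1e-9, 200)

# Design matrix B_q(u): column s holds B_{s,q} evaluated on the grid u.
basis = np.column_stack([
    BSpline(knots, np.eye(J_n)[s], q - 1)(u) for s in range(J_n)
])
```

Given fitted coefficients `lam_l` of length `J_n`, the spline estimate in (2.6) is then simply the matrix-vector product `basis @ lam_l`.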

In order to obtain efficient estimators of $\beta $ , the “remove-one-component” method is employed. Specifically, for ${\beta}_{l}={\left({\beta}_{l1},\cdots ,{\beta}_{lp}\right)}^{\text{T}}$ , let ${\beta}_{l,-1}={\left({\beta}_{l2},\cdots ,{\beta}_{lp}\right)}^{\text{T}}$ be a $p-1$ dimensional vector by removing the 1st component ${\beta}_{l1}$ in ${\beta}_{l}$ for all $1\le l\le d$ . Then ${\beta}_{l}$ can be rewritten as

${\beta}_{l}={\beta}_{l}\left({\beta}_{l,-1}\right)={\left(\sqrt{1-{\Vert {\beta}_{l,-1}\Vert}^{2}},{\beta}_{l,-1}^{\text{T}}\right)}^{\text{T}}$ , ${\Vert {\beta}_{l,-1}\Vert}^{2}<1$ . (2.7)

Thus, ${\beta}_{l}$ is infinitely differentiable with respect to ${\beta}_{l,-1}$ and the Jacobian matrix is

${J}_{l}=\frac{\partial {\beta}_{l}}{\partial {\beta}_{l,-1}}=\left(\begin{array}{c}-{\beta}_{l,-1}^{T}/\sqrt{1-{\Vert {\beta}_{l,-1}\Vert}^{2}}\\ {I}_{p-1}\end{array}\right)$ , (2.8)

where ${I}_{p}$ is the $p\times p$ identity matrix. We denote ${\beta}_{-1}={\left({\beta}_{1,-1}^{\text{T}},\cdots ,{\beta}_{d,-1}^{\text{T}}\right)}^{\text{T}}$ and reformulate the parameter space of ${\beta}_{-1}$ as follows:

${\Theta}_{-1}=\left\{{\beta}_{-1}={\left({\beta}_{l,-1}^{\text{T}}:1\le l\le d\right)}^{\text{T}}:{\Vert {\beta}_{l,-1}\Vert}^{2}<1,{\beta}_{l,-1}\in {R}^{p-1}\right\}$ . (2.9)
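The map (2.7) and the Jacobian (2.8) are straightforward to code. The Python sketch below is illustrative; the example vector is ${\beta}_{1}$ from the simulation in Section 3:

```python
import numpy as np

def beta_from_minus1(b_minus1):
    """Recover the full unit vector beta_l from its last p-1 components,
    as in (2.7); requires ||b_minus1|| < 1."""
    b_minus1 = np.asarray(b_minus1, dtype=float)
    first = np.sqrt(1.0 - b_minus1 @ b_minus1)
    return np.concatenate([[first], b_minus1])

def jacobian(b_minus1):
    """J_l = d beta_l / d beta_{l,-1}, a p x (p-1) matrix as in (2.8)."""
    b_minus1 = np.asarray(b_minus1, dtype=float)
    first = np.sqrt(1.0 - b_minus1 @ b_minus1)
    top = -b_minus1 / first              # first row of the Jacobian
    return np.vstack([top, np.eye(len(b_minus1))])

# Example: beta_1 = (2, 1, 3)/sqrt(14), dropping then recovering its 1st entry.
b = np.array([2.0, 1.0, 3.0]) / np.sqrt(14.0)
b_rec = beta_from_minus1(b[1:])
J = jacobian(b[1:])
```

The reparametrization lets the optimizer work in the unconstrained interior of the unit ball while the recovered $\beta_l$ automatically satisfies $\Vert \beta_l\Vert =1$ and $\beta_{l1}>0$.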

Let $\beta =\beta \left({\beta}_{-1}\right)$ with ${\beta}_{l}={\beta}_{l}\left({\beta}_{l,-1}\right)$ for $1\le l\le d$ . Since the estimation procedure for $\beta $ requires estimates of both ${m}_{l}$ and its first-order derivative ${\stackrel{\dot{}}{m}}_{l}$ , we can adopt spline functions of one order lower than that for ${m}_{l}$ to approximate ${\stackrel{\dot{}}{m}}_{l}$ . Following Ma and Song (2014), a spline estimator of ${\stackrel{\dot{}}{m}}_{l}$ is given by

${\stackrel{^}{\stackrel{\dot{}}{m}}}_{l}\left({u}_{l},\beta \right)={\displaystyle \underset{s=1}{\overset{{J}_{n}}{\sum}}{\stackrel{\dot{}}{B}}_{s,q}}\left({u}_{l}\right){\stackrel{^}{\lambda}}_{s,l}\left(\beta \right)={\displaystyle \underset{s=2}{\overset{{J}_{n}}{\sum}}{B}_{s,q-1}}\left({u}_{l}\right){\stackrel{^}{\omega}}_{s,l}\left(\beta \right)$ (2.10)

where ${\stackrel{^}{\omega}}_{s,l}\left(\beta \right)=\left(q-1\right)\left\{{\stackrel{^}{\lambda}}_{s,l}\left(\beta \right)-{\stackrel{^}{\lambda}}_{s-1,l}\left(\beta \right)\right\}/\left({\xi}_{s+q-1}-{\xi}_{s}\right)$ for $2\le s\le {J}_{n}$ . Thus, one has

${\stackrel{^}{\stackrel{\dot{}}{m}}}_{l}\left({u}_{l},\beta \right)={B}_{q-1}{\left({u}_{l}\right)}^{\text{T}}{D}_{1}{\stackrel{^}{\lambda}}_{l}\left(\beta \right)$ ,

where ${B}_{q-1}\left({u}_{l}\right)={\left({B}_{s,q-1}\left({u}_{l}\right):2\le s\le {J}_{n}\right)}^{\text{T}}$ and

${D}_{1}=\left(q-1\right){\left[\begin{array}{ccccc}\frac{-1}{{\xi}_{q+1}-{\xi}_{2}}& \frac{1}{{\xi}_{q+1}-{\xi}_{2}}& 0& \cdots & 0\\ 0& \frac{-1}{{\xi}_{q+2}-{\xi}_{3}}& \frac{1}{{\xi}_{q+2}-{\xi}_{3}}& \cdots & 0\\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0& 0& \cdots & \frac{-1}{{\xi}_{{N}_{n}+2q-1}-{\xi}_{{N}_{n}+q}}& \frac{1}{{\xi}_{{N}_{n}+2q-1}-{\xi}_{{N}_{n}+q}}\end{array}\right]}_{\left({J}_{n}-1\right)\times {J}_{n}}$ .
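The matrix ${D}_{1}$ can be assembled directly from the knot differences. The Python sketch below is illustrative (the knot settings are assumed, not the paper's); note that every row of a differencing matrix must sum to zero:

```python
import numpy as np

def make_D1(knots, q, J_n):
    """Difference matrix D_1 mapping spline coefficients lambda_l to the
    coefficients omega_l of the order-(q-1) derivative spline.
    `knots` holds the 1-based sequence xi_1..xi_{N_n+2q} as a 0-based array."""
    D1 = np.zeros((J_n - 1, J_n))
    for s in range(2, J_n + 1):                   # 1-based index s = 2..J_n
        denom = knots[s + q - 2] - knots[s - 1]   # xi_{s+q-1} - xi_s
        D1[s - 2, s - 2] = -(q - 1) / denom
        D1[s - 2, s - 1] = (q - 1) / denom
    return D1

# Illustrative clamped knot sequence with q = 4 and N_n = 5.
q, N_n = 4, 5
interior = np.linspace(0, 1, N_n + 2)[1:-1]
knots = np.concatenate([np.zeros(q), interior, np.ones(q)])
J_n = N_n + q
D1 = make_D1(knots, q, J_n)
```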

Step 2. After this re-parametrization, combining the estimators ${\stackrel{^}{m}}_{l}$ and ${\stackrel{^}{\stackrel{\dot{}}{m}}}_{l}$ for $l=1,\cdots ,d$ , we can construct the profile spline modal objective function for the parametric components. Then, we obtain the estimator ${\stackrel{^}{\beta}}_{-1}$ of ${\beta}_{-1}$ by maximizing ${L}_{n}\left(\beta \left({\beta}_{-1}\right)\right)$ over ${\beta}_{-1}\in {\Theta}_{-1}$ , where

${L}_{n}\left(\beta \left({\beta}_{-1}\right)\right)=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\varphi}_{h2}}\left\{{Y}_{i}-{\displaystyle \underset{l=1}{\overset{d}{\sum}}{\displaystyle \underset{s=1}{\overset{{J}_{n}}{\sum}}{B}_{s,q}\left({U}_{il}\left({\beta}_{l}\right)\right){\stackrel{^}{\lambda}}_{s,l}\left(\beta \right){X}_{il}}}\right\}$ , (2.11)

which is equivalent to solving the following estimating equations:

$\begin{array}{l}\partial {L}_{n}\left(\beta \left({\beta}_{-1}\right)\right)/\partial {\beta}_{-1}\\ =-\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\stackrel{\dot{}}{\varphi}}_{h2}}\left\{{Y}_{i}-{\displaystyle \underset{l=1}{\overset{d}{\sum}}{\displaystyle \underset{s=1}{\overset{{J}_{n}}{\sum}}{B}_{s,q}\left({U}_{il}\left({\beta}_{l}\right)\right){\stackrel{^}{\lambda}}_{s,l}\left(\beta \right){X}_{il}}}\right\}\\ \text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\times \left\{\begin{array}{c}\left\{{\stackrel{^}{\stackrel{\dot{}}{m}}}_{1}\left({U}_{i1}\left({\beta}_{1}\right),\beta \right){X}_{i1}{J}_{1}^{\text{T}}{Z}_{i}+\left(\partial \stackrel{^}{\lambda}{\left(\beta \right)}^{\text{T}}/\partial {\beta}_{1,-1}\right){D}_{i}\left(\beta \right)\right\}\\ \vdots \\ \left\{{\stackrel{^}{\stackrel{\dot{}}{m}}}_{d}\left({U}_{id}\left({\beta}_{d}\right),\beta \right){X}_{id}{J}_{d}^{\text{T}}{Z}_{i}+\left(\partial \stackrel{^}{\lambda}{\left(\beta \right)}^{\text{T}}/\partial {\beta}_{d,-1}\right){D}_{i}\left(\beta \right)\right\}\end{array}\right\}\\ =0\end{array}$ (2.12)

where ${D}_{i}\left(\beta \right)={\left({D}_{i,sl}\left({\beta}_{l}\right),1\le s\le {J}_{n},1\le l\le d\right)}^{\text{T}}$ with ${D}_{i,sl}\left({\beta}_{l}\right)={B}_{s,q}\left({U}_{il}\left({\beta}_{l}\right)\right){X}_{il}$ , ${\stackrel{^}{\stackrel{\dot{}}{m}}}_{l}\left(\cdot ,\beta \right)$ is given in (2.10), and ${\stackrel{\dot{}}{\varphi}}_{h2}$ is the first derivative of ${\varphi}_{h2}$ . We obtain the estimate ${\stackrel{^}{\beta}}_{-1}$ of ${\beta}_{-1}$ and then obtain $\stackrel{^}{\beta}$ via the transformation (2.7). Accordingly, we call $\stackrel{^}{\beta}$ the profile spline modal estimator (PSME).

3. Simulation Studies

3.1. Results in Finite Sample

In this section, we conduct simulation studies to evaluate the finite sample performance of the proposed methodology. We generate data from the following VICM:

${Y}_{i}=m\left({Z}_{i},{X}_{i},\beta \right)+{\epsilon}_{i}={m}_{1}\left({Z}_{i}^{\text{T}}{\beta}_{1}\right){X}_{i1}+{m}_{2}\left({Z}_{i}^{\text{T}}{\beta}_{2}\right){X}_{i2}+{m}_{3}\left({Z}_{i}^{\text{T}}{\beta}_{3}\right){X}_{i3}+{\epsilon}_{i}$ (3.1)

with ${X}_{i}={\left({X}_{i1},{X}_{i2},{X}_{i3}\right)}^{\text{T}}$ , where ${X}_{i1}$ is generated from a Bernoulli distribution with $p=0.5$ , and ${\left({X}_{i2},{X}_{i3}\right)}^{\text{T}}$ is drawn from a bivariate normal distribution with mean 0, variance 1, and covariance 0.2. To generate ${Z}_{i}={\left({Z}_{i1},{Z}_{i2},{Z}_{i3}\right)}^{\text{T}}$ , we first sample ${\left({Z}_{i1}^{*},{Z}_{i2}^{*},{Z}_{i3}^{*}\right)}^{\text{T}}$ from a multivariate normal distribution with mean 0, variance 1, and covariance 0.2, and then let ${Z}_{ik}=\Phi \left({Z}_{ik}^{*}\right)-0.5,k=1,2,3$ , where $\Phi (\cdot )$ is the CDF of the standard normal distribution. The true loading parameters are set as ${\beta}_{1}=\frac{1}{\sqrt{14}}{\left(2,1,3\right)}^{\text{T}}$ , ${\beta}_{2}=\frac{1}{\sqrt{14}}{\left(3,2,1\right)}^{\text{T}}$ , ${\beta}_{3}=\frac{1}{\sqrt{14}}{\left(2,3,1\right)}^{\text{T}}$ . Set

${m}_{l}\left({u}_{l}\right)={m}_{l}^{*}\left({u}_{l}\right)-E\left\{{m}_{l}^{*}\left({u}_{l}\right)\right\}$ , $l=1,2,3$

where ${m}_{1}^{*}\left({u}_{1}\right)=10\mathrm{exp}\left(5{u}_{1}\right)/\left\{1+\mathrm{exp}\left(5{u}_{1}\right)\right\}$ , ${m}_{2}^{*}\left({u}_{2}\right)=5\mathrm{sin}\left(\pi {u}_{2}\right)$ , and ${m}_{3}^{*}\left({u}_{3}\right)=3\left\{\mathrm{sin}\left(\pi {u}_{3}\right)+\mathrm{cos}\left(2\pi {u}_{3}-4\pi /3\right)\right\}$ . Finally, ${Y}_{i}$ , $1\le i\le n$ , are generated from the VICM (3.1), where $\beta ={\left({\beta}_{1}^{\text{T}},{\beta}_{2}^{\text{T}},{\beta}_{3}^{\text{T}}\right)}^{\text{T}}$ , and the errors ${\epsilon}_{i}$ follow $N\left(0,{\sigma}^{2}\left({Z}_{i},{X}_{i}\right)\right)$ with ${\sigma}^{2}\left({Z}_{i},{X}_{i}\right)=\left\{100-m\left({Z}_{i},{X}_{i},\beta \right)\right\}/\left\{100+m\left({Z}_{i},{X}_{i},\beta \right)\right\}$ .
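For reproducibility, the data-generating process above can be sketched as follows (a Python sketch; the sample size, the random seed, and the use of a sample mean in place of $E\left\{{m}_{l}^{*}\left({u}_{l}\right)\right\}$ are assumptions made for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2023)
n = 200

beta1 = np.array([2.0, 1.0, 3.0]) / np.sqrt(14.0)
beta2 = np.array([3.0, 2.0, 1.0]) / np.sqrt(14.0)
beta3 = np.array([2.0, 3.0, 1.0]) / np.sqrt(14.0)

# Covariates: X_1 ~ Bernoulli(0.5); (X_2, X_3) bivariate normal, cov 0.2.
cov2 = np.array([[1.0, 0.2], [0.2, 1.0]])
X = np.column_stack([
    rng.binomial(1, 0.5, n),
    rng.multivariate_normal(np.zeros(2), cov2, n),
])
# Index covariates: Z_k = Phi(Z_k*) - 0.5, each confined to (-0.5, 0.5).
Zstar = rng.multivariate_normal(np.zeros(3), 0.2 + 0.8 * np.eye(3), n)
Z = norm.cdf(Zstar) - 0.5

def m1(u): return 10 * np.exp(5 * u) / (1 + np.exp(5 * u))
def m2(u): return 5 * np.sin(np.pi * u)
def m3(u): return 3 * (np.sin(np.pi * u) + np.cos(2 * np.pi * u - 4 * np.pi / 3))

U = np.column_stack([Z @ beta1, Z @ beta2, Z @ beta3])
M = np.column_stack([m1(U[:, 0]), m2(U[:, 1]), m3(U[:, 2])])
M = M - M.mean(axis=0)                 # sample-mean centering (assumed)
mean_part = (M * X).sum(axis=1)
sigma2 = (100 - mean_part) / (100 + mean_part)
Y = mean_part + rng.normal(size=n) * np.sqrt(sigma2)
```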

Although the estimation procedure for the varying index coefficient model was introduced in Section 2.2, it is still difficult to solve the estimating Equations (2.12) directly. Therefore, an iterative algorithm is needed to estimate the unknown parameters and the unknown functional coefficients. The algorithm consists of the following two steps:

Step 1. The initial value $\left({\beta}_{1},{\beta}_{2},{\beta}_{3}\right)$ of $\beta $ is obtained in the following four steps:

1) Assuming that the unknown functions ${m}_{l}$ are linear, then $m\left({Z}_{i},{X}_{i},\beta \right)={\displaystyle \underset{l=1}{\overset{d}{\sum}}\left\{{a}_{l}+{b}_{l}\left({Z}_{i}^{\text{T}}{\beta}_{l}\right)\right\}{X}_{il}}$ .

2) The estimate $\left({\stackrel{^}{a}}_{l},{\stackrel{^}{v}}_{l}\right)$ of $\left({a}_{l},{v}_{l}\right)$ , where ${v}_{l}={b}_{l}{\beta}_{l}$ , is obtained by minimizing ${\displaystyle \underset{i=1}{\overset{n}{\sum}}{\left\{{Y}_{i}-{\displaystyle \underset{l=1}{\overset{d}{\sum}}\left({a}_{l}+{v}_{l}^{\text{T}}{Z}_{i}\right){X}_{il}}\right\}}^{2}}$ , which yields ${\stackrel{^}{\beta}}_{l}^{0}=\left({{\stackrel{^}{v}}_{l}/\Vert {\stackrel{^}{v}}_{l}\Vert}_{2}\right)\mathrm{sgn}\left({\stackrel{^}{v}}_{1l}\right)$ , where ${\stackrel{^}{v}}_{1l}$ is the first component of ${\stackrel{^}{v}}_{l}$ .

3) Let ${\stackrel{^}{U}}_{il}={Z}_{i}^{\text{T}}{\stackrel{^}{\beta}}_{l}^{0}$ , then obtain the initial unknown functions ${\stackrel{^}{m}}_{l}^{ini}(\cdot )$ from the varying coefficient model $Y={\displaystyle \underset{l=1}{\overset{d}{\sum}}{m}_{l}}\left({\stackrel{^}{U}}_{l}\right){X}_{l}+\epsilon $ .

4) Obtain the initial value ${\beta}^{ini}$ by minimizing ${2}^{-1}{\displaystyle \underset{i=1}{\overset{n}{\sum}}{\left\{{Y}_{i}-{\displaystyle \underset{l=1}{\overset{d}{\sum}}{\stackrel{^}{m}}_{l}^{ini}\left({Z}_{i}^{\text{T}}{\beta}_{l}\right){X}_{il}}\right\}}^{2}}$ .
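The least-squares initialization in steps 1) and 2) can be sketched as follows. This is an illustrative Python reimplementation under the linearity assumption, not the authors' R code; the toy model and dimensions are assumptions:

```python
import numpy as np

def initial_beta(Y, X, Z):
    """Step-1/2 initial values: treat each m_l as linear, regress Y on
    X_l and Z * X_l by least squares, then normalize the slope vectors
    v_hat_l and fix their sign to get beta_hat_l^0."""
    n, d = X.shape
    p = Z.shape[1]
    cols = []
    for l in range(d):
        cols.append(X[:, [l]])            # intercept term a_l * X_l
        cols.append(Z * X[:, [l]])        # slope term v_l^T Z * X_l
    D = np.hstack(cols)
    coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
    betas = []
    for l in range(d):
        v = coef[l * (p + 1) + 1 : (l + 1) * (p + 1)]
        betas.append(np.sign(v[0]) * v / np.linalg.norm(v))
    return betas

# Toy noiseless data (assumed): m_l exactly linear, so recovery is exact.
rng = np.random.default_rng(1)
n = 300
Z = rng.uniform(-0.5, 0.5, (n, 3))
X = rng.normal(size=(n, 2))
b_true = [np.array([2.0, 1.0, 3.0]) / np.sqrt(14.0),
          np.array([3.0, 2.0, 1.0]) / np.sqrt(14.0)]
Y = sum((1.0 + Z @ b_true[l]) * X[:, l] for l in range(2))
b_hat = initial_beta(Y, X, Z)
```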

Step 2. Iterative calculations are performed using the asymptotic properties of the parameter estimates and the theorems given by Ma and Song (2014) [16] . Under certain assumptions, the estimated parameters satisfy the following asymptotic expansion:

$\begin{array}{c}\sqrt{n}\left({\stackrel{^}{\beta}}_{-1}-{\beta}_{-1}^{0}\right)={\left\{{n}^{-1}{\displaystyle \underset{i=1}{\overset{n}{\sum}}\Phi {\left({X}_{i},{Z}_{i},{\beta}^{0}\right)}^{\otimes 2}}\right\}}^{-1}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\times \left\{{n}^{-1/2}{\displaystyle \underset{i=1}{\overset{n}{\sum}}\left({Y}_{i}-m\left({Z}_{i},{X}_{i}\right)\right)\Phi \left({X}_{i},{Z}_{i},{\beta}^{0}\right)}\right\}+{o}_{p}\left(1\right)\end{array}$ (3.2)

where $\Phi \left({X}_{i},{Z}_{i},{\beta}^{0}\right)=\left[{\left\{{\stackrel{\dot{}}{m}}_{l}\left({U}_{l}\left({\beta}_{l}^{0}\right),{\beta}_{l}^{0}\right){X}_{l}{J}_{l}^{\text{T}}\tilde{Z}\right\}}^{\text{T}},1\le l\le d\right]$ , and $\tilde{Z}=Z-P\left(Z\right)$ in the above expression. Here $\tilde{Z}$ can be estimated by $\tilde{Z}=Z-{P}_{n}\left(Z\right)$ , where

${P}_{n}\left({Z}_{k}\right)={\displaystyle \underset{l=1}{\overset{d}{\sum}}{\stackrel{^}{g}}_{1J}^{0}}\left({U}_{l}\left(\stackrel{^}{\beta}\right),\stackrel{^}{\beta}\right){X}_{l}$ . (3.3)

The estimation procedure for ${\stackrel{^}{g}}_{1J}^{0}\left(\cdot ,\stackrel{^}{\beta}\right)$ in Equation (3.3) is similar to that of the unknown function ${\stackrel{^}{m}}_{l}\left(\cdot ,\stackrel{^}{\beta}\right)$ , except that the response variable Y is replaced by ${Z}_{k}$ in the iterative estimation process. From the asymptotic expansion (3.2) we obtain an updating equation and use it for the iterative calculations. The iteration stops when the absolute difference (dif) from the previous iterate is less than ${10}^{-4}$ or the number of iterations (iter) reaches 100.
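The stopping rule can be wrapped in a generic iteration skeleton. The Python sketch below is purely illustrative: the update map is a toy fixed-point iteration, not the VICM updating equation derived from (3.2):

```python
import numpy as np

def iterate(update, x0, tol=1e-4, max_iter=100):
    """Run x <- update(x) until the absolute change (dif) drops below tol
    or the iteration count (iter) reaches max_iter, as described above."""
    x = x0
    dif = np.inf
    for it in range(1, max_iter + 1):
        x_new = update(x)
        dif = np.max(np.abs(np.asarray(x_new) - np.asarray(x)))
        x = x_new
        if dif < tol:
            break
    return x, it, dif

# Toy fixed point (assumed example): Babylonian iteration for sqrt(2).
root, n_iter, final_dif = iterate(lambda b: 0.5 * (b + 2.0 / b), 1.0)
```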

Following the algorithm above, we use R (64-bit) to write four functions: Design matrix, transform, Jac, and vicmest. Among them, vicmest is the main program for estimating the unknown parameters, and the other three are intermediate helper functions. First, we calculate the initial values ${\beta}_{1},{\beta}_{2},{\beta}_{3}$ of $\beta $ in Step 1. The results are shown in Table 1. It can be seen from Table 1 that the initial values calculated in Step 1 are consistent with the trend of the true values, but the deviation from the true values is still large. Therefore, it is necessary to refine the estimates in Step 2. Running these programs, the main routine stops after 64 iterations, with $dif=8.45267\times {10}^{-6}$ at that point. The estimates $\stackrel{^}{\beta}$ of $\beta $ and their deviations are shown in Table 2. It can be seen from Table 2 that the values estimated by the profile spline modal estimator (PSME) are accurate: the deviation from the true values (Bias) is small, with a mean deviation of less than 5%.

Table 1. The initial values of $\beta $ calculated by step 1.

Table 2. The estimated value of $\beta $ and its deviation from the true value.

From the main program vicmest, we obtain not only the estimate $\stackrel{^}{\beta}$ but also gamm0, the coefficient vector of the B-spline basis expansion. Substituting the calculated $\stackrel{^}{\beta}$ and the coefficients gamm0 into Formula (2.6), we obtain the fitted unknown functions ${\stackrel{^}{m}}_{l}\left(\cdot ,\beta \right)$ and the predicted values of the response variable Y. The gamm0 coefficients are shown in Table 3. The estimates ${\stackrel{^}{m}}_{l}\left(\cdot ,\beta \right)$ are shown in Figure 1, where the red curves represent the estimates and the black curves the true functions. It can be seen from Figure 1 that the fit from the B-spline expansion is very good: not only is the overall trend of the unknown nonparametric functions well captured, but the accuracy of the estimates is also relatively high. As shown in Table 4, we further compute the root mean square error (RMSE) of ${\stackrel{^}{m}}_{l}\left(\cdot ,\beta \right)$ . Table 4 shows that the deviations of the unknown functions are small, with RMSE less than 0.28, indicating good estimation performance. Moreover, Y can be predicted once the estimates $\stackrel{^}{\beta}$ and ${\stackrel{^}{m}}_{l}\left(\cdot ,\beta \right)$ are obtained. Finally, the error variance of model (3.1) is estimated to be 5.777.

3.2. Results in High-Dimensional Case

In this section, we numerically simulate the variance estimation of the varying index coefficient model in high-dimensional conditions.

The profile spline modal estimator (PSME) produces good variance estimates in low-dimensional settings. However, with high-dimensional data it suffers from the curse of dimensionality, and the bias of the estimated variance increases with the dimension. The refitted cross-validation (RCV) method proposed by Fan et al. (2012) [19] has been shown, through theoretical proofs and simulation, to be an effective way to overcome this problem in high-dimensional settings. This paper therefore applies the RCV method to the high-dimensional varying index coefficient model for the first time. There are two types of covariates in the varying index coefficient model (2.2). The first type is Z; if Z were a single variable, its interaction with the covariates X would be difficult to detect. The second type is the covariate X. What we are concerned with is the estimation of the variance of the varying index coefficient model when the first type of covariate, Z, is high-dimensional.
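To make the two-split idea concrete, here is a Python sketch of RCV for a plain linear model with correlation screening as the selector. It illustrates only the split-select-refit-swap scheme of Fan et al. (2012); the paper applies the same idea to the VICM, and the screening size and toy data below are assumptions:

```python
import numpy as np

def rcv_variance(X, Y, n_keep=5):
    """Refitted cross-validation: select variables on one half of the data,
    refit by OLS and estimate sigma^2 on the other half, swap the roles of
    the halves, and average the two estimates."""
    n = len(Y)
    idx = np.arange(n)
    halves = (idx[: n // 2], idx[n // 2 :])
    est = []
    for sel_idx, fit_idx in (halves, halves[::-1]):
        # Selection half: rank |corr(X_j, Y)| and keep the top n_keep.
        Xs, Ys = X[sel_idx], Y[sel_idx]
        corr = np.abs([np.corrcoef(Xs[:, j], Ys)[0, 1]
                       for j in range(X.shape[1])])
        keep = np.argsort(corr)[::-1][:n_keep]
        # Refitting half: OLS on the selected columns only.
        Xf, Yf = X[fit_idx][:, keep], Y[fit_idx]
        coef, *_ = np.linalg.lstsq(Xf, Yf, rcond=None)
        resid = Yf - Xf @ coef
        est.append(resid @ resid / (len(fit_idx) - len(keep)))
    return float(np.mean(est))

# Toy data (assumed): true sigma^2 = 1, two active covariates out of 50.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 50))
Y = 2 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)
s2 = rcv_variance(X, Y)
```

Because the refitting half never saw the selection step, spurious variables picked by chance on the other half do not bias the residual variance downward, which is the key advantage over the naive two-stage estimator.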

We first perform a data simulation. The setting of the true model is the same as model (3.1), with only the dimension of the covariate Z changed: the first type of covariate, Z, is set to be high-dimensional, with ${Z}_{ik}=\Phi \left({Z}_{ik}^{*}\right)-0.5$ , $k=1,2,\cdots ,d$ , that is, Z is a d-dimensional covariate. The settings of ${m}_{l}\left({u}_{l}\right)$ and the error term are also the same as in model (3.1).

Figure 1. Plots of the estimated nonparametric curves.

Table 3. The coefficient of the B-spline basis function.

Table 4. Root mean square error (RMSE) of ${\stackrel{^}{m}}_{l}\left(\cdot ,\beta \right)$ .

In high-dimensional data, the covariates are often highly correlated. However, not all of them are related to the response Y; in fact, only a small number of covariates are associated with Y. For such high-dimensional variable selection, Fan and Lv (2008) [10] proposed the SIS method, which has the sure screening property and can first reduce the dimension d to a relatively moderate size while retaining all important variables in the model. Lower-dimensional model selection methods such as SCAD, the Dantzig selector, LASSO, or adaptive LASSO can then be used to compress the smaller coefficients to zero, thereby removing the irrelevant variables that survive the SIS screening. The idea of SIS makes high-dimensional model selection feasible, greatly speeds up variable selection, and makes the model selection problem efficient and modular. The SIS screening step can be combined with any model selection technique. Fan et al. (2010) [20] applied the SIS method to the Cox proportional hazards model. The Cox proportional hazards model is similar to the varying index coefficient model considered in this paper: both are nonlinear models that require estimating a nonparametric function together with parametric coefficients. Therefore, we believe that in the high-dimensional case it is feasible to use the SIS method for the initial variable screening of the varying index coefficient model.

We use the SIS method proposed by Fan and Lv (2008) [10] to select variables; the number of variables retained is tentatively set to 20. The computations are carried out in R, for which we wrote the functions VicmRCV and vicmest to implement the RCV estimation procedure. The simulation was repeated 100 times, yielding the box plot of the variance shown in Figure 2, where naive denotes the simple two-stage approach and rcv denotes the refitted cross-validation method.
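The core of the RCV procedure can be sketched for a linear working model (VicmRCV and vicmest are the authors' R functions for the full VICM fit; the Python names below, the simple correlation screen, and the simulated data are illustrative assumptions):

```python
import numpy as np

def corr_screen(Z, y, d0):
    """Marginal-correlation screening (the SIS idea), kept minimal."""
    corr = np.corrcoef(Z.T, y)[:-1, -1]
    return np.sort(np.argsort(-np.abs(corr))[:d0])

def rcv_variance(Z, y, d0=20, seed=0):
    """Refitted cross-validation (Fan, Guo and Hao, 2012), linear sketch:
    split the sample into two halves, screen variables on one half, then
    refit by OLS on the OTHER half, so spurious correlations picked up at
    the screening stage do not bias the residual variance; finally average
    the two half-sample estimates."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    A, B = idx[: n // 2], idx[n // 2:]

    def one_direction(sel, fit):
        keep = corr_screen(Z[sel], y[sel], d0)             # stage 1: screen
        X = np.column_stack([np.ones(len(fit)), Z[np.ix_(fit, keep)]])
        beta, *_ = np.linalg.lstsq(X, y[fit], rcond=None)  # stage 2: refit
        r = y[fit] - X @ beta
        return r @ r / (len(fit) - X.shape[1])             # df-adjusted variance

    return 0.5 * (one_direction(A, B) + one_direction(B, A))

rng = np.random.default_rng(2)
Z = rng.standard_normal((200, 100))
y = Z[:, 0] + 0.5 * Z[:, 3] + rng.standard_normal(200)  # true error variance 1
print(round(rcv_variance(Z, y), 2))
```

The cross-fitting is the whole point of the design: variables selected and refit on the same data absorb noise and push the naive variance estimate down, which is exactly the bias the naive two-stage method suffers from.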

It can be seen from Figure 2 that even when the dimension of the variable Z is as high as $d=100$ while the sample size is only 200, the variance estimated by the refitted cross-validation (RCV) two-stage method is better than that of the simple two-stage method. However, the estimated error variance is still large, which suggests that the estimation method for the varying index coefficient model described in Section 3.1 is not yet sufficiently accurate or robust.

Figure 2. Box plot of the variance calculated by using the simple two-stage method and the RCV method.

As shown in Table 5, changing the values of d and n gives further simulation results. Table 5 compares the ordinary two-stage method (Naive-SIS) with the RCV two-stage method (RCV-SIS) at $n=100$ and $d=50,100,500$ . Comparing the two estimators, we find that the mean square error (MSE) of the RCV two-stage estimate is smaller in every dimension than that of the ordinary two-stage method; that is, the model estimated by the RCV method is more accurate. Table 5 also reveals another pattern. Conventionally, as the dimension d increases, estimation accuracy decreases, so the MSE should become larger. For the ordinary two-stage method, however, this rule does not hold at all: at the largest dimension ( $d=500$ ) the MSE is the smallest, with a value of 6.692, while at $d=100$ the MSE is the largest, with a value of 8.273. In short, the ordering is scrambled, and the MSE does not in general grow with the dimension.

In fact, this phenomenon is not hard to explain. In the variable selection stage, the SIS method keeps the twenty variables whose marginal correlation coefficients rank highest (in descending order). Since the fixed value 20 is small relative to the number of covariates, the probability of selecting all of the important variables is relatively low. The RCV-SIS column of Table 5 shows that the SIS method is much more stable once combined with RCV: at $d=50$ the estimated MSE is the smallest, with a value of 4.838, and in all three dimensions the error estimated by the RCV method is smaller than the mean square error of the ordinary two-stage method.

4. Real Data Analysis

In this section, we use data collected by the Mayo Clinic in its trial of primary biliary cirrhosis (PBC) conducted from 1974 to 1984; the data are available in the R survival package. The dataset includes 424 PBC patients referred to the Mayo Clinic during that decade who met the eligibility criteria for the randomized placebo-controlled trial.

In the dataset, the first 312 patients participated in the randomized trial, while the other 112 patients did not participate in the clinical trial but agreed to have basic measurements recorded and to follow the medical recommendations. Six of the latter were lost to follow-up shortly after diagnosis, leaving 106 such cases together with the 312 randomized participants. We preprocessed the dataset in R.

Table 5. Mean Square Error (MSE) for two different estimation methods at $n=100$ .

We first removed the samples with missing values; 276 samples were ultimately used in the computation. The specific variables are described in Table 6.

As can be seen from Table 6, the response variable Y is the survival time of the patient. Since the values of Y vary widely, we log-transform Y to reduce the error. There are three covariates of type X: the patient's status $X_1$ (Status), age $X_2$ (Age), and sex $X_3$ (Sex). The sex variable is included because Huang Siyu (1985) [21] found that the incidence of primary biliary cirrhosis (PBC) in men is much lower than in women, so sex is strongly related to PBC. The other type of covariate, Z, comprises 15 variables, including albumin (albumin), alkaline phosphatase (alk.phos), triglycerides (trig), and platelet count (platelet). The varying index coefficient model constructed in this section is therefore

$\mathrm{ln}\left({Y}_{i}\right)=m\left({Z}_{i},{X}_{i},\beta \right)+{\epsilon}_{i}={\displaystyle \sum _{l=1}^{3}{m}_{l}\left({\displaystyle \sum _{k=1}^{15}{Z}_{ik}{\beta}_{lk}}\right){X}_{il}}+{\epsilon}_{i}$ . (4.1)
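Each unknown function $m_l$ in the model is approximated by a B-spline expansion, $m_l(u) \approx \sum_j \gamma_{lj} B_j(u)$. A minimal sketch of evaluating such a basis via the Cox-de Boor recursion follows; the clamped knot vector and cubic degree are illustrative assumptions, not the paper's exact spline setup.

```python
import numpy as np

def bspline_basis(u, knots, degree):
    """Evaluate all B-spline basis functions at the points u via the
    Cox-de Boor recursion; returns an array of shape (len(u), n_basis)
    with n_basis = len(knots) - degree - 1."""
    u = np.asarray(u, dtype=float)
    t = np.asarray(knots, dtype=float)
    # Degree 0: indicator of each half-open knot span [t_i, t_{i+1})
    B = ((t[:-1] <= u[:, None]) & (u[:, None] < t[1:])).astype(float)
    for p in range(1, degree + 1):
        Bp = np.zeros((len(u), len(t) - p - 1))
        for i in range(len(t) - p - 1):
            col = np.zeros(len(u))
            if t[i + p] > t[i]:          # left term of the recursion
                col += (u - t[i]) / (t[i + p] - t[i]) * B[:, i]
            if t[i + p + 1] > t[i + 1]:  # right term of the recursion
                col += (t[i + p + 1] - u) / (t[i + p + 1] - t[i + 1]) * B[:, i + 1]
            Bp[:, i] = col
        B = Bp
    return B

# Clamped cubic basis on [0, 1) with one interior knot at 0.5
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
u = np.linspace(0, 0.99, 50)
B = bspline_basis(u, knots, degree=3)
print(B.shape)                         # (50, 5)
print(np.allclose(B.sum(axis=1), 1))   # partition of unity on [0, 1)
```

Once the basis matrix B is available, estimating $m_l$ reduces to estimating the spline coefficients $\gamma_{lj}$ by (weighted) least squares, which is what makes the profile estimation of $\beta$ tractable.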

Table 6. Interpretation of experimental variables in primary biliary cirrhosis (PBC).

Since the components of Z have different physical meanings and different units, fitting the model directly would be meaningless, so we eliminate the effect of the differing scales through a transformation: the 15 variables are standardized (Z-score standardization) before the computation.

Previous studies suggest that variables such as serum bilirubin (Z6), albumin (Z8), urine copper (Z9), alkaline phosphatase (Z10), and prothrombin time (Z14) are strongly related to the response variable Y. We first use the SIS method to keep the 8 covariates with the largest marginal correlations, and then use both the simple two-stage method and the refitted cross-validation (RCV) two-stage method to estimate the coefficients $\beta $ of the covariates Z and the model variance. The results are shown in Table 7.

As can be seen from Table 7, when SIS is used for variable selection, important variables such as Z6 and Z9 are selected all three times. Since the important variables have strong correlations in the theoretical analysis, this indicates that the SIS screening can, to a certain extent, select all important variables with high probability. RCV can re-select variables, picking up a variable missed in the first split or dropping an extra one selected there. The model variance estimated by the RCV-SIS two-stage method is clearly better than that of the simple two-stage method (N-SIS). In the high-dimensional case, the refitted cross-validation method thus performs better in the varying index coefficient model: both the root mean square error and the resulting variance are smaller than the simple two-stage estimates. The RCV-SIS two-stage method therefore predicts patients' survival time more accurately and can provide more reasonable guidance and advice for follow-up medical treatment.

5. Discussion

Table 7. PBC dataset estimation results.

In this paper, we study a new class of semiparametric regression models: varying index coefficient models. The unknown coefficient $\beta $ is estimated by the profile spline modal estimation (PSME) method, while the unknown nonparametric function part is expanded with B-splines. After studying the asymptotic properties of the coefficients, we estimate the coefficient $\beta $ using an iterative method. Through data simulation, we found that the $\beta $ estimated by this method has a small bias, and that the B-spline estimate of the unknown function part also fits well. Finally, under a high-dimensional data setting, we carried out a two-stage RCV estimation of the varying index coefficient model and found that the variance and mean square error estimated by the RCV method are superior to those of the simple two-stage method. In the final empirical stage, we originally intended to model the PBC data with a survival model (a semiparametric varying-coefficient additive hazards model). However, the literature shows that the sex and status variables are closely related to the survival time of patients with primary biliary cirrhosis, and that the covariates Z are related to the three X variables (status, sex and age). We therefore used the varying index coefficient model for the PBC data, and again found that the variance and mean square error of the RCV method are better than those of the simple two-stage method.

Further research on the proposed method is needed. First, the asymptotic properties of the proposed method should be investigated further. Second, this paper only estimates the variance and mean square error of the varying index coefficient model, without studying the estimation of the coefficient $\beta $ and of the nonparametric function in the parametric part of the model; more robust estimation methods could therefore be studied in the future. In addition, more attention could be paid to the asymptotic properties of the nonparametric part of the varying index coefficient model.

Cite this paper

Wang, M., Lv, H. and Wang, Y. (2019) Variance Estimation for High-Dimensional Varying Index Coefficient Models. *Open Journal of Statistics*, **9**, 555-570. doi: 10.4236/ojs.2019.95037.


References

[1] Tibshirani, R. (1996) Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society (Series B), 58, 267-288.

https://doi.org/10.1111/j.2517-6161.1996.tb02080.x

[2] Fan, J. and Peng, H. (2004) On Nonconcave Penalized Likelihood with Diverging Number of Parameters. Annals of Statistics, 32, 928-961.

https://doi.org/10.1214/009053604000000256

[3] Zhao, P. and Yu, B. (2006) On Model Selection Consistency of Lasso. Journal of Machine Learning Research, 7, 2541-2563.

[4] Bunea, F., Tsybakov, A. and Wegkamp, M. (2007) Sparsity Oracle Inequalities for the Lasso. Electronic Journal of Statistics, 64, 330-332.

https://doi.org/10.1214/07-EJS008

[5] Zhang, C.H. and Huang, J. (2008) The Sparsity and Bias of the Lasso Selection in High-Dimensional Linear Regression. Annals of Statistics, 36, 1567-1594.

https://doi.org/10.1214/07-AOS520

[6] Lv, J. and Fan, Y. (2009) A Unified Approach to Model Selection and Sparse Recovery Using Regularized Least Squares. Annals of Statistics, 37, 3498-3528.

https://doi.org/10.1214/09-AOS683

[7] Fan, J. and Lv, J. (2011) Nonconcave Penalized Likelihood with NP-Dimensionality. Journal IEEE Transactions on Information Theory, 57, 5467-5484.

https://doi.org/10.1109/TIT.2011.2158486

[8] Kim, Y., Choi, H. and Oh, H.S. (2008) Smoothly Clipped Absolute Deviation on High Dimensions. Journal of the American Statistical Association, 103, 1665-1673.

https://doi.org/10.1198/016214508000001066

[9] Candès, E. and Tao, T. (2007) The Dantzig Selector: Statistical Estimation When p Is Much Larger than n. Annals of Statistics, 35, 2313-2351.

https://doi.org/10.1214/009053606000001523

[10] Fan, J. and Lv, J. (2008) Sure Independence Screening for Ultrahigh Dimensional Feature Space. Journal of the Royal Statistical Society, 70, 849-911.

https://doi.org/10.1111/j.1467-9868.2008.00674.x

[11] Fan, J., Samworth, R. and Wu, Y. (2009) Ultrahigh Dimensional Feature Selection: Beyond the Linear Model. Journal of Machine Learning Research, 10, 2013-2038.

[12] Zhao, S.D., Cai, T.T. and Li, H. (2014) Variance Estimation in High-Dimensional Linear Models. Biometrika, 2, 269-284.

https://doi.org/10.1093/biomet/ast065

[13] Reid, S., Tibshirani, R. and Friedman, J. (2016) A Study of Error Variance Estimation in Lasso Regression. Statistica Sinica, 26, 35-67.

https://doi.org/10.5705/ss.2014.042

[14] Hastie, T. and Tibshirani, R. (1993) Varying-Coefficient Models. Journal of the Royal Statistical Society (Series B), 55, 757-796.

https://doi.org/10.1111/j.2517-6161.1993.tb01939.x

[15] Wong, H., Ip, W.C. and Zhang, R. (2008) Varying-Coefficient Single-Index Model. Computational Statistics & Data Analysis, 52, 1458-1476.

https://doi.org/10.1016/j.csda.2007.04.008

[16] Ma, S. and Song, X.K. (2015) Varying Index Coefficient Models. Journal of the American Statistical Association, 110, 341-356.

https://doi.org/10.1080/01621459.2014.903185

[17] Xue, L. and Liang, H. (2010) Polynomial Spline Estimation for a Generalized Additive Coefficient Model. Scandinavian Journal of Statistics, 37, 26-46.

https://doi.org/10.1111/j.1467-9469.2009.00655.x

[18] Lv, J., Yang, H. and Guo, C. (2016) Robust Estimation for Varying Index Coefficient Models. Computational Statistics, 31, 1-37.

https://doi.org/10.1007/s00180-015-0595-5

[19] Fan, J., Guo, S. and Hao, N. (2012) Variance Estimation Using Refitted Cross-Validation in Ultrahigh Dimensional Regression. Journal of the Royal Statistical Society (Series B), 74, 37-65.

https://doi.org/10.1111/j.1467-9868.2011.01005.x

[20] Fan, J., Yang, F. and Wu, Y. (2010) High-Dimensional Variable Selection for Cox’s Proportional Hazards Model. Statistics, 105, 205-217.

https://doi.org/10.1214/10-IMSCOLL606

[21] Huang, S.Y. (1985) Is There a Difference in the Severity of Primary Biliary Cirrhosis? International Journal of Digestive Diseases, No. 3, 186-187.
