With the advent of high-throughput technologies, high-dimensional data are frequently generated in the study of biological processes such as disease occurrence and cancer progression. Motivated by these important applications, the statistical analysis of high-dimensional data has developed rapidly; see  and  , and the examples therein.
Model selection and model averaging are two approaches used to improve estimation and prediction in regression problems. Model selection assigns a weight of 1 to a single optimal model and weights of 0 to all other candidate models, so a parsimonious and compact representation of the data is obtained. In recent years, shrinkage methods have become popular because they achieve model selection and parameter estimation simultaneously. Such methods include, but are not limited to, the least absolute shrinkage and selection operator (LASSO, Tibshirani  ), the smoothly clipped absolute deviation (SCAD, Fan and Li  ), the elastic net (Zou and Hastie  ), and the minimax concave penalty (MCP, Zhang  ).
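To illustrate how a shrinkage method performs selection and estimation in one step, the following is a minimal coordinate-descent LASSO sketch; the solver, the simulated data, and the tuning parameter are illustrative choices, not any author's exact implementation:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: minimize (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # (1/n) x_j' x_j for each column
    for _ in range(n_iter):
        for j in range(p):
            # partial residual correlation, excluding predictor j's own fit
            rho = X[:, j] @ (y - X @ b + X[:, j] * b[j]) / n
            # soft-thresholding sets small coefficients exactly to zero
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0] + [0.0] * (p - 2))   # only 2 active predictors
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = lasso_cd(X, y, lam=0.1)
# the noise predictors are shrunk to exactly zero while the signals survive,
# i.e., selection and estimation happen simultaneously
```

The soft-thresholding step is what distinguishes such penalties from ridge-type shrinkage: coefficients below the threshold are set exactly to zero rather than merely shrunk.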
However, model selection ignores the additional uncertainty introduced by the selection process itself, or even introduces bias, and therefore often underestimates variance  . In addition, different selection methods or criteria may yield different best models, so inference based on the final model can be seriously misleading.
Instead of relying on a single model, model averaging compromises across a set of competing models by assigning them different weights. In doing so, model uncertainty is incorporated into the conclusions about the unknown parameters. Moreover, if the weights can be properly determined, prediction performance can be enhanced  .
Regarding model averaging techniques, Frequentist Model Averaging (FMA) and Bayesian Model Averaging (BMA) are the two main approaches in the literature. Compared with FMA, there are extensive references on BMA, in which a prior probability is assigned to each candidate model to account for model uncertainty; for an overview of BMA, see  . The FMA approach, whose estimators are determined entirely by the data, has received increasing attention over the last decade, as it avoids problems such as how to set priors and how to reconcile priors when they conflict.
The aim of this paper is to review current FMA methods for high-dimensional linear models. FMA estimation methods are surveyed in Section 2, and some future research topics are discussed in Section 3.
2. High-Dimensional FMA
So far, most model averaging approaches have been developed for the classic setting in which the number of observations exceeds the number of predictors, with the main focus on determining the weights for the individual models. These approaches include Akaike information criterion model averaging (AIC, Akaike  ), Bayesian information criterion model averaging (BIC, Hoeting et al.  ), Mallows model averaging (Hansen  ; Wan et al.  ), and jackknife model averaging (Hansen and Racine  ; Zhang et al.  ), to name but a few.
For the high-dimensional setting, however, model averaging has only recently been studied. This setting differs substantially from the fixed-dimensional case, because many fixed-dimensional model averaging procedures either do not work at all or require theoretical or computational adjustment for their implementation.
Given a dataset of $n$ observations, a linear regression model takes the form
$$y_i = x_i^{\top}\beta + \varepsilon_i, \qquad i = 1, \ldots, n, \qquad (1)$$
where $y_i$ is the response in the $i$th trial, $x_i = (x_{i1}, \ldots, x_{ip})^{\top}$ are the predictors, $\beta = (\beta_1, \ldots, \beta_p)^{\top}$ are the regression coefficients, and $\varepsilon_i$ is the error term. Alternatively, in matrix form, model (1) can be written as
$$y = X\beta + \varepsilon,$$
where $y = (y_1, \ldots, y_n)^{\top}$, $X = (x_1, \ldots, x_n)^{\top}$, and $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)^{\top}$.
For data in which the number of predictors $p$ is much greater than the number of observations $n$, Ando and Li  proposed a two-stage model averaging procedure. The procedure first divides the $p$ predictors into groups according to the absolute marginal correlations between the predictors and the response. Let model $\mathcal{M}_k$ consist of the regressors whose marginal correlations fall into the $k$th group. The first group contains the highest values; the last group has values closest to 0 and is discarded, so the number of candidate models is $M$. Each candidate model can be written in matrix form as $y = X_k\beta_k + \varepsilon$, for $k = 1, \ldots, M$. Since the number of predictors in each candidate model is smaller than the sample size, the regression coefficients are estimated by the usual least-squares method as $\hat{\beta}_k = (X_k^{\top}X_k)^{-1}X_k^{\top}y$, with predicted value $\hat{y}_k = X_k\hat{\beta}_k$.
After the candidate models and their corresponding least-squares predicted values are obtained, the second stage of the procedure of  determines the model weights. Let $\tilde{y}_k = (\tilde{y}_{k,1}, \ldots, \tilde{y}_{k,n})^{\top}$ be an $n$-dimensional vector, where $\tilde{y}_{k,i}$ is the predicted value of the $i$th observation from model $\mathcal{M}_k$ estimated without the $i$th observation. The optimal weight vector is then obtained by minimizing the delete-one cross-validation criterion
$$CV(w) = \Big\| y - \sum_{k=1}^{M} w_k \tilde{y}_k \Big\|^2,$$
where $w = (w_1, \ldots, w_M)^{\top}$ with $0 \le w_k \le 1$ for each $k$. Finally, the model averaging predicted value is expressed as
$$\hat{y}(\hat{w}) = \sum_{k=1}^{M} \hat{w}_k \hat{y}_k.$$
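The two-stage procedure can be sketched as follows. The group size, the simulated data, and the use of SciPy's bounded least squares for the weight step are simplifying assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import lsq_linear

def loo_predictions(Xk, y):
    """Delete-one least-squares predictions via the hat-matrix shortcut."""
    H = Xk @ np.linalg.solve(Xk.T @ Xk, Xk.T)   # hat matrix of candidate model
    resid = y - H @ y
    return y - resid / (1.0 - np.diag(H))       # leave-one-out fitted values

def two_stage_average(X, y, M=4, group_size=5):
    n, p = X.shape
    # Stage 1: rank predictors by absolute marginal correlation with y;
    # keep the top M groups, discarding the low-correlation remainder
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    order = np.argsort(corr)[::-1]
    models = [order[k * group_size:(k + 1) * group_size] for k in range(M)]
    # Stage 2: delete-one CV predictions for each low-dimensional candidate
    Ytil = np.column_stack([loo_predictions(X[:, m], y) for m in models])
    # weights restricted to [0,1]^M (not the simplex), minimizing ||y - Ytil w||^2
    w = lsq_linear(Ytil, y, bounds=(0.0, 1.0)).x
    # model averaging fitted values from the full-sample least-squares fits
    Yhat = np.column_stack([X[:, m] @ np.linalg.lstsq(X[:, m], y, rcond=None)[0]
                            for m in models])
    return w, Yhat @ w

rng = np.random.default_rng(1)
n, p = 80, 200                       # p >> n
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.standard_normal(n)
w, fitted = two_stage_average(X, y)
```

Each weight is optimized within $[0,1]$ rather than on the simplex, reflecting the relaxed constraint discussed below.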
Ando and Li  make several contributions. One notable feature of the method is the relaxation of the constraint on the model weights: the standard requirement that the weights sum to 1 is relaxed so that each weight can vary freely between 0 and 1, and this relaxation is shown to help lower the prediction error. Furthermore, the algorithm is computationally feasible for high-dimensional data, since each candidate model and its corresponding weight are first determined in a low-dimensional setting and then combined. Theoretically, the proposed method is proved to asymptotically achieve the lowest possible prediction loss, an important property for prediction performance.
Following  , Ando and Li  further extended model averaging to high-dimensional generalized linear models. Still allowing the weights to vary between 0 and 1, the Kullback-Leibler distance is used in  in place of the squared error as the risk measure, to overcome several technical and theoretical challenges.
Nevertheless, Lin et al.  showed through a simulated example that the two-stage model averaging procedure in  tends to have high variance and may lead the final estimator to overfit. They argued that the increase in variance is due to reusing the same data in both steps: generating the candidate models and estimating the model weights.
To reduce the variance of the estimators, Lin et al.  proposed a random-splitting approach that first divides the original data set into a training set and a test set $B$ times, $b = 1, \ldots, B$. For each split $b$, the variable selection method LASSO is applied to the training set, yielding a candidate model and the corresponding coefficient estimates $\hat{\beta}_{b,\lambda}$ for each candidate tuning parameter $\lambda$. In the next step, second-level data are constructed from the test-set predictions,
$$\hat{y}_{i,\lambda} = \frac{1}{|B_i|} \sum_{b \in B_i} x_i^{\top}\hat{\beta}_{b,\lambda},$$
where $B_i$ is the set of indexes of the splits whose test set contains observation $i$. After the second-level data are determined, the optimal weight vector is estimated by minimizing
$$\sum_{i} \Big( y_i - \sum_{\lambda} w_{\lambda}\, \hat{y}_{i,\lambda} \Big)^2.$$
Finally, the model averaging predicted value takes the form
$$\hat{y}_i = \sum_{\lambda} \hat{w}_{\lambda}\, \hat{y}_{i,\lambda}.$$
The procedure of  selects candidate models and estimates coefficients using the training sets, while determining the optimal weights using only the test sets, which avoids overfitting and can improve prediction accuracy by combining models from multiple random splits. The main price one pays for the random splitting, however, is a significant increase in computational cost.
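The random-splitting scheme can be sketched as below. The half/half split, the minimal coordinate-descent LASSO solver, and the bounded-least-squares weight step are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import lsq_linear

def lasso_cd(X, y, lam, n_iter=50):
    """Minimal coordinate-descent LASSO (illustrative stand-in for any solver)."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ (y - X @ b + X[:, j] * b[j]) / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def random_split_average(X, y, lambdas, B=10, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    pred_sum = np.zeros((n, len(lambdas)))   # summed test-set predictions
    pred_cnt = np.zeros(n)                   # how many test sets contain obs i
    for _ in range(B):
        test = rng.choice(n, size=n // 2, replace=False)
        train = np.setdiff1d(np.arange(n), test)
        for l, lam in enumerate(lambdas):
            b = lasso_cd(X[train], y[train], lam)   # fit on training set only
            pred_sum[test, l] += X[test] @ b        # predict on test set only
        pred_cnt[test] += 1
    seen = pred_cnt > 0
    # second-level data: per-observation average of its test-set predictions
    Yhat = pred_sum[seen] / pred_cnt[seen][:, None]
    # weights in [0,1] minimizing squared error on the second-level data
    return lsq_linear(Yhat, y[seen], bounds=(0.0, 1.0)).x

rng = np.random.default_rng(2)
n, p = 80, 60
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * rng.standard_normal(n)
w = random_split_average(X, y, lambdas=[0.05, 0.1, 0.2])
```

Because the weights are estimated only from observations held out of each fit, the same data never serve both to build a candidate model and to score it, which is the source of the variance reduction.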
3. Conclusion and Discussion
In this paper, we have reviewed the development of the FMA approach for high-dimensional linear regression models. The performance of FMA procedures depends heavily on the choice of weights in estimation, since different weights yield different risks and asymptotic properties. Consequently, much of the current work focuses on weight choice to achieve stable prediction. Another issue is how to handle high-dimensional settings, where the least-squares estimates are not unique. The general idea is to reduce the dimension first and then combine the resulting low-dimensional models with appropriate weights.
Although substantial progress has been made recently, research on the FMA approach is still a relatively new topic, and many problems remain open for future work.
One possible direction is the extension of the FMA approach to other modeling settings, such as generalized linear mixed models and the Cox proportional hazards model, both of which are widely used in biological and medical research. For example, Zhang and Zou  proposed a model averaging approach for linear mixed-effects models. Determining optimal weights under more complex model structures remains a meaningful line of work.
We also note that missing values are quite common in high-dimensional data, which leaves room for further research in model averaging. Schomaker et al.  proposed an FMA method in the presence of missing observations for low-dimensional data. Imputation-based handling of missing data in larger data sets is another direction for future study.
Finally, most current research on weight choice restricts the weights to be non-negative; it would be interesting to explore further relaxing the weights to allow negative values. These and many other unsettled issues deserve further investigation.
The authors are grateful for a grant from Shandong University (IFYT18032).
 Fan, J. and Li, R. (2001) Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties. Journal of the American Statistical Association, 96, 1348-1360.
 Zou, H. and Hastie, T. (2005) Regularization and Variable Selection via the Elastic Net. Journal of the Royal Statistical Society: Series B, 67, 301-320.
 Liang, H., Zou, G., Wan, A.T.K. and Zhang, X. (2011) Optimal Weight Choice for Frequentist Model Average Estimators. Journal of the American Statistical Association, 106, 1053-1066.
 Zhang, X.Y., Wan, A.T.K. and Zou, G.H. (2013) Model Averaging by Jackknife Criterion in Models with Dependent Data. Journal of Econometrics, 174, 82-94.
 Ando, T. and Li, K.C. (2014) A Model-Averaging Approach for High-Dimensional Regression. Journal of the American Statistical Association, 109, 254-265.
 Schomaker, M., Wan, A.T.K. and Heumann, C. (2010) Frequentist Model Averaging with Missing Observations. Computational Statistics and Data Analysis, 54, 3336-3347.