Partial Functional Linear Models with ARCH Errors


1. Introduction

To combine the flexibility of linear regression models with the recent methodology for functional linear regression models, the partial functional linear model, introduced by [1] , is considered as follows:

$Y={\beta}^{\prime}z+{\displaystyle {\int}_{\mathcal{T}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\gamma \left(t\right)X\left(t\right)\text{d}t+\epsilon ,$ (1.1)

where Y is a real-valued response variable, z is a d-dimensional vector of random variables with zero mean and finite second moments, and $X\left(t\right)$ is an explanatory functional variable defined on $\mathcal{T}$ with zero mean and finite second moments (i.e. $\text{E}{\left|X\left(t\right)\right|}^{2}<\infty $ for all $t\in \mathcal{T}$ ); β is a d-dimensional vector of unknown parameters, $\gamma \left(t\right)$ is a square integrable function on $\mathcal{T}$ , and ε is a random error independent of z and X. Without loss of generality, it is assumed that $\mathcal{T}=\left[\mathrm{0,1}\right]$ in the remainder of this paper. All random variables are defined on the same probability space $\left(\Omega \mathrm{,}\mathcal{A}\mathrm{,}P\right)$ .

Model (1.1) has been studied by many authors from different points of view. Regarding estimation of model (1.1), for example, reference [2] studied estimation using nonparametric kernel regression methods and showed that the proposed estimators are asymptotically normal and that the estimator of the slope function $\gamma \left(t\right)$ is consistent in the supremum norm. Reference [3] considered the least squares estimator of model (1.1), using the Karhunen-Loève (K-L) expansion to approximate the slope function, and established asymptotic properties of the resulting estimators. Based on Tikhonov regularization, [4] introduced a functional ridge regression estimation procedure and showed asymptotic normality of the estimated infinite-dimensional regression coefficients as well as the convergence rate of the estimated slope function. Using polynomial splines, [5] considered estimation of model (1.1) by minimizing the sum of squared residuals and further studied the asymptotic properties of the estimators. Recently, to obtain robust estimators of the coefficients of (1.1), the model has also been considered in the framework of quantile regression ( [6] [7] ). Some authors have considered model (1.1) from the viewpoint of hypothesis testing; for instance, [8] constructed a pivot based on the squared residuals under the null and alternative hypotheses to test whether the linear term in (1.1) exists. Moreover, generalized forms of model (1.1), such as the semiparametric partially linear regression model for functional data and the functional partial linear single-index model, have been considered by [9] and [10] respectively.

However, all these works share the common assumption that the responses are observed independently. As is well known, uncertainty such as volatility is a common phenomenon in modern economic and financial theory, so the assumption of independent response observations may not hold in real data analysis. Motivated by this, we reconsider model (1.1) so that it can reflect the volatility of the data. Conditional heteroscedasticity reflects the size of volatility appropriately, and one of the most popular models exhibiting heteroscedasticity in econometrics is the autoregressive conditional heteroscedasticity (ARCH) model, which was introduced by [11] and has had an enormous impact on the modeling of financial data. Many authors have studied ARCH models to refine their theory. For example, reference [12] considered the existence of a strictly stationary and ergodic solution and higher moments of the ARCH model; [13] studied the strong law of large numbers for the absolute value sequence of an ARCH process.

If we have n observations $\left\{\left({z}_{1}\mathrm{,}{X}_{1}\mathrm{,}{Y}_{1}\right)\mathrm{,}\cdots \mathrm{,}\left({z}_{n}\mathrm{,}{X}_{n}\mathrm{,}{Y}_{n}\right)\right\}$ , model (1.1) can be written as

${Y}_{i}={\beta}^{\prime}{z}_{i}+{\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\gamma \left(t\right){X}_{i}\left(t\right)\text{d}t+{\epsilon}_{i}.$ (1.2)

The ARCH(p) model for $\left\{{\epsilon}_{i}\right\}$ is defined by the following equations:

$\{\begin{array}{l}{\epsilon}_{i}={e}_{i}{h}_{i}^{1/2},\\ {h}_{i}={\alpha}_{0}+{\alpha}_{1}{\epsilon}_{i-1}^{2}+{\alpha}_{2}{\epsilon}_{i-2}^{2}+\cdots +{\alpha}_{p}{\epsilon}_{i-p}^{2},\end{array}$ (1.3)

where ${\alpha}_{0}>0,{\alpha}_{i}\ge 0,i=1,\cdots ,p$ . Besides, $\left\{{e}_{i}\mathrm{:}i\ge 1\right\}$ is an independent and identically distributed (i.i.d.) random sequence, independent of $\left\{{\epsilon}_{t}:t<i\right\}$ , with $E{e}_{i}=0$ and $E{e}_{i}^{2}=1$ . For the sake of establishing the asymptotic properties of the joint model (1.2)-(1.3), we assume in this paper that the distribution functions $\left\{{F}_{i}\right\}$ of $\left\{{\epsilon}_{i}^{2}\right\}$ are absolutely continuous with continuous densities ${f}_{i}$ , which are uniformly bounded away from 0 and $\infty $ at the 1/2-quantile points ${\xi}_{i},i=1,\cdots ,n$ . Moreover, similar to [3] , we assume that $\left({z}_{1}\mathrm{,}{X}_{1}\right)\mathrm{,}\cdots \mathrm{,}\left({z}_{n}\mathrm{,}{X}_{n}\right)$ in (1.2) are i.i.d.
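As a quick illustration, the recursion (1.3) can be simulated directly. The sketch below uses standard normal innovations and a burn-in period; the function name, the burn-in length and the choice of innovation distribution are our own, not part of the model specification.

```python
import numpy as np

def simulate_arch(n, alpha, burn=500, rng=None):
    """Simulate eps_i = e_i * h_i^{1/2} with
    h_i = alpha[0] + alpha[1]*eps_{i-1}^2 + ... + alpha[p]*eps_{i-p}^2,
    using i.i.d. N(0,1) innovations e_i (an illustrative choice)."""
    if rng is None:
        rng = np.random.default_rng(0)
    alpha = np.asarray(alpha, dtype=float)
    p = len(alpha) - 1
    eps = np.zeros(n + burn)
    eps2 = np.zeros(n + burn)          # squared errors, zero-initialized
    for i in range(n + burn):
        lags = eps2[max(i - p, 0):i][::-1]        # eps_{i-1}^2, eps_{i-2}^2, ...
        h = alpha[0] + np.dot(alpha[1:1 + len(lags)], lags)
        eps[i] = rng.standard_normal() * np.sqrt(h)
        eps2[i] = eps[i] ** 2
    return eps[burn:]                  # drop the burn-in so the start-up fades
```

Under ${\displaystyle {\sum}_{j=1}^{p}}{\alpha}_{j}<1$ the simulated sequence is (approximately, after burn-in) stationary with variance ${\alpha}_{0}/\left(1-{\alpha}_{1}-\cdots -{\alpha}_{p}\right)$.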

Ordinary regression models with ARCH errors have been considered by many authors. For example, [14] considered a p-th order autoregressive process with ARCH errors, and [15] studied estimation of partly linear regression models with ARCH(p) errors. Under some regularity conditions, we study the estimation of the unknown parameters in the joint model (1.2)-(1.3), and propose a hybrid estimation method combining functional principal component analysis for the mean model with least absolute deviation for the error model. The asymptotic normality of the real-valued parameter estimators is established, and the convergence rate of the slope function estimator is obtained.

The rest of the paper is organized as follows. Section 2 gives the estimation of parameters for the partial functional linear regression models as well as ARCH(p) errors. Asymptotic theory of the proposed estimators is given in Section 3. In Section 4, we carry out a simulation study to illustrate the finite sample performance, and a real data analysis is conducted in Section 5. Some preliminary lemmas and the proofs of the theorems are presented in Appendix.

2. Estimation

Firstly, we shall study how to construct the estimators $\stackrel{^}{\beta},\stackrel{^}{\gamma}$ of $\beta ,\gamma $ in this section. Let $\langle \cdot \mathrm{,}\cdot \rangle $ and $\Vert \text{\hspace{0.05em}}\cdot \text{\hspace{0.05em}}\Vert $ denote the inner product and norm on ${L}^{2}\left[\mathrm{0,1}\right]$ respectively. Denote the covariance function of the process X by ${C}_{X}$ , which is continuous on $\mathcal{T}\times \mathcal{T}$ . Then we have the following expansion

${C}_{X}\left(s,t\right)={\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\lambda}_{j}{\rho}_{j}\left(s\right){\rho}_{j}(t)$

by Mercer’s theorem ( [16] ) with nonnegative eigenvalues ${\lambda}_{1}\mathrm{,}{\lambda}_{2}\mathrm{,}\cdots $ and continuous orthonormal eigenfunctions ${\rho}_{1}\mathrm{,}{\rho}_{2}\mathrm{,}\cdots $ of the covariance operator. For convenience, we assume ${\lambda}_{1}>{\lambda}_{2}>\cdots >0$ throughout this paper. Therefore, by K-L expansion, one has

${X}_{i}\left(t\right)={\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{U}_{ij}{\rho}_{j}(t)$

and

$\gamma \left(t\right)={\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\gamma}_{j}{\rho}_{j}\left(t\right),$

where ${U}_{ij}=\langle {X}_{i},{\rho}_{j}\rangle $ are uncorrelated random variables with $\text{E}\left[{U}_{ij}\right]=0$ and $\text{E}\left[{U}_{ij}^{2}\right]={\lambda}_{j}$ , and ${\gamma}_{j}=\langle \gamma ,{\rho}_{j}\rangle $ . Then (1.2) is equivalent to

${Y}_{i}={\beta}^{\prime}{z}_{i}+{\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\gamma}_{j}{U}_{ij}+{\epsilon}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,\cdots ,n.$ (2.1)

To estimate the parameters in (1.2), following the idea of [3] , we approximate the second term in (2.1) by the finite sum

${Y}_{i}\doteq {\beta}^{\prime}{z}_{i}+{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\gamma}_{j}{U}_{ij}+{\epsilon}_{i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,\cdots ,n,$ (2.2)

where $m\to \infty $ as $n\to \infty $ . Furthermore, we employ the empirical version of ${C}_{X}$

${\stackrel{^}{C}}_{X}\left(s,t\right)=\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{X}_{i}\left(s\right){X}_{i}\left(t\right)={\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\lambda}}_{j}{\stackrel{^}{\rho}}_{j}\left(s\right){\stackrel{^}{\rho}}_{j}(t)$

with $\left({\stackrel{^}{\lambda}}_{j}\mathrm{,}{\stackrel{^}{\rho}}_{j}\right)$ being the pairs of eigenvalues and eigenfunctions of covariance operator related to ${\stackrel{^}{C}}_{X}$ and ${\stackrel{^}{\lambda}}_{1}\ge {\stackrel{^}{\lambda}}_{2}\ge \cdots \ge 0$ , and substitute ${U}_{ij}$ in (2.2) with ${\stackrel{^}{U}}_{ij}=\langle {X}_{i},{\stackrel{^}{\rho}}_{j}\rangle $ . To get an elegant matrix form for model (2.2), denote $Y={\left({Y}_{1}\mathrm{,}\cdots \mathrm{,}{Y}_{n}\right)}^{\prime}$ , $Z={\left({z}_{1}\mathrm{,}\cdots \mathrm{,}{z}_{n}\right)}^{\prime}$ , ${U}_{m}={\left({\stackrel{^}{U}}_{ij}\right)}_{\begin{array}{c}i=1,\cdots ,n\\ j=1,\cdots ,m\end{array}}$ , $\stackrel{\u02dc}{\gamma}={\left({\gamma}_{1}\mathrm{,}\cdots \mathrm{,}{\gamma}_{m}\right)}^{\prime}$ and $\epsilon ={\left({\epsilon}_{1},\cdots ,{\epsilon}_{n}\right)}^{\prime}$ . Then (2.2) can be rewritten as

$Y\doteq Z\beta +{U}_{m}\stackrel{\u02dc}{\gamma}+\epsilon ,$

and the least squares estimators $\stackrel{^}{\beta}$ and $\stackrel{^}{\stackrel{\u02dc}{\gamma}}$ are given by

${\left({\stackrel{^}{\beta}}^{\prime}\mathrm{,}{\stackrel{^}{\stackrel{\u02dc}{\gamma}}}^{\prime}\right)}^{\prime}=\mathrm{arg}\mathrm{min}{\left(Y-Z\beta -{U}_{m}\stackrel{\u02dc}{\gamma}\right)}^{\prime}\left(Y-Z\beta -{U}_{m}\stackrel{\u02dc}{\gamma}\right)\mathrm{.}$

By simple calculation, we have

$\stackrel{^}{\beta}={\left({Z}^{\prime}\left(I-{V}_{m}\right)Z\right)}^{-1}{Z}^{\prime}\left(I-{V}_{m}\right)Y$

with ${V}_{m}={U}_{m}{\left({{U}^{\prime}}_{m}{U}_{m}\right)}^{-1}{{U}^{\prime}}_{m}$ and

$\stackrel{^}{\stackrel{\u02dc}{\gamma}}={\left({{U}^{\prime}}_{m}{U}_{m}\right)}^{-1}{{U}^{\prime}}_{m}\left(Y-Z\stackrel{^}{\beta}\right)$

provided that ${\left({Z}^{\prime}\left(I-{V}_{m}\right)Z\right)}^{-1}$ exists (this is true with probability tending to 1, see Lemma 1 in [8] ). The estimator $\stackrel{^}{\gamma}$ of γ can be given as

$\stackrel{^}{\gamma}(\cdot )={\displaystyle \underset{j=1}{\overset{m}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\gamma}}_{j}{\stackrel{^}{\rho}}_{j}(\cdot ),$

where ${\stackrel{^}{\gamma}}_{j}$ is the jth element of $\stackrel{^}{\stackrel{\u02dc}{\gamma}}$ .

To get asymptotic properties of $\stackrel{^}{\beta}$ , let ${\stackrel{^}{C}}_{z}={n}^{-1}{\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{i}{{z}^{\prime}}_{i}$ , ${\stackrel{^}{C}}_{zY}={n}^{-1}{\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{i}{Y}_{i}$ , ${\stackrel{^}{C}}_{zX}\left(t\right)={n}^{-1}{\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{i}{X}_{i}\left(t\right)$ , ${\stackrel{^}{C}}_{Xz}\left(t\right)={\left({\stackrel{^}{C}}_{zX}\left(t\right)\right)}^{\prime}$ and ${\stackrel{^}{C}}_{YX}\left(t\right)={n}^{-1}{\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Y}_{i}{X}_{i}\left(t\right)$ . Then $\stackrel{^}{\beta}$ is equal to

$\stackrel{^}{\beta}={\left({\stackrel{^}{C}}_{z}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\frac{\langle {\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle \langle {\stackrel{^}{C}}_{Xz}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle}{{\stackrel{^}{\lambda}}_{j}}\right)}^{-1}\left({\stackrel{^}{C}}_{zY}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\frac{\langle {\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle \langle {\stackrel{^}{C}}_{YX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle}{{\stackrel{^}{\lambda}}_{j}}\right)\mathrm{,}$ (2.3)

with $\langle {\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle ={n}^{-1}{\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{z}_{i}\langle {X}_{i}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle $ and $\langle {\stackrel{^}{C}}_{YX},{\stackrel{^}{\rho}}_{j}\rangle ={n}^{-1}{\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{Y}_{i}\langle {X}_{i},{\stackrel{^}{\rho}}_{j}\rangle $ . Similarly, ${\stackrel{^}{\gamma}}_{j}$ can be represented as ${\stackrel{^}{\gamma}}_{j}=\langle {\stackrel{^}{C}}_{YX}-{\stackrel{^}{\beta}}^{\prime}{\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle /{\stackrel{^}{\lambda}}_{j}\mathrm{,}j=\mathrm{1,}\cdots \mathrm{,}m$ .
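The estimation procedure above — eigendecompose the empirical covariance, form the scores $\stackrel{^}{U}_{ij}=\langle X_i,\stackrel{^}{\rho}_j\rangle$, then run least squares on $(Z,U_m)$ — can be sketched numerically for curves observed on a grid. The function name, the discretization, and the quadrature rule are our own choices, assuming an equally spaced grid on [0,1].

```python
import numpy as np

def pfl_estimate(Y, Z, Xmat, tgrid, m):
    """FPCA-based least squares for the partial functional linear model.
    Xmat holds the curves X_i in its rows, observed on the equally spaced
    grid tgrid; L2 inner products are replaced by quadrature sums."""
    n = Xmat.shape[0]
    w = tgrid[1] - tgrid[0]                    # grid spacing (quadrature weight)
    Chat = Xmat.T @ Xmat / n                   # empirical covariance C_hat(s,t)
    vals, vecs = np.linalg.eigh(Chat * w)      # discretized eigenproblem
    idx = np.argsort(vals)[::-1][:m]           # m leading eigenpairs
    rho = vecs[:, idx] / np.sqrt(w)            # eigenfunctions, orthonormal in L2[0,1]
    U = Xmat @ rho * w                         # scores U_hat_{ij} = <X_i, rho_hat_j>
    V = U @ np.linalg.solve(U.T @ U, U.T)      # projection V_m onto the score space
    M = np.eye(n) - V
    beta = np.linalg.solve(Z.T @ M @ Z, Z.T @ M @ Y)
    gam = np.linalg.solve(U.T @ U, U.T @ (Y - Z @ beta))
    gamma_hat = rho @ gam                      # gamma_hat(t) evaluated on tgrid
    return beta, gamma_hat
```

The sign indeterminacy of the empirical eigenfunctions cancels in $\stackrel{^}{\gamma}$, since a sign flip in $\stackrel{^}{\rho}_j$ flips the corresponding coefficient as well.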

So far, we have already obtained the estimator $\stackrel{^}{\beta}$ and $\stackrel{^}{\gamma}$ , now we turn to consider the estimation of $\alpha ={\left({\alpha}_{0}\mathrm{,}{\alpha}_{1}\mathrm{,}\cdots \mathrm{,}{\alpha}_{p}\right)}^{\prime}$ . Denote by

${\stackrel{^}{\epsilon}}_{i}={Y}_{i}-{\stackrel{^}{\beta}}^{\prime}{z}_{i}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\gamma}}_{j}{\stackrel{^}{U}}_{ij}\mathrm{,}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}i=\mathrm{1,}\cdots \mathrm{,}n$

the residuals. For ARCH(p) models, in view of the high-peak and heavy-tail phenomenon, we do not follow Sastry's idea ( [14] ) of regressing ${\stackrel{^}{\epsilon}}_{i}^{2}$ on a column of ones and ${\stackrel{^}{\epsilon}}_{i-1}^{2}$ by minimizing the sum of squared residuals to estimate the parameters of the ARCH(1) sequence. Instead, after obtaining the residuals ${\stackrel{^}{\epsilon}}_{i},i=1,\cdots ,n$ , we minimize the sum of absolute residuals to obtain an estimator of $\alpha ={\left({\alpha}_{0}\mathrm{,}{\alpha}_{1}\mathrm{,}\cdots \mathrm{,}{\alpha}_{p}\right)}^{\prime}$ . That is,

$\stackrel{^}{\alpha}=\mathrm{arg}{\mathrm{min}}_{\alpha \in {R}^{p+1}}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}\left|{\stackrel{^}{\epsilon}}_{i}^{2}-{\alpha}_{0}-{\alpha}_{1}{\stackrel{^}{\epsilon}}_{i-1}^{2}-\cdots -{\alpha}_{p}{\stackrel{^}{\epsilon}}_{i-p}^{2}\right|,$ (2.4)

where $\stackrel{^}{\alpha}={\left({\stackrel{^}{\alpha}}_{0},{\stackrel{^}{\alpha}}_{1},\cdots ,{\stackrel{^}{\alpha}}_{p}\right)}^{\prime}$ .
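The criterion (2.4) is the 0.5-quantile (median) regression of the squared residuals on their lags. A minimal numerical sketch is given below; the function name and the use of a general-purpose optimizer started at the least squares solution are our own choices, and dedicated linear programming or quantile regression routines would serve equally well.

```python
import numpy as np
from scipy.optimize import minimize

def lad_arch(eps, p):
    """Estimator (2.4): regress eps_i^2 on (1, eps_{i-1}^2, ..., eps_{i-p}^2)
    by minimizing the sum of absolute residuals."""
    e2 = np.asarray(eps, dtype=float) ** 2
    n = len(e2)
    # design rows (1, e2[i-1], ..., e2[i-p]) for i = p, ..., n-1
    X = np.column_stack([np.ones(n - p)] +
                        [e2[p - k:n - k] for k in range(1, p + 1)])
    y = e2[p:]
    obj = lambda a: np.sum(np.abs(y - X @ a))
    a0, *_ = np.linalg.lstsq(X, y, rcond=None)   # least squares start value
    res = minimize(obj, a0, method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-9, "maxiter": 50000})
    return res.x
```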

3. Asymptotic Properties

We first state the assumptions under which the asymptotic properties are proved, then present the theorems. Let $\rho \left(D\right)$ and ${D}^{\otimes m}$ denote the spectral radius and Kronecker product of matrix D respectively, and ${\mathcal{F}}_{t}=\sigma \left\{{\epsilon}_{i}\mathrm{:}i\le t\right\}$ in the following.

It is easy to see that $\text{E}\left[{\epsilon}_{t}|{\mathcal{F}}_{t-1}\right]=0$ , $\text{E}\left[{\epsilon}_{t}^{2}|{\mathcal{F}}_{t-1}\right]={h}_{t}$ , namely, the ARCH(p) process forms a martingale difference sequence with $\text{E}\left[{\epsilon}_{t}^{2}\right]=\frac{{\alpha}_{0}}{1-{\alpha}_{1}-\cdots -{\alpha}_{p}}$ . In order to attain the stationary solution and guarantee the existence of high moment of $\left\{{\epsilon}_{t}\right\}$ , we suppose that

$0<{\alpha}_{0}<\infty ,\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}{\displaystyle \underset{j=1}{\overset{p}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\alpha}_{j}<1,\text{\hspace{0.17em}}\rho \left({\Sigma}_{r}\right)<1$ (3.1)

for some integer $r\ge 1$ , where ${\Sigma}_{r}=\text{E}\left({D}_{t}^{\otimes r}\right)$ ,

${D}_{t}=\left(\begin{array}{cccc}{\alpha}_{1}{e}_{t}^{2}& \cdots & {\alpha}_{p-1}{e}_{t}^{2}& {\alpha}_{p}{e}_{t}^{2}\\ 1& \cdots & 0& 0\\ \vdots & \ddots & \vdots & \vdots \\ 0& \cdots & 1& 0\end{array}\right).$

Then, as [12] and [14] proved, there exists a strictly stationary solution for the p-th order ARCH process given by

${\epsilon}_{t}^{2}={\alpha}_{0}\left[{e}_{t}^{2}+{\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{e}_{t-j}^{2}{{\delta}^{\prime}}_{1}\left({\displaystyle \underset{i=0}{\overset{j-1}{\prod}}}{D}_{t-i}\right){\delta}_{1}\right]$ (3.2)

with ${\delta}_{1}={\left(\mathrm{1,0,}\cdots \mathrm{,0}\right)}^{\prime}$ .
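For $r=1$, the matrix ${\Sigma}_{1}=\text{E}\left({D}_{t}\right)$ is the companion matrix whose top row is $\text{E}\left[{e}_{t}^{2}\right]\cdot \left({\alpha}_{1},\cdots ,{\alpha}_{p}\right)$, so the condition $\rho \left({\Sigma}_{1}\right)<1$ can be checked directly. A small sketch; the scale argument m2 = E[e_t^2] is our addition, convenient when the innovations are not standardized.

```python
import numpy as np

def arch_companion_radius(alpha_tail, m2=1.0):
    """Spectral radius of Sigma_1 = E[D_t]: the companion matrix of the
    ARCH(p) recursion, with top row m2*(alpha_1,...,alpha_p), where
    m2 = E[e_t^2] (equal to 1 for standardized innovations)."""
    a = m2 * np.asarray(alpha_tail, dtype=float)
    p = len(a)
    D = np.zeros((p, p))
    D[0, :] = a
    if p > 1:
        D[1:, :-1] = np.eye(p - 1)   # subdiagonal of ones
    return float(max(abs(np.linalg.eigvals(D))))
```

For instance, if the innovations are taken as unstandardized t(5) variables, then $\text{E}{e}_{t}^{2}=5/3$ and $\left({\alpha}_{1},{\alpha}_{2}\right)=\left(0.3,0.3\right)$ gives $\rho \left({\Sigma}_{1}\right)=1$ exactly, which would match the boundary case discussed in Section 4.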

In the following, let C denote a positive constant which may change from line to line. It is assumed that the random function X satisfies

$\text{E}{\Vert X\Vert}^{4}<\infty $ (3.3)

and for each j

$\text{E}\left[{U}_{j}^{4}\right]\le C{\lambda}_{j}^{2}$ (3.4)

for some constant C. For the eigenvalues of ${C}_{X}$ , assume that there exist C and $a>1$ such that

${C}^{-1}{j}^{-a}\le {\lambda}_{j}\le C{j}^{-a},\text{\hspace{0.17em}}{\lambda}_{j}-{\lambda}_{j-1}\ge C{j}^{-a-1},\text{\hspace{0.17em}}j\ge 1$ (3.5)

to prevent the spacings among eigenvalues being too small. In order to guarantee that the regression weight function γ is smoother than the sample path X, for the Fourier coefficients ${\gamma}_{j}$ , we suppose that

$\left|{\gamma}_{j}\right|\le C{j}^{-b}$ (3.6)

for some constant C and $b>a/2+1$ . On the tuning parameter, we assume that

$m~{n}^{1/\left(a+2b\right)}\mathrm{,}$ (3.7)

where ${a}_{n}~{b}_{n}$ means there exist constants $0<L<M<\infty $ s.t. $L\le \frac{{a}_{n}}{{b}_{n}}\le M$ for all n. Besides, we also assume that

$E{\Vert z\Vert}_{{R}^{d}}^{4}<\infty $ (3.8)

for the random vector $z$ with ${\Vert z\Vert}_{{R}^{d}}={\left({z}^{\prime}z\right)}^{\frac{1}{2}}$ and ${C}_{{z}_{k}X}(\cdot )=\text{Cov}\left({z}_{k},X(\cdot )\right)$ satisfies

$\left|\langle {C}_{{z}_{k}X}\mathrm{,}{\rho}_{j}\rangle \right|\le C{j}^{-\left(a+b\right)}$ (3.9)

for each $k=1,2,\cdots ,d$ and $j\ge 1$ .

Let ${\eta}_{ik}={z}_{ik}-\langle {\chi}_{k},{X}_{i}\rangle $ , where ${\chi}_{k}={\displaystyle {\sum}_{j=1}^{\infty}}\left(\langle {C}_{{z}_{k}X},{\rho}_{j}\rangle /{\lambda}_{j}\right){\rho}_{j}$ . Then, ${\eta}_{1k}\mathrm{,}\cdots \mathrm{,}{\eta}_{nk}$ are i.i.d. random variables. We suppose that

$\text{E}\left[{\eta}_{1k}|{X}_{1},\cdots ,{X}_{n}\right]=0,\text{\hspace{0.17em}}\text{E}\left[{\eta}_{1k}^{2}|{X}_{1},\cdots ,{X}_{n}\right]={B}_{kk},$

where ${B}_{kk}$ is the kth diagonal element of

$B=\text{E}\left[{\eta}_{1}{{\eta}^{\prime}}_{1}\right]={C}_{z}-{\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\frac{\langle {C}_{zX}\mathrm{,}{\rho}_{j}\rangle \langle {C}_{Xz}\mathrm{,}{\rho}_{j}\rangle}{{\lambda}_{j}}\mathrm{,}$ (3.10)

which is assumed positive definite, and ${\eta}_{i}={\left({\eta}_{i1}\mathrm{,}\cdots \mathrm{,}{\eta}_{id}\right)}^{\prime}$ .

With the assumptions mentioned above, we have the following results.

Theorem 1. If the assumptions (3.1) with $r=2$ , (3.3)-(3.10) hold, we have

${n}^{1/2}\left(\stackrel{^}{\beta}-\beta \right)\stackrel{d}{\to}N\left(\mathrm{0,}\frac{{\alpha}_{0}}{1-{\alpha}_{1}-\cdots -{\alpha}_{p}}{B}^{-1}\right)$

as $n\to \infty $ , where “ $\stackrel{d}{\to}$ ” denotes convergence in distribution.

Theorem 2. Under the assumptions (3.1) with $r=1$ , (3.3)-(3.10), one has

${\Vert \stackrel{^}{\gamma}-\gamma \Vert}^{2}={O}_{p}\left({n}^{-\left(2b-1\right)/\left(a+2b\right)}\right)\mathrm{.}$

Theorem 3. Under the conditions of $\left\{{\epsilon}_{i}\right\}$ and the assumptions (3.1) with $r=2$ , (3.3)-(3.10), we have

${n}^{1/2}\left(\stackrel{^}{\alpha}-\alpha \right)\stackrel{d}{\to}N\left(0,\frac{1}{4}{D}_{1}^{-1}P{D}_{1}^{-1}\right)$

as $n\to \infty $ , where

$P=\text{E}\left(\begin{array}{ccccc}1& {\epsilon}_{p}^{2}& {\epsilon}_{p-1}^{2}& \cdots & {\epsilon}_{1}^{2}\\ {\epsilon}_{p}^{2}& {\epsilon}_{p}^{4}& {\epsilon}_{p}^{2}{\epsilon}_{p-1}^{2}& \cdots & {\epsilon}_{p}^{2}{\epsilon}_{1}^{2}\\ \vdots & \vdots & \vdots & & \vdots \\ {\epsilon}_{1}^{2}& {\epsilon}_{1}^{2}{\epsilon}_{p}^{2}& {\epsilon}_{1}^{2}{\epsilon}_{p-1}^{2}& \cdots & {\epsilon}_{1}^{4}\end{array}\right)\mathrm{,}$

${D}_{1}={\mathrm{lim}}_{n\to \infty}{n}^{-1}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}{f}_{i}\left({\xi}_{i}\right){v}_{i}{{v}^{\prime}}_{i},$

with ${v}_{i}={\left(\mathrm{1,}{\epsilon}_{p+i-1}^{2}\mathrm{,}{\epsilon}_{p+i-2}^{2}\mathrm{,}\cdots \mathrm{,}{\epsilon}_{i}^{2}\right)}^{\prime}$ .

Remark 1. Compared with [3] , the estimator of the regression coefficient vector retains the $\sqrt{n}$ convergence rate and asymptotic normality under ARCH(p) errors.

Remark 2. To implement the proposed method, we need to choose the cut-off point m. Theoretically, if m is too large, the number of parameters in model (2.2) is also large and the estimate of the slope function γ may deteriorate, by the properties of Functional Principal Component Analysis (FPCA); if m is too small, the approximation of model (2.1) by model (2.2) may be inadequate. This is the role played by condition (3.7). There are well-established methods for choosing such a tuning parameter, such as Generalized Cross-Validation (GCV), AIC, BIC and FPCA. The first three criteria are data-driven, whereas FPCA is based on the ratio of the variance explained by the first m eigenvalues to the total variation of X. In Section 4, GCV and FPCA are both considered.

Remark 3. In order to make inference for $\alpha $ , the asymptotic variance, which mainly involves P and ${f}_{i}\left({\xi}_{i}\right)$ , needs to be estimated. Based on (A.8) in the Appendix, it is reasonable to use ${n}^{-1}{\displaystyle {\sum}_{i=p+1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{v}}_{i}{{\stackrel{^}{v}}^{\prime}}_{i}$ as the estimate of P, with ${\stackrel{^}{v}}_{i}={\left(\mathrm{1,}{\stackrel{^}{\epsilon}}_{p+i-1}^{2}\mathrm{,}{\stackrel{^}{\epsilon}}_{p+i-2}^{2}\mathrm{,}\cdots \mathrm{,}{\stackrel{^}{\epsilon}}_{i}^{2}\right)}^{\prime}$ . For ${f}_{i}\left({\xi}_{i}\right)$ , the sparsity estimation method of [17] or the kernel density estimation approach of [18] can be used.
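The plug-in estimate of P described in Remark 3 can be sketched as follows, given the residual sequence; the function name is ours.

```python
import numpy as np

def estimate_P(res, p):
    """Plug-in estimate n^{-1} sum_i vhat_i vhat_i' of the matrix P in
    Theorem 3, where vhat_i stacks 1 and the p lagged squared residuals."""
    r2 = np.asarray(res, dtype=float) ** 2
    n = len(r2)
    # rows vhat_i = (1, r2[i-1], ..., r2[i-p]) for i = p, ..., n-1
    V = np.column_stack([np.ones(n - p)] +
                        [r2[p - k:n - k] for k in range(1, p + 1)])
    return V.T @ V / n
```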

4. Simulation Studies

In this section, simulations are carried out to assess the finite sample performance of the proposed method. The data are generated from model (1.1) with ${z}_{i1}$ and ${z}_{i2}$ standard normal,

$X\left(t\right)={\displaystyle \underset{j=1}{\overset{200}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{U}_{j}{\rho}_{j}\left(t\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}t\in \left[0,1\right],$

where the ${U}_{j}$ are independent normal with mean 0 and variance ${\lambda}_{j}={\left(\left(j-0.5\right)\text{\pi}\right)}^{-2}$ respectively, ${\rho}_{j}\left(t\right)=\sqrt{2}\mathrm{sin}\left(\left(j-0.5\right)\text{\pi}t\right)$ and

${Y}_{i}={z}_{i1}{\beta}_{1}+{z}_{i2}{\beta}_{2}+{\displaystyle {\int}_{0}^{1}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\gamma \left(t\right){X}_{i}\left(t\right)\text{d}t+{\epsilon}_{i}$

with $\beta ={\left(2,-1\right)}^{\prime},\gamma \left(t\right)=\sqrt{2}\mathrm{sin}\left(\text{\pi}t/2\right)+3\sqrt{2}\mathrm{sin}\left(\text{3\pi}t/2\right)$ . For the random error, we take the following form: ${\epsilon}_{i}={e}_{i}{h}_{i}^{1/2}$ , ${h}_{i}={\alpha}_{0}+{\alpha}_{1}{\epsilon}_{i-1}^{2}+{\alpha}_{2}{\epsilon}_{i-2}^{2}$ , ${e}_{i}\stackrel{i.i.d.}{~}t\left(5\right)$ , where ${\alpha}_{0}$ takes the value 0.1, ${\alpha}_{1}$ takes the values 0.1 and 0.3, and ${\alpha}_{2}$ correspondingly takes the values 0.3 and 0.1. Note that $t\left(5\right)$ has a finite fourth moment, and condition (3.1) is satisfied by $\alpha ={\left(0.1,0.1,0.3\right)}^{\prime}$ and $\alpha ={\left(0.1,0.3,0.1\right)}^{\prime}$ with $r=1$ , in which case it may be shown that both $\stackrel{^}{\beta}$ and $\stackrel{^}{\alpha}$ given by (2.3) and (2.4) are consistent. For $\alpha ={\left(0.1,0.3,0.3\right)}^{\prime}$ with $r=1$ , $\rho \left({\Sigma}_{r}\right)=1$ ; that is, it lies on the boundary of the condition region.
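The data-generating process above can be sketched as follows; the truncation at J = 200 K-L terms follows the description, while the grid size, seeding, burn-in length and the trapezoid quadrature for the integral term are our own choices.

```python
import numpy as np

def generate_sample(n, alpha, J=200, T=101, seed=0):
    """One Monte Carlo sample from the simulation design: z_i standard normal,
    X_i from the truncated K-L expansion with J terms, ARCH(2) errors with
    t(5) innovations."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, T)
    j = np.arange(1, J + 1)
    lam = ((j - 0.5) * np.pi) ** -2.0                             # eigenvalues
    rho = np.sqrt(2.0) * np.sin(np.outer(t, (j - 0.5) * np.pi))   # T x J basis
    U = rng.standard_normal((n, J)) * np.sqrt(lam)                # K-L scores
    X = U @ rho.T                                                 # n x T curves
    burn = 200
    eps = np.zeros(n + burn)
    for i in range(n + burn):                                     # ARCH(2) recursion
        h = alpha[0]
        if i >= 1:
            h += alpha[1] * eps[i - 1] ** 2
        if i >= 2:
            h += alpha[2] * eps[i - 2] ** 2
        eps[i] = rng.standard_t(5) * np.sqrt(h)
    eps = eps[burn:]
    z = rng.standard_normal((n, 2))
    gamma = (np.sqrt(2.0) * np.sin(np.pi * t / 2)
             + 3.0 * np.sqrt(2.0) * np.sin(3 * np.pi * t / 2))
    # <gamma, X_i> by the trapezoid rule on the grid
    G = X * gamma
    w = t[1] - t[0]
    integral = w * (G[:, :-1] + G[:, 1:]).sum(axis=1) / 2
    Y = z @ np.array([2.0, -1.0]) + integral + eps
    return Y, z, X, t
```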

We also consider the situation in which ${\alpha}_{1}$ and ${\alpha}_{2}$ are both 0, to compare with the independent error structure. For each $\alpha $ , we simulate 1000 random samples, each with sample size $n=100,300,500$ respectively. For the determination of m by FPCA, $m=\mathrm{min}\left\{k:{\displaystyle {\sum}_{i=1}^{k}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\lambda}}_{i}/{\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\lambda}}_{i}\ge 0.85\right\}$ is used. The accuracy of the slope function estimate is measured by the mean integrated squared error (MISE), defined as

$\text{MISE}=\frac{1}{1000}{\displaystyle \underset{i=1}{\overset{1000}{\sum}}}\left[\frac{1}{N}{\displaystyle \underset{s=1}{\overset{N}{\sum}}}{\left({\stackrel{^}{\gamma}}_{i}\left({t}_{s}\right)-\gamma \left({t}_{s}\right)\right)}^{2}\right],$

where ${\stackrel{^}{\gamma}}_{i}(\cdot )$ is the estimate of the slope function $\gamma (\cdot )$ obtained from the i-th replication, and ${t}_{s},s=1,\cdots ,N$ are equally spaced grid points at which ${\stackrel{^}{\gamma}}_{i}\left(t\right)$ is evaluated; in our implementation, $N=100$ . The results for the estimators of $\alpha $ using the Least Squares (LS) method are also reported, for comparison with the Least Absolute Deviation (LAD) method proposed in this paper. The results are summarized in Tables 1-3, and the true function γ together with the estimated function $\stackrel{^}{\gamma}$ , based on the average of 1000 replications with $\alpha ={\left(0.1,0.1,0.3\right)}^{\prime}$ , is depicted in Figure 1.

Table 1. MSE and MISE under LAD (GCV).

Table 2. MSE and MISE under LAD (FPCA).

Table 3. MSE and MISE under LS (GCV).

Table 4. MSE and MISE with $N\left(\mathrm{0,1}\right)$ produced errors under GCV ( $\alpha ={\left(0.1,0.3,0.3\right)}^{\prime}$ ).

We would also like to know how the LAD method behaves when the innovation of the ARCH sequence is not heavy tailed, such as $e~N\left(\mathrm{0,1}\right)$ . The simulation results are summarized in Table 4 with $\alpha ={\left(0.1,0.3,0.3\right)}^{\prime}$ ; in this case, (3.1) is satisfied with both $r=1$ and $r=2$ .

We can derive the following conclusions from Tables 1-4.

1) From Table 1 and Table 2, as the sample size n increases, the MSEs and MISE decrease in all error scenarios considered. This reflects that the proposed estimators fit the true values better as the sample size increases, and is thus promising.

2) For each fixed sample size n, the larger the coefficients $\alpha $ , the larger the corresponding MSE under the different error specifications. For example, when $n=100$ and $\alpha $ takes the values $\left(\mathrm{0.1,0.1,0.3}\right)$ and $\left(\mathrm{0.1,0.3,0.1}\right)$ respectively, the MSE for ${\alpha}_{1}=0.3$ is larger than that for ${\alpha}_{1}=0.1$ , and similarly for ${\alpha}_{2}$ . Moreover, the MSE of $\stackrel{^}{\alpha}$ and the MISE of $\stackrel{^}{\gamma}$ become large when ${\alpha}_{1}$ and ${\alpha}_{2}$ are simultaneously relatively large, such as $\left({\alpha}_{1},{\alpha}_{2}\right)=\left(0.3,0.3\right)$ in Table 1. This is due to the stronger volatility for larger ${\alpha}_{j},j=1,2$ .

3) From Table 1, for each fixed sample size n, when ${\alpha}_{1}\mathrm{,}{\alpha}_{2}$ take the value 0, which is the case considered by [3] , the MSE of $\stackrel{^}{\beta}$ and the MISE are smaller than those under ARCH errors. This shows that the dependence of the errors makes the estimators more variable. However, as the sample size increases, the latter quantities decrease and approach the former.

4) In Table 3, the MSEs of the coefficients $\stackrel{^}{\alpha}$ obtained by the LS method under $t\left(5\right)$ -generated errors are larger than those of the LAD estimator given by (2.4). In particular, unlike the results in Table 1 and Table 2, the results for ${\alpha}_{0}$ in the boundary case $\left({\alpha}_{1},{\alpha}_{2}\right)=\left(0.3,0.3\right)$ are unstable, illustrating the advantage of the proposed method.

5) Table 4 shows that the LAD method performs as well as the LS method even for a non-heavy-tailed error distribution.

6) As Table 1 and Table 2 show, the difference between the estimators of $\beta $ under the two selection methods for m is very small.

Based on the simulation results in Table 1 and Table 2, the estimator of $\gamma $ corresponding to FPCA appears better in terms of MISE. When using FPCA to choose m, a threshold value for the explained-variance ratio is needed. If we reset the threshold to 0.80 rather than 0.85 for the case $n=100$ and $\alpha ={\left(0.1,0.1,0.3\right)}^{\prime}$ , the MISE becomes 6.6373, much larger than the value 0.9230 given in Table 1. As far as we know, there is no theoretical guidance on how the threshold should be set to balance goodness of fit against the precision of the estimated slope function.

From Figure 1, it can be seen that the estimated function approximates the true function well no matter which method is used to choose the tuning parameter m, which demonstrates that the proposed method works well.

From the above observations, we see that the estimator (2.4) performs well even under the boundary condition. It would be theoretically interesting to study the performance of the estimator in this case, but it is beyond our focus here.

5. Real Data Analysis

In this section, we apply the proposed method to a real dataset. The data consist of monthly electricity consumption, denoted by C, consumed by commercial sectors from January 1972 to January 2005 (397 months), and the annual average retail price P (33 years). A main goal of this study is to examine the effect of the dependence structure of the error on the asymptotic variance of $\stackrel{^}{\beta}$ when using the price and consumption to predict the consumption 6 months later.

Figure 1. The true function γ (solid line) and the estimated function $\stackrel{^}{\gamma}$ (dashed line) using GCV (left) and FPCA (right) with $n=100$ .

According to the stationarity test of the electricity consumption data, heteroscedasticity and a linear trend are found; these can be eliminated by differencing the log data. Corresponding to the general notation introduced in model (1.1), let

${D}_{j}=\mathrm{ln}{C}_{j}-\mathrm{ln}{C}_{j-1},\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}j=1,2,\cdots ,397,$

${X}_{i}=\left\{{D}_{12\left(i-1\right)+t},t\in \left[1,12\right]\right\},\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}i=1,2,\cdots ,32.$

The response variable is

${Y}_{i}={D}_{12i+6},\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}i=1,2,\cdots ,32,$

and the additional real variable is defined by

${z}_{i}={P}_{i},\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}i=1,2,\cdots ,32.$

Regressing Y on Z and X, with m chosen by FPCA with threshold 0.85, yields the residuals. Although it may seem reasonable to treat the residuals as a white noise sequence, volatility clustering may exist according to Figure 2. A further numerical analysis is therefore conducted for this sequence, with significance level 0.05 in the following tests. Stationarity is tested first using the function “adf.test()” in R, giving a p-value of 0.019. Whether the sequence is uncorrelated is examined with the Box-Ljung statistic, which accepts the null hypothesis with a p-value of 0.08. However, the skewness value 0.03 and kurtosis value 2.48 indicate high-peak and heavy-tail features. The Box-Ljung test is applied again to the squared residual sequence, demonstrating the existence of an ARCH structure with a p-value of 0.03. As Figure 3 shows, it is appropriate to use ARCH(3) to fit the squared residual sequence. By calculation, under the existence of volatility clustering in the errors, the asymptotic standard deviation

Figure 2. The graph of the error sequence.

Figure 3. The PACF of the squared errors.

of $\stackrel{^}{\beta}$ is 0.01, a reduction of about 94% compared with the value 0.18 obtained when the concrete form of the error is ignored, showing that it is promising to take the ARCH structure into account.
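The portmanteau diagnostics above were run in R; the statistic itself is simple enough to sketch. The following self-contained Python snippet (illustrative only, not the paper's code) computes the Box-Ljung statistic, which is applied to a series when testing for autocorrelation and to its squares when checking for ARCH effects:

```python
# Box-Ljung portmanteau statistic: Q = n(n+2) * sum_{k=1}^{h} r_k^2 / (n-k),
# where r_k is the lag-k sample autocorrelation; under the null of no
# autocorrelation, Q is approximately chi-squared with h degrees of freedom.
import random

def ljung_box_q(x, h):
    """Box-Ljung statistic at maximum lag h for the series x."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n          # lag-0 autocovariance
    q = 0.0
    for k in range(1, h + 1):
        ck = sum((x[i] - mean) * (x[i - k] - mean) for i in range(k, n)) / n
        q += (ck / c0) ** 2 / (n - k)
    return n * (n + 2) * q

random.seed(7)
noise = [random.gauss(0.0, 1.0) for _ in range(200)]

# For white noise, Q on the levels and on the squares should both be
# moderate; ARCH effects would inflate the statistic for the squared series.
q_levels = ljung_box_q(noise, 6)
q_squares = ljung_box_q([v * v for v in noise], 6)
```

In R this corresponds to `Box.test(x, lag = h, type = "Ljung-Box")`, applied to the residuals and to their squares.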

6. Discussion

In this paper, the estimation of partial functional linear models with ARCH(p) errors by the LS method, together with the estimation of the ARCH(p) parameters by the LAD method, is considered. Since the slope function is infinite-dimensional, the key step of this paper is to transform the partial functional linear model with ARCH errors into a corresponding linear regression model via the K-L expansion and the idea of FPCA. A linear relationship between z and X is essentially assumed (see Remark 1 in [8] ). In future work, under the errors’ dependence structure, we will further consider the estimation of model (1.1) by the kernel method, noting that the relationship between z and X may then be relaxed. Since heteroscedasticity is a common phenomenon in economics, the theoretical study of this model is practically useful and worth exploring. Furthermore, since the consistency of $\stackrel{^}{\alpha}$ and $\stackrel{^}{\beta}$ follows from Theorem 3 and the proof of Theorem 1, respectively, precise inference for the model can be made within this paper via the asymptotic normality of $\stackrel{^}{\beta}$ .

Acknowledgements

This work is supported by NSFC Grants No. 11771032 and No. 11571340, and the Science and Technology Project of Beijing Municipal Education Commission No. KM201710005032.

Appendix. Proofs of the Theorems

We now state the proofs of the theorems given in Section 3. First, some lemmas are given.

Lemma A.1. ( [12] , Theorem 1) $\left\{{\epsilon}_{t}\right\}$ is a strictly stationary solution of model (1.3) with $\text{E}{\epsilon}_{0}^{2}<\infty $ if and only if ${\sum}_{j=1}^{p}{\alpha}_{j}<1$ . Furthermore, this solution is unique and ergodic.

Lemma A.2. ( [12] , Theorem 3) Let ${L}^{r}=\left\{x:{\Vert x\Vert}_{r}={\left(\text{E}{\left|x\right|}^{r}\right)}^{1/r}<\infty ,x\text{ is a random variable}\right\}$ , and suppose that (3.1) holds and $\text{E}{e}_{t}^{4\left(r-1\right)}<\infty $ , where $r\ge 1$ is an integer. Then ${\epsilon}_{t}^{2}\in {L}^{r}$ .

Lemma A.3. Suppose $\left\{{\epsilon}_{i}:i\ge 1\right\}$ forms an ARCH(p) process and (3.1) holds. Then

${n}^{-1}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\epsilon}_{i}^{2r}\to \text{E}\left[{\epsilon}_{i}^{2r}\right]\text{\hspace{0.05em}}\text{\hspace{0.17em}}a.s.$

for the integer r in condition (3.1); furthermore, if $r\ge 2$ , then

${n}^{-1}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}{\epsilon}_{i}^{2}{\epsilon}_{i-j}^{2}\to \text{E}\left[{\epsilon}_{i}^{2}{\epsilon}_{i-j}^{2}\right]\text{\hspace{0.17em}}a.s.$

Proof. From Lemma A.1 and the representation (3.2), it follows that $\left\{{\epsilon}_{t}\right\}$ and $\left\{{\epsilon}_{t}^{2}\right\}$ are strictly stationary ergodic sequences. Combining this with Lemma A.2, the results follow immediately from the ergodic theorem ( [19] [20] ).
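As an illustrative numerical check of the ergodic convergence in Lemma A.3 (not part of the original analysis; the parameter values below are arbitrary), the following Python sketch simulates a stationary ARCH(1) sequence and compares the sample mean of ${\epsilon}_{i}^{2}$ with its theoretical limit:

```python
# Simulation check of n^{-1} sum eps_i^2 -> E[eps_i^2] a.s. for a
# stationary ARCH(1): eps_i = e_i * sqrt(h_i), h_i = a0 + a1 * eps_{i-1}^2,
# with a1 < 1 so that E[eps^2] = a0 / (1 - a1).
import random

random.seed(42)
alpha0, alpha1 = 0.5, 0.3          # illustrative values; alpha1 < 1
n, burn = 200_000, 1_000

eps_prev_sq = alpha0 / (1.0 - alpha1)   # start near the stationary variance
acc, count = 0.0, 0
for i in range(n + burn):
    h = alpha0 + alpha1 * eps_prev_sq   # conditional variance h_i
    eps = random.gauss(0.0, 1.0) * h ** 0.5
    eps_prev_sq = eps * eps
    if i >= burn:                        # discard burn-in
        acc += eps_prev_sq
        count += 1

sample_var = acc / count
theoretical_var = alpha0 / (1.0 - alpha1)
```

For these parameters the sample mean settles close to the theoretical value, as the ergodic theorem predicts.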

Lemma A.4. If ε is independent of X and (3.1)-(3.2) hold, one has

$\Vert {n}^{-1}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{X}_{i}{\epsilon}_{i}\Vert ={O}_{p}\left({n}^{-\frac{1}{2}}\right).$

Proof. By a simple calculation, the conclusion can be derived from the fact that $\text{E}\left[{\epsilon}_{i}^{2}\right]={\alpha}_{0}/\left(1-{\alpha}_{1}-\cdots -{\alpha}_{p}\right)$ .
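The identity used here follows in one line by taking expectations on both sides of the ARCH(p) recursion and using the stationarity of $\left\{{\epsilon}_{t}\right\}$ (Lemma A.1), assuming as in model (1.3) that $\text{E}{e}_{t}^{2}=1$ :

```latex
\mathrm{E}\,\epsilon_i^2 = \mathrm{E}\,h_i
  = \alpha_0 + \sum_{j=1}^{p}\alpha_j\,\mathrm{E}\,\epsilon_{i-j}^2
  = \alpha_0 + \Big(\sum_{j=1}^{p}\alpha_j\Big)\mathrm{E}\,\epsilon_i^2
\;\Longrightarrow\;
\mathrm{E}\,\epsilon_i^2 = \frac{\alpha_0}{1-\alpha_1-\cdots-\alpha_p}.
```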

Proof of Theorem 1. Let ${\stackrel{^}{\Phi}}_{k}\left(x\right)={\displaystyle {\sum}_{j=1}^{m}}\left(\langle {\stackrel{^}{C}}_{{z}_{k}X}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle /{\stackrel{^}{\lambda}}_{j}\right)\langle {\stackrel{^}{\rho}}_{j}\mathrm{,}x\rangle $ and ${\Phi}_{k}\left(x\right)={\displaystyle {\sum}_{j=1}^{\infty}}\left(\langle {C}_{{z}_{k}X},{\rho}_{j}\rangle /{\lambda}_{j}\right)\langle {\rho}_{j},x\rangle $ with $x\in {L}^{2}\left(\left[\mathrm{0,1}\right]\right)$ . Set ${\Vert A\Vert}_{\infty}={\mathrm{max}}_{i}{\displaystyle {\sum}_{j}}\left|{A}_{ij}\right|$ and $\Vert A\Vert ={\displaystyle {\sum}_{i=1}^{d}}{\displaystyle {\sum}_{j=1}^{d}}\left|{A}_{ij}\right|$ for $A=\left({A}_{ij}\right)\in {R}^{d\times d}$ . Observe that

$\begin{array}{c}{n}^{1/2}\left(\stackrel{^}{\beta}-\beta \right)={\stackrel{^}{B}}^{-1}{n}^{1/2}\left\{\frac{1}{n}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({z}_{i}-{\displaystyle \underset{j\mathrm{=1}}{\overset{m}{\sum}}}\frac{\langle {\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle \langle {X}_{i}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle}{{\stackrel{^}{\lambda}}_{j}}\right)\left(\langle \gamma \mathrm{,}{X}_{i}\rangle +{\epsilon}_{i}\right)\right\}\\ ={\stackrel{^}{B}}^{-1}{n}^{-1/2}\{{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({z}_{i}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\frac{\langle {\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle \langle {X}_{i}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle}{{\stackrel{^}{\lambda}}_{j}}\right)\langle \gamma \mathrm{,}{X}_{i}\rangle \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\frac{\langle {C}_{zX}\mathrm{,}{\rho}_{j}\rangle \langle {X}_{i}\mathrm{,}{\rho}_{j}\rangle}{{\lambda}_{j}}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\frac{\langle {\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle \langle {X}_{i}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle}{{\stackrel{^}{\lambda}}_{j}}\right){\epsilon}_{i}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({z}_{i}-{\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\frac{\langle {C}_{zX}\mathrm{,}{\rho}_{j}\rangle \langle {X}_{i}\mathrm{,}{\rho}_{j}\rangle}{{\lambda}_{j}}\right){\epsilon}_{i}\}\end{array}$

with $\stackrel{^}{B}={\stackrel{^}{C}}_{z}-{\left\{{\stackrel{^}{\Phi}}_{k}\left({\stackrel{^}{C}}_{{z}_{m}X}\right)\right\}}_{k\mathrm{,}m=\mathrm{1,}\cdots \mathrm{,}d}$ .

According to Lemma A.4, similar to [3] , one has

${\Vert \stackrel{^}{B}-B\Vert}_{\infty}={O}_{p}\left({n}^{-\left(2b-1\right)/\left(a+2b\right)}\right)\mathrm{,}$

${n}^{-1/2}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({z}_{i}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\frac{\langle {\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle \langle {X}_{i}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle}{{\stackrel{^}{\lambda}}_{j}}\right)\langle \gamma \mathrm{,}{X}_{i}\rangle ={o}_{p}\left(1\right)\mathrm{,}$

${n}^{-1/2}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\left({\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\frac{\langle {C}_{zX}\mathrm{,}{\rho}_{j}\rangle \langle {X}_{i}\mathrm{,}{\rho}_{j}\rangle}{{\lambda}_{j}}-{\displaystyle \underset{j\mathrm{=1}}{\overset{m}{\sum}}}\frac{\langle {\stackrel{^}{C}}_{zX}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle \langle {X}_{i}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle}{{\stackrel{^}{\lambda}}_{j}}\right){\epsilon}_{i}={o}_{p}\left(1\right)\mathrm{.}$

Now, we consider the term ${n}^{-1/2}{\displaystyle {\sum}_{i=1}^{n}}\left({z}_{i}-{\displaystyle {\sum}_{j=1}^{\infty}}\frac{\langle {C}_{zX}\mathrm{,}{\rho}_{j}\rangle \langle {X}_{i}\mathrm{,}{\rho}_{j}\rangle}{{\lambda}_{j}}\right){\epsilon}_{i}:={n}^{-1/2}{\displaystyle {\sum}_{i=1}^{n}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\eta}_{i}{\epsilon}_{i}$ . We will show

${n}^{-1/2}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\eta}_{i}{\epsilon}_{i}\stackrel{d}{\to}N\left(\mathrm{0,}\frac{{\alpha}_{0}}{1-{\alpha}_{1}-\cdots -{\alpha}_{p}}B\right)\mathrm{.}$ (A.1)

Let ${P}_{j-1}(\cdot )=\text{E}\left[\cdot |{\mathcal{F}}_{j-1}\right]$ and ${\xi}_{i}={n}^{-1/2}{\eta}_{i}{\epsilon}_{i}$ , then $\left\{{\xi}_{i}\right\}$ forms a martingale difference series due to the fact that ${\xi}_{i}$ is ${\mathcal{F}}_{i}$ -measurable and ${P}_{i-1}\left({\xi}_{i}\right)=0$ . Let ${u}_{i}$ denote the conditional variance of ${\xi}_{i}$ , then, for $i=1,\cdots ,n$ ,

${u}_{i}={P}_{i-1}\left({\xi}_{i}{{\xi}^{\prime}}_{i}\right)={n}^{-1}{P}_{i-1}\left({\eta}_{i}{{\eta}^{\prime}}_{i}{\epsilon}_{i}^{2}\right)={n}^{-1}\text{E}\left({\eta}_{i}{{\eta}^{\prime}}_{i}\right){P}_{i-1}\left({\epsilon}_{i}^{2}\right)={n}^{-1}B{h}_{i}\mathrm{.}$

Therefore,

${\displaystyle \underset{i}{\sum}}{u}_{i}={n}^{-1}{\displaystyle \underset{i}{\sum}}B{h}_{i}\stackrel{p}{\to}\frac{{\alpha}_{0}}{1-{\alpha}_{1}-\cdots -{\alpha}_{p}}B,$

according to the law of large numbers ( [19] ). Furthermore, for any $\delta >0$ ,

$\begin{array}{l}{\displaystyle \underset{j}{\sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{P}_{j-1}\left({\xi}_{j}{{\xi}^{\prime}}_{j}\left\{\Vert {\xi}_{j}\Vert >\delta \right\}\right)\\ ={n}^{-1}{\displaystyle \underset{j}{\sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{P}_{j-1}\left({\eta}_{j}{{\eta}^{\prime}}_{j}{\epsilon}_{j}^{2}\left\{\Vert {\eta}_{j}{\epsilon}_{j}\Vert >{n}^{1/2}\delta \right\}\right)\\ \le {n}^{-1}{\displaystyle \underset{j}{\sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{P}_{j-1}\left({\eta}_{j}{{\eta}^{\prime}}_{j}{\epsilon}_{j}^{2}\left[\left\{{\Vert {\eta}_{j}\Vert}^{2}>{n}^{1/2}\delta \right\}\cup \left\{{\epsilon}_{j}^{2}>{n}^{1/2}\delta \right\}\right]\right)\\ ={n}^{-1}{\displaystyle \underset{j}{\sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{P}_{j-1}\left({\epsilon}_{j}^{2}\right)E\left({\eta}_{1}{{\eta}^{\prime}}_{1}\left\{{\Vert {\eta}_{1}\Vert}^{2}>{n}^{1/2}\delta \right\}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+{n}^{-1}{\displaystyle \underset{j}{\sum}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{P}_{j-1}\left({\epsilon}_{j}^{2}\left\{{\epsilon}_{j}^{2}>{n}^{1/2}\delta \right\}\right)\text{E}\left({\eta}_{1}{{\eta}^{\prime}}_{1}\right).\end{array}$

The first term converges to zero because $\Vert {\eta}_{j}{{\eta}^{\prime}}_{j}\Vert $ is uniformly integrable. In view of the integrability of ${\epsilon}_{i}^{2}$ , the second term also converges to zero in probability. Using the martingale difference central limit theorem (CLT) ( [21] ), we get that (A.1) holds. Therefore, the conclusion of Theorem 1 follows.

Proof of Theorem 2. With Lemma A.4, the techniques in the proof of Theorem 3.2 of [3] can be extended to the present model, so the details are omitted here.

Proof of Theorem 3. Firstly, we consider the following two equalities:

${n}^{-1}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}{\stackrel{^}{\epsilon}}_{i}^{2}={n}^{-1}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}{\epsilon}_{i}^{2}+{o}_{p}\left({n}^{-\frac{1}{2}}\right),$ (A.2)

${n}^{-1}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}{\stackrel{^}{\epsilon}}_{i}^{2}{\stackrel{^}{\epsilon}}_{i-j}^{2}={n}^{-1}{\displaystyle \underset{i=p+1}{\overset{n}{\sum}}}{\epsilon}_{i}^{2}{\epsilon}_{i-j}^{2}+{o}_{p}\left({n}^{-\frac{1}{2}}\right).$ (A.3)

From Theorem 1 and Theorem 2, we learn that

${\displaystyle \underset{j=1}{\overset{m}{\sum}}}{\left({\stackrel{^}{\gamma}}_{j}-{\gamma}_{j}\right)}^{2}={\Vert \stackrel{^}{\tilde{\gamma}}-\tilde{\gamma}\Vert}^{2}={O}_{p}\left({n}^{-\left(2b-1\right)/\left(a+2b\right)}\right),$ (A.4)

$\Vert \stackrel{^}{\beta}-\beta \Vert ={O}_{p}\left({n}^{-1/2}\right).$ (A.5)

Under the conditions (3.5)-(3.7) and $X\in {L}^{2}\left(\mathcal{T}\right)$ , one has

${\displaystyle \underset{j=m+1}{\overset{\infty}{\sum}}}{\gamma}_{j}^{2}{\langle {X}_{i},{\rho}_{j}\rangle}^{2}\le C{\displaystyle \underset{j=m+1}{\overset{\infty}{\sum}}}{j}^{-2b}{\langle {X}_{i},{\rho}_{j}\rangle}^{2}={O}_{p}\left({n}^{-\left(2b-1\right)/\left(a+2b\right)}\right).$ (A.6)

In addition, according to (3.3) and ${\lambda}_{1}>{\lambda}_{2}>\cdots >{\lambda}_{m}$ , the relation

$\underset{n\to \infty}{\mathrm{lim}\mathrm{sup}}n\text{E}{\Vert {\stackrel{^}{\rho}}_{j}-{\rho}_{j}\Vert}^{2}<\infty $ (A.7)

holds, see ( [22] , Ch. 4). For the residual ${\stackrel{^}{\epsilon}}_{i}$ , we have

$\begin{array}{c}{\stackrel{^}{\epsilon}}_{i}={Y}_{i}-{\stackrel{^}{\beta}}^{\prime}{z}_{i}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\gamma}}_{j}{\stackrel{^}{U}}_{ij}\\ ={\beta}^{\prime}{z}_{i}+{\displaystyle \underset{j=1}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\gamma}_{j}{U}_{ij}+{\epsilon}_{i}-{\stackrel{^}{\beta}}^{\prime}{z}_{i}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\stackrel{^}{\gamma}}_{j}{\stackrel{^}{U}}_{ij}\\ ={\epsilon}_{i}-{\left(\stackrel{^}{\beta}-\beta \right)}^{\prime}{z}_{i}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\left({\stackrel{^}{\gamma}}_{j}-{\gamma}_{j}\right)\langle {X}_{i}\mathrm{,}{\stackrel{^}{\rho}}_{j}\rangle \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-{\displaystyle \underset{j=1}{\overset{m}{\sum}}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{\gamma}_{j}\langle {X}_{i}\mathrm{,}{\stackrel{^}{\rho}}_{j}-{\rho}_{j}\rangle +{\displaystyle \underset{j=m+1}{\overset{\infty}{\sum}}}{\gamma}_{j}\langle {X}_{i}\mathrm{,}{\rho}_{j}\rangle \end{array}$

by (2.1). Combining this equality with (A.4)-(A.7), (A.2) and (A.3) can be proved.

Now we turn to consider the asymptotic form of $\stackrel{^}{\alpha}$ . By Lemma A.3, we can conclude

${\left({n}^{-1}{\stackrel{^}{P}}^{\prime}\stackrel{^}{P}\right)}^{-1}\to {P}^{-1}\text{\hspace{0.17em}}a\mathrm{.}s\mathrm{.}\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}n\to \infty \mathrm{,}$ (A.8)

where

$\stackrel{^}{P}=\left(\begin{array}{ccccc}1& {\stackrel{^}{\epsilon}}_{p}^{2}& {\stackrel{^}{\epsilon}}_{p-1}^{2}& \cdots & {\stackrel{^}{\epsilon}}_{1}^{2}\\ 1& {\stackrel{^}{\epsilon}}_{p+1}^{2}& {\stackrel{^}{\epsilon}}_{p}^{2}& \cdots & {\stackrel{^}{\epsilon}}_{2}^{2}\\ \vdots & \vdots & \vdots & & \vdots \\ 1& {\stackrel{^}{\epsilon}}_{n-1}^{2}& {\stackrel{^}{\epsilon}}_{n-2}^{2}& \cdots & {\stackrel{^}{\epsilon}}_{n-p}^{2}\end{array}\right).$
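The structure of $\stackrel{^}{P}$ is easy to mirror in code. The following Python sketch (illustrative only, with toy residuals) builds the $\left(n-p\right)\times \left(p+1\right)$ design matrix whose rows match the display above:

```python
# Build the ARCH regression design matrix P-hat from a residual sequence:
# the k-th row (k = 1, ..., n-p) is (1, eps_{p+k-1}^2, ..., eps_k^2),
# matching the displayed matrix. res[0] corresponds to eps_1.
def arch_design_matrix(res, p):
    n = len(res)
    rows = []
    for k in range(1, n - p + 1):
        # 1-based entries eps_{p+k-1}^2 down to eps_k^2, 0-based indices
        rows.append([1.0] + [res[p + k - 2 - j] ** 2 for j in range(p)])
    return rows

res = [0.5, -0.2, 0.1, 0.4, -0.3, 0.2, -0.1]   # toy residuals, n = 7
P_hat = arch_design_matrix(res, p=2)            # 5 rows, 3 columns
```

The LAD estimator of the ARCH parameters then minimizes the absolute residuals of the regression of the squared residuals on these rows.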

Combining (A.2), (A.3), (A.8) and the assumptions on the densities of $\left\{{\epsilon}_{i}^{2}\right\}$ , the result of Theorem 3 follows from Theorem 4.1 of [23] .

References

[1] Aneiros-Pérez, G. and Vieu, P. (2006) Semi-Functional Partial Linear Regression. Statistics & Probability Letters, 76, 1102-1110.

https://doi.org/10.1016/j.spl.2005.12.007

[2] Aneiros-Pérez, G. and Vieu, P. (2008) Nonparametric Time Series Prediction: A Semi-Functional Partial Linear Modeling. Journal of Multivariate Analysis, 99, 834-857.

https://doi.org/10.1016/j.jmva.2007.04.010

[3] Shin, H. (2009) Partial Functional Linear Regression. Journal of Statistical Planning and Inference, 139, 3405-3418.

https://doi.org/10.1016/j.jspi.2009.03.001

[4] Shin, H. and Lee, M. (2012) On Prediction Rate in Partial Functional Linear Regression. Journal of Multivariate Analysis, 103, 93-106.

https://doi.org/10.1016/j.jmva.2011.06.011

[5] Zhou, J.J., Chen, Z. and Peng, Q.Y. (2016) Polynomial Spline Estimation for Partial Functional Linear Regression Models. Computational Statistics, 31, 1107-1129.

https://doi.org/10.1007/s00180-015-0636-0

[6] Lv, Y., Du, J. and Sun, Z.M. (2014) Functional Partially Linear Quantile Regression Model. Metrika, 77, 317-332.

[7] Zhou, J., Du, J. and Sun, Z.M. (2016) M-Estimation for Partially Functional Linear Regression Model Based on Splines. Communications in Statistics-Theory and Methods, 45, 6436-6466.

https://doi.org/10.1080/03610926.2014.921309

[8] Yu, P., Zhang, Z.Z. and Du, J. (2016) A Test of Linearity in Partial Functional Linear Regression. Metrika, 79, 953-969.

https://doi.org/10.1007/s00184-016-0584-x

[9] Zhang, T. and Wang, Q.H. (2012) Semiparametric Partially Linear Regression Models for Functional Data. Journal of Statistical Planning and Inference, 142, 2518-2529.

https://doi.org/10.1016/j.jspi.2012.03.004

[10] Wang, G.C., Feng, X.N. and Chen, M. (2015) Functional Partial Linear Single-Index Model. Scandinavian Journal of Statistics, 43, 261-274.

[11] Engle, R.F. (1982) Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica, 50, 987-1007.

https://doi.org/10.2307/1912773

[12] Chen, M. and An, H.Z. (1995) The Strictly Stationary Ergodicity and High Moment of ARCH(p) Models. Chinese Science Bulletin, 40, 2118-2123.

[13] Zhang, Z.Q., Feng, J.Y. and Zhang, R.Q. (2007) Strong Law of Large Numbers of the Absolute Value Sequences from ARCH. Journal of Yanbei Normal University, 23, 9-11.

[14] Sastry, G.P. (1988) Estimation of Autoregressive Models with ARCH Errors. The Indian Journal of Statistics Series B, 50, 119-138.

[15] Lu, Z.D. and Gijbels, I. (2001) Asymptotics for Partly Linear Regression with Dependent Samples and ARCH Errors: Consistency with Rates. Science in China Series A: Mathematics, 44, 168-183.

https://doi.org/10.1007/BF02874419

[16] Riesz, F. and Sz-Nagy, B. (1955) Functional Analysis. Dover Publications, New York.

[17] Hendricks, W. and Koenker, R. (1992) Hierarchical Spline Models for Conditional Quantiles and the Demand for Electricity. Journal of the American Statistical Association, 87, 58-68.

https://doi.org/10.1080/01621459.1992.10475175

[18] Pollard, D. (1991) Asymptotics for Least Absolute Deviation Regression Estimators. Econometric Theory, 7, 186-199.

https://doi.org/10.1017/S0266466600004394

[19] Revesz, P. (1968) The Laws of Large Numbers. Academic Press, New York.

[20] Hall, P. and Heyde, C.C. (1980) Martingale Limit Theory and Its Application. Academic Press, New York.

[21] Pollard, D. (1984) Convergence of Stochastic Processes. Springer-Verlag, New York.

https://doi.org/10.1007/978-1-4612-5254-2

[22] Bosq, D. (2000) Linear Processes in Function Spaces. Springer-Verlag, New York.

https://doi.org/10.1007/978-1-4612-1154-9

[23] Koenker, R. (2005) Quantile Regression. Cambridge University Press, Cambridge.

https://doi.org/10.1017/CBO9780511754098