A coordinate-free approach in finite populations was introduced as an alternative to the Gauss-Markov setup, with the purpose of predicting linear functions. The Gauss-Markov approach is characterized by its dependence on a particular basis matrix; in the coordinate-free language, we need only describe a parametric subspace of $\mathbb{R}^N$, where $N$ is the size of the finite population. Coordinate-free models have also been discussed in the general linear models context.
In a finite population $U = \{1, 2, \ldots, N\}$, where $N$ is the known population size, let $Y_i$ be the value of a random variable associated to each population unit $i$. Under the superpopulation approach, we will assume that $Y = (Y_1, \ldots, Y_N)'$ is a random vector such that $Y \in \mathbb{R}^N$, where $\mathbb{R}^N$ is the $N$-dimensional real vector space with the usual inner product.
The superpopulation model is expressed by
$$ E(Y) = \theta \in \Omega, \qquad \operatorname{Cov}(Y) = \sigma^2 V, \qquad (1.1) $$
where $\Omega$ is a $p$-dimensional subspace of $\mathbb{R}^N$, $\sigma^2$ is an unknown positive parameter and $V$ is a known positive definite matrix.
The model is coordinate-free in the sense that no basis is specified for $\Omega$, the parametric space of $\theta$.
Our main objective is predicting $\gamma'Y$, a linear combination of the elements of $Y$, for a known vector $\gamma \in \mathbb{R}^N$. With this purpose, a sample of $n$ observations is drawn from the population and the values of $Y_i$ become known for the sample elements. Let $s$ and $r$ be the sets of sample and non-sample elements, respectively, such that $s \cup r = U$.
We will consider, without loss of generality, that $Y$, $\theta$ and $V$ are reordered as
$$ Y = \begin{pmatrix} Y_s \\ Y_r \end{pmatrix}, \qquad \theta = \begin{pmatrix} \theta_s \\ \theta_r \end{pmatrix}, \qquad V = \begin{pmatrix} V_s & V_{sr} \\ V_{rs} & V_r \end{pmatrix}, $$
with $Y_s$ containing the $n$ observed sample elements, $Y_r$ containing the $N-n$ unobserved elements, $\theta_s = E(Y_s)$, $\theta_r = E(Y_r)$, and $\sigma^2 V_s$, $\sigma^2 V_r$ and $\sigma^2 V_{rs} = \sigma^2 V_{sr}'$ the corresponding blocks of the covariance matrix.
Under a less general model, with $V = D$, a known diagonal matrix, the optimal linear predictor of $\gamma'Y$ was presented in earlier work. In the next section, we extend that result, obtaining the best linear unbiased predictor of $\gamma'Y$ in the model (1.1); this is the main contribution of the paper. In Section 3, we show that under the coordinatized model this predictor coincides with that given by Royall (1976). Finally, we conclude the paper with some examples in Section 4.
2. Best Linear Unbiased Predictor of Linear Functions
The linear function to be predicted may be written as
$$ \gamma'Y = \gamma'\Delta Y + \gamma'(I_N - \Delta)Y, $$
where $\Delta$ is a diagonal matrix with $i$-th diagonal element $\delta_i$, where $\delta_i = 1$ if $i \in s$ and $\delta_i = 0$ if $i \in r$, $i = 1, \ldots, N$.
We note that, with this notation, $\gamma'\Delta Y = \gamma_s'Y_s$ corresponds to the linear combination of the components of $Y$ in the sample and $\gamma'(I_N - \Delta)Y = \gamma_r'Y_r$ is the combination of the unobserved elements.
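To make the decomposition concrete, the following minimal numerical sketch (with hypothetical sample indices and values) verifies that $\gamma'Y = \gamma'\Delta Y + \gamma'(I_N - \Delta)Y$:

```python
import numpy as np

N = 6
s = [0, 2, 3]                            # hypothetical sample indices
delta = np.zeros(N)
delta[s] = 1.0                           # delta_i = 1 if i in s, 0 if i in r
D = np.diag(delta)                       # the matrix Delta
Y = np.array([3., 1., 4., 1., 5., 9.])   # hypothetical realized values
gamma = np.ones(N)                       # gamma = 1_N gives the population total

lhs = gamma @ Y
rhs = gamma @ D @ Y + gamma @ (np.eye(N) - D) @ Y
print(np.isclose(lhs, rhs))              # True: observed part plus unobserved part
```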
Before stating the prediction results, it is necessary to introduce some definitions and preliminary results.
Since $\gamma_s'Y_s$ will be known after the sample is observed, we restrict our attention to linear predictors of $\gamma'Y$ of the form
$$ t = \gamma_s'Y_s + a'Y_s, $$
where $a$ is an $n$-dimensional vector.
Definition. A linear predictor $t$ of $\gamma'Y$ is unbiased if and only if
$$ E(t) = E(\gamma'Y) $$
for every $\theta \in \Omega$.
The class of all linear unbiased predictors of $\gamma'Y$ will be denoted by $\mathcal{C}$.
Finally, the next definition states the concept of optimality of a linear predictor of $\gamma'Y$.
Definition. The linear predictor $t^*$ is the best linear unbiased predictor of $\gamma'Y$, or the optimal linear predictor of $\gamma'Y$, if $t^* \in \mathcal{C}$ and
$$ E(t^* - \gamma'Y)^2 \le E(t - \gamma'Y)^2 $$
for every $t \in \mathcal{C}$ and every $\theta \in \Omega$.
The value of $E(t^* - \gamma'Y)^2$ corresponds to the mean-squared error of the predictor $t^*$.
The optimal linear predictor of $\gamma'Y$ under the model
$$ E(Y) = \theta \in \Omega, \qquad \operatorname{Cov}(Y) = \sigma^2 D, $$
where $D$ is a known diagonal matrix and $\sigma^2$ is unknown, was obtained in earlier work. It was shown that if $n \ge p$, where $p$ is the dimension of the linear space $\Omega$, then the best linear unbiased predictor of $\gamma'Y$ is given by
$$ t^* = a^{*\prime}Y, $$
where $a^* = (a_s^{*\prime}, 0')'$, $0$ is a null vector of dimension $N - n$, $a_s^*$ is the solution of a system of linear equations involving $D$ and $P_\Omega$, and $P_\Omega$ is the orthogonal projector onto $\Omega$.
Returning to the model (1.1), with a non-diagonal covariance matrix $\sigma^2 V$, let us consider the decomposition $V = LL'$, with $L$ a lower triangular matrix. By a classical result on the Cholesky decomposition, there is a unique lower triangular matrix $L$ with positive diagonal elements such that $V = LL'$. In addition, $L$ is nonsingular. Then, we define the random vector $Z = L^{-1}Y$ and, as a consequence, by properties of covariance matrices of linearly transformed random vectors,
$$ E(Z) = L^{-1}\theta \in L^{-1}\Omega, \qquad \operatorname{Cov}(Z) = \sigma^2 L^{-1}V(L^{-1})' = \sigma^2 I_N. $$
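As an illustration, the following sketch (with an arbitrary positive definite $V$) checks numerically that the whitening transformation behaves as stated, that is, $L^{-1}V(L^{-1})' = I_N$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
A = rng.normal(size=(N, N))
V = A @ A.T + N * np.eye(N)    # an arbitrary known positive definite matrix

L = np.linalg.cholesky(V)      # unique lower triangular L with V = L L'
Linv = np.linalg.inv(L)
print(np.allclose(Linv @ V @ Linv.T, np.eye(N)))   # True: Cov(Z) = sigma^2 I_N
```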
The next theorem presents the best linear unbiased predictor of $\gamma'Y$ under the model (1.1).
Theorem 1. In the model (1.1), that is, $E(Y) = \theta \in \Omega$ and $\operatorname{Cov}(Y) = \sigma^2 V$ with $V$ a known positive definite matrix, the optimal linear predictor of any linear function $\gamma'Y$ of $Y$ is
$$ t^* = a^{*\prime}Y, \qquad (2.1) $$
where $a^* = (a_s^{*\prime}, 0')'$, $0$ is the null vector of dimension $N - n$, $a_s^*$ is obtained from the solution of a system of linear equations stated in the proof, and $P$ is the orthogonal projection matrix onto $L^{-1}\Omega$.
Proof. Let $Z = L^{-1}Y$, with $L$ the lower triangular matrix such that $V = LL'$. Then $E(Z) = L^{-1}\theta \in L^{-1}\Omega$ and $\operatorname{Cov}(Z) = \sigma^2 I_N$, so that $Z$ follows a model of the diagonal-covariance form with parametric subspace $L^{-1}\Omega$. Moreover, $\gamma'Y = \gamma'LZ = w'Z$, where $w = L'\gamma$ is known. We note that the predictor constructed below does not depend on unknown quantities because, as it will be shown in the appendix, $Z_s$ and the solution of the associated system of linear equations do not depend on unknown quantities.
By the results for the diagonal-covariance case, the optimal linear predictor of $w'Z$ is $t^* = d^{*\prime}Z$, with $d^* = (d_s^{*\prime}, 0')'$, where $0$ is the null vector of dimension $N - n$ and $d_s^*$ is the solution of the corresponding system of linear equations, involving the orthogonal projection matrix $P$ onto $L^{-1}\Omega$. Taking $w = L'\gamma$, this predictor, rewritten in terms of $Y$, is precisely the expression in the statement of the theorem, and we have just proved that $t^*$ is the optimal linear predictor of $\gamma'Y$.
To finish the proof, it is enough to show that $t^*$ is a linear function of the observed vector $Y_s$ alone. For this purpose we write some of the matrices already defined in the partitioned form
$$ L = \begin{pmatrix} L_s & 0 \\ L_{rs} & L_r \end{pmatrix}, \qquad L^{-1} = \begin{pmatrix} L_s^{-1} & 0 \\ -L_r^{-1}L_{rs}L_s^{-1} & L_r^{-1} \end{pmatrix}, $$
where the submatrices $L_s$ and $L_r$ are of dimension $n \times n$ and $(N-n) \times (N-n)$, respectively, and $0$ denotes the null matrix. Further, from $V = LL'$ it follows that $V_s = L_sL_s'$ and $V_{rs} = L_{rs}L_s'$, and after some calculations,
$$ t^* = d^{*\prime}Z = d_s^{*\prime}Z_s = d_s^{*\prime}L_s^{-1}Y_s. $$
Thus, taking $a_s^* = (L_s^{-1})'d_s^*$, it follows that $t^* = a^{*\prime}Y$ with $a^* = (a_s^{*\prime}, 0')'$, which is the form claimed in the theorem. $\blacksquare$
It is important to observe that the projection matrix $P$ is defined without reference to any particular basis, and it may be difficult to calculate it directly from the above definition. But it can be obtained as
$$ P = X^*(X^{*\prime}X^*)^{-1}X^{*\prime}, $$
where $X^* = L^{-1}X$ is a basis matrix for $L^{-1}\Omega$ and $X$ is any basis matrix for $\Omega$.
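A minimal computational sketch of this construction, for a hypothetical basis matrix, is given below; the resulting matrix is symmetric and idempotent, as an orthogonal projector must be:

```python
import numpy as np

def projector(B: np.ndarray) -> np.ndarray:
    """Orthogonal projector onto the column space of a full-rank basis matrix B."""
    return B @ np.linalg.solve(B.T @ B, B.T)

B = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])   # hypothetical basis matrix
P = projector(B)
print(np.allclose(P, P.T), np.allclose(P @ P, P))        # symmetric and idempotent
```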
Some applications of the result in Theorem 1 will be presented in the examples.
3. Best Linear Unbiased Predictor in the Coordinatized Model
We now consider a coordinatized version of the model (1.1), given by
$$ E(Y) = X\beta, \qquad \operatorname{Cov}(Y) = \sigma^2 V, \qquad (3.1) $$
with $V$ a known positive definite matrix and $X$ a basis matrix of $\Omega$. Under this formulation, $X$ is an $N \times p$ matrix of full column rank and there exists a unique $\beta \in \mathbb{R}^p$ such that $\theta = X\beta$. Regression models are included in the class of models defined in (3.1).
Royall (1976) derived the best linear unbiased predictor of the population total $T = \sum_{i=1}^{N} Y_i$. This predictor, adapted to the notation introduced here and to predict any linear combination $\gamma'Y$ of $Y$, is given by
$$ \hat{T} = \gamma_s'Y_s + \gamma_r'\left( X_r\hat{\beta} + V_{rs}V_s^{-1}(Y_s - X_s\hat{\beta}) \right), \qquad (3.2) $$
where $\hat{\beta} = (X_s'V_s^{-1}X_s)^{-1}X_s'V_s^{-1}Y_s$ and $X = (X_s', X_r')'$ is partitioned conformably with $Y$.
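As a computational illustration, the following sketch evaluates (3.2) directly from its ingredients; the function and argument names are hypothetical, and the partition of $X$ and $V$ follows Section 1:

```python
import numpy as np

def blup(Ys, Xs, Xr, Vs, Vrs, gamma_s, gamma_r):
    """Sketch of predictor (3.2):
    gamma_s'Ys + gamma_r'(Xr b + Vrs Vs^{-1} (Ys - Xs b)),
    with b the generalized least squares estimate of beta from the sample."""
    b = np.linalg.solve(Xs.T @ np.linalg.solve(Vs, Xs),
                        Xs.T @ np.linalg.solve(Vs, Ys))
    Yr_hat = Xr @ b + Vrs @ np.linalg.solve(Vs, Ys - Xs @ b)
    return gamma_s @ Ys + gamma_r @ Yr_hat
```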
The next theorem shows that in the coordinatized model (3.1), the optimal linear predictor obtained in Theorem 1 reduces to Royall's predictor defined in (3.2).
Theorem 2. Under the model (3.1), the optimal linear predictor given in (2.1) is equal to the predictor $\hat{T}$ in (3.2).
Proof. We must show that $t^* = a^{*\prime}Y$ in (2.1) is equal to $\hat{T}$ in (3.2). As proved in Theorem 1, $a_s^* = (L_s^{-1})'d_s^*$, which is equivalent to expressing $t^*$ as a linear function of $Z_s = L_s^{-1}Y_s$. Applying (A.3), (A.1) and (A.2) of the appendix, this function can be written in terms of $X_s$, $X_r$, $V_s$ and $V_{rs}$. Now, it is enough to show that the resulting expression coincides with (3.2); employing (A.2), the expression reduces to a form involving $V_{rs}V_s^{-1}$ and, finally, using (A.5), we get
$$ t^* = \gamma_s'Y_s + \gamma_r'\left( X_r\hat{\beta} + V_{rs}V_s^{-1}(Y_s - X_s\hat{\beta}) \right) = \hat{T}. \qquad \blacksquare $$
4. Examples
In this section, we present two examples to illustrate the optimal predictors obtained in the theorems. In the first one, we consider a coordinate-free model and the predictor is derived by applying Theorem 1. The second example shows an application of Theorem 2 in a particular coordinatized model.
Example 1. Our objective is to predict the population total $T = \sum_{i=1}^{N} Y_i$ in a coordinate-free model whose covariance structure depends on a known correlation parameter $\rho$. Because of the great quantity of calculations involved, without loss of generality we restrict attention to a situation with small population and sample sizes, for which the matrix $V$, a basis for $\Omega$, the factor $L$ and the projection matrix onto $L^{-1}\Omega$ can all be written explicitly.
By Theorem 1, the optimal linear predictor of $T$ is $t^* = a^{*\prime}Y$, where $a_s^*$ is obtained from the solution of the corresponding system of linear equations. After the calculations, we get an explicit expression for $t^*$ as a linear combination of the sample values, with coefficients depending on $\rho$ and on the sample and population sizes.
It is interesting to note that, if $\rho = 0$, such that the $Y_i$ are uncorrelated, then $t^* = N\bar{y}_s$, where $\bar{y}_s$ is the sample mean. In this case, $t^*$ is the expansion predictor, which was obtained in earlier work under the model with common mean and uncorrelated, homoscedastic errors.
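A quick numerical check of this special case (with hypothetical values, $V = I_N$ and $\Omega$ spanned by $1_N$) shows the reduction to the expansion predictor $N\bar{y}_s$:

```python
import numpy as np

N, n = 8, 3
Ys = np.array([2., 5., 11.])                    # hypothetical sample values
Xs, Xr = np.ones((n, 1)), np.ones((N - n, 1))   # Omega spanned by 1_N
b = np.linalg.solve(Xs.T @ Xs, Xs.T @ Ys)       # GLS with V = I is ordinary least squares
pred = Ys.sum() + (Xr @ b).sum()                # predictor (3.2) with V_rs = 0
print(pred, N * Ys.mean())                      # 48.0 48.0 -- the expansion predictor
```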
Example 2. Let us consider a coordinatized superpopulation model in which the mean is given by a basis matrix $X$ and the covariance matrix is built from identity blocks and matrices of ones, with $\rho$ a known parameter. Our objective is to calculate the best linear unbiased predictor of the population total $T = \sum_{i=1}^{N} Y_i$.
In this situation, the model is coordinatized and, by Theorem 2, it is enough to evaluate Royall's predictor (3.2). Let $X$ and $V$ be written in the partitioned form of Section 1, where the blocks of $V$ involve the matrices of ones of the corresponding sample and non-sample dimensions. Thus, it is easy to see that the blocks $X_s$, $X_r$, $V_s$ and $V_{rs}$ required in (3.2) are available directly, and the predictor follows by substitution.
Appendix
First, we show that $Z_s$, defined in the proof of Theorem 1, does not depend on unknown quantities. Since $L$ is a lower triangular matrix, $L^{-1}$ is lower triangular also; then
$$ Z_s = (L^{-1})_s Y_s = L_s^{-1}Y_s, $$
where $L_s$, the leading $n \times n$ block of $L$, satisfies $V_s = L_sL_s'$ and hence depends only on the known matrix $V_s$. So, it is shown that $Z_s$ does not depend on unknown quantities. By the proof of Theorem 1, we can see that $d_s^*$ depends only on known matrices and thus $t^*$ also does not depend on unknown quantities. Then $t^*$ is a predictor of $\gamma'Y$.
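The key fact, that the leading block of $L$ depends only on $V_s$, can also be checked numerically; in the sketch below (with an arbitrary positive definite $V$), the leading $n \times n$ block of the Cholesky factor of $V$ coincides with the Cholesky factor of $V_s$ alone:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 6, 4
A = rng.normal(size=(N, N))
V = A @ A.T + N * np.eye(N)      # an arbitrary known positive definite matrix
L = np.linalg.cholesky(V)
# Leading n x n block of L is the Cholesky factor of V_s: no non-sample block enters.
print(np.allclose(L[:n, :n], np.linalg.cholesky(V[:n, :n])))   # True
```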
Now we derive the results (A.1) through (A.6), which are necessary to prove Theorem 2.
Let $V = LL'$ be partitioned as in the proof of Theorem 1. Then, using this equality blockwise and after some algebraic manipulations, it follows that
$$ V_s = L_sL_s', \quad (A.1) \qquad V_{rs} = L_{rs}L_s', \quad (A.2) \qquad V_r = L_{rs}L_{rs}' + L_rL_r'. \quad (A.3) $$
In the coordinatized model with $E(Y) = X\beta$ and covariance matrix $\sigma^2 V$, it is well known that the orthogonal projection matrix onto $L^{-1}\Omega$ is
$$ P = L^{-1}X(X'V^{-1}X)^{-1}X'(L^{-1})', \qquad (A.4) $$
since $(L^{-1}X)'(L^{-1}X) = X'V^{-1}X$. In the partitioned form, this matrix can be written in terms of the blocks of $L^{-1}$ and of $X = (X_s', X_r')'$. Using the fact that $V_s^{-1} = (L_s^{-1})'L_s^{-1}$, it follows that
$$ (L_s^{-1}X_s)'(L_s^{-1}X_s) = X_s'V_s^{-1}X_s. \qquad (A.5) $$
Application of a result on the inverse of a partitioned matrix, in conjunction with (A.4) and (A.5), yields (A.6).
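For reference, a standard form of the partitioned-inverse result used here is, for a nonsingular matrix with nonsingular leading block $A$,
$$ \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{pmatrix}, \qquad S = D - CA^{-1}B. $$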
References
Royall, R.M. (1976). The Linear Least-Squares Prediction Approach to Two-Stage Sampling. Journal of the American Statistical Association, 71, 657-664.