Approximation of Finite Population Totals Using Lagrange Polynomial


1. Introduction

This study uses an approximation technique, the Lagrange polynomial, to approximate the finite population total; unlike the local polynomial regression estimator, it requires no bandwidth selection. Lagrange polynomials are used for polynomial interpolation and extrapolation. For a given set of distinct points x_{j} with corresponding values y_{j}, the Lagrange polynomial of lowest degree takes the value y_{j} at each point x_{j} (i.e. the function and the polynomial coincide at each point). Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring; it is also an easy consequence of a formula published in 1783 by Leonhard Euler, as will be seen later.

[1], in the context of using auxiliary information from survey data to estimate the population total, defined $U_1, U_2, \cdots, U_N$ as the set of labels for the finite population. Let $(y_i, x_i)$ be the respective values of the study variable y and the auxiliary variable x attached to the i^{th} unit. Of interest is the estimation of the population total $Y_t = \sum_{i=1}^{N} y_i$ using the known population total $X_t = \sum_{i=1}^{N} x_i$ at the estimation stage. Let $s_1, s_2, \cdots, s_n$ be the set of sampled units under a general sampling design p, and let $\pi_i = p(i \in s)$ be the first order inclusion probabilities. In 1940, Cochran made an important contribution to modern sampling theory by suggesting methods of using auxiliary information to increase the precision of estimates [2]. He developed the ratio estimator to estimate the population mean or total of the study variable y. The ratio estimator of the population mean $\bar{Y}$ is of the form

$\bar{y}_r = \frac{\bar{y}}{\bar{x}}\bar{X}; \quad \bar{x} \ne 0$

The aim of this method is to use the ratio of the sample means of two characters, which is almost stable under sampling fluctuations and thus provides a better estimate of the true value. It is a well-known fact that $\bar{y}_r$ is more efficient than the sample mean estimator $\bar{y}$, in which no auxiliary information is used, if ρ_{yx}, the coefficient of correlation between y and x, is greater than half the ratio of the coefficient of variation of x to that of y, that is, if

$\rho_{yx} > \frac{1}{2}\left(\frac{C_x}{C_y}\right)$ (1.0)

Thus, if information on an auxiliary variable is either already available or can be obtained at no extra cost, and it has a high positive correlation with the main character, one would certainly prefer the ratio estimator. This has motivated the development of more and more refined techniques to reduce bias, and to obtain unbiased estimators with greater precision, by modifying the sampling scheme, the estimation procedure, or both. [3] further extended the work of [4] on systematic sampling. [5] also dealt with the problem of estimation using a priori information. Contrary to the situation of the ratio estimator, if the variables y and x are negatively correlated, then the product estimator of the population mean $\bar{Y}$ is of the form

$\bar{y}_q = \frac{\bar{y}}{\bar{X}}\bar{x}; \quad \bar{X} \ne 0$ (1.1)

which was proposed by [6]. It has been observed that the product estimator gives higher precision than the sample mean estimator $\bar{y}$ under the condition

$\rho_{yx} \le -\frac{1}{2}\left(\frac{C_x}{C_y}\right)$ (1.2)

The expressions for the bias and mean square errors of $\bar{y}_r$ and $\bar{y}_q$ have been derived by [7].
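As an illustrative sketch (on synthetic data, with variable names of our own choosing), the ratio estimator and condition (1.0) can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: auxiliary x, study variable y = 2x + noise,
# so y and x are strongly positively correlated.
N = 1000
x = rng.uniform(10, 50, N)
y = 2 * x + rng.normal(0, 5, N)
X_bar = x.mean()

# Simple random sample without replacement.
n = 100
idx = rng.choice(N, n, replace=False)
xs, ys = x[idx], y[idx]

# Ratio estimator of the population mean: (sample mean ratio) times X-bar.
y_ratio = ys.mean() / xs.mean() * X_bar

# Condition (1.0): the ratio estimator beats the sample mean
# when rho_yx > (1/2)(C_x / C_y).
rho = np.corrcoef(x, y)[0, 1]
C_x, C_y = x.std() / x.mean(), y.std() / y.mean()
print(y_ratio, rho > 0.5 * C_x / C_y)
```

Here condition (1.0) holds by construction, so the ratio estimator would be preferred to the plain sample mean.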

[8] made use of the known value of $\bar{X}$ to define the difference estimator

$\bar{y}_d = \bar{y} + \beta\left(\bar{X} - \bar{x}\right)$ (1.3)

where β is a constant. The best choice of β which minimizes the variance of the estimator is seen to be

$\beta =\frac{{S}_{yx}}{{S}_{x}^{2}}$ (1.4)

which is the population regression coefficient of y on x. Since β is generally unknown in practice, it is estimated by the sample regression coefficient

$b=\frac{{s}_{yx}}{{s}_{x}^{2}}$ (1.5)

Using the sample regression coefficient b, Watson defined the simple linear regression estimator as

$\bar{y}_{lr} = \bar{y} + b\left(\bar{X} - \bar{x}\right)$ (1.6)

This estimator is biased, the bias being negligible for large samples.
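A minimal numerical sketch of estimators (1.5) and (1.6), again on synthetic data of our own construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population with a linear y-x relationship.
N, n = 2000, 200
x = rng.uniform(0, 100, N)
y = 3 + 0.8 * x + rng.normal(0, 4, N)
X_bar = x.mean()

idx = rng.choice(N, n, replace=False)
xs, ys = x[idx], y[idx]

# Sample regression coefficient b = s_yx / s_x^2, as in (1.5).
b = np.cov(xs, ys)[0, 1] / xs.var(ddof=1)

# Simple linear regression estimator (1.6): y-bar + b(X-bar - x-bar).
y_lr = ys.mean() + b * (X_bar - xs.mean())
print(b, y_lr)
```

The estimate b recovers the population slope (0.8 here), and the regression estimator adjusts the sample mean toward the known auxiliary mean.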

The most common way of defining a class of estimators more efficient than the usual ratio (product) and sample mean estimators is to include one or more unknown parameters in the estimator, whose optimum choice is made by minimizing the corresponding mean square error or variance. Sometimes such modifications or generalizations are made by mixing two or more estimators with unknown weights, whose optimum values, which generally depend on population parameters, are then determined. In order to propose efficient classes of estimators, [9] suggested a one-parameter family of factor-type (F-T) ratio estimators defined as

$\bar{y}_f = \bar{y}\left[\frac{(A+C)\bar{X} + fB\bar{x}}{(A+fB)\bar{X} + C\bar{x}}\right]$ (1.7)

where $A = (d-1)(d-2)$, $B = (d-1)(d-4)$, $C = (d-2)(d-3)(d-4)$, $d > 0$, and $f = \frac{n}{N}$. The literature on survey sampling describes a great variety of techniques for using auxiliary information to obtain more efficient estimators. Keeping this fact in view, a large number of authors have paid attention to the formulation of modified ratio and product estimators using information on an auxiliary variate; for instance, see [10] and Singh et al. [11].
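The factor-type family (1.7) can be sketched numerically. The data below are synthetic, and d = 1 is chosen because it makes A = B = 0, so (1.7) collapses to the classical ratio estimator, which gives us something to verify against:

```python
import numpy as np

rng = np.random.default_rng(5)

N, n = 800, 80
x = rng.uniform(20, 60, N)
y = 1.5 * x + rng.normal(0, 3, N)
X_bar = x.mean()

s = rng.choice(N, n, replace=False)
x_bar, y_bar = x[s].mean(), y[s].mean()

d = 1.0  # d = 1 reduces (1.7) to the ratio estimator y_bar * X_bar / x_bar
A = (d - 1) * (d - 2)
B = (d - 1) * (d - 4)
C = (d - 2) * (d - 3) * (d - 4)
f = n / N

# Factor-type estimator (1.7).
y_f = y_bar * ((A + C) * X_bar + f * B * x_bar) / ((A + f * B) * X_bar + C * x_bar)
print(y_f, y_bar * X_bar / x_bar)
```

Other choices of d recover other classical estimators (for example, d = 4 gives the plain sample mean), which is what makes the family useful.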

Suppose n is large, so that $MSE(\hat{R}) \approx Var(\hat{R})$, where $\hat{R} = \bar{y}/\bar{x}$ estimates the population ratio $R = \bar{Y}/\bar{X}$. We assume that $\bar{x}$ and $\bar{X}$ are quite close, such that

$\hat{R} - R = \frac{\bar{y} - R\bar{x}}{\bar{x}} \approx \frac{\bar{y} - R\bar{x}}{\bar{X}}$

so that the bias of $\hat{R}$ becomes quite small.

The concept of nonparametric models within a model-assisted framework was first introduced by [12] for estimating population parameters like the population total and mean. The estimator was based on local polynomial smoothing. For a population of size N in which the auxiliary values x are observed for all units, they proposed the following estimator of the population total of the variable y:

$\hat{Y}_{gen} = \sum_{i \in s}\frac{y_i}{\pi_i} + \left(\sum_{j=1}^{N}\hat{\mu}(x_j) - \sum_{i \in s}\frac{\hat{\mu}(x_i)}{\pi_i}\right)$ (1.8)

The first term in (1.8) is a design estimator, while the second is a model component. When the sample comprises the whole population, the model component reduces to zero since $\pi_i = 1$ and s = U, and we recover the actual population total. [13] proposed the superpopulation model ξ such that $E_\xi(y_i) = \mu(x_i)$, where $\mu(x_i)$ is a known function of $x_i$. They proposed a model calibration estimator for the population total $Y_t$ of the form $\tilde{Y} = \sum_{i \in s}\frac{y_i}{\pi_i}$.
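Estimator (1.8) can be sketched as follows, assuming SRSWOR (so π_i = n/N) and using an ordinary least-squares line as a stand-in for the fitted model μ̂(·); all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

N, n = 500, 50
x = rng.uniform(1, 10, N)          # auxiliary variable, known for all N units
y = 5 * x + rng.normal(0, 2, N)    # study variable, observed only on the sample

pi = np.full(N, n / N)             # first-order inclusion probabilities (SRSWOR)
s = rng.choice(N, n, replace=False)

# Fit mu-hat on the sample; a least-squares line plays the role of the model.
slope, intercept = np.polyfit(x[s], y[s], 1)

def mu_hat(t):
    return slope * t + intercept

# Y_gen = sum_{i in s} y_i/pi_i + ( sum_{j=1}^N mu(x_j) - sum_{i in s} mu(x_i)/pi_i )
Y_gen = (y[s] / pi[s]).sum() + (mu_hat(x).sum() - (mu_hat(x[s]) / pi[s]).sum())
print(Y_gen, y.sum())
```

Because the model fits well, the model component absorbs most of the design variability and the estimate lands close to the true total.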

In local polynomial regression, a low-order weighted least squares (WLS) regression is fit at each point of interest x, using data from some neighborhood around x. Following the notation of [14], let (X_i, Y_i) be ordered pairs such that

${Y}_{i}=m\left({X}_{i}\right)+\sigma \left({X}_{i}\right){\epsilon}_{i}$ (1.9)

where
$\epsilon_i \sim N(0,1)$, $\sigma^2(X_i)$ is the variance of $Y_i$ at the point $X_i$, and $X_i$ comes from some distribution f. In some cases, homoscedastic variance is assumed, so we let $\sigma^2(X) = \sigma^2$. It is typically of interest to estimate m(x). Using Taylor's expansion about a point $x_0$:

$m(x) \approx m(x_0) + m'(x_0)(x - x_0) + \cdots + \frac{m^{(q)}(x_0)}{q!}(x - x_0)^q$ (1.91)

We can estimate these terms using weighted least squares, by minimizing the following over β:

$\sum_{i=1}^{n}\left[Y_i - \sum_{j=0}^{q}\beta_j (X_i - x_0)^j\right]^2 K_h(X_i - x_0)$ (1.92)

In (1.92), h controls the size of the neighborhood around x_0, and K_h(·) controls the weights, where $K_h(\cdot) \equiv \frac{1}{h}K\left(\frac{\cdot}{h}\right)$ and K is a kernel function. Denote the solution of (1.92) by $\hat{\beta}$. Then $\hat{m}^{(v)}(x_0) = v!\hat{\beta}_v$. [15] proposed using a nonparametric method to obtain $\mu(\cdot)$. However, this estimator faces the problem of determining the optimal degree of the local polynomial: a higher-degree polynomial yields a smoother $\hat{\mu}(\cdot)$ but worsens the boundary variance [16]. Such estimators are challenging to employ with multiple covariates and sparse data. Another challenge is how to incorporate categorical covariates. It is therefore necessary to consider other methods to recover the fitted values, such as splines. The term spline originally referred to a tool used by draftsmen to draw curves. According to [17], splines are piecewise regression functions constrained to join at points called knots.
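The minimization (1.92) at a single point x_0 is an ordinary weighted least squares problem. A sketch, with a Gaussian kernel as one possible choice of K and synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data from Y_i = m(X_i) + sigma * eps_i with m(x) = sin(2*pi*x).
n = 300
X = rng.uniform(0, 1, n)
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.1, n)

def local_poly(x0, X, Y, h=0.05, q=1):
    """Solve (1.92) at x0 by weighted least squares; returns beta_0 = m-hat(x0)."""
    K = np.exp(-0.5 * ((X - x0) / h) ** 2) / h       # K_h(X_i - x0), Gaussian K
    D = np.vander(X - x0, q + 1, increasing=True)    # columns (X_i - x0)^j
    W = np.diag(K)
    beta = np.linalg.solve(D.T @ W @ D, D.T @ W @ Y)
    return beta[0]

print(local_poly(0.25, X, Y))  # true m(0.25) = sin(pi/2) = 1
```

Note how both h (the bandwidth) and q (the degree) must be chosen; these are exactly the tuning problems discussed above.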

The Horvitz-Thompson (HT) estimator, which was originally discussed by [18], doesn't make use of the auxiliary information x_i, but instead uses only the study variable y_i to obtain the population total.

Consider the population of size N with units
${y}_{1},{y}_{2},{y}_{3},\cdots ,{y}_{N}$ . Suppose we want to select sample s of size n_{s}.

Let π_{i} be the probability of including i^{th} unit of the population in sample s. This is called the inclusion probability or first order inclusion probability of i^{th} unit in the sample.

Let π_{ij} be the probability of including i^{th} and j^{th} units in the sample. This is called the joint inclusion probability or second order inclusion probability.

When the sample is obtained from a probability sampling design, an unbiased estimator of the total $Y = \sum_{i=1}^{N} y_i$ is given by

$\hat{Y}_{HT} = \sum_{i \in s}\frac{y_i}{\pi_i} = \sum_{i \in s} y_i \pi_i^{-1}$ (1.93)

$\hat{Y}_{HT}$ is unbiased under the design-based approach [19].

Variance

$V\left(\hat{Y}_{HT}\right) = \sum_{i=1}^{N}\sum_{j=1}^{N}\left(\pi_{ij} - \pi_i\pi_j\right)\frac{y_i y_j}{\pi_i \pi_j}$

The variance of this estimator is minimized when π_i ∝ y_i: if the first order inclusion probability is proportional to y_i, the resulting HT estimator has zero variance. In practice, however, such a design cannot be constructed, because the values y_i are unknown at the design stage. If there is a good auxiliary variable x_i that is believed to be closely related to y_i, then a sampling design with π_i ∝ x_i can be very efficient.
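A quick sketch of (1.93) under simple random sampling without replacement, where π_i = n/N for every unit; the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

N, n = 1000, 100
y = rng.gamma(2.0, 50.0, N)          # synthetic study variable

s = rng.choice(N, n, replace=False)  # SRSWOR sample of labels
pi = np.full(N, n / N)               # pi_i = n/N under SRSWOR

# Horvitz-Thompson estimator (1.93): sum of y_i / pi_i over the sample.
Y_ht = (y[s] / pi[s]).sum()
print(Y_ht, y.sum())
```

Under SRSWOR this reduces to N times the sample mean; the estimate is unbiased but, with no auxiliary information, its variance depends entirely on the spread of y.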

The research literature reveals that the ratio estimator performs better than the local linear polynomial estimator when the population is linear, no matter which variance is used. The local linear polynomial regression estimator becomes the better estimator when the population is quadratic or exponential, especially as the sample size increases, which raises the likelihood of outliers in the sample.

One of the most useful and well-known classes of functions mapping the set of real numbers into itself is algebraic polynomials, the set of functions of the form

$P_n(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0$

where n is a non-negative integer and ${a}_{0},\cdots ,{a}_{n}$ are real constants. One reason for their importance is that they uniformly approximate continuous functions. By this we mean that given any function, defined and continuous on a closed and bounded interval, there exists a polynomial that is as “close” to the given function as desired [20] .

In Section 2 we briefly introduce the Lagrange polynomial, and in Section 2.1 we define it formally. Section 2.2 covers properties of polynomial approximations and a proof of the Weierstrass theorem. Section 3 presents the main results using real data from the Kenya National Bureau of Statistics population censuses: Section 3.2 shows how to calculate missing values via interpolation, and Sections 3.3 and 3.4 approximate the population totals in 2009 and 2019, respectively. Section 4 concludes that the best approximating polynomial for quick convergence must be a linear one in order to give a better extrapolation.

2. Approximation of Finite Population Totals

In this section we introduce the proposed approximator, the Lagrange polynomial approximation of the finite population total.

2.1. Proposed Lagrange Polynomial

Consider a finite population
$U = \{U_1, U_2, \cdots, U_N\}$ of N units. Let (y, x) be the (total, year) variables taking non-negative real values $(y_i, x_i)$, respectively, on the unit $U_i (i = 1, 2, \cdots, N)$. From the population U, a simple random sample of size n is drawn without replacement. Then the Lagrange interpolating polynomial is the polynomial P(x) of degree ≤ (n − 1) that passes through the n points
$\left({x}_{1},{y}_{1}=f\left({x}_{1}\right)\right)$ ,
$\left({x}_{2},{y}_{2}=f\left({x}_{2}\right)\right),\cdots ,\left({x}_{n},{y}_{n}=f\left({x}_{n}\right)\right)$ and is given by:

$P\left(x\right)={\displaystyle {\sum}_{j=1}^{n}{P}_{j}\left(x\right)}$ ,

where $P_j(x) = y_j \prod_{k=1, k \ne j}^{n} \frac{x - x_k}{x_j - x_k}$. Written explicitly,

$\begin{array}{c}P\left(x\right)=\frac{\left(x-{x}_{2}\right)\left(x-{x}_{3}\right)\cdots \left(x-{x}_{n}\right)}{\left({x}_{1}-{x}_{2}\right)\left({x}_{1}-{x}_{3}\right)\cdots \left({x}_{1}-{x}_{n}\right)}{y}_{1}+\frac{\left(x-{x}_{1}\right)\left(x-{x}_{3}\right)\cdots \left(x-{x}_{n}\right)}{\left({x}_{2}-{x}_{1}\right)\left({x}_{2}-{x}_{3}\right)\cdots \left({x}_{2}-{x}_{n}\right)}{y}_{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\cdots +\frac{\left(x-{x}_{1}\right)\cdots \left(x-{x}_{n-1}\right)}{\left({x}_{n}-{x}_{1}\right)\cdots \left({x}_{n}-{x}_{n-1}\right)}{y}_{n}\end{array}$
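A direct implementation of P(x) above, in plain Python with no special libraries:

```python
def lagrange(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        basis = 1.0
        for k, xk in enumerate(xs):
            if k != j:                       # product over k != j
                basis *= (x - xk) / (xj - xk)
        total += yj * basis
    return total

# With n nodes the interpolant reproduces any polynomial of degree <= n - 1:
# here three nodes on f(x) = x^2 recover f exactly (up to rounding).
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 9.0]
print(lagrange(2.0, xs, ys))  # approximately 4
```

This O(n²) form is the one used in Sections 3.3 and 3.4, where n = 2 makes it a simple weighted combination of two observations.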

2.2. Asymptotic Properties of Polynomial Approximations

Polynomial Approximation of Functions:

Weierstrass Theorem:

Let $f : [a,b] \to \mathbb{R}$ be continuous. Then there exists a sequence of polynomials $P_n(x)$ such that

$\|f - P_n\|_\infty = \max_{x \in [a,b]} |f(x) - P_n(x)| \to 0$ as $n \to \infty$

Proof of Theorem:

Let $f : [a,b] = [0,1] \to \mathbb{R}$ be continuous, and define

$P_n(x) = B_n(f)(x) = \sum_{k=0}^{n}\frac{n!}{k!(n-k)!} f\left(\frac{k}{n}\right) x^k (1-x)^{n-k}$

(Bernstein Polynomial)

${\left|\left|f-{P}_{n}\right|\right|}_{\infty}\to 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}n\to \infty $

We consider the three functions $f(x) = 1$, $f(x) = x$ and $f(x) = x^2$ and show convergence; by Korovkin's theorem, uniform convergence of the positive linear operators $B_n$ on these three test functions implies uniform convergence for every continuous f on [0,1].

$f\left(x\right)=1$

$B_n(f)(x) = \sum_{k=0}^{n}\frac{n!}{k!(n-k)!} x^k (1-x)^{n-k} = (x + 1 - x)^n = 1, \quad n \ge 0$

Hence

${\Vert f-{B}_{n}\left(f\right)\Vert}_{\infty}=0$

Also,

$f\left(x\right)=x$

$B_n(f)(x) = \sum_{k=0}^{n}\frac{n!}{k!(n-k)!}\frac{k}{n} x^k (1-x)^{n-k} = \sum_{k=1}^{n}\frac{(n-1)!}{(k-1)!(n-k)!} x^k (1-x)^{n-k}$

Let $L = k - 1$. Then

$B_n(f)(x) = x\sum_{L=0}^{n-1}\frac{(n-1)!}{L!(n-1-L)!} x^L (1-x)^{n-1-L} = x(x + 1 - x)^{n-1} = x$

Hence

${\Vert {B}_{n}\left(f\right)-f\Vert}_{\infty}=0,\text{\hspace{0.17em}}n\ge 1$

$f\left(x\right)={x}^{2}$

$B_n(f)(x) = \sum_{k=0}^{n}\frac{n!}{k!(n-k)!}\left(\frac{k}{n}\right)^2 x^k (1-x)^{n-k} = \sum_{k=1}^{n}\frac{(n-1)!}{(k-1)!(n-k)!}\frac{k-1+1}{n} x^k (1-x)^{n-k}$

$= \sum_{k=2}^{n}\frac{(n-1)!}{(k-2)!(n-k)!}\frac{1}{n} x^k (1-x)^{n-k} + \frac{1}{n}\sum_{k=1}^{n}\frac{(n-1)!}{(k-1)!(n-k)!} x^k (1-x)^{n-k}$

$B_n(f)(x) = \frac{n-1}{n}x^2\sum_{k=2}^{n}\frac{(n-2)!}{(k-2)!(n-k)!} x^{k-2}(1-x)^{n-k} + \frac{1}{n}x = \frac{n-1}{n}x^2 + \frac{1}{n}x$

since the remaining sum is a binomial expansion equal to 1. Hence

$\|f - B_n(f)\|_\infty = \max_{x \in [0,1]}\frac{x - x^2}{n} = \frac{1}{4n} \to 0$ as $n \to \infty$, the maximum being attained at $x = \frac{1}{2}$.
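The 1/(4n) rate can be checked numerically; this sketch evaluates the Bernstein polynomial of f(x) = x² at x = 1/2, where the error is largest:

```python
from math import comb

def bernstein(f, n, x):
    """B_n(f)(x) = sum_k C(n,k) f(k/n) x^k (1-x)^(n-k)."""
    return sum(comb(n, k) * f(k / n) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda t: t * t
for n in (10, 100, 1000):
    # For f(x) = x^2, B_n(f)(x) = ((n-1)/n)x^2 + x/n, so the error
    # at x = 1/2 is exactly 1/(4n).
    print(n, abs(f(0.5) - bernstein(f, n, 0.5)))
```

The printed errors shrink by a factor of ten as n grows tenfold, matching the 1/(4n) bound.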

In order to obtain a best approximating polynomial with less error, one needs to choose linear interpolation points that are closest to the target point.

3. Main Results

3.1. Data Exploration

The plot showed an upward growth in the population of Kenya. This could be attributed to good health services causing a reduction in maternal deaths and deaths from disease outbreaks, a boost in socio-economic growth, and political stability (Figure 1).

Figure 1. The Kenya population census data from 1969 to 2009, plotted (in green) to show the behaviour of the data.

We selected samples of size two from the 1969 to 2009 population censuses using simple random sampling without replacement, giving ten sample pairs in total. Each selected pair was joined by a linear polynomial and plotted on the same chart as the function f(x), shown in green, to assess how well it approximates f(x).

The chart in (Figure 2) comprises two linear polynomials intended to approximate the function in green and thereby the population total in 2019. As can be seen, neither linear plot approximates the function f(x) well enough to help us extrapolate the population total in 2019.

The linear polynomials in (Figure 3), in red and blue, are used to approximate the function f(x) in green so as to extrapolate the population total in 2019. The approximation clearly shows high variation, although the blue line appears better than the red at the end point.

Similarly, the approximating linear polynomials in red and green in (Figure 4) are used to approximate the function f(x) in green. Unfortunately, the two approximating lines are not suitable for extrapolating the population total in 2019.

The approximating linear polynomials shown in (Figure 5) are used to approximate the function f(x) in green, representing the trend of the entire population. As seen on the chart, the black line performs better at the end point than the blue, but still shows some variation.

Finally, the approximating linear polynomials in (Figure 6) are used to approximate the function f(x) representing the total population trend per year. The chart clearly shows that the black dotted line is the best approximation on its entire interval, [1999, 2009], making it the place for the Best Approximating Polynomial (BAP) to approximate the function f(x) uniformly to any degree of accuracy.

Figure 2. This chart was obtained from the sample pairs [1969, 1979] (yellow) and [1969, 1989] (blue), together with the green function f(x).

Figure 3. This chart was obtained from the sample pairs [1969, 1999] (red) and [1969, 2009] (blue), together with the green function f(x).

Figure 4. This chart was obtained from the sample pairs [1979, 1989] (green dotted line) and [1979, 1999] (red), together with the green function f(x).

Figure 5. This chart was obtained from the sample pairs [1979, 2009] (black) and [1989, 1999] (blue), together with the green function f(x).

Figure 6. This chart was obtained from the sample pairs [1989, 2009] (red dotted line) and [1999, 2009] (black dotted line), together with the green function f(x).

3.2. Calculating Missing Values via Interpolation

$x\left[1\right]=\left[1999\right]$ and $y\left[1\right]=\left[28,686,607\right]$ ; $x\left[11\right]=\left[2009\right]$ and $y\left[11\right]=\left[38,610,097\right]$

The eleven interpolated totals for 1999 through 2009 are:

28,686,607; 29,678,956; 30,671,305; 31,663,654; 32,656,003; 33,648,352; 34,640,701; 35,633,050; 36,625,399; 37,617,748; 38,610,097

These were generated by the recursion

$y[i] = y[i-1] + \frac{y[11] - y[i-1]}{h}$

where $i \ge 2$ and $h = 12 - i$ is the number of annual steps remaining to reach 2009; this recursion is equivalent to direct linear interpolation, $y[i] = y[1] + (i-1)\frac{y[11] - y[1]}{10}$.
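The recursion amounts to stepping along the straight line joining the 1999 and 2009 totals; a short sketch that regenerates the eleven values:

```python
# Endpoints from the 1999 and 2009 censuses.
y1, y11 = 28_686_607, 38_610_097

# Constant annual increment of the linear interpolation: (y[11] - y[1]) / 10.
step = (y11 - y1) / 10

years = range(1999, 2010)
totals = [round(y1 + i * step) for i in range(11)]
for yr, t in zip(years, totals):
    print(yr, t)
```

The annual increment works out to 992,349, which reproduces the column of values listed above.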

3.3. Approximation of Population Total in 2009

$x\left[11\right]=\left[2009\right]$ and $y\left[11\right]=\left[38,610,097\right]$ given

$x\left[10\right]=\left[2008\right]$ and $y\left[10\right]=\left[37,617,748\right]$ approximated

$x\left[9\right]=\left[2007\right]$ and $y\left[9\right]=\left[36,625,399\right]$ approximated

$L9=\left(x\left[11\right]-x\left[10\right]\right)/\left(x\left[9\right]-x\left[10\right]\right)\ast y\left[9\right]$

$L10 = \left(x\left[11\right] - x\left[9\right]\right)/\left(x\left[10\right] - x\left[9\right]\right)\ast y\left[10\right]$

Approximated value = L9 + L10

Approximated population total = 38,610,097

Error = 0
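The check in this section reduces to evaluating the linear Lagrange interpolant through the two approximated nodes at x = 2009:

```python
# Approximated nodes from Section 3.2.
x9, y9 = 2007, 36_625_399
x10, y10 = 2008, 37_617_748

x = 2009
L9 = (x - x10) / (x9 - x10) * y9    # Lagrange basis weight -1 on y[9]
L10 = (x - x9) / (x10 - x9) * y10   # Lagrange basis weight +2 on y[10]
print(int(L9 + L10))  # 38610097, matching the 2009 census total exactly
```

The error is zero because the 2009 total was itself the endpoint of the linear interpolation that generated the 2007 and 2008 values.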

3.4. Extrapolation of 2019 Population Total

$x\left[11\right]=\left[2009\right]$ and $y\left[11\right]=\left[38,610,097\right]$

$x\left[10\right]=\left[2008\right]$ and $y\left[10\right]=\left[37,617,748\right]$

$L19=\left(2019-x\left[11\right]\right)/\left(x\left[10\right]-x\left[11\right]\right)\ast y\left[10\right]$

$L20=\left(2019-x\left[10\right]\right)/\left(x\left[11\right]-x\left[10\right]\right)\ast y\left[11\right]$

Approximated value = L19 + L20

Approximated population total = 48,533,587
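The extrapolation is the same linear interpolant, now evaluated outside the node interval at x = 2019:

```python
# Nodes: the approximated 2008 total and the known 2009 total.
x10, y10 = 2008, 37_617_748
x11, y11 = 2009, 38_610_097

x = 2019
L19 = (x - x11) / (x10 - x11) * y10   # Lagrange basis weight -10 on y[10]
L20 = (x - x10) / (x11 - x10) * y11   # Lagrange basis weight +11 on y[11]
print(int(L19 + L20))  # 48533587
```

Equivalently, the 2009 total plus ten annual increments of 992,349 gives the same 48,533,587.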

4. Conclusion

In this work, the Lagrange polynomial has proven to be a good technique for approximating the population total from data obtained from the Kenya National Bureau of Statistics (KNBS). The research revealed that subsequent population totals are better approximated using a sample closest to the target being approximated. Therefore, the best approximating polynomial must be linear in order to obtain convergence with diminishing variation in a given interval. The precision of the technique is reflected in the interpolation of missing values shown in the results above, which reproduced the 2009 population total exactly as obtained in that census. We therefore conclude that the population of Kenya for the 2019 census will be forty-eight million, five hundred and thirty-three thousand, five hundred and eighty-seven (48,533,587).

Acknowledgements

We are grateful to the authors for their numerous and valuable contributions to this work, most especially the first author.

Conflict of Interest

The author(s) declare(s) that there is no conflict of interest regarding the publication of this paper.

References

[1] Deville, J.-C. and Sarndal, C.-E. (1992) Calibration Estimators in Survey Sampling. Journal of the American Statistical Association, 87, 376.

https://doi.org/10.1080/01621459.1992.10475217

[2] Cochran, W.G. and Goulden, C.H. (1940) Methods of Statistical Analysis. Journal of the Royal Statistical Society, 103, 250.

https://doi.org/10.2307/2980420

[3] Cochran, W.G. (1946) Graduate Training in Statistics. The American Mathematical Monthly, 53, 193.

https://doi.org/10.2307/2305269

[4] Nadaraya, E.A. (1964) On Estimating Regression. Theory of Probability and Its Applications, 9, 141-142.

https://doi.org/10.1137/1109020

[5] Singh, V.K., Singh, H.P., Singh, H.P. and Shukla, D. (1994) A General Class of Chain Estimators for Ratio and Product of Two Means of a Finite Population. Communications in Statistics Theory and Methods, 23, 1341-1355.

https://doi.org/10.1080/03610929408831325

[6] Searls, D.T. (1964) The Utilization of a Known Coefficient of Variation in the Estimation Procedure. Journal of the American Statistical Association, 59, 1225.

https://doi.org/10.1080/01621459.1964.10480765

[7] Wu, C. and Sitter, R.R. (2001) A Model-Calibration Approach to Using Complete Auxiliary Information from Survey Data. Journal of the American Statistical Association, 96, 185-193.

https://doi.org/10.1198/016214501750333054

[8] Johnson, A.A., Breidt, F.J. and Opsomer, J.D. (2008) Estimating Distribution Functions from Survey Data Using Nonparametric Regression. Journal of Statistical Theory and Practice, 2, 419-431.

https://doi.org/10.1080/15598608.2008.10411884

[9] Sukhatme, V. (1984) Future Dimensions of World Food and Population. Economic Development and Cultural Change, 32, 892-897.

https://doi.org/10.1086/451435

[10] Watson, G. (1964) Smooth Regression Analysis. The Indian Journal of statistics Series A, 26, 359-372.

[11] Solanki, R.S., Singh, H.P. and Pal, S.K. (2014) Improved Ratio-Type Estimators of Finite Population Variance Using Quartiles. Hacettepe Journal of Mathematics and Statistics, 45, 1.

https://doi.org/10.15672/HJMS.2014448247

[12] Lairez, P. (2016) A Deterministic Algorithm to Compute Approximate Roots of Polynomial Systems in Polynomial Average Time. Foundations of Computational Mathematics.

https://doi.org/10.1007/s10208-016-9319-7

[13] Godambe, V.P. and Thompson, M.E. (1986) Parameters of Super Population and Survey Population: Their Relationships and Estimation. International Statistical Review/Revue Internationale de Statistique, 127-138.

[14] Hansen, M.H., Hurwitz, W.N. and Madow, W.G. (1953) Sample Survey Methods and Theory. Vol. 1, Wiley, New York.

[15] Robson, D.S. (1957) Applications of Multivariate Polykays to the Theory of Unbiased Ratiotype Estimation. Journal of the American Statistical Association, 52, 511-522.

https://doi.org/10.1080/01621459.1957.10501407

[16] Montanari, G.E. and Ranalli, M.G. (2003) On Calibration Methods for Design Based Finite Population Inferences. Bulletin of the International Statistical Institute, 54th Session 60.

[17] Madow, W.G. and Madow, L.H. (1944) On the Theory of Systematic Sampling, I. The Annals of Mathematical Statistics, 15, 1-24.

https://doi.org/10.1214/aoms/1177731312

[18] Keele, L.J. (2008) Semiparametric Regression for the Social Sciences. John Wiley and Sons.

[19] Singh, H.P., Pal, S.K. and Mehta, V. (2016) A Generalized Class of Ratio-Cum-Dual to Ratio Estimators of Finite Population Mean Using Auxiliary Information in Sample Surveys. Mathematical Sciences Letters, 5, 203-211.

https://doi.org/10.18576/msl/050215

[20] Burden, R.L. and Faires, J.D. (2001) Numerical Analysis. Brooks/Cole.