White Noise Analysis: A Measure of Time Series Model Adequacy


1. Introduction

The fundamental building block of time series analysis is stationarity. The idea behind stationarity is that the probability laws governing the behaviour of the process do not change over time, so that the process remains in a state of statistical equilibrium; this provides a statistical setting for describing and making inferences about the structure of data that fluctuate in a random manner [1] [2] [3]. According to [3], a process is strictly stationary if its entire probability structure depends only on time differences. A less restrictive requirement, called weak stationarity of order k, is that the moments up to order k depend only on time lags; second-order stationarity together with an assumption of normality is sufficient to produce strict stationarity (see also [4] [5]). For simplicity, a time series is said to be stationary if its mean, variance and autocovariance function are constant over time (see [6]). One of the most important and fundamental examples of a stationary process is the white noise process, defined as a sequence of independent (uncorrelated) and identically distributed random variables with zero mean and constant variance [2] [3] [5]. The white noise process is therefore particularly important and constitutes an essential bedrock of time series model building.

In this study, our aim is to apply the white noise process in measuring model adequacy, with the target of confirming the independence assumption, which ensures that no autocorrelation exists in the time series considered and that the ARMA model entertained captures the linear structure of the dataset.

The motivation stems from the fact that the goal of statistical modeling is to achieve parsimony (i.e. the principle of selecting the model with the smallest number of parameters that completely expresses the linear dependence structure, providing better prediction and generalization to new observations) conditional on the restriction of model adequacy.

Testing for model adequacy, or diagnostic checking as defined by [7], verifies that the model incorporates all relevant information and that, when calibrated to the data, no important departures from the statistical assumptions can be found. Model adequacy involves residual analysis and overfitting. In time series modeling, a good model must have parameter estimates reasonably close to the true values, must adequately capture the dependence structure of the data, and should produce residuals that are approximately uncorrelated [2] [6] [8]. These residuals are obtained as the difference between an observed value of the time series and the value predicted by the candidate model fitted to the data; they are useful in checking whether the model has adequately captured the information in the data. According to [6], model adequacy relates primarily to the assumption that the residuals are independent. Moreover, if the residuals of a given model are correlated, the model must be refined because it does not completely capture the statistical relationship among the time series values [2]. A model is said to be adequate if its residuals are statistically independent, implying that the residual series is uncorrelated. Therefore, in testing for model adequacy, which mainly checks the independence of the residual series, the autocorrelation function (ACF), the partial autocorrelation function (PACF) and the Ljung-Box test on the residuals are considered.

Another adequacy checking tool is overfitting, which involves adding another coefficient to a fitted model to see whether the resulting model is better. The following guidelines apply to fitting and overfitting:

1) Specify the original model carefully. If a simple model seems promising, check it out before trying a more complicated model.

2) When overfitting, do not increase the orders of both the autoregressive (AR) and moving average (MA) parts of the model simultaneously.

3) Extend the model in directions suggested by the analysis of the residuals. One setback of overfitting, however, is its tendency to violate the principle of parsimony [2] [6].

Model adequacy has also been explored by the following studies: [7] - [17].

The remaining part of this work is organized as follows: Section 2 presents the methodology, Section 3 presents the results and discussion, and Section 4 concludes.

2. Methodology

2.1. Return Series

The return series, ${R}_{t}$, can be obtained given that ${P}_{t}$ is the price of a unit share at time t, while ${P}_{t-1}$ is the share price at time t − 1.

${R}_{t}=\nabla \mathrm{ln}\left({P}_{t}\right)=\left(1-B\right)\mathrm{ln}\left({P}_{t}\right)=\mathrm{ln}\left({P}_{t}\right)-\mathrm{ln}\left({P}_{t-1}\right)$ (1)

Here, ${R}_{t}$ is regarded as a transformed series of the share price, ${P}_{t}$, meant to attain stationarity, while B is the backshift operator. Thus, both the mean and the variance of the series are stabilized [18] [19].
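The transformation in Equation (1) can be sketched directly in code; the price values below are hypothetical and serve only to illustrate the computation:

```python
import numpy as np

# Hypothetical daily closing share prices P_t (illustrative values only)
prices = np.array([10.00, 10.20, 10.10, 10.50, 10.40])

# Equation (1): R_t = ln(P_t) - ln(P_{t-1}), the first difference of log prices
returns = np.diff(np.log(prices))

print(len(returns))  # → 4, one fewer observation than the price series
```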

2.2. Autoregressive Integrated Moving Average (ARIMA) Model

In [3], the ARMA model is extended to handle homogeneous non-stationary time series in which ${X}_{t}$ itself is non-stationary but whose ${d}^{th}$ difference follows a stationary ARMA model. The resulting ARIMA model is written as

$\phi \left(B\right){X}_{t}=\varphi \left(B\right){\nabla}^{d}{X}_{t}=\theta \left(B\right){\epsilon}_{t}$ (2)

where $\phi \left(B\right)=\varphi \left(B\right){\nabla}^{d}$ is the nonstationary (generalized) autoregressive operator, such that d of the roots of $\phi \left(B\right)=0$ are unity and the remainder lie outside the unit circle, while $\varphi \left(B\right)$ is a stationary autoregressive operator (see also [20] [21]).
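As a numerical sketch of the idea behind Equation (2) (an assumed simulation, not the study's data): a series whose first difference follows a stationary AR(1) is an ARIMA(1,1,0) process, and differencing once recovers the stationary component, whose AR parameter can then be estimated:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an ARIMA(1,1,0) process: the first difference w_t = (1 - B)X_t
# follows a stationary AR(1) with parameter phi
phi, n = 0.5, 5000
eps = rng.standard_normal(n)
w = np.zeros(n)
for t in range(1, n):
    w[t] = phi * w[t - 1] + eps[t]
x = np.cumsum(w)  # integrate once: x is non-stationary (d = 1)

# Differencing recovers the stationary series; estimate phi by least squares
d = np.diff(x)
phi_hat = float(np.sum(d[1:] * d[:-1]) / np.sum(d[:-1] ** 2))
print(round(phi_hat, 2))  # close to the true value 0.5
```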

2.3. Stationarity

The foundation of time series analysis is stationarity. Consider a finite set of return variables $\left\{{R}_{{t}_{1}},{R}_{{t}_{2}},\cdots ,{R}_{{t}_{n}}\right\}$ from a time series process, $\left\{R\left(t\right):t=0,\pm 1,\pm 2,\cdots \right\}$. The k-dimensional distribution function is defined as

${F}_{{R}_{{t}_{1}}\cdots {R}_{{t}_{k}}}\left({r}_{1},{r}_{2},\cdots ,{r}_{k}\right)=P\left\{{R}_{{t}_{1}}\le {r}_{1},{R}_{{t}_{2}}\le {r}_{2},\cdots ,{R}_{{t}_{k}}\le {r}_{k}\right\},$ (3)

where ${r}_{j},j=1,2,\cdots ,k$ are any real numbers.

A process is said to be:

1) first-order stationary in distribution if its one-dimensional distribution is time invariant. That is, if

${F}_{{R}_{{t}_{1}}}\left({r}_{1}\right)={F}_{{R}_{{t}_{1}+k}}\left({r}_{1}\right)$, (4)

for any integers ${t}_{1},k$ and ${t}_{1}+k$.

2) second-order stationary in distribution if

${F}_{{R}_{{t}_{1}},{R}_{{t}_{2}}}\left({r}_{1},{r}_{2}\right)={F}_{{R}_{{t}_{1}+k},{R}_{{t}_{2}+k}}\left({r}_{1},{r}_{2}\right),$ (5)

for any integers ${t}_{1},{t}_{2},k,{t}_{1}+k$ and ${t}_{2}+k$.

3) the ${n}^{th}$ -order stationary in distribution if

${F}_{{R}_{{t}_{1}},{R}_{{t}_{2}},\cdots ,{R}_{{t}_{n}}}\left({r}_{1},\cdots ,{r}_{n}\right)={F}_{{R}_{{t}_{1}+k},{R}_{{t}_{2}+k},\cdots ,{R}_{{t}_{n}+k}}\left({r}_{1},\cdots ,{r}_{n}\right),$ (6)

for any $\left({t}_{1},\cdots ,{t}_{n}\right)$ and k integers.

A process is said to be strictly stationary if (6) holds for any $n=1,2,\cdots $

According to [3], a process $\left\{{R}_{t}\right\}$ is weakly stationary if the mean $E\left({R}_{t}\right)=\mu $ is a fixed constant for all t and the autocovariances $Cov\left({R}_{t},{R}_{t+k}\right)={\gamma}_{k}$ depend only on the time difference, or time lag, k for all t.

A process that is stationary in the wide sense, or covariance stationary, is also referred to as a second-order stationary process.

2.4. White Noise Process

A process $\left\{{a}_{t}\right\}$ is called a white noise process if it is a sequence of uncorrelated random variables from a fixed distribution with constant mean, $E\left({a}_{t}\right)={\mu}_{a}$, usually assumed to be zero, constant variance, $Var\left({a}_{t}\right)={\sigma}_{a}^{2}$, and ${\gamma}_{k}=Cov\left({a}_{t},{a}_{t+k}\right)=0$ for all $k\ne 0$. It is denoted by ${a}_{t}\sim \text{WN}\left(0,{\sigma}_{a}^{2}\right)$, where WN stands for white noise [5]. By definition, a white noise process $\left\{{a}_{t}\right\}$ is stationary with autocovariance function,

${\gamma}_{k}=\left\{\begin{array}{ll}{\sigma}_{a}^{2}, & k=0,\\ 0, & k\ne 0.\end{array}\right.$ (7)

The autocorrelation function is given as:

${\rho}_{k}=\left\{\begin{array}{ll}1, & k=0,\\ 0, & k\ne 0.\end{array}\right.$ (8)

while the partial autocorrelation function is

${\phi}_{kk}=\left\{\begin{array}{ll}1, & k=0,\\ 0, & k\ne 0.\end{array}\right.$ (9)

Thus, the implication of a white noise specification is that the ACF and PACF are identically zero at all nonzero lags.
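A quick numerical check of this property (using simulated noise, not the study's data): the sample autocorrelations of a white noise sequence at nonzero lags stay close to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(5000)  # simulated white noise, a_t ~ WN(0, 1)

# Sample autocorrelations at a few nonzero lags should be near zero,
# consistent with Equations (8) and (9)
lag1 = float(np.corrcoef(a[:-1], a[1:])[0, 1])
lag5 = float(np.corrcoef(a[:-5], a[5:])[0, 1])
print(round(lag1, 2), round(lag5, 2))
```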

2.5. Autocovariance and Autocorrelation Functions

According to [5], the covariance between ${R}_{t}$ and ${R}_{t+k}$, denoted by $Cov\left({R}_{t},{R}_{t+k}\right)$ and a function of the time difference k, is called the autocovariance function $\left\{{\gamma}_{k}\right\}$ of the stochastic process, since it represents the covariance between ${R}_{t}$ and ${R}_{t+k}$ from the same process. It is defined as

${\gamma}_{k}=Cov\left({R}_{t},{R}_{t+k}\right)=E\left({R}_{t}-\mu \right)\left({R}_{t+k}-\mu \right).$ (10)

The sample estimate of ${\gamma}_{k}$ is ${C}_{k}$ given by

${C}_{k}=\frac{1}{n}{\sum}_{t=1}^{n-k}\left({R}_{t}-\bar{R}\right)\left({R}_{t+k}-\bar{R}\right)$ (11)

Similarly, the correlation between ${R}_{t}$ and ${R}_{t+k}$, denoted by $Corr\left({R}_{t},{R}_{t+k}\right)$ and a function of the time difference k, is called the autocorrelation function $\left\{{\rho}_{k}\right\}$ of the stochastic process, since it represents the correlation between ${R}_{t}$ and ${R}_{t+k}$ from the same process. It is defined as

${\rho}_{k}=\frac{Cov\left({R}_{t},{R}_{t+k}\right)}{\sqrt{Var\left({R}_{t}\right)}\sqrt{Var\left({R}_{t+k}\right)}}=\frac{{\gamma}_{k}}{{\gamma}_{0}}$ (12)

The corresponding sample estimate is given by

${\stackrel{^}{\rho}}_{k}=\frac{{C}_{k}}{{C}_{0}},k=0,1,2,\cdots $ (13)
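Equations (11) and (13) translate directly into code; the short series below is made up purely to exercise the formulas:

```python
import numpy as np

def sample_autocov(r, k):
    """C_k = (1/n) * sum_{t=1}^{n-k} (R_t - Rbar)(R_{t+k} - Rbar), Equation (11)."""
    n = len(r)
    dev = r - r.mean()
    return float(np.sum(dev[: n - k] * dev[k:]) / n)

def sample_acf(r, k):
    """rho_hat_k = C_k / C_0, Equation (13)."""
    return sample_autocov(r, k) / sample_autocov(r, 0)

# Toy series (hypothetical values with a period-4 pattern)
r = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0, 2.0])
print(sample_acf(r, 0), sample_acf(r, 1))  # → 1.0 0.0
```

Note that, as Equation (13) requires, the lag-0 sample autocorrelation is always exactly 1.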

2.6. Partial Autocorrelation Function (PACF)

The conditional correlation between ${R}_{t}$ and ${R}_{t+k}$ after their mutual linear dependency on the intervening variables $\left({R}_{t+1},{R}_{t+2},\cdots ,{R}_{t+k-1}\right)$ has been removed, given by $Corr\left({R}_{t},{R}_{t+k}|{R}_{t+1},{R}_{t+2},\cdots ,{R}_{t+k-1}\right)$, is usually referred to as the partial autocorrelation in time series analysis ([5]).

Partial autocorrelation can be derived from the regression model, where the dependent variable, ${R}_{t+k}$, from a zero-mean stationary process is regressed on k-lagged variables ${R}_{t+k-1},{R}_{t+k-2},\cdots $ and ${R}_{t}$, that is

${R}_{t+k}={\phi}_{k1}{R}_{t+k-1}+{\phi}_{k2}{R}_{t+k-2}+\cdots +{\phi}_{kk}{R}_{t}+{\alpha}_{t+k},$ (14)

where ${\phi}_{ki}$ denotes the ${i}^{th}$ regression parameter and ${\alpha}_{t+k}$ is an error term with mean zero, uncorrelated with ${R}_{t+k-j}$ for $j=1,2,\cdots ,k$. Multiplying both sides of the above regression equation by ${R}_{t+k-j}$ and taking expectations, we get

${\gamma}_{j}={\phi}_{k1}{\gamma}_{j-1}+{\phi}_{k2}{\gamma}_{j-2}+\cdots +{\phi}_{kk}{\gamma}_{j-k}$. (15)

Hence, dividing through by ${\gamma}_{0}$,

${\rho}_{j}={\phi}_{k1}{\rho}_{j-1}+{\phi}_{k2}{\rho}_{j-2}+\cdots +{\phi}_{kk}{\rho}_{j-k}$. (16)
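Equation (16), taken for $j=1,\cdots ,k$, is a linear system in ${\phi}_{k1},\cdots ,{\phi}_{kk}$; solving it and keeping the last coefficient gives ${\phi}_{kk}$. The sketch below does this for the theoretical ACF of an AR(1), ${\rho}_{j}={\phi}^{j}$, whose PACF is known to cut off after lag 1 (an assumed example, not taken from the paper):

```python
import numpy as np

def pacf_at_lag(rho, k):
    """Solve Equation (16) for j = 1, ..., k and return phi_kk."""
    # Toeplitz system R * phi = rhs, with R[i, j] = rho_|i - j|
    R = np.array([[rho[abs(i - j)] for j in range(k)] for i in range(k)])
    rhs = np.array([rho[j + 1] for j in range(k)])
    return float(np.linalg.solve(R, rhs)[-1])

# Theoretical ACF of an AR(1) with phi = 0.6: rho_j = 0.6**j
rho = [0.6 ** j for j in range(10)]
print(round(pacf_at_lag(rho, 1), 6))    # phi_11 equals rho_1 = 0.6
print(abs(pacf_at_lag(rho, 2)) < 1e-8)  # phi_22 vanishes: PACF cuts off
```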

2.7. Diagnostic Checking of Linear Time Series Models

Diagnostic checking aims to uncover a possible lack of fit of the tentative model and, where possible, to reveal its cause. If no lack of fit is indicated, the model is ready for use. Otherwise, the iterative cycle of identification, estimation and diagnostic checking is repeated until a suitable and appropriate representation is obtained.

Once the parameters of the tentative models have been estimated, we check whether or not the residuals obtained from the estimated equation are approximately white noise. This is done by examining the ACF and PACF of the residuals to see whether they are statistically insignificant, that is, within two standard errors, corresponding to the 5% level of significance. If the residuals are approximately white noise, the model may be entertained, provided the parameters are significantly different from zero.
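The two-standard-error screening described above can be sketched as follows (using simulated residuals; the band $\pm 2/\sqrt{T}$ is the usual large-sample approximation):

```python
import numpy as np

def lags_outside_bands(resid, m=20):
    """Return the lags (up to m) whose sample residual ACF falls outside
    the approximate 95% significance bands +/- 2/sqrt(T)."""
    T = len(resid)
    e = resid - resid.mean()
    c0 = np.sum(e * e) / T
    band = 2.0 / np.sqrt(T)
    flagged = []
    for k in range(1, m + 1):
        rho_k = np.sum(e[:-k] * e[k:]) / T / c0
        if abs(rho_k) > band:
            flagged.append(k)
    return flagged

rng = np.random.default_rng(1)
resid = rng.standard_normal(1000)  # ideal residuals: pure white noise
print(lags_outside_bands(resid))   # few or no lags should be flagged
```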

The Portmanteau lack-of-fit test uses the residual sample ACFs jointly to test the null hypothesis that the first several autocorrelations of ${a}_{t}$ are simultaneously zero. [17] proposed the Portmanteau statistic:

${Q}^{\ast}\left(m\right)=T{\displaystyle \underset{l=1}{\overset{m}{\sum}}{\stackrel{^}{\rho}}_{l}^{2}}$, (17)

where T is the number of observations.

It tests the null hypothesis ${H}_{0}:{\rho}_{1}=\cdots ={\rho}_{m}=0$ against the alternative ${H}_{a}:{\rho}_{i}\ne 0$ for some $i\in \left\{1,\cdots ,m\right\}$. Under the assumption that $\left\{{a}_{t}\right\}$ is an i.i.d. sequence with certain moment conditions, ${Q}^{\ast}\left(m\right)$ is asymptotically a Chi-square random variable with m degrees of freedom.

[22] modified the ${Q}^{\ast}\left(m\right)$ statistic to increase the power of the test in finite samples as follows:

$Q\left(m\right)=T\left(T+2\right){\displaystyle \underset{l=1}{\overset{m}{\sum}}\frac{{\stackrel{^}{\rho}}_{l}^{2}}{T-l}}$, (18)

where T is the number of observations.

The decision rule is to reject ${H}_{0}$ if $Q\left(m\right)>{\chi}_{\alpha}^{2}$, where ${\chi}_{\alpha}^{2}$ denotes the 100(1 − α)th percentile of the Chi-squared distribution with m − (p + q) degrees of freedom when the statistic is computed from the residuals of a fitted ARMA(p, q) model. Equivalently, ${H}_{0}$ is rejected if the p-value is less than or equal to the significance level α.

In practice, the selection of m may affect the performance of the Q(m) statistic. The choice $m\approx \mathrm{ln}\left(T\right)$ provides better power performance [4].
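Equation (18) is simple to compute directly. The sketch below applies it to simulated white noise with $m\approx \mathrm{ln}\left(T\right)$, which gives m = 6 for T = 1000; the 5% Chi-squared critical value for 6 degrees of freedom is about 12.59 from standard tables:

```python
import numpy as np

def ljung_box_q(x, m):
    """Q(m) = T(T + 2) * sum_{l=1}^{m} rho_hat_l^2 / (T - l), Equation (18)."""
    T = len(x)
    e = x - x.mean()
    c0 = np.sum(e * e) / T
    q = 0.0
    for l in range(1, m + 1):
        rho_l = np.sum(e[:-l] * e[l:]) / T / c0
        q += rho_l ** 2 / (T - l)
    return float(T * (T + 2) * q)

rng = np.random.default_rng(7)
resid = rng.standard_normal(1000)  # simulated white noise residuals
m = int(np.log(len(resid)))        # m ~ ln(T) = 6 here, as suggested by [4]
q = ljung_box_q(resid, m)

# Decision rule: reject H0 (no autocorrelation) only if Q(m) exceeds the
# critical value, about 12.59 for 6 degrees of freedom at the 5% level
print(round(q, 2))
```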

3. Results and Discussion

3.1. Dataset

The data were collected from a secondary source, the records of the Nigerian Stock Exchange. The daily closing share prices of the sampled banks (Union Bank, Unity Bank and Wema Bank) from January 3, 2006 to November 24, 2016 were obtained from the Nigerian Stock Exchange [23] and delivered through contactcentre@nigerianstockexchange.com.

3.2. Interpretation of Time Plot

Figures 1-3 present the share price series for the three banks. The share prices of all the banks do not fluctuate around a common mean, which clearly indicates the presence of a stochastic trend in the share prices and is also an indication of non-stationarity. Since the share price series are non-stationary, the first difference of the natural logarithm of each share price series is taken to obtain a stationary (returns) series; the log transformation also serves to stabilize the variance. Figures 4-6 show that the return series appear to be stationary.

Figure 1. Share price series of Union Bank of Nigeria.

Figure 2. Share price series of Unity Bank.

Figure 3. Share price series of Wema Bank.

Figure 4. Return series of Union Bank of Nigeria.

Figure 5. Return series of Unity Bank.

Figure 6. Return series of Wema Bank.

3.3. Building Autoregressive Integrated Moving Average (ARIMA) Model

3.3.1. Building Autoregressive Integrated Moving Average (ARIMA) Model of Union Bank of Nigeria

1) Model identification

From Figure 7 and Figure 8, both the ACF and the PACF indicate that a mixed model could be entertained. The models ARIMA(1,1,0), ARIMA(0,1,1) and ARIMA(1,1,1) were considered tentatively.

Figure 7. ACF of return series of union bank of Nigeria.

Figure 8. PACF of return series of union bank of Nigeria.

2) Estimation of parameters

From Table 1, the ARIMA(1,1,0) model is selected on the grounds of parameter significance and minimum AIC.

3) Diagnostic checking of the model

From Figure 9 and Figure 10, all the lag coefficients of the ACF and PACF lie within the significance bands, that is, they are not significantly different from zero, implying that the residual series of the ARIMA(1,1,0) model appears to be white noise: the series is independent and identically distributed with mean zero and constant variance.

Evidence from the Ljung-Box Q-statistics in Table 2 shows that the ARIMA(1,1,0) model is adequate at the 5% level of significance, given the Q-statistics at lags 1, 4, 8 and 24. That is, the hypothesis of no autocorrelation is not rejected, confirming the independence of the residual series.

3.3.2. Building Autoregressive Integrated Moving Average (ARIMA) Model of Unity Bank

1) Model identification

From Figure 11 and Figure 12, both the ACF and the PACF indicate that a mixed model could be entertained. The models ARIMA(1,1,0), ARIMA(0,1,1) and ARIMA(1,1,1) were considered tentatively.

2) Estimation of parameters

From Table 3, the ARIMA(1,1,0) model is selected on the grounds of parameter significance and minimum AIC.

3) Diagnostic checking of the model

From Figure 13 and Figure 14, all the lag coefficients of the ACF and PACF lie within the significance bands except at lag 9, implying that the residual series of the ARIMA(1,1,0) model appears to be approximately white noise: the series is independent and identically distributed with mean zero and constant variance.

Evidence from the Ljung-Box Q-statistics in Table 4 shows that the ARIMA(1,1,0) model is adequate at the 5% level of significance, given the Q-statistics at lags 1, 4, 8 and 24. That is, the hypothesis of no autocorrelation is not rejected, confirming the independence of the residual series.

3.3.3. Building Autoregressive Integrated Moving Average (ARIMA) Model of Wema Bank

1) Model identification

From Figure 15 and Figure 16, both the ACF and the PACF indicate that a mixed model could be entertained. The models ARIMA(1,1,0), ARIMA(2,1,0), ARIMA(0,1,2) and ARIMA(2,1,1) were considered tentatively.

2) Estimation of parameters

From Table 5, the ARIMA(2,1,0) model is selected on the grounds of parameter significance and minimum AIC.

3) Diagnostic checking of the model

From Figure 17 and Figure 18, all the lag coefficients of the ACF and PACF lie within the significance bands, that is, they are not significantly different from zero, implying that the residual series of the ARIMA(2,1,0) model appears to be white noise: the series is independent and identically distributed with mean zero and constant variance.

Evidence from the Ljung-Box Q-statistics in Table 6 shows that the ARIMA(2,1,0) model is adequate at the 5% level of significance, given the Q-statistics at lags 1, 4, 8 and 24. That is, the hypothesis of no autocorrelation is not rejected, confirming the independence of the residual series.

Table 1. ARIMA models for return series of Union Bank of Nigeria.

Source: output of data analysis.

Table 2. Ljung-Box test on ARIMA(1,1,0) model for return series of Union Bank of Nigeria.

Source: output of data analysis.

Table 3. ARIMA models for return series of Unity Bank.

Source: output of data analysis.

Table 4. Ljung-Box test on ARIMA(1,1,0) model for return series of Unity Bank.

Source: output of data analysis.

Table 5. ARIMA models for return series of Wema Bank.

Source: output of data analysis.

Table 6. Ljung-Box test on ARIMA(2,1,0) model for return series of Wema Bank.

Source: output of data analysis.

Figure 9. ACF of residuals of ARIMA(1,1,0) model fitted to return series of Union Bank of Nigeria.

Figure 10. PACF of residuals of ARIMA(1,1,0) model fitted to return series of Union Bank of Nigeria.

Figure 11. ACF of return series of Unity Bank.

Figure 12. PACF of return series of Unity Bank.

Figure 13. ACF of residuals of ARIMA(1,1,0) model fitted to return series of Unity Bank.

Figure 14. PACF of residuals of ARIMA(1,1,0) model fitted to return series of Unity Bank.

Figure 15. ACF of return series of Wema Bank.

Figure 16. PACF of return series of Wema Bank.

Figure 17. ACF of residual series of ARIMA(2,1,0) model fitted to return series of Wema Bank.

Figure 18. PACF of residual series of ARIMA(2,1,0) model fitted to return series of Wema Bank.

So far, the residual series of the selected models for the three banks considered have been analyzed and found to follow a white noise process, which satisfies the aim of our study. The study agrees with the works of [7] - [17] that model adequacy can be measured by white noise processes through the ACF, PACF and Ljung-Box test, but differs in that it considers the return series of Nigerian banks.

4. Conclusion

In summary, our study showed that model adequacy can be measured by the white noise process through the ACF, PACF and Ljung-Box test. The role of the white noise process in checking model adequacy was properly appraised, and it was confirmed that the residual series of an adequate model satisfies the white noise conditions of stationarity and independence. However, the failure to apply the overfitting approach to model adequacy is one weakness of this study, and it is recommended that further work be extended to cover overfitting.

References

[1] Shumway, R.H. and Stoffer, D.S. (2011) Time Series Analysis and Its Applications with R Examples. 3rd Edition, Springer, New York.

[2] Cryer, J.D. and Chan, K. (2008) Time Series Analysis with Application in R. 2nd Edition, Springer, New York, 249-260.

https://doi.org/10.1007/978-0-387-75959-3

[3] Box, G.E.P., Jenkins, G.M. and Reinsel, G.C. (2008) Time Series Analysis: Forecasting and Control. 3rd Edition, John Wiley & Sons, Hoboken.

[4] Tsay, R.S. (2010) Analysis of Financial Time Series. 3rd Edition, John Wiley & Sons, New York.

https://doi.org/10.1002/9780470644560

[5] Wei, W.W.S. (2006) Time Series Analysis: Univariate and Multivariate Methods. 2nd Edition, Addison-Wesley, New York.

[6] Pankratz, A. (1983) Forecasting with Univariate Box-Jenkins Models: Concepts and Cases. John Wiley & Sons, New York.

https://doi.org/10.1002/9780470316566

[7] McLeod, A.I. (1993) Parsimony, Model Adequacy and Periodic Correlation in Forecasting Time Series. International Statistical Review, 61, 387-393.

https://doi.org/10.2307/1403750

[8] Li, W.K. (2014) Diagnostic Checks in Time Series. In: Monographs on Statistics and Applied Probability, Volume 102, Chapman & Hall/CRC, New York.

[9] Alsharif, M.H., Younes, M.K. and Kim, J. (2019) Time Series ARIMA Model for Prediction of Daily and Monthly Average Global Solar Radiation: The Case Study of Seoul, South Korea. Symmetry, 11, 240.

https://doi.org/10.3390/sym11020240

[10] Iwundu, M.P. and Efezino, O.P. (2015) On Adequacy of Variable Selection Techniques on Model Building. Asian Journal of Mathematics & Statistics, 8, 19-34.

https://doi.org/10.3923/ajms.2015.19.34

[11] Sarkar, S.K. and Midi, H. (2010) Importance of Assessing the Model Adequacy of Binary Logistic Regression. Journal of Applied Sciences, 10, 479-486.

https://doi.org/10.3923/jas.2010.479.486

[12] Goldstein, M., Seheult, A. and Vernon, I. (2013) Assessing Model Adequacy. In: Environmental Modeling: Finding Simplicity in Complexity, John Wiley & Sons, Hoboken, NJ, 435-449.

https://doi.org/10.1002/9781118351475.ch26

[13] Kheifets, I. and Velasco, C. (2012) Model Adequacy Checks for Discrete Choice Dynamic Models. In: Chen, X. and Swanson, N., Eds., Recent Advances and Future Directions in Causality, Prediction, and Specification Analysis, Springer, New York, 363-382.

https://doi.org/10.1007/978-1-4614-1653-1_14

[14] Chand, S. and Karmal, S. (2014) Mixed Portmanteau Test for Diagnostic Checking of Time Series Models. Journal of Applied Mathematics, 2014, Article ID: 545413.

https://doi.org/10.1155/2014/545413

[15] Zapranis, A. and Refenes, A.P.N. (1999) Model Adequacy Testing. In: Principles of Neural Model Identification, Selection and Adequacy. In: Perspectives in Neural Computing, Springer, London.

https://doi.org/10.1007/978-1-4471-0559-6

[16] Godfrey, L.G. and Tremayne, A.R. (1998) Checks of Model Adequacy for Univariate Time Series Models and their Application to Econometric Relationships. Econometric Reviews, 7, 1-42.

https://doi.org/10.1080/07474938808800138

[17] Box, G.E.P. and Pierce, D. (1970) Distribution of Residual Autocorrelations in Autoregressive Integrated Moving Average Time Series Models. Journal of the American Statistical Association, 65, 1509-1526.

https://doi.org/10.1080/01621459.1970.10481180

[18] Akpan, E.A., Lasisi, K.E., Adamu, A. and Rann, H.B. (2019) Application of Iterative Approach in Modeling the Efficiency of ARIMA-GARCH Processes in the Presence of Outliers. Applied Mathematics, 10, 138-158.

https://doi.org/10.4236/am.2019.103012

[19] Akpan, E.A. and Moffat, I.U. (2017) Detection and Modeling of Asymmetric GARCH Effects in a Discrete-Time Series. International Journal of Statistics and Probability, 6, 111-119.

https://doi.org/10.5539/ijsp.v6n6p111

[20] Akpan, E.A. and Moffat, I.U. (2019) Modeling the Effects of Outliers on the Estimation of Linear Stochastic Time Series Model. International Journal of Analysis and Applications, 17, 530-547.

[21] Moffat, I.U. and Akpan, E.A. (2019) Selection of Heteroscedastic Models: A Time Series Forecasting Approach. Applied Mathematics, 10, 333-348.

https://doi.org/10.4236/am.2019.105024

[22] Ljung, G.M. and Box, G.E.P. (1978) On a Measure of Lack of Fit in Time Series Models. Biometrika, 65, 297-303.

https://doi.org/10.1093/biomet/65.2.297

[23] NSE Contact Center. The Nigerian Stock Exchange. Stock Exchange House.

https://www.contactcenter@nse.com