This study seeks to assess the cointegration and causal relationship between financial system development and economic growth from a Zimbabwean perspective for the period 2005-2013. Traditional theorists believed that the financial market in general had no correlation with economic growth. This proposition prompted studies into the effect of the financial market on growth, and ample studies have since debunked the traditionalist view and established an association between financial markets and economic growth.
The development of the financial system has been identified as a factor that played a critical role in industrializing most European countries. According to theory, this role is played by enabling the accumulation of financial resources in the banking system, which are later "loaned" to the private sector for investment purposes. The literature also acknowledges the funding role played by the banking sector as it finances ideas developed by the private sector. In recent years, theorists have looked more critically at the role played by the financial sector in economic development. Financial sector deposits have taken centre stage as a base on which economic growth can be premised. This development rests on the sustainability of contractual savings, as compared to Domar's savings, which incorporate spontaneous, volatile savings. While earlier theories suggest that once financial resources are in place economic investment naturally follows, recent theorists argue that these deposits must be efficiently allocated, through a well-developed financial market, into the various sectors of the economy.
While a developed financial system is regarded as a pre-requisite for economic growth, there has been an ongoing argument about the level and comparative contribution to economic growth of the various segments of the financial system, namely the banking sector and stock markets. The contribution of each segment has been found to depend, to a great extent, on its efficiency, which in turn depends on the level and nature of regulation of each segment, especially where the regulators differ. In most African countries, Zimbabwe included, the Ministry of Finance is the prime regulator of both.
Some studies find no relationship between financial system development and economic growth. They argue that the role of financial systems, whether banking or stock markets, is over-emphasised in both theory and empirical findings, and that pro-growth policies are enough to stimulate economic growth.
Another school of thought argues that it is economic growth which leads to financial system development, so the banking sector and stock markets do not contribute to the emergence of economic growth: banks exist to serve companies, not to create them, while stock markets likewise finance already existing companies. From general observation, however, banks and stock markets clearly spur growth even within existing companies, implying that they do contribute to economic growth.
On general inspection, stock markets appear more efficient than the banking sector, since they involve no directed channelling of resources, in contrast to the directed lending common in the banking sector. Stock markets are viewed as fair assessors of risk through market consensus, evidenced by the market's uptake or rejection of certain initial public offers (IPOs), while the credit analysis of the banking sector often lacks transparency.
The comparison between stock market and banking sector contributions to economic growth has attracted a lot of attention, largely because of the policy implications embedded in the comparison's outcome. Stock markets are linked to contractual savings and hence to long term savings, implying long term capital injection into the various markets of an economy and, in turn, economic growth. It is therefore common to associate any economic development "drive", its sustainability and magnitude, with the stock markets. Other findings also establish that capital market development is correlated with the activity and efficiency of contractual savings. In related studies, economic growth has been found to be positively correlated with stock market development in OECD member countries. In all these findings, stock market development is regarded as a crucial factor in the sustainable mobilisation and/or directing of financial resources to the productive sectors.
While a number of studies have analysed economic growth and financial sector development, most have focused on the causality between economic growth and financial development in general, and these find a relationship between economic growth, banking sector development and stock market development.
In light of the various economic reforms undertaken in Zimbabwe since the early 1990s, which have yielded no significant results, it is imperative to investigate the role played by the banking sector and stock markets so as to come up with appropriate policy prescriptions.
2. Overview of the Zimbabwean Financial System
The problems surrounding the financial system in Zimbabwe have been signalled since 1993, when the United Merchant Bank (UMB) collapsed and the government failed to save it, despite the fact that most of its exposures were linked to the government. From then on, the public lost confidence not only in the financial system but also in the government, whose policies were seen as being "doctored" to suit or exclude certain individuals or companies. Inflation started accelerating in 2003. This, together with the collapse of asset management firms in early 2004, led to a total loss of confidence in the financial services sector, finally leading to the instability of the whole sector. Confidence in the financial sector is still very low, although it improved slightly in 2009-2010 when the economy was under the Unity Government.
Despite its existence for over 140 years, both under the colonial government and through 35 years of independence, the financial sector has remained fragile. Regulation of the sector has remained fairly comparable with that of other countries in the region in terms of regulatory institutions, albeit under inconsistent policies.
Historically, the colonial era maintained a very "lean" banking sector structure, with only four banks in operation until 1990. The first Economic Structural Adjustment Programme of 1993, which left the economy worse off, forced the government to liberalise the financial system in 1997 as it sought a quick recovery through the banking sector. The hurried financial sector reforms resulted in undeserving banks being licensed, namely FNB, Intermarket Building Society, Bard Discount House and Genesis Bank, all of which collapsed within four years of operation.
While Zimbabwe generally led financial sector development within the SADC region both before and after independence, post-Independence financial development was not matched by economic growth. Economic decline in GDP terms between 1997 and 2003 saw annual growth worsen from −2.6% to −6.1%. This was surprisingly accompanied by increased bank lending, from 63% of total banking assets to 78% in 2003, reflecting increased consumptive lending within that period. The failure of this lending practice to generate deposits saw banks competing for a shrinking deposit base. This has been the trend since then, witnessed by the closure of fourteen stockbroking firms in 2014 citing lack of business as investors dwindled on the Zimbabwe Stock Exchange. The formerly collapsed banks, which were re-licenced after dollarization, had to close again within less than one year of operation, citing a small depositor base even though their lending portfolios had already grown. Retrenchments in the manufacturing and agricultural sectors since 2001 resulted in a 45% reduction in the pension-contributing workforce, which reduced the resources available to pension funds by 38% between 2001 and 2008.
While financial liberalisation was introduced to induce bank-led economic growth, most of the funds went to consumptive borrowing. The increase in the number of financial institutions has still failed to deliver growth under the multi-currency regime in the absence of policies to arrest consumptive borrowing. Real GDP declined by more than 50% between 2000 and 2008, while hyperinflation reached triple figures in 2007. There was, however, some recovery in GDP, particularly in 2009, under the reforms of the Inclusive Government (IG), which helped restore macroeconomic stability. In response to the Short-Term Emergency Recovery Programme (STERP), a 5.7% growth in GDP was realised in 2009. In 2010, GDP grew by about 8%, a strong performance compared with the decline of about 14% in 2008, pointing to a lack of policy coherence prior to the IG. Total bank deposits grew from USD276 million in January 2009 to USD1.35 billion by December 2009 and USD2.34 billion by December 2010, again indicating improved confidence in the policies of the government in place at that time.
Zimbabwe Stock Exchange
The Zimbabwe Stock Exchange was closed during the hyperinflationary period in November 2008 and resumed trading in February 2009 after the dollarization of the economy; for the first time, shares were denominated and traded in US dollars. From 2009 Zimbabwe's economy has been recovering from a severe hyperinflationary period. The introduction of the US dollar in February 2009 brought relative currency stability to the economy, which in turn resulted in increased investment and investor confidence.
The ZSE has two indices, namely the Industrial Index and the Mining Index. The Industrial Index is a stock index derived from the values of the industrial stocks on the ZSE; it consists of all listed companies except mining companies. It is the main index on the ZSE and is composed of 63 companies. The top nine companies on the Industrial Index by market capitalisation are Delta Corporation Limited, Econet Wireless Zimbabwe Limited, Innscor Africa Limited, British American Tobacco Zimbabwe Limited, OK Zimbabwe Limited, Hippo Valley Estates Limited, Seed Co Limited, National Foods Holdings Limited, and Old Mutual Plc (www.zimbabwe-stock-exchange.com). It covers companies from various sectors, including agriculture, engineering, banking and finance, insurance, property, retail, beverages, food, and pharmaceuticals and chemicals.
The stock market provides a low-cost way for companies to raise capital to finance their business. The capital is raised through equities, depository receipts and debentures. This leads to growth in the industry and commerce of the country, and thus to economic growth. The stock market also provides an opportunity for investors to invest their surplus funds and earn capital gains. Thus, the overall development of the economy is partly a function of how well the stock market performs, and empirical evidence has shown that development of the capital market is essential for economic growth.
The stock market is expected to lead to economic growth by directing funds from public investors to efficient companies, increasing the liquidity of financial assets, disseminating information to promote better investment decisions, and making company managers work harder in shareholders' interests, since shareholder wealth depends on the share price. It also leads to economic growth by providing a platform through which foreign investors can invest in the local economy. The stock market acts as an intermediary between borrowers and savers by mobilising funds from many small investors and channelling them to efficient companies.
Zimbabwe's economy experienced a decade of contraction from 1998 to 2008, with extreme hyperinflation from 2004 to 2008. The economy started recovering from 2009, after the formation of an inclusive government and the introduction of the multi-currency system with the US dollar as the predominant currency. Dollarization reversed inflation, permitting the banking system to stabilize and the economy to resume slow growth after 2009. In 2012, inflation averaged about 5.0%. However, dollarisation also had negative impacts, including high real interest rates due to a lack of capital. Zimbabwe's economy recorded real growth of more than 9% per year in 2010-11, before slowing to 5% in 2012, partly due to a poor harvest and low diamond revenues. The economy continues to experience structural challenges emanating from the limited sources and high cost of capital; uncertainties arising from policy inconsistencies, especially with respect to economic empowerment and indigenisation regulations; dilapidated infrastructure; and obsolete technologies.
3.1. Unit Root Testing
A time series is considered stationary if it is mean-reverting, that is, if the series repeatedly returns to its mean and does not tend to drift. Thus, if the mean and variance of the series are constant over time, while the covariance between two periods depends only on the gap between them and not on the actual time at which it is computed, then the series is stationary. If one or more of these conditions is not fulfilled, the series is non-stationary.
One of the most important data characteristics that must be determined before applying econometric methods is the order of integration. If the data do not have the correct order of integration, spurious regressions or wrong test statistics result, which can render the analysis useless. For cointegration analysis to be valid, all series must be integrated of the same order, usually order one. When a time series is not stationary, it can be converted into a stationary series by differencing, among other methods. A difference-stationary series is said to be integrated and is denoted I(d), where d is the order of integration: the number of unit roots contained in the series, or the number of differencing operations it takes to make the series stationary.
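This behaviour can be illustrated with a short simulation. The sketch below is purely illustrative (it uses generated data, not the study's series): a simulated random walk is I(1), non-stationary in levels but stationary after a single differencing operation, which recovers the underlying white noise.

```python
# Illustrative only: a random walk is I(1) -- differencing once yields an
# I(0), mean-reverting series (here, exactly the original white-noise shocks).
import numpy as np

rng = np.random.default_rng(42)
shocks = rng.normal(size=500)
y = np.cumsum(shocks)     # random walk: y_t = y_{t-1} + e_t, an I(1) series
dy = np.diff(y)           # first difference: the white-noise shocks, I(0)

# The differenced series reverts to a mean near zero, while the level
# series wanders without returning to any fixed mean.
print(round(float(dy.mean()), 3))
```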
A variety of methods have been proposed for implementing stationarity tests, and each has been widely used in the econometrics literature. In this study, the two most widely used tests, the Augmented Dickey-Fuller (ADF) test and the Phillips-Perron (PP) test, are employed to test for stationarity and determine the order of integration.
3.2. Augmented Dickey-Fuller (ADF) Test
The Augmented Dickey-Fuller (ADF) test is an extension of the Dickey-Fuller test. The ADF test entails regressing the first difference of a variable y on its lagged level, exogenous variable(s) and k lagged first differences. The following ADF test equation, which includes both a drift and a linear time trend, checks the stationarity of time series data:

$$\Delta y_t = \alpha + \beta T + \gamma y_{t-1} + \sum_{i=1}^{k} \delta_i \Delta y_{t-i} + \varepsilon_t$$

where $y_t$ is the variable in period t, T denotes a time trend, $\alpha$, $\beta$ and $\delta_i$ are constants, $\Delta$ is the first difference operator, $\varepsilon_t$ is an error term with mean zero and variance $\sigma^2$, and k represents the number of lagged differences in the ADF equation. This augmented specification is then used to test the hypothesis below. The null hypothesis ($H_0$) is that the variable is not stationary, while the alternative hypothesis ($H_1$) is that the variable is stationary:

$$H_0: \gamma = 0 \qquad H_1: \gamma < 0$$

which is evaluated using the conventional t ratio for $\gamma$:

$$t_\gamma = \frac{\hat{\gamma}}{se(\hat{\gamma})}$$

where $\hat{\gamma}$ is the estimate of $\gamma$ and $se(\hat{\gamma})$ is the coefficient standard error. The ADF test is restricted by its number of lags: an increased number of lags necessitates the estimation of additional parameters and a loss of degrees of freedom, which decreases the power of the test to reject the null of a unit root. The number of lags is determined as the minimum that leaves the residuals free from autocorrelation; in this study it is determined by the Schwarz information criterion (SIC). The test for a unit root is conducted on the coefficient of $y_{t-1}$ in the regression. If the coefficient is significantly less than zero, the hypothesis that y contains a unit root is rejected. Rejection of the null hypothesis denotes stationarity in the series.
3.3. The Phillips-Perron (PP) Test
Phillips and Perron proposed an alternative nonparametric unit root test that controls for serial correlation in the error terms. The Phillips-Perron (PP) test estimates the non-augmented Dickey-Fuller test equation and modifies the t-ratio so that serial correlation does not affect the asymptotic distribution of the test statistic. The PP test equation is:

$$\Delta y_t = \alpha + \beta T + \gamma y_{t-1} + \varepsilon_t$$

where the variables and parameters are as defined in the ADF test. The hypothesis is the same as in the ADF test, but it is evaluated using the modified statistic:

$$\tilde{t}_\gamma = t_\gamma \left(\frac{\gamma_0}{f_0}\right)^{1/2} - \frac{T\,(f_0 - \gamma_0)\, se(\hat{\gamma})}{2 f_0^{1/2}\, s}$$

where $\hat{\gamma}$ is the estimate of $\gamma$ and $t_\gamma$ its t-ratio, $se(\hat{\gamma})$ is the coefficient standard error, s is the standard error of the test regression, $\gamma_0$ is a consistent estimator of the error variance and $f_0$ is an estimator of the residual spectrum at frequency zero.
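The starting point of the PP test, the raw t-ratio from the non-augmented Dickey-Fuller regression, can be computed directly. The sketch below uses simulated data and deliberately omits the PP serial-correlation correction involving $\gamma_0$ and $f_0$; it only shows where the uncorrected $t_\gamma$ comes from.

```python
# Illustrative only: the unadjusted t-ratio on the lagged level from the
# non-augmented DF regression (drift + trend). The PP correction is omitted.
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=300))   # simulated I(1) series

dy = y[1:] - y[:-1]
T = len(dy)
# Regressors: drift, linear trend, lagged level y_{t-1}
X = np.column_stack([np.ones(T), np.arange(1, T + 1), y[:-1]])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
resid = dy - X @ beta
s2 = (resid @ resid) / (T - X.shape[1])       # regression error variance
cov = s2 * np.linalg.inv(X.T @ X)
t_gamma = beta[2] / np.sqrt(cov[2, 2])        # conventional t-ratio on gamma
print(round(float(t_gamma), 2))
```

Note that this raw $t_\gamma$ must be compared against Dickey-Fuller (not standard normal) critical values, and the full PP statistic further rescales it for serial correlation.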
3.4. Cointegration

Various econometric time series, such as exports and GDP or consumption and income, share theoretical long run relationships. It is also known that such series evolve over time in ways that leave their mean and variance non-constant. Non-stationary time series data may lead macroeconomists to wrongly conclude that two variables are related when in reality they are not, a phenomenon well known as spurious regression. The typical way to analyse a non-stationary process is either to detrend or to difference the data, depending on the type of trend. While these methods may provide stationary variables for the regression, they can cause a loss of significant long run information and omitted-variables bias. Cointegration is an effective way to analyse non-stationary time series without losing significant long run information. In general, a set of variables is cointegrated if a linear combination of the integrated series is stationary. More specifically, if the variables under consideration are found to be I(1) (i.e. non-stationary in levels but stationary in first differences), but a linear combination of the integrated variables is I(0) (i.e. stationary), then the variables are said to be cointegrated. If the variables are cointegrated, they will not drift apart over time, and the long run combination among the non-stationary variables can be established. This linear combination is called the cointegrating equation and reflects a long run equilibrium relationship among the variables.
The two main cointegration techniques used in the literature are the Engle-Granger cointegration test and the Johansen cointegration test. The former is suitable for bivariate analysis, while the latter is more convenient when there are more than two variables. This study uses the Johansen cointegration test to test for cointegration between the Zimbabwean variables. The optimality of the Johansen cointegration technique has been shown in terms of symmetry, unbiasedness and efficiency properties, and Monte Carlo evidence supports its superiority relative to other cointegration tests. For small samples and multivariate settings (i.e. anything more than two variables) the Johansen method is better, but for bivariate testing of typical runs of financial price data the Engle-Granger method has certain advantages: for example, by using a criterion of minimum variance (as opposed to the Johansen criterion of maximum stationarity), it lends itself far more readily to risk and portfolio management applications.
3.5. Johansen (1991) Cointegration Test
The Johansen method of cointegration can be written as the following vector autoregressive (VAR) framework of order p:

$$y_t = \mu + A_1 y_{t-1} + \cdots + A_p y_{t-p} + \varepsilon_t$$

where $y_t$ is an $n \times 1$ vector of non-stationary I(1) variables, $\mu$ is an $n \times 1$ vector of constants, p is the maximum lag length, $A_i$ is an $n \times n$ matrix of coefficients, and $\varepsilon_t$ is an $n \times 1$ vector of white noise terms. To use the Johansen method, the equation above is rewritten as a vector error correction model (VECM):

$$\Delta y_t = \mu + \Pi y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \varepsilon_t$$

where $\Delta$ is the first difference operator,

$$\Pi = \sum_{i=1}^{p} A_i - I, \qquad \Gamma_i = -\sum_{j=i+1}^{p} A_j,$$

and I is an identity matrix.
To test for cointegration between the variables, the rank of the matrix $\Pi$ is observed via its eigenvalues. The rank of the matrix is equal to the number of its characteristic roots that are different from zero. The hypothesis is $\Pi = \alpha \beta'$, where $\alpha$ and $\beta$ are $n \times r$ matrices. The matrix $\beta$ gives the cointegrating vectors, while $\alpha$ is known as the adjustment (loading) matrix, giving the amount of each cointegrating relation entering each equation of the VECM. The aim is to test the number r of cointegrating vectors, i.e. $rank(\Pi) = r$. The Johansen approach has two likelihood ratio statistics for examining the rank of $\Pi$: the trace and maximum eigenvalue tests, given by the following formulas:

$$\lambda_{trace}(r) = -T \sum_{i=r+1}^{n} \ln(1 - \hat{\lambda}_i)$$

$$\lambda_{max}(r, r+1) = -T \ln(1 - \hat{\lambda}_{r+1})$$

where T is the sample size and $\hat{\lambda}_i$ are the estimated eigenvalues (characteristic roots) of the matrix $\Pi$. For the trace test, the null hypothesis is that the number of cointegrating vectors is less than or equal to r, against the alternative that there are more than r. For the maximum eigenvalue test, the null hypothesis is that the number of cointegrating vectors is less than or equal to r, against the alternative of r + 1. For both tests, if the test statistic exceeds the critical value, the null hypothesis is rejected. Testing is conducted as a sequence under the null until the null is no longer rejected: when r = 0 failing to reject completes the test; otherwise the test continues until the null is no longer rejected.
There are several advantages to employing a Vector Error Correction Model (VECM) for the short run analysis of the variables in our study, such as market capitalisation and the interest rate. Among them is that the VECM makes it possible to apply the Vector Autoregressive (VAR) framework to integrated multivariate time series and thereby avoid spurious regression.
3.6. VECM Causality Test
A causality test is a statistical hypothesis test used to determine whether one time series is significant in forecasting another; it aims to determine whether past values of a variable help to predict changes in another variable. The most widely used test is the Granger causality test. However, the standard Granger test is misspecified and may indicate spurious causality among the variables if they are cointegrated. That is, if cointegration exists among the variables, the plain Granger test is not valid.
The short run causal relationships between the variables should therefore be examined in a VECM framework. With X and Y integrated of order 1, the Vector Error Correction Model (VECM) can be represented as

$$\Delta Y_t = a_1 + \sum_{i=1}^{m} b_i \Delta Y_{t-i} + \sum_{j=1}^{m} c_j \Delta X_{t-j} + \lambda_1 ECT^{1}_{t-1} + \varepsilon_{1t}$$

$$\Delta X_t = a_2 + \sum_{i=1}^{m} d_i \Delta X_{t-i} + \sum_{j=1}^{m} e_j \Delta Y_{t-j} + \lambda_2 ECT^{2}_{t-1} + \varepsilon_{2t}$$

where $ECT^{1}_{t-1}$ and $ECT^{2}_{t-1}$ are the error correction terms obtained from the long run model lagged once, which can be interpreted as the deviations of Y and X from their long run equilibrium values, respectively. Including the error correction terms captures the short-run dynamics necessary to reach the long run equilibrium and opens a channel to detect Granger causality. The $\lambda$ coefficients capture the long run causal relationships among the variables in the system. When the $\lambda$s are not statistically significant, the system of equations suggests that the variables are independent in the context of prediction. When $\lambda_2$ is statistically significant while $\lambda_1$ is not, the system suggests unidirectional causality from Y to X, meaning that Y drives X toward long run equilibrium but not the other way around. When $\lambda_1$ is statistically significant while $\lambda_2$ is not, the system suggests unidirectional causality from X to Y, meaning that X drives Y toward long run equilibrium but not the other way around. When both $\lambda_1$ and $\lambda_2$ are significant, this suggests a feedback, or bidirectional, Granger causality relationship in the system, which translates into joint causality. The coefficients $c_j$ measure the short run impact of changes in X on Y, the coefficients $e_j$ measure the short run impact of changes in Y on X, and $\varepsilon_{1t}$ and $\varepsilon_{2t}$ are the error terms.
3.7. Impulse Response Function (IRF)
A VAR's impulse response function examines how the dependent variables react to shocks from each variable in the system. A shock to one variable not only directly affects that variable but is also transmitted to all of the other endogenous variables through the dynamic structure of the VAR. An impulse response function traces the effect of a one-time shock to one of the innovations on current and future values of the endogenous variables. The impulse response function is thus a useful tool for determining the magnitude, direction, and duration of the effect that a shock to one variable has on the other variables in the system. To estimate the impulse response functions, the VAR model is transformed into its Vector Moving Average (VMA) representation; this transformation is essential since it allows tracing out the effects of various shocks on the variables contained in the VAR system. The VMA representation can be written as

$$y_t = \mu + \sum_{i=0}^{\infty} \Phi_i \varepsilon_{t-i}$$

where the elements of the matrices $\Phi_i$ are the impulse response functions of the disturbances. The impulse response function is found by reading off the coefficients in the moving average representation of the process. For each variable, a unit shock is applied to the error term and its effects upon the system are noted. If the innovations are contemporaneously uncorrelated, the interpretation of the impulse responses is straightforward: the innovation $\varepsilon_{it}$ is simply a shock to the i-th endogenous variable in the system. Innovations, however, are usually correlated and may be viewed as having a common component which cannot be associated with a specific variable.
3.8. Forecast Error Variance Decomposition (FEVD)
Forecast error variance decompositions trace out the proportion of movements in the dependent variables that are due to their own shocks versus shocks to the other variables, indicating the relative importance of each structural shock to the variables in the system. The FEVD separates the variation in an endogenous variable into the component shocks of the VAR, and thus provides information about the relative importance of each random innovation in affecting the endogenous variables at different horizons. Following Enders (2004), and using the VMA representation above, the n-period forecast error can be written as

$$y_{t+n} - E_t\, y_{t+n} = \sum_{i=0}^{n-1} \Phi_i\, \varepsilon_{t+n-i}$$

For a two-variable system (Y, X), the variances of the n-step-ahead forecast errors can be calculated as

$$\sigma_Y(n)^2 = \sigma_{\varepsilon_1}^2 \sum_{i=0}^{n-1} \phi_{11}(i)^2 + \sigma_{\varepsilon_2}^2 \sum_{i=0}^{n-1} \phi_{12}(i)^2$$

$$\sigma_X(n)^2 = \sigma_{\varepsilon_1}^2 \sum_{i=0}^{n-1} \phi_{21}(i)^2 + \sigma_{\varepsilon_2}^2 \sum_{i=0}^{n-1} \phi_{22}(i)^2$$

where $\sigma_Y(n)^2$ and $\sigma_X(n)^2$ represent the n-step-ahead forecast error variances of Y and X respectively, and $\phi_{jk}(i)$ denotes the relevant element of $\Phi_i$. The first term in each equation gives the proportion of variance due to the variable's own shocks, while the second term gives the proportion due to shocks to the other variable. It is typical for a variable to explain almost all of its forecast error variance at a short horizon and smaller proportions at longer horizons (Enders, 2010).
3.9. Tests for Checking the Appropriateness of the Models
There are certain assumptions for the VAR and VECM models which include absence of heteroskedasticity and autocorrelation in the residuals and normality of the residuals. These are tested for by the methods outlined below.
3.9.1. White Heteroskedasticity Test
White's test is a test of the null hypothesis of no heteroskedasticity against heteroskedasticity of unknown, general form. The test statistic is computed by an auxiliary regression, in which the squared residuals are regressed on all possible (non-redundant) cross products of the regressors. The White test statistic is asymptotically distributed as a chi-square with degrees of freedom equal to the number of slope coefficients (excluding the constant) in the test regression. Suppose the following regression is estimated:

$$y_t = b_0 + b_1 x_{1t} + b_2 x_{2t} + e_t$$

where the $b_i$ are the estimated parameters and e the residual. The test statistic is then based on the auxiliary regression:

$$e_t^2 = c_0 + c_1 x_{1t} + c_2 x_{2t} + c_3 x_{1t}^2 + c_4 x_{2t}^2 + c_5 x_{1t} x_{2t} + v_t$$
3.9.2. Serial Correlation Lagrange Multiplier (LM) Test
The null hypothesis of the LM test is that there is no serial correlation up to lag order p, where p is a pre-specified integer. The local alternative is ARMA(r, q) errors, with the number of lag terms p = max(r, q). This alternative includes both AR(p) and MA(p) error processes, so the test has power against a variety of autocorrelation structures. The test statistic is computed by an auxiliary regression as follows. Suppose the estimated regression is

$$y_t = X_t'\beta + u_t$$

where $\hat{\beta}$ are the estimated coefficients and $u_t$ the errors. The test statistic for lag order p is based on the auxiliary regression for the residuals $\hat{u}_t$:

$$\hat{u}_t = X_t'\gamma + \sum_{s=1}^{p} \rho_s \hat{u}_{t-s} + v_t$$

The test statistic is the Breusch-Godfrey LM statistic, computed as the number of observations times the (uncentered) $R^2$ from the test regression. Under quite general conditions, the LM test statistic is asymptotically distributed as $\chi^2(p)$.
3.9.3. Multivariate Jarque-Bera Residual Normality Test
This reports the multivariate extension of the Jarque-Bera residual normality test, which compares the third and fourth moments of the residuals to those of the normal distribution. The multivariate test uses a factorization of the k residuals that makes them orthogonal to each other. Let P be a factorization matrix such that

$$v_t = P u_t \sim N(0, I_k)$$

where $u_t$ is the demeaned residual vector. Define the third and fourth moment vectors $m_3 = \sum_t v_t^3 / T$ and $m_4 = \sum_t v_t^4 / T$. Then

$$\sqrt{T} \begin{pmatrix} m_3 \\ m_4 - 3 \end{pmatrix} \rightarrow N\!\left(0, \begin{pmatrix} 6 I_k & 0 \\ 0 & 24 I_k \end{pmatrix}\right)$$

under the null hypothesis of a normal distribution. Since each component is independent of the others, a $\chi^2$ statistic can be formed by summing the squares of any of these third and fourth moments.
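The univariate building block of this test is available as `jarque_bera` in statsmodels (the multivariate version applied to VAR residuals sums the orthogonalized components as described above). The sketch below contrasts a normal and a clearly non-normal simulated sample.

```python
# A hedged sketch of the (univariate) Jarque-Bera normality test.
import numpy as np
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(17)
normal_sample = rng.normal(size=1000)
skewed_sample = rng.exponential(size=1000)   # strongly right-skewed

jb_n, p_n, skew_n, kurt_n = jarque_bera(normal_sample)
jb_s, p_s, skew_s, kurt_s = jarque_bera(skewed_sample)
# The skewed sample should produce a large JB statistic and tiny p-value,
# rejecting normality; the normal sample typically does not.
print(p_n, p_s)
```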
4. Results and Discussion
4.1. Time Plots
Figure 1 shows the behaviour of the interest rate, market capitalization and GDP for the period January 2005 to December 2013; the trends of the variables over the period can be observed from the plots. GDP was fairly stable between 2005 and mid-2007 and then registered a strong downward trend between end-2007 and mid-2009. The plots also show that the interest rate was on a strong upward trend during the last half of 2009 and then registered a strong downward trend until it stabilised in 2011. The market capitalization used in this study is in millions of US dollars; it registered a steady upward trend over the entire period. The sharp increase in interest rates in 2008 was a result of the galloping hyperinflationary situation, which saw interest rates reach the 700% mark.
Figure 1. Time series plot of GDP, Interest rate and Market capitalisation.
4.2. Descriptive Statistics
Table 1 shows the summary statistics of the data. The coefficient of variation shows that interest rates and market capitalisation have high variability, while GDP has low variability. The star (*) sign in all tables in this study denotes significance at the 5% probability level.
The Jarque-Bera statistic tests whether a series is normally distributed, measuring the difference of the skewness and kurtosis of the series from those of the normal distribution. The p-values associated with the Jarque-Bera statistics for market capitalisation and interest rates are significant, showing that those two series are not normal. Skewness is a measure of asymmetry of the distribution of a series around its mean; the significant positive skewness of market capitalization and interest rates shows that those variables have long right tails. The p-value associated with the Jarque-Bera statistic for GDP is not significant, showing that GDP exhibits normality.
4.3. Unit Root Results
The star (*) sign in Table 2 and Table 3 denotes significance at the 5% probability level. The null hypothesis ($H_0$) is that the variable is not stationary, while the alternative hypothesis ($H_1$) is that the variable is stationary. The tests were performed using a model with intercept and trend. If the p-value is less than 0.05, the null hypothesis is rejected and the variable is stationary.
Considering the p-values in Table 2 and Table 3, both tests fail to reject the null hypothesis, implying that none of the variables is stationary in levels. This is common for economic time series. Both tests reject the null hypothesis for all the variables in their first differences, implying that all the
Table 1. Descriptive statistics.
Table 2. ADF results.
Table 3. Phillips-Perron results.
variables become stationary after first differencing. This implies that all the variables are integrated of order 1, i.e. they are I(1).
4.4. VAR Model
In this study, the optimum number of lags in the VAR was determined by the Schwarz information criterion, which suggested an optimum of one lag. Using this lag length produced no autocorrelation between the residuals of the VAR(1) model for up to 12 months, as shown in Table 4. The serial correlation tests on the residuals were done with the autocorrelation Lagrange multiplier (LM) test. The null hypothesis is that there is no autocorrelation; if the p-value is less than 0.05, the null hypothesis is rejected and autocorrelation is present between the residuals. The probabilities of the LM test are from a chi-square distribution with 36 degrees of freedom. Since the p-values are insignificant, we fail to reject the null hypothesis and conclude that there is no serial correlation in the VAR model. This indicates that the VAR(1) model is appropriate.
The other results revealed that the estimated residuals of the VAR(1) model behave like white noise. This supports the appropriateness of the VAR(1) model for determining the long run relationship between GDP, interest rate and market capitalisation.
The residuals of the VAR are tested for normality using the Cholesky (Lütkepohl) orthogonalization. The Jarque-Bera test statistic is used to determine whether the residuals are normally distributed; it measures the departure of the skewness and kurtosis of the residuals from those of the normal distribution. The null hypothesis is that the residuals are multivariate normal. If the p-value is less than 0.05, the null hypothesis is rejected and the conclusion is that the residuals are not normal. The results are shown in Table 5. As the p-value is not significant, we conclude that the residuals are normally distributed. Thus the VAR model is appropriate.
The VAR residuals should not have heteroskedasticity, i.e. their variance
Table 4. VAR residual serial correlation LM test.
Table 5. VAR residual normality tests.
should be constant. The study uses the White heteroskedasticity test. The null hypothesis is that there is no heteroskedasticity, while the alternative is that heteroskedasticity is present; if the p-value is less than 0.05, the null hypothesis is rejected and the conclusion is that heteroskedasticity is present. Table 6 shows that the p-value is not significant, thus the null hypothesis is not rejected and the conclusion is that no heteroskedasticity is present, so the VAR is appropriate.
4.5. Johansen Cointegration Results
Since the variables are all I(1), the Johansen cointegration test can be applied to the VAR(1) to find the long run relationship between the variables. The test identifies the number of cointegrating vectors and the corresponding cointegrating equations. It has been carried out assuming a linear trend with an intercept in the cointegration equation, because the economic data used in this study are assumed to have trends. The lag length used is 1, as determined by the Schwarz information criterion. The Johansen approach has two likelihood ratio statistics, the trace and maximum eigenvalue tests, which are alternative tests. The null hypothesis is that the number of cointegrating vectors is equal to r, while the alternative hypothesis is that it is greater than r. The tests are conducted at the 5% significance level. If the p-value is less than 0.05, the null hypothesis is rejected and the iteration proceeds to test the next hypothesis, that the number of cointegrating vectors is equal to r + 1.
The star (*) sign in the tables above denotes significance at the 5% probability level. The p-values are from  . Both the trace test (Table 7) and the maximum eigenvalue test (Table 8) reject the null hypothesis of no cointegrating equations, as the associated p-values are significant. Both tests fail to reject the null hypothesis of at most one cointegrating equation, so we conclude that there is one cointegrating equation. This implies that GDP, market capitalisation and interest rate are cointegrated, i.e. that a long run relationship exists among these variables. This is consistent with many studies, including those of  in India,  in Jordan,  in Malaysia,  in Sweden and  in Lithuania.
As the results show one cointegrating vector, the study normalises the
Table 6. VAR white heteroskedasticity test.
Table 7. Unrestricted cointegration rank test (trace).
Table 8. Unrestricted cointegration rank test (maximum eigenvalue).
cointegrating vector on GDP. This produces the following equation:
a) There is a significant negative long run relationship between GDP and the interest rate. This result is in line with economic theory. The study includes data captured during the hyperinflationary period, when excessive inflation was driving the prices of everything upwards, including stock prices; during the dollarized part of the study period, 2009-2013, inflation was under control and in single digits. This negative relationship is consistent with the results of  in India,  in Japan and  in Ghana, among others. It supports the proxy effect of  , which explains that higher interest rates raise the borrowing cost, which adversely affects profitability and the level of real economic activity; since GDP is positively associated with real activity, an increase in the interest rate tends to reduce GDP.
b) There is a positive significant long run relationship between GDP and market
Table 9. The standard errors and t-statistics associated with the coefficients.
capitalisation. This was expected, because GDP is being used as a proxy for real output. In times of high economic growth, companies are able to increase production and sales, and hence turnover. Economies of scale and higher turnover may lead to higher profitability, hence higher expected cash flows and dividends, higher stock prices and ultimately a higher market capitalisation. This result is consistent with the studies of  in India,  in the US and  in Jordan, among others.
4.6. VECM Short Run Causality Results
 suggest that if cointegration exists between the variables in the long run, then there must be either a unidirectional or a bidirectional relationship between the variables. Since GDP, interest rate and market capitalisation are cointegrated, the short run causal relationships are examined in a VECM framework as developed by  . The optimal lag for the VECM is determined by the Akaike information criterion, which suggests an optimal lag of 4. The results are shown in Table 10.
In Table 10, X → Y means X causes Y, denoting causality from X to Y; the null hypothesis is that there is no causality from X to Y. The star (*) sign in Table 10 denotes significance at the 5% probability level. If the p-value is less than 0.05, the null hypothesis is rejected and there is causality from X to Y. In this context, “cause” means that past values of a variable are significant in forecasting the values of the other variable. Considering the respective p-values, the
Table 10. VECM causality tests.
results show that there is unidirectional causality from the interest rate to GDP, that the interest rate does not cause market capitalization, and that market capitalization does not cause the interest rate. This means that the interest rate can predict GDP in the short run, but the reverse is not true. There is bidirectional causality between GDP and market capitalization.
4.7. VECM Diagnostic Tests
To validate the results of the VECM causality tests, diagnostic tests are performed on the VECM; for statistical accuracy and efficiency, certain conditions should be fulfilled. There should be no serial correlation between the residuals. The serial correlation tests on the residuals were done with the autocorrelation Lagrange multiplier (LM) test, performed up to lag 12. The null hypothesis is that there is no autocorrelation; if the p-value is less than 0.05, the null hypothesis is rejected and autocorrelation is present between the residuals. The probabilities of the LM test are from a chi-square distribution with 36 degrees of freedom. The results are shown in Table 11. Since the p-values are insignificant, we fail to reject the null hypothesis and conclude that there is no serial correlation in the VECM residuals. Thus the VECM is appropriate.
The residuals of the VECM should be multivariate normal. They are tested for normality using the Cholesky (Lütkepohl) orthogonalization. The Jarque-Bera test statistic is used to determine whether the residuals are normally distributed; it measures the departure of the skewness and kurtosis of the residuals from those of the normal distribution. The null hypothesis is that the residuals are multivariate normal. If the p-value is less than 0.05, the null hypothesis is rejected and the conclusion is that the residuals are not normal. The results are shown in Table 12. As the p-value is not significant, we fail to reject the null hypothesis and conclude that the residuals are normally distributed. Thus the VECM is appropriate.
The VECM should be stable for the results to be valid. The inverse roots of the characteristic autoregressive polynomial are examined: the estimated VECM is stable if no root lies outside the unit circle. Since no root lies outside the unit circle, the VECM is stable and thus appropriate.
The VECM residuals should not exhibit heteroskedasticity; that is, their variance should be constant. The study uses the White heteroskedasticity test. The null hypothesis is that there is no heteroskedasticity. If the p-value is less than 0.05, the null hypothesis is rejected and the conclusion is that heteroskedasticity is present. Table 13 shows the results. The p-value is not significant, thus the null hypothesis is not rejected and the conclusion is that no heteroskedasticity is present, so the VECM is appropriate.
4.8. Impulse Response Function Analysis
Impulse response functions are used to determine how GDP responds to shocks in the other economic variables. They track the response of GDP over a period of time after the shock. They are computed from the VAR(1) system, which was shown above to be appropriate. The response they show includes the magnitude of the effect on GDP, the direction of the effect, i.e. whether it is positive or negative, and the length of time that GDP is affected by the shock, while holding all other factors constant. The impulse response analysis is carried out with a Cholesky ordering of GDP, interest rate and market capitalization, and the responses are tracked for up to 12 months. The results are shown in Figure 2.
The blue lines in Figure 2 represent the responses to shocks, while the red lines mark the 95% confidence interval. A shock to GDP does not cause a significant change in GDP. The only significant reaction to a shock is that of the interest rate to its own shock: a shock to the interest rate has a significant impact on the interest rate in the long run. The results are also in line with the VECM causality tests, which showed that market capitalization causes GDP and GDP also causes market capitalisation.
Interest rate shocks cause GDP to decrease for 5 months, after which the effect settles at a permanent level. This contradicts the VECM causality results, which
Table 11. VECM residual serial correlation LM test.
Table 12. VECM residual normality tests.
Table 13. VECM white heteroskedasticity test.
Figure 2. Impulse response of dependent to variable shocks.
showed that interest rates do not cause GDP in the short run. This, however, is in line with the results of the Johansen cointegration test, which showed a negative long run relationship between GDP and interest rates.
4.9. Forecast Error Variance Decomposition Results
Forecast error variance decompositions trace out the proportion of the movements in the dependent variables that is due to their own shocks versus shocks to the other variables  . They separate the variation in an endogenous variable into the component shocks to the VAR, showing the relative importance of each independent variable in explaining the variation observed in the dependent variable. Thus, a variance decomposition indicates how much of the forecast-error variance of the dependent variable is attributable to each shock at different horizons. The variance decompositions are traced over a period of 12 months.
From the results in Table 14, we observe that about 75% of the variation in GDP after the first quarter is due to its own shocks, while the interest rate explains about 10% of the variation and market capitalisation about 3%. At the end of the fourth quarter, about 60% of the variation in GDP is due to its own shocks, while the remaining 40% is explained by shocks to the other variables.
Among the explanatory variables, after 12 months GDP accounts for the greatest variation, followed by market capitalization. As explained before, the effect of market capitalisation on GDP is likely to operate through its effect on investment. Banking sector development, as proxied by the interest rate, accounts for a very low share of the variation in GDP. Overall, the interest rate and market capitalisation explain 40% of the variation in GDP, which is quite a significant percentage, supporting the Johansen cointegration results, which showed a long run relationship between GDP and the macroeconomic variables.
Table 14. Variance decompositions of GDP.
5. Conclusions and Policy Implications
The purpose of this study was to investigate the short run and long run relationships between GDP, stock market development and banking sector development using quarterly data from January 2005 to December 2013. The study also sought to determine which of the two has the more significant impact on GDP. Statistical and econometric techniques were used to examine the short run and long run relationships, including the Johansen cointegration test, VECM causality tests, impulse response functions and variance decompositions.
Results of the long run analysis obtained from the Johansen cointegration test showed that GDP, banking sector development (interest rate) and stock market development (market capitalisation) are cointegrated, implying that they share a long run relationship. The resulting cointegration equation showed the nature of this relationship: there is a significant negative long run relationship between GDP and banking sector development, and a significant positive long run relationship between GDP and stock market development.
Further, the study shows that there is a significant negative long run relationship between Zimbabwe stock prices and money supply. This is a surprising result, as higher money supply would be expected to relax the liquidity constraints companies face and so increase their profitability and stock prices. It may be explained by noting that most companies listed on the ZSE are large, which makes it easier for them to raise capital, and they also have access to foreign borrowing, so the liquidity crisis may not adversely affect them. The negative relationship is explained by noting that an increase in money supply leads to inflation, which has been shown to be negatively associated with stock prices.
Furthermore, the study depicted a significant negative long run relationship between interest rates and market capitalisation. This result was expected since the interest rates used were lending rates: high interest rates lead to a high cost of borrowing and hence a reduction in economic activity. They also affect corporate profits, as a higher cost of capital reduces profits and hence the future cash flows and dividends of businesses, causing a reduction in stock prices. Higher interest rates also directly increase the discount rate, reducing the present value of future dividends and hence stock prices.
The results of the VECM short run analysis showed that past values of the interest rate and market capitalization can be used to predict GDP in the short run. The impulse response functions showed that shocks to the interest rate and market capitalisation have a significant permanent effect on GDP. The variance decompositions showed that a significant percentage of the variation in GDP is explained by the stock market and banking sector variables.
Government Policy Implications
Though dollarization brought inflation down to single digits, policy makers should continue to prioritise keeping interest rates under control, because the interest rate is currently the most important factor adversely affecting GDP. Policy makers should also encourage stock market expansion, as it is a cheap source of funds for development and investment.
Monetary policy should be designed in a way that keeps lending rates low, because high lending rates have a significant negative impact on the profitability of companies by increasing the cost of capital, which has an adverse effect on stock prices. Policy makers should also design policies that increase industrial production, and thus real output in the economy, as this leads to higher stock prices in the long run. When designing policies to stabilize the stock market, policy makers should take into consideration the performance of the banking sector, as it has been shown to have a significant impact on stock prices in both the short and the long run.
 Frimpong, J.M. (2009) Economic Forces and the Stock Market in a Developing Economy: Cointegration Evidence from Ghana. European Journal of Economics, Finance and Administrative Sciences, 16.
 Demetriades, P. and Khaled, H. (1996) Does Financial Development Cause Economic Growth? Time Series Evidence from 16 Countries. Journal of Development Economics, 51, 387-411.
 Bencivenga, V.R. and Smith, B.D. (1993) Some Consequences of Credit Rationing in an Endogenous Growth Model. Journal of Economic Dynamics & Control, 17, 97-122.
 Hu, Y.W. (2007) Collective Pension Funds: International Evidence and Implications for China's Enterprise Annuities Reform. OECD Working Papers on Insurance and Private Pensions, No. 9, OECD Publishing.
 Makina, D. (2010) Historical Perspective on Zimbabwe’s Economic Performance: A Tale of the Lost Five Decades. Journal of Developing Societies, 26, 99-123.
 Dickey, D. and Fuller, W. (1979) Distributions of the Estimators for Autoregressive Time Series with a Unit Root. Journal of the American Statistical Association, 74, 427-431.
 MacKinnon, J.G., Haug, A.A. and Michelis, L. (1999) Numerical Distribution Functions of Likelihood Ratio Tests for Cointegration. Journal of Applied Econometrics, 14, 563-577.
 Asmy, M., Rohilina, W., Hassama, A. and Amin, M.F. (2009) Effects of Macroeconomics Variables on Stock Prices in Malaysia: An Approach of Error Correction Model. Department of Economics, International Islamic University Malaysia, Gombak.
 Pilinkus, D. and Boguslauskas, V. (2009) The Short-Run Relationship between Stock Market Prices and Macroeconomic Variables in Lithuania: An Application of the Impulse Response Function. Engineering Economics, 14, 26-33.
 Mukherjee, T.K. and Naka, A. (1995) Dynamic Relations between Macroeconomic Variables and the Japanese Stock Market: An Application of a Vector Error Correction Model. Journal of Financial Research, 18, 223-237.
 Ratanapakorn, O. and Sharma, S. (2007) Dynamic Analysis between the U.S. Stock Returns and the Macroeconomic Variables. Applied Financial Economics, 17, 369-377.