Continuing from the previous post, we now turn to the distribution of coefficients and t-statistics associated with the return and dividend growth regressions across simulated datasets. Following the outline at the beginning of this set of posts:
Once we have collected regression results across 20,000 simulated datasets, we can look at the probability of observing b(r) and b(d) pairs, as well as t-statistic pairs, that are more extreme than the original sample estimates.
Each Monte Carlo trial involves running the single-period forecasting regressions of the VAR. I use the custom JointPlot() function to plot the joint distributions of the return and dividend growth coefficients as well as their respective t-statistics. The implementation is quite simple:
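To make the mechanics of a single trial concrete, here is a minimal sketch of one simulated dataset under the null b(r)=0. The sample length, shock volatilities and parameter values below are illustrative assumptions of mine, not the post's actual code:

```r
# One simulated dataset under the null b(r) = 0 (a sketch; the
# sample length and shock volatilities are assumptions)
set.seed(42)
n   <- 78              # sample length, roughly 1927-2004
phi <- 0.94            # dp autocorrelation under the null (assumed)
rho <- 0.96            # log-linearisation constant (assumed)
b.d <- rho*phi - 1     # identity forces b(d) = rho*phi - 1 when b(r) = 0

# simulate the log dividend yield as an AR(1)
dp <- stats::filter(rnorm(n, sd = 0.15), phi, method = "recursive")

# dividend growth under the null, then returns via the identity
# r[t+1] = dp[t] - rho*dp[t+1] + dg[t+1]
dp.lag <- head(dp, -1)
dg <- b.d*dp.lag + rnorm(n - 1, sd = 0.14)
r  <- dp.lag - rho*dp[-1] + dg

# the two single-period forecasting regressions of the VAR
b.r.hat <- coef(lm(r  ~ dp.lag))[2]   # centred on 0 under the null
b.d.hat <- coef(lm(dg ~ dp.lag))[2]   # centred on rho*phi - 1 (about -0.1)
```

Collecting b.r.hat and b.d.hat across many such trials gives the simulated pairs plotted below.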
#Single period results
windows()
par(mfrow=c(2,2))
JointPlot(mcs.sample,5000,initial.reg,'single period')
JointPlot(mcs.fixed,5000,initial.reg,'single period')
While mcs.sample captures simulated results using the original sample estimates (1927-2004), the mcs.fixed case uses phi=0.99 as per the paper. This yields the following set of plots, with the first row showing the joint distributions of return/dividend growth coefficients and t-stats using the original sample estimates in the simulation, and the second row capturing the same information for the fixed case of phi=0.99:
Although we have simulated 20,000 datasets, I only plot the results for 5,000 simulations (for clarity and to keep the optician at bay). The black lines, which meet at the red circle, represent the original sample estimates. The blue circles represent simulated results. The percentages in each quadrant capture the portion of simulated results more extreme than the sample estimates. The green triangle marks the location of the null hypothesis (e.g. b(r)=0 and b(d)=-0.1 in the case of the original sample).
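For reference, the quadrant percentages can be computed directly from the simulated pairs. A minimal sketch (the helper function and argument names here are my own, not part of JointPlot()):

```r
# Share of simulated (b(r), b(d)) pairs in each quadrant relative to
# the sample estimates (hypothetical helper, not from the post)
quadrant.pct <- function(sim.br, sim.bd, samp.br, samp.bd) {
  right <- sim.br > samp.br   # return coefficient more extreme than sample
  top   <- sim.bd > samp.bd   # dividend-growth coefficient less negative
  100 * c(top.left     = mean(!right & top),
          top.right    = mean(right & top),
          bottom.left  = mean(!right & !top),
          bottom.right = mean(right & !top))
}
```

Called with the columns of simulated coefficients and the original sample estimates as the last two arguments, this returns the four percentages shown in the plots.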
The null hypothesis is that returns are unpredictable and that dividend growth is predictable, or more compactly that b(r)=0 and b(d)=-0.1 as per the identity. The null hypothesis is meant to help us answer the question: which of return and dividend growth is forecastable, and by how much? As far as I understand it (and correct me if I am wrong), evidence against the null hypothesis should consist of the following (accounting for statistical significance):
- return coefficient from simulated data > return coefficient from sample data.
- dividend growth coefficient from simulated data > dividend growth coefficient from sample data.
In light of the above and the joint distribution plots for the case of our original sample (first row of plots):
- If we look ONLY at the marginal distribution of the simulated return coefficients b(r), we find weak evidence against the null hypothesis, since the Monte Carlo draw produces a coefficient larger than the sample estimate around 19% of the time (top right + bottom right quadrants). Coefficients larger than the original sample estimate occur even when the true underlying coefficient is zero.
- If we look at the joint distribution, most of the events with more extreme return coefficients are paired with dividend-growth forecast coefficients that are more negative than seen in the data (bottom right quadrant). In these events, both dividend growth AND returns are forecastable. Here prices move partially on forecasts of future dividend growth, and in the right direction.
- Dividend growth fails to be forecastable in only 1.27% of the samples generated under the null (top right quadrant). The absence of dividend-growth forecastability thus offers stronger evidence against the null.
If I understand this correctly, the dividend-growth variable is the dog that did not bark: it is the absence of its forecastability that provides stronger evidence against the null hypothesis than the presence of return forecastability. The absence of dividend growth forecastability can be thought of as evidence that return forecastability is economically significant. In other words, if we wanted to show that returns are forecastable in the short run, the lack of dividend growth forecastability is more compelling for that purpose than the presence of return forecastability.
Once we have looked at the single-period results, we can proceed to the long-horizon regression coefficients. Long-horizon estimates and tests provide a further avenue for delivering economically significant return forecasts. Conveniently, the long-horizon regression coefficients are mercifully not new estimates, but are instead calculated from the single-period estimates of the original sample (1927-2004):

b(long r) = b(r) / (1 - rho*phi)
Once the long-run return coefficient has been calculated, the corresponding dividend growth coefficient can be derived from the identity b(long r) - b(long d) = 1:

b(long d) = b(long r) - 1
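As a quick sanity check on the arithmetic, we can plug in single-period values close to those reported in Cochrane's paper (the three inputs below are illustrative assumptions of mine, not the post's output):

```r
# Long-run coefficients from single-period estimates (values assumed,
# roughly matching the 1927-2004 sample in the paper)
b.r <- 0.097    # single-period return coefficient
rho <- 0.9638   # log-linearisation constant
phi <- 0.9414   # dividend-yield autocorrelation

b.long.r <- b.r/(1 - rho*phi)   # long-run return coefficient, roughly 1
b.long.d <- b.long.r - 1        # implied dividend-growth coefficient, near 0
```

With these inputs the long-run return coefficient comes out close to 1 and the dividend-growth coefficient close to 0, which is the pattern discussed next.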
These results reinforce the notion of economically significant return forecastability. Dividend-yield volatility is almost exactly accounted for by return forecasts (b(long r)=1), with essentially no contribution from dividend-growth forecasts (b(long d)=0). In our case:
#Identical long-run div/return results
rho <- mcs.sample[]['rho']
phi <- mcs.sample[]['phi']
lr.beta <- betas/(1-rho*phi)
ld.beta <- lr.beta-1
betas.lr <- c(lr.beta,ld.beta)
numT <- mcs.sample[]['num.trials']
pr <- length(which(mcs.sample[][,'b[long.r]'] > lr.beta))/numT*100
pd <- length(which(mcs.sample[][,'b[long.d]'] > ld.beta))/numT*100
pvals.lr <- c(pr,pd)

#Table
col.names <- c('Long-run coeff','%p values')
row.names <- c('Return','Dividend growth')
reg.table <- cbind(betas.lr,pvals.lr)
est.tab <- apply(reg.table, 2, rev)
est.tab <- cbind(rev(row.names),est.tab)
windows()
TableMaker(row.h=1,est.tab,c('Results',col.names),strip=F,strip.col=c('green','blue'),col.cut=0.05,alpha=0.7,border.col='lightgrey',text.col='black',header.bcol='lightblue',header.tcol='black',title=c('Long horizon estimates'))
This results in the following table :
Although the t-statistics and standard errors are missing from this table (because I do not know how they are determined), they would, according to the paper, be exactly identical. Hence one advantage of using long-horizon regression coefficients is that they obviate the need to choose between return and dividend growth tests (as was the case with the single-period regressions). The last column shows the conventional probability values, essentially telling us how often forecasts are greater than the sample value under the unforecastable null hypothesis. There is a 1.43% chance of seeing a long-run forecast more extreme than seen in the data. Ultimately, the long-run return AND dividend growth regressions give essentially the same strong rejections as the short-run dividend-growth regression. As a consequence, we can tabulate the small-sample distribution of the test in a conventional histogram rather than a two-dimensional plot.
#Histogram
windows()
par(mfrow=c(1,2))
lr <- hist(plot=FALSE,mcs.sample[][,'b[long.r]'],breaks=100)
plot(xlim=c(-3,3),lr,freq=FALSE,col=ifelse(lr$breaks<=lr.beta,'lightgrey','red'),ylab='',xlab='long_b[r]',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,cex=0.65,main=paste('phi=',phi))
abline(v=lr.beta,col='red')
text(x=2,y=0.85,paste('Data:',round(lr.beta,3),'\nProb :',pr),col='black',cex=0.6)
ld <- hist(plot=FALSE,mcs.sample[][,'b[long.d]'],breaks=100)
plot(xlim=c(-3,3),ld,freq=FALSE,col=ifelse(ld$breaks<=(lr.beta-1),'lightgrey','red'),ylab='',xlab='long_b[d]',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,cex=0.65,main=paste('phi=',phi))
abline(v=(lr.beta-1),col='red')
text(x=1,y=0.85,paste('Data:',round((lr.beta-1),3),'\n Prob :',pd),col='black',cex=0.6)
I think this reinforces the point that the long-run estimates deliver essentially the same answer: