In terms of the main issue with Cochrane's (2008) paper, we are asking: which of dividend growth or returns is forecastable, and by how much? Given the relationship between [a] returns, [b] dividend growth and [c] dividend yields, a null hypothesis in which returns are not forecastable must also account for dividend growth that is forecastable. Hence the need to evaluate the joint distribution of return and dividend growth coefficients in simulated datasets.
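For reference, the approximate present-value identity that ties the three together (a restatement of the paper's identity, with ρ ≈ 0.96 the log-linearisation constant) is:

b(r) = 1 − ρφ + b(d)

so under the null that returns are unforecastable (b(r) = 0) with φ < 1, the identity forces b(d) = ρφ − 1 < 0, i.e. forecastable dividend growth. This is the "dog that did not bark": we observe neither.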

**So far there are three tests with respect to return predictability:**

- the one-period return regression coefficient b(r)
- the one-period dividend growth regression coefficient b(d)
- the long-horizon return regression coefficient (b_lrun) [provides same results as long-horizon dividend growth regression but is more convenient to look at]

**In terms of the results for the single-period case:**

- The return forecasting coefficient, taken alone, is not significant. Evidence against the null is weak: only about 19% of simulated return forecast coefficients exceed the sample counterpart, and only around 11% of simulated t-statistics are more extreme than that of the sample.
- The absence of dividend growth forecastability offers much more evidence against the null than the presence of return forecastability does, lowering probability values to the single-digit range.

**In terms of the long-horizon case:**

- The long-horizon return and dividend growth forecasts generate coefficients that are larger than their sample counterparts only 1-2% of the time (1.43% in my case), complementing the single-period dividend growth forecast results.

**[6] Once we have summarised both single-period and long-horizon results, we can look at the source of power of the single-period dividend growth and long-horizon tests, as well as examine the effect of varying dividend yield autocorrelation on short- and long-run percentage values.**

To investigate the source of the greater power afforded to the [a] single-period dividend growth estimates and [b] long-horizon estimates, Cochrane (2008) plots the **joint distribution of the return regression coefficient b(r) and the dividend yield regression coefficient φ**. I have also included the small-sample distribution of the long-horizon return coefficient estimates for the cases **φ = 0.941** and **φ = 0.99**, the sample and fixed cases respectively. A simple call to the JointPlot() function with the fourth parameter set to 'long horizon' suffices:

```r
#Long horizon results
windows()
par(mfrow=c(2,2))
JointPlot(mcs.sample,5000,initial.reg,'long horizon')
JointPlot(mcs.fixed,5000,initial.reg,'long horizon')
```


**Resulting in the following set of plots:**

**Focusing on the joint distribution for which φ = 0.941:**

- The triangle marks the null hypothesis and the circle marks the estimated coefficients.
- The golden line marks the line br = 1 − ρφ + bd_smpl. Points above and to the right are samples in which b(d) exceeds its sample value.
- The black line is defined by br/(1 − ρφ) = br_smpl/(1 − ρφ_smpl). Points above and to the right are draws in which b(r)_lrun exceeds the sample value.
- Percentages are probabilities of belonging to the associated quadrant.
- A high return coefficient by itself is not uncommon (just under 20% of the time).
- b(r) and φ estimates are negatively correlated across samples. We never see b(r) larger than in the data **along with** φ larger than in the data: the top-right quadrant is empty.
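The quadrant percentages can be recovered directly from the simulated draws. A minimal sketch, assuming a matrix of simulated (b(r), φ) estimates like the one returned by the Monte Carlo routine; the draws and sample values below are illustrative placeholders, not the actual output:

```r
# Quadrant probabilities for the joint distribution of b(r) and phi.
# 'draws' stands in for the simulated estimates; real values come from
# the Monte Carlo output. These are placeholders for illustration.
set.seed(1)
draws <- cbind(br = rnorm(5000, 0.05, 0.05), phi = rnorm(5000, 0.94, 0.03))
br.smpl  <- 0.097   # illustrative sample return coefficient
phi.smpl <- 0.941   # sample dividend yield autocorrelation

hi.br  <- draws[, 'br']  > br.smpl
hi.phi <- draws[, 'phi'] > phi.smpl

# Fraction of draws falling in each quadrant, as percentages
quadrants <- c('br>, phi>' = mean(hi.br  & hi.phi),
               'br>, phi<' = mean(hi.br  & !hi.phi),
               'br<, phi>' = mean(!hi.br & hi.phi),
               'br<, phi<' = mean(!hi.br & !hi.phi)) * 100
round(quadrants, 2)
```

With the actual simulated draws, the negative correlation between b(r) and φ shows up as an (almost) empty top-right quadrant.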

These plots capture why the single-period dividend growth test and the long-horizon regressions give so many fewer rejections under the null than the single-period return test. While the single-period return test yields probability values in the 10-20% range, the single-period dividend growth and long-run tests fall in the 1-2% range. It is the strong negative correlation of the estimates b(r) and φ across simulated samples that underlies the power of the long-horizon return and dividend-growth tests.

The final part of this post addresses the effects of varying φ (the dividend yield autocorrelation) on the percentage of simulated estimates that are more extreme than their sample counterparts. A quick glance at the joint distribution plots above suggests that as φ rises from 0.941 to 0.99, the cloud of points rises and more of them cross the sample estimates of [a] the single-period dividend growth coefficient and [b] the long-horizon return coefficient.

In the following code we specify a vector of phis (from 0.9 to 1.01), calculate the half-life implied by each member of that vector, create a list object to hold our Monte Carlo simulation results for each value of phi, and save the resulting list in another .Rdata file. I only run 5000 trials to save time. As before, the MonteCarloSimulation() function returns a list object with two elements, the first being a summary of the inputs and the second being a collection of simulation results. The mcs.list variable below only stores the second element.

```r
#######Varying phi###################################################
# Vector of phis
phi.vec   <- c(0.9,0.941,0.96,0.98,0.99,1.00,1.01)
half.life <- log(0.5)/log(phi.vec)
mcs.list  <- list()
numT.temp <- 5000

#Save data [run once]
for(i in 1:length(phi.vec)){
  mcs.list[[i]] <- MonteCarloSimulation(num.trials=numT.temp,num.obs=100,
                                        phi=phi.vec[i],rho=0.9638,
                                        error.cov=res.cov)[[2]]
}
save(mcs.list,file='Dog2.Rdata')
#####################################################################
```


Now that we have saved a list object with 7 elements, each corresponding to the Monte Carlo results for a particular value of φ, let us load the data and apply to each element of that list a custom function that [a] extracts the coefficients/estimates whose values exceed their sample counterparts and [b] expresses these as a percentage of the number of simulated samples (5000). The TableMaker() function is once again used to tabulate the results.

```r
################Table of phis########################################
#Load and Plot
load('Dog2.Rdata')
p.br  <- unlist(lapply(mcs.list,function(g) length(which(g[,'b[r]']>betas[4]))/numT.temp*100))
p.bd  <- unlist(lapply(mcs.list,function(g) length(which(g[,'b[d]']>betas[5]))/numT.temp*100))
p.lbr <- unlist(lapply(mcs.list,function(g) length(which(g[,'b[long.r]']>lr.beta))/numT.temp*100))
p.lbd <- unlist(lapply(mcs.list,function(g) length(which(g[,'b[long.d]']>(lr.beta-1)))/numT.temp*100))

tbl <- matrix(c(phi.vec,p.br,p.bd,p.lbr,p.lbd,half.life),ncol=6)
colnames(tbl) <- c('phi','b[r]','b[d]','long b[r]','long b[d]','half life')
col.names <- colnames(tbl)

reg.table <- tbl
est.tab   <- round(reg.table,5)
est.tab   <- apply(est.tab, 2, rev)
est.tab   <- cbind(c(' '),est.tab)
TableMaker(row.h=1,est.tab,c('Effects of phi',col.names),strip=F,
           strip.col=c('green','blue'),col.cut=0.05,alpha=0.7,
           border.col='lightgrey',text.col='black',
           header.bcol='gold',header.tcol='black',
           title=c('The effects of dividend yield autocorrelation\n5000 Montecarlo trials per phi'))
#####################################################################
```

**As φ rises:**

- The probability of the single-period return coefficient exceeding its sample value is little changed.
- The probability of the single-period dividend growth coefficient exceeding its sample value increases quite strongly.
- The probability values of the long-run coefficients are identical to each other and similar to the single-period dividend growth case. The latter result corroborates the overlap between the two regions carved out by the coefficients in the joint distribution plot from before.
- As φ moves to 1.01, nothing dramatic happens to the coefficients, but the half-life, which should be infinite (as per the paper), takes on a strange number in my case (not sure what I have done wrong here).
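A likely culprit is the half-life formula itself rather than the simulations: `half.life <- log(0.5)/log(phi)` is only meaningful for 0 < φ < 1. At φ = 1 the denominator is zero, and for φ > 1 the logarithm turns positive, so the formula returns a negative "half-life" rather than the infinite value the paper implies. A quick check:

```r
# Half-life of an AR(1): the number of periods for a shock to decay by half.
half.life <- function(phi) log(0.5) / log(phi)

half.life(0.941)  # finite and positive: shocks die out
half.life(1.00)   # -Inf in R, since log(1) = 0: formula breaks down
half.life(1.01)   # negative: phi > 1 means shocks grow, no half-life exists
```

So the "strange number" at φ = 1.01 is just the formula applied outside its domain; for φ ≥ 1 the half-life should simply be reported as infinite (or undefined).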

**The most important point here is that in all cases, that is to say for all reasonable values of φ, the single-period dividend-growth and long-run regression tests have a great deal more power than the single-period return regression in rejecting the notion that returns are unpredictable.**