# Asset Pricing [8b : Time Series Predictability]

Campbell & Shiller’s (1988) present value identity provides a convenient framework for understanding time series predictability and for addressing the new issues alluded to towards the end of the previous post. As usual, I will not profess to understand every step of the derivation or the requisite mathematics behind it, but I shall provide (copy) a brief sketch that is relevant to the post. Starting with a simple return decomposition and taking logs:

R(t+1) = [P(t+1) + D(t+1)] / P(t)

r(t+1) = log[P(t+1) + D(t+1)] − log P(t)

The Taylor expansion of the final equation yields:

r(t+1) ≈ k + ρ·p(t+1) + (1 − ρ)·d(t+1) − p(t)

where lower-case letters denote logs and ρ is a constant slightly below one. Iterating forward and ruling out explosive behaviour of stock prices (so that ρ^j·p(t+j) vanishes as j grows):

p(t) − d(t) = k/(1 − ρ) + E(t) Σ ρ^(j−1)·[Δd(t+j) − r(t+j)]

From this relation, it is clear that prices are high relative to fundamentals if investors today:

[a] expect dividends to rise in the future. This is sensible since high dividend growth expectations give high prices (and a high p-d).

[b] expect returns (discount rates) to be low in the future. This is sensible since low discount rates imply future cash flows are discounted more lightly back to the present and hence command higher current prices (and a high p-d).

Price-dividend ratios can move if and only if there is news about current dividends, future dividend growth or future returns. Hence if dividend growth and returns are totally unpredictable (i.e. expectations are the same at every time t), then it must follow from the identity that p(t)−d(t) is constant over time.
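This implication is easy to verify with a toy example. The following sketch is my own (the parameter values are arbitrary assumptions, and it uses none of the post’s data): with i.i.d. dividend growth and a constant discount rate, the price is a constant Gordon-style multiple of the dividend, so the log price-dividend ratio never moves.

```r
# Toy sketch (assumed parameters): with i.i.d. dividend growth and a constant
# expected return, the price is a constant multiple of the dividend, so the
# log price-dividend ratio p - d is constant over time.
set.seed(1)
r.bar <- 0.07                                 # constant expected return
g.bar <- 0.02                                 # constant expected dividend growth
d <- cumsum(c(0, rnorm(99, g.bar, 0.1)))      # log dividends with i.i.d. growth
p <- d + log((1 + g.bar) / (r.bar - g.bar))   # constant Gordon multiple
pd <- p - d
sd(pd)                                        # exactly 0: nothing is forecastable
```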

If prices are supposed to capture how the world looks going forward, then constant expectations of fundamentals going forward must generate constant prices today. Since empirically p(t)−d(t) is not constant, at least one of the two components on the right-hand side of the identity must be predictable. To determine which of these two channels is ultimately responsible for the observed variation of the p-d ratio, it is convenient to link predictability to variability via a set of regressions. Reformulating the identity in terms of the dividend yield:

d(t) − p(t) = const + E(t) Σ ρ^(j−1)·[r(t+j) − Δd(t+j)]

and regressing each component of the identity on the dividend yield results in the following set of long-horizon regressions:

Σ ρ^(j−1)·r(t+j) = a(r) + b(r)·[d(t) − p(t)] + error

Σ ρ^(j−1)·Δd(t+j) = a(d) + b(d)·[d(t) − p(t)] + error

The coefficients must be such that:

b(r) − b(d) = 1

The set of regressions might give one the illusion that the right-hand side (dividend yield) is causally related to the dependent variable on the left (whatever it may be). Contrary to typical statistical interpretation, these forecasting regressions are estimated in an attempt to understand how the right-hand variable is formed. Traders who observe some news about expected cash flows or discount rates going forward (LHS) incorporate this information into current prices (RHS). Therefore, these regressions are not explanatory in nature; rather, they seek to answer the question ‘how much of the return can one know ahead of time?’ The R-squared measure provides an answer to this question.

The long-run return and dividend-growth forecasting regression coefficients must add to one: b(r) − b(d) = 1, since dividend growth enters the identity with a negative sign. If dividend yields vary, they must forecast long-run returns or dividend growth. Since covariances are the numerators of regression betas, multiplying both sides by the variance of the dividend yield (the denominator of each coefficient) implies:

var[d(t) − p(t)] = cov[d(t) − p(t), Σ ρ^(j−1)·r(t+j)] − cov[d(t) − p(t), Σ ρ^(j−1)·Δd(t+j)]

Clearly, the price ratio only varies if it forecasts long-run dividend growth or long-run returns. If both coefficients were equal to zero, the price ratio could forecast neither dividend growth nor returns, and dp(t) would not vary over time. Since dp(t) clearly varies over time, it must forecast at least one of returns or dividend growth. Returning to the original identity, variation in dividend yields must correspond to changing investor expectations of dividend growth or returns.

Towards the end of the previous post, I alluded to the fact that the initial tests and results concerning stock return predictability had been overturned in the 1980s. The simple forecasting regression of returns on lagged returns used in the first generation of tests engendered the view that beta = 0, that returns followed a random walk and that one cannot beat the market.

The second generation of time series tests overturned these remarks by focussing on new variables (price ratios in particular) across longer horizons, concluding that expected returns vary over time in ways not captured by the classic random walk.

To illustrate how the introduction of longer horizons and new variables overturned previously cherished views of stock-return behaviour over time, consider the following regression:

R^e(t → t+k) = a + b·[D(t)/P(t)] + error(t+k)

The dependent variable is the CRSP value-weighted excess return compounded over a k-year horizon, such that for k = 5 the compounded stock return is given by:

R(t → t+5) = R(t+1)·R(t+2)·R(t+3)·R(t+4)·R(t+5)

with each R(t) being defined as (1 + Return), and the corresponding excess return given by the compounded stock return minus the analogously compounded Treasury-bill return. The following code illustrates the results of regressing compounded returns across horizons of up to 15 years on the dividend-price ratio, and the drastic variation in the R-squared measure as well as the slope coefficient across horizons.
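The `CustomCompound` helper used below is not shown in the post; the following is a minimal sketch of what such a compounding step might look like. The function name `CompoundK` and the three-column layout (stock, bill, excess) are my assumptions, chosen to match how `dep.list[[i]][,3]` is used later.

```r
# Hypothetical sketch of the k-year compounding step (CustomCompound itself is
# not shown in the post; the name and column layout here are assumptions).
CompoundK <- function(ret, rf, k){
  n <- length(ret) - k + 1
  t(sapply(1:n, function(s){
    R  <- prod(1 + ret[s:(s + k - 1)])   # compounded gross stock return
    Rf <- prod(1 + rf[s:(s + k - 1)])    # compounded gross T-bill return
    c(R, Rf, R - Rf)                     # columns: stock, bill, excess
  }))
}

# Overlapping 2-year windows over a 3-year sample give two rows:
CompoundK(c(0.10, 0.10, -0.05), c(0.02, 0.02, 0.02), 2)
```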


```r
#####################################################################################
# New views & facts
#####################################################################################
# Return regressions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

dep.list<-list()
new.reg <- list()
dep.list[[1]]<-matrix(c(idx.ret,tbill,exc),nrow=length(idx.ret))  #1-year horizon
for(i in 2:15){
dep.list[[i]]=CustomCompound(idx.ret,tbill,i)
}

png(file="actf%02d.png",width=550,height=500)
for(i in 1:15){
temp=cbind(dep.list[[i]][,3],lag(dp,-1)[1:nrow(dep.list[[i]])])
new.reg[[i]]=lm(temp[,1]~temp[,2])
plot(temp[,1],ylab='Returns',xlab='',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,main=paste('Actual versus forecast\nHorizon',i,' years'),col='darkgreen',type='l',lwd=2)
lines(fitted(new.reg[[i]]),col='blue')
legend(bty='n',y.intersp=1,'bottomleft',fill=c('darkgreen','blue'),legend=c('Actual','Fitted'),ncol=1,bg='white',cex=0.75)
}
dev.off()
shell("convert -delay 15 *.png actf.gif")

#Extract regression values
estim <-RegExtractor(new.reg,'est')
pvals <-matrix(RegExtractor(new.reg,'pval'),ncol=2)
tvals <-matrix(RegExtractor(new.reg,'tval'),ncol=2)
rsq <-matrix(RegExtractor(new.reg,'rsq'),ncol=1)

#Variation relative to mean level
std.vals=sapply(new.reg,function(v) sd(fitted(v)))
mean.vals=sapply(new.reg,function(v) mean(fitted(v)))

#Plot & Table
row.names<- paste('Compound ',1:15,' Years')
col.names<-c('Intercept','Beta','t-stats','p-vals','R2','E(R)','Std(Fitted)')
reg.table=cbind(estim,tvals[,2],pvals[,2],rsq,mean.vals,std.vals)
colnames(reg.table)<-col.names

windows()
layout(matrix(c(1,1,2),nrow=3))
barplot(t(rsq*100),ylab='R-squared(%)',xlab='horizon',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,main='Comparing R-squared across horizons\nReturn regressions',col='gold',names.arg=1:15)
est.tab <- round(reg.table,5)
est.tab <- apply(est.tab, 2, rev)
est.tab <- cbind(rev(row.names),est.tab)
par(mai=c(0.15,0.25,0.25,0.15))
```

An animated gif shows how the predicted and actual values vary for each horizon of the regression: the correlation is evident; high dividend yields (low prices) mean high subsequent returns, and low dividend yields (high prices) mean low subsequent returns. A more useful plot illustrates how R-squared and regression results vary with the horizon: R-squared and slope coefficients increase with the regression horizon, implying that forecastability becomes more economically interesting at longer horizons. From the Chicago lectures I would have expected the slope to remain statistically insignificant throughout, but I clearly obtained the opposite result, with smaller p-values at longer horizons. Another good example of the ‘tinkering’ aspect of this blog, where I may have made a mistake somewhere or glossed over some procedure that has been taken for granted but is unknown to me.

Looking at the parallel forecasts of dividend growth: when prices are high, this should be a signal that investors expect dividends to be higher in the future, so on average one should observe higher dividend growth in the years following high prices (low dividend yields) and hence expect a negative slope coefficient on the dividend yield. To determine whether this is the case, I ran dividend growth forecasting regressions for 15 horizons as before:
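The `DivCompound` helper is likewise not shown in the post. A minimal sketch, assuming dividend growth `dg` is stored in logs so that multi-year growth is a simple sum over overlapping windows (the name `DivCompoundK` and the log-growth assumption are mine):

```r
# Hypothetical sketch of DivCompound (not shown in the post): cumulative log
# dividend growth over k consecutive years, one value per overlapping window.
DivCompoundK <- function(dg, k){
  n <- length(dg) - k + 1
  sapply(1:n, function(s) sum(dg[s:(s + k - 1)]))
}

DivCompoundK(c(0.02, 0.05, -0.01, 0.03), 2)   # three overlapping 2-year growth rates
```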


```r
# Dividend growth ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
dep.list<-list()
new.reg <- list()
dep.list[[1]]<-dg  #1-year horizon
temp<-NULL
for(i in 2:15){
dep.list[[i]]=DivCompound(dg,i)
}

for(i in 1:15){
temp=cbind(dep.list[[i]],lag(dp,-1)[1:length(dep.list[[i]])])
new.reg[[i]]=lm(temp[,1]~temp[,2])
}

#Extract regression values
estim <-RegExtractor(new.reg,'est')
pvals <-matrix(RegExtractor(new.reg,'pval'),ncol=2)
tvals <-matrix(RegExtractor(new.reg,'tval'),ncol=2)
rsq <-matrix(RegExtractor(new.reg,'rsq'),ncol=1)

#Table
row.names<- paste('Compound ',1:15,' Years')
col.names<-c('Intercept','Beta','t-stats','p-vals','R2')
reg.table=cbind(estim,tvals[,2],pvals[,2],rsq)
colnames(reg.table)<-col.names

windows()
layout(matrix(c(1,1,2),nrow=3))
barplot(ylim=c(0,35),t(rsq*100),ylab='R-squared(%)',xlab='horizon',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,main='Comparing R-squared across horizons\nDividend growth regressions',col='lightblue',names.arg=1:15)
est.tab <- round(reg.table,5)
est.tab <- apply(est.tab, 2, rev)
est.tab <- cbind(rev(row.names),est.tab)
par(mai=c(0.15,0.25,0.40,0.15))
```