
Tag Archives: CAPM

The measurement error, which arises when estimates of the true betas are used as explanatory variables in the cross-sectional part of the two-pass regression, can be mitigated by grouping stocks into diversified portfolios. Stocks are sorted into portfolios based on characteristics that should explain risk premia, and averaging within portfolios diversifies away much of the estimation error in individual betas. The Black, Jensen and Scholes (BJS henceforth) methodology uses individual stock betas to rank the universe of stocks we are interested in.
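
Before turning to the data, a minimal simulated sketch (purely hypothetical numbers, not the dataset used below) illustrates why averaging noisy estimates within beta-sorted portfolios shrinks the measurement error relative to stock-level betas:

#Toy illustration: portfolio grouping vs individual betas (hypothetical simulated numbers)
set.seed(1)
true.beta <- runif(350, 0.5, 1.5)                      # true betas
est.beta  <- true.beta + rnorm(350, sd = 0.3)          # noisy stock-level estimates
decile    <- cut(rank(est.beta), breaks = 10, labels = FALSE)
port.true <- tapply(true.beta, decile, mean)           # true decile-portfolio betas
port.est  <- tapply(est.beta,  decile, mean)           # estimated decile-portfolio betas
round(c(stock.rmse = sqrt(mean((est.beta - true.beta)^2)),
        port.rmse  = sqrt(mean((port.est - port.true)^2))), 3)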


Using the same dataset as before, a specific application of the BJS method follows.

[1] Sample data: all stocks in the S&P 500 from 1995 (beginning) to 2012 (end). Data availability issues restrict the sample to 351 assets for the specified time period.

[2]  Estimate a beta for each stock in the first 5-year regression window using time series data in excess form.

[3] Rank order all of the stock betas and form 10 portfolios. Portfolio #1 will contain the 35 stocks associated with the highest estimated betas. Portfolio #10 will contain the 36 assets with the lowest estimated betas.

[4] Compute each portfolio’s returns for each of the 12 months in the year directly following the first 5-year regression window.

[5] Move the 5-year regression window forward by one year and repeat steps 2-4 for this new subset of data across all regression windows.

[6] Now we have 13 regression windows with 10 portfolios in each of them.

[7] Join identically numbered portfolios across regression windows (i.e. Portfolio n in regression window i with Portfolio n in windows j, k, l, etc.) to yield 10 portfolio time series.

[8] Estimate a time series regression of portfolio returns against excess market returns for the remaining monthly data in the data set.

[9] Estimate the ex-post security market line via a cross sectional regression of mean portfolio returns against portfolio betas estimated in the previous step.

The code for the BJS procedure follows:

#Black Jensen Scholes
BJS <- list()
  BJS$mkt.ex.ret <- as.matrix(exm.ret[(nrow(ex.ret)-155):nrow(ex.ret),])   # last 156 months (13 years) of excess market returns
  BJS$windows.loc.beg <- seq(1,145,by=12)                                  # start month of each 60-month estimation window, rolled forward 12 months at a time
  BJS$windows.loc.end <- seq(60,204,by=12)                                 # end month of each 60-month estimation window
  BJS$port.beg <- seq(1,317,by=35)                                         # first row of each beta-sorted decile
  BJS$port.end <- seq(35,351,by=35)+c(rep(0,9),1)                          # portfolios 1-9 hold 35 stocks each, portfolio 10 holds 36

  BJS$length.windows <- 5
	BJS$num.windows <- ((length(market.ret)/12))-BJS$length.windows
	BJS$windows <- list()

		for(i in 1:BJS$num.windows){
			BJS$windows[[i]]<-list()
			BJS$windows[[i]]$alpha.list <- NULL
			BJS$windows[[i]]$beta.list <- NULL
			BJS$windows[[i]]$portfolios <- NULL
			BJS$windows[[i]]$portfolios.mean <- NULL

			for(j in 1:ncol(stock.ret)){
				BJS$windows[[i]]$fit[[j]]<-lm(ex.ret[BJS$windows.loc.beg[i]:BJS$windows.loc.end[i],j]~exm.ret[BJS$windows.loc.beg[i]:BJS$windows.loc.end[i]])

				BJS$windows[[i]]$fit[[j]]$alpha <- coef(BJS$windows[[i]]$fit[[j]])[1]
			  BJS$windows[[i]]$fit[[j]]$beta <- coef(BJS$windows[[i]]$fit[[j]])[2]

				BJS$windows[[i]]$alpha.list <- rbind(BJS$windows[[i]]$alpha.list,BJS$windows[[i]]$fit[[j]]$alpha)
				BJS$windows[[i]]$beta.list <- rbind(BJS$windows[[i]]$beta.list,BJS$windows[[i]]$fit[[j]]$beta)

				BJS$windows[[i]]$fit[[j]]$alpha.p <- summary(BJS$windows[[i]]$fit[[j]])$coefficients[1,4]  # p-value of the intercept (alpha)
				BJS$windows[[i]]$fit[[j]]$beta.p <- summary(BJS$windows[[i]]$fit[[j]])$coefficients[2,4]   # p-value of the slope (beta)

				BJS$windows[[i]]$fit[[j]]$rsquared <- summary(BJS$windows[[i]]$fit[[j]])$r.squared         # R-squared of the window regression
			}
			rownames(BJS$windows[[i]]$alpha.list) <- colnames(monthly.ret)
			rownames(BJS$windows[[i]]$beta.list) <- colnames(monthly.ret)
		  colnames(BJS$windows[[i]]$alpha.list) <- 'alphas'
			colnames(BJS$windows[[i]]$beta.list) <- 'betas'

			BJS$windows[[i]]$matrix <- cbind(BJS$windows[[i]]$alpha.list,BJS$windows[[i]]$beta.list,t(stock.ret[(BJS$windows.loc.end[i]+1):(BJS$windows.loc.end[i]+12),1:351]))
			colnames(BJS$windows[[i]]$matrix)[3:14] <- month.abb
			BJS$windows[[i]]$matrix.sorted <- BJS$windows[[i]]$matrix[order(BJS$windows[[i]]$matrix[,2],decreasing=TRUE),]

			for(k in 1:10){
				BJS$windows[[i]]$portfolios[[k]] <- BJS$windows[[i]]$matrix.sorted[BJS$port.beg[k]:BJS$port.end[k],]
			  BJS$windows[[i]]$portfolios.mean[[k]] <- matrix(colMeans(BJS$windows[[i]]$portfolios[[k]][,3:14]),ncol=1)
			}
	}

BJS$port.ret.ts <- list()
BJS$port.ret.ts.tot <- NULL   # container for the stacked portfolio return series
for(a in 1:10){
 BJS$port.ret.ts[[a]]<-BJS$windows[[1]]$portfolios.mean[[a]]
	for(b in 2:BJS$num.windows){
		BJS$port.ret.ts[[a]]<-rbind(BJS$port.ret.ts[[a]],BJS$windows[[b]]$portfolios.mean[[a]])
	}
BJS$port.ret.ts.tot <- cbind(BJS$port.ret.ts.tot,BJS$port.ret.ts[[a]])
}

BJS$port.ret.ts.tot <-matrix(BJS$port.ret.ts.tot,ncol=10)
colnames(BJS$port.ret.ts.tot) <- paste('Portfolio',1:10)

BJS$port.reg <- lm(BJS$port.ret.ts.tot~BJS$mkt.ex.ret)
BJS$port.betas <- as.matrix(coef(BJS$port.reg)[2,])
BJS$port.alphas <- as.matrix(coef(BJS$port.reg)[1,])

BJS$port.alpha.p <- NULL
BJS$port.beta.p <- NULL
BJS$port.rsquared <- NULL

for(i in 1:10){
	BJS$port.alpha.p <- rbind(BJS$port.alpha.p,summary(BJS$port.reg)[[i]]$coefficients[1,4])   # intercept p-value
	BJS$port.beta.p <- rbind(BJS$port.beta.p,summary(BJS$port.reg)[[i]]$coefficients[2,4])     # slope p-value
	BJS$port.rsquared <- rbind(BJS$port.rsquared,summary(BJS$port.reg)[[i]]$r.squared)         # R-squared of portfolio i's regression
}

BJS$port.cor <- cor(BJS$port.ret.ts.tot,BJS$mkt.ex.ret)
BJS$port.res.std <- apply(resid(BJS$port.reg),2,sd)

BJS$port.cross.ret <- matrix(colMeans(BJS$port.ret.ts.tot))*100
BJS$port.cross.reg <- lm(BJS$port.cross.ret~BJS$port.betas)

To visualise the results:

#Visualise BJS
dat <- rbind(t(BJS$port.alphas),t(BJS$port.alpha.p),t(BJS$port.betas),t(BJS$port.beta.p),t(BJS$port.cor),t(BJS$port.res.std),t(BJS$port.cross.ret))
tab<-cbind(c('Alpha','p-value','Beta','p-value','Cor(p,m)','Std(res)','AVG(ret)'),(round(dat,3)))
annot <- c(' ',paste('Portfolio ',1:10))

windows()
layout(matrix(c(1,2,2),nrow=3))
par(mai=c(0,0,0.2,0))
TableMaker(row.h=1,apply(tab,2,rev),annot,strip=F,strip.col=c('red','green'),col.cut=0,alpha=0.6,border.col='lightgrey',text.col='black',header.bcol='gold',header.tcol='black',title='Summary of statistics for time series regressions')

par(mai=c(0.55,0.55,0.3,0.25))
plot(ylim=c(-2,5),ylab='Average Excess Returns',xlab='Systematic Risk',pch=17,main='Security Market Line',x=BJS$port.betas,y=BJS$port.cross.ret,cex.main=0.85,cex.lab=0.8,cex.axis=0.8,col=rainbow(10),cex=1.5)
abline(BJS$port.cross.reg)

legend('bottom',fill=rainbow(10),legend=paste('Portfolio ',1:10),ncol=5,bg='white',bty='n',cex=0.7)

The BJS method seems to do quite well for itself. While there is no way for me to confirm whether my code is correct, comparing my plot with one of the plots typically found for such studies (especially if I use the same scale) does appear to reveal some similarity.

Here is a snapshot of the original plot alongside my plot after adjusting the y-scales.


 


Since the CAPM relates average asset returns to market beta, it is natural to test the relation by running a cross-sectional regression that fits a line through a scatter plot of average excess returns against beta. Proposed by Lintner (1965), the two-pass regression starts with a time series regression of excess asset returns on excess market returns and proceeds with a cross-sectional regression of average excess returns against the estimated betas obtained previously. The previous post summarised, among other things, some of the properties of the time series regression. This post will use the estimated betas and residual variances from the previous time series regressions as inputs to 4 versions of the following generic cross-sectional regression:

avg.ex.ret(i) = gamma0 + gamma1*beta(i) + gamma2*resid.var(i) + error(i)

The first model suppresses the intercept and the residual variance term, leaving the previously estimated betas as the only independent variable. The second model adds an intercept term. The third model suppresses the intercept term but includes both beta and residual variance on the right hand side. The fourth and final model includes all the terms expressed in the formula above.


#cross section regression
# note: the right-hand sides of the following assignments were garbled in the original post;
# they are reconstructed here from the surrounding description and the plotting code below
avg.ex.ret <- matrix(colMeans(ex.ret),ncol=1)                   # mean excess return of each asset
avg.exm.ret <- mean(exm.ret)                                    # mean excess market return
betas <- ts.list$betas                                          # first-pass betas
resid.sd <- matrix(unlist(lapply(ts.list$resid,sd)),ncol=1)     # first-pass residual standard deviations
resid.var <- resid.sd^2

cross.fit1 <- lm(avg.ex.ret ~ 0 + betas)                        # [1] beta only, no intercept
cross.fit2 <- lm(avg.ex.ret ~ betas)                            # [2] beta with intercept
cross.fit3 <- lm(avg.ex.ret ~ 0 + betas + resid.var)            # [3] beta and residual variance, no intercept
cross.fit4 <- lm(avg.ex.ret ~ betas + resid.var)                # [4] all terms

# coefficient table for TableMaker (exact original layout unknown)
tab.1 <- c(NA, coef(cross.fit1), NA)
tab.2 <- c(coef(cross.fit2), NA)
tab.3 <- c(NA, coef(cross.fit3))
tab.4 <- coef(cross.fit4)
rowtitle <- c('gamma0','gamma1','gamma2')
annot <- c(' ','Model 1','Model 2','Model 3','Model 4')
tab <- cbind(rowtitle, round(cbind(tab.1,tab.2,tab.3,tab.4),3))
windows()
layout(matrix(c(1,2,2,2,2,2,2),byrow=T,nrow=7,ncol=1))
par(mai=c(0,0.32,0.4,0))
TableMaker(row.h=1,apply(tab,2,rev),annot,strip=F,strip.col=c('red','green'),col.cut=0,alpha=0.6,border.col='lightgrey',text.col='black',header.bcol='gold',header.tcol='black',title='Security Market Line')

par(mai=c(0.7,0.55,0,0.2))
plot(col=rainbow(length(betas)),pch=16,x=betas,y=avg.ex.ret,cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75,cex.main=0.85,xlab='Betas',ylab='Mean Excess Return')
points(x=betas,y=avg.ex.ret,cex=0.2,col='white')
text(x=betas,y=avg.ex.ret,cex=0.6,col='black',colnames(monthly.ret),pos=2)

abline(cross.fit1,col='red',lwd=1.5)
abline(cross.fit2,col='blue',lwd=1.5)

legend(ncol=2,'topright',fill=c('red','blue'),legend=c('No intercept','With intercept'),bg='white',bty='n',cex=0.75)

The plot shows the security market line for the first and second models fitted through a scatter of 351 risky assets in the average-return/beta space. A table of coefficient estimates is also given. It appears that the first and third models do a reasonable job of explaining the data, providing an R-squared above 70% in each case. If the CAPM is correct, the estimated regression coefficients should be equal to:

gamma0 = 0,   gamma1 = the average excess market return,   gamma2 = 0

With the exception of gamma1 in the fourth model, all the results depicted in the table are statistically significant, implying that the CAPM conditions above fail to hold.

Standard tests of the CAPM considered so far suffer from a set of well understood econometric deficiencies which can be easily found on the internet and are summarised below for convenience and guidance.

[Econometric Issues in Tests]

:: Errors in the cross sectional regressions are likely to be heteroskedastic and correlated across assets. OLS coefficient estimates remain unbiased but are inefficient, and the usual standard errors are no longer valid.

  • Use feasible GLS
  • Use conventional OLS but adjust the standard errors
  • Fama and MacBeth (1973) procedure

:: Errors-in-variables problem. The betas used as independent variables in the cross sectional regressions are themselves estimates of the true betas. Measurement error may lead to wrongful rejection of the CAPM.

  • Group data into portfolios
  • Instrumental variables
  • Adjust estimates for the bias

:: Betas in the time series regression are likely to change over time.

  • Form portfolios around stationary characteristics

The sandwich package was used to re-estimate the cross sectional CAPM relation with standard errors adjusted for heteroskedasticity and autocorrelation in the error term. Oddly enough, the adjustment did not lead to any marked differences in the results, so I shall omit them here. The next section of the Asset Pricing series will look at the Black Jensen Scholes (portfolio grouping) method and the Fama and MacBeth methodology.
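
For reference, a minimal sketch of that adjustment (assuming the sandwich and lmtest packages are installed and reusing the cross.fit2 specification from above) could look like this:

#HAC-adjusted inference for the cross sectional regression (sketch)
library(sandwich)
library(lmtest)
coeftest(cross.fit2, vcov = vcovHC(cross.fit2))      # heteroskedasticity-consistent (White) standard errors
coeftest(cross.fit2, vcov = NeweyWest(cross.fit2))   # Newey-West HAC standard errors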

The third part of the asset pricing series will be concerned with some initial tests of the CAPM as well as an outline of associated shortcomings that complicate interpretation and testing. This post will just gather and subsequently summarise some of the key statistics associated with risky assets (subject to data availability). Absolute and relative return/risk statistics are adapted from this reference guide. Once again the rpanel package will be used to provide some control over the asset of interest.


#Visualise
years <- seq(1995,2012,by=1)
annot <- rev(c('AVG monthly gain','AVG monthly loss','Months in excess','Best Outperformance','Worst Underperformance','Up number ratio','Down number ratios','Up capture','Down capture','Upside proficiency','Downside proficiency'))
std <- apply(monthly.ret,2,sd)
mret <- colMeans(monthly.ret)

windows()
layout(matrix(c(1,1,1,1,1,2,2,3,3,3,2,2,4,4,4,2,2,5,5,5),byrow=T,nrow=4,ncol=5))

if (interactive()) {
 draw <- function(panel) {
 ret.tab <- matrix(round(t(stock.ret[,panel$asset]),3),nrow=12)
 ret.tab <- apply(ret.tab, 2, rev)

ret.stat <- matrix(round(t(KeyReturnStats(stock.ret[,panel$asset],bench.ret)),3))
 ret.stat <- apply(ret.stat, 2, rev)

ret.stat<-cbind(annot,ret.stat)

par(mai=c(0,0,0.2,0))
 TableMaker(ret.tab,years,strip=T,strip.col=c('red','green'),alpha=0.7,border.col='grey',text.col='black',header.bcol='darkgrey',header.tcol='white',title=paste('Key return statistics for',colnames(monthly.ret[,panel$asset]),sep=' '))

par(mai=c(0.3,0.15,0,0))
 TableMaker(ret.stat,c('Measures','Values'),strip=F,strip.col=c('red','green'),alpha=0.6,border.col='lightgrey',text.col='black',header.bcol='darkgrey',header.tcol='white',title='')

par(mai=c(0,0.25,0.15,0.25))
 plot(xaxt='n',yaxt='n',stock.ret[,panel$asset],col='blue',xlab='',ylab='',cex.main=0.85,cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75)
 lines(bench.ret,col='green')
 grid()
 legend('topright',fill=c('blue','green'),legend=c('Asset','Market'),ncol=1,bg='white',bty='n',border=NA,cex=0.7)
 mtext(side=2,'Monthly Return',cex=0.7)

par(mai=c(0,0.25,0,0.25))
 #plot(xaxt='n',yaxt='n',stock.ann[,panel$asset],col='blue',xlab='',ylab='',cex.main=0.85,cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75)
 #lines(bench.ann,col='green')
 #mtext(side=2,'Annual Return',cex=0.7)
 plot(ylim=c(min(mret)-0.005,max(mret)+0.005),std,mret,xaxt='n',yaxt='n',xlab='',ylab='',cex.main=0.85,cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75,pch=0,col='darkgrey')
 points(std[panel$asset],mret[panel$asset],cex=1,col='red',pch=15)
 points(apply(market.ret,2,sd),mean(market.ret),cex=1,col='green',pch=17)
 legend('topright',pch=c(15,17,0),col=c('red','green','darkgrey'),legend=c('Choice','Market','Assets'),ncol=1,bg='white',bty='n',border='black',cex=0.7)
 abline(v=std[panel$asset],h=mret[panel$asset],col='red')
 text(x=min(std)+0.025,max(mret)+0.005,col='black',paste('Ret: ',round(mret[panel$asset],4),' Std: ',round(std[panel$asset],4),sep=' '),cex=0.75)
 par(mai=c(0.45,0.25,0,0.25))
 ret.temp <- matrix(stock.ret[,panel$asset],nrow=12)
 boxplot((ret.temp),notch=F,col=ifelse(colMeans(t(ret.temp))>0,'gold','red'),cex.main=0.85,cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75,xaxt='n')
 axis(side=1,labels=years,at=1:length(years),cex.axis=0.75)

 panel
 }
 panel<- rp.control(asset=1)
 rp.slider(panel,asset,1,(ncol(monthly.ret)), action=draw,resolution=1,showvalue=TRUE)
}

This generates the following plot of key return statistics.

(plot: key return statistics dashboard)

High Quality Pdf : here

An animated gif helps visualise the return characteristics across risky assets as follows.

(animated gif: bolo_1)

Key risk statistics can also be conveniently summarised in the following dashboard of plots.


#Time series regression

ts.list <- list()
	ts.list$fit <- list()
  ts.list$fitted <- list()
  ts.list$resid <- list()
  ts.list$alphas <-NULL
  ts.list$betas <- NULL
  ts.list$alphas.p <- NULL
  ts.list$betas.p <- NULL

for(i in 1:ncol(monthly.ret))
{
	ts.list$fit[[i]] <- lm(ex.ret[,i]~exm.ret)
	ts.list$fitted[[i]] <- as.matrix(ts.list$fit[[i]]$fitted.values)
	ts.list$resid[[i]] <- as.matrix(ts.list$fit[[i]]$residuals)
	ts.list$alphas <- rbind(ts.list$alphas,ts.list$fit[[i]][[1]][1])
	ts.list$betas <- rbind(ts.list$betas,ts.list$fit[[i]][[1]][2])
	ts.list$alphas.p <- rbind(ts.list$alphas.p,summary(ts.list$fit[[i]])$coefficients[1,4])   # intercept p-value
	ts.list$betas.p <- rbind(ts.list$betas.p,summary(ts.list$fit[[i]])$coefficients[2,4])     # slope p-value
  print(paste('Creating list object for: ',colnames(monthly.ret)[i]))
}

sp.risk <- round(matrix(unlist(lapply(ts.list$resid,sd)),ncol=1),2)
sp.var <- sp.risk^2
mkt.var <- apply(bench.ret,2,sd)^2

sd.asset <- round(apply(monthly.ret,2,sd),2)
sd.var <- sd.asset^2

dec.mat <- cbind(colnames(monthly.ret),round(ts.list$betas,2),sd.asset,sp.risk)
colnames(dec.mat) <- c('Company','Beta','Asset SD','Residual SD')

mkt.risk <- (ts.list$betas^2)*mkt.var
dec.p<-cbind(mkt.risk,sp.var)
colnames(dec.p)<-c('systematic','idiosyncratic')
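
# (added sketch) sanity check: systematic + idiosyncratic variance should roughly add up to each
# asset's total variance, up to the rounding applied above and the use of excess vs raw returns
dec.check <- rowSums(dec.p) - sd.var
summary(round(as.numeric(dec.check),4))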

#Key Risk statistics
windows()
layout(matrix(c(1,2,2,1,3,4,1,5,5,1,6,6),nrow=4,ncol=3,byrow=T))

if (interactive()) {
  draw <- function(panel) {

  	a<-monthly.ret[,panel$asset,drop=F]

#risk measure table
std.m <- round(mon.std(a),3)
std.ann <- round(ann.std(a),3)
gain <- round(gain.std(a),3)
loss<-round(loss.std(a),3)
dd<-round(down.dev(a,mean(rf.ret)),3)
gl <- round(gl.risk(a),3)
sk <- round(skewness(a),3)[1]
kt <- round(kurtosis(a),3)[1]
mdd <- round(maxDrawdown(a),3)
asset.var <- round(as.numeric(VaR(a)),2)
asset.cvar <- round(as.numeric(CVaR(a)),2)
asset.jbt <- round(as.numeric(jarque.bera.test(a)[1]),2)
asset.starr <- round(as.numeric(colMeans(ex.ret)[1]/CVaR(a)),2)

sp.m <- as.numeric(round(sharpe.r (a,rf.ret),3))[1]
sp.a <- as.numeric(round(sharpe.r (a,rf.ret),3))[2]

calm <- as.numeric(round(CalmarRatio(a),3))
terl <- as.numeric(round(SterlingRatio(a),3))
sort <- as.numeric(round(SortinoRatio(a),3))
omg <- as.numeric(round(Omega(a),3))

annot <- c('STD(monthly)','STD(annualised)','STD(gain)','STD(loss)','DrawD','G/L Ratio','Skewness','Kurtosis','Max DD','Value@Risk','CValue@Risk','Jarque-Bera','STARR','Sharpe(monthly)','Sharpe(annual)','Calmar','Sterling','Sortino','Omega')
tab <- cbind(annot,rbind(std.m,std.ann,gain,loss,dd,gl,sk,kt,mdd,asset.var,asset.cvar,asset.jbt,asset.starr,sp.m,sp.a,calm,terl,sort,omg))

par(mai=c(0,0.1,0.3,0))
TableMaker(apply(tab,2,rev),c('Measure','Values'),strip=F,strip.col=c('red','green'),alpha=0.6,border.col='lightgrey',text.col='black',header.bcol='gold',header.tcol='black',title='Absolute\n Risk (adjusted) measures')

#hist plot
par(mai=c(0,0.5,0.3,0.2))

m<-hist(a,breaks=seq(min(a),max(a),length.out=100),plot=F)
hist(xaxt='n',main=paste('Histogram of Returns for',colnames(a)),xlab='',cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75,cex.main=0.85,a,col=ifelse(m$breaks<as.numeric(VaR(a)),'red',ifelse(m$breaks<0,'orange',ifelse(m$breaks<as.numeric(abs(VaR(a))),'blue','green'))),breaks=seq(min(a),max(a),length.out=100))
legend(ncol=1,'topright',fill=c('red','orange','blue','green'),legend=c('Shortfall','Negative','Positive','Upside'),bg='white',bty='n',cex=0.7)
abline(v=VaR(a),col='black',lwd=1,lty=2)
text(x=VaR(a),y=7,pos=4,paste('VaR\n',round(VaR(a),2)),cex=0.7)
abline(v=CVaR(a),col='black',lwd=1,lty=2)
text(x=CVaR(a),y=7,pos=2,paste('CVaR\n',round(CVaR(a),2)),cex=0.7)

#Variance decomp
par(mai=c(0,0.2,0.4,0))
pie(main='Variance Decomposition',cex=0.85,labels=c(paste('systematic\n',round((dec.p[panel$asset,1]*100),3),'%'),paste('idiosyncratic\n',round((dec.p[panel$asset,2]*100),3),'%')),dec.p[panel$asset,],col=c('darkblue','gold'),cex.main=0.85)

#Relative risk adj return measures
alpha.ann <- round(((1+ts.list$alphas[panel$asset])^12)-1,3)
jens.alpha <- round(mean(a)-(mean(rf.ret)+ts.list$betas[panel$asset]*(mean(exm.ret))),3)
trey.r <- round((((1+mean(a))^12-1) - ((1+mean(rf.ret))^12-1)) / ts.list$betas[panel$asset],3)   # annualised excess return per unit of beta
beta <- round(ts.list$betas[panel$asset],3)
inf.r <- round(InformationRatio(a,market.ret[,1]),3)

annot<-c('Annualised Alpha','Jensen-alpha','Treynor','Beta','Information Ratio')
tel.tab <- cbind(annot,rbind(alpha.ann,jens.alpha,trey.r,beta,inf.r))

par(mai=c(0,0.1,0.4,0.2))
TableMaker(apply(tel.tab,2,rev),c('Measure','Values'),strip=F,strip.col=c('red','green'),alpha=0.6,border.col='lightgrey',text.col='black',header.bcol='gold',header.tcol='black',title='Relative\nRisk adjusted return measures')

#drawdowns

par(mai=c(0,0,0.2,0))
d.tab <-table.Drawdowns(a)
TableMaker(apply(d.tab,2,rev),colnames(d.tab),strip=F,strip.col=c('red','green'),alpha=0.6,border.col='lightgrey',text.col='black',header.bcol='gold',header.tcol='black',title='Drawdown Table & Chart')

par(mai=c(0.25,0.15,0,0.15))
chart.Drawdown(xlab='',main='',a,cex.main=0.8)

panel
  }
	panel<- rp.control(asset=1)
	rp.slider(panel,asset,1,(ncol(monthly.ret)), action=draw,resolution=1,showvalue=TRUE)
}

A snapshot of the result follows.

(snapshot: key risk statistics dashboard)

 

The pdf file can be found here

To make the connection between the traditional mean variance framework and the equilibrium condition embodied by the CAPM more tangible, a quick and more concrete supplement to the previous post may be helpful. Instead of simulating asset characteristics and subsequently feeding these as inputs into the optimisation process, we will simply assume a particular set of asset returns, impose a variance structure among them and find the minimum variance and tangency portfolios analytically. To simplify this process, several functions are written to once again draw the frontiers, capital allocation lines and locate assets/portfolios of interest.


###################################################################################################
###################################################################################################
#Supplement
###################################################################################################
###################################################################################################

Frontiers <- function (data.obj){
    er <- t(data.obj$mean.ret)
    risk.free <- data.obj$risk.free
	cov.mat <- data.obj$cov.matrix
	ones <- rep(1, length(er))
	target <- seq(0,1,length.out=100)
    # bordered system from the Lagrangian of the minimum-variance problem with a target return
    # and a full-investment constraint: [2*Sigma, mu, 1; mu', 0, 0; 1', 0, 0]
    top <- cbind(2*cov.mat, er, ones)

	temp<-rbind(t(er), ones)
    bot <- cbind(temp, matrix(0,2,2))
    A <- rbind(top, bot)
	w<-matrix(0,nrow=length(target),ncol=length(er))
	er.port <- matrix(0,nrow=length(target),ncol=1)
	sd.port <- matrix(0,nrow=length(target),ncol=1)

	for (i in 1:length(target)){
		b.target <- as.matrix(c(rep(0, length(er)), target[i], 1))
    x <- solve(A, b.target)
    w[i,] <-x[1:length(er)]
		er.port[i] <- crossprod(er,w[i,])
		sd.port[i] <- sqrt(w[i,] %*% cov.mat %*% w[i,])
	}

	cov.mat.inv <- solve(cov.mat)
    # tangency (maximum Sharpe ratio) portfolio: weights proportional to Sigma^-1 (mu - rf)
    w.t <- cov.mat.inv %*% (er - risk.free)
    w.t <- as.vector(w.t/sum(w.t))
    er.t <- crossprod(w.t,er)
    sd.t <- sqrt(t(w.t) %*% cov.mat %*% w.t)

frontiers.data <-	list()
	frontiers.data$weights <- w
	frontiers.data$port.ret <- er.port
	frontiers.data$port.risk <- sd.port
	frontiers.data$tang.weights <- w.t
	frontiers.data$tang.ret <- er.t
	frontiers.data$tang.risk <- sd.t
	return(frontiers.data)
}

CAL.Draw <- function(data.obj,frontiers.data,colour,lty){
	total.risk <- seq(0,1,by=0.01)
	risk.free <- data.obj$risk.free
	asset.return <- frontiers.data$tang.ret
	asset.risk <- frontiers.data$tang.risk

	price.risk <-c()
	total.return <-matrix(rep(0,length(total.risk)),nrow=length(total.risk),ncol=1)
 	price.risk <- (asset.return-risk.free)/asset.risk
    total.return <- risk.free+(price.risk*total.risk)
 	CAL.complete<-cbind(total.risk,total.return)

	lines(CAL.complete,col=colour,type='l',lwd=1,lty=lty)
	points(x=asset.risk,y=asset.return,pch=17,col=colour)
    text(labels='Tangency\nPortfolio',x=asset.risk,y=asset.return,pos=3,col='black',cex=0.65)

}

Frontier.Draw <- function(data.obj,frontiers.data,colour,add,lty,title){
	if(add=='new'){
		par(xaxs="i", yaxs="i")
		plot(main=title,xlim=c(0,0.2),ylim=c(min(data.obj$mean.ret)-0.3,max(data.obj$mean.ret)+0.3),x=frontiers.data$port.risk,y=frontiers.data$port.ret,lty=lty,type='l',col=colour,lwd=1.5,xlab='Portfolio Risk',ylab='Portfolio Return',cex.lab=0.75,cex.axis=0.7,cex.main=0.75)
	}else if(add=='add'){
		lines(x=frontiers.data$port.risk,y=frontiers.data$port.ret,type='l',col=colour,lwd=1.5,main='',xlab='Portfolio Risk',ylab='Portfolio Return',cex.lab=0.7,cex.axis=0.7)
	}
		points(x=sqrt(diag(data.obj$cov.matrix)),y=data.obj$mean.ret,col=colour,pch=19)
	  points(x=0.001,y=data.obj$risk.free,pch=15,col='dark green')
	  text(labels=paste('Asset',1:length(data.obj$mean.ret)),x=sqrt(diag(data.obj$cov.matrix)),y=data.obj$mean.ret,pos=4,col='black',cex=0.65)
 	  text(labels='risk free\nasset',x=0.001,y=data.obj$risk.free,pos=4,col='black',cex=0.65)
}


The objective is to sketch some of the implications of the CAPM in the context of the mean variance framework, particularly with respect to the frontiers that result when markets are in (dis)equilibrium. The base case involves 2 risky assets, the minimum variance frontier that results from these two assets, the tangency portfolio (the combination of these assets that maximises the Sharpe ratio), the risk free asset and the capital allocation line (the combination of the risk free asset with the tangency portfolio). Once this basic case has been established, the effect of adding a third risky asset to the universe of investable assets will be examined in terms of the resulting shifts in the recalculated frontiers/lines and changes in tangency portfolios. To implement these ideas:


###########################################################################
#
# Quick Implentations
#
############################################################################
windows()
 layout(matrix(c(1,2,3,4),ncol=2,nrow=2,byrow=T))
 par(mar=c(4,4,3,1))

#base case
 two.asset.data <- list()
 two.asset.data$mean.ret <- matrix(c(0.800,0.0500),nrow=1,ncol=2,dimnames=list(c('mean return'),c('asset 1','asset 2')))
 two.asset.data$cov.matrix <- matrix(c(0.0256,0,0,0.0144),nrow=2,ncol=2,byrow=T,dimnames=list(c('asset 1','asset 2'),c('asset 1','asset 2')))
 two.asset.data$risk.free <- 0.02
 b.ret <- two.asset.data$mean.ret[2]
 b.var <- two.asset.data$cov.matrix[2]

base<-Frontiers(two.asset.data)
 Frontier.Draw(two.asset.data,base,c('red','blue'),'new',lty=1,'Base Case\n2 Risky Assets')
 CAL.Draw(two.asset.data,base,'black',lty=1)

#new asset with alpha of 0
 Frontier.Draw(two.asset.data,base,c('red','blue'),'new',lty=1,'New Asset\nAlpha = 0')
 CAL.Draw(two.asset.data,base,'black',lty=1)

z.data <- list()
 z.data$mean.ret <- matrix(c(0.800,0.0500,0+b.ret),nrow=1,ncol=3,dimnames=list(c('mean return'),c('asset 1','asset 2','asset 3')))
 z.data$cov.matrix <- matrix(c(0.0256,0,0,0,0.0144,0,0,0,0.0144),nrow=3,ncol=3,byrow=T,dimnames=list(c('asset 1','asset 2','asset 3'),c('asset 1','asset 2','asset 3')))
 z.data$risk.free <- 0.02

th<-Frontiers(z.data)
 Frontier.Draw(z.data,th,c('red','blue','dark green'),'add',2)
 CAL.Draw(z.data,th,'dark green',2)

#new asset with alpha of -0.2
 Frontier.Draw(two.asset.data,base,c('red','blue'),'new',lty=1,'New Asset\nAlpha = -0.2')
 CAL.Draw(two.asset.data,base,'black',lty=1)

z.data <- list()
 z.data$mean.ret <- matrix(c(0.800,0.0500,-0.2+(b.ret)),nrow=1,ncol=3,dimnames=list(c('mean return'),c('asset 1','asset 2','asset 3')))
 z.data$cov.matrix <- matrix(c(0.0256,0,0,0,0.0144,0,0,0,0.0144),nrow=3,ncol=3,byrow=T,dimnames=list(c('asset 1','asset 2','asset 3'),c('asset 1','asset 2','asset 3')))
 z.data$risk.free <- 0.02

th<-Frontiers(z.data)
 Frontier.Draw(z.data,th,c('red','blue','dark green'),'add',2)
 CAL.Draw(z.data,th,'dark green',2)

#new asset with alpha of +0.5
 Frontier.Draw(two.asset.data,base,c('red','blue'),'new',lty=1,'New Asset\nAlpha = 0.5')
 CAL.Draw(two.asset.data,base,'black',lty=1)

z.data <- list()
 z.data$mean.ret <- matrix(c(0.800,0.0500,0.5+(b.ret)),nrow=1,ncol=3,dimnames=list(c('mean return'),c('asset 1','asset 2','asset 3')))
 z.data$cov.matrix <- matrix(c(0.0256,0,0,0,0.0144,0,0,0,0.0144),nrow=3,ncol=3,byrow=T,dimnames=list(c('asset 1','asset 2','asset 3'),c('asset 1','asset 2','asset 3')))
 z.data$risk.free <- 0.02

th<-Frontiers(z.data)
 Frontier.Draw(z.data,th,c('red','blue','dark green'),'add',2)
 CAL.Draw(z.data,th,'dark green',2)

#Gif 1
id <- seq(-0.25,0.95,by=0.015)
l <- length(id)

png(file="example%02d.png", width=600, height=500)
for(i in 1:l){
 layout(matrix(c(1,1,1,2,1,1,1,3,1,1,1,4),3,4,byrow=T))
 par(mai=c(0.7,0.3,0.5,0.3))
 Frontier.Draw(two.asset.data,base,c('red','blue'),'new',lty=1,paste('New Asset\nAlpha =', id[i]))
 CAL.Draw(two.asset.data,base,'black',lty=1)

z.data <- list()
 z.data$mean.ret <- matrix(c(0.800,0.0500,id[i]+(b.ret)),nrow=1,ncol=3,dimnames=list(c('mean return'),c('asset 1','asset 2','asset 3')))
 z.data$cov.matrix <- matrix(c(0.0256,0,0,0,0.0144,0,0,0,0.0144),nrow=3,ncol=3,byrow=T,dimnames=list(c('asset 1','asset 2','asset 3'),c('asset 1','asset 2','asset 3')))
 z.data$risk.free <- 0.02

th<-Frontiers(z.data)
 Frontier.Draw(z.data,th,c('red','blue','dark green'),'add',2)
 CAL.Draw(z.data,th,'dark green',2)

 par(mai=c(0.5,0.1,0.5,0.5))
 stackpoly(col=c('red','blue','dark green'),cex=0.75,cex.axis=0.8,cex.lab=0.8,cex.main=0.8,th$weights,xlab='Portfolios on the frontier',main='Frontier weights')
 par(mai=c(0.5,0.1,0.5,0.5))
 barplot(cex=0.75,cex.axis=0.8,cex.lab=0.8,cex.main=0.8,space=0,beside=T,th$tang.weights,col=c('red','blue','dark green'),ylab='Weights',xlab='Risky assets',main='Tangency Portfolio')
 plot.new()
 par(mai=c(0.7,0.1,0.5,0.5))
 legend('top',fill=c('red','blue','dark green'),legend=c(paste('Asset',1:3)),ncol=1,bg='white',bty='n',cex=0.85)

}
dev.off()
shell("convert -delay 10 *.png example_1.gif")

#gif 2
id <- seq(0.001,0.0256,by=0.001)
l <- length(id)

png(file="bobo%02d.png", width=600, height=500)
for(i in 1:l){
 layout(matrix(c(1,1,1,2,1,1,1,3,1,1,1,4),3,4,byrow=T))
 par(mai=c(0.7,0.3,0.5,0.3))
 Frontier.Draw(two.asset.data,base,'red','new',lty=1,paste('New Asset\nRisk =', id[i]))
 CAL.Draw(two.asset.data,base,'red',lty=1)

z.data <- list()
 z.data$mean.ret <- matrix(c(0.800,0.0500,b.ret),nrow=1,ncol=3,dimnames=list(c('mean return'),c('asset 1','asset 2','asset 3')))
 z.data$cov.matrix <- matrix(c(0.0256,0,0,0,0.0144,0,0,0,id[i]),nrow=3,ncol=3,byrow=T,dimnames=list(c('asset 1','asset 2','asset 3'),c('asset 1','asset 2','asset 3')))
 z.data$risk.free <- 0.02

th<-Frontiers(z.data)
 Frontier.Draw(z.data,th,'dark green','add',2)
 CAL.Draw(z.data,th,'dark green',2)

 par(mai=c(0.5,0.1,0.5,0.5))
 stackpoly(col=c('red','blue','dark green'),cex=0.75,cex.axis=0.8,cex.lab=0.8,cex.main=0.8,th$weights,xlab='Portfolios on the frontier',main='Frontier weights')
 par(mai=c(0.5,0.1,0.5,0.5))
 barplot(cex=0.75,cex.axis=0.8,cex.lab=0.8,cex.main=0.8,space=0,beside=T,th$tang.weights,col=c('red','blue','dark green'),ylab='Weights',xlab='Risky assets',main='Tangency Portfolio')
 plot.new()
 par(mai=c(0.7,0.1,0.5,0.5))
 legend('top',fill=c('red','blue','dark green'),legend=c(paste('Asset',1:3)),ncol=1,bg='white',bty='n',cex=0.85)

}
dev.off()

shell("convert -delay 10 *.png bobo_1.gif")


The third asset that will be added to the initial investable universe (comprising two assets) will be related to the CAPM under one of the following restrictions:

[1] Alpha is 0

  1. The expected return on the asset is correctly related to the tangency portfolio.
  2. No mispricing of the security relative to the level predicted by the CAPM.
  3. CAPM holds
  4. Equilibrium prevails.

[2] Alpha is -0.2

  1. The expected return on the new asset is inconsistent with the CAPM return.
  2. Mispricing of the security; the expected return of the asset is lower than that predicted by the CAPM by 0.2. This implies that the asset is overvalued by the market in terms of price and mean reversion in the future would dictate the asset price to decrease (returns to increase) until the CAPM-consistent level is reached.
  3. CAPM fails.
  4. Markets are in disequilibrium because the original two-asset tangency portfolio can be improved upon by shorting the new asset and reaching a new tangency portfolio with a higher Sharpe ratio (steeper CAL slope).

[3] Alpha is +0.5

  1. The expected return on the new asset is inconsistent with CAPM returns.
  2. Mispricing of the security; the expected return of the asset is higher than that predicted by the CAPM by 0.5. This implies that the asset is undervalued by the market in terms of price and mean reversion in the future would dictate a rise in asset price (returns to decrease) until the CAPM-consistent level is reached.
  3. CAPM fails.
  4. Markets are in disequilibrium because the original two-asset tangency portfolio can be improved upon by taking a long position in the new asset and thereby reaching a new tangency portfolio with a higher Sharpe ratio (steeper CAL slope).

(plots: base case and the three new-asset scenarios)

The top-left plot depicts the situation of two risky assets; the remaining plots show the effect of adding a third risky asset to the universe of investable assets. In all three cases, the minimum variance frontier shifts to the left after inclusion of the third asset, implying an expanded set of investment opportunities with 3 assets relative to the initial scenario of only 2 investable securities. More succinctly, the investment combinations available when there are more securities in the market are such that the resulting portfolios offer the investor the same return for less risk. In this particular case, since the covariance matrix has been constructed to impose zero correlation across all three assets, adding a new risky asset clearly confers diversification benefits; these would be less pronounced, or absent altogether, if the correlation across the risky assets were closer to +1 or equal to it.
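
A quick back-of-the-envelope check using the base-case variances from the code above (0.0256 and 0.0144, i.e. standard deviations of 0.16 and 0.12) makes the diversification effect concrete for an equally weighted mix of two uncorrelated assets:

#diversification check: equal weights, zero correlation, base-case variances
sqrt(0.5^2*0.0256 + 0.5^2*0.0144)   # = 0.1, below both individual risks of 0.16 and 0.12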

Adding a CAPM consistent asset (alpha = 0) to the existing universe of two stocks would not incentivise investors to change the composition of their tangency portfolio, given that the Sharpe ratio of the resulting tangency portfolio would remain unchanged. In light of the previous post, when the CAPM holds and markets are in equilibrium, the marginal benefit to cost ratio of all stocks is identical and investors have no reason to change portfolio composition. The notion of an unchanged Sharpe ratio can be visually confirmed by the overlay of the old and new capital allocation lines (whose slopes are essentially the highest Sharpe ratio achievable through a combination of the available risky assets).

Adding an asset that is overvalued (alpha = -0.2) to the initial scenario amounts to identifying an asset whose price is expected to decrease in the future. Rational investors would naturally short this asset in anticipation of future downward revisions in price. The slope of the new capital allocation line increases, indicating a higher Sharpe ratio being conferred to the new tangency portfolio, in which the third asset is held with a negative weight. Visually, the new tangency portfolio is located to the right of the original tangency portfolio, implying greater risk and return, perhaps on account of the short position associated with the new asset.

Adding an asset that is undervalued (alpha = 0.5) to the base case would alert investors to the existence of an asset whose price is expected to increase if and when future price reversions occur. Rational investors would naturally take a long position in such an asset. The slope of the new capital allocation line is steeper than that of the original, implying a superior Sharpe ratio being conferred to the new optimal risky portfolio.

I have updated this post with the following animations, which show how the tangency weights as well as the generic portfolio weights on the minimum variance frontier change when the third asset’s mean return (risk) is varied while its risk (mean return) is held constant.

(animation: example_1.gif)

 

(animation: bobo_1.gif)

To operationalise the preceding Mean-Variance analysis, estimates of asset returns and variances are required. Given the expected returns and the variance/covariance matrix, the procedure determines an optimal portfolio from the selection of portfolios available on the mean-variance frontier. Since expected returns are relatively difficult to forecast, a model of what returns should be, on a normatively meaningful basis, is of value.

The CAPM is an equilibrium model specifying a relation between expected returns and covariances for all assets. Equilibrium characterises a situation in which investors have no incentive to deviate from their current condition. Despite the empirical controversies surrounding CAPM estimates and the validity of the equilibrium argument, it should be noted that the Markowitz procedure itself remains relevant and instructive for most investors.

This post will continue to sketch a small selection of the main results in asset pricing. Here the CAPM shall be (loosely) derived from the Mean-Variance analysis touched upon in the preceding section. Implications of the validity/failure of the model will be examined using simulated data in the context of:


  1. Security Characteristic Lines (SCL)
    • A time series regression which plots the excess asset return against the excess market return.
  2. Security Market Lines (SML)
    • A cross section regression which plots the average excess asset returns against the slope coefficient obtained from the time series regression above.
  3. Mean Variance Contours
    • Variation in the mean variance contours on the basis of whether or not CAPM holds for a simulated asset.


[The Capital Asset Pricing Model]

Derivation of the CAPM relationship requires a set of assumptions to hold:

  1. No Transaction Costs
  2. All assets are tradable and are infinitely divisible
  3. No Taxes
  4. No individual can affect security prices
  5. Investors care exclusively about expected returns and variances
  6. Returns are normally distributed
  7. Investors have a quadratic utility function
  8. Unlimited short selling
  9. Unlimited borrowing and lending
  10. Homogeneous expectations.

Assumptions 5-10 imply that every investor solves the identical optimal risky portfolio problem delineated in the previous post. This implies that all investors draw the same CAL and see the same efficient frontier. This aggregate CAL, which is identical across all agents, is called the Capital Market Line (CML).

The CAPM, or its graphical counterpart the SML, essentially emerges as a relation between expected asset returns and risk when [1] Markowitz considerations regarding the tangency portfolio and [2] equilibrium notions are combined.


:: Markowitz Considerations :

One implication of the two fund theorem sketched previously is that all investors must want to hold the market portfolio (tangency portfolio) since it maximises the Sharpe ratio. Assuming that every investor faces the same universe of assets and operates under the assumptions listed above, each individual would rationally draw the same efficient frontier, the same CAL and hold the same tangency portfolio. While the amount of capital invested in the risk free asset versus this tangency portfolio may vary across individuals on the basis of risk preferences, the composition of the tangency portfolio is identical across all agents.


:: Equilibrium Considerations :

Intuitively, an equilibrium describes a condition in which every investor is content with their portfolio holdings. If the opposite were true, investors would adjust the composition of assets within their portfolios (i.e. demanding assets that are undervalued and supplying assets that are overvalued by the market), hence affecting asset prices. This implies that in equilibrium everyone must be optimally invested: if no investor is incentivised to change his portfolio composition, there is no scope for increasing the Sharpe ratio of the optimal portfolio currently held.


:: Intuition For The Standard CAPM :

Investors will only hold a security in their portfolio if its marginal return (the additional excess return from holding an extra unit of the security in the portfolio) can be justified by its marginal risk (the additional volatility from holding an extra unit of the security in the portfolio). If equilibrium characterises a condition in which no investor wishes to alter portfolio composition, it must be the case that no asset in the investable universe commands a marginal benefit that more than compensates for the marginal cost derived from its inclusion. More formally, it must be the case, for example, that:

(E[R_GM] - rf) / cov(R_GM, R_market)  =  (E[R_market] - rf) / var(R_market)

In equilibrium, it must be the case that the marginal benefit to cost ratio is identical across all assets. Otherwise, investors could benefit from shifting assets in their portfolio. Intuitively, if the marginal benefit to cost ratio of GM were greater (less) than that of the market portfolio, an investor would be incentivised to increase (decrease) his holding of GM in the optimal portfolio. Hence only in equilibrium, where the ratio is identical across assets, will the rational investor be content with the current condition and not alter portfolio composition. 

Summarily: according to Markowitz, investors should hold the tangency portfolio; equilibrium considerations imply investors must want to hold the tangency portfolio; the CAPM implies that the tangency portfolio is the market portfolio.

To yield the SML (essentially the graphical representation of the CAPM), the previous expression can be rearranged to give
E[R_GM] - rf = [cov(R_GM, R_market) / var(R_market)] * (E[R_market] - rf) = beta_GM * (E[R_market] - rf)

Adding the risk free rate to both sides yields the standard CAPM.

To illustrate the difference between CAPM consistent and inconsistent assets and some of the relevant implications, the following functions are created: DrawSecurityLines, CAPM.Frontiers and Fitting.model.

It should be noted that portfolio optimisation was changed from before to include the short sales option. 

###################################################################################################
#
#  Market Lines
#
###################################################################################################

DrawSecurityLines <- function(risky.assets,frontier.points,fitted.results){

    assets.ret <- risky.assets[[1]]
	assets.mean.ret <- risky.assets[[5]]
	risk.free <- risky.assets[[4]]
	shock <- risky.assets[[6]]
	num.assets <- ncol(assets.ret)
	num.obs <- nrow(assets.ret)

	tangency.sharpe.ret <- frontier.points[[4]][1]
	tangency.weights <- as.matrix(frontier.points[[4]][-(1:3)])
    tangency.excess.ret <- fitted.results[[3]]
	assets.excess.ret <- assets.ret - risk.free

	#Consistent with CAPM
	alphas.raw <- as.matrix(fitted.results[[1]][[1]][1,])
	alphas.pvalue <- matrix(0,dimnames=list(c('P-values:'),c(paste('Asset:',1:num.assets))),ncol=num.assets)
	alphas.adj <- matrix(0,dimnames=list(c('P-values:'),c(paste('Asset:',1:num.assets))),ncol=num.assets)

	for(i in 1:num.assets){
		alphas.pvalue[i] <- (summary(fitted.results[[1]])[[i]])$coefficients[1,4]
	}

	betas.raw <- as.matrix(fitted.results[[1]][[1]][2,])
    betas.pvalue <- matrix(0,dimnames=list(c('P-values:'),c(paste('Asset:',1:num.assets))),ncol=num.assets)

	for(i in 1:num.assets){
		betas.pvalue[i] <- (summary(fitted.results[[1]])[[i]])$coefficients[2,4]
	}

	fitted.vals <- fitted.results[[1]]$fitted.values
	capm.consistent <- matrix(dimnames=list(c(paste('Obs:',1:num.obs)),c(paste('Asset',1:num.assets))),0,nrow=num.obs,ncol=num.assets)
	for(i in 1: num.assets){
	 	capm.consistent[,i] <- alphas.adj[i]+betas.raw[i]*tangency.excess.ret
	 }
	avg.ret <- colMeans(assets.excess.ret)

	fitted.new <- lm (avg.ret~betas.raw)
	avg.fitted.ret <- fitted.new$fitted.values
	col.ind <- rainbow(num.assets)

	#InConsistent with CAPM
	capm.inconsistent <- matrix(dimnames=list(c(paste('Obs:',1:num.obs)),c(paste('Asset',1:num.assets))),0,nrow=num.obs,ncol=num.assets)

	for(i in 1: num.assets){
	 	capm.inconsistent[,i] <- assets.ret[,i]+shock[,i]
	 }

	in.assets.excess.ret <- capm.inconsistent-risk.free
	in.fitted <- lm(in.assets.excess.ret~tangency.excess.ret)
	in.alphas <- as.matrix(in.fitted[[1]][1,])
    in.betas <- as.matrix(in.fitted[[1]][2,])
	in.fitted.vals <- in.fitted$fitted.values

	avg.ret.c <- colMeans(in.assets.excess.ret)
    fitted.new.c <- lm(avg.ret.c~in.betas)
	avg.fitted.ret.c <- fitted.new.c$fitted.values

	#Characteristic Lines if CAPM held
  windows()
	  layout(matrix(c(1,1,1,1,2,2,3,3),4,2))   # SCL plot in the left column, alpha and beta barplots on the right
	  par(mar=c(4,4,2.5,1))
	  plot.y.min <- min(fitted.results[[1]]$fitted.values)-0.1
	  plot.y.max <- max(fitted.results[[1]]$fitted.values)+0.1

		plot(ylim=c(plot.y.min,plot.y.max),col=col.ind[1],xlim=c(min(tangency.excess.ret)-0.2,max(tangency.excess.ret)),xlab='Excess MVE returns',ylab='Excess asset returns',x=tangency.excess.ret,y=assets.excess.ret[,1],type='p',pch=21,cex=0.55,axes=T,cex.main=0.85,cex.lab=0.80,main='Security Characteristic Lines \n (when CAPM holds)')
		abline(v=0)

	  for(i in 2: num.assets){
			points(col=col.ind[i],x=tangency.excess.ret,y=assets.excess.ret[,i],type='p',pch=21,cex=0.55)
		}

	 for(i in 1:num.assets){
	 	lines(x=tangency.excess.ret,capm.consistent[,i],col=col.ind[i],lwd=1)
	 }

	legend(title='Simulated Assets',"topleft",fill=c(col.ind),legend=c(paste('Asset:',1:num.assets)),bg='white',bty='n',border='white',cex=0.75)

	par(mar=c(0.5,4,1,1))
	barplot(inside=T,col=col.ind,alphas.adj,horiz=F,beside=T,ylab='Alphas',xlab=NULL,cex=0.75,cex.main=0.85,cex.axis=0.75,cex.lab=0.75)

	par(mar=c(1.5,4,0.5,1))
	barplot(inside=T,col=col.ind,betas.raw,horiz=F,beside=T,ylab='Betas',xlab=NULL,cex=0.75,cex.main=0.85,cex.axis=0.75,cex.lab=0.75)

	#Characteristic lines if CAPM failed
	windows()
	  layout(matrix(c(1,1,1,1,2,2,3,3),4,2))   # SCL plot in the left column, alpha and beta barplots on the right
	  par(oma=c(0,0,0,0),mar=c(4,4,2.5,1))

	  plot(ylim=c(plot.y.min,plot.y.max),col=col.ind[1],xlim=c(min(tangency.excess.ret)-0.2,max(tangency.excess.ret)),xlab='Excess MVE returns',ylab='Excess asset returns',x=tangency.excess.ret,y=capm.inconsistent[,1],type='p',pch=21,cex=0.55,axes=T,cex.main=0.85,cex.lab=0.75,main='Security Characteristic Lines \n (when CAPM does not hold)')
		abline(v=0)

	  for(i in 1:num.assets){
	 	  lines(x=tangency.excess.ret,y=in.fitted.vals[,i],col=col.ind[i],lwd=1)
	  }
	  legend(xpd=NA,xjust=0,title='Simulated Assets',"topleft",fill=c(col.ind),legend=c(paste('Asset:',1:num.assets)),bg='white',bty='n',border='white',cex=0.75)

	  par(mar=c(0.5,4,1,1))
	  barplot(inside=T,col=col.ind,in.alphas,horiz=F,beside=T,ylab='Alphas',xlab=NULL,cex=0.75,cex.main=0.85,cex.axis=0.75,cex.lab=0.75)

	  par(mar=c(1.5,4,0.5,1))
	  barplot(inside=T,col=col.ind,in.betas,horiz=F,beside=T,ylab='Betas',xlab=NULL,cex=0.75,cex.main=0.85,cex.axis=0.75,cex.lab=0.75)

	#Security Market Line using fitted data (CAPM holds)
  windows()
  par(oma=c(0,0,0,0))
 		sml.matrix <- matrix(dimnames=list(c(paste('Asset:',1:num.assets)),c('Beta','Expected Return')),c(betas.raw,avg.ret),byrow=F,nrow=num.assets,ncol=2)
		plot(xlab=colnames(sml.matrix)[1],ylab=colnames(sml.matrix)[2],cex.main=0.85,cex.lab=0.75,cex.axis=0.75,main='Security Market Line \n (when CAPM holds)',type='l',x=betas.raw,y=avg.fitted.ret,col='black',lwd=1.5)
		points(sml.matrix,type='p',col=col.ind,pch=20,cex=1.5)
		text(labels=1:num.assets,x=sml.matrix[,1],y=sml.matrix[,2],pos=3,col='blue',cex=0.65)
	  points(x=1,tangency.sharpe.ret-risk.free,pch=8,col='red',cex=1.5)
  	text(labels='MVE Portfolio\n(Market Portfolio)',x=1,y=tangency.sharpe.ret,pos=1,col='red',cex=0.65)
	  legend(xpd=NA,xjust=0,title='Simulated Assets',"topleft",fill=c(col.ind),legend=c(paste('Asset:',1:num.assets)),bg='white',bty='n',border='white',cex=0.65)

	#Security Market Line if CAPM does not hold
	windows()
  par(oma=c(0,0,0,0))
 		sml.matrix <- matrix(dimnames=list(c(paste('Asset:',1:num.assets)),c('Beta','Expected Return')),c(in.betas,avg.ret.c),byrow=F,nrow=num.assets,ncol=2)
		plot(cex.main=0.85,cex.lab=0.75,cex.axis=0.75,main='Security Market Line \n (when CAPM does not hold)',type='l',col='black',lwd=1.5,x=in.betas,y=avg.fitted.ret.c)
		points(sml.matrix,type='p',col=col.ind,pch=20,cex=1.5)
		text(labels=1:num.assets,x=sml.matrix[,1],y=sml.matrix[,2],pos=3,col='blue',cex=0.65)
	  legend(xpd=NA,xjust=0,title='Simulated Assets',"topleft",fill=c(col.ind),legend=c(paste('Asset:',1:num.assets)),bg='white',bty='n',border='white',cex=0.65)
}


###################################################################################################
#
# New Frontiers
#
###################################################################################################

CAPM.Frontiers <- function(type,orig.optimal.port,risky.assets,fitted.data,rand.ind){

	risk.free <- risky.assets[[4]]
	num.assets <- ncol(risky.assets[[1]])
	num.obs <- nrow(risky.assets[[1]])
	shock <- (risky.assets[[6]][,rand.ind])*2
    tangency.sharpe.ret <- orig.optimal.port[[4]][1]-risk.free
	tangency.sharpe.risk <- orig.optimal.port[[4]][2]
	tangency.sharpe.var <-tangency.sharpe.risk^2
	tangency.ts <- fitted.data[[3]]

    col.ind <- rainbow(num.assets)

	if(type=='CAPM Prevails')
		{
		 asset.ts <- risky.assets[[1]][,rand.ind]
		 asset.ret <- mean(asset.ts)
		 asset.risk <- sd(asset.ts)
		 asset.var <- asset.risk^2
         cov.est <- cov(asset.ts,tangency.ts)

	  } else if(type=='CAPM Fails')
	   {
	   	  asset.ts <- risky.assets[[1]][,rand.ind]+shock
		  asset.ret <- mean(asset.ts)
	  	  asset.risk <- sd(asset.ts)
		  asset.var <- asset.risk^2
          cov.est <- cov(asset.ts,tangency.ts)
     	 }

	asset.weight <- seq(-1,1,length.out=100)
	port.weight <- 1- asset.weight

	new.frontier <- matrix(0,dimnames=list(c(paste('points:',1:100)),c('asset.weight','port.weight','expected.return','risk')),nrow=100,ncol=4)
	new.frontier[,1] <- asset.weight
	new.frontier[,2] <- port.weight
	new.frontier[,3] <- (asset.weight*asset.ret)+(port.weight*tangency.sharpe.ret)
	new.frontier[,4] <- sqrt((asset.weight^2)*asset.var+(port.weight^2)*tangency.sharpe.var+(2*cov.est*asset.weight*port.weight))

	lines(x=new.frontier[,4],y=new.frontier[,3],type='l',col=col.ind[rand.ind])

}


##############################################################
#
# Fitting the Model
#
##############################################################

Fitting.model <- function(risky.assets,frontier.points){
    assets.ret <- risky.assets[[1]]
	assets.mean.ret <- risky.assets[[5]]
	risk.free <- risky.assets[[4]]
	num.assets <- ncol(assets.ret)
	num.obs <- nrow(assets.ret)

	tangency.sharpe.ret <- frontier.points[[4]][1]
	tangency.sharpe.risk <- frontier.points[[4]][2]
	tangency.weights <- as.matrix(frontier.points[[4]][-(1:3)])

	#tangency.excess.ret <- rnorm(num.obs,tangency.sharpe.ret-risk.free,tangency.sharpe.risk)
	assets.excess.ret <- assets.ret - risk.free
	tangency.excess.ret <- (assets.excess.ret%*%tangency.weights)

	fitted.results <- lm(assets.excess.ret~tangency.excess.ret)
	return(list(fitted.results,tangency.sharpe.ret,tangency.excess.ret))
}


Again, we will simulate return/risk data for 25 assets, draw the CAL and the efficient frontier, locate the Sharpe-maximising portfolio on it, and plot the indifference contours (just for the sake of completeness). The security characteristic lines, security market lines and various mean variance contours will then be drawn and/or added.


sim.opt <- simulate.assets(25)
sim.mvf <- MVF(sim.opt)
cal.opt <- CAL.line('Sharpe',sim.opt,sim.mvf)
DrawCAL(cal.opt,sim.opt,sim.mvf,legend.draw=TRUE)
utility.contours('Sharpe',sim.opt,0.3,sim.mvf)
DrawMVF(sim.mvf,FALSE,NULL,NULL)

fit <- Fitting.model(sim.opt,sim.mvf)
for(i in 1:25){
 CAPM.Frontiers(type='CAPM Prevails',sim.mvf,sim.opt,fit,rand.ind=i)
}

#Draw Security Lines
DrawSecurityLines(sim.opt,sim.mvf,fit)


[Implications Of The CAPM ]

As alluded to at the beginning of this post, we would like to explore some of the main implications for assets in a universe where the CAPM holds versus one where the CAPM as an equilibrium argument fails. Using a unique colour code for each of the 25 simulated assets, the following 3 plots show some of the main results that emerge when the CAPM is true. On an operational note, I should mention that the time series data associated with the tangency portfolio, a necessary input to subsequent implementations, was obtained by multiplying the optimal asset weights by the corresponding risky asset returns at each point in time. I am not certain whether this approach to ‘reverse-engineer’ the time series return for the optimal portfolio is valid, but if it is, and judging from the regression results that ensue, the time series returns thus obtained are consistent with the CAPM.


:: Contours :

As with any post found on this blog, nothing should be regarded as instructive or written up for the purposes of reproducible research. A loose diary of tinkering is a more apt description. As a case in point, this is what we would expect if the code was correct (copied from google images):

(reference plots of mean variance contours, copied from Google Images)

Although the main point (as far as I understand it) remains valid, my plots do look a bit different and I do not know whether this is due to the nature of the simulated data points or whether this is due to wrong code. In any case the following descriptions are based on the fact/lie/notion that the code is right.

(plot: simulated assets, CML, minimum variance frontier and the asset/tangency-portfolio contours)

The plot above charts the 25 simulated data points in the risk-return space, along with the CAL (here the CML, since we are looking at all investors), the mean variance frontier, the global minimum variance portfolio, the optimal risky portfolio, and 25 additional frontiers, each of which connects a risky asset with the original optimal risky portfolio. As loosely sketched above, one implication of the CAPM is that any combination of a risky asset and the tangency portfolio (the market portfolio under the CAPM) must be efficient. In other words, if markets are in equilibrium, the marginal benefit to cost ratio across all assets/portfolios is identical, implying that investors have no incentive to change portfolio composition and ultimately that there is no scope to increase the Sharpe ratio by combining the optimal portfolio with any risky asset. With reference to the figure above, since all combinations of a risky asset and the tangency portfolio (coloured contours) fall within the original (black) frontier in such a manner as to achieve tangency with the original CML, we cannot combine a risky asset with the optimal risky portfolio to increase the Sharpe ratio. This can be visually confirmed by drawing a mental CAL (CML) between the risk free asset and any point on any coloured contour and noting that the current CAL (CML) still maintains the highest slope and hence the maximal Sharpe ratio.

To compare this with the case where the CAPM does not hold, let us ‘shock’ the asset returns by some random amount so that their marginal benefit to cost ratios diverge relative to the tangency portfolio and note the consequent effects on the same set of plots.

for(i in 1:25){
    CAPM.Frontiers(type='CAPM Fails',sim.mvf,sim.opt,fit,rand.ind=i)
}

We can see that the resulting coloured frontiers sometimes exceed the original CAL, implying that a higher Sharpe ratio could be achieved by combining a certain risky asset with the original tangency portfolio.


:: Security Characteristic Lines:

Another implication relating to the validity/failure of the CAPM that can be visually examined is captured in security characteristic lines, which plot the excess returns of an asset against those of the market portfolio (the tangency portfolio in our simulated case). If the CAPM prevails, we would expect the intercept from this time series regression not to be statistically different from 0. In other words, we would fail to reject the null hypothesis that the intercept term is 0, concluding that excess asset returns are related to excess market returns via beta.


Regressing excess asset returns on excess market returns when the CAPM holds yields the following p-values for the coefficients. The green line denotes a significance level of 0.10. Since the p-values of the alphas across assets are in excess of the selected significance level, we cannot reject the hypothesis that alpha is equal to zero for each asset individually. Hence, in subsequent implementations where the CAPM is deemed to prevail, alphas are set to 0.
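
A rough sketch of how those alpha p-values can be extracted and plotted (assuming the fit object returned by the earlier call to Fitting.model, whose first element is the multivariate lm fit) follows:

#alpha p-values across the 25 simulated assets (sketch)
alpha.p <- sapply(1:25, function(i) summary(fit[[1]])[[i]]$coefficients[1,4])
barplot(alpha.p, names.arg=paste('A',1:25), ylab='p-value of alpha', cex.names=0.6, las=2)
abline(h=0.10, col='green')   # 10% significance level referenced above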



The Security Characteristic Lines for the cases where the CAPM prevails/fails are shown below. As illustrated above, if the CAPM holds, the intercepts should be 0.

(chart: security characteristic lines when the CAPM holds)

Shocking our CAPM consistent returns by a random amount and regressing these excess returns against the original tangency portfolio returns yields the following chart.

(chart: security characteristic lines when the CAPM does not hold)


:: Security Market Lines:

Another visual difference between the cases where the CAPM prevails/fails can be captured by Security Market Lines, which are cross sectional regressions of average excess asset returns on the betas estimated in the previous time series regression.
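
As a small sketch (reusing the fit and sim.opt objects from above, and the list layout assumed by DrawSecurityLines, in which element [[1]] holds the simulated asset returns and [[4]] the risk free rate), the SML regression underlying these plots amounts to:

#security market line: cross sectional regression of mean excess returns on betas (sketch)
betas.hat <- coef(fit[[1]])[2,]                       # slope row of the first-pass (time series) fit
mean.ex   <- colMeans(sim.opt[[1]] - sim.opt[[4]])    # average excess return of each simulated asset
sml.fit   <- lm(mean.ex ~ betas.hat)
plot(betas.hat, mean.ex, pch=19, xlab='Beta', ylab='Mean excess return')
abline(sml.fit)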


If the CAPM holds, all assets are correctly priced and plot directly on the line. The market portfolio (the tangency portfolio formed from the simulated assets) has a beta of 1 since it covaries perfectly with itself.

(chart: security market line when the CAPM holds)

Once we shock the CAPM consistent returns, assets become mispriced relative to their risk. Assets that plot above (below) the SML command a return in excess of (less than) what they deserve and hence are undervalued (overvalued) by the market in terms of price. We would expect undervalued assets to be bought, pushing prices up and returns down until the SML is reached. Overvalued assets, on the other hand, are priced too highly by the market and will be sold in anticipation of future downward price reversions to the level predicted by the SML. Hence, whenever assets are correctly priced in terms of their risk, an equilibrium state is reached in which investors have no incentive to take long/short positions.
