
The basic implication of the CAPM is that the expected excess return of an asset is linearly related to the expected excess return on the market portfolio according to the following relation:

E[Ri] − Rf = βi ( E[Rm] − Rf )

This is simply a specific instance of a generic factor pricing model in which the factor is the excess return on a surrogate market portfolio and the test assets are the excess returns of risky assets. The betas are defined as the regression coefficients in

Ri,t − Rf,t = αi + βi ( Rm,t − Rf,t ) + εi,t,  with  βi = Cov(Ri − Rf, Rm − Rf) / Var(Rm − Rf)

and the model states that expected returns are linear in the betas:

E[Ri − Rf] = βi E[Rm − Rf],  i.e. αi = 0 for every asset i

From the expressions above, it is clear that there are two testable implications with regard to the validity of the CAPM:

[1] Each regression intercept should individually be equal to zero (a t-test, asset by asset)

[2] The regression intercepts should be jointly equal to zero (a joint test across all assets)

While there are numerous ways to estimate the model and evaluate the properties of its parameters, this post simply seeks to apply the Gibbons, Ross & Shanken methodology, in both its numerical and graphical incarnations, to a subset of the data. An attempt was made to download price and return data for the constituents of the S&P 500 since 1995. Data availability issues, however, constrained the number of assets under examination to 351 in total, with 216 monthly observations across said assets (as well as the index and T-Bill rate). The previous post summarised key return and risk statistics associated with each of these 351 assets with the help of the rpanel package (for control) and the PerformanceAnalytics package (for a host of measures). To implement the GRS test, one has to ensure that the number of test assets used in the process is less than the number of return observations.

For the sake of convenience the (updated) dashboards from the previous blog post are given below.

After estimating a conventional time-series regression for each risky asset, a dashboard of residual diagnostic plots can also be helpful.


```#Residual diagnostics dashboard
library(rpanel)     # interactive slider control
library(car)        # qqPlot, durbinWatsonTest
library(lmtest)     # gqtest, bptest
library(tseries)    # jarque.bera.test
library(plotrix)    # addtable2plot

windows()
layout(matrix(c(1,1,2,3,1,1,4,5,6,6,7,7),byrow=TRUE,nrow=3,ncol=4))

if (interactive()) {
draw <- function(panel) {
# [1] Actual vs fitted
par(mai=c(0,0.3,0.3,0.2))
plot(main=paste('Time Series Regression :',colnames(monthly.ret)[panel$asset],'\n','Alpha= ',round(ts.list$alphas[panel$asset],3),'|| Beta= ',round(ts.list$betas[panel$asset],3)),x=exm.ret,y=ex.ret[,panel$asset],xlab='',ylab='',cex.main=0.85,cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75)
legend('topleft',legend=c('Actual','Fitted'),fill=c('black','red'),border=NA,bg=NA,cex=0.7,ncol=2)
abline(ts.list$fit[[panel$asset]],col='red',lwd=2)

# [2] Normality: QQ plot of the fitted model's residuals
par(mai=c(0,0.15,0.3,0.2))
qqPlot(ts.list$fit[[panel$asset]],xlab='',ylab='',cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75)

# [3] Residual autocorrelation
par(mai=c(0,0.15,0.3,0.2))
acf(ts.list$resid[[panel$asset]],main='',xlab='',ylab='',cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75)

# Histogram of residuals (normality)
par(mai=c(0,0.15,0.3,0.2))
hist(ts.list$resid[[panel$asset]],main='',xlab='',ylab='',cex.lab=0.7,cex.axis=0.8)

# [4] Heteroskedasticity: fitted values vs residuals
par(mai=c(0,0.15,0.3,0.2))
plot(x=ts.list$fitted[[panel$asset]],y=ts.list$resid[[panel$asset]],xlab='',ylab='',cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75)

# [5] Stationarity: residual time series
par(mai=c(0.3,0.3,0.3,0.2))
plot(ts.list$resid[[panel$asset]],type='l',xlab='',ylab='',cex.lab=0.7,cex.axis=0.8,lwd=1,cex=0.75)
legend('topright',legend=c('Residuals'),fill=c('black'),border=NA,bg=NA,cex=0.7,ncol=1)

# [6] p-values of the named diagnostic tests
gq.p <- as.numeric(gqtest(ts.list$fit[[panel$asset]])[4])
bp <- as.numeric(bptest(ts.list$fit[[panel$asset]])[4])
sw <- as.numeric(shapiro.test(ts.list$resid[[panel$asset]])[2])
jb <- as.numeric(jarque.bera.test(ts.list$resid[[panel$asset]])[3])
dw <- as.numeric(durbinWatsonTest(ts.list$fit[[panel$asset]])[3])

an <- c('Alpha','Beta','G-Quandt','')
an1 <- c('B-Pagan','S-Wilk','J-Bera','D-Watson')

te <- cbind(an,rbind(round(ts.list$alphas.p[panel$asset],3),round(ts.list$betas.p[panel$asset],3),round(gq.p,3),''))
te1 <- cbind(an1,rbind(round(bp,3),round(sw,3),round(jb,3),round(dw,3)))
tab <- cbind(te,te1)

par(mai=c(0.2,0,0.3,0.1))
plot.new()
addtable2plot(x=0,y=0.5,table=tab,cex=0.7)
panel
}
panel <- rp.control(asset=1)
rp.slider(panel,asset,1,ncol(monthly.ret),action=draw,resolution=1,showvalue=TRUE)
}

```

The residual diagnostics dashboard covers the conventional issues of [1] fitted vs actual data, [2] normality, [3] residual autocorrelation, [4] heteroskedasticity, [5] stationarity and [6] a table of p-values that are colour coded to reflect rejection (red) or non-rejection (green) of the null hypotheses associated with the named measure for the selected asset, at the 10% significance level. Asset selection is once again done via the rpanel package.

So far we have only concerned ourselves with the first testable implication of the CAPM in the context of a time-series regression, namely the estimation and visualisation of residual diagnostics for each of the 351 test assets. The significance of the parameters for each model is assessed by comparing the test statistics to critical values or the p-values to chosen significance levels. For those assets that have an alpha p-value less (greater) than 0.1, one would (not) reject the null hypothesis that their pricing error was equal to zero at the 10% level of significance.
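As a minimal, self-contained sketch of the individual test (simulated data only; the post's `ts.list` and 351-asset objects are not used here), the alpha p-value comes straight out of the usual `lm` summary:

```r
# Simulated one-asset example of the time-series test of H0: alpha = 0.
set.seed(1)
t.obs <- 216
exm <- rnorm(t.obs, mean = 0.005, sd = 0.04)        # simulated excess market return
ex.asset <- 0.8 * exm + rnorm(t.obs, sd = 0.05)     # simulated excess asset return (true alpha = 0)

fit <- lm(ex.asset ~ exm)
alpha.p <- summary(fit)$coefficients["(Intercept)", "Pr(>|t|)"]
reject.10pct <- alpha.p < 0.1                       # individual rejection at the 10% level
```

The same extraction, looped over assets, produces the alpha p-values referred to above.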

The second testable implication of the CAPM in a time-series framework relates to the condition of pricing errors (alphas) being jointly equal to zero across all test assets. Gibbons, Ross and Shanken (GRS for short) provide a useful methodology to test this condition under assumptions of residual normality, homoscedasticity and independence. The GRS test statistic I tried to replicate takes the following functional form:

GRS = [(T − N − 1)/N] × [1 + (μ̂m/σ̂m)²]⁻¹ × α̂′ Σ̂⁻¹ α̂ ~ F(N, T − N − 1)

where T is the number of observations, N the number of test assets, μ̂m and σ̂m the sample mean and standard deviation of the excess market return, α̂ the vector of estimated intercepts and Σ̂ the residual covariance matrix.

It appears that this statistic can be rewritten in such a way as to provide an intuitive graphical presentation of CAPM validity. More precisely, GRS show that the test statistic can be expressed in terms of how far inside the ex post frontier the factor return is (the excess market return in the CAPM):

GRS = [(T − N − 1)/N] × (θ̂q² − θ̂m²)/(1 + θ̂m²)

where θ̂q is the ex post Sharpe ratio of the tangency portfolio formed from the test assets and the factor, and θ̂m is the ex post Sharpe ratio of the factor itself.
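A quick simulated check of this equivalence (toy data and hypothetical names, not the post's 351-asset set): the alpha-based statistic and the Sharpe-ratio form coincide when the same sample moments are used throughout.

```r
# GRS statistic two ways on simulated data: via alphas and the residual
# covariance matrix, and via ex post Sharpe ratios of the tangency portfolio
# (assets plus factor) and of the factor alone.
set.seed(42)
t.obs <- 60; n.ass <- 5
f <- rnorm(t.obs, 0.005, 0.04)                         # factor (excess market) returns
betas <- runif(n.ass, 0.5, 1.5)
R <- sapply(1:n.ass, function(i) betas[i] * f + rnorm(t.obs, sd = 0.05))

fits   <- lapply(1:n.ass, function(i) lm(R[, i] ~ f))
alphas <- sapply(fits, function(m) coef(m)[1])
sig.e  <- cov(sapply(fits, residuals))                 # residual covariance

theta.m2 <- (mean(f) / sd(f))^2                        # squared Sharpe of the factor
grs.alpha <- ((t.obs - n.ass - 1) / n.ass) *
  as.numeric(t(alphas) %*% solve(sig.e) %*% alphas) / (1 + theta.m2)

mu <- colMeans(cbind(R, f))
theta.q2 <- as.numeric(t(mu) %*% solve(cov(cbind(R, f))) %*% mu)  # squared Sharpe of ex post tangency
grs.sharpe <- ((t.obs - n.ass - 1) / n.ass) * (theta.q2 - theta.m2) / (1 + theta.m2)

grs.pval <- pf(grs.alpha, n.ass, t.obs - n.ass - 1, lower.tail = FALSE)
```

The two numbers agree to machine precision, which is exactly the "distance inside the ex post frontier" interpretation.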

Disequilibrium in markets implies that prices continue adjusting until the market clears. As prices move, so will asset returns and relative market values, affecting tangency and market portfolio weights respectively. In the ideal CAPM universe, market and tangency portfolios will eventually converge and every investor will hold the tangency portfolio. Hence for the CAPM to hold, the market portfolio surrogate used in model estimation must not deviate too far, in a statistical sense, from the tangency portfolio. This code snippet calculates the test statistic and p-value, and plots the usual frontiers, assets and portfolios.

```#Joint test (use only 200 assets so that n.obs exceeds n.assets)
library(MASS)   # ginv

t.per <- nrow(monthly.ret)
n.ass <- 200
t.term <- (t.per - n.ass - 1)/n.ass

alphas <- ts.list$alphas[1:n.ass]
res.cov <- NULL
for(i in 1:n.ass)
{
res.cov <- cbind(res.cov, ts.list$resid[[i]])
}
res.cov.m <- cov(res.cov)

term <- (1 + (mean(exm.ret)/apply(exm.ret,2,sd))^2)^(-1)
a.term <- t(alphas) %*% ginv(res.cov.m) %*% alphas

t.stat.g <- t.term*term*a.term
grs.pval <- pf(t.stat.g, n.ass, t.per - n.ass - 1, lower.tail=FALSE)

ret.set <- t(as.matrix(colMeans(cbind(bench.ret,monthly.ret[,1:n.ass]))*100))
cov.set <- var(cbind(bench.ret,monthly.ret[,1:n.ass])*100)

risky.asset.data <- list()
risky.asset.data$mean.ret <- ret.set
risky.asset.data$cov.matrix <- cov.set
risky.asset.data$risk.free <- mean(rf.vec)

base <- Frontiers(risky.asset.data)
Frontier.Draw(risky.asset.data,base,rainbow(n.ass),'new',lty=1,paste('Gibbons, Ross & Shanken Interpretation','\n','GRS statistic/pvalue :',round(t.stat.g,3),'/',round(grs.pval,3),'\n','For',n.ass,'assets'))
CAL.Draw(risky.asset.data,base,'black',lty=1)

x <- seq(0,30,by=0.01)
lin <- mean(rf.vec)+((((mean(bench.ret)*100)-mean(rf.vec))/(apply(bench.ret,2,sd)*100))*x)
lines(x=x,y=lin,col='gold',lwd=1.5)
points(x=apply(bench.ret,2,sd)*100,y=mean(bench.ret)*100,col='black',cex=1,pch=17)
text('Market\nReturn',x=apply(bench.ret,2,sd)*100,y=mean(bench.ret)*100,col='black',cex=0.7,pos=1)

```

The CAPM will always hold if the market proxy is mean-variance efficient. For this condition to hold, the surrogate for the market portfolio should lie on the capital market line, the efficient frontier that obtains when a risk free asset is introduced alongside the collection of risky assets. Since the true market portfolio is not identifiable, the CAPM cannot really be tested. The market proxy used above, monthly returns on the S&P 500 index, excludes components of wealth such as [1] real estate and [2] human capital.

The previous post addressed the second element of the two fund theorem, the allocation of capital between the risky portfolio (which was constrained to hold only one asset) and the riskfree asset. This post will concentrate on locating the optimal risky portfolio when the number of risky assets is extended to 25 simulated assets. The possible set of expected returns and standard deviations for different combinations of the simulated risky assets will be plotted along the Minimum Variance Frontier (MVF). Every point on this contour is a combination of risky assets that minimises portfolio risk for a given level of portfolio return.

[The Minimum Variance Frontier]

The basic intuitions can be illustrated using the case of two risky assets. The expected return on such a portfolio is simply the weighted average of the individual asset returns:

E[Rp] = w1·E[R1] + w2·E[R2],  with w1 + w2 = 1

The variance of such a portfolio depends on the variances and the covariance of the two assets concerned:

σp² = w1²σ1² + w2²σ2² + 2·w1·w2·σ12

The covariance itself can be decomposed in terms of the correlation coefficient and the standard deviations:

σ12 = ρ12·σ1·σ2

By varying the weights in each asset and noting the corresponding expected portfolio return and risk, the MVF can be traced. Two functions, MVF and DrawMVF, were written to deal with these computations.
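Before turning to those functions, the two-asset expressions above can be traced directly; the numbers below are made up purely for illustration:

```r
# Tracing the two-asset frontier by varying the weight w1 in the first asset.
mu  <- c(0.08, 0.12)          # hypothetical expected returns
sig <- c(0.15, 0.25)          # hypothetical standard deviations
rho <- 0.3                    # hypothetical correlation
cov12 <- rho * sig[1] * sig[2]

w1 <- seq(0, 1, by = 0.01)
port.ret <- w1 * mu[1] + (1 - w1) * mu[2]
port.sd  <- sqrt(w1^2 * sig[1]^2 + (1 - w1)^2 * sig[2]^2 + 2 * w1 * (1 - w1) * cov12)

plot(port.sd, port.ret, type = 'l', xlab = 'Portfolio Risk', ylab = 'Portfolio Return')
```

With any correlation below one, the minimum attainable risk drops below the risk of the safer asset, which is the diversification effect the frontier bulge depicts.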

```#############################################################################
#
# Mean Variance Frontier
#
#############################################################################

library(tseries)   # portfolio.optim

MVF <- function (risky.assets){

raw.ret <- risky.assets[[1]]
n <- ncol(raw.ret)
min.ret <- min(risky.assets[[5]])
max.ret <- max(risky.assets[[5]])
var.cov <- risky.assets[[2]]
n.port <- 200
risk.free <- risky.assets[[4]]
risky.port <- list()

if(n < 2){
stop('Please specify more than one asset when using the [simulate.assets] function')
} else {
target.returns <- seq(min.ret,max.ret,length.out=n.port)
mvp <- vector('list',n.port)
min.var.frontier <- matrix(0,dimnames=list(c(seq(1,n.port)),c('Portfolio Return','Portfolio Risk','Sharpe Ratio',1:n)),nrow=n.port,ncol=n+3)

for(i in 1:n.port){
mvp[[i]] <- portfolio.optim(x=raw.ret,pm=target.returns[i],riskless=FALSE,shorts=FALSE)
min.var.frontier[i,] <- c(mvp[[i]]$pm,mvp[[i]]$ps,(mvp[[i]]$pm-risk.free)/mvp[[i]]$ps,mvp[[i]]$pw)
}

max.sharpe.port <- min.var.frontier[which.max(min.var.frontier[,3]),]
global.min.var.port <- min.var.frontier[which.min(min.var.frontier[,2]),]

risky.port[[1]] <- min.var.frontier
risky.port[[2]] <- target.returns
risky.port[[3]] <- raw.ret
risky.port[[4]] <- max.sharpe.port
risky.port[[5]] <- global.min.var.port

return(risky.port)

}
}
```


```#############################################################################
#
# Drawing the mean variance frontier
#
#############################################################################

# Signature reconstructed from the call DrawMVF(sim.mvf, FALSE, NULL, NULL):
# overlay switches colour-coding on when several frontiers share one plot,
# with total.num/ind indexing the current frontier's colour.
DrawMVF <- function(risky.portfolio, overlay, total.num, ind){

port.return <- risky.portfolio[[1]][,1]
port.risk <- risky.portfolio[[1]][,2]
port.sharpe <- risky.portfolio[[4]][2:1]
min.var.port <- risky.portfolio[[5]][2:1]

if(overlay && !is.null(total.num)){
col.ind <- rainbow(total.num)
lines(x=port.risk,y=port.return,col=col.ind[ind],lwd=1.5,type='l')
} else {
lines(x=port.risk,y=port.return,col='black',lwd=1.5,type='l')
}

points(x=min.var.port[1],y=min.var.port[2],col='purple',cex=1,pch=11)
points(x=port.sharpe[1],y=port.sharpe[2],col='dark blue',cex=1,pch=9)

}
```

To operationalise some of these ideas, let's simulate random asset returns for 25 securities and plot them in risk/return space along with the MVF that these assets trace.


```#Mean variance efficiency

sim.opt <- simulate.assets(25)
sim.mvf <- MVF(sim.opt)
cal.opt <- CAL.line('Sharpe',sim.opt,sim.mvf)
DrawCAL(cal.opt,sim.opt,sim.mvf,legend.draw=TRUE)
utility.contours('Sharpe',sim.opt,1.5,sim.mvf)
DrawMVF(sim.mvf,FALSE,NULL,NULL)
```

Once the MVF has been traced, the optimal portfolio of risky simulated assets can be located. Essentially, the objective is to find the one collection of risky asset weights that is superior to any other combination according to some risk-adjusted performance measure. The standard performance measure used is the Sharpe ratio, which scales the expected excess portfolio return by the standard deviation of portfolio returns. Graphically, since the slope of the CAL is the Sharpe ratio, the optimal risky portfolio is simply the tangency portfolio that results when the CAL with the highest slope touches the MVF. This is shown in the plot above. Intuitively, one could have drawn the CAL with respect to any other point on the MVF, but the Sharpe ratio would be suboptimal. Connecting the risk free asset with the global minimum variance portfolio would result in a CAL with a negative slope, and hence a negative Sharpe ratio, implying that the risk free asset provides a better return than the global minimum variance portfolio (in our example).

Mathematically, the objective is to find the set of risky asset weights w that maximises the Sharpe ratio:

max over w of (w′μ − rf) / √(w′Σw),  subject to the weights summing to one

Instead of using optimisation packages to locate the Sharpe-ratio-maximising portfolio, I simply computed the Sharpe measure for every point on the MVF and extracted the weights for which the Sharpe ratio was maximal. The result appears to be correct since it corroborates the output of more advanced packages such as fPortfolio.
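The grid-search idea can be sanity-checked in a two-asset setting where the tangency portfolio has a closed form, w ∝ Σ⁻¹(μ − rf). The numbers are hypothetical, chosen so the tangency weights are interior (the no-shorting constraint then never binds):

```r
# Max-Sharpe by brute force over candidate weights vs the closed-form tangency.
mu  <- c(0.08, 0.12); sig <- c(0.15, 0.25); rho <- 0.3; rf <- 0.02
Sigma <- diag(sig) %*% matrix(c(1, rho, rho, 1), 2, 2) %*% diag(sig)

w1 <- seq(0, 1, by = 0.001)
ret  <- w1 * mu[1] + (1 - w1) * mu[2]
risk <- sqrt(w1^2 * Sigma[1,1] + (1 - w1)^2 * Sigma[2,2] + 2 * w1 * (1 - w1) * Sigma[1,2])
sharpe  <- (ret - rf) / risk
w1.grid <- w1[which.max(sharpe)]     # grid-search tangency weight in asset 1

z <- solve(Sigma, mu - rf)           # closed-form tangency weights (up to scale)
w1.exact <- z[1] / sum(z)
```

The grid estimate agrees with the closed form to within the grid resolution, which is the same corroboration logic used against fPortfolio above.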

The Two-fund separation theorem of Modern Portfolio Theory states that the investment problem can be decomposed into the following steps:

1. Finding the optimal portfolio of risky assets
2. Finding the best combination of this optimal risky portfolio and the risk free asset.

With regard to the first point, the optimal portfolio of risky assets is simply the set of risky asset weights that maximises the Sharpe ratio (or, of course, some other performance measure).

With regard to the second point, the total optimal portfolio, one that is invested in the risk free asset and the risky optimal portfolio, depends on the manner in which individual risk preferences (as embodied in the utility function) intersect with investment opportunities (as embodied in the CAL).

As before, the total optimal portfolio results where the highest available indifference curve for a given level of risk aversion meets the CAL with the highest Sharpe ratio (or slope). Again, an individual with a risk aversion of 0.5 would allocate more of his capital to the optimal risky portfolio than an individual with a risk aversion parameter of 1.5.

An investor who has a very low risk aversion parameter would borrow at the risk free rate and invest the borrowed funds along with own capital in the optimal risky portfolio. The resulting total portfolio would lie to the right of the optimal risky portfolio.

According to the Two-fund separation theorem of Modern Portfolio Theory, the investment problem can be decomposed into the following steps:

1. Finding the optimal portfolio of risky assets
2. Finding the best combination of this optimal risky portfolio and the risk free asset.

This post will concentrate on the second issue first. The problem of finding an optimal risky portfolio will be addressed in the next post.

[The Capital Allocation Line]

To simplify the procedure for finding the optimal combination of the risk free and the risky funds, one can first consider the return on a portfolio consisting of one risky asset and a risk free asset. The return on such a portfolio is simply:

E[Rc] = w·E[Rp] + (1 − w)·rf

where w is the fraction of capital placed in the risky asset.

The variance of this combined portfolio is given by:

σc² = w²·σp²  (the risk free asset contributes no variance), so σc = w·σp

These two results can be combined to yield the following expression:

E[Rc] = rf + [(E[Rp] − rf)/σp]·σc

This is the Capital Allocation Line (CAL) and represents the set of investment possibilities created by all combinations of the risky and riskless assets. The price of risk is the return premium per unit of portfolio risk and depends only on the prices of the available securities. This ratio is also known as the Sharpe ratio. To illustrate how the CAL works, let us first set up some functions to [a] simulate random asset returns from a normal distribution and [b] calculate and draw the CAL connecting one risky asset with the risk free rate. These functions correspond to simulate.assets, CAL.line and DrawCAL, as shown below.

```#############################################################################
#
# Simulate asset returns from a normal distribution
#
#############################################################################

simulate.assets <- function(num.assets){
ret.matrix <- matrix(dimnames=list(c(seq(1,100)),c(paste('Asset',1:num.assets))),(rep(0,num.assets*100)),nrow=100,ncol=num.assets,byrow=FALSE)
shock <- matrix(dimnames=list(c(seq(1,100)),c(paste('Asset',1:num.assets))),(rep(0,num.assets*100)),nrow=100,ncol=num.assets,byrow=FALSE)
for(i in 1:num.assets){
ret.matrix[,i]=rnorm(100,mean=runif(1,-0.8,0.2),sd=runif(1,0.8,2))
shock[,i] <- rnorm(100,colMeans(ret.matrix)[i],apply(ret.matrix,2,sd)[i])
}
sd.matrix <- apply(ret.matrix,2,sd)
var.matrix <- var(ret.matrix)
risk.free <- 0.02
mean.ret <- colMeans(ret.matrix)
sim.assets <- list(ret.matrix,var.matrix,sd.matrix,risk.free,mean.ret,shock)
return(sim.assets)
}
```

```#############################################################################
#
# Capital Allocation Line
#
#############################################################################

CAL.line <- function(type,risky.assets,optimised.portfolio){

if(type=='Assets' && is.null(risky.assets)==FALSE){
num.assets <- ncol(risky.assets[[1]])
asset.returns <- colMeans(risky.assets[[1]])
asset.risk <- risky.assets[[3]]

} else if (type=='Sharpe' && is.null(optimised.portfolio)==FALSE){
num.assets <- 1
asset.returns <- optimised.portfolio[[4]][1]
asset.risk <- optimised.portfolio[[4]][2]
}

total.risk <- seq(0,2,by=0.01)
risk.free <- risky.assets[[4]]

CAL.complete <- vector('list',num.assets)
price.risk <-c()
total.return <-matrix(rep(0,length(total.risk)*num.assets),nrow=length(total.risk),ncol=num.assets)

for(i in 1:num.assets){
price.risk[i] <- (asset.returns[[i]]-risk.free)/asset.risk[i]
total.return[,i] <- risk.free+(price.risk[i]*total.risk)
CAL.complete[[i]]<-cbind(total.risk,total.return[,i])
colnames(CAL.complete[[i]]) <- c('Total Risk','Total Return')
}
return(CAL.complete)
}
```

```#############################################################################
#
# Draw Capital allocation Line (s)
#
#############################################################################

DrawCAL<-function(CAL,risky.assets,optimised.portfolio,legend.draw){

num.lines <- length(CAL)
num.assets <- ncol(risky.assets[[1]])
plot.x.max <- max(risky.assets[[3]])+0.2
plot.y.min <- min(risky.assets[[5]])-1
plot.y.max <- max(risky.assets[[5]])+1

windows()
par(xaxs="i", yaxs="i")

if(num.lines==1 && is.null(optimised.portfolio)==FALSE){
plot(xlab='Standard Deviation',ylab='Expected Return',ylim=c(plot.y.min,plot.y.max),xlim=c(0,plot.x.max),CAL[[1]],col='red',type='l',main='Capital Allocation Line',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,lwd=2)
points(x=optimised.portfolio[[4]][2],y=(optimised.portfolio[[4]][1]),type='p',pch=9,col='dark blue',cex=1)
} else if(num.lines==1 && is.null(optimised.portfolio)==TRUE){
plot(xlab='Standard Deviation',ylab='Expected Return',ylim=c(plot.y.min,plot.y.max),xlim=c(0,plot.x.max),CAL[[1]],col='red',type='l',main='Capital Allocation Line',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,lwd=2)
} else if(num.lines>1){
plot(xlab='Standard Deviation',ylab='Expected Return',ylim=c(plot.y.min,plot.y.max),xlim=c(0,plot.x.max),CAL[[1]],col='red',type='l',main='Capital Allocation Line',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,lwd=2)
for(i in 2:num.lines){
lines(CAL[[i]],col='red',type='l',lwd=2)
}
}
points(x=0.01,y=risky.assets[[4]],type='p',pch=15,col='dark green',cex=1)
points(x=risky.assets[[3]],y=risky.assets[[5]],type='p',pch=17,col='blue',cex=0.75)
text(labels=1:num.assets,x=risky.assets[[3]],y=risky.assets[[5]],pos=3,col='blue',cex=0.65)
legend(plot=legend.draw,title='Legend','bottomleft',pch=c(15,17,3,3,8,9,11),col=c('dark green','blue','red','dark orange','black','dark blue','purple'),legend=c('Risk free asset','Risky assets','Capital Allocation Line','Indifference Curves','Optimal Total Portfolio','Max Sharpe Portfolio','Min Variance Portfolio'),ncol=1,bg='white',bty='n',border='black',cex=0.65)
}
```

To implement these functions, let's first simulate the data for one asset with 100 time-series observations drawn from a normal distribution whose mean and standard deviation are themselves drawn from uniform distributions. A risk free rate is also pre-specified at 0.02 and a simple CAL is drawn to connect these two assets as per the expression above.

```
#Capital Allocation Line

sim.one <- simulate.assets(1)
cal.one <-CAL.line('Assets',sim.one,NULL)
DrawCAL(cal.one,sim.one,NULL,legend.draw=TRUE)
```

As mentioned above, the CAL is simply the combination of the risk free asset and the risky portfolio (which has so far been constrained to contain a single asset). The slope of the CAL is the Sharpe ratio and depicts the expected excess portfolio return per unit of portfolio risk. Since every point on the CAL is an investment possibility allocating wealth between the riskless asset and the risky portfolio (a single asset in this case), what determines which of these combinations is optimal? The amount of capital an investor places in the risk free asset versus the risky portfolio is determined by preferences for risk and return as captured by the utility function examined previously.
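The CAL algebra is easy to verify numerically before invoking the simulation functions (illustrative numbers only): every mix of the riskless asset and the risky portfolio lies on the straight line rf + Sharpe × σc.

```r
# Combinations of a risky portfolio and the risk free asset trace out the CAL.
rf <- 0.02; mu.p <- 0.10; sd.p <- 0.20     # hypothetical risky portfolio and rate
w <- seq(0, 1.5, by = 0.25)                # w > 1 corresponds to borrowing at rf
ret.c <- w * mu.p + (1 - w) * rf           # combined portfolio return
sd.c  <- w * sd.p                          # combined portfolio risk
sharpe <- (mu.p - rf) / sd.p               # slope of the CAL (price of risk)
on.line <- rf + sharpe * sd.c              # the CAL evaluated at each sd.c
```

Each (sd.c, ret.c) pair coincides with the line, which is why the CAL plots as a straight ray from the risk free rate through the risky portfolio.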

Mathematically, the objective of the investor is to find the weight of the risky portfolio that maximises the utility of portfolio returns. Or more succinctly:

max over w of U = E[Rc] − ½·A·σc² = rf + w(E[Rp] − rf) − ½·A·w²·σp²

The optimal weight for the risky portfolio is given by:

w* = (E[Rp] − rf) / (A·σp²)

The optimal weight in the risk free asset is then simply (1-w*). Since the indifference curve and the CAL are drawn in the same risk/return space, let’s superimpose an indifference curve with a risk aversion parameter of 1 on the CAL as follows.

```utility.contours('Assets',sim.one,1,NULL)
```

The total optimal portfolio results where investment opportunities as represented by the CAL meet the highest attainable indifference curve as represented by the orange contour. With a risk aversion parameter of 1, the investor exhibits his distaste for risk and places a large weight on the risk free asset versus the risky asset or portfolio. Another investor who is less risk averse (a risk aversion parameter of 0.5) would invest a larger portion of his capital in the risky asset versus the risk free asset and hence the total optimal portfolio is located closer to the simulated risky asset as below.
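A small check of the closed-form weight under the quadratic utility U = E[Rc] − ½Aσc² (hypothetical numbers): a brute-force search over w recovers w*, and lower risk aversion does indeed mean a larger risky allocation.

```r
# Closed-form optimal risky weight vs a brute-force utility maximisation.
rf <- 0.02; mu.p <- 0.10; sd.p <- 0.20       # hypothetical inputs
A <- c(0.5, 1, 1.5)                          # risk aversion parameters from the text
w.star <- (mu.p - rf) / (A * sd.p^2)         # closed form, one weight per A

w <- seq(0, 5, by = 0.001)                   # candidate risky weights
U <- rf + w * (mu.p - rf) - 0.5 * A[3] * w^2 * sd.p^2
w.grid <- w[which.max(U)]                    # grid optimum for A = 1.5
```

The vector w.star is decreasing in A, which is the 0.5 vs 1.5 comparison made above.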

While the investor prefers to make investments consistent with the risk/return opportunities towards the northwestern corner of the plot – where blue indifference contours are drawn – the available investments as per the capital allocation line do not permit this.

So far we have constrained our risky portfolio to contain only one asset. There is nothing stopping us from simulating multiple risky assets and drawing the CAL between the risk free asset and each of these simulated assets as below.

```sim.mult <- simulate.assets(20)
cal.mult <- CAL.line('Assets',sim.mult,NULL)
DrawCAL(cal.mult,sim.mult,NULL,legend.draw=T)
```

This post has dealt with the second element of the two fund theorem, according to which any investment problem can be addressed by combining the risk free asset (riskless fund) and the risky portfolio (risky fund). So far we have constrained the number of assets contained in the risky portfolio. The task of finding the optimal risky portfolio, the first element of the theorem, shall be addressed in the next post.