# Tag Archives: utility function

The point of Fama and French's time series regressions is not so much to explain stock returns as to assess the estimated intercept terms and to link parameter estimates to average excess portfolio returns in a positive and linear fashion. Since it is unintuitive (and perhaps unwieldy) to plot average excess returns against the respective parameter estimates when there are 4 dimensions (expected return ~ mkt + smb + hml), it is useful to visualise model fit by plotting actual average excess returns against model predictions across the 25 sorted FF portfolios. The vertical axis is still the average return of the test assets, but instead of plotting these against estimated parameters as before, the horizontal axis reflects predicted values from the CAPM and FF models. If the model is correct, the points should lie on the 45 degree line; the vertical distance between this line and the plotted points reflects the discrepancy between actual data and model projections.

The code for the cross sectional regressions follows.

```
###############################################################
#Cross Section
###############################################################

#[Size perspective]
SZ.focus <- matrix(unlist(SZ.BM$size.port.mean),ncol=1) - mean(rf.ret)
######capm
SZ.capm.b <- matrix(unlist(s.betas),ncol=1)
SZ.capm.cross <- lm(SZ.focus~SZ.capm.b)
SZ.capm.alpha <- as.matrix(coef(SZ.capm.cross)[1])
SZ.capm.beta <- as.matrix(coef(SZ.capm.cross)[2])
SZ.capm.fitted <- fitted(SZ.capm.cross)

######fama french 3 factor
SZ.ff.b <- t(matrix(unlist(s.ff.betas),nrow=3))
colnames(SZ.ff.b) <- c('MKT','SMB','HML')
SZ.ff.cross <- lm(SZ.focus~SZ.ff.b)
SZ.ff.alpha <- as.matrix(coef(SZ.ff.cross)[1])
SZ.ff.beta <- as.matrix(coef(SZ.ff.cross)[2:4])
SZ.ff.fitted <- fitted(SZ.ff.cross)

#[Value perspective]
BZ.focus <- matrix(unlist(BM.SZ$value.port.mean),ncol=1) - mean(rf.ret)
######capm
BZ.capm.b <- matrix(unlist(v.betas),ncol=1)
BZ.capm.cross <- lm(BZ.focus~BZ.capm.b)
BZ.capm.alpha <- as.matrix(coef(BZ.capm.cross)[1])
BZ.capm.beta <- as.matrix(coef(BZ.capm.cross)[2])
BZ.capm.fitted <- fitted(BZ.capm.cross)

######fama french 3 factor
BZ.ff.b <- t(matrix(unlist(v.ff.betas),nrow=3))
colnames(BZ.ff.b) <- c('MKT','SMB','HML')
BZ.ff.cross <- lm(BZ.focus~BZ.ff.b)
BZ.ff.alpha <- as.matrix(coef(BZ.ff.cross)[1])
BZ.ff.beta <- as.matrix(coef(BZ.ff.cross)[2:4])
BZ.ff.fitted <- fitted(BZ.ff.cross)
###############################################################################
```

The following code visualises the cross sectional results for the CAPM and FF3F models across the 25 sorted portfolios.

```
###################################################################################################
#Fitted vs Actual
###################################################################################################

windows()
layout(matrix(c(1,2,3,4),ncol=2,byrow=T))

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[SizeFocus]~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
#CAPM
par(mai=c(0.1,0.65,0.65,0))
plot(x=SZ.capm.fitted*100,y=SZ.focus*100,xlab='',xaxt='n',ylab='Average Excess Returns',cex.main=0.85,cex.lab=0.7,cex.axis=0.65,xlim=c(0.2,1.2),ylim=c(0.2,1.2),col='white')
for(i in 1:5){
lines(x=SZ.capm.fitted[beg[i]:end[i]]*100,y=SZ.focus[beg[i]:end[i]]*100,col=cols.m[i])
points(x=SZ.capm.fitted[beg[i]:end[i]]*100,y=SZ.focus[beg[i]:end[i]]*100,col=cols.m[i],pch=20+i)
}
abline(0,1)
legend(title='CAPM\n\nLines connect same\n Size Quintile','bottomright',pch=21:25,col=cols.m,legend=c('Smallest Size Portfolio',paste('Size Portfolio',2:4),'Largest Size Portfolio'),ncol=1,bg='white',bty='n',cex=0.6)

#Fitted 3 factor models
par(mai=c(0.1,0.2,0.65,0.45))
plot(x=SZ.ff.fitted*100,y=SZ.focus*100,xlab='',xaxt='n',ylab='',yaxt='n',cex.main=0.85,cex.lab=0.7,cex.axis=0.65,xlim=c(0.2,1.2),ylim=c(0.2,1.2),col='white')
for(i in 1:5){
lines(x=SZ.ff.fitted[beg[i]:end[i]]*100,y=SZ.focus[beg[i]:end[i]]*100,col=cols.m[i])
points(x=SZ.ff.fitted[beg[i]:end[i]]*100,y=SZ.focus[beg[i]:end[i]]*100,col=cols.m[i],pch=20+i)
}
mtext(font=2,line=2,adj=1,cex=0.75,text='25 Size/BM sorted Portfolios : Actual average excess returns against model predictions',side=3)

abline(0,1)
legend(title='3 Factor Model\n\nLines connect same\n Size Quintile','bottomright',pch=21:25,col=cols.m,legend=c('Smallest Size Portfolio',paste('Size Portfolio',2:4),'Largest Size Portfolio'),ncol=1,bg='white',bty='n',cex=0.6)
#*************************************************************************************************#

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[ValueFocus]~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
#CAPM
par(mai=c(0.65,0.65,0.1,0))
plot(x=BZ.capm.fitted*100,y=BZ.focus*100,xlab='Predicted Mean Excess Returns',ylab='Average Excess Returns',cex.main=0.85,cex.lab=0.7,cex.axis=0.65,xlim=c(0.2,1.2),ylim=c(0.2,1.2),col='white')
for(i in 1:5){
lines(x=BZ.capm.fitted[beg[i]:end[i]]*100,y=BZ.focus[beg[i]:end[i]]*100,col=cols.m[i])
points(x=BZ.capm.fitted[beg[i]:end[i]]*100,y=BZ.focus[beg[i]:end[i]]*100,col=cols.m[i],pch=20+i)
}
abline(0,1)
legend(title='CAPM\n\nLines connect same\n Book-Market Quintile','bottomright',pch=21:25,col=cols.m,legend=c('Lowest B/M Portfolio',paste('Value Portfolio',2:4),'Highest B/M Portfolio'),ncol=1,bg='white',bty='n',cex=0.6)

#Fitted 3 factor models
par(mai=c(0.65,0.2,0.1,0.45))
plot(x=BZ.ff.fitted*100,y=BZ.focus*100,xlab='Predicted Mean Excess Returns',ylab='',yaxt='n',cex.main=0.85,cex.lab=0.7,cex.axis=0.65,xlim=c(0.2,1.2),ylim=c(0.2,1.2),col='white')
for(i in 1:5){
lines(x=BZ.ff.fitted[beg[i]:end[i]]*100,y=BZ.focus[beg[i]:end[i]]*100,col=cols.m[i])
points(x=BZ.ff.fitted[beg[i]:end[i]]*100,y=BZ.focus[beg[i]:end[i]]*100,col=cols.m[i],pch=20+i)
}
abline(0,1)
legend(title='3 Factor Model\n\nLines connect same\n Book-Market Quintile','bottomright',pch=21:25,col=cols.m,legend=c('Lowest B/M Portfolio',paste('Value Portfolio',2:4),'Highest B/M Portfolio'),ncol=1,bg='white',bty='n',cex=0.6)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
```

The first row of plots charts mean excess returns against cross sectional predictions for both the CAPM and FF3F models, with lines connecting different BM portfolios within each size category. The second row of plots examines the risk return relation by connecting different size portfolios within each BM category. In both cases, the scatter of points is closer to the 45 degree line for the FF3F model than for the CAPM, implying that variation in average returns of the 25 portfolios can be explained by varying loadings on the 3 factors. The adjusted r-squared increases from 0.23% to 72.78% as we move from the CAPM to the 3 factor model.
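The adjusted r-squared figures quoted above are read straight off the fitted cross sectional regressions; a minimal sketch of the extraction step with simulated data (all names and numbers below are hypothetical, not the values from the code above):

```r
set.seed(1)
beta <- runif(25, 0.8, 1.4)                          # hypothetical loadings for 25 portfolios
avg.ret <- 0.01 + 0.01 * beta + rnorm(25, 0, 0.001)  # hypothetical average excess returns
fit <- lm(avg.ret ~ beta)                            # cross sectional regression
adj.r2 <- summary(fit)$adj.r.squared                 # the statistic compared in the text
```

Comparing this quantity across the CAPM and FF3F cross sectional fits gives the 0.23% versus 72.78% contrast reported above.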

At this point it is instructive, at least in terms of keeping a diary of experiments for myself, to embed the idea of factor models discussed above in the microeconomic framework outlined in part 1 of the asset pricing series. The economic intuition for valuing uncertain cash flows starts with the basic consumer problem of maximising current and future expected utility of consumption, leading to the central asset pricing relation according to which the price of any asset is simply its expected discounted payoff, p = E(mx). This relation provides a set of important implications:

• Asset prices (p) are equal to the expected cash flow payoff (x) discounted at the risk free rate (rf) plus a risk premium.
• This risk premium depends on the covariance of the payoff with the discount factor.
• Assets whose payoffs covary positively with the stochastic discount factor (m), or negatively with consumption growth, command a higher price and a lower expected return.
• Assets whose payoffs covary negatively with the discount factor, or positively with consumption growth, command a lower price and a higher expected return.
• Investors dislike uncertainty in their consumption streams.
• Assets whose payoffs vary positively with consumption disrupt an investor's consumption stream and hence must be cheap in terms of price and high in terms of expected returns.
• Assets whose payoffs vary negatively with consumption help stabilise an investor's consumption stream and hence can be expensive in terms of price and low in terms of expected returns.
• A positive covariance between asset payoff and consumption characterises an asset that pays off well when investors are already wealthy (bad when investors already feel poor).
• A negative covariance between asset payoff and consumption characterises an asset that pays off poorly when investors feel wealthy (well when investors feel poor).
• Price/Expected return depend on covariance not individual risks.
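The pricing relation p = E(mx) and its covariance decomposition can be illustrated numerically; a hypothetical sketch (the discount factor and payoff processes below are assumptions chosen only to make the covariance signs visible, not part of the original analysis):

```r
set.seed(42)
n <- 100000
g  <- rnorm(n, 0.02, 0.02)       # simulated consumption growth
m  <- 0.98 * exp(-2 * g)         # stochastic discount factor (power utility, gamma = 2)
x1 <- 1 + 3 * (g - 0.02)         # payoff covarying positively with consumption
x2 <- 1 - 3 * (g - 0.02)         # payoff covarying negatively with consumption
p1 <- mean(m * x1)               # p = E(mx) for the pro-cyclical asset
p2 <- mean(m * x2)               # p = E(mx) for the counter-cyclical asset
# p = E(m)E(x) + Cov(m,x): Cov(m,x1) < 0, so p1 < p2 despite equal mean payoffs
```

The pro-cyclical asset must sell for less (and so offer a higher expected return), which is exactly the bullet-point logic above.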

Practical applications tend to use linear approximations of the discount factor, of the form m = a + b'f, leading to the well worn beta representation in which expected excess returns are linear in factor loadings, E(Re) = β'λ. Although this consumption based approach should in principle provide a complete answer to most absolute pricing issues, empirically it is not perfect. Essentially, the slew of factor models is a response to the question posed by Cochrane: 'what variables other than consumption might indicate bad times (high marginal utility)'. Value stocks, which combine high book values with low market prices, tend to be particularly affected in times of recession and crisis, inducing investors to hold such assets only at low prices/high premiums. Assets that covary negatively with m, and hence positively with consumption, must pay a higher average return. Since value stocks do poorly in recessions when consumption levels are low, they covary positively with consumption and negatively with the stochastic discount factor, and hence must provide a greater premium/lower price to entice investors. The original authors suggest that the economic meaning of HML is as a proxy for this distress risk. Along similar lines of reasoning, SMB can be thought of as a mimicking portfolio for illiquidity risk. It should be noted that while the 3 factor model is an empirical improvement in most applications, the economic meaning of the factors on the right hand side seems less certain.

The previous post summarised the consequences for a tangency portfolio's asset weights, returns and risks of extending the base case scenario (2 risky assets and 1 risk free asset) with a third risky asset. The additional asset was constructed to command a mean return and risk in excess of those provided by asset number 2 of the base case, while being less risky, and offering a lower mean return, than asset number 1, with successive incarnations of the additional asset eventually dominating base asset number 1 along both dimensions of interest. By fixing the new asset's risk at an intermediate spot between the base assets but varying its mean return to increasingly higher levels, one would expect its optimal weight in the tangency portfolio to increase (as it does).

The two fund separation theorem decomposes any investment problem into finding an optimal combination of risky assets and finding the best combination of this optimal risky portfolio and the risk free asset. While the first part of the problem can be addressed independently across different investors without accounting for individual preferences, the latter part must be addressed in relation to the trade-offs between risk and return that an individual is willing to incur. As we have seen several posts ago, individual preferences can be captured using indifference curves and utility functions. The total optimal portfolio, itself a combination of the optimal risky portfolio and the risk free asset, emerges at the tangency between preferences (as governed by utility functions and indifference curves) and investment opportunities (as governed by the capital allocation line that connects the riskless fund with the optimal risky fund).

While the previous post was concerned with issues surrounding the optimal risky portfolio, this post will concentrate on the total optimal portfolio. The objective here is to track how optimal asset weights change across individuals with different risk preferences using the framework defined in previous posts. Each investor will be associated with (and identified by) a unique risk aversion parameter ranging from 1 to 5 with increments of 0.5 separating successive agents. Intuitively, a higher risk aversion parameter should be associated with a total optimal portfolio that is less invested in the optimal risky fund than the riskless fund. The converse should be true for lower values of the risk aversion parameter.
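The allocation rule at work here, w = (E(r) − rf)/(A·σ²), can be previewed with hypothetical numbers before running the full simulation (the mean, risk free rate and variance below are assumptions, not the simulated values used later):

```r
A <- seq(1, 5, by = 0.5)                    # risk aversion parameters, one per investor
mu <- 0.08; rf <- 0.02; sigma2 <- 0.04      # hypothetical risky-fund mean, risk free rate, variance
w.risky <- (mu - rf) / (A * sigma2)         # weight in the optimal risky fund
w.riskfree <- 1 - w.risky                   # remainder sits in the riskless fund
```

The weights fall monotonically in A: the least risk averse investor here levers into the risky fund (w > 1, financed by borrowing at rf), while the most risk averse holds most of his capital in the riskless fund.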

```
#####################################################################
#Supplement 3
#####################################################################

idx <- seq(1,5,by=0.5)
lay.mat <- matrix(c(1,1,1,2,1,1,1,2,1,1,1,2,3,3,3,4),4,4,byrow=T)
lay.h <- c(0.20,0.20,0.2,0.3)
lay.w <- rep(1,4)
n.assets <- 15

total.opt.weights <- NULL
sim.opt <- simulate.assets(n.assets)
sim.mvf <- MVF(sim.opt)
cal.opt <- CAL.line('Sharpe',sim.opt,sim.mvf)
DrawCAL(cal.opt,sim.opt,sim.mvf,legend.draw=T,lay.mat,lay.h,lay.w,main.title='Frontiers Plot\nwith optimal portfolios')
DrawMVF(sim.mvf,FALSE,NULL,NULL)

for(i in 1:length(idx)){
utility.contours('Sharpe',sim.opt,idx[i],sim.mvf)
}

expected.ret <- sim.mvf[]
risky.var <- (sim.mvf[])^2
risk.free <- sim.opt[]
opt.risky.alloc <- matrix((expected.ret-risk.free)/(idx* risky.var),ncol=1)
opt.riskfree.alloc <- matrix((1-opt.risky.alloc),ncol=1)
opt.port.ret <- risk.free+opt.risky.alloc*(expected.ret-risk.free)
opt.port.risk <- (opt.risky.alloc^2)*risky.var
opt.utility <- opt.port.ret-(0.5*idx*opt.port.risk)

z.data <- list()
z.data$mean.ret <- matrix(sim.opt[],nrow=1,ncol=n.assets,dimnames=list(c('mean return'),c(paste('Asset',1:n.assets))))
z.data$cov.matrix <- as.matrix(sim.opt[],nrow=n.assets,ncol=n.assets,dimnames=list(c(paste('Asset',1:n.assets)),c(paste('Asset',1:n.assets))))
z.data$risk.free <- risk.free

front<-Frontiers(z.data)
risky.weights <- matrix(front$tang.weights,nrow=n.assets,ncol=1,dimnames=list(c(paste('Asset',1:n.assets)),c('tangency weight')))
for(i in 1:length(opt.risky.alloc)){
total.opt.weights <-cbind(total.opt.weights,opt.risky.alloc[i]*risky.weights)
}
total.opt.weights <- t(total.opt.weights)

par(mai=c(0.65,0,0.55,0.15))
l <- length(idx)
plot(col='green',pch=15,bty='o',x=opt.riskfree.alloc,y=1:l,cex=0.75,cex.axis=0.8,cex.lab=0.8,cex.main=0.8,yaxt='n',main='Optimal Weight\nrisk free rate',xlab='Weight',ylab='')
polygon(y=c(1:l,l:1),x=c(rep(0,l),rev(opt.riskfree.alloc)),col=rf.col)
text(x=opt.riskfree.alloc,y=1:l,idx,cex=0.7,col='darkgreen',pos=1)

par(mai=c(0.65,0.53,0.15,0.3))
transition(total.opt.weights,colours=c(front.col),xlab='Risk aversion parameter',ylab='Asset weights',main='Weight transition map - Risky assets')

par(mai=c(0.65,0,0.10,0.15))
plot(1, type="n", axes=F, xlab="", ylab="",bty='o',xaxt='n',yaxt='n')
legend('center',fill=c(front.col,rf.col),legend=c(paste('Asset',1:n.assets),'Rf-asset'),ncol=2,bg='white',bty='n',cex=0.85,title='Total\nPortfolio Weights')

```

In the code above, we simulate 15 random assets and plot them in the risk/return space along with the minimum variance frontier that they span, the optimal risky portfolio (tangency portfolio) that emerges from the Markowitz procedure, the capital allocation line (CAL) that connects the riskless fund to the tangency portfolio, and the total optimal portfolios that emerge as points on the CAL. Since each investor is presumed to face the same universe of risky assets, the differences across total optimal portfolios depend on the associated risk aversion parameter.

The dashboard of plots follows.
While both transition maps are defined relative to the risk aversion parameter, I have separated the risky asset weights from the riskless asset weight to better emphasise the point of the two fund separation theorem and to make the visualisation less muddy. The results corroborate the intuition that greater values of the risk aversion parameter are associated with increasing proportions of the total optimal portfolio being invested in the riskless fund. The transition function was adapted from alphaism and systematic investor.

The previous post addressed the second element of the two fund theorem, the allocation of capital between the risky portfolio (which was constrained to hold only one asset) and the risk free asset. This post will concentrate on locating the optimal risky portfolio when the number of risky assets is extended to 25 simulated data points. The possible set of expected returns and standard deviations for different combinations of the simulated risky assets will be plotted along the Minimum Variance Frontier. Every point on this contour is a combination of risky assets that minimises portfolio risk for a given level of portfolio return.

[The Minimum Variance Frontier]

The basic intuitions can be illustrated using the case of two risky assets. The expected return on such a portfolio is simply the weighted average of individual asset returns, while its variance depends on the variances and covariance of the 2 assets concerned. The covariance itself can be decomposed in terms of the correlation coefficient and the standard deviations. By varying the weights in each asset and noting the corresponding expected portfolio returns and risks, the MVF can be traced. Two functions were written to deal with these issues: MVF and DrawMVF.
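Before introducing the functions, the two asset case can be traced directly; a minimal sketch with assumed means, risks and correlation (the numbers are illustrative only):

```r
mu <- c(0.05, 0.10)                  # assumed expected returns of the 2 assets
sdev <- c(0.10, 0.20)                # assumed standard deviations
rho <- 0.3                           # assumed correlation coefficient
w <- seq(0, 1, by = 0.01)            # weight in asset 1
port.ret <- w * mu[1] + (1 - w) * mu[2]
port.sd <- sqrt(w^2 * sdev[1]^2 + (1 - w)^2 * sdev[2]^2 +
                2 * w * (1 - w) * rho * sdev[1] * sdev[2])
# plot(port.sd, port.ret, type = 'l')  # traces the frontier in risk/return space
```

For any correlation below 1, the minimum attainable portfolio risk falls below the risk of the safer asset, which is precisely the diversification benefit the MVF captures.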

```
#############################################################################
#
# Mean Variance Frontier
#
#############################################################################

library(tseries)   # portfolio.optim, used below, comes from the tseries package

MVF <- function (risky.assets){

raw.ret <- risky.assets[]
n <- ncol(raw.ret)
min.ret <- min(risky.assets[])
max.ret <- max(risky.assets[])
var.cov <-risky.assets[]
n.port = 200
risk.free <- risky.assets[]
risky.port <- list()

if(n<2){
print('Please specify more than one asset when using the [simulate.assets] function')
} else if(n>1){
target.returns <- seq(min.ret,max.ret,length.out=n.port)
mvp <- vector('list',n.port)
min.var.frontier <- matrix(0,dimnames=list(c(seq(1,n.port)),c('Portfolio Return','Portfolio Risk','Sharpe Ratio',1:n)),nrow=n.port,ncol=n+3)

for(i in 1:n.port){
mvp[[i]]<-portfolio.optim(x=raw.ret,pm=target.returns[i],riskless=F,shorts=F)
min.var.frontier[i,] <- c(mvp[[i]]$pm,(mvp[[i]]$ps),(mvp[[i]]$pm-risk.free)/(mvp[[i]]$ps),mvp[[i]]$pw)
}

max.sharpe.port <- min.var.frontier[which.max(min.var.frontier[,3]),]
global.min.var.port <- min.var.frontier[which.min(min.var.frontier[,2]),]

risky.port[] <- min.var.frontier
risky.port[] <- target.returns
risky.port[] <- raw.ret
risky.port[] <- max.sharpe.port
risky.port[] <- global.min.var.port

return(risky.port)

}
}
```


```
#############################################################################
#
# Drawing the mean variance frontier
#
#############################################################################

# the DrawMVF signature below is an assumption, reconstructed to match the
# call DrawMVF(sim.mvf, FALSE, NULL, NULL) used elsewhere in these posts
DrawMVF <- function(risky.portfolio, new.plot, total.num, ind){

port.return <- risky.portfolio[][,1]
port.risk <- risky.portfolio[][,2]
port.sharpe <- risky.portfolio[][2:1]
min.var.port <- risky.portfolio[][2:1]

col.ind <- rainbow(total.num)
lines(x=port.risk,y=port.return,col=col.ind[ind],lwd=1.5,type='l')
lines(x=port.risk,y=port.return,col='black',lwd=1.5,type='l')
points(x=min.var.port,y=min.var.port,col='purple',cex=1,pch=11)

}
```

To operationalise some of the issues so far, let's simulate random asset returns for 25 securities and plot them in the risk/return space along with the MVF that these assets trace.


```
#Mean variance efficiency

sim.opt <- simulate.assets(25)
sim.mvf <- MVF(sim.opt)
cal.opt <- CAL.line('Sharpe',sim.opt,sim.mvf)
DrawCAL(cal.opt,sim.opt,sim.mvf,legend.draw=TRUE)
utility.contours('Sharpe',sim.opt,1.5,sim.mvf)
DrawMVF(sim.mvf,FALSE,NULL,NULL)
```

Once the MVF has been traced, the optimal portfolio of risky simulated assets can be located. Essentially, the objective is to find the one collection of risky asset weights that is superior to any other combination of risky assets according to some risk adjusted performance measure. The standard performance measure used is the Sharpe ratio, which adjusts the excess expected portfolio return by the standard deviation of portfolio returns. Graphically, since the slope of the CAL is the Sharpe ratio, the optimal risky portfolio is simply the tangency portfolio which results when the CAL with the highest slope touches the MVF. This is shown in the plot above. Intuitively, one could have drawn the CAL with respect to any other point on the MVF but the Sharpe ratio would be suboptimal. Connecting the risk free asset with the global minimum variance portfolio would result in a CAL that has a negative slope, and hence a negative Sharpe ratio, implying that the risk free asset provides a better return than the global minimum variance portfolio (in our example).

Mathematically, the objective is to find the set of risky asset weights that maximises the Sharpe ratio, (w'μ − rf)/√(w'Σw). Instead of using optimisation packages to locate the Sharpe-ratio-maximising portfolio, I simply computed the Sharpe measure for every point on the MVF and extracted the subset of weights for which the Sharpe ratio was maximal. The result appears to be correct since it corroborates the output of more advanced packages such as fPortfolio.
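The grid search just described amounts to evaluating the Sharpe ratio at every frontier point and taking the argmax; a sketch over a hypothetical frontier (the hyperbola parameters below are assumptions, not the simulated frontier):

```r
rf <- 0.02
ret <- seq(0.03, 0.12, length.out = 200)   # target returns along a hypothetical frontier
risk <- sqrt(0.01 + 9 * (ret - 0.07)^2)    # assumed hyperbola-shaped frontier risk
sharpe <- (ret - rf) / risk                # Sharpe ratio at every frontier point
tangency <- which.max(sharpe)              # index of the maximum Sharpe portfolio
```

The tangency portfolio sits above the global minimum variance point on the frontier, consistent with the geometric argument about the steepest CAL.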

The Two-fund separation theorem of Modern Portfolio Theory states that the investment problem can be decomposed into the following steps:

1. Finding the optimal portfolio of risky assets
2. Finding the best combination of this optimal risky portfolio and the risk free asset.

With regard to the first point, the optimal portfolio of risky assets is simply the set of risky asset weights that maximise the Sharpe ratio (or of course some other performance measure)

With regard to the second point, the total optimal portfolio, one that is invested in the risk free rate and the risky optimal portfolio, depends on the manner in which individual risk preferences (as embodied in the utility function) intersect with investment opportunities (as embodied in the CAL). As before, the total optimal portfolio results where the highest available indifference curve for a given level of risk aversion meets the CAL with the highest Sharpe ratio (or slope). Again, an individual with a risk aversion of 0.5 would allocate more of his capital to the optimal risky portfolio than an individual with a risk aversion parameter of 1.5. An investor who has a very low risk aversion parameter would borrow at the risk free rate and invest the borrowed funds along with his own capital in the optimal risky portfolio. The resulting total portfolio would lie to the right of the optimal risky portfolio.

According to the Two-fund separation theorem of Modern Portfolio Theory, the investment problem can be decomposed into the following steps:

1. Finding the optimal portfolio of risky assets
2. Finding the best combination of this optimal risky portfolio and the risk free asset.

This post will concentrate on the second issue first. The problem of finding an optimal risky portfolio will be addressed in the next post.

[The Capital Allocation Line]

To simplify the procedure for finding the optimal combination of the risk free and the risky funds, one can first consider the return on a portfolio consisting of one risky asset and a risk free asset. The return on such a portfolio is simply rc = w·rp + (1 − w)·rf, and its variance is σc² = w²·σp². These two results can be combined to yield E(rc) = rf + σc·(E(rp) − rf)/σp. This is the Capital Allocation Line (CAL) and represents the set of investment possibilities created by all combinations of the risky and riskless asset. The price of risk, (E(rp) − rf)/σp, is the return premium per unit of portfolio risk and depends only on the prices of available securities. This ratio is also known as the Sharpe ratio. To illustrate how the CAL works, let us first set up some functions to [a] simulate random asset returns from a normal distribution and [b] calculate and draw the CAL connecting one risky asset with the risk free rate. These functions correspond to simulate.assets, CAL.line and DrawCAL as shown below.

```
#############################################################################
#
# Simulate asset returns from a normal distribution
#
#############################################################################

simulate.assets <- function(num.assets){
ret.matrix <- matrix(dimnames=list(c(seq(1,100)),c(paste('Asset',1:num.assets))),(rep(0,num.assets*100)),nrow=100,ncol=num.assets,byrow=FALSE)
shock <- matrix(dimnames=list(c(seq(1,100)),c(paste('Asset',1:num.assets))),(rep(0,num.assets*100)),nrow=100,ncol=num.assets,byrow=FALSE)
for(i in 1:num.assets){
ret.matrix[,i]=rnorm(100,mean=runif(1,-0.8,0.2),sd=runif(1,0.8,2))
shock[,i] <- rnorm(100,colMeans(ret.matrix)[i],apply(ret.matrix,2,sd)[i])
}
sd.matrix <- apply(ret.matrix,2,sd)
var.matrix <- var(ret.matrix)
risk.free <- 0.02
mean.ret <- colMeans(ret.matrix)
sim.assets <- list(ret.matrix,var.matrix,sd.matrix,risk.free,mean.ret,shock)
return(sim.assets)
}
```

```
#############################################################################
#
# Capital Allocation Line
#
#############################################################################

CAL.line <- function(type,risky.assets,optimised.portfolio){

if(type=='Assets' && is.null(risky.assets)==FALSE){
num.assets <- ncol(risky.assets[])
asset.returns <- colMeans(risky.assets[])
asset.risk <- risky.assets[]

} else if (type=='Sharpe' && is.null(optimised.portfolio)==FALSE){
num.assets <- 1
asset.returns <- optimised.portfolio[]
asset.risk <- optimised.portfolio[]
}

total.risk <- seq(0,2,by=0.01)
risk.free <- risky.assets[]

CAL.complete <- vector('list',num.assets)
price.risk <-c()
total.return <-matrix(rep(0,length(total.risk)*num.assets),nrow=length(total.risk),ncol=num.assets)

for(i in 1:num.assets){
price.risk[i] <- (asset.returns[[i]]-risk.free)/asset.risk[i]
total.return[,i] <- risk.free+(price.risk[i]*total.risk)
CAL.complete[[i]]<-cbind(total.risk,total.return[,i])
colnames(CAL.complete[[i]]) <- c('Total Risk','Total Return')
}
return(CAL.complete)
}
```

```
#############################################################################
#
# Draw Capital allocation Line (s)
#
#############################################################################

DrawCAL<-function(CAL,risky.assets,optimised.portfolio,legend.draw){

num.lines <- length(CAL)
num.assets <- ncol(risky.assets[])
plot.x.max <- max(risky.assets[])+0.2
plot.y.min <- min(risky.assets[])-1
plot.y.max <- max(risky.assets[])+1

windows()
par(xaxs="i", yaxs="i")

if(num.lines==1 && is.null(optimised.portfolio)==FALSE){
plot(xlab='Standard Deviation',ylab='Expected Return',ylim=c(plot.y.min,plot.y.max),xlim=c(0,plot.x.max),CAL[],col='red',type='l',main='Capital Allocation Line',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,lwd=2)
points(x=optimised.portfolio[],y=(optimised.portfolio[]),type='p',pch=9,col='dark blue',cex=1)
} else if(num.lines==1 && is.null(optimised.portfolio)==TRUE){
plot(xlab='Standard Deviation',ylab='Expected Return',ylim=c(plot.y.min,plot.y.max),xlim=c(0,plot.x.max),CAL[],col='red',type='l',main='Capital Allocation Line',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,lwd=2)
} else if(num.lines>1){
plot(xlab='Standard Deviation',ylab='Expected Return',ylim=c(plot.y.min,plot.y.max),xlim=c(0,plot.x.max),CAL[],col='red',type='l',main='Capital Allocation Line',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,lwd=2)
for(i in 2:num.lines){
lines(CAL[[i]],col='red',type='l',lwd=2)
}
}
points(x=0.01,y=risky.assets[],type='p',pch=15,col='dark green',cex=1)
points(x=risky.assets[],y=risky.assets[],type='p',pch=17,col='blue',cex=0.75)
text(labels=1:num.assets,x=risky.assets[],y=risky.assets[],pos=3,col='blue',cex=0.65)
legend(plot=legend.draw,title='Legend','bottomleft',pch=c(15,17,3,3,8,9,11),col=c('dark green','blue','red','dark orange','black','dark blue','purple'),legend=c('Risk free asset','Risky assets','Capital Allocation Line','Indifference Curves','Optimal Total Portfolio','Max Sharpe Portfolio','Min Variance Portfolio'),ncol=1,bg='white',bty='n',border='black',cex=0.65)
}
```

To implement these functions, let's first simulate the data for one asset with 100 time series observations drawn from a normal distribution whose mean and standard deviation are themselves drawn from uniform distributions. A risk free rate is also pre-specified at 0.02 and a simple CAL is drawn to connect these 2 assets as per the expression above.

```
#Capital Allocation Line

sim.one <- simulate.assets(1)
cal.one <-CAL.line('Assets',sim.one,NULL)
DrawCAL(cal.one,sim.one,NULL,legend.draw=TRUE)
```

As mentioned above, the CAL is simply the combination of the risk free asset and the risky portfolio (which as of yet has been constrained to contain a single asset). The slope of the CAL is the Sharpe ratio and depicts the amount of excess portfolio return per unit of portfolio risk we can expect. Since every point on the CAL is an investment possibility of allocating wealth between the riskless asset and the risky portfolio (a single asset in this case), what determines which of these combinations is optimal? The amount of capital an investor places in the risk free asset versus the risky portfolio is determined by his preferences for risk and return as captured by the utility function examined previously.

Mathematically, the objective of an investor is to find the weight of the risky portfolio that maximises the utility of portfolio returns: max over w of U = E(rc) − ½·A·σc². The optimal weight in the risky portfolio is then w* = (E(rp) − rf)/(A·σp²), and the optimal weight in the risk free asset is simply (1 − w*). Since the indifference curve and the CAL are drawn in the same risk/return space, let's superimpose an indifference curve with a risk aversion parameter of 1 on the CAL as follows.

```
utility.contours('Assets',sim.one,1,NULL)
```

The total optimal portfolio results where investment opportunities as represented by the CAL meet the highest attainable indifference curve as represented by the orange contour. With a risk aversion parameter of 1, the investor exhibits his distaste for risk and places a large weight on the risk free asset versus the risky asset or portfolio. Another investor who is less risk averse (a risk aversion parameter of 0.5) would invest a larger portion of his capital in the risky asset versus the risk free asset, and hence the total optimal portfolio is located closer to the simulated risky asset, as below. While the investor prefers to make investments consistent with the risk/return opportunities towards the northwestern corner of the plot – where blue indifference contours are drawn – the available investments as per the capital allocation line do not permit this.
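The closed form for the optimal risky weight can be checked against a brute force grid search over candidate weights; a sketch with assumed moments (mu, sigma and rf below are hypothetical, not the simulated asset):

```r
rf <- 0.02; mu <- 0.08; sigma <- 0.20   # assumed risky-portfolio moments
A <- 1                                   # risk aversion parameter
w <- seq(0, 3, by = 0.001)               # candidate weights in the risky portfolio
U <- rf + w * (mu - rf) - 0.5 * A * (w * sigma)^2   # mean-variance utility
w.grid <- w[which.max(U)]                # grid-search optimum
w.closed <- (mu - rf) / (A * sigma^2)    # closed-form optimum
```

The two agree to grid precision, and halving the risk aversion parameter doubles the optimal risky weight, matching the comparison between the 0.5 and 1.5 investors above.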

So far we have constrained our risky portfolio to contain only one asset. There is nothing stopping us from simulating multiple risky assets and drawing the CAL between the risk free asset and each of these simulated assets as below.

```
sim.mult <- simulate.assets(20)
cal.mult <- CAL.line('Assets',sim.mult,NULL)
DrawCAL(cal.mult,sim.mult,NULL,legend.draw=T)
```

This post has dealt with the second half of the two fund theorem, according to which any investment problem can be addressed by combining the risk free asset (riskless fund) and the risky portfolio (risky fund). So far we have constrained the number of assets contained in the risky portfolio. The task of finding the optimal risky portfolio, the first half of the theorem, shall be addressed in the next post.

Up until the proliferation of the mean-variance analysis due to Markowitz, the typical advice offered by an investment advisor would entail some combination of the following ideas:

• Young investors should allocate a disproportionately large amount of their investable capital to risky securities within risky asset classes.
• Older investors should allocate a disproportionate amount of capital to less volatile securities within less risky asset classes.

While intuitively compelling and perhaps sensible as a first approximation, linking an individual’s asset allocation decision exclusively to his human capital in such a broad fashion is suboptimal. According to the two-fund separation theorem, a key result in Modern Portfolio Theory, the investment allocation problem can be decomposed into two elements:

(1.) Finding the optimal portfolio of risky assets.

• This amounts to finding a vector of weights associated with risky assets that maximise the risk adjusted return of the resulting portfolio. An important consideration here is the selection and use of an appropriate risk-adjusted performance measure.

(2.) Finding the best combination of the risk free asset and the optimal risky portfolio.

• Once the optimal risky portfolio has been determined, it can be combined with the risk free asset; with respective weights in each of these ‘funds’ subject to individual risk preferences, the total portfolio can be constructed.
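Under mean-variance utility, step (2.) has a closed form: the optimal fraction of capital allocated to the risky fund is y* = (E(r) − rf) / (A·σ²), the same expression used inside contours.plot further down. A minimal sketch follows; the rates and risk aversion parameter are illustrative assumptions, not estimates from this post's data.

```r
# Illustrative inputs (assumed values, not taken from the post's simulations)
risk.free <- 0.02      # risk free rate
expected.ret <- 0.08   # expected return of the optimal risky portfolio
risky.sd <- 0.20       # its standard deviation
risk.aversion <- 4     # coefficient of risk aversion

# Optimal weight in the risky fund: y* = (E(r) - rf) / (A * sigma^2)
opt.risky.alloc <- (expected.ret - risk.free) / (risk.aversion * risky.sd^2)

# The total portfolio lies on the CAL at that weight
total.ret <- risk.free + opt.risky.alloc * (expected.ret - risk.free)
total.risk <- opt.risky.alloc * risky.sd

opt.risky.alloc  # 0.375: 37.5% in the risky fund, the rest in the risk free asset
total.ret        # 0.0425
total.risk       # 0.075
```

A more risk averse investor (a larger A) holds less of the risky fund; halving A to 2 in this sketch doubles the allocation to 75%.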

The most important implications of Modern Portfolio Theory can be summarised as follows.

• Investors should control total portfolio risk by varying the amount of capital invested in the risk free asset and the optimal risky portfolio, as opposed to changing the composition of the risky portfolio itself. The weight given to either ‘fund’ depends on the individual’s risk aversion.
• The portfolio of risky assets should contain a large number of assets that are less than perfectly positively correlated (i.e. a well diversified portfolio).

It should be noted that most of the relevant results are derived under the assumption that either [a] all returns are normally distributed or [b] investors care only about the mean and variance of returns. Further assumptions require that all assets be tradable and that no transaction costs be incurred when trades occur.

Before the two fund theorem can be implemented, a theoretical framework for understanding the tradeoff between risk and return will be briefly sketched.


[Utility Functions and Indifference curves]


The standard utility function of choice is the power utility function, also known as the constant relative risk aversion or CRRA utility function. Gamma, the coefficient of risk aversion, governs the curvature of the utility function and hence how resistant individuals are to taking risks. It can be shown that the utility an investor derives from a pattern of returns amounts to U = E(r) − 0.5·A·σ², with A representing the risk aversion parameter and σ² the variance of returns. If assets can be thought of as providing a benefit (returns) at a cost (risk), then a simple way to visualise the tradeoff can be attained using indifference curves. To illustrate these and other issues of this post, the following two helper functions, utility.contours and contours.plot, were written.

```#############################################################################
#
# Contour Plotting
#
#############################################################################

library(RColorBrewer)  # provides brewer.pal for the blue colour ramp

contours.plot <- function(type,expected.ret,risky.var,num.assets,risk.free,risk.aversion,total.risk){

# Optimal allocation to the risky asset and the resulting portfolio moments
opt.risky.alloc <- (expected.ret-risk.free)/(risk.aversion*risky.var)
opt.riskfree.alloc <- (1-opt.risky.alloc)
opt.port.ret <- risk.free+opt.risky.alloc*(expected.ret-risk.free)
opt.port.risk <- (opt.risky.alloc^2)*risky.var
opt.utility <- opt.port.ret-(0.5*risk.aversion*opt.port.risk)

# Indifference contour through each asset's optimal portfolio
contours <- vector('list',num.assets)
for(i in 1:num.assets){
contours[[i]]$optimal.utility <- opt.utility[i]
contours[[i]]$returns <- opt.utility[i]+(0.5*risk.aversion*total.risk)
contours[[i]]$risk <- sqrt(total.risk)
}

# Contours across varying risk aversion (s) and varying utility (u)
s.contours <- vector('list',5)
u.contours <- vector('list',5)

for(i in 1:5){
s.contours[[i]]$optimal.utility <- opt.utility
s.contours[[i]]$returns <- opt.utility+(0.5*i*total.risk)
s.contours[[i]]$risk <- sqrt(total.risk)
u.contours[[i]]$optimal.utility <- i
u.contours[[i]]$returns <- opt.utility+i/5+(0.5*risk.aversion*total.risk)
u.contours[[i]]$risk <- sqrt(total.risk)
}

col.indiff <- colorRampPalette(brewer.pal(9,"Blues"))(100)
ind <- 100

if(type=='Sample'){
windows()

# Top panel: utility held constant, risk aversion varies
par(mfrow=c(2,1),mar=c(1.5,4,3,1.5))
plot(main='Indifference Curves',cex.main=0.85,cex.lab=0.75,cex.axis=0.75,ylab='Expected Return',xlab='Risk',x=s.contours[[1]]$risk,y=s.contours[[1]]$returns,type='l',col=col.indiff[ind],lty=2,lwd=2)

for(i in 2:5){
lines(x=s.contours[[i]]$risk,y=s.contours[[i]]$returns,col=col.indiff[ind-i*5],lty=2,lwd=2)
}

# Bottom panel: risk aversion held constant, utility varies
par(mar=c(4,4,1,1.5))
plot(cex.main=0.85,cex.lab=0.75,cex.axis=0.75,ylab='Expected Return',xlab='Risk',x=u.contours[[1]]$risk,y=u.contours[[1]]$returns,type='l',col=col.indiff[ind],lty=2,lwd=2)

for(i in 2:5){
lines(x=u.contours[[i]]$risk,y=u.contours[[i]]$returns,col=col.indiff[ind-i*5],lty=2,lwd=2)
}

} else if(type=='Optimal'){

# Overlay the optimal indifference contour(s) on an existing plot
for(i in 1:num.assets){
lines(x=contours[[i]]$risk,y=contours[[i]]$returns,col='darkorange',lty=2,lwd=2)
}
points(x=sqrt(opt.port.risk),y=opt.port.ret,pch=8,col='black')

for(i in 1:5){
lines(x=u.contours[[i]]$risk,y=u.contours[[i]]$returns,col=col.indiff[ind-i*5],lty=2,lwd=2)
}
}
}
```
```##############################################################################
#
# Utility Contours
#
#############################################################################

utility.contours <- function(type,risky.assets,risk.aversion,optimised.portfolio){

risk.free <- risky.assets[]
total.risk <- seq(0,2,by=0.01)^2

if(type=='Assets'){
expected.ret <- risky.assets[]
risky.var <- risky.assets[]^2
num.assets <- length(risky.var)
contours.plot(type='Optimal',expected.ret,risky.var,num.assets,risk.free,risk.aversion,total.risk)

} else if(type=='Sharpe'){
expected.ret <- optimised.portfolio[]
risky.var <- (optimised.portfolio[])^2
num.assets <- 1
contours.plot(type='Optimal',expected.ret,risky.var,num.assets,risk.free,risk.aversion,total.risk)

} else if(type=='Indifference Sample'){
sim <- simulate.assets(1)
expected.ret <- sim[]
risky.var <- sim[]^2
risk.free.smpl <- sim[]
num.assets <- 1
risk.aversion.smpl <- 1
contours.plot(type='Sample',expected.ret,risky.var,num.assets,risk.free.smpl,risk.aversion.smpl,total.risk)

} else if(type=='CRRA'){
consumption <- seq(0,3,by=0.01)
gamma <- c(0.7,0.5,0.2)
util <- matrix(0,nrow=length(consumption),ncol=length(gamma))
marginal.util <- matrix(0,nrow=length(consumption),ncol=length(gamma))

# CRRA utility and its first derivative for each gamma
for(i in 1:length(gamma)){
util[,i] <- (consumption^(1-gamma[i]))/(1-gamma[i])
marginal.util[,i] <- consumption^(-gamma[i])
}

colnames(util) <- paste('Gamma',gamma)
colnames(marginal.util) <- paste('Gamma',gamma)
colours <- colorRampPalette(brewer.pal(9,"Blues"))(100)
col.index <- 100

windows()
par(mfrow=c(2,1),mar=c(2,2,3,2))
plot(cex.axis=0.75,lwd=2,x=consumption,y=util[,1],type='l',col=colours[col.index],main='Constant Relative Risk Aversion - Utility function',cex.main=0.85,cex.lab=0.75,ylab='')
for(i in 2:length(gamma)){
lines(lwd=3,x=consumption,y=util[,i],col=colours[col.index-(i*10)])
}

par(mar=c(3,2,1,2))
plot(cex.axis=0.75,ylim=c(-3,3),lwd=2,x=consumption,y=marginal.util[,1],type='l',col=colours[col.index],main='Constant Relative Risk Aversion - Marginal Utility function',cex.main=0.85,cex.lab=0.75,ylab='')
for(i in 2:length(gamma)){
lines(lwd=2,x=consumption,y=marginal.util[,i],col=colours[col.index-(i*10)])
}
legend(horiz=TRUE,title='Gamma',"bottom",fill=c(colours[col.index],colours[col.index-20],colours[col.index-30]),legend=gamma[1:3],bg='white',bty='n',border='white',cex=0.75)
}
}
```

To implement these functions:

```##############################################################
#
# Implementations
#
##############################################################

#Utility Contours

utility.contours(type='CRRA',risky.assets=NULL,risk.aversion=NULL,optimised.portfolio=NULL)
utility.contours(type='Indifference Sample',risky.assets=NULL,risk.aversion=0.5,optimised.portfolio=NULL)
```

The plots above illustrate the CRRA utility function as well as its first derivative (the marginal utility function). The greater the risk aversion coefficient, gamma, the greater the curvature of the utility function and the more risk averse the individual; equivalently, the wider the gap between a gamble’s expected value and its certainty equivalent, the sure amount that would make the individual indifferent between receiving it with perfect certainty and taking the gamble. The slope of the utility curve, depicted in the bottom plot as the first derivative of the preceding function, suggests that the additional utility derived from an extra unit of wealth declines as the level of wealth increases. While utility increases with wealth, the rate at which an additional unit of wealth promotes greater utility decreases as wealth rises.

The first plot depicts the effect on indifference curves of varying the risk aversion parameter while holding utility constant. The converse is true for the second plot, in which the risk aversion parameter is held constant while indifference curves are charted across varying degrees of utility. True to its name, an indifference curve connects all combinations of (in this case) risk and return which yield the same utility. With respect to the first plot, the greater the risk aversion of an individual, the greater the expected return required at a given level of risk before the specified level of utility can be reached. With respect to the second plot, northwesterly movements of the contour signify combinations of risk and return that yield greater utility. Intuitively, when risk aversion and risk are held constant, the asset that provides greater expected returns commands greater utility. An investor, who regards assets as bundles of risks and returns, would wish to hold a portfolio of assets located on the highest possible indifference curve for his given aversion to risk.
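The certainty equivalent can be made concrete with a small, self-contained sketch; the 50/50 gamble and the gamma value below are arbitrary assumptions chosen for illustration.

```r
# CRRA utility and its inverse (valid for gamma != 1)
crra <- function(c, gamma) c^(1 - gamma) / (1 - gamma)
crra.inv <- function(u, gamma) (u * (1 - gamma))^(1 / (1 - gamma))

gamma <- 0.5

# A hypothetical 50/50 gamble paying 0.5 or 1.5 units of consumption
expected.value <- 0.5 * 0.5 + 0.5 * 1.5                             # 1
expected.utility <- 0.5 * crra(0.5, gamma) + 0.5 * crra(1.5, gamma)

# Certainty equivalent: the sure amount yielding the same utility as the gamble
certainty.equivalent <- crra.inv(expected.utility, gamma)
certainty.equivalent  # approximately 0.933, below the gamble's expected value of 1
```

The gap of roughly 0.067 between the expected value and the certainty equivalent is the risk premium; raising gamma increases the curvature of the utility function and widens this gap.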