
Foreign Exchange

The previous post examined exchange rate predictability (in-sample) in the context of various linear models. Since economic and financial systems are known to traverse structural and behavioural changes, different time series / economic models may be required to explain the empirical data at different times. Non-linear modelling of economic and financial time series presupposes the existence of different states of the world (i.e. regimes) and allows the time series dynamics to vary across regimes.

There are generally two ways to test for the existence of non-linearity. In the absence of any prior knowledge about non-linear structures in the data, Ramsey's RESET test, the Tsay test and the BDS test can be used to detect them. By contrast, if a specific form of non-linearity is suggested by economic or financial theory, it is preferable to test for that perceived structure and construct the requisite model directly.
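As a rough illustration of the first route, general-purpose linearity tests are available in standard R packages; the snippet below is only a minimal sketch (assuming the returns series is held in a vector such as the first column of cad.diff used later on), not the exact battery of tests run for this post.

# Sketch: generic tests for neglected non-linearity (assumed objects, not the exact code used here)
library(lmtest)    # resettest()
library(tseries)   # bds.test()

y     <- cad.diff[, 1]                 # exchange rate returns (assumed to exist)
y.lag <- c(NA, y[-length(y)])          # first lag as a simple regressor

resettest(y ~ y.lag, power = 2:3, type = "fitted")   # Ramsey's RESET test on an AR(1) regression
bds.test(na.omit(y), m = 3)                          # BDS test on the raw series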

In this post, the Threshold Vector Autoregressive (TVAR) and Self-Exciting Threshold Autoregressive (SETAR) models will be applied to a subset of the data for the GBP and CAD currency pairs. Neither the multivariate Markov switching model nor the Smooth Transition Autoregressive (STAR) model will be used: no working package was found for the former, and the data failed suitability tests for the latter.

Fitting the TVAR model requires specifying, among other parameters, [1] the threshold variable (the evolution of which governs the different regimes) and [2] the number of regimes. After looping through the variables in the dataset, the change in money supply and the change in the price index appear to yield the best results (lowest RMSE) for the CAD and GBP currency pairs respectively. The SETAR model is a special kind of TAR model in which the dependent variable is also the threshold variable.
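The selection loop itself is not reproduced below; a minimal sketch of the idea, assuming the candidate series sit in the columns of cad.tar.dat and that an rmse() helper such as the one defined a few lines further down is available, might look like this:

# Sketch: try each column as the threshold variable (mTh) and keep the lowest in-sample RMSE
library(tsDyn)

best <- list(rmse = Inf, mTh = NA)
for (j in 2:ncol(cad.tar.dat)) {            # column 1 holds the CAD/USD return itself
  fit <- TVAR(cad.tar.dat, lag = 1, nthresh = 2, thDelay = 1,
              trim = 0.25, mTh = j, include = 'both', plot = FALSE)
  err <- rmse(cad.diff[2:313, 1], fitted(fit)[, 1])   # RMSE of the exchange rate equation
  if (err < best$rmse) best <- list(rmse = err, mTh = j)
}
best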

Since the GBP/USD code is essentially the same, only the code for the CAD/USD pair is shown.
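The snippets assume that the tsDyn package (TVAR, SETAR and friends), the forecast package (auto.arima) and an rmse() helper are available; if no package supplying the latter is attached, a minimal stand-in would be:

library(tsDyn)      # TVAR(), selectSETAR(), setar(), regime()
library(forecast)   # auto.arima()

# Minimal RMSE helper (assumed; any equivalent, e.g. from the Metrics package, would do)
rmse <- function(actual, fitted) sqrt(mean((actual - fitted)^2, na.rm = TRUE))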

#Fundamentals
windows(width=14,height=12)
layout(matrix(c(1,1,2,3,1,1,2,3),nrow=4,ncol=2,byrow=F))
par(mai=c(0,0.55,0.30,0.1))

cad.tvar.fund<-cad.tar.sub
# Fit a 2-threshold (3-regime) TVAR with the change in money supply (column 5) as threshold variable
cad.tvar<-TVAR(cad.tar.dat, lag=1, nthresh=2, thDelay=1, trim=0.25, mTh=5, include='both', plot=F)
fx<-as.matrix(fitted(cad.tvar)[,1])          # fitted values for the CAD/USD return equation
cad.tvar.f.err<-rmse(cad.diff[2:313,1],fx)   # in-sample RMSE

r<-matrix(regime(cad.tvar)[2:313])
rownames(r)=rownames(cad.diff)[2:313]

plot(cex.lab=1,cex.axis=0.75,cad.diff[2:313,1]*100,type='p',xaxt='n',xlab='',ylab='CAD/USD returns',col='black',main='Threshold Vector Autoregression For CAD/USD \n 2 Thresholds and 3 Regimes',cex.main=0.85)
grid()
lines(fx*100,col='purple')
lines(cad.diff[2:313,1]*100,col='black')
legend(bty='n','bottomleft',legend=c('Actual','Fitted'),fill=c('black','purple'),ncol=1,cex=1)
text(adj=0,col="black",nrow(r)/2+40,min(cad.diff[2:313,1])*100,paste('RMSE :: ',cad.tvar.f.err),font=2,cex=1)

par(mai=c(0,0.55,0,0.1))
plot(ylim=c(min(cad.diff[2:313,5])*100,max(cad.diff[2:313,5])*100+0.5),cad.diff[1:313,5]*100,type='l',xaxt='n',xlab='',ylab='Change in Money Supply',cex.axis=0.8,cex.lab=1)
abline(h=cad.tvar$model.specific$Thresh[1]*100,col='black',lty='dotted')
abline(h=cad.tvar$model.specific$Thresh[2]*100,col='black',lty='dotted')
legend(bty='n','bottomright',legend=c('Low Regime','Middle Regime','High Regime'),fill=c('red','blue','green'),ncol=3,cex=1)
text(adj=0,col="black",0,max(cad.diff[2:313,5])*100+0.4,'Threshold Variable : Change in Money Supply',font=2,cex=1)

par(mai=c(0.5,0.55,0,0.1))
plot(lwd=1,xaxt='n',ylab='Regime Index',xlab='',cex.axis=0.8,cex.lab=1,r,type='h',col=ifelse(r==1,'red',ifelse(r==2,'blue','green')))
axis(tick=FALSE,1, at=1:312, labels=rownames(r),las=1,cex.axis=0.8,las=1,cex=0.8)

The first plot shows the actual CAD/USD returns as well as the fitted values. Compared to the linear models, the RMSE of the TVAR model is clearly superior, providing some support for non-linear models even against the BMA technique. The second plot depicts the evolution of the threshold variable; the dotted horizontal lines represent the two thresholds that separate the data into three regimes. The final plot depicts the regime to which each value of the threshold variable belongs. Each regime is associated with a different model.

Next let’s plot the results for the SETAR model.

# Setar

a<-auto.arima(cad.diff[1:313,1])   # linear benchmark: ARIMA fitted to the returns series
f<-fitted(a)[3:313]
arerr<-rmse(cad.diff[3:313,1],f)

m<-selectSETAR(cad.diff[1:313,1],m=2,nthresh=2,include='both')   # grid search for the two thresholds
s<-setar(cad.diff[1:313,1], m=2, thDelay=0,th=c(-0.000985251,0.002029066))   # thresholds taken from the selectSETAR output
fx<-fitted(s)[3:313]
rm<-rmse(cad.diff[3:313,1],fx)

windows(width=14,height=12)
layout(matrix(c(1,1,2,1,1,2),nrow=3,ncol=2,byrow=F))
par(mai=c(0,0.55,0.30,0.1))

r<-matrix(regime(s)[3:313])
rownames(r)=rownames(cad.diff)[3:313]

plot(cex.lab=1,cex.axis=0.75,cad.diff[3:313,1],type='p',xaxt='n',xlab='',ylab='CAD/USD returns',col='black',main='SETAR(2) VS ARIMA (1,0,0) For CAD/USD \n 2 Thresholds and 3 Regimes',cex.main=0.85)
grid()
abline(h=m$th[1],col='black',lty='dotted')
abline(h=m$th[2],col='black',lty='dotted')
lines(f,col='darkorange')
lines(fx,col='purple')
lines(cad.diff[3:313,1],col='black')
legend(bty='n','bottomleft',legend=c('Actual','Fitted-ARIMA(1,0,0)','Fitted-SETAR(2)','Low Regime','Middle Regime','High Regime'),fill=c('black','darkorange','purple','red','blue','green'),ncol=1,cex=1)
text(adj=0,col="black",nrow(r)/2-40,min(cad.diff[3:313,1]),paste('SETAR-RMSE :: ',rm,'   ARIMA-RMSE ::',arerr),font=2,cex=1)

par(mai=c(0.5,0.55,0,0.1))
plot(lwd=1,xaxt='n',ylab='Regime Index',xlab='',cex.axis=0.8,cex.lab=1,r,type='h',col=ifelse(r==1,'red',ifelse(r==2,'blue','green')))
axis(tick=FALSE,1, at=3:313, labels=rownames(cad.diff)[3:313],las=1,cex.axis=0.8,las=1)

Unlike the previous model, the SETAR specification uses lagged values of the dependent variable itself as the threshold variable. As usual, the first plot depicts the actual and fitted values. In addition to the SETAR model, an ARIMA(1,0,0) model has been fitted to show the difference in predictive accuracy between linear and non-linear models that involve only one variable (i.e. CAD/USD returns). According to the RMSE measure, the SETAR model clearly outperforms the ARIMA model. It is slightly inferior to the TVAR model above, a result that can be explained by the fact that the TVAR specification is conditioned on more information (i.e. the change in Canadian money supply over time).
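For reference, the fitted two-threshold SETAR has the generic piecewise-autoregressive form below, with r_t the CAD/USD return, d the threshold delay and theta_1 < theta_2 the two estimated thresholds (this is the textbook form of the model, not output copied from the fit):

$$
r_t = \phi_0^{(j)} + \phi_1^{(j)} r_{t-1} + \phi_2^{(j)} r_{t-2} + \varepsilon_t,
\qquad j =
\begin{cases}
1, & r_{t-d} \le \theta_1 \\
2, & \theta_1 < r_{t-d} \le \theta_2 \\
3, & r_{t-d} > \theta_2
\end{cases}
$$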

The same procedure applied to the GBP/USD currency pair yields the following plots.

Compared to the linear models for the GBP/USD currency pair, the TVAR model has the lowest RMSE. The optimal threshold variable appears to be the British price index. Again there are two thresholds, segmenting the data into three regimes, each associated with a different model.

The SETAR model applied to the GBP/USD currency pair produces superior results relative to the ARIMA model, but not relative to the BMA technique from the previous post.


In this post, a series of standard linear models will be fitted to a small selection of fundamental economic data I managed to scrape off the internet. The in-sample forecast performance of the Purchasing Power Parity (PPP) model, the Uncovered Interest Parity (UIP) model, the Dornbusch-Frankel model (with share prices) and a Bayesian Model Averaging (BMA) approach will be assessed using the standard RMSE measure of forecast accuracy. Requisite data was collected for two currency pairs: GBP/USD and CAD/USD. Instead of differentiating between different types of non-stationarity in the series, I simply fitted the models under a variety of assumptions and plotted the results. In both cases, the more complicated Vector Error Correction (VECM) specifications of the candidate models turn out to be inferior to the simpler Vector Autoregression (VAR) specifications. The Bayesian Model Averaging technique provides the lowest RMSE for CAD/USD and the second lowest for the GBP/USD pair, a result that speaks to the difficulty of linking exchange rate returns to economic fundamentals with any consistency.
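The estimation code for the individual VAR and VECM specifications is not shown below (only the BMA and plotting code is). Purely as an illustration, the three deterministic-term variants of, say, the UIP VAR referenced later (cad.uip.var.b / .c / .t) could have been produced along the following lines, where the construction of cad.uip.dat and the lag orders are my assumptions rather than the exact choices used:

# Sketch: UIP system = CAD/USD returns plus the relative T-bill differential (assumed columns)
library(vars)
library(urca)

cad.uip.dat   <- cbind(CADUSD = cad.diff[1:313, 1], REL.TBILL = cad.fundamentals[1:313, 1])
cad.uip.var.b <- VAR(cad.uip.dat, p = 1, type = 'both')    # constant and trend
cad.uip.var.c <- VAR(cad.uip.dat, p = 1, type = 'const')   # constant only
cad.uip.var.t <- VAR(cad.uip.dat, p = 1, type = 'trend')   # trend only

# One possible VECM counterpart: Johansen estimation, then its VAR representation
cad.uip.vecm.est.cc <- vec2var(ca.jo(cad.uip.dat, type = 'trace', ecdet = 'const', K = 2), r = 1)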

Given model uncertainty about the set of underlying candidate models, the BMA technique helps select different combinations of regressors based on performance. By averaging over many competing models, BMA incorporates model uncertainty into conclusions about parameters and predictions. For k potential variables, the BMA technique implies estimating 2^k combinations of regressors and thus 2^k candidate models. The model weights for this averaging stem from posterior model probabilities derived from Bayes' theorem.
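With the k = 19 regressors listed below, that already means 2^19 = 524,288 candidate models, which is why bms() samples the model space rather than enumerating it. The weights referred to here are exposed by the BMS package as posterior model probabilities; once the cad.bma.res object estimated further down exists, they can be inspected directly:

# Posterior model probabilities = the BMA weights (run after cad.bma.res has been estimated below)
library(BMS)
pmp.bma(cad.bma.res)[1:10, ]    # exact and MCMC-frequency weights of the ten best sampled models
plotModelsize(cad.bma.res)      # posterior distribution of model size (number of included regressors)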

Click here for more information on BMA

The underlying model for the BMA method is specified as log(FX_t) − log(FX_{t−1}) = X_t β + ε_t, where X is a matrix of exchange rate determinants. For the CAD/USD currency pair, X contains the following variables (some of which are defined relative to their US counterparts):

  • [1] Canadian 3-month TBill (CADTBILL)
  • [2] Canadian Industrial Production (CANINDP)
  • [3] Canadian Inflation (CANINFL)
  • [4] Canadian Money Supply (CANMSUPP)
  • [5] Canadian Share price index (CANSPIND)
  • [6] Canadian Price Index (CANPI)
  • [7] Relative T-bills (REL.TBILL)
  • [8] Relative Ind. production (REL.Ind.Prod)
  • [9] Relative Money Supply (REL.M.SUPPLY)
  • [10] Relative Inflation (REL.Infl)
  • [11] Relative Share price (REL.Stock.P)
  • [12] Change in Canadian 3-month TBill (CADTBILL..diff)
  • [13] Change in Canadian Industrial Production (CANINDP.diff)
  • [14] Change in Canadian Inflation (CANINFL..diff)
  • [15] Change in Canadian Money Supply (CANMSUPP..diff)
  • [16] Change in Canadian Share price index (CANSPIND..diff)
  • [17] Change in Canadian Price Index (CANPI..diff)
  • [18] Crude Oil (Oil)
  • [19] Change in Crude Oil (Chg.OIL)

Let’s first load the data and plot the demeaned logarithmic data for the 2 currency pairs and 3 countries.


setwd("C:/University of Warwick/R experiments/FxLin")

library(quantmod)
library(timeSeries)
library(RColorBrewer)
library(plotrix)
library(rpanel)
library(symbols)
library(reshape)
library(fUnitRoots)
library(vars)
library(BMS)

#Download and set up data#######################################################

us.orig<-(as.matrix(read.table('USDAT.csv',header=T,sep=',',row.names=1)))
uk.orig<-(as.matrix(read.table('UKDAT.csv',header=T,sep=',',row.names=1)))
ca.orig<-(as.matrix(read.table('CADDAT.csv',header=T,sep=',',row.names=1)))
oil.orig<-(as.matrix(read.table('OILDAT.csv',header=T,sep=',',row.names=1)))

txt<-c('3-month TBILL','Ind.Production','Inflation','Money Supply','Stock Index','Price Level')
# rmse helper (reconstructed - the original definition was lost in formatting; any equivalent helper would do)
rmse<-function(actual,fitted) sqrt(mean((actual-fitted)^2,na.rm=TRUE))
################################################################################

#Visualise Data#################################################################

windows(width=14,height=12)
layout(matrix(c(1,1,2,2,3,1,1,2,2,4,5,6,7,8,9),nrow=5,ncol=3,byrow=F))
par(mai=c(0,0,0,0))

plot(ylim=c(min(ca.orig[,1]),max(ca.orig[,1])+0.065),ca.orig[,1],type='l',col='blue',lwd=1.5,xlab='',xaxt='n',ylab='')
grid(col='black')
rect(-15,max(ca.orig[,1])+0.05,length(ca.orig),max(ca.orig[,1])+1,density=NA,col='darkblue',lty=1,border='black')
text(adj=0,col="white",200,max(ca.orig[,1])+0.065,':: Canadian Dollar / US Dollar',font=2,cex=1)
legend(bty='n','bottomleft',legend=c('US Data','CAD Data','GBP Data','Global'),fill=c('green','darkblue','red','darkorange'),ncol=1,cex=1)

#par(mai=c(0.15,0,0,0))
plot(ylim=c(min(uk.orig[,1]),max(uk.orig[,1])+0.065),uk.orig[,1],type='l',col='red',lwd=1.5,xlab='',xaxt='n',ylab='')
grid(col='black')
rect(-15,max(uk.orig[,1])+0.05,length(uk.orig),max(uk.orig[,1])+1,density=NA,col='darkred',lty=1,border='black')
text(adj=0,col="white",200,max(uk.orig[,1])+0.065,':: Sterling Pounds / US Dollar',font=2,cex=1)

par(mai=c(0,0,0.15,0))
for(i in 2:7){
  plot(main=txt[i-1],cex.main=0.85,ca.orig[,i],col='blue',type='l',xlab='',xaxt='n',ylab='',yaxt='n',lwd=1.5,ylim=c(min(ca.orig[,i]),max(ca.orig[,i])+0.1))
  lines(uk.orig[,i],col='red',lwd=1.5)
  lines(us.orig[,i-1],col='green',lwd=1.5)
}

par(mai=c(0,0,0.15,0))
plot(main='Crude oil index',cex.main=0.85,oil.orig[,1],col='dark orange',type='l',xlab='',xaxt='n',ylab='',yaxt='n',lwd=1.5)


For the high quality image click here

After ascertaining non-stationarity for all the variables, the aforementioned models were fitted to the data. A selection of the code dealing with the BMA, together with a plot of the in-sample fitted values, follows. The code is far from elegant, but I am not fussed as long as it works.
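The unit-root testing code is not reproduced here; since fUnitRoots is already loaded above, a minimal sketch of how each Canadian series could be checked (levels versus first differences) is something like the following, with the lag length chosen arbitrarily:

# Sketch: ADF tests on levels and first differences of each series (fUnitRoots loaded above)
for (i in 1:ncol(ca.orig)) {
  lvl <- adfTest(ca.orig[, i], lags = 4, type = 'c')         # level: expect non-rejection
  dif <- adfTest(diff(ca.orig[, i]), lags = 4, type = 'c')   # first difference: expect rejection
  cat(colnames(ca.orig)[i], ': p-value level =', lvl@test$p.value,
      ', first difference =', dif@test$p.value, '\n')
}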



#Bayesian Model Averaging####################################################

# Dependent variable: CAD/USD returns; regressors: levels, relative fundamentals, changes and oil (19 in total)
cad.bma.dat<-data.frame(CADUSD=cad.diff[1:313,1],cbind(ca.orig[1:313,2:7],cad.fundamentals[1:313,],cad.diff[1:313,2:7],Oil=oil.orig[1:313,],Chg.OIL=diff(oil.orig,1)[1:313]))
cad.bma.res = bms(cad.bma.dat, mprior = "uniform", g = "HQ",nmodel=2000,burn = 50000)
image(cad.bma.res,col=cm.colors(3),cex.main=0.75,cex.axis=0.75)
plotModelsize(cad.bma.res)
pdensa = pred.density(cad.bma.res, newdata = cad.bma.dat[1:313,2:20])
plot(cad.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='')
lines(pdensa$fit,col='green')

uk.bma.dat<-data.frame(UKUSD=uk.diff[1:313,1],cbind(uk.orig[1:313,2:7],uk.fundamentals[1:313,],uk.diff[1:313,2:7],Oil=oil.orig[1:313,],Chg.OIL=diff(oil.orig,1)[1:313]))
uk.bma.res = bms(uk.bma.dat, mprior = "uniform", g = "HQ",nmodel=2000,burn = 50000)
image(uk.bma.res,col=cm.colors(3),cex.main=0.75,cex.axis=0.75)
plotModelsize(uk.bma.res)
pdensb = pred.density(uk.bma.res, newdata = uk.bma.dat[1:313,2:20])
plot(uk.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='')
lines(pdensb$fit,col='green')

#############################################################################

#In sample Model Predictions#################################################

windows(width=14,height=12)
layout(matrix(c(1,2,3,4,5,6,7,8,8),nrow=3,ncol=3,byrow=F))
par(mai=c(0.05,0,0.15,0.05))
plot(main='Uncovered Interest Parity VAR',cex.main=0.85,cad.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='')
lines(fitted(cad.uip.var.b)[,1],col='green')
lines(fitted(cad.uip.var.c)[,1],col='blue')
lines(fitted(cad.uip.var.t)[,1],col='red')
legend(bty='n','bottom',legend=c('Both','Constant','Trend'),fill=c('green','blue','red'),ncol=3,cex=1)

plot(main='Purchasing Power Parity VAR',cex.main=0.85,cad.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='')
lines(fitted(cad.ppp.var.b)[,1],col='green')
lines(fitted(cad.ppp.var.c)[,1],col='blue')
lines(fitted(cad.ppp.var.t)[,1],col='red')
legend(bty='n','bottom',legend=c('Both','Constant','Trend'),fill=c('green','blue','red'),ncol=3,cex=1)

plot(main='Fundamentals VAR',cex.main=0.85,cad.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='')
lines(fitted(cad.fund.var.b)[,1],col='green')
lines(fitted(cad.fund.var.c)[,1],col='blue')
lines(fitted(cad.fund.var.t)[,1],col='red')
legend(bty='n','bottom',legend=c('Both','Constant','Trend'),fill=c('green','blue','red'),ncol=3,cex=1)

plot(main='Uncovered Interest Parity VECM',cex.main=0.85,cad.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='',yaxt='n')
lines(fitted(cad.uip.vecm.est.bc)[,1],col='green')
lines(fitted(cad.uip.vecm.est.cc)[,1],col='blue')
lines(fitted(cad.uip.vecm.est.tt)[,1],col='red')
lines(fitted(cad.uip.vecm.est.tc)[,1],col='dark orange')
legend(bty='n','bottom',legend=c('B/con','C/con','T/trend','T/con'),fill=c('green','blue','red','dark orange'),ncol=4,cex=1)

plot(main='Purchasing Power Parity VECM',cex.main=0.85,cad.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='',yaxt='n')
lines(fitted(cad.ppp.vecm.est.bc)[,1],col='green')
lines(fitted(cad.ppp.vecm.est.cc)[,1],col='blue')
lines(fitted(cad.ppp.vecm.est.tt)[,1],col='red')
lines(fitted(cad.ppp.vecm.est.tc)[,1],col='dark orange')
legend(bty='n','bottom',legend=c('B/con','C/con','T/trend','T/con'),fill=c('green','blue','red','dark orange'),ncol=4,cex=1)

plot(main='Fundamentals VECM',cex.main=0.85,cad.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='',yaxt='n')
lines(fitted(cad.fund.vecm.est.bc)[,1],col='green')
lines(fitted(cad.fund.vecm.est.cc)[,1],col='blue')
lines(fitted(cad.fund.vecm.est.tt)[,1],col='red')
lines(fitted(cad.fund.vecm.est.tc)[,1],col='dark orange')
legend(bty='n','bottom',legend=c('B/con','C/con','T/trend','T/con'),fill=c('green','blue','red','dark orange'),ncol=4,cex=1)

plot(main='Bayesian Model Averaging',cex.main=0.85,cad.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='',yaxt='n')
lines(pdensa$fit,col='purple')

c.u.v.b<-rmse(cad.diff[1:310,1],fitted(cad.uip.var.b)[,1])
c.u.v.c<-rmse(cad.diff[1:310,1],fitted(cad.uip.var.c)[,1])
c.u.v.t<-rmse(cad.diff[1:310,1],fitted(cad.uip.var.t)[,1])

c.u.ve.bc<-rmse(cad.diff[1:310,1],fitted(cad.uip.vecm.est.bc)[,1])
c.u.ve.cc<-rmse(cad.diff[1:310,1],fitted(cad.uip.vecm.est.cc)[,1])
c.u.ve.tt<-rmse(cad.diff[1:310,1],fitted(cad.uip.vecm.est.tt)[,1])
c.u.ve.tc<-rmse(cad.diff[1:310,1],fitted(cad.uip.vecm.est.tc)[,1])

c.p.v.b<-rmse(cad.diff[1:311,1],fitted(cad.ppp.var.b)[,1])
c.p.v.c<-rmse(cad.diff[1:311,1],fitted(cad.ppp.var.c)[,1])
c.p.v.t<-rmse(cad.diff[1:311,1],fitted(cad.ppp.var.t)[,1])

c.p.ve.bc<-rmse(cad.diff[1:311,1],fitted(cad.ppp.vecm.est.bc)[,1])
c.p.ve.cc<-rmse(cad.diff[1:311,1],fitted(cad.ppp.vecm.est.cc)[,1])
c.p.ve.tt<-rmse(cad.diff[1:311,1],fitted(cad.ppp.vecm.est.tt)[,1])
c.p.ve.tc<-rmse(cad.diff[1:311,1],fitted(cad.ppp.vecm.est.tc)[,1])

c.f.v.b<-rmse(cad.diff[1:312,1],fitted(cad.fund.var.b)[,1])
c.f.v.c<-rmse(cad.diff[1:313,1],fitted(cad.fund.var.c)[,1])
c.f.v.t<-rmse(cad.diff[1:313,1],fitted(cad.fund.var.t)[,1])

c.f.ve.bc<-rmse(cad.diff[1:311,1],fitted(cad.fund.vecm.est.bc)[,1])
c.f.ve.cc<-rmse(cad.diff[1:311,1],fitted(cad.fund.vecm.est.cc)[,1])
c.f.ve.tt<-rmse(cad.diff[1:311,1],fitted(cad.fund.vecm.est.tt)[,1])
c.f.ve.tc<-rmse(cad.diff[1:311,1],fitted(cad.fund.vecm.est.tc)[,1])

c.bma<-rmse(cad.diff[1:313,1],pdensa$fit)

plot(1:100, type="n", axes=T, xlab="", ylab="",bty='o',xaxt='n',yaxt='n')
rect(-10,-100,105,105,density=NA,col='darkblue',lty=1,border='black')
abline(h=98,col='white',lwd=1.5)
text(50,101,'In-sample accuracy RMSE CAD/USD',cex=0.95,col="cyan",font=2)
text(adj=0,col="white",2,95,paste('UIP (VAR) both : ',c.u.v.b))
text(adj=0,col="white",2,91,paste('UIP (VAR) const : ',c.u.v.c))
text(adj=0,col="white",2,87,paste('UIP (VAR) trend : ',c.u.v.t))
abline(h=83,col='white',lwd=1.5)
text(adj=0,col="white",2,79,paste('PPP (VAR) both : ',c.p.v.b))
text(adj=0,col="white",2,75,paste('PPP (VAR) const : ',c.p.v.c))
text(adj=0,col="white",2,71,paste('PPP (VAR) trend : ',c.p.v.t))
abline(h=67,col='white',lwd=1.5)
text(adj=0,col="white",2,63,paste('Fund (VAR) both : ',c.f.v.b))
text(adj=0,col="white",2,59,paste('Fund (VAR) const : ',c.f.v.c))
text(adj=0,col="white",2,55,paste('Fund (VAR) trend : ',c.f.v.t))
abline(h=51,col='white',lwd=1.5)
text(adj=0,col="white",2,47,paste('UIP (VECM) both : ',c.u.ve.bc))
text(adj=0,col="white",2,43,paste('UIP (VECM) const : ',c.u.ve.cc))
text(adj=0,col="white",2,39,paste('UIP (VECM) trend : ',c.u.ve.tt))
abline(h=35,col='white',lwd=1.5)
text(adj=0,col="white",2,31,paste('PPP (VECM) both : ',c.p.ve.bc))
text(adj=0,col="white",2,27,paste('PPP (VECM) const : ',c.p.ve.cc))
text(adj=0,col="white",2,23,paste('PPP (VECM) trend : ',c.p.ve.tt))
abline(h=19,col='white',lwd=1.5)
text(adj=0,col="white",2,15,paste('Fund (VECM) both : ',c.f.ve.bc))
text(adj=0,col="white",2,11,paste('Fund (VECM) const : ',c.f.ve.cc))
text(adj=0,col="white",2,7,paste('Fund (VECM) trend : ',c.f.ve.tt))
abline(h=4,col='white',lwd=1.5)
text(adj=0,col="yellow",2,1,paste('BMA : ',c.bma))

#############################################################################

windows(width=14,height=12)
layout(matrix(c(1,2,3,4,5,6,7,8,8),nrow=3,ncol=3,byrow=F))
par(mai=c(0.05,0,0.15,0.05))
plot(main='Uncovered Interest Parity VAR',cex.main=0.85,uk.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='')
lines(fitted(uk.uip.var.b)[,1],col='green')
lines(fitted(uk.uip.var.c)[,1],col='blue')
lines(fitted(uk.uip.var.t)[,1],col='red')
legend(bty='n','bottom',legend=c('Both','Constant','Trend'),fill=c('green','blue','red'),ncol=3,cex=1)

plot(main='Purchasing Power Parity VAR',cex.main=0.85,uk.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='')
lines(fitted(uk.ppp.var.b)[,1],col='green')
lines(fitted(uk.ppp.var.c)[,1],col='blue')
lines(fitted(uk.ppp.var.t)[,1],col='red')
legend(bty='n','bottom',legend=c('Both','Constant','Trend'),fill=c('green','blue','red'),ncol=3,cex=1)

plot(main='Fundamentals VAR',cex.main=0.85,uk.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='')
lines(fitted(uk.fund.var.b)[,1],col='green')
lines(fitted(uk.fund.var.c)[,1],col='blue')
lines(fitted(uk.fund.var.t)[,1],col='red')
legend(bty='n','bottom',legend=c('Both','Constant','Trend'),fill=c('green','blue','red'),ncol=3,cex=1)

plot(main='Uncovered Interest Parity VECM',cex.main=0.85,uk.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='',yaxt='n')
lines(fitted(uk.uip.vecm.est.bc)[,1],col='green')
lines(fitted(uk.uip.vecm.est.cc)[,1],col='blue')
lines(fitted(uk.uip.vecm.est.tt)[,1],col='red')
lines(fitted(uk.uip.vecm.est.tc)[,1],col='dark orange')
legend(bty='n','bottom',legend=c('B/con','C/con','T/trend','T/con'),fill=c('green','blue','red','dark orange'),ncol=4,cex=1)

plot(main='Purchasing Power Parity VECM',cex.main=0.85,uk.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='',yaxt='n')
lines(fitted(uk.ppp.vecm.est.bc)[,1],col='green')
lines(fitted(uk.ppp.vecm.est.cc)[,1],col='blue')
lines(fitted(uk.ppp.vecm.est.tt)[,1],col='red')
lines(fitted(uk.ppp.vecm.est.tc)[,1],col='dark orange')
legend(bty='n','bottom',legend=c('B/con','C/con','T/trend','T/con'),fill=c('green','blue','red','dark orange'),ncol=4,cex=1)

plot(main='Fundamentals VECM',cex.main=0.85,uk.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='',yaxt='n')
lines(fitted(uk.fund.vecm.est.bc)[,1],col='green')
lines(fitted(uk.fund.vecm.est.cc)[,1],col='blue')
lines(fitted(uk.fund.vecm.est.tt)[,1],col='red')
lines(fitted(uk.fund.vecm.est.tc)[,1],col='dark orange')
legend(bty='n','bottom',legend=c('B/con','C/con','T/trend','T/con'),fill=c('green','blue','red','dark orange'),ncol=4,cex=1)

plot(main='Bayesian Model Averaging',cex.main=0.85,uk.diff[1:313,1],type='l',col='black',lwd=1,xlab='',xaxt='n',ylab='',yaxt='n')
lines(pdensb$fit,col='purple')

c.u.v.b<-rmse(uk.diff[1:296,1],fitted(uk.uip.var.b)[,1])
c.u.v.c<-rmse(uk.diff[1:296,1],fitted(uk.uip.var.c)[,1])
c.u.v.t<-rmse(uk.diff[1:296,1],fitted(uk.uip.var.t)[,1])

c.u.ve.bc<-rmse(uk.diff[1:296,1],fitted(uk.uip.vecm.est.bc)[,1])
c.u.ve.cc<-rmse(uk.diff[1:296,1],fitted(uk.uip.vecm.est.cc)[,1])
c.u.ve.tt<-rmse(uk.diff[1:296,1],fitted(uk.uip.vecm.est.tt)[,1])
c.u.ve.tc<-rmse(uk.diff[1:296,1],fitted(uk.uip.vecm.est.tc)[,1])

c.p.v.b<-rmse(uk.diff[1:299,1],fitted(uk.ppp.var.b)[,1])
c.p.v.c<-rmse(uk.diff[1:299,1],fitted(uk.ppp.var.c)[,1])
c.p.v.t<-rmse(uk.diff[1:299,1],fitted(uk.ppp.var.t)[,1])

c.p.ve.bc<-rmse(uk.diff[1:299,1],fitted(uk.ppp.vecm.est.bc)[,1])
c.p.ve.cc<-rmse(uk.diff[1:299,1],fitted(uk.ppp.vecm.est.cc)[,1])
c.p.ve.tt<-rmse(uk.diff[1:299,1],fitted(uk.ppp.vecm.est.tt)[,1])
c.p.ve.tc<-rmse(uk.diff[1:299,1],fitted(uk.ppp.vecm.est.tc)[,1])

c.f.v.b<-rmse(uk.diff[1:311,1],fitted(uk.fund.var.b)[,1])
c.f.v.c<-rmse(uk.diff[1:311,1],fitted(uk.fund.var.c)[,1])
c.f.v.t<-rmse(uk.diff[1:311,1],fitted(uk.fund.var.t)[,1])

c.f.ve.bc<-rmse(uk.diff[1:311,1],fitted(uk.fund.vecm.est.bc)[,1])
c.f.ve.cc<-rmse(uk.diff[1:311,1],fitted(uk.fund.vecm.est.cc)[,1])
c.f.ve.tt<-rmse(uk.diff[1:311,1],fitted(uk.fund.vecm.est.tt)[,1])
c.f.ve.tc<-rmse(uk.diff[1:311,1],fitted(uk.fund.vecm.est.tc)[,1])

c.bma<-rmse(uk.diff[1:313,1],pdensb$fit)

plot(1:100, type="n", axes=T, xlab="", ylab="",bty='o',xaxt='n',yaxt='n')
rect(-10,-100,105,105,density=NA,col='darkblue',lty=1,border='black')
abline(h=98,col='white',lwd=1.5)
text(50,101,'In-sample accuracy RMSE for GBP/USD',cex=0.95,col="cyan",font=2)
text(adj=0,col="white",2,95,paste('UIP (VAR) both : ',c.u.v.b))
text(adj=0,col="white",2,91,paste('UIP (VAR) const : ',c.u.v.c))
text(adj=0,col="white",2,87,paste('UIP (VAR) trend : ',c.u.v.t))
abline(h=83,col='white',lwd=1.5)
text(adj=0,col="white",2,79,paste('PPP (VAR) both : ',c.p.v.b))
text(adj=0,col="white",2,75,paste('PPP (VAR) const : ',c.p.v.c))
text(adj=0,col="white",2,71,paste('PPP (VAR) trend : ',c.p.v.t))
abline(h=67,col='white',lwd=1.5)
text(adj=0,col="white",2,63,paste('Fund (VAR) both : ',c.f.v.b))
text(adj=0,col="white",2,59,paste('Fund (VAR) const : ',c.f.v.c))
text(adj=0,col="white",2,55,paste('Fund (VAR) trend : ',c.f.v.t))
abline(h=51,col='white',lwd=1.5)
text(adj=0,col="white",2,47,paste('UIP (VECM) both : ',c.u.ve.bc))
text(adj=0,col="white",2,43,paste('UIP (VECM) const : ',c.u.ve.cc))
text(adj=0,col="white",2,39,paste('UIP (VECM) trend : ',c.u.ve.tt))
abline(h=35,col='white',lwd=1.5)
text(adj=0,col="white",2,31,paste('PPP (VECM) both : ',c.p.ve.bc))
text(adj=0,col="white",2,27,paste('PPP (VECM) const : ',c.p.ve.cc))
text(adj=0,col="white",2,23,paste('PPP (VECM) trend : ',c.p.ve.tt))
abline(h=19,col='white',lwd=1.5)
text(adj=0,col="white",2,15,paste('Fund (VECM) both : ',c.f.ve.bc))
text(adj=0,col="white",2,11,paste('Fund (VECM) const : ',c.f.ve.cc))
text(adj=0,col="white",2,7,paste('Fund (VECM) trend : ',c.f.ve.tt))
abline(h=4,col='white',lwd=1.5)
text(adj=0,col="yellow",2,1,paste('BMA : ',c.bma))
#############################################################################

From the following plot associated with the BMA technique, we can see that the best model explaining CAD/USD returns contains five variables: [1] the change in the Canadian share price index, [2] the change in money supply, [3] Canadian industrial production, [4] the Canadian share price index and [5] Canadian inflation. White rectangles represent omitted variables, blue rectangles suggest a negative relation between the dependent variable and the particular explanatory variable, and pink shades suggest a positive relation. There is also some consistency across the best 1766 models in terms of the relation between exchange rates and fundamental variables. The change in the Canadian share price index appears to have a negative impact on exchange rate returns, perhaps implying an appreciation of the CAD relative to the USD when the change in Canadian share prices is positive. This relation appears to hold across all of the best models.

A different picture emerges for the GBP/USD currency pair, where the best model contains only one variable: [1] the change in the share price index. The GBP T-Bill differential appears to be quite important in explaining GBP/USD returns across the best 2000 models, bearing a negative relation to exchange rate returns. The greater the T-Bill differential, the more attractive GBP deposits become, the greater the demand for sterling relative to the USD, the greater the chance of a GBP appreciation and hence a decline in GBP/USD returns.
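The composition of the best model and the sign pattern described above can also be read off the fitted BMS objects directly, for example:

# Which regressors enter the single best model, and how consistent are the coefficient signs?
topmodels.bma(cad.bma.res)[, 1]   # 0/1 inclusion vector plus posterior probability of the best CAD model
coef(cad.bma.res)                 # PIP, posterior mean/sd and conditional sign certainty per regressor
topmodels.bma(uk.bma.res)[, 1]    # the same for the GBP/USD specification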

The following set of plots shows the fitted model values and the actual data, along with the RMSE associated with each specification. The Bayesian Model Averaging technique appears to outperform the other models for both currency pairs. Unfortunately, the more complicated VECM specifications are inferior to their VAR counterparts, and there is no particular consistency in performance across specifications. The only consolation lies in the BMA's outperformance across currency pairs and models.

The googleVis package provides an interface between R and the Google Visualisation API. With googleVis, users can easily create web pages with interactive charts based on R data frames and display them either via the local R HTTP help server or within their own sites, without uploading the data to Google. A browser with Flash and an internet connection is required. In this post I will plot some of the data from the previous blog entry using these pretty Google widgets. Since wordpress.com does not appear to support JavaScript unless the blog is hosted on a personal server, I will upload the HTML file to Dropbox and provide a (hopefully) publicly accessible link below.

Depending on the structure of the data frame, we may have to melt the data before calling the googleVis functions.

library(reshape)     # melt()
library(googleVis)   # gvis* chart constructors

kk<-melt(plevels.orig,id=rownames(plevels.orig))
mn<-melt(eff.orig,id=rownames(eff.orig))
k<-kk[,3]
tot<-cbind(mn,k)
colnames(tot)<-c('Country','Year','Relative GDP per Capita in PPP','Relative Price Levels')
# Motion chart of the merged data (reconstructed call - the original line was garbled; Year may need as.numeric())
M1<-gvisMotionChart(tot,idvar='Country',timevar='Year')
nn<-rownames(plevels.orig)
ptemp<-data.frame(Country=nn,plevels.orig)
colnames(ptemp)[1]<-'Country'
colnames(ptemp)[2:17]<-1995:2010
# Geo chart: world map colour-coded by relative price levels (reconstructed call; colour year chosen arbitrarily)
G1<-gvisGeoChart(ptemp,locationvar='Country',colorvar='2010')
stemp<-data.frame(Country=nn,round(sorted.y,2))
colnames(stemp)[1]<-'Country'
colnames(stemp)[2:17]<-1995:2010
# Table of the Balassa-Samuelson residuals, i.e. currency misalignments (reconstructed call)
tbl<-gvisTable(stemp)
gt<-gvisMerge(gvisMerge(G1,M1,horizontal=TRUE),tbl)
plot(gt)
print(gt, file="C:/University of Warwick/R experiments/Fx/gvis.html")


The above code was simply tacked onto the previous blog post's code. The gvis.html file contains the HTML/JavaScript code which takes care of the rest. The first plot shows a map of the world, colour-coded by the relative price levels across all countries. Hovering the mouse over a country yields more exact information. The second plot is a motion chart with many plotting options available. The table at the bottom of the page records the residuals from the Balassa-Samuelson regression (i.e. currency misalignments across time and countries). The treemap plotted below this table visualises the same data set. The titles for the plots were added after the HTML file was generated in R.

The following is a snapshot of the plots, followed by the Dropbox link.


The treemap was added later. We need at least four columns of data: the id of each rectangle (country), the parent of each country (global), the values for the countries (absolute values of currency misalignments in 1995 across 189 countries) and the colours for the countries (the original values of said misalignments).

parentcol<-rep('Global',189)
ttemp<-data.frame(stemp[,1],parentcol)
pptemp<-data.frame(ttemp,cbind(abs(stemp[,2]),stemp[,2]))
colnames(pptemp)<-c('Country','Parent','val','col')

parentrow<-data.frame(cbind('Global',NA,cbind(0,0)))
colnames(parentrow)<-colnames(pptemp)

tottemp<-rbind(pptemp,parentrow)
# Treemap of the 1995 misalignments, using the id/parent/size/colour columns built above (reconstructed call)
Tree<-gvisTreeMap(tottemp,idvar='Country',parentvar='Parent',sizevar='val',colorvar='col')
gt<-gvisMerge(gvisMerge(gvisMerge(G1,M1,horizontal=TRUE),tbl),Tree,horizontal=FALSE)
print(gt, file="C:/University of Warwick/R experiments/Fx/gvis.html")


The dropbox link is here ::  http://dl.dropbox.com/u/99657020/gvis.html

The Balassa-Samuelson effect describes the mechanism by which an appreciation of the real exchange rate occurs during the catch-up process, given faster relative productivity gains in the tradable goods sector of the economy. During the development process, productivity tends to increase more rapidly in the tradable goods sector than in the services sector. Since the prices of tradable goods are set by international markets, an increase in productivity in this part of the economy leads to an increase in wages that is not detrimental to international competitiveness. Since this increase in wages is not confined to the tradable goods sector but spreads across the economy as a whole, relative prices rise in the non-tradable goods sector despite its lower productivity growth. Given that the overall price index is an average of the two sectors, an increase in the prices of domestic goods relative to those from abroad results in an appreciation of the real exchange rate.

A typical approach to assessing the misalignment of currencies is to use a Purchasing Power Parity (PPP) model corrected for the Balassa effect. The PPP criterion itself is based on the law of one price, according to which the prices of comparable goods should be equivalent across different countries once expressed in a common currency. If the dollar cost of a bottle of Coke is cheaper in Japan than in the United States, arbitrage would force capital to flow from the United States to Japan until price levels are equalised via higher demand pressure in the latter economy. By itself, the PPP model does not account for the Balassa-Samuelson effect, the notion that the real exchange rate of an emerging country should appreciate given superior productivity gains in the tradable goods sector relative to advanced reference economies.

:: Click here for more information

To summarise :

  1. The overall price level of an economy is a weighted average of the price levels of the country's tradable goods sector and non-tradable goods sector.
  2. Productivity gains in the tradable sector are typically superior to those of the services sector, pushing wages up in BOTH the tradable and non-tradable sectors of the economy.
  3. Countries with rapidly expanding economies should have more rapidly appreciating exchange rates.

:: Click here for the source of the diagram

 

In order to test this effect, the relative price levels of a group of countries are regressed on PPP GDP per capita. Price and GDP data can be downloaded from the ICP (International Comparison Programme). After downloading and formatting the data for 189 countries across 16 years (1995-2010) in Excel, I imported the data into R, regressed relative price levels against relative PPP GDP per capita (in log form) using the United States as the reference economy (automatically accounted for in the downloaded dataset) and summarised the findings. The slider function is used to change the year and country of interest.


setwd("C:/University of Warwick/R experiments/Fx")

library(quantmod)
library(timeSeries)
library(RColorBrewer)
library(plotrix)
library(rpanel)
library(symbols)

#Download and set up data####################################################
plevels.orig<-as.matrix(read.table('prices.csv',header=T,sep=',',row.names=1))
eff.orig<-as.matrix(read.table('percapitagdp.csv',header=T,sep=',',row.names=1))

colnames(plevels.orig)<-1995:2010
colnames(eff.orig)<-1995:2010
#############################################################################

#Balassa-Samuelson-Regression################################################
est<-list()
sorted.y<-NULL

for(i in 1:ncol(eff.orig)){
    est$pred[[i]]<-lm(log(plevels.orig[,i])~log(eff.orig[,i]))
    est$alphas[[i]]<-as.matrix(coef(est$pred[[i]])[1])
    est$betas[[i]]<-as.matrix(coef(est$pred[[i]])[2])
    est$resid[[i]]<-resid(est$pred[[i]])
    sorted.y<-cbind(sorted.y,as.matrix(est$resid[[i]]))
}

windows(width=14,height=12)
layout(matrix(c(1,1,1,2,1,1,1,2,3,4,5,6),nrow=4,ncol=3,byrow=F))
colcodes<-rainbow(nrow(eff.orig),alpha=0.6)

if (interactive()) {
  draw <- function(panel) {
    par(mai=c(0,0,0.2,0))

    xcoord<-est$pred[[panel$year]]$model[,2]
    ycoord<-est$pred[[panel$year]]$model[,1]
    plot(main=paste('| Balassa-Samuelson | ',' :: log (relative price levels)=',round(est$alpha[[panel$year]],2),'+',round(est$betas[[panel$year]],2),'*log (relative PPP GDP per capita) ::'),cex.main=0.85,bty='o',xlab='',ylim=c(min(ycoord),max(ycoord)+0.5),xaxt='n',x=xcoord,y=ycoord,pch=21,col=ifelse(rownames(eff.orig)==rownames(eff.orig)[panel$country],'red','black'),cex=0.75)

    text(x=xcoord,y=ycoord-0.06,rownames(eff.orig),cex=0.75,col=ifelse(rownames(eff.orig)==rownames(eff.orig)[panel$country],'darkred','black'))
    abline(est$pred[[panel$year]],lwd=1.5,col='darkblue')
    draw.circle(x=xcoord[panel$country],y=ycoord[panel$country],c(0.11,0.10,0.1),border=NA,col=heat.colors(40,alpha=0.2),lty=1,lwd=1)
    abline(v=xcoord[panel$country],h=ycoord[panel$country],col='red',lwd=1,lty='dashed')
    text(adj=0,min(xcoord)-0.05,max(ycoord)+0.55,paste(colnames(eff.orig)[panel$year],'::',rownames(eff.orig)[panel$country]),cex=0.85,col="black",font=2)

    par(mai=c(0.5,0,0.2,0))
    barplot(main=paste('Misalignments for ', rownames(eff.orig)[panel$country]),cex.main=0.85,bty='o',horiz=F,border='black',sorted.y[panel$country,],col=ifelse(sorted.y[panel$country,]<=0,"blue","cyan"),names.arg=1995:2010,las=2)

    par(mai=c(0,0,0.2,0))
    plot(xlab='',yaxt='n',xaxt='n',plevels.orig[panel$country,],type='l',col='blue',lwd=2,main='Relative Price Levels',cex.main=0.85)
    barplot(space=0,beside=TRUE,xlab='',ylim=c(min(plevels.orig[panel$country,])-3,max(plevels.orig[panel$country,])+10),yaxt='n',xaxt='n',plevels.orig[panel$country,],col='darkblue',add=TRUE,density=50,angle=45)
    grid(lty=1,col="darkgray")

    par(mai=c(0,0,0.2,0))
    plot(xlab='',yaxt='n',xaxt='n',eff.orig[panel$country,],type='l',col='orange',lwd=2,main='Relative GDP per capita in PPP',cex.main=0.85)
    barplot(space=0,xlab='',ylim=c(min(eff.orig[panel$country,])-3,max(eff.orig[panel$country,])+10),yaxt='n',xaxt='n',eff.orig[panel$country,],col='darkorange',add=TRUE,density=50,angle=45)
    grid(lty=1,col="darkgray")

    par(mai=c(0,0,0,0))
    under<-as.matrix(subset(sorted.y,sorted.y[,panel$year]<0)[,panel$year])
    over<-as.matrix(subset(sorted.y,sorted.y[,panel$year]>0)[,panel$year])

    pie3D(font=2,cex.main=0.85,main=paste('Proportion of misalignments for',colnames(as.matrix(eff.orig))[panel$year]),col=c('red','green'),mar=c(1,1,2,1),theta=1,radius=1,labelcex=0.70,shade=0.6,c(nrow(under),nrow(over)),labels=c(paste('Undervalued ',round(nrow(under)/189*100,0),'%'),paste('Overvalued ',round(nrow(over)/189*100,0),'%')))

    par(mai=c(0,0,0,0))
    temp1<-as.matrix(under)
    temp2<-as.matrix(over)
    plot(1:100, type="n", axes=T, xlab="", ylab="",bty='o',xaxt='n',yaxt='n')
    rect(-10,-5,105,105,density=NA,col='darkblue',lty=1,border='black')
    abline(h=80,col='white',lwd=1.5)
    text(50,90,'Summary',cex=1,col="white",font=2)
    text(adj=0,col="white",2,70,paste('Year : ',colnames(as.matrix(eff.orig))[panel$year]),font=2,cex=1)
    text(adj=0,col="white",2,60,paste('Country : ',rownames(as.matrix(eff.orig))[panel$country]),font=2,cex=1)
    text(adj=0,col="white",2,50,paste('Misalignment : ',ifelse(sorted.y[panel$country,panel$year]<0,'Undervalued','Overvalued')),font=2,cex=1)
    text(adj=0,col="white",2,40,paste('Amount : ',round(sorted.y[panel$country,panel$year]*100,2),'%'),font=2,cex=1)
    text(adj=0,col="white",2,30,paste('Largest Overvaluation : ',rownames(temp2)[index(temp2)[temp2==max(temp2)]]),font=2,cex=1)
    text(adj=0,col="white",2,20,paste('Largest Undervaluation : ',rownames(temp1)[index(temp1)[temp1==min(temp1)]]),font=2,cex=1)

    panel
  }
  panel<- rp.control(year = 1)
  rp.slider(panel,year,1,16, action=draw,resolution=1,showvalue=TRUE)
  rp.slider(panel,country,1,189,action=draw,resolution=1,showvalue=TRUE)

}


Click here for the high quality image

Suppose we were interested in the currency misalignment of the Chinese Renminbi in 2003. We would push the year slider to 9 and the country slider to 37.  

[1] The first plot charts the relative price level of each country in the dataset against the corresponding relative GDP per capita for the chosen year (2003 in this case). The red dashed vertical and horizontal lines meet in a circle that locates the selected country (China in this case). The blue, positively sloped line is the fitted Balassa-Samuelson regression. A point on this fitted line tells us the relative price level we would expect from a country with a given relative GDP per capita. The finding confirms the intuition that the larger the relative GDP per capita, the higher the relative price level, as evidenced by the positive slope of the regression line across the 189 countries in the chosen year. We can clearly see that China's price level plots below the level we would expect given its PPP GDP per capita. Since relative price levels can be interpreted as a reference real exchange rate, the Chinese currency is undervalued even after accounting for the Balassa-Samuelson effect.

[2] The barplot below charts the magnitude of misalignment for the chosen country across all 16 years. We can see that the Chinese currency is undervalued in all 16 years, with signs of small corrections in 2001 and 2009/2010.

[3] The first two shaded barplots in the second column are simply time series plots of the data itself. We can see that for China, both price levels and GDP per capita have been rising dramatically over the 16 years.

[4] The pie chart tells us the proportion of under- and overvalued currencies for the selected year across the 189 countries. In 2003, roughly half of all currencies were undervalued and half overvalued.

[5] The final image in the plot is just a summary, giving the selected year and country, the direction of misalignment, the amount of under- or overvaluation, and the largest over- and undervaluation across the 189 countries in the chosen year.

To get an idea of how China behaved over time, the following animated GIF plots the requisite information for China across 1995-2010.

To get an idea of how other countries were doing in 2003, the following GIF shows the requisite information for the other 188 countries in the same year.