Wednesday, January 30, 2013

Approaching the Zero Bound - Bonds

As bonds approach the artificial zero bound, where do we go next, especially after the record-setting +30% in 2011?  The rolling 250-day total return has rarely gone negative since the inception of the Vanguard funds VBMFX and VUSTX (a quick check after the code below puts a number on "rarely").  I am intentionally an ex-bond fund manager, so I am very interested.

[Chart: horizon plot of 250-day total returns for VBMFX and VUSTX]
[Chart: direct-labeled xyplot of the same rolling returns]

R code in GIST:

require(latticeExtra)
require(directlabels)
require(reshape2)
require(quantmod)

# get prices for the two Vanguard bond funds
getSymbols("VUSTX", from = "1990-01-01")
getSymbols("VBMFX", from = "1990-01-01")

# rolling 250-day return on the adjusted close (column 6)
bonds.tr <- merge(ROC(VUSTX[, 6], 250), ROC(VBMFX[, 6], 250))
colnames(bonds.tr) <- c("VanguardLongTsy", "VanguardTotBnd")

# melt to long form for lattice
bonds.melt <- melt(as.data.frame(cbind(as.Date(index(bonds.tr)),
                                       coredata(bonds.tr))),
                   id.vars = 1)
colnames(bonds.melt) <- c("date", "fund", "totret250")
# restore Date class (origin needed since cbind coerced the dates to numeric)
bonds.melt$date <- as.Date(bonds.melt$date, origin = "1970-01-01")

asTheEconomist(
  horizonplot(totret250 ~ date | fund, origin = 0, horizonscale = 0.05,
              data = bonds.melt,
              strip = TRUE, strip.left = FALSE,
              par.strip.text = list(cex = 1.1),
              layout = c(1, 2),
              main = "Vanguard Bond Funds 250 Day Total Return"))

direct.label(
  xyplot(bonds.tr, screens = 1,
         ylim = c(-0.35, 0.35), scales = list(y = list(rot = 0)),
         col = theEconomist.theme()$superpose.line$col,
         par.settings = theEconomist.theme(box = "transparent"),
         lattice.options = theEconomist.opts(),
         xlab = NULL,
         main = "Vanguard Bond Funds 250 Day Total Return"),
  list("last.points", hjust = 1, cex = 1.2))

Monday, January 28, 2013

Applying Tradeblotter’s Nice Work Across Manager Rather than Time

Ever since I saw the very helpful distribution page first presented in Download and parse EDHEC hedge fund indexes, I have used it liberally.  Now that it has been functionalized (Visually Comparing Return Distributions), I thought I would amend it slightly to compare distributions of returns across managers rather than across time.  As a simple example, I compared the mutual funds offered by Vanguard and Pimco.  This amended function might be very helpful for internal performance monitoring of separately managed accounts across external or internal money managers.  For the purposes of composite dispersion, it might also serve as a good first summary look.

I like what I did in Pretty Correlation Map of PIMCO Funds, but this time I wanted to get the performance data directly in a table rather than calculating performance from multiple calls to getSymbols.  R can easily load these tables, but I got lazy, imported them into Excel https://docs.google.com/file/d/0ByeeEIaS0AOsRE1OeWk2VmFFVUU/edit, and cleaned them up with some old-fashioned pivot tables.  When I finished the data munging, I copied and pasted the result into a data sheet saved as .csv (I could have used R to get the data directly from Excel) and published it through Google Docs for anyone who wants to replicate what I have done.
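
As an aside, the Excel detour is optional.  Here is a minimal sketch (my addition; the filename and sheet layout are assumptions) of pulling the cleaned table straight into R with the readxl package, assuming a local workbook saved as vanguard_pimco.xlsx with the tidied table on its first sheet:

require(readxl)
# hypothetical local workbook; filename and sheet are assumptions
pimco_vanguard <- read_excel("vanguard_pimco.xlsx", sheet = 1)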

[Chart: Pimco and Vanguard Mutual Funds 1 Year Performance]
[Chart: Pimco and Vanguard Mutual Funds YTD 2013 Performance]

Now I’d like to incorporate another fine piece of work, Tracking Number of Historical Clusters, into this analysis, but I’ll save that for another post.

Code from Gist:

# copied almost entirely from http://tradeblotter.wordpress.com/
# I only take credit for the ugly/bad code
# amended to accept data in long form and compare across managers rather than time
# histogram, QQ plot, and ECDF plots aligned by scale for comparison
page.Distributions.long <- function (R, mgrcol, perfcol, ylim = c(0, 0.25)) {
  require(PerformanceAnalytics)
  op <- par(no.readonly = TRUE)
  # margins: c(bottom, left, top, right)
  par(oma = c(5, 0, 2, 1), mar = c(0, 0, 0, 3))
  mgr <- unique(R[, mgrcol])
  # one row of four panels (label, histogram, QQ plot, ECDF) per manager;
  # widths apply to the four columns of the layout
  layout(matrix(1:(4 * length(mgr)), ncol = 4, byrow = TRUE),
         widths = c(0.6, 1, 1, 1))
  chart.mins <- min(R[, perfcol], na.rm = TRUE)
  chart.maxs <- max(R[, perfcol], na.rm = TRUE)
  for (i in 1:length(mgr)) {
    x <- R[which(R[, mgrcol] == mgr[i]), perfcol]
    ax <- (i == length(mgr))  # draw axes only on the bottom row
    plot.new()
    text(x = 1, y = 0.5, adj = c(1, 0.5), labels = mgr[i], cex = 1.1)
    chart.Histogram(x, main = "", xlim = c(chart.mins, chart.maxs), ylim = ylim,
                    breaks = seq(round(chart.mins, digits = 2) - 0.01,
                                 round(chart.maxs, digits = 2) + 0.01, by = 0.01),
                    xaxis = ax, yaxis = ax,
                    show.outliers = TRUE, methods = c("add.normal"),
                    colorset = c("black", "#00008F", "#005AFF", "#23FFDC",
                                 "#ECFF13", "#FF4A00", "#800000"))
    abline(v = 0, col = "darkgray", lty = 2)
    chart.QQPlot(x, main = "", xaxis = ax, yaxis = ax, pch = 20,
                 envelope = 0.95, col = c(1, "#005AFF"),
                 ylim = c(chart.mins, chart.maxs))
    abline(v = 0, col = "darkgray", lty = 2)
    chart.ECDF(x, main = "", xlim = c(chart.mins, chart.maxs),
               xaxis = ax, yaxis = ax, lwd = 2)
    abline(v = 0, col = "darkgray", lty = 2)
  }
  par(op)
}
# data from the Pimco and Vanguard websites imported into Excel and translated to csv
# if local, uncomment the next line
# pimco_vanguard <- read.csv("vanguard_pimco.csv")
# get data from the published Google Docs spreadsheet
pimco_vanguard <- read.csv("https://docs.google.com/spreadsheet/pub?key=0AieeEIaS0AOsdDFET0ZmbTBKWDNoMnZrZ0oySWRia1E&single=true&gid=0&output=csv")
# column 4 is the 1-year (past 12 months) return;
# exclude 0s, assuming data does not exist for those funds
page.Distributions.long(pimco_vanguard[pimco_vanguard$X1Y != 0, ], perfcol = 4, mgrcol = 1, ylim = c(0, 10))
# column 3 is the YTD return
page.Distributions.long(pimco_vanguard[pimco_vanguard$YTD != 0, ], perfcol = 3, mgrcol = 1, ylim = c(0, 30))

Wednesday, January 16, 2013

Slightly Different Measure of Valuation

I grow tired of the tried-and-true standard measures of valuation, and from time to time I try to think of alternate methods.  One thought was to analyze Ken French's Book Equity (BE) to Market Equity (ME) breakpoints by percentile.  We can see, year by year, at what level a stock is considered cheap relative to the universe.  As these BE/ME breakpoints move lower, the market is willing to pay a higher price for each dollar of book value.  In reverse, as these breakpoints move higher, stocks fetch lower prices relative to book and can be considered cheaper.  Since French reports breakpoints at every fifth percentile, there are 20 series, and a horizon plot can provide a good overall look at this measure of valuation.

Here is a horizon plot of the absolute BE/ME breakpoints by fifth percentile since 1926.


For a more representative look, let's plot a horizon chart of each BE/ME breakpoint divided by its historical mean, minus 1.


For one more non-horizon look, we can use an xyplot.


In theory, I think this could provide yet another gauge of the cheapness of stocks, but of course there is plenty of research still to be done.

R Code from Gist:

require(latticeExtra)
require(xts)

loadfrench <- function(zipfile, txtfile, skip, nrows) {
  # my.url is the location of the zip file with the data
  my.url <- paste("http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/",
                  zipfile, ".zip", sep = "")
  # temp file for the zip and the name of the txt file it contains
  my.tempfile <- file.path(tempdir(), "frenchzip.zip")
  my.usefile <- file.path(tempdir(), paste(txtfile, ".txt", sep = ""))
  download.file(my.url, my.tempfile, method = "auto",
                quiet = FALSE, mode = "wb", cacheOK = TRUE)
  unzip(my.tempfile, exdir = tempdir(), junkpaths = TRUE)
  # read the space-delimited text file extracted from the zip
  french <- read.table(file = my.usefile,
                       header = FALSE, sep = "",
                       fill = TRUE,  # fill = TRUE handles bad rows
                       as.is = FALSE,
                       skip = skip, nrows = nrows)
  # get dates ready for the xts index (annual data, dated year-end)
  datestoformat <- french[, 1]
  datestoformat <- paste(substr(datestoformat, 1, 4), "12", "31", sep = "-")
  # unfortunately the last percentile in 1942 is not separated by a space,
  # so delete the last two columns
  french_xts <- as.xts(french[, 1:(NCOL(french) - 2)],
                       order.by = as.Date(datestoformat))
  # zero out missing data, which is denoted by -99.99 in the raw file
  missing <- which(french_xts < -0.99, arr.ind = TRUE)
  french_xts[missing[, 1], unique(missing[, 2])] <- 0
  # divide by 100 to get percent
  french_xts <- french_xts / 100
  return(french_xts)
}

filenames <- c("BE-ME_Breakpoints")
BE_ME <- loadfrench(zipfile = filenames[1], txtfile = filenames[1], skip = 3, nrows = 87)

# column 1 is the year, which we can remove;
# columns 2 and 3 are counts for positive and negative BE, which we also remove
BE_ME <- BE_ME[, 4:NCOL(BE_ME)]
colnames(BE_ME) <- paste(5 * 0:(NCOL(BE_ME) - 1), "pctile", sep = "")

# horizon plot of the absolute BE/ME breakpoints
horizonplot(BE_ME,
            layout = c(1, NCOL(BE_ME)),
            strip.left = FALSE,
            xlab = NULL,
            ylab = list(rev(colnames(BE_ME)), rot = 0, cex = 0.7),
            scales = list(x = list(tck = c(1, 0))),
            main = "Analysis of Historical BE_ME Breakpoints \n(data courtesy http://mba.tuck.dartmouth.edu/pages/faculty/ken.french)")

# horizon plot of breakpoints relative to their historical means
horizonplot(BE_ME / matrix(rep(apply(BE_ME, MARGIN = 2, FUN = mean), times = NROW(BE_ME)),
                           ncol = NCOL(BE_ME), byrow = TRUE) - 1,
            layout = c(1, NCOL(BE_ME)),
            horizonscale = 0.25,
            origin = 0,
            scales = list(y = list(relation = "same"), x = list(tck = c(1, 0))),
            strip.left = FALSE,
            xlab = NULL,
            ylab = list(rev(colnames(BE_ME)), rot = 0, cex = 0.7),
            main = "Analysis of Historical BE_ME Breakpoints - Mean \n(data courtesy http://mba.tuck.dartmouth.edu/pages/faculty/ken.french)")

require(RColorBrewer)
# all 20 percentile series on one panel
xyplot(BE_ME, col = c(brewer.pal(9, "Reds"), brewer.pal(9, "Blues")),
       screens = 1,
       scales = list(x = list(tck = c(1, 0))),
       xlab = NULL,
       ylab = "BE/ME Breakpoints",
       main = "Analysis of Historical BE_ME Breakpoints\n(data courtesy http://mba.tuck.dartmouth.edu/pages/faculty/ken.french)")

Thursday, January 10, 2013

Interesting Presentation from Van Eck Trackers Team

I just saw a very interesting presentation from the Van Eck Trackers Team (acquired by Van Eck in July 2012) at the CFA Society of Alabama January 2013 lunch.  I have not yet had the chance to read all of their research and attempt to replicate portions in R, but I found two points very compelling.  First, the research on the price of illiquidity,

Freed, Marc S., and Ben McMillan. "Investible Benchmarks and Hedge Fund Liquidity." The Journal of Wealth Management 14.3 (2011): 58-66.

published in The Journal of Wealth Management, could potentially provide an inverse answer to the pricing of Warren Buffett’s notion of “cash as a call option,” described in a Globe and Mail article and with my own thoughts in this post.  The price of illiquidity can be considerable, as shown below,

[Chart from the presentation: the price of illiquidity]

so, inversely, the opportunity from liquidity or available cash could be substantial.  Imagine what hedge fund performance might look like net of fees, net of taxes, and net of illiquidity.  Even the best managers cannot achieve a return high enough to compensate for that level of embedded costs.  Strangely enough, it also comes close to my crude heuristic of a 10% expected return premium necessary for me to justify illiquidity.
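
The arithmetic is easy to sketch with purely hypothetical numbers (my own illustration; none of these figures come from the presentation or the paper):

# hypothetical illustration only -- every figure below is an assumption
gross <- 0.12                    # assumed gross annual return
fees  <- 0.02 + 0.20 * gross     # a stylized "2 and 20" fee load
taxes <- 0.25 * (gross - fees)   # assumed blended tax rate on net-of-fee gains
illiq <- 0.04                    # assumed annual cost of illiquidity
gross - fees - taxes - illiq     # roughly 0.017, or 1.7%, left for the investor

Starting from a 12% gross return, very little remains once the layers stack up, which is the point.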

The presentation (a similar version was given to the CFA Society of Pittsburgh) also discusses the concept of True® Alpha.  The decomposition is nicely described in a graphic from the presentation.

[Graphic from the presentation: decomposition of True® Alpha]

I have not confirmed with the authors/speakers their use of R, but there were some remarkably familiar graph layouts that led me to believe R played a prominent role.  Regardless, I would really like to replicate some of the calculations in R.