Thursday, December 20, 2012

Shiny/R Conversion of Another One of My Favorite Mike Bostock d3 Examples

Mike Bostock has revolutionized visualization with d3 and his seemingly infinite set of examples.  In another adaptation of his amazing work, I will supplement one of my favorite examples, the interactive scatterplot, with data supplied by R through Shiny.  Often in finance, we use a scatterplot to explore the relationship between different asset classes or risk exposures.  This exploration becomes much deeper and more meaningful when we can interact with the plot in real time.

I thought it would be helpful to use the technique to look at the monthly returns of Vanguard funds representing different exposures.  A clear linear relationship immediately shows up between the equity funds (VFINX – Vanguard S&P 500, VDMIX – Vanguard Developed Markets, and VEIEX – Vanguard Emerging Markets).  Also, we can see the very beneficial effect of VBMFX – Vanguard Total US Bond throughout what we know was a tumultuous period.
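For anyone wanting to rebuild the data feed, here is a minimal sketch of the R side (my reconstruction, not the app's exact code; the tickers are the Vanguard funds named above):

#minimal sketch of building the monthly return matrix for the scatterplot
#a reconstruction, not the exact app code
require(quantmod)
tickers <- c("VFINX","VDMIX","VEIEX","VBMFX")
getSymbols(tickers, from = "2000-01-01")
#monthly total returns from the adjusted close of each fund
returns <- do.call(merge, lapply(tickers, function(x) monthlyReturn(Ad(get(x)))))
colnames(returns) <- tickers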

Live example hosted at Glimmer (IE does not correctly render IFRAME below)

Code at Github

Shiny SVG no d3–New and Improved

Joe Cheng, the fine author of RStudio’s Shiny, suggested in this Google Groups message to use htmlOutput rather than the ugly hack in my last post R Shiny svg with no d3.  As I should have known, it works great and eliminates all the useless javascript.  Unless you dig into the code, nothing changes, but I have posted a new live example at Glimmer and the code at Github.  For the ambitious, please feel free to use d3 or other javascript to show how we could change colors and other aesthetics.
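For reference, a minimal sketch of the htmlOutput approach (written against current Shiny conventions rather than the app's exact code; svg_string() is a hypothetical helper that returns the gridSVG markup):

#minimal sketch of the htmlOutput approach; svg_string() is a hypothetical
#helper that returns the SVG markup produced by gridSVG
require(shiny)
ui <- basicPage(
htmlOutput("svgplot") #container whose contents are treated as raw HTML
)
server <- function(input, output) {
#the SVG text passes straight through with no extra javascript
output$svgplot <- renderUI(HTML(svg_string()))
}
shinyApp(ui, server)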

Tuesday, December 18, 2012

R Shiny svg with no d3

see Shiny SVG no d3–New and Improved for a better version

Paul Murrell’s Technical Report reminded me that svg in the browser does not require d3.  With his gridSVG package, we can do all sorts of wonderful things without leaving R.  I think I prefer the power of d3.js, but here is an example of R creating svg from a lattice plot with gridSVG and then sending it to the browser through Shiny.
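Roughly, the capture step looks like this (a sketch assuming the gridToSVG() interface of gridSVG from that era; newer versions use grid.export() instead):

#minimal sketch: capture a lattice plot as SVG text with gridSVG
require(lattice)
require(gridSVG)
print(xyplot(mpg ~ wt, data = mtcars)) #lattice draws through grid, which gridSVG can export
gridToSVG("plot.svg") #write the current grid scene to an SVG file
svgtext <- paste(readLines("plot.svg"), collapse = "\n") #read back as text to hand to Shiny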

Here is the code at github.

Monday, December 17, 2012

From d3 to pdf (hopefully knitr) with R Shiny

Although I think I like the d3, R, and Shiny team the best, I could definitely envision a big need for multi-page pdf reports created with R, knitr, and LaTeX delivered to the browser with Shiny.  Shiny helpfully provides pre-built functionality to send .png charts and graphics created by R to the web.  This works especially well since most browsers offer containers for the graphics that can change dynamically.

In this quick experiment, I use R’s pdf() device instead of the hoped-for knit2pdf to see if sending the binary pdf data is possible.  It seems the downloadHandler function from Shiny currently handles only text.  Still, downloadHandler serves up downloads well, and I have included the code to get that to work also if you prefer this behavior.
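For reference, in later versions of Shiny the download route looks roughly like this (a sketch, not the post's original code; make_report() is a hypothetical function that draws the report pages):

#minimal sketch of serving a PDF via Shiny's downloadHandler
#make_report() is a hypothetical function that draws the report pages
output$report <- downloadHandler(
filename = function() "report.pdf",
content = function(file) {
pdf(file, width = 8.5, height = 11) #open the pdf() device on Shiny's temp file
make_report() #draw one or more pages
dev.off() #close the device so the binary file is complete
}
)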

Try it out at http://glimmer.rstudio.com/timelyportfolio/shiny-d3-pdf.  It seems to work with the newest Chrome and IE.

image

Code at Github.

Friday, December 14, 2012

d3, Shiny, and R Reporting Performance

I thought it would be interesting to offer a slightly different example of how we can use d3, R, and RStudio Shiny.  This time we will work through a simple example that reports portfolio or index performance.  Just as a test of my progress, I also threw in a jQuery UI accordion effect.

Live example here

image

Code here

Thursday, December 13, 2012

Shiny, R, d3 Adaptation of Mike Bostock’s Calendar

The idea with all the posts at http://timelyportfolio.blogspot.com/search/label/shiny was to learn both d3 and Shiny by iterating through multiple experiments.  This adaptation was my quickest yet at about 30 minutes.  Mike Bostock had done all the hard work; all I had to do was write about 40 lines of R code to download the index data through getSymbols from Yahoo! Finance and deliver that data to d3.
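The R side amounts to little more than this sketch (my reconstruction; the exact column names the d3 calendar expects are an assumption here):

#minimal sketch of the roughly 40 lines of R described above
require(quantmod)
getSymbols("^GSPC", from = "1990-01-01") #daily index data from Yahoo! Finance
out <- data.frame(Date = format(index(GSPC), "%Y-%m-%d"),
Close = as.numeric(Cl(GSPC)))
write.csv(out, "data.csv", row.names = FALSE) #d3.csv() can then read this file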

Example http://glimmer.rstudio.com/timelyportfolio/shiny-d3-calendar/

Code https://github.com/timelyportfolio/shiny-d3-calendar

image

Wednesday, December 12, 2012

more d3 with shiny and R (CPI from bls.gov)

Extending the marginal success achieved in

d3 Showreel Combined with R and Shiny 

d3 and r interacting through shiny

I have converted http://bl.ocks.org/3891711 to use data supplied by an R load of http://bls.gov CPI data through RStudio’s Shiny.  The example does not work perfectly due to some NaN in the d3 path, but I wanted to publish it for some feedback.

A live example is here (please be patient since it will download about 6mb before showing anything), and the code is here on Github.

image

Monday, December 10, 2012

d3 Showreel Combined with R and Shiny

Since the d3 portion of the example provided in my last post d3 and r interacting through shiny was so weak, I thought it would be interesting to combine the much more compelling Showreel Example with the same stock data.  However, this time the data will come from R’s getSymbols.  Also, most of the plots work better with cumulative return data, so we will use R to convert the price data into a cumulative return series.
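The conversion itself is only a few lines (a sketch of the idea, not the exact app code):

#minimal sketch of turning prices into a cumulative return series for the showreel
require(quantmod)
getSymbols("^GSPC", from = "2000-01-01")
ret <- ROC(Cl(GSPC), n = 1, type = "discrete") #daily discrete returns
ret[1] <- 0 #replace the leading NA
cumret <- cumprod(1 + ret) #growth of $1, better suited to most of the plots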

Almost all credit for the code contained in this example goes to Mike Bostock, and although much of the code is now gone, the structure and idea belong to Trestle Technology's Jeff Allen.

See it live in the browser at http://glimmer.rstudio.com/timelyportfolio/shiny-d3-showreel/, and all code is hosted on Github, so please play, experiment, and let me know what great things you do with this.

Friday, December 7, 2012

d3 and r interacting through shiny

I was amazed and delighted by Reconstruct Gene Networks Using Shiny.  Jeff accomplished what I knew was possible but had absolutely no idea how to implement.  With that boost, I went to work combining his d3 force layout with my d3 experiments discussed in Hi R and Axys, I’m d3.js “Nice to Meet You” (On the Iphone).  Unfortunately, I could not at first figure out how to host the example at http://glimmer.rstudio.com.  See it live at http://glimmer.rstudio.com/timelyportfolio/shiny-d3-plot/, and for all the source code go to https://github.com/timelyportfolio/shiny-d3-plot.

Wednesday, December 5, 2012

Apple Compared to Others with ggthemes

For a happy person delightfully concentrated in Apple, I wanted to show Apple’s performance versus Microsoft and Cisco in decade 1 (1990-2000) and decade 2 (2000-2012).  I thought this would give me a good chance to try out the very interesting work being done at https://github.com/jrnold/ggthemes.

From TimelyPortfolio

Just as a reference, my general go-to chart is the theEconomist set from latticeExtra.  Here is how it would look with that method.  I think it is still my favorite.

From TimelyPortfolio

For another chart since 2003 prepared by the far more capable folks at the Wall Street Journal, see Why Microsoft Beats Apple, a hat tip from Stock Picking Matters: Apple v Microsoft on The Big Picture by Global Macro Monitor, with a further hat tip to Jason Zweig.

My best guess in terms of the debate is some other company will resoundingly beat both Microsoft and Apple over the next decade.

R code in GIST:

require(ggplot2)
#to get ggthemes from jrnold github if you have not installed
#require(devtools)
#install_github("ggthemes","jrnold")
require(ggthemes)
require(reshape2)
require(directlabels)
require(quantmod)
require(PerformanceAnalytics)
tckrs <- c("CSCO","MSFT","AAPL","^GSPC")
getSymbols(tckrs,from="1990-01-01")
prices <- na.omit(merge(CSCO[,6],MSFT[,6],AAPL[,6],GSPC[,6]))
colnames(prices) <- c("Cisco","Microsoft","Apple","SP500")
returns <- prices/lag(prices) - 1
returns[1,] <- 0
cumul <- cumprod(returns+1)
cumul.df <- as.data.frame(cbind(index(cumul),coredata(cumul)))
cumul.melt <- melt(cumul.df,id.vars=1)
colnames(cumul.melt) <- c("Date","Stock","Cumul")
cumul.melt[,"Date"] <- as.Date(cumul.melt[,"Date"])
direct.label(
ggplot(cumul.melt, aes(x=Date,y=log(Cumul),colour=Stock)) +
geom_line() +
theme_economist() + #if you want to play try theme_wsj() or theme_few()
scale_colour_economist() +
ggtitle("Apple Compared to Others Since 1990")
, list(last.bumpup,hjust=0.45,cex=0.65))
ggsave(plot=last_plot(),file="apple.png",width=6,height=4)
#for reference I will use my old favorite theEconomist from latticeExtra
require(latticeExtra)
direct.label(
asTheEconomist(xyplot(log(Cumul)~Date,data=cumul.melt,groups=Stock,
main="Apple Compared to Microsoft and Cisco Since 1990")
)
, list(last.bumpup,hjust=0.25,cex=1))

Monday, November 19, 2012

Drawdown Determined Position Size

This caught my eye as I searched for some more academic research on my favorite risk measure drawdown.

Yang, Z. George and Zhong, Liang,
Optimal Portfolio Strategy to Control Maximum Drawdown -
The Case of Risk Based Dynamic Asset Allocation
(February 25, 2012).
Available at SSRN:
http://ssrn.com/abstract=2053854 or
http://dx.doi.org/10.2139/ssrn.2053854 

The paper seeks to do what I have tried to do without any real success—use drawdown to help determine position size.  I felt motivated to replicate in R their measure Rolling Economic Drawdown-Controlled Optimal Portfolio Strategy (REDD-COPS).  Since drawdown suffers from a significant lag, the authors suggest a rolling drawdown to offset some of the embedded lag:

"Intuitively, a drawdown look-back period H [length of rolling period] somewhat shorter than or similar to the market decline cycle is the key to achieve optimality. Substituting EDD with a lower REDD in equation (1), we have higher risky asset allocation to improve portfolio return
during a market rebound phase. In the examples followed, we'll use H = 1 year throughout."

The authors calibrate REDD-COPS on the S&P 500 as a single asset and then use REDD-COPS in a portfolio context with three assets (S&P 500 – SPY, US 20+ Year Treasury – TLT, and the DJ UBS Commodity Index).  I’ll show the results from my attempt to replicate the single-asset test.  Sorry for the Thanksgiving-but-ugly colors; I just could not resist.
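In the paper's notation, the risky-asset weight implemented in the code below is (my transcription of their equation (1), with $S$ the Sharpe ratio and $\delta$ the drawdown limit, so please check it against the paper):

$$x_t = \max\left(0,\ \frac{S/\delta + 1/2}{1-\delta^2}\cdot\frac{\delta - \mathrm{REDD}(t,H)}{1-\mathrm{REDD}(t,H)}\right)$$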

From TimelyPortfolio

Their results are interesting, but I’m not entirely convinced of the robustness of a system using REDD-COPS to determine position size, especially since their use of the entire-period Sharpe ratio requires hindsight.  However, despite the ultimate result, the byproduct discovery discussed in my post Cash–Opportunity Lost or Opportunity Gained was well worth the effort.  Stay tuned for my attempt at the multi-asset REDD-COPS system.

R code in GIST:

#explore Rolling Economic Drawdown - Controlled Optimal Portfolio Strategy (REDD-COPS)
#from Yang, Z. George and Zhong, Liang,
#Optimal Portfolio Strategy to Control Maximum Drawdown -
#The Case of Risk Based Dynamic Asset Allocation (February 25, 2012).
#Available at SSRN: http://ssrn.com/abstract=2053854 or
#http://dx.doi.org/10.2139/ssrn.2053854
require(quantmod)
require(PerformanceAnalytics)
require(RColorBrewer)
#get sp500 for first attempt
getSymbols("^GSPC", from = "1900-01-01")
GSPC.monthly <- to.monthly(GSPC)[,4]
index(GSPC.monthly) <- as.Date(index(GSPC.monthly))
roc <- ROC(GSPC.monthly, n = 1, type = "discrete")
#get 1 year t-bill for risk-free
getSymbols("GS1", src = "FRED")
#combine the monthly SP500 return with a monthly return of GS1 1 year treasury
returns <- na.omit( merge(roc, ((1+lag(GS1,1) / 100) ^ (1/12)) - 1) )
cumreturns <- cumprod(1+returns)
#calculate REDD assuming 1st column is risky asset and 2nd is risk-free
REDD <- function(x, rf) {
rf <- rf[index(x)]
result <- 1 - last(x) /
( coredata(max(x)) * coredata(last(rf)) / coredata(first(rf[index(x[which(x==max(x))])])) )
return(result)
}
#get REDD for SP500
#paper says
#"Intuitively, a drawdown look-back period H somewhat shorter than or similar to the
#market decline cycle is the key to achieve optimality. Substituting EDD with a lower
#REDD in equation (1), we have higher risky asset allocation to improve portfolio return
#during a market rebound phase. In the examples followed, we'll use H = 1 year throughout."
GSPC.redd <- rollapplyr(cumreturns[,1], width = 12, FUN = REDD, rf=cumreturns[,2])
#experiment with a couple different Sharpe options
GSPC.sharpe <- na.omit( runMax(lag(rollapplyr(returns[,1], width = 36, FUN = SharpeRatio, Rf = 0, p = 0.95, "StdDev"),12),
n = 36) )
#another sharpe alternative
#GSPC.sharpe <- 1 - na.omit( runMin(lag(rollapplyr(returns[,1], width = 36, FUN = SharpeRatio, Rf = 0, p = 0.95, "StdDev"),12),
# n = 12) )
#if you would like to use a constant Sharpe, specify here and uncomment
#the paper uses a little hindsight to use the historic 0.403 Sharpe
#GSPC.sharpe <- 0.403
#feel free to experiment here
#I will specify 0.2
drawdown.limit <- 0.20
position.size <- as.xts(apply(( (GSPC.sharpe/drawdown.limit + 0.5) / (1-drawdown.limit^2) ) *
#( (drawdown.limit - GSPC.redd) / (1 - GSPC.redd) ), MARGIN = 1, FUN = max, 0), order.by = index(GSPC.redd))
( (drawdown.limit - GSPC.redd) / (1 - GSPC.redd) ), MARGIN = 1, FUN = max, 0), order.by = index(GSPC.sharpe))
plot(position.size)
sum(position.size)/NROW(position.size)
#charts.PerformanceSummary(merge(lag(position.size)*roc, roc))
return.comps <- merge(lag(position.size)*returns[,1] + lag(1-position.size) * returns[,2], returns[,1], returns[,2])
colnames(return.comps) <- c("REDD-COPS","SP500","US1Y")
charts.PerformanceSummary(return.comps, ylog=TRUE,
colorset=brewer.pal(10,"Spectral")[c(2,4,7)], #Thanksgiving but ugly colors
main="REDD-COPS System Test (http://ssrn.com/abstract=2053854)")

Friday, November 9, 2012

Unbelievable and Amazing R Shiny–Web Parameter Test in 1.5 Hours

Life keeps getting better and better.  Yesterday, I discovered the absolutely unbelievable and amazing work RStudio has done with Shiny, which employs one of my favorite R packages, websockets.  As proof of the ease and quality, I was able to get it up and running within a couple of minutes.  This morning, basically starting from scratch, in less than 1.5 hours I was able to build a web-based interactive parameter test for a moving average system as my first example.

Below is a screencast of this very basic parameter testing web app.  I can only hope that this simple application can illustrate just how powerful Shiny is.  Just imagine pairing this with d3 or knitr.

R code for server.R and ui.R from GIST:

#almost entirely based on the 02_text and 03_mpg examples provided by RStudio Shiny
#all credit belongs to them
if (!require(PerformanceAnalytics)) {
stop("This app requires the PerformanceAnalytics package. To install it, run 'install.packages(\"PerformanceAnalytics\")'.\n")
}
if (!require(quantmod)) {
stop("This app requires the quantmod package. To install it, run 'install.packages(\"quantmod\")'.\n")
}
# Download data for a stock, if needed
require_symbol <- function(symbol) {
if (!exists(symbol))
getSymbols(symbol, src="FRED")
#getSymbols(symbol, from = "1900-01-01")
}
library(shiny)
# Define server logic required to summarize and view the selected dataset
shinyServer(function(input, output) {
make_chart <- function(symbol="SP500") {
# get price data if does not exist
require_symbol(symbol)
#would hope not to recalculate each time but for now will leave messy
price.monthly <- to.monthly(get(symbol))[,4]
ret.monthly <- ROC(price.monthly, type="discrete", n=1)
#calculate system returns
systemRet <- merge(
ifelse(lag(price.monthly > runMean(price.monthly, n=input$nmonths), k=1), 1, 0) * ret.monthly,
ret.monthly)
colnames(systemRet) <- c(paste(input$nmonths,"MASys",sep=""), symbol)
charts.PerformanceSummary(systemRet, ylog=TRUE)
}
make_table <- function(symbol="SP500") {
# get price data if does not exist
require_symbol(symbol)
#would hope not to recalculate each time but for now will leave messy
price.monthly <- to.monthly(get(symbol))[,4]
ret.monthly <- ROC(price.monthly, type="discrete", n=1)
#calculate system returns
systemRet <- merge(
ifelse(lag(price.monthly > runMean(price.monthly, n=input$nmonths), k=1), 1, 0) * ret.monthly,
ret.monthly)
colnames(systemRet) <- c(paste(input$nmonths,"MASys",sep=""), symbol)
table.Stats(systemRet)
}
# Generate a plot of the system and buy/hold benchmark given nmonths parameter
# include outliers if requested
output$systemPlot <- reactivePlot(function() {
make_chart()
})
# Generate a summary stats table of the dataset
output$view <- reactiveTable(function() {
make_table()
})
})
#server.R above; ui.R below
#almost entirely based on the 02_text and 03_mpg examples provided by RStudio Shiny
#all credit belongs to them
library(shiny)
# Define UI for dataset viewer application
shinyUI(pageWithSidebar(
# Application title
headerPanel("Shiny Moving Average Parameter Test"),
# Sidebar with controls to select a dataset and specify the number
# of observations to view
sidebarPanel(
numericInput("nmonths", "Number of months for moving average:", 10)
),
# Show a summary of the dataset and an HTML table with the requested
# number of observations
mainPanel(
plotOutput("systemPlot"),
tableOutput("view")
)
))

Wednesday, November 7, 2012

Cash–Opportunity Lost or Opportunity Gained

Tom Brakke from http://researchpuzzle.com/ wrote a great thought piece Cash as Trash, Cash as King, and Cash as a Weapon for the CFA Institute blog.  My favorite part comes in the last paragraph:

“That’s the kind of analysis that should be brought to the discussion of cash, not simple sayings that bounce back and forth in response to the mood of the market. Individual investors should not be afraid to hold cash, even when it’s earning little, if it’s available to them when needed most. And investment professionals should get away from misguided notions about how much cash is too much cash in a portfolio. Let the manager use the value and power of cash to execute a strategy. Then you can judge whether the strategy makes sense. Don’t remove cash as an effective weapon.”

Another way of looking at cash is to ask whether it represents the commonly accepted notion of opportunity lost (opportunity cost or “cash as trash”) or opportunity gained (Buffett’s cash as a “call option” as described in the solid Globe and Mail article).  I hope those who know me or read this blog know where I stand.  Cash is a refuge in the absence of opportunity, and I plan to spend significant time over the next couple of months exploring how to mathematically price cash as a call option.  If anyone has attempted this or read any research, please share it with me.

Interestingly enough, as a byproduct of some other research, yesterday I was confronted with something that I should have already known.  If you compare the 1 year US Treasury (not really cash but close enough) with just the price return of the S&P 500 starting from 1960, the price-only S&P 500 is extraordinarily unremarkable.

From TimelyPortfolio
From TimelyPortfolio

Also, cash does not look so bad when we consider the new Research Affiliates research "Glidepath Illusion".  Certainly commonly accepted “wisdom” does not seem so wise.

R code from GIST:

#start exploring Buffett's cash as a call option
#as described in
#http://www.theglobeandmail.com/globe-investor/investment-ideas/streetwise/for-warren-buffett-the-cash-option-is-priceless/article4565468/
require(latticeExtra)
require(directlabels)
require(reshape2)
require(quantmod)
require(PerformanceAnalytics)
#get sp500 for first attempt
getSymbols("SP500", src="FRED")
SP500.monthly <- to.monthly(SP500)[,4]
index(SP500.monthly) <- as.Date(index(SP500.monthly))
roc <- ROC(SP500.monthly, n = 1, type = "discrete")
#get 1 year t-bill for risk-free
getSymbols("GS1", src = "FRED")
#combine the monthly SP500 return with a monthly return of GS1 1 year treasury
returns <- na.omit( merge(roc, ((1+lag(GS1,1) / 100) ^ (1/12)) - 1) )
#do cumulative returns since 1960 so skip the 1950s
cumreturns <- cumprod(1+returns["1960::",])
cumreturns.df <- as.data.frame(cbind(index(cumreturns),coredata(cumreturns)))
colnames(cumreturns.df) <- c("Date","S&P500.Price","US1y")
cumreturns.df[,"Date"] <- as.Date(cumreturns.df[,"Date"])
cumreturns.melt <- melt(cumreturns.df,id.vars=1)
colnames(cumreturns.melt) <- c("Date","Index","CumulReturns")
#make the cumulative return line chart with latticeExtra
direct.label(asTheEconomist(xyplot(CumulReturns~Date, data = cumreturns.melt, groups=Index,
main = "S&P 500 (Price Only) Compared to US 1 Year Treasury Cumulative Growth")),
method = list("last.points",hjust=1,vjust=0,cex=1.2))
#make a barplot for comparison of cumulative returns
barplot(t(table.Stats(returns["1960::",])[7,] + 1) ^ 12 - 1,
beside=TRUE,
ylim=c(0,0.07),
names.arg=colnames(cumreturns.df)[2:3],
col=theEconomist.theme()$superpose.line$col,
las=1, cex.axis=0.8,
main="")
abline(h=0)
title(main="S&P 500 (Price Only) and US 1 Year Cumulative Return Since 1960",
adj=0.05,outer=TRUE,line=-2,cex.main=0.95)

Monday, October 22, 2012

Resurrect Posts on Japan and the Yen

As the Yen and Japan continue to get more interesting in my mind, I just wanted to resurrect some posts that I have done on Japan and the Yen and sort them by my favorites.

Japan Trade by Geographic Region
Japanese Trade and the Yen
Japan Intentional or Accidental Pursuit of Deflation
Japan Trade More Specifically with Korea

Just to add a chart, here is one using data from the Federal Reserve Bank of St. Louis (FRED).  While the extreme correlation between the Yen and the S&P 500 has limited the opportunity available in the Yen, the correlation has recently weakened as Japanese deficits have worsened and the Yen stopped getting stronger.

From TimelyPortfolio

R code:

require(latticeExtra)
require(quantmod)

getSymbols("DEXJPUS",src="FRED")
getSymbols("SP500", src="FRED")

asTheEconomist(xyplot(DEXJPUS,main="US Dollars for Japanese Yen Since 1970\nSource: Federal Reserve Bank of St. Louis"))

#merge the weekly returns of Yen and SP500
ret <- na.omit(merge(weeklyReturn(DEXJPUS),weeklyReturn(SP500)))
#use the rolling correlation method from PerformanceAnalytics chart.RollingCorrelation
rollcor <- as.xts(rollapply(ret, width = 208, FUN = function(x) cor(x[,
                     1, drop = FALSE], x[, 2, drop = FALSE]), by = 1,
                     by.column = FALSE, na.pad = FALSE, align = "right"))
xyplot(na.omit(merge(SP500,rollcor,DEXJPUS)),col=brewer.pal("RdBu",n=9)[c(9,2,8)],
              lattice.options=theEconomist.opts(),
              par.settings=theEconomist.theme(box="transparent"),
              scale=list(y=list(rot=0)),
              xlab=NULL,
              strip=strip.custom(factor.levels=c("S&P 500","Correlation (Rolling 4 Year) S&P 500 and USD/Yen","USD/Japanese Yen")),
              main = "S&P 500 and USD/Yen Since 1970\nSource: Federal Reserve Bank of St. Louis")

Tuesday, October 16, 2012

Japanese Government Bond (JGB) Data Since 1974

The Ministry of Finance Japan very generously provides data on JGBs back to 1974.  Here is a quick example of how to pull it into R and then graph it.

From TimelyPortfolio

R code in GIST (do raw for copy/paste):

#get Japan yield data from the Ministry of Finance Japan
#data goes back to 1974
require(latticeExtra)
require(xtsExtra)
url <- "http://www.mof.go.jp/english/jgbs/reference/interest_rate/"
filenames <- paste("jgbcme",c("","_2010","_2000-2009","_1990-1999","_1980-1989","_1974-1979"),".csv",sep="")
#load all data and combine into one jgb data.frame
jgb <- read.csv(paste(url,filenames[1],sep=""),stringsAsFactors=FALSE)
for (i in 2:length(filenames)) {
jgb <- rbind(jgb,read.csv(paste(url,"/historical/",filenames[i],sep=""),stringsAsFactors=FALSE))
}
#now clean up the jgb data.frame to make a jgb xts
jgb.xts <- as.xts(data.matrix(jgb[,2:NCOL(jgb)]),order.by=as.Date(jgb[,1]))
plot.xts(jgb.xts,ylim=c(0,12),screens=1,las=1)
plot.xts(jgb.xts,ylim=c(0,12),screens=c(rep(1,5),rep(2,5),rep(3,5)),las=1)
#use lattice to do the same thing
#for the sake of time will do final formatting here
xyplot(jgb.xts,col=brewer.pal("Blues",n=9)[5:9],
ylim=c(0,12),
screens=c(rep(1,5),rep(2,5),rep(3,5)),
lattice.options=theEconomist.opts(),
par.settings=theEconomist.theme(box="transparent"),
scale=list(y=list(rot=0)),
strip=strip.custom(factor.levels=c("1-5 Year","5-10 Year","10-40 Year"),style=5),
main="Japanese Government Bonds Since 1974")

Life on the Big International Frontier

Although I have used the Kenneth French data library extensively in various posts, I have not yet used the international data sets that accompany the wonderful paper:

Eugene F. Fama and Kenneth R. French (2012) "Size, Value, and Momentum in International Stock Returns", Critical Finance Review

To rectify this home bias, let’s generate some efficient frontiers for the biggest cap stocks by geographic region to see how the frontiers have evolved over the last 20 years.

From TimelyPortfolio

Eventually, I would like to think through some other methods of comparing risk, return, and weights across multiple frontiers.

R code from GIST (do raw for copy/paste):

loadfrench <- function(zipfile, txtfile, skip, nrows) {
require(xts)
#my.url will be the location of the zip file with the data
my.url=paste("http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/",zipfile,".zip",sep="")
#this will be the temp file set up for the zip file
my.tempfile<-paste(tempdir(),"\\frenchzip.zip",sep="")
#my.usefile is the name of the txt file with the data
my.usefile<-paste(tempdir(),"\\",txtfile,".txt",sep="")
download.file(my.url, my.tempfile, method="auto",
quiet = FALSE, mode = "wb",cacheOK = TRUE)
unzip(my.tempfile,exdir=tempdir(),junkpaths=TRUE)
#read space delimited text file extracted from zip
french <- read.table(file=my.usefile,
header = TRUE, sep = "",
as.is = TRUE,
skip = skip, nrows=nrows)
#get dates ready for xts index
datestoformat <- rownames(french)
datestoformat <- paste(substr(datestoformat,1,4),
substr(datestoformat,5,6),"01",sep="-")
#get xts for analysis
french_xts <- as.xts(french[,1:NCOL(french)],
order.by=as.Date(datestoformat))
#divide by 100 to get percent
french_xts <- french_xts/100
#delete missing data which is denoted by -0.9999
french_xts[which(french_xts < -0.99,arr.ind=TRUE)[,1],
unique(which(french_xts < -0.99,arr.ind=TRUE)[,2])] <- 0
return(french_xts)
}
filenames <- c("Global_25_Portfolios_ME_BE-ME","Europe_25_Portfolios_ME_BE-ME","Japan_25_Portfolios_ME_BE-ME","Asia_Pacific_ex_Japan_25_Portfolios_ME_BE-ME","North_America_25_Portfolios_ME_BE-ME")
#loop through the filenames to load the file for each region
for (i in 1:length(filenames)) {
assign(substr(filenames[i],1,4), loadfrench(zipfile=filenames[i],txtfile=filenames[i],skip=21,nrows=266))
}
#merge the data into one xts object for ease of reference and use
big <- get(substr(filenames[1],1,4))[,21:25]
colnames(big) <- paste(substr(filenames[1],1,4),".",c("expensive",2:4,"cheap"),sep="")
#also set up equal weight to just explore the regions bigcap without valuation
big.ew <- as.xts(apply(big,MARGIN=1,FUN=mean),order.by=index(big))
colnames(big.ew) <- substr(filenames[1],1,4)
for (i in 2:length(filenames)) {
temp <- get(substr(filenames[i],1,4))[,21:25]
colnames(temp) <- paste(substr(filenames[i],1,4),".",c("expensive",2:4,"cheap"),sep="")
big <- merge(big,temp)
temp.ew <- as.xts(apply(temp,MARGIN=1,FUN=mean),order.by=index(temp))
colnames(temp.ew) <- substr(filenames[i],1,4)
big.ew <- merge(big.ew,temp.ew)
}
#use the equal weighted big cap
portfolio <- big.ew #change to big if you want to see the full 5x5
require(fPortfolio)
#do a frontier plot full series and then 1990-1999 and 2000-current
#sloppy but it will work
frontier <- list(portfolioFrontier(as.timeSeries(portfolio["::1999",])),
portfolioFrontier(as.timeSeries(portfolio["2000::",])),
portfolioFrontier(as.timeSeries(portfolio)))
datelabels<-c("1990-1999","2000-2012","1990-2012")
#get colors with topo.colors for the three frontiers
#we will use the first 3 of the 4 supplied
colors <- topo.colors(4)[3:1]
for(i in 1:3) {
frontierPlot(frontier[[i]], pch=19, xlim=c(0,0.10), ylim=c(0,0.015), title=FALSE, col=c(colors[i],colors[i]), add=as.logical(i-1))
minvariancePoints(frontier[[i]],pch=19,col="red")
#tangencyPoints(frontier,pch=19,col="blue")
#tangencyLines(frontier,pch=19,col="blue")
#equalWeightsPoints(frontier[[i]],pch=15,col="grey")
singleAssetPoints(frontier[[i]],pch=19,cex=1,col=colors[i])
#twoAssetsLines(frontier,lty=3,col="grey")
#sharpeRatioLines(frontier,col="orange",lwd=2)
#legend("topleft",legend=colnames(portfolio),pch=19,col=topo.colors(10),
# cex=0.65)
#label assets
stats <- getStatistics(frontier[[i]])
text(y=stats$mean,x=sqrt(diag(stats$Cov)),labels=names(stats$mean),pos=4,col=colors[i],cex=0.7)
#set up function from equalWeightsPoints to also label the point
equalLabel <- function (object, return = c("mean", "mu"), risk = c("Cov", "Sigma",
"CVaR", "VaR"), auto = TRUE, ...)
{
return = match.arg(return)
risk = match.arg(risk)
data = getSeries(object)
spec = getSpec(object)
constraints = getConstraints(object)
numberOfAssets = getNAssets(object)
setWeights(spec) = rep(1/numberOfAssets, times = numberOfAssets)
ewPortfolio = feasiblePortfolio(data, spec, constraints)
assets = frontierPoints(ewPortfolio, return = return, risk = risk,
auto = auto)
text(assets, labels = "Equal-Weight", pos=4,...)
invisible(assets)
}
#equalLabel(frontier,cex=0.7,col="grey")
#label the frontier dates at minvariance point; again very sloppy but it works
#text(x=min(frontierPoints(frontier[[i]])[,1]),
# y=frontierPoints(frontier[[i]])[which(frontierPoints(frontier[[i]])[,1]==min(frontierPoints(frontier[[i]])[,1]))[1],2],
# labels=datelabels[i],col=colors[i],pos=2)
text(x=(minvariancePoints(frontier[[i]])[,1]),
y=(minvariancePoints(frontier[[i]])[,2]),
labels=datelabels[i],col=colors[i],pos=2)
}
title(main="Global Biggest Cap Efficient Frontier",xlab="Risk(cov)",ylab="Monthly Return")
mtext(side=3, text="source: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html",font=3,cex=0.8)
#also parallel coordinates of each of the minvariance might be interesting
minvar <- as.data.frame(rbind((minvariancePoints(frontier[[1]])),(minvariancePoints(frontier[[2]])),(minvariancePoints(frontier[[3]]))))
rownames(minvar) <- datelabels
require(MASS) #parcoord comes from MASS
parcoord(minvar,col=colors)
#might be nice to do animated gif or parallel coordinates of weights or risk/return
weightsPlot(frontier[[3]])

Monday, October 15, 2012

Not Much of a Grand Finale. What if We Go To 0?

When I ask the question “What if the US 10 year goes to 0?”, most do not know the effect, the catalyst, or if 0 has ever happened before.  The math is fairly simple to do in Excel or with an old-school calculator, but let’s use RQuantLib to do the pricing and then latticeExtra, with some slight adjustment in SVG, to draw the charts.  RQuantLib spits out a total return of 17% if we go to 0 by the end of October, which seems like a decent amount until we look at it on a chart.
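As a rough sanity check on that 17% (a back-of-envelope approximation, assuming the 10 year yielded somewhere near 1.7% at the time): a 10-year bond near par has a modified duration of roughly nine, so

$$\Delta P \approx -D\,\Delta y \approx -9 \times (-0.017) \approx 15\%,$$

with convexity and the coupon making up most of the remaining gap to 17%.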

From TimelyPortfolio

Mildly impressive, but the move is almost undetectable on a log scale.

From TimelyPortfolio

Throughout history, we really have only one good reference point, Japan, whose 10 year went very briefly to 0.47%, but we need to remember that happened during an extended deflation in which stocks and real estate lost 90%.  That 17% return if we go to 0 (actually much less, since 0.47% was the stopping point) is not all that helpful in such a devastating environment.

Even more strange is that the move we experienced over the last 15 months is greater than the potential move from here to 0.  On a six-month change in yield chart, a move from 2% in April to 0% in October seems perfectly normal if we forget about the starting point.

From TimelyPortfolio

Similarly, a 12 month rolling total return chart does not reveal anything odd.

From TimelyPortfolio

However, the starting point is critical.  Instead of subtracting ending yield from starting yield, ending yield divided by starting yield is more appropriate at this critical level.  Now we can see how unusual the move really is.

From TimelyPortfolio

If you are buying bonds to protect against or benefit from a disastrous, deflationary “end of the world,” please be aware that in the best case you make a fairly measly 17%.  Just moving back to where we were in Spring 2011 would mean a loss bigger than the absolute best case gain.

THIS IS NOT INVESTMENT ADVICE.  ALL OF THE ABOVE IS SIMPLY FOR ILLUSTRATION.

R code from GIST: (do raw for copy/paste)

require(quantmod)
require(PerformanceAnalytics)
require(RQuantLib)
require(latticeExtra)
getSymbols("DGS10",src="FRED")
GS10 <- to.monthly(DGS10)[,4]
getSymbols("GS10",src="FRED")
#Fed monthly series of yields is the monthly average of daily yields
#set index to yyyy-mm-dd format rather than to.monthly mmm yyy for better merging later
index(GS10)<-as.Date(index(GS10))
#add next month as 0%
GS10 <- rbind(GS10,as.xts(0,order.by=as.Date("2012-10-01")))
GS10pricereturn<-GS10
GS10pricereturn[1,1]<-0
colnames(GS10pricereturn)<-"PriceReturn-monthly avg GS10"
#use quantlib to price the GS10 and BAA bonds from monthly yields
#GS10 and BAA series are 20-30 year bonds so will advance date by 25 years
for (i in 1:(NROW(GS10)-1)) {
GS10pricereturn[i+1,1]<-FixedRateBondPriceByYield(yield=GS10[i+1,1]/100,issueDate=Sys.Date(),
maturityDate= advance("UnitedStates/GovernmentBond", Sys.Date(), 10, 3),
rates=GS10[i,1]/100,period=2)[1]/100-1
}
#total return will be the price return + yield/12 for one month
GS10totalreturn<-GS10pricereturn+lag(GS10,k=1)/12/100
colnames(GS10totalreturn)<-"TotalReturn-monthly avg GS10"
GS10totalreturn[1,1] <- 0
GS10cumul <- cumprod(GS10totalreturn+1)
asTheEconomist(xyplot(GS10cumul,scales=list(y=list(rot=0)),
main="US 10 Year Cumulative Growth since 1954"))
asTheEconomist(xyplot(log(GS10cumul),
scales=list(y=list(rot=0)),
main="US 10 Year Cumulative Growth (log scale) since 1954"))
asTheEconomist(xyplot(ROC(GS10cumul,12, type = "discrete"),
scales=list(y=list(rot=0)),
main="US 10 Year 12 Month Total Return since 1954"))
roc <- ROC(GS10,12,type="discrete")
asTheEconomist(xyplot(roc["::2012-09-01"],
scales=list(y=list(rot=0)),
main="US 10 Year 12 Month % Change in Yield since 1954"))
roc["2012-10-01"] <- -1
asTheEconomist(xyplot(roc,
scales=list(y=list(rot=0)),
main="US 10 Year 12 Month % Change in Yield (with 0% for October 2012) since 1954"))
plot.zoo(GS10)
asTheEconomist(xyplot(diff(GS10,6),
scales=list(y=list(rot=0)),
main="US 10 Year 6 Month Change in Yield since 1954"))

Tuesday, October 2, 2012

Emerging as Low Vol

Extending the series begun with When Russell 2000 is Low Vol, I thought I should take a look at Emerging Market stocks during periods of low volatility relative to the S&P 500.  So you can replicate this even without access to expensive data, let’s use the Vanguard Emerging Market Fund (VEIEX) and the Vanguard S&P 500 Fund (VFINX) as proxies.  In the 12-month rolling regression, we see the same fairly steadily increasing beta and correlation of Emerging Market stocks to the S&P 500 that we saw in the Russell 2000.

From TimelyPortfolio

If I progress further on this research, I will have to work on an adaptive definition of “low vol”, but for the purpose of this post, I defined “low vol” as

Emerging 50-day sd − S&P 500 50-day sd > −0.075

For the Russell 2000, we used a stricter −0.0125.  Although the numeric definition is different, the chart shows a very similar profile.

From TimelyPortfolio

R code from GIST:

require(quantmod)
require(PerformanceAnalytics)
getSymbols("VEIEX",from = "1900-01-01") #use VEIEX Vanguard Emerging as proxy for emerging mkt stocks
getSymbols("VFINX",from = "1900-01-01") #use VFINX Vanguard SP500 as proxy for SP500
#get 1 day change for the emerging and sp500
roc <- na.omit(merge(ROC(VEIEX[,6],type="discrete",n=1),ROC(VFINX[,6],type="discrete",n=1)))
stdev <- rollapplyr(roc,FUN=sd,width=50)
#get relative strength of emerging versus S&P 500
rs <- VEIEX[,6]/VFINX[,6]
#do some trial graphs to see the interaction
plot.zoo(merge(stdev[,1]-stdev[,2],rs))
plot.zoo(merge(stdev[,1]-stdev[,2]/rs,rs))
plot.zoo(merge(stdev[,1]/stdev[,2],rs,VEIEX[,6]))
#create a PerformanceAnalytics rolling summary of emerging versus the S&P 500
charts.RollingRegression(roc[,1],roc[,2],width=250,main="")
title(main="Vanguard Emerging compared to the Vanguard S&P 500 (Rolling 1 Year)",outer=TRUE, line=-1.5, adj=0.05, cex.main=0.85)
#use colors provided in xblocks documentation
rgb <- hcl(c(0, 0, 260), c = c(100, 0, 100), l = c(50, 90, 50), alpha = 0.2)
plot.zoo(VEIEX[,6], #plot closing price of emerging
bty="n", #no box; will fill in later with abline
las=1, #no rotation on y axis labels
xlab = NA,
ylab = NA)
xblocks(index(VEIEX[,6]), as.vector(stdev[,1]-stdev[,2]/rs > - 0.075),col = rgb[3]) #admittedly the -0.075 is ex-post
#connect the axes
abline(h=par("usr")[3]) #extend y axis
abline(v=par("usr")[1]) #extend x axis
abline(h=pretty(par("yaxp")),lty=1,lwd=2,col="white") #try something new for gridlines
title(main="Vanguard Emerging VEIEX (source: Yahoo! Finance)",outer=TRUE, line=-2, adj=0.05)
mtext("blocks denote periods where Emerging 50 day sd low compared to S&P 500 sd",side=3,adj=0.05,cex=0.7,font=3, line=-1.5)

Monday, October 1, 2012

When Russell 2000 is Low Vol

Continuing my exploration of the Russell 2000 (Russell 2000 Softail Fat Boy), I thought I would try to approach the topic with a low-volatility-paradox mindset.  Since 2005, the beta of the Russell 2000 relative to the S&P 500 has exceeded 1.2 (with a max of 1.6) for almost every rolling 1-year period.  This suggests that the Russell 2000 is anything but low vol.

From TimelyPortfolio

However, we can take a simpler view by comparing the rolling 50-day standard deviation of the Russell 2000 with that of the S&P 500.  The Russell 2000 does very well on an absolute and relative basis when its rolling 50-day standard deviation minus the same standard deviation on the S&P 500 exceeds −1.25%; in other words, the Russell 2000 performs best when its volatility approaches the S&P 500’s.  In low relative-volatility environments, it seems we should own the high-beta Russell 2000.  You will see that the largest down moves all occur in the non-shaded time periods.

From TimelyPortfolio

I intentionally wanted this post to be simple, so I hid a lot of the preliminary work and extra links.  Far more went into this than appears above.

R code from GIST:

require(quantmod)
require(PerformanceAnalytics)
getSymbols("^RUT",from = "1900-01-01")
getSymbols("^GSPC",from = "1900-01-01")
#get 1 day change for the Russell 2000 and S&P 500
roc <- na.omit(merge(ROC(RUT[,4],type="discrete",n=1),ROC(GSPC[,4],type="discrete",n=1)))
stdev <- rollapplyr(roc,FUN=sd,width=50)
#get relative strength of Russell 2000 versus S&P 500
rs <- RUT[,4]/GSPC[,4]
#do some trial graphs to see the interaction
plot.zoo(merge(stdev[,1]-stdev[,2],rs))
plot.zoo(merge(stdev[,1]-stdev[,2]/rs,rs))
plot.zoo(merge(stdev[,1]/stdev[,2],rs,RUT[,4]))
#create a PerformanceAnalytics rolling summary of Russell 2000 versus the S&P 500
charts.RollingRegression(roc[,1],roc[,2],width=250,main="")
title(main="Russell 2000 compared to the S&P 500 (Rolling 1 Year)",outer=TRUE, line=-1.5, adj=0.05)
#use colors provided in xblocks documentation
rgb <- hcl(c(0, 0, 260), c = c(100, 0, 100), l = c(50, 90, 50), alpha = 0.2)
plot.zoo(RUT[,4], #plot closing price of Russell 2000
bty="n", #no box; will fill in later with abline
las=1, #no rotation on y axis labels
xlab = NA,
ylab = NA)
xblocks(index(RUT[,4]), as.vector(stdev[,1]-stdev[,2]/rs > - 0.0125),col = rgb[3])
#connect the axes
abline(h=par("usr")[3]) #extend y axis
abline(v=par("usr")[1]) #extend x axis
abline(h=pretty(par("yaxp")),lty=1,lwd=2,col="white") #try something new for gridlines
title(main="Russell 2000 (source: Yahoo! Finance)",outer=TRUE, line=-2, adj=0.05)
mtext("blocks denote periods where Russell 2000 50 day sd low compared to S&P 500 sd",side=3,adj=0.05,cex=0.7,font=3, line=-1.5)

Thursday, September 20, 2012

Obviousness of REITs?

I very much enjoy papers such as

Antonacci, Gary, Risk Premia Harvesting Through Momentum (September 5, 2012). Available at SSRN: http://ssrn.com/abstract=2042750 or http://dx.doi.org/10.2139/ssrn.2042750

Faber, Mebane T., A Quantitative Approach to Tactical Asset Allocation (February 17, 2009). Journal of Wealth Management, Spring 2007. Available at SSRN: http://ssrn.com/abstract=962461

and clearly the finance community appreciates these, with the first a first-place winner of the 2012 NAAIM Wagner Awards for Advancements in Active Investment Management and the second holding a download rank of 2 on SSRN with 273,000 abstract views.

However, I struggle mightily with how obvious these papers’ choice of assets would have been without the benefit of hindsight.  I already briefly touched on this flaw in Bonds Much Sharpe -r Than Buffett.  Of course, using what we now know is one of the best asset classes in the history of the world, one that has also experienced an anomalous and extremely negative correlation with equities during their distress, will produce a very positive result.  Unfortunately, I have yet to find any research from the late 1970s or early 1980s that predicted such a glorious environment for bonds.

Similar but not quite as extreme: adding REITs (all my posts about REITs) in any way over the last 12 years would almost guarantee a pleasant result.  However, REITs were not so stellar for the 15-year period 1984-1999.  Would REITs have been such an obvious choice in 1999?  Of course, if we knew the future, but I’m not so sure when we only knew the past.

From TimelyPortfolio
From TimelyPortfolio
From TimelyPortfolio

Also, how obvious would gold have been in 1998, or how obvious would high yield have been in the early 1980s when it did not even exist?  Can we expect the next 10 years to look like the last 10?

R code from GIST:

#obviousness of reits?
#get NAREIT data
#I like NAREIT since I get back to 1971
#much easier though to get Wilshire REIT from FRED
#also it is daily instead of monthly
#getSymbols("WILLREITIND",src="FRED") will do this
require(gdata)
reitURL <- "http://returns.reit.com/returns/MonthlyHistoricalReturns.xls"
reitExcel <- read.xls(reitURL,sheet="Index Data",pattern="All REITs",stringsAsFactors=FALSE)
#clean up dates so we can use xts functionality later
datetoformat <- reitExcel[,1]
datetoformat <- paste(substr(datetoformat,1,3),"-01-",substr(datetoformat,5,6),sep="")
datetoformat <- as.Date(datetoformat,format="%b-%d-%y")
reitExcel[,1] <- datetoformat
#######now get the returns and clean up slightly############
require(quantmod)
require(xtsExtra)
#shift colnames over 1
colnames(reitExcel) <- colnames(reitExcel)[c(1,1:(NCOL(reitExcel)-1))]
#get return columns
reitData <- reitExcel[,c(3,24)]
#name columns
colnames(reitData) <- c("AllREITs","EquityREITs")
reitData <- reitData[3:NROW(reitData),]
#erase commas
col2cvt <- 1:NCOL(reitData)
reitData[,col2cvt] <- lapply(reitData[,col2cvt],function(x){as.numeric(gsub(",", "", x))})
#create xts
reitData <- as.xts(reitData,order.by=reitExcel[3:NROW(reitExcel),1])
#add the sp500
getSymbols("SP500",src="FRED")
SP500 <- to.monthly(SP500)[,4]
#get 1st of month to align when we merge
index(SP500) <- as.Date(index(SP500))
#merge REIT and S&p
reitSp500 <- na.omit(merge(reitData,SP500))
reitSp500 <- ROC(reitSp500,n=1,type="discrete")
#make first return 0 instead of NA
reitSp500[1,] <- 0
##################make some charts#########################
require(PerformanceAnalytics)
layout(matrix(c(1,2),nrow=1))
chart.RiskReturnScatter(reitSp500["2000::",],
col=c("steelblue4","steelblue2","gray50"),
main="REITS and the S&P 500 Since 2000",
xlim = c(0,0.35))
chart.RiskReturnScatter(reitSp500["1984::1999",],
col=c("steelblue4","steelblue2","gray50"),
main="REITS and the S&P 500 1984-1999",
xlim = c(0,0.25))
charts.PerformanceSummary(reitSp500["2000::",],
colorset =c("steelblue4","steelblue2","gray50"),
main="REITS and the S&P 500 Since 2000")
charts.PerformanceSummary(reitSp500["1984::1999",],
colorset =c("steelblue4","steelblue2","gray50"),
main="REITS and the S&P 500 1984-1999")

Friday, September 7, 2012

Big Issue with System Backtests

Almost always, when I see a system backtested, the backtest assumes a static portfolio with no contributions or withdrawals.  This assumption only covers an extremely limited subset of my clients.  Cash flows in and out of a portfolio or system can have a much larger impact on ending net worth than the geometrically linked performance of a system or money manager.  Most of the systems I have shown for demonstration purposes on this blog have suffered from this unrealistic assumption.  I thought I should show how contributions similar to those in a 401k can affect a simple moving average system.

In a roaring bull market, any momentum system will underperform so significantly that potential abandonment of the system due to lack of confidence is highly likely.  As shown below, there was very little discussion of moving average strategies from 1990 to 2000 due to the simple fact that buy/hold absolutely clobbered them.  While normal adjustments like return/risk and a log scale can soften the impact on the screen, the psychological impact on a client can be very damaging as the focus moves entirely to the dollar value of the portfolio.  Below is a comparison of buy and hold versus a 200-day moving average system with no additions or withdrawals.  As discussed, 1990-2000 was not kind to the moving average.

From TimelyPortfolio

However, if we add a simple framework similar to a 401k investor starting with $100,000 and adding $10,000 per year ($2,500 per quarter), the results differ significantly.

From TimelyPortfolio

I intentionally played a little trick by changing the y axis to a log scale.  Clients don’t think in log scale when evaluating performance; they simply look at the dollar value of the portfolio.  If we look at the results without a log scale, underperformance by 2000 is visible, but it is nowhere near as great as shown in the first chart of the post, and outperformance after the 2008-2009 collapse is very healthy.  I actually think a client could accept this profile much more readily than the one shown with a static portfolio.

From TimelyPortfolio

Of course, this test was not perfect, and all sorts of assumptions and simulations can be added, but we can start to see how inflows and outflows can impact a portfolio whether it is buy/hold, discretionary, or systematic.

R code from GIST ( do raw for copy/paste ):

require(PerformanceAnalytics)
require(quantmod)
getSymbols("^GSPC", from = "1900-01-01")
#get return series from closing price
ret.bh <- ROC(GSPC[,4],n = 1,type = "discrete")
#change first value from NA to 0
ret.bh[1,] <- 0
#set nper for moving average system
nper <- 200 #200 for approximation of 10 month
#get returns for moving average system
ret.ma <- lag(ifelse(GSPC[,4] > runMean(GSPC[,4], n = nper), 1, 0),k = 1) * ret.bh
ret.ma[is.na(ret.ma)] <- 0
ret <- merge(ret.bh,ret.ma)
colnames(ret) <- c("SP500.buyhold","SP500.ma")
#do analysis without money add framework
charts.PerformanceSummary(ret, colorset=c("gray60","steelblue3"), lwd=c(1,2),
main = "S&P 500 BuyHold and Moving Average")
#set up a very basic framework
#for a situation similar to 401k
#where money is periodically added
startmoney = 100000
lastmoney = rep(startmoney,2)
#this 60 is a crude approximation of quarterly
addfreq = 60
#this is the deposit
#in this case 10% of starting capital added per year or 2.5% per quarter
deposit = startmoney[1] * 0.10 / 4 #floor(250/addfreq)
#copy structure of ret for the portfolio series
portfolio <- merge(ret.bh, ret.ma)
#set all portfolio to be equal to starting capital
portfolio[] <- startmoney
colnames(portfolio) <- c("SP500.buyhold","SP500.ma")
#know this is not a pretty way to do this
#use quantstrat for more robust portfolio accounting and testing
for (i in 1:NROW(GSPC)) {
#deposit money each addfreq days
if (i %% addfreq == 0) {
portfolio[i,] = c((lastmoney[1] + deposit) * (1 + ret.bh[i]), (lastmoney[2] + deposit) * (1 + ret.ma[i]))
lastmoney = as.vector(portfolio[i,])
#all other periods just get return of sp500
} else {
portfolio[i,] = c(lastmoney[1] * (1 + ret.bh[i]), lastmoney[2] * (1 + ret.ma[i]))
lastmoney = as.vector(portfolio[i,])
}
}
#very slightly amend the default panel to do log scale of y axis
#but have everything still work and also label non log
slightly.changed.panel <- function(index,x,...) {
default.panel(index,x,...)
abline(h=pretty(c(par("yaxp")[1],par("yaxp")[2]),n=abs(par("yaxp")[3])),col="gray60",lty=3,lwd=0.5)
#way too much manual intervention
axis(side=2,col="gray60",col.axis="black",lwd=1,las=1,
at=pretty(c(par("yaxp")[1],par("yaxp")[2]),n=abs(par("yaxp")[3]))[c(1,3,5)],
labels=10^pretty(c(par("yaxp")[1],par("yaxp")[2]),n=abs(par("yaxp")[3]))[c(1,3,5)]
)
abline(h=par("usr")[3])
}
drawdown.panel <- function(index,x,...) {
mtext("Drawdown", side=1, adj=1, line=-2)
default.panel(index,x,...)
#silly to do this but if we wanted just certain points like every 4 months we could do something like this
#default.panel(index[seq(1,NROW(index),by=4)],coredata(x[seq(1,NROW(index),by=4)]),...)
#abline(h=0, col="black")
abline(h=pretty(c(par("yaxp")[1],par("yaxp")[2]),n=par("yaxp")[3]),col="gray60",lty=3)
axis(side=2,at=pretty(c(par("yaxp")[1],par("yaxp")[2]),n=par("yaxp")[3]),col="gray60", col.axis="black", las=1)
abline(h=par("usr")[3], col="black")
}
plot.xts(na.omit(merge(log10(portfolio),Drawdowns(ROC(portfolio),n=1,type="discrete"))),
screens=c(1,1,2,2),
layout.screens=c(1,1,2),
auto.legend=TRUE,legend.loc=c("topleft",NA),
legend.pars = list(bty = "n", horiz=TRUE),
#log="y",
col=c("gray70","steelblue3"),
lwd=c(1,2),
bty="n",
auto.grid=FALSE,
major.format="%Y",
minor.ticks=FALSE,
col.axis="transparent",
cex.axis=0.9,
panel=c(slightly.changed.panel,drawdown.panel),
main=NA)
title(main="S&P 500 Strategy Comparison with 401k Style Additions", outer=TRUE, line=-2, adj= 0.05)
#clients don't think in log terms so rerun without log scale
plot.xts(na.omit(merge(portfolio,Drawdowns(ROC(portfolio),n=1,type="discrete"))),
screens=c(1,1,2,2),
layout.screens=c(1,1,2),
auto.legend=TRUE,legend.loc=c("topleft",NA),
legend.pars = list(bty = "n", horiz=TRUE),
#log="y",
col=c("gray70","steelblue3"),
lwd=c(1,2),
bty="n",
auto.grid=FALSE,
major.format="%Y",
minor.ticks=FALSE,
col.axis="transparent",
cex.axis=0.9,
panel=c(default.panel,drawdown.panel),
main=NA)
title(main="S&P 500 Strategy Comparison with 401k Style Additions", outer=TRUE, line=-2, adj= 0.05)

Friday, August 31, 2012

Russell 2000 Softail Fat Boy

If the Russell 2000 were a motorcycle, maybe it would be a Harley-Davidson Softail Fat Boy.  I have explored the exceptional case of the Russell 2000 in quite a few posts, but I still am not sure I have done a good job of clearly and simply explaining some of its unique characteristics over the last 25 years.  The Russell 2000 has been extremely difficult to beat because the upside has been so good (Fat Boy) and the downside has been the same or better (Softail).

From TimelyPortfolio
From TimelyPortfolio
From TimelyPortfolio

R code from GIST:

#explore exceptional case of the Russell 2000
require(quantmod)
require(PerformanceAnalytics)
require(xtsExtra)
getSymbols("^RUT", from = "1900-01-01")
getSymbols("^GSPC", from = "1900-01-01")
#do initial exploration of distribution
chart.QQPlot(na.omit(ROC(RUT[,4],type="discrete",n=250))["1987::"])
chart.QQPlot(na.omit(ROC(GSPC[,4],type="discrete",n=250))["1987::"])
#explore barplot.xts to do a chart of annual returns for both indexes
#merge prices
prices <- merge(GSPC[,4],RUT[,4])
#use endpoints to get annual returns
returns.annual <- as.xts(apply(
ROC(prices[endpoints(prices,"years")],type="discrete",n=1),
MARGIN = 2,
FUN = na.fill, fill = 0),
order.by = index(prices[endpoints(prices,"years")]))
#name columns something a little more clear
colnames(returns.annual) <- c("S&P 500","Russell 2000")
#using barplot.xts create the plot
#I made some subtle changes to barplot.xts to experiment so plot will be cosmetically different
barplot.xts(returns.annual,
stacked=FALSE,
box="transparent", #get rid of box surrounding the plot
ylim=c(-0.5,0.5),
ylab=NA,
border=c(brewer.pal(n=11,"BrBG")[c(4,9)]),
col=c(brewer.pal(n=11,"BrBG")[c(4,9)])) #deliberately trying some new colors
title(main="Annual Returns of S&P 500 and Russell 2000",
outer = TRUE,
adj=0.05, font.main = 1, cex.main = 1.25, line = -2)
require(latticeExtra)
require(reshape2)
roc <- na.omit(merge(ROC(GSPC[,4],type="discrete",n=250),ROC(RUT[,4],type="discrete",n=250)))
#name columns something a little more clear
colnames(roc) <- c("S&P 500","Russell 2000")
roc.melt <- melt(coredata(roc))
asTheEconomist(
densityplot(~value,data=roc.melt,groups=Var2,
#par.settings=theEconomist.theme(box="transparent"),
#lattice.options=theEconomist.opts(),
auto.key=list(space="right",col=(brewer.pal(n=11,"BrBG")[c(9,4)]),lines=FALSE),
col=(brewer.pal(n=11,"BrBG")[c(9,4)]),
ylab=NA,
main="Annual Returns (Rolling 250 Day) of S&P 500 and Russell 2000")
)

Thursday, August 30, 2012

Another Great Google Summer of Code 2012 R Project

Tradeblotter announced the very nice features that will be added to the PerformanceAnalytics package as a result of the Google Summer of Code (GSOC) 2012 project:

“…Matthieu commenced to produce dozens of new functions, extend several more existing ones, and add more than 40 pages of additional documentation (complete with formulae and examples) to PerformanceAnalytics. He’s included Bacon’s small data set and several new table.* functions for testing and demonstrating that the functions match the published results. All on plan, I would add.

He also wrote a very nice overview of the functions developed from Bacon (2008) that are included in PerformanceAnalytics, which should be helpful to readers or teachers of Bacon’s work. Matthieu’s summary document will also be distributed as a vignette in the package, accessible using vignette('PA-Bacon'). Additional detail is included in the documentation for each function, as well. I’ll highlight some of those functions in later posts…”

Although the Bacon vignette is very helpful, I highly recommend reading the book also.  For those of you on O’Reilly’s Safari, you can add the book to your bookshelf, or you can buy it from Amazon.

When I saw the announcement (I had been patiently waiting), I was also reading the very good article “Tomatoes and the Low Vol Effect” by Research Affiliates and was already working on some R to explore the Sharpe and Information Ratios of a buy/hold on the S&P 500 versus a 10-month moving average strategy.  When you look at the code, you might enjoy some of the other charts and methods, but I decided to change directions with my research, integrate a new PerformanceAnalytics function, and continue to push the limits of another fine Google Summer of Code 2012 addition, plot.xts from xtsExtra.  I was not very selective and opted for ProspectRatio, since it is a formula I do not often hear discussed.  On page 20, the Bacon vignette gives the formula for ProspectRatio.

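Transcribed from the PerformanceAnalytics implementation (my reconstruction of the vignette's formula, so treat the details as an assumption):

$$\mathrm{ProspectRatio}(R) = \frac{\frac{1}{n}\sum_{i=1}^{n}\left(\max(r_i,0) + 2.25\,\min(r_i,0)\right) - \mathrm{MAR}}{\sigma_D}$$

where MAR is the minimum acceptable return and $\sigma_D$ is the downside deviation below MAR.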

I basically tried to include absolutely everything possible (scattered, smothered, covered, chunked, and diced) from xtsExtra plot.xts in this chart.  The chart shows cumulative growth of buy/hold and the 10-month moving average strategy on the S&P 500 in the top panel, drawdown in the middle, and then a horizon plot of the Prospect Ratio of the ma strategy minus the Prospect Ratio of buy/hold.  Since plot.xts also allows blocks, I thought it would be helpful to shade those periods when the 36-month rolling Sharpe Ratio of the ma strategy outperforms buy/hold.  While this is probably a bit much for a single chart, I think it demonstrates clearly how powerful these tools are.

From TimelyPortfolio

R code from GIST (do raw for copy/paste):

#integrate 2 Google Summer of Code 2012 Projects
#plot.xts and PerformanceAnalytics both received very nice additions
#wish I knew the exact link but believe this section came from stackoverflow
#this allows you to work with the code straight from r-forge SVN
## If you want to source() a bunch of files, something like
## the following may be useful:
#path="C:\\Program Files\\R\\R-2.15.1\\sandbox\\svnsource\\returnanalytics\\pkg\\PerformanceAnalytics\\r"
#sourceDir <- function(path, trace = TRUE, ...) {
# for (nm in list.files(path, pattern = "\\.[RrSsQq]$")) {
# if(trace) cat(nm,":")
# source(file.path(path, nm), ...)
# if(trace) cat("\n")
# }
#}
#sourceDir(path)
require(RColorBrewer)
require(quantmod)
require(xtsExtra)
getSymbols("^GSPC",from="1890-01-01")
sp500.monthly <- GSPC[endpoints(GSPC, "months"),4]
roc <- ROC(sp500.monthly, type = "discrete", n = 1)
n = 10 #set n for number of periods; this is 10 for 10 months
roc.ma <- lag(ifelse(sp500.monthly > runMean(sp500.monthly, n = n), 1, 0), k = 1) * roc
returns <- merge(roc, roc.ma)
returns <- as.xts(apply(returns, MARGIN = 2, na.fill, fill = 0), order.by = index(returns))
colnames(returns) <- c("SP500.buyhold", "SP500.ma")
charts.PerformanceSummary(returns, ylog = TRUE)
sr <- SharpeRatio.annualized(returns)
ir <- InformationRatio(returns, returns[,1])
applyacross <- function(x, rollFun = Omega, width = 12, by = 1, Rb, ...) {
  if (missing(Rb)) { #add this so we can also use for InformationRatio
    result <- apply.rolling(x, FUN = rollFun, width, by, ...)
  } else {
    result <- apply.rolling(x, FUN = rollFun, width, by, Rb = Rb, ...)
  }
  result <- merge(x, result)[, 2] #merge to pad with NA and then get second column
  colnames(result) <- colnames(x)
  return(result)
}
stat = "SharpeRatio.annualized" #ES, KellyRatio, Omega, skewness, kurtosis, VaR, SharpeRatio, CalmarRatio
width = 36
sharpe.rolling <- as.xts(matrix(unlist(lapply(returns,FUN=applyacross,rollFun=get(stat),width=width)),byrow=FALSE,nrow=NROW(returns)), order.by = index(returns))
sharpe.rolling <- as.xts(apply(sharpe.rolling,MARGIN=2,FUN=na.fill,fill=0),order.by=index(sharpe.rolling))
colnames(sharpe.rolling) <- colnames(returns)
plot.xts(sharpe.rolling,screens=1)
plot.xts(sharpe.rolling[,2] - sharpe.rolling[,1],screens=1)
plot(x = coredata(sharpe.rolling[, 1]), y = coredata(sharpe.rolling[, 2]), pch = 19,
  col = ifelse(sharpe.rolling[, 2] > 0, "green", "red"))
abline(lm(SP500.ma ~ SP500.buyhold, data=as.data.frame(sharpe.rolling)))
text(x=coredata(sharpe.rolling[,1]),y=coredata(sharpe.rolling[,2]),labels=format(as.Date(index(sharpe.rolling)),"%Y"),cex=0.5, pos = 2 )
plot(lm(SP500.ma ~ SP500.buyhold, data=as.data.frame(sharpe.rolling)),which=2)
stat = "InformationRatio"
#this allows multiple columns if we have more than one index or comparison
ir.rolling <- as.xts(matrix(unlist(lapply(returns,FUN=applyacross,rollFun=get(stat),width=width,Rb=returns[,1])),byrow=FALSE,nrow=NROW(returns)), order.by = index(returns))
colnames(ir.rolling) <- colnames(returns)
#could also do this if we only have one column
#ir.rolling <- apply.rolling(returns[,2],FUN=InformationRatio,width=width,by=1,Rb=returns[,1])
plot.xts(na.omit(merge(ir.rolling[,2],ROC(sp500.monthly,type="discrete",n=36))),screens=1)
plot(x = coredata(ROC(sp500.monthly, type = "discrete", n = 36)), y = coredata(ir.rolling[, 2]),
  pch = 19,
  col = ifelse(ir.rolling[, 2] > 0, "green", "red"),
  las = 1)
abline(h = 0)
abline(v = 0)
plot(x = coredata(sharpe.rolling[, 1]), y = coredata(ir.rolling[, 2]),
  pch = 19,
  col = ifelse(ir.rolling[, 2] > 0, "green", "red"),
  las = 1)
abline(h = 0)
abline(lm(coredata(ir.rolling[, 2]) ~ coredata(sharpe.rolling[, 1])))
plot(lm(coredata(ir.rolling[, 2]) ~ coredata(sharpe.rolling[, 1]))) #,which=2)
#use the new ProspectRatio function in PerformanceAnalytics
stat = "ProspectRatio" #ES, KellyRatio, Omega, skewness, kurtosis, VaR, SharpeRatio, CalmarRatio
width = 36
pr.rolling <- as.xts(matrix(unlist(lapply(returns,FUN=applyacross,rollFun=get(stat),width=width,MAR=0.025)),byrow=FALSE,nrow=NROW(returns)), order.by = index(returns))
pr.rolling <- as.xts(apply(pr.rolling,MARGIN=2,FUN=na.fill,fill=0),order.by=index(pr.rolling))
colnames(pr.rolling) <- colnames(returns)
#set up horizon plot functionality
horizon.panel <- function(index, x, ...) {
  #get some decent colors from RColorBrewer
  #we will use colors on the edges so 2:4 for red and 7:9 for blue
  require(RColorBrewer)
  col.brew <- brewer.pal(name = "RdBu", n = 10)
  #ease this reference later
  n = NROW(x)
  #clean up NA with either of the two methods below
  #x[which(is.na(x),arr.ind=TRUE)[,1],
  #  unique(which(is.na(x),arr.ind=TRUE)[,2])] <- 0
  x <- apply(x, MARGIN = 2, FUN = na.fill, fill = 0)
  #get number of bands for the loop
  #limit to 3
  nbands = 3
  #first tried this but will not work since each series needs to have same number of bands
  #min(4,ceiling(max(abs(coredata(x)))/horizonscale))
  par(usr = c(index[1], par("usr")[2], origin, horizonscale))
  for (i in 1:nbands) {
    #draw positive
    polygon(
      c(index[1], index, index[n]),
      c(origin, coredata(x) - (i - 1) * horizonscale, origin),
      col = col.brew[length(col.brew) - nbands + i - 1],
      border = NA
    )
    #draw negative (mirrored above the origin)
    polygon(
      c(index[1], index, index[n]),
      c(origin, -coredata(x) - (i - 1) * horizonscale, origin),
      col = col.brew[nbands - i + 1],
      border = NA
    )
  }
  #delete trash drawn below origin that we keep so no overlap between positive and negative
  polygon(
    c(index[1], index, index[n]),
    c(origin, -ifelse(coredata(x) == origin, horizonscale * 5, abs(coredata(x))), origin),
    col = par("bg"),
    border = NA
  )
  #draw a line at the origin
  abline(h = origin, col = "black")
  #draw line at top of plot or otherwise polygons will cover boxes
  abline(h = par("usr")[4], col = "black")
  #mtext("ProspectRatio Difference", side = 3, adj = 0.02, line = -1.5, cex = 0.75)
}
horizonscale = 0.25
origin = 0
#trying this to color sections or color lines based on sharpe
rle.sharpe <- rle(as.vector(sharpe.rolling[,2]>sharpe.rolling[,1]))
dates <- index(returns)[cumsum(rle.sharpe$lengths)]
start.i=ifelse(na.omit(rle.sharpe$values)[1],2,1)
#png("plotxts with everything and ProspectRatio.png",height=600, width=640)
plot.xts(merge(log(cumprod(1 + returns)), Drawdowns(returns), pr.rolling[, 2] - pr.rolling[, 1]),
  screens = c(1, 1, 2, 2, 3), #since 2 columns for cumul and drawdown repeat screens
  layout.screens = c(1, 1, 1, 1, 2, 2, 3), #make screen 1 4/7 of total, 2 2/7, and 3 (horizon) 1/7
  col = brewer.pal(9, "Blues")[c(5, 8)], #get two blues that will look ok
  lwd = c(1.5, 2), #line width; will do smaller 1.5 for benchmark buy/hold
  las = 1, #do not rotate y axis labels
  ylim = matrix(c(0, 5, -0.55, 0, origin, horizonscale), byrow = TRUE, ncol = 2), #plot.xts accepts ylim in matrix form; print matrix to see how it works
  auto.legend = TRUE, #let plot.xts do the hard work on the legend
  legend.loc = c("topleft", NA, NA), #just do legend on the first screen
  legend.pars = list(bty = "n", horiz = TRUE), #make legend box transparent and legend horizontal
  panel = c(default.panel, default.panel, horizon.panel), #specify panels for each screen
  main = NA, #will do title later so we have more control
  #log="y", #log scale does not work with blocks
  blocks = list(start.time = dates[seq(start.i, NROW(dates), 2)], #overlay blocks in which 36-mo sharpe ratio of ma exceeds buy/hold
    end.time = dates[-1][seq(start.i, NROW(dates), 2)], col = "gray90")) #darkolivegreen2"))
title(main = "Strategy Comparison on S&P 500 - Buy Hold versus Moving Average", adj = 0.05, line = -1.5, outer = TRUE, cex.main = 1.1, font.main = 3)
#dev.off()

Monday, August 27, 2012

Horizon on ggplot2

SocialDataBlog’s kind reference in post Horizon plots with ggplot (not) motivated me to finish what the post started.  I knew that ggplot2 would be a little more difficult to use for the purpose of a horizon plot, but I felt compelled to provide at least one example of a horizon plot for each of the major R graphing packages.  I achieved a good result but the code is not as elegant or as flexible as I would like.  Readers more comfortable with ggplot2, please bash, fork, and improve.


R code in GIST (do raw for copy/paste):

#first attempt at implementing horizon plots in ggplot2
#pleased with result but code sloppy and inflexible
#as always very open to improvements and forks
require(ggplot2)
require(reshape2)
require(quantmod)
require(PerformanceAnalytics)
require(xtsExtra)
data(edhec)
origin = 0
horizonscale = 0.1
#get 12 month rolling return of edhec indexes
roc <- as.xts(apply(cumprod(edhec+1),MARGIN=2,ROC,n=12,type="discrete"),order.by=index(edhec))
roc.df <- as.data.frame(cbind(index(roc),coredata(roc)))
roc.melt <- melt(roc.df,id.vars=1)
roc.melt[,1] <- as.Date(roc.melt[,1]) #convert back to a Date
horizon.panel.ggplot <- function(df, title) {
  #df parameter should be in form of date (x), grouping, and a value (y)
  colnames(df) <- c("date", "grouping", "y")
  #get some decent colors from RColorBrewer
  #we will use colors on the edges so 2:4 for red and 7:9 for blue
  require(RColorBrewer)
  col.brew <- brewer.pal(name = "RdBu", n = 10)
  #get number of bands for the loop
  #limit to 3 so it will be much more manageable
  nbands = 3
  #loop through nbands to add a column for each of the positive and negative bands
  for (i in 1:nbands) {
    #do positive
    df[, paste("ypos", i, sep = "")] <- ifelse(df$y > origin,
      ifelse(abs(df$y) > horizonscale * i,
        horizonscale,
        ifelse(abs(df$y) - (horizonscale * (i - 1) - origin) > origin,
          abs(df$y) - (horizonscale * (i - 1) - origin), origin)),
      origin)
    #do negative
    df[, paste("yneg", i, sep = "")] <- ifelse(df$y < origin,
      ifelse(abs(df$y) > horizonscale * i,
        horizonscale,
        ifelse(abs(df$y) - (horizonscale * (i - 1) - origin) > origin,
          abs(df$y) - (horizonscale * (i - 1) - origin), origin)),
      origin)
  }
  #melt data frame now that we have added a column for each band
  #this will fit ggplot2 expectations and make it much easier
  df.melt <- melt(df[, c(1:2, 4:9)], id.vars = 1:2)
  #name the columns for reference
  #try to be generic
  colnames(df.melt) <- c("date", "grouping", "band", "value")
  #use ggplot to produce an area plot
  p <- ggplot(data = df.melt) +
    geom_area(aes(x = date, y = value, fill = band),
      #alpha=0.25,
      position = "identity") + #this means not stacked
    scale_fill_manual(values = c("ypos1" = col.brew[7], #assign the colors to each of the bands; colors get darker as values increase
      "ypos2" = col.brew[8],
      "ypos3" = col.brew[9],
      "yneg1" = col.brew[4],
      "yneg2" = col.brew[3],
      "yneg3" = col.brew[2])) +
    ylim(origin, horizonscale) + #limit plot to origin and horizonscale
    facet_grid(grouping ~ .) + #do new subplot for each group
    theme_bw() + #this is optional, but I prefer it to the default
    opts(legend.position = "none", #remove legend
      strip.text.y = theme_text(), #rotate strip text to horizontal
      axis.text.y = theme_blank(), #remove y axis labels
      axis.ticks = theme_blank(), #remove tick marks
      axis.title.y = theme_blank(), #remove title for the y axis
      axis.title.x = theme_blank(), #remove title for the x axis
      title = title, #add a title from function parameter
      plot.title = theme_text(size = 16, face = "bold", hjust = 0)) #format title
  return(p)
}
horizon.panel.ggplot(roc.melt, "EDHEC Indexes Return (Rolling 1 Year)")

Thursday, August 23, 2012

Bonds Much Sharpe-r Than Buffett

Mebane Faber’s post Buffett’s Alpha points out Warren Buffett’s 0.76 Sharpe Ratio discussed in the similarly titled paper Buffett’s Alpha.  I of course immediately think about the 8th Wonder of the World, the US Bond Market, whose Sharpe Ratio has trounced Buffett’s for the last 30 years.  What I like even better are all the tactical systems that employ US bonds in their backtests and make no adjustment for substantially different returns going forward.  For those of you who do not know, bonds cannot, with absolute certainty, achieve >8% annualized returns with a max drawdown < 5% for the next 30 years when the starting yield to worst is 1.86% (Barclays Agg, 8/23/2012).

If anyone can show me where to get 8% annualized returns with max drawdown of 5% for the next 30 years, please let me know, and I will buy that with leverage and enjoy life. I’ll be happy to share my gains with whomever has the answer.
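To make the arithmetic concrete (a back-of-the-envelope sketch of my own, under the crude assumption that long-run bond returns are roughly bounded by the starting yield):

#growth of $1 over 30 years at the 1.86% starting yield to worst
(1 + 0.0186)^30 #about 1.74
#growth of $1 over 30 years at the hoped-for 8% annualized
1.08^30 #about 10.06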


In addition to an unbelievable Sharpe Ratio, bonds have exhibited a low/negative correlation with stocks during stock bear markets, which is also historically very anomalous.


Just because I love horizon plots.


For a longer perspective, here is rolling correlation since 1900 from the CSFB 2011 Yearbook.


So the experience can be dramatically different from what we have been fortunate enough to experience recently.


R code from GIST (install xtsExtra):

require(quantmod)
require(PerformanceAnalytics)
require(xtsExtra)
require(RColorBrewer)
#unfortunately don't feel like fighting IP lawyers so I cannot share this index data
portfolio <- read.csv("file.csv",stringsAsFactors=FALSE)
portfolio <- portfolio[2:NROW(portfolio),2:NCOL(portfolio)]
portfolio <- portfolio[,c(1,3,5)]
#since export has duplicate colnames we need to remove the .1 added
#colnames(portfolio) <- substr(colnames(portfolio),1,nchar(colnames(portfolio))-2)
len <- nchar(portfolio[,1])
xtsdate <- paste(substr(portfolio[, 1], len - 3, len), "-",
  ifelse(len == 9, "0", ""), substr(portfolio[, 1], 1, len - 8), "-01", sep = "")
portfolio.xts <- xts(data.matrix(portfolio[,2:NCOL(portfolio)]),order.by=as.Date(xtsdate))
portfolio.xts <- portfolio.xts/100
portfolio.xts[1,]<-0
charts.RollingPerformance(portfolio.xts[, 1], #Barclays Aggregate
  width = 60, #rolling 5 year (60 months)
  Rf = portfolio.xts[, 2], #Barclays 3-month
  main = "Barclays Aggregate Performance (Rolling 5 Year)")
getSymbols("SP500", src = "FRED")
getSymbols("GS10", src = "FRED")
require(RQuantLib)
GS10pricereturn<-GS10
GS10pricereturn[1,1]<-0
colnames(GS10pricereturn)<-"US10yPrice"
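#note: the loop below approximates the monthly price return of a constant-maturity
#10-year bond by repricing a bond issued at last month's yield (rates) at this
#month's yield, via RQuantLib's FixedRateBondPriceByYield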
for (i in 1:(NROW(GS10) - 1)) {
  GS10pricereturn[i + 1, 1] <- FixedRateBondPriceByYield(
    yield = GS10[i + 1, 1]/100, issueDate = Sys.Date(),
    maturityDate = advance("UnitedStates/GovernmentBond", Sys.Date(), 10, 3),
    rates = GS10[i, 1]/100, period = 2)[1]/100 - 1
}
#total return will be the price return + yield/12 for one month
GS10totalreturn<-GS10pricereturn+lag(GS10,k=1)/12/100
colnames(GS10totalreturn)<-"US10yTotalReturn"
#handy rolling correlation calculation via zoo's rollapplyr
rollcorr <- rollapplyr(na.omit(merge(ROC(SP500, n = 1, type = "discrete"),
    GS10totalreturn)),
  width = 60, #5 year rolling
  FUN = function(x) cor(x[, 1, drop = FALSE],
    x[, 2, drop = FALSE]),
  by = 1,
  by.column = FALSE,
  na.pad = TRUE)
rollcorr[is.na(rollcorr)] <- 0 #change leading NAs to 0
custom.panel <- function(index, x, col, lwd, ...) {
  default.panel(index, ifelse(x > 0, x, 0), col = brewer.pal("Greens", n = 9)[7], lwd = 2, ...)
  default.panel(index, ifelse(x < 0, x, 0), col = brewer.pal("Reds", n = 9)[7], lwd = 2, ...)
  abline(h = pretty(c(par("yaxp")[1], par("yaxp")[2]), n = par("yaxp")[3]), col = "gray60", lwd = 0.5)
  abline(h = 0, col = "black", lwd = 2)
}
plot.xts(rollcorr,
  auto.grid = FALSE, #turn off grid for more precise control
  bty = "n", #turn off box because I like it better
  las = 1,
  ylim = c(-1, 1),
  main = NA,
  panel = custom.panel, #use our customized panel
  type = "l",
  major.format = "%Y",
  minor.tick = FALSE,
  yaxs = "i",
  cex.axis = 0.8,
  lwd = 2,
  blocks = list(start.time = "2000-03-01", end.time = "2012-12-31"))
title(main = "Correlation of US 10y Bonds with S&P 500 (Rolling 5 Year)", outer = TRUE, line = -2, adj = 0.05)
mtext("Source: Federal Reserve Bank of St. Louis (FRED)", side = 1, cex = 0.7, font = 3, adj = 0.9, outer = TRUE, line = -2)
horizon.panel <- function(index, x, ...) {
  #get some decent colors from RColorBrewer
  #we will use colors on the edges so 2:4 for red and 7:9 for blue
  require(RColorBrewer)
  col.brew <- brewer.pal(name = "RdBu", n = 10)
  #ease this reference later
  n = NROW(x)
  #clean up NA with either of the two methods below
  #x[which(is.na(x),arr.ind=TRUE)[,1],
  #  unique(which(is.na(x),arr.ind=TRUE)[,2])] <- 0
  x <- apply(x, MARGIN = 2, FUN = na.fill, fill = 0)
  #get number of bands for the loop
  #limit to 3
  nbands = 3
  #first tried this but will not work since each series needs to have same number of bands
  #min(4,ceiling(max(abs(coredata(x)))/horizonscale))
  par(usr = c(index[1], index[n], origin, horizonscale))
  for (i in 1:nbands) {
    #draw positive
    polygon(
      c(index[1], index, index[n]),
      c(origin, coredata(x) - (i - 1) * horizonscale, origin),
      col = col.brew[length(col.brew) - nbands + i - 1],
      border = NA
    )
    #draw negative (mirrored above the origin)
    polygon(
      c(index[1], index, index[n]),
      c(origin, -coredata(x) - (i - 1) * horizonscale, origin),
      col = col.brew[nbands - i + 1],
      border = NA
    )
  }
  #delete trash drawn below origin that we keep so no overlap between positive and negative
  polygon(
    c(index[1], index, index[n]),
    c(origin, -ifelse(coredata(x) == origin, horizonscale * 5, abs(coredata(x))), origin),
    col = par("bg"),
    border = NA
  )
  #draw a line at the origin
  abline(h = origin, col = "black")
}
horizonscale = 0.25
origin = 0
plot.xts(rollcorr,
  auto.grid = FALSE, #turn off grid for more precise control
  bty = "n", #turn off box because I like it better
  las = 1,
  ylim = c(origin, horizonscale),
  main = NA,
  panel = horizon.panel, #use our customized panel
  type = "l",
  major.format = "%Y",
  minor.tick = FALSE,
  yax.loc = "none",
  yaxt = "n", xaxt = "n",
  cex.axis = 0.6,
  lwd = 2)
title(main = "Correlation of US 10y Bonds with S&P 500 (Rolling 5 Year)", outer = TRUE, line = -2, adj = 0.05, cex = 0.7)
mtext("Source: Federal Reserve Bank of St. Louis (FRED)", side = 1, cex = 0.7, font = 3, adj = 0.9, outer = TRUE, line = -2)

Monday, August 20, 2012

plot.xts with Moving Average Panel

(for all plot.xts posts, see http://timelyportfolio.blogspot.com/search/label/plot.xts)

As another example of all that we can do with the new plot.xts, let’s try a price plot with moving average overlays.  We will use the ETFs shown by Mebane Faber at http://www.mebanefaber.com/timing-model/.  With the panel functionality, it is very easy to specify a panel that draws the price line and then adds the calculated moving average.  Notice how easily and nicely the recession block appears in all the examples.


Also, if you wanted to specify some funky layouts, we have that option.  For this case, I do not think it makes much sense, but in the future I will demonstrate some more appropriate uses.


Or as one more example, let’s change it up just slightly.


R code in GIST (do raw for copy/paste and also install xtsExtra from r-forge):

#install.packages("xtsExtra", repos="http://R-Forge.R-project.org")
require(quantmod)
require(RColorBrewer)
require(xtsExtra) #if you get an error, see first line and install from r-forge
#use Mebane Faber tickers VTI, VEU, IEF, VNQ, DBC
#as discussed in http://papers.ssrn.com/sol3/papers.cfm?abstract_id=962461
tckrs <- c("VTI", "VEU", "IEF", "VNQ", "DBC")
getSymbols(tckrs, from = "2000-01-01")
prices <- get(tckrs[1])[,6]
for (i in 2:length(tckrs)) {
  prices <- na.omit(merge(prices, get(tckrs[i])[, 6]))
}
ma.panel <- function(index, x, col, ...) {
  #draw line for price
  default.panel(index, x, col, ...)
  #label each panel with first 3 characters of column name
  mtext(substr(colnames(x), 1, 3), side = 3, cex = 0.8, line = -2.5, adj = 0.5, col = col)
  #now get n=200 moving average
  ma <- runMean(x, n = 200)
  #add the moving average line
  default.panel(index, ma, col = "indianred1", ...)
  abline(h = pretty(c(par("yaxp")[1], par("yaxp")[2]), n = par("yaxp")[3]), col = "gray60", lty = 3, lwd = 0.5)
  #abline(h=par("yaxp")[1], col="black")
}
plot.xts(prices,
  las = 1, #no rotate for y axis tickmark labels
  panel = ma.panel, #do the moving average panel to overlay moving average lines
  auto.grid = FALSE, #get some extra control of grid
  col = brewer.pal("Set1", n = 7)[c(2:5, 7)],
  main = NA,
  blocks = list(start.time = "2007-12-01", end.time = "2009-06-01", col = "lightblue"))
title(main = "ETFs (www.mebanefaber.com) and 200 day Moving Average", adj = 0.05, outer = TRUE, line = -2)
mtext("source: Yahoo! Finance", outer = TRUE, side = 1, adj = 0.94, font = 1, cex = 0.7, line = -1)
#not sure why you would want to do something like this
#but for example purposes
#we can change the layout radically with plot.xts
plot.xts(prices,
  screens = c(1, 2, 4, 3, 5),
  layout.screens = matrix(c(1,1,2,2,3,3,4,5), nrow = 2, byrow = FALSE), #to see how the matrix works, print(matrix(c(1,1,2,2,3,3,4,5), nrow = 2, byrow = FALSE))
  ylim = matrix(c(range(prices[,c(1,2,4)]), range(prices[,c(1,2,4)]), range(prices[,c(1,2,4)]), range(prices[,3]), range(prices[,5])),
    byrow = TRUE, ncol = 2),
  las = 1, #no rotate for y axis tickmark labels
  panel = ma.panel, #do the moving average panel to overlay moving average lines
  auto.grid = FALSE, #get some extra control of grid
  col = brewer.pal("Set1", n = 7)[c(2:5, 7)],
  main = NA,
  minor.ticks = FALSE,
  major.format = "%Y",
  blocks = list(start.time = "2007-12-01", end.time = "2009-06-01", col = "lightblue"))
title(main = "ETFs (www.mebanefaber.com) and 200 day Moving Average", adj = 0.05, outer = TRUE, line = -2)
mtext("source: Yahoo! Finance", outer = TRUE, side = 1, adj = 0.94, font = 1, cex = 0.7, line = -1)
plot.xts(prices,
  screens = c(1, 2, 4, 3, 5),
  layout.screens = matrix(c(1,1,2,2,3,3,4,5,4,5), nrow = 2, byrow = FALSE), #to see how the matrix works, print(matrix(c(1,1,2,2,3,3,4,5,4,5), nrow = 2, byrow = FALSE))
  ylim = matrix(c(range(prices[,c(1,2,4)]), range(prices[,c(1,2,4)]), range(prices[,c(1,2,4)]), range(prices[,3]), range(prices[,5])),
    byrow = TRUE, ncol = 2),
  las = 1, #no rotate for y axis tickmark labels
  panel = ma.panel, #do the moving average panel to overlay moving average lines
  auto.grid = FALSE, #get some extra control of grid
  col = brewer.pal("Set1", n = 7)[c(2:5, 7)],
  main = NA,
  minor.ticks = FALSE,
  major.format = "%Y",
  blocks = list(start.time = "2007-12-01", end.time = "2009-06-01", col = "lightblue"))
title(main = "ETFs (www.mebanefaber.com) and 200 day Moving Average", adj = 0.05, outer = TRUE, line = -2)
mtext("source: Yahoo! Finance", outer = TRUE, side = 1, adj = 0.94, font = 1, cex = 0.7, line = -1)

Friday, August 17, 2012

GARCH Panel in plot.xts

I’m clearly out of my realm of competence with most of the rugarch functions, but I thought it might be nice to provide an example combining plot.xts and uGARCHroll.


R code from GIST:

#install.packages("xtsExtra", repos="http://R-Forge.R-project.org")
require(quantmod)
require(rugarch)
require(xtsExtra) #if you get an error, see first line and install from r-forge
getSymbols("DEXJPUS",src="FRED")
DEXJPUS<-1/to.weekly(DEXJPUS)
ugarch.panel <- function(index, x, type, cex, col, pch, ...) {
  spec = ugarchspec(
    variance.model = list(garchOrder = c(1, 1)),
    mean.model = list(armaOrder = c(1, 1), include.mean = T))
  #get ugarchroll; I cannot say I completely understand what I'm doing here
  ugr <- ugarchroll(spec,
    data = na.omit(x),
    forecast.length = floor(NROW(na.omit(x)) / 1000) * 1000,
    n.ahead = 1,
    refit.every = 50,
    refit.window = "moving")
  #get the rolling VaR estimates and pad/align them with the original series
  ugr.var <- merge(x, as.data.frame(ugr, which = "VaR"))
  ugr.var <- as.xts(apply(ugr.var, MARGIN = 2, na.fill, fill = c(0, "extend")), order.by = index(x))[, 2:4]
  print(tail(ugr.var)) #quick sanity check of the merged VaR series
  default.panel(index, ugr.var[, 3], type = "h", cex, pch, col, ...)
  default.panel(index, ugr.var[, 1], type = "l", cex, pch, col = "red", ...)
  default.panel(index, ugr.var[, 2], type = "l", cex, pch, col = "gray50", ...)
  text(x = index[1], y = par("usr")[4], "VaR from rugarch ugarchroll", adj = c(0, 1), cex = 0.8)
}
plot.xts(na.omit(merge(DEXJPUS[, 4], ROC(DEXJPUS[, 4], n = 1, type = "discrete"))),
  screens = c(1, 2),
  minor.ticks = FALSE, major.format = "%Y",
  panel = c(default.panel, ugarch.panel),
  main = "Japanese Yen (source: St. Louis Federal Reserve)")

Horizon Plots with plot.xts

Anyone who has read

  1. 48 Industries (Dendrogram Ordered) Over 50 Years
  2. 48 Industries Since 1963
  3. “Trend is Not Your Friend” Applied to 48 Industries
  4. Horizon Plots in Base Graphics
  5. More on Horizon Charts
  6. Application of Horizon Plots
  7. Horizon Plot Already Available
  8. Cubism Horizon Charts in R

should already know that I really like horizon charts.  Also, you would probably guess that I would be very excited if the new plot.xts could produce horizon charts.  So I am excited, since with its panel functionality, plot.xts very capably creates horizon charts.


R code from GIST (also, please install xtsExtra from r-forge):

#plot.xts with horizons
#install.packages("xtsExtra", repos="http://R-Forge.R-project.org")
require(PerformanceAnalytics)
require(quantmod)
require(xtsExtra) #if you get error, install xtsExtra from r-forge
horizon.panel <- function(index, x, ...) {
  #get some decent colors from RColorBrewer
  #we will use colors on the edges so 2:4 for red and 7:9 for blue
  require(RColorBrewer)
  col.brew <- brewer.pal(name = "RdBu", n = 10)
  #ease this reference later
  n = NROW(x)
  #clean up NA with either of the two methods below
  #x[which(is.na(x),arr.ind=TRUE)[,1],
  #  unique(which(is.na(x),arr.ind=TRUE)[,2])] <- 0
  x <- apply(x, MARGIN = 2, FUN = na.fill, fill = 0)
  #get number of bands for the loop
  #limit to 3
  nbands = 3
  #first tried this but will not work since each series needs to have same number of bands
  #min(4,ceiling(max(abs(coredata(x)))/horizonscale))
  for (i in 1:nbands) {
    #draw positive
    polygon(
      c(index[1], index, index[n]),
      c(origin, coredata(x) - (i - 1) * horizonscale, origin),
      col = col.brew[length(col.brew) - nbands + i - 1],
      border = NA
    )
    #draw negative (mirrored above the origin)
    polygon(
      c(index[1], index, index[n]),
      c(origin, -coredata(x) - (i - 1) * horizonscale, origin),
      col = col.brew[nbands - i + 1],
      border = NA
    )
  }
  #delete trash drawn below origin that we keep so no overlap between positive and negative
  polygon(
    c(index[1], index, index[n]),
    c(origin, -ifelse(coredata(x) == origin, horizonscale * 5, abs(coredata(x))), origin),
    col = par("bg"),
    border = NA
  )
  #draw a line at the origin
  abline(h = origin, col = "black")
  #labelling just for example purposes
  #do label at top left and top right
  text(x = index[1], y = horizonscale, colnames(x), adj = c(0, 1))
  text(x = index[n], y = horizonscale, colnames(x), adj = c(1, 1))
  #do label at center right for ease of reference
  #text(x=index[n], y=horizonscale/2, colnames(x), pos=2)
  #do label at bottom center
  #text(x=index[n/2], y=origin, colnames(x), pos=3)
}
data(edhec)
#set these up for global access since I do not see how to pass into panel function
origin = 0
horizonscale = 0.1
#get 12 month rolling return of edhec indexes
roc <- as.xts(apply(cumprod(edhec+1),MARGIN=2,ROC,n=12,type="discrete"),order.by=index(edhec))
plot.xts(roc, layout.screens = 1:NCOL(roc),
  ylim = c(origin, horizonscale),
  panel = horizon.panel, bty = "n",
  auto.grid = FALSE,
  yax.loc = "none",
  yaxt = "n", #could also add xaxt="n"
  main = "Horizon Plot of EDHEC Indexes 12-month Rolling Return")

Thursday, August 16, 2012

plot.xts is wonderful

As mentioned in FOSS Trading post A New plot.xts yesterday

“The Google Summer of Code (2012) project to extend xts has produced a very promising new plot.xts function. Michael Weylandt, the project's student, wrote R-SIG-Finance to request impressions, feedback, and bug reports. The function is housed in the xtsExtra package of the xts project on R-Forge.

Please try xtsExtra::plot.xts and let us know what you think. A sample of the eye-candy produced by the code in Michael's email is below. Granted, this isn't a one-liner, but it's certainly impressive! Great work Michael!”

and as announced by author Michael Weylandt in https://stat.ethz.ch/pipermail/r-sig-finance/2012q3/010652.html

“As the community which makes the most heavy use of xts, I would like to draw your attention to a new set of plotting functions for xts objects available as part of Google Summer of Code 2012. This work represents a major overhaul of previously existing plot.xts and should provide you with the most comprehensive and flexible time series plotting available in R. Features include:

  • "automagic" layout construction and axis alignment
  • smart argument recycling
  • panel function abilities
  • more attractive candle and bar plots for OHLC objects
  • scatterplots to view the co-evolution of multiple series
  • event markers
  • regime highlighting
  • time-oriented barplots via barplot.xts [based on code by Peter Carl]
  • interoperability with all known R time series classes using the xts try/reclass paradigm

while retaining the same smart axis formatting and gridlines that plot.xts provided. We have made every effort to maintain complete compatibility with documented usages of the old plot.xts and to be 95% compatible with plot.zoo. My goal has been to craft a design which uses smart defaults to put attractive and informative graphics ever at your fingertips, while remaining flexible enough for "power-users" to craft every detail as they desire…”

Michael has done a fantastic job with this Google Summer of Code (GSOC) project, and I look forward to incorporating all the new features of plot.xts.  As a quick example very different from that shown in the post and mailing list announcement, I thought it would be fun to show how we can use plot.xts to replace the veteran chart.TimeSeries and charts.PerformanceSummary plots from the PerformanceAnalytics package.
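Before the replication, here is about the smallest possible call (my own toy sketch, not from the announcement; it assumes only PerformanceAnalytics for the edhec sample data and xtsExtra from r-forge):

require(PerformanceAnalytics) #for the edhec sample data
require(xtsExtra) #install from r-forge if you get an error
data(edhec)
#one call: plot.xts picks the layout, axes, and gridlines automatically
plot.xts(cumprod(1 + edhec[, 1:3]), screens = 1,
  auto.legend = TRUE, legend.loc = "topleft",
  main = "plot.xts defaults on three EDHEC indexes")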

For the chart.TimeSeries, I will almost exactly replicate the chart given in the documentation.


Not really a useful application, but for the sake of demonstration, we can produce something like this to separate styles and also avoid event label collision (just implemented by Michael very late last night).


plot.xts even offers plot.zoo and lattice style panel functionality, which lets us do very nice things like this charts.PerformanceSummary style chart.


I very much appreciate this very powerful contribution to R.  Thanks to everyone involved.

R code in GIST (note: install xtsExtra from R-forge) :

#install.packages("xtsExtra", repos="http://R-Forge.R-project.org")
require(PerformanceAnalytics)
require(xtsExtra) #if you get error, please install xtsExtra from r-forge as shown in the top line
require(RColorBrewer)
#function to add alpha or transparency to colors
addalpha <- function(cols, alpha = 180) {
  rgbcomp <- col2rgb(cols)
  rgbcomp[4] <- alpha #appending a 4th element supplies the alpha channel
  return(rgb(rgbcomp[1], rgbcomp[2], rgbcomp[3], rgbcomp[4], maxColorValue = 255))
}
#replicate example provided in chart.TimeSeries with plot.xts
# These are start and end dates, formatted as xts ranges.
## http://www.nber.org/cycles.html
cycles.begin.dates<-c("1857-06",
"1860-10",
"1865-04",
"1869-06",
"1873-10",
"1882-03",
"1887-03",
"1890-07",
"1893-01",
"1895-12",
"1899-06",
"1902-09",
"1907-05",
"1910-01",
"1913-01",
"1918-08",
"1920-01",
"1923-05",
"1926-10",
"1929-08",
"1937-05",
"1945-02",
"1948-11",
"1953-07",
"1957-08",
"1960-04",
"1969-12",
"1973-11",
"1980-01",
"1981-07",
"1990-07",
"2001-03",
"2007-12"
)
cycles.end.dates<-c("1858-12",
"1861-06",
"1867-12",
"1870-12",
"1879-03",
"1885-05",
"1888-04",
"1891-05",
"1894-06",
"1897-06",
"1900-12",
"1904-08",
"1908-06",
"1912-01",
"1914-12",
"1919-03",
"1921-07",
"1924-07",
"1927-11",
"1933-03",
"1938-06",
"1945-10",
"1949-10",
"1954-05",
"1958-04",
"1961-02",
"1970-11",
"1975-03",
"1980-07",
"1982-11",
"1991-03",
"2001-11",
"2009-06"
)
# Event lists - FOR BEST RESULTS, KEEP THESE DATES IN ORDER
risk.dates = c(
"1987-10-19",
"1994-02-01",
"1997-07-01",
"1998-08-17",
"1998-09-23",
"2000-07-01",
"2011-09-11")
risk.labels = c(
"Black Monday",
"Bond Crash",
"Asian Crisis",
"Russian Crisis",
"LTCM",
"Tech Bubble",
"Sept 11")
data(edhec)
R=edhec[,1:3,drop=FALSE]
Return.cumulative = cumprod(1+R) - 1
plot.xts(Return.cumulative, main="Default plot.xts Chart") #using all the defaults
#png("chartTimeSeries.png",width=640,height=567,units="px")
plot.xts(Return.cumulative, screens = 1, #screens=1 probably most appropriate for this application
  blocks = list( #get blocks to plot from above set dates in list form
    start.time = paste(cycles.begin.dates, "-01", sep = ""),
    end.time = paste(cycles.end.dates, "-01", sep = ""),
    col = "lightblue"),
  events = list( #get events to plot from above risk.dates in list form
    time = risk.dates,
    label = risk.labels,
    col = "red"),
  lwd = 2,
  legend.loc = "bottomright", auto.legend = TRUE,
  main = "EDHEC Style Indexes")
#dev.off()
#png("chartTimeSeries with extra.png",width=640,height=567,units="px")
plot.xts(Return.cumulative,
  screens = c(1, 2, 2), #plot 1st series in 1st panel and 2nd and 3rd series in 2nd panel
  layout.screens = c(1, 2, 2), #just as an example change the layout so 1st panel is 1/3 of area and 2nd is bottom 2/3
  blocks = list( #set up blocks for recessions
    start.time = paste(cycles.begin.dates, "-01", sep = ""),
    end.time = paste(cycles.end.dates, "-01", sep = ""),
    col = "lightblue"),
  events = list( #add some event lines
    time = risk.dates,
    label = risk.labels,
    offset = c(0.5, 0.5, 0.5, 0.5, -0.5, 0.5), #collision control; can also use y.adj as in next line
    y.adj = c(0, 0.5), #with recycling will alternate position for each label
    col = "purple"), #don't know why you would use purple but in case you do
  lwd = 2,
  legend.loc = "bottomright", auto.legend = TRUE,
  main = "EDHEC Style Indexes")
#dev.off()
#for a little more advanced application we can start to do charts.PerformanceSummary style plot.xts
cumulreturn.panel <- function(...) {
  mtext("Cumulative Return", side = 1, adj = 1, line = -3)
  default.panel(...)
  abline(h = pretty(c(par("yaxp")[1], par("yaxp")[2]), n = par("yaxp")[3]), col = "gray60", lty = 3)
  abline(h = 0, col = "black")
}
es.panel <- function(index, x, ...) {
  mtext("Expected Shortfall", side = 1, adj = 1, line = -3)
  default.panel(index, x, ...)
  #silly to do this but if we wanted just certain points like every 4 months we could do something like this
  #default.panel(index[seq(1,NROW(index),by=4)],coredata(x[seq(1,NROW(index),by=4)]),...)
  #abline(h=0, col="black")
  abline(h = pretty(c(par("yaxp")[1], par("yaxp")[2]), n = par("yaxp")[3]), col = "gray60", lty = 3)
  abline(h = par("yaxp")[1], col = "black")
}
drawdown.panel <- function(index, x, ...) {
  mtext("Drawdown", side = 1, adj = 1, line = -2)
  default.panel(index, x, ...)
  #silly to do this but if we wanted just certain points like every 4 months we could do something like this
  #default.panel(index[seq(1,NROW(index),by=4)],coredata(x[seq(1,NROW(index),by=4)]),...)
  #abline(h=0, col="black")
  abline(h = pretty(c(par("yaxp")[1], par("yaxp")[2]), n = par("yaxp")[3]), col = "gray60", lty = 3)
  abline(h = par("usr")[3], col = "black")
}
#get some risk measurements to add for Performance Summary style plot
Risk.drawdown <- Drawdowns(R)
Risk.es <- rollapplyr(R,FUN="ES",width=36,p=0.95,na.pad=TRUE)
#take care of NA with 0 at beginning and interpolation at end
Risk.es <- apply(Risk.es,MARGIN=2,FUN=na.fill,fill=c(0,"extend"))
#something wrong with returned value from apply.rolling so indexes don't work properly
data.to.plot <- as.xts(cbind(coredata(Return.cumulative),Risk.es,coredata(Risk.drawdown)),order.by=index(edhec))
#png("chartsPerformanceSummary.png",width=640,height=600,units="px")
plot.xts(data.to.plot,
  lwd = c(2, 1, 1), #do this to show how arguments are recycled
  col = brewer.pal(n = 9, "PuBu")[c(8, 4, 6)],
  auto.grid = FALSE, #usually auto.grid works just fine but turn off for example purposes
  las = 1, yax.loc = "right", #yax.loc could also be flip or left in this case
  screens = c(1, 1, 1, 2, 2, 2, 3, 3, 3), #3 series for each so first 3 in panel 1, second 3 in panel 2, and last 3 in panel 3
  layout.screens = c(1, 1, 2, 3), #panel 1 take up first 2 of 4 so 50% and panels 2 and 3 each 25%
  bty = "n",
  panel = c(cumulreturn.panel, es.panel, drawdown.panel), #c(first.panel,"auto"), #panel cycles through by panel rather than each series
  ylab = NA, major.format = "%b %Y", minor.ticks = FALSE,
  legend.loc = c("topleft", NA, NA), auto.legend = TRUE,
  legend.pars = list(bty = "n", horiz = TRUE), #make legend box transparent
  cex.axis = 0.9,
  main = NA)
title(main = "Performance Summary of EDHEC Indexes", adj = 0, outer = TRUE, line = -1.5)
#dev.off()
#just playing around with type="h"
#think will only work if all colors very different
plot.xts(Risk.drawdown["2000::2001", ], type = "h", screens = 1, lwd = c(10, 25, 15),
  col = apply(as.matrix(brewer.pal(n = 9, "PuBu")[c(8, 4, 6)]), MARGIN = 1, FUN = addalpha, alpha = 120))

Friday, August 10, 2012

Animated GIF Annual Correlation of 48 Industries for 50 Years

Inspired by http://blogs.reuters.com/felix-salmon/2012/08/06/chart-of-the-day-hft-edition/, I thought I would build on 48 Industries (Dendrogram Ordered) Over 50 Years, “Trend is Not Your Friend” Applied to 48 Industries, and 48 Industries Since 1963 with an animated GIF of annual correlation for the Kenneth French 48 Industry Data set.


R code in GIST (do raw for copy/paste):

#get very helpful Ken French data
#for this project we will look at Industry Portfolios
#http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/48_Industry_Portfolios_daily.zip
require(latticeExtra)
require(animation)
require(PerformanceAnalytics)
require(quantmod)
require(RColorBrewer) #for brewer.pal used in the color ramp below
#my.url will be the location of the zip file with the data
my.url="http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/48_Industry_Portfolios_daily.zip"
#this will be the temp file set up for the zip file
my.tempfile<-paste(tempdir(),"\\frenchindustry.zip",sep="")
#my.usefile is the name of the txt file with the data
my.usefile<-paste(tempdir(),"\\48_Industry_Portfolios_daily.txt",sep="")
download.file(my.url, my.tempfile, method="auto",
quiet = FALSE, mode = "wb",cacheOK = TRUE)
unzip(my.tempfile,exdir=tempdir(),junkpath=TRUE)
#read space delimited text file extracted from zip
french_industry <- read.table(file=my.usefile,
header = TRUE, sep = "",
as.is = TRUE,
skip = 9, nrows=12211)
#get dates ready for xts index
datestoformat <- rownames(french_industry)
datestoformat <- paste(substr(datestoformat,1,4),
substr(datestoformat,5,6),substr(datestoformat,7,8),sep="-")
#get xts for analysis
french_industry_xts <- as.xts(french_industry[,1:NCOL(french_industry)],
order.by=as.Date(datestoformat))
#divide by 100 to get percent
french_industry_xts <- french_industry_xts/100
#delete missing data which is denoted by -0.9999
french_industry_xts[which(french_industry_xts < -0.99,arr.ind=TRUE)[,1],
unique(which(french_industry_xts < -0.99,arr.ind=TRUE)[,2])] <- 0
#get a vector of the end of years
evaldates <- endpoints(french_industry_xts,"years")
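#note: saveGIF from the animation package evaluates the expression below and
#stitches each resulting plot into an animated GIF (it assumes an external tool
#such as ImageMagick is installed and on the path)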
saveGIF(
  for (i in 2:length(evaldates)) {
    #do correlation table for the year
    ca <- cor(french_industry_xts[evaldates[i-1]:evaldates[i], ])
    #replace na with 0
    ca[which(is.na(ca), arr.ind = TRUE)[, ]] <- 0
    #get colors to use for heat map
    brew <- brewer.pal(name = "RdBu", n = 5)
    #get color ramp
    cc.brew <- colorRampPalette(brew)
    #apply color ramp
    cc <- cc.brew(nrow(ca))
    #heatmap without dendrogram ordering (rows/cols left in original order)
    #heatmap(ca,symm=TRUE,Rowv=NA,Colv=NA,col=cc,RowSideColors=cc,main="")
    #title(main=paste("Correlation Table\n",index(french_industry_xts)[evaldates[i]],sep=""),font.main=1,outer=TRUE,line=-2,cex.main=1.3)
    #do with dendrogram ordering
    heatmap(ca, symm = TRUE, col = cc, RowSideColors = cc, main = "")
    title(main = paste("Correlation Table (Dendrogram Ordered)\n", index(french_industry_xts)[evaldates[i]], sep = ""),
      font.main = 1, outer = TRUE, line = -3, cex.main = 1.3, adj = 0)
  }
)

Thursday, August 9, 2012

48 Industries (Dendrogram Ordered) Over 50 Years

Thanks to reader AHWest for the comment on post 48 Industries Since 1963.

“I think it would be interesting to see the industries ordered by some sort of similarity of returns.”

I think this is a great suggestion, and I would like to see it also.  I tried the dendrogram plot technique from Inspirational Stack Overflow Dendrogram Applied to Currencies, but then I spotted the dendrogramGrob in the latticeExtra documentation.  This was much easier, and in a couple of lines, we are able to order and connect the 48 industries.

R code from GIST (do raw for copy/paste):
require(fAssets)
require(latticeExtra)
require(quantmod)
require(PerformanceAnalytics)
#my.url will be the location of the zip file with the data
my.url="http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/48_Industry_Portfolios_daily.zip"
#this will be the temp file set up for the zip file
my.tempfile<-paste(tempdir(),"\\frenchindustry.zip",sep="")
#my.usefile is the name of the txt file with the data
my.usefile<-paste(tempdir(),"\\48_Industry_Portfolios_daily.txt",sep="")
download.file(my.url, my.tempfile, method="auto",
quiet = FALSE, mode = "wb",cacheOK = TRUE)
unzip(my.tempfile,exdir=tempdir(),junkpath=TRUE)
#read space delimited text file extracted from zip
french_industry <- read.table(file=my.usefile,
header = TRUE, sep = "",
as.is = TRUE,
skip = 9, nrows=12211)
#get dates ready for xts index
datestoformat <- rownames(french_industry)
datestoformat <- paste(substr(datestoformat,1,4),
substr(datestoformat,5,6),substr(datestoformat,7,8),sep="-")
#get xts for analysis
french_industry_xts <- as.xts(french_industry[,1:NCOL(french_industry)],
order.by=as.Date(datestoformat))
#divide by 100 to get percent
french_industry_xts <- french_industry_xts/100
#delete missing data which is denoted by -0.9999
french_industry_xts[which(french_industry_xts < -0.99,arr.ind=TRUE)[,1],
unique(which(french_industry_xts < -0.99,arr.ind=TRUE)[,2])] <- 0
#get price series or cumulative growth of 1
french_industry_price <- cumprod(french_industry_xts+1)
#get 250 day rate of change or feel free to change to something other than 250
roc <- french_industry_price
#split into groups so do not run out of memory
for (i in seq(12, 48, by = 12)) {
  roc[, ((i-11):(i))] <- ROC(french_industry_price[, ((i-11):(i))], n = 250, type = "discrete")
}
roc[1:250,] <- 0
# try to do http://stackoverflow.com/questions/9747426/how-can-i-produce-plots-like-this
# was much easier to use latticeExtra
# get dendrogram data from hclust
# backward and repetitive but it works
t <- assetsDendrogramPlot(as.timeSeries(french_industry_xts))
# thanks to the latticeExtra example
dd.row <- as.dendrogram(t$hclust)
row.ord <- order.dendrogram(dd.row)
xyplot(roc[, row.ord],
  layout = c(1, 48), ylim = c(0, 0.25),
  scales = list(tck = c(1, 0), y = list(draw = FALSE, relation = "same")),
  horizonscale = 0.25,
  origin = 0,
  colorkey = TRUE,
  #since so many industries, we will comment out grid
  panel = function(x, y, ...) {
    panel.horizonplot(x, y, ...) #feel free to change to whatever you would like
    #panel.grid(h=3, v=0, col="white", lwd=1, lty=3)
  },
  ylab = list(rev(colnames(roc[, row.ord])), rot = 0, cex = 0.7, pos = 3),
  xlab = NULL,
  par.settings = theEconomist.theme(box = "gray70"),
  #use ylab above for labelling so we can specify FALSE for strip and strip.left
  strip = FALSE,
  strip.left = FALSE,
  main = "French Daily 48 Industry (Dendrogram Ordered) 1963-2011\n source: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french",
  legend =
    list(right =
      list(fun = dendrogramGrob,
        args =
          list(x = dd.row, ord = row.ord,
            side = "right",
            size = 10))))

Wednesday, August 8, 2012

“Trend is Not Your Friend” Applied to 48 Industries

Please see previous post Crazy RUT in Academic Context Why Trend is Not Your Friend.

I’ll repeat the intro to the post mentioned above, so we can all get caught back up.

In response to Where are the Fat Tails?, reader vonjd very helpfully referred me to this paper The Trend is Not Your Friend! Why Empirical Timing Success is Determined by the Underlying’s Price Characteristics and Market Efficiency is Irrelevant by Peter Scholz and Ursula Walther. The authors conclude

“Our study on the basis of real data clearly confirms the hypothesis that the asset price characteristics of the underlying price process have a crucial impact on timing results. This allows us to forecast the timing success depending on the market's parameters. An OLS regression analysis supports our predictions and verifies our assumption that the drift has the strongest influence on timing success. By contrast, the higher moments (skewness, kurtosis) seem not to have any significant impact on the timing result in the empirical sample. As we presumed, the level of market development, and hence the degree of efficiency, does not play any role. Trading worked coincidentally rather well in the developed world and quite poorly in the emerging markets. The driving factor for the timing success is the parametric environment the trading system stumbles on…

Our study contributes to the discussion by providing a structured analysis of the relevance of the most important price process parameters. As a result, the traditional explanations for timing success can be abandoned: we find that it is very likely for the SMA trading rule to generate excess returns over its benchmark if the underlying price path exhibits negative drifts, high serial autocorrelation, low volatilities of returns, and highly clustered volatilities. Drift and autocorrelation of the underlying asset seem to have the largest impact, though.”

One of my initial ideas for extending the research was to incorporate a much larger set of indexes over a longer period of time.  As I was working on 48 Industries Since 1963, I decided 50 years of data on 48 different indexes would be a great dataset to apply the ideas and methods presented in the paper.

Using R and all its wonderful packages, it is surprisingly easy to accomplish.  Let’s see if we can test “drift and autocorrelation…have the largest impact” on excess returns with industries also.  I’ll try not to get too statistical.

In terms of drift or annualized return, we can see a linear inverse relationship between return and out(under)performance of the 200 day moving average system, so the better the performance of the industry, the less likely a moving average system is to outperform.


In terms of the GARCH model effects on excess returns, a parallel coordinate chart will best start our exploration.  Lines are colored by the excess return of a moving average system on each industry.


Mu and alpha1 seem to most heavily influence the ability of a moving average system to outperform.  Let’s isolate our chart to mu and alpha1 and add ar1 based on the authors’ findings.  I am very tempted to try to explain GARCH here, but for the sake of brevity, I’ll refrain.  This paper and this Portfolio Probe post offer a good introduction to GARCH.
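As a quick reference (my own shorthand, not from the paper or those posts), the coefficients returned by rugarch's ugarchfit for this ARMA(1,1) + GARCH(1,1) specification correspond to a mean equation r_t = mu + ar1*r_(t-1) + ma1*e_(t-1) + e_t and a variance equation sigma2_t = omega + alpha1*e_(t-1)^2 + beta1*sigma2_(t-1), so mu captures drift, ar1 serial autocorrelation, and alpha1/beta1 the volatility clustering the authors highlight.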


Now let’s look at a scatterplot of excess return versus each of the GARCH stats, starting with those with the most influence.


In conclusion, it seems the same effects observed by the authors also apply to US industry indexes.  In future posts, I’ll add a little more statistical rigor to the analysis and apply to other indexes.

Now, I just cannot resist using a horizon plot to evaluate the rolling 250 day excess returns of a moving average system over buy and hold.  As you can see, a bull market favors buy and hold.  The 70s and 2008-2009 were very kind to a moving average approach.


R code in GIST (do raw for copy/paste):

#get very helpful Ken French data
#for this project we will look at Industry Portfolios
#http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/48_Industry_Portfolios_daily.zip
require(latticeExtra)
require(PerformanceAnalytics)
require(quantmod)
#my.url will be the location of the zip file with the data
my.url="http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/48_Industry_Portfolios_daily.zip"
#this will be the temp file set up for the zip file
my.tempfile<-paste(tempdir(),"\\frenchindustry.zip",sep="")
#my.usefile is the name of the txt file with the data
my.usefile<-paste(tempdir(),"\\48_Industry_Portfolios_daily.txt",sep="")
download.file(my.url, my.tempfile, method="auto",
quiet = FALSE, mode = "wb",cacheOK = TRUE)
unzip(my.tempfile,exdir=tempdir(),junkpath=TRUE)
#read space delimited text file extracted from zip
french_industry <- read.table(file=my.usefile,
header = TRUE, sep = "",
as.is = TRUE,
skip = 9, nrows=12211)
#get dates ready for xts index
datestoformat <- rownames(french_industry)
datestoformat <- paste(substr(datestoformat,1,4),
substr(datestoformat,5,6),substr(datestoformat,7,8),sep="-")
#get xts for analysis
french_industry_xts <- as.xts(french_industry[,1:NCOL(french_industry)],
order.by=as.Date(datestoformat))
#divide by 100 to get percent
french_industry_xts <- french_industry_xts/100
#delete missing data which is denoted by -0.9999
french_industry_xts[which(french_industry_xts < -0.99,arr.ind=TRUE)[,1],
unique(which(french_industry_xts < -0.99,arr.ind=TRUE)[,2])] <- 0
#get price series or cumulative growth of 1
french_industry_price <- cumprod(french_industry_xts+1)
#do 200 day moving average for initial testing
#just change n to test on other widths or rolling period
ma <- as.xts(apply(french_industry_price, MARGIN = 2, FUN = runMean, n=200), order.by = index(french_industry_price))
#do plot to test reasonableness
chart.TimeSeries(ma,ylog=TRUE)
#set up system to enter when price moves above moving average
#exit when below
ma.system <- lag(as.xts(
    apply(french_industry_price > ma, MARGIN = 2, as.numeric),
    order.by = index(french_industry_price)),
  k = 1) * french_industry_xts
#get returns cumulative and annualized for the entire period
ret.comp.cumul <- Return.cumulative(ma.system) - Return.cumulative(french_industry_xts)
ret.bh.ann <- Return.annualized(french_industry_xts)
ret.comp.ann <- Return.annualized(ma.system) - ret.bh.ann
rownames(ret.comp.ann) <- "Out(under)performance"
#get colors to use for heat map style coloring by out/under performance
brew <- brewer.pal(name="RdBu",n=5)
#get color ramp
cc.brew <- colorRampPalette(brew)
#apply color ramp
cc <- cc.brew(length(ret.comp.ann))
#do colors based on out/under performance but with gray so visible when labelling
cc.palette <- colorRampPalette(c(cc[1],"gray60",cc[length(cc)]))
cc.levpalette <- cc.palette(length(ret.comp.ann))
cc.levels <- level.colors(ret.comp.ann,
  at = do.breaks(c(-max(abs(ret.comp.ann)), max(abs(ret.comp.ann))), length(ret.comp.ann)),
  col.regions = cc.levpalette)
#plot(ret.comp.cumul ~ Return.cumulative(french_industry_xts),pch=19)
#plot out/underperformance of ma system vs annualized compounded return of buyhold
plot(x = ret.bh.ann, y = ret.comp.ann,
  pch = 19, col = cc.levels, bty = "l",
  xlab = "Ann. Return of BuyHold", ylab = "Out(under)Performance of MA")
text(x = ret.bh.ann, y = ret.comp.ann, labels = colnames(ret.comp.ann), cex = 0.7, pos = 2, col = cc.levels)
#plot out/underperformance of ma system vs std. dev. of buyhold
plot(y = ret.comp.ann, x = apply(na.omit(french_industry_xts), MARGIN = 2, sd),
  pch = 19, col = cc.levels, bty = "l",
  xlab = "StdDev of BuyHold", ylab = "Out(under)Performance of MA")
text(y = ret.comp.ann, x = apply(na.omit(french_industry_xts), MARGIN = 2, sd), labels = colnames(ret.comp.ann), cex = 0.7, pos = 2, col = cc.levels)
#plot out/underperformance of ma system vs Omega of buyhold
plot(y = ret.comp.ann, x = Omega(french_industry_xts),
  pch = 19, col = cc.levels, bty = "l",
  xlab = "Omega of BuyHold", ylab = "Out(under)Performance of MA")
text(y = ret.comp.ann, x = Omega(french_industry_xts), labels = colnames(ret.comp.ann), cex = 0.7, pos = 2, col = cc.levels)
#using rugarch get garch stats similar to those explored in the research
require(rugarch)
spec = ugarchspec(
  variance.model = list(garchOrder = c(1, 1)),
  mean.model = list(armaOrder = c(1, 1), include.mean = T))
#set up function to get garch stats through apply function
gfNa <- function(data, spec) {
  x <- na.omit(coredata(data))
  gf <- suppressWarnings(ugarchfit(spec = spec, data = x))
  stats <- coef(gf)
  return(stats)
}
#do apply to get garch stats across all industries
gf.stats <- apply(french_industry_xts[,1:NCOL(french_industry_xts)],MARGIN=2,FUN=gfNa,spec=spec)
#plot return comparison by each garch stat
for (i in 1:NROW(gf.stats)) {
  plot(y = ret.comp.ann, x = gf.stats[i, ],
    pch = 19, col = cc.levels, bty = "l",
    xlab = rownames(gf.stats)[i], ylab = "Out(under)Performance of MA")
  text(y = ret.comp.ann, x = gf.stats[i, ], labels = colnames(ret.comp.ann), cex = 0.7, pos = 2, col = cc.levels)
}
#thinking through linear model
#linmod = lm(as.vector(ret.comp.ann)~as.vector(gf.stats[1,]))
#plot(linmod)
#do parallel coordinates chart with color
require(MASS)
parcoord(cbind(t(ret.comp.ann), t(gf.stats)[, c(1, 2, 5)]),
  col = cc.levels, lwd = 2,
  main = "Out(under)Performance by GARCH Stat")
parcoord(cbind(t(ret.comp.ann), t(gf.stats)),
  col = cc.levels, lwd = 2,
  main = "Out(under)Performance by GARCH Stat")
#get rolling out(under) performance for horizon chart
#change na to 0 in ma.system returns
ma.system[which(is.na(ma.system), arr.ind = TRUE)[, 1],
  unique(which(is.na(ma.system), arr.ind = TRUE)[, 2])] <- 0
ma_system_price <- cumprod(1+ma.system)
roc <- french_industry_price
#split into groups so do not run out of memory
for (i in seq(12, 48, by = 12)) {
  #get difference in rolling performance
  roc[, ((i-11):(i))] <- ROC(ma_system_price[, ((i-11):(i))], n = 250, type = "discrete") -
    ROC(french_industry_price[, ((i-11):(i))], n = 250, type = "discrete")
}
roc[1:250,] <- 0
#do a horizon plot of all 48 industries with horizonscale of 0.25
horizonplot(roc,
  layout = c(1, 48),
  horizonscale = 0.25, #feel free to change to whatever you would like
  scales = list(tck = c(1, 0), y = list(draw = FALSE, relation = "same")),
  origin = 0,
  colorkey = FALSE,
  #since so many industries, we will comment out grid
  #panel = function(x, ...) {
  #  panel.horizonplot(x, ...)
  #  panel.grid(h=3, v=0, col="white", lwd=1, lty=3)
  #},
  ylab = list(rev(colnames(roc)), rot = 0, cex = 0.7, pos = 3),
  xlab = NULL,
  par.settings = theEconomist.theme(box = "gray70"),
  #use ylab above for labelling so we can specify FALSE for strip and strip.left
  strip = FALSE,
  strip.left = FALSE,
  main = "Moving Averages System Performance on French Daily 48 Industry 1963-2011\n source: http://mba.tuck.dartmouth.edu/pages/faculty/ken.french")