Monday, November 19, 2012

Drawdown Determined Position Size

This caught my eye as I searched for some more academic research on my favorite risk measure drawdown.

Yang, Z. George and Zhong, Liang,
Optimal Portfolio Strategy to Control Maximum Drawdown -
The Case of Risk Based Dynamic Asset Allocation
(February 25, 2012).
Available at SSRN:
http://ssrn.com/abstract=2053854 or
http://dx.doi.org/10.2139/ssrn.2053854 

The paper sets out to do what I have tried to do without any real success: use drawdown to help determine position size.  I felt motivated to replicate their measure, the Rolling Economic Drawdown-Controlled Optimal Portfolio Strategy (REDD-COPS), in R.  Since drawdown suffers from a significant lag, the authors suggest a rolling drawdown to offset some of that embedded lag:

"Intuitively, a drawdown look-back period H [length of rolling period] somewhat shorter than or similar to the market decline cycle is the key to achieve optimality. Substituting EDD with a lower REDD in equation (1), we have higher risky asset allocation to improve portfolio return
during a market rebound phase. In the examples followed, we'll use H = 1 year throughout."
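
Before diving into the full GIST below, the rolling economic drawdown itself can be sketched in a few lines of base R.  This is my own paraphrase of the definition (the in-window peak is grown at the risk-free rate from the peak date to today), and the toy price and risk-free series are made up purely for illustration:

```r
# Rolling Economic Drawdown (REDD) sketch; width = look-back H in months
redd <- function(price, rf_cum, width = 12) {
  n <- length(price)
  out <- rep(NA_real_, n)
  for (t in width:n) {
    win <- (t - width + 1):t
    peak_i <- win[which.max(price[win])]
    # grow the in-window peak at the risk-free rate from the peak date to t
    adj_peak <- price[peak_i] * rf_cum[t] / rf_cum[peak_i]
    out[t] <- 1 - price[t] / adj_peak
  }
  out
}

set.seed(1)
price  <- cumprod(1 + rnorm(36, 0.005, 0.04))  # toy monthly equity curve
rf_cum <- cumprod(rep(1.002, 36))              # ~2.4%/year risk-free growth
round(tail(redd(price, rf_cum), 3), 3)
```

At a fresh 12-month high REDD is exactly zero, and it grows as price falls below the risk-free-adjusted peak, which is what lets the allocation recover during a rebound much sooner than a full-history drawdown would.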

The authors calibrate REDD-COPS on the S&P 500 as a single asset, and then use it in a portfolio context with three assets (S&P 500 – SPY, US 20+ Year Treasury – TLT, and the DJ-UBS Commodity Index).  I'll show the results from my attempt to replicate the single-asset test.  Sorry for the Thanksgiving-themed but ugly colors; I just could not resist.

[Chart: REDD-COPS single-asset test on the S&P 500 (From TimelyPortfolio)]

Their results are interesting, but I'm not entirely convinced of the robustness of a system that uses REDD-COPS to determine position size, especially since their use of the entire-period Sharpe ratio requires hindsight.  However, despite the ultimate result, the byproduct discovery discussed in my post Cash–Opportunity Lost or Opportunity Gained was well worth the effort.  Stay tuned for my attempt at the multi-asset REDD-COPS system.
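
For anyone who wants the allocation rule without wading through the xts plumbing, the weight the code applies reduces to a one-line function of the Sharpe ratio, the drawdown limit, and the current REDD.  This mirrors my reading of the paper's formula as implemented in the GIST below, with the weight floored at zero instead of allowing shorts:

```r
# REDD-COPS risky-asset weight; sharpe and redd as decimals, dd = drawdown limit
redd_cops_weight <- function(sharpe, redd, dd = 0.20) {
  w <- ((sharpe / dd + 0.5) / (1 - dd^2)) * ((dd - redd) / (1 - redd))
  pmax(w, 0)  # fully out of the risky asset once REDD reaches the limit
}
# with the paper's hindsight Sharpe of 0.403 and a 20% drawdown limit
round(redd_cops_weight(sharpe = 0.403, redd = c(0, 0.10, 0.20, 0.30)), 3)
# -> 0.524 0.291 0.000 0.000
```

Notice the hindsight problem in miniature: the whole scale of the weight is driven by the Sharpe input, which is exactly the number you do not know ahead of time.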

R code in GIST:

#explore Rolling Economic Drawdown - Controlled Optimal Portfolio Strategy (REDD-COPS)
#from Yang, Z. George and Zhong, Liang,
#Optimal Portfolio Strategy to Control Maximum Drawdown -
#The Case of Risk Based Dynamic Asset Allocation (February 25, 2012).
#Available at SSRN: http://ssrn.com/abstract=2053854 or
#http://dx.doi.org/10.2139/ssrn.2053854
require(quantmod)
require(PerformanceAnalytics)
require(RColorBrewer)
#get sp500 for first attempt
getSymbols("^GSPC", from = "1900-01-01")
GSPC.monthly <- to.monthly(GSPC)[,4]
index(GSPC.monthly) <- as.Date(index(GSPC.monthly))
roc <- ROC(GSPC.monthly, n = 1, type = "discrete")
#get 1 year t-bill for risk-free
getSymbols("GS1", src = "FRED")
#combine the monthly SP500 return with a monthly return of GS1 1 year treasury
returns <- na.omit( merge(roc, ((1+lag(GS1,1) / 100) ^ (1/12)) - 1) )
cumreturns <- cumprod(1+returns)
#calculate REDD assuming 1st column is risky asset and 2nd is risk-free
REDD <- function(x, rf) {
  rf <- rf[index(x)]
  result <- 1 - last(x) /
    ( coredata(max(x)) * coredata(last(rf)) / coredata(first(rf[index(x[which(x == max(x))])])) )
  return(result)
}
#get REDD for SP500
#paper says
#"Intuitively, a drawdown look-back period H somewhat shorter than or similar to the
#market decline cycle is the key to achieve optimality. Substituting EDD with a lower
#REDD in equation (1), we have higher risky asset allocation to improve portfolio return
#during a market rebound phase. In the examples followed, we'll use H = 1 year throughout."
GSPC.redd <- rollapplyr(cumreturns[,1], width = 12, FUN = REDD, rf=cumreturns[,2])
#experiment with a couple of different Sharpe options
GSPC.sharpe <- na.omit( runMax(lag(rollapplyr(returns[,1], width = 36, FUN = SharpeRatio, Rf = 0, p = 0.95, "StdDev"), 12),
                               n = 36) )
#another Sharpe alternative
#GSPC.sharpe <- 1 - na.omit( runMin(lag(rollapplyr(returns[,1], width = 36, FUN = SharpeRatio, Rf = 0, p = 0.95, "StdDev"), 12),
#                                   n = 12) )
#if you would like to use a constant Sharpe, specify here and uncomment
#the paper uses a little hindsight to use the historic 0.403 Sharpe
#GSPC.sharpe <- 0.403
#feel free to experiment here
#I will specify 0.2
drawdown.limit <- 0.20
position.size <- as.xts(apply(( (GSPC.sharpe/drawdown.limit + 0.5) / (1 - drawdown.limit^2) ) *
                                ( (drawdown.limit - GSPC.redd) / (1 - GSPC.redd) ),
                              MARGIN = 1, FUN = max, 0),
                        order.by = index(GSPC.sharpe))
plot(position.size)
sum(position.size)/NROW(position.size)
#charts.PerformanceSummary(merge(lag(position.size)*roc, roc))
return.comps <- merge(lag(position.size) * returns[,1] + lag(1 - position.size) * returns[,2],
                      returns[,1], returns[,2])
colnames(return.comps) <- c("REDD-COPS","SP500","US1Y")
charts.PerformanceSummary(return.comps, ylog = TRUE,
                          colorset = brewer.pal(10,"Spectral")[c(2,4,7)], #Thanksgiving but ugly colors
                          main = "REDD-COPS System Test (http://ssrn.com/abstract=2053854)")

Friday, November 9, 2012

Unbelievable and Amazing R Shiny–Web Parameter Test in 1.5 Hours

Life keeps getting better and better.  Yesterday, I discovered the absolutely unbelievable and amazing work RStudio has done with Shiny, which builds on one of my favorite R packages, websockets.  As proof of its ease and quality, I was able to get it up and running within a couple of minutes.  This morning, starting basically from scratch, I was able to build a web-based interactive parameter test for a moving average system, my first example, in less than 1.5 hours.

Below is a screencast of this very basic parameter testing web app.  I can only hope that this simple application can illustrate just how powerful Shiny is.  Just imagine pairing this with d3 or knitr.

R code for server.R and ui.R from GIST:

#almost entirely based on the 02_text and 03_mpg examples provided by RStudio Shiny
#all credit belongs to them
if (!require(PerformanceAnalytics)) {
  stop("This app requires the PerformanceAnalytics package. To install it, run 'install.packages(\"PerformanceAnalytics\")'.\n")
}
if (!require(quantmod)) {
  stop("This app requires the quantmod package. To install it, run 'install.packages(\"quantmod\")'.\n")
}
# Download data for a symbol, if needed
require_symbol <- function(symbol) {
  if (!exists(symbol))
    getSymbols(symbol, src = "FRED")
    #getSymbols(symbol, from = "1900-01-01")
}
library(shiny)
# Define server logic required to chart and summarize the system
shinyServer(function(input, output) {
  make_chart <- function(symbol = "SP500") {
    # get price data if it does not already exist
    require_symbol(symbol)
    #would hope not to recalculate each time but for now will leave messy
    price.monthly <- to.monthly(get(symbol))[,4]
    ret.monthly <- ROC(price.monthly, type = "discrete", n = 1)
    #calculate system returns
    systemRet <- merge(
      ifelse(lag(price.monthly > runMean(price.monthly, n = input$nmonths), k = 1), 1, 0) * ret.monthly,
      ret.monthly)
    colnames(systemRet) <- c(paste(input$nmonths, "MASys", sep = ""), symbol)
    charts.PerformanceSummary(systemRet, ylog = TRUE)
  }
  make_table <- function(symbol = "SP500") {
    # get price data if it does not already exist
    require_symbol(symbol)
    #would hope not to recalculate each time but for now will leave messy
    price.monthly <- to.monthly(get(symbol))[,4]
    ret.monthly <- ROC(price.monthly, type = "discrete", n = 1)
    #calculate system returns
    systemRet <- merge(
      ifelse(lag(price.monthly > runMean(price.monthly, n = input$nmonths), k = 1), 1, 0) * ret.monthly,
      ret.monthly)
    colnames(systemRet) <- c(paste(input$nmonths, "MASys", sep = ""), symbol)
    table.Stats(systemRet)
  }
  # Generate a plot of the system and buy/hold benchmark given the nmonths parameter
  output$systemPlot <- reactivePlot(function() {
    make_chart()
  })
  # Generate a summary stats table for the system and benchmark
  output$view <- reactiveTable(function() {
    make_table()
  })
})
ui.R:
#almost entirely based on the 02_text and 03_mpg examples provided by RStudio Shiny
#all credit belongs to them
library(shiny)
# Define UI for the moving average parameter test application
shinyUI(pageWithSidebar(
  # Application title
  headerPanel("Shiny Moving Average Parameter Test"),
  # Sidebar with a control to choose the moving average length
  sidebarPanel(
    numericInput("nmonths", "Number of months for moving average:", 10)
  ),
  # Show the system chart and a summary stats table
  mainPanel(
    plotOutput("systemPlot"),
    tableOutput("view")
  )
))

Wednesday, November 7, 2012

Cash–Opportunity Lost or Opportunity Gained

Tom Brakke from http://researchpuzzle.com/ wrote a great thought piece Cash as Trash, Cash as King, and Cash as a Weapon for the CFA Institute blog.  My favorite part comes in the last paragraph:

“That’s the kind of analysis that should be brought to the discussion of cash, not simple sayings that bounce back and forth in response to the mood of the market. Individual investors should not be afraid to hold cash, even when it’s earning little, if it’s available to them when needed most. And investment professionals should get away from misguided notions about how much cash is too much cash in a portfolio. Let the manager use the value and power of cash to execute a strategy. Then you can judge whether the strategy makes sense. Don’t remove cash as an effective weapon.”

Another way of looking at cash is to ask whether it represents the commonly accepted notion of opportunity lost (opportunity cost, or “cash as trash”) or opportunity gained (Buffett’s cash as a “call option,” as described in a solid Globe and Mail article).  I hope those who know me or read this blog know where I stand.  Cash is a refuge in the absence of opportunity, and I plan to spend significant time over the next couple of months exploring how to mathematically price cash as a call option. If anyone has attempted this or read any research on it, please share it with me.
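
As a first, admittedly crude stab at that question, one could treat holding cash as owning a European call on a future buying opportunity and lean on plain Black-Scholes.  Every input here (the 20%-below-market strike, the 20% volatility, the two-year horizon) is an assumption of mine purely for illustration, not anything from Buffett or the article:

```r
# Black-Scholes European call in base R
bs_call <- function(S, K, r, sigma, T) {
  d1 <- (log(S / K) + (r + sigma^2 / 2) * T) / (sigma * sqrt(T))
  d2 <- d1 - sigma * sqrt(T)
  S * pnorm(d1) - K * exp(-r * T) * pnorm(d2)
}
# value of the "option" to buy the market 20% below today within 2 years,
# at roughly current short rates and long-run equity volatility
bs_call(S = 100, K = 80, r = 0.002, sigma = 0.20, T = 2)
```

The hard part, and where I expect to spend the time, is deciding what strike and volatility even mean when the underlying is "opportunity" rather than a traded asset.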

Interestingly enough, as a byproduct of some other research, yesterday I was confronted with something I should have already known.  If you compare the 1-year US Treasury (not really cash, but close enough) with just the price return of the S&P 500 starting from 1960, the price-only S&P 500 is extraordinarily unremarkable.

[Chart: S&P 500 (price only) compared to US 1-year Treasury cumulative growth (From TimelyPortfolio)]
[Chart: S&P 500 (price only) and US 1-year annualized return since 1960 (From TimelyPortfolio)]

Also, cash does not look so bad when we consider the new Research Affiliates piece "The Glidepath Illusion".  Certainly, commonly accepted “wisdom” does not seem so wise.

R code from GIST:

#start exploring Buffett's cash as a call option
#as described in
#http://www.theglobeandmail.com/globe-investor/investment-ideas/streetwise/for-warren-buffett-the-cash-option-is-priceless/article4565468/
require(latticeExtra)
require(directlabels)
require(reshape2)
require(quantmod)
require(PerformanceAnalytics)
#get sp500 for first attempt
getSymbols("SP500", src="FRED")
SP500.monthly <- to.monthly(SP500)[,4]
index(SP500.monthly) <- as.Date(index(SP500.monthly))
roc <- ROC(SP500.monthly, n = 1, type = "discrete")
#get 1 year t-bill for risk-free
getSymbols("GS1", src = "FRED")
#combine the monthly SP500 return with a monthly return of GS1 1 year treasury
returns <- na.omit( merge(roc, ((1+lag(GS1,1) / 100) ^ (1/12)) - 1) )
#do cumulative returns since 1960 so skip the 1950s
cumreturns <- cumprod(1+returns["1960::",])
cumreturns.df <- as.data.frame(cbind(index(cumreturns),coredata(cumreturns)))
colnames(cumreturns.df) <- c("Date","S&P500.Price","US1y")
cumreturns.df[,"Date"] <- as.Date(cumreturns.df[,"Date"])
cumreturns.melt <- melt(cumreturns.df,id.vars=1)
colnames(cumreturns.melt) <- c("Date","Index","CumulReturns")
#make the cumulative return line chart with latticeExtra
direct.label(asTheEconomist(xyplot(CumulReturns ~ Date, data = cumreturns.melt, groups = Index,
                                   main = "S&P 500 (Price Only) Compared to US 1 Year Treasury Cumulative Growth")),
             method = list("last.points", hjust = 1, vjust = 0, cex = 1.2))
#make a barplot for comparison of cumulative returns
barplot(t(table.Stats(returns["1960::",])[7,] + 1) ^ 12 - 1,
        beside = TRUE,
        ylim = c(0, 0.07),
        names.arg = colnames(cumreturns.df)[2:3],
        col = theEconomist.theme()$superpose.line$col,
        las = 1, cex.axis = 0.8,
        main = "")
abline(h=0)
title(main = "S&P 500 (Price Only) and US 1 Year Cumulative Return Since 1960",
      adj = 0.05, outer = TRUE, line = -2, cex.main = 0.95)