Thursday, October 20, 2011

Since My Last Trip to Disney


My family is off to Disney World for a week, so there will not be any posts while I am there. However, I thought it would be interesting to see how Disney stock has done since my last trip in September 2010.
Maybe since Disney has done so poorly, the crowds will be smaller. Of course I know better than that.
Maybe since the Japanese Yen has continued to do so well, there will be more Japanese at the parks.
It is interesting how well XLY (Consumer Discretionary) has done over the last year. I guess US consumers really are special.
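As a quick aside (my own snippet, not part of the code below, with the date window assumed), the trailing one-year XLY return can be checked directly:

require(quantmod)

#quick illustrative check of the trailing one-year XLY total return
getSymbols("XLY", from=Sys.Date()-370, to=Sys.Date(), adjust=TRUE)
as.numeric(last(Cl(XLY))/first(Cl(XLY)) - 1)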
[Chart: performance of DIS, SPY, XLY, EEM, and FXY since September 2010]
R code:
require(quantmod)
require(PerformanceAnalytics)


tckrs <- c("spy","xly","dis","eem","fxy")


getSymbols(tckrs,from="2010-09-15",to=Sys.Date(),adjust=TRUE)


stocks <- merge(DIS[,4],SPY[,4],XLY[,4],EEM[,4],FXY[,4])
colnames(stocks) <- c("DIS","SPY","XLY","EEM","FXY")
stocks.roc <- ROC(stocks,n=1,type="discrete")
stocks.roc[1,] <- 0


charts.PerformanceSummary(stocks.roc,lwd=2,
    colorset=c("indianred3","steelblue4","darkolivegreen3","gray70","purple"),
    main="Since My Last Trip to Disney",cex.legend=1.2)

Thursday, October 13, 2011

System in 10 Minutes After Twitter

On Twitter last night, I spotted @milktrader from www.algorithmzoo.com doing some range research on equity indexes.  I offered a tweet on the crazy 17% Russell 2000 move over 7 days.  Within 10 minutes, I had discovered a signal that worked very well.  It is probably worthless, but I thought I would share it in case someone cares to play with it.  THIS IS NOT INVESTMENT ADVICE AND WILL PROBABLY LOSE INCREDIBLE AMOUNTS OF MONEY.  If nothing else, it illustrates the power of R.

[Charts: performance summary of the quick untested Russell 2000 system]

R code:

require(quantmod)
require(PerformanceAnalytics)
getSymbols("^RUT",from="1896-01-01",to=Sys.Date())
signal<-ifelse(runMax(RUT[,2],7)/runMin(RUT[,3],7)-1-
    ROC(RUT[,4],n=20,type="discrete")<0.02,1,0)
perf<-merge(lag(signal,k=1)*ROC(RUT[,4],type="discrete",n=1),
    ROC(RUT[,4],type="discrete",n=1))
colnames(perf)<-c("System","Russell 2000")
charts.PerformanceSummary(perf,ylog=TRUE,
    main="Quick Untested Russell 2000 System")

Wednesday, October 12, 2011

Generosity of Asian Central Banks

The only thing that separates the United States from Europe and the notorious PIIGS is the generosity of Asian central banks, which have been consistently quantitatively easing since 1998 (see Join the Reserves).


Without this generosity, the United States could very easily have entered a death spiral (see Death Spiral of a Country and Death Spiral Warning Graph) in 2008 and still might fall into this disastrous trap.

Deliberately attacking this very tenuous thread seems foolish, but the attack has found support among our fine politicians (Washington Post: "Senate approves China currency bill").  The US needs to recruit some US$ buyers, but unfortunately there are none to fill the multi-trillion-dollar gap.

Even more concerning is the focus on China, which has been steadily attacking the problem; the focus does not make sense when you look at the appreciation of the Chinese Yuan.  I see a lot more Korean cars on the road and Korean TVs and appliances in the stores, but I hear no mention of the extreme undervaluation of the Korean Won against the US$ and especially the Japanese Yen, so it appears the Koreans Won.

[Chart: Chinese Yuan and Korean Won vs the US$, 1997 to Sep 2011]

I agree that the US$ reserve building needs to stop, but let’s not deliberately induce a positive feedback death spiral.

R code (click to download from Google Docs):

require(quantmod)
require(PerformanceAnalytics)

getSymbols("DEXCHUS",src="FRED")
getSymbols("DEXKOUS",src="FRED")

plot.zoo(merge(1/DEXCHUS,100/DEXKOUS)["1997::",],screens=1,lwd=2,
    col=c("lightblue4","antiquewhite4"),
    xlab="Year",ylab=NA,
    main="Chinese Yuan and Korean Won
    1997 to Sep 2011")
legend("right", c("China","Korea"), lwd = 2, bty="n",
    col=c("lightblue4","antiquewhite4"),y.intersp=2,
    text.col=c("lightblue4","antiquewhite4"))

Friday, October 7, 2011

ttrTests 4th and Final Test

Hopefully you have been able to persist with me through

ttrTests This is a Test Test 3: Data Snoopy

ttrTests This is a Test--Test 1 and Test 2

ttrTests: Its Great Thesis and Incredible Potentia...

For the 4th and final ttrTests test, we will check for the persistence of parameters across subperiods and then across subperiods with bootstrapped samples for each subperiod.  Not surprisingly, CUD failed most of the tests.  However, reader iQuant is one step ahead of me with his comment:

“Nice research piece, thank you! I was wondering whether we should consider the interest earned on cash positions as well as the cost of going short. In long-term strategies such as the one presented by Professor St. John this may make a lot of difference, favouring low-frequency parameter sets and increasing the conditional excess returns of zero-weight positions.”

October 2, 2011 12:46 PM

I will attempt to address this very valid concern in future posts.
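In the meantime, here is a minimal sketch of the idea (my assumptions, not the eventual treatment: FRED's DTB3 as the cash rate, a crude /252 conversion to a daily yield, and the same 50-day CUD rule used below):

require(quantmod)

#sketch: credit a 3-month T-bill yield on the days the CUD system sits in cash
getSymbols("^GSPC", from="1950-01-01", to=Sys.Date())
getSymbols("DTB3", src="FRED")   #3-month T-bill rate, percent per annum

ret <- ROC(Cl(GSPC), n=1, type="discrete")
#the same CUD rule as below with its default 50-day parameter
cud <- ifelse(runSum(ifelse(ret > 0, 1, -1), 50) >= 0, 1, 0)
cud[is.na(cud)] <- 0

#align the T-bill series to trading days; crude conversion to a daily yield
tb <- na.locf(merge(DTB3, ret, join="right"))[,1]/100/252

#in the market take the index return; in cash take the T-bill yield
strat <- ifelse(lag(cud, k=1) == 1, ret, tb)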

[Charts: boxplot and stripchart of excess returns by parameter across subperiods]

For the most intense test, we will bootstrap multiple samples across multiple subperiods. The prettiness of the graph unfortunately does not indicate the power of the test.

[Chart: paramPersist bootstrap correlations for CUD]

R code (click to download from Google Docs):

#let's define our silly countupdown function
#as a sample of a custom ttr rule
CUD <- function(x,params=50,...) {
  #CUD takes the n-period sum of 1 (up days) and -1 (down days)
  temp <- ifelse(runSum(ifelse(ROC(x,1,type="discrete") > 0,1,-1),params)>=0,1,0)
  #replace NA with 0 at beginning of period
  temp[is.na(temp)] <- 0
  temp
}

require(ttrTests)
require(quantmod)
require(lattice)
require(reshape2)
require(PerformanceAnalytics)

#note: the ttrTests defaults() function is overridden by ggplot2 and plyr
#if loaded, which will cause problems if you use ttrTests concurrently

tckrs <- c("GSPC","RUT","N225","GDAXI","DJUBS")

#use i=1 for GSPC, but adjust however you would like
i <- 1
getSymbols(paste("^",tckrs[i],sep=""),from="1896-01-01",to=Sys.Date())
test_price <- as.vector(get(tckrs[i])[,4])

#run subperiods and paramPersist to test for luck across subperiods
#subperiods "asks whether or not good choices of parameters"
#"were robust across different time periods"
#chose 6 periods; since data starts in 1950, this approximates by decade
subper <- subperiods(x=test_price, periods = 6, ttr = CUD,
    start = 20, nSteps = 30, stepSize = 10, restrict = FALSE,
    burn = 0, short = FALSE, condition = NULL,
    silent = TRUE, TC = 0.001, loud = TRUE, alpha = 0.025,
    file = "", benchmark = "hold")

#make output slightly more usable with some naming
#believe I got this right
names(subper[[2]]) <- "obs.correlation"
#while we are in a nasty for loop, grab some data also
for (j in 3:length(subper)) {
  names(subper[[j]]) <- paste(c("excess.return","z.score","adj.excess.return",
      "Sharpe.ratio","best","best.repeat","best.adjusted",
      paste("tested.parameters",c(1:(NROW(subper[[j]])-7)),sep="")))
  # add this if desired: ".subper",j-2, sep="")
  ifelse(j==3,
      excess.df <- cbind(rep(j-2,length(subper[[j]]$tested.parameters)),
          subper[[j]]$tested.parameters,
          subper[[j]]$excess.return),
      excess.df <- rbind(excess.df,
          cbind(rep(j-2,length(subper[[j]]$tested.parameters)),
              subper[[j]]$tested.parameters,
              subper[[j]]$excess.return)))
}
excess.df <- as.data.frame(excess.df)
colnames(excess.df) <- c("subperiod","parameter","excess.ret")

#run boxplot of excess returns by parameter
jpeg(filename="boxplot.jpg",
quality=100,width=6.25, height = 6.25, units="in",res=96)
boxplot(excess.df$excess.ret~excess.df$parameter,
xlab="Parameter", ylab="Excess Return",
main="Boxplot of Excess Returns by Parameter")
dev.off()   #jpeg(filename="strip chart.jpg",
# quality=100,width=6.25, height = 6.25, units="in",res=96)
stripchart(excess.df$excess.ret~excess.df$parameter, pch=19,
xlab="Parameter", ylab="Excess Return", vertical=TRUE,
col=topo.colors(NROW(subper[[j]]$tested.parameters)),
main="Stripchart of Excess Returns by Parameter")
#dev.off()     #and my favorite of all
#"tests if the persistence measure from subperiods()"
#"is statistically significant"
#this takes the longest (about 28 minutes on my i7 laptop)
#if you want to play
#change periods to 2 or bsamples to 10 to speed time
parpersist <- paramPersist(x=test_price, ttr = CUD, periods=6,
    start = 20, nSteps = 30, stepSize = 10,
    bSamples=100,
    restrict = FALSE, burn = 0, short = FALSE, condition = NULL,
    silent = TRUE, TC = 0.001, loud = TRUE, alpha = 0.025,
    file = "")
names(parpersist) <- c("act.corr","obs.corr.samples","p.value")

#jpeg(filename="paramPersist correlations.jpg",
#    quality=100, width=6.25, height=6.25, units="in", res=96)
plot(parpersist$obs.corr.samples, ylab="Correlation", xlab="Sample",
    main="paramPersist for CUD")
abline(h=parpersist$act.corr, col="darkslateblue")
text(0, parpersist$act.corr, "actual", col="darkslateblue", adj=c(0,-.1))
#dev.off()

#make output slightly more usable with some naming
#believe I got this right
#note: snoop is the result of the dataSnoop() bootstrap from the
#earlier "Data Snoopy" post; run that first or this section will error
names(snoop) <- c("details","V1","V2",
    "V3","p1.for.l","p2.for.c","p3.for.u")

#jpeg(filename="dataSnoop values.jpg",
#    quality=100, width=6.25, height=6.25, units="in", res=96)
plot(snoop$V3,
    type="l", col=2,
    main="ttrTests dataSnoop V1, V2, and V3 on CUD",
    xlab="Bootstrap Sample", ylab="Values")
points(snoop$V2, type="l", col=3)
points(snoop$V1, col=4)
legend("topright", legend=c("V1","V2","V3"), col=c(4,3,2), pch=19, lty=1)
#dev.off()


Thursday, October 6, 2011

Efficient Frontier of Buy-Hold and Tactical System

In my mind, there are two very disparate views in the money management space: Markowitz-style diversification and Faber-style tactical allocation.

I thought it would be fun to see what happens when we try to blend the two with an efficient frontier between two assets: "buy and hold" S&P 500 and a Faber 10-month tactical S&P 500 strategy.  In the process I will also introduce the fPortfolio package.

If we take the 10 month moving average S&P 500 system

[Chart: S&P 500 and Mebane Faber moving average system, monthly performance since 1950]

and treat it as a separate asset class, the efficient frontier since 1950 would look like this.

[Chart: efficient frontier of buy-and-hold S&P 500 and the Faber system]

I think this is an interesting and unique way to look at it.  If we then take the rolling tangency portfolio, the allocation fluctuates and fairly closely marks bear and bull equity markets.

[Chart: rolling tangency portfolio weights]

AS ALWAYS THIS IS NOT INVESTMENT ADVICE, JUST A FUN EXPERIMENT. SIGNIFICANT LOSSES ARE HIGHLY LIKELY IF YOU PURSUE THIS APPROACH.

The blended allocation looks fairly good.  There was no optimization or backtesting.  Please let me know if you discover improvements.

[Charts: risk/return scatter and cumulative performance with drawdowns of the blended allocation]

R code (download from Google Docs):

require(quantmod)
require(PerformanceAnalytics)

#get GSPC for the S&P 500
#feel free to change to whatever you would like
#for non-index tickers do not include the ^
getSymbols("^GSPC",from="1896-01-01",to=Sys.Date())

#get monthly close
sp500 <- to.monthly(GSPC)[,4]
#do this to get from mmm yyyy to yyyy-mm-dd
index(sp500) <- as.Date(index(sp500))
#get monthly returns from the monthly closes
#multiple ways of doing this
sp500.ret <- monthlyReturn(sp500)

#get 10-month Mebane Faber moving average
ma <- runMean(sp500,n=10)
#if close > 10-month moving average then 1, and 0 if below
signal <- ifelse(sp500 > ma,1,0)

#merge original return data with this new data
#multiply the 1-month lagged signal by the return
#if the signal is 0 then the return is 0, indicating out of the market
returnComp <- merge(sp500.ret,lag(signal,k=1)*sp500.ret)
returnComp[is.na(returnComp[,2]),2] <- 0
colnames(returnComp) <- c("SP500","SP500.Faber")

#jpeg(filename="performance summary.jpg",
#    quality=100, width=6, height=7.5, units="in", res=96)
charts.PerformanceSummary(returnComp, ylog=TRUE,
    colorset=c("lightcyan4","lightgoldenrod3"),
    main="S&P 500 and Mebane Faber Moving Average System
    Monthly Performance Since 1950")
#dev.off()

#saved this require for later
#since fPortfolio and f-anything does not play well
#with PerformanceAnalytics
require(fPortfolio)

#get frontier for the combination
#of the original price and the Faber mov avg system
frontier <- portfolioFrontier(as.timeSeries(returnComp))

#most of this comes directly from the fPortfolio demo
#very slight changes have been made
#we will run for the entire period
#jpeg(filename="frontier plot.jpg",
# quality=100,width=6, height = 6, units="in",res=96)
#unfortunately title cannot be changed easily
frontierPlot(frontier, pch=19, risk = "CVaR")
minvariancePoints(frontier,pch=19,col="red")
tangencyPoints(frontier,pch=19,col="blue")
tangencyLines(frontier,pch=19,col="blue")
equalWeightsPoints(frontier,pch=15,col="grey")
singleAssetPoints(frontier,pch=19,cex=1.5,col=c("lightcyan4","lightgoldenrod3"))
twoAssetsLines(frontier,lty=3,col="grey")
legend("topleft",legend=colnames(returnComp),pch=19,col=c("lightcyan4","lightgoldenrod3"))
#sharpeRatioLines(frontier,col="orange",lwd=2)
#dev.off()

#now let's see what this looks like on a rolling basis
#this from and to usage is not well documented in the fPortfolio
#documentation, so I hope it helps some people
#will use a 48-month rolling window and redo every 6 months
#window needs to be large since the Faber system will have lots of
#0 returns, which could be handled by adding t-bill returns while out
from <- rollingWindows(as.timeSeries(returnComp),period="48m",by="6m")$from
to <- rollingWindows(as.timeSeries(returnComp),period="48m",by="6m")$to

Spec <- portfolioSpec()
setTargetReturn(Spec) = mean(colMeans(as.timeSeries(returnComp)))
Constraints = "LongOnly"
#using Tangency but can also do Cml with rollingCmlPortfolio
#or rollingMinvariancePortfolio
rollTan <- rollingTangencyPortfolio(as.timeSeries(returnComp),Spec,Constraints,
    from=from,to=to)
#sapply works very nicely with the lists used in fPortfolio
#get weights from each of the rolling periods
tanweights <- sapply(rollTan,getWeights)
rownames(tanweights) <- colnames(returnComp)

#jpeg(filename="rolling weight plot.jpg",
#    quality=100, width=6, height=6, units="in", res=96)
barplot(tanweights,col=c("lightcyan4","lightgoldenrod4"),
    legend.text=TRUE,names.arg=format(from,"%b %y"),
    cex.names=0.7)
#dev.off()

#do not know a slick way to do this (see the sketch at the end of this post's code)
#repeat the weights 6 times for the 6-month rebalancing interval
for (i in 1:NROW(t(tanweights))) {
  for (j in 1:6) {
    if (i==1 & j==1) {
      tanweights.xts <- data.frame(t(tanweights[,i]))
    } else {
      #check to make sure we do not exceed the number of rows
      #in the original return series less the 48 used
      #for the initialization
      if (NROW(tanweights.xts) <= NROW(returnComp)-48)
        tanweights.xts <- rbind(tanweights.xts,
            data.frame(t(tanweights[,i])))
    }
  }
}
tanweights.xts <- xts(tanweights.xts,
    order.by=index(returnComp)[48:NROW(returnComp)])

tanreturns <- lag(tanweights.xts,k=1)*returnComp[49:NROW(returnComp),]
returnComp2 <- merge(returnComp,tanreturns[,1]+tanreturns[,2])
colnames(returnComp2) <- c(colnames(returnComp)[1:2],"Cml")

#jpeg(filename="risk return.jpg",
#    quality=100, width=6, height=6, units="in", res=96)
chart.RiskReturnScatter(returnComp2)
#dev.off()
#since fPortfolio gives error on charts.PerformanceSummary
#assemble quick one-pager
#jpeg(filename="perf all.jpg",
# quality=100,width=6, height = 7, units="in",res=96)
layout(matrix(c(1, 2)), height = c(2.5,1.5), width = 1)
par(mar = c(1, 4, 4, 2))
chart.CumReturns(returnComp2,xaxis=FALSE,ylab="Cumulative Return",
    colorset = c("lightcyan4","lightgoldenrod4","darkolivegreen3"),
    main="SP500 with Faber MA and CML Combo",legend.loc="topleft")
par(mar = c(5, 4, 0, 2))
chart.Drawdown(returnComp2,main="",ylab="Drawdown",
    colorset=c("lightcyan4","lightgoldenrod4","darkolivegreen3"))
#dev.off()
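A possibly slicker alternative to the repeat-the-weights loop above (an untested sketch using the tanweights, to, and returnComp objects created earlier): place each weight vector at its rebalance date and let na.locf carry it forward.

#sketch: weights at each rebalance date, carried forward to the monthly index
wts <- xts(t(tanweights), order.by=as.Date(to))
wts.monthly <- na.locf(merge(wts, xts(, order.by=index(returnComp))))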
