Friday, September 30, 2011

Difficult Month for One of My Best Ideas

THIS IS NOT INVESTMENT ADVICE.  MY IDEAS PROBABLY WILL LOSE YOU MONEY, AND I WILL NOT LET YOU KNOW WHEN I CHANGE MY MIND.

Bloomberg’s article “Asian Currencies Set for Worst Month Since 1997 Crisis Caused IMF Bailouts” demonstrates why, despite all my bearishness, I did not make much money this month.  Just as a reminder for the not-so-faithful readers, http://timelyportfolio.blogspot.com/2011/05/russell-napier-asip-in-ft-says-emerging.html shows why I like long emerging market stocks and short Russell 2000.  However, this trade does not work so well when the currencies get clobbered.  I understand the clobbering in 1997, when Soros and others had control of the Asian currencies, but 15 years and $10 trillion since that crisis, the Asian central banks clearly have control of their currencies if they choose to exert that influence.  Billions in outflows are meaningless compared to the trillions sitting in foreign currency reserves (see my post Join the Reserves), held primarily in US dollars.

[chart: currencies vs. the US dollar, 1995-2011]

R code:

require(quantmod)

#get asian currency data from the FED FRED data series
getSymbols("DEXKOUS",src="FRED") #load Korea
getSymbols("DEXMAUS",src="FRED") #load Malaysia
getSymbols("DEXSIUS",src="FRED") #load Singapore
getSymbols("DEXTAUS",src="FRED") #load Taiwan
getSymbols("DEXCHUS",src="FRED") #load China
getSymbols("DEXJPUS",src="FRED") #load Japan
getSymbols("DEXTHUS",src="FRED") #load Thailand
getSymbols("DEXBZUS",src="FRED") #load Brazil
getSymbols("DEXMXUS",src="FRED") #load Mexico
getSymbols("DEXINUS",src="FRED") #load India
getSymbols("DTWEXO",src="FRED") #load US Dollar Other Trading Partners
getSymbols("DTWEXB",src="FRED") #load US Dollar Broad

par(mfrow=c(4, 3)) #provides 4 rows and 3 columns for charts
plot(1/coredata(DEXKOUS["1995::2011"]),type="l",ylab="Korea")
plot(1/coredata(DEXMAUS["1995::2011"]),type="l",ylab="Malaysia")
plot(1/coredata(DEXSIUS["1995::2011"]),type="l",ylab="Singapore")
plot(1/coredata(DEXTAUS["1995::2011"]),type="l",ylab="Taiwan")
plot(1/coredata(DEXCHUS["1995::2011"]),type="l",ylab="China")
plot(1/coredata(DEXJPUS["1995::2011"]),type="l",ylab="Japan")
plot(1/coredata(DEXTHUS["1995::2011"]),type="l",ylab="Thailand")
plot(1/coredata(DEXBZUS["1995::2011"]),type="l",ylab="Brazil")
plot(1/coredata(DEXMXUS["1995::2011"]),type="l",ylab="Mexico")
plot(1/coredata(DEXINUS["1995::2011"]),type="l",ylab="India")
plot(coredata(DTWEXO["1995::2011"]),type="l",ylab="US Dollar Other")
plot(coredata(DTWEXB["1995::2011"]),type="l",ylab="US Dollar Broad")
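
To put a number on the month, a quick sketch like the following computes each currency’s September 2011 move against the dollar using the series loaded above. The sept.move helper is my own illustration, not part of any package; the FRED DEX* series quote local currency per US dollar, so I invert them, and the subset of currencies shown is an arbitrary choice.

#rough September 2011 % change vs the US dollar
#DEX* series are quoted as local currency per USD, so invert
#to express each currency's own move
sept.move <- function(series) {
x <- 1/na.omit(series)["2011-08-31::2011-09-30"]
round(100 * (as.numeric(last(x))/as.numeric(first(x)) - 1), 2)
}
sapply(list(Korea = DEXKOUS, Malaysia = DEXMAUS, Singapore = DEXSIUS,
Taiwan = DEXTAUS, India = DEXINUS), sept.move)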

ttrTests This is a Test--Test 3: Data Snoopy

THIS IS NOT INVESTMENT ADVICE.  IT IS JUST AN EXAMPLE AND WILL LIKELY LOSE LOTS OF MONEY IF YOU PURSUE WHAT IS DISCUSSED.  READER IS RESPONSIBLE FOR THEIR OWN GAINS OR LOSSES.  IF YOU ARE AN UNLIKELY WINNER, I WOULD LOVE TO HEAR YOUR STORY.

When we are deciding on a quantitative system to guide our investments, we have multiple choices all fraught with potential luck:  investment choice, system type, system, timeframe, system parameter or parameters, and money management.  To really gain some confidence in the persistence of your choices, I believe each choice should be as rigorously tested as possible.  http://math.uic.edu/~dstjohn/thesis.pdf, http://ageconsearch.umn.edu/bitstream/19039/1/cp05pa01.pdf, and http://www.ssc.wisc.edu/~bhansen/718/White2000.pdf all offer very good discussion of some of the tests.  My posts ttrTests This is a Test--Test 1 and Test 2, ttrTests: Its Great Thesis and Incredible Potential, and ttrTests Experimentation offer a simple application of the much-less-than-simple statistical tests.

I will apply the dataSnoop test with Hansen’s test for Superior Predictive Ability (SPA) to check for luck in the parameter choice of my simple CUD indicator.

I hate to do this, but I cannot think of an easy way to describe the output of the dataSnoop test, so I hope you will read http://math.uic.edu/~dstjohn/thesis.pdf pages 51-57.  Here is the paper’s description of the tests when applied to a MACD system.

[excerpt from the thesis describing the bootstrap data-snooping tests applied to a MACD system]

http://math.uic.edu/~dstjohn/thesis.pdf page 55
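
In spirit, the test asks a simple question: if no parameter choice really earned anything, how often would the best of them look as good as mine did by luck alone? Here is a toy sketch of that logic with made-up numbers standing in for per-parameter excess returns; it illustrates the idea only and is not the ttrTests implementation, which bootstraps the actual price series.

#toy illustration of the data-snooping/reality-check logic
set.seed(1)
n.params <- 30   #parameter choices tested, like nSteps below
n.boot <- 100    #bootstrap samples, like bSamples below
#pretend excess returns per parameter under the null of no skill
observed <- rnorm(n.params)
best <- max(observed)
#distribution of the "best" statistic when only luck is at work
boot.best <- replicate(n.boot, max(rnorm(n.params)))
#p-value: how often luck alone beats my observed best
mean(boot.best >= best)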

The p-values for my CUD indicator are not so good; none is small enough to rule out luck as the source of the best parameter’s performance.

CUD: Observed P-values for Means 'l', 'c', and 'u' respectively: 0.4 0.4 0.97

The observed values from the test are plotted below.

[chart: ttrTests dataSnoop V1, V2, and V3 on CUD]

R code (click to download from Google Docs):

 

#let's define our silly countupdown function
#as a sample of a custom ttr rule
CUD <- function(x,params=50,...) {
#CUD takes the n-period sum of 1 (up days) and -1 (down days)
temp <- ifelse(runSum(ifelse(ROC(x,1,type="discrete") > 0,1,-1),params)>=0,1,0)
#replace NA with 0 at beginning of period
temp[is.na(temp)] <- 0
temp
}

require(ttrTests)
require(quantmod)
require(lattice)
require(reshape2)
require(PerformanceAnalytics)

#the defaults function is overridden by ggplot2 and plyr if loaded
#and will cause problems if you want to use ttrTests concurrently

tckrs <- c("GSPC","RUT","N225","GDAXI","DJUBS")

#use 1 for GSPC but adjust however you would like
i=1
getSymbols(paste("^",tckrs[i],sep=""),from="1896-01-01",to=Sys.Date())
test_price <- as.vector(get(tckrs[i])[,4])

#run dataSnoop to test for luck
#by checking all parameters across multiple bootstrap samples
#this takes a long time, so for experimenting change bSamples to
#something smaller than 100
#if you are planning to use this for commercial purposes
#make sure you see the warning in the documentation
#on Dr. Halbert White's patent and his paper
#http://www.ssc.wisc.edu/~bhansen/718/White2000.pdf

#don't get me started on patents of this sort

#crit can be "sharpe", "return", or "adjust"
#will choose "sharpe" but feel free to try them all
snoop <- dataSnoop(x=test_price, ttr = CUD, start = 20, nSteps = 30, stepSize = 10,
bSamples=100, crit="sharpe",
restrict = FALSE, burn = 0, short = FALSE, condition = NULL,
silent = TRUE, TC = 0.001, loud = TRUE, alpha = 0.025,
begin = 1, percent = 1, file = "", benchmark = "hold")

#make output slightly more usable with some naming
#believe I got this right
names(snoop) <- c("details","V1","V2",
"V3","p1.for.l","p2.for.c","p3.for.u")   #jpeg(filename="dataSnoop values.jpg",
quality=100,width=6.25, height = 6.25, units="in",res=96)
plot(snoop$V3,
type="l", col=2,
main="ttrTests dataSnoop V1,V2,and V3 on CUD",
xlab="Bootstrap Sample", ylab="Values")
points(snoop$V2, type="l", col=3)
points(snoop$V1, col=4)
legend("topright",legend=c("V1","V2","V3"),col=c(4,3,2),pch=19,lty=1)
#dev.off()

Created by Pretty R at inside-R.org

Wednesday, September 28, 2011

ttrTests This is a Test--Test 1 and Test 2

Just to remind everyone, THIS IS NOT INVESTMENT ADVICE AND ANY ACTIONS TAKEN BASED ON THIS DISCUSSION WILL PROBABLY RESULT IN SIGNIFICANT LOSSES.

We had fun with the ttrTests package in two previous posts ttrTests: Its Great Thesis and Incredible Potential and ttrTests Experimentation.  Let’s actually run some real tests on the basic CUD indicator introduced in A Quantstrat to Build On Part 6.  We will start by using paramStats to test multiple parameters.

[chart: ttrTests Excess Return by Parameter]

After we find the best parameter, we will use returnStats to compare the actual returns of the CUD indicator at its best parameter against a buy-and-hold approach.

[chart: Analysis of Returns from ttrTests]

Finally, we will try to apply some statistical rigor to the process by using 100 bootstrapped samples to test the best parameter on out-of-sample data. As always, please let me know your thoughts.
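
For intuition on the resampling step, nullModel with model="stationaryBootstrap" draws blocks of random length from the original series so that serial dependence survives the shuffle. Below is my own minimal sketch of the Politis-Romano stationary bootstrap idea, not the package’s code; I am assuming userParams=4 in the call further down sets the expected block length.

#minimal sketch of a stationary bootstrap (Politis-Romano)
#blocks of geometric random length preserve serial dependence
stationary.boot <- function(x, mean.block = 4) {
n <- length(x)
idx <- integer(n)
idx[1] <- sample.int(n, 1)
for (t in 2:n) {
#with probability 1/mean.block start a new block,
#otherwise continue from the previous index (wrapping at n)
idx[t] <- if (runif(1) < 1/mean.block) sample.int(n, 1) else
idx[t - 1] %% n + 1
}
x[idx]
}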

[chart: Excess Return for each Bootstrapped Sample]
[chart: Excess Adjusted Return for each Bootstrapped Sample]
[chart: Excess Sharpe Ratio for each Bootstrapped Sample]

R code (click to download from Google Docs):

#let's define our silly countupdown function
#as a sample of a custom ttr rule
CUD <- function(x,params=50,...) {
#CUD takes the n-period sum of 1 (up days) and -1 (down days)
temp <- ifelse(runSum(ifelse(ROC(x,1,type="discrete") > 0,1,-1),params)>=0,1,0)
#replace NA with 0 at beginning of period
temp[is.na(temp)] <- 0
temp
}

require(ttrTests)
require(quantmod)
require(lattice)
require(reshape2)
require(PerformanceAnalytics)

#the defaults function is overridden by ggplot2 and plyr if loaded
#and will cause problems if you want to use ttrTests concurrently

tckrs <- c("GSPC","RUT","N225","GDAXI","DJUBS")

#use 1 for GSPC but adjust however you would like
i=1
getSymbols(paste("^",tckrs[i],sep=""),from="1896-01-01",to=Sys.Date())
test_price <- as.vector(get(tckrs[i])[,4])
#do parameter tests but plot=FALSE
#we will plot later
param_results <- paramStats(x=test_price, ttr = CUD, start = 20, nSteps = 30, stepSize = 10,
restrict = FALSE, burn = 0, short = FALSE, condition = NULL,
silent = TRUE, TC = 0.001, loud = TRUE, plot = FALSE, alpha = 0.025,
begin = 1, percent = 1, file = "", benchmark = "hold")
#make output slightly more usable with some naming
#believe I got this right
names(param_results) <- c("excess.return","z.score","adj.excess.return",
"Sharpe.ratio","best","best.repeat","best.adjusted",
paste("tested.parameters",c(1:(NROW(param_results)-7)),sep=""))
#jpeg(filename="excess by parameter.jpg",
quality=100,width=6.25, height = 6.25, units="in",res=96)
plot(param_results$excess.return~param_results$tested.parameters1,
type="l", col="darkgray",
main="ttrTests Excess Return by Parameter")
abline(v=param_results$best, col="indianred3")
#dev.off()

#let's use the returnStats function to get
#more complete return and distribution info on the best parameter
stats <- returnStats(x=test_price, ttr=CUD, params=param_results$best,
short=FALSE, TC=0.001, benchmark="hold")
#make output slightly more usable with some naming
#believe I got this right
names(stats) <- c("benchmark.stats","ttr.stats","adj.stats.and.periods",
"excess.stats","long.stats","short.stats","neutral.stats")
#jpeg(filename="analysis of returns.jpg",
quality=100,width=6.25, height = 6.25, units="in",res=96)
barplot(c(stats$long.stats[1],stats$short.stats[1],
stats$neutral.stats[1],stats$benchmark.stats[1]),
col=c("darkolivegreen3","indianred3","steelblue3","gray70"),
names.arg=c("long","short","neutral","benchmark"),
main="Analysis of Returns from ttrTests")
#dev.off()

#now let's test the best parameter with nullModel
#this tests the parameter with bootstrap resampling
#for significance across one of three criteria
#specified by crit = "sharpe", "return" (excess return),
#or "adjust" (excess adjusted for trading costs)
nmodel <- nullModel(x=test_price, model="stationaryBootstrap", userParams=4, bSamples=100,
ttr=CUD, params=param_results$best, short=FALSE, TC=0.001, crit="sharpe",
benchmark="hold")
#make output slightly more usable with some naming
#believe I got this right
#this is different from documentation but code seems to
#fit with this
names(nmodel) <- c("excess.return","excess.Sharpe.ratio","adj.excess.return","p.value")
#jpeg(filename="excess return by bootstrap.jpg",
quality=100,width=6.25, height = 6.25, units="in",res=96)
plot(nmodel$excess.return,type="l",xlab="Bootstrap Sample",
main="Excess Return for each Bootstrapped Sample")
#add line for excess return from actual ttr performance
abline(h=stats$excess.stats[1], col="indianred3")
#dev.off()   #jpeg(filename="excess adjusted return by bootstrap.jpg",
quality=100,width=6.25, height = 6.25, units="in",res=96)
plot(nmodel$adj.excess.return,type="l",col="khaki4",
main="Excess Adjusted Return for each Bootstrapped Sample")
#add line for excess adjusted from actual ttr performance
abline(h=stats$adj.stats.and.periods[1], col="indianred3")
#dev.off()   #jpeg(filename="excess sharpe ratio by bootstrap.jpg",
quality=100,width=6.25, height = 6.25, units="in",res=96)
plot(nmodel$excess.Sharpe.ratio,type="l",lwd=2,xlab="BootstrapSample",col="cadetblue4",
main="Excess Sharpe Ratio for each Bootstrapped Sample")
#add line for excess sharpe from actual ttr performance
abline(h=stats$ttr.stats[3]-stats$benchmark.stats[3], col="indianred3")
#dev.off()

Created by Pretty R at inside-R.org
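
As an eyeball check to go with the formal tests, the pieces above are enough to chart the CUD strategy at its best parameter against buy-and-hold. This is a sketch that reuses objects from the script above (test_price’s source series, CUD, and param_results); I lag the signal one day to avoid look-ahead, and transaction costs are ignored here even though the tests include them.

#quick equity-curve comparison reusing objects from the script above
prices <- get(tckrs[i])[,4]
rets <- ROC(prices, 1, type = "discrete")
#lag the signal so today's position comes from yesterday's indicator
signal <- lag(CUD(prices, params = param_results$best), 1)
perf <- merge(rets * signal, rets)
colnames(perf) <- c("CUD", "BuyHold")
charts.PerformanceSummary(perf, main = "CUD versus Buy-and-Hold")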

Monday, September 26, 2011

ttrTests: Its Great Thesis and Incredible Potential

I stumbled on the ttrTests R package as mentioned in my post ttrTests Experimentation.  I did not recognize its potential until I spent much more time absorbing the basis of the package—David St. John’s thesis Technical Analysis Based on Moving Average Convergence and Divergence.  Since the title specifically addresses MACD, which I have had little luck implementing, I dismissed much of the content.  However, the power of the thesis extends well beyond MACD to all systematic methods and describes tests to ensure luck is not the source of a system’s returns.  In the package documentation, there is a summary of the 5 main tests:

“Contains five major tests supported by other functions: Did the TTR strategy outperform a benchmark in the past data? Is the excess return significant, using bootstrapping to construct a confidence interval? Is the excess return explained by data snooping? Is the ’good’ choice of parameters robust across sub-samples? Is this robustness significant, using bootstrapping to construct a confidence interval?”

The tests address luck, data snooping, trading costs, and parameter persistence across both degrees of freedom and subperiods.  I look forward to documenting its power on my blog and also potentially working with the author to include it in other R packages such as quantstrat.

Since I am running out of time, I first want to apply each of the tests to MACD in the same style as the package documentation and the thesis paper, but this time on an xts DJI object gathered through getSymbols rather than the spData provided with the package.

The output from the tests is very cumbersome, but I hope this set of examples will help provide a flavor for the package and its powerful tests.  In my next couple of posts, I will run each test in much further detail on my basic custom CUD indicator and try to get the cumbersome output in a far more digestible and graphical format.

R code (click to download from Google Docs):

require(ttrTests)
require(quantmod)

#get Dow Jones Industrials from Yahoo! Finance
getSymbols("^DJI",from="1896-01-01",to=Sys.Date())
#convert closing price to vector format which works best with ttrTests
DJI.vector <- as.vector(DJI[,4])

#using the defaults as mentioned in the thesis paper on MACD
#show each of the tests in order of their mention
#quotes are from ttrTests package documentation

#"compares the performance of the TTR with some benchmark"
returnStats(DJI.vector)

#"constructs a confidence interval for this performance"
#"and gives a p-value for the excess return observed in (1)."
nullModel(DJI.vector)

#"constructs a p-value for the 'best' choice"
#"of parameters within a given domain"
dataSnoop(DJI.vector,bSamples=3,test="RC")
dataSnoop(DJI.vector,bSamples=3,test="SPA")

#"asks whether or not good choices of parameters"
#"were robust across different time periods"
#chose 8 since the data starts in 1928, so each subperiod approximates a decade
subperiods(DJI.vector, periods=8)

#and my favorite of all
#"tests if the persistence measure from subperiods()"
#"is statistically significant"
#this takes the longest (about 10 minutes on my i7 laptop)
paramPersist(DJI.vector)

Created by Pretty R at inside-R.org

Friday, September 16, 2011

Performance with ggplot2

Now after Reporting Good Enough to Share, let’s use ggplot2 and PerformanceAnalytics to turn this

[image: performance report from Reporting Good Enough to Share]

into this

[chart: ggplot2 version with cumulative returns, annualized returns, and risk measures]

I have been notified that the colors aren’t great.  How does everyone like this?
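
For anyone who wants to experiment before rerunning the whole report, a ColorBrewer palette is an easy alternative to my manual colors. This little standalone preview uses made-up data purely to audition a palette; to apply it, swap scale_colour_brewer (or scale_fill_brewer for the bar charts) into the real charts below in place of the manual scales.

require(ggplot2)
#toy series purely to preview a palette before touching the real report
set.seed(1)
preview <- data.frame(Date = rep(1:24, 3),
Growth = c(cumprod(1 + rnorm(24, 0.005, 0.02)),
cumprod(1 + rnorm(24, 0.004, 0.01)),
cumprod(1 + rnorm(24, 0.003, 0.04))),
Portfolio = rep(c("Client", "BarclaysAgg", "SP500"), each = 24))
ggplot(preview, aes(x = Date, y = Growth, colour = Portfolio)) +
geom_line(lwd = 1) +
scale_colour_brewer(palette = "Dark2") +
theme_bw()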

R code (click to download):

require(quantmod)
require(ggplot2)
require(PerformanceAnalytics)
require(reshape2) #for melt
require(grid) #for grid.newpage and viewport

clientPerf <- read.csv("clientperf.csv",stringsAsFactors=FALSE)
clientPerf[,2:NCOL(clientPerf)] <- lapply(clientPerf[,2:NCOL(clientPerf)],as.numeric)
clientPerf <- as.xts(clientPerf[,2:NCOL(clientPerf)]/100,
order.by = as.Date(clientPerf[,1],format="%m-%d-%y"))
colnames(clientPerf) <- c("Client","BarclaysAgg","SP500")

returns.cumul <- xts(apply((1+clientPerf),MARGIN=2,FUN=cumprod),order.by=index(clientPerf))
returns.cumul.Melt <- melt(as.data.frame(cbind(index(returns.cumul),
coredata(returns.cumul))),id.vars=1)
colnames(returns.cumul.Melt) <- c("Date","Portfolio","Growth")
a<-ggplot(returns.cumul.Melt,stat="identity",
aes(x=Date,y=Growth,colour=Portfolio)) +
geom_line(lwd=1) +
scale_x_date() +
scale_colour_manual(values=c("cadetblue","darkolivegreen3","gray70","bisque3","purple")) +
theme_bw() +
labs(x = "", y = "") +
opts(title = "Cumulative Returns",plot.title = theme_text(size = 20, hjust = 0))   returnTable <- table.TrailingPeriods(clientPerf,
periods=c(12,24,36,NROW(clientPerf)),
FUNCS="Return.annualized")
rownames(returnTable) <- c(paste(c(1:3)," Year",sep=""),"Since Inception")
returnMelt <- melt(cbind(rownames(returnTable),returnTable*100))
colnames(returnMelt)<-c("Period","Portfolio","Value")
b<-ggplot(returnMelt, stat="identity",
aes(x=Period,y=Value,fill=Portfolio)) +
geom_bar(position="dodge") +
scale_fill_manual(values=c("cadetblue","darkolivegreen3","gray70")) +
theme_bw() +
labs(x = "", y = "") +
opts(title = "Returns (annualized)",plot.title = theme_text(size = 20, hjust = 0))     downsideTable<-table.DownsideRisk(clientPerf)[c(1,3,5,7,8),]
downsideMelt<-melt(cbind(rownames(downsideTable),
downsideTable))
colnames(downsideMelt)<-c("Statistic","Portfolio","Value")
c<-ggplot(downsideMelt, stat="identity",
aes(x=Statistic,y=Value,fill=Portfolio)) +
geom_bar(position="dodge") + coord_flip() +
scale_fill_manual(values=c("cadetblue","darkolivegreen3","gray70")) +
theme_bw() +
labs(x = "", y = "") +
opts(title = "Risk Measures",plot.title = theme_text(size = 20, hjust = 0))
#geom_hline(aes(y = 0))
#opts(axis.line = theme_segment(colour = "red"))
#opts(panel.grid.major = theme_line(linetype = "dotted"))

#jpeg(filename="performance ggplot.jpg",quality=100,width=6.5, height = 8, units="in",res=96)
#pdf("perf ggplot.pdf", width = 8.5, height = 11)

grid.newpage()
pushViewport(viewport(layout = grid.layout(3, 1)))

vplayout <- function(x, y)
viewport(layout.pos.row = x, layout.pos.col = y)
print(a, vp = vplayout(1, 1))
print(b, vp = vplayout(2, 1))
print(c, vp = vplayout(3, 1))
#dev.off()

Created by Pretty R at inside-R.org

Thursday, September 15, 2011

Reporting Good Enough to Share

Sorry to all my faithful readers for my absence recently. I started a new job at a new firm, so my blogging has moved down the priority list but only temporarily. I am still committed to documenting my thoughts, especially finance and R thoughts as discussed in Why Talk My Book? .

So far, I have used R primarily for my own use without the intention of sharing with clients, who generally are not R/stats/finance geeks. However, R is so powerful that I would like to leverage it for more general communication.

Our shop uses Advent Axys, which I believe is still the industry standard despite its extremely poor and inflexible reporting capabilities. At one point in a previous life, I believed some of the inflexibility could be overcome through its replang scripting, and I endured a very painful journey documenting and experimenting at http://www.axysreporting.com. I now believe the best solution is to use Axys only to export its information to better, more capable reporting engines.

In this example, I amended the Axys 631 report to export performance to a .csv file. I'm concerned that sharing my amended report might provoke the very aggressive IP attorneys at Advent, so I will try to write one from scratch at some point to share while still avoiding their ire.  Please let me know if you have already done this to save me some time and effort.  Click on the screenshot to get the .csv file from Google Docs.

[screenshot: sample .csv performance export]
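
For readers without Axys who still want to run the code, a throwaway file of the same shape can be faked in R. The layout below (a date column in %m-%d-%y format followed by monthly returns in percent) is inferred from how the script reads the file; the column names match the script and the numbers are random.

#fake a clientperf.csv in the layout the script below expects:
#dates as %m-%d-%y, then monthly returns expressed in percent
set.seed(1)
dates <- format(seq(as.Date("2008-01-01"), by = "month", length.out = 42),
"%m-%d-%y")
toy <- data.frame(Date = dates,
Client = rnorm(42, 0.5, 2),
BarclaysAgg = rnorm(42, 0.4, 1),
SP500 = rnorm(42, 0.3, 4))
write.csv(toy, "clientperf.csv", row.names = FALSE)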

For now, I will share a sample performance file generated from my 631 exporter and modify the fine PerformanceAnalytics charts.PerformanceSummary report.  In later posts I will add some ggplot2 charts.  I think the chart is now good enough to share with clients, but unfortunately, my code is nowhere near as clean or robust as I would like.  I combined all the code into one R script, and here is the result.  I would love input and thoughts.

The original charts.PerformanceSummary with no modifications.

[chart: the original charts.PerformanceSummary]

My new version

[chart: my new one-page version]

R code (click to download from Google Docs):

#this report uses the PerformanceAnalytics charts.PerformanceSummary
#as the base for a new one-pager
#99% of the code is from the PerformanceAnalytics package
#all credit should be given to the PerformanceAnalytics team
#all errors should be assigned to me
 
 
#this is almost entirely from the PerformanceAnalytics
#chart.CumReturns function
#I only change to label the endpoints
na.skip <- function (x, FUN=NULL, ...) # maybe add a trim capability?
{ # @author Brian Peterson
 
# DESCRIPTION:
 
# Time series data often contains NA's, either due to missing days,
# noncontiguous series, or merging multiple series,
#
# Some Calculations, such as return calculations, require data that
# looks like a vector, and needs the output of na.omit
#
# It is often convenient to apply these vector-like functions, but
# you still need to keep track of the structure of the original data.
 
# Inputs
# x the time series to apply FUN too
# FUN function to apply
# ... any additional parameters to FUN
 
# Outputs:
# An xts time series that has the same index and NA's as the data
# passed in, after applying FUN
 
nx <- na.omit(x)
fx <- FUN(nx, ... = ...)
if (is.vector(fx)) {
result <- .xts(fx, .index(x), .indexCLASS = indexClass(x))
}
else {
result <- merge(fx, .xts(, .index(x)))
}
return(result)
}
 
 
 
chart.CumReturnsX <-
function (R, wealth.index = FALSE, geometric = TRUE, legend.loc = NULL, colorset = (1:12), begin = c("first","axis"), ...)
{ # @author Peter Carl
 
# DESCRIPTION:
# Cumulates the returns given and draws a line graph of the results as
# a cumulative return or a "wealth index".
 
# Inputs:
# R: a matrix, data frame, or timeSeries of returns
# wealth.index: if true, shows the "value of $1", starting the cumulation
# of returns at 1 rather than zero
# legend.loc: use this to locate the legend, e.g., "topright"
# colorset: use the name of any of the palettes above
# method: "none"
 
# Outputs:
# A timeseries line chart of the cumulative return series
 
# FUNCTION:
 
# Transform input data to a matrix
begin = begin[1]
x = checkData(R)
 
# Get dimensions and labels
columns = ncol(x)
columnnames = colnames(x)
 
# Calculate the cumulative return
one = 0
if(!wealth.index)
one = 1
 
##find the longest column, calc cum returns and use it for starting values
 
if(begin == "first") {
length.column.one = length(x[,1])
# find the row number of the last NA in the first column
start.row = 1
start.index = 0
while(is.na(x[start.row,1])){
start.row = start.row + 1
}
x = x[start.row:length.column.one,]
if(geometric)
reference.index = na.skip(x[,1],FUN=function(x) {cumprod(1+x)})
else
reference.index = na.skip(x[,1],FUN=function(x) {cumsum(x)})
}
for(column in 1:columns) {
if(begin == "axis") {
start.index = FALSE
} else {
# find the row number of the last NA in the target column
start.row = 1
while(is.na(x[start.row,column])){
start.row = start.row + 1
}
start.index=ifelse(start.row > 1,TRUE,FALSE)
}
if(start.index){
# we need to "pin" the beginning of the shorter series to the (start date - 1 period)
# value of the reference index while preserving NA's in the shorter series
if(geometric)
z = na.skip(x[,column],FUN = function(x,index=reference.index[(start.row - 1)]) {rbind(index,1+x)})
else
z = na.skip(x[,column],FUN = function(x,index=reference.index[(start.row - 1)]) {rbind(1+index,1+x)})
} else {
z = 1+x[,column]
}
column.Return.cumulative = na.skip(z,FUN = function(x, one, geometric) {if(geometric) cumprod(x)-one else (1-one) + cumsum(x-1)},one=one, geometric=geometric)
if(column == 1)
Return.cumulative = column.Return.cumulative
else
Return.cumulative = merge(Return.cumulative,column.Return.cumulative)
}
if(columns == 1)
Return.cumulative = as.xts(Return.cumulative)
colnames(Return.cumulative) = columnnames
 
# Chart the cumulative returns series
chart.TimeSeries(Return.cumulative, col = colorset, legend.loc = legend.loc, ...)
for (i in 1:NCOL(Return.cumulative)) {
text(x=NROW(Return.cumulative),
y=coredata(Return.cumulative)[NROW(Return.cumulative),i],
sprintf("%1.0f%%",round(coredata(Return.cumulative)[NROW(Return.cumulative),i],2)*100),
adj = c(0, 0.5),lwd=0.5,col=colorset[i])
}
}
 
#this function shows bar plot side-by-side comparisons for rolling annualized
#returns
#please proceed with caution as function is not robust
#and does not perform adequate error checking
chart.SideBar <- function (w, auto.grid = TRUE, xaxis = TRUE, yaxis = TRUE, yaxis.right = FALSE,
type = "l", lty = 1, lwd = 2, main = NULL, ylab = "Annualized Returns", xlab = NULL,
xlim = NULL, ylim = NULL, element.color = "darkgray", event.lines = NULL,
event.labels = NULL, period.areas = NULL, event.color = "darkgray",
period.color = "aliceblue", colorset = (1:12), pch = (1:12),
legend.loc = NULL, cex.axis = 0.8, cex.legend = 0.8,
cex.lab = 1, cex.labels = 0.8, cex.main = 1, major.ticks = "auto",
minor.ticks = TRUE, grid.color = "lightgray", grid.lty = "dotted",
xaxis.labels = NULL, ...)
{
barplot(w, beside=TRUE, col = colorset[1:NROW(w)],
xlab = xlab, ylab = ylab, axes = FALSE,
ylim = c(min(0,min(w)),max(w)+0.05),...)
 
if (auto.grid) {
abline(v=0, col = element.color, lty = grid.lty)
grid(NA, NULL, col = grid.color)
}
abline(h = 0, col = grid.color)
 
axis(2, cex.axis = cex.axis, col = element.color)
title(ylab = ylab, cex = cex.lab)
 
 
 
if (!is.null(legend.loc)) {
legend(legend.loc, inset = 0.02, text.col = colorset,
col = colorset, cex = cex.legend, border.col = grid.color,
lty = lty, lwd = 2, bg = "white", legend = rownames(w))
}
 
box(col = element.color)
}
 
 
charts.PerformanceSummaryX <-
function (R, Rf = 0, main = NULL, submain=NULL, geometric=TRUE, methods = "none", width = 0, event.labels = NULL, ylog = FALSE, wealth.index = FALSE, gap = 12, begin=c("first","axis"), legend.loc="bottomright", p=0.95, maxdraw = TRUE,...)
{ # @author Peter Carl
 
# DESCRIPTION:
# A wrapper to create a wealth index chart, bars for monthly performance,
# and underwater chart for drawdown.
 
# Inputs:
# R: a matrix, data frame, or timeSeries, usually a set of monthly returns.
# The first column is assumed to be the returns of interest, the next
# columns are assumed to be relevant benchmarks for comparison.
# Rf: this is the risk free rate. Remember to set this to the same
# periodicity as the data being passed in.
# method: Used to select the risk parameter to use in the chart.BarVaR. May
# be any of:
# modVaR - uses CF modified VaR
# VaR - uses traditional Value at Risk
# StdDev - monthly standard deviation of trailing 12 month returns
#
 
# Outputs:
# A stack of three related timeseries line charts
 
# FUNCTION:
begin = begin[1]
x = checkData(R)
colnames = colnames(x)
ncols = ncol(x)
 
# This repeats a bit of code from chart.CumReturns, but it's intended
# to align the start dates of all three charts. Basically, it assumes
# that the first column in the list is the column of interest, and
# starts everything from that start date
 
length.column.one = length(x[,1])
# find the row number of the last NA in the first column
start.row = 1
start.index = 0
while(is.na(x[start.row,1])){
start.row = start.row + 1
}
x = x[start.row:length.column.one,]
 
if(ncols > 1)
legend.loc = legend.loc
else
legend.loc = NULL
 
if(is.null(main))
main = paste(colnames[1],"Performance", sep=" ")
 
if(ylog)
wealth.index = TRUE
 
op <- par(no.readonly=TRUE)
 
# First, we lay out the graphic as a three row, one column format
# plot.new()
layout(matrix(c(1,2,3)),height=c(2,1,1.3),width=1)
# to see the resulting layout, use layout.show(3)
 
# mar: a numerical vector of the form c(bottom, left, top, right) which
# gives the number of lines of margin to be specified on the four sides
# of the plot. The default is c(5, 4, 4, 2) + 0.1
 
# The first row is the cumulative returns line plot
par(oma=c(2,2,4,2))
par(mar=c(1,6,8,4))
chart.CumReturnsX(x, main = "", xaxis = FALSE, legend.loc = legend.loc, cex.legend = 1, event.labels = event.labels, ylog = ylog, wealth.index = wealth.index, begin = begin, geometric = geometric, ylab="Cumulative Return",...)
 
# title(main=main, sub=submain, cex.main=2, cex.sub=1.5, adj=0, outer =FALSE)
mtext(text=main, line = -2, outer = TRUE, adj = 0.1, cex=2)
mtext(text=submain, line = -4, outer = TRUE, adj = 0.075, cex=1.5)
 
 
 
# The second row is the monthly returns bar plot
# par(mar=c(1,4,0,2))
 
freq = periodicity(x)
 
switch(freq$scale,
seconds = { date.label = "Second"},
minute = { date.label = "Minute"},
hourly = {date.label = "Hourly"},
daily = {date.label = "Daily"},
weekly = {date.label = "Weekly"},
monthly = {date.label = "Monthly"},
quarterly = {date.label = "Quarterly"},
yearly = {date.label = "Annual"}
)
 
# chart.BarVaR(x, main = "", xaxis = FALSE, width = width, ylab = paste(date.label,"Return"), methods = methods, event.labels = NULL, ylog=FALSE, gap = gap, p=p, ...)
 
# The third row is the underwater plot
par(mar=c(3,6,0,4))
chart.Drawdown(x, geometric = geometric, main = "", xlab=NA, ylab = "Drawdown", event.labels = NULL, ylog=FALSE, ...)
if (maxdraw) {
abline(h=-0.1,col="tomato3",lwd=3,lty="dashed")
text(x=2,y=-0.1,"maximum acceptable drawdown",adj = c(0, 0.5))
}
 
# If we wanted to add a fourth row with the table of monthly returns
# par(mar=c(0,0,0,0))
# textplot(table.Returns(as.matrix(R)),cex=.7,cmar=1.5,rmar=0.5,halign="center", valign="center")
 
par(mar=c(6,6,0,4))
 
returnTable <- table.TrailingPeriods(x,
periods=c(12,24,36,NROW(x)),
FUNCS="Return.annualized")
rownames(returnTable) <- c(paste(c(1:3)," Year",sep=""),"Since Inception")[1:NROW(returnTable)]
 
chart.SideBar(t(as.matrix(returnTable)),...)
 
par(op)
 
}
 
 
 
 
#now let's use the amended report to look at performance
require(quantmod)
require(PerformanceAnalytics)
 
clientPerf <- read.csv("clientperf.csv",stringsAsFactors=FALSE)
clientPerf[,2:NCOL(clientPerf)] <- lapply(clientPerf[,2:NCOL(clientPerf)],as.numeric)
clientPerf <- as.xts(clientPerf[,2:NCOL(clientPerf)]/100,
order.by = as.Date(clientPerf[,1],format="%m-%d-%y"))
colnames(clientPerf) <- c("Client","BarclaysAgg","SP500")
 
jpeg(filename="performance summary.jpg",quality=100,width=6, height = 7, units="in",res=96)
charts.PerformanceSummary(clientPerf)
dev.off()
 
jpeg(filename="performance one-pager.jpg",quality=100,width=6, height = 7, units="in",res=96)
charts.PerformanceSummaryX(clientPerf,main="Client Performance",
submain=paste("Since Inception - ",format(seq(index(clientPerf)[1], length=2, by="-1 months")[2],"%B %Y"),sep=""),legend.loc="bottomright",maxdraw=FALSE,
colorset=c("cadetblue4","darkgray","bisque3"),lwd=c(3,2,2))
dev.off()

Created by Pretty R at inside-R.org