Thursday, February 28, 2013

Shading and Points with xtsExtra plot.xts

For some reason, I feel like I have much better control with the plot.xts function from the xtsExtra package described here than with some of the other more refined R graphical packages. Maybe it is just my simple mind, but recently I wanted to shade holding periods and add points for buy and sell dates. With plot.xts from xtsExtra I was able to quickly and easily generate the following plot. I did have to slightly amend the original plot.xts function, as seen here, but the whole process felt more natural and much less of a struggle.

[Figure: US$/Japanese Yen with buy and sell points and holding periods shaded]

I also enjoyed writing this post almost entirely in R markdown.
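If all you want is the shading, the blocks argument does most of the work. Here is a minimal sketch, assuming the amended plot.xts from the plot.R sourced in the full code below (the dates are the first buy/sell pair from the post):

require(quantmod)
require(xtsExtra)
#source the amended plot.xts first (see full code below)
source("https://raw.github.com/timelyportfolio/plotxts_shading_points/master/plot.R")
jpy <- getSymbols("DEXJPUS", src = "FRED", auto.assign = FALSE)
#shade one holding period gray
plot.xts(jpy["2010::"],
         blocks = list(start.time = "2010-08-20",
                       end.time   = "2010-10-08",
                       col        = "gray90"))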

R code from GitHub:

require(RColorBrewer)
require(quantmod)
require(xtsExtra)

source("https://raw.github.com/timelyportfolio/plotxts_shading_points/master/plot.R")

jpy <- getSymbols("DEXJPUS", src = "FRED", auto.assign = FALSE)

#define a vector of buy dates
buydates <- c("2010-08-20",
              "2011-02-07",
              "2011-03-30",
              "2011-10-14",
              "2012-06-21",
              "2012-10-25")
#define a vector of sell dates
selldates <- c("2010-10-08",
               "2011-03-02",
               "2011-06-01",
               "2012-05-31",
               "2012-08-22",
               format(Sys.Date(), "%Y-%m-%d")) #fill today - don't think this is necessary

custom.panel <- function(index, x, ...) {
  default.panel(index, x, ...)
  #dotted horizontal grid lines at the pretty y-axis breaks
  abline(h = pretty(c(par("yaxp")[1], par("yaxp")[2]), n = par("yaxp")[3]),
         col = "gray60", lty = 3)
  abline(h = par("usr")[3], col = "black")
  axis(side = 2, col = "gray60", col.axis = "black", lwd = 0, lwd.ticks = FALSE, las = 1,
       at = pretty(c(par("yaxp")[1], par("yaxp")[2]), n = abs(par("yaxp")[3])),
       labels = pretty(c(par("yaxp")[1], par("yaxp")[2]), n = abs(par("yaxp")[3])))
  #green points on buy dates
  points(x = index[which(index(x) %in% as.Date(buydates))],
         y = x[which(index(x) %in% as.Date(buydates)), ], cex = 1, pch = 19,
         col = "darkolivegreen3")
  #red points on sell dates
  points(x = index[which(index(x) %in% as.Date(selldates))],
         y = x[which(index(x) %in% as.Date(selldates)), ], cex = 1, pch = 19,
         col = "indianred3")
  #to add reference lines to indicate entry level
  #I'm not sure it is necessary but if you like it uncomment below
  #for(i in 1:length(buydates)) {
  #  segments(x0 = index[which(index(x) == as.Date(buydates[i]))],
  #           x1 = index[which(index(x) == as.Date(selldates[i]))],
  #           y0 = x[which(index(x) == as.Date(buydates[i])), ],
  #           y1 = x[which(index(x) == as.Date(buydates[i])), ])
  #}
}

plot.xts(jpy["2009-12::"],                #limit to Dec 2009 to current so more easily visible
         col = brewer.pal(9, "Blues")[7], #a blue that will look ok
         lwd = 2,                         #line width
         las = 1,                         #do not rotate y axis labels
         bty = "n",
         auto.grid = FALSE,
         major.format = "%b %Y",
         major.ticks = "years",
         minor.ticks = FALSE,
         col.axis = "transparent",
         yax.loc = "none",
         cex.axis = 0.8,
         panel = custom.panel,
         main = NA,                       #will do title later so we have more control
         blocks = list(start.time = buydates, #overlay blocks for periods owned
                       end.time = selldates, col = "gray90"))
#define title separately so we have more control
title(main = "US$/Japanese Yen from St. Louis Federal Reserve (FRED)",
      outer = TRUE,
      line = -2,
      adj = 0.05)
text(x = 0.05, y = 0.1, label = "holding periods shaded", adj = 0, font = 3, cex = 0.8)

Thursday, February 21, 2013

Additional Plots on French Breakpoints as Valuation

I feel like there might be some merit in Slightly Different Measure of Valuation, which uses Ken French's Book (BE) to Market (ME) breakpoints by percentile to offer an additional valuation metric for US stocks. I thought some additional plots might help me flesh out the concept. This plot struck me as particularly helpful.

[Figure: BE/ME breakpoint percentiles relative to their means, dot plot by year and cumulative density by decade]

In the next iteration, I hope to add a look at prospective drawdowns or returns. However, I struggle, since the last 30 years have all basically exhibited historical overvaluation. Since 1926, no period of overvaluation has lasted longer than 14 years except the last 30.
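A rough way to quantify that streak is to count consecutive years below the long-run means. This is a minimal sketch, assuming BE_ME.adj from the code below and my own hypothetical definition of "overvalued" as the average percentile sitting below its mean:

#rough sketch: longest run of consecutive "overvalued" years
#assumes BE_ME.adj from the code below; "overvalued" here is a
#hypothetical definition: average percentile below its long-run mean
over <- rowMeans(coredata(BE_ME.adj)) < 0
runs <- rle(over)
max(runs$lengths[runs$values]) #longest overvaluation streak in years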

Thanks to the post at http://timotheepoisot.fr/2013/02/17/stacked-barcharts/, which helped me use much more appealing colors than the default lattice set.


R code from GIST:

require(latticeExtra)
require(Hmisc)
require(reshape2)
require(xts)
require(RColorBrewer) #for brewer.pal
require(grid)         #for grid.text

loadfrench <- function(zipfile, txtfile, skip, nrows) {
  #my.url will be the location of the zip file with the data
  my.url <- paste("http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/",
                  zipfile, ".zip", sep = "")
  #this will be the temp file set up for the zip file
  my.tempfile <- paste(tempdir(), "\\frenchzip.zip", sep = "")
  #my.usefile is the name of the txt file with the data
  my.usefile <- paste(tempdir(), "\\", txtfile, ".txt", sep = "")
  download.file(my.url, my.tempfile, method = "auto",
                quiet = FALSE, mode = "wb", cacheOK = TRUE)
  unzip(my.tempfile, exdir = tempdir(), junkpaths = TRUE)
  #read space-delimited text file extracted from zip
  french <- read.table(file = my.usefile,
                       header = FALSE, sep = "", fill = TRUE, #fill = TRUE to handle bad data
                       as.is = FALSE,
                       skip = skip, nrows = nrows)
  #get dates ready for xts index
  datestoformat <- french[, 1]
  datestoformat <- paste(substr(datestoformat, 1, 4), "12", "31", sep = "-")
  #get xts for analysis
  #unfortunately the last percentile in 1942 is not separated by a space
  #so we will delete the last two columns
  french_xts <- as.xts(french[, 1:(NCOL(french) - 2)],
                       order.by = as.Date(datestoformat))
  #delete missing data which is denoted by -0.9999
  french_xts[which(french_xts < -0.99, arr.ind = TRUE)[, 1],
             unique(which(french_xts < -0.99, arr.ind = TRUE)[, 2])] <- 0
  #divide by 100 to get percent
  french_xts <- french_xts / 100
  return(french_xts)
}

filenames <- c("BE-ME_Breakpoints")
BE_ME <- loadfrench(zipfile = filenames[1], txtfile = filenames[1], skip = 3, nrows = 87)
#first column is year, which we can remove
#columns 2 and 3 are counts for positive and negative, which we will also remove
BE_ME <- BE_ME[, 4:NCOL(BE_ME)]
colnames(BE_ME) <- paste(5 * 0:(NCOL(BE_ME) - 1), "pctile", sep = "")
#kind of normalize data by dividing each percentile by the percentile mean
BE_ME.adj <- BE_ME / matrix(rep(apply(BE_ME, MARGIN = 2, FUN = mean), times = NROW(BE_ME)),
                            ncol = NCOL(BE_ME), byrow = TRUE) - 1
BE_ME.adj.df <- as.data.frame(cbind(as.numeric(format(as.Date(index(BE_ME.adj)), "%Y")),
                                    coredata(BE_ME.adj)))
BE_ME.adj.melt <- melt(BE_ME.adj.df, id.vars = 1)
#add column for the decade so we can use it in plots
BE_ME.adj.melt[, 4] <- paste(substr(BE_ME.adj.melt[, 1], 1, 3), "0", sep = "")
colnames(BE_ME.adj.melt) <- c("Year", "Pctile", "value", "Decade")

#good way to get decent colors
#stole from http://timotheepoisot.fr/2013/02/17/stacked-barcharts/
pal <- colorRampPalette(brewer.pal(5, "Paired"))(20)

p1 <- Ecdf(~value | Decade, groups = Year %% 10, col = pal[seq(1, 20, by = 2)],
           data = BE_ME.adj.melt, #or even years only: BE_ME.adj.melt[which(BE_ME.adj.melt[,"Year"] %% 2 == 0),]
           label.curves = TRUE,
           layout = c(1, 10),
           strip = FALSE, strip.left = strip.custom(bg = "grey70"),
           scales = list(x = list(tck = c(1, 0)), y = list(alternating = 0, tck = c(0, 0))),
           ylab = NULL,
           xlab = "BE_ME/percentile mean",
           main = " ") +
  layer(panel.abline(v = 0, col = "grey50"))
p2 <- dotplot(factor(Year) ~ value | Decade, col = pal[seq(1, 20, by = 2)],
              data = BE_ME.adj.melt,
              pch = 19,
              cex = 0.6,
              strip = FALSE, strip.left = strip.custom(bg = "grey70"),
              scales = list(x = list(tck = c(1, 0)), y = list(relation = "free", draw = FALSE)),
              layout = c(1, 10),
              xlab = "BE_ME/percentile mean",
              main = "Kenneth French BE_ME Percentile Breakpoints") +
  layer(panel.abline(v = 0, col = "grey50")) #+
  #layer(panel.abline(v = 0.25, col = "darkolivegreen4")) +
  #layer(panel.abline(v = -0.25, col = "indianred4"))

#print side by side
print(p2, position = c(0, 0.015, 0.5, 1), more = TRUE)
print(p1, position = c(0.45, 0.015, 1, 1))
grid.text("Dot Plot by Year by Decade",
          x = unit(0.1, "npc"), y = unit(0.96, "npc"), hjust = 0)
grid.text("Cumulative Density by Year by Decade",
          x = unit(0.55, "npc"), y = unit(0.96, "npc"), hjust = 0)

Wednesday, February 20, 2013

Another Way to Look at Vanguard and Pimco

I like the results of the analysis shown in my post Applying Tradeblotter's Nice Work Across Manager Rather than Time, but I was not satisfied that the plot allowed a quick summary comparison of the two massive fund complexes. I am much more pleased with this Ecdf plot from the Hmisc package.

[Figure: cumulative density of 1-year performance, PIMCO vs. Vanguard mutual funds]

R code at GIST:

#build on fine work at http://tradeblotter.wordpress.com/
#I take blame for all the ugly/bad code
require(latticeExtra)
require(Hmisc) #for Ecdf

#data from the PIMCO and Vanguard websites imported into Excel and translated into csv
#if local, uncomment the next line
#pimco_vanguard <- read.csv("vanguard_pimco.csv")
#get data from a published Google Docs spreadsheet
pimco_vanguard <- read.csv("https://docs.google.com/spreadsheet/pub?key=0AieeEIaS0AOsdDFET0ZmbTBKWDNoMnZrZ0oySWRia1E&single=true&gid=0&output=csv")

#do 1 year or past 12 months
#exclude 0, assuming that data does not exist for this fund
asTheEconomist(
  Ecdf(~X1Y, group = FundComplex,
       data = pimco_vanguard[which(!(pimco_vanguard[, "X1Y"] == 0)), ],
       label.curves = TRUE,
       main = "PIMCO and Vanguard Mutual Funds \nCumulative Density of 1 Year Performance")
) +
  layer(panel.abline(v = 0, col = "grey70"))

Tuesday, February 19, 2013

Onepager Now with knitr

Since at some point I had trouble with a conflict between knitr and the LaTeX package textpos, I used the lesser Sweave in Another Experiment with R and Sweave. I ran the Sweave2knitr command and discovered that textpos and knitr play well together now. Here is the result using knitr (go to https://www.box.com/s/4nftk6qpa0cugapmncsn if the embed does not show below):
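For reference, the conversion itself is a one-liner from the knitr package; the filename here is hypothetical:

require(knitr)
#rewrite Sweave conventions (\SweaveOpts, results=tex, fig=TRUE) in knitr syntax
Sweave2knitr("onepager.Rnw") #filename hypothetical; writes a converted copy of the file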

.rnw source file from Gist

\documentclass[nohyper,justified]{tufte-handout}
%\documentclass{article}
%\usepackage[absolute,showboxes]{textpos}
\usepackage[absolute]{textpos}
\usepackage{sidecap}
%\usepackage{color}
%\usepackage[usenames,dvipsnames,svgnames,table]{xcolor}
\begin{document}
<<include=FALSE>>=
opts_chunk$set(concordance=TRUE)
@
\begin{wide}
\section{\Huge Performance Summary with knitr and R}
{\Large Here is a little experiment with R and knitr to produce a performance report. I have done some samples in the past, but I wanted to iterate through a couple more, especially to evaluate other options for what has been started in the PApages package. Of course, this text could be easily replaced with some commentary from a manager about opportunities, thoughts, or current allocation. A dashboard set of charts also might be very helpful here.}
\hrulefill
\end{wide}
<<eval=TRUE,echo=FALSE,results='hide',warning=FALSE,message=FALSE,error=FALSE>>=
#do requires and set up environment for reporting
require(xtable)
require(ggplot2)
require(directlabels)
require(reshape2)
require(latticeExtra)
require(quantmod)
require(PerformanceAnalytics)

data(managers)
#get xts in data.frame form so that we can melt with the reshape2 package
#will use just manager 1, sp500, and 10y treasury
managers <- managers[, c(1, 8, 9)]
#add 0 at beginning so cumulative returns start at 1
#also cumulative will match up datewise with returns
managers <- as.xts(rbind(rep(0, NCOL(managers)), coredata(managers)),
                   order.by = c(as.Date(format(index(managers)[1], "%Y-%m-01")) - 1,
                                index(managers)))
managers.df <- as.data.frame(cbind(index(managers), coredata(managers)),
                             stringsAsFactors = FALSE)
#melt data, which puts it in a form that lattice and ggplot enjoy
managers.melt <- melt(managers.df, id.vars = 1)
colnames(managers.melt) <- c("date", "account", "return")
managers.melt[, 1] <- as.Date(managers.melt[, 1])
#get cumulative returns starting at 1
managers.cumul <- as.xts(
  apply(managers + 1, MARGIN = 2, FUN = cumprod),
  #index already includes the extra first date to accommodate the 1 that we add
  order.by = index(managers))
managers.cumul.df <- as.data.frame(cbind(index(managers.cumul),
                                         coredata(managers.cumul)),
                                   stringsAsFactors = FALSE)
managers.cumul.melt <- melt(managers.cumul.df, id.vars = 1)
colnames(managers.cumul.melt) <- c("date", "account", "return")
managers.cumul.melt[, 1] <- as.Date(managers.cumul.melt[, 1])
#get trailing returns for 1y, 3y, 5y, and since inception
trailing <- table.TrailingPeriods(managers,
                                  periods = c(12, 36, 60, NROW(managers)),
                                  FUNCS = c("mean"), funcs.names = c("return"))
trailing.df <- as.data.frame(cbind(c("1y", "3y", "5y", "SinceIncep"),
                                   c(rep("return", 4)),
                                   coredata(trailing)),
                             stringsAsFactors = FALSE)
trailing.melt <- melt(trailing.df, id.vars = 1:2)
colnames(trailing.melt) <- c("period", "measure", "account", "return")
#function to get numbers in percent format
#will use \\ to play well with tikz
percent <- function(x, digits = 2, format = "f", ...) {
  paste(formatC(100 * x, format = format, digits = digits, ...), "%", sep = "")
}
@
\begin{textblock*}{105mm}(15mm,70mm)
\begin{figure}
\vspace{0pt}
<<echo=FALSE,eval=TRUE,fig.width=8,fig.height=12>>=
charts.PerformanceSummary(managers,
colorset=c(brewer.pal(9,"Blues")[6],brewer.pal(8,"Greys")[c(5,4)]),
xlab=NULL)
@
\end{figure}
\end{textblock*}
\begin{textblock*}{75mm}(120mm,100mm)
\small Cumulative returns offer one of the best methods to evaluate the ability of a manager to achieve long term returns. Ultimately, the cumulative return is often one of the primary objectives of our clients.
\normalsize
\newline
\begin{figure}
\vspace{0pt}
<<echo=FALSE,eval=TRUE,results='asis'>>=
trailingtable <- apply(trailing,MARGIN=2,FUN=percent)
rownames(trailingtable) <- c("1y","3y","5y",paste("Since Inception ",format(index(managers)[1],"%b %Y")))
print(xtable(trailingtable), floating=FALSE, scalebox=0.7)
@
\end{figure}
\end{textblock*}
\begin{textblock*}{75mm}(120mm,152mm)
\small However, cumulative returns must also be evaluated with reference to the risks incurred to generate those returns. Below are multiple risk measures. We are most concerned with limiting drawdowns shown in the bottom left chart.
\normalsize
\newline
\begin{figure}
\vspace{0pt}
<<echo=FALSE,eval=TRUE,results='asis'>>=
risktable <- table.DownsideRisk(managers)
print(xtable(risktable), floating=FALSE, scalebox=0.6)
@
\end{figure}
\end{textblock*}
\end{document}

Tuesday, February 12, 2013

Another Experiment with R and Sweave

The R package PApages is a great start towards addressing the very common problem of internal and external reporting in the money management industry. Advent's APX, Axys, and Black Diamond, along with the up-and-coming, extremely well-connected, and well-funded Addepar, provide basic and acceptable reporting but generally do not provide the full set of risk and return metrics that I would expect. Since the very successful GSOC projects PerformanceAnalytics …Now With More Bacon (2008)! and New Attribution Functions for PortfolioAnalytics, we have had a comprehensive and robust set of risk, return, and attribution measures in R. Combined with the nearly limitless graphical abilities of R through xtsExtra, ggplot2, lattice, and base graphics, R seems to offer one of the best platforms for reporting, so I have committed myself to continuing my series http://timelyportfolio.blogspot.com/search/label/reporting exploring various reporting options in R.

This is a fairly crude sketch of something we can accomplish easily with R, Sweave, and PerformanceAnalytics. I hope to iterate toward something a little more compelling. If the embedded pdf does not show below, please see it at https://www.box.com/s/xpfn3rjwwmv8aftmkbyi.
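To reproduce the PDF, here is a minimal sketch of the two-step compile; the filename is hypothetical:

#weave the .Rnw into .tex, then run it through pdflatex
#"onepager.Rnw" is a hypothetical filename
Sweave("onepager.Rnw")
tools::texi2pdf("onepager.tex")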

R Sweave file from GIST:

\documentclass[nohyper,justified]{tufte-handout}
%\documentclass{article}
%\usepackage[absolute,showboxes]{textpos}
\usepackage[absolute]{textpos}
\usepackage{sidecap}
%\usepackage{color}
%\usepackage[usenames,dvipsnames,svgnames,table]{xcolor}
\begin{document}
\SweaveOpts{concordance=TRUE}
\begin{wide}
\section{\Huge Performance Summary with Sweave and R}
{\Large Here is a little experiment with R and Sweave to produce a performance report. I have done some samples in the past, but I wanted to iterate through a couple more, especially to evaluate other options for what has been started in the PApages package. Of course, this text could be easily replaced with some commentary from a manager about opportunities, thoughts, or current allocation. A dashboard set of charts also might be very helpful here.}
\hrulefill
\end{wide}
<<eval=TRUE,echo=FALSE,results=hide,warning=FALSE>>=
#do requires and set up environment for reporting
require(xtable)
require(ggplot2)
require(directlabels)
require(reshape2)
require(latticeExtra)
require(quantmod)
require(PerformanceAnalytics)

data(managers)
#get xts in data.frame form so that we can melt with the reshape2 package
#will use just manager 1, sp500, and 10y treasury
managers <- managers[, c(1, 8, 9)]
#add 0 at beginning so cumulative returns start at 1
#also cumulative will match up datewise with returns
managers <- as.xts(rbind(rep(0, NCOL(managers)), coredata(managers)),
                   order.by = c(as.Date(format(index(managers)[1], "%Y-%m-01")) - 1,
                                index(managers)))
managers.df <- as.data.frame(cbind(index(managers), coredata(managers)),
                             stringsAsFactors = FALSE)
#melt data, which puts it in a form that lattice and ggplot enjoy
managers.melt <- melt(managers.df, id.vars = 1)
colnames(managers.melt) <- c("date", "account", "return")
managers.melt[, 1] <- as.Date(managers.melt[, 1])
#get cumulative returns starting at 1
managers.cumul <- as.xts(
  apply(managers + 1, MARGIN = 2, FUN = cumprod),
  #index already includes the extra first date to accommodate the 1 that we add
  order.by = index(managers))
managers.cumul.df <- as.data.frame(cbind(index(managers.cumul),
                                         coredata(managers.cumul)),
                                   stringsAsFactors = FALSE)
managers.cumul.melt <- melt(managers.cumul.df, id.vars = 1)
colnames(managers.cumul.melt) <- c("date", "account", "return")
managers.cumul.melt[, 1] <- as.Date(managers.cumul.melt[, 1])
#get trailing returns for 1y, 3y, 5y, and since inception
trailing <- table.TrailingPeriods(managers,
                                  periods = c(12, 36, 60, NROW(managers)),
                                  FUNCS = c("mean"), funcs.names = c("return"))
trailing.df <- as.data.frame(cbind(c("1y", "3y", "5y", "SinceIncep"),
                                   c(rep("return", 4)),
                                   coredata(trailing)),
                             stringsAsFactors = FALSE)
trailing.melt <- melt(trailing.df, id.vars = 1:2)
colnames(trailing.melt) <- c("period", "measure", "account", "return")
#function to get numbers in percent format
#will use \\ to play well with tikz
percent <- function(x, digits = 2, format = "f", ...) {
  paste(formatC(100 * x, format = format, digits = digits, ...), "%", sep = "")
}
@
@
\begin{textblock*}{150mm}(5mm,70mm)
\begin{figure}
%to really fill the page, this works nicely
%\begin{minipage}[t]{1.2\linewidth}
%\begin{minipage}[t]{0.5\linewidth}
\vspace{0pt}
<<echo=FALSE,eval=TRUE,fig=TRUE,width=4,height=6>>=
charts.PerformanceSummary(managers,
colorset=c(brewer.pal(9,"Blues")[6],brewer.pal(8,"Greys")[c(5,4)]),
xlab=NULL)
@
\end{figure}
\end{textblock*}
\begin{textblock*}{85mm}(120mm,103mm)
Cumulative returns offer one of the best methods to evaluate the ability of a manager to achieve long term returns. Ultimately, the cumulative return is often one of the primary objectives of our clients.
\newline
\begin{figure}
\vspace{0pt}
<<echo=FALSE,eval=TRUE,results=tex>>=
trailingtable <- apply(trailing,MARGIN=2,FUN=percent)
rownames(trailingtable) <- c("1y","3y","5y",paste("Since Inception ",format(index(managers)[1],"%b %Y")))
print(xtable(trailingtable), floating=FALSE, scalebox=0.8)
@
\end{figure}
\end{textblock*}
\begin{textblock*}{85mm}(120mm,161mm)
However, cumulative returns must also be evaluated with reference to the risks incurred to generate those returns. Below are multiple risk measures. We are most concerned with limiting drawdowns shown in the bottom left chart.
\newline
\begin{figure}
\vspace{0pt}
<<echo=FALSE,eval=TRUE,results=tex>>=
risktable <- table.DownsideRisk(managers)
print(xtable(risktable), floating=FALSE, scalebox=0.7)
@
\end{figure}
\end{textblock*}
\end{document}

Sunday, February 3, 2013

Japanese Government Bonds (JGB) Total Return Series

In a follow-up to Yen and JGBs Short-Term vs Long Term and a series of posts on Japan, I thought the Bloomberg article "Japan Pension Fund's Bonds Too Many If Abe Succeeds, Mitani Says" was particularly interesting. It is difficult to find a total return series for JGBs, so here is an example of how we might construct one in R with the JGB 9 year. Using the 9 year gets us about a decade more data than the 10 year. The calculation is not perfect, but it gets us very close.
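The idea behind the calculation: each day's total return is approximately the price change from repricing a constant-maturity 9-year bond at the new yield, plus one day of coupon accrual. Here is a minimal one-day sketch with RQuantLib, using the same pricing call as the full code below; the two yields are hypothetical:

require(RQuantLib)
#hypothetical 9-year JGB yields (decimal) for yesterday and today
y.old <- 0.015
y.new <- 0.014
#reprice a 9-year bond carrying yesterday's yield as its coupon at today's yield
price <- FixedRateBondPriceByYield(yield = y.new, issueDate = Sys.Date(),
                                   maturityDate = advance("Japan", Sys.Date(), 9, 3),
                                   rates = y.old, period = 2)[1]
price.return <- price / 100 - 1            #price is quoted per 100 face
total.return <- price.return + y.old / 250 #plus one day of accrual (~250 trading days)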

The Japanese Government Pension Investment Fund (GPIF) has been spoiled by a very pleasant ride with its JGBs.

[Figure: JGB 9-year total return performance summary]

R code from GIST:

#get Japan yield data from the Ministry of Finance Japan
#data goes back to 1974
require(RQuantLib)
require(PerformanceAnalytics)

#get data from the Japanese Ministry of Finance
url <- "http://www.mof.go.jp/english/jgbs/reference/interest_rate/"
filenames <- paste("jgbcme",
                   c("", "_2010", "_2000-2009", "_1990-1999", "_1980-1989", "_1974-1979"),
                   ".csv", sep = "")
#load all data and combine into one jgb data.frame
jgb <- read.csv(paste(url, filenames[1], sep = ""), stringsAsFactors = FALSE)
for (i in 2:length(filenames)) {
  jgb <- rbind(jgb, read.csv(paste(url, "/historical/", filenames[i], sep = ""),
                             stringsAsFactors = FALSE))
}
#now clean up the jgb data.frame to make a jgb yield xts series
jgb.xts <- as.xts(data.matrix(jgb[, 2:NCOL(jgb)]), order.by = as.Date(jgb[, 1]))

#initialize the price return object
JGB9pricereturn <- jgb.xts[, "X9"]
JGB9pricereturn[1, 1] <- 0
colnames(JGB9pricereturn) <- "PriceReturn-JGB9"
#use RQuantLib to price the JGB 9 year at each day's new yield
#9 year has a longer history than the 10 year so we'll use the 9 year
for (i in 1:(NROW(jgb.xts[, "X9"]) - 1)) {
  JGB9pricereturn[i + 1, 1] <- FixedRateBondPriceByYield(
    yield = jgb.xts[i + 1, "X9"] / 100, issueDate = Sys.Date(),
    maturityDate = advance("Japan", Sys.Date(), 9, 3),
    rates = jgb.xts[i, "X9"] / 100, period = 2)[1] / 100 - 1
}
#total return will be the price return + yield/250 for one day (data is daily)
JGB9totalreturn <- JGB9pricereturn + lag(jgb.xts[, "X9"], k = 1) / 250 / 100
colnames(JGB9totalreturn) <- "TotalReturn-JGB9"
JGB9totalreturn[1, 1] <- 0
JGB9cumul <- cumprod(JGB9totalreturn + 1)

charts.PerformanceSummary(JGB9totalreturn,
                          main = NA,
                          xlab = NA)
title(main = "Japanese Government Bond (JGB) 9 Year Total Return",
      adj = 0.04, outer = TRUE, line = -1.5)

Friday, February 1, 2013

Yen and JGBs Short-Term vs Long Term

I have read some articles arguing that the recent move in the Japanese Yen is overdone. However, considering the short term without regard to the long-term context is naïve and potentially dangerous. Although I do not have significant proof, I believe long-term mean reversion can completely dominate short-term mean-reversion hopes. Just to provide some longer-term context, I thought I would offer some graphical aids.

[Figure: USD/Yen and JGB 10-year yield since 1986, since 2005, and since 2012]

In my mind, the Yen selloff is only in its infancy. For the move to truly engage, I think we need Japanese Government Bond (JGB) yields to move higher also, and if they do, we are in a different paradigm than the last 20 years. But, what do I know?

R code from GIST:

#get Japan yield data from the Ministry of Finance Japan
#data goes back to 1974
require(latticeExtra)
require(xtsExtra)
require(quantmod) #for getSymbols

url <- "http://www.mof.go.jp/english/jgbs/reference/interest_rate/"
filenames <- paste("jgbcme",
                   c("", "_2010", "_2000-2009", "_1990-1999", "_1980-1989", "_1974-1979"),
                   ".csv", sep = "")
#load all data and combine into one jgb data.frame
jgb <- read.csv(paste(url, filenames[1], sep = ""), stringsAsFactors = FALSE)
for (i in 2:length(filenames)) {
  jgb <- rbind(jgb, read.csv(paste(url, "/historical/", filenames[i], sep = ""),
                             stringsAsFactors = FALSE))
}
#now clean up the jgb data.frame to make a jgb xts
jgb.xts <- as.xts(data.matrix(jgb[, 2:NCOL(jgb)]), order.by = as.Date(jgb[, 1]))
#get Yen from the Fed
getSymbols("DEXJPUS", src = "FRED")

p1986 <- xyplot(na.omit(merge(DEXJPUS, jgb.xts[, "X10"])),
                lattice.options = theEconomist.opts(),
                par.settings = theEconomist.theme(box = "transparent"),
                scales = list(y = list(rot = 0)),
                strip = strip.custom(factor.levels = c("USD/Yen", "JGB 10y Yield")),
                xlab = NULL,
                main = "Japanese JGB 10Y Yield and Yen Since 1986")
p2005 <- xyplot(na.omit(merge(DEXJPUS, jgb.xts[, "X10"]))["2005::", ],
                lattice.options = theEconomist.opts(),
                par.settings = theEconomist.theme(box = "transparent"),
                scales = list(y = list(rot = 0)),
                strip = strip.custom(factor.levels = c("USD/Yen", "JGB 10y Yield")),
                xlab = NULL,
                main = "Japanese JGB 10Y Yield and Yen Since 2005")
p2012 <- xyplot(na.omit(merge(DEXJPUS, jgb.xts[, "X10"]))["2012::", ],
                lattice.options = theEconomist.opts(),
                par.settings = theEconomist.theme(box = "transparent"),
                scales = list(y = list(rot = 0)),
                strip = strip.custom(factor.levels = c("USD/Yen", "JGB 10y Yield")),
                xlab = NULL,
                main = "Japanese JGB 10Y Yield and Yen Since 2012")
#print the three panels top to bottom
print(p2012, position = c(0, 0,    1, 0.35), more = TRUE)
print(p2005, position = c(0, 0.33, 1, 0.67), more = TRUE)
print(p1986, position = c(0, 0.66, 1, 1))