Creating a Table of Monthly Returns With R and a Volatility Trading Interview

This post will cover two things: the first is a function to convert daily returns into a table of monthly returns, complete with drawdowns and annual returns. The second is an interview I had with David Lincoln (now on YouTube) about the events of Feb. 5, 2018, and my philosophy on volatility trading.

So, to start off: below is a function I wrote that's meant to mimic PerformanceAnalytics's table.CalendarReturns. What table.CalendarReturns is supposed to do is create a month-by-year table of monthly returns, with months across and years down. However, it never seemed to give me the output I was expecting, so I wrote my own version.

Here’s the code for the function:

require(data.table)
require(PerformanceAnalytics)
require(scales)
require(Quandl)

# helper functions
pastePerc <- function(x) {return(paste0(comma(x),"%"))}
rowGsub <- function(x) {x <- gsub("NA%", "NA", x);x}

calendarReturnTable <- function(rets, digits = 3, percent = FALSE) {
  
  # get maximum drawdown using daily returns
  dds <- apply.yearly(rets, maxDrawdown)
  
  # get monthly returns
  rets <- apply.monthly(rets, Return.cumulative)
  
  # convert to data frame with year, month, and monthly return value
  dfRets <- cbind(year(index(rets)), month(index(rets)), coredata(rets))
  
  # convert to data table and reshape into year x month table
  dfRets <- data.frame(dfRets)
  colnames(dfRets) <- c("Year", "Month", "Value")
  monthNames <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
  for(i in 1:length(monthNames)) {
    dfRets$Month[dfRets$Month==i] <- monthNames[i]
  }
  dfRets <- data.table(dfRets)
  dfRets <- data.table::dcast(dfRets, Year~Month, value.var = "Value")
  
  # create row names and rearrange table in month order
  dfRets <- data.frame(dfRets)
  yearNames <- dfRets$Year
  rownames(dfRets) <- yearNames; dfRets$Year <- NULL
  dfRets <- dfRets[,monthNames]
  
  # append yearly returns and drawdowns
  yearlyRets <- apply.yearly(rets, Return.cumulative)
  dfRets$Annual <- yearlyRets
  dfRets$DD <- dds
  
  # convert to percentage
  if(percent) {
    dfRets <- dfRets * 100
  }
  
  # round for formatting
  dfRets <- apply(dfRets, 2, round, digits)
   
  # paste the percentage sign
  if(percent) {
    dfRets <- apply(dfRets, 2, pastePerc)
    dfRets <- apply(dfRets, 2, rowGsub)
    dfRets <- data.frame(dfRets)
    rownames(dfRets) <- yearNames
  }
  return(dfRets)
}

Here’s what the output looks like.

Quandl.api_key("yourKeyHere") # set your Quandl API key first
spy <- Quandl("EOD/SPY", type='xts', start_date='1990-01-01')
spyRets <- Return.calculate(spy$Adj_Close)
calendarReturnTable(spyRets, percent = FALSE)
        Jan    Feb    Mar    Apr    May    Jun    Jul    Aug    Sep    Oct    Nov    Dec Annual    DD
1993  0.000  0.011  0.022 -0.026  0.027  0.004 -0.005  0.038 -0.007  0.020 -0.011  0.012  0.087 0.047
1994  0.035 -0.029 -0.042  0.011  0.016 -0.023  0.032  0.038 -0.025  0.028 -0.040  0.007  0.004 0.085
1995  0.034  0.041  0.028  0.030  0.040  0.020  0.032  0.004  0.042 -0.003  0.044  0.016  0.380 0.026
1996  0.036  0.003  0.017  0.011  0.023  0.009 -0.045  0.019  0.056  0.032  0.073 -0.024  0.225 0.076
1997  0.062  0.010 -0.044  0.063  0.063  0.041  0.079 -0.052  0.048 -0.025  0.039  0.019  0.335 0.112
1998  0.013  0.069  0.049  0.013 -0.021  0.043 -0.014 -0.141  0.064  0.081  0.056  0.065  0.287 0.190
1999  0.035 -0.032  0.042  0.038 -0.023  0.055 -0.031 -0.005 -0.022  0.064  0.017  0.057  0.204 0.117
2000 -0.050 -0.015  0.097 -0.035 -0.016  0.020 -0.016  0.065 -0.055 -0.005 -0.075 -0.005 -0.097 0.171
2001  0.044 -0.095 -0.056  0.085 -0.006 -0.024 -0.010 -0.059 -0.082  0.013  0.078  0.006 -0.118 0.288
2002 -0.010 -0.018  0.033 -0.058 -0.006 -0.074 -0.079  0.007 -0.105  0.082  0.062 -0.057 -0.216 0.330
2003 -0.025 -0.013  0.002  0.085  0.055  0.011  0.018  0.021 -0.011  0.054  0.011  0.050  0.282 0.137
2004  0.020  0.014 -0.013 -0.019  0.017  0.018 -0.032  0.002  0.010  0.013  0.045  0.030  0.107 0.075
2005 -0.022  0.021 -0.018 -0.019  0.032  0.002  0.038 -0.009  0.008 -0.024  0.044 -0.002  0.048 0.070
2006  0.024  0.006  0.017  0.013 -0.030  0.003  0.004  0.022  0.027  0.032  0.020  0.013  0.158 0.076
2007  0.015 -0.020  0.012  0.044  0.034 -0.015 -0.031  0.013  0.039  0.014 -0.039 -0.011  0.051 0.099
2008 -0.060 -0.026 -0.009  0.048  0.015 -0.084 -0.009  0.015 -0.094 -0.165 -0.070  0.010 -0.368 0.476
2009 -0.082 -0.107  0.083  0.099  0.058 -0.001  0.075  0.037  0.035 -0.019  0.062  0.019  0.264 0.271
2010 -0.036  0.031  0.061  0.015 -0.079 -0.052  0.068 -0.045  0.090  0.038  0.000  0.067  0.151 0.157
2011  0.023  0.035  0.000  0.029 -0.011 -0.017 -0.020 -0.055 -0.069  0.109 -0.004  0.010  0.019 0.186
2012  0.046  0.043  0.032 -0.007 -0.060  0.041  0.012  0.025  0.025 -0.018  0.006  0.009  0.160 0.097
2013  0.051  0.013  0.038  0.019  0.024 -0.013  0.052 -0.030  0.032  0.046  0.030  0.026  0.323 0.056
2014 -0.035  0.046  0.008  0.007  0.023  0.021 -0.013  0.039 -0.014  0.024  0.027 -0.003  0.135 0.073
2015 -0.030  0.056 -0.016  0.010  0.013 -0.020  0.023 -0.061 -0.025  0.085  0.004 -0.017  0.013 0.119
2016 -0.050 -0.001  0.067  0.004  0.017  0.003  0.036  0.001  0.000 -0.017  0.037  0.020  0.120 0.103
2017  0.018  0.039  0.001  0.010  0.014  0.006  0.021  0.003  0.020  0.024  0.031  0.012  0.217 0.026
2018  0.056 -0.031     NA     NA     NA     NA     NA     NA     NA     NA     NA     NA  0.023 0.101

And with percentage formatting:

calendarReturnTable(spyRets, percent = TRUE)
         Jan      Feb     Mar     Apr     May     Jun     Jul      Aug      Sep      Oct     Nov     Dec   Annual      DD
1993  0.000%   1.067%  2.241% -2.559%  2.697%  0.367% -0.486%   3.833%  -0.726%   1.973% -1.067%  1.224%   8.713%  4.674%
1994  3.488%  -2.916% -4.190%  1.121%  1.594% -2.288%  3.233%   3.812%  -2.521%   2.843% -3.982%  0.724%   0.402%  8.537%
1995  3.361%   4.081%  2.784%  2.962%  3.967%  2.021%  3.217%   0.445%   4.238%  -0.294%  4.448%  1.573%  38.046%  2.595%
1996  3.558%   0.319%  1.722%  1.087%  2.270%  0.878% -4.494%   1.926%   5.585%   3.233%  7.300% -2.381%  22.489%  7.629%
1997  6.179%   0.957% -4.414%  6.260%  6.321%  4.112%  7.926%  -5.180%   4.808%  -2.450%  3.870%  1.910%  33.478% 11.203%
1998  1.288%   6.929%  4.876%  1.279% -2.077%  4.259% -1.351% -14.118%   6.362%   8.108%  5.568%  6.541%  28.688% 19.030%
1999  3.523%  -3.207%  4.151%  3.797% -2.287%  5.538% -3.102%  -0.518%  -2.237%   6.408%  1.665%  5.709%  20.388% 11.699%
2000 -4.979%  -1.523%  9.690% -3.512% -1.572%  1.970% -1.570%   6.534%  -5.481%  -0.468% -7.465% -0.516%  -9.730% 17.120%
2001  4.446%  -9.539% -5.599%  8.544% -0.561% -2.383% -1.020%  -5.933%  -8.159%   1.302%  7.798%  0.562% -11.752% 28.808%
2002 -0.980%  -1.794%  3.324% -5.816% -0.593% -7.376% -7.882%   0.680% -10.485%   8.228%  6.168% -5.663% -21.588% 32.968%
2003 -2.459%  -1.348%  0.206%  8.461%  5.484%  1.066%  1.803%   2.063%  -1.089%   5.353%  1.092%  5.033%  28.176% 13.725%
2004  1.977%   1.357% -1.320% -1.892%  1.712%  1.849% -3.222%   0.244%   1.002%   1.288%  4.451%  3.015%  10.704%  7.526%
2005 -2.242%   2.090% -1.828% -1.874%  3.222%  0.150%  3.826%  -0.937%   0.800%  -2.365%  4.395% -0.190%   4.827%  6.956%
2006  2.401%   0.573%  1.650%  1.263% -3.012%  0.264%  0.448%   2.182%   2.699%   3.152%  1.989%  1.337%  15.847%  7.593%
2007  1.504%  -1.962%  1.160%  4.430%  3.392% -1.464% -3.131%   1.283%   3.870%   1.357% -3.873% -1.133%   5.136%  9.925%
2008 -6.046%  -2.584% -0.903%  4.766%  1.512% -8.350% -0.899%   1.545%  -9.437% -16.519% -6.961%  0.983% -36.807% 47.592%
2009 -8.211% -10.745%  8.348%  9.935%  5.845% -0.068%  7.461%   3.694%   3.545%  -1.923%  6.161%  1.907%  26.364% 27.132%
2010 -3.634%   3.119%  6.090%  1.547% -7.945% -5.175%  6.830%  -4.498%   8.955%   3.820%  0.000%  6.685%  15.057% 15.700%
2011  2.330%   3.474%  0.010%  2.896% -1.121% -1.688% -2.000%  -5.498%  -6.945%  10.915% -0.406%  1.044%   1.888% 18.609%
2012  4.637%   4.341%  3.216% -0.668% -6.006%  4.053%  1.183%   2.505%   2.535%  -1.820%  0.566%  0.900%  15.991%  9.687%
2013  5.119%   1.276%  3.798%  1.921%  2.361% -1.336%  5.168%  -2.999%   3.168%   4.631%  2.964%  2.589%  32.307%  5.552%
2014 -3.525%   4.552%  0.831%  0.695%  2.321%  2.064% -1.344%   3.946%  -1.379%   2.355%  2.747% -0.256%  13.462%  7.273%
2015 -2.963%   5.620% -1.574%  0.983%  1.286% -2.029%  2.259%  -6.095%  -2.543%   8.506%  0.366% -1.718%   1.252% 11.910%
2016 -4.979%  -0.083%  6.724%  0.394%  1.701%  0.350%  3.647%   0.120%   0.008%  -1.734%  3.684%  2.028%  12.001% 10.306%
2017  1.789%   3.929%  0.126%  0.993%  1.411%  0.637%  2.055%   0.292%   2.014%   2.356%  3.057%  1.209%  21.700%  2.609%
2018  5.636%  -3.118%      NA      NA      NA      NA      NA       NA       NA       NA      NA      NA   2.342% 10.102%
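
As a point of comparison, the same data can be fed to the built-in PerformanceAnalytics function, assuming the monthly compounding is done beforehand. A minimal sketch (its layout, rounding, and percentage conventions differ, which is what motivated the rewrite in the first place):

# for comparison: PerformanceAnalytics's built-in calendar table, fed
# pre-compounded monthly returns; note its conventions differ from the above
monthlySpy <- apply.monthly(na.omit(spyRets), Return.cumulative)
table.CalendarReturns(monthlySpy)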

That covers it for the function. Now, onto volatility trading. Dodging the February short volatility meltdown has, in my opinion, been one of the best out-of-sample validators for my volatility trading research. My subscriber numbers confirm it: I’ve received 12 new subscribers this month, as individuals interested in the volatility trading space have gained a newfound respect for the risk management my system uses. After all, it’s the down months that vindicate system traders like myself who do not employ leverage in the up times. Those interested in following my trades can subscribe here. Furthermore, I recently had the chance to speak with David Lincoln about my background, my philosophy on trading in general, and trading volatility in particular. Those interested can view the interview here.

Thanks for reading.

NOTE: I am currently interested in networking, full-time positions related to my skill set, and long-term consulting projects. Those interested in discussing professional opportunities can find me on LinkedIn after writing a note expressing their interest.


How to Make Like A Chrono Trigger Character and Survive the Apocalypse

This impromptu post will be about the events of Feb. 5, 2018 in the volatility markets.

Allow me to indulge in a little bit of millennial nostalgia. For those that played Chrono Trigger, odds are one of their most memorable experiences was first reaching the Kingdom of Zeal–a floating kingdom above the clouds of a never-ending ice age, complete with warm scenery and calming music.

Long story short, it was powered by harvesting magic from…essentially the monster that was the game’s final enemy. And what was my favorite setting in the game eventually had this happen to it:

[Image: the destruction of the Kingdom of Zeal]

Essentially, the lesson taken from that scenario is: exercise caution first and foremost, and don’t mess around with things one does not understand. After the 2017 that XIV had, when it was seemingly impossible to do any wrong, many system traders looked foolish by comparison. Well, it seems that all good things must come to an end, though it isn’t often that they do so this violently.

For the record, my aggressive subscription strategy was flat starting on January 31st, while my conservative strategy was flat for far longer. In short, discretion is sometimes the better part of valor, though those interested in what actually constitutes valor, and who want to hear it from a quant, can head over to Alpha Architect. Wes Gray and Jack Vogel will tell you far more about being a badass than I ever could.

However, to put some firm numbers on my trading philosophy:

1*(1+1) = 2.
1*(1-1) = 0.

Make 100% on a trade? You’re a hero for some finite amount of time.
Lose 100%? You’re not just an idiot. You’re done. Kaput. Finished. Career over.

The way I see it is this: in trading, there’s no free lunch, and there are a lot of smart people in the industry.

Risk in the financial markets (especially the volatility trading markets) doesn’t look like this:

[Image: short tail]

But like this:

[Image: long tail]

The tails are very long. And in the financial markets, they aren’t so fluffy.

For the record, my subscription strategy, beyond taking a look at my VXX signal, is unaffected by XIV’s termination, as SVXY will slot right in to replace it.

Thanks for reading.

NOTE: I am currently seeking full time employment, consulting opportunities, and networking opportunities in relation to the skills I’ve demonstrated. Contact me on LinkedIn here.

Which Implied Volatility Ratio Is Best?

This post will be about comparing a volatility signal using three different variations of implied volatility indices to predict when to enter a short volatility position.

In volatility trading, there are three separate implied volatility indices that have a somewhat long history for trading–the VIX (everyone knows this one), the VXV (more recently renamed the VIX3M), which is like the VIX except measured over a three-month period, and the VXMT, which is the six-month implied volatility index.

This gives rise to three separate implied volatility ratios to investigate as predictors for entering a short (or long) volatility position: VIX/VIX3M, VIX/VXMT, and VIX3M/VXMT.

So, let’s get the data.

require(downloader)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(data.table)

download("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vix3mdailyprices.csv", 
         destfile="vxvData.csv")
download("http://www.cboe.com/publish/ScheduledTask/MktData/datahouse/vxmtdailyprices.csv", 
         destfile="vxmtData.csv")

VIX <- fread("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vixcurrent.csv", skip = 1)
VIXdates <- VIX$Date
VIX$Date <- NULL; VIX <- xts(VIX, order.by=as.Date(VIXdates, format = '%m/%d/%Y'))


vxv <- xts(read.zoo("vxvData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))
vxmt <- xts(read.zoo("vxmtData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))

download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")

xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))

xivRets <- Return.calculate(Cl(xiv))

One quick strategy to investigate is simple–the idea that the ratio should be below 1 (i.e., contango in the implied volatility term structure) and decreasing (below a moving average). That is, when the ratio is below 1 (longer-term implied volatility greater than shorter-term) and below its 60-day moving average, the strategy takes a position in XIV.

Here’s the code to do that.

vixVix3m <- Cl(VIX)/Cl(vxv)
vixVxmt <- Cl(VIX)/Cl(vxmt)
vix3mVxmt <- Cl(vxv)/Cl(vxmt)

stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}

maShort <- SMA(vixVix3m, 60)
maMed <- SMA(vixVxmt, 60)
maLong <- SMA(vix3mVxmt, 60)

sigShort <- vixVix3m < 1 & vixVix3m < maShort
sigMed <- vixVxmt < 1 & vixVxmt < maMed 
sigLong <- vix3mVxmt < 1 & vix3mVxmt < maLong 

retsShort <- lag(sigShort, 2) * xivRets 
retsMed <- lag(sigMed, 2) * xivRets 
retsLong <- lag(sigLong, 2) * xivRets

compare <- na.omit(cbind(retsShort, retsMed, retsLong))
colnames(compare) <- c("Short", "Medium", "Long")
charts.PerformanceSummary(compare)
stratStats(compare)

With the following performance:

[Image: equity curves of the three ratio strategies]

> stratStats(compare)
                              Short    Medium     Long
Annualized Return         0.5485000 0.6315000 0.638600
Annualized Std Dev        0.3874000 0.3799000 0.378900
Annualized Sharpe (Rf=0%) 1.4157000 1.6626000 1.685600
Worst Drawdown            0.5246983 0.5318472 0.335756
Calmar Ratio              1.0453627 1.1873711 1.901976
Ulcer Performance Index   3.7893478 4.6181788 5.244137

In other words, the VIX3M/VXMT ratio sports the lowest drawdown (by a large margin) along with the highest returns.

So, when people talk about which implied volatility ratio to use, I think this offers some strong evidence for the longer-out horizon as the better predictor. It’s also why this ratio forms the basis of my subscription strategy.

Thanks for reading.

NOTE: I am currently seeking a full-time position (remote or in the northeast U.S.) related to my skill set demonstrated on this blog. Please message me on LinkedIn if you know of any opportunities which may benefit from my skill set.

(Don’t Get) Contangled Up In Noise

This post will be about investigating the efficacy of contango as a volatility trading signal.

For those that trade volatility (like me), a term you may see that’s somewhat ubiquitous is the term “contango”. What does this term mean?

Well, simple: it’s usually measured as the ratio of the second month of VIX futures over the first. The idea is that when the second month of futures is priced above the first (a ratio above 1), the market’s outlook for volatility is greater in the future than it is for the present, and therefore the futures are “in contango”–which is the case most of the time.

Furthermore, those that try to find decent volatility trading ideas may have often seen that futures in contango implies that holding a short volatility position will be profitable.

Is this the case?

Well, there’s an easy way to answer that.

First off, refer back to my post on obtaining free futures data from the CBOE.

Using this data, we can obtain our signal (that is, in order to run the code in this post, run the code in that post).

xivSig <- termStructure$C2 > termStructure$C1
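
For those who haven’t run that post’s code: termStructure is assumed here to be an xts object whose columns C1, C2, and so on hold the settles of the front-month and second-month VIX futures contracts. A minimal sketch of that shape, with made-up values (purely illustrative–skip it if you’ve built the real thing):

require(xts)

# hypothetical illustration of the shape termStructure is assumed to have;
# real values come from the CBOE settle files in the post linked above
termStructure <- xts(data.frame(C1 = c(14.2, 13.9), C2 = c(15.1, 15.0)),
                     order.by = as.Date(c("2017-01-03", "2017-01-04")))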

Now, let’s get our XIV data (again, big thanks to Mr. Helmuth Vollmeier for so kindly providing it).

require(downloader)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)
require(data.table)

download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")

download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", 
         destfile="longVXX.txt") #requires downloader package

xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))

Now, here’s how this works: as the CBOE doesn’t update its settles until around 9:45 AM EST on the following day (e.g., a Tuesday’s settle data won’t release until Wednesday at 9:45 AM EST), we have to enter at the close of the day after the signal fires. (For those wondering, my subscription strategy uses this mechanism, giving subscribers ample time to execute throughout the day.)

So, let’s calculate our backtest returns. Here’s a stratStats function to compute some summary statistics.

stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}
stratRets <- lag(xivSig, 2) * xivRets
charts.PerformanceSummary(stratRets)
stratStats(stratRets)

With the following results:

[Image: equity curve of the contango strategy]

                                 C2
Annualized Return         0.3749000
Annualized Std Dev        0.4995000
Annualized Sharpe (Rf=0%) 0.7505000
Worst Drawdown            0.7491131
Calmar Ratio              0.5004585
Ulcer Performance Index   0.7984454

So, this is obviously a disaster. Visual inspection will show devastating, multi-year drawdowns. Using the table.Drawdowns command, we can view the worst ones.

> table.Drawdowns(stratRets, top = 10)
         From     Trough         To   Depth Length To Trough Recovery
1  2007-02-23 2008-12-15 2010-04-06 -0.7491    785       458      327
2  2010-04-21 2010-06-30 2010-10-25 -0.5550    131        50       81
3  2014-07-07 2015-12-11 2017-01-04 -0.5397    631       364      267
4  2012-03-27 2012-06-01 2012-07-17 -0.3680     78        47       31
5  2017-07-25 2017-08-17 2017-10-16 -0.3427     59        18       41
6  2013-09-27 2014-04-11 2014-06-18 -0.3239    182       136       46
7  2011-02-15 2011-03-16 2011-04-26 -0.3013     49        21       28
8  2013-02-20 2013-03-01 2013-04-23 -0.2298     44         8       36
9  2013-05-20 2013-06-20 2013-07-08 -0.2261     34        23       11
10 2012-12-19 2012-12-28 2013-01-23 -0.2154     23         7       16

So, the top 3 are horrendous, and then anything above 30% is still pretty awful. A couple of those drawdowns lasted multiple years as well, with a massive length to the trough. 458 trading days is nearly two years, and 364 is about one and a half years. Imagine seeing a strategy be consistently on the wrong side of the trade for nearly two years, and when all is said and done, you’ve lost three-fourths of everything in that strategy.

There’s no sugar-coating this: such a strategy can only be called utter trash.

Let’s try one modification: we’ll require both contango (C2 > C1), and that contango be above its 60-day simple moving average, similar to my VXV/VXMT strategy.

contango <- termStructure$C2/termStructure$C1
maContango <- SMA(contango, 60)
xivSig <- contango > 1 & contango > maContango
stratRets <- lag(xivSig, 2) * xivRets
charts.PerformanceSummary(stratRets)
stratStats(stratRets)

With the results:

[Image: equity curve of the contango-plus-moving-average strategy]

> stratStats(stratRets)
                                 C2
Annualized Return         0.4271000
Annualized Std Dev        0.3429000
Annualized Sharpe (Rf=0%) 1.2457000
Worst Drawdown            0.5401002
Calmar Ratio              0.7907792
Ulcer Performance Index   1.7515706

Drawdowns:

> table.Drawdowns(stratRets, top = 10)
         From     Trough         To   Depth Length To Trough Recovery
1  2007-04-17 2008-03-17 2010-01-06 -0.5401    688       232      456
2  2014-12-08 2014-12-31 2015-04-09 -0.2912     84        17       67
3  2017-07-25 2017-09-05 2017-12-08 -0.2610     97        30       67
4  2012-03-27 2012-06-21 2012-07-02 -0.2222     68        61        7
5  2012-07-20 2012-12-06 2013-02-08 -0.2191    139        96       43
6  2015-10-20 2015-11-13 2016-03-16 -0.2084    102        19       83
7  2013-12-27 2014-04-11 2014-05-23 -0.1935    102        73       29
8  2017-03-21 2017-05-17 2017-06-26 -0.1796     68        41       27
9  2012-02-07 2012-02-15 2012-03-12 -0.1717     24         7       17
10 2016-09-08 2016-09-09 2016-12-06 -0.1616     63         2       61

So, a Calmar still safely below 1, an Ulcer Performance Index still in the basement, a maximum drawdown that’s long past the point that people will have abandoned the strategy, and so on.

So, even though it was improved, it’s still safe to say this strategy doesn’t perform too well. Even after the large 2007-2008 drawdown, it still gets some things pretty badly wrong, like being exposed to all of August 2017.

While I think there are applications for contango in volatility investing, I don’t think its use is in generating the long/short volatility signal on its own. Rather, I think other indices and sources of data do a better job of that–such as the VXV/VXMT ratio, which has since been iterated on to form my subscription strategy.

Thanks for reading.

NOTE: I am currently seeking networking opportunities, long-term projects, and full-time positions related to my skill set. My LinkedIn profile can be found here.

Launching My Subscription Service

After gauging interest from my readers, I’ve decided to open up a subscription service. I’ll copy and paste the FAQs, or my best attempt at trying to answer as many questions as possible ahead of time, and may answer more in the future.

I’m choosing to use Patreon just to outsource all of the technicalities of handling subscriptions and creating a centralized source to post subscription-based content.

Here’s the link to subscribe.

FAQs (copied from the subscription page):

*****

Thank you for visiting. After gauging interest from my readership on my main site (www.quantstrattrader.wordpress.com), I created this as a subscription page for quantitative investment strategies, with the goal of having subscribers turn their cash into more cash, net of subscription fees (hopefully). The systems I develop come from a background of learning from experienced quantitative trading professionals and senior researchers at large firms. I initially published a prototype of the current system several years back and watched it being tracked, before finally starting to deploy my own capital earlier this year and making the most recent modifications even more recently.

And while past performance doesn’t guarantee future results and the past doesn’t repeat itself, it often rhymes, so let’s turn money into more money.

Some FAQs about the strategy:

What is the subscription price for this strategy?

Currently, after gauging interest from readers and doing research based on other sites, the tentative pricing is $50/month. As this strategy builds a track record, that may be subject to change in the future, and notifications will be made in such an event.

What is the description of the strategy?

The strategy is mainly a short volatility system that trades XIV, ZIV, and VXX. As far as volatility strategies go, it’s fairly conservative in that it uses several different checks before committing to a position.

What is the strategy’s edge?

In two words: risk management. Essentially, there are a few separate criteria to select an investment, and the system spends a not-insignificant amount of time with no exposure when some of these criteria provide contradictory signals. Furthermore, the system uses disciplined methodologies in its construction in order to avoid unnecessary free parameters, and to keep the strategy as parsimonious as possible.

Do you trade your own capital with this strategy?

Yes. 

When was the in-sample training period for this system?

A site that no longer updates its blog (volatility made simple) once tracked a more rudimentary strategy that I wrote about several years ago. I was particularly pleased with the results of that vetting, and recently have received input to improve my system to a much greater degree, as well as gained the confidence to invest live capital into it.

How many trades per year does the system make?

In the backtest from April 20, 2008 through the end of 2016, the system made 187 transactions in XIV (both buy and sell), 160 in ZIV, and 52 in VXX. Meaning that over the course of approximately 9 years, there were on average 43 transactions per year. In some cases, this may simply be switching from XIV to ZIV or vice versa. In other words, trades last approximately a week (some may be longer, some shorter).

When will signals be posted?

Signals will be posted sometime between 12 PM and market close (4 PM EST). In backtesting, they are tested as market on close orders, so individuals assume any risk/reward by executing earlier.

How often is this system in the market?

About 56%. However, over the course of backtesting (and live trading), only about 9% of months have zero return. 

What are the distribution of winning, losing, and zero return months?

As of late October 2017, there have been about 65% winning months (with an average gain of 12.8%), 26% losing months (with an average loss of 4.9%), and 9% zero months.

What are some other statistics about the strategy?

Since 2011 (around the time that XIV officially came into inception as opposed to using synthetic data), the strategy has boasted an 82% annualized return, with a 24.8% maximum drawdown and an annualized standard deviation of 35%. This means a Sharpe ratio (return to standard deviation) higher than 2, and a Calmar ratio higher than 3. It also has an Ulcer Performance Index of 10.

What are the strategy’s worst drawdowns?

Since 2011 (again, around the time of XIV’s inception), the largest drawdown was 24.8%, starting on October 31, 2011, and making a new equity high on January 12, 2012. The longest drawdown started on August 21, 2014 and recovered on April 10, 2015, and lasted for 160 trading days.

Will the subscription price change in the future?

If the strategy continues to deliver strong returns, then there may be reason to increase the price so long as the returns bear it out.

Can a conservative risk signal be provided for those who might not be able to tolerate a 25% drawdown? 

A variant of the strategy that targets about half of the annualized standard deviation of the strategy boasts a 40% annualized return for about 12% drawdown since 2011. Overall, this has slightly higher reward to risk statistics, but at the cost of cutting aggregate returns in half.

Can’t XIV have a termination event?

This refers to the idea of the XIV ETN terminating if it loses 80% of its value in a single day. To give an idea of the likelihood of this event, using synthetic data, the XIV ETN had a massive drawdown of 92% over the course of the 2008 financial crisis. For the history of that synthetic (pre-inception) and realized (post-inception) data, the absolute worst day was a down day of 26.8%. To note, the strategy was not in XIV during that day.

What was the strategy’s worst day?

On September 16, 2016, the strategy lost 16% in one day. This was at the tail end of a stretch of positive days that made about 40%.

What are the strategy’s risks?

The first risk is that given that this strategy is naturally biased towards short volatility, that it can have potential for some sharp drawdowns due to the nature of volatility spikes. The other risk is that given that this strategy sometimes spends its time in ZIV, that it will underperform XIV on some good days. This second risk is a consequence of additional layers of risk management in the strategy.

How complex is this strategy?

Not overly. It’s only slightly more complex than a basic momentum strategy when counting free parameters, and can be explained in a couple of minutes.

Does this strategy use any complex machine learning methodologies?

No. The data requirements for such algorithms and the noise in the financial world make it very risky to apply these methodologies, and research as of yet did not bear fruit to justify incorporating them.

Will instrument volume ever be a concern (particularly ZIV)?

According to one individual who worked on the creation of the original VXX ETN (and by extension, its inverse, XIV), new shares of ETNs can be created by the issuer (in ZIV’s case, Credit Suisse) on demand. In short, the concern of volume is more of a concern of the reputability of the person making the request. In other words, it depends on how well the strategy does.

Can the strategy be held liable/accountable/responsible for a subscriber’s loss/drawdown?

​Let this serve as a disclaimer: by subscribing, you agree to waive any legal claim against the strategy, or its creator(s) in the event of drawdowns, losses, etc. The subscription is for viewing the output of a program, and this service does not actively manage a penny of subscribers’ actual assets. Subscribers can choose to ignore the strategy’s signals at a moment’s notice at their discretion. The program’s output should not be thought of as the investment advice coming from a CFP, CFA, RIA, etc.

Why should these signals be trusted?

Because my work on other topics has been on full, public display for several years. Unlike other websites, I have shown “bad backtests”, thus breaking the adage of “you’ll never see a bad backtest”. I have shown thoroughness in my research, and the same thoroughness has been applied towards this system as well. Until there is a longer track record such that the system can stand on its own, the trust in the system is the trust in the system’s creator.

Who is the intended audience for these signals?

The intended audience is individual, retail investors with a certain risk tolerance, and is priced accordingly. 

Isn’t volatility investing very risky?

It’s risky from the perspective of the underlying instrument having the capacity to realize very large drawdowns (greater than 60%, and even greater than 90%). However, from a purely numerical standpoint: Amazon, the company taking over so much of shopping, has since its inception had a 37.1% annualized rate of return, a standard deviation of 61.5%, a worst drawdown of 94%, and an Ulcer Performance Index of 0.9. By comparison, XIV, from 2008 (using synthetic data), has had a 35.5% annualized rate of return, a standard deviation of 57.7%, a worst drawdown of 92%, and an Ulcer Performance Index of 0.6. If Amazon is considered a top-notch asset, then from a quantitative comparison, a system looking to capitalize on volatility bets should be viewed from a similar perspective. To be sure, the strategy’s performance vastly outperforms that of buying and holding XIV (which nobody should do). However, the notion that volatility products are much riskier than household tech names just does not hold true unless the future wildly differs from the past.

Is there a possibility for collaborating with other strategy creators?

Feel free to contact me at my email ilya.kipnis@gmail.com to discuss that possibility. I request a daily stream of returns before starting any discussion.

Why Patreon?

Because past all the artsy-craftsy window dressing and interesting choice of vocabulary, Patreon is simply a platform that processes payments and creates a centralized platform from which to post subscription-based content, as opposed to maintaining mailing lists and other technical headaches. Essentially, it’s simply a way to outsource the technical end of running a business, even if the window dressing is a bit unorthodox.

***

Thanks for reading.

NOTE: I am currently interested in networking and full-time roles based on my skills. My LinkedIn profile can be found here.

The Kelly Criterion — Does It Work?

This post will be about implementing and investigating the running Kelly Criterion — that is, a constantly adjusted Kelly Criterion that changes as a strategy realizes returns.

For those not familiar with the Kelly Criterion, it’s the idea of adjusting a bet size to maximize a strategy’s long-term growth rate. Both Wikipedia (https://en.wikipedia.org/wiki/Kelly_criterion) and Investopedia have entries on the Kelly Criterion. Essentially, it’s about maximizing the long-run expectation of a betting system by sizing bets higher when the edge is higher, and vice versa.

There are two formulations for the Kelly criterion: the Wikipedia result presents it as the mean return over the variance (mean over sigma squared). The Investopedia definition is P - [(1-P)/winLossRatio], where P is the probability of a winning bet and winLossRatio is the average win over the average loss.
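
To make the Investopedia version concrete with some made-up numbers: a 55% win rate with an average win 1.5 times the average loss implies betting a quarter of capital.

# worked example of the Investopedia formulation (made-up numbers)
p <- 0.55            # probability of a winning bet
winLossRatio <- 1.5  # average win / average loss
p - (1 - p)/winLossRatio # 0.55 - 0.45/1.5 = 0.25, i.e., bet 25% of capital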

In any case, here are the two implementations.

require(TTR) # for runSum, runMean, runVar

investoPediaKelly <- function(R, kellyFraction = 1, n = 63) {
  # rolling win probability over the past n periods
  signs <- sign(R)
  posSigns <- signs; posSigns[posSigns < 0] <- 0
  negSigns <- signs; negSigns[negSigns > 0] <- 0; negSigns <- negSigns * -1
  probs <- runSum(posSigns, n = n)/(runSum(posSigns, n = n) + runSum(negSigns, n = n))
  # rolling win/loss ratio: average win over the magnitude of the average loss
  # (note the abs() -- the average loss is negative, and the formula wants its size)
  posVals <- R; posVals[posVals < 0] <- 0
  negVals <- R; negVals[negVals > 0] <- 0
  wlRatio <- (runSum(posVals, n = n)/runSum(posSigns, n = n))/abs(runSum(negVals, n = n)/runSum(negSigns, n = n))
  kellyRatio <- probs - ((1-probs)/wlRatio)
  out <- kellyRatio * kellyFraction
  return(out)
}

wikiKelly <- function(R, kellyFraction = 1, n = 63) {
  # rolling mean over rolling variance -- the "mean over sigma squared" form
  return(runMean(R, n = n)/runVar(R, n = n)*kellyFraction)
}
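
Note that both functions expose a kellyFraction argument, so running at a fraction of full Kelly (a common way to tame noisy estimates) is a one-line change. For some daily return series rets (a hypothetical name):

# hypothetical usage: half-Kelly sizing over a 63-day rolling window
halfKellyWeights <- investoPediaKelly(rets, kellyFraction = 0.5, n = 63)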

Let’s try this with some data. At this point in time, I’m going to show a non-replicable volatility strategy that I currently trade.

[Image: equity curve of the volatility strategy since 2011]

For the record, here are its statistics:

                              Close
Annualized Return         0.8021000
Annualized Std Dev        0.3553000
Annualized Sharpe (Rf=0%) 2.2574000
Worst Drawdown            0.2480087
Calmar Ratio              3.2341613

Now, let’s see what the Wikipedia version does:

# 'out' here holds the daily returns of the strategy shown above
badKelly <- out * lag(wikiKelly(out), 2)
charts.PerformanceSummary(badKelly)

[Image: equity curve under the Wikipedia Kelly formulation]

The results are simply ridiculous. And here’s why: say you have a mean return of .0005 per day (5 bps/day) and a standard deviation equal to that (that is, a daily Sharpe ratio of 1). The formula then gives .0005/.0005^2 = 1/.0005 = 2000. In other words, a leverage of 2000 times. This clearly makes no sense.
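
As a quick sanity check of that arithmetic:

# mean over variance, not mean over standard deviation, so for small
# daily returns the implied leverage explodes
mu <- 0.0005
sigma <- 0.0005
mu/sigma^2 # 0.0005/0.00000025 = 2000, i.e., 2000x leverage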

The other variant is the more particular Investopedia definition.

invKelly <- out * lag(investoPediaKelly(out), 2)
charts.PerformanceSummary(invKelly)

[Image: equity curve under the Investopedia Kelly formulation]

Looks a bit more reasonable. However, how does it stack up against not using it at all?

compare <- na.omit(cbind(out, invKelly))
charts.PerformanceSummary(compare)

[Image: the strategy with and without Kelly sizing]

Turns out, the fabled Kelly Criterion doesn’t really change things all that much.

For the record, here are the statistical comparisons:

                               Base     Kelly
Annualized Return         0.8021000 0.7859000
Annualized Std Dev        0.3553000 0.3588000
Annualized Sharpe (Rf=0%) 2.2574000 2.1903000
Worst Drawdown            0.2480087 0.2579846
Calmar Ratio              3.2341613 3.0463063

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

An Out of Sample Update on DDN’s Volatility Momentum Trading Strategy and Beta Convexity

The first part of this post is a quick update on the volatility ETN momentum strategy by Tony Cooper of Double Digit Numerics, from the Volatility Made Simple blog (which stopped updating as of a year and a half ago). The second part will cover Dr. Jonathan Kinlay’s Beta Convexity concept.

So, now that I have the ability to generate a term structure and constant expiry contracts, I decided to revisit some of the strategies on Volatility Made Simple and see if any of them are any good (long story short: all of the publicly detailed ones aren’t so hot besides mine–they either have a massive drawdown in-sample around the time of the crisis, or a massive drawdown out-of-sample).

Why this strategy? Because it seemed different from most of the usual term structure ratio trades (of which mine is an example), so I thought I’d check out how it did since its first publishing date, and because it’s rather easy to understand.

Here’s the strategy:

Take XIV, VXX, ZIV, VXZ, and SHY (this last one as the “risk free” asset), and at the close, invest in whichever has had the highest 83-day momentum (this was the result of optimization done on Volatility Made Simple).

Here’s the code to do this in R, using the Quandl EOD database. There are two variants tested–observe the close, buy the close (AKA magical thinking), and observe the close, buy tomorrow’s close.

require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)

Quandl.api_key("yourKeyHere")

symbols <- c("XIV", "VXX", "ZIV", "VXZ", "SHY")

prices <- list()
for(i in 1:length(symbols)) {
  price <- Quandl(paste0("EOD/", symbols[i]), start_date="1990-12-31", type = "xts")$Adj_Close
  colnames(price) <- symbols[i]
  prices[[i]] <- price
}
prices <- na.omit(do.call(cbind, prices))
returns <- na.omit(Return.calculate(prices))

# find highest asset, assign column names
topAsset <- function(row, assetNames) {
  out <- row==max(row, na.rm = TRUE)
  names(out) <- assetNames
  out <- data.frame(out)
  return(out)
}

# compute momentum
momentums <- na.omit(xts(apply(prices, 2, ROC, n = 83), order.by=index(prices)))

# find highest asset each day, turn it into an xts
highestMom <- apply(momentums, 1, topAsset, assetNames = colnames(momentums))
highestMom <- xts(t(do.call(cbind, highestMom)), order.by=index(momentums))

# observe today's close, buy tomorrow's close
buyTomorrow <- na.omit(xts(rowSums(returns * lag(highestMom, 2)), order.by=index(highestMom)))

# observe today's close, buy today's close (aka magic thinking)
magicThinking <- na.omit(xts(rowSums(returns * lag(highestMom)), order.by=index(highestMom)))

out <- na.omit(cbind(buyTomorrow, magicThinking))
colnames(out) <- c("buyTomorrow", "magicalThinking")

# results
charts.PerformanceSummary(out['2014-04-11::'], legend.loc = 'top')
rbind(table.AnnualizedReturns(out['2014-04-11::']), maxDrawdown(out['2014-04-11::']))

Pretty simple.

Here are the results.

[Image: equity curves of the two momentum variants]

> rbind(table.AnnualizedReturns(out['2014-04-11::']), maxDrawdown(out['2014-04-11::']))
                          buyTomorrow magicalThinking
Annualized Return          -0.0320000       0.0378000
Annualized Std Dev          0.5853000       0.5854000
Annualized Sharpe (Rf=0%)  -0.0547000       0.0646000
Worst Drawdown              0.8166912       0.7761655

Looks like this strategy didn’t pan out too well. Just a daily reminder that if you’re using a fine grid search to select a particularly good parameter (e.g., n = 83 days? Maybe four 21-day trading months, but even that would have been n = 84), you’re asking for, in the words of Mr. Tony Cooper, a visit from the grim reaper.

****

Moving on to another topic: whenever Dr. Jonathan Kinlay posts something I think I can replicate, I’d be very wise to do so, as he is a very skilled and experienced practitioner (and also includes me on his blogroll).

A topic that Dr. Kinlay covered is the idea of beta convexity–namely, that an asset’s beta to a benchmark may be different when the benchmark is up as compared to when it’s down. Essentially, it’s the idea that we want to weed out firms that are what I’d deem “losers in disguise”–i.e., those that act fine when times are good (which is when we really don’t care about diversification, since everything is going up anyway), but do nothing for you during bad times.

The beta convexity is calculated quite simply: it’s the beta of an asset to a benchmark when the benchmark has a positive return, minus the beta of that asset to the benchmark when the benchmark has a negative return, with the difference squared. That is, (beta_bench_positive – beta_bench_negative)^2.

Here’s some R code to demonstrate this, using IBM vs. the S&P 500 since 1995.

require(Quandl)
require(PerformanceAnalytics)

Quandl.api_key("yourKeyHere") # set your Quandl API key first

ibm <- Quandl("EOD/IBM", start_date="1995-01-01", type = "xts")
ibmRets <- Return.calculate(ibm$Adj_Close)

spy <- Quandl("EOD/SPY", start_date="1995-01-01", type = "xts")
spyRets <- Return.calculate(spy$Adj_Close)

rets <- na.omit(cbind(ibmRets, spyRets))
colnames(rets) <- c("IBM", "SPY")

betaConvexity <- function(Ra, Rb) {
  positiveBench <- Rb[Rb > 0]
  assetPositiveBench <- Ra[index(positiveBench)]
  positiveBeta <- CAPM.beta(Ra = assetPositiveBench, Rb = positiveBench)
  
  negativeBench <- Rb[Rb < 0]
  assetNegativeBench <- Ra[index(negativeBench)]
  negativeBeta <- CAPM.beta(Ra = assetNegativeBench, Rb = negativeBench)
  
  out <- (positiveBeta - negativeBeta) ^ 2
  return(out)
}

betaConvexity(rets$IBM, rets$SPY)

For the result:

> betaConvexity(rets$IBM, rets$SPY)
[1] 0.004136034
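
To use beta convexity as a screen rather than a one-off statistic, the function can simply be applied across a universe of return series. A hypothetical sketch (the tickers are purely illustrative):

# hypothetical: rank a small universe by beta convexity
# (lower = more symmetric upside/downside beta)
tickers <- c("IBM", "MSFT", "XOM")
convexities <- sapply(tickers, function(tk) {
  px <- Quandl(paste0("EOD/", tk), start_date = "1995-01-01", type = "xts")
  merged <- na.omit(cbind(Return.calculate(px$Adj_Close), spyRets))
  betaConvexity(Ra = merged[, 1], Rb = merged[, 2])
})
sort(convexities)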

Thanks for reading.

NOTE: I am always looking to network, and am currently actively looking for full-time opportunities which may benefit from my skill set. If you have a position which may benefit from my skills, do not hesitate to reach out to me. My LinkedIn profile can be found here.