The Kelly Criterion — Does It Work?

This post will be about implementing and investigating the running Kelly Criterion — that is, a constantly adjusted Kelly Criterion that changes as a strategy realizes returns.

For those not familiar with the Kelly Criterion, it’s the idea of adjusting a bet size to maximize a strategy’s long-term growth rate. Both Wikipedia (https://en.wikipedia.org/wiki/Kelly_criterion) and Investopedia have entries on the Kelly Criterion. Essentially, it’s about maximizing the long-run expectation of a betting system by sizing bets higher when the edge is higher, and vice versa.

There are two formulations for the Kelly criterion: the Wikipedia version presents it as mean over sigma squared, that is, the mean return divided by the variance of returns. The Investopedia definition is P - [(1-P)/winLossRatio], where P is the probability of a winning bet and winLossRatio is the average win divided by the average loss.
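
As a quick, hypothetical worked example of the Investopedia form (the 55% win rate and 1.5 win/loss ratio are made up for illustration, not taken from any strategy in this post):

# hypothetical numbers: win 55% of the time, average win 1.5x the average loss
p <- 0.55
winLossRatio <- 1.5
p - (1 - p)/winLossRatio # 0.55 - 0.45/1.5 = 0.25, i.e. risk 25% of capital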

In any case, here are the two implementations.

require(xts)
require(TTR)
require(PerformanceAnalytics)

investoPediaKelly <- function(R, kellyFraction = 1, n = 63) {
  # running win probability: share of positive-return days over the window
  signs <- sign(R)
  posSigns <- signs; posSigns[posSigns < 0] <- 0
  negSigns <- signs; negSigns[negSigns > 0] <- 0; negSigns <- negSigns * -1
  probs <- runSum(posSigns, n = n)/(runSum(posSigns, n = n) + runSum(negSigns, n = n))
  
  # running win/loss ratio: average win over average loss (losses flipped to positive)
  posVals <- R; posVals[posVals < 0] <- 0
  negVals <- R; negVals[negVals > 0] <- 0; negVals <- negVals * -1
  wlRatio <- (runSum(posVals, n = n)/runSum(posSigns, n = n))/(runSum(negVals, n = n)/runSum(negSigns, n = n))
  
  # Investopedia Kelly: P - (1-P)/winLossRatio, scaled by the chosen Kelly fraction
  kellyRatio <- probs - ((1-probs)/wlRatio)
  out <- kellyRatio * kellyFraction
  return(out)
}

wikiKelly <- function(R, kellyFraction = 1, n = 63) {
  # running mean divided by running variance, scaled by the chosen Kelly fraction
  return(runMean(R, n = n)/runVar(R, n = n)*kellyFraction)
}
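
As a quick sanity check, here's a minimal sketch of running both functions on simulated daily returns (the simulated series is purely illustrative and has nothing to do with the strategy below):

set.seed(42)
simRets <- xts(rnorm(500, mean = 0.0005, sd = 0.01), order.by = Sys.Date() - 500:1)
tail(wikiKelly(simRets))          # running mean over running variance
tail(investoPediaKelly(simRets))  # running P - (1-P)/winLossRatio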

Let’s try this with some data. At this point in time, I’m going to show a non-replicable volatility strategy that I currently trade.

[Chart: equity curve of the volatility strategy since 2011]

For the record, here are its statistics:

                              Close
Annualized Return         0.8021000
Annualized Std Dev        0.3553000
Annualized Sharpe (Rf=0%) 2.2574000
Worst Drawdown            0.2480087
Calmar Ratio              3.2341613

Now, let’s see what the Wikipedia version does:

# 'out' holds the strategy's daily returns as an xts object
badKelly <- out * lag(wikiKelly(out), 2)
charts.PerformanceSummary(badKelly)

[Chart: equity curve using the Wikipedia Kelly formulation]

The results are simply ridiculous. And here would be why: say you have a mean return of .0005 per day (5 bps/day), and a daily standard deviation equal to that (that is, a daily mean-to-sigma ratio of 1). The mean-over-variance formula then gives .0005/.0005^2 = 1/.0005 = 2000. In other words, a leverage of 2000 times. This clearly makes no sense.
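
Here's that arithmetic spelled out in R, just to drive the point home:

mu    <- 0.0005   # 5 bps of mean return per day
sigma <- 0.0005   # daily standard deviation equal to the mean
mu/sigma^2        # 2000 -- i.e. 2000x leverage from the mean-over-variance formula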

The other variant is the more granular Investopedia definition.

invKelly <- out * lag(investoPediaKelly(out), 2)
charts.PerformanceSummary(invKelly)

[Chart: equity curve using the Investopedia Kelly formulation]

Looks a bit more reasonable. However, how does it stack up against not using it at all?

compare <- na.omit(cbind(out, invKelly))
charts.PerformanceSummary(compare)

[Chart: the base strategy vs. the Investopedia Kelly version]

Turns out, the fabled Kelly Criterion doesn’t really change things all that much.

For the record, here are the statistical comparisons:

                               Base     Kelly
Annualized Return         0.8021000 0.7859000
Annualized Std Dev        0.3553000 0.3588000
Annualized Sharpe (Rf=0%) 2.2574000 2.1903000
Worst Drawdown            0.2480087 0.2579846
Calmar Ratio              3.2341613 3.0463063

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.


Leverage Up When You’re Down?

This post will investigate the idea of reducing leverage when drawdowns are small, and increasing leverage as losses accumulate. It’s based on the idea that whatever goes up must come down, and whatever comes down generally goes back up.

I originally came across this idea from this blog post.

So, first off, let’s write an easy function that allows replication of this idea. Essentially, we have several arguments:

One: the default leverage (that is, when your drawdown is zero, what’s your exposure?). For reference, in the original post, it’s 10%.

Next: the various leverage levels. In the original post, the leverage levels are 25%, 50%, and 100%.

And lastly, we need the corresponding thresholds at which to apply those leverage levels. In the original post, those levels are 20%, 40%, and 55%.

So, now we can create a function to implement that in R. The idea being that we have R compute the drawdowns, and then use that information to determine leverage levels as precisely and frequently as possible.

Here’s a quick piece of code to do so:

require(xts)
require(PerformanceAnalytics)

drawdownLev <- function(rets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55)) {
  # compute drawdowns
  dds <- PerformanceAnalytics:::Drawdowns(rets)
  
  # initialize leverage to the default level
  dds$lev <- defaultLev
  
  # change the leverage for every threshold
  for(i in 1:length(ddthresh)) {
    
    # as drawdowns fall through each threshold, raise leverage
    # (the first column of dds holds the drawdowns, whatever the return column is named)
    dds$lev[dds[,1] < ddthresh[i]] <- levs[i]
  }
  
  # compute the new strategy returns -- observe today's drawdown, apply leverage at the next close
  out <- rets * lag(dds$lev, 2)
  
  # return the leverage and the new returns
  leverage <- dds$lev
  colnames(leverage) <- c("DDLev_leverage")
  return(list(leverage, out))
}

So, let’s replicate some results.

require(downloader)
require(xts)
require(PerformanceAnalytics)


download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")


xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))

xivDDlev <- drawdownLev(xivRets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55))
compare <- na.omit(cbind(xivDDlev[[2]], xivRets))
colnames(compare) <- c("XIV_DD_leverage", "XIV")

charts.PerformanceSummary(compare['2011::2016'])

And our results look something like this:

[Chart: XIV with drawdown-based leverage vs. XIV, 2011-2016]

                          XIV_DD_leverage       XIV
Annualized Return               0.2828000 0.2556000
Annualized Std Dev              0.3191000 0.6498000
Annualized Sharpe (Rf=0%)       0.8862000 0.3934000
Worst Drawdown                  0.4870604 0.7438706
Calmar Ratio                    0.5805443 0.3436668

That said, what would happen if one were to extend the data for all available XIV data?

[Chart: XIV with drawdown-based leverage vs. XIV, full sample]

> rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))
                          XIV_DD_leverage       XIV
Annualized Return               0.1615000 0.3319000
Annualized Std Dev              0.3691000 0.5796000
Annualized Sharpe (Rf=0%)       0.4375000 0.5727000
Worst Drawdown                  0.8293650 0.9215784
Calmar Ratio                    0.1947428 0.3601385

A different story.

In this case, I think the takeaway is that such a mechanism does well when the benchmark’s drawdowns occur sharply (so the lower exposure protects against them), and when the benchmark then spends much of its time in recovery mode (so the increased exposure has time to earn outsized returns) before drawing down again. When the benchmark continues to see drawdowns after maximum leverage is reached, or continues to perform well when not in drawdown, such a mechanism falls behind quickly.

As always, there is no free lunch when it comes to drawdowns, as trying to lower exposure in preparation for a correction will necessarily mean forfeiting a painful amount of upside in the good times, at least as presented in the original post.

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

An Out of Sample Update on DDN’s Volatility Momentum Trading Strategy and Beta Convexity

The first part of this post is a quick update on Tony Cooper’s (of Double Digit Numerics) volatility ETN momentum strategy from the Volatility Made Simple blog (which stopped updating about a year and a half ago). The second part will cover Dr. Jonathan Kinlay’s Beta Convexity concept.

So, now that I have the ability to generate a term structure and constant expiry contracts, I decided to revisit some of the strategies on Volatility Made Simple and see if any of them are any good (long story short: all of the publicly detailed ones aren’t so hot besides mine–they either have a massive drawdown in-sample around the time of the crisis, or a massive drawdown out-of-sample).

Why this strategy? Because it seemed different from most of the usual term structure ratio trades (of which mine is an example), so I thought I’d check out how it did since its first publishing date, and because it’s rather easy to understand.

Here’s the strategy:

Take XIV, VXX, ZIV, VXZ, and SHY (this last one as the “risk free” asset), and at the close, invest in whichever has had the highest 83 day momentum (this was the result of optimization done on volatilityMadeSimple).

Here’s the code to do this in R, using the Quandl EOD database. There are two variants tested–observe the close, buy the close (AKA magical thinking), and observe the close, buy tomorrow’s close.

require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)

Quandl.api_key("yourKeyHere")

symbols <- c("XIV", "VXX", "ZIV", "VXZ", "SHY")

prices <- list()
for(i in 1:length(symbols)) {
  price <- Quandl(paste0("EOD/", symbols[i]), start_date="1990-12-31", type = "xts")$Adj_Close
  colnames(price) <- symbols[i]
  prices[[i]] <- price
}
prices <- na.omit(do.call(cbind, prices))
returns <- na.omit(Return.calculate(prices))

# find highest asset, assign column names
topAsset <- function(row, assetNames) {
  out <- row==max(row, na.rm = TRUE)
  names(out) <- assetNames
  out <- data.frame(out)
  return(out)
}

# compute momentum
momentums <- na.omit(xts(apply(prices, 2, ROC, n = 83), order.by=index(prices)))

# find highest asset each day, turn it into an xts
highestMom <- apply(momentums, 1, topAsset, assetNames = colnames(momentums))
highestMom <- xts(t(do.call(cbind, highestMom)), order.by=index(momentums))

# observe today's close, buy tomorrow's close
buyTomorrow <- na.omit(xts(rowSums(returns * lag(highestMom, 2)), order.by=index(highestMom)))

# observe today's close, buy today's close (aka magic thinking)
magicThinking <- na.omit(xts(rowSums(returns * lag(highestMom)), order.by=index(highestMom)))

out <- na.omit(cbind(buyTomorrow, magicThinking))
colnames(out) <- c("buyTomorrow", "magicalThinking")

# results
charts.PerformanceSummary(out['2014-04-11::'], legend.loc = 'top')
rbind(table.AnnualizedReturns(out['2014-04-11::']), maxDrawdown(out['2014-04-11::']))

Pretty simple.

Here are the results.

[Chart: equity curves of the two execution variants since 2014-04-11]

> rbind(table.AnnualizedReturns(out['2014-04-11::']), maxDrawdown(out['2014-04-11::']))
                          buyTomorrow magicalThinking
Annualized Return          -0.0320000       0.0378000
Annualized Std Dev          0.5853000       0.5854000
Annualized Sharpe (Rf=0%)  -0.0547000       0.0646000
Worst Drawdown              0.8166912       0.7761655

Looks like this strategy didn’t pan out too well. Just a daily reminder that if you’re using a fine grid-search to select a particularly good parameter (EG n = 83 days? Maybe four 21-day trading months, but even that would have been n = 84), you’re asking for, in the words of Mr. Tony Cooper, a visit from the grim reaper.

****

Moving on to another topic: whenever Dr. Jonathan Kinlay posts something that I think I can replicate, I’d be very wise to do so, as he is a very skilled and experienced practitioner (and he also includes me on his blogroll).

A topic that Dr. Kinlay covered is the idea of beta convexity–namely, that an asset’s beta to a benchmark may be different when the benchmark is up as compared to when it’s down. Essentially, it’s the idea that we want to weed out firms that are what I’d deem “losers in disguise”–I.E. those that act fine when times are good (which is when we really don’t care about diversification, since everything is going up anyway), but provide no protection during bad times.

The beta convexity is calculated quite simply: it’s the beta of an asset to a benchmark when the benchmark has a positive return, minus the beta of the asset to the benchmark when the benchmark has a negative return, with that difference squared. That is, (beta_bench_positive – beta_bench_negative) ^ 2. For example, an up-market beta of 0.9 and a down-market beta of 1.3 would give a convexity of (0.9 – 1.3) ^ 2 = 0.16.

Here’s some R code to demonstrate this, using IBM vs. the S&P 500 since 1995.

ibm <- Quandl("EOD/IBM", start_date="1995-01-01", type = "xts")
ibmRets <- Return.calculate(ibm$Adj_Close)

spy <- Quandl("EOD/SPY", start_date="1995-01-01", type = "xts")
spyRets <- Return.calculate(spy$Adj_Close)

rets <- na.omit(cbind(ibmRets, spyRets))
colnames(rets) <- c("IBM", "SPY")

betaConvexity <- function(Ra, Rb) {
  positiveBench <- Rb[Rb > 0]
  assetPositiveBench <- Ra[index(positiveBench)]
  positiveBeta <- CAPM.beta(Ra = assetPositiveBench, Rb = positiveBench)
  
  negativeBench <- Rb[Rb < 0]
  assetNegativeBench <- Ra[index(negativeBench)]
  negativeBeta <- CAPM.beta(Ra = assetNegativeBench, Rb = negativeBench)
  
  out <- (positiveBeta - negativeBeta) ^ 2
  return(out)
}

betaConvexity(rets$IBM, rets$SPY)

For the result:

> betaConvexity(rets$IBM, rets$SPY)
[1] 0.004136034
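
And here’s a minimal sketch (with a hypothetical, made-up ticker list) of how one might rank a small universe by beta convexity, reusing the Quandl EOD setup and the spyRets series from above:

tickers <- c("IBM", "MSFT", "XOM")
convexities <- sapply(tickers, function(ticker) {
  asset <- Quandl(paste0("EOD/", ticker), start_date="1995-01-01", type = "xts")
  assetRets <- Return.calculate(asset$Adj_Close)
  aligned <- na.omit(cbind(assetRets, spyRets))
  betaConvexity(aligned[,1], aligned[,2])
})
sort(convexities) # lower values = more symmetric behavior in up and down markets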

Thanks for reading.

NOTE: I am always looking to network, and am currently actively looking for full-time opportunities which may benefit from my skill set. If you have a position which may benefit from my skills, do not hesitate to reach out to me. My LinkedIn profile can be found here.

Constant Expiry VIX Futures (Using Public Data)

This post will be about creating constant-expiry contracts (E.G. a rolling 30-day contract) using VIX futures settlement data from the CBOE and the spot VIX level (from Yahoo Finance, or wherever else). While such synthetics can sometimes be traded, that is not always the case (for instance, whenever the desired expiry is shorter than the front month’s time to expiry).

The last time I visited this topic, I created a term structure using publicly available data from the CBOE, along with an external expiry calendar.

The logical next step, of course, is to create constant-expiry contracts, which may or may not be tradable (a constant 30-day contract, for example, cannot be traded on days when even the front month has more than 30 days to expiry).

So here’s where we left off: a way to create a continuous term structure using CBOE settlement VIX data.

So from here, before anything else, we need to get VIX data. And while the getSymbols command used to be the easy way to do that, Yahoo broke its API (what else do you expect from an otherwise-irrelevant, washed-up web 1.0 dinosaur?), so it’s not possible to get free Yahoo data at this point in time (in the event that Josh Ulrich doesn’t fix this issue in the near future, I’m open to suggestions for other free sources of data of reputable quality). That means we need to get VIX data from elsewhere: particularly, the CBOE itself, which is a one-stop shop for all VIX-related data…and most likely some other interesting futures as well.

So here’s how to get VIX data from the CBOE (thanks, all you awesome CBOE people! And a shoutout to all my readers from the CBOE, I’m sure some of you are from there).

require(data.table)
require(xts)
require(quantmod)

VIX <- fread("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vixcurrent.csv", skip = 1)
VIXdates <- VIX$Date
VIX$Date <- NULL; VIX <- xts(VIX, order.by=as.Date(VIXdates, format = '%m/%d/%Y'))
spotVix <- Cl(VIX)

Next, there’s a need for some utility functions to help out with identifying which futures contracts to use for constructing synthetics.

# find column with greatest days to expiry less than or equal to desired days to expiry
shortDurMinMax <- function(row, daysToExpiry) {
  return(max(which(row <= daysToExpiry)))
}

# find column with least days to expiry greater than desired days to expiry
longDurMinMax <- function(row, daysToExpiry) {
  return(min(which(row > daysToExpiry)))
}

# gets the difference between the two latest contracts (either expiry days or price)
getLastDiff <- function(row) {
  indices <- rev(which(!is.na(row)))
  out <- row[indices[1]] - row[indices[2]]
  return(out)
}

# gets the rightmost non-NA value of a row
getLastValue <- function(row) {
  indices <- rev(which(!is.na(row)))
  out <- row[indices[1]]
  return(out)
}

The first two functions are to determine short-duration and long-duration contracts. Simply, provided a row of data and the desired constant time to expiry, the first function finds the contract whose time to expiry is closest to, but no greater than, the desired amount, while the second finds the contract whose time to expiry is closest to, but greater than, the desired amount.

The next two functions are used in the scenario of a synthetic whose time to expiry is greater than that of the longest trading contract. Such a synthetic would obviously not be tradable, but it can be created for use as an indicator. The third function gets the last two non-NA values in a row (I.E. the last two prices, or the last two times to expiry), and the fourth one simply gets the rightmost non-NA value in a row.

The algorithm to create a synthetic constant-expiry contract/indicator is divided into three scenarios:

One, in which the desired time to expiry of the contract is shorter than that of the front month–such as a constant 30-day expiry contract when the front month has more than 30 days to maturity (as on Nov 17, 2016). In that case, the weight on the front month is the desired time to expiry divided by the front month’s remaining time to expiry, with the remainder in spot VIX (another asset that cannot be traded, at least conventionally).

The second scenario is one in which the desired time to expiry is longer than that of the last traded contract. For instance, if the desire was to create a contract with a year to expiry when the furthest-out contract expires in eight months, there obviously won’t be data for such a contract. In such a case, the algorithm computes the linear slope between the last two available contracts, and adds to the last contract’s price the product of that slope and the time remaining between the desired expiry and the last contract’s expiry.
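
To make that extrapolation concrete, here’s a hypothetical worked example (all numbers made up):

# suppose the two longest contracts settle at 16.5 (210 days out) and 17.0 (240 days out),
# and we want a 365-day synthetic
slope <- (17.0 - 16.5)/(240 - 210)   # ~0.0167 points per day of extra expiry
17.0 + (365 - 240) * slope           # extrapolated 365-day price: ~19.08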

Lastly, the third scenario (and the most common one in practice) is that of a synthetic for which there is both a trading contract with less time to expiry than the desired constant amount, and one with more. In this instance, a bit of linear algebra (worked out in the code comments) gives the weight of the short-expiry contract as (desired – expiry_long)/(expiry_short – expiry_long), with the long-expiry contract receiving the remaining weight.
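
Again, a hypothetical worked example of that weighting (numbers made up):

# a constant 30-day synthetic from a 25-day and a 55-day contract
daysToExpiry <- 30; exShort <- 25; exLong <- 55
weightShort <- (daysToExpiry - exLong)/(exShort - exLong)   # (30-55)/(25-55) = 0.8333
weightLong <- 1 - weightShort                               # 0.1667
weightShort * exShort + weightLong * exLong                 # sanity check: 30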

The algorithm iterates through all three scenarios, and because xts automatically sorts by timestamp, the result is an xts object, in date order, of a synthetic, constant-expiry futures contract.

Here is the code for the function.


constantExpiry <- function(spotVix, termStructure, expiryStructure, daysToExpiry) {
  
  # Compute synthetics that are too long (more time to expiry than furthest contract)
  
  # can be Inf if no column contains values greater than daysToExpiry (I.E. expiry is 3000 days)
  suppressWarnings(longCol <- xts(apply(expiryStructure, 1, longDurMinMax, daysToExpiry), order.by=index(termStructure)))
  longCol[longCol == Inf] <- 10
  
  # xts for too long to expiry -- need a NULL for rbinding if empty
  tooLong <- NULL
  
  # Extend the last term structure slope an arbitrarily long amount of time for those with too long expiry
  tooLongIdx <- index(longCol[longCol==10])
  if(length(tooLongIdx) > 0) {
    tooLongTermStructure <- termStructure[tooLongIdx]
    tooLongExpiryStructure <- expiryStructure[tooLongIdx]
    
    # difference in price/expiry for longest two contracts, use it to compute a slope
    priceDiff <- xts(apply(tooLongTermStructure, 1, getLastDiff), order.by = tooLongIdx)
    expiryDiff <- xts(apply(tooLongExpiryStructure, 1, getLastDiff), order.by = tooLongIdx)
    slope <- priceDiff/expiryDiff
    
    # get longest contract price and compute additional days to expiry from its time to expiry 
    # I.E. if daysToExpiry is 180 and longest is 120, additionalDaysToExpiry is 60
    maxDaysToExpiry <- xts(apply(tooLongExpiryStructure, 1, max, na.rm = TRUE), order.by = tooLongIdx)
    longestContractPrice <- xts(apply(tooLongTermStructure, 1, getLastValue), order.by = tooLongIdx)
    additionalDaysToExpiry <- daysToExpiry - maxDaysToExpiry
    
    # add slope multiplied by additional days to expiry to longest contract price
    tooLong <- longestContractPrice + additionalDaysToExpiry * slope
  }
  
  # compute synthetics that are too short (less time to expiry than shortest contract)
  
  # can be -Inf if no column contains values less than daysToExpiry (I.E. expiry is 5 days)
  suppressWarnings(shortCol <- xts(apply(expiryStructure, 1, shortDurMinMax, daysToExpiry), order.by=index(termStructure)))
  shortCol[shortCol == -Inf] <- 0
  
  # xts for too short to expiry -- need a NULL for rbinding if empty
  tooShort <- NULL
  
  tooShortIdx <- index(shortCol[shortCol==0])
  
  if(length(tooShortIdx) > 0) {
    tooShort <- termStructure[,1] * daysToExpiry/expiryStructure[,1] + spotVix * (1 - daysToExpiry/expiryStructure[,1])
    tooShort <- tooShort[tooShortIdx]
  }
  
  
  # compute everything in between (when time to expiry is between longest and shortest)
  
  # get unique permutations for contracts that term structure can create
  colPermutes <- cbind(shortCol, longCol)
  colnames(colPermutes) <- c("short", "long")
  colPermutes <- colPermutes[colPermutes$short > 0,]
  colPermutes <- colPermutes[colPermutes$long < 10,]
  
  regularSynthetics <- NULL
  
  # if we can construct synthetics from regular futures -- someone might enter an extremely long expiry
  # so this may not always be the case
  
  if(nrow(colPermutes) > 0) {
    
    # pasting long and short expiries into a single string for easier subsetting
    shortLongPaste <- paste(colPermutes$short, colPermutes$long, sep="_")
    uniqueShortLongPaste <- unique(shortLongPaste)
    
    regularSynthetics <- list()
    for(i in 1:length(uniqueShortLongPaste)) {
      # get unique permutation of short-expiry and long-expiry contracts
      permuteSlice <- colPermutes[which(shortLongPaste==uniqueShortLongPaste[i]),]
      expirySlice <- expiryStructure[index(permuteSlice)]
      termStructureSlice <- termStructure[index(permuteSlice)]
      
      # what are the parameters?
      shortCol <- unique(permuteSlice$short); longCol <- unique(permuteSlice$long)
      
      # computations -- some linear algebra
      
      # S/L are weights, ex_S/ex_L are time to expiry
      # D is desired constant time to expiry
      
      # S + L = 1
      # L = 1 - S
      # S + (1-S) = 1
      # 
      # ex_S * S + ex_L * (1-S) = D
      # ex_S * S + ex_L - ex_L * S = D
      # ex_S * S - ex_L * S = D - ex_L
      # S(ex_S - ex_L) = D - ex_L
      # S = (D - ex_L)/(ex_S - ex_L)
      
      weightShort <- (daysToExpiry - expirySlice[, longCol])/(expirySlice[, shortCol] - expirySlice[, longCol])
      weightLong <- 1 - weightShort
      syntheticValue <- termStructureSlice[, shortCol] * weightShort + termStructureSlice[, longCol] * weightLong
      
      regularSynthetics[[i]] <- syntheticValue
    }
    
    regularSynthetics <- do.call(rbind, regularSynthetics)
  }
  
  out <- rbind(tooShort, regularSynthetics, tooLong)
  colnames(out) <- paste0("Constant_", daysToExpiry)
  return(out)
}

And here’s how to use it:

constant30 <- constantExpiry(spotVix = spotVix, termStructure = termStructure, expiryStructure = expiryStructure, daysToExpiry = 30)
constant180 <- constantExpiry(spotVix = spotVix, termStructure = termStructure, expiryStructure = expiryStructure, daysToExpiry = 180)

constantTermStructure <- cbind(constant30, constant180)

chart.TimeSeries(constantTermStructure, legend.loc = 'topright', main = "Constant Term Structure")

With the result:
[Chart: constant 30-day and 180-day expiry synthetic term structure]

Which means that between the CBOE data itself, and this function that creates constant expiry futures from CBOE spot and futures prices, one can obtain any futures contract, whether real or synthetic, to use as an indicator for volatility trading strategies. This allows for exploration of a wide variety of volatility trading strategies.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time opportunities related to my skill set. My LinkedIn can be found here.

Furthermore, if you are a volatility ETF/futures trading professional, I am interested in possible project-based collaboration. If you are interested, please contact me.

Creating a VIX Futures Term Structure In R From Official CBOE Settlement Data

This post will be detailing a process to create a VIX term structure from freely available CBOE VIX settlement data and a calendar of freely obtainable VIX expiry dates. This has applications for volatility trading strategies.

So this post, as has been the usual for quite some time, will not be about a strategy, but rather, a tool that can be used for exploring future strategies. Particularly, volatility strategies, which seem to have been a hot topic on this blog some time ago (and might very well still be, especially since the Volatility Made Simple blog stopped tracking open-sourced strategies about a year ago).

This post’s topic is the VIX term structure–that is, creating a set of continuous contracts–properly rolled according to VIX contract specifications, rather than a hodgepodge of generic algorithms as found on some other websites. The idea is, as of the settlement of a previous day (or whenever the CBOE actually releases their data), you can construct a curve of contracts, and see if it’s in contango (front month cheaper than next month and so on) or backwardation (front month more expensive than next month, etc.).

The first (and most code-intensive) part of the procedure is fairly simple–map the contracts to an expiration date, then put their settlement prices and times to expiry into two separate xts objects, with one column for each contract.

The expiries text file is simply a collection of copied and pasted expiry dates from this site. It includes the January 2018 expiration date. Here is what it looks like:

> head(expiries)
  V1       V2   V3
1 18  January 2006
2 15 February 2006
3 22    March 2006
4 19    April 2006
5 17      May 2006
6 21     June 2006

require(xts)
require(data.table)

# 06 through 17
years <- c(paste0("0", c(6:9)), as.character(c(10:17)))

# futures months
futMonths <- c("F", "G", "H", "J", "K", "M",
            "N", "Q", "U", "V", "X", "Z")

# expiries come from http://www.macroption.com/vix-expiration-calendar/
expiries <- read.table("expiries.txt", header = FALSE, sep = " ")

# convert expiries into dates in R
dateString <- paste(expiries$V3, expiries$V2, expiries$V1, sep = "-")
dates <- as.Date(dateString, format = "%Y-%B-%d")

# map futures months to numbers for dates
monthMaps <- cbind(futMonths, c("01", "02", "03", "04", "05", "06",
                                   "07", "08", "09", "10", "11", "12"))
monthMaps <- data.frame(monthMaps)
colnames(monthMaps) <- c("futureStem", "monthNum")

dates <- data.frame(dates)
dates$dateMon <- substr(dates$dates, 1, 7)

contracts <- expand.grid(futMonths, years)
contracts <- paste0(contracts[,1], contracts[,2])
contracts <- c(contracts, "F18")
stem <- "https://cfe.cboe.com/Publish/ScheduledTask/MktData/datahouse/CFE_"
#contracts <- paste0(stem, contracts, "_VX.csv")

masterlist <- list()
timesToExpiry <- list()
for(i in 1:length(contracts)) {
  
  # obtain data
  contract <- contracts[i]
  dataFile <- paste0(stem, contract, "_VX.csv")
  expiryYear <- paste0("20",substr(contract, 2, 3))
  expiryMonth <- monthMaps$monthNum[monthMaps$futureStem == substr(contract,1,1)]
  expiryDate <- dates$dates[dates$dateMon == paste(expiryYear, expiryMonth, sep="-")]
  data <- suppressWarnings(fread(dataFile))
  
  # create dates
  dataDates <- as.Date(data$`Trade Date`, format = '%m/%d/%Y')
  
  # create time to expiration xts
  toExpiry <- xts(expiryDate - dataDates, order.by=dataDates)
  colnames(toExpiry) <- contract
  timesToExpiry[[i]] <- toExpiry
  
  # get settlements
  settlement <- xts(data$Settle, order.by=dataDates)
  colnames(settlement) <- contract
  masterlist[[i]] <- settlement
}

# cbind outputs
masterlist <- do.call(cbind, masterlist)
timesToExpiry <- do.call(cbind, timesToExpiry)

# NA out zeroes in settlements
masterlist[masterlist==0] <- NA

From there, we need to visualize how many contracts are being traded at once on any given day (I.E., what’s a good steady-state number for the term structure?).

sumNonNA <- function(row) {
  return(sum(!is.na(row)))
}

simultaneousContracts <- xts(apply(masterlist, 1, sumNonNA), order.by=index(masterlist))
chart.TimeSeries(simultaneousContracts)

The result looks like this:

So, 8 contracts (give or take) at any given point in time. This is confirmed by the end of the master list of settlements.

dim(masterlist)
tail(masterlist[,135:145])
> dim(masterlist)
[1] 3002  145
> tail(masterlist[,135:145])
           H17    J17    K17    M17    N17    Q17    U17    V17    X17    Z17   F18
2017-04-18  NA 14.725 14.325 14.525 15.175 15.475 16.225 16.575 16.875 16.925    NA
2017-04-19  NA 14.370 14.575 14.525 15.125 15.425 16.175 16.575 16.875 16.925    NA
2017-04-20  NA     NA 14.325 14.325 14.975 15.375 16.175 16.575 16.875 16.900    NA
2017-04-21  NA     NA 14.325 14.225 14.825 15.175 15.925 16.350 16.725 16.750    NA
2017-04-24  NA     NA 12.675 13.325 14.175 14.725 15.575 16.025 16.375 16.475 17.00
2017-04-25  NA     NA 12.475 13.125 13.975 14.425 15.225 15.675 16.025 16.150 16.75

Using this information, an algorithm can create eight continuous contracts, ranging from the front month to eight months out. The algorithm starts with the window from the first day of the master list to the first expiry, then moves between successive expiry windows, appending the front-month contract and the next seven contracts to a list at each step, before rbinding everything together; it does the same with the expiry structure.

termStructure <- list()
expiryStructure <- list()
masterDates <- unique(c(first(index(masterlist)), dates$dates[dates$dates %in% index(masterlist)], Sys.Date()-1))
for(i in 1:(length(masterDates)-1)) {
  subsetDates <- masterDates[c(i, i+1)]
  dateRange <- paste(subsetDates[1], subsetDates[2], sep="::")
  subset <- masterlist[dateRange,c(i:(i+7))]
  subset <- subset[-1,]
  expirySubset <- timesToExpiry[index(subset), c(i:(i+7))]
  colnames(subset) <- colnames(expirySubset) <- paste0("C", c(1:8))
  termStructure[[i]] <- subset
  expiryStructure[[i]] <- expirySubset
}

termStructure <- do.call(rbind, termStructure)
expiryStructure <- do.call(rbind, expiryStructure)

Again, one more visualization of when we have a suitable number of contracts:

simultaneousContracts <- xts(apply(termStructure, 1, sumNonNA), order.by=index(termStructure))
chart.TimeSeries(simultaneousContracts)

And in order to preserve the most data, we’ll cut the burn-in period off when we first have 7 contracts trading at once.

first(index(simultaneousContracts)[simultaneousContracts >= 7])
termStructure <- termStructure["2006-10-23::"]
expiryStructure <- expiryStructure[index(termStructure)]

So there you have it–your continuous VIX futures contract term structure, as given by the official CBOE settlements. While some may try and simulate a trading strategy based on these contracts, I myself prefer to use them as indicators or features to a model that would rather buy XIV or VXX.

One last trick, for those who want to visualize things: a way to actually plot the term structure on any given day–in particular, the most recent day in the term structure.

plot(t(coredata(last(termStructure))), type = 'b')

A clear display of contango.
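
For those who’d rather check that programmatically than eyeball the plot, here’s a quick sketch:

lastCurve <- as.numeric(last(termStructure))
all(diff(na.omit(lastCurve)) > 0) # TRUE when each contract settles higher than the one before it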

A post on how to compute synthetic constant-expiry contracts (EG constant 30 day expiry contracts) will be forthcoming in the near future.

Thanks for reading.

NOTE: I am currently interested in networking and full-time positions which may benefit from my skills. I may be contacted at my LinkedIn profile found here.

How well can you scale your strategy?

This post will deal with a quick, finger-in-the-air way of seeing how well a strategy scales–namely, how sensitive it is to latency between signal and execution, using a simple volatility trading strategy as an example. The signal will be the VIX/VXV ratio trading VXX and XIV, an idea I got from Volatility Made Simple’s amazing blog, particularly this post. The execution assumptions compared will be the “magical thinking” variant (observe the close, buy the close, named after the ruleOrderProc setting in quantstrat), buy on next-day open, buy on next-day close, and the average of the latter two.

Let’s get started.

require(downloader)
require(PerformanceAnalytics)
require(IKTrading)
require(TTR)

download("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vxvdailyprices.csv", 
         destfile="vxvData.csv")
download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")
download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT", 
         destfile="longVXX.txt") #requires downloader package
getSymbols('^VIX', from = '1990-01-01')


xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxv <- xts(read.zoo("vxvData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))
vixVxv <- Cl(VIX)/Cl(vxv)

vxxCloseRets <- Return.calculate(Cl(vxx))
vxxOpenRets <- Return.calculate(Op(vxx))
xivCloseRets <- Return.calculate(Cl(xiv))
xivOpenRets <- Return.calculate(Op(xiv))

vxxSig <- vixVxv > 1
xivSig <- 1-vxxSig

magicThinking <- vxxCloseRets * lag(vxxSig) + xivCloseRets * lag(xivSig)
nextOpen <- vxxOpenRets * lag(vxxSig, 2) + xivOpenRets * lag(xivSig, 2)
nextClose <- vxxCloseRets * lag(vxxSig, 2) + xivCloseRets * lag(xivSig, 2)
tradeWholeDay <- (nextOpen + nextClose)/2

compare <- na.omit(cbind(magicThinking, nextOpen, nextClose, tradeWholeDay))
colnames(compare) <- c("Magic Thinking", "Next Open", 
                       "Next Close", "Execute Through Next Day")
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), 
      maxDrawdown(compare), CalmarRatio(compare))

par(mfrow=c(1,1))
chart.TimeSeries(log(cumprod(1+compare), base = 10), legend.loc='topleft', ylab='log base 10 of additional equity',
                 main = 'VIX vs. VXV different execution times')

So here’s the run-through. In addition to the magical thinking strategy (observe the close, buy that same close), I tested three other variants–a variant which transacts at the next open, a variant which transacts at the next close, and the average of those two. Effectively, I feel these three could give a sense of a strategy’s performance under more realistic conditions–that is, how well the strategy performs if transacted throughout the day, assuming you’re managing a sum of money too large to just plow into the market in the closing minutes (and if you hope to get rich off of trading, you will have a larger sum of money than the amount you can apply magical thinking to). Ideally, I’d use VWAP pricing, but as that isn’t available for free anywhere I know of, readers couldn’t replicate it even if I had such data.

In any case, here are the results.

Equity curves:

Log scale (for Mr. Tony Cooper and others):

Stats:

                          Magic Thinking Next Open Next Close Execute Through Next Day
Annualized Return               0.814100 0.8922000  0.5932000                 0.821900
Annualized Std Dev              0.622800 0.6533000  0.6226000                 0.558100
Annualized Sharpe (Rf=0%)       1.307100 1.3656000  0.9529000                 1.472600
Worst Drawdown                  0.566122 0.5635336  0.6442294                 0.601014
Calmar Ratio                    1.437989 1.5831686  0.9208586                 1.367510

My reaction? The next-day-close variant performing vastly worse than the other configurations (with that deterioration occurring in the most recent years) essentially means that the fills will have to come pretty quickly at the beginning of the day. While the strategy seems somewhat scalable through the lens of this finger-in-the-air technique, in my opinion, if the first full day of possible execution after signal reception takes a strategy from a 1.44 Calmar to a .92, holding everything else constant, that’s a massive drop-off. I think this is quite a valid question to ask anyone who simply sells signals, as opposed to managing assets: namely, how sensitive are the signals to execution on the next day? After all, unless those signals come at 3:55 PM, one is most likely going to be getting filled the next day.

Now, while this strategy is a bit of a tomato can in terms of how good volatility trading strategies can get (they can get a *lot* better in my opinion), I think it made for a simple little demonstration of this technique. Again, a huge thank you to Mr. Helmuth Vollmeier for so kindly keeping up his dropbox all this time for the volatility data!

Thanks for reading.

NOTE: I am currently contracting in a data science capacity in Chicago. You can email me at ilya.kipnis@gmail.com, or find me on my LinkedIn here. I’m always open to beers after work if you’re in the Chicago area.

NOTE 2: Today, on October 21, 2015, if you’re in Chicago, there’s a Chicago R Users Group conference at Jaks Tap at 6:00 PM. Free pizza, networking, and R, hosted by Paul Teetor, who’s a finance guy. Hope to see you there.

Volatility Stat-Arb Shenanigans

This post deals with an impossible-to-implement statistical arbitrage strategy using VXX and XIV. The strategy is simple: if the average daily return of VXX and XIV was positive, short both of them at the close. This strategy makes two assumptions of varying dubiousness: that one can “observe the close and act on the close”, and that one can short VXX and XIV.

So, recently, I decided to play around with everyone’s two favorite instruments on this blog–VXX and XIV, with the idea that “hey, these two instruments are diametrically opposed, so shouldn’t there be a stat-arb trade here?”

So, in order to do a lick-finger-in-the-air visualization, I implemented Mike Harris’s momersion indicator.

require(xts)
require(TTR)
require(PerformanceAnalytics)

# momersion: the percentage of the last n days on which the return had the same sign
# as the lagged return -- above 50 suggests momentum, below 50 suggests mean reversion
momersion <- function(R, n, returnLag = 1) {
  momentum <- sign(R * lag(R, returnLag))
  momentum[momentum < 0] <- 0
  momersion <- runSum(momentum, n = n)/n * 100
  colnames(momersion) <- "momersion"
  return(momersion)
}

And then I ran the spread through it.


xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE))

xivRets <- Return.calculate(Cl(xiv))
vxxRets <- Return.calculate(Cl(vxx))

volSpread <- xivRets + vxxRets
volSpreadMomersion <- momersion(volSpread, n = 252)
plot(volSpreadMomersion)

In other words, with momersion sitting below 50% just about all the time, this spread is mean-reverting at just about all times.

And here is the code for the results from 2011 onward, from when the XIV and VXX actually started trading.

#both sides
sig <- -lag(sign(volSpread))
longShort <- sig * volSpread
charts.PerformanceSummary(longShort['2011::'], main = 'long and short spread')

#long spread only
sig <- -lag(sign(volSpread))
sig[sig < 0] <- 0
longOnly <- sig * volSpread
charts.PerformanceSummary(longOnly['2011::'], main = 'long spread only')


#short spread only
sig <- -lag(sign(volSpread))
sig[sig > 0] <- 0
shortOnly <- sig * volSpread
charts.PerformanceSummary(shortOnly['2011::'], main = 'short spread only')

threeStrats <- na.omit(cbind(longShort, longOnly, shortOnly))["2011::"]
colnames(threeStrats) <- c("LongShort", "Long", "Short")
rbind(table.AnnualizedReturns(threeStrats), CalmarRatio(threeStrats))

Here are the equity curves:

Long-short:

Long-only:

Short-only:

With the following statistics:

                          LongShort      Long    Short
Annualized Return          0.115400 0.0015000 0.113600
Annualized Std Dev         0.049800 0.0412000 0.027900
Annualized Sharpe (Rf=0%)  2.317400 0.0374000 4.072100
Calmar Ratio               1.700522 0.0166862 7.430481

In other words, the short side is absolutely amazing as a trade–except for the one small fact that it’s impossible to actually execute, at least as far as I’m aware. Anyhow, this was simply a for-fun post, but hopefully it served some purpose.

Thanks for reading.

NOTE: I am currently contracting and am looking to network in the Chicago area. You can find my LinkedIn here.