Comparing Some Strategies from Easy Volatility Investing, and the Table.Drawdowns Command

This post will be about comparing strategies from the paper “Easy Volatility Investing”, along with a demonstration of R’s table.Drawdowns command.

First off, before going further: while I think the execution assumptions found in EVI mean the strategies don't lend themselves well to actual live trading (and their risk/reward tradeoffs also leave a lot of room for improvement), I think these strategies are great as benchmarks.

So, some time ago, I did an out-of-sample test for one of the strategies found in EVI, which can be found here.

Using the same source of data, I also obtained data for SPY (though, again, AlphaVantage can also provide this service for free for those that don’t use Quandl).

Here’s the new code.

require(downloader)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)
require(data.table)

download("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vix3mdailyprices.csv", destfile="vxvData.csv")

VIX <- fread("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vixcurrent.csv", skip = 1)
VIXdates <- VIX$Date
VIX$Date <- NULL; VIX <- xts(VIX, order.by=as.Date(VIXdates, format = '%m/%d/%Y'))

vxv <- xts(read.zoo("vxvData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))
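# xivRets and vxxRets are the XIV and VXX return streams carried over from the previous post.
# A minimal sketch of recreating them (the XIV file is the same Dropbox file used elsewhere on
# this blog; "longVXX.txt" is a placeholder for whatever long VXX history you have on hand):
download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT", destfile="longXIV.txt")
xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE)) # placeholder file
vxxRets <- Return.calculate(Cl(vxx))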

ma_vRatio <- SMA(Cl(VIX)/Cl(vxv), 10)
xivSigVratio <- ma_vRatio < 1 
vxxSigVratio <- ma_vRatio > 1 

# V-ratio (VIX/VXV) strategy returns
vRatio <- lag(xivSigVratio) * xivRets + lag(vxxSigVratio) * vxxRets
# vRatio <- lag(xivSigVratio, 2) * xivRets + lag(vxxSigVratio, 2) * vxxRets


# Volatility Risk Premium Strategy
# (Quandl requires an API key, set via Quandl.api_key("yourKeyHere"))
spy <- Quandl("EOD/SPY", start_date='1990-01-01', type = 'xts')
spyRets <- Return.calculate(spy$Adj_Close)
histVol <- runSD(spyRets, n = 10, sample = FALSE) * sqrt(252) * 100
vixDiff <- Cl(VIX) - histVol
maVixDiff <- SMA(vixDiff, 5)

vrpXivSig <- maVixDiff > 0 
vrpVxxSig <- maVixDiff < 0
vrpRets <- lag(vrpXivSig, 1) * xivRets + lag(vrpVxxSig, 1) * vxxRets


obsCloseMomentum <- magicThinking # from previous post

compare <- na.omit(cbind(xivRets, obsCloseMomentum, vRatio, vrpRets))
colnames(compare) <- c("BH_XIV", "DDN_Momentum", "DDN_VRatio", "DDN_VRP")

So, an explanation: there are four return streams here–buy and hold XIV, the DDN momentum from a previous post, and two other strategies.

The simpler one, called the VRatio, is simply the ratio of the VIX over the VXV (in the code above, smoothed with a 10-day SMA). Near the close, check this quantity: if it's less than one, buy XIV; otherwise, buy VXX.

The other one, called the Volatility Risk Premium strategy (or VRP for short), takes the VIX minus the 10-day historical volatility of the S&P 500 (that is, the annualized running ten-day standard deviation), and then takes a 5-day moving average of that difference. Near the close, when that average is above zero (that is, when the VIX is higher than historical volatility), go long XIV; otherwise, go long VXX.

Again, all of these strategies are effectively “observe near/at the close, buy at the close”, so are useful for demonstration purposes, though not for implementation purposes on any large account without incurring market impact.

Here are the results, since 2011 (that is, around the time of XIV’s actual inception):
(Chart: easyVolInvesting -- equity curves of the four return streams since 2011)

To note, both the momentum and the VRP strategy underperform buying and holding XIV since 2011. The VRatio strategy, on the other hand, does outperform.

Here’s a summary statistics function that compiles some top-level performance metrics.

stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]        # Calmar: annualized return over worst drawdown
  stats[6,] <- stats[1,]/UlcerIndex(rets) # UPI: annualized return over Ulcer Index
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}

And the result:

> stratStats(compare['2011::'])
                             BH_XIV DDN_Momentum DDN_VRatio   DDN_VRP
Annualized Return         0.3801000    0.2837000  0.4539000 0.2572000
Annualized Std Dev        0.6323000    0.5706000  0.6328000 0.6326000
Annualized Sharpe (Rf=0%) 0.6012000    0.4973000  0.7172000 0.4066000
Worst Drawdown            0.7438706    0.6927479  0.7665093 0.7174481
Calmar Ratio              0.5109759    0.4095285  0.5921650 0.3584929
Ulcer Performance Index   1.1352168    1.2076995  1.5291637 0.7555808

To note, all of the benchmark strategies suffered very large drawdowns since XIV’s inception, which we can examine using the table.Drawdowns command, as seen below:

> table.Drawdowns(compare[,1]['2011::'], top = 5)
        From     Trough         To   Depth Length To Trough Recovery
1 2011-07-08 2011-11-25 2012-11-26 -0.7439    349        99      250
2 2015-06-24 2016-02-11 2016-12-21 -0.6783    379       161      218
3 2014-07-07 2015-01-30 2015-06-11 -0.4718    236       145       91
4 2011-02-15 2011-03-16 2011-04-20 -0.3013     46        21       25
5 2013-04-15 2013-06-24 2013-07-22 -0.2877     69        50       19
> table.Drawdowns(compare[,2]['2011::'], top = 5)
        From     Trough         To   Depth Length To Trough Recovery
1 2014-07-07 2016-06-27 2017-03-13 -0.6927    677       499      178
2 2012-03-27 2012-06-13 2012-09-13 -0.4321    119        55       64
3 2011-10-04 2011-10-28 2012-03-21 -0.3621    117        19       98
4 2011-02-15 2011-03-16 2011-04-21 -0.3013     47        21       26
5 2011-06-01 2011-08-04 2011-08-18 -0.2723     56        46       10
> table.Drawdowns(compare[,3]['2011::'], top = 5)
        From     Trough         To   Depth Length To Trough Recovery
1 2014-01-23 2016-02-11 2017-02-14 -0.7665    772       518      254
2 2011-09-13 2011-11-25 2012-03-21 -0.5566    132        53       79
3 2012-03-27 2012-06-01 2012-07-19 -0.3900     80        47       33
4 2011-02-15 2011-03-16 2011-04-20 -0.3013     46        21       25
5 2013-04-15 2013-06-24 2013-07-22 -0.2877     69        50       19
> table.Drawdowns(compare[,4]['2011::'], top = 5)
        From     Trough         To   Depth Length To Trough Recovery
1 2015-06-24 2016-02-11 2017-10-11 -0.7174    581       161      420
2 2011-07-08 2011-10-03 2012-02-03 -0.6259    146        61       85
3 2014-07-07 2014-12-16 2015-05-21 -0.4818    222       115      107
4 2013-02-20 2013-07-08 2014-06-10 -0.4108    329        96      233
5 2012-03-27 2012-06-01 2012-07-17 -0.3900     78        47       31

Note that the table.Drawdowns command only examines one return stream at a time. Furthermore, the top argument specifies how many drawdowns to look at, sorted by greatest drawdown first.
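If you want the top drawdowns for every column without copy-pasting, one small convenience sketch (not from the original post) is to loop over the columns:

lapply(1:ncol(compare), function(i) table.Drawdowns(compare[, i]['2011::'], top = 5))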

One reason I think these strategies suffer the drawdowns they do is that they're always all-in on either one asset or its exact opposite, with no room for error.

One last thing, for the curious: here is my own strategy (which I have been trading with live capital since September, and for which I recently opened a subscription service) since 2011 (essentially XIV's inception), benchmarked against the strategies in EVI:

(Chart: volPerfBenchmarks -- QST_vol strategy vs. the EVI benchmarks since 2011)

stratStats(compare['2011::'])
                             QST_vol    BH_XIV DDN_Momentum DDN_VRatio   DDN_VRP
Annualized Return          0.8133000 0.3801000    0.2837000  0.4539000 0.2572000
Annualized Std Dev         0.3530000 0.6323000    0.5706000  0.6328000 0.6326000
Annualized Sharpe (Rf=0%)  2.3040000 0.6012000    0.4973000  0.7172000 0.4066000
Worst Drawdown             0.2480087 0.7438706    0.6927479  0.7665093 0.7174481
Calmar Ratio               3.2793211 0.5109759    0.4095285  0.5921650 0.3584929
Ulcer Performance Index   10.4220721 1.1352168    1.2076995  1.5291637 0.7555808

Thanks for reading.

NOTE: I am currently looking for networking and full-time opportunities related to my skill set. My LinkedIn profile can be found here.


Launching My Subscription Service

After gauging interest from my readers, I’ve decided to open up a subscription service. I’ll copy and paste the FAQs, or my best attempt at trying to answer as many questions as possible ahead of time, and may answer more in the future.

I’m choosing to use Patreon just to outsource all of the technicalities of handling subscriptions and creating a centralized source to post subscription-based content.

Here’s the link to subscribe.

FAQs (copied from the subscription page):

*****

Thank you for visiting. After gauging interest from my readership on my main site (www.quantstrattrader.wordpress.com), I created this as a subscription page for quantitative investment strategies, with the goal of having subscribers turn their cash into more cash, net of subscription fees (hopefully). The systems I develop come from a background of learning from experienced quantitative trading professionals, and senior researchers at large firms. I initially published a prototype of the current system several years back and watched it being tracked, before finally deploying my own capital into it earlier this year and making the most recent modifications even more recently.

And while past performance doesn’t guarantee future results and the past doesn’t repeat itself, it often rhymes, so let’s turn money into more money.

Some FAQs about the strategy:

What is the subscription price for this strategy?

Currently, after gauging interest from readers and doing research based on other sites, the tentative pricing is $50/month. As this strategy builds a track record, that may be subject to change in the future, and notifications will be made in such an event.

What is the description of the strategy?

The strategy is mainly a short volatility system that trades XIV, ZIV, and VXX. As far as volatility strategies go, it’s fairly conservative in that it uses several different checks in order to ensure a position.

What is the strategy’s edge?

In two words: risk management. Essentially, there are a few separate criteria to select an investment, and the system spends a not-insignificant amount of time with no exposure when some of these criteria provide contradictory signals. Furthermore, the system uses disciplined methodologies in its construction in order to avoid unnecessary free parameters, and to keep the strategy as parsimonious as possible.

Do you trade your own capital with this strategy?

Yes. 

When was the in-sample training period for this system?

A site that no longer updates its blog (volatility made simple) once tracked a more rudimentary strategy that I wrote about several years ago. I was particularly pleased with the results of that vetting, and recently have received input to improve my system to a much greater degree, as well as gained the confidence to invest live capital into it.

How many trades per year does the system make?

In the backtest from April 20, 2008 through the end of 2016, the system made 187 transactions in XIV (both buys and sells), 160 in ZIV, and 52 in VXX. That means that over the course of approximately 9 years, there were on average roughly 43 transactions per year. In some cases, this may simply be switching from XIV to ZIV or vice versa. In other words, trades last approximately a week (some may be longer, some shorter).

When will signals be posted?

Signals will be posted sometime between 12 PM and market close (4 PM EST). In backtesting, they are tested as market on close orders, so individuals assume any risk/reward by executing earlier.

How often is this system in the market?

About 56%. However, over the course of backtesting (and live trading), only about 9% of months have zero return. 

What is the distribution of winning, losing, and zero-return months?

As of late October 2017, there have been about 65% winning months (with an average gain of 12.8%), 26% losing months (with an average loss of 4.9%), and 9% zero months.

What are some other statistics about the strategy?

Since 2011 (around the time that XIV officially came into inception as opposed to using synthetic data), the strategy has boasted an 82% annualized return, with a 24.8% maximum drawdown and an annualized standard deviation of 35%. This means a Sharpe ratio (return to standard deviation) higher than 2, and a Calmar ratio higher than 3. It also has an Ulcer Performance Index of 10.

What are the strategy’s worst drawdowns?

Since 2011 (again, around the time of XIV’s inception), the largest drawdown was 24.8%, starting on October 31, 2011, and making a new equity high on January 12, 2012. The longest drawdown started on August 21, 2014 and recovered on April 10, 2015, and lasted for 160 trading days.

Will the subscription price change in the future?

If the strategy continues to deliver strong returns, then there may be reason to increase the price so long as the returns bear it out.

Can a conservative risk signal be provided for those who might not be able to tolerate a 25% drawdown? 

A variant of the strategy that targets about half of the annualized standard deviation of the strategy boasts a 40% annualized return for about 12% drawdown since 2011. Overall, this has slightly higher reward to risk statistics, but at the cost of cutting aggregate returns in half.

Can’t XIV have a termination event?

This refers to the idea of the XIV ETN terminating if it loses 80% of its value in a single day. To give an idea of the likelihood of this event, using synthetic data, the XIV ETN had a massive drawdown of 92% over the course of the 2008 financial crisis. For the history of that synthetic (pre-inception) and realized (post-inception) data, the absolute worst day was a down day of 26.8%. To note, the strategy was not in XIV during that day.

What was the strategy’s worst day?

On September 16, 2016, the strategy lost 16% in one day. This was at the tail end of a stretch of positive days that made about 40%.

What are the strategy’s risks?

The first risk is that, given that this strategy is naturally biased towards short volatility, it has the potential for some sharp drawdowns due to the nature of volatility spikes. The other risk is that, given that this strategy sometimes spends its time in ZIV, it will underperform XIV on some good days. This second risk is a consequence of the additional layers of risk management in the strategy.

How complex is this strategy?

Not overly. It’s only slightly more complex than a basic momentum strategy when counting free parameters, and can be explained in a couple of minutes.

Does this strategy use any complex machine learning methodologies?

No. The data requirements for such algorithms and the noise in the financial world make it very risky to apply these methodologies, and my research so far has not borne enough fruit to justify incorporating them.

Will instrument volume ever be a concern (particularly ZIV)?

According to one individual who worked on the creation of the original VXX ETN (and by extension, its inverse, XIV), new shares of ETNs can be created by the issuer (in ZIV's case, Credit Suisse) on demand. In short, the volume concern is really a question of the reputability of the party requesting new shares. In other words, it depends on how well the strategy does.

Can the strategy be held liable/accountable/responsible for a subscriber’s loss/drawdown?

Let this serve as a disclaimer: by subscribing, you agree to waive any legal claim against the strategy or its creator(s) in the event of drawdowns, losses, etc. The subscription is for viewing the output of a program, and this service does not actively manage a penny of subscribers' actual assets. Subscribers can choose to ignore the strategy's signals at a moment's notice, at their discretion. The program's output should not be thought of as investment advice coming from a CFP, CFA, RIA, etc.

Why should these signals be trusted?

Because my work on other topics has been on full, public display for several years. Unlike other websites, I have shown “bad backtests”, thus breaking the adage of “you’ll never see a bad backtest”. I have shown thoroughness in my research, and the same thoroughness has been applied towards this system as well. Until there is a longer track record such that the system can stand on its own, the trust in the system is the trust in the system’s creator.

Who is the intended audience for these signals?

The intended audience is individual, retail investors with a certain risk tolerance, and the service is priced accordingly.

Isn’t volatility investing very risky?

It’s risky from the perspective of the underlying instrument having the capacity to realize very large drawdowns (greater than 60%, and even greater than 90%). However, from a purely numerical standpoint, Amazon, the company taking over so much of shopping, has since its inception had a 37.1% annualized rate of return, a standard deviation of 61.5%, a worst drawdown of 94%, and an Ulcer Performance Index of 0.9. By comparison, XIV, from 2008 (using synthetic data), has had a 35.5% annualized rate of return, a standard deviation of 57.7%, a worst drawdown of 92%, and an Ulcer Performance Index of 0.6. If Amazon is considered a top-notch asset, then from a quantitative comparison, a system looking to capitalize on volatility bets should be viewed from a similar perspective. To be sure, the strategy vastly outperforms buying and holding XIV (which nobody should do). However, the philosophy of volatility products being much riskier than household tech names just does not hold true unless the future wildly differs from the past.

Is there a possibility for collaborating with other strategy creators?

Feel free to contact me at my email ilya.kipnis@gmail.com to discuss that possibility. I request a daily stream of returns before starting any discussion.

Why Patreon?

Because, past all the artsy-craftsy window dressing and the interesting choice of vocabulary, Patreon is simply a service that processes payments and provides a centralized place from which to post subscription-based content, as opposed to maintaining mailing lists and dealing with other technical headaches. Essentially, it's a way to outsource the technical end of running a business, even if the window dressing is a bit unorthodox.

***

Thanks for reading.

NOTE: I am currently interested in networking and full-time roles based on my skills. My LinkedIn profile can be found here.

The Return of Free Data and Possible Volatility Trading Subscription

This post will be about pulling free data from AlphaVantage, and gauging interest for a volatility trading subscription service.

So first off, ever since the yahoos at Yahoo decided to turn off their free data, the world of free daily data has been in somewhat of a dark age. Well, thanks to Josh Ulrich (http://blog.fosstrading.com/2017/10/getsymbols-and-alpha-vantage.html#gpluscomments), Paul Teetor, and other R/Finance individuals, the latest edition of quantmod (which can be installed from CRAN) now contains a way to get free financial data from AlphaVantage going back to the year 2000, which is usually enough for most backtests, as that date predates the inception of most ETFs.

Here’s how to do it.

First off, you need to go to AlphaVantage, register, and get an API key (https://www.alphavantage.co/support/#api-key).

Once you do that, downloading data is simple, if not slightly slow. Here’s how to do it.

require(quantmod)

getSymbols('SPY', src = 'av', adjusted = TRUE, output.size = 'full', api.key = "YOUR_KEY_HERE") # substitute your own key

And the results:

> head(SPY)
           SPY.Open SPY.High SPY.Low SPY.Close SPY.Volume SPY.Adjusted
2000-01-03   148.25   148.25 143.875  145.4375    8164300     104.3261
2000-01-04   143.50   144.10 139.600  139.8000    8089800     100.2822
2000-01-05   139.90   141.20 137.300  140.8000    9976700     100.9995
2000-01-06   139.60   141.50 137.800  137.8000    6227200      98.8476
2000-01-07   140.30   145.80 140.100  145.8000    8066500     104.5862
2000-01-10   146.30   146.90 145.000  146.3000    5741700     104.9448

Which means that if any of my old posts on asset allocation have been somewhat defunct thanks to bad Yahoo data, they will now work again with a slight modification to the data input algorithms.
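For instance, here is a minimal sketch of what that modification looks like for a hypothetical three-ETF universe (the symbols and the key are placeholders, and AlphaVantage rate-limits free keys, so larger universes will download slowly):

require(quantmod)

symbols <- c("SPY", "TLT", "GLD") # placeholder universe
getSymbols(symbols, src = 'av', adjusted = TRUE, output.size = 'full', api.key = "YOUR_KEY_HERE")
prices <- do.call(cbind, lapply(symbols, function(x) Ad(get(x))))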

Beyond demonstrating this routine, one other thing I'd like to do is to gauge interest for a volatility signal subscription service, for a system I personally started trading a couple of months ago.

Simply put, I have seen other websites offer subscription services with worse risk/reward than the strategy I currently trade, which switches between XIV, ZIV, and VXX. Currently, the equity curve, on a log10 scale, looks like this:

(Chart: volStratPerf -- log10 equity curve of the volatility strategy since 2008)

That is, $1000 in 2008 would have become approximately $1,000,000 today, if one was able to trade this strategy since then.

Since 2011 (around the time of inception for XIV), the performance has been:


                        Performance
Annualized Return         0.8265000
Annualized Std Dev        0.3544000
Annualized Sharpe (Rf=0%) 2.3319000
Worst Drawdown            0.2480087
Calmar Ratio              3.3325450

Considering that some websites out there charge upwards of $50 a month for either a single tactical asset rotation strategy with an inferior risk/return profile (and a lot more for a combination of them), or a volatility strategy that may have had a massive, historically record-breaking drawdown, I was hoping to gauge a price point for what readers would consider paying for signals from a better strategy than those.

Thanks for reading.

NOTE: I am currently interested in networking and am seeking full-time opportunities related to my skill set. My LinkedIn profile can be found here.

The Kelly Criterion — Does It Work?

This post will be about implementing and investigating the running Kelly Criterion — that is, a constantly adjusted Kelly Criterion that changes as a strategy realizes returns.

For those not familiar with the Kelly Criterion, it's the idea of adjusting a bet size to maximize a strategy's long-term growth rate. Both Wikipedia (https://en.wikipedia.org/wiki/Kelly_criterion) and Investopedia have entries on the Kelly Criterion. Essentially, it's about maximizing your long-run expectation of a betting system by sizing bets higher when the edge is higher, and vice versa.

There are two formulations of the Kelly criterion: the Wikipedia result presents it as the mean return divided by the variance (mean over sigma squared). The Investopedia definition is P - [(1 - P)/winLossRatio], where P is the probability of a winning bet and winLossRatio is the average win divided by the average loss.
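To make the Investopedia version concrete with some toy numbers (hypothetical, not taken from any strategy here): a 55% win rate with an average win 1.2 times the average loss gives a Kelly fraction of 17.5%.

P <- 0.55; winLossRatio <- 1.2
P - (1 - P)/winLossRatio # 0.175, i.e. bet 17.5% of capital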

In any case, here are the two implementations.

investoPediaKelly <- function(R, kellyFraction = 1, n = 63) {
  # count wins and losses over a rolling n-day window
  signs <- sign(R)
  posSigns <- signs; posSigns[posSigns < 0] <- 0
  negSigns <- signs; negSigns[negSigns > 0] <- 0; negSigns <- negSigns * -1
  probs <- runSum(posSigns, n = n)/(runSum(posSigns, n = n) + runSum(negSigns, n = n))
  # rolling average win and average loss (note: the average loss is left negative here,
  # so wlRatio comes out negative rather than as the textbook positive win/loss ratio)
  posVals <- R; posVals[posVals < 0] <- 0
  negVals <- R; negVals[negVals > 0] <- 0
  wlRatio <- (runSum(posVals, n = n)/runSum(posSigns, n = n))/(runSum(negVals, n = n)/runSum(negSigns, n = n))
  # Kelly fraction: P - (1 - P)/winLossRatio, scaled by the desired fraction of full Kelly
  kellyRatio <- probs - ((1-probs)/wlRatio)
  out <- kellyRatio * kellyFraction
  return(out)
}

wikiKelly <- function(R, kellyFraction = 1, n = 63) {
  # continuous Kelly: rolling mean return divided by rolling variance
  return(runMean(R, n = n)/runVar(R, n = n)*kellyFraction)
}

Let’s try this with some data. At this point in time, I’m going to show a non-replicable volatility strategy that I currently trade.

(Chart: volSince2011 -- equity curve of the volatility strategy since 2011)

For the record, here are its statistics:

                              Close
Annualized Return         0.8021000
Annualized Std Dev        0.3553000
Annualized Sharpe (Rf=0%) 2.2574000
Worst Drawdown            0.2480087
Calmar Ratio              3.2341613

Now, let’s see what the Wikipedia version does:

# 'out' here is the daily return stream of the (non-replicable) strategy shown above
badKelly <- out * lag(wikiKelly(out), 2)
charts.PerformanceSummary(badKelly)

(Chart: badKelly -- performance summary of the Wikipedia-Kelly-scaled returns)

The results are simply ridiculous. And here would be why: say you have a mean return of .0005 per day (5 bps/day), and a daily standard deviation equal to that (that is, a mean-to-volatility ratio of 1 at the daily level). You would have .0005/.0005^2 = 1/.0005 = 2000. In other words, a leverage of 2000 times. This clearly makes no sense.
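In R, that back-of-the-envelope arithmetic is simply:

mu <- .0005    # 5 bps per day
sigma <- .0005 # daily standard deviation equal to the mean
mu / sigma^2   # 2000 -- that is, 2000x leverage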

The other variant is the more particular Investopedia definition.

invKelly <- out * lag(investoPediaKelly(out), 2)
charts.PerformanceSummary(invKelly)

(Chart: invKelly -- performance summary of the Investopedia-Kelly-scaled returns)

Looks a bit more reasonable. However, how does it stack up against not using it at all?

compare <- na.omit(cbind(out, invKelly))
charts.PerformanceSummary(compare)

(Chart: kellyCompare -- base strategy vs. Kelly-scaled strategy)

Turns out, the fabled Kelly Criterion doesn’t really change things all that much.

For the record, here are the statistical comparisons:

                               Base     Kelly
Annualized Return         0.8021000 0.7859000
Annualized Std Dev        0.3553000 0.3588000
Annualized Sharpe (Rf=0%) 2.2574000 2.1903000
Worst Drawdown            0.2480087 0.2579846
Calmar Ratio              3.2341613 3.0463063

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

Leverage Up When You’re Down?

This post will investigate the idea of reducing leverage when drawdowns are small, and increasing leverage as losses accumulate. It’s based on the idea that whatever goes up must come down, and whatever comes down generally goes back up.

I originally came across this idea from this blog post.

So, first off, let’s write an easy function that allows replication of this idea. Essentially, we have several arguments:

One: the default leverage (that is, your exposure when your drawdown is zero). For reference, in the original post, it's 10%.

Next: the various leverage levels. In the original post, the leverage levels are 25%, 50%, and 100%.

And lastly, we need the corresponding thresholds at which to apply those leverage levels. In the original post, those levels are 20%, 40%, and 55%.

So, now we can create a function to implement that in R. The idea is to have R compute the drawdowns, and then use that information to determine leverage levels as precisely and frequently as possible.

Here’s a quick piece of code to do so:

require(xts)
require(PerformanceAnalytics)

drawdownLev <- function(rets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55)) {
  # compute drawdowns
  dds <- PerformanceAnalytics:::Drawdowns(rets)
  
  # initialize leverage to the default level
  dds$lev <- defaultLev
  
  # change the leverage for every threshold
  for(i in 1:length(ddthresh)) {
    
    # as drawdowns go through thresholds, adjust leverage
    # (note: this assumes the drawdown column is named "Close", as it is when the input comes from Cl())
    dds$lev[dds$Close < ddthresh[i]] <- levs[i]
  }
  
  # compute the new strategy returns -- apply leverage at tomorrow's close
  out <- rets * lag(dds$lev, 2)
  
  # return the leverage and the new returns
  leverage <- dds$lev
  colnames(leverage) <- c("DDLev_leverage")
  return(list(leverage, out))
}
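Before replicating the original post's results, here is a quick sanity check (with toy drawdown numbers, not real data) that the threshold loop maps drawdowns to the intended leverage levels:

toyDD <- c(0, -0.1, -0.3, -0.45, -0.6)
lev <- rep(.1, length(toyDD))
for(i in 1:3) { lev[toyDD < c(-.2, -.4, -.55)[i]] <- c(.25, .5, 1)[i] }
lev # 0.10 0.10 0.25 0.50 1.00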

So, let’s replicate some results.

require(downloader)
require(xts)
require(PerformanceAnalytics)


download("https://dl.dropboxusercontent.com/s/jk6der1s5lxtcfy/XIVlong.TXT",
         destfile="longXIV.txt")


xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
xivRets <- Return.calculate(Cl(xiv))

xivDDlev <- drawdownLev(xivRets, defaultLev = .1, levs = c(.25, .5, 1), ddthresh = c(-.2, -.4, -.55))
compare <- na.omit(cbind(xivDDlev[[2]], xivRets))
colnames(compare) <- c("XIV_DD_leverage", "XIV")

charts.PerformanceSummary(compare['2011::2016'])

And our results look something like this:

(Chart: xivddlev -- XIV drawdown-leverage strategy vs. XIV, 2011 through 2016)

                          XIV_DD_leverage       XIV
Annualized Return               0.2828000 0.2556000
Annualized Std Dev              0.3191000 0.6498000
Annualized Sharpe (Rf=0%)       0.8862000 0.3934000
Worst Drawdown                  0.4870604 0.7438706
Calmar Ratio                    0.5805443 0.3436668

That said, what would happen if one were to extend the data for all available XIV data?

(Chart: xivddlev2 -- XIV drawdown-leverage strategy vs. XIV, full available history)

> rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))
                          XIV_DD_leverage       XIV
Annualized Return               0.1615000 0.3319000
Annualized Std Dev              0.3691000 0.5796000
Annualized Sharpe (Rf=0%)       0.4375000 0.5727000
Worst Drawdown                  0.8293650 0.9215784
Calmar Ratio                    0.1947428 0.3601385

A different story.

In this case, I think the takeaway is that such a mechanism does well when the benchmark's drawdowns occur sharply, so that the lower exposure protects against those sharp drawdowns, and the benchmark then spends much of its time in recovery mode, so that the increased exposure has time to earn outsized returns before the next drawdown. When the benchmark continues to see drawdowns after maximum leverage is reached, or continues to perform well when not in drawdown, such a mechanism falls behind quickly.

As always, there is no free lunch when it comes to drawdowns, as trying to lower exposure in preparation for a correction will necessarily mean forfeiting a painful amount of upside in the good times, at least as presented in the original post.

Thanks for reading.

NOTE: I am currently looking for my next full-time opportunity, preferably in New York City or Philadelphia relating to the skills I have demonstrated on this blog. My LinkedIn profile can be found here. If you know of such opportunities, do not hesitate to reach out to me.

An Out of Sample Update on DDN’s Volatility Momentum Trading Strategy and Beta Convexity

The first part of this post is a quick update on Tony Cooper of Double Digit Numerics's volatility ETN momentum strategy, covered on the Volatility Made Simple blog (which stopped updating as of a year and a half ago). The second part will cover Dr. Jonathan Kinlay's Beta Convexity concept.

So, now that I have the ability to generate a term structure and constant expiry contracts, I decided to revisit some of the strategies on Volatility Made Simple and see if any of them are any good (long story short: besides mine, none of the publicly detailed ones are so hot; they either have a massive in-sample drawdown around the time of the crisis, or a massive drawdown out-of-sample).

Why this strategy? Because it seemed different from most of the usual term structure ratio trades (of which mine is an example), so I thought I’d check out how it did since its first publishing date, and because it’s rather easy to understand.

Here’s the strategy:

Take XIV, VXX, ZIV, VXZ, and SHY (this last one as the "risk-free" asset), and at the close, invest in whichever has had the highest 83-day momentum (this lookback was the result of optimization done on volatilityMadeSimple).

Here’s the code to do this in R, using the Quandl EOD database. There are two variants tested–observe the close, buy the close (AKA magical thinking), and observe the close, buy tomorrow’s close.

require(quantmod)
require(PerformanceAnalytics)
require(TTR)
require(Quandl)

Quandl.api_key("yourKeyHere")

symbols <- c("XIV", "VXX", "ZIV", "VXZ", "SHY")

prices <- list()
for(i in 1:length(symbols)) {
  price <- Quandl(paste0("EOD/", symbols[i]), start_date="1990-12-31", type = "xts")$Adj_Close
  colnames(price) <- symbols[i]
  prices[[i]] <- price
}
prices <- na.omit(do.call(cbind, prices))
returns <- na.omit(Return.calculate(prices))

# find highest asset, assign column names
topAsset <- function(row, assetNames) {
  out <- row==max(row, na.rm = TRUE)
  names(out) <- assetNames
  out <- data.frame(out)
  return(out)
}

# compute momentum
momentums <- na.omit(xts(apply(prices, 2, ROC, n = 83), order.by=index(prices)))

# find highest asset each day, turn it into an xts
highestMom <- apply(momentums, 1, topAsset, assetNames = colnames(momentums))
highestMom <- xts(t(do.call(cbind, highestMom)), order.by=index(momentums))

# observe today's close, buy tomorrow's close
buyTomorrow <- na.omit(xts(rowSums(returns * lag(highestMom, 2)), order.by=index(highestMom)))

# observe today's close, buy today's close (aka magic thinking)
magicThinking <- na.omit(xts(rowSums(returns * lag(highestMom)), order.by=index(highestMom)))

out <- na.omit(cbind(buyTomorrow, magicThinking))
colnames(out) <- c("buyTomorrow", "magicalThinking")

# results
charts.PerformanceSummary(out['2014-04-11::'], legend.loc = 'top')
rbind(table.AnnualizedReturns(out['2014-04-11::']), maxDrawdown(out['2014-04-11::']))

Pretty simple.

Here are the results.

(Chart: performance summary of the two variants since 2014-04-11)

> rbind(table.AnnualizedReturns(out['2014-04-11::']), maxDrawdown(out['2014-04-11::']))
                          buyTomorrow magicalThinking
Annualized Return          -0.0320000       0.0378000
Annualized Std Dev          0.5853000       0.5854000
Annualized Sharpe (Rf=0%)  -0.0547000       0.0646000
Worst Drawdown              0.8166912       0.7761655

Looks like this strategy didn't pan out too well. Just a daily reminder that if you're using a fine grid-search to select a particularly good parameter (e.g. n = 83 days? Maybe four 21-day trading months, but even that would have been n = 84), you're asking for, in the words of Mr. Tony Cooper, a visit from the grim reaper.

****

Moving on to another topic: whenever Dr. Jonathan Kinlay posts something that I think I can replicate, I'd be very wise to do so, as he is a very skilled and experienced practitioner (and also includes me on his blogroll).

A topic that Dr. Kinlay covered is the idea of beta convexity–namely, that an asset’s beta to a benchmark may be different when the benchmark is up as compared to when it’s down. Essentially, it’s the idea that we want to weed out firms that are what I’d deem as “losers in disguise”–I.E. those that act fine when times are good (which is when we really don’t care about diversification, since everything is going up anyway), but do nothing during bad times.

The beta convexity is calculated quite simply: it’s the beta of an asset to a benchmark when the benchmark has a positive return, minus the beta of an asset to a benchmark when the benchmark has a negative return, then squaring the difference. That is, (beta_bench_positive – beta_bench_negative) ^ 2.

Here’s some R code to demonstrate this, using IBM vs. the S&P 500 since 1995.

ibm <- Quandl("EOD/IBM", start_date="1995-01-01", type = "xts")
ibmRets <- Return.calculate(ibm$Adj_Close)

spy <- Quandl("EOD/SPY", start_date="1995-01-01", type = "xts")
spyRets <- Return.calculate(spy$Adj_Close)

rets <- na.omit(cbind(ibmRets, spyRets))
colnames(rets) <- c("IBM", "SPY")

betaConvexity <- function(Ra, Rb) {
  positiveBench <- Rb[Rb > 0]
  assetPositiveBench <- Ra[index(positiveBench)]
  positiveBeta <- CAPM.beta(Ra = assetPositiveBench, Rb = positiveBench)
  
  negativeBench <- Rb[Rb < 0]
  assetNegativeBench <- Ra[index(negativeBench)]
  negativeBeta <- CAPM.beta(Ra = assetNegativeBench, Rb = negativeBench)
  
  out <- (positiveBeta - negativeBeta) ^ 2
  return(out)
}

betaConvexity(rets$IBM, rets$SPY)

For the result:

> betaConvexity(rets$IBM, rets$SPY)
[1] 0.004136034

Thanks for reading.

NOTE: I am always looking to network, and am currently actively looking for full-time opportunities which may benefit from my skill set. If you have a position which may benefit from my skills, do not hesitate to reach out to me. My LinkedIn profile can be found here.

Constant Expiry VIX Futures (Using Public Data)

This post will be about creating constant-expiry contracts (e.g. a rolling 30-day contract) using VIX futures settlement data from the CBOE and the spot VIX calculation (from Yahoo Finance, or wherever else). Although these synthetics may be tradable under certain circumstances, that is not always the case (for instance, when the desired expiry is shorter than the front month's time to expiry).

The last time I visited this topic, I created a term structure using publicly available data from the CBOE, along with an external expiry calendar.

The logical next step, of course, is to create constant-expiry contracts, which may or may not be tradable (if your desired constant expiry is 30 days or less, know that there are days on which even the front month has more than 30 days to expiry).

So here’s where we left off: a way to create a continuous term structure using CBOE settlement VIX data.

So from here, before anything else, we need to get VIX data. And while the getSymbols command used to be the easy way to do that, Yahoo broke its API (what else do you expect from an otherwise-irrelevant, washed-up web 1.0 dinosaur?), so it's not possible to get free Yahoo data at this point in time (in the event that Josh Ulrich doesn't fix this issue in the near future, I'm open to suggestions for other free sources of data of reputable quality). So we need to get VIX data from elsewhere; particularly, the CBOE itself, which is a one-stop shop for all VIX-related data, and most likely some other interesting futures as well.

So here’s how to get VIX data from the CBOE (thanks, all you awesome CBOE people! And a shoutout to all my readers from the CBOE, I’m sure some of you are from there).

require(data.table)
require(quantmod)

VIX <- fread("http://www.cboe.com/publish/scheduledtask/mktdata/datahouse/vixcurrent.csv", skip = 1)
VIXdates <- VIX$Date
VIX$Date <- NULL; VIX <- xts(VIX, order.by=as.Date(VIXdates, format = '%m/%d/%Y'))
spotVix <- Cl(VIX)

Next, there’s a need for some utility functions to help out with identifying which futures contracts to use for constructing synthetics.

# find column with greatest days to expiry less than or equal to desired days to expiry
shortDurMinMax <- function(row, daysToExpiry) {
  return(max(which(row <= daysToExpiry)))
}

# find column with least days to expiry greater than the desired days to expiry
longDurMinMax <- function(row, daysToExpiry) {
  return(min(which(row > daysToExpiry)))
}

# gets the difference between the two latest contracts (either expiry days or price)
getLastDiff <- function(row) {
  indices <- rev(which(!is.na(row)))
  out <- row[indices[1]] - row[indices[2]]
  return(out)
}

# gets the rightmost non-NA value of a row
getLastValue <- function(row) {
  indices <- rev(which(!is.na(row)))
  out <- row[indices[1]]
  return(out)
}

The first two functions are to determine short-duration and long-duration contracts. Simply, provided a row of data and the desired constant time to expiry, the first function finds the contract with a time closest to expiry less than or equal to the desired amount, while the second function does the inverse.

The next two functions are utilized in the scenario of a synthetic whose time to expiry is greater than the expiry of the longest trading contract. Such a synthetic would obviously not be able to be traded, but it can be created for the purpose of using it as an indicator. The third function gets the difference between the last two non-NA values in a row (I.E. the two last prices, or the two last times to expiry), and the fourth one simply gets the rightmost non-NA value in a row.
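For a quick illustration of those last two helpers on a made-up row of the term structure (the trailing NAs stand in for contracts that don't exist yet):

toyRow <- c(14.2, 15.1, 15.8, NA, NA)
getLastDiff(toyRow)  # 15.8 - 15.1 = 0.7
getLastValue(toyRow) # 15.8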

The algorithm to create a synthetic constant-expiry contract/indicator is divided into three scenarios:

One, in which the desired time to expiry of the contract is shorter than the front month's, such as a constant 30-day expiry contract when the front month has more than 30 days to maturity (such as on Nov 17, 2016). In that case, the weight in the front month is the desired time to expiry divided by the front month's remaining time to expiry, with the remainder in spot VIX (another asset that cannot be traded, at least conventionally).

The second scenario is one in which the desired time to expiry is longer than the last traded contract. For instance, if the desire was to create a contract with a year to expiry when the furthest out is eight months, there obviously won't be data for such a contract. In such a case, the algorithm is to compute the linear slope between the last two available contracts, and add the extrapolated product of the slope multiplied by the time remaining between the desired and the last contract to the price of the last contract.

Lastly, the third scenario (and the most common one under most use cases) is that of the synthetic for which there is both a trading contract that has less time to expiry than the desired constant rate, and one with more time to expiry. In this instance, a matter of linear algebra (included in the comments) denotes the weight of the short expiry contract, which is (desired – expiry_long)/(expiry_short – expiry_long).

The algorithm iterates through all three scenarios, and due to the mechanics of xts automatically sorting by timestamp, one obtains an xts object in order of dates of a synthetic, constant expiry futures contract.
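To put some hypothetical numbers (not real quotes) on the first and third scenarios:

# scenario one: want a 30-day synthetic, front month has 35 days left
30/35 # ~0.857 in the front month, the remaining ~0.143 in spot VIX

# scenario three: want 30 days, short contract has 20 days left, long contract has 40
D <- 30; ex_S <- 20; ex_L <- 40
weightShort <- (D - ex_L)/(ex_S - ex_L) # (30 - 40)/(20 - 40) = 0.5
weightLong <- 1 - weightShort           # 0.5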

Here is the code for the function.


constantExpiry <- function(spotVix, termStructure, expiryStructure, daysToExpiry) {
  
  # Compute synthetics that are too long (more time to expiry than furthest contract)
  
  # can be Inf if no column contains values greater than daysToExpiry (I.E. expiry is 3000 days)
  suppressWarnings(longCol <- xts(apply(expiryStructure, 1, longDurMinMax, daysToExpiry), order.by=index(termStructure)))
  longCol[longCol == Inf] <- 10
  
  # xts for too long to expiry -- need a NULL for rbinding if empty
  tooLong <- NULL
  
  # Extend the last term structure slope an arbitrarily long amount of time for those with too long expiry
  tooLongIdx <- index(longCol[longCol==10])
  if(length(tooLongIdx) > 0) {
    tooLongTermStructure <- termStructure[tooLongIdx]
    tooLongExpiryStructure <- expiryStructure[tooLongIdx]
    
    # difference in price/expiry for longest two contracts, use it to compute a slope
    priceDiff <- xts(apply(tooLongTermStructure, 1, getLastDiff), order.by = tooLongIdx)
    expiryDiff <- xts(apply(tooLongExpiryStructure, 1, getLastDiff), order.by = tooLongIdx)
    slope <- priceDiff/expiryDiff
    
    # get longest contract price and compute additional days to expiry from its time to expiry 
    # I.E. if daysToExpiry is 180 and longest is 120, additionalDaysToExpiry is 60
    maxDaysToExpiry <- xts(apply(tooLongExpiryStructure, 1, max, na.rm = TRUE), order.by = tooLongIdx)
    longestContractPrice <- xts(apply(tooLongTermStructure, 1, getLastValue), order.by = tooLongIdx)
    additionalDaysToExpiry <- daysToExpiry - maxDaysToExpiry
    
    # add slope multiplied by additional days to expiry to longest contract price
    tooLong <- longestContractPrice + additionalDaysToExpiry * slope
  }
  
  # compute synthetics that are too short (less time to expiry than shortest contract)
  
  # can be -Inf if no column contains values less than daysToExpiry (I.E. expiry is 5 days)
  suppressWarnings(shortCol <- xts(apply(expiryStructure, 1, shortDurMinMax, daysToExpiry), order.by=index(termStructure)))
  shortCol[shortCol == -Inf] <- 0
  
  # xts for too short to expiry -- need a NULL for rbinding if empty
  tooShort <- NULL
  
  tooShortIdx <- index(shortCol[shortCol==0])
  
  if(length(tooShortIdx) > 0) {
    tooShort <- termStructure[,1] * daysToExpiry/expiryStructure[,1] + spotVix * (1 - daysToExpiry/expiryStructure[,1])
    tooShort <- tooShort[tooShortIdx]
  }
  
  
  # compute everything in between (when time to expiry is between longest and shortest)
  
  # get unique permutations for contracts that term structure can create
  colPermutes <- cbind(shortCol, longCol)
  colnames(colPermutes) <- c("short", "long")
  colPermutes <- colPermutes[colPermutes$short > 0,]
  colPermutes <- colPermutes[colPermutes$long < 10,]
  
  regularSynthetics <- NULL
  
  # if we can construct synthetics from regular futures -- someone might enter an extremely long expiry
  # so this may not always be the case
  
  if(nrow(colPermutes) > 0) {
    
    # pasting long and short expiries into a single string for easier subsetting
    shortLongPaste <- paste(colPermutes$short, colPermutes$long, sep="_")
    uniqueShortLongPaste <- unique(shortLongPaste)
    
    regularSynthetics <- list()
    for(i in 1:length(uniqueShortLongPaste)) {
      # get unique permutation of short-expiry and long-expiry contracts
      permuteSlice <- colPermutes[which(shortLongPaste==uniqueShortLongPaste[i]),]
      expirySlice <- expiryStructure[index(permuteSlice)]
      termStructureSlice <- termStructure[index(permuteSlice)]
      
      # what are the parameters?
      shortCol <- unique(permuteSlice$short); longCol <- unique(permuteSlice$long)
      
      # computations -- some linear algebra
      
      # S/L are weights, ex_S/ex_L are time to expiry
      # D is desired constant time to expiry
      
      # S + L = 1
      # L = 1 - S
      # S + (1-S) = 1
      # 
      # ex_S * S + ex_L * (1-S) = D
      # ex_S * S + ex_L - ex_L * S = D
      # ex_S * S - ex_L * S = D - ex_L
      # S(ex_S - ex_L) = D - ex_L
      # S = (D - ex_L)/(ex_S - ex_L)
      
      weightShort <- (daysToExpiry - expirySlice[, longCol])/(expirySlice[, shortCol] - expirySlice[, longCol])
      weightLong <- 1 - weightShort
      syntheticValue <- termStructureSlice[, shortCol] * weightShort + termStructureSlice[, longCol] * weightLong
      
      regularSynthetics[[i]] <- syntheticValue
    }
    
    regularSynthetics <- do.call(rbind, regularSynthetics)
  }
  
  out <- rbind(tooShort, regularSynthetics, tooLong)
  colnames(out) <- paste0("Constant_", daysToExpiry)
  return(out)
}

And here’s how to use it:

# termStructure and expiryStructure are the objects built in the previous term structure post
constant30 <- constantExpiry(spotVix = spotVix, termStructure = termStructure, expiryStructure = expiryStructure, daysToExpiry = 30)
constant180 <- constantExpiry(spotVix = spotVix, termStructure = termStructure, expiryStructure = expiryStructure, daysToExpiry = 180)

constantTermStructure <- cbind(constant30, constant180)

chart.TimeSeries(constantTermStructure, legend.loc = 'topright', main = "Constant Term Structure")

With the result:
(Chart: constant 30-day and 180-day expiry term structure)

Which means that between the CBOE data itself and this function that creates constant-expiry contracts from CBOE spot and futures prices, one can obtain any futures contract, whether real or synthetic, to use as an indicator for volatility trading strategies. That opens the door to exploring a much wider variety of them.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time opportunities related to my skill set. My LinkedIn profile can be found here.

Furthermore, if you are a volatility ETF/futures trading professional, I am interested in possible project-based collaboration. If you are interested, please contact me.