KDA–Robustness Results

This post will display some robustness results for KDA asset allocation.

Ultimately, the two canary instruments fare much better with the original Defensive Asset Allocation filter weights than with other variants of those weights. While this isn't terribly worrying (the filter was most likely designed in tandem with those instruments), what *is* somewhat more irritating is that the strategy depends on the end-of-month effect, meaning it cannot simply be tranched throughout an entire trading month.

So first off, let’s review the code from last time:

# KDA asset allocation 
# KDA stands for Kipnis Defensive Adaptive (Asset Allocation).

# compute strategy statistics
stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}

# required libraries
require(quantmod)
require(PerformanceAnalytics)
require(tseries)

# symbols
symbols <- c("SPY", "VGK",   "EWJ",  "EEM",  "VNQ",  "RWX",  "IEF",  "TLT",  "DBC",  "GLD", "VWO", "BND")  


# get data
rets <- list()
for(i in 1:length(symbols)) {
  
  returns <- Return.calculate(Ad(get(getSymbols(symbols[i], from = '1990-01-01'))))
  colnames(returns) <- symbols[i]
  rets[[i]] <- returns
}
rets <- na.omit(do.call(cbind, rets))


# algorithm
KDA <- function(rets, offset = 0, leverageFactor = 1.5, momWeights = c(12, 4, 2, 1)) {
  
  # get monthly endpoints, allow for offsetting ala AllocateSmartly/Newfound Research
  ep <- endpoints(rets) + offset
  ep[ep < 1] <- 1
  ep[ep > nrow(rets)] <- nrow(rets)
  ep <- unique(ep)
  epDiff <- diff(ep)
  if(last(epDiff)==1) { # if the last period only has one observation, remove it
    ep <- ep[-length(ep)]
  }
  
  # initialize vector holding zeroes for assets
  emptyVec <- data.frame(t(rep(0, 10)))
  colnames(emptyVec) <- symbols[1:10]
  
  
  allWts <- list()
  # we will use the 13612F filter
  for(i in 1:(length(ep)-12)) {
    
    # 12 assets for returns -- 2 of which are our crash protection assets
    retSubset <- rets[c((ep[i]+1):ep[(i+12)]),]
    epSub <- ep[i:(i+12)]
    sixMonths <- rets[(epSub[7]+1):epSub[13],]
    threeMonths <- rets[(epSub[10]+1):epSub[13],]
    oneMonth <- rets[(epSub[12]+1):epSub[13],]
    
    # compute 13612 fast momentum
    moms <- Return.cumulative(oneMonth) * momWeights[1] + Return.cumulative(threeMonths) * momWeights[2] + 
      Return.cumulative(sixMonths) * momWeights[3] + Return.cumulative(retSubset) * momWeights[4]
    assetMoms <- moms[,1:10] # Adaptive Asset Allocation investable universe
    cpMoms <- moms[,11:12] # VWO and BND from Defensive Asset Allocation
    
    # find qualifying assets
    highRankAssets <- rank(assetMoms) >= 6 # top 5 assets
    posReturnAssets <- assetMoms > 0 # positive momentum assets
    selectedAssets <- highRankAssets & posReturnAssets # intersection of the above
    
    # perform mean-variance/quadratic optimization
    investedAssets <- emptyVec
    if(sum(selectedAssets)==0) {
      investedAssets <- emptyVec
    } else if(sum(selectedAssets)==1) {
      investedAssets <- emptyVec + selectedAssets 
    } else {
      idx <- which(selectedAssets)
      # use 1-3-6-12 fast correlation average to match with momentum filter  
      cors <- (cor(oneMonth[,idx]) * momWeights[1] + cor(threeMonths[,idx]) * momWeights[2] + 
                 cor(sixMonths[,idx]) * momWeights[3] + cor(retSubset[,idx]) * momWeights[4])/sum(momWeights)
      vols <- StdDev(oneMonth[,idx]) # use last month of data for volatility computation from AAA
      covs <- t(vols) %*% vols * cors
      
      # do standard min vol optimization
      minVolRets <- t(matrix(rep(1, sum(selectedAssets))))
      minVolWt <- portfolio.optim(x=minVolRets, covmat = covs)$pw
      names(minVolWt) <- colnames(covs)
      investedAssets <- emptyVec
      investedAssets[,selectedAssets] <- minVolWt
    }
    
    # crash protection -- between aggressive allocation and crash protection allocation
    pctAggressive <- mean(cpMoms > 0)
    investedAssets <- investedAssets * pctAggressive 
    
    pctCp <- 1-pctAggressive
    
    # if IEF momentum is positive, invest all crash protection allocation into it
    # otherwise stay in cash for crash allocation
    if(assetMoms["IEF"] > 0) {
      investedAssets["IEF"] <- investedAssets["IEF"] + pctCp
    }
    
    # leverage portfolio if desired in cases when both risk indicator assets have positive momentum
    if(pctAggressive == 1) {
      investedAssets = investedAssets * leverageFactor
    }
    
    # append to list of monthly allocations
    wts <- xts(investedAssets, order.by=last(index(retSubset)))
    allWts[[i]] <- wts
    
  }
  
  # put all weights together and compute cash allocation
  allWts <- do.call(rbind, allWts)
  allWts$CASH <- 1-rowSums(allWts)
  
  # add cash returns to universe of investments
  investedRets <- rets[,1:10]
  investedRets$CASH <- 0
  
  # compute portfolio returns
  out <- Return.portfolio(R = investedRets, weights = allWts)
  return(list(allWts, out))
}

So, the idea is that we take the basic Adaptive Asset Allocation algorithm and wrap it in a canary universe from Defensive Asset Allocation (see the previous post for links to both), which controls the capital allocation to risky assets, ranging from 0 to 1 (or beyond, in cases where leverage applies).

One of the ideas was to test different permutations of the weights in the canary filter, a 1-3-6-12 weighted filter that puts most of its emphasis on the most recent month.

There are two interesting variants of this: equal weighting on the filter (both for momentum and the safety assets), and reversing the weights (that is, weighting the 1-month momentum by 1, the 3-month by 2, the 6-month by 4, and the 12-month by 12). Here are the results of that experiment:


# different leverages
KDA_100 <- KDA(rets, leverageFactor = 1)
KDA_EW <- KDA(rets, leverageFactor = 1, momWeights = c(1,1,1,1))
KDA_rev <- KDA(rets, leverageFactor = 1, momWeights = c(1, 2, 4, 12))
# KDA_150 <- KDA(rets, leverageFactor = 1.5)
# KDA_200 <- KDA(rets, leverageFactor = 2)

# compare
compare <- na.omit(cbind(KDA_100[[2]], KDA_EW[[2]], KDA_rev[[2]]))
colnames(compare) <- c("KDA_base", "KDA_EW", "KDA_rev")
charts.PerformanceSummary(compare, colorset = c('black', 'purple', 'gold'), 
                          main = "KDA AA with various momentum weights")

stratStats(compare)
apply.yearly(compare, Return.cumulative)

With the following results:

> stratStats(compare)
                            KDA_base    KDA_EW   KDA_rev
Annualized Return         0.10990000 0.0879000 0.0859000
Annualized Std Dev        0.09070000 0.0900000 0.0875000
Annualized Sharpe (Rf=0%) 1.21180000 0.9764000 0.9814000
Worst Drawdown            0.07920363 0.1360625 0.1500333
Calmar Ratio              1.38756275 0.6460266 0.5725396
Ulcer Performance Index   3.96188378 2.4331636 1.8267448

> apply.yearly(compare, Return.cumulative)
              KDA_base       KDA_EW    KDA_rev
2008-12-31  0.15783690  0.101929228 0.08499664
2009-12-31  0.18169281 -0.004707164 0.02403531
2010-12-31  0.17797930  0.283216782 0.27889530
2011-12-30  0.17220203  0.161001680 0.03341651
2012-12-31  0.13030215  0.081280035 0.09736187
2013-12-31  0.12692163  0.120902015 0.09898799
2014-12-31  0.04028492  0.047381890 0.06883301
2015-12-31 -0.01621646 -0.005016891 0.01841095
2016-12-30  0.01253209  0.020960805 0.01580218
2017-12-29  0.15079063  0.148073455 0.18811112
2018-12-31  0.06583962  0.029804042 0.04375225
2019-02-20  0.01689700  0.003934044 0.00962020

So, one mea culpa: after comparing against AllocateSmartly, my initial code (which I've since edited, most likely owing to getting some logic mixed up when I added the functionality to lag the trading day of the month) had some sort of bug in it which gave a slightly better-than-expected 2015 return. Nevertheless, the results are very similar. What is interesting to note is that in the raging bull market from roughly 2010 onwards, the equal-weight and reverse-weight filters don't perform too badly, though the reverse-weight filter has a massive drawdown in 2011. In terms of capitalizing on awful markets, however, the original filter as designed by Keller and TrendXplorer works best: it makes money during the recession and does better near the market bottom in 2009.

Now that that's out of the way, the more interesting question is how the strategy performs when not trading at the end of the month. Long story short, the best time to trade it is in the last week of the month. Once the new month rolls around, hands off. If you're talking about tranching this strategy, you have about a week to get your positions in, so I'm not sure how much dollar volume this strategy can actually manage, as it's dependent on the month-end effect (I know that one of my former managers, a brilliant man by all accounts, said that this phenomenon no longer existed, but I feel these empirical results refute that assertion in this particular instance). Here are those results:

lagCompare <- list()
for(i in 1:21) {
  offRets <- KDA(rets, leverageFactor = 1, offset = i)
  tmp <- offRets[[2]]
  colnames(tmp) <- paste0("Lag", i)
  lagCompare[[i]] <- tmp
}
lagCompare <- do.call(cbind, lagCompare)
lagCompare <- na.omit(cbind(KDA_100[[2]], lagCompare))
colnames(lagCompare)[1] <- "Base"

charts.PerformanceSummary(lagCompare, colorset=c("orange", rep("gray", 21)))
stratStats(lagCompare)

With the results:

> stratStats(lagCompare)
                                Base      Lag1      Lag2      Lag3      Lag4      Lag5      Lag6      Lag7      Lag8
Annualized Return         0.11230000 0.0584000 0.0524000 0.0589000 0.0319000 0.0319000 0.0698000 0.0790000 0.0912000
Annualized Std Dev        0.09100000 0.0919000 0.0926000 0.0945000 0.0975000 0.0957000 0.0943000 0.0934000 0.0923000
Annualized Sharpe (Rf=0%) 1.23480000 0.6357000 0.5654000 0.6229000 0.3270000 0.3328000 0.7405000 0.8460000 0.9879000
Worst Drawdown            0.07920363 0.1055243 0.1269207 0.1292193 0.1303246 0.1546962 0.1290020 0.1495558 0.1227749
Calmar Ratio              1.41786439 0.5534272 0.4128561 0.4558141 0.2447734 0.2062107 0.5410771 0.5282311 0.7428230
Ulcer Performance Index   4.03566328 1.4648618 1.1219982 1.2100649 0.4984094 0.5012318 1.3445786 1.4418132 2.3277271
                               Lag9     Lag10     Lag11     Lag12     Lag13     Lag14     Lag15     Lag16     Lag17
Annualized Return         0.0854000 0.0863000 0.0785000 0.0732000 0.0690000 0.0862000 0.0999000 0.0967000 0.1006000
Annualized Std Dev        0.0906000 0.0906000 0.0900000 0.0913000 0.0906000 0.0909000 0.0923000 0.0947000 0.0949000
Annualized Sharpe (Rf=0%) 0.9426000 0.9524000 0.8722000 0.8023000 0.7617000 0.9492000 1.0825000 1.0209000 1.0600000
Worst Drawdown            0.1278059 0.1189949 0.1197596 0.1112761 0.1294588 0.1498408 0.1224511 0.1290538 0.1274083
Calmar Ratio              0.6682006 0.7252411 0.6554796 0.6578231 0.5329880 0.5752771 0.8158357 0.7493000 0.7895878
Ulcer Performance Index   2.3120919 2.6415855 2.4441605 1.9248615 1.8096134 2.2378207 2.8753265 2.9092448 3.0703542
                             Lag18     Lag19     Lag20     Lag21
Annualized Return         0.097100 0.0921000 0.1047000 0.1019000
Annualized Std Dev        0.092900 0.0903000 0.0958000 0.0921000
Annualized Sharpe (Rf=0%) 1.044900 1.0205000 1.0936000 1.1064000
Worst Drawdown            0.100604 0.1032067 0.1161583 0.1517104
Calmar Ratio              0.965170 0.8923835 0.9013561 0.6716747
Ulcer Performance Index   3.263040 2.7159601 3.0758230 3.0414002

Essentially, the trade at the very end of the month is the only one with a Calmar ratio above 1, though the Calmar ratios from lag 15 through lag 21 are about .8 or higher, with Sharpe ratios of 1 or higher. So, there's definitely a window of when to trade and when not to: namely, the lag 1 through lag 5 variations have the worst performances by no small margin. Therefore, I strongly suspect that the 1-3-6-12 filter was designed around the end-of-month effect, or at least was not stress-tested for different trading days within the month (and given that longer-dated data is only monthly, this is understandable).
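
To make that window concrete, here's a minimal sketch (reusing the lagCompare object and stratStats function from above) that flags which offsets clear rough thresholds of a 0.8 Calmar ratio and a 1.0 Sharpe ratio; the cutoffs themselves are just illustrative assumptions.

# flag the rebalancing offsets whose Calmar and Sharpe ratios clear the (arbitrary) thresholds
lagStats <- stratStats(lagCompare)
calmarOK <- unlist(lagStats["Calmar Ratio", ]) > 0.8
sharpeOK <- unlist(lagStats["Annualized Sharpe (Rf=0%)", ]) >= 1
colnames(lagStats)[calmarOK & sharpeOK]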

Nevertheless, I hope this does answer some people's questions from the quant finance universe. I know that Corey Hoffstein of Think Newfound (and wow, that blog is excellent on the properties of trend-following) loves diversifying across every bit of the process, though in this case, I do think there's something to be said for "diworsification".

In any case, I do think there are some avenues for further research here.

Thanks for reading.


Right Now It’s KDA…Asset Allocation.

This post will introduce KDA Asset Allocation. KDA, i.e. Kipnis Defensive Adaptive Asset Allocation, is a combination of Wouter Keller's and TrendXplorer's Defensive Asset Allocation and ReSolve Asset Management's Adaptive Asset Allocation. This is an asset allocation strategy with a profile unlike most tactical asset allocation strategies I've seen before (namely, it barely loses any money in 2015, which was generally a brutal year for tactical asset allocation strategies).

So, the idea for this strategy came from reading an excellent post from TrendXplorer on the idea of a canary universe: using a pair of assets to determine when to increase exposure to risky/aggressive assets and when to stay in cash. Rather than gauging this on the momentum of the investment universe itself, the paper by Wouter Keller and TrendXplorer uses VWO and BND as a proxy universe. Furthermore, in situations that call for full exposure to risky assets, the latest iteration of DAA actually recommends leveraging that exposure, which will also be demonstrated. I also applied the idea of the 1-3-6-12 fast filter espoused by Wouter Keller and TrendXplorer: the sum of 12 * the 1-month momentum, 4 * the 3-month momentum, 2 * the 6-month momentum, and 1 * the 12-month momentum (that is, each weight times its lookback in months equals 12). This puts a large emphasis on the front month of returns, both for the risk on/off assets and for the invested assets themselves.
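
For illustration, here's a minimal sketch of that 13612F computation on month-end prices; the monthlyPrices object is a hypothetical xts of month-end adjusted closes (one column per asset), and the full daily-returns implementation actually used by the strategy appears in the code below.

# 13612F fast momentum: 12 * 1-month + 4 * 3-month + 2 * 6-month + 1 * 12-month return
require(xts)

fastMomentum <- function(monthlyPrices) {
  momK <- function(k) monthlyPrices/lag.xts(monthlyPrices, k) - 1 # k-month total return
  12 * momK(1) + 4 * momK(3) + 2 * momK(6) + 1 * momK(12)
}

# last(fastMomentum(monthlyPrices)) gives the most recent 13612F score for each asset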

However, rather than adopt the universe of investments from the TrendXplorer post, I decided to instead defer to the well-thought-out universe construction from Adaptive Asset Allocation, along with their idea to use a mean variance optimization approach for actually weighting the selected assets.

So, here are the rules:

Take the investment universe–SPY, VGK, EWJ, EEM, VNQ, RWX, IEF, TLT, DBC, GLD–compute the 1-3-6-12 momentum filter for each (that is, the sum of 12 * 1-month momentum, 4 * 3-month momentum, 2 * 6-month momentum, and 12-month momentum), and rank them. The selected assets are those with a momentum above zero that are also in the top 5.

Run a basic quadratic optimization algorithm on the selected assets, such as the portfolio.optim function from the tseries package, feeding in equal expected returns (as they all passed the dual momentum filter).

Following Adaptive Asset Allocation, the covariance matrix is computed using one-month volatility estimates and a correlation matrix that is a weighted average using the same weights as the momentum filter (that is, 12 * 1-month correlation + 4 * 3-month correlation + 2 * 6-month correlation + 1 * 12-month correlation, all divided by 19).
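
As a minimal sketch of that covariance construction (the lookback objects here are hypothetical stand-ins for the daily-return subsets that the full code below builds each month):

# weighted-average correlation matrix using the same 12/4/2/1 weights as the momentum filter,
# scaled by one-month volatilities to form the covariance matrix fed to the optimizer
# oneMonth, threeMonths, sixMonths, twelveMonths: daily returns of the selected assets
wtdCors <- (12 * cor(oneMonth) + 4 * cor(threeMonths) +
              2 * cor(sixMonths) + 1 * cor(twelveMonths))/19
vols <- apply(oneMonth, 2, sd) # one-month volatility estimates
covMat <- outer(vols, vols) * wtdCors # cov_ij = vol_i * vol_j * cor_ij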

Next, compute your exposure to risky assets as the percentage of the two canary assets (VWO and BND) that have a positive 1-3-6-12 momentum. If both assets have positive momentum, leverage the portfolio (if desired). Reapply this algorithm every month.

All of the allocation not made to risky assets goes towards IEF (which is in the pool of risky assets as well, so some months may have a large IEF allocation) if IEF has a positive 1-3-6-12 momentum, or stays in cash if it does not.
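
Put together, the canary sizing amounts to something like the following sketch; canaryMoms, iefMom, and riskWts are hypothetical stand-ins for objects the full code below builds each month.

# scale the risky sleeve by the share of canary assets (VWO, BND) with positive 13612F momentum,
# then route the remainder to IEF if its momentum is positive, or leave it in cash otherwise
pctAggressive <- mean(canaryMoms > 0) # 0, 0.5, or 1
riskWts <- riskWts * pctAggressive
crashSleeve <- 1 - pctAggressive
if(iefMom > 0) riskWts["IEF"] <- riskWts["IEF"] + crashSleeve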

The one somewhat optimistic assumption made is that the strategy observes the close of the rebalancing day and enters at that same close. Given a holding period of a month, this should not have a massive material impact compared to a strategy that potentially turns over every day.

Here’s the R code to do this:

# KDA asset allocation 
# KDA stands for Kipnis Defensive Adaptive (Asset Allocation).

# compute strategy statistics
stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}

# required libraries
require(quantmod)
require(PerformanceAnalytics)
require(tseries)

# symbols
symbols <- c("SPY", "VGK",   "EWJ",  "EEM",  "VNQ",  "RWX",  "IEF",  "TLT",  "DBC",  "GLD", "VWO", "BND")  


# get data
rets <- list()
for(i in 1:length(symbols)) {
  
  returns <- Return.calculate(Ad(get(getSymbols(symbols[i], from = '1990-01-01'))))
  colnames(returns) <- symbols[i]
  rets[[i]] <- returns
}
rets <- na.omit(do.call(cbind, rets))


# algorithm
KDA <- function(rets, offset = 0, leverageFactor = 1.5) {
  
  # get monthly endpoints, allow for offsetting ala AllocateSmartly/Newfound Research
  ep <- endpoints(rets) + offset
  ep[ep < 1] <- 1
  ep[ep > nrow(rets)] <- nrow(rets)
  ep <- unique(ep)
  epDiff <- diff(ep)
  if(last(epDiff)==1) { # if the last period only has one observation, remove it
    ep <- ep[-length(ep)]
  }
  # initialize vector holding zeroes for assets
  emptyVec <- data.frame(t(rep(0, 10)))
  colnames(emptyVec) <- symbols[1:10]
  
  
  allWts <- list()
  # we will use the 13612F filter
  for(i in 1:(length(ep)-12)) {
    
    # 12 assets for returns -- 2 of which are our crash protection assets
    retSubset <- rets[c((ep[i]+1):ep[(i+12)]),]
    epSub <- ep[i:(i+12)]
    sixMonths <- rets[(epSub[7]+1):epSub[13],]
    threeMonths <- rets[(epSub[10]+1):epSub[13],]
    oneMonth <- rets[(epSub[12]+1):epSub[13],]
    
    # compute 13612 fast momentum
    moms <- Return.cumulative(oneMonth) * 12 + Return.cumulative(threeMonths) * 4 + 
      Return.cumulative(sixMonths) * 2 + Return.cumulative(retSubset)
    assetMoms <- moms[,1:10] # Adaptive Asset Allocation investable universe
    cpMoms <- moms[,11:12] # VWO and BND from Defensive Asset Allocation
    
    # find qualifying assets
    highRankAssets <- rank(assetMoms) >= 6 # top 5 assets
    posReturnAssets <- assetMoms > 0 # positive momentum assets
    selectedAssets <- highRankAssets & posReturnAssets # intersection of the above
    
    # perform mean-variance/quadratic optimization
    investedAssets <- emptyVec
    if(sum(selectedAssets)==0) {
      investedAssets <- emptyVec
    } else if(sum(selectedAssets)==1) {
      investedAssets <- emptyVec + selectedAssets 
    } else {
      idx <- which(selectedAssets)
      # use 1-3-6-12 fast correlation average to match with momentum filter  
      cors <- (cor(oneMonth[,idx]) * 12 + cor(threeMonths[,idx]) * 4 + 
                 cor(sixMonths[,idx]) * 2 + cor(retSubset[,idx]))/19
      vols <- StdDev(oneMonth[,idx]) # use last month of data for volatility computation from AAA
      covs <- t(vols) %*% vols * cors
      
      # do standard min vol optimization
      minVolRets <- t(matrix(rep(1, sum(selectedAssets))))
      minVolWt <- portfolio.optim(x=minVolRets, covmat = covs)$pw
      names(minVolWt) <- colnames(covs)
      investedAssets <- emptyVec
      investedAssets[,selectedAssets] <- minVolWt
    }
    
    # crash protection -- between aggressive allocation and crash protection allocation
    pctAggressive <- mean(cpMoms > 0)
    investedAssets <- investedAssets * pctAggressive 
    
    pctCp <- 1-pctAggressive
    
    # if IEF momentum is positive, invest all crash protection allocation into it
    # otherwise stay in cash for crash allocation
    if(assetMoms["IEF"] > 0) {
      investedAssets["IEF"] <- investedAssets["IEF"] + pctCp
    }
    
    # leverage portfolio if desired in cases when both risk indicator assets have positive momentum
    if(pctAggressive == 1) {
      investedAssets = investedAssets * leverageFactor
    }
    
    # append to list of monthly allocations
    wts <- xts(investedAssets, order.by=last(index(retSubset)))
    allWts[[i]] <- wts
    
  }
  
  # put all weights together and compute cash allocation
  allWts <- do.call(rbind, allWts)
  allWts$CASH <- 1-rowSums(allWts)
  
  # add cash returns to universe of investments
  investedRets <- rets[,1:10]
  investedRets$CASH <- 0
  
  # compute portfolio returns
  out <- Return.portfolio(R = investedRets, weights = allWts)
  return(out)
}

# different leverages
KDA_100 <- KDA(rets, leverageFactor = 1)
KDA_150 <- KDA(rets, leverageFactor = 1.5)
KDA_200 <- KDA(rets, leverageFactor = 2)

# compare
compare <- na.omit(cbind(KDA_100, KDA_150, KDA_200))
colnames(compare) <- c("KDA_base", "KDA_lev_150%", "KDA_lev_200%")
charts.PerformanceSummary(compare, colorset = c('black', 'purple', 'gold'), 
                          main = "KDA AA with various offensive leverages")

And here are the equity curves and statistics:

What appeals to me about this strategy is that unlike most tactical asset allocation strategies, it comes out relatively unscathed by the 2015-2016 whipsaws that hurt so many other tactical asset allocation strategies. However, this strategy isn't completely flawless: sometimes it decides that it'd be a great time to enter full risk-on mode and then hits a drawdown, as evidenced by the drawdown curve. Nevertheless, the Calmar ratios are fairly solid for a tactical asset allocation rotation strategy, and even in a brutal 2018 that decimated all risk assets, this strategy managed to post a very noticeable *positive* return. On the downside, the leverage plan actually seems to *negatively* affect risk/reward characteristics in this strategy: as leverage during aggressive allocations increases, characteristics such as the Sharpe and Calmar ratios actually *decrease*.

Overall, I think there are different aspects to unpack here–such as performances of risky assets as a function of the two canary universe assets, and a more optimal leverage plan. This was just the first attempt at combining two excellent ideas and seeing where the performance goes. I also hope that this strategy can have a longer backtest over at AllocateSmartly.

Thanks for reading.

GARCH and a rudimentary application to Vol Trading

This post will review Kris Boudt’s datacamp course, along with introducing some concepts from it, discuss GARCH, present an application of it to volatility trading strategies, and a somewhat more general review of datacamp.

So, recently, Kris Boudt, one of the highest-ranking individuals on the open-source R/Finance totem pole (contrary to popular belief, I am not the be-all end-all of coding R in finance…probably just one of the more visible individuals due to not needing to run a trading desk), taught a course on Datacamp on GARCH models.

Naturally, an opportunity to learn from one of the most intelligent individuals in the field in a hand-held course does not come along every day. In fact, on Datacamp, you can find courses from some of the most intelligent individuals in the R/Finance community, such as Joshua Ulrich, Ross Bennett (teaching PortfolioAnalytics, no less), David Matteson, and, well, just about everyone short of Doug Martin and Brian Peterson themselves. That said, most of those courses are rather introductory, but occasionally, you get a course that covers a production-tier library that allows one to do some non-trivial things, such as this course, which covers Alexios Ghalanos’s rugarch library.

Ultimately, the course is definitely good for showing the basics of rugarch. And, given how I blog and use tools, I wholly subscribe to the 80/20 philosophy–essentially that you can get pretty far using basic building blocks in creative ways, or just taking a particular punchline and applying it to some data, and throwing it into a trading strategy to see how it does.

But before we do that, let’s discuss what GARCH is.

While I’ll save the Greek notation for those that feel inclined to do a google search, here’s the acronym:

Generalized Auto-Regressive Conditional Heteroskedasticity

What it means:

Generalized: a more general form of the original ARCH (Auto-Regressive Conditional Heteroskedasticity) model.

Auto-Regressive: past values are used as inputs to predict future values.

Conditional: the current level of volatility is conditional on past information.

Heteroskedasticity: varying volatility. That is, consider the VIX. It isn’t one constant level, such as 20. It varies with respect to time.

Or, to summarize: “use past volatility to predict future volatility because it changes over time.”

Now, there are some things that we know from empirical observation about volatility in financial time series–namely that volatility tends to cluster: high vol is followed by high vol, and vice versa. That is, you don't just have one-off huge moves one day, then calm moves like nothing ever happened. Also, volatility tends to revert over longer periods of time. That is, the VIX doesn't stay elevated for protracted periods, so more often than not, betting on its abatement can make some money (assuming the timing is correct).

Now, in the case of finance, which birthed the original GARCH, three researchers (Glosten, Jagannathan, and Runkle) extended the model to take into account the fact that volatility in an asset spikes in the face of negative returns. That is, when did the VIX reach its heights? In the biggest of bear markets, during the financial crisis. So, there's an asymmetry between positive and negative returns. This is called the GJR-GARCH model.
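
For those who do want a bit of the notation after all, the GJR-GARCH(1,1) variance recursion is commonly written as

$$\sigma_t^2 = \omega + \left(\alpha + \gamma I_{t-1}\right)\varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2, \qquad I_{t-1} = \begin{cases} 1 & \text{if } \varepsilon_{t-1} < 0 \\ 0 & \text{otherwise,} \end{cases}$$

so a negative shock contributes an extra gamma on top of the usual ARCH term, which is exactly the asymmetry described above.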

Now, here’s where the utility of the rugarch package comes in–rather than needing to reprogram every piece of math, Alexios Ghalanos has undertaken that effort for the good of everyone else, and implemented a whole multitude of prepackaged GARCH models that allow the end user to simply pick the type of GARCH model that best fits the assumptions the end user thinks best apply to the data at hand.

So, here’s the how-to.

First off, we’re going to get data for SPY from Yahoo finance, then specify our GARCH model.

The GARCH model has three components: the mean model, that is, assumptions about the basic ARMA time-series nature of the returns (in this case I just assumed an AR(1)); the variance model, which is where you specify the type of GARCH model, along with variance targeting (which essentially forces an assumption of some amount of mean reversion, and which I had to use to actually get the GARCH model to converge in all cases); and lastly, the distribution model of the returns. In many models, there's some built-in assumption of normality. In rugarch, however, you can relax that assumption by specifying something such as "std", that is, the Student's t distribution, or in this case, "sstd", the skewed Student's t distribution. And when one thinks about S&P 500 returns, a skewed Student's t distribution seems most reasonable: positive returns usually arise as a large collection of small gains, but losses occur in large chunks, so we want a distribution that can capture this property if the need arises.

require(rugarch)
require(quantmod)
require(TTR)
require(PerformanceAnalytics)

# get SPY data from Yahoo 
getSymbols("SPY", from = '1990-01-01')

spyRets = na.omit(Return.calculate(Ad(SPY)))

# GJR garch with AR1 innovations under a skewed student T distribution for returns
gjrSpec = ugarchspec(mean.model = list(armaOrder = c(1,0)),
                      variance.model = list(model = "gjrGARCH",
                                            variance.targeting = TRUE),
                      distribution.model = "sstd")

As you can see, with a single function call, the user can specify a very extensive model encapsulating assumptions about both the returns and the model which governs their variance. Once the model is specified, it's equally simple to use it to create a rolling out-of-sample prediction: just plug your data in, and after some burn-in period, you start to get predictions for a variety of metrics. Here's the code to do that.

# Use rolling window of 504 days, refitting the model every 22 trading days
t1 = Sys.time()
garchroll = ugarchroll(gjrSpec, data = spyRets, 
n.start = 504, refit.window = "moving", refit.every = 22)
t2 = Sys.time()
print(t2-t1)

# convert predictions to data frame
garchroll = as.data.frame(garchroll)

In this case, I use a rolling 504-day window that refits every 22 days (approximately one trading month). To note, if the window is too short, you may run into fail-to-converge instances, which would prevent converting the predictions to a data frame. The rolling predictions take about four minutes to run on the server instance I use, so refitting every single day is most likely not advised.

Here's what the predictions look like:

head(garchroll)
                      Mu       Sigma      Skew    Shape Shape(GIG)      Realized
1995-01-30  6.635618e-06 0.005554050 0.9456084 4.116495          0 -0.0043100611
1995-01-31  4.946798e-04 0.005635425 0.9456084 4.116495          0  0.0039964165
1995-02-01  6.565350e-06 0.005592726 0.9456084 4.116495          0 -0.0003310769
1995-02-02  2.608623e-04 0.005555935 0.9456084 4.116495          0  0.0059735255
1995-02-03 -1.096157e-04 0.005522957 0.9456084 4.116495          0  0.0141870212
1995-02-06 -5.922663e-04 0.005494048 0.9456084 4.116495          0  0.0042281655


The salient column here is Sigma: the prediction for daily volatility. This is the quantity that we want to compare against the VIX.

So the strategy we're going to investigate is essentially what I've seen referred to as VRP (the Volatility Risk Premium) in Tony Cooper's seminal paper, Easy Volatility Investing.

The idea of the VRP is that we compare some measure of realized volatility (e.g. a running standard deviation, or GARCH predictions from past data) to the VIX, which is an implied volatility (so, purely forward-looking). The idea is that when realized volatility (measured from past/current data) is greater than implied volatility, people are in a panic. Similarly, when implied volatility is greater than realized volatility, things are as they should be, and it should be feasible to harvest the volatility risk premium by shorting volatility (analogous to selling insurance).

The instruments we'll be using for this are ZIV and VXZ: ZIV because SVXY is no longer supported on Interactive Brokers or Robinhood, and VXZ as its long-volatility counterpart.

We'll be using close-to-close returns; that is, get the signal on Monday morning and transact on Monday's close, rather than observe data on Friday's close and transact around that same time period (also known as magical thinking, according to Brian Peterson).


getSymbols('^VIX', from = '1990-01-01')

# convert GARCH sigma predictions to same scale as the VIX by annualizing, multiplying by 100
garchPreds = xts(garchroll$Sigma * sqrt(252) * 100, order.by=as.Date(rownames(garchroll)))
diff = garchPreds - Ad(VIX)

require(downloader)

download('https://www.dropbox.com/s/y3cg6d3vwtkwtqx/VXZlong.TXT?raw=1', destfile='VXZlong.txt')
download('https://www.dropbox.com/s/jk3ortdyru4sg4n/ZIVlong.TXT?raw=1', destfile='ZIVlong.txt')

ziv = xts(read.zoo('ZIVlong.txt', format='%Y-%m-%d', sep = ',', header=TRUE))
vxz = xts(read.zoo('VXZlong.txt', format = '%Y-%m-%d', sep = ',', header = TRUE))

zivRets = na.omit(Return.calculate(Cl(ziv)))
vxzRets = na.omit(Return.calculate(Cl(vxz)))
vxzRets['2014-08-05'] = .045 # manual override of what appears to be a bad data point in the downloaded VXZ file

zivSig = diff < 0 
vxzSig = diff > 0 

garchOut = lag(zivSig, 2) * zivRets + lag(vxzSig, 2) * vxzRets

histSpy = runSD(spyRets, n = 21, sample = FALSE) * sqrt(252) * 100
spyDiff = histSpy - Ad(VIX)

zivSig = spyDiff < 0
vxzSig = spyDiff > 0

spyOut = lag(zivSig, 2) * zivRets + lag(vxzSig, 2) * vxzRets

avg = (garchOut + spyOut)/2
compare = na.omit(cbind(garchOut, spyOut, avg))
colnames(compare) = c("gjrGARCH", "histVol", "avg")

With the following output:

stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] = stats[1,]/stats[4,]
  stats[6,] = stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] = "Worst Drawdown"
  rownames(stats)[5] = "Calmar Ratio"
  rownames(stats)[6] = "Ulcer Performance Index"
  return(stats)
}

charts.PerformanceSummary(compare)
stratStats(compare)

> stratStats(compare)
                           gjrGARCH   histVol       avg
Annualized Return         0.2195000 0.2186000 0.2303000
Annualized Std Dev        0.2936000 0.2947000 0.2614000
Annualized Sharpe (Rf=0%) 0.7477000 0.7419000 0.8809000
Worst Drawdown            0.4310669 0.5635507 0.4271594
Calmar Ratio              0.5092017 0.3878977 0.5391429
Ulcer Performance Index   1.3563017 1.0203611 1.5208926



So, to comment on this strategy: this is definitely not something you will take and trade out of the box. Both variants of this strategy, when forced to choose a side, walk straight into the Feb 5 volatility explosion. Luckily, switching between ZIV and VXZ keeps the account from completely exploding in a spectacular failure. To note, both variants of the VRP strategy, GJR-GARCH and the 21-day rolling realized volatility, suffer their own periods of spectacularly large drawdown: the historical-volatility variant in 2007-2008, and again currently. This year has just been miserable for any reasonable volatility strategy; I myself am down 20%, and I've seen other strategists down that much as well in their primary strategies.

That said, I do think that over time, and if using the tail-end-of-the-curve instruments such as VXZ and ZIV (now that XIV is gone and SVXY is no longer supported on several brokers such as Interactive Brokers and Robinhood), there are a number of strategies that might be feasible to pass off as a sort of trading analogue to machine learning's "weak learner".

That said, I'm not sure how many genuinely different ways there are to approach volatility trading that make logical sense from an intuitive perspective (that is, "these two quantities have this type of relationship, which should give a consistent edge in trading volatility" rather than "let's over-optimize these two parameters until we eliminate every drawdown").

While I’ve written about the VIX3M/VIX6M ratio in the past, which has formed the basis of my proprietary trading strategy, I’d certainly love to investigate other volatility trading ideas out in public. For instance, I’d love to start the volatility trading equivalent of an AllocateSmartly type website–just a compendium of a reasonable suite of volatility trading strategies, track them, charge a subscription fee, and let users customize their own type of strategies. However, the prerequisite for that is that there are a lot of reasonable ways to trade volatility that don’t just walk into tail-end events such as the 2007-2008 transition, Feb 5, and so on.

Furthermore, as some recruiters have told me that I should cross-post my blog scripts on my GitHub, I'll start doing that from now on.

***
One last topic: a general review of Datacamp. As some of you may know, I instruct a course on Datacamp. Furthermore, I've spent quite a bit of time taking courses (particularly in Python) on there as well, thanks to having instructor access.

Here's the gist of it: Datacamp is a terrific resource for getting your feet wet and getting a basic overview of what technologies are out there. Courses generally follow a "few minutes of lecture, then do exercises using the exact same syntax you saw in the lecture" format, with a lot of the skeleton already written for you, so you don't wind up endlessly guessing. My procedure is usually: "try to complete the exercise, and if I fail, go back and look at the slides to find an analogous block of code, change some names, and fill in".

Ultimately, if the world of data science, machine learning, and some quantitative finance is completely new to you–if you're the kind of person that reads my blog and completely glosses past the code: *this* is the resource for you, and I recommend it wholeheartedly. You'll take some courses that give you a general tour of what data scientists, and occasionally quants, do. And in some cases, you may have a professor in a fairly advanced field, like Kris Boudt, teach a fairly advanced topic, like the state-of-the-art rugarch package (this *is* an industry-used package, and is actively maintained by Alexios Ghalanos, an economist at Amazon, so it's far more than a pedagogical tool).

That said, for someone like me, who's trying to port his career-capable R skills to Python to land a job (my last contract ended recently, so I am formally searching for a new role), Datacamp doesn't *quite* do the trick just yet. While there is a large catalog of courses, it feels like there's a lot of breadth, though I'm not sure how much depth in terms of getting proficient enough to land interviews on the sole merits of DataCamp course completions. While there are Python course tracks (e.g. Python developer, which I completed, and Python data analyst, which I also completed), I'm not sure they're sufficient in terms of "this track was developed in partnership with industry–complete this capstone course, and we have industry partners willing to interview you".

Also, from what I've seen of quantitative finance taught in Python, and having to rebuild all functions from numpy/pandas, I am puzzled as to how people do quantitative finance in Python without libraries like PerformanceAnalytics, rugarch, quantstrat, PortfolioAnalytics, and so on. Those libraries make expressing and analyzing investment ideas far more efficient, and greatly reduce the chance of making something like an off-by-one error (also known as look-ahead bias in trading). So far, I haven't seen the Python end of Datacamp dive deep into quantitative finance, and I hope that changes in the near future.

So, as a summary, I think this is a fantastic site for code-illiterate individuals to get their hands dirty and their feet wet with some coding, but I think the opportunity to create an economical, democratized, interest-to-career, a-la-carte, self-paced experience is still very much there for the taking. And given the quality of instructors that Datacamp has worked with in the past (David Matteson–*the* regime change expert, I think–along with many other experts), I think Datacamp has a terrific opportunity to capitalize here.

So, if you’re the kind of person who glosses past the code: don’t gloss anymore. You can now take courses to gain an understanding of what my code does, and ask questions about it.

***
Thanks for reading.

NOTE: I am currently looking for networking opportunities and full-time roles related to my skill set. Feel free to download my resume or contact me on LinkedIn.

Principal Component Momentum?

This post will investigate using Principal Components as part of a momentum strategy.

Recently, I ran across a post from David Varadi that I thought I'd investigate further and translate into code I can explicitly display (as David Varadi doesn't). Of course, as David Varadi is a quantitative research director with whom I've done good work in the past, I find that trying to investigate his ideas is worth the time spent.

So, here's the basic idea: in an allegedly balanced universe, containing both aggressive assets (e.g. equity asset-class ETFs) and defensive assets (e.g. fixed-income asset-class ETFs), principal component analysis, a cornerstone of machine learning, should have some effectiveness at creating an effective portfolio.

I decided to put that idea to the test with the following algorithm:

Using the same assets that David Varadi does, I first use a rolling window (between 6 and 18 months) to create principal components. I make sure that the SPY loading of the first PC is always positive (that is, if the loading for SPY is negative, multiply the first PC by -1, as that's the PC we use), and then create two portfolios: one comprised of the normalized positive loadings of the first PC, and one comprised of the negative half.

Next, every month, I use some momentum lookback period (1, 3, 6, 10, and 12 months), and invest in the portfolio that performed best over that period for the next month, and repeat.

Here's the source code to do that (for those who have difficulty following, I highly recommend James Picerno's Quantitative Investment Portfolio Analytics in R book):

require(PerformanceAnalytics)
require(quantmod)
require(stats)
require(xts)

symbols <- c("SPY", "EFA", "EEM", "DBC", "HYG", "GLD", "IEF", "TLT")  

# get free data from yahoo
rets <- list()
getSymbols(symbols, src = 'yahoo', from = '1990-12-31')
for(i in 1:length(symbols)) {
  returns <- Return.calculate(Ad(get(symbols[i])))
  colnames(returns) <- symbols[i]
  rets[[i]] <- returns
}
rets <- na.omit(do.call(cbind, rets))

# 12 month PC rolling PC window, 3 month momentum window
pcPlusMinus <- function(rets, pcWindow = 12, momWindow = 3) {
  ep <- endpoints(rets)

  wtsPc1Plus <- NULL
  wtsPc1Minus <- NULL
  
  for(i in 1:(length(ep)-pcWindow)) {
    # get subset of returns
    returnSubset <- rets[(ep[i]+1):(ep[i+pcWindow])]
    
    # perform PCA, get first PC (I.E. pc1)
    pcs <- prcomp(returnSubset) 
    firstPc <- pcs$rotation[,1]
    
    # make sure SPY always has a positive loading
    # otherwise, SPY and related assets may have negative loadings sometimes
    # positive loadings other times, and creates chaotic return series
    
    if(firstPc['SPY'] < 0) {
      firstPc <- firstPc * -1
    }
    
    # create vector for negative values of pc1
    wtsMinus <- firstPc * -1
    wtsMinus[wtsMinus < 0] <- 0
    wtsMinus <- wtsMinus/(sum(wtsMinus)+1e-16) # in case zero weights
    wtsMinus <- xts(t(wtsMinus), order.by=last(index(returnSubset)))
    wtsPc1Minus[[i]] <- wtsMinus
    
    # create weight vector for positive values of pc1
    wtsPlus <- firstPc
    wtsPlus[wtsPlus < 0] <- 0
    wtsPlus <- wtsPlus/(sum(wtsPlus)+1e-16)
    wtsPlus <- xts(t(wtsPlus), order.by=last(index(returnSubset)))
    wtsPc1Plus[[i]] <- wtsPlus
  }
  
  # combine positive and negative PC1 weights
  wtsPc1Minus <- do.call(rbind, wtsPc1Minus)
  wtsPc1Plus <- do.call(rbind, wtsPc1Plus)
  
  # get return of PC portfolios
  pc1MinusRets <- Return.portfolio(R = rets, weights = wtsPc1Minus)
  pc1PlusRets <- Return.portfolio(R = rets, weights = wtsPc1Plus)
  
  # combine them
  combine <-na.omit(cbind(pc1PlusRets, pc1MinusRets))
  colnames(combine) <- c("PCplus", "PCminus")
  
  momEp <- endpoints(combine)
  momWts <- NULL
  for(i in 1:(length(momEp)-momWindow)){
    momSubset <- combine[(momEp[i]+1):(momEp[i+momWindow])]
    momentums <- Return.cumulative(momSubset)
    momWts[[i]] <- xts(momentums==max(momentums), order.by=last(index(momSubset)))
  }
  momWts <- do.call(rbind, momWts)
  
  out <- Return.portfolio(R = combine, weights = momWts)
  colnames(out) <- paste("PCwin", pcWindow, "MomWin", momWindow, sep="_")
  return(list(out, wtsPc1Minus, wtsPc1Plus, combine))
}


pcWindows <- c(6, 9, 12, 15, 18)
momWindows <- c(1, 3, 6, 10, 12)

permutes <- expand.grid(pcWindows, momWindows)

stratStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets))
  stats[5,] <- stats[1,]/stats[4,]
  stats[6,] <- stats[1,]/UlcerIndex(rets)
  rownames(stats)[4] <- "Worst Drawdown"
  rownames(stats)[5] <- "Calmar Ratio"
  rownames(stats)[6] <- "Ulcer Performance Index"
  return(stats)
}

results <- NULL
for(i in 1:nrow(permutes)) {
  tmp <- pcPlusMinus(rets = rets, pcWindow = permutes$Var1[i], momWindow = permutes$Var2[i])
  results[[i]] <- tmp[[1]]
}
results <- do.call(cbind, results)
stats <- stratStats(results)

After a cursory look at the results, it seems the performance is fairly miserable with my implementation, even by the standards of tactical asset allocation models (the good ones have Calmar and Sharpe ratios above 1).

Here are histograms of the Calmar and Sharpe ratios.

(Figures: histograms of the Calmar ratios and the Sharpe ratios across the 25 parameter combinations.)

These values are generally too low for my liking. Here’s a screenshot of the table of all 25 results.

(Screenshot: table of statistics for all 25 parameter combinations.)

While my method of choosing which portfolio to hold is different from David Varadi's (momentum instead of whether or not the aggressive portfolio is above its 200-day moving average), there are numerous studies that show these two methods are closely related, yet the results feel starkly different from (and worse than) those on his site.

I’d certainly be willing to entertain suggestions as to how to improve the process, which will hopefully create some more meaningful results. I also know that AllocateSmartly expressed interest in implementing something along these lines for their estimable library of TAA strategies, so I thought I’d try to do it and see what results I’d find, which in this case, aren’t too promising.

Thanks for reading.

NOTE: I am networking, and actively seeking a position related to my skill set in either Philadelphia, New York City, or remotely. If you know of a position which may benefit from my skill set, feel free to let me know. You can reach me on my LinkedIn profile here, or email me.

A Review of James Picerno’s Quantitative Investment Portfolio Analytics in R

This is a review of James Picerno’s Quantitative Investment Portfolio Analytics in R. Overall, it’s about as fantastic a book as you can get on portfolio optimization until you start getting into corner cases stemming from large amounts of assets.

Here’s a quick summary of what the book covers:

1) How to install R.

2) How to create some rudimentary backtests.

3) Momentum.

4) Mean-Variance Optimization.

5) Factor Analysis.

6) Bootstrapping/Monte-Carlo simulations.

7) Modeling Tail Risk.

8) Risk Parity/Vol Targeting.

9) Index replication.

10) Estimating impacts of shocks.

11) Plotting in ggplot.

12) Downloading/saving data.

All in all, the book teaches the reader many fantastic techniques to get started doing some basic portfolio management using asset-class ETFs, and under the assumption of ideal data–that is, that there are few assets, with concurrent starting times, that the number of assets is much smaller than the number of observations (i.e. 10 asset-class ETFs with 90-day lookback windows, for instance), and other attributes taken for granted to illustrate concepts. I myself have used these concepts time and again (and, in fact, covered some of these topics on this blog, such as volatility targeting, momentum, and mean-variance), but in some of the work projects I've done, the trouble begins when the number of assets grows larger than the number of observations, or when assets move in or out of the investable universe (e.g. a new company has an IPO, or a company goes bankrupt/merges/etc.). It also does not go into the PortfolioAnalytics package, developed by Ross Bennett and Brian Peterson. Having recently started to use this package for a real-world problem, it produces some very interesting results and its potential is immense, with the large caveat that you need an immense amount of computing power to generate lots of results for large-scale problems, which renders it impractical for many individual users. A quadratic optimization on a backtest with around 2400 periods and around 500 assets per rebalancing period (days) took about eight hours on a cloud server (when done sequentially to preserve full path dependency).

However, aside from delving into some somewhat edge-case, appears-more-in-the-professional-world topics, this book is extremely comprehensive. Simply, as far as managing a portfolio of asset-class ETFs goes (essentially, what the inimitable Adam Butler and crew from ReSolve Asset Management talk about, along with Walter's fantastic site, AllocateSmartly), this book will impart a lot of the knowledge that goes into doing those things. While it won't make you as comfortable as, say, an experienced professional like myself is at writing and analyzing portfolio optimization backtests, it will allow you to do a great deal of your own analysis, and certainly a lot more than anyone using Excel.

While I won't rehash what the book covers in this post, what I will say is that it does cover some of the material I've posted in years past. Furthermore, rather than spending half the book on topics such as motivations, behavioral biases, and so on, this book goes right into the content that readers should know in order to execute the tasks they desire. The content is presented in a very coherent, English-and-code, matter-of-fact way, as opposed to a bunch of abstract mathematical derivations that treat practical implementation as an afterthought. Essentially, when one buys a cookbook, they don't get it to read half of it for motivations as to why they should bake their own cake, but for how to do it. And as far as density of how-to goes, this book delivers in a way that I think other authors should strive to emulate.

Furthermore, I think that this book should be required reading for any analyst wanting to work in the field. It’s a very digestible “here’s how you do X” type of book. I.E. “here’s a data set, write a backtest based on these momentum rules, use an inverse-variance weighting scheme, do a Fama-French factor analysis on it”.

In any case, in my opinion, for anyone doing any sort of tactical asset allocation analysis in R, get this book now. For anyone doing any sort of tactical asset allocation analysis in spreadsheets, buy this book sooner than now, and then see the previous sentence. In any case, I’ll certainly be keeping this book on my shelf and referencing it if need be.

Thanks for reading.

Note: I am currently contracting but am currently on the lookout for full-time positions in New York City. If you know of a position which may benefit from my skills, please let me know. My LinkedIn profile can be found here.

A Different Way To Think About Drawdown — Geometric Calmar Ratio

This post will discuss the idea of the geometric Calmar ratio — a way to modify the Calmar ratio to account for compounding returns.

So, one thing that recently had me somewhat annoyed is the way I interpret the Calmar ratio: essentially, it's a back-of-the-envelope measure of how many years it takes you to recover from the worst loss. That is, if a strategy makes 10% a year (on average) and has a loss of 10%, intuition says that from that point on, it'll take about a year, on average, to make up that loss–that is, a Calmar ratio of 1. Put another way, it means that on average, a strategy will make money at the end of 252 trading days.

But, that isn’t really the case in all circumstances. If an investment manager is looking to create a small, meager return for their clients, and is looking to make somewhere between 5-10%, then sure, the Calmar ratio approximation and interpretation makes sense in that context. Or, it makes sense in the context of “every year, we withdraw all profits and deposit to make up for any losses”. But in the context of a hedge fund trying to create large, market-beating returns for its investors, those hedge funds can have fairly substantial drawdowns.

Citadel, one of the gold standards of the hedge fund industry, had a drawdown of more than 50% during the financial crisis, and of course, there was at least one fund that blew up in the storm-in-a-teacup volatility spike on Feb. 5 (see https://www.reuters.com/article/us-usa-fund-volatility/exclusive-ljm-partners-shutting-its-doors-after-vol-mageddon-losses-in-u-s-stocks-idUSKCN1GC29H). In other words, if those guys were professionals, what does that make me? Or if I'm an amateur, what does that make them?

In any case, in order to recover from such losses, it's clear that a strategy would need to make back a lot more than what it lost. Lose 25%? You need 33% to get back to the high water mark. Lose 33%? 50% to get back to even. Lose 50%? 100%. Beyond that? You get the idea.

In order to capture this dynamic, we should write a new version of the Calmar ratio to express this idea.
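
In formula terms, instead of dividing the annualized return by the maximum drawdown, we divide it by the gain needed to get back to the high water mark:

$$\text{Geometric Calmar} = \frac{R_{\text{ann}}}{\frac{1}{1 - \text{MaxDD}} - 1}$$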

So here’s a function to compute the geometric calmar ratio:

require(PerformanceAnalytics)

geomCalmar <- function(r) {
  rAnn <- Return.annualized(r)
  maxDD <- maxDrawdown(r)
  toHighwater <- 1/(1-maxDD) - 1
  out <- rAnn/toHighwater
  return(out)
}

So, let's compare how some symbols stack up. We'll take a high-volatility name (AMZN), the good old S&P 500 (SPY), and a very low volatility instrument (SHY).

getSymbols(c('AMZN', 'SPY', 'SHY'), from = '1990-01-01')
rets <- na.omit(cbind(Return.calculate(Ad(AMZN)), Return.calculate(Ad(SPY)), Return.calculate(Ad(SHY))))
compare <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets), CalmarRatio(rets), geomCalmar(rets))
rownames(compare)[6] <- "Geometric Calmar"
compare

The returns start from July 31, 2002. Here are the statistics.

                           AMZN.Adjusted SPY.Adjusted SHY.Adjusted
Annualized Return             0.3450000   0.09110000   0.01940000
Annualized Std Dev            0.4046000   0.18630000   0.01420000
Annualized Sharpe (Rf=0%)     0.8528000   0.48860000   1.36040000
Worst Drawdown                0.6525491   0.55189461   0.02231459
Calmar Ratio                  0.5287649   0.16498652   0.86861760
Geometric Calmar              0.1837198   0.07393135   0.84923475

For my own proprietary volatility trading strategy, which has a Calmar ratio above 2 (interpretation: a finger-in-the-air estimate that you make a new equity high within six months in the worst-case scenario), here are the statistics:

> CalmarRatio(stratRetsAggressive[[2]]['2011::'])
                Close
Calmar Ratio 3.448497
> geomCalmar(stratRetsAggressive[[2]]['2011::'])
                     Close
Annualized Return 2.588094

Essentially, because the gain required to recover from a loss is always larger than the loss itself, the geometric Calmar ratio will always be lower than the standard Calmar ratio, which is to be expected given the compounding nature of returns.

Essentially, I hope that this gives individuals some food for thought about re-evaluating the Calmar ratio.

Thanks for reading.

NOTES: registration for R/Finance 2018 is open. As usual, I’ll be giving a lightning talk, this time on volatility trading.

I am currently contracting and seeking networking opportunities, along with information about prospective full-time roles starting in July. Those interested in my skill set can feel free to reach out to me on LinkedIn here.

How to Make Like A Chrono Trigger Character and Survive the Apocalypse

This impromptu post will be talking about the events of Feb 5, 2018 in the volatility markets.

Allow me to indulge in a little bit of millennial nostalgia. For those that played Chrono Trigger, odds are, one of their most memorable experiences is first experiencing the Kingdom of Zeal–it was a floating kingdom above the clouds of a never-ending ice age, complete with warm scenery, and calming music.

Long story short, it was powered by harvesting magic from…essentially the monster that was the game’s final enemy. What was my favorite setting in the game eventually had this happen to it.

(Image: the ruins of Zeal.)

Essentially, the lesson from that scenario is: exercise caution first and foremost, and don't mess around with things one does not understand. After the 2017 that XIV had, when it was seemingly impossible to do any wrong, many systematic traders ended up looking foolish. Well, it seems that all good things must come to an end, though it isn't often that they end this violently.

For the record, my aggressive subscription strategy was flat starting on January 31st, while my conservative strategy had been flat for far longer. In short, discretion is sometimes the better part of valor, though for those that are interested in what actually constitutes valor and want to hear it from a quant, you can head over to Alpha Architect. Wes Gray and Jack Vogel will tell you far more about being a badass than I ever could.

However, to put some firm numbers on my trading philosophy:

1*(1+1) = 2.
1*(1-1) = 0.

Make 100% on a trade? You’re a hero for some finite amount of time.
Lose 100%? You’re not just an idiot. You’re done. Kaput. Finished. Career over.

The way I see it is this: in trading, there’s no free lunch, and there are a lot of smart people in the industry.

Put visually:

Risk in the financial markets (especially the volatility trading markets) isn't like this:

(Image: a thin-tailed distribution.)

But like this:

(Image: a distribution with long, fat tails.)

The tails are very long. And in the financial markets, they aren’t so fluffy.

For the record, my subscription strategy, beyond taking a look at my VXX signal, is unaffected by XIV’s termination, as SVXY will slot right in to replace it.

Thanks for reading.

NOTE: I am currently seeking full time employment, consulting opportunities, and networking opportunities in relation to the skills I’ve demonstrated. Contact me on LinkedIn here.