A Basic Logical Invest Global Market Rotation Strategy

This may be one of the simplest strategies I’ve ever presented on this blog, but nevertheless, it works, for some definition of “works”.

Here’s the strategy: take five global market ETFs (MDY, ILF, FEZ, EEM, and EPP), along with a treasury ETF (TLT), and every month, fully invest in the security that had the best momentum. While I’ve tried various other tweaks, none has delivered the high-return performance of the original variant.

Here’s the link to the original strategy.

While I’m not quite certain how best to program the variable lookback period, here is the code for the three-month lookback.

require(quantmod)
require(PerformanceAnalytics)

symbols <- c("MDY", "TLT", "EEM", "ILF", "EPP", "FEZ")
getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
returns <- Return.calculate(prices)
returns <- na.omit(returns)

logicInvestGMR <- function(returns, lookback = 3) {
  ep <- endpoints(returns, on = "months") 
  weights <- list()
  for(i in 2:(length(ep) - lookback)) {
    retSubset <- returns[ep[i]:ep[i+lookback],]
    cumRets <- Return.cumulative(retSubset)
    rankCum <- rank(cumRets)
    weight <- rep(0, ncol(retSubset))
    weight[which.max(cumRets)] <- 1
    weight <- xts(t(weight), order.by=index(last(retSubset)))
    weights[[i]] <- weight
  }
  weights <- do.call(rbind, weights)
  stratRets <- Return.portfolio(R = returns, weights = weights)
  return(stratRets)
}

gmr <- logicInvestGMR(returns)
charts.PerformanceSummary(gmr)

And here’s the performance:

> rbind(table.AnnualizedReturns(gmr), maxDrawdown(gmr), CalmarRatio(gmr))
                          portfolio.returns
Annualized Return                  0.287700
Annualized Std Dev                 0.220700
Annualized Sharpe (Rf=0%)          1.303500
Worst Drawdown                     0.222537
Calmar Ratio                       1.292991

With the resultant equity curve:

While I don’t get the 34% advertised, the risk-to-reward ratio over the duration of the backtest is nevertheless fairly solid for something so simple, and I just wanted to put this out there.
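As for the variable lookback period mentioned earlier, here is one hedged sketch of how it might be handled. This is purely my own interpretation (averaging the cumulative returns over several monthly lookbacks before picking the winner), not necessarily how Logical Invest implements the variable lookback, and the particular set of lookbacks is arbitrary. It reuses the returns object computed above.

#A hedged sketch of one way to program a variable lookback: average the cumulative
#returns over several monthly lookbacks and invest in the security with the best average.
#This is my own interpretation only; the lookback set c(1, 3, 6, 12) is arbitrary.
logicInvestGMRmulti <- function(returns, lookbacks = c(1, 3, 6, 12)) {
  ep <- endpoints(returns, on = "months")
  maxLB <- max(lookbacks)
  weights <- list()
  for(i in 2:(length(ep) - maxLB)) {
    avgCum <- rep(0, ncol(returns))
    for(lb in lookbacks) {
      #cumulative return over the trailing lb months, all windows ending at the same month-end
      retSubset <- returns[ep[i + maxLB - lb]:ep[i + maxLB],]
      avgCum <- avgCum + Return.cumulative(retSubset)/length(lookbacks)
    }
    weight <- rep(0, ncol(returns))
    weight[which.max(avgCum)] <- 1
    weights[[i]] <- xts(t(weight), order.by=index(last(retSubset)))
  }
  weights <- do.call(rbind, weights)
  return(Return.portfolio(R = returns, weights = weights))
}

gmrMulti <- logicInvestGMRmulti(returns)
charts.PerformanceSummary(gmrMulti)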

Thanks for reading.


The Logical Invest Enhanced Bond Rotation Strategy (And the Importance of Dividends)

This post will display my implementation of the Logical Invest Enhanced Bond Rotation strategy. This is a strategy that indeed does work, but it is dependent on reinvesting dividends: bonds pay coupons, and bond ETFs pass those payments through as distributions.

The strategy is fairly simple — using four separate fixed income markets (long-term US government bonds, high-yield bonds, emerging sovereign debt, and convertible bonds), the strategy aims to deliver a low-risk, high Sharpe profile. Every month, it switches to two separate securities, in either a 60-40 or 50-50 split (that is, a 60-40 one way, or the other). My implementation for this strategy is similar to the ones I’ve done for the Logical Invest Universal Investment Strategy, which is to maximize a modified Sharpe ratio in a walk-forward process.

Here’s the code:

LogicInvestEBR <- function(returns, lowerBound, upperBound, period, modSharpeF) {
  count <- 0
  configs <- list()
  instCombos <- combn(colnames(returns), m = 2)
  for(i in 1:ncol(instCombos)) {
    inst1 <- instCombos[1, i]
    inst2 <- instCombos[2, i]
    rets <- returns[,c(inst1, inst2)]
    weightSeq <- seq(lowerBound, upperBound, by = .1)
    for(j in 1:length(weightSeq)) {
      returnConfig <- Return.portfolio(R = rets, 
                      weights = c(weightSeq[j], 1-weightSeq[j]), 
                      rebalance_on="months")
      colnames(returnConfig) <- paste(inst1, weightSeq[j], 
                                inst2, 1-weightSeq[j], sep="_")
      count <- count + 1
      configs[[count]] <- returnConfig
    }
  }
  
  configs <- do.call(cbind, configs)
  cumRets <- cumprod(1+configs)
  
  #rolling cumulative 
  rollAnnRets <- (cumRets/lag(cumRets, period))^(252/period) - 1
  rollingSD <- sapply(X = configs, runSD, n=period)*sqrt(252)
  
  modSharpe <- rollAnnRets/(rollingSD ^ modSharpeF)
  monthlyModSharpe <- modSharpe[endpoints(modSharpe, on="months"),]
  
  findMax <- function(data) {
    return(data==max(data))
  }
  
  #configs$zeroes <- 0 #zeroes for initial periods during calibration
  weights <- t(apply(monthlyModSharpe, 1, findMax))
  weights <- weights*1
  weights <- xts(weights, order.by=as.Date(rownames(weights)))
  weights[is.na(weights)] <- 0
  weights$zeroes <- 1-rowSums(weights)
  configCopy <- configs
  configCopy$zeroes <- 0
  
  stratRets <- Return.portfolio(R = configCopy, weights = weights)
  return(stratRets)  
}

The one thing different about this code is the way I initialize the return streams. It’s an ugly piece of work, but it takes all of the pairwise combinations (that is, 4 choose 2, or 4c2) along with a sequence stepping by 10% for the security weights between the lower and upper bound (that is, if the lower bound is 40% and the upper bound is 60%, the three weights will be 40-60, 50-50, and 60-40). So, in this case, there are 4c2 * 3 = 18 configurations. Do note that this is not at all a framework that scales up: with 20 instruments, there would be 190 pairwise combinations, and anywhere between 3 and 11 (if going from 0-100) weight configurations for each combination. Obviously, not a pretty sight.
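For what it’s worth, those counts are easy to sanity-check in R:

#quick check of the configuration counts mentioned above
choose(4, 2) * length(seq(.4, .6, by = .1)) #6 pairs * 3 weight splits = 18 configurations
choose(20, 2) #190 pairwise combinations with 20 instruments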

Beyond that, it’s the same refrain. Bind the returns together, compute an n-day rolling annualized return (far faster my way than using the rollapply version of Return.annualized), and divide it by the n-day rolling annualized standard deviation raised to the power of the modified Sharpe F factor (1 gives you the Sharpe ratio, 0 gives you pure returns, and greater than 1 puts more of a focus on risk). Take the configuration with the highest modified Sharpe ratio, allocate to it, repeat.
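In case that rolling annualized return shortcut looks suspicious, here’s a quick hedged check on synthetic data (the seed, series length, and window are all arbitrary) that it agrees with an explicit right-aligned rolling window of compounded returns:

#hedged sanity check on made-up data: the ratio-of-cumulative-products shortcut
#matches an explicit rolling window of compounded returns
require(xts)
require(zoo)
set.seed(42)
fakeRets <- xts(rnorm(300, 0, 0.01), order.by = Sys.Date() - 300:1)
n <- 73
cumFake <- cumprod(1 + fakeRets)
fastRoll <- (cumFake/lag(cumFake, n))^(252/n) - 1 #the shortcut used above
slowRoll <- rollapplyr(1 + fakeRets, width = n, FUN = prod)^(252/n) - 1 #explicit window
max(abs(fastRoll - slowRoll), na.rm = TRUE) #differences are floating-point noise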

So, how does this perform? Here’s a test script, using the same 73-day lookback with a modified Sharpe F of 2 that I’ve used in the previous Logical Invest strategies.

symbols <- c("TLT", "JNK", "PCY", "CWB", "VUSTX", "PRHYX", "RPIBX", "VCVSX")
suppressMessages(getSymbols(symbols, from="1995-01-01", src="yahoo"))
etfClose <- Return.calculate(cbind(Cl(TLT), Cl(JNK), Cl(PCY), Cl(CWB)))
etfAdj <- Return.calculate(cbind(Ad(TLT), Ad(JNK), Ad(PCY), Ad(CWB)))
mfClose <- Return.calculate(cbind(Cl(VUSTX), Cl(PRHYX), Cl(RPIBX), Cl(VCVSX)))
mfAdj <- Return.calculate(cbind(Ad(VUSTX), Ad(PRHYX), Ad(RPIBX), Ad(VCVSX)))
colnames(etfClose) <- colnames(etfAdj) <- c("TLT", "JNK", "PCY", "CWB")
colnames(mfClose) <- colnames(mfAdj) <- c("VUSTX", "PRHYX", "RPIBX", "VCVSX")

etfClose <- etfClose[!is.na(etfClose[,4]),]
etfAdj <- etfAdj[!is.na(etfAdj[,4]),]
mfClose <- mfClose[-1,]
mfAdj <- mfAdj[-1,]

etfAdjTest <- LogicInvestEBR(returns = etfAdj, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)

etfClTest <- LogicInvestEBR(returns = etfClose, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)

mfAdjTest <- LogicInvestEBR(returns = mfAdj, lowerBound = .4, upperBound = .6,
                            period = 73, modSharpeF = 2)

mfClTest <- LogicInvestEBR(returns = mfClose, lowerBound = .4, upperBound = .6,
                           period = 73, modSharpeF = 2)

fiveStats <- function(returns) {
  return(rbind(table.AnnualizedReturns(returns), 
               maxDrawdown(returns), CalmarRatio(returns)))
}

etfs <- cbind(etfAdjTest, etfClTest)
colnames(etfs) <- c("Adjusted ETFs", "Close ETFs")
charts.PerformanceSummary((etfs))

mutualFunds <- cbind(mfAdjTest, mfClTest)
colnames(mutualFunds) <- c("Adjusted MFs", "Close MFs")
charts.PerformanceSummary(mutualFunds)
chart.TimeSeries(log(cumprod(1+mutualFunds)), legend.loc="topleft")

fiveStats(etfs)
fiveStats(mutualFunds)

So, first, the results of the ETFs:

Equity curve:

Five statistics:

> fiveStats(etfs)
                          Adjusted ETFs Close ETFs
Annualized Return            0.12320000 0.08370000
Annualized Std Dev           0.06780000 0.06920000
Annualized Sharpe (Rf=0%)    1.81690000 1.20980000
Worst Drawdown               0.06913986 0.08038459
Calmar Ratio                 1.78158934 1.04078405

In other words, reinvesting dividends accounts for roughly a third of the annualized return, and an even larger share of the Calmar ratio.

Let’s look at the mutual funds. Note that these are for the sake of illustration only–you can’t trade out of mutual funds every month.

Equity curve:

Log scale:

Statistics:

                          Adjusted MFs Close MFs
Annualized Return           0.11450000 0.0284000
Annualized Std Dev          0.05700000 0.0627000
Annualized Sharpe (Rf=0%)   2.00900000 0.4532000
Worst Drawdown              0.09855271 0.2130904
Calmar Ratio                1.16217559 0.1332706

In this case, the difference is night and day, though how much of that is due to the data source is also an open question. Yahoo isn’t the greatest when it comes to data, and I’m not sure how much the data quality deteriorates going back that far. However, the takeaway seems to be this: with bond strategies, dividends need to be dealt with, and when considering returns data presented to you, keep in mind that those adjusted returns assume the investor stays on top of dividend maintenance. Fail to reinvest the dividends in a timely fashion, and, well, the gap can be quite large.

To put it into perspective, as I was writing this post, I wondered whether or not most of this was indeed due to dividends. Here’s a plot of the difference in returns between adjusted and close ETF returns.

chart.TimeSeries(etfAdj - etfClose, legend.loc="topleft", date.format="%Y-%m",
                 main = "Return differences adjusted vs. close ETFs")

With the resulting image:

While there is some noise on the order of 10^-5 on most days, there are clear spikes observable in the return differences. Those are dividends, and their compounding makes a sizable difference. In one case for CWB, the difference is particularly striking (Dec. 29, 2014). In fact, here’s a quick analysis of the dividend effect.

dividends <- etfAdj - etfClose
divReturns <- list()
for(i in 1:ncol(dividends)) {
  diffStream <- dividends[,i]
  divPayments <- diffStream[diffStream >= 1e-3]
  divReturns[[i]] <- Return.annualized(divPayments)
}
divReturns <- do.call(cbind, divReturns)
divReturns

divReturns/Return.annualized(etfAdj)

And the result:

> divReturns
                         TLT        JNK        PCY        CWB
Annualized Return 0.03420959 0.08451723 0.05382363 0.05025999

> divReturns/Return.annualized(etfAdj)
                       TLT       JNK       PCY       CWB
Annualized Return 0.453966 0.6939243 0.5405922 0.3737499

In short, the effect of the dividend is massive. In some instances, such as with JNK, the dividend comprises more than 50% of the annualized returns for the security!

Basically, I’d like to hammer the point home one last time–backtests using adjusted data assume instantaneous maintenance of dividends. In order to achieve the optimistic returns seen in the backtests, these dividend payments must be reinvested ASAP. In short, this is the fine print on this strategy, and is a small, but critical detail that the SeekingAlpha article doesn’t mention. (Seriously, do a ctrl + F in your browser for the word “dividend”. It won’t come up in the article itself.) I wanted to make sure to add it.

One last thing: gaudy numbers when using monthly returns!

> fiveStats(apply.monthly(etfs, Return.cumulative))
                          Adjusted ETFs Close ETFs
Annualized Return            0.12150000   0.082500
Annualized Std Dev           0.06490000   0.067000
Annualized Sharpe (Rf=0%)    1.87170000   1.232100
Worst Drawdown               0.03671871   0.049627
Calmar Ratio                 3.30769620   1.662642

Look! A Calmar Ratio of 3.3, and a Sharpe near 2!*

*: Must manage dividends. Statistics are computed from monthly returns.

Okay, in all fairness, this is a pretty solid strategy, once one commits to managing the dividends. I just felt that it should have been a topic made front and center considering its importance in this case, rather than simply swept under the “we use adjusted returns” rug, since in this instance, the effect of dividends is massive.

In conclusion, while I will more or less confirm the strategy’s actual risk/reward performance (unlike some other SeekingAlpha strategies I’ve backtested), which, in all honesty, I find really impressive, it comes with a caveat like the rest of them. However, the caveat of “be detail-oriented/meticulous/paranoid and reinvest those dividends!” in my opinion is a caveat that’s a lot easier to live with than 30%+ drawdowns that were found lurking in other SeekingAlpha strategies. So for those that can stay on top of those dividends (whether manually, or with machine execution), here you go. I’m basically confirming the performance of Logical Invest’s strategy, but just belaboring one important detail.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The Downside of Rankings-Based Strategies

This post will demonstrate a downside to rankings-based strategies, particularly when using data of a questionable quality (which, unless one pays multiple thousands of dollars per month for data, most likely is of questionable quality). Essentially, by making one small change to the way the strategy filters, it introduces a massive performance drop in terms of drawdown. This exercise effectively demonstrates a different possible way of throwing a curve-ball at ranking strategies to test for robustness.

Recently, a discussion came up between myself, Terry Doherty, Cliff Smith, and some others on Seeking Alpha regarding what happened when I substituted the 63-day SMA for the three month SMA in Cliff Smith’s QTS strategy (quarterly tactical strategy…strategy).

Essentially, by simply substituting a 63-day SMA (that is, using daily data instead of monthly) for a 3-month SMA, the results were drastically affected.

Here’s the new QTS code, now in a function.

qts <- function(prices, nShort = 20, nLong = 105, nMonthSMA = 3, nDaySMA = 63, wRankShort=1, wRankLong=1.01, 
                movAvgType = c("monthly", "daily"), cashAsset="VUSTX", returnNames = FALSE) {
  cashCol <- grep(cashAsset, colnames(prices))
  
  #start our data off on the security with the least data (VGSIX in this case)
  prices <- prices[!is.na(prices[,7]),] 
  
  #cash is not a formal asset in our ranking
  cashPrices <- prices[, cashCol]
  prices <- prices[, -cashCol]
  
  #compute momentums
  rocShort <- prices/lag(prices, nShort) - 1
  rocLong <- prices/lag(prices, nLong) - 1
  
  #take the endpoints of quarter start/end
  quarterlyEps <- endpoints(prices, on="quarters")
  monthlyEps <- endpoints(prices, on = "months")
  
  #take the prices at quarterly endpoints
  quarterlyPrices <- prices[quarterlyEps,]
  
  #short momentum at quarterly endpoints (20 day)
  rocShortQtrs <- rocShort[quarterlyEps,]
  
  #long momentum at quarterly endpoints (105 day)
  rocLongQtrs <- rocLong[quarterlyEps,]
  
  #rank short momentum, best highest rank
  rocSrank <- t(apply(rocShortQtrs, 1, rank))
  
  #rank long momentum, best highest rank
  rocLrank <- t(apply(rocLongQtrs, 1, rank))
  
  #total rank, long slightly higher than short, sum them
  totalRank <- wRankLong * rocLrank + wRankShort * rocSrank 
  
  #function that takes 100% position in highest ranked security
  maxRank <- function(rankRow) {
    return(rankRow==max(rankRow))
  }
  
  #apply above function to our quarterly ranks every quarter
  rankPos <- t(apply(totalRank, 1, maxRank))
  
  #SMA of securities, only use monthly endpoints
  #subset to quarters
  #then filter
  movAvgType = movAvgType[1]
  if(movAvgType=="monthly") {
    monthlyPrices <- prices[monthlyEps,]
    monthlySMAs <- xts(apply(monthlyPrices, 2, SMA, n=nMonthSMA), order.by=index(monthlyPrices))
    quarterlySMAs <- monthlySMAs[index(quarterlyPrices),]
    smaFilter <- quarterlyPrices > quarterlySMAs
  } else if (movAvgType=="daily") {
    smas <- xts(apply(prices, 2, SMA, n=nDaySMA), order.by=index(prices))
    quarterlySMAs <- smas[index(quarterlyPrices),]
    smaFilter <- quarterlyPrices > quarterlySMAs
  } else {
    stop("invalid moving average type")
  }
  
  finalPos <- rankPos*smaFilter
  finalPos <- finalPos[!is.na(rocLongQtrs[,1]),]
  cash <- xts(1-rowSums(finalPos), order.by=index(finalPos))
  finalPos <- merge(finalPos, cash, join='inner')
  
  prices <- merge(prices, cashPrices, join='inner')
  returns <- Return.calculate(prices)
  stratRets <- Return.portfolio(returns, finalPos)
  
  if(returnNames) {
    findNames <- function(pos) {
      return(names(pos[pos==1]))
    }
    tmp <- apply(finalPos, 1, findNames)
    assetNames <- xts(tmp, order.by=as.Date(names(tmp)))
    return(list(assetNames, stratRets))
  }
  return(stratRets)
}

The one change I made is this:

  movAvgType = movAvgType[1]
  if(movAvgType=="monthly") {
    monthlyPrices <- prices[monthlyEps,]
    monthlySMAs <- xts(apply(monthlyPrices, 2, SMA, n=nMonthSMA), order.by=index(monthlyPrices))
    quarterlySMAs <- monthlySMAs[index(quarterlyPrices),]
    smaFilter <- quarterlyPrices > quarterlySMAs
  } else if (movAvgType=="daily") {
    smas <- xts(apply(prices, 2, SMA, n=nDaySMA), order.by=index(prices))
    quarterlySMAs <- smas[index(quarterlyPrices),]
    smaFilter <- quarterlyPrices > quarterlySMAs
  } else {
    stop("invalid moving average type")
  }

In essence, it allows the function to use either a monthly-calculated moving average, or a daily, which is then subset to the quarterly frequency of the rest of the data.

(I also allow the function to return the names of the selected securities.)

So now we can do two tests:

1) The initial parameter settings (20-day short-term momentum, 105-day long-term momentum, equal-weight their ranks with the tiebreaker going to the long-term momentum, and a 3-month SMA filter)
2) The same exact parameter settings, except a 63-day SMA for the filter.

Here’s the code to do that.

#get our data from yahoo, use adjusted prices
symbols <- c("NAESX", #small cap
             "PREMX", #emerging bond
             "VEIEX", #emerging markets
             "VFICX", #intermediate investment grade
             "VFIIX", #GNMA mortgage
             "VFINX", #S&P 500 index
             "VGSIX", #MSCI REIT
             "VGTSX", #total intl stock idx
             "VUSTX") #long term treasury (cash)

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))

monthlySMAqts <- qts(prices, returnNames=TRUE)
dailySMAqts <- qts(prices, wRankShort=.95, wRankLong=1.05, movAvgType = "daily", returnNames=TRUE)

retsComparison <- cbind(monthlySMAqts[[2]], dailySMAqts[[2]])
colnames(retsComparison) <- c("monthly SMA qts", "daily SMA qts")
retsComparison <- retsComparison["2003::"]
charts.PerformanceSummary(retsComparison["2003::"])
rbind(table.AnnualizedReturns(retsComparison["2003::"]), maxDrawdown(retsComparison["2003::"]))

And here are the results:

Statistics:

                          monthly SMA qts daily SMA qts
Annualized Return               0.2745000     0.2114000
Annualized Std Dev              0.1725000     0.1914000
Annualized Sharpe (Rf=0%)       1.5915000     1.1043000
Worst Drawdown                  0.1911616     0.3328411

With the corresponding equity curves:

Here are the instances in which the selections do not match, thanks to the differing filters:

selectedNames <- cbind(monthlySMAqts[[1]], dailySMAqts[[1]])
colnames(selectedNames) <- c("Monthly SMA Filter", "Daily SMA Filter")
differentSelections <- selectedNames[selectedNames[,1]!=selectedNames[,2],]

With the results:

           Monthly SMA Filter Daily SMA Filter
1997-03-31 "VGSIX"            "cash"          
2007-12-31 "cash"             "PREMX"         
2008-06-30 "cash"             "VFIIX"         
2008-12-31 "cash"             "NAESX"         
2011-06-30 "cash"             "NAESX"  

Now, of course, many can make the argument that Yahoo’s data is junk, that my backtest doesn’t reflect reality, and so on, but that would essentially miss the point: this data, while not a perfect realization of reality, may as well have been valid (much like the simulated or synthesized data academics use to explore additional scenarios). All I did here was change the filter to something logically comparable, namely computing the moving average filter on a different time-scale, which does not in any way change the investment logic. From 2003 onward, this change only affected the strategy in four places. However, those instances were enough to create some noticeable changes (for the worse) in the strategy’s performance. Essentially, the downside of rankings-based strategies is that when the overall number of selected instruments (in this case, ONE!) is small, a few small changes in parameters, data, and so on can lead to drastically different results.
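For those who want to throw more curve-balls of this sort, here’s a hedged sketch of one way to do it: sweep the daily SMA length across a handful of nearby values and see how much the statistics move around. It reuses the qts() function and prices object from above; the particular grid of lookbacks is arbitrary, so treat it as a robustness check rather than a statement about which lookback is correct.

#hedged robustness sketch: vary the daily SMA lookback over an arbitrary grid
#and compare the resulting statistics; reuses qts() and prices from above
smaLengths <- c(42, 52, 63, 73, 84)
smaSweep <- list()
for(i in 1:length(smaLengths)) {
  run <- qts(prices, nDaySMA = smaLengths[i], movAvgType = "daily")
  colnames(run) <- paste0("dailySMA", smaLengths[i])
  smaSweep[[i]] <- run["2003::"]
}
smaSweep <- do.call(cbind, smaSweep)
rbind(table.AnnualizedReturns(smaSweep), maxDrawdown(smaSweep))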

As I write this, Cliff Smith already has ideas as to how to counteract this phenomenon. However, in my experience, once a strategy starts getting into “how do we smooth out that one bump on the equity curve” territory, I think it’s time to go back and re-examine the strategy altogether. In my opinion, while the idea of momentum is, of course, sound, with a great deal of literature devoted to it, the idea of selecting just one instrument at a time as the be-all-end-all strategy does not sit well with me. Nevertheless, QTS presents an interesting framework for analyzing small subgroups of securities, and for using them as one layer of an overarching strategy framework, such that the return streams are sub-strategies instead of raw instruments.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The Logical-Invest “Universal Investment Strategy”–A Walk Forward Process on SPY and TLT

I’m sure we’ve all heard about diversified stock and bond portfolios. In its simplest, most diluted form, one can be comprised of the SPY and TLT ETFs. The concept introduced by Logical Invest, in a Seeking Alpha article written by Frank Grossman (also see link here), essentially uses a walk-forward methodology of maximizing a modified Sharpe ratio, biased heavily in favor of low volatility rather than returns. That is, over a 72-day moving window, it selects the SPY-TLT weighting configuration that maximizes total returns divided by the standard deviation raised to the power of 5/2. To put it into perspective, at a power of 1, this is the basic Sharpe ratio, and at a power of 0, it is just a momentum-maximization algorithm.

The process for this strategy is simple: every month, rebalance into whichever SPY/TLT mix (in 5% weight increments) maximized that quantity, returns/vol^2.5, over the previous 72 days.
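To see just how heavily that exponent tilts the objective toward low volatility, here’s a toy calculation with made-up numbers (these are not figures from the strategy):

#toy illustration of the objective above, with made-up inputs:
#annualized return divided by annualized volatility raised to the 2.5 power
modObjective <- function(annRet, annVol, f = 2.5) annRet/annVol^f
modObjective(annRet = 0.08, annVol = 0.10) #~25.3, lower return but much lower vol
modObjective(annRet = 0.12, annVol = 0.18) #~8.7, the higher-return mix loses under this objective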

Here’s the code for obtaining the data and computing the necessary quantities:

require(quantmod)
require(PerformanceAnalytics)
getSymbols(c("SPY", "TLT"), from="1990-01-01")
returns <- merge(Return.calculate(Ad(SPY)), Return.calculate(Ad(TLT)), join='inner')
returns <- returns[-1,]
configs <- list()
for(i in 1:21) {
  weightSPY <- (i-1)*.05
  weightTLT <- 1-weightSPY
  config <- Return.portfolio(R = returns, weights=c(weightSPY, weightTLT), rebalance_on = "months")
  configs[[i]] <- config
}
configs <- do.call(cbind, configs)
cumRets <- cumprod(1+configs)
period <- 72

roll72CumAnn <- (cumRets/lag(cumRets, period))^(252/period) - 1
roll72SD <- sapply(X = configs, runSD, n=period)*sqrt(252)

Next, the code for creating the weights:

sd_f_factor <- 2.5
modSharpe <- roll72CumAnn/roll72SD^sd_f_factor
monthlyModSharpe <- modSharpe[endpoints(modSharpe, on="months"),]

findMax <- function(data) {
  return(data==max(data))
}

weights <- t(apply(monthlyModSharpe, 1, findMax))
weights <- weights*1
weights <- xts(weights, order.by=as.Date(rownames(weights)))
weights[is.na(weights)] <- 0
weights$zeroes <- 1-rowSums(weights)
configs$zeroes <- 0

That is, simply take the setting that maximizes the monthly modified Sharpe Ratio calculation at each rebalancing date (the end of every month).

Next, here’s the performance:

stratRets <- Return.portfolio(R = configs, weights = weights)
rbind(table.AnnualizedReturns(stratRets), maxDrawdown(stratRets))
charts.PerformanceSummary(stratRets)

Which gives the results:

> rbind(table.AnnualizedReturns(stratRets), maxDrawdown(stratRets))
                          portfolio.returns
Annualized Return                 0.1317000
Annualized Std Dev                0.0990000
Annualized Sharpe (Rf=0%)         1.3297000
Worst Drawdown                    0.1683851

With the following equity curve:

Not perfect, but how does it compare to the ingredients?

Let’s take a look:

stratAndComponents <- merge(returns, stratRets, join='inner')
charts.PerformanceSummary(stratAndComponents)
rbind(table.AnnualizedReturns(stratAndComponents), maxDrawdown(stratAndComponents))
apply.yearly(stratAndComponents, Return.cumulative)

Here are the usual statistics:

> rbind(table.AnnualizedReturns(stratAndComponents), maxDrawdown(stratAndComponents))
                          SPY.Adjusted TLT.Adjusted portfolio.returns
Annualized Return            0.0907000    0.0783000         0.1317000
Annualized Std Dev           0.1981000    0.1381000         0.0990000
Annualized Sharpe (Rf=0%)    0.4579000    0.5669000         1.3297000
Worst Drawdown               0.5518552    0.2659029         0.1683851

In short, it seems the strategy performs far better than either of the ingredients. Let’s see if the equity curve comparison reflects this.

Indeed, it does. While it does indeed have the drawdown in the crisis, both instruments were in drawdown at the time, so it appears that the strategy made the best of a bad situation.

Here are the annual returns:

> apply.yearly(stratAndComponents, Return.cumulative)
           SPY.Adjusted TLT.Adjusted portfolio.returns
2002-12-31  -0.02054891  0.110907611        0.01131366
2003-12-31   0.28179336  0.015936985        0.12566042
2004-12-31   0.10695067  0.087089794        0.09724221
2005-12-30   0.04830869  0.085918063        0.10525398
2006-12-29   0.15843880  0.007178861        0.05294557
2007-12-31   0.05145526  0.102972399        0.06230742
2008-12-31  -0.36794099  0.339612265        0.19590423
2009-12-31   0.26352114 -0.218105306        0.18826736
2010-12-31   0.15056113  0.090181150        0.16436950
2011-12-30   0.01890375  0.339915713        0.24562838
2012-12-31   0.15994578  0.024083393        0.06051237
2013-12-31   0.32303535 -0.133818884        0.13760060
2014-12-31   0.13463980  0.273123290        0.19637382
2015-02-20   0.02773183  0.006922893        0.02788726

2002 was an incomplete year. However, what’s interesting here is that, on the whole, while the strategy rarely if ever does as well as the better of the two instruments, it always outperforms the worse of the two. Not only that, but it has delivered a positive performance in every year of the backtest, even when one instrument or the other was taking serious blows to performance, such as SPY in 2008, and TLT in 2009 and 2013.

For the record, here is the weight of SPY in the strategy.

weightSPY <- apply(monthlyModSharpe, 1, which.max)
weightSPY <- do.call(rbind, weightSPY)
weightSPY <- (weightSPY-1)*.05
align <- cbind(weightSPY, stratRets)
align <- na.locf(align)
chart.TimeSeries(align[,1], date.format="%Y", ylab="Weight SPY", main="Weight of SPY in SPY-TLT pair")

Now while this may serve as a standalone strategy for some people, the takeaway in my opinion from this is that dynamically re-weighting two return streams that share a negative correlation can lead to some very strong results compared to the ingredients from which they were formed. Furthermore, rather than simply rely on one number to summarize a relationship between two instruments, the approach that Frank Grossman took to actually model the combined returns was one I find interesting, and undoubtedly has applications as a general walk-forward process.
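As a quick, hedged sanity check of that negative-correlation point, here’s a rolling one-year correlation between the two return streams, reusing the returns object computed earlier (the 252-day window is an arbitrary choice):

#rolling 252-day correlation between the SPY and TLT daily returns from above
#the window length is arbitrary; this is just a visual check of the negative correlation
require(TTR)
rollCorSpyTlt <- na.omit(runCor(returns[,1], returns[,2], n = 252))
chart.TimeSeries(rollCorSpyTlt, main = "Rolling 1-year correlation, SPY vs. TLT returns")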

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The Quarterly Tactical Strategy (aka QTS)

This post introduces the Quarterly Tactical Strategy, introduced by Cliff Smith in a Seeking Alpha article. It presents a variation on the typical dual-momentum strategy that only trades once a quarter, yet delivers a seemingly solid risk/return profile. The article leaves off a protracted period of unimpressive performance at the turn of the millennium, however.

First off, due to the imprecision of the English language, I received some help from TrendXplorer in implementing this strategy. Those who are fans of Amibroker are highly encouraged to visit his blog.

In any case, this strategy is fairly simple:

Take a group of securities (in this case, 8 mutual funds), and do the following:

Rank a long momentum (105 days) and a short momentum (20 days), and invest in the security with the highest composite rank, with ties going to the long momentum (that is, .501*longRank + .499*shortRank, for instance). If the price of the security with the highest composite rank is above its three-month SMA, invest in that security; otherwise, hold cash.

There are two critical points that must be made here:

1) The three-month SMA is *not* a 63-day SMA. It is a three-point SMA computed on the monthly closing prices of that security (see the short illustration after this list).
2) Unlike in flexible asset allocation or elastic asset allocation, the cash asset is not treated as a formal asset.
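To make the first point concrete, here’s a small hedged illustration of how the two filters differ. SPY and the 2010 start date are purely arbitrary choices for the example; any security would do.

#hedged illustration of point 1: a 3-point SMA of month-end prices is not the same
#series as a 63-day SMA of daily prices sampled at month-ends; SPY is an arbitrary example
require(quantmod)
require(TTR)
getSymbols("SPY", from="2010-01-01")
spyAd <- Ad(SPY)
monthEnds <- endpoints(spyAd, on = "months")
threeMonthSMA <- SMA(spyAd[monthEnds,], n = 3) #SMA on monthly closes
sixtyThreeDaySMA <- SMA(spyAd, n = 63)[index(spyAd[monthEnds,]),] #daily SMA sampled at month-ends
smaCompare <- cbind(threeMonthSMA, sixtyThreeDaySMA)
colnames(smaCompare) <- c("threeMonthSMA", "sixtyThreeDaySMA")
tail(smaCompare)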

Let’s look at the code. Here’s the data, which consists of adjusted mutual fund prices. (With quarterly turnover, the frequent-trading constraint of not trading out of these funds too quickly is satisfied, though I’m not sure how dividends are treated; that is, whether a retail investor would actually realize these returns, less a hopefully tiny transaction cost through their broker, hopefully not much more than $1 per transaction.)

require(quantmod)
require(PerformanceAnalytics)
require(TTR)

#get our data from yahoo, use adjusted prices
symbols <- c("NAESX", #small cap
             "PREMX", #emerging bond
             "VEIEX", #emerging markets
             "VFICX", #intermediate investment grade
             "VFIIX", #GNMA mortgage
             "VFINX", #S&P 500 index
             "VGSIX", #MSCI REIT
             "VGTSX", #total intl stock idx
             "VUSTX") #long term treasury (cash)

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))

#define our cash asset and keep track of which column it is
cashAsset <- "VUSTX"
cashCol <- grep(cashAsset, colnames(prices))

#start our data off on the security with the least data (VGSIX in this case)
prices <- prices[!is.na(prices[,7]),] 

#cash is not a formal asset in our ranking
cashPrices <- prices[, cashCol]
prices <- prices[, -cashCol]

Nothing anybody hasn’t seen before up to this point: get the data, start it at the inception of the youngest mutual fund, separate out the cash prices, and move along.

What follows is a rather rough implementation of QTS, not wrapped up in any sort of function that others can plug and play with (though I hope I made the code readable enough for others to tinker with).

Let’s define parameters and compute momentum.

#define our parameters
nShort <- 20
nLong <- 105
nMonthSMA <- 3

#compute momentums
rocShort <- prices/lag(prices, nShort) - 1
rocLong <- prices/lag(prices, nLong) - 1

Now comes some endpoints functionality (or, more colloquially, magic) that the xts library provides. It’s what allows people to get work done in R much faster than in other programming languages.
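For those unfamiliar with it, here’s a tiny toy example (the dates are arbitrary) of what endpoints actually returns:

#toy illustration of endpoints(): it returns the row numbers of the last observation
#in each period, with a leading zero by convention
require(xts)
toy <- xts(1:10, order.by = as.Date("2015-01-26") + 0:9)
endpoints(toy, on = "months")
#[1]  0  6 10  (row 6 is the last January observation, row 10 the last row overall)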

#take the endpoints of quarter start/end
quarterlyEps <- endpoints(prices, on="quarters")
monthlyEps <- endpoints(prices, on = "months")

#take the prices at quarterly endpoints
quarterlyPrices <- prices[quarterlyEps,]

#short momentum at quarterly endpoints (20 day)
rocShortQtrs <- rocShort[quarterlyEps,]

#long momentum at quarterly endpoints (105 day)
rocLongQtrs <- rocLong[quarterlyEps,]

In short, get the quarterly endpoints (and monthly, we need those for the monthly SMA which you’ll see shortly) and subset our momentum computations on those quarterly endpoints. Now let’s get the total rank for those subset-on-quarters momentum computations.

#rank short momentum, best highest rank
rocSrank <- t(apply(rocShortQtrs, 1, rank))

#rank long momentum, best highest rank
rocLrank <- t(apply(rocLongQtrs, 1, rank))

#total rank, long slightly higher than short, sum them
totalRank <- 1.01*rocLrank + rocSrank 

#function that takes 100% position in highest ranked security
maxRank <- function(rankRow) {
  return(rankRow==max(rankRow))
}

#apply above function to our quarterly ranks every quarter
rankPos <- t(apply(totalRank, 1, maxRank))

So as you can see, I rank the momentum computations by row, take a weighted sum (in slight favor of the long momentum), and then simply take the security with the highest rank at every period, giving me one 1 in every row and 0s otherwise.

Now let’s do the other end of what determines position, which is the SMA filter. In this case, we need monthly data points for our three-month SMA, and then subset it to quarters to be on the same timescale as the quarterly ranks.

#SMA of securities, only use monthly endpoints
#subset to quarters
#then filter
monthlyPrices <- prices[monthlyEps,]
monthlySMAs <- xts(apply(monthlyPrices, 2, SMA, n=nMonthSMA), order.by=index(monthlyPrices))
quarterlySMAs <- monthlySMAs[index(quarterlyPrices),]
smaFilter <- quarterlyPrices > quarterlySMAs

Now let’s put it together to get our final positions. Our cash position is simply one if we don’t have a single investment in the time period, zero else.

finalPos <- rankPos*smaFilter
finalPos <- finalPos[!is.na(rocLongQtrs[,1]),]
cash <- xts(1-rowSums(finalPos), order.by=index(finalPos))
finalPos <- merge(finalPos, cash, join='inner')

Now we can finally compute our strategy returns.

prices <- merge(prices, cashPrices, join='inner')
returns <- Return.calculate(prices)
stratRets <- Return.portfolio(returns, finalPos)
table.AnnualizedReturns(stratRets)
maxDrawdown(stratRets)
charts.PerformanceSummary(stratRets)
plot(log(cumprod(1+stratRets)))

So what do things look like?

Like this:

> table.AnnualizedReturns(stratRets)
                          portfolio.returns
Annualized Return                    0.1899
Annualized Std Dev                   0.1619
Annualized Sharpe (Rf=0%)            1.1730
> maxDrawdown(stratRets)
[1] 0.1927991

And since the first equity curve doesn’t give much of an indication in the early years, I’ll take Tony Cooper’s (of Double Digit Numerics) advice and show the log equity curve as well.

In short, from 1997 through 2002, this strategy seemed to be going nowhere, and then took off. As I was able to get this backtest going back to 1997, it makes me wonder why the SeekingAlpha article only started it in 2003, since even with 1997-2002 thrown in, this strategy’s risk/reward profile still looks fairly solid: a return-to-max-drawdown ratio of about 1 (slightly less, but that’s okay for something that turns over so infrequently, and in so few securities!), and a Sharpe ratio higher than 1. Certainly better than what the market itself offered retail investors over the same period of time. Perhaps Cliff Smith himself could chime in regarding his choice of time frame.

In any case, Cliff Smith marketed the strategy as having a higher than 28% CAGR, and his article was published on August 15, 2014, and started from 2003. Let’s see if we can replicate those results.

stratRets <- stratRets["2002-12-31::2014-08-15"]
table.AnnualizedReturns(stratRets)
maxDrawdown(stratRets)
charts.PerformanceSummary(stratRets)
plot(log(cumprod(1+stratRets)))

Which results in this:

> table.AnnualizedReturns(stratRets)
                          portfolio.returns
Annualized Return                    0.2862
Annualized Std Dev                   0.1734
Annualized Sharpe (Rf=0%)            1.6499
> maxDrawdown(stratRets)
[1] 0.1911616

A far improved risk/return profile without 1997-2002 (or the out-of-sample period after Cliff Smith’s publishing date). Here are the two equity curves in-sample.

In short, the results look better, and the SeekingAlpha article’s results are validated.

Now, let’s look at the out-of-sample periods on their own.

stratRets <- Return.portfolio(returns, finalPos)
earlyOOS <- stratRets["::2002-12-31"]
table.AnnualizedReturns(earlyOOS)
maxDrawdown(earlyOOS)
charts.PerformanceSummary(earlyOOS)

Here are the results:

> table.AnnualizedReturns(earlyOOS)
                          portfolio.returns
Annualized Return                    0.0321
Annualized Std Dev                   0.1378
Annualized Sharpe (Rf=0%)            0.2327
> maxDrawdown(earlyOOS)
[1] 0.1927991

And with the corresponding equity curve (which does not need a log-scale this time).

In short, it basically did nothing for an entire five years. That’s rough, and I definitely don’t like that it was left off of the SeekingAlpha article. Any time I can extend a backtest further back than a strategy’s original author and find skeletons in the closet (as happened with each and every one of Harry Long’s strategies), it sets off red flags on this end, so I’m hoping there’s some good explanation for leaving off 1997-2002 that I’m simply missing.

Lastly, let’s look at the out-of-sample performance after the article’s publication date.

lateOOS <- stratRets["2014-08-15::"]
charts.PerformanceSummary(lateOOS)
table.AnnualizedReturns(lateOOS)
maxDrawdown(lateOOS)

With the following results:

> table.AnnualizedReturns(lateOOS)
                          portfolio.returns
Annualized Return                    0.0752
Annualized Std Dev                   0.1426
Annualized Sharpe (Rf=0%)            0.5277
> maxDrawdown(lateOOS)
[1] 0.1381713

And the following equity curve:

Basically, while it’s ugly, it made new equity highs over only two more transactions (and in such a small sample size, anything can happen), so I’ll put this one down as a small, ugly win, but a win nevertheless.

If anyone has any questions or comments about this strategy, I’d love to see them, as this is basically a first-pass replica. To Mr. Cliff Smith’s credit, the results check out, and when the worst thing one can say about a strategy is that it had a period of flat performance (namely, when the market crested at the end of the Clinton administration, right before the dot-com bubble burst), well, that’s not the worst thing in the world.

More replications (including one requested by several readers) will be upcoming.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

A New Harry Long Strategy and A Couple of New PerfA Functions

So, Harry Long (of Houston) came out with a new strategy on SeekingAlpha involving his usual mix of SPXL (3x leveraged SPY), TMF (3x leveraged TLT), and some volatility instruments (in this case, ZIV and TVIX). Now, since we’ve tread this path before, expectations are rightfully set. It’s a strategy that’s going to look good in the sample he used, it’s going to give some performance back in the crisis, and it’ll ultimately prove to be a simple-to-implement, simple-to-backtest strategy with its own set of ups and downs that does outperform the usual buy-and-hold indices.

Once again, a huge thanks to Mr. Helmuth Vollmeier for the long history volatility data.

So here’s the code (I’ll skip a lot of the comparing equity curves of my synthetic instruments to the Yahoo-finance variants, as you’ve all seen that before) to get to the initial equity curve comparison.

require(downloader)
require(PerformanceAnalytics)
require(quantmod) #needed for getSymbols

download("https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT",
         destfile="longVXX.txt")
VXX <- xts(read.zoo("longVXX.txt", sep=",", header=TRUE))
TVIXrets <- Return.calculate(Cl(VXX))*2
getSymbols("TVIX", from="1990-01-01")
TVIX <- TVIX[-which(index(TVIX)=="2014-12-30"),] #trashy Yahoo data, removing obvious bad print
compare <- merge(TVIXrets, Return.calculate(Ad(TVIX)), join='inner')
charts.PerformanceSummary(compare)
charts.PerformanceSummary(compare["2012::"])
charts.PerformanceSummary(compare["2013::"])
charts.PerformanceSummary(compare["2014::"])
charts.PerformanceSummary(compare["2015::"]) #okay we're good

download("https://www.dropbox.com/s/jk3ortdyru4sg4n/ZIVlong.TXT",
         destfile="longZIV.txt")
ZIV <- xts(read.zoo("longZIV.txt", sep=",", header=TRUE))
ZIVrets <- Return.calculate(Cl(ZIV))

getSymbols("SPY", from="1990-01-01")
SPXLrets <- Return.calculate(Ad(SPY))*3

getSymbols("TMF", from="1990-01-01")
TMFrets <- Return.calculate(Ad(TMF))
getSymbols("TLT", from="1990-01-01")
TLTrets <- Return.calculate(Ad(TLT))
tmf3TLT <- merge(TMFrets, 3*TLTrets, join='inner')
charts.PerformanceSummary(tmf3TLT)
discrepancy <- as.numeric(Return.annualized(tmf3TLT[,2]-tmf3TLT[,1]))
tmf3TLT[,2] <- tmf3TLT[,2] - ((1+discrepancy)^(1/252)-1)
charts.PerformanceSummary(tmf3TLT)
modifiedTLT <- 3*TLTrets - ((1+discrepancy)^(1/252)-1)
TMFrets <- modifiedTLT

components <- cbind(SPXLrets, ZIVrets, TMFrets, TVIXrets)
components <- components["2004-03-29::"]
stratRets <- Return.portfolio(R = components, weights = c(.4, .2, .35, .05), rebalance_on="years")
charts.PerformanceSummary(stratRets)
SPYrets <- Return.calculate(Ad(SPY))
compare <- merge(stratRets, SPYrets, join='inner')
charts.PerformanceSummary(compare["2011::"])

With the following equity curve display:

So far, so good. Let’s look at the full backtest performance.

charts.PerformanceSummary(compare)

With the resultant equity curve:

Which, given what we’ve seen before, isn’t outside the realm of expectation.

For those interested in the log equity curves, here you go:

compare[is.na(compare)] <- 0
plot(log(cumprod(1+compare)), legend.loc="topleft")

And for fun, let’s look at the outperformance equity curve.

diff <- compare[,1] - compare[,2]
charts.PerformanceSummary(diff, main="relative performance")

And the result:

Now this is somewhere in the ballpark of what you’d love to see from your strategy against a benchmark — aside from a couple of spikes which do a number on the corresponding drawdowns chart, it looks like a steady outperformance.

However, the new features I’d like to introduce in this blog post are a quicker way of generating the usual statistics table I display, and a more in-depth drawdown analysis.

Here’s how:

rbind(table.AnnualizedReturns(compare), maxDrawdown(compare))

Which gives us the following result:

> rbind(table.AnnualizedReturns(compare), maxDrawdown(compare))
                          portfolio.returns SPY.Adjusted
Annualized Return                 0.2181000    0.0799000
Annualized Std Dev                0.2159000    0.1982000
Annualized Sharpe (Rf=0%)         1.0100000    0.4030000
Worst Drawdown                    0.4326138    0.5518552

Since this saves me typing, I’ll be using this format from now on. And as a bonus, it displays annualized standard deviation. While I don’t particularly care for that statistic, as I believe that max drawdown captures the notion of “here’s the pain on the other end of your returns” better than “here’s how much your strategy wiggles from day to day” does, it’s a statistic that a lot of other people (particularly portfolio managers, pension fund managers, etc.) are interested in, so the fact that it’s thrown in for free is so much the better.

Now, moving onto a more in-depth analysis of drawdown, PerformanceAnalytics has the following functionality:

dd <- table.Drawdowns(compare[,1], top=100)
dd <- dd[dd$Depth < -.05,]
dd
sum(dd$"To Trough")/nrow(compare)

This brings up the following table (it seems that with multiple return streams, it’ll just default to the first one), and a derived statistic.

> dd
         From     Trough         To   Depth Length To Trough Recovery
1  2008-12-19 2009-03-09 2010-03-16 -0.4326    310        53      257
2  2007-10-30 2008-10-15 2008-12-04 -0.3275    278       243       35
3  2013-05-22 2013-06-24 2013-09-18 -0.1617     83        23       60
4  2004-04-02 2004-05-10 2004-09-17 -0.1450    116        26       90
5  2010-04-26 2010-07-02 2010-09-21 -0.1230    104        49       55
6  2006-03-20 2006-06-19 2006-09-20 -0.1229    129        64       65
7  2007-05-08 2007-08-15 2007-10-01 -0.1229    102        70       32
8  2005-07-29 2005-10-27 2005-12-14 -0.1112     97        64       33
9  2011-06-01 2011-08-11 2011-09-06 -0.1056     68        51       17
10 2005-02-09 2005-03-22 2005-05-31 -0.1051     77        29       48
11 2010-11-05 2010-11-17 2011-02-07 -0.1022     64         9       55
12 2011-09-23 2011-10-27 2011-12-19 -0.0836     61        25       36
13 2013-09-19 2013-10-09 2013-10-17 -0.0815     21        15        6
14 2012-05-02 2012-05-18 2012-06-29 -0.0803     42        13       29
15 2012-10-18 2012-11-15 2012-11-29 -0.0721     28        19        9
16 2014-09-02 2014-10-13 2014-11-05 -0.0679     47        30       17
17 2008-12-05 2008-12-08 2008-12-16 -0.0589      8         2        6
18 2011-02-18 2011-03-16 2011-04-01 -0.0580     30        18       12
19 2014-07-23 2014-08-06 2014-08-15 -0.0536     18        11        7
20 2012-04-03 2012-04-10 2012-04-26 -0.0517     17         5       12

What I did was simply query the table for the 100 biggest drawdowns and then keep those deeper than 5% (considering that we get about 20 such drawdowns over about a decade, the rule of thumb seems to be roughly two drawdowns over 5% per year, give or take, for what is a pretty volatile strategy). Lastly, I wanted to know the proportion of time that someone watching the strategy would be feeling the pain of it heading toward those depths, so I took the sum of the “To Trough” column and divided it by the number of days in the backtest. This is the result:

> sum(dd$"To Trough")/nrow(compare)
[1] 0.3005505

I’m fairly certain some individuals more seasoned than I am would do something different given this information and functionality, and if so, feel free to leave a comment, but this is just a licked-finger-in-the-air calculation I did. So, 30% of the time, whoever is investing real money into this will want to go and grab a few more drinks than usual.

Let’s do the same analysis for the relative performance.

tmp <- rbind(table.AnnualizedReturns(diff), maxDrawdown(diff))
rownames(tmp)[4] <- "Worst Drawdown"
tmp

When running only one set of returns, apparently the last row will just simply be called “4”, so I had to manually rename that row. Here’s the result:

> tmp
                          portfolio.returns
Annualized Return                 0.1033000
Annualized Std Dev                0.2278000
Annualized Sharpe (Rf=0%)         0.4535000
Worst Drawdown                    0.3874082

Far from spectacular, but there it is for what it’s worth.

Now the drawdowns.

dd <- table.Drawdowns(diff, top=100)
dd <- dd[dd$Depth < -.05,]
dd
sum(dd$"To Trough")/nrow(diff)

With the following result:

> dd
         From     Trough         To   Depth Length To Trough Recovery
1  2008-11-21 2009-06-22 2013-04-04 -0.3874   1097       145      952
2  2008-10-28 2008-11-04 2008-11-19 -0.2328     17         6       11
3  2007-11-27 2008-06-12 2008-10-07 -0.1569    218       137       81
4  2013-05-03 2013-06-25 2013-12-26 -0.1386    165        37      128
5  2008-10-13 2008-10-13 2008-10-24 -0.1327     10         1        9
6  2005-06-08 2006-06-28 2006-11-08 -0.1139    360       267       93
7  2004-03-29 2004-05-10 2004-09-20 -0.1102    121        30       91
8  2007-03-08 2007-06-12 2007-11-26 -0.0876    183        67      116
9  2005-02-10 2005-03-28 2005-05-31 -0.0827     76        31       45
10 2014-09-02 2014-09-17 2014-10-14 -0.0613     31        12       19

In short, the spikes in outperformance gave us some pretty…interesting…drawdown statistics, which just essentially meant that the strategy wasn’t roaring at the same exact time that the SPY had its bounce from the bottom. And for interest, my finger in the air pain statistic:

> sum(dd$"To Trough")/nrow(diff)
[1] 0.2689908

So approximately 27% of the time, the strategy underperforms its benchmark–meaning that 73% of the time, you’re fairly happy–assuming you’ve chosen the correct benchmark.

In short, overall, more freebies from Harry Long with a title to attract readers. The strategy is what it is–something that boasts strong absolute returns and definitely outperforms SPY, though like anything else, has its moments that it’ll burn you (no pain, no gain, as they say). However, the quicker statistics table functionality combined with the more in-depth drawdown analysis is something that I am definitely happy to have stumbled upon.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The ZOMMA Warthog Index

Harry Long posted another article on SeekingAlpha. As usual, it’s another strategy to be taken with a grain of salt, but very much worth discussing. So here’s the strategy:

Rebalance annually:

50% XLP, 15% GLD, 35% TLT — aka a variation of the stocks/bonds portfolio, just with some gold added in. The twist is the idea that XLP is less risky than SPY. The original article, written by Harry Long, can be found here.

This strategy is about as simple as they come, and is tested just as easily in R.

Here’s the original replication:

require(PerformanceAnalytics)
require(quantmod)
getSymbols("XLP", from="1990-01-01")
getSymbols("GLD", from="1990-01-01")
getSymbols("TLT", from="1990-01-01")
prices <- cbind(Ad(XLP), Ad(GLD), Ad(TLT))
prices <- prices[!is.na(prices[, 2]),]
rets <- Return.calculate(prices)
warthogRets <- Return.portfolio(rets, weights=c(.5, .15, .35), rebalance_on = "years")
getSymbols("SPY", from="1990-01-01")
SPYrets <- Return.calculate(Ad(SPY))
comparison <- merge(warthogRets, SPYrets, join='inner')
charts.PerformanceSummary(comparison)

And the resulting replicated (approximately) equity curve:

So far, so good.

Now, let’s put the claim of outperforming over an entire cycle to the test.

years <- apply.yearly(comparison, Return.cumulative)
years <- years[-1,] #remove 2004
years
sum(years[,1] > years[,2])/nrow(years)
sapply(years, mean)

And here’s the output:

> years
           portfolio.returns SPY.Adjusted
2005-12-30        0.07117683   0.04834811
2006-12-29        0.10846819   0.15843582
2007-12-31        0.14529234   0.05142241
2008-12-31        0.05105269  -0.36791039
2009-12-31        0.03126667   0.26344690
2010-12-31        0.14424559   0.15053339
2011-12-30        0.20384853   0.01897321
2012-12-31        0.07186807   0.15991238
2013-12-31        0.04234810   0.32309145
2014-12-10        0.16070863   0.11534450
> sum(years[,1] > years[,2])/nrow(years)
[1] 0.5
> sapply(years, mean)
portfolio.returns      SPY.Adjusted 
       0.10302756        0.09215978 

So, even with SPY’s 2008 in mind, this strategy only splits the years evenly with SPY, though its average annual return is slightly better, which I suppose should be par for the course for a backtest against SPY that runs through the recession.

Now, let’s just remove that one year and see how things stack up:

years <- years[-4,] #remove 2008
sapply(years, mean)

And we obtain the following results:

> sapply(years, mean)
portfolio.returns      SPY.Adjusted 
        0.1088025         0.1432787 

In short, the so-called outperformance is largely sample-dependent on the S&P having a terrible year. Beyond that, things don’t look so stellar for the strategy.

Lastly, anytime anyone makes a claim about a strategy outperforming a benchmark, one very, very easy test to do is to simply look at the equity curve of a “market neutral” strategy that shorts the benchmark against the strategy, and see if that looks like an equity curve one would be comfortable holding. Here are the results:

diff <- comparison[,1] - comparison[,2]
charts.PerformanceSummary(diff, main="short SPY against portfolio")

And the resulting equity curve of the outperformance:

Basically, ever since the crisis passed, the strategy has been flat to underperforming against SPY, which is fair enough considering that the market has been on a roar since then.

To hammer the point, let’s look at the equity curves from 2009 onwards.

#strategy against SPY post-crisis
charts.PerformanceSummary(comparison["2009::"])

With the result:

To its credit, the strategy’s equity curve is certainly a smoother ride. At the same time, the underperformance is clear as well. I suppose this makes sense as a portfolio invested 35% in bonds should expect to underperform a portfolio fully invested in equities on an absolute returns basis.

Now, as I did before, in It’s Amazing How Well Dumb Things Get Marketed, I’m going to backtest this strategy as far as I can using proxies from Quandl for gold and bonds, by going directly to the futures. Refer to that post to see that the proxies are relatively decent for the ETFs. In this case, I switched from adjusted to close prices, but the concept should still hold.

require(IKTrading)
goldFutures <- quandClean(stemCode = "CHRIS/CME_GC", verbose = TRUE)
thirtyBond <- quandClean(stemCode="CHRIS/CME_US", verbose = TRUE)
longPrices <- cbind(Cl(XLP), Cl(goldFutures), Cl(thirtyBond))
longRets <- Return.calculate(longPrices)
longRets <- longRets[!is.na(longRets[,1]),]
names(longRets) <- c("XLP", "Gold", "Bonds")
longWarthog <- Return.portfolio(longRets, weights=c(.5, .15, .35), rebalance_on = "years")
longComparison <- merge(longWarthog, SPYrets, join='inner')
charts.PerformanceSummary(longComparison)

And here’s the resultant equity curve comparison:

So, the idea that the strategy outperforms SPY on an absolute returns basis? It seems to be due to SPY’s horrendous performance in 2008, which is a once-in-several-decades event. As you can see, had you started at XLP’s inception in late 1998, the strategy would have been in drawdown from then until approximately 2005.

Let’s look at the risk/reward metrics for this timeframe:

stats <- rbind(Return.annualized(longComparison), 
      maxDrawdown(longComparison),
      SharpeRatio.annualized(longComparison),
      Return.annualized(longComparison)/maxDrawdown(longComparison)
)
rownames(stats)[4] <- "Return Over MaxDD"
stats

And the results:

> stats
                                portfolio.returns SPY.Adjusted
Annualized Return                       0.0409267   0.05344120
Worst Drawdown                          0.2439091   0.55186722
Annualized Sharpe Ratio (Rf=0%)         0.4846184   0.26295594
Return Over MaxDD                       0.1677949   0.09683706

To its credit, the strategy has a significantly better return-to-risk profile than SPY does, which I would hope is the case for any backtest going up against SPY with 2008 in mind. Overall, the strategy has some merit as a defensive play: for a simple static-allocation strategy, it delivers what you might expect from a risk-averse approach, which is reflected in a much better return-to-drawdown profile than the market’s. Nevertheless, one should be careful not to mistake a strategy whose focus is to control risk for a strategy that aims to swing for the fences. Furthermore, this is a strategy that Harry Long has been so kind as to give away, so take it for what it’s worth: a simple-to-implement defensive strategy which delivers on its defensive promises.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.