I’m Back, A New Harry Long Strategy, And Plans For Hypothesis-Driven Development

I’m back. For anyone who wants to know “what happened at Graham”: I felt there was very little scaffolding/on-boarding, and Graham’s expectations/requirements changed, though I do have a reference from one of the quantitative directors. In any case, moving on.

Harry Long recently came out with a new strategy posted on SeekingAlpha, and I’d like to test it for robustness to see if it has merit.

Here’s the link to the post.

So, the rules are fairly simple:

ZIV 15%
SPLV 50%
TMF 10%
UUP 20%
VXX 5%

TMF can be approximated by applying 3x leverage to TLT’s returns. SPLV is also highly similar to XLP (aka the consumer staples sector SPDR). Here’s the equity curve comparison to prove it, which the short sketch below reproduces.
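
For those who want to verify the SPLV/XLP similarity themselves, here is a minimal sketch of that comparison (my own addition, not from Harry Long’s article, using Yahoo adjusted closes; SPLV’s history only begins at its 2011 inception, which is exactly why XLP serves as the longer-history proxy):

# Sketch: compare SPLV and XLP over SPLV's available history (not from the original article)
require(quantmod)
require(PerformanceAnalytics)
getSymbols(c('SPLV', 'XLP'), from = '1900-01-01')
lowVolCompare <- na.omit(cbind(Return.calculate(Ad(SPLV)), Return.calculate(Ad(XLP))))
colnames(lowVolCompare) <- c('SPLV', 'XLP')
charts.PerformanceSummary(lowVolCompare)
table.AnnualizedReturns(lowVolCompare)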

So, let’s test this thing.

require(PerformanceAnalytics)
require(downloader)
require(quantmod)

getSymbols('XLP', from = '1900-01-01')
getSymbols('TLT', from = '1900-01-01')
getSymbols('UUP', from = '1900-01-01')
download('https://www.dropbox.com/s/jk3ortdyru4sg4n/ZIVlong.TXT', destfile='ZIVlong.csv')
download('https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT', destfile = 'VXXlong.csv')
ZIV <- xts(read.zoo('ZIVlong.csv', header=TRUE, sep=','))
VXX <- xts(read.zoo('VXXlong.csv', header=TRUE, sep=','))

symbols <- na.omit(cbind(Return.calculate(Cl(ZIV)), Return.calculate(Ad(XLP)), Return.calculate(Ad(TLT))*3,
                         Return.calculate(Ad(UUP)), Return.calculate(Cl(VXX))))
strat <- Return.portfolio(symbols, weights = c(.15, .5, .1, .2, .05), rebalance_on='years')

Here are the results:

compare <- na.omit(cbind(strat, Return.calculate(Ad(XLP))))
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))

Equity curve (compared against buy and hold XLP)

Statistics:

                          portfolio.returns XLP.Adjusted
Annualized Return                 0.0864000    0.0969000
Annualized Std Dev                0.0804000    0.1442000
Annualized Sharpe (Rf=0%)         1.0747000    0.6720000
Worst Drawdown                    0.1349957    0.3238755
Calmar Ratio                      0.6397665    0.2993100

In short, this strategy definitely offers a lot more bang for your risk: both volatility and drawdown are considerably lower, so the risk/reward tradeoffs are noticeably better. However, it doesn’t beat the raw returns of an instrument with roughly twice its volatility.

Here are the statistics from 2010 onwards.

charts.PerformanceSummary(compare['2010::'])
rbind(table.AnnualizedReturns(compare['2010::']), maxDrawdown(compare['2010::']), CalmarRatio(compare['2010::']))

                          portfolio.returns XLP.Adjusted
Annualized Return                0.12050000    0.1325000
Annualized Std Dev               0.07340000    0.1172000
Annualized Sharpe (Rf=0%)        1.64210000    1.1308000
Worst Drawdown                   0.07382878    0.1194072
Calmar Ratio                     1.63192211    1.1094371

Equity curve:

Definitely a smoother ride, and for bonus points, it seems some of the hedges helped with the recent market dip. Again, while aggregate returns aren’t as high as simply buying and holding XLP, the Sharpe and Calmar ratios do better on the whole.

Now, let’s do some robustness analysis. While I do not know how Harry Long arrived at the individual asset weights, one thing that can be tested easily is the effect of the rebalancing date: since the strategy rebalances only once a year, we can simply offset the rebalancing day and see how performance changes.

yearlyEp <- endpoints(symbols, on = 'years')
rebalanceDays <- list()
for(i in 0:251) {
  offset <- yearlyEp+i
  offset[offset > nrow(symbols)] <- nrow(symbols)
  offset[offset==0] <- 1
  wts <- matrix(rep(c(.15, .5, .1, .2, .05), length(yearlyEp)), ncol=5, byrow=TRUE)
  wts <- xts(wts, order.by=as.Date(index(symbols)[offset]))
  offsetRets <- Return.portfolio(R = symbols, weights = wts)
  colnames(offsetRets) <- paste0("offset", i)
  rebalanceDays[[i+1]] <- offsetRets
}
rebalanceDays <- do.call(cbind, rebalanceDays)
rebalanceDays <- na.omit(rebalanceDays)
stats <- rbind(table.AnnualizedReturns(rebalanceDays), maxDrawdown(rebalanceDays))
stats[5,] <- stats[1,]/stats[4,]

Here are the plots of return, Sharpe, and Calmar vs. offset.

plot(as.numeric(stats[1,])~c(0:251), type='l', ylab='CAGR', xlab='offset', main='CAGR vs. offset')
plot(as.numeric(stats[3,])~c(0:251), type='l', ylab='Sharpe Ratio', xlab='offset', main='Sharpe vs. offset')
plot(as.numeric(stats[5,])~c(0:251), type='l', ylab='Calmar Ratio', xlab='offset', main='Calmar vs. offset')
plot(as.numeric(stats[4,])~c(0:251), type='l', ylab='Drawdown', xlab='offset', main='Drawdown vs. offset')




In short, this strategy seems to be somewhat dependent upon the rebalancing date, something the original article leaves unaddressed. Here are the quantiles of the five statistics across the given offsets:

rownames(stats)[5] <- "Calmar"
apply(stats, 1, quantile)
     Annualized Return Annualized Std Dev Annualized Sharpe (Rf=0%) Worst Drawdown    Calmar
0%            0.072500             0.0802                  0.881000      0.1201198 0.4207922
25%           0.081925             0.0827                  0.987625      0.1444921 0.4755600
50%           0.087650             0.0837                  1.037250      0.1559238 0.5364758
75%           0.092000             0.0843                  1.090900      0.1744123 0.6230789
100%          0.105100             0.0867                  1.265900      0.1922916 0.8316698

While the standard deviation seems fairly robust, the Sharpe can decrease by about 33%, the Calmar can get cut in half, and the CAGR can also vary fairly substantially. That said, even using conservative estimates, the Sharpe ratio is fairly solid, and the Calmar outperforms that of XLP in any given variation, but nevertheless, performance can vary.

Is this strategy investable in its current state? Maybe, depending on your standards for rigor. Up to this point, rebalancing sometime in December or early January seems to substantially outperform other rebalance dates. Maybe a December/January anomaly effect exists in the literature to justify this; however, the article makes no mention of it. Furthermore, the article doesn’t explain how it arrived at the weights it did.

Which brings me to my next topic: a change in direction for this blog going forward, namely hypothesis-driven trading system development. While this process doesn’t require complicated math, it does require statistical justification for the multiple building blocks of a strategy, and a change in mindset, which a great deal of publicly available trading system ideas either gloss over or omit entirely. As one of my most important readers praised this blog for “showing how the sausage is made”, this seems to be the next logical step in that progression.

Here’s the reasoning as to why.

It seems that when presenting trading ideas, there are two schools of thought: those who go off of intuition, build a backtest based on that intuition, and see if it generally lines up with some intuitively expected result, and those who believe in a much more systematic, hypothesis-driven, step-by-step framework, justifying as many decisions as possible (ideally every decision) in creating a trading system. The advantage of the former is that it allows for displaying many more ideas in a much shorter timeframe. However, it has several major drawbacks. First, it hides many concerns about potential overfitting: if all one sees is a final equity curve, nothing is said about the sensitivity of that equity curve to the various input parameters, or about what other ideas were thrown out along the way. Second, without a foundation of strong hypotheses about the economic phenomena being exploited, there is no reason to believe that a strategy won’t simply fail once it’s put into live trading.

And third, which I find most important, such activities ultimately don’t impress the industry’s best practitioners. For instance, Tony Cooper took issue with my replication of Trading The Odds’ volatility trading strategy, namely how data-mined it was (according to him, in the comments section), and his objections seem to have been completely borne out by its out-of-sample performance.

So, for those looking for plug-and-crank system ideas, that may still happen every so often if someone sends me something particularly interesting, but there’s going to be some all-new content on this blog.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.

Last Post For A While, And Two Premium (Cheap) Databases

This will be my last post on this blog for an indefinite length of time. I will also include a script to query Quandl’s SCF database, an update on my attempt to use free futures data from Quandl’s CHRIS database, which suffered from data integrity issues even after attempts to clean it. Also provided is a small tutorial on using Quandl’s EOD database, for those who take issue with Yahoo’s daily data.

So, first off, the news…as some of you may have already heard from meeting me in person at R/Finance 2015 (which was terrific…interesting presentations, good people, good food, good drinks), effective June 8th, I will be starting employment in the New York area as a quantitative research analyst, and part of the agreement is that this blog becomes an archive of my work, so that I may focus my full energies on my full-time position. That is, it’s not going anywhere, so for those who are recent followers, you now have a great deal of time to catch up on the entire archive, which, including this post, will be 62 posts. Topics covered include:

Quantstrat — its basics, and certain strategies coded using it, namely those based off of John Ehlers’s digital signal processing algorithms, along with custom order-sizing functions. A small aside on pairs trading is included as well.
Asset rotation — flexible asset allocation, elastic asset allocation, and most recently, classic asset allocation (aka the constrained critical line algorithm).
Seeking Alpha ideas — both Logical Invest and Harry Long’s work, along with Cliff Smith’s Quarterly Tactical Strategy (QTS). The Logical Invest algorithms do what they set out to do, but in some cases are dependent on dividends to drive returns. By contrast, Harry Long’s ideas are more thought processes and proofs of concept than complete strategies, often depending on ETFs with inception dates after the financial crisis passed (which I worked around with some creativity in order to backtest to pre-crisis timelines). I’ve also collaborated with Harry Long privately, and what I’ve seen privately has better risk/reward than anything he has revealed in public, which I find impressive given the low turnover rate of such portfolios.
Volatility trading — XIV and VXX, namely, and a strategy around these two instruments that has done well out of sample.
Other statistical ideas, such as robustness heuristics and change point detection.

Topics I would have liked to cover but never got around to:

Most Japanese trading methods — Ichimoku and Heiken Ashi, among other things. Both are in my IKTrading package; I just never got around to showing them off. I did cover a hammer trading system, which did not perform as I would have liked it to.
Larry Connors’s mean reversion strategies — he has a book on trading ETFs, and another one he wrote before that. The material provided on this blog is sufficient for anyone to use to code those strategies.
The PortfolioAnalytics package — what quantstrat is to signal-based, individual-instrument trading strategies, PortfolioAnalytics is (and a lot more) to portfolio management strategies. Although strategies such as equal-weight ranking perform well by some standards, they are only the tip of the iceberg. PortfolioAnalytics is, to my knowledge, cutting-edge portfolio management technology that runs the gamut from quick, classic quadratic optimization to random-search global optimization portfolios (albeit the latter take more time to compute).

Now, onto the second part of this post: a pair of premium databases. They’re available from Quandl, and cost $50/month. As far as I’ve been able to tell, the futures database (SCF) has vastly better data quality than the CHRIS database, which can miss (or corrupt) chunks of data. The good news, however, is that free users can actually query these databases (or maybe all databases in total, I’m not sure) 150 times in a 60-day period. The futures script below sends out 40 of those 150 queries, which may be all that is necessary if one intends to use it for some form of monthly-turnover trading strategy.

Here’s the script for the SCF (futures) database. There are two caveats here:

1) The prices are quoted per contract. Notional values in futures trading are, to my understanding, vastly larger than those of individual shares, to the point that assuming integer contract quantities is no small assumption (a rough notional-value sketch follows the fetch script below).

2) According to Alexios Ghalanos (AND THIS IS REALLY IMPORTANT), R’s GARCH guru and one of the most prominent quants in the R/Finance community, for some providers the Open, High, and Low values in futures data may not be based on traditional U.S. pit trading hours the way equity/ETF OHLC data is based on the 9:30 AM – 4:00 PM session, but rather on extended trading hours. This means there is very low liquidity around the open in futures, and that the high and low may also come from these low-liquidity periods. I am unsure whether Quandl’s SCF database uses extended-hours open-high-low data (low liquidity) or traditional pit hours (high liquidity), and I am hoping a representative from Quandl will clarify this in the comments section for this post. In any case, I just wanted to make sure that readers are aware of this issue.

In any case, here’s the data fetch for the Stevens Continuous Futures (SCF) database from Quandl. All of these are front-month contracts, with unadjusted prices rolled on the open interest cross. Note that in order for this script to work, you must supply Quandl with your authorization token, which takes the form of something like this:

Quandl.auth("yourTokenHere")
require(Quandl)
require(xts)

Quandl.auth("yourTokenHere")
authCode <- "yourTokenHere"

quandlSCF <- function(code, authCode, from = NA, to = NA) {
  dataCode <- paste0("SCF/", code)
  out <- Quandl(dataCode, authCode = authCode)
  out <- xts(out[, -1], order.by=out$Date)
  colnames(out)[4] <- "Close"
  colnames(out)[6] <- "PrevDayOpInt"
  if(!is.na(from)) {
    out <- out[paste0(from, "::")]
  }
  if(!is.na(to)) {
    out <- out[paste0("::", to)]
  }
  return(out)
}

#Front open-interest cross
from <- NA
to <- NA

#Energies
CME_CL1 <- quandlSCF("CME_CL1_ON", authCode = authCode, from = from, to = to) #crude
CME_NG1 <- quandlSCF("CME_NG1_ON", authCode = authCode, from = from, to = to) #natgas
CME_HO1 <- quandlSCF("CME_HO1_ON", authCode = authCode, from = from, to = to) #heatOil
CME_RB1 <- quandlSCF("CME_RB1_ON", authCode = authCode, from = from, to = to) #RBob
ICE_B1 <- quandlSCF("ICE_B1_ON", authCode = authCode, from = from, to = to) #Brent
ICE_G1 <- quandlSCF("ICE_G1_ON", authCode = authCode, from = from, to = to) #GasOil

#Grains
CME_C1 <- quandlSCF("CME_C1_ON", authCode = authCode, from = from, to = to) #Chicago Corn
CME_S1 <- quandlSCF("CME_S1_ON", authCode = authCode, from = from, to = to) #Chicago Soybeans
CME_W1 <- quandlSCF("CME_W1_ON", authCode = authCode, from = from, to = to) #Chicago Wheat
CME_SM1 <- quandlSCF("CME_SM1_ON", authCode = authCode, from = from, to = to) #Chicago Soybean Meal
CME_KW1 <- quandlSCF("CME_KW1_ON", authCode = authCode, from = from, to = to) #Kansas City Wheat
CME_BO1 <- quandlSCF("CME_BO1_ON", authCode = authCode, from = from, to = to) #Chicago Soybean Oil

#Softs
ICE_SB1 <- quandlSCF("ICE_SB1_ON", authCode = authCode, from = from, to = to) #Sugar No. 11
ICE_KC1 <- quandlSCF("ICE_KC1_ON", authCode = authCode, from = from, to = to) #Coffee
ICE_CC1 <- quandlSCF("ICE_CC1_ON", authCode = authCode, from = from, to = to) #Cocoa
ICE_CT1 <- quandlSCF("ICE_CT1_ON", authCode = authCode, from = from, to = to) #Cotton

#Other Ags
CME_LC1 <- quandlSCF("CME_LC1_ON", authCode = authCode, from = from, to = to) #Live Cattle
CME_LN1 <- quandlSCF("CME_LN1_ON", authCode = authCode, from = from, to = to) #Lean Hogs

#Precious Metals
CME_GC1 <- quandlSCF("CME_GC1_ON", authCode = authCode, from = from, to = to) #Gold
CME_SI1 <- quandlSCF("CME_SI1_ON", authCode = authCode, from = from, to = to) #Silver
CME_PL1 <- quandlSCF("CME_PL1_ON", authCode = authCode, from = from, to = to) #Platinum
CME_PA1 <- quandlSCF("CME_PA1_ON", authCode = authCode, from = from, to = to) #Palladium

#Base
CME_HG1 <- quandlSCF("CME_HG1_ON", authCode = authCode, from = from, to = to) #Copper

#Currencies
CME_AD1 <- quandlSCF("CME_AD1_ON", authCode = authCode, from = from, to = to) #Ozzie
CME_CD1 <- quandlSCF("CME_CD1_ON", authCode = authCode, from = from, to = to) #Canadian Dollar
CME_SF1 <- quandlSCF("CME_SF1_ON", authCode = authCode, from = from, to = to) #Swiss Franc
CME_EC1 <- quandlSCF("CME_EC1_ON", authCode = authCode, from = from, to = to) #Euro
CME_BP1 <- quandlSCF("CME_BP1_ON", authCode = authCode, from = from, to = to) #Pound
CME_JY1 <- quandlSCF("CME_JY1_ON", authCode = authCode, from = from, to = to) #Yen
ICE_DX1 <- quandlSCF("ICE_DX1_ON", authCode = authCode, from = from, to = to) #Dollar Index

#Equities
CME_ES1 <- quandlSCF("CME_ES1_ON", authCode = authCode, from = from, to = to) #Emini
CME_MD1 <- quandlSCF("CME_MD1_ON", authCode = authCode, from = from, to = to) #Midcap 400
CME_NQ1 <- quandlSCF("CME_NQ1_ON", authCode = authCode, from = from, to = to) #Nasdaq 100
ICE_RF1 <- quandlSCF("ICE_RF1_ON", authCode = authCode, from = from, to = to) #Russell Smallcap
CME_NK1 <- quandlSCF("CME_NK1_ON", authCode = authCode, from = from, to = to) #Nikkei

#Bonds/rates
CME_FF1  <- quandlSCF("CME_FF1_ON", authCode = authCode, from = from, to = to) #30-day fed funds
CME_ED1 <- quandlSCF("CME_ED1_ON", authCode = authCode, from = from, to = to) #3 Mo. Eurodollar
CME_FV1  <- quandlSCF("CME_FV1_ON", authCode = authCode, from = from, to = to) #Five Year TNote
CME_TY1  <- quandlSCF("CME_TY1_ON", authCode = authCode, from = from, to = to) #Ten Year Note
CME_US1  <- quandlSCF("CME_US1_ON", authCode = authCode, from = from, to = to) #30 year bond

In this case, I just can’t give away my token. You’ll have to replace that with your own, which every account has. However, once again, individuals not subscribed to these databases need to pay $50/month.
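
On the first caveat, here’s a rough notional-value calculation (my own sketch, assuming the standard CME multiplier of 1,000 barrels per crude oil contract, and using the CME_CL1 series fetched above):

# Rough notional value of one front-month crude contract (illustrative sketch;
# assumes the standard 1,000-barrel CL multiplier)
clMultiplier <- 1000
lastSettle <- as.numeric(last(CME_CL1$Close))
notionalPerContract <- lastSettle * clMultiplier
notionalPerContract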

Lastly, I’d like to show the Quandl EOD database. This is identical in functionality to Yahoo’s, but may be (hopefully!) more accurate. I have never used this database on this blog because the number one rule has always been that readers must be able to replicate all analysis for free, but those who doubt the quality of Yahoo’s data may wish to look at Quandl’s EOD database.

This is how it works, with an example for SPY.

out <- Quandl("EOD/SPY", start_date="1999-12-31", end_date="2005-12-31", type = "xts")

And here’s some output.

> head(out)
             Open   High    Low  Close   Volume Dividend Split Adj_Open Adj_High  Adj_Low Adj_Close Adj_Volume
1999-12-31 146.80 147.50 146.30 146.90  3172700        0     1 110.8666 111.3952 110.4890  110.9421    3172700
2000-01-03 148.25 148.25 143.88 145.44  8164300        0     1 111.9701 111.9701 108.6695  109.8477    8164300
2000-01-04 143.50 144.10 139.60 139.80  8089800        0     1 108.3828 108.8359 105.4372  105.5882    8089800
2000-01-05 139.90 141.20 137.30 140.80 12177900        0     1 105.6631 106.6449 103.6993  106.3428   12177900
2000-01-06 139.60 141.50 137.80 137.80  6227200        0     1 105.4327 106.8677 104.0733  104.0733    6227200
2000-01-07 140.30 145.80 140.10 145.80  8066500        0     1 105.9690 110.1231 105.8179  110.1231    8066500

Note that this data is not automatically assigned to “SPY” the way quantmod’s “getSymbols” function would do when fetching from Yahoo. Also note that when calling the Quandl function against the EOD database, you automatically obtain both adjusted and unadjusted prices. One thing I’m not sure is as easily done through Quandl’s API is adjusting prices for splits but not dividends. But, for what it’s worth, there it is. So for those who take issue with the quality of Yahoo data, the EOD database is available for $50/month.
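
As a quick illustration (my own sketch, not Quandl documentation), here’s one way to mimic the getSymbols convention and separate the adjusted from the unadjusted columns:

# Assign the EOD query result to "SPY" and split out unadjusted vs. adjusted OHLCV
# (sketch; column names follow the head(out) output shown above)
SPY <- out
spyUnadjusted <- SPY[, c("Open", "High", "Low", "Close", "Volume")]
spyAdjusted <- SPY[, c("Adj_Open", "Adj_High", "Adj_Low", "Adj_Close", "Adj_Volume")]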

So…that’s it. From this point on, this blog is an archive of my work that will stay up; it’s not going anywhere. However, I won’t be updating it or answering questions on it. For those who have questions about functionality, I highly recommend posting them to the R-SIG-Finance mailing list. It’s been a pleasure sharing my thoughts and work with you, and I’m glad I’ve garnered the attention of many intelligent individuals, from those who have provided me with data, to those who have built upon my work, to those who have hired me for consulting (and now full-time) opportunities. I also hope that some of the work displayed here made it to other trading and/or asset management firms. I am very grateful for all of the feedback, comments, data, and opportunities I’ve received along the way.

Once again, thank you so much for reading. It’s been a pleasure.

Momentum, Markowitz, and Solving Rank-Deficient Covariance Matrices — The Constrained Critical Line Algorithm

This post will cover the differences between my implementation of the constrained critical line algorithm and Dr. Clarence Kwan’s. The constrained critical line algorithm takes a gradient-descent-like approach to tracing out the efficient frontier, and my implementation adds a volatility-targeting binary search algorithm.

First off, rather than try to explain the algorithm piece by piece, I’ll defer to Dr. Clarence Kwan’s paper and Excel spreadsheet, from which I obtained my original implementation. Since that paper and spreadsheet explain the functionality of the algorithm, I won’t repeat that process here. Essentially, the constrained critical line algorithm incorporates its lambda constraints into the structure of the covariance matrix itself. This innovation is what allows the algorithm to handle previously rank-deficient matrices.

Now, while Markowitz mean-variance optimization may be a bit of old news for some, the ability to use a short lookback with monthly data has allowed me and my two coauthors (Dr. Wouter Keller, who came up with flexible and elastic asset allocation, and Adam Butler, of GestaltU) to perform a backtest on a century’s worth of data with more than 30 assets, despite using only a 12-month formation period (which leaves the sample covariance matrix rank-deficient). That paper can be found here.

Let’s look at the code for the function.

CCLA <- function(covMat, retForecast, maxIter = 1000, 
                 verbose = FALSE, scale = 252, 
                 weightLimit = .7, volThresh = .1) {
  if(length(retForecast) > length(unique(retForecast))) {
    sequentialNoise <- seq(1:length(retForecast)) * 1e-12
    retForecast <- retForecast + sequentialNoise
  }
  
  #initialize original out/in/up status
  if(length(weightLimit) == 1) {
    weightLimit <- rep(weightLimit, ncol(covMat))
  }
  rankForecast <- length(retForecast) - rank(retForecast) + 1
  remainingWeight <- 1 #have 100% of weight to allocate
  upStatus <- inStatus <- rep(0, ncol(covMat))
  i <- 1
  while(remainingWeight > 0) {
    securityLimit <- weightLimit[rankForecast == i]
    if(securityLimit < remainingWeight) {
      upStatus[rankForecast == i] <- 1 #if we can't invest all remaining weight into the security
      remainingWeight <- remainingWeight - securityLimit
    } else {
      inStatus[rankForecast == i] <- 1
      remainingWeight <- 0
    }
    i <- i + 1
  }
  
  #initial matrices (W, H, K, identity, negative identity)
  covMat <- as.matrix(covMat)
  retForecast <- as.numeric(retForecast)
  init_W <- cbind(2*covMat, rep(-1, ncol(covMat)))
  init_W <- rbind(init_W, c(rep(1, ncol(covMat)), 0))
  H_vec <- c(rep(0, ncol(covMat)), 1)
  K_vec <- c(retForecast, 0)
  negIdentity <- -1*diag(ncol(init_W))
  identity <- diag(ncol(init_W))
  matrixDim <- nrow(init_W)
  weightLimMat <- matrix(rep(weightLimit, matrixDim), ncol=ncol(covMat), byrow=TRUE)
  
  #out status is simply what isn't in or up
  outStatus <- 1 - inStatus - upStatus
  
  #initialize expected volatility/count/turning points data structure
  expVol <- Inf
  lambda <- 100
  count <- 0
  turningPoints <- list()
  while(lambda > 0 & count < maxIter) {
    
    #old lambda and old expected volatility for use with numerical algorithms
    oldLambda <- lambda
    oldVol <- expVol
    
    count <- count + 1
    
    #compute W, A, B
    inMat <- matrix(rep(c(inStatus, 1), matrixDim), nrow = matrixDim, byrow = TRUE)
    upMat <- matrix(rep(c(upStatus, 0), matrixDim), nrow = matrixDim, byrow = TRUE)
    outMat <- matrix(rep(c(outStatus, 0), matrixDim), nrow = matrixDim, byrow = TRUE)
    
    W <- inMat * init_W + upMat * identity + outMat * negIdentity
    
    inv_W <- solve(W)
    modified_H <- H_vec - rowSums(weightLimMat* upMat[,-matrixDim] * init_W[,-matrixDim])
    A_vec <- inv_W %*% modified_H
    B_vec <- inv_W %*% K_vec
    
    #remove the last elements from A and B vectors
    truncA <- A_vec[-length(A_vec)]
    truncB <- B_vec[-length(B_vec)]
    
    #compute in Ratio (aka Ratio(1) in Kwan.xls)
    inRatio <- rep(0, ncol(covMat))
    inRatio[truncB > 0] <- -truncA[truncB > 0]/truncB[truncB > 0]
    
    #compute up Ratio (aka Ratio(2) in Kwan.xls)
    upRatio <- rep(0, ncol(covMat))
    upRatioIndices <- which(inStatus==TRUE & truncB < 0)
    if(length(upRatioIndices) > 0) {
      upRatio[upRatioIndices] <- (weightLimit[upRatioIndices] - truncA[upRatioIndices]) / truncB[upRatioIndices]
    }
    
    #find lambda -- max of up and in ratios
    maxInRatio <- max(inRatio)
    maxUpRatio <- max(upRatio)
    lambda <- max(maxInRatio, maxUpRatio)
    
    #compute new weights
    wts <- inStatus*(truncA + truncB * lambda) + upStatus * weightLimit + outStatus * 0
    
    #compute expected return and new expected volatility
    expRet <- t(retForecast) %*% wts
    expVol <- sqrt(wts %*% covMat %*% wts) * sqrt(scale)
    
    #create turning point data row and append it to turning points
    turningPoint <- cbind(count, expRet, lambda, expVol, t(wts))
    colnames(turningPoint) <- c("CP", "Exp. Ret.", "Lambda", "Exp. Vol.", colnames(covMat))
    turningPoints[[count]] <- turningPoint
    
    #binary search for volatility threshold -- if the first iteration is lower than the threshold,
    #then immediately return, otherwise perform the binary search until convergence of lambda
    if(oldVol == Inf & expVol < volThresh) {
      turningPoints <- do.call(rbind, turningPoints)
      threshWts <- tail(turningPoints, 1)
      return(list(turningPoints, threshWts))
    } else if(oldVol > volThresh & expVol < volThresh) {
      upLambda <- oldLambda
      dnLambda <- lambda
      meanLambda <- (upLambda + dnLambda)/2
      while(upLambda - dnLambda > .00001) {
        
        #compute mean lambda and recompute weights, expected return, and expected vol
        meanLambda <- (upLambda + dnLambda)/2
        wts <- inStatus*(truncA + truncB * meanLambda) + upStatus * weightLimit + outStatus * 0
        expRet <- t(retForecast) %*% wts
        expVol <- sqrt(wts %*% covMat %*% wts) * sqrt(scale)
        
        #if new expected vol is less than threshold, mean becomes lower bound
        #otherwise, it becomes the upper bound, and loop repeats
        if(expVol < volThresh) {
          dnLambda <- meanLambda
        } else {
          upLambda <- meanLambda
        }
      }
      
      #once the binary search completes, return those weights, and the corner points
      #computed until the binary search. The corner points aren't used anywhere, but they're there.
      threshWts <- cbind(count, expRet, meanLambda, expVol, t(wts))
      colnames(turningPoint) <- colnames(threshWts) <- c("CP", "Exp. Ret.", "Lambda", "Exp. Vol.", colnames(covMat))
      turningPoints[[count]] <- turningPoint
      turningPoints <- do.call(rbind, turningPoints)
      return(list(turningPoints, threshWts))
    }
    
    #this is only run for the corner points during which binary search doesn't take place
    #change status of security that has new lambda
    if(maxInRatio > maxUpRatio) {
      inStatus[inRatio == maxInRatio] <- 1 - inStatus[inRatio == maxInRatio]
      upStatus[inRatio == maxInRatio] <- 0
    } else {
      upStatus[upRatio == maxUpRatio] <- 1 - upStatus[upRatio == maxUpRatio]
      inStatus[upRatio == maxUpRatio] <- 0
    }
    outStatus <- 1 - inStatus - upStatus
  }
  
  #we only get here if the volatility threshold isn't reached
  #can actually happen if set sufficiently low
  turningPoints <- do.call(rbind, turningPoints)
  
  threshWts <- tail(turningPoints, 1)
  
  return(list(turningPoints, threshWts))
}

Essentially, the algorithm can be divided into three parts:

The first part is the initialization, which does the following:

It creates three status vectors: in, up, and out. The up vector denotes which securities are at their weight-constraint cap, the in vector denotes securities that are invested but not at their cap, and the out vector denotes securities that receive no weight on that iteration of the algorithm.

The rest of the algorithm essentially does the following:

It takes a gradient descent approach by changing the status of the security that minimizes lambda, which by extension minimizes the volatility at the local point. As long as lambda is greater than zero, the algorithm continues to iterate. Letting the algorithm run until convergence effectively provides the volatility-minimization portfolio on the efficient frontier.

However, one change that Dr. Keller and I made to it is the functionality of volatility targeting, allowing the algorithm to stop between iterations. As the SSRN paper shows, a higher volatility threshold, over the long run (the *VERY* long run) will deliver higher returns.

In any case, the algorithm takes into account several main arguments:

A return forecast, a covariance matrix, and weight limits, which can be either a single number (resulting in a uniform weight cap) or a per-security vector of caps. Another argument is scale, which is 252 for daily data, 12 for monthly, and so on. Lastly, there is the volatility threshold, which allows the user to control how aggressive or conservative the strategy is.
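
As a quick illustration of the interface before the backtest, here’s a hypothetical call on made-up monthly return data (my own sketch; the numbers mean nothing, it just shows the arguments in action):

# Hypothetical CCLA call on made-up monthly returns (sketch only)
set.seed(42)
fakeMonthlyRets <- matrix(rnorm(60 * 5, mean = .005, sd = .04), ncol = 5,
                          dimnames = list(NULL, paste0("Asset", 1:5)))
cclaDemo <- CCLA(covMat = cov(fakeMonthlyRets),
                 retForecast = colMeans(fakeMonthlyRets),
                 scale = 12,        # monthly data
                 weightLimit = .5,  # uniform 50% cap per asset
                 volThresh = .1)    # 10% annualized volatility target
cclaDemo[[2]]  # the volatility-threshold corner point; weights start at column 5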

In any case, to demonstrate this function, let’s run a backtest. The idea in this case comes from a recent SeekingAlpha article published by Frank Grossmann, in which he obtained a 20% CAGR, but with a 36% max drawdown.

So here’s the backtest:

symbols <- c("AFK", "ASHR", "ECH", "EGPT",
             "EIDO", "EIRL", "EIS", "ENZL",
             "EPHE", "EPI", "EPOL", "EPU",
             "EWA", "EWC", "EWD", "EWG",
             "EWH", "EWI", "EWJ", "EWK",
             "EWL", "EWM", "EWN", "EWO",
             "EWP", "EWQ", "EWS", "EWT",
             "EWU", "EWW", "EWY", "EWZ",
             "EZA", "FM", "FRN", "FXI",
             "GAF", "GULF", "GREK", "GXG",
             "IDX", "MCHI", "MES", "NORW",
             "QQQ", "RSX", "THD", "TUR",
             "VNM", "TLT"
)

getSymbols(symbols, from = "2003-01-01")

prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))

returns <- Return.calculate(prices)
returns <- returns[-1,]

sumIsNa <- function(col) {
  return(sum(is.na(col)))
}

appendZeroes <- function(selected, originalSetNames) {
  zeroes <- rep(0, length(originalSetNames) - length(selected))
  names(zeroes) <- originalSetNames[!originalSetNames %in% names(selected)]
  all <- c(selected, zeroes)
  all <- all[originalSetNames]
  return(all)
}

computeStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets), CalmarRatio(rets))
  return(round(stats, 3))
}

CLAAbacktest <- function(returns, lookback = 3, volThresh = .1, assetCaps = .5, tltCap = 1,
                         returnWeights = FALSE, useTMF = FALSE) {
  if(useTMF) {
    returns$TLT <- returns$TLT * 3
  }
  ep <- endpoints(returns, on = "months")
  weights <- list()
  for(i in 2:(length(ep) - lookback)) {
    retSubset <- returns[(ep[i]+1):ep[i+lookback],]
    retNAs <- apply(retSubset, 2, sumIsNa)
    validRets <- retSubset[, retNAs==0]
    retForecast <- Return.cumulative(validRets)
    covRets <- cov(validRets)
    weightLims <- rep(assetCaps, ncol(covRets))
    weightLims[colnames(covRets)=="TLT"] <- tltCap
    weight <- CCLA(covMat = covRets, retForecast = retForecast, weightLimit = weightLims, volThresh = volThresh)
    weight <- weight[[2]][,5:ncol(weight[[2]])]
    weight <- appendZeroes(selected = weight, colnames(retSubset))
    weight <- xts(t(weight), order.by=last(index(validRets)))
    weights[[i]] <- weight
  }
  weights <- do.call(rbind, weights)
  stratRets <- Return.portfolio(R = returns, weights = weights)
  if(returnWeights) {
    return(list(weights, stratRets))
  }
  return(stratRets)
}

In essence, we take the returns over a specified monthly lookback period, specify a volatility threshold, specify the per-asset caps and the bond (TLT) cap, and choose whether to use TLT or TMF (a 3x leveraged variant, which here is simply approximated by multiplying all of TLT’s returns by 3). The output of the CCLA (Constrained Critical Line Algorithm) function is a list that contains the corner points and the volatility-threshold corner point, which holds the corner point number, expected return, lambda value, and expected volatility, followed by the asset weights. So, we want the fifth column onward of the second element of the list.
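
In code, that extraction looks like this, applied to the cclaDemo object from the illustration above (my own example, mirroring the corresponding lines inside CLAAbacktest):

# Extracting the threshold weights from a CCLA result (using the cclaDemo sketch above)
threshPoint <- cclaDemo[[2]]                          # the volatility-threshold corner point
threshWeights <- threshPoint[, 5:ncol(threshPoint)]   # drop CP, Exp. Ret., Lambda, Exp. Vol.
threshWeights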

Here are some results:

config1 <- CLAAbacktest(returns = returns)
config2 <- CLAAbacktest(returns = returns, useTMF = TRUE)
config3 <- CLAAbacktest(returns = returns, lookback = 4)
config4 <- CLAAbacktest(returns = returns, lookback = 2, useTMF = TRUE)

comparison <- na.omit(cbind(config1, config2, config3, config4))
colnames(comparison) <- c("Default", "TMF instead of TLT", "Lookback 4", "Lookback 2 and TMF")
charts.PerformanceSummary(comparison)
computeStats(comparison)

With the following statistics:

> computeStats(comparison)
                          Default TMF instead of TLT Lookback 4 Lookback 2 and TMF
Annualized Return           0.137              0.146      0.133              0.138
Annualized Std Dev          0.126              0.146      0.125              0.150
Annualized Sharpe (Rf=0%)   1.081              1.000      1.064              0.919
Worst Drawdown              0.219              0.344      0.186              0.357
Calmar Ratio                0.625              0.424      0.714              0.386

The variants that use TMF instead of TLT suffer far worse drawdowns. Not much of a hedge, apparently.

Here’s the equity curve:

Taking the 4 month lookback configuration (strongest Calmar), we’ll play around with the volatility setting.

Here’s the backtest:

config5 <- CLAAbacktest(returns = returns, lookback = 4, volThresh = .15)
config6 <- CLAAbacktest(returns = returns, lookback = 4, volThresh = .2)

comparison2 <- na.omit(cbind(config3, config5, config6))
colnames(comparison2) <- c("Vol10", "Vol15", "Vol20")
charts.PerformanceSummary(comparison2)
computeStats(comparison2)

With the results:

> computeStats(comparison2)
                          Vol10 Vol15 Vol20
Annualized Return         0.133 0.153 0.180
Annualized Std Dev        0.125 0.173 0.204
Annualized Sharpe (Rf=0%) 1.064 0.886 0.882
Worst Drawdown            0.186 0.212 0.273
Calmar Ratio              0.714 0.721 0.661

In this case, more risk, more reward, lower risk/reward ratios as you push the volatility threshold. So for once, the volatility puzzle doesn’t rear its head, and higher risk indeed does translate to higher returns (at the cost of everything else, though).

Here’s the equity curve.

Lastly, let’s try toggling the asset cap limits, with the volatility threshold back at 10%.

config7 <- CLAAbacktest(returns = returns, lookback = 4, assetCaps = .1)
config8 <- CLAAbacktest(returns = returns, lookback = 4, assetCaps = .25)
config9 <- CLAAbacktest(returns = returns, lookback = 4, assetCaps = 1/3)
config10 <- CLAAbacktest(returns = returns, lookback = 4, assetCaps = 1)

comparison3 <- na.omit(cbind(config7, config8, config9, config3, config10))
colnames(comparison3) <- c("Cap10", "Cap25", "Cap33", "Cap50", "Uncapped")
charts.PerformanceSummary(comparison3)
computeStats(comparison3)

With the resulting statistics:

> computeStats(comparison3)
                          Cap10 Cap25 Cap33 Cap50 Uncapped
Annualized Return         0.124 0.122 0.127 0.133    0.134
Annualized Std Dev        0.118 0.122 0.123 0.125    0.126
Annualized Sharpe (Rf=0%) 1.055 1.002 1.025 1.064    1.070
Worst Drawdown            0.161 0.185 0.186 0.186    0.186
Calmar Ratio              0.771 0.662 0.680 0.714    0.721

Essentially, in this case, there was very little actual change from simply tweaking weight limits. Here’s an equity curve:

To conclude, while this backtest doesn’t exactly achieve the same aggregate returns or Sharpe ratio that the SeekingAlpha article did, it does highlight a probable cause of that strategy’s major drawdown, and it also demonstrates the levers of the constrained critical line algorithm, the mechanics of which are detailed in the papers linked earlier.

Thanks for reading.

A Basic Logical Invest Global Market Rotation Strategy

This may be one of the simplest strategies I’ve ever presented on this blog, but nevertheless, it works, for some definition of “works”.

Here’s the strategy: take five global market ETFs (MDY, ILF, FEZ, EEM, and EPP), along with a treasury ETF (TLT), and every month, fully invest in the security that had the best momentum. While I’ve tried various other tweaks, none have given the intended high return performance that the original variant has.

Here’s the link to the original strategy.

While I’m not quite certain of how to best go about programming the variable lookback period, this is the code for the three month lookback.

require(quantmod)
require(PerformanceAnalytics)

symbols <- c("MDY", "TLT", "EEM", "ILF", "EPP", "FEZ")
getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
returns <- Return.calculate(prices)
returns <- na.omit(returns)

logicInvestGMR <- function(returns, lookback = 3) {
  ep <- endpoints(returns, on = "months") 
  weights <- list()
  for(i in 2:(length(ep) - lookback)) {
    retSubset <- returns[ep[i]:ep[i+lookback],]
    cumRets <- Return.cumulative(retSubset)
    weight <- rep(0, ncol(retSubset))
    weight[which.max(cumRets)] <- 1
    weight <- xts(t(weight), order.by=index(last(retSubset)))
    weights[[i]] <- weight
  }
  weights <- do.call(rbind, weights)
  stratRets <- Return.portfolio(R = returns, weights = weights)
  return(stratRets)
}

gmr <- logicInvestGMR(returns)
charts.PerformanceSummary(gmr)

And here’s the performance:

> rbind(table.AnnualizedReturns(gmr), maxDrawdown(gmr), CalmarRatio(gmr))
                          portfolio.returns
Annualized Return                  0.287700
Annualized Std Dev                 0.220700
Annualized Sharpe (Rf=0%)          1.303500
Worst Drawdown                     0.222537
Calmar Ratio                       1.292991

With the resultant equity curve:

While I don’t get the 34% CAGR advertised, the risk-to-reward ratio over the duration of the backtest is nevertheless fairly solid for something so simple, and I just wanted to put this out there.
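
Since I left the lookback as a parameter, here’s a quick sketch (my own addition, not Logical Invest’s variable-lookback logic) comparing a few fixed lookbacks:

# Compare a few fixed lookbacks (sketch; not the original variable-lookback rule)
lookbacks <- c(2, 3, 4)
gmrVariants <- lapply(lookbacks, function(lb) logicInvestGMR(returns, lookback = lb))
gmrVariants <- na.omit(do.call(cbind, gmrVariants))
colnames(gmrVariants) <- paste0("lookback", lookbacks)
charts.PerformanceSummary(gmrVariants)
rbind(table.AnnualizedReturns(gmrVariants), maxDrawdown(gmrVariants), CalmarRatio(gmrVariants))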

Thanks for reading.

Advertising a Few Systematic ETFs (Strictly Of My Own Volition)

This post will introduce several ETFs from Alpha Architect and Cambria Funds (run by Meb Faber) that I think readers should be aware of (if they aren’t already) in order to capitalize on systematic investing without losing a good portion of the return to taxes and transaction costs.

So, as my readers know, I backtest lots of strategies on this blog that deal with monthly turnover and many transactions. In all instances, I assume that A) slippage and transaction costs are negligible, B) there is sufficient capital such that when a weighting scheme calls for placing 5.5% of a portfolio into an ETF with an expensive per-share price (e.g. a sector SPDR, SPY, etc.), the constraint of integer shares is not a problem, and C) there are no taxes on the monthly transactions. For retail investors without millions of dollars to deploy, one or more of these assumptions may not hold. After all, if you have $20,000 to invest and are paying $50 a month in turnover costs, that’s roughly -3% to your CAGR, which would render quite a few of these strategies pretty terrible.
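
For those keeping score, the arithmetic behind that -3% figure:

# $50/month of transaction costs on a $20,000 account
monthlyCost <- 50
accountSize <- 20000
annualDrag <- 12 * monthlyCost / accountSize
annualDrag  # 0.03, i.e. roughly 3 percentage points a year off the CAGR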

So, in this short blurb, I want to shine a light on several of these ETFs.

First off, a link to a post from Alpha Architect that essentially states that there are only two tried-and-true market “anomalies” when correcting for data-mining: value, and momentum. Well, that and the durable consumption goods factor. The first, I’m not quite sure how to rigorously test using only freely available data, and the last, I’m not quite sure why it works. Low volatility, perhaps?

In any case, for people who don’t have institutional-grade investing capabilities, here are some ETFs that aim to intelligently capitalize on the value and momentum factors, along with one “permanent portfolio” type of ETF.

Momentum:
GMOM: Global Momentum. Essentially, spread your bets, and go with the trend. Considering Meb Faber is a proponent of momentum (see his famous Ivy Portfolio book), this is the way to capitalize on that.

Value:
QVAL: Alpha Architect’s (domestic) Quantitative Value ETF. The team at Alpha Architect are proponents of value investing, and with a team of several PhDs dedicated to a systematic value investing research process, this may be a way for retail investors to buy-and-hold one product and outsource the meticulous value research necessary for the proper implementation of such a strategy.

IVAL: an international variant of the above.

GVAL: The Cambria Funds quantitative value fund.

Asset Allocation (permanent portfolio):

GAA: Global Asset Allocation. My interpretation? Take the good old stocks, bonds, and real assets portfolio, and spread it out across the globe.

Now, let’s just do a quick rundown and see how these strategies have performed over the small time horizon the latest one has been in existence.

symbols <- c("GMOM", "QVAL", "IVAL", "GVAL", "GAA")

getSymbols(symbols, from = "1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))

coolEtfReturns <- Return.calculate(prices)
coolEtfReturns <- na.omit(coolEtfReturns)
charts.PerformanceSummary(coolEtfReturns, main = "Quant investing for retail people.")

stats <- rbind(table.AnnualizedReturns(coolEtfReturns),
               maxDrawdown(coolEtfReturns),
               CalmarRatio(coolEtfReturns),
               SortinoRatio(coolEtfReturns) * sqrt(252))
round(stats, 3)
                           GMOM  QVAL  IVAL  GVAL   GAA
Annualized Return         0.038 0.237 0.315 0.323 0.106
Annualized Std Dev        0.082 0.138 0.123 0.192 0.066
Annualized Sharpe (Rf=0%) 0.466 1.709 2.556 1.680 1.595
Worst Drawdown            0.039 0.046 0.046 0.069 0.028
Calmar Ratio              0.981 5.189 6.816 4.678 3.737
Sortino Ratio (MAR = 0%)  0.665 2.598 3.742 2.274 2.407

In other words, aside from momentum, which is having a flat-ish series of months, the performances are overall fairly strong, in this tiny sample (not at all significant).

The one caveat I’d throw out there, however, is that these instruments are not foolproof. For fun, here’s a plot of GVAL (that is, Cambria’s global value fund) since its inception.

And here are the statistics for it over its entire history since inception.

                          GVAL.Adjusted
Annualized Return                -0.081
Annualized Std Dev                0.164
Annualized Sharpe (Rf=0%)        -0.494
Worst Drawdown                    0.276
Calmar Ratio                     -0.294
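
For reference, the GVAL-only numbers above can be computed along these lines (my own sketch, using the adjusted prices already downloaded):

# Since-inception GVAL statistics (sketch)
gvalRets <- na.omit(Return.calculate(Ad(GVAL)))
charts.PerformanceSummary(gvalRets)
rbind(table.AnnualizedReturns(gvalRets), maxDrawdown(gvalRets), CalmarRatio(gvalRets))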

Again, this is a tiny sample, so nothing conclusive at all, but it does mean that these funds may occasionally hurt (no free lunch). That stated, I nevertheless think that Dr. Wesley Gray and Mebane Faber, at Alpha Architect and Cambria Funds, respectively, are about as reputable as money managers come, and the idea that one can invest with them, as opposed to who-knows-what mutual fund, is, to me, something worth not just pointing out, but drawing some positive attention to.

On that note, if anyone out there has hypothetical performance time series for these funds going back ten years, I’d love to run some analysis on them. After all, if there were some simple way to further improve the performance of a portfolio of these instruments, well, I believe Newton had something to say about standing on the shoulders of giants.

Thanks for reading.

NOTE: I will be giving a quick lightning talk at R/Finance in Chicago later this month (in about two weeks). Early bird registration ends this Friday.

The JP Morgan SCTO strategy

This post goes over JP Morgan’s SCTO strategy, a basic XL-sector/RWR rotation strategy with the risks and returns typically associated with a momentum equity strategy. It’s nothing spectacular, but if a large bank markets it, it’s worth looking at.

Recently, one of my readers, a managing director at a quantitative investment firm, sent me a request to write a rotation strategy based around the 9 sector spiders and RWR. The way it works (or at least, the way I interpreted it) is this:

Every month, compute the return (not sure how “the return” is defined) and rank. Take the top 5 ranks, and weight them in a normalized fashion to the inverse of their 22-day volatility. Zero out any that have negative returns. Lastly, check the predicted annualized vol of the portfolio, and if it’s greater than 20%, bring it back down to 20%. The cash asset–SHY–receives any remaining allocation due to setting securities to zero.

For the reference I used, here’s the investment case document from JP Morgan itself.

Here’s my implementation:

Step 1) get the data, compute returns.

require(quantmod)
require(PerformanceAnalytics)
symbols <- c("XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY", "RWR", "SHY")
getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
returns <- na.omit(Return.calculate(prices))

Step 2) The function itself.

sctoStrat <- function(returns, cashAsset = "SHY", lookback = 4, annVolLimit = .2,
                      topN = 5, scale = 252) {
  ep <- endpoints(returns, on = "months")
  weights <- list()
  cashCol <- grep(cashAsset, colnames(returns))
  
  #remove cash from asset returns
  cashRets <- returns[, cashCol]
  assetRets <- returns[, -cashCol]
  for(i in 2:(length(ep) - lookback)) {
    retSubset <- assetRets[ep[i]:ep[i+lookback]]
    
    #forecast is the cumulative return of the lookback period
    forecast <- Return.cumulative(retSubset)
    
    #annualized (realized) volatility uses a 22-day lookback period
    annVol <- StdDev.annualized(tail(retSubset, 22))
    
    #rank the forecasts (the cumulative returns of the lookback)
    rankForecast <- rank(forecast) - ncol(assetRets) + topN
    
    #weight is inversely proportional to annualized vol
    weight <- 1/annVol
    
    #zero out anything not in the top N assets
    weight[rankForecast <= 0] <- 0
    
    #normalize and zero out anything with a negative return
    weight <- weight/sum(weight)
    weight[forecast < 0] <- 0
    
    #compute forecasted vol of portfolio
    forecastVol <- sqrt(as.numeric(t(weight)) %*% 
                          cov(retSubset) %*% 
                          as.numeric(weight)) * sqrt(scale)
    
    #if forecasted vol greater than vol limit, cut it down
    if(as.numeric(forecastVol) > annVolLimit) {
      weight <- weight * annVolLimit/as.numeric(forecastVol)
    }
    weights[[i]] <- xts(weight, order.by=index(tail(retSubset, 1)))
  }
  
  #replace cash back into returns
  returns <- cbind(assetRets, cashRets)
  weights <- do.call(rbind, weights)
  
  #cash weights are anything not in securities
  weights$CASH <- 1-rowSums(weights)
  
  #compute and return strategy returns
  stratRets <- Return.portfolio(R = returns, weights = weights)
  return(stratRets)      
}

In this case, I took a little bit of liberty with some specifics that the reference was short on. I used the full covariance matrix for forecasting the portfolio variance (I’m not sure whether JPM would ignore the covariances and use a weighted sum of individual volatilities instead), and for returns, I used the four-month cumulative. I’ve seen all sorts of permutations on how to compute returns, ranging from an average of 1-, 3-, 6-, and 12-month cumulative returns, to a single lookback period, to some two-period average, so I’m all ears if others have differing ideas, which is why I left it as a lookback parameter.
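
As an example of one of those permutations, here’s a sketch (my own, not JPM’s documented method) of a forecast built from the average of 1-, 3-, 6-, and 12-month cumulative returns. Note that with the four-month retSubset used above, the longer horizons would simply collapse to the full window, so a longer lookback would be needed to use it properly.

# Hypothetical alternative forecast: average of 1-, 3-, 6-, and 12-month cumulative returns
avgHorizonForecast <- function(retSubset, periods = c(21, 63, 126, 252)) {
  horizonRets <- lapply(periods, function(p) Return.cumulative(tail(retSubset, p)))
  Reduce(`+`, horizonRets) / length(horizonRets)
}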

Step 3) Running the strategy.

scto4_20 <- sctoStrat(returns)
getSymbols("SPY", from = "1990-01-01")
spyRets <- Return.calculate(Ad(SPY))
comparison <- na.omit(cbind(scto4_20, spyRets))
colnames(comparison) <- c("strategy", "SPY")
charts.PerformanceSummary(comparison)
apply.yearly(comparison, Return.cumulative)
stats <- rbind(table.AnnualizedReturns(comparison),
               maxDrawdown(comparison),
               CalmarRatio(comparison),
               SortinoRatio(comparison)*sqrt(252))
round(stats, 3)

Here are the statistics:

                          strategy   SPY
Annualized Return            0.118 0.089
Annualized Std Dev           0.125 0.193
Annualized Sharpe (Rf=0%)    0.942 0.460
Worst Drawdown               0.165 0.552
Calmar Ratio                 0.714 0.161
Sortino Ratio (MAR = 0%)     1.347 0.763

               strategy         SPY
2002-12-31 -0.035499564 -0.05656974
2003-12-31  0.253224759  0.28181559
2004-12-31  0.129739794  0.10697941
2005-12-30  0.066215224  0.04828267
2006-12-29  0.167686936  0.15845242
2007-12-31  0.153890329  0.05146218
2008-12-31 -0.096736711 -0.36794994
2009-12-31  0.181759432  0.26351755
2010-12-31  0.099187188  0.15056146
2011-12-30  0.073734427  0.01894986
2012-12-31  0.067679129  0.15990336
2013-12-31  0.321039353  0.32307769
2014-12-31  0.126633020  0.13463790
2015-04-16  0.004972434  0.02806776

And the equity curve:

To me, it looks like a standard rotation strategy. Aims for the highest momentum securities, diversifies to try and control risk, hits a drawdown in the crisis, recovers, and slightly lags the bull run on SPY. Nothing out of the ordinary.

So, for those interested, here you go. I’m surprised that JP Morgan itself markets this sort of thing, considering that they probably employ top-notch quants that can easily come up with products and/or strategies that are far better.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The Logical Invest Enhanced Bond Rotation Strategy (And the Importance of Dividends)

This post will display my implementation of the Logical Invest Enhanced Bond Rotation strategy. This is a strategy that indeed does work, but it is dependent on reinvesting dividends, as bonds pay coupons, which means bond ETFs pay distributions as well.

The strategy is fairly simple: using four separate fixed income markets (long-term US government bonds, high-yield bonds, emerging sovereign debt, and convertible bonds), it aims to deliver a low-risk, high-Sharpe profile. Every month, it switches into two of the securities, in either a 60/40 split (one way or the other) or a 50/50 split. My implementation for this strategy is similar to the ones I’ve done for the Logical Invest Universal Investment Strategy, which is to maximize a modified Sharpe ratio in a walk-forward process.

Here’s the code:

require(quantmod)
require(PerformanceAnalytics)
require(TTR)

LogicInvestEBR <- function(returns, lowerBound, upperBound, period, modSharpeF) {
  count <- 0
  configs <- list()
  instCombos <- combn(colnames(returns), m = 2)
  for(i in 1:ncol(instCombos)) {
    inst1 <- instCombos[1, i]
    inst2 <- instCombos[2, i]
    rets <- returns[,c(inst1, inst2)]
    weightSeq <- seq(lowerBound, upperBound, by = .1)
    for(j in 1:length(weightSeq)) {
      returnConfig <- Return.portfolio(R = rets, 
                      weights = c(weightSeq[j], 1-weightSeq[j]), 
                      rebalance_on="months")
      colnames(returnConfig) <- paste(inst1, weightSeq[j], 
                                inst2, 1-weightSeq[j], sep="_")
      count <- count + 1
      configs[[count]] <- returnConfig
    }
  }
  
  configs <- do.call(cbind, configs)
  cumRets <- cumprod(1+configs)
  
  #rolling cumulative 
  rollAnnRets <- (cumRets/lag(cumRets, period))^(252/period) - 1
  rollingSD <- sapply(X = configs, runSD, n=period)*sqrt(252)
  
  modSharpe <- rollAnnRets/(rollingSD ^ modSharpeF)
  monthlyModSharpe <- modSharpe[endpoints(modSharpe, on="months"),]
  
  findMax <- function(data) {
    return(data==max(data))
  }
  
  #configs$zeroes <- 0 #zeroes for initial periods during calibration
  weights <- t(apply(monthlyModSharpe, 1, findMax))
  weights <- weights*1
  weights <- xts(weights, order.by=as.Date(rownames(weights)))
  weights[is.na(weights)] <- 0
  weights$zeroes <- 1-rowSums(weights)
  configCopy <- configs
  configCopy$zeroes <- 0
  
  stratRets <- Return.portfolio(R = configCopy, weights = weights)
  return(stratRets)  
}

The one thing different about this code is the way I initialize the return streams. It’s an ugly piece of work, but it takes all of the pairwise combinations (that is, 4 choose 2, or 4c2) along with a sequence going by 10% for the different security weights between the lower and upper bound (that is, if the lower bound is 40% and the upper bound is 60%, the three weights will be 40-60, 50-50, and 60-40). So, in this case, there are 18 configurations: 4c2 * 3. Do note that this is not at all a framework that can be scaled up. With 20 instruments, there would be 190 different pairs, and then anywhere between 3 and 11 (if going from 0% to 100%) configurations for each pair. Obviously, not a pretty sight.
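
Just to sanity-check that combinatorics arithmetic (my own quick check):

# Sanity check of the configuration counts mentioned above
choose(4, 2)                                  # 6 pairwise combinations of 4 instruments
length(seq(.4, .6, by = .1))                  # 3 weight splits between the 40%/60% bounds
choose(4, 2) * length(seq(.4, .6, by = .1))   # 18 configurations in total
choose(20, 2)                                 # 190 pairs with 20 instruments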

Beyond that, it’s the same refrain. Bind the returns together, compute an n-day rolling cumulative return (far faster my way than using the rollApply version of Return.annualized), and divide it by the n-day rolling annualized standard deviation raised to the modified Sharpe F factor (1 gives you the Sharpe ratio, 0 gives you pure returns, and greater than 1 puts more of a focus on risk). Take the configuration with the highest modified Sharpe ratio, allocate to it, and repeat.
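
In isolation, that objective for a single return stream looks like this (a restatement of the corresponding lines in LogicInvestEBR, for one hypothetical daily return series rets):

# The modified Sharpe objective for one daily return series (restated from the function above)
modSharpeSingle <- function(rets, period = 73, modSharpeF = 2) {
  cumRets <- cumprod(1 + rets)
  rollAnnRet <- (cumRets / lag(cumRets, period))^(252 / period) - 1  # rolling annualized return
  rollAnnSD <- runSD(rets, n = period) * sqrt(252)                   # rolling annualized vol (runSD is from TTR)
  rollAnnRet / (rollAnnSD ^ modSharpeF)
}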

So, how does this perform? Here’s a test script, using the same 73-day lookback with a modified Sharpe F of 2 that I’ve used in the previous Logical Invest strategies.

symbols <- c("TLT", "JNK", "PCY", "CWB", "VUSTX", "PRHYX", "RPIBX", "VCVSX")
suppressMessages(getSymbols(symbols, from="1995-01-01", src="yahoo"))
etfClose <- Return.calculate(cbind(Cl(TLT), Cl(JNK), Cl(PCY), Cl(CWB)))
etfAdj <- Return.calculate(cbind(Ad(TLT), Ad(JNK), Ad(PCY), Ad(CWB)))
mfClose <- Return.calculate(cbind(Cl(VUSTX), Cl(PRHYX), Cl(RPIBX), Cl(VCVSX)))
mfAdj <- Return.calculate(cbind(Ad(VUSTX), Ad(PRHYX), Ad(RPIBX), Ad(VCVSX)))
colnames(etfClose) <- colnames(etfAdj) <- c("TLT", "JNK", "PCY", "CWB")
colnames(mfClose) <- colnames(mfAdj) <- c("VUSTX", "PRHYX", "RPIBX", "VCVSX")

etfClose <- etfClose[!is.na(etfClose[,4]),]
etfAdj <- etfAdj[!is.na(etfAdj[,4]),]
mfClose <- mfClose[-1,]
mfAdj <- mfAdj[-1,]

etfAdjTest <- LogicInvestEBR(returns = etfAdj, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)

etfClTest <- LogicInvestEBR(returns = etfClose, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)

mfAdjTest <- LogicInvestEBR(returns = mfAdj, lowerBound = .4, upperBound = .6,
                            period = 73, modSharpeF = 2)

mfClTest <- LogicInvestEBR(returns = mfClose, lowerBound = .4, upperBound = .6,
                           period = 73, modSharpeF = 2)

fiveStats <- function(returns) {
  return(rbind(table.AnnualizedReturns(returns), 
               maxDrawdown(returns), CalmarRatio(returns)))
}

etfs <- cbind(etfAdjTest, etfClTest)
colnames(etfs) <- c("Adjusted ETFs", "Close ETFs")
charts.PerformanceSummary((etfs))

mutualFunds <- cbind(mfAdjTest, mfClTest)
colnames(mutualFunds) <- c("Adjusted MFs", "Close MFs")
charts.PerformanceSummary(mutualFunds)
chart.TimeSeries(log(cumprod(1+mutualFunds)), legend.loc="topleft")

fiveStats(etfs)
fiveStats(mutualFunds)

So, first, the results of the ETFs:

Equity curve:

Five statistics:

> fiveStats(etfs)
                          Adjusted ETFs Close ETFs
Annualized Return            0.12320000 0.08370000
Annualized Std Dev           0.06780000 0.06920000
Annualized Sharpe (Rf=0%)    1.81690000 1.20980000
Worst Drawdown               0.06913986 0.08038459
Calmar Ratio                 1.78158934 1.04078405

In other words, reinvesting dividends accounts for roughly a third of the annualized return here (about 4 percentage points out of 12.3%), and the gap in the Calmar ratio is even larger.

Let’s look at the mutual funds. Note that these are for the sake of illustration only; mutual funds generally aren’t meant to be traded in and out of every month (many impose frequent-trading restrictions or fees).

Equity curve:

Log scale:

Statistics:

                          Adjusted MFs Close MFs
Annualized Return           0.11450000 0.0284000
Annualized Std Dev          0.05700000 0.0627000
Annualized Sharpe (Rf=0%)   2.00900000 0.4532000
Worst Drawdown              0.09855271 0.2130904
Calmar Ratio                1.16217559 0.1332706

In this case, the difference is day and night, though how much of it stems from the data source may also be an issue. Yahoo isn’t the greatest when it comes to data, and I’m not sure how much the data quality deteriorates going back that far. However, the takeaway seems to be this: with bond strategies, dividends need to be dealt with, and when considering returns data presented to you, keep in mind that adjusted returns assume the investor stays on top of dividend maintenance. Fail to reinvest the dividends in a timely fashion, and, well, the gap can be quite large.

To put it into perspective, as I was writing this post, I wondered whether or not most of this was indeed due to dividends. Here’s a plot of the difference in returns between adjusted and close ETF returns.

chart.TimeSeries(etfAdj - etfClose, legend.loc="topleft", date.format="%Y-%m",
                 main = "Return differences adjusted vs. close ETFs")

With the resulting image:

While there may be some noise on the order of 1e-5 on most days, there are clear spikes observable in the return differences. Those are dividends, and their compounding makes a sizable difference. In one case for CWB, the difference is particularly striking (Dec. 29, 2014). In fact, here’s a quick little analysis of the effect of the dividends.

dividends <- etfAdj - etfClose
divReturns <- list()
for(i in 1:ncol(dividends)) {
  diffStream <- dividends[,i]
  divPayments <- diffStream[diffStream >= 1e-3]
  divReturns[[i]] <- Return.annualized(divPayments)
}
divReturns <- do.call(cbind, divReturns)
divReturns

divReturns/Return.annualized(etfAdj)

And the result:

> divReturns
                         TLT        JNK        PCY        CWB
Annualized Return 0.03420959 0.08451723 0.05382363 0.05025999

> divReturns/Return.annualized(etfAdj)
                       TLT       JNK       PCY       CWB
Annualized Return 0.453966 0.6939243 0.5405922 0.3737499

In short, the effect of the dividend is massive. In some instances, such as with JNK, the dividend comprises more than 50% of the annualized returns for the security!

Basically, I’d like to hammer the point home one last time–backtests using adjusted data assume instantaneous maintenance of dividends. In order to achieve the optimistic returns seen in the backtests, these dividend payments must be reinvested ASAP. In short, this is the fine print on this strategy, and is a small, but critical detail that the SeekingAlpha article doesn’t mention. (Seriously, do a ctrl + F in your browser for the word “dividend”. It won’t come up in the article itself.) I wanted to make sure to add it.

One last thing: gaudy numbers when using monthly returns!

> fiveStats(apply.monthly(etfs, Return.cumulative))
                          Adjusted ETFs Close ETFs
Annualized Return            0.12150000   0.082500
Annualized Std Dev           0.06490000   0.067000
Annualized Sharpe (Rf=0%)    1.87170000   1.232100
Worst Drawdown               0.03671871   0.049627
Calmar Ratio                 3.30769620   1.662642

Look! A Calmar Ratio of 3.3, and a Sharpe near 2!*

*: Must manage dividends. Statistics reported are monthly.

Okay, in all fairness, this is a pretty solid strategy, once one commits to managing the dividends. I just felt that it should have been a topic made front and center considering its importance in this case, rather than simply swept under the “we use adjusted returns” rug, since in this instance, the effect of dividends is massive.

In conclusion, I will more or less confirm the strategy’s actual risk/reward performance (unlike some other SeekingAlpha strategies I’ve backtested), which, in all honesty, I find really impressive, but it comes with a caveat like the rest of them. However, the caveat of “be detail-oriented/meticulous/paranoid and reinvest those dividends!” is, in my opinion, a lot easier to live with than the 30%+ drawdowns found lurking in other SeekingAlpha strategies. So for those who can stay on top of those dividends (whether manually or with machine execution), here you go. I’m basically confirming the performance of Logical Invest’s strategy, just belaboring one important detail.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.