Testing the Hierarchical Risk Parity algorithm

This post is a modified version of the Adaptive Asset Allocation backtest from AllocateSmartly, using the Hierarchical Risk Parity algorithm from the last post, as Adam Butler was eager to see my results. On the whole, as Adam Butler told me he had seen, HRP does not generate outperformance when applied to a small, carefully constructed, diversified-by-selection universe of asset classes, as opposed to a universe of hundreds or even several thousand assets, where its theoretically superior properties make it the better algorithm.

First off, I would like to thank one Matthew Barry for helping me modify my HRP algorithm so as not to use the global environment for recursion. You can find his GitHub here.

Here is the modified HRP code.

covMat <- read.csv('cov.csv', header = FALSE)
corMat <- read.csv('corMat.csv', header = FALSE)

clustOrder <- hclust(dist(corMat), method = 'single')$order

getIVP <- function(covMat) {
  invDiag <- 1/diag(as.matrix(covMat))
  weights <- invDiag/sum(invDiag)
  return(weights)
}

getClusterVar <- function(covMat, cItems) {
  covMatSlice <- covMat[cItems, cItems]
  weights <- getIVP(covMatSlice)
  cVar <- t(weights) %*% as.matrix(covMatSlice) %*% weights
  return(cVar)
}

getRecBipart <- function(covMat, sortIx) {
  w <- rep(1,ncol(covMat))
  w <- recurFun(w, covMat, sortIx)
  return(w)
}

recurFun <- function(w, covMat, sortIx) {
  subIdx <- 1:trunc(length(sortIx)/2)
  cItems0 <- sortIx[subIdx]
  cItems1 <- sortIx[-subIdx]
  cVar0 <- getClusterVar(covMat, cItems0)
  cVar1 <- getClusterVar(covMat, cItems1)
  alpha <- 1 - cVar0/(cVar0 + cVar1)
  
  # scoping mechanics using w as a free parameter
  w[cItems0] <- w[cItems0] * alpha
  w[cItems1] <- w[cItems1] * (1-alpha)
  
  if(length(cItems0) > 1) {
    w <- recurFun(w, covMat, cItems0)
  }
  if(length(cItems1) > 1) {
    w <- recurFun(w, covMat, cItems1)
  }
  return(w)
}


out <- getRecBipart(covMat, clustOrder)
out

Here, covMat and corMat are from the last post. In fact, this function could be modified further by encapsulating the clustering order within the getRecBipart function, but in the interest of keeping the code as close to Marcos Lopez de Prado's as possible, I'll leave it here.
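
For those curious, here is a minimal sketch of that encapsulated variant (my own wrapper; the name getHRP is hypothetical and not from the paper):

getHRP <- function(covMat, corMat) {
  # compute the clustering order internally, then run the recursive bisection
  clustOrder <- hclust(dist(corMat), method = 'single')$order
  return(getRecBipart(covMat, clustOrder))
}

# usage -- identical output to calling getRecBipart with a precomputed order:
# out <- getHRP(covMat, corMat)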

Anyhow, the backtest will follow. One thing I will mention is that I'm using Quandl's EOD database, as Yahoo has really screwed up its financial database (i.e. some sector SPDRs have broken data, dividends not adjusted, etc.). While this database is a $50/month subscription, I believe free users can access it up to 150 times in 60 days, which should be enough to run the backtests from this blog, so long as you save your downloaded time series for later use with write.zoo.

This code needs the tseries library for the portfolio.optim function for the minimum variance portfolio (Dr. Kris Boudt has a course on this at DataCamp), along with the other standard packages.

A helper function for this backtest (and really, any other momentum rotation backtest) is the appendMissingAssets function, which simply appends the assets that were not selected (with zero weight) to the final weighting and re-orders the weights into the original ordering.

require(tseries)
require(PerformanceAnalytics)
require(quantmod)
require(Quandl)

Quandl.api_key("YOUR_AUTHENTICATION_HERE") # not displaying my own api key, sorry 😦

# function to append missing (i.e. not selected) asset names and sort into original order
appendMissingAssets <- function(wts, allAssetNames, wtsDate) {
  absentAssets <- allAssetNames[!allAssetNames %in% names(wts)]
  absentWts <- rep(0, length(absentAssets))
  names(absentWts) <- absentAssets
  wts <- c(wts, absentWts)
  wts <- xts(t(wts), order.by=wtsDate)
  wts <- wts[,allAssetNames]
  return(wts)
}

Next, we make the call to Quandl to get our data.

symbols <- c("SPY", "VGK", "EWJ", "EEM", "VNQ", "RWX", "IEF", "TLT", "DBC", "GLD")

rets <- list()
for(i in 1:length(symbols)) {
  
  # Quandl call to download from the EOD database -- free users should cache the result with write.zoo (see the sketch after this block)
  
  returns <- Return.calculate(Quandl(paste0("EOD/", symbols[i]), start_date="1990-12-31", type = "xts")$Adj_Close)
  colnames(returns) <- symbols[i]
  rets[[i]] <- returns
}
rets <- na.omit(do.call(cbind, rets))
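
As an aside, here is a minimal sketch (my addition, not from the original post) of caching the downloaded returns with write.zoo so that free Quandl users stay under the API call limit; the file name rets.csv is arbitrary:

require(zoo)

# save the combined return series once...
write.zoo(rets, file = "rets.csv", sep = ",")

# ...and on later runs, read it back instead of calling Quandl again
rets <- as.xts(read.zoo("rets.csv", header = TRUE, sep = ",", format = "%Y-%m-%d"))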

While Josh Ulrich fixed quantmod to actually get Yahoo data after Yahoo broke the API, the problem is that the Yahoo data is now garbage as well, and I’m not sure how much Josh Ulrich can do about that. I really hope some other provider can step up and provide free, usable EOD data so that I don’t have to worry about readers not being able to replicate the backtest, as my policy for this blog is that readers should be able to replicate the backtests so they don’t just nod and take my word for it. If you are or know of such a provider, please leave a comment so that I can let the blog readers know all about you.

Next, we initialize the settings for the backtest.

invVolWts <- list()
minVolWts <- list()
hrpWts <- list()
ep <- endpoints(rets, on =  "months")
nMonths = 6 # month lookback (6 as per parameters from allocateSmartly)
nVol = 20 # day lookback for volatility (20 ibid)

The AAA backtest actually uses a 126-day lookback rather than a 6-month one, but since the strategy trades at the end of every month, a 6-month lookback is effectively the same thing, give or take a few days out of 126, and the code is less complex this way.

Next, we have our actual backtest.

for(i in 1:(length(ep)-nMonths)) {
  
  # get returns subset and compute absolute momentum
  retSubset <- rets[c(ep[i]:ep[(i+nMonths)]),]
  retSubset <- retSubset[-1,]
  moms <- Return.cumulative(retSubset)
  
  # select top performing assets and subset returns for them
  highRankAssets <- rank(moms) >= 6 # top 5 assets
  posReturnAssets <- moms > 0 # positive momentum assets
  selectedAssets <- highRankAssets & posReturnAssets # intersection of the above
  selectedSubset <- retSubset[,selectedAssets] # subset returns slice
  
  if(sum(selectedAssets)==0) { # if no qualifying assets, zero weight for period
    
    wts <- xts(t(rep(0, ncol(retSubset))), order.by=last(index(retSubset)))
    colnames(wts) <- colnames(retSubset)
    invVolWts[[i]] <- minVolWts[[i]] <- hrpWts[[i]] <- wts
    
  } else if (sum(selectedAssets)==1) { # if one qualifying asset, invest fully into it
    
    wts <- xts(t(rep(0, ncol(retSubset))), order.by=last(index(retSubset)))
    colnames(wts) <- colnames(retSubset)
    wts[, which(selectedAssets==1)] <- 1
    invVolWts[[i]] <- minVolWts[[i]] <- hrpWts[[i]] <- wts
    
  } else { # otherwise, use weighting algorithms
    
    cors <- cor(selectedSubset) # correlation
    volSubset <- tail(selectedSubset, nVol) # 20 day volatility
    vols <- StdDev(volSubset)
    covs <- t(vols) %*% vols * cors
    
    # minimum volatility using portfolio.optim from tseries
    minVolRets <- t(matrix(rep(1, sum(selectedAssets))))
    minVolWt <- portfolio.optim(x=minVolRets, covmat = covs)$pw
    names(minVolWt) <- colnames(covs)
    minVolWt <- appendMissingAssets(minVolWt, colnames(retSubset), last(index(retSubset)))
    minVolWts[[i]] <- minVolWt
    
    # inverse volatility weights
    invVols <- 1/vols 
    invVolWt <- invVols/sum(invVols) 
    invNames <- colnames(invVolWt)
    invVolWt <- as.numeric(invVolWt) 
    names(invVolWt) <- invNames
    invVolWt <- appendMissingAssets(invVolWt, colnames(retSubset), last(index(retSubset)))
    invVolWts[[i]] <- invVolWt
    
    # hrp weights
    clustOrder <- hclust(dist(cors), method = 'single')$order
    hrpWt <- getRecBipart(covs, clustOrder)
    names(hrpWt) <- colnames(covs)
    hrpWt <- appendMissingAssets(hrpWt, colnames(retSubset), last(index(retSubset)))
    hrpWts[[i]] <- hrpWt
  }
}

In a few sentences, this is what happens:

The algorithm takes a subset of the returns (the past six months at every month), and computes absolute momentum. It then ranks the ten absolute momentum calculations, and selects the intersection of the top 5, and those with a return greater than zero (so, a dual momentum calculation).

If no assets qualify, the algorithm invests in nothing. If only one asset qualifies, the algorithm invests fully in that one asset. If two or more assets qualify, the algorithm computes a covariance matrix by combining 20-day volatilities with the 126-day correlation matrix (that is, sd_20' %*% sd_20, multiplied elementwise by cor_126). It then computes normalized inverse-volatility weights using the volatility from the past 20 days, a minimum variance portfolio with the portfolio.optim function, and lastly, the hierarchical risk parity weights using the HRP code above from Marcos Lopez de Prado's paper.
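
As a quick numeric illustration of that covariance construction, here is a toy example of my own (not part of the backtest):

# toy demonstration: covariance built from short-horizon vols and a
# longer-horizon correlation matrix -- the outer product of the 20-day
# volatilities, multiplied elementwise by the 126-day correlations
set.seed(42)
toyRets <- matrix(rnorm(126 * 3, sd = rep(c(.01, .02, .03), each = 126)), ncol = 3)
vols <- apply(tail(toyRets, 20), 2, sd)  # 20-day volatilities
cors <- cor(toyRets)                     # 126-day correlations
covs <- outer(vols, vols) * cors         # sd_20' %*% sd_20, elementwise * cor_126
covs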

Lastly, the program puts together all of the weights, and adds a cash investment for any period without any investments.

invVolWts <- round(do.call(rbind, invVolWts), 3) # round for readability
minVolWts <- round(do.call(rbind, minVolWts), 3)
hrpWts <- round(do.call(rbind, hrpWts), 3)

# allocate to cash if no allocation made due to all negative momentum assets
invVolWts$cash <- 0; invVolWts$cash <- 1-rowSums(invVolWts)
hrpWts$cash <- 0; hrpWts$cash <- 1-rowSums(hrpWts)
minVolWts$cash <- 0; minVolWts$cash <- 1-rowSums(minVolWts)

# cash value will be zero
rets$cash <- 0

# compute backtest returns
invVolRets <- Return.portfolio(R = rets, weights = invVolWts)
minVolRets <- Return.portfolio(R = rets, weights = minVolWts)
hrpRets <- Return.portfolio(R = rets, weights = hrpWts)

Here are the results:

compare <- cbind(invVolRets, minVolRets, hrpRets)
colnames(compare) <- c("invVol", "minVol", "HRP")
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))  
                             invVol    minVol       HRP
Annualized Return         0.0872000 0.0724000 0.0792000
Annualized Std Dev        0.1208000 0.1025000 0.1136000
Annualized Sharpe (Rf=0%) 0.7221000 0.7067000 0.6968000
Worst Drawdown            0.1548801 0.1411368 0.1593287
Calmar Ratio              0.5629882 0.5131956 0.4968234

In short, in the context of a small, carefully selected, and allegedly diversified (I'll let Adam Butler speak to that one) universe, where results are dominated by the decision of which assets to invest in rather than how much, the theoretical upside of an algorithm that exploits the covariance structure without needing to invert a covariance matrix can be lost.

However, this test (albeit from 2007 onwards, thanks to ETF inception dates combined with lookback burn-in) confirms what Adam Butler himself told me, which is that HRP hasn't impressed him, and from this backtest, I can see why. That said, in the context of dual momentum rank selection, I'm not convinced that any weighting scheme will realize much better performance than any other.

Thanks for reading.

NOTE: I am always interested in networking and hearing about full-time opportunities related to my skill set. My LinkedIn profile can be found here.

The Marcos Lopez de Prado Hierarchical Risk Parity Algorithm

This post will be about replicating the Marcos Lopez de Prado algorithm from his paper, "Building Diversified Portfolios that Outperform Out-of-Sample." The algorithm attempts to strike a tradeoff between classic mean-variance optimization, which takes the covariance structure into account but is unstable, and inverse-volatility weighting, which ignores covariance but is more stable.

This is a paper that I struggled with until I ran the code in Python (I have Anaconda installed but have trouble installing some packages such as keras because I'm on Windows…I would love someone to walk me through setting up a Linux dual-boot), as I assumed that the clustering algorithm was able to concretely assign every asset to a particular cluster (i.e. ETF 1 would be in cluster 1, ETF 2 in cluster 3, etc.). It turns out that isn't the case at all.

Here’s how the algorithm actually works.

First off, it computes a covariance and correlation matrix (created from simulated data in Marcos's paper). Next, it runs a hierarchical clustering algorithm on a distance-transformed correlation matrix, with the "single" method (i.e. friend-of-friends; do ?hclust in R to read up more on this). The key output here is the order of the assets produced by the clustering algorithm. Note well: this ordering is the only relevant artifact of the entire clustering step.

Using this order, it then uses an algorithm that does the following:

Initialize a vector of weights equal to 1 for each asset.

Then, run the following recursive algorithm:

1) Break the ordering vector into two lists of equal length (or as close to equal as possible).

2) For each half, compute the inverse-variance weights (using just the diagonal) of the covariance matrix slice containing the assets of interest, and then compute the variance of that cluster under those weights (i.e. w' * S * w, where S is the covariance slice).

3) Then, do a basic inverse-variance weighting between the two clusters: the weight of cluster 0 is alpha = 1 - cluster_variance_0/(cluster_variance_0 + cluster_variance_1), and the weight of cluster 1 is its complement, (1 - alpha).

4) In the original weight vector, multiply the weights of the assets in cluster 0 by the weight of cluster 0, and the weights of the assets in cluster 1 by the weight of cluster 1. That is, weights[index_assets_cluster_0] *= alpha, and weights[index_assets_cluster_1] *= 1 - alpha.

5) Lastly, if a half contains more than one asset, repeat this entire process on it, until every asset ends up in its own cluster.

Here is the implementation in R code.

First off, here are the correlation matrix and the covariance matrix for use in this code, obtained from Marcos Lopez de Prado's code in the appendix of his paper.

> covMat
             V1           V2           V3           V4           V5          V6           V7           V8           V9          V10
1   1.000647799 -0.003050479  0.010033224 -0.010759689 -0.005036503 0.008762563  0.998201625 -0.001393196 -0.001254522 -0.009365991
2  -0.003050479  1.009021349  0.008613817  0.007334478 -0.009492688 0.013031817 -0.009420720 -0.015346223  1.010520047  1.013334849
3   0.010033224  0.008613817  1.000739363 -0.000637885  0.001783293 1.001574768  0.006385368  0.001922316  0.012902050  0.007997935
4  -0.010759689  0.007334478 -0.000637885  1.011854725  0.005759976 0.000905812 -0.011912269  0.000461894  0.012572661  0.009621670
5  -0.005036503 -0.009492688  0.001783293  0.005759976  1.005835878 0.005606343 -0.009643250  1.008567427 -0.006183035 -0.007942770
6   0.008762563  0.013031817  1.001574768  0.000905812  0.005606343 1.064309825  0.004413960  0.005780148  0.017185396  0.011601336
7   0.998201625 -0.009420720  0.006385368 -0.011912269 -0.009643250 0.004413960  1.058172027 -0.006755374 -0.008099181 -0.016240271
8  -0.001393196 -0.015346223  0.001922316  0.000461894  1.008567427 0.005780148 -0.006755374  1.074833155 -0.011903469 -0.013738378
9  -0.001254522  1.010520047  0.012902050  0.012572661 -0.006183035 0.017185396 -0.008099181 -0.011903469  1.075346677  1.015220126
10 -0.009365991  1.013334849  0.007997935  0.009621670 -0.007942770 0.011601336 -0.016240271 -0.013738378  1.015220126  1.078586686
> corMat
             V1           V2           V3           V4           V5          V6           V7           V8           V9          V10
1   1.000000000 -0.003035829  0.010026270 -0.010693011 -0.005020245 0.008490954  0.970062043 -0.001343386 -0.001209382 -0.009015412
2  -0.003035829  1.000000000  0.008572055  0.007258718 -0.009422702 0.012575370 -0.009117080 -0.014736040  0.970108941  0.971348946
3   0.010026270  0.008572055  1.000000000 -0.000633903  0.001777455 0.970485047  0.006205079  0.001853505  0.012437239  0.007698212
4  -0.010693011  0.007258718 -0.000633903  1.000000000  0.005709500 0.000872861 -0.011512172  0.000442908  0.012052964  0.009210090
5  -0.005020245 -0.009422702  0.001777455  0.005709500  1.000000000 0.005418538 -0.009347204  0.969998023 -0.005945165 -0.007625721
6   0.008490954  0.012575370  0.970485047  0.000872861  0.005418538 1.000000000  0.004159261  0.005404237  0.016063910  0.010827955
7   0.970062043 -0.009117080  0.006205079 -0.011512172 -0.009347204 0.004159261  1.000000000 -0.006334331 -0.007592568 -0.015201540
8  -0.001343386 -0.014736040  0.001853505  0.000442908  0.969998023 0.005404237 -0.006334331  1.000000000 -0.011072068 -0.012759610
9  -0.001209382  0.970108941  0.012437239  0.012052964 -0.005945165 0.016063910 -0.007592568 -0.011072068  1.000000000  0.942667300
10 -0.009015412  0.971348946  0.007698212  0.009210090 -0.007625721 0.010827955 -0.015201540 -0.012759610  0.942667300  1.000000000

Now, for the implementation.

This reads in the two matrices above and gets the clustering order.

covMat <- read.csv('cov.csv', header = FALSE)
corMat <- read.csv('corMat.csv', header = FALSE)

clustOrder <- hclust(dist(corMat), method = 'single')$order

This is the clustering order:

> clustOrder
 [1]  9  2 10  1  7  3  6  4  5  8

Next come the getIVP (get Inverse Variance Portfolio) and getClusterVar functions (note: I'm trying to keep the naming conventions identical to those in Dr. Lopez de Prado's paper).

getIVP <- function(covMat) {
  # get inverse variance portfolio from diagonal of covariance matrix
  invDiag <- 1/diag(as.matrix(covMat))
  weights <- invDiag/sum(invDiag)
  return(weights)
}

getClusterVar <- function(covMat, cItems) {
  # compute cluster variance from the inverse variance portfolio above
  covMatSlice <- covMat[cItems, cItems]
  weights <- getIVP(covMatSlice)
  cVar <- t(weights) %*% as.matrix(covMatSlice) %*% weights
  return(cVar)
}

Next, my code diverges from the code in the paper, because I do not use the list comprehension structure, but instead opt for a recursive algorithm, as I find that style to be more readable.

One wrinkle to note is the use of the double-arrow assignment operator (<<-) to assign to a variable outside the scope of the recurFun function. I assign the initial weights vector w in the global environment and update it from within the recurFun function. I am aware that it is a faux pas to create variables in the global environment, but my attempts at creating a temporary environment in which to update the weight vector did not produce the updating mechanism I had hoped for, so a little bit of assistance with refactoring this code would be appreciated (one closure-based alternative is sketched after the code below).

getRecBipart <- function(covMat, sortIx) {
  # keeping track of weights vector in the global environment
  assign("w", value = rep(1, ncol(covMat)), envir = .GlobalEnv)

  # run recursion function
  recurFun(covMat, sortIx)
  return(w)
}

recurFun <- function(covMat, sortIx) {
  # get first half of sortIx which is a cluster order
  subIdx <- 1:trunc(length(sortIx)/2)

  # subdivide ordering into first half and second half
  cItems0 <- sortIx[subIdx]
  cItems1 <- sortIx[-subIdx]

  # compute cluster variances of covariance matrices indexed
  # on first half and second half of ordering
  cVar0 <- getClusterVar(covMat, cItems0)
  cVar1 <- getClusterVar(covMat, cItems1)
  alpha <- 1 - cVar0/(cVar0 + cVar1)
  
  # updating weights outside the function using scoping mechanics 
  w[cItems0] <<- w[cItems0] * alpha
  w[cItems1] <<- w[cItems1] * (1-alpha)
  
  # rerun the function on a half if the length of that half is greater than 1
  if(length(cItems0) > 1) {
    recurFun(covMat, cItems0)
  }
  if(length(cItems1) > 1) {
    recurFun(covMat, cItems1)
  }
}
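
As an aside, here is one possible refactor (a sketch of my own, not from the paper): keep the same recursive structure, but let w live in the enclosing function's frame, so that <<- updates that copy instead of anything in the global environment.

getRecBipartClosure <- function(covMat, sortIx) {
  w <- rep(1, ncol(covMat))
  
  recurse <- function(sortIx) {
    subIdx <- 1:trunc(length(sortIx)/2)
    cItems0 <- sortIx[subIdx]
    cItems1 <- sortIx[-subIdx]
    cVar0 <- getClusterVar(covMat, cItems0)
    cVar1 <- getClusterVar(covMat, cItems1)
    
    # as.numeric drops the 1x1 matrix dimension left over from %*%
    alpha <- as.numeric(1 - cVar0/(cVar0 + cVar1))
    
    # <<- finds w in the enclosing getRecBipartClosure frame, not .GlobalEnv
    w[cItems0] <<- w[cItems0] * alpha
    w[cItems1] <<- w[cItems1] * (1 - alpha)
    
    if(length(cItems0) > 1) recurse(cItems0)
    if(length(cItems1) > 1) recurse(cItems1)
  }
  
  recurse(sortIx)
  return(w)
}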

Lastly, let's run getRecBipart.

out <- getRecBipart(covMat, clustOrder)

With the result (which matches the paper):

> out
 [1] 0.06999366 0.07592151 0.10838948 0.19029104 0.09719887 0.10191545 0.06618868 0.09095933 0.07123881 0.12790318

So, hopefully this democratizes the use of this technology in R. While I have seen a raw Rcpp implementation and one from the Systematic Investor Toolbox, neither of those implementations satisfied me from a “plug and play” perspective. This implementation solves that issue. Anyone here can copy and paste these functions into their environment and immediately make use of one of the algorithms devised by one of the top minds in quantitative finance.

A demonstration in a backtest using this methodology will be forthcoming.

Thanks for reading.

NOTE: I am always interested in networking and full-time opportunities which may benefit from my skills. Furthermore, I am also interested in project work in the volatility ETF trading space. My LinkedIn profile can be found here.

An Introduction to Portfolio Component Conditional Value At Risk

This post will introduce the component conditional value at risk mechanics found in PerformanceAnalytics, from a paper written by Brian Peterson, Kris Boudt, and Peter Carl. It is an easy-to-call mechanism for computing the component expected shortfall of asset returns as they apply to a portfolio. While the exact mechanics are fairly complex, the upside is that the running time is nearly instantaneous, and the method is a solid tool to include in asset allocation analysis.

For those interested in an in-depth analysis of the intuition of component conditional value at risk, I refer them to the paper written by Brian Peterson, Peter Carl, and Kris Boudt.

Essentially, here's the idea: every asset in a given portfolio has a marginal contribution to the portfolio's total conditional value at risk (also known as expected shortfall), that is, the expected loss when the loss surpasses a certain threshold. For instance, if you want to know your 5% expected shortfall, it's the average of the worst 5 returns per 100 days, and so on. While it may sound as though daily-resolution returns would never provide enough data over a sufficiently short time frame (one year or less) to estimate this directly, the expected shortfall formula in PerformanceAnalytics defaults to an approximation using a Cornish-Fisher expansion, which delivers very good results so long as the p-value isn't too extreme (that is, it works for relatively sane p-values such as the 1%-10% range).
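
As a quick aside (my own example, not from the original post), here is how the Cornish-Fisher ("modified") estimate compares to the plain historical estimate on a single return series at the 5% level:

require(PerformanceAnalytics)
data(edhec)

# modified (Cornish-Fisher) expected shortfall vs. the raw historical average
CVaR(edhec[, "Convertible Arbitrage"], p = .95, method = "modified")
CVaR(edhec[, "Convertible Arbitrage"], p = .95, method = "historical")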

Component conditional value at risk has two uses: first off, given no input weights, it defaults to an equal-weight portfolio, which allows it to provide a risk estimate for each individual asset without burdening the researcher with creating his or her own correlation/covariance heuristics. Secondly, when provided with a set of weights, the output changes to reflect the contribution of the various assets in proportion to those weights. This means the methodology works very nicely with strategies that exclude assets based on momentum but need a weighting scheme for the remaining assets. Furthermore, it also allows an ex-post analysis of risk contribution, to see which instrument contributed what to risk.

First, a demonstration of how the mechanism works using the edhec data set. There is no strategy here, just a demonstration of syntax.

require(quantmod)
require(PerformanceAnalytics)

data(edhec)

tmp <- CVaR(edhec, portfolio_method = "component")

This will assume an equal-weight contribution from all of the funds in the edhec data set.

So tmp is the contribution to expected shortfall from each of the various edhec managers over the entire time period. Here’s the output:

$MES
           [,1]
[1,] 0.03241585

$contribution
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
          0.0074750513          -0.0028125166           0.0039422674           0.0069376579           0.0008077760
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
          0.0037114666           0.0043125937           0.0007173036           0.0036152960           0.0013693293
        Relative Value          Short Selling         Funds of Funds
          0.0037650911          -0.0048178690           0.0033924063 

$pct_contrib_MES
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
            0.23059863            -0.08676361             0.12161541             0.21402052             0.02491917
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
            0.11449542             0.13303965             0.02212817             0.11152864             0.04224258
        Relative Value          Short Selling         Funds of Funds
            0.11614968            -0.14862694             0.10465269

The salient part of this is the percent contribution (the last output). Notice that it can be negative, meaning that certain funds gain when others lose. At least, this was the case over the current data set. These assets diversify a portfolio and actually lower expected shortfall.

> tmp2 <- CVaR(edhec, portfolio_method = "component", weights = c(rep(.1, 10), rep(0,3)))
> tmp2
$MES
           [,1]
[1,] 0.04017453

$contribution
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
          0.0086198045          -0.0046696862           0.0058778855           0.0109152240           0.0009596620
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
          0.0054824325           0.0050398011           0.0009638502           0.0044568333           0.0025287234
        Relative Value          Short Selling         Funds of Funds
          0.0000000000           0.0000000000           0.0000000000 

$pct_contrib_MES
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
            0.21455894            -0.11623499             0.14630875             0.27169512             0.02388732
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
            0.13646538             0.12544767             0.02399157             0.11093679             0.06294345
        Relative Value          Short Selling         Funds of Funds
            0.00000000             0.00000000             0.00000000

In this case, I equally weighted the first ten managers in the edhec data set, and put zero weight in the last three. Furthermore, we can see what happens when the weights are not equal.

> tmp3 <- CVaR(edhec, portfolio_method = "component", weights = c(.2, rep(.1, 9), rep(0,3)))
> tmp3
$MES
           [,1]
[1,] 0.04920372

$contribution
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
          0.0187406982          -0.0044391078           0.0057235762           0.0102706768           0.0007710434
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
          0.0051541429           0.0055944367           0.0008028457           0.0044085104           0.0021768951
        Relative Value          Short Selling         Funds of Funds
          0.0000000000           0.0000000000           0.0000000000 

$pct_contrib_MES
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
            0.38087972            -0.09021895             0.11632406             0.20873782             0.01567043
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
            0.10475109             0.11369947             0.01631677             0.08959710             0.04424249
        Relative Value          Short Selling         Funds of Funds
            0.00000000             0.00000000             0.00000000

This time, notice that as the weight in the convertible arbitrage manager increased, so too did its contribution to the portfolio's expected shortfall.

For a future backtest, I would like to make some data requests. I would like to use the universe found in Faber’s Global Asset Allocation book. That said, the simulations in that book go back to 1972, and I was wondering if anyone out there has daily returns for those assets/indices. While some ETFs go back into the early 2000s, there are some that start rather late such as DBC (commodities, early 2006), GLD (gold, early 2004), BWX (foreign bonds, late 2007), and FTY (NAREIT, early 2007). As an eight-year backtest would be a bit short, I was wondering if anyone had data with more history.

One other thing: I will be in New York for the trading show, speaking on the "programming wars" panel on October 6th.

Thanks for reading.

NOTE: While I am currently contracting, I am also looking for a permanent position which can benefit from my skills for when my current contract ends. If you have or are aware of such an opening, I will be happy to speak with you.

How To Compute Turnover With Return.Portfolio in R

This post will demonstrate how to take into account turnover when dealing with returns-based data using PerformanceAnalytics and the Return.Portfolio function in R. It will demonstrate this with a basic strategy on the nine sector SPDRs.

So, first off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are many individuals out there with a plethora of questions (many of which are already demonstrated on this blog), occasionally an industry veteran, a PhD statistics student from Stanford, or some other very intelligent individual will ask a question on a topic that I haven't yet touched on here, which prompts a post demonstrating another technical aspect of R. This is one of those times.

So, this demonstration will be about computing turnover in returns space using the PerformanceAnalytics package. Simply put, outside of the PortfolioAnalytics package, PerformanceAnalytics with its Return.portfolio function is the go-to R package for portfolio management simulations, as it can take a set of weights and a set of returns and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics's functions.

Again, the strategy is this: take the 9 three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest in it. Normalize across all invested sectors (that is, 1/9th each if invested in all 9, 100% into one if only one qualifies, and 100% cash, denoted with a zero-return vector, if no sectors qualify). It's a simple, toy strategy, as the strategy isn't the point of the demonstration.

Here’s the basic setup code:

require(TTR)
require(PerformanceAnalytics)
require(quantmod)

symbols <- c("XLF", "XLK", "XLU", "XLE", "XLP", "XLI", "XLB", "XLV", "XLY")
getSymbols(symbols, src='yahoo', from = '1990-01-01')
prices <- list()
for(i in 1:length(symbols)) {
  tmp <- Ad(get(symbols[[i]]))
  prices[[i]] <- tmp
}
prices <- do.call(cbind, prices)

# Our signal is a simple adjusted price over 200 day SMA
signal <- prices > xts(apply(prices, 2, SMA, n = 200), order.by=index(prices))

# equal weight all assets with price above SMA200
returns <- Return.calculate(prices)
weights <- signal/(rowSums(signal)+1e-16)

# With Return.portfolio, need all weights to sum to 1
weights$zeroes <- 1 - rowSums(weights)
returns$zeroes <- 0

monthlyWeights <- na.omit(weights[endpoints(weights, on = 'months'),])
weights <- na.omit(weights)
returns <- na.omit(returns)

So, get the SPDRs, put them together, compute their returns, generate the signal, and create the zero-return vector, since Return.portfolio treats weights summing to less than 1 as a withdrawal of capital, and weights summing to more than 1 as the addition of more capital (big FYI here).

Now, here’s how to compute turnover:

out <- Return.portfolio(R = returns, weights = monthlyWeights, verbose = TRUE)
beginWeights <- out$BOP.Weight
endWeights <- out$EOP.Weight
txns <- beginWeights - lag(endWeights)
monthlyTO <- xts(rowSums(abs(txns[,1:9])), order.by=index(txns))
plot(monthlyTO)

So, the trick is this: when you call Return.portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight, the latter two standing for Beginning Of Period Weight and End Of Period Weight.

Turnover is computed from the gap between where the day's returns leave the portfolio and where that portfolio actually stands at the beginning of the next period. That is, the end-of-period weight is the beginning-of-period weight after taking into account that day's drift/return for each asset, while the new beginning-of-period weight is the previous end-of-period weight plus any transacting that was done. Thus, in order to find the actual transactions (or turnover), one subtracts the previous end-of-period weight from the current beginning-of-period weight.
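
Here is a toy numeric illustration of that relationship (hypothetical numbers, my own example): two assets start the day at 50/50, the first returns +10% and the second 0%, and the next rebalance resets the portfolio to 50/50.

bop <- c(0.50, 0.50)                                 # beginning-of-period weights
dayRet <- c(0.10, 0.00)                              # the day's returns
eop <- bop * (1 + dayRet) / sum(bop * (1 + dayRet))  # drifted end-of-period weights, ~c(0.524, 0.476)

# transacting back to 50/50 implies roughly 4.8% two-way turnover for the day
sum(abs(c(0.50, 0.50) - eop))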

This is what such transactions look like for this strategy.

Something we can do with such data is take a one-year rolling turnover, accomplished with the following code:

yearlyTO <- runSum(monthlyTO, 252)
plot(yearlyTO, main = "running one year turnover")

It looks like this:

This essentially means that one year’s worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is around 800% at maximum. That may be pretty high for some people.

Now, here's the application when you penalize transaction costs at 20 basis points of the notional traded (that is, it costs 20 cents to transact $100).

txnCosts <- monthlyTO * -.0020
retsWithTxnCosts <- out$returns + txnCosts
compare <- na.omit(cbind(out$returns, retsWithTxnCosts))
colnames(compare) <- c("NoTxnCosts", "TxnCosts20BPs")
charts.PerformanceSummary(compare)
table.AnnualizedReturns(compare)

And the result:


                          NoTxnCosts TxnCosts20BPs
Annualized Return             0.0587        0.0489
Annualized Std Dev            0.1554        0.1553
Annualized Sharpe (Rf=0%)     0.3781        0.3149

So, at 20 basis points on transaction costs, that takes about one percent in returns per year out of this (admittedly, terrible) strategy. This is far from negligible.

So, that is how you actually compute turnover and transaction costs. In this case, the transaction cost model was very simple. However, given that Return.portfolio returns transactions at the individual asset level, one could get as complex as they would like with modeling the transaction costs.

Thanks for reading.

NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you’ll be able to find me there.

Momentum, Markowitz, and Solving Rank-Deficient Covariance Matrices — The Constrained Critical Line Algorithm

This post will feature the differences between my implementation of the constrained critical line algorithm and that of Dr. Clarence Kwan. The constrained critical line algorithm is a form of gradient descent that incorporates elements of momentum. My implementation includes a volatility-targeting binary search algorithm.

First off, rather than try to explain the algorithm piece by piece, I'll defer to Dr. Clarence Kwan's paper and Excel spreadsheet, from which I obtained my original implementation. Since that paper and spreadsheet explain the functionality of the algorithm, I won't repeat that process here. Essentially, the constrained critical line algorithm incorporates its lambda constraints into the structure of the covariance matrix itself. This innovation actually allows the algorithm to invert previously rank-deficient matrices.

Now, while Markowitz mean-variance optimization may be a bit of old news for some, the ability to use a short lookback for momentum with monthly data has allowed me and my two coauthors (Dr. Wouter Keller, who came up with Flexible and Elastic Asset Allocation, and Adam Butler of GestaltU) to backtest more than 30 assets over a century's worth of data, despite using only a 12-month formation period. That paper can be found here.

Let’s look at the code for the function.

CCLA <- function(covMat, retForecast, maxIter = 1000, 
                 verbose = FALSE, scale = 252, 
                 weightLimit = .7, volThresh = .1) {
  if(length(retForecast) > length(unique(retForecast))) {
    sequentialNoise <- seq(1:length(retForecast)) * 1e-12
    retForecast <- retForecast + sequentialNoise
  }
  
  #initialize original out/in/up status
  if(length(weightLimit) == 1) {
    weightLimit <- rep(weightLimit, ncol(covMat))
  }
  rankForecast <- length(retForecast) - rank(retForecast) + 1
  remainingWeight <- 1 #have 100% of weight to allocate
  upStatus <- inStatus <- rep(0, ncol(covMat))
  i <- 1
  while(remainingWeight > 0) {
    securityLimit <- weightLimit[rankForecast == i]
    if(securityLimit < remainingWeight) {
      upStatus[rankForecast == i] <- 1 #if we can't invest all remaining weight into the security
      remainingWeight <- remainingWeight - securityLimit
    } else {
      inStatus[rankForecast == i] <- 1
      remainingWeight <- 0
    }
    i <- i + 1
  }
  
  #initial matrices (W, H, K, identity, negative identity)
  covMat <- as.matrix(covMat)
  retForecast <- as.numeric(retForecast)
  init_W <- cbind(2*covMat, rep(-1, ncol(covMat)))
  init_W <- rbind(init_W, c(rep(1, ncol(covMat)), 0))
  H_vec <- c(rep(0, ncol(covMat)), 1)
  K_vec <- c(retForecast, 0)
  negIdentity <- -1*diag(ncol(init_W))
  identity <- diag(ncol(init_W))
  matrixDim <- nrow(init_W)
  weightLimMat <- matrix(rep(weightLimit, matrixDim), ncol=ncol(covMat), byrow=TRUE)
  
  #out status is simply what isn't in or up
  outStatus <- 1 - inStatus - upStatus
  
  #initialize expected volatility/count/turning points data structure
  expVol <- Inf
  lambda <- 100
  count <- 0
  turningPoints <- list()
  while(lambda > 0 & count < maxIter) {
    
    #old lambda and old expected volatility for use with numerical algorithms
    oldLambda <- lambda
    oldVol <- expVol
    
    count <- count + 1
    
    #compute W, A, B
    inMat <- matrix(rep(c(inStatus, 1), matrixDim), nrow = matrixDim, byrow = TRUE)
    upMat <- matrix(rep(c(upStatus, 0), matrixDim), nrow = matrixDim, byrow = TRUE)
    outMat <- matrix(rep(c(outStatus, 0), matrixDim), nrow = matrixDim, byrow = TRUE)
    
    W <- inMat * init_W + upMat * identity + outMat * negIdentity
    
    inv_W <- solve(W)
    modified_H <- H_vec - rowSums(weightLimMat* upMat[,-matrixDim] * init_W[,-matrixDim])
    A_vec <- inv_W %*% modified_H
    B_vec <- inv_W %*% K_vec
    
    #remove the last elements from A and B vectors
    truncA <- A_vec[-length(A_vec)]
    truncB <- B_vec[-length(B_vec)]
    
    #compute in Ratio (aka Ratio(1) in Kwan.xls)
    inRatio <- rep(0, ncol(covMat))
    inRatio[truncB > 0] <- -truncA[truncB > 0]/truncB[truncB > 0]
    
    #compute up Ratio (aka Ratio(2) in Kwan.xls)
    upRatio <- rep(0, ncol(covMat))
    upRatioIndices <- which(inStatus==TRUE & truncB < 0)
    if(length(upRatioIndices) > 0) {
      upRatio[upRatioIndices] <- (weightLimit[upRatioIndices] - truncA[upRatioIndices]) / truncB[upRatioIndices]
    }
    
    #find lambda -- max of up and in ratios
    maxInRatio <- max(inRatio)
    maxUpRatio <- max(upRatio)
    lambda <- max(maxInRatio, maxUpRatio)
    
    #compute new weights
    wts <- inStatus*(truncA + truncB * lambda) + upStatus * weightLimit + outStatus * 0
    
    #compute expected return and new expected volatility
    expRet <- t(retForecast) %*% wts
    expVol <- sqrt(wts %*% covMat %*% wts) * sqrt(scale)
    
    #create turning point data row and append it to turning points
    turningPoint <- cbind(count, expRet, lambda, expVol, t(wts))
    colnames(turningPoint) <- c("CP", "Exp. Ret.", "Lambda", "Exp. Vol.", colnames(covMat))
    turningPoints[[count]] <- turningPoint
    
    #binary search for volatility threshold -- if the first iteration is lower than the threshold,
    #then immediately return, otherwise perform the binary search until convergence of lambda
    if(oldVol == Inf & expVol < volThresh) {
      turningPoints <- do.call(rbind, turningPoints)
      threshWts <- tail(turningPoints, 1)
      return(list(turningPoints, threshWts))
    } else if(oldVol > volThresh & expVol < volThresh) {
      upLambda <- oldLambda
      dnLambda <- lambda
      meanLambda <- (upLambda + dnLambda)/2
      while(upLambda - dnLambda > .00001) {
        
        #compute mean lambda and recompute weights, expected return, and expected vol
        meanLambda <- (upLambda + dnLambda)/2
        wts <- inStatus*(truncA + truncB * meanLambda) + upStatus * weightLimit + outStatus * 0
        expRet <- t(retForecast) %*% wts
        expVol <- sqrt(wts %*% covMat %*% wts) * sqrt(scale)
        
        #if new expected vol is less than threshold, mean becomes lower bound
        #otherwise, it becomes the upper bound, and loop repeats
        if(expVol < volThresh) {
          dnLambda <- meanLambda
        } else {
          upLambda <- meanLambda
        }
      }
      
      #once the binary search completes, return those weights, and the corner points
      #computed until the binary search. The corner points aren't used anywhere, but they're there.
      threshWts <- cbind(count, expRet, meanLambda, expVol, t(wts))
      colnames(turningPoint) <- colnames(threshWts) <- c("CP", "Exp. Ret.", "Lambda", "Exp. Vol.", colnames(covMat))
      turningPoints[[count]] <- turningPoint
      turningPoints <- do.call(rbind, turningPoints)
      return(list(turningPoints, threshWts))
    }
    
    #this is only run for the corner points during which binary search doesn't take place
    #change status of security that has new lambda
    if(maxInRatio > maxUpRatio) {
      inStatus[inRatio == maxInRatio] <- 1 - inStatus[inRatio == maxInRatio]
      upStatus[inRatio == maxInRatio] <- 0
    } else {
      upStatus[upRatio == maxUpRatio] <- 1 - upStatus[upRatio == maxUpRatio]
      inStatus[upRatio == maxUpRatio] <- 0
    }
    outStatus <- 1 - inStatus - upStatus
  }
  
  #we only get here if the volatility threshold isn't reached
  #can actually happen if set sufficiently low
  turningPoints <- do.call(rbind, turningPoints)
  
  threshWts <- tail(turningPoints, 1)
  
  return(list(turningPoints, threshWts))
}

Essentially, the algorithm can be divided into three parts:

The first part is the initialization, which does the following:

It creates three status vectors: in, up, and out. The up vector denotes which securities are at their weight-constraint cap, the in vector denotes securities that are invested but not at their cap, and the out vector denotes securities that receive no weight on that iteration of the algorithm.

The rest of the algorithm essentially does the following:

It takes a gradient descent approach by changing the status of the security that minimizes lambda, which by extension minimizes the volatility at the local point. As long as lambda is greater than zero, the algorithm continues to iterate. Letting the algorithm run until convergence effectively provides the volatility-minimization portfolio on the efficient frontier.

However, one change that Dr. Keller and I made to it is the addition of volatility targeting, allowing the algorithm to stop between iterations. As the SSRN paper shows, a higher volatility threshold will, over the long run (the *VERY* long run), deliver higher returns.

In any case, the algorithm takes into account several main arguments:

A return forecast, a covariance matrix, and weight limits, which can be either a single number that results in a uniform weight limit, or a vector of per-security weight limits. Another argument is scale, which is 252 for daily data, 12 for monthly, and so on. Lastly, there is the volatility threshold, which allows the user to control how aggressive or conservative the strategy is.

In any case, to demonstrate this function, let's run a backtest. The idea in this case comes from a recent SeekingAlpha article by Frank Grossmann, in which he obtained a 20% CAGR, but with a 36% max drawdown.

So here’s the backtest:

symbols <- c("AFK", "ASHR", "ECH", "EGPT",
             "EIDO", "EIRL", "EIS", "ENZL",
             "EPHE", "EPI", "EPOL", "EPU",
             "EWA", "EWC", "EWD", "EWG",
             "EWH", "EWI", "EWJ", "EWK",
             "EWL", "EWM", "EWN", "EWO",
             "EWP", "EWQ", "EWS", "EWT",
             "EWU", "EWW", "EWY", "EWZ",
             "EZA", "FM", "FRN", "FXI",
             "GAF", "GULF", "GREK", "GXG",
             "IDX", "MCHI", "MES", "NORW",
             "QQQ", "RSX", "THD", "TUR",
             "VNM", "TLT"
)

getSymbols(symbols, from = "2003-01-01")

prices <- list()
entryRets <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))

returns <- Return.calculate(prices)
returns <- returns[-1,]

sumIsNa <- function(col) {
  return(sum(is.na(col)))
}

appendZeroes <- function(selected, originalSetNames) {
  zeroes <- rep(0, length(originalSetNames) - length(selected))
  names(zeroes) <- originalSetNames[!originalSetNames %in% names(selected)]
  all <- c(selected, zeroes)
  all <- all[originalSetNames]
  return(all)
}

computeStats <- function(rets) {
  stats <- rbind(table.AnnualizedReturns(rets), maxDrawdown(rets), CalmarRatio(rets))
  return(round(stats, 3))
}

CLAAbacktest <- function(returns, lookback = 3, volThresh = .1, assetCaps = .5, tltCap = 1,
                         returnWeights = FALSE, useTMF = FALSE) {
  if(useTMF) {
    returns$TLT <- returns$TLT * 3
  }
  ep <- endpoints(returns, on = "months")
  weights <- list()
  for(i in 2:(length(ep) - lookback)) {
    retSubset <- returns[(ep[i]+1):ep[i+lookback],]
    retNAs <- apply(retSubset, 2, sumIsNa)
    validRets <- retSubset[, retNAs==0]
    retForecast <- Return.cumulative(validRets)
    covRets <- cov(validRets)
    weightLims <- rep(assetCaps, ncol(covRets))
    weightLims[colnames(covRets)=="TLT"] <- tltCap
    weight <- CCLA(covMat = covRets, retForecast = retForecast, weightLimit = weightLims, volThresh = volThresh)
    weight <- weight[[2]][,5:ncol(weight[[2]])]
    weight <- appendZeroes(selected = weight, colnames(retSubset))
    weight <- xts(t(weight), order.by=last(index(validRets)))
    weights[[i]] <- weight
  }
  weights <- do.call(rbind, weights)
  stratRets <- Return.portfolio(R = returns, weights = weights)
  if(returnWeights) {
    return(list(weights, stratRets))
  }
  return(stratRets)
}

In essence, we take the returns over a specified monthly lookback period, specify a volatility threshold, specify the asset caps and the bond asset cap, and choose whether we wish to use TLT or TMF (a 3x leveraged variant, which for simplicity just multiplies all returns of TLT by 3). The output of the CCLA (Constrained Critical Line Algorithm) is a list that contains the corner points and the volatility-threshold corner point, whose row holds the corner point number, expected return, lambda, and expected volatility, followed by the weights. So, we want the fifth column onward of the second element of the list.
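
For those who want to inspect the optimizer directly, here is a usage sketch (my own illustration, assuming the returns object and the CCLA function defined above) on a single four-month window:

oneWindow <- returns["2013-01/2013-04"]
oneWindow <- oneWindow[, colSums(is.na(oneWindow)) == 0]  # drop not-yet-listed ETFs
res <- CCLA(covMat = cov(oneWindow),
            retForecast = Return.cumulative(oneWindow),
            weightLimit = .5, volThresh = .1)
head(res[[1]])                # corner points: CP, Exp. Ret., Lambda, Exp. Vol., weights
res[[2]][, 5:ncol(res[[2]])]  # the weights at the volatility-threshold point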

Here are some results:

config1 <- CLAAbacktest(returns = returns)
config2 <- CLAAbacktest(returns = returns, useTMF = TRUE)
config3 <- CLAAbacktest(returns = returns, lookback = 4)
config4 <- CLAAbacktest(returns = returns, lookback = 2, useTMF = TRUE)

comparison <- na.omit(cbind(config1, config2, config3, config4))
colnames(comparison) <- c("Default", "TMF instead of TLT", "Lookback 4", "Lookback 2 and TMF")
charts.PerformanceSummary(comparison)
computeStats(comparison)

With the following statistics:

> computeStats(comparison)
                          Default TMF instead of TLT Lookback 4 Lookback 2 and TMF
Annualized Return           0.137              0.146      0.133              0.138
Annualized Std Dev          0.126              0.146      0.125              0.150
Annualized Sharpe (Rf=0%)   1.081              1.000      1.064              0.919
Worst Drawdown              0.219              0.344      0.186              0.357
Calmar Ratio                0.625              0.424      0.714              0.386

The variants that use TMF instead of TLT suffer far worse drawdowns. Not much of a hedge, apparently.

Here’s the equity curve:

Taking the 4 month lookback configuration (strongest Calmar), we’ll play around with the volatility setting.

Here’s the backtest:

config5 <- CLAAbacktest(returns = returns, lookback = 4, volThresh = .15)
config6 <- CLAAbacktest(returns = returns, lookback = 4, volThresh = .2)

comparison2 <- na.omit(cbind(config3, config5, config6))
colnames(comparison2) <- c("Vol10", "Vol15", "Vol20")
charts.PerformanceSummary(comparison2)
computeStats(comparison2)

With the results:

> computeStats(comparison2)
                          Vol10 Vol15 Vol20
Annualized Return         0.133 0.153 0.180
Annualized Std Dev        0.125 0.173 0.204
Annualized Sharpe (Rf=0%) 1.064 0.886 0.882
Worst Drawdown            0.186 0.212 0.273
Calmar Ratio              0.714 0.721 0.661

In this case: more risk, more reward, but lower risk/reward ratios as you push the volatility threshold higher. So for once, the volatility puzzle doesn't rear its head, and higher risk indeed does translate to higher returns (at the cost of everything else, though).

Here’s the equity curve.

Lastly, let’s try toggling the asset cap limits with the vol threshold back at 10.

config7 <- CLAAbacktest(returns = returns, lookback = 4, assetCaps = .1)
config8 <- CLAAbacktest(returns = returns, lookback = 4, assetCaps = .25)
config9 <- CLAAbacktest(returns = returns, lookback = 4, assetCaps = 1/3)
config10 <- CLAAbacktest(returns = returns, lookback = 4, assetCaps = 1)

comparison3 <- na.omit(cbind(config7, config8, config9, config3, config10))
colnames(comparison3) <- c("Cap10", "Cap25", "Cap33", "Cap50", "Uncapped")
charts.PerformanceSummary(comparison3)
computeStats(comparison3)

With the resulting statistics:

> computeStats(comparison3)
                          Cap10 Cap25 Cap33 Cap50 Uncapped
Annualized Return         0.124 0.122 0.127 0.133    0.134
Annualized Std Dev        0.118 0.122 0.123 0.125    0.126
Annualized Sharpe (Rf=0%) 1.055 1.002 1.025 1.064    1.070
Worst Drawdown            0.161 0.185 0.186 0.186    0.186
Calmar Ratio              0.771 0.662 0.680 0.714    0.721

Essentially, in this case, there was very little actual change from simply tweaking weight limits. Here’s an equity curve:

To conclude, while this backtest didn't exactly achieve the same aggregate returns or Sharpe ratio that the SeekingAlpha article did, it did highlight a probable cause of that strategy's major drawdown, and it also demonstrated the levers available when applying the constrained critical line algorithm, the mechanics of which are detailed in the papers linked earlier.

Thanks for reading.

The JP Morgan SCTO strategy

This post goes over JP Morgan's SCTO strategy, a basic XL-sector/RWR rotation strategy with the risks and returns typically associated with a momentum equity strategy. It's nothing spectacular, but if a large bank markets it, it's worth looking at.

Recently, one of my readers, a managing director at a quantitative investment firm, sent me a request to write a rotation strategy based around the 9 sector spiders and RWR. The way it works (or at least, the way I interpreted it) is this:

Every month, compute the return (it isn't clear exactly how "the return" is defined) and rank the assets. Take the top 5 ranks, and weight them in inverse proportion to their 22-day volatility, normalized so the weights sum to 1. Zero out any that have negative returns. Lastly, check the predicted annualized volatility of the portfolio, and if it's greater than 20%, scale the allocation down to bring it back to 20%. The cash asset, SHY, receives any allocation left over from setting securities to zero.

For the reference I used, here’s the investment case document from JP Morgan itself.

Here’s my implementation:

Step 1) get the data, compute returns.

require(quantmod)
require(PerformanceAnalytics)
symbols <- c("XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY", "RWR", "SHY")
getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
returns <- na.omit(Return.calculate(prices))

Step 2) The function itself.

sctoStrat <- function(returns, cashAsset = "SHY", lookback = 4, annVolLimit = .2,
                      topN = 5, scale = 252) {
  ep <- endpoints(returns, on = "months")
  weights <- list()
  cashCol <- grep(cashAsset, colnames(returns))
  
  #remove cash from asset returns
  cashRets <- returns[, cashCol]
  assetRets <- returns[, -cashCol]
  for(i in 2:(length(ep) - lookback)) {
    retSubset <- assetRets[ep[i]:ep[i+lookback]]
    
    #forecast is the cumulative return of the lookback period
    forecast <- Return.cumulative(retSubset)
    
    #annualized (realized) volatility uses a 22-day lookback period
    annVol <- StdDev.annualized(tail(retSubset, 22))
    
    #rank the forecasts (the cumulative returns of the lookback)
    rankForecast <- rank(forecast) - ncol(assetRets) + topN
    
    #weight is inversely proportional to annualized vol
    weight <- 1/annVol
    
    #zero out anything not in the top N assets
    weight[rankForecast <= 0] <- 0
    
    #normalize and zero out anything with a negative return
    weight <- weight/sum(weight)
    weight[forecast < 0] <- 0
    
    #compute forecasted vol of portfolio
    forecastVol <- sqrt(as.numeric(t(weight)) %*% 
                          cov(retSubset) %*% 
                          as.numeric(weight)) * sqrt(scale)
    
    #if forecasted vol greater than vol limit, cut it down
    if(as.numeric(forecastVol) > annVolLimit) {
      weight <- weight * annVolLimit/as.numeric(forecastVol)
    }
    weights[[i]] <- xts(weight, order.by=index(tail(retSubset, 1)))
  }
  
  #replace cash back into returns
  returns <- cbind(assetRets, cashRets)
  weights <- do.call(rbind, weights)
  
  #cash weights are anything not in securities
  weights$CASH <- 1-rowSums(weights)
  
  #compute and return strategy returns
  stratRets <- Return.portfolio(R = returns, weights = weights)
  return(stratRets)      
}

In this case, I took a little bit of liberty with some specifics that the reference was short on. I used the full covariance matrix for forecasting the portfolio variance (I'm not sure whether JPM would ignore the covariances and use a weighted sum of individual volatilities instead; a sketch of that alternative reading follows below), and for returns, I used the four-month cumulative return. I've seen all sorts of permutations on how to compute returns, ranging from an average of 1-, 3-, 6-, and 12-month cumulative returns, to a single lookback period, to some two-period average, so I'm all ears if others have differing ideas, which is why I left it as a lookback parameter.
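
For completeness, here is a hedged sketch of that alternative reading (ignoring covariances and approximating portfolio volatility as a weighted sum of individual annualized vols). This is not what sctoStrat above does, and naiveForecastVol is a hypothetical helper of my own:

naiveForecastVol <- function(weight, retSubset, scale = 252) {
  # annualized individual vols over the same 22-day window used above
  indivVols <- apply(tail(retSubset, 22), 2, sd) * sqrt(scale)
  # weighted sum of vols, ignoring all correlation/covariance terms
  sum(as.numeric(weight) * indivVols)
}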

Step 3) Running the strategy.

scto4_20 <- sctoStrat(returns)
getSymbols("SPY", from = "1990-01-01")
spyRets <- Return.calculate(Ad(SPY))
comparison <- na.omit(cbind(scto4_20, spyRets))
colnames(comparison) <- c("strategy", "SPY")
charts.PerformanceSummary(comparison)
apply.yearly(comparison, Return.cumulative)
stats <- rbind(table.AnnualizedReturns(comparison),
               maxDrawdown(comparison),
               CalmarRatio(comparison),
               SortinoRatio(comparison)*sqrt(252))
round(stats, 3)

Here are the statistics:

                          strategy   SPY
Annualized Return            0.118 0.089
Annualized Std Dev           0.125 0.193
Annualized Sharpe (Rf=0%)    0.942 0.460
Worst Drawdown               0.165 0.552
Calmar Ratio                 0.714 0.161
Sortino Ratio (MAR = 0%)     1.347 0.763

               strategy         SPY
2002-12-31 -0.035499564 -0.05656974
2003-12-31  0.253224759  0.28181559
2004-12-31  0.129739794  0.10697941
2005-12-30  0.066215224  0.04828267
2006-12-29  0.167686936  0.15845242
2007-12-31  0.153890329  0.05146218
2008-12-31 -0.096736711 -0.36794994
2009-12-31  0.181759432  0.26351755
2010-12-31  0.099187188  0.15056146
2011-12-30  0.073734427  0.01894986
2012-12-31  0.067679129  0.15990336
2013-12-31  0.321039353  0.32307769
2014-12-31  0.126633020  0.13463790
2015-04-16  0.004972434  0.02806776

And the equity curve:

To me, it looks like a standard rotation strategy. Aims for the highest momentum securities, diversifies to try and control risk, hits a drawdown in the crisis, recovers, and slightly lags the bull run on SPY. Nothing out of the ordinary.

So, for those interested, here you go. I’m surprised that JP Morgan itself markets this sort of thing, considering that they probably employ top-notch quants that can easily come up with products and/or strategies that are far better.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The Logical Invest Enhanced Bond Rotation Strategy (And the Importance of Dividends)

This post will display my implementation of the Logical Invest Enhanced Bond Rotation strategy. It’s a strategy that does indeed work, but it is heavily dependent on reinvesting dividends: bonds pay coupons, and bond ETFs pass those coupons along as dividend distributions.

The strategy is fairly simple: using four separate fixed-income markets (long-term US government bonds, high-yield bonds, emerging-market sovereign debt, and convertible bonds), it aims to deliver a low-risk, high-Sharpe profile. Every month, it allocates to two of the four securities, in either a 60/40 split (in one direction or the other) or a 50/50 split. My implementation is similar to the one I used for the Logical Invest Universal Investment Strategy: maximize a modified Sharpe ratio in a walk-forward process.

Here’s the code:

LogicInvestEBR <- function(returns, lowerBound, upperBound, period, modSharpeF) {
  count <- 0
  configs <- list()
  instCombos <- combn(colnames(returns), m = 2)
  for(i in 1:ncol(instCombos)) {
    inst1 <- instCombos[1, i]
    inst2 <- instCombos[2, i]
    rets <- returns[,c(inst1, inst2)]
    weightSeq <- seq(lowerBound, upperBound, by = .1)
    for(j in 1:length(weightSeq)) {
      returnConfig <- Return.portfolio(R = rets, 
                      weights = c(weightSeq[j], 1-weightSeq[j]), 
                      rebalance_on="months")
      colnames(returnConfig) <- paste(inst1, weightSeq[j], 
                                inst2, 1-weightSeq[j], sep="_")
      count <- count + 1
      configs[[count]] <- returnConfig
    }
  }
  
  configs <- do.call(cbind, configs)
  cumRets <- cumprod(1+configs)
  
  #rolling annualized return and annualized standard deviation over the lookback
  rollAnnRets <- (cumRets/lag(cumRets, period))^(252/period) - 1
  rollingSD <- sapply(X = configs, runSD, n=period)*sqrt(252)
  
  #modified Sharpe: annualized return over annualized vol raised to the F exponent,
  #evaluated at month-end rebalance dates
  modSharpe <- rollAnnRets/(rollingSD ^ modSharpeF)
  monthlyModSharpe <- modSharpe[endpoints(modSharpe, on="months"),]
  
  findMax <- function(data) {
    return(data==max(data))
  }
  
  #allocate 100% to the configuration with the highest modified Sharpe each month;
  #any residual weight (e.g. during the initial calibration window) goes to a zero-return placeholder
  weights <- t(apply(monthlyModSharpe, 1, findMax))
  weights <- weights*1
  weights <- xts(weights, order.by=as.Date(rownames(weights)))
  weights[is.na(weights)] <- 0
  weights$zeroes <- 1-rowSums(weights)
  configCopy <- configs
  configCopy$zeroes <- 0
  
  stratRets <- Return.portfolio(R = configCopy, weights = weights)
  return(stratRets)  
}

The one thing different about this code is the way I initialize the return streams. It’s an ugly piece of work, but it takes every pairwise combination of instruments (4 choose 2, or six pairs) along with a sequence of weights stepping by 10% between the lower and upper bound (so if the lower bound is 40% and the upper bound is 60%, the three splits are 40/60, 50/50, and 60/40). In this case, that’s 6 * 3 = 18 configurations. Do note that this is not a framework that scales up well: with 20 instruments there would be 190 pairs, each with anywhere between 3 and 11 weight splits (if going from 0 to 100%). Obviously, not a pretty sight.
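As a quick sanity check on those counts (purely illustrative):

choose(4, 2) * length(seq(.4, .6, by = .1)) #6 pairs * 3 weight splits = 18 configurations
choose(20, 2) #190 pairs with 20 instruments, before even multiplying by the weight splits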

Beyond that, it’s the same refrain: bind the returns together, compute an n-day rolling annualized return (far faster my way than using the rollApply version of Return.annualized), and divide it by the n-day rolling annualized standard deviation raised to the modified Sharpe F exponent (F = 1 gives you the Sharpe ratio, F = 0 ranks on pure returns, and F greater than 1 puts more of the focus on risk). Take the configuration with the highest modified Sharpe, allocate to it, repeat.
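To restate that objective for a single return stream, here’s a minimal sketch of my own (the function name and defaults are just for illustration) that mirrors the relevant lines inside LogicInvestEBR; it needs xts and TTR loaded:

modSharpeSingle <- function(rets, period = 73, modSharpeF = 2) {
  #rolling annualized return over the lookback window
  cumRet <- cumprod(1 + rets)
  rollRet <- (cumRet/lag(cumRet, period))^(252/period) - 1
  #rolling annualized standard deviation over the same window
  rollSD <- runSD(rets, n = period) * sqrt(252)
  #modSharpeF = 0 ranks on raw returns, 1 is the ordinary Sharpe (Rf = 0), >1 penalizes vol more
  return(rollRet/(rollSD ^ modSharpeF))
}

The walk-forward step then simply allocates, at each month end, to whichever configuration has the highest value of this quantity.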

So, how does this perform? Here’s a test script, using the same 73-day lookback with a modified Sharpe F of 2 that I’ve used in the previous Logical Invest strategies.

#four ETFs plus four analogous mutual funds (the latter used for a longer history)
symbols <- c("TLT", "JNK", "PCY", "CWB", "VUSTX", "PRHYX", "RPIBX", "VCVSX")
suppressMessages(getSymbols(symbols, from="1995-01-01", src="yahoo"))

#close-only returns (no dividends) and adjusted returns (dividends reinvested)
etfClose <- Return.calculate(cbind(Cl(TLT), Cl(JNK), Cl(PCY), Cl(CWB)))
etfAdj <- Return.calculate(cbind(Ad(TLT), Ad(JNK), Ad(PCY), Ad(CWB)))
mfClose <- Return.calculate(cbind(Cl(VUSTX), Cl(PRHYX), Cl(RPIBX), Cl(VCVSX)))
mfAdj <- Return.calculate(cbind(Ad(VUSTX), Ad(PRHYX), Ad(RPIBX), Ad(VCVSX)))
colnames(etfClose) <- colnames(etfAdj) <- c("TLT", "JNK", "PCY", "CWB")
colnames(mfClose) <- colnames(mfAdj) <- c("VUSTX", "PRHYX", "RPIBX", "VCVSX")

#trim the ETFs to their common history (CWB, column 4, is the youngest) and drop the initial NA row for the funds
etfClose <- etfClose[!is.na(etfClose[,4]),]
etfAdj <- etfAdj[!is.na(etfAdj[,4]),]
mfClose <- mfClose[-1,]
mfAdj <- mfAdj[-1,]

etfAdjTest <- LogicInvestEBR(returns = etfAdj, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)

etfClTest <- LogicInvestEBR(returns = etfClose, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)

mfAdjTest <- LogicInvestEBR(returns = mfAdj, lowerBound = .4, upperBound = .6,
                            period = 73, modSharpeF = 2)

mfClTest <- LogicInvestEBR(returns = mfClose, lowerBound = .4, upperBound = .6,
                           period = 73, modSharpeF = 2)

fiveStats <- function(returns) {
  return(rbind(table.AnnualizedReturns(returns), 
               maxDrawdown(returns), CalmarRatio(returns)))
}

etfs <- cbind(etfAdjTest, etfClTest)
colnames(etfs) <- c("Adjusted ETFs", "Close ETFs")
charts.PerformanceSummary((etfs))

mutualFunds <- cbind(mfAdjTest, mfClTest)
colnames(mutualFunds) <- c("Adjusted MFs", "Close MFs")
charts.PerformanceSummary(mutualFunds)
chart.TimeSeries(log(cumprod(1+mutualFunds)), legend.loc="topleft")

fiveStats(etfs)
fiveStats(mutualFunds)

So, first, the results of the ETFs:

Equity curve:

Five statistics:

> fiveStats(etfs)
                          Adjusted ETFs Close ETFs
Annualized Return            0.12320000 0.08370000
Annualized Std Dev           0.06780000 0.06920000
Annualized Sharpe (Rf=0%)    1.81690000 1.20980000
Worst Drawdown               0.06913986 0.08038459
Calmar Ratio                 1.78158934 1.04078405

In other words, reinvesting dividends boosts the annualized return by roughly 50% (12.3% versus 8.4%).

Let’s look at the mutual funds. Note that these are for the sake of illustration only; you can’t trade in and out of mutual funds every month.

Equity curve:

Log scale:

Statistics:

                          Adjusted MFs Close MFs
Annualized Return           0.11450000 0.0284000
Annualized Std Dev          0.05700000 0.0627000
Annualized Sharpe (Rf=0%)   2.00900000 0.4532000
Worst Drawdown              0.09855271 0.2130904
Calmar Ratio                1.16217559 0.1332706

In this case, the difference is night and day, though how much of that is the data source is an open question: Yahoo isn’t the greatest when it comes to data, and I’m not sure how much quality deteriorates that far back. The takeaway, however, seems to be this: with bond strategies, dividends need to be dealt with, and when considering returns presented to you, keep in mind that adjusted returns assume the investor stays on top of dividend maintenance. Fail to reinvest the dividends in a timely fashion, and the gap can be quite large.

To put it into perspective, as I was writing this post, I wondered whether most of this gap was indeed due to dividends. Here’s a plot of the daily difference between adjusted and close ETF returns.

chart.TimeSeries(etfAdj - etfClose, legend.loc="topleft", date.format="%Y-%m",
                 main = "Return differences adjusted vs. close ETFs")

With the resulting image:

While there is some noise on the order of 10^-5 on most days, there are clear spikes in the return differences. Those are dividends, and their compounding makes a sizable difference. In one case for CWB, the spike is particularly striking (Dec. 29, 2014). In fact, here’s a quick analysis of the dividend effect.

dividends <- etfAdj - etfClose #adjusted minus close return = the dividend component
divReturns <- list()
for(i in 1:ncol(dividends)) {
  diffStream <- dividends[,i]
  divPayments <- diffStream[diffStream >= 1e-3] #keep only days where the gap exceeds 10 bps (dividend dates)
  divReturns[[i]] <- Return.annualized(divPayments) #annualized contribution of the dividends alone
}
divReturns <- do.call(cbind, divReturns)
divReturns

divReturns/Return.annualized(etfAdj)

And the result:

> divReturns
                         TLT        JNK        PCY        CWB
Annualized Return 0.03420959 0.08451723 0.05382363 0.05025999

> divReturns/Return.annualized(etfAdj)
                       TLT       JNK       PCY       CWB
Annualized Return 0.453966 0.6939243 0.5405922 0.3737499

In short, the effect of the dividends is massive. In the case of JNK, dividends comprise nearly 70% of the security’s annualized return!

Basically, I’d like to hammer the point home one last time: backtests using adjusted data assume instantaneous reinvestment of dividends. To achieve the optimistic returns seen in the backtests, those dividend payments must be reinvested as soon as they’re paid. This is the fine print on this strategy, and it’s a small but critical detail that the SeekingAlpha article doesn’t mention (seriously, do a Ctrl+F in your browser for the word “dividend”; it won’t come up in the article itself), so I wanted to make sure to add it.
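To make that concrete, here’s a rough back-of-the-envelope sketch (my own illustration, not part of the strategy code) of the gap between reinvesting dividends immediately and just letting them pile up as uninvested cash, for a single bond ETF. It assumes Yahoo data through quantmod, with getDividends supplying the per-share cash distributions (the same data-quality caveats apply):

tltOHLC <- getSymbols("TLT", from="2010-01-01", auto.assign=FALSE)
tltDivs <- getDividends("TLT", from="2010-01-01") #per-share cash dividends

#terminal wealth per $1 with each dividend reinvested on its ex-date (what adjusted prices assume)
reinvested <- prod(1 + na.omit(Return.calculate(Ad(tltOHLC))))

#terminal wealth per $1 if the shares are simply held and the dividends sit as cash
startPx <- as.numeric(first(Cl(tltOHLC)))
notReinvested <- (as.numeric(last(Cl(tltOHLC))) + sum(tltDivs))/startPx

reinvested
notReinvested

The difference between those two numbers is the compounding that an adjusted-price backtest quietly assumes you capture, and it grows with the holding period and the payout rate.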

One last thing: gaudy numbers when using monthly returns!

> fiveStats(apply.monthly(etfs, Return.cumulative))
                          Adjusted ETFs Close ETFs
Annualized Return            0.12150000   0.082500
Annualized Std Dev           0.06490000   0.067000
Annualized Sharpe (Rf=0%)    1.87170000   1.232100
Worst Drawdown               0.03671871   0.049627
Calmar Ratio                 3.30769620   1.662642

Look! A Calmar Ratio of 3.3, and a Sharpe near 2!*

*: Must manage dividends. Statistics are computed on monthly returns, which smooth over intra-month volatility and drawdowns.

Okay, in all fairness, this is a pretty solid strategy once one commits to managing the dividends. I just feel that, given how large the dividend effect is here, the issue should have been made front and center rather than simply swept under the “we use adjusted returns” rug.

In conclusion, I will more or less confirm the strategy’s risk/reward performance (unlike with some other SeekingAlpha strategies I’ve backtested), and in all honesty, I find it really impressive, but it comes with a caveat like the rest of them. However, the caveat of “be detail-oriented/meticulous/paranoid and reinvest those dividends!” is, in my opinion, a lot easier to live with than the 30%+ drawdowns found lurking in other SeekingAlpha strategies. So for those who can stay on top of those dividends (whether manually or with machine execution), here you go. I’m basically confirming the performance of Logical Invest’s strategy, just belaboring one important detail.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.