Hypothesis-Driven Development Part IV: Testing The Barroso/Santa Clara Rule

This post will deal with applying the constant-volatility targeting procedure described by Barroso and Santa Clara in their paper “Momentum Has Its Moments”.

The last two posts dealt with evaluating the intelligence of the signal-generation process. The strategy showed itself to be only marginally better than randomly tossing darts at a dartboard, and I was ready to reject it and move on to topics that are less of a toy example than a little rotation strategy. However, Brian Peterson told me to see this strategy through to the end, including testing out rule processes.

First off, to make a distinction: rules are not signals. Rules quantify what exactly to do, assuming one acts upon a signal. Position sizing, stop-loss processes, and so on all fall under rule processes.

This rule deals with using leverage in order to target a constant volatility.
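
Concretely, the rule scales exposure each month by the ratio of the target volatility to an estimate of recent realized volatility. Here’s a minimal sketch of just that leverage computation, on synthetic daily returns so it stands alone (the 126-day window matches the default in the backtest function below):

require(TTR)
require(xts)

#a minimal sketch of the rule itself, not the full backtest below;
#synthetic daily returns keep the snippet self-contained
set.seed(42)
syntheticRets <- xts(rnorm(504, 0, .01), order.by = Sys.Date() - 504:1)

volTarget <- .1 #10% annualized volatility target
realizedVol <- runSD(syntheticRets, n = 126) * sqrt(252) #rolling annualized vol
leverage <- volTarget/realizedVol #greater than 1 levers up, less than 1 de-levers
tail(leverage, 3)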

So here’s the idea: in their paper, Pedro Barroso and Pedro Santa Clara took the Fama-French momentum data and found that while the classic WML strategy certainly outperforms the market, it has a critical downside, namely momentum crashes, in which being on the wrong side of a momentum trade exposes a portfolio to catastrophically large drawdowns. While this strategy is long-only (and with fixed-income ETFs, no less), and so would seem more robust against such massive drawdowns, there’s no reason to leave money on the table. To note, Barroso and Santa Clara aren’t the only ones to have covered this phenomenon; so has Tony Cooper in his paper “Alpha Generation and Risk Smoothing Using Volatility of Volatility”.

In any case, the setup here is simple: take the previous portfolios, consisting of 1-12 month momentum formation periods, and every month, compute the annualized standard deviation using a 21- to 252-day volatility lookback (in steps of 21 days), for a total of 12 x 12 = 144 trials. (So this will put the total number of trials run so far at 24 + 144 = 168…bonus points if you know where this tidbit is going to go.)

Here’s the code (again, following on from the last post, which follows from the second post, which follows from the first post in this series).

require(reshape2)
require(ggplot2)

ruleBacktest <- function(returns, nMonths, dailyReturns,
                         nSD=126, volTarget = .1) {
  nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
  nMonthAverage <- xts(nMonthAverage, order.by = index(returns))
  nMonthAvgRank <- t(apply(nMonthAverage, 1, rank))
  nMonthAvgRank <- xts(nMonthAvgRank, order.by=index(returns))
  selection <- (nMonthAvgRank==5) * 1 #select highest average performance (rank 5 of 5 assets)
  dailyBacktest <- Return.portfolio(R = dailyReturns, weights = selection)
  #daily leverage needed to hit the annualized vol target
  constantVol <- volTarget/(runSD(dailyBacktest, n = nSD) * sqrt(252))
  monthlyLeverage <- na.omit(constantVol[endpoints(constantVol, on = "months")])
  #weight on the strategy is the leverage; the remainder sits in zero-return cash
  wts <- cbind(monthlyLeverage, 1-monthlyLeverage)
  constantVolComponents <- cbind(dailyBacktest, 0) #second column is the cash asset
  out <- Return.portfolio(R = constantVolComponents, weights = wts)
  out <- apply.monthly(out, Return.cumulative)
  return(out)
}

t1 <- Sys.time()
allPermutations <- list()
for(i in seq(21, 252, by = 21)) {
  monthVariants <- list()
  for(j in 1:12) {
    #'sample' is the daily returns object carried over from the earlier posts
    trial <- ruleBacktest(returns = monthRets, nMonths = j, dailyReturns = sample, nSD = i)
    sharpe <- table.AnnualizedReturns(trial)[3,]
    monthVariants[[j]] <- sharpe
  }
  allPermutations[[length(allPermutations) + 1]] <- do.call(c, monthVariants)
}
allPermutations <- do.call(rbind, allPermutations)
t2 <- Sys.time()
print(t2-t1)

rownames(allPermutations) <- seq(21, 252, by = 21)
colnames(allPermutations) <- 1:12

baselineSharpes <- as.numeric(table.AnnualizedReturns(algoPortfolios)[3,])

#subtract each momentum formation's baseline Sharpe from its column
diffs <- sweep(allPermutations, 2, baselineSharpes, "-")
meltedDiffs <- melt(diffs)

colnames(meltedDiffs) <- c("volFormation", "momentumFormation", "sharpeDifference")
ggplot(meltedDiffs, aes(x = momentumFormation, y = volFormation, fill=sharpeDifference)) + 
  geom_tile()+scale_fill_gradient2(high="green", mid="yellow", low="red")

meltedSharpes <- melt(allPermutations)
colnames(meltedSharpes) <- c("volFormation", "momentumFormation", "Sharpe")
ggplot(meltedSharpes, aes(x = momentumFormation, y = volFormation, fill=Sharpe)) + 
  geom_tile()+scale_fill_gradient2(high="green", mid="yellow", low="red", midpoint = mean(allPermutations))

Again, there’s no parallel code since this is a relatively small example, and I don’t know which OS any given instance of R runs on (Windows and Linux have different parallelization infrastructure).
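
That said, for readers who want to parallelize grid searches like this anyway, a PSOCK cluster from base R’s parallel package works on both Windows and Linux. Here’s an untested sketch of my own that mirrors the serial loop above; it assumes the same objects (ruleBacktest, monthRets, and the daily returns object sample) are in the workspace:

require(parallel)

#hedged sketch, not part of the original workflow: PSOCK clusters work on
#both Windows and Linux, unlike fork-based approaches, which are Unix-only
cl <- makeCluster(detectCores() - 1)
clusterEvalQ(cl, {library(PerformanceAnalytics); library(TTR); library(xts)})
clusterExport(cl, c("ruleBacktest", "monthRets", "sample"))
parPermutations <- parLapply(cl, seq(21, 252, by = 21), function(i) {
  sapply(1:12, function(j) {
    trial <- ruleBacktest(returns = monthRets, nMonths = j,
                          dailyReturns = sample, nSD = i)
    as.numeric(table.AnnualizedReturns(trial)[3,])
  })
})
parPermutations <- do.call(rbind, parPermutations) #rows: vol lookbacks, cols: formations
stopCluster(cl)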

So the idea here is to simply compare the Sharpe ratios with different volatility lookback periods against the baseline signal-process-only portfolios. The reason I use Sharpe ratios, and not, say, CAGR, volatility, or drawdown, is that Sharpe ratios are scale-invariant. In this case, I’m targeting an annualized volatility of 10%, but with a higher targeted volatility, one can obtain higher returns at the cost of higher drawdowns, or be more conservative. The Sharpe ratio, however, should stay relatively consistent within reasonable bounds.
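
To see that scale-invariance numerically: leverage multiplies both the mean and the standard deviation of returns by the same constant, which cancels in the ratio (ignoring borrowing costs). A quick illustration of my own on synthetic data:

#levering a return stream scales mean and standard deviation equally,
#so the Sharpe ratio is unchanged
set.seed(1)
dailyRets <- rnorm(252, mean = 5e-4, sd = .01) #synthetic daily returns
sharpe <- function(r) mean(r)/sd(r) * sqrt(252)
sharpe(dailyRets) #baseline
sharpe(2 * dailyRets) #2x leverage, identical Sharpe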

So here are the results:

Sharpe improvements:

In this case, the diagram shows that on the whole, once the volatility estimation period becomes long enough, the results are generally positive. If one uses a very short estimation period, the volatility estimate depends more on the last month’s choice of instrument than on the longer-term volatility of the system itself, which can create poor forecasts. Also of note is that the one-month momentum formation period doesn’t seem amenable to the constant-volatility targeting scheme (there’s basically little improvement, if not a slight drag on risk-adjusted performance). This is interesting in that the baseline Sharpe ratio for the one-month formation is among the best of the baseline performances. On the whole, however, volatility targeting does improve the risk-adjusted performance of the system, even one as simple as throwing all your money into one asset every month based on a single momentum signal.

Absolute Sharpe ratios:

In this case, the absolute Sharpe ratios look fairly solid for such a simple system. The 3-, 7-, and 9-month variants are slightly lower, but once the volatility estimation period reaches between 126 and 252 days, the results are fairly robust. The Barroso and Santa Clara paper uses 126 days to estimate annualized volatility, which looks solid across the entire momentum formation period spectrum.

In any case, it seems the verdict is that a constant volatility target improves results.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.

Hypothesis-Driven Development Part III: Monte Carlo In Asset Allocation Tests

This post will show how to use Monte Carlo to test for signal intelligence.

Although I had rejected this strategy in the last post, I was asked to do a Monte Carlo analysis of a thousand random portfolios to see how the various signal processes performed against that distribution. The process is quite simple: since I’m selecting one asset each month to hold, I generate a random integer between 1 and the number of assets (5 in this case) and hold that asset for the month. Repeat for every month in the sample, repeat the whole exercise a thousand times, and see where the signal processes fall across the resulting distribution.

I didn’t use parallel processing here since Windows- and Linux-based R have different parallel libraries, and in the interest of having the code work across all machines, I left it out.

Here’s the code:

randomAssetPortfolio <- function(returns) {
  numAssets <- ncol(returns)
  numPeriods <- nrow(returns)
  #draw one asset uniformly at random for each period
  assetSequence <- sample.int(numAssets, numPeriods, replace=TRUE)
  wts <- matrix(nrow = numPeriods, ncol=numAssets, 0)
  wts <- xts(wts, order.by=index(returns))
  for(i in 1:nrow(wts)) {
    wts[i,assetSequence[i]] <- 1 #100% weight on the randomly drawn asset
  }
  randomPortfolio <- Return.portfolio(R = returns, weights = wts)
  return(randomPortfolio)
}

t1 <- Sys.time()
randomPortfolios <- list()
set.seed(123)
for(i in 1:1000) {
  randomPortfolios[[i]] <- randomAssetPortfolio(monthRets)
}
randomPortfolios <- do.call(cbind, randomPortfolios)
t2 <- Sys.time()
print(t2-t1)

algoPortfolios <- sigBoxplots[,1:12]
randomStats <- table.AnnualizedReturns(randomPortfolios)
algoStats <- table.AnnualizedReturns(algoPortfolios)

par(mfrow=c(3,1))
hist(as.numeric(randomStats[1,]), breaks = 20, main = 'histogram of monte carlo annualized returns',
     xlab='annualized returns')
abline(v=as.numeric(algoStats[1,]), col='red')
hist(as.numeric(randomStats[2,]), breaks = 20, main = 'histogram of monte carlo volatilities',
     xlab='annualized vol')
abline(v=as.numeric(algoStats[2,]), col='red')
hist(as.numeric(randomStats[3,]), breaks = 20, main = 'histogram of monte carlo Sharpes',
     xlab='Sharpe ratio')
abline(v=as.numeric(algoStats[3,]), col='red')

allStats <- cbind(randomStats, algoStats)
aggregateMean <- apply(allStats, 1, mean)
aggregateDevs <- apply(allStats, 1, sd)

#one-sided p-values via a normal approximation to the Monte Carlo distribution
algoPs <- 1-pnorm(as.matrix((algoStats - aggregateMean)/aggregateDevs))

plot(as.numeric(algoPs[1,])~c(1:12), main='Return p-values',
     xlab='Formation period', ylab='P-value')
abline(h=0.05, col='red')
abline(h=.1, col='green')

#for volatility, lower is better, so flip the p-value
plot(1-as.numeric(algoPs[2,])~c(1:12), ylim=c(0, .5), main='Annualized vol p-values',
     xlab='Formation period', ylab='P-value')
abline(h=0.05, col='red')
abline(h=.1, col='green')

plot(as.numeric(algoPs[3,])~c(1:12), main='Sharpe p-values',
     xlab='Formation period', ylab='P-value')
abline(h=0.05, col='red')
abline(h=.1, col='green')

And here are the results:


In short, compared to monkeys throwing darts (to borrow some phrasing from the Price Action Lab blog), these signal processes are only marginally intelligent, if at all, depending on the variation one chooses. Still, I was advised to see this process through to the end and to evaluate rules, so next time, I’ll evaluate one easy-to-implement rule.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.

Hypothesis-Driven Development Part II

This post will evaluate signals based on the rank regression hypotheses covered in the last post.

The last time around, we saw that rank regression had a very statistically significant result. Therefore, the next step is to evaluate the basic signals, that is, whether there is statistical significance in acting on the signal itself. Since the strategy from SeekingAlpha simply selects the top-ranked ETF every month, this is a very easy signal to evaluate.

Simply put: using the 1-24 month formation periods for the cumulative sum of monthly returns, select the highest-ranked ETF and hold it for one month.

Here’s the code to evaluate the signal (continued from the last post), given the returns, a month parameter, and an EW portfolio to compare with the signal.


signalBacktest <- function(returns, nMonths, ewPortfolio) {
  nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
  nMonthAverage <- xts(nMonthAverage, order.by = index(returns))
  nMonthAvgRank <- t(apply(nMonthAverage, 1, rank))
  nMonthAvgRank <- xts(nMonthAvgRank, order.by=index(returns))
  selection <- (nMonthAvgRank==5) * 1 #select highest average performance (rank 5 of 5 assets)
  sigTest <- Return.portfolio(R = returns, weights = selection)
  difference <- sigTest - ewPortfolio
  diffZscore <- mean(difference)/sd(difference)
  sigZscore <- mean(sigTest)/sd(sigTest)
  return(list(sigTest, difference, mean(sigTest), sigZscore, mean(difference), diffZscore))
}

ewPortfolio <- Return.portfolio(monthRets, rebalance_on="months")

sigBoxplots <- list()
excessBoxplots <- list()
sigMeans <- list()
sigZscores <- list()
diffMeans <- list()
diffZscores <- list()
for(i in 1:24) {
  tmp <- signalBacktest(monthRets, nMonths = i, ewPortfolio)
  sigBoxplots[[i]] <- tmp[[1]]
  excessBoxplots[[i]] <- tmp[[2]]
  sigMeans[[i]] <- tmp[[3]]
  sigZscores[[i]] <- tmp[[4]]
  diffMeans[[i]] <- tmp[[5]]
  diffZscores[[i]] <- tmp[[6]]
}

sigBoxplots <- do.call(cbind, sigBoxplots)
excessBoxplots <- do.call(cbind, excessBoxplots)
sigMeans <- do.call(c, sigMeans)
sigZscores <- do.call(c, sigZscores)
diffMeans <- do.call(c, diffMeans)
diffZscores <- do.call(c, diffZscores)

par(mfrow=c(2,1))
plot(as.numeric(sigMeans)*100, type='h', main = 'signal means', 
     ylab = 'percent per month', xlab='formation period')
plot(as.numeric(sigZscores), type='h', main = 'signal Z scores', 
     ylab='Z scores', xlab='formation period')

plot(as.numeric(diffMeans)*100, type='h', main = 'mean difference between signal and EW',
     ylab = 'percent per month', xlab='formation period')
plot(as.numeric(diffZscores), type='h', main = 'difference Z scores',
     ylab = 'Z score', xlab='formation period')

boxplot(as.matrix(sigBoxplots), main = 'signal boxplots', xlab='formation period')
abline(h=0, col='red')
points(sigMeans, col='blue')

boxplot(as.matrix(sigBoxplots[,1:12]), main = 'signal boxplots 1 through 12 month formations', 
        xlab='formation period')
abline(h=0, col='red')
points(sigMeans[1:12], col='blue')

boxplot(as.matrix(excessBoxplots), main = 'difference (signal - EW) boxplots', 
        xlab='formation period')
abline(h=0, col='red')
points(diffMeans, col='blue') #difference means, to match the difference boxplots

boxplot(as.matrix(excessBoxplots[,1:12]), main = 'difference (signal - EW) boxplots 1 through 12 month formations', 
        xlab='formation period')
abline(h=0, col='red')
points(diffMeans[1:12], col='blue')

Okay, so what’s going on here: I compare the signal against the equal-weight portfolio, and take means and z-scores both of the signal returns themselves and of the difference between the signal process and the equal-weight portfolio. I plot these values, along with boxplots of the distributions of both the signal process and that difference.

Here are the results:




To note, the percentages are already multiplied by 100, so in the best cases, the rank strategy outperforms the equal-weight strategy by about 30 basis points per month. However, these results are…not even in the same parking lot as statistical significance, let alone the same ballpark.
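
As a rough back-of-envelope check of my own (with hypothetical numbers) on what significance would require: a monthly z-score translates to a t-statistic of roughly z times the square root of the number of months, so a small monthly z-score needs a very long history to clear conventional thresholds.

#back-of-envelope, not from the original analysis: with N monthly
#observations, t is roughly z * sqrt(N), where z = mean(diff)/sd(diff)
N <- 120 #hypothetical: ten years of monthly data
z <- .05 #a hypothetical monthly z-score of the excess return
z * sqrt(N) #roughly 0.55, nowhere near the ~2 needed for 5% significance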

Now, at this point, in case some people haven’t yet read Brian Peterson’s paper on strategy development: the point of hypothesis-driven development is to *reject* hypothetical strategies as early as possible, before looking at any sort of equity curve and trying to explain away periods of underperformance. So, at this point, I would like to reject this entire strategy, because there’s no statistical evidence to continue. Furthermore, because August 2015 was a rather interesting month, especially in terms of volatility dispersion, I want to return to volatility trading strategies, now backed by hypothesis-driven development.

If anyone wants to see me continue to rule testing with this process, let me know. If not, I have more ideas on the way.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.

I’m Back, A New Harry Long Strategy, And Plans For Hypothesis-Driven Development

I’m back. For anyone who wants to know “what happened at Graham”: I felt there was very little scaffolding/on-boarding, and Graham’s expectations/requirements changed, though I have a reference from my direct boss, an accomplished quantitative director. In any case, moving on.

Harry Long recently came out with a new strategy posted on SeekingAlpha, and I’d like to test it for robustness to see if it has merit.

Here’s the link to the post.

So, the rules are fairly simple:

ZIV 15%
SPLV 50%
TMF 10%
UUP 20%
VXX 5%

TMF can be approximated with 3x-leveraged TLT. SPLV is also highly similar to XLP, aka the consumer staples SPY sector. Here’s the equity curve comparison to prove it.
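
For anyone who wants to reproduce that comparison, here’s a quick sketch of my own using Yahoo adjusted closes (note that SPLV only trades from May 2011, so the overlapping window is short):

require(quantmod)
require(PerformanceAnalytics)

#a quick sketch, not from the original analysis, to eyeball the SPLV/XLP overlap
getSymbols(c('SPLV', 'XLP'), from = '1900-01-01')
proxyCompare <- na.omit(cbind(Return.calculate(Ad(SPLV)), Return.calculate(Ad(XLP))))
colnames(proxyCompare) <- c('SPLV', 'XLP')
charts.PerformanceSummary(proxyCompare, main = 'SPLV vs. XLP')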

So, let’s test this thing.

require(PerformanceAnalytics)
require(downloader)
require(quantmod)

getSymbols('XLP', from = '1900-01-01')
getSymbols('TLT', from = '1900-01-01')
getSymbols('UUP', from = '1900-01-01')
download('https://www.dropbox.com/s/jk3ortdyru4sg4n/ZIVlong.TXT', destfile='ZIVlong.csv')
download('https://dl.dropboxusercontent.com/s/950x55x7jtm9x2q/VXXlong.TXT', destfile = 'VXXlong.csv')
ZIV <- xts(read.zoo('ZIVlong.csv', header=TRUE, sep=','))
VXX <- xts(read.zoo('VXXlong.csv', header=TRUE, sep=','))

symbols <- na.omit(cbind(Return.calculate(Cl(ZIV)), Return.calculate(Ad(XLP)), Return.calculate(Ad(TLT))*3,
                         Return.calculate(Ad(UUP)), Return.calculate(Cl(VXX))))
strat <- Return.portfolio(symbols, weights = c(.15, .5, .1, .2, .05), rebalance_on='years')

Here are the results:

compare <- na.omit(cbind(strat, Return.calculate(Ad(XLP))))
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))

Equity curve (compared against buy and hold XLP)

Statistics:

                          portfolio.returns XLP.Adjusted
Annualized Return                 0.0864000    0.0969000
Annualized Std Dev                0.0804000    0.1442000
Annualized Sharpe (Rf=0%)         1.0747000    0.6720000
Worst Drawdown                    0.1349957    0.3238755
Calmar Ratio                      0.6397665    0.2993100

In short, this strategy definitely offers a lot more bang for your risk: lower drawdown, lower volatility, and thus noticeably better risk/reward tradeoffs. However, its raw returns don’t beat those of instruments with twice its volatility.

Here are the statistics from 2010 onwards.

charts.PerformanceSummary(compare['2010::'])
rbind(table.AnnualizedReturns(compare['2010::']), maxDrawdown(compare['2010::']), CalmarRatio(compare['2010::']))

                          portfolio.returns XLP.Adjusted
Annualized Return                0.12050000    0.1325000
Annualized Std Dev               0.07340000    0.1172000
Annualized Sharpe (Rf=0%)        1.64210000    1.1308000
Worst Drawdown                   0.07382878    0.1194072
Calmar Ratio                     1.63192211    1.1094371

Equity curve:

Definitely a smoother ride, and for bonus points, it seems some of the hedges helped with the recent market dip. Again, while aggregate returns aren’t as high as simply buying and holding XLP, the Sharpe and Calmar ratios do better on the whole.

Now, let’s do some robustness analysis. While I do not know how Harry Long arrived at the individual asset weights he did, one thing that can be tested much more easily is the effect of the rebalancing date. As the strategy rebalances just once a year, offsetting that date should reveal how sensitive the performance is to it.

yearlyEp <- endpoints(symbols, on = 'years')
rebalanceDays <- list()
for(i in 0:251) {
  offset <- yearlyEp+i
  offset[offset > nrow(symbols)] <- nrow(symbols)
  offset[offset==0] <- 1
  wts <- matrix(rep(c(.15, .5, .1, .2, .05), length(yearlyEp)), ncol=5, byrow=TRUE)
  wts <- xts(wts, order.by=as.Date(index(symbols)[offset]))
  offsetRets <- Return.portfolio(R = symbols, weights = wts)
  colnames(offsetRets) <- paste0("offset", i)
  rebalanceDays[[i+1]] <- offsetRets
}
rebalanceDays <- do.call(cbind, rebalanceDays)
rebalanceDays <- na.omit(rebalanceDays)
stats <- rbind(table.AnnualizedReturns(rebalanceDays), maxDrawdown(rebalanceDays))
stats[5,] <- stats[1,]/stats[4,]

Here are the plots of return, Sharpe, Calmar, and drawdown vs. offset.

plot(as.numeric(stats[1,])~c(0:251), type='l', ylab='CAGR', xlab='offset', main='CAGR vs. offset')
plot(as.numeric(stats[3,])~c(0:251), type='l', ylab='Sharpe Ratio', xlab='offset', main='Sharpe vs. offset')
plot(as.numeric(stats[5,])~c(0:251), type='l', ylab='Calmar Ratio', xlab='offset', main='Calmar vs. offset')
plot(as.numeric(stats[4,])~c(0:251), type='l', ylab='Drawdown', xlab='offset', main='Drawdown vs. offset')




In short, this strategy seems to be somewhat dependent upon the rebalancing date, which the original article leaves unsaid. Here are the quantiles of the five statistics across the given offsets:

rownames(stats)[5] <- "Calmar"
apply(stats, 1, quantile)
     Annualized Return Annualized Std Dev Annualized Sharpe (Rf=0%) Worst Drawdown    Calmar
0%            0.072500             0.0802                  0.881000      0.1201198 0.4207922
25%           0.081925             0.0827                  0.987625      0.1444921 0.4755600
50%           0.087650             0.0837                  1.037250      0.1559238 0.5364758
75%           0.092000             0.0843                  1.090900      0.1744123 0.6230789
100%          0.105100             0.0867                  1.265900      0.1922916 0.8316698

While the standard deviation seems fairly robust, the Sharpe can decrease by about 33%, the Calmar can get cut in half, and the CAGR can also vary fairly substantially. That said, even using conservative estimates, the Sharpe ratio is fairly solid, and the Calmar outperforms that of XLP in any given variation, but nevertheless, performance can vary.

Is this strategy investible in its current state? Maybe, depending on your standards for rigor. Up to this point, rebalancing sometime between December and early January seems to substantially outperform other rebalance dates. Maybe a December/January anomaly effect exists in the literature to justify this; however, the article makes no mention of it. Furthermore, the article doesn’t explain how it arrived at the weights it did.

Which brings me to my next topic: a change in direction for this blog going forward, namely, hypothesis-driven trading system development. While this process doesn’t require complicated math, it does require statistical justification for the multiple building blocks of a strategy, and a change in mindset, which a great deal of publicly available trading system ideas either gloss over or omit entirely. As one of my most important readers praised this blog for “showing how the sausage is made”, this seems to be the next logical step in this progression.

Here’s the reasoning as to why.

It seems that when presenting trading ideas, there are two schools of thought: those that go off of intuition, build a backtest based on that intuition, and see if it generally lines up with some intuitively expected result; and those that believe in a much more systematic, hypothesis-driven, step-by-step framework, justifying as many decisions as possible (ideally every decision) in creating a trading system. The advantage of the former is that it allows for displaying many more ideas in a much shorter timeframe. However, it has several major drawbacks. First, it hides many concerns about potential overfitting: if all one sees is a final equity curve, nothing is said about the sensitivity of that equity curve to the various input parameters, or about what other ideas were thrown out along the way. Second, without a foundation of strong hypotheses about the economic phenomena exploited, there is no reason to believe that any strategy one comes across won’t simply fail once it’s put into live trading.

And third of all, which I find most important, is that such activities ultimately don’t sufficiently impress the industry’s best practitioners. For instance, Tony Cooper took issue with my replication of Trading The Odds’ volatility trading strategy, namely how data-mined it was (according to him in the comments section), and his objections seem to have been completely borne out by its out-of-sample performance.

So, for those looking for plug-and-crank system ideas, that may still happen every so often if someone sends me something particularly interesting, but there’s going to be some all-new content on this blog.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.

The JP Morgan SCTO strategy

This post goes over JP Morgan’s SCTO strategy, a basic XL-sector/RWR rotation strategy with the risks and returns typical of a momentum equity strategy. It’s nothing spectacular, but if a large bank markets it, it’s worth looking at.

Recently, one of my readers, a managing director at a quantitative investment firm, sent me a request to write a rotation strategy based around the 9 sector spiders and RWR. The way it works (or at least, the way I interpreted it) is this:

Every month, compute the return (not sure how “the return” is defined) and rank the securities. Take the top 5 ranks, and weight them, in normalized fashion, inversely to their 22-day volatility. Zero out any that have negative returns. Lastly, check the predicted annualized vol of the portfolio, and if it’s greater than 20%, scale it back down to 20%. The cash asset, SHY, receives any allocation freed up by zeroing out securities.

For the reference I used, here’s the investment case document from JP Morgan itself.

Here’s my implementation:

Step 1) get the data, compute returns.

require(quantmod)
require(PerformanceAnalytics)
symbols <- c("XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY", "RWR", "SHY")
getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
returns <- na.omit(Return.calculate(prices))

Step 2) The function itself.

sctoStrat <- function(returns, cashAsset = "SHY", lookback = 4, annVolLimit = .2,
                      topN = 5, scale = 252) {
  ep <- endpoints(returns, on = "months")
  weights <- list()
  cashCol <- grep(cashAsset, colnames(returns))
  
  #remove cash from asset returns
  cashRets <- returns[, cashCol]
  assetRets <- returns[, -cashCol]
  for(i in 2:(length(ep) - lookback)) {
    retSubset <- assetRets[ep[i]:ep[i+lookback]]
    
    #forecast is the cumulative return of the lookback period
    forecast <- Return.cumulative(retSubset)
    
    #annualized (realized) volatility uses a 22-day lookback period
    annVol <- StdDev.annualized(tail(retSubset, 22))
    
    #rank the forecasts (the cumulative returns of the lookback)
    rankForecast <- rank(forecast) - ncol(assetRets) + topN
    
    #weight is inversely proportional to annualized vol
    weight <- 1/annVol
    
    #zero out anything not in the top N assets
    weight[rankForecast <= 0] <- 0
    
    #normalize and zero out anything with a negative return
    weight <- weight/sum(weight)
    weight[forecast < 0] <- 0
    
    #compute forecasted vol of portfolio
    forecastVol <- sqrt(as.numeric(t(weight)) %*% 
                          cov(retSubset) %*% 
                          as.numeric(weight)) * sqrt(scale)
    
    #if forecasted vol greater than vol limit, cut it down
    if(as.numeric(forecastVol) > annVolLimit) {
      weight <- weight * annVolLimit/as.numeric(forecastVol)
    }
    weights[[i]] <- xts(weight, order.by=index(tail(retSubset, 1)))
  }
  
  #replace cash back into returns
  returns <- cbind(assetRets, cashRets)
  weights <- do.call(rbind, weights)
  
  #cash weights are anything not in securities
  weights$CASH <- 1-rowSums(weights)
  
  #compute and return strategy returns
  stratRets <- Return.portfolio(R = returns, weights = weights)
  return(stratRets)      
}

In this case, I took a little bit of liberty with some specifics that the reference was short on. I used the full covariance matrix to forecast the portfolio variance (I’m not sure whether JPM would ignore the covariances and use a weighted sum of individual volatilities instead), and for returns, I used the four-month cumulative return. I’ve seen all sorts of permutations on how to compute returns, ranging from an average of the 1-, 3-, 6-, and 12-month cumulative returns, to a single lookback period, to some two-period average, so I’m all ears if others have differing ideas, which is why I left it as a lookback parameter.
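
To make that liberty concrete: a weighted sum of individual volatilities ignores the off-diagonal terms of the covariance matrix, and so it overstates portfolio volatility whenever pairwise correlations are below one. A small sketch of my own (not JPM’s methodology) contrasting the two forecasts on synthetic data:

#my own illustration: full-covariance portfolio vol versus a
#weighted sum of individual vols
set.seed(7)
R <- matrix(rnorm(252 * 3, 0, .01), ncol = 3) #three synthetic assets
w <- c(.4, .35, .25)
fullCovVol <- sqrt(as.numeric(t(w) %*% cov(R) %*% w)) * sqrt(252) #as in the function above
sumVol <- sum(w * apply(R, 2, sd)) * sqrt(252) #ignores correlations
c(fullCov = fullCovVol, weightedSum = sumVol)
#weightedSum >= fullCov; they agree only if all correlations equal 1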

Step 3) Running the strategy.

scto4_20 <- sctoStrat(returns)
getSymbols("SPY", from = "1990-01-01")
spyRets <- Return.calculate(Ad(SPY))
comparison <- na.omit(cbind(scto4_20, spyRets))
colnames(comparison) <- c("strategy", "SPY")
charts.PerformanceSummary(comparison)
apply.yearly(comparison, Return.cumulative)
stats <- rbind(table.AnnualizedReturns(comparison),
               maxDrawdown(comparison),
               CalmarRatio(comparison),
               SortinoRatio(comparison)*sqrt(252))
round(stats, 3)

Here are the statistics:

                          strategy   SPY
Annualized Return            0.118 0.089
Annualized Std Dev           0.125 0.193
Annualized Sharpe (Rf=0%)    0.942 0.460
Worst Drawdown               0.165 0.552
Calmar Ratio                 0.714 0.161
Sortino Ratio (MAR = 0%)     1.347 0.763

               strategy         SPY
2002-12-31 -0.035499564 -0.05656974
2003-12-31  0.253224759  0.28181559
2004-12-31  0.129739794  0.10697941
2005-12-30  0.066215224  0.04828267
2006-12-29  0.167686936  0.15845242
2007-12-31  0.153890329  0.05146218
2008-12-31 -0.096736711 -0.36794994
2009-12-31  0.181759432  0.26351755
2010-12-31  0.099187188  0.15056146
2011-12-30  0.073734427  0.01894986
2012-12-31  0.067679129  0.15990336
2013-12-31  0.321039353  0.32307769
2014-12-31  0.126633020  0.13463790
2015-04-16  0.004972434  0.02806776

And the equity curve:

To me, it looks like a standard rotation strategy: it aims for the highest-momentum securities, diversifies to try to control risk, hits a drawdown in the crisis, recovers, and slightly lags the bull run in SPY. Nothing out of the ordinary.

So, for those interested, here you go. I’m surprised that JP Morgan itself markets this sort of thing, considering that they probably employ top-notch quants that can easily come up with products and/or strategies that are far better.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The Logical Invest Enhanced Bond Rotation Strategy (And the Importance of Dividends)

This post will display my implementation of the Logical Invest Enhanced Bond Rotation strategy. This is a strategy that indeed does work, but it is dependent on reinvesting dividends: bonds pay coupons, and bond ETFs pass those coupons through as dividend distributions.

The strategy is fairly simple: using four separate fixed-income markets (long-term US government bonds, high-yield bonds, emerging sovereign debt, and convertible bonds), the strategy aims to deliver a low-risk, high-Sharpe profile. Every month, it switches into two of the securities, in either a 60-40 or 50-50 split (that is, 60-40 one way or the other). My implementation is similar to the ones I’ve done for the Logical Invest Universal Investment Strategy, which is to maximize a modified Sharpe ratio in a walk-forward process.

Here’s the code:

LogicInvestEBR <- function(returns, lowerBound, upperBound, period, modSharpeF) {
  count <- 0
  configs <- list()
  instCombos <- combn(colnames(returns), m = 2)
  for(i in 1:ncol(instCombos)) {
    inst1 <- instCombos[1, i]
    inst2 <- instCombos[2, i]
    rets <- returns[,c(inst1, inst2)]
    weightSeq <- seq(lowerBound, upperBound, by = .1)
    for(j in 1:length(weightSeq)) {
      returnConfig <- Return.portfolio(R = rets, 
                      weights = c(weightSeq[j], 1-weightSeq[j]), 
                      rebalance_on="months")
      colnames(returnConfig) <- paste(inst1, weightSeq[j], 
                                inst2, 1-weightSeq[j], sep="_")
      count <- count + 1
      configs[[count]] <- returnConfig
    }
  }
  
  configs <- do.call(cbind, configs)
  cumRets <- cumprod(1+configs)
  
  #rolling cumulative 
  rollAnnRets <- (cumRets/lag(cumRets, period))^(252/period) - 1
  rollingSD <- sapply(X = configs, runSD, n=period)*sqrt(252)
  
  modSharpe <- rollAnnRets/(rollingSD ^ modSharpeF)
  monthlyModSharpe <- modSharpe[endpoints(modSharpe, on="months"),]
  
  findMax <- function(data) {
    return(data==max(data))
  }
  
  #configs$zeroes <- 0 #zeroes for initial periods during calibration
  weights <- t(apply(monthlyModSharpe, 1, findMax))
  weights <- weights*1
  weights <- xts(weights, order.by=as.Date(rownames(weights)))
  weights[is.na(weights)] <- 0
  weights$zeroes <- 1-rowSums(weights)
  configCopy <- configs
  configCopy$zeroes <- 0
  
  stratRets <- Return.portfolio(R = configCopy, weights = weights)
  return(stratRets)  
}

The one thing different about this code is the way I initialize the return streams. It’s an ugly piece of work, but it takes all of the pairwise combinations (that is, 4 choose 2, or 4C2 = 6) along with a sequence of security weights going by 10% between the lower and upper bound (that is, if the lower bound is 40% and the upper bound is 60%, the three weight splits will be 40-60, 50-50, and 60-40). So, in this case, there are 4C2 x 3 = 18 configurations. Do note that this is not at all a framework that scales up: with 20 instruments, there would be 190 different combinations, and then anywhere between 3 and 11 (if going from 0% to 100%) configurations for each combination. Obviously, not a pretty sight.
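
To put numbers on that scaling problem, here’s a quick count of my own (the 11-configuration case assumes weights from 0% to 100% in 10% steps):

#quick count of configurations, illustrating the combinatorial blowup
choose(4, 2) * 3 #18 configs: 4 instruments, 40-60/50-50/60-40 splits
choose(20, 2) * 11 #2090 configs: 20 instruments, 0-100% in 10% steps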

Beyond that, it’s the same refrain. Bind the returns together, compute an n-day rolling cumulative return (far faster my way than using the rollApply version of Return.annualized), and divide it by the n-day rolling annualized standard deviation raised to the power of the modified Sharpe F factor (1 gives you the ordinary Sharpe ratio, 0 gives you pure returns, and greater than 1 puts more of a focus on risk). Take the configuration with the highest modified Sharpe, allocate to it, repeat.
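
As a standalone formula, the objective being maximized each month is the annualized return divided by the annualized volatility raised to the power F. A minimal sketch:

#minimal sketch of the modified Sharpe objective: f = 0 ranks on raw
#returns, f = 1 is the ordinary Sharpe ratio, f > 1 penalizes vol harder
modSharpeObjective <- function(annRet, annVol, f) annRet/(annVol^f)
modSharpeObjective(.1, .15, 0) #0.1, pure return
modSharpeObjective(.1, .15, 1) #0.667, ordinary Sharpe
modSharpeObjective(.1, .15, 2) #4.44, heavier vol penalty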

So, how does this perform? Here’s a test script, using the same 73-day lookback with a modified Sharpe F of 2 that I’ve used in the previous Logical Invest strategies.

symbols <- c("TLT", "JNK", "PCY", "CWB", "VUSTX", "PRHYX", "RPIBX", "VCVSX")
suppressMessages(getSymbols(symbols, from="1995-01-01", src="yahoo"))
etfClose <- Return.calculate(cbind(Cl(TLT), Cl(JNK), Cl(PCY), Cl(CWB)))
etfAdj <- Return.calculate(cbind(Ad(TLT), Ad(JNK), Ad(PCY), Ad(CWB)))
mfClose <- Return.calculate(cbind(Cl(VUSTX), Cl(PRHYX), Cl(RPIBX), Cl(VCVSX)))
mfAdj <- Return.calculate(cbind(Ad(VUSTX), Ad(PRHYX), Ad(RPIBX), Ad(VCVSX)))
colnames(etfClose) <- colnames(etfAdj) <- c("TLT", "JNK", "PCY", "CWB")
colnames(mfClose) <- colnames(mfAdj) <- c("VUSTX", "PRHYX", "RPIBX", "VCVSX")

etfClose <- etfClose[!is.na(etfClose[,4]),]
etfAdj <- etfAdj[!is.na(etfAdj[,4]),]
mfClose <- mfClose[-1,]
mfAdj <- mfAdj[-1,]

etfAdjTest <- LogicInvestEBR(returns = etfAdj, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)

etfClTest <- LogicInvestEBR(returns = etfClose, lowerBound = .4, upperBound = .6,
                             period = 73, modSharpeF = 2)

mfAdjTest <- LogicInvestEBR(returns = mfAdj, lowerBound = .4, upperBound = .6,
                            period = 73, modSharpeF = 2)

mfClTest <- LogicInvestEBR(returns = mfClose, lowerBound = .4, upperBound = .6,
                           period = 73, modSharpeF = 2)

fiveStats <- function(returns) {
  return(rbind(table.AnnualizedReturns(returns), 
               maxDrawdown(returns), CalmarRatio(returns)))
}

etfs <- cbind(etfAdjTest, etfClTest)
colnames(etfs) <- c("Adjusted ETFs", "Close ETFs")
charts.PerformanceSummary((etfs))

mutualFunds <- cbind(mfAdjTest, mfClTest)
colnames(mutualFunds) <- c("Adjusted MFs", "Close MFs")
charts.PerformanceSummary(mutualFunds)
chart.TimeSeries(log(cumprod(1+mutualFunds)), legend.loc="topleft")

fiveStats(etfs)
fiveStats(mutualFunds)

So, first, the results of the ETFs:

Equity curve:

Five statistics:

> fiveStats(etfs)
                          Adjusted ETFs Close ETFs
Annualized Return            0.12320000 0.08370000
Annualized Std Dev           0.06780000 0.06920000
Annualized Sharpe (Rf=0%)    1.81690000 1.20980000
Worst Drawdown               0.06913986 0.08038459
Calmar Ratio                 1.78158934 1.04078405

In other words, reinvesting dividends boosts the annualized return by nearly 50% (12.3% versus 8.4%).

Let’s look at the mutual funds. Note that these are for the sake of illustration only; you can’t trade in and out of mutual funds every month.

Equity curve:

Log scale:

Statistics:

                          Adjusted MFs Close MFs
Annualized Return           0.11450000 0.0284000
Annualized Std Dev          0.05700000 0.0627000
Annualized Sharpe (Rf=0%)   2.00900000 0.4532000
Worst Drawdown              0.09855271 0.2130904
Calmar Ratio                1.16217559 0.1332706

In this case, the difference is night and day, though how much of it is due to the data source may also be an issue. Yahoo isn’t the greatest when it comes to data, and I’m not sure how much the data quality deteriorates going back that far. However, the takeaway seems to be this: with bond strategies, dividends need to be dealt with, and when considering returns data presented to you, keep in mind that adjusted returns assume the investor stays on top of dividend reinvestment. Fail to reinvest the dividends in a timely fashion, and the gap can be quite large.

To put it into perspective, as I was writing this post, I wondered whether or not most of this was indeed due to dividends. Here’s a plot of the difference in returns between adjusted and close ETF returns.

chart.TimeSeries(etfAdj - etfClose, legend.loc="topleft", date.format="%Y-%m",
                 main = "Return differences adjusted vs. close ETFs")

With the resulting image:

While there may be some noise on the order of 10^-5 on most days, there are clear spikes observable in the return differences. Those are the dividends, and their compounding makes a sizable difference. In one case for CWB, the difference is particularly striking (Dec. 29, 2014). In fact, here’s a quick little analysis of the dividend effects.

dividends <- etfAdj - etfClose
divReturns <- list()
for(i in 1:ncol(dividends)) {
  diffStream <- dividends[,i]
  divPayments <- diffStream[diffStream >= 1e-3]
  divReturns[[i]] <- Return.annualized(divPayments)
}
divReturns <- do.call(cbind, divReturns)
divReturns

divReturns/Return.annualized(etfAdj)

And the result:

> divReturns
                         TLT        JNK        PCY        CWB
Annualized Return 0.03420959 0.08451723 0.05382363 0.05025999

> divReturns/Return.annualized(etfAdj)
                       TLT       JNK       PCY       CWB
Annualized Return 0.453966 0.6939243 0.5405922 0.3737499

In short, the effect of the dividends is massive. In some instances, such as with JNK, the dividends comprise more than 50% of the annualized returns for the security!

Basically, I’d like to hammer the point home one last time: backtests using adjusted data assume instantaneous reinvestment of dividends. In order to achieve the optimistic returns seen in the backtests, these dividend payments must be reinvested ASAP. In short, this is the fine print on this strategy, and a small but critical detail that the SeekingAlpha article doesn’t mention. (Seriously, do a Ctrl+F in your browser for the word “dividend”. It won’t come up in the article itself.) I wanted to make sure to add it.

One last thing: gaudy numbers when using monthly returns!

> fiveStats(apply.monthly(etfs, Return.cumulative))
                          Adjusted ETFs Close ETFs
Annualized Return            0.12150000   0.082500
Annualized Std Dev           0.06490000   0.067000
Annualized Sharpe (Rf=0%)    1.87170000   1.232100
Worst Drawdown               0.03671871   0.049627
Calmar Ratio                 3.30769620   1.662642

Look! A Calmar Ratio of 3.3, and a Sharpe near 2!*

*: Must manage dividends. Statistics reported are monthly.

Okay, in all fairness, this is a pretty solid strategy, once one commits to managing the dividends. I just felt that this should have been a topic made front and center, rather than swept under the “we use adjusted returns” rug, since in this instance, the effect of dividends is massive.

In conclusion, while I will more or less confirm the strategy’s actual risk/reward performance (unlike some other SeekingAlpha strategies I’ve backtested), which, in all honesty, I find really impressive, it comes with a caveat like the rest of them. However, the caveat of “be detail-oriented/meticulous/paranoid and reinvest those dividends!” in my opinion is a caveat that’s a lot easier to live with than 30%+ drawdowns that were found lurking in other SeekingAlpha strategies. So for those that can stay on top of those dividends (whether manually, or with machine execution), here you go. I’m basically confirming the performance of Logical Invest’s strategy, but just belaboring one important detail.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

The Logical Invest “Hell On Fire” Replication Attempt

This post is about my attempt to replicate Logical Invest’s “Hell On Fire” strategy, which is its Universal Investment Strategy using SPXL and TMF (the 3x leveraged ETFs). I don’t match their results, but I do come close.

It seems that some people at Logical Invest have caught wind of some of the work I did in replicating Harry Long’s ideas. First off, for the record, I’ve actually done some work with Harry Long in private, and the strategies we’ve worked on together are definitely better than the ones he has shared for free, so if you are an institution hoping to vet his track record, I wouldn’t judge it by the very much incomplete frameworks he posts for free.

This post’s strategy is the Logical Invest Universal Investment Strategy leveraged up three times over. Here’s the link to their newest post. Also, I’m happy to see that they think positively of my work.

In any case, my results are worse than Logical Invest’s, so if anyone sees a reason for the discrepancy, please let me know.

Here’s the code for the backtest; most of it is old, from my first analysis of Logical Invest’s strategy.

LogicalInvestUIS <- function(returns, period = 63, modSharpeF = 2.8) {
  returns[is.na(returns)] <- 0 #impute any NAs to zero
  configs <- list()
  for(i in 1:11) {
    weightFirst <- (i-1)*.1
    weightSecond <- 1-weightFirst
    config <- Return.portfolio(R = returns, weights=c(weightFirst, weightSecond), rebalance_on = "months")
    configs[[i]] <- config
  }
  configs <- do.call(cbind, configs)
  cumRets <- cumprod(1+configs)
  
  #rolling cumulative 
  rollAnnRets <- (cumRets/lag(cumRets, period))^(252/period) - 1
  rollingSD <- sapply(X = configs, runSD, n=period)*sqrt(252)
  
  modSharpe <- rollAnnRets/(rollingSD ^ modSharpeF)
  monthlyModSharpe <- modSharpe[endpoints(modSharpe, on="months"),]
  
  findMax <- function(data) {
    return(data==max(data))
  }
  
  #configs$zeroes <- 0 #zeroes for initial periods during calibration
  weights <- t(apply(monthlyModSharpe, 1, findMax))
  weights <- weights*1
  weights <- xts(weights, order.by=as.Date(rownames(weights)))
  weights[is.na(weights)] <- 0
  weights$zeroes <- 1-rowSums(weights)
  configCopy <- configs
  configCopy$zeroes <- 0
  
  stratRets <- Return.portfolio(R = configCopy, weights = weights)
  
  weightFirst <- apply(monthlyModSharpe, 1, which.max)
  weightFirst <- do.call(rbind, weightFirst)
  weightFirst <- (weightFirst-1)*.1
  align <- cbind(weightFirst, stratRets)
  align <- na.locf(align)
  chart.TimeSeries(align[,1], date.format="%Y", ylab=paste("Weight", colnames(returns)[1]), 
                                                           main=paste("Weight", colnames(returns)[1]))
  
  return(stratRets)
}

In this case, rather than steps of 5% weights, I used 10% weights after looking at the Logical Invest charts more closely.

Now, let’s look at the instruments.

getSymbols("SPY", from="1990-01-01")

getSymbols("TMF", from="1990-01-01")
TMFrets <- Return.calculate(Ad(TMF))
getSymbols("TLT", from="1990-01-01")
TLTrets <- Return.calculate(Ad(TLT))
tmf3TLT <- merge(TMFrets, 3*TLTrets, join='inner')
charts.PerformanceSummary(tmf3TLT)
Return.annualized(tmf3TLT[,2]-tmf3TLT[,1])
discrepancy <- as.numeric(Return.annualized(tmf3TLT[,2]-tmf3TLT[,1]))
tmf3TLT[,2] <- tmf3TLT[,2] - ((1+discrepancy)^(1/252)-1)
modifiedTLT <- 3*TLTrets - ((1+discrepancy)^(1/252)-1)

rets <- merge(3*Return.calculate(Ad(SPY)), modifiedTLT, join='inner')
colnames(rets) <- gsub("\\.[A-z]*", "", colnames(rets))

leveragedReturns <- rets
colnames(leveragedReturns) <- paste("Leveraged", colnames(leveragedReturns), sep="_")
leveragedReturns <- leveragedReturns[-1,]

Again, this is the same approach from my work analyzing Harry Long’s strategies, used to get a longer backtest of SPXL and TMF (that is, leveraged SPY and TLT).

Now, let’s look at some configurations.


hof <- LogicalInvestUIS(returns = leveragedReturns, period = 63, modSharpeF = 2.8)
hof2 <- LogicalInvestUIS(returns = leveragedReturns, period = 73, modSharpeF = 3)
hof3 <- LogicalInvestUIS(returns = leveragedReturns, period = 84, modSharpeF = 4)
hof4 <- LogicalInvestUIS(returns = leveragedReturns, period = 42, modSharpeF = 1.5)
hof5 <- LogicalInvestUIS(returns = leveragedReturns, period = 63, modSharpeF = 6)
hof6 <- LogicalInvestUIS(returns = leveragedReturns, period = 73, modSharpeF = 2)

hofComparisons <- cbind(hof, hof2, hof3, hof4, hof5, hof6)
colnames(hofComparisons) <- c("d63_F2.8", "d73_F3", "d84_F4", "d42_F1.5", "d63_F6", "d73_F2")
rbind(table.AnnualizedReturns(hofComparisons), maxDrawdown(hofComparisons), CalmarRatio(hofComparisons))

With the following statistics:

> rbind(table.AnnualizedReturns(hofComparisons), maxDrawdown(hofComparisons), CalmarRatio(hofComparisons))
                           d63_F2.8    d73_F3    d84_F4  d42_F1.5    d63_F6    d73_F2
Annualized Return         0.3777000 0.3684000 0.2854000 0.1849000 0.3718000 0.3830000
Annualized Std Dev        0.3406000 0.3103000 0.3010000 0.4032000 0.3155000 0.3383000
Annualized Sharpe (Rf=0%) 1.1091000 1.1872000 0.9483000 0.4585000 1.1785000 1.1323000
Worst Drawdown            0.5619769 0.4675397 0.4882101 0.7274609 0.5757738 0.4529908
Calmar Ratio              0.6721751 0.7879956 0.5845827 0.2541127 0.6457823 0.8455274

It seems that the original 73-day lookback with a modified Sharpe F of 2 had the best performance.

Here are the equity curves (log scale because leveraged or volatility strategies look silly at regular scale):

chart.TimeSeries(log(cumprod(1+hofComparisons)), legend.loc="topleft", date.format="%Y",
                 main="Hell On Fire Comparisons", ylab="Value of $1", yaxis = FALSE)
axis(side=2, at=c(0, 1, 2, 3, 4), labels=paste0("$", round(exp(c(0, 1, 2, 3, 4)))), las = 1)

In short, sort of upwards from 2002 to the crisis, where all the strategies take a dip, and then continue steadily upwards.

Here are the drawdowns:

dds <- PerformanceAnalytics:::Drawdowns(hofComparisons)
chart.TimeSeries(dds, legend.loc="bottomright", date.format="%Y", main="Drawdowns Hell On Fire Variants", 
                 yaxis=FALSE, ylab="Drawdown", auto.grid=FALSE)
axis(side=2, at=seq(from=0, to=-.7, by = -.1), labels=paste0(seq(from=0, to=-.7, by = -.1)*100, "%"), las = 1)

Basically, there are some regular bumps along the road, given the CAGRs (that is, if you’re going to lever something with an occasional 8% drawdown three times over, it’s going to have a 24% drawdown on those same occasions, if not more), the massive hit in the crisis when bonds take a hit, and on we go.
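
The “if not more” part can be checked directly: with daily-rebalanced leverage, the realized drawdown of the levered series need not be exactly three times the unleveraged one, since leveraged losses compound nonlinearly (and the levered figure is capped below 100%, while the naive multiple isn’t). A quick check of my own, reusing the SPY data already loaded in this post:

#my own quick check: realized 3x daily-levered drawdown versus the
#naive 3x multiple of the unleveraged drawdown
spyRets <- na.omit(Return.calculate(Ad(SPY)))
maxDrawdown(spyRets) #unleveraged max drawdown
maxDrawdown(3 * spyRets) #3x daily-rebalanced
3 * as.numeric(maxDrawdown(spyRets)) #naive 3x multiple, for comparison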

In short, this strategy is basically the original strategy, just leveraged up, so for those with the stomach for it, there you go. Of course, Logical Invest is leaving off some details, since I’m not getting a perfect replica: their returns seem slightly higher, and their drawdowns slightly lower. I suppose that’s par for the course when selling subscriptions and newsletters.

One last thing, which I think people should be aware of: when people report statistics on their strategies, make sure to ask at which frequency. Here’s a quick little modification, going from daily returns to monthly returns:

> betterStatistics <- apply.monthly(hofComparisons, Return.cumulative)
> rbind(table.AnnualizedReturns(betterStatistics), maxDrawdown(betterStatistics), CalmarRatio(betterStatistics))
                           d63_F2.8    d73_F3    d84_F4  d42_F1.5    d63_F6   d73_F2
Annualized Return         0.3719000 0.3627000 0.2811000 0.1822000 0.3661000 0.377100
Annualized Std Dev        0.3461000 0.3014000 0.2914000 0.3566000 0.3159000 0.336700
Annualized Sharpe (Rf=0%) 1.0746000 1.2036000 0.9646000 0.5109000 1.1589000 1.119900
Worst Drawdown            0.4323102 0.3297927 0.4100792 0.6377512 0.4636949 0.311480
Calmar Ratio              0.8602366 1.0998551 0.6855148 0.2856723 0.7894636 1.210563

While the Sharpe ratios don’t improve too much, the Calmar (return-to-drawdown) statistics increase dramatically. E.g., imagine a month in which there’s a 40% drawdown, but the month ends at a new equity high. A monthly return series will sweep that under the rug, or, for my fellow Jewish readers, pass over it. So, be wary.
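
Here’s a tiny synthetic illustration of that masking effect (my own, not from the original analysis): a series that crashes and fully recovers within a single month shows a deep daily drawdown but an essentially invisible monthly one.

require(PerformanceAnalytics)
require(xts)

#synthetic illustration: an intramonth crash-and-recovery that monthly
#returns sweep under the rug
days <- seq(as.Date('2015-01-01'), by = 'day', length.out = 60)
dailySeries <- xts(rep(.001, 60), order.by = days)
dailySeries[20] <- -.4 #a 40% single-day crash...
dailySeries[21] <- 1/.6 - 1 #...fully recovered the next day
maxDrawdown(dailySeries) #roughly 0.40 on daily data
maxDrawdown(apply.monthly(dailySeries, Return.cumulative)) #roughly 0 on monthly data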

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.