Review: Inovance’s TRAIDE application

This review covers Inovance Tech’s TRAIDE system, an application geared towards letting retail investors apply proprietary machine learning algorithms to assist them in creating systematic trading strategies. Currently, my one-line review is that while I hope the company’s founders mean well, the application is still in an early stage, and so should be checked out by potential users/venture capitalists as something with proof of potential, rather than a finished product ready for mass market. While this acts as a review, it also contains my thoughts as to how Inovance Tech can improve its product.

A bit of background: I have spoken several times to some of the company’s founders, who sound like individuals at about my age level (so, fellow millennials). Ultimately, the selling point is this:

Systematic trading is cool.
Machine learning is cool.
Therefore, applying machine learning to systematic trading is awesome! (And a surefire way to make profits, as Renaissance Technologies has shown.)

While this may sound a bit snarky, it’s also, in some ways, true. Machine learning has become the talk of the town, from IBM’s Watson (RenTec itself hired a bunch of speech recognition experts from IBM a couple of decades back), to Stanford’s self-driving car (invented by Sebastian Thrun, who now heads Udacity), to the Netflix prize, to god knows what Andrew Ng is doing with deep learning at Baidu. Considering how well machine learning has done at much more complex tasks than “create a half-decent systematic trading algorithm”, it shouldn’t be too much to ask this powerful field at the intersection of computer science and statistics to help the retail investor glued to watching charts generate a lot more return on his or her investments than through discretionary chart-watching and noise trading. To my understanding from conversations with Inovance Tech’s founders, this is explicitly their mission.

(Note: Dr. Wes Gray and Alpha Architect, in their book DIY Financial Advisor, have already established that listening to pundits, and trying to succeed at discretionary trading, is, on the whole, a loser’s game.)

However, I am not sure that Inovance’s TRAIDE application actually accomplishes this mission in its current state.

Here’s how it works:

Users select one asset at a time, and select a date range (data going back to Dec. 31, 2009). Assets are currently limited to highly liquid currency pairs, and can take the following settings: 1 hour, 2 hour, 4 hour, 6 hour, or daily bar time frames.

Users then select from a variety of indicators, ranging from technical ones (moving averages, oscillators, volume calculations, etc.–mostly an assortment of 20th century indicators, though the occasional adaptive moving average has managed to sneak in, namely KAMA–see my DSTrading package–and MAMA, aka the MESA Adaptive Moving Average from John Ehlers) to more esoteric ones, such as some sentiment indicators. Here’s where things start to head south for me, however. While it’s easy to add as many indicators as a user would like, there is basically no documentation on any of them, no links to references, and so on, so users bear the onus of understanding what each and every indicator they select actually does, and whether or not those indicators are useful. The TRAIDE application makes zero effort (thus far) to acquaint users with the purpose of these indicators, or with their theoretical objective (measure conviction in a trend, detect a trend, act as an oscillator, etc.).

Furthermore, regarding indicator selection, users also specify only one parameter setting for each indicator per strategy. E.g. if I had an EMA crossover, I’d have to create a new strategy for a 20/100 crossover, another for a 21/100 crossover, and so on, rather than specifying something like this:

short EMA: 20-60
long EMA: 80-200

Quantstrat itself has this functionality, and while I don’t recall covering parameter robustness checks/optimization (in other words, testing multiple parameter sets–whether one uses them for optimization or robustness is up to the user, not the functionality) in quantstrat on this blog specifically, this information very much exists in what I deem “the official quantstrat manual”, found here. In my opinion, the option of covering a range of values is mandatory so as to demonstrate that any given parameter setting is not a random fluke. Outside of quantstrat, I have demonstrated this methodology in my Hypothesis-Driven Development posts, and in coming up with parameter selections for volatility trading.
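
As an aside, here is a minimal sketch of what such a parameter sweep could look like in plain R, outside of quantstrat’s own parameter-set machinery. The ticker, date range, EMA ranges, and step sizes are illustrative assumptions of mine, and the Sharpe calculation is deliberately crude.

require(quantmod)
require(TTR)

getSymbols("SPY", from = "2005-01-01")
price <- Ad(SPY)
dailyRets <- ROC(price, type = "discrete")

# enumerate every short/long EMA combination in the stated ranges
paramGrid <- expand.grid(shortEMA = seq(20, 60, by = 10),
                         longEMA = seq(80, 200, by = 20))

sweepSharpes <- apply(paramGrid, 1, function(p) {
  sig <- lag((EMA(price, n = p["shortEMA"]) > EMA(price, n = p["longEMA"])) * 1)
  rets <- sig * dailyRets
  mean(rets, na.rm = TRUE)/sd(rets, na.rm = TRUE) * sqrt(252) # crude annualized Sharpe
})

summary(sweepSharpes) # a robust setting shouldn't look wildly different from its neighbors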

Where TRAIDE may do something interesting, however, is that after the user specifies his indicators and parameters, its “proprietary machine learning” algorithms (WARNING: COMPLETELY BLACK BOX) determine which ranges of values of the indicators in question generated the best results within the backtest, and assign them bullishness and bearishness scores. In other words, “looking backwards, these were the indicator values that did best over the course of the sample”. While there is definite value to exploring the relationships between indicators and future returns, I think that TRAIDE needs to do more in this area, such as reporting p-values, conviction, and so on.

For instance, if you combine enough indicators, your “rule” is simply a market order triggered at the intersection of all of your indicators’ ranges. For example, TRAIDE may tell a user that the strongest bullish signal occurs when the difference of the moving averages is between 1 and 2, the ADX is between 20 and 25, the ATR is between 0.5 and 1, and so on. Each setting the user selects further narrows down the number of trades the simulation makes. In my opinion, there are more ways to explore the interplay of indicators than simply one giant AND statement, such as an “OR” statement of some sort (e.g. select all values, and put on a trade when 3 out of 5 indicators fall into the selected bullish range, in order to place more trades). While it may be wise to filter trades down to very rare instances if trading a massive number of instruments, such that of several thousand possible instruments, only several are trading at any given time, with TRAIDE a user selects only *one* asset class (currently, one currency pair) at a time, so I’m hoping to see TRAIDE create more functionality in terms of what constitutes a trading rule.
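
For illustration, here is a minimal sketch of what a “k out of n” voting rule could look like in R. The instrument, the indicators, the thresholds, and the 2-of-3 cutoff are placeholders I chose, not anything TRAIDE produces.

require(quantmod)
require(TTR)

getSymbols("SPY", from = "2010-01-01")
price <- Ad(SPY)

bullishVotes <- cbind(EMA(price, 20) - EMA(price, 100) > 0,  # trend direction
                      ADX(HLC(SPY))[, "ADX"] > 20,           # trend strength
                      RSI(price, 14) > 50)                   # momentum

votes <- xts(rowSums(bullishVotes, na.rm = TRUE), = index(bullishVotes))
signal <- lag((votes >= 2) * 1) # long when at least 2 of the 3 indicators agree
stratRets <- signal * ROC(price, type = "discrete")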

After the user selects both a long and a short rule (by simply filtering on indicator ranges that TRAIDE’s machine learning algorithms have said are good), TRAIDE turns that into a backtest with a long equity curve, short equity curve, total equity curve, and trade statistics for aggregate, long, and short trades. By comparison, in quantstrat, one only receives aggregate trade statistics. Whether long or short, all that matters to quantstrat is whether or not the trade made or lost money. For sophisticated users, it’s trivial enough to turn one set of rules on or off, but TRAIDE does more to hold the user’s hand in that regard.

Lastly, TRAIDE then generates MetaTrader4 code for a user to download.

And that’s the process.

In my opinion, while what Inovance Tech has set out to do with TRAIDE is interesting, I wouldn’t recommend it in its current state. For sophisticated individuals that know how to go through a proper research process, TRAIDE is too stringent in terms of parameter settings (one at a time), pre-coded indicators (its target audience probably can’t program too well), and asset classes (again, one at a time). However, for retail investors, my issue with TRAIDE is this:

There is a whole assortment of undocumented indicators, which then feed into black-box machine learning algorithms. The result is that the user has very little understanding of what the underlying algorithms actually do, and why the logic he or she is presented with is the output. While TRAIDE makes it trivially easy to generate any one given trading system, as multiple individuals have stated in slightly different ways before, writing a strategy is the easy part. Doing the work to understand whether that strategy actually has an edge is much harder. Namely, checking its robustness, its predictive power, its sensitivity to various regimes, and so on. Given TRAIDE’s rather short data history (2010 onwards), coupled with the opaqueness the user operates under, my analogy would be this:

It’s like giving an inexperienced driver the keys to a sports car in a thick fog on a winding road. Nobody disputes that a sports car is awesome. However, the true burden of the work lies in making sure that the user doesn’t wind up smashing into a tree.

Overall, I like the TRAIDE application’s mission, and I think it may have potential as something for retail investors who don’t intend to learn the ins and outs of coding a trading system in R (despite me demonstrating many times over how to put such systems together). I just think that there needs to be more work put into making sure that the results a user sees are indicative of an edge, rather than leaving open the possibility of highly flexible machine learning algorithms chasing ghosts in one of the noisiest and most dynamic data sets one can possibly find.

My recommendations are these:

1) Multiple asset classes
2) Allow parameter ranges, and cap the number of trials at any given point (E.G. 4 indicators with ten settings each = 10,000 possible trading systems = blow up the servers). To narrow down the number of trial runs, use techniques from experimental design to arrive at decent combinations. (I wish I remembered my response surface methodology techniques from my master’s degree about now!)
3) Allow modifications of order sizing (E.G. volatility targeting, stop losses), such as I wrote about in my hypothesis-driven development posts.
4) Provide *some* sort of documentation for the indicators, even if it’s as simple as a link to investopedia (preferably a lot more).
5) Far more output is necessary, especially for users who don’t program. Namely, output that distinguishes whether there is a legitimate edge, or whether there are simply too few observations to reject the null hypothesis of random noise (a minimal sketch of such a check follows this list).
6) Far longer data histories. 2010 onwards just seems too short of a time-frame to be sure of a strategy’s efficacy, at least on daily data (may not be true for hourly).
7) Factor in transaction costs. Trading on an hourly time frame will mean far less P&L per trade than on a daily resolution. If MT4 charges a fixed ticket price, users need to know how this factors into their strategy.
8) Lastly, dogfooding. When I spoke last time with Inovance Tech’s founders, they claimed they were using their own algorithms to create a forex strategy, which was doing well in live trading. By the time more of these suggestions are implemented, it’d be interesting to see if they have a track record as a fund, in addition to as a software provider.
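
Regarding point 5, here is the flavor of output I have in mind: a minimal sketch using a made-up vector of per-trade returns (not anything from TRAIDE), asking whether the sample even supports rejecting the null hypothesis of zero edge.

set.seed(42)
tradeReturns <- rnorm(30, mean = 0.001, sd = 0.02) # stand-in for a backtest's per-trade returns

edgeTest <- t.test(tradeReturns, mu = 0)
edgeTest$p.value   # a large p-value on a small sample means no demonstrated edge
edgeTest$  # the confidence interval around the mean per-trade return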

If all of these things are accounted for and automated, the product will hopefully accomplish its mission of bringing systematic trading and machine learning to more people. I think TRAIDE has potential, and I’m hoping that its staff will realize that potential.

Thanks for reading.

NOTE: I am currently contracting in downtown Chicago, and am always interested in networking with professionals in the systematic trading and systematic asset management/allocation spaces. Find my LinkedIn here.

A Filter Selection Method Inspired From Statistics

This post will demonstrate a method to create an ensemble filter based on a trade-off between smoothness and responsiveness, two properties looked for in a filter. An ideal filter would be responsive to price action, so as not to hold incorrect positions, while also being smooth, so as not to incur false signals and unnecessary transaction costs.

So, ever since my volatility trading strategy (using three very naive filters, all SMAs) completely missed a 27% month in XIV, I’ve decided to try to improve the way I create indicators for trend following. Now, realizing that there can potentially be tons of complex filters in existence, I decided instead to focus on a way to create ensemble filters, by using an analogy from statistics/machine learning.

In static data analysis, for a regression or classification task, there is a trade-off between bias and variance. In a nutshell, variance is bad because of the possibility of overfitting on a few irregular observations, and bias is bad because of the possibility of underfitting legitimate data. Similarly, with filtering time series, there are similar concerns, except bias is called lag, and variance can be thought of as a “whipsawing” indicator. Essentially, an ideal indicator would move quickly with the data, while at the same time, not possess a myriad of small bumps-and-reverses along the way, which may send false signals to a trading strategy.

So, here’s how my simple algorithm works:

The inputs to the function are the following:

A) The time series of the data you’re trying to filter
B) A collection of candidate filters
C) A period over which to measure smoothness and responsiveness, defined as the square root of the n-day EMA (2/(n+1) convention) of the following:
a) Responsiveness: the squared quantity of price/filter – 1
b) Smoothness: the squared quantity of filter(t)/filter(t-1) – 1 (aka R’s Return.calculate function)
D) A conviction factor, to which power the errors will be raised. This should probably be between .5 and 3
E) A vector that defines the emphasis on smoothness (vs. emphasis on responsiveness), which should range from 0 to 1.

Here’s the code:


require(quantmod)
require(TTR)
require(PerformanceAnalytics)

getSymbols('SPY', from = '1990-01-01')

smas <- list()
for(i in 2:250) {
  smas[[i]] <- SMA(Ad(SPY), n = i)
}
smas <- do.call(cbind, smas)

xtsApply <- function(x, FUN, n, ...) {
  out <- xts(apply(x, 2, FUN, n = n, ...), = index(x))
  return(out)
}

sumIsNa <- function(x){
  return(sum(
}

This gets SPY data, and creates two utility functions: xtsApply, which is simply a column-wise apply that restores the original index (which a plain column-wise apply discards), and sumIsNa, which I use later for counting the number of NAs in a given row. It also creates my candidate filters, which, to keep things simple, are just SMAs of lengths 2 through 250.

Here’s the actual code of the function, with comments in the code itself to better explain the process from a technical level (for those still unfamiliar with R, look for the hashtags):

ensembleFilter <- function(data, filters, n = 20, conviction = 1, emphasisSmooth = .51) {
  
  # smoothness error
  filtRets <- Return.calculate(filters)
  sqFiltRets <- filtRets * filtRets * 100 #multiply by 100 to prevent instability
  smoothnessError <- sqrt(xtsApply(sqFiltRets, EMA, n = n))
  
  # responsiveness error
  repX <- xts(matrix(data, nrow = nrow(filters), ncol = ncol(filters)), 
     = index(filters))
  dataFilterReturns <- repX/filters - 1
  sqDataFilterQuotient <- dataFilterReturns * dataFilterReturns * 100 #multiply by 100 to prevent instability
  responseError <- sqrt(xtsApply(sqDataFilterQuotient, EMA, n = n))
  
  # place smoothness and responsiveness errors on same notional quantities
  meanSmoothError <- rowMeans(smoothnessError)
  meanResponseError <- rowMeans(responseError)
  ratio <- meanSmoothError/meanResponseError
  ratio <- xts(matrix(ratio, nrow=nrow(filters), ncol=ncol(filters)),
      = index(filters))
  responseError <- responseError * ratio
  
  # for each term in emphasisSmooth, create a separate filter
  ensembleFilters <- list()
  for(term in emphasisSmooth) {
    
    # compute total errors, raise them to a conviction power, find the normalized inverse
    totalError <- smoothnessError * term + responseError * (1-term)
    totalError <- totalError ^ conviction
    invTotalError <- 1/totalError
    normInvError <- invTotalError/rowSums(invTotalError)
    
    # ensemble filter is the sum of candidate filters in proportion
    # to the inverse of their total error
    tmp <- xts(rowSums(filters * normInvError), = index(filters))
    
    #NA out times at which one or more filters were NA
    initialNAs <- apply(filters, 1, sumIsNa) 
    tmp[initialNAs > 0] <- NA
    
    tmpName <- paste("emphasisSmooth", term, sep="_")
    colnames(tmp) <- tmpName
    ensembleFilters[[tmpName]] <- tmp
  }
  
  # compile the filters
  out <- do.call(cbind, ensembleFilters)
  return(out)
}

The vast majority of the computational time takes place in the two xtsApply calls. On 249 different simple moving averages, the process takes about 30 seconds.

Here’s the output, using a conviction factor of 2:

t1 <- Sys.time()
filts <- ensembleFilter(Ad(SPY), smas, n = 20, conviction = 2, emphasisSmooth = c(0, .05, .25, .5, .75, .95, 1))
t2 <- Sys.time()

# a base price plot is assumed here; the original post's exact call was not preserved
plot(Ad(SPY)['2007::2011'])
lines(filts[,1], col='blue', lwd=2)
lines(filts[,2], col='green', lwd = 2)
lines(filts[,3], col='orange', lwd = 2)
lines(filts[,4], col='brown', lwd = 2)
lines(filts[,5], col='maroon', lwd = 2)
lines(filts[,6], col='purple', lwd = 2)
lines(filts[,7], col='red', lwd = 2)

And here is an example, looking at SPY from 2007 through 2011.

In this case, I chose to go from blue to green, orange, brown, maroon, purple, and finally red for smoothness emphasis of 0%, 5%, 25%, 50%, 75%, 95%, and 100%, respectively.

Notice that the blue line is very wiggly, while the red line sometimes barely moves, such as during the 2011 drop-off.

One thing that I noticed in the course of putting this process together is something that eluded me earlier–namely, that naive trend-following strategies which are either fully long or fully short based on a crossover signal can lose money quickly in sideways markets.

However, theoretically, by finely varying the jumps between 0% to 100% emphasis on smoothness, whether in steps of 1% or finer, one can have a sort of “continuous” conviction, by simply adding up the signs of differences between various ensemble filters. In an “uptrend”, the difference as one moves from the most responsive to most smooth filter should constantly be positive, and vice versa.

In the interest of brevity, this post doesn’t even have a trading strategy attached to it. However, an implied trading strategy can be to be long or short the SPY depending on the sum of signs of the differences in filters as you move from responsiveness to smoothness. Of course, as the candidate filters are all SMAs, it probably wouldn’t be particularly spectacular. However, for those out there who use more complex filters, this may be a way to create ensembles out of various candidate filters, and create even better filters. Furthermore, I hope that given enough candidate filters and an objective way of selecting them, it would be possible to reduce the chances of creating an overfit trading system. However, anything with parameters can potentially be overfit, so that may be wishful thinking.
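
For what it’s worth, here is a minimal sketch of that implied signal using the filts object computed above: take the differences between each ensemble filter and its smoother neighbor, sum the signs, and trade the sign of that sum. This is my reading of the idea rather than a strategy from this post, and with all-SMA candidates it is purely illustrative.

nFilts <- ncol(filts)
filterDiffs <- filts[, -nFilts] - filts[, -1] # each filter minus its smoother neighbor
conviction <- xts(rowSums(sign(filterDiffs)), = index(filts))
signal <- lag(sign(conviction)) # conviction from +6 to -6, collapsed to long/short/flat
spyRets <- Return.calculate(Ad(SPY))
impliedStratRets <- signal * spyRets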

All in all, this is still a new idea for me. For instance, the filter used to compute the error terms can probably be improved. The inspiration for an EMA 20 essentially came from how Basel computes volatility (if I recall correctly, it uses the square root of an 18-day EMA of squared returns), and the very fact that I use an EMA can itself be improved upon (why an EMA instead of some other, more complex filter?). In fact, I’m always open to readers’ suggestions on how I can improve this concept (and others).

Thanks for reading.

NOTE: I am currently contracting in Chicago in an analytics capacity. If anyone would like to meet up, let me know. You can reach me by email, or contact me through my LinkedIn here.

How well can you scale your strategy?

This post will deal with a quick, finger in the air way of seeing how well a strategy scales–namely, how sensitive it is to latency between signal and execution, using a simple volatility trading strategy as an example. The signal will be the VIX/VXV ratio trading VXX and XIV, an idea I got from Volatility Made Simple’s amazing blog, particularly this post. The three signals compared will be the “magical thinking” signal (observe the close, buy the close, named from the ruleOrderProc setting in quantstrat), buy on next-day open, and buy on next-day close.

Let’s get started.


         destfile="longVXX.txt") #requires downloader package
getSymbols('^VIX', from = '1990-01-01')

xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxv <- xts(read.zoo("vxvData.csv", header=TRUE, sep=",", format="%m/%d/%Y", skip=2))
vixVxv <- Cl(VIX)/Cl(vxv)

xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE))

vxxCloseRets <- Return.calculate(Cl(vxx))
vxxOpenRets <- Return.calculate(Op(vxx))
xivCloseRets <- Return.calculate(Cl(xiv))
xivOpenRets <- Return.calculate(Op(xiv))

vxxSig <- vixVxv > 1
xivSig <- 1-vxxSig

magicThinking <- vxxCloseRets * lag(vxxSig) + xivCloseRets * lag(xivSig)
nextOpen <- vxxOpenRets * lag(vxxSig, 2) + xivOpenRets * lag(xivSig, 2)
nextClose <- vxxCloseRets * lag(vxxSig, 2) + xivCloseRets * lag(xivSig, 2)
tradeWholeDay <- (nextOpen + nextClose)/2

compare <- na.omit(cbind(magicThinking, nextOpen, nextClose, tradeWholeDay))
colnames(compare) <- c("Magic Thinking", "Next Open", 
                       "Next Close", "Execute Through Next Day")
rbind(table.AnnualizedReturns(compare),
      maxDrawdown(compare), CalmarRatio(compare))

chart.TimeSeries(log(cumprod(1+compare), base = 10), legend.loc='topleft', ylab='log base 10 of additional equity',
                 main = 'VIX vs. VXV different execution times')

So here’s the run-through. In addition to the magical thinking strategy (observe the close, buy that same close), I tested three other variants: one that transacts at the next open, one that transacts at the next close, and the average of those two. Effectively, these three give a sense of a strategy’s performance under more realistic conditions, that is, how well the strategy performs if transacted throughout the day, assuming you’re managing a sum of money too large to just plow into the market in the closing minutes (and if you hope to get rich off of trading, you will eventually be managing more money than magical thinking can accommodate). Ideally, I’d use VWAP pricing, but that data isn’t freely available anywhere I know of, so readers couldn’t replicate the results even if I had it.
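
As a rough extension of the same idea, the comparison can be wrapped into a small helper that applies an arbitrary signal-to-execution delay (in trading days) to the close-to-close returns; a delay of 1 reproduces the magical-thinking variant and a delay of 2 the next-day close, while larger delays show how quickly the edge decays. This is a sketch built on the objects defined above, not something from the original post.

delayedRets <- function(sig, rets, delay = 1) {
  out <- rets * lag(sig, delay)
  colnames(out) <- paste0("delay_", delay)
  return(out)
}

delays <- lapply(1:5, function(k) {
  delayedRets(vxxSig, vxxCloseRets, k) + delayedRets(xivSig, xivCloseRets, k)
})
delays <- na.omit(do.call(cbind, delays))
colnames(delays) <- paste0("delay_", 1:5)
rbind(table.AnnualizedReturns(delays), maxDrawdown(delays), CalmarRatio(delays))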

In any case, here are the results.

Equity curves:

Log scale (for Mr. Tony Cooper and others):


                          Magic Thinking Next Open Next Close Execute Through Next Day
Annualized Return               0.814100 0.8922000  0.5932000                 0.821900
Annualized Std Dev              0.622800 0.6533000  0.6226000                 0.558100
Annualized Sharpe (Rf=0%)       1.307100 1.3656000  0.9529000                 1.472600
Worst Drawdown                  0.566122 0.5635336  0.6442294                 0.601014
Calmar Ratio                    1.437989 1.5831686  0.9208586                 1.367510

My reaction? The next-day-close variant performing vastly worse than the other configurations (with that deterioration concentrated in the most recent years) essentially means that fills have to come pretty quickly at the beginning of the day. While the strategy seems somewhat scalable through the lens of this finger-in-the-air technique, if the first full day of possible execution after signal reception takes a strategy from a 1.44 Calmar to a .92, holding everything else constant, that’s a massive drop-off. In my opinion, this is quite a valid question to ask anyone who simply sells signals, as opposed to managing assets: how sensitive are the signals to execution on the next day? After all, unless those signals come at 3:55 PM, one is most likely going to be getting filled the next day.

Now, while this strategy is a bit of a tomato can in terms of how good volatility trading strategies can get (they can get a *lot* better in my opinion), I think it made for a simple little demonstration of this technique. Again, a huge thank you to Mr. Helmuth Vollmeier for so kindly keeping up his dropbox all this time for the volatility data!

Thanks for reading.

NOTE: I am currently contracting in a data science capacity in Chicago. You can reach me by email, or find me on my LinkedIn here. I’m always open to beers after work if you’re in the Chicago area.

NOTE 2: Today, on October 21, 2015, if you’re in Chicago, there’s a Chicago R Users Group conference at Jaks Tap at 6:00 PM. Free pizza, networking, and R, hosted by Paul Teetor, who’s a finance guy. Hope to see you there.

Volatility Stat-Arb Shenanigans

This post deals with an impossible-to-implement statistical arbitrage strategy using VXX and XIV. The strategy is simple: if the average daily return of VXX and XIV was positive, short both of them at the close. This strategy makes two assumptions of varying dubiousness: that one can “observe the close and act on the close”, and that one can short VXX and XIV.

So, recently, I decided to play around with everyone’s two favorite instruments on this blog–VXX and XIV, with the idea that “hey, these two instruments are diametrically opposed, so shouldn’t there be a stat-arb trade here?”

So, in order to do a lick-finger-in-the-air visualization, I implemented Mike Harris’s momersion indicator.

momersion <- function(R, n, returnLag = 1) {
  momentum <- sign(R * lag(R, returnLag))
  momentum[momentum < 0] <- 0
  momersion <- runSum(momentum, n = n)/n * 100
  colnames(momersion) <- "momersion"
  return(momersion)
}

And then I ran the spread through it.

xiv <- xts(read.zoo("longXIV.txt", format="%Y-%m-%d", sep=",", header=TRUE))
vxx <- xts(read.zoo("longVXX.txt", format="%Y-%m-%d", sep=",", header=TRUE))

xivRets <- Return.calculate(Cl(xiv))
vxxRets <- Return.calculate(Cl(vxx))

volSpread <- xivRets + vxxRets
volSpreadMomersion <- momersion(volSpread, n = 252)
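
The momersion chart itself isn’t reproduced here; a call along these lines (my assumption, not the post’s exact code) displays and summarizes it. A reading persistently below 50% means reversals outnumber continuations, i.e. mean reversion.

plot(volSpreadMomersion, main = "252-day momersion of the XIV + VXX return spread")
summary(coredata(volSpreadMomersion))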

In other words, this spread is certainly mean-reverting at just about all times.

And here is the code for the results from 2011 onward, once both XIV and VXX were actually trading.

#both sides
sig <- -lag(sign(volSpread))
longShort <- sig * volSpread
charts.PerformanceSummary(longShort['2011::'], main = 'long and short spread')

#long spread only
sig <- -lag(sign(volSpread))
sig[sig < 0] <- 0
longOnly <- sig * volSpread
charts.PerformanceSummary(longOnly['2011::'], main = 'long spread only')

#short spread only
sig <- -lag(sign(volSpread))
sig[sig > 0] <- 0
shortOnly <- sig * volSpread
charts.PerformanceSummary(shortOnly['2011::'], main = 'short spread only')

threeStrats <- na.omit(cbind(longShort, longOnly, shortOnly))["2011::"]
colnames(threeStrats) <- c("LongShort", "Long", "Short")
rbind(table.AnnualizedReturns(threeStrats), CalmarRatio(threeStrats))

Here are the equity curves:




With the following statistics:

                          LongShort      Long    Short
Annualized Return          0.115400 0.0015000 0.113600
Annualized Std Dev         0.049800 0.0412000 0.027900
Annualized Sharpe (Rf=0%)  2.317400 0.0374000 4.072100
Calmar Ratio               1.700522 0.0166862 7.430481

In other words, the short side is absolutely amazing as a trade–except for the one small fact of having it be impossible to actually execute, or at least as far as I’m aware. Anyhow, this was simply a for-fun post, but hopefully it served some purpose.

Thanks for reading.

NOTE: I am currently contracting and am looking to network in the Chicago area. You can find my LinkedIn here.

A Review of DIY Financial Advisor, by Gray, Vogel, and Foulke

This post will review the book DIY Financial Advisor, which I thought was a very solid read, especially pertinent to those who are beginners at investing (particularly systematic investing). While it isn’t perfect, in my opinion it’s about as good a primer on investing as one will find that remains accessible to the layperson.

Okay, so, official announcement: I am starting a new section of posts called “Reviews”, which came about from being asked to review this book. Essentially, I believe that anyone trying to create a good product that will help my readers deserves a spotlight, and I myself would like to know what cool and innovative financial services/products are coming about. For those who’d like exposure on this site: if you’re offering an affordable and innovative product or service that can be of use to an audience like mine, reach out to me.

Anyway, this past weekend, while relocating to Chicago, I had the pleasure of reading Alpha Architect’s (Gray, Vogel, Foulke) book “DIY Financial Advisor”, essentially making a case as to why a retail investor should be able to outperform the expert financial advisers that charge several percentage points a year to manage one’s wealth.

The book starts off by citing various famous studies showing how many subtle subconscious biases and fallacies human beings are susceptible to (there are plenty), such as falling for complexity, overconfidence, and so on—none of which emotionless computerized systems and models suffer from. Furthermore, it also goes on to provide several anecdotal examples of experts gone bust, such as Victor Niederhoffer, who blew up not once, but twice (and rumor has it he blew up a third time), and studies showing that systematic data analysis has been shown to beat expert recommendations time and again—including when experts were armed with the outputs of the models themselves. Throw in some quotes from Jim Simons (CEO of the best hedge fund in the world, Renaissance Technologies), and the first part of the book can be summed up like this:

1) Your rank and file human beings are susceptible to many subconscious biases.

2) Don’t trust the recommendations of experts. Even simpler models have systematically outperformed said “experts”. Some experts have even blown up, multiple times even (E.G. Victor Niederhoffer).

3) Building an emotionless system will keep these human fallacies from wrecking your investment portfolio.

4) Sticking to a well thought-out system is a good idea, even when it’s uncomfortable—such as when a marine has to wear a Kevlar helmet, hold extra ammo, and extra water in a 126 degree Iraq desert (just ask Dr./Captain Gray!).

This is all well and good—essentially making a very strong case for why you should build a system, and let the system do the investment allocation heavy lifting for you.

Next, the book goes into the FACTS acronym for manager selection—fees, access, complexity, taxes, and search. Fees: how much does it cost to have someone manage your investments? Pretty self-explanatory. Access: how often can you pull your capital? (E.g. a hedge fund that locks you up for a year, especially when it loses money, should be run from, and fast.) Complexity: do you understand how the investments are managed? Taxes: long-term capital gains, or shorter-term? Generally, very few decent systems will hold for a year or more, so in my opinion, expect to pay short-term taxes. Search: how hard is it to find a good candidate? Given the sea of hedge funds (especially those with short-term track records, or track records managing only tiny amounts of money), how hard is it to find a manager who’ll beat the benchmark after fees? Answer: very difficult. In short, all the glitzy, sophisticated managers you hear about? Far from a terrific deal, assuming you can even find one.

Continuing, the book goes into two separate anomalies that should form the foundation for any equity investment strategy – value, and momentum. The value system essentially goes long the top decile of the EBIT/TEV metric for the top 60% of market-cap companies traded on the NYSE every year. In my opinion, this is a system that is difficult to implement for the average investor in terms of managing the data process for this system, along with having the proper capital to allocate to all the various companies. That is, if you have a small amount of capital to invest, you might not be able to get that equal weight allocation across a hundred separate companies. However, I believe that with the QVAL and IVAL etfs (from Alpha Architect, and full disclosure, I have some of my IRA invested there), I think that the systematic value component can be readily accessed through these two funds.

The momentum strategy, however, is much simpler. There’s a momentum component, and a moving average component. There’s some math that shows that these two signals are related (a momentum signal is actually proportional to a difference of a moving average and its last value), and the ROBUST system that this book proposes is a combination of a momentum signal and an SMA signal. This is how it works. Assume you have $100,000 and 5 assets to invest in, for the sake of math. Divide the portfolio into a $50,000 momentum component and a $50,000 moving average component. Every month, allocate $10,000 to each of the five assets with a positive 12-month momentum, or stay in cash for that asset. Next, allocate another $10,000 to each of the five assets with a price above a 12-month simple moving average. It’s that simple, and given the recommended ETFs (commodities, bonds, foreign stocks, domestic stocks, real estate), it’s a system that most investors can rather easily implement, especially if they’ve been following my blog.
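
Since the allocation logic is spelled out so cleanly, here is a rough sketch of it in R. The tickers are stand-ins I chose for the five asset classes (not the book’s recommended list), the accounting is simplified (no taxes or transaction costs), and it follows this blog’s usual quantmod/PerformanceAnalytics workflow rather than anything from the book itself.

require(quantmod)
require(TTR)
require(PerformanceAnalytics)

symbols <- c("SPY", "EFA", "IEF", "DBC", "VNQ") # stand-ins for the five asset classes
getSymbols(symbols, from = "1990-01-01")
prices <- do.call(cbind, lapply(symbols, function(s) Ad(get(s))))
monthlyPrices <- prices[endpoints(prices, on = "months"),]
colnames(monthlyPrices) <- symbols

# half the capital follows 12-month momentum, half a 12-month SMA filter,
# each sleeve split equally across the five assets; failed slots sit in cash
momentumSignal <- (monthlyPrices/lag(monthlyPrices, 12) - 1) > 0
smaSignal <- monthlyPrices > xts(apply(monthlyPrices, 2, SMA, n = 12),
                        = index(monthlyPrices))
weights <- (momentumSignal + smaSignal)/10
cash <- xts(1 - rowSums(weights), = index(weights))
colnames(cash) <- "cash"
weights <- na.omit(cbind(weights, cash))

monthlyRets <- Return.calculate(monthlyPrices)
zeroRetCash <- xts(rep(0, nrow(monthlyRets)), = index(monthlyRets))
colnames(zeroRetCash) <- "cash"
componentRets <- na.omit(cbind(monthlyRets, zeroRetCash))

robust <- Return.portfolio(R = componentRets, weights = weights)
rbind(table.AnnualizedReturns(robust), maxDrawdown(robust), CalmarRatio(robust))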

For those interested in more market anomalies (especially value anomalies), there’s a chapter which contains a large selection of academic papers that go back and forth on the efficacies of various anomalies and how well they can predict returns. So, for those interested, it’s there.

The book concludes with some potential pitfalls which a DIY investor should be aware of when running his or her own investments, which is essentially another psychology chapter.

Overall, in my opinion, this book is fairly solid in terms of reasons why a retail investor should take the plunge and manage his or her own investments in a systematic fashion. Namely, that flesh and blood advisers are prone to human error, and on top of that, usually charge an unjustifiably high fee, and then deliver lackluster performance. The book recommends a couple of simple systems, one of which I think anyone who can copy and paste some rudimentary R can follow (the ROBUST momentum system), and another which I think most stay-at-home investors should shy away from (the value system, simply because of the difficulty of dealing with all that data), and defer to either or both of Alpha Architect’s 2 ETFs.

In terms of momentum, there are the ALFA, GMOM, and MTUM tickers (do your homework, I’m long ALFA) for various differing exposures to this anomaly, for those that don’t want to pay the constant transaction costs/incur short-term taxes of running their own momentum strategy.

In terms of where this book comes up short, here are my two cents:

Tested over nearly a century, the risk-reward tradeoffs of these systems can still be frightening at times. That is, for a system that delivers a CAGR of around 15%, are you still willing to risk a 50% drawdown? Losing half of everything on the cusp of retirement sounds very scary, no matter the long-term upside.

Furthermore, this book keeps things simple, with an intended audience of mom and pop investors (who have historically underperformed the S&P 500!). While I think it accomplishes this, I think there could have been value added, even for such individuals, by outlining some ETFs or simple ETF/ETN trading systems that can diversify a portfolio. For instance, while volatility trading sounds very scary, in the context of providing diversification, it may be worth looking into. For instance, 2008 was a banner year for most volatility trading strategies that managed to go long and stay long volatility through the crisis. I myself still have very little knowledge of all of the various exotic ETFs that are popping up left and right, and I would look very favorably on a reputable source that can provide a tour of some that can provide respectable return diversification to a basic equities/fixed-income/real asset ETF-based portfolio, as outlined in one of the chapters in this book, and other books, such as Meb Faber’s Global Asset Allocation (a very cheap ebook).

One last thing that I’d like to touch on—this book is written in a very accessible style, and even the math (yes, math!) is understandable for someone that’s passed basic algebra. It’s something that can be read in one or two sittings, and I’d recommend it to anyone that’s a beginner in investing or systematic investing.

Overall, I give this book a solid 4/5 stars. It’s simple, easily understood, and brings systematic investing to the masses in a way that many people can replicate at home. However, I would have liked to see some beyond-the-basics content as well given the plethora of different ETFs.

Thanks for reading.

Hypothesis-Driven Development Part V: Stop-Loss, Deflating Sharpes, and Out-of-Sample

This post will demonstrate a stop-loss rule inspired by Andrew Lo’s paper “When Do Stop-Loss Rules Stop Losses?”. Furthermore, it will demonstrate how to deflate a Sharpe ratio to account for the total number of trials conducted, as presented in a paper written by David H. Bailey and Marcos Lopez de Prado. Lastly, the strategy will be tested on out-of-sample ETFs, rather than the mutual funds that have been used up until now (which actually cannot be traded more than once every two months, but have been used simply for the purpose of demonstration).

First, however, I’d like to fix some code from the last post and append some results.

A reader asked about displaying the max drawdown for each of the previous rule-testing variants based off of volatility control, and Brian Peterson also recommended displaying max leverage, which this post will provide.

Here’s the updated rule backtest code:

ruleBacktest <- function(returns, nMonths, dailyReturns,
                         nSD=126, volTarget = .1) {
  nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
  nMonthAverage <- na.omit(xts(nMonthAverage, = index(returns)))
  nMonthAvgRank <- t(apply(nMonthAverage, 1, rank))
  nMonthAvgRank <- xts(nMonthAvgRank, = index(nMonthAverage))
  selection <- (nMonthAvgRank==5) * 1 #select highest average performance
  dailyBacktest <- Return.portfolio(R = dailyReturns, weights = selection)
  constantVol <- volTarget/(runSD(dailyBacktest, n = nSD) * sqrt(252))
  monthlyLeverage <- na.omit(constantVol[endpoints(constantVol, on = "months"),])
  wts <- cbind(monthlyLeverage, 1-monthlyLeverage)
  constantVolComponents <- cbind(dailyBacktest, 0)
  out <- Return.portfolio(R = constantVolComponents, weights = wts)
  out <- apply.monthly(out, Return.cumulative)
  maxLeverage <- max(monthlyLeverage, na.rm = TRUE)
  return(list(out, maxLeverage))
}

t1 <- Sys.time()
allPermutations <- list()
allDDs <- list()
leverages <- list()
for(i in seq(21, 252, by = 21)) {
  monthVariants <- list()
  ddVariants <- list()
  leverageVariants <- list()
  for(j in 1:12) {
    trial <- ruleBacktest(returns = monthRets, nMonths = j, dailyReturns = sample, nSD = i)
    sharpe <- table.AnnualizedReturns(trial[[1]])[3,]
    dd <- maxDrawdown(trial[[1]])
    monthVariants[[j]] <- sharpe
    ddVariants[[j]] <- dd
    leverageVariants[[j]] <- trial[[2]]
  }
  allPermutations[[i]] <- do.call(cbind, monthVariants)
  allDDs[[i]] <- do.call(cbind, ddVariants)
  leverages[[i]] <- do.call(cbind, leverageVariants)
}
allPermutations <- do.call(rbind, allPermutations)
allDDs <- do.call(rbind, allDDs)
leverages <- do.call(rbind, leverages)
t2 <- Sys.time()

Here are the two new plots.

require(reshape2)
require(ggplot2)

meltedDDs <- melt(allDDs)
colnames(meltedDDs) <- c("volFormation", "momentumFormation", "MaxDD")
ggplot(meltedDDs, aes(x = momentumFormation, y = volFormation, fill=MaxDD)) + 
  geom_tile()+scale_fill_gradient2(high="red", mid="yellow", low="green", midpoint = 0.2)

meltedLeverage <- melt(leverages)
colnames(meltedLeverage) <- c("volFormation", "momentumFormation", "MaxLeverage")
ggplot(meltedLeverage, aes(x = momentumFormation, y = volFormation, fill=MaxLeverage)) + 
  geom_tile()+scale_fill_gradient2(high="red", mid="yellow", low="green", midpoint = 2.5)



Here are the results presented as a hypothesis test–a linear regression of drawdowns and leverage against momentum formation period and volatility calculation period:

ddLM <- lm(meltedDDs$MaxDD~meltedDDs$volFormation + meltedDDs$momentumFormation)
summary(ddLM)

Call:
lm(formula = meltedDDs$MaxDD ~ meltedDDs$volFormation + meltedDDs$momentumFormation)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.08022 -0.03434 -0.00135  0.02911  0.20077 

Coefficients:
                             Estimate Std. Error t value Pr(>|t|)    
(Intercept)                  0.240146   0.010922   21.99  < 2e-16 ***
meltedDDs$volFormation      -0.000484   0.000053   -9.13  6.5e-16 ***
meltedDDs$momentumFormation  0.001533   0.001112    1.38     0.17    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.0461 on 141 degrees of freedom
Multiple R-squared:  0.377,	Adjusted R-squared:  0.368 
F-statistic: 42.6 on 2 and 141 DF,  p-value: 3.32e-15

levLM <- lm(meltedLeverage$MaxLeverage~meltedLeverage$volFormation + meltedDDs$momentumFormation)
summary(levLM)

Call:
lm(formula = meltedLeverage$MaxLeverage ~ meltedLeverage$volFormation + 
    meltedDDs$momentumFormation)

Residuals:
    Min      1Q  Median      3Q     Max 
-0.9592 -0.5179 -0.0908  0.3679  3.1022 

Coefficients:
                             Estimate Std. Error t value Pr(>|t|)    
(Intercept)                  4.076870   0.164243   24.82   <2e-16 ***
meltedLeverage$volFormation -0.009916   0.000797  -12.45   <2e-16 ***
meltedDDs$momentumFormation  0.009869   0.016727    0.59     0.56    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.693 on 141 degrees of freedom
Multiple R-squared:  0.524,	Adjusted R-squared:  0.517 
F-statistic: 77.7 on 2 and 141 DF,  p-value: <2e-16

Easy interpretation here–the shorter-term volatility estimates are unstable due to the one-asset rotation nature of the system. Particularly silly is using the one-month volatility estimate. Imagine the system just switched from the lowest-volatility instrument to the highest. It would then take excessive leverage and get blown up that month for no particularly good reason. A longer-term volatility estimate seems to do much better for this system. So, while the Sharpe is generally improved, the results become far more palatable when using a more stable calculation for volatility, which sets maximum leverage to about 2 when targeting an annualized volatility of 10%. Also, to note, the period to compute volatility matters far more than the momentum formation period when addressing volatility targeting, which lends credence (at least in this case) to so many people that say “the individual signal rules matter far less than the position-sizing rules!”. According to some, position sizing is often a way for people to mask only marginally effective (read: bad) strategies with a separate layer to create a better result. I’m not sure which side of the debate (even assuming there is one) I fall upon, but for what it’s worth, there it is.

Moving on, I want to test out one more rule, which is inspired by Andrew Lo’s stop-loss rule. Essentially, the way it works is this (to my interpretation): it evaluates a running standard deviation, and if the drawdown exceeds some threshold of the running standard deviation, to sit out for some fixed period of time, and then re-enter. According to Andrew Lo, stop-losses help momentum strategies, so it seems as good a rule to test as any.

However, rather than test different permutations of the stop rule on all 144 prior combinations of volatility-adjusted configurations, I’m going to take an ensemble approach, inspired by a conversation I had with Adam Butler, the CEO of ReSolve Asset Management, who stated that “we know momentum exists, but we don’t know the perfect way to measure it”. That is, from the configurations I just finished testing, I’ll take an equal weight of all 12 momentum formation periods, each with a 252-day rolling annualized volatility calculation, and equal-weight them every month.

Here are the base case results from that trial (bringing our total to 169).

strat <- list()
for(i in 1:12) {
  strat[[i]] <- ruleBacktest(returns = monthRets, nMonths = i, dailyReturns = sample, nSD = 252)[[1]]
}
strat <- do.call(cbind, strat)
strat <- Return.portfolio(R = na.omit(strat), rebalance_on="months")

rbind(table.AnnualizedReturns(strat), maxDrawdown(strat), CalmarRatio(strat))

With the following result:

Annualized Return                   0.12230
Annualized Std Dev                  0.10420
Annualized Sharpe (Rf=0%)           1.17340
Worst Drawdown                      0.09616
Calmar Ratio                        1.27167

Of course, also worth noting is that the annualized standard deviation is indeed very close to 10%, even with the ensemble. And it’s nice that there is a Sharpe past 1. Of course, given that these are mutual funds being backtested, these results are optimistic due to the unrealistic execution assumptions (can’t trade sooner than once every *two* months).

Anyway, let’s introduce our stop-loss rule, inspired by Andrew Lo’s paper.

loStopLoss <- function(returns, sdPeriod = 12, sdScaling = 1, sdThresh = 1.5, cooldown = 3) {
  stratRets <- list()
  count <- 1
  stratComplete <- FALSE
  originalRets <- returns
  ddThresh <- -runSD(returns, n = sdPeriod) * sdThresh * sdScaling
  while(!stratComplete) {
    retDD <- PerformanceAnalytics:::Drawdowns(returns)
    DDbreakthrough <- retDD < ddThresh & lag(retDD) > ddThresh
    firstBreak <- which.max(DDbreakthrough) #first threshold breakthrough; if 1, we have no breakthrough
      #the above line is unintuitive since this is a boolean vector, so it returns the first value of TRUE
    if(firstBreak > 1) { #we have a drawdown breakthrough if this is true
      stratRets[[count]] <- returns[1:firstBreak,] #subset returns through our threshold breakthrough
      nextPoint <- firstBreak + cooldown + 1 #next point of re-entry is the point after the cooldown period
      if(nextPoint <= (nrow(returns)-1)) { #if we can re-enter, subset the returns and return to top of loop
        returns <- returns[nextPoint:nrow(returns),]
        ddThresh <- ddThresh[nextPoint:nrow(ddThresh),]
        count <- count+1
      } else { #re-entry point is after data exhausted, end strategy
        stratComplete <- TRUE
      }
    } else { #there are no more critical drawdown breakthroughs, end strategy
      stratRets[[count]] <- returns
      stratComplete <- TRUE
    }
  }
  stratRets <- do.call(rbind, stratRets) #combine returns
  expandRets <- cbind(stratRets, originalRets) #account for all the days we missed
  expandRets[[,1]), 1] <- 0 #cash positions will be zero
  rets <- expandRets[,1]
  colnames(rets) <- paste(cooldown, sdThresh, sep="_")
  return(rets)
}

Essentially, the way it works is like this: the function computes all the drawdowns for a return series, along with its running standard deviation (non-annualized–if you want to annualize it, change the sdScaling parameter to something like sqrt(12) for monthly or sqrt(252) for daily data). Next, it looks for when the drawdown crossed a critical threshold, then cuts off that portion of returns and standard deviation history, and moves ahead in history by the cooldown period specified, and repeats. Most of the code is simply dealing with corner cases (is there even a time to use the stop rule? What about iterating when there isn’t enough data left?), and then putting the results back together again.

In any case, for the sake of simplicity, this function doesn’t use two different time scales (i.e. computing volatility using daily data while making decisions monthly), so I’m sticking with a 12-month rolling volatility, as opposed to a 252-day rolling volatility multiplied by the square root of 21.

Finally, here are another 54 runs to see if Andrew Lo’s stop-loss rule works here. Essentially, the intuition behind this is that if the strategy breaks down, it’ll continue to break down, so it would be prudent to just turn it off for a little while.

Here are the trial runs:

# threshVec and cooldownVec (the grids of stop thresholds and cooldown periods) are
# not preserved in this excerpt; the values below are assumed, chosen to yield the
# 54 combinations mentioned in the text
threshVec <- seq(1, 3, by = .25)  # 9 threshold multiples of the running standard deviation
cooldownVec <- 1:6                # 6 cooldown periods, in months

sharpes <- list()
params <- expand.grid(threshVec, cooldownVec)
for(i in 1:nrow(params)) {
  configuration <- loStopLoss(returns = strat, sdThresh = params[i,1],
                              cooldown = params[i, 2])
  sharpes[[i]] <- table.AnnualizedReturns(configuration)[3,]
}
sharpes <- do.call(rbind, sharpes)

loStoplossFrame <- cbind(params, sharpes)
loStoplossFrame$improvement <- loStoplossFrame[,3] - as.numeric(table.AnnualizedReturns(strat)[3,])

colnames(loStoplossFrame) <- c("Threshold", "Cooldown", "Sharpe", "Improvement")

And a plot of the results.

ggplot(loStoplossFrame, aes(x = Threshold, y = Cooldown, fill=Improvement)) + 
  geom_tile()+scale_fill_gradient2(high="green", mid="yellow", low="red", midpoint = 0)


Result: at this level, and at this frequency (retaining the monthly decision-making process), the stop-loss rule does essentially nothing to improve the risk-reward trade-off in the best-case scenarios, and in most scenarios, it simply hurts. 54 trials down the drain, bringing us up to 223 trials. So, what does the final result look like?


Here’s the final in-sample equity curve–and the first one featured in this entire series. That is deliberate, and a *feature* of hypothesis-driven development: playing whack-a-mole with equity curve bumps is a textbook case of overfitting. So, without further ado:

And now we can see why stop-loss rules generally didn’t add any value to this strategy. Simply, it had very few periods of sustained losses at the monthly frequency, and thus, very little opportunity for a stop-loss rule to add value. Sure, the occasional negative month crept in, but there was no period of sustained losses. Furthermore, Yahoo Finance may not have perfect fidelity on dividends on mutual funds from the late 90s to early 2000s, so the initial flat performance may also be a rather conservative estimate on the strategy’s performance (then again, as I stated before, using mutual funds themselves is optimistic given the unrealistic execution assumptions, so maybe it cancels out). Now, if this equity curve were to be presented without any context, one may easily question whether or not it was curve-fit. To an extent, one can argue that the volatility computation period may be optimized, though I’d hardly call a 252-day (one-year) rolling volatility estimate a curve-fit.

Next, I’d like to introduce another concept on this blog that I’ve seen colloquially addressed in other parts of the quantitative blogging space, particularly by Mike Harris of Price Action Lab, namely that of multiple hypothesis testing, and about the need to correct for that.

Luckily for that, Drs. David H. Bailey and Marcos Lopez De Prado wrote a paper to address just that. Also, I’d like to note one very cool thing about this paper: it actually has a worked-out numerical example! In my opinion, there are very few things as helpful as showing a simple result that transforms a collection of mathematical symbols into a result to demonstrate what those symbols actually mean in the span of one page. Oh, and it also includes *code* in the appendix (albeit Python — even though, you know, R is far more developed. If someone can get Marcos Lopez De Prado to switch to R–aka the better research language, that’d be a godsend!).

In any case, here’s the formula for the deflated Sharpe ratio, implemented straight from the paper.

deflatedSharpe <- function(sharpe, nTrials, varTrials, skew, kurt, numPeriods, periodsInYear) {
  emc <- .5772
  sr0_term1 <- (1 - emc) * qnorm(1 - 1/nTrials)
  sr0_term2 <- emc * qnorm(1 - 1/nTrials * exp(-1))
  sr0 <- sqrt(varTrials * 1/periodsInYear) * (sr0_term1 + sr0_term2)
  numerator <- (sharpe/sqrt(periodsInYear) - sr0)*sqrt(numPeriods - 1)
  skewnessTerm <- 1 - skew * sharpe/sqrt(periodsInYear)
  kurtosisTerm <- (kurt-1)/4*(sharpe/sqrt(periodsInYear))^2
  denominator <- sqrt(skewnessTerm + kurtosisTerm)
  result <- pnorm(numerator/denominator)
  pval <- 1-result
  return(pval)
}

The inputs are the strategy’s Sharpe ratio, the number of backtest runs, the variance of the Sharpe ratios of those backtest runs, the skewness of the candidate strategy, its non-excess kurtosis, the number of periods in the backtest, and the number of periods in a year. Unlike the De Prado paper, I choose to return the p-value (that is, 1 minus the probability computed above), so that smaller values indicate greater significance.

Let’s collect all our Sharpe ratios now.

# the remaining terms are reconstructed from the trial counts given in the text: 24
# signal-only trials (the sigBoxplots object from the prior posts), 144 volatility-
# targeting trials, the ensemble strategy itself, and the 54 stop-loss trials (223 total)
allSharpes <- c(as.numeric(table.AnnualizedReturns(sigBoxplots)[3,]), 
                unlist(allPermutations),
                as.numeric(table.AnnualizedReturns(strat)[3,]),
                loStoplossFrame$Sharpe)

And now, let’s plug and chug!

stratSignificant <- deflatedSharpe(sharpe = as.numeric(table.AnnualizedReturns(strat)[3,]),
                                   nTrials = length(allSharpes), varTrials = var(allSharpes),
                                   skew = as.numeric(skewness(strat)), kurt = as.numeric(kurtosis(strat)) + 3,
                                   numPeriods = nrow(strat), periodsInYear = 12)

And the result!

> stratSignificant
[1] 0.01311

Success! At least at the 5% level…though at the 1% level, or anything stricter, the strategy falls short.

So, one last thing! Out-of-sample testing on ETFs (and mutual funds during the ETF burn-in period)!

symbols2 <- c("CWB", "JNK", "TLT", "SHY", "PCY")
getSymbols(symbols2, from='1900-01-01')
prices2 <- list()
for(tmp in symbols2) {
  prices2[[tmp]] <- Ad(get(tmp))
prices2 <-, prices2)
colnames(prices2) <- substr(colnames(prices2), 1, 3)
returns2 <- na.omit(Return.calculate(prices2))

monthRets2 <- apply.monthly(returns2, Return.cumulative)

oosStrat <- list()
for(i in 1:12) {
  oosStrat[[i]] <- ruleBacktest(returns = monthRets2, nMonths = i, dailyReturns = returns2, nSD = 252)[[1]]
oosStrat <-, oosStrat)
oosStrat <- Return.portfolio(R = na.omit(oosStrat), rebalance_on="months")

symbols <- c("CNSAX", "FAHDX", "VUSTX", "VFISX", "PREMX")
getSymbols(symbols, from='1900-01-01')
prices <- list()
for(symbol in symbols) {
  prices[[symbol]] <- Ad(get(symbol))
prices <-, prices)
colnames(prices) <- substr(colnames(prices), 1, 5)
oosMFreturns <- na.omit(Return.calculate(prices))
oosMFmonths <- apply.monthly(oosMFreturns, Return.cumulative)

oosMF <- list()
for(i in 1:12) {
  oosMF[[i]] <- ruleBacktest(returns = oosMFmonths, nMonths = i, dailyReturns = oosMFreturns, nSD=252)[[1]]
oosMF <-, oosMF)
oosMF <- Return.portfolio(R = na.omit(oosMF), rebalance_on="months")
oosMF <- oosMF["2009-04/2011-03"]

fullOOS <- rbind(oosMF, oosStrat)

rbind(table.AnnualizedReturns(fullOOS), maxDrawdown(fullOOS), CalmarRatio(fullOOS))

And the results:

Annualized Return                    0.1273
Annualized Std Dev                   0.0901
Annualized Sharpe (Rf=0%)            1.4119
Worst Drawdown                       0.1061
Calmar Ratio                         1.1996

And one more equity curve (only the second!).

In other words, the out-of-sample statistics are comparable to the in-sample statistics. The Sharpe ratio is higher, the Calmar slightly lower. But on the whole, the performance has kept up. Unfortunately, the strategy is currently in a drawdown, but that’s the breaks.

So, whew. That concludes my first go at hypothesis-driven development, and has hopefully at least demonstrated the process to a satisfactory degree. What started off as a toy strategy went from a rejection, to a non-rejection, to demonstrating ideas from three separate papers, and ended with out-of-sample statistics that largely matched, if not outperformed, the in-sample statistics. For those thinking about investing in this strategy (again, here is the strategy: take 12 different portfolios, each selecting the asset with the highest momentum over months 1-12, target an annualized volatility of 10%, with volatility defined as the rolling annualized 252-day standard deviation, and equal-weight them every month), what I didn’t cover was turnover and taxes (this is a bond ETF strategy, so dividends will play a large role).

Now, one other request–many of the ideas for this blog come from my readers. I am especially interested in things to think about from readers with line-management responsibilities, as I think many of the questions from those individuals are likely the most universally interesting ones. If you’re one such individual, I’d appreciate an introduction, and knowing who more of the individuals in my reader base are.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up, consulting arrangements, and job discussions. Contact me through my email or through my LinkedIn, found here.

Hypothesis Driven Development Part IV: Testing The Barroso/Santa Clara Rule

This post will deal with applying the constant-volatility procedure written about by Barroso and Santa Clara in their paper “Momentum Has Its Moments”.

The last two posts dealt with evaluating the intelligence of the signal-generation process. While the strategy showed itself to be only marginally better than randomly tossing darts at a dartboard, and I was ready to reject it in favor of moving on to topics that are slightly less of a toy example than a little rotation strategy, Brian Peterson told me to see this strategy through to the end, including testing out rule processes.

First off, to make a distinction, rules are not signals. Rules are essentially a way to quantify what exactly to do assuming one acts upon a signal. Things such as position sizing, stop-loss processes, and so on, all fall under rule processes.

This rule deals with using leverage in order to target a constant volatility.

So here’s the idea: in their paper, Pedro Barroso and Pedro Santa Clara took the Fama-French momentum data, and found that the classic WML strategy certainly outperforms the market, but it has a critical downside, namely that of momentum crashes, in which being on the wrong side of a momentum trade will needlessly expose a portfolio to catastrophically large drawdowns. While this strategy is a long-only strategy (and with fixed-income ETFs, no less), and so would seem to be more robust against such massive drawdowns, there’s no reason to leave money on the table. To note, not only have Barroso and Santa Clara covered this phenomenon, but so have others, such as Tony Cooper in his paper “Alpha Generation and Risk Smoothing Using Volatility of Volatility”.

In any case, the setup here is simple: take the previous portfolios, consisting of 1-12 month momentum formation periods, and every month, compute the annualized standard deviation, using a 21-252 (by 21) formation period, for a total of 12 x 12 = 144 trials. (So this will put the total trials run so far at 24 + 144 = 168…bonus points if you know where this tidbit is going to go).

Here’s the code (again, following on from the last post, which follows from the second post, which follows from the first post in this series).


ruleBacktest <- function(returns, nMonths, dailyReturns,
                         nSD=126, volTarget = .1) {
  nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
  nMonthAverage <- xts(nMonthAverage, = index(returns))
  nMonthAvgRank <- t(apply(nMonthAverage, 1, rank))
  nMonthAvgRank <- xts(nMonthAvgRank, = index(nMonthAverage))
  selection <- (nMonthAvgRank==5) * 1 #select highest average performance
  dailyBacktest <- Return.portfolio(R = dailyReturns, weights = selection)
  constantVol <- volTarget/(runSD(dailyBacktest, n = nSD) * sqrt(252))
  monthlyLeverage <- na.omit(constantVol[endpoints(constantVol, on = "months"),])
  wts <- cbind(monthlyLeverage, 1-monthlyLeverage)
  constantVolComponents <- cbind(dailyBacktest, 0)
  out <- Return.portfolio(R = constantVolComponents, weights = wts)
  out <- apply.monthly(out, Return.cumulative)
  return(out)
}

t1 <- Sys.time()
allPermutations <- list()
for(i in seq(21, 252, by = 21)) {
  monthVariants <- list()
  for(j in 1:12) {
    trial <- ruleBacktest(returns = monthRets, nMonths = j, dailyReturns = sample, nSD = i)
    sharpe <- table.AnnualizedReturns(trial)[3,]
    monthVariants[[j]] <- sharpe
  }
  allPermutations[[i]] <- do.call(cbind, monthVariants)
}
allPermutations <- do.call(rbind, allPermutations)
t2 <- Sys.time()

rownames(allPermutations) <- seq(21, 252, by = 21)
colnames(allPermutations) <- 1:12

baselineSharpes <- table.AnnualizedReturns(algoPortfolios)[3,]
baselineSharpeMat <- matrix(rep(baselineSharpes, 12), ncol=12, byrow=TRUE)

diffs <- allPermutations - as.numeric(baselineSharpeMat)
meltedDiffs <-melt(diffs)

colnames(meltedDiffs) <- c("volFormation", "momentumFormation", "sharpeDifference")
ggplot(meltedDiffs, aes(x = momentumFormation, y = volFormation, fill=sharpeDifference)) + 
  geom_tile()+scale_fill_gradient2(high="green", mid="yellow", low="red")

meltedSharpes <- melt(allPermutations)
colnames(meltedSharpes) <- c("volFormation", "momentumFormation", "Sharpe")
ggplot(meltedSharpes, aes(x = momentumFormation, y = volFormation, fill=Sharpe)) + 
  geom_tile()+scale_fill_gradient2(high="green", mid="yellow", low="red", midpoint = mean(allPermutations))

Again, there’s no parallel code since this is a relatively small example, and I don’t know which OS any given instance of R runs on (windows/linux have different parallelization infrastructure).

So the idea here is to simply compare the Sharpe ratios with different volatility lookback periods against the baseline signal-process-only portfolios. The reason I use Sharpe ratios, and not say, CAGR, volatility, or drawdown is that Sharpe ratios are scale-invariant. In this case, I’m targeting an annualized volatility of 10%, but with a higher targeted volatility, one can obtain higher returns at the cost of higher drawdowns, or be more conservative. But the Sharpe ratio should stay relatively consistent within reasonable bounds.

So here are the results:

Sharpe improvements:

In this case, the diagram shows that on a whole, once the volatility estimation period becomes long enough, the results are generally positive. Namely, that if one uses a very short estimation period, that volatility estimate is more dependent on the last month’s choice of instrument, as opposed to the longer-term volatility of the system itself, which can create poor forecasts. Also to note is that the one-month momentum formation period doesn’t seem overly amenable to the constant volatility targeting scheme (there’s basically little improvement if not a slight drag on risk-adjusted performance). This is interesting in that the baseline Sharpe ratio for the one-period formation is among the best of the baseline performances. However, on a whole, the volatility targeting actually does improve risk-adjusted performance of the system, even one as simple as throwing all your money into one asset every month based on a single momentum signal.

Absolute Sharpe ratios:

In this case, the absolute Sharpe ratios look fairly solid for such a simple system. The 3, 7, and 9 month variants are slightly lower, but once the volatility estimation period reaches between 126 and 252 days, the results are fairly robust. The Barroso and Santa Clara paper uses a period of 126 days to estimate annualized volatility, which looks solid across the entire momentum formation period spectrum.

In any case, it seems the verdict is that a constant volatility target improves results.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email or through my LinkedIn, found here.