An Introduction to Portfolio Component Conditional Value At Risk

This post will introduce the component conditional value at risk mechanics found in PerformanceAnalytics, based on a paper written by Brian Peterson, Kris Boudt, and Peter Carl. It is an easy-to-call mechanism for computing component expected shortfall in asset returns as they apply to a portfolio. While the exact mechanics are fairly complex, the upside is that the running time is nearly instantaneous, and this method is a solid tool to include in asset allocation analysis.

For those interested in an in-depth analysis of the intuition of component conditional value at risk, I refer them to the paper written by Brian Peterson, Peter Carl, and Kris Boudt.

Essentially, here’s the idea: every asset in a given portfolio has a marginal contribution to the portfolio’s total conditional value at risk (also known as expected shortfall), that is, the expected loss when the loss surpasses a certain threshold. For instance, if you want to know your 5% expected shortfall, it’s the average of the worst 5 returns out of every 100 days, and so on. For daily-resolution returns, it may sound as though there would never be enough data over a sufficiently short time frame (one year or less) to estimate this directly, so the expected shortfall formula in PerformanceAnalytics defaults to an approximation using a Cornish-Fisher expansion, which delivers very good results so long as the p-value isn’t too extreme (that is, it works for relatively sane p-values such as the 1%-10% range).
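To make this concrete, here is a minimal sketch, using the edhec data set that appears below, comparing the Cornish-Fisher (modified) expected shortfall with the plain historical version for a single return series. The choice of the Convertible Arbitrage column and the 5% level are just for illustration.

require(PerformanceAnalytics)

data(edhec)

# Cornish-Fisher (modified) expected shortfall at the 5% level
ES(edhec[, "Convertible Arbitrage"], p = .95, method = "modified")

# historical expected shortfall: the average of the worst 5% of observed returns
ES(edhec[, "Convertible Arbitrage"], p = .95, method = "historical")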

Component Conditional Value at Risk has two uses: first off, given no input weights, it defaults to an equal-weight portfolio, which allows it to provide a risk estimate for each individual asset without requiring the researcher to create his or her own correlation/covariance heuristics. Secondly, when provided with a set of weights, the output changes to reflect the contribution of the various assets in proportion to those weights. This means that this methodology works very nicely with strategies that exclude assets based on momentum but need a weighting scheme for the remaining assets. Furthermore, using this methodology also allows an ex-post analysis of risk contribution, to see which instrument contributed what to risk.

First, a demonstration of how the mechanism works using the edhec data set. There is no strategy here, just a demonstration of syntax.

require(quantmod)
require(PerformanceAnalytics)

# monthly returns of 13 EDHEC hedge fund style indices
data(edhec)

# component expected shortfall for the full set of managers
tmp <- CVaR(edhec, portfolio_method = "component")

This will assume an equal-weight contribution from all of the funds in the edhec data set.

So tmp is the contribution to expected shortfall from each of the various edhec managers over the entire time period. Here’s the output:

$MES
           [,1]
[1,] 0.03241585

$contribution
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
          0.0074750513          -0.0028125166           0.0039422674           0.0069376579           0.0008077760
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
          0.0037114666           0.0043125937           0.0007173036           0.0036152960           0.0013693293
        Relative Value          Short Selling         Funds of Funds
          0.0037650911          -0.0048178690           0.0033924063 

$pct_contrib_MES
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
            0.23059863            -0.08676361             0.12161541             0.21402052             0.02491917
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
            0.11449542             0.13303965             0.02212817             0.11152864             0.04224258
        Relative Value          Short Selling         Funds of Funds
            0.11614968            -0.14862694             0.10465269

The salient part of this is the percent contribution (the last output). Notice that it can be negative, meaning that certain funds gain when others lose. At least, this was the case over the current data set. These assets diversify a portfolio and actually lower expected shortfall.
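One rough way to eyeball this diversification effect is to compare the portfolio expected shortfall with and without an allocation to a negative-contribution fund. The weights below are purely hypothetical and this is only a sketch, but comparing the two MES values shows whether including the diversifier lowers the overall figure.

# eleven funds plus Funds of Funds at 1/12th each, skipping Short Selling (column 12)
exShort <- CVaR(edhec, portfolio_method = "component",
                weights = c(rep(1/12, 11), 0, 1/12))

# all thirteen funds equally weighted, Short Selling included
withShort <- CVaR(edhec, portfolio_method = "component",
                  weights = rep(1/13, 13))

exShort$MES
withShort$MES

Moving on, here’s what happens when weights are supplied explicitly.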

> tmp2 <- CVaR(edhec, portfolio_method = "component", weights = c(rep(.1, 10), rep(0,3)))
> tmp2
$MES
           [,1]
[1,] 0.04017453

$contribution
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
          0.0086198045          -0.0046696862           0.0058778855           0.0109152240           0.0009596620
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
          0.0054824325           0.0050398011           0.0009638502           0.0044568333           0.0025287234
        Relative Value          Short Selling         Funds of Funds
          0.0000000000           0.0000000000           0.0000000000 

$pct_contrib_MES
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
            0.21455894            -0.11623499             0.14630875             0.27169512             0.02388732
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
            0.13646538             0.12544767             0.02399157             0.11093679             0.06294345
        Relative Value          Short Selling         Funds of Funds
            0.00000000             0.00000000             0.00000000

In this case, I equally weighted the first ten managers in the edhec data set, and put zero weight in the last three. Furthermore, we can see what happens when the weights are not equal.

> tmp3 <- CVaR(edhec, portfolio_method = "component", weights = c(.2, rep(.1, 9), rep(0,3)))
> tmp3
$MES
           [,1]
[1,] 0.04920372

$contribution
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
          0.0187406982          -0.0044391078           0.0057235762           0.0102706768           0.0007710434
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
          0.0051541429           0.0055944367           0.0008028457           0.0044085104           0.0021768951
        Relative Value          Short Selling         Funds of Funds
          0.0000000000           0.0000000000           0.0000000000 

$pct_contrib_MES
 Convertible Arbitrage             CTA Global  Distressed Securities       Emerging Markets  Equity Market Neutral
            0.38087972            -0.09021895             0.11632406             0.20873782             0.01567043
          Event Driven Fixed Income Arbitrage           Global Macro      Long/Short Equity       Merger Arbitrage
            0.10475109             0.11369947             0.01631677             0.08959710             0.04424249
        Relative Value          Short Selling         Funds of Funds
            0.00000000             0.00000000             0.00000000

This time, notice that as the weight in the convertible arbitrage manager increased, so too did its contribution to expected shortfall.

For a future backtest, I would like to make some data requests. I would like to use the universe found in Faber’s Global Asset Allocation book. That said, the simulations in that book go back to 1972, and I was wondering if anyone out there has daily returns for those assets/indices. While some ETFs go back into the early 2000s, there are some that start rather late such as DBC (commodities, early 2006), GLD (gold, early 2004), BWX (foreign bonds, late 2007), and FTY (NAREIT, early 2007). As an eight-year backtest would be a bit short, I was wondering if anyone had data with more history.

One other thing: I will be in New York for The Trading Show, speaking on the “programming wars” panel on October 6th.

Thanks for reading.

NOTE: While I am currently contracting, I am also looking for a permanent position which can benefit from my skills for when my current contract ends. If you have or are aware of such an opening, I will be happy to speak with you.

How To Compute Turnover With Return.Portfolio in R

This post will demonstrate how to take into account turnover when dealing with returns-based data using PerformanceAnalytics and the Return.Portfolio function in R. It will demonstrate this on a basic strategy on the nine sector SPDRs.

So, first off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are many individuals out there with a plethora of questions (many of which have already been demonstrated on this blog), occasionally an industry veteran, a PhD statistics student from Stanford, or another very intelligent individual will ask a question on a topic I haven’t yet touched on here, and that prompts a post demonstrating another technical aspect of R. This is one of those times.

So, this demonstration will be about computing turnover in returns space using the PerformanceAnalytics package. Simply put, outside of the PortfolioAnalytics package, PerformanceAnalytics, with its Return.portfolio function, is the go-to R package for portfolio management simulations, as it can take a set of weights and a set of returns and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics’s functions.

Again, the strategy is this: take the 9 three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest into it. Normalize across all invested sectors (that is, 1/9th if invested into all 9, 100% into 1 if only 1 invested into, 100% cash, denoted with a zero return vector, if no sectors are invested into). It’s a simple, toy strategy, as the strategy isn’t the point of the demonstration.

Here’s the basic setup code:

require(TTR)
require(PerformanceAnalytics)
require(quantmod)

# the nine three-letter sector SPDRs
symbols <- c("XLF", "XLK", "XLU", "XLE", "XLP", "XLI", "XLB", "XLV", "XLY")
getSymbols(symbols, src='yahoo', from = '1990-01-01')
prices <- list()
for(i in 1:length(symbols)) {
  tmp <- Ad(get(symbols[[i]]))
  prices[[i]] <- tmp
}
prices <- do.call(cbind, prices)

# Our signal is a simple adjusted price over 200 day SMA
signal <- prices > xts(apply(prices, 2, SMA, n = 200), order.by=index(prices))

# equal weight all assets with price above SMA200
returns <- Return.calculate(prices)
weights <- signal/(rowSums(signal)+1e-16)

# With Return.portfolio, need all weights to sum to 1
weights$zeroes <- 1 - rowSums(weights)
returns$zeroes <- 0

monthlyWeights <- na.omit(weights[endpoints(weights, on = 'months'),])
weights <- na.omit(weights)
returns <- na.omit(returns)

So, get the SPDRs, put them together, compute their returns, generate the signal, and create the zero (cash) vector, since Return.portfolio treats weights summing to less than 1 as a withdrawal of capital, and weights summing to more than 1 as the addition of more capital (big FYI here).
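A quick sanity check along those lines, using the objects just created: every row of the monthly weights, including the cash (“zeroes”) column, should sum to one, so that Return.portfolio doesn’t interpret the residual as a withdrawal or an injection of capital. This is just a sketch of how one might verify that.

# every monthly rebalance row should sum to (approximately) 1
summary(rowSums(monthlyWeights))

# the cash column picks up whatever isn't allocated to the sector SPDRs
head(monthlyWeights$zeroes)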

Now, here’s how to compute turnover:

out <- Return.portfolio(R = returns, weights = monthlyWeights, verbose = TRUE)
beginWeights <- out$BOP.Weight  # weights at the start of each period, after any rebalancing
endWeights <- out$EOP.Weight    # weights at the end of each period, after that period's drift
txns <- beginWeights - lag(endWeights)  # transactions implied by the difference
monthlyTO <- xts(rowSums(abs(txns[,1:9])), order.by=index(txns))  # total two-way turnover across the nine sectors
plot(monthlyTO)

So, the trick is this: when you call Return.portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight. These stand for Beginning Of Period Weight, and End Of Period Weight.

Turnover is computed from the difference between where the previous period’s drift left the portfolio and where the portfolio actually stands at the beginning of the next period. That is, the end-of-period weight is the beginning-of-period weight after accounting for that period’s drift/return for each asset, and the new beginning-of-period weight is the previous end-of-period weight plus any transacting that was done. Thus, in order to find the actual transactions (or turnover), one subtracts the previous end-of-period weight from the current beginning-of-period weight.
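To see those mechanics directly, here is a quick sketch using the objects computed above: line up the beginning-of-period weight, the lagged end-of-period weight, and their difference for a single sector (the first row is NA because there is no prior end-of-period weight to difference against).

# first sector only: BOP weight, previous period's EOP weight, and the implied transaction
head(round(cbind(beginWeights[, 1], lag(endWeights[, 1]), txns[, 1]), 4))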

This is what such transactions look like for this strategy.

Something we can do with such data is take a one-year rolling turnover, accomplished with the following code:

yearlyTO <- runSum(monthlyTO, 252)
plot(yearlyTO, main = "running one year turnover")

It looks like this:

This essentially means that one year’s worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is around 800% at maximum. That may be pretty high for some people.

Now, here’s the application when you penalize transaction costs at 20 basis points of the notional traded (that is, it costs 20 cents to transact $100).

txnCosts <- monthlyTO * -.0020
retsWithTxnCosts <- out$returns + txnCosts
compare <- na.omit(cbind(out$returns, retsWithTxnCosts))
colnames(compare) <- c("NoTxnCosts", "TxnCosts20BPs")
charts.PerformanceSummary(compare)
table.AnnualizedReturns(compare)

And the result:


                          NoTxnCosts TxnCosts20BPs
Annualized Return             0.0587        0.0489
Annualized Std Dev            0.1554        0.1553
Annualized Sharpe (Rf=0%)     0.3781        0.3149

So, at 20 basis points on transaction costs, that takes about one percent in returns per year out of this (admittedly, terrible) strategy. This is far from negligible.

So, that is how you actually compute turnover and transaction costs. In this case, the transaction cost model was very simple. However, given that Return.portfolio returns transactions at the individual asset level, one could get as complex as they would like with modeling the transaction costs.
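For instance, since the transactions are available per asset, one could charge a different cost to each instrument. Here is a rough sketch of what that might look like; the perAssetCosts vector is entirely hypothetical (a flat 20 basis points here, but the values could differ by liquidity).

# hypothetical one-way costs per asset, in decimal terms
perAssetCosts <- rep(0.0020, 9)

# per-period cost = sum over assets of |weight traded| times that asset's cost
assetCostMatrix <- matrix(perAssetCosts, nrow = nrow(txns), ncol = 9, byrow = TRUE)
periodCosts <- xts(rowSums(abs(txns[, 1:9]) * assetCostMatrix), order.by = index(txns))

retsAssetLevelCosts <- out$returns - periodCosts

From there, the comparison proceeds exactly as above, just with the asset-level cost series in place of the flat 20-basis-point charge.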

Thanks for reading.

NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you’ll be able to find me there.

Are R^2s Useful In Finance? Hypothesis-Driven Development In Reverse

This post will shed light on the values of R^2s behind two rather simplistic strategies: the simple 10-month SMA, and its relative, the 10-month momentum (which is simply a difference of SMAs, as Alpha Architect showed in their book DIY Financial Advisor).

Not too long ago, a friend of mine named Josh asked me a question regarding R^2s in finance. He’s finishing up his PhD in statistics at Stanford, so when people like that ask me questions, I’d like to answer them. His assertion is that in some instances, models that have less-than-perfect predictive power (R^2s of .4, for instance) can still deliver very promising predictions, and that if someone were to have a financial model that was able to explain 40% of the variance of returns, they could happily retire with that model making them very wealthy. Indeed, .4 is a very optimistic outlook (to put it lightly), as this post will show.

In order to illustrate this example, I took two “staple” strategies: buying SPY when its closing monthly price is above its ten-month simple moving average, and buying it when its ten-month momentum (basically the difference between a ten-month moving average and its lag) is positive. While these models are simplistic, they are ubiquitously talked about, and many momentum strategies are an improvement upon these baseline, “out-of-the-box” strategies.

Here’s the code to do that:

require(xts)
require(quantmod)
require(PerformanceAnalytics)
require(TTR)

getSymbols('SPY', from = '1990-01-01', src = 'yahoo')
adjustedPrices <- Ad(SPY)
monthlyAdj <- to.monthly(adjustedPrices, OHLC=TRUE)

spySMA <- SMA(Cl(monthlyAdj), 10)
spyROC <- ROC(Cl(monthlyAdj), 10)
spyRets <- Return.calculate(Cl(monthlyAdj))

smaRatio <- Cl(monthlyAdj)/spySMA - 1
smaSig <- smaRatio > 0
rocSig <- spyROC > 0

smaRets <- lag(smaSig) * spyRets
rocRets <- lag(rocSig) * spyRets

And here are the results:

strats <- na.omit(cbind(smaRets, rocRets, spyRets))
colnames(strats) <- c("SMA10", "MOM10", "BuyHold")
charts.PerformanceSummary(strats, main = "strategies")
rbind(table.AnnualizedReturns(strats), maxDrawdown(strats), CalmarRatio(strats))

                              SMA10     MOM10   BuyHold
Annualized Return         0.0975000 0.1039000 0.0893000
Annualized Std Dev        0.1043000 0.1080000 0.1479000
Annualized Sharpe (Rf=0%) 0.9346000 0.9616000 0.6035000
Worst Drawdown            0.1663487 0.1656176 0.5078482
Calmar Ratio              0.5860332 0.6270657 0.1757849

In short, the SMA10 and the 10-month momentum (aka ROC 10 aka MOM10) both handily outperform the buy and hold, not only in absolute returns, but especially in risk-adjusted returns (Sharpe and Calmar ratios). Again, simplistic analysis, and many models get much more sophisticated than this, but once again, simple, illustrative example using two strategies that outperform a benchmark (over the long term, anyway).

Now, the question is: what was the R^2 of these models? To answer this, I took a rolling five-year window that essentially asked how well these quantities (the ratio between the closing price and the moving average minus 1, or the ten-month momentum) predicted the next month’s returns. That is, what proportion of the variance is explained when monthly returns are regressed against the previous month’s signal quantities (perhaps not the best framing, since the trading signal itself is binary while the quantities being regressed are continuous, but let’s set that aside for the sake of illustration)?

Here’s the code to generate the answer.

predictorsAndPredicted <- na.omit(cbind(lag(smaRatio), lag(spyROC), spyRets))
R2s <- list()
for(i in 1:(nrow(predictorsAndPredicted)-59))  { #rolling five-year regression
  subset <- predictorsAndPredicted[i:(i+59),]
  smaLM <- lm(subset[,3]~subset[,1])
  smaR2 <- summary(smaLM)$r.squared
  rocLM <- lm(subset[,3]~subset[,2])
  rocR2 <- summary(rocLM)$r.squared
  R2row <- xts(cbind(smaR2, rocR2), order.by=last(index(subset)))
  R2s[[i]] <- R2row
}
R2s <- do.call(rbind, R2s)
par(mfrow=c(1,1))
colnames(R2s) <- c("SMA", "Momentum")
chart.TimeSeries(R2s, main = "R2s", legend.loc = 'topleft')

And the answer, in pictorial form:

In short, even in the best-case scenarios, namely the crises that give momentum/trend-following (call it what you will) its raison d’etre, that is, its risk-management appeal, the proportion of variance explained by the actual signal quantities was very small: around 20% in the best of times. But then again, think about what the R^2 value actually is: the percentage of variance explained by a predictor. If a small set of signals (let alone one) were able to explain the majority of the change in the returns of the S&P 500, or even a not-insignificant portion, such a person would stand to become very wealthy. More to the point, given that two strategies that handily outperform the market have R^2s that are exceptionally low for extended periods of time, it goes to show that holding the R^2 up as some sort of statistical holy grail is incorrect in the general sense, and anyone who does so is either painting with too broad a brush, creating disingenuous arguments, or should simply attempt to understand another field which may not work the way their intuition tells them.

Thanks for reading.

A Book Review of ReSolve Asset Management’s Adaptive Asset Allocation

This post will review “Adaptive Asset Allocation: Dynamic Global Portfolios to Profit in Good Times – and Bad”, the book by the people at ReSolve Asset Management. Overall, this book is a definite must-read for those who have never been exposed to the ideas within it. However, when it comes to a solution that can be fully replicated, this book is lacking.

Okay, it’s been a while since I reviewed my last book, DIY Financial Advisor, from the awesome people at Alpha Architect. This book, in my opinion, is set up in a similar sort of format.

This is the structure of the book, and my reviews along with it:

Part 1: Why in the heck you actually need to have a diversified portfolio, and why a diversified portfolio is a good thing. In a world in which there is so much emphasis put on single-security performance, this is certainly something that absolutely must be stated for those not familiar with portfolio theory. It highlights the example of two people–one from Abbott Labs, and one from Enron, who had so much of their savings concentrated in their company’s stock. Mr. Abbott got hit hard and changed his outlook on how to save for retirement, and Mr. Enron was never heard from again. Long story short: a diversified portfolio is good, and a properly diversified portfolio can offset one asset’s zigs with another asset’s zags. This is the key to establishing a stream of returns that will help meet financial goals. Basically, this is your common sense story (humans love being told stories) so as to motivate you to read the rest of the book. It does its job, though for someone like me, it’s more akin to a big “wait for it, wait for it…and there’s the reason why we should read on, as expected”.

Part 2: Something not often brought up in many corners of the quant world (because it’s real-life boring stuff) is the importance not only of average returns, but *when* those returns are achieved. Namely, imagine your everyday saver. At the beginning of their career, they’re taking home less salary and have less money in their retirement portfolio (or speculation portfolio, but the book uses retirement portfolio). As they get into middle age and closer to retirement, they have a lot more money in said retirement portfolio. Thus, strong returns are most vital when there is more cash available *to* the portfolio, and the difference between mediocre returns at the beginning and strong returns at the end of one’s working life, as opposed to vice versa, is *astronomical* and cannot be overstated. Furthermore, once *in* retirement, strong returns in the early years matter far more than returns in the later years, once money has been withdrawn from the portfolio (though I’d hope that a portfolio’s returns can be so strong that one can simply “live off the interest”). Or, put more intuitively: when you have $10,000 in your portfolio, a 20% drawdown doesn’t exactly hurt, because you can make more money and put more into your retirement account. But when you’re 62 and have $500,000 and suddenly lose 30% of everything, well, that’s massive. How much an investor wants to avoid such a scenario cannot be overstated. Warren Buffett once said that if you can’t bear to lose 50% of everything, you shouldn’t be in stocks. I really like this part of the book because it shows just how dangerous the ideas of “a 50% drawdown is unavoidable” and other “stay invested for the long haul” refrains are. Essentially, this part of the book makes a resounding statement that any financial adviser keeping his or her clients invested in equities when they’re near retirement age is doing something not very advisable, to put it lightly. In my opinion, those who advise pension funds should especially keep this section of the book in mind, since for some people, the long term may be coming to an end, and what matters is not only steady returns, but making sure the strategy doesn’t fall off a cliff and destroy decades of hard-earned savings.

Part 3: This part is also a very important read. First off, it lays out in clear terms that the long-term forward-looking return expectations for equities are at rock bottom. That is, the expected forward 15-year returns are very low, using approximately 75 years of evidence. Currently, according to the book, equity valuations imply a *negative* 15-year forward return. However, one thing I *will* take issue with is that while forward-looking long-term returns for equities may be very low, if one believed this chart and only invested in the stock market when forecast 15-year returns were above the long-term average, one would have missed out on both the 2003-2007 bull run *and* the one since 2009 that’s just about over. So, while the book makes a strong case for caution, readers should also take the chart with a grain of salt, in my opinion. However, another aspect of portfolio construction that this book covers is how to construct a robust (assets for any economic environment) and coherent (asset classes balanced in number) universe for implementation with any asset allocation algorithm. I think this bears repeating: universe selection is an extremely important topic in the discussion of asset allocation, yet there is very little discussion about it. Most research simply takes some “conventional universe”, such as “all stocks on the NYSE”, or “all the stocks in the S&P 500”, or “the entire set of the 50-60 most liquid futures”, without consideration for robustness and coherence. This book is the first source I’ve seen that actually puts this topic under a magnifying glass, beyond “finger in the air pick and choose”.

Part 4: And here’s where I level my main criticism at this book. For those that have read “Adaptive Asset Allocation: A Primer”, this section of the book is basically one giant copy and paste. It’s all one large buildup to “momentum rank + min-variance optimization”. All well and good, except that there’s very little detail beyond the basics as to how the minimum variance portfolio was constructed. Namely, what exactly is the minimum variance algorithm in use? Is it one of the poor variants susceptible to the numerical instability inherent in inverting sample covariance matrices? Or is it a heuristic like David Varadi’s minimum variance and minimum correlation algorithm? The one feeling I absolutely could not shake was that this book had a perfect opportunity to present a robust approach to minimum variance, and instead, it’s long on concept, short on details. While the theory of “maximize return for unit risk” is all well and good, the actual algorithm to put that theory into practice is not trivial, and the solutions taught to undergrads and master’s students have some well-known weaknesses. On top of this, one thing that got hammered into my head in the past was that ranking *also* has a weakness at the inclusion/exclusion point. E.g., if, out of ten assets, the fifth asset had a momentum of, say, 10.9%, and the sixth asset had a momentum of 10.8%, how are we so sure the fifth is so much better? And while I realize that this book was ultimately meant to be a primer, in my opinion, it would have been a no-objections five-star if there were an appendix that actually went into some detail on how to go from the simple concepts to a working implementation, and included a small numerical example of some algorithms that may address the well-known weaknesses. This doesn’t mean Greek/mathematical jargon. Just an appendix that acknowledged that not every reader is someone only picking up his first or second book about systematic investing, and that some of us are familiar with the “whys” and are more interested in the “hows”. Furthermore, I’d really love to know where the authors of this book got their data to back-date some of these ETFs into the 90s.

Part 5: some more formal research on topics already covered in the rest of the book–namely a section about how many independent bets one can take as the number of assets grow, if I remember it correctly. Long story short? You *easily* get the most bang for your buck among disparate asset classes, such as treasuries of various duration, commodities, developed vs. emerging equities, and so on, as opposed to trying to pick among stocks in the same asset class (though there’s some potential for alpha there…just…a lot less than you imagine). So in case the idea of asset class selection, not stock selection wasn’t beaten into the reader’s head before this point, this part should do the trick. The other research paper is something I briefly skimmed over which went into more depth about volatility and retirement portfolios, though I felt that the book covered this topic earlier on to a sufficient degree by building up the intuition using very understandable scenarios.

So that’s the review of the book. Overall, it’s a very solid piece of writing, and as far as establishing the *why*, it does an absolutely superb job. For those that aren’t familiar with the concepts in this book, this is definitely a must-read, and ASAP.

However, for those familiar with most of the concepts and looking for a detailed “how” procedure, this book does not deliver as much as I would have liked. And I realize that while it’s a bad idea to publish secret sauce, I bought this book in the hope of being exposed to a new algorithm presented in the understandable and intuitive language that the rest of the book was written in, and was left wanting.

Still, that by no means diminishes the impact of the rest of the book. For those who are more likely to be its target audience, it’s a 5/5. For those that wanted some specifics, it still has its gem on universe construction.

Overall, I rate it a 4/5.

Thanks for reading.

A Review of DIY Financial Advisor, by Gray, Vogel, and Foulke

This post will review the DIY Financial Advisor book, which I thought was a very solid read, and especially pertinent to those who are more beginners at investing (especially systematic investing). While it isn’t exactly perfect, it’s about as excellent a primer on investing as one will find out there that is accessible to the lay-person, in my opinion.

Okay, so, official announcement: I am starting a new section of posts called “Reviews”, which came about from being asked to review this book. Essentially, I believe that anyone trying to create a good product that will help my readers deserves a spotlight, and I myself would like to know what cool and innovative financial services/products are coming about. For those who’d like exposure on this site: if you’re offering an affordable and innovative product or service that can be of use to an audience like mine, reach out to me.

Anyway, this past weekend, while relocating to Chicago, I had the pleasure of reading Alpha Architect’s (Gray, Vogel, Foulke) book “DIY financial advisor”, essentially making a case as to why a retail investor should be able to outperform the expert financial advisers that charge several percentage points a year to manage one’s wealth.
The book starts off by citing various famous studies showing how many subtle subconscious biases and fallacies human beings are susceptible to (there are plenty), such as falling for complexity, overconfidence, and so on—none of which emotionless computerized systems and models suffer from. Furthermore, it also goes on to provide several anecdotal examples of experts gone bust, such as Victor Niederhoffer, who blew up not once, but twice (and rumor has it he blew up a third time), and studies showing that systematic data analysis beats expert recommendations time and again, including when the experts were armed with the outputs of the models themselves. Throw in some quotes from Jim Simons (founder of the best hedge fund in the world, Renaissance Technologies), and the first part of the book can be summed up like this:

1) Your rank and file human beings are susceptible to many subconscious biases.

2) Don’t trust the recommendations of experts. Even simpler models have systematically outperformed said “experts”. Some experts have even blown up, multiple times even (E.G. Victor Niederhoffer).

3) Building an emotionless system will keep these human fallacies from wrecking your investment portfolio.

4) Sticking to a well-thought-out system is a good idea, even when it’s uncomfortable—such as when a marine has to wear a Kevlar helmet and carry extra ammo and extra water in a 126-degree Iraqi desert (just ask Dr./Captain Gray!).

This is all well and good—essentially making a very strong case for why you should build a system, and let the system do the investment allocation heavy lifting for you.
Next, the book goes into the FACTS acronym of different manager selection—fees, access, complexity, taxes, and search. Fees: how much does it cost to have someone manage your investments? Pretty self-explanatory here. Access: how often can you pull your capital (EG a hedge fund that locks you up for a year especially when it loses money should be run from, and fast). Complexity: do you understand how the investments are managed? Taxes: long-term capital gains, or shorter-term? Generally, very few decent systems will be holding for a year or more, so in my opinion, expect to pay short-term taxes. Search: that is, how hard is it to find a good candidate? Given the sea of hedge funds (especially those with short-term track records, or only track records managing tiny amounts of money), how hard is it to find a manager who’ll beat the benchmark after fees? Answer: very difficult. In short, all the glitzy sophisticated managers you hear about? Far from a terrific deal, assuming you can even find one.

Continuing, the book goes into two separate anomalies that should form the foundation for any equity investment strategy – value, and momentum. The value system essentially goes long the top decile of the EBIT/TEV metric for the top 60% of market-cap companies traded on the NYSE every year. In my opinion, this is a system that is difficult to implement for the average investor in terms of managing the data process for this system, along with having the proper capital to allocate to all the various companies. That is, if you have a small amount of capital to invest, you might not be able to get that equal weight allocation across a hundred separate companies. However, I believe that with the QVAL and IVAL etfs (from Alpha Architect, and full disclosure, I have some of my IRA invested there), I think that the systematic value component can be readily accessed through these two funds.

The momentum strategy, however, is much simpler. There’s a momentum component, and a moving average component. There’s some math showing that these two signals are related (an n-month momentum signal is proportional to the difference between an n-month moving average and its previous value), and the ROBUST system that this book proposes is a combination of a momentum signal and an SMA signal. This is how it works. Assume you have $100,000 and 5 assets to invest in, for the sake of easy math. Divide the portfolio into a $50,000 momentum component and a $50,000 moving average component. Every month, allocate $10,000 to each of the five assets with a positive 12-month momentum, or stay in cash for that asset. Next, allocate another $10,000 to each of the five assets with a price above its 12-month simple moving average. It’s that simple, and given the recommended ETFs (commodities, bonds, foreign stocks, domestic stocks, real estate), it’s a system that most investors can rather easily implement, especially if they’ve been following my blog.
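Since the rule set is so mechanical, here is a rough sketch of how one might code the ROBUST weighting in R. To be clear, this is my own illustration, not code from the book, and it assumes a hypothetical xts object monthlyPrices holding month-end adjusted closes for the five asset-class ETFs.

require(xts)
require(TTR)

# momentum sleeve: positive 12-month momentum
momSignal <- ROC(monthlyPrices, n = 12, type = "discrete") > 0

# moving average sleeve: price above its 12-month simple moving average
smaSignal <- monthlyPrices > xts(apply(monthlyPrices, 2, SMA, n = 12),
                                 order.by = index(monthlyPrices))

# half the capital follows each sleeve; each sleeve splits its half equally
# across the five assets, with anything not allocated sitting in cash
robustWeights <- 0.5 * momSignal / ncol(monthlyPrices) +
                 0.5 * smaSignal / ncol(monthlyPrices)

# lag so that month-end signals drive the following month's allocation
robustWeights <- lag(robustWeights)

Feeding robustWeights and a matching set of monthly returns into Return.portfolio would then produce the strategy’s return stream.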

For those interested in more market anomalies (especially value anomalies), there’s a chapter which contains a large selection of academic papers that go back and forth on the efficacies of various anomalies and how well they can predict returns. So, for those interested, it’s there.

The book concludes with some potential pitfalls which a DIY investor should be aware of when running his or her own investments, which is essentially another psychology chapter.
Overall, in my opinion, this book is fairly solid in terms of reasons why a retail investor should take the plunge and manage his or her own investments in a systematic fashion. Namely, that flesh and blood advisers are prone to human error, and on top of that, usually charge an unjustifiably high fee, and then deliver lackluster performance. The book recommends a couple of simple systems, one of which I think anyone who can copy and paste some rudimentary R can follow (the ROBUST momentum system), and another which I think most stay-at-home investors should shy away from (the value system, simply because of the difficulty of dealing with all that data), and defer to either or both of Alpha Architect’s 2 ETFs.
In terms of momentum, there are the ALFA, GMOM, and MTUM tickers (do your homework, I’m long ALFA) for various differing exposures to this anomaly, for those that don’t want to pay the constant transaction costs/incur short-term taxes of running their own momentum strategy.

In terms of where this book comes up short, here are my two cents:
Tested over nearly a century, the risk-reward tradeoffs of these systems can still be frightening at times. That is, for a system that delivers a CAGR of around 15%, are you still willing to risk a 50% drawdown? Losing half of everything on the cusp of retirement sounds very scary, no matter the long-term upside.

Furthermore, this book keeps things simple, with an intended audience of mom and pop investors (who have historically underperformed the S&P 500!). While I think it accomplishes this, I think there could have been value added, even for such individuals, by outlining some ETFs or simple ETF/ETN trading systems that can diversify a portfolio. For instance, while volatility trading sounds very scary, in the context of providing diversification, it may be worth looking into. For instance, 2008 was a banner year for most volatility trading strategies that managed to go long and stay long volatility through the crisis. I myself still have very little knowledge of all of the various exotic ETFs that are popping up left and right, and I would look very favorably on a reputable source that can provide a tour of some that can provide respectable return diversification to a basic equities/fixed-income/real asset ETF-based portfolio, as outlined in one of the chapters in this book, and other books, such as Meb Faber’s Global Asset Allocation (a very cheap ebook).

One last thing that I’d like to touch on—this book is written in a very accessible style, and even the math (yes, math!) is understandable for someone that’s passed basic algebra. It’s something that can be read in one or two sittings, and I’d recommend it to anyone that’s a beginner in investing or systematic investing.

Overall, I give this book a solid 4/5 stars. It’s simple, easily understood, and brings systematic investing to the masses in a way that many people can replicate at home. However, I would have liked to see some beyond-the-basics content as well given the plethora of different ETFs.

Thanks for reading.

Hypothesis Driven Development Part IV: Testing The Barroso/Santa Clara Rule

This post will deal with applying the constant-volatility procedure written about by Barroso and Santa Clara in their paper “Momentum Has Its Moments”.

The last two posts dealt with evaluating the intelligence of the signal-generation process. While the strategy showed itself to be only marginally better than randomly tossing darts at a dartboard, and I was ready to reject it and move on to topics that are slightly less of a toy example than a little rotation strategy, Brian Peterson told me to see this strategy through to the end, including testing out rule processes.

First off, to make a distinction, rules are not signals. Rules are essentially a way to quantify what exactly to do assuming one acts upon a signal. Things such as position sizing, stop-loss processes, and so on, all fall under rule processes.

This rule deals with using leverage in order to target a constant volatility.

So here’s the idea: in their paper, Pedro Barroso and Pedro Santa Clara took the Fama-French momentum data and found that the classic WML strategy certainly outperforms the market, but it has a critical downside, namely that of momentum crashes, in which being on the wrong side of a momentum trade will needlessly expose a portfolio to catastrophically large drawdowns. While this strategy is a long-only strategy (and with fixed-income ETFs, no less), and so would seem to be more robust against such massive drawdowns, there’s no reason to leave money on the table. To note, not only have Barroso and Santa Clara covered this phenomenon, but so have others, such as Tony Cooper in his paper “Alpha Generation and Risk Smoothing Using Volatility of Volatility”.
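Before running the full grid of trials, the core of the rule is just a ratio: the target volatility divided by trailing realized volatility. Here is a minimal sketch of that piece in isolation, assuming dailyRets is a hypothetical single-column xts of daily strategy returns.

require(TTR)

targetVol <- 0.10   # annualized volatility target

# trailing annualized volatility over a 126-day window
trailingVol <- runSD(dailyRets, n = 126) * sqrt(252)

# leverage applied to the strategy; the remainder (1 - leverage) sits in cash
leverage <- targetVol / trailingVol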

In any case, the setup here is simple: take the previous portfolios, consisting of 1-12 month momentum formation periods, and every month, compute the annualized standard deviation, using a 21-252 (by 21) formation period, for a total of 12 x 12 = 144 trials. (So this will put the total trials run so far at 24 + 144 = 168…bonus points if you know where this tidbit is going to go).

Here’s the code (again, following on from the last post, which follows from the second post, which follows from the first post in this series).

require(reshape2)
require(ggplot2)

ruleBacktest <- function(returns, nMonths, dailyReturns,
                         nSD=126, volTarget = .1) {
  nMonthAverage <- apply(returns, 2, runSum, n = nMonths)
  nMonthAverage <- xts(nMonthAverage, order.by = index(returns))
  nMonthAvgRank <- t(apply(nMonthAverage, 1, rank))
  nMonthAvgRank <- xts(nMonthAvgRank, order.by=index(returns))
  selection <- (nMonthAvgRank==5) * 1 #select highest average performance
  dailyBacktest <- Return.portfolio(R = dailyReturns, weights = selection)
  constantVol <- volTarget/(runSD(dailyBacktest, n = nSD) * sqrt(252))
  monthlyLeverage <- na.omit(constantVol[endpoints(constantVol, on = "months"),])
  wts <- cbind(monthlyLeverage, 1-monthlyLeverage)
  constantVolComponents <- cbind(dailyBacktest, 0)
  out <- Return.portfolio(R = constantVolComponents, weights = wts)
  out <- apply.monthly(out, Return.cumulative)
  return(out)
}

t1 <- Sys.time()
allPermutations <- list()
for(i in seq(21, 252, by = 21)) {
  monthVariants <- list()
  for(j in 1:12) {
    trial <- ruleBacktest(returns = monthRets, nMonths = j, dailyReturns = sample, nSD = i)
    sharpe <- table.AnnualizedReturns(trial)[3,]
    monthVariants[[j]] <- sharpe
  }
  allPermutations[[i]] <- do.call(c, monthVariants)
}
allPermutations <- do.call(rbind, allPermutations)
t2 <- Sys.time()
print(t2-t1)

rownames(allPermutations) <- seq(21, 252, by = 21)
colnames(allPermutations) <- 1:12

baselineSharpes <- table.AnnualizedReturns(algoPortfolios)[3,]
baselineSharpeMat <- matrix(rep(baselineSharpes, 12), ncol=12, byrow=TRUE)

diffs <- allPermutations - as.numeric(baselineSharpeMat)
require(reshape2)
require(ggplot2)
meltedDiffs <-melt(diffs)

colnames(meltedDiffs) <- c("volFormation", "momentumFormation", "sharpeDifference")
ggplot(meltedDiffs, aes(x = momentumFormation, y = volFormation, fill=sharpeDifference)) + 
  geom_tile()+scale_fill_gradient2(high="green", mid="yellow", low="red")

meltedSharpes <- melt(allPermutations)
colnames(meltedSharpes) <- c("volFormation", "momentumFormation", "Sharpe")
ggplot(meltedSharpes, aes(x = momentumFormation, y = volFormation, fill=Sharpe)) + 
  geom_tile()+scale_fill_gradient2(high="green", mid="yellow", low="red", midpoint = mean(allPermutations))

Again, there’s no parallel code since this is a relatively small example, and I don’t know which OS any given instance of R runs on (Windows and Linux have different parallelization infrastructure).

So the idea here is to simply compare the Sharpe ratios with different volatility lookback periods against the baseline signal-process-only portfolios. The reason I use Sharpe ratios, and not say, CAGR, volatility, or drawdown is that Sharpe ratios are scale-invariant. In this case, I’m targeting an annualized volatility of 10%, but with a higher targeted volatility, one can obtain higher returns at the cost of higher drawdowns, or be more conservative. But the Sharpe ratio should stay relatively consistent within reasonable bounds.
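As a quick illustration of that scale-invariance (a sketch, with rets standing in for any hypothetical daily return series): doubling the exposure doubles both the mean and the volatility, so the arithmetic annualized Sharpe ratio is unchanged.

# both lines return the same annualized (arithmetic) Sharpe ratio
mean(rets, na.rm = TRUE) / sd(rets, na.rm = TRUE) * sqrt(252)
mean(2 * rets, na.rm = TRUE) / sd(2 * rets, na.rm = TRUE) * sqrt(252)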

So here are the results:

Sharpe improvements:

In this case, the diagram shows that on a whole, once the volatility estimation period becomes long enough, the results are generally positive. Namely, that if one uses a very short estimation period, that volatility estimate is more dependent on the last month’s choice of instrument, as opposed to the longer-term volatility of the system itself, which can create poor forecasts. Also to note is that the one-month momentum formation period doesn’t seem overly amenable to the constant volatility targeting scheme (there’s basically little improvement if not a slight drag on risk-adjusted performance). This is interesting in that the baseline Sharpe ratio for the one-period formation is among the best of the baseline performances. However, on a whole, the volatility targeting actually does improve risk-adjusted performance of the system, even one as simple as throwing all your money into one asset every month based on a single momentum signal.

Absolute Sharpe ratios:

In this case, the absolute Sharpe ratios look fairly solid for such a simple system. The 3, 7, and 9 month variants are slightly lower, but once the volatility estimation period reaches between 126 and 252 days, the results are fairly robust. The Barroso and Santa Clara paper uses a period of 126 days to estimate annualized volatility, which looks solid across the entire momentum formation period spectrum.

In any case, it seems the verdict is that a constant volatility target improves results.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.

Hypothesis Driven Development Part III: Monte Carlo In Asset Allocation Tests

This post will show how to use Monte Carlo to test for signal intelligence.

Although I had rejected this strategy in the last post, I was asked to do a Monte Carlo analysis of a thousand random portfolios to see how the various signal processes performed against said distribution. Essentially, the process is quite simple: as I’m selecting one asset each month to hold, I simply generate a random number between 1 and the number of assets (5 in this case), and hold that asset for the month. Repeat this process for the number of months, then repeat that entire procedure a thousand times, and see where the signal processes fall across the resulting distribution.

I didn’t use parallel processing here since Windows and Linux-based R have different parallel libraries, and in the interest of having the code work across all machines, I decided to leave it off.

Here’s the code:

randomAssetPortfolio <- function(returns) {
  numAssets <- ncol(returns)
  numPeriods <- nrow(returns)
  assetSequence <- sample.int(numAssets, numPeriods, replace=TRUE)
  wts <- matrix(nrow = numPeriods, ncol=numAssets, 0)
  wts <- xts(wts, order.by=index(returns))
  for(i in 1:nrow(wts)) {
    wts[i,assetSequence[i]] <- 1
  }
  randomPortfolio <- Return.portfolio(R = returns, weights = wts)
  return(randomPortfolio)
}

t1 <- Sys.time()
randomPortfolios <- list()
set.seed(123)
for(i in 1:1000) {
  randomPortfolios[[i]] <- randomAssetPortfolio(monthRets)
}
randomPortfolios <- do.call(cbind, randomPortfolios)
t2 <- Sys.time()
print(t2-t1)

algoPortfolios <- sigBoxplots[,1:12]
randomStats <- table.AnnualizedReturns(randomPortfolios)
algoStats <- table.AnnualizedReturns(algoPortfolios)

par(mfrow=c(3,1))
hist(as.numeric(randomStats[1,]), breaks = 20, main = 'histogram of monte carlo annualized returns',
     xlab='annualized returns')
abline(v=as.numeric(algoStats[1,]), col='red')
hist(as.numeric(randomStats[2,]), breaks = 20, main = 'histogram of monte carlo volatilities',
     xlab='annualized vol')
abline(v=as.numeric(algoStats[2,]), col='red')
hist(as.numeric(randomStats[3,]), breaks = 20, main = 'histogram of monte carlo Sharpes',
     xlab='Sharpe ratio')
abline(v=as.numeric(algoStats[3,]), col='red')

allStats <- cbind(randomStats, algoStats)
aggregateMean <- apply(allStats, 1, mean)
aggregateDevs <- apply(allStats, 1, sd)

algoPs <- 1-pnorm(as.matrix((algoStats - aggregateMean)/aggregateDevs))

plot(as.numeric(algoPs[1,])~c(1:12), main='Return p-values',
     xlab='Formation period', ylab='P-value')
abline(h=0.05, col='red')
abline(h=.1, col='green')

plot(1-as.numeric(algoPs[2,])~c(1:12), ylim=c(0, .5), main='Annualized vol p-values',
     xlab='Formation period', ylab='P-value')
abline(h=0.05, col='red')
abline(h=.1, col='green')

plot(as.numeric(algoPs[3,])~c(1:12), main='Sharpe p-values',
     xlab='Formation period', ylab='P-value')
abline(h=0.05, col='red')
abline(h=.1, col='green')

And here are the results:


In short, compared to monkeys throwing darts, to borrow some phrasing from the Price Action Lab blog, these signal processes are only marginally intelligent, if at all, depending on the variation one chooses. Still, it was recommended that I see this process through to the end and evaluate rules, so next time, I’ll evaluate one easy-to-implement rule.

Thanks for reading.

NOTE: while I am currently consulting, I am always open to networking, meeting up (Philadelphia and New York City both work), consulting arrangements, and job discussions. Contact me through my email at ilya.kipnis@gmail.com, or through my LinkedIn, found here.