Happy New Year!

So, this is something I’ve been working on ahead of its official publication, in direct contact with the original paper’s author, Dr. Wouter Keller, who also published the Flexible Asset Allocation algorithm that I improved on earlier. That means this is the first place on the entire internet that you’ll see it outside SSRN, and certainly one of the few places that will extend it. The new algorithm is called Elastic Asset Allocation, and it seems to be a simpler yet more general algorithm. Here’s the link to the SSRN paper.

Essentially, the algorithm can be explained simply:

Use monthly data.

Over a 12-month window, for each asset, we want: the per-month average of the 1, 3, 6, and 12 month cumulative returns (that is, sum those four cumulative returns and divide by 22, since 1 + 3 + 6 + 12 = 22 months), the volatility of the individual monthly returns, and the correlation of the asset’s returns to the equal-weight average of the universe’s returns. Then arrange those values in the following expression:
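To make that averaging concrete, here is a minimal base-R sketch on a made-up vector of monthly returns (`cumRet` is a hypothetical helper for illustration, not something from the paper):

```r
# Per-month average of the 1, 3, 6, and 12 month cumulative returns.
set.seed(42)
monthly <- rnorm(12, mean = 0.01, sd = 0.04)  # 12 made-up monthly simple returns

cumRet <- function(x) prod(1 + x) - 1  # compounded return over a window

r1  <- cumRet(tail(monthly, 1))
r3  <- cumRet(tail(monthly, 3))
r6  <- cumRet(tail(monthly, 6))
r12 <- cumRet(monthly)

# Sum the four cumulative returns and divide by 22 (= 1 + 3 + 6 + 12 months)
rBar <- (r1 + r3 + r6 + r12) / 22
```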

z_i = (r_i ^ wR * (1-c_i) ^ wC / (v_i + error) ^ wV) ^ (wS + error) if r_i > 0, 0 otherwise, where:

r_i are the average returns

c_i are the correlations

v_i are the volatilities

wR, wC, and wV are the respective weights for returns, correlations, and volatilities.

Next, select the top N of P assets to include in the portfolio.

Then, the weight for each selected security is its z_i divided by the sum of the selected z_i’s (that is, the selected scores normalized to sum to one).

If a crash protection rule is enabled, compute CP, the number of securities in the entire universe (not just the selected securities) with average returns below zero, divided by the size of the universe, and multiply all weights by 1-CP. Reinvest the remainder in the cash asset (something like VBMFX, VFISX, SHY, etc.). In FAA, I called this the risk-free asset, but in this case, it’s simply the cash asset.

The error term is 1e-6, or some other small value to avoid divide by zero errors.

Finally, wS is an aggression parameter. Setting this value to 0 essentially forces an equal-weight portfolio among the selected assets, while letting it go to infinity will simply select the single best asset each month.
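Putting the steps above together, here is a compact, self-contained base-R sketch on made-up data (the seed, the seven-asset universe, and the crude stand-in for the 1/3/6/12-month average are all illustrative, not the paper’s test set):

```r
set.seed(1)
nAssets <- 7
rets <- matrix(rnorm(12 * nAssets, mean = 0.02, sd = 0.04), nrow = 12)  # 12 months x 7 assets

# Inputs to the z-score: average returns, volatilities, correlations to the EW index
rBar <- colSums(rets) / 22                 # crude stand-in for the 1/3/6/12-month average
vols <- apply(rets, 2, sd)
cors <- cor(rets, rowMeans(rets))[, 1]     # correlation of each asset to the EW index

wR <- 1; wC <- 0.5; wV <- 0; wS <- 2; eps <- 1e-6

z <- (rBar^wR * (1 - cors)^wC / (vols + eps)^wV)^(wS + eps)
z[rBar < 0] <- 0                           # zero out assets with negative average return

# Crash protection: fraction of the entire universe with a zeroed-out score
CP <- sum(z == 0) / length(z)

# Select the top N assets by z, normalize their scores, and apply the CP haircut
bestN <- 3
sel <- z >= sort(z, decreasing = TRUE)[bestN]
w <- z * sel / sum(z * sel)
w <- w * (1 - CP)                          # the remaining CP fraction goes to the cash asset
```

Note that with wS = 0 (plus the jitter), every surviving z_i is essentially 1 before normalization, recovering equal weights among the selected assets; as wS grows, weight concentrates in the highest-scoring asset.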

Anyhow, let’s look at a prototype of the code (it will break on NA returns, and it still doesn’t compute excess returns over some treasury security), using the original FAA securities. The weights are wR=1, wC=.5, and wV=0, with wS = 2 for an offensive scheme, or wS = .5 and wC = 1 for a more defensive scheme. Crash protection is enabled.

require(quantmod)
require(PerformanceAnalytics)

symbols <- c("VTSMX", "FDIVX", "VEIEX", "VBMFX", "VFISX", "VGSIX", "QRAAX")
getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
ep <- endpoints(prices, "months")
prices <- prices[ep,]
prices <- prices["1997-03::"]

EAA <- function(monthlyPrices, wR=1, wV=0, wC=.5, wS=2, errorJitter=1e-6,
                cashAsset=NULL, bestN=1+ceiling(sqrt(ncol(monthlyPrices))),
                enableCrashProtection = TRUE, returnWeights=FALSE) {
  returns <- Return.calculate(monthlyPrices)
  returns <- returns[-1,] #return calculation uses one observation

  if(is.null(cashAsset)) {
    returns$zeroes <- 0
    cashAsset <- "zeroes"
    warning("No cash security specified. Recommended to use one of: quandClean('CHRIS/CME_US'), SHY, or VFISX. Using vector of zeroes instead.")
  }

  cashCol <- grep(cashAsset, colnames(returns))
  wS <- wS + errorJitter #add the jitter once, outside the loop, so it doesn't compound

  weights <- list()
  for(i in 1:(nrow(returns)-11)) {
    returnsData <- returns[i:(i+11),] #each chunk will be 12 months of returns data

    #per-month mean of cumulative returns of 1, 3, 6, and 12 month periods
    periodReturn <- ((returnsData[12,] + Return.cumulative(returnsData[10:12,]) +
                      Return.cumulative(returnsData[7:12,]) + Return.cumulative(returnsData)))/22

    vols <- StdDev.annualized(returnsData)
    mktIndex <- xts(rowMeans(returnsData), order.by=index(returnsData)) #equal weight returns of universe
    cors <- cor(returnsData, mktIndex) #correlations to market index

    weightedRets <- periodReturn ^ wR
    weightedCors <- (1 - as.numeric(cors)) ^ wC
    weightedVols <- (vols + errorJitter) ^ wV

    z <- (weightedRets * weightedCors / weightedVols) ^ wS #compute z_i
    z[periodReturn < 0] <- 0 #zero out negative returns

    crashProtection <- sum(z==0)/length(z) #compute crash protection cash cushion

    orderedZ <- sort(as.numeric(z), decreasing=TRUE)
    selectedSecurities <- z >= orderedZ[bestN]
    preNormalizedWeights <- z*selectedSecurities #select top N securities, keeping z_i scores
    periodWeights <- preNormalizedWeights/sum(preNormalizedWeights) #normalize
    if (enableCrashProtection) {
      periodWeights <- periodWeights * (1-crashProtection) #CP rule
    }
    weights[[i]] <- periodWeights
  }

  weights <- do.call(rbind, weights)
  weights[, cashCol] <- weights[, cashCol] + 1-rowSums(weights) #add all non-invested weight to the cash asset
  strategyReturns <- Return.rebalancing(R = returns, weights = weights) #compute strategy returns

  if(returnWeights) {
    return(list(weights, strategyReturns))
  } else {
    return(strategyReturns)
  }
}

offensive <- EAA(prices, cashAsset="VBMFX", bestN=3)
defensive <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1)
compare <- cbind(offensive, defensive)
colnames(compare) <- c("Offensive", "Defensive")
stats <- rbind(Return.annualized(compare)*100,
               StdDev.annualized(compare)*100,
               maxDrawdown(compare)*100,
               SharpeRatio.annualized(compare))
rownames(stats)[3] <- "Worst Drawdown"
charts.PerformanceSummary(compare)

And here are the results:

                                Offensive Defensive
Annualized Return               12.085183 10.197450
Annualized Standard Deviation   11.372610  8.633327
Worst Drawdown                  12.629251  8.134785
Annualized Sharpe Ratio (Rf=0%)  1.062657  1.181173

And the resultant equity curves:

Risk and return are on full display here. The defensive variant’s CAGR exceeding its max drawdown makes it look attractive at first glance, but I’m fairly certain that’s due to investing heavily in bond funds during QE.

Not a bad algorithm, though the fact that there are only 7 securities here leaves it open to some idiosyncratic risk. The low-hanging fruit, of course, is that the correlation step uses a single-pass variant, meaning it is quite possible to select assets that each show low correlation to the non-selected assets but are highly correlated to each other. However, since the actual correlations are used here, as opposed to correlation ranks, I suspect a stepwise correlation selection process would have to be written specifically for this algorithm. I’ll no doubt run this by David Varadi to see how to properly set up a stepwise correlation algorithm when correlations interact with other values (returns, volatility).

In any case, what I like about this algorithm is that it is not only a security *selection* algorithm, which is what FAA was, but also a *weighting* algorithm, rather than one that simply leaves all selected assets at equal weight.

At the moment, this is still in its prototypical phase, and I am aware of the bugs with assets that don’t start at the same time, which will be fixed before I commit this to my IKTrading package. But I wanted to put this out there for those who wish to experiment with it, and to gather feedback if anyone comes across something they believe is worth sharing.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

Amazing work Ilya. This has to be some sort of a land-speed record. You have the code out at the same time as the paper publication. Clearly the authors have a very high respect for your work – as I do (and I am sure many others). This is really excellent work and a terrific contribution to the community at large. Kudos to you for making it available. Thanks!

PS: I’ll report any bugs I come across should I find any – not worried about the start time bug, I use this:

#removes dates (rows) with NAs; shortens the time period of the data set

prices <- prices[complete.cases(prices),]

Gerald,

That’s because I actually worked *with* the author *before* he published the paper =).

-Ilya

Great work Ilya as always.. Am a fan of what you do on this blog. I just tried running the code I get an error on line

strategyReturns <- Return.rebalancing(R = returns, weights = weights) #compute strategy returns

Error is: "Error in `[.xts`(result, 2:length(result)) : subscript out of bounds"

I will spend a bit of time to figure out why later this evening. Ran the code as is. No modifications.

But, many many thanks for sharing such great work.

Yeah, either your libraries need updating, or it’s a matter of not simultaneous starting points. I’ll make the code more robust in the future.

-Ilya

Apologies for not checking the libraries. Once updated, the code worked perfectly.

Hi Ilya,

Nice check of the published working paper, with similar results for the N=7. One query:

From the paper, it states that “r_i is equal to the average total excess return over the last 1,3,6 and 12 months…”

Similarly to yourself, I assumed that the authors were referring to the ‘monthly’ average rather than a direct average of the different period returns. That being said, why are you calculating the monthly average of the 1, 3, 6, and 12 month cumulative returns as the sum of all the cumulative returns, divided by 22?

Surely the average return (excess or not) is given by: r_bar = 1/4 * (r_1 + r_3 / 3 + r_6 / 6 + r_12 / 12)

Based on a quick 1000-path simulation of 12 months of normally distributed (log)returns (changed back to linear before calculating momentum), the standard deviation of the difference between your computed average return r_i and the r_bar given above is 0.67% for a return vol of 10%, and increases linearly to 1.33% for a vol of 20%. This is a sizeable difference when ranking assets, especially if you include the Top N cut-off rule.
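For concreteness, the two averaging conventions can be placed side by side on a single made-up return path (base R only; this is independent of the simulation described above):

```r
set.seed(7)
monthly <- rnorm(12, mean = 0.01, sd = 0.03)  # one made-up path of monthly returns
cumRet <- function(x) prod(1 + x) - 1

r1  <- cumRet(tail(monthly, 1)); r3  <- cumRet(tail(monthly, 3))
r6  <- cumRet(tail(monthly, 6)); r12 <- cumRet(monthly)

sumOver22   <- (r1 + r3 + r6 + r12) / 22        # the convention in the paper and the post
weightedAvg <- (r1 + r3/3 + r6/6 + r12/12) / 4  # the r_bar from this comment

c(sumOver22, weightedAvg)  # the two numbers generally differ
```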

On the stepwise correlation routine, I would be hesitant to add it here as you’re only calculating correlations on 12 observations. This is (arguably) okay in the original, as each correlation is treated as a single pairwise value against an EW index of the asset universe (so you actually get a latent stepwise component already for free). However, as soon as you use a stepwise routine, I assume you’ll need the full correlation matrix. For 7 assets, you’re then calculating 21 correlations from 12 observations, which is not ideal. And for any N > 12, you’ll likely end up with a correlation matrix that is rank-deficient and/or very ill-conditioned; not the situation you want to be in when allocating assets.
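The rank-deficiency point is easy to verify: after demeaning, 12 observations span at most an 11-dimensional space, so a correlation matrix of more than 12 assets estimated from 12 monthly observations is necessarily singular. A quick base-R check (the 15-asset universe here is made up purely for illustration):

```r
set.seed(3)
nObs <- 12; nAssets <- 15
X <- matrix(rnorm(nObs * nAssets), nrow = nObs)   # 12 observations of 15 assets

C <- cor(X)        # 15 x 15 correlation matrix from only 12 observations
qr(C)$rank         # at most 11 (= nObs - 1): rank-deficient, hence singular
```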

Thank you for all the hard work you put into this blog, always an interesting read.

Regards,

Emlyn

No. The author’s computation was to sum up all of the cumulative returns and then divide the total by the single number 22, rather than averaging each period return separately.

Regarding the concerns about correlation, that is an interesting point. I suppose the best way to test that idea out would be to actually, well, backtest it. If your concerns hold, they should be visible through the underperformance of the portfolio.

Fair enough. To be 100% correct, I would suggest then that one uses the phrase ‘weighted average monthly return’, rather than purely ‘average monthly return’.

Pingback: Adding a Risk-Free Rate To Your Analyses | QuantStrat TradeR

Pingback: Comparing Flexible and Elastic Asset Allocation | QuantStrat TradeR