An Update On EAA and a Volatility Strategy

Again, before starting this post, I’d like to inform readers that the book Quantitative Trading With R, written by Harry Georgakopoulos, with contributions from me, is now available for order on Amazon. It has already garnered a pair of five-star reviews, and it deals not only with quantstrat, but also with topics such as spread trading, high-frequency data, and options. I highly recommend it.

So, first things first: I want to inform everyone that EAA (that is, Elastic Asset Allocation, the algorithm released by Dr. Wouter Keller a couple of weeks ago) is now in my IKTrading package. I made some modifications to deal with incongruous security starting dates (that is, the function now handles NA momentum values and so on, similarly to the process in FAA). Again, no particular guarantees, but at this point, I think the algorithm won’t regularly break (though I may be missing some edge case, so feedback is always appreciated). Also, after thinking about it a bit more, I don’t foresee EAA as it stands being able to make use of a conditional correlation algorithm: rather than using correlation simply for security selection, it uses correlations to make weighting decisions, which raises the question of what the correlation value of the first security would be. 0? -1? Ideas on how to address this are always welcome, since applying conditional correlation outside of a ranking context is now a topic of interest to me.

Furthermore, TrendXplorer has recently published his own post on EAA on his blog, after seeing mine. It is *very* comprehensive, and those more inclined towards AmiBroker will be in Nirvana. It can be found here. It also seems he has worked hand in hand with another SeekingAlpha contributor named Cliff Smith, and thus had a far more positive experience than I did going solo on replicating Harry Long’s strategies (or, if some of you prefer, marketing materials). TrendXplorer has also done some work with a strategy called QTS, which I hope to cover in the near future; that can all be found here. So, I’d like to formally thank TrendXplorer both for the work he has done with EAA and for pointing me towards yet another viable asset allocation strategy.

In terms of my own updated EAA, to test it out, I added Tesla Motors (TSLA) to the original seven securities. So let’s look at the current version of EAA.

"EAA" <- function(monthlyPrices, wR=1, wV=0, wC=.5, wS=2, errorJitter=1e-6, 
                cashAsset=NULL, bestN=1+ceiling(sqrt(ncol(monthlyPrices))),
                enableCrashProtection = TRUE, returnWeights=FALSE, monthlyRiskFree=NULL) {
  returns <- Return.calculate(monthlyPrices)
  returns <- returns[-1,] #return calculation uses one observation
  if(!is.null(monthlyRiskFree)) {
    returnsRF <- Return.calculate(monthlyRiskFree)
    returnsRF <- returnsRF[-1,]
  }
  
  if(is.null(cashAsset)) {
    returns$zeroes <- 0
    cashAsset <- "zeroes"
    warning("No cash security specified. Recommended to use one of: quandClean('CHRIS/CME_US'), SHY, or VFISX. 
            Using vector of zeroes instead.")
  }
  
  cashCol <- grep(cashAsset, colnames(returns))
  
  wS <- wS + errorJitter #add the jitter once, outside the loop, so wS stays constant
  
  weights <- list()
  for(i in 1:(nrow(returns)-11)) {
    returnsData <- returns[i:(i+11),] #each chunk will be 12 months of returns data
    #per-month momentum: 1, 3, 6, and 12 month cumulative returns, summed and
    #divided by the total months covered (1+3+6+12 = 22)
    periodReturn <- ((returnsData[12,] + Return.cumulative(returnsData[10:12,]) + 
                      Return.cumulative(returnsData[7:12,]) + Return.cumulative(returnsData)))/22
    
    if(!is.null(monthlyRiskFree)) {
      rfData <- returnsRF[i:(i+11),]
      rfReturn <- ((rfData[12,] + Return.cumulative(rfData[10:12,]) + 
                    Return.cumulative(rfData[7:12,]) + Return.cumulative(rfData)))/22
      periodReturn <- periodReturn - as.numeric(rfReturn)
    }
    
    vols <- StdDev.annualized(returnsData) 
    mktIndex <- xts(rowMeans(returnsData, na.rm=TRUE), order.by=index(returnsData)) #equal weight returns of universe
    cors <- cor(returnsData, mktIndex) #correlations to market index
    
    weightedRets <- periodReturn ^ wR
    weightedCors <- (1 - as.numeric(cors)) ^ wC
    weightedVols <- (vols + errorJitter) ^ wV
    
    z <- (weightedRets * weightedCors / weightedVols) ^ wS #compute z_i and zero out negative returns
    z[periodReturn < 0] <- 0
    crashProtection <- sum(z==0, na.rm=TRUE)/sum(!is.na(z)) #compute crash protection cash cushion
    
    orderedZ <- sort(as.numeric(z), decreasing=TRUE)
    selectedSecurities <- z >= orderedZ[bestN]
    preNormalizedWeights <- z*selectedSecurities #select top N securities, keeping z_i scores
    periodWeights <- preNormalizedWeights/sum(preNormalizedWeights, na.rm=TRUE) #normalize
    if (enableCrashProtection) {
      periodWeights <- periodWeights * (1-crashProtection) #CP rule
    }
    periodWeights[is.na(periodWeights)] <- 0
    weights[[i]] <- periodWeights
  }
  
  weights <- do.call(rbind, weights)
  weights[, cashCol] <- weights[, cashCol] + 1-rowSums(weights) #add to risk-free asset all non-invested weight
  strategyReturns <- Return.rebalancing(R = returns, weights = weights) #compute strategy returns
  if(returnWeights) {
    return(list(weights, strategyReturns))
  } else {
    return(strategyReturns)
  }
}

Essentially, little changed aside from a few lines dealing with NAs (that is, securities that were not yet trading at a given point in time, and whose prices are therefore given as NAs).
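
To make the NA handling concrete, here is a toy illustration (with made-up data) of the pattern used throughout the function: a late-starting security contributes leading NAs, and the na.rm and is.na guards keep those from contaminating the crash protection fraction or the final weights.

require(xts)

#two fake securities, one of which starts six months late (leading NAs)
dates <- seq(as.Date("2014-01-01"), by="month", length.out=12)
rets <- xts(cbind(early=rnorm(12, .01, .04),
                  late=c(rep(NA, 6), rnorm(6, .01, .04))),
            order.by=dates)

z <- rets[3,] #stand-in for a row of z_i scores; 'late' is NA here
crashProtection <- sum(z==0, na.rm=TRUE)/sum(!is.na(z)) #NAs drop out of both counts
z[is.na(z)] <- 0 #NA scores end up as zero weights, as in the function above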

To test whether the updated algorithm worked, I added TSLA and checked that it didn’t break the code. Here is the new test code.

require(quantmod)
require(PerformanceAnalytics)

symbols <- c("VTSMX", "FDIVX", "VEIEX", "VBMFX", "VFISX", "VGSIX", "QRAAX", "TSLA")

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))  
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-Za-z]*", "", colnames(prices)) #strip the ".Adjusted" suffix from column names
ep <- endpoints(prices, "months")
prices <- prices[ep,]
prices <- prices["1997-03::"]

getSymbols("^IRX", from="1990-01-01")
dailyYield <- (1+(Cl(IRX)/100))^(1/252) - 1
threeMoPrice <- cumprod(1+dailyYield)
threeMoPrice <- threeMoPrice["1997-03::"]
threeMoPrice <- threeMoPrice[endpoints(threeMoPrice, "months"),]

offensive <- EAA(prices, cashAsset="VBMFX", bestN=3)
defensive <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1)
offRF <- EAA(prices, cashAsset="VBMFX", bestN=3, monthlyRiskFree = threeMoPrice)
defRF <- EAA(prices, cashAsset="VBMFX", bestN=3, wS=.5, wC=1, monthlyRiskFree = threeMoPrice)
compare <- cbind(offensive, defensive, offRF, defRF)
colnames(compare) <- c("Offensive", "Defensive", "OffRF", "DefRF")
stats <- rbind(Return.annualized(compare)*100, StdDev.annualized(compare)*100, maxDrawdown(compare)*100, SharpeRatio.annualized(compare))
rownames(stats)[3] <- "Worst Drawdown"
charts.PerformanceSummary(compare)
stats

With the following statistics table and equity curve:

> stats
                                 Offensive Defensive      OffRF     DefRF
Annualized Return               17.6174693 13.805683 16.7376777 13.709368
Annualized Standard Deviation   22.7328695 13.765444 22.3854966 13.504313
Worst Drawdown                  25.3534015 12.135310 25.3559118 12.146654
Annualized Sharpe Ratio (Rf=0%)  0.7749778  1.002923  0.7477019  1.015184

Essentially, TSLA, a high-momentum, high-volatility stock, causes some consternation in the offensive variant of the algorithm. Let’s look at the weight statistics of TSLA when it was in the portfolio.

test <- EAA(prices, cashAsset = "VBMFX", bestN=3, returnWeights=TRUE)
weights <- test[[1]]
summary(weights$TSLA[weights$TSLA > 0])

With the results:

    Index                 TSLA        
 Min.   :2011-07-29   Min.   :0.01614  
 1st Qu.:2012-09-14   1st Qu.:0.32345  
 Median :2013-07-31   Median :0.48542  
 Mean   :2013-06-20   Mean   :0.51415  
 3rd Qu.:2014-04-15   3rd Qu.:0.75631  
 Max.   :2014-12-31   Max.   :0.95793  

Also, to be clear, R’s summary function was not created with xts objects in mind, so the Index statistics are pure nonsense: R is computing summary statistics on the underlying numerical values of the date index, which have no relation to the TSLA weights. So if you ever call summary on an xts object, be aware that it isn’t actually providing the dates of the corresponding weights (if such dates exist at all; for example, the mean of the weights isn’t an actual weight at any point in time).
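
If you do want sane numbers, one workaround (a small sketch using standard xts accessors) is to strip the index with coredata before calling summary, and to query the dates separately:

tslaWeights <- weights$TSLA[weights$TSLA > 0]
summary(coredata(tslaWeights)) #statistics on the weight values only
range(index(tslaWeights)) #first and last dates on which TSLA carried weight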

In any case, it seems that the offensive variant of the algorithm is susceptible to creating very poorly diversified portfolios, since it places no weight on security volatility, only on correlation. So if there is a very volatile instrument on a roaring trend, EAA will tell you to place your entire portfolio in that one instrument, which can, of course, be the correct thing to do if you know for certain that said trend will continue. Until, of course, it doesn’t.

I’m sure there are still ways to account for instruments with wildly different risk/return profiles simply by varying the parameters, without the need for additional code. I just wanted to demonstrate the need to be aware of this phenomenon, which I happened upon while testing the portfolio for incongruous starting dates, having just so happened to pick a “hot topic” stock.
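
For instance, as an untested illustration rather than a recommendation: the function already exposes a volatility exponent, so one could simply turn wV on in the offensive variant and check whether the volatility penalty reins in the TSLA allocation.

#hypothetical parameter tweak: penalize volatility in the offensive variant
offensiveVolPenalty <- EAA(prices, cashAsset="VBMFX", bestN=3, wV=1)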

Last (for this post), I’d like to make readers aware that the blogger Volatility Made Simple has created a variant of a strategy I wrote about earlier (again, thanks to Mr. Helmuth Vollmeier for providing the initial foundation), in which he mixed the signals from the three variants I had found to be in stable parameter regions. I’m really happy he has done so, as he’s one of the first people to explicitly extend my work.

Unfortunately, said strategy is currently in drawdown. However, comparing its drawdown curve against that of XIV itself, it seems that volatility has been doing crazy things lately, and the strategy’s drawdowns have been worse in the past. I am concerned, however, that it may be prone to overfitting, and it’s a constant reminder that there is still more to learn, and more techniques to use to convince oneself that a backtest isn’t just an overfit, data-mined, sample-dependent illusion with good marketing that will break down immediately upon a larger sample. However, as I did not originate the strategy myself, I’d at least like to hope that whoever first came up with the VXV/VXMT ratio idea had good rationale for the strategy to begin with.
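
For readers who haven’t seen the original post, the idea compares mid-term implied volatility (VXV, three months out) against longer-dated implied volatility (VXMT, six months out). The following is only a stripped-down sketch of that kind of signal, not the actual specification: the 60-day SMA is a placeholder parameter, and vxv, vxmt, xivRets, and vxxRets are assumed to be daily xts series the reader has already obtained (the CBOE publishes both indices).

require(TTR)
require(xts)

ratio <- vxv/vxmt
signal <- SMA(ratio, n=60) < 1 #placeholder lookback; see the original post

#short volatility (XIV) when the smoothed ratio is below 1, long
#volatility (VXX) otherwise; lag the signal to avoid lookahead bias
stratRets <- lag(signal)*xivRets + lag(!signal)*vxxRets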

In the immediate future, I’ll be looking into change point analysis and Twitter’s new BreakoutDetection package.
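
For anyone who wants a head start, the BreakoutDetection package lives on Twitter’s GitHub rather than on CRAN, so it installs via devtools. Here is a minimal sketch of its interface as I understand it, run on simulated data with an obvious level shift; treat the parameter choices as placeholders.

#devtools::install_github("twitter/BreakoutDetection")
require(BreakoutDetection)

set.seed(123)
x <- c(rnorm(100, mean=0), rnorm(200, mean=3)) #simulated mean shift at t=101
bo <- breakout(x, min.size=30, method="multi", plot=TRUE)
bo$loc #estimated breakout locations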

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract or full time roles available for proprietary research that could benefit from my skills, please contact me through my LinkedIn here.

23 thoughts on “An Update On EAA and a Volatility Strategy”

  1. Pingback: The Whole Street’s Daily Wrap for 1/16/2015 | The Whole Street

  2. Hi Ilya,

    Thank you for the continued work on this blog. It is an amazing resource. Also, CONGRATULATIONS on your contribution to the Quantitative Trading With R book. I only found out about the book through your blog posts. I just ordered my copy :-)

    -GeraldM

    • Definitely correct when thinking about fiction books and other such material. But when it comes to technical materials, such as those from Wiley and so on, this is par for the course, if not a little cheaper. For instance, compare it with the latest edition of Perry Kaufman’s Trading Systems book plus software, which goes for around $80. Also, that’s something that should be taken up with the publisher =), but on the whole, I do wish such materials were cheaper myself.

    • Oh man, $60 is cheap for a technical book. University textbooks cost $150-$200. Yep, learning something can cost a few dollars, but if you think education is expensive, you should try ignorance. Now THAT will cost you :-)

  3. Nirvana, wow! ;-) Thanks for your kind words about my blog, Ilya.

    Actually I am equally impressed (if not more) by the beautiful R-code you write. Moreover you manage to create the code so incredibly fast. Just amazing.

    Anyway, in addition to monthly rebalancing for EAA, quarterly frequency is now supported too, to accommodate accounts with a 90-day holding period limit. See the linked chart (http://www.screencast.com/t/0Rtt6OQO) for an N=7 sized universe consisting of $MDY, $IEV, $EEM, $QQQ, $XLV, $IEF and $TLT, with $IEF also used as the cash proxy fund (the $-sign means the ETFs are extended with their corresponding mutual funds to prolong backtest history). EAA was set in EW_Hedged mode (wR,wV,wC,wS = 1,0,1,0) with quarterly rebalancing.

    JW

    • Wow! Quarterly means you can actually trade mutual funds that way (the more popular ones have a 60-day lockout against frequent trading), meaning your returns can be much better. Thanks for the compliment. I’ll actually see if I can get quarterly functionality built in.

      Also, I like the work you did with Cliff Smith of SeekingAlpha. I’m backed up with an interview project right now, but come early February, I’d love to continue to collaborate.
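
      In the meantime, here’s an untested sketch of a quick approximation: run the monthly algorithm with returnWeights=TRUE, keep only the quarter-end weight rows, and hand those to Return.rebalancing so the portfolio only turns over quarterly.

      test <- EAA(prices, cashAsset="VBMFX", bestN=3, returnWeights=TRUE)
      monthlyWeights <- test[[1]]
      #keep quarter-end rebalances only; positions drift in between
      quarterlyWeights <- monthlyWeights[endpoints(monthlyWeights, "quarters"),]
      quarterlyRets <- Return.rebalancing(R=Return.calculate(prices)[-1,],
                                          weights=quarterlyWeights)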

  4. Hi Ilya. I discovered your blog yesterday; it’s incredible. I’ve been devouring your articles pretty much non-stop for the past 10 hours. It’s been a number of years since I last wrote program code, but this blog inspires me to pick up R (among other things) as a matter of priority.

    Hey, one question if I may. In assessing robustness of a strategy, I find presentation of rolling returns (1 month, quarterly, annual) incredibly illustrative. I’m sure you know what I mean, but you could check out what they do at http://www.logical-invest.com, for example. Have you considered computing rolling returns for EAA or your volatility strategies? The heat maps and scatter plots are really great of course but this would be a nice complementary angle.

    thanks and best wishes

    /mikko

  5. Hello Ilya,

    I’m new to R and quant trading, and I appreciate your blog. Now, playing with the code, I’ve seen that the file EAA.R in the IKTrading package contains a parameter “returnWeights” that allows returning the weight of each security (well, this is my understanding of the parameter definition). When it is set to TRUE, my RStudio returns an error I don’t understand :-|

    Please, do you have an idea how I could debug this?

    Thanks
    Florent

    Error in checkData(R, method = "xts") : The data cannot be converted into a time series. If you are trying to pass in names from a data object with one column, you should use the form 'data[rows, columns, drop = FALSE]'. Rownames should have standard date formats, such as '1985-03-15'.

      • Hello Ilya,

        Thanks; in fact, I was not using the code properly. It works like a charm now :-)

        My understanding of this algo was that the more funds the better, but that’s not the case. I tested different baskets of 3x leveraged funds and found combinations with good returns over the last 8 years (~15% max drawdown and 25% average return per year, which is not bad).

        I separate the basket between risk-off assets (e.g. 20+ and 7-10 year Treasuries, U.S. Dollar) and risk-on assets (e.g. various indexes, equities, and also gold miners). Adding VIX and inverse VIX also gives good performance.

        In order to test with Bitcoin time series data, I changed the source from Yahoo to Google (Oanda is great, but requests cannot exceed 500 days, and the Yahoo API fails for currency pairs).

        I query Google with getSymbols(symbols, from="2009-01-01", src="google", verbose=TRUE) but it returns an error saying “subscript out of bounds: no column name containing "Adjusted"”. I am not sure how to handle this and would appreciate some pointers.

        Regards,
        Florent

      • Ilya, just to say the error above is related to the line prices[[i]] <- Ad(get(symbols[i])) because Google data do not include "adjusted" prices. I changed with prices[[i]] prices <- do.call(cbind, prices)
        nan, nan
        Error in merge.xts(…, all = all, fill = fill, suffixes = suffixes) :
        'NA' not allowed in 'index'

      • Sorry for my last message, which was truncated. I wanted to say that the error above was related to the line prices[[i]] <- Ad(get(symbols[i])), because Google data do not include "adjusted" prices. I changed it to prices[[i]] <- Cl(get(symbols[i])) in order to use the closing price. It works fine with Yahoo data, but still returns an error when using Google data:

        prices <- do.call(cbind, prices)
        nan, nan
        Error in merge.xts(…, all = all, fill = fill, suffixes = suffixes) :
        'NA' not allowed in 'index'

      • For some reason, data from the Google source come without a “Date” column, and that gives NAs. I wrote this little workaround that fetches the BTC/USD time series from Oanda for the last 1000 days.

        I can see a new column with the BTC/USD series in prices[[]], but it is not clear to me why some dates are missing in BTC[[1]]. Then, after I populate the BTC[[1]] data into prices[[]], some existing symbols from the symbol list sometimes become NA.

        It must be something very simple; sorry for the basic question.
        Florent

        prices <- list()
        for(i in 1:length(symbols)) {
          prices[[i]] <- Cl(get(symbols[i]))
        }

        BTC <- list()
        for(i in 1:2) {
          BTC[[1]] <- getFX("BTC/USD",
                            from = Sys.Date() - 499 * (i + 1),
                            to = Sys.Date() - 499 * i,
                            env = parent.frame(),
                            auto.assign = FALSE)
        }
        BTC[[1]] <- getFX("BTC/USD",
                          from = Sys.Date() - 499, #retrieve most recent data here
                          to = Sys.Date(),
                          env = parent.frame(),
                          auto.assign = FALSE)

        prices[[length(symbols)+1]] <- BTC[[1]]
        prices <- do.call(cbind, prices)

  6. Hi Florent,

    This is a very interesting idea, using bitcoin prices. Thank you for posting. I had a similar problem when I was downloading forex data from a site that stored it in a text file (I was using the USDCAD cross rate to look at the price of gold in Canadian dollars). I put the data into a data frame and then, using the as.Date and as.xts functions, got it all to work. I don’t know all the details of what you are doing, but I am willing to help. Send a note to my spam gmail account “junkystuffy5” and I will reach back to you.
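
    Roughly, the conversion looked like this (the file and column names are placeholders from my own data; yours will differ, and I used the xts constructor here, which is the same idea as as.xts):

    require(xts)
    fx <- read.csv("usdcad.csv", stringsAsFactors=FALSE) #hypothetical file
    fx$Date <- as.Date(fx$Date) #parse the text dates into Date objects
    fxXts <- xts(fx$Price, order.by=fx$Date) #price column keyed by the dates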

    Please keep posting. This is great work by Ilya, and these user conversations are very useful.

    • Hi Gerald,

      Thank you. The reason I would like to use Bitcoin is to give beta to a portfolio of BitShares assets. BitAssets trade under symbols like BitUSD, BitGold, BitGBP, BitCNY, etc. (no stocks or indexes at the moment). They are a new class of digital crypto-currencies 2.0 that are pegged to the value of their real-world counterparts (USD, gold, GBP, CNY, etc.). The idea behind this is to have an asset allocation algo that could provide returns in a tax-free environment.

      You’ll find more description of these BitAssets at the following two links:
      http://bytemaster.bitshares.org/article/2014/12/18/What-are-BitShares-Market-Pegged-Assets/
      http://bitsharesblocks.com/assets/market

      * Just to clarify: I have no interest in this company, nor am I in any relation with their team.

      Another suggestion could be to add altcoins into the algo.

    • Patzoul, PerformanceAnalytics takes care of that under the hood. It’s a package created by a team of elite quants, so you can rest assured that issues like these are taken care of. I do something like you describe with the volatility trading strategies, but in those cases, I do everything manually and don’t use Return.portfolio.

  7. Hello Ilya, in this example volatility is calculated from the standard deviation of the open (or close) price, but it doesn’t take into account the price range (high minus low). Do you think it is worth trying another volatility indicator, like ATR or Chaikin volatility? Thanks!
