Nuts and Bolts of Quantstrat, Part V

This post will be about pre-processing custom indicators in quantstrat–that is, how to add values to your market data that do not arise from the market data itself.

The first four parts of my nuts and bolts of quantstrat series were well received, and are even available as a DataCamp course. For those who want to catch up before today's post, I highly recommend that course.

To motivate this post, suppose you're using alternative data that isn't simply derived from a transformation of the market data itself. That is, you have a proprietary alternative data stream that may predict an asset's price, you want to employ a cross-sectional ranking system, or any number of other things. How do you do this within the context of quantstrat?

The answer is that it’s as simple as binding a new xts to your asset data, as this demonstration will show.

First, let’s get the setup out of the way.

require(quantstrat)
require(PerformanceAnalytics)

initDate="1990-01-01"
from="2003-01-01"
to="2012-12-31"
options(width=70)

options("getSymbols.warning4.0"=FALSE)

currency('USD')
Sys.setenv(TZ="UTC")

symbols <- 'SPY'
suppressMessages(getSymbols(symbols, from=from, to=to, src="yahoo", adjust=TRUE))  

stock(symbols, currency="USD", multiplier=1)

Now, we have our non-derived indicator. In this case, it's a toy example: the value is 1 if the year is odd (i.e. 2003, 2005, 2007, 2009) and 0 if it's even. We compute that and simply column-bind (cbind) it to the asset data.

# 1 in odd years, 0 in even years
nonDerivedIndicator <- as.numeric(as.character(substr(index(SPY), 1, 4))) %% 2 == 1
nonDerivedIndicator <- xts(nonDerivedIndicator, order.by=index(SPY))

# column-bind the indicator to the market data and give it a name
SPY <- cbind(SPY, nonDerivedIndicator)
colnames(SPY)[7] <- "nonDerivedIndicator"

Next, we have a very simple strategy: buy a share of SPY in odd years, sell in even years. That is, buy when the nonDerivedIndicator column crosses above 0.5 (from 0 to 1), and sell when the opposite occurs.

strategy.st <- portfolio.st <- account.st <- "nonDerivedData"
rm.strat(strategy.st)
initPortf(portfolio.st, symbols=symbols, initDate=initDate, currency='USD')
initAcct(account.st, portfolios=portfolio.st, initDate=initDate, currency='USD')
initOrders(portfolio.st, initDate=initDate)
strategy(strategy.st, store=TRUE)

add.signal(strategy.st, name = sigThreshold, 
           arguments = list(column = "nonDerivedIndicator", threshold = 0.5, relationship = "gte", cross = TRUE),
           label = "longEntry")

add.signal(strategy.st, name = sigThreshold, 
           arguments = list(column = "nonDerivedIndicator", threshold = 0.5, relationship = "lte", cross = TRUE),
           label = "longExit")


# optional sanity check: apply the signals to the market data and inspect where they fire
tmp <- applySignals(strategy = strategy.st, mktdata=SPY)


add.rule(strategy.st, name="ruleSignal", 
         arguments=list(sigcol="longEntry", sigval=TRUE, ordertype="market", 
                        orderside="long", replace=FALSE, prefer="Open", orderqty = 1), 
         type="enter", path.dep=TRUE)

add.rule(strategy.st, name="ruleSignal", 
         arguments=list(sigcol="longExit", sigval=TRUE, orderqty="all", 
                        ordertype="market", orderside="long", 
                        replace=FALSE, prefer="Open"), 
         type="exit", path.dep=TRUE)

#apply strategy
t1 <- Sys.time()
out <- applyStrategy(strategy=strategy.st,portfolios=portfolio.st)
t2 <- Sys.time()
print(t2-t1)

#set up analytics
updatePortf(portfolio.st)
dateRange <- time(getPortfolio(portfolio.st)$summary)[-1]
updateAcct(account.st, dateRange)
updateEndEq(account.st)

And the result:

chart.Posn(portfolio.st, 'SPY')

In conclusion, you can create signals based on any data in quantstrat. Whether that means volatility ratios, fundamental data, cross-sectional rankings, or whatever proprietary alternative data source you may have access to, this very simple process is how you add those inputs to your systematic trading backtest research in quantstrat.
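As one last illustrative sketch (my own addition, not part of the demo above): if the external series arrives at a lower frequency than the market data, say a hypothetical monthly xts called fundamentals with a single column of the same name, the same idea works, with a left join and a last-observation-carried-forward fill so that every daily bar has a value without peeking ahead.

# hypothetical example: 'fundamentals' is assumed to be a one-column monthly xts
SPY <- merge(SPY, fundamentals, join = "left")  # align onto SPY's daily index
SPY$fundamentals <- na.locf(SPY$fundamentals)   # carry the last known value forward (no lookahead)

From there, the same add.signal/add.rule machinery shown above applies to the new column.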

Thanks for reading.

Note: I am always interested in full-time opportunities which may benefit from my skills. I have experience in data analytics, asset management, and systematic trading research. If you know of any such opportunities, do not hesitate to contact me on my LinkedIn, found here.


Ehlers’s Autocorrelation Periodogram

This post will introduce John Ehlers's autocorrelation periodogram: a mechanism designed to dynamically select a lookback period, which is arguably the single most commonly optimized parameter in backtests.

Before beginning this post, I must give credit where it’s due, to one Mr. Fabrizio Maccallini, the head of structured derivatives at Nordea Markets in London. You can find the rest of the repository he did for Dr. John Ehlers’s Cycle Analytics for Traders on his github. I am grateful and honored that such intelligent and experienced individuals are helping to bring some of Dr. Ehlers’s methods into R.

The point of the Ehlers Autocorrelation Periodogram is to dynamically set a period between a minimum and a maximum period length. While I leave the exact explanation of the mechanic to Dr. Ehlers’s book, for all practical intents and purposes, in my opinion, the punchline of this method is to attempt to remove a massive source of overfitting from trading system creation–namely specifying a lookback period.

SMA of 50 days? 100 days? 200 days? Well, this algorithm takes that possibility of overfitting out of your hands. Simply, specify an upper and lower bound for your lookback, and it does the rest. How well it does it is a topic of discussion for those well-versed in the methodologies of electrical engineering (I’m not), so feel free to leave comments that discuss how well the algorithm does its job, and feel free to blog about it as well.

In any case, here’s the original algorithm code, courtesy of Mr. Maccallini:

AGC <- function(loCutoff = 10, hiCutoff = 48, slope = 1.5) {
  accSlope = -slope # acceptableSlope = 1.5 dB
  ratio = 10 ^ (accSlope / 20)
  if ((hiCutoff - loCutoff) > 0)
    factor <- ratio ^ (2 / (hiCutoff - loCutoff))
  return (factor)
}

autocorrPeriodogram <- function(x, period1 = 10, period2 = 48, avgLength = 3) {
  # high pass filter
  alpha1 <- (cos(sqrt(2) * pi / period2) + sin(sqrt(2) * pi / period2) - 1) / cos(sqrt(2) * pi / period2)
  hp <- (1 - alpha1 / 2) ^ 2 * (x - 2 * lag(x) + lag(x, 2))
  hp <- hp[-c(1, 2)]
  hp <- filter(hp, (1 - alpha1), method = "recursive")
  hp <- c(NA, NA, hp)
  hp <- xts(hp, order.by = index(x))
  # super smoother
  a1 <- exp(-sqrt(2) * pi / period1)
  b1 <- 2 * a1 * cos(sqrt(2) * pi / period1)
  c2 <- b1
  c3 <- -a1 * a1
  c1 <- 1 - c2 - c3
  filt <- c1 * (hp + lag(hp)) / 2
  leadNAs <- sum(is.na(filt))
  filt <- filt[-c(1: leadNAs)]
  filt <- filter(filt, c(c2, c3), method = "recursive")
  filt <- c(rep(NA, leadNAs), filt)
  filt <- xts(filt, order.by = index(x))
  # Pearson correlation for each value of lag
  autocorr <- matrix(0, period2, length(filt))
  for (lag in 2: period2) {
    # Set the average length as M
    if (avgLength == 0) M <- lag
    else M <- avgLength
    autocorr[lag, ] <- runCor(filt, lag(filt, lag), M)
  }
  autocorr[is.na(autocorr)] <- 0
  # Discrete Fourier transform
  # Correlate autocorrelation values with the cosine and sine of each period of interest
  # The sum of the squares of each value represents relative power at each period
  cosinePart <- sinePart <- sqSum <- R <- Pwr <- matrix(0, period2, length(filt))
  for (period in period1: period2) {
    for (N in 2: period2) {
      cosinePart[period, ] = cosinePart[period, ] + autocorr[N, ] * cos(2 * N * pi / period)
      sinePart[period, ] = sinePart[period, ] + autocorr[N, ] * sin(2 * N * pi / period)
    }
    sqSum[period, ] = cosinePart[period, ] ^ 2 + sinePart[period, ] ^ 2
    R[period, ] <- EMA(sqSum[period, ] ^ 2, ratio = 0.2)
  }
  R[is.na(R)] <- 0
  # Normalising Power
  K <- AGC(period1, period2, 1.5)
  maxPwr <- rep(0, length(filt))
  for(period in period1: period2) {
    for (i in 1: length(filt)) {
      if (R[period, i] >= maxPwr[i]) maxPwr[i] <- R[period, i]
      else maxPwr[i] <- K * maxPwr[i]
    }
  }
  for(period in 2: period2) {
    Pwr[period, ] <- R[period, ] / maxPwr
  }
  # Compute the dominant cycle using the Center of Gravity of the spectrum
  Spx <- Sp <- rep(0, length(filt)) # note: filt, not the filter() function
  for(period in period1: period2) {
    Spx <- Spx + period * Pwr[period, ] * (Pwr[period, ] >= 0.5)
    Sp <- Sp + Pwr[period, ] * (Pwr[period, ] >= 0.5)
  }
  dominantCycle <- Spx / Sp
  dominantCycle[is.nan(dominantCycle)] <- 0
  dominantCycle <- xts(dominantCycle, order.by=index(x))
  dominantCycle <- dominantCycle[dominantCycle > 0]
  return(dominantCycle)
  #heatmap(Pwr, Rowv = NA, Colv = NA, na.rm = TRUE, labCol = "", add.expr = lines(dominantCycle, col = 'blue'))
}

One thing I do notice is that this code uses a loop of the form for(i in 1:length(filt)), which iterates over every data point, something I view as the plague in R. While I've used Rcpp before, it's been only for the most basic of loops, so this is definitely a place where the algorithm could be improved with Rcpp, given R's inherently slow looping.
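To make that concrete, here is a minimal Rcpp sketch (my own, purely illustrative, and not part of Mr. Maccallini's code) that transcribes just the AGC normalization loop above into C++; the function name agcMaxPwr and its arguments are hypothetical.

require(Rcpp)

cppFunction('
NumericVector agcMaxPwr(NumericMatrix pwr, int period1, int period2, double K) {
  int n = pwr.ncol();
  NumericVector maxPwr(n);
  // rows of pwr are periods (1-based in the R code, hence the -1 below),
  // columns are time points; track the running maximum power per time point,
  // decaying stale maxima by the AGC factor K
  for (int period = period1 - 1; period < period2; period++) {
    for (int i = 0; i < n; i++) {
      if (pwr(period, i) >= maxPwr[i]) maxPwr[i] = pwr(period, i);
      else maxPwr[i] = K * maxPwr[i];
    }
  }
  return maxPwr;
}')

Inside autocorrPeriodogram, the nested loop could then be replaced by something like maxPwr <- agcMaxPwr(R, period1, period2, K).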

Those interested in the exact logic of the algorithm will, once again, find it in John Ehlers’s Cycle Analytics For Traders book (see link earlier in the post).

Of course, the first thing to do is to test how well the algorithm does what it purports to do, which is to dictate the lookback period of an algorithm.

Let’s run it on some data.

getSymbols('SPY', from = '1990-01-01')

t1 <- Sys.time()
out <- autocorrPeriodogram(Ad(SPY), period1 = 120, period2 = 252, avgLength = 3)
t2 <- Sys.time()
print(t2-t1)

And the result:

> t1 <- Sys.time()
> out <- autocorrPeriodogram(Ad(SPY), period1 = 120, period2 = 252, avgLength = 3)
> t2 <- Sys.time()
> print(t2-t1)
Time difference of 33.25429 secs

Now, what does the algorithm-set lookback period look like?

plot(out)

Let’s zoom in on 2001 through 2003, when the markets went through some upheaval.

plot(out['2001::2003'])

In this zoomed-in image, we can see that the algorithm’s estimates seem fairly jumpy.

Here's some code to feed the algorithm's estimates of n into an indicator function, producing an indicator with a dynamic lookback period as set by Ehlers's autocorrelation periodogram.

acpIndicator <- function(x, minPeriod, maxPeriod, indicatorFun = EMA, ...) {
  acpOut <- autocorrPeriodogram(x = x, period1 = minPeriod, period2 = maxPeriod)
  roundedAcpNs <- round(acpOut, 0) # round to the nearest integer
  uniqueVals <- unique(roundedAcpNs) # unique integer values
  out <- xts(rep(NA, length(roundedAcpNs)), order.by=index(roundedAcpNs))

  for(i in 1:length(uniqueVals)) { # loop through unique values, compute indicator
    tmp <- indicatorFun(x, n = uniqueVals[i], ...)
    out[roundedAcpNs==uniqueVals[i]] <- tmp[roundedAcpNs==uniqueVals[i]]
  }
  return(out)
}

And here is the function applied with an SMA, to tune between 120 and 252 days.

ehlersSMA <- acpIndicator(Ad(SPY), 120, 252, indicatorFun = SMA)

plot(Ad(SPY)['2008::2010'])
lines(ehlersSMA['2008::2010'], col = 'red')

And the result:

As seen, this algorithm is less consistent than I would like, at least when it comes to using a simple moving average.
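Since acpIndicator takes the indicator function as an argument, swapping filters is a one-line change. Here's a small, purely exploratory variant using an EMA instead (my own addition, with the ehlersEMA name made up for the example; note that it re-runs the periodogram, so it's slow):

ehlersEMA <- acpIndicator(Ad(SPY), 120, 252, indicatorFun = EMA)
plot(Ad(SPY)['2008::2010'])
lines(ehlersSMA['2008::2010'], col = 'red')
lines(ehlersEMA['2008::2010'], col = 'blue')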

For now, I’m going to leave this code here, and let people experiment with it. I hope that someone will find that this indicator is helpful to them.

Thanks for reading.

NOTES: I am always interested in networking/meet-ups in the northeast (Philadelphia/NYC). Furthermore, if you believe your firm will benefit from my skills, please do not hesitate to reach out to me. My linkedin profile can be found here.

Lastly, I am volunteering to curate the R section for books on Quantocracy. If you have a book about R that can apply to finance, be sure to let me know about it, so that I can review it and possibly recommend it. Thank you.

The Problem With Depmix For Online Regime Prediction

This post will be about attempting to use the depmix package for online state prediction. While the depmix package performs admirably when it comes to describing the states of the past, when used for one-step-ahead prediction, under the assumption that tomorrow's state will be identical to today's, the hidden Markov model process found within the package does not perform to expectations.

So, to start off, this post was motivated by Michael Halls-Moore, who recently posted some R code about using the depmixS4 library for hidden Markov models. Generally, I am loath to create posts on topics I don't feel I have an absolutely front-to-back understanding of, but I'm doing this in the hope of learning from others how to appropriately do online state-space prediction, or "regime switching" detection, as it may be called in more financial parlance.

Here’s Dr. Halls-Moore’s post.

While I've seen the usual theory of hidden Markov models (that is, it can rain or it can be sunny, but you can only infer the weather by judging the clothes you see people wearing outside your window when you wake up), and have worked with toy examples in MOOCs (Udacity's self-driving car course deals with them, if I recall correctly, or maybe it was the AI course), at the end of the day, theory is only as good as how well an implementation works on real data.

For this experiment, I decided to take SPY data since inception, and do a full in-sample “backtest” on the data. That is, given that the HMM algorithm from depmix sees the whole history of returns, with this “god’s eye” view of the data, does the algorithm correctly classify the regimes, if the backtest results are any indication?

Here’s the code to do so, inspired by Dr. Halls-Moore’s.

require(depmixS4)
require(quantmod)
require(PerformanceAnalytics) # for Return.calculate and the performance tables
getSymbols('SPY', from = '1990-01-01', src='yahoo', adjust = TRUE)
spyRets <- na.omit(Return.calculate(Ad(SPY)))

set.seed(123)

hmm <- depmix(SPY.Adjusted ~ 1, family = gaussian(), nstates = 3, data=spyRets)
hmmfit <- fit(hmm, verbose = FALSE)
post_probs <- posterior(hmmfit)
post_probs <- xts(post_probs, order.by=index(spyRets))
plot(post_probs$state)
summaryMat <- data.frame(summary(hmmfit))
colnames(summaryMat) <- c("Intercept", "SD")
bullState <- which(summaryMat$Intercept > 0)
bearState <- which(summaryMat$Intercept < 0)

hmmRets <- spyRets * lag(post_probs$state == bullState) - spyRets * lag(post_probs$state == bearState)
charts.PerformanceSummary(hmmRets)
table.AnnualizedReturns(hmmRets)

While I did select three states, I noted that anything with an intercept above zero is a bull state and anything below zero is a bear state, so essentially, it reduces to two states.

With the result:

table.AnnualizedReturns(hmmRets)
                          SPY.Adjusted
Annualized Return               0.1355
Annualized Std Dev              0.1434
Annualized Sharpe (Rf=0%)       0.9448

So, not particularly terrible. The algorithm works, kind of, sort of, right?

Well, let’s try online prediction now.

require(doMC) # multicore backend for foreach (provides registerDoMC)

dailyHMM <- function(data, nPoints) {
  subRets <- data[1:nPoints,]
  hmm <- depmix(SPY.Adjusted ~ 1, family = gaussian(), nstates = 3, data = subRets)
  hmmfit <- fit(hmm, verbose = FALSE)
  post_probs <- posterior(hmmfit)
  summaryMat <- data.frame(summary(hmmfit))
  colnames(summaryMat) <- c("Intercept", "SD")
  bullState <- which(summaryMat$Intercept > 0)
  bearState <- which(summaryMat$Intercept < 0)
  if(last(post_probs$state) %in% bullState) {
    state <- xts(1, order.by=last(index(subRets)))
  } else if (last(post_probs$state) %in% bearState) {
    state <- xts(-1, order.by=last(index(subRets)))
  } else {
    state <- xts(0, order.by=last(index(subRets)))
  }
  colnames(state) <- "State"
  return(state)
}

# took 3 hours in parallel
t1 <- Sys.time()
set.seed(123)
registerDoMC((detectCores() - 1))
states <- foreach(i = 500:nrow(spyRets), .combine=rbind) %dopar% {
  dailyHMM(data = spyRets, nPoints = i)
}
t2 <- Sys.time()
print(t2-t1)

So what I did here was take an expanding window, starting 500 days after SPY's inception, and keep increasing it one day at a time. My prediction was, trivially enough, the most recent day's state, using 1 for a bull state and -1 for a bear state. I ran this process in parallel (on a Linux cluster, because on Windows, doParallel workers do not automatically see the packages loaded in the main session, which makes things messier), and the first big issue is that this process took about three hours on seven cores for about 23 years of data. Not exactly encouraging, but computing time isn't expensive these days.

So let’s see if this process actually works.

First, let’s test if the algorithm does what it’s actually supposed to do and use one day of look-ahead bias (that is, the algorithm tells us the state at the end of the day–how correct is it even for that day?).


onlineRets <- spyRets * states 
charts.PerformanceSummary(onlineRets)
table.AnnualizedReturns(onlineRets)

With the result:

> table.AnnualizedReturns(onlineRets)
                          SPY.Adjusted
Annualized Return               0.2216
Annualized Std Dev              0.1934
Annualized Sharpe (Rf=0%)       1.1456

So, allegedly, the algorithm seems to do what it was designed to do, which is to classify a state for a given data set. Now, the most pertinent question: how well do these predictions do even one day ahead? You'd think that state-space predictions would be fairly stable from day to day, given the long history, correct?


onlineRets <- spyRets * lag(states)
charts.PerformanceSummary(onlineRets)
table.AnnualizedReturns(onlineRets)

With the result:

> table.AnnualizedReturns(onlineRets)
                          SPY.Adjusted
Annualized Return               0.0172
Annualized Std Dev              0.1939
Annualized Sharpe (Rf=0%)       0.0888

That is, without the lookahead bias, the state space prediction algorithm is atrocious. Why is that?

Well, here’s the plot of the states:

In short, the online HMM algorithm in the depmix package seems to change its mind very easily, with obvious (negative) implications for actual trading strategies.

So, that wraps it up for this post. Essentially, the main message here is this: there's a vast difference between descriptive analysis (a.k.a. "where have you been, why did things happen") and predictive analysis (that is, "if I correctly predict the future, I get a positive payoff"). In my opinion, while descriptive statistics have their purpose in terms of explaining why a strategy may have performed how it did, ultimately, we're always looking for better prediction tools. In this case, depmix, at least in this out-of-the-box demonstration, does not seem to be the tool for that.

If anyone has had success using depmix (or another regime-switching algorithm in R) for prediction, I would love to see work that details the procedure taken, as it's an area I'm looking to expand my toolbox into, but I don't have any particularly good leads. Essentially, I'd like to think of this post as me describing my own experiences with the package.

Thanks for reading.

NOTE: On Oct. 5th, I will be in New York City. On Oct. 6th, I will be presenting at The Trading Show on the Programming Wars panel.

NOTE: My current analytics contract is up for review at the end of the year, so I am officially looking for other offers as well. If you have a full-time role which may benefit from the skills you see on my blog, please get in touch with me. My linkedin profile can be found here.

How To Compute Turnover With Return.Portfolio in R

This post will demonstrate how to take turnover into account when dealing with returns-based data using PerformanceAnalytics and the Return.portfolio function in R. It will demonstrate this with a basic strategy on the nine sector SPDRs.

So, first off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are many individuals out there with a plethora of questions (many of which have already been demonstrated on this blog), occasionally an industry veteran, a PhD statistics student from Stanford, or another very intelligent individual will ask a question on a topic I haven't yet touched on here, which prompts a post demonstrating another technical aspect of R. This is one of those times.

So, this demonstration will be about computing turnover in returns space using the PerformanceAnalytics package. Simply, outside of the PortfolioAnalytics package, PerformanceAnalytics with its Return.Portfolio function is the go-to R package for portfolio management simulations, as it can take a set of weights, a set of returns, and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics’s functions.

Again, the strategy is this: take the nine three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest in it. Normalize across all invested sectors (that is, 1/9th each if all nine are invested in, 100% into one if only one is invested in, and 100% cash, denoted with a zero-return vector, if no sectors are invested in). It's a simple, toy strategy, as the strategy isn't the point of the demonstration.

Here’s the basic setup code:

require(TTR)
require(PerformanceAnalytics)
require(quantmod)

symbols <- c("XLF", "XLK", "XLU", "XLE", "XLP", "XLF", "XLB", "XLV", "XLY")
getSymbols(symbols, src='yahoo', from = '1990-01-01-01')
prices <- list()
for(i in 1:length(symbols)) {
  tmp <- Ad(get(symbols[[i]]))
  prices[[i]] <- tmp
}
prices <- do.call(cbind, prices)

# Our signal is a simple adjusted price over 200 day SMA
signal <- prices > xts(apply(prices, 2, SMA, n = 200), order.by=index(prices))

# equal weight all assets with price above SMA200
returns <- Return.calculate(prices)
weights <- signal/(rowSums(signal)+1e-16)

# With Return.portfolio, need all weights to sum to 1
weights$zeroes <- 1 - rowSums(weights)
returns$zeroes <- 0

monthlyWeights <- na.omit(weights[endpoints(weights, on = 'months'),])
weights <- na.omit(weights)
returns <- na.omit(returns)

So, get the SPDRs, put them together, compute their returns, generate the signal, and create the zero-return column, since Return.portfolio treats a weight vector summing to less than 1 as a partial withdrawal of capital, and one summing to more than 1 as an addition of capital (big FYI here).

Now, here’s how to compute turnover:

out <- Return.portfolio(R = returns, weights = monthlyWeights, verbose = TRUE)
beginWeights <- out$BOP.Weight
endWeights <- out$EOP.Weight
txns <- beginWeights - lag(endWeights)
monthlyTO <- xts(rowSums(abs(txns[,1:9])), order.by=index(txns))
plot(monthlyTO)

So, the trick is this: when you call Return.portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight. These stand for Beginning Of Period Weight, and End Of Period Weight.

The way turnover is computed is straightforward: the end-of-period weight is the beginning-of-period weight after the period's return has drifted each asset's allocation. The next beginning-of-period weight is that end-of-period weight plus whatever transacting took place. Thus, to recover the actual transactions (that is, the turnover), subtract the previous end-of-period weight from the current beginning-of-period weight.
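A quick hedged sanity check of that identity (my own addition): between rebalances, the beginning-of-period weight equals the previous end-of-period weight exactly, so the reconstructed transactions should be non-zero on roughly one trading day per month.

# number of days with any reconstructed transactions vs. number of rebalance dates
sum(rowSums(abs(txns[, 1:9]), na.rm = TRUE) > 1e-8)
nrow(monthlyWeights)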

This is what such transactions look like for this strategy.

Something we can do with such data is take a one-year rolling turnover, accomplished with the following code:

yearlyTO <- runSum(monthlyTO, 252)
plot(yearlyTO, main = "running one year turnover")

It looks like this:

This essentially means that one year’s worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is around 800% at maximum. That may be pretty high for some people.
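A quick way to read those numbers off directly rather than eyeballing the plot (my own addition):

# peak and average one-year two-way turnover
max(yearlyTO, na.rm = TRUE)
mean(yearlyTO, na.rm = TRUE)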

Now, here's the application when you penalize turnover with transaction costs of 20 basis points (that is, it costs 20 cents to transact $100).

txnCosts <- monthlyTO * -.0020
retsWithTxnCosts <- out$returns + txnCosts
compare <- na.omit(cbind(out$returns, retsWithTxnCosts))
colnames(compare) <- c("NoTxnCosts", "TxnCosts20BPs")
charts.PerformanceSummary(compare)
table.AnnualizedReturns(compare)

And the result:


                          NoTxnCosts TxnCosts20BPs
Annualized Return             0.0587        0.0489
Annualized Std Dev            0.1554        0.1553
Annualized Sharpe (Rf=0%)     0.3781        0.3149

So, at 20 basis points on transaction costs, that takes about one percent in returns per year out of this (admittedly, terrible) strategy. This is far from negligible.

So, that is how you actually compute turnover and transaction costs. In this case, the transaction cost model was very simple. However, given that Return.portfolio returns transactions at the individual asset level, one can get as complex as one would like with modeling the transaction costs.
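For instance, here is a hedged sketch of a slightly richer model that charges each ETF its own (entirely made-up) cost rate on the absolute value of its transactions, rather than a flat 20 basis points on total turnover:

# hypothetical per-asset costs in basis points; these numbers are purely illustrative
perAssetBps <- c(10, 10, 15, 10, 15, 10, 15, 10, 15) / 10000
assetTxns <- abs(txns[, 1:9])
assetCosts <- xts(as.matrix(assetTxns) %*% perAssetBps, order.by = index(assetTxns))
retsWithAssetCosts <- na.omit(out$returns - assetCosts)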

Thanks for reading.

NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you’ll be able to find me there.

Create Amazing Looking Backtests With This One Wrong–I Mean Weird–Trick! (And Some Troubling Logical Invest Results)

This post will outline an easy-to-make mistake in writing vectorized backtests–namely in using a signal obtained at the end of a period to enter (or exit) a position in that same period. The difference in results one obtains is massive.

Today, I saw two separate posts from Alpha Architect and Mike Harris, both referencing a paper by Valeriy Zakamulin showing that some previous trend-following research by Glabadanidis was done with shoddy methodology, and that Glabadanidis's results were only reproducible by instituting lookahead bias.

The following code shows how to reproduce this lookahead bias.

First, the setup of a basic moving average strategy on the S&P 500 index from as far back as Yahoo data will provide.

require(quantmod)
require(xts)
require(TTR)
require(PerformanceAnalytics)

getSymbols('^GSPC', src='yahoo', from = '1900-01-01')
monthlyGSPC <- Ad(GSPC)[endpoints(GSPC, on = 'months')]

# change this line for signal lookback
movAvg <- SMA(monthlyGSPC, 10)

signal <- monthlyGSPC > movAvg
gspcRets <- Return.calculate(monthlyGSPC)

And here is how to institute the lookahead bias.

lookahead <- signal * gspcRets    # same-period signal: lookahead bias
correct <- lag(signal) * gspcRets # prior-period signal: the correct version

These are the “results”:

compare <- na.omit(cbind(gspcRets, lookahead, correct))
colnames(compare) <- c("S&P 500", "Lookahead", "Correct")
charts.PerformanceSummary(compare)
rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare))
logRets <- log(cumprod(1+compare))
chart.TimeSeries(logRets, legend.loc='topleft')

Of course, this equity curve is of no use, so here’s one in log scale.

As can be seen, lookahead bias makes a massive difference.

Here are the numerical results:

                            S&P 500  Lookahead   Correct
Annualized Return         0.0740000 0.15550000 0.0695000
Annualized Std Dev        0.1441000 0.09800000 0.1050000
Annualized Sharpe (Rf=0%) 0.5133000 1.58670000 0.6623000
Worst Drawdown            0.5255586 0.08729914 0.2699789
Calmar Ratio              0.1407286 1.78119192 0.2575219

Again, absolutely ridiculous.

Note that when using Return.portfolio (the function in PerformanceAnalytics), the package will automatically apply your weights to the next period's returns rather than the current period's. However, for those writing "simple" backtests that can be quickly done using vectorized operations, an off-by-one error can make all the difference between a backtest in the realm of reasonable and pure nonsense. That said, should one wish to test for said nonsense when faced with impossible-to-replicate results, the mechanics demonstrated above are the way to do it.

Now, onto other news: I’d like to thank Gerald M for staying on top of one of the Logical Invest strategies–namely, their simple global market rotation strategy outlined in an article from an earlier blog post.

Up until March 2015 (the date of the blog post), the strategy had performed well. However, after said date?

It has been a complete disaster, which, in hindsight, was evident when I passed it through the hypothesis-driven development framework process I wrote about earlier.

So, while there has been a great deal written about not simply throwing away a strategy because of short-term underperformance, and that anomalies such as momentum and value exist because of career risk due to said short-term underperformance, it’s never a good thing when a strategy creates historically large losses, particularly after being published in such a humble corner of the quantitative financial world.

In any case, this was a post demonstrating some mechanics, and an update on a strategy I blogged about not too long ago.

Thanks for reading.

NOTE: I am always interested in hearing about new opportunities which may benefit from my expertise, and am always happy to network. You can find my LinkedIn profile here.

On The Relationship Between the SMA and Momentum

Happy new year. This post will be a quick one covering the relationship between the simple moving average and time series momentum. The implication is that one can potentially derive better time series momentum indicators than the classical one applied in so many papers.

Okay, so the main idea for this post is quite simple:

I'm sure we're all familiar with classical momentum. That is, the price now compared to the price however long ago (3 months, 10 months, 12 months, etc.), e.g. P(now) – P(10).
And I'm sure everyone is familiar with the simple moving average indicator as well, e.g. SMA(10).

Well, as it turns out, these two quantities are actually related.

It turns out that if, instead of expressing momentum as the difference of two prices, you express it as the sum of returns (here, price changes), it can be written (for 10-month momentum) as:

MOM_10 = return of this month + return of last month + return of 2 months ago + … + return of 9 months ago, for a total of 10 months in our little example.

This can be written as MOM_10 = (P(0) – P(1)) + (P(1) – P(2)) + … + (P(9) – P(10)). (Each difference within parentheses denotes one month’s worth of returns.)

Which can then be rewritten by associative arithmetic as: (P(0) + P(1) + … + P(9)) – (P(1) + P(2) + … + P(10)).

In other words, momentum, a.k.a. the difference between two prices, can be rewritten as the difference between two rolling sums of prices. And what is a simple moving average? Simply a rolling sum of prices divided by however many prices were summed. Divide both sides by 10, and you get MOM_10 / 10 = SMA_10(now) – SMA_10(one month ago); that is, 10-month momentum is just ten times the one-month difference of a 10-month SMA.

Here’s some R code to demonstrate.

require(quantmod)
require(TTR)
require(PerformanceAnalytics)

getSymbols('SPY', from = '1990-01-01')
monthlySPY <- Ad(SPY)[endpoints(SPY, on = 'months')]
monthlySPYrets <- Return.calculate(monthlySPY)
#dividing by 10 since that's the moving average period for comparison
signalTSMOM <- (monthlySPY - lag(monthlySPY, 10))/10 
signalDiffMA <- diff(SMA(monthlySPY, 10))

# rounding to avoid floating-point differences; count how often the two signals agree
sum(round(signalTSMOM, 3)==round(signalDiffMA, 3), na.rm=TRUE)

With the resulting number of times these two signals are equal:

[1] 267

In short, every time.

Now, what exactly is the punchline of this little example? Here’s the punchline:

The simple moving average is…fairly simplistic as far as filters go. It works as a pedagogical example, but it has some well known weaknesses regarding lag, windowing effects, and so on.

Here's a toy example of how one can get a different momentum signal by changing the filter.

toyStrat <- monthlySPYrets * lag(signalTSMOM > 0)

emaSignal <- diff(EMA(monthlySPY, 10))
emaStrat <- monthlySPYrets * lag(emaSignal > 0)

comparison <- cbind(toyStrat, emaStrat)
colnames(comparison) <- c("DiffSMA10", "DiffEMA10")
charts.PerformanceSummary(comparison)
table.AnnualizedReturns(comparison)

With the following results:

                          DiffSMA10 DiffEMA10
Annualized Return            0.1051    0.0937
Annualized Std Dev           0.1086    0.1076
Annualized Sharpe (Rf=0%)    0.9680    0.8706

While the difference of EMA10 strategy didn’t do better than the difference of SMA10 (aka standard 10-month momentum), that’s not the point. The point is that the momentum signal is derived from a simple moving average filter, and that by using a different filter, one can still use a momentum type of strategy.

Or, put differently, the main/general takeaway here is that momentum is the slope of a filter, and one can compute momentum in an infinite number of ways depending on the filter used, and can come up with a myriad of different momentum strategies.
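As a final toy illustration (my own, and in no way a recommendation), here is the same momentum-of-a-filter idea using TTR's zero-lag EMA in place of the SMA or EMA; the zlemaStrat name is made up for the example:

zlemaSignal <- diff(ZLEMA(monthlySPY, 10))
zlemaStrat <- monthlySPYrets * lag(zlemaSignal > 0)
comparison2 <- cbind(toyStrat, emaStrat, zlemaStrat)
colnames(comparison2) <- c("DiffSMA10", "DiffEMA10", "DiffZLEMA10")
table.AnnualizedReturns(comparison2)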

Thanks for reading.

NOTE: I am currently contracting in Chicago, and am always open to networking. Contact me at my email at ilya.kipnis@gmail.com or find me on my LinkedIn here.

A First Attempt At Applying Ensemble Filters

This post will outline a first (failed) attempt at applying the ensemble filter methodology to come up with a weighting process on SPY that, in theory, should shift gradually in conviction between a bull market, a bear market, and anywhere in between. This is a follow-up to this blog post.

So, my thinking went like this: in a bull market, as one transitions from responsiveness to smoothness, responsive filters should be higher than smooth filters, and vice versa, as there’s generally a trade-off between the two. In fact, in my particular formulation, the quantity of the square root of the EMA of squared returns punishes any deviation from a flat line altogether (although inspired by Basel’s measure of volatility, which is the square root of the 18-day EMA of squared returns), while the responsiveness quantity punishes any deviation from the time series of the realized prices. Whether these are the two best measures of smoothness and responsiveness is a topic I’d certainly appreciate feedback on.

In any case, the idea I had off the top of my head was this: in addition to having a way of weighing multiple filters by their responsiveness (deviation from price action) and smoothness (deviation from a flat line), one could take the sum of the signs of the differences between each filter and its neighbor on the responsiveness-to-smoothness spectrum. Given enough ensemble filters (say, 101, so there are 100 differences), this would provide a way to move from full conviction in a bull market, to a bear market, to anything in between, and to have this be a smooth process without schizophrenic swings of conviction.

Here’s the code to do this on SPY from inception to 2003:

require(TTR)
require(quantmod)
require(PerformanceAnalytics)

getSymbols('SPY', from = '1990-01-01')

smas <- list()
for(i in 2:250) {
  smas[[i]] <- SMA(Ad(SPY), n = i)
}
smas <- do.call(cbind, smas)

xtsApply <- function(x, FUN, n, ...) {
  out <- xts(apply(x, 2, FUN, n = n, ...), order.by=index(x))
  return(out)
}

sumIsNa <- function(x){
  return(sum(is.na(x)))
}

ensembleFilter <- function(data, filters, n = 20, conviction = 1, emphasisSmooth = .51) {
  
  # smoothness error
  filtRets <- Return.calculate(filters)
  sqFiltRets <- filtRets * filtRets * 100 #multiply by 100 to prevent instability
  smoothnessError <- sqrt(xtsApply(sqFiltRets, EMA, n = n))
  
  # responsiveness error
  repX <- xts(matrix(data, nrow = nrow(filters), ncol=ncol(filters)), 
              order.by = index(filters))
  dataFilterReturns <- repX/filters - 1
  sqDataFilterQuotient <- dataFilterReturns * dataFilterReturns * 100 #multiply by 100 to prevent instability
  responseError <- sqrt(xtsApply(sqDataFilterQuotient, EMA, n = n))
  
  # place smoothness and responsiveness errors on same notional quantities
  meanSmoothError <- rowMeans(smoothnessError)
  meanResponseError <- rowMeans(responseError)
  ratio <- meanSmoothError/meanResponseError
  ratio <- xts(matrix(ratio, nrow=nrow(filters), ncol=ncol(filters)),
               order.by=index(filters))
  responseError <- responseError * ratio
  
  # for each term in emphasisSmooth, create a separate filter
  ensembleFilters <- list()
  for(term in emphasisSmooth) {
    
    # compute total errors, raise them to a conviction power, find the normalized inverse
    totalError <- smoothnessError * term + responseError * (1-term)
    totalError <- totalError ^ conviction
    invTotalError <- 1/totalError
    normInvError <- invTotalError/rowSums(invTotalError)
    
    # ensemble filter is the sum of candidate filters in proportion
    # to the inverse of their total error
    tmp <- xts(rowSums(filters * normInvError), order.by=index(data))
    
    #NA out time in which one or more filters were NA
    initialNAs <- apply(filters, 1, sumIsNa) 
    tmp[initialNAs > 0] <- NA
    tmpName <- paste("emphasisSmooth", term, sep="_")
    colnames(tmp) <- tmpName
    ensembleFilters[[tmpName]] <- tmp
  }
  
  # compile the filters
  out <- do.call(cbind, ensembleFilters)
  return(out)
}

t1 <- Sys.time()
filts <- ensembleFilter(Ad(SPY), smas, n = 20, conviction = 2, emphasisSmooth = seq(0, 1, by=.01))
t2 <- Sys.time()

par(mfrow=c(3,1))
filtDiffs <- sign(filts[,1:100] - filts[,2:101])
sumDiffs <- xts(rowSums(filtDiffs), order.by=index(filtDiffs))

plot(Ad(SPY)["::2003"])
plot(sumDiffs["::2003"])
plot(diff(sumDiffs["::2003"]))

And here’s the very underwhelming result:

Essentially, while I expected to see changes in conviction of maybe 20 at most, instead, my indicator of sum of sign differences did exactly as I had hoped it wouldn’t, which is to be a very binary sort of mechanic. My intuition was that between an “obvious bull market” and an “obvious bear market” that some differences would be positive, some negative, and that they’d net each other out, and the conviction would be zero. Furthermore, that while any individual crossover is binary, all one hundred signs being either positive or negative would be a more gradual process. Apparently, this was not the case. To continue this train of thought later, one thing to try would be an all-pairs sign difference. Certainly, I don’t feel like giving up on this idea at this point, and, as usual, feedback would always be appreciated.
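For what it's worth, here is a rough sketch of that all-pairs idea (my own, untested as a signal, with the allPairSum name made up): for each day, sum the signs of the differences between every pair of ensemble filters rather than only neighboring ones.

allPairSum <- xts(apply(filts, 1, function(v) {
  d <- outer(v, v, "-")        # d[i, j] = filter_i - filter_j
  sum(sign(d[upper.tri(d)]))   # count each unordered pair once
}), order.by = index(filts))
plot(allPairSum["::2003"])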

Thanks for reading.

NOTE: I am currently consulting in an analytics capacity in downtown Chicago. However, I am also looking for collaborators that wish to pursue interesting trading ideas. If you feel my skills may be of help to you, let’s talk. You can email me at ilya.kipnis@gmail.com, or find me on my LinkedIn here.