Just a short post today. Jack Damn has been tweeting about consecutive up/down days lately, which inspired me to go looking for a potential edge. The NASDAQ 100 has posted 6 consecutive up days as of yesterday’s close. Is this a signal of over-extension? Unfortunately there are very few instances of such a high number of consecutive up days, so it’s impossible to speak with certainty about any of the numbers. Let’s take a look at QQQ returns (including dividends) after X consecutive up or down days:
There’s no edge on the short side here if you’re looking for a mean reversion trade. Looking out over longer horizons, many consecutive up days tend to be followed by above-average returns. This is another argument in favor of the idea that the bottom of the current pullback has been reached, but nothing really useful in terms of actual trading.
Finally, a look at the probability of positive returns following X consecutive up or down days:
UPDATE: Here’s the spreadsheet.
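For reference, a minimal sketch of how the streaks themselves can be counted (my own illustrative code with placeholder data; the spreadsheet above contains the actual numbers):

```matlab
% Count consecutive up/down day streaks from a vector of adjusted closes c;
% streak(t) = +3 means the 3rd consecutive up day, -2 the 2nd down day, etc.
c = cumprod(1 + 0.01 * randn(2000, 1)) * 50;   % placeholder price series
ret = diff(c) ./ c(1:end-1);                   % daily returns
streak = zeros(size(ret));
streak(1) = sign(ret(1));
for t = 2:numel(ret)
    if ret(t) > 0
        streak(t) = max(streak(t-1), 0) + 1;   % extend or start an up streak
    elseif ret(t) < 0
        streak(t) = min(streak(t-1), 0) - 1;   % extend or start a down streak
    end                                        % an unchanged close resets to 0
end
```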
Taking another look at portfolio optimization methods as they apply to relative strength- & momentum-based GTAA portfolios, this time I present a novel method that uses factor momentum in an attempt to tilt the allocation towards the factors with strong momentum, and away from those without it. The approach was inspired by a nice little paper by Bhansali et al.: The Risk in Risk Parity: A Factor Based Analysis of Asset Based Risk Parity.
My first idea was something I would have termed “Global Tactical Factor Allocation”: a relative strength & momentum approach based directly on risk factors instead of assets. It failed miserably, but the work showed some interesting alternative paths. One of these was to decompose returns into their principal components (factors) and base a weighting algorithm on factor momentum. Early testing shows promise.
PCA basics
A quick intro to principal component analysis (you can skip to the next section if you’re not interested) before we proceed. PCA is a method of linearly transforming data in a potentially useful way. It re-expresses the data such that the principal components are uncorrelated (orthogonal) to each other, with the first component pointing in the direction of maximum variance (i.e. it explains the largest possible share of the total variance of any orthogonal component), and so forth for the 2nd…nth components. It can also be used as a method of dimensionality reduction by ignoring the lower-variance components, but that is not relevant to the present analysis.
The principal components can be obtained very easily. In MATLAB:
- Assume r is a matrix of the de-meaned returns of our assets (one column per asset).
- [V D] = eig(cov(r)); will give us the eigenvectors (V) and eigenvalues (D) of the covariance matrix of r.
- The diagonal of D contains the variance of each principal component. To get the % of total variance explained by each component, simply: 100 * flipud(diag(D)) / sum(diag(D)).
- V contains the linear coefficients of our assets to each component; fliplr(V) to get them in the “right” order.
- Finally, to find the actual principal components we simply multiply the returns by the eigenvector matrix: r*V (using the re-ordered V, so the components come out in descending order of variance).
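Putting those steps together, a minimal runnable sketch (with placeholder data; in practice r would be your matrix of de-meaned asset returns):

```matlab
% Minimal sketch: principal components of de-meaned asset returns.
r = randn(1000, 5);              % placeholder returns (T days x N assets)
r = r - mean(r);                 % de-mean each column

[V, D] = eig(cov(r));            % eigenvectors/eigenvalues, ascending order
V  = fliplr(V);                  % column 1 = highest-variance component
ev = flipud(diag(D));            % eigenvalues in descending order

pctVar = 100 * ev / sum(ev);     % % of total variance explained per component
pc = r * V;                      % the principal components (factor returns)
```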
Data & Methodology
The assets used are the following:
The data covers the period from January 2002 to October 2012 and includes dividends. Some of the assets do not have data for the entire period; they simply become available when they have 252 days of data. The methodology is essentially the standard relative-strength & momentum GTAA approach:
- Rebalance every Friday.
- Rank assets by their 120-day returns and pick the top 5.
- Discard any assets that had negative returns during the period, even if they are in the top 5 (see the sketch after this list).
- Apply portfolio optimization algorithm.
- Trade on close.
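As an illustration, here is a minimal sketch of the ranking and momentum-filter steps (hypothetical variable names, placeholder data):

```matlab
% Hypothetical sketch of the weekly selection step. prices is a T-by-N matrix
% of dividend-adjusted closes; t is the index of the Friday rebalance.
prices = cumprod(1 + 0.0003 + 0.01 * randn(500, 10));  % placeholder data
t = size(prices, 1);

lookback = 120;
mom = prices(t, :) ./ prices(t - lookback, :) - 1;     % 120-day returns
[~, order] = sort(mom, 'descend');
selected = order(1:5);                                 % top 5 by relative strength
selected = selected(mom(selected) > 0);                % drop negative-return assets
```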
Commissions are not taken into account (though I acknowledge they are a significant issue due to frequent rebalancing). The algorithms I will be benchmarking against are: equal weights, naïve risk parity (RP), equal risk contribution (ERC), and minimum correlation (MCA). See this post for more on these methods.
A look at the factors
When extracting principal components from asset returns there is typically an a posteriori identification of each component with a risk factor. Looking at stock returns, the first factor is typically identified as the so-called “market” factor. Bhansali et al. identify the first two factors as “growth” and “inflation”. I leave this identification process as homework for the reader.
Using the last 250 days in the data, here is the % of total variance explained by each factor:
And here are the factor loadings (for the top 3 factors) for each asset:
Factor momentum weighting
The general idea is to decompose the returns of our chosen assets into principal components, identify the factors that have relative strength and absolute momentum, and then tilt the weights towards them and away from the low- and negative-momentum factors. We are left with the non-trivial problem of constructing a portfolio that has exposure to the factor(s) we want and is neutral to the rest. We will do this by setting target betas for the portfolio and then minimizing the difference between the target and realized betas given a set of weights (using a numerical optimizer).
- After selecting the assets using the above steps, decompose their returns into their principal components.
- Rank the factors on their 120-day returns and pick the top 3.
- Discard any factors that had negative returns during the period, even if they are in the top 3.
- Discarded factors have a target beta of 0.
- The other factors have a target beta of 1.
The objective then is to minimize:

$$\sum_{i=1}^{M} \left( t_i - \beta_i \right)^2$$

where $t_i$ is the target portfolio beta to risk factor $i$, $\beta_i$ is the realized portfolio beta to risk factor $i$, and $M$ is the number of risk factors.
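A minimal sketch of this optimization step (assuming the re-ordered eigenvector matrix V from the PCA section above and MATLAB’s fmincon from the Optimization Toolbox; because the components are orthogonal, the portfolio’s beta to factor i is simply the i-th element of V'*w):

```matlab
% Find long-only, fully-invested weights whose factor betas match the targets.
% tgt(i) = 1 for selected factors, 0 for discarded ones (example values below).
N = size(V, 1);
tgt = zeros(N, 1);
tgt(1:2) = 1;                                  % e.g. the top two factors selected

obj = @(w) sum((tgt - V' * w).^2);             % squared beta tracking error
w0  = ones(N, 1) / N;                          % start from equal weights
Aeq = ones(1, N);  beq = 1;                    % weights sum to 1
lb  = zeros(N, 1); ub = ones(N, 1);            % long-only
w = fmincon(obj, w0, [], [], Aeq, beq, lb, ub);
```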
Results
Here are the performance metrics of the benchmarks and the factor momentum algorithm:
The selling point of the factor momentum approach is the consistency it manages to achieve, while also maintaining excellent volatility-adjusted and drawdown-adjusted returns. There are two long periods during which the other optimization methods did quite badly (2008-2009 and from the middle of 2011 to the present); factor momentum just keeps going.
One interesting point is that the factor momentum algorithm tends to allocate to fewer holdings than the other approaches (because all the other algorithms will always have non-zero weights for any assets selected, which is not the case for FM). There may be some low-hanging fruit here in terms of diversification.
Some other potentially interesting ideas for the future: is there any value in the momentum of residuals (in a regression against the factor returns), similar to the Blitz, Huij & Martens approach? An interesting extension would be to loosen the factor-neutral constraint to leave room for other objectives. Finding a smarter way to calculate target betas would also be an interesting and probably fruitful exercise; taking each factor’s volatility and momentum into account is probably the most obvious idea.
UPDATE: read The IBS Effect: Mean Reversion in Equity ETFs instead of this post, it features more recent data and deeper analysis.
The location of the closing price within the day’s range is a surprisingly powerful predictor of next-day returns for equity indices. The closing price in relation to the day’s range (or CRTDR [UPDATE: as reader Jan mentioned in the comments, there is already a name for this: Internal Bar Strength or IBS] if you’re a fan of unpronounceable acronyms) is simply calculated as such:

$$\mathrm{CRTDR} = \frac{\mathrm{Close} - \mathrm{Low}}{\mathrm{High} - \mathrm{Low}}$$
It takes values between 0 and 1 and simply indicates at which point along the day’s range the closing price is located. In this post I will take a look not only at returns forecasting, but also how to use this value in conjunction with other indicators. You may be skeptical about the value of something so extremely simplistic, but I think you’ll be pleasantly surprised.
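In code form it is a one-liner (placeholder bars for illustration):

```matlab
% CRTDR/IBS per bar: where within the day's range did the close land?
h = [10.5; 10.8];  l = [10.0; 10.2];  c = [10.1; 10.7];  % placeholder bars
crtdr = (c - l) ./ (h - l);   % 0 = closed at the low, 1 = closed at the high
```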
The basics: QQQ and SPY
First, a quick look at QQQ and SPY next-day returns depending on today’s CRTDR:
A very promising start. Now the equity curves for each quartile:
That’s quite good; consistency through time and across assets is important and we’ve got both in this case. The magnitude of the out-performance of the bottom quartile is very large; I think we can do something useful with it.
There are several potential improvements to this basic approach: using the range of several days instead of only the last one, adjusting for the day’s close-to-close return, and averaging over several days are a few of the more obvious routes to explore. However, for the purposes of this post I will simply continue to use the simplest version.
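The multi-day-range variant, for example, is easy to sketch (reusing the h, l, c vectors from the snippet above):

```matlab
% Illustrative k-day variant: close location within the k-day high-low range.
k = 3;
hk = movmax(h, [k-1, 0]);            % rolling k-day high
lk = movmin(l, [k-1, 0]);            % rolling k-day low
crtdr_k = (c - lk) ./ (hk - lk);
```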
CRTDR Internationally
A quick look across a larger array of assets, which is always an important test (here I also incorporate a bit of shorting):
Long when CRTDR < 45%, short when CRTDR > 95%. $10k per trade. Including commissions of $0.005 per share, excluding dividends.
One question that comes up when looking at ETFs of foreign indices is the effect of non-overlapping trading hours. Would we be better off using the ETF trading hours or the local trading hours to determine the range and our predictions? Let’s take a look at the EWU ETF (iShares MSCI United Kingdom Index Fund) vs the FTSE 100 index, with the following strategy:
- Go long on close if CRTDR < 45%
- Go short on close if CRTDR > 95%
FTSE vs EWU CRTDR strategy, 1996-2012. $1m per trade (the number was a technical necessity due to the price of the FTSE 100 index).
Fascinating! This result left me completely stumped. I would love to hear your ideas about this…I have a feeling that there must be some sort of explanation, but I’m afraid I can’t come up with anything realistic.
Trading Signal or Filter?
It should be noted that I don’t actually use the CRTDR as a signal to take trades at all. Given the above results you may find this surprising, but all the positive returns are already captured by other, similar (and better) indicators (especially short-term price-based indicators such as RSI(3)). Instead I use it in reverse: as a filter to exclude potential trades. To demonstrate, let’s have a look at a very simplistic mean reversion system:
- Buy QQQ at close when RSI(3) < 10
- Sell QQQ at close when RSI(3) > 50
On average, this will result in a daily return of 0.212%. So we have two approaches in our hands that both have positive expectancy; what happens if we combine them?
- Go long either on the RSI(3) criteria above OR CRTDR < 50%
RSI(3) and RSI(3) w/ CRTDR strategy applied to QQQ. Commissions not included.
This is a bit surprising: putting together two systems, both of which have positive expectancy, results in significantly lower returns. At this point some may say “there’s no value to be gained here”. But fear not, there are significant returns to be wrung out of the CRTDR! Instead of using it as a signal, what if we use it in reverse as a filter? Let’s investigate further: what happens if we split these days up by CRTDR?
Now that’s quite interesting. Combining them produces very bad results, but we have an excellent method to filter out bad RSI(3) trades instead. Let’s have a closer look at the interplay between RSI(3) signals and the CRTDR:
Next-day QQQ returns.
And now the equity curves with and without the CRTDR < 50% filter:
RSI(3) and RSI(3) w/ CRTDR < 50% filter applied to QQQ. Commissions not included.
That’s pretty good. Consistent performance and out-performance relative to the vanilla RSI(3) strategy. Not only that, but we have filtered out over 35% of trades, which means far less money spent on commissions and frees up capital for other trades.
UPDATE: I neglected to mention that I use Cutler’s RSI and not the “normal” one, the difference being the use of simple moving averages instead of exponential moving averages. I have also uploaded an Excel sheet and MultiCharts .NET signal code that replicate most of the results in the post.
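For reference, a minimal sketch of the filtered system (using Cutler’s simple-moving-average RSI per the note above; placeholder data, and ignoring the division-by-zero case of n unchanged closes):

```matlab
% Cutler's RSI(3) (simple moving averages) plus the CRTDR < 50% entry filter.
c = cumprod(1 + 0.01 * randn(1000, 1)) * 50;   % placeholder closes
halfrng = 0.01 * c;
h = c + halfrng .* rand(size(c));              % placeholder highs
l = c - halfrng .* rand(size(c));              % placeholder lows

n = 3;
d  = diff(c);
up = max(d, 0);   dn = max(-d, 0);
avgUp = movmean(up, [n-1, 0]);                 % n-day simple averages
avgDn = movmean(dn, [n-1, 0]);
rsi = [NaN; 100 * avgUp ./ (avgUp + avgDn)];   % realign (diff drops one bar)

crtdr = (c - l) ./ (h - l);
entry = rsi < 10 & crtdr < 0.5;                % buy QQQ on close
exit_ = rsi > 50;                              % sell on close
```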