The second, and probably final, follow-up to the Mining for Three Day Candlestick Patterns post. Previously, we improved performance by adding more data to the search. In this post we’ll try to improve the system further by combining multiple predictors. The central question is how to combine the forecasts: I test averaging, weighted averaging, regression, and a voting scheme, and compare them against a baseline one-predictor strategy.
Set-Up
Combining predictors is a standard tactic in machine learning, but k-NN predictors are a bit of a special case. Typical ensemble methods depend on varying the data set to produce different and complementary predictors (as in boosting and bagging). This doesn’t work very well with nearest neighbor predictors, however, because they tend to be insensitive to variations in the data set. So what can we vary? The choice of k, the choice of inputs, the choice of distance measure for the nearest neighbor search, and some pre-processing options such as whether or not to adjust for volatility.
I am not going to vary the inputs, as that’s reserved for a post of its own. The idea is pretty simple: it’s essentially a random forest with k-NN predictors instead of decision trees (here’s an interesting paper on it).
So we’re left with k, the distance measure (sum of absolute or sum of squared distances), and volatility adjustment. I picked 10 combinations of these options:
The k values were picked at random, and I’m sure it’s possible to do better by optimizing them using cross-validation.
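For concreteness, each variant boils down to a small parameter set. A minimal sketch in C#, where KnnConfig and DistanceMeasure are hypothetical types of my own; all values other than the PF3 baseline’s are purely illustrative:

```csharp
// Each variant is defined by its k, its distance measure, and whether
// inputs are volatility-adjusted before the pattern search.
var predictors = new[]
{
    new KnnConfig(75, DistanceMeasure.SumSquared, false), // the PF3 baseline
    new KnnConfig(40, DistanceMeasure.SumAbsolute, true), // illustrative values
    // ...eight more combinations of (k, distance, vol adjustment)
};

enum DistanceMeasure { SumAbsolute, SumSquared }
record KnnConfig(int K, DistanceMeasure Distance, bool VolAdjusted);
```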
The signals obviously overlap significantly, and have similar stats when used one-by-one:
The instrument traded is SPY. Additional data is taken from the following instruments for the pattern search: EWY, EWD, EWC, EWQ, EWU, EWA, EWP, EWH, EWL, EFA, EPP, EWM, EWI, EWG, EWO, IWM, QQQ, EWS, EWT, and EWJ. The thresholds in each case are adjusted to result in a similar length of time spent in the market. Position sizing is based on the 10-day realized volatility of SPY, as described in this post: leverage is equal to 20% divided by the 10-day realized annualized standard deviation, with a maximum leverage of 200%. Finally, an IBS filter is applied that allows long positions only when IBS < 0.5 and short positions only when IBS > 0.5.
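A minimal sketch of the sizing and filter logic; the inputs (realizedVol, ibs) are assumed to be computed elsewhere, and the function names are my own:

```csharp
using System;

// realizedVol: 10-day realized annualized standard deviation of SPY returns.
// ibs: internal bar strength, (close - low) / (high - low).
static double Leverage(double realizedVol) =>
    Math.Min(0.20 / realizedVol, 2.0); // target 20% vol, capped at 200%

static bool AllowLong(double ibs) => ibs < 0.5;  // longs only on weak closes
static bool AllowShort(double ibs) => ibs > 0.5; // shorts only on strong closes
```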
The baseline is the PF3 predictor: k = 75, square distance measure, no volatility adjustment. Here’s the equity curve:
Averaging
The simplest approach is obviously to just average the 10 forecasts and then use the average value to generate trades. A long position is taken when the average forecast is greater than 15 basis points, and a short position when it is smaller than -12.5 basis points; a minimal sketch of the logic follows.
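Something along these lines, assuming the forecasts are expressed as raw next-day returns (so 0.0015 corresponds to 15 basis points):

```csharp
using System.Collections.Generic;
using System.Linq;

// Map the average of the 10 forecasts to a signal.
static int AverageSignal(IReadOnlyList<double> forecasts)
{
    double avg = forecasts.Average();
    if (avg > 0.0015) return 1;     // long above 15 bps
    if (avg < -0.00125) return -1;  // short below -12.5 bps
    return 0;                       // otherwise stay flat
}
```

Here’s what the equity curve looks like: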
It’s interesting to note that the dispersion of forecasts is inversely related to the accuracy of the average: the smaller the standard deviation of the forecasts, the more accurate the average tends to be. Unfortunately the effect is marginal, and thus not particularly useful for improving the strategy.
Weighted Averaging
A simple extension, which generates slightly better stats, is to weight each forecast before averaging. There’s a wide array of stats one can use here (Sharpe/Sortino/MAR ratios are obvious candidates); I picked the mean squared error. The inverse of the MSE becomes the forecast’s weight, so that smaller errors result in greater weights. The same thresholds as above are used to generate signals. The weights provide a slight improvement in terms of both the Sharpe and MAR ratios; a sketch of the weighting follows.
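A minimal sketch, assuming each predictor’s historical MSE has already been measured; the weighted forecast is then run through the same thresholds as the plain average:

```csharp
using System.Linq;

// Weighted average with weights proportional to 1/MSE, so predictors
// with smaller historical errors count for more.
static double WeightedForecast(double[] forecasts, double[] mse)
{
    double[] weights = mse.Select(e => 1.0 / e).ToArray();
    return forecasts.Zip(weights, (f, w) => f * w).Sum() / weights.Sum();
}
```

The equity curve: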
Voting
Using a threshold for each forecast (>5 basis points for a “long” vote, <-10 basis points for a “short” vote), each predictor is assigned a long or short vote. The overlap between the votes is significant, between 88% and 97% for the different predictors. How many votes should we require for a trade? It quickly becomes obvious that a simple majority isn’t enough: only near-unanimous decisions provide worthwhile predictions. The average next-day return when there are between 1 and 8 long votes is 0.4 basis points; after 9 or 10 long votes it is 23 basis points. A sketch of the vote counting is below.
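In code the counting might look like this; the minVotes parameter is my own addition, set to 9 to capture the 9-or-10-vote requirement:

```csharp
using System.Linq;

// Each predictor casts a long vote above 5 bps, a short vote below -10 bps.
// Only near-unanimous agreement triggers a trade.
static int VotingSignal(double[] forecasts, int minVotes = 9)
{
    int longVotes = forecasts.Count(f => f > 0.0005);
    int shortVotes = forecasts.Count(f => f < -0.0010);
    if (longVotes >= minVotes) return 1;
    if (shortVotes >= minVotes) return -1;
    return 0;
}
```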
The resulting equity curve looks like this:
Ordinary Least Squares
It’s also possible to combine the forecasts using regression, with next-day returns as the dependent variable and the k-NN predictor forecasts as the independent ones.
The distribution of forecasts with OLS is very tightly clustered around 0, and for some reason higher forecasts are not associated with higher next-day returns (as they are for the three methods above). I don’t really understand why this is the case. The thresholds for trades are 0.5 basis points for a long trade and -0.5 basis points for a short trade.
An issue here is, of course, multicollinearity due to the similarity of the independent variables. Among other problems, this can lead to overfitting, which usually shows up as coefficients with very large absolute values. Ridge regression addresses this by penalizing large coefficients, shrinking them toward zero.
A potentially interesting idea would be to constrain the coefficients to non-negative values, which might lessen the overfitting and is also intuitively appealing: we know all the forecasts are similarly accurate, so negative coefficients don’t make much sense.
Ridge Regression
If multicollinearity is a significant problem, ridge regression should help. It offers a significant improvement over the OLS approach, but still fares badly compared to the one-predictor case. The same thresholds as in the OLS approach are used; a sketch of the fit is below.
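A minimal closed-form sketch via the normal equations; setting lambda to 0 recovers plain OLS, so the same code covers both approaches. In practice a library (e.g. Accord.NET, mentioned in the comments) would be a better choice than hand-rolled linear algebra, and the inputs would typically be standardized first:

```csharp
using System;

// Ridge regression via the normal equations: beta = (X'X + lambda*I)^-1 X'y.
// X: n rows (days) by p columns (the k-NN forecasts), y: next-day returns.
// Intercept omitted for brevity. lambda = 0 gives OLS.
static double[] RidgeFit(double[,] X, double[] y, double lambda)
{
    int n = X.GetLength(0), p = X.GetLength(1);
    var A = new double[p, p]; // X'X + lambda*I
    var b = new double[p];    // X'y
    for (int i = 0; i < p; i++)
    {
        for (int r = 0; r < n; r++) b[i] += X[r, i] * y[r];
        for (int j = 0; j < p; j++)
        {
            double s = 0;
            for (int r = 0; r < n; r++) s += X[r, i] * X[r, j];
            A[i, j] = s + (i == j ? lambda : 0);
        }
    }
    return Solve(A, b);
}

// Gaussian elimination with partial pivoting; fine for p = 10.
static double[] Solve(double[,] A, double[] b)
{
    int p = b.Length;
    for (int col = 0; col < p; col++)
    {
        int pivot = col;
        for (int r = col + 1; r < p; r++)
            if (Math.Abs(A[r, col]) > Math.Abs(A[pivot, col])) pivot = r;
        for (int c = 0; c < p; c++)
            (A[col, c], A[pivot, c]) = (A[pivot, c], A[col, c]);
        (b[col], b[pivot]) = (b[pivot], b[col]);
        for (int r = col + 1; r < p; r++)
        {
            double f = A[r, col] / A[col, col];
            for (int c = col; c < p; c++) A[r, c] -= f * A[col, c];
            b[r] -= f * b[col];
        }
    }
    var beta = new double[p];
    for (int r = p - 1; r >= 0; r--)
    {
        double s = b[r];
        for (int c = r + 1; c < p; c++) s -= A[r, c] * beta[c];
        beta[r] = s / A[r, r];
    }
    return beta;
}
```

Here’s the equity curve: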
Stats
Here are the stats for the single-predictor base case and all the combination methods:
All of them other than voting failed horribly. I’m not sure why, but it’s good to know. The improvement provided by the voting system is sizable, however: not only does the voting-based strategy achieve significantly higher risk-adjusted returns, it does so while spending 15% less time in the market. Those results should also be easy to improve on by simply adding more predictors. The marginal gain from each new predictor will diminish, but there is definitely more value to wring out. And this is just with 3-day patterns: we can easily add 2- and 4-day patterns into the mix as well.
Other Possibilities
A wide array of machine learning methods can be used to combine predictions. Especially if the number of forecasts grew larger, techniques such as random forests or ANNs would be interesting to investigate. But as long as simpler methods work this well, I think there is little reason to increase the complexity (not to mention the opaqueness) of the strategy.
Sam says:
This is probably a stupid question, but why are you going with K nearest neighbors, rather than choosing some sort of similarity cut-off and having the K vary? That is, only consider 3-day patterns which are >90% similar to one another.
qusma says:
Not stupid at all. Initially I discarded that approach and picked a fixed K because selecting patterns by distance alone would not yield a large enough sample early on, when the history wasn’t long enough. With the additional data from all the other instruments, however, this shouldn’t be an issue.
Another problem is picking a “sane” radius. A tiny change in the search radius can change the number of patterns found by an order of magnitude: some patterns are so rare that you might not get any matches (in which case you have to widen the search radius dynamically), while others return hundreds of matches.
I just tested it out (with a radius of 0.012 using square distances) and the results are very similar to the ones with a fixed K… it’s definitely possible (and probably a good idea) to use both approaches in an ensemble, though.
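For reference, the dynamic widening is just a loop along these lines (the widening factor and minimum-match count here are arbitrary choices of mine):

```csharp
using System.Linq;

// Select neighbors within a fixed radius, widening the radius until
// at least minMatches patterns are found (handles rare patterns).
static int[] NeighborsWithinRadius(double[] distances, double radius,
                                   int minMatches = 20)
{
    int[] idx;
    do
    {
        idx = Enumerable.Range(0, distances.Length)
                        .Where(i => distances[i] <= radius)
                        .ToArray();
        radius *= 1.5; // widen and retry if too few matches
    } while (idx.Length < minMatches);
    return idx;
}
```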
Sam says:
Instead of using 10 similar strategies, what happens if you use each of the underlying instruments (QQQ, EW*, etc) as predictors and combine them using similar methods?
qusma says:
Interesting idea, I’ll get back to you with the results.
qusma says:
Didn’t work out. http://i0.wp.com/qusma.com/wp-content/uploads/2013/09/stats-with-one-predictor-per-instrument.png
Sam says:
Thanks for running this. Very interesting that the combination systems show similar performance relative to one another. Have you tested the 10-combination system in this post on instruments other than SPY? If so, do you see similar results of Voting always winning?
I’ll have my set-up running soon, also in C#. Thanks for the Accord.NET tip — it’s a fantastic library.
qusma says:
I haven’t tested it on other instruments yet…my platform only supports single-instrument backtesting right now, which makes running every iteration on every instrument rather cumbersome. I should have proper portfolio backtesting running within a week or so though.
Emil Bigaj says:
Are you trading this?