Tag: portfolio optimization

Problems in the Optimization of Swing Portfolios

Assume you have a bunch of different systems that trade stocks, equity index ETFs, bond ETFs, and some other alternative assets, e.g. commodity ETFs. All the systems take both long and short positions. What are the questions your portfolio optimization approach must answer? This is a post focused on high-level conceptual issues rather than implementation details.

 

Do I want market factor exposure?

By “market factor” I refer to the first component that results from PCAing a bunch of equities (which is similar but not identical to the first factor in all the different academic factor models). In other words it’s the underlying process that drives global equity returns as a whole.

Case A: Your system says “go long SPY”. This is simple: you do want exposure to the market factor. This is basically a beta timing strategy: you are forecasting a positive return on the market factor, and using the long SPY position to capture it. Risk management basically involves decreasing your SPY exposure until you hit your desired risk levels.

Case B: Your system says “go long SPY and short QQQ”. Suddenly this is not a beta timing strategy at all, but something very different. Clearly what we care about in this scenario is relative performance of the unique return components of SPY and QQQ. Therefore, we can think about this as a kind of statistical arbitrage trade. The typical approach here is to go beta neutral or dollar neutral, but naive risk parity is also reasonable.

Case C: Your system says:

  • Go long SPY
  • Go long EFA
  • Go long AAPL
  • Go long AMZN
  • Go short NFLX

I’m going long the entire world, and short NFLX. This looks more like a beta timing trade (Case A) rather than a statarb-style trade (Case B), because the number and breadth of the trades concentrated on the long side imply a forecast of a positive return on the market factor. I probably don’t want to be beta neutral in this case. Not every instance of long/short equity trades can be boiled down to “go market neutral”. So how do we determine if we want to be exposed to the market factor or not?

One intriguing solution would be to create a forecasting model on top of the strategies: it would take as input your signals (and perhaps other features as well), and predict the return of the market factor in the next period. If the market factor is predicted to rise or fall significantly, then do a beta timing trade. If not, do a beta-neutral trade. If such forecasting can add value, the only downside is complexity.
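Purely as an illustration, here is a minimal sketch of what such a decision layer could look like, assuming a hypothetical signal history sig (T x N), a history of market factor (PC1) returns pc1, and the current signal vector sigNow; the linear model and the threshold are arbitrary choices of mine, not a recommendation.

  % Hypothetical decision layer: forecast the market factor from the signals,
  % then decide between a beta timing trade and a beta-neutral trade.
  X = [ones(size(sig, 1) - 1, 1), sig(1:end-1, :)];   % lagged signals as predictors
  y = pc1(2:end);                                      % next-period market factor return
  b = X \ y;                                           % OLS fit of the forecasting model
  fcst = [1, sigNow] * b;                              % forecast for the coming period
  k = 0.5 * std(y);                                    % "significant move" threshold (arbitrary)
  betaTiming = abs(fcst) > k;   % true: keep market exposure; false: neutralize it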

 

Is the market factor enough?

Let’s say you trade 5 instruments: 3 US ETFs (SPY, QQQ, IWM), a Hong Kong ETF (EWH), and a China ETF (FXI). Let’s take a look at some numbers. Here are the correlations and MDS plot so you can get a feel for how these instruments relate to each other:

 

[Figure: correlation matrix and MDS plot]

 

PCAing their returns shows exactly what you’d expect: the first component is what you’d call the market factor and all the ETFs have a similar, positive loading. The second component is a China-specific factor.

[Figure: PCA factor loadings]

And here’s the biplot:

[Figure: PCA biplot]

Case D: Your systems say:

  • Go long SPY
  • Go long EWH
  • Go short FXI

In this situation there is no practical way to be neutral on both the global market factor and the China-specific factor. To quickly illustrate why, consider these two scenarios of weights and the resulting factor loadings:

  Weights                    Factor Loadings
  SPY     EWH     FXI        PC1      PC2
  0.45    0.50   -1.00       0.01    -0.39
  0.50    0.65   -0.40       0.35    -0.01
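For reference, numbers like these are just weighted sums of the PCA loadings. A minimal sketch of the calculation (the loadings matrix V below is invented for illustration, so the output will not match the table exactly):

  % Portfolio factor exposures from asset weights. V(i, k) is the loading of
  % asset i on principal component k; these values are made up for illustration.
  V = [ 0.55   0.10 ;     % SPY
        0.60  -0.35 ;     % EWH
        0.58  -0.55 ];    % FXI
  w = [0.45; 0.50; -1.00];          % weights from the first scenario in the table
  portLoadings = w' * V             % 1 x 2 row: portfolio exposure to PC1 and PC2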

Should we try to minimize both? Should we treat EWH/FXI as an independent statarb trade and neutralize the China factor, and then use our SPY position to do beta timing on the global market factor? How do we know which option to pick? How do we automate this decision?

A similar problem comes up if we segment the market by sectors. It’s possible that your strategy has a selection effect that causes you to be overweight certain sectors (low volatility equity strategies famously have this issue), in which case it might be prudent to start with a per-sector risk budget and then optimize your portfolio within each sector separately.

If you’re running a pure statarb portfolio, this is an easy decision, and given the large number of securities in your portfolio there is a possibility of minimizing exposure not only to the market factor but to many others as well. For more on how to practically achieve this, check out Valle, Meade & Beasley, Factor neutral portfolios.

 

Do I want exposure to other, non-equity factors?

Sometimes.

Let’s see what happens when we add TLT (20+ year treasuries) and GDX (gold miners) to our asset universe. Correlations and MDS:

PCA:

[Figure: PCA factor loadings for the expanded universe]

Some things to notice: as you’d expect, we need more factors to capture most of the variance. On top of that, the interpretation of what each factor “means” is more difficult: the first factor is still the market factor, but after that…my guess is PC2 is interest-rate related, and PC3 looks China-specific? In any case, I think the correct approach here is to be agnostic about what these factors “mean” and just approach it as a purely statistical problem.

At this point I also want to note that these factors and their respective instrument loadings are not stable over time. Your choice of lookback period will influence your forecasts.

Again we run into similar problems. If your systems say “go long GDX”, that looks like a beta timing play on PC2/3?/4?. But let’s say you are running two unrelated systems, and they give you the following signals:

Case E:

  • Long TLT
  • Short GDX

Should we treat these as two independent trades, or should we treat it as a single statarb-style trade? Does it matter that these two instruments are in completely different asset classes when their PC1 and PC2 loadings are so similar? Is it a relative value trade, or are we doing two separate (but in the same direction) beta-timing trades on PC4? Hard to tell without an explicit model to forecast factor returns. If you don’t know what is driving the returns of your trades, you cannot optimize your portfolio.

 

Do I treat all instruments equally?

Depends. If you go the factor-agnostic route and treat everything as a statistical abstraction, then perhaps it’s best to also be completely agnostic about the instruments themselves. On the other hand I can see some good arguments for hand-tailored rules that would apply to specific asset classes, or sub-classes. Certainly when it comes to risk management you shouldn’t treat individual stocks the same way you treat indices — perhaps this should be extended to portfolio optimization.

It also depends on what other instruments you include in your universe. If you’re only trading a single instrument in an asset class, then perhaps you can treat every trade as a beta timing trade.

 

How do I balance the long and short sides?

In a simple scenario like Case B (long SPY/short QQQ) this is straightforward. You pick some metric (beta, $, volatility, etc.) and just balance the two sides.

But let’s go back to the more complex Case C. The answer of how to balance the long and short sides depends on our earlier answer: should we treat this scenario as beta timing or statarb? The problem with the latter is that any sensible risk management approach will severely limit your NFLX exposure. And since you want to be beta (or $, or σ) neutral, this means that by extension the long side of the trade will also be severely limited even though it carries far less idiosyncratic risk.

Perhaps we can use the risk limits to determine if we want to be beta-neutral or not. If you can’t go neutral because of limits, then it’s a beta timing trade. Can you put your faith into such a hacky heuristic?

What do we do in situations where we have to balance a long/short portfolio with all sorts of different asset classes? Do we segregate the portfolio (by class, by factor loading?) in order to balance each part separately? As we saw in Case E, it’s not always clear which instruments go into which group.

 

 

How do I allocate between strategies?

So far we’ve been looking at instruments only, but perhaps this is the wrong approach. What if we used the return series of strategies instead? After all, the expected return distribution conditional on a signal being fired by a particular strategy is not the same as the unconditional expected return distribution of the instrument.

What do we do if Strategy A wants to put on 5 trades, and Strategy B wants to put on 1 trade? Do we care about distributing the risk budget across instruments, or across strategies, or maybe a mix of both? How do we balance that with risk limits? How do we balance that with factor loadings?

 

So, to sum things up. The ideal portfolio optimization algorithm perfectly balances trading costs, instruments, asset classes, factor exposure (but only when needed), strategies, and does it all under constraints imposed by risk management. In the end, I don’t think there are any good answers here. Only ugly collections of heuristics.


Using Factor Momentum to Optimize GTAA Portfolios

Taking another look at portfolio optimization methods as they apply to relative strength- & momentum-based GTAA portfolios, this time I present a novel method that uses factor momentum in an attempt to tilt the allocation towards the factors behind the momentum, and away from factors that do not have strong momentum. The approach was inspired by a nice little paper by Bhansali et al.: The Risk in Risk Parity: A Factor Based Analysis of Asset Based Risk Parity.

My first idea was something I would have termed “Global Tactical Factor Allocation”: a relative strength & momentum approach based directly on risk factors instead of assets. It failed miserably, but the work showed some interesting alternative paths. One of these was to decompose returns into their principal components (factors) and base a weighting algorithm on factor momentum. Early testing shows promise.

PCA basics

A quick intro to principal component analysis (you can skip to the next section if you’re not interested) before we proceed. PCA is a method for linearly transforming data in a potentially useful way. It re-expresses the data such that the principal components are uncorrelated (orthogonal) to each other, and the first component captures the direction of maximum variance (i.e. it explains the largest possible share of the total variance of any single component), and so forth for the 2nd…nth components. It can be used as a method of dimensionality reduction by ignoring the lower-variance components, but that is not relevant to the present analysis [1].

 The principal components can be obtained very easily. In MATLAB:

  • Assume r are the de-meaned returns of our assets.
  • [V D] = eig(cov(r)); will give us the eigenvectors (V) and eigenvalues (D) of the covariance matrix of r.
  • The diagonal of D contains the variance of each principal component. To get the % of total variance explained by each component, simply: 100 * flipud(diag(D)) / sum(diag(D)).
  • V contains the linear coefficients of our assets to each component; fliplr(V) to get them in the “right” order.
  • Finally, to find the actual principal components we simply multiply the returns with V: r*V.
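Putting those steps together, a self-contained sketch (random returns stand in for real data):

  % Consolidated version of the steps above.
  r = randn(500, 5);                    % stand-in for daily asset returns (T x N)
  r = r - mean(r);                      % de-mean (implicit expansion, R2016b+)
  [V, D] = eig(cov(r));                 % eigenvectors / eigenvalues of the covariance matrix
  varExplained = 100 * flipud(diag(D)) / sum(diag(D));   % % of variance per component, descending
  loadings = fliplr(V);                 % asset loadings, first (largest) component first
  pcs = r * loadings;                   % the principal component (factor) return series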

Data & Methodology

The assets used are the following:

The data covers the period from January 2002 to October 2012 and includes dividends. Some of the assets do not have data for the entire period; they simply become available when they have 252 days of data. The methodology is essentially the standard relative-strength & momentum GTAA approach:

  • Rebalance every Friday.
  • Rank assets by their 120-day returns and pick the top 5.
  • Discard any assets that had negative returns during the period, even if they are in the top 5.
  • Apply portfolio optimization algorithm.
  • Trade on close.
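A quick sketch of the selection step described above (variable names are mine; prices is assumed to be a T x N matrix of dividend-adjusted closes):

  % Relative strength & absolute momentum selection, as in the steps above.
  ret120 = prices(end, :) ./ prices(end-120, :) - 1;   % 120-day total returns
  [~, order] = sort(ret120, 'descend');
  selected = order(1:min(5, numel(order)));            % relative strength: top 5
  selected = selected(ret120(selected) > 0);           % discard assets with negative returns
  % 'selected' holds the column indices of the assets passed to the optimizer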

Commissions are not taken into account (though I acknowledge they are a significant issue due to frequent rebalancing). The algorithms I will be benchmarking against are: equal weights, naïve risk parity (RP), equal risk contribution (ERC), and minimum correlation (MCA). See this post for more on these methods.

A look at the factors

When extracting principal components from asset returns there is typically an a posteriori identification of each component with a risk factor. Looking at stock returns, the first factor is typically identified as the so-called “market” factor. Bhansali et al. identify the first two factors as “growth” and “inflation”. I leave this identification process as homework to the reader.

Using the last 250 days in the data, here is the % of total variance explained by each factor:

[Figure: % of total variance explained by each factor]

And here are the factor loadings (for the top 3 factors) for each asset:

[Figure: factor loadings of each asset on the top 3 factors]

Factor momentum weighting

The general idea is to decompose the returns of our chosen assets into principal components, identify the factors that have relative strength and absolute momentum, and then tilt the weights towards them and away from the low- and negative-momentum factors. We are left with the non-trivial problem of constructing a portfolio that has exposure to the factor(s) we want and is neutral to the rest. We will do this by setting target betas for the portfolio and then minimizing the difference between the target and realized betas given a set of weights (using a numerical optimizer).

  • After selecting the assets using the above steps, decompose their returns into their principal components.
  • Rank the factors on their 120-day returns and pick the top 3.
  • Discard any factors that had negative returns during the period, even if they are in the top 3.
  • Discarded factors have a target beta of 0.
  • The other factors have a target beta of 1 [2].

The objective then is to minimize:

\sum_{i=1}^{M} \left( t_i - \beta_i \right)^2

where t_i is the target portfolio beta to risk factor i, β_i is the realized portfolio beta to risk factor i, and M is the number of risk factors.
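Here is one way the whole weighting step might be implemented, sketched with MATLAB’s fmincon; the long-only, fully-invested constraints and the 120-day factor momentum window are my reading of the text, not a verbatim reproduction of the author’s code.

  % Factor momentum weighting sketch. r: T x N de-meaned returns of the selected assets.
  [V, D] = eig(cov(r));
  loadings = fliplr(V);                        % N x N asset loadings, first factor first
  pcs = r * loadings;                          % factor (principal component) return series
  fmom = sum(pcs(end-119:end, :));             % 120-day factor momentum (simple sum of the PC series)
  [~, rk] = sort(fmom, 'descend');
  target = zeros(size(r, 2), 1);
  keep = rk(1:min(3, numel(rk)));              % top 3 factors by relative strength...
  keep = keep(fmom(keep) > 0);                 % ...keeping only those with positive momentum
  target(keep) = 1;                            % target beta of 1; discarded factors stay at 0

  % Portfolio betas to the (orthogonal) factors are loadings' * w; minimize the
  % squared deviation from the targets, subject to long-only, fully-invested weights.
  obj = @(w) sum((target - loadings' * w).^2);
  n = size(r, 2);
  w0 = ones(n, 1) / n;
  opts = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
  w = fmincon(obj, w0, [], [], ones(1, n), 1, zeros(n, 1), ones(n, 1), [], opts);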

Results

Here are the performance metrics of the benchmarks and the factor momentum algorithm:

[Table: performance statistics of the benchmarks and the factor momentum algorithm]

The selling point of the factor momentum approach is the consistency it manages to achieve, while also maintaining excellent volatility-adjusted and drawdown-adjusted returns. There are two long periods during which the other optimization methods did quite badly (2008-2009 and from the middle of 2011 to the present); factor momentum just keeps going.

[Figure: equity curves]

One interesting point is that the factor momentum algorithm tends to allocate to fewer holdings than the other approaches (because all the other algorithms will always have non-zero weights for any assets selected, which is not the case for FM). There may be some low-hanging fruit here in terms of diversification.

Some other potentially interesting ideas for the future: is there any value in the momentum of residuals (in a regression against the factor returns), similar to the Blitz, Huij & Martens approach? An interesting extension would be to loosen the factor-neutral constraint to leave room for other objectives. Finding a smarter way to calculate target betas would also be an interesting and probably fruitful exercise; taking each factor’s volatility and momentum into account is probably the most obvious idea.

Footnotes
  1. Dimensionality reduction can be extremely useful in trading, particularly when dealing with machine learning or simply when there is a need to combine multiple overlapping indicators.
  2. This is of course a very primitive approach; each factor has different volatility, so equal betas means unequal risk allocated to each factor.


Portfolio Optimization Algorithm Showdown: GTAA Edition

I was revisiting the choice of portfolio optimization algorithm for the GTAA portion of my portfolio and thought it was an excellent opportunity for another post. The portfolio usually contains 5 assets (though at times it may hold fewer), picked from a universe of 17 ETFs and mutual funds by relative and absolute momentum. The specifics are irrelevant to this post as we’ll be looking exclusively at portfolio optimization techniques applied after the asset selection choices have been made.

Tactical asset allocation portfolios present different challenges from optimizing portfolios of stocks, or permanently diversified portfolios, because the mix of asset classes is extremely important and can vary significantly through time. Especially when using methods that weight directly by volatility, bonds tend to have very large weights. During the last couple of decades this has been working great due to steadily dropping yields, but it may turn out to be dangerous going forward. I aim to test a wide array of approaches, from the crude equal weights, to the trendy risk parity, and the very fresh minimum correlation algorithm. Standard mean-variance optimization is out of the question because of its many and well-known problems, but mainly because forecasting returns is an exercise in futility.

The algorithms

The only restriction on the weights is no shorting; there are no minimum or maximum values.

  • Equal Weights

Self-explanatory.

  • Risk Parity (RP)

Risk parity (often confused with equal risk contribution) is essentially weighting proportional to the inverse of volatility (as measured by the 120-day standard deviation of returns, in this case). I will be using an unlevered version of the approach. I must admit I am still somewhat skeptical of the value of the risk parity approach for the bond-related reasons mentioned above.
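For concreteness, a minimal sketch of the unlevered version (r is assumed to be a T x N matrix of daily returns):

  % Naive risk parity: weights proportional to inverse 120-day volatility, no leverage.
  vol = std(r(end-119:end, :));       % 120-day standard deviation of each asset
  w = (1 ./ vol) / sum(1 ./ vol);     % inverse-volatility weights, summing to 1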

  • Minimum Volatility (MV)

Minimum volatility portfolios take into account the covariance matrix and have weights that minimize the portfolio’s expected volatility. This approach has been quite successful in optimizing equity portfolios, partly because it indirectly exploits the low volatility anomaly. You’ll need a numerical optimization algorithm to solve for the minimum volatility portfolio.
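A minimal sketch of that optimization using quadprog from the Optimization Toolbox, under the same long-only, fully-invested constraints used throughout the post:

  % Minimum volatility: minimize w' * Sigma * w subject to sum(w) = 1 and w >= 0.
  Sigma = cov(r);                               % r: T x N matrix of returns
  n = size(Sigma, 1);
  w = quadprog(2 * Sigma, zeros(n, 1), ...      % quadratic term (quadprog minimizes 0.5*w'*H*w)
               [], [], ones(1, n), 1, ...       % equality constraint: fully invested
               zeros(n, 1), []);                % lower bounds: no shorting; no upper bounds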

  • MV (with shrinkage)

A note on shrinkage (not that kind of shrinkage!): one issue with algorithms that make use of the covariance matrix is estimation error. The number of covariances that must be estimated grows quadratically with the number of assets in the portfolio, and these covariances are naturally not constant through time. The errors in the estimation of these covariances have negative effects further down the road when we calculate the desired weightings. A partial solution to this problem is to “shrink” the covariance matrix towards a “target matrix”. For more on the topic of shrinkage, as well as a description of the shrinkage approach I use here, see Honey, I Shrunk the Sample Covariance Matrix by Ledoit & Wolf.
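To make the idea concrete (this is just the generic shrinkage recipe with an arbitrary fixed intensity, not the Ledoit & Wolf estimator, which also derives the optimal intensity from the data):

  % Generic covariance shrinkage: blend the sample covariance with a structured target.
  S = cov(r);                                  % sample covariance matrix
  F = mean(diag(S)) * eye(size(S, 1));         % simple target: average variance on the diagonal
  delta = 0.2;                                 % shrinkage intensity (arbitrary, for illustration)
  SigmaShrunk = (1 - delta) * S + delta * F;   % shrunk estimate used in place of S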

  • Equal Risk Contribution (ERC)

The ERC approach is sort of an advanced version of risk parity that takes into account the covariance matrix of the assets’ returns (here’s a quick comparison between the two). This difference results in significant complications when it comes to calculating weights, as you need to use a numerical optimization algorithm to minimize

\sum_{i,j} \left( x_i (\Sigma x)_i - x_j (\Sigma x)_j \right)^2

subject to the standard restrictions on the weights, where x_i is the weight of the ith asset, and (Σx)_i denotes the ith element of the vector resulting from the product of Σ (the covariance matrix) and x (the weight vector). To do this I use MATLAB’s fmincon SQP algorithm.

For more on ERC, a good overview is On the Properties of Equally-Weighted Risk Contributions Portfolios by Maillard et al.
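A sketch of what that minimization looks like with fmincon’s SQP algorithm (long-only, fully-invested weights assumed, as elsewhere in the post):

  % ERC sketch: equalize risk contributions x_i * (Sigma * x)_i by minimizing the
  % sum of squared pairwise differences between them.
  Sigma = cov(r);                               % r: T x N matrix of returns
  n = size(Sigma, 1);
  rc = @(x) x .* (Sigma * x);                   % vector of risk contributions
  obj = @(x) sum(sum((rc(x) - rc(x)').^2));     % pairwise squared differences (implicit expansion)
  x0 = ones(n, 1) / n;
  opts = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
  w = fmincon(obj, x0, [], [], ones(1, n), 1, zeros(n, 1), ones(n, 1), [], opts);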

  • ERC (with shrinkage)

See above.

  • Minimum Correlation Algorithm (MCA)

A new optimization algorithm, developed by David Varadi, Michael Kapler, and Corey Rittenhouse. The main objective of the MCA approach is to underweight assets with high correlations and vice versa, though it’s a bit more complicated than simply weighting by the inverse of each asset’s average correlation. If you’re interested in the specifics, check out the paper: The Minimum Correlation Algorithm: A Practical Diversification Tool.

The results

Moving on to the results, it quickly becomes clear that there isn’t much variation between the approaches. Most of the returns and risk management are driven by the asset selection process, leaving little room for the optimization algorithms to improve or screw up the results.

[Table: performance statistics for each optimization algorithm]

Predictably, the “crude” approaches such as equal weights or the inverse of maximum drawdown don’t do all that well. Not terribly by any means, but going up in complexity does seem to have some advantages. What stands out is that the minimum correlation algorithm outperforms the rest in both risk-adjusted return metrics I like to use.

Risk parity, despite its popularity, wallows in mediocrity in this test; its only redeeming feature being a bit of positive skew which is always nice to have.

The minimum volatility weights are an interesting case. They do what it says on the box: minimize volatility. Returns suffer as a consequence, but are excellent on a volatility-adjusted basis. On the other hand, the performance in terms of maximum drawdown is terrible. Some interesting features to note: the worst loss for the minimum volatility weights is by far the lowest of the pack: the worst day in over 15 years was -2.91%. This is accompanied by the lowest average time to recover from drawdowns, and an obscene (though also rather unimportant) longest winning streak of 22 days.

Finally, equal risk contribution weights almost match the performance of minimum volatility in terms of CAGR / St.Dev. while also giving us a lower drawdown. ERC also comes quite close to MCA; I would say it is the second-best approach on offer here.

A look at the equity curves below shows just how similar most of the allocations are. The results could very well be due to luck and not a superior algorithm.

[Figure: equity curves of the GTAA portfolio optimization methods]

To investigate further, I have divided the equity curves into three parts: 1996 – 2001, 2002-2007, and 2008-2012. Consistent results across these sub-periods would increase my confidence that the best algorithms actually provide value and weren’t just lucky.

[Table: sub-period performance statistics]

As expected there is significant variation in results between sub-periods. However, I believe these numbers solidify the value of the minimum correlation approach. If we compare it to its closest rival, ERC, minimum correlation comes out ahead in 2 out of 3 periods in terms of volatility-adjusted returns, and in 3 out of 3 periods in terms of drawdown-adjusted returns.

The main lesson here is that as long as your asset selection process and money/risk management are good, it’s surprisingly tough to seriously screw up the results by using a bad portfolio optimization approach. Nonetheless I was happily surprised to see minimum correlation beat out the other, more traditional, approaches, even though the improvement is marginal.
