Thursday, July 02, 2015

Time series analysis and data gaps

Most time series techniques, such as the ADF test for stationarity, the Johansen test for cointegration, or ARIMA models for returns prediction, assume that our data points are collected at regular intervals. In traders' parlance, they assume bar data with fixed bar length. It is easy to see that this mundane requirement immediately presents a problem even if we were just to analyze daily bars: how do we deal with weekends and holidays?

You can see that the statistics of bar returns over weekdays can differ significantly from those over weekends and holidays. Here is a comparison for SPY daily returns from 2005/05/04 to 2015/04/09:

SPY daily returns       | Number of bars | Mean Returns (bps) | Mean Absolute Returns (bps) | Kurtosis (3 is "normal")
Weekdays only           | 1,958          | 3.9                | 80.9                        | 13.0
Weekends/holidays only  | 542            | 0.3                | 82.9                        | 23.7

Though the absolute magnitude of the returns over a weekday is similar to that over a weekend, the mean returns are much more positive on the weekdays. Note also that the kurtosis of returns almost doubles on the weekends. (Much higher tail risk on weekends with much lower expected returns: why would anyone hold a position over a weekend?) So if we run any sort of time series analysis on daily data, we are force-fitting a model onto data with heterogeneous statistics, and it won't work well.
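
For concreteness, here is a minimal sketch in Python/pandas of how such a comparison can be computed. The file and column names are assumptions, and a "weekend/holiday" return is taken to be any bar separated from the previous one by more than one calendar day.

import pandas as pd

# Hypothetical input: daily closing prices with a DatetimeIndex (file and column names assumed).
px = pd.read_csv('spy_daily.csv', index_col=0, parse_dates=True)['Close']
ret = px.pct_change().dropna()

# Flag returns that span a weekend or holiday: more than one calendar day since the previous bar.
gap_days = ret.index.to_series().diff().dt.days
is_gap = gap_days > 1

def summarize(r):
    return pd.Series({'Number of bars': len(r),
                      'Mean Returns (bps)': 1e4 * r.mean(),
                      'Mean Absolute Returns (bps)': 1e4 * r.abs().mean(),
                      'Kurtosis (3 is normal)': r.kurtosis() + 3})  # pandas reports excess kurtosis

print(pd.DataFrame({'Weekdays only': summarize(ret[~is_gap]),
                    'Weekends/holidays only': summarize(ret[is_gap])}))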

The problem is, of course, much worse if we attempt time series analysis on intraday bars. Not only are we faced with the weekend gap; in the case of stocks or ETFs we are faced with the overnight gap as well. Here is a comparison of AUDCAD 15-min returns vs weekend returns from 2009/01/01 to 2015/06/16:

AUDCAD 15-min returns   | Number of bars | Mean Returns (bps) | Mean Absolute Returns (bps) | Kurtosis (3 is "normal")
Weekdays only           | 158,640        | 0.01               | 4.5                         | 18.8
Weekends/holidays only  | 343            | -2.06              | 15.3                        | 4.6

In this case, every important statistic is different (and it is noteworthy that kurtosis is actually lower on the weekends here, illustrating the mean-reverting character of this time series.)

So how should we predict intraday returns with data that has weekend gaps? (The same solution should apply to overnight gaps for stocks, so those are omitted from the following discussion.) Let's consider several proposals:

1) Just delete the weekend returns, or set them as NaN in Matlab, or missing values NA in R. 

This won't work because the first few bars of a week aren't properly predicted by the last few bars of the previous week. We shouldn't use any linear model built with daily or intraday data to predict the returns of the first few bars of a week, whether or not that model contains data with weekend gaps. As for how many bars constitute the "first few bars", it depends on the lookback of the model. (Notice I emphasize linear models here because some nonlinear models can deal with large jumps during the weekends appropriately.)

2) Just pretend the weekend returns are no different from the daily or intraday returns when building/training the time series model, but do not use the model for predicting weekend returns. I.e. do not hold positions over the weekends.

This has been the default, and perhaps simplest (naive?) way of handling this issue for many traders, and it isn't too bad. The predictions for the first few bars in a week will again be suspect, as in 1), so one may want to refrain from trading then. The model built this way isn't the best possible one, but then we don't have to be purists.

3) Use only the most recent period without a gap to train the model. So for an intraday FX model, we would be using the bars in the previous week, sans the weekends, to train the model. Do not use the model for predicting weekend returns nor the first few bars of a week.

This sounds fine, except that there is usually not enough data in just a week to build a robust model, and the resulting model typically suffers from severe data snooping bias.

You might think that it should be possible to concatenate data from multiple gapless periods to form a larger training set. This "concatenation" does not mean just piecing together multiple weeks' time series into one long time series - that would be equivalent to 2) and wrong. Concatenation just means that we maximize the total log likelihood of a model over multiple independent time series, which in theory can be done without much fuss, since the log likelihoods (i.e. log probabilities) of independent data are additive. But in practice, most pre-packaged time series model programs do not have this facility. (Do add a comment if anyone knows of such a package in Matlab, R, or Python!) Instead of modifying the guts of a likelihood-maximization routine of a time series fitting package, we will examine a shortcut in the next proposal.
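
That said, for readers who do want to maximize a summed log likelihood directly, here is a minimal sketch assuming a Gaussian AR(1) model, using Python with numpy/scipy rather than any pre-packaged time series routine. The segment data and parameter names are illustrative only.

import numpy as np
from scipy.optimize import minimize

def neg_total_loglik(params, segments):
    # Negative of the conditional Gaussian AR(1) log likelihood, summed over
    # independent (gapless) return segments.
    c, phi, log_sigma = params
    sigma = np.exp(log_sigma)                # parameterize via log(sigma) to keep sigma positive
    nll = 0.0
    for y in segments:
        resid = y[1:] - c - phi * y[:-1]     # one-step-ahead residuals within a segment
        nll += 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (resid / sigma)**2)
    return nll

# Hypothetical data: one array of intraday returns per gapless week.
rng = np.random.default_rng(0)
segments = [0.001 * rng.standard_normal(480) for _ in range(10)]

res = minimize(neg_total_loglik, x0=[0.0, 0.0, np.log(0.001)], args=(segments,))
c_hat, phi_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(c_hat, phi_hat, sigma_hat)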

4) Rather than using a pre-packaged time series model with maximum likelihood estimation, just use an equivalent multiple linear regression (LR) model. Then fit this LR model to all the data in the training set except the weekend bars, and use it to predict all future bars except the weekend bars and the first few bars of a week.

This conversion of a time series model into a LR model is fairly easy for an autoregressive model AR(p), but may not be possible for an autoregressive moving average model ARMA(p, q). This is because the latter involves a moving average of the residuals, creating a dependency which I don't know how to incorporate into a LR. But I have found that the AR(p) model, due to its simplicity, often works better out-of-sample than ARMA models anyway. It is, of course, very easy to omit certain data points from a LR fit, as each data point is presumed independent.
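
Here is a minimal sketch of this proposal in Python/pandas: an AR(p) model fitted as an ordinary least squares regression on lagged returns, with any observation whose target or predictors straddle a gap simply dropped. The inputs (a Series of returns and a boolean Series flagging gap bars) are assumptions.

import numpy as np
import pandas as pd

def fit_ar_as_regression(ret, p, is_gap):
    # Fit an AR(p) model on returns via OLS, dropping any row whose target bar
    # or any of its p lagged predictors is a gap (weekend/overnight) bar.
    df = pd.DataFrame({'y': ret})
    contaminated = is_gap.copy()
    for k in range(1, p + 1):
        df['lag%d' % k] = ret.shift(k)
        contaminated = contaminated | is_gap.shift(k, fill_value=False)
    df = df[~contaminated].dropna()
    X = np.column_stack([np.ones(len(df))] + [df['lag%d' % k].values for k in range(1, p + 1)])
    beta, _, _, _ = np.linalg.lstsq(X, df['y'].values, rcond=None)
    return beta    # beta[0] is the intercept, beta[1:] are the AR coefficients

# Hypothetical usage: ret is a Series of 15-minute returns indexed by timestamp,
# is_gap flags bars that span a weekend or holiday.
# beta = fit_ar_as_regression(ret, p=5, is_gap=is_gap)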

Here is a plot of the out-of-sample cumulative returns of one such AR model built for predicting 15-minute returns of NOKSEK, assuming midpoint executions and no transaction costs (click to enlarge.)

Whether or not one decides to use this or one of the other techniques for handling data gaps, it is always a good idea to pay some attention to whether a model will work over these special bars.

===

My Upcoming Workshop


This is a new online workshop focusing on the practical use of AI techniques for identifying predictive indicators for asset returns.

===

Managed Accounts Update

Our FX Managed Account program is up 6.02% in June (YTD: 31.33%).

===

Industry Update
  • I previously reported on a fundamental stock model proposed by Lyle and Wang using a linear combination of just two firm fundamentals ― book-to-market ratio and return on equity. Professor Lyle has posted a new version of this model.
  • Charles-Albert Lehalle, Jean-Philippe Bouchaud, and Paul Besson reported that "intraday price is more aligned to signed limit orders (cumulative order replenishment) rather than signed market orders (cumulative order imbalance), even if order imbalance is able to forecast short term price movements." Hat tip: Mattia Manzoni. (I don't have a link to the original paper: please ask Mattia for that!)
  • A new investment competition to help you raise capital is available at hedgefol.io.
  • Enjoy an Outdoor Summer Party with fellow quants benefiting the New York Firefighters Burn Center Foundation on Tuesday, July 14th with great food and cool drinks on a terrace overlooking Manhattan. Please RSVP to join quant fund managers, systematic traders, algorithmic traders, quants and high frequency sharks for a great evening. This is a complimentary event (donations are welcomed). 
===

Follow me on Twitter: @chanep

Monday, April 13, 2015

Beware of Low Frequency Data

(This post is based on the talk of the same title I gave at Quantopian's NYC conference which commenced at 3.14.15 9:26:54. Do these numbers remind you of something?)

A correct backtest of a trading strategy requires accurate historical data. This isn't controversial. Historical data that is full of errors will generate fictitious profits for mean-reverting strategies, since noise in prices is mean-reverting. However, what is less well known is how perfectly accurate historical prices, if captured in a sub-optimal way, can still lead to dangerously inflated backtest results. I will illustrate this with three simple strategies.

CEF Premium Reversion

Patro et al. published a paper on trading the mean reversion of closed-end funds' (CEF) premium. Based on rational analysis, the market value of a CEF should be the same as the net asset value (NAV) of its holdings. So the strategy to exploit any differences is both reasonable and simple: rank all the CEFs by the % difference ("premium") between market value and NAV, short the quintile with the highest premium, and buy the quintile with the lowest (maybe negative) premium. Hold them for a month, and repeat. (You can try this on a daily basis too, since Bloomberg provides daily NAV data.) The Sharpe ratio of this strategy from 1998-2011 is 1.5. Transaction costs are ignored, but shouldn't be significant for a monthly rebalance strategy.
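
A minimal sketch of the ranking logic in Python/pandas (not the authors' code; the monthly DataFrames of prices and NAVs, indexed by date with one column per fund, are assumptions):

import pandas as pd

def cef_premium_reversion(price, nav):
    # price, nav: monthly DataFrames (dates x funds); premium > 0 means trading above NAV.
    premium = price / nav - 1.0
    next_ret = price.pct_change().shift(-1)          # each fund's return over the following month
    ranks = premium.rank(axis=1, pct=True)           # cross-sectional percentile rank each month
    long_leg = next_ret[ranks <= 0.2].mean(axis=1)   # buy the lowest-premium quintile
    short_leg = next_ret[ranks >= 0.8].mean(axis=1)  # short the highest-premium quintile
    return (long_leg - short_leg) / 2.0              # dollar-neutral monthly return series

# pnl = cef_premium_reversion(price, nav)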

The authors are beyond reproach in their use of high quality price data provided by CRSP and monthly fund NAV data from Bloomberg for their backtest. So I was quite confident that I could reproduce their results with the same data from CRSP, and with historical NAV data from Compustat instead. Indeed, here is the cumulative returns chart from my own backtest (click to enlarge):


However, I also know that there is one detail that many traders and academic researchers neglect when they backtest daily strategies for stocks, ETFs, or CEFs. They often use the "consolidated" closing price as the execution price, instead of the "official" (also called "auction" or "primary") closing price. To understand the difference, one has to remember that the US stock market is a network of over 60 "market centers" (see the teaching notes of Prof. Joel Hasbrouck for an excellent review of the US stock market structure). The exact price at which one's order will be executed is highly dependent on the exact market center to which it has been routed. A natural way to execute this CEF strategy is to send a market-on-close (MOC) or limit-on-close (LOC) order near the close, since this is the way we can participate in the closing auction and avoid paying the bid-ask spread. Such orders will be routed to the primary exchange for each stock, ETF, or CEF, and the price at which they are filled will be the official/auction/primary price at that exchange. On the other hand, the price that most free data services (such as Yahoo Finance) provide is the consolidated price, which is merely that of the last transaction received by the Securities Information Processor (SIP) from any one of these market centers on or before 4pm ET. There is no reason to believe that one's order would be routed to that particular market center or executed at that price at all. Unfortunately, the CEF strategy was tested on this consolidated price. So I decided to backtest it again with the official closing price.

Where can we find the historical official closing price? Bloomberg provides it, but it is an expensive subscription. CRSP data conveniently includes the last bid and ask, which can be used to compute the mid price at 4pm - a good estimate of the official closing price. This mid price is what I used for a revised backtest. But the CRSP data also doesn't come cheap - I only used it because my academic affiliation allowed me free access. There is, however, an unexpected source that does provide the official closing price at a reasonable rate: QuantGo.com will rent us tick data that has a Cross flag for the closing auction trade. How ironic: the cheapest way to properly backtest a strategy that trades only once a month requires tick data time-stamped at 1 millisecond, with special tags for each trade!

So what do the cumulative returns look like using the mid price for our backtest?


Opening Gap Reversion

Readers of my book will be familiar with this strategy (Example 4.1): start with the SPX universe, buy the 10 stocks that gapped down most at the open, and short the 10 that gapped up most. Liquidate everything at the close. We can apply various technical or fundamental filters to make this strategy more robust, but the essential driver of the returns is mean-reversion of the overnight gap (i.e. reversion of the return from the previous close to today's open).
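
For concreteness, here is a minimal sketch of the gap-ranking logic in Python/pandas; it omits the filters mentioned above and is not the exact code of Example 4.1. The open and close price DataFrames (dates x stocks) are assumptions.

import pandas as pd

def opening_gap_reversion(open_px, close_px, n=10):
    # Buy the n biggest gap-downs at the open, short the n biggest gap-ups,
    # and liquidate everything at the same day's close.
    gap = open_px / close_px.shift(1) - 1.0      # overnight gap: previous close to today's open
    open_to_close = close_px / open_px - 1.0     # intraday return earned by the positions
    ranks = gap.rank(axis=1)                     # most negative gap gets rank 1
    n_stocks = gap.notna().sum(axis=1)
    longs = ranks <= n                           # biggest gap-downs
    shorts = ranks.ge(n_stocks - n + 1, axis=0)  # biggest gap-ups
    return (open_to_close[longs].mean(axis=1) - open_to_close[shorts].mean(axis=1)) / 2.0

# daily_pnl = opening_gap_reversion(open_px, close_px)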

We have backtested this strategy using the closing mid price as I recommended above, and including a further 5 bps transaction cost each for the entry and exit trade. The backtest looked wonderful, so we traded it live. Here is the comparison of the backtest vs live cumulative P&L:


Yes, it is still mildly profitable, but nowhere near the profitability of the backtest, or more precisely, walk-forward test. What went wrong? Two things:

  • Just like the closing price, we should have used the official/auction/primary open price. Unfortunately CRSP does not provide the opening bid-ask, so we couldn't have estimated the open price from the mid price. QuantGo, though, does provide a Cross flag for the opening auction trade as well.
  • To generate the limit on open (LOO) or market on open (MOO) orders suitable for executing this strategy, we need to submit the order using the pre-market quotes before 9:28am ET, based on Nasdaq's rules.
Once again, a strategy that is seemingly low frequency, with just an entry at the open and an exit at the close, actually requires TAQ (ticks and quotes) data to backtest properly.

Futures Momentum

Lest you think that this requirement for TAQ data for backtesting only applies to mean reversion strategies, we can consider the following futures momentum strategy that can be applied to the gasoline (RB), gold (GC), or various other contracts trading on the NYMEX.

At the end of a trading session (defined as the previous day's open outcry close to today's open outcry close), rank all the trades or quotes in that session. We buy a contract in the next session if the last price is above the 95th percentile, sell it if it drops below the 60th (this serves as a stop loss). Similarly, we short a contract if the last price is below the 5th percentile, and buy cover if it goes above the 40th.
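
Here is a minimal sketch of the entry and exit thresholds in Python/numpy, assuming the percentiles are computed over the previous session's prices (the price arrays are placeholders):

import numpy as np

def entry_signal(prev_session_prices, last_price):
    # +1 = enter long, -1 = enter short, 0 = stay flat, based on where the last price
    # sits within the previous session's price distribution.
    p5, p95 = np.percentile(prev_session_prices, [5, 95])
    if last_price > p95:
        return 1
    if last_price < p5:
        return -1
    return 0

def exit_level(prev_session_prices, position):
    # Stop-loss threshold for the next session: longs exit below the 60th percentile,
    # shorts buy to cover above the 40th percentile.
    if position > 0:
        return np.percentile(prev_session_prices, 60)
    if position < 0:
        return np.percentile(prev_session_prices, 40)
    return None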

Despite being an intraday strategy, it typically trades only 1 roundtrip a day - a low frequency strategy. We backtested it two ways: with 1-min trade bars (prices are from back-adjusted continuous contracts provided by eSignal), and with best bid-offer (BBO) quotes with 1 ms time stamps (from QuantGo's actual contract prices, not backadjusted). 

For all the contracts that we have tested, the 1-ms data produced much worse returns than the 1-min data. The reason is interesting: the 1-ms data shows that the strategy is exposed to high frequency flip-flops. These are sudden changes in the order book (in particular, the BBO quotes) that quickly revert. Some observers have called these flip-flops "mini flash crashes", and they happen as frequently in the futures markets as in the stock market, and occasionally in the spot Forex market as well. Some people have blamed them on high frequency traders. But I think flip-flop describes the situation better than flash crash, since flash crash implies the sudden disappearance of quotes or liquidity from the order book, while in a flip-flopping situation, new quotes/liquidity above the BBO can suddenly appear and disappear in a few milliseconds, simultaneous with the disappearance and re-appearance of quotes on the opposite side of the order book. Since ours is a momentum strategy, such reversals of course create losses. These losses are very real, and we experienced them in live trading. But they are also undetectable if we backtest using 1-min bar data.

Some readers may object: if the 1-min bar backtest shows good profits, why not just trade this live with 1-min bar data and preserve its profit? Let's consider why this doesn't actually allow us to avoid using TAQ data. Note that we were able to avoid the flip-flops using 1-min data only because we were lucky in our backtest - it wasn't because we had some trading rule that prevented our entering or exiting a position when the flip-flops occurred. How then are we to ensure that our luck will continue with live market data? At the very least, we have to test this strategy with many sets of 1-min bar data, and choose the set that shows the worst returns as part of our stress testing. For example, one set may be [9:00:00, 9:01:00, 9:02:00, ...,] and the second set may be [9:00:00.001, 9:01:00.001, 9:02:00.001, ...], etc. This backtest, however, still requires TAQ data, since no historical data vendor I know of provides such multiple sets of time-shifted bars!
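
Here is a minimal sketch, in Python/pandas, of how such time-shifted 1-minute bar sets could be constructed from tick data (the tick Series with millisecond timestamps is an assumption):

import pandas as pd

def shifted_minute_bars(ticks, offset_ms):
    # Build 1-minute OHLC bars whose boundaries are offset by offset_ms milliseconds,
    # e.g. [9:00:00.001, 9:01:00.001, ...] instead of [9:00:00, 9:01:00, ...].
    shift = pd.Timedelta(milliseconds=offset_ms)
    shifted = ticks.copy()
    shifted.index = shifted.index - shift     # move the grid so resampling cuts at the offset
    bars = shifted.resample('1min').ohlc()
    bars.index = bars.index + shift           # restore the true (offset) bar boundaries
    return bars

# Stress test: backtest on several offsets and keep the worst result.
# bar_sets = {ms: shifted_minute_bars(trade_prices, ms) for ms in (0, 1, 250, 500, 750)}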

As I mentioned above, these flip-flops are omnipresent in the stock market as well. This shouldn't be surprising considering that 50% of the stock transaction volume is due to high frequency trading. They are particularly damaging when we are trading spreads, such as the ETF pair EWA vs EWC. A small change in the BBO of one leg may represent a big percentage change in the spread, which itself may be just a few ticks wide. So such flip-flops can frequently trigger orders which are filled at much worse prices than expected.

Conclusion

The three example strategies above illustrate that even when a strategy trades at low frequency, maybe as low as once a month, we often still require high frequency TAQ data to backtest it properly, or even economically. If the strategy trades intraday, even if just once a day, then this requirement becomes all the more important due to the flip-flopping of the order book in the millisecond time frame.

===
My Upcoming  Talks and Workshops

5/13-14: "Mean Reversion Strategies", "AI techniques in Trading" and "Portfolio Optimization" at Q-Trade Bootcamp 2015, Milan, Italy. 
6/17-19: "Mean Reversion Strategies" live online workshop.

===
Managed Account Program Update

Our FX Managed Account program has a net return of +4.29% in March (YTD: +12.7%).

===
Follow me on Twitter: @chanep

Saturday, February 28, 2015

Commitments of Traders (COT) strategy on soybean futures

In our drive to extract alphas from a variety of non-price data, we came across this old-fashioned source: the Commitments of Traders (COT) report on futures. This indicator has been well known to futures traders since 1923 (see www.cmegroup.com/education/files/COT_FBD_Update_2012-4-26.pdf), but there are often persistent patterns (risk factors?) in the markets that refuse to be arbitraged away. It is worth another look, especially since the data has become richer over the years.

First, some facts about COT:
1) The CFTC collects reports of the number of long and short futures and options contracts ("open interest") held by different types of firms as of each Tuesday, and releases them every Friday by 4:30 CT.
2) Options positions are added to COT as if they were futures but adjusted by their deltas.
3) The COT is then broken down into contracts held by different types of firms. The most familiar types are "Commercial" (e.g. an ethanol plant) and "Non-Commercial" (i.e. speculators).
4) Other types are "Spreaders" who hold calendar spreads, "Index traders", "Money Managers", etc. There are 9 mutually exclusive types in total.

Since we only have historical COT data from csidata.com, and they do not collect data on all these types, we have to restrict our present analysis only to Commercial and Non-Commercial. Also, beware that csidata tags a COT report by its Tuesday data collection date. As noted above, that information is unactionable until the following Sunday evening when the market re-opens.

A simple strategy would be to compute the ratio of long to short COT for Non-Commercial traders. We buy the front contract when this ratio is equal to or greater than 3, exiting when the ratio drops to or below 1. We short the front contract when this ratio is equal to or less than 1/3, exiting when the ratio rises to or above 1. Hence this is a momentum strategy: we trade in the same direction as the speculators did. As most profitable futures traders are momentum traders, it would not be surprising if this strategy were profitable.
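
Here is a minimal sketch of the signal logic in Python/pandas; the Series of Non-Commercial long and short open interest, and the one-report lag to reflect the Friday release, are assumptions.

import pandas as pd

def cot_momentum_position(noncomm_long, noncomm_short):
    # Long/flat/short state machine driven by the Non-Commercial long/short ratio:
    # enter long at ratio >= 3, exit at ratio <= 1; enter short at ratio <= 1/3, exit at ratio >= 1.
    ratio = noncomm_long / noncomm_short
    pos = pd.Series(0.0, index=ratio.index)
    state = 0
    for t, r in ratio.items():
        if state == 1 and r <= 1:
            state = 0
        elif state == -1 and r >= 1:
            state = 0
        if state == 0:
            if r >= 3:
                state = 1
            elif r <= 1/3:
                state = -1
        pos[t] = state
    # The report is tagged with its Tuesday collection date but is only actionable
    # after the Friday release, so lag the position before applying it to returns.
    return pos.shift(1)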

Over the period from 1999 to 2014, applying this strategy on CME soybean futures returns about 9% per annum, though its best period seems to be behind us already. I have plotted the cumulative returns below (click to enlarge).



I have applied this strategy to a few other agricultural commodities, but it doesn't seem to work on them. It is therefore quite possible that the positive result on soybeans is a fluke. Also, it is very unsatisfactory that we do not have data on the Money Managers (which include the all-important CPOs and CTAs), since they would likely be an important source of alpha. Of course, we can go directly to cftc.gov, download all the historical reports in .xls format, and compile the data ourselves. But that is a project for another day.

===
My Upcoming  Talks and Workshops

3/14: "Beware of Low Frequency Data" at QuantCon 2015, New York.
3/22-: "Algorithmic Trading of Bitcoins" pre-recorded online workshop.
3/24-25: "Millisecond Frequency Trading" live online workshop.
5/13-14: "Mean Reversion Strategies", "AI techniques in Trading" and "Portfolio Optimization" at Q-Trade Bootcamp 2015, Milan, Italy. 

===
Managed Account Program Update

Our FX Managed Account program has a net return of +7.68% in February (YTD: +8.06%).

===
Follow me on Twitter: @chanep




Thursday, January 08, 2015

Trading with Estimize and I/B/E/S earnings estimates data

By Yang Gao

Estimize is an online community utilizing the 'wisdom of crowds' to offer intelligence about the market. It contains a wide range of crowd-sourced estimates from over 4,500 buy-side, sell-side and individual analysts. Studies (from Deutsche Bank and Rice University among others) show estimates from Estimize are more accurate than estimates from traditional sell-side analysts.

The first strategy we tested is a mean reversion strategy developed by the quantitative research team at Deltix using Estimize's data. This strategy is based on the idea that post-earnings-announcement prices typically revert from the short-term trend driven by the more recent Estimize estimates just before the announcement. We backtested this strategy with the S&P100 over the period between 2012/01/01 and 2013/12/31. (Even though Estimize has 2014 data, we do not have the corresponding survivorship-bias-free price data from the Center for Research in Securities Prices that includes the closing bid and ask prices.) With a 5bp one-way transaction cost, we found that the backtest shows a Sharpe ratio of 0.8 and an average annual return of 6%. The following figure is the cumulative P&L of the strategy based on a $1 per stock position.

Cumulative P&L of Deltix Mean Reversion Strategy with Estimize 
It surprised us that a mean-reverting instead of a momentum strategy was used in conjunction with Estimize data, since earnings estimates and announcements typically generate price momentum. In order to show that this return is really driven by the information in Estimize and not simply due to price reversal, we provide a benchmark mean-reverting strategy that uses prices alone to generate the signal (a minimal sketch in code follows the numbered steps below):

1. Find the long period T and the short period T_s, where T is the average period of the reporting of all the quarterly estimates and T_s is the average period of the reporting of the latest 20% of all estimates.
2. Calculate the stock return R over T and R_s over T_s, and let delta = R - R_s.
3. Buy stocks with delta > 0 at close before an earnings announcement and exit the positions next morning at the open after the announcement.
4. Sell stocks with delta < 0 at close before an earnings announcement and exit the positions next morning at the open after the announcement.
5. Hedge net exposure with SPY during the entire holding period.
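
Here is that minimal sketch, in Python/pandas, of the benchmark signal for a single stock around a single announcement; the SPY hedge is omitted, and the inputs (a Series of closes, the announcement date, and the window lengths T and T_s in bars) are assumptions.

import pandas as pd

def benchmark_signal(close_px, announce_date, T, T_s):
    # Price-only benchmark: compare the return over the long window T with the return
    # over the short window T_s, both ending at the close before the announcement.
    # delta > 0 -> buy at that close and exit at the next open; delta < 0 -> sell short.
    hist = close_px.loc[:announce_date]
    entry_close = hist.iloc[-1]
    R = entry_close / hist.iloc[-1 - T] - 1.0      # long-window return
    R_s = entry_close / hist.iloc[-1 - T_s] - 1.0  # short-window return
    delta = R - R_s
    return 1 if delta > 0 else (-1 if delta < 0 else 0)

# Hypothetical usage:
# signal = benchmark_signal(close_px, pd.Timestamp('2013-04-24'), T=60, T_s=12)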

This benchmark shows no significant positive return and so it does seem that there is useful information in the Estimize data captured by Deltix’s mean-reversion strategy.

Next, we compare the traditional earnings estimates from I/B/E/S, gathered from sell-side Wall Street analysts, to the crowd-sourced Estimize estimates. The backtest showed that the same Deltix mean reversion strategy described above, but using I/B/E/S estimates, gave negative returns over the same S&P100 universe and the same 2012-2013 period, again supporting the thesis that Estimize estimates may be superior.

Since Deltix's mean reversion strategy gives negative returns on I/B/E/S data, it is natural to see if a momentum strategy would work instead: if the short-term average estimate is higher than the long-term average estimate (i.e. analogous to delta < 0 above), we expect the price to move up, and vice versa.

The backtest result of this momentum strategy over the same universe and time period is quite promising: with a 5bp transaction cost, the Sharpe ratio = 1.5 and the average annual return = 11%. The following figure is the daily P&L of the strategy based on a $1 per stock position.

 Cumulative P&L of momentum Strategy with I/B/E/S


We tried the same momentum strategy using Estimize data over 2012-2013, and it generated negative returns this time. This is not surprising since we found earlier that the mean reversion strategy using Estimize data generated positive returns.

We proceeded to backtest this momentum strategy over the S&P100 using out-of-sample I/B/E/S data between 2010 and 2012, and unfortunately the strategy failed there too. The following figure is the daily P&L of the strategy from 2010-2014.

Cumulative P&L of momentum Strategy with I/B/E/S 

So how would Deltix's mean-reversion strategy with Estimize data have worked over this out-of-sample period? Unfortunately, we won't know, because Estimize didn't start collecting data until the end of 2011. The following table is a summary of the annual returns comparing the different strategies using different data sets and periods.

Strategies                 | Mean-Reversion | Momentum
Estimize (2012.01-2013.12) | 6%             | -9%
I/B/E/S (2012.01-2013.12)  | -17%           | 11%
I/B/E/S (2010.01-2011.12)  | 1.8%           | -6.4%


As a result, we cannot conclude that Estimize data is consistently better than I/B/E/S data in terms of generating alpha: it depends on the strategy deployed. We also cannot decide which strategy – mean-reversion or momentum – is consistently better: it depends on the time period and the data used. The only conclusion we can reach is that the short duration of the Estimize data coupled with our lack of proper price data in 2014 means that we cannot have a statistically significant backtest. This state of inconclusiveness will of course be cured in time.

_________
Yang Gao, Ph.D., is a research intern at QTS Capital Management, LLC.

===
Industry Update
(No endorsement of companies or products is implied by our mention.)
  • There is a good discussion comparing Quantconnect to Quantopian here.
  • For FX traders, Rizm offers a service comparable to Quantconnect and Quantopian, as it is directly connected to FXCM.
  • Quantopian now offers free fundamental data from MorningStar. Also, check out their Quantopian Managers Program where you can compete to manage real money.
===
Workshop Update

Our next online workshop will be Millisecond Frequency Trading on March 25-26. It is for traders who are interested in intraday trading (even if not at millisecond frequency) and who want to defend against certain HFT tactics.

===
Managed Account Program Update

Our FX Managed Account program had a strong finish in 2014, with annual net return of 69.86%.

===
Follow me on Twitter: @chanep

Friday, November 14, 2014

Rent, don’t buy, data: our experience with QuantGo (Guest Post)

By Roger Hunter

I am a quant researcher and developer for QTS Partners, a commodity pool Ernie (author of this blog) founded in 2011. I help Ernie develop and implement several strategies in the pool and various separate accounts.  I wrote this article to give insights into a very important part of our strategy development process: the selection of data sources.

Our main research focus is on strategies that monitor execution in milliseconds and that hold positions for seconds through several days. For example, a strategy that trades more than one currency pair simultaneously must ensure that several executions take place at the right price and within a very short time. Backtesting requires high quality historical intraday quote and trade data, preferably tick data. Our initial focus was futures, and after looking at various vendors for the tick data quality and quantity we needed, we chose Nanex data, which is aggregated at 25ms. This means, for example, that aggressor flags are not available. We purchased several years of futures data and set to work.

Earlier this year we needed to update our data and discovered that Nanex prices had increased significantly. We also needed quotes and trades, and data for more asset classes including US equities and options.

We looked at TickData.com which has good data but is very expensive and you pay up-front per symbol.  There are other services like Barchartondemand.com and XIgnite.com where you pay based on your monthly usage (number of data requests made) which is a model we do not like.  We ended up choosing QuantGo.com, where you have unlimited access to years of global tick or bar data for a fixed monthly subscription fee per data service.

On QuantGo, you get computer instances in your own secure and private cloud built on Amazon AWS with on-demand access to a wide range of global intraday tick or bar data from multiple data vendors.  Since you own and manage the computer instances you can choose any operating system, install any software, access the internet or import your own data.  With QuantGo the original vendor data must remain in the cloud but you can download your results, this allows QuantGo to rent access to years of data at affordable monthly prices.

All of the data we have used so far is from AlgoSeek (one of QuantGo’s data vendors). This data is survivorship bias-free and is exactly as provided by the exchanges at the time. Futures quotes and trades download very quickly on the system. I am testing options strategies, which is challenging due to the size of the data. The data is downloaded in highly compressed form which is then expanded (by QuantGo) to a somewhat verbose text form.  Before the price split, a day of option quotes and trades for AAPL was typically 100GB in this form. Here is a data sample from the full Options (OPRA) data:

Timestamp, EventType, Ticker, OptionDetail, Price, Quantity, Exchange, Conditions
08:30:02.493, NO_QUOTE BID NB, LLEN, PUT at 7.0000 on 2013-12-21, 0.0000, 0, BATS, F
08:30:02.493, NO_QUOTE ASK, LLEN, CALL at 7.0000 on 2013-12-21, 0.0000, 0, BATS, F
09:30:00.500, ROTATION ASK, LLEN, PUT at 2.0000 on 2013-07-20, 0.2500, 15, ARCA, R
09:30:00.500, ROTATION BID, LLEN, PUT at 2.0000 on 2013-07-20, 0.0000, 0, ARCA, R
09:30:00.507, FIRM_QUOTE ASK NB, LLEN, PUT at 5.0000 on 2013-08-17, 5.0000, 7, BATS, A
09:30:00.508, FIRM_QUOTE BID NB, LLEN, PUT at 6.0000 on 2013-08-17, 0.2000, 7, BATS, A

These I convert to a more compact format, and filter out lines we don't need (e.g. NO_QUOTE, non-firm quotes, etc.).
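
As an illustration (not the actual converter), here is a minimal Python sketch that keeps only the firm quotes from lines in the format shown above; the file names are hypothetical.

import csv

def keep_firm_quotes(in_path, out_path):
    # Keep only FIRM_QUOTE events from an OPRA-style CSV in the format shown above,
    # writing a compact subset of the columns.
    with open(in_path, newline='') as fin, open(out_path, 'w', newline='') as fout:
        reader = csv.reader(fin, skipinitialspace=True)
        writer = csv.writer(fout)
        for row in reader:
            if len(row) < 8:
                continue
            timestamp, event, ticker, detail, price, qty, exch, cond = row[:8]
            if not event.startswith('FIRM_QUOTE'):
                continue                     # drops NO_QUOTE, ROTATION and other events
            writer.writerow([timestamp, event, ticker, detail, price, qty, exch])

# keep_firm_quotes('opra_sample.csv', 'opra_firm_quotes.csv')   # hypothetical file names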

The quality of the AlgoSeek data seems to be high. One test I have performed is to record live data and compare it with AlgoSeek. This is possible because the AlgoSeek historical data is now updated daily, and is one day behind for all except options, which varies from two days to five (they are striving for two, but the process involves uploading all options data to special servers --- a significant task). Another test is done using OptionNET Explorer (ONE). ONE data is at 5-minute intervals and the software displays midpoints only. However, by executing historical trades, you can see the bid and ask values for options at these 5-minute boundaries. I have checked 20 of these against the AlgoSeek data and found exact agreement in every case. In any event, you are free to contact the data vendors directly to learn more about their products. The final test of data quality (and of our market model) is the comparison of live trading results (at one contract/spread level) with backtests over the same period.

The data offerings have recently expanded dramatically with more data partners and now include historical data from (QuantGo claims) "every exchange in the world". I haven't verified this, but the addition of elementized, tagged and scored news from Acquire Media, for example, will allow us to backtest strategies of the type discussed in Ernie's latest book.

So far, we like the system. For us, the positives are:

1. Affordable Prices.  The reason that the price has been kept relatively low is that original vendor data must be kept and used in the QuantGo cloud. For example, to access years of US data we have been paying:
  • Five years of US Equities Trades and Quotes ("TAQ"): $250 per month
  • Five years of US Equities 5-minute Bars: $75 per month
  • Three years of US Options 1-minute Bars: $100 per month
  • Three years of CME, CBOT, NYMEX Futures Trades and Quotes: $250 per month

2.  Free Sample Data.  Each data service has free demo data, which is actual historical data from a demo date range. This allowed me to view and work with the data before subscribing.

3. One API.  I have one API to access different data vendors.  QuantGo gives me a Java GUI, a Python CLI and various libraries (R, Matlab, Java).

4. On-Demand.  The ability to select the data we want "on demand" via a subscription from a website console at any time. You can select data for any symbol and for just a day or for several years.

5. Platform not proprietary.  We can use any operating system or software with the data as it is being downloaded to virtual computers we fully control and manage.

Because all this is done in the cloud, we have to pay for our cloud computer usage as well.  While cloud usage is continuing to drop rapidly in price, it is still a variable cost and it needs to be monitored.  QuantGo does provide close to real-time billing estimates and alarms you can preset at dollar values.

I was at first skeptical of the restriction of not being able to download the data vendor’s tick or bar data, but so far this hasn't been an issue as in practice we only need the results and our derived data sets. I'm told that if you want to buy the data for your own computers, you can negotiate directly with the individual data vendor and will get a discount if you have been using it for a while on QuantGo.


As we use the Windows operating system, we access our cloud computers with Remote Desktop, and there have been some latency issues, but these are tolerable. On the other hand, it is a big advantage to be able to start with a relatively small virtual machine for initial coding and debugging, then "dial up" a much larger machine (or group of machines) when you want to run many compute- and data-intensive backtests. While QuantGo is recently launched and is not perfect, it does open up the world of the highest institutional quality data to those of us who do not have the data budget of a Renaissance Technologies or D.E. Shaw.

===
Industry Update
(No endorsement of companies or products is implied by our mention.)
  • A new site for jobs in finance was recently launched: www.financejobs.co.
  • A new software package Geode by Georgica Software can backtest tick data, and comes with a fairly rudimentary fill simulator.
  • Quantopian.com now incorporates a new IPython based research environment that allows interactive data analysis using minute level pricing data in Python.
===
Workshops Update

My next online Quantitative Momentum Strategies workshop will be held on December 2-4. Any reader interested in futures trading  in general would benefit from this course.

===
Managed Account Program Update

Our FX Managed Account program had an unusually profitable month in October.

===
Follow me on Twitter: @chanep

Friday, September 05, 2014

Moving Average Crossover = Triangle Filter on 1-Period Returns

Many traders who use technical analysis favor the Moving Average Crossover as a momentum indicator. They compute the short-term minus the long-term moving averages of prices, and go long if this indicator just turns positive, or go short if it turns negative. This seems intuitive enough. What isn't obvious, however, is that MA Crossover is nothing more than an estimate of the recent average compound return.

But just when you might be tempted to ditch this indicator in favor of the average compound return, it can be shown that the MA Crossover is also a triangle filter on the 1-period returns. (A triangle filter in signal processing is a set of weights imposed on a time series that increases linearly with time up to some point, and then decreases linearly with time up to the present time. See the diagram at the end of this article.) Why is this interpretation interesting? Because it leads us to consider other, more sophisticated filters (such as the least squares, Kalman, or wavelet filters) as possible momentum indicators. In collaboration with my former workshop participant Alex W., who was inspired by this paper by Bruder et al., we present the derivations below.

===

First, note that we will compute the moving average of log prices y, not raw prices. There is of course no loss or gain in information going from prices to log prices, but it will make our analysis possible. (The exact time of the crossover, though, will depend on whether we use prices or log prices.) If we write MA(t, n1) to denote the moving average of n1 log prices ending at time t, then the moving average crossover is MA(t, n1)-MA(t, n2), assuming n1< n2.  By definition,

MA(t, n1)=(y(t)+y(t-1)+...+y(t-n1+1))/n1
MA(t, n2)=(y(t)+y(t-1)+...+y(t-n1+1)+y(t-n1)+...+y(t-n2+1))/n2

MA(t, n1)-MA(t, n2)
=[(n2-n1)/(n1*n2)] *[y(t)+y(t-1)+...+y(t-n1+1)] - (1/n2)*[y(t-n1)+...+y(t-n2+1)]    
=[(n2-n1)/n2] *MA(t, n1)-[(n2-n1)/n2]*MA(t-n1, n2-n1)
=[(n2-n1)/n2]*[MA(t, n1)-MA(t-n1, n2-n1)]

If we interpret MA(t, n1) as an approximation of the log price at the midpoint (n1-1)/2 of the time interval [t-n1+1, t], and MA(t-n1, n2-n1) as an approximation of the log price at the midpoint (n2-n1-1)/2 of the time interval [t-n2+1, t-n1], then [MA(t, n1)-MA(t-n1, n2-n1)] is an approximation of the total return over a time period of n2/2. If we write this total return as an average compound growth rate r multiplied by the period n2/2, we get

MA(t, n1)-MA(t, n2)  ≈ [(n2-n1)/n2]*(n2/2)*r

r ≈ [2/(n2-n1)]*[MA(t, n1)-MA(t, n2)]

as shown in Equation 4 of the paper cited above. (Note the roles of n1 and n2 are reversed in that paper.)

===

Next, we will show why the MA crossover is also a triangle filter on 1-period returns. Simplifying notation by fixing t to be 0,

MA(t=0, n1)
=(y(0)+y(-1)+...+y(-n1+1))/n1
=(1/n1)*[(y(0)-y(-1))+2(y(-1)-y(-2))+...+n1*(y(-n1+1)-y(-n1))]+y(-n1)

Writing the returns from t-1 to t as R(t), this becomes

MA(t=0, n1)=(1/n1)*[R(0)+2*R(-1)+...+n1*R(-n1+1)]+y(-n1)

Similarly,

MA(t=0, n2)=(1/n2)*[R(0)+2*R(-1)+...+n2*R(-n2+1)]+y(-n2)

So MA(0, n1)-MA(0, n2)
=(1/n1-1/n2)*[R(0)+2*R(-1)+...+n1*R(-n1+1)]
 -(1/n2)*[(n1+1)*R(-n1)+(n1+2)*R(-n1-1)+...+n2*R(-n2+1)]
+y(-n1)-y(-n2)

Note that the last line above is just the total cumulative return from -n2 to -n1, which can be written as

y(-n1)-y(-n2)=R(-n1)+R(-n1-1)+...+R(-n2+1)

Hence we can absorb that into the preceding expression:

MA(0, n1)-MA(0, n2)
=(1/n1-1/n2)*[R(0)+2*R(-1)+...+n1*R(-n1+1)]
 -(1/n2)*[(n1+1-n2)*R(-n1)+(n1+2-n2)*R(-n1-1)+...+(-1)*R(-n2+2)]
=(1/n1-1/n2)*[R(0)+2*R(-1)+...+n1*R(-n1+1)]
 +(1/n2)*[(n2-n1-1)*R(-n1)+(n2-n1-2)*R(-n1-1)+...+R(-n2+2)]

We can see that the coefficients of the R's from t=-n2+2 to -n1 form the left side of a triangle with positive slope, and those from t=-n1+1 to 0 form the right side of the triangle with negative slope. The plot (click to enlarge) below shows the coefficients as a function of time, with n2=10, n1=7, and the current time at t=0. The right-most point is the weight for R(0): the return from t=-1 to 0.


Q.E.D. Now I hope you are ready to move on to a wavelet filter!

P.S. It is wonderful to be able to check the correctness of messy algebra like those above with a simple Matlab program!
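
In that spirit, here is a minimal numerical check of the identity in Python/numpy (the author used Matlab; this sketch and its fake price series are illustrative only):

import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 7, 10
y = np.cumsum(0.01 * rng.standard_normal(50))      # a fake log-price series

# Moving average crossover computed directly from the log prices at the last time point.
crossover = y[-n1:].mean() - y[-n2:].mean()

# The same quantity as a triangle filter on the 1-period returns R(t) = y(t) - y(t-1).
R = np.diff(y)
weights = np.zeros(n2 - 1)
# Rising side: coefficients 1/n2, 2/n2, ..., (n2-n1-1)/n2 for the oldest returns ...
weights[:n2 - n1 - 1] = np.arange(1, n2 - n1) / n2
# ... falling side: coefficients n1*(1/n1-1/n2), ..., 1*(1/n1-1/n2) for the most recent returns.
weights[n2 - n1 - 1:] = (1/n1 - 1/n2) * np.arange(n1, 0, -1)
triangle = np.dot(weights, R[-(n2 - 1):])

print(crossover, triangle)    # the two numbers agree to machine precision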

===
New Service Announcement

Our firm QTS Capital Management has recently launched a FX Managed Accounts program. It uses one of the mean-reverting strategies we have been trading successfully in our fund for the last three years, and is still going strong despite the low volatility in the markets. The benefits of a managed account are that clients retain full ownership and control of their funds at all times, and they can decide what level of leverage they are comfortable with. Unlike certain offshore FX operators, QTS is a CPO/CTA regulated by the National Futures Association and the Commodity Futures Trading Commission.

===
Workshops Update

Readers may be interested in my next workshop series to be held in London, November 3-7. Please follow the link at the bottom of this page for information.

===
Follow me on Twitter: @chanep

Monday, August 18, 2014

Kelly vs. Markowitz Portfolio Optimization

In my book, I described a very simple and elegant formula for determining the optimal asset allocation among N assets:

F = C^(-1)*M   (1)

where F is an Nx1 vector indicating the fraction of the equity to be allocated to each asset, C is the covariance matrix, and M is the mean vector for the excess returns of these assets. Note that these "assets" can in fact be "trading strategies" or "portfolios" themselves. If these are in fact real assets that incur a carry (financing) cost, then excess returns are returns minus the risk-free rate.

Notice that these fractions, or weights as they are usually called, are not normalized - they don't necessarily add up to 1. This means that F not only determines the allocation of the total equity among N assets, but it also determines the overall optimal leverage to be used. The sum of the absolute values of the components of F is in fact the overall leverage. Such is the beauty of the Kelly formula: optimal allocation and optimal leverage in one simple formula, which is supposed to maximize the compounded growth rate of one's equity (or equivalently the equity at the end of many periods).

However, most students of finance are not taught Kelly portfolio optimization. They are taught Markowitz mean-variance portfolio optimization. In particular, they are taught that there is a portfolio called the tangency portfolio which lies on the efficient frontier (the set of portfolios with minimum variance consistent with a certain expected return) and which maximizes the Sharpe ratio. Left unsaid are

  • What's so good about this tangency portfolio?
  • What's the real benefit of maximizing the Sharpe ratio?
  • Is this tangency portfolio the same as the one recommended by Kelly optimal allocation?
I want to answer these questions here, and provide a connection between Kelly and Markowitz portfolio optimization.

According to Kelly and Ed Thorp (and explained in my book), F above not only maximizes the compounded growth rate, but it also maximizes the Sharpe ratio. Put another way: the maximum growth rate is achieved when the Sharpe ratio is maximized. Hence we see why the tangency portfolio is so important. And in fact, the tangency portfolio is the same as the Kelly optimal portfolio F, except for the fact that the tangency portfolio is assumed to be normalized and has a leverage of 1, whereas F goes one step further and determines the optimal leverage for us. Otherwise, the percent allocation to each asset is the same in both (assuming that we haven't imposed additional constraints in the optimization problem). How do we prove this?

The usual way Markowitz portfolio optimization is taught is by setting up a constrained quadratic optimization problem - quadratic because we want to optimize the portfolio variance which is a quadratic function of the weights of the underlying assets - and proceed to use a numerical quadratic programming (QP) program to solve this and then further maximize the Sharpe ratio to find the tangency portfolio. But this is unnecessarily tedious and actually obscures the elegant formula for F shown above. Instead, we can proceed by applying Lagrange multipliers to the following optimization problem (see http://faculty.washington.edu/ezivot/econ424/portfolioTheoryMatrix.pdf for a similar treatment):

Maximize Sharpe ratio = F^T*M/(F^T*C*F)^(1/2)    (2)

subject to constraint F^T*1 = 1   (3)

(The 1 on the left hand side of (3) denotes a column vector of ones.)

So we should maximize the following unconstrained quantity with respect to the weight F_i of each asset i and the Lagrange multiplier λ:

F^T*M/(F^T*C*F)^(1/2) - λ*(F^T*1 - 1)   (4)

But taking the partial derivatives of this fraction with a square root in the denominator is unwieldy. So equivalently, we can maximize the logarithm of the Sharpe ratio subject to the same constraint. Thus we can take the partial derivatives of 

log(F^T*M) - (1/2)*log(F^T*C*F) - λ*(F^T*1 - 1)   (5)

with respect to F_i. Setting each component i to zero gives the matrix equation

(1/(F^T*M))*M - (1/(F^T*C*F))*C*F = λ*1   (6)

Multiplying the whole equation on the left by F^T gives

(1/(F^T*M))*F^T*M - (1/(F^T*C*F))*F^T*C*F = λ*F^T*1   (7)

Remembering the constraint, we recognize the right hand side as just λ. The left hand side comes out to be exactly zero, which means that λ is zero. A Lagrange multiplier that turns out to be zero means that the constraint won't affect the solution of the optimization problem up to a proportionality constant. This is satisfying since we know that if we apply an equal leverage on all the assets, the maximum Sharpe ratio should be unaffected. So we are left with the matrix equation for the solution of the optimal F:

C*F = (F^T*C*F/(F^T*M))*M    (8)

If you know how to solve this for F using matrix algebra, I would like to hear from you. But let's try an ansatz F = C^(-1)*M as in (1). The left hand side of (8) becomes M, the right hand side becomes (F^T*M/(F^T*M))*M = M as well. So the ansatz works, and the solution is in fact (1), up to a proportionality constant. To satisfy the normalization constraint (3), we can write

F = C^(-1)*M / (1^T*C^(-1)*M)  (9)

So there, the tangency portfolio is the same as the Kelly optimal portfolio, up to a normalization constant, and without telling us what the optimal leverage is.
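
As a minimal numerical sketch of (1) and (9) in Python/numpy, with made-up excess-return statistics for three assets:

import numpy as np

# Hypothetical annualized statistics for the excess returns of three assets.
M = np.array([0.08, 0.05, 0.03])                 # mean excess returns
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.02, 0.00],
              [0.00, 0.00, 0.01]])               # covariance matrix

F_kelly = np.linalg.solve(C, M)                  # equation (1): F = C^(-1)*M, leverage included
tangency = F_kelly / F_kelly.sum()               # equation (9): normalized so the weights sum to 1
leverage = np.abs(F_kelly).sum()                 # overall leverage implied by the Kelly allocation

print(F_kelly, tangency, leverage)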

===
Workshop Update:

Based on popular demand, I have revised the dates for my online Mean Reversion Strategies workshop to be August 27-29. 

===
Follow me @chanep on Twitter.




Wednesday, July 02, 2014

Another "universal" capital allocation algorithm

Financial engineers are accustomed to borrowing techniques from scientists in other fields (e.g. genetic algorithms), but rarely does the borrowing go the other way. It is therefore surprising to hear about this paper on a possible mechanism for evolution due to natural selection which is inspired by universal capital allocation algorithms.

A capital allocation algorithm attempts to optimize the allocation of capital to stocks in a portfolio. An allocation algorithm is called universal if it results in a net worth that is "similar" to that generated by the best constant-rebalanced portfolio with fixed weightings over time (denoted CBAL* below), chosen in hindsight. "Similar" here means that the net worth does not diverge exponentially. (For a precise definition, see this very readable paper by Borodin, et al. H/t: Vladimir P.)

Previously, I knew of only one such universal trading algorithm - the Universal Portfolio invented by Thomas Cover, which I have described before. But here is another one that has proven to be universal: the exceedingly simple EG algorithm.

The EG ("Exponentiated Gradient") algorithm is an example of a capital allocation rule using "multiplicative updates": the new capital allocated to a stock is proportional to its current capital multiplied by a factor. This factor is an exponential function of the return of the stock in the last period. This algorithm is both greedy and conservative: greedy because it always allocates more capital to the stock that did well most recently; conservative because there is a penalty for changing the allocation too drastically from one period to the next. This multiplicative update rule is the one proposed as a model for evolution by natural selection.

The computational advantage of EG over the Universal Portfolio is obvious: the latter requires a weighted average over all possible allocations at every step, while the former needs only know the allocation and returns for the most recent period. But does this EG algorithm actually generate good returns in practice? I tested it two ways:

1) Allocate between cash (with 2% per annum interest) and SPY.
2) Allocate among SP500 stocks.

In both cases, the only free parameter of the model is a number called the "learning rate" η, which determines how fast the allocation can change from one period to the next. It is generally found that η=0.01 is optimal, which we adopted. Also, we disallow short positions in this study.

The benchmarks for comparison for 1) are, using the notations of the Borodin paper,

a)  the buy-and-hold SPY portfolio BAH, and
b) the best constant-rebalanced portfolio with fixed allocations in hindsight CBAL*.

The benchmarks for comparison for 2)  are

a) a constant rebalanced portfolio of SP500 stocks with equal allocations U-CBAL,
b) a portfolio with 100% allocation to the best stock chosen in hindsight BEST1, and
c) CBAL*.

To find CBAL* for a SP500 portfolio, I used Matlab Optimization Toolbox's constrained optimization function fmincon.

There is also the issue of SP500 index reconstitution. It is complicated to handle the addition and deletion of stocks in the index within a constrained optimization function. So I opted for the shortcut of using a subset of stocks that were in the SP500 from 2007 to 2013, tolerating the presence of survivorship bias. There are only 346 such stocks.

The result for 1) (cash vs SPY) is that the CAGR (compound annualized growth rate) of EG is slightly lower than BAH (4% vs 5%). It turns out that BAH and CBAL* are the same: it was best to allocate 100% to SPY during 2007-2013, an unsurprising recommendation in hindsight.

The result for 2) is that the CAGR of EG is higher than the equal-weight portfolio (0.5% vs 0.2%). But both these numbers are much lower than that of BEST1 (39.58%), which is almost the same as that of CBAL* (39.92%). (Can you guess which stock in the current SP500 generated the highest CAGR? The answer, to be revealed below*, will surprise you!)

We were promised that the EG algorithm would perform "similarly" to CBAL*, so why does it underperform so miserably? Remember that similarity here just means that the divergence is sub-exponential: but even a polynomial divergence can in practice be substantial! This seems to be a universal problem with universal algorithms of asset allocation: I have never found any that actually achieves significant returns in the short span of a few years. Maybe we will find more interesting results with higher frequency data.

So given the underwhelming performance of EG, why am I writing about this algorithm, aside from its interesting connection with biological evolution? Because it serves as a setup for another, non-universal, portfolio allocation scheme, as well as a way to optimize parameters for trading strategies in general: both topics for another time.

===
Workshops Update:

My next online workshop will be on  Mean Reversion Strategies, August 26-28. This and the Quantitative Momentum workshops will also be conducted live at Nanyang Technological University in Singapore, September 18-21.

===
Do follow me @chanep on Twitter, as I often post links to interesting articles there.

===
*The SP500 stock that generated the highest return from 2007-2013 is AMZN.