Friday, November 15, 2013

Cointegration Trading with Log Prices vs. Prices

In my recent book, I highlighted a difference between cointegration (pair) trading of price spreads and log price spreads. Suppose the price spread hA*yA-hB*yB of two stocks A and B is stationary. We should just keep the number of shares of stocks A and B fixed, in the ratio hA:hB, and short this spread when it is much higher than average, and long this spread when it is much lower. On the other hand, for a stationary log price spread hA*log(yA)-hB*log(yB), we need to keep the market values of stocks A and B fixed, in the ratio hA:hB, which means that at the end of every bar, we need to rebalance the shares of A and B due to price changes.
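
To make the difference concrete, here is a minimal sketch of the share computations for each case (all numbers, hedge ratios, and the capital figure are hypothetical):

```python
# A minimal illustration of the two position-sizing schemes above; all
# numbers (hedge ratios, prices, capital) are hypothetical.
hA, hB = 1.0, 0.8          # hedge ratios from the cointegration fit
yA, yB = 50.0, 40.0        # current prices of stocks A and B

# Price spread hA*yA - hB*yB: fixed SHARE ratio, so no rebalancing is needed
shares_A = 1000 * hA       # e.g. 1000 "units" of the spread
shares_B = 1000 * hB

# Log price spread hA*log(yA) - hB*log(yB): fixed MARKET VALUE ratio, so the
# share counts must be recomputed at the end of every bar as prices move
capital = 100000.0
shares_A_log = capital * hA / yA    # keeps market value of A at capital*hA
shares_B_log = capital * hB / yB    # keeps market value of B at capital*hB
```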

For most cointegrating pairs that I have studied, both the price spreads and the log price spreads are stationary, so it doesn't matter which one we use for our trading strategy. However, for an unusual pair whose log price spread is stationary but whose price spread is not (Hat tip: Adam G. for drawing my attention to one such example), the implication is quite significant. A stationary price spread means that price differences are mean-reverting, while a stationary log price spread means that return differences are mean-reverting. For example, if stock A typically grows 2 times as fast as B, but has been growing 2.5 times as fast recently, we can expect the growth rate differential to decrease going forward. We would still short A and long B, but we would exit this position when the growth rates of A vs B return to a 2:1 ratio, and not when the price spread of A vs B returns to a historical mean. In fact, the price spread of A vs B should continue to increase over the long term.

This much is easy to understand. But thanks to reader Ferenc F., who referred me to a paper by Fernholz and Maguire, I realized there is a simple mathematical relationship that must hold between stocks A and B in order for their log prices to cointegrate.

Let us start with a formula derived by these authors for the change in log market value P of a portfolio of 2 stocks: d(logP) = hA*d(log(yA))+hB*d(log(yB))+gamma*dt.

The gamma in this equation is

gamma=1/2*(hA*varA + hB*varB), where varA is the variance of stock A minus the variance of the portfolio market value, and ditto for varB.

Note that this formula holds for a portfolio of any two stocks, not just when they are cointegrating. But if they are in fact cointegrating, and if hA and hB are the weights which create the stationary portfolio P, we know that d(logP) cannot have a non-zero long term drift term represented by gamma*dt. So gamma must be zero. Now in order for gamma to be zero, the covariance of the two stocks must be positive (no surprise here) and equal to the average of the variances of the two stocks. I invite the reader to verify this conclusion by expressing the variance of the portfolio market value in terms of the variances of the individual stocks and their covariance, and also to extend it to a portfolio with N stocks. This cointegration test for log prices is certainly simpler than the usual CADF or Johansen tests! (The price to pay for this simplicity? We must assume normal distributions of returns.)
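
For readers who prefer to let a computer do the algebra, here is a minimal sympy sketch of that verification, assuming the portfolio weights sum to 1:

```python
# A quick symbolic check of the claim above, assuming hA + hB = 1 and using
# the definitions in this post (varA, varB are the stock variances here).
import sympy as sp

hA, varA, varB, cov = sp.symbols('hA varA varB cov')
hB = 1 - hA   # the portfolio weights sum to 1

# Variance of the portfolio market value in terms of stock variances/covariance
varP = hA**2*varA + hB**2*varB + 2*hA*hB*cov

# gamma as defined above: 1/2 * sum of weights times (stock var - portfolio var)
gamma = sp.Rational(1, 2)*(hA*(varA - varP) + hB*(varB - varP))

# Setting gamma = 0 and solving for the covariance...
print(sp.simplify(sp.solve(sp.Eq(gamma, 0), cov)[0]))
# -> varA/2 + varB/2: the covariance must equal the average of the variances
```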

===

My online Quantitative Momentum Strategies workshop will be offered on December 2-4. Please visit epchan.com/my-workshops for registration details.

Thursday, October 24, 2013

How Useful is Order Flow and VPIN?

Can short-term price movement be predicted? (I am speaking of seconds or minutes here.) This is a question relevant not only to high frequency traders, but to every long-term investor as well. Even if one plans to buy and hold a stock for years, nobody likes to suffer a short-term negative P&L immediately after entering a position.

One short-term prediction method that has long found favor with academic researchers and traders alike is order flow. Order flow is just signed transaction volume: if a transaction of 100 shares is classified as a "buy", the order flow is +100; if it is classified as a "sell", the order flow is -100. This might strike some as rather strange: every transaction has a buyer and a seller, so what do we mean by a "buy" or a "sell"? Well, the "buyer" is defined as the one who is the "aggressor", i.e. the one using a market order to buy at the ask price. (And vice versa for the seller, whom I will henceforth omit in this discussion.) The intuitive reason why a series of large "buy" market orders is predictive of short-term price increases is that anyone so eager to go long likely knows something about the market that others don't (either due to superior fundamental knowledge or a technical model), so we had better join her/him! Such superior traders are often called "informed traders", and their order flow is often called "toxic flow". Toxic, that is, to the uninformed market maker.

In theory, if one has a tick data feed, one can tell whether an execution is a "buy" or a "sell" by comparing the trade price with the bid and ask prices: if the trade price is equal to the ask, it is a "buy". This is called the "Quote Rule". But in practice, there is a hitch. If the bid and ask prices change quickly, a buy market order may end up buying at the bid price if the market has fortuitously moved lower since the order was sent. Besides, perhaps 1/3 of trading in the US equities markets takes place in dark pools or via hidden orders, so the quotes are simply invisible and order flow non-computable. So this classification scheme is not foolproof. Therefore, a number of researchers (see "Flow Toxicity and Volatility in a High Frequency World" by Easley et al.) proposed an alternative, "easier", method to compute order flow. Instead of checking the trade price of each tick, they need only the "open" and "close" trade prices of a bar, preferably a volume bar, and assign a fraction of the volume in that bar to "buy" or "sell" depending on whether the close price is higher or lower than the open price. (The assignment formula is based on the cumulative probability density of a Gaussian distribution, which incidentally models price changes of volume bars, but not time bars, pretty well.) The absolute difference between buy and sell volume, expressed as a fraction of the total volume, is called "VPIN" by the authors, or Volume-Synchronized Probability of Informed Trading. The higher the VPIN, the more likely we will experience short-term momentum due to informed trading.
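
To make the procedure concrete, here is a minimal sketch of this bulk volume classification and the resulting VPIN estimate; the data format (arrays of volume-bar statistics) is my assumption:

```python
# A minimal sketch of VPIN via bulk volume classification, following the
# description above; arrays of volume-bar opens/closes/volumes are assumed.
import numpy as np
from scipy.stats import norm

def vpin(opens, closes, volumes):
    """Estimate VPIN over a sequence of volume bars."""
    dp = np.asarray(closes) - np.asarray(opens)   # price change per volume bar
    sigma = np.std(dp)                            # scale of those price changes
    buy_frac = norm.cdf(dp / sigma)               # Gaussian CDF assigns buy fraction
    buy_vol = buy_frac * volumes
    sell_vol = volumes - buy_vol
    # average |buy - sell| imbalance as a fraction of total volume
    return np.sum(np.abs(buy_vol - sell_vol)) / np.sum(volumes)
```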

Theory and intuition aside, how well does order flow work in practice as a short-term predictor in various markets? And how predictive is VPIN as compared to the old Quote Rule? In my experience, while this indicator is predictive of price change, the change is often too small to overcome transaction costs including the bid-ask spread. And more disturbingly, in those markets where both the Quote Rule and VPIN should work (e.g. futures markets), VPIN has so far underperformed the Quote Rule, despite (?) it being patented and highly touted. I have informally polled other investment professionals on their experience, and the answers usually come back indifferent as well.

Do you have live experience with VPIN? Or more generally, do you find strategies built using volume bars superior to those using time bars? If so, please leave us your comments!

===

My online Quantitative Momentum Strategies workshop will be offered in December. Please visit epchan.com/my-workshops for registration details.


Tuesday, August 20, 2013

Guest Post: A qualitative review of VIX F&O pricing and hedging models

By Azouz Gmach

VIX Futures & Options are among the most actively traded index derivatives on the Chicago Board Options Exchange (CBOE). These derivatives are written on the S&P 500 volatility index, and since their launch, their popularity has made volatility a widely accepted asset class for trading, diversification, and hedging. VIX Futures started trading on March 26th, 2004 on the CFE (CBOE Futures Exchange), and VIX Options were introduced on Feb 24th, 2006.


VIX Futures & Options

VIX (Volatility Index), or the 'Fear Index', is based on S&P 500 options volatility. Spot VIX can be defined as the square root of the 30-day variance swap rate of the S&P 500 index (SPX), or in simple terms, the 30-day average implied volatility of S&P 500 index options. VIX F&O are based on this spot VIX and are similar to equity index derivatives in general modus operandi. But structurally they have far more differences than similarities. While in the case of equity indices (for example, SPX) the index is a weighted average of its components, in the case of the VIX it is a sum of squares of its components. This non-linear relationship makes spot VIX non-tradable, but at the same time the derivatives of spot VIX are tradable. This can be better understood with an analogy to interest rate derivatives: derivatives based on interest rates are traded worldwide, but the underlying asset, the interest rate itself, cannot be traded.

This distinctive relationship between the VIX derivatives and the underlying VIX makes them unique, in the sense that the overall behavior and pricing of these instruments are quite different from those of equity index derivatives. It also makes the pricing of VIX F&O a complicated process. A proper statistical approach, incorporating aspects such as the strength of trend, mean reversion, and volatility, is needed to model the pricing and behavior of VIX derivatives.


Research on Pricing Models

There has been a lot of research on models for VIX F&O pricing based on different approaches. These models have their own merits and demerits, and it is a tough decision to pick the optimal one. In this regard, I find the work of Qunfang Bao, titled 'Mean-Reverting Logarithmic Modeling of VIX', quite interesting. In his research, Bao not only revisits the existing models by other prominent researchers, but also proposes new models after carefully observing the limitations of the existing ones. The basic thesis of Bao's work is that mean-reverting logarithmic dynamics is an essential feature of spot VIX.

VIX F&O contracts don’t necessarily track the underlying in the same way in which equity futures track their indices. VIX Futures have a dynamic relationship with the VIX index and do not exactly follow its index. This correlation is weaker and evolves over time. Close to expiration, the correlation improves and the futures might move in sync with the index. On the other hand VIX Options are more related to the futures and can be priced off the VIX futures in a much better way than the VIX index itself.


Pricing Models

As a volatility index, VIX shares the properties of mean reversion, large upward jumps, and stochastic volatility (aka stochastic vol-of-vol). A good model is expected to take most of these factors into consideration.

There are roughly two categories of approaches for VIX modeling: the consistent approach and the standalone approach.

I. Consistent approach: This is the pure diffusion model, wherein the inherent relationship between the S&P 500 and the VIX is used to derive an expression for spot VIX, which by definition is the square root of the forward realized variance of SPX.

II. Standalone approach: Here the VIX dynamics are directly specified, so the VIX derivatives can be priced in a much simpler way. This approach focuses only on pricing derivatives written on the VIX index, without considering SPX options.

Bao mentions in his paper that the standalone approach is both simpler and better than the consistent approach.


MRLR model

The most widely proposed model under the standalone approach is the MRLR (Mean-Reverting Logarithmic) model, which assumes that the logarithm of spot VIX follows a mean-reverting (Ornstein-Uhlenbeck) process. The MRLR fits VIX futures prices well, but appears unsuited to VIX options pricing, because it generates no skew for VIX options.
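
For intuition, here is a minimal simulation sketch of such mean-reverting logarithmic dynamics; all parameter values are hypothetical, and I assume we simulate under the pricing measure so that the futures price is just the expected future spot VIX:

```python
# A toy Monte Carlo of MRLR-style dynamics: log spot VIX follows a
# mean-reverting (Ornstein-Uhlenbeck) process. All parameters hypothetical.
import numpy as np

rng = np.random.default_rng(0)
kappa, theta, sigma = 4.0, np.log(20.0), 1.0    # reversion speed, long-run log VIX, vol-of-vol
dt, n_steps, n_paths = 1/252, 252, 100000

x = np.full(n_paths, np.log(16.0))              # start with spot VIX at 16
for _ in range(n_steps):
    x += kappa*(theta - x)*dt + sigma*np.sqrt(dt)*rng.standard_normal(n_paths)

print("1-year VIX futures price estimate:", np.exp(x).mean())
```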


MRLRJ model

Since the MRLR model is unable to produce an implied volatility skew for VIX options, Bao modifies it by adding a jump to the mean-reverting logarithmic dynamics, obtaining the Mean-Reverting Logarithmic Jump (MRLRJ) model. By adding upward jumps to spot VIX, this model is able to capture the positive skew observed in the VIX options market.
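
Continuing the toy simulation above, adding MRLRJ-style upward jumps takes only a couple of extra lines (the jump frequency and size are again hypothetical):

```python
# Extending the previous sketch with Poisson-arriving upward jumps in log VIX.
import numpy as np

rng = np.random.default_rng(0)
kappa, theta, sigma = 4.0, np.log(20.0), 1.0
lam, jump_mean = 10.0, 0.2                      # ~10 jumps/year, mean log-jump size 0.2
dt, n_steps, n_paths = 1/252, 252, 100000

x = np.full(n_paths, np.log(16.0))
for _ in range(n_steps):
    n_jumps = rng.poisson(lam*dt, n_paths)      # jumps in this step (mostly 0 or 1)
    jump = n_jumps * rng.exponential(jump_mean, n_paths)   # upward jumps only
    x += kappa*(theta - x)*dt + sigma*np.sqrt(dt)*rng.standard_normal(n_paths) + jump

print("1-year VIX futures price with jumps:", np.exp(x).mean())
```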


MRLRSV model

Another way to produce an implied volatility skew for VIX options is to include stochastic volatility in the spot VIX dynamics. The Mean-Reverting Logarithmic model with Stochastic Volatility (MRLRSV) is based on this mechanism.

Both the MRLRJ and MRLRSV models perform equally well in capturing the positive skew observed in VIX options.


MRLRSVJ model

Bao further combines the MRLRJ and MRLRSV models to form the MRLRSVJ model. He mentions that this combined model is rather complicated and adds little value over the MRLRJ or MRLRSV models, while requiring extra parameters to be estimated.

The MRLRJ and MRLRSV models serve better than the other models that have been proposed for pricing VIX F&O. In his paper, Bao additionally derives and calibrates the mathematical expressions for his models, and derives hedging strategies based on them as well. Quantifying the volatility skew has been an active area of interest for researchers, and this paper addresses it in a scientific way, with attention to convexity adjustments, futures correlations, and numerical analysis of the models. While further validation and backtesting of the models may be required, Bao's work definitely explains a lot of the anomalous features of the VIX and its derivatives.

---
Azouz Gmach works for QuantShare, a technical/fundamental analysis software company.

===
My online Mean Reversion Strategies workshop will be offered in September. Please visit epchan.com/my-workshops for registration details.

Also, I will be teaching a new course Millisecond Frequency Trading (MFT) in London this October.

-Ernie

Tuesday, July 16, 2013

Momentum Crash and Recovery

In my book I devoted considerable attention to the phenomenon of "Momentum Crashes" that Professor Kent Daniel discovered. This refers to the fact that momentum strategies generally work very poorly in the immediate aftermath of a financial crisis. This phenomenon apparently spans many asset classes, and has been around since the Great Depression. Sometimes the underperformance lasted multiple decades, and at other times these strategies recovered within the lifetime of a momentum trader. So how have momentum strategies fared after the 2008 financial crisis, and have they recovered?

First, let's look at the Diversified Trends Indicator (formerly the S&P DTI index), which is a fairly generic trend-following strategy applied to futures. Here are the index values since inception:

[Chart: S&P DTI index values since inception]
and here are the values for 2013:

[Chart: S&P DTI index values, 2013 YTD]

After suffering relentless decline since 2009, it has finally shown positive returns YTD!

Now look at a momentum strategy on soybean futures (ZS) that I have been working on. Here are the cumulative returns from 2009 to June 2011:

[Chart: cumulative returns of the ZS momentum strategy, 2009 to June 2011]

and here are the cumulative returns since then:

[Chart: cumulative returns of the ZS momentum strategy since July 2011]
The difference is stark!

Despite evidence that momentum strategies have indeed enjoyed a general recovery, we must play the part of skeptical financial scientists and look for alternative theories. If any reader can tell us an alternative, plausible explanation why ZS should start to display trending behavior since July 2011, but not before, please post it in the comment area. The prize for the best explanation: I will disclose in private more details about this strategy to that reader. (To claim the prize, please include the last 4 digits of your phone number in the post for identification purposes.)

===
Upcoming events:
  1. I will be teaching an online workshop on Momentum Strategies from July 30 - August 1. Registration info can be found here.
  2. My friend Dr. Haksun Li is offering a Certificate in Quantitative Investment series of courses.

Saturday, May 25, 2013

My new book on Algorithmic Trading is out

A reader (Hat tip: Ken) told me that my new book Algorithmic Trading: Winning Strategies and Their Rationale is now available for purchase at Amazon.com. The difference with my previous book? A lot more sample strategies with an emphasis on their "rationale", and more advanced techniques. It covers stocks, futures, and FX. A big thank-you to my editors, reviewers, and you, the reader, for your on-going support.

And when you are done with it, please post a review on Amazon whether you like it or hate it!

Also, I am now offering a live online course on Backtesting in June. It covers in excruciating detail the various nuances of conducting a correct backtest and the numerous pitfalls one can encounter when backtesting different types of strategies and asset classes. For syllabus and registration details, please visit my website.



Friday, May 03, 2013

Nonlinear Trading Strategies

I have long been partial to linear strategies due to their simplicity and relative immunity to overfitting. They can be used quite easily to profit from mean-reversion. However, there is a serious problem: they are quite fragile, i.e. vulnerable to tail risks. As we move from mean-reverting strategies to momentum strategies, we immediately introduce a nonlinearity (stop losses), but simultaneously remove certain tail risks (except during times when markets are closed). But if we want to enjoy anti-fragility and are going to introduce nonlinearities anyway, we might as well go full-monty, and consider options strategies. (It is no surprise that Taleb was an options trader.)

It is easy to see that options strategies are nonlinear, since options payoff curves (value of an option as function of underlying stock price) are plainly nonlinear. I personally have resisted trading them because they all seem so complicated, and I abhor complexities. But recently a reader recommended a little book to me: Jeff Augen's "Day Trading Options" where the Black-Scholes equation (and indeed any equation) is mercifully absent from the entire treatise. At the same time, it is suffused with qualitative ideas. Among the juicy bits:

1) We can find distortions in the 2D implied volatility surface (implied volatility as z-axis, expiration months as x, and strike prices as y) which may mean revert to "smoothness", hence presenting arbitrage opportunities. These distortions are present for both stock and stock index options.

2) Options are underpriced intraday and overpriced overnight: hence it is often a good idea to buy them at the market open and sell them at market close (except on some special days! See 4 below.). In fact, there are certain days of the week where this distortion is the most drastic and thus favorable to this strategy.

3) Certain cash instruments have unusually high kurtosis, but their corresponding option prices consistently underprice such tail risks. Thus structures such as strangles or backspreads can often be profitable without incurring any left tail risks.

4) If there is a long weekend before expiration day (e.g. Easter weekend), the time decay of an option's value over those 3 days is compressed into an intraday decline on the last trading day before the weekend.

Now, as quantitative traders, we have no need to take his word on any of these assertions. So, onward to backtesting!

(For those who may be stymied by the lack of affordable historical intraday options data, I recommend Nanex.net.)
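
For instance, here is a minimal sketch of how one might start testing idea 2, assuming a pandas DataFrame of daily open and close option mid-prices (the data layout and column names are hypothetical):

```python
# A toy test of idea 2: buy an option at the open, sell it at the close, and
# check the average return by weekday. The DataFrame format is assumed.
import pandas as pd

def intraday_option_returns(px: pd.DataFrame) -> pd.Series:
    """px: DataFrame with a DatetimeIndex and 'open'/'close' mid-prices."""
    ret = px['close'] / px['open'] - 1          # intraday long-option return
    # group by weekday to see on which days the distortion is most drastic
    print(ret.groupby(ret.index.dayofweek).mean())
    return ret
```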

===

There are still 2 slots available in my online Mean Reversion Strategies workshop in May. The workshop will be conducted live via Adobe Connect, and is limited to a total of 4 participants. Part of the workshop will focus on how to avoid getting hurt when a pair or a portfolio of instruments stops cointegrating.

Thursday, April 04, 2013

An Integrated Development Environment for High Frequency Strategies

I have come across many software platforms that allow traders to first specify and backtest a strategy and then, with the push of a button, turn the backtest strategy into a live trading program that can automatically submit orders to their favorite broker. (See all my articles on this topic here.)  I called these platforms "Integrated Development Environment" (IDE) in my new book, and they range from the familiar and retail-oriented (e.g. MetaTrader, NinjaTrader, TradeStation), to the professional but skills-demanding (e.g. ActiveQuant, Marketcetera, TradeLink),  and finally to the comprehensive and industrial-strength (e.g. Deltix, Progress Apama, QuantHouse, RTD Tango). Some of these require no programming skills at all, allowing you to construct strategies by dragging-and-dropping, others use some simple scripting languages like Python, and yet others demand full-blown programming abilities in Java, C#, or C++. But which of these allow us to backtest and execute high frequency strategies?

To state the obvious: backtesting HF strategies is quite hard. The volume of data is one issue. But in addition, the execution details are very important to such strategies: details such as the exact exchange/venue to which we are routing our orders, the precise state of the order book that triggers our orders, the order types we are using, and finally the probability of getting filled if we use non-marketable orders. Mess up any one of these details, and the backtest will be far from realistic. I often tell people that it is easier to paper trade a HF strategy than to backtest one. While many of the platforms I mentioned above do allow backtesting using tick data, I don't know that they enable backtesting using the full order book and a choice of execution venues. With this background, I am happy to report that I have recently come across just such a platform called Lime Strategy Studio.

First, the bad news. LimeTrader is useful only to traders who trade with Lime Brokerage, as it is configured to send live orders to Lime only. [UPDATE: I have since learned that there are adapters available for 3rd party brokers.] However, if you are going to trade HF stocks and futures strategies, why not go with Lime, since they provide you with a comprehensive API, direct ultra-low latency feeds from the exchanges, and allow (nay, insist on) colocation either at the exchanges or at their data center at a reasonable fee? (Full Disclosure: I have no current business relationship with Lime, though I was a customer.) Another piece of bad news: the specification of the strategy must be in C++.

But once you get over these two hurdles, the benefits are manifold. Every detail that you can specify for a live trading strategy can be specified for the backtest and paper trading. As I said, these details may include order type, trading venue, state of order book, and even statistics of the order book, not to mention fundamental data such as earnings, corporate actions, and other user-provided data such as news. A fill simulator is included for your non-marketable orders. As with other IDEs, once you backtested a strategy in its every detail and are satisfied with the performance metrics, you can go live (either for paper or production trading) with the push of a button.
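
As an aside, for readers curious what a fill simulator might do under the hood, here is a toy queue-position heuristic; this is purely my own illustration, not a description of Lime's simulator:

```python
# A toy fill simulator for a resting limit buy order: we are filled once the
# market trades through our price, or enough volume trades at our price to
# exhaust the queue ahead of us. Illustrative heuristic only.
def simulate_fill(limit_price: float, queue_ahead: float, trades: list) -> bool:
    """trades: list of (price, size) executions observed after order placement."""
    remaining = queue_ahead
    for price, size in trades:
        if price < limit_price:      # traded through our price: certainly filled
            return True
        if price == limit_price:     # executions at our price consume the queue
            remaining -= size
            if remaining < 0:
                return True
    return False
```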

If any reader knows of other IDEs that have similar features and are useful for backtesting HF strategies, please let us know!

===

Speaking of HF strategies, traders often lament the ultra-high secrecy around them and the difficulty of gathering knowledge in this field. A friend (hat tip: Dave) referred me to this paper by Prof. Dragos Bozdog et al. that gives a flavor of what sort of modeling may be involved. I find it very readable and thought-provoking.

===

There are still 2 slots available in my online Mean Reversion Strategies workshop scheduled for May.



Thursday, March 14, 2013

What Can Quant Traders Learn from Taleb's "Antifragile"?

It can seem a bit ironic that we should be discussing Nassim Taleb's best-seller "Antifragile" here, since most algorithmic trading strategies involve predictions and won't be met with approval from Taleb. Predictions, as Taleb would say, are "fragile" -- they are prone to various biases (e.g. data snooping bias) and the occasional Black Swan event will wipe out the small cumulative profits from many correct bets. Nevertheless, underneath the heap of diatribes against various luminaries ranging from Robert Merton to Paul Krugman, we can find a few gems. Let me start from the obvious to the subtle:

1) Momentum strategies are more antifragile than mean-reversion strategies.

Taleb didn't say that, but that's the first thought that came to my mind. As I argued in many places, mean-reverting strategies have natural profit caps (exit when price has reverted to mean) but no natural stop losses (we should buy more of something if it gets cheaper), so they are very much subject to left tail risk, but cannot take advantage of the unexpected good fortune of the right tail. Very fragile indeed! On the contrary, momentum strategies have natural stop losses (exit when momentum reverses) and no natural profit caps (keep the same position as long as momentum persists). Generally, very antifragile! Except: what if during a trading halt (due to the daily overnight gap, or circuit breakers), we can't exit a momentum position in time? Well, you can always buy an option to simulate a stop loss. Taleb would certainly approve of that.

2) High frequency strategies are more antifragile than low frequency strategies.

Taleb also didn't say that, and it has nothing to do with whether it is easier to predict short-term vs. long-term returns. Since HF strategies allow us to accumulate profits much faster than low frequency ones, we need not apply any leverage. So even when we are unlucky enough to be holding a position of the wrong sign when a Black Swan hits, the damage will be small compared to the cumulative profits. So while HF strategies do not exactly benefit from right tail risk, they are at least robust with respect to left tail risk.

3) Parameter estimation errors and vulnerability to them should be explicitly incorporated in a backtest performance measurement.

Suppose your trading model has a few parameters which you estimated/optimized using some historical data set. Based on these optimized parameters, you compute the Sharpe ratio of your model on this same data. No doubt this Sharpe ratio will be very good, due to the in-sample optimization. If you apply this model with those optimized parameters on out-of-sample data, you would probably get a worse Sharpe ratio which is more predictive. But why stop at just two data sets? We can find N different data sets of the same size, calculate the optimized parameters on each of them, and compute the Sharpe ratios over the N-1 out-of-sample data sets. Finally, we can average over all these Sharpe ratios. If your trading model is fragile, you will find that this average Sharpe ratio is quite low. But more important than Sharpe ratios, you should compute the maximum drawdown based on each set of parameters, and also the maximum of all these max drawdowns. If your trading model is fragile, this maximum of maximum drawdowns is likely to be quite scary.

The scheme I described above is called cross-validation and is well-known before Taleb, though his book reminds me of its importance.
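
A minimal sketch of this scheme, where `optimize` and `backtest` are hypothetical stand-ins for your own parameter-fitting and performance routines:

```python
# Cross-validation as described above: fit on each data set, test on all the
# others, then aggregate. `optimize` and `backtest` are user-supplied.
import numpy as np

def cross_validate(datasets, optimize, backtest):
    sharpes, max_dds = [], []
    for i, train in enumerate(datasets):
        params = optimize(train)                       # in-sample optimization
        for j, test in enumerate(datasets):
            if j == i:
                continue                               # skip the training set
            sharpe, max_dd = backtest(test, params)    # out-of-sample metrics
            sharpes.append(sharpe)
            max_dds.append(max_dd)
    # average Sharpe ratio, and the maximum of all the max drawdowns
    return np.mean(sharpes), max(max_dds)
```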

4) Notwithstanding 3) above, a true estimate of the max drawdown is impossible because it depends on the estimate of the probability of rare events. As Taleb mentioned, even in the case of a normal distribution, if the "true" standard deviation is higher than your estimate by a mere 5%, the probability of a 6-sigma event will be increased by 5 times over your estimate! So really the only way to ensure that our maximum drawdown will not exceed a certain limit is through Constant Proportion Portfolio Insurance: trading risky assets with Kelly-leverage in a limited liability company, putting money that you never want to lose in an FDIC-insured bank, with regular withdrawals from the LLC to the bank (but not the other way around).
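
Taleb's arithmetic is easy to check (a quick sanity check, assuming normally distributed returns):

```python
# If the true sigma is 5% higher than estimated, a "6-sigma" event by our
# estimate is really only a 6/1.05-sigma event under the true distribution.
from scipy.stats import norm

p_estimated = norm.sf(6)       # tail probability of a 6-sigma event, as estimated
p_true = norm.sf(6 / 1.05)     # tail probability under the 5%-higher true sigma
print(p_true / p_estimated)    # roughly a factor of 5
```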

5) Correlations are impossible to estimate/predict. The only thing we can do is to short at +1 and buy at -1.

Taleb hates Markowitz portfolio optimization, and one of the reasons is that it relies on estimates of covariances of asset returns. As he said, a pair of assets that may have -0.2 correlation over a long period can have +0.8 correlation over another long period. This is especially true in times of financial stress. I quite agree on this point: I believe that manually assigning correlations with values of +/-0.75, +/-0.5, +/-0.25, 0 to entries of the correlation matrix based on "intuition" (fundamental knowledge) can generate as good out-of-sample performance as any meticulously estimated numbers. The more fascinating question is whether there is indeed mean-reversion of correlations. And if so, what instruments can we use to profit from it? Perhaps this article will help.

6) Backtest can only be used to reject a strategy, not to predict its success.

This echoes the point made by commenter Michael Harris in a previous article. Since historical data will never be long enough to capture all the possible Black Swan events that can occur in the future, we can never know if a strategy will fail miserably. However, if a strategy already failed in a backtest, we can be pretty sure that it will fail again in the future.

===

The online "Quantitative Momentum Strategies" workshop that I mentioned in the previous article is now fully booked. Based on popular demand, I will offer a "Mean Reversion Strategies" workshop in May. Once again, it will be conducted in real-time through Skype, and the number of attendees will be similarly limited to 4. See here for more information.

Monday, February 18, 2013

A workshop, a webinar, and a question

There is a workshop on the 25th of February titled "Market turbulence; monetization; and universality" by Mike Lipkin at Columbia University that promises to be interesting to those traders who have a physics background. Mike is a former colleague of mine at Cornell's Laboratory of Atomic and Solid State Physics, and I fondly remember the good old days when we all hunched over the theory group's computers while day-dreaming of our future. Mike has since gone on to become an options market-maker at the American Stock Exchange and an Adjunct Associate Professor at Columbia. He published some very interesting research on the "stock pinning" phenomenon near options expirations, i.e. stock prices often converge to the nearest strike prices of their options just before expirations.

---

If we want to trade directly on various FX ECNs such as HotspotFX or EBS, perhaps because we want to run some HFT strategies, we will need to be sponsored by a prime broker. However, since the Dodd-Frank Act has been in full force, no prime brokers that I know of are willing to take on customers with less than $10M in assets. (I often feel that the CFTC's primary goal is to prevent small players like myself from ever competing with bigger institutions. Of course, their stated goal is to "protect" us from financial harm ....) The only exception may be the CitiFX TradeStream ECN. Has any reader ever traded on this market? Any reviews or comments will be most welcome.

---

I am now offering an online workshop "Quantitative Momentum Strategies” to a select number of traders and portfolio managers. It will be conducted in real-time through Skype, and the number of attendees will be limited to 4. See here for more information.

Sunday, February 03, 2013

A stock factor based on option volatility smirk

A reader pointed out an interesting paper that suggests using the option volatility smirk as a factor to rank stocks. Volatility smirk is the difference between the implied volatilities of the OTM put option and the ATM call option. (Of course, there are numerous OTM and ATM put and call options. You can refer to the original paper for a precise definition.) The idea is that informed traders (i.e. those traders who have a superior ability to predict the next earnings numbers for the stock) will predominantly buy OTM puts when they think the future earnings reports will be bad, thus driving up the prices of those puts and their corresponding implied volatilities relative to the more liquid ATM calls. If we use this volatility smirk as a factor to rank stocks, we can form a long portfolio consisting of stocks in the bottom quintile, and a short portfolio with stocks in the top quintile. If we update this long-short portfolio weekly with the latest volatility smirk numbers, it is reported that we will enjoy an annualized excess return of 9.2%.
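
A minimal sketch of this weekly ranking procedure, assuming we already have a DataFrame of smirk values; the data layout (rows are week-end dates, columns are stocks) is hypothetical:

```python
# Long the bottom smirk quintile, short the top quintile, rebalanced weekly.
# `smirk` is an assumed DataFrame: rows are week-end dates, columns are stocks.
import pandas as pd

def smirk_portfolio(smirk: pd.DataFrame) -> pd.DataFrame:
    ranks = smirk.rank(axis=1, pct=True)     # cross-sectional percentile ranks
    longs = (ranks <= 0.2).astype(float)     # low smirk: little informed put buying
    shorts = (ranks >= 0.8).astype(float)    # high smirk: heavy informed put buying
    # equal capital within each leg, dollar-neutral overall
    return longs.div(longs.sum(axis=1), axis=0) - shorts.div(shorts.sum(axis=1), axis=0)
```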

As a standalone factor, this 9.2% return may not seem terribly exciting, especially since transaction costs have not been accounted for. However, the beauty of factor models is that you can combine an arbitrary number of factors, and though each factor may be weak, the combined model could be highly predictive. A search of the keyword "factor" on my blog will reveal that I have talked about many different factors applicable to different asset classes in the past. For stocks in particular, there is a short term factor as simple as the previous 1-day return that worked wonders. Joel Greenblatt's famous "Little Book that Beats the Market" used 2 factors to rank stocks (return-on-capital and earnings yield) and generated an APR of 30.8%.

The question, however, is how we should combine all these different factors. Some factor model aficionados will no doubt propose a linear regression fit, with future return as the dependent variable and all these factors as independent variables. However, my experience with this method has been unrelentingly poor: I have witnessed millions of dollars lost by various banks and funds using this method. In fact, I think the only sensible way to combine them is to simply add them together with equal weights. That is, if you have 10 factors, simply form 10 long-short portfolios each based on one factor, and combine these portfolios with equal capital. As Daniel Kahneman said, "Formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling".
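
In code, the equal-weight combination is as trivial as it sounds; given one long-short weight matrix per factor (such as the output of the hypothetical smirk_portfolio above):

```python
# Equal-capital combination of several factor portfolios: simply average the
# long-short weight matrices (each a dates-by-stocks DataFrame).
def combine_factors(portfolios):
    return sum(portfolios) / len(portfolios)
```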


Wednesday, January 02, 2013

The Pseudo-science of Hypothesis Testing

Backtesting trading strategies necessarily involves a very limited amount of historical data. For example, I seldom test strategies with data older than 2007. Gathering a longer history may not improve predictive accuracy since the market structure may have changed substantially. Given such scant data, it is reasonable to question whether the good backtest results (e.g. a high annualized return R) we may have obtained are just due to luck. Many academic researchers try to address this issue by running their published strategies through standard statistical hypothesis testing.

You know the drill: the researchers first come up with a supposedly excellent strategy. In a display of false modesty, they then suggest that perhaps a null hypothesis can produce the same good return R. The null hypothesis may be constructed by running the original strategy through some random simulated historical data, or by randomizing the trade entry dates. The researchers then proceed to show that such random constructions are highly unlikely to generate a return equal to or better than R. Thus the null hypothesis is rejected, and thereby impressing you that the strategy is somehow sound.

As statistical practitioners in fields outside of finance will tell you, this whole procedure is quite meaningless and often misleading.

The probabilistic syllogism of hypothesis testing has the same structure as the following simple example (devised by Jeff Gill in his paper "The Insignificance of Null Hypothesis Significance Testing"):

1) If a person is an American then it is highly unlikely she is a member of Congress.
2) The person is a member of Congress.
3) Therefore it is highly unlikely she is an American.

The absurdity of hypothesis testing should be clear. In mathematical terms, the probability we are really interested in is the conditional probability that the null hypothesis is true given an observed high return R: P(H0|R). But instead, the hypothesis test merely gives us the conditional probability of a return R given that the null hypothesis is true: P(R|H0). These two conditional probabilities are seldom equal.
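
A toy Bayesian calculation makes the gap concrete (all the numbers below are made up purely for illustration):

```python
# Why a small P(R|H0) does not imply a small P(H0|R): a made-up example.
p_R_given_H0 = 0.01      # a worthless (null) strategy rarely achieves return R
p_H0 = 0.99              # but the vast majority of candidate strategies are worthless
p_R_given_sound = 0.5    # a genuinely sound strategy achieves R half the time

# Bayes' rule
p_R = p_R_given_H0 * p_H0 + p_R_given_sound * (1 - p_H0)
p_H0_given_R = p_R_given_H0 * p_H0 / p_R
print(p_H0_given_R)      # ~0.66: the null is still quite likely despite "p = 0.01"
```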

But even if we can somehow compute P(H0|R), it is still of very little use, since there are an infinite number of potential H0. Just because you have knocked down one particular straw man doesn't say much about your original strategy.

If hypothesis testing is both meaningless and misleading, why do financial researchers continue to peddle it? Mainly because this is de rigueur to get published. But it does serve one useful purpose for our own private trading research. Even though a rejection of the null hypothesis in no way shows that the strategy is sound, a failure to reject the null hypothesis will be far more interesting.

(For other references on criticism of hypothesis testing, read Nate Silver's bestseller "The Signal and The Noise". Silver is of course the statistician who correctly predicted the winner of all 50 states + D.C. in the 2012 US presidential election. The book is highly relevant to anyone who makes a living predicting the future. In particular, it tells the story of one Bob Voulgaris who makes $1-4M per annum betting on NBA outcomes. It makes me wonder whether I should quit making bets on financial markets and move on to sports.)