The Average Investor's Blog

A software developer view on the markets

ARMA Models for Trading, Part V

Posted by The Average Investor on Jun 8, 2011

All posts in this series were combined into a single, extended tutorial and posted on my new blog.

Once backtesting is done and the results of the trading system are convincingly positive, it's time to move it to production. So how does one implement something like the ARMA techniques I described earlier in practice (assuming a retail brokerage account)?

In an earlier post I discussed the problems related to trading at the close and various solutions, so in this post I will cover only the two obstacles I had to overcome for the ARMA methods.

My implementation was to compute a table of actions for each instrument I am interested in. Here is an example:

-0.99%, 1
-0.98%, 1
-0.97%, 1
-0.96%, 1
-0.95%, -1
-0.94%, -1
-0.93%, -1
-0.92%, -1
-0.91%, -1
-0.9%, -1

The above table tells me what position I am supposed to take at the close of the day for various changes (as a percentage) in the price. In this example, any change up to and including -0.96% means a long position; in other words, if the instrument is down 0.96% or more on the day, go long.
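To make the lookup concrete, here is a minimal sketch (not the author's actual script) that parses rows like those above and returns the action for an observed price change. The file format and function names are my assumptions based on the example:

```python
# Parse an action table of "percent change, position" rows and look up
# the desired position for an observed intraday change.

def load_action_table(lines):
    """Parse rows like '-0.96%, 1' into sorted (change, position) pairs."""
    table = []
    for line in lines:
        pct, pos = line.split(",")
        table.append((float(pct.strip().rstrip("%")), int(pos)))
    return sorted(table)

def desired_position(table, change_pct):
    """Return the action from the table row closest to the observed change."""
    return min(table, key=lambda row: abs(row[0] - change_pct))[1]

rows = ["-0.99%, 1", "-0.98%, 1", "-0.97%, 1", "-0.96%, 1",
        "-0.95%, -1", "-0.94%, -1", "-0.93%, -1"]
table = load_action_table(rows)
print(desired_position(table, -0.97))  # -> 1 (long)
print(desired_position(table, -0.93))  # -> -1 (short)
```

In practice the table would span a much wider range of hypothetical closing changes, one row per step of 0.01%.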

The first problem I found was the computational time. These computations are expensive! I had two choices: port everything to C or buy hardware. I went with the latter; I had no idea how long it would have taken to port everything successfully to C. For less than $900 I assembled an i7 system that runs a few orders of magnitude faster than my laptop and can execute up to eight workloads in parallel. If you are unfamiliar with it, the i7 has four cores plus hyper-threading, which makes it behave somewhat like an eight-core machine, and so far it has proved good enough to compute four to five maps in about three hours. All computations run in parallel as daily cron jobs on Linux, and the results are emailed to me. :) The scripts implementing the infrastructure are available in the quantscript project.

The bigger problem was that quite often the computations were not stable enough. In the above example there was a single switch between the long and the short position (at -0.95%). Thus, at 15:55, if the instrument is away from this point, one can take the position with confidence that the price won't cross the line in the last seconds. Of course, the fill won't be exactly at the close, but on average you shouldn't expect a high negative impact from this type of slippage. The same holds if one is trading moving averages: there is a single price that separates the long and short positions, and it can even be determined mathematically.

No such luck for my ARMA implementation; quite often the table of actions looks like this:

0.75%, 1
0.76%, 1
0.77%, -1
0.78%, 1
0.79%, 1
0.8%, -1
0.81%, -1
0.82%, 1
0.83%, 1
0.84%, 1
0.85%, -1

If the price is within this range in the last few minutes, there is no guarantee where it will end up, long or short. So what to do? My solution was simply not to take a position if the price is within an unstable region near the close. The alternative is to take a position and live with it at least until the open on the next day: after the close we can compute the exact position, and if we did the wrong thing (bought when we were supposed to sell) we can close the position at the next day's open. Sometimes we will get lucky and the price will move in the direction that benefits us regardless of what the system says.
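The "skip unstable regions" rule can be sketched as follows, assuming a table of (change, position) pairs like the one above. The `margin` parameter and the helper names are my assumptions, not the author's code:

```python
# Stay flat if the observed change sits within `margin` of any point
# where the action table flips between long and short.

def switch_points(table):
    """Midpoints between adjacent rows where the position flips sign."""
    pts = []
    for (c1, p1), (c2, p2) in zip(table, table[1:]):
        if p1 != p2:
            pts.append((c1 + c2) / 2.0)
    return pts

def safe_position(table, change_pct, margin=0.03):
    """Return +1/-1 from the table, or 0 if too close to a switch."""
    if any(abs(change_pct - s) <= margin for s in switch_points(table)):
        return 0
    return min(table, key=lambda row: abs(row[0] - change_pct))[1]

table = [(0.75, 1), (0.76, 1), (0.77, -1), (0.78, 1),
         (0.79, 1), (0.80, -1), (0.81, -1), (0.82, 1)]
print(safe_position(table, 0.90))   # far from any switch: take the position
print(safe_position(table, 0.775))  # inside the unstable band: stay flat
```

How wide to make the margin is itself a trade-off: too narrow and a late tick can still flip the signal, too wide and the system sits out days it should have traded.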

This brings me to the last point: after the close is known, I compute the precise positions and check them against the actual positions. If necessary, I take corrective action during the next day.
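That reconciliation step is simple enough to sketch directly (hypothetical names, positions expressed as +1/0/-1 units):

```python
# After the actual close is known, recompute the exact desired position,
# compare it with what we actually hold, and queue a corrective order
# for the next day's open if the two differ.

def reconcile(desired_pos, actual_pos):
    """Signed correction to apply at the next open (0 means in sync)."""
    return desired_pos - actual_pos

# The system wanted short (-1) but we stayed flat (0) in an unstable region:
print(reconcile(-1, 0))  # -> -1, i.e. sell at the next open
# We guessed right near the close:
print(reconcile(1, 1))   # -> 0, nothing to do
```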

My experience so far, from a trading point of view, is quite positive. What I described above is not too hard to follow in practice. I hope it proves worth the effort. :D


8 Responses to “ARMA Models for Trading, Part V”

  1. Paolo said

I’ve been reading your previous posts about ARMA models for trading, but it’s not clear to me how these -0.99%:-0.9% percentage ranges are related to the ARMA forecasts.

As far as I remember, you were saying that if the one-day-ahead prediction comes out negative then the desired position is short, and vice versa…

    thanks,

    Paolo

    • Part V is about trading the system in real life. The system needs the closing price to decide what position to take, but the closing price is not available before the end of the day. That’s a typical obstacle with systems using the closing price both for signals and for trading.

      More details are in the Trading at the Close post.

  2. Paolo said

    That was clear…I’m asking what those percentage ranges refer to

  3. I see – the percentages refer to the price change: for instance, if the price is down 0.94% from the previous day’s close, one should go short. This makes it easier to monitor the moves towards the close and to decide on the position.

  4. Paolo said

    I’m quite surprised that your table of actions is so erratic for such a strategy… a difference of just 0.01% in the closing price can completely reverse the position.

    Anyway, I was also wondering whether you are worried about likely over-fitting in your model… choosing the best ARMA-GARCH parameters with a walk-forward process resembles optimising the best moving-average lengths for a crossover system on a daily basis.

  5. Yep, it surprised me too, and I haven’t come up with anything better than to consider the state “unstable” (which it probably is, in a way) and stay out.

    As for the over-fitting: this approach can be applied to any technique – optimize over a historical time window according to some optimization function and use the best parameters for the next day (initially I was changing the ARMA parameters on a weekly basis). I would expect the same problem using neural networks, machine learning, wavelets, etc. (which are coming next). Btw, how exactly do we pick the 10-month moving average, for instance? ;)

    Just a minor detail: I don’t insist on picking the best model, just a relatively good one. For instance, if I know the best 5-10 models, I’d probably go with the most stable one – assuming, of course, that this model-picking approach performs well on historical data.
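The walk-forward idea described in this reply can be sketched generically. Everything here is a stand-in – the candidate "models" are just lookback lengths and `score_model` is a dummy objective, not the actual ARMA-GARCH search:

```python
# Walk-forward parameter selection: each day, re-score a small set of
# candidate models over a rolling history window and use the best-scoring
# candidate for the next day.

def walk_forward(returns, candidates, window, score_model):
    picks = []
    for t in range(window, len(returns)):
        history = returns[t - window:t]
        best = max(candidates, key=lambda m: score_model(m, history))
        picks.append((t, best))
    return picks

# Toy usage: candidates are lookback lengths scored by a dummy objective.
returns = [0.01, -0.02, 0.015, 0.003, -0.01, 0.02, -0.005, 0.01]
score = lambda m, hist: sum(hist[-m:])  # dummy in-sample objective
print(walk_forward(returns, candidates=[2, 3], window=4, score_model=score))
```

Swapping in a "most stable of the top N" rule, as suggested above, only changes the selection step inside the loop.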

  6. Paolo said

    Got your point, I appreciate

    Thanks,

    Paolo

  7. [...] has become a lengthy and time-consuming post, so I will stop here. If I ever decide to do another post on this topic – I would likely discuss the obstacles I encountered while trading these models [...]
