
library(quantmod)

getSymbols("^DJI", from="1900-01-01")
dji = Cl(DJI["/2011"])

# Annualized close-to-close volatility over each year, keeping the final value
djiVol = aggregate(
            dji,
            as.numeric(format(index(dji), "%Y")),
            function(ss) coredata(tail(TTR:::volatility(ss, n=NROW(ss), calc="close"), 1)))

# The quantile of the last (2011) value within the historic distribution
ecdf(as.vector(djiVol))(as.numeric(tail(djiVol, 1)))
# The result is 0.8214286, the 82nd quantile

A volatile year, no doubt, but once again confirming that, in market behaviour at least, history does repeat itself. The following plot clearly shows that the volatility experienced during the Great Depression dwarfs the recent levels:

Next, out of pure curiosity, I also computed how much money was to be made with these levels of volatility:

library(quantmod)

getSymbols("^DJI", from="1900-01-01")

# Compute the absolute returns
absRets = abs(ROC(Cl(DJI["/2011"])))
absRets = reclass(ifelse(is.na(absRets), 0, absRets), absRets)

# Summarize annually
yy = as.numeric(format(index(absRets), "%Y"))
zz = aggregate(absRets, yy, function(ss) tail(cumprod(1 + ss), 1))
print(as.vector(tail(zz, 1)))
# The result is 10.64

That’s right, an owner of a crystal ball would have been able to multiply his money 10-fold in 2011 alone! For further comparison, in 1932, the owner of the same ball would have been able to multiply his money … 590 times!


It is interesting to note that a close above $126.10 will put SPY above its 200-day moving average, something unseen since Dec 7th. The last time this (SPY closing above its 200-day SMA) happened for more than three days in a row was back in August.

In any case, happy trading and merry holidays!


How is ARMA trading typically simulated? The data is split into two sets. The first set is used for model estimation, the in-sample testing. Once the model parameters are determined, the model performance is tested and evaluated on the second set, the out-of-sample forecasting. The first set is usually a few times larger than the second and spans four or more years of data (1,000+ trading days).
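
The split itself can be sketched in a few lines. This is a minimal illustration with synthetic returns; the series, the window sizes and the ARMA(1,1) order are made-up assumptions, not the actual setup:

```r
# Minimal sketch of the in-sample/out-of-sample split; the synthetic
# returns and the ARMA(1,1) order are purely illustrative
set.seed(42)
rets = rnorm(1250, mean=0.0002, sd=0.01)  # roughly five years of daily returns

inSample  = rets[1:1000]        # 1000+ days for model estimation
outSample = rets[1001:1250]     # held back for out-of-sample evaluation

# The model is estimated on the in-sample portion only
fit = arima(inSample, order=c(1,0,1))
```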

I wanted to be able to repeat the first step once in a while (weekly, monthly, etc) and to use the determined parameters for forecasts until the next calibration. Now, it’s easier to see why I classified my earlier approach as an “extreme” – it does the model re-evaluation on a daily basis. In any case, I wanted to build a framework to test the more orthodox approach.

To test such an approach, I needed to perform a “rolling” forecast (have mercy if that’s not the right term). Let’s assume we use weekly model calibration. Each Friday (or whatever the last day of the week is), we find the best model according to some criteria. At this point we can forecast one day ahead, based entirely on previous data. Once the data for Monday arrives, we can forecast Tuesday, again based entirely on previous data, and so on.
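
The scheme above can be sketched as a loop. Everything here (the window sizes, the ARMA(1,1) order, and the plain stats::arima model standing in for a full ARMA+GARCH fit) is an illustrative assumption; re-fitting with `fixed` coefficients is just one way to apply frozen parameters to newly arrived data:

```r
# A sketch of weekly re-calibration with daily one-step-ahead forecasts
set.seed(1)
rets = arima.sim(list(ar=0.5, ma=-0.3), n=300, sd=0.01)

history    = 250   # days used to estimate the model
refitEvery = 5     # re-estimate once a "week"

forecasts = rep(NA, length(rets))
pars = NULL
for(ii in (history + 1):length(rets)) {
   if(is.null(pars) || (ii - history - 1) %% refitEvery == 0) {
      # calibration day: re-estimate the parameters on the latest window
      pars = coef(arima(rets[(ii - history):(ii - 1)], order=c(1,0,1)))
   }
   # apply the frozen parameters to the data available so far and
   # forecast one day ahead, entirely based on previous data
   frozen = arima(rets[(ii - history):(ii - 1)], order=c(1,0,1),
                  fixed=pars, transform.pars=FALSE)
   forecasts[ii] = predict(frozen, n.ahead=1)$pred
}
```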

My problem was that the package I am using, fGarch, doesn’t support rolling forecasts. So before attempting to implement this functionality, I decided to look around for other packages (thank God I didn’t jump straight to coding).

At first, my search led me to the forecast package. I was encouraged – it has exactly the forecast function I needed (in fact, it helped me figure out exactly what I need;)). The only problem – it supports only mean models (ARFIMA), no GARCH.

Next I found the gem – the rugarch package. Not only does it implement a few different GARCH models, but it also supports ARFIMA mean models! I found the documentation and examples quite easy to follow too, not to mention that there is an additional introduction. All in all – a superb job!

Needless to say, this finding left me feeling like a fat kid in a candy store (R is simply amazing in this regard!). Most likely you will be hearing about new tests soon; meanwhile, let’s finish the post with a short illustration of the rugarch package (a single in-sample model training with an out-of-sample forecast):

library(quantmod)
library(rugarch)

getSymbols("SPY", from="1900-01-01")
spyRets = na.trim(ROC(Cl(SPY)))

# Train over 2000-2004, forecast 2005
ss = spyRets["2000/2005"]
outOfSample = NROW(ss["2005"])
spec = ugarchspec(
          variance.model=list(garchOrder=c(1,1)),
          mean.model=list(armaOrder=c(4,5), include.mean=T),
          distribution.model="sged")
fit = ugarchfit(spec=spec, data=ss, out.sample=outOfSample)
fore = ugarchforecast(fit, n.ahead=1, n.roll=outOfSample)

# Build some sort of indicator based on the forecasts
ind = xts(head(as.array(fore)[,2,], -1), order.by=index(ss["2005"]))
ind = ifelse(ind < 0, -1, 1)

# Compute the performance
mm = merge(ss["2005"], ind, all=F)
tail(cumprod(mm[,1]*mm[,2] + 1))
# Output (last line): 2005-12-30 1.129232

Hats off to brilliance!



The last week also marked the end of the month. As of the end of November, the performance of the indicators stood as follows:

Indicator | Gain/Loss
---|---
Buy and Hold | -0.70%
DVI | 22.24%
ARMA | -5.32%

An impressive performance by the DVI indicator! Let’s see how the year ends.



To compute these values, I use an approach parallel to the one shown for the DVI indicator. The code will be posted soon, but right now I am quite busy with some improvements on the ARMA/GARCH strategies.


The market was down 4 of the 5 trading days. The DVI indicator was short on Monday (winning) and Tuesday (losing) and went long afterwards (losing). This resulted in a loss of **only** -2.8%.

The real loser was my ARMA indicator. Yes, that’s right, it was even worse. It got the market direction wrong every single day of the week, for a whopping -4.67% loss.

Both DVI and ARMA are long for Monday (check the bar on the right of the blog), and they seem to indicate that markets are entering oversold territory.


Moving Average | Position | Since | Gain
---|---|---|---
20 Week | Long (SPY) | 2011-10-21 | 2.17%
10 Month | Out (IEF) | 2011-08-31 | 0.67%

I am dropping the 200-day moving average because it is hard to follow on a weekly basis.

The DVI indicator was long for the entire week, so it followed the performance of the index – 0.94%. As of the Friday close, however, the DVI indicator went above 0.5, which indicates a short position for Monday.
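
For reference, a DVI-based position along these lines can be sketched with TTR's DVI() function. The synthetic price series below is purely illustrative (a real run would use the index prices), and the 0.5 threshold with a one-day lag is my reading of the rule stated above:

```r
library(xts)
library(TTR)

# Hypothetical sketch: a synthetic price series stands in for the index
set.seed(7)
prices = xts(100 * cumprod(1 + rnorm(600, sd=0.01)),
             order.by=Sys.Date() - 600 + 1:600)

dvi = DVI(prices)$dvi

# DVI above 0.5 dictates a short position for the next day, below - long;
# lag by one day so that Friday's close drives Monday's position
pos = lag(ifelse(dvi > 0.5, -1, 1), 1)
```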
