Auto Traders: What time is it?

For several years I’ve been working on measuring and identifying efficiency in my computer-based models, trying to consider as thoroughly as possible why a model works and where it will work best - if it actually works at all.

Some of the research has been focused on Market Impact and Timing Risk, something that affects all traders but especially those who trade tighter time frames and algo (computer / EA) models. If you trade EAs, or want to, a really good place to start getting a handle on this is to Google: Understanding the Profit and Loss Distribution of Trading Algorithms Robert Kissell filetype:pdf

As I thought about “timing” I began to wonder if “time” impacted my trade models. Did the time of day I entered a trade represent a different level of efficiency in my model? I was surprised by the results.

In the next few posts I’m going to break out how “time of entry” impacted profitability in back-testing. - JP

Hey JP,

Looking forward to it, mate.

cheers

[B]Background[/B]
Before we jump into impacts, I want to cover the model just a little bit and how it came to be. I’ve been working with low-level linear genetic programming for about 15 years now. At one time you could evolve trading models in a GP application and trade them with no tweaking. But as with most approaches, implementation leads to nullification, and the models eventually become less efficient and more volatile. So adjustments have to be made.

I knew that going in, so I was able to get more life out of my original model than most, but by 2007 it had rolled over and died a “volatile” death. I spent several months trying to build a new model and had some success, but I also knew the difficulty of doing it had risen exponentially and that eventually this approach would not produce results.

The initial model, created in 2003, required over 10,000,000 candidate models to be sampled before finding a good fit, and it lasted for five years. In 2007 the system had to sample over 100,000,000 models, and the result became too volatile to trade a year and a half later. That’s a story for a different day though.

Knowing the 2007 trade model was going to be junk in short order, I began to think a lot more about “efficiency” and how to measure it. I was also trying to get back into hedge-fund allocations and wanted to be able to show that “capacity” was built into the approach. If you’re trading for institutional clients, it’s important to prove you can handle really large allocations and move money in and out without suffering huge Market and Timing impacts. Despite a pretty large grid of computers I was never really able to get a good model going. I needed more computational power and more time.

[B]Enter 2012[/B]
In 2009 I took a sabbatical from all areas of retail forex so I could get back to research and modeling. By early 2012 I was ready to dig back in. I had just bought a pallet of new Dell T7500 dual 6-core CPU workstations and had promised my firstborn to the electric company (these things consume a lot of power at full tilt, and I run them at about 95% when doing a regression/classification).

[B]Algo Details[/B]
The algorithm that we’ll use for our sample was created in 2012 using several years of EUR/USD 15m data. The two largest portions of the data were used for training and validation and the balance was left for out-of-sample testing. The out-of-sample data runs from June of 2011 to January of 2012. We’ll refer to it now simply as “2012”.

The algorithm was applied to each 15-minute period across the sample data, and a forecast of where the market would be in 24 hours was recorded. A trade was assumed at that time, with a 15-minute delay to duplicate real-world market impact, and a 3-pip spread was applied to account for transaction costs and slippage.

Standard lots are assumed with a value of $10 per pip. Single-trade leverage at the time of the trade is 1:1 and no overlapping trades are allowed - i.e. a trade is taken at 8:00 AM, held for about 24 hours, and (for this test) closed; a new trade is then opened, even if it is in the same direction as the previous trade.
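For anyone who wants to picture the mechanics, here is a minimal sketch of that accounting in Python (pandas). The simulate() helper, the column names and the +1 / -1 signal series are illustrative placeholders I’ve made up, not the actual model.

[code]
import pandas as pd

SPREAD_PIPS = 3                      # assumed round-trip cost: spread + slippage
PIP_VALUE = 10.0                     # standard lot: $10 per pip on EUR/USD
DELAY = pd.Timedelta(minutes=15)     # gap between the forecast and the assumed fill
HOLD = pd.Timedelta(hours=24)        # roughly 24-hour holding period

def simulate(bars: pd.DataFrame, signals: pd.Series) -> pd.DataFrame:
    """bars: 15m closes indexed by GMT timestamp (a 'close' column).
    signals: +1 / -1 forecasts indexed by the bar they were produced on.
    One position at a time: a new trade only opens once the prior one has closed."""
    trades, busy_until = [], bars.index[0]
    for t, direction in signals.items():
        if t + DELAY < busy_until or t + DELAY + HOLD > bars.index[-1]:
            continue                              # still in a trade, or out of data
        entry_t, exit_t = t + DELAY, t + DELAY + HOLD
        entry_px = bars['close'].asof(entry_t)    # fill 15 minutes after the signal
        exit_px = bars['close'].asof(exit_t)      # close about 24 hours later
        pips = direction * (exit_px - entry_px) * 10_000 - SPREAD_PIPS
        trades.append({'entry_time': entry_t, 'pnl_usd': pips * PIP_VALUE})
        busy_until = exit_t
    return pd.DataFrame(trades)
[/code]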

[B]The Apology[/B]
If all of that bored you to tears I apologize; I just wanted to give you some background on what the model was based on and how it was measured. I’ll try to be brief from here on out :slight_smile: - JP

Okay, let’s start at the top of the GMT trading day. This first chart assumes we take a trade each day at 0:00 GMT based on the algorithm’s forecast. Profit and loss is based on a nearly 24-hour holding period; then the trade is closed out and a new trade is taken when the model is run again. There is no leverage used and trades are not allowed to overlap.

Not a bad start for the model: nearly a 20% return for 6 months of trading without any leverage. Initially we might assume the trade model is a good one and every entry around the clock would look similar. You can probably already tell from that setup that it’s not the case. Let’s carry on with the balance of our initial top-of-the-hour checks: 1:00 GMT, 2:00 GMT, 3:00 GMT and so on.
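If you want to reproduce the hour-by-hour comparison, the same simulation can simply be re-run once per top-of-the-hour entry and the totals lined up side by side. A rough sketch, reusing the hypothetical simulate() helper from the earlier snippet:

[code]
def pnl_by_entry_hour(bars: pd.DataFrame, signals: pd.Series) -> pd.Series:
    """Total P&L for a once-a-day entry at each GMT hour, no leverage, no overlap."""
    totals = {}
    for hour in range(24):
        # keep only the forecast made at the top of this hour each day
        daily = signals[(signals.index.hour == hour) & (signals.index.minute == 0)]
        trades = simulate(bars, daily)
        totals[hour] = trades['pnl_usd'].sum() if not trades.empty else 0.0
    return pd.Series(totals, name='total_pnl_usd').sort_values(ascending=False)
[/code]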



Picking up at 4:00 GMT




Picking up at 8:00 GMT




Now at 12:00 GMT




Now at 16:00 GMT




Finally at 20:00 GMT




Where we see something interesting…

It looks as if between 22:00 and 2:00 there is a period of higher efficiency in the model, so let’s dig into those 15-minute segments to verify.
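The drill-down is the same idea at a finer resolution: keep only the forecasts whose timestamp falls inside that window and bucket them by the exact 15-minute slot. Again just a sketch built on the hypothetical helpers above:

[code]
def pnl_by_quarter_hour(bars, signals, hours=(22, 23, 0, 1)):
    """Total P&L per 15-minute entry slot inside the 22:00-02:00 GMT window."""
    totals = {}
    for hour in hours:
        for minute in (0, 15, 30, 45):
            slot = signals[(signals.index.hour == hour) & (signals.index.minute == minute)]
            trades = simulate(bars, slot)
            totals[f'{hour:02d}:{minute:02d}'] = trades['pnl_usd'].sum() if not trades.empty else 0.0
    return pd.Series(totals, name='total_pnl_usd')
[/code]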




Picking up at 23:15




Now at 0:15 GMT




Finally at 1:15




So what’s the takeaway? Two things. First, there appears to be a period of a couple of hours where the forecasting algorithm really was quite accurate, and it was a broad enough window that a large amount of money could have been moved in and out very efficiently without impacting returns much. But there is a HUGE chunk of the day where this thing isn’t worth the electric bill it generated, and that should be a red flag to all of us. At the end of the day, technical analysis, fundamental analysis, computer models, etc. are just a bunch of different ways of saying the market may go up or the market may go down.

It appears “when” those signals are generated may actually impact the quality of the signal and the likelihood that it’s actually correct. For most traders this would be difficult to duplicate because you need a lot of trade data to get an accurate verification. My total dataset here was 16,000+ trades. It’s easier to draw a conclusion when you have a large data set than when your set is small. But it’s probably something we should be thinking about and making an attempt to log and measure as time goes by.
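If you want to start logging this on your own trades, nothing fancy is required: a CSV of entry timestamps and results is enough to bucket performance by hour of day. A minimal sketch (the file name and column names are assumptions, not a standard):

[code]
import pandas as pd

# hypothetical trade log: one row per closed trade, with a GMT entry timestamp
log = pd.read_csv('trade_log.csv', parse_dates=['entry_time'])

by_hour = log.groupby(log['entry_time'].dt.hour)['pnl_usd'].agg(
    trades='count',                        # sample size per hour - small buckets mean weak conclusions
    total='sum',
    average='mean',
    win_rate=lambda p: (p > 0).mean(),
)
print(by_hour.sort_values('total', ascending=False))
[/code]

- JP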

There was sure a lot of work done to get those results, so a big thanks for that.

But as retail traders, daily liquidity is more than enough to get us in and out without any problem.
On the other hand, a hedge fund’s capital moving in and out does leave a trace, and those traces can never be tested since your algo can’t generate signals based on them. Does that make sense?

I took from your analysis that speculating during a certain time of the day gives us an edge, correct?

Cheers

I don’t know. What it signaled to me when I first did the work was that there were clearly times in the day when I didn’t want to trade that algorithm. That got me thinking about Technical Analysis, and I’ve wondered, but never confirmed (it would be really hard to do), whether market conditions during specific times of the day also impact those signals.

Most of that was based on the fact that my model’s data-set, like almost all machine learning models, was full of TA indicators. If the mathematical formula the GP kicks out based on those readings doesn’t work at certain times - which it clearly doesn’t - isn’t it really the data-points (the Technical Indicators themselves) that are failing, or giving false signals, during those market hours?

This is the kind of stuff that keeps me up at night. - JP

In this case it wouldn’t work for most hours of the day, which is amazing.

If you could prove that deploying Technical indicators at certain times of the day increases accuracy, then you are well on your way to a breakthrough.

Cheers

The only problem with a breakthrough is that as soon as it’s out in the open it will get widely adopted and then lose its edge. Implementation leads to nullification. This work of mine dates back to 2007; I’m sure others in the institutional world found it before then, and the market has compensated for it by now.

It’s kind of like T/A - there is a lot of empirical evidence that Technical Analysis worked pretty well in the 70’s and fairly well in the 80’s. By the 90’s it was hard to produce a study that showed it had forecasting value without some form of computational optimization. The same thing happened with Genetic Programming (actually all machine learning): it worked as a stand-alone product for quite a while, but then quit. Now we are forced to push it in new directions to make it useful. In three or four years I’ll write a post about what I’m doing today. :slight_smile:

That last sentence was a good one.

I take it you are working on the next breakthrough then.

Cheers

Working on the 2016 technology… Got to stay one step ahead. :wink: