To the bottom of it

Greetings, I wanted to ask your opinion on something.

If you test a trading strategy on 25 pairs over a period, say 2016 - 2017, and find positive results, i.e. wins for the majority of the pairs but also some losing pairs.

You really have to find out why the other pairs lost before moving on with the strategy, wouldn't you agree? It might be the case that the market conditions for 2016 - 2017 just so happened to be favourable for the winning pairs, which might not always be the case moving forward.

I have a situation like that now and I'm just wondering if I should just put some risk management behind it; something simple like cutting lot sizes to 0.1 if the win rate dips on a pair, or whether it's better to get to the bottom of the problem with those losing pairs.
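The simple rule described here (drop the lot size to 0.1 when a pair's recent win rate dips) could be sketched roughly like this. The window length, win-rate threshold, and lot sizes are illustrative assumptions, not anything from the strategy itself:

```python
from collections import deque

class WinRateThrottle:
    """Track recent trade outcomes per pair and scale the lot size down
    when the rolling win rate falls below a threshold.
    All parameter defaults are illustrative assumptions."""

    def __init__(self, window=20, min_win_rate=0.5,
                 normal_lots=1.0, reduced_lots=0.1):
        self.window = window
        self.min_win_rate = min_win_rate
        self.normal_lots = normal_lots
        self.reduced_lots = reduced_lots
        self.outcomes = {}  # pair -> deque of 1 (win) / 0 (loss)

    def record(self, pair, won):
        # Keep only the last `window` outcomes for this pair.
        self.outcomes.setdefault(pair, deque(maxlen=self.window)).append(1 if won else 0)

    def lot_size(self, pair):
        trades = self.outcomes.get(pair)
        if not trades or len(trades) < self.window:
            return self.normal_lots  # not enough history yet
        win_rate = sum(trades) / len(trades)
        return self.reduced_lots if win_rate < self.min_win_rate else self.normal_lots
```

A throttle like this caps damage from a pair going cold, but, as discussed below, it treats the symptom rather than explaining why the pair loses.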

Hi ropunzel,
Did you try backtesting it for the years before 2016? If yes, how many years?
You have 25 pairs, which might add more in fees; why not diversify across just 5 pairs?
Just go with the pairs you're more certain your strategy works on; we don't know exactly what your trading and strategies are.
If you have an edge and you've backtested it, there's no need to worry about risk management beyond the choice of pairs.


Thanks for the post. You just might be right.

Unless you’re eventually going to automate it all, it’s maybe not a very good plan going forward, to be dealing with 25 pairs?

I would look at the best performing 5 (or 6, or 7) pairs from the results of the year you’ve already researched and backtest each over a decade’s historical data, before proceeding further.

Thanks for the post. Yes, auto trading, but definitely the signal correlations will have to be modelled.

I'm starting to think that just picking the winning tickets might be the way to go. I might never get to the crux of the issue with the others. Maybe just cross-validate and look at the errors for all the pairs using different parameters for R:R etc. to find the optimal config for each one.

Choose a threshold for the prediction error and run with the pairs that consistently beat it.
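That selection rule could be sketched as a simple filter: keep only the pairs whose prediction-error magnitude stays within the threshold in every test period. The pair names, error figures, and threshold below are made-up illustrations:

```python
def pairs_beating_threshold(errors_by_pair, max_abs_error):
    """Return the pairs whose prediction error stays within
    `max_abs_error` (in magnitude) across *all* test periods."""
    return sorted(pair for pair, errors in errors_by_pair.items()
                  if errors and all(abs(e) <= max_abs_error for e in errors))

# Hypothetical per-period prediction errors (one value per test year):
errors = {
    "CADCHF": [-0.09, -0.07, -0.08, -0.06],  # consistently small error
    "AUDCHF": [-0.37, -0.40, -0.35, -0.38],  # consistently large error
}
```

With a 10% threshold on this toy data, only CADCHF would make the cut.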

One of the signals I am using is the change in the StDev. I had a quick look at the decay in the autocorrelations for successive bars 1 to 20, and there does appear to be a relationship between the decay rate and the win rate for that pair and the bar periods in the StDev & MA.
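A rough sketch of that check, assuming a rolling StDev and plain sample autocorrelation at lags 1 to 20. The synthetic random-walk prices and the 14-bar window are illustrative stand-ins, not the poster's actual data or settings:

```python
import math
import random

def rolling_std(values, window):
    """Population standard deviation over a trailing window."""
    out = []
    for i in range(window - 1, len(values)):
        chunk = values[i - window + 1 : i + 1]
        mean = sum(chunk) / window
        out.append(math.sqrt(sum((x - mean) ** 2 for x in chunk) / window))
    return out

def autocorrelation(series, lag):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    if var == 0 or lag >= n:
        return 0.0
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# Synthetic random-walk "prices", purely for illustration.
random.seed(42)
prices = [100.0]
for _ in range(999):
    prices.append(prices[-1] + random.gauss(0, 0.5))

stdev = rolling_std(prices, 14)                      # rolling StDev signal
changes = [b - a for a, b in zip(stdev, stdev[1:])]  # bar-to-bar change
decay = [autocorrelation(changes, lag) for lag in range(1, 21)]
```

Plotting (or eyeballing) `decay` against the lag shows how quickly the signal's memory dies off, which is the relationship being described.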

Pretty much, the predictive quality starts to decline exponentially after roughly 3 future bars, and I think the trick may be to find a corresponding TP & SL appropriate for 3 bars forward, given what the ATR is doing. So rather than trying to work it out in theory, I could just try different TP & SL combinations, adjusting the R:R to find what works best.
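The brute-force idea above could be sketched as a small grid search over TP and SL expressed as ATR multiples, scoring each combination by its expectancy in R. The `win_rate_fn` here is a placeholder for a real backtest of each combination; everything in this snippet is an illustrative assumption:

```python
import itertools

def expectancy(win_rate, tp_mult, sl_mult):
    """Expected reward per trade (in ATR multiples) for a given win
    rate, take-profit multiple, and stop-loss multiple."""
    return win_rate * tp_mult - (1 - win_rate) * sl_mult

def grid_search(win_rate_fn, tp_grid, sl_grid):
    """Try every TP/SL combination; `win_rate_fn(tp, sl)` stands in for
    the backtested win rate of that combination. Returns the best
    (expectancy, tp, sl) triple."""
    best = None
    for tp, sl in itertools.product(tp_grid, sl_grid):
        score = expectancy(win_rate_fn(tp, sl), tp, sl)
        if best is None or score > best[0]:
            best = (score, tp, sl)
    return best
```

The grid values themselves would come from what the ATR is doing over the ~3-bar horizon mentioned above.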


It always seems so logical to just eliminate the losers, then you’d just be left with the winners.

Also, setting a time limit and a TP will make sure that trades that started out as winners don't become losers.

So using this plan assume you’re setting up a new football team. All your new squad are 18-year old rookies. So you ditch the guys who come in the bottom third of your 40-metre sprints. Then you sack every player as they reach 19. Then you sack every player who’s managed to score 3 goals. How’s your team looking?


Greetings, thanks for the post. I don't think I understand your analogy.

That’s OK.

Think of each trade as a player in your team.

If you eliminate the players (trades) who will probably be losers, you also eliminate some who will actually be great players (trades) one day; it's just that, using even the best selection criteria you have today, you can't see how they will develop.

If you cut the players (trades) which haven’t become winners in a short period, again, you’ve cut some guys (trades) who will be stars one day, you just didn’t give them time.

If you sack the players who are most successful (the trades which are winners and do as well as you had ever dreamed), they will definitely not score any more goals for you (these trades will definitely not make any more profit for you).

So you end up with a very average team that can only hope to get draws (an account which just about breaks even).


Thanks for the post. I understand your point. Let me say, if I were to create a model and test it in this sequence with the 25 pairs:

Model (2009 - 2010 - 2011) Test (2012)
Model (2010 - 2011 - 2012) Test (2013)
Model (2011 - 2012 - 2013) Test (2014)
Model (2012 - 2013 - 2014) Test (2015)

and consistently, on each of the 4 test periods, AUDCHF came up with a prediction error above -37% on a Win Rate Target of 65% and R:R of 2:1, whereas my good friend CADCHF came up with a prediction error of under -9% on each of the 4 tests.

Surely that would mean CADCHF in the starting line-up and AUDCHF on the bench, until AUDCHF could demonstrate some value in training.
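The rolling model/test scheme listed above (train on three years, test on the next, then roll forward one year) can be sketched as a simple walk-forward split generator. The year range follows the post; the function itself is a generic illustration:

```python
def walk_forward_splits(years, train_len=3, test_len=1):
    """Return (train_years, test_years) windows rolled forward one step
    at a time, e.g. ([2009, 2010, 2011], [2012]),
    ([2010, 2011, 2012], [2013]), and so on."""
    splits = []
    for start in range(len(years) - train_len - test_len + 1):
        train = years[start : start + train_len]
        test = years[start + train_len : start + train_len + test_len]
        splits.append((train, test))
    return splits

# Years 2009..2015 reproduce the four Model/Test rows in the post.
splits = walk_forward_splits(list(range(2009, 2016)))
```

Running each pair through every split, and benching those whose test-period error blows out in any window, is exactly the starting-line-up selection described.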


Is the more important question not to understand why AUDCHF doesn't work with your model? Because at some point the market conditions that negatively affect the performance of AUDCHF WILL shift to the other pairs that are currently performing positively.

You're a smart person - we all know the markets are cyclical in nature. I'd go about understanding what variables affect you, and use these as a filter on all pairs. The short-term solution is to strike pairs off the 'active trading list' that don't perform well in your model; however, market cycles can take 10 years to complete. A three-year model doesn't cut the mustard.


I think you're right. The problem now is not knowing the property that determines those outcomes. I think finding it will be the long-term objective, i.e. before going live, but in the interim I would like to start forward-testing with the others.

And that's the million-dollar question that all traders need to solve when treating this as a long-term venture for constant profits. Answering it actually removes the need to adapt to the markets reactively, because the filter(s) you come up with will be exactly what makes you adapt to changing market conditions in a proactive approach rather than a reactive one.


Completely agree with you. But what if you find a central property whose future state you cannot predict?

Well, that's always a risk we have to accept as part of this business. However, if you can at the very least maximize the performance of your adaptation variables on the past price data we have been provided with, then you certainly reduce the number of times you have to adapt with a reactive approach. No one wants to trade in a reactive style, adapting in real time with no past data on what the expectancy is likely to be, right? We all know we can't be right 100% of the time, so accepting this from the get-go could be a good state of mind to follow, I would imagine.



Found this interesting:

Hey, who knew: follow the trend.
