People vary hugely in the extent to which the psychological differences between demo accounts and real-money accounts affect their behaviour, but for some (and quite often, I think, the ones who are least prepared for it), these factors are clearly very significant indeed.
And then there’s also the (less often discussed) selection-bias problem, which I’ll try to explain with an analogy.
You’re a pharmaceutical company, and you’ve spent a few tens of millions researching and developing a new treatment for condition X, which you now want to bring to market. Among the hurdles you have to pass is the production of some double-blind, randomised clinical trials showing that your new product works better than the standard, already-available, lower-cost equivalents. So you set about commissioning a whole series of clinical trials.
To successfully negotiate the various administrative requirements, you need at least one independently audited clinical trial result of sufficient size for it to be known (according to the industry-standard, pre-defined statistical parameters) that its result shows with 95% certainty (or “p&lt;0.05”, as statisticians say) that the outcome-difference demonstrated is actually due to the superiority of the product itself, and not due to chance or random findings.
Simplifying slightly, if your new drug is actually no better than the currently available one, then out of every 20 clinical trials you commission, on average, [U]one[/U] will still [I]apparently[/I] demonstrate that it’s better. That’s the one you’ll (perhaps) publish, and submit to the regulatory authorities to get your product license.
Traders often do something very similar, on demo accounts.
But it’s themselves they’re fooling, rather than a country’s regulatory authorities.
They try 20 different methods (or one method 20 times, with only slight variations) and eventually (perhaps even well before 20 attempts) they find one that “apparently works”. They think to themselves “This is the one: now I’ve cracked it”, open a real account, and begin trading the method with real money. And guess what? It often doesn’t work, because their 95% certainty wasn’t really enough: the results, which [B]appeared[/B] to prove a causal relationship between the method and the outcome, were actually random all along … [I]however methodically they tested it[/I].
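You can see the arithmetic of this for yourself with a tiny simulation (a sketch only; the trade counts and test parameters here are illustrative assumptions, not anything from a real trading platform). It repeatedly tests 20 “methods” that have [I]no[/I] real edge (every trade is a coin-flip) and counts how often at least one of them still clears a one-sided 95%-confidence hurdle on demo:

```python
import random

random.seed(1)

N_TRADES = 200       # demo trades per method test (illustrative)
N_METHODS = 20       # number of edge-less methods tried
Z_CUTOFF = 1.645     # one-sided z threshold for "p < 0.05"

def demo_test() -> bool:
    """Test one method with no real edge; return True if it
    'apparently works' at 95% confidence purely by chance."""
    wins = sum(random.random() < 0.5 for _ in range(N_TRADES))
    win_rate = wins / N_TRADES
    se = (0.25 / N_TRADES) ** 0.5          # std error of win rate when p = 0.5
    z = (win_rate - 0.5) / se
    return z > Z_CUTOFF

# Repeat the whole "try 20 methods on demo" exercise many times and
# count how often at least one random method looks like a winner.
runs = 2000
hits = sum(any(demo_test() for _ in range(N_METHODS)) for _ in range(runs))
print(f"Runs where at least one random method 'worked': {hits / runs:.0%}")
```

Since each null test “passes” about 5% of the time, the chance that at least one of 20 passes is roughly 1 − 0.95²⁰ ≈ 64% — so well over half the time, a trader testing 20 worthless methods will find one that “works”.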
So, there’s the additional problem that something that worked well on demo doesn’t work so well (i.e. loses money) on a real-money account, simply because what looked like “evidence” was actually random.
Nassim Nicholas Taleb has discussed this whole syndrome - including with specific reference to trading (that’s his own professional background) - in his hugely important book [I]Fooled By Randomness[/I].
People wanting to avoid this [U]very common[/U] scenario will benefit from reading Michael Harris’s book [I]Profitability and Systematic Trading[/I].