We are all scientists. Every day we conduct our own experiments, even if we don’t realise it. I recently conducted one whilst walking through Oxford on my way to catch the train home. The type of experiment I’m referring to forms the basis of all human and animal developmental learning, and it simply involves – observing things. Walking home that day I noticed people emerging from various high street retailers with bags of shopping. Tubes of wrapping paper sticking out the top supported my intuition and perceptions, leading me to a conclusion about what it was I was observing – Christmas shopping!

Observation has been a principle of the empirical scientific method since Aristotle, who viewed scientific inquiry as ‘…progression from observations to general principles and back to observations’. Alexander Fleming discovered penicillin accidentally by observing the attenuation of Staphylococcus aureus caused by mould in a petri dish he was about to throw out. Albert Bandura’s experiments showed how children can learn by observing other people’s behaviour, something I can now relate to as I watch (sorry, observe) the development of my 10-month-old daughter.

Nowadays, observational studies (that is, studies that only observe their subjects without intervening) form a large branch of scientific study. It was with interest, then, that I read the article in Significance magazine by Stanley Young and Alan Karr with the provocative opening line “Any claim coming from an observational study is most likely to be wrong”. Perhaps a more important question is how I came to be reading a statistics magazine in the first place, but that’s for another blog.

Returning to the article, the authors claim there is sufficient evidence to say that any claim coming from observational studies is likely to be wrong. What? How could this be? Are you telling me that those people were not doing their Christmas shopping? It’s important here to note another key principle of scientific inquiry – observations and experiments should be repeatable, and when they are repeated they must give the same answer. When stating that claims from observational studies will be wrong, Young and Karr mean wrong in the sense that they will not replicate if tested rigorously.

So my small-sample, observational cohort study of Oxford Christmas shoppers won’t hold up if tested more rigorously? Thinking about it from an evidence-based medicine point of view, I can see how this would be the case. Christmas is not a universal cultural phenomenon; some people’s shopping would have had nothing to do with Christmas. Was the wrapping paper definitely going to be used to wrap Christmas presents? I certainly didn’t apply the RAMboMAN model to my ‘study’.

Image: Rambo doll and diagram illustrating the RAMboMAN concept.

Bias would have been present, but I’m sure this is obvious to you all. My ‘study’ was clearly neither rigorous nor scientific. However, Young and Karr argue that even in observational studies employing rigorous scientific methodologies, claims fail to replicate. For example, of 49 claims from highly cited studies, 14 either failed to replicate entirely or showed a reduced magnitude of effect (the latter phenomenon relating to regression to the mean; see Carl Heneghan’s blog on this topic). Six of the 49 studies were observational studies, five (83%) of which failed to replicate.

The authors conducted their own (informal) study and found that, across 12 randomised trials assessing 52 observational claims, not a single claim (0 out of 52, a 100% failure rate) replicated in the same direction as the original observational claim. Five claims were actually significant in the opposite direction! Some might consider these findings worrying, particularly as the problem may be systemic throughout the medical scientific literature.
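For readers who like to see the arithmetic, here is a minimal Python sketch that reproduces the proportions quoted above; the counts are taken from the article, while the exact (Clopper-Pearson) confidence interval for the 0-out-of-52 result is my own purely illustrative addition.

```python
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper

# Figures quoted above from the review of highly cited studies
print(f"Highly cited claims failing to replicate: 14/49 = {14/49:.0%}")
print(f"Observational claims failing to replicate: 5/6 = {5/6:.0%}")

# Young and Karr's informal study: 0 of 52 observational claims replicated
low, high = clopper_pearson(0, 52)
print(f"Replication rate: 0/52 (95% CI {low:.0%} to {high:.0%})")
```

Even allowing for sampling error, a 0-out-of-52 result is consistent with a true replication rate of at most about 7%, which is essentially the authors’ point.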

Current UK recommendations for physical activity are underpinned by data from observational studies, and their estimated effects could therefore be confounded. The observational data cited indicate huge benefits of being more physically active, including a 20-30% risk reduction in mortality, a 20-35% lower risk of cardiovascular disease, and a 30-40% reduced risk of type 2 diabetes, to name but a few. However, such findings might not be replicated in clinical trials. In addition, information on the ‘formulation’ and ‘dosage’ of physical activity is scarce, as is information on to whom it is best prescribed; this is where more evidence is needed.
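Those headline figures are relative risk reductions, which can sound more impressive than the absolute benefit they imply. The short sketch below, again only illustrative, converts them into absolute terms and a number needed to treat; the 10% baseline risk is an assumption made up for illustration, not a figure from the studies cited.

```python
# Purely illustrative: the baseline risk is assumed, not taken from the cited data.
baseline_risk = 0.10  # hypothetical baseline risk of each outcome

cited_relative_risk_reductions = {
    "mortality": 0.20,                # lower end of the quoted 20-30% range
    "cardiovascular disease": 0.20,   # lower end of the quoted 20-35% range
    "type 2 diabetes": 0.30,          # lower end of the quoted 30-40% range
}

for outcome, rrr in cited_relative_risk_reductions.items():
    arr = baseline_risk * rrr   # absolute risk reduction
    nnt = round(1 / arr)        # number needed to treat (to benefit)
    print(f"{outcome}: RRR {rrr:.0%} -> ARR {arr:.1%}, NNT ~ {nnt}")
```

Whether the real trade-off looks more like an NNT of 30 or 300 depends on the true baseline risk and on whether the observational estimates survive scrutiny in randomised trials.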

Don’t get me wrong, I am a proponent of physical activity and exercise for the prevention and treatment of chronic disease. It’s just that I’m basing this on observations, intuition and perception, and we have seen the problems this can cause. On the flip side, randomised trials rarely reflect ‘real world’ settings. I’m keen to find answers based on the best available evidence. There have been hundreds of randomised trials of physical activity in numerous disease conditions. However, these data have not been systematically scrutinised in the same way as the observational data.

I believe such scrutiny will provide a more definitive answer as to the preventative and treatment effect of physical activity in chronic disease. It may also provide information on the dosage that elicits the best (and worst) outcomes. Will there be sufficient evidence to assess if claims from observational studies hold true? We will have to wait and s….observe.