
“The notion of systematic review – looking at the totality of evidence – is quietly one of the most important innovations in medicine over the past 30 years.” – Ben Goldacre, 2011

What would make someone speak so highly of this type of research? Let us answer that by considering a case study, which itself begins by asking another question:

Should a patient who has suffered a sudden and severe injury to the brain, for example through a road traffic accident, be given a corticosteroid? 

Such injuries are unfortunately not uncommon and are associated with high levels of disability and death, mainly because of consequent brain swelling. Weak evidence from the early 1960s suggested that patients with swelling in the brain might benefit from being given a corticosteroid. However, clinical uncertainty meant that their use in these circumstances was not universal.

Part of this uncertainty was explained by variation in the results obtained from randomised controlled trials. A systematic review of 13 randomised controlled trials, published in the Cochrane Library and the BMJ, echoed this uncertainty. It pointed out that, despite aggregation of all the available published trial evidence, the balance of benefit to harm from using corticosteroids was not clear. The conclusion was particularly concerning, given that it meant that some patients who were given corticosteroids might fare worse because of it, while other patients were potentially being denied a treatment that could be of real benefit to them.

The authors of the systematic review highlighted another important point: a large, well designed and executed, multicentre trial was needed. They took up this challenge themselves. By 2004, they had published the results of the “corticosteroid randomisation after significant head injury” or “CRASH” trial. The multinational trial, funded by the UK Medical Research Council, randomised 10,008 participants with head injuries to either a corticosteroid or a placebo within 8 hours of the injury. For many, the results were unexpected but unarguable: within 2 weeks of being given a corticosteroid for a brain injury, patients had an increased risk of death compared with placebo.

Importantly, the CRASH study investigators immediately updated their previous systematic review, placing the results of their new study in the context of the previous data. The size of the CRASH study meant that it had a powerful impact on the overall meta-analysis result. It was clear that when combined with the previous studies, corticosteroids appeared to be associated with a higher risk of death. The authors subsequently updated their Cochrane review, concluding that “steroids should no longer be routinely used in people with traumatic head injury”.

The CRASH trial has been described as exemplary. It was prospectively registered, publicly funded and multicentre. Perhaps most importantly, it began with a systematic review, which highlighted an important clinical uncertainty and therefore a clear need for new research. It also ended by placing its own results in the context of a systematic review. Many funders of clinical trials now recognise the importance of this practice. For example, the UK National Institute for Health Research (NIHR) requires that any request for support to conduct a new clinical trial should include clear justification of need, demonstrated by reference to systematic reviews of relevant existing evidence, and a search for existing ongoing trials. Audits of NIHR-funded trials have shown a high degree of adherence to this standard. However, the practice is far from universal, and there is room for improvement.

Therefore, all funders of clinical trials should make it a prerequisite that investigators not only use a systematic review to inform their trial, but also complete their trial with a demonstration of how the new data add to the existing evidence base. As the CRASH study highlighted, a failure to consider this would be unscientific and unethical.

Kamal R. Mahtani is an NHS GP, Director of the EBHC MSc in Systematic Reviews, and Deputy Director of the Centre for Evidence-Based Medicine

Declared interests

I have received funding from the NHS National Institute for Health Research to conduct independent research. I have no other relevant competing interests to declare.


The views expressed in this commentary represent the views of the author and not necessarily those of his host institution, the NHS, the NIHR or the Department of Health.


Acknowledgements

With thanks to David Nunan, Jeffrey Aronson and Iain Chalmers for helpful discussions.