In the EUROPA trial, 12,218 patients were randomized to receive perindopril or placebo. 9.9% of the participants in the placebo group died or had a heart attack, whereas only 8% in the perindopril group did: an absolute risk reduction of roughly 2%. You would have to treat about 50 patients with the drug for one to have a better outcome.
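To make the arithmetic concrete, here is a minimal sketch in Python using the event rates quoted above (the variable names are mine, not the trial's):

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT)
# from the EUROPA event rates quoted in the text.
placebo_rate = 0.099     # death or heart attack in the placebo group
perindopril_rate = 0.080  # death or heart attack in the perindopril group

arr = placebo_rate - perindopril_rate  # absolute risk reduction
nnt = 1 / arr                          # number needed to treat

print(f"ARR = {arr:.1%}")   # 1.9%, i.e. roughly 2%
print(f"NNT = {nnt:.0f}")   # 53, i.e. roughly 50 patients per good outcome
```

The NNT is simply the reciprocal of the absolute risk reduction, which is why a ~2% absolute effect translates into treating ~50 patients for one benefit.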

According to current EBM standards, the EUROPA study provided very good evidence supporting the effects of perindopril because the study was large, randomized and double-blind. On this basis the authors of the study recommended that “all patients with coronary heart disease” should use the drug.

However, there are several problems with this and other large studies with small effects that are overlooked by standard EBM critical appraisal methods.

Exaggerating effect sizes

The authors of the EUROPA study reported the misleading relative effect size of 20%, which sounds much more impressive than the 2% absolute reduction, and most people cannot interpret the difference between relative and absolute risk. My next blog will describe a method for teaching the difference between absolute and relative risk that, hopefully, you will never forget.
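The two figures come from the same pair of event rates; they just use different denominators. A short sketch (again using the rates quoted above):

```python
# Relative versus absolute risk reduction from the same EUROPA event rates.
placebo_rate = 0.099
perindopril_rate = 0.080

arr = placebo_rate - perindopril_rate        # absolute: difference in rates
rrr = arr / placebo_rate                     # relative: difference scaled by the baseline rate

print(f"Absolute risk reduction: {arr:.1%}")  # 1.9%
print(f"Relative risk reduction: {rrr:.0%}")  # 19%, commonly rounded to 20%
```

Dividing a small absolute difference by a small baseline rate is what inflates 2% into "20%" in the headline figure.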

The paradox of large studies

The larger the effect of the treatment, the smaller the required trial: you don’t need thousands of patients to realize that general anesthesia, the Heimlich maneuver, or external defibrillation work. So while large trials sound impressive (and for methodological reasons they are), their very size indicates that the effect being sought is small. To be sure, small effects are sometimes important, for example when they involve reducing the chances of dying. Yet small apparent effects are also more likely to arise from hidden biases.
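This paradox can be illustrated with the standard two-proportion sample-size approximation (a textbook formula, offered here as a rough sketch, not the calculation EUROPA's statisticians necessarily performed):

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate patients needed per arm to detect event rates p1 vs p2
    with a two-proportion z-test. A textbook approximation, not a
    substitute for a proper power analysis."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_a + z_b) ** 2) * variance / (p1 - p2) ** 2

# A EUROPA-sized absolute effect (9.9% vs 8.0%) needs thousands per arm...
print(round(n_per_group(0.099, 0.080)))  # roughly 3,500 per arm

# ...while a dramatic effect (say 90% vs 50% success) needs only a handful.
print(round(n_per_group(0.90, 0.50)))    # a couple of dozen at most
```

The required sample size scales with the inverse square of the absolute difference, which is why trials of tens of thousands of patients are, almost by definition, hunting for small effects.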

Publication bias

Most trials remain unpublished, especially those with negative results. For instance, Turner et al. identified 74 antidepressant trials registered with the FDA. Of the 38 with positive results, 37 were published. Of the 36 with negative or questionable results, only 14 were published. Unpublished studies are notoriously difficult to obtain and are often not included in systematic reviews, which makes the results of those reviews questionable. In a real example of how this can influence treatment decisions, Carl Heneghan and colleagues conducted a detailed investigation of the evidence for Tamiflu for preventing and treating influenza in healthy adults. A 2006 review of the drug concluded it had some effectiveness, and on this basis billions of pounds of taxpayer money were spent on the drug. However, it turned out that the review did not contain all the trials because the sponsor did not release them.
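The Turner et al. counts show how selective publication distorts what a reader of the literature sees. A quick sketch of the arithmetic (the counts are those quoted above):

```python
# How selective publication skews the apparent success rate,
# using the antidepressant trial counts reported by Turner et al.
positive_registered, positive_published = 38, 37
negative_registered, negative_published = 36, 14

registered_positive_share = positive_registered / (positive_registered + negative_registered)
published_positive_share = positive_published / (positive_published + negative_published)

print(f"Positive share of all registered trials: {registered_positive_share:.0%}")  # 51%
print(f"Positive share of published trials:      {published_positive_share:.0%}")  # 73%
```

A reader relying only on the published literature would conclude that nearly three-quarters of the trials were positive, when in fact barely half were.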

When all the trials were finally obtained, the evidence supporting Tamiflu’s benefits proved questionable, and it became clear that side-effects were far more common than initially believed.

Conflict of interest

Biased researchers can influence study results. Lundh et al. recently found that trials sponsored by the manufacturing company are more likely to report favourable results than other studies. In a more dramatic example, Heres et al. found that olanzapine beat risperidone, risperidone beat quetiapine, and quetiapine beat olanzapine.

What predicted the success? You guessed it, the sponsor.

Whoever made the drug in the trial got the result they wanted. In both of these examples the industry-sponsored research was not lower quality according to standard EBM criteria for appraising evidence. Instead, ‘hidden biases’ had crept in.

I am not anti-industry; in fact, I know that no study is free from all bias. Yet we have strong evidence that industry-sponsored research exaggerates the sponsor’s treatment benefits, and this needs to be considered when interpreting such research.

The EUROPA study revisited

So how might these hidden biases have influenced the EUROPA trial? James Penston notes the following:

  • All five members of the EUROPA executive committee declared a conflict of interest.
  • 10.5% of the patients in the run-in phase of the trial were excluded, mostly for reasons related to treatment with perindopril.
  • A subsequent study of a similar drug failed to replicate the effect.
  • 23% of the perindopril group dropped out of the trial, whereas 21% of the placebo group dropped out.

These factors might reasonably lead us to question whether the effects in the EUROPA study are believable. Yet none of the biases discussed here are adequately addressed by common EBM critical appraisal methodology, and something needs to be done about it.