A large part of being a scientist is venturing into the unknown. You come up with hypotheses and test them through experiments. The problem is that, more often than not, experiments don’t give you the BIG outcome you were perhaps hoping for, the one that might revolutionise clinical practice tomorrow; instead they give marginal or equivocal (not significant) results.

Occasionally, though, an experiment might give you an unexpected outcome that you neither planned for nor were seeking, but that might make the work more appealing for, say, publication. Wouldn’t it be nice to publish just that result and not the others that didn’t work, were marginal or showed no effect? You could even make out that this unexpected result was the one you were looking for all along. Tempting, isn’t it?

No, don’t go there.

In fact, what you’d be doing is introducing bias, specifically selective outcome reporting, into your experiment and undermining the validity of your results.

What’s the harm?

Selective outcome reporting can waste resources and potentially harm patients. Here’s why. Let’s say, as a very simple example, you are doing a systematic review of RCTs of a new blood pressure lowering drug; let’s call it “SwitchBP”.

[Graphic of a bottle of tablets labelled ‘Switch BP’]

You think there are likely to be enough RCT comparisons of SwitchBP versus a current standard blood pressure lowering drug to do a meta-analysis, and your main outcome of interest is a reduction in blood pressure. Your systematic search finds 5 RCTs (let’s call them Studies A, B, C, D and E) of SwitchBP versus the standard drug. Four of them (Studies A, B, C and D) are smaller, but one (Study E) has a larger sample size and contributes the most to the overall effect size.

So, when you pool the results, it turns out that SwitchBP is better than the comparator at lowering blood pressure.
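For readers who want to see the arithmetic behind a pooled result like this, here is a minimal sketch in Python of fixed-effect, inverse-variance meta-analysis. The mean differences (reduction in blood pressure versus the comparator, in mmHg) and standard errors are entirely invented for illustration; they simply mirror the scenario above, in which Study E has the most precise estimate and therefore carries most of the weight.

```python
import math

# Hypothetical mean differences in blood pressure (mmHg, SwitchBP minus comparator;
# negative favours SwitchBP) and their standard errors. All numbers are invented
# purely to illustrate the pooling arithmetic.
studies = {
    "A": (-1.5, 1.8),
    "B": (-2.0, 1.6),
    "C": (-0.8, 1.9),
    "D": (-2.4, 1.7),
    "E": (-2.1, 0.6),  # the largest trial: smallest standard error, biggest weight
}

# Fixed-effect inverse-variance pooling: each study is weighted by 1 / SE^2.
total_weight = sum(1 / se ** 2 for _, se in studies.values())
pooled = sum((1 / se ** 2) * md for md, se in studies.values()) / total_weight
pooled_se = math.sqrt(1 / total_weight)
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

for name, (md, se) in studies.items():
    print(f"Study {name}: {md:+.1f} mmHg (weight {100 / (se ** 2 * total_weight):.0f}%)")
print(f"Pooled mean difference: {pooled:+.2f} mmHg (95% CI {lo:+.2f} to {hi:+.2f})")
```

With these made-up numbers Study E carries roughly two-thirds of the total weight, which is exactly why what the largest trial does (or does not) report matters so much for anything pooled later.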

However, you’ve read some case reports suggesting that SwitchBP might be associated with a number of adverse events, so you particularly want to analyse those as well. But as far as you can tell only four of the RCTs provide any quantitative data on adverse events (all showing no significant difference from the control arm), and Study E simply states in the text that “no significant differences were noted in adverse events between the arms”.

However, you are a little suspicious of the trend, so you contact the authors of Study E for any additional quantitative data they can share, but you get no response. So you continue with your review and inevitably conclude, based on the published data, that there is no overall increase in adverse events with SwitchBP.

A short time later, you are thrilled when your review is taken up by new blood pressure guidance and SwitchBP is recommended as a “safe and effective” drug to lower blood pressure.

But here’s the rub. How comfortable would you feel if you then found out that the adverse event data you hadn’t seen, from Study E, would change your overall conclusions? The authors of Study E (the largest trial) had chosen not to report their actual adverse event data. Including those data in your meta-analysis would have shown that, overall, SwitchBP was in fact associated with a significant increase in adverse events.
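To see how much difference those missing numbers can make, here is a similarly hypothetical sketch of the adverse event meta-analysis, again using fixed-effect inverse-variance pooling, this time of log odds ratios computed from made-up 2×2 counts. The counts are invented so that each trial on its own, including Study E, shows no significant difference, yet the pooled estimate only becomes significant once Study E’s data are included.

```python
import math

# Hypothetical adverse event counts: (events, patients) in each arm.
# All numbers are invented purely to illustrate how pooling can change a conclusion.
studies = {
    #       SwitchBP arm   comparator arm
    "A": ((7, 50),   (5, 50)),
    "B": ((8, 50),   (6, 50)),
    "C": ((5, 50),   (5, 50)),
    "D": ((9, 50),   (6, 50)),
    "E": ((54, 500), (37, 500)),  # the data Study E chose not to report
}

def log_odds_ratio(treated, control):
    """Log odds ratio of an adverse event (SwitchBP vs comparator) and its standard error."""
    a, n1 = treated
    c, n2 = control
    b, d = n1 - a, n2 - c
    return math.log(a * d / (b * c)), math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

def pooled_odds_ratio(names):
    """Fixed-effect inverse-variance pooling of the named studies; returns OR with 95% CI."""
    estimates = [log_odds_ratio(*studies[n]) for n in names]
    weights = [1 / se ** 2 for _, se in estimates]
    pooled = sum(w * lor for w, (lor, _) in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return tuple(math.exp(x) for x in (pooled, pooled - 1.96 * se, pooled + 1.96 * se))

print("Without Study E: OR %.2f (95%% CI %.2f to %.2f)" % pooled_odds_ratio("ABCD"))
print("With Study E:    OR %.2f (95%% CI %.2f to %.2f)" % pooled_odds_ratio("ABCDE"))
```

With these invented counts the four smaller trials pool to an odds ratio whose confidence interval comfortably spans 1, while adding Study E’s much larger numbers narrows the interval enough to exclude 1: the same qualitative flip as in the story above.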

However, this probably comes too late for your published systematic review, which is already cited in guidance supporting the widespread use of SwitchBP in patients, despite an increased risk of adverse events.

Of course this is a very simplistic example, and the benefit-to-harm balance may still favour using SwitchBP. Nevertheless, the point is that without full outcome reporting you were not able to reach conclusions that reflected what the full set of data would have told you.

Failing to disclose full outcome data is not an uncommon practice. This, and suggestions for how to reduce the bias, are discussed in the next blog.