Spin [WITH OBJECT]: Draw out and twist (the fibres of wool, cotton, or other material) to convert them into yarn, either by hand or with machinery: “they spin wool into the yarn for weaving.”

Does the name Malcolm Tucker ring a bell? The Malcolm Tucker I am referring to is the fictional character from the BBC political satire The Thick of It. Tucker (played by Peter Capaldi) was a government director of communications, skilled in propaganda and, more specifically, in the art of “spinning” unfavourable information into a more complimentary, approving (and sometimes even glowing) public-facing message. Whether the show accurately reflects real-life government politics, or whether real-life politicians “copy” the show, remains a topic of discussion. Either way, “spin” in the political arena feels like something we are increasingly getting used to, and almost expect.

“Spin” in reports of clinical research

For many researchers, the number of publications, and the impact of those publications, are the usual currency for measuring professional worth. Furthermore, as more opportunities arise, we are increasingly seeing researchers discuss their work in public through mainstream and social media. With this in mind, it probably won’t come as such a shock to imagine that researchers might be tempted to report their results in a more favourable (again, even glowing) way than they deserve, i.e. to add some “spin”.

According to the EQUATOR network, such practice constitutes misleading reporting, and specifically the misinterpretation of study findings (e.g. presenting a study in a more positive way than the actual results reflect, or the presence of discrepancies between the abstract and the full text).

“Researchers have a duty to make publicly available the results of their research on human subjects and are accountable for the completeness and accuracy of their reports.”
WMA Declaration of Helsinki

So how common is “spin” in clinical research? An analysis of 72 randomised controlled trials that reported statistically non-significant results for their primary outcomes found that more than 40% of the trials contained some form of “spin”, defined by the authors as the “use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically non-significant difference for the primary outcome, or to distract the reader from statistically non-significant results”. The analysis identified a number of strategies for “spin”, among the most common being to focus reporting on statistically significant results for other analyses, i.e. not the primary outcomes, or to focus on another study objective and distract the reader from a statistically non-significant result. Another analysis, this time of 107 randomised controlled trials in oncology, similarly found that nearly half of the trials demonstrated some form of “spin” in either the abstract or the main text.

You might think that systematic reviews of primary research should address some of these problems. By seeking the totality of available evidence, interpreting the impact of bias, and then synthesising the evidence into a usable form, they can be powerful tools for informing clinical decisions. But not all systematic reviews are equal. Non-Cochrane systematic reviews have been shown to be twice as likely as Cochrane reviews to have positive conclusion statements. Furthermore, when matched to an equivalent Cochrane review on the same topic, non-Cochrane reviews were more likely to report larger effect sizes with lower precision. In both cases, these findings may well reflect the extent to which methodological complexity is ignored or sidestepped in poorer quality reviews.

So not all systematic reviews are equal, and neither are they exempt from “spin”. A review of the presence of “spin” (defined as the consistency of reporting between the abstract/conclusions and the empirical data) in reviews of psychological therapies found that “spin” was present in 27 of the 95 included reviews (28%). In fact, a recent study identified 39 different types of “spin” that may be found in a systematic review. Thirteen of those were specific to reports of systematic reviews and meta-analyses. When a sample of Cochrane systematic review editors and methodologists were asked to rank the most severe types of “spin” found in the abstracts of a review, their top three were (1) recommendations for clinical practice not supported by the findings in the conclusion, (2) a misleading title, and (3) selective reporting.

Impacts of “spin” from clinical research

“Spin” may influence the interpretation of information by clinicians. A randomised controlled trial allocated 150 clinicians to assess a sample of cancer-related abstracts containing “spin” and another 150 clinicians to assess the same abstracts with the “spin” removed. Although the absolute effect size was small, the study found that the presence of “spin” made clinicians statistically significantly more likely to report that the treatment was beneficial. Interestingly, the study also found that “spin” led clinicians to rate the study as less rigorous and made them more likely to want to review the full-text article.

Dissemination of research findings to the public, e.g. through mainstream media, can also be a source of added “spin”. An analysis of 498 scientific press releases from the EurekAlert! database identified 70 that referred to two-arm, parallel-group RCTs. “Spin”, which included a tendency to place more emphasis on the beneficial effects of a treatment, was identified in 33 (47%) of these press releases. Furthermore, the authors of the analysis found that the main factor associated with “spin” in a press release was the presence of “spin” in the abstract conclusion.

So what motivates “spin”?

This is a complex area, to which more relevant research might add clarity. A desire to demonstrate impact has already been suggested as one driver. Other proposed mechanisms include (1) ignorance of scientific standards, (2) young researchers imitating previous practice, (3) unconscious prejudice, or (4) wilful intent to influence readers.

Conflicts of interest (COI) will almost certainly have some bearing on the presence of “spin”. As an example, an overview of systematic reviews examined whether financial conflicts of interest influenced the overall conclusions of systematic reviews examining the relationship between the consumption of sugar-sweetened beverages (SSBs) and weight gain or obesity. Of the included studies, five of the six systematic reviews that disclosed some form of financial conflict of interest with the food industry reported no association between SSB consumption and weight gain. In contrast, 10 of the 12 reviews that reported no potential conflicts of interest found that SSB consumption could be a potential risk factor for weight gain.

However, while a great deal of discussion focuses on financial COI, the “blind spot” may be non-financial conflicts of interest (NFCOI), which could have an even greater bearing on the presence of “spin”. For systematic reviews, these types of conflicts have been defined as “a set of circumstances that creates a risk that the primary interest—the quality and integrity of the systematic review—will be unduly influenced by a secondary or competing interest that is not mainly financial.” Examples of NFCOI include strongly held personal beliefs (e.g. leading to a possible “allegiance bias”), personal relationships, a desire for career advancement, or (increasingly possible now) a greater media profile. All of these have the potential to affect professional judgment and thus generate a message that does not convey a fair test of treatment.

Unfortunately, a significant proportion of clinical research is already littered with various types of bias, which we know can influence the treatments we provide to our patients as well as waste valuable resources. The added bias of “spin”, whether motivated by financial, personal, or intellectual conflicts of interest, or even plain ignorance, only compounds the problem.

Beware evidence “spin”.

Kamal R Mahtani is a GP, NIHR clinical lecturer and deputy director of the Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford. He is also a member of the Evidence Live 2016 steering committee which brings together leading speakers in evidence-based medicine from all over the world, from the fields of research, clinical practice and commissioning.

You can follow him on Twitter at @krmahtani

Competing interests: I declare no competing interests relevant to this article.

Disclaimer: The views expressed are those of the author and not necessarily of any of the institutions or organisations mentioned in the article.

Acknowledgements: Thanks to Jeff Aronson, Meena Mahtani and Annette Plüddemann for helpful comments.