
While attending the University of Oxford Complex Reviews module, I thought about waste: waste of time, labour, and funding, and perhaps even the health of patients. While listening to my tutors, I realised that the non-inclusion of unpublished data in systematic reviews (SRs) is a greater problem than I had previously appreciated, particularly when reporting the effectiveness and harms of a treatment. Larger still are the impacts of publication and reporting bias on the validity of SRs used by governments and clinicians to make policy and treatment decisions. By the end of the module, I wondered whether an SR that does not include unpublished data is a wasted effort.

The neuraminidase inhibitor oseltamivir (Tamiflu) was purported in clinical trials and SRs to be effective in preventing and treating influenza and was recommended for use during serious epidemics or pandemics. Based on these data, governments stockpiled oseltamivir in preparation for future influenza pandemics; the US and EU were reported to have spent $1.5 billion and $3 billion, respectively. Roche, the manufacturer of oseltamivir, has generated $18 billion in global sales since 1999, and half of this revenue came from governmental stockpiling.

Tom Jefferson and colleagues demonstrated that the available data used to inform these crucial policy decisions were incomplete and based on selectively reported data from published trials. The review team showed that many published trial reports exaggerated the effectiveness of oseltamivir, and they discovered evidence of previously unmentioned harms. How did they do this? By obtaining the unpublished clinical study reports (CSRs) from the manufacturer sponsoring these trials and reanalyzing the data. Among several disturbing discoveries, their work revealed that approximately 60% of oseltamivir trials were unpublished.

Chalmers and Glasziou noted, “SRs can be misleading if they are done sloppily or interpreted incautiously, and that this might contribute to waste of research effort.” Mahtani proposed that using SRs to inform new research is good practice and may help prevent wasteful duplication of studies if they are incorporated into trial planning and design. Jefferson discussed the problem of journal articles in which evidence may be distorted by reporting bias or even be deliberately manufactured “pieces of marketing.” Jefferson proposed addressing distorted evidence by comparing primary evidence sources with their corresponding CSRs. This approach could minimize waste by increasing the validity of the primary evidence synthesized in SRs. However, there are limitations to this approach.

In one survey, 83% of systematic reviewers had never considered using CSRs, indicating that their existence is widely unknown. Searching for and obtaining unpublished data requires effort beyond what some systematic reviewers may be able to commit. Jefferson commented that it can take considerable time to obtain CSRs or unpublished trials from a drug manufacturer, and the data provided may be heavily redacted to protect proprietary interests. Trialists may be reluctant to release unpublished trials with negative results to a systematic reviewer. There are also logistical concerns: collecting and analyzing unpublished data requires more time, funding, and administrative resources than many reviewers have at their disposal.

Complete data transparency, mandatory registration, and timely reporting of trials in centralized databases by universities, governments, and pharmaceutical companies would enable the timely acquisition of data by systematic reviewers.

Although some progress has been made, there are barriers to this endeavor, and compliance with reporting laws is low. Looking to the future, the challenges of time, labor, and limited staffing could be addressed using artificial intelligence systems designed to efficiently search and process voluminous amounts of natural language data.
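To make that last idea concrete, here is a minimal, purely illustrative Python sketch of the simplest form such automation could take: keyword-based screening of trial records. The records, identifiers, and keywords below are invented for the example; real AI-assisted screening tools use far more sophisticated natural language processing, and this sketch is a stand-in, not a description of any actual system.

```python
# Illustrative sketch only: naive keyword screening of trial records.
# All record data and identifiers here are invented for this example.
records = [
    {"id": "T1", "abstract": "Randomised trial of oseltamivir for influenza prophylaxis."},
    {"id": "T2", "abstract": "Observational study of hand hygiene in schools."},
    {"id": "T3", "abstract": "Unpublished clinical study report: oseltamivir treatment outcomes."},
]

KEYWORDS = {"oseltamivir", "influenza"}

def screen(record, keywords=KEYWORDS):
    """Flag a record if its abstract mentions any keyword (case-insensitive)."""
    text = record["abstract"].lower()
    return any(k in text for k in keywords)

matches = [r["id"] for r in records if screen(r)]
print(matches)  # ['T1', 'T3']
```

Even this toy version hints at the appeal: a machine can scan thousands of records in seconds, leaving reviewers to spend their limited time on the harder work of appraisal and data extraction.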

Despite current limitations, a properly executed SR that includes unpublished data economizes time, labor, and funding by ensuring the review does not become a wasted effort that generates misleading results. When there is high confidence that systematic reviewers performed due diligence to locate unpublished data for inclusion, consumers of the medical literature can be more assured they have the most valid information possible on which to base treatment and policy decisions. In short, the inclusion of unpublished data means the SR may not have to be needlessly repeated, minimizing the waste of limited resources, freeing funding for other studies, accelerating the acquisition of knowledge, and improving the health of patients.

James Burgert was a student on the Complex Reviews module, part of the EBHC programme which includes the MSc EBHC (Systematic Reviews). He is a resuscitation medicine scientist with the Geneva Foundation for Military Medical Research and practices anesthesia in San Antonio, Texas, USA. The views expressed in this blog post are those of the author and do not necessarily reflect the views of anyone referred to in the piece, the publisher, or the host institution.