Eight steps to finding evidence for your systematic review
Systematic reviews have been described as the basic unit of knowledge translation. By systematically searching for the totality of available evidence on a given topic, they are potentially powerful tools on which to base clinical decisions. It is therefore critical that they include a relevant, effective, and reproducible search strategy.
The Oxford Bodleian Library describes the process of identifying studies that are relevant to a given clinical question as being made up of two broad stages:
- Development of a structured search strategy that is run across multiple databases.
- An iterative process for finding further studies by hand searching relevant publications, reviewing reference lists, citation searching, contact with authors, and location of on-going studies.
Several resources providing more detailed guidance are available to support this process. But do they all agree on the optimal structure for searching? A recent review examined this question. The reviewers identified nine guidance documents, all prominently featured in UK systematic reviewing practice. These were:
- Systematic Reviews: CRD’s guidance for undertaking reviews in health care
- The Cochrane Handbook
- Collaboration for environmental evidence: Guidelines for systematic reviews in environmental management
- Joanna Briggs Institute Reviewers’ Manual
- Institute for Quality and Efficiency in Health Care (IQWiG) Methods
- Systematic Reviews in the Social Sciences: A Practical Guide
- Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness (via EUnetHTA)
- The Campbell Handbook: Searching for studies: a guide to information retrieval for Campbell systematic reviews
- Developing NICE guidelines: the manual
From these guidance documents, the authors identified eight key stages that relate specifically to literature searching in preparing systematic reviews. They then identified empirical evidence that supports these steps. I have summarised their findings below:
1. Involve an information specialist/librarian
Six of the guidance documents recommended that information specialists (or information scientists), librarians, or trial search co-ordinators (TSCs) should ideally be members of the review team. The reviewers highlighted a study by Meert and colleagues, who found that having a librarian as a coauthor or team member was associated with better-quality searches, as assessed using a bespoke methodology checklist.
2. Clearly define the aim(s) and purpose(s) of the literature search
Most of the guidance documents stressed that the purpose of seeking all available evidence is to minimise the chance of publication bias when evaluating the overall results and formulating conclusions. This makes sense when considering reviews of effectiveness, diagnostic accuracy, or prognosis. However, there is still some debate as to whether searching needs to be as comprehensive for reviews of qualitative studies, or when an abbreviated synthesis is required because of a lack of time or other resources.
3. Prepare for your systematic review
The reviewers highlighted two important common steps. The first is to identify whether existing or ongoing reviews already tackle your question, or a related one, and to justify whether a new review is needed. Guidance on when and how to update systematic reviews exists. Authors of potential reviews may consider searching the Cochrane Library, PubMed (using a systematic review filter), and the International Prospective Register of Systematic Reviews (PROSPERO) to check this. The second is to develop and pilot an initial search strategy. This offers several benefits, including an estimate of the volume of relevant literature, an indication of its quality, and a sense of the resources a full review would need.
4. Designing the search strategy
There are several different frameworks for structuring the search question. A commonly used one is the Population, Intervention, Comparator, Outcome (PICO) framework. Others include the SPICE (Setting – Perspective – Intervention – Comparison – Evaluation) and SPIDER (Sample – Phenomenon of Interest – Design – Evaluation – Research type) frameworks. Useful overviews of frameworks to support question formulation are also available.
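To make the PICO idea concrete, here is a minimal sketch of how the components of a structured question might be assembled into a Boolean search string. The terms and the `build_query` helper are illustrative assumptions of mine, not the syntax of any particular database.

```python
# Hypothetical sketch: OR synonyms within each PICO block,
# then AND the blocks together. Terms are invented examples.

def build_query(population, intervention, comparator, outcome):
    """Combine PICO term lists into a single Boolean search string."""
    def block(terms):
        # Quote each term and join alternatives with OR.
        return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"
    # Empty blocks (e.g. no comparator terms) are simply skipped.
    parts = [block(t) for t in (population, intervention, comparator, outcome) if t]
    return " AND ".join(parts)

query = build_query(
    population=["adults", "hypertension"],
    intervention=["mindfulness"],
    comparator=[],                      # comparator block may be omitted
    outcome=["blood pressure"],
)
print(query)
```

In practice each block would also include controlled vocabulary (e.g. MeSH terms) and truncation, which this sketch leaves out.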
5. Bibliographic database searching
Most of the guidance documents suggested that the types of databases authors choose to search should largely depend on the nature of the question. There is no agreed standard number of databases, although the average is about four. The authors of a recent study suggested that, as a minimum, systematic reviews should search Embase, MEDLINE, Web of Science, and Google Scholar to provide adequate coverage.
6. Supplementary search methods
Most of the guidance documents advocate supplementary searching. Contacting experts working in the field may be important, as may hand searching and scanning reference lists. Literature not formally published in sources such as journals is often termed “grey literature”; finding it may involve searching the websites of relevant agencies, working papers, dissertations, and conference proceedings. Supplementary search methods may also be needed to address the risk of reporting bias, including publication bias. One way of mitigating this is to search for and include data from trial registries or clinical study reports, although the latter are technical and resource-intensive documents.
7. Managing the references
Five of the guidance documents recommend bibliographic management tools to store, deduplicate, and manage references. Software tools such as EndNote, Reference Manager, and RefWorks were often used. However, increasing numbers of tools and software packages are now available, many of them using online cloud-based platforms.
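The deduplication step these tools perform can be sketched in a few lines. This is my own simplified illustration, assuming duplicates are detected by a normalised title; real reference managers also compare DOIs, authors, journals, and years.

```python
# Minimal sketch of reference deduplication across database exports.
# The normalisation rule (lowercase, strip punctuation) is an
# illustrative assumption, not how any particular tool works.
import re

def dedupe(references):
    """Keep the first record seen for each normalised title."""
    seen, unique = set(), []
    for ref in references:
        key = re.sub(r"[^a-z0-9 ]", "", ref["title"].lower()).strip()
        if key not in seen:
            seen.add(key)
            unique.append(ref)
    return unique

refs = [
    {"title": "Aspirin for primary prevention", "source": "MEDLINE"},
    {"title": "Aspirin for Primary Prevention.", "source": "Embase"},  # duplicate
    {"title": "Statins in older adults", "source": "MEDLINE"},
]
print(len(dedupe(refs)))
```

Keeping the first occurrence (and logging how many records were removed) matters for the PRISMA flow diagram, which asks for record counts before and after deduplication.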
8. Documenting the search
The reviewers identified a consensus that systematic reviews should document the databases searched, the search strategy used, and any search limits (e.g. dates, languages). Providing the full search strategy aids transparency, but this was specifically mentioned as a requirement only in the Cochrane Handbook. Those who want to follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist will note the recommendation for reporting all information sources, including the date last searched, and that authors present their full electronic search strategy for at least one database.
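One lightweight way to capture this documentation is a structured record per database search. The sketch below is a hypothetical schema of my own devising, with invented example values; the field names are not a published standard, though they mirror the items PRISMA asks reviewers to report.

```python
# Hedged sketch: one structured record per database searched,
# capturing the details PRISMA expects. Values are invented.
from dataclasses import dataclass, asdict
import json

@dataclass
class SearchRecord:
    database: str            # database and platform searched
    date_searched: str       # date the search was last run
    strategy: str            # the full search string
    limits: str              # e.g. dates, languages
    records_retrieved: int   # count for the PRISMA flow diagram

record = SearchRecord(
    database="MEDLINE (Ovid)",
    date_searched="2018-01-15",
    strategy='("adults" OR "hypertension") AND "mindfulness"',
    limits="English; 2000-2017",
    records_retrieved=412,
)
print(json.dumps(asdict(record), indent=2))
```

Serialising each record (here as JSON) gives a reproducible audit trail that can be pasted directly into a review's supplementary material.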
The paper covers much more, including limitations in the development of this eight-step process, such as the exclusion of guidance from the Agency for Healthcare Research and Quality (AHRQ) and the Canadian Agency for Drugs and Technologies in Health (CADTH). Nevertheless, I would say the paper is essential reading for anyone about to embark on a systematic review. Have a look and judge for yourself.
Kamal R Mahtani is a GP and Director of The Evidence-Based Health Care MSc in Systematic Reviews. He is based at the Centre for Evidence-Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford. He is also an Associate Editor at the BMJ Evidence-Based Medicine journal.
You can follow him on Twitter @krmahtani
Disclaimer: The views expressed in this article represent the views of the author and not necessarily those of the host institution, the NHS, the NIHR, or the Department of Health.
Acknowledgements: Jeffrey Aronson for helpful comments.
Competing interests: KM receives funding from the NHS NIHR SPCR Evidence Synthesis Working Group and the NIHR Health Technology Assessment programme.