Realist reviews are becoming an increasingly popular approach to synthesising evidence about complex interventions. Whilst Geoff welcomes this increase, he raises challenges that such popularity might bring
Realist reviews (or syntheses – the terms are synonymous) are a type of theory-driven approach to evidence synthesis. If you have ever heard about them, then there are likely to be two things you know about the approach. The first might be the type of questions they are used to answer – namely ‘what works, for whom, in what contexts, how, why and to what extent?’ for an intervention or programme. This type of question has almost become a bit of a cliché, synonymous with realist reviews. The second thing you might have heard of is the ‘formula’ Context + Mechanism = Outcome, or C+M=O.
Personally, I find it really fantastic that anyone knows anything at all about realist reviews (or realist evaluations, for that matter). Back in 2009 I was lucky enough to be offered an oral presentation spot at the Campbell Colloquium in Oslo. There were three of us presenting our work in a session dedicated to realist reviews. Such was the popularity of realist reviews that there were (if my memory serves me well) three or four people in the audience, and halfway through the session, one got up and walked out.
Fast forward to 2019 and realist reviews have really taken off. There are courses being run in the UK, Australia and Canada. There is a thriving email listserv with over 1000 members, there are quality and reporting standards and resources, and the number of projects using these approaches grows day by day! If you look at PROSPERO (an international register of evidence syntheses), in 2011 there was just one realist review registered, but in 2019 there are 46.
But there are two elephants in the room that accompany this rise in popularity of realist reviews. Neither is unique to this approach, but they do need calling out. The first relates to quality. Anyone who has taken the time to read a published realist review will be struck by how variable quality is. In fact, at times you might even ask yourself, ‘Is this really a realist review?’
The other elephant is a close relative of the first, and concerns the quality of the education, training and support being offered to those who undertake realist reviews. When learning anything new, the learner is initially highly dependent on the experience and expertise of their teacher. How sure can they be that their teachers are experienced and have the necessary expertise? How can they tell?
I don’t pretend to have the answers to these questions. Nor am I in the business of policing, or of naming and shaming any publication or training programme. But I do believe that each of us engaged in this field of work has an obligation to do our part. I feel that anyone doing a realist review and/or training others needs to commit to ensuring that what they do is of the highest quality they can manage, given the time and resources available. There is no such thing as a perfect project or teaching session, and I have done my fair share of sub-par work – and for these I apologise if they have misled or confused. However, for my part, I promise to do better: to reflect on, accept and redress my knowledge and practice gaps, to listen to feedback and advice from others, and to improve. My hope is that in 2029, not only will there be more realist reviews, but they will be ones that we can all be proud of!
Geoff Wong is the joint Module Coordinator for the ‘Realist Reviews and Realist Evaluation’ Module, which is part of the MSc Evidence-Based Health Care course.