[Photo: head and shoulders of Professor Trish Greenhalgh]

Tools and resources for critical appraisal of research evidence are widely available and extremely useful. Whatever the topic and whatever the study design used to research it, there is probably a checklist to guide you step-by-step through assessing its validity and relevance.

The implementation challenge is different. Let me break this news to you gently: there is no tooth fairy.  Nor is there any formal framework or model or checklist of things to do (or questions to ask) that will take you systematically through everything you need to do to ‘implement’ a particular piece of evidence in a particular setting.

There are certainly tools available (check out the Knowledge to Action Framework, for example), and you should try to become familiar with them.  They will prompt you to adapt your evidence to suit a local context, identify local ‘barriers’ and ‘facilitators’ to knowledge use, select and tailor your interventions, and monitor and evaluate your progress. All these aspects of implementation are indeed important.

But here’s the rub: despite their value, knowledge-to-action tools cannot be applied mechanistically in the same way as the CONSORT checklist can be applied to a paper describing a randomised controlled trial.  This is not because the tools are in some way flawed (in which case, the solution would be to refine the tools, just as people refined the CONSORT checklist over the years). It is because implementation is infinitely more complex (and hence unpredictable) than a research study in which confounding variables have been (or should have been) controlled or corrected for.

Implementing research evidence is not just a matter of following procedural steps. You will probably relate to that statement if you have ever tried it, just as you may know as a parent that raising a child is not just a matter of reading and applying the child-rearing manual, or as a tennis player that winning a match cannot be achieved merely by knowing the rules of tennis and studying detailed statistics on your opponent's performance in previous games. All these are examples of complex practices that require skill and situational judgement (which comes from experience) as well as evidence on 'what works'.

So-called ‘implementation science’ is, in reality, not a science at all – and nor is it an art. It is a science-informed practice.  And just as with child-rearing and tennis-playing, you get better at it by doing two things in addition to learning about ‘what works’: doing it, and sharing stories about doing it with others who are also doing it. By reflecting carefully on your own practice and by discussing real case examples shared by others, you will acquire not just the abstract knowledge about ‘what works’ but also the practical wisdom that will help you make contextual judgements about what is likely to work (or at least, what might be tried out to see if it works) in this situation for these people in this organisation with these constraints.

There is a philosophical point here. Much healthcare research is oriented to producing statistical generalisations: findings from one population sample are used to predict what will happen in a comparable sample. In such cases, there is usually a single, correct interpretation of the findings. In contrast, implementation science is at least partly about using unique case examples as a window onto wider truths through the enrichment of understanding (what philosophers of science call 'naturalistic generalisation'). In such cases, multiple interpretations of a case are possible and there may be no such thing as the 'correct' answer (recall the example of raising a child above).

In the Knowledge Into Action module, some of the time will be spent on learning about conceptual tools such as the Knowledge to Action Framework. But the module is deliberately designed to expose students to detailed case examples that offer multiple different interpretations. We anticipate that at least as much learning will occur as students not only apply ‘tools’ but also bring their rich and varied life experience (as healthcare professionals, policymakers, managers and service users) to bear on the case studies presented by their fellow students and visiting speakers.  Students will also have an opportunity to explore different interpretations of their chosen case in a written assignment.

The Knowledge Into Action course is run by the Oxford University Department for Continuing Education in conjunction with the Centre for Evidence-Based Medicine and the Nuffield Department of Primary Care Health Sciences.