The Centre for Evidence-Based Medicine (CEBM) at Oxford University develops, promotes and disseminates better evidence for health care.
The impact of the COVID-19 pandemic on antimicrobial usage: an international patient-level cohort study.
BACKGROUND: This study aimed to evaluate trends in antimicrobial prescription during the first 1.5 years of the COVID-19 pandemic. METHODS: This was an observational, retrospective cohort study using patient-level data from Bangladesh, Brazil, India, Italy, Malawi, Nigeria, South Korea, Switzerland and Turkey from patients with pneumonia and/or acute respiratory distress syndrome and/or sepsis, regardless of COVID-19 positivity, who were admitted to critical care units or COVID-19 specialized wards. Changes in antimicrobial prescription between the pre-pandemic and pandemic periods were estimated using logistic or linear regression. Pandemic effects on month-wise antimicrobial usage were evaluated using interrupted time series analyses (ITSAs). RESULTS: Antimicrobials for which prescriptions increased significantly during the pandemic were as follows: meropenem in Bangladesh (95% CI: 1.94-4.07), with increased prescribed daily dose (PDD) (95% CI: 1.17-1.58), and Turkey (95% CI: 1.09-1.58); moxifloxacin in Bangladesh (95% CI: 4.11-11.87), with increased days of therapy (DOT) (95% CI: 1.14-2.56); piperacillin/tazobactam in Italy (95% CI: 1.07-1.48), with increased DOT (95% CI: 1.01-1.25) and PDD (95% CI: 1.05-1.21); and azithromycin in Bangladesh (95% CI: 3.36-21.77) and Brazil (95% CI: 2.33-8.42). ITSA showed a significant drop in azithromycin usage in India (95% CI: -8.38 to -3.49 g/100 patients) and South Korea (95% CI: -2.83 to -1.89 g/100 patients) after the release of WHO guidelines v1, and increased usage of meropenem (95% CI: 93.40-126.48 g/100 patients) and moxifloxacin (95% CI: 5.40-13.98 g/100 patients) in Bangladesh and of sulfamethoxazole/trimethoprim in India (95% CI: 0.92-9.32 g/100 patients) following the emergence of the Delta variant. CONCLUSIONS: This study reinforces the importance of developing antimicrobial stewardship in clinical settings during inter-pandemic periods.
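The interrupted time series analyses (ITSAs) referred to above are typically fitted as segmented regressions with a level-change term and a slope-change term at the interruption. A minimal sketch in Python, using ordinary least squares and synthetic monthly usage data (the parameterisation and all numbers here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def segmented_regression(y, interruption):
    """Fit y_t = b0 + b1*t + b2*post_t + b3*(t - T0)*post_t by least squares.

    b2 estimates the immediate level change at the interruption and
    b3 the change in slope afterwards (the classic ITSA parameterisation).
    """
    t = np.arange(len(y), dtype=float)
    post = (t >= interruption).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - interruption) * post])
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return coef  # [baseline level, pre-slope, level change, slope change]

# Synthetic monthly usage series: flat at 10 g/100 patients, then an
# immediate drop of 4 and a new downward slope after month 12.
months = np.arange(24)
y = np.where(months < 12, 10.0, 6.0 - 0.5 * (months - 12))
b0, b1, b2, b3 = segmented_regression(y, interruption=12)
```

Published ITSAs usually add seasonal terms and autocorrelation-robust standard errors on top of this basic structure.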
Making evaluations useful for healthcare leadership development programmes.
BACKGROUND: Effective healthcare leadership has been linked to improved individual and organisational outcomes globally. However, evaluations of healthcare leadership development programmes have often been of low quality. This study investigates the evaluation and decision-making needs of stakeholders for the Oxford Emerging Leaders Programme and aims to redesign its evaluation approach. METHODS: Drawing from Michael Quinn Patton's utilisation-focused evaluation approach, semistructured interviews were conducted with 12 key programme stakeholders. Interviews were thematically analysed to identify key areas for useful and impactful evaluation. RESULTS: Three main themes were identified: impact on patients, impact on healthcare organisations and individual outcomes. Individual outcomes were further divided into skills and qualities. Stakeholders emphasised the importance of measuring improvements in organisational culture, as well as from the perspectives of patients and individual leaders. The need for a multifaceted and longitudinal evaluation approach was highlighted. CONCLUSIONS: The study underscores the importance of aligning evaluation methods with stakeholder needs. Tailoring evaluations to specific programme aims and incorporating both qualitative and quantitative measures can enhance their utility. These insights contribute to the broader literature on healthcare leadership development and programme evaluation.
Influenza vaccination for healthcare workers who care for people aged 60 or older living in long-term care institutions
Rationale: People who work in long-term care institutions (LTCIs), such as doctors, nurses, other health professionals, cleaners and porters (and also family visitors), may have substantial rates of influenza during influenza seasons. They often continue to work when infected with influenza, increasing the likelihood of transmitting influenza to those in their care. The immune systems of care home residents may be weaker than those of the general population; vaccinating care home workers could reduce transmission of influenza within LTCIs. Objectives: To assess the effects of vaccinating healthcare workers in long-term care institutions against influenza on influenza-related outcomes in residents aged 60 years or older. Search methods: We searched the Cochrane Central Register of Controlled Trials (via Cochrane Library), MEDLINE (via Ovid), Embase (via Elsevier), Web of Science (Science Citation Index-Expanded and Conference Proceedings Citation Index - Science), and two clinical trials registries up to 22 August 2024. Eligibility criteria: In this version of the review we restricted eligibility to randomised controlled trials (RCTs) of influenza vaccination of healthcare workers (HCWs) caring for residents aged 60 years or older in LTCIs. Previously we included cohort or case-control studies. Outcomes: Outcomes of interest were: influenza (confirmed by laboratory tests) and its complications (lower respiratory tract infection; hospitalisation or death due to lower respiratory tract infection), all-cause mortality, and adverse events. Risk of bias: We used version one of the Cochrane risk of bias tool for RCTs. Synthesis methods: Two review authors independently extracted data and assessed the risk of bias. We used risk ratios (RRs) with 95% confidence intervals (CIs) to summarise the effects of vaccination on our outcomes of interest. 
We accounted for clustering by dividing events and sample sizes for each study by an assumed design effect as part of a sensitivity analysis. We used GRADE to assess the certainty of evidence for our outcomes of interest. Included studies: We did not identify any new trials for inclusion in this update. Four cluster-RCTs from Europe (8468 residents) of interventions to offer influenza vaccination to HCWs caring for residents ≥ 60 years in LTCIs provided outcome data that addressed the objectives of our review. The average age of the residents was between 77 and 86 years, and most were female (70% to 77%). The studies were comparable in their intervention and outcome measures. The studies did not report adverse events. The principal sources of bias in the studies related to attrition, lack of blinding, contamination in the control groups, and low rates of vaccination coverage in the intervention arms, leading us to downgrade the certainty of evidence for all outcomes due to serious risk of bias. Synthesis of results: Offering influenza vaccination to HCWs based in LTCIs may have little or no effect on the number of residents who develop influenza compared with those living in care homes where no vaccination is offered (5% versus 4%) (RR 0.87, 95% CI 0.46 to 1.63; 2 studies, 752 participants; low-certainty evidence). We rated as low-certainty the evidence from one study of 1059 residents showing a slight reduction in lower respiratory tract infection with HCW vaccination (6% versus 4%) (RR 0.70, 95% CI 0.41 to 1.2). Illustrated as an absolute effect (2% to 7%), the confidence interval is compatible with both a meaningful reduction and a slight increase in infections. Taking account of clustering for this outcome widened the confidence interval further, and we accordingly rated the evidence as very low-certainty (RR 0.72, 95% CI 0.28 to 1.85).
HCW vaccination programmes may have little or no effect on the number of residents admitted to hospital for respiratory illness (RR 1.02, 95% CI 0.82 to 1.27; 1 study, 3400 participants; low-certainty evidence). There is insufficient evidence to determine whether HCW vaccination affects deaths due to lower respiratory tract infections in residents: 2% of residents in both groups died from lower respiratory tract infections, based on an RR of 0.82 (95% CI 0.45 to 1.49; 2 studies, 4459 participants; very low-certainty evidence). HCW vaccination probably leads to a reduction in all-cause deaths from 9% to 6% (RR 0.69, 95% CI 0.60 to 0.80; 4 studies, 8468 participants; moderate-certainty evidence). Authors' conclusions: The effects of HCW vaccination on influenza-specific outcomes in older residents of LTCIs are uncertain. The observed reduction in all-cause mortality could not be explained by changes in influenza-specific outcomes. This review found no information on co-interventions alongside HCW vaccination: hand washing, face masks, early detection of laboratory-proven influenza, quarantine, avoiding admissions, antivirals, and asking HCWs with influenza or influenza-like illness not to go to work. Better studies are needed to give greater certainty in the evidence for vaccinating HCWs to prevent influenza in residents aged 60 years or older in LTCIs. Additional studies are needed to further test these interventions in combination. Registration: Protocol (2005): 10.1002/14651858.CD005187.pub. Original review (2006): 10.1002/14651858.CD005187.pub2. Update (2010): 10.1002/14651858.CD005187.pub3. Update (2013): 10.1002/14651858.CD005187.pub4. Update (2016): 10.1002/14651858.CD005187.pub5.
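The clustering sensitivity analysis described in the methods above, dividing events and sample sizes by an assumed design effect, is simple arithmetic. A sketch with an illustrative intracluster correlation and mean cluster size (these values are assumptions for demonstration, not the review's):

```python
def effective_size(n, events, mean_cluster_size, icc):
    """Shrink a cluster-randomised arm to its effective independent size.

    Design effect DE = 1 + (m - 1) * ICC, where m is the mean cluster
    size; dividing both n and events by DE widens the resulting
    confidence interval as if the trial had been individually randomised.
    """
    de = 1 + (mean_cluster_size - 1) * icc
    return n / de, events / de

# Illustrative arm: 1059 residents in clusters of ~50, assumed ICC of 0.05,
# giving a design effect of 1 + 49 * 0.05 = 3.45.
n_eff, events_eff = effective_size(n=1059, events=60, mean_cluster_size=50, icc=0.05)
```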
The impact of weight loss interventions on disordered eating symptoms in people with overweight and obesity: a systematic review & meta-analysis
Background: It is unclear whether weight loss interventions worsen disordered eating in people living with overweight/obesity. We aimed to systematically evaluate the association between weight loss interventions and disordered eating. Methods: Six databases were searched from inception until September 2024. Trials of weight loss interventions in people with overweight/obesity were included if they reported a validated score for disordered eating on either the Eating Disorder Examination Interview or the Eating Disorder Examination Questionnaire pre- and post-intervention. Interventions included behavioural weight loss programmes (BWL) and pharmacotherapy licensed for weight loss, with or without concurrent psychological support, provided for at least 4 weeks. Pooled standardised mean differences (SMD) in scores of disordered eating were calculated using random effects meta-analyses. Risk of bias (RoB) was assessed using the Cochrane RoB 2 tool and the Newcastle–Ottawa scale for randomised and single-arm trials, respectively (PROSPERO ID: CRD42023404792). Findings: Thirty-eight studies with 66 eligible arms (61 interventions: 29 BWL, 11 BWL + pharmacotherapy, 20 BWL + psychological intervention, 1 pharmacotherapy + psychological intervention) and 3364 participants in total were included. The mean weight change was −4.7 kg (95% CI: −5.7, −3.7). Compared with baseline, disordered eating scores improved by −1.47 SMD units (95% CI: −1.67, −1.27, p < 0.001, I2 = 94%) at intervention completion (median of 4 months). Seven randomised trials that directly compared a weight loss intervention to no/minimal intervention reported an improvement of −0.49 SMD units (95% CI, −0.93, −0.04, p = 0.0035, I2 = 73%).
Sub-group analyses showed: (a) disordered eating scores improved more in people with an eating disorder at baseline compared with people without high scores, (b) no clear evidence that the association depended upon intervention type, and (c) disordered eating scores improved more in trials rated at low overall RoB. Interpretation: Despite heterogeneity in effect size, weight loss interventions consistently improved disordered eating scores. These findings provide reassurance that weight loss interventions might not worsen disordered eating and may improve it. Funding: Novo Nordisk UK Research Foundation Doctoral Fellowship in Clinical Diabetes.
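Pooled standardised mean differences of this kind are commonly combined with a DerSimonian-Laird random-effects model, which also yields the I2 heterogeneity figure quoted above. A self-contained sketch (the per-study SMDs and variances below are hypothetical, not the review's data):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1 / v                                   # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)      # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance, floored at 0
    w_re = 1 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Hypothetical per-study SMDs and variances (illustration only):
smd, se, i2 = dersimonian_laird([-1.2, -1.6, -1.4], [0.04, 0.05, 0.03])
```

With these particular inputs Cochran's Q falls below its degrees of freedom, so the between-study variance estimate is zero and the pooled value coincides with the fixed-effect estimate.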
Frequency of Renal Monitoring - Creatinine and Cystatin C (FORM-2C): An observational cohort study of patients with reduced eGFR in primary care
Background Monitoring is the mainstay of chronic kidney disease management in primary care; however, there is little evidence about the best way to do this. Aim To compare the effectiveness of estimated glomerular filtration rate (eGFR) derived from serum creatinine and serum cystatin C to predict renal function decline among those with a recent eGFR of 30-89 ml/min/1.73 m2. Design and setting Observational cohort study in UK primary care. Method Serum creatinine and serum cystatin C were both measured at seven study visits over 2 years in 750 patients aged ≥18 years with an eGFR of 30-89 ml/min/1.73 m2 within the previous year. The primary outcome was change in eGFR derived from serum creatinine or serum cystatin C between 6 and 24 months. Results Average change in eGFR was 0.51 ml/min/1.73 m2/year when estimated by serum creatinine and -2.35 ml/min/1.73 m2/year when estimated by serum cystatin C. The c-statistic for predicting renal decline using serum creatinine-derived eGFR was 0.495 (95% confidence interval [CI] = 0.471 to 0.519). The equivalent c-statistic using serum cystatin C-derived eGFR was 0.497 (95% CI = 0.468 to 0.525). Similar results were obtained when restricting analyses to those aged ≥75 or <75 years, or with eGFR ≥60 ml/min/1.73 m2. In those with eGFR <60 ml/min/1.73 m2, serum cystatin C-derived eGFR was more predictive than serum creatinine-derived eGFR for future decline in kidney function. Conclusion In the primary analysis neither eGFR estimated from serum creatinine nor from serum cystatin C predicted future change in kidney function, partly due to small changes during 2 years. In some secondary analyses there was a suggestion that serum cystatin C was a more useful biomarker to estimate eGFR, especially in those with a baseline eGFR <60 ml/min/1.73 m2.
Design, methods, and reporting of impact studies of cardiovascular clinical prediction rules are suboptimal: a systematic review
Objectives: To evaluate the design, methods, and reporting of impact studies of cardiovascular clinical prediction rules (CPRs). Study Design and Setting: We conducted a systematic review. Impact studies of cardiovascular CPRs were identified by forward citation and electronic database searches. We categorized study designs as appropriate if they were randomized or nonrandomized experimental designs, excluding uncontrolled before-after studies. For impact studies with an appropriate study design, we assessed the quality of methods and reporting. We compared the quality of methods and reporting between impact studies and matched control studies. Results: We found 110 impact studies of cardiovascular CPRs. Of these, 65 (59.1%) used inappropriate designs. Of 45 impact studies with appropriate designs, 31 (68.9%) had a substantial risk of bias. The mean number of reporting domains that impact studies with an appropriate study design adhered to was 10.2 of 21 domains (95% confidence interval, 9.3 to 11.1). The quality of methods and reporting was not clearly different between impact and matched control studies. Conclusion: We found that most impact studies either used an inappropriate study design, had a substantial risk of bias, or complied poorly with reporting guidelines. This appears to be a common feature of complex interventions. Users of CPRs should critically evaluate the evidence for the effectiveness of CPRs.
Self-monitoring of Blood Pressure in Patients with Hypertension-Related Multi-morbidity: Systematic Review and Individual Patient Data Meta-analysis
Background: Studies have shown that self-monitoring of blood pressure (BP) is effective when combined with co-interventions, but its efficacy varies in the presence of some co-morbidities. This study examined whether self-monitoring can reduce clinic BP in patients with hypertension-related co-morbidity. Methods: A systematic review was conducted of articles published in Medline, Embase, and the Cochrane Library up to January 2018. Randomized controlled trials of self-monitoring of BP were selected and individual patient data (IPD) were requested. Contributing studies were prospectively categorized by whether they examined a low- or high-intensity co-intervention. Change in BP and likelihood of uncontrolled BP at 12 months were examined according to the number and type of hypertension-related co-morbidities in a one-stage IPD meta-analysis. Results: A total of 22 trials were eligible, 16 of which were able to provide IPD for the primary outcome, including 6,522 (89%) participants with follow-up data. Self-monitoring was associated with reduced clinic systolic BP compared with usual care at 12-month follow-up, regardless of the number of hypertension-related co-morbidities (-3.12 mm Hg [95% confidence interval -4.78, -1.46 mm Hg]; P value for interaction with number of morbidities = 0.260). High-intensity interventions were more effective than low-intensity interventions in patients with obesity (P < 0.001 for all outcomes), and possibly stroke (P < 0.004 for the BP control outcome only), but this effect was not observed in patients with coronary heart disease, diabetes, or chronic kidney disease. Conclusions: Self-monitoring lowers BP regardless of the number of hypertension-related co-morbidities, but may only be effective in conditions such as obesity or stroke when combined with high-intensity co-interventions.
Determining which automatic digital blood pressure device performs adequately: A systematic review
The aim of this study was to systematically examine the proportion of accurate readings attained by automatic digital blood pressure (BP) devices in published validation studies. We included studies of automatic digital BP devices using recognized protocols. We summarized the data as the mean and s.d. of differences between measured and observed BP, and the proportion of measurements within 5 mm Hg. We included 79 articles (10 783 participants) reporting 113 studies from 22 different countries. Overall, 25/31 (81%), 37/41 (90%) and 34/35 (97%) devices passed the BHS protocol, the AAMI protocol and the ESH international protocol (ESH-IP), respectively. For devices that passed the BHS protocol, the proportion of measured values within 5 mm Hg of the observed value ranged from 60 to 86% (AAMI protocol: 47-94%; ESH-IP: 54-89%). Results for the same device varied significantly when a different protocol was used (for the Omron HEM-907, 80% of readings were within 5 mm Hg using the AAMI protocol compared with 62% with the ESH-IP). Even devices with a mean difference of zero show high variation: a device with 74% of BP measurements within 5 mm Hg would require six further BP measurements to reduce variation to 95% of readings within 5 mm Hg. Current protocols for validating BP monitors give no guarantee of accuracy in clinical practice. Devices may pass even the most rigorous protocol with as few as 60% of readings within 5 mm Hg of the observed value. Multiple readings are essential to provide clinicians and patients with accurate information on which to base diagnostic and treatment decisions.
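The paper's point that multiple readings are essential rests on averaging arithmetic: the standard deviation of the mean of n independent readings shrinks by a factor of √n. A sketch under an assumed normal error model (the SD, tolerance and coverage below are illustrative choices, not the paper's figures):

```python
from statistics import NormalDist

def readings_needed(sd_single, tol=5.0, coverage=0.95):
    """Smallest n such that the mean of n i.i.d. normal measurement
    errors (per-reading SD = sd_single) lies within ±tol mm Hg with
    the requested coverage probability."""
    z = NormalDist().inv_cdf(0.5 + coverage / 2)  # ≈1.96 for 95% coverage
    n = 1
    while z * sd_single / n ** 0.5 > tol:
        n += 1
    return n

# With an assumed per-reading error SD of 8 mm Hg, averaging ten
# readings keeps 95% of means within 5 mm Hg of the true value.
n = readings_needed(sd_single=8.0)
```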
Home measurement of blood pressure and cardiovascular disease: Systematic review and meta-analysis of prospective studies
Objective: To examine the relationship between home blood pressure (BP) and the risk of all-cause mortality, cardiovascular mortality and cardiovascular events. Methods: We conducted a systematic review and meta-analysis of prospective studies of home BP. Primary outcomes were all-cause mortality, cardiovascular mortality and cardiovascular events. We extracted hazard ratios and 95% confidence intervals (CIs), which were pooled with a random-effects model. Heterogeneity was assessed using the I2 statistic. Results: We identified eight studies with 17 698 participants. Follow-up was 3.2-10.9 years. For all-cause mortality (n = 747) the hazard ratio for home BP was 1.14 (95% CI 1.01-1.29) per 10 mmHg increase in systolic BP compared to 1.07 (0.91-1.26) for office BP. For cardiovascular mortality (n = 193) the hazard ratio for home BP was 1.29 (1.02-1.64) per 10 mmHg increase in systolic BP compared to 1.15 (0.91-1.46) for office BP. For cardiovascular events (n = 699) the hazard ratio for home BP was 1.14 (1.09-1.20) per 10 mmHg increase in systolic BP compared to 1.10 (1.06-1.15) for office BP. In three studies which adjusted for office and home BP the hazard ratio was 1.20 (1.11-1.30) per 10 mmHg increase in systolic BP for home BP adjusted for office BP compared to 0.99 (0.93-1.07) per 10 mmHg increase in systolic BP for office BP adjusted for home BP. Diastolic results were similar. Conclusions: Home BP remained a significant predictor of cardiovascular mortality and cardiovascular events after adjusting for office BP, suggesting it is an important prognostic variable over and above office BP.
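Pooling hazard ratios as above first requires recovering each log-HR and its standard error from the reported HR and 95% CI. A sketch (inverse-variance fixed-effect pooling is shown for brevity, whereas the review used a random-effects model; the input HRs are hypothetical, not the review's data):

```python
import math

def log_hr_and_se(hr, lo, hi):
    """Recover the log hazard ratio and its standard error from a
    reported HR with a 95% CI (CI width = 2 * 1.96 * SE on the log scale)."""
    return math.log(hr), (math.log(hi) - math.log(lo)) / (2 * 1.96)

def pool_hazard_ratios(hr_cis):
    """Inverse-variance pooled HR, combined on the log scale."""
    num = den = 0.0
    for hr, lo, hi in hr_cis:
        b, se = log_hr_and_se(hr, lo, hi)
        w = 1.0 / se ** 2        # weight = 1 / variance of log-HR
        num += w * b
        den += w
    return math.exp(num / den)

# Hypothetical per-study HRs per 10 mmHg increase in systolic BP:
pooled_hr = pool_hazard_ratios([(1.14, 1.01, 1.29), (1.20, 1.11, 1.30)])
```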
Impact of Changes to National Hypertension Guidelines on Hypertension Management and Outcomes in the United Kingdom
In recent years, national and international guidelines have recommended the use of out-of-office blood pressure monitoring for diagnosing hypertension. Despite evidence of cost-effectiveness, critics expressed concerns that this would increase cardiovascular morbidity. We assessed the impact of these changes on the incidence of hypertension, out-of-office monitoring and cardiovascular morbidity using routine clinical data from English general practices, linked to inpatient hospital, mortality, and socio-economic status data. We studied 3 937 191 adults with a median follow-up of 4.2 years (49% men, mean age = 39.7 years) between April 1, 2006 and March 31, 2017. Interrupted time series analysis was used to examine the impact of changes to English hypertension guidelines in 2011 on the incidence of hypertension (primary outcome). Secondary outcomes included the rate of out-of-office monitoring and cardiovascular events. Across the study period, the incidence of hypertension fell from 2.1 to 1.4 per 100 person-years. The change in guidance in 2011 was not associated with an immediate change in incidence (change in rate = 0.01 [95% CI, -0.18 to 0.20]) but did result in a levelling out of the downward trend (change in yearly trend = 0.09 [95% CI, 0.04 to 0.15]). Ambulatory monitoring increased significantly in 2011/2012 (change in rate = 0.52 [95% CI, 0.43 to 0.60]). The rate of cardiovascular events remained unchanged (change in rate = -0.02 [95% CI, -0.05 to 0.02]). In summary, changes to hypertension guidelines in 2011 were associated with a stabilisation in incidence and no increase in cardiovascular events. Guidelines should continue to recommend out-of-office monitoring for the diagnosis of hypertension.
How do home and clinic blood pressure readings compare in pregnancy? A systematic review and individual patient data meta-analysis
Hypertensive disorders during pregnancy result in substantial maternal morbidity and are a leading cause of maternal deaths worldwide. Self-monitoring of blood pressure (BP) might improve the detection and management of hypertensive disorders of pregnancy, but few data are available, including regarding appropriate thresholds. This systematic review and individual patient data analysis aimed to assess the current evidence on differences between clinic and self-monitored BP through pregnancy. MEDLINE and 10 other electronic databases were searched for articles published up to and including July 2016, using a strategy designed to capture all the literature on self-monitoring of BP during pregnancy. Investigators of included studies were contacted with a request for individual patient data: self-monitored and clinic BP and demographic data. Twenty-one studies that utilized self-monitoring of BP during pregnancy were identified. Individual patient data on self-monitored and clinic readings were available from 7 published and 1 unpublished article (8 studies; n=758), and 2 further studies published summary data. Analysis revealed a mean difference between self-monitored and clinic systolic BP of ≤1.2 mm Hg throughout pregnancy, although there was significant heterogeneity (difference in means, I2 >80% throughout pregnancy). Although the overall population difference was small, levels of white coat hypertension were high, particularly toward the end of pregnancy. The available literature includes no evidence of a systematic difference between self and clinic readings, suggesting that appropriate treatment and diagnostic thresholds for self-monitoring during pregnancy would be equivalent to standard clinic thresholds.
Protocol: A systematic review and network meta-analysis of the effects of different doses of licensed statins on LDL cholesterol in humans in order to generate dose-response curves
This is a protocol for a study in which we shall seek to generate dose-response curves relating the daily doses of different statins currently licensed for clinical use to their effects in reducing LDL cholesterol, for comparison of calculated ED50 values with the dosages typically used in clinical practice. This will also allow a comparison of the different dosages of different statins that are capable of producing the same LDL-lowering effect.