Frequency of Renal Monitoring - Creatinine and Cystatin C (FORM-2C): An observational cohort study of patients with reduced eGFR in primary care
Background: Monitoring is the mainstay of chronic kidney disease management in primary care; however, there is little evidence about the best way to do this. Aim: To compare the effectiveness of estimated glomerular filtration rate (eGFR) derived from serum creatinine and serum cystatin C to predict renal function decline among those with a recent eGFR of 30-89 ml/min/1.73 m2. Design and setting: Observational cohort study in UK primary care. Method: Serum creatinine and serum cystatin C were both measured at seven study visits over 2 years in 750 patients aged ≥18 years with an eGFR of 30-89 ml/min/1.73 m2 within the previous year. The primary outcome was change in eGFR derived from serum creatinine or serum cystatin C between 6 and 24 months. Results: Average change in eGFR was 0.51 ml/min/1.73 m2/year when estimated by serum creatinine and -2.35 ml/min/1.73 m2/year when estimated by serum cystatin C. The c-statistic for predicting renal decline using serum creatinine-derived eGFR was 0.495 (95% confidence interval [CI] = 0.471 to 0.519). The equivalent c-statistic using serum cystatin C-derived eGFR was 0.497 (95% CI = 0.468 to 0.525). Similar results were obtained when restricting analyses to those aged ≥75 or <75 years, or with eGFR ≥60 ml/min/1.73 m2. In those with eGFR <60 ml/min/1.73 m2, serum cystatin C-derived eGFR was more predictive than serum creatinine-derived eGFR for future decline in kidney function. Conclusion: In the primary analysis, neither eGFR estimated from serum creatinine nor eGFR estimated from serum cystatin C predicted future change in kidney function, partly due to small changes during the 2 years. Some secondary analyses suggested that serum cystatin C was the more useful biomarker for estimating eGFR, especially in those with a baseline eGFR <60 ml/min/1.73 m2.
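For readers unfamiliar with the c-statistic reported above, the sketch below shows how a concordance statistic for a continuous predictor (here a baseline eGFR value) against a binary decline outcome can be computed; the data and variable names are simulated placeholders, not the FORM-2C dataset or analysis code.

```python
# Minimal sketch: c-statistic (concordance) of a baseline eGFR value as a
# predictor of subsequent renal decline (binary outcome). Illustrative only;
# the simulated data and variable names are assumptions, not the FORM-2C data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
baseline_egfr = rng.normal(65, 15, size=750)   # ml/min/1.73 m2
declined = rng.integers(0, 2, size=750)        # 1 = kidney function declined

# Lower baseline eGFR is hypothesised to predict decline, hence the sign flip.
c_statistic = roc_auc_score(declined, -baseline_egfr)
print(f"c-statistic: {c_statistic:.3f}")
```

With uninformative simulated data the c-statistic sits near 0.5, the "no better than chance" value against which the study's estimates of 0.495 and 0.497 can be read.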
Design, methods, and reporting of impact studies of cardiovascular clinical prediction rules are suboptimal: a systematic review
Objectives: To evaluate the design, methods, and reporting of impact studies of cardiovascular clinical prediction rules (CPRs). Study Design and Setting: We conducted a systematic review. Impact studies of cardiovascular CPRs were identified by forward citation and electronic database searches. We categorized impact study designs as appropriate if they were randomized or nonrandomized experimental designs, excluding uncontrolled before-after studies. For impact studies with an appropriate design, we assessed the quality of methods and reporting. We compared the quality of methods and reporting between impact studies and matched control studies. Results: We found 110 impact studies of cardiovascular CPRs. Of these, 65 (59.1%) used inappropriate designs. Of the 45 impact studies with an appropriate design, 31 (68.9%) had substantial risk of bias. Impact studies with an appropriate design adhered to a mean of 10.2 of 21 reporting domains (95% confidence interval, 9.3 to 11.1). The quality of methods and reporting was not clearly different between impact and matched control studies. Conclusion: We found that most impact studies either used an inappropriate study design, had substantial risk of bias, or complied poorly with reporting guidelines. This appears to be a common feature of complex interventions. Users of CPRs should critically evaluate the evidence for the effectiveness of CPRs.
Self-monitoring of Blood Pressure in Patients with Hypertension-Related Multi-morbidity: Systematic Review and Individual Patient Data Meta-analysis
Background: Studies have shown that self-monitoring of blood pressure (BP) is effective when combined with co-interventions, but its efficacy varies in the presence of some co-morbidities. This study examined whether self-monitoring can reduce clinic BP in patients with hypertension-related co-morbidity. Methods: A systematic review was conducted of articles published in Medline, Embase, and the Cochrane Library up to January 2018. Randomized controlled trials of self-monitoring of BP were selected and individual patient data (IPD) were requested. Contributing studies were prospectively categorized by whether they examined a low- or high-intensity co-intervention. Change in BP and likelihood of uncontrolled BP at 12 months were examined according to the number and type of hypertension-related co-morbidities in a one-stage IPD meta-analysis. Results: A total of 22 trials were eligible, 16 of which were able to provide IPD for the primary outcome, including 6,522 (89%) participants with follow-up data. Self-monitoring was associated with reduced clinic systolic BP compared with usual care at 12-month follow-up, regardless of the number of hypertension-related co-morbidities (-3.12 mm Hg [95% confidence interval -4.78 to -1.46 mm Hg]; P value for interaction with number of morbidities = 0.260). High-intensity interventions were more effective than low-intensity interventions in patients with obesity (P < 0.001 for all outcomes), and possibly stroke (P < 0.004 for the BP control outcome only), but this effect was not observed in patients with coronary heart disease, diabetes, or chronic kidney disease. Conclusions: Self-monitoring lowers BP regardless of the number of hypertension-related co-morbidities, but may only be effective in conditions such as obesity or stroke when combined with high-intensity co-interventions.
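As a rough illustration of the one-stage IPD approach described above, the sketch below fits a single mixed model to simulated patient-level data pooled across trials, with a random intercept per study and a treatment-by-morbidity interaction; all column names, effect sizes and data are hypothetical assumptions, not the review's dataset or code.

```python
# Minimal sketch of a one-stage IPD meta-analysis: pool patient-level data
# from all trials in one mixed model with a study-level random intercept.
# The data and column names are simulated assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "study": rng.integers(0, 16, size=n),            # 16 contributing trials
    "self_monitoring": rng.integers(0, 2, size=n),   # 1 = self-monitoring arm
    "n_comorbidities": rng.integers(0, 4, size=n),
})
df["sbp_change"] = (-3.0 * df["self_monitoring"]
                    + rng.normal(0, 12, size=n))     # 12-month change in clinic SBP

# 'self_monitoring' estimates the pooled treatment effect; the interaction
# term tests whether the effect varies with the number of co-morbidities.
model = smf.mixedlm("sbp_change ~ self_monitoring * n_comorbidities",
                    df, groups=df["study"])
print(model.fit().summary())
```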
Determining which automatic digital blood pressure device performs adequately: A systematic review
The aim of this study was to systematically examine the proportion of accurate readings attained by automatic digital blood pressure (BP) devices in published validation studies. We included studies of automatic digital BP devices using recognized protocols. We summarized the data as the mean and s.d. of differences between measured and observed BP, and the proportion of measurements within 5 mm Hg. We included 79 articles (10 783 participants) reporting 113 studies from 22 different countries. Overall, 25/31 (81%), 37/41 (90%) and 34/35 (97%) devices passed the BHS, AAMI and ESH international protocol (ESH-IP), respectively. For devices that passed the BHS protocol, the proportion of measured values within 5 mm Hg of the observed value ranged from 60 to 86% (AAMI protocol 47-94%; ESH-IP 54-89%). Results for the same device varied significantly when a different protocol was used (for the Omron HEM-907, 80% of readings were within 5 mm Hg using the AAMI protocol compared with 62% using the ESH-IP). Even devices with a mean difference of zero can show high variation: a device with 74% of BP measurements within 5 mm Hg would require six further BP measurements to reduce variation to 95% of readings within 5 mm Hg. Current protocols for validating BP monitors give no guarantee of accuracy in clinical practice. Devices may pass even the most rigorous protocol with as few as 60% of readings within 5 mm Hg of the observed value. Multiple readings are essential to provide clinicians and patients with accurate information on which to base diagnostic and treatment decisions.
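A minimal sketch of the summary measures used in this review, computed from a hypothetical paired set of device and reference readings (the data and the 85-reading sample are assumptions, not any particular validation study):

```python
# Minimal sketch: summarise a BP device validation as mean/SD of differences
# and the proportion of readings within 5 mm Hg of the reference.
import numpy as np

rng = np.random.default_rng(2)
reference_sbp = rng.normal(135, 18, size=85)               # observer readings
device_sbp = reference_sbp + rng.normal(0, 6, size=85)     # device readings

diff = device_sbp - reference_sbp
mean_bias = diff.mean()
sd_diff = diff.std(ddof=1)
within_5 = np.mean(np.abs(diff) <= 5)

print(f"mean difference: {mean_bias:.1f} mm Hg, SD: {sd_diff:.1f} mm Hg")
print(f"proportion within 5 mm Hg: {within_5:.0%}")
```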
Home measurement of blood pressure and cardiovascular disease: Systematic review and meta-analysis of prospective studies
Objective: To examine the relationship between home blood pressure (BP) and the risk of all-cause mortality, cardiovascular mortality and cardiovascular events. Methods: We conducted a systematic review and meta-analysis of prospective studies of home BP. Primary outcomes were all-cause mortality, cardiovascular mortality and cardiovascular events. We extracted hazard ratios and 95% confidence intervals (CIs), which were pooled with a random-effects model. Heterogeneity was assessed using the I² statistic. Results: We identified eight studies with 17 698 participants. Follow-up was 3.2-10.9 years. For all-cause mortality (n = 747) the hazard ratio for home BP was 1.14 (95% CI 1.01-1.29) per 10 mmHg increase in systolic BP compared to 1.07 (0.91-1.26) for office BP. For cardiovascular mortality (n = 193) the hazard ratio for home BP was 1.29 (1.02-1.64) per 10 mmHg increase in systolic BP compared to 1.15 (0.91-1.46) for office BP. For cardiovascular events (n = 699) the hazard ratio for home BP was 1.14 (1.09-1.20) per 10 mmHg increase in systolic BP compared to 1.10 (1.06-1.15) for office BP. In three studies which adjusted for both office and home BP, the hazard ratio was 1.20 (1.11-1.30) per 10 mmHg increase in systolic BP for home BP adjusted for office BP, compared to 0.99 (0.93-1.07) per 10 mmHg increase in systolic BP for office BP adjusted for home BP. Diastolic results were similar. Conclusions: Home BP remained a significant predictor of cardiovascular mortality and cardiovascular events after adjusting for office BP, suggesting that it is an important prognostic variable over and above office BP.
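A minimal sketch of the random-effects pooling step described in the methods, using DerSimonian-Laird estimation on log hazard ratios; the four example hazard ratios and confidence intervals are illustrative assumptions, not the studies included in this review:

```python
# Minimal sketch of random-effects (DerSimonian-Laird) pooling of hazard
# ratios, e.g. per 10 mmHg increase in systolic BP. Example data only.
import numpy as np

hr = np.array([1.10, 1.18, 1.25, 1.05])
ci_low = np.array([0.98, 1.05, 1.08, 0.90])
ci_high = np.array([1.23, 1.33, 1.45, 1.22])

y = np.log(hr)                                     # log hazard ratios
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
v = se ** 2

w_fixed = 1 / v
y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - y_fixed) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) /
           (np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)))

w = 1 / (v + tau2)                                 # random-effects weights
pooled = np.sum(w * y) / np.sum(w)
se_pooled = np.sqrt(1 / np.sum(w))
print(f"pooled HR: {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se_pooled):.2f}"
      f"-{np.exp(pooled + 1.96 * se_pooled):.2f})")
```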
Impact of Changes to National Hypertension Guidelines on Hypertension Management and Outcomes in the United Kingdom
In recent years, national and international guidelines have recommended the use of out-of-office blood pressure monitoring for diagnosing hypertension. Despite evidence of cost-effectiveness, critics expressed concerns that this would increase cardiovascular morbidity. We assessed the impact of these changes on the incidence of hypertension, out-of-office monitoring and cardiovascular morbidity using routine clinical data from English general practices, linked to inpatient hospital, mortality, and socio-economic status data. We studied 3 937 191 adults with a median follow-up of 4.2 years (49% men, mean age = 39.7 years) between April 1, 2006 and March 31, 2017. Interrupted time series analysis was used to examine the impact of changes to English hypertension guidelines in 2011 on the incidence of hypertension (primary outcome). Secondary outcomes included the rate of out-of-office monitoring and cardiovascular events. Across the study period, the incidence of hypertension fell from 2.1 to 1.4 per 100 person-years. The change in guidance in 2011 was not associated with an immediate change in incidence (change in rate = 0.01 [95% CI, -0.18 to 0.20]) but did result in a levelling out of the downward trend (change in yearly trend = 0.09 [95% CI, 0.04 to 0.15]). Ambulatory monitoring increased significantly in 2011/2012 (change in rate = 0.52 [95% CI, 0.43 to 0.60]). The rate of cardiovascular events remained unchanged (change in rate = -0.02 [95% CI, -0.05 to 0.02]). In summary, changes to hypertension guidelines in 2011 were associated with a stabilisation in incidence and no increase in cardiovascular events. Guidelines should continue to recommend out-of-office monitoring for the diagnosis of hypertension.
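A minimal sketch of an interrupted time series (segmented regression) analysis of the kind described above, fitted to a simulated quarterly incidence series rather than the CPRD data; the change point, trend sizes and noise level are assumptions for illustration only:

```python
# Minimal sketch of interrupted time series / segmented regression: estimate
# the immediate level change ('post') and the change in trend ('t_after')
# after a guideline change. Simulated data; real analyses would also address
# seasonality and autocorrelation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
quarters = np.arange(44)                         # ~11 years of quarterly data
post = (quarters >= 20).astype(int)              # 1 after the guideline change
incidence = (2.1 - 0.02 * quarters               # pre-existing downward trend
             + 0.015 * post * (quarters - 20)    # trend levels out afterwards
             + rng.normal(0, 0.05, size=44))

df = pd.DataFrame({"t": quarters, "post": post,
                   "t_after": post * (quarters - 20), "incidence": incidence})

fit = smf.ols("incidence ~ t + post + t_after", df).fit()
print(fit.params)
```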
How do home and clinic blood pressure readings compare in pregnancy? A systematic review and individual patient data meta-analysis
Hypertensive disorders during pregnancy result in substantial maternal morbidity and are a leading cause of maternal deaths worldwide. Self-monitoring of blood pressure (BP) might improve the detection and management of hypertensive disorders of pregnancy, but few data are available, including regarding appropriate thresholds. This systematic review and individual patient data analysis aimed to assess the current evidence on differences between clinic and self-monitored BP through pregnancy. MEDLINE and 10 other electronic databases were searched for articles published up to and including July 2016 using a strategy designed to capture all the literature on self-monitoring of BP during pregnancy. Investigators of included studies were contacted and asked to provide individual patient data: self-monitored and clinic BP and demographic data. Twenty-one studies that utilized self-monitoring of BP during pregnancy were identified. Individual patient data on self-monitored and clinic readings were available from seven published and one unpublished article (eight studies; n=758), and two further studies published summary data. Analysis revealed a mean difference between self-monitored and clinic readings of ≤1.2 mm Hg systolic BP throughout pregnancy, although there was significant heterogeneity (I² > 80% for the difference in means throughout pregnancy). Although the overall population difference was small, levels of white coat hypertension were high, particularly toward the end of pregnancy. The available literature includes no evidence of a systematic difference between self and clinic readings, suggesting that appropriate treatment and diagnostic thresholds for self-monitoring during pregnancy would be equivalent to standard clinic thresholds.
Protocol: A systematic review and network meta-analysis of the effects of different doses of licensed statins on LDL cholesterol in humans in order to generate dose-response curves
This is a protocol for a study in which we shall seek to generate dose-response curves relating the daily doses of different statins currently licensed for clinical use to their effects in reducing LDL cholesterol, for comparison of calculated ED50 values with the dosages typically used in clinical practice. This will also allow a comparison of the different dosages of different statins that are capable of producing the same LDL-lowering effect.
Trends in kidney function testing in UK primary care since the introduction of the quality and outcomes framework: A retrospective cohort study using CPRD
Objectives: To characterise serum creatinine and urinary protein testing in UK general practices from 2005 to 2013 and to examine how the frequency of testing varies across demographic factors, with the presence of chronic conditions and with the prescribing of drugs for which kidney function monitoring is recommended. Design: Retrospective open cohort study. Setting: Routinely collected data from 630 UK general practices contributing to the Clinical Practice Research Datalink. Participants: 4 573 275 patients aged over 18 years registered at up-to-standard practices between 1 April 2005 and 31 March 2013. At study entry, no patients were kidney transplant donors or recipients, pregnant or on dialysis. Primary outcome measures: The rate of serum creatinine and urinary protein testing per year and the percentage of patients with isolated and repeated testing per year. Results: The rate of serum creatinine testing increased linearly across all age groups. The rate of proteinuria testing increased sharply in the 2009-2010 financial year but only for patients aged 60 years or over. For patients with established chronic kidney disease (CKD), creatinine testing increased rapidly in 2006-2007 and 2007-2008, and proteinuria testing in 2009-2010, reflecting the introduction of Quality and Outcomes Framework indicators. In adjusted analyses, CKD Read codes were associated with up to a twofold increase in the rate of serum creatinine testing, while other chronic conditions and potentially nephrotoxic drugs were associated with up to a sixfold increase. Regional variation in serum creatinine testing reflected country boundaries. Conclusions: Over a nine-year period, there have been increases in the numbers of patients having kidney function tests annually and in the frequency of testing. Changes in the recommended management of CKD in primary care were the primary determinant, and increases persist even after controlling for demographic and patient-level factors. Future studies should address whether increased testing has led to better outcomes.
Short- and medium-term effects of light to moderate alcohol intake on glycaemic control in diabetes mellitus: a systematic review and meta-analysis of randomized trials
Background: People with diabetes are told that drinking alcohol may increase their risk of hypoglycaemia. Aims: To report the effects of alcohol consumption on glycaemic control in people with diabetes mellitus. Methods: Medline, EMBASE and the Cochrane Library databases were searched in 2015 to identify randomized trials that compared alcohol consumption with no alcohol use, reporting glycaemic control in people with diabetes. Data on blood glucose, HbA1c and numbers of hypoglycaemic episodes were pooled using random effects meta-analysis. Results: Pooled data from nine short-term studies showed no difference in blood glucose concentrations between those who drank alcohol in doses of 16–80 g (median 20 g, 2.5 units) compared with those who did not drink alcohol at 0.5, 2, 4 and 24 h after alcohol consumption. Pooled data from five medium-term studies showed that there was no difference in blood glucose or HbA1c concentrations at the end of the study between those who drank 11–18 g alcohol/day (median 13 g/day, 1.5 units/day) for 4–104 weeks and those who did not. We found no evidence of a difference in number of hypoglycaemic episodes or in withdrawal rates between randomized groups. Conclusions: Studies to date have not provided evidence that drinking light to moderate amounts of alcohol, with or without a meal, affects any measure of glycaemic control in people with Type 2 diabetes. These results suggest that current advice that people with diabetes do not need to refrain from drinking moderate quantities of alcohol does not need to be changed; risks to those with Type 1 diabetes remain uncertain.
Quantifying the effects of diuretics and β-adrenoceptor blockers on glycaemic control in diabetes mellitus - A systematic review and meta-analysis
Aims: Although there are reports that β-adrenoceptor antagonists (beta-blockers) and diuretics can affect glycaemic control in people with diabetes mellitus, there is no clear information on how blood glucose concentrations may change and by how much. We report results from a systematic review to quantify the effects of these antihypertensive drugs on glycaemic control in adults with established diabetes. Methods: We systematically reviewed the literature to identify randomized controlled trials in which glycaemic control was studied in adults with diabetes taking either beta-blockers or diuretics. We combined data on HbA1c and fasting blood glucose using fixed effects meta-analysis. Results: From 3864 papers retrieved, we found 10 studies of beta-blockers and 12 studies of diuretics to include in the meta-analysis. One study included both comparisons, totalling 21 included reports. Beta-blockers increased fasting blood glucose concentrations by 0.64 mmol/l (95% CI 0.24, 1.03) and diuretics by 0.77 mmol/l (95% CI 0.14, 1.39) compared with placebo. Effect sizes were largest in trials of non-selective beta-blockers (1.33, 95% CI 0.72, 1.95) and thiazide diuretics (1.69, 95% CI 0.60, 2.69). Beta-blockers increased HbA1c concentrations by 0.75% (95% CI 0.30, 1.20) and diuretics by 0.24% (95% CI -0.17, 0.65) compared with placebo. There was no significant difference in the number of hypoglycaemic events between beta-blockers and placebo in three trials. Conclusions: Randomized trials suggest that thiazide diuretics and non-selective beta-blockers increase fasting blood glucose and HbA1c concentrations in patients with diabetes by moderate amounts. These data will inform prescribing and monitoring of beta-blockers and diuretics in patients with diabetes.
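A minimal sketch of the fixed-effect inverse-variance pooling named in the methods; the per-trial mean differences and standard errors are illustrative assumptions, not the included trials:

```python
# Minimal sketch of fixed-effect inverse-variance pooling of mean differences
# in fasting blood glucose (drug minus placebo). Example data only.
import numpy as np

md = np.array([0.5, 0.9, 0.4, 0.8])          # mean difference, mmol/l
se = np.array([0.30, 0.35, 0.25, 0.40])      # standard errors

w = 1 / se ** 2                              # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)
se_pooled = np.sqrt(1 / np.sum(w))
print(f"pooled difference: {pooled:.2f} mmol/l "
      f"(95% CI {pooled - 1.96 * se_pooled:.2f} to "
      f"{pooled + 1.96 * se_pooled:.2f})")
```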
Methods for meta-analysis of pharmacodynamic dose–response data with application to multi-arm studies of alogliptin
Standard methods for meta-analysis of dose–response data in epidemiology assume a model with a single scalar parameter, such as a log-linear relationship between exposure and outcome; such models are implicitly unbounded. In contrast, in pharmacology, multi-parameter models, such as the widely used Emax model, are used to describe relationships that are bounded above and below. We propose methods for estimating the parameters of a dose–response model by meta-analysis of summary data from the results of randomized controlled trials of a drug, in which each trial uses multiple doses of the drug of interest (possibly including dose 0 or placebo). We assume that, for each randomized arm of each trial, the mean and standard error of a continuous response measure and the corresponding allocated dose are available. We consider weighted least squares fitting of the model to the mean and dose pairs from all arms of all studies, and a two-stage procedure in which scalar inverse-variance meta-analysis is performed at each dose and the dose–response model is fitted to the results by weighted least squares. We then compare these with two further methods, inspired by network meta-analysis, that fit the model to the contrasts between doses. We illustrate the methods by fitting the Emax model to a collection of multi-arm, multiple-dose, randomized controlled trials of alogliptin, a drug for the management of diabetes mellitus, and further examine the properties of the four methods with sensitivity analyses and a simulation study. We find that all four methods produce broadly comparable point estimates for the parameters of most interest, but a single-stage method based on contrasts between doses produces the most appropriate confidence intervals. Although simpler methods may have pragmatic advantages, such as the use of standard software for scalar meta-analysis, more sophisticated methods are nevertheless preferable for their advantages in estimation.
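A minimal sketch of the simplest of the four approaches, weighted least squares fitting of an Emax model to arm-level (dose, mean) pairs; the dose and response values below are illustrative assumptions rather than the alogliptin data, and the sketch ignores the between-trial structure that the contrast-based methods are designed to handle:

```python
# Minimal sketch: weighted least squares fit of an Emax model to arm-level
# summary data (dose, mean response, SE) pooled across trials. Example data.
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_, ed50):
    """Bounded Emax dose-response model: E0 + Emax * dose / (ED50 + dose)."""
    return e0 + emax_ * dose / (ed50 + dose)

# One row per randomized arm: dose (mg/day), mean change in response, SE.
dose = np.array([0, 6.25, 12.5, 25, 50, 0, 12.5, 25])
mean = np.array([-0.10, -0.45, -0.60, -0.70, -0.75, -0.05, -0.55, -0.65])
se = np.array([0.05, 0.06, 0.06, 0.05, 0.07, 0.06, 0.05, 0.06])

# curve_fit with sigma=se performs weighted least squares (weights 1/se^2).
params, cov = curve_fit(emax, dose, mean, p0=[0.0, -0.7, 10.0],
                        sigma=se, absolute_sigma=True)
e0, emax_hat, ed50 = params
print(f"E0={e0:.2f}, Emax={emax_hat:.2f}, ED50={ed50:.1f} mg")
```

The fitted ED50 is the dose producing half the maximal effect, the quantity that dose-response summaries of this kind are typically built around.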
Optimal strategies for monitoring lipid levels in patients at risk or with cardiovascular disease: A systematic review with statistical and cost-effectiveness modelling
Background: Various lipid measurements in monitoring/screening programmes can be used, alone or in cardiovascular risk scores, to guide treatment for prevention of cardiovascular disease (CVD). Because some changes in lipids are due to variability rather than true change, the value of lipid-monitoring strategies needs evaluation. Objective: To determine the clinical value and cost-effectiveness of different monitoring intervals and different lipid measures for primary and secondary prevention of CVD. Data sources: We searched databases and clinical trials registers from 2007 (including the Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, the Clinical Trials Register, the Current Controlled Trials register, and the Cumulative Index to Nursing and Allied Health Literature) to update and extend previous systematic reviews. Patient-level data from the Clinical Practice Research Datalink and St Luke's Hospital, Japan, were used in statistical modelling. Utilities and health-care costs were drawn from the literature. Methods: In two meta-analyses, we used prospective studies to examine associations of lipids with CVD and mortality, and randomised controlled trials to estimate lipid-lowering effects of atorvastatin doses. Patient-level data were used to estimate progression and variability of lipid measurements over time, and hence to model lipid-monitoring strategies. Results are expressed as rates of true-/false-positive and true-/false-negative tests for high lipid or high CVD risk. We estimated incremental costs per quality-adjusted life-year. Results: A total of 115 publications reported the strength of association between different lipid measures and CVD events in 138 data sets. The summary adjusted hazard ratio per standard deviation of the total cholesterol (TC) to high-density lipoprotein (HDL) cholesterol ratio was 1.25 (95% confidence interval 1.15 to 1.35) for CVD in a primary prevention population, but heterogeneity was high (I² = 98%); similar results were observed for non-HDL cholesterol, apolipoprotein B and other ratio measures. Associations were smaller for other single lipid measures. Across 10 trials, low-dose atorvastatin (10 and 20 mg) effects ranged from a TC reduction of 0.92 mmol/l to 2.07 mmol/l, and a low-density lipoprotein reduction of between 0.88 mmol/l and 1.86 mmol/l. Effects of 40 mg and 80 mg were reported by one trial each. For primary prevention, over a 3-year period, we estimate annual monitoring would unnecessarily treat 9 per 1000 more men (28 vs. 19 per 1000) and 5 per 1000 more women (17 vs. 12 per 1000) than monitoring every 3 years. However, annual monitoring would also undertreat 9 per 1000 fewer men (7 vs. 16 per 1000) and 4 per 1000 fewer women (7 vs. 11 per 1000) than monitoring at 3-year intervals. For secondary prevention, over a 3-year period, annual monitoring would increase unnecessary treatment changes by 66 per 1000 men and 31 per 1000 women, and decrease undertreatment by 29 per 1000 men and 28 per 1000 women, compared with monitoring every 3 years. In the cost-effectiveness analyses, strategies with increased screening/monitoring dominate. Exploratory analyses found that any unknown harms of statins would need utility decrements as large as 0.08 (men) to 0.11 (women) per statin user to reverse this finding in primary prevention. Limitation: Heterogeneity in the meta-analyses. Conclusions: While acknowledging the known and potential unknown harms of statins, we find that more frequent monitoring strategies are cost-effective compared with others.
Regular lipid monitoring in those with and without CVD is likely to be beneficial to patients and to the health service. Future research should include trials of the benefits and harms of atorvastatin 40 and 80 mg, large-scale surveillance of statin safety, and investigation of the effect of monitoring on medication adherence. Study registration: This study is registered as PROSPERO CRD42013003727. Funding: The National Institute for Health Research Health Technology Assessment programme.
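As an illustration of the type of modelling described in the methods (true progression plus measurement variability, classified against a treatment threshold), the sketch below simulates annual versus 3-yearly monitoring and counts false-positive and false-negative test results per 1000 patients; every parameter value is an assumption chosen for illustration, not an estimate from the report:

```python
# Minimal sketch: simulate slow true progression of a lipid measure plus
# test-to-test variability, then count how often each monitoring interval
# triggers treatment when the true value is still below threshold (false
# positives) or misses truly high values (false negatives). Assumed values.
import numpy as np

rng = np.random.default_rng(4)
n_patients, years, threshold = 10_000, 3, 5.0      # TC:HDL ratio threshold

true_start = rng.normal(4.2, 0.6, size=n_patients)
drift = rng.normal(0.05, 0.03, size=n_patients)    # true progression per year
cv_noise = 0.35                                    # within-person SD per test

def monitor(interval):
    """Return false-positive and false-negative tests per 1000 over 3 years."""
    fp = fn = 0
    for year in range(interval, years + 1, interval):
        truth = true_start + drift * year
        measured = truth + rng.normal(0, cv_noise, size=n_patients)
        fp += np.sum((measured >= threshold) & (truth < threshold))
        fn += np.sum((measured < threshold) & (truth >= threshold))
    return 1000 * fp / n_patients, 1000 * fn / n_patients

print("annual:", monitor(1))
print("3-yearly:", monitor(3))
```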
Diagnostic accuracy study of three alcohol breathalysers marketed for sale to the public
Objectives: To assess the diagnostic accuracy of three personal breathalyser devices available for sale to the public and marketed to test safety to drive after drinking alcohol. Design: Prospective comparative diagnostic accuracy study comparing two single-use breathalysers and one digital multiuse breathalyser (index tests) to a police breathalyser (reference test). Setting: Establishments licensed to serve alcohol in a UK city. Participants: Of 222 participants recruited, 208 were included in the main analysis. Participants were eligible if they were 18 years old or over, had consumed alcohol and were not intending to drive within the following 6 h. Outcome measures: Sensitivity and specificity of the breathalysers for the detection of being at or over the UK legal driving limit (35 μg/100 mL breath alcohol concentration). Results: 18% of participants (38/208) were at or over the UK driving limit according to the police breathalyser. The digital multiuse breathalyser had a sensitivity of 89.5% (95% CI 75.9% to 95.8%) and a specificity of 64.1% (95% CI 56.6% to 71.0%). The single-use breathalysers had sensitivities of 94.7% (95% CI 75.4% to 99.1%) and 26.3% (95% CI 11.8% to 48.8%), and specificities of 50.6% (95% CI 40.4% to 60.7%) and 97.5% (95% CI 91.4% to 99.3%), respectively. A self-reported alcohol consumption threshold of 5 UK units or fewer had a higher sensitivity than all of the personal breathalysers. Conclusions: One alcohol breathalyser had a sensitivity of 26%, corresponding to false reassurance for approximately three in four people who are over the limit according to the reference standard, at least on the evening of drinking alcohol. The other devices tested had 90% sensitivity or higher. All estimates were subject to uncertainty. There is no clearly defined minimum sensitivity for this safety-critical application. We conclude that current regulatory frameworks do not ensure high sensitivity for devices marketed to consumers for a decision with potentially catastrophic consequences.
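A minimal sketch of the accuracy calculation, with 2x2 counts back-calculated from the digital multiuse device's reported results (34/38 over the limit detected; 109/170 under the limit correctly cleared) and Wilson 95% confidence intervals, which reproduce the intervals quoted above; this is an illustration, not the study's analysis code:

```python
# Minimal sketch: sensitivity and specificity of an index breathalyser against
# the police (reference) breathalyser, with Wilson 95% confidence intervals.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 34, 4     # over the limit on reference: detected / missed by device
tn, fp = 109, 61   # under the limit on reference: correctly cleared / flagged

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%} to {sens_ci[1]:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_ci[0]:.1%} to {spec_ci[1]:.1%})")
```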
Systematic review and metaanalysis comparing the bias and accuracy of the modification of diet in renal disease and chronic kidney disease epidemiology collaboration equations in community-based populations
BACKGROUND: The majority of patients with chronic kidney disease are diagnosed and monitored in primary care. Glomerular filtration rate (GFR) is a key marker of renal function, but direct measurement is invasive; in routine practice, equations are used to estimate GFR (eGFR) from serum creatinine. We systematically assessed the bias and accuracy of commonly used eGFR equations in populations relevant to primary care. CONTENT: MEDLINE, EMBASE, and the Cochrane Library were searched for studies comparing measured GFR (mGFR) with eGFR in adult populations comparable to primary care and reporting both the Modification of Diet in Renal Disease (MDRD) and the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations based on standardized creatinine measurements. We pooled data on mean bias (the difference between eGFR and mGFR) and on mean accuracy (the proportion of eGFR values within 30% of mGFR) using random-effects inverse-variance weighted metaanalysis. We included 48 studies of 26 875 patients that reported data on bias and/or accuracy. Metaanalysis of within-study comparisons, in which both formulae were tested on the same patient cohorts using isotope dilution-mass spectrometry-traceable creatinine, showed a lower mean bias in eGFR using CKD-EPI of 2.2 mL/min/1.73 m2 (95% CI, 1.1-3.2; 30 studies; I² = 74.4%) and a higher mean accuracy of CKD-EPI of 2.7% (95% CI, 1.6-3.8; 47 studies; I² = 55.5%). Metaregression showed that both bias and accuracy favored the CKD-EPI equation at higher mGFR values. SUMMARY: Both equations underestimated mGFR, but CKD-EPI gave more accurate estimates of GFR.
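For reference, a minimal sketch of the two creatinine-based equations being compared, using the widely published IDMS-traceable coefficients (serum creatinine in mg/dL); this is an illustration only, is not the review's analysis code, and does not include the 2021 race-free CKD-EPI refit:

```python
# Minimal sketch of the CKD-EPI 2009 and 4-variable MDRD (IDMS-traceable)
# creatinine equations, as commonly published; illustrative only.
def egfr_ckd_epi_2009(scr, age, female, black=False):
    """CKD-EPI 2009 creatinine equation (mL/min/1.73 m2); scr in mg/dL."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141 * min(scr / kappa, 1) ** alpha
            * max(scr / kappa, 1) ** -1.209
            * 0.993 ** age)
    return egfr * (1.018 if female else 1.0) * (1.159 if black else 1.0)

def egfr_mdrd(scr, age, female, black=False):
    """4-variable MDRD equation, IDMS-traceable (mL/min/1.73 m2)."""
    egfr = 175 * scr ** -1.154 * age ** -0.203
    return egfr * (0.742 if female else 1.0) * (1.212 if black else 1.0)

print(egfr_ckd_epi_2009(scr=1.1, age=70, female=True))  # ~51
print(egfr_mdrd(scr=1.1, age=70, female=True))           # ~49
```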
Performance of point-of-care HbA1c test devices: Implications for use in clinical practice - A systematic review and meta-analysis
Point-of-care (POC) devices could be used to measure hemoglobin A1c (HbA1c) in the doctors' office, allowing immediate feedback of results to patients. Reports have raised concerns about the analytical performance of some of these devices. We carried out a systematic review and meta-analysis using a novel approach to compare the accuracy and precision of POC HbA1c devices. The Medline, Embase and Web of Science databases were searched in June 2015 for published reports comparing POC HbA1c devices with laboratory methods. Two reviewers screened articles and extracted data on bias, precision and diagnostic accuracy. Mean bias and variability between the POC and laboratory tests were combined in a meta-analysis. Study quality was assessed using the QUADAS-2 tool. Two researchers independently reviewed 1739 records for eligibility. Sixty-one studies were included in the meta-analysis of mean bias. The devices evaluated were the A1cgear, A1cNow, Afinion, B-analyst, Clover, Cobas b101, DCA 2000/Vantage, HemoCue, Innovastar, Nycocard, Quo-Lab, Quo-Test and SDA1cCare. Nine devices had a negative mean bias, which was significant for three devices. There was substantial variability in bias within devices. There was no difference in bias between clinical and laboratory operators for the two devices in which this was assessed. This is the first meta-analysis to directly compare the performance of POC HbA1c devices. Use of a device with a mean negative bias compared with a laboratory method may lead to higher levels of glycaemia and a lower risk of hypoglycaemia. The implications of this for clinical decision-making and patient outcomes now need to be tested in a randomized trial.