Validation of clinical prediction models: what does the “calibration slope” really measure?
Stevens RJ, Poppe KK.
Background and Objectives: Definitions of calibration, an aspect of model validation, have evolved over time. We examine the use and interpretation of the statistic currently referred to as the calibration slope.

Methods: The history of the term “calibration slope”, and its usage in papers published in 2016 and 2017, were reviewed. The behaviour of the slope was demonstrated in illustrative hypothetical examples and in two examples from the clinical literature.

Results: The paper in which the statistic was proposed described it as a measure of “spread” and did not use the term “calibration”. In illustrative examples, a slope of 1 can be associated with good or bad calibration, and this holds true across different definitions of calibration. In data extracted from a previous study, the slope was correlated with discrimination, not with overall calibration. Many authors of recent papers interpret the slope as a measure of calibration; a minority interpret it as a measure of discrimination or do not explicitly categorise it as either. Seventeen of thirty-three papers used the slope as the sole measure of calibration.

Conclusion: Misunderstanding about this statistic has led to many papers in which it is the sole measure of calibration, a practice that should be discouraged.
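As a hedged illustration of the point made in the Results (not an analysis from the paper itself), the following Python sketch simulates a hypothetical scenario in which predicted risks are systematically too high by a constant shift on the logit scale. The calibration slope is obtained, as is conventional, from a logistic regression of the observed outcomes on the logit of the predicted risks; in this constructed example the slope stays close to 1 even though overall calibration (calibration-in-the-large) is poor. All quantities (the simulated linear predictor, the +1 logit shift, the use of statsmodels) are assumptions chosen for illustration only.

```python
# Minimal sketch (hypothetical data, not from the paper): a calibration slope near 1
# can coexist with poor overall calibration, because the slope measures the spread of
# the predictions relative to the outcomes and ignores a constant shift.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

n = 100_000
lp_true = rng.normal(0.0, 1.0, n)            # true linear predictor (assumed)
p_true = 1.0 / (1.0 + np.exp(-lp_true))      # true event probabilities
y = rng.binomial(1, p_true)                  # observed binary outcomes

# Hypothetical model: correct spread, but risks shifted up by 1 on the logit scale
p_pred = 1.0 / (1.0 + np.exp(-(lp_true + 1.0)))

# Calibration slope: logistic regression of y on logit(predicted risk)
logit_pred = np.log(p_pred / (1.0 - p_pred))
fit = sm.Logit(y, sm.add_constant(logit_pred)).fit(disp=False)
intercept, slope = fit.params

print(f"calibration slope ≈ {slope:.2f}")    # close to 1 despite miscalibration
print(f"intercept         ≈ {intercept:.2f}")  # close to -1: risks too high overall
print(f"mean predicted = {p_pred.mean():.3f}, observed event rate = {y.mean():.3f}")
```

In this sketch the slope is approximately 1 while the mean predicted risk clearly exceeds the observed event rate, which is one way a slope of 1 can accompany bad calibration; reporting the slope alone would miss this.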