Coverage vs Acceptance-Error Curves for Conformal Classification Models

Evgueni Smirnov*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

In this paper, we introduce coverage vs acceptance-error graphs as a visualization tool for comparing the performance of conformal predictors at a given significance level ε for any k-class classification task with k ≥ 2. We show that by plotting the performance of each predictor for different significance levels ε ∈ [0, 1], we obtain a coverage vs acceptance-error curve for that predictor. The area under this curve represents the probability that the p-value of a randomly chosen true class-label of any test instance is greater than the p-value of any false class-label of the same or any other test instance. This area can be used as a metric of the predictive efficiency of a conformal predictor once validity has been established. The new metric is unique in that it is related to the empirical coverage rate, and extensive experiments confirm its utility and its difference from existing predictive-efficiency criteria.
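
The area described in the abstract is, in effect, a Mann-Whitney-style statistic over conformal p-values: it compares the p-values assigned to true class-labels against those assigned to false class-labels across the whole test set. Below is a minimal sketch of how such an area could be estimated; the function name coverage_acceptance_auc, the array-based interface, and the half-credit treatment of tied p-values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def coverage_acceptance_auc(p_values, y_true):
    """Estimate the area under a coverage vs acceptance-error curve.

    Computed as the probability that the p-value of the true
    class-label of a random test instance exceeds the p-value of a
    false class-label of the same or any other test instance
    (a Mann-Whitney U statistic over the two groups of p-values).

    p_values : (n, k) array of conformal p-values, one column per class.
    y_true   : (n,) array of true class indices in {0, ..., k-1}.
    """
    n, k = p_values.shape
    idx = np.arange(n)
    true_p = p_values[idx, y_true]          # p-values of the true labels
    mask = np.ones((n, k), dtype=bool)
    mask[idx, y_true] = False
    false_p = p_values[mask]                # p-values of all false labels

    # Pairwise comparisons; ties get half credit (standard AUC convention,
    # assumed here rather than taken from the paper).
    greater = (true_p[:, None] > false_p[None, :]).sum()
    ties = (true_p[:, None] == false_p[None, :]).sum()
    return (greater + 0.5 * ties) / (true_p.size * false_p.size)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.uniform(size=(100, 3))          # synthetic, uninformative p-values
    y = rng.integers(0, 3, size=100)
    print(coverage_acceptance_auc(p, y))    # should be close to 0.5
```

On uninformative p-values the statistic is near 0.5, while a well-calibrated and efficient predictor, whose true-label p-values dominate the false-label ones, pushes it toward 1.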
Original language: English
Title of host publication: Proceedings of Machine Learning Research
Editors: Neil Lawrence
Pages: 534-545
Number of pages: 12
Volume: 204
Publication status: Published - 1 Jan 2023
Event: 12th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2023 - Limassol, Cyprus
Duration: 13 Sept 2023 - 15 Sept 2023
https://cml.rhul.ac.uk/copa2023/
https://copa-conference.com/

Publication series

Series: Proceedings of Machine Learning Research
ISSN: 2640-3498

Conference

Conference: 12th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2023
Abbreviated title: COPA 2023
Country/Territory: Cyprus
City: Limassol
Period: 13/09/23 - 15/09/23

Keywords

  • Conformal Prediction
  • Metrics
  • Performance Curves
  • Predictive Efficiency
