All You Need Is Conformal Prediction

An important yet easy-to-use tool for uncertainty quantification that every data scientist should know.

Jonte Dancker
Towards Data Science


Turning a point prediction into a prediction set for classification or a prediction interval for regression using Conformal Prediction; the resulting prediction regions let us quantify the uncertainty of the underlying ML model (Image by the author).

We must know how certain a model is when it makes predictions, as there is a risk associated with wrong predictions. Without quantifying the model’s uncertainty, an accurate prediction and a wild guess look the same. For example, a self-driving car must be certain that its driving path is free of obstacles. I have written about this in another article.

But how can we quantify the uncertainty of our model?

This is where Conformal Prediction comes into the picture. Conformal Prediction is a framework for uncertainty quantification that can turn any point prediction into a statistically valid prediction region. The region is a set of classes for classification problems or an interval for regression problems.
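To make this concrete, here is a minimal sketch of what a prediction set looks like for a classifier. The softmax scores and the threshold `q_hat` are made-up numbers for illustration; how `q_hat` is actually computed from a calibration set follows below.

```python
import numpy as np

# Hypothetical softmax output of a 3-class classifier for one sample.
# The point prediction would simply be the most likely class (class 0).
probs = np.array([0.70, 0.25, 0.05])

# Assumed calibration threshold, purely for illustration (it would be
# derived from a calibration set, as shown later in the article).
q_hat = 0.80

# Non-conformity score per class: 1 - predicted probability.
# The prediction set keeps every class whose score is at most q_hat.
prediction_set = np.where(1 - probs <= q_hat)[0]
print(prediction_set)  # [0 1] -> the prediction set {class 0, class 1}
```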

How do we turn a point prediction into a prediction region using Conformal Prediction?

Conformal Prediction uses past experience to determine the uncertainty of new predictions. To apply Conformal Prediction, we need a non-conformity score, a significance level alpha, and a calibration set.
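Here is a minimal sketch of how these three ingredients fit together for a regression problem. The synthetic data and the random forest are assumptions standing in for any model; the absolute residual serves as the non-conformity score.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(1000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=1000)

# Split off a calibration set the model never trains on.
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Non-conformity scores on the calibration set: |y - y_hat|.
scores = np.abs(y_cal - model.predict(X_cal))

# Significance level alpha = 0.1 -> 90 % target coverage.
alpha = 0.1
n = len(scores)
# Quantile of the calibration scores with the finite-sample
# correction (n + 1); "higher" rounds up to a conservative value.
q_hat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Turn a new point prediction into a prediction interval.
X_new = np.array([[0.5]])
y_pred = model.predict(X_new)
lower, upper = y_pred - q_hat, y_pred + q_hat
print(f"{y_pred[0]:.2f} -> [{lower[0]:.2f}, {upper[0]:.2f}]")
```

If the calibration data and the new data are exchangeable, intervals built this way contain the true value with probability of at least 1 − alpha, regardless of the underlying model.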

