
Trusting Your Model’s Uncertainty?


In an ideal world, machine learning (ML) methods like deep learning are deployed to make predictions on data from the same distribution as that on which they were trained. But the practical reality can be quite different: camera lenses becoming blurry, sensors degrading, and changes to popular online topics can all create differences between the distribution of the data on which a model was trained and the distribution of the data to which it is applied, leading to what is known as covariate shift. For example, it was recently observed that deep learning models trained to detect pneumonia in chest x-rays would achieve very different levels of accuracy when evaluated on previously unseen hospitals’ data, due in part to subtle differences in image acquisition and processing.

In “Can you trust your model’s uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift,” presented at NeurIPS 2019, we benchmark the uncertainty of state-of-the-art deep learning models as they are exposed to both shifting data distributions and out-of-distribution data. In this work we consider a variety of input modalities, including images, text and online advertising data, exposing these deep learning models to increasingly shifted test data while carefully analyzing the behavior of their predictive probabilities. We also compare a variety of different methods for improving model uncertainty to see which strategies perform best under distribution shift.

What is Out-of-Distribution Data?
Deep learning models provide a probability with each prediction, representing the model confidence or uncertainty. As such, they can express what they don’t know and, correspondingly, abstain from prediction when the data is outside the realm of the original training dataset. In the case of covariate shift, uncertainty would ideally increase proportionally to any decrease in accuracy. A more extreme case is when data are not at all represented in the training set, i.e., when the data are out-of-distribution (OOD). For example, consider what happens when a cat-versus-dog image classifier is shown an image of an airplane. Would the model confidently predict incorrectly or would it assign a low probability to each class? In a related post we recently discussed methods we developed to identify such OOD examples. In this work we instead analyze the predictive uncertainty of models given out-of-distribution and shifted examples to see if the model probabilities reflect their ability to predict on such data.
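As a toy illustration of what abstaining could look like in practice (this is not a method from the paper), the sketch below thresholds the maximum softmax probability of a hypothetical cat-versus-dog classifier and declines to predict when confidence is low:

```python
import numpy as np

def maybe_abstain(probs, threshold=0.9):
    """Return the predicted class index, or None to abstain when the
    model's top probability falls below the confidence threshold."""
    probs = np.asarray(probs)
    if probs.max() >= threshold:
        return int(probs.argmax())
    return None

# An in-distribution image of a cat might yield a confident prediction:
print(maybe_abstain([0.97, 0.03]))   # -> 0 (predict "cat")
# For an out-of-distribution image (e.g., an airplane), we would ideally
# want probabilities closer to uniform, triggering an abstention:
print(maybe_abstain([0.55, 0.45]))   # -> None (abstain)
```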

Quantifying the Quality of Uncertainty
What does it mean for one model to have better representation of its uncertainty than another? While this can be a nuanced question that often is defined by a downstream task, there are ways to quantitatively assess the general quality of probabilistic predictions. For example, the meteorological community has carefully considered this question and developed a set of proper scoring rules that a comparison function for probabilistic weather forecasts should satisfy in order to be well-calibrated, while still rewarding accuracy. We applied several of these proper scoring rules, such as the Brier Score and Negative Log Likelihood (NLL), along with more intuitive heuristics, such as the expected calibration error (ECE), to understand how different ML models dealt with uncertainty under dataset shift.
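These metrics are straightforward to compute from predicted probabilities and labels. The following NumPy sketch (ours, not the benchmark’s released code) shows one common way to implement the three measures, where `probs` is an (N, K) array of predicted class probabilities and `labels` is an (N,) array of integer class indices:

```python
import numpy as np

def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and one-hot labels."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))

def negative_log_likelihood(probs, labels):
    """Average negative log probability assigned to the true class."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def expected_calibration_error(probs, labels, n_bins=10):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of examples falling in each bin."""
    confidences = probs.max(axis=1)
    accuracies = (probs.argmax(axis=1) == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece
```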

Experiments
We analyze the effect of dataset shift on uncertainty across a variety of data modalities, including images, text, online advertising data and genomics. As an example, we illustrate the effect of dataset shift on the ImageNet dataset, a popular image understanding benchmark. ImageNet involves classifying over a million images into 1000 different categories. Some now consider this challenge mostly solved, and have developed harder variants, such as Corrupted ImageNet (or ImageNet-C), in which the data are augmented according to 16 different realistic corruptions, each at 5 different intensities.

We explore how model uncertainty behaves under changes to the data distribution, such as increasing intensities of the image perturbations used in Corrupted ImageNet. Shown here are examples of each type of image corruption, at intensity level 3 (of 5).
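To make the idea of severity levels concrete, here is a toy stand-in for a single corruption type, written with NumPy. The noise scales below are illustrative values chosen for this sketch, not the carefully tuned parameters of the actual ImageNet-C benchmark:

```python
import numpy as np

def gaussian_noise(image, severity=1):
    """Toy stand-in for one corruption type: additive Gaussian noise whose
    standard deviation grows with the severity level (1-5). The scales here
    are illustrative only, not the real benchmark's tuned parameters."""
    scale = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]
    noisy = image + np.random.normal(0.0, scale, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

image = np.random.rand(224, 224, 3)                        # placeholder image in [0, 1]
shifted = [gaussian_noise(image, s) for s in range(1, 6)]  # the 5 severity levels
```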

We used these corrupted images as examples of shifted data and examined the predictive probabilities of deep learning models as they were exposed to shifts of increasing intensity. Below we show box plots of the resulting accuracy and the ECE for each level of corruption (including uncorrupted test data), where each box aggregates across all corruption types in ImageNet-C. Each color represents a different type of model: a “vanilla” deep neural network used as a baseline, four uncertainty methods (dropout, temperature scaling, and our last layer approaches), and an ensemble approach.

Accuracy (top) and expected calibration error (bottom; lower is better) for increasing intensities of dataset shift on ImageNet-C. We observe that the decrease in accuracy is not accompanied by a corresponding increase in model uncertainty, as indicated by both accuracy and ECE getting worse with shift.

As the shift intensity increases, the spread in accuracy across corruption types for each model grows (larger boxes), as expected, and overall accuracy decreases. Ideally this would be reflected in increasing model uncertainty, leaving the expected calibration error (ECE) unchanged. However, the lower ECE plot shows that this is not the case: calibration generally suffers as well. We observed similar worsening trends for the Brier score and NLL, indicating that the models are not becoming increasingly unsure under shift, but instead are becoming confidently wrong.

One popular method to improve calibration is known as temperature scaling, a variant of Platt scaling, which involves smoothing the predictions after training, using performance on a held-out validation set. We observed that while this improved calibration on the standard test data, it often made things worse on shifted data! Thus, practitioners applying this technique should be wary of distributional shift.
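Concretely, temperature scaling divides the model’s logits by a single scalar T chosen to minimize NLL on held-out validation data. The sketch below uses a simple grid search over T with NumPy and placeholder validation data (the usual implementation optimizes T by gradient descent instead):

```python
import numpy as np

def nll(logits, labels, temperature):
    """Negative log likelihood of temperature-scaled softmax predictions."""
    scaled = logits / temperature
    scaled = scaled - scaled.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scaled - np.log(np.exp(scaled).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature that minimizes NLL on a held-out validation set."""
    return min(grid, key=lambda t: nll(val_logits, val_labels, t))

# Hypothetical validation logits and labels, standing in for a real model's outputs.
val_logits = np.random.randn(1000, 10) * 3.0
val_labels = np.random.randint(0, 10, size=1000)
T = fit_temperature(val_logits, val_labels)
# At test time, predictions become softmax(test_logits / T).
```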

Fortunately, one method’s uncertainty degrades much more gracefully than the others’. Deep ensembling (green), which averages the predictions of several models trained from different random initializations, is a simple strategy that significantly improves robustness to shift; it outperformed all of the other methods we tested.
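In its simplest form, a deep ensemble trains several copies of the same network from different random initializations and averages their predicted probabilities at test time. The Keras sketch below uses a toy model and toy data of our own (it is not the released benchmark code) to illustrate the recipe:

```python
import numpy as np
import tensorflow as tf

def make_member():
    """One ensemble member: a small classifier, freshly initialized on each call."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# Toy data standing in for a real training set (placeholders for illustration).
x_train = np.random.randn(512, 32).astype("float32")
y_train = np.random.randint(0, 10, size=512)
x_test = np.random.randn(64, 32).astype("float32")

# Train several independently initialized copies; around 5 was enough in our study.
ensemble = [make_member() for _ in range(5)]
for member in ensemble:
    member.fit(x_train, y_train, epochs=1, verbose=0)

# The ensemble's prediction is the average of the members' predicted probabilities.
ensemble_probs = np.mean([m.predict(x_test, verbose=0) for m in ensemble], axis=0)
```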

Summary and Recommended Best Practices
In our paper, we explored the behavior of state-of-the-art models under dataset shift across images, text, online advertising data and genomics. Our findings were mostly consistent across these different kinds of data. The quality of uncertainty degrades under dataset shift, but there are promising avenues of research to mitigate this. We hope that deep learning users take home the following messages from our study:

  1. Uncertainty under dataset shift is a real concern that needs to be considered when training models.
  2. Improving calibration and accuracy on an in-distribution test set often does not translate to improved calibration on shifted data.
  3. Out of all the methods we considered, deep ensembles are the most robust to dataset shift, and a relatively small ensemble size (e.g., 5) is sufficient. The effectiveness of ensembles presents interesting avenues for improving other approaches.

Improving the predictive uncertainty of deep learning models remains an active area of research in ML. We have released all of the code and model predictions from this benchmark in the hope that it will be useful to the community to drive and evaluate future work on this important topic.
