Dropout injection at test time for post hoc uncertainty quantification in neural networks

Fumera, G.; Roli, F.
2023-01-01

Abstract

Among Bayesian methods, Monte Carlo dropout provides principled tools for evaluating the epistemic uncertainty of neural networks. Its popularity recently led to seminal works that proposed activating the dropout layers only during inference to evaluate epistemic uncertainty. This approach, which we call dropout injection, provides clear benefits over its traditional counterpart (which we call embedded dropout), since it yields a post hoc uncertainty measure for any existing network previously trained without dropout and avoids an additional, time-consuming training process. However, no previous work has thoroughly analyzed injected dropout or compared it with embedded dropout; we therefore provide a first comprehensive investigation, focusing on regression problems. We show that the effectiveness of dropout injection strongly relies on a suitable scaling of the corresponding uncertainty measure, and we propose an alternative method to implement such scaling. We also analyze the trade-off between negative log-likelihood and calibration error as a function of the scale factor. Experimental results on benchmark data sets from several regression tasks, including crowd counting, support our claim that dropout injection can be a competitive post hoc alternative to embedded dropout.
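To make the mechanism described in the abstract concrete, the following is a minimal PyTorch-style sketch of dropout injection: dropout layers are kept stochastic only at inference time, and repeated forward passes give a sample mean (prediction) and sample variance (unscaled epistemic uncertainty). The function names, the injection rate `p`, and the number of passes `n_passes` are illustrative assumptions, not the paper's code; the sketch also assumes the model already contains `nn.Dropout` modules (for a network trained without dropout, such layers would first have to be inserted). The scale factor the paper discusses would be fitted separately, e.g. on a validation set, and applied to the variance.

```python
import torch
import torch.nn as nn


def enable_injected_dropout(model: nn.Module, p: float = 0.1) -> None:
    """Keep the model in eval mode, but switch every dropout layer back to
    training mode and override its drop rate with the injection rate `p`."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = p
            module.train()  # dropout stays stochastic; other layers stay frozen


@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_passes: int = 30):
    """Run several stochastic forward passes: the sample mean is the prediction,
    the sample variance is the (unscaled) epistemic uncertainty measure."""
    samples = torch.stack([model(x) for _ in range(n_passes)])  # (n_passes, batch, out)
    return samples.mean(dim=0), samples.var(dim=0)
```

As a usage example, one would call `enable_injected_dropout(net, p=0.05)` on a previously trained regression network and then obtain `mean, variance = predict_with_uncertainty(net, x_test)`, rescaling `variance` afterwards with the chosen scale factor.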
Keywords

Crowd counting
Epistemic uncertainty
Monte Carlo dropout
Trustworthy AI
Uncertainty quantification
Files in This Item:

File: paper.pdf
Description: Main article and appendices
Type: post-print version
Size: 7.26 MB
Format: Adobe PDF
Embargo until 30/06/2025
