
Publication

Why have a Unified Predictive Uncertainty? Disentangling it using Deep Split Ensembles

@misc{sarawgi2020unified,
  title={Why have a Unified Predictive Uncertainty? Disentangling it using Deep Split Ensembles},
  author={Utkarsh Sarawgi and Wazeer Zulfikar and Rishab Khincha and Pattie Maes},
  year={2020},
  eprint={2009.12406},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Abstract

Understanding and quantifying uncertainty in black box Neural Networks (NNs) is critical when they are deployed in real-world settings such as healthcare. Recent works using Bayesian and non-Bayesian methods have shown how a unified predictive uncertainty can be modelled for NNs. Decomposing this uncertainty to disentangle the granular sources of heteroscedasticity in data provides rich information about its underlying causes. We propose a conceptually simple non-Bayesian approach, deep split ensemble, to disentangle the predictive uncertainties using a multivariate Gaussian mixture model. The NNs are trained with clusters of input features, yielding uncertainty estimates per cluster. We evaluate our approach on a series of benchmark regression datasets, while also comparing against unified uncertainty methods. Extensive analyses using dataset shifts and the empirical rule show that our models are inherently well calibrated. Our work further demonstrates the approach's applicability in a multi-modal setting using a benchmark Alzheimer's dataset and also shows how deep split ensembles can highlight hidden modality-specific biases. The minimal changes required to the NNs and the training procedure, together with the high flexibility in grouping features into clusters, make the approach readily deployable and useful. The source code is available at this URL.
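To make the core idea concrete, below is a minimal sketch of a deep split ensemble: input features are grouped into clusters, and one small Gaussian-output NN per cluster is trained with a negative log-likelihood loss, yielding a per-cluster mean and variance (a per-cluster predictive uncertainty). The cluster assignments, network sizes, and hyperparameters here are illustrative assumptions, not the authors' exact configuration, and the step of combining the per-cluster Gaussians into the multivariate mixture described in the abstract is omitted.

# Sketch of a deep split ensemble: one Gaussian-output NN per feature
# cluster. Clustering, layer sizes, and training settings are illustrative
# assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class GaussianNet(nn.Module):
    """Predicts a mean and a variance from one cluster of input features."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.var_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        mean = self.mean_head(h)
        # Softplus keeps the predicted variance strictly positive.
        var = nn.functional.softplus(self.var_head(h)) + 1e-6
        return mean, var

def gaussian_nll(mean, var, y):
    # Negative log-likelihood of y under N(mean, var), the usual loss for
    # Gaussian-output ensemble members.
    return (0.5 * torch.log(var) + 0.5 * (y - mean) ** 2 / var).mean()

# Toy data: 6 input features split into two hypothetical clusters
# (in practice the clusters would come from a feature-clustering step).
torch.manual_seed(0)
X, y = torch.randn(256, 6), torch.randn(256, 1)
clusters = [[0, 1, 2], [3, 4, 5]]

nets = [GaussianNet(len(cols)) for cols in clusters]
opts = [torch.optim.Adam(net.parameters(), lr=1e-2) for net in nets]

# Train each ensemble member on its own feature cluster only.
for step in range(200):
    for net, opt, cols in zip(nets, opts, clusters):
        mean, var = net(X[:, cols])
        loss = gaussian_nll(mean, var, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Each member now reports an uncertainty tied to its own cluster of inputs.
with torch.no_grad():
    for i, (net, cols) in enumerate(zip(nets, clusters)):
        mean, var = net(X[:, cols])
        print(f"cluster {i}: mean predicted variance = {var.mean().item():.3f}")

Because each member sees only its own feature cluster, differences in the predicted variances can be read as cluster-specific (e.g., modality-specific) sources of uncertainty, which is the disentanglement the abstract describes.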
