DeCoDEx: Confounder Detector Guidance for Improved Diffusion-based Counterfactual Explanations

Published: 06 Jun 2024, Last Modified: 06 Jun 2024, MIDL 2024 Oral, CC BY 4.0
Keywords: Bias Mitigation, Causality, Counterfactual Image Synthesis, Diffusion, Explainability, Spurious Correlations
Abstract: Deep learning classifiers are prone to latching onto dominant confounders present in a dataset rather than the causal markers associated with the target class, leading to poor generalization and biased predictions. Although explainability via counterfactual image generation has been successful at exposing the problem, bias mitigation strategies that permit accurate explainability in the presence of dominant and diverse artifacts remain unsolved. In this work, we propose the DeCoDEx framework and show how an external, pre-trained binary artifact detector can be leveraged during inference to guide a diffusion-based counterfactual image generator towards accurate explainability. Experiments on the CheXpert dataset, using both synthetic artifacts and real visual artifacts (support devices), show that the proposed method successfully synthesizes counterfactual images that change the causal pathology markers associated with Pleural Effusion while preserving or ignoring the visual artifacts. Augmenting ERM and Group-DRO classifiers with the DeCoDEx-generated images substantially improves results for underrepresented groups that are out of distribution for each class. The code is made publicly available at https://github.com/NimaFathi/DeCoDEx.
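The abstract describes steering a diffusion-based counterfactual generator with an external artifact detector at inference time. The sketch below illustrates one way such detector guidance could be combined with classifier guidance in a reverse-diffusion step; it is a minimal illustration under assumptions, not the authors' implementation, and all names (eps_model, pathology_classifier, artifact_detector) and the exact guidance form are hypothetical. See the official repository for the actual method.

```python
# Illustrative sketch: one reverse-diffusion step guided by both a pathology
# classifier (to flip the causal class) and an artifact detector (to keep the
# spurious artifact, e.g. a support device, unchanged). Assumed names and
# guidance form; not the DeCoDEx implementation.
import torch
import torch.nn.functional as F

def detector_guided_eps(x_t, t, eps_model, pathology_classifier, artifact_detector,
                        target_class, detector_target, alpha_bar_t,
                        lambda_cls=1.0, lambda_det=1.0):
    """Return a guided noise prediction for a single reverse-diffusion step."""
    eps = eps_model(x_t, t)
    # Standard DDPM estimate of the clean image from the noisy sample x_t.
    x0_hat = (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps) / (alpha_bar_t ** 0.5)

    x_in = x0_hat.detach().requires_grad_(True)

    # Classifier term: log-probability of the desired counterfactual class.
    cls_logp = F.log_softmax(pathology_classifier(x_in), dim=-1)[:, target_class].sum()

    # Detector term: keep the artifact detector's prediction close to its value
    # on the original image, so the edit preserves rather than removes the artifact.
    det_penalty = F.binary_cross_entropy_with_logits(
        artifact_detector(x_in).squeeze(-1), detector_target)

    grad = torch.autograd.grad(lambda_cls * cls_logp - lambda_det * det_penalty, x_in)[0]

    # Shift the noise prediction along the combined guidance gradient
    # (classifier-guidance style update).
    return eps - (1.0 - alpha_bar_t) ** 0.5 * grad
```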
Submission Number: 296