Does Interpretability of Neural Networks Imply Adversarial Robustness?

Date and time: 
Friday, May 22, 2020 - 09:00
Location: 
Remote
Author(s):
Adam Noack
University of Oregon
Host/Committee: 
  • Dejing Dou (Chair)
  • Daniel Lowd
  • Thanh Nguyen
Abstract: 

The success of deep neural networks is clouded by two issues: (1) a vulnerability to adversarial examples and (2) a tendency to be uninterpretable. Interestingly, recent empirical evidence in the literature, as well as theoretical analysis of simple models, suggests that these two seemingly disparate issues are actually connected. In particular, robust models tend to be more interpretable than non-robust models. In this paper, we provide evidence that this relationship is bidirectional: models that are optimized to have interpretable gradients are more robust to adversarial examples than models trained in a standard manner. With further analysis and experiments on standard image classification datasets, we identify two factors behind this phenomenon, namely the suppression of the gradient's magnitude and the selective use of features guided by high-quality interpretations, which together explain model behaviors under various regularization and target interpretation settings.
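
To make the training objective described in the abstract concrete, below is a minimal sketch, not the authors' exact formulation: an assumed PyTorch example of interpretation-regularized training in which the standard cross-entropy loss is augmented with (1) a term pushing the input gradient toward a hypothetical per-example target interpretation (target_interp) and (2) a term suppressing the gradient's magnitude, mirroring the two factors highlighted above. The names interp_regularized_loss, lam_align, and lam_mag are illustrative and do not come from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


def interp_regularized_loss(model, x, y, target_interp,
                            lam_align=1.0, lam_mag=0.1):
    # Cross-entropy plus two illustrative regularizers on the input gradient.
    # target_interp: same shape as x, the desired interpretation per example
    # (an assumed input; the paper's actual targets may differ).
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Input gradient of the classification loss, kept in the graph so the
    # regularizers remain differentiable w.r.t. the model parameters.
    grad = torch.autograd.grad(ce, x, create_graph=True)[0]

    # (1) Align the gradient with the target interpretation (cosine distance).
    align = 1.0 - F.cosine_similarity(grad.flatten(1),
                                      target_interp.flatten(1), dim=1).mean()

    # (2) Suppress the gradient's magnitude.
    mag = grad.flatten(1).norm(dim=1).mean()

    return ce + lam_align * align + lam_mag * mag


# Usage sketch on a toy CNN with random data standing in for a real dataset.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
x = torch.randn(16, 3, 32, 32)
y = torch.randint(0, 10, (16,))
target_interp = torch.randn_like(x)   # stand-in for a saliency target
loss = interp_regularized_loss(model, x, y, target_interp)
loss.backward()                       # gradients flow to model parameters

In this sketch, setting lam_align to zero leaves only gradient-magnitude suppression, while setting lam_mag to zero isolates the interpretation-alignment effect; separating these two terms corresponds to the two factors the abstract identifies.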