Exit Through the Training Data: A Look into Instance-Attribution Explanations and Efficient Data Deletion in Machine Learning

Date and time: 
Friday, February 14, 2020 - 13:00
Location: 
220 Deschutes
Author(s):
Jonathan Brophy
University of Oregon
Host/Committee: 
  • Daniel Lowd (Chair)
  • Stephen Fickas
  • Thanh Nguyen
Abstract: 

The widespread use of machine learning, coupled with large datasets and increasingly complex models, has exposed a general lack of understanding of how individual predictions are made. It is perhaps unsurprising, then, that explainable AI (XAI) has become a popular research topic over the past several years. This survey examines one aspect of model interpretability: instance-attribution explanations, techniques that trace the prediction of a test instance back through the model to the training instances that contributed most to it. Such explanations are used to debug and improve both models and datasets. A brute-force sketch of the underlying idea appears below.
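
As rough intuition (an illustration, not a method from the talk), the quantity these techniques target can be computed exactly by leave-one-out retraining: a training instance's attribution is the change in test loss when that instance is removed. Influence-based methods approximate this without retraining. The sketch below assumes scikit-learn; names like `attributions` are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)
x_test, y_test = X[:1], y[:1]        # treat the first instance as the test point
X_train, y_train = X[1:], y[1:]

def loss(model, x, y):
    # negative log-likelihood of the true label
    return -np.log(model.predict_proba(x)[0, y[0]])

full = LogisticRegression(max_iter=1000).fit(X_train, y_train)
base_loss = loss(full, x_test, y_test)

# Attribution of training instance i = change in test loss when i is removed;
# large positive values mark the instances the prediction relied on most.
attributions = []
for i in range(len(X_train)):
    mask = np.arange(len(X_train)) != i
    m = LogisticRegression(max_iter=1000).fit(X_train[mask], y_train[mask])
    attributions.append(loss(m, x_test, y_test) - base_loss)

top = np.argsort(attributions)[-5:][::-1]   # most influential training instances
print(top, np.asarray(attributions)[top])
```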

Another issue confronting many institutions today is data privacy and ownership: some legislation requires companies to remove user data upon request. Removing private user information from a database is relatively straightforward, but machine learning models are not inherently designed to accommodate such requests. The second part of this review focuses on efficient data deletion, techniques that remove the effect of one or more training instances from a learned model without retraining it from scratch; a minimal sketch follows.
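
As a minimal sketch of why efficient deletion can be possible (a generic illustration, not the specific algorithms surveyed in the talk): when a model is a function of additive sufficient statistics, an instance's exact contribution can be subtracted out. Ridge regression is one such case; the class name `DeletableRidge` below is hypothetical.

```python
import numpy as np

class DeletableRidge:
    """Ridge regression storing X^T X and X^T y, so removing one training
    instance is a rank-one downdate rather than a full retrain."""
    def __init__(self, n_features, lam=1.0):
        self.A = lam * np.eye(n_features)   # X^T X + lam * I
        self.b = np.zeros(n_features)       # X^T y

    def add(self, x, y):
        self.A += np.outer(x, x)
        self.b += y * x

    def delete(self, x, y):
        # subtract the instance's exact contribution to the statistics
        self.A -= np.outer(x, x)
        self.b -= y * x

    def weights(self):
        return np.linalg.solve(self.A, self.b)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

model = DeletableRidge(n_features=3)
for xi, yi in zip(X, y):
    model.add(xi, yi)

w_before = model.weights()
model.delete(X[0], y[0])   # honor a deletion request for instance 0
# The result matches retraining from scratch on X[1:], y[1:]
# up to numerical precision.
print(w_before, model.weights())
```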
