Countering Attacker Data Manipulation in Security Games

Date and time: Mon, May 3, 2021 - 2:00pm
Andrew (Ryan) Butler
University of Oregon
  • Thanh Nguyen (Chair)
  • Thien Nguyen
  • Lei Jiao

Defending against attackers with unknown behavior is an important area of research in security games. A well-established approach is to use historical attack data to build a behavioral model of the attacker. However, this approach presents a vulnerability: a clever attacker may change its own behavior during the learning phase, leading to an inaccurate model and ineffective defender strategies. In this work, we investigate how a wary defender can defend against such a deceptive attacker. We provide four main contributions. First, we develop a new technique to estimate the attacker's true behavior despite data manipulation by the clever adversary. Second, we extend this technique so that it remains viable even when the defender has access to only a minimal amount of historical data. Third, we use a maximin approach to optimize the defender's strategy against the worst case within the uncertainty of the estimate. Finally, we demonstrate the effectiveness of our counter-deception methods through extensive experiments, showing a clear gain for the defender and a clear loss for the deceptive attacker.
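To illustrate the maximin idea mentioned in the abstract, the following sketch shows a defender choosing a coverage strategy that maximizes its worst-case utility over an uncertainty set of attacker behavior parameters. This is a hypothetical toy model (two targets, made-up payoffs, a quantal-response-style attacker with unknown rationality parameter `lam`), not the paper's actual algorithm.

```python
import math

# Hypothetical two-target game (all payoffs are illustrative).
P_DEF = [-3.0, -1.0]   # defender penalty if target i is attacked while uncovered
R_ATT = [3.0, 1.0]     # attacker reward for a successful attack on target i

def attack_probs(x, lam):
    """Quantal-response-style attacker: probability of attacking each
    target, given defender coverage x on target 0 (1 - x on target 1).
    Small lam means near-random behavior; large lam means near-rational."""
    cov = [x, 1.0 - x]
    weights = [math.exp(lam * (1.0 - c) * r) for c, r in zip(cov, R_ATT)]
    total = sum(weights)
    return [w / total for w in weights]

def defender_utility(x, lam):
    """Expected defender payoff under the attacker's response distribution."""
    cov = [x, 1.0 - x]
    probs = attack_probs(x, lam)
    return sum(p * (1.0 - c) * pen for p, c, pen in zip(probs, cov, P_DEF))

def maximin_coverage(lams, steps=100):
    """Grid-search the coverage level maximizing the worst-case utility
    over the candidate attacker parameters in `lams`."""
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=lambda x: min(defender_utility(x, l) for l in lams))

# Defender is unsure of the attacker's rationality: hedge over an interval.
x_star = maximin_coverage(lams=[0.5, 1.0, 2.0])
```

The design choice here is that, rather than committing to a single point estimate of the attacker's behavior, the defender optimizes against the worst case over the uncertainty set, which is the robustness property the abstract describes.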