Compactness versus Robustness: Either or Both?

Date and time: Thursday, December 3, 2020, 12:15pm
Zhangyang "Atlas" Wang
University of Texas, Austin
  • Humphrey Shi

Deep networks have recently been shown to face a trade-off between accuracy (on clean natural data) and robustness (on adversarially perturbed data). This dilemma has been traced to the inherently higher sample complexity and/or model capacity required to learn a classifier that is both accurate and robust. In view of that, for a given classification task, growing the model capacity appears to help achieve a win-win between accuracy and robustness, yet at the expense of model size and latency. That poses challenges for many IoT applications that naturally demand security and trustworthiness (e.g., biometrics and identity verification) but can only afford limited resource budgets. This talk will introduce our recent efforts in co-optimizing model accuracy, robustness, and compactness. I will first present the Adversarially Trained Model Compression (ATMC) framework, which unifies the adversarial training objective and three major compression means (pruning, factorization, quantization) into one constrained optimization form. I will then introduce input-adaptive efficient inference as an efficient and flexible defense: each input (either clean or adversarial) adaptively chooses one of multiple output layers (early branches or the final one) to produce its prediction. I will conclude the talk with our latest work on efficient and flexible adversarial training.
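The input-adaptive inference idea can be illustrated with a minimal sketch of a multi-exit classifier: branch classifiers attached at intermediate depths, with each input exiting at the first branch whose confidence clears a threshold. All names, dimensions, and the confidence threshold below are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class MultiExitNet:
    """Toy multi-exit network (illustrative only): a stack of backbone
    stages, each followed by a small branch classifier ("early exit")."""

    def __init__(self, dim=8, n_classes=3, n_stages=3, threshold=0.9):
        # Random weights stand in for a trained backbone and branch heads.
        self.stages = [rng.standard_normal((dim, dim)) * 0.1
                       for _ in range(n_stages)]
        self.exits = [rng.standard_normal((dim, n_classes)) * 0.1
                      for _ in range(n_stages)]
        self.threshold = threshold  # confidence required to exit early

    def predict(self, x):
        h = x
        for i, (stage, head) in enumerate(zip(self.stages, self.exits)):
            h = np.tanh(h @ stage)       # run one backbone stage
            p = softmax(h @ head)        # branch prediction at this depth
            # Exit as soon as the branch is confident enough, or at the end.
            if p.max() >= self.threshold or i == len(self.stages) - 1:
                return int(p.argmax()), i  # (predicted class, exit index)

net = MultiExitNet()
pred, exit_idx = net.predict(rng.standard_normal(8))
```

"Easy" inputs thus exit at shallow branches and save computation, while harder (including adversarially perturbed) inputs fall through to deeper exits, which is what makes the scheme both an efficiency and a flexibility lever for defense.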


Professor Zhangyang “Atlas” Wang is currently an Assistant Professor of Electrical and Computer Engineering at UT Austin. He was an Assistant Professor of Computer Science and Engineering at Texas A&M University from 2017 to 2020. He received his Ph.D. degree in ECE from UIUC in 2016, advised by Professor Thomas S. Huang, and his B.E. degree in EEIS from USTC in 2012. Prof. Wang is broadly interested in the fields of machine learning, computer vision, optimization, and their interdisciplinary applications. His latest interests focus on automated machine learning (AutoML), learning-based optimization, machine learning robustness, and efficient deep learning. His research is gratefully supported by NSF, DARPA, and ARL/ARO, as well as several industry and university grants. He has received many research awards and scholarships, including most recently an ARO Young Investigator award, an IBM faculty research award, an Amazon research award (AWS AI), an Adobe Data Science Research Award, a Young Faculty Fellow award of TAMU, and four research competition prizes from CVPR/ICCV/ECCV. More can be found at: