A Systems Security Perspective on Building Secure and Reliable Deep Learning Systems

Sanghyun Hong
Thursday November 17, 2022, 10:00 AM
Online [Audience gathering: GISE conference room, SI-C204, Kanwal Rekhi Building]

How can we build secure and reliable deep learning systems for tomorrow?

We cannot answer this question without understanding the attack surfaces of deep neural networks (DNNs), the core component of those systems. Recent work has studied attack surfaces such as mispredictions caused by adversarial examples or models altered by data poisoning. However, most prior work narrowly considers DNNs as an isolated mathematical concept and therefore overlooks the holistic picture, leaving out the security and privacy threats caused by vulnerable interactions between DNNs and other system components, such as hardware, systems, or software.

In this talk, I will discuss my work studying the computational properties of DNNs from a systems security perspective, which has exposed critical security and reliability threats and has steered industrial practices. First, I will show how vulnerable DNNs are to hardware-level attacks, such as fault injection. An adversary who wields Rowhammer, a fault-injection attack that flips random or targeted bits in physical memory, can inflict an accuracy drop of up to 100% in practice. Second, I will show how vulnerable the computational savings provided by efficient deep learning algorithms are in adversarial settings. By adding human-imperceptible input perturbations, an attacker can completely offset a multi-exit network's computational savings on an input. Third, I will show how the common practice of applying a leading compression method (i.e., quantization) can be exploited to achieve adversarial outcomes. An adversary trains a well-performing model, and a victim who quantizes it unknowingly activates malicious behaviors that were not present in the floating-point model. I will conclude my talk by discussing ongoing research toward achieving this vision.
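As a rough intuition for the first result (an illustrative sketch, not material from the talk): DNN weights are typically stored as IEEE-754 float32 values, so flipping a single high-order exponent bit can inflate one weight by dozens of orders of magnitude, which is enough to corrupt a model's predictions. A minimal Python example of that effect on a single weight:

```python
# Minimal sketch: flip one bit in the float32 encoding of a weight and observe
# how a single exponent-bit flip changes its magnitude drastically.
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = least significant) in the float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

weight = 0.15                      # a typical small DNN weight (hypothetical value)
corrupted = flip_bit(weight, 30)   # flip the most significant exponent bit
print(weight, "->", corrupted)     # 0.15 -> ~5.1e+37: one flip dwarfs every other weight
```

This only illustrates the failure mode at the level of a single value; the talk covers how such flips are actually induced in hardware and how damaging they are to full models in practice.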

Speaker Biography

Sanghyun Hong is an Assistant Professor of Computer Science at Oregon State University. His research interests lie at the intersection of computer security, privacy, and machine learning. He focuses on studying the computational properties of DNNs that make systems using them particularly vulnerable in adversarial environments. His work has been published at USENIX Security, ICLR, ICML, and NeurIPS. He is a recipient of the 2022 Samsung Global Research Outreach (GRO) Award. He was selected as a 2022 DARPA Riser, a speaker at USENIX Enigma 2021, and a recipient of the Ann G. Wylie Dissertation Fellowship. He earned his Ph.D. at the University of Maryland, College Park, and received his B.S. from Seoul National University.

You can find out more about Sanghyun at sanghyun-hong.com