Securing Distributed Systems Against Adversarial Attacks

Friday, March 17, 2017 - 1:00pm to 2:30pm
Location: 
32-G631
Speaker: 
Lili Su
Biography: 
Lili Su is a Ph.D. candidate in the Electrical and Computer Engineering Department at the University of Illinois at Urbana-Champaign, working with Prof. Nitin Vaidya on distributed computing. She expects to receive her Ph.D. in May 2017. Her research intersects distributed computing, security, optimization, and learning. She was one of three nominees for the Best Student Paper Award at the 2016 International Symposium on DIStributed Computing, and she received the Best Student Paper Award at the 2015 International Symposium on Stabilization, Safety, and Security of Distributed Systems. She also received the Sundaram Seshu International Student Fellowship from UIUC for the 2016-2017 academic year, as well as the 2015 Outstanding Reviewer Award for her review service for IEEE Transactions on Communications.

Abstract: Distributed systems are ubiquitous in both industry and daily life. For example, we use clusters and networked workstations to analyze large amounts of data, the World Wide Web for information and resource sharing, and the Internet of Things (IoT) to access a much wider variety of resources. The components of a distributed system are more exposed to adversarial attacks than those of a centralized one. In this talk, we model distributed systems as multi-agent networks and consider the most general attack model, the Byzantine fault model. In particular, the talk focuses on distributed learning over multi-agent networks, where agents repeatedly collect partially informative observations (samples) about an unknown state of the world and try to collaboratively learn the true state. We study the impact of Byzantine agents on the performance of consensus-based non-Bayesian learning. Our goal is to design algorithms that allow the non-faulty agents to collaboratively learn the true state through local communication.
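For readers unfamiliar with consensus-based non-Bayesian learning, the following is a minimal sketch of the standard fault-free update rule (in the style of Jadbabaie et al.): each agent combines its neighbors' beliefs via a weighted geometric average and then reweights by the likelihood of its fresh private observation. This is only background for the talk, not the speaker's algorithm; the Byzantine-resilient variants discussed in the talk modify the aggregation step to withstand faulty neighbors. The function name, weight matrix, and toy observation model below are illustrative assumptions.

```python
import numpy as np

def non_bayesian_update(beliefs, weights, likelihoods):
    """One round of (fault-free) consensus-based non-Bayesian learning.

    beliefs:     (n_agents, n_states) each agent's current belief over states
    weights:     (n_agents, n_agents) row-stochastic consensus weights;
                 weights[i, j] > 0 only if agent j is a neighbor of agent i
    likelihoods: (n_agents, n_states) likelihood of each agent's new private
                 observation under each candidate state
    """
    # Consensus step: weighted geometric average of neighbors' beliefs,
    # computed as a weighted arithmetic average in log space.
    log_consensus = weights @ np.log(beliefs)
    # Bayesian-style step: reweight by the likelihood of the fresh sample.
    unnormalized = np.exp(log_consensus) * likelihoods
    # Normalize each agent's belief back to a probability vector.
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)

# Toy run: 3 agents on a line graph, 2 candidate states, state 0 is the truth,
# so each agent's samples are on average more likely under state 0.
rng = np.random.default_rng(0)
beliefs = np.full((3, 2), 0.5)
weights = np.array([[0.50, 0.50, 0.00],
                    [0.25, 0.50, 0.25],
                    [0.00, 0.50, 0.50]])
for _ in range(50):
    likelihoods = np.column_stack([rng.uniform(0.6, 1.0, 3),
                                   rng.uniform(0.2, 0.6, 3)])
    beliefs = non_bayesian_update(beliefs, weights, likelihoods)
print(beliefs.round(3))  # beliefs concentrate on state 0
```

Under Byzantine faults, faulty agents can report arbitrary belief vectors, so the plain weighted average above can be steered toward a wrong state; resilient variants replace it with an aggregation that discards extreme neighbor values before averaging.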

At the end of this talk, I will also briefly mention our work on tolerating adversarial attacks in multi-agent optimization problems.