Lili Su: Learning with Distributed Systems: Adversary-Resilience and Neural Networks

Friday, October 18, 2019 - 1:00pm to 2:30pm
Location: 
32-G631
Speaker: 
Lili Su
Biography: 
MIT

In this talk, I will first discuss how to secure Federated Learning (FL) against adversarial faults. FL is a new distributed learning paradigm proposed by Google. The goal of FL is to enable the cloud (i.e., the learner) to train a model without collecting the training data from users' mobile devices. Compared with traditional learning, FL suffers from serious security issues, and several practical constraints call for new security strategies. To gain quantitative and systematic insight into the impact of these security issues, we formulated and studied the problem of Byzantine-resilient Federated Learning. We proposed two robust learning rules that secure gradient descent against Byzantine faults. The estimation error achieved under our more recently proposed rule is order-optimal in the minimax sense.
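As a rough illustration of this style of robust learning rule, the Python sketch below aggregates per-device gradients with a coordinate-wise median, one standard Byzantine-resilient rule from this literature; it is not necessarily either of the rules proposed in the talk, and grad_fn, the step count, and the learning rate are illustrative assumptions.

# Minimal sketch of Byzantine-resilient gradient descent (assumed setup, not the talk's exact rules).
import numpy as np

def robust_aggregate(worker_gradients):
    # worker_gradients: array of shape (num_workers, dim), one gradient report per device.
    # Coordinate-wise median: a Byzantine device can corrupt its own report, but
    # cannot move each coordinate's median far while fewer than half the devices are faulty.
    return np.median(worker_gradients, axis=0)

def byzantine_resilient_gd(grad_fn, theta, num_workers, steps=100, lr=0.1):
    # grad_fn(theta, worker_id) is assumed to return that device's local gradient,
    # which may be arbitrary for Byzantine devices.
    for _ in range(steps):
        grads = np.stack([grad_fn(theta, w) for w in range(num_workers)])
        theta = theta - lr * robust_aggregate(grads)
    return theta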

Then, I will briefly discuss our recent results on neural networks, covering both biological and artificial neural networks. Notably, our results on artificial neural networks (i.e., training over-parameterized 2-layer neural networks) improved the state of the art. In particular, we showed that nearly-linear network over-parameterization is sufficient for the global convergence of gradient descent.
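For concreteness, here is a minimal Python sketch of the over-parameterized two-layer setting referred to above: a width-m ReLU network with fixed output signs, trained on its hidden weights by full-batch gradient descent on the squared loss. The data, width, and step size are illustrative assumptions; the talk's result concerns how much over-parameterization (nearly linear, per the abstract) suffices for such gradient descent to converge globally.

# Minimal sketch (assumed setup): f(x) = (1/sqrt(m)) * sum_r a_r * relu(w_r . x),
# with the signs a_r fixed and only the hidden weights W trained.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 10, 500                      # samples, input dim, hidden width (illustrative)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
y = rng.normal(size=n)

W = rng.normal(size=(m, d))                # hidden weights (trained)
a = rng.choice([-1.0, 1.0], size=m)        # output signs (fixed)

def predict(W):
    return (np.maximum(X @ W.T, 0.0) @ a) / np.sqrt(m)

lr = 1.0
for step in range(200):
    pre = X @ W.T                          # (n, m) pre-activations
    resid = np.maximum(pre, 0.0) @ a / np.sqrt(m) - y
    # Gradient of (1/2n) * ||f(X) - y||^2 with respect to W
    grad = ((resid[:, None] * (pre > 0)) * a).T @ X / (n * np.sqrt(m))
    W -= lr * grad

print("final training loss:", 0.5 * np.mean((predict(W) - y) ** 2))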