Abstract: Distributed systems are ubiquitous in both industry and daily life. For example, we use clusters and networked workstations to analyze large amounts of data, use the World Wide Web for information and resource sharing, and use the Internet of Things (IoT) to access a much wider variety of resources. In distributed systems, components are particularly vulnerable to adversarial attacks. In this talk, we model distributed systems as multi-agent networks and consider the most general attack model, the Byzantine fault model. In particular, the talk will focus on the problem of distributed learning over multi-agent networks, where agents repeatedly collect partially informative observations (samples) about an unknown state of the world and try to collaboratively learn the true state. We study the impact of Byzantine agents on the performance of consensus-based non-Bayesian learning. Our goal is to design algorithms that allow the non-faulty agents to collaboratively learn the true state through local communication.
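As background, here is a minimal sketch of a standard (fault-free) consensus-based non-Bayesian learning rule from the social learning literature; the notation (beliefs, likelihoods, and mixing weights) is illustrative and is not necessarily the update rule analyzed in the talk. Each agent i maintains a belief vector over the candidate states and, at every round, combines a Bayesian update on its newest local sample with a weighted geometric average of its neighbors' current beliefs:

\[
\mu_{i,t+1}(\theta) \;=\;
\frac{\ell_i\left(s_{i,t+1}\mid\theta\right)\,\prod_{j\in\mathcal{N}_i}\mu_{j,t}(\theta)^{w_{ij}}}
     {\sum_{\theta'\in\Theta}\ell_i\left(s_{i,t+1}\mid\theta'\right)\,\prod_{j\in\mathcal{N}_i}\mu_{j,t}(\theta')^{w_{ij}}},
\]

where \(\mu_{i,t}\) is agent \(i\)'s belief at time \(t\), \(\ell_i(\cdot\mid\theta)\) is its local observation model, \(\mathcal{N}_i\) is its neighborhood (including itself), and the weights \(w_{ij}\) are nonnegative and sum to one. Byzantine agents can report arbitrary beliefs to their neighbors, so in the faulty setting the aggregation step over \(\mathcal{N}_i\) must be made robust, which is the algorithm design question the abstract poses.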