Algorithms Should Have Bullshit Detectors! (or Polynomial Time Byzantine Agreement with Optimal Resilience)

Tuesday, November 22, 2022 - 4:00pm to 5:15pm
Refreshments: Milk & Cookies served 4:00--4:15 pm
Location: 32-G449 (Patil/Kiva)
Speaker: Seth Pettie (University of Michigan, Ann Arbor)

Abstract. One thing that distinguishes (theoretical) computer science from other scientific disciplines is its full-throated support of a fundamentally adversarial view of the universe. Malicious adversaries, with unbounded computational advantages, attempt to foil our algorithms at every turn and destroy their quantitative guarantees. However, there is one strange exception to this worldview: the algorithm must accept its input as sacrosanct, and may never simply reject its input as illegitimate. But what if some inputs really are illegitimate? Is building a "bullshit detector" for algorithms a good idea?


In this talk I will present a protocol for asynchronous Byzantine Agreement against the "strongest" adversary, which is omniscient, computationally unbounded, and can adaptively corrupt players as the protocol progresses. The protocol reduces Byzantine Agreement to a coin-flipping game in which the nominal goal is to collectively flip a fair coin, in the presence of malicious parties that attempt to fix its outcome. However, in the short term the malicious parties have an insurmountable advantage. The true goal of the game is to build a sound "bullshit detector" that identifies which parties are not playing according to protocol, and "blacklists" them. The final result is a polynomial time Byzantine Agreement protocol against an adaptive, all-knowing adversary that can corrupt up to f < n/3 players. The previous-best protocol against f < n/3 corruptions took exponential time. Moreover, the problem cannot be solved with f >= n/3 corruptions.
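
To make the "detector" intuition concrete, here is a minimal toy simulation, not the protocol from the talk, sketching one way deviation from a fair collective coin flip can be flagged statistically. All parameters (number of parties, corruption fraction, bias, rounds) are illustrative assumptions.

```python
import random
from math import sqrt, log

# Toy illustration only: NOT the STOC 2022 / SODA 2023 protocol, just a
# hedged sketch of the blacklisting intuition. Honest parties contribute
# fair coin flips; a corrupt coalition (fewer than n/3 parties) biases its
# flips; a concentration-bound test flags parties whose long-run behavior
# is statistically inconsistent with a fair coin.

N_PARTIES = 30      # hypothetical simulation parameters
N_CORRUPT = 9       # strictly fewer than N_PARTIES / 3
N_ROUNDS = 2000

def honest_flip():
    return random.randint(0, 1)

def corrupt_flip():
    # The coalition tries to push the collective coin toward 1.
    return 1 if random.random() < 0.8 else 0

ones = [0] * N_PARTIES
for _ in range(N_ROUNDS):
    for p in range(N_PARTIES):
        ones[p] += corrupt_flip() if p < N_CORRUPT else honest_flip()

# Chernoff-style deviation bound: with failure probability about 0.01
# (union bound over all parties), an honest party's empirical frequency
# of ones stays within `margin` of 1/2.
margin = sqrt(log(2 * N_PARTIES / 0.01) / (2 * N_ROUNDS))
blacklist = [p for p in range(N_PARTIES)
             if abs(ones[p] / N_ROUNDS - 0.5) > margin]

print("flagged parties:", blacklist)   # typically the corrupt coalition
```

The real protocol must of course handle far subtler misbehavior than a fixed bias, and adversaries who adapt their corruptions as the game unfolds; the sketch only shows why repeated play can expose parties that fix the coin in the long run.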


Based on extended abstracts at STOC 2022 and SODA 2023, which are joint work with Leqi Zhu and Shang-En Huang. https://arxiv.org/abs/2206.15335