Multi-Agent Learning

Many learning settings involve multiple agents that learn and make decisions in a shared environment wherein rewards, observations, and state transitions are influenced by the collective decisions of all agents. These settings: 
- might be cooperative or antagonistic;
- might inherently involve multiple agents, such as training a robot that will interact with other robots, a self-driving car that will drive on roads occupied by human- or self-driven cars, or an agent that will play a difficult game like Go or Poker against humans;
- might not be explicitly multi-agent but might employ multi-agent formulations to attain desirable learning outcomes, such as training GANs or robustifying models against adversarial attacks (see the min-max sketch after this list);
- might require detecting and reasoning about strategic behavior (lying, bluffing, misdirection) and incentives;
- might require reasoning about asymmetric information not only as an obstacle to sidestep, but also as a strategic opportunity.
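
To make the min-max point concrete: GAN training and adversarial robustification are typically cast as a two-player zero-sum game between a minimizing and a maximizing player. The sketch below is purely illustrative (the toy bilinear objective f(x, y) = x * y and the function name are assumptions made for the example, not drawn from the group's work); it shows plain simultaneous gradient descent-ascent moving away from the unique equilibrium at (0, 0).

```python
# Toy illustration (not any specific group method): simultaneous
# gradient descent-ascent on the bilinear game f(x, y) = x * y,
# where player 1 minimizes over x and player 2 maximizes over y.
# The unique equilibrium is (0, 0), yet the iterates spiral outward.

def grad_descent_ascent(x=1.0, y=1.0, lr=0.1, steps=100):
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy    # simultaneous updates
    return x, y

x0, y0 = 1.0, 1.0
xT, yT = grad_descent_ascent(x0, y0)
# Each step multiplies the squared distance from (0, 0) by (1 + lr**2),
# so the iterates diverge instead of converging to the equilibrium.
print("start:", (x0, y0), "-> end:", (xT, yT))
```

That even this simple bilinear objective defeats naive gradient dynamics is part of what makes the choice of optimization target, addressed in the questions below, nontrivial.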
 
These settings include recreational games (such as board, strategy, and sports games, with notable recent progress in Go, Poker, Diplomacy, Hanabi, Stratego, Gran Turismo, and StarCraft), economic settings (mechanisms, auctions, markets), and information design problems.
 
The multi-agent learning group takes an optimization, complexity-theoretic, learning-theoretic, and modeling approach to tackle questions such as:
 
- What are desirable optimization targets in multi-agent learning settings? And, motivated by deep learning applications, how does the answer change when the agents' utilities are non-concave, so that classical game-theoretic solution concepts might cease to exist?
- Under what conditions are attractive optimization targets in multi-agent settings tractable from a combined statistical and computational point of view?
- And for those optimization targets that exist and are tractable, are there algorithms that scale to the size of real-world applications? (A toy instance of such learning dynamics appears after this list.)
- How can agents plan in the presence of large amounts of imperfect information, and how does this planning interact with computational and statistical considerations?
- How can agents understand or develop conventions that are compatible with humans, including the strategic use of language?
- How can agents reason about mutually beneficial alliances and coordinate?
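
As a toy instance of a tractable target: in two-player zero-sum games, the time-averaged strategies of no-regret learners converge to a Nash equilibrium. The sketch below is illustrative only (the rock-paper-scissors payoff matrix, step size, and horizon are assumptions made for the example); it runs the multiplicative-weights algorithm in self-play.

```python
import numpy as np

# Rock-paper-scissors payoffs for the row player (the column player
# receives the negation): an assumed toy game whose Nash is uniform play.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

eta, T = 0.1, 5000
x = np.ones(3) / 3                 # row player's mixed strategy
y = np.ones(3) / 3                 # column player's mixed strategy
x_avg, y_avg = np.zeros(3), np.zeros(3)

for _ in range(T):
    x_avg += x / T
    y_avg += y / T
    # Multiplicative-weights update: re-weight each pure action by the
    # exponential of its expected payoff against the opponent's strategy.
    x_new = x * np.exp(eta * (A @ y))       # row player maximizes x^T A y
    y_new = y * np.exp(-eta * (A.T @ x))    # column player minimizes it
    x, y = x_new / x_new.sum(), y_new / y_new.sum()

# The day-to-day strategies cycle, but the averages approach (1/3, 1/3, 1/3).
print("average row strategy:", x_avg)
print("average col strategy:", y_avg)
```

No-regret dynamics of this flavor are a standard building block in large-scale game solving, for example counterfactual regret minimization in Poker.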

Members:

Constantinos Daskalakis
Gabriele Farina
Asu Ozdaglar
Max Fishelson
Noah Golowich
Wei Zhang