Learning in Strategic Environments: from Calibrated Agents to General Information Asymmetry

Tuesday, September 17, 2024 - 4:15pm to 5:15pm
Refreshments: 4:00 PM
Location: 32-G449 (Kiva/Patil)
Speaker: Chara Podimata
Biography: https://charapodimata.com/

In this talk I will discuss learning in principal-agent games where there is information asymmetry between what the principal and the agent know about each other’s chosen actions. I will introduce a generalization of the standard Stackelberg Games (SGs) framework: Calibrated Stackelberg Games (CSGs). In CSGs, a principal repeatedly interacts with an agent who (unlike in standard SGs) does not have direct access to the principal’s action but instead best-responds to calibrated forecasts about it. I will show that in CSGs, the principal can achieve utility that converges to the optimal Stackelberg value of the game (i.e., the value they could achieve had the agent known the principal’s strategy all along) in both finite and continuous settings, and that no higher utility is achievable. Finally, I will discuss a meta-question: when learning in strategic environments, can agents overcome uncertainty about their preferences to achieve outcomes they could have achieved absent any uncertainty? And can they do this solely through interactions with each other?
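To make the CSG interaction concrete, here is a minimal simulation sketch of the repeated loop described above: the agent never observes the principal's action directly and instead best-responds to a calibrated forecast of it. The 2x2 payoff matrices, the fixed committed mixed strategy, and the empirical-frequency forecaster (which is asymptotically calibrated against a stationary strategy) are illustrative assumptions for this sketch, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical payoffs: u_P[x, y] for the principal, u_A[x, y] for the agent,
# with principal action x in {0, 1} and agent action y in {0, 1}.
u_P = np.array([[1.0, 3.0], [2.0, 0.5]])
u_A = np.array([[0.0, 1.0], [1.0, 0.0]])

def agent_best_response(forecast):
    """Best response to a forecast distribution over the principal's actions."""
    expected = forecast @ u_A  # expected agent payoff for each agent action
    return int(np.argmax(expected))

# A (hypothetical) mixed strategy the principal commits to.
principal_strategy = np.array([0.3, 0.7])

counts = np.zeros(2)  # empirical counts of the principal's past actions
total_principal_utility = 0.0
T = 10_000

for t in range(T):
    x = rng.choice(2, p=principal_strategy)  # principal's realized action
    # The agent never sees x directly; it best-responds to a calibrated
    # forecast of it, here the empirical frequency of past actions.
    forecast = counts / counts.sum() if counts.sum() > 0 else np.ones(2) / 2
    y = agent_best_response(forecast)
    total_principal_utility += u_P[x, y]
    counts[x] += 1  # the forecast is updated only after the round

# Benchmark: the value of this committed strategy when the agent
# best-responds to the true strategy (the Stackelberg value would
# additionally maximize this quantity over all principal strategies).
y_star = agent_best_response(principal_strategy)
benchmark_value = principal_strategy @ u_P[:, y_star]

print("average principal utility:", total_principal_utility / T)
print("benchmark value of this strategy:", benchmark_value)
```

In this toy run the principal's average utility approaches the benchmark value of the committed strategy, mirroring (in a very simplified form) the convergence-to-Stackelberg-value guarantee discussed in the talk.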

Based on joint work with Nivasini Ananthakrishnan (UC Berkeley), Nika Haghtalab (UC Berkeley), and Kunhe Yang (UC Berkeley).