Introducing Algorithmic Thinking Theory for Foundation Models

Tuesday, October 14, 2025 - 4:15pm to 5:15pm
Refreshments: 4:00 PM
Location: 32-G449 (Kiva/Patil)
Speaker: Vincent Cohen-Addad (Google Research)
Biography: https://www.di.ens.fr/~vcohen/
The last few months have witnessed tremendous advances in the reasoning capabilities of Large Language Models (LLMs), with Gemini and GPT achieving gold-medal performance at the International Mathematical Olympiad (IMO) [1] and the International Collegiate Programming Contest (ICPC) [2]. Several papers have shown that inference scaling techniques significantly improve the reasoning performance of LLMs, in particular on the IMO [3]. We will discuss these results, show how to frame the problem as an optimization problem, relate it to the empirical results in [3], and derive optimal (algorithmic) thinking strategies. We will also discuss avenues for refining the model and improving inference scaling methods.
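To make the inference-scaling idea concrete, here is a minimal toy sketch of one common such technique, best-of-n sampling: draw several independent reasoning attempts and keep the one a verifier scores highest. The functions `generate_candidate` and `verifier_score` are hypothetical stand-ins (a real system would call an LLM and a reward model); this is an illustration of the general technique, not the specific strategies the talk derives.

```python
import random

def generate_candidate(rng):
    # Hypothetical stand-in for one stochastic LLM reasoning attempt.
    # Here a "candidate" is just a random number we can score;
    # a real system would return generated text.
    return rng.random()

def verifier_score(candidate):
    # Hypothetical stand-in for a verifier or reward model.
    # Higher is better in this toy setup.
    return candidate

def best_of_n(n, seed=0):
    """Best-of-n inference scaling: sample n candidates, keep the top-scoring one."""
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(n)]
    return max(candidates, key=verifier_score)

# With a fixed verifier as the selection criterion, the chosen score is
# non-decreasing in n: extra samples can only help. Choosing n (and how to
# allocate compute across such strategies) is the kind of optimization
# problem the abstract alludes to.
print(best_of_n(16) >= best_of_n(1))
```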