Moritz Hardt: Reliable Machine Learning Through Algorithmic Stability

Thursday, November 19, 2015 - 4:00pm to 5:00pm
Light Refreshments at 3:50pm
Patil/Kiva G449
Moritz Hardt

Most applications of machine learning across science and industry rely on the holdout method for model selection and validation. Unfortunately, the holdout method can fail in the now common situation where the data analyst works interactively with the data, iteratively choosing which methods to use by probing the same holdout data many times.

In this talk, we show how the principle of algorithmic stability allows us to design a reusable holdout method, which can be used many times without losing the guarantees of fresh data.
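One instantiation of this idea is the Thresholdout mechanism of Dwork, Feldman, Hardt, Pitassi, Reingold, and Roth (2015): answer a query with the training-set estimate whenever it agrees with the holdout estimate up to a noisy threshold, and only touch the holdout (plus noise) when they disagree. A minimal sketch, with the threshold and noise-scale parameters chosen here purely for illustration:

```python
import math
import random


def _laplace(scale):
    # Draw Laplace noise via the inverse CDF (Python's stdlib has no
    # laplace sampler).
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)


def thresholdout(train_vals, holdout_vals, threshold=0.04, scale=0.01):
    """Answer one statistical query under the Thresholdout mechanism.

    train_vals / holdout_vals: per-example values of the queried
    statistic on the training and holdout sets. The Laplace noise is
    what makes the answers stable (differentially private), which is
    why the holdout can be reused across many adaptive queries.
    """
    train_est = sum(train_vals) / len(train_vals)
    holdout_est = sum(holdout_vals) / len(holdout_vals)
    # If training and holdout estimates agree up to a noisy threshold,
    # the cheap training estimate is returned and the holdout leaks
    # essentially nothing about itself.
    if abs(train_est - holdout_est) > threshold + _laplace(2.0 * scale):
        return holdout_est + _laplace(scale)
    return train_est
```

When the analyst has not overfit, the two estimates agree and the holdout is barely consumed; only queries that genuinely disagree with the holdout pay a privacy (and hence reuse) cost.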

We conclude with a bird's eye view of what algorithmic stability says about machine learning at large, including new insights into stochastic gradient descent, the most popular optimization method in contemporary machine learning.

Short bio:
Moritz Hardt is a senior research scientist at Google Research where his mission is to build theory and tools that make machine learning more reliable. After obtaining a PhD in computer science from Princeton University in 2011, he spent three years at IBM Research Almaden prior to joining Google.