Special Seminar in Computing and Mathematical Sciences
Ludwig Schmidt is a postdoctoral researcher at UC Berkeley working with Moritz Hardt and Ben Recht. Ludwig's research interests revolve around the empirical and theoretical foundations of machine learning, often with a focus on making machine learning more reliable. Before Berkeley, Ludwig completed his PhD at MIT under the supervision of Piotr Indyk. Ludwig received a Google PhD fellowship, a Microsoft Simons fellowship, a best paper award at the International Conference on Machine Learning (ICML), and the Sprowls dissertation award from MIT.
The past decade has seen tremendous progress in machine learning, with the ImageNet benchmark being perhaps the most prominent example. In this talk, we closely analyze this progress in order to understand the main obstacles on the path towards safe, dependable, and secure machine learning.
First, we will investigate the nature and extent of overfitting on ML benchmarks through novel reproducibility experiments for ImageNet and other key datasets. Our results show that overfitting through test set re-use is surprisingly absent, but that distribution shift poses a major open problem for reliable ML.
In the second part, we will focus on one particular robustness issue (adversarial examples) and develop methods inspired by optimization and generalization theory to address this issue. We then conclude with a large experimental study of current robustness interventions that summarizes the main challenges going forward.