Contemporary practice in deep learning has challenged conventional approaches to machine learning. Specifically, deep neural networks are highly overparameterized with respect to the number of training examples and are often trained without explicit regularization, yet they achieve state-of-the-art generalization performance. Understanding this overparameterized regime requires new theory and foundational empirical studies. A prominent recent example is the "double descent" behavior of the generalization error, which was discovered empirically in deep learning and has since been characterized analytically for linear regression and related problems in statistical learning.
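As a rough illustration of the double descent behavior mentioned above, the minimal sketch below (not part of the workshop materials; the sample sizes, feature counts, and noise level are illustrative assumptions) fits minimum-norm least squares on random ReLU features of increasing width. The test error typically rises as the number of features approaches the number of training samples and falls again well beyond that interpolation threshold.

```python
# Minimal double-descent sketch: minimum-norm regression on random ReLU features.
# All constants below are illustrative choices, not values from the workshop.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d_input, noise = 50, 1000, 20, 0.5

# Ground-truth linear model in the raw inputs; test labels are noiseless
# so the reported MSE measures error against the underlying signal.
w_true = rng.normal(size=d_input)
X_train = rng.normal(size=(n_train, d_input))
X_test = rng.normal(size=(n_test, d_input))
y_train = X_train @ w_true + noise * rng.normal(size=n_train)
y_test = X_test @ w_true

def random_features(X, W):
    """Random ReLU features; the width of W controls model capacity."""
    return np.maximum(X @ W, 0.0)

for n_features in [5, 10, 25, 45, 50, 55, 100, 400, 1600]:
    W = rng.normal(size=(d_input, n_features)) / np.sqrt(d_input)
    Phi_train = random_features(X_train, W)
    Phi_test = random_features(X_test, W)
    # lstsq returns the minimum-norm least-squares solution, which (roughly)
    # interpolates the training data once n_features >= n_train.
    coef, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)
    test_mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"features = {n_features:5d}  test MSE = {test_mse:8.3f}")
```

Plotting test MSE against the number of features in such an experiment typically traces the characteristic peak near the interpolation threshold followed by a second descent.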
The goal of this workshop is to cross-fertilize the wide range of theoretical perspectives required to understand overparameterized models, including statistical, approximation-theoretic, and optimization viewpoints. The workshop will be the first of its kind in this space and will foster dialogue among researchers on not only cutting-edge theoretical studies of the relevant phenomena but also empirical studies that characterize numerical behaviors in a manner that can inspire new theoretical studies.
Invited speakers:
- Peter Bartlett, UC Berkeley
- Florent Krzakala, École Normale Supérieure
- Gitta Kutyniok, LMU Munich
- Michael Mahoney, UC Berkeley
- Robert Nowak, University of Wisconsin-Madison
- Tomaso Poggio, MIT
- Matthieu Wyart, EPFL
Organizing committee:
- Demba Ba, Harvard University
- Richard Baraniuk, Rice University
- Mikhail Belkin, UC San Diego
- Yehuda Dar, Rice University
- Vidya Muthukumar, Georgia Tech
- Ryan Tibshirani, Carnegie Mellon University
Workshop dates: April 20-21, 2021
Virtual event
Free registration
Workshop website: https://topml.rice.edu
Abstract submission deadline: February 18, 2021
Call for Contributions: https://topml.rice.edu/call-for-contributions/