There is a long history of algorithmic development for solving the inferential and estimation problems that play a central role in a variety of learning, sensing, and processing systems, including medical imaging scanners, numerous machine learning algorithms, and compressive sensing, to name just a few. Until recently, most algorithms for these problems have iteratively applied static models derived from physics or intuition.
In this course, we will explore a new approach based on “learning” various elements of the problem, including i) stepsizes and parameters of iterative algorithms, ii) regularizers, and iii) inverse functions. For example, we will explore a new approach to solving inverse problems that transforms an iterative, physics-based algorithm into a deep network whose parameters can be learned from training data. For a range of inverse problems, such deep networks have been shown to converge faster to better-quality solutions. Specific topics include: ill-posed inverse problems, iterative optimization, deep learning, neural networks, and learned regularizers.
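To make the unrolling idea concrete, here is a minimal sketch (not from the course materials) of ISTA for sparse recovery, run for a fixed number of iterations so that each iteration can be viewed as one layer of a network. The per-layer thresholds `thetas` stand in for the parameters that a learned variant (e.g., LISTA) would train from data; here they are simply fixed by hand.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, A, thetas):
    """Run ISTA for len(thetas) fixed iterations ('layers').

    In a learned network, the thresholds (and possibly the matrices
    applied at each layer) would be trained; here they are fixed.
    """
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for theta in thetas:
        # Gradient step on the data-fit term, then shrinkage.
        x = soft_threshold(x + A.T @ (y - A @ x) / L, theta / L)
    return x

# Toy demo: recover a sparse vector from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = unrolled_ista(y, A, thetas=[0.1] * 10)  # 10 "layers"
```

The point of unrolling is that the loop above has a fixed depth, so it can be implemented in an autodiff framework and its parameters optimized end-to-end on training pairs (y, x_true).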
This is a “reading course,” meaning that students will read classic and recent papers from the technical literature and present them to the rest of the class in a lively debate format. Discussions will aim at identifying common themes and important trends in the field. Students will also get hands-on experience with optimization problems and deep learning software through a group project.
- Location: 1075 Duncan Hall
- Time: Friday 2pm
- Instructors: Reinhard Heckel & Richard Baraniuk
- Prerequisites: Required: linear algebra, introduction to probability and statistics, and familiarity with a programming language such as Python or MATLAB. Desired: knowledge of signal processing, machine learning, convex optimization, and deep learning
- Course Website: Piazza Course Management Site (use of this site is mandatory; all official announcements will be made there)