The DSP group will present two papers at the International Conference on Artificial Intelligence and Statistics (AISTATS) in June 2020 in Palermo, Sicily, Italy.
- D. LeJeune, H. Javadi, R. G. Baraniuk, "The Implicit Regularization of Ordinary Least Squares Ensembles," AISTATS, 2020
- D. LeJeune, G. Dasarathy, R. G. Baraniuk, "Thresholding Graph Bandits with GrAPL," AISTATS, 2020
In Fall 2019, Chinmay Hegde (PhD 2012) moved from Iowa State University to New York University (NYU).
In Fall 2019, Vinay Ribeiro (PhD 2005) moved from IIT-Delhi to IIT-Bombay.
In Summer 2020, Christoph Studer (postdoc 2010-12) will move from Cornell University to ETH-Zurich.
Rice DSP will again be well-represented at NeurIPS 2019 in Vancouver, Canada.
An enlightening profile of Dean, ECE Professor Emeritus, and Rice DSP Group co-founder C. Sidney Burrus appeared in the 24 October 2019 edition of Rice Magazine.
DSP alum Christoph Studer (postdoc 2010-12) has been awarded tenure at Cornell University. Christoph is an expert in signal processing, communications, machine learning, and their implementation in VLSI circuits. He has received a Swiss National Science Foundation postdoc fellowship, a US NSF CAREER Award, and numerous best paper awards. He is still not a fan of Blender.
D. LeJeune, H. Javadi, R. G. Baraniuk, "The Implicit Regularization of Ordinary Least Squares Ensembles," arxiv.org/abs/1910.04743, 10 October 2019.
Ensemble methods that average over a collection of independent predictors, each limited to a random subsample of both the examples and the features of the training data, command a significant presence in machine learning (the ever-popular random forest, for example), yet the nature of the subsampling effect, particularly on the features, is not well understood. We study the case of an ensemble of linear predictors, where each individual predictor is fit using ordinary least squares on a random submatrix of the data matrix. We show that, under standard Gaussianity assumptions, when the number of features selected for each predictor is optimally tuned, the asymptotic risk of a large ensemble equals the asymptotic ridge regression risk, which is known to be optimal among linear predictors in this setting. In addition to eliciting this implicit regularization that results from subsampling, we also connect this ensemble to the dropout technique used in training deep (neural) networks, another strategy that has been shown to have a ridge-like regularizing effect.
Above: Example (rows) and feature (columns) subsampling of the training data X used in the ordinary least squares fit for one member of the ensemble. The i-th member of the ensemble is only allowed to predict using its subset of the features (green). It must learn its parameters by performing ordinary least squares using the subsampled responses (red) and the subsampled examples (rows) and features (columns) of X (blue, crosshatched).
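The procedure described above can be sketched in a few lines of NumPy: each ensemble member draws a random subset of rows (examples) and columns (features), solves an ordinary least squares problem on that submatrix, and the ensemble prediction is the average over members. This is a minimal illustrative sketch, not the authors' code; the function names, the subsampling fractions, and the defaults are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ols_ensemble(X, y, k, n_members=100, example_frac=0.8, rng=rng):
    """Fit an ensemble of OLS predictors, each on a random submatrix of (X, y).

    k            -- number of features per member (the quantity tuned in the paper)
    example_frac -- fraction of training examples each member sees
    All names and defaults here are illustrative assumptions.
    """
    n, p = X.shape
    members = []
    for _ in range(n_members):
        rows = rng.choice(n, size=int(example_frac * n), replace=False)
        cols = rng.choice(p, size=k, replace=False)
        # Ordinary least squares on the subsampled rows and columns of X
        beta, *_ = np.linalg.lstsq(X[np.ix_(rows, cols)], y[rows], rcond=None)
        members.append((cols, beta))
    return members

def ensemble_predict(members, X):
    # Each member predicts using only its own feature subset; average the results
    preds = [X[:, cols] @ beta for cols, beta in members]
    return np.mean(preds, axis=0)
```

With Gaussian data and a well-chosen k, the paper's result says the risk of a large such ensemble matches that of optimally tuned ridge regression, so this averaging acts as an implicit regularizer.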
Mark Davenport, Rice DSP alum and Associate Professor at Georgia Tech, has been awarded a Presidential Early Career Award for Scientists and Engineers. The PECASE is the highest honor bestowed by the United States Government on outstanding scientists and engineers who are beginning their independent research careers and who show exceptional promise for leadership in science and technology. Mark's additional awards include a Sloan Research Fellowship in Mathematics and the Class of 1940 W. Roane Beard Outstanding Teacher Award from Georgia Tech. Mark's Georgia Tech colleague and Rice DSP alum Justin Romberg was awarded the PECASE in 2009.
The DSP50 event on April 26th was a great success, and we sincerely thank all who made it possible as well as all who were able to join us. DSP50 commemorated and celebrated the past, present, and future of one of the longest-running successful research and education programs at Rice with a variety of speakers, panel sessions, and discussions. If you missed out but are interested in the videos, they can now be found on the Rice University ECE YouTube page at the links below.
We are also working on developing a DSP50 commemorative coffee table book. If you're interested in a copy upon completion, please fill out this form.
Welcome and Intro
Genesis and Beyond
Waves of Progress
Selection of DSP Impacts
The Future of DSP at Rice
26 April 2019
Rice University has been a major force in DSP research and education since two young faculty members launched their first DSP course some 50 years ago. Sidney Burrus and Tom Parks, together with their colleagues, joined over the years by Rui deFigueredo, Don Johnson, and waves of additional faculty, built an internationally recognized program that spawned key theory and algorithms for digital filter design, fast Fourier transforms, array processing, wavelet transforms, compressive sensing, and deep learning. Rice's many outstanding DSP alumni now hold leadership positions in academia, industry, and government.
DSP50 will commemorate and celebrate the past, present, and future of one of the longest-running successful research and education programs at Rice with a variety of speakers, panel sessions, and discussions.
WHEN: Friday 26 April 2019
WHERE: BioScience Research Collaborative (BRC), Rice University, Houston, Texas
MORE INFO: dsp.rice.edu/DSP50
Rice PhD alum Eva Dyer has been awarded a prestigious Sloan Fellowship to support her research at the Georgia Institute of Technology on new computational methods for discovering principles that govern the organization and structure of the brain. Eva's broader research interests lie at the intersection of machine learning, optimization, and neuroscience.