Rice Assistant Professor Anshumali Shrivastava has received an AFOSR YIP award for a project on Sub-Linear Algorithms for Learning and Sensing with Massive Data. He will be investigating the power of hashing algorithms for making state-of-the-art compressed sensing and information fusion methods more algorithmically efficient by cutting down memory and computation costs exponentially.
DSP Alum Justin Romberg (PhD, 2003), the Schlumberger Professor of ECE at Georgia Tech, has been elected an IEEE Fellow for his seminal contributions to compressive sensing. He has received a number of prestigious awards for his research in signal processing and machine learning, including the ONR Young Investigator Award, PECASE Award, Packard Fellowship, and Rice Outstanding Engineering Alumnus.
Patients who have to undergo a magnetic resonance imaging (MRI) scan may be spared the ordeal of having to lie still in the scanner for up to 45 minutes, thanks to new compressive sensing technology developed in the groups of Rice ECE faculty Richard Baraniuk and Kevin Kelly. The patented technology was recently licensed from Rice by Siemens Healthineers.
Magnetic resonance imaging (MRI) scanners equipped with compressive sensing operate much more quickly than current scanners. Siemens Healthineers has applied the technology to help solve an important clinical problem: how to reduce long scan times while maintaining high diagnostic quality. The result is the first clinical application of compressive sensing for cardiovascular imaging; it was approved for clinical use in February 2017 by the Food and Drug Administration. Thanks to compressive sensing, scans of the beating heart can be completed in as little as 25 seconds while the patient breathes freely. In contrast, in an MRI scanner equipped with conventional acceleration techniques, patients must lie still for six minutes or more and hold their breath seven to 12 times over the course of a cardiovascular scan.
A. Aghazadeh, A. Lan, A. Shrivastava, R. G. Baraniuk, "RHash: Robust Hashing via L_infinity-norm Distortion," Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), main track, pp. 1386-1394. https://doi.org/10.24963/ijcai.2017/192
Hashing is an important tool in large-scale machine learning. Unfortunately, current data-dependent hashing algorithms are not robust to small perturbations of the data points, which degrades the performance of nearest neighbor (NN) search. The culprit is the minimization of the L_2-norm, average distortion among pairs of points to find the hash function. Inspired by recent progress in robust optimization, we develop a novel hashing algorithm, dubbed RHash, that instead minimizes the L_infinity-norm, worst-case distortion among pairs of points. We develop practical and efficient implementations of RHash that couple the alternating direction method of multipliers (ADMM) framework with column generation to scale well to large datasets. A range of experimental evaluations demonstrates the superiority of RHash over ten state-of-the-art binary hashing schemes. In particular, we show that RHash achieves the same retrieval performance as the state-of-the-art algorithms in terms of average precision while using up to 60% fewer bits.
Above is a comparison of the robustness and nearest neighbor (NN) preservation performance of embeddings based on minimizing the L_2-norm (average distortion) vs. the L_infinity-norm (worst-case distortion) on a subset of the MNIST handwritten digit dataset projected onto its first two principal components. The subset consists of the 50 nearest neighbors (NN) (in the 2-dimensional ambient space) of the centroid q of the cluster of “8” digits. (a) Optimal embeddings for both distortion measures computed using a grid search over the orientation of the line representing the embedding. (b) Robustness of the embedding orientations to the addition of a small amount of white Gaussian noise to each data point. This plot of the mean square error of the orientation of the L_2-optimal embedding divided by the mean square error of the orientation of the L_infinity-optimal embedding indicates that the latter embedding is significantly more robust to perturbations in the data points. (c) Comparison of the top-5 NNs of the query point q obtained in the ambient space using the L_infinity- and L_2-optimal embeddings (no added noise). (d) Projections of the data points onto the L_infinity- and L_2-optimal embeddings (no added noise).
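The grid search in panel (a) is easy to reproduce in miniature: project a 2-D point cloud onto a line and compare the orientation that minimizes the average pairwise distortion (the L_2-style objective) with the one that minimizes the worst-case distortion (the L_infinity-style objective RHash uses). This is a self-contained toy sketch on synthetic points, not the MNIST subset or the RHash algorithm itself:

```python
import math
import random

random.seed(0)
# Toy anisotropic 2-D point cloud (a stand-in for the "8"-digit cluster
# in the figure; illustrative only).
points = [(random.gauss(0, 1.0), random.gauss(0, 0.3)) for _ in range(30)]

def pair_distortions(theta):
    """Per-pair distortion of the 1-D embedding x -> <x, u(theta)>:
    |embedded distance - original Euclidean distance| over all pairs."""
    ux, uy = math.cos(theta), math.sin(theta)
    dists = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            orig = math.hypot(x1 - x2, y1 - y2)
            emb = abs((x1 - x2) * ux + (y1 - y2) * uy)
            dists.append(abs(emb - orig))
    return dists

# Grid search over line orientations in [0, pi), as in panel (a).
thetas = [k * math.pi / 360 for k in range(360)]
theta_avg = min(thetas, key=lambda t: sum(pair_distortions(t)))  # average (L_2-style)
theta_wc = min(thetas, key=lambda t: max(pair_distortions(t)))   # worst-case (L_infinity-style)

print("average-distortion-optimal angle (deg):", round(math.degrees(theta_avg), 1))
print("worst-case-distortion-optimal angle (deg):", round(math.degrees(theta_wc), 1))
print("worst-case distortion at each:",
      round(max(pair_distortions(theta_avg)), 3),
      round(max(pair_distortions(theta_wc)), 3))
```

By construction, the worst-case-optimal orientation never incurs a larger maximum distortion than the average-optimal one; RHash minimizes this L_infinity objective over binary hash functions rather than 1-D linear projections.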
Rice University engineers are building a flat microscope, called FlatScope (TM), and developing software that can decode and trigger neurons on the surface of the brain. The goal as part of a new government initiative is to provide an alternate path for sight and sound to be delivered directly to the brain. The project is part of a $65 million effort announced this week by the federal Defense Advanced Research Projects Agency (DARPA) to develop a high-resolution neural interface. Among many long-term goals, the Neural Engineering System Design (NESD) program hopes to compensate for a person's loss of vision or hearing by delivering digital information directly to parts of the brain that can process it.
Press coverage: Engadget, Photonics.com, ScienceAlert
A. Mousavi, G. Dasarathy, R. G. Baraniuk, “DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks,” arXiv:1707.03386, July 2017.
We develop a novel computational sensing framework for sensing and recovering structured signals called DeepCodec. When trained on a set of representative signals, our framework learns to take undersampled measurements and recover signals from them using a deep convolutional neural network. In other words, it learns a transformation from the original signals to a near-optimal number of undersampled measurements and the inverse transformation from measurements to signals. This is in contrast to conventional compressive sensing (CS) systems that use random linear measurements and convex optimization or iterative algorithms for signal recovery. We compare our new framework with ℓ1-minimization from the phase transition point of view and demonstrate that it outperforms ℓ1-minimization in the regions of the phase transition plot where ℓ1-minimization cannot recover the exact solution. In addition, we experimentally demonstrate how learning measurements enhances recovery performance, speeds up training, and reduces the number of parameters to learn.
DeepCodec learns a transformation from signals x to measurement vectors y and an approximate inverse transformation from measurement vectors y to signals x using a deep convolutional network that consists of convolutional and sub-pixel convolution layers.
Recovery comparison of DeepCodec vs. LASSO (with optimal regularization parameter).
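The claim that learned measurements improve recovery over random ones can be illustrated with a deliberately simplified linear analogue. In this sketch (my illustration, not the DeepCodec architecture), PCA-based measurements and a least-squares decoder fit on training data stand in for DeepCodec's learned convolutional encoder and inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 8, 16  # signal length, number of measurements, latent dimension

# Structured signals: a k-dim subspace with a decaying spectrum -- a toy
# stand-in for the "representative signals" the framework is trained on.
basis = rng.standard_normal((n, k))
scales = 0.8 ** np.arange(k)

def draw(num):
    return basis @ (scales[:, None] * rng.standard_normal((k, num)))

train, test = draw(500), draw(100)

# "Learned" linear encoder: top-m principal directions of the training set,
# a linear caricature of learned measurement layers.
U, _, _ = np.linalg.svd(train, full_matrices=False)
Phi_learned = U[:, :m].T                          # m x n measurement matrix
# Conventional CS baseline: random Gaussian measurements.
Phi_random = rng.standard_normal((m, n)) / np.sqrt(m)

def decode(Phi, Y):
    """Linear decoder fit on the training set: least-squares map y -> x
    (stands in for the learned convolutional inverse transformation)."""
    D = train @ np.linalg.pinv(Phi @ train)       # n x m recovery matrix
    return D @ Y

for name, Phi in [("learned", Phi_learned), ("random", Phi_random)]:
    err = np.linalg.norm(decode(Phi, Phi @ test) - test) / np.linalg.norm(test)
    print(f"{name} measurements: relative recovery error = {err:.3f}")
```

Because a rank-m linear encoder/decoder pair is optimized exactly by PCA, the learned measurements can never do worse than random ones in this linear setting; DeepCodec generalizes the same encode/decode idea to deep nonlinear networks.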
Rice University-based nonprofit OpenStax, which is already changing the economics of higher education by providing free textbooks to more than 1 million college students per year, today launched a low-cost, personalized learning system called OpenStax Tutor Beta that analyzes how students learn to offer them individualized homework and tutoring. In development for three years, the system will be available this fall for three courses: college physics, biology and sociology. While students study using OpenStax Tutor, it learns how they learn — what they struggle with, what helps them most — and it uses that information to offer just-in-time remediation and enrichment. The system provides personalized assessment and spaced practice, helping students focus their studying efforts on their weak areas and remember what they learned earlier in the course.
DSP alum Marco Duarte (PhD, 2009) has been promoted to the position of Associate Professor with Tenure at the University of Massachusetts at Amherst effective September 2017. Marco is an expert in sparse signal processing, sensor networks, and pattern recognition. He has received the IEEE Signal Processing Society Overview Paper Award, an NSF/IPAM Mathematical Sciences Research Institutes Postdoctoral Fellowship, and the SPARS Best Student Paper Award. Congratulations!
The National Academy of Sciences announced today the election of 84 new members and 21 foreign associates in recognition of their distinguished and continuing achievements in original research. Among them is Ron DeVore, the Walter E. Koss Professor in the Department of Mathematics at Texas A&M University and a pioneer of approximation theory and sparse representations. His work at Rice as the TI Visiting Professor in 2005-2006 focused on compressive sensing.
Prof. Richard Baraniuk of the Rice DSP/Machine Learning group has two open postdoc positions for research in machine learning theory and methods, computational sensing/imaging, and sparse signal processing. The group offers an energizing environment for research; 25 recent postdocs and PhD students have been placed in top faculty positions. Apply here!