Eric Chi, an assistant professor of statistics at North Carolina State University, has been selected for an NSF CAREER Award for a project on "Stable and Scalable Estimation of the Intrinsic Geometry of Multiway Data." Eric will be developing new clustering and bi-clustering methods and validating them on large datasets from high-throughput bioinformatics and neuroscience.
Rice DSP alum Douglas L. Jones (BS 1983, MS 1986, PhD 1987) has been named the William L. Everitt Distinguished Professor in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Doug is a global leader in the area of digital signal processing. His research includes neuroengineering, bioengineering and acoustics, and efficient energy management and conversion for sensing systems, communications, and information technology. The professorship was established to honor the memory of William L. Everitt, who served as head of the electrical engineering department from 1944 to 1949 and as dean of the College of Engineering from 1949 to 1968. The former home of the ECE department, Everitt Laboratory, bears his name.
Rice Assistant Professor Anshumali Shrivastava has received an AFOSR YIP award for a project on Sub-Linear Algorithms for Learning and Sensing with Massive Data. He will be investigating the power of hashing algorithms for making state-of-the-art compressed sensing and information fusion methods more algorithmically efficient by exponentially reducing their memory and computation costs.
DSP Alum Justin Romberg (PhD, 2003), the Schlumberger Professor of ECE at Georgia Tech, has been elected an IEEE Fellow for his seminal contributions to compressive sensing. He has received a number of prestigious awards for his research in signal processing and machine learning, including the ONR Young Investigator Award, PECASE Award, Packard Fellowship, and Rice Outstanding Engineering Alumnus.
Patients who have to undergo a magnetic resonance imaging (MRI) scan may be spared the ordeal of having to lie still in the scanner for up to 45 minutes, thanks to new compressive sensing technology developed in the groups of Rice ECE faculty Richard Baraniuk and Kevin Kelly. The patented technology was recently licensed from Rice by Siemens Healthineers.
Magnetic resonance imaging (MRI) scanners equipped with compressive sensing operate much more quickly than current scanners. Siemens Healthineers has applied the technology to help solve an important clinical problem: how to reduce long scan times while maintaining high diagnostic quality. The result is the first clinical application of compressive sensing for cardiovascular imaging; it was approved for clinical use in February 2017 by the Food and Drug Administration. Thanks to compressive sensing, scans of the beating heart can be completed in as few as 25 seconds while the patient breathes freely. In contrast, in an MRI scanner equipped with conventional acceleration techniques, patients must lie still for six minutes or more and hold their breath seven to 12 times over the course of a cardiovascular exam.
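The speedup comes from the core idea of compressive sensing: acquire far fewer measurements than the image has pixels, then reconstruct by exploiting sparsity. The following toy sketch (random Gaussian measurements and an ISTA solver for the LASSO; purely illustrative, not Siemens' actual MRI reconstruction) shows a sparse signal recovered from roughly a third as many measurements as unknowns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8           # signal length, measurements (m << n), sparsity

# k-sparse ground-truth signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# random Gaussian measurement matrix and undersampled measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# ISTA: iterative soft-thresholding to solve the LASSO
lam = 0.05
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the least-squares gradient
x_hat = np.zeros(n)
for _ in range(1000):
    z = x_hat - A.T @ (A @ x_hat - y) / L        # gradient step
    x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Despite having only 96 equations for 256 unknowns, the sparsity prior pins down the solution; the same principle lets an MRI scanner skip most of k-space and still produce a diagnostic image.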
DSP group members Gautam Dasarathy and Rich Baraniuk are organizing a workshop on Advances in Modeling and Learning Interactions from Complex Data @ NIPS 2017.
- Submission deadline: Oct. 27, 2017.
- Author notification: Nov. 10, 2017.
- Workshop: Dec. 8, 2017.
See the workshop website for more details. Please consider submitting your best work.
A. Aghazadeh, A. Lan, A. Shrivastava, R. G. Baraniuk, "RHash: Robust Hashing via L_infinity-norm Distortion," Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), main track, pp. 1386-1394. https://doi.org/10.24963/ijcai.2017/192
Hashing is an important tool in large-scale machine learning. Unfortunately, current data-dependent hashing algorithms are not robust to small perturbations of the data points, which degrades the performance of nearest neighbor (NN) search. The culprit is that these algorithms find the hash function by minimizing the L_2-norm (average) distortion among pairs of points. Inspired by recent progress in robust optimization, we develop a novel hashing algorithm, dubbed RHash, that instead minimizes the L_infinity-norm (worst-case) distortion among pairs of points. We develop practical and efficient implementations of RHash that couple the alternating direction method of multipliers (ADMM) framework with column generation to scale well to large datasets. A range of experimental evaluations demonstrate the superiority of RHash over ten state-of-the-art binary hashing schemes. In particular, we show that RHash achieves the same retrieval performance as the state-of-the-art algorithms in terms of average precision while using up to 60% fewer bits.
Above is a comparison of the robustness and nearest neighbor (NN) preservation performance of embeddings based on minimizing the L_2-norm (average distortion) vs. the L_infinity-norm (worst-case distortion) on a subset of the MNIST handwritten digit dataset projected onto its first two principal components. The subset consists of the 50 nearest neighbors (NN) (in the 2-dimensional ambient space) of the centroid q of the cluster of “8” digits. (a) Optimal embeddings for both distortion measures computed using a grid search over the orientation of the line representing the embedding. (b) Robustness of the embedding orientations to the addition of a small amount of white Gaussian noise to each data point. This plot of the mean square error of the orientation of the L_2-optimal embedding divided by the mean square error of the orientation of the L_infinity-optimal embedding indicates that the latter embedding is significantly more robust to perturbations in the data points. (c) Comparison of the top-5 NNs of the query point q obtained in the ambient space using the L_infinity- and L_2-optimal embeddings (no added noise). (d) Projections of the data points onto the L_infinity- and L_2-optimal embeddings (no added noise).
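The distinction between the two objectives is easy to state concretely. The sketch below mirrors the figure's grid-search setup on toy 2-D data (random points standing in for the MNIST subset): for each candidate 1-D embedding direction it computes the average (L_2-style) and worst-case (L_infinity-style) distortion of pairwise distances, then picks the minimizer of each. This only illustrates the two objectives; the actual RHash algorithm optimizes binary hash functions via ADMM with column generation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2))       # toy stand-in for the 50-point MNIST subset

def distortions(theta):
    """Average and worst-case distortion of pairwise distances
    under projection onto the unit direction at angle theta."""
    d = np.array([np.cos(theta), np.sin(theta)])
    proj = X @ d
    errs = np.array([abs(np.linalg.norm(X[i] - X[j]) - abs(proj[i] - proj[j]))
                     for i, j in combinations(range(len(X)), 2)])
    return errs.mean(), errs.max()     # L_2-style vs. L_infinity-style objective

# grid search over the orientation of the embedding line, as in panel (a)
thetas = np.linspace(0, np.pi, 180, endpoint=False)
avg, worst = zip(*(distortions(t) for t in thetas))
theta_l2 = thetas[np.argmin(avg)]      # minimizes average distortion
theta_linf = thetas[np.argmin(worst)]  # minimizes worst-case distortion
```

Minimizing the maximum over pairs, rather than the mean, is what keeps a single badly distorted pair from being hidden by many well-preserved ones.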
Rice University engineers are building a flat microscope, called FlatScope (TM), and developing software that can decode and trigger neurons on the surface of the brain. The goal as part of a new government initiative is to provide an alternate path for sight and sound to be delivered directly to the brain. The project is part of a $65 million effort announced this week by the federal Defense Advanced Research Projects Agency (DARPA) to develop a high-resolution neural interface. Among many long-term goals, the Neural Engineering System Design (NESD) program hopes to compensate for a person's loss of vision or hearing by delivering digital information directly to parts of the brain that can process it.
Press in Engadget, Photonics.com, ScienceAlert
A. Mousavi, G. Dasarathy, R. G. Baraniuk, “DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks,” arXiv:1707.03386, July 2017.
We develop a novel computational sensing framework for sensing and recovering structured signals called DeepCodec. When trained on a set of representative signals, our framework learns to take undersampled measurements and recover signals from them using a deep convolutional neural network. In other words, it learns a transformation from the original signals to a near-optimal number of undersampled measurements and the inverse transformation from measurements to signals. This is in contrast to conventional compressive sensing (CS) systems that use random linear measurements and convex optimization or iterative algorithms for signal recovery. We compare our new framework with ℓ1-minimization from a phase-transition point of view and demonstrate that it outperforms ℓ1-minimization in the regions of the phase transition plot where ℓ1-minimization cannot recover the exact solution. In addition, we experimentally demonstrate how learning measurements enhances recovery performance, speeds up training, and reduces the number of parameters to learn.
DeepCodec learns a transformation from signals x to measurement vectors y and an approximate inverse transformation from measurement vectors y to signals x using a deep convolutional network that consists of convolutional and sub-pixel convolution layers.
Recovery comparison of DeepCodec vs. LASSO (with optimal regularization parameter).
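The encode/decode pipeline described above can be sketched in shapes alone: a strided convolution plays the role of the learned sensing operator (length-n signal in, n/stride measurements out), and a convolution followed by a sub-pixel (channel-interleaving) shuffle maps the measurements back to full length. The weights below are random and untrained, so this only demonstrates the transformations' structure; DeepCodec stacks several such layers and learns all weights end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
n, stride = 64, 4              # signal length, undersampling factor
m = n // stride                # number of measurements (here 16)

x = rng.standard_normal(n)     # input signal

# "Sensing": a strided convolution; here a single filter of width `stride`
# with random (untrained) weights, mapping x (length n) to y (length n/stride)
w_enc = rng.standard_normal(stride) / np.sqrt(stride)
y = np.array([w_enc @ x[i:i + stride] for i in range(0, n, stride)])

# "Recovery": a convolution producing `stride` channels per measurement,
# then a sub-pixel shuffle that interleaves the channels back to length n
w_dec = rng.standard_normal((stride, 3)) / np.sqrt(3)   # kernel width 3
y_pad = np.pad(y, 1)                                    # same-padding
feats = np.stack([np.convolve(y_pad, w_dec[c][::-1], mode="valid")
                  for c in range(stride)])              # shape (stride, m)
x_hat = feats.T.reshape(-1)                             # sub-pixel shuffle, length n
```

The sub-pixel shuffle is what lets the decoder upsample without transposed convolutions: each output channel fills in one phase of the reconstructed signal.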
Rice University-based nonprofit OpenStax, which is already changing the economics of higher education by providing free textbooks to more than 1 million college students per year, today launched a low-cost, personalized learning system called OpenStax Tutor Beta that analyzes how students learn to offer them individualized homework and tutoring. In development for three years, the system will be available this fall for three courses: college physics, biology and sociology. While students study using OpenStax Tutor, it learns how they learn — what they struggle with, what helps them most — and it uses that information to offer just-in-time remediation and enrichment. The system provides personalized assessment and spaced practice, helping students focus their studying efforts on their weak areas and remember what they learned earlier in the course.