Chinmay Hegde, an assistant professor of electrical and computer engineering at Iowa State University, has been selected for an NSF CAREER Award for a project on "Advances in Graph Learning and Inference." Chin will design scalable non-convex algorithms for learning graphs from static and/or time-varying local measurements, develop new approximation algorithms that use graphs to enable scalable post-hoc decision making in complex systems, and work to close gaps between the rigorous theory and the practice of neural network learning. Chin was also recently named a Black & Veatch Building a World of Difference Faculty Fellow in Engineering at Iowa State.
Eric Chi, an assistant professor of statistics at North Carolina State University, has been selected for an NSF CAREER Award for a project on "Stable and Scalable Estimation of the Intrinsic Geometry of Multiway Data." Eric will be developing new clustering and bi-clustering methods and validating them on large datasets from high-throughput bioinformatics and neuroscience.
Rice DSP alum Douglas L. Jones (BS 1983, MS 1986, PhD 1987) has been named the William L. Everitt Distinguished Professor in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Doug is a global leader in digital signal processing. His research spans neuroengineering, bioengineering and acoustics, and efficient energy management and conversion for sensing systems, communications, and information technology. The professorship honors the memory of William L. Everitt, who headed the electrical engineering department from 1944 to 1949 and served as dean of the College of Engineering from 1949 to 1968. The former home of the ECE department, Everitt Laboratory, bears his name.
Rice Assistant Professor Anshumali Shrivastava has received an AFOSR YIP award for a project on "Sub-Linear Algorithms for Learning and Sensing with Massive Data." He will be investigating the power of hashing algorithms for making state-of-the-art compressed sensing and information fusion methods more algorithmically efficient by cutting memory and computation costs exponentially.
J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, A. Veeraraghavan, “Single-Frame 3D Fluorescence Microscopy with Ultraminiature Lensless FlatScope,” Science Advances, Vol. 3, No. 12, 8 December 2017.
Abstract: Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies.
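The lensless imaging idea in the abstract can be illustrated at toy scale: model each sensor pixel as a mask-weighted sum of scene intensities, y = Ax, and invert with regularized least squares. The sketch below is an assumption-laden stand-in, not FlatScope's actual algorithm (which exploits a separable mask model to scale to megapixel sensors); the matrix sizes, random binary mask, and Tikhonov solver are illustrative choices only.

```python
import numpy as np

# Toy forward model of mask-based lensless imaging: y = A @ x + noise,
# where each row of A encodes which scene voxels a sensor pixel sees
# through the binary amplitude mask.
rng = np.random.default_rng(0)

n = 64            # number of scene voxels (flattened), illustrative size
m = 96            # number of sensor pixels (overdetermined here for simplicity)
A = rng.binomial(1, 0.5, size=(m, n)).astype(float)  # random binary mask weights

x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0        # a few point-like sources

y = A @ x_true + 0.01 * rng.standard_normal(m)       # one noisy sensor frame

# Tikhonov-regularized least squares: x_hat = (A^T A + lam I)^{-1} A^T y
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

A single linear solve suffices here only because the toy system is small and overdetermined; the appeal of the separable-mask structure in the paper is that it avoids ever forming a dense A for real sensor resolutions.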
Fig. 1. (A) Traditional microscopes capture the scene through an objective and tube lens (~20 to 460 mm), resulting in a quality image directly on the imaging sensor. (B) FlatScope captures the scene through an amplitude mask and spacer (~0.2 mm) and computationally reconstructs the image. Scale bars, 100 μm (inset, 50 μm). (C) Comparison of form factor and resolution for traditional lensed research microscopes, GRIN lens microscope, and FlatScope. FlatScope achieves high-resolution imaging while maintaining a large ratio of FOV relative to the cross-sectional area of the device (see Materials and Methods for elaboration). Microscope objectives are Olympus MPlanFL N (1.25×/2.5×/5×, NA = 0.04/0.08/0.15), Nikon Apochromat (1×/2×/4×, NA = 0.04/0.1/0.2), and Zeiss Fluar (2.5×/5×, NA = 0.12/0.25). (D) FlatScope prototype (shown without absorptive filter). Scale bars, 100 μm.
DSP Alum Justin Romberg (PhD, 2003), the Schlumberger Professor of ECE at Georgia Tech, has been elected an IEEE Fellow for his seminal contributions to compressive sensing. He has received a number of prestigious awards for his research in signal processing and machine learning, including the ONR Young Investigator Award, PECASE Award, Packard Fellowship, and Rice Outstanding Engineering Alumnus.
Patients who have to undergo a magnetic resonance imaging (MRI) scan may be spared the ordeal of having to lie still in the scanner for up to 45 minutes, thanks to new compressive sensing technology developed in the groups of Rice ECE faculty Richard Baraniuk and Kevin Kelly. The patented technology was recently licensed from Rice by Siemens Healthineers.
Magnetic resonance imaging (MRI) scanners equipped with compressive sensing operate much more quickly than current scanners. Siemens Healthineers has applied the technology to help solve an important clinical problem: how to reduce long scan times while maintaining high diagnostic quality. The result is the first clinical application of compressive sensing for cardiovascular imaging; it was approved for clinical use in February 2017 by the Food and Drug Administration. Thanks to compressive sensing, scans of the beating heart can be completed in as few as 25 seconds while the patient breathes freely. In contrast, in an MRI scanner equipped with conventional acceleration techniques, patients must lie still for six minutes or more and hold their breath seven to 12 times during a cardiovascular scan.
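The speedup comes from the core compressive-sensing principle: acquire far fewer measurements than pixels and recover the image by exploiting its sparsity. The minimal sketch below (not the Siemens implementation; the Gaussian measurement matrix, problem sizes, and ISTA solver are illustrative assumptions) recovers a sparse signal from undersampled linear measurements via ℓ1-regularized least squares.

```python
import numpy as np

# Compressive sensing toy: y = A @ x with m << n measurements, recovered by
# ISTA (proximal gradient descent on the l1-regularized least-squares
# objective 0.5*||A x - y||^2 + lam*||x||_1).
rng = np.random.default_rng(1)

n, m, k = 256, 96, 8                     # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                            # undersampled, noise-free measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the spectral norm
lam = 0.05
x = np.zeros(n)
for _ in range(2000):
    x = x + step * A.T @ (y - A @ x)      # gradient step on the data-fit term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```

In the MRI setting the measurements are undersampled Fourier-domain samples rather than Gaussian projections, and sparsity is imposed in a transform domain, but the recovery principle is the same.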
A. Aghazadeh, A. Lan, A. Shrivastava, R. G. Baraniuk, "RHash: Robust Hashing via L_infinity-norm Distortion," Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI), main track, pp. 1386-1394. https://doi.org/10.24963/ijcai.2017/192
Hashing is an important tool in large-scale machine learning. Unfortunately, current data-dependent hashing algorithms are not robust to small perturbations of the data points, which degrades the performance of nearest neighbor (NN) search. The culprit is the minimization of the average (L_2-norm) distortion among pairs of points to find the hash function. Inspired by recent progress in robust optimization, we develop a novel hashing algorithm, dubbed RHash, that instead minimizes the worst-case (L_infinity-norm) distortion among pairs of points. We develop practical and efficient implementations of RHash that couple the alternating direction method of multipliers (ADMM) framework with column generation to scale well to large datasets. A range of experimental evaluations demonstrates the superiority of RHash over ten state-of-the-art binary hashing schemes. In particular, we show that RHash achieves the same retrieval performance as the state-of-the-art algorithms in terms of average precision while using up to 60% fewer bits.
Above is a comparison of the robustness and nearest neighbor (NN) preservation performance of embeddings based on minimizing the L_2-norm (average distortion) vs. the L_infinity-norm (worst-case distortion) on a subset of the MNIST handwritten digit dataset projected onto its first two principal components. The subset consists of the 50 nearest neighbors (NN) (in the 2-dimensional ambient space) of the centroid q of the cluster of "8" digits. (a) Optimal embeddings for both distortion measures computed using a grid search over the orientation of the line representing the embedding. (b) Robustness of the embedding orientations to the addition of a small amount of white Gaussian noise to each data point. This plot of the mean square error of the orientation of the L_2-optimal embedding divided by the mean square error of the orientation of the L_infinity-optimal embedding indicates that the latter embedding is significantly more robust to perturbations in the data points. (c) Comparison of the top-5 NNs of the query point q obtained in the ambient space using the L_infinity- and L_2-optimal embeddings (no added noise). (d) Projections of the data points onto the L_infinity- and L_2-optimal embeddings (no added noise).
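To make the two objectives concrete, the sketch below computes both the average and the worst-case metric distortion of a linear embedding over all point pairs. RHash itself optimizes a binary hash with ADMM and column generation; this toy (with made-up data and a random projection) only evaluates the two distortion measures being contrasted.

```python
import numpy as np

# Average (L2-style) vs worst-case (L_infinity-style) distortion of a
# linear embedding: how much each pairwise distance changes under the map.
rng = np.random.default_rng(2)

X = rng.standard_normal((40, 16))          # 40 points in 16 dimensions (toy data)
P = rng.standard_normal((16, 4)) / 2.0     # random linear embedding to 4 dims
Z = X @ P

def pairwise_dists(M):
    """All Euclidean pairwise distances as an (n, n) matrix."""
    diff = M[:, None, :] - M[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

D, Dz = pairwise_dists(X), pairwise_dists(Z)
iu = np.triu_indices(len(X), k=1)          # each unordered pair once
distortion = np.abs(Dz[iu] - D[iu])        # per-pair metric distortion

avg_distortion = distortion.mean()         # what L2-type objectives control
worst_distortion = distortion.max()        # what the L_infinity objective controls
print(f"average: {avg_distortion:.2f}, worst-case: {worst_distortion:.2f}")
```

The worst-case value always dominates the average; minimizing it directly is what gives RHash its robustness, since no single pair of points is allowed to be badly distorted.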
Rice University engineers are building a flat microscope, called FlatScope (TM), and developing software that can decode and trigger neurons on the surface of the brain. The goal, part of a new government initiative, is to provide an alternative path for sight and sound to be delivered directly to the brain. The project is part of a $65 million effort announced this week by the federal Defense Advanced Research Projects Agency (DARPA) to develop a high-resolution neural interface. Among many long-term goals, the Neural Engineering System Design (NESD) program hopes to compensate for a person's loss of vision or hearing by delivering digital information directly to parts of the brain that can process it.
A. Mousavi, G. Dasarathy, R. G. Baraniuk, “DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks,” arXiv:1707.03386, July 2017.
We develop a novel computational sensing framework for sensing and recovering structured signals called DeepCodec. When trained on a set of representative signals, our framework learns to take undersampled measurements and recover signals from them using a deep convolutional neural network. In other words, it learns a transformation from the original signals to a near-optimal number of undersampled measurements and the inverse transformation from measurements to signals. This is in contrast to conventional compressive sensing (CS) systems that use random linear measurements and convex optimization or iterative algorithms for signal recovery. We compare our new framework with ℓ1-minimization from the phase transition point of view and demonstrate that it outperforms ℓ1-minimization in the regions of the phase transition plot where ℓ1-minimization cannot recover the exact solution. In addition, we experimentally demonstrate how learning measurements enhances recovery performance, speeds up training, and reduces the number of parameters to learn.
DeepCodec learns a transformation from signals x to measurement vectors y and an approximate inverse transformation from measurement vectors y to signals x using a deep convolutional network that consists of convolutional and sub-pixel convolution layers.
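The advantage of learned over random measurements can be seen even with a linear stand-in for the deep network. In the sketch below (all sizes, the subspace signal model, and the PCA-based "learned" encoder are illustrative assumptions, not DeepCodec's architecture), measurements adapted to the training data recover signals concentrated near a low-dimensional subspace better than random linear measurements paired with the best linear decoder.

```python
import numpy as np

# Learned vs random linear measurements for signals near a low-dim subspace.
# "Learned" here means data-adapted via PCA, a linear proxy for the trained
# convolutional encoder; the decoder in both cases is the best linear map.
rng = np.random.default_rng(3)

n, m, d, N = 64, 4, 4, 2000                 # signal dim, measurements, latent dim, samples
B = np.linalg.qr(rng.standard_normal((n, d)))[0]   # orthonormal subspace basis
X = rng.standard_normal((N, d)) @ B.T              # training signals in the subspace
X += 0.05 * rng.standard_normal(X.shape)           # small off-subspace noise

def recovery_error(A, X):
    """Relative error of the best linear decoder for sensing matrix A."""
    Y = X @ A.T                                    # take measurements y = A x
    W, *_ = np.linalg.lstsq(Y, X, rcond=None)      # least-squares decoder
    return np.linalg.norm(Y @ W - X) / np.linalg.norm(X)

A_rand = rng.standard_normal((m, n)) / np.sqrt(n)  # random sensing matrix
_, _, Vt = np.linalg.svd(X, full_matrices=False)
A_learn = Vt[:m]                                   # data-adapted measurements (PCA)

err_rand = recovery_error(A_rand, X)
err_learn = recovery_error(A_learn, X)
print(f"random: {err_rand:.3f}, learned: {err_learn:.3f}")
```

By the Eckart-Young theorem the PCA measurements are optimal among all linear encoder/decoder pairs at this rank, so the learned error can never exceed the random one; DeepCodec pushes the same idea further by making both the measurement and recovery maps nonlinear and deep.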
Recovery comparison of DeepCodec vs. LASSO (with optimal regularization parameter).