Publications

Rice DSP Publications Archive

  • Temporarily at this URL -- sorry for the inconvenience

Rice Compressive Sensing Community Publications Archive

  • Temporarily at this URL -- sorry for the inconvenience

Selected Recent Publications

A. Aghazadeh, A. Y. Lin, M. A. Sheikh, A. L. Chen, L. M. Atkins, C. L. Johnson, J. F. Petrosino, R. A. Drezek, R. G. Baraniuk, “Universal Microbial Diagnostics using Random DNA Probes,” Science Advances, Vol. 2, 28 September 2016.

Abstract:  Early identification of pathogens is essential for limiting development of therapy-resistant pathogens and mitigating infectious disease outbreaks. Most bacterial detection schemes use target-specific probes to differentiate pathogen species, creating time and cost inefficiencies in identifying newly discovered organisms. We present a novel universal microbial diagnostics (UMD) platform to screen for microbial organisms in an infectious sample, using a small number of random DNA probes that are agnostic to the target DNA sequences. Our platform leverages the theory of sparse signal recovery (compressive sensing) to identify the composition of a microbial sample that potentially contains novel or mutant species. We validated the UMD platform in vitro using five random probes to recover 11 pathogenic bacteria. We further demonstrated in silico that UMD can be generalized to screen for common human pathogens in different taxonomy levels. UMD’s unorthodox sensing approach opens the door to more efficient and universal molecular diagnostics.
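
The recovery step at the heart of UMD is standard sparse regression. As a rough illustration of the idea (not the paper's actual pipeline), the sketch below solves y = Ax for a sparse concentration vector x by iterative soft-thresholding; the affinity matrix A, the noise level, and the regularization weight are all stand-ins, with toy sizes chosen to echo the paper's 5-probe, 11-species experiment.

```python
import numpy as np

# Hypothetical setup: m random DNA probes, n candidate bacterial genomes.
# A[i, j] models the hybridization affinity of probe i to genome j;
# x is the (sparse) concentration vector of species present in the sample.
rng = np.random.default_rng(0)
m, n, k = 5, 11, 2                      # 5 probes, 11 species, 2 present
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.0, k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(A, y, lam=0.01, iters=500):
    """Iterative soft-thresholding for min_x 0.5||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - (A.T @ (A @ x - y)) / L                 # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
print("recovered support:", np.nonzero(x_hat > 1e-3)[0])
```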

T. A. Baran, R. G. Baraniuk, A. V. Oppenheim, P. Prandoni, and M. Vetterli, “MOOC Adventures in Signal Processing: Bringing DSP to the Era of Massive Open Online Courses,” IEEE Signal Processing Magazine, Vol. 33, No. 4, July 2016.

Abstract: In higher education circles, 2012 may be known as the “year of the MOOC”; the launch of several high-profile initiatives, both for profit (Coursera, Udacity) and not for profit (edX), created an electrified feeling in the community, with massive open online courses (MOOCs) becoming the hottest new topic in academic conversation. The sudden attention was perhaps slightly forgetful of many notable attempts at distance learning that occurred before, from campus TV networks to well-organized online repositories of teaching material. The new mode of delivery, however, was ushered in by a few large-scale computer science courses, whose broad success triggered significant media attention.

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: Thin, Bare-Sensor Cameras using Coded Aperture and Computation,” arXiv preprint arxiv.org/abs/1509.00116, 2015. To appear in IEEE Transactions on Computational Imaging, special issue on Extreme Imaging, 2017.

Abstract:  FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which enables a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.

Figure: FlatCam architecture. (a) Every light source within the camera field of view contributes to every pixel in the multiplexed image formed on the sensor. A computational algorithm reconstructs the image of the scene. Inset shows the mask-sensor assembly of our prototype in which a binary, coded mask is placed 0.5 mm away from an off-the-shelf digital image sensor. (b) An example of sensor measurements and the image reconstructed by solving a computational inverse problem.
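
The scalability claim comes from the separable mask: the sensor image factors as Y ≈ Φ_L X Φ_R^T, so calibration stores two small matrices instead of one enormous one. As a minimal sketch of why this helps (assuming this separable model; Φ_L, Φ_R are random stand-ins for calibrated matrices, and the regularizer is plain Tikhonov rather than the paper's full pipeline):

```python
import numpy as np

# Hypothetical separable FlatCam model: Y = Phi_L @ X @ Phi_R.T + noise.
rng = np.random.default_rng(1)
n_scene, n_sensor = 64, 96              # scene and sensor resolution (per axis)
Phi_L = rng.standard_normal((n_sensor, n_scene))
Phi_R = rng.standard_normal((n_sensor, n_scene))
X = np.zeros((n_scene, n_scene)); X[24:40, 24:40] = 1.0   # toy scene
Y = Phi_L @ X @ Phi_R.T + 0.01 * rng.standard_normal((n_sensor, n_sensor))

def tikhonov_separable(Y, Phi_L, Phi_R, tau=0.1):
    """Solve min_X ||Phi_L X Phi_R^T - Y||_F^2 + tau ||X||_F^2 via two SVDs.

    Exploiting separability keeps memory O(n^2); vectorizing the system
    into a single matrix would cost O(n^4).
    """
    UL, sL, VLt = np.linalg.svd(Phi_L, full_matrices=False)
    UR, sR, VRt = np.linalg.svd(Phi_R, full_matrices=False)
    B = UL.T @ Y @ UR                   # rotate data into the singular bases
    S = np.outer(sL, sR)                # per-entry singular value products
    return VLt.T @ (S * B / (S**2 + tau)) @ VRt

X_hat = tikhonov_separable(Y, Phi_L, Phi_R)
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```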

A. Patel, T. Nguyen, and R. G. Baraniuk, “A Probabilistic Theory of Deep Learning,” arXiv preprint arxiv.org/abs/1504.00641, 2 April 2015; a version appeared at NIPS 2016.

Abstract: A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks such as visual object and speech recognition.  The key factor complicating such tasks is the presence of numerous nuisance variables, for instance, the unknown object position, orientation, and scale in object recognition or the unknown voice pronunciation, pitch, and speed in speech recognition.  Recently, a new breed of deep learning algorithms has emerged for high-nuisance inference tasks; they are constructed from many layers of alternating linear and nonlinear processing units and are trained using large-scale algorithms and massive amounts of training data.  The recent success of deep learning systems is impressive — they now routinely yield pattern recognition systems with near- or super-human capabilities — but a fundamental question remains:  Why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive.

We answer this question by developing a new probabilistic framework for deep learning based on a Bayesian generative probabilistic model that explicitly captures variation due to nuisance variables.  The graphical structure of the model enables it to be learned from data using classical expectation-maximization techniques.  Furthermore, by relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks (DCNs) and random decision forests (RDFs), providing insights into their successes and shortcomings as well as a principled route to their improvement.

The figure below illustrates an example of a mapping from our Deep Rendering Model (DRM) to its factor graph to a Deep Convolutional Network (DCN) at one level of abstraction.  The factor graph representation of the DRM supports efficient inference algorithms such as max-sum message passing.  The computation that implements max-sum message passing matches the feedforward computation of a DCN.
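
To make the correspondence concrete: max-marginalizing the DRM's nuisance variables at one level reduces to the familiar convolution, ReLU, max-pool sequence of a DCN layer — template matching scores the rendering templates, the ReLU maximizes over a template on/off nuisance, and max-pooling maximizes over local translation. The sketch below is a purely illustrative single layer under those assumptions, not the paper's inference code; all filters and data are random.

```python
import numpy as np

# One illustrative DRM inference step: conv -> ReLU -> max-pool.
rng = np.random.default_rng(2)
image = rng.standard_normal((28, 28))
templates = rng.standard_normal((8, 5, 5))   # 8 rendering templates (filters)
bias = rng.standard_normal(8)

def conv_relu_maxpool(img, filt, b, pool=2):
    H, W = img.shape
    c_out, k = filt.shape[0], filt.shape[-1]
    out = np.empty((c_out, H - k + 1, W - k + 1))
    for c in range(c_out):                        # correlation = template match
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[c, i, j] = np.sum(img[i:i+k, j:j+k] * filt[c]) + b[c]
    out = np.maximum(out, 0.0)                    # max over on/off nuisance
    h, w = out.shape[1] // pool, out.shape[2] // pool
    pooled = out[:, :h*pool, :w*pool].reshape(c_out, h, pool, w, pool)
    return pooled.max(axis=(2, 4))                # max over local translations

activations = conv_relu_maxpool(image, templates, bias)
print(activations.shape)   # (8, 12, 12)
```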

C. A. Metzler, A. Maleki, and R. G. Baraniuk, “From Denoising to Compressed Sensing,” IEEE Transactions on Information Theory, Vol. 62, No. 9, pp. 5117–5144, September 2016.

Abstract:  A denoising algorithm seeks to remove perturbations or errors from a signal. The last three decades have seen extensive research devoted to this arena, and as a result, today’s denoisers are highly optimized algorithms that effectively remove large amounts of additive white Gaussian noise. A compressive sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop a denoising-based approximate message passing (D-AMP) algorithm that is capable of high-performance reconstruction. We demonstrate that, for an appropriate choice of denoiser, D-AMP offers state-of-the-art CS recovery performance for natural images. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A critical insight in our approach is the use of an appropriate Onsager correction term in the D-AMP iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.

The figure below illustrates reconstructions of the 256×256 Barbara test image (65,536 pixels) from 6554 randomized measurements.  Plugging the state-of-the-art BM3D denoiser into D-AMP yields state-of-the-art CS recovery.
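
A minimal sketch of the D-AMP recursion described in the abstract, with the Onsager correction obtained from a Monte Carlo estimate of the denoiser's divergence (the paper estimates this divergence numerically for black-box denoisers). Here a soft-thresholding denoiser stands in for BM3D, and all problem sizes and tuning constants are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)

def damp(y, A, denoise, iters=30, eps=1e-3):
    """D-AMP sketch: x <- D(x + A^T z), with an Onsager correction in the
    residual update that keeps the effective perturbation seen by the
    denoiser approximately white Gaussian."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        r = x + A.T @ z                          # pseudo-data for the denoiser
        sigma = np.linalg.norm(z) / np.sqrt(m)   # effective noise level
        x_new = denoise(r, sigma)
        # Monte Carlo estimate of the denoiser's divergence (Onsager term)
        eta = rng.standard_normal(n)
        div = eta @ (denoise(r + eps * eta, sigma) - x_new) / eps
        z = y - A @ x_new + (div / m) * z        # residual + Onsager correction
        x = x_new
    return x

# Toy problem with soft-thresholding standing in for a generic denoiser.
m, n, k = 120, 400, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true
soft = lambda r, s: np.sign(r) * np.maximum(np.abs(r) - 1.4 * s, 0.0)
x_hat = damp(y, A, soft)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```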