Rice DSP alum Tom Goldstein (postdoc 2012-2014) has received a DARPA Young Faculty Award for a project entitled Self-Assessing Network Models for Big Data Summarization. Many complex tasks can be completed by neither a human nor a computer alone. An example is searching a large image database: a human does not have the bandwidth to examine millions of images, while a neural network has the needed bandwidth but lacks the intelligence to analyze complex or unusual images that lie far from its training distribution. Tom's project will enhance human-computer teaming by giving machines the ability to assess their own performance and self-confidence. This way, machines can act independently and then ask their human collaborators for help when needed.
DSP group members traveled en masse to Stockholm, Sweden to present seven regular papers at the International Conference on Machine Learning in July 2018:
- R. Balestriero and R. G. Baraniuk, “A Spline Theory of Deep Learning,” International Conference on Machine Learning (ICML), July 2018
- A. Aghazadeh, R. Spring, D. LeJeune, G. Dasarathy, A. Srivastava, and R. G. Baraniuk, “Ultra-Large Scale Feature Selection using Count-Sketches,” International Conference on Machine Learning (ICML), July 2018
- C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “prDeep: Robust Phase Retrieval with Flexible Deep Neural Networks,” International Conference on Machine Learning (ICML), July 2018
- R. Cosentino, R. Balestriero, H. Glotin, and R. G. Baraniuk, “Spline Filters for End-to-End Deep Learning,” International Conference on Machine Learning (ICML), July 2018
- W. Nie, Y. Zhang, and A. Patel, “A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations,” International Conference on Machine Learning (ICML), July 2018
- R. Liao, L. Xiong, L. Zhang, E. Fetaya, K. Yoon, X. Pitkow, R. Urtasun, and R. Zemel, “Revisiting Recurrent Backpropagation,” International Conference on Machine Learning (ICML), July 2018
- J. Wu, Y. Wang, Z. Wu, Z. Wang, A. Veeraraghavan, and Y. Lin, “Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions,” International Conference on Machine Learning (ICML), July 2018
In addition, DSP group members presented three workshop papers:
- Z. Wang, A. Lan, W. Nie, A. Waters, P. Grimaldi, and R. G. Baraniuk, “QG-Net: A Data-Driven Question Generation Model for Educational Content,” ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models (TADGM), 2018
- A. Dave, A. K. Vadathya, R. Subramanyam, and K. Mitra, “Solving Inverse Problems in Compressive Imaging using a Deep Autoregressive Model,” ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models (TADGM), 2018
- N. Ho, T. Nguyen, A. Patel, A. Anandkumar, M. I. Jordan, and R. G. Baraniuk, “Latent-Dependent Deep Rendering Model,” ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models (TADGM), 2018
Rice DSP alum Rebecca Willett (PhD 2005) is joining the University of Chicago as a Professor of Computer Science and Statistics, where she will be developing a new machine learning initiative. Her research interests include machine learning, network science, medical imaging, wireless sensor networks, astronomy, and social networks. She has also held positions at the University of Wisconsin-Madison and Duke University. Rebecca has received the NSF CAREER Award and AFOSR YIP, and has served as a member of the DARPA Computer Science Study Group.
C. A. Metzler, A. Mousavi, R. Heckel, and R. G. Baraniuk, “Unsupervised Learning with Stein’s Unbiased Risk Estimator,” https://arxiv.org/abs/1805.10531, June 2018.
Appearing on NSF's Science360 News: Taking a closer look with a lens-free fluorescent microscope
J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single-Frame 3D Fluorescence Microscopy with Ultraminiature Lensless FlatScope,” Science Advances, Vol. 3, No. 12, 8 December 2017.
Abstract: Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies.
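The core computational idea in the abstract — the sensor records a mask-dependent linear transform of the scene, which an algorithm then inverts to recover the image — can be illustrated with a deliberately simplified toy model. The sketch below is purely illustrative and is not FlatScope's actual algorithm (the paper uses a calibrated separable mask model and full 3D volumes); here a random binary mask and a Tikhonov-regularized least-squares solve stand in for the real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: an n-pixel 1D "image" with a few fluorescent point sources
# (FlatScope reconstructs full 3D volumes; 1D keeps the idea visible).
n, m = 64, 256                      # scene size, number of sensor pixels
x = np.zeros(n)
x[[10, 30, 50]] = [1.0, 0.5, 0.8]

# Amplitude-mask forward model: each sensor pixel sees a 0/1-weighted
# sum of scene pixels. A is a stand-in for the calibrated transfer matrix.
A = rng.integers(0, 2, size=(m, n)).astype(float)
y = A @ x                           # one frame of (noise-free) sensor data

# Reconstruction by Tikhonov-regularized least squares:
#   x_hat = argmin_x ||A x - y||^2 + lam ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(np.max(np.abs(x_hat - x)))    # small: the scene is recovered
```

Because the mask multiplexes every scene point onto many sensor pixels, a single captured frame contains enough information to invert the transform computationally — which is what lets FlatScope trade a lens for an algorithm.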
Fig. 1. (A) Traditional microscopes capture the scene through an objective and tube lens (~20 to 460 mm), resulting in a quality image directly on the imaging sensor. (B) FlatScope captures the scene through an amplitude mask and spacer (~0.2 mm) and computationally reconstructs the image. Scale bars, 100 μm (inset, 50 μm). (C) Comparison of form factor and resolution for traditional lensed research microscopes, GRIN lens microscope, and FlatScope. FlatScope achieves high-resolution imaging while maintaining a large ratio of FOV relative to the cross-sectional area of the device (see Materials and Methods for elaboration). Microscope objectives are Olympus MPlanFL N (1.25×/2.5×/5×, NA = 0.04/0.08/0.15), Nikon Apochromat (1×/2×/4×, NA = 0.04/0.1/0.2), and Zeiss Fluar (2.5×/5×, NA = 0.12/0.25). (D) FlatScope prototype (shown without absorptive filter). Scale bars, 100 μm.
Rice DSP alum Andrew Lan (PhD 2016) has accepted an Assistant Professor position at UMass Amherst in the Department of Computer Science. He has spent the past two years as a postdoc at Princeton University and OpenStax. His research interests revolve around the development of human-in-the-loop machine learning methods that enable scalable, effective, and fail-safe personalized learning in education. Andrew joins DSP PhD alum Marco Duarte at UMass.
Rice DSP postdoc Gautam Dasarathy has accepted an Assistant Professor position at Arizona State University in the Department of Electrical and Computer Engineering. His research interests lie at the intersection of machine learning, signal processing, statistics, and information theory. He is particularly interested in the integrated design of learning algorithms and data acquisition systems (e.g., active learning) that involve both machine and human components.
Dr. Akane Sano from the MIT Media Lab will be joining the Rice ECE department in Fall 2018 as an Assistant Professor. She is currently a Research Scientist in the Affective Computing Group at MIT and a visiting scientist/lecturer in the People-Aware Computing Lab at Cornell University. Her research develops new technologies to measure, forecast, understand, and improve health and wellbeing using multi-modal wearable and mobile human sensing, data analysis and modeling, and application development. She has focused on measuring and predicting stress, mental health, and sleep performance for the purpose of improving overall wellbeing.
Dr. Yingyan Lin will join the Rice ECE department as an Assistant Professor in Fall 2018. She is currently the Texas Instruments Visiting Assistant Professor at Rice. Her research interests are in the areas of energy-efficient machine learning systems for cloud and mobile computing, analog and mixed-signal circuits, error resiliency techniques, and VLSI circuits and architectures for machine learning systems on resource-constrained platforms. She received her PhD in 2017 from the University of Illinois at Urbana-Champaign.