D. LeJeune, H. Javadi, R. G. Baraniuk, "The Flip Side of the Reweighted Coin: Duality of Adaptive Dropout and Regularization," NeurIPS 2021, arXiv:2106.07769.

Among the most successful methods for sparsifying deep (neural) networks are those that adaptively mask the network weights throughout training. By examining this masking, or dropout, in the linear case, we uncover a duality between such adaptive methods and regularization through the so-called "η-trick" that casts both as iteratively reweighted optimizations. We show that any dropout strategy that adapts to the weights in a monotonic way corresponds to an effective subquadratic regularization penalty, and therefore leads to sparse solutions. We obtain the effective penalties for several popular sparsification strategies, which are remarkably similar to classical penalties commonly used in sparse optimization. Considering variational dropout as a case study, we demonstrate similar empirical behavior between the adaptive dropout method and classical methods on the task of deep network sparsification, validating our theory.
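
For readers unfamiliar with the "η-trick," the ℓ1 case gives its flavor in one line. The identity below is the standard special case, shown for illustration rather than in the paper's full generality:

    % Quadratic variational form of the l1 penalty (the "eta-trick"):
    % minimizing over eta recovers |w|, so alternating over (w, eta)
    % turns an l1-penalized problem into a sequence of reweighted
    % quadratic (ridge-like) problems.
    \lvert w \rvert \;=\; \min_{\eta > 0} \left( \frac{w^2}{2\eta} + \frac{\eta}{2} \right),
    \qquad \eta^\star = \lvert w \rvert .

In this view, each adaptive masking step plays the role of the η update, and a subquadratic effective penalty is what drives the solutions toward sparsity.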

S. Sonkar, A. Katiyar, R. G. Baraniuk, "NePTuNe: Neural Powered Tucker Network for Knowledge Graph Completion," arXiv:2104.07824, April 15, 2021.

Accepted at ACM IJCKG 2021, the 10th International Joint Conference on Knowledge Graphs.

Knowledge graphs link entities through relations to provide a structured representation of real-world facts. However, they are often incomplete, because they are based on only a small fraction of all plausible facts. The task of knowledge graph completion via link prediction aims to overcome this challenge by inferring missing facts represented as links between entities. Current approaches to link prediction leverage tensor factorization and/or deep learning. Factorization methods train and deploy rapidly thanks to their small number of parameters but have limited expressiveness due to their underlying linear methodology. Deep learning methods are more expressive but also computationally expensive and prone to overfitting due to their large number of trainable parameters. We propose Neural Powered Tucker Network (NePTuNe), a new hybrid link prediction model that couples the expressiveness of deep models with the speed and size of linear models. We demonstrate that NePTuNe provides state-of-the-art performance on the FB15K-237 dataset and near state-of-the-art performance on the WN18RR dataset.
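
For intuition about the hybrid design, here is a hypothetical sketch of a Tucker-style scorer with a pointwise nonlinearity inserted into the multilinear product. The embedding dimensions, variable names, and placement of the nonlinearity are illustrative assumptions, not the exact NePTuNe architecture:

    import numpy as np

    # Hypothetical sketch of a nonlinear Tucker-style scorer for link
    # prediction. All dimensions and the nonlinearity placement are
    # illustrative assumptions, not NePTuNe's exact architecture.
    rng = np.random.default_rng(0)
    n_entities, n_relations, d_e, d_r = 1000, 50, 32, 16

    E = rng.normal(size=(n_entities, d_e))    # entity embeddings
    R = rng.normal(size=(n_relations, d_r))   # relation embeddings
    W = rng.normal(size=(d_r, d_e, d_e))      # shared Tucker core tensor

    def score(subject_id, relation_id):
        """Score every candidate object entity for (subject, relation)."""
        # Contract the core with the relation embedding: a (d_e, d_e) matrix.
        W_r = np.einsum('r,rse->se', R[relation_id], W)
        # Transform the subject embedding, then apply a nonlinearity
        # (the "neural power" in the hybrid model; placement is assumed).
        h = np.tanh(E[subject_id] @ W_r)
        # Inner products with all entity embeddings give per-object logits.
        return h @ E.T

    logits = score(subject_id=3, relation_id=7)
    print(logits.shape)  # (1000,) -- one score per candidate object

The small number of trainable arrays (two embedding tables plus one core tensor) is what keeps the parameter count close to that of linear factorization models, while the nonlinearity buys extra expressiveness.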

P. K. Kota, D. LeJeune, R. A. Drezek, R. G. Baraniuk, "Extreme Compressed Sensing of Poisson Rates from Multiple Measurements," arXiv:2103.08711, March 15, 2021.

Compressed sensing (CS) is a signal processing technique that enables the efficient recovery of a sparse high-dimensional signal from low-dimensional measurements. In the multiple measurement vector (MMV) framework, a set of signals with the same support must be recovered from their corresponding measurements. Here, we present the first exploration of the MMV problem where signals are independently drawn from a sparse, multivariate Poisson distribution. We are primarily motivated by a suite of biosensing applications of microfluidics where analytes (such as whole cells or biomarkers) are captured in small volume partitions according to a Poisson distribution. We recover the sparse parameter vector of Poisson rates through maximum likelihood estimation with our novel Sparse Poisson Recovery (SPoRe) algorithm. SPoRe uses batch stochastic gradient ascent enabled by Monte Carlo approximations of otherwise intractable gradients. By uniquely leveraging the Poisson structure, SPoRe substantially outperforms a comprehensive set of existing and custom baseline CS algorithms. Notably, SPoRe can exhibit high performance even with one-dimensional measurements and high noise levels. This resource efficiency is not only unprecedented in the field of CS but is also particularly potent for applications in microfluidics in which the number of resolvable measurements per partition is often severely limited. We prove the identifiability property of the Poisson model under such lax conditions, analytically develop insights into system performance, and confirm these insights in simulated experiments. Our findings encourage a new approach to biosensing and are generalizable to other applications featuring spatial and temporal Poisson signals.
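
The core computational idea, a Monte Carlo approximation of an otherwise intractable likelihood gradient, can be sketched in a few lines. The snippet below is a hedged illustration under assumed simplifications (Gaussian measurement noise, sampling counts directly from the current rate estimate as the proposal); it is not the paper's exact estimator:

    import numpy as np

    # Hedged sketch of a Monte Carlo gradient for the Poisson-rate MMV
    # likelihood, in the spirit of SPoRe's batch stochastic gradient
    # ascent. The noise model and proposal are illustrative assumptions.
    rng = np.random.default_rng(0)
    n, m, sigma = 20, 1, 0.1                  # signal dim, measurement dim, noise std
    Phi = rng.uniform(0.5, 1.5, size=(m, n))  # sensing matrix

    def mc_grad_log_lik(y, lam, n_samples=2000):
        """Self-normalized MC estimate of grad_lam log p(y | lam).

        Uses grad p(y|lam) = E_x[ p(y|x) * grad log p(x|lam) ] with
        x ~ Poisson(lam) and the Poisson score x/lam - 1.
        """
        x = rng.poisson(lam, size=(n_samples, n))            # candidate counts
        resid = y - x @ Phi.T                                # (n_samples, m)
        log_w = -0.5 * np.sum(resid**2, axis=1) / sigma**2   # log p(y|x), up to a constant
        w = np.exp(log_w - log_w.max())                      # stabilized weights
        score = x / lam - 1.0                                # Poisson score function
        return (w[:, None] * score).sum(axis=0) / w.sum()

    # One gradient-ascent step on a measurement drawn from the model.
    lam_true = np.zeros(n); lam_true[[2, 11]] = [1.0, 0.5]   # sparse rates
    y = Phi @ rng.poisson(lam_true) + sigma * rng.normal(size=m)
    lam = np.full(n, 0.1)                                    # initialization
    lam = np.maximum(lam + 0.05 * mc_grad_log_lik(y, lam), 1e-6)
    print(lam.round(3))

Because the same weights appear in numerator and denominator, the Gaussian normalizing constant cancels, which is what makes the self-normalized estimator practical even when only the unnormalized likelihood p(y|x) is cheap to evaluate.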

Rice DSP faculty Yingyan Lin has received an NSF CAREER award for her project "Differentiable Network-Accelerator Co-Search – Towards Ubiquitous On-Device Intelligence and Green AI." The project has two main aims: first, to bridge the vast gap between deep learning's prohibitive computational and energy complexity and the constrained resources of consumer devices, and second, to reduce the sizable environmental pollution that stems from energy-intensive deep learning training.

The contemporary practice in deep learning has challenged conventional approaches to machine learning. Specifically, deep neural networks are highly overparameterized models with respect to the number of data examples and are often trained without explicit regularization. Yet they achieve state-of-the-art generalization performance. Understanding the overparameterized regime requires new theory and foundational empirical studies. A prominent recent example is the "double descent" behavior of generalization errors that was discovered empirically in deep learning and then very recently analytically characterized for linear regression and related problems in statistical learning.
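
The double descent curve is easy to reproduce in miniature. The following sketch (with arbitrary problem sizes and noise level) tracks the test risk of the minimum-norm least-squares solution as the number of features p sweeps through the interpolation threshold p = n:

    import numpy as np

    # Minimal double-descent demo: the test risk of minimum-norm least
    # squares peaks near the interpolation threshold p = n, then descends
    # again in the overparameterized regime. Sizes and noise are arbitrary.
    rng = np.random.default_rng(0)
    n, p_max, sigma, trials = 40, 120, 0.5, 100
    beta = rng.normal(size=p_max) / np.sqrt(p_max)   # true coefficients

    for p in range(5, p_max + 1, 5):
        risks = []
        for _ in range(trials):
            X_full = rng.normal(size=(n, p_max))
            y = X_full @ beta + sigma * rng.normal(size=n)
            # Fit using only the first p features; pinv gives the unique
            # least-squares solution for p <= n and the minimum-norm
            # interpolator for p > n.
            beta_hat = np.linalg.pinv(X_full[:, :p]) @ y
            Z = rng.normal(size=(500, p_max))        # fresh test data
            risks.append(np.mean((Z[:, :p] @ beta_hat - Z @ beta) ** 2))
        print(f"p={p:4d}  test risk={np.mean(risks):.3f}")

Printed risks rise toward p = n = 40 and fall again beyond it, the signature second descent that the recent linear-regression analyses characterize.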

The goal of this workshop is to cross-fertilize the wide range of theoretical perspectives that will be required to understand overparameterized models, including the statistical, approximation-theoretic, and optimization viewpoints. The workshop will be the first of its kind in this space and will enable researchers to discuss not only cutting-edge theoretical studies of the relevant phenomena but also empirical studies that characterize numerical behaviors in a manner that can inspire new theoretical studies.

Invited speakers:

  • Peter Bartlett, UC Berkeley
  • Florent Krzakala, École Normale Supérieure
  • Gitta Kutyniok, LMU Munich
  • Michael Mahoney, UC Berkeley
  • Robert Nowak, University of Wisconsin-Madison
  • Tomaso Poggio, MIT
  • Matthieu Wyart, EPFL

Organizing committee:

  • Demba Ba, Harvard University
  • Richard Baraniuk, Rice University
  • Mikhail Belkin, UC San Diego
  • Yehuda Dar, Rice University
  • Vidya Muthukumar, Georgia Tech
  • Ryan Tibshirani, Carnegie Mellon University


Workshop dates: April 20-21, 2021
Virtual event
Free registration
Workshop website: https://topml.rice.edu
Abstract submission deadline: February 18, 2021
Call for Contributions available at https://topml.rice.edu/call-for-contributions/

DSP alum Justin Romberg (PhD, 2003), Schlumberger Professor of Electrical and Computer Engineering at Georgia Tech, has been awarded the 2021 IEEE Jack S. Kilby Signal Processing Medal. He and his co-awardees Emmanuel Candès of Stanford University and Terence Tao of UCLA will receive the highest honor in the field of signal processing for "groundbreaking contributions to compressed sensing."

Justin joins Rice DSP alum Jim McClellan (PhD, 1973), John and Marilu McCarty Chair of Electrical Engineering at Georgia Tech, and Rice DSP emeritus faculty member C. Sidney Burrus as recipients of this honor.

Rice DSP and ECE alums Marco Duarte, Jason Laska, Mark Davenport, Dharmpal Takhar, and Ting Sun, plus faculty Kevin Kelly and Richard Baraniuk, have been awarded the IEEE Signal Processing Magazine Best Paper Award for "Single-Pixel Imaging via Compressive Sampling: Building Simpler, Smaller, and Less-Expensive Digital Cameras," IEEE Signal Processing Magazine, March 2008.