S. Sonkar, A. Katiyar, R. G. Baraniuk, "NePTuNe: Neural Powered Tucker Network for Knowledge Graph Completion," arxiv.org/abs/2104.07824, April 15, 2021.

Accepted at ACM IJCKG 2021, the 10th International Joint Conference on Knowledge Graphs.

Knowledge graphs link entities through relations to provide a structured representation of real-world facts. However, they are often incomplete, because they are based on only a small fraction of all plausible facts. The task of knowledge graph completion via link prediction aims to overcome this challenge by inferring missing facts represented as links between entities. Current approaches to link prediction leverage tensor factorization and/or deep learning. Factorization methods train and deploy rapidly thanks to their small number of parameters but have limited expressiveness due to their underlying linear methodology. Deep learning methods are more expressive but also computationally expensive and prone to overfitting due to their large number of trainable parameters. We propose Neural Powered Tucker Network (NePTuNe), a new hybrid link prediction model that couples the expressiveness of deep models with the speed and size of linear models. We demonstrate that NePTuNe provides state-of-the-art performance on the FB15K-237 dataset and near state-of-the-art performance on the WN18RR dataset.
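
To give a concrete feel for the hybrid idea, the NumPy sketch below scores (head, relation, tail) triples with a Tucker-style contraction of a shared core tensor against entity and relation embeddings, inserting a nonlinearity between contractions. The sizes, random initialization, and exact placement of the nonlinearity are illustrative assumptions for this sketch, not the published NePTuNe architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: n_e entities, n_r relations, embedding dimension d.
    n_e, n_r, d = 1000, 50, 32

    E = rng.normal(scale=0.1, size=(n_e, d))   # entity embeddings
    R = rng.normal(scale=0.1, size=(n_r, d))   # relation embeddings
    W = rng.normal(scale=0.1, size=(d, d, d))  # shared Tucker core tensor

    def score(head, rel, tail):
        """Score a (head, relation, tail) triple.

        A purely linear Tucker model contracts the core tensor with the three
        embeddings; adding a nonlinearity after the first two contractions is
        one simple way to blend factorization with neural-style modeling.
        """
        h, r, t = E[head], R[rel], E[tail]
        hidden = np.einsum('ijk,i,j->k', W, h, r)  # contract core with head and relation
        hidden = np.tanh(hidden)                   # illustrative nonlinearity
        return hidden @ t                          # higher score = more plausible link

    # Rank all candidate tail entities for one (head, relation) query.
    scores = np.array([score(0, 3, t) for t in range(n_e)])
    print("top-5 predicted tails:", np.argsort(-scores)[:5])

In an actual model the embeddings and core tensor would be learned by gradient descent on observed triples; here they are random placeholders to keep the example self-contained.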

P. K. Kota, D. LeJeune, R. A. Drezek, R. G. Baraniuk, "Extreme Compressed Sensing of Poisson Rates from Multiple Measurements," arxiv.org/abs/2103.08711, March 15, 2021.

Compressed sensing (CS) is a signal processing technique that enables the efficient recovery of a sparse high-dimensional signal from low-dimensional measurements. In the multiple measurement vector (MMV) framework, a set of signals with the same support must be recovered from their corresponding measurements. Here, we present the first exploration of the MMV problem where signals are independently drawn from a sparse, multivariate Poisson distribution. We are primarily motivated by a suite of biosensing applications of microfluidics where analytes (such as whole cells or biomarkers) are captured in small volume partitions according to a Poisson distribution. We recover the sparse parameter vector of Poisson rates through maximum likelihood estimation with our novel Sparse Poisson Recovery (SPoRe) algorithm. SPoRe uses batch stochastic gradient ascent enabled by Monte Carlo approximations of otherwise intractable gradients. By uniquely leveraging the Poisson structure, SPoRe substantially outperforms a comprehensive set of existing and custom baseline CS algorithms. Notably, SPoRe can exhibit high performance even with one-dimensional measurements and high noise levels. This resource efficiency is not only unprecedented in the field of CS but is also particularly potent for applications in microfluidics in which the number of resolvable measurements per partition is often severely limited. We prove the identifiability property of the Poisson model under such lax conditions, analytically develop insights into system performance, and confirm these insights in simulated experiments. Our findings encourage a new approach to biosensing and are generalizable to other applications featuring spatial and temporal Poisson signals.
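
To make the recovery procedure concrete, here is a minimal NumPy sketch of maximum likelihood estimation of sparse Poisson rates by batch stochastic gradient ascent, with the intractable gradient approximated by self-normalized importance sampling (prior Poisson samples weighted by a Gaussian measurement likelihood). The problem sizes, sensing matrix, noise level, and step size are illustrative assumptions; this is a sketch of the idea, not the authors' SPoRe implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes: N-dimensional signals, M < N measurements each,
    # D measurement vectors, S Monte Carlo samples per gradient estimate.
    N, M, D, S, sigma = 10, 2, 200, 1000, 0.1

    lam_true = np.zeros(N)
    lam_true[[2, 7]] = [1.0, 0.5]                    # sparse vector of Poisson rates

    Phi = rng.uniform(size=(M, N))                   # sensing matrix
    X = rng.poisson(lam_true, size=(D, N))           # latent integer signals, shared rates
    Y = X @ Phi.T + sigma * rng.normal(size=(D, M))  # noisy low-dimensional measurements

    def grad_loglik(lam, Y_batch):
        """Monte Carlo gradient of sum_n log p(y_n | lam).

        Draws x_s ~ Poisson(lam) and weights each sample by the Gaussian
        likelihood N(y_n; Phi x_s, sigma^2 I).  The gradient of each term is
        E[x | y_n, lam] / lam - 1, estimated by self-normalized importance sampling.
        """
        Xs = rng.poisson(lam, size=(S, N))
        resid = Y_batch[:, None, :] - Xs @ Phi.T               # (batch, S, M)
        logw = -0.5 * np.sum(resid ** 2, axis=-1) / sigma ** 2
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)                      # normalized weights (batch, S)
        post_mean = w @ Xs                                      # approx. E[x | y_n, lam]
        return np.sum(post_mean / np.maximum(lam, 1e-12) - 1.0, axis=0)

    # Batch stochastic gradient ascent on the rates, kept nonnegative.
    lam = np.full(N, 0.3)
    for it in range(500):
        batch = rng.choice(D, size=50, replace=False)
        lam = np.clip(lam + 2e-3 * grad_loglik(lam, Y[batch]), 1e-6, None)

    print("true rates:     ", lam_true)
    print("recovered rates:", np.round(lam, 2))

The full SPoRe algorithm includes additional refinements beyond this toy estimator, but the sketch captures the core mechanism: Monte Carlo approximation of the likelihood gradient followed by stochastic ascent on the Poisson rates.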

Rice DSP faculty Yingyan Lin has received an NSF CAREER award for her project "Differentiable Network-Accelerator Co-Search – Towards Ubiquitous On-Device Intelligence and Green AI." The project has two main aims: first, to bridge the vast gap between deep learning's prohibitive computational and energy demands and the constrained resources of consumer devices, and second, to reduce the sizable environmental impact of energy-intensive deep learning training.

The contemporary practice in deep learning has challenged conventional approaches to machine learning. Specifically, deep neural networks are highly overparameterized models with respect to the number of data examples and are often trained without explicit regularization. Yet they achieve state-of-the-art generalization performance. Understanding the overparameterized regime requires new theory and foundational empirical studies. A prominent recent example is the "double descent" behavior of generalization errors that was discovered empirically in deep learning and then very recently analytically characterized for linear regression and related problems in statistical learning.
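
The linear regression version of this phenomenon is easy to reproduce. The short NumPy sketch below fits minimum-norm least squares on random ReLU features and tracks test error as the number of features p sweeps past the number of training points n; the spike near the interpolation threshold followed by a second descent is the double descent curve. All sizes and the random-feature construction here are illustrative choices, not tied to any particular study from the workshop.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy double descent: test error of min-norm least squares on p random
    # ReLU features, for fixed n training points, as p sweeps past n.
    n, n_test, d, noise = 40, 500, 10, 0.1

    w_star = rng.normal(size=d) / np.sqrt(d)
    X_tr, X_te = rng.normal(size=(n, d)), rng.normal(size=(n_test, d))
    y_tr = X_tr @ w_star + noise * rng.normal(size=n)
    y_te = X_te @ w_star

    def test_error(p):
        """Fit min-norm least squares on p random ReLU features; return test MSE."""
        V = rng.normal(size=(d, p)) / np.sqrt(d)
        F_tr, F_te = np.maximum(X_tr @ V, 0), np.maximum(X_te @ V, 0)
        coef = np.linalg.pinv(F_tr) @ y_tr   # minimum-norm (interpolating) solution when p > n
        return np.mean((F_te @ coef - y_te) ** 2)

    for p in [5, 10, 20, 35, 40, 45, 60, 100, 200, 400]:
        errs = [test_error(p) for _ in range(20)]   # average over feature draws
        print(f"p = {p:4d}   test MSE = {np.mean(errs):.3f}")
    # Error typically peaks near p ~ n (the interpolation threshold) and then
    # decreases again in the overparameterized regime p >> n.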

The goal of this workshop is to cross-fertilize the wide range of theoretical perspectives required to understand overparameterized models, including the statistical, approximation-theoretic, and optimization viewpoints. The workshop will be the first of its kind in this space and will enable researchers to discuss not only cutting-edge theoretical studies of the relevant phenomena but also empirical studies that characterize numerical behaviors in ways that can inspire new theory.

Invited speakers:

  • Peter Bartlett, UC Berkeley
  • Florent Krzakala, École Normale Supérieure
  • Gitta Kutyniok, LMU Munich
  • Michael Mahoney, UC Berkeley
  • Robert Nowak, University of Wisconsin-Madison
  • Tomaso Poggio, MIT
  • Matthieu Wyart, EPFL

Organizing committee:

  • Demba Ba, Harvard University
  • Richard Baraniuk, Rice University
  • Mikhail Belkin, UC San Diego
  • Yehuda Dar, Rice University
  • Vidya Muthukumar, Georgia Tech
  • Ryan Tibshirani, Carnegie Mellon University


Workshop dates: April 20-21, 2021
Virtual event
Free registration
Workshop website: https://topml.rice.edu
Abstract submission deadline: February 18, 2021
Call for Contributions available at https://topml.rice.edu/call-for-contributions/

DSP alum Justin Romberg (PhD, 2003), Schlumberger Professor of Electrical and Computer Engineering at Georgia Tech, has been awarded the 2021 IEEE Jack S. Kilby Signal Processing Medal. He and his co-awardees Emmanuel Candès of Stanford University and Terence Tao of UCLA will receive the highest honor in the field of signal processing for "groundbreaking contributions to compressed sensing."

Justin joins Rice DSP alum Jim McClellan (PhD, 1973), John and Marilu McCarty Chair of Electrical Engineering at Georgia Tech, and Rice DSP emeritus faculty member C. Sidney Burrus as recipients of this honor.

Rice DSP and ECE alums Marco Duarte, Jason Laska, Mark Davenport, Dharmpal Takhar, and Ting Sun plus faculty Kevin Kelly and Richard Baraniuk have been awarded the IEEE Signal Processing Magazine Best Paper Award for the paper "Single-Pixel Imaging via Compressive Sampling: Building Simpler, Smaller, and Less-Expensive Digital Cameras," IEEE Signal Processing Magazine, March 2008.

DSP alum Christopher Metzler (PhD, 2018) will join the Department of Electrical and Computer Engineering at the University of Maryland in January 2021.  An expert in computational imaging, image processing, and machine learning, Chris has received the NDSEG, NSF, and K2I Fellowships and is currently a postdoctoral fellow at Stanford University.

Chris made the news earlier this year with his work on seeing around corners in Science and OSA.


Learning-based methods, and in particular deep neural networks, have emerged as highly successful and universal tools for image and signal recovery and restoration. They achieve state-of-the-art results on tasks ranging from image denoising and compression to image reconstruction from few and noisy measurements, and they are starting to be used in important imaging technologies, for example in GE's newest computed tomography scanners and in the latest generation of the iPhone.

The field still faces a range of unanswered theoretical and practical questions. In particular, learning- and neural network-based approaches often lack the guarantees of traditional physics-based methods. Further, while superior on average, learning-based methods can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction or reconstructing a pixelated photo of Barack Obama as a white man.

This virtual workshop aims at bringing together theoreticians and practitioners in order to chart out recent advances and discuss new directions in deep neural network-based approaches for solving inverse problems in the imaging sciences and beyond.

The NeurIPS workshop will take place online on either December 11 or 12 (TBD) and will feature contributed talks as well as contributed posters. Detailed information about the scope of the workshop, including directions for submission, can be found at https://deep-inverse.org/. Submissions via OpenReview will be open from September 1 until the deadline of October 2, 2020. The session is co-organized by former Rice DSP faculty member Reinhard Heckel, former Rice faculty member Paul Hand, Soheil Feizi, Lenka Zdeborová, and Rice DSP faculty Richard Baraniuk.