
Rice DSP and ECE alums Marco Duarte, Jason Laska, Mark Davenport, Dharmpal Takhar, and Ting Sun, together with faculty members Kevin Kelly and Richard Baraniuk, have been awarded the IEEE Signal Processing Magazine Best Paper Award for the paper "Single-Pixel Imaging via Compressive Sampling: Building Simpler, Smaller, and Less-Expensive Digital Cameras," IEEE Signal Processing Magazine, March 2008.

DSP alum Christopher Metzler (PhD, 2018) will join the Department of Electrical and Computer Engineering at the University of Maryland in January 2021.  An expert in computational imaging, image processing, and machine learning, Chris has received the NDSEG, NSF, and K2I Fellowships and is currently a postdoctoral fellow at Stanford University.

Chris made the news earlier this year with his work on seeing around corners in Science and OSA.

 

Learning-based methods, and in particular deep neural networks, have emerged as highly successful and universal tools for image and signal recovery and restoration. They achieve state-of-the-art results on tasks ranging from image denoising and compression to image reconstruction from few and noisy measurements, and they are starting to be used in important imaging technologies, for example in GE's newest computed tomography scanners and in the newest generation of the iPhone.
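As a concrete illustration (not taken from the workshop materials), the basic recipe behind such learned recovery methods is to train a small network on pairs of corrupted and clean images. The sketch below uses PyTorch with random stand-in data and a made-up TinyDenoiser module purely to show the idea.

```python
import torch
import torch.nn as nn

# Minimal sketch of a learned image denoiser: a small convolutional network
# trained on (noisy, clean) pairs. The architecture and data are illustrative
# stand-ins, not taken from any of the works mentioned above.
class TinyDenoiser(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)  # predict and subtract the noise residual

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 32, 32)               # stand-in "clean" images
noisy = clean + 0.1 * torch.randn_like(clean)  # synthetic noisy measurements
for _ in range(100):                           # train to undo the noise
    loss = ((model(noisy) - clean) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```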

A range of theoretical and practical questions in this field remain unanswered. In particular, learning- and neural network-based approaches often lack the guarantees of traditional physics-based methods. Further, while superior on average, learning-based methods can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction or turning a pixelated picture of Barack Obama into that of a white male.

This virtual workshop aims at bringing together theoreticians and practitioners in order to chart out recent advances and discuss new directions in deep neural network-based approaches for solving inverse problems in the imaging sciences and beyond.

The NeurIPS workshop will take place online on either December 11 or 12 (TBD) and will feature contributed talks as well as contributed posters. Detailed information about the scope of the workshop, including directions for submission, can be found at https://deep-inverse.org/. Submissions via OpenReview will be open from September 1 until the deadline of October 2, 2020. The workshop is co-organized by Rice DSP alum Reinhard Heckel and Rice alum Paul Hand (both now faculty members), Soheil Feizi, Lenka Zdeborova, and Rice DSP faculty member Richard Baraniuk.

DSP PhD student Tan Nguyen has received a prestigious Computing Innovation Postdoctoral Fellowship (CIFellows Program) from the Computing Research Association (CRA).  He plans to work with Professor Stan Osher at UCLA on predicting drug-target binding affinity to study how current drugs work on new targets as a treatment for COVID-19 and future pandemic diseases.  Tan plans to develop a new class of deep learning models that are aware of the structural information of drugs, scalable to large datasets, and generalizable to unseen cases.

Mark Davenport (PhD, 2010) has been selected as a Rice Outstanding Young Engineering Alumnus. The award, established in 1996, recognizes the achievements of Rice engineering alumni under 40 years old. Recipients are chosen by the George R. Brown School of Engineering and the Rice Engineering Alumni (REA).

Mark is an Associate Professor of Electrical and Computer Engineering at Georgia Tech. His many other honors include the Hershel Rich Invention Award and the Budd Award for best engineering thesis at Rice, an NSF Mathematical Sciences Postdoctoral Fellowship, an NSF CAREER Award, an AFOSR Young Investigator Program Award, a Sloan Research Fellowship, and a PECASE.

Mark spent time at Rice in winter 2020 as the Texas Instruments Visiting Professor.

DSP postdoc alum Thomas Goldstein has launched a new clothing line that evades detection by machine learning vision algorithms.

This stylish pullover is a great way to stay warm this winter, whether in the office or on the go. It features a stay-dry microfleece lining, a modern fit, and adversarial patterns that evade most common object detectors. In this demonstration, the YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective.
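As a rough sketch of the general idea (not the authors' exact method or objective), adversarial-patch attacks optimize a printable pattern so that a detector's confidence scores collapse on images containing it. The detector callable, the apply_patch placement, and the patch size below are illustrative assumptions.

```python
import torch

def apply_patch(images, patch, top=0, left=0):
    # Paste the pattern at a fixed location; real attacks place it on the
    # person/clothing region of each image. Purely for illustration.
    out = images.clone()
    c, h, w = patch.shape
    out[:, :, top:top + h, left:left + w] = patch
    return out

def train_patch(detector, images, steps=500, lr=0.01):
    # `detector(batch)` is assumed to return detection confidence scores;
    # we optimize the pattern to suppress them (a hedged sketch, not the
    # paper's carefully constructed objective).
    patch = torch.rand(3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        scores = detector(apply_patch(images, patch))
        loss = scores.mean()          # push detection confidences down
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0.0, 1.0)   # keep the pattern a printable image
    return patch.detach()
```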

Paper: "Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors" by Z. Wu, S.-N. Lim, L. Davis, and T. Goldstein, October 2019

Buy online

New Yorker article:  Dressing for the Surveillance Age

Rice DSP group faculty member Richard Baraniuk will lead a team of engineers, computer scientists, mathematicians, and statisticians on a five-year ONR MURI project to develop a theory of deep learning grounded in rigorous mathematical principles. The team includes:

International collaborators include the Alan Turing and Isaac Newton Institutes in the UK.

DOD press release

MURI website

B. Wang*, T. M. Nguyen*, A. L. Bertozzi***, R. G. Baraniuk**, S. J. Osher**. "Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent", arXiv, 2020.

GitHub code: https://github.com/minhtannguyen/SRSGD.

Blog: http://almostconvergent.blogs.rice.edu/2020/02/21/srsgd.

Slides: SRSGD

Stochastic gradient descent (SGD) with constant momentum and its variants such as Adam are the optimization algorithms of choice for training deep neural networks (DNNs). Since DNN training is incredibly computationally expensive, there is great interest in speeding up convergence. Nesterov accelerated gradient (NAG) improves the convergence rate of gradient descent (GD) for convex optimization using a specially designed momentum; however, it accumulates error when an inexact gradient is used (such as in SGD), slowing convergence at best and diverging at worst. In this paper, we propose Scheduled Restart SGD (SRSGD), a new NAG-style scheme for training DNNs. SRSGD replaces the constant momentum in SGD by the increasing momentum in NAG but stabilizes the iterations by resetting the momentum to zero according to a schedule. Using a variety of models and benchmarks for image classification, we demonstrate that, in training DNNs, SRSGD significantly improves convergence and generalization; for instance in training ResNet200 for ImageNet classification, SRSGD achieves an error rate of 20.93% vs. the benchmark of 22.13%. These improvements become more significant as the network grows deeper. Furthermore, on both CIFAR and ImageNet, SRSGD reaches similar or even better error rates with fewer training epochs compared to the SGD baseline.
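Below is a minimal sketch of the scheduled-restart idea described in the abstract, written in plain NumPy on a toy problem. The released implementation is in the GitHub repository linked above; the fixed restart interval here is a simplification of the paper's schedule.

```python
import numpy as np

def srsgd(grad, x0, lr=0.1, restart_every=40, n_iters=400):
    """Sketch of Scheduled Restart SGD: NAG-style iterations with the
    increasing momentum t/(t+3), reset to zero every `restart_every`
    iterations (the paper uses a schedule; a fixed interval is used here
    for simplicity)."""
    x = np.array(x0, dtype=float)
    v_prev = x.copy()
    t = 0                                       # iterations since last restart
    for _ in range(n_iters):
        v = x - lr * grad(x)                    # (stochastic) gradient step
        x = v + (t / (t + 3.0)) * (v - v_prev)  # NAG-style extrapolation
        v_prev = v
        t += 1
        if t == restart_every:                  # scheduled restart:
            t = 0                               # momentum drops back to zero
    return x

# Toy usage: an ill-conditioned quadratic with noisy gradients.
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0])
noisy_grad = lambda x: A @ x + 0.01 * rng.standard_normal(2)
x_hat = srsgd(noisy_grad, [5.0, 5.0])
print(x_hat)  # should end up near the minimizer at the origin
```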

Figure 1: Error vs. depth of ResNet models trained with SRSGD and the baseline SGD with constant momentum. The advantage of SRSGD continues to grow with depth.


Figure 2: Test error vs. reduction in the number of training epochs for CIFAR10 and ImageNet training. The dashed lines are the test errors of the SGD baseline.

* : Co-first authors; **: Co-last authors; ***: Middle author