
"Singular Value Perturbation and Deep Network Optimization", Rudolf H. Riedi, Randall Balestriero, and Richard G. Baraniuk, Constructive Approximation, 27 November 2022 (also arXiv preprint 2203.03099, 7 March 2022)

Deep learning practitioners know that ResNets and DenseNets are much preferred over ConvNets because, empirically, their gradient descent learning converges faster and more stably to a better solution. In other words, it is not what a deep network can approximate that matters, but rather how it learns to approximate. Empirical studies indicate that this is because the so-called loss landscape of the objective function navigated by gradient descent as it optimizes the deep network parameters is much smoother for ResNets and DenseNets than for ConvNets (see Figure 1 from Tom Goldstein's group below). However, to date there has been no analytical work in this direction.

Building on our earlier work connecting deep networks with continuous piecewise-affine splines, we develop an exact local linear representation of a deep network layer for a family of modern deep networks that includes ConvNets at one end of a spectrum and networks with skip connections, such as ResNets and DenseNets, at the other. For tasks that optimize the squared-error loss, we prove that the optimization loss surface of a modern deep network is piecewise quadratic in the parameters, with local shape governed by the singular values of a matrix that is a function of the local linear representation. We develop new perturbation results for how the singular values of matrices of this sort behave as we add a fraction of the identity and multiply by certain diagonal matrices. A direct application of our perturbation results explains analytically why a network with skip connections (e.g., ResNet or DenseNet) is easier to optimize than a ConvNet: thanks to its more stable singular values and smaller condition number, the local loss surface of a network with skip connections is less erratic, less eccentric, and features local minima that are more accommodating to gradient-based optimization. Our results also shed new light on the impact of different nonlinear activation functions on a deep network's singular values, regardless of its architecture.
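A back-of-the-envelope numerical experiment conveys the flavor of the perturbation results. In the sketch below, a hypothetical random Gaussian matrix stands in for a layer's local linear representation (it is not the paper's exact construction, and the scaling is an assumption chosen only to keep the "residual branch" spectral norm below one); adding the identity dramatically shrinks the condition number:

```python
# Rough numerical illustration (not the paper's exact construction): a random
# Gaussian matrix W stands in for a layer's local linear representation.
# Compare the condition number of W alone ("ConvNet-like") with that of
# I + W ("skip-connection-like").
import numpy as np

rng = np.random.default_rng(0)
n = 256
W = rng.normal(scale=0.4 / np.sqrt(n), size=(n, n))  # spectral norm roughly 0.8 (assumed scaling)

def cond(A):
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

print("cond(W)     =", cond(W))              # tiny smallest singular value -> large, erratic
print("cond(I + W) =", cond(np.eye(n) + W))  # singular values clustered near 1 -> small, stable
```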

Rice DSP graduate student Daniel LeJeune successfully defended his PhD thesis entitled "Ridge Regularization by Randomization in Linear Ensembles".

Abstract: Ensemble methods that average over a collection of independent predictors, each limited to a random sample of both the examples and the features of the training data, command a significant presence in machine learning; the ever-popular random forest is a prime example. Combining many such randomized predictors into an ensemble produces a highly robust predictor with excellent generalization properties; however, the specific effect of randomization on ensemble behavior has received little theoretical attention. We study the case of ensembles of linear predictors, where each individual predictor is fit on a randomized sample of the data matrix. We first give a straightforward argument that an ensemble of ordinary least squares predictors fit on simple subsamples can achieve the optimal ridge regression risk in a standard Gaussian data setting. We then significantly generalize this result to eliminate essentially all assumptions on the data by considering ensembles of linear random projections, or sketches, of the data, and in doing so reveal an asymptotic first-order equivalence between linear regression on sketched data and ridge regression. By extending this analysis to a second-order characterization, we show how large ensembles converge to ridge regression under quadratic metrics.
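The flavor of the first-order equivalence can be illustrated with a small simulation. In the hedged sketch below, the Gaussian data model, feature-subsampling ratio, ensemble size, and ridge grid are all assumptions made for the illustration rather than the thesis' exact setting; the thesis' equivalence is an asymptotic statement (large data dimensions and ensemble size) under its own conditions.

```python
# Illustrative sketch: average OLS predictors fit on random feature subsamples
# and search for the ridge penalty whose test predictions the ensemble most
# closely tracks. Data model, subsampling ratio, and penalty grid are
# assumptions made for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n, p, n_members, frac = 300, 150, 500, 0.5

beta_true = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta_true + 0.5 * rng.normal(size=n)

# Ensemble of OLS fits, each restricted to a random subset of the features
k = int(frac * p)
beta_ens = np.zeros(p)
for _ in range(n_members):
    S = rng.choice(p, size=k, replace=False)
    coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    beta_ens[S] += coef / n_members

# Ridge penalty whose predictions best match the ensemble on fresh test points
X_test = rng.normal(size=(1000, p))
pred_ens = X_test @ beta_ens
best = min(
    (np.linalg.norm(X_test @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y) - pred_ens)
     / np.linalg.norm(pred_ens), lam)
    for lam in np.logspace(-2, 3, 40)
)
print(f"closest ridge penalty: {best[1]:.3g}, relative prediction gap: {best[0]:.3f}")
```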

Daniel's next step is a postdoc with Emmanuel Candes at Stanford University.

ELEC378 - Machine Learning: Concepts and Techniques
Instructor: Prof. Richard Baraniuk

Machine learning is a powerful new way to build signal processing models and systems using data rather than physics. This introductory course covers the key ideas, algorithms, and implementations of both classical and modern methods. Topics include supervised and unsupervised learning, optimization, linear regression, logistic regression, support vector machines, deep neural networks, clustering, and data mining. A course highlight is a hands-on team project competition using real-world data.

The course is open to students at all levels who are comfortable with linear algebra and with coding in Python (ideally), R, or MATLAB.

Course webpage; more information coming soon!

A Visual Tour of Current Challenges in Multimodal Language Models
Shashank Sonkar, Naiming Liu, Richard G. Baraniuk
arXiv preprint 2210.12565
October 2022

Transformer models trained on massive text corpora have become the de facto models for a wide range of natural language processing tasks. However, learning effective word representations for function words remains challenging. Multimodal learning, which visually grounds transformer models in imagery, can overcome these challenges to some extent; however, there is still much work to be done. In this study, we explore the extent to which visual grounding facilitates the acquisition of function words using stable diffusion models that employ multimodal models for text-to-image generation. Out of seven categories of function words, along with numerous subcategories, we find that stable diffusion models effectively model only a small fraction of function words – a few pronoun subcategories and relatives. We hope that our findings will stimulate the development of new datasets and approaches that enable multimodal models to learn better representations of function words.

Above: Sample images depicting a stable diffusion model's (SDM's) success (green border) and failure (red border) in capturing the semantics of different subcategories of pronouns. (a)-(c) show that the information about gender and count implicit in subject pronouns like he, she, and we is accurately depicted. But for indefinite pronouns, SDMs fail to capture the notion of negatives ((d) nobody), existentials ((e) some), and universals ((f) everyone). Likewise, SDMs fail to capture the meaning of reflexive pronouns such as (g) myself, (h) himself, and (i) herself.

We provide the code on GitHub for readers to replicate our findings and explore further.
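For readers who want to run a quick probe of their own, here is a minimal sketch of the prompting idea using the Hugging Face diffusers library; the model identifier and the prompts below are illustrative choices, not necessarily those used in the paper.

```python
# Minimal probing sketch: generate images from prompts built around function
# words and inspect whether their semantics (negation, universals, reflexives)
# are depicted. Requires the `diffusers` package and a GPU; the model id and
# prompt list are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "nobody is in the park",       # indefinite pronoun: negation
    "everyone is dancing",         # indefinite pronoun: universal
    "a man looking at himself",    # reflexive pronoun
]

for prompt in prompts:
    image = pipe(prompt).images[0]
    image.save(prompt.replace(" ", "_") + ".png")
```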

DSP PhD student Zichao (Jack) Wang has been selected as a Rising Star in Data Science by the University of Chicago. The Rising Stars in Data Science workshop at the University of Chicago celebrates and fast-tracks the careers of exceptional data scientists at a critical inflection point in their careers: the transition to a postdoctoral scholar, research scientist, industry research, or tenure-track position. Jack will speak at the workshop about his recent work on "Machine learning for human learning."

Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk, "Benign Overparameterization in Membership Inference with Early Stopping", arXiv:2205.14055.

Does a neural network's privacy have to be at odds with its accuracy? In this work, we study the effects the number of training epochs and parameters have on a neural network's vulnerability to membership inference (MI) attacks, which aim to extract potentially private information about the training data. We first demonstrate how the number of training epochs and parameters individually induce a privacy-utility trade-off: more of either improves generalization performance at the expense of lower privacy. However, remarkably, we also show that jointly tuning both can eliminate this privacy-utility trade-off. Specifically, with careful tuning of the number of training epochs, more overparameterization can increase model privacy for fixed generalization error. To better understand these phenomena theoretically, we develop a powerful new leave-one-out analysis tool to study the asymptotic behavior of linear classifiers and apply it to characterize the sample-specific loss threshold MI attack in high-dimensional logistic regression. For practitioners, we introduce a low-overhead procedure to estimate MI risk and tune the number of training epochs to guard against MI attacks.

Wider networks have better privacy-utility trade-offs
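As a toy illustration of the threat model, the hedged sketch below mounts a simple loss-threshold membership-inference attack on a synthetic logistic-regression problem. The synthetic data and off-the-shelf classifier are placeholders, and the attack scores membership with a single global loss threshold (summarized by AUC), whereas the paper analyzes a sample-specific loss threshold attack.

```python
# Simplified membership-inference sketch: predict "member" when a model's
# per-example loss is low. Global-threshold variant for illustration only;
# the data and model below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 200, 50
X_train = rng.normal(size=(n, p))
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
X_out = rng.normal(size=(n, p))                       # non-members from the same distribution
y_out = (X_out[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(X, y):
    # negative log-likelihood of the true class for each example
    proba = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(proba, 1e-12, None))

losses = np.concatenate([per_example_loss(X_train, y_train),
                         per_example_loss(X_out, y_out)])
membership = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = training member

# Lower loss -> more likely a member; AUC of the negated loss as the attack score
print("membership-inference AUC:", roc_auc_score(membership, -losses))
```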

Rice DSP alumnus AmirAli Aghazadeh (PhD, 2017) has accepted an assistant professor position at Georgia Tech in the Department of Electrical and Computer Engineering. He has spent the past few years as a postdoc at Stanford University and UC Berkeley. AmirAli joins DSP PhD alums James McClellan, Douglas Williams, Justin Romberg, Christopher Rozell, Mark Davenport, and Eva Dyer and ECE PhD alum Robert Butera.

DSP PhD and postdoc alum Christopher Rozell has been named the Julian T. Hightower Chair at Georgia Tech. Chris has had a storied career so far. For his research, he has received the NSF CAREER Award and Sigma Xi Young Faculty Research Award and been named one of six international recipients of the James S. McDonnell Foundation 21st Century Science Initiative Scholar Award. For his teaching, he has received the Class of 1940 W. Howard Ector Outstanding Teacher Award and the CTL/BP America Junior Faculty Teaching Excellence Award. Previously, Chris held the Demetrius T. Paris Junior Professorship. Chris's research interests lie at the intersection of computational neuroscience and signal processing and aim to understand how neural systems organize and process sensory information.