A Visual Tour of Current Challenges in Multimodal Language Models
Shashank Sonkar, Naiming Liu, Richard G. Baraniuk
arXiv preprint 2210.12565
October 2022

Transformer models trained on massive text corpora have become the de facto models for a wide range of natural language processing tasks. However, learning effective word representations for function words remains challenging. Multimodal learning, which visually grounds transformer models in imagery, can mitigate these challenges to some extent; however, there is still much work to be done. In this study, we explore the extent to which visual grounding facilitates the acquisition of function words using stable diffusion models (SDMs), which employ multimodal models for text-to-image generation. Out of seven categories of function words, along with numerous subcategories, we find that stable diffusion models effectively model only a small fraction of function words – a few pronoun subcategories and relatives. We hope that our findings will stimulate the development of new datasets and approaches that enable multimodal models to learn better representations of function words.

Above: Sample images depicting an SDM's success (green border) and failure (red border) in capturing the semantics of different subcategories of pronouns. (a)-(c) show that the information about gender and count implicit in subject pronouns like he, she, we is accurately depicted. However, for indefinite pronouns, SDMs fail to capture the notion of negatives ((d) nobody), existentials ((e) some), and universals ((f) everyone). Likewise, SDMs fail to capture the meaning of reflexive pronouns such as (g) myself, (h) himself, (i) herself.

We provide the code on GitHub for readers to replicate our findings and explore further.
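The evaluation protocol can be sketched as building a bank of prompts, one per function-word subcategory, and generating images for each. The subcategory names below follow the study's taxonomy, but the specific prompt templates and the function names are illustrative assumptions, not the paper's actual code:

```python
# Hypothetical prompt templates for probing function-word subcategories
# with a text-to-image model. The subcategory names mirror the paper's
# taxonomy (subject, indefinite, and reflexive pronouns); the templates
# themselves are illustrative.
FUNCTION_WORD_PROMPTS = {
    "subject_pronouns": [
        "He is reading a book.",
        "She is reading a book.",
        "We are reading books.",
    ],
    "indefinite_pronouns": [
        "Nobody is in the room.",
        "Some people are in the room.",
        "Everyone is in the room.",
    ],
    "reflexive_pronouns": [
        "A man painted a portrait of himself.",
        "A woman painted a portrait of herself.",
    ],
}

def build_eval_set(prompts_by_category):
    """Flatten the templates into (subcategory, prompt) pairs for generation."""
    return [(cat, p) for cat, plist in prompts_by_category.items() for p in plist]

eval_set = build_eval_set(FUNCTION_WORD_PROMPTS)
```

Each pair would then be fed to the text-to-image model, and the generated images inspected (as in the figure above) for whether the function word's semantics — count, negation, reflexivity — are actually depicted.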

DSP PhD Student Zichao (Jack) Wang has been selected as a Rising Star in Data Science by the University of Chicago. The Rising Stars in Data Science workshop at the University of Chicago focuses on celebrating and fast-tracking the careers of exceptional data scientists at a critical inflection point in their careers: the transition to a postdoctoral scholar, research scientist, industry research, or tenure-track position. Jack will speak at the workshop about his recent work on "Machine learning for human learning."

Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, Richard G. Baraniuk, "Benign Overparameterization in Membership Inference with Early Stopping", arXiv:2205.14055.

Does a neural network's privacy have to be at odds with its accuracy? In this work, we study the effects the number of training epochs and parameters have on a neural network's vulnerability to membership inference (MI) attacks, which aim to extract potentially private information about the training data. We first demonstrate how the number of training epochs and parameters individually induce a privacy-utility trade-off: more of either improves generalization performance at the expense of lower privacy. However, remarkably, we also show that jointly tuning both can eliminate this privacy-utility trade-off. Specifically, with careful tuning of the number of training epochs, more overparameterization can increase model privacy for fixed generalization error. To better understand these phenomena theoretically, we develop a powerful new leave-one-out analysis tool to study the asymptotic behavior of linear classifiers and apply it to characterize the sample-specific loss threshold MI attack in high-dimensional logistic regression. For practitioners, we introduce a low-overhead procedure to estimate MI risk and tune the number of training epochs to guard against MI attacks.
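The basic attack studied here can be illustrated with a generic loss-threshold membership inference rule: predict "member" when a sample's loss falls below a threshold, since training examples tend to incur lower loss than unseen ones. This is a minimal sketch of that generic attack family (the paper's sample-specific threshold variant is more refined); the loss distributions below are synthetic stand-ins:

```python
import numpy as np

def loss_threshold_mi_attack(losses, threshold):
    """Predict 'member' (1) when a sample's loss falls below the threshold."""
    return (losses < threshold).astype(int)

rng = np.random.default_rng(0)

# Synthetic per-example losses: members (seen during training) tend to
# have lower loss than non-members drawn from the same distribution.
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

threshold = 0.5
tpr = loss_threshold_mi_attack(member_losses, threshold).mean()
fpr = loss_threshold_mi_attack(nonmember_losses, threshold).mean()

# TPR minus FPR ("attack advantage") is a common summary of MI risk;
# tuning epochs/width to shrink the member vs. non-member loss gap
# shrinks this advantage.
advantage = tpr - fpr
```

Early stopping helps precisely because fewer epochs leave less of a gap between member and non-member losses for such a threshold rule to exploit.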

Wider networks have better privacy-utility trade-offs

Rice DSP PhD alum AmirAli Aghazadeh (PhD, 2017) has accepted an assistant professor position at Georgia Tech in the Department of Electrical and Computer Engineering. He has spent the past few years as a postdoc at Stanford University and UC-Berkeley. AmirAli joins DSP PhD alums James McClellan, Douglas Williams, Justin Romberg, Christopher Rozell, Mark Davenport, and Eva Dyer and ECE PhD alum Robert Butera.

DSP PhD and postdoc alum Christopher Rozell has been named the Julian T. Hightower Chair at Georgia Tech. Chris has had a storied career so far. For his research, he has received the NSF CAREER Award and Sigma Xi Young Faculty Research Award and been named one of six international recipients of the James S. McDonnell Foundation 21st Century Science Initiative Scholar Award. For his teaching, he has received the Class of 1940 W. Howard Ector Outstanding Teacher Award and the CTL/BP America Junior Faculty Teaching Excellence Award. Previously, Chris held the Demetrius T. Paris Junior Professorship. Chris's research interests lie at the intersection of computational neuroscience and signal processing and aim to understand how neural systems organize and process sensory information.

Richard Baraniuk has been elected to the National Academy of Engineering "for the development and broad dissemination of open educational resources and for foundational contributions to compressive sensing." Election to the National Academy of Engineering is among the highest professional distinctions accorded to an engineer. More from Rice News.

Jasper Tan, Blake Mason, Hamid Javadi, Richard G. Baraniuk, "Parameters or Privacy: A Provable Tradeoff Between Overparameterization and Membership Inference", arXiv:2202.01243.

A surprising phenomenon in modern machine learning is the ability of a highly overparameterized model to generalize well (small error on the test data) even when it is trained to memorize the training data (zero error on the training data). This has led to an arms race towards increasingly overparameterized models (cf. deep learning). In this paper, we study an underexplored hidden cost of overparameterization: the fact that overparameterized models are more vulnerable to privacy attacks, in particular the membership inference attack that predicts the (potentially sensitive) examples used to train a model. We significantly extend the relatively few empirical results on this problem by theoretically proving, for an overparameterized linear regression model with Gaussian data, that the membership inference vulnerability increases with the number of parameters. Moreover, a range of empirical studies indicates that more complex, nonlinear models exhibit the same behavior. Finally, we study different methods for mitigating such attacks in the overparameterized regime, such as noise addition and regularization, and conclude that simply reducing the parameters of an overparameterized model is an effective strategy to protect it from membership inference without greatly decreasing its generalization error.
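The memorization behind this vulnerability is easy to reproduce in the linear setting the paper analyzes. Below is a minimal sketch (not the paper's experiment; dimensions and noise level are arbitrary choices) of an overparameterized linear regression fit to interpolation with the minimum-norm solution: training loss collapses to zero while fresh samples incur nonzero loss, and it is exactly this member vs. non-member loss gap that a membership inference attacker exploits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 200  # overparameterized: far more parameters than samples

# Gaussian data with a noisy linear teacher.
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Minimum-norm interpolating solution via the pseudoinverse.
w_hat = np.linalg.pinv(X) @ y

# Training data is memorized: essentially zero error on members.
train_err = np.mean((X @ w_hat - y) ** 2)

# Fresh (non-member) samples from the same distribution incur real error.
X_test = rng.standard_normal((n, d))
y_test = X_test @ w_true + 0.1 * rng.standard_normal(n)
test_err = np.mean((X_test @ w_hat - y_test) ** 2)

# The member/non-member loss gap is the signal a loss-threshold
# membership inference attack thresholds on.
gap = test_err - train_err
```

Shrinking d toward n (fewer parameters) narrows this gap, which is the intuition behind the paper's conclusion that reducing parameters protects against membership inference.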

Richard Baraniuk will present the 2023 AMS Josiah Willard Gibbs Lecture at the Joint Mathematics Meeting in Boston, Massachusetts in January 2023.  The first AMS Josiah Willard Gibbs Lecture was given in 1923. This public lecture is one of the signature events in the Society’s calendar.  Previous speakers have included Albert Einstein, Vannevar Bush, John von Neumann, Norbert Wiener, Kurt Gödel, Hermann Weyl, Eugene Wigner, Donald Knuth, Herb Simon, David Mumford, Ingrid Daubechies, and Claude Shannon.