Rice DSP alumnus and valedictorian Lorenzo Luzi (PhD, 2024) has accepted an assistant teaching professor position in the Data to Knowledge (D2K) Lab and Department of Statistics at Rice University.
Two Papers at ICML 2024
Two DSP group papers have been accepted at the International Conference on Machine Learning (ICML) 2024 in Vienna, Austria:
- "PIDformer: Transformer Meets Control Theory" by Tam Nguyen, César A. Uribe, Tan M. Nguyen, and Richard Baraniuk
- "Grokking Happens All the Time and Here is Why" by Ahmed Imtiaz Humayun, Randall Balestriero, and Richard Baraniuk
NSF invests $90M in innovative national scientific cyberinfrastructure for transforming STEM education
The U.S. National Science Foundation announced today a strategic investment of $90 million over five years in SafeInsights, a unique national scientific cyberinfrastructure aimed at transforming learning research and STEM education. Funded through the Mid-Scale Research Infrastructure Level-2 program (Mid-scale RI-2), SafeInsights is led by Prof. Richard Baraniuk of OpenStax at Rice University, who will oversee the implementation and launch of this new research infrastructure project of unprecedented scale and scope.
SafeInsights aims to serve as a central hub, facilitating research coordination and leveraging data across a range of major digital learning platforms that currently serve tens of millions of U.S. learners across education levels in science, technology, engineering and mathematics.
With its controlled and intuitive framework, unique privacy-protecting approach and emphasis on the inclusion of students, educators and researchers from diverse backgrounds, SafeInsights will enable extensive, long-term research on the predictors of effective learning, which are key to academic success and persistence.
Two Papers at ICLR 2024
Two DSP group papers have been accepted at the International Conference on Learning Representations (ICLR) 2024 in Vienna, Austria:
- "Self-Consuming Generative Models Go MAD" by S. Alemohammad, J. Casco-Rodriguez, L. Luzi, A. I. Humayun, H. Babaei, D. LeJeune, A. Siahkoohi, and R. G. Baraniuk
- "Implicit Neural Representations and the Algebra of Complex Wavelets" by M. Roddenberry, V. Saragadam, G. Balakrishnan, and R. G. Baraniuk
Self-Consuming Generative Models Go MAD
http://arxiv.org/abs/2307.01850
Sina Alemohammad, Josue Casco-Rodriguez, Lorenzo Luzi, Ahmed Imtiaz Humayun,
Hossein Babaei, Daniel LeJeune, Ali Siahkoohi, Richard G. Baraniuk
Abstract: Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models. Repeating this process creates an autophagous ("self-consuming") loop whose properties are poorly understood. We conduct a thorough analytical and empirical analysis using state-of-the-art generative image models of three families of autophagous loops that differ in how fixed or fresh real training data is available through the generations of training and in whether the samples from previous-generation models have been biased to trade off data quality versus diversity. Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease. We term this condition Model Autophagy Disorder (MAD), making analogy to mad cow disease.
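The autophagous loop can be illustrated with a toy sketch: a Gaussian stands in for the generative model, and a simple truncation stands in for the quality-biased sampling described in the abstract. With no fresh real data, the diversity (standard deviation) of each generation's training set shrinks. This is only an illustrative sketch, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: fresh "real" data, here a standard normal distribution.
data = rng.normal(0.0, 1.0, size=10_000)

stds = []
for generation in range(5):
    # "Train" this generation's model: fit a Gaussian to the current data.
    mu, sigma = data.mean(), data.std()
    stds.append(sigma)
    # Sample the next training set from the model, biased toward
    # "high quality" samples near the mode, with no fresh real data.
    samples = rng.normal(mu, sigma, size=40_000)
    samples = samples[np.abs(samples - mu) < 1.5 * sigma]
    data = samples[:10_000]

# Diversity (standard deviation) shrinks generation after generation.
print(["%.2f" % s for s in stds])
```

Each truncated refit multiplies the standard deviation by a factor below one, so diversity decays geometrically across generations, mirroring the recall collapse the paper reports.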
In the news:
- "Generative AI Goes 'MAD' When Trained on AI-Created Data Over Five Times," Tom's Hardware, 12 July 2023
- "AI Loses Its Mind After Being Trained on AI-Generated Data," Futurism, 12 July 2023
- "Scientists make AI go crazy by feeding it AI-generated content," TweakTown, 13 July 2023
- "AI models trained on AI-generated data experience Model Autophagy Disorder (MAD) after approximately five training cycles," Multiplatform.AI, 13 July 2023
- "AIs trained on AI-generated images produce glitches and blurs," New Scientist, 18 July 2023
- "Training AI With Outputs of Generative AI Is Mad," CDOtrends, 19 July 2023
- "When AI Is Trained on AI-Generated Data, Strange Things Start to Happen," Futurism, 1 August 2023
- "Mad AI risks destroying the Information Age," The Telegraph, 1 February 2024
- "AI's 'mad cow disease' problem tramples into earnings season," Yahoo! Finance, 12 April 2024
- "Cesspool of AI crap or smash hit? LinkedIn's AI-powered Collaborative Articles offer a sobering peek at the future of content," Fortune, 18 April 2024
- "AI's Mad Loops," Rice Magazine, February 2025
30 Students in 30 Years
Jack Wang Defends PhD Thesis
Rice DSP graduate student Jack Wang successfully defended his PhD thesis entitled "Towards Personalized Human Learning at Scale: A Machine Learning Approach."
Abstract: Despite the recent advances in artificial intelligence (AI) and machine learning (ML), we have yet to witness the transformative breakthroughs they can bring to education and, more broadly, to how humans learn. This thesis establishes two research directions that leverage the recent advances in generative modeling to enable more personalized learning experiences on a large scale. The first part of the thesis focuses on educational content generation and proposes a method to automatically generate math word problems that are personalized to each learner. The second part of the thesis focuses on learning analytics and proposes a framework for analyzing learners’ open-ended solutions to assessment questions, such as code submissions in computer science education.
Jack’s next step is Adobe Research, where he will be working on new natural language processing models for documents and other data.
Two Papers at CVPR 2023
Two DSP group papers have been accepted at the IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) 2023 in Vancouver, Canada:
- "SplineCam: Exact Visualization of Deep Neural Network Geometry and Decision Boundaries" by Ahmed Imtiaz Humayun, Randall Balestriero, Guha Balakrishnan, and Richard Baraniuk (Highlight paper, 2.5% of all submissions)
- "WIRE: Wavelet Implicit Neural Representations," by Vishwa Saragadam, Daniel LeJeune, Jasper Tan, Guha Balakrishnan, Ashok Veeraraghavan, and Richard Baraniuk
Machine Learning Privacy Work to Appear at AISTATS 2023
"A Blessing of Dimensionality in Membership Inference through Regularization" by DSP group members Jasper Tan, Daniel LeJeune, Blake Mason, Hamid Javadi, and Richard Baraniuk has been accepted for the International Conference on Artificial Intelligence and Statistics (AISTATS) in Valencia, Spain, April 2023.
Two “Notable” Papers at ICLR 2023
Two DSP group papers have been accepted as "Notable - Top 25%" papers for the International Conference on Learning Representations (ICLR) 2023 in Kigali, Rwanda:
- "A Primal-Dual Framework for Transformers and Neural Networks," by T. M. Nguyen, T. Nguyen, N. Ho, A. L. Bertozzi, R. G. Baraniuk, and S. Osher
- "Retrieval-based Controllable Molecule Generation," by Jack Wang, W. Nie, Z. Qiao, C. Xiao, R. G. Baraniuk, and A. Anandkumar
Abstracts below.
Retrieval-based Controllable Molecule Generation
Generating new molecules with specified chemical and biological properties via generative models has emerged as a promising direction for drug discovery. However, existing methods require extensive training/fine-tuning with a large dataset, often unavailable in real-world generation tasks. In this work, we propose a new retrieval-based framework for controllable molecule generation. We use a small set of exemplar molecules, i.e., those that (partially) satisfy the design criteria, to steer the pre-trained generative model towards synthesizing molecules that satisfy the given design criteria. We design a retrieval mechanism that retrieves and fuses the exemplar molecules with the input molecule, which is trained by a new self-supervised objective that predicts the nearest neighbor of the input molecule. We also propose an iterative refinement process to dynamically update the generated molecules and retrieval database for better generalization. Our approach is agnostic to the choice of generative models and requires no task-specific fine-tuning. On various tasks ranging from simple design criteria to a challenging real-world scenario for designing lead compounds that bind to the SARS-CoV-2 main protease, we demonstrate our approach extrapolates well beyond the retrieval database, and achieves better performance and wider applicability than previous methods.
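The retrieve-and-fuse step at the heart of the framework can be sketched with plain vectors standing in for molecule embeddings. The database, embeddings, and averaging fusion below are all illustrative assumptions; the paper trains a learned fusion module with a nearest-neighbor self-supervised objective, which plain averaging only approximates in spirit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: each row is a vector embedding of an exemplar
# molecule that (partially) satisfies the design criteria.
exemplar_db = rng.normal(size=(100, 32))
input_mol = rng.normal(size=32)   # embedding of the input molecule

def retrieve_and_fuse(query, database, k=5):
    """Retrieve the k most similar exemplars (cosine similarity) and
    fuse them with the query; averaging here is only an illustration
    of the data flow, not the paper's learned fusion module."""
    db_norm = database / np.linalg.norm(database, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    sims = db_norm @ q_norm                    # cosine similarities
    top_k = np.argsort(sims)[-k:]              # indices of k nearest
    fused = (query + database[top_k].sum(axis=0)) / (k + 1)
    return fused, top_k

fused, idx = retrieve_and_fuse(input_mol, exemplar_db)
print(fused.shape, idx.shape)   # (32,) (5,)
```

In the paper's iterative refinement, the fused representation would steer the pre-trained generator, and newly generated molecules can be added back to the retrieval database.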
A Primal-Dual Framework for Transformers and Neural Networks
Self-attention is key to the remarkable success of transformers in sequence modeling tasks, including many applications in natural language processing and computer vision. Like neural network layers, these attention mechanisms are often developed by heuristics and experience. To provide a principled framework for constructing attention layers in transformers, we show that self-attention corresponds to the support vector expansion derived from a support vector regression (SVR) problem, whose primal formulation has the form of a neural network layer. Using our framework, we derive popular attention layers used in practice and propose two new attention mechanisms: 1) the Batch Normalized Attention (Attention-BN) derived from the batch normalization layer and 2) the Attention with Scaled Head (Attention-SH) derived from using less training data to fit the SVR model. We empirically demonstrate the advantages of the Attention-BN and Attention-SH in reducing head redundancy, increasing the model's accuracy, and improving the model's efficiency in a variety of practical applications including image and time-series classification.
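For reference, the baseline the paper reinterprets is standard softmax self-attention, sketched below with NumPy. The matrix shapes are illustrative assumptions; the paper's contribution (deriving this expansion from an SVR primal-dual formulation, and the Attention-BN/Attention-SH variants) is not reproduced here.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head softmax self-attention.
    X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # (seq_len, d_head)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)   # (6, 8)
```

In the paper's view, each output row is a support vector expansion: a weighted combination of value vectors, with the softmax weights playing the role of dual coefficients.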