NIPS 2009 Workshop

Manifolds, sparsity, and structured models: When can low-dimensional geometry really help?

Manifolds, sparsity, and other low-dimensional geometric models have long been studied and exploited in machine learning, signal processing, and computer science. For instance, manifold models lie at the heart of a variety of nonlinear dimensionality reduction techniques. Similarly, sparsity has made an impact on the problems of compression, linear regression, subset selection, graphical model learning, and compressive sensing. Moreover, often motivated by evidence that various neural systems perform sparse coding, sparse representations have been exploited as an efficient and robust method for encoding a variety of natural signals. In all of these cases the key idea is to exploit low-dimensional models to obtain compact representations of the data. The goal of this workshop is to find commonalities and forge connections between these different fields and to examine how we can exploit low-dimensional geometric models to help solve common problems in machine learning and beyond.

Schedule

Saturday Morning
7:30-7:40 Opening Remarks
7:40-8:00 Volkan Cevher
8:00-8:20 Mark Davenport
8:20-8:40 Sahand Negahban
8:40-9:00 Jarvis Haupt
9:00-9:30 Coffee Break
9:30-9:40 Poster Spotlights
9:40-10:30 Poster Session
  • "Rank-Sparsity uncertainty principles and matrix decomposition" - Venkat Chandrasekaran, Sujay Sanghavi, Pablo Parrilo, and Alan Willsky [Abstract]
  • "Distilled sensing: Adaptive sampling for sparse recovery" - Jarvis Haupt, Rui Castro, Robert Nowak, and Richard Baraniuk [Abstract]
  • "Compressive spectral clustering" - Blake Hunter and Thomas Strohmer [Abstract, Poster]
  • "Structured sparse principal component analysis" - Rodolphe Jenatton, Guillaume Obozinski, and Francis Bach [Abstract]
  • "Random projections for statistical learning: Compressed least-squares regression" - Odalric-Ambrym Maillard and RĂ©mi Munos [Abstract, Poster]
  • "Sparse signal recovery with exponential-family noise" - Irina Rish and Genady Grabarnik [Abstract, Poster]
  • "Geometric factor graph structure discovery" - Kumar Sricharan, Raviv Raich, and Alfred Hero [Abstract, Poster]
  • "Learning continuous transform models" - C.M. Wang, J. Sohl-Dickstein and B. A. Olshausen [Abstract]

Saturday Afternoon
16:00-16:20 Ken Clarkson
16:20-16:40 Mikhail Belkin
16:40-17:00 Bruno Olshausen
17:00-17:30 Coffee Break
17:30-18:30 Panel Discussion

Important Dates

  • Submission of extended abstracts: October 30, 2009 (later submissions may not be considered for review)
  • Notification of acceptance: November 5, 2009
  • Workshop date: December 12, 2009

Submission instructions

We invite the submission of extended abstracts to be considered for poster presentation at this workshop. Extended abstracts should be 1-2 pages, and submissions do not need to be blind. Please send extended abstracts to md@rice.edu in PDF or PS format. Accepted extended abstracts will be made available online at the workshop website.

Organizers

  • Richard Baraniuk, Rice University.
  • Volkan Cevher, Rice University.
  • Mark Davenport, Rice University.
  • Piotr Indyk, MIT.
  • Bruno Olshausen, UC Berkeley.
  • Michael Wakin, Colorado School of Mines.
