# Compressive sensing and sparse approximation

The present dogma of signal processing maintains that a bandlimited signal must be sampled at a rate of at least twice its highest frequency in order to be represented without error. In practice, however, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example). Clearly, this is wasteful of valuable sensing resources. Over the past few years, a new theory known as compressive sensing has emerged, in which a signal is sampled (and simultaneously compressed) at its “information rate” using non-adaptive, linear measurements. For “sparse” signals that can be represented using just a few terms from a basis expansion, this corresponds to sub-Nyquist sampling. Interestingly, random measurements play a starring role. The compressive sensing concept has led to the development of new signal acquisition hardware and has inspired a variety of new techniques for processing sparse data.
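As a rough illustration of the idea, the sketch below (a toy example, not any particular published algorithm from this research) takes far fewer random linear measurements of a sparse signal than its length would suggest, then recovers it greedily with orthogonal matching pursuit. The dimensions, the Gaussian measurement matrix, and the recovery routine are all illustrative choices; they are not prescribed by the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: signal length n, measurement count m << n, sparsity k.
n, m, k = 256, 80, 5

# A k-sparse signal: only k nonzero entries.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Non-adaptive, random linear measurements: y = A @ x with m << n,
# i.e. sub-Nyquist sampling of the sparse signal.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    chosen = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        chosen.append(j)
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[chosen] = coef
    return x_hat

x_hat = omp(A, y, k)
print(f"recovery error: {np.linalg.norm(x_hat - x):.2e}")
```

With only m = 80 measurements of a length-256 signal, the 5-sparse vector is recovered essentially exactly, which is the point of the theory: the sampling cost scales with the signal's sparsity, not its bandwidth.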

**Other Research at Rice**