A NEW IDEA ...

Compressive sensing (CS) is a novel idea that rethinks data acquisition. The theory was so revolutionary when it was introduced in 2004 that an early paper outlining it was initially rejected on the grounds that its claims appeared impossible to substantiate.

The impact of compressive sensing goes far beyond the research lab and reaches a more organic, social level. The new area has established a true synergy between many disciplines of science, technology, and engineering, groups usually kept far apart by the cultural differences of their respective fields. Now, thanks to compressive sensing, it is common to see pure mathematicians, applied mathematicians, computer scientists, and hardware engineers coming together to share ideas about the theory and its applications.

OVERVIEW

Acquisition and reconstruction are essential in every signal processing system, and sampling theorems provide the bridge between the continuous and discrete domains. The most important theorem setting a limit on the sampling rate that guarantees signal recovery is the Shannon-Nyquist theorem for band-limited signals.
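For reference, the theorem in its standard form (notation introduced here for illustration): a signal band-limited to B hertz is exactly recovered from uniform samples taken at any rate f_s exceeding 2B, via sinc interpolation.

```latex
% Whittaker-Shannon interpolation: exact recovery from uniform samples
% of a signal band-limited to B hertz, provided f_s > 2B.
x(t) = \sum_{n=-\infty}^{\infty} x\!\left(\tfrac{n}{f_s}\right)
       \operatorname{sinc}\!\left(f_s t - n\right), \qquad f_s > 2B
```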

We know, however, that natural and man-made signals tend to be compressible, i.e., if point sampled, many of the acquired coefficients will be redundant. Hence, a great deal of effort has gone into re-encoding the sampled data so as to reduce the number of bits required to represent it. Such schemes perform what is referred to as compression.
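To see this redundancy concretely, here is a minimal Python sketch (the test signal and the 5% threshold are illustrative choices, not part of any standard scheme): a smooth signal is point sampled, transformed with the discrete cosine transform, and reconstructed from only its largest coefficients.

```python
# Compressibility sketch: most transform coefficients of a smooth
# signal can be discarded with almost no reconstruction error.
import numpy as np
from scipy.fft import dct, idct

n = 1024
t = np.linspace(0, 1, n)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

c = dct(x, norm='ortho')             # transform coefficients
k = n // 20                          # keep only the largest 5%
small = np.argsort(np.abs(c))[:-k]   # indices of the smallest coefficients
c_sparse = c.copy()
c_sparse[small] = 0                  # discard them

x_hat = idct(c_sparse, norm='ortho')
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```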

The sample-then-compress framework is very efficient and performs well in many applications. However, the very fact that we are able to compress the acquired data suggests that Nyquist was a pessimist, considering the worst-case scenario in which all that is known about the signals is that they are band-limited. But what if, instead of sampling at the Nyquist rate, we tried to recover the data by sensing at the information rate?

This is what compressive sensing is about. It emerges as a new paradigm for data acquisition that runs against the common wisdom of the field. Indeed, it provides stable and robust algorithms that allow sensing at rates much smaller than the Nyquist limit while recovering the signals with little corruption.

The basic idea is that compressibility translates into the existence of a representation in which the signal is sparse (most coefficients are zero). Taking only a small number of samples would therefore make the recovery problem ill-posed (an infinite number of solutions would be available), but the compressibility property allows us to search among all possible solutions for the one that makes the recovered signal sparse.
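Written as an optimization problem (standard compressive sensing notation, introduced here for illustration): collect the m acquired samples of the length-n signal x into y = Φx, with m much smaller than n. The system is underdetermined, and sparsity is the tie-breaker:

```latex
% With m < n the system y = \Phi x has infinitely many solutions;
% recovery picks the one with the fewest nonzero entries.
\hat{x} = \arg\min_{x \in \mathbb{R}^n} \|x\|_0
          \quad \text{subject to} \quad \Phi x = y
```

Here the l0 "norm" simply counts nonzero entries; minimizing it directly is combinatorial, which is why the convex surrogate discussed below matters.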

Of course, there is a twist in the word "sample". We cannot point sample the signal and hope to reconstruct it from a very small number of measurements because, the signal being sparse, most of the acquired data would be zero. Instead, we measure the signal by computing its inner products against different test functions.
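In the same notation, each measurement is an inner product against one test function, and stacking the test functions as the rows of Φ gives the linear model used above:

```latex
% Each sample correlates the signal with a test function
% rather than reading off a point value.
y_k = \langle x, \varphi_k \rangle, \quad k = 1, \dots, m,
\qquad\text{so that}\qquad
y = \Phi x, \quad
\Phi = \begin{bmatrix} \varphi_1^{T} \\ \vdots \\ \varphi_m^{T} \end{bmatrix}
```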

Compressive sensing is intriguing not only because it proves that a signal can be reconstructed from a very small number of measurements, but also because the scheme is nonadaptive. By this we mean that the algorithm is completely blind: it needs to assume nothing about the original object apart from sparsity. Moreover, the solution may be obtained by solving a convex optimization problem, which can be recast as a linear program.
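To make the last point concrete, here is a minimal, self-contained Python sketch under the usual textbook assumptions: the signal is exactly sparse, the test functions are random Gaussian vectors, and the sizes are arbitrary illustrative choices. The l1 norm stands in for sparsity, turning the search into a convex problem that a standard linear programming solver handles directly.

```python
# Basis pursuit sketch: min ||x||_1 subject to Phi x = y, posed as a
# linear program by splitting x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8            # signal length, measurements, nonzeros

x = np.zeros(n)                 # a k-sparse test signal
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random test functions
y = Phi @ x                     # m << n inner-product measurements

# LP: minimize 1^T u + 1^T v  subject to  Phi u - Phi v = y, u, v >= 0;
# at the optimum, u - v is the l1-minimizing solution.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("max reconstruction error:", np.abs(x - x_hat).max())
```

With these sizes the recovery is exact up to solver tolerance, even though only 60 measurements of the length-200 signal were taken; noise-aware variants of the same l1 program are what practical reconstructions typically build on.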