On the Empirical Rate-Distortion Performance
of Compressive Sensing


Presented at IEEE International Conference on Image Processing (ICIP), November 2009.


Adriana Schulz
Luiz Velho
Eduardo A. B. da Silva


Compressive Sensing (CS) is a new paradigm in signal acquisition and compression that has been attracting the interest of the signal compression community. For image compression applications, it is relevant to estimate the number of bits required to reach a specific image quality. Although several theoretical results on the rate-distortion performance of CS have been published recently, few practical image compression results are available. The main goal of this paper is to carry out an empirical analysis of the rate-distortion performance of CS in image compression. We analyze issues such as the minimization algorithm used and the transform employed, as well as the trade-off between the number of measurements and the quantization error. From the experimental results obtained, we highlight the potential and limitations of CS when compared to traditional image compression methods.
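The measurement/quantization trade-off studied in the paper can be illustrated with a small sketch, shown here in Python rather than the project's MATLAB code. All names and parameters below are illustrative assumptions: a sparse signal is sensed with a random Gaussian matrix (standing in for the paper's noiselet measurements), the measurements are uniformly quantized with step `step`, and the signal is recovered by l1 minimization (basis pursuit) cast as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 32, 4                 # signal length, number of measurements, sparsity
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
step = 0.05                                    # uniform quantization step
y = step * np.round(A @ x0 / step)             # quantized CS measurements

# Basis pursuit as a linear program: write x = u - v with u, v >= 0,
# and minimize sum(u) + sum(v) subject to A (u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

mse = np.mean((x_hat - x0) ** 2)
print(f"reconstruction MSE: {mse:.2e}")
```

Shrinking `step` lowers the quantization error per measurement but raises the bit rate, while increasing `m` improves the reconstruction at the cost of more measurements to encode; the paper's rate-distortion curves map out exactly this trade-off.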




Extended version of the results:

Rate x PSNR Curves for varying Quantization Steps
Number of Measurements x PSNR Curves
Rate-Distortion Curves


This [.zip] archive contains all MATLAB functions used to generate the results, including the optimization functions from L1-Magic and the algorithm for generating noiselets made available by Justin Romberg. CS recovery strategies that use wavelets require the WaveLab toolbox.

Before starting, it is necessary to compile the mex code that generates noiselets. To do so, simply open the CS-codes/Measurements folder in MATLAB and run:
>> mex realnoiselet.c

The file that reproduces the graphs in Fig. 3 is rateDistortionEvaluation.m. Since the optimization algorithm is computationally expensive, it may take a while to run. For simpler tests, we recommend using the functions in the CS-codes/CS folder.