A Pragmatic Look at Some Compressive Sensing Architectures With Saturation and Quantization
Pareschi, Fabio; Setti, Gianluca
2012
Abstract
The paper aims to highlight the relative strengths and weaknesses of some recently proposed architectures for the hardware implementation of analog-to-information converters based on Compressive Sensing. To do so, the most common architectures are analyzed when saturation of some building blocks is taken into account and when measurements are subject to quantization to produce a digital stream. Furthermore, signal reconstruction is performed by established and novel algorithms (one based on linear programming and the other on iterative guessing of the support of the target signal), as well as by their specializations to the particular architecture producing the measurements. Performance is assessed both as the probability of correct support reconstruction and as the final reconstruction error. Our results help highlight the pros and cons of the various architectures and give quantitative answers to some typical design-oriented questions. Among these, we show: 1) that the Random Modulation Pre-Integration (RMPI) architecture and its recently proposed adjustments are probably the most versatile approach, though not always the most economical to implement; 2) that when 1-bit quantization is sought, dynamically mixing quantization and integration in a randomized Sigma-Delta architecture helps bring the performance much closer to that of multi-bit approaches; 3) for each architecture, the trade-off between the number of measurements and the number of bits per measurement (given a fixed bit budget); and 4) the pros and cons of using Gaussian versus binary random variables for signal acquisition.
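The sketch below is only an illustration of the acquisition/reconstruction pipeline the abstract refers to, not the authors' implementation: a sparse signal is projected onto random binary (antipodal) sequences, the measurements are saturated and uniformly quantized, and the signal is recovered along the linear-programming (basis-pursuit) route. The problem sizes, the 6-bit quantizer, and the saturation level are illustrative assumptions, not values taken from the paper.

```python
# Minimal compressive-sensing sketch with saturation and quantization
# (illustrative assumptions throughout; not the paper's architectures).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                      # signal length, measurements, sparsity

# k-sparse target signal
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

# Random binary (+/-1) sensing matrix, as in RMPI-style front ends
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x

# Saturation and uniform quantization of the measurements (assumed values)
sat = 0.8 * np.max(np.abs(y))             # assumed saturation level
y_sat = np.clip(y, -sat, sat)
bits = 6                                  # assumed bits per measurement
step = 2 * sat / (2 ** bits)
y_q = step * np.round(y_sat / step)

# Basis pursuit: min ||x||_1 s.t. A x = y_q, written as an LP with x = u - v
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y_q, bounds=[(0, None)] * (2 * n),
              method="highs")
x_hat = res.x[:n] - res.x[n:]

# Assess support recovery (k largest entries) and reconstruction error
est_support = set(np.argsort(np.abs(x_hat))[-k:])
print("support recovered:", est_support == set(support))
print("relative error   :", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

The two printed quantities mirror the performance figures used in the paper (probability of correct support reconstruction and final reconstruction error), but here they are computed for a single random instance only.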