l1-Magic

How many linear measurements does it take to recover an object of interest?
This question has been
the foundation of the nascent field of "compressed sensing".
The main result of this line of research is easily stated: if a digital object of size N is B-sparse, it can be recovered from a little more than B measurements (on the order of B log N, to be precise). By "B-sparse", we mean that the object can be written as a superposition of B elements (where B << N) from a fixed orthobasis.

What makes this result truly relevant is that the recovery is computationally efficient: the B-sparse signal is exactly the solution of a convex optimization program, namely, find the signal with minimum l1 norm that "explains" the observed measurements. This minimum-l1 solution can be computed, even when the size N is in the millions, using modern interior point methods.
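
To make this concrete, here is a minimal sketch (in Python with numpy/scipy, for illustration; the L1-MAGIC routines themselves are MATLAB) of how the minimum-l1 program min ||x||_1 subject to Ax = y can be recast as a linear program. The function name recover_l1 and the use of scipy's general-purpose LP solver in place of a specialized interior point code are our choices.

    import numpy as np
    from scipy.optimize import linprog

    def recover_l1(A, y):
        """Solve min ||x||_1 subject to A x = y by recasting as a linear program.

        Split trick: introduce t with |x| <= t componentwise and minimize sum(t):
          minimize 1't  subject to  x - t <= 0,  -x - t <= 0,  A x = y.
        """
        K, N = A.shape
        c = np.concatenate([np.zeros(N), np.ones(N)])   # objective: sum of t
        I = np.eye(N)
        A_ub = np.block([[I, -I], [-I, -I]])            # x <= t and -x <= t
        A_eq = np.hstack([A, np.zeros((K, N))])         # A x = y
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=y,
                      bounds=[(None, None)] * N + [(0, None)] * N)
        return res.x[:N]

The split-variable trick doubles the number of unknowns but keeps everything linear, which is part of why l1 recovery scales to large problems.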

The practical results are striking; in general, a
recovery as good as the optimal B-term approximation can be found from
on the order of 3B-4B measurements. Consider that a 1-megapixel image
can generally be approximated almost losslessly using around 25k wavelets.
Then a camera which takes 100k (specially designed) measurements will
produce the same quality image as a standard camera which measures each of the 1 million
pixels, a savings which could significantly reduce the power and computation expended to capture
the picture. The lesson
is that not only does a signal’s sparsity dictate how well it can be
compressed, it also has fundamental implications on how efficiently
it can be "sensed".

From an algebraic standpoint, the measurement process
can be represented as a KxN (K<<N) matrix operating on an N-vector:
y = Af. We are asking for the f which produced y; an ill-posed question
since the system has more columns than rows. But if f is sparse (with
unknown support), everything falls into place: l1 minimization
will simultaneously discover the exact support and the component values
of f from only the K measurements.
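
The following self-contained sketch (our dimensions, chosen arbitrarily) illustrates the claim numerically: a random Gaussian K x N matrix A and a B-sparse f with unknown support, recovered exactly from K << N measurements via the LP reformulation above.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    N, K, B = 256, 80, 10                      # ambient dim, measurements, sparsity

    f = np.zeros(N)                            # B-sparse signal, random support
    support = rng.choice(N, B, replace=False)
    f[support] = rng.standard_normal(B)

    A = rng.standard_normal((K, N)) / np.sqrt(K)   # random measurement matrix
    y = A @ f                                      # the K measurements

    # min ||x||_1 s.t. A x = y, as an LP over (x, t) with |x| <= t componentwise
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    res = linprog(c,
                  A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * N),
                  A_eq=np.hstack([A, np.zeros((K, N))]), b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    x = res.x[:N]

    print("max |x - f|:", np.abs(x - f).max())     # tiny: exact recovery
    print("support found:", set(np.flatnonzero(np.abs(x) > 1e-6)) == set(support))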

In short, recovery via l1 minimization works, well,
like magic.


Code

L1-MAGIC is a collection of MATLAB routines for sparse
recovery via L1 and TV minimization. The optimization programs are solved
using standard interior point methods, and are suitable for large-scale
problems.

Download the code (including User's Guide)

Download the User's Guide (pdf)



Papers

A nonlinear sampling theorem

"Robust uncertainty principles: Exact recovery from highly incomplete
Fourier information"

by: Emmanuel Candes, Justin Romberg, and Terence Tao
To appear in IEEE Transactions on Information Theory, February 2006.

The central result from this paper is that a sparse
vector can be recovered exactly from a small number of
Fourier domain observations. More precisely, let f be a length-N discrete
signal which has B nonzero
components (we stress that the number and locations of the components
are unknown a priori). We collect
samples at K different frequencies which are randomly selected. Then
for K on the order of B log N,
we can recover f perfectly (with very high probability) through l1 minimization.
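
As a rough numerical companion to the theorem (our construction, not the paper's experiments): sample a B-sparse real signal at K random frequencies and recover it by l1 minimization. Since f is real, each complex Fourier sample is split into its real and imaginary parts so that the program stays a real LP.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    N, B, K = 128, 5, 30                     # signal length, sparsity, # frequencies

    f = np.zeros(N)
    f[rng.choice(N, B, replace=False)] = rng.standard_normal(B)

    omega = rng.choice(N, K, replace=False)  # random frequency subset
    F = np.fft.fft(np.eye(N))[omega, :]      # rows of the DFT matrix at omega
    y = F @ f                                # K complex Fourier samples

    # f is real, so split each complex equation into real and imaginary parts
    A = np.vstack([F.real, F.imag])
    b = np.concatenate([y.real, y.imag])

    # min ||x||_1 subject to A x = b, recast as an LP
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    res = linprog(c,
                  A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * N),
                  A_eq=np.hstack([A, np.zeros((2 * K, N))]), b_eq=b,
                  bounds=[(None, None)] * N + [(0, None)] * N)

    print("recovery error:", np.abs(res.x[:N] - f).max())   # tiny w.h.p.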

Near-optimal signal recovery and the Uniform Uncertainty
Principle

"Near-optimal signal recovery from random projections and universal
encoding strategies"

by: Emmanuel Candes and Terence Tao
Submitted to IEEE Transactions on Information Theory, November 2004.

This paper derives precise conditions for when
an arbitrary sparse signal f can be recovered from a
fixed set of linear measurements y = Mf. If M obeys what is termed the
Uniform Uncertainty Principle
for sets of size S (which essentially means that all submatrices formed
by taking S columns from M are
approximate isometries), then every signal f with no more than S nonzero
components can be recovered
from its measurements y=Mf via an l1 minimization program. It is shown
that if M is generated randomly,
it will obey the UUP with high probability for sets of size S ~ K/log(N/K), where K is the number of rows
of M. Using this result, it is shown that if f is compressible rather
than sparse (meaning that the sorted
components of f decay quickly), then the l1 recovery is near-optimal:
the recovery error goes to zero as
we add more measurements almost as fast as the nonlinear approximation
error of the original signal.
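
The sketch below probes the UUP empirically for a random Gaussian matrix: after normalizing by 1/sqrt(K), the singular values of random S-column submatrices should all be close to 1. It only samples a few hundred submatrices rather than certifying all of them, so it illustrates the principle without verifying it.

    import numpy as np

    rng = np.random.default_rng(2)
    K, N, S = 128, 512, 10                         # rows, columns, set size to test

    M = rng.standard_normal((K, N)) / np.sqrt(K)   # normalized random matrix

    # Singular values of random S-column submatrices: near 1 means the
    # submatrices act as approximate isometries on S-sparse vectors.
    lo, hi = np.inf, -np.inf
    for _ in range(200):
        cols = rng.choice(N, S, replace=False)
        s = np.linalg.svd(M[:, cols], compute_uv=False)
        lo, hi = min(lo, s[-1]), max(hi, s[0])

    print(f"singular values of sampled {S}-column submatrices lie in [{lo:.3f}, {hi:.3f}]")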

Stability

"Stable signal recovery from incomplete and inaccurate measurements"
by: Emmanuel Candes, Justin Romberg, and Terence Tao
To appear in Communications on Pure and Applied Mathematics, 2006.

This paper demonstrates that the recovery procedure is stable. Given that the measurement matrix satisfies the UUP, we show that we can recover a sparse or compressible signal f from corrupted measurements y = Mf + e, where the size of e is at most epsilon, with a recovery error proportional to epsilon. The proof is short and clean, and encompasses the previous recovery results for the noiseless case.
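
The program analyzed in the paper minimizes the l1 norm subject to the quadratic constraint ||Mx - y||_2 <= epsilon, a second-order cone program. The sketch below (ours) swaps in an l-infinity noise constraint so that everything stays a linear program; it preserves the flavor of the result but is not the program from the paper.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(3)
    N, K, B, eps = 256, 100, 8, 0.01

    f = np.zeros(N)
    f[rng.choice(N, B, replace=False)] = rng.standard_normal(B)
    A = rng.standard_normal((K, N)) / np.sqrt(K)
    y = A @ f + eps * rng.uniform(-1, 1, K)        # bounded measurement noise

    # Variant program: min ||x||_1  s.t.  ||A x - y||_inf <= eps  (an LP)
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I],
                     [-I, -I],
                     [A, np.zeros((K, N))],
                     [-A, np.zeros((K, N))]])
    b_ub = np.concatenate([np.zeros(2 * N), y + eps, eps - y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * N + [(0, None)] * N)

    print("recovery error:", np.linalg.norm(res.x[:N] - f))   # small, on the order of eps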

Statistical Estimation

"The Dantzig selector: Statistical estimation when p is much
smaller than n"

by: Emmanuel Candes and Terence Tao
Submitted to IEEE Transactions on Information Theory, June 2005.

When the errors made in the measurement process are
Gaussian, much more can be said about the precision
of the recovery. This paper shows that a sparse signal can be estimated
from an incomplete set of measurements
corrupted by additive white Gaussian noise just as well as from observing
the entire noisy signal by itself
(and thresholding). The estimation process, which is again a certain
type of l1 minimization program, is termed
the Dantzig Selector.
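
The Dantzig selector itself is a linear program: minimize ||x||_1 subject to ||A^T(y - Ax)||_inf <= lambda, with lambda set to the usual noise level sigma*sqrt(2 log N). A small sketch (the dimensions and seed are ours):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    N, K, B, sigma = 256, 100, 8, 0.05

    f = np.zeros(N)
    f[rng.choice(N, B, replace=False)] = rng.standard_normal(B)
    A = rng.standard_normal((K, N)) / np.sqrt(K)   # near-unit-norm columns
    y = A @ f + sigma * rng.standard_normal(K)     # additive white Gaussian noise

    lam = sigma * np.sqrt(2 * np.log(N))           # the usual threshold level

    # Dantzig selector: min ||x||_1  s.t.  ||A^T (y - A x)||_inf <= lam
    G, v = A.T @ A, A.T @ y
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I],
                     [-I, -I],
                     [G, np.zeros((N, N))],
                     [-G, np.zeros((N, N))]])
    b_ub = np.concatenate([np.zeros(2 * N), v + lam, lam - v])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * N + [(0, None)] * N)

    print("estimation error:", np.linalg.norm(res.x[:N] - f))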

Linear Decoding

"Decoding by Linear Programming"
by: Emmanuel Candes and Terence Tao
IEEE Transactions on Information Theory, December 2005.

This paper demonstrates that in addition to recovering
sparse signals, l1 minimization can be used to detect
and correct sparse errors. A codeword c is generated by applying an
MxN coding matrix A to a message m:
c = Am. It is shown that if A obeys a type of uncertainty principle,
then c can be recovered even if m is
deviously altered in qM unknown locations (where q is a constant).
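
Decoding amounts to minimizing ||y - Ax||_1 over candidate messages x, once again a linear program. A toy sketch (the dimensions and corruption level are our choices):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(5)
    M, N, E = 128, 64, 12                  # codeword length, message length, # errors

    A = rng.standard_normal((M, N))        # random coding matrix
    m = rng.standard_normal(N)             # message
    y = A @ m                              # codeword
    y[rng.choice(M, E, replace=False)] += 10 * rng.standard_normal(E)  # sparse errors

    # Decode: min_x ||y - A x||_1, an LP over (x, t) with |y - A x| <= t
    c = np.concatenate([np.zeros(N), np.ones(M)])
    A_ub = np.block([[A, -np.eye(M)],
                     [-A, -np.eye(M)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * N + [(0, None)] * M)

    print("decoding error:", np.abs(res.x[:N] - m).max())   # tiny: m recovered exactly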

Finding Sparse Decompositions

"Quantitative robust uncertainty principles and optimally sparse
decompositions"

by: Emmanuel Candes and Justin Romberg
To appear in Foundations of Computational Mathematics, 2006.

This paper revisits the now classical application
of l1 minimization to finding sparse representations in unions
of bases. The spike-sinusoid system is studied in detail: it is shown that if a signal consists of a superposition of ~ N/sqrt(log N) spikes and sinusoids, then the sparsest (in the sense of minimum support) decomposition can be
found via l1 minimization. This paper makes explicit use of novel uncertainty
principles between the time and
frequency domains. The extension to finding sparse decompositions in
general pairs of bases is also discussed.
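
A sketch of the idea, substituting a spike-DCT pair for the spike-sinusoid system so that everything stays real-valued (the dictionary, dimensions, and seed are our choices):

    import numpy as np
    from scipy.fft import idct
    from scipy.optimize import linprog

    rng = np.random.default_rng(6)
    N, Bs, Bc = 128, 4, 4                         # length, # spikes, # DCT atoms

    Psi = idct(np.eye(N), norm='ortho', axis=0)   # orthonormal DCT basis as columns
    D = np.hstack([np.eye(N), Psi])               # spike/DCT dictionary, N x 2N

    alpha0 = np.zeros(2 * N)                      # sparse coefficients in the union
    alpha0[rng.choice(N, Bs, replace=False)] = rng.standard_normal(Bs)       # spikes
    alpha0[N + rng.choice(N, Bc, replace=False)] = rng.standard_normal(Bc)   # DCT atoms
    s = D @ alpha0                                # observed superposition

    # Sparsest decomposition via  min ||a||_1  s.t.  D a = s  (LP reformulation)
    n = 2 * N
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    res = linprog(c,
                  A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * n),
                  A_eq=np.hstack([D, np.zeros((N, n))]), b_eq=s,
                  bounds=[(None, None)] * n + [(0, None)] * n)

    print("decomposition error:", np.abs(res.x[:n] - alpha0).max())   # tiny w.h.p.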
