Course Notes for ECE 6250, Fall 2013

I. Signal representations in vector spaces
   Notes 1, the Shannon-Nyquist sampling theorem
   Notes 2, introduction to and examples of bases
   Notes 3, linear vector spaces
   Notes 4, norms and inner products
   Notes 5, linear approximation
          here is the code for the example at the end of the notes
   Notes 6, orthobases
   Notes 7, Parseval, orthoprojections, and Gram-Schmidt
   Notes 8, the cosine-I basis, the DCT, and JPEG
   Notes 9, block transforms, the lapped orthogonal transform
   Notes 10, Haar wavelets
   Notes 11, wavelets, multiresolution analysis, and properties of wavelets

II. Linear inverse problems and least-squares signal processing
   Notes 12, discretizing linear inverse problems
   Notes 13, symmetric matrices: eigenvalues, reconstruction, and stability
   Notes 14, the SVD and the least-squares problem
   Notes 15, the pseudo-inverse
   Notes 16, stable reconstruction: truncated SVD and Tikhonov regularization
          notes on extensions of Tikhonov regularization
   Notes 17, weighted least squares and the BLUE

III. Computing the solution to least-squares problems
   Notes 18, basic matrix factorizations, structured matrices
   Notes 19, Toeplitz matrices and the Levinson-Durbin algorithm
   Notes 20, least-squares as quadratic optimization, steepest descent
   Notes 21, the method of conjugate gradients
   Notes 22, recursive least squares
   Notes 23, the Kalman filter (revised version)

IV. Matrix approximation using least-squares
   Notes 24, low rank approximation, total least squares
   Notes 25, the KL transform and PCA

V. Beyond least squares
   Notes 26, L1 and L-infinity norm approximation
   Notes 27 (under construction), norm minimization and regularization
   Notes 28, basic concepts in convex optimization