Mathematical Foundations of Machine Learning, Fall 2019, Notes

(I will try to post notes here right before lecture.)

This concise review of linear algebra summarizes some of the background needed for the course.

See also some notes on basic matrix-vector manipulations.

Notes 01, Introduction

I. Vector spaces and linear representations
Notes 02, first look at linear representations
Notes 03, linear vector spaces
Notes 04, norms and inner products
Notes 05, linear approximation
Notes 06, orthogonal bases
Notes 07, nonorthogonal bases

II. Regression using least squares
Notes 08, regression as a linear inverse problem and least squares
the code in regression_example.m works through some toy problems (a Python sketch in the same spirit appears after this list)
Notes 09, least-squares in a Hilbert space
Notes 10, reproducing kernel Hilbert spaces
Notes 11, kernel regression, Mercer’s theorem
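
For concreteness, here is a minimal Python sketch of the kind of toy least-squares fit that regression_example.m works through. This is an illustrative analogue, not the course file: the data, polynomial degree, and noise level are made up.

    # Fit a low-degree polynomial to noisy samples by least squares.
    # (Illustrative sketch only; not taken from regression_example.m.)
    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy samples of a smooth function on [0, 1]
    m = 50
    t = np.sort(rng.uniform(0.0, 1.0, m))
    y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(m)

    # Vandermonde-style design matrix A for a degree-5 polynomial,
    # so the model is y ~ A x with x the coefficient vector.
    degree = 5
    A = np.vander(t, degree + 1, increasing=True)

    # Solve min_x ||y - A x||_2 (lstsq uses the SVD internally)
    x_hat, residuals, rank, sing_vals = np.linalg.lstsq(A, y, rcond=None)

    print("estimated coefficients:", x_hat)
    print("residual norm:", np.linalg.norm(y - A @ x_hat))

Running it prints the fitted coefficients and the residual norm; when A has full column rank, np.linalg.solve(A.T @ A, A.T @ y) gives the same answer via the normal equations.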

III. Solving and analyzing least squares problems
Notes 12, symmetric systems of equations
Notes 13, the SVD and least-squares
Notes 14, stable least-squares reconstruction (see the SVD-based sketch after this list)
Notes 15, matrix factorization
Notes 16, steepest descent and conjugate gradients
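
As a rough companion to Notes 13 and 14, the following Python sketch solves a least-squares problem through the SVD and truncates small singular values as a simple form of stabilization. The test matrix and truncation tolerance here are arbitrary assumptions, not taken from the notes.

    # Least squares via the SVD, with small singular values truncated.
    # (Illustrative sketch only; matrix and tolerance are made up.)
    import numpy as np

    rng = np.random.default_rng(1)

    # Ill-conditioned 30x5 system: nearly identical columns make A close to rank-deficient.
    m, n = 30, 5
    base = rng.standard_normal((m, 1))
    A = base + 1e-6 * rng.standard_normal((m, n))
    y = rng.standard_normal(m)

    # Thin SVD: A = U diag(s) V^T
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # Truncated pseudoinverse solution: discard directions with tiny singular
    # values (threshold chosen arbitrarily here) instead of dividing by them.
    tol = 1e-3 * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    x_hat = Vt.T @ (s_inv * (U.T @ y))

    print("singular values:", s)
    print("truncated SVD solution:", x_hat)

Setting tol = 0 recovers the ordinary pseudoinverse solution, which can blow up when A is nearly rank-deficient.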

IV. Statistical estimation and classification
Notes 17, probability review, WLLN, and MMSE estimation
Notes 18, Gaussian estimation, Gaussian graphical models
Notes 19, maximum likelihood estimation
Notes 20, consistency and the Cramér-Rao lower bound
Notes 21, Bayesian estimation
Notes 22, the Bayes classifier, nearest neighbor
Notes 23, empirical risk minimization
Notes 24, logistic regression

V. Further topics
Notes 25, gradient descent
Notes 26, PCA and NNMF
Notes 27, Gaussian mixture models
Notes 28, hidden Markov models