**Instructor**

Justin Romberg

Office: Coda S1109

Phone: (404) 894-3930

**Description**

The purpose of this course is to provide first-year PhD students in engineering and computing with a solid mathematical background for two of the pillars of modern data science: linear algebra and applied probability.


### Outline

**I. Vector space basics**

a) linear vector spaces, linear independence

b) norms and inner products

c) linear approximation

d) basis expansions
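The themes of linear approximation and basis expansions in part I can be illustrated in a few lines of numpy. This is a minimal sketch, not part of the course materials: the best approximation of a vector from a subspace with an orthonormal basis is found by taking inner-product coefficients, and the residual is orthogonal to the subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)            # vector in R^5

# Orthonormal basis for a 3-dimensional subspace of R^5, built via QR.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))

coeffs = Q.T @ x                      # basis expansion coefficients <x, q_i>
x_hat = Q @ coeffs                    # closest point to x in span(Q)

# Orthogonality principle: the residual x - x_hat is orthogonal
# to every basis vector, so Q.T @ (x - x_hat) is numerically zero.
residual = x - x_hat
```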

**II. Regression using least squares**

a) regression as a linear inverse problem

b) the least-squares problem

c) ridge regression

d) regression in a Hilbert space, representer theorem

e) reproducing kernels, kernel regression, Mercer’s theorem
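The first items in part II have simple closed forms that are worth seeing concretely. The following numpy sketch (illustrative only; the variable names `A`, `b`, `lam` are not from the course) solves the least-squares and ridge-regression problems via their normal equations; ridge shrinks the solution toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4))              # design matrix
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(50)

# Least squares: minimize ||A x - b||^2, solved by A^T A x = A^T b.
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# Ridge regression: minimize ||A x - b||^2 + lam ||x||^2,
# solved by (A^T A + lam I) x = A^T b.
lam = 0.1
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(4), A.T @ b)
```

Kernel regression (items d and e) keeps the same structure but replaces the Euclidean inner products with a reproducing kernel.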

**III. Solving and analyzing least-squares problems**

a) the Singular Value Decomposition (SVD) and the pseudoinverse

b) stable inversion and regularization

c) matrix factorization

d) steepest descent and conjugate gradients
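As a preview of part III, the pseudoinverse can be written directly in terms of the SVD: invert only the singular values above a tolerance, which is also the basic mechanism behind stable (truncated-SVD) inversion. A small sketch, assuming a numerically rank-deficient matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
# A 6x4 matrix of rank 3 (a product of 6x3 and 3x4 factors).
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = s.max() * 1e-10
r = int((s > tol).sum())                      # numerical rank

# Pseudoinverse: invert only the r significant singular values.
A_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

# Check the Moore-Penrose property A A^+ A = A.
print(np.allclose(A @ A_pinv @ A, A))  # True
```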

**IV. Statistical estimation and classification**

a) review of core concepts in probability

b) Gaussian estimation

c) maximum likelihood estimation

d) Bayesian estimation

e) the Bayes classifier

f) empirical risk minimization

g) logistic regression
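The last two items of part IV meet in a standard example: fitting a logistic regression model by gradient descent on the empirical risk (the average negative log-likelihood). The sketch below is illustrative, with made-up synthetic data; it is not code from the course.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = rng.standard_normal((n, 2))
w_true = np.array([2.0, -1.0])
# Binary labels from a noisy linear rule.
y = (X @ w_true + 0.1 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Gradient descent on the mean logistic loss.
w = np.zeros(2)
step = 0.5
for _ in range(500):
    p = sigmoid(X @ w)                 # predicted probabilities
    grad = X.T @ (p - y) / n           # gradient of the empirical risk
    w -= step * grad

accuracy = ((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean()
```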

**V. Further topics as time permits**

a) gradient descent for optimization

b) stochastic gradient descent

c) multi-layer neural networks, the chain rule and back propagation
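The optimization topics in part V can be previewed with stochastic gradient descent on the least-squares objective from part II, updating with the gradient of a single randomly chosen sample per step. Step size and iteration count below are illustrative choices, not recommendations from the course.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((500, 3))
x_true = np.array([1.0, 2.0, -1.0])
b = A @ x_true                        # noiseless, so SGD can recover x_true

x = np.zeros(3)
step = 0.01
for _ in range(20000):
    i = rng.integers(500)             # sample one data point
    # Gradient of the single-sample loss (a_i^T x - b_i)^2 / 2.
    g = (A[i] @ x - b[i]) * A[i]
    x -= step * g
```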