This note covers a selection of classical models in probabilistic machine learning; most of the content is drawn from Pattern Recognition and Machine Learning and the Cambridge MLMI lectures (2023-24).
However, it may not be friendly to ML beginners. Some background in basic machine learning, e.g. linear algebra, calculus, probability, statistics, linear regression, logistic regression, Bayesian inference, MLE and MAP, is recommended. Typos may still occur; I apologise in advance and will fix them as time allows.
Outline:
- Preliminaries: basic distributions and mathematical techniques that may be used
- Linear Models for Regression: a review from both the decision-making and the statistical perspectives
- Linear Models for Classification:
  - Generative models: Fisher's discriminant
  - Discriminative models: logistic regression, iteratively reweighted least squares (IRLS), multi-class logistic regression, probit regression and Bayesian logistic regression
- Kernel Methods: kernels, RKHS and kernel regression
- Gaussian Processes: GP regression, GP classification and large-scale kernel approximation
- Kernel Machines: SVM and RVM
- Graphical Models: Bayesian networks and Markov random fields
- Expectation-Maximization: a review from both the approximate-inference and the KL-divergence perspectives
(Attached: MLMI_Lecture_Notes-5.pdf)
Feel free to comment, reach out or check my homepage.