Course Credit: 3
Introduction
The course provides a broad but thorough introduction to the methods and practice of statistical machine learning and its core models and algorithms.
Objectives
The aim of the course is to provide students of statistics with detailed knowledge of how machine learning methods work and how statistical models can be brought to bear in computer systems, not only to analyze large data sets but also to let computers perform tasks that traditional methods of computer science are unable to address.
Learning Outcomes
After completing the course, students will have the knowledge and skills to: i) describe a number of models for supervised, unsupervised, and reinforcement machine learning; ii) assess the strengths and weaknesses of each of these models; iii) explain the underlying mathematical relationships within and across statistical learning algorithms; iv) identify appropriate statistical tools for real-world data analysis problems based on reasoned arguments; v) develop and implement optimisation methods for training statistical models; vi) design decision and optimal control problems to improve the performance of statistical learning algorithms; vii) design and implement various statistical machine learning algorithms in real-world applications; viii) evaluate the performance of various statistical machine learning algorithms; ix) demonstrate a working knowledge of dimension reduction techniques; and x) identify and implement advanced computational methods in machine learning.
Contents
Statistical learning: Statistical learning and regression, curse of dimensionality and parametric models, assessing model accuracy and the bias-variance trade-off, classification problems and K-nearest neighbors.
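The K-nearest neighbors classifier listed above is simple enough to sketch directly. The following illustrative NumPy implementation uses a made-up two-class data set; the data and the choice k = 3 are assumptions for the example, not part of the course material:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances to x
    nearest = np.argsort(dists)[:k]                   # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # most common label among them

# toy data: class 0 clusters near the origin, class 1 near (5, 5)
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.1, 0.2])))  # → 0 (nearest neighbors are class 0)
```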
Linear regression: Model selection and qualitative predictors, interactions and nonlinearity.
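An interaction between two predictors can be fit by ordinary least squares simply by adding a product column to the design matrix. A minimal sketch on simulated data; the true coefficients (1, 2, -1, 0.5) are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# simulated response with an interaction term x1*x2
y = 1.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2 + rng.normal(scale=0.1, size=n)

# design matrix: intercept, main effects, and the interaction column
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # close to the true coefficients [1.0, 2.0, -1.0, 0.5]
```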
Classification: Introduction to classification, logistic regression and maximum likelihood, multivariate logistic regression and confounding, case-control sampling and multiclass logistic regression, linear discriminant analysis and Bayes' theorem, univariate linear discriminant analysis, multivariate linear discriminant analysis and ROC curves, quadratic discriminant analysis and naive Bayes.
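Logistic regression estimates its coefficients by maximum likelihood, which can be sketched as gradient ascent on the log-likelihood. The learning rate, iteration count, and toy one-dimensional data below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Fit logistic regression by gradient ascent on the log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)                  # current fitted probabilities
        beta += lr * X.T @ (y - p) / len(y)    # gradient of the log-likelihood
    return beta

# toy 1-D data: class 1 tends to occur for positive x (labels carry some noise)
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = (x + 0.3 * rng.normal(size=300) > 0).astype(float)
X = np.column_stack([np.ones_like(x), x])      # intercept + slope
beta = fit_logistic(X, y)
acc = np.mean((sigmoid(X @ beta) > 0.5) == y.astype(bool))
print(acc)  # high but not perfect training accuracy, because of the label noise
```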
Resampling methods: Estimating prediction error and the validation set approach, k-fold cross-validation, the right and wrong ways to do cross-validation, the bootstrap, more on the bootstrap.
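A k-fold cross-validation estimate of test error takes only a few lines. Here it is applied, for illustration, to least-squares regression on simulated data; k = 5 and the data-generating model are assumptions of the example:

```python
import numpy as np

def kfold_mse(X, y, k=5, seed=0):
    """Estimate test MSE of least-squares regression by k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))              # shuffle before splitting into folds
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errors.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return np.mean(errors)                     # average held-out error over folds

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + rng.normal(scale=0.5, size=100)
X = np.column_stack([np.ones_like(x), x])
print(round(kfold_mse(X, y), 2))  # close to the irreducible noise variance 0.25
```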
Linear model selection and regularization: Linear model selection and best subset selection, forward stepwise selection, backward stepwise selection, estimating test error using Mallows's Cp, AIC, BIC, and adjusted R-squared, estimating test error using cross-validation, shrinkage methods and ridge regression, the lasso, the elastic net, tuning parameter selection for ridge regression and the lasso, dimension reduction, principal components regression and partial least squares.
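Ridge regression has a closed-form solution, which makes the shrinkage effect of the tuning parameter easy to demonstrate. A sketch on simulated data generated without an intercept (the true coefficient vector and the lambda grid are illustrative assumptions):

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge solution; assumes no unpenalised intercept for simplicity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(3)
n, p = 50, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

for lam in (0.0, 10.0, 1000.0):
    print(lam, round(float(np.linalg.norm(ridge(X, y, lam))), 3))
# a larger lambda shrinks the coefficient vector toward zero
```

With lam = 0 this reduces to ordinary least squares; the printed norms decrease monotonically as lambda grows, which is the shrinkage behaviour the course discusses.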
Moving beyond linearity: Polynomial regression and step functions, piecewise polynomials
and splines, smoothing splines, local regression and generalized additive models.
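Moving beyond linearity can be as simple as regressing on a polynomial basis. A sketch fitting a cubic to noisy sin(x) data; the target function, degree, and interval are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=200)

# cubic polynomial basis: columns 1, x, x^2, x^3
X = np.vander(x, 4, increasing=True)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# evaluate the fitted polynomial at a new point
x0 = 1.0
pred = np.vander(np.array([x0]), 4, increasing=True) @ beta
print(round(float(pred[0]), 2))  # close to sin(1) ≈ 0.84 on this interval
```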
Tree-based methods: Decision trees, pruning a decision tree, classification trees and comparison with linear models, bootstrap aggregation (bagging) and random forests, boosting and variable importance.
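Bootstrap aggregation averages the predictions of trees grown on resampled data. A sketch using depth-1 regression trees (stumps) on a one-dimensional step function; the stump base learner and toy data are assumptions made to keep the example short:

```python
import numpy as np

def fit_stump(x, y):
    """Best single-split (depth-1) regression tree on a 1-D feature."""
    best = (np.inf, None, None, None)
    for s in np.unique(x):
        left, right = y[x <= s], y[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    return best[1:]                                     # (split, left mean, right mean)

def bagged_predict(x_train, y_train, x0, n_trees=50, seed=5):
    """Average stump predictions over bootstrap resamples (bagging)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y_train), len(y_train))  # bootstrap sample
        s, left_val, right_val = fit_stump(x_train[idx], y_train[idx])
        preds.append(left_val if x0 <= s else right_val)
    return float(np.mean(preds))

rng = np.random.default_rng(6)
x = rng.uniform(0, 1, 100)
y = (x > 0.5).astype(float) + rng.normal(scale=0.1, size=100)
print(round(bagged_predict(x, y, 0.9), 1))  # the averaged stumps recover the step, near 1.0
```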
Support vector machines: Maximal margin classifier, support vector classifier, kernels and support vector machines, example and comparison with logistic regression.
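The support vector classifier is usually presented as a quadratic program, but a soft-margin linear SVM can also be fit by subgradient descent on the hinge loss. The following sketch takes that route on separable toy data; the regularization and step-size values are illustrative assumptions:

```python
import numpy as np

def fit_linear_svm(X, y, lam=0.01, lr=0.01, n_iter=2000):
    """Soft-margin linear SVM fit by subgradient descent on the hinge loss."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mask = y * (X @ w) < 1                  # points inside or beyond the margin
        hinge_grad = -(y[mask][:, None] * X[mask]).sum(axis=0) / len(y)
        w -= lr * (lam * w + hinge_grad)        # ridge-style penalty plus hinge term
    return w

# separable toy data with labels in {-1, +1}
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(loc=[2.0, 2.0], scale=0.3, size=(30, 2)),
               rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(30, 2))])
X = np.column_stack([np.ones(len(X)), X])       # intercept column
y = np.concatenate([np.ones(30), -np.ones(30)])
w = fit_linear_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(acc)  # the fitted hyperplane separates this training set
```

Kernel SVMs replace the inner products here with a kernel function; this sketch covers only the linear, soft-margin case discussed first in the course.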