Learning Outcomes Week 1
- Understand that machine learning combines data with assumptions to make predictions.
- Understand that probability provides a calculus of uncertainty for dealing with unknowns.
Learning Outcomes Week 2
- Understand what an objective function is.
- Understand the basic principles of collaborative filtering by matrix factorization.
- Understand the idea of knowledge representation in terms of vector data.
- Understand a simple iterative optimizer such as gradient descent (see the sketch after this list).
- Understand the difference between steepest descent and stochastic gradient descent.
- Understand the use of momentum in the gradient descent algorithm.
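
As a minimal sketch of the optimizers listed above, the following Python snippet minimises an illustrative quadratic sum of squares objective with plain steepest descent and with momentum. The objective, learning rate and momentum coefficient are assumptions made for the example; stochastic gradient descent would simply replace the full gradient with a gradient estimated from a random subset of the data.

```python
import numpy as np

# Illustrative quadratic objective f(w) = 0.5 * ||A w - b||^2 (an assumption,
# chosen only to show how the two updates differ).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def gradient(w):
    # Gradient of the sum of squares objective with respect to w.
    return A.T @ (A @ w - b)

def gradient_descent(w, learning_rate=0.1, steps=100):
    # Plain steepest descent: step against the full gradient each iteration.
    for _ in range(steps):
        w = w - learning_rate * gradient(w)
    return w

def gradient_descent_momentum(w, learning_rate=0.1, momentum=0.9, steps=100):
    # Momentum accumulates a velocity term that smooths successive updates.
    velocity = np.zeros_like(w)
    for _ in range(steps):
        velocity = momentum * velocity - learning_rate * gradient(w)
        w = w + velocity
    return w

w0 = np.zeros(2)
print(gradient_descent(w0))           # approaches the minimiser of the quadratic
print(gradient_descent_momentum(w0))  # same minimiser, different trajectory
```
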
Learning Outcomes Week 3
- Understand the relation between the sum of squares objective and a Gaussian likelihood.
- Understand that a stationary point of an objective is found by setting gradients to zero.
- Understand the algorithmic approach of coordinate ascent.
- Understand linear algebraic approaches to representing the objective function and the prediction function.
- Perform vector differentiation of the sum of squares objective function.
- Recover the stationary point of a multivariate linear regression sum of squares objective (see the worked note after this list).
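
A worked note on the last two outcomes, using generic notation ($\mathbf{X}$ the design matrix, $\mathbf{y}$ the targets, $\mathbf{w}$ the parameters). For the sum of squares objective
\[
E(\mathbf{w}) = (\mathbf{y} - \mathbf{X}\mathbf{w})^\top(\mathbf{y} - \mathbf{X}\mathbf{w}),
\]
vector differentiation gives
\[
\frac{\partial E}{\partial \mathbf{w}} = -2\,\mathbf{X}^\top(\mathbf{y} - \mathbf{X}\mathbf{w}),
\]
and setting the gradient to zero recovers the stationary point
\[
\mathbf{X}^\top\mathbf{X}\,\mathbf{w} = \mathbf{X}^\top\mathbf{y}
\quad\Rightarrow\quad
\mathbf{w} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.
\]
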
Learning Outcomes Week 4
- Understand that basis functions allow for non-linear regression.
- Understand the difference between non-linear in the parameters and non-linear in the inputs.
- Understand the difference between local basis functions, like the RBF basis, and global basis functions, like polynomials or the Fourier basis.
- Find the stationary point of a sum of squares objective function where the prediction function is a linear combination of a basis set (see the sketch after this list).
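
A minimal sketch of the basis function ideas above, assuming a small synthetic data set and illustrative choices of basis centres and lengthscale. It builds a local (RBF) basis and a global (polynomial) basis and solves the resulting least squares problem; the model remains linear in the parameters even though it is non-linear in the inputs.

```python
import numpy as np

# Illustrative one-dimensional data (an assumption for the sketch).
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 30)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

def rbf_basis(x, centres, lengthscale=1.0):
    # Local (RBF) basis: each column responds only near its centre.
    return np.exp(-0.5 * ((x[:, None] - centres[None, :]) / lengthscale) ** 2)

def polynomial_basis(x, degree=4):
    # Global (polynomial) basis: every basis function responds everywhere.
    return np.vander(x, degree + 1, increasing=True)

# Linear in the parameters w, so the sum of squares stationary point is the
# least squares solution of Phi w = y.
Phi = rbf_basis(x, centres=np.linspace(-3.0, 3.0, 8))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(w)
```
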
Learning Outcomes Week 6
- Understand the challenge of model selection.
- Understand the difference between training set, test set and validation set.
- Understand and be able to apply appropriately the following approaches to model validation:
- hold out set,
- leave one out cross validation,
- k-fold cross validation (see the sketch after this list).
- Be able to identify the type of error that arises from bias and the type of error that arises from variance.
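
A short sketch of k-fold cross validation as listed above, written against a generic fit/score interface; the model and data below are placeholders, and leave-one-out cross validation is the special case k = n.

```python
import numpy as np

def k_fold_indices(n, k, seed=None):
    # Shuffle the indices and split them into k roughly equal folds.
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def k_fold_cross_validation(X, y, fit, score, k=5):
    # Each fold is used once as the validation set; the rest is training data.
    scores = []
    for fold in k_fold_indices(len(y), k):
        train = np.setdiff1d(np.arange(len(y)), fold)
        model = fit(X[train], y[train])
        scores.append(score(model, X[fold], y[fold]))
    # The average validation score is the cross-validation estimate.
    return np.mean(scores)

# Example usage with linear least squares as a placeholder model.
X = np.column_stack([np.ones(50), np.linspace(0.0, 1.0, 50)])
y = 2.0 + 3.0 * X[:, 1] + 0.1 * np.random.default_rng(1).standard_normal(50)
fit = lambda Xtr, ytr: np.linalg.lstsq(Xtr, ytr, rcond=None)[0]
score = lambda w, Xval, yval: -np.mean((Xval @ w - yval) ** 2)  # negative MSE
print(k_fold_cross_validation(X, y, fit, score, k=5))
```
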
Learning Outcomes Week 7
- Be able to distinguish between different types of uncertainty: aleatoric and epistemic. Be able to give examples of each type.
- Be able to derive Bayes' rule from the product rule of probability (see the worked derivation after this list).
- Understand the meaning of the terms prior, posterior and marginal likelihood.
- Be able to identify these terms in Bayes' rule.
- Be able to describe what each of these terms represents (belief before observation, belief after observation, relationship between belief and observation, and the model score).
- Understand how to derive the marginal likelihood from the likelihood and the prior.
- Understand the difference between the frequentist approach and the Bayesian approach, i.e. that in the Bayesian approach parameters are treated as random variables.
- Be able to derive the maths to perform a simple Bayesian update on the offset parameter of a regression problem.
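
The derivation referenced above is short enough to state in full (writing $\mathbf{w}$ for the parameters and $\mathbf{y}$ for the data; the symbols are generic choices). The product rule gives $p(\mathbf{w}, \mathbf{y}) = p(\mathbf{y}\mid\mathbf{w})\,p(\mathbf{w}) = p(\mathbf{w}\mid\mathbf{y})\,p(\mathbf{y})$, so
\[
p(\mathbf{w}\mid\mathbf{y}) = \frac{p(\mathbf{y}\mid\mathbf{w})\,p(\mathbf{w})}{p(\mathbf{y})},
\qquad
p(\mathbf{y}) = \int p(\mathbf{y}\mid\mathbf{w})\,p(\mathbf{w})\,\mathrm{d}\mathbf{w},
\]
where $p(\mathbf{w})$ is the prior, $p(\mathbf{y}\mid\mathbf{w})$ the likelihood, $p(\mathbf{w}\mid\mathbf{y})$ the posterior and $p(\mathbf{y})$ the marginal likelihood, obtained by marginalising the parameters from the product of likelihood and prior.
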
Learning Outcomes Week 8
- Understand the principles of latent variable modelling.
- Understand the origin of PCA and its relationship to factor analysis.
- Understand the eigenvalue problem for positive definite symmetric matrices and how it relates to the covariance matrix.
- Be able to derive the posterior density over the latent variables for probabilistic PCA and factor analysis.
- Be able to intelligently apply principal component analysis to multivariate data sets (see the sketch after this list).
- Understand the separation between model and algorithm.
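
A minimal sketch of principal component analysis through the eigenvalue problem on the covariance matrix; the data set here is a random placeholder and the number of retained components is an arbitrary choice.

```python
import numpy as np

# Placeholder multivariate data: 200 observations, 5 dimensions (an assumption).
rng = np.random.default_rng(2)
Y = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))

# Centre the data and form the sample covariance matrix.
Y_centred = Y - Y.mean(axis=0)
covariance = Y_centred.T @ Y_centred / (len(Y) - 1)

# The covariance is symmetric positive semi-definite, so eigh applies;
# eigenvalues come back in ascending order, so reverse to sort descending.
eigenvalues, eigenvectors = np.linalg.eigh(covariance)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Project onto the first two principal components.
Z = Y_centred @ eigenvectors[:, :2]
print(eigenvalues)  # variance captured by each principal direction
print(Z.shape)      # (200, 2)
```
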
Learning Outcomes Week 9
- Understand the concept of a Bernoulli trial and the form of the Bernoulli distribution.
- Perform maximum likelihood estimation for the Bernoulli distribution (see the derivation after this list).
- Understand the principles and assumptions underlying naive Bayes.
- Be able to develop the joint distributions and the conditional distributions for naive Bayes.
- Be able to apply naive Bayes to simple data sets.
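
The maximum likelihood outcome above has a short standard derivation (with $\pi$ the Bernoulli parameter and $y_i \in \{0, 1\}$ the observed trials). The likelihood and log likelihood are
\[
p(\mathbf{y}\mid\pi) = \prod_{i=1}^{n} \pi^{y_i}(1-\pi)^{1-y_i},
\qquad
\log p(\mathbf{y}\mid\pi) = \sum_{i=1}^{n}\bigl(y_i\log\pi + (1-y_i)\log(1-\pi)\bigr),
\]
and setting the derivative with respect to $\pi$ to zero gives
\[
\frac{\sum_i y_i}{\pi} - \frac{n - \sum_i y_i}{1-\pi} = 0
\quad\Rightarrow\quad
\hat{\pi} = \frac{1}{n}\sum_{i=1}^{n} y_i.
\]
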
Learning Outcomes Week 10
- Understand the difference between a generative and a discriminative model in classification.
- Understand that generative models such as naive Bayes can handle missing data more easily, but may require stronger assumptions than directly modelling the conditional distribution, as a discriminative model does.
- Understand the use of a link function to allow a linear model to be combined with the Bernoulli distribution.
- Understand the form of the logit link function and the nature of the log odds.
- Understand the form of a generalised linear model such as logistic regression and how to derive gradients with respect to parameters to minimize the negative log likelihood (see the sketch after this list).
- Be able to apply logistic regression to a classification data set.
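
A minimal sketch of logistic regression fitted by gradient descent on the negative log likelihood; the data set, learning rate and iteration count are illustrative assumptions rather than part of the course material.

```python
import numpy as np

def sigmoid(a):
    # The inverse of the logit link maps the linear model to a probability.
    return 1.0 / (1.0 + np.exp(-a))

def negative_log_likelihood_grad(w, X, y):
    # For the Bernoulli likelihood with a logit link the gradient is X^T (p - y).
    p = sigmoid(X @ w)
    return X.T @ (p - y)

# Placeholder two-class data set (an assumption for the sketch).
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.standard_normal((100, 2))])
true_w = np.array([-0.5, 2.0, -1.0])
y = (rng.uniform(size=100) < sigmoid(X @ true_w)).astype(float)

# Plain gradient descent on the negative log likelihood.
w = np.zeros(3)
for _ in range(2000):
    w -= 0.01 * negative_log_likelihood_grad(w, X, y)

print(w)                                      # estimated parameters
print(np.mean((sigmoid(X @ w) > 0.5) == y))   # training accuracy
```
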
Learning Outcomes Week 12
- Understand how the marginal likelihood in a Gaussian Bayesian regression model can be computed using properties of the multivariate Gaussian (see the note after this list).
- Understand that Bayesian regression models put a joint Gaussian prior across the data.
- Understand that we can specify the covariance function of that prior directly.
- Understand that Gaussian process models generalize basis function models to allow infinite basis functions.
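
A brief note on the computation described above, under the standard assumptions of a zero-mean Gaussian prior on the weights (the symbols $\alpha$, $\sigma^2$ and $\boldsymbol{\Phi}$ are generic choices for the prior variance, noise variance and basis matrix). With $\mathbf{w}\sim\mathcal{N}(\mathbf{0},\alpha\mathbf{I})$ and $\mathbf{y}\mid\mathbf{w}\sim\mathcal{N}(\boldsymbol{\Phi}\mathbf{w},\sigma^{2}\mathbf{I})$, the properties of the multivariate Gaussian give
\[
p(\mathbf{y}) = \mathcal{N}\bigl(\mathbf{y}\,;\,\mathbf{0},\;\alpha\boldsymbol{\Phi}\boldsymbol{\Phi}^\top + \sigma^{2}\mathbf{I}\bigr),
\]
so the prior over the data is jointly Gaussian with covariance $\mathbf{K} = \alpha\boldsymbol{\Phi}\boldsymbol{\Phi}^\top + \sigma^{2}\mathbf{I}$. A Gaussian process specifies the entries of $\mathbf{K}$ directly through a covariance function $k(\mathbf{x}_i, \mathbf{x}_j)$, which is equivalent to allowing an infinite set of basis functions.
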