Andrzej Pronobis
UW KTH

Dimensionality Reduction

Introduction

This lecture covers dimensionality reduction, one of the fundamental problems in statistics and machine learning. The aim is to build a good intuition for the problem from several different perspectives, discuss potential applications, and explain the fundamentals of both basic and more advanced models. The lecture covers the following topics:

  • Linear Mapping to Embedded Space
  • Principal Component Analysis
  • Probabilistic PCA
  • Non-linear Dimensionality Reduction
  • Gaussian Process Latent Variable Models
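As a taste of the first two topics, the listing below sketches classical PCA: the data are centered and the top-k principal directions are obtained from the singular value decomposition of the centered data matrix. This is an illustrative Python sketch (the course package itself is in Matlab); the function name `pca` and the toy data are invented for the example.

```python
import numpy as np

def pca(X, k):
    """Project data X (n samples x d features) onto its top-k principal components."""
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data; the rows of Vt are the principal directions,
    # ordered by decreasing singular value (i.e., explained variance).
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:k]              # (k, d) principal axes
    Z = X_centered @ components.T    # (n, k) low-dimensional embedding
    return Z, components

# Toy example: 3-D observations generated from a 2-D latent space plus small noise,
# so two principal components should reconstruct the data almost exactly.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 3)) + 0.01 * rng.normal(size=(100, 3))
Z, W = pca(X, k=2)
```

Probabilistic PCA and the GPLVM discussed later in the lecture can be seen as generative reinterpretations and non-linear generalizations of this same linear projection.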

Course Materials

Code and Data

The following Matlab package (code and data) contains useful tools and implementations of several models (based on existing packages implemented by others). It serves as the basis for the solutions developed during the lecture.

Lecture

The lecture slides are available for download below:

Lecture Notes

The following lecture notes provide some of the derivations for the material discussed during the lecture.

Dimensionality Reduction

The following books and papers are a great source of supplementary material for the lectures:

  • C. Bishop: Pattern Recognition and Machine Learning. Springer, 2007.
  • Neil D. Lawrence: Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models. Journal of Machine Learning Research 6: 1783-1816 (2005).