Sampling Methods for Machine Learning

Content

Sample-based inference is the de facto standard for solving otherwise infeasible problems in machine learning, estimation, and control under unavoidable uncertainty. It is thus an important foundation for further studies. This lecture gives a thorough overview of state-of-the-art sampling methods and discusses current developments at the research frontier.

The first part shows how to efficiently draw large numbers of random samples from given densities, starting with the special cases of the uniform and Gaussian distributions. For sampling from arbitrary densities, important techniques such as inverse transform sampling, Knothe-Rosenblatt maps, Markov chain Monte Carlo, normalizing flows, and Langevin equations are introduced.
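To make the simplest of these techniques concrete, here is a minimal sketch of inverse transform sampling (an illustrative toy example, not course material; the exponential target is an assumption): if U is uniform on (0, 1) and F is an invertible CDF, then X = F⁻¹(U) is distributed according to F.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_exponential(rate, n):
    """Inverse transform sampling for Exp(rate).

    The CDF is F(x) = 1 - exp(-rate * x), so its inverse is
    F^{-1}(u) = -ln(1 - u) / rate. Feeding uniform samples
    through F^{-1} yields exponentially distributed samples.
    """
    u = rng.uniform(size=n)          # U ~ Uniform(0, 1)
    return -np.log(1.0 - u) / rate   # X = F^{-1}(U)

samples = sample_exponential(rate=2.0, n=100_000)
print(samples.mean())  # should be close to 1/rate = 0.5
```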

The second part is concerned with deterministic or low-discrepancy sampling, where the goal is to find a set of representative samples of a given density. These are usually obtained by optimization, which, in contrast to random sampling, leads to good coverage, high homogeneity, and reproducible results. To analyze and synthesize such samples, various statistical tests and discrepancy measures are presented, including scalar tests such as the Cramér-von Mises and Kolmogorov-Smirnov tests as well as multivariate generalizations based on Localized Cumulative Distributions and the Stein discrepancy.
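As a rough illustration of how such a measure separates random from deterministic samples (an assumed toy comparison, not an example from the lecture), the sketch below computes the one-sample Kolmogorov-Smirnov statistic against the uniform distribution on [0, 1]:

```python
import numpy as np

def ks_statistic_uniform(samples):
    """One-sample Kolmogorov-Smirnov statistic against Uniform(0, 1).

    D_n = sup_x |F_n(x) - F(x)|, where F_n is the empirical CDF
    and F(x) = x on [0, 1]. A small D_n indicates homogeneous
    coverage of the unit interval.
    """
    x = np.sort(samples)
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n   # F_n just after each sample point
    ecdf_lo = np.arange(0, n) / n       # F_n just before each sample point
    return max(np.max(ecdf_hi - x), np.max(x - ecdf_lo))

rng = np.random.default_rng(seed=0)
random_pts = rng.uniform(size=1000)        # i.i.d. random samples
grid_pts = (np.arange(1000) + 0.5) / 1000  # deterministic, equally spaced samples
print(ks_statistic_uniform(random_pts))    # typically a few hundredths
print(ks_statistic_uniform(grid_pts))      # 0.0005, far more homogeneous
```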

Finally, advanced topics such as importance sampling and sampling from the posterior density in a Bayesian update are discussed. Typical applications of sample-based inference include Bayesian neural networks, information fusion, and reinforcement learning.
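To preview importance sampling in a Bayesian setting, here is a minimal self-normalized importance sampling sketch (the Gaussian prior, likelihood, and all parameter values are assumptions for illustration, not lecture material) that estimates a posterior mean without knowing the normalization constant:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def gaussian_logpdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

def log_posterior_unnormalized(x):
    # Assumed toy model: prior N(0, 1), likelihood N(y | x, 0.5), observation y = 1.0
    return gaussian_logpdf(x, 0.0, 1.0) + gaussian_logpdf(1.0, x, 0.5)

# Self-normalized importance sampling with a broad Gaussian proposal.
proposal_mean, proposal_std = 0.0, 2.0
x = rng.normal(proposal_mean, proposal_std, size=100_000)
log_w = log_posterior_unnormalized(x) - gaussian_logpdf(x, proposal_mean, proposal_std)
w = np.exp(log_w - log_w.max())  # subtract max log-weight for numerical stability
w /= w.sum()                     # normalization constant cancels out here

posterior_mean = np.sum(w * x)
print(posterior_mean)  # analytic posterior mean for this toy model is 0.8
```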

Lecture language: English
Organizational Matters

To arrange an oral examination, please use the following email address:

pruefung-isas@iar.kit.edu
