
a quick tutorial on pac-bayesian and stability-based learning
the gentlest possible introduction to generalization error bounds beyond uniform convergence
just some basic statistical learning *practice*
generalizing linear discriminant analysis beyond normally distributed data
bayesian inference under model misspecification, visually explained
a toy bayesian neural network with an exact posterior $ \beta \mid D $
never escaping jekyll :)