You can turn any function into a probability
Machine learning has gotten sloppy over the years.
It used to be that we thought carefully about the theoretical underpinnings of our models and proceeded accordingly.
We used the L2 loss in regression because, when doing maximum likelihood estimation over a dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^n$, the L2 loss follows from the assumption that our samples are distributed according to i.i.d. Gaussian noise around some underlying deterministic function, $f(x)$. If we have a likelihood defined as

$$p(\mathcal{D} \mid w) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\big(y_i - f(x_i, w)\big)^2}{2\sigma^2}\right),$$

where $w$ are the weights parametrizing our model, then maximizing the likelihood is the same as minimizing the L2 loss:

$$w_{\mathrm{MLE}} = \arg\max_w\, p(\mathcal{D} \mid w) = \arg\min_w \sum_{i=1}^n \big(y_i - f(x_i, w)\big)^2.$$
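A minimal numerical sanity check of that equivalence, assuming a hypothetical scalar model $f(x, w) = wx$ with made-up data and a known noise scale: the negative log-likelihood is just the L2 loss positively rescaled and shifted by a $w$-independent constant, so the two share a minimizer.

```python
import numpy as np

# Hypothetical setup: scalar model f(x, w) = w * x with known noise scale sigma.
rng = np.random.default_rng(0)
sigma = 0.5
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=sigma, size=100)

# Evaluate the L2 loss on a grid of candidate weights.
ws = np.linspace(0.0, 4.0, 40_001)
resid_sq = ((y[None, :] - ws[:, None] * x[None, :]) ** 2).sum(axis=1)

# Negative log-likelihood under i.i.d. Gaussian noise: the L2 loss rescaled by
# 1/(2 sigma^2), plus a w-independent normalization constant.
nll = resid_sq / (2 * sigma**2) + len(y) * np.log(sigma * np.sqrt(2 * np.pi))

# Same minimizer: maximizing the likelihood is minimizing the L2 loss.
assert ws[nll.argmin()] == ws[resid_sq.argmin()]
```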
We'd squeeze in some weight decay because, when performing maximum a posteriori estimation, it was equivalent to having a Gaussian prior over our weights, $p(w) \propto \exp\!\big(-\tfrac{\lambda}{2}\lVert w\rVert^2\big)$. For the same likelihood as above,

$$w_{\mathrm{MAP}} = \arg\max_w\, p(\mathcal{D} \mid w)\, p(w) = \arg\min_w \left[\sum_{i=1}^n \frac{\big(y_i - f(x_i, w)\big)^2}{2\sigma^2} + \frac{\lambda}{2}\lVert w\rVert^2\right].$$
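The same kind of sketch for the MAP claim, under the same hypothetical model and data (the prior precision $\lambda$ is also an arbitrary choice): grid-searching the weight-decay objective lands on the closed-form ridge/MAP estimate.

```python
import numpy as np

# Hypothetical setup as before: f(x, w) = w * x, Gaussian prior p(w) = N(0, 1/lam).
rng = np.random.default_rng(0)
sigma, lam = 0.5, 0.1
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=sigma, size=100)

# Closed-form minimizer of sum_i (y_i - w x_i)^2 / (2 sigma^2) + lam w^2 / 2,
# i.e. the MAP estimate, which is exactly ridge regression / weight decay.
w_map = (x @ y) / (x @ x + lam * sigma**2)

# Grid-search the weight-decay objective and check it agrees.
ws = np.linspace(0.0, 4.0, 40_001)
objective = (
    ((y[None, :] - ws[:, None] * x[None, :]) ** 2).sum(axis=1) / (2 * sigma**2)
    + 0.5 * lam * ws**2
)
assert np.isclose(ws[objective.argmin()], w_map, atol=1e-4)
```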
Nowadays, you just choose a loss function and twiddle with the settings until it works. Granted, this shift away from Bayesian-grounded techniques has given us a lot of flexibility. And it actually works (unlike much of the Bayesian project, which turns out to just be disgustingly intractable).
But when you're a theorist trying to catch up to the empirical results, it turns out the Bayesian frame is rather useful. So we want a principled way of recovering probability distributions from arbitrary choices of loss function. Fortunately, this is possible.
The trick is simple: multiply your loss, $\ell(y, f(x, w))$, by some parameter $\beta$, whose units are such that $\beta\,\ell$ is measured in nats (or bits, if you exponentiate base 2). Now, negate and exponentiate and out pop a set of probabilities:

$$p(y \mid x, w) = \frac{e^{-\beta\, \ell(y,\, f(x, w))}}{Z(x, w)}, \qquad Z(x, w) = \int e^{-\beta\, \ell(y',\, f(x, w))}\, \mathrm{d}y'.$$
Applied to the total loss as a function of $w$, the same move puts a probability distribution over the function space we want to model; applied per-sample, as above, it extracts probabilities over outputs for given inputs.
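Here is the trick itself as a minimal sketch, over a discrete set of candidate outputs. The loss values and the choices of $\beta$ are arbitrary illustrations, and `loss_to_probs` is a hypothetical helper, not any particular library's API.

```python
import numpy as np

# Turn arbitrary per-output losses into probabilities: multiply by an inverse
# temperature beta, negate, exponentiate, and normalize.
def loss_to_probs(losses, beta=1.0):
    logits = -beta * np.asarray(losses, dtype=float)
    logits -= logits.max()         # subtract the max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()     # normalize by the partition function Z

# Made-up losses for each candidate output y at a fixed (x, w).
losses = [0.1, 2.3, 0.7, 5.0]
print(loss_to_probs(losses, beta=1.0))   # low-loss outputs get high probability
print(loss_to_probs(losses, beta=10.0))  # larger beta concentrates on the argmin
```

This is just a Boltzmann distribution with $\beta$ playing its usual inverse-temperature role: as $\beta \to \infty$ the distribution collapses onto the loss minimizer, and as $\beta \to 0$ it flattens toward uniform.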