I want to document my journey in Machine Learning, a place where I stamp interesting frameworks and note why I find them interesting. A reminder to self: great ideas do not come from nowhere. They are derived from fundamental concepts, collaboration, and lots of trial and error.
A generative model encodes a joint distribution $p(X)$ over many variables $X$. Even answering a simple question like “What is the probability that it is raining today and Anabel is going to eat hotpot soup?” requires computing a marginal probability: $p(E=e) = \sum_H p(e,H)$, where $E$ is an observed event and $H$ represents all other hidden variables. For general graphical models, this summation is #P-hard, meaning it is computationally intractable in the worst case.
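To make the cost concrete, here is a minimal Python sketch (my own illustration, not taken from any of the resources below) that computes $p(E=e)$ by brute-force summation over an explicit joint table of binary variables. The variable names and the toy random table are assumptions for illustration; the point is that the sum ranges over every assignment of the hidden variables, which grows exponentially in their number.

```python
import itertools
import numpy as np

# Brute-force marginal inference over a small, explicitly tabulated joint p(X).
# All variables are binary, so the table has 2^n entries -- exactly the blow-up
# that makes naive marginalisation intractable for general models.

rng = np.random.default_rng(0)

n_vars = 4                        # X = (X0, X1, X2, X3), all binary
joint = rng.random((2,) * n_vars)
joint /= joint.sum()              # normalise the table into a valid distribution p(X)

def marginal(joint, evidence):
    """Compute p(E = e) = sum_H p(e, H) by summing out all non-evidence variables.

    `evidence` maps variable index -> observed value, e.g. {0: 1} means X0 = 1.
    """
    hidden = [i for i in range(joint.ndim) if i not in evidence]
    total = 0.0
    for assignment in itertools.product([0, 1], repeat=len(hidden)):
        index = [None] * joint.ndim
        for i, v in evidence.items():
            index[i] = v
        for i, v in zip(hidden, assignment):
            index[i] = v
        total += joint[tuple(index)]
    return total

# p(X0 = 1): sums over 2^3 = 8 hidden assignments; for n variables it is 2^(n-1) terms.
print(marginal(joint, {0: 1}))
```

With four binary variables the sum is trivial, but the same computation over, say, a hundred variables would require on the order of $2^{99}$ terms, which is why tractability has to be designed into the model rather than hoped for at inference time.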
This motivates the design of tractable model classes - models for which inference is guaranteed to be efficient. In my notes below, I attempt to summarise the key ideas of Probabilistic Circuits, a line of work driven by Guy Van den Broeck and collaborators that advances tractable, rather than approximate, probabilistic inference. Here are the resources:
I wanted to investigate the origin story of InfoBAX and give a short introduction to BOED (Bayesian optimal experimental design): how and why the ideas of information-theoretic BO came about. Useful Resources: