I want to document my journey in Machine Learning, noting interesting frameworks and why I find them compelling. A reminder to self: Great ideas do not come from nowhere. They are derived from fundamental concepts, collaboration, and lots of trial and error.
A generative model encodes a joint distribution $p(X)$ over many variables $X$. Even answering a simple question like "What is the probability that it is raining today and Anabel is going to eat hotpot soup?" requires computing a marginal probability: $p(E=e) = \sum_H p(e,H)$, where $E$ is an observed event and $H$ represents all other hidden variables. For general graphical models, this summation is #P-hard, meaning it is computationally intractable in the worst case. This motivates the design of tractable model classes: models for which inference is guaranteed to be efficient.
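To make the cost of that summation concrete, here is a minimal sketch of brute-force marginalization over a toy joint distribution on binary variables. The variable names and the random table are illustrative assumptions, not anything from a specific model; the point is that the sum ranges over every assignment of the hidden variables, so the work grows exponentially with their number.

```python
import itertools
import random

def make_joint(n_vars, seed=0):
    """Build a random normalized joint p(X) over n_vars binary variables,
    stored as an explicit table -- already exponential in n_vars."""
    rng = random.Random(seed)
    table = {bits: rng.random()
             for bits in itertools.product([0, 1], repeat=n_vars)}
    z = sum(table.values())
    return {k: v / z for k, v in table.items()}

def marginal(joint, evidence):
    """Compute p(E=e) = sum_H p(e, H): sum the joint over every
    assignment consistent with the evidence. evidence maps a variable
    index to its observed value; all other variables are hidden."""
    return sum(p for bits, p in joint.items()
               if all(bits[i] == v for i, v in evidence.items()))

joint = make_joint(4)
# e.g. p(X0 = 1, X1 = 1), marginalizing out the two hidden variables
p_e = marginal(joint, {0: 1, 1: 1})
```

A tractable model class avoids this blow-up by restricting the structure of $p(X)$ so that the sum can be computed without enumerating every hidden assignment.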
I wanted to investigate the origin story of InfoBAX and include a short introduction to BOED: how and why the ideas behind information-theoretic BO came about. Useful Resources: