AKA Story

Expectation Maximization Algorithm

Goal

In today’s summary we take a look at the expectation maximization (EM) algorithm, which allows us to optimize latent variable models when analytic inference of the posterior probability of the latent variables is intractable.

Motivation

Latent variable models are interesting in their own right, because they are related to variational autoencoders and encoder-decoder frameworks that are popular in unsupervised and semi-supervised learning. They allow us to sample from the data distribution and are believed to enhance the expressiveness of hierarchical recurrent encoder-decoder models. We can think of them as memorizing higher-level abstract information, such as emotional states, that allows the decoder to generate sentimental utterances.

Ingredients

variational autoencoder, observable variables, latent variables, maximum likelihood, posterior probability, complete data log likelihood

Steps

In general […]
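To make the idea concrete, here is a minimal sketch of EM for the classic textbook case of a two-component 1-D Gaussian mixture, where the latent variable is the (unobserved) component assignment of each point. This example is not from the post itself; the function name and all parameter choices are illustrative assumptions.

```python
# Illustrative sketch: EM for a 2-component 1-D Gaussian mixture.
# The latent variable is which component generated each data point.
import math
import random

def em_gaussian_mixture(data, n_iter=50):
    """Fit means, variances, and mixing weights by EM (hypothetical helper)."""
    # Crude initialization from the data range.
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]

    def pdf(x, m, v):
        # Density of a 1-D Gaussian with mean m and variance v.
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters by maximizing the expected
        # complete-data log likelihood under the responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
            pi[k] = nk / len(data)
    return mu, var, pi

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]
mu, var, pi = em_gaussian_mixture(data)
print(sorted(mu))
```

Each iteration alternates exact posterior inference over the latent assignments (E-step) with a closed-form parameter update (M-step); EM is useful precisely because models like the VAE lack such a tractable E-step and must approximate it instead.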