Sentences with style and topic

Table of Contents
1. “Generating sentences from a continuous space”
1.1. goal
1.2. motivation
1.3. ingredients
1.4. steps
1.5. outlook
1.6. resources

goal
In this week’s post we take a closer look at a paper that models style, topic, and high-level syntactic structure in language models by introducing global distributed latent representations. In particular, the variational autoencoder seems to be a promising candidate for pushing generative language models forward and incorporating global features.

motivation
Recurrent neural network language models are known to be capable of modeling complex distributions over sequences. However, their architecture limits them to modeling local statistics over sequences, so global features have to be captured by other means.

ingredients
[…]
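To make the contrast concrete, the sketch below shows one way a global latent code can be grafted onto an RNN language model: an encoder compresses the sentence into a Gaussian latent variable z, and the decoder is conditioned on a sample of z. This is only a minimal PyTorch illustration; the class name SentenceVAE, all layer sizes, and the choice of GRUs are assumptions made for the example, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SentenceVAE(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, latent_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # The final encoder state parameterizes a Gaussian posterior over the
        # global latent code z that summarizes the whole sentence.
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        emb = self.embed(tokens)                        # (batch, seq, embed_dim)
        _, h = self.encoder(emb)                        # h: (1, batch, hidden_dim)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)   # decoder initial state from z
        dec_out, _ = self.decoder(emb, h0)              # teacher forcing on the input tokens
        return self.out(dec_out), mu, logvar

# Training minimizes reconstruction cross-entropy plus the KL divergence between
# the approximate posterior q(z|x) and the standard normal prior p(z).
model = SentenceVAE()
tokens = torch.randint(0, 1000, (4, 12))                # a toy batch of token ids
logits, mu, logvar = model(tokens)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
```

Conditioning the decoder's initial state on z is one simple way to inject the global information; the paper's exact conditioning scheme may differ.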

Expectation Maximization Algorithm

Goal
In today’s summary we take a look at the expectation maximization algorithm, which lets us optimize latent variable models when analytic inference of the posterior probability of the latent variables is intractable.

Motivation
Latent variable models are interesting in their own right because they are related to variational autoencoders and encoder-decoder frameworks that are popular in unsupervised and semi-supervised learning. They allow sampling from the data distribution and are believed to enhance the expressiveness of hierarchical recurrent encoder-decoder models. We can think of them as memorizing higher-level abstract information, such as emotional states, which allows the decoder to generate sentimental utterances.

Ingredients
variational autoencoder, observable variables, latent variables, maximum likelihood, posterior probability, complete data log likelihood

Steps
In general […]
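As a concrete illustration of the two alternating steps, here is a minimal numpy sketch of EM fitted to a toy two-component Gaussian mixture. The mixture model, the synthetic data, and all variable names are assumptions made for the example; the summary itself treats the general case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observable variables: a 1-D dataset drawn from two Gaussians. The component
# assignment of each point is the latent variable we never observe.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

# Initial guesses for mixture weights, means, and standard deviations.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

def gaussian(v, mean, std):
    return np.exp(-0.5 * ((v - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior probability (responsibility) of each latent component
    # for each data point, computed with the current parameter estimates.
    resp = pi[None, :] * gaussian(x[:, None], mu[None, :], sigma[None, :])
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: maximize the expected complete data log likelihood, which for a
    # Gaussian mixture yields closed-form updates for all parameters.
    n_k = resp.sum(axis=0)
    pi = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n_k)

print("weights:", pi, "means:", mu, "stds:", sigma)
```

The E-step evaluates the posterior over the latent variables under the current parameters, and the M-step re-maximizes the expected complete data log likelihood, which for this toy model has closed-form updates.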