Meet Musio

Word Embedding

Thoughts about character-based word embedding and vocabularies in NLP

Goal

In this summary we compare the two standard embedding methods: single-character embedding and full-word embedding.

Motivation

To teach a computer to understand words and perform natural language tasks, we have to map characters or words into a vector space the computer naturally acts on.

Ingredients

vocabulary, convolutional layers, highway layers, vector space, out-of-vocabulary words, semantics, syntax

Steps

The mapping of characters, words, or even complete sentences into a vector space is usually called embedding. Given some text, there are two distinct methods to compute word embeddings manageable by a computer. Children learning to read start by recognizing individual characters before […]
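To make the contrast concrete, here is a minimal PyTorch sketch; the framework choice, the layer sizes, and the class name CharWordEmbedding are my own assumptions rather than details from the post. The character-based route composes a word vector from its characters with a convolution and a highway layer, while the word-based route is a plain vocabulary lookup that must map every unseen word to a shared <unk> index.

```python
import torch
import torch.nn as nn

class CharWordEmbedding(nn.Module):
    """Builds a word vector from its characters: embed each character,
    convolve over the character sequence, max-pool over time, and pass
    the result through a highway layer."""

    def __init__(self, n_chars=100, char_dim=16, word_dim=64):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size=3, padding=1)
        # Highway layer: a gate blends a nonlinear transform with the identity.
        self.transform = nn.Linear(word_dim, word_dim)
        self.gate = nn.Linear(word_dim, word_dim)

    def forward(self, char_ids):             # (batch, max_word_len)
        x = self.char_embed(char_ids)        # (batch, len, char_dim)
        x = self.conv(x.transpose(1, 2))     # (batch, word_dim, len)
        x = torch.relu(x).max(dim=2).values  # pool over time -> (batch, word_dim)
        g = torch.sigmoid(self.gate(x))
        return g * torch.relu(self.transform(x)) + (1 - g) * x

# Word-level alternative: a fixed vocabulary lookup. Any word outside the
# 10,000-entry vocabulary is out of vocabulary; the character model above
# can still build a vector for it from its spelling.
word_embed = nn.Embedding(num_embeddings=10_000, embedding_dim=64)
```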

Sequence-to-Sequence

In this summary I would like to provide a rough overview of sequence-to-sequence neural network architectures and the purposes they serve.

Motivation

A key observation when dealing with neural networks is that they can only handle objects of a fixed size. This means the architecture has to be adapted if sequences such as sentences are to be processable. The same problem with objects of variable length also appears at the dialog level, where a varying number of utterances and responses are strung together. Besides dialog modeling, speech recognition and machine translation also demand advanced neural network architectures.

Ingredients

deep neural network, hidden layer, recurrent neural network, encoder, decoder, LSTM, back-propagation, word embedding, sentence embedding

Steps

As already stated, standard neural networks cannot deal with […]
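As a concrete illustration of the encoder-decoder idea, here is a minimal PyTorch sketch; the framework, the dimensions, and the teacher-forcing setup are my own assumptions, not details from the post. The encoder's final LSTM state serves as a fixed-size summary (a sentence embedding) of a variable-length input, which is exactly how sequence-to-sequence models work around the fixed-input limitation.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """LSTM encoder-decoder: the encoder compresses the source sentence
    into its final hidden state, and the decoder unrolls the target
    sentence from that fixed-size state."""

    def __init__(self, vocab_size=10_000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode: keep only the final (hidden, cell) state -- the
        # fixed-size sentence embedding handed to the decoder.
        _, state = self.encoder(self.embed(src_ids))
        # Decode with teacher forcing: predict each next target token.
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)

model = Seq2Seq()
src = torch.randint(0, 10_000, (2, 7))  # two source sentences of length 7
tgt = torch.randint(0, 10_000, (2, 5))  # two target prefixes of length 5
logits = model(src, tgt)                # (2, 5, vocab_size)
```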