Autoencoder

Goal

Autoencoders have long been proposed to tackle the problem of unsupervised learning. In this week’s summary we take a look at their capability of providing features that can be used successfully in supervised tasks, and we sketch their general architecture.

Motivation

In supervised learning, back in the day, deeper architectures needed some kind of pretraining of their layers before the actual supervised task could be pursued. Autoencoders came in handy here: they allowed training one layer after the other and were able to find useful features for the supervised learning.

Ingredients

unsupervised learning, features, representation, encoder, decoder, denoising

Steps

Let us start by looking at the general architecture. An autoencoder consists of two basic parts: the encoder and […]
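
To make the encoder/decoder split concrete, here is a minimal sketch, assuming PyTorch, a single dense hidden layer and a plain reconstruction (MSE) loss; the layer sizes, activations and variable names are illustrative and not taken from the summary:

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    """Minimal autoencoder: the encoder compresses the input into a small code,
    the decoder tries to reconstruct the input from that code."""

    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: map the input to a low-dimensional representation
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, code_dim),
            nn.ReLU(),
        )
        # Decoder: reconstruct the input from the code
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, input_dim),
            nn.Sigmoid(),  # assumes inputs scaled to [0, 1]
        )

    def forward(self, x):
        code = self.encoder(x)               # the learned feature representation
        reconstruction = self.decoder(code)
        return reconstruction, code


# One training step: minimise the reconstruction error
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)                      # dummy batch of inputs
reconstruction, code = model(x)
loss = loss_fn(reconstruction, x)            # target is the input itself
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The code produced by the encoder is the feature representation that can later be fed into a supervised model.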