AKA Story

Backpropagation through time

Goal

Today’s summary will give insight into the machinery behind optimization, namely the backpropagation algorithm, in any kind of neural network, whether it is a standard feed-forward, convolutional, or recurrent one.

Motivation

In order to adjust the weights of the layers in a neural network so that the model shows learning behavior, we have to determine how the individual weights influence the final output.

Ingredients

Chain rule, differentiation, gradient

Steps

We start by providing the objects that we have to handle for the above-defined task of adjusting the weights in a meaningful way. In general, we are interested in the behavior of some function depending on the output of the last layer. Instead of just looking directly […]
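The chain-rule machinery described above can be sketched on a minimal example. The following is an illustrative assumption, not code from the original: a two-layer, single-unit network with identity activations and a squared loss, where the chain rule propagates the loss gradient back to each weight, checked against a finite-difference approximation.

```python
def forward(x, w1, w2):
    """Tiny two-layer network: h = w1 * x, y = w2 * h (identity activations)."""
    h = w1 * x
    y = w2 * h
    return h, y

def loss(y, target):
    """Squared-error loss."""
    return 0.5 * (y - target) ** 2

def backward(x, w1, w2, target):
    """Apply the chain rule layer by layer, from the loss back to the weights."""
    h, y = forward(x, w1, w2)
    dL_dy = y - target        # dL/dy for the squared loss
    dL_dw2 = dL_dy * h        # chain rule: dL/dw2 = dL/dy * dy/dw2
    dL_dh = dL_dy * w2        # propagate the gradient one layer back
    dL_dw1 = dL_dh * x        # chain rule: dL/dw1 = dL/dh * dh/dw1
    return dL_dw1, dL_dw2

# Sanity check: compare the analytic gradient with a numerical one.
x, w1, w2, t = 2.0, 0.5, -1.5, 1.0
g1, g2 = backward(x, w1, w2, t)
eps = 1e-6
num_g1 = (loss(forward(x, w1 + eps, w2)[1], t)
          - loss(forward(x, w1 - eps, w2)[1], t)) / (2 * eps)
print(abs(g1 - num_g1) < 1e-6)  # analytic and numeric gradients agree
```

The same pattern, multiplying local derivatives backwards through the computation, is what backpropagation applies at scale, with matrices in place of scalars.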