- The input to the RNN at every time-step is the current value as well as a state vector that represents what the network has “seen” at earlier time-steps.
- The weights and biases of the network are declared as TensorFlow variables, which makes them persistent across runs and enables them to be updated incrementally for each batch.
- Now it’s time to build the part of the graph that resembles the actual RNN computation. First, we want to split the batch data into adjacent time-steps.
- This is the final part of the graph: a fully connected softmax layer that maps the state to the output, producing a probability distribution over the one-hot encoded classes, followed by calculating the loss of the batch.
- It will plot the loss over time and show the training input, the training output, and the network’s current predictions for different sample series in a training batch.
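The steps above — carrying a state vector across time-steps, splitting the batch into adjacent time-steps, and feeding the state through a fully connected softmax layer to compute the loss — can be sketched in plain NumPy. This is an illustrative, framework-agnostic version of the computation, not the article's TensorFlow code; all names and sizes (`state_size`, `num_classes`, and so on) are assumptions for the example:

```python
import numpy as np

# Illustrative sketch of the RNN computation described above, in plain NumPy.
# In the TensorFlow version, W, b, W2, b2 would be persistent tf.Variables.
rng = np.random.default_rng(0)

batch_size, truncated_len = 4, 5   # series per batch, time-steps per batch
state_size, num_classes = 8, 2     # hidden-state width, output classes

W = rng.standard_normal((state_size + 1, state_size)) * 0.1  # [input, state] -> state
b = np.zeros(state_size)
W2 = rng.standard_normal((state_size, num_classes)) * 0.1    # state -> class logits
b2 = np.zeros(num_classes)

inputs = rng.integers(0, 2, size=(batch_size, truncated_len)).astype(float)
labels = rng.integers(0, num_classes, size=(batch_size, truncated_len))

state = np.zeros((batch_size, state_size))  # what the network has "seen" so far
total_loss = 0.0

# Split the batch into adjacent time-steps and unroll the recurrence.
for t in range(truncated_len):
    x_t = inputs[:, t:t + 1]                       # current input column
    concat = np.concatenate([x_t, state], axis=1)  # current value + previous state
    state = np.tanh(concat @ W + b)                # new state

    logits = state @ W2 + b2                       # fully connected layer
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)      # softmax over classes

    # Cross-entropy against the (one-hot) labels at this time-step.
    total_loss += -np.log(probs[np.arange(batch_size), labels[:, t]]).mean()

loss = total_loss / truncated_len
print(float(loss))
```

In TensorFlow the per-time-step loop is expressed as graph operations rather than a Python `for` loop, but the data flow — previous state in, new state and class probabilities out, loss accumulated over the time-steps of the batch — is the same.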
This is a no-nonsense overview of implementing a recurrent neural network (RNN) in TensorFlow. Both theory and practice are covered concisely, and the end result is running TensorFlow RNN code.
Continue reading “How to Build a Recurrent Neural Network in TensorFlow”