- Abstract: Very deep convolutional neural networks introduced new problems such as the vanishing gradient and degradation.
- Recent successful contributions towards solving these problems are Residual and Highway Networks.
- These networks introduce skip connections that allow information (from the input, or features learned in earlier layers) to flow into the deeper layers.
- In the paper, we propose using the exponential linear unit (ELU) instead of the combination of ReLU and Batch Normalization in Residual Networks.
- We show that this not only speeds up learning in Residual Networks but also improves accuracy as depth increases.
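The two ideas above, the ELU activation and the skip connection, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's architecture: the fully-connected shapes, the `alpha=1.0` default, and the placement of the activations are assumptions for the sketch.

```python
import numpy as np

def elu(x, alpha=1.0):
    # ELU: identity for positive inputs, smoothly saturates
    # toward -alpha for large negative inputs.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def residual_block(x, w1, w2):
    # Simplified fully-connected residual block (hypothetical shapes):
    # the skip connection adds the input back to the transformed signal,
    # letting information flow directly into deeper layers.
    out = elu(x @ w1)
    out = out @ w2
    return elu(out + x)  # skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
```

Because the identity path bypasses the weighted transformation, gradients can propagate to earlier layers without vanishing, which is what makes very deep stacks of such blocks trainable.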
[1604.04112v2] Deep Residual Networks with Exponential Linear Unit