Deep learning is a fast-changing field at the intersection of computer science and mathematics. It is a relatively new branch of a wider field called machine learning.
There are many software frameworks that provide the necessary functions, classes and modules for machine learning, and for deep learning in particular. We suggest that you avoid these frameworks at the early stages of studying and instead implement the basic algorithms from scratch. Most of the courses describe the mathematics behind the algorithms in enough detail that they can be implemented without much difficulty.
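As a minimal sketch of what "from scratch" means here, the following trains logistic regression (arguably the simplest precursor to a neural network) with plain gradient descent, using only NumPy. The toy dataset and hyperparameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate (chosen arbitrarily for this sketch)

for _ in range(200):
    p = sigmoid(X @ w + b)           # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)  # gradient of the average log loss w.r.t. w
    grad_b = np.mean(p - y)          # gradient w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

Writing this loop by hand makes the role of the loss gradient explicit; frameworks automate exactly this step (via automatic differentiation), which is why it is worth doing manually at least once.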
Probabilistic graphical models (“PGMs”) form a separate subfield at the intersection of statistics and machine learning. There are many books and courses on PGMs in general; here we focus on how these models are applied in the context of deep learning. Hugo Larochelle’s course describes a few famous models, while the book Deep Learning devotes four chapters (16–19) to the theory and describes more than a dozen models in its final chapter. These topics require a substantial amount of mathematics.
Deep learning is a very active area of scientific research. To follow the state of the art, one has to read new papers and follow the important conferences. Usually a new idea is first announced in a preprint on arxiv.org. Some of these preprints are then submitted to conferences and peer reviewed; the best are presented at conferences and published in journals. If the authors do not release code for their models, many people attempt to implement them and publish their implementations on GitHub. It typically takes a year or two before high-quality blog posts, tutorials and videos appear on the web that properly explain the ideas and implementations.