Practical Deep Learning For Coders—18 hours of lessons for free

Practical #DeepLearning For Coders—18 hours of lessons for free

  • After this course, I cannot ignore the new developments in deep learning—I will devote one third of my machine learning course to the subject.
  • I’m a CEO, not a coder, so the idea of building a GPU deep learning server in the cloud meant learning a lot of new things, but with all the help on the wiki and from the instructors and community on the forum, I did it!
  • Sometimes I doubted whether I would ever be able to solve deep learning problems, because every research paper I read was so heavy on math that it seemed beyond the reach of simple, intuitive terms.
  • But Jeremy and Rachel (the course instructors) believe that simple is powerful: anyone who takes this course can come to confidently understand the simple techniques behind the ‘magic’ of deep learning.
  • The course exceeded my expectations and showed me first hand how deep learning, and those of us who practice it, can change the world for the better.

fast.ai’s practical deep learning MOOC for coders. Learn about neural networks, CNNs, RNNs, computer vision, NLP, recommendation systems, Keras, Theano, and much more!

Continue reading “Practical Deep Learning For Coders—18 hours of lessons for free”

Deep Learning Research Review: Natural Language Processing

#ICYMI #DeepLearning Research Review: Natural Language Processing  #NLP

  • Since deep learning loves math, we’re going to represent each word as a d-dimensional vector.
  • Extracting the rows from this matrix can give us a simple initialization of our word vectors.
  • The cost function (reconstructed below, after this list) is basically saying that we’re going to add the log probabilities of ‘I’ and ‘love’ as well as ‘NLP’ and ‘love’ (where ‘love’ is the center word in both cases).
  • One Sentence Summary: Word2Vec seeks to find vector representations of different words by maximizing the log probability of context words given a center word and modifying the vectors through SGD.
  • Bonus: Another cool word vector initialization method: GloVe (combines the ideas of co-occurrence matrices with Word2Vec)
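
The cost function those bullets refer to is not shown in this excerpt, so here is the standard skip-gram objective from the word2vec papers, reconstructed as a sketch; the notation (corpus length T, window size m, “center” vectors v and “outside” vectors u) is the usual one from the literature, not taken from the post itself.

```latex
% Skip-gram objective: maximize the average log probability of the
% context words within a window of size m around each center word.
\[
  J(\theta) = \frac{1}{T} \sum_{t=1}^{T}
    \sum_{\substack{-m \le j \le m \\ j \neq 0}}
    \log p\!\left(w_{t+j} \mid w_t\right)
\]
% The probability of an outside word o given a center word c is a
% softmax over the vocabulary V of inner products of word vectors:
\[
  p(o \mid c) =
    \frac{\exp\!\left(u_o^{\top} v_c\right)}
         {\sum_{w=1}^{V} \exp\!\left(u_w^{\top} v_c\right)}
\]
```

For the three-word window “I love NLP” with ‘love’ as the center word, the inner sum is just log p(‘I’ | ‘love’) + log p(‘NLP’ | ‘love’), exactly the two terms the bullet describes; SGD then adjusts the u and v vectors to make those probabilities larger.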


This edition of Deep Learning Research Review explains recent research papers in Natural Language Processing (NLP). If you don’t have the time to read the top papers yourself, or need an overview of NLP with Deep Learning, this post is for you.

Continue reading “Deep Learning Research Review: Natural Language Processing”

The challenges of word embeddings

@KirkDBorne: #DeepLearning techniques for #NLProc tasks: #abdsc #BigData #DataScience #MachineLearning

  • For those of you who aren’t familiar with them, word embeddings are essentially dense vector representations of words.
  • Word embeddings can be trained and used to derive similarities and relations between words.
  • [Figure in the original article: relations between words according to word embeddings]
  • Word2vec represents every word as an independent vector, even though many words are morphologically similar (for example, different inflections of the same stem).
  • If your model hasn’t encountered a word before, it will have no idea how to interpret it or how to build a vector for it; the sketch after this list makes this concrete.
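
Since the bullets stay abstract, here is a minimal, self-contained sketch of what “dense vector representations” and the out-of-vocabulary problem look like in practice; the toy vocabulary, random vectors, and helper names are all illustrative, not code from the article.

```python
import numpy as np

# Toy embedding matrix: one d-dimensional row per vocabulary word.
# Real embeddings are learned (e.g. by word2vec); random values are
# used here only to keep the sketch self-contained.
d = 4
vocab = {"i": 0, "love": 1, "nlp": 2, "walk": 3, "walking": 4}
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), d))

def vector(word):
    # Extracting a row from the matrix gives the word's vector.
    # An unseen word has no row, so the lookup simply fails: this is
    # the out-of-vocabulary problem from the last bullet above.
    return embeddings[vocab[word]]

def cosine(a, b):
    # Similarity between two words = cosine of the angle between
    # their vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vector("walk"), vector("walking")))

try:
    vector("jogging")  # never seen during "training"
except KeyError:
    print("no vector for 'jogging': out of vocabulary")
```

Real systems replace the random matrix with trained vectors and handle the failed lookup with a fallback, such as a shared unknown-word vector or a subword model like fastText that composes a vector from character n-grams.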

Deep learning techniques have become more and more prevalent in NLP tasks in recent years; just take a look at the list of accepted papers at this year’s NAAC…


Continue reading “The challenges of word embeddings”