What is the Role of the Activation Function in a Neural Network?

#ICYMI What is the Role of the Activation Function in #NeuralNetworks?  #DeepLearning

  • Let’s consider logistic regression.
  • In linear regression, we compute a linear combination of weights and inputs (let’s call this function the “net input function”): z = w_0·x_0 + w_1·x_1 + … + w_m·x_m = wᵀx.
  • We put the net input z through a non-linear “activation function”: the logistic sigmoid function, φ(z) = 1 / (1 + e^(−z)).
  • To sum it up, the logistic regression classifier has a non-linear activation function, but the net input of this model is essentially a linear combination of the weights and inputs, which is why logistic regression is a “generalized” linear model.
  • We have only 3 units in the input layer (x_0 = 1 for the bias unit, and x_1 and x_2 for the two features); there are 200 of these sigmoid activation functions (a_m) in the hidden layer and 1 sigmoid function in the output layer, which is then squashed through a unit step function (not shown) to produce the predicted class label ŷ (a minimal forward-pass sketch of this architecture follows below).
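
To see these pieces fit together, here is a minimal NumPy sketch of the forward pass described above. It is an illustration under stated assumptions, not code from the article: only the layer sizes (3 input units, 200 hidden sigmoid units, 1 output unit) come from the excerpt, while the weight values and the input features are random stand-ins.

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid activation: squashes the net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes from the excerpt: 3 input units (bias + 2 features),
# 200 sigmoid units in the hidden layer, 1 sigmoid unit in the output layer.
# The weight matrices below are random stand-ins, not trained values.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(200, 3))   # hidden-layer weights, incl. bias column
W_out = rng.normal(size=(1, 201))      # output-layer weights, incl. bias unit

x = np.array([1.0, 0.5, -1.2])         # x_0 = 1 (bias), x_1 and x_2 (features)

a = sigmoid(W_hidden @ x)              # 200 hidden activations a_m
a = np.concatenate(([1.0], a))         # prepend the hidden layer's bias unit
z_out = (W_out @ a)[0]                 # net input to the output unit
y_hat = int(sigmoid(z_out) >= 0.5)     # unit step on the sigmoid output
print(y_hat)                           # predicted class label: 0 or 1
```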


Confused as to exactly what the activation function in a neural network does? Read this overview, and check out the handy cheat sheet at the end.

Continue reading “What is the Role of the Activation Function in a Neural Network?”

Up to Speed on Deep Learning: July Update, Part 2

Up to Speed on #DeepLearning: July Update, Part 2

  • The series introduces machine learning in four detailed segments, spanning from an introduction to machine learning to an in-depth convolutional neural network implementation for face recognition.
  • Part 4 of Adam’s series Machine Learning is Fun.
  • The three prior parts are part 1, part 2, and part 3.
  • Isaac’s background is in machine learning & artificial intelligence; he was previously an entrepreneur and data scientist.
  • Learn about artificial neural networks and how they’re being used for machine learning, as applied to speech and object recognition, image segmentation, modeling language and human motion, etc.


Check out this second installment of deep learning stories that made news in July, and see if there are any items of note you missed.

Continue reading “Up to Speed on Deep Learning: July Update, Part 2”

Common Sense in Artificial Intelligence… by 2026?

Common Sense in #ArtificialIntelligence… by 2026?  by @lemire

  • To replace human beings at most jobs, machines need to exhibit what we intuitively call “common sense”.
  • Common sense is basic knowledge about how the world of human beings works.
  • Many human beings are illiterate, for example, yet they can be said to have common sense.
  • If you are lying on the floor yelling “I’m hurt”, common sense dictates that we call emergency services… though it is possible that Apple’s Siri can already do this.
  • If computers could be granted a generous measure of common sense, many believe that they could make better employees than human beings.

To read the full article, click here.


An insightful opinion piece on the future of common sense in AI, from an authority in the field. A recommended read.


Regularization in Logistic Regression: Better Fit and Better Generalization?

Regularization in Logistic Regression: Better Fit & Generalization?  #MachineLearning @rasbt

  • We don’t want the model to memorize the training dataset; we want a model that generalizes well to new, unseen data.
  • In more specific terms, we can think of regularization as adding (or increasing the) bias if our model suffers from (high) variance (i.e., it overfits the training data).
  • If we regularize the cost function (e.g., via L2 regularization), we add an additional term to the cost function (J) that increases as the values of the parameter weights (w) increase; keep in mind that with regularization we also add a new hyperparameter, lambda, to control the regularization strength (a minimal sketch of this follows below).
  • Our new problem is to minimize the cost function given this added constraint.
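
As a concrete illustration of that added term, here is a minimal NumPy sketch of an L2-regularized logistic regression cost. The function name, the lam/2 scaling convention, and the choice to leave the bias weight w_0 unpenalized are illustrative assumptions, not details taken from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l2_regularized_cost(w, X, y, lam):
    """Cross-entropy cost J(w) plus an L2 penalty term.

    The penalty (lam / 2) * sum(w_j^2) increases as the weight values
    increase; the hyperparameter lam (lambda) controls its strength.
    """
    p = sigmoid(X @ w)                               # predicted probabilities
    cross_entropy = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    penalty = (lam / 2.0) * np.sum(w[1:] ** 2)       # w_0 (bias) left unpenalized
    return cross_entropy + penalty

# Toy usage: 5 samples, a bias column plus 2 features; labels are 0/1.
X = np.array([[1, 0.2, 1.1], [1, -0.4, 0.3], [1, 1.5, -0.8],
              [1, -1.2, 0.9], [1, 0.7, 0.4]])
y = np.array([1, 0, 1, 0, 1])
w = np.zeros(3)
print(l2_regularized_cost(w, X, y, lam=0.1))         # cost at w = 0 (≈ log 2)
```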

To read the full article, click here.


A discussion on regularization in logistic regression, and how its usage plays into better model fit and generalization.

