The Gentlest Introduction to Tensorflow – Part 2

The Gentlest Introduction to #Tensorflow Part 2  #DeepLearning #MachineLearning @reculture_us

  • Calculate prediction (y) & cost using a single datapoint
  • Using a variety of datapoints generalizes our model, i.e., it learns W, b values that can be used to predict any feature value.
  • For simplicity, we use the mean squared error (MSE) as our cost function.
  • Create a TF Graph with model & cost, and initialize W, b with some values
  • We select a datapoint (x, y), and feed it into the TF Graph to get the prediction (y) as well as the cost (a minimal code sketch follows this list).
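
To make these steps concrete, here is a minimal sketch of such a graph using the TensorFlow 1.x graph API; the placeholder shapes, the zero initial values for W and b, and the example house size/price fed in are illustrative assumptions, not the article's exact code.

    import tensorflow as tf  # TensorFlow 1.x graph API

    # Placeholders for a single (feature, outcome) datapoint: house size and price
    x = tf.placeholder(tf.float32, [1, 1])
    y_ = tf.placeholder(tf.float32, [1, 1])

    # Model parameters W, b initialized with some starting values
    W = tf.Variable(tf.zeros([1, 1]))
    b = tf.Variable(tf.zeros([1]))

    # Linear model (prediction) and squared-error cost
    y = tf.matmul(x, W) + b
    cost = tf.reduce_mean(tf.square(y_ - y))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Feed one datapoint into the graph to get the prediction and the cost
        pred, c = sess.run([y, cost], feed_dict={x: [[75.0]], y_: [[180000.0]]})
        print(pred, c)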

Read the full article, click here.


@kdnuggets: “The Gentlest Introduction to #Tensorflow Part 2 #DeepLearning #MachineLearning @reculture_us”


 
In the previous article, we used Tensorflow (TF) to build and learn a linear regression model with a single feature so that given a feature value (house size/sqm), we can predict the outcome (house price/$).


The Gentlest Introduction to Tensorflow – Part 2

The Gentlest Introduction to Tensorflow – Part 1

The Gentlest Introduction to Tensorflow Part 1  #MachineLearning #DeepLearning @reculture_us

  • In the spirit of keeping things simple, we will model our data points using a linear model.
  • With the concepts of linear model, cost function, and gradient descent in hand, we are ready to use TF.
  • To compare which model is a better-fit more rigorously, we define best-fit mathematically as a cost function that we need to minimize.
  • Minimizing the cost function is similar: the cost function undulates like the mountains (chart below), and we are trying to find the minimum point, which we can likewise reach through gradient descent (see the sketch after this list).
  • We cannot predict outcomes for feature values that we don’t have data points for (chart below).
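
As a rough illustration of the linear model / cost function / gradient descent loop the summary describes, here is a minimal NumPy sketch; the house-size data, learning rate, and iteration count are made-up assumptions for illustration only.

    import numpy as np

    # Toy data: house size (sqm) vs. price ($); purely illustrative values
    x = np.array([50.0, 70.0, 90.0, 110.0])
    y = np.array([150e3, 200e3, 250e3, 300e3])

    W, b = 0.0, 0.0            # initial guesses for the model y = W*x + b
    learning_rate = 5e-5       # kept small so updates stay stable for unscaled sizes

    for _ in range(2000):
        y_pred = W * x + b                  # linear model
        error = y_pred - y
        cost = np.mean(error ** 2)          # mean squared error cost
        dW = 2 * np.mean(error * x)         # gradient of the cost w.r.t. W
        db = 2 * np.mean(error)             # gradient of the cost w.r.t. b
        W -= learning_rate * dW             # step downhill along the cost surface
        b -= learning_rate * db

    print(W, b, cost)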

Read the full article, click here.


@kdnuggets: “The Gentlest Introduction to Tensorflow Part 1 #MachineLearning #DeepLearning @reculture_us”


In this series of articles, we present the gentlest introduction to Tensorflow that starts off by showing how to do linear regression for a single feature problem, and expand from there.


The Gentlest Introduction to Tensorflow – Part 1

Towards an integration of deep learning and neuroscience

Integration of Deep Learning and Neuroscience  by @AdamMarblestone @KordingLab & @DeepMindAI

  • We suggest directions by which neuroscience could seek to refine and test these hypotheses.
  • Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives.
  • We hypothesize that (1) the brain optimizes cost functions, (2) these cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior.
  • Cost functions and training procedures have become more complex and are varied across layers and over time.
  • In machine learning, artificial neural networks tend to eschew precisely designed codes, dynamics, or circuits in favor of brute-force optimization of a cost function, often using simple and relatively uniform initial architectures.

Read the full article, click here.


@neuroraf: “Integration of Deep Learning and Neuroscience by @AdamMarblestone @KordingLab & @DeepMindAI”


bioRxiv – the preprint server for biology, operated by Cold Spring Harbor Laboratory, a research and educational institution


Towards an integration of deep learning and neuroscience

A Concise Overview of Standard Model-fitting Methods

  • A very concise overview of 4 standard model-fitting methods, focusing on their differences: closed-form equations, gradient descent, stochastic gradient descent, and mini-batch learning.
  • Using an optimization algorithm (Gradient Descent, Stochastic Gradient Descent, Newton’s Method, Simplex Method, etc.)
  • Using the Gradient Descent (GD) optimization algorithm, the weights are updated incrementally after each epoch (= pass over the training dataset); a brief sketch contrasting this with the closed-form solution follows this list.
  • In Ordinary Least Squares (OLS) Linear Regression, our goal is to find the line (or hyperplane) that minimizes the vertical offsets.
  • We can picture GD optimization as a hiker (the weight coefficient) who wants to climb down a mountain (cost function) into a valley (cost minimum), and each step is determined by the steepness of the slope (gradient) and the leg length of the hiker (learning rate).
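
To make the contrast concrete, here is a hedged sketch of the closed-form (normal equation) OLS solution next to full-batch gradient descent, where one epoch is one pass over the training set; the toy data, learning rate, and epoch count are assumptions for illustration.

    import numpy as np

    # Toy design matrix with a bias column, and targets (illustrative values only)
    X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
    y = np.array([2.0, 4.1, 5.9, 8.2])

    # Closed-form OLS: w = (X^T X)^(-1) X^T y, solved without an explicit inverse
    w_closed = np.linalg.solve(X.T @ X, X.T @ y)

    # Gradient Descent: one weight update per epoch, computed over the full dataset
    w = np.zeros(2)
    learning_rate = 0.05
    for _ in range(500):                       # 500 epochs
        grad = 2 / len(y) * X.T @ (X @ w - y)  # gradient of the MSE cost
        w -= learning_rate * grad

    print(w_closed)  # exact least-squares solution
    print(w)         # GD estimate, approaching the same solution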

Read the full article, click here.


@dataandme: “"Overview of Standard Model-fitting Methods" via @kdnuggets #kdn #machinelearning”


A very concise overview of 4 standard model-fitting methods, focusing on their differences: closed-form equations, gradient descent, stochastic gradient descent, and mini-batch learning.


A Concise Overview of Standard Model-fitting Methods

A Concise Overview of Standard Model-fitting Methods

A Concise Overview of Standard Model-fitting Methods  #MachineLearning #DeepLearning @rasbt

  • A very concise overview of 4 standard model-fitting methods, focusing on their differences: closed-form equations, gradient descent, stochastic gradient descent, and mini-batch learning.
  • Using an optimization algorithm (Gradient Descent, Stochastic Gradient Descent, Newton’s Method, Simplex Method, etc.)
  • Using the Gradient Descent (GD) optimization algorithm, the weights are updated incrementally after each epoch (= pass over the training dataset); a mini-batch variant of this update is sketched after this list.
  • In Ordinary Least Squares (OLS) Linear Regression, our goal is to find the line (or hyperplane) that minimizes the vertical offsets.
  • We can picture GD optimization as a hiker (the weight coefficient) who wants to climb down a mountain (cost function) into a valley (cost minimum), and each step is determined by the steepness of the slope (gradient) and the leg length of the hiker (learning rate).
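
Since this summary distinguishes plain gradient descent from its stochastic and mini-batch variants, here is a minimal sketch of the mini-batch update loop; the synthetic data, batch size, and learning rate are assumptions, not taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: y = 3x + 1 plus noise (illustrative only)
    X = rng.uniform(0, 1, size=(200, 1))
    y = 3 * X[:, 0] + 1 + rng.normal(0, 0.1, size=200)

    w, b = 0.0, 0.0
    learning_rate = 0.1
    batch_size = 20                  # mini-batch SGD; batch_size=1 would be plain SGD

    for epoch in range(50):          # one epoch = one pass over the training set
        order = rng.permutation(len(y))
        for start in range(0, len(y), batch_size):
            idx = order[start:start + batch_size]
            xb, yb = X[idx, 0], y[idx]
            error = (w * xb + b) - yb
            # Update the weights incrementally from each mini-batch's gradient
            w -= learning_rate * 2 * np.mean(error * xb)
            b -= learning_rate * 2 * np.mean(error)

    print(w, b)  # should approach the true slope 3 and intercept 1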

Read the full article, click here.


@mattmayo13: “A Concise Overview of Standard Model-fitting Methods #MachineLearning #DeepLearning @rasbt”


A very concise overview of 4 standard model-fitting methods, focusing on their differences: closed-form equations, gradient descent, stochastic gradient descent, and mini-batch learning.


A Concise Overview of Standard Model-fitting Methods
