The Gentlest Introduction to Tensorflow – Part 3


  • Goal: In reality, any prediction relies on multiple features, so we advance from single-feature to 2-feature linear regression; we chose 2 features to keep visualization and comprehension simple, but the concept generalizes to any number of features. To do that, we:

  • In the single-feature scenario, we used linear regression to fit a straight line that lets us predict the outcome (house price) even for house sizes where we have no datapoints.
  • Recall that for a single feature, the linear regression model outcome (y) is produced from a weight (W), a placeholder (x) for the ‘house size’ feature, and a bias (b): y = W.x + b.
  • In TF, this multiplication would be written as sketched below.
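
    A minimal sketch of that single-feature model in TF 1.x-style code (the API of the era this series was written in); the placeholder shapes and zero initializations are illustrative assumptions:

      import tensorflow as tf  # TF 1.x-style API

      # x: placeholder for the 'house size' feature, one value per datapoint
      x = tf.placeholder(tf.float32, [None, 1])
      # W, b: trainable weight and bias
      W = tf.Variable(tf.zeros([1, 1]))
      b = tf.Variable(tf.zeros([1]))

      # y = W.x + b, with the multiplication done by tf.matmul
      y = tf.matmul(x, W) + b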

    Note: The x representations in the feature matrix become more complex, i.e., we use x1.1, x1.2, etc. instead of x1, x2, etc. This is because the feature matrix (the one in the middle) has expanded from representing a single datapoint of n features (1 row x n columns) to representing m datapoints with n features (m rows x n columns), so each x now carries a datapoint index as well as a feature index, e.g., x1 (feature 1) becomes x1.1 (feature 1 of datapoint 1).
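
    A quick sketch (using NumPy, with made-up numbers) of how the feature matrix grows from 1 x n to m x n:

      import numpy as np

      # Single datapoint, n = 2 features: a 1 x 2 matrix (values are hypothetical)
      x_single = np.array([[1050.0, 3.0]])

      # m = 3 datapoints, n = 2 features: a 3 x 2 matrix;
      # row i holds datapoint i, and the entry in row i, column j is xi.j
      x_batch = np.array([
          [1050.0, 3.0],   # x1.1, x1.2
          [ 800.0, 2.0],   # x2.1, x2.2
          [1550.0, 4.0],   # x3.1, x3.2
      ])

      print(x_single.shape)  # (1, 2)
      print(x_batch.shape)   # (3, 2)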

  • With a second feature, each feature gets its own weight; in TF, the two per-feature multiplications would be written as sketched below.
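
    A sketch of that 2-feature model, where the second feature gets its own placeholder (x2) and weight (W2); the second feature’s name is not given in this excerpt, so the identifiers are assumptions:

      import tensorflow as tf

      x  = tf.placeholder(tf.float32, [None, 1])   # feature 1: 'house size'
      x2 = tf.placeholder(tf.float32, [None, 1])   # feature 2 (assumed)
      W  = tf.Variable(tf.zeros([1, 1]))
      W2 = tf.Variable(tf.zeros([1, 1]))
      b  = tf.Variable(tf.zeros([1]))

      # y = W.x + W2.x2 + b: one multiplication per feature
      y = tf.matmul(x, W) + tf.matmul(x2, W2) + b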

    In TF, with our x and W represented as matrices, the model can be simplified, regardless of the number of features it has or the number of datapoints we want to handle, to the form sketched below.
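
    A sketch of that simplified matrix form, for any number of datapoints (m) and features (n); n = 2 here to match the 2-feature example:

      import tensorflow as tf

      n_features = 2

      # x: m datapoints x n features; W: n x 1, one weight per feature
      x = tf.placeholder(tf.float32, [None, n_features])
      W = tf.Variable(tf.zeros([n_features, 1]))
      b = tf.Variable(tf.zeros([1]))

      # A single matmul now covers every feature of every datapoint
      y = tf.matmul(x, W) + b

    Note that this line of code is identical to the single-feature version; only the matrix shapes changed.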

    A side-by-side comparison summarizes the change from single- to multi-feature linear regression: the model goes from y = W.x + b to the matrix form y = x.W + b (x: m x n, W: n x 1), while the TF code, y = tf.matmul(x, W) + b, stays exactly the same.

    We illustrated the concept of multi-feature linear regression, and showed how to extend the model and TF code from single-feature to 2-feature linear regression, which generalizes to n-feature models.


This post is the third entry in a series dedicated to introducing newcomers to TensorFlow in the gentlest possible manner. This entry progresses to multi-feature linear regression.

Continue reading “The Gentlest Introduction to Tensorflow – Part 3”
