Visualizing Cross-validation Code

  • Let's say you are writing nice, clean machine-learning code (e.g. Linear Regression).
  • As the name suggests, cross-validation is the natural next step after learning Linear Regression, because it helps improve your predictions using the K-Fold strategy.
  • First, we divide the dataset into K equal parts (the K folds, or cv).
  • Then we train the model on the larger portion and test it on the smaller one, rotating through the folds.
  • This graph represents K-Fold cross-validation for the Boston dataset with a Linear Regression model.
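The procedure in the bullets above can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the post's actual code; note that the Boston dataset has been removed from recent scikit-learn releases, so the diabetes regression dataset is substituted here as an assumption.

```python
# K-Fold cross-validation with Linear Regression (sketch).
# The post uses the Boston dataset; it was removed from scikit-learn,
# so the diabetes dataset is substituted here for illustration.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_diabetes(return_X_y=True)
model = LinearRegression()

# Split the data into K equal parts; each fold serves once as the test set
# while the remaining K-1 folds are used for training.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=kfold, scoring="r2")

print("R^2 per fold:", scores)
print("Mean R^2:", scores.mean())
```

Each of the five scores comes from testing on a different held-out fold, which is what the "train on the bigger part, test on the smaller part" step means in practice.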

Cross-validation helps improve your predictions using the K-Fold strategy. What is K-Fold, you ask? Check out this post for a visualized explanation.

Continue reading “Visualizing Cross-validation Code”

Book: Evaluating Machine Learning Models

  • If you’re new to data science and applied machine learning, evaluating a machine-learning model can seem pretty overwhelming.
  • With this O’Reilly report, machine-learning expert Alice Zheng takes you through the model evaluation basics.
  • In this overview, Zheng first introduces the machine-learning workflow, and then dives into evaluation metrics and model selection.
  • With this report, you will:
  • Alice is a technical leader in the field of Machine Learning.
  • Previous roles include Director of Data Science at GraphLab/Dato/Turi, machine-learning researcher at Microsoft Research, Redmond, and postdoctoral fellow at Carnegie Mellon University.

Data science today is a lot like the Wild West: there’s endless opportunity and excitement, but also a lot of chaos and confusion. If you’re new to data scien…
Continue reading “Book: Evaluating Machine Learning Models”