Visualizing Cross-validation Code

Visualizing Cross-validation Code  #MachineLearning #dataviz

  • Let's say you are writing nice and clean Machine Learning code (e.g. Linear Regression).
  • As the name suggests, cross-validation is the next fun thing after learning Linear Regression, because it helps to improve your predictions using the K-Fold strategy.
  • We divide the dataset into K equal parts (the K folds, or cv).
  • Then we train the model on the larger portion and test on the smaller one.
  • This graph represents K-fold cross-validation for the Boston dataset with a Linear Regression model.
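The steps in the bullets above can be sketched in a few lines of scikit-learn. This is a minimal illustration on a synthetic regression dataset, not the post's actual code (the Boston dataset it used is no longer shipped with recent scikit-learn versions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for the post's dataset: 100 samples, 3 features,
# with a known linear relationship plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Split into K=5 equal folds; each fold serves once as the test set
# while the model is trained on the remaining K-1 folds.
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=kfold, scoring="r2")

print(scores)         # one R^2 score per fold
print(scores.mean())  # averaged estimate of generalisation performance
```

Averaging the per-fold scores gives a less optimistic estimate of model performance than a single train/test split, which is the whole point of the K-Fold strategy.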


Cross-validation helps to improve your predictions using the K-Fold strategy. What is K-Fold, you ask? Check out this post for a visualized explanation.

Continue reading “Visualizing Cross-validation Code”

Researchers Have Created an AI That Is Naturally Curious

Researchers Have Created an #AI That Is Naturally Curious 

 #fintech @futurism

  • Researchers have successfully given AI a curiosity implant, which motivated it to explore a virtual environment.
  • This could be the bridge between AI and real-world applications.

    Researchers at the University of California (UC), Berkeley, have produced an artificial intelligence (AI) that is naturally curious.

  • While the AI that was not equipped with the curiosity ‘upgrade’ banged into walls repeatedly, the curious AI explored its environment in order to learn more.
  • Reward-driven training is a useful and effective strategy for teaching AI to complete specific tasks (as AlphaGo showed by beating the world's number-one Go player), but it is less useful when you want a machine to be autonomous and operate outside of direct commands.
  • This is a crucial step toward integrating AI into the real world and having it solve real-world problems because, as Agrawal says, “rewards in the real world are very sparse.”
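The core idea behind the curious agent is intrinsic motivation: when external rewards are sparse, the agent rewards itself for visiting states its own forward model predicts poorly. Here is a toy sketch of that principle on a simple chain environment; it is an illustrative simplification under assumed dynamics, not the Berkeley researchers' actual method or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2

# Forward model: the agent's estimate of the next state for each
# (state, action) pair, initialised to zero (i.e. it knows nothing).
predicted_next = np.zeros((n_states, n_actions))

def true_next(s, a):
    # A simple chain environment: action 0 steps left, action 1 steps right.
    return max(0, min(n_states - 1, s + (1 if a == 1 else -1)))

visits = np.zeros(n_states)
s = 0
for _ in range(500):
    # "Curiosity": prefer the action whose outcome the forward model
    # predicts worst; once everything is well predicted, act randomly.
    errors = [abs(predicted_next[s, a] - true_next(s, a)) for a in range(n_actions)]
    a = int(np.argmax(errors)) if max(errors) > 0 else int(rng.integers(n_actions))
    s_next = true_next(s, a)
    # Train the forward model toward the observed transition.
    predicted_next[s, a] += 0.5 * (s_next - predicted_next[s, a])
    visits[s_next] += 1
    s = s_next

print(visits)  # visit counts per state: the curious agent roams the chain
```

A purely reward-driven agent with no external reward here would have no reason to move at all; the prediction-error bonus is what pushes this one to cover the state space.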

Researchers have successfully given AI a curiosity implant, which motivated it to explore a virtual environment.
Continue reading “Researchers Have Created an AI That Is Naturally Curious”

Learning to Learn by Gradient Descent by Gradient Descent

Learning to Learn by Gradient Descent by Gradient Descent  #MachineLearning @adriancolyer

  • Suppose we are training an optimiser g to optimise a function f.
  • And there’s something especially potent about learning learning algorithms, because better learning algorithms accelerate learning…
  • Casting algorithm design as a learning problem allows us to specify the class of problems we are interested in through example problem instances.
  • Each function in the system model could be learned or just implemented directly with some algorithm.


What if instead of hand designing an optimising algorithm (function) we learn it instead? That way, by training on the class of problems we’re interested in solving, we can learn an optimum optimiser for the class!
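To make the idea concrete, here is a deliberately tiny sketch of "learning an optimiser by gradient descent": instead of hand-picking the step size for gradient descent, we treat it as a learned parameter and optimise it, by gradient descent, against a class of example problems. This uses plain quadratics and a finite-difference outer gradient for simplicity; it is an illustrative toy, not the paper's LSTM-based optimiser:

```python
import numpy as np

rng = np.random.default_rng(42)

# 20 fixed problem instances from our class of interest:
# each row holds (target a, start x0) for f(x) = (x - a)^2.
problems = rng.normal(size=(20, 2))

def inner_loss(eta, n_steps=10):
    """Average final loss after n_steps of gradient descent with step size eta."""
    a, x = problems[:, 0], problems[:, 1].copy()
    for _ in range(n_steps):
        x = x - eta * 2.0 * (x - a)  # gradient of (x - a)^2 is 2(x - a)
    return float(np.mean((x - a) ** 2))

# Outer loop: gradient descent on eta itself, using a
# finite-difference estimate of d(inner_loss)/d(eta).
eta, outer_lr, h = 0.05, 0.01, 1e-5
for _ in range(200):
    grad = (inner_loss(eta + h) - inner_loss(eta - h)) / (2 * h)
    eta -= outer_lr * grad

print(eta)              # learned step size (0.5 would be ideal for this class)
print(inner_loss(eta))  # much smaller than the starting inner_loss(0.05)
```

The learned eta is tuned to the class of quadratics it was trained on, which is exactly the point of the quoted paragraph: by training on example problem instances, we get an optimiser specialised to the problems we actually care about.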

Continue reading “Learning to Learn by Gradient Descent by Gradient Descent”