AI eats future of software — Part2 – Saunak Dasgupta – Medium

AI eats future of software — Part2 | #MachineLearning #Artificialintelligence #RT

  • And then it can start performing tasks proactively instead of reactively, providing an unprecedented efficiency boost in both personal and enterprise use cases.
  • In this follow-up post, we will take a deeper look at the important architectural components of such an application.
  • Architecturally, there are four critical pieces of an intelligent software application: A) Sensory block, B) Services hub, C) Machine learning box, D) Handler block.
  • A) Sensory Block: the sensory block is what users interface with while interacting with the software.
  • With the advent of AI (more specifically natural language processing and image processing), the sensory block of next-generation software will support user interactions beyond just touch and mouse clicks.
  • Smart endpoints, or the edge, are often in demand (see fog computing), as in many scenarios decisions are required to be made by logic residing at the “edge”, saving crucial clock cycles in mission-critical applications.
  • B) Services Hub: the services hub hosts an array of services that are exposed from machine learning models.
  • The models will be part of the machine learning box, but the service endpoints will be made available through the services hub (see the sketch after this list).
  • Serverless/function-based approaches are more programming-oriented.
  • In the next post we will cover the remaining two sections: the machine learning box and the handler block.
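As a rough illustration of the services-hub idea, here is a minimal Python sketch of exposing a model from the machine learning box as an HTTP prediction endpoint; Flask, the model.pkl file, and the /predict route are illustrative assumptions, not details from the post.

    # Minimal "services hub" sketch: expose a trained model as a service endpoint.
    import pickle

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    with open("model.pkl", "rb") as f:  # hypothetical model produced by the ML box
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects JSON like {"features": [1.0, 2.0, 3.0]}
        features = request.get_json()["features"]
        prediction = model.predict([features])[0]
        return jsonify({"prediction": float(prediction)})

    if __name__ == "__main__":
        app.run()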

In the previous post, I discussed the core traits of an intelligent software application. You can read the post here. In summary, intelligent applications will not only take instructions from…
Continue reading “AI eats future of software — Part2 – Saunak Dasgupta – Medium”

A Tour of Machine Learning Algorithms

A Tour of Machine Learning Algorithms | #Analytics #MachineLearning #RT

  • Originally published by Jason Brownlee in 2013, it is still a goldmine for all machine learning professionals.
  • The algorithms are broken down into several categories.
  • You can even download an algorithm map from the original article.
  • It would be interesting to list, for each algorithm, … and, generally speaking, compare these algorithms.

  • I would add HDT, Jackknife regression, density estimation, attribution modeling (to optimize marketing mix), linkage (in fraud detection), indexation (to create taxonomies or for clustering large data sets consisting of text), bucketisation, and time series algorithms.

Originally published by Jason Brownlee in 2013, it still is a goldmine for all machine learning professionals.  The algorithms are broken down in several categ…
Continue reading “A Tour of Machine Learning Algorithms”

Building AI: 3 theorems you need to know – DXC Blogs

Building #AI: 3 theorems you need to know #MachineLearning

  • The mathematical theorem proving this is the so-called “no-free-lunch theorem.” It tells us that if a learning algorithm works well with one kind of data, it will work poorly with other types of data.
  • In a way, a machine learning algorithm projects its own knowledge onto data.
  • In machine learning, overfitting occurs when your model performs well on training data but poorly on test data (see the sketch after this list).
  • Any learning algorithm must also be a good model of the data; if it learns one type of data effectively, it will necessarily be a poor model, and a poor student, of some other types of data.
  • The good regulator theorem also tells us that whether inductive bias will be beneficial or detrimental for modeling certain data depends on whether the equations defining the bias constitute a good or poor model of the data.
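To make the overfitting point concrete, here is a minimal Python sketch (my own illustration, not code from the post): a degree-9 polynomial fit to ten noisy points drives training error to nearly zero while test error blows up, whereas a simpler degree-3 fit generalizes far better.

    # Overfitting sketch: high-degree polynomial vs. a simpler one (illustrative data).
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)  # noisy samples
    x_test = np.linspace(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test)                             # the true curve

    for degree in (3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")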

Editor’s note: This is a series of blog posts on the topic of “Demystifying the creation of intelligent machines: How does one create AI?” You are now reading part 3. For the list of all, see here: 1, 2, 3, 4, 5, 6, 7.
Continue reading “Building AI: 3 theorems you need to know – DXC Blogs”

Nightmare Hellface Generator is Cutting-Edge Machine Learning

Nightmare hellface generator is cutting-edge machine learning:

  • Draw something in a little box and an algorithm will try to interpret it as a cat, filling in the colors and textures according to a machine learning model trained on thousands of cat images.
  • The pix2pix project demonstrates something pretty profound about machine learning circa 2017: It’s awful at generating new images, or at least meaningful new images.
  • Machine learning is better at classifying existing images, but, even then, things drop off dramatically as we move beyond a handful of really robust object-recognition models.
  • GANs work by training a generative model that seeks to minimize a particular “loss function” based on a discriminator’s prediction of whether the generated image is fake or real (see the sketch after this list).
  • Rather than learn how to produce images from scratch, the model here learns to map the abstract image representation contained within a machine learning model to a trackpad doodle.
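To ground the loss-function description, here is a minimal single-step GAN sketch in Python (PyTorch); the tiny fully connected networks and random data are illustrative stand-ins, not the pix2pix architecture from the article.

    # One adversarial training step: D learns to separate real from fake,
    # G learns to make D call its outputs real. Shapes and data are placeholders.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))  # generator
    D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator
    loss_fn = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    real = torch.rand(32, 784)  # stand-in for a batch of real images

    # Discriminator step: label real images 1, generated images 0.
    fake = G(torch.randn(32, 16)).detach()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + loss_fn(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: minimize the loss that says "D predicts my images are fake".
    fake = G(torch.randn(32, 16))
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()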

Generative adversarial networks strike again.
Continue reading “Nightmare Hellface Generator is Cutting-Edge Machine Learning”

Microsoft releases version 2.0 of its deep learning toolkit

Microsoft releases version 2.0 of its deep learning toolkit  #CompBindTech

  • Microsoft today launched version 2.0 of what is now called the Microsoft Cognitive Toolkit.
  • This open-source toolkit, previously known as CNTK, is Microsoft’s competitor to similar tools like TensorFlow, Caffe and Torch. While the first version was able to challenge many of its competitors in terms of speed, this second version puts an emphasis on usability (by adding support for Python and the popular Keras neural-network library, for example; see the sketch after this list) and future extensibility, while still maintaining, and improving, its speed.
  • Microsoft originally built this toolkit for speech recognition systems, so it was very good at working with time-series data for building recurrent neural nets. Because it was essentially an internal tool, though, it didn’t support Python, for example, even though it’s by far the most popular language among machine learning practitioners.
  • Huang stressed that the first version of the Cognitive Toolkit outperformed its competitors pretty easily on a number of standard tests.
  • Unsurprisingly, Microsoft is stressing the fact that the Cognitive Toolkit is a battle-tested system that it uses to power most of its internal AI systems, including Cortana, and that it can train models faster than most of its competitors.
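As a small illustration of the new Python/Keras support, here is a minimal sketch of running a Keras model on the Cognitive Toolkit backend; the layer sizes and random data are placeholders, not anything from the article.

    # Ask Keras to use the Cognitive Toolkit (CNTK) as its backend, then fit a toy model.
    import os
    os.environ["KERAS_BACKEND"] = "cntk"  # must be set before importing keras

    import numpy as np
    from keras.layers import Dense
    from keras.models import Sequential

    # Toy data: 100 samples, 20 features, binary labels (placeholders).
    x = np.random.rand(100, 20).astype("float32")
    y = np.random.randint(0, 2, size=(100, 1))

    model = Sequential([
        Dense(64, activation="relu", input_shape=(20,)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5, batch_size=32)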

Microsoft today launched version 2.0 of what is now called the Microsoft Cognitive Toolkit. This open-source toolkit, which was previously known as CNTK, is..
Continue reading “Microsoft releases version 2.0 of its deep learning toolkit”

Artificial Intelligence Helps in Learning How Children Learn

Artificial intelligence helps in learning how children learn

  • Researchers in artificial intelligence and machine learning have started to design software that allows computers to learn about causes the way that scientists do.
  • In one experiment, we showed preschool children a simple machine with a switch on one side and two disks that spin on top.
  • Bayesian inference considers both the strength of new evidence and the strength of your existing hypotheses (see the sketch after this list).
  • Both toddlers and scientists hold on to well-confirmed hypotheses, but eventually enough new evidence can overturn even the most cherished idea.
  • Several studies show that youngsters integrate existing knowledge and new evidence in this way.
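To make the Bayesian point concrete, here is a tiny worked update in Python (the numbers are illustrative, not from the study): one piece of contrary evidence weakens a strongly held hypothesis without overturning it.

    # Bayes' rule: posterior = P(E|H) * P(H) / P(E). Numbers are illustrative.
    prior = 0.9            # a well-confirmed hypothesis, e.g. "the switch spins the disks"
    p_e_given_h = 0.1      # the new evidence is unlikely if the hypothesis is true
    p_e_given_not_h = 0.8  # but quite likely if it is false

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / p_e
    print(f"belief after evidence: {posterior:.2f}")  # ~0.53: weakened, not abandoned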

Alison Gopnik, author of “Making AI Human” in Scientific American’s June issue, describes the use of Bayesian statistics to outline how youngsters infer the basics of cause and effect.
Continue reading “Artificial Intelligence Helps in Learning How Children Learn”