GitHub

A #Java Toolbox for Scalable Probabilistic #MachineLearning

  • The AMIDST Toolbox allows you to model your problem using a flexible probabilistic language based on graphical models.
  • AMIDST Toolbox has been used to track concept drift and to do risk prediction in credit operations; because the data is collected continuously and reported daily, this gives rise to a streaming data classification problem.
  • As an example, the following figure shows how the data processing capacity of our toolbox increases with the number of CPU cores when learning a probabilistic model (including a class variable C, two latent variables (dashed nodes), and multinomial (blue nodes) and Gaussian (green nodes) observable variables) using AMIDST’s learning engine.
  • As can be seen, using our variational learning engine, the AMIDST toolbox can process data on the order of gigabytes (GB) per hour with large and complex PGMs with latent variables, depending on the number of available CPU cores.
  • If your data is really big and cannot be stored on a single laptop, you can still learn your probabilistic model from it by using the AMIDST distributed learning engine, which is based on a novel, state-of-the-art distributed message passing scheme implemented on top of Apache Flink.
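AMIDST’s own API is not reproduced here; as a minimal, hypothetical Java sketch of the kind of single-pass computation a streaming learning engine relies on, the class below accumulates the sufficient statistics of a Gaussian observable variable one data point at a time (Welford’s online update), so the parameters can be refreshed as data streams in rather than recomputed in batch:

```java
// Hypothetical illustration (NOT AMIDST's actual API): streaming updates of
// the sufficient statistics of a Gaussian observable variable.
public class StreamingGaussian {
    private long n = 0;
    private double mean = 0.0;
    private double m2 = 0.0;   // running sum of squared deviations (Welford)

    // Fold one observation into the running statistics; O(1) per data point,
    // so the estimate is updated as data arrives rather than in batch.
    public void update(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
    }

    public double mean() { return mean; }

    public double variance() { return n > 1 ? m2 / (n - 1) : 0.0; }

    public static void main(String[] args) {
        StreamingGaussian g = new StreamingGaussian();
        for (double x : new double[]{2.0, 4.0, 6.0}) g.update(x);
        System.out.println(g.mean());      // 4.0
        System.out.println(g.variance());  // 4.0
    }
}
```

Because each update touches only three scalars, the memory footprint is constant no matter how many gigabytes flow past, which is the property that makes this style of computation suit streaming learners.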

toolbox – A Java Toolbox for Scalable Probabilistic Machine Learning
Continue reading “GitHub”

An Interview With David Kenny – The Innovator news

  • — D.K.: We are at the point in AI that we were with the Internet in 1993 and mobile around 2003.
  • So I would say, view AI as something in the fabric of your company like electricity — like data flowing through your company in fundamental ways. Is it a threat or an opportunity?
  • — D.K.: The digital and mobile disruptions largely favor distribution, so most of the real value is held by a handful of companies who have consolidated distribution.
  • To take advantage of AI we really encourage companies to maintain control of their data — their intellectual property — because the value is in using it to train the AI. Don’t feed it to (an outside) platform that will serve as a distribution chokehold.
  • Companies need to establish their data and knowledge strategy first — extending their own knowledge and not just turning it over to someone else.

David Kenny, IBM Watson’s chief and a scheduled keynote speaker at Viva Technology, recently spoke to The Innovator about what executives should do to prepare for AI. — D.K.: When you can predict…
Continue reading “An Interview With David Kenny – The Innovator news”

I watched two robots chat together on stage at a tech event

I watched two #robots chat together on stage at a tech event  #RISEConf #Ai #bots

  • I watched two robots go on stage at a tech event to “debate” the future of humanity with each other.
  • The robots in question are Sophia and Han, and they belong to Hanson Robotics, a Hong Kong-based company that is developing and deploying artificial intelligence in humanoids.
  • The event organizers claimed a world first for two robots talking on stage, and it isn’t difficult to imagine that it could become a more common sight in the future; this is just the start of Hanson Robotics’ ambitious plans.
  • Company CEO and founder Dr David Hanson believes robots will become commonplace in homes and other aspects of our daily life within the next decade.
  • “We’ve got these early uses but our aspiration is Data from Star Trek,” Hanson told TechCrunch on the sidelines of the event following the robot debate.

I got a glimpse into the future world of our robot overlords today. It was nervy at times. I watched two robots go on stage at a tech event to “debate” the…
Continue reading “I watched two robots chat together on stage at a tech event”

Data in, intelligence out: Machine learning pipelines demystified

How machine learning pipelines work: Data in, intelligence out #AI #ML #datascience

  • It’s tempting to think of machine learning as a magic black box.
  • If you’re in the business of deriving actionable insights from data through machine learning, it helps for the process not to be a black box.
  • The more you know about what’s inside the box, the better you’ll understand every step of the process by which data is transformed into predictions, and the more powerful your predictions can be.
  • There’s also a pipeline for data as it flows through machine learning solutions.
  • Mastering how that pipeline comes together is a powerful way to know machine learning itself from the inside out.
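The pipeline idea above can be sketched in a few lines of Java. This is a hypothetical `MiniPipeline` class, not any particular library’s API: each stage is simply a function from data to data, and the pipeline threads the raw input through the chain in order:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of the pipeline idea (not a real library's API):
// data flows through a fixed chain of stages, each consuming the
// previous stage's output.
public class MiniPipeline {
    // Example stage 1: clean the data by dropping invalid (negative) readings.
    static final Function<double[], double[]> CLEAN =
            xs -> Arrays.stream(xs).filter(x -> x >= 0).toArray();

    // Example stage 2: scale features into [0, 1] by the maximum value.
    static final Function<double[], double[]> SCALE = xs -> {
        double max = Arrays.stream(xs).max().orElse(1.0);
        return Arrays.stream(xs).map(x -> x / max).toArray();
    };

    private final List<Function<double[], double[]>> stages;

    public MiniPipeline(List<Function<double[], double[]>> stages) {
        this.stages = stages;
    }

    // Thread the raw input through every stage in order.
    public double[] run(double[] raw) {
        double[] data = raw;
        for (Function<double[], double[]> stage : stages) data = stage.apply(data);
        return data;
    }

    public static void main(String[] args) {
        MiniPipeline p = new MiniPipeline(List.of(CLEAN, SCALE));
        System.out.println(Arrays.toString(p.run(new double[]{-1, 2, 4})));
        // [0.5, 1.0]
    }
}
```

Treating each stage as an interchangeable function is the point of the pipeline view: a cleaning step, a feature transform, or a model can be swapped without touching the rest of the chain.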

Data plus algorithms equals machine learning, but how does that all unfold? Let’s lift the lid on the way those pieces fit together, beginning to end
Continue reading “Data in, intelligence out: Machine learning pipelines demystified”

Building AI: 3 theorems you need to know – DXC Blogs

Building #AI: 3 theorems you need to know #MachineLearning

  • The mathematical theorem proving this is the so-called “no-free-lunch theorem.” It tells us that if a learning algorithm works well with one kind of data, it will work poorly with other kinds of data.
  • In a way, a machine learning algorithm projects its own knowledge onto data.
  • In machine learning, overfitting occurs when your model performs well on training data, but the performance becomes horrible when switched to test data.
  • Any learning algorithm must also be a good model of the data; if it learns one type of data effectively, it will necessarily be a poor model — and a poor student — of some other types of data.
  • The good regulator theorem also tells us that whether an inductive bias will be beneficial or detrimental for modeling certain data depends on whether the equations defining the bias constitute a good or poor model of that data.
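The training-versus-test gap described above can be made concrete with a deliberately extreme learner (illustrative data only): one that memorizes every training pair scores perfectly on the training set but falls back to a default guess on anything it has not seen, which is the textbook shape of overfitting:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch with made-up data: a learner that memorizes its
// training set overfits completely -- perfect training accuracy, no
// generalization to unseen inputs.
public class Memorizer {
    private final Map<Integer, Integer> table = new HashMap<>();

    // "Training" is pure memorization of (x, y) pairs.
    public void fit(int[] xs, int[] ys) {
        for (int i = 0; i < xs.length; i++) table.put(xs[i], ys[i]);
    }

    // Unseen inputs fall back to a default guess of 0.
    public int predict(int x) { return table.getOrDefault(x, 0); }

    public static double accuracy(Memorizer m, int[] xs, int[] ys) {
        int hits = 0;
        for (int i = 0; i < xs.length; i++) if (m.predict(xs[i]) == ys[i]) hits++;
        return (double) hits / xs.length;
    }

    public static void main(String[] args) {
        Memorizer m = new Memorizer();
        int[] trainX = {1, 2, 3, 4}, trainY = {1, 0, 1, 0};
        m.fit(trainX, trainY);
        System.out.println(accuracy(m, trainX, trainY));  // 1.0 on training data

        int[] testX = {5, 6, 7, 8}, testY = {1, 1, 1, 1};
        System.out.println(accuracy(m, testX, testY));    // 0.0 on unseen data
    }
}
```

A lookup table has the strongest possible fit to its training data and the weakest possible inductive bias, so it illustrates both the overfitting bullet and the no-free-lunch trade-off in one picture.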

Editor’s note: This is a series of blog posts on the topic of “Demystifying the creation of intelligent machines: How does one create AI?” You are now reading part 3. For the list of all, see here: 1, 2, 3, 4, 5, 6, 7.
Continue reading “Building AI: 3 theorems you need to know – DXC Blogs”

The Value Of Data In A Digital World

RT @DeepLearn007 The Value Of Data In A Digital World
#AI #machinelearning #bigdata  …

  • Companies realize that their customers want more personalized products and services; to satisfy those needs, they collect as much data as possible to understand their customers’ profiles.
  • Once the data is collected, Artificial Intelligence (AI) can be used to understand and construct customer profiles that reveal the needs of each individual customer.
  • In a similar manner, the data of our “digital selves” and our interactions and activities using our digital devices reveal interesting properties of our profiles.
  • Recommender systems and expert systems are examples of how data is used to build profiles and act on them.
  • Based on these profiles and segmentations, a recommender system can build a model to predict the behavior of users and customers.
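A minimal sketch of the profile matching at the heart of a neighbourhood-based recommender (the ratings below are hypothetical): cosine similarity between two users’ rating vectors, which is 1.0 for identical tastes and 0.0 for non-overlapping ones, tells the system whose behavior is most predictive of whose:

```java
// Illustrative sketch with made-up ratings: cosine similarity between
// user profiles is the core building block of neighbourhood-based
// recommenders.
public class Similarity {
    // Cosine similarity: dot product of the vectors divided by the
    // product of their Euclidean norms.
    public static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] alice = {5, 3, 0};   // ratings for three items
        double[] bob   = {5, 3, 0};   // identical tastes
        double[] carol = {0, 0, 5};   // non-overlapping tastes
        System.out.println(cosine(alice, bob));    // 1.0
        System.out.println(cosine(alice, carol));  // 0.0
    }
}
```

In a full recommender this similarity would weight the neighbours’ ratings when predicting what an individual user is likely to want next.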

One of the most sought-after commodities today is data. Digitalization has given rise to a technical revolution that could be of the same magnitude as that…
Continue reading “The Value Of Data In A Digital World”
