Top Google executive says the company wants to democratise AI and ML to improve day-to-day life

Our aim is to make #ML and #AI accessible to everyone. @drfeifei shares how:

  • As Artificial Intelligence (AI) and Machine Learning (ML) get set to take a giant leap in improving day-to-day life, the key is to democratise these new-age tools for all and benefit the communities of developers, users and enterprise customers, a top Google executive said in Gurugram on Wednesday.
  • Google, a pioneer in AI, has been focusing on four key components (computing, algorithms, data and expertise) to organise all the data and make it accessible.
  • “Google as a company has always been at the forefront of computing AI,” Fei-Fei Li, Chief Scientist of Google Cloud AI and ML, told reporters during a media interaction in Gurugram.
  • Earlier this year, Google announced the second-generation Tensor Processing Units (TPUs) (now called the Cloud TPU) at the annual Google I/O event in the US.
  • “We announced the Cloud TPU — the second-generation of our processing unit and our intention is to make it available via Google Cloud,” the top executive added.

As Artificial Intelligence (AI) and Machine Learning (ML) get set to take a giant leap in improving day-to-day life, the key is to democratise these new-age tools for all and benefit the communities of developers, users and enterprise customers, a top Google executive said in Gurugram on Wednesday. The concepts of AI and ML emerged long ago, but with the vast availability of data today, sectors like healthcare, banking and retail are adopting these technologies faster than before.
Continue reading “Top Google executive says the company wants to democratise AI and ML to improve day-to-day life”

GitHub

A #Java Toolbox for Scalable Probabilistic #MachineLearning

  • The AMIDST Toolbox allows you to model your problem using a flexible probabilistic language based on graphical models.
  • AMIDST Toolbox has been used to track concept drift and do risk prediction in credit operations, and as data is collected continuously and reported on a daily basis, this gives rise to a streaming data classification problem.
  • As an example, a figure in the project documentation shows how the data processing capacity of the toolbox scales with the number of CPU cores when learning a probabilistic model (with a class variable C, two latent variables, and multinomial and Gaussian observable variables) using AMIDST’s learning engine.
  • Using its variational learning engine, the AMIDST toolbox is able to process data on the order of gigabytes (GB) per hour, depending on the number of available CPU cores, even with large and complex PGMs with latent variables.
  • If your data is too big to store on a single laptop, you can still learn your probabilistic model from it by using the AMIDST distributed learning engine, based on a novel, state-of-the-art distributed message-passing scheme implemented on top of Apache Flink.
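Setting AMIDST’s own Java API aside, the streaming idea the bullets describe — updating a probabilistic model incrementally as data arrives, so nothing needs to be re-read — can be sketched in Python with running sufficient statistics for per-class Gaussians (Welford’s algorithm). The class and variable names here are illustrative, not AMIDST’s API:

```python
from collections import defaultdict

class StreamingGaussian:
    """Running per-class mean/variance via Welford's algorithm:
    each observation is seen once and then discarded."""
    def __init__(self):
        self.n = defaultdict(int)
        self.mean = defaultdict(float)
        self.m2 = defaultdict(float)  # sum of squared deviations

    def update(self, label, x):
        self.n[label] += 1
        delta = x - self.mean[label]
        self.mean[label] += delta / self.n[label]
        self.m2[label] += delta * (x - self.mean[label])

    def variance(self, label):
        return self.m2[label] / max(self.n[label] - 1, 1)

# Feed batches as they "arrive"; only the statistics are kept in memory.
model = StreamingGaussian()
for batch in ([("good", 1.0), ("good", 1.2)], [("bad", 5.0), ("bad", 5.4)]):
    for label, x in batch:
        model.update(label, x)

print(round(model.mean["good"], 2))  # 1.1
```

Because the update touches each record exactly once, throughput scales with how fast batches can be fed in — the same property that lets AMIDST handle streaming classification on daily-reported data.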

toolbox – A Java Toolbox for Scalable Probabilistic Machine Learning
Continue reading “GitHub”

An Interview With David Kenny – The Innovator news

  • — D.K.: We are at the point in AI that we were with the Internet in 1993 and mobile around 2003.
  • So I would say, view AI as something in the fabric of your company like electricity, like data flowing through your company in fundamental ways. Is it a threat or an opportunity?
  • — D.K.: The digital and mobile disruptions largely favor distribution, so most of the real value is held by a handful of companies who have consolidated distribution.
  • To take advantage of AI we really encourage companies to maintain control of their data, their intellectual property, because the value is in using it to train the AI. Don’t feed it to (an outside) platform that will serve as a distribution chokehold.
  • Companies need to establish their data and knowledge strategy first — extending their own knowledge and not just turning it over to someone else.

David Kenny, IBM Watson’s Chief and a scheduled keynote speaker at Viva Technology, recently spoke to The Innovator about what executives should do to prepare for AI. — D.K.: When you can predict…
Continue reading “An Interview With David Kenny – The Innovator news”

I watched two robots chat together on stage at a tech event

I watched two #robots chat together on stage at a tech event  #RISEConf #Ai #bots

  • I watched two robots go on stage at a tech event to “debate” the future of humanity with each other.
  • The robots in question are Sophia and Han, and they belong to Hanson Robotics, a Hong Kong-based company that is developing and deploying artificial intelligence in humanoids.
  • The event organizers claimed a world first for two robots talking on stage, and it isn’t difficult to imagine that it could become a more common sight, as this is just the start of Hanson Robotics’ ambitious plans.
  • Company CEO and founder Dr David Hanson believes robots will become commonplace in homes and other aspects of our daily life within the next decade.
  • “We’ve got these early uses but our aspiration is Data from Star Trek,” Hanson told TechCrunch on the sidelines of the event following the robot debate.

I got a glimpse into the future world of our robot overlords today. It was nervy at times. I watched two robots go on stage at a tech event to “debate” the..
Continue reading “I watched two robots chat together on stage at a tech event”

Data in, intelligence out: Machine learning pipelines demystified

How machine learning pipelines work: Data in, intelligence out #AI #ML #datascience

  • It’s tempting to think of machine learning as a magic black box.
  • If you’re in the business of deriving actionable insights from data through machine learning, it helps for the process not to be a black box.
  • The more you know what’s inside the box, the better you’ll understand every step of the process for how data can be transformed into predictions, and the more powerful your predictions can be.
  • There’s also a pipeline for data as it flows through machine learning solutions.
  • Mastering how that pipeline comes together is a powerful way to know machine learning itself from the inside out.
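The pipeline the bullets describe — raw data flowing through preprocessing and model stages, then back out as predictions — can be sketched as a minimal chain of hand-rolled stages. This is an illustrative toy, not any particular library’s API:

```python
class Standardize:
    """Preprocessing stage: center and scale a single feature."""
    def fit(self, xs):
        self.mean = sum(xs) / len(xs)
        self.std = (sum((x - self.mean) ** 2 for x in xs) / len(xs)) ** 0.5 or 1.0
        return self
    def transform(self, xs):
        return [(x - self.mean) / self.std for x in xs]

class NearestCentroid:
    """Model stage: predict the class whose training mean is closest."""
    def fit(self, xs, ys):
        self.centroids = {}
        for label in set(ys):
            pts = [x for x, y in zip(xs, ys) if y == label]
            self.centroids[label] = sum(pts) / len(pts)
        return self
    def predict(self, xs):
        return [min(self.centroids, key=lambda c: abs(x - self.centroids[c]))
                for x in xs]

# Data in...
xs, ys = [1.0, 1.5, 8.0, 9.0], ["small", "small", "large", "large"]
# ...flows through each stage in order at training time...
scaler = Standardize().fit(xs)
model = NearestCentroid().fit(scaler.transform(xs), ys)
# ...and the same stages are reused, in the same order, at prediction time.
print(model.predict(scaler.transform([2.0, 7.5])))  # ['small', 'large']
```

The key discipline the article argues for is visible even at this scale: every transformation fitted on training data must be applied identically to new data, which is why the fitted `scaler` is reused rather than refitted at prediction time.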

Data plus algorithms equals machine learning, but how does that all unfold? Let’s lift the lid on the way those pieces fit together, beginning to end
Continue reading “Data in, intelligence out: Machine learning pipelines demystified”

Building AI: 3 theorems you need to know – DXC Blogs

Building #AI: 3 theorems you need to know #MachineLearning

  • The mathematical theorem proving this is the so-called “no-free-lunch theorem”. It tells us that if a learning algorithm works well with one kind of data, it will work poorly with other types of data.
  • In a way, a machine learning algorithm projects its own knowledge onto data.
  • In machine learning, overfitting occurs when your model performs well on training data, but the performance becomes horrible when switched to test data.
  • Any learning algorithm must also be a good model of the data; if it learns one type of data effectively, it will necessarily be a poor model, and a poor student, of some other types of data.
  • The good regulator theorem also tells us that whether inductive bias will be beneficial or detrimental for modeling certain data depends on whether the equations defining the bias constitute a good or poor model of the data.
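The overfitting point can be made concrete with a deliberately simplistic experiment: a “memorizer” that stores every training example achieves zero training error, while a simple least-squares line accepts some training error; on fresh test data drawn from the same noisy process, the memorizer’s advantage typically vanishes. All names and numbers here are toy choices:

```python
import random

random.seed(0)

# Ground truth: y = x, plus Gaussian noise.
def sample(n):
    xs = [random.uniform(0, 10) for _ in range(n)]
    return [(x, x + random.gauss(0, 1)) for x in xs]

train, test = sample(30), sample(30)

# "Memorizer": stores every training pair and, for unseen x, falls back
# to the value at the nearest memorized x (it models the sample, not the process).
memory = dict(train)
def memorizer(x):
    nearest = min(memory, key=lambda m: abs(m - x))
    return memory[nearest]

# Simple model: ordinary least-squares line y = a*x + b.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
def line(x):
    return a * x + b

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(memorizer, train))  # 0.0: perfect recall of the training set
print(mse(memorizer, test) > mse(line, test))  # on most draws the line wins on test data
```

The memorizer is an extreme inductive bias toward the training sample itself — exactly the mismatch between training and test performance that the overfitting bullet describes.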

Editor’s note: This is a series of blog posts on the topic of “Demystifying the creation of intelligent machines: How does one create AI?” You are now reading part 3. For the list of all, see here: 1, 2, 3, 4, 5, 6, 7.
Continue reading “Building AI: 3 theorems you need to know – DXC Blogs”