- London: Oxford scientists have developed a new artificial intelligence system that can create fake videos of a person by using their still image and an audio clip.
- As the audio clip plays, the system manipulates the mouth of the person in the still image so that they appear to be speaking.
- Although the results are not absolutely perfect, researchers believe that the software could soon make realistically fake videos only a single click away.
- Joon Son Chung of the University of Oxford, UK, said: “The application we’re thinking of is redubbing a video into another language.”
- In the future, the audio from news clips could be automatically translated into another language and the images updated to fit.
Oxford University scientists have developed a new artificial intelligence system; the system uses a person’s image and audio clip to create fake videos of the person.
Continue reading “Artificial Intelligence can now use a person’s image and audio to create fake videos”
- This article was written by Dorian Pyle and Cristina San Jose of McKinsey & Company.
- Machine learning is based on algorithms that can learn from data without relying on rules-based programming.
- The unmanageable volume and complexity of the big data that the world is now swimming in have increased the potential of machine learning—and the need for it.
- By being shown thousands and thousands of labeled data sets with instances of, say, a cat, the machine could shape its own rules for deciding whether a particular set of digital pixels was, in fact, a cat.
- For more articles about machine learning, click here.
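The cat-classification point above can be made concrete with a minimal sketch (our own illustration, not McKinsey's code): instead of hand-writing if/else rules, a simple 1-nearest-neighbour "model" derives its answers directly from labeled training examples. The toy features and labels are invented for illustration.

```python
# A model that learns from labeled data rather than rules-based programming:
# it classifies a new example by finding the closest labeled training example.

def predict(train, features):
    """Return the label of the training example closest to `features`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], features))[1]

# Toy labeled data set: [has_whiskers, barks] -> label
train = [([1, 0], "cat"), ([1, 0], "cat"), ([0, 1], "dog"), ([0, 1], "dog")]

print(predict(train, [1, 0]))  # -> cat
print(predict(train, [0, 1]))  # -> dog
```

With thousands of labeled examples instead of four, the same idea lets the model shape its own decision boundary for "cat vs. not cat" without anyone writing explicit rules.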
This article was written by Dorian Pyle and Cristina San Jose of McKinsey & Company. Dorian Pyle is a data expert in McKinsey’s Miami office, and Cristin…
Continue reading “An executive’s guide to machine learning”
Without knowing the ground truth of a dataset, then, how do we know the optimal number of clusters? We will look at two particularly popular methods for answering this question: the elbow method and the silhouette method.
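As a quick illustration of the second of these methods, here is a minimal, self-contained silhouette-score implementation (our own sketch, not the article's code, using made-up toy points): for each point, `a` is its mean distance to its own cluster and `b` its mean distance to the nearest other cluster, and the score `(b - a) / max(a, b)` averaged over all points is higher when clusters are compact and well separated. Computing it for several candidate cluster counts and picking the highest average score is how the silhouette method chooses `k`.

```python
import math

def silhouette(points, labels):
    """Mean silhouette coefficient of a clustering; in [-1, 1], higher is better."""
    scores = []
    for i, p in enumerate(points):
        # a: mean distance to the other points in p's own cluster.
        own = [math.dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own)
        # b: mean distance to the nearest *other* cluster.
        b = min(
            sum(math.dist(p, q) for j, q in enumerate(points) if labels[j] == lab)
            / labels.count(lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated toy clusters -> score close to 1.
points = [(0, 0), (0, 1), (10, 10), (10, 11)]
labels = [0, 0, 1, 1]
print(round(silhouette(points, labels), 2))
```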
Continue reading “Must-Know: How to determine the most useful number of clusters?”
- The system is powered by Google Cloud technologies and works on any HDMI-ready display, such as a grocery store aisle “end cap”, a restaurant menu board, or even an interactive cinema poster.
- Integration with other retail systems lets the same approach deliver inventory and sales data, creating both messaging that is more valuable to the shopper, and data that is more valuable to the retailer.
- “We kicked off a rapid iteration process in the spring of 2015 and had our first prototype that fall,” said Greg Chambers, global group director of digital innovation at Coca-Cola, during a presentation at the Google Cloud Next conference in San Francisco.
- Proximity technology leverages built-in smartphone features and Google’s Eddystone wireless beacon technology, allowing a store to receive and interpret a nearby user’s preferences and habits to deliver contextually relevant content in real time.
- Given the scale of Google’s marketing clout and technology development, this should be treated as a play for the final step in the shopper’s journey.
Coca-Cola pioneers personalised displays in-store with Google AI (Digital Strategy Consulting) – Digital advertising solutions in-store are heading for a massive shake-up, as shopper marketing techniques start to apply web approaches to personalisation. Coca-Cola has launched in-store display systems that show personalised messages to approaching shoppers, based on data on their smartphones.
Continue reading “Coca-Cola pioneers personalised displays in-store with Google AI”
- Machine intelligence is here, and we’re already using it to make subjective decisions.
- But the complex way AI grows and improves makes it hard to understand and even harder to control.
- In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for.
- “We cannot outsource our responsibilities to machines,” she says.
- This talk was presented at an official TED conference, and was featured by our editors on the home page.
“We cannot outsource our responsibilities to machines,” techno-sociologist Zeynep Tufekci warns. “We must hold on ever tighter to human values and human ethics.”
Continue reading “Zeynep Tufekci: Machine intelligence makes human morals more important”
- Bryan Goodman, an engineer with Argo AI / Ford Motor Company, spoke at the GPU Technology Conference last week about how his team applies deep learning techniques originally built for self-driving cars to detecting specific race cars in images.
- Ford’s deep learning neural network was trained on a manual training set of thousands of images labeled by humans.
- Goodman’s team suspected these were the items that the neural network prioritized in order to obtain such good results.
- “Sometimes I hear people describe machine learning and, in particular, deep neural networks as a black box,” said Goodman.
- The Pittsburgh-based Argo AI team is now working alongside the autonomous driving team at Ford.
When your race car is flying around the track at nearly 200 miles per hour, anything out of order — even a candy wrapper stuck to the grill — can pose a danger to both car and driver.
Continue reading “How AI Helps Keep NASCAR Drivers Safe”
- The video features an episode of the stoner-favorite television show The Joy of Painting with Bob Ross, processed through Google’s neural network DeepDream.
- DeepDream is a convolutional neural network, a style of computing inspired by the brain, that identifies and recognizes images and patterns.
- As Reben explains in the description: “This artwork represents what it would be like for an AI to watch Bob Ross on LSD (once someone invents digital drugs).
- The unique characteristics of the human voice are learned and generated as well as hallucinations of a system trying to find images which are not there.”
- Google made the code for DeepDream open-source, meaning there are plenty of videos, images, and apps that utilize it.
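The pattern-hallucination effect described above comes from gradient ascent on the input: DeepDream repeatedly nudges an image so that chosen neurons in a trained network fire more strongly, amplifying whatever patterns those neurons have learned. A heavily simplified toy sketch of that core loop (our illustration, not Google's open-source code, with a one-number "image" and a hand-made stand-in for a neuron's response):

```python
# Toy DeepDream loop: climb the gradient of an "activation" with respect
# to the input, so the input drifts toward what the neuron responds to.

def activation(x):
    # Stand-in for one neuron's response; a real DeepDream uses layers of a
    # deep convolutional network. This toy peaks when the input is 3.0.
    return -(x - 3.0) ** 2

def dream(x, steps=100, lr=0.1, eps=1e-5):
    for _ in range(steps):
        # Numerical gradient of the activation with respect to the input.
        grad = (activation(x + eps) - activation(x - eps)) / (2 * eps)
        x += lr * grad  # move the input toward a stronger activation
    return x

print(round(dream(0.0), 2))  # climbs toward the activation peak near 3.0
```

In the real system the "input" is every pixel of a video frame and the gradient comes from backpropagation through the network, which is why faces and animal shapes bloom out of Bob Ross's canvas.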
Ever wondered what it would be like for artificial intelligence to trip-out while watching Bob Ross paint a pretty picture?
Continue reading “Watch Artificial Intelligence Lose Its Mind While Watching Bob Ross”