- The DeepMind Alberta team will be led by UAlberta computing science professors Richard Sutton, Michael Bowling, and Patrick Pilarski.
- “So when we chose to set up our first international AI research office, the obvious choice was his base in Edmonton, in close collaboration with the University of Alberta, which has become a leader in reinforcement learning research thanks to his pioneering work,” said Demis Hassabis, CEO and co-founder of DeepMind.
- Sutton is excited about the opportunity to combine the strength of DeepMind’s work in reinforcement learning with UAlberta’s academic excellence, all without having to leave Edmonton.
- “DeepMind has taken this reinforcement learning approach right from the very beginning, and the University of Alberta is the world’s academic leader in reinforcement learning, so it’s very natural that we should work together,” said Sutton.
- Working with Hassabis and the DeepMind team both in London and Edmonton, Sutton, Bowling, and Pilarski will combine their academic strength in reinforcement learning to focus on basic AI research.
University of Alberta
Continue reading “UAlberta expertise brings DeepMind lab to Edmonton”
AlphaGo will challenge some of China’s top Go players later this month
Google has challenged China’s top Go player to a series of games against its artificial intelligence technology. It said the software would play a best-of-three match against Ke Jie, among other games against humans, in the eastern Chinese city of Wuzhen from 23-27 May. Last year, the Google program recorded a 4-1 victory against one of South Korea’s top Go players. One expert said that result had come as a surprise.
AlphaGo won four matches out of five against Lee Se-dol
Google’s AlphaGo software was developed by British computer company DeepMind, which was bought by the US search firm in 2014.
Ke Jie – seen on the far right – met Google chief executive Sundar Pichai in Beijing last year
In addition to the games against Mr Ke, AlphaGo will also:
play games involving one Chinese pro facing off against another, each of whom will have an AlphaGo-powered virtual teammate
challenge a five-person team containing some of China’s top players, who will work together to try to beat the AI
Over the past year, DeepMind’s technology has also been used to find ways to reduce energy bills at Google’s data centres as well as to try to improve care in British hospitals.
- “If it loses this match, a lot of people will be delighted to claim that Google and DeepMind has overpromised and that this is the kind of hype we always get with AI,” commented Mr Chace.
Media caption: A brief guide to Go
Go is thought to date back several thousand years in China. Using black-and-white stones on a grid, players gain the upper hand by surrounding their opponents’ pieces with their own.

The rules are simpler than those of chess, but a player typically has a choice of about 200 moves, compared with about 20 in chess; there are more possible positions in Go than atoms in the universe, according to DeepMind’s team. That means a computer cannot win simply via brute force, searching through the consequences of millions of moves in seconds. It can be very difficult to determine who is winning, and many of the top human players rely on instinct.

To prepare for its victory over Lee Se-dol, DeepMind trained its software on 30 million expert moves and then set the machine to play against itself millions of times to get a sense of which strategies worked. As a result, some of the innovative moves AlphaGo made in its landmark match were described by observers as “beautiful” and highly unusual.
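The branching-factor comparison above can be made concrete with a little arithmetic. A minimal sketch (the ~200 and ~20 figures are the rough averages cited in the article, not exact game-tree constants):

```python
# Rough comparison of exhaustive-search sizes for Go vs chess,
# using the article's approximate branching factors: ~200 legal
# moves per turn in Go versus ~20 in chess.
def sequences(branching: int, depth: int) -> int:
    """Number of move sequences an exhaustive search would explore."""
    return branching ** depth

chess_10 = sequences(20, 10)   # about 10^13 sequences at 10 plies
go_10 = sequences(200, 10)     # about 10^23 sequences at 10 plies

print(f"chess, 10 plies: {chess_10:.2e}")
print(f"go,    10 plies: {go_10:.2e}")
print(f"go/chess ratio:  {go_10 // chess_10:.1e}")
```

Even at a shallow ten moves ahead, Go's search space is ten billion times larger than chess's, which is why AlphaGo relies on learned evaluation rather than brute force.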
AlphaGo will soon challenge a Chinese teenager, recognised by many as the world’s top player.
Continue reading “Google’s AI seeks further Go glory”
- DeepMind is using dreams in a parallel fashion, accelerating the rate at which an AI learns by focusing on the negative or challenging content of a situation within a game.
- A snapshot of the method published by the DeepMind researchers to enable AI “dreams”.
- You might ask why AI “dreams” are necessary, given that machines can already dominate humans in games such as chess and Go.
- Google’s DeepMind AI gives robots the ability to dream
- One of the primary discoveries scientists made when seeking to understand the role of dreams from a neuroscientific perspective was that the content of dreams is primarily negative or threatening.
Thanks to Google’s DeepMind AI, robots can now dream, significantly increasing the speed at which they can learn and ultimately …
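The "dreaming" described above amounts to replaying stored experiences, biased toward the negative or challenging ones. A minimal sketch of that idea, with an illustrative priority scheme that is not DeepMind's published method:

```python
import random

# Minimal prioritized experience replay sketch. The agent "dreams" by
# re-sampling stored transitions, biased toward high-priority ones
# (e.g. surprising or threatening outcomes). The priorities and
# experience labels here are made up for illustration.
class ReplayBuffer:
    def __init__(self):
        self.transitions = []  # (experience, priority) pairs

    def add(self, experience: str, priority: float) -> None:
        self.transitions.append((experience, priority))

    def sample(self, k: int) -> list:
        """Sample k experiences with probability proportional to priority."""
        experiences = [e for e, _ in self.transitions]
        weights = [p for _, p in self.transitions]
        return random.choices(experiences, weights=weights, k=k)

buf = ReplayBuffer()
buf.add("routine step", priority=0.1)
buf.add("near-death encounter", priority=5.0)
batch = buf.sample(10)  # typically dominated by the high-priority event
```

Replaying a rare, difficult moment many times lets the learner extract more from it than a single pass through the game would, which is the speed-up the coverage describes.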
Continue reading “Google’s DeepMind AI gives robots the ability to dream”
- In return, DeepMind gets access to records belonging to over 1.6 million patients who are registered with one of the Royal Free NHS Trust’s three London hospitals.
- It also points out that patient data is encrypted and is used only by DeepMind, not by the wider Google organization.
- Smart machines are beginning to speak to us and act on their own.
- Machine learning will alert medics to early signs of illness, but some critics argue that too much data is being shared.
- The project will provide medics across a number of London hospitals with alerts about patients via an app called Streams.
Machine learning will alert medics to early signs of illness, but some critics argue that too much data is being shared.
Continue reading “DeepMind’s health-care app has some concerned about patient privacy”
- Google-owned DeepMind recently bolstered its AI to make it learn new tricks faster.
- Artificial intelligence capable of teaching itself new things can be seen as a troublesome development.
- Google’s DeepMind AI Is Now Capable Of Self-Teaching New Things
- Increasing the performance of this AI solution is of the utmost importance, even though its track record speaks for itself.
- One of the primary selling points of artificial intelligence is how this technology can learn over time.
One of the primary selling points of artificial intelligence is how this technology can learn over time. Google-owned DeepMind recently bolstered its AI to make it learn new tricks faster. According to tests, DeepMind can now reach close to 87% of expert human performance in games. This is an exciting development, although its real-life use cases remain to be determined.
Continue reading “Google’s DeepMind AI Is Now Capable Of Self-Teaching New Things – The Merkle”
- The Deep Learning Summit is the next revolution in artificial intelligence.
- At the 2016 Deep Learning Summit in London, Oriol Vinyals presented ‘Generative Models 101’, exploring how generative models can be used to guide our intuitions towards better architectures for text, images, and beyond.
- The next Deep Learning Summit takes place in San Francisco on 26-27 January!
- Not a week goes by without news of interesting projects and developments in deep learning by Google’s DeepMind.
- The increasingly popular branch of machine learning explores advances in methods such as image analysis, speech and pattern recognition, natural language processing, and neural network research.
Not a week goes by without news of interesting projects and developments in deep learning by Google’s DeepMind. View an interview with Oriol Vinyals, Senior Research Scientist at DeepMind, who is one of this year’s 35 Innovators Under 35, for his pioneering work in creating new techniques for language translation, and pushing the edge of science.
Continue reading “RE•WORK”
- Blizzard and DeepMind have created an open test environment within the StarCraft II game for artificial intelligence researchers to use worldwide.
- Google’s DeepMind has announced that it will be making use of game development studio Blizzard’s StarCraft II game as a testing platform for artificial intelligence (AI) and machine-learning research, opening the environment worldwide.
- StarCraft II is closer to a real-world environment than any other game it has used for testing so far, DeepMind said, as it is played in real-time.
- “Games are the perfect environment in which to do this, allowing us to develop and test smarter, more flexible AI algorithms quickly and efficiently, and also providing instant feedback on how we’re doing through scores.”
Blizzard and DeepMind have created an open test environment within the StarCraft II game for artificial intelligence researchers to use worldwide.
Continue reading “Google’s DeepMind turns to StarCraft II after conquering Go”
- A computer enemy is learning how to win virtual wars
- The Starcraft games always shipped with programming that allowed human players to try their hand against automated rivals, but while that AI could be set to different difficulty levels, it played essentially the same way each time.
- No, today Blizzard, the makers of Starcraft and its successors, are opening up the game to an AI that learns.
- Starcraft is a game that’s simple to pick up, but takes a tremendous amount of practice to master.
- The first game and its expansion defined a generation of videogames-as-sports, and the latest incarnations are still played at the highest level.
AI is going to learn how to wage war in video games.
Continue reading “Blizzard Opens Up Starcraft To Google’s DeepMind AI”
- DeepMind says its new AI model, called a differentiable neural computer (DNC), can be fed with things like a family tree and a map of the London Underground network, and can answer complex questions about the relationships between items in those data structures.
- By augmenting an AI’s capabilities with the power of learning from memory, it’ll likely be able to complete far more complex tasks on its own.
- DeepMind, an artificial intelligence firm that was acquired by Google in 2014 and is now under the Alphabet umbrella, has developed a computer that can refer to its own memory to learn facts and use that knowledge to answer questions.
- It’s the networks that helped DeepMind’s AlphaGo AI defeat world champions at the complex game of Go.
DeepMind has developed a computer that can refer to its own memory to learn facts and use that knowledge to answer questions.
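To see the kind of relational question a DNC answers after being fed a family tree, here is a hand-coded graph traversal. The names and tree are invented, and this is an illustration of the query, not of the DNC itself: the DNC's point is that it learns this traversal from examples rather than being programmed with it.

```python
# A toy family tree of the sort described in the article. Following
# two "parent" links answers a grandparent query. A DNC would learn
# this multi-hop lookup from data; here it is written by hand purely
# to show what the query computes.
family = {
    "Alice": {"parent": ["Carol"]},
    "Bob": {"parent": ["Carol"]},
    "Carol": {"parent": ["Eve"]},
}

def grandparents(person: str) -> list:
    """Return everyone reachable by following two 'parent' links."""
    result = []
    for p in family.get(person, {}).get("parent", []):
        result.extend(family.get(p, {}).get("parent", []))
    return result

print(grandparents("Alice"))  # ['Eve']
```

The same two-hop pattern applies to the London Underground example: answering "which station is two stops from here?" means chaining lookups through stored structure, which is exactly what the DNC's external memory makes possible.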
Continue reading “DeepMind’s new computer can learn from its own memory”