- The second is to have humans label the scene for the computer in advance, which is impractical for predicting actions at a large scale.
- Computer systems that predict actions would open up new possibilities ranging from robots that can better navigate human environments, to emergency response systems that predict falls, to Google Glass-style headsets that feed you suggestions for what to do in different situations.
- In a second study, the algorithm was shown a frame from a video and asked to predict what object would appear five seconds later.
- When shown a video of people one second away from performing one of the four actions, the algorithm correctly predicted the action more than 43 percent of the time, compared with existing algorithms, which managed only 36 percent.
- After training the algorithm on 600 hours of unlabeled video, the team tested it on new videos showing both actions and objects.
Deep-learning vision system from the Computer Science and Artificial Intelligence Lab anticipates human interactions using videos of TV shows.