The evolution of smart tech: What will our cities look like in 2025?

  • Many companies investing in smart city technologies are too focused on the consumer side of IoT (the “flashy side”) and are hindered by outdated, inefficient backend infrastructure, which is forcing them to rethink their strategies.
  • Because things will inevitably go wrong in the early years of smart-city and IoT adoption, it will take a select few first movers to accept the risk and carve the path for others.
  • To reach the next phase of smart cities, companies in the on-demand ecosystem will need to optimize their own resources with dynamic technology that makes their processes more efficient and effective.
  • Despite what many might believe, we’ll likely see rural areas, not major cities, adopt smart technologies such as delivery drones and autonomous vehicles first.
  • Rural areas not only offer open skies and sparse populations; these communities also stand to benefit the most from optimized smart technologies that deliver efficient, low-cost, and timely services.


Smart Cities
Continue reading “The evolution of smart tech: What will our cities look like in 2025?”

Teaching machines to predict the future

  • One approach is to have humans label the scene for the computer in advance, but that is impractical for predicting actions at a large scale.
  • Computer systems that predict actions would open up new possibilities ranging from robots that can better navigate human environments, to emergency response systems that predict falls, to Google Glass-style headsets that feed you suggestions for what to do in different situations.
  • In a second study, the algorithm was shown a single frame from a video and asked to predict which object would appear five seconds later.
  • When shown a video of people one second away from performing one of the four actions, the algorithm correctly predicted the action more than 43 percent of the time, versus 36 percent for existing algorithms.
  • After training the algorithm on 600 hours of unlabeled video, the team tested it on new videos showing both actions and objects (a minimal code sketch of this self-supervised setup follows below).

A deep-learning vision system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) anticipates human interactions using videos of TV shows.
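
The summaries above describe the core trick: predicting a future visual representation from unlabeled video rather than relying on human labels. Below is a minimal, hypothetical PyTorch sketch of that idea; the `FuturePredictor` class, layer sizes, and training loop are illustrative assumptions, not the authors’ actual architecture.

```python
import torch
import torch.nn as nn

class FuturePredictor(nn.Module):
    """Hypothetical sketch: encode the current frame, regress the
    representation of a frame ~1 second in the future, and classify
    that predicted representation into one of four actions."""
    def __init__(self, feat_dim=128, num_actions=4):
        super().__init__()
        # Frame encoder: a tiny conv net standing in for a real vision backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Regresses the future feature vector from the current one.
        self.forecast = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Maps a (predicted) feature vector to action logits.
        self.classifier = nn.Linear(feat_dim, num_actions)

    def forward(self, frame):
        current = self.encoder(frame)
        future = self.forecast(current)
        return self.classifier(future), future

model = FuturePredictor()

# Self-supervised step: pairs of frames one second apart, no human labels.
# Random tensors stand in for real video frames here.
frame_now = torch.randn(8, 3, 64, 64)
frame_later = torch.randn(8, 3, 64, 64)

logits, predicted_future = model(frame_now)
with torch.no_grad():
    target_future = model.encoder(frame_later)  # representation of the actual future

# Train the forecaster to match the real future representation.
loss = nn.functional.mse_loss(predicted_future, target_future)
loss.backward()
```

In this framing, the classifier head on the predicted future representation is what lets the system name an action before it happens; swapping that head for an object classifier gives the kind of five-seconds-ahead object prediction described in the second study.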
Continue reading “Teaching machines to predict the future”