The evolution of smart tech: What will our cities look like in 2025?


  • Many companies investing in smart city technologies are too focused on the consumer side of IoT (the “flashy side”) and are hindered by outdated, inefficient backend infrastructure, forcing them to rethink their strategies.
  • Knowing that things will go wrong in the early years of smart city and IoT adoption, it will take a select few first movers to accept the risk and carve the path for others.
  • In order to reach the next phase of “smart cities,” companies involved in the on-demand ecosystem will need to optimize their own resources through a dynamic technology that enables more efficient and effective processes.
  • Despite what many might believe, we’ll likely see rural areas – not major cities – adopt smart technologies such as delivery drones and autonomous vehicles first.
  • Rural areas not only provide open skies and sparse populations; these communities also stand to benefit the most from optimized smart technologies that deliver efficient, low-cost, and timely services.


Smart Cities
Continue reading “The evolution of smart tech: What will our cities look like in 2025?”

Apache Spark™ 2.0: Machine Learning. Under the Hood and Over the Rainbow.

Nick describes some of the new features in the 2.0 release. #ApacheSpark #machinelearning

  • We’ll cast our minds forward to what may lie ahead for version 2.1 and beyond.
  • One of the main goals of the machine learning team at the Spark Technology Center is to continue to evolve Apache Spark as the foundation for end-to-end, continuous, intelligent enterprise applications.
  • It pays to be as communication-efficient as possible when constructing such an algorithm.
  • Linear models, such as logistic regression, are the workhorses of machine learning.
  • The older RDD-based API in the mllib package is now in maintenance mode, and the newer DataFrame-based API (in the ml package), with its support for DataFrames and machine learning pipelines, has become the focus of future development for machine learning in Spark.

Now that the dust has settled on Apache Spark™ 2.0, the community has a chance to catch its collective breath and reflect a little on what was achieved for the largest and most complex release in the project’s history.
Continue reading “Apache Spark™ 2.0: Machine Learning. Under the Hood and Over the Rainbow.”
