What Should Mixed Reality UI Look Like? [Concept] – ART + marketing

  • While virtual and mixed reality experiences are trending right now (we’ve seen a lot of cool examples in movies), I feel that there’s a lack of convergence on practical interaction patterns.
  • We haven’t seen the iPhone of mixed reality yet, so I decided to explore the user experience and interface aesthetics of mixed reality and share my ideas with the community.
  • My goal is to encourage other designers to think about and publish ideas on MR interfaces. As technology becomes invisible at the perceptual and cognitive levels, interaction becomes completely natural and spontaneous.
  • One of the attributes of good interaction design is that it allows for Natural User Interfaces: those which are invisible to the user, and remain invisible as we learn them.
  • Some examples of these interfaces are speech recognition, direct manipulation, and gestures.
  • Apps as Objects: I started by looking into an interaction that felt very natural: browsing records. I found this interaction interesting because of the following: direct manipulation of the catalog, perception of progress while browsing, a full visual of the selected item, and a minimal footprint for the scrolled items.
  • I was thinking of a way to apply these principles to an interaction for browsing and launching apps in a mixed reality environment. In this case, the app cards are arranged in a stack and placed below the user’s point of view, at a comfortable reach distance.

While virtual and mixed reality experiences are trending right now (we’ve seen a lot of cool examples in movies), I feel that there’s a lack in convergence of practical interaction patterns. We haven…
Continue reading “What Should Mixed Reality UI Look Like? [Concept] – ART + marketing”

The Air Force and IBM are building an AI supercomputer

The Air Force and @IBM are building an #AI supercomputer #techradio

  • IBM and the USAF announced on Friday that the machine will run on an array of 64 TrueNorth Neurosynaptic chips.
  • The TrueNorth chips are wired together, and operate, much like the synapses within a biological brain.
  • That is, these chips, unlike conventional CPUs, don’t require a clock to function. What’s more, because of the distributed nature of the system, even if one core fails, the rest of the array will continue to work (a toy sketch of this kind of clockless, event-driven computation follows this list).
  • This 64-chip array will contain the processing equivalent of 64 million neurons and 16 billion synapses, yet it absolutely sips energy: each processor consumes just 10 watts of electricity. Like other neural networks, this system will be put to use in pattern recognition and sensory processing roles.
  • The Air Force wants to combine TrueNorth’s ability to convert multiple data feeds — whether it’s audio, video, or text — into machine-readable symbols with a conventional supercomputer’s ability to crunch data. This isn’t the first time that IBM’s neural chip system has been integrated into cutting-edge technology.
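
The excerpt describes TrueNorth’s clockless, brain-like operation only at a high level, and TrueNorth’s actual programming model isn’t covered here. As a loose, hypothetical illustration of what event-driven spiking computation looks like, here is a minimal leaky integrate-and-fire neuron in Python; all parameters below are invented for the sketch:

```python
# Loose illustration only: a leaky integrate-and-fire neuron, the style of
# event-driven unit that neuromorphic chips implement in silicon. Nothing here
# is TrueNorth-specific; leak, threshold, and weight are arbitrary choices.

def simulate_lif(input_spikes, leak=0.9, threshold=1.0, weight=0.4):
    """Return the time steps at which the neuron fires for a binary spike train."""
    potential = 0.0
    fired_at = []
    for t, spike in enumerate(input_spikes):
        potential = potential * leak + weight * spike  # integrate input, with leak
        if potential >= threshold:                     # fire and reset
            fired_at.append(t)
            potential = 0.0
    return fired_at

print(simulate_lif([1, 1, 1, 0, 1, 1, 1, 1]))  # -> [2, 6]
```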

Supercomputers today are capable of performing incredible feats, from accurately predicting the weather to uncovering insights into climate change, but they sti…
Continue reading “The Air Force and IBM are building an AI supercomputer”

Stupid TensorFlow tricks – Towards Data Science – Medium

A new take on an old (Thomson) problem using #TensorFlow

  • I wanted to see how far I could push this idea. [Figure: electrostatic charge configuration for N=625 in equilibrium.]
  • Probably not. The Thomson problem is a classical physics question: “What configuration of N positive charges on the unit sphere minimizes the energy?”
  • N=11 puts the charges in a configuration that completely breaks the symmetry — while the charges are in equilibrium, they are distributed in such a way that there are more on one side than the other; it has a net dipole moment! Solving this in TF is surprisingly easy (see the sketch after this list).
  • For any value of N, we can converge to a stable energy minimum in a matter of seconds, and we can refine it to full floating-point precision in a matter of minutes by tapering down the learning rate.
  • That’s an impressive 10x speedup! [Figure: minimal energy for N=100 charges, prettified.] Visualizing the configurations illustrates the regularity and the apparent symmetry, even if we are content knowing that it might not be the global minimum.
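
The article’s own code isn’t reproduced in this excerpt, but the approach it describes (treat the charge positions as trainable variables, project them onto the unit sphere, and let gradient descent minimize the pairwise Coulomb energy) is easy to sketch. The following is a minimal, hypothetical TF2 version; N, the optimizer, the learning rate, and the step count are arbitrary stand-ins, not the author’s settings:

```python
# Minimal sketch of the Thomson problem in TensorFlow 2 (hypothetical, not the
# article's code): minimize the sum of 1/r over pairs of N charges on a sphere.
import tensorflow as tf

N = 100  # number of charges (arbitrary)

# Free parameters: one 3D vector per charge; normalization inside the loss
# enforces the unit-sphere constraint implicitly.
points = tf.Variable(tf.random.normal([N, 3]))

def energy():
    unit = points / tf.norm(points, axis=1, keepdims=True)
    diff = unit[:, None, :] - unit[None, :, :]             # pairwise differences
    sq = tf.reduce_sum(diff * diff, axis=-1) + tf.eye(N)   # 1s on the diagonal avoid NaN gradients
    inv = (1.0 - tf.eye(N)) / tf.sqrt(sq)                  # 1/r off-diagonal, 0 on the diagonal
    return tf.reduce_sum(inv) / 2.0                        # count each pair once

opt = tf.keras.optimizers.Adam(learning_rate=0.1)          # taper this to refine the minimum
for _ in range(2000):
    with tf.GradientTape() as tape:
        loss = energy()
    opt.apply_gradients(zip(tape.gradient(loss, [points]), [points]))

print("final energy:", float(energy()))
```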

Is Google’s machine intelligence library TensorFlow (TF) good for something beyond deep learning? How well can it tackle a classic physics problem?
Continue reading “Stupid TensorFlow tricks – Towards Data Science – Medium”

The Society of Mind: A Free Online Course from Marvin Minsky, Pioneer of Artificial Intelligence

Society of Mind: A Free Online Course by Marvin Minsky, Pioneer of Artificial Intelligence

  • Minsky was educated at Harvard and Princeton. The MIT Technology Review recalls: “Minsky believed that the human mind was fundamentally no different than a computer, and he chose to focus on engineering intelligent machines, first at Lincoln Lab, and then later as a professor at MIT, where he cofounded the Artificial Intelligence Lab in 1959 with another pioneer of the field, John McCarthy.”
  • During the 1980s, Minsky published The Society of Mind, a seminal work which posited that there’s no essential difference between humans and machines, because humans are “actually machines of a kind whose brains are made up of many semiautonomous but unintelligent ‘agents’.”
  • Above, you can watch The Society of Mind taught as a free online course.
  • In addition to The Society of Mind, the course also centers around another book by Minsky, The Emotion Machine, which you can read free online here.
  • Minsky’s course will be added to our collection, 1200 Free Online Courses from Top Universities.

This past weekend, Marvin Minsky, one of the founding fathers of computer science, passed away at the age of 88. Minsky was educated at Harvard and Princeton. The MIT Technology Review recalls: “Minsky believed that the human mind was fundamentally no different than a computer, and he chose to focus on engineering intelligent machines, first at Lincoln Lab, and then later as a professor at MIT, where he cofounded the Artificial Intelligence Lab in 1959 with another pioneer of the field, John McCarthy.” During the 1980s, Minsky published The Society of Mind, a seminal work which posited that there’s no essential difference between humans and machines, because humans are “actually machines of a kind whose brains are made up of many semiautonomous but unintelligent ‘agents’.” (The quote comes from this NYTimes obit, not from Minsky directly.)
Continue reading “The Society of Mind: A Free Online Course from Marvin Minsky, Pioneer of Artificial Intelligence”

The Practical Importance of Feature Selection

#ICYMI The Practical Importance of Feature Selection  #MachineLearning

  • Feature selection is useful on a variety of fronts: it is the best weapon against the Curse of Dimensionality; it can reduce overall training times; and it is a powerful defense against overfitting, increasing model generalizability.
  • Often, correct feature selection allows you to develop simpler and faster Machine Learning models (a minimal example follows this list).
  • In a time when ample processing power can tempt us to think that feature selection may not be as relevant as it once was, it’s important to remember that this only accounts for one of the numerous benefits of informed feature selection — decreased training times.
  • As Zimbres notes above, with a simple concrete example, feature selection can quite literally mean the difference between valid, generalizable models and a big waste of time.
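
The post stays agnostic about which selection method to use. Purely as a hypothetical illustration, here is a minimal filter-style example with scikit-learn; the synthetic dataset, the choice of mutual information as the scoring function, and k=5 are arbitrary stand-ins:

```python
# Hypothetical illustration (not from the post): filter-style feature selection.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic data: 20 features, only 5 of which carry signal.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# Keep the 5 features with the highest mutual information with the target.
selector = SelectKBest(score_func=mutual_info_classif, k=5)
X_reduced = selector.fit_transform(X, y)

print(X_reduced.shape)         # (500, 5)
print(selector.get_support())  # boolean mask of the kept features
```

Downstream models then train on X_reduced instead of X, which is where the shorter training times and reduced overfitting come from.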


Feature selection is useful on a variety of fronts: it is the best weapon against the Curse of Dimensionality; it can reduce overall training times; and it is a powerful defense against overfitting, increasing generalizability.

Continue reading “The Practical Importance of Feature Selection”

How Atomic AI Measures The Emotion In Your Content

How Atomic #AI Measures The Emotion In Your #Content - by @suhash_talwar @atomic_reach

  • The emotional factor was measured on a per-article basis, using “hot” and “cold” as indicators of the overall emotional intensity of a piece of content.
  • The user would have no idea which words to change, or what to replace them with, to raise or lower the overall emotional intensity of the article.
  • Soon after, we updated the feature to flag specific words as hot (emotional) or cold (not emotional), so that users had a better idea of which words were contributing to the level of emotion within a piece of content (a toy version of this word-level flagging is sketched after this list).
  • We noticed that our users would look at the emotional intensity of a piece of content, then haphazardly try to replace words.
  • After various iterations, the team finally settled on a solid model for measuring emotion and baked in the ability to provide recommendations for either increasing or decreasing the emotional intensity of a word.
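
Atomic AI’s actual model is proprietary and not described in the excerpt. Purely as a toy stand-in, word-level hot/cold flagging can be pictured as a lexicon lookup against a threshold; the lexicon, scores, and threshold below are all invented:

```python
# Toy stand-in for word-level emotion flagging; not Atomic AI's model.
EMOTION_SCORES = {  # invented lexicon: word -> emotional intensity in [0, 1]
    "amazing": 0.9, "love": 0.8, "good": 0.4,
    "report": 0.1, "data": 0.05,
}
HOT_THRESHOLD = 0.5  # arbitrary cutoff between "hot" and "cold"

def flag_words(text):
    """Label each word "hot" (emotional) or "cold" (not emotional)."""
    return [
        (w, "hot" if EMOTION_SCORES.get(w.lower(), 0.0) >= HOT_THRESHOLD else "cold")
        for w in text.split()
    ]

print(flag_words("Love the amazing data report"))
# [('Love', 'hot'), ('the', 'cold'), ('amazing', 'hot'), ('data', 'cold'), ('report', 'cold')]
```

A recommendation feature like the one described would then suggest swapping a cold word for a hotter near-synonym, or vice versa, to move the article’s overall score in the desired direction.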

How Atomic AI Measures The Emotion In Your Content
Continue reading “How Atomic AI Measures The Emotion In Your Content”

Taxonomy of Methods for Deep Meta Learning

Taxonomy of Methods for Deep Meta Learning #NeuralNetworks #DeepLearning

  • A recent paper, “Evolving Deep Neural Networks,” provides a comprehensive list of the global parameters typically used in conventional search approaches (e.g., learning rate), as well as hyperparameters that describe the architecture of the Deep Learning network in more detail.
  • Two recent papers that were submitted to ICLR 2017 explore the use of Reinforcement learning to learn new kinds of Deep Learning architectures (“Designing Neural Network Architectures using Reinforcement Learning” and “Neural Architecture Search with Reinforcement Learning”).
  • The first paper describes the use of reinforcement Q-learning to discover CNN architectures (you can find some of their generated CNNs implemented in Caffe) and details the different parameters that are sampled by the MetaQNN algorithm.

    The second paper (“Neural Architecture Search”) uses Reinforcement Learning (RL) to train an architecture-generator LSTM that builds a language describing new DL architectures.

  • All the above approaches employ different search mechanisms (e.g., grid search, Gaussian Processes, evolution, Q-learning, policy gradients) to discover better configurations among the many generated architectures (a generic sketch of this propose-and-evaluate loop follows this list).
  • My previous post, “A Language Driven Approach to Deep Learning Training,” offers a glimpse of a DSL-driven architecture and presents a quite general prescription.
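
Whatever the search mechanism (grid, evolution, Q-learning, or policy gradients), these methods share a propose-and-evaluate skeleton. The following hypothetical sketch uses plain random search over an invented search space, with a mock evaluator standing in for an expensive training run:

```python
# Hypothetical skeleton shared by these methods: a search mechanism proposes
# configurations and an evaluator scores them. Random search stands in for the
# fancier mechanisms; the search space and scoring are invented for the sketch.
import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "num_layers": [2, 4, 8],
    "units_per_layer": [64, 128, 256],
}

def evaluate(config):
    """Stand-in for training a network and returning a validation score."""
    # A real system would train and validate a model here; this mock simply
    # prefers mid-sized networks with a moderate learning rate.
    return -abs(config["num_layers"] - 4) - 100 * abs(config["learning_rate"] - 1e-3)

def random_search(n_trials=20):
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

print(random_search())
```

The RL-based papers replace `random.choice` with a learned policy that is rewarded by the evaluator’s score, so the proposer improves as the search proceeds.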


This post discusses a variety of contemporary Deep Meta Learning methods, in which meta-data is manipulated to generate candidate architectures. Current meta-learning capabilities involve either support for architecture search or for networks inside networks.

Continue reading “Taxonomy of Methods for Deep Meta Learning”