Featured

Artificially Intelligent Painters: can deep learning AI create the next Mona Lisa?

Neural Style

If you have ever used Instagram or Snapchat, you are familiar with filters that alter the brightness, saturation, contrast, and so on of your images. Neural style, a deep learning algorithm, goes beyond filters: it lets you take the style of one image, perhaps Van Gogh’s “Starry Night,” and apply that style to any other image.

Neural style, one of many models available on Somatic.io, uses a deep neural network to separate and recombine the content and style of any two images. It is one of the first artificial neural networks (ANNs) to provide an algorithm for the creation of artistic imagery.

[Figure: convolutional neural network]

How Does It Work?

The model is given two input images, one that will be used for styling, the other for content. At each processing stage in the convolutional neural network’s (CNN) hierarchy, the images are broken into a set of filtered images. While the number of different filters increases along the processing hierarchy, the overall size of the filtered images is reduced, leading to a decrease in the total number of units per layer of the network.

The above figure visualizes the information at different processing stages in the CNN. The content reconstructions from the lower layers (a, b, c) are almost exact replicas of the original image. In the higher layers of the network, however, detailed pixel information is lost while the high-level content of the image is preserved (d, e). Meanwhile, a separate style representation is built on top of the CNN activations; it computes correlations between the different filter responses in different layers of the network. Reconstructing the style of the input image from subsets of these layers produces images that match the style on an increasing scale as you move up the network’s hierarchy.
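This separation of content and style corresponds to a concrete loss function. Below is a minimal NumPy sketch following the loss formulation in Gatys et al.’s neural style paper, which this model is based on; the layer name, weights, and equal per-layer weighting are illustrative assumptions, not the exact values the Somatic.io model uses.

```python
import numpy as np

def gram_matrix(features):
    # features: (num_filters, height * width) activations from one CNN layer.
    # The Gram matrix holds the correlations between filter responses,
    # which is what the style representation measures.
    return features @ features.T

def neural_style_loss(content_feats, style_feats, generated_feats,
                      content_layer="conv4_2", alpha=1.0, beta=1e3):
    # Content loss: match the activations of one higher layer directly,
    # so the high-level structure of the content image is preserved.
    c = content_feats[content_layer]
    g = generated_feats[content_layer]
    content_loss = 0.5 * np.sum((g - c) ** 2)

    # Style loss: match Gram matrices across several layers, so the style is
    # reproduced at increasing scales up the network's hierarchy.
    style_loss = 0.0
    for layer, s in style_feats.items():
        n_filters, n_positions = s.shape
        G = gram_matrix(generated_feats[layer])
        A = gram_matrix(s)
        style_loss += np.sum((G - A) ** 2) / (4 * n_filters**2 * n_positions**2)
    style_loss /= len(style_feats)  # weight the style layers equally

    # The generated image is iteratively optimized to minimize this total.
    return alpha * content_loss + beta * style_loss
```

The generated image starts from noise (or from the content image) and is updated by gradient descent on this loss until it carries the content of one photo in the style of the other.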

convolutional neural network layers

Try It Out!

Experiment with the model for yourself. All you need to do is select an image you want to use for style and another one for the content. Here are some of the latest creations the model has generated.


New Machine Learning Cheat Sheet by Emily Barry

New #MachineLearning Cheat Sheet by Emily Barry #abdsc

  • This blog about machine learning was written by Emily Barry.
  • Emily is a Data Scientist in San Francisco, California.
  • The more she learns about machine learning algorithms, the more challenging it is to keep these subjects organized in her brain to recall at a later time.
  • This is by no means a comprehensive guide to machine learning, but rather a study in the basics for herself and the likely small overlap of people who like machine learning and love emoji as much as she does.
  • For more articles about machine learning, click here.

This blog about machine learning was written by Emily Barry. Emily is a Data Scientist in San Francisco, California. She really loves emoji. Another thing she…
Continue reading “New Machine Learning Cheat Sheet by Emily Barry”

A Visual Introduction to Machine Learning

A Visual Introduction to Machine Learning | #DataScience #MachineLearning #RT

  • Using a data set about homes, we will create a machine learning model to distinguish homes in New York from homes in San Francisco.
  • Let’s say you had to determine whether a home is in San Francisco or in New York.
  • In machine learning terms, categorizing data points is a classification task. Since San Francisco is relatively hilly, the elevation of a home may be a good way to distinguish the two cities.
  • Based on the home-elevation data to the right, you could argue that a home above 240 ft should be classified as one in San Francisco.
  • The data suggests that, among homes at or below 240 ft, those that cost more than $1776 per square foot are in New York City (see the sketch after this list).
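Those two cut-offs already form a tiny decision rule. Here is a minimal Python sketch of that rule using the article’s example thresholds; the function name and the handling of the remaining ambiguous homes are illustrative assumptions.

```python
def classify_home(elevation_ft, price_per_sqft):
    """Toy classifier built from the article's two example thresholds."""
    if elevation_ft > 240:
        return "San Francisco"   # in the sample data, only SF homes sit above 240 ft
    if price_per_sqft > 1776:
        return "New York"        # low-elevation homes above $1776/sq ft are in NY
    return "uncertain"           # the remaining homes need further splits on other features

print(classify_home(elevation_ft=300, price_per_sqft=1100))  # San Francisco
print(classify_home(elevation_ft=15, price_per_sqft=2400))   # New York
```

Chaining more of these threshold splits on additional features is how the article grows its decision tree.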

This article was written by Stephanie and Tony on R2D3. 
In machine learning, computers apply statistical learning techniques to automatically identify pattern…
Continue reading “A Visual Introduction to Machine Learning”

This interactive map uses machine learning to arrange visually similar fonts

This interactive map uses machine learning to arrange visually similar fonts

  • Typography enthusiasts likely already know how to identify fonts by name, but it’s always useful to explore visually similar fonts when you feel like changing up your options.
  • Design consultant firm IDEO’s Font Map helps you do exactly that, with an interactive tool that lets you browse through fonts by clicking on them and seeing ones nearby that look similar, or by specifically searching for fonts by name.
  • IDEO software designer Kevin Ho built the map using a machine learning algorithm that can sort fonts by visual characteristics, like weight, serif or sans-serif, and cursive or non-cursive (a rough sketch of one such approach follows this list).
  • “Designers need an easier way to discover alternative fonts with the same aesthetic — so I decided to see if a machine learning algorithm could sort fonts by visual characteristics, enabling designers to explore type in a new way,” he wrote in a blog post.
  • Services that compare and suggest visually similar fonts already exist, like Identifont and the blog Typewolf, but IDEO’s tool makes it easy to quickly browse and, at the very least, appreciate all the options out there that help make the web more beautiful.
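The article does not spell out Ho’s exact pipeline, but a common way to build this kind of map is to embed an image of each font and project the embeddings into two dimensions. The sketch below is only one plausible approach under that assumption: it takes a pre-computed array of rendered glyph images (the rendering step is omitted) and uses scikit-learn’s t-SNE to lay the fonts out on a plane.

```python
import numpy as np
from sklearn.manifold import TSNE

# Assumed input: one flattened grayscale image of sample glyphs per font.
# Real data would come from rendering the same text in every font; random
# placeholder values are used here purely so the snippet runs.
font_images = np.random.rand(200, 64 * 64)   # 200 fonts, 64x64 renders

# Project to 2-D so that visually similar fonts land near each other.
coords = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(font_images)

# coords[i] is the (x, y) position of font i on the map.
print(coords.shape)  # (200, 2)
```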

Typography enthusiasts likely already know how to identify fonts by name, but it’s always useful to explore visually similar fonts when you feel like changing up your options. Design consultant…
Continue reading “This interactive map uses machine learning to arrange visually similar fonts”

How to Build a Recurrent Neural Network in TensorFlow

How to Build a Recurrent #NeuralNetwork in TensorFlow

  • The input to the RNN at every time-step is the current value as well as a state vector which represents what the network has “seen” at time-steps before.
  • The weights and biases of the network are declared as TensorFlow variables, which makes them persistent across runs and enables them to be updated incrementally for each batch.
  • Now it’s time to build the part of the graph that resembles the actual RNN computation; first, we want to split the batch data into adjacent time-steps.
  • This is the final part of the graph: a fully connected softmax layer maps the state to the output classes, and the loss of the batch is then calculated (the sketch after this list walks through these steps).
  • It will plot the loss over time and show the training input, the training output, and the network’s current predictions on different sample series in a training batch.
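Pulling those steps together, here is a minimal sketch of the graph construction in the TensorFlow 1.x style the article targets; the hyperparameter values and variable names below are illustrative assumptions, not the article’s exact code.

```python
import numpy as np
import tensorflow as tf  # assumes the TensorFlow 1.x graph API

num_steps, state_size, num_classes, batch_size = 15, 4, 2, 32

# Placeholders for a batch of input series, target series, and the initial state.
x = tf.placeholder(tf.float32, [batch_size, num_steps])
y = tf.placeholder(tf.int32, [batch_size, num_steps])
init_state = tf.placeholder(tf.float32, [batch_size, state_size])

# Weights and biases are tf.Variables, so they persist across runs
# and are updated incrementally for each batch.
W = tf.Variable(np.random.rand(state_size + 1, state_size), dtype=tf.float32)
b = tf.Variable(np.zeros((1, state_size)), dtype=tf.float32)
W2 = tf.Variable(np.random.rand(state_size, num_classes), dtype=tf.float32)
b2 = tf.Variable(np.zeros((1, num_classes)), dtype=tf.float32)

# Split the batch data into a list of adjacent time-steps.
inputs_series = tf.unstack(x, axis=1)
labels_series = tf.unstack(y, axis=1)

# Unroll the recurrence: each step sees the current input plus the previous state.
state = init_state
states_series = []
for current_input in inputs_series:
    current_input = tf.reshape(current_input, [batch_size, 1])
    input_and_state = tf.concat([current_input, state], axis=1)
    state = tf.tanh(tf.matmul(input_and_state, W) + b)
    states_series.append(state)

# Fully connected softmax layer from state to class logits, then the batch loss
# (sparse cross-entropy handles the one-hot encoding of the labels internally).
logits_series = [tf.matmul(s, W2) + b2 for s in states_series]
losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
          for logits, labels in zip(logits_series, labels_series)]
total_loss = tf.reduce_mean(losses)
train_step = tf.train.AdagradOptimizer(0.3).minimize(total_loss)
```

Running the graph in a tf.Session then alternates between feeding batches into train_step and plotting the loss and predictions, as the last bullet describes.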


This is a no-nonsense overview of implementing a recurrent neural network (RNN) in TensorFlow. Both theory and practice are covered concisely, and the end result is running TensorFlow RNN code.

Continue reading “How to Build a Recurrent Neural Network in TensorFlow”

Designing networks for IoT sensors can be a learning process

Designing networks for IoT sensors can be a learning process | #MachineLearning #IoT #RT

  • These are just a few examples of the various Internet of Things (IoT) sensors and other connected devices in Boulder, where electrical, solar and HVAC systems are also tied into IP networks.
  • Designing a wireless network to support these applications was a learning process for the city’s IT department, says Benjamin Edelen, a senior system administrator there.
  • Aimee Schumm, e-services manager at the Boulder Public Library, notes that staff members made sure to tuck access points in places where they couldn’t be reached easily — such as inside ceiling tiles or on the ceiling itself — so they won’t be tampered with.
  • Boulder built out its wireless network with more bandwidth than it needs currently, with the expectation that it will expand its use of IoT sensors and similar technologies in the future.
  • Once the IoT sensors were in place, the various city departments generally took ownership of the data, Edelen says.

As the city of Boulder optimized its wireless network to better support IoT sensors, the city’s IT pros found it had a “significant learning curve.”
Continue reading “Designing networks for IoT sensors can be a learning process”

The 5 Forces Of Artificial Intelligence In B2B Sales

The Power Of #AI In Sales Is Astounding! Are You Up To Date With The Times? >>>>>>>

  • Well, almost…

    “80% Of Marketing Executives Predict Artificial Intelligence Will Revolutionize Marketing by 2020…Yet, Only 10% Are Currently Using It”

    Instead of fearing the likelihood that Terminator may happen in the coming years, I’m going to uncover the specific advantages that AI has the potential to bring to your B2B sales team…right now.

  • Strong AI, Super Intelligence, Narrow AI, Machine Learning and Deep Learning are terms that often get confused.
  • Strong AI is a ‘machine’ that demonstrates behaviour indistinguishable from that of a human being.
  • If Strong AI is human-like, Artificial Super Intelligence (ASI) is The Terminator.
  • With all variations defined, here are 5 forces of AI to transform your B2B sales methods:

80% Of Marketing Executives Predict Artificial Intelligence Will Revolutionize Marketing by 2020…Yet, Only 10% Are Currently Using It.
Continue reading “The 5 Forces Of Artificial Intelligence In B2B Sales”

Feature: This Chinese robot could revolutionize journalism

#AI interviews impress us! This Chinese robot #Jiajia could revolutionize journalism

  • Chen Xiaoping (R), director of a robot research and development team, and Jia Jia, an interactive robot that looks like a real Chinese young woman in traditional outfit, talk through internet with Kevin Kelly on screen, founding executive editor of Wired magazine, in Hefei, capital of east China’s Anhui Province, April 24, 2017.
  • Jia Jia was unveiled in 2016 by Chen’s robot research and development team at the University of Science and Technology of China in Hefei.
  • “It’s something we could never have imagined,” said Jia Jia’s creator Professor Chen Xiaoping, director of the Robotics Laboratory at the University of Science and Technology of China (USTC) in Hefei, a city in east China’s Anhui Province.
  • The first interview conducted by Jia Jia as a special Xinhua reporter on Monday was merely a small step in the era of artificial intelligence (AI), said Chen, who has been long involved in the development of Jia Jia and honored as the “father” of the robot.
  • Jia Jia did a live interview with Kevin Kelly, a U.S. journalist and technology observer, on Monday, which was hailed by scientists as “having symbolic significance” as it was the world’s first interactive conversation between an “AI reporter” and a human being.

Chen Xiaoping (R), director of a robot research and development team, and Jia Jia, an interactive robot that looks like a real Chinese young woman in traditional outfit, talk through internet with Kevin Kelly on screen, founding executive editor of Wired magazine, in Hefei, capital of east China’s Anhui Province, April 24, 2017. Jia Jia was invited as a special reporter of the Xinhua News Agency to conduct the man-machine dialogue with Kelly on Monday. Jia Jia was unveiled in 2016 by Chen’s robot research and development team at the University of Science and Technology of China in Hefei. It took the team three years to research and develop this new-generation interactive robot, which can speak, show micro-expressions, move its lips, and move its body. (Xinhua/Guo Chen) 
Continue reading “Feature: This Chinese robot could revolutionize journalism”