- Keras and TensorFlow are state-of-the-art deep learning tools, and with the keras package you can now access both through a fluent R interface.
- To prepare this data for training, we one-hot encode the vectors into binary class matrices using the Keras `to_categorical()` function:
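For example, a minimal sketch using the keras R package's `to_categorical()` helper; the `y_train` and `y_test` names are illustrative stand-ins for the label vectors, not the article's exact code:

```r
library(keras)

# One-hot encode integer class labels (0-9) into binary class matrices;
# y_train and y_test are illustrative names for the label vectors.
y_train <- to_categorical(y_train, num_classes = 10)
y_test  <- to_categorical(y_test, num_classes = 10)
```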
The core data structure of Keras is a model, a way to organize layers.
- We begin by creating a sequential model and then adding layers using the pipe (`%>%`) operator:
The `input_shape` argument to the first layer specifies the shape of the input data (a length-784 numeric vector representing a grayscale image).
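A minimal sketch of such a model in R; the layer sizes and dropout rates here are illustrative choices, not necessarily the article's exact architecture:

```r
library(keras)

# Build a sequential model by piping layers together; the first layer's
# input_shape declares a length-784 vector (one element per pixel).
model <- keras_model_sequential() %>%
  layer_dense(units = 256, activation = "relu", input_shape = c(784)) %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = "softmax")
```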
- Use the `summary()` function to print the details of the model:
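With the `model` object from the sketch above:

```r
# Print a layer-by-layer summary: output shapes and parameter counts.
summary(model)
```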
Next, compile the model with an appropriate loss function, optimizer, and metrics:
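A plausible configuration for this multi-class problem; the article's exact optimizer and loss settings may differ:

```r
# Categorical cross-entropy suits one-hot encoded multi-class labels;
# rmsprop is a common default optimizer for this kind of network.
model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = optimizer_rmsprop(),
  metrics = c("accuracy")
)
```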
Use the `fit()` function to train the model for 30 epochs using batches of 128 images:
The history object returned by `fit()` includes loss and accuracy metrics, which we can plot:
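A sketch, assuming `x_train` and `y_train` hold the training images and one-hot labels; the 20% validation split is an illustrative assumption:

```r
# Train for 30 epochs in batches of 128 images, holding out 20% of the
# training data for validation (the split fraction is an assumption).
history <- model %>% fit(
  x_train, y_train,
  epochs = 30,
  batch_size = 128,
  validation_split = 0.2
)

# Plot the loss and accuracy recorded at the end of each epoch.
plot(history)
```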
Evaluate the model’s performance on the test data:
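Assuming `x_test` and `y_test` hold the held-out images and labels:

```r
# Returns the loss and accuracy on data the model has never seen.
model %>% evaluate(x_test, y_test)
```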
Keras provides a vocabulary for building deep learning models that is simple, elegant, and intuitive.
- After you’ve become familiar with the basics, the follow-up articles linked from the original post are a good next step.
Keras provides a productive, highly flexible framework for developing deep learning models.
We are excited to announce that the keras package is now available on CRAN. The package provides an R interface to Keras, a high-level neural networks API developed with a focus on enabling fast experimentation. Keras has the following key features:
Continue reading “Keras for R”
- Demis Hassabis knows a thing or two about artificial intelligence: he founded the London-based AI startup DeepMind, which was purchased by Google for $650 million back in 2014.
- In a paper published today in the journal Neuron, Hassabis and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve.
- But it also points out that more recent advances haven’t leaned on biology as effectively, and that a general intelligence will need more human-like characteristics—such as an intuitive understanding of the real world and more efficient ways of learning.
- As Hassabis explains in an interview with The Verge, artificial intelligence and neuroscience have become “two very, very large fields that are steeped in their own traditions,” which makes it “quite difficult to be expert in even one of those fields, let alone expert enough in both that you can translate and find connections between them.”
- (Read more: Neuron, The Verge, “Google’s Intelligence Designer,” “Can This Man Make AI More Human?”)
Inquisitiveness and imagination will be hard to create any other way.
Continue reading “Google’s AI Guru Says That Great Artificial Intelligence Must Build on Neuroscience”
- In this setting, it may be perfectly fine to follow a meandering path as you piece together a system including GPUs, drivers, libraries, and deep learning frameworks that interest you, sifting through potentially hundreds of pages of documentation, as you take on the role of “system integrator”.
- NVIDIA DGX Systems see a 30% increase in deep learning performance compared with other systems built using the same Tesla V100 GPUs but lacking integrated, optimized deep learning software.
- The important takeaway here is that, even if you build an A.I. system on your own using the absolute latest GPU technology, that system would still be at a performance disadvantage relative to an integrated hardware and software system that’s fully optimized and software-engineered for maximum performance of each deep learning framework.
- Alternatively, A.I. appliances like NVIDIA’s DGX, which include access to popular deep learning frameworks like TensorFlow, Caffe2, MXNet, and more, as well as supporting libraries, all integrated with the hardware, can save considerable time and money.
- Additionally, given the experimental nature of data science and A.I., developers often find themselves (or their teams) needing to experiment simultaneously with different combinations of system resources and software configurations in order to determine which model can derive insights fastest.
Like a lot of things, the answer is “it depends”. If we take deep learning as an example of an increasingly popular A.I. workload, building an AI system for deep learning training on your datasets is largely a function of the resources, expertise, and infrastructure you have readily accessible. For example, the system you might employ as an independent developer, or as a researcher in a smaller setting, would look considerably different from what you would need to support a larger organization’s efforts to “A.I.-enable” its business interactions with customers, improve the quality of clinical care, or detect fraud in a voluminous flow of financial transaction data. Ultimately this becomes a question of whether you design and build your own system or employ a purpose-built solution for your problem.
Continue reading “Tony Paikeday’s answer to How can I build my own artificial intelligence system?”
At this year’s GTC Conference, NVIDIA showed how it is delivering AI for every computing platform and every deep learning framework. Read more.
Continue reading “The AI Revolution Is Eating Software: NVIDIA Is Powering It”
- The ongoing profits generated from the AI directed trading are then fed into an Investment Pool that is used to finance positions in early stage companies focused on AI and public blockchain technology.
- Unlike traditional VCs and seed funds, the AICoin Investment Pool is directed by AICoin token holders.
- Using the blockchain as a voting mechanism, the collective can aggregate the combined knowledge and expertise of all token holders in a completely secure and transparent way, harnessing the “Wisdom of the Crowd” to make investment decisions that benefit both the AICoin token holders and the cryptocurrency ecosystem as a whole.
- These five opportunities are fully vetted by the investment board, which consists of two appointed seats and one seat elected by the coin holders; all board members have years of experience in the financial markets and are leaders in their industries.
AICOIN is a passive investment vehicle that uses a strategy combining the pinpoint accuracy of Artificial Intelligence trading models with the “Wisdom of the Crowd” to generate a profit for coin holder/investors.
Continue reading “First Global Credit Launches AICOIN Pre-subscription ICO – The Merkle”
- In the final months of the Obama administration, the U.S. government published two separate reports noting that the U.S. is no longer the undisputed world leader in AI innovation and expressing concern about China’s emergence as a major player in the field.
- The reports recommended increased expenditure on machine learning research and enhanced collaboration between the U.S. government and tech industry leaders to unlock the potential of AI.
- But despite these efforts, 91 percent of the 1,268 tech founders, CEOs, investors, and developers surveyed at the international Collision tech conference in New Orleans in May 2017 believed that the U.S. government is “fatally under-prepared” for the impact of AI on the U.S. ecosystem.
- Research firm CB Insights found that Chinese participation in funding rounds for American startups came close to $10 billion in 2016, while recent figures indicate that Chinese companies have invested in 51 U.S. AI companies, to the tune of $700 million.
- More surprising still, 50 percent of all respondents believed the U.S. would lose its dominant position in the tech world to China within just five years.
In the battle of technological innovation between East and West, artificial intelligence (AI) is on the front line. And China’s influence is growing.
Continue reading “Is China in the driver’s seat when it comes to AI?”
- The big conceptual difference between deep learning and traditional machine learning is that deep learning is the first, and currently the only, learning method capable of training directly on raw data (e.g., the pixels in our face recognition example), without any need for feature extraction; a code sketch follows this list.
- When applying traditional machine learning, it is necessary first to convert the computer files from raw bytes to a list of features (e.g., important API calls), and only then is this list of features fed into the machine learning module.
- Additionally, unlike traditional machine learning, which hits a performance ceiling as the number of training files increases, deep learning keeps improving as datasets grow, scaling to hundreds of millions of malicious and legitimate files.
- The results of benchmarks that compare the performance of deep learning vs traditional machine learning in cybersecurity show that deep learning results in a considerably higher detection rate and a lower false positive rate.
- As malware developers use more advanced methods to create new malware, the gap between the detection rates of deep learning and traditional machine learning will grow wider; in the coming years it will be critical to rely on deep learning in order to have a realistic chance of foiling the most sophisticated attacks.
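To make the “no feature extraction” point concrete, here is a purely illustrative sketch in R with keras; nothing here comes from the article, and the architecture, the 4,096-byte input length, and all names are assumptions. The network consumes raw byte values directly rather than hand-engineered features such as API-call counts:

```r
library(keras)

# Illustrative raw-byte classifier: the model ingests the first 4,096
# bytes of a file as integers 0-255, with no manual feature extraction.
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 256, output_dim = 8, input_length = 4096) %>%
  layer_conv_1d(filters = 64, kernel_size = 16, activation = "relu") %>%
  layer_global_max_pooling_1d() %>%
  layer_dense(units = 1, activation = "sigmoid")  # malicious vs. benign

model %>% compile(
  loss = "binary_crossentropy",
  optimizer = optimizer_adam(),
  metrics = c("accuracy")
)
```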
During the past few years, deep learning has revolutionized nearly every field it has been applied to, resulting in the greatest leap in performance in the history of computer science.
Continue reading “How an artificial brain could help us outsmart hackers”