- The training data consists of 25,000 images of cats and dogs.
- It reads in the external Nippy file that contains the trained network description, takes a random image from the testing directory, and classifies it.
- We want all the dog images under a “dog” directory and all the cat images under a “cat” directory, so that every indexed image beneath them carries the correct “label”.
- How many times it correctly identified a cat as a cat, and how many times it got it wrong.
- We need all the images to be the same size as well as in a directory structure that is split up into the training and test images.
There is an awesome new Clojure-first machine learning library called Cortex that was open sourced recently. I’ve been exploring it lately and …
Continue reading “Deep Learning in Clojure with Cortex”
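The steps summarized above (labeling images by their parent directory, then tallying per-class hits and misses) can be sketched in a few lines. This is a minimal, stdlib-only Python illustration of the idea, not Cortex itself; the paths and function names are hypothetical.

```python
import os
from collections import Counter

def label_for(path):
    # The parent directory name serves as the label, e.g.
    # data/train/cat/cat.0.jpg -> "cat" (hypothetical layout).
    return os.path.basename(os.path.dirname(path))

def tally(predictions):
    # predictions: list of (true_label, predicted_label) pairs.
    # Counts, per class, how often the classifier was right or wrong --
    # e.g. how many times a cat really was classified as a cat.
    counts = Counter()
    for truth, guess in predictions:
        counts[(truth, "right" if guess == truth else "wrong")] += 1
    return counts

preds = [("cat", "cat"), ("cat", "dog"), ("dog", "dog"), ("dog", "dog")]
print(tally(preds)[("cat", "right")])   # 1
print(tally(preds)[("cat", "wrong")])   # 1
```

In the actual post the labels come from the training/test directory split described above; resizing every image to a common shape would happen before indexing.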
- Specialized tools for seeing through blur and pixelation have been popping up throughout this year, like the Max Planck Institute’s work on identifying people in blurred Facebook photos.
- Just take a bunch of training data, throw neural networks and standard image recognition algorithms at it, and even this simple approach can identify what the obscured images contain.
- The algorithm doesn’t produce a deblurred image; it simply identifies what it sees in the obscured photo, based on information it already knows.
- Training data could be as simple as images on Facebook or a staff directory on a website.
- Shmatikov acknowledges that the Max Planck Institute’s work is more nuanced, taking into account contextual clues about identity.
It’s becoming much easier to crack internet privacy measures, especially blurred or pixelated images. Those methods make it tough for people to see sensitive information such as obscured license plate numbers or censored faces, but researchers from the University of Texas at Austin and Cornell University say that the practice is wildly insecure in the age of machine learning. Using simple deep learning…
Continue reading “None of your pixelated or blurred information will stay safe on the internet — Quartz”
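The attack described above can be sketched without any deep learning machinery: pixelation averages away detail, but if the attacker pixelates a gallery of known images the same way, a pixelated query can still be matched to an identity. This is a toy Python illustration with made-up data, not the researchers' actual pipeline; `pixelate` and `nearest_neighbor` are hypothetical names.

```python
def pixelate(img, block):
    # img: 2D list of grayscale values. Replace each block x block
    # tile with its average, mimicking mosaic censoring.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out

def nearest_neighbor(query, gallery):
    # The "attack": match a pixelated query against a gallery of
    # known identities whose images were pixelated the same way.
    def dist(a, b):
        return sum((pa - pb) ** 2
                   for ra, rb in zip(a, b)
                   for pa, pb in zip(ra, rb))
    return min(gallery, key=lambda name: dist(query, gallery[name]))

# Two toy 4x4 "faces": one dark, one bright.
face_a = [[10] * 4 for _ in range(4)]
face_b = [[200] * 4 for _ in range(4)]
gallery = {"alice": pixelate(face_a, 2), "bob": pixelate(face_b, 2)}
print(nearest_neighbor(pixelate(face_a, 2), gallery))  # alice
```

A real attack substitutes a trained neural network for the nearest-neighbor match, but the principle is the same: the obscured pixels still carry enough signal to recover the identity.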