- Sockeye, which is built on Apache MXNet, does most of the heavy lifting for building, training, and running state-of-the-art sequence-to-sequence models.
- Sockeye provides both a state-of-the-art implementation of neural machine translation (NMT) models and a platform to conduct NMT research.
- You can easily change core elements of the basic model architecture.
- Sockeye also supports a number of more advanced features.
- For training, Sockeye gives you full control over important optimization parameters.
- If you have a GPU available, you can install Sockeye for CUDA 8.0, or alternatively for CUDA 7.5, with a single command.
- Now you’re all set to train your first German-to-English NMT model.
- You also learned how to use Sockeye, a sequence-to-sequence framework based on MXNet, to train and run a minimal NMT model.
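The install and training bullets above can be sketched roughly as follows. The package versions, file names, and the choice of the CUDA-specific MXNet wheel are assumptions for illustration; check the Sockeye README for the current instructions.

```shell
# Install Sockeye together with a CUDA-enabled MXNet build
# (package names follow the CUDA-specific MXNet wheels published on PyPI).
pip3 install sockeye mxnet-cu80    # CUDA 8.0
# pip3 install sockeye mxnet-cu75  # CUDA 7.5 instead

# Train a first German-to-English model with the sockeye.train CLI;
# train.de/train.en and dev.de/dev.en are hypothetical parallel corpora.
python3 -m sockeye.train --source train.de \
                         --target train.en \
                         --validation-source dev.de \
                         --validation-target dev.en \
                         --output model
```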
Have you ever wondered how you can use machine learning (ML) for translation? With our new framework, Sockeye, you can model machine translation (MT) and other sequence-to-sequence tasks. Sockeye, which is built on Apache MXNet, does most of the heavy lifting for building, training, and running state-of-the-art sequence-to-sequence models.
Continue reading “Train Neural Machine Translation Models with Sockeye”
Vending Machine Press (VMP) aims to give writers an avenue for their work to reach a wider audience and, hopefully, to become a meaningful part of the history of the journal and of the literary community at large. Vending Machine Press – Submission Guidelines: Poetry – up to 6 poems in one document; Flash Fiction – Up…
Continue reading “Submissions – Vending Machine Press”
- An AI can use Google Street View to help you decide where to move
- Keskkula plots 10,000 randomized points throughout a city and grabs images taken by Google Street View.
- The idea of extracting information from Google Street View was inspired by MIT Media Lab’s StreetScore project, Keskkula writes, where machine learning was used to rank the safety of 3,000 streets in New York and Boston.
- Now one Estonia-based startup, Teleport, is using this idea, coupled with images from Google Street View, to automatically look around cities and see if people will like them based on their lifestyle preferences.
- Keskkula’s example focuses on motorcycles: He owns two and is interested in a city that welcomes them.
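The sampling step described above can be sketched in a few lines: draw 10,000 uniformly random points inside a city's bounding box and build a Street View Static API request URL for each one. The bounding-box coordinates, image size, and API key are placeholders, not values from the article.

```python
import random

# Hypothetical bounding box roughly covering one city (illustrative numbers).
LAT_MIN, LAT_MAX = 59.35, 59.47
LNG_MIN, LNG_MAX = 24.55, 24.93

def sample_points(n, seed=0):
    """Draw n uniformly random (lat, lng) points inside the bounding box."""
    rng = random.Random(seed)
    return [(rng.uniform(LAT_MIN, LAT_MAX), rng.uniform(LNG_MIN, LNG_MAX))
            for _ in range(n)]

def streetview_url(lat, lng, key="YOUR_API_KEY", size="640x640"):
    """Build a Google Street View Static API request URL for one point."""
    return ("https://maps.googleapis.com/maps/api/streetview"
            f"?size={size}&location={lat:.6f},{lng:.6f}&key={key}")

points = sample_points(10_000)
urls = [streetview_url(lat, lng) for lat, lng in points]
```

Each URL can then be fetched and the returned image passed to whatever classifier is looking for motorcycles, greenery, or other lifestyle signals.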
Machine learning is at its best when there’s way too much information for any human to comb through manually, like making high-volume stock trades or surfacing the best posts from hundreds of friends on Facebook. Now one Estonia-based startup, Teleport, is using this idea, coupled with images from Google Street View, to automatically look around cities and see if people will…
Continue reading “Artificial intelligence can look at Google Street View to help you decide where to move — Quartz”
- Co-authors Robert Scoble and Shel Israel lead a discussion in virtual reality about how technology got us this far and why AR/VR and AI are next.
- Scoble and Israel, looking down the path of the future with AR, VR, and AI, also want to remind us: “Something new is coming.”
- With his first reading, Israel introduced us to the book’s theme by way of its opening sentence – one with a leading clause that sounds particularly familiar.
- Entrenched in the tech and digital worlds since before DOS was popular, tech evangelist Robert Scoble and prolific writer Shel Israel performed a reading of their newest collaboration, a book entitled The Fourth Transformation: How Augmented Reality (AR) and Artificial Intelligence (AI) Will Change Everything.
- “In the beginning,” Israel started, “there were mainframes.”
Co-authors Robert Scoble and Shel Israel lead a discussion in virtual reality about how technology got us this far and why AR/VR and AI are next. Entrenched in the tech and digital worlds since before…
Continue reading “Something New Is Coming – AltspaceVR – Medium”
- Google DeepMind also recently was able to get its AI systems to sound more human with advanced text-to-speech technology innovations.
- Oxford University researchers partnered with Google on a new AI tool that reads lips, and the results were significant.
- Oxford University and Google DeepMind have built an AI tool that can read lips far better than a professional human lip-reader, which could help the hearing impaired.
- Google DeepMind wins again: AI trounces human expert in lip-reading face-off (ZDNet)
- According to a report from New Scientist, the human expert deciphered 12.4% of their words, while the AI system got 46.8% correct.
Oxford University researchers partnered with Google on a new AI tool that reads lips, and the results were significant.
Continue reading “Google DeepMind AI destroys human expert in lip reading competition”
- We build a model that learns both word and document topics, makes them interpretable, builds topics over clients, times, and documents, and allows those topics to be supervised.
- lda2vec also yields topics over clients.
- In lda2vec, topics can be ‘supervised’ and forced to predict another target.
- It’s research software, and we’ve tried to make it simple to modify lda2vec and to play around with your own custom topic models.
- LDA, on the other hand, is quite interpretable by humans, but doesn’t model local word relationships the way word2vec does.
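The hybrid the bullets describe comes down to representing each document as a sparse, interpretable probability mixture over a small set of topic vectors that live in the same space as the word vectors. A toy sketch of that composition follows; the numbers and names are made up for illustration and this is not the library’s actual API.

```python
import math

def softmax(weights):
    """Turn unnormalized topic weights into a probability mixture."""
    exps = [math.exp(w) for w in weights]
    total = sum(exps)
    return [e / total for e in exps]

# Toy setup: 3 topic vectors in a 4-dimensional embedding space.
topic_vectors = [
    [1.0, 0.0, 0.0, 0.0],   # e.g. a "sports" topic
    [0.0, 1.0, 0.0, 0.0],   # e.g. a "politics" topic
    [0.0, 0.0, 1.0, 1.0],   # e.g. a "technology" topic
]

# Per-document unnormalized topic weights (learned in the real model).
doc_weights = [2.0, -1.0, 0.5]

# Interpretable, LDA-like mixture: non-negative and sums to 1.
mixture = softmax(doc_weights)

# The document vector is a weighted sum of topic vectors, so it lives in
# the same space as the word vectors, word2vec-style.
doc_vector = [sum(p * t[d] for p, t in zip(mixture, topic_vectors))
              for d in range(4)]
```

Because the mixture is a proper distribution, a document can be summarized by its top topics, while the resulting document vector still composes with word vectors.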
Read the full article here.
@andradeandrey: “Very interesting code, lda2vec tools for interpreting natural language #machinelearning #NLP”
Contribute to lda2vec development by creating an account on GitHub.