- Attention mechanism in Deep Learning, Explained - Jan 11, 2021.
Attention is a powerful mechanism developed to enhance the performance of the Encoder-Decoder architecture on neural network-based machine translation tasks. Learn more about how this process works and how to implement the approach in your own work.
- Natural Language Processing Q&A - Jun 24, 2019.
In this Q&A, Jos Martin, Senior Engineering Manager at MathWorks, discusses recent NLP developments and the applications that are benefiting from the technology.
- My favorite mind-blowing Machine Learning/AI breakthroughs - Mar 14, 2019.
We present some of our favorite breakthroughs in Machine Learning and AI in recent times, complete with papers, video links and brief summaries for each.
- 10 Exciting Ideas of 2018 in NLP - Jan 16, 2019.
We outline a selection of exciting NLP developments from the past year, along with useful recent papers and images to further assist your learning.
- Free resources to learn Natural Language Processing - Sep 18, 2018.
An extensive list of free resources to help you learn Natural Language Processing, including explanations on Text Classification, Sequence Labeling, Machine Translation and more.
- Machine Learning Translation and the Google Translate Algorithm - Sep 14, 2017.
Today, we’ve decided to explore machine translators and explain how the Google Translate algorithm works.
- Attention and Memory in Deep Learning and NLP - Jan 12, 2016.
An overview of attention mechanisms and memory in deep neural networks and why they work, including some specific applications in natural language processing and beyond.
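Several of the articles above center on the attention mechanism. As a minimal illustration of the core idea, here is a sketch of scaled dot-product attention for a single query in pure Python; the function names and the single-query simplification are my own, and this is not the implementation from any of the linked posts.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    query:  list[float] of dimension d
    keys:   one key vector per encoder position
    values: one value vector per encoder position
    Returns the attention-weighted sum of the value vectors.
    """
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Normalize scores into a probability distribution over positions.
    weights = softmax(scores)
    # Weighted average of the values.
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]
```

A query that matches one key more closely than the others pulls the output toward that key's value, which is the intuition behind "attending" to the most relevant encoder positions.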