12 NLP Researchers, Practitioners & Innovators You Should Be Following
Check out this list of NLP researchers and innovators you should be following, including academics, practitioners, developers, entrepreneurs, and more.
Natural language processing is a particular interest of mine, as I assume it is for many of you.
Keeping up with the rapid advances in NLP is one thing. Filtering these advances through the lens of seasoned researchers and innovators is another. I find that following a well-curated list of NLP folks on social media can help with both of these challenges.
In that regard, this is my list of suggested NLP researchers and innovators to follow. Keep in mind that this is a collection of 12 individuals whom I have personally found beneficial to follow and pick up information from. This is not an authoritative list in any sense, and those included aren't necessarily the "top" folks in the field. They are all, however, worthy of a place here, and I encourage you to follow them to help expand your understanding of NLP.
Notably, they come from diverse backgrounds in terms of role, and include academics, practitioners, developers, entrepreneurs, computational linguists, and more. This range of perspectives is an asset of this particular list.
Without further ado, here is the list, in alphabetical order. Note that I have left the descriptions to the individuals themselves, grabbing appropriate wording from their respective websites.
1. Yoav Artzi

I am an Assistant Professor in the Department of Computer Science and Cornell Tech at Cornell University. I received an NSF CAREER award and paper awards at EMNLP 2015, ACL 2017, and NAACL 2018. Previously, I received a B.Sc. summa cum laude from Tel Aviv University and a Ph.D. from the University of Washington.

I work at the intersection of natural language processing, machine learning, vision, and robotics. My current main research focus is algorithms for natural language understanding, with specific interest in situated interactions.
2. Emily M. Bender

My primary research interests are in multilingual grammar engineering, the study of variation, both within and across languages, the relationship between linguistics and computational linguistics, and practical methods for promoting engagement with ethical issues in NLP. My grammar engineering work centers on the LinGO Grammar Matrix, an open-source starter kit for the development of broad-coverage precision HPSG grammars. My language interests include English [eng] (including AAVE), Japanese [jpn], Wambaya [wmb], Chintang [ctn], ASL [ase], and Mandarin [cmn]. The AGGREGATION project is investigating the automatic creation of grammars from IGT with the Grammar Matrix for the benefit of language documentation.
3. Yoav Goldberg

Senior Lecturer at Bar Ilan University's Computer Science Department. Before that, I was a post-doctoral Research Scientist at Google Research New York.

I work on problems related to Natural Language Processing and Machine Learning. In particular, I am interested in syntactic parsing, structured-prediction models, learning for greedy decoding algorithms, multilingual language understanding, and cross-domain learning. Lately, I am also interested in neural-network-based methods for NLP.
4. Matthew Honnibal

Currently, it's hard to develop language technologies unless you've done a PhD on the topic. I wrote the spaCy NLP library and founded Explosion AI to change that.

I have over 20 peer-reviewed publications on various aspects of natural language understanding, including breakthrough work on parsing conversational speech, a high-impact survey of named entity linking, and data that proved crucial to recent award-winning research.
There are dozens of researchers who understand natural language processing as well as I do, so we've got about 10,000 developers to go. That's why I left research to write spaCy.
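That accessibility goal is visible in spaCy's API itself. As a minimal sketch (the example sentence and the choice of a blank pipeline are my own, not from the bio above), a developer can tokenize text in a few lines without training anything:

```python
import spacy

# Build a blank English pipeline: just the rule-based tokenizer,
# no statistical model download required.
nlp = spacy.blank("en")

doc = nlp("Explosion AI builds developer tools for NLP.")
tokens = [token.text for token in doc]
print(tokens)
# ['Explosion', 'AI', 'builds', 'developer', 'tools', 'for', 'NLP', '.']
```

Swapping in a pretrained pipeline (e.g., via `spacy.load`) adds tagging, parsing, and named entities with the same `doc` interface.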
5. Jeremy Howard

Jeremy Howard is an entrepreneur, business strategist, developer, and educator. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. He is also a faculty member at the University of San Francisco, and is Chief Scientist at doc.ai and platform.ai.
6. Christopher Manning

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University and Director of the Stanford Artificial Intelligence Laboratory (SAIL). His research goal is computers that can intelligently process, understand, and generate human language material. Manning is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, the GloVe model of word vectors, sentiment analysis, neural network dependency parsing, neural machine translation, question answering, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference, and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies.

Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates.
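The GloVe model mentioned above represents each word as a dense vector whose geometry reflects meaning, with similarity typically measured by cosine distance. A toy sketch of that measurement (the three-dimensional vectors below are invented for illustration; real GloVe embeddings have 50 to 300 dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy "embeddings" -- not real GloVe vectors.
vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

print(cosine_similarity(vectors["cat"], vectors["dog"]))  # high: near 1.0
print(cosine_similarity(vectors["cat"], vectors["car"]))  # much lower
```

With trained GloVe vectors, the same computation makes "cat" measurably closer to "dog" than to "car", which is what lets downstream models exploit word meaning.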
7. Ines Montani

Hi, I'm Ines and this is my personal blog. I'm a software developer working on Artificial Intelligence and Natural Language Processing technologies.
I'm the co-founder of Explosion AI, a digital studio for custom AI and NLP solutions. We're also the makers of spaCy, the leading open-source library for Natural Language Processing in Python, and Prodigy, a machine teaching tool powered by active learning.
8. Sebastian Ruder

I'm a research scientist at DeepMind, London. I completed my PhD in Natural Language Processing and Deep Learning at the Insight Research Centre for Data Analytics, while working as a research scientist at the Dublin-based text analytics startup AYLIEN. Previously, I studied Computational Linguistics at the University of Heidelberg, Germany, and at Trinity College, Dublin.
9. Vered Shwartz

I am a postdoctoral researcher at the Allen Institute for Artificial Intelligence (AI2) and the University of Washington. I work with Prof. Yejin Choi.

I did my PhD in Computer Science in the Natural Language Processing lab at Bar-Ilan University, under the supervision of Prof. Ido Dagan, where I worked on lexical semantics. Specifically, I focused on recognizing lexical semantic relations between words and phrases, including ontological relationships (e.g., cat is a type of animal, tail is a part of cat); interpreting noun-compounds (e.g., olive oil is oil made of olives, while baby oil is oil for babies); and identifying predicate paraphrases (e.g., X die at Y may have the same meaning as X live until Y in certain contexts).
10. Richard Socher

Aloha, I'm the chief scientist at Salesforce. Previously, I was an adjunct professor at Stanford's computer science department and the founder and CEO/CTO of MetaMind, which was acquired by Salesforce in 2016. I enjoy improving the state of the art in AI through research (deep learning, natural language processing, and computer vision) and making AI easily accessible to everyone. In 2014, I got my PhD in the CS Department at Stanford.
11. Rachel Tatman

I'm a Data Scientist at Kaggle. I also have a PhD in Linguistics from the University of Washington. My personal research is on computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. I'm especially interested in emoji, and how dialects are produced and perceived in computational contexts.
12. Rachel Thomas

Rachel Thomas was selected by Forbes as one of "20 Incredible Women in AI", was an early engineer at Uber, and earned her math PhD at Duke. She is a professor at the USF Data Institute and co-founder of fast.ai, which created the "Practical Deep Learning for Coders" course that over 200,000 students have taken. Rachel is a popular writer and keynote speaker on topics of data ethics, AI accessibility, and bias in machine learning. Her writing has been read by nearly a million people; has been translated into Chinese, Spanish, Korean, and Portuguese; and has made the front page of Hacker News nine times.
- Top 10 Data Science Leaders You Should Follow
- Deep Learning for NLP: ANNs, RNNs and LSTMs explained!
- LinkedIn Top Voices 2018: Data Science & Analytics