Where NLP is heading

Natural language processing research and applications are moving forward rapidly. Several trends have emerged from this progress, pointing to a future of more exciting possibilities and interesting opportunities in the field.



By Paul Barba, Chief Scientist, Lexalytics, an inMoment company.


As both written and spoken-language corpora explode in size and availability, natural language processing (NLP) has become an invaluable tool for researchers, organizations, and even hobbyists. It lets us summarize documents, analyze sentiment, categorize content, translate languages – and one day potentially even converse at a human level.

Like any AI and ML discipline, NLP is fast-moving and undergoing rapid change as both practitioners and researchers dive into the prospects it presents. Here are some of the trends and opportunities I see on the horizon.

 

Prompting: priming NLP with a few choice words

 

“Prompting” is a technique that involves adding a piece of text to your input examples to encourage a language model to perform a task you’re interested in. Say your input text is: “We had a great waitress, but the food was undercooked.” Perhaps you’re interested in comparing food quality across restaurant locations. Appending “the food was” to the review and seeing whether “terrible” or “great” is the more likely continuation suddenly provides you with a topical sentiment model. Given the negative sentiment associated with “undercooked,” the missing word would almost certainly be “terrible.”

Training such a model from scratch would require extensive annotations, but with prompting, a workable solution can be found with just a handful of examples, or even none at all, making it a viable choice for smaller-scale projects and budgets. Because the prompt language can easily be changed, you can explore many possible taxonomies and kinds of functionality across your dataset without having to commit to a final set of guidelines for your annotators.
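To make this concrete, here is a minimal sketch of the review example above, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (both illustrative choices rather than a prescribed setup):

```python
# A minimal sketch of prompt-based topical sentiment, assuming the Hugging Face
# `transformers` library and the public `bert-base-uncased` checkpoint
# (illustrative choices, not a prescribed setup).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

review = "We had a great waitress, but the food was undercooked."
prompt = review + " The food was [MASK]."

# Restrict the model to the two candidate continuations and compare their scores.
candidates = fill_mask(prompt, targets=["terrible", "great"])
for candidate in candidates:
    print(candidate["token_str"], round(candidate["score"], 4))
```

The higher-scoring continuation serves as the sentiment label for the food topic, with no task-specific training involved.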

 

Standing on the shoulders of giants: a convergence across modalities

 

As the field matures, we’re starting to see cross-pollination between different AI and ML disciplines. This has become possible now that less background knowledge is required to get your feet wet in the field – and instead of highly niche specialists, we’re starting to see solid generalists. This has fostered a convergence of modalities: traditional text-based approaches are being brought to the numerical world, and traditionally NLP-oriented architectures like transformer networks are being applied to video and even physical simulations.

To the creative thinker, the opportunities and applications are vast: for example, Samsung is combining NLP with video imagery to help self-driving cars interpret street signs in foreign countries. NLP and computer vision are natural bedfellows, and I also expect to see the two being used to help translate video to text for accessibility purposes, to improve descriptions of medical imagery, and even to translate verbal design requests into a written or visual description.

 

Sharing is caring: open-source models driving knowledge forward

 

An open-source AI culture is good for innovation, providing valuable feedback loops, creating opportunities to improve and develop technologies, and giving technologists room to grow. The big guns like DeepMind have paved the way with the research papers and libraries left in the wake of AlphaGo and AlphaZero, and now smaller contenders such as HuggingFace are doing the same with a commercial/open-source hybrid developed in tandem with language researchers. Partnerships like these, and our own with UMass, give the NLP community greater access to datasets, tokenizers, and transformers, along with more insight into what’s going on under the hood. That access creates opportunities for the community to iterate on, advance, and broaden the technology, strengthening our collective skill sets and knowledge.
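As a rough illustration of what that access looks like in practice, here is a minimal sketch that pulls a public checkpoint and a public corpus from the Hub; the transformers and datasets libraries and the specific artifacts named are illustrative assumptions, not deliverables of any particular partnership:

```python
# A rough illustration of open community access to models, tokenizers, and data,
# assuming the `transformers` and `datasets` libraries and public artifacts
# (illustrative choices, not tied to any specific partnership).
from transformers import AutoTokenizer, AutoModel
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")
reviews = load_dataset("imdb", split="train[:4]")  # a small slice of a public corpus

# Tokenize a few documents and inspect the representations the model produces.
batch = tokenizer(reviews["text"], padding=True, truncation=True, return_tensors="pt")
hidden = model(**batch).last_hidden_state
print(hidden.shape)  # (documents, tokens, hidden size)
```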

 

Fair go, algo: algorithms and people working together

 

Algorithms and humans each have their strengths, and by working together they can deliver exceptional results. One area that’s seen some interest is generative language, but the problem there is that while algorithms can create output that sounds human-like, they’re not concerned with veracity. Having a human at hand to monitor for accuracy, relevance, and the rest of Grice’s Maxims can improve outcomes.

The same partnership works well in summarization, which is an area I’m interested in. Quickly condensing a long article into its most salient points is surprisingly tricky for humans, yet a task machines have shown a reasonable capacity for, within some constraints. On the other hand, when we ask machines to turn those salient points into a coherent summary, they have an unfortunate tendency to change the meaning to something non-factual. But having a machine highlight the key ideas in a document, and a human turn them into a short snippet, outperforms either working alone. I think we’ll be seeing more and more of this as AI and NLP become embedded in everyday work processes.
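As a sketch of the machine half of that workflow, the snippet below highlights the most salient sentences for a human to rewrite; the TF-IDF scoring and the scikit-learn dependency are simplifying assumptions standing in for a production salience model:

```python
# A sketch of extractive highlighting for a human-in-the-loop summary,
# using a simple TF-IDF salience score as a stand-in for a production model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def highlight_sentences(document: str, top_k: int = 3) -> list[str]:
    # Naive sentence split; a real pipeline would use a proper sentence tokenizer.
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    tfidf = TfidfVectorizer().fit_transform(sentences)
    # Score each sentence by the total weight of its terms.
    scores = np.asarray(tfidf.sum(axis=1)).ravel()
    top = np.argsort(scores)[::-1][:top_k]
    # Return highlights in document order so the human editor keeps the context.
    return [sentences[i] for i in sorted(top)]
```

The human then turns those highlighted sentences into a short, factually faithful snippet.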

 

Transforming transformers: less resource-intensive solutions

 

BERT and other comparable, well-known technologies are built using transformers, a type of model that identifies dependencies (words in a sentence that relate back to a target word) across long blocks of text. Transformers are highly effective – but also incredibly resource-intensive, requiring a huge amount of pretraining and data. Although these transformer-based technologies have taken the world by storm, we’re starting to see alternatives spring up as smaller companies and teams seek out more viable solutions for smaller-scale problems and budgets. HuggingFace’s transformer variant, SRU++ and related work, the Reformer (an efficient transformer model), and models like ETC/BigBird are potential alternatives that I expect will see more interest as the computational cost of transformer-based projects becomes untenable.
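One rough way to see the gap such alternatives aim to close is to compare model footprints. The sketch below assumes the Hugging Face transformers library and pairs BERT with its distilled variant purely for illustration, not as a recommendation among the models named above:

```python
# A minimal sketch comparing the footprint of a standard transformer with a
# lighter variant, assuming the Hugging Face `transformers` library and these
# public checkpoints (illustrative choices).
from transformers import AutoModel

def parameter_count(checkpoint: str) -> int:
    model = AutoModel.from_pretrained(checkpoint)
    return sum(p.numel() for p in model.parameters())

for checkpoint in ("bert-base-uncased", "distilbert-base-uncased"):
    print(checkpoint, f"{parameter_count(checkpoint):,} parameters")
```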

 

A technology truism: the best is yet to come

 

AI and NLP are always advancing and improving, and we see ebbs and flows as resource-rich industry races ahead with the next big thing while research takes time to catch up and build out our collective knowledge. This cycle, now enriched by the increasingly open-source nature of NLP along with technical cross-pollination and new industry applications, will continue to surface new opportunities and advancements for us to explore and take advantage of.
