Text Analytics 2015 – Technology and Market Overview

A leading analyst and expert on text analytics gives an overview of the past year and looks ahead at text analytics technology and market developments.

Text Technology Developments

In the text technology world, cloud-based as-a-service (API) offerings remain big, and deep learning is, even more than last year, the alluring must-adopt method. Deep learning is alluring because it has proven effective at discerning features at multiple levels in both natural language and other forms of “unstructured” content, images in particular. I touch on these topics in my March 5 article, IBM Watson, AlchemyAPI, and a World of Cognitive Computing, covering IBM’s acquisition (terms undisclosed) of a small but leading-edge cloud/API text and image analysis provider.

I don’t have much to say right now on the cognitive computing topic. Really, the term is agglomerative: It represents an assemblage of methods and tools. (As a writer, I live for the opportunity to use words such as “agglomerative” and “assemblage.” Enjoy.) Otherwise, I’ll just observe that beyond IBM, the only significant text-analytics vendor that has embraced the term is Digital Reasoning. Still — Judith Hurwitz and associates have an interesting-looking book just out on the topic, Cognitive Computing and Big Data Analytics, although I haven’t read it. Also, I’ve recruited analyst and consultant Sue Feldman of Synthexis to present a cognitive-computing workshop at the 2015 Sentiment Analysis Symposium in July.

Let’s not swoon over unsupervised machine learning and discount tried-and-true methods — language rules, taxonomies, lexical and semantic networks, word stats, and supervised (trained) and non-hierarchical learning methods (e.g., for topic discovery) — in assessing market movements. I do see market evidence that software that over-relies on language engineering (rules and language resources) can be hard to maintain and to adapt to new domains, information sources, and languages, and difficult to keep current with rapidly emerging slang, topics, and trends. The response is two-fold:

  • The situation remains that a strong majority of needs are met without reliance on as-yet-exotic methods.
  • Hybrid approaches — ensemble methods — rule, and I mean hybrids that keep humans in the initial and on-going training process, via supervised and active learning for the generation and extension of linguistic assets as well as (other) classification models (see the sketch below).
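
To make that human-in-the-loop point concrete, here is a minimal, illustrative sketch of uncertainty-sampling active learning. It is a Python/scikit-learn toy with made-up texts and labels, not any vendor's actual pipeline: a classifier is seeded with a handful of human-labeled documents, and on each round it asks the annotator to label the pool document it is least confident about before retraining.

```python
# Illustrative active-learning loop: seed labels, uncertainty sampling, human labeling.
# Texts and labels are invented; scikit-learn and NumPy are assumed to be installed.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["great service, very happy", "terrible support, never again"]
labels = ["positive", "negative"]                       # human-supplied seed labels
pool = ["not bad at all", "waste of money", "love it"]  # unlabeled documents

vectorizer = TfidfVectorizer()

for _ in range(3):
    X = vectorizer.fit_transform(labeled_texts)
    clf = LogisticRegression().fit(X, labels)

    if not pool:
        break
    # Uncertainty sampling: pick the pool document whose top class probability is lowest.
    probs = clf.predict_proba(vectorizer.transform(pool))
    idx = int(np.argmin(probs.max(axis=1)))

    # In a real workflow a human annotator supplies this label; here we simply prompt.
    new_label = input(f"Label for {pool[idx]!r} (positive/negative): ")
    labeled_texts.append(pool.pop(idx))
    labels.append(new_label)
```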

I wrote above that 2015 would feature a particular focus on streams and graphs. The graphs part, I’ve been saying for a while. I believe I’ve been right for a while too, including when I not-so-famously wrote “2010 is the Year of the Graph.” Fact is, graph data structures naturally model the syntax and semantics of language and, in the form of taxonomies, facilitate classification (see my eContext-sponsored paper, Text Classification Advantage Via Taxonomy). They provide for conveniently queryable knowledge management, whether delivered via products such as Ontotext’s GraphDB or platform-captured, for instance in the Facebook Open Graph.
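
As a toy illustration of why a taxonomy-as-graph helps with classification, here is a short Python sketch. The categories, keywords, and document are invented for the example and bear no relation to eContext's or Ontotext's actual offerings; the point is simply that a keyword match lands a document on a taxonomy node, and walking the parent edges yields the broader categories for roll-up and querying.

```python
# Illustrative taxonomy held as a graph (child -> parent edges) and used to enrich a
# keyword-level match with its broader categories. All categories and keywords invented.
taxonomy = {
    "smartphones": "consumer electronics",
    "consumer electronics": "retail",
    "mortgages": "lending",
    "lending": "financial services",
}
keywords = {"iphone": "smartphones", "refinance": "mortgages"}

def classify(text):
    """Return the matched taxonomy node plus every ancestor up to the root."""
    for word, node in keywords.items():
        if word in text.lower():
            path = [node]
            while node in taxonomy:          # walk parent edges toward the root
                node = taxonomy[node]
                path.append(node)
            return path
    return []

print(classify("Thinking about an iPhone upgrade"))
# ['smartphones', 'consumer electronics', 'retail']
```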

I did poll a few industry contacts, asking their thoughts on the state of the market and prospects for the year ahead. Ontotext CTO Marin Dimitrov was one of them. His take agrees with mine, regarding “a more prominent role for knowledge graphs.” His own company will “continue delivering solutions based on our traditional approach of merging structured and unstructured data analytics, using graph databases, and utilizing open knowledge graphs for text analytics.”

Marin also called out “stronger support for multi-lingual analytics, with support for 3+ languages being the de-facto standard across the industry.” Marin’s company is based in Bulgaria, and he observed, “In the European Union in particular, the European Commission (EC) has been strongly pushing a multi-lingual digital market agenda for several years already, and support for multiple languages (especially ‘under-represented’ European languages) is nowadays a mandatory requirement for any kind of EC research funding in the area of content analytics.”

José Carlos González, CEO of Madrid-based text analytics provider Daedalus, commented on the “‘breadth vs depth’ dilemma. The challenge of developing, marketing and selling vertical solutions for specific industries has led some companies to focus on niche markets quite successfully.” Regarding one, functional (rather than industry-vertical) piece of the market, González believes “Voice of the Customer analytics — and in general all of the movement around customer experience — will continue being the most important driver for the text analytics market.”

One of Marin Dimitrov’s predictions was the emergence of more text analytics as-a-service providers, with clearer differentiation among their offerings. Along these lines, Shahbaz Anwar, CEO of analytics provider PolyVista, sees the linking of software and professional services as a differentiator. Anwar says, “We’re seeing demand for text analytics solutions — bundling business expertise with technology — delivered as a service, so that’s where PolyVista has been focusing its energy.”

Further —

Streams are kind of exciting. Analysis of “data-in-flight” has been around for years for structured data, formerly known primarily as part of complex event processing (CEP) and applied in fields such as telecom and financial markets. Check out Julian Hyde’s 2010 Data In Flight. For streaming (and non-streaming) text, I would call out Apache Storm and Spark. For Storm, I’ll point you to a technical-implementation study posted by Hortonworks, Natural Language Processing and Sentiment Analysis for Retailers using HDP and ITC Infotech Radar, as an example. For Spark, Pivotal published a similar and even more detailed study, 3 Key Capabilities Necessary for Text Analytics & Natural Language Processing in the Era of Big Data. Note all the software interoperation going on. Long gone are the days of monolithic codebases.
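
To give a flavor of what streaming text analysis looks like in Spark, here is a minimal PySpark sketch using the 2015-era DStream API. The keyword lexicon, host, and port are invented for illustration, and a local socket stands in for a real feed; the Hortonworks and Pivotal studies above describe far richer NLP pipelines.

```python
# Illustrative Spark Streaming job: score a live text stream with a naive keyword lexicon.
# Lexicon, host, and port are invented; feed it text lines with e.g. `nc -lk 9999`.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "awful", "hate"}

def score(line):
    words = line.lower().split()
    return (line, sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words))

sc = SparkContext(appName="StreamingSentimentSketch")
ssc = StreamingContext(sc, batchDuration=5)        # 5-second micro-batches
lines = ssc.socketTextStream("localhost", 9999)    # text lines arriving on a socket
lines.map(score).pprint()                          # print (text, score) pairs each batch

ssc.start()
ssc.awaitTermination()
```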

But in the end, money talks, so now on to part 3 —