Just How Smart Are Smart Machines?
The number of sophisticated cognitive technologies that might be capable of cutting into the need for human labor is expanding rapidly. But linking these offerings to an organization’s business needs requires a deep understanding of their capabilities. Here we examine four levels of intelligence across task types.
By Tom Davenport, IIA and Babson College.
If popular culture is an accurate gauge of what’s on the public’s mind, it seems everyone has suddenly awakened to the threat of smart machines. Several recent films have featured robots with scary abilities to outthink and manipulate humans. In the economics literature, too, there has been a surge of concern about the potential for soaring unemployment as software becomes increasingly capable of decision making.
Yet managers we talk to don’t expect to see machines displacing knowledge workers anytime soon — they expect computing technology to augment rather than replace the work of humans. In the face of a sprawling and fast-evolving set of opportunities, their challenge is figuring out what forms the augmentation should take. Given the kinds of work managers oversee, what cognitive technologies should they be applying now, monitoring closely, or helping to build?
To help, we have developed a simple framework that plots cognitive technologies along two dimensions. (See “What Today’s Cognitive Technologies Can — and Can’t — Do.”) First, it recognizes that these tools differ according to how autonomously they can apply their intelligence. On the low end, they simply respond to human queries and instructions; at the (still theoretical) high end, they formulate their own objectives. Second, it reflects the type of tasks smart machines are being used to perform, moving from conventional numerical analysis to performance of digital and physical tasks in the real world. The breadth of inputs and data types in real-world tasks makes them more complex for machines to accomplish.
By putting those two dimensions together, we create a matrix into which we can place all of the multitudinous technologies known as “smart machines.” More important, this helps to clarify today’s limits to machine intelligence and the challenges technology innovators are working to overcome next. Depending on the type of task a manager is targeting for redesigned performance, this framework reveals the various extents to which it might be performed autonomously and by what kinds of machines.
FOUR LEVELS OF INTELLIGENCE
Clearly, the level of intelligence of smart machines is increasing. The general trend is toward greater autonomy in decision making — from machines that require highly structured data and decision contexts to those capable of deciphering more complex contexts.
SUPPORT FOR HUMANS
For decades, the prevailing assumption has been that cognitive technologies would provide insight to human decision makers — what used to be known as “decision support.” Even with IBM Corp.’s Watson and many of today’s other cognitive systems, most people assume that the machine will offer a recommended decision or course of action but that a human will make the final decision.
REPETITIVE TASK AUTOMATION
It is a relatively small step to go from having machines support humans to having the machines make decisions, particularly in structured contexts. Automated decision making has been gaining ground in recent years in several domains, such as insurance underwriting and financial trading; it typically relies on a fixed set of rules or algorithms, so performance doesn’t improve without human intervention. Typically, people monitor system performance and fine-tune the algorithms.
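To make the "fixed set of rules" concrete, here is a minimal sketch of rules-based automated underwriting. Everything in it — the field names, thresholds, and outcomes — is invented for illustration; no real insurer's rules are implied:

```python
# Hypothetical rules-based underwriting. Fields and thresholds are
# illustrative only. Note the rules are static: the system's performance
# does not improve unless a human revises this code.

def underwrite(applicant: dict) -> str:
    """Apply a fixed rule set and return a decision."""
    if applicant["age"] < 18:
        return "decline"
    if applicant["claims_last_5y"] > 3:
        return "refer"  # escalate to a human underwriter
    if applicant["credit_score"] >= 700 and applicant["claims_last_5y"] == 0:
        return "approve"
    return "refer"

decision = underwrite({"age": 45, "claims_last_5y": 0, "credit_score": 720})
print(decision)  # approve
```

The "refer" branch is where the human monitoring described above enters: cases the rules cannot confidently settle are routed back to people.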
CONTEXT AWARENESS AND LEARNING
Sophisticated cognitive technologies today have some degree of real-time contextual awareness. As data flow more continuously and voluminously, we need technologies that can help us make sense of the data in real time — detecting anomalies, noticing patterns, and anticipating what will happen next. Relevant information might include location, time, and/or a user’s identity, which might be used to make recommendations (for example, the best route to work based on the time of day, current traffic levels, and the driver’s preference for highways versus back roads).
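The route-to-work example can be sketched as a tiny context-aware recommender. The routes, timing numbers, and scoring weights below are all assumptions made up for this sketch; a real system would estimate them from live traffic data:

```python
# Illustrative context-aware recommendation. Route timings, penalties,
# and the preference bonus are invented values, not real data.

def recommend_route(context: dict) -> str:
    routes = {
        "highway":    {"base_minutes": 25, "traffic_penalty": 20},
        "back_roads": {"base_minutes": 35, "traffic_penalty": 5},
    }
    scores = {}
    for name, route in routes.items():
        minutes = route["base_minutes"]
        if context["rush_hour"]:          # time-of-day context
            minutes += route["traffic_penalty"]
        if name == context["preference"]:  # driver's stated preference
            minutes -= 3
        scores[name] = minutes
    return min(scores, key=scores.get)

# At rush hour, congestion outweighs a stated highway preference.
print(recommend_route({"rush_hour": True, "preference": "highway"}))  # back_roads
```

The point of the sketch is that the same query yields different answers as the context (here, time of day) changes.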
One of the hallmarks of today’s cognitive computing is its ability to learn and improve performance. Much of the learning takes place through continuous analysis of real-time data, user feedback, and new content from text-based articles. In settings where results are measurable, learning-oriented systems will ultimately deliver benefits in the form of better stock trading decisions, more accurate driving time predictions, and more precise medical diagnoses.
SELF-AWARE INTELLIGENCE
So far, machines with self-awareness and the ability to form independent objectives reside only in the realm of fiction. With substantial self-awareness, computers may eventually gain the ability to work beyond human levels of intelligence across multiple contexts, but even the most optimistic experts say that general intelligence in machines is three to four decades away.
FOUR COGNITIVE TASK TYPES
A straightforward way to sort out tasks performed by machines is according to whether they process only numbers, text, or images — the building blocks of cognition — or whether they know enough to take informed actions in the digital or physical world.
ANALYZING NUMBERS
The root of all cognitive technologies is computing machines’ superior performance at analyzing numbers in structured formats (typically, rows and columns). Classically, this numerical analysis was applied purely in support of human decision makers. People continued to perform the front-end cognitive tasks of creating hypotheses and framing problems, as well as the back-end interpretation of the numbers’ implications for decisions. Even as analysts added more visual analytics displays and more predictive analytics in the past decade, people still did the interpretation.
Today, companies are increasingly embedding analytics into operational systems and processes to make repetitive automated decisions, which enables dramatic increases in both speed and scale. And whereas it used to take a human analyst to develop embedded models, “machine learning” methods can produce models in an automated or semiautomated fashion.
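The contrast between hand-built rules and machine-produced models can be shown with one of the simplest learning algorithms, a perceptron. The training examples and feature names below are fabricated; the point is only that the decision weights come from data, not from an analyst writing rules:

```python
# A perceptron "learns" a decision model from labeled examples instead
# of a human writing the rules. Training data here is toy, invented data:
# features are [credit_score / 1000, prior_claims]; label 1 = approve.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights and a bias from (features, label) pairs."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # learn only from mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

data = [([0.8, 0], 1), ([0.75, 1], 1), ([0.4, 3], 0), ([0.5, 4], 0)]
model = train_perceptron(data)
```

Feeding the same code more or different examples yields a different model with no change to the program itself — the "automated or semiautomated fashion" described above.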
ANALYZING WORDS AND IMAGES
A key aspect of human cognition is the ability to read words and images and to determine their meaning and significance. But today, a wide variety of technological tools, such as machine learning, natural language processing, neural networks, and deep learning, can classify, interpret, and generate words. Some of them can also analyze and identify images.
The earliest intelligent applications for words and images centered on text, image, and speech recognition, which allowed humans to communicate with computers. Today, of course, smartphones “understand” human speech and text and can recognize images. These capabilities are hardly perfect, but they are widely used in many applications.
Analyzing words and images on a large scale constitutes a different category of capability. One such application involves translating large volumes of text across languages. Another is to answer questions as a human would. A third is to make sense of language in a way that can either summarize it or generate new passages.
IBM Watson was the first tool capable of ingesting, analyzing, and “understanding” text well enough to respond to detailed questions. However, it doesn’t deal with structured numerical data, nor can it understand relationships between variables or make predictions. It’s also not well suited for applying rules or analyzing options on decision trees. That said, IBM is rapidly adding capabilities covered in our matrix, including image analysis.
There are other examples of word and image systems. Most were developed for particular applications and are slowly being modified to handle other types of cognitive situations. Digital Reasoning Systems Inc., for example, a company based in Franklin, Tennessee, that developed cognitive computing software for national security purposes, has begun to market intelligent software that analyzes employee communications in financial institutions to determine the likelihood of fraud. Another company, IPsoft Inc., based in New York City, processes spoken words with an intelligent customer agent programmed to interpret what customers want and, when possible, do it for them.
IPsoft, Digital Reasoning, and the original Watson all use similar components, including the ability to classify parts of speech, to identify key entities and facts in text, to show the relationships among entities and facts in a graphical diagram, and to relate entities and relationships with objectives. This category of application is best suited for situations with much more — and more rapidly changing — codified textual information than any human could possibly absorb and retain.
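Two of those components — spotting candidate entities in text and linking co-occurring entities into a relationship graph — can be caricatured in a few lines. This is a deliberately naive sketch, nothing like the trained models these vendors use: it treats any capitalized, non-sentence-initial word as an entity and any two entities in the same sentence as related.

```python
# Crude entity spotting and co-occurrence "relationship" graph.
# Real systems use trained linguistic models; this is a toy heuristic.
import itertools
import re
from collections import defaultdict

def extract_entities(sentence: str):
    """Treat capitalized words (excluding the first token) as entities."""
    tokens = re.findall(r"[A-Za-z]+", sentence)
    return [t for i, t in enumerate(tokens) if i > 0 and t[0].isupper()]

def relationship_graph(sentences):
    """Link entities that appear in the same sentence."""
    graph = defaultdict(set)
    for sentence in sentences:
        for a, b in itertools.combinations(extract_entities(sentence), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

g = relationship_graph(["Yesterday Watson answered questions for IBM researchers."])
print(sorted(g))  # ['IBM', 'Watson']
```

Even this toy version shows why such systems shine at scale: the graph keeps accumulating entities and links across far more text than a person could retain.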
Image identification and classification are hardly new. “Machine vision” based on geometric pattern matching technology has been used for decades to locate parts in production lines and read bar codes. Today, many companies want to perform more sensitive vision tasks such as facial recognition, classification of photos on the Internet, or assessment of auto collision damage. Such tasks are based on machine learning and neural network analysis that can match particular patterns of pixels to recognizable images.
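The older, geometric style of machine vision amounts to sliding a template of pixels across an image and looking for exact matches. The tiny binary "image" below is fabricated; real systems must also tolerate noise, scale, and rotation, which this sketch does not:

```python
# Toy template matching over a binary image: report every position
# where the template matches the image pixels exactly. Illustrative only.

def find_template(image, template):
    th, tw = len(template), len(template[0])
    matches = []
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            if all(image[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                matches.append((r, c))
    return matches

image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
template = [[1, 1],
            [1, 1]]
print(find_template(image, template))  # [(1, 1)]
```

Tasks like facial recognition replace this exact matching with learned statistical patterns over pixels, which is where the neural network methods mentioned above come in.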
The most capable machine learning systems have the ability to “learn” — their decisions get better with more data, and they “remember” previously ingested information. For example, as Watson is introduced to new information, its reservoir of information expands. Other systems in this category get better at their cognitive task by having more data for training purposes. But as Mike Rhodin, senior vice president of business development for IBM Watson, noted, “Watson doesn’t have the ability to think on its own,” and neither does any other intelligent system thus far created.