Just How Smart Are Smart Machines?

The number of sophisticated cognitive technologies that might be capable of cutting into the need for human labor is expanding rapidly. But linking these offerings to an organization’s business needs requires a deep understanding of their capabilities. Here we examine four levels of intelligence across task types.



PERFORMING DIGITAL TASKS

One of the more pragmatic roles for cognitive technology in recent years has been to automate administrative tasks and decisions. To make automation possible, two technical capabilities are necessary. First, you need to be able to express the decision logic in terms of “business rules.” Second, you need technologies that can move a case or task through the series of steps required to complete it. Over the past couple of decades, automated decision-making tools have been used to support a wide variety of administrative tasks, from insurance policy approvals to information technology (IT) operations to high-speed trading.
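
To make those two capabilities concrete, here is a minimal sketch in Python: decision logic expressed as business rules, plus a simple workflow that moves a case through its steps. The rules, thresholds, and field names are invented for illustration and don’t reflect any particular vendor’s product.

```python
# Capability 1: decision logic expressed as "business rules."
# Capability 2: a workflow that moves a case through a series of steps.
# All rules, thresholds, and field names are illustrative.

def underwriting_rules(case):
    """Hypothetical business rules for an insurance policy approval."""
    if case["claims_last_3_years"] > 2:
        return "refer_to_human"
    if case["coverage_requested"] <= 500_000 and case["risk_score"] < 0.4:
        return "approve"
    return "refer_to_human"

WORKFLOW = ["validate_input", "apply_rules", "record_decision"]

def process_case(case):
    """Move a case through each workflow step in order."""
    for step in WORKFLOW:
        if step == "validate_input":
            assert "risk_score" in case, "incomplete application"
        elif step == "apply_rules":
            case["decision"] = underwriting_rules(case)
        elif step == "record_decision":
            print(f"case {case['id']}: {case['decision']}")
    return case

process_case({"id": 101, "claims_last_3_years": 0,
              "coverage_requested": 250_000, "risk_score": 0.2})
```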

Lately, companies have begun using “robotic process automation,” which uses workflow and business rules technology to interface with multiple information systems as if it were a human user. Robotic process automation has become popular in banking (for back-office customer service tasks, such as replacing a lost ATM card), insurance (for processing claims and payments), IT (for monitoring system error messages and fixing simple problems), and supply chain management (for processing invoices and responding to routine requests from customers and suppliers).
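
A toy illustration of the idea follows; the three system classes are stand-ins for the real banking applications an RPA robot would drive, and the class and method names are invented.

```python
# A software "robot" performing the same sequence of steps, across the
# same systems, that a human clerk would. The classes below are stand-ins
# for real banking applications.

class CRMSystem:
    def lookup_customer(self, customer_id):
        return {"id": customer_id, "address": "1 High St"}

class CardSystem:
    def cancel_card(self, customer_id):
        print(f"card cancelled for {customer_id}")
    def issue_replacement(self, customer_id):
        return "CARD-NEW-001"

class MailSystem:
    def send(self, address, item):
        print(f"mailing {item} to {address}")

def replace_lost_atm_card(customer_id):
    """The sequence of screens and clicks a human would otherwise perform."""
    crm, cards, mail = CRMSystem(), CardSystem(), MailSystem()
    customer = crm.lookup_customer(customer_id)
    cards.cancel_card(customer["id"])
    new_card = cards.issue_replacement(customer["id"])
    mail.send(customer["address"], new_card)

replace_lost_atm_card("CUST-42")
```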

The benefits of process automation can add up quickly. An April 2015 case study at Telefónica O2, the second-largest mobile carrier in the United Kingdom, found that the company had automated over 160 process areas using software “robots.” The overall three-year return on investment was between 650% and 800%.

PERFORMING PHYSICAL TASKS

Physical task automation is, of course, the realm of robots. Though people love to call every form of automation technology a robot, one of Merriam-Webster’s definitions is more precise: “a machine that can do the work of a person and that works automatically or is controlled by a computer.”

In 2014, companies installed about 225,000 industrial robots globally, more than one-third of them in the automotive industry. Yet robots often fall well short of expectations. In 2011, the founder of Foxconn Technology Group, a Taiwan-based multinational electronics contract manufacturer, said he would install one million robots within three years, replacing one million workers. The company found, however, that building smartphones with robots alone was easier said than done. To assemble new iPhone models in 2015, Foxconn planned to hire more than 100,000 new workers and install about 10,000 new robots.

Historically, robots that replaced humans required extensive programming to do repetitive tasks, and for safety reasons they had to be segregated from human workers. However, a new type of robot — often called a “collaborative robot” — can work safely alongside humans and can be programmed simply by having a person guide its arms through the desired motions.
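
Conceptually, that kind of teaching by demonstration might look like the sketch below, where a controller records poses as the human guides the arm and then replays them. The Arm class is an imagined stand-in for a real robot’s control interface.

```python
# A conceptual sketch of "programming by demonstration": record poses
# while a human guides the arm, then replay them. The Arm class is an
# imagined stand-in for a real robot's control interface.

class Arm:
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)   # x, y, z position (illustrative units)
    def read_pose(self):
        return self.pose
    def move_to(self, pose):
        self.pose = pose
        print(f"moving to {pose}")

def record_demonstration(arm, guided_poses):
    """Capture each pose as the human physically moves the arm."""
    waypoints = []
    for pose in guided_poses:
        arm.pose = pose               # stands in for the human's guidance
        waypoints.append(arm.read_pose())
    return waypoints

def replay(arm, waypoints):
    """Reproduce the taught motion."""
    for pose in waypoints:
        arm.move_to(pose)

arm = Arm()
taught = record_demonstration(arm, [(0.1, 0.0, 0.2), (0.1, 0.3, 0.2)])
replay(arm, taught)
```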

Robots have varying degrees of autonomy. Some, such as remotely piloted drones, robotic surgical instruments, and mining equipment, are designed to be manipulated by humans. Others become at least semiautonomous once programmed but have limited ability to respond to unexpected conditions. As robots gain more intelligence, better machine vision, and greater decision-making ability, they will integrate other types of cognitive technologies while retaining their ability to transform the physical environment. IBM Watson software, for example, has been installed in several different types of robots.

THE GREAT CONVERGENCE

Slowly but surely, the worlds of artificially intelligent software and robots seem to be converging, and the boundaries between different cognitive technologies are blurring. In the future, robots will be able to learn and sense context, robotic process automation and other digital task tools will improve, and smart software will be able to analyze more intricate combinations of numbers, text, and images.

We anticipate that companies will develop cognitive solutions using the building blocks of application programming interfaces (APIs). One API might handle language processing, another numerical machine learning, and a third question-and-answer dialogue. While these elements would interact with each other, determining which APIs are required will demand a sophisticated understanding of cognitive solution architectures.
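
A rough sketch of how such a composition might be wired together follows; each function is a placeholder for a call to a separate service, not any vendor’s actual endpoint.

```python
# Assembling a cognitive solution from building-block APIs. Each function
# is a placeholder for a call to a separate service; none of these are
# real vendor endpoints.

def language_api(text):
    """Placeholder: extract intent and entities from free text."""
    return {"intent": "report_claim", "entities": {"policy": "P-77"}}

def ml_scoring_api(features):
    """Placeholder: numerical machine-learning model returning a score."""
    return 0.12  # e.g., predicted fraud risk

def qa_dialogue_api(intent, score):
    """Placeholder: question-and-answer service that drafts a reply."""
    if score < 0.5:
        return f"Your {intent.replace('_', ' ')} has been accepted."
    return "We need more information before proceeding."

def handle_message(text):
    """The architect's job: choosing which APIs to chain, and in what order."""
    parsed = language_api(text)
    risk = ml_scoring_api(parsed["entities"])
    return qa_dialogue_api(parsed["intent"], risk)

print(handle_message("I'd like to report a claim on policy P-77."))
```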

This modular approach is the direction in which key vendors are moving. IBM, for example, has disaggregated Watson into a set of services — a “cognitive platform,” if you will — available by subscription in the cloud. Watson’s original question-and-answer services have been expanded to include more than 30 other types, including “personality insights” to gauge human behavior, “visual recognition” for image identification, and so forth. Other vendors of cognitive technologies, such as Cognitive Scale Inc., based in Austin, Texas, are also integrating multiple cognitive capabilities into a “cognitive cloud.”

Despite the growing capabilities of cognitive technologies, most organizations exploring them are starting with small projects that test the technology in a specific domain. But others have much bigger ambitions. For example, Memorial Sloan Kettering Cancer Center, in New York City, and the University of Texas MD Anderson Cancer Center, in Houston, Texas, are taking a “moon shot” approach, marshaling cognitive tools like Watson to develop better diagnostic and treatment approaches for cancer.

DESIGNING A COGNITIVE ARCHITECTURE

Hardware and software will continue to get better, but rather than waiting for next-generation options, managers should be introducing cognitive technologies to workplaces now and discovering their human-augmenting value. The most sophisticated managers will create IT architectures that support more than one application. Indeed, we expect to see organizations building “cognitive architectures” that interface with, but are distinct from, their regular IT architectures. What would that mean? We think a well-designed cognitive architecture would emphasize several attributes:

THE ABILITY TO HANDLE A VARIETY OF DATA TYPES

Cognitive insights don’t just come from a single data type (text, for example). In the future, they will come from combining text, numbers, images, speech, genomic data, and so forth to develop broad situational awareness.
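
Schematically, that fusion might look like the sketch below, where per-modality encoders (all placeholders here) feed one combined feature set.

```python
# Combining several data types into a single feature set. Each encoder is
# a placeholder for a real model (text embedding, image classifier, etc.).

def encode_text(note):      return [len(note.split())]   # placeholder feature
def encode_numbers(vitals): return list(vitals)          # pass-through
def encode_image(scan):     return [0.7]                 # placeholder score

def fuse(note, vitals, scan):
    """Concatenate per-modality features into one situational picture."""
    return encode_text(note) + encode_numbers(vitals) + encode_image(scan)

features = fuse("patient reports chest pain", (120, 80), "scan_001.png")
print(features)
```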

THE ABILITY TO LEARN

Although this should be the essence of cognitive technologies, most systems today (such as rules engines and robotic process automation) don’t improve themselves. If you have a choice between a system that learns and one that doesn’t, go with the former.
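
The difference can be shown with a toy contrast: a fixed rule behaves identically forever, while a learning decision nudges its own threshold whenever feedback says it was wrong. The updating scheme below is deliberately simplistic.

```python
# A fixed rule versus a system that improves with feedback.

FIXED_THRESHOLD = 0.5

def static_decision(score):
    """A rules-engine-style decision: identical forever."""
    return score > FIXED_THRESHOLD

class LearningDecision:
    """Nudges its threshold whenever feedback says it was wrong."""
    def __init__(self, threshold=0.5, step=0.05):
        self.threshold, self.step = threshold, step
    def decide(self, score):
        return score > self.threshold
    def give_feedback(self, score, correct_answer):
        # Move the threshold only when the decision was wrong.
        if self.decide(score) != correct_answer:
            self.threshold += self.step if not correct_answer else -self.step

model = LearningDecision()
print(model.decide(0.55))                        # True under initial threshold
model.give_feedback(0.55, correct_answer=False)  # that was a false positive
print(model.decide(0.55))                        # now False: the system adjusted
```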

TRANSPARENCY

Humans and cognitive technologies will be working together for the foreseeable future, and humans will always want to know how a cognitive technology arrived at its decision or recommendation. If people can’t open the “black box,” they won’t trust it. This is a key aspect of augmentation, and one that will facilitate rapid adoption of these technologies.
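
One simple pattern for keeping the box open is to return the reasons alongside the recommendation, as in this illustrative sketch (the rules and fields are invented):

```python
# Return the reasons alongside the recommendation so a human can inspect
# them. The rules and field names are illustrative.

def recommend_with_explanation(applicant):
    reasons = []
    if applicant["income"] >= 3 * applicant["monthly_payment"]:
        reasons.append("income covers payment 3x over")
    if applicant["missed_payments"] == 0:
        reasons.append("no missed payments on record")
    decision = "approve" if len(reasons) == 2 else "refer_to_human"
    return decision, reasons

decision, reasons = recommend_with_explanation(
    {"income": 6_000, "monthly_payment": 1_500, "missed_payments": 0})
print(decision, reasons)  # approve, with both supporting reasons listed
```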

A VARIETY OF HUMAN ROLES

Once programmed, some cognitive technologies, like most industrial robots, run their assigned process. By contrast, with surgical robots it’s assumed that a human is in charge. In the future, we will probably need multiple control modes. As with self-driving vehicles, there needs to be a way for the human to take control. Having multiple means of control is another way to facilitate augmentation rather than automation.
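
A skeleton of what multiple control modes with a guaranteed human-override path might look like; the modes and actions are illustrative.

```python
# Multiple control modes with an override path that always lets the
# human take charge. Modes and actions are illustrative.

from enum import Enum

class Mode(Enum):
    AUTONOMOUS = 1   # runs its assigned process once programmed
    SUPERVISED = 2   # acts, but a human approves each step
    MANUAL = 3       # human is in charge, machine assists

class Controller:
    def __init__(self):
        self.mode = Mode.AUTONOMOUS
    def human_override(self):
        """The human can always seize control, whatever the current mode."""
        self.mode = Mode.MANUAL
    def step(self, action, human_approves=False):
        if self.mode is Mode.AUTONOMOUS:
            return f"executing {action}"
        if self.mode is Mode.SUPERVISED and human_approves:
            return f"executing {action} (approved)"
        return f"awaiting human for {action}"

c = Controller()
print(c.step("weld joint"))   # executing weld joint
c.human_override()
print(c.step("weld joint"))   # awaiting human for weld joint
```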

FLEXIBLE UPDATING AND MODIFICATION

One of the reasons why rule-based systems have become successful in insurance and banking is that users can modify the rules. But modifying and updating most cognitive systems is currently a task only for experts. Future systems will need to be more flexible.
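
One way to achieve that flexibility is to keep the rules as data rather than code, so a business user can add or adjust a rule without a programmer. The schema below is hypothetical.

```python
# Rules kept as data rather than code: editing the JSON changes behavior
# with no code change. The rule schema is hypothetical.

import json

RULES_JSON = """
[
  {"field": "claim_amount", "op": "lt",  "value": 10000, "action": "auto_pay"},
  {"field": "claim_amount", "op": "gte", "value": 10000, "action": "review"}
]
"""

OPS = {"lt": lambda a, b: a < b, "gte": lambda a, b: a >= b}

def evaluate(claim, rules):
    """Return the action of the first rule that matches the claim."""
    for rule in rules:
        if OPS[rule["op"]](claim[rule["field"]], rule["value"]):
            return rule["action"]
    return "review"

rules = json.loads(RULES_JSON)
print(evaluate({"claim_amount": 4_500}, rules))   # auto_pay
```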

ROBUST REPORTING CAPABILITIES

Cognitive technologies will need to be accountable to the rest of the organization, as well as to other stakeholders. We’ve spoken, for example, with representatives of several companies using automated systems to buy and place digital ads, and they say that customers insist on detailed reporting so that the data can be “sliced and diced” in many different ways.
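
In practice, that can mean logging every automated decision as a structured record so it can be aggregated along any dimension. A small sketch with invented fields, using pandas:

```python
# Every automated ad-buying decision logged as a structured record, so
# the results can be "sliced and diced" by any dimension. Field names
# and figures are invented.

import pandas as pd

decisions = pd.DataFrame([
    {"campaign": "spring", "channel": "search", "spend": 120.0, "clicks": 40},
    {"campaign": "spring", "channel": "social", "spend": 200.0, "clicks": 55},
    {"campaign": "summer", "channel": "search", "spend": 90.0,  "clicks": 30},
])

# Slice by campaign, by channel, or both, from the same underlying log.
print(decisions.groupby("campaign")[["spend", "clicks"]].sum())
print(decisions.groupby(["campaign", "channel"])["spend"].sum())
```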

STATE-OF-THE-ART IT HYGIENE

Cognitive technologies will need all the attributes of modern information systems, including an easy user interface, state-of-the-art data security, and the ability to handle multiple users at once. Companies won’t want to compromise on any of these objectives in the cognitive space, and eventually they won’t have to.

What’s more, if the managerial goal is augmentation rather than automation, it’s essential to understand how human capabilities fit into the picture. People will continue to have advantages over even the smartest machines. They are better able to interpret unstructured data — for example, the meaning of a poem or whether an image is of a good neighborhood or a bad one. They have the cognitive breadth to simultaneously do a lot of different things well. The judgment and flexibility that come with these basic advantages will continue to be the basis of any enterprise’s ability to innovate, delight customers, and prevail in competitive markets — where, soon enough, cognitive technologies will be ubiquitous.

Clearly, smart machines are improving at the things they do well at a much faster rate than humans are. And granted, many workers will need to call on and cultivate different capabilities than the ones they have relied on in the past. But for the foreseeable future, there are still countless ways for humans to contribute tremendous value. To the extent that wise managers combine human talent with advanced technology, we can all stop dreading the rise of smart machines.

This post originally appeared in LinkedIn Pulse and was also published by MIT Sloan Management Review and the International Institute for Analytics. Reposted with permission.

Bio: Tom Davenport helps guide IIA’s research efforts. He is the President’s Distinguished Professor of IT and Management at Babson College and a research fellow at the MIT Center for Digital Business. Tom’s “Competing on Analytics” idea was named by Harvard Business Review as one of the twelve most important management ideas of the past decade, and the related article was named one of the ten “must-read” articles in HBR’s 75-year history. His most recent book, co-authored with Julia Kirby, is Only Humans Need Apply: Winners and Losers in the Age of Smart Machines.
