The Current Hype Cycle in Artificial Intelligence

Over the past decade, the field of artificial intelligence (AI) has seen striking developments. As surveyed in [141], there now exist over twenty domains in which AI programs are performing at least as well as (if not better than) humans.



Prologue

Every decade seems to have its technological buzzwords: we had personal computers in the 1980s; the Internet and the World Wide Web in the 1990s; smartphones and social media in the 2000s; and Artificial Intelligence (AI) and Machine Learning in this decade. However, the field of AI is 67 years old, and this is the fourth in a series of five articles wherein:

  1. The first article discusses the genesis of AI and the first hype cycle during 1950-1982
  2. The second article discusses a resurgence of AI and its achievements during 1983-2010
  3. The third article discusses the domains in which AI systems are already rivaling humans
  4. This article discusses the current hype cycle in Artificial Intelligence
  5. The fifth article discusses what 2018-2035 may portend for brains, minds and machines

The Timeline

Introduction

Over the past decade, the field of artificial intelligence (AI) has seen striking developments. As surveyed in [141], there now exist over twenty domains in which AI programs are performing at least as well as (if not better than) humans. These advances have led to a massive burst of excitement in AI that is highly reminiscent of the one that took place during the 1956-1973 boom phase of the first AI hype cycle [56]. Investors are pouring billions of dollars into AI-based research and startups [143,144,145], and futurists are again beginning to make alarming predictions about the incipience of powerful AI [149,150,151,152]. Many have questioned the future of humans in the job market, claiming that up to 47% of United States jobs are in the high-risk category of being lost to automation by 2033 [146], and some have gone as far as to say that AI could spell the end of humanity [141].

In this article we argue that the short-term effects of AI are unlikely to be as pronounced as these claims suggest. Several of the obstacles that led to the demise of the first AI boom phase over forty years ago remain unresolved today, and it seems that serious theoretical advances will be required to overcome them. Moreover, the present infrastructure is ill-adapted to incorporating AI programs on a large scale, which makes it improbable that AI systems will be able to replace humans en masse any time soon. Therefore, the predictions mentioned above are unlikely to be met in the next fifteen years, and financiers may not receive the expected return on their recent investments in AI.

Audacious Expectations

The recent hype in AI has manifested itself in two forms: striking predictions and massive investments, both of which are discussed below.

Predictions on the Power of AI by 2035

Over the past several years, there has been a growing belief that AI is a limitless, mystical force that is (or will soon be) able to supersede humans and solve any problem. For instance, Ray Kurzweil predicted, "artificial intelligence will reach human levels by around 2029" [150], and Gray Scott stated, "there is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035" [152]. An analogous but more ominous sentiment was expressed by Elon Musk, who wrote, "The pace of progress in artificial intelligence . . . is incredibly fast . . . The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most" [149], and later said, "with artificial intelligence we’re summoning the demon" [152].

Artificial intelligence will reach human levels by around 2029 – Ray Kurzweil, 2014

Although some consider Musk’s vision extreme, there is still a genuine worry among researchers that strong AI will soon have significant consequences for humanity’s future in the job market. For instance, in 2013 two Oxford professors, Frey and Osborne, published an article [146] titled "The Future of Employment: How Susceptible are Jobs to Computerization?," in which they attempted to analyze the proportion of the job market that could become computerized within the next twenty years. They estimated that "47% of total US employment is in the high-risk category, meaning that associated occupations are potentially automatable over some unspecified number of years, perhaps in a decade or two." Following that work, several papers have been written by various consulting firms and think tanks predicting that between 20 and 40 percent of jobs will be lost to automation within the next twenty years [147,148]. Such predictions have sent ripples through the boardrooms of many businesses and the governing bodies of various nations. Since AI systems are expected to reduce labor costs in the U.S. by a factor of ten, such predictions suggest that businesses could become significantly more profitable by employing AI programs instead of humans; however, this could force unemployment rates to reach staggering proportions, causing enormous economic disruption worldwide.

Explosion of Startups and Massive Investments

According to McKinsey and Company, non-tech companies spent between $26 billion and $39 billion on AI in 2016, and tech companies spent between $20 billion and $30 billion on AI [143]. In a similar vein, an explosion of AI-based startups began around 2012 and continues today. In December 2017, AngelList (a website that connects startups with angel investors and job seekers) listed 3,792 AI startups; 2,592 associated angel investors; and 2,521 associated job vacancies. According to Pitchbook, venture capitalists invested $285 million in AI startups in 2007, $4 billion in 2015, $5.4 billion in 2016, and $7.6 billion between January 2017 and October 2017; total venture capital investment in AI between 2007 and 2017 exceeds $25 billion [144]. According to CBInsights, 658 AI startups obtained funding in 2016 alone and are actively pursuing their business plans [145]. In fact, to obtain venture funding today, most new business plans need to have at least some mention of AI.


Figure 1: Venture Funding for AI Startups (Source: Pitchbook)

The First AI Hype Cycle – A Quick Review

The current hype in AI is immensely reminiscent of what took place during the boom phase of the first AI hype cycle between 1956 and 1973 (surveyed in [56], the first article of this series). Indeed, following several prominent advances in AI (such as the first self-learning program, which played checkers, and the introduction of the neural network [5,14]), government agencies and research organizations were quick to invest massive funds in AI research [40,41,42]. Fueled by this popularity, AI researchers were also quick to make audacious predictions about the incipience of powerful AI. For instance, in 1961, Marvin Minsky wrote, "within our lifetime machines may surpass us in general intelligence" [9].

However, this euphoria was short-lived. By the early 1970s, when the expectations for AI had not come to pass, disillusioned investors withdrew their funding; this resulted in the AI bust phase, during which research in AI was slow and even the term "artificial intelligence" was spurned. In retrospect, the demise of the AI boom phase can be attributed to the following two major obstacles:

Limited and Costly Computing Power

Computing power in the 1970s was very costly and not powerful enough to imitate the human brain. For instance, creating an artificial neural network (ANN) the size of a human brain would have consumed the entire U.S. GDP in 1974 [56].

The Mystery Behind Human Thought

Scientists did not understand how the human brain functioned and remained especially unaware of the neurological mechanisms behind creativity, reasoning and humor. The lack of understanding of precisely what machine learning programs should be trying to imitate posed a significant obstacle to moving the theory of AI forward [38]. As succinctly explained by MIT professor Hubert Dreyfus, "the programs lacked the intuitive common sense of a four-year-old," and no one knew how to proceed [154].

The first difficulty mentioned above can be classified as mechanical, whereas the second is conceptual; both were responsible for the end of the first AI boom phase forty years ago. By 1982, Minsky himself had overturned his previously optimistic viewpoint, saying, "I think the AI problem is one of the hardest science has ever undertaken" [155].


Figure 2: Human Brain Whose Functioning Is Not Completely Understood

No Conceptual Breakthroughs

What IBM Watson did in 2011 by beating humans at the game of Jeopardy! was clearly no ordinary feat; however, it only provided factoids as answers to Jeopardy! questions. Some subsequent statements and advertisements regarding IBM Watson implied that it could help in solving some of the harder problems concerning humanity (e.g., helping in cancer research and finding alternate therapies); however, the ensuing work with M. D. Anderson Cancer Center and other hospitals showed that it fell far short of such expectations [156,157]. In fact, none of the AI systems today are even close to being HAL 9000, the artificially intelligent computer portrayed as the antagonist in the movie 2001: A Space Odyssey.

What essentially happened over the past forty years was substantial progress on the first (mechanical) issue mentioned above. The facilitating principle here has been Moore’s law, which predicts that the number of transistors in an electronic circuit should approximately double every two years; as a result, the cost and speed of computing have improved by a factor of roughly 1.1 million since the 1970s [114,115,117]. This led to ubiquitous and inexpensive hardware and connectivity, allowing some of the theoretical advances invented decades ago during the first hype cycle (e.g., backpropagation algorithms), as well as some new techniques, to perform significantly better in practice, largely through tedious application of heuristics and engineering skills.
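As a rough sanity check on that million-fold figure, the short Python sketch below simply compounds a two-year doubling period over roughly four decades; the start and end years are assumptions chosen only for illustration, not figures taken from the cited sources.

```python
# Back-of-the-envelope check of the roughly million-fold improvement claim,
# assuming a clean two-year doubling period and a ~40-year span (both are
# simplifying assumptions).
years = 2017 - 1977              # roughly four decades since the mid-1970s
doublings = years / 2            # Moore's law: one doubling every two years
improvement = 2 ** doublings
print(f"{doublings:.0f} doublings -> ~{improvement:,.0f}x improvement")   # ~1,048,576x
```

Twenty doublings compound to about 1.05 million, which is consistent with the order of magnitude quoted above.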

However, what remains lacking is a conceptual breakthrough that provides insight into the second issue highlighted above: we remain as unsure as ever about how to create a machine that truly imitates intelligent life and how to endow it with "intuition," "common sense," or the ability to perform several tasks well (as humans do). This has resulted in several conspicuous deficiencies (detailed below) in even the most modern AI programs, which render them impractical for large-scale use.

Any sufficiently advanced technology is indistinguishable from magic – Arthur C. Clarke, 1973

Current Deep Learning Networks Are Not Robust

In 2015, Nguyen, Yosinski and Clune examined whether the leading image-recognition deep neural networks were susceptible to false positives [158]. They generated random images by perturbing patterns and showed both the original patterns and their mutated copies to these networks (which had been trained on labeled data from ImageNet). Although the perturbed patterns (eight of which are depicted in Figure 3 below) were essentially meaningless, the networks incorrectly recognized them with over 99% confidence as a king penguin, a starfish, and so on [158].


Figure 3: Source – Research Article by Nguyen, Yosinski and Clune [158]
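To make this kind of fragility concrete, the sketch below performs gradient ascent on random noise until a pretrained classifier assigns high confidence to an arbitrary class. It is only a toy illustration under assumed settings: ResNet-18 rather than the networks tested in [158], gradient ascent rather than the evolutionary search Nguyen et al. actually used, and an assumed class index and hyperparameters.

```python
# Toy "fooling image" sketch: optimize random noise so that a pretrained
# classifier becomes confident it depicts a chosen class. This is NOT the
# method of Nguyen et al. [158]; it is only an analogous gradient-based
# illustration with assumed settings.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Standard ImageNet normalization constants.
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

target_class = 145                                    # "king penguin" in the usual ImageNet index (assumed)
x = torch.rand(1, 3, 224, 224, requires_grad=True)    # start from pure noise

optimizer = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    logits = model((x.clamp(0, 1) - mean) / std)
    loss = -logits[0, target_class]                   # ascend the target-class logit
    loss.backward()
    optimizer.step()

probs = F.softmax(model((x.clamp(0, 1) - mean) / std), dim=1)
print(f"Confidence in target class: {probs[0, target_class].item():.1%}")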

Another research group showed that, by wearing certain psychedelic spectacles, ordinary people could fool a facial recognition system into thinking they were celebrities; this would permit people to impersonate one another without being detected by such a system [159,160]. Similarly, in 2017 researchers added stickers to stop signs that caused an ANN to misclassify them, a failure that could have grave consequences for autonomous driving [161]. Until these AI systems become robust enough not to be deceived by such perturbations, it will be infeasible for industries to adopt them extensively [162].


Figure 4: Source – Research Articles [159] and [161]

Machine Learning Systems Remain Inefficient

Machine learning algorithms require several thousand to several million pictures of cats (and non-cats) before they can start to accurately distinguish them. This could pose a significant issue for an AI system that finds itself in a situation substantially different from those seen in the past (e.g., during a stock market crash); since the AI system may not have a large enough data set to be adequately trained on, it may simply fail.

Deep Learning Networks Are Hard to Improve

Although deep learning algorithms sometimes produce superior results, they are usually "black boxes," and even researchers are currently unable to develop a theoretical framework for understanding how or why they give the answers they do. For example, the Deep Patient program built by Dudley and his colleagues at Mount Sinai Hospital can largely anticipate the onset of schizophrenia, but Dudley regretfully remarked, "we can build these models, but we don’t know how they work" [109,141].

Since we do not understand the direct causal relationship between the data processed by an AI program and its eventual answer, systematically improving AI programs remains a serious challenge and is typically done through trial and error. For the same reason, it is difficult to fix such solutions if something goes wrong [163,171]. In view of these issues, and the fact that humans are taught to "treat" causes (and not symptoms), it is unlikely that many industries will be able to rely on deep learning programs, at least any time soon. For instance, a doctor is not likely to administer a drug to a patient based solely on a program’s prediction that the patient will soon become schizophrenic. Indeed, if the program’s prediction were inaccurate and the patient suffered an illness due to a side effect of the inappropriately administered drug, the doctor could face censure, with his or her only defense being that the trust placed in the AI system was misplaced.

Moore’s Law Will Likely End Within a Decade

As mentioned previously, Moore’s law, which predicts an exponential growth rate for the number of transistors in a circuit, has been the most influential reason for our progress in AI. The size of today’s transistors can be reduced by at most a factor of 4,900 before reaching the theoretical limit of one silicon atom. In 2015, Moore himself said, "I see Moore’s law dying here in the next decade or so" [117]. Thus, it is unclear whether strong, general-purpose AI systems will be developed before the demise of Moore’s law.
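One plausible back-of-the-envelope reading of the 4,900 figure, sketched below, treats it as an area shrink from a roughly 14 nm transistor feature down to the roughly 0.2 nm diameter of a silicon atom; both numbers, and the interpretation itself, are assumptions for illustration rather than figures taken from [117].

```python
# A speculative reconstruction of where a figure like 4,900 could come from:
# shrinking a ~14 nm feature to a ~0.2 nm silicon atom is a 70x linear shrink,
# i.e. roughly 4,900x in area. All values here are assumptions for illustration.
feature_nm = 14.0    # assumed leading-edge transistor feature size circa 2015
atom_nm = 0.2        # assumed approximate diameter of a silicon atom
linear_shrink = feature_nm / atom_nm
print(f"Linear shrink: {linear_shrink:.0f}x, area shrink: {linear_shrink ** 2:,.0f}x")
```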

Short-Term Impact of AI Systems on Human Society

Ill-Adapted Infrastructure

Even if the previously mentioned issues with AI systems are resolved, the following reasons indicate that a significant amount of effort must still be exerted to adapt the infrastructure of companies, organizations and governments to widely incorporate AI programs:

  • Modern AI algorithms often require substantial processing power (typically several thousand cores). However, very few organizations have installed parallel computing infrastructure with more than 100 cores in a single cluster, and installing larger-scale systems is time-consuming and expensive. In principle, it is possible for organizations to send their data to tech companies (such as Amazon Web Services or Google Cloud), which can provide parallel and distributed computing on a larger scale. However, many organizations are unable to use such services for risk (e.g., data breach) and compliance reasons.
  • Data in large firms often resides in thousands of different databases and locations; most of this data is "noisy," and AI systems are likely to fail if they use it in its current form. Cleansing and harmonizing data is an arduous and time-consuming task, and data engineering is currently the largest bottleneck in deploying AI systems, accounting for more than 60% of their time and cost (a minimal, hypothetical sketch of such harmonization appears after this list). Most organizations have yet to begin cleaning their data on a large scale, and the few that have begun to do so expect it to take between two and five years.
  • Changes upstream or downstream (e.g., due to changing regulations or business environments) may force the AI system to be reconfigured, re-trained, re-validated and re-tested; as mentioned previously, this could cause serious issues due to a potential lack of new training data for the AI program.
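The sketch below illustrates the flavor of the data cleansing and harmonization work mentioned in the second bullet: reconciling identifiers, date formats and names across two source systems. Every column name, value and source in it is invented purely for illustration.

```python
# Hypothetical illustration of data cleansing/harmonization across two source
# systems; all schemas, column names and values below are invented examples.
import pandas as pd

crm = pd.DataFrame({
    "Cust_ID": ["A1", "A2"],
    "Signup": ["01/03/2015", "12/11/2016"],          # day/month/year strings
    "Name": ["ACME Corp.", "Globex  "],
})
billing = pd.DataFrame({
    "customer_id": ["a1", "a3"],
    "signup_date": ["2015-03-01", "2017-05-20"],     # ISO date strings
    "name": ["Acme Corporation", "Initech"],
})

def harmonize(df, id_col, date_col, name_col, date_fmt=None):
    """Map one source's schema onto a common, cleaned schema."""
    out = pd.DataFrame()
    out["customer_id"] = df[id_col].str.strip().str.upper()
    out["signup_date"] = pd.to_datetime(df[date_col], format=date_fmt)
    out["name"] = df[name_col].str.strip().str.lower()
    return out

clean = pd.concat([
    harmonize(crm, "Cust_ID", "Signup", "Name", date_fmt="%d/%m/%Y"),
    harmonize(billing, "customer_id", "signup_date", "name"),
]).drop_duplicates(subset="customer_id", keep="first").reset_index(drop=True)

print(clean)
```

Even this toy version requires source-by-source knowledge of formats and conventions; multiplied across thousands of databases, it is easy to see why such work can dominate the time and cost of deploying AI systems.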


Figure 5: Harmonizing Big Data is Like Making Sense of Numerous Diverse Pieces of a Puzzle

Short-Term Future of Human Work

Although Frey and Osborne predicted that "47% of total US employment is in the high-risk category, … perhaps in a decade or two" [146], the research divisions of large technology companies, strategy firms and think tanks have a long history of underestimating, by at least a factor of two, how long it takes for specific technological advances to affect human society. The reason is that these analyses often fail to address a crucial point: the global economy is not frictionless, and changes take time. Humans are quick to adapt to modern technology if it helps them but are very resistant if it hurts them, and this phenomenon is hard to quantify.

For example, over the past four decades, there has already been a significant opportunity to reduce labor costs in high-wage countries, by a factor of four, through outsourcing. Outsourcing of manufacturing jobs from the U.S. to lower-wage countries started around 1979, and yet the U.S. had cumulatively lost fewer than 8 million manufacturing jobs to outsourcing by 2016 [168]. Similarly, outsourcing of service jobs from the U.S. began in the 1990s, but the U.S. has cumulatively lost only around 5 million such jobs. Hence, job losses due to outsourcing have totaled around 13 million, or around 8% of the 161 million people in the U.S. working population. If the global economy were frictionless, most if not all of the 47% of jobs identified by Frey and Osborne would have already been lost to outsourcing. The fact that this did not happen during the last 20 to 40 years casts some doubt on the claim that it will happen within the next fifteen years due to AI, especially in view of the above-mentioned inadequacies of the modern infrastructure needed to incorporate AI programs.

Furthermore, unless there are major conceptual breakthroughs, the previously mentioned deficiencies of AI programs also make significant job loss to automation unlikely within the next fifteen years. Indeed, AI systems do very well at what they have learned but falter quickly if their inputs or rules are perturbed even slightly. Humans could easily exploit this fact if their jobs were at stake; for example, if autonomous driving software were widely used, taxi drivers could collude with others to introduce malicious software that causes accidents (or place stickers on stop signs, as mentioned above). Similarly, ANNs can be defeated with continuously mutating malware that attacks their defenses [161,162]; miscreants could also use such techniques to smuggle false data into ANN training sets, thereby disrupting the learning process of the AI system. Although these activities would certainly be illegal if AI systems gained wide usage, the risks and consequences of such illicit actions would likely be too great for large-scale AI incorporation to gain public support.

Finally, it is worth mentioning that additional jobs will be created over the next fifteen years that were not accounted for in the analysis of Frey and Osborne [146] or in any of the subsequent analyses [147,148].

Returns on Investments

For the reasons mentioned above, the adoption and implementation of AI systems are likely to be slower than what investors have envisioned. It is therefore unclear whether financiers will reap the benefits of their AI investments, which have totaled over $25 billion during the last ten years, especially since only 11 of the funded private companies have a market valuation of $1 billion or more. In fact, out of 70 merger and acquisition deals in AI since 2012, 75% sold for below $50 million and were "acqui-hires" (companies acquired for talent rather than business performance); most of the companies financed by investors raised less than $10 million [169]. It is possible that, at least for now, small investments in AI held for a brief period might yield good but not outstanding returns. However, this is in stark contrast to the standard venture investing model, which expects investors to put in more money and get back 8 to 14 times their investment within 4 to 7 years.

Conclusion

The past several years have marked the beginning of a new hype cycle in AI. Recent developments in the field (surveyed in [141]) have captured the interest of researchers and the public, who are beginning to make alarming predictions about the incipience of powerful AI, and of financiers, who are making massive investments in AI research and startups. This is reminiscent of the first AI boom phase, which took place over forty-five years ago. There too, the field of AI saw many striking advances, audacious predictions, and massive investments. Eventually, that boom phase collapsed, primarily for two reasons. The first was mechanical, due to limited and costly computational power in the 1970s; the second was conceptual, due to a lack of understanding of "intuition" and "human thought," and of how to make computers that could imitate humans.

Largely due to Moore’s law, the first issue has been substantially resolved; the cost and power of hardware have improved by a factor of over one million during the past forty years, providing ubiquitous and affordable hardware that can be used to build better AI programs. The second issue, however, is still unresolved. In addition, there are major deficiencies in even the most modern AI programs: they remain sensitive to perturbations, are inefficient learners, and are difficult to improve upon. Even if these issues were resolved, the current infrastructures of businesses and governments seem ill-equipped to quickly incorporate AI programs on a large scale. Hence, it is unlikely that the audacious expectations of researchers, investors, and the public regarding AI will be realized in the next fifteen years.

While much of the excitement in AI has led to striking developments [141], much of it also appears to be based on "irrational exuberance" [52] rather than on facts. It seems that we still require significant breakthroughs before AI systems can truly imitate intelligent life. As John McCarthy noted in 1977, creating a human-like AI computer will require "conceptual breakthroughs," because "what you want is 1.7 Einsteins and 0.3 of the Manhattan Project, and you want the Einsteins first. I believe it’ll take 5 to 500 years" [43]. His statement seems just as applicable now, over forty years later.


Figure 6: Mapping Neurons of a Rodent’s Brain (Source: [176])

However, researchers are continuing to pioneer paths that might lead to such breakthroughs. For example, since ANNs and reinforcement learning systems derive their inspiration from neuroscience, some academics believe that new conceptual insights will require multidisciplinary research combining biology, mathematics and computer science. In fact, the MICrONS (Machine Intelligence from Cortical Networks) project is the first attempt at mapping a portion of a rodent’s brain containing around 100,000 neurons and about a billion synapses. The U.S. Government (via IARPA) has funded this hundred-million-dollar research initiative, and neuroscientists and computer scientists from Harvard University, Princeton University, Baylor College of Medicine and the Allen Institute for Brain Science are collaborating to make it successful [170]. The computational aims of MICrONS include learning to perform complex information-processing tasks such as one-shot learning, unsupervised clustering, and scene parsing, with the eventual goal of achieving human-like proficiency. If successful, this project may create the foundational blocks for the next generation of AI systems.

The MICrONS (Machine Intelligence from Cortical Networks) project is the first attempt at mapping a portion of a rodent’s brain containing around 100,000 neurons and about a billion synapses. If successful, this project may create the foundational blocks for the next generation of AI systems.

 
Bio: Dr. Alok Aggarwal is CEO and Chief Data Scientist at Scry Analytics, Inc. He was previously at IBM Research in Yorktown Heights, founded the IBM India Research Lab, and was founder and CEO of Evalueserve, which employed over 3,000 people worldwide. He started Scry Analytics in 2014.

Original. Reposted with permission.
