Artificial Intelligence and Life in 2030

Read this engaging overview of a report from the Stanford University One Hundred Year Study on Artificial Intelligence, “a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society.”



Artificial Intelligence and Life in 2030, Stanford University, 2016


Strictly speaking, this isn’t a research paper; it’s a report from the Stanford University One Hundred Year Study on Artificial Intelligence, “a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society.” A 100-year study is hard to imagine (and it takes a certain chutzpah to announce one!), but thankfully we don’t have to wait 100 years for the first results. Every five years, a study panel is convened to assess the current state of AI, and what we have here is the first report. It focuses on the impact AI will have on life and society in cities by 2030 – close enough to be imaginable, without straying into science fiction. The report runs to 50 pages and is written for an audience that includes the general public. As such, it lacks the technical depth that we’re used to on The Morning Paper. Nevertheless, I hope the existence of the project and its deliberations will be of interest to many of you.

Another slight disappointment on first read is that I didn’t find much here that is truly visionary, or of the “oh wow, I never thought about that” kind (your mileage may vary, of course!). Reflecting on this, maybe that’s part of the point – it takes time for innovations to achieve broad societal impact, and we shouldn’t expect overnight transformations. One of the things that hanging around the VC industry really reinforces is that timing matters! Or to paraphrase Simon Wardley: sometimes you can know what is going to happen but not when, and sometimes you know when something will happen but not what or how. Getting both the what and the when right is asking a lot! Nevertheless, I do hope that in the next 13-14 years there are at least a few breakthroughs that surprise us. Given the pace of development in “AI”, that wouldn’t surprise me in the least!

What you’ll find in the (50 page) report is a short overview of current hot AI research trends (it really is short, sadly), followed by a domain-by-domain look at the likely impacts in transportation, home/service robots, healthcare, education, low-resource communities, public safety and security, employment and workplace, and entertainment. By focusing on general life in cities, of course, we miss out on many other areas ripe for impact by AI (I’m using that term in the broadest sense), including manufacturing, agriculture, retail, finance, and a whole host of other verticals. The report closes with a section of recommendations for AI public policy, which I’m not going to cover.

Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future. Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy are likely to emerge between now and 2030, the period this report considers. At the same time, many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.

Hot Topics in AI Research

  • Large scale machine learning – scaling existing algorithms, and designing new algorithms, to work with extremely large data sets.
  • Deep learning, which is making significant in-roads into areas of perception (audio, speech, vision, natural language processing) and others besides. (For a great in-depth overview of deep learning and associated techniques see ‘Deep Learning in Neural Networks: An Overview’).
  • Reinforcement learning based on experience-driven sequential decision making, i.e. the shift from pattern mining to decision making in an interactive context. “It promises to carry AI applications forward toward taking actions in the real world.” (A minimal sketch follows this list.)
  • Robotics, especially manipulation of objects in interactive environments, and building upon advances in perception in other areas.
  • Computer Vision – the sub-area of AI most transformed by the rise of deep learning. “For the first time, computers are able to perform some vision tasks better than people.”
  • Natural Language Processing – often coupled with automatic speech recognition – is quickly becoming a commodity for widely spoken languages with large data sets. Research is shifting towards developing systems that interact with people through dialog.
  • Collaborative systems in which autonomous systems work collaboratively with other systems and humans.
  • Algorithmic game theory and computational social choice, looking at how systems can handle potentially misaligned incentives, including self-interested human participants or firms and the automated AI-based agents representing them.

Topics receiving attention include computational mechanism design (an economic theory of incentive design, seeking incentive-compatible systems where inputs are truthfully reported), computational social choice (a theory for how to aggregate rank orders on alternatives), incentive aligned information elicitation (prediction markets, scoring rules, peer prediction) and algorithmic game theory (the equilibrium of markets, network games, and parlor games such as Poker - a game where significant advances have been made in recent years through abstraction techniques and no-regret learning).

  • IoT research, in which abundant sensory information is used for intelligent purposes.
  • Neuromorphic computing – a set of technologies seeking to mimic biological neural networks to improve the hardware efficiency and robustness of computing systems.
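
To make the reinforcement learning item a little more concrete, here’s a minimal tabular Q-learning sketch (my own toy illustration, not something from the report): an agent in a made-up five-state corridor learns the value of each (state, action) pair purely from experienced rewards – the “experience-driven sequential decision making” described above.

```python
import random

# Toy corridor: states 0..4; actions: 0 = left, 1 = right.
# Reaching state 4 (the goal) yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Hypothetical environment dynamics, invented for this illustration."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: estimated return for each (state, action) pair.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge towards reward + discounted best future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)   # after training, 'right' has the higher value in every state
```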

I haven’t covered much at all in The Morning Paper on algorithmic game theory or neuromorphic computing. If any readers have suggestions for key papers to look to here please do let me know.

Over the next fifteen years, the Study Panel expects an increasing focus on developing systems that are human-aware, meaning that they specifically model, and are specifically designed for, the characteristics of the people with whom they are meant to interact…. In the coming years, new perception/object recognition capabilities and robotic platforms that are human-safe will grow, as will data-driven products and their markets. The Study Panel also expects a reemergence of some of the traditional forms of AI, as practitioners come to realize the inevitable limitations of purely end-to-end deep learning approaches. We encourage young researchers not to reinvent the wheel, but rather to maintain an awareness of the significant progress in many areas of AI during the first fifty years of the field, and in related fields such as control theory, cognitive science, and psychology.

On that last point, one of my favorite recent papers is ‘Towards deep symbolic reinforcement learning’.

Let’s take a very brief dip into some of the areas of application:

Transportation

The report includes an interesting table showing the historical timeline of technology introduction into commercial cars.

And see also this great piece from Mashable on what manufacturers are up to next.

In the near future, sensing algorithms will achieve super-human performance for capabilities required for driving. Automated perception, including vision, is already near or at human-level performance for well-defined tasks such as recognition and tracking. Advances in perception will be followed by algorithmic improvements in higher level reasoning capabilities such as planning.

Beyond self-driving cars, we’ll have a variety of autonomous vehicles including robots and drones.

AI also has the potential to transform city transportation planning, but is being held back by a lack of standardisation in the sensing infrastructure and AI techniques used.

Accurate predictive models of individuals’ movements, their preferences, and their goals are likely to emerge with the greater availability of data.

That last sentence is worth reflecting on for a while. It does indeed seem highly likely to happen, but that doesn’t mean we have to like what it might mean for society.
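
To give a feel for how little it takes (my own toy sketch, not anything proposed in the report), even a first-order Markov model over visited places – just counting which location tends to follow which – captures a lot of individual routine:

```python
from collections import defaultdict, Counter

# Hypothetical location trace for one person (e.g. derived from phone GPS).
history = ["home", "cafe", "office", "gym", "home",
           "home", "cafe", "office", "home",
           "home", "office", "gym", "home"]

# First-order Markov model: count transitions between consecutive locations.
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict_next(location):
    """Return the most frequently observed successor of a location."""
    if location not in transitions:
        return None
    return transitions[location].most_common(1)[0][0]

print(predict_next("cafe"))    # -> 'office'
print(predict_next("office"))  # -> 'gym'
```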

Home/Service Robots

Here we get a discussion of why robot vacuum cleaners have disappointed! The path to a better future is captured in cloud-based feedback loops, improved interaction techniques, and 3D perception:

Cloud (“someone else’s computer”) is going to enable more rapid release of new software on home robots, and more sharing of data sets gathered in many different homes, which will in turn feed cloud-based machine learning, and then power improvements to already deployed robots. The great advances in speech understanding and image labeling enabled by deep learning will enhance robots’ interactions with people in their homes. Low cost 3D sensors, driven by gaming platforms, have fueled work on 3D perception algorithms by thousands of researchers worldwide, which will speed the development and adoption of home and service robots.
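
In outline, the feedback loop the report describes is: robots gather data in many homes, the pooled data is used to retrain a shared model in the cloud, and the improved model is pushed back to already-deployed robots. Here’s a deliberately toy sketch of that loop (my own, with a trivially simple “model”, just to show the shape of it):

```python
import random
import statistics

# Toy stand-ins, invented for this sketch: each "robot" logs how long its
# cleaning runs actually take, and the shared "model" is a single fleet-wide
# estimate of run time that every robot uses for scheduling.
class ToyRobot:
    def __init__(self):
        self.estimate = 60.0          # minutes; the currently deployed "model"
        self.log = []                 # observations gathered in this home

    def clean(self):
        self.log.append(random.gauss(45, 5))   # observed run time (made up)

def fleet_learning_round(robots, cloud_data):
    # 1. Robots gather data in many different homes and upload it...
    for r in robots:
        r.clean()
        cloud_data.extend(r.log)
        r.log.clear()
    # 2. ...the cloud retrains the shared model on the pooled data...
    new_estimate = statistics.mean(cloud_data)
    # 3. ...and the update is pushed back to already-deployed robots.
    for r in robots:
        r.estimate = new_estimate

robots = [ToyRobot() for _ in range(10)]
cloud_data = []
for _ in range(5):
    fleet_learning_round(robots, cloud_data)

print(round(robots[0].estimate, 1))   # converges towards the true ~45 minutes
```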

Healthcare

Data is a key enabler of improvements here, but getting it has proved difficult. Technology to augment physicians’ capabilities could be one possible beachhead.

Mobile health (e.g. the explosion in health and fitness apps and sensors) is creating a whole new sector of innovation. And of course, elderly care will be a pressing issue:

…the coming generational shift will accompany a change in technology acceptance among the elderly. Currently, someone who is seventy was born in 1946 and may have first experienced some form of personalized IT in middle age or later, while a fifty-year-old today is far more technology-friendly and savvy. As a result, there will be a growing interest and market for already available and maturing technologies to support physical, emotional, social, and mental health.

The panel predict an explosion of low-cost sensing devices to provide ‘substantial capabilities’ to the elderly in their homes.

However, doing so will require integration across multiple areas of AI – NLP, reasoning, learning, perception, and robotics – to create a system that is useful and usable by the elderly.

Education

The panel predicts the widespread adoption of Intelligent Tutoring Systems powered by learning analytics. Here’s a warning for start-ups targeting the area though:

One might have expected more and more sophisticated use of AI technologies in schools, colleges, and universities by now. Much of its absence can be explained by the lack of financial resources of these institutions as well as the lack of data establishing the technologies’ effectiveness.

Low-resource Communities

“Many opportunities exist for AI to improve conditions for people in low-resource communities in a typical North American city – and indeed, in some cases it already has.” For example, task assignment, scheduling, and planning techniques have been applied to redistribute surplus food from those who have excess before it spoils.
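
The report only names the techniques, but as a toy illustration of the kind of matching problem involved (entirely my own sketch, with made-up data), a greedy assignment that routes the most perishable donations to the nearest recipient that can still receive them in time already captures the flavour:

```python
from dataclasses import dataclass

@dataclass
class Donation:
    site: str
    hours_until_spoiled: int

@dataclass
class Recipient:
    site: str
    travel_hours: dict   # travel time from each donor site, in hours

# Hypothetical data, invented for this sketch.
donations = [Donation("restaurant_a", 6), Donation("grocer_b", 2)]
recipients = [
    Recipient("shelter_1", {"restaurant_a": 1, "grocer_b": 3}),
    Recipient("food_bank_2", {"restaurant_a": 4, "grocer_b": 1}),
]

def assign(donations, recipients):
    """Greedily match the most perishable donations first to the closest
    recipient that can still receive them before they spoil."""
    plan = []
    free = set(range(len(recipients)))
    for d in sorted(donations, key=lambda d: d.hours_until_spoiled):
        feasible = [i for i in free
                    if recipients[i].travel_hours[d.site] <= d.hours_until_spoiled]
        if feasible:
            best = min(feasible, key=lambda i: recipients[i].travel_hours[d.site])
            plan.append((d.site, recipients[best].site))
            free.remove(best)
    return plan

print(assign(donations, recipients))
# [('grocer_b', 'food_bank_2'), ('restaurant_a', 'shelter_1')]
```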

Public Safety and Security


One of the more successful uses of AI analytics is in detecting white collar crime, such as credit card fraud. Cybersecurity (including spam) is a widely shared concern, and machine learning is making an impact. AI tools may also prove useful in helping police manage crime scenes or search and rescue events by helping commanders prioritize tasks and allocate resources, though these tools are not yet ready for automating such activities. Improvements in machine learning in general, and transfer learning in particular—for speeding up learning in new scenarios based on similarities with past scenarios—may facilitate such systems.
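
For a sense of the shape of the supervised-learning side of this (a minimal sketch with made-up transactions, nothing like a production fraud system), a simple classifier over labelled transactions is enough to score new ones:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled transactions: [amount_usd, hour_of_day, is_foreign].
X = np.array([
    [12.50,  9, 0], [40.00, 13, 0], [7.99, 18, 0], [88.00, 11, 0],    # legitimate
    [950.0,  3, 1], [1200.0, 4, 1], [700.0, 2, 1], [1500.0, 3, 1],    # fraudulent
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score two new transactions: a small daytime purchase and a large 3am foreign one.
new = np.array([[25.0, 14, 0], [1100.0, 3, 1]])
print(clf.predict_proba(new)[:, 1])   # estimated probability of fraud for each
```

Real systems deal with vastly larger data, extreme class imbalance, and adversaries who adapt to the detector, which is where the transfer learning the report mentions comes in.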

Employment and Workplace

The most popular question here is “Will AI take away all of our jobs?”. And here’s what the panel have to say on that:

AI will likely replace tasks rather than jobs in the near term, and will also create new kinds of jobs. But the new jobs that will emerge are harder to imagine in advance than the existing jobs that will likely be lost. Changes in employment usually happen gradually, often without a sharp transition, a trend likely to continue as AI slowly moves into the workplace.

At the same time, the panel also speculate that as AI takes over many functions, scaling an organisation will no longer imply scaling the number of employees, perhaps leading to organisations that retain more ‘human’ scales.

There may not be a sharp transition, but that doesn’t mean there won’t be a transition with deep ramifications over time:

The economic effects of AI on cognitive human jobs will be analogous to the effects of automation and robotics on humans in manufacturing jobs. Many middle-aged workers have lost well-paying factory jobs and the socio-economic status in family and society that traditionally went with such jobs. An even larger fraction of the total workforce may, in the long run, lose well-paying “cognitive” jobs. As labor becomes a less important factor in production as compared to owning intellectual capital, a majority of citizens may find the value of their labor insufficient to pay for a socially acceptable standard of living.

The panel recommends a political response to prevent benefits concentrating with the few rather than the masses.

Entertainment

The two main themes that jump out at me from this section are the move beyond software-only home entertainment, and the rise of micro-serving.

To date, the information revolution has mostly unfolded in software. However, with the growing availability of cheaper sensors and devices, greater innovation in the hardware used in entertainment systems is expected. Virtual reality and haptics could enter our living rooms—personalized companion robots are already being developed. With the accompanying improvements in Automatic Speech Recognition, the Study Panel expects that interaction with robots and other entertainment systems will become dialogue-based, perhaps constrained at the start, but progressively more human-like. Equally, the interacting systems are predicted to develop new characteristics such as emotion, empathy, and adaptation to environmental rhythms such as time of day.

I’ll leave you with this rather concerning thought:

With content increasingly delivered digitally, and large amounts of data being logged about consumers’ preferences and usage characteristics, media powerhouses will be able to micro-analyze and micro-serve content to increasingly specialized segments of the population—down to the individual. Conceivably the stage is set for the emergence of media conglomerates acting as “Big Brothers” who are able to control the ideas and online experiences to which specific individuals are exposed. It remains to be seen whether broader society will develop measures to prevent their emergence.

Bio: Adrian Colyer was CTO of SpringSource, then CTO for Apps at VMware and subsequently Pivotal. He is now a Venture Partner at Accel Partners in London, working with early stage and startup companies across Europe. If you’re working on an interesting technology-related business he would love to hear from you: you can reach him at acolyer at accel dot com.

Original. Reposted with permission.
