
Stanford’s AI100: Century-Long Study on Effects of Artificial Intelligence on Human Life


Stanford unveils a new 100-year study on the impact of artificial intelligence, particularly on democracy, privacy, and the military. Surprisingly, perspectives from outside the AI community are absent from the initial panel.



Last week, Stanford announced the launch of its "One Hundred Year Study on Artificial Intelligence", also dubbed "AI100". According to Stanford's news release, the effort, created and funded by Stanford alumnus Eric Horvitz, will consist of a series of recurring studies intended to measure the varied effects of artificial intelligence on "automation, national security, psychology, ethics, law, privacy, democracy and other issues."

Eric Horvitz, founder of AI100

In the release, Stanford President John Hennessy, who helped organize the initiative, says:

"Given Stanford's pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children's children."

This is not the first well-publicized effort by an organization at the forefront of artificial intelligence research to announce a panel dedicated to weighing the consequences of work in the field. Google's recent acquisition of London-based deep learning startup DeepMind reportedly came with a condition, set by founder Demis Hassabis, that Google establish an ethics board to oversee the startup's work.

Google DeepMind logo

However, despite the initial wave of publicity surrounding the announcement of Google's AI ethics board, little news of the board has reached the public since. Of course, silence and secrecy are, in some respects, sensible. Google may be acting in the public interest while simultaneously protecting trade secrets that give the company a competitive advantage. As of now, however, even the membership of the committee remains secret.

A secretive board may help to guard against unethical behavior at Google, but it offers the rest of the community little guidance. A more public discussion of the societal impacts of AI, and of sensible ethical standards for it, grows more important as machine learning proliferates throughout industry. While Google is clearly a leader in machine learning, similarly advanced technology abounds: research groups such as Geoff Hinton's at Toronto, Andrew Ng's at Stanford, Yoshua Bengio's at Montreal, Yann LeCun's at NYU, and Zhuowen Tu's at UCSD are rapidly developing deep learning and recurrent neural network architectures. A survey of Hacker News articles makes clear that only a few years separate state-of-the-art deep learning systems from working demonstrations re-implemented in JavaScript that run inside any web browser.

Fortunately, AI100 does appear to be a more public affair. Its initial panel has already been announced and includes Barbara Grosz, Deirdre K. Mulligan, Yoav Shoham, Tom Mitchell, and Alan Mackworth. With the exception of Dr. Mulligan, the panel appears to be composed entirely of artificial intelligence researchers, whose work spans robotics, multi-agent collaborative systems, and knowledge discovery in large databases. Horvitz himself is a researcher at Microsoft Research, and Russ Altman, a collaborator who helped convene the panel, is a professor of bioengineering and computer science.

Certainly, it is important that such a panel abound in artificial intelligence expertise. The public discussion of AI has long been dominated by a perspective grounded more in Terminator than in machine learning. Stephen Hawking's recent statements about the perils of "full artificial intelligence" have grabbed headlines at the BBC and the New York Times. Similar fanfare has accompanied Elon Musk's warning that with artificial intelligence we are "summoning the demon". These warnings make for first-rate clickbait; meanwhile, more pressing but less sensational issues go neglected.

Presumably, this commission of machine learning experts can produce a more sober dialogue about the real and immediate advances in artificial intelligence. On matters of technology, the panel can be counted on to be well informed. On matters of social impact, however, it appears surprisingly understaffed: among the panel's founders and members, only Deirdre K. Mulligan is not an academic computer scientist.

As artificial intelligence advances, an increasingly prominent concern is that the relationship between capital and labor, long assumed to be complementary, may be changing. Historically, economists have held that increasing the stock of capital raises the marginal value of additional labor, driving up wages. To give a toy example, doubling the number of ovens in a kitchen would increase the productivity of each cook, and the kitchen's owner might happily employ twice as many cooks as before. However, artificial intelligence raises the possibility of autonomous capital. Systems that operate themselves might replace workers outright; that is, capital and labor might become substitutes rather than complements. If this happens, the fundamentals of conventional economic thinking would be upended. Computer science may offer the tools to create this problem, but it is unclear that the discipline offers the tools to analyze the problem or avert its consequences.
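To make the distinction concrete, here is a minimal textbook sketch (my illustration, not part of the AI100 announcement): under a Cobb-Douglas production function, capital and labor are complements, while under a linear one they are perfect substitutes.

Y = K^{\alpha} L^{1-\alpha}, \qquad \frac{\partial Y}{\partial L} = (1-\alpha)\left(\frac{K}{L}\right)^{\alpha} \quad \text{(increasing in } K \text{: complements)}

Y = aK + bL, \qquad \frac{\partial Y}{\partial L} = b \quad \text{(independent of } K \text{: substitutes)}

In the first case, adding ovens raises each cook's marginal product, and with it the wage an owner will pay. In the second, adding capital leaves a worker's marginal product unchanged, and once a unit of self-operating capital costs less than the going wage, the owner hires machines instead of cooks.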

The risk of technological unemployment has recently been articulated by sources as varied as Harvard professor and former Treasury Secretary Lawrence Summers and Google co-founder Larry Page. It seems reasonable to be concerned that, while the panel represents a cadre of experts on artificial intelligence, it lacks any member explicitly qualified to investigate unemployment or wealth inequality. Determining the social impact of artificial intelligence is an admirable, and likely necessary, goal. But to succeed, such a study must address not only the agents of change but also the objects of change.

In Silicon Valley, it has long been believed that industry-specific expertise is not required to effect disruptive technological change. Google isn't staffed by traditional ad salesmen, and Lyft isn't run by taxi-industry veterans. Startups have been remarkably successful at disrupting industries while eschewing traditional expertise. In retrospect, this shouldn't be surprising: computer science provides the tools to automate well-defined tasks and to optimize well-defined performance measures. It is less clear that computer science alone provides the tools to assess the social impact of its creations. Some outside perspective in this endeavor seems appropriate.

The study has only recently been announced, and, barring unforeseen biotechnological leaps, it will be staffed over the next 100 years by many as-yet-unannounced researchers. It appears to be a promising start to an ambitious task. I hope the study succeeds in fostering a more serious dialogue about the impact of technology on society.

Zachary Chase Lipton is a PhD student in the Computer Science and Engineering department at the University of California, San Diego. Funded by the Division of Biomedical Informatics, he is interested in both theoretical foundations and applications of machine learning. In addition to his work at UCSD, he has interned at Microsoft Research.
