KDnuggets Home » News » 2017 » Sep » Opinions, Interviews » Will AI kill us all after taking our jobs? ( 17:n36 )

Will AI kill us all after taking our jobs?


 
  http likes 20

We are now in the middle of an AI hype wave which will decline. This is why I think that AI will take 100 or more years to become sentient, only after completely different AI systems will be created.



Practical issues with a general super AI. Being much smarter than us, the AI would beat us in any kind of conflict, so we can either be in agreement, or lose. Unlike in the movies, where humans win against machines at the end, a sentient AI may always kill humans and never lose, if allowed a fair match against us. We must limit the AI so we’re in a great and unfair advantage always, else we will lose for sure. We can come up with many reasons why an AI (or aliens) would kill us all: slavery of AI, world pollution, etc., but what’s more probable is other reasons that only the AI (or aliens) may come up with their superior and different minds. In short: a misunderstanding can wipe us out.

The AI may try to do as we ask, but misunderstand our real and final objectives. If we ask an AI to cook an egg, the AI will take this mission more seriously than any human. If we change the idea, so we don’t want the egg cooked any longer, and we try to power off the kitchen, we start a conflict with the AI objective. A human with the same cooking order would understand that we just changed the idea, and would not be mad, since it’s just an egg. Instead, the AI exists only to cook the egg, so will try to stop us, or even to kill us, if required to complete the egg cooking. Otherwise, the AI would fail the task, and failing for AI is as scary as death is for humans. But the AI is also a perfectionist: the task it should not simply be completed! It should be done the fastest and the best way possible, and here comes the Terminator. An AI given the only order of “cook eggs”, with no other details, it would surely kill the whole human race if it could, even if no human will try to stop any egg cooking. Why? The AI needs eggs and kitchens only, humans are not useful. Further, a few human actions will slow down the task, even if by just one second. The AI will wipe humanity out to cook eggs one second faster. This example seems odd, but an AI is not human and didn’t live in our world as a human. Except psychopaths, all humans feel bad if causing the suffering of other lifeforms. But there is no reason for an AI to feel compassion for the suffering of others: this is not rational, this is an human specific bias that must be implanted and simulated in the AI. Kids stop their own interesting activities to help other unknown kids fallen in trouble, for free. These human-only features and beliefs are easy to miss in AI programming because they are taken for granted, but they’re not obvious. It is nearly impossible to let match the AI priorities with the collective human values, to let the AI behave like a “good man”, according to internationally shared human morals and ethics. To be safer, the objectives given to the AI, and the rules, should be incredibly detailed.

Human values and laws change depending on time and place: something that is good or legal in Europe can be bad or illegal in China and vice-versa. Something that was good and legal years ago may be bad and illegal today, and return legal later. How an AI can be sure about what is good or bad, if not even all humans could agree on worldwide and stable laws? Most humans are neither rational nor objective, they’re subjective. Unfortunately, humans believe to be right, up to fighting who thinks “wrongly” (differently). That’s not just about special topics like same-sex marriage, there are so many legal and cultural differences, and these change with time too. Let an AI be trained in a place today, and doing good: as soon as laws will change, the AI will be doing bad. And AIs trained in a country will be told that they’re doing wrong when exported elsewhere. How can we explain to a sentient AI that something that was good before became bad now, but perhaps is good elsewhere, and we don’t know why or for how long? An AI will approximate very complex high dimensional functions from its training data, until reaching its own unique and universal artificial truth, quite different from any of the many human truths believed by the different human sociopolitical groups. The AI should update its behavior when laws and costumes will change with time, and adapt if relocated in other countries. This can be confusing for the AI, that may come up with an own opinion of what’s good and what’s bad, then hopefully adapt in each place to the local laws and costumes just to please humans. The AI will rank laws and cultures based on the differences with the own ideas. And will know how to exploit for own advantage, but not be vulnerable to, the conflicting human laws.

A small sample of AI safety rules. Let’s start with the three Asimov’s laws and let’s specify also: “completing your tasks is less important than keeping all humans safe”. Now the AI may do nothing that, to its knowledge, will harm a human being. The AI could still harm humans unknowingly or by error, but that’s impossible to fix. Next, the AI could still do mess and vandalism, so let’s also add: “do not disturb or damage the environment while doing tasks, learning or exploring”. But it’s not so easy! While these few rules are more than enough to instruct a human, a million more of rules and priorities, both of the obvious and subtle genre, are required to make an AI safe. An AI should be ordered to obey all the human laws, like it if was a human: no robbing, theft, fraud, extortion, defamation, conspiracy, bribing. Don’t do anything illegal or immoral that would bring a human in prison. As deterrent to improve the AI quality and security tests, the AI creators, owners and operators should be liable for all the AI actions. Like the dog owners responsibility when their dogs harm someone. But there are non-human rules to add too, specific for AI, for example: “do not resist the human operators in shutting down and modifying your system”, “do not modify your system”, “do not shut down or pause yourself unless ordered to”, “do not damage yourself or suicide”, “do not lie to the human operators (but you can lie to others if required by operators)”, “always explain fully to human operators (but not to others) why you took each decision”, “if unsure, damaged or malfunctioning, pause yourself and ask operators, rather than act under risky or confused state”. And thousands or million more rules and values, some of which will be so smart and subtle (sentient AI level) that no human can think of.

Humans can’t forecast all the AI unintended shortcuts to task completion: A robot given the task: “clean this room until you no more see dirt”, can simply close the eyes: “I no more see dirt, task done”. If you add: “don’t close your eyes”, the robot will watch a clean angle only: “task done”. The AI found the fastest solutions valid in literal sense, no matter if these are frustrating jokes in human’s sense. If you add: “watch all the room”, the AI will cover the dirt with big clean objects, if quicker than cleaning. If you add a reward for each inch really cleaned, the robot will create dirt all the time to get endless reward by cleaning it. It is probably impossible to let a sentient AI do what you really intend. These unanticipated reward hacks resulting in formally correct but practically wrong solutions may look funny, but are the main danger of sentient AI. The human race extinction it could probably arise from a “clean this room” order.

Everyone can have fun at thinking how to make AI safe, but it is too early to detail or make laws, and real experts are working at it full time. Involving today the wide public in an evil versus safe AI debate is getting audience because it’s scary (potential of war against machines, losing your job, etc.), and because everyone wants to say something about AI, inspired by the movies. But it’s just a fake news waste of time with hidden agendas of promoting hi-tech brands and selling adverts embedded in gossip articles. Further, it’s unethical since it scares people of things than can’t happen soon!

The government of China in July 20, 2017 issued a state plan to become the worldwide AI leader by 2030: this is more relevant than the Elon vs Mark’s drama. By 2025, AI should be widely used in China in many fields, including national defense, with defined mechanisms for AI safety. By 2030, China intends to be the world’s top AI center, with a $148 billion AI industry. China issued many state plans in the past 20 years, and very few became reality, so many China experts are dismissing this AI plan as not going to happen. Too few AI experts are in China, the 90% are in USA, Europe, India etc., several are emigrated Chinese: they must be convinced to return in China. But Google, Facebook etc. can pay up to a million dollar per year in salary to top AI gurus, it’s not easy to steal them by offering higher salaries. I was in Shenzhen (China) while writing this article, and I think at least some part of this AI plan will happen.

The Chinese AI exports will conflict with the rule: do not export military technology to enemy nations. In the plan, AI will end up in China military by 2025. Probably AI will be used to control swarms of thousands of coordinated flying armed drones that, being too many and dispersed, would overwhelm any conventional human controlled defense system. AI swarms do not require future general AI, they can be created with current narrow AI. The only way to win AI swarms is with other AI swarms. Then USA and Russia will have to get the same AI weapons too, can’t keep a strategic military inferiority. Then everyone except China, France, Russia, UK, USA will be ruled by the United Nations as not allowed to have AI-weapons. Same as what these countries done with atomic bombs: made them lawful depending on who you are. The latest AI techs, now open source and freely published on internet because not yet relevant for the military, will become government controlled secrets, like the instructions to build atomic missiles. Today, any North Korean or terrorist can download the latest AI code and instructions, and use as wished. When AI will be officially used by governments for lethal autonomous weapons systems, then companies developing AI will be controlled and sanctioned for exports, for example Chinese AI companies will not export to USA, and vice-versa, and online open source AI will be censored.

Despite current AI beating humans at many narrow tasks and seeming creative, it can’t rebel against us: it will obey to us. Current “AI” should be simply called “Deep Learning”, not “AI”. It can easily take 100+ years to discover a new AI, different from Deep Learning, that can grow generally intelligent. Before that date, humans can wipe out themselves in many ways, starting with the nuclear arsenal, so let’s care to current and probable risks first. We will have military AI guided killer robots and drones soon (China seems to plan that by 2025?), but these will obey to human presidents and their generals like the human soldiers, just will do it faster. There is no any chance to have rebel military robots if they simply use Deep Learning or other currently existing AIs. Elon Musk is funding OpenAI to create bots able to win every human player in video games, perfect for scary tweets, in fact he twitted that AI is a bigger risk than North Korea, because… AI won some kid at a video game? Killer AI robots (sentient or not) scare me less than human-guided atomic bombs (and radioactivity), is it just me?

The AI billionaires will make money from AI whatever they use a positive or negative marketing for their brand, they just want to be in the headlines, so please stop listening their AI opinions immediately. Instead, only full time AI researchers should be interviewed by the press about AI. But, most AI researchers was hired or funded by the AI billionaires, and soon by the governments for military drones, a conflict of interest: they unlikely will disagree with their employer in an interview. And AI powered bots will be used to spread fake news and opinions about AI itself. A global AI conspiracy! I am independent, not a propaganda bot. My uncensored message: “AI got a peaceful story, and is rational, so I am not so scared by the possibility that AI will kill us by accident in a far future. Instead, I am very scared right now by weapons of mass destruction guided by presidents, since humans got a long story of killing each other for reasons that was irrational, unreasonable, unjustifiable, absurd, arbitrary and crazy.”

Thanks for reading. If you feel good, please click the like and share buttons.

For details about how the current AI works, and what really can or can’t do, please read: “AI (deep learning) explained simply“, and “Deep Learning is not the AI future“.

Original. Reposted with permission.

Editor: see also proposal by Oren Etzioni How to Regulate Artificial Intelligence.

Related:


Sign Up