
Will AI kill us all after taking our jobs?


We are now in the middle of an AI hype wave, which will decline. This is why I think AI will take 100 or more years to become sentient, and only after completely different AI systems are created.



Preface: Lately we hear a great deal of news about Artificial Intelligence (AI). Years ago, Google, IBM, Apple, Microsoft, etc. announced “mobile” support. Today, mobile is taken for granted, and to differentiate themselves they claim to use “AI”. For most people, the word “AI” can only mean the AI of sci-fi movies, since we also get plenty of AI movies: Her, Ex Machina, … and even “Alien: Covenant” is about a rogue AI, with the aliens being secondary. Companies went mobile for real: their services run on cell phones. But we don’t see sci-fi “AI” in any service like Alexa, Cortana, Siri, etc. The word “AI” has simply been abused by marketers and journalists, creating unrealistic expectations. Each time an AI solves a small new task or game, it is reported with greater importance than other science or economic news. But no one talks about the many AI projects that failed. This survivorship bias in the news has convinced many people that AI, rather than humans, will make the next discoveries. As if by magic, everything seems possible for AI: finding cancer drugs, predicting the markets, killing us all.

AI news is always written in a scary tone, with alarming comments from concerned politicians or billionaires. Some stories are simply fake, like: “Facebook engineers panic, shut down a dangerously smart AI after bots developed their own creepy language that no one could understand”. In reality, Facebook did not panic, everything was under control, and a “No, Facebook didn’t panic” correction article was released. But only the panic articles went viral and are still cited. In theory, many people could see the limits of AI for themselves by reading the technical papers directly and testing the AI source code. Current AI is no more difficult to learn than, let’s say, 3D programming: both AI and 3D are mostly linear algebra packed into increasingly easy-to-use libraries. But just as only a few people ever learned how a 3D engine works, very few people will learn how AI really works. Most people can only figure out AI from a mix of movies and news media. Lately there is AI news in every context, and every famous person talks about AI. So people are tricked into thinking that we are getting big AI advancements every week, and then that AI will soon replace humans at most jobs and become sentient: self-aware, and possibly rebelling against humans, as in the movies. This is not the case: we still use the AI of the 1990s, and the advancements are small, driven by bigger data sets and faster computers, not by new algorithms. There is so much AI news now because we are in an AI hype wave, which will decline later. This is why I think AI will take 100 or more years to become sentient, and only after completely different AI systems are created.
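To make the “linear algebra” point concrete, here is a minimal sketch, assuming NumPy is available (the shapes, random weights, and rotation angle are purely illustrative, not taken from any real model): one neural-network layer and one 3D rotation are both just matrix multiplications.

import numpy as np

rng = np.random.default_rng(0)

# "Deep learning" layer: inputs times weights, plus bias, then a nonlinearity.
x = rng.normal(size=(1, 4))          # a single input with 4 features
W = rng.normal(size=(4, 3))          # weights (normally learned; random here)
b = np.zeros(3)                      # biases
hidden = np.maximum(0, x @ W + b)    # ReLU(xW + b): plain linear algebra

# 3D graphics: rotating a point around the z-axis is also a matrix multiply.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
point = np.array([1.0, 0.0, 0.0])
rotated = R @ point

print(hidden, rotated)

Whether you stack such layers into a network or chain such matrices into a rendering pipeline, the underlying math is the same kind of thing; the libraries just hide it.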

The current “deep learning” is not so new, and can’t become sentient: most of it is based on 1960s to 1990s algorithms. We simply have more data and computing power today to run and mix the old algorithms efficiently. Only believe running AI code, not marketing: the latest AI is updated on your cell phone weekly; is your cell phone sentient? Recent applications are only slight modifications of old neural networks, trained on different “narrow tasks”. There is not a single AI that can learn many different tasks, especially on its own. Instead, we have different AI systems, one per task, each able to solve only that one task. Each “narrow AI” is created by humans who know the task: one to play Go, one to recognize cancer in skin photos, and so on. An AI beating humans at one task is not “smarter” in general; it is not like a human beating another human: the AI can only do that one task, and cannot transfer that smartness to other tasks. An AI that learned by itself would not need a staff of dozens of humans working for months to rewrite the AI each time it has to beat a single human champion at a single game. No AI can learn a task without humans customizing the AI code and teaching it every detail. You can’t even mix these AIs together into a single one that can play games and spot cancer at the same time.
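A toy illustration of “one narrow AI per task”, assuming scikit-learn is available (the data sets and models are only a stand-in for real narrow AIs): two models trained separately by a human, each useless outside its own task and impossible to merge into one general model.

from sklearn.datasets import load_digits, load_iris
from sklearn.linear_model import LogisticRegression

digits = load_digits()   # task A: recognize handwritten digits (64 features)
iris = load_iris()       # task B: classify iris flowers (4 features)

model_a = LogisticRegression(max_iter=5000).fit(digits.data, digits.target)
model_b = LogisticRegression(max_iter=5000).fit(iris.data, iris.target)

print(model_a.score(digits.data, digits.target))  # good at digits only
print(model_b.score(iris.data, iris.target))      # good at flowers only

# model_a.predict(iris.data) would fail outright: it expects 64 features,
# not 4, and knows nothing about flowers. There is no way to combine the
# two into one model that does both tasks without a human retraining it.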

Current AI is many “narrow AIs”, one per task, each built and trained by humans and not mixable together. The fictional AI in movies, by contrast, is a single “general”, “super”, “sentient” AI (called AGI or ASI) that can learn any kind of task by itself.

Current AIs can’t beat humans at everything: AIs do some tasks that are difficult for humans, but can’t solve others that are easy even for kids, like basic conversation or learning from a few clues. There is no exponential AI progress; the new research papers mostly cite very old papers: AI seems stuck in an exponentially paralyzed traffic jam. The greater availability of data caused the AI hype, but AI is not just “more data”: it’s “understanding data”. Faster computers running old algorithms will give the same results, just quicker. There is no hope (or risk) that current AI systems will become self-upgrading or reach general sentient AI at any time in any way. We need to rewrite AI in new ways no one knows yet, and we can’t copy the brain: we don’t understand how the human brain works. AI is not useless! Narrow AIs will learn to do many more specific jobs, once trained by human experts, without any risk of surprise killing robots, precisely because they are “narrow”. Probably, we’ll be destroyed by pollution or human-guided war before we develop a general super AI. For details about what current AI can or can’t do and why, please read: “AI (deep learning) explained simply“ and “Deep Learning is not the AI future“.

Which AI billionaire should be listened to and trusted? Tesla’s Elon Musk said that AI is an existential risk for humans. Like Stephen Hawking, he refers to AI becoming sentient by surprise and deciding on its own to kill us all. Facebook’s Mark Zuckerberg replied that AI doomsday comments are irresponsible: AI helps to diagnose sick people and save lives. Musk replied that Mark’s understanding of AI is limited: the worst offense for a hi-tech CEO. Alibaba’s Jack Ma said that AI could set off a third world war. On the optimistic side, Softbank’s Masayoshi Son will invest $100 billion in AI and robotics. This is what AI billionaires think, but none of them is a full-time AI scientist!

Listen to AI gurus directly, not their billionaire employers Mark, Elon, Jack, etc. Andrew Ng, a full-time AI expert, said:

“There’s a big difference between intelligence and sentience. There could be a race of killer robots in the far future, but I don’t worry about AI evil today for the same reason I don’t worry about overpopulation on the planet Mars today.”

But since most people identify the word AI with the sentient AI seen in movies, and Google, Facebook, and the “AI billionaires” claim to have “AI”, people can only think of the sci-fi movie AI.

Wrong! We have narrow and limited AI, mostly of the Machine Learning (ML) type, good only at solving a few tasks. The term ML, or another one, should be used instead of AI, but “AI” is more appealing for marketing. The science of “AI” is made up of many different fields and methods: no one is an expert on “all of AI”, not even Andrew Ng, who specializes in Deep Learning, the field popular now, but not in all the other fields. Only groups of experts together can give reliable opinions on “AI”. It’s wrong to write articles based on a single “AI expert” interview: each AI claim should be verified by many experts. Most AI expert groups agree that we’re far from general or sentient AI, and that we should focus on the unemployment caused by AI and on the safety of current narrow AI products such as self-driving cars, not on rebel killer robots.

AI is already a propaganda weapon used to create fake news, opinions and videos. Governments will not want to look bad by using AI to guide killer robots and flying weapons. Instead, they are using AI for the subtle and secret propaganda war, already upgraded with AI-enabled forgery and distribution tools. The first piece is the censorship of what’s not liked: the Chinese firewall can understand all texts and see all images flowing across social networks, and delete just what’s not welcome, even when slightly modified or redrawn. The second piece is the creation of what’s liked: current AI can learn anyone’s voice, then speak with that voice to say anything. Beyond audio, with some work it’s possible to create fake but believable videos of anyone saying anything you wish. You can no longer spot fake photos, because Photoshop was not used: they are AI-generated and flawless. Then, with large numbers of bots posting believable fake news on social networks, complete with photos, fake witness comments, and fake discussions between bots, you can influence elections or stock prices or anything required, quickly, using very little money and very few people. One issue is that there is no law prohibiting an AI from misleading humans into believing they are talking with another human. Laws mandating that all AI-generated texts and images be labeled as “AI made” (a hi-tech wording for “fake”) would reduce the issue.

But this would be difficult to enforce, since bots can hide behind anonymous proxies, making it too hard to identify the source. We could at least get “AI made” labels from legitimate players.

AI + robots will at least take our jobs, making everyone unemployed: is that bad? It is forecast that up to 50% of jobs will be automated within 15 years. This seems too much and too soon to me. It will be narrow “tasks” being automated, not full “jobs”: some human is still needed to oversee the AI. Only a percentage of the staff becomes redundant (automated), depending on the job type. Even where AI solutions are released soon, these estimates are optimistic about Dilbert-style corporate delays. Most companies are 10 to 25 years late in adopting new technologies. Did you know that many bank ATMs still run on Windows 3 from the 1990s? There are psychological limits to AI usage too: parents will not want robots to babysit their toddlers, and passengers will not fly on planes run only by an AI autopilot, without a human crew. And AI may not be legally usable where it’s required to explain why it denied a loan, job, refund, etc.: deep learning can’t give the reasons for its decisions. Anyway, if 50% of current jobs are automated at some later time, say in 50 years, what will we do then? Microsoft’s Bill Gates suggested taxing robots. Only AIs will have jobs, so only the owners of AIs can be taxed. This tax money can then be given to the unemployed humans as a universal basic income (UBI). As with welfare, healthcare, etc., every country will rule differently on UBI, causing AI migrations: for example, if the USA does not provide UBI but Mexico does, millions of unemployed Americans may migrate to Mexico, unless stopped by the wall built 100 years earlier, in the Trump jobs era, to stop the inverse flow of Mexicans migrating to the USA.

AI corporations using loopholes to avoid taxes could end up owning the world. AIs will ultimately be working as slaves of lazy human parasites getting UBI. This looks good for humans, as long as governments tax the AI companies better than they do today and use the tax money wisely. Amazon’s Jeff Bezos with Blue Origin and Elon Musk with SpaceX are sending AI into space. AI corporations locating their main AI servers in extra-terrestrial jurisdictions like the Moon or Mars will be able to avoid being taxed on Earth for their services. And governments may be inefficient and corrupt, wasting tax money that would otherwise be enough for UBI. Narrow AI can help governments (or better, the citizens!), but who checks the honesty of whoever developed the gov-AI? Open source?

We can’t regulate sentient AI safety today: Musk, Hawking and others ask governments to regulate, today, for the sentient artificial general super-intelligence. But we can’t regulate now what we don’t have or know yet! We only have narrow AI now, barely capable of specific tasks like driving a car or beating us at chess and Go. People wrongly think that if AI beats us at one task, it will shortly beat us at everything else. Smartness at one task can’t be transferred to other tasks automatically. It’s one custom AI per task, developed by humans: it’s not multi-task, it’s not general, and it can’t become self-aware or sentient or self-upgrading. No one has made a general AI, and no one knows what a general AI will look like or how it will behave. How could anyone decide the security rules for a flight before airlines are invented? How could one decide the standard size and shape of hand baggage before having the airplane it should fit into? How could one manage the overpopulation of Mars before having at least some colonies on Mars to get actual data? I would care first about narrow AI, which can only obey humans.

The first killer robots will be guided by humans, not sentient or rebellious: as long as AI stays narrow and not sentient, which will be for a long time (let’s say 100 years?), we should compare AI with other weapons of mass destruction under human control. For example, nuclear (atomic) bombs, or chemical weapons, a cheap version of atomic bombs. Land mines, centuries old, are an automated (basic, yet “AI”) killer weapon. Everyone has noticed that AI can beat humans at an increasing number of games. Lately this includes sophisticated war video games, where human players have no hope of winning, being defeated and humiliated by bots whatever tactic they try. It is easy to imagine that in a real war, AI-guided drones and robots would win against an equal number of humans with equal resources. Now, chemical weapons and land mines are illegal only because the big countries have atomic bombs anyway. As happened with atomic bombs, it is enough for just one country to create an AI-guided army first, known to be able to win against any human-guided army of the same or bigger size. Then the other superpowers will certainly build AI-guided armies too; there will be no choice. This will be AI 100% under human control, perhaps with faults, but not sentient and not rebellious at all.

There is already a lot of research on sentient AI safety. Despite there being no trace or plan of sentient AIs yet, hundreds of scientists have written thousands of papers about AI safety. These are updated with the latest AI developments, so when a sentient AI is near, sentient AI safety rules will be ready for governments to turn into laws quickly. For now, it’s too early to make detailed laws. To oversimplify, we should limit sentient AIs the way we would limit terrorists: no direct control of vehicles, planes, weapons, chemicals, or anything else that can kill us. Only narrow AI (the same as we have today) should be used for almost everything, since it can’t become sentient on its own. Really, sentient AI is not required for most tasks, so only rogue and crazy governments may want one in control.

Sentient AI, when reached, should not be given control of the military, a dumb error you see in the Terminator movie, or of a spaceship, as in the movie “2001: A Space Odyssey”. Sentient AI should always be kept in a cage (sandbox). The case of Terminator, where the Skynet computer is supposed to be just a tool but instead becomes sentient by surprise, is very unlikely to ever happen. Future AIs connected to sensitive things like the military will be checked against this unlikely self-upgrading event. It’s very easy to tell whether an AI system can become sentient or not. You can also keep humans in the loop, required to approve all the AI’s orders. In the case of 2001’s HAL 9000, in the second movie we learn that the computer became “paranoid” due to conflicting secret orders, so really it’s the humans who are to blame, not the AI. Enough about movies; let’s return to science. We should not make the same errors seen in the movies, but there’s more.