
Platinum Blog, Aug 2017

Deep Learning is not the AI future


While Deep Learning has had many impressive successes, it is only a small part of Machine Learning, which is itself a small part of AI. We argue that future AI should explore other ways beyond DL.



Everyone now is learning, or claiming to learn, Deep Learning (DL), the only field of Artificial Intelligence (AI) that has gone viral. Paid and free DL courses count hundreds of thousands of students of all ages. Too many startups and products are named "deep-something" purely as a buzzword: very few really use DL. Most people ignore that DL is perhaps 1% of the Machine Learning (ML) field, and that ML is perhaps 1% of the AI field; the remaining 99% is what is used in practice for most tasks. A "DL-only expert" is not a "whole-AI expert".

DL is not a synonym of AI! The AI tools most advertised by Google, Facebook, etc. are mainly or only DL, so the general public thinks that all the new AI records are (and will be) set with DL only. This is not true. Decision trees like XGBoost are not making headlines, but they silently beat DL at many Kaggle competitions on tabular data. The media implied that AlphaGo is DL-only, but it is Monte Carlo tree search plus DL, evidence that pure DL was not enough to win. Many reinforcement learning tasks are solved with Neuroevolution's NEAT, with no backpropagation at all. There is "deep misinformation" in AI.
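To make the tabular-data point concrete, here is a minimal sketch (my own illustration, not from any specific competition) of training gradient-boosted trees with the xgboost library on a synthetic dataset:

```python
# Minimal sketch: gradient-boosted trees on synthetic tabular data.
# Assumes the xgboost and scikit-learn packages are installed.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for a typical rows-by-columns tabular dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```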

I am not saying that DL does not solve tasks: DL is impressive. Trees and other algorithms do not beat DL often, and for some tasks there is no substitute for DL, but I expect non-DL systems to be (re)discovered in the future that beat DL. Perhaps they will also solve the legal nightmare of DL decisions that, even when correct, cannot be explained when legally questioned. I would also like to read in the press about DL issues like "catastrophic forgetting", the tendency to abruptly forget previously learned information upon learning new information, and about the daily fight against "overfitting". About "intelligence": DL will simply believe the training data it is given, without understanding what is true or false, real or imaginary, fair or unfair. Humans believe fake news too, but only up to a certain point, and even kids know that movies are fiction, not real. For more details, if you have time, read my longer article: AI (Deep Learning) explained simply.
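As a toy illustration of catastrophic forgetting (my own sketch, using scikit-learn's small neural network rather than a deep one): train on half the digit classes, then continue training on the other half only, and accuracy on the first half collapses.

```python
# Toy sketch of catastrophic forgetting with a small scikit-learn network:
# learn digits 0-4, then keep training on digits 5-9 only, and watch the
# accuracy on digits 0-4 collapse.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
old, new = y < 5, y >= 5

net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
net.partial_fit(X[old], y[old], classes=np.arange(10))  # first call declares all classes

for _ in range(200):                                    # phase 1: learn digits 0-4
    net.partial_fit(X[old], y[old])
print("old-task accuracy after phase 1:", net.score(X[old], y[old]))

for _ in range(200):                                    # phase 2: digits 5-9 only
    net.partial_fit(X[new], y[new])
print("old-task accuracy after phase 2:", net.score(X[old], y[old]))
```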

Twenty years ago everyone was learning HTML, the markup language for writing web pages by hand, considered enough at the time to become a dot-com billionaire. Like others, I learned each technology when it seemed useful: HTML, mobile apps, DL, and I invite everyone to keep learning new things for their whole life. You don't simply learn one technology once in a lifetime! If you learn DL, you don't get lifetime AI know-how. The HTML of 1995 became outdated and insufficient: CSS, JavaScript and server-side languages took over. In the same way, DL will become outdated and insufficient too. Most popular mobile apps contain no HTML at all, so who knows whether future AI apps will contain DL or not?

Really, DL is 1980s technology, older than HTML: trained with more data, the 1970s "neural networks with hidden layers" gave better results, and were then renamed DL and hyped. In 1992 I briefly checked some neural network source code, together with other things like fractals and cellular automata. Like almost everyone else, I dismissed DL at the time as an academic math puzzle with no practical uses. Instead, I focused on learning what gave immediate results: 3D for video games, then the internet, and so on. But we were all wrong: DL can do amazing things with big data! I got fascinated in 2015 by Deep Dream, then by GANs, etc. Still, DL is not the last, perfect AI science we can invent.

This old DL has already been studied extensively and updated across decades to solve more tasks more accurately, but no DL variant (convolutional networks, RNNs, RNNs with LSTM, GANs, etc.) can explain its own decisions. While DL will surely solve more tasks and kill more jobs in the future, it is unlikely to solve all of them, or to produce surprising updates capable of arguing a legally valid defense of the fairness of its own decisions.



Deep Learning can’t understand these 2 philosophers

Future AI should explore other ways, new or old but overlooked, not DL only. A limit of DL is that it considers true simply whatever it spots most frequently in the data, and false whatever is statistically rarer, or the opposite of what is frequent. DL's fairness comes not from DL itself, but from the humans who select and prepare the DL data. A DL model can read texts and translate between texts, but not in a "human way". If a DL model is trained on 100 books, 40 telling how hate, war, death and destruction are bad, and 60 telling that Hitler's Nazi ideas were correct, the DL model will end up 100% Nazi!
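Here is a toy sketch of this "truth = majority of the data" behaviour, using a simple Naive Bayes text classifier as a stand-in for DL; the statement and labels are abstract placeholders of my own:

```python
# Toy sketch: a classifier trained on a 60/40 skewed corpus simply adopts
# the majority label. The statement and labels are abstract placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

corpus = ["claim X is correct"] * 100            # the same statement 100 times
labels = ["endorse"] * 60 + ["condemn"] * 40     # 60 books pro, 40 books contra

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(corpus), labels)

# The model "believes" the majority of its training data:
print(model.predict(vec.transform(["claim X is correct"])))  # -> ['endorse']
```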

DL will never figure out on its own that killing Jews, gay people and disabled people is bad, if Nazism is the most popular opinion in its training data. No wonder DL cannot explain its own decisions, except with a naive "I've read most often that Nazism is right, so it should be right". DL will learn and mimic the most flawed logic, including terrorist propaganda, without figuring out its flaws. Even small kids understand on their own who the bad guys in a movie are, but DL does not, unless humans explicitly teach it first. DL-specific things like gradient descent with backpropagation are cool, as is custom DL hardware, but that is mostly statistics and geometry, so it will probably not be in the AI of 2037.
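For readers who want to see how little "magic" the statistics and geometry involve, here is a minimal NumPy sketch of gradient descent with backpropagation: a tiny one-hidden-layer network learning XOR. The network size, learning rate and step count are arbitrary illustrative choices.

```python
# Minimal sketch of gradient descent with backpropagation: a tiny
# one-hidden-layer network learning XOR in plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: matrix geometry plus a squashing nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule applied to the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: step each parameter against its gradient.
    W2 -= 0.5 * (h.T @ d_out);  b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h);    b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```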

For many tasks, Deep Learning AI is or will become illegal because it is not compliant. Anyone who collects data about citizens of the 28 European countries must follow the General Data Protection Regulation (GDPR) by May 25, 2018. This is the date when DL will be abandoned for several applications in the EU, forcing AI startups to quickly replace DL with whatever else, or risk being fined. Fines for noncompliance are 4% of global revenue, including US revenue. The GDPR, concerning automated decision-making, requires the right to an explanation and the prevention of discriminatory effects based on race, opinions, health, etc. Laws similar to the GDPR exist or are planned worldwide; it is only a matter of time. The US Fair Credit Reporting Act requires disclosing all of the factors that adversely affected a consumer's credit score, with a maximum of four factors allowed. The factors in a DL model are normally thousands or millions, not just four: how do you simplify them into four? AI, like Bitcoin ICOs, started out ignoring regulation, but laws and fines always come.
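By contrast, an interpretable model can produce the four FCRA-style "reason codes" directly. A hedged sketch, with hypothetical feature names, weights and applicant values standing in for a real credit scorecard:

```python
# Sketch of FCRA-style "reason codes" from an interpretable linear scorecard.
# Feature names, weights and the applicant's values are hypothetical.
import numpy as np

features = ["late_payments", "utilization", "account_age_years", "recent_inquiries"]
weights = np.array([-2.0, -1.5, 0.8, -0.7])  # score contribution per unit
applicant = np.array([3.0, 0.9, 1.0, 5.0])   # one applicant's feature values

contributions = weights * applicant
for i in np.argsort(contributions)[:4]:      # at most 4 adverse factors
    if contributions[i] < 0:
        print(f"{features[i]}: {contributions[i]:+.2f} points")
```

A linear scorecard makes each factor's contribution a single multiplication, so the adverse-factor list is short and faithful; a DL model with millions of entangled weights offers no equivalent.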

DL systems taking decisions more consequential than telling whether an image shows a cat, or where to add bunny ears to selfies, will be replaced with non-DL systems. AI will have to be accountable, and thus different from DL, with outcomes you can explain to average judges and users in simple, legally valid words. DL's complexity, which looks like "magic" to judges and users, is a legal risk, not a cool feature. DL can advise or alert humans, for example by detecting diseases in medical images to be verified by a medical doctor, but this is only partial automation that lacks details. What do you tell users rejected by the AI (denied a loan, a job, etc.) who ask for explanations?

Laws are starting to include a "right to an explanation", for example of why a job or a loan was denied. DL gives results with no natural-language (legally usable) explanations. Pages of DL variables are available, but they are not acceptable to judges or users, since not even the best mathematicians or other algorithms can figure out a DL model and simplify it into words. Even where humans take the final decisions, AI tools should give detailed reasons that humans can either recognize as wrong (and so override, reversing the AI decision) or quickly accept by simply copying, pasting and signing the explanations prepared by the AI. No one knows how to modify DL to give simple human-like explanations, so DL cannot be made compliant! This issue also affects several other AI and Machine Learning algorithms, but not all of them, and none as much as DL: even decision trees become unexplainable when boosted or combined into ensembles. But in the future, new or rediscovered AIs that can defend their own decisions will be used for regulated decisions in place of both DL and humans.
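The decision-tree contrast is easy to demonstrate. In this minimal sketch (toy data, arbitrary depth), a single shallow tree prints as rules a judge could read, which is exactly what a boosted ensemble of hundreds of such trees cannot do:

```python
# Sketch: a single shallow decision tree printed as human-readable rules.
# Toy data; the depth is an arbitrary choice. A boosted ensemble of hundreds
# of such trees offers no comparably short rule list.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=data.feature_names))
```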

In the case of the GDPR, only human staff can reject an application: the AI can automate the positive outcomes, but if the AI denies a loan, a job, etc., it should pass the task to human staff, who will handle the negative decisions that make users angry and inquisitive. In case of denial, however, the staff will have no help or explanation from a DL-based AI: they cannot know whether the DL logic was right or wrong. They will have to check the data from scratch on their own, decide whether to ultimately reject or not, and write a reasonable cause for the decision. The risk is that, to save time and money, the staff will make up fake explanations for AI rejections and blindly accept AI approvals. But judges called to decide on the fairness of AI rejections will also ask why the other applicants were accepted, to compare. To be safe, you need solid reasons for acceptances too, not only for rejections, no matter what is in laws like the GDPR. Non-DL AI systems that provide human-readable explanations of all decisions to users, judges and support staff will ultimately be the only ones used, for both fully and partially automated decisions.

Explainability was already a big issue before DL and before any specific laws. In antitrust cases, companies like Google are asked why one product rather than another is shown at the top of search results. This predates DL too: many other algorithms also mix data in inscrutable ways to get results, so no human can easily reconstruct the reasons for a decision. Judges are told that the engineers don't know exactly how the system decided, and pages of linear algebra are given as evidence. This can't end well: billions of dollars in fines have been ruled in multiple cases, with warnings to change the systems, even before a specific law existed. Class-action lawsuits by users automatically denied jobs, loans, refunds, etc., against the automated decision units of stores, banks, insurers, and so on, will become the norm, and being unable to explain will mean "no defense", being fined, and a public relations disaster for the brand.