
Platinum Blog, Aug 2017

Deep Learning is not the AI future


While Deep Learning has had many impressive successes, it is only a small part of Machine Learning, which in turn is a small part of AI. We argue that the future of AI should explore other approaches beyond DL.



For most people, “AI” means the AI of sci-fi movies: one that gives smart explanations, so humans can quickly decide whether they agree, making legal validation easy. Most people, including judges and those who write laws like the GDPR, on hearing that companies are “AI-first” or “adding AI”, expect an “AI” like in the movies, one that would defend its own decisions if called into court, impressing users and judges alike. Instead, we got unexplainable “DL AI” that will see little use, even for tasks it can solve, simply because it lacks interpretability. DL will not save costs and will not kill jobs where sensitive automated decisions are needed. Even where humans must take the final decision anyway, tool AIs that explain their advice will be much preferable to tool AIs that give answers without causes or reasons. Explainable AIs, when (re)discovered, will be safer, legally compliant, cheaper, and faster, and will replace both DL and humans. Since DL was invented in the 1960s-1980s and then rediscovered in the 2010s, the basis of explainable future AIs is probably already described by some researchers somewhere; but being not DL, no one will care to check and develop those AI types for years, until they are rediscovered and hyped.
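To make the contrast concrete, here is a minimal sketch of an interpretable model that can print the rules behind every decision, the kind of self-explanation a DL black box cannot offer (the use of scikit-learn and the Iris dataset are my illustrative assumptions, not from the original article):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A decision tree is an "explainable AI" in this article's sense:
# every prediction can be traced back to explicit, human-readable rules.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

# Print the learned rules, e.g. "petal width (cm) <= 0.80 -> class 0";
# this is the kind of answer a judge or a user could actually inspect.
print(export_text(tree, feature_names=iris.feature_names))
```

A judge can read those rules; no one can read the millions of weights of a deep network.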

The GDPR, regarding automated decision-making, also requires preventing discriminatory effects based on race, opinions, health status, etc. But DL models trained on user-generated data such as social media and news (rather than ground-truth data like medical or financial records) always contain evil biases implicitly. As said before, DL can read lots of texts and data and mimic their contents, but it will not critically understand them. DL simply believes whatever it spots most often, underlining the patterns and trends found in the data, and so it amplifies human society’s biases and problems. If the data shows that black people are arrested more often than white people, the DL will simply suspect black people first whenever a crime is committed. If the data shows that more males than females sit on corporate boards, the DL will simply prefer male candidates in job applications.
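A toy sketch of how this happens (synthetic data and scikit-learn are my illustrative assumptions): when a protected attribute correlates with the label for purely historical reasons, the model happily learns to use it as if it were a real signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Protected attribute (e.g. group A = 0, group B = 1) plus one noise
# feature; the outcome rates differ only because of historical bias.
protected = rng.integers(0, 2, size=2000)
label = (rng.random(2000) < np.where(protected == 1, 0.7, 0.3)).astype(int)
X = np.column_stack([protected, rng.normal(size=2000)])

clf = LogisticRegression().fit(X, label)
print(clf.coef_)  # the weight on the protected attribute dominates
```

Nothing in the training objective tells the model that this correlation is unjust; it is just more data to fit.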

DL decisions end up more discriminatory, racist, and sexist than the average sample in the training data. This issue occurs in all ML algorithms, but DL model bias is among the hardest to test, detect, control, and tune. It is so hard to fix that, rather than trying to patch it, several DL experiments have already been abruptly cancelled: from chatbots that went nazi and hateful, to apps whitening black faces in “beauty” filters.

You can’t fix a discriminatory, racist, or sexist DL model by trying to balance it with patches after training. DL is a neural network, and unlike some other AI methods you can’t edit specific answers with local surgery: you must retrain everything with different, 100% balanced and fair data, which is rare in the wild. DL mimics what it finds in the data without understanding it: DL will not disagree with any data, will not figure out the injustices of society; it is all just “data to learn”. You would have to hire a dedicated human staff to create fake fair data from an ideal society where white people are arrested as often as black people, where 50% of directors are women, and so on. But the cost of creating vast amounts of de-biased data edited by human experts, just to train a DL model, makes it not worth replacing humans with AI in the first place! Further, even if you had trained a DL model that really is fair, you would have no evidence to convince a judge or a user of the fairness of any given decision, since the DL gives no explanations.
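Since the only fix is retraining on balanced data, here is a naive sketch of what “balancing” even means (the function name and the oversampling strategy are my assumptions; real de-biasing is far harder, because bias also hides in correlated proxy features):

```python
import numpy as np

def oversample_balanced(X, y, group, seed=0):
    """Naive illustration: build a training set where every
    (label, group) cell is equally represented, by oversampling.
    This does NOT remove bias hidden in correlated proxy features."""
    rng = np.random.default_rng(seed)
    cells = [np.where((y == lab) & (group == g))[0]
             for lab in np.unique(y) for g in np.unique(group)]
    cells = [c for c in cells if len(c) > 0]   # skip empty cells
    target = max(len(c) for c in cells)        # size of the largest cell
    idx = np.concatenate([rng.choice(c, size=target, replace=True)
                          for c in cells])
    return X[idx], y[idx]
```

Even this crude balancing requires knowing every protected attribute in advance, which is exactly the expensive human curation described above.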

DL will be of secondary importance, used for non-business apps or games that pose no legal risks. When explainable AIs become popular, DL will not be abandoned the way magnetic tapes or cathode-ray TVs were. People who lose games against bots are unlikely to convince a judge to fine the AI company because it can’t explain how the AI won. People unhappy with how FaceApp edited their selfie into an older, younger, or opposite-sex version are unlikely to convince a judge to fine FaceApp because it can’t explain how the AI decided the new look (except for a “race change” filter, removed after massive protests, no judge needed). Detecting sickness in medical images is a safe DL use, as long as users ask a human doctor for confirmation before taking medication.

The legally safe DL market is very limited: judges can fine in every case where the decision outcome makes a financial or health difference, or could be discriminatory, and where DL cannot help us understand if and why the decision was fair. How about self-driving cars? DL seems a legal risk to use in anything beyond art, games, or good-taste jokes. Existing non-DL methods can replace DL where needed, and new methods will be (re)discovered, so AI progress will continue nicely. Especially if everyone studies (and invests in) all the old and new algorithms of the whole AI and Machine Learning sciences, not only DL: the only way to become a “whole AI lifetime expert”.

Besides being “illegal” to use for many useful tasks it could solve, DL is also unable to solve several tasks: those requiring abstract reasoning to figure out what is fair and unfair in the data it sees, and to explain the logic of its own decisions. Even for tasks that require no explanation, where DL seems the best system, like image recognition, DL is not as safe as human eyes. You can fool DL with “adversarial examples”: a photo of something, like a cat, with imperceptible perturbations added, can fool the DL into seeing something else, like a dog. All humans will still see a cat, but the DL will see a dog, or whatever else the hacker secretly embedded. This can be exploited with street signs to hack current self-driving cars. New AI systems resisting this hack will replace DL.
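For the curious, the best-known recipe for crafting such perturbations is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (2015); the article shows no code, so this PyTorch sketch and its eps value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.03):
    """Fast Gradient Sign Method: nudge every pixel one tiny step in
    the direction that most increases the model's loss. The change is
    invisible to humans but can flip the predicted class."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + eps * image.grad.sign()  # imperceptible shift
    return adversarial.clamp(0.0, 1.0).detach()    # keep pixels valid
```

One gradient step is enough: no access to the training data is needed, only to the model’s gradients (and black-box variants that transfer between models do not even need that).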

The author of Keras, the most popular DL library, said in his post “The limitations of deep learning”: “the only real success of DL has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data.” These spaces have many dimensions, not just three; this is how DL can mimic Picasso’s art style, poker bluffs, and some human creativity in many tasks. But in layman’s terms, I would say this means: DL can be trained to recognize cat photos without understanding what a cat is, and to be racist without knowing it is being racist. DL can recognize cats, be racist, or win at games, which is impressive and at times useful, but DL can’t explain why a photo shows a cat, or whether a decision was racist.
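That “continuous geometric transform” is literal: a deep net is a chain of differentiable space-warping layers and nothing more. A minimal PyTorch sketch (the layer sizes are arbitrary assumptions for illustration):

```python
import torch.nn as nn

# Each Linear layer rotates and stretches the high-dimensional input
# space, each ReLU folds it, until inputs of the same class land close
# together. That is the whole mechanism.
model = nn.Sequential(
    nn.Linear(784, 128),  # e.g. a 28x28 image flattened to 784 numbers
    nn.ReLU(),
    nn.Linear(128, 10),   # 10 class scores; no concept of "cat" anywhere
)
```

Nowhere in this stack is there a symbol for “cat”, let alone a reason that could be read back out to a judge or a user.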

In “The future of deep learning” the Keras author describes his vision of a new system in which DL provides only the “geometric modules”, which should interact with not-yet-existing “algorithmic modules” and “meta learners”. This would increase the number and types of tasks solved, but the decisions would still be unexplainable, due to the DL modules. It’s like when we can’t put into words certain feelings or images computed in our brain. Humans explain everything, but mostly with made-up, oversimplified excuses that everyone seems to believe as accurate; machines, instead, are unfairly asked to be really accurate. Other experts are drafting new AI systems that do not include DL at all, but they lack funds: everyone invests only in DL now, and the DL mania will continue for a while. No one knows what the next big AI thing will be, but it is unlikely to be DL 2.0.

DL is hyped because only those who sell DL software and hardware, despite the conflict of interest, get interviewed in AI debates. Have you noticed any legitimate “natural intelligence” experts, like psychologists and philosophers, supporting DL?

If you have neither AI urgency nor time to study, wait for the next AI system to be ready and study it directly, skipping DL 1.0. Otherwise, if you have AI urgency and/or time to study, be sure to cover the whole of AI and the many Machine Learning fields, not DL only.

Thanks for reading. If you enjoyed this, please click the like and share buttons. But before commenting, especially if you just paid for an expensive DL course or you disagree, please first read my longer article in full: AI (Deep Learning) explained simply. If interested in the robot-safety sci-fi debate, read: Will AI kill us all after taking our jobs?

Original. Reposted with permission.

Bio: Fabio Ciucci is a Founder and CEO at Anfy srl in Lucca, Italy. Since 1996 he has created his own companies and advised others (entrepreneurs, family offices, venture capitalists) on private equity, hi-tech investments, innovation, and artificial intelligence.
