Silver Blog, July 2017
AI and Deep Learning, Explained Simply

AI can now see, hear, and even bluff better than most people. We look into what is new and real about AI and Deep Learning, and what is hype or misinformation.
 



After learning from a million examples, an ML can make fewer mistakes than humans in percentage terms, but its errors can be of a different kind, ones humans would never make, such as classifying a toothbrush as a baseball bat. This difference from humans can be exploited for malicious AI hacking, for example painting small "adversarial" changes over street signs, unnoticeable to humans but dramatically confusing for self-driving cars.
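To make the idea concrete, here is a minimal sketch (assuming PyTorch, and a hypothetical image classifier `model` with an `image` tensor and `label`) of the fast gradient sign method, one common way such adversarial perturbations are crafted:

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the
    classifier's loss (fast gradient sign method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The per-pixel change is tiny (epsilon), so it is barely visible
    # to a human, yet it can flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

The same trick, printed as a sticker on a street sign, is what makes these attacks practical in the physical world.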


(AI trainers = puppy trainers, not engineers. Photo: Royal Air Force, Mildenhall)

AI will kill old jobs but create new ML-trainer jobs, closer to puppy training than to engineering. An ML is harder to train than a puppy, since (unlike the puppy) it lacks general intelligence, so it learns everything it spots in the data, without any selection or common sense. A puppy would think twice before learning evil things, such as killing friends. For an ML, it makes no difference whether it serves terrorists or hospitals, and it will not explain why it made its decisions. An ML will not apologize for errors or fear being powered off for them: it is not a sentient general AI. To meet safety and quality standards, each ML will be surrounded by many humans skilled in ML training, ML testing, and also in interpreting the ML's decisions and ethics. All at once, in the same job title.

Practical ML training. If you train with photos of objects held by a hand, the ML will include the hand as part of the object, and fail to recognize the object alone. A dog knows how to eat from a hand; the dumb ML eats your hand too. To fix this, train on hands alone, then on objects alone, and finally on objects held by hands, labeled as "object x held by hand". The same goes for changed objects: a car without wheels or deformed by an accident, a house in ruins, etc. Any human knows it's a car crashed into a wall, a bombed house. An ML sees unknown new objects, unless you teach it piece by piece, case by case. Including weather! If you train with photos all taken on sunny days, tests will work on other sunny-day photos, but not on photos of the same things taken on cloudy days. The ML has learned to classify based on sunny or cloudy weather, not just on the objects. A dog knows the task is to tell what the object is, whether it is seen in sunny or cloudy light. An ML instead picks up all the subtle clues: you need to teach everything 100% explicitly.
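In practice, part of this "explicit teaching" is done by varying the training photos on purpose. A minimal sketch, assuming a torchvision-style pipeline (the folder path and class names below are hypothetical), of randomizing lighting and framing so the ML cannot lean on sunny-vs-cloudy as a shortcut:

from torchvision import datasets, transforms

# Randomly vary brightness, contrast, and framing so the classifier
# cannot use lighting (sunny vs. cloudy) as a hidden clue.
train_transforms = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.4),
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Hypothetical folder layout: one sub-folder per label, including
# explicit classes such as "hand", "toothbrush", "toothbrush_in_hand".
train_set = datasets.ImageFolder("data/train", transform=train_transforms)

Augmentation helps with lighting, but the hand-vs-object problem still requires the explicit, case-by-case labeling described above.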

Copyright and intellectual property laws will need updates: MLs, like humans, can invent new things. An ML is shown existing things A and B, and produces C, a new, original thing. If C is different enough from both A and B, and from anything else on earth, C can be patented as an invention or artwork. Who is the author? Further, what if A and B were patented or copyrighted material? When C is very different, the authors of A and B can't guess that C exists thanks to their A and B. Say it is not legal to train MLs on recent copyrighted paintings, music, architecture, design, chemical formulas, or perhaps stolen user data. Then how do you guess the data sources from the ML's results alone, when they are less recognizable than a Picasso style transfer? How will you know an ML was used at all? Many people will use MLs in secret, claiming the results as their own.

For most tasks in small companies, it will remain cheaper to train humans than MLs. It's easy to teach a human to drive, but epic to teach the same to an ML: you need to tell the ML, and let it crash, through millions of handcrafted examples covering all road situations. Afterwards, perhaps the ML will be safer than any human driver, especially the drunk, sleepy, phone-watching, speed-limit-ignoring, or simply mad ones. But such expensive, reliable training is viable only for big companies. MLs trained cheaply will be faulty and dangerous, so only a few companies will be able to deliver reliable MLs. A trained ML can be copied in no time, unlike a brain's experience, which cannot be transferred to another brain. Big providers will sell pre-trained MLs for reusable common tasks, for example a "radiologist ML". The ML will complement one human expert, who remains required, and replace just the "extra" staff. A hospital will hire a single radiologist to oversee the ML, rather than a dozen radiologists. The radiologist job is not extinct; there will just be fewer per hospital. Whoever trained the ML will recoup the investment by selling it to many hospitals. The cost of training an ML will decrease every year, as more people learn how to train MLs. But due to data preparation and testing, reliable ML training will never end up cheap. Many tasks can be automated in theory, but in practice only a few will be worth the ML setup costs. For tasks too uncommon, like ufologists or translators of ancient (dead) languages, long-term human salaries will remain cheaper than the one-time cost of training an ML to replace too few people.
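The "copied in no time" point is exactly how pre-trained MLs get reused today. A minimal sketch, assuming torchvision and a hypothetical narrow task with 5 classes (the "radiologist ML" here is purely illustrative, not a real product):

import torch
from torchvision import models

# Load an ImageNet-pretrained network: someone else's expensive
# training, copied instantly, unlike human experience.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False   # keep the copied experience frozen

# Swap the final layer for a new, narrower task (5 hypothetical classes).
model.fc = torch.nn.Linear(model.fc.in_features, 5)

# The whole trained model can be shipped to many buyers as one file.
torch.save(model.state_dict(), "radiologist_ml.pt")

Fine-tuning just that last layer on the buyer's own data is far cheaper than training from scratch, which is why a single provider can serve many hospitals.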

Humans will keep doing general AI tasks, out of ML reach. Intelligence quotient (IQ) tests are misleading: they fail to predict people's success in life because there are many different intelligences (visual, verbal, logical, interpersonal, etc.) that cooperate in a mix, and the result can't be quantified with a single IQ number from 0 to n. We call insects "stupid" compared to human IQ, but mosquitoes beat us all the time at the narrow "bite and escape" task. Every month, AIs beat humans at more narrow tasks, like mosquitoes do. Waiting for the "singularity" moment, when AI would beat us at everything, is silly. We are getting many narrow singularities, and once AI beats us at a task, everyone except whoever oversees the AI can quit doing that task. I read around that humans can keep doing unique handcrafted stuff with imperfections, but really, AIs can fake errors too, learning to craft different imperfections per piece. It's impossible to predict which tasks AI will win next, since AI is creative in its own way, but it will lack "general intelligence". For example, comedians and politicians (interchangeably) are safe, despite not requiring special (narrow) studies or degrees: they can just talk about anything in a funny or convincing way. If you specialize in a difficult but narrow and common task (like radiology), MLs will be trained to replace you. Keep general!

Thanks for reading. If you enjoyed this article, please like and share it.

For questions or consulting, message me on LinkedIn, or email fabci@anfyteam.com.

Original. Reposted with permission.

Bio: Fabio Ciucci is a Founder and CEO at Anfy srl in Lucca, Italy. Since 1996 he has created his own companies and advised others (entrepreneurs, family offices, venture capitalists) on private equity, hi-tech investments, innovation, and artificial intelligence.
