Neural network AI is simple. So… Stop pretending you are a genius
This post may come off as a rant, but that's not so much its intent as it is to point out why we went from having very few AI experts to having so many in so little time.
By Brandon Wirtz, CEO and Founder at Recognant
On a regular basis, people tell me about their impressive achievements using AI. 99% of these things are completely stupid. I also want to convey that most of these experts only seem like experts because so few people know how to call them on their bullshit.
So you built a neural network from scratch… And it runs on a phone…
Great. So you converted 11 lines of Python that would fit on a t-shirt into Java, C, or C++. You have mastered what a cross-compiler can do in 3 seconds.
Most people don't realize that a neural network is this simple. They assume it must be super complex. Like fractals, a neural network can do things that seem complex, but that complexity comes from repetition and a random number generator.
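For reference, here is a minimal sketch of the kind of network being described: a single-layer network in roughly a dozen lines of numpy, in the spirit of the widely shared "11 lines of Python" example. The toy data and layer size are illustrative choices, not anything from the original post.

```python
import numpy as np

# Toy training set: 3 binary inputs, 1 binary output (the output is just the first input).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 0, 1, 1]]).T

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

np.random.seed(1)                           # fix the RNG so runs are repeatable
weights = 2 * np.random.random((3, 1)) - 1  # random weights in [-1, 1)

for _ in range(10000):                      # the whole "network" is this loop
    output = sigmoid(X @ weights)           # forward pass
    error = y - output                      # how wrong are we?
    weights += X.T @ (error * output * (1 - output))  # gradient-style update

print(output.round(3))                      # outputs approach 0, 0, 1, 1
```

That really is the whole trick: a loop, a matrix multiply, and a squashing function.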
So you built a neural network that is 20 layers deep…
Congrats! You took the code above and looped the loop again. That must have been so hard, deciding where to put another For and a Colon.
"Deep learning" and n layers of depth just mean a neural network that feeds the output of one layer into the next: you loop the loop. (Running a network's output back through itself over time is a related trick called a recurrent neural network, or RNN.)
This is similar to learning to drive while only being able to make right turns. You can get almost anywhere doing this. It may not be the most efficient, but it is easier than making left turns.
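To make the "loop the loop" point concrete, here is the same toy network with one hidden layer added. The forward pass is literally the same line repeated; the XOR-style target and hidden width of 4 are arbitrary illustrative choices (this mirrors the well-known two-layer numpy example).

```python
import numpy as np

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T   # XOR-like target: needs a hidden layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

np.random.seed(1)
w0 = 2 * np.random.random((3, 4)) - 1   # input  -> hidden (4 units)
w1 = 2 * np.random.random((4, 1)) - 1   # hidden -> output

for _ in range(60000):
    h = sigmoid(X @ w0)                       # layer 1
    out = sigmoid(h @ w1)                     # layer 2: same line, looped again
    out_delta = (y - out) * out * (1 - out)   # error at the output
    h_delta = (out_delta @ w1.T) * h * (1 - h)  # error pushed back one layer
    w1 += h.T @ out_delta
    w0 += X.T @ h_delta

print(out.round(3))
```

Going from 2 layers to 20 is more of the same: another weight matrix, another repetition of those lines.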
So you trained a neural network using Nvidia GPUs and moved it to the phone…
In those 11 lines of code above, one thing that is wrong (or, rather, not implemented) is that the random seed is never set. Without setting the seed, I can't guarantee that a second pass will draw the same random numbers as the first. As a result, I could get dramatically different results. Since your phone and your desktop won't produce the same random numbers, and different phone chips could all produce different random numbers, moving your training from a GPU-based system to a mobile system has a high probability of not working.
Since training can take millions to billions of times longer than classifying in a locked system, building a neural network on a phone is pretty much impossible. There will always be differences between devices. Plus or minus 5% is not a big deal for voice recognition. It is a big deal for things like cancer detection and diagnosis.
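The seeding point takes only a few lines to demonstrate. This is a generic numpy sketch, not code from any particular system:

```python
import numpy as np

def init_weights(seed):
    """Draw initial weights from a seeded generator."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1, 1, size=(3, 1))

a = init_weights(42)
b = init_weights(42)   # same seed: identical weights, reproducible run
c = init_weights(7)    # different seed: different weights, different training run

assert np.array_equal(a, b)
assert not np.array_equal(a, c)
```

Reproducibility across machines additionally depends on the RNG implementation and floating-point behavior being identical, which is exactly the cross-device problem described above.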
So you trained a neural network to do something no human has been able to do… like detect whether people are gay just from a photo.
No. No, you didn't. Neural networks are dumb black-box systems. If you torture them enough you can get a great fit on your test data, but you won't get great results on randomly sourced tests. AI is really good at spurious correlations. The marriage rate in Kentucky is not driving the drowning rate.
Nor does the fact that a picture was taken close up prove that the animal in the photo is a cat instead of a lion. The shape of the horizon didn't cause something to be a lion or a cat.
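The "great fit on the training data, garbage on fresh data" failure mode is easy to reproduce. The sketch below memorizes pure noise with a nearest-neighbor "model" (a stand-in for an over-fit network): it scores perfectly on its own training set and at chance on held-out data. All data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples of pure noise with random labels: there is no real signal at all.
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)
X_train, y_train = X[:100], y[:100]
X_test, y_test = X[100:], y[100:]

def predict_1nn(X_tr, y_tr, X_q):
    # Memorize the training set: answer with the label of the closest training point.
    d = ((X_q[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return y_tr[d.argmin(axis=1)]

train_acc = (predict_1nn(X_train, y_train, X_train) == y_train).mean()
test_acc = (predict_1nn(X_train, y_train, X_test) == y_test).mean()
print(train_acc, test_acc)   # perfect "fit" on training data, chance-level elsewhere
```

A 100% training score on noise is exactly the spurious-correlation trap: the model found patterns, but none of them mean anything.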
People want to ascribe magic powers to AI, but for the most part AI can't do anything a human can't. There are some exceptions, but only for transparent AI. Neural networks aren't transparent, and even in the transparent systems a human would be able to replicate the final result.
So you use TensorFlow to…
Remember those 11 lines from above? TensorFlow is just a wrapper for those 11 lines. What it does well is help you visualize what is happening in them. In many ways it is like Google Analytics: all of the data Google Analytics uses is probably in your server log, but looking at raw logs is hard, and looking at Google Analytics is easy. At the same time, while Google Analytics will tell you that your server is slow, it won't tell you why.
Those of us who understand neural networks don't want or need TensorFlow, because we can visualize the data without the fancy charts and animations, and because we look at the data and code raw, we can figure out the equivalent of why the server is slow.
So you use neural networks to do NLP/NLU…
Common sense, people. Neural networks are not simulating much more than a slug's level of intelligence. What are the odds you taught a slug to understand English?
Building a neural network with one trait for every word in the English language would require a network that used as much computing power as all of Google. Upping that to one trait for each word sense in the English language would take all of the computing in all of the cloud services on the planet.
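As a rough back-of-envelope on the scale involved: the word and sense counts below are commonly cited approximations, and the hidden-layer width is an arbitrary assumption for illustration, not a figure from this article.

```python
# Back-of-envelope weight counts for a "one trait per word" input layer.
words = 170_000        # rough, commonly cited count of English headwords (assumption)
senses_per_word = 4    # assumed average number of senses per word (assumption)
hidden = 10_000        # hypothetical hidden-layer width (assumption)

weights_per_word = words * hidden
weights_per_sense = words * senses_per_word * hidden
print(f"{weights_per_word:,} weights for one trait per word")
print(f"{weights_per_sense:,} weights for one trait per sense")
```

Even under these toy assumptions the weight count runs into the billions, which is the kind of growth the paragraph above is gesturing at.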
AI can be built to do great things. Neural networks have limitations.
So you have a self-defining neural network…
Congrats, you know how to wrap the 11 lines of neural network code in the 9 lines of code for a genetic algorithm, or the 44 lines for a distributed evolutionary algorithm. Write a press release, because your 55 lines of code are going to... Oh, wait...
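Here is a sketch of what wrapping the network in a genetic algorithm looks like, on the same toy data as before. The population size, mutation scale, and generation count are arbitrary illustrative choices, not anyone's production settings.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([0.0, 0.0, 1.0, 1.0])

def fitness(w):
    out = 1 / (1 + np.exp(-(X @ w)))      # the tiny network's forward pass
    return -((out - y) ** 2).sum()        # negative squared error: higher is better

pop = rng.uniform(-1, 1, size=(30, 3))    # population of candidate weight vectors
for _ in range(200):                      # generations
    scores = np.array([fitness(w) for w in pop])
    parents = pop[scores.argsort()[-10:]]          # keep the fittest third
    children = parents[rng.integers(0, 10, 20)]    # clone random parents
    children = children + rng.normal(0, 0.3, children.shape)  # mutate
    pop = np.vstack([parents, children])

best = pop[np.array([fitness(w) for w in pop]).argmax()]
out = 1 / (1 + np.exp(-(X @ best)))
print(out.round(2))
```

No gradients at all: the "self-defining" part is just selection plus random mutation wrapped around the same forward pass.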
So you trained a neural network to…anything.
Congrats, you are a data wrangler. While that sounds impressive, you are really a dog trainer. Only your dog has the brains of a slug, and the only thing it has going for it is that you can make lots of them. There is no magic in owning a training set. It might have been hard to track down, but don't fool yourself (or others) into thinking you are anything more than a glorified slug trainer.
So you combined neural networks and blockchain…
Congrats, you know how to stack hype. Unfortunately, hash mining and neural networks don't have anything in common, and trying to run all of a data set through all of the nodes of a blockchain farm wouldn't work. Neural networks start to have problems when you "slice" the load more than about 16 ways with data sets of normal size. You can go larger if you have billions of records, or if you are doing backpropagation and want to test multiple orders of data presentation, but these techniques don't scale to thousands or millions of nodes.
I don't do much with neural networks.
There is neural network code in my toolbox, but that is what it should be: a tool in the selection, not the basis for an entire product. Most of my work is in epistemology and self-defining heuristics. The combination of technologies is called Mind Simulation because, rather than modeling the hardware of the brain in software (which neural networks are supposed to do, but don't), Mind Simulation is about modeling the software of the brain in software. A brain emulator, as it were. Mind Simulation has only been a thing for about 10 years, whereas neural networks have been around for 50+. Mind Simulation also differs in that it is transparent, and takes millions of lines of code, not dozens.
To learn more about AI that isn't neural network based, check out my follow up article:
Original. Reposted with permission.