Efficiency Spells the Difference Between Biological Neurons and Their Artificial Counterparts

Part 8 of the series explores a single facet of biological neurons that has, so far, kept them way ahead of their artificial counterparts: their efficiency.




Machine learning has made great advances but, as this series has discussed, it doesn’t have much in common with the way your brain works. Part 8 of the series explores a single facet of biological neurons that has, so far, kept them way ahead of their artificial counterparts: their efficiency.

 

Your brain contains about 86 billion neurons crammed into a volume of a little over one liter. Although machine learning can do many things that the human brain cannot, the brain performs continuous speech recognition, visual interpretation, and a host of other tasks, all while dissipating about 12 watts. In comparison, my laptop draws about 65 watts and my desktop machine draws over 200 watts, and neither of them is capable of running the huge ML networks in use today.
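To put those wattages in perspective, here is a quick back-of-envelope calculation. The figures are the ones quoted above; spreading the brain’s power budget evenly across its neurons is my own simplification:

```python
# Back-of-envelope power comparison using the figures quoted above
BRAIN_WATTS = 12.0        # total brain power dissipation
NEURON_COUNT = 86e9       # approximate number of neurons in a human brain
LAPTOP_WATTS = 65.0       # the author's laptop
DESKTOP_WATTS = 200.0     # the author's desktop

watts_per_neuron = BRAIN_WATTS / NEURON_COUNT
print(f"Average power per neuron: {watts_per_neuron:.1e} W "
      f"(about {watts_per_neuron * 1e12:.0f} picowatts)")
print(f"Laptop:  {LAPTOP_WATTS / BRAIN_WATTS:.1f}x the brain's power budget")
print(f"Desktop: {DESKTOP_WATTS / BRAIN_WATTS:.0f}x the brain's power budget")
```

On average, each neuron gets by on roughly 140 picowatts, and that average includes all the neurons that are doing almost nothing at any given moment.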

How does the brain achieve its remarkable efficiency? I attribute it to three essential factors:

  1. The brain is physical and chemical rather than electronic.
  2. The neurons in the brain are really slow.
  3. Neurons only require energy when they emit spikes.

While we can use electronic instruments to measure voltages within neurons, their fundamental operation is chemical. Ions migrate from one side of a membrane to the other, and ionic molecules change orientation. This is fundamentally different from a computer, where electrons are moved about and the charge they represent travels at the speed of light. Obviously, molecules in the brain don’t require any external energy at all when they are just sitting there, and the amount of energy needed to get a sodium ion (for example) to move from one side of a membrane to the other is minute.

As I mentioned in a previous article in this series, neurons spike at a maximum frequency of 250 Hz, and neural signals travel at a leisurely 2 m/s. If we slowed our CPUs down to a similar pace, they would dissipate a lot less energy too, but never as little as their biological counterparts.
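To make “leisurely” concrete, the snippet below compares those figures with a conventional CPU. The 250 Hz and 2 m/s numbers come from this article; the 3 GHz clock and the near-light-speed signal propagation are round numbers I am assuming for a typical desktop processor:

```python
# Rough speed comparison: biological signalling vs. electronics
NEURON_MAX_HZ = 250        # maximum spike rate (from this article)
AXON_SPEED_M_S = 2.0       # neural signal speed (from this article)
CPU_HZ = 3e9               # assumed clock rate of a typical desktop CPU
WIRE_SPEED_M_S = 2e8       # assumed electrical signal speed, roughly 2/3 c

print(f"Clock-rate ratio:   {CPU_HZ / NEURON_MAX_HZ:,.0f}x")
print(f"Signal-speed ratio: {WIRE_SPEED_M_S / AXON_SPEED_M_S:,.0f}x")
```

Under those assumptions, a CPU cycles roughly twelve million times faster than a neuron can spike, yet the brain still wins on energy.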

The real difference, though, is that neurons need negligible energy except when they fire, and they don’t fire very often. Dividing the brain’s total energy budget by the energy a single spike requires (as calculated from the underlying chemistry) leads to the conclusion that neurons fire, on average, only once every two seconds. Continuous processes like vision and hearing must be running more or less constantly and therefore use more energy, so for the average to work out, vast portions of the brain’s neurons must seldom fire at all. Thus, a neuron that represents a specific memory (your grandmother, for example) likely fires only when you think about your grandmother.
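Working that average backwards gives a sense of how cheap a single spike must be. The sketch below uses only the figures already quoted in this article (12 watts, 86 billion neurons, one spike every two seconds); the joules-per-spike result is my own inference from those numbers, not a value from the article:

```python
# Inferring the implied energy per spike from the article's figures
BRAIN_WATTS = 12.0         # total power budget, i.e. joules per second
NEURON_COUNT = 86e9        # neurons in the brain
AVG_SPIKE_RATE_HZ = 0.5    # one spike every two seconds, on average

total_spikes_per_second = NEURON_COUNT * AVG_SPIKE_RATE_HZ
joules_per_spike = BRAIN_WATTS / total_spikes_per_second
print(f"Implied energy per spike: {joules_per_spike:.1e} J")
# Prints roughly 2.8e-10 J: a few hundred picojoules per spike
```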

But there’s a further way to think about this. A CPU uses some amount of energy whenever it is running at speed (not idle or asleep), and it uses that energy regardless of the data it is processing. Adding two numbers together requires essentially the same energy whether the numbers are 0 + 0 or 12,345 + 67,890. Neurons are different: a neuron that stays silent consumes almost nothing.

This distinction has been the genesis of the neuromorphic computing movement. In the Brain Simulator, processing is only required for neurons that fire, so a desktop CPU can handle up to 2.5 billion synapses per second. Neuromorphic chips capitalize on the same effect to produce AI results with radically less power than conventional machine learning requires.
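As an illustration of that event-driven principle, here is a minimal sketch in Python. It is not the Brain Simulator’s actual code; the Neuron class, thresholds, and weights are hypothetical, and the point is simply that the work per time step scales with the number of spikes rather than the number of neurons:

```python
# Minimal event-driven spiking sketch: work scales with spikes, not neurons.
# This is an illustrative toy, not the Brain Simulator's implementation.

class Neuron:
    def __init__(self, threshold=1.0):
        self.charge = 0.0
        self.threshold = threshold
        self.synapses = []            # list of (target, weight) pairs

def step(firing_now):
    """Advance one time step; only neurons that fired do any work."""
    firing_next = []
    for neuron in firing_now:
        neuron.charge = 0.0                  # reset after firing
        for target, weight in neuron.synapses:
            target.charge += weight          # the only per-synapse work
            if target.charge >= target.threshold:
                firing_next.append(target)
    return firing_next

# A tiny chain n0 -> n1 -> n2; each synapse is strong enough to fire its target.
n0, n1, n2 = Neuron(), Neuron(), Neuron()
n0.synapses.append((n1, 1.0))
n1.synapses.append((n2, 1.0))

firing = [n0]                                # treat n0 as already firing
while firing:
    print(f"{len(firing)} neuron(s) firing this step")
    firing = step(firing)
```

The key property is in the step function: silent neurons are never visited at all, so a mostly quiet network costs almost nothing per time step.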

While neuromorphic systems have moved in the direction of more brain-like architectures, they typically still rely on ML’s backpropagation algorithm, which is not neuromorphic at all.

 

"The final article in this series will summarize the many reasons why Machine Learning isn’t like your brain -- along with a few similarities."

 
 
Charles Simon is a nationally recognized entrepreneur and software developer, and the CEO of FutureAI. Simon is the author of Will the Computers Revolt?: Preparing for the Future of Artificial Intelligence, and the developer of Brain Simulator II, an AGI research software platform.