Machine Learning Is Not Like Your Brain Part 4: The Neuron’s Limited Ability to Represent Precise Values

In the fourth installment, we focus on a fundamental issue: it is difficult to represent numerical values in neurons and impractical to represent them with precision.




ML algorithms rely on their ability to represent numbers with a high degree of resolution and accuracy. This is difficult or impossible with biological neurons. Moreover, the more accuracy needed, the slower a neuron-based system will run. Any biological brain implementing the numerical precision required by ML would be too slow to be useful.

In a computer or your brain, both neurons and transistors represent information digitally – neurons by emitting spikes and transistors by taking on one of two defined voltage states. Given such digital components, there are a number of ways to encode numerical values:

  1. The frequency of spikes (bits) in a single serial signal;
  2. The timing of spikes in a single signal;
  3. Some encoding of parallel signals;
  4. Some more complex encoding scheme like binary integers, floating point, or Unicode. While it is theoretically possible that streams of neural spikes could encode binary numbers or Unicode strings, it is vanishingly unlikely.

The first three methods DO appear in your nervous system and all of them exist in computer systems. 


Method 1


In the brain, numeric values could be represented by the number of neural spikes in a given time period. But remember that neurons are really slow, with a maximum frequency of about 250Hz, or 4ms per spike. If we want to represent the numbers 0 through 10, we could allocate a time period of 40ms. Ten spikes in that period could represent 10, no spikes could represent zero, and so on.
Some issues with this approach:

  1. You can’t have fractional spikes, so in 40ms you can never represent more than 11 distinct values;
  2. If you want to represent 100 different values, you have to wait 400ms (nearly half a second) to know which value you’ve represented – much too slow to be useful, because any sophisticated neural process would require multiple levels of processing;
  3. To handle larger numbers of values, you need progressively smaller synapse weights, and the brain’s high level of internal electrical noise becomes an issue;
  4. With a fluctuating value, you can only know the new value at the end of the time period. In other words, a signal which is zero most of the time and 10 for 20ms would register as a 5, because there would only be time for 5 spikes in the 40ms time period (as the sketch after this list illustrates).
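To make the timing arithmetic concrete, here is a minimal Python sketch of this kind of rate code. The 4ms spike interval and 40ms window come from the figures above; the function names and the list-of-spike-times representation are illustrative assumptions, not a model of real neural machinery.

```python
def encode_rate(value, max_value=10, spike_interval_ms=4):
    """Emit `value` spikes, one per 4ms slot, in a window sized for `max_value`."""
    window_ms = max_value * spike_interval_ms  # 40ms for values 0-10
    spikes = [t * spike_interval_ms for t in range(value)]
    return spikes, window_ms

def decode_rate(spike_times, window_ms):
    """Decode by counting the spikes that fall inside the window."""
    return sum(1 for t in spike_times if t < window_ms)

# A steady value of 7 decodes correctly:
spikes, window = encode_rate(7)
print(decode_rate(spikes, window))  # 7

# But a signal that is 0 for the first 20ms and 10 for the last 20ms
# only has room for 5 spikes, so it registers as 5 (issue 4 above):
late_spikes = [20 + t * 4 for t in range(5)]
print(decode_rate(late_spikes, window))  # 5
```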


Method 2


In this method, rather than counting the number of spikes in a given time period, we examine the time between adjacent spikes. Many peripheral nerves fire faster with greater stimulation. Some retinal nerves, for example, fire faster with brighter light. Since the fastest firing rate is every 4ms, we could let that represent 10, a spike every 5ms could represent 9, and so on. Now we can represent the same 11 values in 14ms instead of 40ms. Neurons are actually very good at this type of differentiation. As an example, the brain’s ability to detect the direction of a sound with sub-millisecond precision relies on differentiating signals with precise arrival-time differences.

You might think that you could say 4ms represents one value while 4.01ms (for example) represents another, giving any desired level of precision. Unfortunately, this doesn’t work because of that old bugaboo, noise. The brain is an electrically noisy environment, and neural signals might jitter by as much as a millisecond.
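Here is a toy Python decoder for this interval code, using the 4ms-means-10 mapping above (the step sizes and function names are illustrative assumptions). It shows why 1ms steps survive jitter while 0.01ms steps do not:

```python
import random

def decode_interval(interval_ms, fastest_ms=4.0, step_ms=1.0, max_value=10):
    """Interval code from the text: 4ms -> 10, 5ms -> 9, ... 14ms -> 0."""
    value = max_value - round((interval_ms - fastest_ms) / step_ms)
    return max(0, min(max_value, value))

random.seed(1)

# With 1ms steps, half a millisecond of jitter is usually survivable:
print(decode_interval(7.0 + random.uniform(-0.5, 0.5)))  # usually 7

# With 0.01ms steps ("4ms vs 4.01ms"), a millisecond of jitter spans
# roughly 100 code steps, so the decoded value is essentially random:
print(decode_interval(4.01 + random.uniform(-1.0, 1.0),
                      step_ms=0.01, max_value=1000))
```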

Let’s look at the receiving end of such a signal. By adjusting various parameters of a neuron model, it is possible for an individual neuron to respond to any specific firing timing. This means that to differentiate 10 different signal timings, you need 10 neurons. Pairs of neurons, however, can detect which of two incoming signals is firing faster than the other with considerable accuracy. This means that while this type of signal is very useful for relative values, such as detecting a boundary between two different levels of brightness, it can’t be used to detect absolute signal values.
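A pair-of-neurons comparator might be sketched like this; `faster_of` is a hypothetical stand-in for such a pair, reporting only which input fires faster:

```python
def faster_of(interval_a_ms, interval_b_ms):
    """Report which of two spike trains fires faster (shorter interval).
    A relative judgment only -- the absolute 6ms or 7ms value is lost."""
    return "A" if interval_a_ms < interval_b_ms else "B"

print(faster_of(6.0, 7.0))  # "A" -- useful for boundaries and contrast,
                            # but it never recovers an absolute value
```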

A further issue is that while individual brain neurons can detect this type of signal, there is no way for them to generate such a signal. The only way for a neuron to produce a 6ms versus a 7ms pulse gap is for it to receive such an input signal. This makes any brain computation on this type of signal prohibitively complex.


Method 3


Consider a cluster of neurons used to represent a signal: the more of them that fire, the higher the represented value. Interestingly, human touch sensitivity uses this mechanism. The harder your fingertip is pressed, the greater the number of sensory nerves that fire. This has the benefit of representing any number of values in a single firing time, but the practical limitation is that it uses lots of neurons. To represent a 1,000x1,000 visual array, each pixel of which might be any one of 1,000 colors, would take a billion neurons. As there are only 140 million neurons in the human visual cortex, this mechanism has only limited utility.
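A minimal sketch of such a population code, with hypothetical names; each distinguishable level needs its own neuron, which is where the billion-neuron figure comes from:

```python
def encode_population(value, pool_size=10):
    """Population code: `value` of the pool's neurons fire (1), the rest don't (0)."""
    return [1] * value + [0] * (pool_size - value)

def decode_population(firing):
    return sum(firing)  # just count the active neurons

print(decode_population(encode_population(7)))  # 7, in a single firing time

# The cost: one neuron per distinguishable level, per signal.
pixels, color_levels = 1000 * 1000, 1000
print(pixels * color_levels)  # 1,000,000,000 neurons for the array in the text
```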


Conclusion


The maximum number of values which can realistically be represented by neuron firing is somewhere between 10 and 100. ML algorithms require much more precision than this because the underlying concept of gradient descent presumes a continuous gradient surface. And the more values you need to represent information in the brain, the slower it will run.
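As a toy illustration of why coarse values break gradient descent (not a model of any particular ML system), compare full-precision descent on a simple quadratic loss with descent whose weight is snapped to only 10 representable levels:

```python
LR = 0.1
TARGET = 0.4567  # arbitrary minimum of the toy loss (w - TARGET)**2

def grad(w):
    return 2 * (w - TARGET)

def quantize(w, levels=10):
    """Snap w to the nearest of `levels` evenly spaced values in [0, 1],
    standing in for a neuron that can only hold ~10 distinct values."""
    return round(w * (levels - 1)) / (levels - 1)

w_float = w_coarse = 1.0
for _ in range(100):
    w_float -= LR * grad(w_float)
    w_coarse = quantize(w_coarse - LR * grad(w_coarse))

print(round(w_float, 4))   # ~0.4567: full precision converges
print(round(w_coarse, 4))  # 0.6667: never reaches the minimum
```

The quantized run stops improving as soon as the gradient step is smaller than half the spacing between representable values – the same trap a 10-level neural code would set for any learning rule.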


In Part Five of this series, we will cover why neurons can’t perform the simplest summation required for ML.

Charles Simon is a nationally recognized entrepreneur and software developer, and the CEO of FutureAI. Simon is the author of Will the Computers Revolt?: Preparing for the Future of Artificial Intelligence, and the developer of Brain Simulator II, an AGI research software platform. For more information, visit here.