Deep Learning and Neuromorphic Chips

The three main ingredients of artificial intelligence are hardware, software, and data. Historically we have focused on improving the software and the data, but what if, instead, the hardware were drastically changed?



By Peter Morgan, Data Science Partnership.

There are three main ingredients to creating artificial intelligence: hardware (compute and memory), software (or algorithms), and data. We’ve heard a lot of late about deep learning algorithms achieving superhuman-level performance on various tasks, but what if we changed the hardware?

First, we can optimize CPUs, which are based on the von Neumann architecture we have been using since the invention of the computer in the 1940s. These approaches include memory improvements, putting more processors on a chip (a GPU of the type found in a cell phone might have almost 200 cores), and specialized hardware such as FPGAs and ASICs.


Such is the case with research being done at MIT and Stanford. At the International Solid-State Circuits Conference in San Francisco earlier this month, MIT researchers presented Eyeriss, a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run AI algorithms locally rather than uploading data to the cloud for processing. Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. The Stanford EIE (Efficient Inference Engine) project is another such effort, in which the processor hardware is specialized for deep learning inference.

The second method relies not on performance tweaks to CPU architectures but on an entirely new architecture, one that is biologically inspired by the brain. This is known as neuromorphic computing, and research labs around the world are currently working on developing this exciting new technology. As opposed to conventional CPUs and GPUs, neuromorphic computing involves neuromorphic processing units (NPUs), spiking neural networks (SNNs), analogue circuits, and spike trains, similar to what is found in the biological neural circuitry of the brain.
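To make the idea of spike trains a little more concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking neuron models used in SNN research. It is purely illustrative and not tied to any particular neuromorphic chip; all constants (time step, membrane time constant, threshold, input current) are arbitrary values chosen for the example.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, simulated in discrete time.
# All constants are illustrative only; real neuromorphic hardware implements
# dynamics like these directly in (often analogue) circuitry.

dt       = 1.0    # time step (ms)
tau_m    = 20.0   # membrane time constant (ms)
v_rest   = 0.0    # resting potential (arbitrary units)
v_thresh = 1.0    # firing threshold
v_reset  = 0.0    # potential after a spike

def simulate_lif(input_current, steps=200):
    """Return the membrane trace and the output spike train (0/1 per step)."""
    v = v_rest
    spikes, trace = [], []
    for t in range(steps):
        # Leaky integration: decay toward rest, plus injected current.
        v += (-(v - v_rest) + input_current[t]) * (dt / tau_m)
        if v >= v_thresh:          # threshold crossed -> emit a spike
            spikes.append(1)
            v = v_reset            # reset the membrane potential
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(trace), np.array(spikes)

# Drive the neuron with a noisy constant current and inspect its firing rate.
rng = np.random.default_rng(0)
current = 1.5 + 0.5 * rng.standard_normal(200)
trace, spikes = simulate_lif(current)
print(f"{spikes.sum()} spikes in {len(spikes)} ms")
```

In an SNN, information is carried by the timing and rate of spikes like these rather than by continuous-valued activations, and because the circuits are event-driven they only do work when a spike actually occurs, which is a large part of the power advantage claimed for neuromorphic hardware.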

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information, as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. This is the process we call learning, and memories are believed to be held in the trillions of synaptic connections; a toy sketch of this kind of spike-timing-driven weight change follows below. Companies developing neuromorphic chips include IBM, Qualcomm, Knowm and Numenta. Government-funded research projects include the Human Brain Project (EU), IARPA (US) and Darwin (China). We will look at each of these in a little more detail after the sketch.
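As a toy illustration of how “changing the connections” can act as learning, the sketch below implements a simplified pair-based spike-timing-dependent plasticity (STDP) rule: a synapse is strengthened when the presynaptic neuron fires just before the postsynaptic one, and weakened when the order is reversed. This is only one of many plasticity rules studied in neuromorphic research, and the amplitudes and time constants here are arbitrary.

```python
import numpy as np

# Simplified pair-based STDP: the weight change depends on the time difference
# between a presynaptic and a postsynaptic spike. Constants are illustrative.

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def stdp_delta_w(t_pre, t_post):
    """Weight update for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> strengthen (causal)
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post fired before pre -> weaken (anti-causal)
        return -A_minus * np.exp(dt / tau_minus)

# Apply the rule to one synapse as it observes a few spike pairs.
w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (70, 72)]:
    w = np.clip(w + stdp_delta_w(t_pre, t_post), 0.0, 1.0)
    print(f"pre={t_pre} ms, post={t_post} ms -> w={w:.3f}")
```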

IBM Research has been working on developing the TrueNorth chip for a number of years now and is certainly making steady progress. Qualcomm has also been working on the Zeroth NPU for the past several years; it is capable of recognizing gestures, expressions, and faces, and of intelligently sensing its own surroundings. Numenta, headed up by Jeff Hawkins, started in 2005 in Silicon Valley and has been making good progress, both theoretical and applied, in emulating the cortical columns found in the brain’s neocortex. They have released products based on the NuPIC (Numenta Platform for Intelligent Computing) architecture, which is used to analyze streaming data. These systems learn the time-based patterns in data, predict future values, and detect anomalies, an idea illustrated in the sketch below. Lastly, founded in 2002, Knowm has an interesting offering based around its patented memristor technology.
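To make that streaming-anomaly-detection idea concrete, here is a minimal sketch of the concept. It is emphatically not NuPIC code and does not use Hierarchical Temporal Memory; it simply uses an exponential moving average as a stand-in for a learned temporal model and flags points whose prediction error is far outside the running error statistics. All parameter values are arbitrary choices for the example.

```python
import numpy as np

# Generic streaming anomaly detection sketch (not NuPIC / HTM): predict each
# value with an exponential moving average and flag points whose prediction
# error is far above the running error statistics.

def detect_anomalies(stream, alpha=0.1, threshold=4.0, warmup=30):
    """Yield (index, value, is_anomaly) for each point in the stream."""
    prediction = None
    err_mean, err_var, n = 0.0, 1e-6, 0
    for i, x in enumerate(stream):
        if prediction is None:
            prediction = x
        error = abs(x - prediction)
        # Only flag once enough history has accumulated.
        is_anomaly = n > warmup and error > err_mean + threshold * np.sqrt(err_var)
        # Update the running error statistics (Welford-style) and the forecast.
        n += 1
        delta = error - err_mean
        err_mean += delta / n
        err_var += (delta * (error - err_mean) - err_var) / n
        prediction = alpha * x + (1 - alpha) * prediction
        yield i, x, is_anomaly

# A smooth sine wave with one injected spike at t = 150.
t = np.arange(300)
signal = np.sin(t / 10.0)
signal[150] += 5.0
print("anomalous indices:",
      [i for i, _, flag in detect_anomalies(signal) if flag])
```

NuPIC’s actual approach learns far richer sequence structure than a moving average, but the input/output contract is similar: data streams in, and a prediction and an anomaly signal come out.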


The Human Brain Project, a European-led multibillion-dollar project to simulate a human brain, has incorporated the neuromorphic chip design of Steve Furber’s group at the University of Manchester into its research efforts. SpiNNaker has so far accomplished the somewhat impressive feat of simulating a billion neurons with spike trains in hardware. Once this hardware system scales up to 80 billion neurons, we will have, in effect, the first artificial human brain, a momentous and historic event. This is predicted to occur around 2025, right in line with Ray Kurzweil’s prediction in his book “How to Create a Mind”.

Darwin is an effort originating from two universities in China; its successful development demonstrates the feasibility of real-time execution of spiking neural networks in resource-constrained embedded systems. Finally, IARPA, a research arm of the US intelligence community, has several ongoing projects involving biologically inspired AI and reverse engineering the brain. One such project is MICrONS (Machine Intelligence from Cortical Networks), which “seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain.” The program is expressly designed as a dialogue between data science and neuroscience, with the goal of advancing theories of neural computation.

So overall, this is a very active area of research at the moment, and one we can only foresee growing in the resources allocated to it, whether that is money spent or the scientists and engineers involved in the research and development work necessary to produce a machine as general-purpose as the brain: a true, artificially engineered brain on a chip, which will clearly lead to more intelligence in the Enterprise as well as in all aspects of our daily lives.

Bio: Peter Morgan is co-founder and CTO of Data Science Partnership (www.dsp.ai), which provides training and consulting services in Data Science, Machine Learning and Artificial Intelligence in the Enterprise. Peter is a scientist-entrepreneur with over twenty years’ experience in computer systems.
