Neuromorphic Computing: The Next Iteration of Artificial Intelligence
How Neuromorphic Computing Uses Spiking Neural Networks to Pave the Way for the Next Evolution of Artificial Intelligence
For years, researchers have attempted to reverse-engineer and replicate the functioning of the human brain in machines. This pursuit is essentially how the concept of neuromorphic computing emerged.
Its models work similarly to how the brain functions, but use spiking neural networks in place of conventional artificial neural networks. Neuromorphic systems are considered the next generation of AI: they implement neural networks as analog or digital replicas on electronic circuits. Further, they promise high computing speeds while reducing the need for the gigantic devices, cooling units, energy budgets, and dedicated spaces required to house supercomputers.
The concept of neuromorphic computing was coined by California Institute of Technology Professor Carver Mead in the late 1980s. He suggested that analog chips could mimic the electrical activity of neurons and synapses in the brain. Conventional computing chips use transistors that are either on or off, one or zero, i.e., at a fixed voltage, and power dissipation is a huge problem because existing supercomputers require enormous power to perform complex machine learning tasks. The neurons of a spiking neural network, by contrast, fire independently of one another, sending pulsed signals to other neurons in the network that directly change the electrical states of those neurons.
These neurons send information to one another in pulse patterns called spikes. By modulating the responses to spikes, one obtains a continuum of values, analogous to how the human brain works. The timing of these spikes is critical; their amplitude is not. By encoding information in the signals themselves and in their timing, spiking neural networks simulate natural learning processes, continually remapping the synapses between artificial neurons in response to stimuli.
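The firing behavior described above can be sketched with a minimal leaky integrate-and-fire (LIF) neuron, a standard textbook model of spiking dynamics. The constants below (weight, leak factor, threshold) are illustrative choices, not values from any particular neuromorphic chip: each incoming spike nudges the membrane potential, which otherwise decays toward rest; when the potential crosses the threshold, the neuron emits its own spike and resets. Note that the information lives in when spikes occur, not in their amplitude.

```python
def lif_neuron(input_spikes, weight=0.5, leak=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    input_spikes: list of 0/1 values, one per time step.
    Returns the list of time steps at which the neuron fired.
    """
    potential = 0.0
    fire_times = []
    for t, spike in enumerate(input_spikes):
        potential = potential * leak + weight * spike  # leak, then integrate
        if potential >= threshold:                     # threshold crossed:
            fire_times.append(t)                       # emit an output spike
            potential = 0.0                            # reset after firing
    return fire_times

# Timing matters: a dense burst of spikes drives the neuron over threshold,
# while the same number of widely spaced spikes leaks away between arrivals.
burst = lif_neuron([1, 1, 1, 1, 1, 0, 0, 0])   # fires at step 2
sparse = lif_neuron([1, 0, 0, 0, 1, 0, 0, 0])  # never fires
```

The two calls at the end illustrate the timing sensitivity mentioned above: identical spike counts produce different outputs depending purely on when the spikes arrive.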
A key advantage of neuromorphic computing is that the structure of its hardware makes it much more efficient at training and running neural networks: it can run artificial intelligence models faster than equivalent CPUs and GPUs while consuming less power. Existing hardware consumes more power because of the von Neumann bottleneck. Conventional chips follow the von Neumann architecture, which separates memory from computation, so they must shuttle information back and forth between the two units, wasting energy and time and limiting speed. Owing to their small size and low power consumption, neuromorphic chips are also an attractive alternative to the cloud for running artificial intelligence algorithms at the edge.
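One reason spike-based hardware can be so frugal is that computation is event-driven: work happens only when a spike arrives. A toy operation count makes the contrast concrete. This is a simplified sketch under assumed numbers (100 inputs, 10 outputs, 5% activity), not a model of any specific chip: a conventional dense layer performs a multiply-accumulate for every input-output pair each step, while an event-driven layer touches only the inputs that actually spiked.

```python
def dense_ops(num_inputs, num_outputs):
    # Dense layer: every input contributes to every output, active or not.
    return num_inputs * num_outputs

def event_driven_ops(spikes, num_outputs):
    # Event-driven layer: only inputs that spiked trigger any work at all.
    active = sum(1 for s in spikes if s)
    return active * num_outputs

spikes = [0] * 95 + [1] * 5              # 5 of 100 inputs are active
dense = dense_ops(len(spikes), 10)       # 100 inputs x 10 outputs = 1000 ops
sparse = event_driven_ops(spikes, 10)    # 5 active x 10 outputs = 50 ops
```

With only 5% of inputs active, the event-driven count is 20x lower; real spiking workloads are often similarly sparse, which is one intuition behind the efficiency figures quoted for chips like Tianjic and Loihi below.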
Popular Neuromorphic Chips
Currently, a few neuromorphic systems are in vogue. For instance, the Tianjic neuromorphic chip contains 40,000 neurons and 10 million synapses in an area of 3.8 square millimeters. When used in a self-driving bicycle, the Tianjic chip reportedly performed 160 times faster and 120,000 times more efficiently than a comparable GPU.
Loihi, from Intel Labs, is the company's fifth-generation self-learning neuromorphic research test chip, introduced in November 2017. Loihi is a 14-nanometer chip with a 60-square-millimeter die containing over 2 billion transistors. It packs 128 neuromorphic cores, three managing Lakemont cores for orchestration, and a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs).
IBM’s neurosynaptic processor, TrueNorth, has 4,096 cores and 5.4 billion transistors, fabricated on Samsung’s 28 nm process. It is IBM’s largest chip by transistor count, yet it uses less than 100 mW of power while simulating complex recurrent neural networks, with a power density of about 20 mW/cm². It was built in 2014 as part of DARPA’s SyNAPSE program.