Spiking neural network

Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks.[1] In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather transmit information only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.[2]

An insect controlled by a spiking neural network searches for a target in unknown terrain.

The most prominent spiking neuron model is the leaky integrate-and-fire model. In the integrate-and-fire model, the momentary activation level (modeled as a differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher or lower, until the state either decays or, if the firing threshold is reached, the neuron fires. After firing, the state variable is reset to a lower value.
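
The dynamics above can be sketched as a minimal leaky integrate-and-fire neuron, discretized with forward-Euler steps (all parameter names and values here are illustrative, not taken from any particular library):

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, r=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Membrane dynamics: dV/dt = (-(V - v_rest) + R*I) / tau.
    On threshold crossing, a spike is emitted and V is reset.
    """
    v = v_rest
    spikes, trace = [], []
    for i_t in input_current:
        # Euler step: leak toward rest, pushed by the input current
        v += dt * (-(v - v_rest) + r * i_t) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset        # reset after firing
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# A constant suprathreshold input produces regular, periodic firing.
spikes, trace = simulate_lif(np.full(200, 1.5))
```

With a constant suprathreshold input the neuron fires periodically; a subthreshold input would simply decay back toward the resting potential.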

Various decoding methods exist for interpreting the outgoing spike train as a real-valued number, relying on the frequency of spikes (rate code), the time to first spike after stimulation, or the interval between spikes.
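
The three readouts can be sketched on a hypothetical spike train sampled at 1 ms resolution (function names are illustrative):

```python
import numpy as np

def rate_decode(spike_train, dt=1e-3):
    """Rate code: firing frequency in Hz over the observation window."""
    return spike_train.sum() / (len(spike_train) * dt)

def time_to_first_spike(spike_train, dt=1e-3):
    """Latency code: time of the first spike, or None if silent."""
    idx = np.flatnonzero(spike_train)
    return idx[0] * dt if idx.size else None

def mean_isi(spike_train, dt=1e-3):
    """Interval code: mean inter-spike interval, or None if < 2 spikes."""
    idx = np.flatnonzero(spike_train)
    return np.diff(idx).mean() * dt if idx.size > 1 else None

# A 1-second train with 5 evenly spaced spikes
train = np.zeros(1000)
train[[100, 300, 500, 700, 900]] = 1
rate = rate_decode(train)            # 5 spikes in 1 s -> 5.0 Hz
latency = time_to_first_spike(train) # first spike at 0.1 s
```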

History

 

Many multi-layer artificial neural networks are fully connected, receiving input from every neuron in the previous layer and signalling every neuron in the subsequent layer. Although these networks have achieved breakthroughs in many fields, they are biologically inaccurate and do not mimic the operation mechanism of neurons in the brain of a living thing.[3]

The biologically inspired Hodgkin–Huxley model of a spiking neuron was proposed in 1952. This model describes how action potentials are initiated and propagated. Communication between neurons, which requires the exchange of chemical neurotransmitters in the synaptic gap, is described in various models, such as the integrate-and-fire model, FitzHugh–Nagumo model (1961–1962), and Hindmarsh–Rose model (1984). The leaky integrate-and-fire model (or a derivative) is commonly used as it is easier to compute than the Hodgkin–Huxley model.[4]

BrainChip Holdings Ltd announced on 21 October 2021 that it was taking orders for its Akida AI Processor Development Kits,[5] making it the world's first commercially available neuromorphic processor operating on a spiking neural network.

Underpinnings

Information in the brain is represented as action potentials (neuron spikes), which may be grouped into spike trains or even coordinated waves of brain activity. A fundamental question of neuroscience is to determine whether neurons communicate by a rate or temporal code.[6] Temporal coding suggests that a single spiking neuron can replace hundreds of hidden units on a sigmoidal neural net.[1]

An SNN computes in the continuous rather than the discrete domain. The idea is that neurons may not test for activation in every iteration of propagation (as is the case in a typical multilayer perceptron network), but only when their membrane potentials reach a certain value. When a neuron is activated, it produces a signal that is passed to connected neurons, raising or lowering their membrane potential.

In a spiking neural network, a neuron's current state is defined as its membrane potential (possibly modeled as a differential equation). An input pulse causes the membrane potential to rise for a period of time and then gradually decline. Encoding schemes have been constructed to interpret these pulse sequences as a number, taking into account both pulse frequency and pulse interval. By using the exact timing of pulse occurrences, a neural network can employ more information and offer better computing properties.

The SNN approach produces a continuous output instead of the binary output of traditional ANNs. Pulse trains are not easily interpretable, hence the need for encoding schemes as above. However, a pulse train representation may be more suited for processing spatiotemporal data (or continual real-world sensory data classification).[7] SNNs consider space by connecting neurons only to nearby neurons so that they process input blocks separately (similar to how a CNN uses filters). They consider time by encoding information as pulse trains so as not to lose information in a binary encoding. This avoids the additional complexity of a recurrent neural network (RNN). Spiking neurons are more powerful computational units than traditional artificial neurons.[8]

SNNs are theoretically more powerful than second-generation networks, i.e., conventional artificial neural networks with continuous (typically sigmoidal) activation functions; however, SNN training issues and hardware requirements limit their use. Although unsupervised biologically inspired learning methods such as Hebbian learning and STDP are available, no effective supervised training method suitable for SNNs can provide better performance than second-generation networks.[citation needed] Spike-based activation of SNNs is not differentiable, making it hard to develop gradient-descent-based training methods to perform error backpropagation, though a few recent algorithms such as NormAD[9] and multilayer NormAD[10] have demonstrated good training performance through suitable approximation of the gradient of spike-based activation.

SNNs have much larger computational costs for simulating realistic neural models than traditional ANNs.[citation needed]

Pulse-coupled neural networks (PCNN) are often confused with SNNs. A PCNN can be seen as a kind of SNN.

Several challenges currently limit the use of SNNs and are being actively researched. The first concerns the non-differentiability of the spiking nonlinearity. The expressions for both the forward- and backward-learning passes contain the derivative of the neuronal activation function, which is non-differentiable because the neuron's output is 1 when it spikes and 0 otherwise. This all-or-nothing behavior of the binary spiking nonlinearity stops gradients from “flowing” and makes LIF neurons unsuitable for gradient-based optimization. The second challenge concerns the implementation of the optimization algorithm itself. Standard backpropagation can be expensive in terms of computation, memory, and communication, and may be poorly suited to the constraints dictated by the hardware that implements it (e.g., a computer, brain, or neuromorphic device).[11] Several approaches exist to overcome the first challenge, including:

  1. resorting to entirely biologically inspired local learning rules for the hidden units
  2. translating conventionally trained “rate-based” NNs to SNNs
  3. smoothing the network model to be continuously differentiable
  4. defining a surrogate gradient (SG) as a continuous relaxation of the real gradient
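
The surrogate-gradient idea (approach 4) can be sketched as a straight-through scheme: the forward pass keeps the hard threshold, while the backward pass substitutes the derivative of a smooth surrogate, here a fast sigmoid in the spirit of [11]. The function names and slope value are illustrative assumptions, not a specific published implementation:

```python
import numpy as np

def spike_forward(v, thresh=1.0):
    """Forward pass: hard Heaviside threshold. Its true derivative is
    zero almost everywhere, so it blocks gradient flow."""
    return (v >= thresh).astype(float)

def surrogate_grad(v, thresh=1.0, slope=10.0):
    """Backward pass: derivative of a fast sigmoid, used in place of
    the degenerate derivative of the step function. It is largest at
    the threshold and decays smoothly away from it."""
    return 1.0 / (slope * np.abs(v - thresh) + 1.0) ** 2

v = np.array([0.2, 0.95, 1.0, 1.4])
out = spike_forward(v)    # [0., 0., 1., 1.]
g = surrogate_grad(v)     # peaks at v == thresh, nonzero everywhere
```

Because the surrogate gradient is nonzero near the threshold, error signals can propagate through spiking layers during backpropagation even though the forward spikes remain binary.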

Applications

SNNs can in principle apply to the same applications as traditional ANNs.[12] In addition, SNNs can model the central nervous system of biological organisms, such as an insect seeking food without prior knowledge of the environment.[13] Due to their relative realism, they can be used to study the operation of biological neural circuits. Starting with a hypothesis about the topology of a biological neuronal circuit and its function, recordings of this circuit can be compared to the output of the corresponding SNN, evaluating the plausibility of the hypothesis. However, the lack of effective training mechanisms for SNNs hinders some applications, including computer vision tasks.

As of 2019 SNNs lag behind ANNs in terms of accuracy, but the gap is decreasing, and has vanished on some tasks.[14]

When using SNNs for image-based data, static images must be converted into binary spike trains. Types of encodings include:[15]

  • Temporal coding generates one spike per neuron in which spike latency is inversely proportional to the pixel intensity.
  • Rate coding converts pixel intensity into a spike train where the number of spikes is proportional to the pixel intensity.
  • Direct coding uses a trainable layer that converts each pixel into a floating-point value at each time step; a threshold is then applied to the generated values to determine whether the output is 1 or 0.
  • Phase coding encodes temporal information into spike patterns based on a global oscillator.
  • Burst coding transmits the burst of spikes in a small-time duration, increasing the reliability of synaptic communication between neurons.
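
The first two schemes above can be sketched as follows, assuming pixel intensities normalized to [0, 1] (function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(pixels, n_steps=100):
    """Rate coding: at every time step each pixel spikes with
    probability equal to its intensity (Bernoulli sampling), so the
    expected spike count is proportional to intensity."""
    return (rng.random((n_steps, pixels.size)) < pixels).astype(np.uint8)

def latency_encode(pixels, n_steps=100):
    """Temporal coding: exactly one spike per pixel, with latency
    inversely related to intensity (bright pixels fire first)."""
    t = np.round((1.0 - pixels) * (n_steps - 1)).astype(int)
    spikes = np.zeros((n_steps, pixels.size), dtype=np.uint8)
    spikes[t, np.arange(pixels.size)] = 1
    return spikes

pixels = np.array([0.0, 0.5, 1.0])
r = rate_encode(pixels)     # dark pixel never spikes, bright always
s = latency_encode(pixels)  # bright pixel spikes at step 0
```

Each encoder returns a (time steps × pixels) binary array, the spike-train format an SNN consumes in place of a static image.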

There are currently two classification data sets[16] for SNNs that can serve as a reference for future performance comparisons of spiking neural networks. So far, MNIST data transformed into spikes has been used, but the transformation involves design choices that are not universally agreed upon. MNIST can be used for rate-based encoding networks, but the capabilities of SNNs go beyond rate encoding.

  • The first data set is called the Heidelberg Spiking Dataset (HD). It consists of approximately 10,000 high-quality recordings of spoken digits from zero to nine in English and German. Twelve speakers were included, six female and six male, aged 21 to 56 years with a mean age of 29. Each speaker recorded around 40 digit sequences per language, for a total digit count of 10,420; the digits were acquired in sequences of ten successive digits. Recordings were performed in a sound-shielded room at the Heidelberg University Hospital with three microphones, digitized by a Steinberg MR816 CSX audio interface, and stored in WAVE format with a sample rate of 48 kHz and 24-bit precision. To separate the data into training and test sets, two speakers were held out exclusively for the test set; the remainder of the test set was filled with samples (5% of the trials) from speakers also present in the training set. This division allows assessment of a trained network's ability to generalize across speakers.[16]
  • The second data set is called Speech Commands (SC). It is composed of one-second WAVE files with a 16 kHz sample rate, each containing a single English word, and is published under a Creative Commons BY 4.0 license with words spoken by 1,864 speakers. Version 0.02 contains 105,829 audio files, in which 24 single-word commands (Yes, No, Up, Down, Left, Right, On, Off, Stop, Go, Backward, Forward, Follow, Learn, Zero, One, Two, Three, Four, Five, Six, Seven, Eight, Nine) were each repeated about five times per speaker, whereas ten auxiliary words (Bed, Bird, Cat, Dog, Happy, House, Marvin, Sheila, Tree, and Wow) were repeated only about once. Partitioning into training, testing, and validation sets was done by a hashing function, and a 30 ms Hann window was applied to the start and end of each waveform. The benchmark considers top-1 classification performance on all 35 classes, which is more difficult than the originally proposed keyword-spotting task on a subset of 12 classes (ten keywords, unknown word, and silence); the data can still be used in the originally intended keyword-spotting way.[16]

Software

A diverse range of application software can simulate SNNs. This software can be classified according to its uses:

SNN simulation

 

These simulate complex neural models with a high level of detail and accuracy. Large networks usually require lengthy processing. Candidates include:[17]

Hardware

 

Future neuromorphic architectures[22] will comprise billions of such nanosynapses, which require a clear understanding of the physical mechanisms responsible for plasticity. Experimental systems based on ferroelectric tunnel junctions have been used to show that STDP can be harnessed from heterogeneous polarization switching. Through combined scanning probe imaging, electrical transport and atomic-scale molecular dynamics, conductance variations can be modelled by nucleation-dominated reversal of domains. Simulations show that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening the path towards unsupervised learning.[23]

 
  • Brainchip's Akida NSoC claims to have effectively 1.2 million neurons and 10 billion synapses[24]
  • Neurogrid is a board that can simulate spiking neural networks directly in hardware. (Stanford University)
  • SpiNNaker (Spiking Neural Network Architecture) uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model. (University of Manchester)[25] The SpiNNaker system is based on numerical models running in real time on custom digital multicore chips using the ARM architecture. It provides custom digital chips, each with eighteen cores and a shared local 128 Mbyte RAM, with a total of over 1,000,000 cores.[26] A single chip can simulate 16,000 neurons with eight million plastic synapses running in real time.[27]
  • TrueNorth is a processor that contains 5.4 billion transistors while consuming only 70 milliwatts; most processors in personal computers contain about 1.4 billion transistors and require 35 watts or more. IBM refers to the design principle behind TrueNorth as neuromorphic computing. Its primary purpose is pattern recognition. While critics say the chip isn't powerful enough, its supporters point out that this is only the first generation, and the capabilities of improved iterations will become clear. (IBM)[28]
  • Dynamic Neuromorphic Asynchronous Processor (DYNAP)[29] combines slow, low-power, inhomogeneous sub-threshold analog circuits, and fast programmable digital circuits. It supports reconfigurable, general-purpose, real-time neural networks of spiking neurons. This allows the implementation of real-time spike-based neural processing architectures[30][31] in which memory and computation are co-localized. It solves the von Neumann bottleneck problem and enables real-time multiplexed communication of spiking events for realising massive networks. Recurrent networks, feed-forward networks, convolutional networks, attractor networks, echo-state networks, deep networks, and sensor fusion networks are a few of the possibilities.[32]
 
  • Loihi is a 14-nm Intel chip that offers 128 cores and 130,000 neurons on a 60-mm package.[33] It integrates a wide range of features, such as hierarchical connectivity, dendritic compartments, synaptic delays and programmable synaptic learning rules.[34] Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with an energy-delay product over three orders of magnitude better than conventional solvers running on a CPU at iso-process/voltage/area.[35] A 64-chip Loihi research system provides an 8-million-neuron neuromorphic system. Loihi is about 1,000 times as fast as a CPU and 10,000 times as energy efficient.[36]
  • BrainScaleS is based on physical emulations of neuron, synapse and plasticity models with digital connectivity, running up to ten thousand times faster than real time. It was developed by the European Human Brain Project.[26] The BrainScaleS system contains 20 8-inch silicon wafers in 180 nm process technology. Each wafer incorporates 50×10⁶ plastic synapses and 200,000 biologically realistic neurons. The system does not execute pre-programmed code but evolves according to the physical properties of the electronic devices.[27]

Benchmarks

Classification capabilities of spiking networks trained with unsupervised learning methods[37] have been tested on common benchmark datasets such as Iris, Wisconsin Breast Cancer, and Statlog Landsat.[38][39][40] Various approaches to information encoding and network design have been used, for example a two-layer feedforward network for data clustering and classification. Based on the idea proposed in Hopfield (1995), the authors implemented models of local receptive fields combining the properties of radial basis functions (RBF) and spiking neurons to convert input signals (classified data) from a floating-point representation into a spiking representation.[41][42]
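
The receptive-field conversion described above can be sketched as follows. This is a hypothetical encoder, not the cited authors' implementation: each overlapping Gaussian field's response to the input is mapped to a spike latency, with stronger responses firing earlier; all names and parameter values are illustrative.

```python
import numpy as np

def gaussian_receptive_fields(x, n_fields=8, t_max=10.0,
                              x_min=0.0, x_max=1.0):
    """Convert a real-valued input into a vector of spike times using
    overlapping Gaussian receptive fields (population latency coding).
    The neuron whose field responds most strongly fires earliest."""
    centers = np.linspace(x_min, x_max, n_fields)
    width = (x_max - x_min) / (n_fields - 1)
    # Response of each field to the input, in (0, 1]
    response = np.exp(-0.5 * ((x - centers) / width) ** 2)
    # Strong response -> early spike; weak response -> late spike
    return t_max * (1.0 - response)

times = gaussian_receptive_fields(0.0)  # field at 0.0 fires first
```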

See also

References

  1. ^ a b Maass, Wolfgang (1997). "Networks of spiking neurons: The third generation of neural network models". Neural Networks. 10 (9): 1659–1671. doi:10.1016/S0893-6080(97)00011-7. ISSN 0893-6080.
  2. ^ Gerstner, Wulfram. (2002). Spiking neuron models : single neurons, populations, plasticity. Kistler, Werner M., 1969-. Cambridge, U.K.: Cambridge University Press. ISBN 0-511-07817-X. OCLC 57417395.
  3. ^ "Spiking Neural Networks, the Next Generation of Machine Learning". 16 July 2019.
  4. ^ Lee, Dayeol; Lee, Gwangmu; Kwon, Dongup; Lee, Sunghwa; Kim, Youngsok; Kim, Jangwoo (June 2018). "Flexon: A Flexible Digital Neuron for Efficient Spiking Neural Network Simulations". 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA): 275–288. doi:10.1109/isca.2018.00032. ISBN 978-1-5386-5984-7. S2CID 50778421.
  5. ^ "Taking Orders of Akida AI Processor Development Kits". 21 October 2021.
  6. ^ Wulfram Gerstner (2001). "Spiking Neurons". In Wolfgang Maass; Christopher M. Bishop (eds.). Pulsed Neural Networks. MIT Press. ISBN 978-0-262-63221-8.
  7. ^ Van Wezel, Martijn (2020). "A robust modular spiking neural networks training methodology for time-series datasets: With a focus on gesture control". {{cite journal}}: Cite journal requires |journal= (help)
  8. ^ Maass, Wolfgang (1997). "Networks of spiking neurons: The third generation of neural network models". Neural Networks. 10 (9): 1659–1671. doi:10.1016/S0893-6080(97)00011-7.
  9. ^ Anwani, Navin; Rajendran, Bipin (July 2015). "NormAD - Normalized Approximate Descent based supervised learning rule for spiking neurons". 2015 International Joint Conference on Neural Networks (IJCNN): 1–8. doi:10.1109/IJCNN.2015.7280618. ISBN 978-1-4799-1960-4. S2CID 14461638.
  10. ^ Anwani, Navin; Rajendran, Bipin (2020-03-07). "Training multi-layer spiking neural networks using NormAD based spatio-temporal error backpropagation". Neurocomputing. 380: 67–77. arXiv:1811.10678. doi:10.1016/j.neucom.2019.10.104. ISSN 0925-2312. S2CID 53762477.
  11. ^ Neftci, Emre O.; Mostafa, Hesham; Zenke, Friedemann (2019-05-03). "Surrogate Gradient Learning in Spiking Neural Networks". arXiv:1901.09948 [cs.NE].
  12. ^ Alnajjar, F.; Murase, K. (2008). "A simple Aplysia-like spiking neural network to generate adaptive behavior in autonomous robots". Adaptive Behavior. 14 (5): 306–324. doi:10.1177/1059712308093869. S2CID 16577867.
  13. ^ X Zhang; Z Xu; C Henriquez; S Ferrari (Dec 2013). Spike-based indirect training of a spiking neural network-controlled virtual insect. IEEE Decision and Control. pp. 6798–6805. CiteSeerX 10.1.1.671.6351. doi:10.1109/CDC.2013.6760966. ISBN 978-1-4673-5717-3. S2CID 13992150.
  14. ^ Tavanaei, Amirhossein; Ghodrati, Masoud; Kheradpisheh, Saeed Reza; Masquelier, Timothée; Maida, Anthony (March 2019). "Deep learning in spiking neural networks". Neural Networks. 111: 47–63. arXiv:1804.08150. doi:10.1016/j.neunet.2018.12.002. PMID 30682710. S2CID 5039751.
  15. ^ Kim, Youngeun; Park, Hyoungseob; Moitra, Abhishek; Bhattacharjee, Abhiroop; Venkatesha, Yeshwanth; Panda, Priyadarshini (2022-01-31). "Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks?". arXiv:2202.03133 [cs.NE].
  16. ^ a b c Cramer, Benjamin; Stradmann, Yannik; Schemmel, Johannes; Zenke, Friedemann (2022). "The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks". IEEE Transactions on Neural Networks and Learning Systems. PP: 1–14. doi:10.1109/TNNLS.2020.3044364. ISSN 2162-237X. PMID 33378266. S2CID 229930899.
  17. ^ Abbott, L. F.; Nelson, Sacha B. (November 2000). "Synaptic plasticity: taming the beast". Nature Neuroscience. 3 (S11): 1178–1183. doi:10.1038/81453. PMID 11127835. S2CID 2048100.
  18. ^ "Hananel-Hazan/bindsnet: Simulation of spiking neural networks (SNNs) using PyTorch". GitHub. 31 March 2020.
  19. ^ Atiya, A.F.; Parlos, A.G. (May 2000). "New results on recurrent network training: unifying the algorithms and accelerating convergence". IEEE Transactions on Neural Networks. 11 (3): 697–709. doi:10.1109/72.846741. PMID 18249797.
  20. ^ Eshraghian, Jason K.; Ward, Max; Neftci, Emre; Wang, Xinxin; Lenz, Gregor; Dwivedi, Girish; Bennamoun, Mohammed; Jeong, Doo Seok; Lu, Wei D. (1 October 2021). "Training Spiking Neural Networks Using Lessons from Deep Learning". arXiv:2109.12894 [cs.NE].
  21. ^ Mozafari, Milad; Ganjtabesh, Mohammad; Nowzari-Dalini, Abbas; Masquelier, Timothée (12 July 2019). "SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks With at Most One Spike per Neuron". Frontiers in Neuroscience. 13: 625. arXiv:1903.02440. doi:10.3389/fnins.2019.00625. PMC 6640212. PMID 31354403.
  22. ^ Sutton RS, Barto AG (2002) Reinforcement Learning: An Introduction. Bradford Books, MIT Press, Cambridge, MA.
  23. ^ Boyn, S.; Grollier, J.; Lecerf, G. (2017-04-03). "Learning through ferroelectric domain dynamics in solid-state synapses". Nature Communications. 8: 14736. Bibcode:2017NatCo...814736B. doi:10.1038/ncomms14736. PMC 5382254. PMID 28368007.
  24. ^ admin. "Akida Neural Processor System-on-Chip". BrainChip. Retrieved 2020-10-12.
  25. ^ Xin Jin; Furber, Steve B.; Woods, John V. (2008). "Efficient modelling of spiking neural networks on a scalable chip multiprocessor". 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). pp. 2812–2819. doi:10.1109/IJCNN.2008.4634194. ISBN 978-1-4244-1820-6. S2CID 2103654.
  26. ^ a b "Neuromorphic Computing". Human Brain Project. https://www.humanbrainproject.eu/en/silicon-brains/
  27. ^ a b "Hardware: Available Systems". Human Brain Project. Retrieved 2020-05-10.
  28. ^ Markoff, John, A new chip functions like a brain, IBM says, New York Times, August 8, 2014, p.B1
  29. ^ Sayenko, Dimitry G.; Vette, Albert H.; Kamibayashi, Kiyotaka; Nakajima, Tsuyoshi; Akai, Masami; Nakazawa, Kimitaka (March 2007). "Facilitation of the soleus stretch reflex induced by electrical excitation of plantar cutaneous afferents located around the heel". Neuroscience Letters. 415 (3): 294–298. doi:10.1016/j.neulet.2007.01.037. PMID 17276004. S2CID 15165465.
  30. ^ "Neuromorphic Circuits With Neural Modulation Enhancing the Information Content of Neural Signaling | International Conference on Neuromorphic Systems 2020". doi:10.1145/3407197.3407204. S2CID 220794387. {{cite journal}}: Cite journal requires |journal= (help)
  31. ^ Schrauwen B, Campenhout JV (2004) Improving spikeprop: enhancements to an error-backpropagation rule for spiking neural networks. In: Proceedings of 15th ProRISC Workshop, Veldhoven, the Netherlands
  32. ^ Indiveri, Giacomo; Corradi, Federico; Qiao, Ning (2015). "Neuromorphic architectures for spiking deep neural networks". 2015 IEEE International Electron Devices Meeting (IEDM). pp. 4.2.1–4.2.4. doi:10.1109/IEDM.2015.7409623. ISBN 978-1-4673-9894-7. S2CID 1065450.
  33. ^ "Neuromorphic Computing - Next Generation of AI". Intel. Retrieved 2019-07-22.
  34. ^ Yamazaki, Tadashi; Tanaka, Shigeru (17 October 2007). "A spiking network model for passage-of-time representation in the cerebellum". European Journal of Neuroscience. 26 (8): 2279–2292. doi:10.1111/j.1460-9568.2007.05837.x. PMC 2228369. PMID 17953620.
  35. ^ Davies, Mike; Srinivasa, Narayan; Lin, Tsung-Han; Chinya, Gautham; Cao, Yongqiang; Choday, Sri Harsha; Dimou, Georgios; Joshi, Prasad; Imam, Nabil; Jain, Shweta; Liao, Yuyun; Lin, Chit-Kwan; Lines, Andrew; Liu, Ruokun; Mathaikutty, Deepak; McCoy, Steven; Paul, Arnab; Tse, Jonathan; Venkataramanan, Guruguhanathan; Weng, Yi-Hsin; Wild, Andreas; Yang, Yoonseok; Wang, Hong (January 2018). "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning". IEEE Micro. 38 (1): 82–99. doi:10.1109/MM.2018.112130359. S2CID 3608458.
  36. ^ "Intel's Neuromorphic System Hits 8 Million Neurons, 100 Million Coming by 2020". IEEE Spectrum. https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/intels-neuromorphic-system-hits-8-million-neurons-100-million-coming-by-2020.amp.html
  37. ^ Ponulak, F.; Kasinski, A. (2010). "Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification and spike-shifting". Neural Comput. 22 (2): 467–510. doi:10.1162/neco.2009.11-08-901. PMID 19842989. S2CID 12572538.
  38. ^ Newman et al. 1998
  39. ^ Bohte et al. 2002a
  40. ^ Belatreche et al. 2003
  41. ^ Pfister, Jean-Pascal; Toyoizumi, Taro; Barber, David; Gerstner, Wulfram (June 2006). "Optimal Spike-Timing-Dependent Plasticity for Precise Action Potential Firing in Supervised Learning". Neural Computation. 18 (6): 1318–1348. arXiv:q-bio/0502037. Bibcode:2005q.bio.....2037P. doi:10.1162/neco.2006.18.6.1318. PMID 16764506. S2CID 6379045.
  42. ^ Bohte, et al. (2002b)

External links