Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks that exchange data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computers is expected to solve problems at the exascale, performing 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power, and they will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to store and process large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and to provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community give their own view of the current state and future challenges of each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
ISSN: 2634-4386
Neuromorphic Computing and Engineering is a multidisciplinary, open access journal publishing cutting-edge research on the design, development and application of artificial neural networks and systems from both a hardware and computational perspective. For detailed information about subject coverage, see the About the journal section.
Free for readers. All article publication charges are currently paid by IOP Publishing.
Dennis V Christensen et al 2022 Neuromorph. Comput. Eng. 2 022501
Matteo Cucchi et al 2022 Neuromorph. Comput. Eng. 2 032002
This manuscript serves a specific purpose: to give readers from fields such as materials science, chemistry, or electronics an overview of how to implement a reservoir computing (RC) experiment with their own material system. Introductory literature on the topic is rare, and the vast majority of reviews present the basics of RC while taking for granted concepts that may be nontrivial to someone unfamiliar with the machine learning field (see for example Lukoševičius (2012 Neural Networks: Tricks of the Trade (Berlin: Springer) pp 659–686)). This is unfortunate considering the large pool of material systems that show nonlinear behavior and short-term memory and that may be harnessed to design novel computational paradigms. RC offers a framework for computing with material systems that circumvents typical problems arising when implementing traditional, fully fledged feedforward neural networks in hardware, such as the need for minimal device-to-device variability and for control over each unit/neuron and connection. Instead, one can use a random, untrained reservoir in which only the output layer is optimized, for example with linear regression. In the following, we highlight the potential of RC for hardware-based neural networks, the advantages over more traditional approaches, and the obstacles to overcome for their implementation. Preparing a high-dimensional nonlinear system as a well-performing reservoir for a specific task is not as easy as it seems at first sight. We hope this tutorial will lower the barrier for scientists attempting to exploit their nonlinear systems for computational tasks typically carried out in the fields of machine learning and artificial intelligence. A simulation tool to accompany this paper is available online.
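The "random, untrained reservoir with a trained linear readout" recipe described in this abstract can be sketched in a few lines. The reservoir size, spectral-radius scaling, ridge regularization, and the toy phase-shift task below are illustrative assumptions, not details from the tutorial itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random, untrained reservoir: only the linear readout below is ever trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1: fading memory

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.array([u_t]) + W @ x)  # nonlinear state update
        states.append(x)
    return np.array(states)

# Toy task: predict a phase-shifted sine from its unshifted version.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t), np.sin(t + 0.5)
X = run_reservoir(u)

# Train ONLY the output layer, via ridge regression (first 100 steps = washout).
lam = 1e-6
Xw, yw = X[100:], y[100:]
W_out = np.linalg.solve(Xw.T @ Xw + lam * np.eye(n_res), Xw.T @ yw)
pred = X @ W_out
```

In a hardware experiment, `run_reservoir` would be replaced by driving the physical system and recording its responses; only the ridge-regression step remains in software.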
Sen Lu and Abhronil Sengupta 2024 Neuromorph. Comput. Eng. 4 024004
Spike-timing-dependent plasticity (STDP) is an unsupervised learning mechanism for spiking neural networks that has received significant attention from the neuromorphic hardware community. However, scaling such local learning techniques to deeper networks and large-scale tasks has remained elusive. In this work, we investigate a Deep-STDP framework where a rate-based convolutional network, that can be deployed in a neuromorphic setting, is trained in tandem with pseudo-labels generated by the STDP clustering process on the network outputs. We achieve 24.56% higher accuracy and 3.5 × faster convergence speed at iso-accuracy on a 10-class subset of the Tiny ImageNet dataset in contrast to a k-means clustering approach.
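For readers unfamiliar with STDP itself, a minimal pair-based trace formulation is sketched below. The amplitudes and time constant are assumed textbook-style values, not parameters from the Deep-STDP framework:

```python
import numpy as np

# Pair-based STDP with exponential traces; constants are illustrative.
A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # trace time constant (ms)

def stdp_update(w, pre_spikes, post_spikes, dt=1.0):
    """Evolve one synaptic weight over a pair of binary spike trains."""
    x_pre = x_post = 0.0
    decay = np.exp(-dt / tau)
    for pre, post in zip(pre_spikes, post_spikes):
        x_pre *= decay
        x_post *= decay
        if pre:                      # pre fires: depress by recent post activity
            x_pre += 1.0
            w -= A_minus * x_post
        if post:                     # post fires: potentiate by recent pre activity
            x_post += 1.0
            w += A_plus * x_pre
        w = min(max(w, 0.0), 1.0)    # keep the weight in [0, 1]
    return w

# Causal pairing (pre at t=5 ms, post at t=10 ms) strengthens the synapse.
pre = np.zeros(50, dtype=bool); pre[5] = True
post = np.zeros(50, dtype=bool); post[10] = True
w_causal = stdp_update(0.5, pre, post)
```

Reversing the two trains (post before pre) produces depression instead, which is the temporal, event-driven locality that makes STDP attractive for neuromorphic hardware.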
Kevin Hunter et al 2022 Neuromorph. Comput. Eng. 2 034004
In principle, sparse neural networks should be significantly more efficient than traditional dense networks. Neurons in the brain exhibit two types of sparsity; they are sparsely interconnected and sparsely active. These two types of sparsity, called weight sparsity and activation sparsity, when combined, offer the potential to reduce the computational cost of neural networks by two orders of magnitude. Despite this potential, today's neural networks deliver only modest performance benefits using just weight sparsity, because traditional computing hardware cannot efficiently process sparse networks. In this article we introduce Complementary Sparsity, a novel technique that significantly improves the performance of dual sparse networks on existing hardware. We demonstrate that we can achieve high performance running weight-sparse networks, and we can multiply those speedups by incorporating activation sparsity. Using Complementary Sparsity, we show up to 100× improvement in throughput and energy efficiency performing inference on FPGAs. We analyze scalability and resource tradeoffs for a variety of kernels typical of commercial convolutional networks such as ResNet-50 and MobileNetV2. Our results with Complementary Sparsity suggest that weight plus activation sparsity can be a potent combination for efficiently scaling future AI models.
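The two-orders-of-magnitude arithmetic behind combined weight and activation sparsity can be illustrated directly; the sketch below shows the multiplicative saving, not the Complementary Sparsity packing scheme itself (the sparsity levels and layer size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# A layer with ~90% weight sparsity fed ~90%-sparse activations.
W = rng.normal(size=(256, 256))
W[rng.random(W.shape) > 0.1] = 0.0      # keep roughly 10% of the weights
x = rng.normal(size=256)
x[rng.random(256) > 0.1] = 0.0          # keep roughly 10% of the activations

# Only pairs where BOTH weight and activation are nonzero contribute,
# so the useful work is ~10% x 10% = ~1% of the dense multiply count.
active = np.flatnonzero(x)              # indices of nonzero activations
y_sparse = W[:, active] @ x[active]     # skip all-zero columns entirely

y_dense = W @ x                         # reference dense computation
useful = np.count_nonzero(W[:, active]) # multiplies that actually matter
total = W.size
```

The article's contribution is making hardware actually realize this saving, by packing complementary sparse kernels so existing dense units stay fully utilized.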
T M Kamsma et al 2024 Neuromorph. Comput. Eng. 4 024003
Fluidic iontronics is emerging as a distinctive platform for implementing neuromorphic circuits, characterised by its reliance on the same aqueous medium and ionic signal carriers as the brain. Drawing upon recent theoretical advancements in both iontronic spiking circuits and in dynamic conductance of conical ion channels, which form fluidic memristors, we expand the repertoire of proposed neuronal spiking dynamics in iontronic circuits. Through a modelled circuit containing channels that carry a bipolar surface charge, we extract phasic bursting, mixed-mode spiking, tonic bursting, and threshold variability, all with spike voltages and frequencies within the typical range for mammalian neurons. These features are possible due to the strong dependence of the typical conductance memory retention time on the channel length, enabling timescales varying from individual spikes to bursts of multiple spikes within a single circuit. These advanced forms of neuronal-like spiking support the exploration of aqueous iontronics as an interesting platform for neuromorphic circuits.
Raz Halaly and Elishai Ezra Tsur 2024 Neuromorph. Comput. Eng. 4 024006
Model predictive control (MPC) is a prominent control paradigm providing accurate state prediction and subsequent control actions for intricate dynamical systems, with applications ranging from autonomous driving to star tracking. However, there is an apparent discrepancy between a model's mathematical description and its behavior in real-world conditions, affecting its performance in real time. In this work, we propose a novel neuromorphic (brain-inspired) spiking neural network for continuous adaptive non-linear MPC. Utilizing real-time learning, our design significantly reduces dynamic error and augments model accuracy, while simultaneously addressing unforeseen situations. We evaluated our framework using real-world scenarios in autonomous driving, implemented in a physics-driven simulation. We tested our design with various vehicles (from a Tesla Model 3 to an ambulance) experiencing malfunctioning and swift steering scenarios. We demonstrate significant improvements in dynamic error rate compared with a traditional MPC implementation, with up to 89.15% median prediction error reduction with 5 spiking neurons and up to 96.08% with 5000 neurons. Our results may pave the way for novel applications in real-time control and stimulate further studies in the adaptive control realm with spiking neural networks.
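The receding-horizon loop at the heart of any MPC scheme (predict over a horizon, pick the best action sequence, apply only its first step, replan) can be sketched on a toy system. The 1-D point-mass model, constant-action search, and cost weights below are assumptions for illustration, not the paper's spiking controller:

```python
import numpy as np

# Receding-horizon control of a 1-D point mass; state = [position, velocity].
dt, horizon = 0.1, 10
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
candidates = np.linspace(-1.0, 1.0, 21)     # coarse grid of accelerations

def rollout_cost(x, u_seq, target):
    """Predict the trajectory under a control sequence and score it."""
    cost = 0.0
    for u in u_seq:
        x = A @ x + B * u
        cost += (x[0] - target) ** 2 + 0.01 * u**2
    return cost

def mpc_step(x, target):
    """Search constant-action sequences; apply only the first action."""
    return min(candidates, key=lambda u: rollout_cost(x, [u] * horizon, target))

x = np.array([0.0, 0.0])
for _ in range(100):                        # closed loop: replan at every step
    u = mpc_step(x, 1.0)
    x = A @ x + B * u
```

The paper's point is that when `A` and `B` drift from reality (malfunctions, changed dynamics), a fixed model degrades; their spiking network adapts the model online instead.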
James B Aimone et al 2022 Neuromorph. Comput. Eng. 2 032003
Though neuromorphic computers have typically targeted applications in machine learning and neuroscience ('cognitive' applications), they have many computational characteristics that are attractive for a wide variety of computational problems. In this work, we review the current state-of-the-art for non-cognitive applications on neuromorphic computers, including simple computational kernels for composition, graph algorithms, constrained optimization, and signal processing. We discuss the advantages of using neuromorphic computers for these different applications, as well as the challenges that still remain. The ultimate goal of this work is to bring awareness to this class of problems for neuromorphic systems to the broader community, particularly to encourage further work in this area and to make sure that these applications are considered in the design of future neuromorphic systems.
Erika Covi et al 2022 Neuromorph. Comput. Eng. 2 012002
The shift towards a distributed computing paradigm, where multiple systems acquire and process data in real time, leads to challenges that must be met. In particular, it is becoming increasingly essential to compute on the edge of the network, close to the sensors collecting data. The requirements of a system operating on the edge are very tight: power efficiency, low area occupation, fast response times, and on-line learning. Brain-inspired architectures such as spiking neural networks (SNNs) use artificial neurons and synapses that simultaneously perform low-latency computation and internal-state storage with very low power consumption. Still, SNN implementations mainly rely on standard complementary metal-oxide-semiconductor (CMOS) technologies, making them unable to meet the aforementioned constraints. Recently, emerging technologies such as memristive devices have been investigated to flank CMOS technology and overcome the power and memory constraints of edge computing systems. In this review, we focus on ferroelectric technology. Thanks to their CMOS-compatible fabrication process and extreme energy efficiency, ferroelectric devices are rapidly affirming themselves as one of the most promising technologies for neuromorphic computing. Therefore, we discuss their role in emulating neural and synaptic behaviors in an area- and power-efficient way.
Jonathan Timcheck et al 2023 Neuromorph. Comput. Eng. 3 034005
A critical enabler for progress in neuromorphic computing research is the ability to transparently evaluate different neuromorphic solutions on important tasks and to compare them to state-of-the-art conventional solutions. The Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge), inspired by the Microsoft DNS Challenge, tackles a ubiquitous and commercially relevant task: real-time audio denoising. Audio denoising is likely to reap the benefits of neuromorphic computing due to its low-bandwidth, temporal nature and its relevance for low-power devices. The Intel N-DNS Challenge consists of two tracks: a simulation-based algorithmic track to encourage algorithmic innovation, and a neuromorphic hardware (Loihi 2) track to rigorously evaluate solutions. For both tracks, we specify an evaluation methodology based on energy, latency, and resource consumption in addition to output audio quality. We make the Intel N-DNS Challenge dataset scripts and evaluation code freely accessible, encourage community participation with monetary prizes, and release a neuromorphic baseline solution which shows promising audio quality, high power efficiency, and low resource consumption when compared to Microsoft NsNet2 and a proprietary Intel denoising model used in production. We hope the Intel N-DNS Challenge will hasten innovation in neuromorphic algorithms research, especially in the area of training tools and methods for real-time signal processing. We expect the winners of the challenge will demonstrate that for problems like audio denoising, significant gains in power and resources can be realized on neuromorphic devices available today compared to conventional state-of-the-art solutions.
Paul Hueber et al 2024 Neuromorph. Comput. Eng. 4 024008
Designing processors for implantable closed-loop neuromodulation systems presents a formidable challenge owing to the constrained operational environment, which requires low latency and high energy efficiency. Previous benchmarks have provided limited insight into power consumption and latency. This study therefore introduces algorithmic metrics that capture the potential and limitations of neural decoders for closed-loop intra-cortical brain–computer interfaces under energy and hardware constraints. It benchmarks common decoding methods for predicting a primate's finger kinematics from motor cortex activity and explores their suitability for low-latency, energy-efficient neural decoding. The study found that ANN-based decoders provide superior decoding accuracy but require high latency and many operations to effectively decode neural signals. Spiking neural networks (SNNs) have emerged as a solution that bridges this gap, achieving competitive decoding performance within sub-10 ms latency while utilizing a fraction of the computational resources. These distinctive advantages make neuromorphic SNNs highly suitable for the challenging closed-loop neuromodulation environment. Their capacity to balance decoding accuracy and operational efficiency offers immense potential for reshaping the landscape of neural decoders, fostering greater understanding, and opening new frontiers in closed-loop intra-cortical human-machine interaction.
Nishith N Chakraborty et al 2024 Neuromorph. Comput. Eng. 4 024010
In neuromorphic computing, different learning mechanisms are being widely adopted to improve the performance of a specific application. Among these techniques, spike-timing-dependent plasticity (STDP) stands out as one of the most favored. STDP is simply managed by the temporal information of an event, which is biologically inspired. However, most of the prior works on STDP are focused on circuit implementation or software simulation for performance evaluation. Previous works also lack a comparative analysis of the performances of different STDP implementations. This study aims to provide a comprehensive assessment of STDP, centering on the performance across various applications such as classification (static and temporal datasets), control, and reservoir computing. Different applications necessitate distinct STDP configurations to achieve optimal performance with the neuroprocessor. Additionally, this work introduces an application-specific integrated circuit design of STDP circuitry. The design is based on current-controlled memristive synapse principles and utilizes 65 nm CMOS technology from IBM. The detailed presentation includes circuitry specifics, layout, and performance parameters such as energy consumption and design area.
Zhaoqi Chen et al 2024 Neuromorph. Comput. Eng. 4 024009
A neuromorphic simultaneous localization and mapping (SLAM) system shows potential for more efficient implementation than its traditional counterpart. At the same time, a silicon neuromorphic model of spatial encoding neurons could provide insights into the functionality of, and dynamics between, each group of cells, especially when realistic factors, including variations and imperfections in neural movement encoding, challenge existing hypothetical models for localization. We demonstrate a mixed-mode implementation of spatial encoding neurons, including theta cells, egocentric place cells, and the typical allocentric place cells. Together, they form a biologically plausible network that can reproduce the localization functionality of place cells observed in rodents. The system consists of a theta chip with 128 theta cell units and an FPGA implementing 4 networks for egocentric place cell formation, providing tracking capability on an 11 by 11 place cell grid. Experimental results validate the robustness of our model when suffering from as much as 18% deviation from the mathematical model of theta cells, induced by parameter variations in analog circuits. We provide a model for implementing dynamic neuromorphic SLAM systems for dynamic-scale mapping of cluttered environments, even when subject to significant errors in sensory measurements and real-time analog computation. We also suggest a robust approach for the network topology of spatial cells that can mitigate neural non-uniformity, and we provide a hypothesis for the function of grid cells and the existence of egocentric place cells.
Florent De Geeter et al 2024 Neuromorph. Comput. Eng. 4 024007
Spiking neural networks (SNNs) are a type of artificial neural network in which communication between neurons consists only of events, also called spikes. This property allows such networks to perform asynchronous and sparse computations and therefore to drastically decrease energy consumption when run on specialized hardware. However, training such networks is known to be difficult, mainly because of the non-differentiability of the spike activation, which prevents the use of classical backpropagation: state-of-the-art SNNs are usually derived from biologically inspired neuron models, to which machine learning methods are then applied for training. Today, research on SNNs focuses on the design of training algorithms whose goal is to obtain networks that compete with their non-spiking versions on specific tasks. In this paper, we attempt the symmetrical approach: we modify the dynamics of a well-known, easily trainable type of recurrent neural network (RNN) to make it event-based. This new RNN cell, called the spiking recurrent cell, therefore communicates using events, i.e. spikes, while being completely differentiable. Vanilla backpropagation can thus be used to train any network made of such RNN cells. We show that this new network can achieve performance comparable to other types of spiking networks on the MNIST benchmark and its variants, Fashion-MNIST and Neuromorphic-MNIST. Moreover, we show that this new cell makes the training of deep spiking networks achievable.
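The general idea of making a recurrent update event-like while keeping it differentiable can be illustrated with a toy cell. This is NOT the paper's spiking recurrent cell; the soft threshold, reset rule, and all constants below are assumptions chosen only to show that spike emission and reset can be written entirely with smooth operations, so vanilla backpropagation would apply:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical event-like recurrent cell: a leaky hidden state emits a smooth
# "spike" s = sigmoid(k * (h - theta)) and is reset in proportion to it.
# Every operation is differentiable, unlike a hard threshold.
def src_step(h, x, W_x, W_h, theta=1.0, k=10.0, alpha=0.9):
    h = alpha * h + np.tanh(W_x @ x + W_h @ h)  # leaky recurrent update
    s = sigmoid(k * (h - theta))                # soft, differentiable spike
    h = h * (1.0 - s)                           # soft reset after spiking
    return h, s

# Drive the cell with random inputs and record its event-like outputs.
rng = np.random.default_rng(2)
W_x = 0.5 * rng.normal(size=(8, 4))
W_h = 0.1 * rng.normal(size=(8, 8))
h = np.zeros(8)
spikes = []
for _ in range(50):
    h, s = src_step(h, rng.normal(size=4), W_x, W_h)
    spikes.append(s)
S = np.array(spikes)
```

Most entries of `S` sit near 0 between threshold crossings, giving the sparse, event-driven behavior that the paper exploits, while gradients still flow through `s` and the reset.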
Chang-Jae Beak et al 2024 Neuromorph. Comput. Eng. 4 022001
Filamentary organic memristors are promising synaptic components for neuromorphic systems in wearable electronics. In these organic memristors, metallic conductive filaments (CFs) form via electrochemical metallization under electrical stimuli, which gives rise to their resistive switching characteristics. To realize bio-inspired computing systems using organic memristors, it is essential to effectively engineer CF growth so as to emulate complete synaptic functions in the device. Here, the fundamental principles underlying the operation of organic memristors and the parameters governing CF growth are discussed. Additionally, recent studies that focused on controlling CF growth to replicate synaptic functions, including reproducible resistive switching, continuous conductance levels, and synaptic plasticity, are reviewed. Finally, upcoming research directions in the field of organic memristors for wearable smart computing systems are suggested.
Lyes Khacef et al 2023 Neuromorph. Comput. Eng. 3 042001
Understanding how biological neural networks carry out learning using spike-based local plasticity mechanisms can lead to the development of real-time, energy-efficient, and adaptive neuromorphic processing systems. A large number of spike-based learning models have recently been proposed following different approaches. However, it is difficult to assess if these models can be easily implemented in neuromorphic hardware, and to compare their features and ease of implementation. To this end, in this survey, we provide an overview of representative brain-inspired synaptic plasticity models and mixed-signal complementary metal–oxide–semiconductor neuromorphic circuits within a unified framework. We review historical, experimental, and theoretical approaches to modeling synaptic plasticity, and we identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules. We provide a common definition of a locality principle based on pre- and postsynaptic neural signals, which we propose as an important requirement for physical implementations of synaptic plasticity circuits. Based on this principle, we compare the properties of these models within the same framework, and describe a set of mixed-signal electronic circuits that can be used to implement their computing principles, and to build efficient on-chip and online learning in neuromorphic processing systems.
Xuan Hu et al 2023 Neuromorph. Comput. Eng. 3 022003
Topological solitons are exciting candidates for the physical implementation of next-generation computing systems. As these solitons are nanoscale and can be controlled with minimal energy consumption, they are ideal to fulfill emerging needs for computing in the era of big data processing and storage. Magnetic domain walls (DWs) and magnetic skyrmions are two types of topological solitons that are particularly exciting for next-generation computing systems in light of their non-volatility, scalability, rich physical interactions, and ability to exhibit non-linear behaviors. Here we summarize the development of computing systems based on magnetic topological solitons, highlighting logical and neuromorphic computing with magnetic DWs and skyrmions.
Maria Elias Pereira et al 2023 Neuromorph. Comput. Eng. 3 022002
Neuromorphic computing has been gaining momentum for the past decades and has been appointed as the replacement for the outworn technology in conventional computing systems. Artificial neural networks (ANNs) can be composed of memristor crossbars in hardware and perform in-memory computing and storage in a power-, cost- and area-efficient way. In optoelectronic memristors (OEMs), resistive switching (RS) can be controlled by both optical and electronic signals. Using light as a synaptic weight modulator provides a high-speed, non-destructive method that does not depend on electrical wires and solves crosstalk issues. In particular, in artificial visual systems, OEMs can act as the artificial retina, combining optical sensing with high-level image processing. Therefore, several efforts have been made by the scientific community to develop OEMs that can meet the demands of each specific application. In this review, the recent advances in inorganic OEMs are summarized and discussed. Engineering the device structure provides the means to manipulate RS performance, and thus a comprehensive analysis is performed of the memristor material structures proposed so far and their specific characteristics. Moreover, their potential applications in logic gates, ANNs and, in more detail, artificial visual systems are also assessed, taking into account the figures of merit described so far.
Pankaj Sharma and Jan Seidel 2023 Neuromorph. Comput. Eng. 3 022001
Mimicking and replicating the function of biological synapses with engineered materials is a challenge for the 21st century. The field of neuromorphic computing has recently seen significant developments, and new concepts are being explored. One of these approaches uses topological defects, such as domain walls in ferroic materials, especially ferroelectrics, that can naturally be addressed by electric fields to alter and tailor their intrinsic or extrinsic properties and functionality. Here, we review concepts of neuromorphic functionality found in ferroelectric domain walls and give a perspective on future developments and applications in low-energy, agile, brain-inspired electronics and computing.
Hejda et al
Spiking neurons and neural networks constitute a fundamental building block for brain-inspired computing, which is poised to benefit significantly from photonic hardware implementations. In this work, we experimentally investigate an interconnected optical neuromorphic system based on an ultrafast spiking vertical cavity surface emitting laser (VCSEL) neuron and a silicon photonics (SiPh) integrated micro-ring resonator (MRR). We experimentally demonstrate two different functional arrangements of these devices: first, we show that MRR weight banks can be used in conjunction with the spiking VCSEL-neurons to perform amplitude weighting of sub-ns optical spiking signals. Second, we show that a continuously firing VCSEL-neuron can be directly modulated using a locking signal propagated through a single weighting MRR, and we utilize this functionality to perform optical spike firing rate-coding via thermal tuning of the MRR. Given the significant track record of both integrated weight banks and photonic VCSEL-neurons, we believe these results demonstrate the viability of combining these two classes of devices for use in functional neuromorphic photonic systems.
Alam et al
Anomaly detection in real-time using autoencoders implemented on edge devices is exceedingly challenging due to limited hardware, energy, and computational resources. We show that these limitations can be addressed by designing an autoencoder with low-resolution non-volatile memory-based synapses and employing an effective quantized neural network learning algorithm. We further propose nanoscale ferromagnetic racetracks with engineered notches hosting magnetic domain walls (DW) as exemplary non-volatile memory based autoencoder synapses, where limited state (5-state) synaptic weights are manipulated by spin orbit torque (SOT) current pulses to write different magnetoresistance states. The performance of anomaly detection of the proposed autoencoder model is evaluated on the NSL-KDD dataset. Limited resolution and DW device stochasticity aware training of the autoencoder is performed, which yields comparable anomaly detection performance to the autoencoder having floating-point precision weights. While the limited number of quantized states and the inherent stochastic nature of DW synaptic weights in nanoscale devices are typically known to negatively impact the performance, our hardware-aware training algorithm is shown to leverage these imperfect device characteristics to generate an improvement in anomaly detection accuracy (90.98%) compared to accuracy obtained with floating-point synaptic weights that are extremely memory intensive. Furthermore, our DW-based approach demonstrates a remarkable reduction of at least three orders of magnitude in weight updates during training compared to the floating-point approach, implying significant reduction in operation energy for our method. This work could stimulate the development of extremely energy efficient non-volatile multi-state synapse-based processors that can perform real-time training and inference on the edge with unsupervised data.
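The "limited-resolution, device-aware training" idea in this abstract can be sketched with a straight-through estimator: float shadow weights receive gradients, while the forward pass only ever sees a few quantized states, optionally perturbed to mimic device stochasticity. The 5 levels, noise model, and toy regression task below are assumptions for illustration, not the paper's 5-state domain-wall device or autoencoder:

```python
import numpy as np

LEVELS = np.linspace(-1.0, 1.0, 5)      # assumed 5 synaptic device states

def quantize(w):
    """Snap each weight to the nearest of the 5 device states."""
    idx = np.abs(w[..., None] - LEVELS).argmin(axis=-1)
    return LEVELS[idx]

def noisy_quantize(w, rng, p_flip=0.05):
    """Stochasticity-aware variant: occasionally land one state off."""
    idx = np.abs(w[..., None] - LEVELS).argmin(axis=-1)
    jump = rng.integers(-1, 2, size=idx.shape) * (rng.random(idx.shape) < p_flip)
    return LEVELS[np.clip(idx + jump, 0, len(LEVELS) - 1)]

# Straight-through estimator on a toy regression: the gradient of the loss
# w.r.t. the quantized weights is applied directly to the float shadow weights.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=4)        # float "shadow" weights
x = np.array([0.5, -1.0, 0.8, 0.3])
target = 1.0
for _ in range(200):
    wq = noisy_quantize(w, rng)         # forward pass sees device states only
    y = wq @ x
    w -= 0.01 * 2.0 * (y - target) * x  # straight-through gradient step
```

Training through the noisy forward pass is what makes the learned weights robust to the device imperfections, which is the effect the article reports exploiting.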