Understanding and emulating the functionalities of our brain is among the most fascinating challenges of our society. In parallel with progress in neuroscience, the development of neuromorphic systems able to emulate human brain functionalities is attracting growing interest. The development of these brain-inspired systems aims to create intelligent machines, with the vision of also shedding new light on how our brain works, following Feynman's principle of "what I cannot create, I do not understand".
In this framework, memristive devices organized in large regular arrays have been demonstrated for the implementation of neuromorphic data processing and brain-inspired computing.1 However, this approach does not emulate the topology and behavior of biological neuronal circuits, where the principle of self-organization regulates both structure and function. In biological systems, moreover, memory, learning and even intelligence are the result of an emergent behavior that arises from the complex interactions between neurons and synapses.
Inspired by biological systems, by the concept of "Nanoarchitectonics" proposed by Aono2 and by the pioneering works of the groups of Gimzewski3, Boland4 and Kuncic5, we started to investigate the memristive behavior of self-organized systems based on randomly dispersed nanowires. We began by experimentally investigating the resistive switching mechanisms in single isolated nanowires (NWs) and single NW junctions.6,7 We then investigated the emergent behavior of the network, characterized by network-wide synaptic plasticity, short-term memory and the capability of processing spatiotemporal input signals thanks to the mutual interaction between network elements.7
Due to COVID-19 restrictions, with limited access to laboratories, we were forced to stop the experimental activities. In this period, after a few months spent studying, we realized that a model able to capture the main features of the emergent behavior would be crucial for supporting the experimental implementation of unconventional computing paradigms in these designless networks. In this framework, we developed a physics-based dynamic model that, in accordance with experimental results, was able to simulate the functional synaptic connectivity, with nonlinear dynamics and fading-memory properties, of the nanowire network.
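For readers curious about the flavour of such a model, the sketch below shows a minimal potentiation/depression balance equation of the kind that can govern a single memristive NW junction. It is only an illustrative sketch: parameter values and function names are assumptions chosen for demonstration, not the ones used in our released code (available on GitHub, see below).

```python
import numpy as np

# Minimal sketch of a single memristive junction with competing potentiation
# and depression processes. All parameter values below are illustrative
# placeholders, not fitted values.
def update_junction_state(g, V, dt, kp0=1e-4, eta_p=10.0, kd0=0.5, eta_d=1.0):
    """Advance the normalized conductance state g (between 0 and 1) over a
    time step dt under an applied junction voltage V, following
    dg/dt = kp(V) * (1 - g) - kd(V) * g."""
    kp = kp0 * np.exp(eta_p * V)    # potentiation rate, enhanced by voltage
    kd = kd0 * np.exp(-eta_d * V)   # depression rate, suppressed by voltage
    g_inf = kp / (kp + kd)          # steady-state value for constant V
    return g_inf + (g - g_inf) * np.exp(-(kp + kd) * dt)  # exact step update

def junction_conductance(g, g_min=1e-3, g_max=1.0):
    """Map the internal state variable to an effective junction conductance."""
    return g_min * (1.0 - g) + g_max * g
```

When the applied voltage is removed, the depression term dominates and g relaxes back towards a low-conductance state: this spontaneous relaxation is the ingredient behind the fading-memory (short-term plasticity) behavior of the network.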
Thanks to this model developed during lockdown, we started to explore in simulations different strategies for the implementation of reservoir computing: we ended up with a configuration that exploits the nanowire network as a "physical" reservoir, where the same electrodes act both as reservoir inputs and outputs. In the end, we implemented the setup in hardware and observed good agreement between experimental results and simulations. For a fully memristive hardware implementation, we decided to exploit the long-term memory of resistive switching devices for the read-out layer needed to analyze the reservoir output. Then, in order to investigate the scalability and versatility of the proposed approach, we exploited an extended simulated NW network to classify the complete Modified National Institute of Standards and Technology (MNIST) handwritten-digit dataset and to predict the Mackey–Glass time series.
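As an aside, the Mackey–Glass benchmark itself is easy to reproduce: the target signal is the solution of a delay differential equation, and a simple Euler integration is enough to generate training and test data for the prediction task. The sketch below uses the standard chaotic parameter choice τ = 17; the step size and function name are our own illustrative choices, not taken from our released code.

```python
import numpy as np

def mackey_glass(n_steps, tau=17.0, beta=0.2, gamma=0.1, n=10, dt=0.1, x0=1.2):
    """Generate a Mackey-Glass time series by Euler integration of
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)."""
    delay = int(round(tau / dt))           # delay expressed in time steps
    x = np.full(n_steps + delay, x0)       # constant history as initial condition
    for t in range(delay, n_steps + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau**n) - gamma * x[t])
    return x[delay:]                       # discard the artificial history

# Example: 5000 points of the chaotic regime used as prediction target
series = mackey_glass(5000)
```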
We have implemented in hardware (in materia) the reservoir computing paradigm by exploiting a fully memristive neuromorphic system where computation is divided in two parts: i) the NW network physical reservoir and ii) the read-out layer based on resistive switching devices. The NW network nonlinearly maps a spatiotemporal input, in the form of pulse trains (temporal domain) applied to different locations of the network (spatial domain), into a feature space that is then classified by the read-out through physical multiplication by Ohm's law and physical summation by Kirchhoff's law. Note that the read-out layer is the only part that has to be trained, providing the system with a reduced training cost. This represents a generic computational platform where multiple tasks can be implemented by properly training a new read-out, since the read-out associated with a new task can be learned independently of what was learned in previous tasks.
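To make the role of the read-out more concrete, the sketch below shows how the physical multiply-and-accumulate works (output currents are weighted sums of the applied feature voltages) and how a read-out weight matrix can be trained in software, here by ridge regression, a common choice in reservoir computing. The function names, the regression-based training and the omitted mapping of signed weights onto device conductances are illustrative assumptions, not a description of our exact experimental procedure.

```python
import numpy as np

def readout_currents(G, v_features):
    """Physical read-out: G is the (n_features, n_classes) conductance matrix
    of the resistive switching devices, v_features the voltages encoding the
    reservoir state. Ohm's law gives the products G_ij * V_i and Kirchhoff's
    law sums them along each output column: I = G^T V."""
    return G.T @ v_features

def train_readout(X, Y, reg=1e-3):
    """Software training of the read-out by ridge regression.
    X: (n_samples, n_features) reservoir states collected at the electrodes;
    Y: (n_samples, n_classes) one-hot targets. Returns W with X @ W ≈ Y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(n_features), X.T @ Y)
```

In a fully memristive implementation, the trained weights can then be mapped onto the conductance states of the read-out devices, so that classification reduces to reading the output currents.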
We envision that this low-cost nanoarchitecture can be explored for a wide range of applications, including motion identification, speech recognition and the processing of spatiotemporal sensory inputs for robotics, with the aim of realizing intelligent systems that combine different hardware technologies towards the next generation of artificial intelligence.
The full paper can be found at https://www.nature.com/articles/s41563-021-01099-9. The data of our work are available on Zenodo (https://doi.org/10.5281/zenodo.5153335), while the code is available on GitHub (https://github.com/MilanoGianluca/Nanowire_Network_Reservoir_Computing).
References
- Xia, Q. & Yang, J. J. Memristive crossbar arrays for brain-inspired computing. Nat. Mater. 18, 309–323 (2019).
- Aono, M. & Ariga, K. The Way to Nanoarchitectonics and the Way of Nanoarchitectonics. Adv. Mater. 28, 989–992 (2016).
- Stieg, A. Z. et al. Emergent Criticality in Complex Turing B-Type Atomic Switch Networks. Adv. Mater. 24, 286–293 (2012).
- Manning, H. G. et al. Emergence of winner-takes-all connectivity paths in random nanowire networks. Nat. Commun. 9, 3219 (2018).
- Hochstetter, J. et al. Avalanches and edge-of-chaos learning in neuromorphic nanowire networks. Nat. Commun. 12, 4008 (2021).
- Milano, G. et al. Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities. Nat. Commun. 9, 5151 (2018).
- Milano, G. et al. Brain-Inspired Structural Plasticity through Reweighting and Rewiring in Multi-Terminal Self-Organizing Memristive Nanowire Networks. Adv. Intell. Syst. 2, 2000096 (2020).