Three levels of memristor-based computing


Research on memristor-based computing is of great value in the post-Moore era because it enables new forms of computers by exploiting a new physical state variable, the device conductance (or resistance), as the carrier of information. This capability stems from the nonvolatility of the device conductance states and the abrupt switching between them. Accordingly, memristor-based computing may be categorized into three levels:

Level 1: Only the static conductance states are used; memristor devices serve as programmable synapses, usually integrated into a crossbar array (Fig. 1a).

Level 2: In addition to the static conductance states, the dynamic switching from one state to another is also exploited, mimicking an integrate-and-fire neuron but in a nonvolatile manner (Fig. 1b).

Level 3: Both the static conductance states and the bidirectional switching between the two states (set and reset) are exploited, resulting in a new concept of neuron, the hysteretic neuron (Fig. 1c), proposed in our recent work “An emergent attractor network in a passive resistive switching circuit”.

Around 2010, soon after the memristor concept was linked to resistive switching phenomena, the community at the intersection of electronic devices and neuromorphic computing realized that memristors with programmable conductance states could be used as artificial synapses. Over the past dozen years, memristor arrays have therefore been widely used to accelerate matrix-vector multiplication (MVM), which is the backbone of many algorithms, from neural networks to equation solving in scientific computing. This field is now well developed and close to practical application, and it is usually classified within the framework of analog in-memory computing (IMC).
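
For readers who like to see the idea in code, the sketch below is a minimal, idealized simulation of how a crossbar performs MVM in one step: a conductance matrix holds the weights, the input vector is applied as row voltages, and each column wire sums the resulting currents via Kirchhoff’s current law. All device values here are illustrative assumptions, not measured data.

```python
import numpy as np

# Minimal, idealized sketch of analog MVM in a memristor crossbar.
# Each conductance G[i, j] encodes a synaptic weight. Applying the input
# vector as row voltages V[i] makes every column wire sum the currents
# I[j] = sum_i V[i] * G[i, j] (Kirchhoff's current law), i.e. I = G^T V.
# All values below are illustrative, not measured device data.

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))  # conductances (siemens)
V = rng.uniform(0.0, 0.2, size=n_rows)               # read voltages (volts)

I = G.T @ V  # one-step matrix-vector multiplication in the analog domain
print("column currents (A):", I)
```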

In 2010, the team led by R. Stanley Williams at HP Labs suggested that the dynamic switching of memristors can be used for Boolean logic computation. They demonstrated that a simple circuit consisting of two parallel memristors and one load resistor inherently performs as a material implication (IMP) logic gate, where the logical output is given by the state transition of a memristor device conditional on the input states. Because both input and output are represented by nonvolatile conductance states of memristors, the logic gate is called ‘stateful’. This concept contrasts with the MVM case, where memristor states represent only one of the two input operands while the other input and the output are encoded as voltages. As a result, some regard stateful logic as a fully in-memory computing scheme, whereas MVM is only partially so.
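
As a rough illustration of the stateful IMP idea (not the actual circuit equations), the behavioural sketch below abstracts the two-memristor circuit into a single rule: the target device Q is set unless the conditioning device P is already in its high-conductance state, which clamps the shared node below Q’s set threshold. The thresholding behaviour is an assumption made for illustration.

```python
# Behavioural sketch of stateful implication (IMP) logic with two memristors
# P and Q sharing a load resistor, after the HP Labs scheme. States are binary
# (0 = low conductance, 1 = high conductance). The rule below abstracts the
# circuit: when P is conducting it clamps the shared node, so the voltage
# across Q stays below its (assumed) set threshold and Q keeps its state;
# otherwise Q is set to 1. The result equals (NOT P) OR Q.

def imp_gate(p: int, q: int) -> int:
    """New state of Q after applying the conditioning/set voltage pulses."""
    return q if p == 1 else 1

for p in (0, 1):
    for q in (0, 1):
        print(f"P={p}, Q={q}  ->  Q' = P IMP Q = {imp_gate(p, q)}")
```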

In 2018, when I was a postdoc in Daniele Ielmini’s group at Politecnico di Milano, we figured out that a circuit composed of three parallel memristors and one resistor (Fig. 1b) is a single-layer perceptron neural network. In this network, neurons are represented by the memristors, and synaptic weights are encoded by the external voltages applied to them. In particular, the abrupt set transition from the low-conductance state to the high-conductance state can be viewed as a nonlinear activation function inherent to the memristor neuron. Since both input and output are memristor states, just as in stateful logic, this concept is termed the stateful neural network. Thanks to the powerful representational capability of the perceptron, the stateful neural network can implement all linearly separable logic gates, such as NAND and NOR, by applying different combinations of voltages. Linearly inseparable gates, e.g., XOR, can be implemented with a two-layer perceptron, accomplished through two sequential operations of the same circuit.
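
A behavioural sketch of this idea, with weights and biases chosen purely for illustration (they are not the voltage amplitudes used in the experiments), might look as follows: the abrupt set transition is modelled as a step activation, and different “voltage” choices realize different linearly separable gates.

```python
import numpy as np

# Behavioural sketch of the stateful neural network: memristor states act as
# binary neurons, applied voltages act as synaptic weights, and the abrupt set
# transition is modelled as a step activation. The weight/bias values that
# realize NAND and NOR below are illustrative assumptions, not the voltage
# amplitudes used in the original experiments.

def step(x: float) -> int:
    """Step activation mimicking the abrupt set transition above threshold."""
    return 1 if x > 0 else 0

def perceptron_gate(weights, bias, inputs) -> int:
    """Single-layer perceptron: output neuron state after one circuit operation."""
    return step(float(np.dot(weights, inputs)) + bias)

# Linearly separable gates, realized by different weight/bias (voltage) choices.
NAND = lambda a, b: perceptron_gate([-1.0, -1.0], 1.5, [a, b])
NOR = lambda a, b: perceptron_gate([-1.0, -1.0], 0.5, [a, b])

# XOR is not linearly separable; it is composed from NAND gates here purely
# for illustration (the paper realizes it as a two-layer perceptron through
# two sequential operations of the same circuit).
XOR = lambda a, b: NAND(NAND(a, NAND(a, b)), NAND(b, NAND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NAND={NAND(a, b)}  NOR={NOR(a, b)}  XOR={XOR(a, b)}")
```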

It is easy to notice that the stateful neural network uses only the set transition in the positive voltage direction, while the reset transition in the negative voltage direction is overlooked. When only the set transition is considered, a memristor device can be regarded as a McCulloch-Pitts (MP) neuron. A natural question is: if we consider both set and reset transitions, what kind of neuron is a memristor, and what can it be used for? This is the question I asked myself after joining Peking University in 2020.

In our recent work published in Nature Communications, we show that the nonvolatile bipolar switching events of a memristor can be regarded as a unique nonlinear activation function characterized by a hysteresis loop, and that the same circuit composed of parallel memristors can be formulated as an attractor network (Fig. 1c). In this model, binary memristor devices play the role of artificial neurons, as in the stateful neural network but with a different activation function, while the pairwise voltage differences define an anti-symmetric weight matrix. In the circuit, the relationship between the memristor devices is governed by Kirchhoff’s current law. Intuitively, the switching of one device changes the overall distribution of potentials in the circuit, which acts as feedback to trigger further switching events, updating the state vector of the devices until it eventually stabilizes at an attractor state.
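
To convey the flavour of these dynamics, here is a toy simulation under assumed thresholds and voltages (not the exact model of the paper): binary device states are updated asynchronously with a hysteretic activation, and the anti-symmetric weight matrix is built from pairwise voltage differences.

```python
import numpy as np

# Toy sketch of the emergent attractor network: binary device states s_i in
# {-1, +1}, an anti-symmetric weight matrix built from pairwise voltage
# differences, and a hysteretic activation with separate set/reset thresholds.
# Thresholds, voltages and the update rule are illustrative assumptions meant
# to convey the idea, not the exact model of the paper.

rng = np.random.default_rng(1)

V = np.array([0.9, 0.3, -0.2, -0.6])       # applied voltages (arbitrary units)
W = V[:, None] - V[None, :]                # anti-symmetric: W[i, j] = V_i - V_j
theta_set, theta_reset = 0.5, -0.5         # assumed hysteresis thresholds
s = rng.choice([-1, 1], size=len(V))       # random initial device states

def hysteretic_update(s, i):
    """Flip device i only if its local field crosses a set/reset threshold."""
    h = W[i] @ s                           # local field from the potential distribution
    if s[i] == -1 and h > theta_set:
        return 1                           # set transition
    if s[i] == 1 and h < theta_reset:
        return -1                          # reset transition
    return s[i]                            # inside the hysteresis window: no switching

# Asynchronous relaxation: each switching event feeds back on the others
# until no device wants to switch (an attractor state). A sweep cap keeps
# this toy safe even if the assumed parameters were poorly chosen.
for _ in range(100):
    changed = False
    for i in rng.permutation(len(s)):
        new_state = hysteretic_update(s, i)
        if new_state != s[i]:
            s[i] = new_state
            changed = True
    if not changed:
        break

print("final state:", s)
```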

We successfully constructed an energy function for this network, showing that every switching event in the circuit decreases the energy. Due to the nonvolatile hysteretic activation, the energy change required for a bit flip in this network is thresholded, unlike in the classic Hopfield network. This allows more stable states to be stored in the circuit, making it a highly compact and efficient solution for associative memory. We fabricated HfOx-based memristor devices for experimental demonstration, and the results validate the network dynamics (convergence towards stable states) and their modulation by external voltages for 3-neuron and 4-neuron circuits.
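
The following small enumeration illustrates, for an assumed weight matrix and hysteresis window, why the thresholded flip condition admits more stable states than a zero-threshold (classic Hopfield-style) rule on the same anti-symmetric weights; the parameters are illustrative only and not taken from the paper.

```python
import numpy as np
from itertools import product

# Illustration of why the hysteretic (thresholded) flip condition admits more
# stable states than a zero-threshold rule on the same anti-symmetric weights.
# Weight matrix and thresholds are arbitrary assumptions for illustration.

V = np.array([0.9, 0.3, -0.2, -0.6])
W = V[:, None] - V[None, :]                # anti-symmetric pairwise voltage differences
theta_set, theta_reset = 0.5, -0.5         # assumed hysteresis window

def stable_hysteretic(s):
    h = W @ s
    # A device at -1 holds unless its field exceeds theta_set; at +1 unless
    # its field drops below theta_reset (the hysteresis window).
    return np.all(((s == -1) & (h <= theta_set)) | ((s == 1) & (h >= theta_reset)))

def stable_zero_threshold(s):
    h = W @ s
    # Classic rule: a neuron is stable only if its state agrees with the sign
    # of its local field.
    return np.all(s * h >= 0)

states = [np.array(c) for c in product([-1, 1], repeat=len(V))]
print("stable states, hysteretic rule:    ", sum(stable_hysteretic(s) for s in states))
print("stable states, zero-threshold rule:", sum(stable_zero_threshold(s) for s in states))
```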
