Today’s data deluge, coupled with recent advances in AI, has put a strain on traditional computing architectures. Neural networks in particular perform vast numbers of vector-matrix multiplications (VMMs), which are inefficient on conventional von Neumann computing systems. Whether deployed in the cloud, on servers, or at the edge in IoT devices, making neural networks more efficient in hardware brings a net operational cost benefit. There is therefore a strong incentive to create AI processors that operate at high speed and with low power consumption [1]. However, maintaining high computational efficiency is challenging even for custom digital processors, so novel computing paradigms such as computation in analogue memristor crossbars [2] have emerged as a promising alternative.
Memristor crossbars can naturally implement the core VMM operation. Weights are encoded as memristor conductances, input values as voltage levels applied at the crossbar wordlines, and the result is the current accumulated at each bitline. Besides performing a VMM in one shot, a strong benefit comes from keeping the weights stationary, which reduces memory accesses.
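The mapping above can be sketched numerically. This is an idealised toy model (all names and values are our own, and real crossbars suffer from line resistance and device non-linearity): conductances store the weight matrix, wordline voltages carry the input vector, and Kirchhoff's current law performs the accumulation at each bitline.

```python
import numpy as np

def crossbar_vmm(v, g):
    """One-shot VMM in an ideal crossbar.

    v : wordline voltages (volts), shape (rows,)
    g : device conductances (siemens), shape (rows, cols)
    Each bitline current is I_j = sum_i v_i * g_ij (Kirchhoff's
    current law summing the Ohm's-law currents of one column).
    """
    return v @ g

rng = np.random.default_rng(0)
g = rng.uniform(1e-9, 1e-6, size=(4, 3))  # assumed conductance range
v = np.array([0.1, 0.2, 0.0, 0.3])        # read voltages (volts)
i_out = crossbar_vmm(v, g)                # bitline currents (amps)
```

Because the weights stay resident in the array as conductances, every inference reuses them in place rather than fetching them from a separate memory.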
In practice, memristive devices usually require high operating currents to maintain the linearity and accuracy of the computation: finite on-chip line resistance introduces errors, so reducing the device current makes larger crossbars feasible. In our paper, we propose the ferroelectric tunnel junction (FTJ) memristor [3] as a new type of VMM-capable device, which differentiates itself from other memristor types through its ultra-low, programmable operating current. The device’s current-voltage characteristic is non-linear; however, we show that it exhibits a near-exponential dependence at higher voltages, which we correct (linearise) using standard logarithmic amplifiers. We thus exploit a device characteristic (in this case, non-linearity) that is usually considered undesirable, an approach that has also proven successful for us in other cases [4,5]. Using the proposed linearisation method, the FTJ can have an effective linear conductance orders of magnitude lower than that of other memristive devices. Furthermore, the operating-current dynamic range depends on device area, giving an extra degree of freedom to tailor the device to a particular application.
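The linearisation idea can be illustrated with a toy model (our own illustrative parameters, not the paper's device data): if the FTJ current is near-exponential in the applied voltage, I ≈ I0·exp(V/V0), then an ideal logarithmic amplifier producing Vout = Vt·ln(I/Iref) yields an output that is linear in V.

```python
import numpy as np

# Assumed toy parameters (not measured device values):
I0, V0 = 1e-12, 0.05    # near-exponential device model I = I0*exp(V/V0)
Vt, Iref = 0.026, 1e-12  # ideal log-amp: Vout = Vt*ln(I/Iref)

V = np.linspace(0.2, 0.6, 50)     # read-voltage sweep (volts)
I = I0 * np.exp(V / V0)           # non-linear FTJ current
Vout = Vt * np.log(I / Iref)      # log-amp output

# Composition: Vout = Vt*ln(I0/Iref) + (Vt/V0)*V, i.e. linear in V
slope = np.polyfit(V, Vout, 1)[0]  # should equal Vt/V0
```

The exponential non-linearity and the logarithm cancel, so the device-plus-amplifier pair behaves as an effective linear element whose slope is set by the ratio Vt/V0.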
Through this circuit-device interaction, we demonstrate linear VMM computation with feasible accuracy in FTJ memristor crossbar arrays, at ultra-low currents. This opens up the possibility of implementing very large neural network layers (at least fully connected ones) in memristive crossbars, at sizes comparable to those required by current commercial software implementations such as image classification networks, while taking full advantage of the crossbar architecture. We also simulate a large fully connected network and show that an FTJ implementation, despite a moderate speed penalty, can be drastically more computationally efficient than implementations based on previous memristor types.
Read the paper here.
[1] Xu, X. et al. Scaling for edge inference of deep neural networks. Nat. Electron. 1, 216-222 (2018).
[2] Xia, Q. & Yang, J. J. Memristive crossbar arrays for brain-inspired computing. Nat. Mater. 18, 309-323 (2019).
[3] Fujii, S. et al. First demonstration and performance improvement of ferroelectric HfO2-based resistive switch with low operation current and intrinsic diode property. In Symp. VLSI Tech. (VLSI) (IEEE, 2016).
[4] Berdan, R. et al. In-memory reinforcement learning with moderately-stochastic conductance switching of ferroelectric tunnel junctions. In Symp. VLSI Tech. (VLSI) (IEEE, 2019).
[5] Ota, K. et al. Performance maximisation of in-memory reinforcement learning with variability-controlled Hf1-xZrxO2 ferroelectric tunnel junctions. In International Electron Devices Meeting (IEDM) (IEEE, 2019).