With the rapid growth of the Internet of Things (IoT), our world is becoming increasingly interconnected. From wearable health monitors to autonomous vehicles, billions of sensors are continuously generating massive amounts of data. However, beneath the surface of this data ocean lies a hidden inefficiency that traditional sensing architectures can no longer accommodate.
In conventional sensing systems, analog signals from the sensors are amplified, converted into digital signals (0s and 1s), and delivered to a digital processor. This happens continuously, regardless of whether anything important has actually been captured, resulting in substantial energy overhead and severe latency limitations. Imagine if our skin continuously transmitted a “no contact detected” signal to the brain; such redundant reporting would overwhelm our neural processing with noise, not to mention the energy wasted on transmission.
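To make that overhead concrete, here is a toy comparison (the pressure trace and counts are purely illustrative, not measurements from our study): a conventionally polled sensor transmits a message every sampling interval, while an event-driven one transmits only when the reading changes.

```python
# Toy pressure trace: mostly idle, with two brief touches (illustrative only).
trace = [0] * 40 + [3, 7, 7, 2] + [0] * 40 + [5, 5, 1] + [0] * 13  # 100 samples

# Conventional pipeline: every sample is digitized and transmitted.
polled_messages = len(trace)

# Event-driven pipeline: transmit only when the reading changes.
event_messages = sum(1 for prev, cur in zip([0] + trace, trace) if cur != prev)

print(polled_messages, event_messages)  # the idle samples dominate the polled count
```

Even in this tiny example, the event-driven scheme sends a handful of messages where the polled scheme sends a hundred; the gap only widens for sparse real-world touch data.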
At the University of Massachusetts Amherst, our team asked: can we build a sensing system that works not just hard, but smart?
In our recent study published in Nature Sensors, we present a complete event-based neuromorphic sensing system consisting of a flexible haptic sensor array, event-triggered circuitry, and a memristor-based system on a chip (SoC). With this system, we demonstrate low-latency, energy-efficient analog sensing at the hardware level.
Inspiration: from Skin to Brain
Our design was inspired by how biological tactile sensing works: when we touch an object, our skin converts the sensed pressure changes into electrical pulses, which are then processed by the brain. To translate this biological efficiency into hardware, we built the sensing system from three key modules:
- Tactile sensors: We used a flexible piezoelectric sensor array, made by our collaborators at Tampere University in Finland, for sensing. Piezoelectric materials (such as the polyvinylidene fluoride, PVDF, used in our study) have a natural "event-driven" property: they generate voltage spikes only when deformed (such as during a press or release); a static sensor produces no output. Because the piezoelectric material generates these spikes itself, the sensing process requires no external power, resulting in an energy-efficient sensing module.
- Pre-processing circuitry: This is the bridge between the raw sensing signals and the computing chip, which we call the event-triggered circuitry. Instead of simply passing on the electrical spikes from the sensors, our custom circuitry converts them into decaying voltage waveforms. A higher voltage means a more recent touch, while a lower voltage indicates an earlier one, allowing the computing hardware to read the history of motion across the sensor array.
- Memristive SoC for computing: To process the time surfaces, we used a memristive SoC developed by TetraMem (MX 100). Unlike traditional chips that separate memory and processing, this SoC uses memristor crossbar arrays to perform computation directly within memory. The chip can therefore process the analog time surfaces directly, performing clustering and classification with minimal energy-hungry data digitization and movement.
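For intuition on the in-memory computing step: a memristor crossbar performs a matrix-vector multiplication in a single analog operation. Input voltages drive the rows, the conductance stored at each cross-point applies Ohm's law, and Kirchhoff's current law sums the resulting currents along each column. A minimal sketch in plain Python (the conductance values are hypothetical, and this is the physical principle, not the MX 100's actual programming interface):

```python
# Hypothetical 3x2 conductance matrix (siemens): each entry is one
# memristor sitting at a row-column crossing of the crossbar.
G = [[1.0e-6, 5.0e-6],
     [2.0e-6, 1.0e-6],
     [4.0e-6, 3.0e-6]]

# Analog input: row voltages (volts), e.g. a time surface read out directly.
v = [0.2, 0.5, 0.1]

# In hardware this is one analog step: Ohm's law (I = G * V) at every
# cross-point, with Kirchhoff's current law summing each column.
i_out = [sum(G[r][c] * v[r] for r in range(len(v))) for c in range(len(G[0]))]

print(i_out)  # column currents in amperes
```

The key point is that the loop above happens all at once in the analog domain: no analog-to-digital conversion or memory fetch is needed before the multiply-accumulate result appears as a current.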
The Breakthrough: Converting Time into Space
One of the biggest challenges in neuromorphic computing is making a static chip understand dynamic motion. Traditional chips process data frame by frame, which is energy-consuming and memory-hungry. Our solution is a hardware-level time-to-space conversion: the pre-processing circuitry captures the sequence of a finger’s touches and converts it into a spatial map of varying voltage levels.
This map, known as the time surface, is constructed strictly within a small local neighborhood around each active event, allowing the system to ignore the many unchanged pixels and focus computation on active signals. This approach increases data sparsity and substantially reduces the computing load. By mapping temporal events into a spatial pattern, the chip can recognize not just what letter is written, but how it was written (e.g., the stroke direction).
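As a rough software sketch of the idea (the decay constant, window size, and function names are our own illustrative choices, not the exact behavior of our circuitry), a local time surface can be built by letting each pixel's value decay with the time since its last event, then cropping a small window around the newest event:

```python
import math

DECAY_TAU = 0.05   # assumed decay time constant, seconds
RADIUS = 1         # local neighborhood half-width (3x3 window)

def local_time_surface(last_event_time, t_now, cx, cy):
    """Crop a (2*RADIUS+1)^2 window around the active event at (cx, cy).
    Each cell holds a voltage-like value that decays with elapsed time,
    so higher values mean more recent touches."""
    surface = []
    for y in range(cy - RADIUS, cy + RADIUS + 1):
        row = []
        for x in range(cx - RADIUS, cx + RADIUS + 1):
            t = last_event_time.get((x, y))
            # Pixels that never fired stay at zero -- this is the sparsity win.
            row.append(0.0 if t is None else math.exp(-(t_now - t) / DECAY_TAU))
        surface.append(row)
    return surface

# Toy stroke: three touches sweeping left to right along one row.
events = {(0, 0): 0.00, (1, 0): 0.05, (2, 0): 0.10}
ts = local_time_surface(events, t_now=0.10, cx=1, cy=0)
# The center row reads low -> mid -> high, encoding the stroke direction.
```

A stroke in the opposite direction would produce the mirrored gradient, which is how a downstream classifier can tell how a letter was written, not just where.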
Experimentally, we achieved 86.67% accuracy on a 3-letter recognition task, and 91.83% accuracy in simulation on a larger dataset of handwritten digits with different stroke directions. Because we leverage both the event-based method and the analog computing platform, we reduced the energy-delay product during inference by more than 17 times compared with a state-of-the-art digital platform. This shows that high-performance intelligence can be achieved at the edge with much lower power and latency.
Outlook
This work demonstrates a new paradigm that fuses event-based sensing with analog in-memory computing. Further improvements in power efficiency and computing latency could be realized through on-chip integration of the hardware modules and fabrication of the system at an advanced technology node.
The potential applications are vast. One is robotic skin, where large-scale sensor networks let robots perceive their environment without excessive energy use. Another is wearable health, where intelligent patches monitor vital signs continuously and process data locally to preserve privacy and extend battery life. Because the conversion circuitry and computing module in our system are sensor-agnostic, the hardware could also be applied to neuroscience, enabling the decoding and processing of neuronal signals from probe arrays with single-neuron resolution.