Multisensory integration: bridging the gap between artificial and natural intelligence


To get us started, let me explain the concept of multisensory integration and where we stand on implementing it in hardware. In biological organisms, multisensory integration denotes the process by which information gathered from different sensory modalities, such as sight, hearing, touch, taste, and smell, is combined within the brain to construct a unified, coherent perception of the surrounding environment. Perhaps the best illustration is to imagine ourselves in a dark room, tasked with navigating a maze to get out. In the dark, we are completely reliant on our sense of touch to feel our way through the environment. Relying on touch alone, however, can lead to undesirable consequences, such as injury from dangerous obstacles scattered throughout the maze. In that situation, even a short-lived flash of light can significantly improve our chances of successful navigation. This is because our brain does a remarkable job of integrating weak sensory cues and enhancing them in such a way that the response to the combined sensory information is much greater than the sum of the responses to each cue alone.

In the realm of biology, it is important to note that combining individual sensory inputs does not amount to a simple 1 + 1 = 2; the integrated response is consistently much greater than the sum of its parts. This remarkable capacity is referred to as "super-additivity" and stands as one of the three fundamental characteristic features of multisensory integration. Furthermore, the combination of the faintest sensory signals yields the most substantial benefit, while the degree of multisensory integration decreases as the individual sensory inputs grow stronger. This phenomenon is known as the "inverse effectiveness effect," and it makes intuitive sense: multisensory integration is most useful precisely when the individual sensory cues are least pronounced. This inherent ability is widespread across the biological spectrum, providing species with a significant advantage for survival and for evading predators. Lastly, multisensory integration reaches its zenith when there is minimal delay between the sensory inputs, a phenomenon termed "temporal congruency."
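To make super-additivity and inverse effectiveness concrete, here is a minimal toy model in Python. The multiplicative gain function is my own illustrative assumption, not the measured response of any device or neuron; it is chosen only so that weak cues are amplified more than strong ones, which is enough to reproduce both signatures.

```python
# Toy model of multisensory enhancement. The gain function below is a
# hypothetical assumption for illustration, not a measured response.

def combined_response(v, t, k=1.0):
    """Hypothetical multisensory response to unimodal visual (v) and
    tactile (t) drives, each in [0, 1]. The gain exceeds 1 (giving
    super-additivity) and shrinks as the inputs grow stronger
    (giving inverse effectiveness)."""
    gain = 1.0 + k / (1.0 + v + t)  # larger gain for weaker cues
    return (v + t) * gain

def enhancement(v, t):
    """Percent enhancement of the combined response over the simple
    sum of the unimodal responses."""
    return 100.0 * (combined_response(v, t) - (v + t)) / (v + t)

print(f"weak cues:   {enhancement(0.1, 0.1):.1f}% enhancement")
print(f"strong cues: {enhancement(0.9, 0.9):.1f}% enhancement")
```

Running this, the faint-cue pair shows a much larger percentage enhancement than the strong-cue pair, even though both combined responses exceed the plain sum.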

These are the core characteristic features of multisensory integration, and it is worth noting that implementing this valuable capability is not easily achievable within current technological frameworks. In contemporary devices such as cars and smartphones, the different sensors largely do not communicate with one another. Each sensor functions independently, with one sensor's data neither affecting nor adjusting the output of another. Instead, every sensor streams its data to a central processing unit, which then applies various algorithms for tasks such as decision-making, driving up computational demands and storage requirements. This prevalent approach clearly falls short of the multisensory integration observed in the natural world, and the deficiency translates into significantly higher energy consumption than in biological counterparts. Given the growing demand for an ever-wider range of sensors, the need for a more thoughtful approach to data management becomes increasingly apparent.

Numerous investigations have highlighted the pivotal role of multisensory integration; however, most studies involving hardware implementation have fallen short of demonstrating its inherent characteristics comprehensively. In our study, we showcase the cross-modulation of the touch and vision senses using a triboelectric sensor for tactile signal reception and a monolayer MoS2 photo-memtransistor for visual cue processing, a configuration we have termed the "multisensory neuron" (MN). What distinguishes our study is not only the thorough demonstration of the characteristic attributes of multisensory integration on our MN platform but also our extensive exploration of the underlying mechanisms responsible for this unique phenomenon. Our investigation delves deeply into the causative factors and critical parameters essential for the successful large-scale implementation of this innovative paradigm.

In our study, we enabled direct communication between the tactile and visual sensors before transmitting their data to a central processing unit. With this approach, we observed behavior mirroring that of biological counterparts: response amplification was strongest when the sensory cues were weak and diminished as the individual cues gained strength, while always remaining higher than the sum of the respective unimodal responses. This provided compelling confirmation of both the "super-additive" nature and the "inverse effectiveness effect" intrinsic to multisensory integration. Moreover, we substantiated our findings by demonstrating "temporal congruency": multisensory integration is optimal when the time interval between the occurrence of the visual and tactile cues is minimized. Intriguingly, while our initial investigations involved analog signals, this marks a significant departure from biological counterparts, as the brain processes and transmits signals in the form of digital spike trains. This divergence prompted a deeper exploration of our model, leading to the implementation of a two-stage cascaded inverter consisting of four monolayer MoS2 photo-memtransistors. Linked to the tactile sensor, this inverter encodes and transmits the multisensory information as digital spike trains, mimicking biological neural signaling.

Our research endeavors did not conclude with experimental demonstrations alone. We subjected our results to rigorous analysis, culminating in the development of a model for an integrated network of multisensory neurons. This insightful model has provided us with a comprehensive understanding of the critical parameters to consider when designing a large-scale array of multisensory neurons. In an era characterized by an ever-expanding reliance on technology, where sensors play an indispensable role in domains such as artificial intelligence and modern gadgets, our approach offers a superior methodology for the management of these sensors, promising enhanced efficiency and functionality in this technologically pervasive landscape.
