A novel bimodal dataset for inner speech recognition

Inner speech recognition can contribute towards speech prostheses. We introduce a novel bimodal inner-speech dataset that can improve recognition performance by using two complementary modalities: functional magnetic resonance imaging (fMRI) and electroencephalography (EEG).
Published in Research Data

Inner speech recognition is a challenging research area in which the state of the art is currently limited to recognition of closed-word vocabularies. It can assist people who have no other way to communicate with their friends and family, e.g., people with locked-in syndrome.

Our recent article in Scientific Data presents the first publicly available bimodal dataset containing EEG and fMRI data acquired non-simultaneously during inner-speech production. The novel idea behind it is the combination of two modalities with complementary features in order to achieve better performance in inner speech recognition: EEG is known for its high temporal resolution, while fMRI offers high spatial resolution. Combining the two is therefore expected to benefit inner speech recognition compared to using either modality alone.

To support the previous claim, we conducted a supplementary study in which we applied machine learning methods to our bimodal dataset and provide baseline classification results. For binary classification, our study reports a mean accuracy of 71.72% when combining the two modalities (EEG and fMRI), compared to 62.81% with EEG alone and 56.17% with fMRI alone. The same improvement can be observed for word classification (8 classes): 30.29% with the combination versus 22.19% (EEG) and 17.50% (fMRI). These classification results demonstrate that combining EEG with fMRI is a promising direction for inner speech decoding.
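To illustrate the kind of pipeline described above, here is a minimal sketch of feature-level (early) fusion for a binary task. It is not the study's actual method: the synthetic arrays stand in for real EEG/fMRI feature vectors, and the logistic-regression classifier and concatenation-based fusion are assumptions chosen for simplicity.

```python
# Hypothetical early-fusion sketch: concatenate per-trial EEG and fMRI
# feature vectors and compare cross-validated accuracy against each
# modality alone. Synthetic data only; not the published pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200
y = rng.integers(0, 2, n_trials)  # binary inner-speech labels

# Synthetic features: a weak class-dependent shift in both modalities
eeg = rng.normal(size=(n_trials, 64)) + 0.4 * y[:, None]
fmri = rng.normal(size=(n_trials, 128)) + 0.4 * y[:, None]

def cv_accuracy(X, y):
    """5-fold cross-validated accuracy of a linear classifier."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()

acc_eeg = cv_accuracy(eeg, y)
acc_fmri = cv_accuracy(fmri, y)
acc_fused = cv_accuracy(np.hstack([eeg, fmri]), y)  # fuse by concatenation

print(f"EEG alone:  {acc_eeg:.2f}")
print(f"fMRI alone: {acc_fmri:.2f}")
print(f"Fused:      {acc_fused:.2f}")
```

Concatenation is only one fusion strategy; decision-level (late) fusion, where each modality gets its own classifier and the predictions are combined, is a common alternative when the modalities are recorded non-simultaneously.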

The research was funded by the Grants for Excellent Research Projects Proposals of SRT.ai 2022, Sweden. The bimodal dataset was acquired at the Stockholm University Brain Imaging Centre (SUBIC).

