Signed languages are produced with the hands, face, and body, and are perceived primarily through vision. The signed mode makes language accessible through the visual sense, the channel that remains fully available to its users. Signed languages are not as pervasive a conversational medium as spoken languages, owing to a history of institutional suppression of the former and the linguistic hegemony of the latter. Without prior knowledge of a sign language, non-signers find it difficult to receive and understand signed communication. The result is a communication barrier between signers and non-signers that technology-mediated approaches could help lower.
In our recent work published in Nature Electronics, and highlighted by Science, we show that a wearable sign-to-speech translation system, assisted by machine learning, can accurately translate the hand gestures of American Sign Language (ASL) into speech. The system consists of yarn-based stretchable sensor arrays (YSSA) and a wireless printed circuit board (PCB). Owing to its structural design and soft materials, the YSSA conforms to the skin of a human finger in both its released and stretched states. Analogue signals generated by triboelectrification and electrostatic induction, arising from sign language components such as hand configurations, hand motions, and facial expressions, are converted to the digital domain by the system to implement sign-to-speech translation.
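To make the signal path concrete, the sketch below shows one simple way the digitized sensor stream might be split into candidate gesture windows before recognition. It is a minimal illustration only, not the firmware or processing used in the paper; the sampling rate, channel count, threshold, and function names are all hypothetical.

```python
import numpy as np

SAMPLE_RATE_HZ = 100   # assumed sampling rate; the actual PCB rate may differ
N_CHANNELS = 5         # illustrative: one yarn-based sensor per finger

def segment_gestures(signal, threshold=0.1, min_len=20):
    """Split a multi-channel digitized recording into candidate gesture windows.

    `signal` has shape (n_samples, N_CHANNELS). A window is kept while the
    summed channel activity stays above `threshold`; all parameter values
    here are placeholders, not the paper's settings.
    """
    activity = np.abs(signal).sum(axis=1)   # overall triboelectric activity
    active = activity > threshold
    windows, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i                        # gesture onset
        elif not flag and start is not None:
            if i - start >= min_len:         # discard spurious blips
                windows.append(signal[start:i])
            start = None
    if start is not None and len(signal) - start >= min_len:
        windows.append(signal[start:])       # gesture still active at end
    return windows
```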

Our wearable sign-to-speech translation system offers good mechanical and chemical durability, high sensitivity, quick response times, and excellent stretchability. To illustrate its capabilities, a total of 660 ASL hand gestures were acquired and successfully analysed with the assistance of a machine learning algorithm. The system achieves a high recognition rate of 98.63% and a short recognition time of less than one second. We envision that it could not only improve communication between signers and non-signers but also help people who want to learn sign language.
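As a rough illustration of how such a recognition stage can be evaluated, the sketch below extracts fixed-length features from segmented gesture windows and scores a generic classifier on held-out data. This is an assumption-laden stand-in, not the machine learning algorithm reported in the paper: the featurization, the support vector classifier, and the train/test split are all illustrative choices.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def featurize(window):
    """Collapse a variable-length (n_samples, n_channels) window into a
    fixed-length vector: per-channel mean, std, and peak amplitude
    (an illustrative feature set, not the paper's)."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(window).max(axis=0)])

def train_and_score(gesture_windows, labels):
    """Train a generic classifier on featurized gestures and report
    held-out accuracy. `gesture_windows` and `labels` are placeholders
    standing in for the 660 acquired ASL gestures."""
    X = np.stack([featurize(w) for w in gesture_windows])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```

In a complete sign-to-speech pipeline, the predicted label for each window would then be passed to a text-to-speech engine to produce the spoken output.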
This work was covered by over 200 media outlets worldwide, including ABC, CNN, NBC, UCLA, NPR, and ScienceDaily. The current research of the Wearable Bioelectronics Group at UCLA focuses on nanotechnology and bioelectronics for energy, sensing, and therapy applications in the form of smart textiles, wearables, and body area networks. If you would like to join us in working on these challenging scientific projects, please send an email to jun.chen@ucla.edu to express your interest.