The World Health Organization estimates that the vast majority of people affected by hearing loss live in low- and middle-income countries. Hearing loss is particularly harmful for neuro-development if it goes undetected in early childhood. As a result, it is common practice in high-income countries (e.g., the USA) to adopt guidelines for universal infant hearing screening and to require newborn hearing tests for all babies born in the hospital.
So how is newborn hearing screening performed today? Of course, unlike a healthy adult, we can’t ask a newborn whether they hear different audible tones. Instead, current practice relies on an ingenious observation: a healthy cochlea generates sounds of its own, called otoacoustic emissions. We normally think of the ear as something that just receives sound, like a microphone. But interestingly, the cochlea can also generate sounds. These sounds are produced by the motion of the cochlea's sensory hair cells as they energetically respond to auditory stimulation. So in a quiet room, a very sensitive device can pick up these faint sounds.
Detecting these emissions requires sensitive hardware, which contributes to test equipment costing thousands of dollars. This cost is cited as one of the reasons for limited or no hearing screening in low- and middle-income countries (e.g., India, Kenya). In these settings, getting access to hearing assessment and equipment may often require travel to an urban setting and potentially long wait times.
To address this problem, we designed an affordable earphone-based newborn screening tool where we use commodity earphones and microphones. We connect them to a smartphone where we run our algorithms to detect the weak sounds from the cochlea.
How it works
We designed a low-cost probe from off-the-shelf components for around $10. The main components of our probe are the earphones, a microphone, and some tubing and attachments. The smartphone itself can be a budget model or purchased second-hand for $40.
We stimulate the cochlea using a dual-tone acoustic signal. We send two tones at frequencies f1 and f2 into the ear canal. These sounds travel into the cochlea, which has a non-linear response at the frequency 2f1 - f2 (labelled as DPOAE in the figure below). A microphone at the probe head listens for these sounds and sends the signal to a smartphone for further processing. These sounds, however, are very soft and below the human threshold of hearing. We implement real-time algorithms on the smartphone to detect these very faint sounds and reduce noise.
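The dual-tone probing idea can be sketched in a few lines of Python. The parameters below (sample rate, tone frequencies, noise level) are illustrative assumptions, not the actual device settings, and a simple memoryless cubic nonlinearity stands in for the cochlea; its third-order term is enough to create energy at 2f1 - f2:

```python
import numpy as np

fs = 48_000              # sample rate in Hz (assumed, not the device's actual rate)
f1, f2 = 2_000, 2_400    # stimulus tones; f2/f1 = 1.2 is a commonly used DPOAE ratio
f_dp = 2 * f1 - f2       # expected distortion-product frequency: 1600 Hz

t = np.arange(fs) / fs   # one second of samples
stimulus = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Toy stand-in for the cochlea: a cubic nonlinearity whose third-order
# term creates energy at 2*f1 - f2, plus some measurement noise.
response = stimulus + 0.05 * stimulus**3
response += 0.01 * np.random.default_rng(0).standard_normal(fs)

# Look for the distortion-product component in the recorded spectrum.
spectrum = np.abs(np.fft.rfft(response * np.hanning(fs)))
freqs = np.fft.rfftfreq(fs, d=1 / fs)
bin_dp = int(np.argmin(np.abs(freqs - f_dp)))
noise_floor = np.median(spectrum)       # crude broadband noise estimate
snr = spectrum[bin_dp] / noise_floor    # emission stands well above the floor
print(f"DPOAE detected at {freqs[bin_dp]:.0f} Hz")
```

A real screener would additionally average many short recording frames to pull the faint emission out of ambient noise, and compare the DPOAE level against a per-frequency noise floor before declaring a pass.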
We ran our study across three different sites and recruited 201 patient ears, ranging from 1 week to 20 years of age. To determine the ground truth of whether a patient had hearing loss, we used a combination of clinical and exam history review, which included the patient’s audiogram information and auditory brainstem response test, with a clinician making the final call on whether the child should be classified as having hearing loss. On our entire cohort, our sensitivity was 100% and our specificity was 89%. This is similar to the performance numbers for the FDA-cleared device.
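For reference, sensitivity and specificity reduce to simple ratios over the screening outcomes. The counts below are made up purely for illustration (they are not the study's raw data); they just show how figures like 100% sensitivity and 89% specificity arise:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: every ear with hearing loss is flagged (no false
# negatives), and 11 of 100 normal-hearing ears are incorrectly flagged.
sens, spec = sensitivity_specificity(tp=10, fn=0, tn=89, fp=11)
print(sens, spec)  # 1.0 0.89
```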
For decades, ear diagnostic hardware, and more broadly audiology as a field, has not seen significant advances. This has led to the continued use of hardware designed decades ago, where equipment still costs thousands of dollars, making it inaccessible to most of the world. At the same time, since the introduction of the first iPhone in 2007, we have seen rapid development of mobile computing technologies like smartphones and earphones that have incredible economies of scale. This has led to high-quality microphones and speakers being incorporated into mobile devices that are also ubiquitous across the world.
Leveraging this technological trend, our group has been working on using mobile devices like smartphones and earbuds to democratize audiology. In 2019, we introduced a smartphone-based system that uses a simple paper cone attached to the phone to detect middle ear fluid, a biomarker for ear infections. Earlier this year, we designed a smartphone attachment that can perform tympanometry to help detect middle-ear disorders. Here, we show how to detect otoacoustic emissions from the cochlea and use them for newborn hearing screening.
By combining all these mobile tools, we hope to make accessible medical devices for diagnosing middle- and inner-ear conditions. We started the TUNE project with the goal of helping create universal newborn and early childhood hearing screening in Kenya. This is a broad team spanning the University of Washington, the University of Nairobi and the Kenya Ministry of Health. We look forward to partnering with local organizations in other countries to make newborn hearing screening more accessible across the world.
Check out our Nature Biomedical Engineering paper for more details.