How a machine learning contest shed light on a neurological mystery

Freezing of gait (FOG) is a perplexing problem that affects many but not all people who have Parkinson’s disease (PD); we do not understand when or why it occurs. FOG can also be quite bothersome: imagine “asking” your feet to move forward, but they “refuse”. That’s what happens with FOG (Fig. 1).

Freezing of gait (FOG) creates challenges for many people with Parkinson’s disease. When a person’s feet do not cooperate, FOG can lead to loss of independence, embarrassment, frustration, falls, and injuries; in severe cases, a wheelchair may be recommended. FOG is also very mysterious: under the same or nearly identical conditions, it sometimes occurs and sometimes does not. Often, people who experience frequent FOG at home or during the day show no FOG when they are examined by a clinician. New tools for objective, long-term, unobtrusive assessment of FOG are needed.

Measuring FOG accurately and objectively is also a challenge. However, with the collective input of members of the machine learning community, we have made good progress in automatically identifying the occurrence and severity of freezing of gait, potentially helping to unlock the broader neurological mystery.

One of the barriers impeding progress in the understanding and treatment of freezing of gait is the absence of well-validated tools that can objectively quantify when it occurs and rate its severity. The gold-standard approach is to videotape people who have, or may have, FOG as they carry out tasks designed to provoke it. The videos are then reviewed by trained experts and scored on a frame-by-frame basis to indicate each frame in which FOG occurred. This process is extremely time-consuming and labor-intensive, typically requiring two or more experts.

An emerging alternative approach is to use wearable sensors to automatically determine when FOG occurs. After the data is collected, an algorithm reviews the signals to identify each occurrence of FOG, and a total score is then computed (e.g., the percent of the trial duration spent in FOG). While promising, current solutions suffer from one or more limitations: small sample sizes used to validate the approach, a requirement for multiple sensors that makes widespread use impractical, good accuracy but suboptimal precision or recall, or overlap between training and validation sets.
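To make the scoring step concrete, here is a minimal sketch (not the authors' code) of how per-sample detector output could be summarized: given a hypothetical sequence of binary FOG flags, one per accelerometer sample, it computes the percent of the trial spent in FOG and counts discrete episodes.

```python
# Illustrative sketch, assuming the detector emits one 0/1 FOG flag
# per accelerometer sample; names and shapes are hypothetical.

def fog_summary(labels):
    """Summarize per-sample FOG flags into trial-level statistics."""
    n = len(labels)
    if n == 0:
        return {"percent_fog": 0.0, "episodes": 0}
    # Percent of the trial duration spent in FOG.
    percent_fog = 100.0 * sum(labels) / n
    # An episode is a maximal run of consecutive FOG samples.
    episodes = sum(
        1 for i, v in enumerate(labels)
        if v == 1 and (i == 0 or labels[i - 1] == 0)
    )
    return {"percent_fog": percent_fog, "episodes": episodes}

# Example: a 10-sample trial containing two short freezes.
print(fog_summary([0, 1, 1, 0, 0, 1, 0, 0, 0, 0]))
# -> {'percent_fog': 30.0, 'episodes': 2}
```

Trial-level scores like these are what get compared against the experts' frame-by-frame video annotations.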

To address these limitations, we carried out a machine learning contest with the help of the Michael J. Fox Foundation for Parkinson’s Research and Kaggle, a platform that connects millions of members of the machine learning community. A relatively large database, perhaps the largest of its kind to date, was pooled together: it comprised more than 100 people with Parkinson’s disease and FOG who wore a 3D accelerometer, with almost 5,000 FOG episodes labeled by two or more experts (based on videos). The data was uploaded to the Kaggle website and randomly divided into a training set and public and private validation sets. 379 teams from 83 countries submitted 24,862 machine-learning solutions to the contest.

The winning models outperformed previous machine learning models, achieving high accuracy and good levels of precision and recall, even on the private (unseen) portion of the data, with high correlations (>0.9) against the gold-standard reference scores obtained from the experts. Interestingly, when we applied the winning models to 24/7 daily-living recordings, we identified, for the first time, specific times of day when FOG occurred more frequently than others.

These findings illustrate the power of using a machine learning contest to accelerate medical research. More specifically, the contest rapidly improved our ability to objectively quantify FOG, with results comparable to those of experts. Moreover, the contest results pave the way for 24/7 monitoring of FOG, a possibility that promises to shed new light on a mysterious phenomenon and, hopefully in the long run, to inform and improve treatments for a symptom that can be extremely bothersome and debilitating.

Check out the paper for more of the story and a link to the open-access code of the top-performing machine learning models. The data is still available online. Please let us know if you can do better than the winners.
