Freezing of gait (FOG) creates challenges for many people with Parkinson’s disease. When a person’s feet refuse to cooperate during an episode, FOG can lead to loss of independence, embarrassment, frustration, falls, and injuries; in severe cases, a wheelchair may be recommended. FOG is also mysterious: under the same or nearly identical conditions, it sometimes occurs and sometimes does not. Often, people who experience frequent FOG at home or during the day show no FOG when they are examined by a clinician. New tools for objective, long-term, unobtrusive assessment of FOG are needed.
Measuring FOG accurately and objectively is also a challenge. However, with the collective input of the machine learning community, we have made good progress in automatically identifying the occurrence and severity of freezing of gait, potentially helping to unlock the broader neurological mystery.
One of the barriers impeding progress in the understanding and treatment of freezing of gait is the absence of well-validated tools that can objectively quantify when it occurs and rate its severity. The current gold-standard approach is to videotape people who have, or may have, FOG as they carry out tasks designed to provoke it. Trained experts then review and score the videos frame by frame, marking each frame in which FOG occurred. This process is extremely time-consuming and labor-intensive, typically requiring two or more experts.
An emerging alternative is to use wearable sensors to automatically determine when FOG occurs. After the data are collected, an algorithm reviews the signals to identify each FOG episode, and a summary score is then computed (e.g., the percentage of the trial duration spent in FOG). While promising, current solutions suffer from one or more limitations: small sample sizes used to validate the approach, reliance on multiple sensors that makes widespread use impractical, good accuracy but suboptimal precision or recall, or overlap between the training and validation sets.
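To make the scoring step concrete, here is a minimal Python sketch of how a percent-time-in-FOG score could be computed once a detector has flagged each sensor sample as FOG or not. The function name and the per-sample binary representation are illustrative assumptions, not the contest's actual scoring code.

```python
import numpy as np

def percent_time_in_fog(flags: np.ndarray) -> float:
    """Summary score for one trial: the percentage of samples flagged as FOG.

    flags: 1-D array of 0/1 values, one per accelerometer sample,
           where 1 marks a sample the detector classified as FOG.
    """
    if flags.size == 0:
        return 0.0
    return 100.0 * flags.sum() / flags.size

# A 10-sample trial with 3 samples flagged as FOG -> 30% time in FOG.
trial_flags = np.array([0, 0, 1, 1, 1, 0, 0, 0, 0, 0])
print(percent_time_in_fog(trial_flags))  # 30.0
```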
To address these limitations, we carried out a machine learning contest with the help of the Michael J. Fox Foundation for Parkinson’s Research and Kaggle, a platform that connects millions of members of the machine learning community. We pooled a relatively large database, perhaps the largest of its kind to date: recordings from more than 100 people with Parkinson’s disease and FOG who wore a 3D accelerometer, with almost 5,000 FOG episodes labeled by two or more experts based on video. The data were uploaded to the Kaggle website and randomly divided into a training set and public and private validation sets. In total, 379 teams from 83 countries submitted 24,862 machine-learning solutions to the contest.
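As an aside, one common way to avoid the train/validation overlap mentioned above is to split the data at the subject level, so that no person's recordings appear in more than one set. Below is a minimal sketch of such a split; the split fractions and the function name are illustrative assumptions, not the exact procedure used for the contest.

```python
import numpy as np

def split_by_subject(subject_ids, train_frac=0.70, public_frac=0.15, seed=42):
    """Assign whole subjects to train / public / private splits at random,
    so no person's recordings leak across the boundary between sets."""
    rng = np.random.default_rng(seed)
    subjects = rng.permutation(np.unique(subject_ids))
    n_train = int(train_frac * len(subjects))
    n_public = int(public_frac * len(subjects))
    return {
        "train": set(subjects[:n_train]),
        "public": set(subjects[n_train:n_train + n_public]),
        "private": set(subjects[n_train + n_public:]),
    }

# Example: subject IDs attached to each recording session.
sessions = ["s01", "s02", "s02", "s03", "s04", "s05", "s05", "s06"]
print(split_by_subject(sessions))
```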
The winning models outperformed previous machine learning models, achieving high accuracy and good precision and recall, even on the private (unseen) portion of the data, with high correlations (>0.9) against the gold-standard reference scores obtained from the experts. Interestingly, when we applied the winning models to 24/7 daily-living recordings, we identified, for the first time, specific times of day when FOG occurred more frequently than others.
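For readers who want to run this kind of evaluation themselves, a model's per-trial scores can be compared to the experts' reference scores with a Pearson correlation, as in the sketch below. The score values here are made-up illustrative numbers, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-trial percent-time-in-FOG scores (illustrative values only).
expert_scores = np.array([12.5, 0.0, 34.2, 8.1, 55.0, 3.3])
model_scores = np.array([11.9, 0.5, 36.0, 7.4, 52.8, 4.1])

r, p = pearsonr(expert_scores, model_scores)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")  # a well-performing model should exceed 0.9
```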
These findings illustrate the power of a machine learning contest to accelerate medical research. More specifically, the contest rapidly improved our ability to objectively quantify FOG, with results comparable to those of experts. Moreover, the contest results pave the way for 24/7 monitoring of FOG, a possibility that promises to shed new light on a mysterious phenomenon and, hopefully, in the long run, inform and improve treatments for a symptom that can be extremely bothersome and debilitating.
Check out the paper for more of the story and a link to the open-access code of the top-performing machine learning models. The data are still available online. Please let us know if you can do better than the winners.