Multimodal digital assessment of depression with actigraphy and app in Hong Kong Chinese


The motivation for this study

Despite the high prevalence of depression and its detrimental effects on individuals, families and society, a large proportion of patients do not receive timely help from mental health services. The reasons are multifaceted, including individuals' lack of awareness or knowledge and worry about potential stigma, as well as healthcare system issues such as the limited resources and accessibility of mental health services. The diagnosis of depression currently relies on clinical consultations, but limited mental health resources and the limited availability of clinicians pose a significant barrier to timely diagnosis and to monitoring patients' progress. This issue is particularly accentuated in Asia and other under-served regions. In addition, self-report scales have long been used as a screening assessment for depression, but a patient's self-perception reflects only part of the picture: depression is more than unhappiness or simple sadness, encompassing a syndrome of symptoms and signs including mood, motor, sleep and speech features.

 

We recognize the high potential of digital phenotyping, which allows unobtrusive, objective and continuous measurement via digital devices with the application of artificial intelligence. Digital phenotyping might thus facilitate accessible assessment, screening and monitoring, alleviating some of the burden on healthcare services. The digital-AI approach may also help refine the accuracy of assessment, and potentially enable clinical researchers and clinicians to uncover meaningful hidden features.

 

The journey of this study

In this paper, we developed and validated a multimodal digital measurement system (using actigraphy and a novel app called D-MOMO) to assess depression in Hong Kong Chinese. We found that subjects with major depressive disorder (MDD) demonstrated a series of digital features spanning facial expressions (more brow lowering and less lip-corner pulling), speech patterns (lower articulation rate, higher pause variability, and more self-references and negative-emotion words), mood states (lower subjective happiness), and sleep and circadian variations (decreased mobile time, delayed sleep midpoint and delayed acrophase). These findings echo the fact that depression comprises a series of physiological, cognitive, speech, mood and rest-activity changes. Furthermore, with the aid of a machine learning approach to multimodal detection of depression, we achieved good performance (F1-score = 0.81) for lifetime diagnosis of depression, and a lower but still satisfactory performance (F1-score = 0.70) for non-remission status. One of the strengths of the study was the use of clinical outcomes ascertained by trained medical researchers, rather than reliance on self-report questionnaires alone.
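For readers unfamiliar with the circadian terminology above, acrophase is the clock time at which the fitted daily activity rhythm peaks; it is conventionally estimated from actigraphy counts with a single-component cosinor (least-squares cosine) fit. The sketch below is illustrative only and is not the paper's pipeline: the function name and the synthetic activity trace are our own, and real actigraphy data would need pre-processing (non-wear detection, artifact removal) first.

```python
import numpy as np

def cosinor_acrophase(t_hours, activity, period=24.0):
    """Fit a single-component cosinor model:
        activity ~ M + A * cos(omega * (t - acrophase)),  omega = 2*pi/period
    via ordinary least squares on a cosine/sine basis.
    Returns (mesor M, amplitude A, acrophase in hours)."""
    omega = 2 * np.pi / period
    # Design matrix: intercept, cos and sin components of the rhythm
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(omega * t_hours),
                         np.sin(omega * t_hours)])
    beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
    mesor, b_cos, b_sin = beta
    amplitude = np.hypot(b_cos, b_sin)
    # The fitted curve peaks where omega*t = atan2(b_sin, b_cos)
    acrophase = (np.arctan2(b_sin, b_cos) / omega) % period
    return mesor, amplitude, acrophase

# Demo on three days of synthetic half-hourly counts peaking at 15:00
t = np.arange(0, 72, 0.5)
activity = 50 + 20 * np.cos(2 * np.pi * (t - 15) / 24)
mesor, amplitude, acro = cosinor_acrophase(t, activity)
print(f"mesor={mesor:.1f}, amplitude={amplitude:.1f}, acrophase={acro:.1f} h")
```

A delayed acrophase, as observed in the MDD group, would appear here simply as a larger fitted peak time (e.g., 17:00 instead of 15:00).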

 

The biggest challenge, as in many other digital mental health research projects, was to build a multidisciplinary team across medical, engineering and data/AI science. This study brought together a team of professionals from various fields including psychiatrists, psychologists, software engineers, and data/AI scientists. This interdisciplinary collaboration enabled us to broaden our perspectives and work complementarily to approach problems from different angles.

 

Suggestions for future research

Our study reported the findings of integrating both passive and active features in assessing depression. Previous studies mostly focused on passive digital features (e.g., sleep, physical activity, and smartphone data). The advantage of passive features is that the data can be collected continuously and conveniently without the need for human interaction. Nonetheless, passive features also have potential drawbacks; for example, noisy or meaningless data can interfere with further analysis. In addition, the mood states of depressed patients can be contextual and situational, and their symptoms may be more apparent when specific probing questions are asked. This is why we included two specific questions in the app (i.e., "How is your mood right now?" and "What have you done during the last four hours?"). Nevertheless, active measurement has its disadvantages as well, the biggest being the burden placed on subjects, such as the mood diary completed four times per day via the app in our study. As we eventually intend to apply the measurement to future screening and monitoring of depression in a larger community population, it certainly requires further improvement, such as minimizing the frequency and duration of monitoring as well as incorporating intelligent prompts (e.g., using a virtual agent in the app's AI system or leveraging cues from passive features).

 

The current study provided valuable insights by demonstrating the feasibility of utilizing multimodal measurements for detecting depression. However, there might be gender and/or age differences in the predictive power of the machine learning models. Due to the relatively limited sample size, it was not feasible to test this important scientific question in this study. Future studies should recruit a larger and more diverse sample of participants to examine potential differences in the predictive power of multimodal measurements across ages and genders.
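Testing for such subgroup differences amounts to evaluating the same trained classifier separately within each stratum and comparing the resulting scores. The following is a minimal sketch with scikit-learn on fully synthetic data; the feature matrix, subgroup label and model are all stand-ins, not the study's actual features or classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 6))          # stand-in for fused multimodal features
y = (X[:, 0] + 0.5 * X[:, 1]         # synthetic "depression" label
     + rng.normal(scale=0.8, size=n) > 0).astype(int)
sex = rng.integers(0, 2, size=n)     # hypothetical subgroup label (0/1)

# Hold out a test set, keeping the subgroup label aligned with the split
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sex, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)

overall = f1_score(y_te, pred)
by_group = {g: f1_score(y_te[s_te == g], pred[s_te == g]) for g in (0, 1)}
print(f"overall F1 = {overall:.2f}, per-subgroup F1 = {by_group}")
```

With an adequately powered sample, a substantial gap between the per-subgroup F1-scores (ideally with confidence intervals from bootstrapping) would indicate the kind of differential predictive power hypothesized above; the same stratification applies to age bands.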

 

Thus, this study is only the beginning of the journey. The current process of feature extraction from digital modalities still requires some human involvement, particularly in pre-processing tasks. There are still some miles to go, as real-time processing and feedback are essential for the practical application of automated digital phenotyping in real-life scenarios.

 

Link to article:

https://www.nature.com/articles/s41398-024-02873-4
