
Scientific seminar: AI models to diagnose depression using acoustic features

Event ended

The Laboratory of Artificial Intelligence for Cognitive Sciences (AICS) cordially invites you, your colleagues, and your students to participate in the lab’s scientific seminars starting on Wednesday 26 June at 2 pm.

Subject: AI models to diagnose depression using acoustic features

Abstract.
Depression is one of the most widespread mental health disorders in the world today, and it affects an individual's quality of life considerably. Many people practice self-diagnosis and try to treat themselves on their own, avoiding a doctor's consultation, because a hospital appointment takes a considerable amount of time and intrudes on the individual's privacy. In this study we examined various Artificial Intelligence (AI) methods to detect whether a person is suffering from depression, using acoustic features (such as pitch, tone, and rhythm) extracted from their voices.
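As a toy illustration of the kind of acoustic feature involved, the sketch below estimates a recording's fundamental frequency (pitch) with a plain autocorrelation in NumPy. This is not the lab's pipeline: real systems use dedicated audio libraries, and the synthetic 220 Hz tone here merely stands in for actual speech.

```python
import numpy as np

def estimate_f0(signal, sr, fmin=60.0, fmax=400.0):
    """Estimate the fundamental frequency (pitch) by autocorrelation."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[corr.size // 2:]        # keep non-negative lags only
    lag_lo = int(sr / fmax)             # shortest plausible pitch period
    lag_hi = int(sr / fmin)             # longest plausible pitch period
    best = lag_lo + np.argmax(corr[lag_lo:lag_hi])
    return sr / best

sr = 16000
t = np.arange(int(0.2 * sr)) / sr       # 200 ms of "audio"
tone = np.sin(2 * np.pi * 220.0 * t)    # synthetic 220 Hz voiced sound
print(round(estimate_f0(tone, sr), 1))
```

The estimate lands near 220 Hz because the autocorrelation of a voiced sound peaks at its pitch period; restricting the lag search to a plausible vocal range (60–400 Hz) avoids spurious peaks.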

Assuming that acoustic features are promising indicators of depression, we took a dataset of 346 patients from the Mental Health Research Center in Moscow, Russia, who were asked to record their voices while completing one of three tasks: describing a picture, reading an IKEA instruction manual, or telling a personal story. To assess the severity of depression, doctors used two scales: the Hamilton Depression Rating Scale (HDRS) and the Quick Inventory of Depressive Symptomatology (QIDS). We extracted features from the patients' audio recordings and trained several models, ranging from conventional Machine Learning (ML) methods, such as ensemble learning algorithms and k-nearest neighbors, to more advanced deep learning architectures, such as TabNet and Wide&Deep. The results of our study show that several models can predict depression levels with approximately 0.62 ROC-AUC and 0.7 F1-score, using picture descriptions as the stimulus for patients. In addition, of the two scales, QIDS yielded the more accurate predictions.
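A minimal sketch of the modelling step described above, not the lab's actual code: synthetic feature vectors stand in for the per-recording acoustic features, and a k-nearest-neighbors classifier from scikit-learn is scored with the same two metrics the study reports (ROC-AUC and F1).

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical stand-in data: one row of acoustic-feature statistics per
# recording; binary labels mimic a depression / no-depression cut-off.
rng = np.random.default_rng(0)
n_patients, n_features = 346, 4
X = rng.normal(size=(n_patients, n_features))
y = (X[:, 0] + 0.5 * rng.normal(size=n_patients) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=7).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
f1 = f1_score(y_te, clf.predict(X_te))
print(f"ROC-AUC: {auc:.2f}  F1: {f1:.2f}")
```

ROC-AUC is computed from predicted probabilities while F1 uses hard labels, which is why the two metrics can diverge, as in the roughly 0.62 vs 0.7 figures reported above.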

Overall, our results demonstrate that deep learning models have great potential for depression detection from extracted acoustic features; however, further research is required to improve the quality of the results.

The presentation will be delivered by Alexandra Kovaleva (HSE master's student and research assistant at AICS).

Supervisor: Shalileh Soroosh (Ph.D. in Computer Science, Laboratory Head)

The meeting will be held in a hybrid format: in person at Krivokolenny Lane 3, Room 302, or online via Zoom.

To participate in the seminar, please register.

To order passes for external participants of the seminar, please contact the laboratory manager, Alina Lobashova: aalobashova@hse.ru or +7 (927) 407-21-84.