Psychological Distress Inference using Multimodal Motion and Speech Biomarkers
DOI: https://doi.org/10.61503/Ijmcp.v2i1.182

Keywords: Stress Detection, Depression Detection, Psychological Distress, AI, Computer Vision, Emotion Recognition, Machine Learning, Deep Learning

Abstract
Psychological distress, including stress and depression, is on the rise worldwide, and its inference calls for multimodal machine learning solutions. The current work proposes a multimodal machine learning model for psychological distress inference based on handwriting, facial kinetics, speech, and motion biomarkers. By incorporating deep learning, computer vision, and NLP, the system derives insightful information from multimodal inputs to accurately evaluate mental health conditions. In contrast to conventional evaluation methods, our solution offers an objective, scalable, and non-invasive option for the early detection of stress and depression. Handwriting analysis is used to assess cognitive load and emotional state, while speech and facial dynamics provide complementary behavioral signals. This approach is designed to improve digital mental health diagnostics with greater accuracy and real-time usability. Our results help advance the development of AI-based healthcare solutions towards autonomous mental health screening and intervention methodologies.
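To make the fusion of the biomarker streams concrete, the sketch below shows one common way such a multimodal classifier can be wired together: each modality is encoded separately and the embeddings are concatenated before a shared classification head. This is a minimal illustration in PyTorch, not the authors' implementation; the modality names, embedding sizes, and two-class output are assumptions for the example.

```python
# Minimal sketch of late fusion over per-modality feature vectors, assuming
# fixed-length embeddings have already been extracted for each biomarker
# stream (handwriting, face, speech, motion). Hypothetical sizes throughout.
import torch
import torch.nn as nn


class LateFusionDistressClassifier(nn.Module):
    def __init__(self, dims, hidden=64, n_classes=2):
        super().__init__()
        # One small encoder per modality; dims maps modality name -> input size.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
            for name, d in dims.items()
        })
        # Fusion head over the concatenated modality embeddings.
        self.head = nn.Linear(hidden * len(dims), n_classes)

    def forward(self, inputs):
        # inputs: dict mapping modality name -> tensor of shape (batch, dim).
        # Iterate over the encoders to keep the concatenation order fixed.
        feats = [enc(inputs[name]) for name, enc in self.encoders.items()]
        return self.head(torch.cat(feats, dim=-1))


# Example usage with made-up embedding dimensions for each biomarker stream.
dims = {"handwriting": 32, "face": 128, "speech": 64, "motion": 48}
model = LateFusionDistressClassifier(dims)
batch = {name: torch.randn(8, d) for name, d in dims.items()}
logits = model(batch)  # shape (8, 2): e.g. distressed vs. non-distressed
```

Late fusion of this kind keeps each modality's feature extractor independent, which is convenient when some streams (e.g. handwriting samples) are missing for a subset of subjects; attention-based or early-fusion variants are equally plausible readings of the described system.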