School of Computing and Informatics Technology (CIT)
Browsing School of Computing and Informatics Technology (CIT) by Author "Abura, Jerome"
Item: An explainable approach for depression prediction using deep learning algorithms (Makerere University, 2025), Abura, Jerome

Depression is a pressing global health concern that poses significant challenges to mental health professionals. The condition’s severity is often exacerbated by stressful life events, including trauma, loss of loved ones, and social isolation. As sound mental health is essential for a nation’s development and societal transformation, early prediction and diagnosis of depression are crucial for effective treatment. Traditional diagnostic methods, which rely on interviews and physical appearance, have limitations, as do the statistical tools psychiatrists use to determine whether a patient is depressed. The widespread availability of the Internet and computing devices has led to the development of computer-based methods for predicting and diagnosing depression. However, the black-box nature of Artificial Intelligence algorithms raises concerns among patients. This study explored the application of Explainable Artificial Intelligence (XAI) methods to predict depression and improve diagnostic accuracy. Using the FER2013 dataset, we employed a convolutional neural network to predict depression based on facial emotional expressions. Our model correctly classifies 78.29% of the sampled facial emotions and achieves an overall accuracy of 52.97%. We identified disgust, sadness, anger, and fear as the emotional expressions most strongly associated with depression. To provide insights into the model’s predictions, we utilized two XAI explainers: LIME and SHAP. LIME emphasized local feature explanations, highlighting the role of individual facial features, such as the curvature of the eyebrows, in predicting depression. In contrast, SHAP focused on global feature importance, revealing the overall contribution of each feature, such as the presence of tearfulness, to the model’s predictions. Our results show that the two explainers offer distinct approaches to explaining prediction outcomes, with LIME providing the more comprehensive explanations. This study contributes to the development of explainable AI methods for depression diagnosis and highlights the importance of transparency in AI-driven healthcare applications. Future research directions and recommendations are provided to further improve the accuracy and explainability of depression diagnosis models.
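The abstract does not publish the network architecture, so the following is only a minimal sketch of a FER2013-style emotion classifier, assuming Keras/TensorFlow; the layer sizes, optimizer, and training settings are illustrative assumptions, not the study’s reported configuration.

```python
# Minimal sketch of a CNN emotion classifier in the spirit of the study.
# Architecture and hyperparameters are assumptions, not the thesis's values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # FER2013 labels: angry, disgust, fear, happy, sad, surprise, neutral

def build_model():
    # FER2013 images are 48x48 grayscale.
    return models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: (N, 48, 48, 1) float32 scaled to [0, 1]; y_train: (N,) integer labels.
# model.fit(x_train, y_train, validation_split=0.1, epochs=30, batch_size=64)
```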
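Likewise, a hedged sketch of how LIME and SHAP might be attached to such a model, mirroring the abstract’s local-versus-global contrast; the explainer settings (perturbation sample count, background size) and helpers such as predict_rgb, x_train, and x_test are hypothetical, not values from the thesis.

```python
# Hypothetical wiring of LIME (local) and SHAP (global) to the model above.
import numpy as np
import shap
from lime import lime_image

# --- LIME: local, per-image explanation via superpixel perturbation ---
def predict_rgb(images):
    # LIME hands the classifier RGB copies of the image; reduce back to
    # the single grayscale channel the model expects.
    gray = images.mean(axis=-1, keepdims=True).astype("float32")
    return model.predict(gray, verbose=0)

lime_explainer = lime_image.LimeImageExplainer()
# x_test[0]: one 48x48 grayscale image scaled to [0, 1].
explanation = lime_explainer.explain_instance(
    x_test[0].squeeze(), predict_rgb, top_labels=3, num_samples=1000)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)

# --- SHAP: feature attributions averaged against a background set ---
background = x_train[np.random.choice(len(x_train), 100, replace=False)]
shap_explainer = shap.GradientExplainer(model, background)
shap_values = shap_explainer.shap_values(x_test[:8])
shap.image_plot(shap_values, x_test[:8])
```

In this pairing, LIME perturbs superpixels of one face to explain a single prediction, while SHAP attributes importance relative to a background sample, which is one plausible reading of the local/global distinction the abstract draws.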