International Journal of Emerging Research in Engineering, Science, and Management
Vol. 4, Issue 3, pp. 42-51, Jul-Sep 2025.
https://doi.org/10.58482/ijeresm.v4i3.7

AI-Driven Multimodal Emotion Recognition and Personalized Recommendations Using Power BI

Pooja Sithrubi Gnanasambanthan

M. Gnana Priya

PG Scholar, Department of CSE, Gokula Krishna College of Engineering, Sullurpet, Andhra Pradesh, India.

Associate Professor, Department of ECE, Gokula Krishna College of Engineering, Sullurpet, Andhra Pradesh, India.

Abstract: Mental health challenges demand innovative, non-invasive interventions to reduce stress and enhance emotional stability. Music has long served as a therapeutic medium; however, existing approaches often rely on generic playlists that lack personalization and adaptability to an individual’s psychological state. This paper presents a novel framework that combines webcam-based facial expression analysis with questionnaire-based self-reports to achieve robust emotion detection. The proposed system employs deep learning models to extract emotional cues from visual data, while structured self-assessments provide subjective validation of user states. A fusion mechanism integrates both modalities to improve the accuracy and reliability of emotion recognition. Based on the detected emotional profile, personalized music recommendations are generated and visualized through interactive Power BI dashboards. This multimodal, AI-driven approach bridges traditional music therapy with modern data analytics, enabling adaptive, accessible, and user-centric mental health support. Experimental results highlight the potential of this method to enhance emotional well-being, alleviate stress, and widen access to personalized therapy.

Keywords: Mental health, Music therapy, Artificial Intelligence (AI), Multimodal emotion recognition, Power BI.
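The fusion mechanism summarized in the abstract could, for instance, take the form of a weighted late fusion of the two modalities: softmax probabilities from the facial-expression model combined with normalized questionnaire scores. The Python sketch below is purely illustrative; the emotion labels, the weight `alpha`, and the function name `fuse` are assumptions for exposition, not the system's actual implementation.

```python
# Illustrative late-fusion step: combine facial-expression model probabilities
# with questionnaire-derived scores into a single emotion estimate.
# All names, labels, and the weight alpha are assumptions, not the paper's code.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(facial_probs, questionnaire_scores, alpha=0.6):
    """Weighted late fusion; alpha weights the visual modality."""
    # Normalize questionnaire scores into a probability distribution.
    total = sum(questionnaire_scores.values())
    q = {e: questionnaire_scores.get(e, 0) / total for e in EMOTIONS}
    # Convex combination of the two modality distributions.
    fused = {e: alpha * facial_probs.get(e, 0.0) + (1 - alpha) * q[e]
             for e in EMOTIONS}
    # Return the most likely emotion and the fused distribution.
    return max(fused, key=fused.get), fused

label, dist = fuse(
    {"happy": 0.1, "sad": 0.6, "angry": 0.1, "neutral": 0.2},
    {"happy": 1, "sad": 4, "angry": 1, "neutral": 2},
)
```

In this sketch the detected label ("sad" for the inputs shown) would then drive the recommendation stage; in practice, alpha could be tuned on validation data rather than fixed.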

© 2025 The Author(s). Published by IJERESM. This work is licensed under the Creative Commons Attribution 4.0 International License.

Archiving: All articles are permanently archived in the Zenodo IJERESM Community.