Emotion-Aware Speech Synthesis using Multimodal Deep Learning with Visual and Textual Cues
Date
2025-11-11
Publisher
IEEE
Language
English
Abstract
Contemporary Text-to-Speech (TTS) technologies have reached high levels of accuracy in producing intelligible speech. However, the output is often emotionless and robotic because synthesizing emotion remains a challenge. This problem is particularly important for human-centered applications such as virtual assistants, healthcare aides, and immersive voice technologies, where emotionally intelligent dialogue improves the user experience. The goal of the research presented here is to design a multimodal, emotion-aware, deep learning speech synthesis framework that generates expressive speech. This study uses the RAVDESS emotional speech dataset and incorporates two models: Tacotron 2, a sequence-to-sequence model for spectrogram generation, and a Prosody-Guided Conditional GAN (cGAN), which improves emotional prosody by refining pitch and energy. Experimental evaluation with Mean Opinion Score (MOS), Mel Cepstral Distortion (MCD), and F0 RMSE showed that the system generates speech that is both highly natural (MOS: 4.32) and emotionally aligned (Emotion MOS: 4.15). The results support the hypothesis that combining prosodic conditioning with spectrogram synthesis is effective, and they open new possibilities for next-generation AI communication systems by significantly enhancing the generation of emotion-laden speech.
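The objective metrics named in the abstract, MCD and F0 RMSE, are standard in TTS evaluation and can be computed from aligned frame sequences. The sketch below is illustrative only (it is not the authors' code): it assumes the mel-cepstral frames have already been time-aligned (e.g. via dynamic time warping), uses the conventional MCD constant 10/ln 10, and restricts F0 RMSE to frames voiced in both contours.

```python
import numpy as np

def mel_cepstral_distortion(ref_mcep: np.ndarray, syn_mcep: np.ndarray) -> float:
    """Frame-averaged MCD in dB between two aligned mel-cepstral
    sequences of shape (frames, coeffs). The 0th (energy) coefficient
    is excluded, as is conventional."""
    diff = ref_mcep[:, 1:] - syn_mcep[:, 1:]
    per_frame = np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float((10.0 / np.log(10.0)) * np.mean(per_frame))

def f0_rmse(ref_f0: np.ndarray, syn_f0: np.ndarray) -> float:
    """RMSE (Hz) between reference and synthesized F0 contours,
    computed only over frames voiced in both (F0 > 0)."""
    voiced = (ref_f0 > 0) & (syn_f0 > 0)
    return float(np.sqrt(np.mean((ref_f0[voiced] - syn_f0[voiced]) ** 2)))
```

Identical inputs yield 0 for both metrics; a constant 10 Hz pitch offset over voiced frames yields an F0 RMSE of exactly 10.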
Keywords
Emotion-Aware Speech Synthesis, Tacotron 2, Prosody-Guided Conditional GAN, Multimodal Deep Learning, RAVDESS Dataset, Emotional Prosody Modeling, Text-to-Speech (TTS), Mel Spectrogram Generation
Document Type
Conference paper
Publisher Version
Citation
Totlani, K., Patil, S., Sasikumar, A., Moreira, F., & Mohanty, S. N. (2025). Emotion-Aware Speech Synthesis using Multimodal Deep Learning with Visual and Textual Cues. In 2025 IEEE 8th International Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, 06-08 August 2025, (pp. 104-108). IEEE. https://doi.org/10.1109/MIPR67560.2025.00025. Repositório Institucional UPT. https://hdl.handle.net/11328/6769
Access Type
Restricted Access