Emotion-Aware Speech Synthesis using Multimodal Deep Learning with Visual and Textual Cues

dc.contributor.author: Totlani, Ketan
dc.contributor.author: Patil, Smital
dc.contributor.author: Sasikumar, Abhijai
dc.contributor.author: Moreira, Fernando
dc.contributor.author: Mohanty, Sachi Nandan
dc.date.accessioned: 2025-11-14T11:07:30Z
dc.date.available: 2025-11-14T11:07:30Z
dc.date.issued: 2025-11-11
dc.description.abstract: Contemporary Text-to-Speech (TTS) technologies have reached high levels of accuracy in producing intelligible speech. However, the output is often emotionless and robotic because synthesizing emotion remains a challenge. This problem is particularly important for human-centered applications such as virtual assistants, healthcare aides, and immersive voice technologies, where emotionally intelligent dialogue improves the user experience. The goal of the research presented here is to design a multimodal, emotion-aware, deep learning speech synthesis framework that generates expressive speech. This study uses the RAVDESS emotional speech dataset and incorporates two models: Tacotron 2, a sequence-to-sequence model for spectrogram generation, and a Prosody-Guided Conditional GAN (cGAN), which improves emotional prosody by refining pitch and energy. Experimental evaluation using Mean Opinion Score (MOS), Mel Cepstral Distortion (MCD), and F0 RMSE showed that the system generates speech that is both highly natural (MOS: 4.32) and emotionally aligned (Emotion MOS: 4.15). The results support the hypothesis that combining prosodic conditioning with spectrogram synthesis is effective and, by significantly enhancing the generation of emotion-laden speech, opens new possibilities for next-generation AI communication systems.
dc.identifier.citation: Totlani, K., Patil, S., Sasikumar, A., Moreira, F., & Mohanty, S. N. (2025). Emotion-Aware Speech Synthesis using Multimodal Deep Learning with Visual and Textual Cues. In 2025 IEEE 8th International Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, 06-08 August 2025 (pp. 104-108). IEEE. https://doi.org/10.1109/MIPR67560.2025.00025. Repositório Institucional UPT. https://hdl.handle.net/11328/6769
dc.identifier.isbn: 979-8-3315-9465-7
dc.identifier.isbn: 979-8-3315-9466-4
dc.identifier.uri: https://hdl.handle.net/11328/6769
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.hasversion: https://doi.org/10.1109/MIPR67560.2025.00025
dc.rights: restricted access
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Emotion-Aware Speech Synthesis
dc.subject: Tacotron 2
dc.subject: Prosody-Guided Conditional GAN
dc.subject: Multimodal Deep Learning
dc.subject: RAVDESS Dataset
dc.subject: Emotional Prosody Modeling
dc.subject: Text-to-Speech (TTS)
dc.subject: Mel Spectrogram Generation
dc.subject.fos: Natural Sciences - Computer and Information Sciences
dc.title: Emotion-Aware Speech Synthesis using Multimodal Deep Learning with Visual and Textual Cues
dc.type: conference paper
dcterms.references: https://ieeexplore.ieee.org/document/11225978/authors#full-text-header
dspace.entity.type: Publication
oaire.citation.conferenceDate: 2025-08-06
oaire.citation.conferencePlace: San Jose, CA, USA
oaire.citation.endPage: 108
oaire.citation.startPage: 104
oaire.citation.title: 2025 IEEE 8th International Conference on Multimedia Information Processing and Retrieval (MIPR)
oaire.version: http://purl.org/coar/version/c_970fb48d4fbd8a85
person.affiliation.name: Universidade Portucalense
person.familyName: Moreira
person.givenName: Fernando
person.identifier.ciencia-id: 7B1C-3A29-9861
person.identifier.orcid: 0000-0002-0816-1445
person.identifier.rid: P-9673-2016
person.identifier.scopus-author-id: 8649758400
relation.isAuthorOfPublication: bad3408c-ee33-431e-b9a6-cb778048975e
relation.isAuthorOfPublication.latestForDiscovery: bad3408c-ee33-431e-b9a6-cb778048975e
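
Note on the objective metrics cited in the abstract (MCD and F0 RMSE): this record does not describe the authors' exact tooling, but both metrics have conventional definitions. The following is a minimal Python sketch using those standard formulas; the variable names, array shapes, and the assumption that reference and synthesized features are already time-aligned (e.g., by DTW) are illustrative assumptions, not the paper's implementation.

import numpy as np

def mel_cepstral_distortion(mc_ref, mc_syn):
    # MCD in dB between time-aligned mel-cepstral sequences of shape
    # (frames, D); by convention the 0th (energy) coefficient is excluded.
    diff = mc_ref - mc_syn
    per_frame = np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return (10.0 / np.log(10.0)) * per_frame.mean()

def f0_rmse(f0_ref, f0_syn):
    # Root-mean-square F0 error in Hz over frames voiced in both signals
    # (unvoiced frames are marked with F0 = 0 in this sketch).
    voiced = (f0_ref > 0) & (f0_syn > 0)
    return np.sqrt(np.mean((f0_ref[voiced] - f0_syn[voiced]) ** 2))

Lower values indicate closer spectral and prosodic agreement with the reference recording; the subjective MOS and Emotion MOS scores reported in the abstract complement these objective measures.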

Files

Original bundle

Name: P115.pdf
Size: 986.35 KB
Format: Adobe Portable Document Format