Unraveling Emotions with Pre-Trained Models

dc.contributor.author: Pajón-Sanmartín, Alejandro
dc.contributor.author: Arriba-Pérez, Francisco De
dc.contributor.author: García-Méndez, Silvia
dc.contributor.author: Leal, Fátima
dc.contributor.author: Malheiro, Benedita
dc.contributor.author: Burguillo-Rial, Juan Carlos
dc.date.accessioned: 2025-10-27T11:01:14Z
dc.date.available: 2025-10-27T11:01:14Z
dc.date.issued: 2025-10-22
dc.description.abstract: Transformer models have significantly advanced the field of emotion recognition. However, open challenges remain when exploring open-ended queries with Large Language Models (LLMs). Although current models offer good results, automatic emotion analysis in open texts presents notable difficulties, such as contextual ambiguity, linguistic variability, and the complexity of interpreting nuanced emotional expressions. These limitations hinder the direct application of generalist models. Accordingly, this work compares the effectiveness of fine-tuning and prompt engineering for emotion detection across three distinct scenarios: (i) the performance of fine-tuned pre-trained models and general-purpose LLMs using simple prompts; (ii) the effectiveness of different emotion prompt designs with LLMs; and (iii) the impact of emotion grouping techniques on these models. Experimental tests attain evaluation metrics above 70% with a fine-tuned pre-trained model for emotion recognition. Moreover, the findings highlight that LLMs require structured prompt engineering and emotion grouping to enhance their performance. These advances improve sentiment analysis, human-computer interaction, and the understanding of user behavior across various domains.
dc.identifier.citation: Pajón-Sanmartín, A., Arriba-Pérez, F., García-Méndez, S., Leal, F., Malheiro, B., & Burguillo-Rial, J. C. (accepted version: 22 October 2025). Unraveling Emotions with Pre-Trained Models. IEEE Access, 1-16. Repositório Institucional UPT. https://hdl.handle.net/11328/6725
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://hdl.handle.net/11328/6725
dc.language.iso: eng
dc.publisher: IEEE
dc.relation: UIDP/50014/2020
dc.rights: restricted access
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Emotion recognition
dc.subject: large language models
dc.subject: natural language processing
dc.subject: open-ended responses
dc.subject: prompt engineering
dc.subject: transformer models
dc.subject.fos: Natural Sciences - Computer and Information Sciences
dc.title: Unraveling Emotions with Pre-Trained Models
dcterms.references: https://arxiv.org/abs/2510.19668
dspace.entity.type: Publication
oaire.citation.endPage: 16
oaire.citation.startPage: 1
oaire.citation.title: IEEE Access
oaire.version: http://purl.org/coar/version/c_ab4af688f83e57aa
person.affiliation.name: REMIT – Research on Economics, Management and Information Technologies
person.familyName: Leal
person.givenName: Fátima
person.identifier.ciencia-id: 2211-3EC7-B4B6
person.identifier.orcid: 0000-0003-4418-2590
person.identifier.rid: Y-3460-2019
person.identifier.scopus-author-id: 57190765181
relation.isAuthorOfPublication: 8066078f-1e30-4b0a-aa84-3b6a2af4185c
relation.isAuthorOfPublication.latestForDiscovery: 8066078f-1e30-4b0a-aa84-3b6a2af4185c
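
The abstract above contrasts fine-tuned classifiers with prompt-engineered LLMs and reports that structured prompts and emotion grouping improve LLM performance. The Python sketch below is a minimal illustration of those two ideas only, a constrained emotion prompt and a grouped label set; the label names, template wording, and function names are assumptions chosen for illustration and are not taken from the paper itself.

# Minimal sketch of a structured emotion prompt with grouped labels.
# The grouping below is an assumed Ekman-style mapping, not the paper's.
EMOTION_GROUPS = {
    "positive": ["joy", "love"],
    "negative": ["anger", "fear", "sadness", "disgust"],
    "other": ["surprise", "neutral"],
}

# Hypothetical prompt template constraining the model to a single label.
PROMPT_TEMPLATE = (
    "You are an emotion-recognition assistant.\n"
    "Classify the text below with exactly one label from this list: {labels}.\n"
    "Answer with the label only, no explanation.\n\n"
    'Text: "{text}"'
)

def build_prompt(text: str, grouped: bool = False) -> str:
    """Build a constrained prompt; grouped=True coarsens the label space."""
    if grouped:
        labels = list(EMOTION_GROUPS)  # coarse group names only
    else:
        labels = [e for group in EMOTION_GROUPS.values() for e in group]
    return PROMPT_TEMPLATE.format(labels=", ".join(labels), text=text)

if __name__ == "__main__":
    sample = "I can't believe they cancelled the show again!"
    print(build_prompt(sample))                # fine-grained label set
    print(build_prompt(sample, grouped=True))  # grouped label set

Constraining the answer to a single label simplifies automatic evaluation, and grouping shrinks the label space, which is consistent with the abstract's finding that LLMs benefit from structured prompt engineering and emotion grouping.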

Files

Original bundle

Name: Unraveling_Emotions_with_Pre-Trained_Models.pdf
Size: 4.15 MB
Format: Adobe Portable Document Format