Unraveling Emotions with Pre-Trained Models
Date
2025-10-22
Embargo
Advisor
Co-advisor
Journal Title
Journal ISSN
Volume Title
Publisher
IEEE
Language
English
Alternative Title
Abstract
Transformer models have significantly advanced the field of emotion recognition. However, open challenges remain when applying Large Language Models (LLMs) to open-ended queries. Although current models offer good results, automatic emotion analysis in open texts presents significant challenges, such as contextual ambiguity, linguistic variability, and difficulty interpreting complex emotional expressions. These limitations hinder the direct application of generalist models. Accordingly, this work compares the effectiveness of fine-tuning and prompt engineering for emotion detection in three distinct scenarios: (i) performance of fine-tuned pre-trained models and general-purpose LLMs using simple prompts; (ii) effectiveness of different emotion prompt designs with LLMs; and (iii) impact of emotion grouping techniques on these models. Experimental results attain metrics above 70% with a fine-tuned pre-trained model for emotion recognition. Moreover, the findings highlight that LLMs require structured prompt engineering and emotion grouping to enhance their performance. These advancements improve sentiment analysis, human-computer interaction, and understanding of user behavior across various domains.
Keywords
Emotion recognition, large language models, natural language processing, open-ended responses, prompt engineering, transformer models
Document Type
Publisher's Version
Dataset
Citation
Pajón-Sanmartín, A., Arriba-Pérez, F., García-Méndez, S., Leal, F., Malheiro, B., & Burguillo-Rial, J. C. (accepted version: 22 October 2025). Unraveling Emotions with Pre-Trained Models, IEEE Access, 1-16. Repositório Institucional UPT. https://hdl.handle.net/11328/6725
Identifiers
TID
Designation
Access Type
Restricted Access