Integration of causal inference in the DQN sampling process for classical control problems

dc.contributor.authorVelez Bedoya, Jairo Ivan
dc.contributor.authorGonzalez Bedia, Manuel
dc.contributor.authorCastillo Ossa, Luis Fernando
dc.contributor.authorArango Lopez, Jeferson
dc.contributor.authorMoreira, Fernando
dc.date.accessioned2024-12-04T14:27:43Z
dc.date.available2024-12-04T14:27:43Z
dc.date.issued2024-11-29
dc.description.abstractIn this study, causal inference is integrated into deep reinforcement learning to improve sampling in classical control environments, where an agent must make decisions to keep a system balanced. Combining artificial intelligence and causal inference, we developed a method that adjusts the priority of transitions in a deep Q-network's experience memory: priorities are assigned according to the magnitude of the causal differences associated with the agent's actions. We applied our methodology to a reference environment in reinforcement learning. Compared with a deep Q-network based on conventional random sampling, the results show significant improvements in performance and learning efficiency. Our study demonstrates that integrating causal inference into the sampling process enables a more intelligent selection of experience transitions, resulting in more effective learning for classical control problems. The study contributes to the convergence of artificial intelligence and causal inference, offering new perspectives for applying reinforcement learning techniques in real-world settings where precise control is essential.
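The abstract describes replacing a DQN's uniform replay sampling with priority-weighted sampling, where each transition's priority reflects the magnitude of a causal difference. A minimal sketch of such a priority-weighted replay buffer is shown below; the `priority` value is supplied by the caller, since the paper's exact causal-difference estimator is not reproduced here (the class name and interface are illustrative assumptions, not the authors' implementation).

```python
import random


class PrioritizedReplayBuffer:
    """Replay buffer sampling transitions in proportion to a priority.

    In the paper's setting, the priority would be the magnitude of the
    causal difference associated with the agent's action; here it is
    simply a positive weight passed in by the caller.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.transitions = []   # (state, action, reward, next_state, done)
        self.priorities = []

    def add(self, transition, priority):
        if len(self.transitions) >= self.capacity:
            # Evict the oldest transition, FIFO-style.
            self.transitions.pop(0)
            self.priorities.pop(0)
        self.transitions.append(transition)
        # Floor the priority so no transition gets zero sampling weight.
        self.priorities.append(max(priority, 1e-6))

    def sample(self, batch_size):
        # random.choices draws with replacement, weighted by priority,
        # so high-causal-difference transitions are replayed more often.
        return random.choices(self.transitions,
                              weights=self.priorities,
                              k=batch_size)
```

Swapping this for uniform sampling leaves the rest of the DQN training loop unchanged: the agent still stores every transition, but minibatches are biased toward transitions whose priorities are largest.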
dc.identifier.citationVelez Bedoya, J. I., Gonzalez Bedia, M., Castillo Ossa, L. F., Arango Lopez, J., & Moreira, F. (2024). Integration of causal inference in the DQN sampling process for classical control problems. Neural Computing and Applications, (published online: 29 November 2024), 1-13. https://doi.org/10.1007/s00521-024-10540-4. Repositório Institucional UPT. https://hdl.handle.net/11328/6027
dc.identifier.issn1433-3058
dc.identifier.issn0941-0643
dc.identifier.urihttps://hdl.handle.net/11328/6027
dc.language.isoeng
dc.publisherSpringer
dc.relation.hasversionhttps://doi.org/10.1007/s00521-024-10540-4
dc.rightsrestricted access
dc.rights.urihttp://creativecommons.org/licenses/by/4.0/
dc.subjectCausal inference
dc.subjectPrioritized sampling
dc.subjectDeep Q-network
dc.subjectReinforcement learning
dc.subject.fosNatural Sciences - Computer and Information Sciences
dc.titleIntegration of causal inference in the DQN sampling process for classical control problems
dc.typejournal article
dcterms.referenceshttps://link.springer.com/article/10.1007/s00521-024-10540-4#citeas
dspace.entity.typePublication
oaire.citation.endPage13
oaire.citation.issuePublished online: 29 November 2024
oaire.citation.startPage1
oaire.citation.titleNeural Computing and Applications
oaire.versionhttp://purl.org/coar/version/c_970fb48d4fbd8a85
person.affiliation.nameUniversidade Portucalense
person.familyNameMoreira
person.givenNameFernando
person.identifier.ciencia-id7B1C-3A29-9861
person.identifier.orcid0000-0002-0816-1445
person.identifier.ridP-9673-2016
person.identifier.scopus-author-id8649758400
relation.isAuthorOfPublicationbad3408c-ee33-431e-b9a6-cb778048975e
relation.isAuthorOfPublication.latestForDiscoverybad3408c-ee33-431e-b9a6-cb778048975e

Files

Original bundle

Name:
J107.pdf
Size:
1.72 MB
Format:
Adobe Portable Document Format