OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
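The same listing can be retrieved programmatically from the public OpenAlex API, which exposes citing works through the `cites` filter on the works endpoint. A minimal sketch follows; the work ID `"W2963907629"` in the usage note is a placeholder, not necessarily the OpenAlex ID of the requested article, so substitute the real ID before querying.

```python
import json
from urllib.request import urlopen


def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build the OpenAlex query listing works that cite `work_id`."""
    return (
        "https://api.openalex.org/works"
        f"?filter=cites:{work_id}&page={page}&per-page={per_page}"
    )


def summarize(response: dict) -> list[tuple[str, int]]:
    """Reduce an OpenAlex works response to (title, cited_by_count) pairs,
    mirroring the title / Times Cited columns of this listing."""
    return [(w["title"], w["cited_by_count"]) for w in response["results"]]
```

Usage would look like `summarize(json.load(urlopen(citing_works_url("W2963907629"))))`, with `page` stepping through the pagination shown at the bottom of this page.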

Requested Article:

Modular active curiosity-driven discovery of tool use
Sébastien Forestier, Pierre‐Yves Oudeyer
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2016), pp. 3965-3972
Open Access | Times Cited: 51

Showing 1-25 of 51 citing articles:

GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms
Cédric Colas, Olivier Sigaud, Pierre‐Yves Oudeyer
arXiv (Cornell University) (2018)
Open Access | Times Cited: 85

Curiosity Driven Exploration of Learned Disentangled Goal Spaces
Adrien Laversanne-Finot, Alexandre Péré, Pierre‐Yves Oudeyer
arXiv (Cornell University) (2018)
Open Access | Times Cited: 59

Language as a Cognitive Tool to Imagine Goals in Curiosity-Driven Exploration
Cédric Colas, Tristan Karch, Nicolas Lair, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 53

Autotelic Agents with Intrinsically Motivated Goal-Conditioned Reinforcement Learning: A Short Survey
Cédric Colas, Tristan Karch, Olivier Sigaud, et al.
Journal of Artificial Intelligence Research (2022) Vol. 74, pp. 1159-1199
Open Access | Times Cited: 35

Goal-Conditioned Reinforcement Learning: Problems and Solutions
Minghuan Liu, Menghui Zhu, Weinan Zhang
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (2022), pp. 5502-5511
Open Access | Times Cited: 29

Ray Interference: a Source of Plateaus in Deep Reinforcement Learning
Tom Schaul, Diana Borsa, Joseph Modayil, et al.
arXiv (Cornell University) (2019)
Open Access | Times Cited: 38

From exploration to control: Learning object manipulation skills through novelty search and local adaptation
Seungsu Kim, Alexandre Coninx, Stéphane Doncieux
Robotics and Autonomous Systems (2020) Vol. 136, pp. 103710-103710
Open Access | Times Cited: 32

Computational Theories of Curiosity-Driven Learning
Pierre‐Yves Oudeyer
(2018)
Open Access | Times Cited: 37

Computational Theories of Curiosity-Driven Learning
Pierre‐Yves Oudeyer
arXiv (Cornell University) (2018)
Open Access | Times Cited: 35

Prerequisites for an Artificial Self
Verena V. Hafner, Pontus Loviken, Antonio Pico Villalpando, et al.
Frontiers in Neurorobotics (2020) Vol. 14
Open Access | Times Cited: 31

CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning
Cédric Colas, Pierre F. Fournier, Olivier Sigaud, et al.
HAL (Le Centre pour la Communication Scientifique Directe) (2019)
Open Access | Times Cited: 30

CURIOUS: Intrinsically Motivated Multi-Task Multi-Goal Reinforcement Learning
Cédric Colas, Pierre F. Fournier, Olivier Sigaud, et al.
HAL (Le Centre pour la Communication Scientifique Directe) (2018)
Open Access | Times Cited: 29

Meta-learning curiosity algorithms
Ferran Alet, Martin F. Schneider, Tomás Lozano‐Pérez, et al.
International Conference on Learning Representations (2020)
Closed Access | Times Cited: 17

Curious Hierarchical Actor-Critic Reinforcement Learning
Frank Röder, Manfred Eppe, Phuong D. H. Nguyen, et al.
Lecture notes in computer science (2020), pp. 408-419
Closed Access | Times Cited: 17

Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey
Cédric Colas, Tristan Karch, Olivier Sigaud, et al.
HAL (Le Centre pour la Communication Scientifique Directe) (2021)
Open Access | Times Cited: 15

Intrinsically Motivated Discovery of Diverse Patterns in Self-Organizing Systems
Chris Reinke, Mayalen Etcheverry, Pierre‐Yves Oudeyer
arXiv (Cornell University) (2019)
Open Access | Times Cited: 12

Robot tool use: A survey
Meiying Qin, Jake Brawer, Brian Scassellati
Frontiers in Robotics and AI (2023) Vol. 9
Open Access | Times Cited: 4

Advancing Self-Determination Theory With Computational Intrinsic Motivation: The Case of Competence
Erik Matias Lintunen, Nadia M. Ady, Christian Guckelsberger, et al.
(2024)
Open Access | Times Cited: 1

Latent Learning Progress Drives Autonomous Goal Selection in Human Reinforcement Learning
Gaia Molinaro, Cédric Colas, Pierre-Yves Oudeyer, et al.
(2024)
Open Access | Times Cited: 1

GLIB: Efficient Exploration for Relational Model-Based Reinforcement Learning via Goal-Literal Babbling
Rohan Chitnis, Tom Silver, Joshua B. Tenenbaum, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2021) Vol. 35, Iss. 13, pp. 11782-11791
Open Access | Times Cited: 10

Efficient Online Interest-Driven Exploration for Developmental Robots
Rania Rayyes, Heiko Donat, Jochen J. Steil
IEEE Transactions on Cognitive and Developmental Systems (2020) Vol. 14, Iss. 4, pp. 1367-1377
Closed Access | Times Cited: 9

Online-learning and planning in high dimensions with finite element goal babbling
Pontus Loviken, Nikolas Hemion
(2017)
Closed Access | Times Cited: 8

A target-driven visual navigation method based on intrinsic motivation exploration and space topological cognition
Xiaogang Ruan, Peng Li, Xiaoqing Zhu, et al.
Scientific Reports (2022) Vol. 12, Iss. 1
Open Access | Times Cited: 5

From motor to visually guided bimanual affordance learning
Martí Sánchez-Fibla, Sébastien Forestier, Clément Moulin-Frier, et al.
Adaptive Behavior (2019) Vol. 28, Iss. 2, pp. 63-78
Closed Access | Times Cited: 6

Interest-Driven Exploration With Observational Learning for Developmental Robots
Rania Rayyes, Heiko Donat, Jochen J. Steil, et al.
IEEE Transactions on Cognitive and Developmental Systems (2021) Vol. 15, Iss. 2, pp. 373-384
Closed Access | Times Cited: 6

Page 1 - Next Page
