OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scholarly papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to that article's best Open Access location. Clicking a citation count will open this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
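If you'd rather pull a listing like this programmatically, the public OpenAlex API exposes the same data. Below is a minimal sketch using the `cites:` filter on the `/works` endpoint; the work ID shown is a placeholder, since the actual OpenAlex ID of the requested article does not appear on this page.

```python
import requests

# Minimal sketch: fetch one page of works citing a given OpenAlex work.
# WORK_ID is a placeholder, not the real ID of the requested article.
BASE = "https://api.openalex.org/works"
WORK_ID = "W0000000000"

params = {
    "filter": f"cites:{WORK_ID}",   # works that cite the given work
    "sort": "cited_by_count:desc",  # most-cited citing articles first
    "per-page": 25,                 # 25 results per page, as on this listing
    "page": 1,
}

resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()

for work in resp.json()["results"]:
    oa = work.get("best_oa_location") or {}
    print(
        work["display_name"],
        work["publication_year"],
        work["cited_by_count"],
        oa.get("landing_page_url", "Closed Access"),
        sep=" | ",
    )
```

The `best_oa_location` field is what the "best Open Access location" links on this page point to, and the `page` parameter drives the pagination options at the bottom.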

Requested Article:

Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates
Shixiang Gu, Ethan Holly, Timothy Lillicrap, et al.
(2017), pp. 3389-3396
Open Access | Times Cited: 1348

Showing 1-25 of 1348 citing articles:

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, et al.
arXiv (Cornell University) (2018)
Open Access | Times Cited: 2897

Applications of Deep Reinforcement Learning in Communications and Networking: A Survey
Nguyen Cong Luong, Dinh Thai Hoang, Shimin Gong, et al.
IEEE Communications Surveys & Tutorials (2019) Vol. 21, Iss. 4, pp. 3133-3174
Open Access | Times Cited: 1592

Soft Actor-Critic Algorithms and Applications
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, et al.
arXiv (Cornell University) (2018)
Open Access | Times Cited: 1587

Learning dexterous in-hand manipulation
OpenAI: Marcin Andrychowicz, Bowen Baker, Maciek Chociej, et al.
The International Journal of Robotics Research (2019) Vol. 39, Iss. 1, pp. 3-20
Open Access | Times Cited: 1368

Deep Multimodal Learning: A Survey on Recent Advances and Trends
Dhanesh Ramachandram, Graham W. Taylor
IEEE Signal Processing Magazine (2017) Vol. 34, Iss. 6, pp. 96-108
Closed Access | Times Cited: 791

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 733

Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation
Lei Tai, Giuseppe Paolo, Ming Liu
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2017)
Open Access | Times Cited: 724

Solving Rubik's Cube with a Robot Hand
OpenAI, Ilge Akkaya, Marcin Andrychowicz, et al.
arXiv (Cornell University) (2019)
Open Access | Times Cited: 655

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, et al.
(2018)
Open Access | Times Cited: 648

Overcoming Exploration in Reinforcement Learning with Demonstrations
Ashvin Nair, Bob McGrew, Marcin Andrychowicz, et al.
(2018), pp. 6292-6299
Open Access | Times Cited: 638

A Survey of Deep Learning: Platforms, Applications and Emerging Research Trends
William G. Hatcher, Wei Yu
IEEE Access (2018) Vol. 6, pp. 24411-24432
Open Access | Times Cited: 557

Learning to Drive in a Day
Alex Kendall, Jeffrey Hawke, David M. Janz, et al.
International Conference on Robotics and Automation (ICRA) (2019), pp. 8248-8254
Closed Access | Times Cited: 476

Q-Learning Algorithms: A Comprehensive Classification and Applications
Beakcheol Jang, Myeonghwi Kim, Gaspard Harerimana, et al.
IEEE Access (2019) Vol. 7, pp. 133653-133667
Open Access | Times Cited: 406

How to train your robot with deep reinforcement learning: lessons we have learned
Julian Ibarz, Jie Tan, Chelsea Finn, et al.
The International Journal of Robotics Research (2021) Vol. 40, Iss. 4-5, pp. 698-721
Open Access | Times Cited: 380

Cobot programming for collaborative industrial tasks: An overview
Shirine El Zaatari, Mohamed Marei, Weidong Li, et al.
Robotics and Autonomous Systems (2019) Vol. 116, pp. 162-180
Open Access | Times Cited: 361

Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Toward 6G
Mojtaba Vaezi, Amin Azari, Saeed R. Khosravirad, et al.
IEEE Communications Surveys & Tutorials (2022) Vol. 24, Iss. 2, pp. 1117-1174
Closed Access | Times Cited: 345

Residual Reinforcement Learning for Robot Control
Tobias Johannink, Shikhar Bahl, Ashvin Nair, et al.
International Conference on Robotics and Automation (ICRA) (2019), pp. 6023-6029
Open Access | Times Cited: 336

Ray: a distributed framework for emerging AI applications
Philipp Moritz, Robert Nishihara, Stephanie Wang, et al.
arXiv (Cornell University) (2018), pp. 561-577
Closed Access | Times Cited: 326

D4RL: Datasets for Deep Data-Driven Reinforcement Learning
Justin Fu, Aviral Kumar, Ofir Nachum, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 318

Deep Reinforcement Learning for Cyber Security
Thanh Thi Nguyen, Vijay Janapa Reddi
IEEE Transactions on Neural Networks and Learning Systems (2021) Vol. 34, Iss. 8, pp. 3779-3795
Open Access | Times Cited: 316

Challenges of real-world reinforcement learning: definitions, benchmarks and analysis
Gabriel Dulac-Arnold, Nir Levine, Daniel J. Mankowitz, et al.
Machine Learning (2021) Vol. 110, Iss. 9, pp. 2419-2468
Open Access | Times Cited: 314

Transfer Learning in Deep Reinforcement Learning: A Survey
Zhuangdi Zhu, Kaixiang Lin, Anil K. Jain, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2023) Vol. 45, Iss. 11, pp. 13344-13362
Open Access | Times Cited: 293

Deep reinforcement learning for high precision assembly tasks
Tadanobu Inoue, Giovanni De Magistris, Asim Munawar, et al.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2017)
Open Access | Times Cited: 276

Data-Driven Machine Learning in Environmental Pollution: Gains and Problems
Xian Liu, Dawei Lü, Aiqian Zhang, et al.
Environmental Science & Technology (2022) Vol. 56, Iss. 4, pp. 2124-2133
Closed Access | Times Cited: 276

Page 1 - Next Page
