OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to that article's "best Open Access location". Clicking the citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
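If you'd rather pull a listing like this programmatically, the same data is exposed by the OpenAlex REST API: filtering the /works endpoint with cites:<work id> returns the citing articles, and the page / per-page parameters correspond to the pagination at the bottom of this page. Below is a minimal Python sketch assuming the requests package; the work ID is a placeholder, not the actual OpenAlex ID of the requested D4RL article.

    import requests

    WORK_ID = "W0000000000"  # placeholder: substitute the OpenAlex ID of the requested article

    params = {
        "filter": f"cites:{WORK_ID}",   # OpenAlex filter for works that cite WORK_ID
        "sort": "cited_by_count:desc",  # most-cited first
        "per-page": 25,                 # 25 results per page, as in this listing
        "page": 1,
    }
    resp = requests.get("https://api.openalex.org/works", params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()

    print(f"Showing {len(data['results'])} of {data['meta']['count']} citing articles")
    for work in data["results"]:
        access = "Open Access" if work.get("open_access", {}).get("is_oa") else "Closed Access"
        print(work["display_name"])
        print(f"  {access} | Times Cited: {work['cited_by_count']}")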

Requested Article:

D4RL: Datasets for Deep Data-Driven Reinforcement Learning
Justin Fu, Aviral Kumar, Ofir Nachum, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 318

Showing 1-25 of 318 citing articles:

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 733

Conservative Q-Learning for Offline Reinforcement Learning
Aviral Kumar, Aurick Zhou, George Tucker, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 456

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen, Kevin Lü, Aravind Rajeswaran, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 327

MOReL: Model-Based Offline Reinforcement Learning
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 147

A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems
Rafael Figueiredo Prudencio, Marcos R. O. A. Máximo, Esther Luna Colombini
IEEE Transactions on Neural Networks and Learning Systems (2023) Vol. 35, Iss. 8, pp. 10237-10257
Open Access | Times Cited: 132

Offline Reinforcement Learning with Implicit Q-Learning
Ilya Kostrikov, Ashvin Nair, Sergey Levine
arXiv (Cornell University) (2021)
Open Access | Times Cited: 106

Dataset Distillation by Matching Training Trajectories
George Cazenavette, Tongzhou Wang, Antonio Torralba, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022), pp. 10708-10717
Open Access | Times Cited: 104

A Survey of Zero-shot Generalisation in Deep Reinforcement Learning
Robert Kirk, Amy Zhang, Edward Grefenstette, et al.
Journal of Artificial Intelligence Research (2023) Vol. 76, pp. 201-264
Open Access | Times Cited: 72

Enhancing Deep Reinforcement Learning: A Tutorial on Generative Diffusion Models in Network Optimization
Hongyang Du, Ruichen Zhang, Yinqiu Liu, et al.
IEEE Communications Surveys & Tutorials (2024) Vol. 26, Iss. 4, pp. 2611-2646
Open Access | Times Cited: 25

QPLEX: Duplex Dueling Multi-Agent Q-Learning
Jianhao Wang, Zhizhou Ren, Terry Z. Liu, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 130

Rearrangement: A Challenge for Embodied AI
Dhruv Batra, Anne Lynn S. Chang, Sonia Chernova, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 104

AWAC: Accelerating Online Reinforcement Learning with Offline Datasets
Ashvin Nair, Murtaza Dalal, Abhishek Gupta, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 62

A Survey of Generalisation in Deep Reinforcement Learning
Robert Kirk, Amy Zhang, Edward Grefenstette, et al.
arXiv (Cornell University) (2021)
Closed Access | Times Cited: 58

The Surprising Effectiveness of Representation Learning for Visual Imitation
Jyothish Pari, Nur Muhammad Mahi Shafiullah, Sridhar Pandian Arunachalam, et al.
(2022)
Open Access | Times Cited: 42

Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks
Soroush Nasiriany, Huihan Liu, Yuke Zhu
2022 International Conference on Robotics and Automation (ICRA) (2022), pp. 7477-7484
Open Access | Times Cited: 41

Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism
Paria Rashidinejad, Banghua Zhu, Cong Ma, et al.
IEEE Transactions on Information Theory (2022) Vol. 68, Iss. 12, pp. 8156-8196
Open Access | Times Cited: 40

Affordances from Human Videos as a Versatile Representation for Robotics
Shikhar Bahl, Russell Mendonca, Lili Chen, et al.
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023)
Open Access | Times Cited: 28

Machine learning meets advanced robotic manipulation
Saeid Nahavandi, Roohallah Alizadehsani, Darius Nahavandi, et al.
Information Fusion (2024) Vol. 105, Art. no. 102221
Open Access | Times Cited: 10

Survey on Large Language Model-Enhanced Reinforcement Learning: Concept, Taxonomy, and Methods
Yuji Cao, Huan Zhao, Yuheng Cheng, et al.
IEEE Transactions on Neural Networks and Learning Systems (2024), pp. 1-21
Open Access | Times Cited: 10

What Matters in Learning from Offline Human Demonstrations for Robot Manipulation
Ajay Mandlekar, Danfei Xu, Josiah Wong, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 52

DeepThermal: Combustion Optimization for Thermal Power Generating Units Using Offline Reinforcement Learning
Xianyuan Zhan, Haoran Xu, Yue Zhang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2022) Vol. 36, Iss. 4, pp. 4680-4688
Open Access | Times Cited: 31

Offline Pre-trained Multi-agent Decision Transformer
Linghui Meng, Muning Wen, Chenyang Le, et al.
Deleted Journal (2023) Vol. 20, Iss. 2, pp. 233-248
Open Access | Times Cited: 18

Reinforcement learning and bandits for speech and language processing: Tutorial, review and outlook
Baihan Lin
Expert Systems with Applications (2023) Vol. 238, Art. no. 122254
Open Access | Times Cited: 17

Hyperparameter Selection for Offline Reinforcement Learning
Tom Le Paine, Cosmin Păduraru, Andrea Michi, et al.
arXiv (Cornell University) (2020)
Closed Access | Times Cited: 40

d3rlpy: An Offline Deep Reinforcement Learning Library
Takuma Seno, Michita Imai
arXiv (Cornell University) (2021)
Open Access | Times Cited: 33

Page 1 - Next Page
