
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and we hope you find this listing of citing articles useful!
Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the article's "best Open Access location". Clicking a citation count opens the citing-articles listing for that article. Basic pagination options appear at the bottom of the page.
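A listing like this can also be retrieved programmatically from the OpenAlex API, which exposes citing works via the `filter=cites:` parameter on the `/works` endpoint. A minimal sketch using only the Python standard library; the work ID passed in the example is a placeholder, not the real OpenAlex ID of the CALVIN paper (you would resolve that first by searching `/works` for the title):

```python
from urllib.parse import urlencode

# Base endpoint of the OpenAlex works API.
BASE = "https://api.openalex.org/works"

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build the API URL listing works that cite the given OpenAlex work ID."""
    params = {
        "filter": f"cites:{work_id}",       # works whose references include work_id
        "page": page,                        # pagination, matching "Showing 1-25 of 49"
        "per-page": per_page,
        "sort": "cited_by_count:desc",       # most-cited citing articles first
    }
    return f"{BASE}?{urlencode(params)}"

# Placeholder ID for illustration only:
print(citing_works_url("W0000000000"))
```

Fetching that URL returns a JSON object whose `results` array holds one record per citing article, including the title, authorships, venue, and open-access information shown in the listing below.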
Requested Article:
CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks
Oier Mees, Lukás Hermann, Erick Rosete-Beas, et al.
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 3, pp. 7327-7334
Open Access | Times Cited: 49
Showing 1-25 of 49 citing articles:
Code as Policies: Language Model Programs for Embodied Control
Jacky Liang, Wenlong Huang, Fei Xia, et al.
(2023)
Open Access | Times Cited: 195
Visual Language Maps for Robot Navigation
Chenguang Huang, Oier Mees, Andy Zeng, et al.
(2023), pp. 10608-10615
Open Access | Times Cited: 114
Text2Motion: from natural language instructions to feasible plans
Kevin Lin, Christopher Agia, Toki Migimatsu, et al.
Autonomous Robots (2023) Vol. 47, Iss. 8, pp. 1345-1365
Closed Access | Times Cited: 72
Grounding Language with Visual Affordances over Unstructured Data
Oier Mees, Jessica Borja-Diaz, Wolfram Burgard
(2023), pp. 11576-11582
Open Access | Times Cited: 31
A vision-language-guided robotic action planning approach for ambiguity mitigation in human–robot collaborative manufacturing
Junming Fan, Pai Zheng
Journal of Manufacturing Systems (2024) Vol. 74, pp. 1009-1018
Closed Access | Times Cited: 9
Benchmark Evaluations, Applications, and Challenges of Large Vision Language Models: A Survey
Zongxia Li, Xiyang Wu, Hongyang Du, et al.
(2025)
Open Access | Times Cited: 1
What Matters in Language Conditioned Robotic Imitation Learning Over Unstructured Data
Oier Mees, Lukás Hermann, Wolfram Burgard
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 4, pp. 11205-11212
Open Access | Times Cited: 24
Goal-Conditioned Imitation Learning using Score-based Diffusion Policies
Moritz Reuss, Maximilian Xiling Li, Xiaogang Jia, et al.
(2023)
Open Access | Times Cited: 14
Prompt, Plan, Perform: LLM-based Humanoid Control via Quantized Imitation Learning
Jingkai Sun, Qiang Zhang, Yiqun Duan, et al.
(2024), pp. 16236-16242
Open Access | Times Cited: 4
Real-world robot applications of foundation models: a review
Kento Kawaharazuka, Tatsuya Matsushima, Andrew Gambardella, et al.
Advanced Robotics (2024) Vol. 38, Iss. 18, pp. 1232-1254
Open Access | Times Cited: 4
FurnitureBench: Reproducible real-world benchmark for long-horizon complex manipulation
Minho Heo, Yea Seol Lee, Doohyun Lee, et al.
The International Journal of Robotics Research (2025)
Closed Access
Hierarchical Language-Conditioned Robot Learning with Vision-Language Models
Jingyao Tang, Baoping Cheng, Gang Zhang, et al.
Communications in Computer and Information Science (2025), pp. 167-179
Closed Access
Robot Learning in the Era of Foundation Models: A Survey
Xuan Xiao, Jiahang Liu, Zhipeng Wang, et al.
Neurocomputing (2025), pp. 129963-129963
Closed Access
ARNOLD: A Benchmark for Language-Grounded Task Learning With Continuous States in Realistic 3D Scenes
Ran Gong, Jiangyong Huang, Yizhou Zhao, et al.
IEEE/CVF International Conference on Computer Vision (ICCV) (2023), pp. 20426-20438
Open Access | Times Cited: 10
FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon Complex Manipulation
Minho Heo, Youngwoon Lee, Doohyun Lee, et al.
(2023)
Open Access | Times Cited: 9
Learning Language-Conditioned Deformable Object Manipulation with Graph Dynamics
Yuhong Deng, Kai Mo, Chongkun Xia, et al.
(2024), pp. 7508-7514
Open Access | Times Cited: 3
Unlocking Robotic Autonomy: A Survey on the Applications of Foundation Models
Dae-Sung Jang, Doo-Hyun Cho, Woo-Cheol Lee, et al.
International Journal of Control Automation and Systems (2024) Vol. 22, Iss. 8, pp. 2341-2384
Closed Access | Times Cited: 2
Ground4Act: Leveraging visual-language model for collaborative pushing and grasping in clutter
Yuxiang Yang, Jiangtao Guo, Zilong Li, et al.
Image and Vision Computing (2024), pp. 105280-105280
Open Access | Times Cited: 2
A survey of Semantic Reasoning frameworks for robotic systems
Weiyu Liu, Angel Daruna, Maithili Patel, et al.
Robotics and Autonomous Systems (2022) Vol. 159, pp. 104294-104294
Open Access | Times Cited: 11
Learning Video-Conditioned Policies for Unseen Manipulation Tasks
Elliot Chane-Sane, Cordelia Schmid, Ivan Laptev
(2023)
Open Access | Times Cited: 6
Transformer-Based Reinforcement Learning Methods in Intelligent Decision-Making: A Survey
Weilin Yuan, Jiaxing Chen, Shaofei Chen, et al.
Frontiers of Information Technology & Electronic Engineering (2024) Vol. 25, Iss. 6, pp. 763-790
Closed Access | Times Cited: 1
CLIP feature-based randomized control using images and text for multiple tasks and robots
Kazuki Shibata, Hideki Deguchi, Shun Taguchi
Advanced Robotics (2024) Vol. 38, Iss. 15, pp. 1066-1078
Open Access | Times Cited: 1
SPRINT: Scalable Policy Pre-Training via Language Instruction Relabeling
Jesse Zhang, Karl Pertsch, Jiahui Zhang, et al.
(2024), pp. 9168-9175
Open Access | Times Cited: 1
Open X-Embodiment: Robotic Learning Datasets and RT-X Models
A. O'Neill, Abdul Rehman, Abhiram Maddukuri, et al.
(2024), pp. 6892-6903
Closed Access | Times Cited: 1
Generate Subgoal Images Before Act: Unlocking the Chain-of-Thought Reasoning in Diffusion Model for Robot Manipulation with Multimodal Prompts
Fei Ni, Jianye Hao, Shiguang Wu, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024) Vol. 33, pp. 13991-14000
Closed Access | Times Cited: 1