OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to that article's record in CrossRef. Clicking an Open Access link takes you to the article's "best Open Access location". Clicking a citation count opens the citing-article listing for that article. Lastly, you'll find basic pagination options at the bottom of the page.
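
As a rough illustration, the sketch below shows how a listing like this one could be retrieved programmatically from the OpenAlex API in Python. The work ID, page size, and sort order are placeholder assumptions for the example, not values taken from this page.

    import requests

    # Hypothetical OpenAlex work ID of the requested article (placeholder).
    WORK_ID = "W0000000000"

    # Ask OpenAlex for works that cite the given work, 25 per page,
    # most-cited first (mirroring the ordering of a listing like this).
    response = requests.get(
        "https://api.openalex.org/works",
        params={
            "filter": f"cites:{WORK_ID}",
            "per-page": 25,
            "sort": "cited_by_count:desc",
        },
        timeout=30,
    )
    response.raise_for_status()

    for work in response.json()["results"]:
        title = work.get("display_name")
        times_cited = work.get("cited_by_count")
        oa = work.get("best_oa_location") or {}
        print(f"{title} | Times Cited: {times_cited} | OA: {oa.get('landing_page_url')}")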

Requested Article:

Grounding Language with Visual Affordances over Unstructured Data
Oier Mees, Jessica Borja-Diaz, Wolfram Burgard
(2023), pp. 11576-11582
Open Access | Times Cited: 31

Showing 1-25 of 31 citing articles:

Visual Language Maps for Robot Navigation
Chenguang Huang, Oier Mees, Andy Zeng, et al.
(2023), pp. 10608-10615
Open Access | Times Cited: 114

TidyBot: personalized robot assistance with large language models
Jimmy Wu, Rika Antonova, Adam Kan, et al.
Autonomous Robots (2023) Vol. 47, Iss. 8, pp. 1087-1102
Closed Access | Times Cited: 59

ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application
Naoki Wake, Atsushi Kanehira, Kazuhiro Sasabuchi, et al.
IEEE Access (2023) Vol. 11, pp. 95060-95078
Open Access | Times Cited: 47

Foundation models in robotics: Applications, challenges, and the future
Roya Firoozi, Johnathan Tucker, Stephen Tian, et al.
The International Journal of Robotics Research (2024)
Closed Access | Times Cited: 22

IndVisSGG: VLM-based scene graph generation for industrial spatial intelligence
Zuoxu Wang, Zhijie Yan, Shufei Li, et al.
Advanced Engineering Informatics (2025) Vol. 65, pp. 103107-103107
Closed Access | Times Cited: 5

TidyBot: Personalized Robot Assistance with Large Language Models
Jimmy Wu, Rika Antonova, Adam Kan, et al.
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2023)
Open Access | Times Cited: 35

Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models
Ted Xiao, Harris Chan, Pierre Sermanet, et al.
(2023)
Open Access | Times Cited: 17

RoboVQA: Multimodal Long-Horizon Reasoning for Robotics
Pierre Sermanet, Tianli Ding, Jeffrey Zhao, et al.
(2024), pp. 645-652
Open Access | Times Cited: 7

Prompt, Plan, Perform: LLM-based Humanoid Control via Quantized Imitation Learning
Jingkai Sun, Qiang Zhang, Yiqun Duan, et al.
(2024), pp. 16236-16242
Open Access | Times Cited: 4

Real-world robot applications of foundation models: a review
Kento Kawaharazuka, Tatsuya Matsushima, Andrew Gambardella, et al.
Advanced Robotics (2024) Vol. 38, Iss. 18, pp. 1232-1254
Open Access | Times Cited: 4

Hierarchical Language-Conditioned Robot Learning with Vision-Language Models
Jingyao Tang, Baoping Cheng, Gang Zhang, et al.
Communications in Computer and Information Science (2025), pp. 167-179
Closed Access

Referring Expression Comprehension in semi-structured human-robot interaction
Tianlei Jin, Qiwei Meng, Gege Zhang, et al.
Expert Systems with Applications (2025), pp. 126965-126965
Closed Access

Embodied large language models enable robots to complete complex tasks in unpredictable environments
Ruaridh Mon-Williams, Gen Li, Ran Long, et al.
Nature Machine Intelligence (2025)
Open Access

A Survey of Robot Intelligence with Large Language Models
Hyeongyo Jeong, Haechan Lee, Changwon Kim, et al.
Applied Sciences (2024) Vol. 14, Iss. 19, pp. 8868-8868
Open Access | Times Cited: 4

Lp-slam: language-perceptive RGB-D SLAM framework exploiting large language model
Weiyi Zhang, Yushi Guo, Liting Niu, et al.
Complex & Intelligent Systems (2024) Vol. 10, Iss. 4, pp. 5391-5409
Open Access | Times Cited: 3

Audio Visual Language Maps for Robot Navigation
Chenguang Huang, Oier Mees, Andy Zeng, et al.
Springer Proceedings in Advanced Robotics (2024), pp. 105-117
Closed Access | Times Cited: 2

One-Shot Open Affordance Learning with Foundation Models
Gen Li, Deqing Sun, Laura Sevilla-Lara, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024) Vol. 33, pp. 3086-3096
Closed Access | Times Cited: 2

FM-Loc: Using Foundation Models for Improved Vision-Based Localization
Reihaneh Mirjalili, Michael Krawez, Wolfram Burgard
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2023)
Open Access | Times Cited: 6

LINGO-Space: Language-Conditioned Incremental Grounding for Space
Do-Hyun Kim, Nayoung Oh, Deokmin Hwang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 9, pp. 10314-10322
Open Access | Times Cited: 1

Multimodal Attention-Based Instruction-Following Part-Level Affordance Grounding
Qu Wen, Lulu Guo, Jian Cui, et al.
Applied Sciences (2024) Vol. 14, Iss. 11, pp. 4696-4696
Open Access | Times Cited: 1

CLIP feature-based randomized control using images and text for multiple tasks and robots
Kazuki Shibata, Hideki Deguchi, Shun Taguchi
Advanced Robotics (2024) Vol. 38, Iss. 15, pp. 1066-1078
Open Access | Times Cited: 1

Language-Conditioned Affordance-Pose Detection in 3D Point Clouds
Toan Nguyen, Minh Nhat Vu, Baoru Huang, et al.
(2024), pp. 3071-3078
Open Access | Times Cited: 1

Open X-Embodiment: Robotic Learning Datasets and RT-X Models
A. O'Neill, Abdul Rehman, Abhiram Maddukuri, et al.
(2024), pp. 6892-6903
Closed Access | Times Cited: 1

Developmental Scaffolding with Large Language Models
Batuhan Celik, Alper Ahmetoğlu, Emre Uğur, et al.
(2023), pp. 396-402
Closed Access | Times Cited: 2

Page 1 - Next Page
