
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
Clicking an article title takes you to the article as listed in CrossRef; clicking an Open Access link takes you to its "best Open Access location". Clicking a citation count opens this same listing for that article. Basic pagination options are at the bottom of the page.
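A listing like this can also be reproduced programmatically: OpenAlex exposes a public REST API whose `cites:` filter returns the works citing a given article. The sketch below (Python, standard library only) builds that query and parses one page of results; the work ID passed in is a placeholder, not the actual OpenAlex ID of the requested article, which you would look up first.

```python
# Minimal sketch of querying the OpenAlex API for citing works.
# The work ID argument is a placeholder; substitute the real
# OpenAlex ID (a "W..." identifier) of the article of interest.
import json
import urllib.request


def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build the OpenAlex query URL for works that cite `work_id`."""
    return (
        "https://api.openalex.org/works"
        f"?filter=cites:{work_id}&page={page}&per-page={per_page}"
    )


def fetch_citing_works(work_id: str, page: int = 1) -> list:
    """Return (title, year, times_cited) for one page of citing works."""
    with urllib.request.urlopen(citing_works_url(work_id, page), timeout=30) as resp:
        data = json.load(resp)
    return [
        (w["display_name"], w.get("publication_year"), w["cited_by_count"])
        for w in data["results"]
    ]
```

With 25 results per page, page 2 corresponds to the "26-50 of 114" slice shown below.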
Requested Article:
Visual Language Maps for Robot Navigation
Chenguang Huang, Oier Mees, Andy Zeng, et al.
(2023), pp. 10608-10615
Open Access | Times Cited: 114
Showing 26-50 of 114 citing articles:
RoboHop: Segment-based Topological Map Representation for Open-World Visual Navigation
Sourav Garg, Krishan Rana, Mehdi Hosseinzadeh, et al.
(2024), pp. 4090-4097
Open Access | Times Cited: 4
Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation
Yinpei Dai, Run Peng, Sikai Li, et al.
(2024), pp. 3296-3303
Open Access | Times Cited: 4
GG-LLM: Geometrically Grounding Large Language Models for Zero-shot Human Activity Forecasting in Human-Aware Task Planning
Moritz A. Graule, Volkan Isler
(2024), pp. 568-574
Open Access | Times Cited: 4
Discuss Before Moving: Visual Language Navigation via Multi-expert Discussions
Yuxing Long, Xiaoqi Li, Wenzhe Cai, et al.
(2024), pp. 17380-17387
Open Access | Times Cited: 4
Perspective Chapter: Advanced Environment Modelling Techniques for Mobile Manipulators
Noelia Fernandez, Gonzalo Espinoza, Alberto Méndez, et al.
IntechOpen eBooks (2025)
Closed Access
Free-Form Instruction Guided Robotic Navigation Path Planning with Large Vision-Language Model
Yinwei Du, Chengzhong Wu, Mingtao Feng, et al.
Lecture Notes in Computer Science (2025), pp. 381-396
Closed Access
Enhancing Large Language Models with RAG for Visual Language Navigation in Continuous Environments
Xiaoan Bao, Zhiqiang Lv, Biao Wu
Electronics (2025) Vol. 14, Iss. 5, pp. 909-909
Open Access
Comparison of Neural Network Approaches for Parsing Texts of Robot Control Commands in Natural Language
Maksim Skorokhodov, Artem Gryaznov, Vladislav Latalin, et al.
Studies in Computational Intelligence (2025), pp. 408-417
Closed Access
Leveraging large language models for autonomous robotic mapping and navigation
José P. Espada, Shi Qiu, Rubén González Crespo, et al.
International Journal of Advanced Robotic Systems (2025) Vol. 22, Iss. 2
Open Access
Cues3D: Unleashing the power of sole NeRF for consistent and unique instances in open-vocabulary 3D panoptic segmentation
Feng Xue, Wenzhuang Xu, Guofeng Zhong, et al.
Information Fusion (2025), pp. 103164-103164
Closed Access
Real-world robot applications of foundation models: a review
Kento Kawaharazuka, Tatsuya Matsushima, Andrew Gambardella, et al.
Advanced Robotics (2024) Vol. 38, Iss. 18, pp. 1232-1254
Open Access | Times Cited: 4
Language meets YOLOv8 for metric monocular SLAM
José Martínez-Carranza, Delia Irazú Hernández-Farías, Leticia Oyuki Rojas-Perez, et al.
Journal of Real-Time Image Processing (2023) Vol. 20, Iss. 4
Closed Access | Times Cited: 9
Learning-To-Rank Approach for Identifying Everyday Objects Using a Physical-World Search Engine
Kanta Kaneda, Shunya Nagashima, Ryosuke Korekata, et al.
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 3, pp. 2088-2095
Open Access | Times Cited: 3
VG4D: Vision-Language Model Goes 4D Video Recognition
Zhichao Deng, Xiangtai Li, Xia Li, et al.
(2024), pp. 5014-5020
Open Access | Times Cited: 3
Language-Conditioned Robotic Manipulation with Fast and Slow Thinking
Minjie Zhu, Yichen Zhu, Jinming Li, et al.
(2024), pp. 4333-4339
Open Access | Times Cited: 3
Language-EXtended Indoor SLAM (LEXIS): A Versatile System for Real-time Visual Scene Understanding
Christina Kassab, Matías Mattamala, Lintong Zhang, et al.
(2024), pp. 15988-15994
Open Access | Times Cited: 3
MaskClustering: View Consensus Based Mask Graph Clustering for Open-Vocabulary 3D Instance Segmentation
Mi Yan, Jiazhao Zhang, Yan Zhu, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 28274-28284
Closed Access | Times Cited: 3
Plan, Posture and Go: Towards Open-Vocabulary Text-to-Motion Generation
Jinpeng Liu, Wenxun Dai, Chunyu Wang, et al.
Lecture Notes in Computer Science (2024), pp. 445-463
Closed Access | Times Cited: 3
Design and Testing of Bionic-Feature-Based 3D-Printed Flexible End-Effectors for Picking Horn Peppers
Lexing Deng, Tianyu Liu, Ping Jiang, et al.
Agronomy (2023) Vol. 13, Iss. 9, pp. 2231-2231
Open Access | Times Cited: 8
Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models
Gabriel Sarch, Yue Wu, Michael J. Tarr, et al.
(2023)
Open Access | Times Cited: 7
Unlocking Robotic Autonomy: A Survey on the Applications of Foundation Models
Dae-Sung Jang, Doo-Hyun Cho, Woo-Cheol Lee, et al.
International Journal of Control, Automation and Systems (2024) Vol. 22, Iss. 8, pp. 2341-2384
Closed Access | Times Cited: 2
Audio Visual Language Maps for Robot Navigation
Chenguang Huang, Oier Mees, Andy Zeng, et al.
Springer Proceedings in Advanced Robotics (2024), pp. 105-117
Closed Access | Times Cited: 2
CARTIER: Cartographic lAnguage Reasoning Targeted at Instruction Execution for Robots
Dmitriy Rivkin, Nikhil Kakodkar, Francois R. Hogan, et al.
(2024), pp. 5615-5621
Open Access | Times Cited: 2
Self-Recovery Prompting: Promptable General Purpose Service Robot System with Foundation Models and Self-Recovery
Mimo Shirasaka, Tatsuya Matsushima, S. Tsunashima, et al.
(2024), pp. 17395-17402
Open Access | Times Cited: 2