
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!
Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, basic pagination options appear at the bottom of the page.
Requested Article:
CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation
Samir Yitzhak Gadre, Mitchell Wortsman, Gabriel Ilharco, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023)
Open Access | Times Cited: 45
Showing 1-25 of 45 citing articles:
ChatGPT for Robotics: Design Principles and Model Abilities
Sai Vemprala, Rogerio Bonatti, Arthur Bucker, et al.
IEEE Access (2024) Vol. 12, pp. 55682-55696
Open Access | Times Cited: 200
ConceptFusion: Open-set multimodal 3D mapping
Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, et al.
(2023)
Open Access | Times Cited: 81
Foundation models in robotics: Applications, challenges, and the future
Roya Firoozi, Johnathan Tucker, Stephen Tian, et al.
The International Journal of Robotics Research (2024)
Closed Access | Times Cited: 22
LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent
Jianing Yang, Xuweiyi Chen, Shengyi Qian, et al.
(2024), pp. 7694-7701
Open Access | Times Cited: 17
Statler: State-Maintaining Language Models for Embodied Reasoning
Takuma Yoneda, Jiading Fang, Peng Li, et al.
(2024), pp. 15083-15091
Open Access | Times Cited: 11
VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation
Naoki Yokoyama, Sehoon Ha, Dhruv Batra, et al.
(2024), pp. 42-48
Open Access | Times Cited: 8
Can an Embodied Agent Find Your “Cat-shaped Mug”? LLM-Based Zero-Shot Object Navigation
Vishnu Sashank Dorbala, James F. Mullen, Dinesh Manocha
IEEE Robotics and Automation Letters (2023) Vol. 9, Iss. 5, pp. 4083-4090
Open Access | Times Cited: 19
SATR: Zero-Shot Semantic Segmentation of 3D Shapes
Ahmed Abdelreheem, Ivan Skorokhodov, Maks Ovsjanikov, et al.
2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), pp. 15120-15133
Open Access | Times Cited: 17
Open-Fusion: Real-time Open-Vocabulary 3D Mapping and Queryable Scene Representation
Kashu Yamazaki, Taisei Hanyu, Khoa Vo, et al.
(2024), pp. 9411-9417
Closed Access | Times Cited: 6
DRAGON: A Dialogue-Based Robot for Assistive Navigation With Visual Language Grounding
Shuijing Liu, Aamir Hasan, Kaiwen Hong, et al.
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 4, pp. 3712-3719
Open Access | Times Cited: 5
Single-view 3D Scene Reconstruction with High-fidelity Shape and Texture
Yixin Chen, Junfeng Ni, Nan Jiang, et al.
2021 International Conference on 3D Vision (3DV) (2024), pp. 1456-1467
Open Access | Times Cited: 5
Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation
Yinpei Dai, Run Peng, Sikai Li, et al.
(2024), pp. 3296-3303
Open Access | Times Cited: 4
Balancing Performance and Efficiency in Zero-Shot Robotic Navigation
Dmytro Kuzmenko, Nadiya Shvai
Communications in Computer and Information Science (2025), pp. 370-381
Closed Access
Robot Learning in the Era of Foundation Models: A Survey
Xuan Xiao, Jiahang Liu, Zhipeng Wang, et al.
Neurocomputing (2025), pp. 129963-129963
Closed Access
CLIP-Loc: Multi-modal Landmark Association for Global Localization in Object-based Maps
Shigemichi Matsuzaki, Takuma Sugino, Kazuhito Tanaka, et al.
(2024), pp. 13673-13679
Open Access | Times Cited: 3
Unlocking Robotic Autonomy: A Survey on the Applications of Foundation Models
Dae-Sung Jang, Doo-Hyun Cho, Woo-Cheol Lee, et al.
International Journal of Control Automation and Systems (2024) Vol. 22, Iss. 8, pp. 2341-2384
Closed Access | Times Cited: 2
Aligning Knowledge Graph with Visual Perception for Object-goal Navigation
Nuo Xu, Wen Wang, Rong Yang, et al.
(2024), pp. 5214-5220
Open Access | Times Cited: 2
CARTIER: Cartographic lAnguage Reasoning Targeted at Instruction Execution for Robots
Dmitriy Rivkin, Nikhil Kakodkar, Francois R. Hogan, et al.
(2024), pp. 5615-5621
Open Access | Times Cited: 2
Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill
Wenzhe Cai, Siyuan Huang, Guangran Cheng, et al.
(2024), pp. 5228-5234
Open Access | Times Cited: 2
FM-Loc: Using Foundation Models for Improved Vision-Based Localization
Reihaneh Mirjalili, Michael Krawez, Wolfram Burgard
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2023)
Open Access | Times Cited: 6
Think Holistically, Act Down-to-Earth: A Semantic Navigation Strategy With Continuous Environmental Representation and Multi-Step Forward Planning
Bolei Chen, Jiaxu Kang, Ping Zhong, et al.
IEEE Transactions on Circuits and Systems for Video Technology (2023) Vol. 34, Iss. 5, pp. 3860-3875
Closed Access | Times Cited: 5
Language-enhanced RNR-Map: Querying Renderable Neural Radiance Field maps with natural language
Francesco Taioli, Federico Cunico, Federico Girella, et al.
(2023), pp. 4671-4676
Open Access | Times Cited: 5
CLIP feature-based randomized control using images and text for multiple tasks and robots
Kazuki Shibata, Hideki Deguchi, Shun Taguchi
Advanced Robotics (2024) Vol. 38, Iss. 15, pp. 1066-1078
Open Access | Times Cited: 1
TDANet: Target-Directed Attention Network For Object-Goal Visual Navigation With Zero-Shot Ability
Shiwei Lian, Feitian Zhang
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 9, pp. 8075-8082
Open Access | Times Cited: 1
MOPA: Modular Object Navigation with PointGoal Agents
Sonia Raychaudhuri, Tommaso Campari, Unnat Jain, et al.
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2024), pp. 5751-5761
Open Access | Times Cited: 1