
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!
Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this same listing for that article. Finally, basic pagination options are at the bottom of the page.
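If you prefer to pull this data programmatically, the same listing can be retrieved from the OpenAlex REST API using a `cites:` filter on the works endpoint. Below is a minimal Python sketch; the work ID is a hypothetical placeholder standing in for the requested article's actual OpenAlex ID, which would need to be looked up first (e.g. via the API's search parameter).

```python
import requests

# Hypothetical OpenAlex work ID for the requested article (placeholder only);
# look up the real ID, e.g. https://api.openalex.org/works?search=...
WORK_ID = "W0000000000"

# Page 2 with 25 results per page corresponds to "Showing 26-50 of 57".
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{WORK_ID}", "per-page": 25, "page": 2},
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    # First three authors, mirroring the "et al." style of this listing.
    authors = ", ".join(
        a["author"]["display_name"] for a in work["authorships"][:3]
    )
    access = "Open Access" if work["open_access"]["is_oa"] else "Closed Access"
    print(work["display_name"])
    print(f"  {authors}, et al. ({work['publication_year']})")
    print(f"  {access} | Times Cited: {work['cited_by_count']}")
```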
Requested Article:
Open-vocabulary Queryable Scene Representations for Real World Planning
Boyuan Chen, Fei Xia, Brian Ichter, et al.
(2023)
Open Access | Times Cited: 57
Showing 26-50 of 57 citing articles:
VG4D: Vision-Language Model Goes 4D Video Recognition
Zhichao Deng, Xiangtai Li, Xia Li, et al.
(2024), pp. 5014-5020
Open Access | Times Cited: 3
Lifelong Robot Learning with Human Assisted Language Planners
Meenal Parakh, Alisha Fong, Anthony Simeonov, et al.
(2024), pp. 523-529
Open Access | Times Cited: 3
Language-EXtended Indoor SLAM (LEXIS): A Versatile System for Real-time Visual Scene Understanding
Christina Kassab, Matías Mattamala, Lintong Zhang, et al.
(2024), pp. 15988-15994
Open Access | Times Cited: 3
Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill
Wenzhe Cai, Siyuan Huang, Guangran Cheng, et al.
(2024), pp. 5228-5234
Open Access | Times Cited: 3
Unlocking Robotic Autonomy: A Survey on the Applications of Foundation Models
Dae-Sung Jang, Doo-Hyun Cho, Woo-Cheol Lee, et al.
International Journal of Control, Automation and Systems (2024) Vol. 22, Iss. 8, pp. 2341-2384
Closed Access | Times Cited: 2
Audio Visual Language Maps for Robot Navigation
Chenguang Huang, Oier Mees, Andy Zeng, et al.
Springer proceedings in advanced robotics (2024), pp. 105-117
Closed Access | Times Cited: 2
CARTIER: Cartographic lAnguage Reasoning Targeted at Instruction Execution for Robots
Dmitriy Rivkin, Nikhil Kakodkar, Francois R. Hogan, et al.
(2024), pp. 5615-5621
Open Access | Times Cited: 2
CAPE: Corrective Actions from Precondition Errors using Large Language Models
Shreyas Sundara Raman, Vanya Cohen, Ifrah Idrees, et al.
(2024), pp. 14070-14077
Open Access | Times Cited: 2
Neural Implicit Vision-Language Feature Fields
Kenneth Blomqvist, Francesco Milano, Jen Jen Chung, et al.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2023)
Open Access | Times Cited: 6
FM-Loc: Using Foundation Models for Improved Vision-Based Localization
Reihaneh Mirjalili, Michael Krawez, Wolfram Burgard
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2023)
Open Access | Times Cited: 6
Language-enhanced RNR-Map: Querying Renderable Neural Radiance Field maps with natural language
Francesco Taioli, Federico Cunico, Federico Girella, et al.
(2023), pp. 4671-4676
Open Access | Times Cited: 5
On the Prospects of Incorporating Large Language Models (LLMs) in Automated Planning and Scheduling (APS)
Vishal Pallagani, Kaushik Roy, Bharath Muppasani, et al.
arXiv (Cornell University) (2024)
Open Access | Times Cited: 1
Text2Reaction: Enabling Reactive Task Planning Using Large Language Models
Zejun Yang, Ning Li, Haitao Wang, et al.
IEEE Robotics and Automation Letters (2024) Vol. 9, Iss. 5, pp. 4003-4010
Closed Access | Times Cited: 1
Correctable Landmark Discovery Via Large Models for Vision-Language Navigation
Bingqian Lin, Yunshuang Nie, Ziming Wei, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2024) Vol. 46, Iss. 12, pp. 8534-8548
Open Access | Times Cited: 1
CLIP feature-based randomized control using images and text for multiple tasks and robots
Kazuki Shibata, Hideki Deguchi, Shun Taguchi
Advanced Robotics (2024) Vol. 38, Iss. 15, pp. 1066-1078
Open Access | Times Cited: 1
Object-Centric Instruction Augmentation for Robotic Manipulation
Junjie Wen, Yichen Zhu, Minjie Zhu, et al.
(2024), pp. 4318-4325
Open Access | Times Cited: 1
Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs
Wenke Xia, Dong Wang, Xincheng Pang, et al.
(2024), pp. 2073-2080
Open Access | Times Cited: 1
Generative AI Agents in Autonomous Machines: A Safety Perspective
Jason Jabbour, Vijay Janapa Reddi
(2024), pp. 1-13
Open Access | Times Cited: 1
Instance-Level Semantic Maps for Vision Language Navigation
Laksh Nanwani, Anmol Agarwal, Kanishk Jain, et al.
(2023)
Open Access | Times Cited: 3
Exophora Resolution of Linguistic Instructions with a Demonstrative based on Real-World Multimodal Information
Akira Oyama, Shoichi Hasegawa, Hikaru Nakagawa, et al.
(2023), pp. 2617-2623
Closed Access | Times Cited: 2
Hierarchical path planning from speech instructions with spatial concept-based topometric semantic mapping
Akira Taniguchi, Shuya Ito, Tadahiro Taniguchi
Frontiers in Robotics and AI (2024) Vol. 11
Open Access
Open-set 3D semantic instance maps for vision language navigation – O3D-SIM
Laksh Nanwani, Kumaraditya Gupta, Aditya Mathur, et al.
Advanced Robotics (2024) Vol. 38, Iss. 19-20, pp. 1378-1391
Open Access
GOI: Find 3D Gaussians of Interest with an Optimizable Open-vocabulary Semantic-space Hyperplane
Yansong Qu, Shaohui Dai, Xinyang Li, et al.
(2024), pp. 5328-5337
Closed Access
Visual Grounding for Object-Level Generalization in Reinforcement Learning
H. B. Jiang, Zongqing Lu
Lecture notes in computer science (2024), pp. 55-72
Closed Access
Scene-Graph ViT: End-to-End Open-Vocabulary Visual Relationship Detection
Tim Salzmann, Markus Ryll, Alex Bewley, et al.
Lecture notes in computer science (2024), pp. 195-213
Closed Access