
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
Clicking an article title takes you to the article as listed in Crossref. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
Requested Article:
Predicting Stable Configurations for Semantic Placement of Novel Objects
Chris Paxton, Chris Xie, Tucker Hermans, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 9
Showing 9 citing articles:
StructFormer: Learning Spatial Structure for Language-Guided Semantic Rearrangement of Novel Objects
Weiyu Liu, Chris Paxton, Tucker Hermans, et al.
2022 International Conference on Robotics and Automation (ICRA) (2022), pp. 6322-6329
Open Access | Times Cited: 28
IFOR: Iterative Flow Minimization for Robotic Object Rearrangement
Ankit Goyal, Arsalan Mousavian, Chris Paxton, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022), pp. 14767-14777
Open Access | Times Cited: 19
StructDiffusion: Language-Guided Creation of Physically-Valid Structures using Unseen Objects
Weiyu Liu, Yilun Du, Tucker Hermans, et al.
(2023)
Open Access | Times Cited: 10
Planning for Multi-Object Manipulation with Graph Neural Network Relational Classifiers
Yixuan Huang, Adam Conkey, Tucker Hermans
(2023), pp. 1822-1829
Open Access | Times Cited: 7
LINGO-Space: Language-Conditioned Incremental Grounding for Space
Do-Hyun Kim, Nayoung Oh, Deokmin Hwang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 9, pp. 10314-10322
Open Access | Times Cited: 1
Learning Perceptual Concepts by Bootstrapping From Human Queries
Andreea Bobu, Chris Paxton, Wei Yang, et al.
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 4, pp. 11260-11267
Open Access | Times Cited: 6
Aligning Robot and Human Representations
Andreea Bobu, Andi Peng, Pulkit Agrawal, et al.
arXiv (Cornell University) (2023)
Open Access | Times Cited: 3
Learning to Reorient Objects With Stable Placements Afforded by Extrinsic Supports
Peng Xu, Hu Cheng, Jiankun Wang, et al.
IEEE Transactions on Automation Science and Engineering (2023) Vol. 21, Iss. 4, pp. 5653-5664
Open Access | Times Cited: 2
SpaTiaL: Monitoring and Planning of Robotic Tasks Using Spatio-Temporal Logic Specifications
Christian Pek, Georg Friedrich Schuppe, Francesco Esposito, et al.
Research Square (Research Square) (2023)
Open Access | Times Cited: 1