OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.
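
The data behind this page is also exposed through the public OpenAlex API, so a listing like the one below can be reproduced programmatically. Here is a minimal sketch in Python using the requests package; it assumes the API's works endpoint and its cites: filter, and the work ID W0000000000 is a hypothetical placeholder rather than the actual identifier of the requested article.

import requests

# Hypothetical placeholder ID -- substitute the real OpenAlex work ID of the article of interest.
WORK_ID = "W0000000000"

# Fetch the work record to read its citation count.
work = requests.get(f"https://api.openalex.org/works/{WORK_ID}", timeout=30).json()
print(work["display_name"], "| Times Cited:", work["cited_by_count"])

# List the first page of citing articles (25 per page, matching this listing).
citing_page = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{WORK_ID}", "per-page": 25, "page": 1},
    timeout=30,
).json()

for citing in citing_page["results"]:
    access = "Open Access" if citing["open_access"]["is_oa"] else "Closed Access"
    print(f"{citing['display_name']} ({citing['publication_year']}) | {access} | Times Cited: {citing['cited_by_count']}")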

Requested Article:

Learning Task-Oriented Grasping From Human Activity Datasets
Mia Kokić, Danica Kragić, Jeannette Bohg
IEEE Robotics and Automation Letters (2020) Vol. 5, Iss. 2, pp. 3352-3359
Open Access | Times Cited: 63

Showing 1-25 of 63 citing articles:

GRAB: A Dataset of Whole-Body Human Grasping of Objects
Omid Taheri, Nima Ghorbani, Michael J. Black, et al.
Lecture Notes in Computer Science (2020), pp. 581-600
Open Access | Times Cited: 229

GanHand: Predicting Human Grasp Affordances in Multi-Object Scenes
Enric Corona, Albert Pumarola, Guillem Alenyà, et al.
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 5030-5040
Open Access | Times Cited: 139

Object Handovers: A Review for Robotics
Valerio Ortenzi, Akansel Cosgun, Tommaso Pardi, et al.
IEEE Transactions on Robotics (2021) Vol. 37, Iss. 6, pp. 1855-1873
Open Access | Times Cited: 122

Deep Learning Approaches to Grasp Synthesis: A Review
R. Newbury, Morris Gu, Lachlan Chumbley, et al.
IEEE Transactions on Robotics (2023) Vol. 39, Iss. 5, pp. 3994-4015
Open Access | Times Cited: 92

Learning Dexterous Grasping with Object-Centric Visual Affordances
Priyanka Mandikal, Kristen Grauman
(2021)
Open Access | Times Cited: 74

CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction
Lixin Yang, Xinyu Zhan, Kailin Li, et al.
2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2021)
Open Access | Times Cited: 72

ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis
Lixin Yang, Kailin Li, Xinyu Zhan, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)
Open Access | Times Cited: 51

CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation
Bowen Wen, Wenzhao Lian, Kostas E. Bekris, et al.
2022 International Conference on Robotics and Automation (ICRA) (2022), pp. 6401-6408
Closed Access | Times Cited: 46

Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks
Soroush Nasiriany, Huihan Liu, Yuke Zhu
2022 International Conference on Robotics and Automation (ICRA) (2022), pp. 7477-7484
Open Access | Times Cited: 41

OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction
Lixin Yang, Kailin Li, Xinyu Zhan, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)
Open Access | Times Cited: 40

GenDexGrasp: Generalizable Dexterous Grasping
Puhao Li, Tengyu Liu, Yuyang Li, et al.
(2023)
Open Access | Times Cited: 23

Grasp’D: Differentiable Contact-Rich Grasp Synthesis for Multi-Fingered Hands
Dylan Turpin, Liquan Wang, Eric Heiden, et al.
Lecture Notes in Computer Science (2022), pp. 201-221
Closed Access | Times Cited: 30

Towards Unconstrained Joint Hand-Object Reconstruction From RGB Videos
Yana Hasson, Gül Varol, Cordelia Schmid, et al.
2021 International Conference on 3D Vision (3DV) (2021), pp. 659-668
Open Access | Times Cited: 40

AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-Shot Interactions
Yian Wang, Ruihai Wu, Kaichun Mo, et al.
Lecture Notes in Computer Science (2022), pp. 90-107
Open Access | Times Cited: 25

GraspGPT: Leveraging Semantic Knowledge From a Large Language Model for Task-Oriented Grasping
Chao Tang, Dehao Huang, Wenqi Ge, et al.
IEEE Robotics and Automation Letters (2023) Vol. 8, Iss. 11, pp. 7551-7558
Open Access | Times Cited: 15

Move as you Say, Interact as you can: Language-Guided Human Motion Generation with Scene Affordance
Zan Wang, Yixin Chen, Baoxiong Jia, et al.
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024) Vol. 1, pp. 433-444
Closed Access | Times Cited: 5

Dexterous hand towards intelligent manufacturing: A review of technologies, trends, and potential applications
Jiexin Zhang, Huan Zhao, Kuangda Chen, et al.
Robotics and Computer-Integrated Manufacturing (2025) Vol. 95, Article 103021
Closed Access

Toward Human-Like Grasp: Dexterous Grasping via Semantic Representation of Object-Hand
Tianqiang Zhu, Rina Wu, Xiangbo Lin, et al.
2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2021), pp. 15721-15731
Closed Access | Times Cited: 32

Custom Grasping: A Region-Based Robotic Grasping Detection Method in Industrial Cyber-Physical Systems
Yuanjun Laili, Zelin Chen, Lei Ren, et al.
IEEE Transactions on Automation Science and Engineering (2022) Vol. 20, Iss. 1, pp. 88-100
Closed Access | Times Cited: 20

Human Hands as Probes for Interactive Object Understanding
Mohit Goyal, Sahil Modi, Rishabh Goyal, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022), pp. 3283-3293
Open Access | Times Cited: 18

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects
Ruihai Wu, Yan Zhao, Kaichun Mo, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 21

Grasp It Like a Pro 2.0: A Data-Driven Approach Exploiting Basic Shape Decomposition and Human Data for Grasping Unknown Objects
Alessandro Palleschi, Franco Angelini, Chiara Gabellieri, et al.
IEEE Transactions on Robotics (2023) Vol. 39, Iss. 5, pp. 4016-4036
Open Access | Times Cited: 7

Learning a Contact Potential Field for Modeling the Hand-Object Interaction
Lixin Yang, Xinyu Zhan, Kailin Li, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2024) Vol. 46, Iss. 8, pp. 5645-5662
Closed Access | Times Cited: 2

Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping
Adithyavairavan Murali, Weiyu Liu, Kenneth Marino, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 15

Toward Human-Like Grasp: Functional Grasp by Dexterous Robotic Hand Via Object-Hand Semantic Representation
Tianqiang Zhu, Rina Wu, Jinglue Hang, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2023) Vol. 45, Iss. 10, pp. 12521-12534
Closed Access | Times Cited: 5

