OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to its "best Open Access location". Clicking a citation count will open this same listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.
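
For anyone who wants to reproduce a listing like this programmatically, here is a minimal Python sketch that queries the public OpenAlex API for works citing a given article, 25 per page, and prints a title / access status / citation count line for each. The WORK_ID value is a placeholder assumption, not the real identifier for the requested article; look it up first (for example via https://api.openalex.org/works?search=...).

import requests

# Placeholder OpenAlex work ID for the requested article (assumption);
# replace with the real ID before running.
WORK_ID = "W0000000000"

# Ask OpenAlex for works that cite WORK_ID, 25 per page (page 1),
# mirroring the "Showing 1-25 of 52" view on this page.
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{WORK_ID}", "per-page": 25, "page": 1},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print(f"Showing {len(data['results'])} of {data['meta']['count']} citing articles")
for work in data["results"]:
    oa = work.get("open_access", {})
    access = "Open Access" if oa.get("is_oa") else "Closed Access"
    print(f"{work['title']} ({work['publication_year']}) "
          f"| {access} | Times Cited: {work['cited_by_count']}")

Requesting further pages (or using the API's cursor parameter) gives the same pagination that the links at the bottom of this page provide.
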

Requested Article:

Multi-Modal Geometric Learning for Grasping and Manipulation
David Watkins-Valls, Jacob Varley, Peter Allen
International Conference on Robotics and Automation (ICRA) (2019)
Open Access | Times Cited: 52

Showing 1-25 of 52 citing articles:

Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review
Guoguang Du, Kai Wang, Shiguo Lian, et al.
Artificial Intelligence Review (2020) Vol. 54, Iss. 3, pp. 1677-1734
Open Access | Times Cited: 354

Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks
Michelle A. Lee, Yuke Zhu, Peter Zachares, et al.
IEEE Transactions on Robotics (2020) Vol. 36, Iss. 3, pp. 582-596
Open Access | Times Cited: 161

A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms
Oliver Kroemer, Scott Niekum, George Konidaris
arXiv (Cornell University) (2019)
Open Access | Times Cited: 80

Robotics Dexterous Grasping: The Methods Based on Point Cloud and Deep Learning
Haonan Duan, Peng Wang, Ya-Yu Huang, et al.
Frontiers in Neurorobotics (2021) Vol. 15
Open Access | Times Cited: 43

VisuoTactile 6D Pose Estimation of an In-Hand Object Using Vision and Tactile Sensor Data
Snehal Dikhale, Karankumar Patel, Daksh Dhingra, et al.
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 2, pp. 2148-2155
Closed Access | Times Cited: 32

S4G: Amodal Single-view Single-Shot SE(3) Grasp Detection in Cluttered Scenes
Yuzhe Qin, Rui Chen, Hao Zhu, et al.
arXiv (Cornell University) (2019)
Open Access | Times Cited: 52

Generative Attention Learning: a “GenerAL” framework for high-performance multi-fingered grasping in clutter
Bohan Wu, Iretiayo Akinola, Abhi Gupta, et al.
Autonomous Robots (2020) Vol. 44, Iss. 6, pp. 971-990
Closed Access | Times Cited: 43

DDGC: Generative Deep Dexterous Grasping in Clutter
Jens Lundell, Francesco Verdoja, Ville Kyrki
IEEE Robotics and Automation Letters (2021) Vol. 6, Iss. 4, pp. 6899-6906
Open Access | Times Cited: 37

Seeing Through your Skin: Recognizing Objects with a Novel Visuotactile Sensor
Francois R. Hogan, Michael Jenkin, Sahand Rezaei-Shoshtari, et al.
(2021)
Open Access | Times Cited: 32

Robust Grasp Planning Over Uncertain Shape Completions
Jens Lundell, Francesco Verdoja, Ville Kyrki
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2019), pp. 1526-1532
Open Access | Times Cited: 30

Robotic Grasping through Combined Image-Based Grasp Proposal and 3D Reconstruction
Daniel Yang, Tarik Tosun, Benjamin Eisner, et al.
(2021)
Open Access | Times Cited: 25

Where Shall I Touch? Vision-Guided Tactile Poking for Transparent Object Grasping
Jiaqi Jiang, Guanqun Cao, Aaron Butterworth, et al.
IEEE/ASME Transactions on Mechatronics (2022) Vol. 28, Iss. 1, pp. 233-244
Open Access | Times Cited: 18

3D Shape Reconstruction from Vision and Touch
Edward J. Smith, Roberto Calandra, Adriana Romero, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 26

Active Visuo-Haptic Object Shape Completion
Lukáš Rustler, Jens Lundell, Jan Kristof Behrens, et al.
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 2, pp. 5254-5261
Open Access | Times Cited: 14

Visuo-Haptic Grasping of Unknown Objects based on Gaussian Process Implicit Surfaces and Deep Learning
Simon Ottenhaus, Daniel Renninghoff, Raphael Grimm, et al.
(2019), pp. 402-409
Closed Access | Times Cited: 20

Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras
Iretiayo Akinola, Jacob Varley, Dmitry Kalashnikov
(2020), pp. 4616-4622
Open Access | Times Cited: 18

ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation With Shape Completion
Hongyu Li, Snehal Dikhale, Soshi Iba, et al.
IEEE Robotics and Automation Letters (2023) Vol. 8, Iss. 11, pp. 6963-6970
Open Access | Times Cited: 6

Center-of-Mass-based Robust Grasp Planning for Unknown Objects Using Tactile-Visual Sensors
Qian Feng, Zhaopeng Chen, Jun Deng, et al.
(2020), pp. 610-617
Open Access | Times Cited: 16

Improving Object Grasp Performance via Transformer-Based Sparse Shape Completion
Wenkai Chen, Hongzhuo Liang, Zhaopeng Chen, et al.
Journal of Intelligent & Robotic Systems (2022) Vol. 104, Iss. 3
Open Access | Times Cited: 9

Beyond Top-Grasps Through Scene Completion
Jens Lundell, Francesco Verdoja, Ville Kyrki
(2020)
Open Access | Times Cited: 13

TANDEM: Learning Joint Exploration and Decision Making With Tactile Sensors
Jingxi Xu, Shuran Song, Matei Ciocarlie
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 4, pp. 10391-10398
Open Access | Times Cited: 7

Trilateral convolutional neural network for 3D shape reconstruction of objects from a single depth view
Patricio Rivera, Edwin Valarezo Añazco, Mun‐Taek Choi, et al.
IET Image Processing (2019) Vol. 13, Iss. 13, pp. 2457-2466
Open Access | Times Cited: 7

MAT: Multi-Fingered Adaptive Tactile Grasping via Deep Reinforcement Learning
Bohan Wu, Iretiayo Akinola, Jacob Varley, et al.
arXiv (Cornell University) (2019)
Closed Access | Times Cited: 7

Vision-based Robotic Grasp Detection From Object Localization, Object Pose Estimation To Grasp Estimation: A Review
Guoguang Du, Kai Wang, Shiguo Lian, et al.
arXiv (Cornell University) (2019)
Closed Access | Times Cited: 7

Page 1 - Next Page
