
OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
If you click an article title, you'll navigate to the article as listed in CrossRef. If you click the Open Access link, you'll navigate to the "best Open Access location". Clicking the citation count will open this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
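If you prefer to pull a listing like this programmatically, the sketch below shows one way to query the OpenAlex API for works that cite a given article. It is a minimal example under stated assumptions, not part of this page: the work ID W0000000000 is a placeholder you would replace with the OpenAlex ID of the requested article, and the Python `requests` package is assumed to be installed.

# Minimal sketch: fetch citing works from the OpenAlex API.
# W0000000000 is a placeholder OpenAlex work ID, not the ID of the article on this page.
import requests

CITED_WORK_ID = "W0000000000"  # replace with the real OpenAlex work ID

response = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{CITED_WORK_ID}",  # works whose reference lists include the cited work
        "per-page": 25,                      # page size; OpenAlex also supports paging/cursors
    },
    timeout=30,
)
response.raise_for_status()
data = response.json()

print(f"Showing {data['meta']['count']} citing articles:")
for work in data["results"]:
    oa = "Open Access" if work.get("open_access", {}).get("is_oa") else "Closed Access"
    print(f"- {work['title']} ({work.get('publication_year')}) | {oa} | "
          f"Times Cited: {work.get('cited_by_count', 0)}")

Each result carries the same fields shown in the listing below (title, publication year, open-access status, and citation count), so the script reproduces roughly what this page displays.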
Requested Article:
Learning Actions from Human Demonstration Video for Robotic Manipulation
Shuo Yang, Wei Zhang, Weizhi Lu, et al.
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2019), pp. 1805-1811
Open Access | Times Cited: 25
Showing 25 citing articles:
Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review
Guoguang Du, Kai Wang, Shiguo Lian, et al.
Artificial Intelligence Review (2020) Vol. 54, Iss. 3, pp. 1677-1734
Open Access | Times Cited: 354
Watch and Act: Learning Robotic Manipulation From Visual Demonstration
Shuo Yang, Wei Zhang, Ran Song, et al.
IEEE Transactions on Systems, Man, and Cybernetics: Systems (2023) Vol. 53, Iss. 7, pp. 4404-4416
Closed Access | Times Cited: 11
Human2bot: learning zero-shot reward functions for robotic manipulation from human demonstrations
Yasir Salam, Yinbei Li, Jonas Herzog, et al.
Autonomous Robots (2025) Vol. 49, Iss. 2
Closed Access
Grasp for Stacking via Deep Reinforcement Learning
Junhao Zhang, Wei Zhang, Ran Song, et al.
(2020), pp. 2543-2549
Closed Access | Times Cited: 26
A Multi-modal Framework for Robots to Learn Manipulation Tasks from Human Demonstrations
Congcong Yin, Qiuju Zhang
Journal of Intelligent & Robotic Systems (2023) Vol. 107, Iss. 4
Closed Access | Times Cited: 5
Combinatorial Analysis of Deep Learning and Machine Learning Video Captioning Studies: A Systematic Literature Review
Tanzila Kehkashan, Abdullah Alsaeedi, Wael M. S. Yafooz, et al.
IEEE Access (2024) Vol. 12, pp. 35048-35080
Open Access | Times Cited: 1
Enhancing Robotic Grasping of Free-Floating Targets with Soft Actor-Critic Algorithm and Tactile Sensors: a Focus on the Pre-Grasp Stage
Bahador Beigomi, Zheng Zhu
AIAA SCITECH 2022 Forum (2024)
Closed Access | Times Cited: 1
Hierarchical and parameterized learning of pick-and-place manipulation from under-specified human demonstrations
Kun Qian, Huan Liu, Jaime Valls Miró, et al.
Advanced Robotics (2020) Vol. 34, Iss. 13, pp. 858-872
Closed Access | Times Cited: 9
Learn by Observation: Imitation Learning for Drone Patrolling from Videos of A Human Navigator
Yue Fan, Shilei Chu, Wei Zhang, et al.
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2020), pp. 5209-5216
Open Access | Times Cited: 8
Vision-based Robot Manipulation Learning via Human Demonstrations
Zhixin Jia, Mengxiang Lin, Zhuo Chen, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 7
Understanding Contexts Inside Robot and Human Manipulation Tasks through Vision-Language Model and Ontology System in Video Streams
Chen Jiang, Masood Dehghan, Martin Jägersand
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2020), pp. 8366-8372
Closed Access | Times Cited: 6
Development of a novel robotic hand with soft materials and rigid structures
Yongyao Li, Ming Cong, Dong Liu, et al.
Industrial Robot: the international journal of robotics research and application (2021) Vol. 48, Iss. 6, pp. 823-835
Closed Access | Times Cited: 6
Cross-context Visual Imitation Learning from Demonstrations
Shuo Yang, Wei Zhang, Weizhi Lu, et al.
(2020)
Closed Access | Times Cited: 5
O2A: One-Shot Observational Learning with Action Vectors
Leo Pauly, Wisdom C. Agboh, David Hogg, et al.
Frontiers in Robotics and AI (2021) Vol. 8
Open Access | Times Cited: 5
Bridging Visual Perception with Contextual Semantics for Understanding Robot Manipulation Tasks
Chen Jiang, Martin Jägersand
2022 IEEE 18th International Conference on Automation Science and Engineering (CASE) (2020), pp. 1447-1452
Open Access | Times Cited: 4
Interactive Imitation Learning in Robotics: A Survey
Carlos Celemin, Rodrigo Pérez‐Dattari, Eugenio Chisari, et al.
(2022)
Open Access | Times Cited: 3
CLIPUNetr: Assisting Human-robot Interface for Uncalibrated Visual Servoing Control with CLIP-driven Referring Expression Segmentation
Chen Jiang, Yuchen Yang, Martin Jägersand
(2024), pp. 6620-6626
Open Access
Understanding Contexts Inside Robot and Human Manipulation Tasks through a Vision-Language Model and Ontology System in a Video Stream
Chen Jiang, Masood Dehghan, Martin Jägersand
arXiv (Cornell University) (2020)
Open Access | Times Cited: 3
Constructing Dynamic Knowledge Graph for Visual Semantic Understanding and Applications in Autonomous Robotics
Chen Jiang, Steven Weikai Lu, Martin Jägersand
arXiv (Cornell University) (2019)
Closed Access | Times Cited: 2
A Variational Graph Autoencoder for Manipulation Action Recognition and Prediction
Gamze Akyol, Sanem Sarıel, Eren Erdal Aksoy
2021 20th International Conference on Advanced Robotics (ICAR) (2021), pp. 968-973
Open Access | Times Cited: 1
Robot Learning from Demonstration for Assembly with Camera-Supplemented Optical Motion Capture Sensors
Haopeng Hu, Hengyuan Yan, Xiansheng Yang, et al.
Research Square (Research Square) (2023)
Open Access
Bridging Visual Perception with Contextual Semantics for Understanding Robot Manipulation Tasks
Chen Jiang, Martin Jägersand
arXiv (Cornell University) (2019)
Open Access
Learn by Observation: Imitation Learning for Drone Patrolling from Videos of A Human Navigator
Yue Fan, Shilei Chu, Wei Zhang, et al.
arXiv (Cornell University) (2020)
Open Access
A Variational Graph Autoencoder for Manipulation Action Recognition and Prediction
Gamze Akyol, Sanem Sarıel, Eren Erdal Aksoy
arXiv (Cornell University) (2021)
Closed Access
Understanding Manipulation Contexts by Vision and Language for Robotic Vision
Chen Jiang
(2021)
Closed Access