OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's "best Open Access location". Clicking a citation count will open the citing-articles listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
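If you'd rather pull this listing programmatically, the public OpenAlex API exposes the same data through its "cites:" filter. Here is a minimal Python sketch; the work ID below is a hypothetical placeholder, so substitute the real OpenAlex ID of the requested article (findable via the API's search endpoint):

    import requests

    # Hypothetical placeholder; replace with the requested article's OpenAlex work ID.
    WORK_ID = "W0000000000"

    # The `cites:` filter returns works citing the given work; `page` and
    # `per-page` mirror the pagination controls at the bottom of this listing.
    resp = requests.get(
        "https://api.openalex.org/works",
        params={
            "filter": f"cites:{WORK_ID}",
            "page": 2,          # entries 26-50 when per-page is 25
            "per-page": 25,
            "sort": "cited_by_count:desc",
        },
        timeout=30,
    )
    resp.raise_for_status()

    # Print title, year, citation count, and best Open Access location (if any).
    for work in resp.json()["results"]:
        oa = work.get("best_oa_location") or {}
        print(
            work["display_name"],
            work["publication_year"],
            work["cited_by_count"],
            oa.get("landing_page_url", "closed access"),
            sep=" | ",
        )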

Requested Article:

Connecting Touch and Vision via Cross-Modal Prediction
Yunzhu Li, Jun-Yan Zhu, Russ Tedrake, et al.
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
Open Access | Times Cited: 103

Showing 26-50 of 103 citing articles:

Detect, Reject, Correct: Crossmodal Compensation of Corrupted Sensors
Michelle A. Lee, Matthew Tan, Yuke Zhu, et al.
(2021)
Open Access | Times Cited: 20

Partial Visual-Tactile Fused Learning for Robotic Object Recognition
Tao Zhang, Yang Cong, Jiahua Dong, et al.
IEEE Transactions on Systems, Man, and Cybernetics: Systems (2021) Vol. 52, Iss. 7, pp. 4349-4361
Closed Access | Times Cited: 19

Deep Active Cross-Modal Visuo-Tactile Transfer Learning for Robotic Object Recognition
Prajval Kumar Murali, Cong Wang, Dongheui Lee, et al.
IEEE Robotics and Automation Letters (2022) Vol. 7, Iss. 4, pp. 9557-9564
Open Access | Times Cited: 13

Visual–Tactile Fused Graph Learning for Object Clustering
Tao Zhang, Yang Cong, Gan Sun, et al.
IEEE Transactions on Cybernetics (2021) Vol. 52, Iss. 11, pp. 12275-12289
Closed Access | Times Cited: 17

ARMANI: Part-level Garment-Text Alignment for Unified Cross-Modal Fashion Design
Xu‐Jie Zhang, Sha Yu, Michael Kampffmeyer, et al.
Proceedings of the 30th ACM International Conference on Multimedia (2022), pp. 4525-4535
Open Access | Times Cited: 12

Model predictive impedance control with Gaussian processes for human and environment interaction
Kevin Haninger, Christian Hegeler, Luka Peternel
Robotics and Autonomous Systems (2023) Vol. 165, Article 104431
Open Access | Times Cited: 6

Generation of Tactile Data From 3D Vision and Target Robotic Grasps
Brayan S. Zapata-Impata, Pablo Gil, Youcef Mezouar, et al.
IEEE Transactions on Haptics (2020) Vol. 14, Iss. 1, pp. 57-67
Open Access | Times Cited: 16

Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning
Maximilian Du, Olivia Y. Lee, Suraj Nair, et al.
(2022)
Open Access | Times Cited: 9

Multi-Task Hypergraphs for Semi-supervised Learning using Earth Observations
Mihai Pîrvu, Alina Marcu, Alexandra Dobrescu, et al.
(2023), pp. 3396-3406
Open Access | Times Cited: 5

Dynamic Modeling of Hand-Object Interactions via Tactile Sensing
Qiang Zhang, Yunzhu Li, Yiyue Luo, et al.
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2021), pp. 2874-2881
Open Access | Times Cited: 12

The ObjectFolder Benchmark: Multisensory Learning with Neural and Real Objects
Ruohan Gao, Yiming Dou, Hao Li, et al.
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023), pp. 17276-17286
Open Access | Times Cited: 4

Self-supervised Hypergraphs for Learning Multiple World Interpretations
Alina Marcu, Mihai Pîrvu, Dragoş Costea, et al.
(2023), pp. 983-992
Open Access | Times Cited: 4

Controllable Visual-Tactile Synthesis
Ruihan Gao, Wenzhen Yuan, Jun-Yan Zhu
2023 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), pp. 7017-7029
Open Access | Times Cited: 4

Semi-Supervised Multimodal Representation Learning Through a Global Workspace
Benjamin Devillers, Léopold Maytié, Rufin VanRullen
IEEE Transactions on Neural Networks and Learning Systems (2024), pp. 1-15
Open Access | Times Cited: 1

Symmetric Models for Visual Force Policy Learning
Colin Kohler, Anuj Shrivatsav Srikanth, Eshan Arora, et al.
(2024), pp. 3101-3107
Open Access | Times Cited: 1

Editorial: ViTac: Integrating Vision and Touch for Multimodal and Cross-Modal Perception
Shan Luo, Nathan F. Lepora, Uriel Martínez-Hernández, et al.
Frontiers in Robotics and AI (2021) Vol. 8
Open Access | Times Cited: 10

Generative Partial Visual-Tactile Fused Object Clustering
Tao Zhang, Yang Cong, Gan Sun, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2021) Vol. 35, Iss. 7, pp. 6156-6164
Open Access | Times Cited: 10

Exploring the Benefits of Cross-Modal Coding
Zhe Yuan, Bin Kang, Xin Wei, et al.
IEEE Transactions on Circuits and Systems for Video Technology (2022) Vol. 32, Iss. 12, pp. 8781-8794
Closed Access | Times Cited: 7

Toward Vision-Based High Sampling Interaction Force Estimation With Master Position and Orientation for Teleoperation
Kang-Won Lee, Dae-Kwan Ko, Soo‐Chul Lim
IEEE Robotics and Automation Letters (2021) Vol. 6, Iss. 4, pp. 6640-6646
Closed Access | Times Cited: 9

Tactile Pattern Super Resolution with Taxel-based Sensors
Bing Wu, Qian Liu, Qiang Zhang
2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2022), pp. 3644-3650
Closed Access | Times Cited: 6

Bidirectional visual-tactile cross-modal generation using latent feature space flow model
Yu Fang, Xuehe Zhang, Wenqiang Xu, et al.
Neural Networks (2023) Vol. 172, Article 106088
Open Access | Times Cited: 3

Neural language models for the multilingual, transcultural, and multimodal Semantic Web
Dagmar Gromann
Semantic Web (2019) Vol. 11, Iss. 1, pp. 29-39
Closed Access | Times Cited: 8

Semi-Supervised Learning for Multi-Task Scene Understanding by Neural Graph Consensus
Marius Leordeanu, Mihai Pîrvu, Dragoş Costea, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2021) Vol. 35, Iss. 3, pp. 1882-1892
Open Access | Times Cited: 7

AviPer: assisting visually impaired people to perceive the world with visual-tactile multimodal attention network
Xinrong Li, Mei‐Yu Huang, Yao Xu, et al.
CCF Transactions on Pervasive Computing and Interaction (2022) Vol. 4, Iss. 3, pp. 219-239
Open Access | Times Cited: 5
