OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

EarCommand
Yincheng Jin, Yang Gao, Xuhai Xu, et al.
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2022) Vol. 6, Iss. 2, pp. 1-28
Closed Access | Times Cited: 16

Showing 16 citing articles:

XAIR: A Framework of Explainable AI in Augmented Reality
Xuhai Xu, Anna Yu, Tanya R. Jonker, et al.
(2023), pp. 1-30
Open Access | Times Cited: 33

EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing
Ruidong Zhang, Ke Li, Yihong Hao, et al.
(2023), pp. 1-18
Closed Access | Times Cited: 25

EarSSR: Silent Speech Recognition via Earphones
Xue Z. Sun, Jie Xiong, Chao Feng, et al.
IEEE Transactions on Mobile Computing (2024) Vol. 23, Iss. 8, pp. 8493-8507
Closed Access | Times Cited: 4

Handleap: towards contact-free gesture interaction with earphones via acoustic sensing
Yu He, Yincheng Jin, Zhanpeng Jin
CCF Transactions on Pervasive Computing and Interaction (2025)
Closed Access

PalateTouch: Enabling Palate as a Touchpad to Interact with Earphones Using Acoustic Sensing
Yankai Zhao, Jin Zhang, Jiao Li, et al.
(2025), pp. 1-22
Closed Access

Gaze & Tongue: A Subtle, Hands-Free Interaction for Head-Worn Devices
Tan Gemicioglu, R. Michael Winters, Yu-Te Wang, et al.
(2023)
Closed Access | Times Cited: 10

HPSpeech: Silent Speech Interface for Commodity Headphones
Ruidong Zhang, Hao Chen, Devansh Agarwal, et al.
(2023)
Closed Access | Times Cited: 10

I Am an Earphone and I Can Hear My User’s Face: Facial Landmark Tracking Using Smart Earphones
Shijia Zhang, Taiting Lu, Hao Zhou, et al.
ACM Transactions on Internet of Things (2023) Vol. 5, Iss. 1, pp. 1-29
Closed Access | Times Cited: 9

EchoNose: Sensing Mouth, Breathing and Tongue Gestures inside Oral Cavity using a Non-contact Nose Interface
Rujia Sun, Xiaohe Zhou, Benjamin Steeper, et al.
(2023)
Closed Access | Times Cited: 7

Exploring Uni-manual Around Ear Off-Device Gestures for Earables
Shaikh Shawon Arefin Shimon, Ali Neshati, Junwei Sun, et al.
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2024) Vol. 8, Iss. 1, pp. 1-29
Open Access | Times Cited: 1

CW Radar Based Silent Speech Interface Using CNN
Khairul Khaizi Mohd Shariff, Auni Nadiah Yusni, M. A. B. Md Ali, et al.
(2022)
Closed Access | Times Cited: 4

Ultrasound- and MRI-based Speech Synthesis Applying Neural Networks
Réka Trencsényi, László Czap
(2024), pp. 1-6
Closed Access

Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Level Gestures with IMUs
Y. Sato, Takashi Amesaka, Takumi Yamamoto, et al.
Proceedings of the ACM on Human-Computer Interaction (2024) Vol. 8, Iss. MHCI, pp. 1-23
Closed Access

EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals
Shunta Suzuki, Takashi Amesaka, H. Watanabe, et al.
(2024) Vol. 55, pp. 1-13
Closed Access

Human-inspired computational models for European Portuguese: a review
António Teixeira, Samuel Silva
Language Resources and Evaluation (2023) Vol. 58, Iss. 1, pp. 43-72
Open Access

