OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's best Open Access location. Clicking a citation count will open a listing like this one for that article. Lastly, at the bottom of the page you'll find basic pagination options.
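If you'd rather pull this data programmatically, the same listing can be reproduced from the OpenAlex works API using a cites: filter. Below is a minimal sketch in Python, assuming the requests library is installed; the work ID is a placeholder, not the actual OpenAlex ID of the requested article (you can look that up by searching the API).

import requests

# Placeholder OpenAlex work ID (not the real one for the requested article);
# find the real ID via https://api.openalex.org/works?search=FaceListener
WORK_ID = "W0000000000"

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{WORK_ID}",  # works that cite the requested article
        "per-page": 25,                # page size; results are paginated
        "mailto": "you@example.com",   # optional: joins OpenAlex's polite pool
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    authors = ", ".join(a["author"]["display_name"] for a in work["authorships"][:3])
    access = "Open Access" if work["open_access"]["is_oa"] else "Closed Access"
    print(f"{work['display_name']} ({work['publication_year']})")
    print(f"  {authors}, et al. | {access} | Times Cited: {work['cited_by_count']}")

Each result also carries a doi field (the CrossRef link behind the article title) and a best_oa_location entry, which is what the Open Access links on this page point to.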

Requested Article:

FaceListener: Recognizing Human Facial Expressions via Acoustic Sensing on Commodity Headphones
Xingzhe Song, Kai Huang, Wei Gao
(2022), pp. 145-157
Closed Access | Times Cited: 15

Showing 15 citing articles:

Exploring the Feasibility of Remote Cardiac Auscultation Using Earphones
Tao Chen, Yongjie Yang, Xiaoran Fan, et al.
Proceedings of the 30th Annual International Conference on Mobile Computing and Networking (2024), pp. 357-372
Open Access | Times Cited: 5

Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction
Sizhen Bian, Mengxi Liu, Bo Zhou, et al.
Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies (2024) Vol. 8, Iss. 1, pp. 1-49
Closed Access | Times Cited: 4

InMyFace: Inertial and mechanomyography-based sensor fusion for wearable facial activity recognition
Hymalai Bello, Luis Alfredo Sanchez Marin, Sungho Suh, et al.
Information Fusion (2023) Vol. 99, Article 101886
Open Access | Times Cited: 9

mmFER: Millimetre-wave Radar based Facial Expression Recognition for Multimedia IoT Applications
Xi Zhang, Yu Zhang, Zhenguo Shi, et al.
Proceedings of the 29th Annual International Conference on Mobile Computing and Networking (2023), pp. 1-15
Closed Access | Times Cited: 9

I Am an Earphone and I Can Hear My User’s Face: Facial Landmark Tracking Using Smart Earphones
Shijia Zhang, Taiting Lu, Hao Zhou, et al.
ACM Transactions on Internet of Things (2023) Vol. 5, Iss. 1, pp. 1-29
Closed Access | Times Cited: 8

UFace
Shuning Wang, Linghui Zhong, Yongjian Fu, et al.
Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies (2024) Vol. 8, Iss. 1, pp. 1-27
Open Access | Times Cited: 2

ReHEarSSE: Recognizing Hidden-in-the-Ear Silently Spelled Expressions
Xuefu Dong, Yifei Chen, Yuuki Nishiyama, et al.
(2024), pp. 1-16
Open Access | Times Cited: 2

SmartASL
Yincheng Jin, Shibo Zhang, Yang Gao, et al.
Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies (2023) Vol. 7, Iss. 2, pp. 1-21
Closed Access | Times Cited: 6

Exploring Uni-manual Around Ear Off-Device Gestures for Earables
Shaikh Shawon Arefin Shimon, Ali Neshati, Junwei Sun, et al.
Proceedings of the ACM on Interactive Mobile Wearable and Ubiquitous Technologies (2024) Vol. 8, Iss. 1, pp. 1-29
Open Access | Times Cited: 1

FacER: Contrastive Attention based Expression Recognition via Smartphone Earpiece Speaker
Guangjing Wang, Qiben Yan, Shane Patrarungrong, et al.
IEEE INFOCOM 2023 - IEEE Conference on Computer Communications (2023), pp. 1-10
Closed Access | Times Cited: 3

Prediction of Age, Gender, and Ethnicity Using CNN and Facial Images in Real-Time
Anirudh Kanwar, Kiran Deep Singh
2022 IEEE World Conference on Applied Intelligence and Computing (AIC) (2023), pp. 668-674
Closed Access | Times Cited: 2

MeciFace: Mechanomyography and Inertial Fusion-Based Glasses for Edge Real-Time Recognition of Facial and Eating Activities
Hymalai Bello, Sungho Suh, Bo Zhou, et al.
Lecture Notes in Networks and Systems (2024), pp. 393-405
Closed Access

Deep Learning on Facial Expression Detection: Artificial Neural Network Model Implementation
Hendra Kusumah, Muhammad Suzaki Zahran, Paksi Ryandana Cholied, et al.
CCIT Journal (2022) Vol. 16, Iss. 1, pp. 39-53
Open Access

Page 1
