OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the work's "best Open Access location". Clicking a citation count will open this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
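A listing like this can also be retrieved programmatically. The sketch below builds a query URL against the public OpenAlex `/works` endpoint, using its documented `cites:` filter to find works citing a given OpenAlex work ID (the ID `W2741809807` here is just an example placeholder, not the article behind this page), sorted by citation count and paged 25 at a time as shown above.

```python
import urllib.parse

OPENALEX_API = "https://api.openalex.org/works"

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build an OpenAlex query URL for works that cite `work_id`
    (an OpenAlex ID such as "W2741809807"), most-cited first."""
    params = {
        "filter": f"cites:{work_id}",      # works whose references include work_id
        "sort": "cited_by_count:desc",     # order by citation count, descending
        "page": page,                      # 1-based page number
        "per-page": per_page,              # results per page (25, as in this listing)
    }
    return f"{OPENALEX_API}?{urllib.parse.urlencode(params)}"

# Fetching the results requires network access, e.g.:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(citing_works_url("W2741809807")))
#   for work in data["results"]:
#       print(work["display_name"], "| Times Cited:", work["cited_by_count"])
print(citing_works_url("W2741809807"))
```

Each result in the JSON response carries the fields shown in this listing (title, authorships, venue, open-access status, and `cited_by_count`).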

Requested Article:

Showing 1-25 of 1409 citing articles:

COVID-19 Artificial Intelligence Diagnosis Using Only Cough Recordings
Jordi Laguarta, Ferran Hueto, Brian Subirana
IEEE Open Journal of Engineering in Medicine and Biology (2020) Vol. 1, pp. 275-281
Open Access | Times Cited: 542

Speech emotion recognition with deep convolutional neural networks
Dias Issa, M. Fatih Demirci, Adnan Yazıcı
Biomedical Signal Processing and Control (2020) Vol. 59, pp. 101894-101894
Closed Access | Times Cited: 461

A CNN-Assisted Enhanced Audio Signal Processing for Speech Emotion Recognition
Mustaqeem Mustaqeem, Soonil Kwon
Sensors (2019) Vol. 20, Iss. 1, pp. 183-183
Open Access | Times Cited: 325

Clustering-Based Speech Emotion Recognition by Incorporating Learned Features and Deep BiLSTM
Mustaqeem Mustaqeem, Muhammad Sajjad, Soonil Kwon
IEEE Access (2020) Vol. 8, pp. 79861-79875
Open Access | Times Cited: 325

Emotion Recognition from Speech Using wav2vec 2.0 Embeddings
Leonardo Pepino, Pablo Riera, Luciana Ferrer
Interspeech 2022 (2021)
Open Access | Times Cited: 220

Deep Learning Techniques for Speech Emotion Recognition, from Databases to Models
Babak Abbaschian, Daniel Sierra-Sosa, Adel Elmaghraby
Sensors (2021) Vol. 21, Iss. 4, pp. 1249-1249
Open Access | Times Cited: 219

Bagged support vector machines for emotion recognition from speech
Anjali Bhavan, Pankaj Chauhan, Hitkul, et al.
Knowledge-Based Systems (2019) Vol. 184, pp. 104886-104886
Closed Access | Times Cited: 205

An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation
Daniel Michelsanti, Zheng‐Hua Tan, Shi-Xiong Zhang, et al.
IEEE/ACM Transactions on Audio Speech and Language Processing (2021) Vol. 29, pp. 1368-1396
Open Access | Times Cited: 205

GAN Inversion: A Survey
Weihao Xia, Yulun Zhang, Yujiu Yang, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2022), pp. 1-17
Closed Access | Times Cited: 188

MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation
Kaisiyuan Wang, Qianyi Wu, Linsen Song, et al.
Lecture notes in computer science (2020), pp. 700-717
Closed Access | Times Cited: 170

Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset
Zhimeng Zhang, Lincheng Li, Yu Ding, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021), pp. 3660-3669
Closed Access | Times Cited: 170

Feature extraction algorithms to improve the speech emotion recognition rate
Anusha Koduru, Hima Bindu Valiveti, Anil Kumar Budati
International Journal of Speech Technology (2020) Vol. 23, Iss. 1, pp. 45-55
Closed Access | Times Cited: 164

MFCC-based Recurrent Neural Network for automatic clinical depression recognition and assessment from speech
Emna Rejaibi, Ali Komaty, Fabrice Mériaudeau, et al.
Biomedical Signal Processing and Control (2021) Vol. 71, pp. 103107-103107
Open Access | Times Cited: 157

Improved speech emotion recognition with Mel frequency magnitude coefficient
J. Ancilin, A. Milton
Applied Acoustics (2021) Vol. 179, pp. 108046-108046
Closed Access | Times Cited: 136

Att-Net: Enhanced emotion recognition system using lightweight self-attention module
Mustaqeem Mustaqeem, Soonil Kwon
Applied Soft Computing (2021) Vol. 102, pp. 107101-107101
Closed Access | Times Cited: 135

Deep learning based multimodal emotion recognition using model-level fusion of audio–visual modalities
Asif Iqbal Middya, B. Nag, Sarbani Roy
Knowledge-Based Systems (2022) Vol. 244, pp. 108580-108580
Closed Access | Times Cited: 133

A Review on Speech Emotion Recognition Using Deep Learning and Attention Mechanism
Eva Lieskovská, Maroš Jakubec, Roman Jarina, et al.
Electronics (2021) Vol. 10, Iss. 10, pp. 1163-1163
Open Access | Times Cited: 132

Deep Audio-visual Learning: A Survey
Hao Zhu, Mandi Luo, Rui Wang, et al.
International Journal of Automation and Computing (2021) Vol. 18, Iss. 3, pp. 351-376
Open Access | Times Cited: 125

Emotion recognition and artificial intelligence: A systematic review (2014–2023) and research recommendations
Smith K. Khare, Victoria Blanes‐Vidal, Esmaeil S. Nadimi, et al.
Information Fusion (2023) Vol. 102, pp. 102019-102019
Open Access | Times Cited: 124

Robust Speech Emotion Recognition Using CNN+LSTM Based on Stochastic Fractal Search Optimization Algorithm
Abdelaziz A. Abdelhamid, El-Sayed M. El-kenawy, Bandar Alotaibi, et al.
IEEE Access (2022) Vol. 10, pp. 49265-49284
Open Access | Times Cited: 119

Human-Computer Interaction for Recognizing Speech Emotions Using Multilayer Perceptron Classifier
Abeer Ali Alnuaim, Mohammed Zakariah, Prashant Kumar Shukla, et al.
Journal of Healthcare Engineering (2022) Vol. 2022, pp. 1-12
Open Access | Times Cited: 99

An ensemble 1D-CNN-LSTM-GRU model with data augmentation for speech emotion recognition
Md. Rayhan Ahmed, Salekul Islam, A.K.M. Muzahidul Islam, et al.
Expert Systems with Applications (2023) Vol. 218, pp. 119633-119633
Open Access | Times Cited: 94

A systematic literature review of speech emotion recognition approaches
Youddha Beer Singh, Shivani Goel
Neurocomputing (2022) Vol. 492, pp. 245-263
Closed Access | Times Cited: 84

Hybrid LSTM-Transformer Model for Emotion Recognition From Speech Audio Files
Felicia Andayani, Lau Bee Theng, Mark Tee Kit Tsun, et al.
IEEE Access (2022) Vol. 10, pp. 36018-36027
Open Access | Times Cited: 84

EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model
Xinya Ji, Hang Zhou, Kaisiyuan Wang, et al.
(2022), pp. 1-10
Open Access | Times Cited: 83

Page 1