OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to that article's "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

MEmoBERT: Pre-Training Model with Prompt-Based Learning for Multimodal Emotion Recognition
Jinming Zhao, Ruichen Li, Qin Jin, et al.
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2022), pp. 4703-4707
Open Access | Times Cited: 20

Showing 20 citing articles:

Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: A systematic review of recent advancements and future prospects
Shiqing Zhang, Yijiao Yang, Chen Chen, et al.
Expert Systems with Applications (2023) Vol. 237, pp. 121692-121692
Closed Access | Times Cited: 77

Multimodal Prompting with Missing Modalities for Visual Recognition
Yi-Lun Lee, Yi-Hsuan Tsai, Wei-Chen Chiu, et al.
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023)
Open Access | Times Cited: 52

A Systematic Review on Multimodal Emotion Recognition: Building Blocks, Current State, Applications, and Challenges
Sepideh Kalateh, Luis A. Estrada-Jimenez, Sanaz Nikghadam-Hojjati, et al.
IEEE Access (2024) Vol. 12, pp. 103976-104019
Open Access | Times Cited: 16

Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning
Kun Bu, Yuanchao Liu, Xiaolong Ju
Knowledge-Based Systems (2023) Vol. 283, pp. 111148-111148
Closed Access | Times Cited: 22

Emotion Forecasting: A Transformer-Based Approach (Preprint)
Leire Paz-Arbaizar, Jorge López-Castromán, Antonio Artés-Rodríguez, et al.
Journal of Medical Internet Research (2025) Vol. 27, pp. e63962-e63962
Open Access

Hybrid Multi-Attention Network for Audio–Visual Emotion Recognition Through Multimodal Feature Fusion
Sathishkumar Moorthy, Yeon-Kug Moon
Mathematics (2025) Vol. 13, Iss. 7, pp. 1100-1100
Open Access

Modal-aware Visual Prompting for Incomplete Multi-modal Brain Tumor Segmentation
Yansheng Qiu, Ziyuan Zhao, Hongdou Yao, et al.
(2023), pp. 3228-3239
Closed Access | Times Cited: 10

Prompt Consistency for Multi-Label Textual Emotion Detection
Yangyang Zhou, Xin Kang, Fuji Ren
IEEE Transactions on Affective Computing (2023) Vol. 15, Iss. 1, pp. 121-129
Closed Access | Times Cited: 9

Semi-supervised Multimodal Emotion Recognition with Consensus Decision-making and Label Correction
Jingguang Tian, Desheng Hu, Xiaohan Shi, et al.
(2023), pp. 67-73
Closed Access | Times Cited: 3

Local or global? A novel transformer for Chinese named entity recognition based on multi-view and sliding attention
Yuke Wang, Ling Lü, Wu Yang, et al.
International Journal of Machine Learning and Cybernetics (2023) Vol. 15, Iss. 6, pp. 2199-2208
Closed Access | Times Cited: 3

Correlation mining of multimodal features based on higher-order partial least squares for emotion recognition in conversations
Yuanqing Li, Dianwei Wang, Wuwei Wang, et al.
Engineering Applications of Artificial Intelligence (2024) Vol. 138, pp. 109350-109350
Closed Access

Prompt Learning for Multi-modal COVID-19 Diagnosis
Yang Yu, Rong Lu, Mengyao Wang, et al.
2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (2022), pp. 2803-2807
Closed Access | Times Cited: 2

A New Model for Emotion-Driven Behavior Extraction from Text
Yawei Sun, Saike He, Xu Han, et al.
Applied Sciences (2023) Vol. 13, Iss. 15, pp. 8700-8700
Open Access

Two-Stage Adaptation for Cross-Corpus Multimodal Emotion Recognition
Zhaopei Huang, Jinming Zhao, Qin Jin
Lecture notes in computer science (2023), pp. 431-443
Closed Access

DistilALHuBERT: A Distilled Parameter Sharing Audio Representation Model
Haoyu Wang, Siyuan Wang, Yaguang Gong, et al.
(2023), pp. 45-50
Open Access
