OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
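A listing like this can also be retrieved programmatically: OpenAlex exposes a public REST API whose `/works` endpoint accepts a `cites:` filter along with `page`/`per-page` pagination parameters. A minimal sketch using only the standard library — note the work ID passed in below is a placeholder, not the actual OpenAlex ID of the requested article:

```python
# Sketch: fetching one page of citing articles from the public OpenAlex API.
# The /works endpoint, the `cites:` filter, and the `page`/`per-page`
# parameters are documented OpenAlex features; the work ID used in the
# demo call is a placeholder.
import json
import urllib.request


def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build the OpenAlex query URL for works that cite `work_id`."""
    return (
        "https://api.openalex.org/works"
        f"?filter=cites:{work_id}&per-page={per_page}&page={page}"
    )


def fetch_citing_works(work_id: str, page: int = 1):
    """Return (total_count, [(title, cited_by_count), ...]) for one page."""
    with urllib.request.urlopen(citing_works_url(work_id, page)) as resp:
        data = json.load(resp)
    total = data["meta"]["count"]
    rows = [(w["display_name"], w["cited_by_count"]) for w in data["results"]]
    return total, rows


if __name__ == "__main__":
    # Placeholder ID -- substitute the real OpenAlex work ID before fetching.
    print(citing_works_url("W0000000000"))
```

Each `results` entry also carries a `cited_by_api_url` field, so the same pattern can be applied recursively to walk from any citing article to its own citers.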

Requested Article:

Multimodal sentiment analysis based on fusion methods: A survey
Linan Zhu, Zhechao Zhu, Chenwei Zhang, et al.
Information Fusion (2023) Vol. 95, pp. 306-325
Closed Access | Times Cited: 128

Showing 1-25 of 128 citing articles:

Multimodal graph learning based on 3D Haar semi-tight framelet for student engagement prediction
Ming Li, Xiaosheng Zhuang, Lu Bai, et al.
Information Fusion (2024) Vol. 105, pp. 102224-102224
Closed Access | Times Cited: 26

Atlantis: Aesthetic-oriented multiple granularities fusion network for joint multimodal aspect-based sentiment analysis
Luwei Xiao, Xingjiao Wu, Junjie Xu, et al.
Information Fusion (2024) Vol. 106, pp. 102304-102304
Closed Access | Times Cited: 23

A Review of Key Technologies for Emotion Analysis Using Multimodal Information
Xianxun Zhu, Chaopeng Guo, Heyang Feng, et al.
Cognitive Computation (2024) Vol. 16, Iss. 4, pp. 1504-1530
Closed Access | Times Cited: 19

A Survey of Deep Learning-Based Multimodal Emotion Recognition: Speech, Text, and Face
Hailun Lian, Cheng Lu, Sunan Li, et al.
Entropy (2023) Vol. 25, Iss. 10, pp. 1440-1440
Open Access | Times Cited: 31

Modality translation-based multimodal sentiment analysis under uncertain missing modalities
Zhizhong Liu, Bin Zhou, Dianhui Chu, et al.
Information Fusion (2023) Vol. 101, pp. 101973-101973
Closed Access | Times Cited: 27

Multimodal sentiment analysis leveraging the strength of deep neural networks enhanced by the XGBoost classifier
Ganesh Chandrasekaran, S. Dhanasekaran, C. Balakrishna Moorthy, et al.
Computer Methods in Biomechanics & Biomedical Engineering (2024), pp. 1-23
Closed Access | Times Cited: 9

Machine learning for human emotion recognition: a comprehensive review
Eman M. G. Younis, Someya Mohsen, Essam H. Houssein, et al.
Neural Computing and Applications (2024) Vol. 36, Iss. 16, pp. 8901-8947
Open Access | Times Cited: 8

Attention-based multimodal sentiment analysis and emotion recognition using deep neural networks
Ajwa Aslam, Allah Bux Sargano, Zulfiqar Habib
Applied Soft Computing (2023) Vol. 144, pp. 110494-110494
Closed Access | Times Cited: 18

Multi-level correlation mining framework with self-supervised label generation for multimodal sentiment analysis
Zuhe Li, Qingbing Guo, Yushan Pan, et al.
Information Fusion (2023) Vol. 99, pp. 101891-101891
Closed Access | Times Cited: 17

TMBL: Transformer-based multimodal binding learning model for multimodal sentiment analysis
Jiehui Huang, Jun Zhou, Zhenchao Tang, et al.
Knowledge-Based Systems (2023) Vol. 285, pp. 111346-111346
Closed Access | Times Cited: 16

Hierarchical denoising representation disentanglement and dual-channel cross-modal-context interaction for multimodal sentiment analysis
Zuhe Li, Zhenwei Huang, Yushan Pan, et al.
Expert Systems with Applications (2024) Vol. 252, pp. 124236-124236
Open Access | Times Cited: 7

FDR-MSA: Enhancing multimodal sentiment analysis through feature disentanglement and reconstruction
Yao Fu, Biao Huang, Yujun Wen, et al.
Knowledge-Based Systems (2024) Vol. 297, pp. 111965-111965
Open Access | Times Cited: 7

A systematic review of trimodal affective computing approaches: Text, audio, and visual integration in emotion recognition and sentiment analysis
Hussein Farooq Tayeb Al-Saadawi, Bihter Daş, Resul Daş
Expert Systems with Applications (2024) Vol. 255, pp. 124852-124852
Closed Access | Times Cited: 6

VIEMF: Multimodal metaphor detection via visual information enhancement with multimodal fusion
Xiaoyu He, Long Yu, Shengwei Tian, et al.
Information Processing & Management (2024) Vol. 61, Iss. 3, pp. 103652-103652
Closed Access | Times Cited: 5

Multi-grained fusion network with self-distillation for aspect-based multimodal sentiment analysis
Juan Yang, Yali Xiao, Xu Du
Knowledge-Based Systems (2024) Vol. 293, pp. 111724-111724
Closed Access | Times Cited: 5

Review of Multimodal Data Fusion in Machine Learning
Leena Arya, Yogesh Kumar Sharma, Smitha, et al.
(2025), pp. 205-226
Closed Access

Multimodal sentiment analysis based on multi-layer feature fusion and multi-task learning
Yujian Cai, Xingguang Li, Yingyu Zhang, et al.
Scientific Reports (2025) Vol. 15, Iss. 1
Open Access

DeepFusionSent: A novel feature fusion approach for deep learning-enhanced sentiment classification
Ankit Thakkar, Devshri Pandya
Information Fusion (2025), pp. 103000-103000
Closed Access

Hierarchically trusted evidential fusion method with consistency learning for multimodal language understanding
Ying Yang, Yanqiu Yang, Gang Ren, et al.
Knowledge-Based Systems (2025), pp. 113164-113164
Closed Access

BeliN: A novel corpus for Bengali religious news headline generation using contextual feature fusion
Md Osama, Ashim Dey, Kawsar Ahmed, et al.
Natural Language Processing Journal (2025), pp. 100138-100138
Open Access

A comprehensive survey on deep learning-based approaches for multimodal sentiment analysis
Alireza Ghorbanali, Mohammad Karim Sohrabi
Artificial Intelligence Review (2023) Vol. 56, Iss. S1, pp. 1479-1512
Closed Access | Times Cited: 15

Disentanglement Translation Network for multimodal sentiment analysis
Ying Zeng, Wenjun Yan, Sijie Mai, et al.
Information Fusion (2023) Vol. 102, pp. 102031-102031
Closed Access | Times Cited: 13

A Multimodal Sentiment Analysis Approach Based on a Joint Chained Interactive Attention Mechanism
Keyuan Qiu, Yingjie Zhang, Jiaxu Zhao, et al.
Electronics (2024) Vol. 13, Iss. 10, pp. 1922-1922
Open Access | Times Cited: 4

TCHFN: Multimodal sentiment analysis based on Text-Centric Hierarchical Fusion Network
Jingming Hou, Nazlia Omar, Sabrina Tiun, et al.
Knowledge-Based Systems (2024) Vol. 300, pp. 112220-112220
Closed Access | Times Cited: 4

Application of deep learning-based multimodal fusion technology in cancer diagnosis: A survey
L. Yan, Liangrui Pan, Yijun Peng, et al.
Engineering Applications of Artificial Intelligence (2025) Vol. 143, pp. 109972-109972
Closed Access

