OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
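Listings like this can also be retrieved programmatically: the OpenAlex API exposes a `/works` endpoint whose `cites:` filter returns the works citing a given article, with `page` and `per-page` controlling pagination. A minimal sketch follows; the work ID used here is a placeholder, not the actual OpenAlex ID of the requested article.

```python
# Sketch: build an OpenAlex API query for works citing a given article.
# NOTE: "W0000000000" below is a placeholder work ID for illustration only.
from urllib.parse import urlencode

API_BASE = "https://api.openalex.org/works"

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build the OpenAlex query URL for works that cite `work_id`."""
    params = {
        "filter": f"cites:{work_id}",  # restrict to works citing this ID
        "page": page,                  # 1-based page number
        "per-page": per_page,          # results per page (this listing shows 25)
    }
    return f"{API_BASE}?{urlencode(params)}"

# Example with a placeholder ID:
print(citing_works_url("W0000000000"))
```

Fetching that URL (e.g. with `urllib.request` or `requests`) returns a JSON payload whose `results` array holds the citing works, each with title, authorships, venue, and its own `cited_by_count`.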

Requested Article:

Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis
Wei Han, Hui Chen, Alexander Gelbukh, et al.
(2021), pp. 6-15
Open Access | Times Cited: 135

Showing 1-25 of 135 citing articles:

Video sentiment analysis with bimodal information-augmented multi-head attention
Ting Wu, Junjie Peng, Wenqiang Zhang, et al.
Knowledge-Based Systems (2021) Vol. 235, pp. 107676-107676
Open Access | Times Cited: 80

CubeMLP: An MLP-based Model for Multimodal Sentiment Analysis and Depression Estimation
Hao Sun, Hongyi Wang, Jiaqing Liu, et al.
Proceedings of the 30th ACM International Conference on Multimedia (2022), pp. 3722-3729
Open Access | Times Cited: 64

Heterogeneous graph convolution based on In-domain Self-supervision for Multimodal Sentiment Analysis
Yufei Zeng, Zhixin Li, Zhenjun Tang, et al.
Expert Systems with Applications (2022) Vol. 213, pp. 119240-119240
Closed Access | Times Cited: 42

Multi-modal cross-attention network for Alzheimer’s disease diagnosis with multi-modality data
Jin Zhang, Xiaohai He, Luping Liu, et al.
Computers in Biology and Medicine (2023) Vol. 162, pp. 107050-107050
Closed Access | Times Cited: 40

Multi-Label Multimodal Emotion Recognition With Transformer-Based Fusion and Emotion-Level Representation Learning
Hoai-Duy Le, Guee-Sang Lee, Soo-Hyung Kim, et al.
IEEE Access (2023) Vol. 11, pp. 14742-14751
Open Access | Times Cited: 36

GraphMFT: A graph network based multimodal fusion technique for emotion recognition in conversation
Jiang Li, Xiaoping Wang, Guoqing Lv, et al.
Neurocomputing (2023) Vol. 550, pp. 126427-126427
Open Access | Times Cited: 32

Token-disentangling Mutual Transformer for multimodal emotion recognition
Guanghao Yin, Yuanyuan Liu, Tengfei Liu, et al.
Engineering Applications of Artificial Intelligence (2024) Vol. 133, pp. 108348-108348
Closed Access | Times Cited: 10

Multimodal Learning using Optimal Transport for Sarcasm and Humor Detection
Shraman Pramanick, Aniket Basu Roy, Vishal M. Patel
2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2022), pp. 546-556
Open Access | Times Cited: 37

Joint multimodal sentiment analysis based on information relevance
Danlei Chen, Wang Su, Peng Wu, et al.
Information Processing & Management (2022) Vol. 60, Iss. 2, pp. 103193-103193
Closed Access | Times Cited: 30

TMBL: Transformer-based multimodal binding learning model for multimodal sentiment analysis
Jiehui Huang, Jun Zhou, Zhenchao Tang, et al.
Knowledge-Based Systems (2023) Vol. 285, pp. 111346-111346
Closed Access | Times Cited: 19

A fine-grained modal label-based multi-stage network for multimodal sentiment analysis
Junjie Peng, Ting Wu, Wenqiang Zhang, et al.
Expert Systems with Applications (2023) Vol. 221, pp. 119721-119721
Closed Access | Times Cited: 18

A feature-based restoration dynamic interaction network for multimodal sentiment analysis
Yufei Zeng, Zhixin Li, Zhenbin Chen, et al.
Engineering Applications of Artificial Intelligence (2023) Vol. 127, pp. 107335-107335
Closed Access | Times Cited: 17

MLG-NCS: Multimodal Local–Global Neuromorphic Computing System for Affective Video Content Analysis
Xiaoyue Ji, Zhekang Dong, Guangdong Zhou, et al.
IEEE Transactions on Systems Man and Cybernetics Systems (2024) Vol. 54, Iss. 8, pp. 5137-5149
Closed Access | Times Cited: 6

TEDT: Transformer-Based Encoding–Decoding Translation Network for Multimodal Sentiment Analysis
Fan Wang, Shengwei Tian, Long Yu, et al.
Cognitive Computation (2022) Vol. 15, Iss. 1, pp. 289-303
Closed Access | Times Cited: 28

Dynamically Adjust Word Representations Using Unaligned Multimodal Information
Jiwei Guo, Jiajia Tang, Weichen Dai, et al.
Proceedings of the 30th ACM International Conference on Multimedia (2022), pp. 3394-3402
Closed Access | Times Cited: 25

Modality-invariant temporal representation learning for multimodal sentiment classification
Hao Sun, Jiaqing Liu, Yen‐Wei Chen, et al.
Information Fusion (2022) Vol. 91, pp. 504-514
Closed Access | Times Cited: 25

Hybrid cross-modal interaction learning for multimodal sentiment analysis
Yanping Fu, Zhiyuan Zhang, Ruidi Yang, et al.
Neurocomputing (2023) Vol. 571, pp. 127201-127201
Closed Access | Times Cited: 16

MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal Affective Analysis
Shuzhen Li, Tong Zhang, Bianna Chen, et al.
IEEE Transactions on Affective Computing (2023) Vol. 14, Iss. 4, pp. 2796-2809
Closed Access | Times Cited: 14

TensorFormer: A Tensor-Based Multimodal Transformer for Multimodal Sentiment Analysis and Depression Detection
Hao Sun, Yen‐Wei Chen, Lanfen Lin
IEEE Transactions on Affective Computing (2022) Vol. 14, Iss. 4, pp. 2776-2786
Closed Access | Times Cited: 21

Video-Based Cross-Modal Auxiliary Network for Multimodal Sentiment Analysis
Rongfei Chen, Wenju Zhou, Yang Li, et al.
IEEE Transactions on Circuits and Systems for Video Technology (2022) Vol. 32, Iss. 12, pp. 8703-8716
Open Access | Times Cited: 19

Evaluating significant features in context‐aware multimodal emotion recognition with XAI methods
Aaishwarya Khalane, Rikesh Makwana, Talal Shaikh, et al.
Expert Systems (2023)
Open Access | Times Cited: 12

Multimodal transformer with adaptive modality weighting for multimodal sentiment analysis
Yifeng Wang, Jiahao He, Di Wang, et al.
Neurocomputing (2023) Vol. 572, pp. 127181-127181
Closed Access | Times Cited: 12

Multimodal Mutual Attention-Based Sentiment Analysis Framework Adapted to Complicated Contexts
Lijun He, Ziqing Wang, Liejun Wang, et al.
IEEE Transactions on Circuits and Systems for Video Technology (2023) Vol. 33, Iss. 12, pp. 7131-7143
Closed Access | Times Cited: 11

Dynamically Shifting Multimodal Representations via Hybrid-Modal Attention for Multimodal Sentiment Analysis
Ronghao Lin, Haifeng Hu
IEEE Transactions on Multimedia (2023) Vol. 26, pp. 2740-2755
Closed Access | Times Cited: 11

VLP2MSA: Expanding vision-language pre-training to multimodal sentiment analysis
Guofeng Yi, Cunhang Fan, Kang Zhu, et al.
Knowledge-Based Systems (2023) Vol. 283, pp. 111136-111136
Closed Access | Times Cited: 11

Page 1 - Next Page
