OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the article's "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

Multimodal Sentiment Analysis Representations Learning via Contrastive Learning with Condense Attention Fusion
Huiru Wang, Xiuhong Li, Zenyu Ren, et al.
Sensors (2023) Vol. 23, Iss. 5, pp. 2679-2679
Open Access | Times Cited: 18

Showing 18 citing articles:

Hybrid cross-modal interaction learning for multimodal sentiment analysis
Yanping Fu, Zhiyuan Zhang, Ruidi Yang, et al.
Neurocomputing (2023) Vol. 571, pp. 127201-127201
Closed Access | Times Cited: 15

Cross-modal contrastive learning for multimodal sentiment recognition
Shanliang Yang, Lichao Cui, Lei Wang, et al.
Applied Intelligence (2024) Vol. 54, Iss. 5, pp. 4260-4276
Closed Access | Times Cited: 3

CCDA: A Novel Method to Explore the Cross-Correlation in Dual-Attention for Multimodal Sentiment Analysis
P. Wang, Shuxian Liu, Jinyan Chen
Applied Sciences (2024) Vol. 14, Iss. 5, pp. 1934-1934
Open Access | Times Cited: 2

An Image-Text Sentiment Analysis Method Using Multi-Channel Multi-Modal Joint Learning
Lianting Gong, Xingzhou He, Yang Jianzhong
Applied Artificial Intelligence (2024) Vol. 38, Iss. 1
Open Access | Times Cited: 2

Pgcl: Prompt Guidance and Self-Supervised Contrastive Learning-Based Method for Visual Question Answering
Ling Gao, Hongda Zhang, Yiming Liu, et al.
(2024)
Closed Access | Times Cited: 1

PGCL: Prompt guidance and self-supervised contrastive learning-based method for Visual Question Answering
Ling Gao, Hongda Zhang, Yiming Liu, et al.
Expert Systems with Applications (2024) Vol. 251, pp. 124011-124011
Closed Access | Times Cited: 1

Bi-Modal Bi-Task Emotion Recognition Based on Transformer Architecture
Yu Song, Qi Zhou
Applied Artificial Intelligence (2024) Vol. 38, Iss. 1
Open Access

Multimodal Emotion Cognition Method Based on Multi-Channel Graphic Interaction
Baisheng Zhong
International Journal of Cognitive Informatics and Natural Intelligence (2024) Vol. 18, Iss. 1, pp. 1-17
Open Access

Multi-modal Feature Distillation Emotion Recognition Method For Social Media
Xue Zhang, Mingjiang Wang, Xiao Zeng
(2024), pp. 445-454
Closed Access

Hybrid Uncertainty Calibration for Multimodal Sentiment Analysis
Qiuyu Pan, Zuqiang Meng
Electronics (2024) Vol. 13, Iss. 3, pp. 662-662
Open Access

Multimodal Sentiment Analysis using Deep Learning Fusion Techniques and Transformers
Muhaimin Bin Habib, Md. Ferdous Bin Hafiz, Niaz Ashraf Khan, et al.
International Journal of Advanced Computer Science and Applications (2024) Vol. 15, Iss. 6
Open Access

Performance Analysis of Sentiment Fusion Network for Social Media Services
Arun Thitai Kumar, Vrinda Sachdeva, Ashish Kumar
(2023), pp. 129-133
Closed Access | Times Cited: 1

Fusion-based Representation Learning Model for Multimode User-generated Social Network Content
John Martin, Rajvardhan Oak, Mukesh Soni, et al.
Journal of Data and Information Quality (2023) Vol. 15, Iss. 3, pp. 1-21
Closed Access

Multimodal Sentimental Analysis Using GloVe Convolutional Neural Network with Capsule Network
Nagendar Yamsani, Abbas Hameed Abdul Hussein, S. Julia Faith, et al.
(2023), pp. 1-6
Closed Access

A Comprehensive Study on State-Of-Art Learning Algorithms in Emotion Recognition
Kshirod Sarmah, et al.
International Journal on Recent and Innovation Trends in Computing and Communication (2023) Vol. 11, Iss. 11, pp. 717-732
Open Access

Page 1
