OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.

Requested Article:

VLP2MSA: Expanding vision-language pre-training to multimodal sentiment analysis
Guofeng Yi, Cunhang Fan, Kang Zhu, et al.
Knowledge-Based Systems (2023) Vol. 283, pp. 111136-111136
Closed Access | Times Cited: 11

Showing 11 citing articles:

A study on the combination of functional connection features and Riemannian manifold in EEG emotion recognition
Minchao Wu, Rui Ouyang, Chang Zhou, et al.
Frontiers in Neuroscience (2024) Vol. 17
Open Access | Times Cited: 6

TCHFN: Multimodal sentiment analysis based on Text-Centric Hierarchical Fusion Network
Jingming Hou, Nazlia Omar, Sabrina Tiun, et al.
Knowledge-Based Systems (2024) Vol. 300, pp. 112220-112220
Closed Access | Times Cited: 5

A dissimilarity feature-driven decomposition network for multimodal sentiment analysis
Mingqi Liu, Zhixin Li
Multimedia Systems (2025) Vol. 31, Iss. 1
Closed Access

H²CAN: heterogeneous hypergraph attention network with counterfactual learning for multimodal sentiment analysis
Changqin Huang, Zhuoyuan Lin, Qionghao Huang, et al.
Complex & Intelligent Systems (2025) Vol. 11, Iss. 4
Open Access

Text-guided deep correlation mining and self-learning feature fusion framework for multimodal sentiment analysis
Minghui Zhu, Xianfei He, Baojun Qiao, et al.
Knowledge-Based Systems (2025) Vol. 315, pp. 113249-113249
Closed Access

Disentangled variational auto-encoder for multimodal fusion performance analysis in multimodal sentiment analysis
Rongfei Chen, Wenju Zhou, Huosheng Hu, et al.
Knowledge-Based Systems (2024) Vol. 301, pp. 112372-112372
Closed Access | Times Cited: 2

Ensembling disentangled domain-specific prompts for domain generalization
Fangbin Xu, Shizhuo Deng, Tong Jia, et al.
Knowledge-Based Systems (2024) Vol. 301, pp. 112358-112358
Closed Access

Extracting method for fine-grained emotional features in videos
Cangzhi Zheng, Junjie Peng, Zesu Cai
Knowledge-Based Systems (2024) Vol. 302, pp. 112382-112382
Closed Access

A Text-Oriented Transformer with an Image Aesthetics Assessment Fusion Network for Visual-Textual Sentiment Analysis
Ziyu Liu, Zejun Zhang
Communications in computer and information science (2024), pp. 183-200
Closed Access

Vision-and-language navigation based on history-aware cross-modal feature fusion in indoor environment
Shuhuan Wen, Simeng Gong, Ziyuan Zhang, et al.
Knowledge-Based Systems (2024) Vol. 305, pp. 112610-112610
Closed Access

ConD2: Contrastive Decomposition Distilling for Multimodal Sentiment Analysis
Yu Xi, Wenti Huang, Jun Long
Lecture notes in computer science (2024), pp. 158-172
Closed Access
