
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
If you click an article title, you'll navigate to the article as listed in CrossRef. If you click the Open Access links, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
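If you'd prefer to pull a listing like this programmatically, the same data is available from the OpenAlex API. Below is a minimal Python sketch using OpenAlex's documented `cites:` filter and cursor pagination; the work ID is a placeholder, not the actual OpenAlex ID of the requested article.

```python
import requests

# Placeholder OpenAlex work ID -- replace with the real ID of the article,
# e.g. found via https://api.openalex.org/works?search=...
CITED_WORK_ID = "W0000000000"  # hypothetical value

url = "https://api.openalex.org/works"
params = {
    "filter": f"cites:{CITED_WORK_ID}",  # works that cite the given work
    "per-page": 25,                      # OpenAlex allows up to 200 per page
    "cursor": "*",                       # "*" starts cursor pagination
}

while True:
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    for work in data["results"]:
        oa = work.get("best_oa_location") or {}  # null for closed-access works
        print(work["display_name"],
              "| cited by:", work.get("cited_by_count", 0),
              "| OA:", oa.get("landing_page_url", "closed"))
    next_cursor = data["meta"].get("next_cursor")
    if not next_cursor:  # no cursor means every page has been fetched
        break
    params["cursor"] = next_cursor
```

The `best_oa_location` field is what the "best Open Access location" links on this page point to.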
Requested Article:
Debiasing Multimodal Sarcasm Detection with Contrastive Learning
Mengzhao Jia, Xie Can, Liqiang Jing
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 16, pp. 18354-18362
Open Access | Times Cited: 8
Showing 8 citing articles:
DyCR-Net: A dynamic context-aware routing network for multi-modal sarcasm detection in conversation
X. Zhuang, Zhixin Li, Fengling Zhou, et al.
Knowledge-Based Systems (2025), Article 113029
Closed Access | Times Cited: 1
Content-aware sentiment understanding: cross-modal analysis with encoder-decoder architectures
Zahra Pakdaman, Abbas Koochari, Arash Sharifi
Journal of Computational Social Science (2025) Vol. 8, Iss. 2
Closed Access
Modality-aware contrast and fusion for multi-modal summarization
Lixin Dai, Tingting Han, Yu Zhou, et al.
Neurocomputing (2025), Article 130094
Closed Access
MSTI-Plus: Introducing Non-Sarcasm Reference Materials to Enhance Multimodal Sarcasm Target Identification
Fengmao Lv, Mengting Xiong, J.-F. Fang, et al.
(2025), pp. 614-624
Closed Access
RCLMuFN: Relational context learning and multiplex fusion network for multimodal sarcasm detection
Tongguan Wang, Junkai Li, Guixin Su, et al.
Knowledge-Based Systems (2025), Article 113614
Closed Access
MV-BART: Multi-view BART for Multi-modal Sarcasm Detection
X. Zhuang, Fengling Zhou, Zhixin Li
(2024), pp. 3602-3611
Closed Access | Times Cited: 3
Multi-modal Sarcasm Detection on Social Media via Multi-Granularity Information Fusion
Lisong Ou, Zhixin Li
ACM Transactions on Multimedia Computing Communications and Applications (2025)
Closed Access
Multimodal dual perception fusion framework for multimodal affective analysis
Qiang Lu, Xia Sun, Yunfei Long, et al.
Information Fusion (2024), Article 102747
Closed Access
Counterfactually Augmented Event Matching for De-biased Temporal Sentence Grounding
Xun Jiang, Z. X. Wei, Shenshen Li, et al.
(2024), pp. 6472-6481
Closed Access