OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

Generative Bias for Robust Visual Question Answering
Jae Won Cho, Dong-Jin Kim, Hyeonggon Ryu, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023)
Open Access | Times Cited: 29

Showing 1-25 of 29 citing articles:

Deep Multimodal Data Fusion
Fei Zhao, Chengcui Zhang, Baocheng Geng
ACM Computing Surveys (2024) Vol. 56, Iss. 9, pp. 1-36
Open Access | Times Cited: 27

Robust Visual Question Answering: Datasets, Methods, and Future Challenges
Jie Ma, Pinghui Wang, Dechen Kong, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2024) Vol. 46, Iss. 8, pp. 5575-5594
Open Access | Times Cited: 8

Cross Modality Bias in Visual Question Answering: A Causal View With Possible Worlds VQA
Ali Vosoughi, Shijian Deng, Songyang Zhang, et al.
IEEE Transactions on Multimedia (2024) Vol. 26, pp. 8609-8624
Closed Access | Times Cited: 7

Robust data augmentation and contrast learning for debiased visual question answering
Ke Ning, Zhixin Li
Neurocomputing (2025), pp. 129527-129527
Closed Access

Causality Guided Co-Attention Network for Visual Question Answering
Jin Miao, Kui Yu, Baofu Fang, et al.
(2025)
Closed Access

Prompting visual dialog with implicit logical knowledge
Zefan Zhang, Yanhui Li, Weiqi Zhang, et al.
Knowledge and Information Systems (2025)
Closed Access

Unbiased Visual Question Answering by Leveraging Instrumental Variable
Yonghua Pan, Jing Liu, Lu Jin, et al.
IEEE Transactions on Multimedia (2024) Vol. 26, pp. 6648-6662
Closed Access | Times Cited: 4

Simple contrastive learning in a self-supervised manner for robust visual question answering
Shuwen Yang, Luwei Xiao, Xingjiao Wu, et al.
Computer Vision and Image Understanding (2024) Vol. 241, pp. 103976-103976
Closed Access | Times Cited: 4

Unbiased VQA via modal information interaction and question transformation
Dahe Peng, Zhixin Li
Pattern Recognition (2025), pp. 111394-111394
Closed Access

Handling language prior and compositional reasoning issues in visual question answering system
Souvik Chowdhury, Badal Soni
Neurocomputing (2025), pp. 129906-129906
Closed Access

Robust visual question answering via polarity enhancement and contrast
Dahe Peng, Zhixin Li
Neural Networks (2024) Vol. 179, pp. 106560-106560
Closed Access | Times Cited: 3

Enhancing robust VQA via contrastive and self-supervised learning
Runlin Cao, Zhixin Li, Zhenjun Tang, et al.
Pattern Recognition (2024) Vol. 159, pp. 111129-111129
Closed Access | Times Cited: 3

Question-conditioned debiasing with focal visual context fusion for visual question answering
Jin Liu, Guoxiang Wang, ChongFeng Fan, et al.
Knowledge-Based Systems (2023) Vol. 278, pp. 110879-110879
Closed Access | Times Cited: 8

A comprehensive survey on answer generation methods using NLP
Prashant Upadhyay, Rishabh Agarwal, Sumeet Dhiman, et al.
Natural Language Processing Journal (2024) Vol. 8, pp. 100088-100088
Open Access | Times Cited: 2

HCCL: Hierarchical Counterfactual Contrastive Learning for Robust Visual Question Answering
Dongze Hao, Qunbo Wang, Xinxin Zhu, et al.
ACM Transactions on Multimedia Computing Communications and Applications (2024) Vol. 20, Iss. 10, pp. 1-21
Closed Access | Times Cited: 1

Contrastive Region Guidance: Improving Grounding in Vision-Language Models Without Training
Defu Wan, Jaemin Cho, Elias Stengel-Eskin, et al.
Lecture notes in computer science (2024), pp. 198-215
Closed Access | Times Cited: 1

Empirical study on using adapters for debiased Visual Question Answering
Jae Won Cho, Dawit Mureja Argaw, Young‐Taek Oh, et al.
Computer Vision and Image Understanding (2023) Vol. 237, pp. 103842-103842
Closed Access | Times Cited: 3

Local pseudo-attributes for long-tailed recognition
Dong-Jin Kim, Tsung-Wei Ke, Stella X. Yu
Pattern Recognition Letters (2023) Vol. 172, pp. 51-57
Closed Access | Times Cited: 3

VQA-PDF: Purifying Debiased Features for Robust Visual Question Answering Task
Yandong Bi, Huajie Jiang, Chun‐Feng Liu, et al.
Lecture notes in computer science (2024), pp. 264-277
Closed Access

Hierarchical Fusion Framework for Multimodal Dialogue Response Generation
Qi Deng, Lijun Wu, Kaile Su, et al.
2022 International Joint Conference on Neural Networks (IJCNN) (2024) Vol. 31, pp. 1-8
Closed Access

Robust visual question answering utilizing Bias Instances and Label Imbalance
Liang Zhao, Kefeng Li, Jiangtao Qi, et al.
Knowledge-Based Systems (2024), pp. 112629-112629
Closed Access

BIVL-Net: Bidirectional Vision-Language Guidance for Visual Question Answering
Cong Han, Feifei Zhang
Lecture notes in computer science (2024), pp. 481-495
Closed Access

Bias-guided margin loss for robust Visual Question Answering
Yanhan Sun, Jiangtao Qi, Zhenfang Zhu, et al.
Information Processing & Management (2024) Vol. 62, Iss. 2, pp. 103988-103988
Closed Access

Enhancing Audio-Visual Question Answering with Missing Modality via Trans-Modal Associative Learning
Kyu Ri Park, Youngmin Oh, Jung Uk Kim
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2024), pp. 5755-5759
Closed Access

Reducing Language Bias for Robust VQA Model with Multi-Branch Learning
Zhenzhen Wang, Qingfeng Wu
2022 International Joint Conference on Neural Networks (IJCNN) (2024) Vol. 33, pp. 1-8
Closed Access

