OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

Towards Adversarial Attack on Vision-Language Pre-training Models
Jiaming Zhang, Qi Yi, Jitao Sang
Proceedings of the 30th ACM International Conference on Multimedia (2022)
Open Access | Times Cited: 34

Showing 1-25 of 34 citing articles:

APBAM: Adversarial Perturbation-driven Backdoor Attack in Multimodal Learning
Shaobo Zhang, Wenli Chen, Xiong Li, et al.
Information Sciences (2025), Art. 121847
Closed Access | Times Cited: 2

Transferable Multimodal Attack on Vision-Language Pre-training Models
Haodi Wang, Kai Dong, Zhilei Zhu, et al.
IEEE Symposium on Security and Privacy (SP) (2024), pp. 1722-1740
Closed Access | Times Cited: 6

AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning
Ziqi Zhou, Shengshan Hu, Minghui Li, et al.
(2023), pp. 6311-6320
Open Access | Times Cited: 14

Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models
Lu Dong, Zhiqiang Wang, Teng Wang, et al.
IEEE/CVF International Conference on Computer Vision (ICCV) (2023)
Open Access | Times Cited: 14

Boosting adversarial transferability in vision-language models via multimodal feature heterogeneity
Long Chen, Yuling Chen, Zhi Ouyang, et al.
Scientific Reports (2025) Vol. 15, Iss. 1
Open Access

Multimodal alignment augmentation transferable attack on vision-language pre-training models
Tingchao Fu, Jinhong Zhang, Fanxiao Li, et al.
Pattern Recognition Letters (2025)
Closed Access

Stabilizing Modality Gap & Lowering Gradient Norms Improve Zero-Shot Adversarial Robustness of VLMs
Junhao Dong, Piotr Koniusz, Xinghua Qu, et al.
(2025), pp. 236-247
Closed Access

Mutual-Modality Adversarial Attack with Semantic Perturbation
Jingwen Ye, Ruonan Yu, Songhua Liu, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 7, pp. 6657-6665
Open Access | Times Cited: 4

A Survey on Safe Multi-Modal Learning Systems
Tianyi Zhao, L. Zhang, Yao Ma, et al.
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2024), pp. 6655-6665
Closed Access | Times Cited: 4

Universal Adversarial Perturbations for Vision-Language Pre-trained Models
Peng-Fei Zhang, Zi Huang, Guangdong Bai
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2024), pp. 862-871
Open Access | Times Cited: 2

MMCert: Provable Defense Against Adversarial Attacks to Multi-Modal Models
Yanting Wang, Hongye Fu, Wei Zou, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 24655-24664
Closed Access | Times Cited: 2

Language-Driven Anchors for Zero-Shot Adversarial Robustness
Xiao Li, Wei Zhang, Yining Liu, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 24686-24695
Closed Access | Times Cited: 2

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples
Ziqi Zhou, Minghui Li, Wei Liu, et al.
IEEE Symposium on Security and Privacy (SP) (2024), pp. 3015-3033
Closed Access | Times Cited: 1

Pre-Trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness
Sibo Wang, Jie Zhang, Zheng Yuan, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 24502-24511
Closed Access | Times Cited: 1

Rethinking Impersonation and Dodging Attacks on Face Recognition Systems
Fengfan Zhou, Qianyu Zhou, Bangjie Yin, et al.
(2024), pp. 2487-2496
Closed Access | Times Cited: 1

IMTM: Invisible Multi-trigger Multimodal Backdoor Attack
Zhicheng Li, Piji Li, Xuan Sheng, et al.
Lecture notes in computer science (2023), pp. 533-545
Closed Access | Times Cited: 3

PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image Deraining for Semantic Segmentation
Xianghao Jiao, Yaohua Liu, Jiaxin Gao, et al.
(2023), pp. 8185-8194
Open Access | Times Cited: 2

Iterative Adversarial Attack on Image-guided Story Ending Generation
Youze Wang, Wenbo Hu, Richang Hong
IEEE Transactions on Multimedia (2023) Vol. 26, pp. 6117-6130
Open Access | Times Cited: 2

Towards Video-Text Retrieval Adversarial Attack
Haozhe Yang, Yuhan Xiang, Ke Sun, et al.
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2024), pp. 6500-6504
Closed Access

VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models
Ziyi Yin, Muchao Ye, Tianrong Zhang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 7, pp. 6755-6763
Open Access

Break the Visual Perception: Adversarial Attacks Targeting Encoded Visual Tokens of Large Vision-Language Models
Yubo Wang, Chaohu Liu, Yanqiu Qu, et al.
(2024), pp. 1072-1081
Closed Access

Adversarial Robustification via Text-to-Image Diffusion Models
Daewon Choi, Jongheon Jeong, Huiwon Jang, et al.
Lecture notes in computer science (2024), pp. 158-177
Closed Access

Unveiling Typographic Deceptions: Insights of the Typographic Vulnerability in Large Vision-Language Models
Hao Cheng, Erjia Xiao, Jindong Gu, et al.
Lecture notes in computer science (2024), pp. 179-196
Closed Access

Adversarial Prompt Tuning for Vision-Language Models
Jiaming Zhang, Xingjun Ma, Xin Wang, et al.
Lecture notes in computer science (2024), pp. 56-72
Closed Access

Page 1 - Next Page
