OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, basic pagination options appear at the bottom of the page.
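Listings like this one can be reproduced directly against the OpenAlex API, which exposes citing works through the `cites` filter on its works endpoint. The sketch below builds the request URL for a given work; the work ID passed in any call is a placeholder you would look up yourself, not the ID of the article on this page.

```python
def build_citations_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build an OpenAlex query URL listing works that cite `work_id`.

    OpenAlex paginates results; `per_page` of 25 matches the
    "Showing 1-25" chunks on pages like this one.
    """
    return (
        "https://api.openalex.org/works"
        f"?filter=cites:{work_id}&per-page={per_page}&page={page}"
    )


# Example with a hypothetical OpenAlex work ID (W0000000000 is a placeholder):
# build_citations_url("W0000000000", page=2)
```

Fetching that URL (for example with `urllib.request` or `requests`) returns a JSON payload whose `results` array carries title, authorship, venue, and `cited_by_count` fields, which is all the information shown per entry below.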

Requested Article:

Adversarial Attacks on Node Embeddings via Graph Poisoning
Aleksandar Bojchevski, Stephan Günnemann
arXiv (Cornell University) (2018)
Open Access | Times Cited: 89

Showing 1-25 of 89 citing articles:

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Han Xu, Yao Ma, Hao-Chen Liu, et al.
International Journal of Automation and Computing (2020) Vol. 17, Iss. 2, pp. 151-178
Open Access | Times Cited: 538

Adversarial Attacks on Neural Networks for Graph Data
Daniel Zügner, Amir Akbarnejad, Stephan Günnemann
(2019), pp. 6246-6250
Open Access | Times Cited: 275

Adversarial Attack and Defense on Graph Data: A Survey
Lichao Sun, Yingtong Dou, Carl Yang, et al.
IEEE Transactions on Knowledge and Data Engineering (2022), pp. 1-20
Open Access | Times Cited: 178

Graph Neural Network: A Comprehensive Review on Non-Euclidean Space
Nurul A. Asif, Yeahia Sarker, Ripon K. Chakrabortty, et al.
IEEE Access (2021) Vol. 9, pp. 60588-60606
Open Access | Times Cited: 134

A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models
Heng Chang, Yu Rong, Tingyang Xu, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2020) Vol. 34, Iss. 04, pp. 3389-3396
Open Access | Times Cited: 118

Certifiable Robustness and Robust Training for Graph Convolutional Networks
Daniel Zügner, Stephan Günnemann
(2019), pp. 246-256
Open Access | Times Cited: 115

Rumor Detection on Social Media with Graph Structured Adversarial Learning
Xiaoyu Yang, Yuefei Lyu, Tian Tian, et al.
(2020), pp. 1417-1423
Open Access | Times Cited: 101

Transferring Robustness for Graph Neural Network Against Poisoning Attacks
Xianfeng Tang, Yandong Li, Yiwei Sun, et al.
(2020)
Open Access | Times Cited: 81

Adversarial Attack on Large Scale Graph
Jintang Li, Tao Xie, Liang Chen, et al.
IEEE Transactions on Knowledge and Data Engineering (2021), pp. 1-1
Open Access | Times Cited: 66

Unnoticeable Backdoor Attacks on Graph Neural Networks
Enyan Dai, Minhua Lin, X. D. Zhang, et al.
Proceedings of the ACM Web Conference 2023 (2023), pp. 2263-2273
Open Access | Times Cited: 23

Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
Kaidi Xu, Hongge Chen, Sijia Liu, et al.
arXiv (Cornell University) (2019)
Open Access | Times Cited: 61

Scalable attack on graph data by injecting vicious nodes
Jihong Wang, Minnan Luo, Fnu Suya, et al.
Data Mining and Knowledge Discovery (2020) Vol. 34, Iss. 5, pp. 1363-1389
Open Access | Times Cited: 55

LINKTELLER: Recovering Private Edges from Graph Neural Networks via Influence Analysis
Fan Wu, Yunhui Long, Ce Zhang, et al.
2022 IEEE Symposium on Security and Privacy (SP) (2022), pp. 2005-2024
Open Access | Times Cited: 37

Model Stealing Attacks Against Inductive Graph Neural Networks
Yun Shen, Xinlei He, Yufei Han, et al.
2022 IEEE Symposium on Security and Privacy (SP) (2022)
Open Access | Times Cited: 36

Trustworthy Graph Neural Networks: Aspects, Methods and Trends
He Zhang, Bang Ye Wu, Xingliang Yuan, et al.
arXiv (Cornell University) (2022)
Open Access | Times Cited: 35

Link Prediction Adversarial Attack Via Iterative Gradient Attack
Jinyin Chen, Xiang Lin, Ziqiang Shi, et al.
IEEE Transactions on Computational Social Systems (2020) Vol. 7, Iss. 4, pp. 1081-1094
Closed Access | Times Cited: 43

Multiscale Evolutionary Perturbation Attack on Community Detection
Jinyin Chen, Yi-Xian Chen, Lihong Chen, et al.
IEEE Transactions on Computational Social Systems (2020) Vol. 8, Iss. 1, pp. 62-75
Open Access | Times Cited: 42

Adversarial Label-Flipping Attack and Defense for Graph Neural Networks
Mengmei Zhang, Linmei Hu, Chuan Shi, et al.
2021 IEEE International Conference on Data Mining (ICDM) (2020), pp. 791-800
Closed Access | Times Cited: 42

Understanding Structural Vulnerability in Graph Convolutional Networks
Liang Chen, Jintang Li, Qibiao Peng, et al.
(2021), pp. 2249-2255
Open Access | Times Cited: 33

Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies
Wei Jin, Yaxin Li, Han Xu, et al.
arXiv (Cornell University) (2020)
Open Access | Times Cited: 36

MGA: Momentum Gradient Attack on Network
Jinyin Chen, Yi-Xian Chen, Haibin Zheng, et al.
IEEE Transactions on Computational Social Systems (2020) Vol. 8, Iss. 1, pp. 99-109
Open Access | Times Cited: 33

Time-aware Gradient Attack on Dynamic Network Link Prediction
Jinyin Chen, Jian Zhang, Zhi Chen, et al.
IEEE Transactions on Knowledge and Data Engineering (2021), pp. 1-1
Open Access | Times Cited: 32

Unsupervised Adversarially Robust Representation Learning on Graphs
Jiarong Xu, Yang Yang, Junru Chen, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2022) Vol. 36, Iss. 4, pp. 4290-4298
Open Access | Times Cited: 19

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Han Xu, Yao Ma, Haochen Liu, et al.
arXiv (Cornell University) (2019)
Closed Access | Times Cited: 35

Network Representation Learning: From Traditional Feature Learning to Deep Learning
Ke Sun, Lei Wang, Bo Xu, et al.
IEEE Access (2020) Vol. 8, pp. 205600-205617
Open Access | Times Cited: 29
