OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to that article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.

Requested Article:

Stop Uploading Test Data in Plain Text: Practical Strategies for Mitigating Data Contamination by Evaluation Benchmarks
Alon Jacovi, Avi Caciularu, Omer Goldman, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 5075-5084
Open Access | Times Cited: 17

Showing 17 citing articles:

Almanac — Retrieval-Augmented Language Models for Clinical Medicine
Cyril Zakka, Rohan Shad, Akash Chaurasia, et al.
NEJM AI (2024) Vol. 1, Iss. 2
Open Access | Times Cited: 112

Task Contamination: Language Models May Not Be Few-Shot Anymore
Changmao Li, Jeffrey Flanigan
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 16, pp. 18471-18480
Open Access | Times Cited: 10

NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, et al.
(2023)
Open Access | Times Cited: 13

Musical heritage historical entity linking
Arianna Graciotti, Nicolas Lazzari, Valentina Presutti, et al.
Artificial Intelligence Review (2025) Vol. 58, Iss. 5
Open Access

A Review of the Challenges with Massive Web-Mined Corpora Used in Large Language Models Pre-training
Michał Perełkiewicz, Rafał Poświata
Lecture notes in computer science (2025), pp. 153-163
Closed Access

LatestEval: Addressing Data Contamination in Language Model Evaluation through Dynamic and Time-Sensitive Test Construction
Yucheng Li, Frank Guérin, Chenghua Lin
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 17, pp. 18600-18607
Open Access | Times Cited: 1

Evaluating the Performance of Large Language Models in Predicting Diagnostics for Spanish Clinical Cases in Cardiology
J. Delaunay, J. Cusidó
Applied Sciences (2024) Vol. 15, Iss. 1, pp. 61-61
Open Access | Times Cited: 1

Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering
Avi Caciularu, Matthew E. Peters, Jacob Goldberger, et al.
(2023), pp. 1970-1989
Open Access | Times Cited: 4

Counting the Bugs in ChatGPT’s Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model
Leonie Weissweiler, Valentin Hofmann, Anjali Kantharuban, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 6508-6524
Open Access | Times Cited: 4

LEIA: Linguistic Embeddings for the Identification of Affect
Segun Taofeek Aroyehun, Lukas Malik, H. Metzler, et al.
EPJ Data Science (2023) Vol. 12, Iss. 1
Open Access | Times Cited: 3

CRAB: Assessing the Strength of Causal Relationships Between Real-world Events
Angelika Romanou, Syrielle Montariol, Debjit Paul, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023)
Open Access | Times Cited: 2

Language-Models-as-a-Service: Overview of a New Paradigm and its Challenges
Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, et al.
Journal of Artificial Intelligence Research (2024) Vol. 80, pp. 1497-1523
Open Access

Robust Pronoun Fidelity with English LLMs: Are they Reasoning, Repeating, or Just Biased?
Vagrant Gautam, Eileen Bingert, Dawei Zhu, et al.
Transactions of the Association for Computational Linguistics (2024) Vol. 12, pp. 1755-1779
Open Access

A Comprehensive Evaluation of Tool-Assisted Generation Strategies
Alon Jacovi, Avi Caciularu, Jonathan Herzig, et al.
(2023), pp. 13856-13878
Open Access | Times Cited: 1

QTSumm: Query-Focused Summarization over Tabular Data
Yilun Zhao, Zhenting Qi, Linyong Nan, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 1157-1172
Open Access | Times Cited: 1

GitBug-Actions: Building Reproducible Bug-Fix Benchmarks with GitHub Actions
Nuno Saavedra, André Silva, Martin Monperrus
arXiv (Cornell University) (2023)
Open Access

Page 1