OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to the article's CrossRef record, and clicking an Open Access link takes you to its "best Open Access location". Clicking a citation count opens this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
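If you prefer to pull this listing programmatically, the same data is exposed by the OpenAlex API: the /works endpoint accepts a cites: filter, and page / per-page parameters drive the pagination shown at the bottom of this page. The Python sketch below is a minimal, illustrative example; it assumes the requests package is installed, and the work ID is a placeholder, not the real OpenAlex identifier of the requested article.

    # Minimal sketch: list works that cite a given OpenAlex work.
    # WORK_ID below is a hypothetical placeholder.
    import requests

    OPENALEX_API = "https://api.openalex.org/works"
    WORK_ID = "W0000000000"  # placeholder for "Perturbation Augmentation for Fairer NLP"

    params = {
        "filter": f"cites:{WORK_ID}",  # works whose reference list includes WORK_ID
        "per-page": 25,                # page size, matching the 25 entries shown here
        "page": 1,
    }
    response = requests.get(OPENALEX_API, params=params, timeout=30)
    response.raise_for_status()
    data = response.json()

    print(f"Showing {len(data['results'])} of {data['meta']['count']} citing articles")
    for work in data["results"]:
        title = work.get("title")
        cited_by = work.get("cited_by_count")
        # best_oa_location is OpenAlex's "best Open Access location" for the work, if any
        oa_url = (work.get("best_oa_location") or {}).get("landing_page_url")
        print(f"{title} | Times Cited: {cited_by} | {oa_url or 'Closed Access'}")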

Requested Article:

Perturbation Augmentation for Fairer NLP
Rebecca Qian, Candace Ross, Jude Fernandes, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 35

Showing 1-25 of 35 citing articles:

Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, et al.
Computational Linguistics (2024) Vol. 50, Iss. 3, pp. 1097-1179
Open Access | Times Cited: 86

Towards trustworthy LLMs: a review on debiasing and dehallucinating in large language models
Zichao Lin, Shuyan Guan, Wending Zhang, et al.
Artificial Intelligence Review (2024) Vol. 57, Iss. 9
Open Access | Times Cited: 15

ChatGPT vs. Bard: A Comparative Study
Imtiaz Ahmed, Ayon Roy, Mashrafi Kajol, et al.
Authorea (2023)
Open Access | Times Cited: 26

A survey of recent methods for addressing AI fairness and bias in biomedicine
Yifan Yang, Mingquan Lin, Han Zhao, et al.
Journal of Biomedical Informatics (2024) Vol. 154, Article 104646
Open Access | Times Cited: 10

Dual use concerns of generative AI and large language models
Alexei Grinbaum, Laurynas Adomaitis
Journal of Responsible Innovation (2024) Vol. 11, Iss. 1
Open Access | Times Cited: 9

A survey on LLM-based multi-agent systems: workflow, infrastructure, and challenges
Xinyi Li, S. Wang, Siqi Zeng, et al.
Vicinagearth (2024) Vol. 1, Iss. 1
Open Access | Times Cited: 8

“I’m sorry to hear that”: Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric M. Smith, Melissa Hall, Melanie Kambadur, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 32

Unlearning Bias in Language Models by Partitioning Gradients
Charles Yu, Sullam Jeoung, Anish Kasi, et al.
Findings of the Association for Computational Linguistics: ACL 2023 (2023)
Open Access | Times Cited: 16

LIBRA: Measuring Bias of Large Language Model from a Local Context
B. Y. Pang, Tingrui Qiao, Caroline Walker, et al.
Lecture Notes in Computer Science (2025), pp. 1-16
Closed Access

Multi-VALUE: A Framework for Cross-Dialectal English NLP
Caleb Ziems, William A. Held, Jingfeng Yang, et al.
(2023)
Open Access | Times Cited: 9

What about “em”? How Commercial Machine Translation Fails to Handle (Neo-)Pronouns
Anne Lauscher, Debora Nozza, Ehm Hjorth Miltersen, et al.
(2023), pp. 377-392
Open Access | Times Cited: 7

MABEL: Attenuating Gender Bias using Textual Entailment Data
Jacqueline He, Mengzhou Xia, Christiane Fellbaum, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022), pp. 9681-9702
Open Access | Times Cited: 11

ChatGPT vs. Bard: A Comparative Study
Imtiaz Ahmed, Mashrafi Kajol, Uzma Hasan, et al.
(2023)
Open Access | Times Cited: 5

Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models
Eddie L. Ungless, Björn Roß, Anne Lauscher
Findings of the Association for Computational Linguistics: ACL 2023 (2023), pp. 7919-7942
Open Access | Times Cited: 5

Exploiting Biased Models to De-bias Text: A Gender-Fair Rewriting Model
Chantal Amrhein, Florian Schottmann, Rico Sennrich, et al.
(2023), pp. 4486-4506
Open Access | Times Cited: 4

Toxicity in Multilingual Machine Translation at Scale
Marta R. Costa‐jussà, Eric E. Smith, Christophe Ropers, et al.
(2023), pp. 9570-9586
Open Access | Times Cited: 4

Using Captum to Explain Generative Language Models
Vivek Miglani, Aobo Yang, Aram Markosyan, et al.
(2023), pp. 165-173
Open Access | Times Cited: 4

Leveraging Diffusion Perturbations for Measuring Fairness in Computer Vision
Nicholas Lui, Bryan Chia, William Berrios, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 13, pp. 14220-14228
Open Access | Times Cited: 1

TADA : Task Agnostic Dialect Adapters for English
William A. Held, Caleb Ziems, Diyi Yang
Findings of the Association for Computational Linguistics: ACL 2023 (2023), pp. 813-824
Open Access | Times Cited: 3

Evaluate & Evaluation on the Hub: Better Best Practices for Data and Model Measurements
Leandro Von Werra, Lewis Tunstall, Abhishek Thakur, et al.
(2022)
Open Access | Times Cited: 5

ChatGPT vs. Bard: A Comparative Study
Imtiaz Ahmed, Mashrafi Kajol, Uzma Hasan, et al.
(2023)
Open Access | Times Cited: 2

Targeting the Source: Selective Data Curation for Debiasing NLP Models
Yacine Gaci, Boualem Benatallah, Fabio Casati, et al.
Lecture Notes in Computer Science (2023), pp. 276-294
Closed Access | Times Cited: 2

Page 1 - Next Page
