OpenAlex Citation Counts


OpenAlex, named after the Library of Alexandria, is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

Clicking an article title takes you to the article as listed in Crossref. Clicking an Open Access link takes you to its "best Open Access location". Clicking a citation count opens this listing for that article. Lastly, you'll find basic pagination options at the bottom of the page.

Requested Article:

Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks
K. Mei, Sonia Fereidooni, Aylin Caliskan
2022 ACM Conference on Fairness, Accountability, and Transparency (2023), pp. 1699-1710
Open Access | Times Cited: 18

Showing 18 citing articles:

Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, et al.
Computational Linguistics (2024) Vol. 50, Iss. 3, pp. 1097-1179
Open Access | Times Cited: 86

History, development, and principles of large language models: an introductory survey
Zichong Wang, Zhibo Chu, Thang Viet Doan, et al.
AI and Ethics (2024)
Closed Access | Times Cited: 7

Aroeira: A Curated Corpus for the Portuguese Language with a Large Number of Tokens
Thiago Lira, Flávio Nakasato Cação, Cristiano Antonio de Souza, et al.
Lecture notes in computer science (2025), pp. 185-199
Closed Access

Overviewing Biases in Generative AI-Powered Models in the Arabic Language
Mussa Saidi Abubakari
Advances in computational intelligence and robotics book series (2025), pp. 361-390
Closed Access

Evaluating Biased Attitude Associations of Language Models in an Intersectional Context
Shiva Omrani Sabbaghi, Robert Wolfe, Aylin Caliskan
(2023), pp. 542-553
Open Access | Times Cited: 9

Artificial Intelligence, Bias, and Ethics
Aylin Caliskan
(2023), pp. 7007-7013
Open Access | Times Cited: 9

Auditing GPT's Content Moderation Guardrails: Can ChatGPT Write Your Favorite TV Show?
Yaaseen Mahomed, Charlie M. Crawford, Sanjana Gautam, et al.
2022 ACM Conference on Fairness, Accountability, and Transparency (2024) Vol. 2020, pp. 660-686
Open Access | Times Cited: 3

SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models
Manish Nagireddy, Lamogha Chiazor, Moninder Singh, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 19, pp. 21454-21462
Open Access | Times Cited: 2

The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis
Pranav Narayanan Venkit, Mukund Srinath, Sanjana Gautam, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2023), pp. 13743-13763
Open Access | Times Cited: 6

ROBBIE: Robust Bias Evaluation of Large Generative Language Models
David Esiobu, Xiaoqing Tan, Saghar Hosseini, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2023), pp. 3764-3814
Open Access | Times Cited: 4

SocialCounterfactuals: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples
Phillip Howard, Avinash Madasu, Tiep Le, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024) Vol. 35, pp. 11975-11985
Closed Access | Times Cited: 1

Model ChangeLists: Characterizing Updates to ML Models
Sabri Eyuboglu, Karan Goel, Arjun Desai, et al.
2022 ACM Conference on Fairness, Accountability, and Transparency (2024) Vol. 33, pp. 2432-2453
Open Access

Sensitive Topics Retrieval in Digital Libraries: A Case Study of ḥadīṯ collections
Giovanni Sullutrone, Riccardo Amerigo Vigliermo, Luca Sala, et al.
Lecture notes in computer science (2024), pp. 51-62
Closed Access

Adversarially Exploring Vulnerabilities in LLMs to Evaluate Social Biases
Yuya Jeremy Ong, Jay Gala, Sungeun An, et al.
2021 IEEE International Conference on Big Data (Big Data) (2024), pp. 5289-5297
Closed Access

The Power of Absence: Thinking with Archival Theory in Algorithmic Design
Jihan Sherman, Romi Ron Morrison, Lauren Klein, et al.
Designing Interactive Systems Conference (2024), pp. 214-223
Open Access

Identifying and Improving Disability Bias in GPT-Based Resume Screening
Kate Glazko, Yusuf Mohammed, Ben Kosa, et al.
2022 ACM Conference on Fairness, Accountability, and Transparency (2024) Vol. 37, pp. 687-700
Open Access

Speciesism in natural language processing research
Masashi Takeshita, Rafał Rzepka
AI and Ethics (2024)
Open Access

Global Voices, Local Biases: Socio-Cultural Prejudices across Languages
Anjishnu Mukherjee, Chahat Raj, Ziwei Zhu, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2023), pp. 15828-15845
Open Access

Page 1
