OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click the Open Access links, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
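Because this listing is generated from OpenAlex data, the same citing-works list can be pulled directly from the public OpenAlex API. Below is a minimal Python sketch using the requests library; the work ID is a placeholder, not the actual OpenAlex ID of the requested article, and the printed fields (title, cited-by count, best Open Access location) mirror the columns shown on this page.

import requests

# Minimal sketch: list the works that cite a given OpenAlex work.
# W0000000000 is a placeholder ID, not the record for the article above.
WORK_ID = "W0000000000"

resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{WORK_ID}", "per-page": 25},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print(f"Citing works found: {data['meta']['count']}")
for work in data["results"]:
    title = work.get("display_name")
    cited_by = work.get("cited_by_count")
    # best_oa_location may be null for paywalled works
    oa_url = (work.get("best_oa_location") or {}).get("landing_page_url")
    print(f"- {title} (Times Cited: {cited_by}) {oa_url or ''}")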

Requested Article:

COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements
Xuhui Zhou, Hao Zhu, Akhila Yerukola, et al.
Findings of the Association for Computational Linguistics: ACL 2023 (2023)
Open Access | Times Cited: 7

Showing 7 citing articles:

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
Taylor Sorensen, Liwei Jiang, Jena D. Hwang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 18, pp. 19937-19947
Open Access | Times Cited: 7

From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models
Julia Mendelsohn, Ronan Le Bras, Yejin Choi, et al.
(2023), pp. 15162-15180
Open Access | Times Cited: 9

“Fifty Shades of Bias”: Normative Ratings of Gender Bias in GPT Generated English Text
Rishav Hada, Agrima Seth, Harshita Diddee, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 1862-1876
Open Access | Times Cited: 4

Don’t Take This Out of Context!: On the Need for Contextual Models and Evaluations for Stylistic Rewriting
Akhila Yerukola, Xuhui Zhou, Elizabeth Clark, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 11419-11444
Open Access | Times Cited: 2

Hate Speech Detection and Bias in Supervised Text Classification
Thomas Davidson
Oxford University Press eBooks (2023)
Open Access | Times Cited: 1

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
Taylor Sorensen, Liwei Jiang, Jena D. Hwang, et al.
arXiv (Cornell University) (2023)
Open Access

BiasX: “Thinking Slow” in Toxic Content Moderation with Explanations of Implied Social Biases
Yiming Zhang, Sravani Nanduri, Liwei Jiang, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 4920-4932
Open Access

Page 1
