
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!
If you click an article title, you'll be taken to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location", and clicking a citation count opens this same listing for that article. At the bottom of the page you'll find basic pagination options.
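If you'd rather script this than click through it, the same listing can be pulled from the OpenAlex API. Here is a minimal sketch in Python using the requests library; it assumes you already know the OpenAlex work ID of the requested article (the WORK_ID below is a placeholder, not the real ID for this paper):

```python
import requests

# Placeholder OpenAlex work ID; substitute the real ID of the requested article.
WORK_ID = "W0000000000"

# Query the OpenAlex Works endpoint for everything that cites this work,
# 25 results per page to mirror the listing shown on this page.
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"cites:{WORK_ID}", "per-page": 25, "page": 1},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

print(f"Showing {len(data['results'])} of {data['meta']['count']} citing articles:")
for work in data["results"]:
    year = work.get("publication_year")
    cited = work.get("cited_by_count")
    print(f"- {work['display_name']} ({year}) | Times Cited: {cited}")
```

Increasing the page parameter walks through the remaining results, which is what the pagination controls at the bottom of this page do for you.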
Requested Article:
We Need to Consider Disagreement in Evaluation
Valerio Basile, Michael Fell, Tommaso Fornaciari, et al.
(2021)
Open Access | Times Cited: 60
Showing 1-25 of 60 citing articles:
Learning from Disagreement: A Survey
Alexandra Uma, Tommaso Fornaciari, Dirk Hovy, et al.
Journal of Artificial Intelligence Research (2021) Vol. 72, pp. 1385-1470
Open Access | Times Cited: 82
Two Contrasting Data Annotation Paradigms for Subjective NLP Tasks
Paul Röttger, Bertie Vidgen, Dirk Hovy, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 57
The “Problem” of Human Label Variation: On Ground Truth in Data, Modeling and Evaluation
Barbara Plank
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 53
SemEval-2023 Task 11: Learning with Disagreements (LeWiDi)
Elisa Leonardelli, Gavin Abercrombie, Dina Almanea, et al.
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023) (2023)
Open Access | Times Cited: 25
Methodology for Obtaining High-Quality Speech Corpora
Alicja Wieczorkowska
Applied Sciences (2025) Vol. 15, Iss. 4, pp. 1848-1848
Open Access
SafetyKit: First Aid for Measuring Safety in Open-domain Conversational Systems
Emily Dinan, Gavin Abercrombie, A. Bergman, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 27
Data-centric annotation analysis for plant disease detection: Strategy, consistency, and performance
Jiuqing Dong, John J. Lee, Alvaro Fuentes, et al.
Frontiers in Plant Science (2022) Vol. 13
Open Access | Times Cited: 19
Why Don’t You Do It Right? Analysing Annotators’ Disagreement in Subjective Tasks
Marta Sandri, Elisa Leonardelli, Sara Tonelli, et al.
(2023), pp. 2428-2441
Open Access | Times Cited: 12
Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future
Jan-Christoph Klie, Bonnie Webber, Iryna Gurevych
Computational Linguistics (2022) Vol. 49, Iss. 1, pp. 157-198
Open Access | Times Cited: 17
Stop Measuring Calibration When Humans Disagree
Joris Baan, Wilker Aziz, Barbara Plank, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022), pp. 1892-1915
Open Access | Times Cited: 17
What’s the Meaning of Superhuman Performance in Today’s NLU?
Simone Tedeschi, Johan Bos, Thierry Declerck, et al.
(2023), pp. 12471-12491
Open Access | Times Cited: 6
EPIC: Multi-Perspective Annotation of a Corpus of Irony
Simona Frenda, Alessandro Pedrani, Valerio Basile, et al.
(2023), pp. 13844-13857
Open Access | Times Cited: 6
Statistical Methods for Annotation Analysis
Silviu Paun, Ron Artstein, Massimo Poesio
Synthesis Lectures on Human Language Technologies (2022) Vol. 15, Iss. 1, pp. 1-217
Open Access | Times Cited: 10
Scaling and Disagreements: Bias, Noise, and Ambiguity
Alexandra Uma, Dina Almanea, Massimo Poesio
Frontiers in Artificial Intelligence (2022) Vol. 5
Open Access | Times Cited: 10
CrossRE: A Cross-Domain Dataset for Relation Extraction
Elisa Bassignana, Barbara Plank
(2022), pp. 3592-3604
Open Access | Times Cited: 9
Which Examples Should be Multiply Annotated? Active Learning When Annotators May Disagree
Connor Baumler, Anna Sotnikova, Hal Daumé
Findings of the Association for Computational Linguistics: ACL 2023 (2023)
Open Access | Times Cited: 5
Syntax and prejudice: ethically-charged biases of a syntax-based hate speech recognizer unveiled
Michele Mastromattei, Leonardo Ranaldi, Francesca Fallucchi, et al.
PeerJ Computer Science (2022) Vol. 8, pp. e859-e859
Open Access | Times Cited: 8
Federated Learning for Exploiting Annotators’ Disagreements in Natural Language Processing
Nuria Rodríguez-Barroso, Eugenio Martínez‐Cámara, José Camacho-Collados, et al.
Transactions of the Association for Computational Linguistics (2024) Vol. 12, pp. 630-648
Open Access | Times Cited: 1
Perspectivist approaches to natural language processing: a survey
Simona Frenda, Gavin Abercrombie, Valerio Basile, et al.
Language Resources and Evaluation (2024)
Open Access | Times Cited: 1
Don’t waste a single annotation: improving single-label classifiers through soft labels
Benjamin M. Wu, Yue Li, Yida Mu, et al.
(2023)
Open Access | Times Cited: 4
Temporal and Second Language Influence on Intra-Annotator Agreement and Stability in Hate Speech Labelling
Gavin Abercrombie, Dirk Hovy, Vinodkumar Prabhakaran
(2023)
Open Access | Times Cited: 3
VECHR: A Dataset for Explainable and Robust Classification of Vulnerability Type in the European Court of Human Rights
Shanshan Xu, Leon Staufer, Santosh T.y.s.s, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 11738-11752
Open Access | Times Cited: 3
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)
(2022)
Open Access | Times Cited: 5