OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.

Requested Article:

Can The Crowd Identify Misinformation Objectively?
Kevin Roitero, Michael Soprano, Shaoyang Fan, et al.
(2020)
Open Access | Times Cited: 22

Showing 22 citing articles:

The state of human-centered NLP technology for fact-checking
Anubrata Das, Houjiang Liu, Venelin Kovatchev, et al.
Information Processing & Management (2022) Vol. 60, Iss. 2, pp. 103219-103219
Open Access | Times Cited: 38

Crowds Can Effectively Identify Misinformation at Scale
Cameron Martel, Jennifer Allen, Gordon Pennycook, et al.
Perspectives on Psychological Science (2023) Vol. 19, Iss. 2, pp. 477-488
Open Access | Times Cited: 33

Nowhere to Hide: Online Rumor Detection Based on Retweeting Graph Neural Networks
Bo Liu, Xiangguo Sun, Qing Meng, et al.
IEEE Transactions on Neural Networks and Learning Systems (2022) Vol. 35, Iss. 4, pp. 4887-4898
Closed Access | Times Cited: 29

The many dimensions of truthfulness: Crowdsourcing misinformation assessments on a multidimensional scale
Michael Soprano, Kevin Roitero, David La Barbera, et al.
Information Processing & Management (2021) Vol. 58, Iss. 6, pp. 102710-102710
Open Access | Times Cited: 36

On the effect of relevance scales in crowdsourcing relevance assessments for Information Retrieval evaluation
Kevin Roitero, Eddy Maddalena, Stefano Mizzaro, et al.
Information Processing & Management (2021) Vol. 58, Iss. 6, pp. 102688-102688
Closed Access | Times Cited: 35

Measuring fake news acumen using a news media literacy instrument
Tyler W. S. Nagel
Journal of Media Literacy Education (2022) Vol. 14, Iss. 1, pp. 29-42
Open Access | Times Cited: 16

Understanding the Role of Explanation Modality in AI-assisted Decision-making
Vincent Robbemond, Oana Inel, Ujwal Gadiraju
(2022)
Open Access | Times Cited: 16

The State of Pilot Study Reporting in Crowdsourcing: A Reflection on Best Practices and Guidelines
Jonas Oppenlaender, Tahir Abbas, Ujwal Gadiraju
Proceedings of the ACM on Human-Computer Interaction (2024) Vol. 8, Iss. CSCW1, pp. 1-45
Open Access | Times Cited: 2

Who's in the Crowd Matters: Cognitive Factors and Beliefs Predict Misinformation Assessment Accuracy
Robert A. Kaufman, Michael Robert Haupt, Steven P. Dow
Proceedings of the ACM on Human-Computer Interaction (2022) Vol. 6, Iss. CSCW2, pp. 1-18
Open Access | Times Cited: 10

Ethically Motivated or Emotionally Charged? Examining Relationships Among Moral Norms, Anticipated Negative Emotions, and Laypeople’s Online Misinformation Correction Intentions
Yang Hu, Anfan Chen, Yu Yang, et al.
Mass Communication & Society (2024) Vol. 27, Iss. 5, pp. 1158-1187
Closed Access | Times Cited: 1

Combining Large Language Models and Crowdsourcing for Hybrid Human-AI Misinformation Detection
Xia Zeng, David La Barbera, Kevin Roitero, et al.
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2024), pp. 2332-2336
Open Access | Times Cited: 1

Can the crowd judge truthfulness? A longitudinal study on recent misinformation about COVID-19
Kevin Roitero, Michael Soprano, Beatrice Portelli, et al.
Personal and Ubiquitous Computing (2021) Vol. 27, Iss. 1, pp. 59-89
Open Access | Times Cited: 11

Combining Human and Machine Confidence in Truthfulness Assessment
Yunke Qu, Kevin Roitero, David La Barbera, et al.
Journal of Data and Information Quality (2022) Vol. 15, Iss. 1, pp. 1-17
Closed Access | Times Cited: 6

Designing and Evaluating Presentation Strategies for Fact-Checked Content
Danula Hettiachchi, Kaixin Ji, Jenny Kennedy, et al.
(2023), pp. 751-761
Open Access | Times Cited: 3

Watch ’n’ Check: Towards a Social Media Monitoring Tool to Assist Fact-Checking Experts
Assunta Cerone, Elham Naghizade, Falk Scholer, et al.
2022 IEEE 9th International Conference on Data Science and Advanced Analytics (DSAA) (2020), pp. 607-613
Closed Access | Times Cited: 8

Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs
Clemencia Siro, Mohammad Aliannejadi, Maarten de Rijke
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2024), pp. 1952-1962
Open Access

Assessing the Quality of Online Reviews Using Formal Argumentation Theory
Davide Ceolin, Giuseppe Primiero, Jan Wielemaker, et al.
Lecture notes in computer science (2021), pp. 71-87
Closed Access | Times Cited: 4

NewsComp: Facilitating Diverse News Reading through Comparative Annotation
Md Momen Bhuiyan, Sang Won Lee, Nitesh Goyal, et al.
(2023), pp. 1-17
Open Access | Times Cited: 1

Learning from Crowds with Annotation Reliability
Zhi Cao, Enhong Chen, Ye Huang, et al.
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2023), pp. 2103-2107
Closed Access | Times Cited: 1



Universitas (2023), Iss. 38
Open Access
