OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!

Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to the "best Open Access location". Clicking a citation count opens this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
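For anyone curious how a listing like this can be generated, here is a minimal sketch that queries the public OpenAlex API for works citing a given article. The work ID below is a hypothetical placeholder, not the actual OpenAlex ID of the requested article, and the output format is only an approximation of this page.

```python
import requests

# Sketch: fetch works that cite a given OpenAlex work, 25 per page,
# using the public OpenAlex REST API.
OPENALEX_API = "https://api.openalex.org/works"
WORK_ID = "W0000000000"  # hypothetical placeholder, not the real ID

params = {
    "filter": f"cites:{WORK_ID}",  # works whose references include WORK_ID
    "per-page": 25,                # matches the 25-per-page listing here
    "cursor": "*",                 # cursor pagination; reuse meta.next_cursor for later pages
}

response = requests.get(OPENALEX_API, params=params, timeout=30)
response.raise_for_status()
data = response.json()

for work in data["results"]:
    title = work.get("display_name")
    year = work.get("publication_year")
    cited_by = work.get("cited_by_count", 0)
    oa = work.get("best_oa_location") or {}
    print(f"{title} ({year}) | Times Cited: {cited_by}")
    if oa.get("landing_page_url"):
        print(f"  Open Access: {oa['landing_page_url']}")
```

Subsequent pages would come from re-requesting with the `meta.next_cursor` value returned in each response, which is what the pagination options at the bottom of the page correspond to.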

Requested Article:

Effects of Explanations in AI-Assisted Decision Making: Principles and Comparisons
Xinru Wang, Ming Yin
ACM Transactions on Interactive Intelligent Systems (2022) Vol. 12, Iss. 4, pp. 1-36
Open Access | Times Cited: 31

Showing 1-25 of 31 citing articles:

What large language models know and what people think they know
Mark Steyvers, Heliodoro Tejeda, Aakriti Kumar, et al.
Nature Machine Intelligence (2025)
Open Access | Times Cited: 2

Human-LLM Collaborative Annotation Through Effective Verification of LLM Labels
Xinru Wang, Hannah Kim, Sajjadur Rahman, et al.
(2024), pp. 1-21
Open Access | Times Cited: 10

How informative is your XAI? Assessing the quality of explanations through information power
Marco Matarese, Francesco Rea, Katharina J. Rohlfing, et al.
Frontiers in Computer Science (2025) Vol. 6
Open Access

The Influence of Curiosity Traits and On-Demand Explanations in AI-Assisted Decision-Making
Federico Maria Cau, Lucio Davide Spano
(2025), pp. 1440-1457
Closed Access

Less or More: Towards Glanceable Explanations for LLM Recommendations Using Ultra-Small Devices
Xinru Wang, Mengjie Yu, Hannah Nguyen, et al.
(2025), pp. 938-951
Closed Access

Interpretability Vs Explainability: The Black Box of Machine Learning
Devottam Gaurav, Sanju Tiwari
(2023)
Closed Access | Times Cited: 12

The rationality of explanation or human capacity? Understanding the impact of explainable artificial intelligence on human-AI trust and decision performance
Ping Wang, Heng Ding
Information Processing & Management (2024) Vol. 61, Iss. 4, pp. 103732-103732
Closed Access | Times Cited: 4

User‐Centered Evaluation of Explainable Artificial Intelligence (XAI): A Systematic Literature Review
Noor Al-Ansari, Dena Al-Thani, Reem S. Al-Mansoori
Human Behavior and Emerging Technologies (2024) Vol. 2024, Iss. 1
Open Access | Times Cited: 4

Harnessing the Power of AI in Qualitative Research: Exploring, Using and Redesigning ChatGPT
H. Zhang, Chuhao Wu, Jingyi Xie, et al.
Computers in Human Behavior Artificial Humans (2025), pp. 100144-100144
Open Access

Integrity-based Explanations for Fostering Appropriate Trust in AI Agents
Siddharth Mehrotra, Carolina Centeio Jorge, Catholijn M. Jonker, et al.
ACM Transactions on Interactive Intelligent Systems (2023) Vol. 14, Iss. 1, pp. 1-36
Open Access | Times Cited: 9

I Know This Looks Bad, But I Can Explain: Understanding When AI Should Explain Actions In Human-AI Teams
Rui Zhang, Christopher Flathmann, Geoff Musick, et al.
ACM Transactions on Interactive Intelligent Systems (2023) Vol. 14, Iss. 1, pp. 1-23
Open Access | Times Cited: 7

Impact of Model Interpretability and Outcome Feedback on Trust in AI
Daehwan Ahn, Abdullah Almaatouq, Monisha Gulabani, et al.
(2024), pp. 1-25
Open Access | Times Cited: 2

The Importance of Distrust in AI
Tobias M. Peters, Roel W. Visser
Communications in computer and information science (2023), pp. 301-317
Closed Access | Times Cited: 6

The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features
Navita Goyal, Connor Baumler, Tin Nguyen, et al.
(2024) Vol. 104, pp. 155-180
Open Access | Times Cited: 1

How the Types of Consequences in Social Scoring Systems Shape People's Perceptions and Behavioral Reactions
Carmen Loefflad, Jens Großklags
2022 ACM Conference on Fairness, Accountability, and Transparency (2024) Vol. 4, pp. 1515-1530
Open Access | Times Cited: 1

VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making
Anindya Das Antar, Somayeh Molaei, Yan-Ying Chen, et al.
(2024), pp. 1-21
Closed Access | Times Cited: 1

Building Appropriate Trust in AI: The Significance of Integrity-Centered Explanations
Siddharth Mehrotra, Carolina Centeio Jorge, Catholijn M. Jonker, et al.
Frontiers in artificial intelligence and applications (2023)
Open Access | Times Cited: 2

Explainable Artificial Intelligence
Luca Longo
Communications in computer and information science (2023)
Closed Access | Times Cited: 2

A Diachronic Perspective on User Trust in AI under Uncertainty
Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El‐Assady, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2023), pp. 5567-5580
Open Access | Times Cited: 2

Conversations Towards Practiced AI – HCI Heuristics
Kem-Laurin Lubin
Lecture notes in computer science (2022), pp. 377-390
Closed Access | Times Cited: 3

Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments
Md Shajalal, Alexander Boden, Gunnar Stevens, et al.
Communications in computer and information science (2024), pp. 418-440
Closed Access

Understanding the Evolvement of Trust Over Time within Human-AI Teams
Wen Duan, Shiwen Zhou, Matthew J. Scalia, et al.
Proceedings of the ACM on Human-Computer Interaction (2024) Vol. 8, Iss. CSCW2, pp. 1-31
Open Access

Page 1 - Next Page
