OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to that article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.

Requested Article:

The Robustness of Counterfactual Explanations Over Time
Andrea Ferrario, Michele Loi
IEEE Access (2022) Vol. 10, pp. 82736-82750
Open Access | Times Cited: 20

Showing 20 citing articles:

Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions
Luca Longo, Mario Brčić, Federico Cabitza, et al.
Information Fusion (2024) Vol. 106, pp. 102301-102301
Open Access | Times Cited: 137

Leveraging explanations in interactive machine learning: An overview
Stefano Teso, Öznur Alkan, Wolfgang Stammer, et al.
Frontiers in Artificial Intelligence (2023) Vol. 6
Open Access | Times Cited: 31

Mathematical optimization modelling for group counterfactual explanations
Emilio Carrizosa, Jasone Ramírez-Ayerbe, Dolores Romero Morales
European Journal of Operational Research (2024) Vol. 319, Iss. 2, pp. 399-412
Open Access | Times Cited: 11

The Role of Humanization and Robustness of Large Language Models in Conversational Artificial Intelligence for Individuals With Depression: A Critical Analysis
Andrea Ferrario, Jana Sedláková, Manuel Trachsel
JMIR Mental Health (2024) Vol. 11, pp. e56569-e56569
Open Access | Times Cited: 9

Finding Regions of Counterfactual Explanations via Robust Optimization
Donato Maragno, Jannis Kurtz, Tabea E. Röber, et al.
INFORMS Journal on Computing (2024) Vol. 36, Iss. 5, pp. 1316-1334
Open Access | Times Cited: 8

How robust are ensemble machine learning explanations?
Maria Carla Calzarossa, Paolo Giudici, Rasha Zieni
Neurocomputing (2025), pp. 129686-129686
Open Access

Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change
Ignacy Stępka, Jerzy Stefanowski, Mateusz Lango
(2025), pp. 1277-1288
Closed Access

Setting the Right Expectations: Algorithmic Recourse Over Time
João Fonseca, Andrew Bell, Carlo Abrate, et al.
(2023), pp. 1-11
Open Access | Times Cited: 4

Counterfactual Explanations and Federated Learning for Enhanced Data Analytics Optimisation
Syed Irtija Hasan, Sonia Farhana Nimmy, Md. Sarwar Kamal
Springer Tracts in Nature-Inspired Computing (2024), pp. 21-43
Closed Access | Times Cited: 1

Counterfactual Explanations With Multiple Properties in Credit Scoring
Xolani Dastile, Turgay Çelik
IEEE Access (2024) Vol. 12, pp. 110713-110728
Open Access | Times Cited: 1

Generally-Occurring Model Change for Robust Counterfactual Explanations
Ao Xu, Tieru Wu
Lecture Notes in Computer Science (2024), pp. 215-229
Closed Access | Times Cited: 1

Robustness Implies Fairness in Causal Algorithmic Recourse
Ahmad-Reza Ehyaei, Amir-Hossein Karimi, Bernhard Schoelkopf, et al.
2022 ACM Conference on Fairness, Accountability, and Transparency (2023), pp. 984-1001
Open Access | Times Cited: 3

Generating Robust Counterfactual Explanations
Victor Guyomard, Françoise Fessant, Thomas Guyet, et al.
Lecture Notes in Computer Science (2023), pp. 394-409
Closed Access | Times Cited: 3

Better Luck Next Time: About Robust Recourse in Binary Allocation Problems
Meirav Segal, Anne-Marie George, Ingrid Chieh Yu, et al.
Communications in Computer and Information Science (2024), pp. 374-394
Closed Access

Interval Abstractions for Robust Counterfactual Explanations
Junqi Jiang, Francesco Leofante, Antonio Rago, et al.
Artificial Intelligence (2024) Vol. 336, pp. 104218-104218
Open Access

Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
Luca Longo, Mario Brčić, Federico Cabitza, et al.
arXiv (Cornell University) (2023)
Open Access | Times Cited: 1

Quantifying Actionability: Evaluating Human-Recipient Models
Nwaike Kelechi, Licheng Jiao
IEEE Access (2023) Vol. 11, pp. 119811-119823
Open Access

Page 1
