OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
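If you prefer to query this kind of citation listing programmatically, a minimal sketch using the public OpenAlex REST API (the /works endpoint with a cites: filter) is shown below. The work ID W0000000000 is a placeholder, not the ID of the requested article, and the 25-results-per-page setting simply mirrors the listing on this page.

import requests

# Sketch: fetch works that cite a given article via the OpenAlex API.
# W0000000000 is a placeholder OpenAlex work ID -- substitute the ID of
# the requested article.
WORK_ID = "W0000000000"

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{WORK_ID}",  # works citing the given work
        "per-page": 25,                # 25 per page, as in this listing
        "page": 1,
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    title = work.get("display_name")
    cited_by = work.get("cited_by_count")
    is_oa = work.get("open_access", {}).get("is_oa")
    print(f"{title} | Open Access: {is_oa} | Times Cited: {cited_by}")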

Requested Article:

Explainable AI for Time Series Classification: A Review, Taxonomy and Research Directions
Andreas Theissler, Francesco Spinnato, Udo Schlegel, et al.
IEEE Access (2022) Vol. 10, pp. 100700-100724
Open Access | Times Cited: 87

Showing 1-25 of 87 citing articles:

Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
Sajid Ali, Tamer Abuhmed, Shaker El-Sappagh, et al.
Information Fusion (2023) Vol. 99, pp. 101805-101805
Open Access | Times Cited: 593

Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions
Luca Longo, Mario Brčić, Federico Cabitza, et al.
Information Fusion (2024) Vol. 106, pp. 102301-102301
Open Access | Times Cited: 124

Benchmarking and survey of explanation methods for black box models
Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, et al.
Data Mining and Knowledge Discovery (2023) Vol. 37, Iss. 5, pp. 1719-1778
Open Access | Times Cited: 95

Explainable Generative AI (GenXAI): a survey, conceptualization, and research agenda
Johannes Schneider
Artificial Intelligence Review (2024) Vol. 57, Iss. 11
Open Access | Times Cited: 12

Explainable AI for time series via Virtual Inspection Layers
Johanna Vielhaben, Sebastian Lapuschkin, Grégoire Montavon, et al.
Pattern Recognition (2024) Vol. 150, pp. 110309-110309
Open Access | Times Cited: 10

SHAP value-based ERP analysis (SHERPA): Increasing the sensitivity of EEG signals with explainable AI methods
Sophia Sylvester, Merle Sagehorn, Thomas Gruber, et al.
Behavior Research Methods (2024) Vol. 56, Iss. 6, pp. 6067-6081
Open Access | Times Cited: 6

WAE: An evaluation metric for attribution-based XAI on time series forecasting
Yueshan Chen, Sihai Zhang
Neurocomputing (2025), pp. 129379-129379
Closed Access

Wearable MOF Biosensors: A New Frontier in Real-Time Health Monitoring
Navid Rabiee
TrAC Trends in Analytical Chemistry (2025), pp. 118156-118156
Closed Access

Multivariate Asynchronous Shapelets for Imbalanced Car Crash Predictions
M. H. G. Bianchi, Francesco Spinnato, Riccardo Guidotti, et al.
Lecture Notes in Computer Science (2025), pp. 150-166
Closed Access

Understandable time frame-based biosignal processing
Hamed Rafiei, Mohammad-R. Akbarzadeh-T
Biomedical Signal Processing and Control (2025) Vol. 103, pp. 107429-107429
Closed Access

An effective approach for early fuel leakage detection with enhanced explainability
Ruimin Chu, Li Chik, Yiliao Song, et al.
Intelligent Systems with Applications (2025), pp. 200504-200504
Open Access

Short- and long-term forecasting for building energy consumption considering IPMVP recommendations, WEO and COP27 scenarios
Greicili dos Santos Ferreira, Deilson Martins dos Santos, Sérgio Luciano Ávila, et al.
Applied Energy (2023) Vol. 339, pp. 120980-120980
Closed Access | Times Cited: 14

Explainable artificial intelligence for feature selection in network traffic classification: A comparative study
Pouya Khani, Elham Moeinaddini, Narges Dehghan Abnavi, et al.
Transactions on Emerging Telecommunications Technologies (2024) Vol. 35, Iss. 4
Closed Access | Times Cited: 3

Marine mucilage mapping with explained deep learning model using water-related spectral indices: a case study of Dardanelles Strait, Turkey
Elif Özlem Yılmaz, Hasan Tonbul, Taşkın Kavzoğlu
Stochastic Environmental Research and Risk Assessment (2023) Vol. 38, Iss. 1, pp. 51-68
Closed Access | Times Cited: 8

Explainable AI: Machine Learning Interpretation in Blackcurrant Powders
Krzysztof Przybył
Sensors (2024) Vol. 24, Iss. 10, pp. 3198-3198
Open Access | Times Cited: 2

Cost of Explainability in AI: An Example with Credit Scoring Models
Jean Dessain, Nora Bentaleb, Fabien Vinas
Communications in Computer and Information Science (2023), pp. 498-516
Open Access | Times Cited: 7

Tree-Based Modeling for Large-Scale Management in Agriculture: Explaining Organic Matter Content in Soil
W.O. Lee, Juhwan Lee
Applied Sciences (2024) Vol. 14, Iss. 5, pp. 1811-1811
Open Access | Times Cited: 2

A Novel Video-Based Methodology for Automated Classification of Dystonia and Choreoathetosis in Dyskinetic Cerebral Palsy During a Lower Extremity Task
Helga Haberfehlner, Z Roth, Inti Vanmechelen, et al.
Neurorehabilitation and neural repair (2024) Vol. 38, Iss. 7, pp. 479-492
Closed Access | Times Cited: 2

Robust explainer recommendation for time series classification
T. T. H. Nguyen, Thach Le Nguyen, Georgiana Ifrim
Data Mining and Knowledge Discovery (2024) Vol. 38, Iss. 6, pp. 3372-3413
Open Access | Times Cited: 2

XAI for Time Series Classification: Evaluating the Benefits of Model Inspection for End-Users
Brigt Håvardstun, Cèsar Ferri, Kristian Flikka, et al.
Communications in Computer and Information Science (2024), pp. 439-453
Closed Access | Times Cited: 1

Understanding Any Time Series Classifier with a Subsequence-based Explainer
Francesco Spinnato, Riccardo Guidotti, Anna Monreale, et al.
ACM Transactions on Knowledge Discovery from Data (2023) Vol. 18, Iss. 2, pp. 1-34
Open Access | Times Cited: 4

A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI
Udo Schlegel, Daniel A. Keim
Communications in Computer and Information Science (2023), pp. 165-180
Closed Access | Times Cited: 4

Page 1 - Next Page
