
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!
If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
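If you'd like to reproduce a listing like this programmatically, the sketch below queries the public OpenAlex API for works that cite a given article and prints the same fields shown here (title, year, citation count, best Open Access location). The work ID is a placeholder, not the real OpenAlex ID of the requested article; look up the actual "W..." identifier on openalex.org before running.

```python
import requests

# Placeholder OpenAlex work ID -- an assumption, not the real ID of the
# requested article. Look up the actual "W..." ID on https://openalex.org.
WORK_ID = "W0000000000"


def citing_articles(work_id, mailto="you@example.com"):
    """Yield every work that cites `work_id`, using OpenAlex cursor pagination."""
    cursor = "*"
    while cursor:
        resp = requests.get(
            "https://api.openalex.org/works",
            params={
                "filter": f"cites:{work_id}",
                "per-page": 50,
                "cursor": cursor,
                "mailto": mailto,  # identifies you for OpenAlex's polite pool
            },
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        yield from data["results"]
        cursor = data["meta"].get("next_cursor")  # None once all pages are consumed


for work in citing_articles(WORK_ID):
    oa = work.get("best_oa_location") or {}
    print(
        work["display_name"],
        work.get("publication_year"),
        f"Times Cited: {work.get('cited_by_count', 0)}",
        oa.get("landing_page_url", "Closed Access"),
        sep=" | ",
    )
```

Adding sort=cited_by_count:desc to the request parameters mirrors the descending "Times Cited" order used in the listing below.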
Requested Article:
UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
Aaron Chan, Maziar Sanjabi, Lambert Mathias, et al.
(2022), pp. 51-67
Open Access | Times Cited: 14
Showing 14 citing articles:
Explainability for Large Language Models: A Survey
Haiyan Zhao, Hanjie Chen, Fan Yang, et al.
ACM Transactions on Intelligent Systems and Technology (2024) Vol. 15, Iss. 2, pp. 1-38
Open Access | Times Cited: 161
Rationalization for explainable NLP: a survey
Sai Gurrapu, Ajay Kulkarni, Lifu Huang, et al.
Frontiers in Artificial Intelligence (2023) Vol. 6
Open Access | Times Cited: 20
REV: Information-Theoretic Evaluation of Free-Text Rationales
Hanjie Chen, Faeze Brahman, Xiang Ren, et al.
(2023), pp. 2007-2030
Open Access | Times Cited: 9
Think Rationally about What You See: Continuous Rationale Extraction for Relation Extraction
Xuming Hu, Zhaochen Hong, Chenwei Zhang, et al.
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (2023), pp. 2436-2440
Open Access | Times Cited: 7
ExClaim: Explainable Neural Claim Verification Using Rationalization
Sai Gurrapu, Lifu Huang, Feras A. Batarseh
(2022), pp. 19-26
Open Access | Times Cited: 5
Rationalization for Explainable NLP: A Survey
Sai Gurrapu, Ajay Kulkarni, Lifu Huang, et al.
arXiv (Cornell University) (2023)
Open Access | Times Cited: 2
Defending Privacy Inference Attacks to Federated Learning for Intelligent IoT with Parameter Compression
Yongsheng Zhu, Hongbo Cao, Yuange Ren, et al.
Security and Communication Networks (2023) Vol. 2023, pp. 1-12
Open Access | Times Cited: 2
XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models
Dong-Ho Lee, Akshen Kadakia, Brihi Joshi, et al.
(2023), pp. 264-273
Open Access | Times Cited: 2
Are Human Explanations Always Helpful? Towards Objective Evaluation of Human Natural Language Explanations
Bingsheng Yao, Prithviraj Sen, Lucian Popa, et al.
(2023), pp. 14698-14713
Open Access | Times Cited: 2
Enhancing the Rationale-Input Alignment for Self-explaining Rationalization
Wei Liu, Haozhao Wang, Jun Wang, et al.
2024 IEEE 40th International Conference on Data Engineering (ICDE) (2024), pp. 2218-2230
Open Access
Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales
Lucas E. Resck, Marcos M. Raimundo, Jorge Poco
Findings of the Association for Computational Linguistics: NAACL 2024 (2024), pp. 4190-4216
Closed Access
REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization
Mohammad Reza Ghasemi Madani, Pasquale Minervini
(2023), pp. 587-602
Open Access | Times Cited: 1
Recent Development on Extractive Rationale for Model Interpretability: A Survey
Hao Wang, Yong Dou
(2022)
Closed Access | Times Cited: 2
ER-Test: Evaluating Explanation Regularization Methods for Language Models
Brihi Joshi, Aaron Chan, Ziyi Liu, et al.
(2022), pp. 3315-3336
Open Access | Times Cited: 2