OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find utility in this listing of citing articles!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.

Requested Article:

Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting
Sanyuan Chen, Yutai Hou, Yiming Cui, et al.
(2020)
Open Access | Times Cited: 122

Showing 26-50 of 122 citing articles:

A Closer Look at How Fine-tuning Changes BERT
Yichu Zhou, Vivek Srikumar
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 20

Continual debiasing: A bias mitigation framework for natural language understanding systems
Mingyu Lee, Junho Kim, Jun-Hyung Park, et al.
Expert Systems with Applications (2025), pp. 126593-126593
Closed Access

Towards regression testing and regression-free update for deep learning systems
Shuyue Li, Ming Fan, Ting Liu
Knowledge-Based Systems (2025), pp. 113292-113292
Closed Access

Overcoming Catastrophic Forgetting During Domain Adaptation of Seq2seq Language Generation
Dingcheng Li, Zheng Chen, Eunah Cho, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 16

Parameter-efficient Modularised Bias Mitigation via AdapterFusion
Deepak Kumar, Oleg Lesota, George Zerveas, et al.
(2023)
Open Access | Times Cited: 9

Plug and Play Autoencoders for Conditional Text Generation
Florian Mai, Nikolaos Pappas, Ivan Montero, et al.
(2020), pp. 6076-6092
Open Access | Times Cited: 25

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
Prasetya Ajie Utama, Nafise Sadat Moosavi, Victor Sanh, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021), pp. 9063-9074
Open Access | Times Cited: 21

How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances
Zihan Zhang, Meng Fang, Ling Chen, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2023)
Open Access | Times Cited: 8

Where to start? Analyzing the potential value of intermediate models
Leshem Choshen, Elad Venezian, Shachar Don-Yehiya, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2023), pp. 1446-1470
Open Access | Times Cited: 7

Efficient Few-Shot Fine-Tuning for Opinion Summarization
Arthur Bražinskas, Ramesh Nallapati, Mohit Bansal, et al.
Findings of the Association for Computational Linguistics: NAACL 2022 (2022), pp. 1509-1523
Open Access | Times Cited: 11

Modeling function-level interactions for file-level bug localization
Hongliang Liang, Dengji Hang, Xiangyu Li
Empirical Software Engineering (2022) Vol. 27, Iss. 7
Closed Access | Times Cited: 11

Simple But Powerful, a Language-Supervised Method for Image Emotion Classification
Sinuo Deng, Lifang Wu, Ge Shi, et al.
IEEE Transactions on Affective Computing (2022) Vol. 14, Iss. 4, pp. 3317-3331
Closed Access | Times Cited: 11

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression
Runxin Xu, Fuli Luo, Chengyu Wang, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2022) Vol. 36, Iss. 10, pp. 11547-11555
Open Access | Times Cited: 10

Unsupervised Paraphrasing with Pretrained Language Models
Tong Niu, Semih Yavuz, Yingbo Zhou, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021), pp. 5136-5150
Open Access | Times Cited: 13

Parameter-Efficient Abstractive Question Answering over Tables or Text
Vaishali Pal, Evangelos Kanoulas, Maarten de Rijke
(2022)
Open Access | Times Cited: 9

Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference
Junhao Zheng, Qianli Ma, Shengjie Qiu, et al.
(2023)
Open Access | Times Cited: 5

Ultra Fast Speech Separation Model with Teacher Student Learning
Sanyuan Chen, Yu Wu, Zhuo Chen, et al.
Interspeech 2022 (2021), pp. 3026-3030
Open Access | Times Cited: 12

Event Transition Planning for Open-ended Text Generation
Qintong Li, Piji Li, Wei Bi, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2022), pp. 3412-3426
Open Access | Times Cited: 8

Restoring and Mining the Records of the Joseon Dynasty via Neural Language Modeling and Machine Translation
Kyeongpil Kang, Kyohoon Jin, Soyoung Yang, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021), pp. 4031-4042
Open Access | Times Cited: 11

HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation
Hongyi Yuan, Zheng Yuan, Chuanqi Tan, et al.
(2023), pp. 3246-3264
Open Access | Times Cited: 4

Fine-Tuning Deteriorates General Textual Out-of-Distribution Detection by Distorting Task-Agnostic Features
Sishuo Chen, Wenkai Yang, Xiaohan Bi, et al.
(2023), pp. 564-579
Open Access | Times Cited: 4

Leveraging deep learning and computer vision technologies to enhance management of coastal fisheries in the Pacific region
George Shedrawi, F. Magron, Bernard Vigga, et al.
Scientific Reports (2024) Vol. 14, Iss. 1
Open Access | Times Cited: 1

To What Extent Have LLMs Reshaped the Legal Domain So Far? A Scoping Literature Review
Bogdan Padiu, Radu Iacob, Traian Rebedea, et al.
Information (2024) Vol. 15, Iss. 11, pp. 662-662
Open Access | Times Cited: 1

Learning from Missing Relations: Contrastive Learning with Commonsense Knowledge Graphs for Commonsense Inference
Yong-Ho Jung, Jun-Hyung Park, Joon‐Young Choi, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2022), pp. 1514-1523
Open Access | Times Cited: 6
