OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking the citation count will open this listing for that article. Lastly, at the bottom of the page, you'll find basic pagination options.
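For readers who want to reproduce a listing like this programmatically, here is a minimal sketch against the public OpenAlex works endpoint (https://api.openalex.org/works). The work ID W0000000000 is a placeholder rather than the identifier of the requested article, and sorting by citation count is only an assumption about how this page ranks results.

```python
# Minimal sketch: fetch one page of articles that cite a given OpenAlex work.
# The work ID below is a placeholder, not the ID of the requested article.
import requests

OPENALEX_WORKS = "https://api.openalex.org/works"

def citing_articles(work_id: str, page: int = 1, per_page: int = 25):
    """Yield (title, cited_by_count) for works citing the given OpenAlex ID."""
    params = {
        "filter": f"cites:{work_id}",
        "sort": "cited_by_count:desc",  # assumed ordering: most-cited first
        "page": page,
        "per-page": per_page,
    }
    response = requests.get(OPENALEX_WORKS, params=params, timeout=30)
    response.raise_for_status()
    for work in response.json()["results"]:
        yield work["title"], work.get("cited_by_count", 0)

if __name__ == "__main__":
    # Example: print the first page of citing articles for a placeholder ID.
    for title, count in citing_articles("W0000000000"):
        print(f"{title} | Times Cited: {count}")
```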

Requested Article:

Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Runxin Xu, Fuli Luo, Zhiyuan Zhang, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
Open Access | Times Cited: 102

Showing 1-25 of 102 citing articles:

On the Effectiveness of Parameter-Efficient Fine-Tuning
Zihao Fu, Haoran Yang, Anthony Man-Cho So, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2023) Vol. 37, Iss. 11, pp. 12799-12807
Open Access | Times Cited: 50

Composable Sparse Fine-Tuning for Cross-Lingual Transfer
Alan Ansell, Edoardo Maria Ponti, Anna Korhonen, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022), pp. 1778-1796
Open Access | Times Cited: 44

State-of-the-art generalisation research in NLP: A taxonomy and review
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, et al.
arXiv (Cornell University) (2022)
Open Access | Times Cited: 39

Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data
Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, et al.
arXiv (Cornell University) (2023)
Open Access | Times Cited: 27

Comparative Evaluation of Commercial Large Language Models on PromptBench: An English and Chinese Perspective
Shiyu Wang, Qian Ouyang, Bing Wang
Research Square (Research Square) (2024)
Open Access | Times Cited: 15

Knowledge Accuracy and Reducing Hallucinations in LLMs via Dynamic Domain Knowledge Injection
Roman Capellini, Frank Atienza, Melanie Sconfield
Research Square (Research Square) (2024)
Open Access | Times Cited: 9

PanDa: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation
Qihuang Zhong, Liang Ding, Juhua Liu, et al.
IEEE Transactions on Knowledge and Data Engineering (2024) Vol. 36, Iss. 9, pp. 4835-4848
Open Access | Times Cited: 8

Evaluating the Utilities of Foundation Models in Single-cell Data Analysis
Tianyu Liu, Kexing Li, Yuge Wang, et al.
bioRxiv (Cold Spring Harbor Laboratory) (2023)
Open Access | Times Cited: 17

A deep transfer learning model for the deformation of braced excavations with limited monitoring data
Yuanqin Tao, Shaoxiang Zeng, Tiantian Ying, et al.
Journal of Rock Mechanics and Geotechnical Engineering (2024)
Open Access | Times Cited: 7

Potential of Multimodal Large Language Models for Data Mining of Medical Images and Free-text Reports
Yutong Zhang, Yi Pan, Tianyang Zhong, et al.
Meta-Radiology (2024), pp. 100103-100103
Open Access | Times Cited: 7

In-House Knowledge Management Using a Large Language Model: Focusing on Technical Specification Documents Review
J.B. Lee, Wooyong Jung, Seungwon Baek
Applied Sciences (2024) Vol. 14, Iss. 5, pp. 2096-2096
Open Access | Times Cited: 6

NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better
Chuhan Wu, Fangzhao Wu, Tao Qi, et al.
(2022), pp. 680-685
Open Access | Times Cited: 27

DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning
Enze Xie, Lewei Yao, Han Shi, et al.
2023 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), pp. 4207-4216
Open Access | Times Cited: 13

Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning
Haoyu He, Jianfei Cai, Jing Zhang, et al.
2023 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), pp. 11791-11801
Open Access | Times Cited: 13

MFB: A Generalized Multimodal Fusion Approach for Bitcoin Price Prediction Using Time-Lagged Sentiment and Indicator Features
Ping Han, Hui Chen, Abdur Rasool, et al.
Expert Systems with Applications (2024) Vol. 261, pp. 125515-125515
Open Access | Times Cited: 5

FineDiffusion: scaling up diffusion models for fine-grained image generation with 10,000 classes
Ziying Pan, Kun Wang, Gang Li, et al.
Applied Intelligence (2025) Vol. 55, Iss. 4
Open Access

The fine art of fine-tuning: A structured review of advanced LLM fine-tuning techniques
Samar Pratap, Alston Richard Aranha, Dinesh Kumar, et al.
Natural Language Processing Journal (2025), pp. 100144-100144
Open Access

Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification
Olesya Razuvayevskaya, Benjamin M. Wu, João Leite, et al.
PLoS ONE (2024) Vol. 19, Iss. 5, pp. e0301738-e0301738
Open Access | Times Cited: 4

Continual debiasing: A bias mitigation framework for natural language understanding systems
Mingyu Lee, Junho Kim, Jun-Hyung Park, et al.
Expert Systems with Applications (2025), pp. 126593-126593
Closed Access

Binary mask tuning on gradient: Towards multi-data question answering
Chuanyang Gong, Zhihua Wei, Ping Zhu, et al.
Knowledge-Based Systems (2025), pp. 113505-113505
Closed Access

How Accurate are GPT-3’s Hypotheses About Social Science Phenomena?
Hannes Rosenbusch, Claire E. Stevenson, Han L. J. van der Maas
Deleted Journal (2023) Vol. 2, Iss. 2
Open Access | Times Cited: 10

A case study for automated attribute extraction from legal documents using large language models
Subinay Adhikary, Procheta Sen, Dwaipayan Roy, et al.
Artificial Intelligence and Law (2024)
Open Access | Times Cited: 3

Privacy-Preserving Split Learning for Large-Scaled Vision Pre-Training
Zhousheng Wang, Geng Yang, Hua Dai, et al.
IEEE Transactions on Information Forensics and Security (2023) Vol. 18, pp. 1539-1553
Closed Access | Times Cited: 8

Hadamard Adapter: An Extreme Parameter-Efficient Adapter Tuning Method for Pre-trained Language Models
Yuyan Chen, Qiang Fu, Ge Fan, et al.
(2023), pp. 276-285
Closed Access | Times Cited: 8

Page 1 - Next Page