OpenAlex Citation Counts


OpenAlex, named after the Library of Alexandria, is an open-access bibliographic catalogue of scientific papers, authors, and institutions. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count opens this listing for that article. Finally, at the bottom of the page you'll find basic pagination options.
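Listings like this one can also be retrieved programmatically: the OpenAlex API exposes citing works through the `filter=cites:<work_id>` query on its `/works` endpoint, with `per-page` and `page` parameters for pagination. Below is a minimal sketch; the work ID shown is a hypothetical placeholder, not the actual OpenAlex ID of the GPT-NeoX-20B record.

```python
# Sketch: fetching citing articles from the OpenAlex API.
import json
import urllib.request

OPENALEX_API = "https://api.openalex.org/works"

def citing_works_url(work_id: str, per_page: int = 25, page: int = 1) -> str:
    """Build the OpenAlex query URL for works that cite `work_id`."""
    return (f"{OPENALEX_API}?filter=cites:{work_id}"
            f"&per-page={per_page}&page={page}")

def fetch_citing_works(work_id: str, per_page: int = 25, page: int = 1) -> list:
    """Return (title, cited_by_count) pairs for one page of citing works."""
    with urllib.request.urlopen(citing_works_url(work_id, per_page, page)) as resp:
        data = json.load(resp)
    return [(w["display_name"], w["cited_by_count"]) for w in data["results"]]

if __name__ == "__main__":
    # "W0000000000" is a placeholder ID used for illustration only.
    for title, count in fetch_citing_works("W0000000000"):
        print(f"{title} | Times Cited: {count}")
```

Each response also carries a `meta.count` field with the total number of citing works, which is where the "Times Cited" figures on this page come from.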

Requested Article:

GPT-NeoX-20B: An Open-Source Autoregressive Language Model
Sidney Black, Stella Biderman, Eric Hallahan, et al.
(2022)
Open Access | Times Cited: 276

Showing 1-25 of 276 citing articles:

VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance
Katherine Crowson, Stella Biderman, Daniel Kornis, et al.
Lecture notes in computer science (2022), pp. 88-105
Open Access | Times Cited: 187

Wordcraft: Story Writing With Large Language Models
Ann Yuan, Andy Coenen, Emily Reif, et al.
(2022)
Open Access | Times Cited: 181

Automated Program Repair in the Era of Large Pre-trained Language Models
Chunqiu Steven Xia, Yuxiang Wei, Lingming Zhang
(2023), pp. 1482-1494
Closed Access | Times Cited: 142

Structured information extraction from scientific text with large language models
John Dagdelen, Alexander Dunn, Sang‐Hoon Lee, et al.
Nature Communications (2024) Vol. 15, Iss. 1
Open Access | Times Cited: 137

A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges
Mohaimenul Azam Khan Raiaan, Md. Saddam Hossain Mukta, Kaniz Fatema, et al.
IEEE Access (2024) Vol. 12, pp. 26839-26874
Open Access | Times Cited: 122

SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Potsawee Manakul, Adian Liusie, Mark Gales
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023)
Open Access | Times Cited: 120

TaleBrush: Sketching Stories with Generative Pretrained Language Models
John Joon Young Chung, Woo Seok Kim, Kang Min Yoo, et al.
CHI Conference on Human Factors in Computing Systems (2022)
Closed Access | Times Cited: 110

Can large language models reason about medical questions?
Valentin Liévin, Christoffer Hother, Andreas Geert Motzfeldt, et al.
Patterns (2024) Vol. 5, Iss. 3, pp. 100943-100943
Open Access | Times Cited: 102

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Cheng-Yu Hsieh, Chun‐Liang Li, Chih‐Kuan Yeh, et al.
Findings of the Association for Computational Linguistics: ACL 2023 (2023)
Open Access | Times Cited: 92

DeepSpeed-Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale
Reza Yazdani Aminabadi, Samyam Rajbhandari, Ammar Ahmad Awan, et al.
(2022), pp. 1-15
Open Access | Times Cited: 91

Machine-Generated Text: A Comprehensive Survey of Threat Models and Detection Methods
Evan Crothers, Nathalie Japkowicz, Herna L. Viktor
IEEE Access (2023) Vol. 11, pp. 70977-71002
Open Access | Times Cited: 77

Is GPT-3 a Good Data Annotator?
Bosheng Ding, Chengwei Qin, Linlin Liu, et al.
(2023)
Open Access | Times Cited: 77

CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Benchmarking on HumanEval-X
Qinkai Zheng, Xia Xiao, Xu Zou, et al.
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2023), pp. 5673-5684
Open Access | Times Cited: 66

Protein structure generation via folding diffusion
Kevin Wu, Kevin Yang, Rianne van den Berg, et al.
Nature Communications (2024) Vol. 15, Iss. 1
Open Access | Times Cited: 56

Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
Byung-Doh Oh, William Schuler
Transactions of the Association for Computational Linguistics (2023) Vol. 11, pp. 336-350
Open Access | Times Cited: 52

The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset
Hugo Laurençon, Lucile Saulnier, Thomas J. Wang, et al.
arXiv (Cornell University) (2023)
Open Access | Times Cited: 50

Natural Language Generation and Understanding of Big Code for AI-Assisted Programming: A Review
M.F. Wong, Shangxin Guo, Ching Nam Hang, et al.
Entropy (2023) Vol. 25, Iss. 6, pp. 888-888
Open Access | Times Cited: 49

A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets
Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, et al.
Findings of the Association for Computational Linguistics: ACL 2023 (2023)
Open Access | Times Cited: 49

Large Language Models Meet NL2Code: A Survey
Daoguang Zan, Bei Chen, Fengji Zhang, et al.
(2023)
Open Access | Times Cited: 49

Aisha: A Custom AI Library Chatbot Using the ChatGPT API
Yrjö Lappalainen, Nikesh Narayanan
Journal of Web Librarianship (2023) Vol. 17, Iss. 3, pp. 37-58
Closed Access | Times Cited: 44

Editing Large Language Models: Problems, Methods, and Opportunities
Yunzhi Yao, Peng Wang, Bozhong Tian, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 10222-10240
Open Access | Times Cited: 44

GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Archiki Prasad, Peter Hase, Xiang Zhou, et al.
(2023)
Open Access | Times Cited: 41

A survey of safety and trustworthiness of large language models through the lens of verification and validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, et al.
Artificial Intelligence Review (2024) Vol. 57, Iss. 7
Open Access | Times Cited: 30

Page 1 - Next Page
