
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
Click an article title to navigate to the article as listed in CrossRef. Click an Open Access link to navigate to the article's "best Open Access location". Click a citation count to open this listing for that article. Basic pagination options appear at the bottom of the page.
Requested Article:
Surface Form Competition: Why the Highest Probability Answer Isn’t Always Right
Ari Holtzman, Peter West, Vered Shwartz, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021), pp. 7038-7051
Open Access | Times Cited: 89
Showing 1-25 of 89 citing articles:
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, et al.
ACM Computing Surveys (2022) Vol. 55, Iss. 9, pp. 1-35
Open Access | Times Cited: 2013
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
Sewon Min, Xinxi Lyu, Ari Holtzman, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 378
Learning To Retrieve Prompts for In-Context Learning
Ohad Rubin, Jonathan Herzig, Jonathan Berant
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 207
Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
Shengding Hu, Ning Ding, Huadong Wang, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 175
Can language models learn from explanations in context?
Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, et al.
(2022)
Open Access | Times Cited: 85
MetaICL: Learning to Learn In Context
Sewon Min, Mike Lewis, Luke Zettlemoyer, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 77
Demystifying Prompts in Language Models via Perplexity Estimation
Hila Gonen, Srini Iyer, Terra Blevins, et al.
(2023)
Open Access | Times Cited: 60
SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization
Hyunwoo Kim, Jack Hessel, Liwei Jiang, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2023), pp. 12930-12949
Open Access | Times Cited: 41
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections
Ruiqi Zhong, Kristy Lee, Zheng Zhang, et al.
(2021)
Open Access | Times Cited: 86
Reframing Human-AI Collaboration for Generating Free-Text Explanations
Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022), pp. 632-658
Open Access | Times Cited: 65
Noisy Channel Language Model Prompting for Few-Shot Text Classification
Sewon Min, Mike Lewis, Hannaneh Hajishirzi, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 64
ZeroGen: Efficient Zero-shot Learning via Dataset Generation
Jiacheng Ye, Jiahui Gao, Qintong Li, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022), pp. 11653-11669
Open Access | Times Cited: 61
Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Jaehun Jung, Lianhui Qin, Sean Welleck, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022), pp. 1266-1279
Open Access | Times Cited: 51
Active Example Selection for In-Context Learning
Yiming Zhang, Feng Shi, Chenhao Tan
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 48
Complementary Explanations for Effective In-Context Learning
Xi Ye, Srinivasan Iyer, Aslı Çelikyılmaz, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2023), pp. 4469-4484
Open Access | Times Cited: 27
Language models, like humans, show content effects on reasoning tasks
Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, et al.
PNAS Nexus (2024) Vol. 3, Iss. 7
Open Access | Times Cited: 14
On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model
Seongjin Shin, Sang-Woo Lee, Hwijeen Ahn, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 25
A Systematic Investigation of Commonsense Knowledge in Large Language Models
Xiang Lorraine Li, Adhiguna Kuncoro, Jordan Hoffmann, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 23
RankGen: Improving Text Generation with Large Ranking Models
Kalpesh Krishna, Yapei Chang, John Wieting, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022), pp. 199-232
Open Access | Times Cited: 23
Nonparametric Masked Language Modeling
Sewon Min, Weijia Shi, Michael Lewis, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2023)
Open Access | Times Cited: 13
Mitigating Label Biases for In-context Learning
Yu Fei, Yifan Hou, Zeming Chen, et al.
(2023), pp. 14014-14031
Open Access | Times Cited: 13
Using rhetorical strategies to design prompts: a human-in-the-loop approach to make AI useful
Nupoor Ranade, Marly Saravia, Aditya Johri
AI & Society (2024)
Open Access | Times Cited: 5
Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
Andrew K. Lampinen
Computational Linguistics (2024), pp. 1-36
Open Access | Times Cited: 5
Category-aware self-training for extremely weakly supervised text classification
Jing Su
Expert Systems with Applications (2025) Vol. 269, pp. 126431-126431
Closed Access
Metadata Conditioning Accelerates Language Model Pre-training
Tianyu Gao, Alexander Wettig, Luxi He, et al.
(2025)
Open Access