OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Finally, at the bottom of the page, you'll find basic pagination options.
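A paginated listing like this one can be reproduced directly against the OpenAlex API, which exposes citing works via the `cites:` filter on the `/works` endpoint. Below is a minimal sketch of how such a query URL might be built; the work ID `W0000000000` is a placeholder, not the ID of the requested article.

```python
from urllib.parse import urlencode

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Build an OpenAlex /works query listing articles that cite `work_id`."""
    params = {
        "filter": f"cites:{work_id}",  # works whose reference list includes work_id
        "page": page,                  # 1-based page number
        "per-page": per_page,          # results per page (OpenAlex allows up to 200)
    }
    return "https://api.openalex.org/works?" + urlencode(params)

# First page of citing articles, 25 at a time, matching this listing's layout:
print(citing_works_url("W0000000000"))
```

Fetching each page of the JSON response and reading the `results` array would yield the title, authors, venue, and `cited_by_count` shown for each entry here.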

Requested Article:

Showing 1-25 of 86 citing articles:

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 463

GPT-NeoX-20B: An Open-Source Autoregressive Language Model
Sidney Black, Stella Biderman, Eric Hallahan, et al.
(2022)
Open Access | Times Cited: 276

Cross-Task Generalization via Natural Language Crowdsourcing Instructions
Swaroop Mishra, Daniel Khashabi, Chitta Baral, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 191

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models
Tianbao Xie, Chen Wu, Peng Shi, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 184

PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 168

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 160

RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 79

MetaICL: Learning to Learn In Context
Sewon Min, Mike Lewis, Luke Zettlemoyer, et al.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 77

One Embedder, Any Task: Instruction-Finetuned Text Embeddings
Hongjin Su, Weijia Shi, Jungo Kasai, et al.
Findings of the Association for Computational Linguistics: ACL 2023 (2023)
Open Access | Times Cited: 69

CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
Qinyuan Ye, Bill Yuchen Lin, Xiang Ren
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
Open Access | Times Cited: 93

Surface Form Competition: Why the Highest Probability Answer Isn’t Always Right
Ari Holtzman, Peter West, Vered Shwartz, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021), pp. 7038-7051
Open Access | Times Cited: 89

RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction
Yew Ken Chia, Lidong Bing, Soujanya Poria, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2022)
Open Access | Times Cited: 63

State-of-the-art generalisation research in NLP: A taxonomy and review
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, et al.
arXiv (Cornell University) (2022)
Open Access | Times Cited: 39

Prompt text classifications with transformer models! An exemplary introduction to prompt-based learning with large language models
Christian Mayer, Sabrina Ludwig, Steffen Brandt
Journal of Research on Technology in Education (2022) Vol. 55, Iss. 1, pp. 125-141
Closed Access | Times Cited: 35

Meta-learning via Language Model In-context Tuning
Yanda Chen, Ruiqi Zhong, Sheng Zha, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 32

Prompt–RSVQA: Prompting visual context to a language model for Remote Sensing Visual Question Answering
Christel Chappuis, Valérie Zermatten, Sylvain Lobry, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2022), pp. 1371-1380
Open Access | Times Cited: 32

Zero-Shot Text Classification with Self-Training
Ariel Gera, Alon Halfon, Eyal Shnarch, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 31

Matching Exemplar as Next Sentence Prediction (MeNSP): Zero-Shot Prompt Learning for Automatic Scoring in Science Education
Xuansheng Wu, Xinyu He, Tianming Liu, et al.
Lecture notes in computer science (2023), pp. 401-413
Closed Access | Times Cited: 16

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Bonan Min, Hayley Ross, Elior Sulem, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 39

LLMaAA: Making Large Language Models as Active Annotators
Ruoyu Zhang, Yanzeng Li, Yongliang Ma, et al.
(2023)
Open Access | Times Cited: 13

LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5
Chengwei Qin, Shafiq Joty
arXiv (Cornell University) (2021)
Open Access | Times Cited: 30

Memobert: Pre-Training Model with Prompt-Based Learning for Multimodal Emotion Recognition
Jinming Zhao, Ruichen Li, Qin Jin, et al.
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2022), pp. 4703-4707
Open Access | Times Cited: 20

Page 1 - Next Page