
OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!
If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count will open this listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
Requested Article:
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
Guanghui Qin, Jason Eisner
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021)
Open Access | Times Cited: 286
Showing 1-25 of 286 citing articles:
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, et al.
ACM Computing Surveys (2022) Vol. 55, Iss. 9, pp. 1-35
Open Access | Times Cited: 2013
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al‐Rfou, Noah Constant
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
Open Access | Times Cited: 1606
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang
(2021)
Open Access | Times Cited: 1551
On the Opportunities and Risks of Foundation Models
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 1539
Recent Advances in Natural Language Processing via Large Pre-trained Language Models: A Survey
Bonan Min, Hayley Ross, Elior Sulem, et al.
ACM Computing Surveys (2023) Vol. 56, Iss. 2, pp. 1-40
Open Access | Times Cited: 558
P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, et al.
(2022)
Open Access | Times Cited: 456
Pre-trained models for natural language processing: A survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, et al.
Science China Technological Sciences (2020) Vol. 63, Iss. 10, pp. 1872-1897
Closed Access | Times Cited: 439
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 244
GPT understands, too
Xiao Liu, Yanan Zheng, Zhengxiao Du, et al.
AI Open (2023) Vol. 5, pp. 208-215
Open Access | Times Cited: 236
Learning To Retrieve Prompts for In-Context Learning
Ohad Rubin, Jonathan Herzig, Jonathan Berant
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 207
Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)
Shijie Geng, Shuchang Liu, Zuohui Fu, et al.
(2022), pp. 299-315
Closed Access | Times Cited: 193
PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 168
Parameter-Efficient Transfer Learning with Diff Pruning
Demi Guo, Alexander M. Rush, Yoon Kim
(2021)
Open Access | Times Cited: 151
Do Prompt-Based Models Really Understand the Meaning of Their Prompts?
Albert Webson, Ellie Pavlick
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2022)
Open Access | Times Cited: 137
Deep Learning for Text Style Transfer: A Survey
Di Jin, Zhijing Jin, Zhiting Hu, et al.
Computational Linguistics (2021) Vol. 48, Iss. 1, pp. 155-205
Open Access | Times Cited: 137
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 136
PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Stephen Bach, Victor Sanh, Zheng Yong, et al.
(2022)
Open Access | Times Cited: 122
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
Robert Logan, Ivana Balažević, Eric Wallace, et al.
Findings of the Association for Computational Linguistics: ACL 2022 (2022)
Open Access | Times Cited: 105
ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation
Ziqin Zhou, Yinjie Lei, Bowen Zhang, et al.
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023)
Open Access | Times Cited: 85
Prompt for Extraction? PAIE: Prompting Argument Interaction for Event Argument Extraction
Yubo Ma, Zehao Wang, Yixin Cao, et al.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Open Access | Times Cited: 82
RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, et al.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 79
Paradigm Shift in Natural Language Processing
Tianxiang Sun, Xiang-Yang Liu, Xipeng Qiu, et al.
Machine Intelligence Research (2022) Vol. 19, Iss. 3, pp. 169-183
Open Access | Times Cited: 74
What does a platypus look like? Generating customized prompts for zero-shot image classification
Sarah I. Pratt, Ian Covert, Rosanne Liu, et al.
2023 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), pp. 15645-15655
Open Access | Times Cited: 73
Text Classification via Large Language Models
Xiaofei Sun, Xiaoya Li, Jiwei Li, et al.
(2023)
Open Access | Times Cited: 71
Demystifying Prompts in Language Models via Perplexity Estimation
Hila Gonen, Srini Iyer, Terra Blevins, et al.
(2023)
Open Access | Times Cited: 60