OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you find this listing of citing articles useful!

Clicking an article title takes you to the article as listed in CrossRef. Clicking an Open Access link takes you to its "best Open Access location". Clicking a citation count opens this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
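If you'd rather pull this listing programmatically, the same data can be retrieved from the OpenAlex API. Below is a minimal sketch in Python using the `requests` library; the work ID is a placeholder, since the requested article's actual OpenAlex ID isn't shown on this page.

```python
# Minimal sketch: fetch the works that cite a given article from the OpenAlex API.
# WORK_ID is a placeholder (assumption), not a value taken from this page; look up
# the real ID first, e.g. by searching https://api.openalex.org/works
import requests

WORK_ID = "W0000000000"  # placeholder OpenAlex work ID for the requested article

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{WORK_ID}",   # works that cite the requested article
        "per-page": 25,                 # matches the 25-per-page listing here
        "sort": "cited_by_count:desc",  # most-cited citing articles first
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    print(work["display_name"], "| Times Cited:", work["cited_by_count"])
```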

Requested Article:

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al‐Rfou, Noah Constant
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021)
Open Access | Times Cited: 1606

Showing 1-25 of 1606 citing articles:

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, et al.
ACM Computing Surveys (2022) Vol. 55, Iss. 9, pp. 1-35
Open Access | Times Cited: 2013

Text Data Augmentation for Deep Learning
Connor Shorten, Taghi M. Khoshgoftaar, Borko Furht
Journal Of Big Data (2021) Vol. 8, Iss. 1
Open Access | Times Cited: 1420

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, et al.
International Journal of Computer Vision (2022) Vol. 130, Iss. 9, pp. 2337-2348
Closed Access | Times Cited: 1170

Conditional Prompt Learning for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)
Open Access | Times Cited: 690

Pre-trained models: Past, present and future
Xu Han, Zhengyan Zhang, Ning Ding, et al.
AI Open (2021) Vol. 2, pp. 225-250
Open Access | Times Cited: 636

Visual Prompt Tuning
Menglin Jia, Luming Tang, Bor-Chun Chen, et al.
Lecture notes in computer science (2022), pp. 709-727
Closed Access | Times Cited: 576

Recent Advances in Natural Language Processing via Large Pre-trained Language Models: A Survey
Bonan Min, Hayley Ross, Elior Sulem, et al.
ACM Computing Surveys (2023) Vol. 56, Iss. 2, pp. 1-40
Open Access | Times Cited: 558

Grounded Language-Image Pre-training
Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022), pp. 10955-10965
Open Access | Times Cited: 467

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 463

P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, et al.
(2022)
Open Access | Times Cited: 456

PTR: Prompt Tuning with Rules for Text Classification
Xu Han, Weilin Zhao, Ning Ding, et al.
AI Open (2022) Vol. 3, pp. 182-192
Open Access | Times Cited: 352

Learning to Prompt for Continual Learning
Zifeng Wang, Zizhao Zhang, Chen‐Yu Lee, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022), pp. 139-149
Open Access | Times Cited: 339

CLIP-Adapter: Better Vision-Language Models with Feature Adapters
Peng Gao, Shijie Geng, Renrui Zhang, et al.
International Journal of Computer Vision (2023) Vol. 132, Iss. 2, pp. 581-595
Closed Access | Times Cited: 330

Parameter-efficient fine-tuning of large-scale pre-trained language models
Ning Ding, Yujia Qin, Guang Yang, et al.
Nature Machine Intelligence (2023) Vol. 5, Iss. 3, pp. 220-235
Open Access | Times Cited: 311

KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction
Xiang Chen, Ningyu Zhang, Xin Xie, et al.
Proceedings of the ACM Web Conference 2022 (2022), pp. 2778-2788
Open Access | Times Cited: 279

MaPLe: Multi-modal Prompt Learning
Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023), pp. 19113-19122
Open Access | Times Cited: 276

Self-Instruct: Aligning Language Models with Self-Generated Instructions
Yi‐Zhong Wang, Yeganeh Kordi, Swaroop Mishra, et al.
(2023)
Open Access | Times Cited: 258

A Comprehensive Survey of Few-shot Learning: Evolution, Applications, Challenges, and Opportunities
Yisheng Song, Ting Wang, Puyu Cai, et al.
ACM Computing Surveys (2023) Vol. 55, Iss. 13s, pp. 1-40
Open Access | Times Cited: 236

GPT understands, too
Xiao Liu, Yanan Zheng, Zhengxiao Du, et al.
AI Open (2023) Vol. 5, pp. 208-215
Open Access | Times Cited: 236

Prompting Visual-Language Models for Efficient Video Understanding
Chen Ju, Tengda Han, Kunhao Zheng, et al.
Lecture notes in computer science (2022), pp. 105-124
Closed Access | Times Cited: 196

Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)
Shijie Geng, Shuchang Liu, Zuohui Fu, et al.
(2022), pp. 299-315
Closed Access | Times Cited: 193

Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP
Feng Liang, Bichen Wu, Xiaoliang Dai, et al.
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023), pp. 7061-7070
Open Access | Times Cited: 192

VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance
Katherine Crowson, Stella Biderman, Daniel Kornis, et al.
Lecture notes in computer science (2022), pp. 88-105
Open Access | Times Cited: 187

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models
Tianbao Xie, Chen Wu, Peng Shi, et al.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2022)
Open Access | Times Cited: 184

Page 1 - Next Page
