OpenAlex Citation Counts

OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location". Clicking a citation count opens this same listing for that article. Finally, you'll find basic pagination options at the bottom of the page.
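A listing like this can also be reproduced programmatically. The sketch below builds the query URL for the public OpenAlex `/works` endpoint with its `cites:` filter, which returns the works citing a given OpenAlex work ID, 25 per page as shown here. The work ID used in the example is a hypothetical placeholder, not the real ID of the requested article.

```python
# Sketch: building an OpenAlex API query for the articles that cite a work.
# The /works endpoint, the `cites:` filter, and the `page`/`per-page`
# parameters are part of the public OpenAlex API; the work ID below is a
# placeholder for illustration only.
from urllib.parse import urlencode

OPENALEX_WORKS = "https://api.openalex.org/works"

def citing_works_url(work_id: str, page: int = 1, per_page: int = 25) -> str:
    """Return the URL listing works that cite `work_id`, most-cited first."""
    params = {
        "filter": f"cites:{work_id}",       # only works citing this ID
        "sort": "cited_by_count:desc",      # order by citation count
        "page": page,                       # pagination, as on this page
        "per-page": per_page,
    }
    return f"{OPENALEX_WORKS}?{urlencode(params)}"

# W0000000000 is a placeholder; substitute the real OpenAlex work ID.
url = citing_works_url("W0000000000")
```

Fetching that URL (e.g. with `urllib.request` or `requests`) returns a JSON object whose `results` array holds the citing works, each with its own `cited_by_count`.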

Requested Article:

Grounding DINO: Marrying DINO with Grounded Pre-training for Open-Set Object Detection
Shilong Liu, Zhaoyang Zeng, Tianhe Ren, et al.
Lecture notes in computer science (2024), pp. 38-55
Closed Access | Times Cited: 281

Showing 1-25 of 281 citing articles:

The Segment Anything Model (SAM) for remote sensing applications: From zero to one shot
Lucas Prado Osco, Qiusheng Wu, Eduardo Lopes de Lemos, et al.
International Journal of Applied Earth Observation and Geoinformation (2023) Vol. 124, Article 103540
Open Access | Times Cited: 112

RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation Based on Visual Foundation Model
Keyan Chen, Chenyang Liu, Hao Chen, et al.
IEEE Transactions on Geoscience and Remote Sensing (2024) Vol. 62, pp. 1-17
Open Access | Times Cited: 96

Towards Open Vocabulary Learning: A Survey
Jianzong Wu, Xiangtai Li, Shilin Xu, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2024) Vol. 46, Iss. 7, pp. 5092-5113
Open Access | Times Cited: 42

ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning
Qiao Gu, Ali Kuwajerwala, Sacha Morin, et al.
(2024), pp. 5021-5028
Open Access | Times Cited: 30

ShareGPT4V: Improving Large Multi-modal Models with Better Captions
Lin Chen, Jinsong Li, Xiaoyi Dong, et al.
Lecture notes in computer science (2024), pp. 370-387
Closed Access | Times Cited: 23

Gaussian Grouping: Segment and Edit Anything in 3D Scenes
Mingqiao Ye, Martin Danelljan, Fisher Yu, et al.
Lecture notes in computer science (2024), pp. 162-179
Closed Access | Times Cited: 20

Contextual Object Detection with Multimodal Large Language Models
Yuhang Zang, Wei Li, Jun Han, et al.
International Journal of Computer Vision (2024)
Closed Access | Times Cited: 18

Generative AI for visualization: State of the art and future directions
Yilin Ye, Jianing Hao, Yihan Hou, et al.
Visual Informatics (2024) Vol. 8, Iss. 2, pp. 43-66
Open Access | Times Cited: 16

Leveraging Zero-Shot Learning on Street-View Imagery for Built Environment Variable Analysis
Siyuan Yao, Siavash Ghorbany, Matthew Sisk, et al.
Lecture notes in computer science (2025), pp. 243-254
Closed Access | Times Cited: 3

Rapid post-disaster infrastructure damage characterisation using remote sensing and deep learning technologies: A tiered approach
Nadiia Kopiika, Andreas Karavias, Pavlos Krassakis, et al.
Automation in Construction (2025) Vol. 170, Article 105955
Open Access | Times Cited: 2

samgeo: A Python package for segmenting geospatial data with the Segment Anything Model (SAM)
Qiusheng Wu, Lucas Prado Osco
The Journal of Open Source Software (2023) Vol. 8, Iss. 89, Article 5663
Open Access | Times Cited: 35

RingMo-SAM: A Foundation Model for Segment Anything in Multimodal Remote-Sensing Images
Zhiyuan Yan, Junxi Li, Xuexue Li, et al.
IEEE Transactions on Geoscience and Remote Sensing (2023) Vol. 61, pp. 1-16
Closed Access | Times Cited: 33

DetGPT: Detect What You Need via Reasoning
Renjie Pi, Jiahui Gao, Shizhe Diao, et al.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023), pp. 14172-14189
Open Access | Times Cited: 27

VcT: Visual Change Transformer for Remote Sensing Image Change Detection
Bo Jiang, Zitian Wang, Xixi Wang, et al.
IEEE Transactions on Geoscience and Remote Sensing (2023) Vol. 61, pp. 1-14
Open Access | Times Cited: 26

Foundation Model Assisted Weakly Supervised Semantic Segmentation
Xiaobo Yang, Xiaojin Gong
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2024)
Open Access | Times Cited: 13

ORGANA: A robotic assistant for automated chemistry experimentation and characterization
Kourosh Darvish, Marta Skreta, Yuchi Zhao, et al.
Matter (2024), Article 101897
Open Access | Times Cited: 11

Matte anything: Interactive natural image matting with segment anything model
Jingfeng Yao, Xinggang Wang, Lang Ye, et al.
Image and Vision Computing (2024) Vol. 147, Article 105067
Closed Access | Times Cited: 10

Evaluating human perception of building exteriors using street view imagery
Xiucheng Liang, Jiat‐Hwee Chang, Song Gao, et al.
Building and Environment (2024) Vol. 263, Article 111875
Closed Access | Times Cited: 10

Visual Concrete Bridge Defect Classification and Detection Using Deep Learning: A Systematic Review
Dariush Amirkhani, Mohand Saïd Allili, Loucif Hebbache, et al.
IEEE Transactions on Intelligent Transportation Systems (2024) Vol. 25, Iss. 9, pp. 10483-10505
Closed Access | Times Cited: 9

VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation
Naoki Yokoyama, Sehoon Ha, Dhruv Batra, et al.
(2024), pp. 42-48
Open Access | Times Cited: 8

A survey on integration of large language models with intelligent robots
Yeseung Kim, Dohyun Kim, Ji‐Eun Choi, et al.
Intelligent Service Robotics (2024) Vol. 17, Iss. 5, pp. 1091-1107
Open Access | Times Cited: 8

Going Denser with Open-Vocabulary Part Segmentation
Peize Sun, Shoufa Chen, Chenchen Zhu, et al.
2023 IEEE/CVF International Conference on Computer Vision (ICCV) (2023), pp. 15407-15419
Open Access | Times Cited: 19

ZEETAD: Adapting Pretrained Vision-Language Model for Zero-Shot End-to-End Temporal Action Detection
Thinh Phan, Viet-Khoa Vo-Ho, Duy Le, et al.
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2024), pp. 7031-7040
Open Access | Times Cited: 6

Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models
Tsun-Hsuan Wang, Alaa Maalouf, Wei Xiao, et al.
(2024), pp. 6687-6694
Open Access | Times Cited: 6

LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents
Shilong Liu, Hao Cheng, Haotian Liu, et al.
Lecture notes in computer science (2024), pp. 126-142
Closed Access | Times Cited: 6
