OpenAlex Citation Counts


OpenAlex is an openly accessible bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to the article as listed in CrossRef. If you click an Open Access link, you'll navigate to the "best Open Access location" for that article. Clicking a citation count will open this same listing for that article. Lastly, at the bottom of the page you'll find basic pagination options.
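If you would rather pull these results programmatically, the same list of citing works can be retrieved from the OpenAlex API by filtering the works endpoint on the cited work's OpenAlex ID. Below is a minimal Python sketch; the work ID shown is a placeholder rather than the real ID of the requested article, and the requests library is assumed to be installed:

    import requests

    # Placeholder OpenAlex work ID for the requested article (hypothetical;
    # look up the real ID first, e.g. by searching the works endpoint by title).
    CITED_WORK_ID = "W0000000000"

    # The OpenAlex /works endpoint accepts a `cites:` filter that returns
    # every indexed work citing the given work, 25 results per page by default.
    url = "https://api.openalex.org/works"
    params = {"filter": f"cites:{CITED_WORK_ID}", "per-page": 25, "page": 1}

    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    data = response.json()

    print(f"Total citing works: {data['meta']['count']}")
    for work in data["results"]:
        title = work.get("display_name")
        year = work.get("publication_year")
        cited = work.get("cited_by_count")
        print(f"{title} ({year}) | Times Cited: {cited}")

Paging through the rest of the results is just a matter of incrementing the page parameter, which mirrors the pagination controls at the bottom of this listing.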

Requested Article:

NuScenes-QA: A Multi-Modal Visual Question Answering Benchmark for Autonomous Driving Scenario
Tianwen Qian, Jingjing Chen, Linhai Zhuo, et al.
Proceedings of the AAAI Conference on Artificial Intelligence (2024) Vol. 38, Iss. 5, pp. 4542-4550
Open Access | Times Cited: 35

Showing 1-25 of 35 citing articles:

A Survey on Multimodal Large Language Models for Autonomous Driving
Can Cui, Yunsheng Ma, Xu Cao, et al.
(2024), pp. 958-979
Open Access | Times Cited: 83

End-to-End Autonomous Driving: Challenges and Frontiers
Li Chen, Penghao Wu, Kashyap Chitta, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence (2024) Vol. 46, Iss. 12, pp. 10164-10183
Open Access | Times Cited: 83

DriveLM: Driving with Graph Visual Question Answering
Chonghao Sima, Katrin Renz, Kashyap Chitta, et al.
Lecture Notes in Computer Science (2024), pp. 256-274
Closed Access | Times Cited: 20

Talk2BEV: Language-enhanced Bird’s-eye View Maps for Autonomous Driving
Tushar Choudhary, Vikrant Dewangan, Shivam Chandhok, et al.
(2024), pp. 16345-16352
Open Access | Times Cited: 12

Human-Centric Autonomous Systems With LLMs for User Command Reasoning
Yi Yang, Qingwen Zhang, Ci Li, et al.
(2024), pp. 988-994
Open Access | Times Cited: 9

Using Multimodal Large Language Models (MLLMs) for Automated Detection of Traffic Safety-Critical Events
Mohammad Tami, Huthaifa I. Ashqar, Mohammed Elhenawy, et al.
Vehicles (2024) Vol. 6, Iss. 3, pp. 1571-1590
Open Access | Times Cited: 6

VLAAD: Vision and Language Assistant for Autonomous Driving
SungYeon Park, Min Jae Lee, JiHyuk Kang, et al.
(2024), pp. 980-987
Closed Access | Times Cited: 5

LimSim++: A Closed-Loop Platform for Deploying Multimodal LLMs in Autonomous Driving
Daocheng Fu, Wenjie Lei, Licheng Wen, et al.
IEEE Intelligent Vehicles Symposium (IV) (2024), pp. 1084-1090
Open Access | Times Cited: 5

Deep learning adversarial attacks and defenses in autonomous vehicles: a systematic literature review from a safety perspective
Ahmed Dawod Mohammed Ibrahum, Manzoor Hussain, Jang‐Eui Hong
Artificial Intelligence Review (2024) Vol. 58, Iss. 1
Open Access | Times Cited: 5

Language-guided Bias Generation Contrastive Strategy for Visual Question Answering
Enyuan Zhao, Ning Song, Ze Zhang, et al.
ACM Transactions on Multimedia Computing, Communications, and Applications (2025)
Open Access

AI Folk: Sharing Machine Learning Models in a Multi-Agent Community
Andrei Olaru, Alexandru Sorici, Mihai Nan, et al.
Lecture Notes in Networks and Systems (2025), pp. 119-128
Closed Access

Harnessing the Power of Large Language Models for Sustainable and Intelligent Transportation Systems in the Electric Vehicle Era
Anuj Abraham, Tasneim Aldhanhani, Wassim Hamidouche, et al.
Lecture Notes in Intelligent Transportation and Infrastructure (2025), pp. 85-113
Closed Access

DDMCB: Open-world object detection empowered by Denoising Diffusion Models and Calibration Balance
Yangyang Huang, Xing Xi, Ronghua Luo
Image and Vision Computing (2025), Article 105508
Closed Access

Novel cross-dimensional coarse-fine-grained complementary network for image-text matching
Meizhen Liu, Anis Salwa Mohd Khairuddin, Khairunnisa Hasikin, et al.
PeerJ Computer Science (2025) Vol. 11, Article e2725
Open Access

NuScenes-MQA: Integrated Evaluation of Captions and QA for Autonomous Driving Datasets using Markup Annotations
Yuichi Inoue, Yuki Yada, Kôtarô Tanahashi, et al.
(2024), pp. 930-938
Open Access | Times Cited: 3

Embodied Intelligence in Mining: Leveraging Multi-Modal Large Language Models for Autonomous Driving in Mines
Luxi Li, Yuchen Li, Xiaotong Zhang, et al.
IEEE Transactions on Intelligent Vehicles (2024) Vol. 9, Iss. 5, pp. 4831-4834
Closed Access | Times Cited: 3

Reason2Drive: Towards Interpretable and Chain-Based Reasoning for Autonomous Driving
Ming Nie, Renyuan Peng, Chunwei Wang, et al.
Lecture Notes in Computer Science (2024), pp. 292-308
Closed Access | Times Cited: 3

UnstrPrompt: Large Language Model Prompt for Driving in Unstructured Scenarios
Yuchen Li, Luxi Li, Zizhang Wu, et al.
IEEE Journal of Radio Frequency Identification (2024) Vol. 8, pp. 367-375
Closed Access | Times Cited: 2

Advancing ITS Applications with LLMs: A Survey on Traffic Management, Transportation Safety, and Autonomous Driving
Dingkai Zhang, Huanran Zheng, Wenjing Yue, et al.
Lecture Notes in Computer Science (2024), pp. 295-309
Closed Access | Times Cited: 2

Holistic Autonomous Driving Understanding by Bird's-Eye-View Injected Multi-Modal Large Models
Xinpeng Ding, Jianhua Han, Hang Xu, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 13668-13677
Closed Access | Times Cited: 2

Feedback-Guided Autonomous Driving
Jimuyang Zhang, Zanming Huang, Arijit Ray, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 15000-15011
Closed Access | Times Cited: 2

MAPLM: A Real-World Large-Scale Vision-Language Benchmark for Map and Traffic Scene Understanding
Xu Cao, Tong Zhou, Yunsheng Ma, et al.
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024), pp. 21819-21830
Closed Access | Times Cited: 2

A Superalignment Framework in Autonomous Driving with Large Language Models
Xiangrui Kong, Thomas Bräunl, Marco Fahmi, et al.
IEEE Intelligent Vehicles Symposium (IV) (2024), pp. 1715-1720
Open Access | Times Cited: 1

TOD3Cap: Towards 3D Dense Captioning in Outdoor Scenes
Bu Jin, Yupeng Zheng, Pengfei Li, et al.
Lecture Notes in Computer Science (2024), pp. 367-384
Closed Access | Times Cited: 1

LingoQA: Visual Question Answering for Autonomous Driving
Ana-Maria Marcu, Long Chen, Jan Hünermann, et al.
Lecture Notes in Computer Science (2024), pp. 252-269
Closed Access | Times Cited: 1

Page 1 - Next Page
