OpenAlex Citation Counts


OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

Clicking an article title takes you to that article as listed in CrossRef; clicking an Open Access link takes you to its "best Open Access location". Clicking a citation count opens this listing for that article. Basic pagination options appear at the bottom of the page.
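If you want to reproduce a listing like this programmatically, here is a minimal sketch against the public OpenAlex API (api.openalex.org). It assumes the top search hit for the title is the requested article, and it relies on the documented search, filter=cites:, per-page, and sort parameters and the best_oa_location field; it is only an illustration, not the code behind this page.

    import requests

    API = "https://api.openalex.org"

    # Look up the requested article by title (assumes the top hit is the
    # right work; in practice, confirm via the DOI).
    title = "Modelling search for people in 900 scenes"
    work = requests.get(
        f"{API}/works", params={"search": title, "per-page": 1}
    ).json()["results"][0]

    # OpenAlex IDs are returned as full URLs (https://openalex.org/W...);
    # the cites: filter expects just the trailing W... identifier.
    work_id = work["id"].rsplit("/", 1)[-1]

    # Fetch the first 25 citing works, sorted by citation count for
    # convenience (this page's ordering may differ).
    citing = requests.get(
        f"{API}/works",
        params={
            "filter": f"cites:{work_id}",
            "per-page": 25,
            "sort": "cited_by_count:desc",
        },
    ).json()

    for w in citing["results"]:
        oa = (w.get("best_oa_location") or {}).get("landing_page_url")
        print(
            w["display_name"],
            f"({w.get('publication_year')})",
            "| Times Cited:", w.get("cited_by_count"),
            "|", oa if oa else "Closed Access",
        )

Each result also carries counts_by_year and the citing work's own cited_by_count, which is how a page like this can show per-article citation totals alongside the links.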

Requested Article:

Modelling search for people in 900 scenes: A combined source model of eye guidance
Krista A. Ehinger, B. Hidalgo-Sotelo, Antonio Torralba, et al.
Visual Cognition (2009) Vol. 17, Iss. 6-7, pp. 945-978
Open Access | Times Cited: 290

Showing 1-25 of 290 citing articles:

Learning to predict where humans look
Tilke Judd, Krista A. Ehinger, Frédo Durand, et al.
(2009), pp. 2106-2113
Closed Access | Times Cited: 2003

State-of-the-Art in Visual Attention Modeling
Ali Borji, Laurent Itti
IEEE Transactions on Pattern Analysis and Machine Intelligence (2012) Vol. 35, Iss. 1, pp. 185-207
Closed Access | Times Cited: 1757

Eye guidance in natural vision: Reinterpreting salience
Benjamin W. Tatler, Mary Hayhoe, M. F. Land, et al.
Journal of Vision (2011) Vol. 11, Iss. 5, pp. 5-5
Open Access | Times Cited: 710

Representing multiple objects as an ensemble enhances visual cognition
George A. Alvarez
Trends in Cognitive Sciences (2011) Vol. 15, Iss. 3, pp. 122-131
Open Access | Times Cited: 560

Visual search in scenes involves selective and nonselective pathways
Jeremy M. Wolfe, Melissa L.‐H. Võ, Karla K. Evans, et al.
Trends in Cognitive Sciences (2011) Vol. 15, Iss. 2, pp. 77-84
Open Access | Times Cited: 480

Visual search: A retrospective
Miguel P. Eckstein
Journal of Vision (2011) Vol. 11, Iss. 5, pp. 14-14
Open Access | Times Cited: 433

Mechanisms of top-down attention
Farhan Baluch, Laurent Itti
Trends in Neurosciences (2011) Vol. 34, Iss. 4, pp. 210-224
Closed Access | Times Cited: 410

Predicting human gaze beyond pixels
Jinhong Xu, Ming Jiang, Shuo Wang, et al.
Journal of Vision (2014) Vol. 14, Iss. 1, pp. 28-28
Open Access | Times Cited: 323

Learning a saliency map using fixated locations in natural scenes
Qianchuan Zhao, Christof Koch
Journal of Vision (2011) Vol. 11, Iss. 3, pp. 9-9
Open Access | Times Cited: 259

Extreme Clicking for Efficient Object Annotation
Dim P. Papadopoulos, Jasper Uijlings, Frank Keller, et al.
(2017)
Open Access | Times Cited: 253

TurkerGaze: Crowdsourcing Saliency with Webcam based Eye Tracking
Pingmei Xu, Krista A. Ehinger, Yinda Zhang, et al.
arXiv (Cornell University) (2015)
Open Access | Times Cited: 251

Attention in the real world: toward understanding its neural basis
Marius V. Peelen, Sabine Kästner
Trends in Cognitive Sciences (2014) Vol. 18, Iss. 5, pp. 242-250
Open Access | Times Cited: 202

Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for Visual Recognition
Stefan Mathe, Cristian Sminchisescu
IEEE Transactions on Pattern Analysis and Machine Intelligence (2014) Vol. 37, Iss. 7, pp. 1408-1424
Open Access | Times Cited: 200

Review of Eye Tracking Metrics Involved in Emotional and Cognitive Processes
Vasileios Skaramagkas, Giorgos Giannakakis, Emmanouil Ktistakis, et al.
IEEE Reviews in Biomedical Engineering (2021) Vol. 16, pp. 260-277
Open Access | Times Cited: 180

Visual Search: How Do We Find What We Are Looking For?
Jeremy M. Wolfe
Annual Review of Vision Science (2020) Vol. 6, Iss. 1, pp. 539-562
Open Access | Times Cited: 153

SUN: Top-down saliency using natural statistics
Christopher Kanan, Matthew H. Tong, Lingyun Zhang, et al.
Visual Cognition (2009) Vol. 17, Iss. 6-7, pp. 979-1003
Open Access | Times Cited: 260

Visual attention guided bit allocation in video compression
Zhicheng Li, Shiyin Qin, Laurent Itti
Image and Vision Computing (2010) Vol. 29, Iss. 1, pp. 1-14
Closed Access | Times Cited: 229

The effects of target template specificity on visual search in real-world scenes: Evidence from eye movements
George L. Malcolm, John M. Henderson
Journal of Vision (2009) Vol. 9, Iss. 11, pp. 8-8
Open Access | Times Cited: 206

What and where: A Bayesian inference theory of attention
Sharat Chikkerur, T. Serre, Cheston Tan, et al.
Vision Research (2010) Vol. 50, Iss. 22, pp. 2233-2247
Open Access | Times Cited: 205

Combining top-down processes to guide eye movements during real-world scene search
George L. Malcolm
Journal of Vision (2010) Vol. 10, Iss. 2, pp. 1-11
Open Access | Times Cited: 191

The prominence of behavioural biases in eye guidance
Benjamin W. Tatler, Benjamin T. Vincent
Visual Cognition (2009) Vol. 17, Iss. 6-7, pp. 1029-1054
Closed Access | Times Cited: 188

When is it time to move to the next raspberry bush? Foraging rules in human visual search
Jeremy M. Wolfe
Journal of Vision (2013) Vol. 13, Iss. 3, pp. 10-10
Open Access | Times Cited: 182

Visual search for arbitrary objects in real scenes
Jeremy M. Wolfe, George A. Alvarez, Ruth Rosenholtz, et al.
Attention Perception & Psychophysics (2011) Vol. 73, Iss. 6, pp. 1650-1671
Open Access | Times Cited: 166

Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search
Michael C. Hout, Stephen D. Goldinger
Attention Perception & Psychophysics (2014) Vol. 77, Iss. 1, pp. 128-149
Open Access | Times Cited: 161

Object co-occurrence serves as a contextual cue to guide and facilitate visual search in a natural viewing environment
Stephen C. Mack, Miguel P. Eckstein
Journal of Vision (2011) Vol. 11, Iss. 9, pp. 9-9
Open Access | Times Cited: 156

Page 1 - Next Page
