
Now you see it, now you don't: on emotion, context, and the algorithmic prediction of human imageability judgments

Overview of attention for article published in Frontiers in Psychology, January 2013

Mentioned by

  • X: 1 user
  • Google+: 1 user

Citations

  • Dimensions: 40

Readers

  • Mendeley: 51
DOI: 10.3389/fpsyg.2013.00991
Authors

Chris F. Westbury, Cyrus Shaoul, Geoff Hollis, Lisa Smithson, Benny B. Briesemeister, Markus J. Hofmann, Arthur M. Jacobs

Abstract

Many studies have shown that behavioral measures are affected by manipulating the imageability of words. Though imageability is usually measured by human judgment, little is known about what factors underlie those judgments. We demonstrate that imageability judgments can be largely or entirely accounted for by two computable measures that have previously been associated with imageability: the size and density of a word's context, and the emotional associations of the word. We outline an algorithmic method for predicting imageability judgments using co-occurrence distances in a large corpus. Our computed judgments account for 58% of the variance in a set of nearly two thousand imageability judgments, for words that span the entire range of imageability. The two factors account for 43% of the variance in lexical decision reaction times (LDRTs) that is attributable to imageability in a large database of 3697 LDRTs spanning the range of imageability. We document differences in the distribution of our measures across the range of imageability that suggest they will account for more variance at the extremes, from which most imageability-manipulating stimulus sets are drawn. The two predictors account for 100% of the variance that is attributable to imageability in newly collected LDRTs using a previously published stimulus set of 100 items. We argue that our model of imageability is neurobiologically plausible by showing it is consistent with brain imaging data. The evidence we present suggests that behavioral effects in the lexical decision task that are usually attributed to the abstract/concrete distinction between words can be wholly explained by objective characteristics of the word that are not directly related to the semantic distinction. We provide computed imageability estimates for over 29,000 words.
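The general approach described in the abstract, predicting human imageability ratings from two computed word-level predictors, can be illustrated with a regression sketch. This is not the authors' implementation: the function names are hypothetical, the toy data stand in for the real corpus-derived context and emotion measures, and the coefficients are invented; the point is only to show the "two computable predictors → fitted ratings → variance accounted for" pipeline.

```python
import numpy as np

def fit_imageability_model(context, emotion, ratings):
    """Fit ratings ~ b0 + b1*context + b2*emotion by ordinary least squares."""
    X = np.column_stack([np.ones_like(context), context, emotion])
    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return coef

def predict_imageability(coef, context, emotion):
    """Predicted ratings from the fitted coefficients."""
    X = np.column_stack([np.ones_like(context), context, emotion])
    return X @ coef

def r_squared(observed, predicted):
    """Proportion of variance in the observed ratings accounted for."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy data: in the real study, `context` would be a corpus-derived
# context size/density measure and `emotion` an emotional-association
# measure; here both are simulated, with made-up true coefficients.
rng = np.random.default_rng(0)
context = rng.normal(size=200)
emotion = rng.normal(size=200)
ratings = 4.0 + 1.5 * context - 0.8 * emotion + rng.normal(scale=0.5, size=200)

coef = fit_imageability_model(context, emotion, ratings)
pred = predict_imageability(coef, context, emotion)
print(f"variance accounted for: {r_squared(ratings, pred):.2f}")
```

In the paper, the analogous figure is the 58% of variance in nearly two thousand human imageability judgments accounted for by the two computed measures.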

X Demographics

The demographic data below were collected from the profile of the 1 X user who shared this research output.
Mendeley readers

The data below were compiled from readership statistics for the 51 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Germany | 1 | 2%
Unknown | 50 | 98%

Demographic breakdown

Readers by professional status | Count | As %
Student > Ph.D. Student | 14 | 27%
Researcher | 8 | 16%
Student > Bachelor | 4 | 8%
Student > Doctoral Student | 4 | 8%
Professor > Associate Professor | 3 | 6%
Other | 8 | 16%
Unknown | 10 | 20%
Readers by discipline | Count | As %
Psychology | 20 | 39%
Neuroscience | 5 | 10%
Linguistics | 4 | 8%
Engineering | 2 | 4%
Medicine and Dentistry | 2 | 4%
Other | 8 | 16%
Unknown | 10 | 20%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 27 June 2014.
All research outputs: #16,754,527 of 26,367,306
Outputs from Frontiers in Psychology: #17,621 of 35,210
Outputs of similar age: #186,312 of 294,702
Outputs of similar age from Frontiers in Psychology: #606 of 967
Altmetric has tracked 26,367,306 research outputs across all sources so far. This one is in the 34th percentile – i.e., 34% of other outputs scored the same or lower than it.
So far Altmetric has tracked 35,210 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.8. This one is in the 47th percentile – i.e., 47% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 294,702 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 34th percentile – i.e., 34% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 967 others from the same source and published within six weeks on either side of this one. This one is in the 35th percentile – i.e., 35% of its contemporaries scored the same or lower than it.
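The percentile figures above follow from a rank position and a total count. A minimal sketch of that arithmetic, assuming a simple "fraction of outputs ranked below this one" definition; Altmetric's published percentiles also count outputs with tied scores, so this naive version can differ by a point or two from the figures on the page (the function name here is hypothetical, not an Altmetric API).

```python
def percentile_rank(rank, total):
    """Percent of outputs ranked strictly below position `rank` (1 = top)."""
    return 100.0 * (total - rank) / total

# Rank of this article among all tracked outputs, from the figures above.
print(f"{percentile_rank(16_754_527, 26_367_306):.1f}")
```

With ties counted as "scored the same or lower", as in Altmetric's wording, the reported value (34th percentile) comes out slightly lower than this naive strict-rank estimate.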