
How Noisy is Lexical Decision?

Overview of attention for article published in Frontiers in Psychology, January 2012

Mentioned by

1 X user

Citations

36 Dimensions

Readers on

86 Mendeley
1 CiteULike
Title
How Noisy is Lexical Decision?
Published in
Frontiers in Psychology, January 2012
DOI 10.3389/fpsyg.2012.00348
Pubmed ID
Authors

Kevin Diependaele, Marc Brysbaert, Peter Neri

Abstract

Lexical decision is one of the most frequently used tasks in word recognition research. Theoretical conclusions are typically derived from a linear model on the reaction times (RTs) of correct word trials only (e.g., linear regression and ANOVA). Although these models estimate random measurement error for RTs, considering only correct trials implicitly assumes that word/non-word categorizations are without noise: words receive a yes-response because they have been recognized, and they receive a no-response when they are not known. Hence, when participants are presented with the same stimuli on two separate occasions, they are expected to give the same response. We demonstrate that this is not true and that responses in a lexical decision task suffer from inconsistency in participants' response choice, meaning that RTs of "correct" word responses include RTs of trials on which participants did not recognize the stimulus. We obtained estimates of this internal noise using established methods from sensory psychophysics (Burgess and Colborne, 1988). The results show noise values similar to those in typical psychophysical signal detection experiments when sensitivity and response bias are taken into account (Neri, 2010). These estimates imply that, with an optimal choice model, only 83-91% of the response choices can be explained (i.e., can be used to derive theoretical conclusions). For word responses, word frequencies below 10 per million yield alarmingly low percentages of consistent responses (near 50%). The same analysis can be applied to RTs, yielding noise estimates about three times higher. Correspondingly, the estimated amount of consistent trial-level variance in RTs is only 8%. These figures are especially relevant given the recent popularity of trial-level lexical decision models using the linear mixed-effects approach (e.g., Baayen et al., 2008).
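The double-pass logic referenced in the abstract (Burgess and Colborne, 1988) can be illustrated with a small simulation: each stimulus is shown twice, the stimulus-driven signal is identical on both passes, and only internal noise differs, so the percentage of matching responses bounds how much internal noise the observer carries. This is a minimal sketch under an assumed equivalent-noise, yes/no threshold model, not the authors' analysis code; `double_pass_agreement` and its parameters are illustrative names.

```python
import random

def double_pass_agreement(internal_sd, n_trials=20000, seed=0):
    """Simulate a double-pass experiment.

    Each trial carries a frozen 'external' component (stimulus plus
    external noise, unit SD) that is identical on both passes, while
    internal noise is drawn fresh on every pass.  Returns the
    proportion of trials on which the two passes yield the same
    yes/no response.
    """
    rng = random.Random(seed)
    agree = 0
    for _ in range(n_trials):
        external = rng.gauss(0.0, 1.0)                       # shared across passes
        r1 = external + rng.gauss(0.0, internal_sd) > 0.0    # pass 1 decision
        r2 = external + rng.gauss(0.0, internal_sd) > 0.0    # pass 2 decision
        agree += r1 == r2
    return agree / n_trials
```

In this model, zero internal noise gives perfect agreement, while internal noise equal to the external noise pulls agreement down to roughly two-thirds; observed agreement below 100% therefore translates into an internal-to-external noise ratio, which is the quantity the paper estimates for lexical decision.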


X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 86 Mendeley readers of this research output.

Geographical breakdown

Country          Count  As %
United States        3    3%
Canada               2    2%
Germany              1    1%
United Kingdom       1    1%
France               1    1%
Belgium              1    1%
Czechia              1    1%
Unknown             76   88%

Demographic breakdown

Readers by professional status  Count  As %
Student > Ph. D. Student           18   21%
Researcher                         15   17%
Student > Master                    9   10%
Student > Doctoral Student          7    8%
Student > Bachelor                  7    8%
Other                              22   26%
Unknown                             8    9%

Readers by discipline                 Count  As %
Psychology                               42   49%
Linguistics                              16   19%
Arts and Humanities                       3    3%
Agricultural and Biological Sciences      3    3%
Neuroscience                              2    2%
Other                                     6    7%
Unknown                                  14   16%
Attention Score in Context


This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 24 September 2012.
All research outputs: #20,167,959 of 22,679,690 outputs
Outputs from Frontiers in Psychology: #23,775 of 29,381 outputs
Outputs of similar age: #221,187 of 244,102 outputs
Outputs of similar age from Frontiers in Psychology: #406 of 481 outputs
Altmetric has tracked 22,679,690 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 29,381 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 244,102 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 481 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.