
Ignore Similarity If You Can: A Computational Exploration of Exemplar Similarity Effects on Rule Application

Overview of attention for article published in Frontiers in Psychology, March 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

  • 3 X users

Citations

  • 2 Dimensions

Readers on

  • 22 Mendeley
Published in
Frontiers in Psychology, March 2017
DOI 10.3389/fpsyg.2017.00424
Authors

Duncan P. Brumby, Ulrike Hahn

Abstract

It is generally assumed that, when making categorization judgments, the cognitive system learns to focus on the stimulus features that are relevant for making an accurate judgment. This is a key feature of hybrid categorization systems, which selectively weight the use of exemplar- and rule-based processes. In contrast, Hahn et al. (2010) have shown that people cannot help but pay attention to exemplar similarity, even when doing so leads to classification errors. This paper tests, through a series of computer simulations, whether a hybrid categorization model developed in the ACT-R cognitive architecture (by Anderson and Betz, 2001) can account for the Hahn et al. dataset. This model implements Nosofsky and Palmeri's (1997) exemplar-based random walk model as its exemplar route and combines it with an implementation of Nosofsky et al.'s (1994) rule-based model, RULEX. A thorough search of the model's parameter space showed that, while the presence of an exemplar-similarity effect on response times was associated with classification errors, it was possible to fit both measures to the observed data for an unsupervised version of the task (i.e., one in which no feedback on accuracy was given). Difficulties arose when the model was applied to a supervised version of the task in which explicit feedback on accuracy was given. Modeling results show that the exemplar-similarity effect is diminished by feedback as the model learns to avoid the error-prone exemplar route, taking instead the accurate rule route. In contrast to the model, Hahn et al. found that people continue to exhibit robust exemplar-similarity effects even when given feedback. This work highlights a challenge for understanding how and why people combine rules and exemplars when making categorization decisions.
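The exemplar route named in the abstract is Nosofsky and Palmeri's (1997) exemplar-based random walk (EBRW): each step of a random walk retrieves a stored exemplar with probability proportional to its similarity to the probe, the walk moves toward the retrieved exemplar's category, and a response is made when a threshold is crossed, with response time scaling with the number of steps. The following is a minimal illustrative sketch of that retrieve-and-step loop, not the ACT-R implementation the paper tests; all function names and parameter values here are my own.

```python
import math
import random

def similarity(x, y, c=1.0):
    # Exponential similarity on city-block distance between feature vectors
    d = sum(abs(a - b) for a, b in zip(x, y))
    return math.exp(-c * d)

def ebrw_trial(probe, exemplars, threshold=3, max_steps=1000, rng=random):
    """One exemplar-based random walk over stored (features, category) pairs.

    Returns (category, steps); steps stands in for response time.
    """
    sims = [similarity(probe, ex) for ex, _ in exemplars]
    total = sum(sims)
    counter, steps = 0, 0
    while abs(counter) < threshold and steps < max_steps:
        # Retrieve one exemplar with probability proportional to similarity
        r = rng.random() * total
        acc = 0.0
        for s, (_, cat) in zip(sims, exemplars):
            acc += s
            if r <= acc:
                counter += 1 if cat == "A" else -1
                break
        steps += 1
    return ("A" if counter > 0 else "B"), steps

# Hypothetical two-category example: "A" exemplars near the origin,
# "B" exemplars farther away; a probe near "A" should usually finish fast.
exemplars = [((0, 0), "A"), ((1, 0), "A"), ((3, 3), "B"), ((4, 3), "B")]
category, steps = ebrw_trial((0, 1), exemplars, threshold=3,
                             rng=random.Random(0))
```

In this sketch the step count captures the exemplar-similarity effect on response times discussed in the abstract: a probe that is similar to exemplars of both categories produces retrievals that pull the walk in opposite directions, so more steps are needed to reach the threshold.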


X Demographics

The data below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data below were compiled from readership statistics for the 22 Mendeley readers of this research output.

Geographical breakdown

Country     | Count | As %
Switzerland |     1 |   5%
Unknown     |    21 |  95%

Demographic breakdown

Readers by professional status | Count | As %
Student > Ph.D. Student        |     5 |  23%
Student > Master               |     4 |  18%
Student > Doctoral Student     |     2 |   9%
Student > Bachelor             |     2 |   9%
Lecturer                       |     2 |   9%
Other                          |     5 |  23%
Unknown                        |     2 |   9%
Readers by discipline               | Count | As %
Psychology                          |    12 |  55%
Computer Science                    |     1 |   5%
Business, Management and Accounting |     1 |   5%
Decision Sciences                   |     1 |   5%
Medicine and Dentistry              |     1 |   5%
Other                               |     0 |   0%
Unknown                             |     6 |  27%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 21 March 2017.
  • All research outputs: #16,712,631 of 26,322,284 outputs
  • Outputs from Frontiers in Psychology: #17,607 of 35,169 outputs
  • Outputs of similar age: #189,471 of 327,318 outputs
  • Outputs of similar age from Frontiers in Psychology: #370 of 535 outputs
Altmetric has tracked 26,322,284 research outputs across all sources so far. This one is in the 34th percentile – i.e., 34% of other outputs scored the same or lower than it.
So far Altmetric has tracked 35,169 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.7. This one is in the 47th percentile – i.e., 47% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 327,318 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 39th percentile – i.e., 39% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 535 others from the same source and published within six weeks on either side of this one. This one is in the 28th percentile – i.e., 28% of its contemporaries scored the same or lower than it.