
What to Choose Next? A Paradigm for Testing Human Sequential Decision Making

Overview of attention for article published in Frontiers in Psychology, March 2017

Mentioned by: 1 X user
Citations: 13 (Dimensions)
Readers on Mendeley: 47
Title: What to Choose Next? A Paradigm for Testing Human Sequential Decision Making
Published in: Frontiers in Psychology, March 2017
DOI: 10.3389/fpsyg.2017.00312
Pubmed ID:
Authors: Elisa M. Tartaglia, Aaron M. Clarke, Michael H. Herzog

Abstract

Many of the decisions we make in our everyday lives are sequential and entail sparse rewards. While sequential decision-making has been extensively investigated in theory (e.g., by reinforcement learning models), there is no systematic experimental paradigm to test it. Here, we developed such a paradigm and investigated key components of reinforcement learning models: the eligibility trace (i.e., the memory trace of previous decision steps), the external reward, and the ability to exploit the statistics of the environment's structure (model-free vs. model-based mechanisms). We show that the eligibility trace decays not with sheer time, but rather with the number of discrete decision steps made by the participants. We further show that, unexpectedly, neither monetary rewards nor the environment's spatial regularity significantly modulate behavioral performance. Finally, we found that model-free learning algorithms describe human performance better than model-based algorithms.
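The two ingredients named in the abstract, a model-free update rule and an eligibility trace that decays with each decision step rather than with elapsed time, can be illustrated with a standard tabular SARSA(λ) sketch. This is not the authors' implementation: the environment interface (reset()/step() returning next state, reward, and a done flag), the parameter values, and all names below are illustrative assumptions.

    import numpy as np

    def sarsa_lambda(env, n_states, n_actions, episodes=200,
                     alpha=0.1, gamma=0.95, lam=0.9, epsilon=0.1):
        """Tabular SARSA(lambda): model-free learning with an eligibility
        trace that decays once per decision step, not with wall-clock time."""
        Q = np.zeros((n_states, n_actions))    # action-value estimates
        rng = np.random.default_rng(0)

        def choose(state):
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                return int(rng.integers(n_actions))
            return int(np.argmax(Q[state]))

        for _ in range(episodes):
            e = np.zeros_like(Q)               # eligibility trace, reset each episode
            state = env.reset()                # assumed minimal environment API
            action = choose(state)
            done = False
            while not done:
                next_state, reward, done = env.step(action)
                next_action = choose(next_state)
                # temporal-difference error for the decision just taken
                delta = reward + gamma * Q[next_state, next_action] * (not done) - Q[state, action]
                e[state, action] += 1.0        # mark the current state-action pair
                Q += alpha * delta * e         # credit recently visited pairs via the trace
                e *= gamma * lam               # the trace decays with each decision step
                state, action = next_state, next_action
        return Q

Setting lam to 0 removes the trace and credits only the most recent decision, whereas larger values let a sparse reward at the end of a sequence propagate back over several preceding decision steps, which is the property the study probes.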

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 47 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 47 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 9 19%
Student > Master 8 17%
Researcher 7 15%
Professor 3 6%
Student > Bachelor 2 4%
Other 9 19%
Unknown 9 19%
Readers by discipline Count As %
Psychology 16 34%
Neuroscience 9 19%
Agricultural and Biological Sciences 4 9%
Computer Science 2 4%
Economics, Econometrics and Finance 1 2%
Other 2 4%
Unknown 13 28%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 February 2017.
All research outputs: #20,406,219 of 22,955,959 outputs
Outputs from Frontiers in Psychology: #24,301 of 30,107 outputs
Outputs of similar age: #268,331 of 307,986 outputs
Outputs of similar age from Frontiers in Psychology: #476 of 536 outputs
Altmetric has tracked 22,955,959 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 30,107 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 307,986 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 536 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.