
Mood As Cumulative Expectation Mismatch: A Test of Theory Based on Data from Non-verbal Cognitive Bias Tests

Overview of attention for article published in Frontiers in Psychology, December 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (84th percentile)
  • Good Attention Score compared to outputs of the same age and source (75th percentile)

Mentioned by

  • 20 X users
  • 1 Facebook page

Citations

  • 23 Dimensions

Readers on

  • 50 Mendeley
  • 1 CiteULike
DOI 10.3389/fpsyg.2017.02197
Authors

Camille M. C. Raoult, Julia Moser, Lorenz Gygax

Abstract

Affective states are known to influence behavior and cognitive processes. To assess mood (moderately long-term affective states), the cognitive judgment bias test was developed and has been widely used in various animal species. However, little is known about how mood changes, how mood can be experimentally manipulated, and how mood then feeds back into cognitive judgment. A recent theory argues that mood reflects the cumulative impact of differences between obtained outcomes and expectations, where expectations refer to an established context. Situations in which an outcome fails to match the established context are then perceived as expectation-outcome mismatches. We take advantage of the large number of studies published on non-verbal cognitive bias tests in recent years (95 studies with a total of 162 independent tests) to test whether cumulative mismatch could indeed have led to the observed mood changes. Based on a list of criteria, we assessed whether mismatch had occurred in the experimental procedure used to induce mood (mood induction mismatch) or in the context of the non-verbal cognitive bias procedure (testing mismatch). For the mood induction mismatch, we scored the mismatch between the subjects' potential expectations and the manipulations conducted to induce mood, whereas for the testing mismatch, we scored mismatches that may have occurred during the actual testing. We then investigated whether these two types of mismatch can predict the actual outcome of a cognitive bias study. The present evaluation shows that mood induction mismatch does not predict the success of a cognitive bias test well. Testing mismatch, on the other hand, can modulate or even invert the expected outcome. We think cognitive bias studies should aim more specifically at creating expectation mismatch when inducing mood states, so as to test the cumulative mismatch theory more rigorously. Furthermore, testing mismatch should be avoided as much as possible because it can reverse the affective state of animals as measured in a cognitive judgment bias paradigm.

X Demographics

The data shown below were collected from the profiles of the 20 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 50 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown   50      100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph.D. Student          13      26%
Researcher                       8       16%
Student > Bachelor               6       12%
Student > Master                 3       6%
Student > Doctoral Student       2       4%
Other                            7       14%
Unknown                          11      22%

Readers by discipline                                 Count   As %
Agricultural and Biological Sciences                  14      28%
Psychology                                            8       16%
Neuroscience                                          4       8%
Veterinary Science and Veterinary Medicine            3       6%
Pharmacology, Toxicology and Pharmaceutical Science   1       2%
Other                                                 5       10%
Unknown                                               15      30%
Attention Score in Context

This research output has an Altmetric Attention Score of 11. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 December 2021.
  • All research outputs: #3,110,425 of 24,072,790 outputs
  • Outputs from Frontiers in Psychology: #5,898 of 32,309 outputs
  • Outputs of similar age: #68,196 of 446,617 outputs
  • Outputs of similar age from Frontiers in Psychology: #132 of 530 outputs
Altmetric has tracked 24,072,790 research outputs across all sources so far. Compared to these, this one has done well: it is in the 86th percentile, placing it in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 32,309 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.8. This one has done well, scoring higher than 81% of its peers.
Older research outputs will score higher simply because they have had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 446,617 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 84% of its contemporaries.
We can also compare this research output to the 530 others from the same source that were published within six weeks on either side of this one. This one has done well, scoring higher than 75% of its contemporaries.