
Brain States That Encode Perceived Emotion Are Reproducible but Their Classification Accuracy Is Stimulus-Dependent

Overview of attention for article published in Frontiers in Human Neuroscience, July 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (84th percentile)
  • High Attention Score compared to outputs of the same age and source (88th percentile)

Mentioned by

  • 1 news outlet
  • 6 X users

Citations

  • 17 Dimensions

Readers on

  • 33 Mendeley

Title
Brain States That Encode Perceived Emotion Are Reproducible but Their Classification Accuracy Is Stimulus-Dependent
Published in
Frontiers in Human Neuroscience, July 2018
DOI 10.3389/fnhum.2018.00262
Pubmed ID
Authors

Keith A. Bush, Jonathan Gardner, Anthony Privratsky, Ming-Hua Chung, G. Andrew James, Clinton D. Kilts

Abstract

The brain state hypothesis of image-induced affect processing, which posits that a one-to-one mapping exists between each image stimulus and its induced functional magnetic resonance imaging (fMRI)-derived neural activation pattern (i.e., brain state), has recently received support from several multivariate pattern analysis (MVPA) studies. Critically, however, classification accuracy differences across these studies, which largely share experimental designs and analyses, suggest that there exist one or more unaccounted sources of variance within MVPA studies of affect processing. To explore this possibility, we directly demonstrated strong inter-study correlations between image-induced affective brain states acquired 4 years apart on the same MRI scanner using near-identical methodology with studies differing only by the specific image stimuli and subjects. We subsequently developed a plausible explanation for inter-study differences in affective valence and arousal classification accuracies based on the spatial distribution of the perceived affective properties of the stimuli. Controlling for this distribution improved valence classification accuracy from 56% to 85% and arousal classification accuracy from 61% to 78%, which mirrored the full range of classification accuracy across studies within the existing literature. Finally, we validated the predictive fidelity of our image-related brain states according to an independent measurement, autonomic arousal, captured via skin conductance response (SCR). Brain states significantly but weakly (r = 0.08) predicted the SCRs that accompanied individual image stimulations. More importantly, the effect size of brain state predictions of SCR increased more than threefold (r = 0.25) when the stimulus set was restricted to those images having group-level significantly classifiable arousal properties.
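
As a rough illustration of the MVPA approach described in the abstract (classifying the perceived valence of image stimuli from per-stimulus, fMRI-derived brain-state vectors), the sketch below runs a cross-validated linear support vector classifier on synthetic data. The array shapes, labels, and classifier choice are assumptions made for illustration only, not the authors' actual pipeline.

```python
# Minimal MVPA-style sketch: classify perceived valence from per-stimulus
# brain-state vectors. Synthetic data stand in for the fMRI-derived
# activation patterns; the shapes, labels, and classifier are illustrative
# assumptions, not the authors' pipeline.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_stimuli, n_features = 90, 500                # hypothetical dimensions
X = rng.normal(size=(n_stimuli, n_features))   # one brain-state vector per image
y = rng.integers(0, 2, size=n_stimuli)         # binary valence label (0 = negative, 1 = positive)
X[y == 1, :20] += 0.4                          # inject a weak signal so there is something to decode

# Cross-validated linear classifier, a common choice in MVPA work.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Mean cross-validated valence accuracy: {scores.mean():.2f}")
```

In this framing, restricting the stimulus set (the rows of X) to images with group-level classifiable affective properties is the manipulation the abstract reports as raising valence accuracy from 56% to 85%.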

X Demographics

Demographic data were collected from the profiles of the 6 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 33 Mendeley readers of this research output.

Geographical breakdown

Country                                Count   As %
Unknown                                   33   100%

Demographic breakdown

Readers by professional status         Count   As %
Student > Ph. D. Student                   8    24%
Student > Master                           5    15%
Researcher                                 4    12%
Student > Bachelor                         3     9%
Student > Doctoral Student                 2     6%
Other                                      9    27%
Unknown                                    2     6%

Readers by discipline                  Count   As %
Psychology                                 5    15%
Computer Science                           4    12%
Neuroscience                               4    12%
Engineering                                4    12%
Agricultural and Biological Sciences       3     9%
Other                                      8    24%
Unknown                                    5    15%

Attention Score in Context

This research output has an Altmetric Attention Score of 14. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 March 2022.

  • All research outputs: #2,465,165 of 24,493,053 outputs
  • Outputs from Frontiers in Human Neuroscience: #1,175 of 7,486 outputs
  • Outputs of similar age: #49,979 of 332,703 outputs
  • Outputs of similar age from Frontiers in Human Neuroscience: #16 of 126 outputs

Altmetric has tracked 24,493,053 research outputs across all sources so far. Compared to these, this one has done well and is in the 89th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 7,486 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.9. This one has done well, scoring higher than 84% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 332,703 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 84% of its contemporaries.
We're also able to compare this research output to 126 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 88% of its contemporaries.
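
As a point of reference, the percentile figures in this section follow approximately from the rank-within-pool numbers listed above. The sketch below shows that arithmetic; the function name is ours, and Altmetric's exact rounding and tie handling may differ slightly from what is computed here.

```python
# Rough arithmetic behind the percentile figures quoted above: treat the
# percentile as the share of the comparison pool that this output outranks.
# Altmetric's exact rounding and tie handling are not shown on this page,
# so a computed value can differ from the reported one by a point.
import math

def approx_percentile(rank, total):
    """Share of the pool ranked below this output, floored to a whole percent."""
    return math.floor(100 * (total - rank) / total)

print(approx_percentile(2_465_165, 24_493_053))  # 89 -> "89th percentile" of all outputs
print(approx_percentile(1_175, 7_486))           # 84 -> "higher than 84%" within the journal
print(approx_percentile(49_979, 332_703))        # 84 -> "higher than 84%" of similar-age outputs
print(approx_percentile(16, 126))                # 87 vs. the reported 88% for similar age and source
```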