
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity

Overview of attention for an article published in Frontiers in Neuroscience, December 2016

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • 3 X users

Citations

  • 2 Dimensions

Readers on

  • 19 Mendeley
Title
Mind the Noise When Identifying Computational Models of Cognition from Brain Activity
Published in
Frontiers in Neuroscience, December 2016
DOI 10.3389/fnins.2016.00573
Authors

Antonio Kolossa, Bruno Kopp

Abstract

The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and low numbers of data points if the winning model's predictors were closely correlated with the predictors of the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models that were very similar to it. Thus, a number of variables affect the validity of computational modeling studies; data quality and the number of data points are among the main factors, and the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provides quantitative results that show the importance of ensuring the validity of computational modeling via adequately prepared studies. Synthetic validity tests are recommended for future applications. Beyond that, we propose that demonstrating sufficient validity via adequate simulations be made mandatory for computational modeling studies.
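To make the procedure described above concrete, here is a minimal, hypothetical sketch of a synthetic validity test in the same spirit: simulate single-trial amplitudes from a known generating model at several noise levels and trial counts, then check how often model comparison recovers that model. The study used Bayesian model selection with exceedance probabilities; BIC serves here as a simplified stand-in for model evidence, and all predictors, coefficients, noise levels, and function names below are illustrative assumptions, not values or code from the paper.

```python
# Hypothetical synthetic validity test: can model comparison recover the
# data-generating model as noise and trial count vary?
import numpy as np

rng = np.random.default_rng(0)

def fit_bic(y, X):
    """Ordinary least squares fit; return the Bayesian information criterion."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def recovery_rate(n_trials, noise_sd, n_sims=200):
    """Fraction of simulations in which the data-generating model wins."""
    wins = 0
    for _ in range(n_sims):
        t = np.arange(1, n_trials + 1)
        # Illustrative predictors: the generating model uses a normalized
        # log-decay regressor; the rival, less complex model is intercept-only.
        x_gen = np.log(t) / np.log(n_trials)
        X_gen = np.column_stack([np.ones(n_trials), x_gen])
        X_null = np.ones((n_trials, 1))
        # Simulated single-trial amplitudes with Gaussian measurement error.
        y = X_gen @ np.array([1.0, 2.0]) + rng.normal(0, noise_sd, n_trials)
        if fit_bic(y, X_gen) < fit_bic(y, X_null):
            wins += 1
    return wins / n_sims

for n in (20, 100, 500):
    for sd in (0.5, 2.0, 8.0):
        print(f"N={n:3d}  noise_sd={sd:4.1f}  recovery={recovery_rate(n, sd):.2f}")
```

Run across the grid, a sketch like this reproduces the qualitative pattern the abstract reports: with few trials or heavy noise, the simpler model tends to win, and only with sufficient data quality and quantity is the generating model reliably identified.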


X Demographics

The data in this section were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data below were compiled from readership statistics for the 19 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown      19   100%

Demographic breakdown

Readers by professional status

Status                        Count   As %
Student > Ph.D. Student           2    11%
Researcher                        2    11%
Professor                         2    11%
Student > Bachelor                2    11%
Student > Doctoral Student        1     5%
Other                             5    26%
Unknown                           5    26%

Readers by discipline

Discipline            Count   As %
Psychology                3    16%
Social Sciences           2    11%
Engineering               2    11%
Neuroscience              2    11%
Computer Science          2    11%
Other                     4    21%
Unknown                   4    21%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 04 January 2017.
  • All research outputs: #16,722,190 of 25,374,917 outputs
  • Outputs from Frontiers in Neuroscience: #7,425 of 11,542 outputs
  • Outputs of similar age: #255,214 of 422,434 outputs
  • Outputs of similar age from Frontiers in Neuroscience: #92 of 165 outputs
Altmetric has tracked 25,374,917 research outputs across all sources so far. This one is in the 32nd percentile – i.e., 32% of other outputs scored the same or lower than it.
So far Altmetric has tracked 11,542 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.9. This one is in the 31st percentile – i.e., 31% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 422,434 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 36th percentile – i.e., 36% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 165 others from the same source and published within six weeks on either side of this one. This one is in the 36th percentile – i.e., 36% of its contemporaries scored the same or lower than it.
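The percentile framing above is, at its core, rank arithmetic: the share of tracked outputs ranked at or below this one. A quick sketch, assuming that simple definition; the page's reported figures differ slightly because Altmetric also resolves ties among outputs with equal scores, so this naive formula is only approximate.

```python
# Naive percentile from rank (rank 1 = highest score). Altmetric's reported
# percentiles also account for ties among equal scores, so these figures are
# approximate, not the exact values shown on this page.
def percentile_from_rank(rank, total):
    return 100 * (total - rank) / total

print(f"All outputs:  {percentile_from_rank(16_722_190, 25_374_917):.0f}%")  # ~34% vs. reported 32nd
print(f"Similar age:  {percentile_from_rank(255_214, 422_434):.0f}%")        # ~40% vs. reported 36th
```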