
Constructing Neuronal Network Models in Massively Parallel Environments

Overview of attention for article published in Frontiers in Neuroinformatics, May 2017

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (64th percentile)
  • Good Attention Score compared to outputs of the same age and source (70th percentile)

Mentioned by

  • 6 X users
  • 1 Facebook page

Citations

  • 31 (Dimensions)

Readers on

  • 45 (Mendeley)
Title: Constructing Neuronal Network Models in Massively Parallel Environments
Published in: Frontiers in Neuroinformatics, May 2017
DOI: 10.3389/fninf.2017.00030
Authors: Tammo Ippen, Jochen M. Eppler, Hans E. Plesser, Markus Diesmann

Abstract

Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

X Demographics

The data shown below were collected from the profiles of 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 45 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 45 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph. D. Student 12 27%
Researcher 11 24%
Student > Master 5 11%
Student > Bachelor 4 9%
Other 3 7%
Other 5 11%
Unknown 5 11%
Readers by discipline Count As %
Computer Science 10 22%
Neuroscience 10 22%
Agricultural and Biological Sciences 5 11%
Engineering 5 11%
Physics and Astronomy 4 9%
Other 3 7%
Unknown 8 18%
Attention Score in Context

This research output has an Altmetric Attention Score of 4. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 19 April 2018.
  • All research outputs: #6,914,200 of 22,971,207 outputs
  • Outputs from Frontiers in Neuroinformatics: #332 of 752 outputs
  • Outputs of similar age: #109,029 of 310,608 outputs
  • Outputs of similar age from Frontiers in Neuroinformatics: #6 of 20 outputs
Altmetric has tracked 22,971,207 research outputs across all sources so far. This one has received more attention than most of these and is in the 69th percentile.
So far Altmetric has tracked 752 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 8.3. This one has gotten more attention than average, scoring higher than 55% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 310,608 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 64% of its contemporaries.
We're also able to compare this research output to 20 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 70% of its contemporaries.