
Recommendations for the Use of Automated Gray Matter Segmentation Tools: Evidence from Huntington’s Disease

Overview of attention for an article published in Frontiers in Neurology, October 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (76th percentile)
  • High Attention Score compared to outputs of the same age and source (81st percentile)

Mentioned by

  • X: 11 X users
  • Facebook: 1 Facebook page

Citations

  • Dimensions: 33 citations

Readers on

  • Mendeley: 73 readers
Title
Recommendations for the Use of Automated Gray Matter Segmentation Tools: Evidence from Huntington’s Disease
Published in
Frontiers in Neurology, October 2017
DOI 10.3389/fneur.2017.00519
Pubmed ID
Authors

Eileanoir B. Johnson, Sarah Gregory, Hans J. Johnson, Alexandra Durr, Blair R. Leavitt, Raymund A. Roos, Geraint Rees, Sarah J. Tabrizi, Rachael I. Scahill

Abstract

The selection of an appropriate segmentation tool is a challenge facing any researcher aiming to measure gray matter (GM) volume. Many tools have been compared, yet there is currently no method that can be recommended above all others; in particular, there is a lack of validation in disease cohorts. This work utilizes a clinical dataset to conduct an extensive comparison of segmentation tools. Our results confirm that all tools have advantages and disadvantages, and we present a series of considerations that may be of use when selecting a GM segmentation method, rather than a ranking of these tools. Seven segmentation tools were compared using 3 T MRI data from 20 controls, 40 premanifest Huntington's disease (HD), and 40 early HD participants. Segmented volumes underwent detailed visual quality control. Reliability and repeatability of total, cortical, and lobular GM were investigated in repeated baseline scans. The relationships between tools were also examined. Longitudinal within-group change over 3 years was assessed via generalized least squares regression to determine sensitivity of each tool to disease effects. Visual quality control and raw volumes highlighted large variability between tools, especially in occipital and temporal regions. Most tools showed reliable performance, and the volumes were generally correlated. Results for longitudinal within-group change varied between tools, especially within lobular regions. These differences highlight the need for careful selection of segmentation methods in clinical neuroimaging studies. This guide acts as a primer aimed at the novice or non-technical imaging scientist, providing recommendations for the selection of cohort-appropriate GM segmentation software.
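
For illustration, the longitudinal analysis described in the abstract (generalized least squares regression of within-group GM change over 3 years) could look roughly like the sketch below. This is a hypothetical example, not the authors' code: the column names (subject, years_from_baseline, gm_volume), the compound-symmetry within-subject correlation, and the default rho value are all assumptions, and the paper's actual model specification may differ.

    # Hypothetical sketch: GLS estimate of annual within-group GM volume change.
    # Not the authors' code; column names and the compound-symmetry covariance
    # (within-subject correlation rho) are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm


    def fit_gls_change(df: pd.DataFrame, rho: float = 0.5):
        """Fit gm_volume ~ intercept + years_from_baseline by GLS for one group.

        df has one row per scan, with columns 'subject', 'years_from_baseline',
        and 'gm_volume' (e.g. total, cortical, or lobular GM from one tool).
        """
        df = df.sort_values("subject").reset_index(drop=True)
        X = sm.add_constant(df["years_from_baseline"])  # design: intercept + time
        y = df["gm_volume"]

        # Compound-symmetry covariance: scans from the same subject share
        # correlation rho; scans from different subjects are independent.
        n = len(df)
        sigma = np.eye(n)
        for _, rows in df.groupby("subject").groups.items():
            rows = list(rows)
            for i in rows:
                for j in rows:
                    if i != j:
                        sigma[i, j] = rho

        return sm.GLS(y, X, sigma=sigma).fit()


    # Example (hypothetical data frame of one group's scans, e.g. early HD):
    # result = fit_gls_change(early_hd_scans)
    # print(result.params["years_from_baseline"], result.pvalues["years_from_baseline"])

Under these assumptions, the fitted years_from_baseline coefficient estimates annual within-group GM change, and comparing that slope and its uncertainty across segmentation tools is one way to gauge each tool's sensitivity to disease effects.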


X Demographics

The demographic data for this output were collected from the profiles of the 11 X users who shared it.
Mendeley readers

The data shown below were compiled from readership statistics for the 73 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 73 100%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 14 19%
Researcher 11 15%
Student > Master 11 15%
Student > Bachelor 5 7%
Student > Doctoral Student 5 7%
Other 10 14%
Unknown 17 23%
Readers by discipline Count As %
Neuroscience 19 26%
Medicine and Dentistry 12 16%
Psychology 5 7%
Engineering 3 4%
Computer Science 2 3%
Other 10 14%
Unknown 22 30%
Attention Score in Context

This research output has an Altmetric Attention Score of 8. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 15 October 2017.
All research outputs: #4,979,228 of 26,501,765 outputs
Outputs from Frontiers in Neurology: #3,927 of 15,114 outputs
Outputs of similar age: #78,952 of 338,080 outputs
Outputs of similar age from Frontiers in Neurology: #34 of 186 outputs
Altmetric has tracked 26,501,765 research outputs across all sources so far. Compared to these, this one has done well and is in the 81st percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 15,114 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 7.6. This one has received more attention than average, scoring higher than 73% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 338,080 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 76% of its contemporaries.
We're also able to compare this research output to 186 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 81% of its contemporaries.