
Generalizability of High Frequency Oscillation Evaluations in the Ripple Band

Overview of attention for article published in Frontiers in Neurology, June 2018

Mentioned by: 1 X user
Citations: 15 (Dimensions)
Readers: 25 (Mendeley)
Published in: Frontiers in Neurology, June 2018
DOI: 10.3389/fneur.2018.00510
Pubmed ID:
Authors: Aaron M. Spring, Daniel J. Pittman, Yahya Aghakhani, Jeffrey Jirsch, Neelan Pillay, Luis E. Bello-Espinosa, Colin Josephson, Paolo Federico

Abstract

Objective: We examined the interrater reliability and generalizability of high-frequency oscillation (HFO) visual evaluations in the ripple (80-250 Hz) band, and established a framework for transitioning HFO analysis into routine clinical care. We were interested in the interrater reliability, or epoch generalizability, to describe how similar the evaluations were between reviewers, and in the reviewer generalizability to represent the consistency of the internal threshold of each individual reviewer.

Methods: We studied 41 adult epilepsy patients (mean age: 35.6 years) who underwent intracranial electroencephalography. A morphology detector was designed and used to detect candidate HFO events, lower-threshold events, and distractor events. These events were subsequently presented to six expert reviewers, who visually evaluated them for the presence of HFOs. Generalizability theory was used to characterize the epoch generalizability (interrater reliability) and reviewer generalizability (internal threshold consistency) of the visual evaluations, and to project the numbers of epochs, reviewers, and datasets required to achieve strong generalizability (threshold of 0.8).

Results: The reviewer generalizability was almost perfect (0.983), indicating there were sufficient evaluations to determine the internal threshold of each reviewer. However, the interrater reliability for six reviewers (0.588) and the pairwise interrater reliability (0.322) were both poor, indicating that the agreement of six reviewers is insufficient to reliably establish the presence or absence of individual HFOs. Strong interrater reliability (≥0.8) was projected to require a minimum of 17 reviewers, while strong reviewer generalizability could be achieved with fewer than 30 epoch evaluations per reviewer.

Significance: This study reaffirms the poor reliability of using small numbers of reviewers to identify HFOs, and projects the number of reviewers required to overcome this limitation. It also provides a set of tools that may be used for training reviewers, tracking changes in interrater reliability, and constructing a benchmark set of epochs that can serve as a generalizable gold standard against which other HFO detection algorithms may be compared. This study represents an important step toward the reconciliation of important but discordant findings from HFO studies undertaken with different sets of HFOs, and ultimately toward making HFO analysis a meaningful part of the clinical epilepsy workup.
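The generalizability coefficients quoted in the abstract come from generalizability (G) theory, which partitions rating variance into components (epochs, reviewers, residual) and asks how much of the variance in an averaged rating reflects true differences between epochs. As a rough, hypothetical sketch only (the study's actual design was richer, also crossing datasets, and the data and function below are illustrative, not taken from the paper), a one-facet crossed epochs × reviewers G coefficient can be estimated from a two-way ANOVA:

```python
# Hypothetical illustration of a one-facet generalizability (G) study:
# a fully crossed epochs x reviewers design with binary HFO ratings
# (1 = "HFO present"). Sketch only; data and names are illustrative.

def g_coefficient(ratings, n_raters_projected=None):
    """Relative G coefficient for the mean rating of n reviewers.

    ratings: one list per epoch, each holding one 0/1 rating per
    reviewer. If n_raters_projected is given, project the coefficient
    for that many reviewers (a simple decision study, analogous to the
    paper's projection that 17 reviewers would be needed to reach 0.8).
    """
    n_e, n_r = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n_e * n_r)
    epoch_means = [sum(row) / n_r for row in ratings]
    rater_means = [sum(row[j] for row in ratings) / n_e for j in range(n_r)]

    # Two-way ANOVA without replication: sums of squares.
    ss_epoch = n_r * sum((m - grand) ** 2 for m in epoch_means)
    ss_rater = n_e * sum((m - grand) ** 2 for m in rater_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_resid = ss_total - ss_epoch - ss_rater

    ms_epoch = ss_epoch / (n_e - 1)
    ms_resid = ss_resid / ((n_e - 1) * (n_r - 1))

    # Expected-mean-square estimates of the variance components.
    var_resid = ms_resid
    var_epoch = max((ms_epoch - ms_resid) / n_r, 0.0)

    # G = true (epoch) variance over true variance plus averaged error.
    n = n_raters_projected or n_r
    return var_epoch / (var_epoch + var_resid / n)
```

With n fixed at the observed number of reviewers this reduces to the familiar consistency ICC for averaged ratings; increasing `n_raters_projected` shows why a projected reliability climbs toward 1 as reviewers are added, which is the logic behind the 17-reviewer estimate.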

X Demographics

The data shown below were collected from the profile of the 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 25 Mendeley readers of this research output.

Geographical breakdown

Country                                        Count  As %
Unknown                                        25     100%

Demographic breakdown

Readers by professional status                 Count  As %
Student > Master                               6      24%
Student > Ph. D. Student                       5      20%
Researcher                                     3      12%
Professor > Associate Professor                2      8%
Professor                                      1      4%
Other                                          1      4%
Unknown                                        7      28%

Readers by discipline                          Count  As %
Neuroscience                                   7      28%
Engineering                                    6      24%
Computer Science                               2      8%
Medicine and Dentistry                         2      8%
Biochemistry, Genetics and Molecular Biology   1      4%
Other                                          1      4%
Unknown                                        6      24%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 01 July 2018.
All research outputs: #20,523,725 of 23,092,602 outputs
Outputs from Frontiers in Neurology: #9,013 of 12,012 outputs
Outputs of similar age: #288,623 of 329,253 outputs
Outputs of similar age from Frontiers in Neurology: #245 of 318 outputs
Altmetric has tracked 23,092,602 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 12,012 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 7.3. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 329,253 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 318 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.