
Usability of the Video Head Impulse Test: Lessons From the Population-Based Prospective KORA Study

Overview of attention for article published in Frontiers in Neurology, August 2018

Mentioned by

3 X users

Citations

14 Dimensions

Readers on

50 Mendeley
Title
Usability of the Video Head Impulse Test: Lessons From the Population-Based Prospective KORA Study
Published in
Frontiers in Neurology, August 2018
DOI 10.3389/fneur.2018.00659
Authors

Maria Heuberger, Eva Grill, Murat Saǧlam, Cecilia Ramaioli, Martin Müller, Ralf Strobl, Rolf Holle, Annette Peters, Erich Schneider, Nadine Lehnen

Abstract

Objective: The video head impulse test (vHIT) has become a common examination in the work-up for dizziness and vertigo. However, recent studies suggest a number of pitfalls that seem to reduce vHIT usability. Within the framework of a population-based prospective study with naïve examiners, we investigated the relevance of previously described technical mistakes in vHIT testing and the effect of experience and training.

Methods: Data originate from the KORA (Cooperative Health Research in the Region of Augsburg) FF4 study, the second follow-up of the KORA S4 population-based health survey. A total of 681 participants were selected in a case-control design. Three examiners without any prior experience were trained in video head impulse testing. vHIT quality was assessed weekly by an experienced neuro-otologist, and restrictive mistakes (insufficient technical quality restricting interpretation) were noted. Based on these results, examiners received further individual training.

Results: Twenty-two of the 681 vHITs (3.2%) were not interpretable due to restrictive mistakes. Restrictive mistakes fell into four categories: slippage, i.e., goggle movement relative to the head (63.6%); calibration problems (18.2%); noise (13.6%); and low head-impulse velocity (4.6%). The overall rate of restrictive mistakes decreased significantly during the study (12% per examiner within the first 25 tested participants vs. 2.1% during the remaining examinations, p < 0.0001).

Conclusion: Few categories suffice to explain restrictive mistakes in vHIT testing. With slippage being the most important, trainers should emphasize the importance of tight goggles. Experience and training appear effective in improving vHIT quality, leading to high usability.

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 50 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown      50   100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph.D. Student              6    12%
Researcher                           6    12%
Student > Bachelor                   6    12%
Student > Master                     6    12%
Student > Doctoral Student           2     4%
Other                                7    14%
Unknown                             17    34%

Readers by discipline            Count   As %
Medicine and Dentistry              13    26%
Nursing and Health Professions       9    18%
Neuroscience                         5    10%
Physics and Astronomy                1     2%
Arts and Humanities                  1     2%
Other                                1     2%
Unknown                             20    40%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 19 August 2018.
All research outputs:                                 #15,017,219 of 23,100,534 outputs
Outputs from Frontiers in Neurology:                  #6,206 of 12,015 outputs
Outputs of similar age:                               #199,666 of 333,251 outputs
Outputs of similar age from Frontiers in Neurology:   #144 of 297 outputs
Altmetric has tracked 23,100,534 research outputs across all sources so far. This one is in the 32nd percentile – i.e., 32% of other outputs scored the same or lower than it.
So far Altmetric has tracked 12,015 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 7.3. This one is in the 43rd percentile – i.e., 43% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 333,251 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 36th percentile – i.e., 36% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 297 others from the same source and published within six weeks on either side of this one. This one is in the 46th percentile – i.e., 46% of its contemporaries scored the same or lower than it.