
Test-Retest Reliability of a Serious Game for Delirium Screening in the Emergency Department

Overview of attention for article published in Frontiers in Aging Neuroscience, November 2016

Mentioned by
X: 1 user

Citations
Dimensions: 23

Readers on
Mendeley: 94
Title
Test-Retest Reliability of a Serious Game for Delirium Screening in the Emergency Department
Published in
Frontiers in Aging Neuroscience, November 2016
DOI 10.3389/fnagi.2016.00258
Pubmed ID
Authors

Tiffany Tong, Mark Chignell, Mary C. Tierney, Jacques S. Lee

Abstract

Introduction: Cognitive screening in settings such as the emergency department (ED) is frequently carried out using paper-and-pencil tests that require administration by trained staff. These assessments often compete with other clinical duties and thus may not be routinely administered in these busy settings. The literature has shown that cognitive impairments such as dementia and delirium are often missed in older ED patients. Failure to recognize delirium can have devastating consequences, including increased mortality (Kakuma et al., 2003). Given the demands on emergency staff, an automated cognitive test to screen for delirium onset could be a valuable tool to support delirium prevention and management. In earlier research we examined the concurrent validity of a serious game and carried out an initial assessment of its potential as a delirium screening tool (Tong et al., 2016). In this paper, we examine the test-retest reliability of the game, since reliability over repeated administrations is an important criterion for a cognitive test intended to detect risk of delirium onset.

Objective: To demonstrate the test-retest reliability of the screening tool over time in a clinical sample of older emergency patients. A secondary objective was to assess whether practice effects might make game performance unstable over repeated presentations.

Materials and Methods: Adults over the age of 70 were recruited from a hospital ED. Each patient played our serious game in an initial session soon after arriving in the ED, and in follow-up sessions conducted at 8-h intervals (each participant completed up to five follow-up sessions, depending on length of stay in the ED).

Results: After screening out delirious patients, a total of 114 adults (61 females, 53 males) between the ages of 70 and 104 years (M = 81 years, SD = 7) participated in our study. Test-retest reliability of the serious game (assessed by correlation r-values) ranged between 0.5 and 0.8 across adjacent sessions.

Conclusion: The game-based assessment for cognitive screening has relatively strong test-retest reliability and shows little evidence of practice effects among elderly emergency patients, and may be a useful supplement to existing cognitive assessment methods.
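The reliabilities quoted above (r between 0.5 and 0.8 across adjacent sessions) are correlation coefficients between patients' game scores in consecutive sessions. As a minimal illustration of that computation — not the authors' actual analysis code, and using made-up scores — a Pearson correlation between two sessions can be sketched as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical game scores for the same six patients in two adjacent
# 8-h sessions (illustrative values only).
session_1 = [12.0, 15.5, 9.0, 20.0, 11.0, 17.5]
session_2 = [13.0, 14.0, 10.5, 19.0, 12.5, 16.0]

r = pearson_r(session_1, session_2)
print(round(r, 3))
```

A study with k sessions would repeat this for each adjacent pair (session 1 vs. 2, 2 vs. 3, and so on), yielding the range of r-values reported.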

X Demographics


The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for 94 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown      94   100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Master                    18    19%
Student > Bachelor                  10    11%
Researcher                           9    10%
Student > Ph.D. Student              8     9%
Other                                4     4%
Other                               15    16%
Unknown                             30    32%

Readers by discipline            Count   As %
Medicine and Dentistry              20    21%
Psychology                          12    13%
Engineering                          8     9%
Computer Science                     7     7%
Social Sciences                      5     5%
Other                               13    14%
Unknown                             29    31%
Attention Score in Context


This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 12 November 2016.
All research outputs: #20,353,668 of 22,901,818 outputs
Outputs from Frontiers in Aging Neuroscience: #4,323 of 4,824 outputs
Outputs of similar age: #269,895 of 312,379 outputs
Outputs of similar age from Frontiers in Aging Neuroscience: #73 of 84 outputs
Altmetric has tracked 22,901,818 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 4,824 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.0. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 312,379 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 84 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
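The percentile figures above all follow the same rule: the share of tracked outputs whose Attention Score is the same or lower. As a simplified illustration — Altmetric's actual ranking pipeline is not public and the data here are invented — that rule can be sketched as:

```python
def percentile_rank(score, all_scores):
    """Percentage of outputs in all_scores whose score is the same or lower.

    A simplified illustration of the 'same or lower' percentile rule
    described in the text; not Altmetric's actual implementation.
    """
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100.0 * same_or_lower / len(all_scores)

# Toy example: an Attention Score of 1 among 100 hypothetical outputs,
# most of which scored higher, lands in a very low percentile.
scores = [1] + [5] * 49 + [20] * 50
print(percentile_rank(1, scores))  # prints 1.0
```

Because older outputs have had more time to accumulate mentions, the age-matched comparisons in the text restrict `all_scores` to outputs published within six weeks on either side.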