
Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents

Overview of attention for article published in Frontiers in Human Neuroscience, August 2018

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (63rd percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

6 X users

Citations

44 Dimensions

Readers on

103 Mendeley
DOI 10.3389/fnhum.2018.00309
Authors

Ewart J. de Visser, Paul J. Beatty, Justin R. Estepp, Spencer Kohn, Abdulaziz Abubshait, John R. Fedota, Craig G. McDonald

Abstract

With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations. Consequently, they will often place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalography (EEG) would be able to provide such a universal index of trust without the need for self-report. In this work, EEG was recorded from 21 participants (mean age = 22.1; 13 female) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high- and low-reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.
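The core measurement idea in the abstract (averaging EEG epochs time-locked to observed algorithm responses, then contrasting error with correct trials to expose components such as the oPe) can be illustrated with a minimal NumPy sketch. Everything below, from the sampling rate and epoch window to the synthetic data and the 250-450 ms measurement window, is an illustrative assumption, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-electrode EEG epochs (all parameters here are assumed
# for illustration; they are not taken from the published study).
fs = 250                                  # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / fs)      # epoch window: -200 ms to +800 ms
n_trials = 200
eeg = rng.normal(0.0, 10.0, (n_trials, times.size))  # noise, in microvolts
observed_error = rng.random(n_trials) < 0.3          # trials where the algorithm erred

# Inject a toy positive deflection on observed-error trials so the demo
# produces an oPe-like component in a 250-450 ms window.
pe_window = (times >= 0.25) & (times <= 0.45)
eeg[np.ix_(observed_error, pe_window)] += 5.0

# Standard ERP approach: average within condition, then take the difference
# wave to isolate error-related activity.
erp_error = eeg[observed_error].mean(axis=0)
erp_correct = eeg[~observed_error].mean(axis=0)
difference_wave = erp_error - erp_correct

# Quantify the component as the mean amplitude inside the window; an amplitude
# like this is what could then be compared across reliability conditions or
# correlated with subjective trust ratings.
o_pe_amplitude = difference_wave[pe_window].mean()
print(f"oPe-like mean difference amplitude (250-450 ms): {o_pe_amplitude:.2f} uV")
```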

X Demographics

The data shown below were collected from the profiles of 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 103 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown     103   100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph.D. Student             16    16%
Student > Master                    12    12%
Researcher                          11    11%
Student > Bachelor                  11    11%
Student > Doctoral Student           6     6%
Other                               15    15%
Unknown                             32    31%
Readers by discipline                 Count   As %
Psychology                              16    16%
Engineering                             11    11%
Computer Science                         8     8%
Business, Management and Accounting      5     5%
Neuroscience                             5     5%
Other                                   20    19%
Unknown                                 38    37%
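The "As %" columns above are plain count-over-total percentages, with the total being the 103 Mendeley readers, rounded to whole numbers. A quick Python sketch of that arithmetic, using the professional-status counts from the table:

```python
# Reader counts copied from the professional-status table; total is 103.
status_counts = {
    "Student > Ph.D. Student": 16,
    "Student > Master": 12,
    "Researcher": 11,
    "Student > Bachelor": 11,
    "Student > Doctoral Student": 6,
    "Other": 15,
    "Unknown": 32,
}
total = sum(status_counts.values())  # 103

# Reproduce the "As %" column: percentage of all readers, rounded.
for status, count in status_counts.items():
    print(f"{status:<30} {count:>5} {round(100 * count / total):>4}%")
```

Running this reproduces the table, including the rounding that makes Unknown's 32 readers display as 31%.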
Attention Score in Context

This research output has an Altmetric Attention Score of 4. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 28 August 2018.
All research outputs: #6,895,159 of 23,096,849 outputs
Outputs from Frontiers in Human Neuroscience: #2,857 of 7,214 outputs
Outputs of similar age: #117,928 of 331,110 outputs
Outputs of similar age from Frontiers in Human Neuroscience: #59 of 117 outputs
Altmetric has tracked 23,096,849 research outputs across all sources so far. This one has received more attention than most of these and is in the 69th percentile.
So far Altmetric has tracked 7,214 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.6. This one has received more attention than average, scoring higher than 59% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 331,110 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 63% of its contemporaries.
We're also able to compare this research output to 117 others from the same source and published within six weeks on either side of this one. This one is in the 48th percentile – i.e., 48% of its contemporaries scored the same or lower than it.
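The percentile phrasing in the last paragraph ("48% of its contemporaries scored the same or lower than it") is a plain rank computation over the comparison set. A minimal sketch, using made-up peer scores since the real 117 values are not shown on this page:

```python
import random


def percentile_rank(score: float, peer_scores: list[float]) -> float:
    """Percent of peers whose Attention Score is <= the given score."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100 * at_or_below / len(peer_scores)


# Hypothetical stand-in for the 117 same-source, similar-age outputs; scores
# are drawn around the source's reported mean of 14.6 (real values unknown).
random.seed(1)
peers = [random.expovariate(1 / 14.6) for _ in range(117)]

print(f"{percentile_rank(4, peers):.0f}th percentile")  # this output's score is 4
```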