
Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

Overview of attention for article published in Frontiers in Psychology, January 2013

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

X (Twitter): 3 users

Readers on

Mendeley: 62 readers
Title
Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs
Published in
Frontiers in Psychology, January 2013
DOI 10.3389/fpsyg.2013.00331
Authors

Sanne ten Oever, Alexander T. Sack, Katherine L. Wheat, Nina Bien, Nienke van Atteveldt

Abstract

Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.


X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 62 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
United States 3 5%
United Kingdom 2 3%
Spain 1 2%
Netherlands 1 2%
Unknown 55 89%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 19 31%
Researcher 10 16%
Student > Master 10 16%
Student > Doctoral Student 3 5%
Professor > Associate Professor 3 5%
Other 8 13%
Unknown 9 15%
Readers by discipline Count As %
Psychology 18 29%
Medicine and Dentistry 5 8%
Computer Science 4 6%
Engineering 4 6%
Nursing and Health Professions 3 5%
Other 15 24%
Unknown 13 21%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 05 August 2013.
All research outputs: #15,558,709 of 26,552,141 outputs
Outputs from Frontiers in Psychology: #14,488 of 35,493 outputs
Outputs of similar age: #175,031 of 294,817 outputs
Outputs of similar age from Frontiers in Psychology: #508 of 966 outputs
Altmetric has tracked 26,552,141 research outputs across all sources so far. This one is in the 40th percentile – i.e., 40% of other outputs scored the same or lower than it.
So far Altmetric has tracked 35,493 research outputs from this source. They typically receive far more attention than average, with a mean Attention Score of 13.9. This one has received more attention than most of its peers from the same source, scoring higher than 58% of them.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 294,817 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 40th percentile – i.e., 40% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 966 others from the same source and published within six weeks on either side of this one. This one is in the 46th percentile – i.e., 46% of its contemporaries scored the same or lower than it.
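As a rough check, the percentile figures above follow directly from the rank/total pairs quoted in each comparison. The sketch below is an approximation, not Altmetric's exact method: it treats "scored the same or lower" as simply "ranked below", ignoring ties, which is why the computed values land within a point or two of the reported percentiles rather than matching exactly.

```python
def percentile_from_rank(rank: int, total: int) -> float:
    """Approximate percentile: share of outputs ranked below this one.

    Assumes ties are ignored; Altmetric's reported figures account for
    outputs with identical scores, so reported values differ slightly.
    """
    return 100.0 * (total - rank) / total

# Rank/total pairs and reported percentiles from the page above.
contexts = {
    "all outputs": (15_558_709, 26_552_141, 40),
    "similar age": (175_031, 294_817, 40),
    "same source": (14_488, 35_493, 58),
    "same source, similar age": (508, 966, 46),
}

for name, (rank, total, reported) in contexts.items():
    approx = percentile_from_rank(rank, total)
    print(f"{name}: computed {approx:.1f}, reported {reported}")
```

Each computed value comes out within about 1.5 percentage points of the reported one, consistent with tie handling being the only difference.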