The Role of Auditory and Visual Modality in Perception of Focus in Mandarin Chinese.
Published In: Journal of Speech, Language & Hearing Research, 2025, v. 68, n. 8, p. 3843
Database: Academic Search Ultimate
Authored By: Shanpeng Li; Yihan Wu; Sasha Calhoun; Mengzhu Yan
Abstract
Purpose: Speech perception is a complex process involving multiple sensory modalities. Despite our intuition of speech as something we hear, accumulating evidence has shown that speech perception does not depend solely on the auditory modality. While it is well established that both auditory and visual cues can help listeners perceive focus, the role of visual cues has not been established in Mandarin, and the relative contribution of the two cue types has not been established at all. The current study investigated Mandarin listeners' integration of auditory and visual cues in the interpretation of focus in noise-degraded speech through a question-answer appropriateness rating experiment.

Method: To explore the effectiveness and relative contribution of the auditory and visual modalities in the interpretation of Mandarin focus, participants completed a question-answer appropriateness rating task involving subject focus, object focus, and broad focus. All question-answer pairs were constructed in three modalities: audio only, visual only, and audiovisual. Participants rated the appropriateness of each question-answer pair. Babble noise was superimposed on the audio track in the audio-only and audiovisual conditions.

Results and Conclusions: Although auditory cues via prosodic prominence were effective for interpreting focus, visual cues proved more effective, at least with degraded audio. Overall, this research contributes to our understanding of the interaction between linguistic cues and sensory information during language comprehension, widens the range of languages included in this body of research, and provides important implications for future studies on focus processing in various linguistic contexts and communication settings. This, in turn, will deepen our understanding of the multimodal nature of language comprehension.
Additional Information
- Source: Journal of Speech, Language & Hearing Research. 2025/08, Vol. 68, Issue 8, p. 3843
- Document Type: Article
- Subject Area: Psychology
- Publication Date: 2025
- ISSN: 1092-4388
- DOI: 10.1044/2025_JSLHR-24-00664
- Accession Number: 187402266
- Copyright Statement: Copyright of Journal of Speech, Language & Hearing Research is the property of the American Speech-Language-Hearing Association, and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)