Multimodality in simultaneous interpreting: The effect of visual support on cognitive processing and performance. 01/10/2024 - 30/09/2028

Abstract

Simultaneous interpreting (SI) is a complex cognitive activity comprising the concurrent tasks of language comprehension, production, and monitoring. It is carried out in an increasingly technologized environment, in which information is presented through multiple channels, i.e., multimodally. Incoming verbal and nonverbal information from the speaker is frequently complemented by supports such as a slide presentation (with or without captioning) or computer-assisted interpreting (CAI) tools (e.g., terminology software). Yet, the trade-off between the facilitating effect of these supports on performance and the cognitive load they add to information processing lacks empirical investigation. We attempt to fill this gap by investigating the effect of multimodality on interpreters' performance, visual attention, cognitive load, stress, and user experience. To this end, we will conduct two experimental, within-subject studies involving 12 professional interpreters. Study 1 investigates single supports: participants will perform their task in four conditions, namely (1) SI (no support; baseline); (2) SI + slides; (3) SI + CAI tool; (4) SI + intralingual captioning. Study 2 investigates four combinations of supports: (1) SI + slides (baseline); (2) SI + slides + CAI; (3) SI + slides + intralingual captioning; (4) SI + slides + interlingual captioning. During the tasks, speech rate, a major problem trigger in SI, is manipulated to measure its effect in relation to the support type. The project combines objective and subjective data collection and analysis methods, such as mobile eye tracking, wristband sensors, and interviews, to generate fundamental knowledge on multimodality and cognition and to benefit interpreter training and practice.
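
As a hedged illustration of the two within-subject designs described above, the short Python sketch below enumerates the condition cells each of the 12 interpreters would complete. The condition labels are taken from the abstract; the two speech-rate levels ("slower"/"faster") and all variable names are assumptions for illustration only, since the abstract states only that speech rate is manipulated in relation to support type.

```python
from itertools import product

# Supports per study, as listed in the abstract.
STUDY_1_SUPPORTS = [
    "SI (no support; baseline)",
    "SI + slides",
    "SI + CAI tool",
    "SI + intralingual captioning",
]
STUDY_2_SUPPORTS = [
    "SI + slides (baseline)",
    "SI + slides + CAI",
    "SI + slides + intralingual captioning",
    "SI + slides + interlingual captioning",
]
SPEECH_RATES = ["slower", "faster"]  # assumed two levels of the manipulation

def condition_cells(supports):
    """Cross support type with speech rate; in a within-subject design,
    every participant completes every cell."""
    return list(product(supports, SPEECH_RATES))

if __name__ == "__main__":
    for study, supports in [("Study 1", STUDY_1_SUPPORTS),
                            ("Study 2", STUDY_2_SUPPORTS)]:
        cells = condition_cells(supports)
        print(f"{study}: {len(cells)} cells per participant")
        for support, rate in cells:
            print(f"  {support} | speech rate: {rate}")
```

This is only a sketch of the condition structure, not the project's actual protocol; details such as trial order, counterbalancing, and materials are not specified in the abstract and are therefore left out.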

Researcher(s)

Research team(s)

Project type(s)

  • Research Project