NEWS


January 17th 2022: Dilay Z. Karadöller successfully defended her PhD thesis “Development of Spatial Language and Memory: Effects of Language Modality and Late Sign Language Exposure” – congratulations!!

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
May 17th 2021: Louise Schubotz successfully defended her PhD thesis “Effects of aging and cognitive abilities on multimodal language production and comprehension in context” – congratulations!!

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

March 12th 2021: Francie Manhardt successfully defended her PhD thesis “A tale of two modalities: How modality shapes language production and visual attention” – congratulations!!

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

November 2020: NWO VENI grant for Dr. Wim Pouw (Donders Institute for Brain, Cognition & Behaviour, Radboud University) – congratulations!!

Social Resonance: How biomechanical constraints solve multimodal challenges in human communication

The project aims to unravel some of the physical origins of multimodal language through a systematic investigation of a recently discovered biomechanical relation between hand gesture and speech. Co-speech hand gestures have been found to affect acoustic features of voice quality, as hand movements recruit muscles involved in respiratory functioning. Multimodal language may therefore have evolved naturally out of the human bio-architecture. Supporting this view, however, requires evidence, currently lacking, that gesture-speech biomechanics is naturally exploited in human communication. This research program uses motion-tracking, acoustic, respiratory, and natural language processing techniques to investigate gesture’s modulating biomechanical effects on speech vis-à-vis communicative social processes. In addition to typical adults, we explore gesture-speech couplings in blind persons, who reportedly gesture naturally even when their gestures cannot be seen, suggesting a role for gesture beyond visual presentation.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


September 28th 2020: Zeynep Azar successfully defended her PhD thesis “The effect of language contact on speech and gesture: The case of Dutch-Turkish bilinguals” – congratulations!!

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

February 13th 2020: James Trujillo successfully defended his PhD thesis entitled “Movement speaks for itself: the neural and kinematic dynamics of communicatively intended action and gesture” – congratulations!!

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

November 2019: Max Planck Minerva Fast Track group awarded to Linda Drijvers (Donders Institute for Brain, Cognition & Behaviour, Radboud University) – congratulations!!

The Communicative Brain

Face-to-face interactions involve auditory signals, such as speech, as well as visual signals from the face, hands, body and torso. How the brain combines these signals, arriving through different sensory channels, into an integrated message is still an unresolved problem. The goal of the Communicative Brain group is to understand how the brain integrates auditory and visual signals into a coherent message during multimodal, face-to-face interactions. The core question the group wants to answer is whether and how oscillatory neural activity plays a mechanistic role in integrating these different sources of information. Our core hypothesis is that oscillatory synchronization drives the integration of different sources of information within and between conversational partners. In this project, we therefore aim to investigate 1) how the brain integrates auditory and visual signals coming from multiple conversational partners, 2) whether integrating auditory and visual signals is easier when conversational partners are more ‘in sync’, on both a behavioral and a neural level, 3) how we distribute our attention between multiple signals during conversations, and 4) whether oscillatory synchronization is sufficient or even required for successful communication. In the coming years, the group will investigate these questions using cutting-edge techniques, including dual-EEG, MEG, rapid invisible frequency tagging, and detailed behavioural analyses of auditory and visual signals in interactive contexts.


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

May 13th 2019: Linda Drijvers successfully defended her PhD thesis “On the oscillatory dynamics underlying speech-gesture integration in clear and adverse listening conditions” – congratulations!!

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

January 25th 2019: Paul Hömke successfully defended his PhD thesis entitled “The face in face-to-face communication: signals of understanding and non-understanding” – congratulations!!

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

November 2017: ERC Consolidator grant awarded to Judith Holler (Donders Institute for Brain, Cognition & Behaviour, Radboud University) – congratulations!!

Communication in Action (CoAct): Towards a model of Contextualized Action and Language Processing

Language is fundamental to human sociality. While the last century has given us many basic insights into how we use and understand it, core issues that we face when doing so within its natural environment—face-to-face conversation—remain untackled. When we speak, we also send signals with our head, eyes, face, hands, torso, etc. How do we orchestrate and integrate all this information into meaningful messages? CoAct will lead to a new model with in situ language processing at its core, the Contextualized Action and Language (CoALa) processing model. The defining characteristic of in situ language is its multimodal nature. Moreover, the essence of language use is social action; that is, we use language to do things—we question, offer, decline, etc. These social actions are embedded in conversational structure, where one speaking turn follows another at remarkable speed, with millisecond gaps between them. Conversation thus confronts us with a significant psycholinguistic challenge. While one might expect the many co-speech bodily signals to exacerbate this challenge, CoAct proposes that they actually play a key role in dealing with it. It tests this in three subprojects that combine methods from a variety of disciplines but focus on the social actions performed by questions and responses as a uniting theme: (1) ProdAct uses conversational corpora to investigate the multimodal architecture of social actions, with the assumption that they differ in their ‘visual signatures’; (2) CompAct tests whether these bodily signatures contribute to social action comprehension, and whether they do so early and rapidly; (3) IntAct investigates whether bodily signals also play a facilitating role when faced with the complex task of comprehending while planning a next social action. Thus, CoAct aims to advance current psycholinguistic theory by developing a new model of language processing based on an integrative framework uniting aspects of psychology, philosophy and sociology.