Every day, we are exposed to a wide variety of natural sounds (e.g., whooshes, chirps, impulses, rapidly modulated harmonic signals) and recognise the sound-generating objects and events in our environment (e.g., a gust of wind, birds, a nail being hammered, someone speaking).
In this talk, I will describe the research I carried out on the perception and cerebral representation of natural sounds, and emphasise data analysis approaches of potential interest outside the auditory neuroscience domain. I will initially focus on two fMRI studies on the representation of natural sound categories (Giordano et al., 2013) and of the identity of sound sources (Giordano et al., 2014). Here, I will emphasise approaches for quantifying the cerebral representation of time-varying stimulus features in fMRI, and for partialling it out from additional effects of interest. I will then describe an MEG study on the comprehension of audio-visual speech (Giordano et al., 2017) and detail the measurement of the effective transfer of stimulus representations between cortical areas. I will dedicate the rest of the talk to the ongoing analysis of a multimodal fMRI/MEG dataset on the cerebral representation of emotions in the voice (Giordano et al., 2018). Here, I will focus on approaches for merging MEG and fMRI results, and on key issues in accounting for low-level confounds in MEG (e.g., estimating the temporal lag between stimulus presentation and the cerebral representation of time-varying features).
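The abstract only names these analysis approaches; as a rough illustration of the kind of computations involved (and not code from the cited papers), the sketch below uses toy simulated data to show (i) a partial-correlation, RSA-style comparison that quantifies how strongly a feature model is reflected in a brain response while partialling out a competing model, and (ii) a simple lagged-correlation estimate of the delay between a time-varying stimulus feature and a neural signal tracking it. All variable names (features, confound, brain, meg_signal, true_lag, ...) are hypothetical placeholders.

```python
"""Minimal sketch, assuming toy data, of two analyses named in the abstract:
partial-correlation comparison of representational dissimilarity matrices,
and lag estimation between a time-varying feature and a neural signal."""
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# --- toy data: responses (stimuli x voxels) and two stimulus models ---------
n_stim, n_vox, n_feat = 40, 200, 5
features = rng.normal(size=(n_stim, n_feat))         # feature model of interest
confound = rng.normal(size=(n_stim, 3))              # additional effect to partial out
brain = features @ rng.normal(size=(n_feat, n_vox))  # simulated voxel responses
brain += 0.5 * rng.normal(size=brain.shape)

# --- representational dissimilarity matrices (condensed vectors) ------------
brain_rdm = pdist(brain, metric="correlation")
feature_rdm = pdist(features, metric="euclidean")
confound_rdm = pdist(confound, metric="euclidean")


def partial_spearman(x, y, z):
    """Spearman correlation of x and y after regressing z out of both."""
    def residualise(a, b):
        design = np.column_stack([np.ones_like(b), b])
        beta, *_ = np.linalg.lstsq(design, a, rcond=None)
        return a - design @ beta
    rho, _ = spearmanr(residualise(x, z), residualise(y, z))
    return rho


r_simple, _ = spearmanr(brain_rdm, feature_rdm)
r_partial = partial_spearman(brain_rdm, feature_rdm, confound_rdm)
print(f"feature-brain RDM correlation: simple {r_simple:.2f}, "
      f"partial (confound removed) {r_partial:.2f}")

# --- lag estimation: delay at which the neural signal best tracks the feature
n_time = 500
stim_feature = rng.normal(size=n_time)                # time-varying stimulus feature
true_lag = 12                                         # hypothetical delay in samples
meg_signal = np.roll(stim_feature, true_lag) + 0.8 * rng.normal(size=n_time)

lags = np.arange(0, 40)
corrs = [np.corrcoef(stim_feature[: n_time - lag], meg_signal[lag:])[0, 1]
         for lag in lags]
print("estimated lag (samples):", lags[int(np.argmax(corrs))])
```

In practice the partialling step would use the actual competing acoustic or categorical models, and the lag search would be repeated per sensor or source; the sketch only conveys the shape of these computations.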
References:
Giordano et al. (2013). Cereb Cortex, 23, 2025–2037.
Giordano et al. (2014). Cortex, 58, 170–185.
Giordano et al. (2017). eLife, 6, e24763.
Giordano et al. (2018). bioRxiv, 265843. doi:10.1101/265843.