Abstract
Whether the human brain represents emotional stimuli as discrete categories or continuous dimensions is still widely debated. Here we directly contrasted the power of categorical and dimensional models in explaining behavior and cerebral activity in the context of perceived emotion in the voice. We combined functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) to measure, with high spatiotemporal precision, the dynamics of cerebral activity in participants who listened to voice stimuli expressing a range of emotions. The participants also provided a detailed perceptual assessment of the stimuli. Using representational similarity analysis (RSA), we show that the participants' perceptual representation of the stimuli, as well as the early (<200 ms) cerebral response, was initially dominated by discrete categories: brain activity in the auditory cortex showed significant associations with the categorical model starting as early as 77 ms. Furthermore, we observed strong associations between the arousal and valence dimensions and activity in several cortical and subcortical areas at later latencies (>500 ms). Our results thus show that both categorical and dimensional models account for patterns of cerebral responses to emotions in voices, but with different timelines, and they detail how these patterns evolve from discrete categories to progressively refined continuous dimensions.
One Sentence Summary: Emotions expressed in the voice are categorized almost instantly in cortical processing, and their distinct qualities are refined dimensionally only later.