Abstract
Predictive coding (PC) theory posits that our brain employs a predictive model of the environment to infer the causes of its sensory inputs. A fundamental but untested prediction of this theory is that the same stimulus should elicit distinct precision weighted prediction errors (pwPEs) when different (feature-specific) predictions are violated, even in the absence of attention. Here, we tested this hypothesis using functional magnetic resonance imaging (fMRI) and a multi-feature roving visual mismatch paradigm where rare changes in either color (red, green), or emotional expression (happy, fearful) of faces elicited pwPE responses in human participants. Using a computational model for learning and inference, we simulated pwPE trajectories of a Bayes-optimal observer and used these to analyze changes in blood oxygen level dependent (BOLD) responses to changes in color and emotional expression of faces while participants engaged in a distractor task. Controlling for visual attention by eye-tracking, we found pwPE responses to unexpected color changes in the fusiform gyrus. Conversely, unexpected changes of facial emotions elicited pwPE responses in thalamo-cortico-cerebellar structures associated with emotion processing. Our results support a general role of PC across perception, from low-level to complex and socially relevant object features, and suggest that monitoring of the social environment occurs continuously and automatically, even in the absence of attention.
Highlights
- Predictive coding (PC) implies that brain responses should reflect transient precision-weighted prediction errors (pwPEs).
- Using fMRI, we show that rare changes in the color or emotional expression of human faces elicit pwPE responses in dedicated neuronal circuits with known specialization in processing color- and emotion-related information, respectively.
- Our results demonstrate that physically identical stimuli elicit pwPEs in distinct neuronal circuits when different (feature-specific) predictions are violated, depending on current sensory expectations based on prior stimulus context.
- The demonstration of pwPEs in visual areas as well as in emotion-processing structures lends experimental support to theoretical accounts of PC in color and social perception, respectively.
Introduction
Predictive coding (PC) postulates that perceptual inference rests on probabilistic (generative) models of the causes of the sensory input (Rao and Ballard, 1999; Friston, 2005; Clark, 2015). The theory emphasizes the active nature of perceptual inference: Instead of being a purely reactive, feed-forward analyzer of bottom-up sensory information (Hubel and Wiesel, 1965; Riesenhuber and Poggio, 2000), the brain is thought to actively predict the sensory signal based on a hierarchical probabilistic model of the causes of its sensory signals (Egner et al., 2010; Friston, 2010; Lochmann et al., 2012; Spratling, 2017). According to this theory, perception involves inferring the most likely cause of the sensory signals by integrating incoming sensory information at a given level in the hierarchy with predictions generated at the level above (Rao and Ballard, 1999; Lee and Mumford, 2003; Friston, 2005), where the latter derive from prior information. In this framework a unified perceptual representation of an object involves a set of hierarchical predictions that relate to the object’s different attributes, such as spatiotemporal coordinates but also intrinsic structure. At each hierarchical level, incoming signals from the level below are compared to predictions from the level above, and the ensuing prediction errors (PEs) are passed to the higher level in order to update predictions.
PC thus offers an elegant framework to describe how object representations emerge during hierarchical perceptual inference: segregation and integration of predicted lower-level and more abstract attributes take place in a probabilistic network bound together by passing messages between hierarchical levels that most effectively minimize perceptual PEs (Friston, 2005; Bogacz, 2017). In this framework, unexpected stimuli trigger PE responses which subside as stimuli become predictable, for example through repeated presentation.
PC has become one of the most influential theories of perception, and many of its implications have been confirmed experimentally (e.g., Smith and Muckli, 2010; Wacongne et al., 2011; Kok et al., 2012a,b; Durschmid et al., 2016; Sedley et al., 2016; Ehinger et al., 2017; Gordon et al., 2017; Schwiedrzik and Freiwald, 2017). One central question about the implementation of PC is whether the same physical stimulus elicits separable feature-specific PE responses when distinct predictions about its various attributes exist, regardless of whether those attributes are behaviorally relevant. To our knowledge, this has only been studied under attention (Jiang et al., 2016), but not for automatic processing in the absence of attention and task-relevance. To answer this question, we used a roving standard paradigm (Fig. 1A) to systematically manipulate predictions about two attributes of complex stimuli, the color and emotional expression of faces. Based on prior event-related brain potential (ERP) studies, we used a visual mismatch paradigm (for reviews, see Stefanics et al., 2014; Kremlacek et al., 2016) to study brain responses reflecting PEs and model-updating processes elicited by unexpected changes in color and facial emotion while participants engaged in a distractor task.
We used the Hierarchical Gaussian Filter (HGF, Mathys et al., 2011; Mathys et al., 2014) to simulate belief trajectories of an ideal Bayesian observer. The HGF is a computational model that allows inferring an observer’s belief and uncertainty about the hidden state of the world that generates the sensory information reaching the senses of the observer. The model tracks the beliefs of the observer about the probability of each stimulus feature and updates its inference as new information is presented trial-by-trial. The HGF implements PC in the temporal domain and has been used in multiple studies to investigate PE responses in the brain (e.g., Iglesias et al., 2013; Hauser et al., 2014; Schwartenbeck et al., 2015; Vossel et al., 2015; Diaconescu et al., 2017; Lawson et al., 2017; Powers et al., 2017; Adams et al., 2018; Katthagen et al., 2018; Stefanics et al., 2018a).
A similar experimental paradigm, computational modeling, and analysis approach in our previous single-trial ERP study allowed us to study the time course of ERPs to unexpected color and emotion changes associated with pwPEs (Stefanics et al., 2018a). There we found that both kinds of changes elicited brain responses that were better explained by pwPEs as parametric regressors than by simple stick regressors in a general linear modeling (GLM) analysis. Here, we used fMRI to identify the brain regions associated with feature-specific pwPEs to human faces. Critically, our paradigm independently manipulated the color and emotional expression of face stimuli (Fig. 1B,C), allowing us to model pwPEs to violations of emotion expectations separately from pwPEs elicited by changes in color. This enabled us to study predictive processes pertaining to low- versus high-level object features for physically identical stimuli.
Methods
Ethics Statement
The experimental protocol was approved by the Cantonal Ethics Commission of Zurich (KEK 2010-0327). Written informed consent was obtained from all participants after the procedures and risks were explained. The experiments were conducted in compliance with the Declaration of Helsinki.
Subjects
Thirty-nine healthy, right-handed subjects participated in this experiment. One subject was excluded due to incomplete data, and for three subjects the data of one scanning day were lost during transfer due to a technical failure. The final sample comprised 35 subjects (mean age = 23.06 years, SD = 3.02 years, 15 females). All subjects had normal or corrected-to-normal vision.
Paradigm
Faces were presented in four peripheral quadrants of the screen (Fig. 1A) on a grey background with a fixation cross in the center. Each stimulus panel contained four faces of different identity expressing the same emotion. Stimulus duration was 200 ms, with an inter-stimulus interval of 550 ms during which only the fixation cross was present. A change detection task was presented at the central fixation cross. Roving paradigms have frequently been used to study automatic sensory expectation effects (Haenschel et al., 2005; Garrido et al., 2008; Costa-Faidella et al., 2011; Moran et al., 2013; Auksztulewicz and Friston, 2015; Stefanics et al., 2018a,b). Here, we used a multi-feature visual ‘roving standard’ paradigm to elicit PE responses through unexpected changes in the color (red, green) or emotional expression (happy, fearful) of human faces, or both. Importantly, this allowed us to study how brain responses to physically identical stimuli differed depending on the degree of expectations about color and emotion, respectively. A diagram of the transitions between stimulus types is shown in Fig. 1B.
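The roving logic can be sketched in a few lines: each feature repeats in trains of random length and then flips, independently of the other feature. This is a minimal illustration; the train-length bounds and seed below are illustrative assumptions, not the study's actual sequencing parameters.

```python
import random

def roving_sequence(n_trials, min_train=2, max_train=8, seed=0):
    """Generate two independent binary feature traces for a multi-feature
    roving paradigm: each feature repeats for a random train length, then
    flips (the 'roving' change). Train-length bounds are assumed values."""
    rng = random.Random(seed)

    def one_feature():
        trace, value = [], rng.randint(0, 1)
        while len(trace) < n_trials:
            run = rng.randint(min_train, max_train)  # length of current train
            trace.extend([value] * run)
            value = 1 - value                        # unexpected feature change
        return trace[:n_trials]

    color = one_feature()    # e.g. 0 = red, 1 = green
    emotion = one_feature()  # e.g. 0 = happy, 1 = fearful
    return color, emotion
```

Because the two features flip at independently drawn points, a given stimulus can be a "standard" with respect to color while being a "deviant" with respect to emotion, which is what allows feature-specific PEs to be modeled separately.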
Images were taken from the Radboud Faces Database (Langner et al., 2010). Ten female and ten male Caucasian models were selected based on their high percentage of agreement on emotion categorization (98% for happy, 92% for fearful faces). A Wilcoxon rank sum test indicated that categorization agreement on the emotional expressions did not differ between happy and fearful faces (Z=-0.63, p=0.53). To control low-level image properties, we equated the luminance and the spatial frequency content of grayscale images of the selected happy and fearful faces using the SHINE toolbox (Willenbockel et al., 2010). The resulting images were used to create the colored stimuli.
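The luminance-equating step can be illustrated with a simplified sketch in the spirit of the SHINE toolbox's luminance matching (SHINE additionally matches the spatial frequency content, which is omitted here). The linear rescaling and clipping below are illustrative assumptions, not SHINE's exact algorithm.

```python
def lum_match(images, target_mean=None, target_std=None):
    """Equate mean luminance and contrast (SD) across grayscale images.
    Each image is a flat list of pixel intensities in 0-255. Defaults to
    the grand mean/SD across images as the common target."""
    means = [sum(im) / len(im) for im in images]
    stds = [(sum((p - m) ** 2 for p in im) / len(im)) ** 0.5
            for im, m in zip(images, means)]
    tm = target_mean if target_mean is not None else sum(means) / len(means)
    ts = target_std if target_std is not None else sum(stds) / len(stds)
    out = []
    for im, m, s in zip(images, means, stds):
        scale = ts / s if s > 0 else 0.0
        # shift/scale each image to the target statistics, clip to valid range
        out.append([min(255.0, max(0.0, tm + (p - m) * scale)) for p in im])
    return out
```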
Behavioral task
Similar to previous studies (e.g., Astikainen et al., 2009; Kimura et al., 2012; Müller et al., 2010; Stefanics et al., 2011, 2012, 2018a,b; Kreegipuu et al., 2013; Kuldkepp et al., 2013; Kovacs-Balint et al., 2014; Farkas et al., 2015), we used a behavioral task to engage participants’ attention and thus reduce attentional effects on the processing of face stimuli across participants. The task involved detecting changes in the length of the horizontal and vertical lines of a fixation cross presented in the center of the visual field. At random times, the cross became wider or longer (Fig. 1A), at a rate of 8 flips per minute on average. The cross-flips were unrelated to the changes of the unattended faces. The task was to respond quickly to the cross-flips with a right-hand button-press. Reaction times were recorded.
Eye-tracking
Participants were explicitly asked to fixate on the cross in the center of the screen. To make sure that participants did not direct their overt attention to the face stimuli, we used an EyeLink 1000 eye-tracking system to record gaze position at 250 Hz during the experiment. After removal of intervals during blinks, as well as immediately before and after them, a heatmap of x-y data points pooled across all subjects was plotted using the EyeMMV toolbox (Krassanakis et al., 2014). A Gaussian filter (SD = 3 pixels) was applied to smooth the final image. The heatmap was normalized to have a maximum value of 1, and gaze position histograms for the x and y coordinates were plotted (Fig. 1D).
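The heatmap construction described above (2-D histogram of gaze samples, Gaussian smoothing, normalization to a maximum of 1) can be sketched as follows. This is a generic separable-Gaussian implementation, not the EyeMMV toolbox code; kernel truncation at three SDs is an assumption.

```python
import math

def gaussian_kernel_1d(sigma):
    """Discrete, normalized 1-D Gaussian, truncated at ~3 SDs."""
    radius = int(3 * sigma)
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def smooth2d(grid, sigma):
    """Separable Gaussian smoothing of a 2-D histogram (list of rows)."""
    k = gaussian_kernel_1d(sigma)
    r = len(k) // 2

    def conv_rows(g):
        out = []
        for row in g:
            n = len(row)
            out.append([sum(k[j + r] * row[i + j] for j in range(-r, r + 1)
                            if 0 <= i + j < n) for i in range(n)])
        return out

    gx = conv_rows(grid)                              # smooth along x
    gt = conv_rows([list(c) for c in zip(*gx)])       # smooth along y
    return [list(c) for c in zip(*gt)]

def normalize_max(grid):
    """Scale so the maximum value is 1, as done for the published heatmap."""
    m = max(max(row) for row in grid)
    return [[v / m for v in row] for row in grid] if m > 0 else grid
```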
Data acquisition and preprocessing
FMRI data was acquired on a Philips Achieva 3 Tesla scanner using an eight-channel head coil (Philips, Best, The Netherlands) at the Laboratory for Social and Neural Systems Research at the University of Zurich. A structural image was acquired for each participant with a T1-weighted MPRAGE sequence: 181 sagittal slices, field of view (FOV): 256 × 256 mm2, matrix: 256 × 256, resulting in 1 mm3 resolution. Functional imaging data was acquired in six experimental blocks. In each block, 200 whole-brain images were acquired using a T2*-weighted echo-planar imaging sequence with the following parameters: 42 ascending transverse plane slices with continuous in-plane acquisition (slice thickness: 2.5 mm; in-plane resolution: 3.125 × 3.125 mm; inter-slice gap: 0.6 mm; TR = 2.451 s; TE = 30 ms; flip angle = 77°; field of view = 220 × 220 × 130 mm; SENSE factor = 1.5; EPI factor = 51). We used a 2nd-order pencil-beam shimming procedure provided by Philips to reduce field inhomogeneities during the functional scans. All functional images were reconstructed at 3 mm isotropic resolution. Functional data acquisition lasted approximately 1 hour. During fMRI data acquisition, respiratory and cardiac activity was recorded using a breathing belt and an electrocardiogram, respectively.
We used statistical parametric mapping (SPM12, v6470; RRID: SCR_007037; Friston et al., 2007) for fMRI data analysis. First, functional images were slice time corrected, realigned to correct for motion and coregistered with the subject’s own anatomical image. Next, we normalized structural images to MNI space using the unified segmentation approach and applied the same warping to normalize functional images. The functional images were smoothed with a 6 mm full-width at half maximum Gaussian kernel and resampled to 2 mm isotropic resolution. We used RETROICOR (Glover et al., 2000) as implemented in the PhysIO-Toolbox (Kasper et al., 2017) from the open source software TAPAS (http://www.translationalneuromodeling.org/tapas) to create confound regressors for cardiac pulsations, respiration, and cardio-respiratory interactions. These confound regressors were entered in the general linear model (GLM; see below).
Modeling belief trajectories
In order to include parametric regressors of precision-weighted prediction errors (pwPE) in the GLM, we simulated trajectories of belief updates in a Bayesian generative model of perceptual inference, the Hierarchical Gaussian Filter (HGF; Mathys et al., 2011; 2014). We followed the approach described in detail in Stefanics et al. (2018a) using the HGF toolbox version v2.2 contained in TAPAS (http://www.translationalneuromodeling.org/tapas), a collection of algorithms and software tools to support computational modeling. Briefly, we simulated the perceptual model of a two-level HGF for the input traces given by the two features of the face stimuli: color (red vs. green) and emotion (fearful vs. happy). Inversion of the HGF (Fig. 1E) infers the hidden states (x) of the world that generate the sensory input (u). The belief states are updated after each trial following a generic update rule: the posterior mean of state $x_2$ at trial $k$ changes its value according to a precision-weighted PE:

$\mu_2^{(k)} = \mu_2^{(k-1)} + \varepsilon_2^{(k)}, \quad \varepsilon_2^{(k)} = \sigma_2^{(k)} \delta_1^{(k)}$

where $\delta_1^{(k)}$ is the PE at the level below and $\sigma_2^{(k)}$ acts as a time-varying learning rate. Classical reinforcement learning models (e.g., Rescorla and Wagner, 1972) follow the same form:

$V^{(k)} = V^{(k-1)} + \alpha \, \delta^{(k)}$

where the value $V$ is updated in proportion to the PE $\delta$, with a constant learning rate $\alpha$.
In the HGF the learning rate is determined by the precision of the state and the precision at the level below. For the simulations we assumed that color and emotion were processed by two separate, independent HGFs. We estimated the parameters of the model assuming an ideal Bayes-optimal observer (Mathys et al., 2011) that minimizes surprise of the incoming input stream. Figure 1F displays example traces of the absolute value of ε2 which entered the GLM as described below.
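A minimal sketch of these updates for a two-level binary HGF (after Mathys et al., 2011) is given below. The volatility parameter `omega` and the priors are illustrative values, not the Bayes-optimal estimates used in the study.

```python
import math

def simulate_hgf2(u, omega=-2.0, mu2_0=0.0, sigma2_0=1.0):
    """Two-level binary HGF, simplified sketch.
    u: list of binary inputs (e.g. 0 = red, 1 = green).
    Returns the trial-wise |epsilon_2|, the absolute precision-weighted PE,
    which served as the parametric modulator in the GLM."""
    mu2, sigma2 = mu2_0, sigma2_0
    abs_pwpe = []
    for uk in u:
        muhat1 = 1.0 / (1.0 + math.exp(-mu2))          # predicted P(u = 1)
        delta1 = uk - muhat1                            # level-1 prediction error
        sigmahat2 = sigma2 + math.exp(omega)            # predicted level-2 variance
        pi2 = 1.0 / sigmahat2 + muhat1 * (1 - muhat1)   # posterior precision
        sigma2 = 1.0 / pi2                              # learning rate: precision
        eps2 = sigma2 * delta1                          # ratio weighting the PE
        mu2 = mu2 + eps2                                # belief update
        abs_pwpe.append(abs(eps2))
    return abs_pwpe
```

Run on a roving train, the trace reproduces the qualitative pattern used in the analysis: |ε2| decays as a stimulus feature is repeated and jumps when the feature changes unexpectedly.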
General linear model analysis
The fMRI data was analyzed with two separate GLMs. One included the gradually changing (absolute) pwPE derived from the HGF as modulatory regressors, while the other incorporated an all-or-none categorical change detection (CD) model (see Lieder et al., 2013). For the GLM based on the CD model, we included stick functions as parametric modulators for each stimulus on those trials when a change occurred in the stimulus sequence. The GLMs were estimated for each participant individually. Both the pwPE and the CD modulatory regressors were computed separately for color and emotion. In addition, the GLM included modulatory regressors for red vs. green and happy vs. fearful, respectively. Hence, for each run of the experiment the design matrix included the following experimental regressors: i) a main regressor for the onset of each stimulus display, ii) two modulatory regressors encoding color (red = −1, green = 1) and emotion (happy = −1, fearful = 1), respectively, and iii) two modulatory regressors with the absolute pwPE (or CD) for color and emotion, respectively. The modulatory regressors were mean centered. In addition to these regressors of interest, button presses to cross-flips of the visual attention task were also included in the model. All regressors were convolved with a canonical hemodynamic response function (HRF). Movement regressors and physiological confounds (Kasper et al., 2017) were also included in the first-level GLM. Please note that the sign of colors and emotions in ii) was arbitrarily chosen.
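The construction of one parametric modulator column can be sketched as follows: mean-centered weights (e.g., the absolute pwPE per stimulus) are placed as sticks at the stimulus onsets and convolved with a canonical double-gamma HRF, then sampled once per TR. This is a generic illustration of the procedure, not SPM's implementation; the HRF shape parameters and the micro-time resolution are conventional assumptions.

```python
import math

def hrf(t):
    """Canonical double-gamma HRF shape (response peak ~5-6 s, undershoot
    ~15-16 s, 1:6 amplitude ratio) -- a generic sketch, not SPM's spm_hrf."""
    def gpdf(x, a, b):
        if x <= 0:
            return 0.0
        return (b ** a) * x ** (a - 1) * math.exp(-b * x) / math.gamma(a)
    return gpdf(t, 6.0, 1.0) - gpdf(t, 16.0, 1.0) / 6.0

def parametric_regressor(onsets, weights, n_scans, tr, dt=0.1):
    """One GLM column: mean-centered weights placed as sticks at the onsets
    (in seconds), convolved with the HRF on a micro-time grid, sampled per TR."""
    mean_w = sum(weights) / len(weights)
    grid = [0.0] * round(n_scans * tr / dt)
    for onset, w in zip(onsets, weights):
        grid[round(onset / dt)] += w - mean_w          # mean-centered stick
    kernel = [hrf(i * dt) for i in range(round(32.0 / dt))]  # 32 s HRF support
    conv = []
    for i in range(len(grid)):                         # discrete convolution
        acc = 0.0
        for j in range(len(kernel)):
            if i >= j:
                acc += kernel[j] * grid[i - j]
        conv.append(acc)
    return [conv[round(s * tr / dt)] for s in range(n_scans)]
```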
On the group level, we used F-tests to find regions whose response showed significant correlation with pwPE or stick regressors. The resulting statistical parametric maps (SPM) were family-wise error (FWE) corrected at the cluster level (p<0.05) with a cluster defining threshold of p<0.001 (Woo et al., 2014; Flandin and Friston, 2017). We used probabilistic anatomical labels and cytoarchitectonic maps in the SPM Anatomy toolbox (v2.2c; RRID: SCR_013273, Eickhoff et al., 2005) to identify the anatomical areas/structures where we observed significant effects. We summarize activations in terms of anatomical labeling by reporting all local maxima within each cluster in Table 1. This provides an overview over the activations in terms of commonly used anatomical labels.
Results
Fixation and behavioral responses
Gaze position data (Fig. 1D) confirmed that participants complied with task instructions and fixated the central fixation cross throughout the task. Thus, participants engaged in the detection task and were not overtly attending the faces. Mean reaction time to cross-flips was 484 ms (SD = 106.9 ms), and mean hit rate was 78% (SD = 7.34%).
First-level GLMs
We fitted two GLMs on the single-subject level, incorporating parametric regressors that represented two hypotheses about the decay of PE responses following a change in color or emotional expression of the faces. Similar to the model comparison procedure described in our previous study, our aim was to create a functionally defined mask of significant voxels showing PE responses under both models at the group level (Stefanics et al., 2018a). However, while similar activation clusters were obtained using the pwPE and CD regressors for color changes, significant clusters for changes in emotion were only found using the pwPE regressors. In other words, the beta estimates obtained using CD were not consistent enough to yield significant activation clusters at the group level. The lack of significant group-level results for the stick regressors prevented us from creating an unbiased mask comprising significant voxels for color and emotion (“logical AND” conjunction). We thus restrict ourselves to reporting the results obtained from the group-level analysis using the HGF-based pwPE model.
Effect of color pwPEs
A whole-brain analysis of color changes showed significant activation for color pwPEs in fusiform areas (Fig. 2A). Post hoc inspection of the activation (Fig. 2B) revealed an increased response to pwPEs. Detailed information about anatomical labels, cluster sizes, and MNI coordinates for the maxima of significant voxel clusters is listed in Table 1.
Effect of emotion pwPEs
A whole-brain analysis of emotion pwPEs showed significant effects in bilateral cerebellar areas, bilateral precuneus, bilateral lingual gyrus (LG), and left thalamus (Fig. 3A). A post hoc analysis of the contrast estimates in these regions (Fig. 3B) revealed that all areas showed a negative effect of emotion pwPEs.
Discussion
We used the Hierarchical Gaussian Filter, a computational model for learning and inference, to simulate belief trajectories of an ideal Bayesian observer presented with a sequence of face stimuli. The trial-by-trial update of internal hidden belief states in the HGF relies on precision-weighted prediction errors. Hence, traces of the latter served as regressors in a GLM, which identified brain structures whose activation showed a significant relationship to precision-weighted PEs of color and emotional expression of faces, respectively. We manipulated sensory expectations towards color and emotional expression of faces independently. Crucially, emotion and color pwPEs were evoked by physically identical stimuli; only the expectation, i.e., the violation of regularity, differed between the two conditions. While our previous EEG study reported the scalp distribution and time course of pwPE responses (Stefanics et al., 2018a), here we used fMRI to find BOLD correlates of pwPEs in the generator structures. We found BOLD correlates of pwPEs to color changes in the bilateral fusiform gyrus. pwPEs to changes of emotional expression activated a different set of areas including the bilateral cerebellum, lingual gyrus, precuneus, and the left thalamus (Fig. 4A).
The demonstration of activations correlated to pwPE in ventral visual areas as well as in emotion-processing structures lends experimental support to theoretical accounts of PC in color and emotion perception. From the perspective of PC, functional architectures exist to infer hidden causes of specific sensory information and compare predictions based on inference to observed sensory input (Friston, 2002). Importantly, we manipulated stimulus sequences to induce automatic expectations about the occurrence of different stimulus features, using the same faces to elicit distinct emotion and color pwPEs. In line with our hypothesis, color and emotion pwPEs were reflected by activity in brain structures known to be dedicated to color and emotion processing. A hypothesized generalization of our results is shown in Fig. 4B, which illustrates functional segregation of inferring and predicting hidden causes of sensory information for different features, including color and emotional expression of faces. According to PC, creating and maintaining our internal model of the world is a process during which predictive object representations about the likely properties of the hidden objects are updated using precision-weighted PEs (e.g., Moran et al., 2013; Stefanics et al., 2018a) that signal mismatches between expectations based on prior information and the current sensory data (Fig. 4C).
Here, we studied pwPEs to unattended and task-irrelevant stimuli. We used a primary task independent of the facial stimuli to ensure that participants did not attend to the faces and verified their attentional focus by eye-tracking. Thus, pwPEs were elicited by automatic recognition processes, which minimized confounding variations in attentional contributions to pwPEs.
To our knowledge, this is the first fMRI study using a computational model that simulates the belief trajectories of an optimal Bayesian observer to describe automatic pwPEs to violations of expectations about different features of the same objects. Our paradigm allowed us to study pwPEs in the absence of focal attention and task-relevance. A previous study found that PEs spread across object features in the visual cortex (Jiang et al., 2016). Here, we show that (i) pwPEs can also be elicited in spatially remote neural structures that specialize in the processing of distinct stimulus attributes, and (ii) that this occurs even in the absence of attention. Notably, Jiang et al. (2016) studied PEs to attended and task-relevant random dot stimuli, while in our study face stimuli were task-irrelevant and not attended, as verified by eye tracking. The differences between their results and ours suggest that the role of focal attention in perception might not only be to enhance PEs (e.g., Auksztulewicz and Friston, 2015) but also to spread them across features at the object level (Jiang et al., 2016), which is in line with the feature-integration theory of attention (Treisman and Gelade, 1980).
Color PEs
Color processing involves the ventral visual pathway (Mesulam, 1998; Bartels and Zeki, 2000), where fMRI studies have shown strong color-related activations (Brewer et al., 2005; Solomon and Lennie, 2007; Barbur and Spang, 2008; Brouwer and Heeger, 2009). The location of the fusiform activation in our experiment is in agreement with “color-biased” regions in the ventral occipito-temporal cortex (Lafer-Sousa et al., 2016). The abundance of reciprocal connections in cortex (Felleman and Van Essen, 1991; Markov et al., 2013) indicates that information flow is likely to be bi-directional in the cortical hierarchy. This is in line with PC, where backward connections convey prior knowledge about the hierarchical causal structure of the world (Friston, 2005) whereas the role of forward connections is to convey PE to higher levels and update the internal predictive model of the world. One limitation of our current study is that we cannot separate the putative contributions of bottom-up and top-down mechanisms to the fMRI activations. In sum, our results emphasize the importance of pwPEs and represent an important advance by providing previously unavailable support for a PC account of color perception, given that our parameter estimates likely captured BOLD correlates of pwPEs and model updating processes within color-biased regions of the fusiform cortex.
Emotion PEs
Facial emotions are non-verbal acts of communication that express emotional states and intentions, and are fundamental in social interactions (Fridlund, 1994; Frith, 2009). The social environment is not constant, and detecting changes in the emotional valence of facial expressions in our social space is important for socially successful behavior. Prior ERP studies (Susac et al., 2004; Kimura et al., 2012; Li et al., 2012; Csukly et al., 2013; Stefanics et al., 2012, 2018a; Astikainen et al., 2013; Fujimura and Okanoya, 2013; Xu et al., 2018) suggest that emotional expressions are processed in a few hundred milliseconds and stored in predictive memory representations. We found emotion pwPEs in a set of areas including the bilateral cerebellum, lingual gyrus, precuneus, and left thalamus. This pattern of results (Fig. 4A) is in line with the notion that emotion processing involves a mosaic-like set of affective, motor-related and sensory components (Bastiaansen et al., 2009).
Processing of emotional faces involves the cerebellum, lingual gyrus and precuneus (Fusar-Poli et al., 2009; E et al., 2014; Adamaszek et al., 2017), among other structures. The cerebellar areas showing emotion pwPE activity included bilateral lobules VI and VIIa (Crus 1), supramodal zones which are functionally connected with prefrontal and parietal cortex including the precuneus (Allen et al., 2005; Habas et al., 2009; Buckner et al., 2011). Furthermore, we observed emotion pwPEs in early visual areas known to be activated by emotional faces (Fusar-Poli et al., 2009), as well as the left thalamus. Emotion pwPEs in the lingual gyrus are consistent with our hypothesis that PEs should emerge in structures that likely encode low-level structural attributes in response to changes in facial emotions. The precuneus is a major association area active for a wide range of cognitive processes, in particular somatosensory and visuomotor integration (Bruner et al., 2017). It is functionally connected to sensorimotor areas (Cavanna and Trimble, 2006) as well as early visual areas (Zhang and Li, 2012), thus it is ideally suited to transform sensory information from the lingual gyrus about observed changes in the superficial geometry of faces into a somatosensory representation. The cerebellum does not contain a representation of early visual areas (Buckner et al., 2011) therefore we suggest that cerebellar pwPEs observed in our study were computed based on input from the precuneus. Subcortical connections of the precuneus target the basis pontis allowing the precuneus to access multiple cerebellar circuits (Cavanna and Trimble, 2006). Cerebellar output controls various parts of the thalamo-cortical network (Gomati et al., 2018) which possibly allows the cerebellum to modulate precuneus activity. 
The cerebellum is thought to support computations of forward and inverse internal models and error signals (Ito, 2008; Roth et al., 2013; Popa et al., 2014; Van Overwalle and Marien, 2016; Sokolov et al., 2017). While a forward model relies on motor commands and reproduces the dynamics of the controlled object, inverse models in the cerebellum infer motor commands. Thus, the cerebellum’s role in emotion recognition might involve inferring the motor commands that resulted in the complex, coordinated position of a number of facial muscles which, in turn, resulted in the observed superficial geometry of the face (emotional expression). Accordingly, the precuneus might infer the internal states generating the observed facial expressions, i.e., emotions and intentions, relying on information inferred by the cerebellum about motor commands. We speculate that pwPEs generated in the cerebellum might signal a mismatch between the expected somatosensory information provided by the precuneus, based on previously observed emotional expressions, and the information for an unexpected facial emotion on a current trial. Based on the observed mosaic-like set of regions showing pwPEs we hypothesize that automatic emotion recognition is supported by a thalamo-cortico-cerebellar network of regions that interact to infer the hidden internal states underlying the observed facial expression. The logic here is similar to that of PC theories of social perception (Kilner et al., 2007; Friston et al., 2011; Caligiore et al., 2013; Ishida et al., 2015), suggesting that inverse models can be used as recognition models and therefore can infer the cause of an observed action. 
Given that static facial emotional expressions are the outcome of specific communicative actions by facial muscles to convey a message about the internal state of the agent (Ekman and Friesen, 1978; Redcay, 2008; Furl et al., 2010; Cattaneo and Pavesi, 2014), we suggest that emotion recognition might involve inference about the hierarchical structure of hidden causes (change in light reaching the retina ← change in superficial face geometry ← change in underlying musculature ← motor commands ← internal emotional state) generating observable facial emotions.
In summary, our findings demonstrate that the same physical stimulus can elicit separate feature-specific pwPE responses, depending on distinct predictions about its various attributes. This is in agreement with PC theories of perception. Future extensions of our work could involve using computational models of effective connectivity in order to examine the network dynamics generating pwPEs, as postulated by PC.
Declaration of interest
None.
Acknowledgements
We acknowledge support by the University of Zurich (KES), the René and Susanne Braginsky Foundation (KES), and the Clinical Research Priority Program “Multiple Sclerosis” (GS, KES).