Abstract
The discrepancy between expected and actual outcomes is at the base of error coding, a crucial process during adaptive behaviours. Previous studies indicate that performing or observing errors elicits specific EEG markers (e.g. Theta). Here we show how unexpected changes in the movement trajectory of the virtual co-actor in a human-avatar dyadic paradigm are mapped into the error-monitoring system of the human partner. We asked individuals to synchronize their reach-to-grasp movements with those of a virtual partner in conditions that did (Interactive condition) or did not (Cued condition) require spatio-temporal adaptation to the partner’s actions. Crucially, in certain trials the virtual partner suddenly changed its movement trajectory, thereby violating the human participant’s expectation. These trials showed that fronto-central error-related EEG markers increased as a function of the individual’s reliance on their partner’s behaviour. Source localization showed that observing violations of the expected movements also generated a Theta increase over occipito-temporal regions, highlighting visuo-motor processing during erroneous interactions.
Significance Statement Our ability to coordinate with peers relies upon moment-to-moment prediction and integration of visual (i.e. observing the movements of others) and motor (completing our own actions) information. However, when the behaviour of our partners changes unexpectedly, our predictions turn out to be incorrect. Here, we describe EEG error-related neuromarkers (ERN/Pe; Theta/Alpha modulations) recorded while human participants performed a joint reach-to-grasp task with a virtual partner. We show that unexpected changes of the avatar trajectory are mapped into EEG error-markers according to the degree of interpersonal interdependence. Moreover, source analysis highlights that fronto-central and occipito-temporal regions generate Theta activity associated with processing visuo-motor information during social interactions.
Introduction
Being able to coordinate in time and space with our peers is the keystone of interpersonal interactions and is at the root of many cognitive functions that contribute to our “social nature” such as language, joint attention and motor coordination (Sebanz et al., 2006; Sebanz & Knoblich, 2009). Interpersonal motor coordination requires dynamic and efficient encoding of others’ actions and spatio-temporal synchronization between individuals (Sebanz et al., 2006), thus involving several functions ranging from action perception to goal prediction (Pezzulo, 2013; Panasiti et al., 2017). The accuracy of these predictions determines the success of joint actions. Indeed, when interacting, the prediction of the partner’s on-going behaviour is supported by the moment-to-moment integration of visual (i.e. others’ movements) and motor (own actions) information.
Individuals’ ability to predict the fate of observed actions (Aglioti et al., 2008; Abreu et al., 2012; 2017) is thought to rely on the activity of the Action Observation Network (AON, Rizzolatti & Craighero, 2004) through visuo-motor transformations, supporting predictive action simulation (Umiltà et al., 2001; Kilner et al., 2007; Urgesi et al., 2010; Urgesi et al., 2007; Avenanti et al., 2013). The AON comprises occipito-temporal regions where visual processing of body images (Extrastriate Body Area, EBA; lateral occipito-temporal cortex, LOTC) and biological motion (Superior Temporal Sulcus, STS, Puce & Perrett, 2003; Giese & Poggio, 2003) is fed forward to parietal (anterior Intra Parietal Sulcus, aIPS) and premotor (ventral and dorsal PreMotor, vPM, dPM) regions, where the transformation of visual information into motor simulations is thought to be computed (Keysers & Gazzola, 2014). Indeed, research shows that the LOTC encodes several aspects of actions, such as the perception of body shapes, body parts and body movements, but also contributes to understanding the meaning of actions as well as to performing goal-directed movements (see Lingnau and Downing, 2015 for a review). These perceptual and cognitive processes contribute to predicting and responding to the actions of others, a fundamental aspect of social motor interactions. Previous EEG studies of motor interaction have reported modulations in the time domain (e.g. P3a and parietal P3b event-related potentials in joint planning compared to individual planning, Kourtis et al., 2012) and in the time-frequency domain, with specific modulations in the Alpha/Mu band (8-13 Hz) during social interaction (Tognoli et al., 2007; Dumas et al., 2010; Naeem et al., 2012; Ménoret et al., 2014; Konvalinka et al., 2014; Novembre et al., 2016).
These Alpha/Mu oscillations are thought to originate from motor and somatosensory cortices (Salmelin and Hari 1994; Arnstein et al., 2011) and have been interpreted as reflecting the integration of perceptual and action information within the AON to support action prediction processes. At times, however, one’s predictions turn out to be wrong, and adaptive social behaviours rely on the ability to detect these prediction errors. The neural correlates of error detection have been investigated thoroughly using experimental paradigms such as the Flanker (Herrmann et al., 2004) and Simon (Masaki et al., 2007; Cohen, 2011) tasks. EEG studies established that detecting and evaluating errors generates two ERPs – the Error-Related Negativity (ERN; Falkenstein et al., 1991; Gehring et al., 1993) and the error Positivity (Pe; Falkenstein et al., 2000; van Elk et al., 2012) – recorded over fronto-central electrodes (i.e. FCz). Similarly, in the time-frequency domain, EEG studies showed Theta (4-7 Hz) and Alpha (8-13 Hz) synchronizations over fronto-central electrodes as markers of performance-monitoring activity (Luu et al., 2004; Trujillo and Allen, 2007; Cavanagh et al., 2009; Cohen, 2011). More recently, Theta and Alpha synchronizations have been reported during the observation of motor errors performed by an embodied avatar in virtual environments (Pavone et al., 2016; Spinelli et al., 2017; Pezzetta et al., 2018), but not when the error is observed from a third-person perspective (Pavone et al., 2016). What remains unknown is whether modulations of error-related neuromarkers in the time and time-frequency domains occur during interactive tasks in which a member of the dyad changes its movement trajectory (thus creating a mismatch between what the other member of the dyad expected and what actually happened).
Furthermore, it is unclear whether perceiving this type of error modulates the activity of the occipito-temporal cortex, which plays a crucial role in forwarding information to the AON (Abreu et al., 2012).
In the present study, we investigated whether the EEG error-related neuromarkers described when performing and observing errors (Pavone et al., 2016; Spinelli et al., 2018; Pezzetta et al., 2018) also emerge when performing a joint reach-to-grasp task on a bottle-shaped object (Sacheli et al., 2015a; 2015b; Candidi et al., 2017) with a virtual partner that deviates from the human participant’s predictions by implementing a sudden change in action trajectory. While a change in trajectory is not an error per se, it creates a discrepancy between expectancy and action outcome that is quintessential to errors, and fundamental for interpersonal coordination. We also explored whether these error-related neuromarkers are modulated when prediction of the partner’s movements is necessary to correctly perform one’s own action, compared to when it is irrelevant to the individual’s actions. Participants were asked to reach and grasp a bottle and to synchronize their grasping timing with a virtual partner in two separate interactive conditions, namely: 1) a Cued condition, requiring participants to adapt only the timing of their movements in order to synchronize with the virtual partner (participants knew in advance where they had to grasp), and 2) an Interactive condition, requiring participants to adapt in time and space (with the need to synchronize their action according to the avatar’s movement and the instruction received). In the Interactive condition, participants were not directly informed about which part of the bottle-shaped object they had to grasp (either the upper part with a precision grip or the lower part with a power grip); rather, they were asked to perform either imitative actions (both partners performing a precision grip or a power grip) or complementary actions (one performing a precision grip and the other a power grip, or vice versa) with respect to the avatar’s actions.
Moreover, in 30% of the trials the avatar performed a motor correction by switching from a precision to a power grip (or vice versa) during the reaching phase (Correction factor). By implementing the Interactive/Cued conditions and the Correction/NoCorrection factor, we aimed to investigate: 1) the EEG correlates of interactive error detection (ERN, Pe, Theta and Alpha modulations) during motor interaction with a virtual partner; 2) the dependence of these signatures on the need to predict the other’s actions (Interactive vs Cued condition); 3) source estimates of the time-frequency markers, in order to highlight the possible recruitment of previously identified regions (e.g. fronto-central areas) as well as other regions (e.g. visual, occipito-temporal) involved in error processing during motor interactions.
Material and Methods
Participants
Twenty-two individuals (13 females; mean age = 26.35 years, S.D. = 3.54, range 19-31) took part in the experiment. All participants were right-handed with normal or corrected-to-normal vision. Participants were naive as to the aim of the experiment at the outset and were informed of the purpose of the study only after all the experimental procedures were completed. All participants were reimbursed 7 €/h. The experimental procedures were approved by the Ethics Committee of the Fondazione Santa Lucia (Rome, Italy) and the study was performed in accordance with the 2013 Declaration of Helsinki. One participant was detected as an outlier (see below) and therefore removed from all EEG analyses.
Experimental stimuli and set-up
Participants were comfortably seated in front of a rectangular table of 120 × 75 cm and viewed a 1024 × 768 resolution LCD monitor placed at the center of the table at ∼60 cm from their eyes. Participants were asked to reach and grasp a bottle-shaped object (37 cm total height) consisting of two superimposed cylinders with different diameters (small, 2.7 cm; large, 6.5 cm), placed next to the center of the working surface. To record participants’ grasping times on the bottle, two pairs of touch-sensitive markers (one pair per cylinder) were placed at 15 cm and 22 cm along the vertical height of the object (see Figure 1). Before each trial, participants positioned the index finger and thumb of their right hand on a starting button placed 34 cm from the bottle-shaped object. Previously recorded instructions were delivered to participants via headphones.
The trial timeline was as follows: the presentation of each clip was preceded by a fixation cross, placed on the region of the screen where the avatar’s hand would appear. The purpose of the cross was to alert participants about the impending trial. Upon receiving the auditory instruction, participants could release the start button and reach-to-grasp the bottle-shaped object. If participants started before hearing the instruction, the trial would be classified as a false start and subsequently discarded from the analyses. Note that the avatar’s index-thumb contact times were measured trial-by-trial by a photodiode placed on the screen that sent a signal recorded by E-Prime2 software (Psychology Software Tools Inc., Pittsburgh, PA) by means of a TriggerStation (BrainTrends ltd., Italy). The photodiode was triggered by a white dot displayed on the screen (not visible to the participants) during the clip frame corresponding to the instant when the avatar grasped its virtual object.
Creation of the virtual interaction partner
The kinematic features of the virtual partner were based on the movements of human participants performing different grasping movements during a human–human joint-grasping task, identical to the procedures described in Candidi et al. (2017) (see Tieri et al., 2015 for technical details of the Motion Capture recording). The final processed trajectories were applied to a Caucasian male character using the commercial software MotionBuilder 2017 and 3DS Max 2017 (Autodesk). Since we wanted to prevent participants from relying on the virtual partner’s facial expressions, the final video stimuli contained only the upper body from the shoulders down, without the neck and head.
The complete sample of clips comprised 10 different grasping movements. Half of the movements ended when the hand grasped the top part of the bottle-shaped object (that is, required precision grips, Figure 2, Panel B), whereas the other half ended when the hand grasped the bottom part (that is, required power grips, Figure 2, Panel A). In 30% of the trials the grasps included an online correction, in which the avatar switched from a precision to a power grip (or vice versa) during the reaching phase. The correction videos were created in 3DS Max by merging the initial key frames of one clip (e.g. a power grasp clip) with the last key frames of a different clip (e.g. a precision grasp clip) (Figure 2, Panels C-D).
Experimental Task
We used an ecological and controlled human-avatar interactive task (Sacheli et al., 2015a; 2015b; 2018; Candidi et al., 2017), which has been shown to recruit the same processes called into play during human-human interaction, namely mutual adjustment and automatic imitation (Sacheli et al., 2012; 2013; Candidi et al., 2015; Curioni et al., 2017; Era et al., 2018). Importantly, one’s own action goal cannot be achieved without taking into account the virtual partner’s online movements and adapting to them. Participants were required to perform the grasping task while interacting with the virtual partner. Namely, they had to reach and grasp the bottle-shaped object placed in front of them with their right hand, as synchronously as possible with the action of the avatar (shown on the screen in front of them) on its own bottle-shaped object. Given the dimensions of the bottle-shaped object, grasping the lower part implied a whole-hand grasp (a power grip), whereas grasping the upper part implied a thumb-index finger precision grip (Movement Type factor).
Participants performed the task in two different conditions (Condition Factor): (1) the “Cued Condition”, where subjects received either a high pitch sound (indicating that they had to grasp the bottle in the upper part) or a low pitch sound (indicating that they had to grasp the bottle in the lower part), and (2) the “Interactive Condition”, where subjects received either a sound indicating that they had to perform an imitative action (Interaction Type Factor) (i.e. participant and virtual partner both grasping the upper part of their bottle) or a sound indicating they had to perform a complementary action (i.e. if the virtual partner is grasping the lower part of its bottle, participant had to grasp the upper part of his/her bottle).
Therefore, in the Cued Condition, participants had to predict and adapt in time (i.e. when the virtual partner was going to grasp the bottle) but not in space, since they knew in advance where they had to grasp the bottle-shaped object. In the Interactive Condition, by contrast, participants had to predict and adapt in both time and space (i.e. when and where the virtual partner was going to grasp the bottle). It was emphasized that in all conditions participants had to perform the task as synchronously as possible with the virtual partner.
The frame during which the avatar corrected its behaviour (e.g. by switching from a power to a precision grasp or vice versa, Correction factor) was used as the 0-time-point for the EEG markers. In the trials where the virtual partner did not correct its behaviour, time 0 corresponded to the frame where the switch would have happened had the clip been merged with a trajectory-change clip (see above, Figure 2).
Participants performed four 100-trial blocks (2 blocks for the Cued Condition, 2 for the Interactive Condition, presented in a counterbalanced order between participants). In 30% of the trials, the virtual partner performed a correction. Thus, each participant performed 140 trials for Cued-NoCorrection, 140 trials for Interactive-NoCorrection, 60 trials for Cued-Correction and 60 trials for Interactive-Correction. The Interaction Type (Complementary/Imitative) and Movement Type (Precision/Power) factors were randomized trial-by-trial (resulting in 35 trials for NoCorrection-Cued-Complementary-Precision, 15 trials for Correction-Interactive-Imitative-Power, etc.). Stimulus presentation and randomization were controlled by E-Prime2 software (Psychology Software Tools Inc.).
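The per-cell trial counts follow directly from the block structure; as a quick arithmetic check (illustrative only, not part of the experimental code):

```python
# Illustrative breakdown of the design: 4 blocks of 100 trials, split evenly
# between the Cued and Interactive conditions, with 30% Correction trials
# in each condition.
total_trials = 4 * 100
per_condition = total_trials // 2            # 200 trials each: Cued, Interactive

correction = round(per_condition * 0.30)     # 60 Correction trials per condition
no_correction = per_condition - correction   # 140 NoCorrection trials per condition

# Interaction Type (2 levels) x Movement Type (2 levels), randomized
# trial-by-trial, splits each cell into 4 equal sub-cells.
no_correction_cell = no_correction // 4      # e.g. NoCorrection-Cued-Complementary-Precision
correction_cell = correction // 4            # e.g. Correction-Interactive-Imitative-Power
print(correction, no_correction, no_correction_cell, correction_cell)  # 60 140 35 15
```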
Behavioral data
We considered Grasping Synchrony as the main behavioural measure, computed as the absolute value of the time delay between subjects’ index–thumb contact times on their bottle and the avatar’s reaching time (Sacheli et al., 2015); this measure indexed the success of human-avatar coordination. We also measured Accuracy, the number of movements executed correctly (according to the instructions); Reaction Times, the time from the go-signal to the release of the start button; Movement Times, the time interval between participants releasing the start button and their index and thumb touching the bottle; and kinematic indexes (Maximum Grip Aperture and Maximum Grip Height). Analyses of these measures are reported in the Supplementary Materials.
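The Grasping Synchrony index described above can be sketched as follows (a minimal illustration; the function name and the millisecond values are ours, not the authors'):

```python
def grasping_synchrony(participant_contact_ms, avatar_contact_ms):
    """Absolute delay between the participant's index-thumb contact time and
    the avatar's grasping time; lower values = better coordination."""
    return abs(participant_contact_ms - avatar_contact_ms)

# e.g. participant grasps at 1230 ms after the go-signal, avatar at 1180 ms
print(grasping_synchrony(1230, 1180))  # 50
```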
EEG-recordings
EEG signals were recorded and amplified using a Neuroscan SynAmps RT amplifier system (Compumedics Limited, Melbourne, Australia). These signals were acquired from 60 tin scalp electrodes embedded in a fabric cap (Electro-Cap International, Eaton, OH), arranged according to the 10-10 system. The EEG was recorded from the following channels: Fp1, Fpz, Fp2, AF3, AF4, F7, F5, F3, F1, Fz, F2, F4, F6, F8, FC5, FC3, FC1, FCz, FC2, FC4, FC6, T7, C5, C3, C1, Cz, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPz, CP2, CP4, CP6, TP8, P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO7, PO3, AF7, POz, AF8, PO4, PO8, O1, Oz, O2, FT7 and FT8. The horizontal electro-oculogram (HEOG) was recorded bipolarly from electrodes placed on the outer canthi of each eye, and signals from the left earlobe were also recorded. All electrodes were physically referenced to an electrode placed on the right earlobe and were algebraically re-referenced off-line to the average of both earlobe electrodes. Impedance was kept below 5 kΩ for all electrodes for the whole duration of the experiment; the amplifier hardware band-pass filter was 0.01 to 200 Hz and the sampling rate was 1000 Hz. To remove blinks and other artifacts, the EEG and horizontal electro-oculogram were processed in two separate steps. Data were first downsampled to 500 Hz; then a blind source separation method, Independent Component Analysis (ICA) (Jung et al., 2000) as implemented in the Matlab toolbox EEGLab (Delorme & Makeig, 2004), was applied to remove components related to eye movements from the EEG. Trials showing amplifier blocking, residual blinks or other types of artifacts were then excluded from the analyses manually. The artifact rejection procedure (over all 22 participants) led to 11.2% of the trials being rejected. For all EEG variables presented below, participants with a mean value more than 2.5 SDs above or below the group mean were excluded from the analyses.
According to this criterion, one participant was detected as an outlier for the Theta ERD/ERS and was therefore removed from all EEG analyses, leaving 21 participants for all analyses.
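The ±2.5 SD exclusion criterion can be sketched as follows (illustrative only; not the authors' code, and the toy scores are invented):

```python
from statistics import mean, stdev

def outlier_indices(values, n_sd=2.5):
    """Indices of participants whose mean value lies more than n_sd sample
    standard deviations above or below the group mean."""
    mu, sd = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > n_sd * sd]

# toy per-participant means: the last value is extreme
scores = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 0.9, 1.1, 8.0]
print(outlier_indices(scores))  # [9] -> last participant excluded
```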
EEG Analysis
ERPs
Time domain analyses were performed by using the FieldTrip routines (Donders Institute, Nijmegen; Oostenveld et al., 2010) in Matlab (The MathWorks, Inc.). The EEG time series were obtained by segmenting the signal into epochs of 2000 ms (from 1000 ms before to 1000 ms after the Avatar’s correction) and were band-pass filtered (2 to 30 Hz) to reduce the contribution of slow potentials that masked some of the frontal components relevant to our study (Pavone et al., 2016).
It is held that high-pass filters > 1 Hz may generate artefactual effects in ERPs (Tanner et al., 2015). However, we verified that grand-average waveforms with and without the filter maintained the same morphology (Acunzo et al., 2012) and that filtering did not introduce distortions that may bias the estimated parameters (Widmann et al., 2015). Each epoch was baseline corrected from 200 ms to 0 ms before the Avatar’s correction (or absence of correction). Two main components already described in the error-related ERP literature (i.e., ERN and Pe) were identified. Visual inspection of the results showed that an ERN component was generated only in the Interactive-Correction and Cued-Correction trials (see Figure 3). The ERN component also peaked at different times in the two conditions (i.e. 194 ms for Interactive-Correction and 233 ms for Cued-Correction). Therefore, we extracted the mean amplitude from a 100 ms time window around each ERN’s respective latency peak (Spinelli et al., 2018). The Pe component was also identified, peaking at 326 ms after the Avatar’s correction for Interactive-Correction trials and at 448 ms for Cued-Correction trials; again, we extracted the mean amplitude from a 100 ms time window around the peaks. We then ran two t-tests to compare ERN and Pe mean amplitudes between the Interactive-Correction and Cued-Correction conditions, and t-tests against zero for each variable to assess whether each component reliably differed from baseline.
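The peak-centred amplitude extraction described above can be sketched as follows (a minimal sketch assuming the 500 Hz post-downsampling rate reported in the EEG-recordings section; function and variable names are ours):

```python
def mean_amplitude_around_peak(erp, peak_ms, srate=500, halfwin_ms=50):
    """Mean amplitude in a 100-ms window (peak_ms +/- halfwin_ms) of an ERP
    time-locked to the avatar's correction (sample 0 = time 0)."""
    start = int((peak_ms - halfwin_ms) * srate / 1000)
    stop = int((peak_ms + halfwin_ms) * srate / 1000)
    window = erp[start:stop]
    return sum(window) / len(window)

# toy post-event ERP sampled at 500 Hz: constant -3 microvolts over 1000 ms
erp = [-3.0] * 500
print(mean_amplitude_around_peak(erp, peak_ms=194))  # -3.0
```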
ERD/ERS
Time-frequency analyses were performed by using the FieldTrip routines (Donders Institute, Nijmegen; Oostenveld et al., 2010) in Matlab (The MathWorks, Inc.). The EEG time series were obtained by segmenting the signal into epochs of 2000 ms length (from 1000 ms before to 1000 ms after the Avatar’s correction) and were band-pass filtered (0.1 to 100 Hz). Each epoch was transformed into the frequency domain using a Hanning-tapered window with 4 cycles and a 50 ms time resolution (using the ‘ft_freqanalysis’ function with the ‘mtmconvol’ method as implemented in FieldTrip). Estimated frequency-band results were displayed as event-related desynchronization/synchronization (ERD/ERS) with respect to a baseline between −500 and 0 ms before the Avatar’s change. ERD and ERS represent a decrease or increase in synchrony of the recorded neuronal population (Pfurtscheller & Lopes da Silva, 1999); positive and negative ERD/ERS values index synchronization and desynchronization with respect to a given reference interval. The formula used to compute event-related desynchronization/synchronization was: ERD/ERS(t,f) = [E(t,f) − Eref(f)] / Eref(f) × 100, where E(t,f) represents the power spectrum at a given t (time) and f (frequency), and Eref(f) is the mean power of the reference interval. For each experimental condition, ERD/ERS were computed from zero (Avatar’s change) to 500 ms. In line with previous literature on frequency modulations during error processing (Cohen, 2011; Cavanagh et al., 2009), we extracted ERD/ERS for the Theta band (4-7 Hz) and the Alpha band (8-13 Hz).
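In code, the ERD/ERS computation reduces to a percentage change from baseline power (a minimal sketch with toy numbers; not the FieldTrip implementation):

```python
def erd_ers_percent(power, baseline_power):
    """ERD/ERS(t,f) = (E(t,f) - Eref(f)) / Eref(f) * 100.
    Positive values = synchronization (ERS), negative = desynchronization (ERD)."""
    return (power - baseline_power) / baseline_power * 100

# toy example: Theta power doubles after the avatar's correction -> +100% (ERS);
# power halves -> -50% (ERD)
print(erd_ers_percent(2.0, 1.0))   # 100.0
print(erd_ers_percent(0.5, 1.0))   # -50.0
```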
Source Analysis
Beamformer analyses were performed to estimate the cortical sources of the effects found at the sensor level, using the Dynamic Imaging of Coherent Sources (DICS) approach as implemented in FieldTrip. DICS was chosen to account for frequency-specific effects at the sensor level. The cross-spectral density matrix was calculated at the frequency of interest (i.e. 5 Hz for the Theta band; 10 Hz and 22 Hz for the Alpha and Beta bands in the Supplementary Materials). The head model used to project the estimated sources was based on a standard MRI (“colin27” template, Holmes et al., 1998; Oostenveld et al., 2003) and the electrode positions were based on the international standard 10-10 system. Source activity post-trigger (0-500 ms) was contrasted with source activity pre-trigger (−500 to 0 ms). The change in oscillatory Theta power was averaged across participants and then interpolated onto a 3D mesh (see Figure 5) provided by the Human Connectome Project (Van Essen, 2012; Seymour et al., 2017; see also the shared Matlab/Fieldtrip code “get_source_power.m” at https://github.com/neurofractal/sensory_PAC/blob/master/2). To test whether the error-related oscillations originated from frontal sites (Cohen, 2011; Pavone et al., 2016) and occipito-temporal sites (Moreau et al., 2017), averaged source power data were extracted at two different Regions of Interest (ROIs) based on visual inspection of the grand averages: the Fronto-central ROI and the right LOTC. These two separate sources were visually identified only for the Interactive-Correction and Cued-Correction conditions in the Theta band (see Figure 6). Tellingly, none of the sources in the Theta band differed from zero in Cued-NoCorrection (ps > 0.08) and only the Fronto-central source differed from zero in Interactive-NoCorrection (p = 0.002). This pattern suggests that only the Correction factor generated two distinct sources in the Theta band (Fronto-central and right LOTC).
Therefore, only the two conditions in which the Avatar corrected its movement (Cued-Correction and Interactive-Correction) were analyzed in the source domain.
Connectivity Analysis
Based on visual inspection and on the statistical results of the source analysis, we focused on the two separate source estimates (namely the right LOTC and the Fronto-central ROI) in the Theta band for the Interactive-Correction and Cued-Correction conditions. Using the coordinates of these sources (estimated on the subjects’ grand average), we performed Linearly Constrained Minimum Variance (LCMV) source analysis in the time domain to extract time series at the two locations of interest, creating two “virtual channels” (i.e. “Fronto-central” and “right LOTC”, see Figure 6). For a description of the entire procedure, see the “MEG virtual channels and seed-based connectivity” tutorial on the FieldTrip webpage (http://www.fieldtriptoolbox.org/tutorial/chieti/virtualchannel). Once extracted, the virtual channels were treated as normal EEG data and averaged in the time domain (see Figure 7A) (Lappe et al., 2013; Baumgarten et al., 2015). Then, we used the complex time-frequency estimates of the two virtual channels to compute the Phase-Locking Value (PLV) between the two regions of interest (see Figure 6B), where the PLV is a value between 0 and 1 that quantifies the phase consistency across multiple trials. The PLV is the absolute value of the mean phase difference between the two signals, expressed as a complex unit-length vector (as described by Lachaux et al., 1999). PLV data were extracted from 300-600 ms after the Avatar’s correction in frequencies between 4 and 11 Hz for both Interactive-Correction and Cued-Correction. While in the ERD/ERS analysis the markers of activity appeared divided between the Alpha and the Theta bands (two different blobs for Interactive-Correction and for Interactive-NoCorrection, see Figure 4), in the PLV analysis only one blob was visible, spanning 4 to 11 Hz (Theta and low Alpha). We therefore extracted the PLV values based on this visual inspection rather than on the classical frequency-band separation.
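The PLV computation follows Lachaux et al. (1999); below is a minimal sketch over toy phase data (illustrative only; the function name and phase values are ours, not the authors' pipeline):

```python
import cmath

def phase_locking_value(phases_a, phases_b):
    """PLV = |mean over trials of exp(i * (phi_a - phi_b))|.
    1 = perfectly consistent phase difference across trials, 0 = random."""
    n = len(phases_a)
    s = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(s / n)

# identical phase lag (0.5 rad) on every trial -> PLV = 1
print(round(phase_locking_value([0.1, 1.2, 2.3], [0.6, 1.7, 2.8]), 3))  # 1.0
```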
Data handling and Statistics
Our main hypotheses concern ERP (ERN, Pe) and time-frequency (Theta, Alpha ERD/ERS) modulations arising when participants need to: i) predict the action of their partner and proactively adapt to it (Interactive/Cued factor); ii) predict and adapt to an error performed by their partner (Correction/NoCorrection factor). Therefore, the analyses presented in the main text focus on these two factors. Moreover, collapsing across the Interaction Type (Complementary/Imitative) and Movement Type (Power/Precision grasping) factors allowed us to have a higher number of trials for each condition. Analyses of the behavioral, kinematic and EEG measures that additionally include Interaction Type (Complementary/Imitative) and Movement Type (Precision/Power) as within-subject factors, alongside Correction (Correction/NoCorrection) and Condition (Interactive/Cued), as well as analyses of the behavioral (RTs, MTs, Accuracy) and kinematic (MaxAp and MaxH) indexes and of Beta (14-30 Hz) ERD/ERS with Correction and Condition as within-subject factors, are described in the Supplementary Materials section.
As ERN and Pe components were not identified in the Interactive-NoCorrection and Cued-NoCorrection conditions (see below), ERN and Pe mean amplitudes were analyzed using pairwise t-tests comparing Interactive-Correction and Cued-Correction trials. Grasping Synchrony and the time-frequency indexes (Theta and Alpha ERD/ERS) were analyzed through separate 2 × 2 within-subject repeated-measures ANOVAs, with Correction (Correction/NoCorrection) and Condition (Interactive/Cued) as within-subject factors.
Source power indexes for the two ROIs were analyzed through a 2 × 2 repeated-measures ANOVA with ROI (Fronto-central/right LOTC) and Condition (Interactive-Correction/Cued-Correction) as within-subject factors.
Frequentist statistical analyses (Shapiro-Wilk test for normality, General Linear Model (GLM), and Greenhouse-Geisser correction for non-sphericity when appropriate (Keselman & Rogan, 1980)) were performed with Statsoft Statistica 8 software. Post-hoc correction for multiple comparisons was made using the Bonferroni test. In order to appropriately test the evidence for null results (Jarosz & Wiley, 2014; Masson, 2011; Rouder, 2014; Wagenmakers, 2007), we ran Bayesian Paired Sample T-Tests and Bayesian T-Tests against 0, as necessary. When an alternative hypothesis is compared with the null hypothesis, a BF10 value of 1 does not favour either hypothesis, while values above 1 indicate increasing evidence for the alternative over the null hypothesis and values below 1 indicate evidence for the null over the alternative (Dienes, 2014). Generally, BF10 values greater than 3 indicate moderate evidence for the alternative over the null hypothesis. Conversely, a BF10 lower than 1/3 is considered to provide moderate evidence in favour of the null hypothesis over the alternative (Jeffreys, 1961; Lee and Wagenmakers, 2014). Bayesian statistical analyses were performed using JASP (JASP version 0.8.12, Love et al., 2015). All variables are violin-plotted, using the shared Matlab function ‘violin.m’ at https://github.com/bastibe/Violinplot-Matlab/blob/master/Violin.m.
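For intuition about the reported BF10 values, a Bayes factor for a one-sample (or paired-difference) t-test can be roughly approximated from BIC values (Wagenmakers, 2007). Note that JASP's default Bayesian t-tests use Cauchy priors, so this sketch only approximates, not reproduces, the reported values:

```python
import math

def bf10_bic_approx(diffs):
    """BIC approximation to BF10 for testing mean(diffs) == 0 (Wagenmakers, 2007).
    Null model: mean fixed at 0; alternative: mean estimated from the data.
    BF10 = exp((BIC_null - BIC_alt) / 2)."""
    n = len(diffs)
    m = sum(diffs) / n
    sse_null = sum(d ** 2 for d in diffs)        # residuals around 0
    sse_alt = sum((d - m) ** 2 for d in diffs)   # residuals around the sample mean
    # BIC(alt) - BIC(null) = n*ln(SSE_alt/SSE_null) + ln(n)  (one extra parameter)
    delta_bic = n * math.log(sse_alt / sse_null) + math.log(n)
    return math.exp(-delta_bic / 2)

# strong, consistent paired differences -> BF10 well above 3 (evidence for H1)
print(bf10_bic_approx([1.1, 0.9, 1.2, 1.0, 0.8, 1.1, 0.9, 1.0]) > 3)  # True
```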
Results
Behavioral
Grasping Synchrony
The 2 Correction (Correction/NoCorrection) × 2 Condition (Interactive/Cued) ANOVA showed significant main effects of Correction and Condition, with worse synchrony performance for Correction vs. NoCorrection trials (F(1,21) = 17.546, p = 0.005) as well as for the Interactive vs. the Cued condition (F(1,21) = 32.160, p = 0.001). The interaction between Correction and Condition was also significant (F(1,21) = 53.766, p < 0.001). Post-hoc tests indicated that synchrony performance was worse for Interactive-Correction compared to all other conditions (all ps < 0.001), and for Interactive-NoCorrection compared to the Cued-Correction and Cued-NoCorrection conditions (all ps < 0.001) (see Figure 3).
ERPs
ERN and Pe
The paired sample t-test revealed a significantly larger ERN mean amplitude for Interactive-Correction compared to Cued-Correction trials (t(20) = −2.19633, p = 0.007). Both variables differed from zero (t(20) = −5.03403, p = 0.001 for the Interactive-Correction condition and t(20) = −4.23200, p = 0.001 for Cued-Correction).
The paired sample t-test revealed a significantly larger Pe mean amplitude for Interactive-Correction compared to Cued-Correction trials (t(20) = 3.78633, p = 0.001). Both variables differed from zero (t(20) = −5.393, p = 0.001 for the Interactive-Correction condition and t(20) = −2.3634, p = 0.028 for Cued-Correction).
In summary, these results suggest a different processing of the Avatar’s correction across conditions. When observing a correction of the avatar’s movement in the Interactive condition (i.e. when subjects need to predict and adapt in time and space to the virtual partner’s change), both ERN and Pe were larger than in the Cued condition (see Figure 4).
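The windowed mean-amplitude measure underlying such ERN/Pe comparisons can be sketched as follows; the 1 kHz sampling, window bounds, and toy waveform are hypothetical illustrations, not the study's actual parameters.

```python
# Sketch: mean amplitude of an event-locked ERP within a latency window.
# Hypothetical windows (ERN-like 0-100 ms, Pe-like 200-400 ms post-event).
import numpy as np

def mean_amp(erp, times_ms, t0, t1):
    """Mean amplitude of `erp` within [t0, t1) ms."""
    win = (times_ms >= t0) & (times_ms < t1)
    return erp[win].mean()

# Toy waveform sampled at 1 kHz, event at 0 ms: a -5 uV deflection 0-100 ms
times_ms = np.arange(-200, 600)
erp = np.where((times_ms >= 0) & (times_ms < 100), -5.0, 0.0)
print(mean_amp(erp, times_ms, 0, 100))    # -5.0 (ERN-like window)
print(mean_amp(erp, times_ms, 200, 400))  # 0.0 (Pe-like window)
```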
Time Frequency
Theta (4-7Hz) ERD/ERS Over FCz
The 2 Correction (Correction/NoCorrection) x 2 Condition (Interactive/Cued) ANOVA showed significant main effects of Correction and Condition, with larger Theta synchronization for Correction compared to NoCorrection trials (F(1,20) = 49.609, p = 0.001) and during the Interactive condition compared to the Cued one (F(1,20) = 93.846, p = 0.001). The interaction between Correction and Condition also reached statistical significance (F(1,20) = 20.309, p = 0.001). Post-hoc tests indicated the following: i) the Theta ERS during Interactive-Correction trials was larger than that recorded during all other conditions (all ps < 0.001); ii) Theta ERS in the Interactive-NoCorrection condition was larger than in Cued-NoCorrection (p = 0.001); iii) Theta ERS for Cued-Correction trials was larger than for Cued-NoCorrection trials (p = 0.037) (see Figure 5).
Alpha (8-13Hz) ERD/ERS Over FCz
The 2 Correction (Correction/NoCorrection) x 2 Condition (Interactive/Cued) ANOVA showed significant main effects of Correction and Condition, with larger Alpha synchronization for Correction compared to NoCorrection trials (F(1,20) = 16.460, p = 0.001) and during the Interactive condition compared to the Cued one (F(1,20) = 36.398, p = 0.001). The interaction between Correction and Condition also reached significance (F(1,20) = 10.994, p = 0.003). Post-hoc tests indicated that the Alpha ERS for Interactive-Correction was larger than that generated by all other conditions (all ps < 0.001), and that the Alpha ERS for Interactive-NoCorrection was larger than that recorded during Cued-NoCorrection trials (p < 0.001), while no difference was found between Cued-Correction and Cued-NoCorrection (p = 1) (see Figure 5).
To sum up, both Theta and Alpha ERS are influenced by the different conditions. There is stronger Theta and Alpha synchronization for Interactive-Correction trials compared to all other conditions, and stronger Theta and Alpha synchronization for Interactive-NoCorrection compared to Cued-NoCorrection. However, Theta and Alpha show different patterns within the Cued condition, with a significant difference between Cued-Correction and Cued-NoCorrection in the Theta-band (p = 0.037); no such difference was found in the Alpha-band (p = 1). To test the null effect of Correction in the Cued condition for the Alpha band, we performed Bayesian paired-sample t-tests between Cued-Correction and Cued-NoCorrection in both the Alpha and Theta frequency bands. Bayesian results showed strong evidence in favor of H1 in the Theta band (BF10 = 347.971) and anecdotal evidence in favor of H0 in the Alpha band (BF10 = 0.444).
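Band-limited ERD/ERS percentages of the kind analysed above are conventionally computed as band power relative to a pre-event baseline (Pfurtscheller-style). The sketch below shows this for a Theta band on simulated single-channel epochs; the sampling rate, epoch layout, and data are assumptions, not the authors' pipeline.

```python
# Sketch: ERD/ERS (%) in the Theta band (4-7 Hz) for one channel (e.g. FCz).
# Simulated epochs; positive values = ERS, negative values = ERD.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                          # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 2 * fs))        # 40 trials x 2 s of data

b, a = butter(4, [4 / (fs / 2), 7 / (fs / 2)], btype="band")
theta = filtfilt(b, a, epochs, axis=1)            # Theta band-pass per trial
power = np.abs(hilbert(theta, axis=1)) ** 2       # instantaneous band power
avg = power.mean(axis=0)                          # average across trials

baseline = avg[: int(0.5 * fs)].mean()            # first 500 ms as baseline
ers = (avg - baseline) / baseline * 100           # percent change vs baseline
```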
Source Analysis
ANOVA on ROIs - Theta Source – 5Hz
The 2 ROIs (Fronto-central/right-LOTC) x 2 Condition (Interactive-Correction/Cued-Correction) ANOVA showed that the factor Condition reached statistical significance as a main effect, with greater Theta source power for Interactive-Correction compared to Cued-Correction trials (F(1,20) = 17.34, p < 0.001) (see Figure 6).
The Fronto-central and the right-LOTC ROIs show similar patterns, namely a significant increase of Theta source power during the Interactive-Correction condition compared to Cued-Correction (Figure 6). By using the coordinates of these two ROIs, we subsequently targeted functional connectivity between these two areas.
Connectivity – Phase-Locking Value
Since the two source estimates were clearly identified for the Interactive-Correction and Cued-Correction trials, the virtual channels were extracted exclusively for these two conditions. The t-test revealed a significant difference (t(20) = 3.510, p = 0.001) in the PLV data extracted from 300-600 ms in a 4-11 Hz frequency range after the Avatar’s correction (see Figure 7), with stronger phase-locking between the Fronto-central and right-LOTC ROIs for Interactive-Correction compared to Cued-Correction. These data suggest phase-locking between the two ROIs from 300 ms after the Avatar’s correction and could imply neural communication between the Fronto-central ROI and the right LOTC in the 4-11 Hz frequency range.
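An across-trials phase-locking value of this kind (Lachaux-style) can be sketched as below; the simulated signals, trial count, and 6 Hz test frequency are illustrative assumptions, not the study's virtual-channel data.

```python
# Sketch: phase-locking value (PLV) between two (virtual) channels.
# PLV = |mean over trials of the unit phasor of the phase difference|,
# per time sample; 1 = perfect phase locking, ~0 = no locking.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """PLV across trials; x, y: (n_trials, n_samples) arrays."""
    phase_diff = np.angle(hilbert(x, axis=1)) - np.angle(hilbert(y, axis=1))
    return np.abs(np.exp(1j * phase_diff).mean(axis=0))

# Two 6 Hz signals with a constant phase lag across trials -> PLV near 1
fs = 250
t = np.arange(0, 1, 1 / fs)                       # 1 s, integer cycle count
rng = np.random.default_rng(1)
offsets = rng.uniform(0, 2 * np.pi, 30)           # random phase per trial
x = np.array([np.sin(2 * np.pi * 6 * t + o) for o in offsets])
y = np.array([np.sin(2 * np.pi * 6 * t + o + 0.8) for o in offsets])
print(plv(x, y).mean())  # close to 1: the lag is constant across trials
```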
Discussion
In the present study we recorded EEG in human participants who performed a joint-grasping task with a virtual partner. Our paradigm allowed us to explore the link between action prediction in an interactive context and the monitoring of actions that may or may not deviate from one’s own expectations. We obtained three main results: 1) electrocortical indices of performance monitoring were higher in conditions which required the participant to predict in space and time the outcome of their partner’s action (Interactive condition); 2) modulation of the above-mentioned indices, particularly of Theta activity over fronto-central electrodes, was stronger when the virtual partner changed its initial grasping movement in the Interactive-Correction condition; and 3) human adaptation to the virtual partner’s correction implied an increase of frontal and occipito-temporal cortical connectivity.
Action and error monitoring during motor interactions
EEG and fMRI studies show that similar activity is found when people perform errors (Debener et al., 2005; Gehring et al., 1993) and observe another person making an error, suggesting that the detection of one’s own errors and those made by others is mediated by at least partially analogous neural mechanisms (van Schie et al., 2004; Malfait et al., 2010; Cracco et al., 2016; Desmet & Brass, 2015). In the same vein, healthy participants performing a dual go-no-go task slow down their actions following their own errors, as well as after observing errors made by their co-actor(s). This hints at the similarity of the strategies called into play for overcoming one’s own errors and those made by others (Schuch and Tipper, 2007).
It is also worth noting that monitoring one’s own actions and the actions of others is an inherently plastic process. Motor experience and expertise, for example, influence behavioural and neurophysiological responses to erroneous action observation (Aglioti et al., 2008; Candidi et al., 2014; Panasiti et al., 2016). Importantly, error related responses allow performance adaptation depending on the social context in which interactions are embedded (i.e. cooperative and competitive contexts; de Bruijn et al., 2012). However, much less is known about the extent to which ‘erroneous’ or unpredicted movements trigger the activity of the brain system involved in mapping others’ actions. Our results significantly expand current knowledge on this issue by showing that unpredicted actions of an interacting partner are tagged as an error by the performance monitoring system of the other member of the dyad.
Error-related responses in time domain during interpersonal motor interactions
The ERN and Pe components over FCz reveal specific modulation of error monitoring associated with Interactive-Correction trials. The components were only visually identified in conditions where the Avatar changed its behavior (Correction factor) (see Figure 4). Interestingly, the Avatar’s changes in the Interactive condition elicited greater ERN and Pe mean amplitudes than in the Cued condition. This pattern of results indicates that time-dependent neural responses triggered by error detection are induced by others’ errors according to the relevance that these responses have for controlling and updating one’s own movements during interaction.
The Error-Related Negativity is usually associated with early detection of an unexpected outcome by an internal signal (here, the change of the Avatar’s movement). However, a recent study found that the ERN is influenced by the magnitude of an observed error in space, with greater amplitude and earlier latency for large errors compared to small ones (Spinelli et al., 2018).
The Pe is associated with conscious perception of an error, with motivational aspects (in our case, the need to adapt to the Avatar’s change) and with top-down cognitive control (Steinhauser and Yeung, 2010; Orr and Carrasco, 2011; Ridderinkhof et al., 2009). It has been shown that while the ERN is always present following error-trials, the Pe is elicited only in trials in which subjects are aware of their errors (Nieuwenhuis et al., 2001).
It is worth noting that in the present study, the observed unexpected correction in the avatar’s movement is found in both the Interactive and Cued conditions. Crucially, however, only in the Interactive condition does one need to spatially predict the outcome of the partner’s behaviour in order to adapt to it. Thus, the higher activation of the early error detection system (ERN-Pe) seems to reflect the need to integrate visuo-motor information concerning one’s own and one’s partner’s movements.
Error-related responses in time-frequency domain
The Time Frequency Analysis on FCz reveals greater Theta and Alpha synchronization for the Interactive-Correction condition compared to all other conditions. This Theta and Alpha activity, associated with a prediction error, is in line with previous literature (Pavone et al., 2016; Cohen, 2011; Cavanagh et al., 2009). Interestingly, when the Avatar corrected its action, the Theta synchronization was reduced in the Cued condition. This suggests a different processing of the Avatar’s correction across conditions (i.e. when participants need to predict and adapt in the Interactive condition, or when they do not need to predict in the Cued one). Furthermore, in the Interactive condition, Theta and Alpha activity were also found even when the virtual partner performed no correction. However, Theta and Alpha ERS seemed to show different patterns. Indeed, post-hoc analysis showed that Theta ERS in the Cued-Correction trials was significantly higher than Theta during Cued-NoCorrection trials, while no such effect was found in the Alpha band. Bayesian paired t-tests, comparing specifically the differences within the Cued conditions in the Theta and Alpha bands, showed strong evidence in favor of a difference between Cued-Correction and Cued-NoCorrection in the Theta-band and evidence in favor of no power modulation between Cued-Correction and Cued-NoCorrection trials in the Alpha-band. The differences between the patterns of results in the Theta-band and the Alpha-band suggest a differing role for each frequency band in the cognitive processes required across the different conditions (see below).
Theta ERS: Monitoring System and Error Detection
The error-related indices highlight a mismatch between subjects’ action prediction and the avatar’s actual action. Crucially, this mismatch was particularly strong in the Interactive condition, when subjects needed to predict both the temporal deployment and the goal of the partner’s actions, compared to the Cued condition, when subjects only needed to temporally predict the partner’s actions. Therefore, the present results suggest that 1) goal-related and temporal coding of observed actions might rely on different processing systems and 2) predictive coding of the goal of observed actions, and its violation, represents the crucial feature upon which observed-action monitoring is based. A parsimonious interpretation of this pattern of results is that the performance monitoring system is differentially activated in the Interactive and Cued conditions throughout the joint grasping task. The different interactive conditions suggest that the frontal Theta ERS is associated with the detection of violations of action predictions rather than with the participant’s post-error behavioural adaptation. This is supported by, but not limited to, two observations: 1) the timing at which the activity was recorded (200 ms after the virtual partner’s correction) is in line with the observation of an error but is too fast to be associated with a behavioural correction of the participant’s movement; 2) the activity is also present in conditions during which the participant does not perform any correction (i.e. Interactive-NoCorrection and Cued-Correction). That the frontal Theta ERS was larger during the Interactive condition than during Cued interactions – even when no error was observed – might be interpreted as a default monitoring activity that enables the brain to detect errors as soon as they appear. Accordingly, frontal Theta activity is less present during the Cued conditions, especially when no change is observed.
In such conditions the subject is not engaged in monitoring the partner’s action goals and likely dedicates fewer resources to processing the partner’s behavior. Moreover, our DICS analysis of the Theta band revealed a fronto-central source estimate, in line with previous studies identifying the source of Theta activity during error processing over frontal-midline cortices (Cohen, 2011; Kovacevic et al., 2012), where the Anterior Cingulate Cortex (ACC) is believed to be a key part of the cognitive control network. An intra-cortical study on non-human primates also directly correlated neuronal Theta activity in the ACC with prediction and adaptation following an error (Womelsdorf et al., 2010).
The Theta dynamics shown in the current study provide new insights into the neural underpinnings of cognitive control and action-related processing during joint actions. Importantly, this effect was maximal during sudden changes in the virtual partner’s movement, showing that higher uncertainty in the Interactive condition generates stronger source-localized fronto-central Theta.
Alpha during interpersonal motor interactions
An often-described EEG marker of engagement in interactive paradigms is the alpha/mu desynchronization over central sites (Ménoret et al., 2014). This rolandic alpha/mu band activity has been considered an index of MNS activity since it is suppressed (ERD) during both action observation and action execution (Cochin et al., 1999; Muthukumaraswamy et al., 2004; Oberman et al., 2005; Pineda, 2005). However, recent studies provide a more cautious interpretation of alpha/mu modulation, revising some of the conclusions about the implication of the MNS in processing one’s own and others’ actions in healthy participants (Coll et al., 2017) and in clinical samples, such as people with autism (Dumas et al., 2014). That being said, an increase in Alpha power has been associated with attentional modulation aimed at “gating” the incoming flow of information through top-down processing (Benedek et al., 2011) and with error monitoring (van Driel et al., 2012). In the present case, the Alpha increase is noticeable during the Interactive conditions, with a greater increase when the Avatar changed its behaviour. We suggest that the Interactive task requires stronger attentional processes due to the need to adapt to the partner’s movement.
Occipito-Temporal theta responses during hand-to-hand interactions: visual somatotopy of interactions
The action observation network (AON), comprising the ventral Premotor Cortex (PMv), Inferior Parietal Lobule (IPL), and Superior Temporal Sulcus (STS), has been proposed as a neural substrate for action understanding (see for review Rizzolatti et al., 2014; Avenanti et al., 2013; Urgesi et al., 2014). However, recent findings associated the ability to decode an action with activity in the lateral occipito-temporal cortex (Wurm & Lingnau 2015; Lingnau & Downing, 2015). Interestingly, in addition to fronto-central Theta activity associated with error detection, source analysis revealed activity in the right occipito-temporal cortex. This region is compatible with the location of the Extrastriate Body Area (EBA), thought to play a role in the processing of body images as indicated by functional methods (Downing et al., 2001; Thierry et al., 2006), virtual lesions (Urgesi et al., 2004; Urgesi et al., 2007) and studies on brain-damaged patients (Moro et al., 2008). More recently we have shown that an occipito-temporal Theta ERS is found during the passive observation of images of hands and arms (Moreau et al., 2017). This result is consistent with the idea that visual processing of hand and arm images occurs in specific sub-regions of the occipito-temporal cortex (Bracci et al., 2012; Peelen & Caramazza, 2010), whereas Tucciarelli et al. (2015) have shown that Theta-band activity in LOTC areas distinguishes between hand pointing and grasping actions. A previous fMRI study (Desmet et al., 2014) reported that the observation of an error performed by another individual triggers a metabolic increase in the occipito-temporal region, compatible with the EBA. Here we describe a Theta ERS when the hand of an interactive partner deviates from its expected trajectory, suggesting that this activity might be associated with increased attention on the hand of the partner.
Therefore, we submit that the source-detected theta over occipito-temporal area during Interactive-Correction trials is associated with action re-coding after the deviation from the predicted goal was perceived. This re-coding appears to be a necessary step to adapt to the avatar’s sudden change in movement.
Conclusions
We describe the EEG correlates of error detection during motor interactions with a virtual partner that performed expected or less expected actions. We found that electrocortical markers of error processing were stronger for unpredicted actions; particularly in the Interactive condition during which goal-related and temporal predictions of the partner’s actions are required. Moreover, the source estimates of the Theta frequency markers show the recruitment of fronto-central and occipito-temporal regions, indicating their potential role in processing and integrating visual and motor information during social interactions. Future studies are needed to clarify the nature and modality of communication between frontal and occipito-temporal regions for supporting effective visuo-motor transformations during interpersonal interactions. This may be of interest for individuals with motor disabilities (e.g. apraxia) as well as for conditions of impaired social skills (e.g. autism).
Supplementary Material
Behavioral data
We considered the following as behavioral measures: 1) Grasping Synchrony, i.e. the absolute value of the time delay between subjects’ index–thumb contact-times on their bottle and the avatar’s reaching time; 2) Accuracy, that is the number of movements executed correctly (according to the instructions); 3) Reaction Times (RTs), i.e. time from the go-signal to the release of the start button; 4) Movement Times (MTs), i.e. time interval between participants releasing the start button and their index-thumb touching the bottle.
Motion Kinematics data
Motion tracking was continuously recorded during the experimental blocks. During off-line analyses, the participants’ start button-hand-release times and index-thumb-bottle contact times were used to subdivide the kinematics recordings with the aim of analysing only the reach-to-grasp phase (from start button hand-release to index-thumb contact-times). To obtain specific information on the reaching component of the movement, we analysed wrist trajectory as indexed by the maximum peak of wrist height on the vertical plane (Maximum Wrist Height). To obtain specific information on the grasping component of the movement, we analysed maximum grip aperture (Maximum Grip Aperture, i.e., the maximum peak of index-thumb 3D Euclidean distance). We excluded from the analyses (behavioural, kinematics and EEG) trials in which participants 1) missed the touch-sensitive sensors and thus no response was recorded, 2) released the start button before the go instruction or 3) did not comply with the complementary/imitative instructions.
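The two kinematic indices just described can be expressed compactly as below; the marker array shapes and the z-up coordinate convention are assumptions for illustration, not the authors' acquisition format.

```python
# Sketch: reach-phase kinematic indices from 3D marker trajectories.
# wrist, index, thumb: (n_samples, 3) arrays of x/y/z positions (z vertical).
import numpy as np

def max_wrist_height(wrist):
    """Maximum peak of wrist height on the vertical (z) axis."""
    return wrist[:, 2].max()

def max_grip_aperture(index, thumb):
    """Maximum peak of the index-thumb 3D Euclidean distance."""
    return np.linalg.norm(index - thumb, axis=1).max()
```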
Behavioural and kinematic values that fell 2.5 SDs above or below each individual mean for each experimental condition were considered as outliers and excluded from the analyses. We calculated the individual mean value in each condition for each of these behavioural and kinematics measures. The obtained values were entered in different within-subject ANOVAs (see below). We used non-parametric tests concerning the Accuracy measures. Kinematics, Accuracy, MTs and RTs results are presented as supplementary materials.
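The 2.5 SD rejection rule might be implemented along these lines (a hypothetical helper applied per participant and condition, not the authors' actual code):

```python
# Sketch: per-condition outlier rejection at +/- 2.5 SD around the
# individual mean for that condition.
import numpy as np

def drop_outliers(values, n_sd=2.5):
    """Return values within n_sd sample standard deviations of the mean."""
    values = np.asarray(values, dtype=float)
    m, sd = values.mean(), values.std(ddof=1)
    keep = np.abs(values - m) <= n_sd * sd
    return values[keep]
```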
Analyses and Results
Behavioural and kinematics (Synchronicity, Reaction Times, Movement Times, Maximum Wrist Height, Maximum Grip Aperture) data were analysed through repeated measures ANOVAs; with Correction (Correction, NoCorrection), Condition (Interactive, Cued), Interaction Type (Complementary, Imitative), Movement Type (Precision, Power) as within subject factors. Accuracy was analysed by means of non-parametric tests.
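A repeated-measures design of this kind can be sketched with a standard long-format ANOVA; the simulated data below merely reproduce the qualitative Correction x Condition pattern reported in the main text (worst synchrony in Interactive-Correction) and are not the real dataset.

```python
# Sketch: 2 (Correction) x 2 (Condition) repeated-measures ANOVA on a
# synchrony-like measure, using statsmodels on simulated long-format data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
rows = []
for subj in range(22):                      # 22 participants, as in the study
    base = rng.normal(100, 10)              # subject-specific baseline (ms)
    for corr in ("Correction", "NoCorrection"):
        for cond in ("Interactive", "Cued"):
            # simulated pattern: large cost only in Interactive-Correction
            effect = 40 if (corr, cond) == ("Correction", "Interactive") else 0
            rows.append(dict(subject=subj, Correction=corr, Condition=cond,
                             synchrony=base + effect + rng.normal(0, 5)))
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="synchrony", subject="subject",
              within=["Correction", "Condition"]).fit()
print(res.anova_table)  # F and p for both main effects and the interaction
```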
Behavioural and Kinematics results
Synchrony
Because of violations of normality assumptions, data were log10-transformed.
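The transform step can be illustrated as follows; the simulated skewed values stand in for the real synchrony scores, and the Shapiro-Wilk check mirrors the normality test described in the Methods.

```python
# Sketch: normality check before/after a log10 transform on positively
# skewed data (simulated; a lognormal becomes normal after the log).
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(42)
sync = rng.lognormal(mean=4.0, sigma=0.8, size=22)  # skewed, ms-like values

p_raw = shapiro(sync).pvalue            # normality of the raw scores
p_log = shapiro(np.log10(sync)).pvalue  # normality after log10 transform
```

Note that log10 is monotonic, so the transform changes the distribution's shape without reordering the scores.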
The ANOVA on Synchrony showed a significant Correction x Condition x Interaction Type x Movement Type interaction (F(1, 21) = 63.32, p < 0.001), which explained all other significant main effects and lower-level interactions. Post-hoc tests showed that participants were less synchronous when performing Complementary compared to Imitative movements during power grasping in the Interactive condition, when the Avatar did not correct its movement trajectory (p = 0.02). Moreover, performance decreased during Correction compared to NoCorrection trials in the Interactive condition, when performing Complementary movements through Power grips (p < 0.001). Performance was worse in Correction trials, during Interactive interactions, when performing Complementary compared to Imitative precision grips (p = 0.004). Furthermore, participants were less synchronous in Correction trials, when performing Interactive interactions involving Complementary movements with precision compared to power grips (p = 0.001). Moreover, performance decreased during Correction trials in the Interactive compared to the Cued condition, when performing Imitative movements through Precision grips (p < 0.001). Finally, performance was worse in Correction trials, when performing Interactive compared to Cued interactions, during Complementary power grips (p < 0.001).
Reaction Times (RTs)
The ANOVA on Reaction Times showed a significant Correction x Interaction Type x Movement Type interaction (F(1, 21) = 41.77, p < 0.001), which explained all the other significant main effects and lower-level interactions. Post-hoc tests showed that participants were faster to start moving during Correction trials when performing Complementary actions through Power grips, and during Correction trials when performing Imitative actions through Precision grips, compared to all the other conditions (all ps < 0.001).
Movement Times (MTs)
The ANOVA on Movement Times showed a significant main effect of Condition (F(1,21) = 16.22, p < 0.001), indicating that coordinating in the Interactive condition resulted in slower movement times compared to the Cued condition. The ANOVA also showed a significant Condition x Correction interaction (F(1,21) = 19.75, p < 0.001). Post-hoc tests showed that movement times were slower in Interactive compared to Cued conditions (all ps < 0.001) and, in the Interactive condition, during Correction compared to NoCorrection trials (p = 0.02). Moreover, the ANOVA showed a significant Condition x Interaction Type interaction (F(1,21) = 7.78, p = 0.02). Post-hoc tests showed that movement times were slower in Interactive compared to Cued conditions (all ps < 0.001) and, in the Interactive condition, during Complementary compared to Imitative movements (p = 0.03). The ANOVA on Movement Times also showed a significant Interaction Type x Movement Type interaction (F(1,21) = 18.16, p < 0.001), explained by the higher-order Condition x Interaction Type x Movement Type interaction (F(1,21) = 21.38, p < 0.001). Post-hoc tests showed that movement times were slower during Correction trials when performing Complementary movements by means of Power compared to Precision grips (p = 0.034) and during Correction trials when performing Complementary compared to Imitative movements by means of Power grips (p < 0.001). Moreover, post-hoc tests showed slower movement times during Correction trials when performing Imitative compared to Complementary movements by means of Precision grips (p = 0.03).
Maximum Wrist Height (MaxH)
The ANOVA on Maximum Wrist Height showed a significant Condition x Interaction Type x Movement Type interaction (F(1,21) = 12.98, p = 0.001). Post-hoc tests indicated that when performing power grips during the Interactive condition, maximum wrist height was higher during Complementary compared to Imitative movements (p < 0.001). This result highlights the presence of visuo-motor interference between self-executed actions and those observed in the partner, as an index of automatic imitation. It mirrors previous studies (Sacheli et al., 2012; 2013; 2015a; 2015b; Candidi et al., 2015; Curioni et al., 2017) and emerged only in the condition in which predictions about the partner’s movements are needed. Visuo-motor interference effects were present only when performing power grips on the lower part of the bottle because, when performing precision grips on the upper part of the bottle, the maximum wrist height is always reached when touching the bottle and is thus impossible to modulate.
Maximum Grip Aperture (MaxAp)
The ANOVA on Maximum Grip Aperture showed a significant Correction x Condition x Movement Type interaction (F(1, 21) = 147.81, p < 0.001), which explained all the other significant Main effects and lower level interactions. Post-hoc tests showed larger maximum grip aperture during Power compared to Precision Grips (all ps < 0.001) and larger maximum grip aperture during Interactive compared to Cued interactions (all ps < 0.001), but not during NoCorrection trials, in Interactive compared to Cued interactions, by means of Power Grips (p = 1).
Accuracy
A Friedman ANOVA revealed significant cross-condition differences (χ²(df = 15, N = 22) = 46.24, p < 0.001). Follow-up Wilcoxon matched-pairs tests between Correction and NoCorrection conditions showed that the Correction condition was more difficult (i.e. less accurate) than the NoCorrection condition when performing Complementary-Precision grips in the Interactive condition (p = 0.002, corrected p threshold = 0.05/8 = 0.006).
EEG Analysis
Theta over FCz
The ANOVA on Theta synchronization over FCz showed a significant main effect of Correction (F(1, 20) = 49.609, p = 0.001) indicating a greater Theta for Correction trials, and a main effect of Condition (F(1, 20) = 93.846, p = 0.001) indicating a greater Theta for the Interactive interaction. The ANOVA also revealed a Correction x Condition x Interaction type interaction (F(1, 20) = 12.658, p = 0.00257). Post-hoc tests showed larger Theta activity for Correction trials in the Interactive interaction both for complementary and imitative movements compared to the other conditions (ps < 0.001), and greater Theta for Correction trials in the Cued interaction when subjects were performing a complementary movement compared to an imitative one (p = 0.001). The ANOVA also revealed a Condition x Interaction type x Movement type interaction (F(1, 20) = 6.4726, p = 0.018). Post-hoc tests indicated that all NoCorrection trials showed smaller Theta compared to all Correction trials (ps < 0.001).
Alpha over FCz
The ANOVA on Alpha synchronization over FCz showed a significant main effect of Correction (F(1, 20) = 16.460, p = 0.001) indicating a greater Alpha for Correction trials, a main effect of Condition (F(1, 20) = 36.398, p = 0.001) indicating a greater Alpha for the Interactive interaction, and a main effect of Movement type (F(1, 20) = 6.8686, p = 0.019) indicating a greater Alpha synchronization for power grasps compared to precision ones. The ANOVA also revealed a Correction x Condition interaction (F(1, 20) = 13.6520, p = 0.001). Post hoc tests indicated a larger Alpha for Correction trials in the Interactive condition compared to the other conditions (ps < 0.001) and a larger Alpha for NoCorrection trials in the Interactive condition compared to Cued ones in both Correction (p = 0.001) and NoCorrection (p = 0.001).
Beta over FCz
The ANOVA on Beta synchronization over FCz showed a significant main effect of Correction (F(1, 20) = 20.263, p = 0.001), indicating a greater Beta for Correction trials, and a main effect of Condition (F(1, 21) = 48.3221, p = 0.001), indicating a greater Beta for the Interactive interaction. The ANOVA also revealed a Correction x Condition interaction (F(1, 21) = 22.266, p = 0.001). Post-hoc tests indicated a larger Beta for Correction trials in the Interactive condition compared to the other conditions (ps < 0.00001), and a larger Beta for NoCorrection trials in the Interactive condition compared to Cued ones in both Correction (p = 0.001) and NoCorrection (p = 0.00001).
A greater Beta synchronization for Correction during the Interactive condition might be linked to the so-called Beta rebound, associated with the degree of error in a movement (Tan et al., 2014).
ANOVA on ROIs - Alpha Source – 10Hz
The mean power of both ROIs differed from zero in the Interactive-Correction condition (t(20) = 6.960, p = 0.001 for the Fronto-central ROI and t(20) = 2.8807, p = 0.001 for the r-LOTC), while only the r-LOTC differed from zero in the Cued-Correction condition (t(20) = 4.0490, p = 0.0001); the Fronto-central ROI did not (t(20) = 1.8462, p = 0.08). The 2 ROIs (Fronto-central/right-LOTC) x 2 Condition (Interactive-Correction/Cued-Correction) ANOVA showed no main effect of ROIs (F(1, 20) = 0.01200, p = 0.913) nor of Condition (F(1, 20) = 1.1564, p = 0.295). The interaction between ROIs and Condition was significant (F(1, 20) = 22.565, p = 0.001), showing that Fronto-central Alpha source power was significantly higher for Interactive-Correction trials compared to Fronto-central Alpha source power in the Cued-Correction trials (p = 0.001), and compared to the right-LOTC in the Interactive-Correction condition (p = 0.022).
Beta Source – 22Hz
The source analysis did not identify any clear focal source estimate.
Acknowledgments
The authors thank Sarah Boukarras, Daniele Esposito, Michele La Sala and Michela Fracassi for helping with the recordings of the EEG and kinematics data. We also thank Eilidh McCann for proofreading the article. The study was made possible by BrainTrends who provided technical support. BrainTrends did not have any financial or scientific influence on the present study. No conflicts of interest, financial or otherwise, are declared by the authors.