Abstract
Speech is often structurally and semantically ambiguous. Here we study how the human brain uses sentence context to resolve lexical ambiguity. Twenty-one participants listened to spoken narratives while magnetoencephalography (MEG) was recorded. Stories were annotated for grammatical word class (noun, verb, adjective) under two hypothesised sources of information: ‘bottom-up’, the most common word class given the word’s phonology; and ‘top-down’, the correct word class given the context. We trained a classifier on trials where the two hypotheses matched (about 90% of words) and tested it on trials where they mismatched. On mismatch trials, the classifier predicted the top-down word class labels, and its output anti-correlated with the bottom-up labels. Effects peaked ∼100 ms after word onset over mid-frontal MEG sensors. Phonetic information was encoded in parallel, though its effects peaked later (∼200 ms). Our results suggest that during continuous speech processing, lexical representations are rapidly built in a context-sensitive manner. We showcase multivariate analyses for teasing apart subtle representational distinctions from neural time series.
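The train-on-match, test-on-mismatch logic can be illustrated with a minimal simulation. This is a hedged sketch, not the study's actual pipeline: the sensor patterns, trial counts, noise level, and the choice of logistic regression are all hypothetical, and the simulated brain response is assumed to encode the contextual (top-down) class.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical sensor-space patterns for three word classes
# (0 = noun, 1 = verb, 2 = adjective) across 20 simulated MEG sensors.
n_sensors = 20
class_patterns = rng.normal(size=(3, n_sensors))

def simulate_trials(labels, noise=0.5):
    """Simulated single-trial responses: class pattern plus sensor noise."""
    return class_patterns[labels] + rng.normal(scale=noise,
                                               size=(len(labels), n_sensors))

# Congruent trials (~90% of words): bottom-up and top-down labels agree.
y_congruent = rng.integers(0, 3, size=300)
X_congruent = simulate_trials(y_congruent)

# Incongruent trials: the contextual (top-down) label differs from the
# phonology-based (bottom-up) label. Here we assume the neural response
# encodes the contextual class.
y_topdown = rng.integers(0, 3, size=60)
y_bottomup = (y_topdown + rng.integers(1, 3, size=60)) % 3  # always different
X_incongruent = simulate_trials(y_topdown)

# Train only on trials where the hypotheses match, then test on mismatches.
clf = LogisticRegression(max_iter=1000).fit(X_congruent, y_congruent)
pred = clf.predict(X_incongruent)

acc_topdown = (pred == y_topdown).mean()
acc_bottomup = (pred == y_bottomup).mean()
print(f"top-down accuracy: {acc_topdown:.2f}, "
      f"bottom-up accuracy: {acc_bottomup:.2f}")
```

Because the classifier is fit where the two label sets coincide, its predictions on mismatch trials reveal which hypothesis the (simulated) signal follows: accuracy against the top-down labels is high, while accuracy against the bottom-up labels falls at or below chance.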
- MVPA
- MEG
- language
- speech
- brain
- word class
- part of speech
- grammatical category
- decoding
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
We have edited this manuscript to include a number of additional analyses, which test the potential confound of spillover processing from previous words. In addition, we better motivate the work and resultant interpretations in light of previous literature.