Abstract
The emergence of AI systems that emulate the remarkable human capacity for language has raised fundamental questions about complex cognition in humans and machines. This lively debate has largely taken place, however, in the absence of specific empirical evidence about how the internal operations of artificial neural networks (ANNs) relate to processes in the human brain as listeners understand spoken language. To evaluate these parallels directly, we extracted multi-level measures of word-by-word sentence interpretation from ANNs and used Representational Similarity Analysis (RSA) to test them against the representational geometries of real-time brain activity recorded as human listeners heard the same sentences. These uniquely spatiotemporally specific comparisons reveal deep commonalities in the use of multi-dimensional probabilistic constraints to drive incremental interpretation in both humans and machines. At the same time, they demonstrate profound differences in the underlying functional architectures that implement this shared algorithmic alignment.
Competing Interest Statement
The authors have declared no competing interest.