TY  - JOUR
T1  - Unsupervised Learning of Temporal Features for Word Categorization in a Spiking Neural Network Model of the Auditory Brain
JF  - bioRxiv
DO  - 10.1101/059840
SP  - 059840
AU  - Higgins, Irina
AU  - Stringer, Simon
AU  - Schnupp, Jan
Y1  - 2016/01/01
UR  - http://biorxiv.org/content/early/2016/06/19/059840.abstract
N2  - The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-timing-dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker-independent identity of two naturally spoken word stimuli than does rate encoding, which ignores precise spike timings. We furthermore demonstrate that such informative PGs can develop only if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable. Author Summary: We still do not know how the auditory cortex encodes the identity of complex auditory objects, such as words, given the great variability in the raw auditory waveforms that correspond to different pronunciations of the same word by different speakers. Here we argue for temporal information encoding within neural cell assemblies as the basis for representing auditory objects. Unlike the more traditionally accepted rate encoding, temporal encoding takes into account the precise relative timing of spikes across a population of neurons. We support this hypothesis by building a neurophysiologically grounded spiking neural network model of the auditory brain with a biologically plausible learning mechanism. The model learns to differentiate between the naturally spoken digits “one” and “two”, pronounced by numerous speakers, in a speaker-independent manner through simple unsupervised exposure to the words. Our simulations demonstrate that temporal encoding contains significantly more information about the two words than rate encoding. We also show that such learning depends on the presence of stable firing patterns in the input to the cortical areas of the model that perform the learning.
ER  - 