RT Journal Article
SR Electronic
T1 Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 009936
DO 10.1101/009936
A1 Seyed-Mahdi Khaligh-Razavi
A1 Linda Henriksson
A1 Kendrick Kay
A1 Nikolaus Kriegeskorte
YR 2016
UL http://biorxiv.org/content/early/2016/10/25/009936.abstract
AB Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task-performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area’s representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (here, voxels), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network, and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.

Highlights
- We tested computational models of representations in ventral-stream visual areas.
- We compared representational dissimilarities with and without linear remixing of model features.
- Early visual areas were best explained by shallow models, and higher areas by deep models.
- Unsupervised shallow models performed better without linear remixing of their features.
- A supervised deep convolutional net performed best with linear feature remixing.
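
The abstract describes mixed RSA as fitting one weight per model feature and response channel (voxel) on a separate training set of stimuli, then comparing representational dissimilarities on held-out stimuli; fixed RSA compares the model's original feature space directly. The following is a minimal Python sketch of that idea using synthetic random data and plain ridge regression as the linear-mixing fit. The array sizes, regularisation value, and choice of regression are illustrative assumptions, not the authors' actual procedure.

    # Minimal sketch of fixed vs. mixed RSA on synthetic data.
    # Ridge regression stands in for the linear feature-mixing fit;
    # all sizes and the regularisation strength are hypothetical.
    import numpy as np
    from numpy.linalg import solve
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_train, n_test, n_feat, n_vox = 1000, 120, 200, 50  # hypothetical sizes

    # Model features and measured voxel responses (stimuli x features / voxels)
    F_train = rng.standard_normal((n_train, n_feat))
    F_test = rng.standard_normal((n_test, n_feat))
    V_train = rng.standard_normal((n_train, n_vox))
    V_test = rng.standard_normal((n_test, n_vox))

    def rdm(X):
        # Representational dissimilarity matrix (condensed form):
        # pairwise correlation distance between stimulus response patterns.
        return pdist(X, metric='correlation')

    # Fixed RSA: compare the model's original feature space directly to the brain RDM.
    fixed_rsa, _ = spearmanr(rdm(F_test), rdm(V_test))

    # Mixed RSA: fit one weight per model feature and voxel on training stimuli
    # (ridge regression), predict held-out voxel responses, then compare RDMs.
    lam = 1.0  # hypothetical regularisation strength
    W = solve(F_train.T @ F_train + lam * np.eye(n_feat), F_train.T @ V_train)
    V_pred = F_test @ W  # predicted response patterns for held-out stimuli
    mixed_rsa, _ = spearmanr(rdm(V_pred), rdm(V_test))

    print(f"fixed RSA: {fixed_rsa:.3f}   mixed RSA: {mixed_rsa:.3f}")

On random data both correlations hover around zero; the point of the sketch is only the structural difference between the two comparisons: fixed RSA tests the feature space as is, whereas mixed RSA first reweights the features per voxel on independent training stimuli.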