RT Journal Article
SR Electronic
T1 Reward-based training of recurrent neural networks for cognitive and value-based tasks
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 070375
DO 10.1101/070375
A1 H. Francis Song
A1 Guangyu R. Yang
A1 Xiao-Jing Wang
YR 2016
UL http://biorxiv.org/content/early/2016/08/19/070375.abstract
AB Trained neural network models, which exhibit many features observed in neural recordings from behaving animals and whose activity and connectivity can be fully analyzed, may provide insights into neural mechanisms. In contrast to commonly used methods for supervised learning from graded error signals, however, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when the optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we describe reward-based training of recurrent neural networks in which a value network guides learning by using the selected actions and activity of the policy network to predict future reward. We show that such models capture both behavioral and electrophysiological findings from well-known experimental paradigms. Our results provide a unified framework for investigating diverse cognitive and value-based computations, including a role for value representation that is essential for learning, but not executing, a task.
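
A minimal Python sketch of the actor-critic arrangement the abstract describes: a policy recurrent network samples definite actions, and a separate value network reads the policy network's activity and the selected action to predict future reward, which serves as a baseline for a reward-based (REINFORCE-style) update. All network sizes, variable names, the toy reward rule, and the restriction of the policy update to the readout weights are illustrative assumptions, not the authors' implementation.

    # Sketch only; not the code from Song, Yang & Wang (2016).
    import numpy as np

    rng = np.random.default_rng(0)

    N_IN, N_REC, N_ACT = 3, 50, 2      # input, recurrent, and action dimensions (illustrative)
    T = 20                             # trial length in time steps

    # Policy network parameters (hypothetical names)
    W_in  = rng.normal(0, 0.1, (N_REC, N_IN))
    W_rec = rng.normal(0, 1 / np.sqrt(N_REC), (N_REC, N_REC))
    W_out = rng.normal(0, 0.1, (N_ACT, N_REC))

    # Value network: reads policy activity plus the one-hot selected action
    W_val = rng.normal(0, 0.1, (1, N_REC + N_ACT))

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def run_trial(inputs):
        """Roll out one trial; return policy activity, sampled actions, and action probabilities."""
        r = np.zeros(N_REC)
        activity, actions, probs = [], [], []
        for t in range(T):
            r = np.tanh(W_rec @ r + W_in @ inputs[t])
            p = softmax(W_out @ r)
            a = rng.choice(N_ACT, p=p)     # definite action sampled from the policy
            activity.append(r.copy()); actions.append(a); probs.append(p)
        return activity, actions, probs

    # One illustrative update: the value network predicts the return, and the
    # policy is updated with (return - predicted value) as the advantage.
    inputs = rng.normal(0, 1, (T, N_IN))
    activity, actions, probs = run_trial(inputs)
    R = 1.0 if actions[-1] == 0 else 0.0   # toy reward rule (assumption)

    lr = 0.01
    for t in range(T):
        a_onehot = np.eye(N_ACT)[actions[t]]
        v_in = np.concatenate([activity[t], a_onehot])
        v = float(W_val @ v_in)            # predicted future reward (baseline)
        advantage = R - v
        # REINFORCE-style policy gradient, applied here only to the readout weights
        dlogp = a_onehot - probs[t]
        W_out += lr * advantage * np.outer(dlogp, activity[t])
        # Value network trained by regression toward the observed return
        W_val += lr * (R - v) * v_in[None, :]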