TY - JOUR
T1 - Reward-based training of recurrent neural networks for cognitive and value-based tasks
JF - bioRxiv
DO - 10.1101/070375
SP - 070375
AU - H. Francis Song
AU - Guangyu R. Yang
AU - Xiao-Jing Wang
Y1 - 2016/01/01
UR - http://biorxiv.org/content/early/2016/08/19/070375.abstract
N2 - Trained neural network models, which exhibit many features observed in neural recordings from behaving animals and whose activity and connectivity can be fully analyzed, may provide insights into neural mechanisms. In contrast to commonly used methods for supervised learning from graded error signals, however, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when the optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we describe reward-based training of recurrent neural networks in which a value network guides learning by using the selected actions and activity of the policy network to predict future reward. We show that such models capture both behavioral and electrophysiological findings from well-known experimental paradigms. Our results provide a unified framework for investigating diverse cognitive and value-based computations, including a role for value representation that is essential for learning, but not executing, a task.
ER -