%0 Journal Article
%A Stefano Palminteri
%A Germain Lefebvre
%A Emma J. Kilford
%A Sarah-Jayne Blakemore
%T Confirmation bias in human reinforcement learning: evidence from counterfactual feedback processing
%D 2016
%R 10.1101/090654
%J bioRxiv
%P 090654
%X Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two cohorts of participants on reinforcement learning tasks using a computational model that was adapted to test whether prediction error valence influences learning. Concerning factual learning, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice. By documenting these valence-induced learning biases, our findings demonstrate the presence of a confirmation bias in human reinforcement learning.
%U https://www.biorxiv.org/content/biorxiv/early/2016/11/30/090654.full.pdf