Adler, W. T., & Ma, W. J. (2016). The computations underlying human confidence reports are probabilistic, but not Bayesian. bioRxiv 093203. doi:10.1101/093203. http://biorxiv.org/content/early/2016/12/11/093203.abstract

Abstract: Humans can meaningfully rate their confidence in a perceptual or cognitive decision. It is widely believed that these reports reflect the estimated probability that the decision is correct, but, upon closer look, this belief is a hypothesis rather than an established fact. In a pair of perceptual categorization tasks, we tested whether explicit confidence reports reflect the Bayesian posterior probability of being correct. This Bayesian hypothesis predicts that subjects take sensory uncertainty into account in a specific way in the computation of confidence ratings. We find that confidence reports are probabilistic: subjects take sensory uncertainty into account on a trial-to-trial basis. However, they do not do so in the way predicted by the Bayesian hypothesis. Instead, heuristic probabilistic models provide the best fit to human confidence ratings. This conclusion is robust to changes in the uncertainty manipulation, task, response modality, additional flexibility in the Bayesian model, and model comparison metric. To better understand the origins of the heuristic computation, we trained feedforward neural networks consisting of generic units with error feedback, mapped the output of the trained networks to confidence ratings, and fitted our behavioral models to the resulting synthetic datasets. We find that the synthetic confidence ratings are also best fit by heuristic probabilistic models. This suggests that implementational constraints cause explicit confidence reports to deviate from being Bayesian.
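A minimal sketch of the Bayesian hypothesis named in the abstract, assuming a simple two-category Gaussian task (the means, priors, and function name below are illustrative assumptions, not the paper's actual task parameters): Bayesian confidence is the posterior probability that the chosen category is correct, computed from the noisy measurement together with the trial's sensory noise level.

    import numpy as np
    from scipy.stats import norm

    # Illustrative sketch, not the paper's exact model: two categories
    # C in {-1, +1} generate stimuli s ~ N(C*mu_c, sigma_c^2); the
    # observer receives a noisy measurement x ~ N(s, sigma^2).
    # Bayesian confidence is the posterior probability of the chosen
    # (MAP) category, which depends on the trial's noise level sigma.
    def bayesian_confidence(x, sigma, mu_c=1.0, sigma_c=1.0):
        # Marginal likelihood of x under each category:
        # N(+/-mu_c, sigma_c^2 + sigma^2), with equal priors assumed.
        s_total = np.sqrt(sigma_c**2 + sigma**2)
        like_pos = norm.pdf(x, loc=mu_c, scale=s_total)
        like_neg = norm.pdf(x, loc=-mu_c, scale=s_total)
        post_pos = like_pos / (like_pos + like_neg)   # p(C=+1 | x, sigma)
        return max(post_pos, 1.0 - post_pos)          # posterior prob. of chosen category

    # Same measurement at two noise levels: Bayesian confidence
    # decreases as sensory uncertainty grows.
    print(bayesian_confidence(0.8, sigma=0.5))   # low noise -> higher confidence
    print(bayesian_confidence(0.8, sigma=2.0))   # high noise -> lower confidence

The heuristic probabilistic models favored by the paper likewise use the trial-to-trial noise level, but combine it with the measurement in ways that deviate from this exact posterior computation.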