Humans can meaningfully rate their confidence in a perceptual or cognitive decision. It is widely believed that these reports reflect the estimated probability that the decision is correct, but, on closer inspection, this belief is a hypothesis rather than an established fact. In a pair of perceptual categorization tasks, we tested whether explicit confidence reports reflect the Bayesian posterior probability of being correct. This Bayesian hypothesis predicts that subjects take sensory uncertainty into account in a specific way when computing confidence ratings. We find that confidence reports are probabilistic: subjects take sensory uncertainty into account on a trial-to-trial basis. However, they do not do so in the way predicted by the Bayesian hypothesis. Instead, heuristic probabilistic models provide the best fit to human confidence ratings. This conclusion is robust to changes in the uncertainty manipulation, the task, the response modality, additional flexibility in the Bayesian model, and the model comparison metric. To better understand the origins of the heuristic computation, we trained feedforward neural networks consisting of generic units with error feedback, mapped the output of the trained networks to confidence ratings, and fitted our behavioral models to the resulting synthetic datasets. We find that the synthetic confidence ratings are also best fit by heuristic probabilistic models. This suggests that implementational constraints cause explicit confidence reports to deviate from being Bayesian.
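As a toy illustration of the Bayesian hypothesis (this is a minimal sketch under assumed settings, not the paper's actual tasks or model), Bayesian confidence in a two-category task with Gaussian sensory noise is the posterior probability of the chosen category, and it depends on the trial's sensory uncertainty:

```python
import math

def bayesian_confidence(x, sigma, mu_a=-1.0, mu_b=1.0):
    """Posterior probability that the more likely category is correct,
    given a noisy measurement x with Gaussian noise s.d. sigma.
    Category means mu_a, mu_b and equal priors are illustrative
    assumptions, not the experimental values used in the paper."""
    # Likelihood of the measurement under each category
    like_a = math.exp(-(x - mu_a) ** 2 / (2 * sigma ** 2))
    like_b = math.exp(-(x - mu_b) ** 2 / (2 * sigma ** 2))
    # Posterior of category B (equal priors cancel)
    p_b = like_b / (like_a + like_b)
    # Confidence = posterior probability of whichever category is chosen
    return max(p_b, 1 - p_b)

# For the same measurement, higher sensory uncertainty (larger sigma)
# lowers Bayesian confidence:
print(bayesian_confidence(0.5, sigma=0.5))
print(bayesian_confidence(0.5, sigma=2.0))
```

The Bayesian prediction tested here is precisely this dependence of confidence on sigma; the heuristic models that fit better use sensory uncertainty, but combine it with the measurement in a different functional form.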