Humans can meaningfully report their confidence in a perceptual or cognitive decision. It is widely believed that these reports reflect the estimated probability that the decision is correct, but this belief is a hypothesis rather than an established fact. In a pair of perceptual categorization tasks, we tested whether explicit confidence reports reflect the Bayesian posterior probability of being correct, which would require subjects to take sensory uncertainty into account in a specific way. We find that subjects do take sensory uncertainty into account, but in a way that is inconsistent with the Bayesian hypothesis. Instead, heuristic models provide the best fit to confidence reports. This conclusion is robust to changes in the uncertainty manipulation, task, and response modality, to added flexibility in the Bayesian model, and to the choice of model comparison metric. Finally, we find that generic neural networks trained with error feedback produce confidence reports that are best fit by the same heuristic probabilistic models, suggesting that implementational constraints cause explicit confidence reports to deviate from being Bayesian.
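To make the Bayesian hypothesis concrete, the sketch below shows what "taking sensory uncertainty into account in a specific way" would mean for an idealized observer. All specifics here are illustrative assumptions, not the paper's actual task parameters: two Gaussian stimulus categories, a flat category prior, and Gaussian sensory noise of standard deviation `sigma`. A Bayesian observer's confidence is the posterior probability that the chosen category is correct, which depends jointly on the measurement and on `sigma`.

```python
import math

def bayesian_confidence(x, sigma, mu1=-1.0, mu2=1.0, sigma_cat=1.0):
    """Posterior probability that the chosen category is correct.

    Hypothetical illustration (not the paper's model): two Gaussian
    stimulus categories N(mu1, sigma_cat^2) and N(mu2, sigma_cat^2),
    a flat prior, and a noisy measurement x with noise SD sigma.
    Marginalizing over the stimulus, p(x | C) is Gaussian with mean
    mu_C and variance sigma^2 + sigma_cat^2.
    """
    var = sigma**2 + sigma_cat**2
    l1 = math.exp(-(x - mu1)**2 / (2 * var))  # likelihood of category 1
    l2 = math.exp(-(x - mu2)**2 / (2 * var))  # likelihood of category 2
    p1 = l1 / (l1 + l2)                       # posterior of category 1
    return max(p1, 1 - p1)                    # confidence in the chosen category

# The Bayesian signature: for the same measurement, confidence falls
# as sensory uncertainty rises.
print(bayesian_confidence(0.5, sigma=0.5))
print(bayesian_confidence(0.5, sigma=2.0))
```

A heuristic observer, by contrast, might map the measurement to confidence while using `sigma` in some simpler, non-posterior way; the models compared in the paper formalize such alternatives.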