Confidence in a decision is defined statistically as the probability that the decision is correct. Humans, however, tend both to under- and over-estimate their accuracy (and hence their confidence), as has been shown in numerous experiments. Here, we show that this apparent irrationality vanishes once participants' prior biases, measured in a separate task, are taken into account. Using a wagering experiment, we show that modeling subjects' choices allows individuals to be classified along an optimism-pessimism bias that fully explains, from first principles, the differences in their later confidence reports. Our parameter-free confidence model predicts two counterintuitive patterns for individuals with different prior beliefs: pessimists should report higher confidence than optimists, and their confidence should depend differently on task difficulty. These findings show how apparently irrational confidence traits can be understood simply as differences in prior expectations. Furthermore, we show that reporting confidence itself affects subsequent choices, increasing the tendency to explore when confidence is low, akin to a disconfirmation bias.