How are perceptual decisions made? The answer to this seemingly simple question requires that we specify the nature of the perceptual representations on which decisions are based. Some traditional models postulate that the perceptual representation consists of a simple point estimate of the stimulus. Such models do not allow the estimation of sensory uncertainty. In contrast, recent models have proposed that the perceptual representation involves a full probability distribution over the possible stimulus values. Such models allow a precise estimation of sensory uncertainty. These two possibilities -- point estimates vs. full distributions -- are often seen as the only alternatives, but they are not. Here I present five possible perceptual representation schemes that allow the extraction of different levels of sensory uncertainty. I explain where popular models fall within the five schemes and explore the relevant empirical evidence and theoretical arguments. The overwhelming evidence is at odds with both point estimates and full distributions. This conclusion stands in stark contrast with currently popular models in computational neuroscience that are built on such distributions. Instead, the most likely scheme appears to be one in which the perceptual representation features a point estimate coupled with a strength-of-evidence value.
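The contrast between the representation schemes above can be made concrete with a minimal, purely illustrative sketch. The stimulus axis, the Gaussian form of the distribution, and the specific numbers below are assumptions chosen for illustration, not claims from the abstract; the sketch only shows what information each scheme makes available to a downstream decision process.

```python
import numpy as np

# Hypothetical stimulus axis: orientation in degrees (assumed for illustration).
stimulus_values = np.linspace(0, 180, 181)

# Scheme 1: a bare point estimate -- no sensory uncertainty can be read out.
point_estimate = 45.0

# Scheme 2: a full probability distribution over possible stimulus values,
# here a discretized Gaussian centered on 45 degrees (assumed shape).
sigma = 8.0
full_distribution = np.exp(-(stimulus_values - 45.0) ** 2 / (2 * sigma ** 2))
full_distribution /= full_distribution.sum()  # normalize to a probability mass

# A full distribution supports precise uncertainty estimates, e.g. the variance:
variance = np.sum(full_distribution * (stimulus_values - 45.0) ** 2)

# The scheme favored in the abstract: a point estimate paired with a single
# scalar strength-of-evidence value, rather than a full distribution.
point_plus_evidence = {"estimate": 45.0, "strength_of_evidence": 0.8}
```

The sketch makes the informational difference explicit: the full distribution yields a variance close to sigma squared, while the point-plus-evidence scheme carries only one summary scalar alongside the estimate.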