Abstract
Attempts to characterize the limits of human working memory have differed over whether internal representations are discrete or continuous, with models of each type competing to best capture the errors observers make in delayed reproduction of elementary stimulus features. Here we show that discreteness only weakly discriminates between models; the critical distinction is instead between deterministic (fixed) and stochastic (randomly varying) limits, and only the latter is compatible with observed human performance and with the underlying biological system. Reconceptualizing existing models in terms of sampling reveals strong commonalities between seemingly opposing accounts: adding stochasticity to a discrete model brings it into closer correspondence with theories of neural coding and puts its quality of fit on a par with continuous models, but it also eliminates the stability of, and dependencies between, items that a fixed set of “slots” would imply. A probabilistic limit on the number of items successfully retrieved is an emergent property of stochastic sampling, with no explicit mechanism required to enforce it. These findings resolve discrepancies between previous accounts and establish a unified computational framework for further investigation of working memory.
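To make the central claim concrete, the following is a minimal simulation sketch (not the paper's actual model) of how a probabilistic item limit can emerge from stochastic sampling: a fixed average budget of memory "samples" is allocated to items at random, and an item that happens to receive zero samples cannot be retrieved. The Poisson allocation, the function name `simulate`, and all parameter values are illustrative assumptions, not taken from the source.

```python
# Minimal sketch of stochastic sampling (illustrative assumptions only):
# each item independently receives a Poisson-distributed number of memory
# samples with mean (total budget / set size). No explicit slot mechanism
# or hard capacity is imposed anywhere in the code.
import numpy as np

rng = np.random.default_rng(0)

def simulate(set_size, mean_total_samples=8.0, n_trials=100_000):
    """Return, for each simulated trial, how many of the set_size items
    received at least one sample (i.e., are retrievable; in a full model,
    items with zero samples would instead produce random guesses)."""
    rate = mean_total_samples / set_size
    samples = rng.poisson(rate, size=(n_trials, set_size))
    return (samples > 0).sum(axis=1)  # items with >= 1 sample per trial

for n in (1, 2, 4, 8):
    retrieved = simulate(n)
    print(f"set size {n}: mean items retrieved = {retrieved.mean():.2f}, "
          f"P(all retrieved) = {(retrieved == n).mean():.3f}")
```

Under this sketch, the number of successfully retrieved items fluctuates from trial to trial and its mean grows sublinearly with set size, so a capacity-like limit appears in the averages even though no fixed quota exists; in a fuller model, retrieval precision for stored items would additionally scale with the number of samples each receives, which is one way discrete and continuous accounts converge.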