Habits form a crucial component of behavior. In recent years, key computational models have conceptualized habits as behaviors arising from model-free reinforcement learning (RL) mechanisms, which typically represent the expected value associated with possible outcomes of each action before one of those actions is chosen. Traditionally, however, habits are understood as arising from mechanisms that are independent of outcomes. Here, we develop a computational model instantiating this traditional view, in which habits are acquired through the direct strengthening of recently taken actions, independent of outcome. We demonstrate how this model accounts for key behavioral manifestations of habits, including outcome devaluation, contingency degradation, and perseverative choice in probabilistic environments. We suggest that mapping habitual behaviors onto value-free mechanisms provides a parsimonious account of existing behavioral and neural data. This mapping may provide a new foundation for building robust and comprehensive models of the interaction of habits with other, more goal-directed types of behaviors, and help to better guide research into the neural mechanisms underlying control of instrumental behaviors more generally.
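The core mechanism described above — strengthening recently taken actions without reference to their outcomes — can be illustrated with a minimal sketch. The update rule, learning rate, and softmax choice rule below are illustrative assumptions, not the paper's exact formulation: each action carries a habit strength that is nudged toward 1 when chosen and decays toward 0 otherwise, and no reward signal enters the update.

```python
import math

def softmax(vals, beta=3.0):
    # Convert habit strengths into choice probabilities.
    exps = [math.exp(beta * v) for v in vals]
    total = sum(exps)
    return [e / total for e in exps]

def update_habits(habits, chosen, alpha=0.1):
    # Value-free update: strengthen the chosen action toward 1 and
    # decay the others toward 0. Note there is no reward or outcome
    # term anywhere in this rule (illustrative assumption).
    return [h + alpha * ((1.0 if i == chosen else 0.0) - h)
            for i, h in enumerate(habits)]

# Repeated selection of action 0 makes it habitual,
# regardless of what outcomes it produces.
habits = [0.0, 0.0]
for _ in range(50):
    habits = update_habits(habits, chosen=0)
probs = softmax(habits)
```

Because the update ignores outcomes entirely, a habit built this way persists even if the outcome is later devalued — the behavioral signature the abstract highlights.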