PT - JOURNAL ARTICLE
AU - Eric Schulz
AU - Emmanouil Konstantinidis
AU - Maarten Speekenbrink
TI - Putting bandits into context: How function learning supports decision making
AID - 10.1101/081091
DP - 2016 Jan 01
TA - bioRxiv
PG - 081091
4099 - http://biorxiv.org/content/early/2016/10/14/081091.short
4100 - http://biorxiv.org/content/early/2016/10/14/081091.full
AB - We introduce the contextual multi-armed bandit task as a framework to investigate learning and decision making in uncertain environments. In this novel paradigm, participants repeatedly choose between multiple options in order to maximise their rewards. The options are described by a number of contextual features which are predictive of the rewards through initially unknown functions. From their experience with choosing options and observing the consequences of their decisions, participants can learn about the functional relation between contexts and rewards and improve their decision strategy over time. In three experiments, we find that participants’ behaviour is surprisingly adaptive to the learning environment. We model participants’ behaviour by context-blind (mean-tracking, Kalman filter) and contextual (Gaussian process regression parametrized with different kernels) learning approaches combined with different choice strategies. While participants generally learn about the context-reward functions, they tend to rely on a local learning strategy which generalizes previous experience only to highly similar instances. In a relatively simple task with binary features, they mostly combine this local learning with an “expected improvement” decision strategy which focuses on alternatives that are expected to improve the most upon a current favourite option. In a task with continuous features that are linearly related to the rewards, they combine local learning with an “upper confidence bound” decision strategy that more explicitly balances exploration and exploitation. Finally, in a difficult learning environment where the relation between features and rewards is non-linear, most participants learn locally as before, whereas others regress to more context-blind strategies.
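
The abstract describes pairing a contextual learner (e.g. Gaussian process regression) with a choice strategy such as an upper confidence bound rule. The sketch below is a minimal illustration of that combination on a simulated task; the task setup, RBF kernel, noise level, and exploration weight `beta` are assumptions made for illustration, not the authors' experimental design or fitted model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical task: on each trial, each arm comes with a context vector,
# and reward depends on that context through an unknown function
# (here assumed linear, with a little observation noise).
n_arms, n_features, n_trials = 4, 2, 50
true_weights = rng.normal(size=n_features)
reward = lambda x: x @ true_weights + rng.normal(scale=0.1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1)
X_hist, y_hist = [], []
beta = 2.0  # exploration weight for the UCB rule (assumed value)

for t in range(n_trials):
    contexts = rng.uniform(-1, 1, size=(n_arms, n_features))
    if X_hist:
        # Learn the context-reward function from past choices and outcomes.
        gp.fit(np.array(X_hist), np.array(y_hist))
        mu, sigma = gp.predict(contexts, return_std=True)
    else:
        mu, sigma = np.zeros(n_arms), np.ones(n_arms)
    # Upper confidence bound: posterior mean plus beta times posterior sd,
    # trading off exploitation (mu) against exploration (sigma).
    choice = int(np.argmax(mu + beta * sigma))
    X_hist.append(contexts[choice])
    y_hist.append(reward(contexts[choice]))

print(f"mean reward over {n_trials} trials: {np.mean(y_hist):.3f}")
```

Swapping the acquisition rule (e.g. expected improvement instead of UCB) or the learner (e.g. a context-blind mean tracker) yields the other model variants the abstract mentions, but the specific kernels, priors, and parameter values the authors fit are given in the paper itself, not here.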