TY  - JOUR
T1  - Change-Point Detection without Needing to Detect Change-Points?
JF  - bioRxiv
DO  - 10.1101/077719
SP  - 077719
AU  - Ryali, Chaitanya K
AU  - Yu, Angela J
Y1  - 2016/01/01
UR  - http://biorxiv.org/content/early/2016/09/27/077719.abstract
N2  - Online change-point detection is an important and much-studied problem in both neuroscience and machine learning. While most theoretical analysis has focused on this problem in the context of real-valued data, relatively little attention has been paid to the specific case in which the observations are categorical (e.g. binary), even though the latter case is common in both neuroscience experiments and some engineering applications. In this paper, we focus on the latter scenario and demonstrate that, due to the information poverty of categorical data, near-Bayes-optimal data prediction can be achieved using a simple linear-exponential filter for binary data, or, more generally, m − 1 separate linear-exponential filters for m-ary data. The computations are dramatically simpler than exact Bayesian inference, requiring only O(m) computation per observation instead of O(e^{km}), where k depends on the representation. We demonstrate how the parameters of this approximation depend on the parameters of the generative model, and characterize the parameter regime in which the approximation can be expected to perform well, as well as how its performance degrades away from that regime. Interestingly, our results imply that, under appropriate parameter settings, change-point detection can be done near-optimally without the explicit computation of the probability that a change has taken place. Paradoxically, while promptly detecting a change-point based on sequentially presented categorical data is difficult, making near-Bayes-optimal predictions about future data turns out to be quite simple. This work demonstrates that greater attention needs to be paid, in the context of online change-point detection, to the theoretical distinction between the problem of predicting future data and that of deciding that a change has taken place. With respect to neuroscience, our approximate algorithm is equivalent to the dynamics of an appropriately tuned leaky integrating neuron with constant gain, or to a particular variant of the delta learning rule with a fixed learning rate, with obvious implications for the neuroscientific investigation of human and animal change-point detection.
ER  - 