The hippocampus can store spatial representations, or maps, which are recalled each time a subject is placed in the corresponding environment. We consider the problem of decoding the recalled maps as a function of time from multi-cellular recordings. We introduce a graphical-model-based decoder, which accounts for the pairwise correlations between the spiking activities of neurons and does not require any positional information, i.e. any knowledge about place fields. We first show, on recordings of hippocampal activity under constant environmental conditions, that our decoder efficiently decodes maps in CA3 and outperforms existing methods in CA1, where maps are much less orthogonal. We then apply our decoder to data from teleportation experiments, in which instantaneous switches between environmental conditions trigger the recall of the corresponding maps, and test the sensitivity of our approach on the transition dynamics between the respective memory states (maps). We find that the rate of spontaneous state shifts (flickering) after a teleportation event is increased not only within the first few seconds, as previously reported, but that the network also shows a higher level of instability over much longer (> 1 min) intervals, both in CA3 and in CA1. In addition, we introduce an efficient Bayesian decoder of the rat's full trajectory over time, and find that the animal's location can be accurately predicted at all times, even during flickering events. Precise information about the animal's position is thus always present in the neural activity, irrespective of the dynamical shifts in the recalled maps.
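To give a concrete sense of how a pairwise-correlation (graphical-model) decoder of the recalled map could operate, the following is a minimal sketch, not the paper's actual implementation. It assumes that for each map an Ising-like model (fields `h`, couplings `J`, and a precomputed log-partition constant `logZ`) has been fitted to reference activity, and that each time bin is summarized as a binary population vector; the map with the highest log-likelihood is then selected. All variable names and the fitting procedure are hypothetical.

```python
import numpy as np

def map_log_likelihood(s, h, J, logZ):
    # Ising-like log-likelihood of a binary activity vector s under one map's model:
    #   log P(s) = h . s + (1/2) s . J . s - logZ
    # h: fields (one per neuron), J: symmetric pairwise couplings, logZ: log-partition.
    return s @ h + 0.5 * (s @ J @ s) - logZ

def decode_map(s, models):
    # models: list of (h, J, logZ) tuples, one per candidate map (fitted beforehand).
    # Returns the index of the map that best explains the population vector s.
    scores = [map_log_likelihood(s, h, J, logZ) for (h, J, logZ) in models]
    return int(np.argmax(scores))

# Toy usage: two hypothetical maps with opposite fields and no couplings.
N = 10
h0 = np.r_[np.ones(5), -np.ones(5)]   # map 0 favors the first half of the neurons
h1 = -h0                              # map 1 favors the second half
J0 = np.zeros((N, N))
models = [(h0, J0, 0.0), (h1, J0, 0.0)]

s = np.r_[np.ones(5), np.zeros(5)]    # population vector active on the first half
print(decode_map(s, models))          # -> 0
```

Note that decoding this way requires no positional information: only the statistics of the population activity under each map enter the likelihoods.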