Abstract
Both spatial and directional information are necessary for navigation. Rodent hippocampal neurons show spatial selectivity in all environments1, but directional tuning only on linear paths2–8. The sensory mechanisms underlying directionality are unknown, although vestibular and visual cues are thought to be crucial. Yet hippocampal neurons are believed to show no angular modulation during two-dimensional random foraging, despite the presence of vestibular and visual cues6,7. Additionally, specific aspects of visual cues have not been directly linked to hippocampal responses in rodents. To resolve these issues we manipulated vestibular and visual cues in a series of experiments. We first measured hippocampal activity during random foraging in the real world (RW), where neurons’ firing exhibited significant modulation by head-direction. In fact, the fraction of modulated neurons was comparable to that in the head-direction system9. These findings are contrary to commonly held beliefs about hippocampal directionality6,7. To isolate the contribution of visual cues, we measured neural responses in a visually similar virtual reality (VR), where the range of vestibular inputs is minimized5,10,11. Significant directional modulation was not only found in VR, but was comparable to that in RW. Several additional experiments revealed that changes in the angular information contained in the visual cues induced corresponding changes in hippocampal head-directional modulation. Remarkably, for head-directionally modulated neurons, the ensemble activity was biased towards the sole visual cue. These results demonstrate that robust vestibular cues are not required for hippocampal directional selectivity, and that visual cues are not only sufficient but also play a causal role in driving hippocampal responses.
Introduction
Hippocampal spatial selectivity has been well established and the underlying mechanisms extensively studied1,12. However, both the existence and mechanisms of hippocampal directional selectivity are debated. During random foraging in two-dimensional arenas, barring a few conflicting reports3,6,13, the consensus is that rodent hippocampal neurons do not show significant angular selectivity6,7. In contrast, hippocampal neurons exhibit strong directional selectivity on linear paths3–5,8. The reason for this disparity and the sensory mechanisms of directionality are unclear, although visual and vestibular cues have been proposed as likely candidates3,5,14. In addition, internal mechanisms also contribute to hippocampal activity11,15–17.
Visual cues strongly influence the spatial firing properties of hippocampal neurons2,18. Further, comparable levels of directionality exist on linear tracks in RW and in VR5, where the range of vestibular inputs is minimal, suggesting that visual cues also influence directionality in one dimension. In addition, selectivity to the visual cue towards which the animal’s head is facing, referred to as spatial-view, has been reported in humans19, primates20,21 and bats22. However, responses to specific features of visual cues have not been observed in rodents, leading to the notion that in these animals visual cues merely provide a context for hippocampal activity.
In parallel, vestibular inputs are crucial to the head-direction system, which is thought to provide directional information to the hippocampus. Consistently, vestibular lesions disrupt hippocampal spatial selectivity23, although lesions in the head-direction system do not24. However, if instantaneous vestibular cues contributed to hippocampal directionality, there should be greater directionality in two-dimensional RW tasks, where the range of vestibular inputs is greater than in one-dimensional RW tasks; but the opposite is true. Some studies have attributed directionality in two dimensions to vestibular-derived self-motion information3,14,25, but no study, to our knowledge, has directly measured hippocampal head-directional modulation when vestibular-based signals are impaired.
Thus, the mechanisms governing hippocampal directional activity in rodents are unclear. We hypothesize that visual cues directly influence the activity of hippocampal neurons to generate angular tuning whereas vestibular cues are not required for directionality.
Results
To test these hypotheses we performed a series of experiments and analyses. We first quantified hippocampal spatial and head-directional modulation from 1066 active (defined as cells with a minimum mean firing rate of 0.2Hz and at least 100 spikes) dorsal CA1 pyramidal neurons (which were part of a previous study of hippocampal spatial selectivity11). Rats randomly foraged for rewards on a two-dimensional platform in an RW environment with rich distal visual cues, henceforth referred to as RWrich (Fig. 1a).
A common technique for quantifying head-directional modulation is to divide the number of spikes in each direction bin by the total time spent in that bin (Fig. 1b)9. However, when neurons have spatially tuned responses, as is the case for hippocampal neurons in RW, this method provides incorrect estimates of angular tuning6. For example, for a neuron with a place field at the edge of the maze, this method would yield artificially large head-directional tuning due to non-uniform sampling of head angles within the place field (Fig. 1b, Extended Data Fig. 1)6. Various methods have been developed to overcome this confound3,25,26. Here, we adopted the well-established generalized linear model (GLM) approach (see Methods)15,27–29 which has several advantages. First, it provides an unbiased estimate of the simultaneous and independent contribution of spatial and head-directional modulation. Second, unlike other methods, head-directional modulation obtained with the GLM method is uninfluenced by behavioral biases within the place field as verified using surrogate data with predetermined levels of spatial and angular modulation (see Methods, Extended Data Fig. 1i). Finally, this method provides an estimate of the fine structure of the respective tuning curves.
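For concreteness, the occupancy-normalized binning estimate described above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the analysis code used in the study; the function and variable names are ours.

```python
import numpy as np

def binned_hd_tuning(spike_angles, occupancy_angles, dt, n_bins=36):
    """Occupancy-normalized head-direction tuning curve (the 'binning' method).

    spike_angles: head-direction (radians, in [0, 2*pi)) at each spike time.
    occupancy_angles: head-direction at every behavioral sample.
    dt: duration of one behavioral sample, in seconds.
    """
    edges = np.linspace(0, 2 * np.pi, n_bins + 1)
    spike_counts, _ = np.histogram(spike_angles, bins=edges)
    occupancy, _ = np.histogram(occupancy_angles, bins=edges)
    occupancy_s = occupancy * dt  # seconds spent facing each direction
    # Rate = spikes per second in each direction bin; unvisited bins -> NaN
    rate = np.full(n_bins, np.nan)
    visited = occupancy_s > 0
    rate[visited] = spike_counts[visited] / occupancy_s[visited]
    return rate, edges
```

Because spike counts are normalized only by directional occupancy, a purely spatial cell whose place field is visited from a biased set of head angles produces a spuriously tuned curve — precisely the confound the GLM approach is designed to avoid.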
This method revealed a surprising finding: many neurons exhibited clear modulation by the rat’s head-direction in RWrich (Fig. 1c,d,e, Extended Data Fig. 2a, see Methods). Some neurons fired maximally for only one head-direction and minimally elsewhere (Fig. 1c,d), while others showed a multimodal response (Fig. 1e). The statistical significance of head-directional modulation was quantified by the sparsity of angular rate maps, z-scored with respect to control data (see Methods). Angular sparsity thus defined was significantly (p<0.05) greater than chance (>2z) for 26% of neurons in RWrich. In fact, the fraction of hippocampal neurons with significant angular tuning was comparable to that in many parts of the head-direction system, although the angular tuning curves (full width at half maximum 103.10±3.50°) were wider than those of head-direction cells9,30.
This raises an important question: which sensory inputs could generate the head-directional modulation in our data? Two likely candidates are the visual and vestibular modalities. To dissociate the two, we measured the activity of 719 active dorsal hippocampal CA1 pyramidal neurons11 during the same random foraging task in a two-dimensional VR environment (VRrich). Here, the distal visual cues were identical to those in RWrich, but the range of vestibular cues was minimized due to body fixation. Despite impaired spatial selectivity11, many neurons showed clear modulation by the direction of the rat’s head with respect to the distal visual cues, henceforth referred to as “head-direction” (Fig. 2a,b,c, Extended Data Fig. 2b).
Curiously, across the ensemble of neurons there was no substantial difference in head-directional modulation between the two worlds, as quantified by z-scored angular sparsity (Fig. 2d). Further, the fraction of neurons showing significant directional modulation in VRrich (24%) was similar to that in RWrich (Fig. 2e). Neurons in VRrich also had multimodal responses, as in RWrich, unlike neurons in the head-direction network, which have unimodal responses31. The multimodality was greater in VRrich than in RWrich (Fig. 2f), which could account for the slightly lower z-scored mean vector length in the former (Extended Data Fig. 3). This observation motivates the use of z-scored sparsity as a measure of angular selectivity. Additionally, the angular tuning curves in VRrich (86.61±4.19°) were significantly (16%, p=4.3×10−4) sharper than in RWrich.
We then quantified the spatial modulation of neural responses in both RWrich and VRrich using the rate maps obtained from the GLM method. We found significant spatial selectivity in RWrich but not in VRrich (Fig. 2g), consistent with previous results obtained using the binning method11. Although head-directional modulation was comparable between the two worlds, spatial modulation was not, which suggests a decoupling of the mechanisms of spatial and directional tuning. Consistently, the presence or absence of head-directional modulation had no effect on the percentage of spatially modulated neurons in both RWrich and VRrich (Extended Data Fig. 4a).
Interestingly, directionally tuned neurons had greater mean firing rates than the untuned neurons in VRrich (Extended Data Fig. 4b). This difference was not present in RWrich, which might be due to the presence of multisensory spatially informative cues11.
These results show that rodent hippocampal neurons in RW indeed show significant head-directional modulation during two-dimensional random foraging, contrary to previous reports. In addition, the observation that head-directional modulation remained intact in VR—where the range of vestibular cues is minimized—suggests that vestibular cues are not required for hippocampal head-directional modulation.
What other mechanism could generate angular modulation? Either it is internally generated15–17 or driven by specific visual cues20,21. To disambiguate these possibilities, we generated a virtual world where distal visual cues were entirely eliminated (VRblank) (Fig. 3a, see Methods). The circular platform in the virtual environment, which was the only visual cue present, provided optic flow information but had no spatial or angular information. The rats’ behavior in VRblank was comparable to that in VRrich with visually distinct walls (Extended Data Fig. 5a). Hippocampal neurons did not show clear head-directional modulation in this case (Fig. 3b,i, Extended Data Fig. 5b) and the distribution of z-scored angular sparsity was not significantly different from zero (p=0.7, Wilcoxon signed-rank test).
The absence of head-directional modulation in VRblank may result from a lack of anchoring visual cues32 or optic flow created by the distal visual cues which could potentially be integrated to generate directional tuning. To address this, we performed another experiment where all the virtual walls had the same visual texture with high contrast and spatial frequency (VRsymmetric), thus providing strong optic flow information but no angular information (Fig. 3c, see Methods). The virtual platform was placed in a larger room where each wall was 450cm away from the platform center, which ensured that the distance from the walls provided minimal spatial and angular information. Here too, neurons did not exhibit head-directional modulation (p=0.2, Wilcoxon signed-rank test), similar to VRblank (Fig. 3d,i, Extended Data Fig. 5b). In fact, there was no significant difference between the degrees of angular modulation in VRblank and VRsymmetric (p=0.6, Wilcoxon rank-sum test).
While internal mechanisms and optic flow may still modulate the degree of angular tuning, these experiments show that directional modulation is not generated by these mechanisms alone. This leaves open the possibility that head-directional modulation is generated by the angular information contained in the distal visual cues. To test this hypothesis we performed another experiment where the virtual world was strongly visually polarized. In this condition, there was just one high contrast wall, 450cm from the center of the platform, subtending a 90 degree angle (Fig. 3e, see Methods). This polarizing cue had no other spatial information and was identical to the walls used in the symmetric world. Here, 30% of hippocampal neurons showed robust head-directional modulation (Fig. 3f,i, Extended Data Fig. 6), a greater fraction than in any other condition. However, the z-scored angular sparsity of neurons in this condition was comparable to that in RWrich and VRrich (Fig. 3j). Remarkably, the directional tuning curves of many neurons were much narrower (75.69±4.18°) than the sole, 90° wide polarizing cue.
Is there a lower bound on the width of the angular tuning curves? To address this we conducted another experiment where the sole polarizing cue was very narrow (11°), thus providing high angular information in the direction of the cue while leaving the majority of the maze blank (Fig. 3g, see Methods). Strong head-directional tuning was found in this condition as well (Fig. 3h, Extended Data Fig. 7). For neurons with significant head-directional tuning, the width of the tuning curves (70.94±5.34°) was not much narrower than in the wide-cue condition (Fig. 3k), and much wider than the 11° polarizing cue, indicating a lower bound on the width of hippocampal angular tuning curves. This could be influenced by internally generated hippocampal motifs enforcing a lower bound on the duration of the firing and hence the width of the tuning curves11. Further, the fraction of neurons showing significant head-directional modulation (19%) was considerably lower than in the wide-cue condition (Fig. 3i), perhaps because the narrow visual cue is visible to the rat for a smaller fraction of time than the wider polarizing cue, hence modulating a smaller percentage of neurons. Notably, despite similar z-scored angular sparsity for neurons with significant head-directional modulation in different conditions, z-scored mean vector length exhibited a different pattern, which can be explained by the differences in the multimodality of the angular rate maps (Extended Data Fig. 8).
We then asked if the head-directional modulation of hippocampal neurons is stable and whether the stability depends on the experimental condition (Fig. 4). Tuning curves of neurons with significant head-directional modulation were significantly stable across the experimental session in all four conditions (RWrich p=1.3×10−43, VRrich p=2.0×10−22, wide-cue p=4.4×10−16 and narrow-cue p=9.2×10−12, Wilcoxon signed-rank test) (Fig. 4a–d). The tuning curves were more stable (p=2.2×10−9) in RWrich than in VRrich (Fig. 4e,f). This could be due to the presence of other directionally informative multisensory cues in RW, such as distal odors and sounds, whose consistent pairing with visual cues results in higher stability. On the other hand, the tuning curves were more stable in the polarized VR conditions than in either of the rich conditions (Fig. 4e,f), indicating that there may be competing influences of multiple cues within each modality in the rich conditions.
These results demonstrate that specific aspects of visual cues modulate the angular tuning of individual neurons; could they also influence the ensemble response? To address this we investigated the activity of the head-directionally modulated neurons at the population level under the four different conditions. For each neuron, the direction of maximum firing was computed from its angular rate map and designated as its preferred direction (Fig. 5a–d, see Methods). We then computed the distribution of these preferred directions and the degree of angular bias of the population for each condition. There was no significant angular bias, as measured by the z-scored mean vector length of the population (see Methods), in either RWrich (z=0.95) or VRrich (z=–0.34), and the two distributions were not significantly different from each other (p=1, circular Kuiper test) (Fig. 5e,f). The lack of population bias in the rich conditions is likely due to the presence of multiple visual cues on all walls, each contributing to tuning towards different directions.
Indeed, in the wide-cue condition, with only one visual cue, the population was significantly biased (z=2.50). This directional bias of the population was significantly (p=0.04, circular V test) oriented towards the prominent visual cue (Fig. 5g). The directional bias of the population was even stronger in the narrow-cue condition, and was also significantly (p=0.04) oriented towards the narrow visual cue. There was an apparent reduction in the number of cells with preferred directions pointing directly at the narrow polarizing cue, and for some cells the preferred direction was opposite to the visual cue. This suggests that while visual cues drive the angular selectivity of hippocampal neurons, network mechanisms can modulate this activity.
Discussion
These results demonstrate that, during two-dimensional random foraging, rodent hippocampal CA1 neurons show significant modulation as a function of head-direction with respect to the surrounding distal visual cues in both real and virtual worlds. This directional modulation does not require robust vestibular cues, while angularly informative visual cues are sufficient for its generation. Additionally, the directional modulation is strongly influenced by specific aspects of the distal visual cues, both at the neuronal and ensemble level.
Our demonstration of significant head-directional modulation of hippocampal neurons’ activity during random foraging in two dimensions in RW is contrary to the commonly held belief that head-directional modulation is absent in this condition6,7. The few reports about directionally modulated activity in hippocampus have reached conflicting conclusions, with some reporting vestibular-based head-directional modulation3,14,25 and others reporting vision-based spatial-view modulation20–22. By utilizing a VR setup we were able to isolate the contribution of only distal visual cues to hippocampal directional modulation. In addition, our analysis method was able to estimate the independent contribution of position and head-direction to hippocampal activity with high resolution, uninfluenced by behavioral biases6.
If hippocampal directional tuning were generated primarily from vestibular-based signals, one would expect a dramatic reduction of head-directional modulation in VR, where the range of vestibular cues is minimized. However, hippocampal neurons not only showed significant head-directional modulation in VR, but also exhibited levels comparable to those in RW, demonstrating that robust vestibular cues are not necessary to generate directional tuning and that distal visual cues are sufficient.
About a quarter of all active CA1 neurons showed significant head-directional modulation in the visually rich RW and VR conditions, which is comparable to the fraction of directionally tuned neurons in several parts of the head-direction system9. We hypothesize that the directionally modulated hippocampal neurons may be the subset of the population that is predominantly driven by distal sensory cues, including distal visual cues. The rest could be largely driven by proximal cues, such as textures and odors on the track, which are not directionally informative. This hypothesis is consistent with prior studies showing a reduction in directional activity in the hippocampus with the inclusion of proximal cues4, and the remapping of subsets of hippocampal cells in accordance with the rotation of either distal or proximal cues2,18,33,34.
Head-directional modulation was abolished in VR in conditions with either no distal visual cues or symmetric distal visual cues demonstrating that it is not generated by internal mechanisms or optic flow alone. The directional modulation reappeared in VR with visually polarizing cues indicating that the angular information provided by visual cues directly influences head-directional modulation of individual hippocampal neurons. This influence was also observed at an ensemble level where the angular bias of the population was directly governed by the visual cues, i.e. no ensemble bias in the visually rich environments but a significant bias in the visually polarized cue conditions. Further, the degree of angular bias of the ensemble increased with increasing degree of angular polarization of the visual cue. Finally, this ensemble bias was pointed towards the polarizing visual cue. These results demonstrate a causal role of distal visual cues in driving hippocampal head-directional modulation.
Although a smaller fraction of neurons exhibited significant angular modulation in the polarized-cue conditions than in the rich conditions, the degree of ensemble bias showed the opposite trend. We speculate that this is because, in the rich conditions, different neurons fire preferentially to different visual features on the walls, resulting in a greater fraction of angularly modulated cells but no ensemble bias. In contrast, in the polarized-cue conditions the visual features modulating neural responses were concentrated within a small range of angles, reducing the fraction of directionally tuned neurons but producing a greater ensemble bias. These results indicate that visual cues do not merely provide a context for hippocampal activation; rather, specific aspects of visual cues play a direct role in determining hippocampal responses.
Visual cues influence directional modulation of neurons in the head-direction system as well9,35. However, those neurons lose their direction selectivity without robust vestibular cues36, which is not the case in our hippocampal data from VR. Hence, we hypothesize that while the hippocampus may receive directional signals from the head-direction system, it must also be receiving directional information from a pathway that does not require the vestibular signal. This pathway could be through the parietal and retrosplenial cortices, which project to the hippocampus via the entorhinal cortex9,37 where neurons show significant head-directional modulation38,39.
In addition to the independence from vestibular cues, there are also other major differences between the properties of angular tuning curves in hippocampus and head-direction system. First, in hippocampus the tuning curves are much broader and more multimodal than those in the head-direction system9,30. Second, in the presence of a single polarizing visual cue, CA1 ensemble activity manifests a large bias in the direction of the visual cue, unlike the uniform distribution of preferred orientations in the head-direction system31.
Notably, the large reduction in spatial selectivity in VR11 is in contrast to the intact head-directional modulation observed here. This suggests that the mechanisms of spatial and directional selectivity can be dissociated, in that visual cues are sufficient to generate the latter but not the former. Further, these results bridge the gap between rodent, human and non-human primate studies showing the presence of angular selectivity independent of spatial selectivity19,21,40,41.
Our findings also narrow the gap between the presence of directionality on linear tracks8 and its apparent absence during random foraging in two dimensions6,7. Visual cues could generate the directional modulation of neurons in both one and two dimensions, and consistent pairing of visual and locomotion cues11 could enhance the degree of directional modulation on linear paths3.
These results could potentially resolve the apparent paradox: If the hippocampus is required for navigation42, how can rats navigate in a virtual world10 without hippocampal spatial selectivity11? We hypothesize that angular selectivity of hippocampal neurons, combined with their selectivity to distance traveled5,11, may be sufficient to solve a navigation task.
Methods summary
Five adult male Long-Evans rats foraged for randomly scattered rewards in RW and various VR tasks. Four rats ran in visually similar RW and VR tasks with environments identical in size (300cm×300cm room with a 100cm radius elevated circular platform at the center). In addition, two rats ran on a 100cm radius platform in four other VR tasks with different distal visual features to determine their influence on hippocampal firing, as follows: a) a room with no distal visual cues; b) a room with angularly symmetric cues with high spatial contrast positioned 450cm away from the center; c) a similar environment as in (b) but with only one high contrast cue subtending a visual angle of 90° at the center; d) a similar environment as in (c) but with the visual cue subtending only an 11° angle at the center. Electrophysiological data were collected using bilateral hyperdrives with 22 tetrodes from dorsal CA15,11. All procedures were in accordance with NIH approved protocols. Spatial and head-directional modulations were computed using a generalized linear model framework15,27–29. See online methods for details.
Online Methods
Materials and methods were similar to those previously described5,10,11.
Subjects
Data were obtained from five adult male Long-Evans rats (350–400g at the time of surgery) that were singly housed on a 12-hour light/dark cycle. Recording and training were both done during the light phase of the cycle. The animals were water restricted (minimum of 30mL/day) in order to increase motivation to perform the task, but allowed an unrestricted amount of sugar water reward during the task. Further, they were food restricted (minimum of 15g/day) to maintain a stable body-weight. All experimental procedures were approved by the UCLA Chancellor’s Animal Research Committee and in accordance with NIH approved protocols.
Surgery, electrophysiology and spike sorting
All the methods were analogous to procedures described previously5,11. Rats with satisfactory behavioral performance were anesthetized using isoflurane and implanted with custom-made hyperdrives with 22 independently adjustable tetrodes. Both left and right dorsal CA1 were targeted. After recovery from surgery, tetrodes were gradually lowered to area CA1, which was identified by the presence of sharp-wave ripple complexes. Signals were recorded using a Neuralynx data acquisition system at a sampling rate of 40kHz. Spike extraction, spike sorting and single unit classification were done offline using custom software and according to methods described previously5.
Statistics
All analyses were done offline using custom MATLAB code. The two-sided nonparametric Wilcoxon rank-sum test and the circular Kuiper test were used to assess differences between linear and circular variables, respectively. The Wilcoxon signed-rank test was used to test whether a distribution differed from zero. The CircStat toolbox43 was used to compute circular statistics. All values are expressed as mean±s.e.m. unless stated otherwise.
Random foraging tasks in visually rich RW and VR
These tasks were the same as those previously described11. Briefly, in both RW and VR, a 100cm radius elevated (50cm above the floor) platform was located at the center of a 300cm×300cm room whose walls had distinct visual cues as depicted in Fig. 1a (referred to as RWrich and VRrich). As commonly done, in RW rats foraged for food rewards scattered randomly on the platform. In a visually similar VR environment, rats foraged for randomly located but hidden reward zones. Upon entry into reward zones, a white dot (20–30cm radius) appeared and sugar water was dispensed through reward tubes. Data were collected from four rats in both RWrich and VRrich.
Random foraging in VR tasks with visual cue manipulations
Two rats ran in four visually different VR environments, all of which had the same platform as described above. The reward zone was marked by a small (12.5cm radius) white disc only visible from a very short distance. Upon delivery of a reward, the reward zone moved to a new pseudorandom location on the platform. Visible reward zones were used to ensure uniform coverage on the platform and to avoid any behavioral biases that might be caused by the changes in visual cues.
In the first experiment, all distal visual cues, including the walls and the floor, were eliminated (VRblank). The pattern on the platform was the only source of optic flow but provided no spatial or angular information (Fig. 3a).
In the second experiment, four identical and angularly symmetric distal visual cues, positioned along the four virtual walls of the square room, 450cm away from the center of the platform (VRsymmetric), were added to VRblank. Despite high spatial contrast (to maximize optic flow), the distal visual cues did not provide any angular information due to the angular symmetry and high spatial frequency of the pattern (Fig. 3c). Further, the four walls were made infinitely tall to eliminate information about the corners. The large distance of the cues from the center of the table ensured that the cues did not provide any spatial information.
In the third experiment, only one of the high contrast visual cues used for VRsymmetric was placed 450cm from the center, creating a single wide polarizing cue which subtended a visual angle of 90 degrees at the center (Fig. 3e). This task had three variations, in which the visual cue appeared in front of, to the right of, or to the left of the subject at the beginning of the session (Extended Data Fig. 6a,b,c). There was no quantitative difference between the data obtained in these variations, and hence the data were combined (data not shown).
In the fourth experiment, this visual cue was made narrower (11° visual angle) while maintaining the spatial frequency of its pattern, and was placed at the same distance from the center as the wide cue (Fig. 3g).
Quantification of spatial and head-directional modulations using Generalized Linear Model
To quantify the influence of spatial and head-directional covariates on the firing of hippocampal neurons, and to minimize the influence of behavioral bias on spatial and angular selectivity estimates, we used a GLM framework15,27–29. The time-varying spiking activity was modelled as an inhomogeneous Poisson process as a function of space and head-direction:

$$\lambda(t)\,\tau = \exp\left(\beta_0 + H_S\beta_S + H_{HD}\beta_{HD}\right)$$

where τ is the time bin size, λ is the intensity function, and S and HD denote space and head-direction respectively. $H_S$ and $H_{HD}$ refer to the design matrices associated with the spatial and head-directional covariates, and $\beta_S$ and $\beta_{HD}$ are the parameters associated with these matrices. Here $\beta_0$ is a constant and the exponentiation is done element-wise. We expressed the basis functions for $H_S$ in terms of the set of orthogonal two-dimensional Zernike polynomials, and those for $H_{HD}$ in terms of sines and cosines:

$$H_S\beta_S = \sum_{l}\sum_{m}\beta_S^{l,m}\,Z_l^m(\rho(t),\psi(t))$$

$$H_{HD}\beta_{HD} = \sum_{k=1}^{K}\left[\beta_{HD}^{k,c}\cos(k\phi(t)) + \beta_{HD}^{k,s}\sin(k\phi(t))\right]$$

where $Z_l^m$ denotes the mth component of the lth-order Zernike polynomial, ρ(t) and ψ(t) denote the radial and angular components of position in polar coordinates, and ϕ(t) is the head-direction of the animal. The parameters of the model (βs) were estimated using the GLM function in MATLAB to maximize the likelihood of the model. Further, we used the Bayes Information Criterion (BIC) for model selection; the number of basis functions used in the expansions above was chosen to minimize

$$\mathrm{BIC} = k\ln(n) - 2\ln(\hat{L})$$

where $\hat{L}$ is the maximized value of the likelihood function of the model, k is the total number of parameters estimated across both space and head-direction, and n is the number of data points, i.e. the length of the intensity function. In the majority of cases BIC chose the fifth order in the angular domain, while the chosen order was more variable in the spatial domain. Hence, the number of angular basis functions was fixed at five, while the number of spatial basis functions was allowed to vary and ranged from 5 to 32.
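The fitting procedure can be sketched as follows in Python. The study used MATLAB’s GLM function with a Zernike spatial basis; purely for illustration, this sketch substitutes a simple low-order polynomial spatial basis and minimizes the Poisson negative log-likelihood directly, so the basis choice and all names are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def design_matrix(x, y, phi, n_harmonics=5):
    """Covariate matrix: a constant, low-order spatial polynomials
    (an illustrative stand-in for the Zernike basis), and sine/cosine
    head-direction terms up to n_harmonics."""
    cols = [np.ones_like(x), x, y, x * y, x**2, y**2]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * phi))
        cols.append(np.sin(k * phi))
    return np.column_stack(cols)

def fit_poisson_glm(X, spikes):
    """Maximum-likelihood fit of lambda*tau = exp(X @ beta) under an
    inhomogeneous Poisson model, by minimizing the negative log-likelihood
    (the spike-factorial term is constant in beta and omitted)."""
    def nll(beta):
        eta = X @ beta
        return np.exp(eta).sum() - spikes @ eta
    def grad(beta):
        return X.T @ (np.exp(X @ beta) - spikes)
    res = minimize(nll, np.zeros(X.shape[1]), jac=grad, method="L-BFGS-B")
    return res.x

def bic(X, spikes, beta):
    """Bayes Information Criterion: k*ln(n) - 2*ln(L-hat)."""
    eta = X @ beta
    log_lik = spikes @ eta - np.exp(eta).sum()
    return len(beta) * np.log(len(spikes)) - 2 * log_lik
```

For model selection as described above, one would evaluate `bic` for each candidate number of basis functions and keep the model with the smallest value.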
We then used the estimated parameters (βs) to reconstruct the modulation of the firing rate of neurons by the spatial and head-directional covariates. For the reconstruction process and rendering purposes, we used 5×5cm spatial bins and a total of 80 angular bins (although the resulting rates are independent of these parameters). The reconstructed rates can be expressed as follows:

$$\lambda_S(x_i, y_j) = \exp\left(\sum_{l}\sum_{m}\beta_S^{l,m}\,Z_l^m(x_i, y_j)\right)$$

$$\lambda_{HD}(\phi_k) = \exp\left(\sum_{j=1}^{5}\left[\beta_{HD}^{j,c}\cos(j\phi_k) + \beta_{HD}^{j,s}\sin(j\phi_k)\right]\right)$$

where $x_i$, $y_j$ refer to the spatial bins, $\phi_k$ refer to the head-direction bins, and the βs are the estimated parameters from fitting. Thus, the spatial and angular modulation rates used in all the figures are defined as:

$$\hat{\lambda}_S(x_i, y_j) = \lambda_S(x_i, y_j)/\bar{\lambda}_S, \qquad \hat{\lambda}_{HD}(\phi_k) = \lambda_{HD}(\phi_k)/\bar{\lambda}_{HD}$$

where $\bar{\lambda}_S$ and $\bar{\lambda}_{HD}$ are the mean values of the spatial and head-directional reconstructed conditional intensity functions. To avoid artifacts, data from periods of immobility (running speed < 5 cm s−1) were discarded. During rate map reconstruction, bins with low occupancy time were excluded.
Measures of selectivity
To quantify the degree of spatial and head-directional modulation, we computed spatial and angular sparsity together with the mean vector length of the angular rate map. The sparsity of a rate map with Nbins bins and rate rn in the nth bin is defined as:

sparsity = (Σn rn / Nbins)² / (Σn rn² / Nbins)
For firing rates as a function of head-direction, the mean vector length was computed as:

mean vector length = |Σn rn e^{iθn}| / Σn rn

where θn and rn are the angle and rate in the nth circular bin, respectively. Both of these measures are invariant to any constant scaling of the rates and hence remain unaffected by the normalization used when reconstructing rates in the GLM framework.
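Both measures are straightforward to compute from a binned rate map; a short sketch (function names are ours):

```python
import numpy as np

def sparsity(rates):
    """(mean rate)^2 / (mean squared rate) over equal-occupancy bins:
    1 for uniform firing, 1/N_bins for firing confined to a single bin."""
    r = np.asarray(rates, dtype=float)
    return r.mean() ** 2 / (r ** 2).mean()

def mean_vector_length(theta, rates):
    """|sum_n r_n exp(i*theta_n)| / sum_n r_n for circular bin centers theta_n."""
    r = np.asarray(rates, dtype=float)
    resultant = np.sum(r * np.exp(1j * np.asarray(theta, dtype=float)))
    return np.abs(resultant) / r.sum()
```

Both expressions are ratios, so multiplying all rates by a constant leaves them unchanged, which is why the mean-normalization of the reconstructed maps is harmless.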
Generation of surrogate data to validate the GLM method
Non-parametric generation of simulated place fields
To estimate the amount of angular modulation that behavioral biases introduce into purely spatially modulated neurons, we generated surrogate data based on the firing rate maps of recorded neurons. Given a behavioral profile B(t) = (BX(t), BY(t)) and a spatial firing rate map F(X, Y), spike times were generated according to an inhomogeneous Poisson process with F(B(t)) as the rate parameter. Data generated in this manner were used in Fig. 1b and in Extended Data Fig. 1i.
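In code, this amounts to reading the measured rate map at each tracked position and drawing Poisson counts; a minimal sketch under our own assumptions (the bin edges, time step `dt`, and all names are illustrative, not from the paper):

```python
import numpy as np

def surrogate_counts(bx, by, rate_map, x_edges, y_edges, dt, rng):
    """Inhomogeneous-Poisson spike counts per time bin, with the rate F(B(t))
    looked up from the spatial rate map at the animal's position."""
    ix = np.clip(np.searchsorted(x_edges, bx, side='right') - 1,
                 0, rate_map.shape[0] - 1)
    iy = np.clip(np.searchsorted(y_edges, by, side='right') - 1,
                 0, rate_map.shape[1] - 1)
    return rng.poisson(rate_map[ix, iy] * dt)
```

By construction the surrogate spikes carry only spatial tuning, so any angular modulation recovered from them reflects behavioral bias alone.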
Parametric generation of simulated place fields
To verify that the GLM framework accurately estimated the independent contributions of spatial and angular factors in determining spiking, we generated surrogate data with predetermined and variable degrees of spatial and angular modulation. For a surrogate place field centered at (X0, Y0), with spatial variance σXY, preferred angular orientation φ0 and angular variance σφ, the relative probability of firing for any (X, Y, φ) combination was defined as:

Prel(X, Y, φ) = exp(−[(X − X0)² + (Y − Y0)²] / (2σXY)) · exp(Re[e^{i(φ − φ0)}] / σφ)

where i is the imaginary number √−1, so that Re[e^{i(φ − φ0)}] = cos(φ − φ0).
Given a behavioral profile B(t) = (BX(t), BY(t), Bφ(t)) and a desired mean firing rate μ, the absolute probability of firing P is obtained by scaling the relative probability of firing Prel by a constant factor k:

P(X, Y, φ) = k · Prel(X, Y, φ), with k chosen such that μ = (1/(T − t0)) ∫t0T P(B(t)) dt

where t0 indicates the start time of the session and T indicates the end time of the session.
Spike times are then generated according to an inhomogeneous Poisson process with P(B(t)) as the rate parameter. Surrogate data generated in this manner were used in Extended Data Fig. 1a–h.
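The tuning function and the scaling step can be written out as follows (our sketch; note that the angular factor is just a von-Mises-style tuning curve, since Re[e^{i(φ − φ0)}] = cos(φ − φ0)):

```python
import numpy as np

def relative_firing_prob(X, Y, phi, x0, y0, sigma_xy, phi0, sigma_phi):
    """Gaussian spatial tuning times von-Mises-style angular tuning."""
    spatial = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sigma_xy))
    angular = np.exp(np.cos(phi - phi0) / sigma_phi)  # Re[exp(i*(phi - phi0))]
    return spatial * angular

def absolute_firing_prob(p_rel_on_trajectory, mu):
    """Scale by the constant k that makes the session-average rate equal mu."""
    k = mu / p_rel_on_trajectory.mean()
    return k * p_rel_on_trajectory
```

Varying sigma_xy and sigma_phi then dials the spatial and angular modulation independently, which is what the validation in Extended Data Fig. 1a–h requires.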
Control analysis for spatial and head-directional modulations
To assess the statistical significance of neural modulation by position and head-direction, spike trains were circularly shifted with respect to the behavioral data by random amounts (10–100 s) to obtain control data. Spatial and head-directional modulations were quantified for the resulting rate maps, and for each neuron all measures were expressed as z-scores, i.e. in standard deviations around the mean of this control data. Values exceeding two standard deviations were considered statistically significant at the 0.05 level. This method ensured that the estimated degree of angular and spatial modulation was uninfluenced by nonspecific parameters such as the duration of the recording session and the firing rate of a neuron.
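A sketch of this shuffle procedure (the `statistic` callable stands in for any of the selectivity measures above; the signature and names are our assumptions):

```python
import numpy as np

def shift_control_zscore(statistic, spikes, behavior, n_shifts=100,
                         min_shift=10, rng=None):
    """Z-score an observed selectivity statistic against controls in which
    the spike train is circularly shifted in time relative to the behavior."""
    rng = rng or np.random.default_rng()
    observed = statistic(spikes, behavior)
    n = len(spikes)
    null = np.empty(n_shifts)
    for i in range(n_shifts):
        shift = int(rng.integers(min_shift, n - min_shift))
        null[i] = statistic(np.roll(spikes, shift), behavior)
    return (observed - null.mean()) / null.std()
```

Circular shifting preserves the autocorrelation of both the spike train and the behavior while destroying their alignment, which is what makes the resulting z-scores insensitive to session duration and firing rate.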
Quantification of multimodality of angular rate maps
To detect the number of peaks in angular rate maps, a method similar to the detection of place fields5 was used. First, all peaks with a value of at least 50% of the global maximum were detected. For each peak, the boundaries were defined as the points at which the rate drops below 50% of that peak's value for at least two angular bins.
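A simplified sketch of the peak count (ours, not the paper's implementation: it counts connected circular regions above half the global maximum and omits the additional two-bin boundary rule):

```python
import numpy as np

def count_angular_peaks(rates, frac=0.5):
    """Number of connected circular regions whose rate exceeds frac * global max."""
    above = np.asarray(rates, dtype=float) >= frac * np.max(rates)
    if above.all():
        return 1
    # rotate so the scan starts in a below-threshold bin, so that regions
    # wrapping past 360 degrees are counted once; then count rising edges
    a = np.roll(above, -int(np.argmin(above))).astype(int)
    return int(np.sum(np.diff(a) == 1))
```

The rotation step matters because the angular rate map is circular: a peak straddling 0°/360° must not be counted twice.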
Quantification of significance levels of preferred firing direction of neural ensemble
For the head-directionally modulated neurons, the preferred direction was defined as the direction of maximum firing obtained from the angular rate map. To estimate the significance of the population bias, random angles between 0° and 360° were added to the preferred directions of the cells and the length of the mean vector was computed. This process was repeated 500 times, and the mean and standard deviation of these vector lengths were used to z-score the mean vector length of the experimental data. Z-score values greater than 2 were considered significant at the 0.05 level.
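The rotation shuffle can be sketched as follows (our names; each cell receives an independent uniform angular offset, which makes the null ensemble directionally unbiased):

```python
import numpy as np

def population_bias_zscore(preferred_dirs, n_shuffles=500, rng=None):
    """Z-score the ensemble resultant length against random-rotation controls."""
    rng = rng or np.random.default_rng()
    resultant = lambda a: np.abs(np.mean(np.exp(1j * a)))  # mean vector length
    observed = resultant(preferred_dirs)
    null = np.empty(n_shuffles)
    for i in range(n_shuffles):
        offsets = rng.uniform(0.0, 2.0 * np.pi, size=len(preferred_dirs))
        null[i] = resultant(preferred_dirs + offsets)
    return (observed - null.mean()) / null.std()
```

Because the shuffle preserves the number of cells while randomizing their directions, the z-score controls for the resultant length expected by chance in a finite ensemble.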
Acknowledgements
We thank B. Popeney for help with behavioral training, J. Cushman and N. Agarwal for help with electrophysiology, P. Ravassard and A. Kees for technical support and help with surgeries, and B. Willers for discussions on analysis methods. This work was supported by grants to MRM from NIH 5R01MH092925-02, DARPA-BAA-14-08, and the W. M. Keck Foundation. Results presented in this manuscript were uploaded to the preprint server bioRxiv in March 2015.