Pooling neural imaging data across subjects requires aligning recordings that come from different subjects. In magnetoencephalography (MEG) recordings, sensors across subjects are poorly correlated, both because of differences in the exact locations of the sensors and because of structural and functional differences between individual brains. One way to achieve alignment is to assume that the same regions of different brains correspond across subjects. However, this relies both on the assumption that brain anatomy and function are well correlated and on the strong assumptions that go into solving the inverse problem of source localization. In this paper, we investigated an alternative method that bypasses source localization: instead, it analyzes the sensor recordings themselves and aligns their temporal signatures across subjects. We used a multivariate approach, multi-set canonical correlation analysis (M-CCA), to transform each subject's data to a common neural representational space. We first evaluated the robustness of this approach on a synthetic dataset for which the ground truth was known. We then demonstrated that M-CCA performs better on an MEG dataset than both a method that assumes perfect sensor correspondence and a method that applies source localization. Lastly, we described how the standard M-CCA algorithm can be further improved with a regularization term that incorporates spatial sensor information.
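To make the core idea concrete, the following is a minimal sketch of M-CCA in the spirit of the MAXVAR formulation: whiten each subject's sensor data, concatenate the whitened datasets along the feature axis, and extract the leading shared components by SVD. This is an illustrative implementation under our own assumptions (the paper's exact algorithm, regularization, and variable names may differ); all identifiers below are ours.

```python
import numpy as np

def mcca(datasets, n_components=3):
    """MAXVAR-style multi-set CCA sketch.

    Each dataset is a (time, sensors) array from one subject. The
    returned array holds shared timecourses (time, n_components) that
    serve as a common representational space across subjects.
    """
    whitened = []
    for X in datasets:
        Xc = X - X.mean(axis=0)            # center each sensor channel
        U, _, _ = np.linalg.svd(Xc, full_matrices=False)
        whitened.append(U)                 # whitened scores per subject
    Z = np.hstack(whitened)                # concatenate along features
    Uz, _, _ = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
    return Uz[:, :n_components]            # leading shared components

# Two synthetic "subjects" observing the same latent signal through
# different sensor mixing matrices, plus sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
latent = np.sin(2 * np.pi * 5 * t)
A1 = rng.normal(size=(10, 1))              # subject 1: 10 sensors
A2 = rng.normal(size=(12, 1))              # subject 2: 12 sensors
X1 = latent[:, None] @ A1.T + 0.1 * rng.normal(size=(200, 10))
X2 = latent[:, None] @ A2.T + 0.1 * rng.normal(size=(200, 12))

shared = mcca([X1, X2], n_components=1)
# The recovered shared component should closely track the latent signal
# (up to sign), even though the two subjects' sensor layouts differ.
r = abs(np.corrcoef(shared[:, 0], latent)[0, 1])
```

The toy example mirrors the synthetic-dataset evaluation described above: because the ground-truth latent signal is known, one can check directly how well the shared components recover it.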