Abstract
The established approach to unsupervised protein contact prediction estimates co-evolving positions using undirected graphical models. This approach trains a Potts model on a Multiple Sequence Alignment, then predicts that the edges with highest weight correspond to contacts in the 3D structure. On the other hand, increasingly large Transformers are being pretrained on protein sequence databases but have demonstrated mixed results for downstream tasks, including contact prediction. This has sparked discussion about the role of scale and attention-based models in unsupervised protein representation learning. We argue that attention is a principled model of protein interactions, grounded in real properties of protein family data. We introduce a simplified attention layer, factored attention, and show that it achieves comparable performance to Potts models while sharing parameters both within and across families. Further, we extract contacts from the attention maps of a pretrained Transformer and show that they perform competitively with the other two approaches. This provides evidence that large-scale pretraining can learn meaningful protein features when presented with unlabeled and unaligned data. We contrast factored attention with the Transformer to indicate that the Transformer leverages hierarchical signal in protein family databases not captured by our single-layer models. This raises the exciting possibility of developing powerful structured models of protein family databases.1
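As a concrete illustration of the contact-extraction step shared by these approaches, the sketch below shows one common recipe: average attention maps over heads, symmetrize, apply the average product correction (APC), and rank residue pairs by score. This is a minimal NumPy sketch of the general technique under our own assumptions, not the paper's implementation; the function names and the toy random inputs are hypothetical stand-ins.

```python
import numpy as np

def apc(scores):
    """Average product correction: subtract the outer product of row and
    column means (normalized by the grand mean) to suppress background
    signal shared by many positions."""
    row = scores.mean(axis=0, keepdims=True)   # shape (1, L)
    col = scores.mean(axis=1, keepdims=True)   # shape (L, 1)
    return scores - (col @ row) / scores.mean()

def contact_scores(attn_maps):
    """Collapse per-head attention maps (num_heads, L, L) into one
    symmetric, APC-corrected score matrix."""
    avg = attn_maps.mean(axis=0)    # average over heads
    sym = 0.5 * (avg + avg.T)       # structural contacts are symmetric
    return apc(sym)

# Toy usage: random maps standing in for a pretrained model's attention.
rng = np.random.default_rng(0)
L = 64
scores = contact_scores(rng.random((8, L, L)))

# Predict the top-L pairs at sequence separation >= 6 as contacts.
i, j = np.triu_indices(L, k=6)
order = np.argsort(scores[i, j])[::-1][:L]
predicted = list(zip(i[order], j[order]))
```

The same ranking step applies to Potts model couplings (after reducing each coupling block over its amino-acid dimensions with a norm), which is why precision of the top-L predicted pairs serves as a common metric across models.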
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
nthomas@berkeley.edu, rmrao@berkeley.edu, justas@uw.edu, koo@cshl.edu, dabaker@uw.edu, yss@berkeley.edu, so@g.harvard.edu
Correction: the last name of author Justas Daupras should read Justas Dauparas.
1 Code available at https://github.com/nickbhat/iclr-2021-factored-attention