This appears to be an interesting paper by two frequent and highly cited collaborators at Salesforce, Richard Socher and Caiming Xiong. It was recently accepted to ICLR 2021. I have not yet had time to read it.
It is particularly noteworthy that Richard Socher focused on computational linguistics and natural language processing as a student at Stanford University. He is now bringing that expertise to bear on the language of proteins.
Machine learning will most likely make many valuable contributions to understanding the three-dimensional structure and folding of proteins, as well as their function and properties!
"... In this work, we demonstrate a set of methods for analyzing protein Transformer models through the lens of attention. We show that attention: (1) captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure, (2) targets binding sites, a key functional component of proteins, and (3) focuses on progressively more complex biophysical properties with increasing layer depth. ..."
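To make the first finding concrete, here is a minimal toy sketch (my own illustration, not code from the paper) of the kind of measurement the abstract describes: given an attention matrix from one head and a protein contact map, compute what fraction of the head's attention mass falls on residue pairs that are spatially close in the folded structure. The function name `attention_contact_agreement` and all values are hypothetical.

```python
# Hypothetical sketch: how well does one Transformer attention head's
# weight align with a protein's 3-D contact map?

def attention_contact_agreement(attention, contacts):
    """Fraction of total attention mass placed on contacting residue pairs.

    attention: L x L list of lists; attention[i][j] = weight residue i puts on j.
    contacts:  L x L list of lists of bools; True if residues i, j are close in 3-D.
    """
    n = len(attention)
    total = sum(sum(row) for row in attention)
    on_contacts = sum(
        attention[i][j]
        for i in range(n)
        for j in range(n)
        if contacts[i][j]
    )
    return on_contacts / total

# Toy 4-residue protein: residues 0 and 3 are far apart in the sequence
# but in contact in the folded structure; this head attends strongly
# between exactly that pair, so it scores highly.
L = 4
contacts = [[False] * L for _ in range(L)]
contacts[0][3] = contacts[3][0] = True

attention = [[0.05] * L for _ in range(L)]
attention[0][3] = attention[3][0] = 0.85

score = attention_contact_agreement(attention, contacts)
```

In the paper's actual setting, the attention matrices would come from a pretrained protein language model and the contact maps from experimentally solved structures; this sketch only shows the shape of the comparison.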
Credits: Andrew Ng's The Batch