Tuesday, June 09, 2020

Google's Performer AI architecture could advance protein analysis and cut compute costs

Good news! If we can finally better understand the folding, structure, and dynamics of proteins, it would be a monumental eureka moment! What took nature and evolution many millions of years ...



"Performer is an offshoot of [the very successful] Transformer, an architecture proposed by Google researchers in 2017. Transformers rely on a trainable attention mechanism that specifies dependencies between elements of each input sequence (for instance, amino acids within a protein). ... By contrast, Performers scale linearly [not quadratically like transformers] by the number of tokens in an input sequence. Their backbone is fast attention via orthogonal random features (FAVOR), a technique that maintains marginal distributions of inputs while recognizing that different inputs are statistically independent. This lets Performers handle long sequences and remain backward-compatible with pretrained regular Transformers, allowing them to be used beyond the scope of Transformers as a more scalable replacement for attention in computer vision, reinforcement learning, and other AI applications."



Google's Performer AI architecture could advance protein analysis and cut compute costs | VentureBeat
