Thursday, January 28, 2021

Microsoft DeBERTa surpasses human performance on the SuperGLUE benchmark

Recommendable! Cutting-edge natural language understanding (NLU) research at Microsoft.

"... Since its release in 2019, top research teams around the world have been developing large-scale pretrained language models (PLMs) that have driven striking performance improvement on the SuperGLUE benchmark. Microsoft recently updated the DeBERTa model by training a larger version that consists of 48 Transformer layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on SuperGLUE for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE benchmark rankings, outperforming the human baseline by a decent margin (90.3 versus 89.8). The model also sits at the top of the GLUE benchmark rankings with a macro-average score of 90.8. ..."

Microsoft DeBERTa surpasses human performance on the SuperGLUE benchmark - Microsoft Research
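
For readers who want to try the model themselves, here is a minimal, illustrative sketch of loading a DeBERTa checkpoint for a SuperGLUE-style sentence-pair task with the Hugging Face transformers library. Nothing below comes from the post itself: the model id microsoft/deberta-v2-xxlarge (assumed to be the published 48-layer, ~1.5B-parameter checkpoint), the use of transformers, torch, and sentencepiece, and the example sentences are all assumptions for illustration, and the classification head is untrained until fine-tuned on a SuperGLUE task.

# Minimal sketch (assumptions noted above): load a DeBERTa checkpoint and
# score a sentence pair, as in SuperGLUE entailment-style tasks such as RTE.
# Requires: pip install transformers torch sentencepiece
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "microsoft/deberta-v2-xxlarge"  # assumed id for the 48-layer, ~1.5B-parameter model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.eval()

premise = "DeBERTa uses disentangled attention and an enhanced mask decoder."
hypothesis = "DeBERTa modifies the standard Transformer attention mechanism."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
# The classification head is randomly initialized here, so these probabilities
# are meaningless until the model is fine-tuned on the target task.
print(logits.softmax(dim=-1))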
