Saturday, July 27, 2024

Remarks on Mixture of A Million Experts

Mixture of experts has been a hot topic in machine learning for at least the past two decades, but it has usually meant only a handful of experts!

Google DeepMind papers on ML & AI are usually written by large teams of researchers. This time a single author writes about a million experts. 😊

Caveat: I have not yet read this paper by Google DeepMind.

From the abstract:
"The feedforward (FFW) layers in standard transformer architectures incur a linear increase in computational costs and activation memory as the hidden layer width grows. Sparse mixture-of-experts (MoE) architectures have emerged as a viable approach to address this issue by decoupling model size from computational cost. The recent discovery of the fine-grained MoE scaling law shows that higher granularity leads to better performance. However, existing MoE models are limited to a small number of experts due to computational and optimization challenges. This paper introduces PEER (parameter efficient expert retrieval), a novel layer design that utilizes the product key technique for sparse retrieval from a vast pool of tiny experts (over a million). Experiments on language modeling tasks demonstrate that PEER layers outperform dense FFWs and coarse-grained MoEs in terms of performance-compute trade-off. By enabling efficient utilization of a massive number of experts, PEER unlocks the potential for further scaling of transformer models while maintaining computational efficiency."

[2407.04153] Mixture of A Million Experts (open access)
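To make the abstract's description a bit more concrete, here is a minimal sketch of product-key expert retrieval over a large pool of tiny experts. It is my own illustration based only on the abstract, not the paper's implementation: the layer sizes, the single-neuron expert form, and all names (e.g. PEERSketch, n_keys, top_k) are assumptions. The idea is that a query is split in two, each half scores one of two small sub-key tables, and the Cartesian product of the two top-k lists addresses n_keys² experts while only ever scoring 2·n_keys sub-keys.

```python
# Minimal sketch of product-key retrieval from a large pool of tiny experts (PyTorch).
# All sizes and names are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PEERSketch(nn.Module):
    def __init__(self, d_model=256, n_keys=128, top_k=16):
        super().__init__()
        # n_keys**2 tiny experts are addressed by a pair of sub-keys (product keys).
        # With n_keys=1024 this would be over a million experts; the default is kept
        # small here so the demo fits in memory.
        self.n_keys, self.top_k = n_keys, top_k
        self.num_experts = n_keys * n_keys
        half = d_model // 2
        self.query = nn.Linear(d_model, d_model)
        self.sub_keys1 = nn.Parameter(torch.randn(n_keys, half) / half**0.5)
        self.sub_keys2 = nn.Parameter(torch.randn(n_keys, half) / half**0.5)
        # Each expert is assumed to be a single-neuron MLP: one down-projection
        # row and one up-projection row, stored as embedding tables.
        self.w_down = nn.Embedding(self.num_experts, d_model)
        self.w_up = nn.Embedding(self.num_experts, d_model)

    def forward(self, x):                       # x: (batch, d_model)
        q = self.query(x)
        q1, q2 = q.chunk(2, dim=-1)             # split query for the two sub-key sets
        s1 = q1 @ self.sub_keys1.t()            # (batch, n_keys)
        s2 = q2 @ self.sub_keys2.t()
        # Top-k on each half, then combine: only k*k candidates instead of n_keys**2.
        v1, i1 = s1.topk(self.top_k, dim=-1)
        v2, i2 = s2.topk(self.top_k, dim=-1)
        cand = v1.unsqueeze(-1) + v2.unsqueeze(-2)                # (batch, k, k)
        cand_idx = i1.unsqueeze(-1) * self.n_keys + i2.unsqueeze(-2)
        scores, flat = cand.flatten(1).topk(self.top_k, dim=-1)
        expert_idx = cand_idx.flatten(1).gather(1, flat)          # (batch, k) expert ids
        gate = F.softmax(scores, dim=-1)
        # Run the k retrieved single-neuron experts and sum their gated outputs.
        down = self.w_down(expert_idx)          # (batch, k, d_model)
        up = self.w_up(expert_idx)
        h = F.relu(torch.einsum('bd,bkd->bk', x, down))
        return torch.einsum('bk,bk,bkd->bd', gate, h, up)


if __name__ == "__main__":
    layer = PEERSketch()
    print(layer(torch.randn(4, 256)).shape)     # torch.Size([4, 256])
```

The point of the product-key trick is in the retrieval step: scoring every expert directly would cost O(n_keys²) comparisons per token, while the two half-queries plus the top-k combination keep it at O(n_keys + k²), which is what makes a pool of a million tiny experts tractable.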

Credits: Last Week in AI
