Saturday, April 24, 2021

ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning

Microsoft researchers may have just found a way around a serious hardware bottleneck that has been impeding progress in machine learning! Caveat: I am not an expert in the hardware and computing side of AI or machine learning.

"In the last three years, the largest dense deep learning models have grown over 1000x to reach hundreds of billions of parameters, while the GPU memory has only grown by 5x (16 GB to 80 GB). Therefore, the growth in model scale has been supported primarily through system innovations that allow large models to fit in the aggregate GPU memory of multiple GPUs. However, we are getting close to the GPU memory wall. It requires 800 NVIDIA V100 GPUs just to fit a trillion parameter model for training, and such clusters are simply out of reach for most data scientists. ... we present ZeRO-Infinity, a novel heterogeneous system technology that leverages GPU, CPU, and NVMe memory to allow for unprecedented model scale on limited resources without requiring model code refactoring. At the same time it achieves excellent training throughput and scalability, unencumbered by the limited CPU or NVMe bandwidth. ZeRO-Infinity can fit models with tens and even hundreds of trillions of parameters for training on current generation GPU clusters. ..."
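For readers curious what "leveraging GPU, CPU, and NVMe memory" looks like in practice: ZeRO-Infinity ships as part of Microsoft's DeepSpeed library, where it is enabled through a JSON configuration file. The sketch below is my own illustration of such a config, assuming DeepSpeed's ZeRO stage-3 schema; the NVMe path is a hypothetical placeholder, and I have not benchmarked these exact settings, so treat it as a rough example rather than a recommended setup.

```json
{
  "train_batch_size": 16,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "nvme",
      "nvme_path": "/local_nvme"
    },
    "offload_param": {
      "device": "nvme",
      "nvme_path": "/local_nvme"
    }
  }
}
```

The key idea is that stage 3 partitions parameters, gradients, and optimizer states across devices, and the two "offload" blocks push state out of GPU memory to CPU or NVMe storage, which is what lets a model far larger than aggregate GPU memory be trained at all.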

[2104.07857] ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning
