Thursday, October 06, 2022

Posits, a New Kind of Number, Improves the Math of AI

Good news! Amazing stuff! Could be a breakthrough!

"Training the large neural networks behind many modern AI tools requires real computational might: For example, OpenAI’s most advanced language model, GPT-3, required an astounding million billion billions of operations to train, and cost about US $5 million in compute time. ...
Back in 2017, [researchers] developed a new way of representing numbers. These numbers, called posits, were proposed as an improvement over the standard floating-point arithmetic processors used today.
Now, a team of researchers ... developed the first processor core implementing the posit standard in hardware and showed that, bit-for-bit, the accuracy of a basic computational task increased by up to four orders of magnitude, compared to computing using standard floating-point numbers. ...
Just last week, Nvidia, Arm, and Intel agreed on a specification for using 8-bit floating-point numbers instead of the usual 32-bit or 16-bit for machine-learning applications. Using the smaller, less-precise format improves efficiency and memory usage, at the cost of computational accuracy. ...
Posits accomplish this improved accuracy around 1 and -1 thanks to an extra component in their representation. Floats are made up of three parts: a sign bit (0 for positive, 1 for negative), several “mantissa” (fraction) bits denoting what comes after the binary version of a decimal point, and the remaining bits defining the exponent (2^exp). ...
Deep neural networks usually work with normalized parameters called weights, making them the perfect candidate to benefit from posits’ strengths. Much of neural-net computation is comprised of multiply-accumulate operations. Every time such a computation is performed, each sum has to be truncated anew, leading to accuracy loss. With posits, a special register called a quire can efficiently do the accumulation step to reduce the accuracy loss. ...
With their new hardware implementation, which was synthesized in a field-programmable gate array (FPGA), the ... team was able to compare computations done using 32-bit floats and 32-bit posits side by side. They assessed their accuracy by comparing them to results using the much more accurate but computationally costly 64-bit floating-point format. Posits showed an astounding four-order-of-magnitude improvement in the accuracy of matrix multiplication, a series of multiply-accumulates inherent in neural network training. They also found that the improved accuracy didn’t come at the cost of computation time, only a somewhat increased chip area and power consumption. ..."
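
A few sketches to make the quoted claims concrete. First, the efficiency-versus-accuracy trade-off behind the new 8-bit float spec: NumPy has no 8-bit float type, so in this rough sketch float16 stands in for a reduced-precision format. The point is only that fewer bits shrink the memory footprint while adding rounding error; the numbers are illustrative, not the FP8 spec itself.

import numpy as np

rng = np.random.default_rng(0)
weights32 = rng.standard_normal(1_000_000).astype(np.float32)
weights16 = weights32.astype(np.float16)   # lossy down-cast to a smaller format

rel_err = np.abs(weights16.astype(np.float32) - weights32) / np.abs(weights32)
print(f"memory: {weights32.nbytes} bytes -> {weights16.nbytes} bytes")
print(f"median relative rounding error: {np.median(rel_err):.2e}")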
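
Second, the three float fields the article names can be pulled straight out of a 32-bit IEEE 754 value, as in the sketch below. Posits add a variable-length "regime" field on top of these, which is the extra component the article alludes to; that part is not shown here, and the sketch ignores special cases such as zero, subnormals, and NaN.

import struct

def float32_fields(x: float):
    # Pack as a 32-bit IEEE 754 float, then pull the bit fields back out.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign     = bits >> 31            # 1 sign bit
    exponent = (bits >> 23) & 0xFF   # 8 exponent bits, stored with a bias of 127
    mantissa = bits & 0x7FFFFF       # 23 fraction ("mantissa") bits
    return sign, exponent - 127, mantissa

print(float32_fields(1.0))    # (0, 0, 0)
print(float32_fields(-2.5))   # (1, 1, 2097152): -1.25 * 2**1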
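
Third, the quire. Posit hardware accumulates multiply-accumulate results in an exact fixed-point register instead of rounding every partial sum. That hardware can't be reproduced in a few lines of NumPy, but the effect of a wide accumulator is easy to simulate: below, float16 stands in for the low-precision working format and a float64 running sum stands in for the quire.

import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(10_000).astype(np.float16)
b = rng.standard_normal(10_000).astype(np.float16)

# Round after every multiply-accumulate step, as a plain low-precision MAC would.
acc = np.float16(0.0)
for x, y in zip(a, b):
    acc = np.float16(acc + x * y)

# "Quire-style": keep the running sum wide and round only once at the end.
exact = np.dot(a.astype(np.float64), b.astype(np.float64))
wide  = np.float16(exact)

print("error with per-step rounding:", abs(float(acc) - exact))
print("error with wide accumulation:", abs(float(wide) - exact))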
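
Finally, the evaluation methodology is easy to mirror even without posit hardware: compute a matrix product in a working precision and measure its error against a 64-bit reference. The sketch uses float32 as the working format, since 32-bit posits aren't available in NumPy; it illustrates the comparison the team ran, not the posit result itself.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((512, 512))   # float64 by default
B = rng.standard_normal((512, 512))

reference = A @ B                                       # 64-bit baseline
low_prec  = A.astype(np.float32) @ B.astype(np.float32)

rel_err = np.linalg.norm(low_prec - reference) / np.linalg.norm(reference)
print(f"relative error of the float32 product vs the float64 reference: {rel_err:.2e}")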

Posits, a New Kind of Number, Improves the Math of AI - IEEE Spectrum. The first posit-based processor core gave a ten-thousandfold accuracy boost.

[Figure from the article: components of floating-point representation (top) and posit representation (middle); the accuracy comparison shows posits' advantage when the exponent is close to 0.]

