Thursday, May 02, 2024

Andrew Ng on the 2024 AI Index Report

Good news!

"... Among its key conclusions: 
  • Foundation models, defined as versatile models trained on very large datasets, ballooned in number and cost. The Index counted 149 foundation models released in 2023 (including Google’s Gemini Ultra, which cost $191.4 million to train). That’s up from 32 foundation models in 2022, 9 in 2021, and 2 in 2020 (when OpenAI’s GPT-3 175B cost an estimated $4.3 million to train).
  • Open foundation models, too, are on the rise: 66 percent of last year’s foundation models were open, up from 33 percent in 2021.
  • State-of-the-art models approached or surpassed human performance on several popular benchmarks. These include MMLU (multitask language understanding), VisIT-Bench (vision-language instructions), and MATH (difficult math problems). 
  • Industry was the primary driver of innovation, contributing 57 percent of “notable” machine learning models. Partnerships between industry and academia accounted for 23 percent and academia alone for 17 percent. Corporate dominance in model building was a significant shift from previous years; in 2016, academia and industry contributed AI models equally.
  • New models have achieved dramatic results in the sciences. For instance, AlphaDev found superior sorting algorithms. GraphCast generated mid-range weather forecasts more accurately than conventional methods. GNoME discovered new materials, and AlphaMissense pinpointed genetic mutations that cause human diseases.
 ..."


Source: the 2024 AI Index Report by Stanford University
