Monday, April 01, 2024

Researchers at Stanford and Databricks Open-Sourced BioMedLM: A 2.7 Billion Parameter GPT-Style AI Model Trained on PubMed Text

Good news! Interesting approach! More specialized large language models are called for.

Caveat: I did not read the research paper.

What caught my eye is the excerpt from the research paper captured in my screenshot below. I think the custom tokenization makes a huge difference. Just look at how GPT-2 mangles biomedical words! I bet this affects model outputs: garbage in, garbage out.
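If you want to see the effect yourself, here is a minimal sketch of that kind of comparison, assuming the BioMedLM tokenizer ships with the checkpoint on the Hugging Face Hub under the id stanford-crfm/BioMedLM (check the Hub link in the paper); the exact token splits are illustrative, not copied from the screenshot.

```python
# Sketch: compare how GPT-2's stock tokenizer and BioMedLM's PubMed-trained
# tokenizer split a biomedical term. Assumes the Hub id "stanford-crfm/BioMedLM".
from transformers import AutoTokenizer

term = "thrombocytopenia"

gpt2_tok = AutoTokenizer.from_pretrained("gpt2")
biomedlm_tok = AutoTokenizer.from_pretrained("stanford-crfm/BioMedLM")

print("GPT-2   :", gpt2_tok.tokenize(term))      # typically fragmented into many generic subwords
print("BioMedLM:", biomedlm_tok.tokenize(term))  # expected to give fewer, domain-aware pieces
```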

Two of the senior authors of this paper are well known in the ML and AI community: Percy Liang and Christopher D. Manning.

From the abstract:
"Models such as GPT-4 and Med-PaLM 2 have demonstrated impressive performance on a wide variety of biomedical NLP tasks. However, these models have hundreds of billions of parameters, are computationally expensive to run, require users to send their input data over the internet, and are trained on unknown data sources. Can smaller, more targeted models compete? To address this question, we build and release BioMedLM, a 2.7 billion parameter GPT-style autoregressive model trained exclusively on PubMed abstracts and full articles. When fine-tuned, BioMedLM can produce strong multiple-choice biomedical question-answering results competitive with much larger models, such as achieving a score of 57.3% on MedMCQA (dev) and 69.0% on the MMLU Medical Genetics exam. BioMedLM can also be fine-tuned to produce useful answers to patient questions on medical topics. This demonstrates that smaller models can potentially serve as transparent, privacy-preserving, economical and environmentally friendly foundations for particular NLP applications, such as in biomedicine. The model is available on the Hugging Face Hub: this https URL."

Researchers at Stanford and Databricks Open-Sourced BioMedLM: A 2.7 Billion Parameter GPT-Style AI Model Trained on PubMed Text - MarkTechPost
