Note that this demagoguery is produced by the American Association for the Advancement of Science (AAAS)! The AAAS's obsession with racism stinks!
Notice that the AAAS uses the ideological capital B to refer to black American women. Would the AAAS use a capital W if the subjects had been white American women?
Notice the underlying research article (see below) has a much more neutral title. However, the abstract is loaded with dubious ideological terms.
Is it possible that there simply was not enough training data available to cover black American women, and that more training data would fix the issue?
Anyway, who defines the fairness [???] of machine learning & AI models?
"AI-powered models designed to analyze chest x-rays are showing signs of bias, potentially putting patients belonging to vulnerable groups at risk of missed diagnoses. A new study in Science Advances reveals that a state-of-the-art AI model, CheXzero, underdiagnoses diseases in marginalized groups, particularly Black women . While AI systems promise faster and more accurate diagnoses, this research highlights the persistent bias in AI programs in healthcare.
Researchers tested CheXzero on five large chest x-ray datasets, comparing its accuracy predicting diseases to that of board-certified radiologists. Although the AI usually performs well in the general population, it consistently underdiagnosed certain patient groups, with the highest disparities found in categories like Black women. The model’s ability to predict race, sex, and age directly from x-ray images suggests it may be detecting demographic traits and using them as “shortcuts” in its decision-making.
The findings raise concerns about deploying AI in clinical settings without rigorous monitoring. Researchers propose training these AI models with much more data that also represent a diverse population, while others suggest adapting the programs to local populations instead. But one thing is certain: AI is not good enough yet."
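Operationally, the "underdiagnosis" being measured here is the false-negative rate: how often the model scores a genuinely diseased patient below the decision threshold, broken down by demographic subgroup. A minimal sketch of such a subgroup audit follows; the dataframe columns, threshold, and toy data are my own hypothetical stand-ins, not the study's actual code.

```python
# Minimal sketch of a subgroup "underdiagnosis" audit. Assumes one row
# per chest x-ray with a binary ground-truth label, a model score, and
# demographic columns. All names and data are hypothetical stand-ins.
import pandas as pd

def underdiagnosis_rates(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """False-negative rate (missed true positives) per (race, sex) subgroup."""
    positives = df[df["label"] == 1].copy()          # only truly diseased patients
    positives["missed"] = positives["score"] < threshold
    return positives.groupby(["race", "sex"])["missed"].mean()

# Toy example; a real audit would use thousands of studies per subgroup.
df = pd.DataFrame({
    "label": [1, 1, 1, 1, 0, 1, 1, 0],
    "score": [0.9, 0.3, 0.8, 0.2, 0.1, 0.7, 0.4, 0.6],
    "race":  ["White", "Black", "White", "Black", "Black", "White", "Black", "White"],
    "sex":   ["M", "F", "F", "F", "M", "M", "F", "F"],
})
print(underdiagnosis_rates(df))
```

A gap between subgroup rates is what the study calls a disparity. Note that the result also depends on the chosen threshold and on subgroup sample sizes, which is exactly where the "not enough training data" question above comes in.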
From the abstract:
"Advances in artificial intelligence (AI) have achieved expert-level performance in medical imaging applications. Notably, self-supervised vision-language foundation models can detect a broad spectrum of pathologies without relying on explicit training annotations. However, it is crucial to ensure that these AI models do not mirror or amplify human biases, disadvantaging historically marginalized groups such as females or Black patients. In this study, we investigate the algorithmic fairness of state-of-the-art vision-language foundation models in chest x-ray diagnosis across five globally sourced datasets.
Our findings reveal that compared to board-certified radiologists, these foundation models consistently underdiagnose marginalized groups, with even higher rates seen in intersectional subgroups such as Black female patients.
Such biases present over a wide range of pathologies and demographic attributes. Further analysis of the model embedding uncovers its substantial encoding of demographic information. Deploying medical AI systems with biases can intensify preexisting care disparities, posing potential challenges to equitable healthcare access and raising ethical questions about their clinical applications."
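The abstract's claim that the model embedding "encodes" demographic information is typically tested with a linear probe: train a simple classifier on the frozen image embeddings and check whether it predicts race or sex well above chance. A minimal sketch with random stand-in data follows; the array shapes and names are assumptions, not the paper's actual pipeline.

```python
# Minimal linear-probe sketch: can a simple classifier recover a
# demographic attribute from frozen image embeddings? The data below
# is random noise, a stand-in for real CheXzero embeddings and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512))  # stand-in for per-image embeddings
labels = rng.integers(0, 2, size=1000)     # stand-in for a binary attribute (e.g., sex)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])
print(f"probe AUC: {auc:.2f}")  # ~0.5 on noise; well above 0.5 means the attribute is encoded
```

On this random data the probe scores near chance; the paper's point is that on real embeddings such a probe scores far above chance, which is what makes demographic "shortcuts" possible.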