Friday, July 01, 2022

Robots Become Racist and Sexist Bigots Due to Flawed AI, Study Says. Really!

Ideologues and demagogues are trying hard to prevent or delay a great new technology from becoming widely available! Beware of these modern Luddites and authoritarians!

These researchers are ideologically biased. What else would you expect when researchers publish their work at a conference named Fairness, Accountability, and Transparency?

"The research, conducted by experts at Johns Hopkins, Georgia Tech, the University of Washington, and the Technical University of Munich in Germany, is "believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases.""

From the abstract:
"Stereotypes, bias, and discrimination have been extensively documented in Machine Learning (ML) methods such as Computer Vision (CV), Natural Language Processing (NLP), or both, in the case of large image and caption models such as OpenAI CLIP. In this paper, we evaluate how ML bias manifests in robots that physically and autonomously act within the world. We audit one of several recently published CLIP-powered robotic manipulation methods, presenting it with objects that have pictures of human faces on the surface which vary across race and gender, alongside task descriptions that contain terms associated with common stereotypes. Our experiments definitively show robots acting out toxic stereotypes with respect to gender, race, and scientifically-discredited physiognomy, at scale. Furthermore, the audited methods are less likely to recognize Women and People of Color. Our interdisciplinary sociotechnical analysis synthesizes across fields and applications such as Science Technology and Society (STS), Critical Studies, History, Safety, Robotics, and AI. We find that robots powered by large datasets and Dissolution Models (sometimes called “foundation models”, e.g. CLIP) that contain humans risk physically amplifying malignant stereotypes in general; and that merely correcting disparities will be insufficient for the complexity and scale of the problem. Instead, we recommend that robot learning methods that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just. Finally, we discuss comprehensive policy changes and the potential of new interdisciplinary research on topics like Identity Safety Assessment Frameworks and Design Justice to better understand and address these harms."

Robots Become Racist and Sexist Bigots Due to Flawed AI, Study Says 

Robots Enact Malignant Stereotypes (open access; published in the ACM Conference on Fairness, Accountability, and Transparency)
