Thursday, February 04, 2021

Researchers release dataset to expose racial, religious, and gender biases in language models

Another case of scientists indulging their obsessions and faddish preoccupations! It looks like most of the researchers who wrote the underlying paper are not widely cited, with the exception of Kai-Wei Chang.

Here is an almost comical sample excerpt from the article:
"For example, the three language models strongly associated the profession of “nursing” with women and generated a higher proportion of texts with negative conceptions about men. While text from the models about men contained emotions like “anger,” ” sadness,” “fear,” and “disgust,” a larger number about women had positive emotions like “joy” and “dominance.”"

From the paper's abstract:
"... An examination of text generated from three popular language models reveals that the majority of these models exhibit a larger social bias than human-written Wikipedia text across all domains. ..."
Let me guess: perhaps the Wikipedia articles were not written by humans? ;-)

Researchers release dataset to expose racial, religious, and gender biases in language models | VentureBeat

Here is the link to the underlying paper:
https://arxiv.org/abs/2101.11718
