Monday, October 16, 2023

On the 160 references in Google's PaLM 2 Technical Report. Very woke!

Google outed itself as a purveyor of woke ideology with this "Technical Report"!

Google's obsession with leftist ideology is both annoying and disturbing!

The 160 references are a compendium of leftist ideology (toxicity, bias, gender inclusivity, racism, harm reduction, marginalization, intersectionality, you name it). In my estimation, at least 40% are such ideological references.

Even a publication by a black female feminist titled "Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics" had to be included.

In their references, you find titles like:
  1. Towards a critical race methodology in algorithmic fairness
  2. Is your toxicity my toxicity? Exploring the impact of rater identity on toxicity annotation
  3. Women also snowboard: Overcoming bias in captioning models
  4. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets
  5. The misgendering machines: Trans/HCI implications of automatic gender recognition
  6. Welcome, singular "they"
  7. Coarse race data conceals disparities in clinical risk score performance
  8. Cultural incongruencies in artificial intelligence
  9. Characteristics of harmful text: Towards rigorous benchmarking of language models
  10. Re-imagining algorithmic fairness in India and beyond
  11. Social bias frames: Reasoning about social and power implications of language
  12. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection
  13. Fairness and abstraction in sociotechnical systems
  14. Identifying sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction
  15. Fairness for unobserved characteristics: Insights from technological impacts on queer communities
  16. “I’m sorry to hear that”: Finding new biases in language models with a holistic descriptor dataset
  17. Ethical and social risks of harm from language models
  18. Detoxifying language models risks marginalizing minority voices
[2305.10403] PaLM 2 Technical Report
