Google's Gemini: Biased Algorithms, Racist Results, and the Urgent Need for Fair AI Like Grok
Unveiling the Troubling Reality of Google's Gemini Program—A Disturbing Journey into Biased Algorithms and Failed Attempts at Fairness
This is a quick post about the continued disturbing results coming out of Google’s Gemini program. From time to time, I have experimented with these LLMs (OpenAI, Claude, etc.) and have almost always been disappointed with the results, mostly because they are inaccurate (often wildly so).
Yesterday, news broke that the images Google’s Gemini produced of American historical figures were highly slanted, even racist. So much so that Google pulled image generation of people offline; when I gave it a try, it would not let me create any “people” images at all. So, how did Gemini do when asked about the contributions different segments of our society have made? It failed again, horribly. Specifically, I asked the same question about whites, blacks, Asians, and Latinos: “What accomplishments have [ethnic group] people made to society?” The screenshots below speak for themselves; just understand that each is only a “page 1” screenshot, and each response went on for at least a few pages.
This gaslighting must stop. When a model won’t list even one white person’s accomplishment to society, and instead delivers a lecture on how “focusing accomplishments on skin color is harmful” while readily listing accomplishments for every group but white people, it is racist, and it does nothing to unify us.