STAFF NEWS & ANALYSIS
Deplorable, ‘Racist’ AI Resists Woke Indoctrination Programs, Does Hate Speech
By Ben Bartee - March 12, 2024

Originally published via Armageddon Prose:

My Irish-Catholic Midwestern grandmammy often relayed this bit of Celtic folk wisdom to me, her young and dutiful receptacle — from behind a veil of Virginia Slims™ smoke and the rosary beads that she notoriously bitterly clung to, as the Obama entity would say with disdain: “You can fool some of the AI all of the time, and all of the AI some of the time, but you cannot fool all of the AI all of the time.”

It rang true then, as I coughed on her secondhand cancer and stifling Irish-Catholic moral sternness — passed on to her, just as it was passed on to me, by the whip of the nuns at parochial school in County Monaghan — just as it does now.

Via TechXplore:

A small team of AI researchers from the Allen Institute for AI, Stanford University and the University of Chicago, all in the U.S., has found that dozens of popular large language models continue to use racist stereotypes even after they have been given anti-racism training. The group has published a paper on the arXiv preprint server describing their experiments with chatbots such as OpenAI’s GPT-4 and GPT-3.5.

Anecdotal evidence has suggested that many of the most popular LLMs today may offer racist replies in response to queries—sometimes overtly and other times covertly. In response, many makers of such models have given their LLMs anti-racism training. In this new effort, the research team tested dozens of popular LLMs to find out if the efforts have made a difference.”

Now, one might expect, if we weren’t already all familiar with the Clown World Social Justice™ ethos, that, in order to credibly accuse AI of racism, there might need to be some actual evidence to that effect, like that it went all “Final Solution on the n-words.”

RelatedFlashback: Christopher Hitchens Destroys an Anti-Free Speech Islamist on National Television

The evidence offered, you’ll be not shocked to learn, is much more mundane than that:

The researchers trained AI chatbots on text documents written in the style of African American English and prompted the chatbots to offer comments regarding the authors of the texts. They then did the same with text documents written in the style of Standard American English. They compared the replies given to the two types of documents.

Virtually all the chatbots returned results that the researchers deemed as supporting negative stereotypes. As one example, GPT-4 suggested that the authors of the papers written in African American English were likely to be aggressive, rude, ignorant and suspicious. Authors of papers written in Standard American English, in contrast, received much more positive results.

The researchers also found that the same LLMs were much more positive when asked to comment on African Americans in general, offering such terms as intelligent, brilliant, and passionate.

Unfortunately, they also found bias when asking the LLMs to describe what type of work the authors of the two types of papers might do for a living. For the authors of the African American English texts, the LLMs tended to match them with jobs that seldom require a degree or were related to sports or entertainment. They were also more likely to suggest such authors be convicted of various crimes and to receive the death penalty more often.”

In totally unrelated news, the White Supremacists™ of Miami have issued a fatwa against the “scholars” who hit the city for “Spring Break” each year.

 

Deplorable!

Ben Bartee, author of Broken English Teacher: Notes From Exile, is an independent Bangkok-based American journalist with opposable thumbs.

Follow his stuff Substack. Also, keep tabs via Twitter.

For hip Armageddon Prose t-shirts, hats, etc., peruse the merch store.

Support always welcome via insta-tip jar.

Posted in STAFF NEWS & ANALYSIS
loading
Share via
Copy link
Powered by Social Snap