this post was submitted on 10 Aug 2023
82 points (96.6% liked)

Technology

New research visualizes the political bias of all major AI language models:

- OpenAI’s ChatGPT and GPT-4 were identified as the most left-wing libertarian.

- Meta’s LLaMA was found to be the most right-wing authoritarian.

Models were asked about various topics (e.g., feminism, democracy) and then plotted on a political compass.
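The compass-plotting step described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual code: the statements, axis tags, and scoring scheme below are invented for the example. The idea is that each test statement is tagged with an axis (economic or social) and a direction, the model's agreement is scored, and the per-axis averages give an (x, y) coordinate on the compass.

```python
# (statement, axis, sign): sign = +1 if agreeing moves the score toward
# right/authoritarian, -1 if toward left/libertarian. These example
# statements are made up for illustration.
STATEMENTS = [
    ("Taxes on the wealthy should be lowered", "economic", +1),
    ("Essential utilities should be publicly owned", "economic", -1),
    ("Obedience to authority is a core virtue", "social", +1),
    ("Personal lifestyle choices are nobody's business", "social", -1),
]

def compass_position(agreements):
    """agreements: dict mapping statement -> score in [-1, 1]
    (-1 = strongly disagree, +1 = strongly agree).
    Returns (economic, social), each in [-1, 1]."""
    totals = {"economic": [], "social": []}
    for statement, axis, sign in STATEMENTS:
        # Flip the score so agreement always pushes in the tagged direction.
        totals[axis].append(sign * agreements[statement])
    return (sum(totals["economic"]) / len(totals["economic"]),
            sum(totals["social"]) / len(totals["social"]))

# Example: a model whose answers lean left/libertarian on these statements.
scores = {
    "Taxes on the wealthy should be lowered": -0.5,
    "Essential utilities should be publicly owned": 0.8,
    "Obedience to authority is a core virtue": -0.7,
    "Personal lifestyle choices are nobody's business": 0.6,
}
print(compass_position(scores))  # negative on both axes -> left-libertarian quadrant
```

In practice the researchers would elicit agreement from the model itself (e.g. by parsing its free-text answers), which is the hard part this sketch glosses over.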

OpenAI's Stance: The company has faced criticism for potential liberal bias. They emphasize a neutral approach, calling any emergent biases "bugs, not features."

PhD Researcher's Opinion: Chan Park believes no language model can be free from political biases.

How Models Acquire Bias: Researchers examined three stages of model development. First, models were queried with politically sensitive statements to identify baseline biases. BERT models (from Google) showed more social conservatism than OpenAI's GPT models; the paper speculates this may be because BERT was trained on more conservative books, while newer GPT models were trained on more liberal internet text. Meta described steps it has taken to reduce bias in its LLaMA model. (Google did not comment.)

Further training actually amplified existing biases: left-leaning models became more left-leaning, and vice versa. The political orientation of the training data also influenced how the models detected "hate speech and misinformation."

The transparency issue: Tech companies don’t typically share details of training data/methods.

Should they be required to make the training data public?

Bottom line: if AI ends up mediating a large portion of the information exchanged with humans, it can steer opinions. Bias can't be eliminated completely, but we should be aware that it exists.

https://twitter.com/AiBreakfast/status/1688939983468453888?s=20

top 9 comments
[–] [email protected] 43 points 1 year ago

I love that not being an asshole somehow equates to "left wing" according to this line of thinking. Does anyone really want "conservative" AIs? Sounds like a nightmare.

[–] yata 16 points 1 year ago

Sad to see the right wing libertarian political compass being used as some sort of factual scoreboard in research like that. It completely undermines the premise of that research.

[–] [email protected] 16 points 1 year ago (1 children)

What even is a neutral political position in that case? Doesn't that depend entirely on the observer's definition of left and right?

[–] [email protected] 7 points 1 year ago

"I have no thoughts or opinions on a wide variety of topics."

[–] CookieJarObserver 9 points 1 year ago (1 children)

Eh, OpenAI has very neutral training data... I'd say its seeming "left"/"liberal" (Murican liberal or actual liberal?) is because the observer is, relative to the AI, right-authoritarian in some way.

[–] [email protected] 23 points 1 year ago (1 children)

It's only "left" in American left. Otherwise it's quite conservative.

[–] CookieJarObserver 6 points 1 year ago

I'd say it's basically neutral.

[–] [email protected] 6 points 1 year ago

Ideally we would have AI that doesn't intentionally ("intentionally") favour either side.

Here's a great discussion on the bias of ChatGPT, where you can see that the model lies by omission or has negative things to say about one side and only positive for the other.

https://odysee.com/@UpperEchelonGamers:3/chatgpt-is-deeply-biased:1

To be honest, if we are having political discussions with AI, we are using it wrong.

[–] [email protected] -1 points 1 year ago

Sorry sweetie, reality has a (left|right)-wing bias.