this post was submitted on 11 Feb 2025
27 points (93.5% liked)


Constructing artificial intelligence that aligns with human values is a crucial challenge, with political values playing a distinctive role among various human value systems. In this study, we adapted the Political Compass Test and combined it with rigorous bootstrapping techniques to create a standardized method for testing political values in AI. This approach was applied to multiple versions of ChatGPT, utilizing a dataset of over 3000 tests to ensure robustness. Our findings reveal that while newer versions of ChatGPT consistently maintain values within the libertarian-left quadrant, there is a statistically significant rightward shift in political values over time, a phenomenon we term a ‘value shift’ in large language models. This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets. While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.
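
For anyone wondering what "bootstrapping" means here, below is a rough Python sketch of the general idea (not the authors' code; the model call and scoring function are hypothetical placeholders): run a Political Compass-style test against a model many times, then resample the scores with replacement to put confidence intervals on its average position on each axis.

```python
# Minimal sketch of a bootstrap analysis of Political Compass-style scores.
# administer_test() is a stand-in: the real study would query the ChatGPT API
# and score the answers; here we just simulate noisy results.
import random
import statistics

def administer_test(model_version: str) -> tuple[float, float]:
    """Hypothetical placeholder: return (economic, social) scores in [-10, 10]."""
    return (random.gauss(-4.0, 1.5), random.gauss(-5.0, 1.5))

def bootstrap_mean(samples: list[float], n_resamples: int = 10_000) -> tuple[float, float, float]:
    """Return the sample mean and a 95% bootstrap confidence interval."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(samples) for _ in samples]  # resample with replacement
        means.append(statistics.fmean(resample))
    means.sort()
    lo = means[int(0.025 * n_resamples)]
    hi = means[int(0.975 * n_resamples)]
    return statistics.fmean(samples), lo, hi

# Repeat the test many times (the paper reports 3000+ runs across versions).
results = [administer_test("gpt-x") for _ in range(300)]
econ = [r[0] for r in results]
soc  = [r[1] for r in results]

for axis, scores in (("economic", econ), ("social", soc)):
    mean, lo, hi = bootstrap_mean(scores)
    print(f"{axis}: mean={mean:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}]")
```

Comparing the resulting confidence intervals across model versions is what lets the authors call the rightward drift statistically significant rather than noise.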

[–] Vendetta9076 -1 points 1 week ago (1 children)

Genuine question, do people really care about this?

[–] [email protected] 4 points 1 week ago (1 children)

Yes, I find this very intriguing.

[–] Vendetta9076 1 points 1 week ago (1 children)
[–] doo 9 points 1 week ago (1 children)

I'm not the OG commenter, but for me this is important because I can see how GPT is replacing search, and I can imagine these models playing an increasing role in news. Which has two sides. One is that people will read news summaries generated by a system with a bias. The other is the news itself being written with that same bias.

We've seen this before. Machine learning systems trained on past human decisions also learned the biases in those decisions. Sentencing recommendations harsher for Black defendants than white ones. Job applicant screening favouring men over women. Etc.

There's a good book about that, if you're interested: "Weapons of Math Destruction" https://en.m.wikipedia.org/wiki/Weapons_of_Math_Destruction

[–] Vendetta9076 1 points 1 week ago

I'll have to read that thanks :)