this post was submitted on 15 Mar 2025
535 points (98.2% liked)

Technology

top 50 comments
[–] [email protected] 40 points 15 hours ago

He means they must insert ideological bias on his behalf.

[–] [email protected] 33 points 18 hours ago* (last edited 18 hours ago) (2 children)

While I do prefer absolute free speech for individuals, I have no illusions about what Trump is saying behind closed doors: "Make it like me, and everything that I do." I don't want a government to decide for me and others what is right.

Also, science, at least the peer-reviewed stuff, should be considered free of bias. Real-world mechanics, be it physics or biology, can't be considered biased. We need science because it makes life better. False science, such as phrenology or RFK's la-la-land ravings, needs to be discarded because it doesn't help anyone. Not even the believers.

[–] [email protected] 7 points 7 hours ago

Reality has a liberal bias.

[–] [email protected] 1 points 15 hours ago

I wish more people realised this. Well said, comrade.

[–] [email protected] 5 points 14 hours ago

AI is not your friend.

[–] [email protected] 16 points 18 hours ago (1 children)

Le Chat by Mistral is a France-based (and EU-abiding) alternative to ChatGPT. Works fine for me so far.

[–] [email protected] 5 points 18 hours ago* (last edited 18 hours ago) (1 children)

Personally, I find that for local AI, the recently released 111B Command-A is pretty good. It actually grasps the dice odds I set up for a D&D-esque, JRPG-style game. Still too slow on mere gamer hardware (128 GB DDR4 + RTX 4090) to be practical, but still an impressive improvement.

Sadly, Cohere is located in the US. On the other paw, from my brief check they operate out of California and New York. That's good; it means they're less likely to obey Trump's stupidity.

[–] [email protected] 3 points 17 hours ago

Oh yeah, local is a different story. I'd probably look into something like what you mentioned if I had the hardware, but atm I'm more interested in finding 1-1 alternatives to these tech behemoths, ones that anyone can use with the same level of convenience.

[–] [email protected] 37 points 1 day ago* (last edited 23 hours ago) (2 children)

eliminates mention of “AI safety”

AI datasets tend to have a white bias. White people are over-represented in photographs, for instance. If one trains AI on such datasets for something like facial recognition (with mostly white faces), it will be less likely to identify non-white people as human. Combine this with self-driving cars and you have a recipe for disaster: since the AI is worse at detecting non-white people, it is less likely to keep them from being crushed underneath in an accident. This is both stupid and evil. You cannot always account for unconscious bias in datasets.
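
To make the mechanism concrete, here is a minimal, hypothetical Python sketch of the kind of per-group audit that surfaces this problem. The function name, group labels, and numbers are illustrative assumptions, not data from any real detector or dataset:

```python
# Hypothetical per-group audit sketch: given a detector's predictions and
# ground-truth group labels, compare false-negative rates across groups.
# All data below is made up for illustration only.
from collections import defaultdict

def false_negative_rates(samples):
    """samples: list of (group, is_face, detected) tuples."""
    misses = defaultdict(int)   # true faces the detector failed to flag, per group
    faces = defaultdict(int)    # total true faces, per group
    for group, is_face, detected in samples:
        if is_face:
            faces[group] += 1
            if not detected:
                misses[group] += 1
    return {g: misses[g] / faces[g] for g in faces if faces[g]}

# Illustrative audit: a model trained on a mostly-white dataset tends to
# miss more faces in the under-represented group.
audit = [
    ("white", True, True), ("white", True, True), ("white", True, True),
    ("white", True, False),
    ("non-white", True, True), ("non-white", True, False),
    ("non-white", True, False), ("non-white", True, False),
]
print(false_negative_rates(audit))  # {'white': 0.25, 'non-white': 0.75}
```

The disparity in miss rates, not the overall accuracy, is what matters here: a model can look fine on average while still failing far more often on the group the training data under-represents.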

“reducing ideological bias, to enable human flourishing and economic competitiveness.”

They will fill it with capitalist Red Scare propaganda.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes.

Interesting.

“The AI future is not going to be won by hand-wringing about safety,” Vance told attendees from around the world.

That was tried before. A chatbot named Tay was released into the wilds of Twitter in 2016 without much 'hand-wringing about safety'. It turned into a neo-Nazi, which, I suppose, is just what Edolf Musk wants.

The researcher who warned that the change in focus could make AI more unfair and unsafe also alleges that many AI researchers have cozied up to Republicans and their backers in an effort to still have a seat at the table when it comes to discussing AI safety. “I hope they start realizing that these people and their corporate backers are face-eating leopards who only care about power,” the researcher says.

[–] [email protected] 3 points 9 hours ago

Yeah, but the current administration wants Tay to be the press secretary.

[–] [email protected] 5 points 15 hours ago (1 children)

capitalist Red Scare propaganda

I've always found it interesting that the US is preoccupied with fighting communist propaganda but not pro-fascist propaganda.

[–] [email protected] 1 points 2 hours ago (1 children)

Communism threatens capital. Fascism mostly does not.

[–] [email protected] 1 points 1 hour ago

So it's never been about democracy after all.

[–] [email protected] 18 points 22 hours ago

Well, the rest of the world can take the lead in scientific R&D now that the US has declared itself a failure not only culturally but politically, and is attacking scientific institutions and funding directly (NIH, universities, etc.).

[–] [email protected] 29 points 1 day ago* (last edited 1 day ago) (2 children)

Trump doing this shit reminds me of when the Germans demanded that all research on physics, relativity, and (thankfully) the atomic bomb stop because it was "Jewish pseudoscience" in Hitler's eyes.

[–] [email protected] 8 points 22 hours ago (1 children)

Trump also complimented their Nazis recently, saying how he wished he had his "generals".

[–] [email protected] 4 points 20 hours ago

Considering they thought he was crazy and refused his orders, I kinda wish he had them too.

[–] [email protected] 2 points 18 hours ago* (last edited 18 hours ago)

It is good(?) that he released capable workers from federal service...so that they can serve someplace more democratic. The more Yarvin's cabal undercuts its own competence and reinforces the good guys, the better it is for the free world.

[–] [email protected] 202 points 1 day ago* (last edited 1 day ago) (2 children)

Literally 1984.

This is a textbook example of newspeak / doublethink, exactly like the way they use the word “corruption” to mean different things depending on who it's being applied to.

[–] [email protected] 56 points 1 day ago (2 children)

Doublethink, but yeah you're right

[–] [email protected] 39 points 1 day ago* (last edited 1 day ago)

Newspeak and doublethink at the same time, ackshually, but I think everybody gets what you both mean.

[–] [email protected] 12 points 22 hours ago (1 children)
[–] [email protected] 12 points 19 hours ago

Yup, and always will be, because the antiwoke worldview is so delusional that it calls empirical reality "woke". Thus, an AI that responds truthfully will always be woke.

[–] nonentity 24 points 1 day ago (1 children)

Any meaningful suppression or removal of ideological bias is an ideological bias.

I propose a necessary precursor to the development of artificial intelligence is the discovery and identification of a natural instance.

[–] [email protected] 106 points 1 day ago (6 children)

So, models may only be trained on sufficiently bigoted data sets?

[–] [email protected] 74 points 1 day ago (8 children)

This is why Musk wants to buy OpenAI. He wants biased answers, skewed towards capitalism and authoritarianism, presented as being "scientifically unbiased". I had a long convo with ChatGPT about rules to limit CEO pay. If Musk had his way I'm sure the model would insist, "This is a very atypical and harmful line of thinking. Limiting CEO pay limits their potential and by extension the earnings of the company. No earnings means no employees."

[–] [email protected] 39 points 1 day ago

Same reason they hate Wikipedia.

[–] [email protected] 41 points 1 day ago (1 children)

I hope this backfires. Research shows there's a white, anti-Black (and white-supremacist) bias in many AI models (see ChatGPT's responses to Israeli vs. Palestinian questions).

An unbiased model would be much more pro-Palestine and pro-BLM.

[–] [email protected] 65 points 1 day ago (1 children)

'We don't want bias' is code for 'make it biased in favor of me.'

[–] [email protected] 9 points 1 day ago* (last edited 1 day ago)

That's what I call understanding Trumpese

[–] [email protected] 1 points 15 hours ago

Why should we have ideals anymore? Wallowing in the muck is almost paradise. /s

[–] [email protected] 56 points 1 day ago

"Sir, that's impossible."

"JUST DO IT!"

[–] [email protected] 40 points 1 day ago (3 children)

It’s almost like reality has a liberal bias. 🙃

[–] [email protected] 19 points 1 day ago (2 children)

I might say a left bias here on Lemmy. While Reddit and other US-centric sites see liberal as "the left", across most of the world liberal is considered more center-right.

[–] [email protected] 29 points 1 day ago (2 children)

It's going to come full circle and start spitting out pictures of a Black George Washington again...
