this post was submitted on 15 Apr 2024
411 points (93.1% liked)

Solarpunk

I found that idea interesting. Will we consider it the norm in the future to have a "firewall" layer between news and ourselves?

I once wrote a short story where the protagonist received news of a friend's death, but it was intercepted by their AI assistant, which said: "when you have time, there is emotional news that does not require urgent action, but that you will need to digest". I feel it could become the norm.

EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I think he does not talk about politics there, but it applies quite a bit.

EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding yourself from information you may be uncomfortable with, but from tweets specifically designed to elicit reactions, which is becoming a plague on Twitter due to their new incentives. It is about the difference between presenting news in a neutral way versus as "incredibly atrocious crime done to CHILDREN and you are a monster for not caring!". The second one feels a lot like an exploit of emotional backdoors, in my opinion.

[–] [email protected] 19 points 7 months ago (3 children)

The real question then becomes: what would you trust to filter comments and information for you?

In the past, it was newspaper editors, TV news teams, journalists, and so on. Assuming we can't have a return to form on that front, would it be down to some AI?

[–] [email protected] 8 points 7 months ago (1 children)

My mom, she always wants the best for me.

[–] [email protected] 5 points 7 months ago

Easily better than all the other options.

[–] [email protected] 8 points 7 months ago (1 children)

Why do people, especially here in the fediverse, immediately assume that the only way to do it is to give power of censorship to a third party?

Just have an optional, automatic, user-parameterized auto-tagger and set the parameters yourself for what you want to see.

Have a list of things that should receive trigger warnings. Group things by anger-inducing factors.

I'd love to have a way to filter things by actionable items: things I can get angry about but have little ability to change; no need to give me more than a monthly update on those.
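To make the idea concrete, here is a minimal sketch of such a user-parameterized filter in Python. Everything here is hypothetical: the tag names, the `FilterPrefs` structure, and the routing rules are illustrative choices, and the auto-tagger itself (some local classifier assigning tags to a post) is assumed to exist upstream.

```python
from dataclasses import dataclass, field

@dataclass
class FilterPrefs:
    """User-set parameters; nothing here is decided by a third party."""
    hide_tags: set = field(default_factory=set)    # never show
    warn_tags: set = field(default_factory=set)    # show behind a trigger warning
    digest_tags: set = field(default_factory=set)  # batch into a periodic digest

def route_post(tags: set, prefs: FilterPrefs) -> str:
    """Decide how to present a post, given the tags the auto-tagger assigned it."""
    if tags & prefs.hide_tags:
        return "hidden"
    if tags & prefs.digest_tags:
        return "digest"   # e.g. non-actionable outrage, summarized monthly
    if tags & prefs.warn_tags:
        return "warn"     # shown, but behind a content warning
    return "show"

# Example: outrage-bait I can't act on goes to the monthly digest.
prefs = FilterPrefs(
    hide_tags={"spam"},
    warn_tags={"violence"},
    digest_tags={"outrage-non-actionable"},
)
print(route_post({"outrage-non-actionable", "politics"}, prefs))  # digest
```

The point of the design is that the preferences live entirely on the user's side; the tagger only labels content, it never decides what you see.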

[–] [email protected] 1 points 7 months ago (1 children)

Because your "auto-tagger" is a third party and you have to trust it to filter stuff correctly.

[–] [email protected] 1 points 7 months ago

How about no? You set it up with your parameters, it is optional and open source.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago) (1 children)

Most recent Ezra Klein podcast was talking about the future of AI assistants helping us digest and curate the amount of information that comes at us each day. I thought that was a cool idea.

*Edit: create to curate

[–] [email protected] 1 points 7 months ago

It makes a lot of sense. It also presents an opportunity to hand off such filtering to a more responsible entity/agency than media companies of the past. In the end, I sincerely hope we have a huge number of options rather than the same established players (FANG) as everything else right now.