Yeah, no thanks. I don't want things filtered based on what someone else thinks I should see.
Solarpunk
The space to discuss Solarpunk itself and Solarpunk related stuff that doesn't fit elsewhere.
Join our chat: Movim or XMPP client.
What if it's based on what you think you should see?
Either it's you deciding as you see it (i.e. there is no filter), or it's past-you deciding, in which case it's a different person. I've grown mentally and emotionally as I've gotten older, and I certainly don't want me-from-10-years-ago to be in control of what me-right-now is even allowed to see.
Isn't that what the upvote/downvote buttons are for? Although to be fair, I'd much rather the people of Lemmy decide which things are good and interesting than some "algorithm".
There's a real risk to this belief.
There are elements of lemmy who use votes to manipulate which ideas appear popular, with the intention of manipulating discourse rather than having open discussions.
That's why I stick with platforms where hardline communist teenagers can curate what I'm exposed to.
That's the only way.
Without wanting to be too aggressive, with only that quote to go on it sounds like that person wants to live in a safe zone where they're never challenged, angered, made afraid, or have to reconsider their world view. That's the very definition of an echo chamber. I don't think you're meant to live life experiencing only "approved" moments, even if you're the one in charge of approving them. Frankly I don't know how that would be possible without an insane amount of external control. You'd have to have someone/something else as a "wall" of sorts controlling your every experience or else how would things get reliably filtered?
I'd much prefer to teach people how to be resilient so they don't have to be afraid of being exposed to the "wrong" ideas. I'd recommend things like learning what emotions mean and how to deal with them, coping/processing bad moments, introspection, how to get help, and how to check new ideas against your own ethics. E.g. if you read something and it makes you angry, what idea/experience is the anger telling you to protect yourself from and how does it match your morality? How do you express that anger in a reasonable and productive way? If it's serious who do you call? And so on.
I see where you're coming from, but if you look up Karpathy, you'll probably come to a different conclusion.
I think you are getting it wrong. I added a small edit for context. It is more about emotional distraction. I kinda feel like him: I want to remain informed, but please let me prepare a bit before telling me about civilians cut to pieces in a conflict, sandwiched between a funny cat video and machine learning news.
For the same reason we filter porn or gore images out of our feeds, highly emotional news should be filterable.
I don't think there's anything wrong with taking a break from social media or news. There are days I don't visit sites like Lemmy or when I do I only click non-news links because I'm not in the mood or already having a bad day. That's different than filtering (as per Karpathy's example) Tweets so that when you do engage it's consistently a very curated, inoffensive, "safe" experience. Again, I only have the one post to go off of, but he specifically talks about wishing to avoid Tweets that "elicit emotions" or "nudge views" and compares those provocative messages to malware. As far as your point regarding blatantly sensationalist news, when I recognize it's that kind of story I just stop reading/watching and that's that.
I WANT to have my emotions elicited because I seek to be educated and don't want to be complacent about things that should make me react. "Don't know, don't care" is how people go unrepresented or abused - e.g. almost no one reads about what Boko Haram is doing in Nigeria (thus it's already "filtered out" by media), and so very little has been done in the 22 years they've been affecting millions of lives. I WANT to have my "views nudged" because I'm regularly checking my worldview to make sure it stays centered around my core ethics, and being challenged has prompted me to change bad stances before. Being exposed to objectionable content before and reassessing is also how I've learned to spot BS attempts to manipulate. It doesn't matter how many times MAGA Tweets tell me that God is upset at drag queens and only Donald Trump can save the world because now I recognize ragebait when I see it. Having dealt with it before, no amount of exposure is going to make me believe their trash and knowing what is being said is useful for exposing and opposing harmful governmental policies/bad candidates (sometimes even helping deprogram others).
The real question then becomes: what would you trust to filter comments and information for you?
In the past, it was newspaper editors, TV news teams, journalists, and so on. Assuming we can't have a return to form on that front, would it be down to some AI?
My mom, she always wants the best for me.
Easily better than all the other options.
Why do people, especially here in the fediverse, immediately assume that the only way to do it is to give power of censorship to a third party?
Just have an optional, automatic, user-parameterized auto-tagger, and set the parameters yourself for what you want to see.
Have a list of things that should receive trigger warnings. Group things by anger-inducing factors.
I'd love to have a way to filter things by actionable items: things I can get angry about but have little power to change don't need to reach me more than once a month.
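A user-controlled filter like the one described above could be sketched in a few lines of Python. The tag names, categories, and routing rules here are all made up for illustration; the point is just that every parameter lives on the user's side, with no third party in the loop:

```python
from dataclasses import dataclass, field

@dataclass
class FilterPrefs:
    """User-owned settings: the user, not a third party, decides what is hidden."""
    hidden_tags: set = field(default_factory=set)   # never show
    warn_tags: set = field(default_factory=set)     # show behind a trigger warning
    digest_tags: set = field(default_factory=set)   # batch into a periodic digest

def route(post_tags, prefs):
    """Decide how to present a post, based only on the user's own parameters."""
    tags = set(post_tags)
    if tags & prefs.hidden_tags:
        return "hidden"
    if tags & prefs.digest_tags:
        return "monthly-digest"
    if tags & prefs.warn_tags:
        return "behind-warning"
    return "show"

prefs = FilterPrefs(
    hidden_tags={"gore"},
    warn_tags={"graphic-conflict"},
    digest_tags={"non-actionable-outrage"},
)
route(["solarpunk", "graphic-conflict"], prefs)  # -> "behind-warning"
```

The hard part in practice isn't this routing logic, it's the auto-tagger that assigns tags to posts in the first place; but even that can run client-side with user-chosen models.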
Our mind is built on that "malware". I think it's more accurate to compare brain + knowledge to our immune system: the more samples you have, the better armed you are against mal-information.
But that leaves out the psychological effects of long-term exposure to ideas. If you know for a fact that the earth is round, and for the next 50 years all the media you consume keeps telling you that the earth is flat, you will at some point start believing that (or at least become unsure).
Every piece of information you receive has some tiny effect on you.
I really think that, just as the 20th century saw the rise of basic hygiene practices, we are putting mental hygiene practices in place in the 21st.
I think the right approach would be to learn to deal with any kind of information, rather than to censor anything we might not like hearing.
Reminds me of Snow Crash by Nealyboi
I think most people already have this firewall installed, and it's working too well - they're absorbing minimal information that contradicts their self-image or world view. :) Scammers just know how to bypass the firewall. :)
Sounds like we're reinventing forum moderation and media literacy from first principles here.
Reading, watching, and listening to anything is like this. You accept communications into your brain and sort it out there. It's why people censor things, to shield others and/or to prevent the spread of certain ideas/concepts/information.
Misinformation, lies, scams, etc. function entirely by exploiting it.
Not really. An executable controlled by an attacker could likely "own" you. A toot, tweet, or comment cannot; it's just an idea or thought that you can accept or reject.
We already distance ourselves from sources of consistently bad ideas. For example, we're all here instead of on Truth Social.
Jokes on you, all of my posts are infohazards that make you breathe manually when you read them.
We already have a firewall layer between outside information and ourselves, it’s called the ego, superego, our morals, ethics and comprehension of our membership in groups, our existing views and values. The sum of our experiences up till now!
Lay off the Stephenson and Gibson. Try some Tolstoy or Steinbeck.
We already have a firewall: it's our thoughts. Information can nudge us, but it's fighting an uphill battle against everything we already know and believe.
Leaving aside the dystopian echo chamber this could result in, you could argue that it would help a lot with fake news. Fake news is so easy to spread and more present than ever. And for every person there is probably that one piece of news that is just believable enough not to question. And then the next just-believable piece of news. And another. I believe no one is immune to being influenced by fake stories, maybe even radicalized if they are targeted just right. A firewall just filtering out everything non-factual would already prevent so much societal damage, I think.
Hüman brain just liek PC, me so smort.
It's definitely an angle worth considering when we talk about how the weakest link in any security system is its human users. We're not just "not immune" to propaganda, we're ideological petri dishes filled with second-hand agar agar.
I remember watching a video from a psychiatrist with Eastern monk training. He was explaining why yogis spend decades meditating in remote caves: he said it was to control information/stimuli exposure.
Ideas are like seeds; once they take root, they grow. You can weed out unwanted ones, but it takes time and mental energy. It pulls at your attention and keeps you from functioning at your best.
The concept really spoke to me. It's easier to consciously control your environment than it is to consciously control your thoughts and emotions.
I've thought about this since seeing Ghost in the Shell as a kid. If direct neural interfaces become commonplace, the threat of hacking expands beyond simply stealing financial information or blackmail material; attackers may be able to control your entire body!
In a way, the job of a teacher or journalist is to filter useful and/or relevant information for interested parties.
You are responsible for what you do with the information you process. You're not supposed to just believe everything you read, or let it affect you. We don't need some government or organization deciding what can be shown online. Read history and see what follows mass censorship.
I am bewildered that so many people construe this as suggesting it should be a government or a company deciding what to show you. Obviously any kind of firewall/filter ought to be optional and user-controlled!
Yes, Lemmy is that too. We need to meet people in person and then form groups online. I once devised a solution for exchanging public keys in person and verifying each piece of content thereafter with that key.
I mean, this is just called censorship. We censor things for kids and all kinds of people in our lives all the time. We censor things for ourselves when we don't feel like reading the news or opening a text from a specific person. This is not some novel concept.
Not really. This is user-controlled filtering. Censorship is done to push a specific worldview onto its victims. Filtering we do all the time, for spam for instance.
But the post is explicitly about Tweets that challenge emotions and views and how that's harmful. It's one thing to want to see fewer suspicious offers from Nigerian princes and horny MILFS in my area. It's another to tell an AI that you don't want to see events or conversations that might be upsetting or make you think about ethics, politics, etc.
P.S. I'm replying to you a lot today, just want to say I'm not trying to be abusive or follow you around. You keep making points on this page that I want to engage with, and hopefully it's not coming across as persecution.
I look forward to factchecker services that interface right into the browser or OS, and immediately recognize and flag comments that might be false or misleading. Some may provide links to deep dives where it's complicated and you might want to know more.
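As a toy illustration of what such a flagger might do under the hood: match incoming comments against a database of known misleading claims and attach a corrective note. The claim list and the naive substring matching here are entirely invented for illustration; a real service would need semantic matching and continuously updated, sourced claim databases:

```python
# Toy fact-flagger. KNOWN_CLAIMS is an invented stand-in for a real,
# curated, regularly updated claims database.
KNOWN_CLAIMS = {
    "the earth is flat":
        "False: overwhelming observational evidence shows the earth is round.",
    "vaccines cause autism":
        "False: repeatedly debunked by large-scale studies.",
}

def flag(comment: str):
    """Return (claim, note) pairs for any known claims found in the comment."""
    text = comment.lower()
    return [(claim, note) for claim, note in KNOWN_CLAIMS.items()
            if claim in text]

flag("My uncle still insists the Earth is flat.")
# -> one flagged claim with a short corrective note
```

A browser or OS integration would then render the note inline next to the flagged comment, ideally with a link to a deeper dive for the complicated cases.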
I wonder if maybe it's more apt a comparison to say that allowing raw comments to affect you in a strong way is like running a random program as root. To a certain extent you have to let this kind of harmful content in.
P.s. the short story sounds cool - is it available to read anywhere?