I'm kinda regretting not naming it oneninesix, but here we are. I guess I love letters.
~~To anyone wondering what's up, I did this on my phone while out in the "big city", so I'm still waiting to get home to do anything serious. I have a few ~~suckers~~ really nice people who volunteered for modding along with me. Anyone else who is interested, drop me a line. I'll be picking mods when I get home in a few hours. Sorry for the wait and I'll do my best to put out any fires in the meantime. I didn't think this would take off!~~
For those wondering, here's my take on moderating the place.
-
Moderation exists to facilitate an experience for its users in line with the goals of the community and the instance. It's not to push a personal agenda, give you a bigger hammer in debates, set up a digital fiefdom, etc. You certainly can and should include your mod experience on your dating profile, though. Unilateral decisions are not cool except in a few situations, like if 100% of your userbase is usurped by literal Nazis.
-
196 exists to be a place where you post something (often but not always something goofy) when you visit. I know not everyone does and that's fine - I still love you. These things can't be offensive or hurtful, though, especially not intentionally so. Unintentional vs intentional I believe is a HUGE distinction and needs to be considered when moderating.
-
~~LBJ~~ LBZ exists as an inclusive, (relatively) judgment-free zone for gender-diverse folks. I intend for us to uphold that here. I say relatively judgment free because there will be people looking to start shit and mods and admins are going to have to judge their actions, but only their actions.
If you wanna be my modder, you gotta get with my bullet points...or argue persuasively why I should amend them (but that part doesn't fit the tune). The three big things I'm looking for otherwise are diverse viewpoints, whether you can remain reasonably impartial, and whether you can say you're sorry. The last is huge for me. As a mod, you're going to mess up. I used to mod on Reddit and I certainly did! I find it's important for maintaining the community's respect to be able to admit when you made a bad call and what you'll do to avoid it in the future.
@[email protected], pointers would be welcome as I think you do a great job.
Community feedback is encouraged and welcome, just be aware I'll be a little slow to respond for a bit.
PS: wow, I really DO love letters!
Edit: Corrected point three, damn autocorrect! Believe it or not, we're not an inclusive community in LBJ's corpse.
Update 20/1/25: We're replete with mods for now! Thank you all who reached out. I'll start pulling these stickies as they get irrelevant, I'm just a full disclosure kind of person so I want people to know what is/has been going on.
I'm always wary of how such systems can be gamed and how they'll influence user behavior, but the only downside to trying is your own efforts. Even if you fail miserably, I imagine the exercise itself would improve our understanding of what works, what doesn't, and how to form better approaches in the future. To succeed in making a system which improves user interactions would be a truly wonderful thing, and may even translate to IRL applications. I would urge you to follow through with this for as long as you feel it's something you'd like to do.
Yeah, those are basically my thoughts too lol. Even if it ends up not working out, the process of trying it will still be good since it'll give me more experience. Those aspects you're wary of are definitely my two biggest concerns too. I think (or at least hope) that with the rules I'm thinking of for how trust is generated, it would mostly positively affect behaviour? I'm imagining that by "rewarding" trust for receiving positive replies, combined with a small reward for making positive replies in the first place, it would mostly just lead to more positive interactions overall. And I don't think I'd ever want a system like this to punish making a negative reply, only maybe receiving negative replies in response, since hopefully that keeps people from avoiding confrontation with harmful content just to dodge punishment. Honestly it might even be better to only ever reward trust and never retract it except via decay over time, but that's something worth testing I imagine.
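To make that concrete, here's a tiny sketch of the rules as I'm describing them. This is purely illustrative: the class name, the reward constants, and the decay factor are all my own placeholder assumptions, not a real design.

```python
# Hypothetical sketch of the trust rules above: trust is only ever
# rewarded (a small amount for making a positive reply, a larger amount
# for receiving one) and only ever shrinks via decay over time. Negative
# replies are deliberately never punished. All names/constants are
# placeholders.

class TrustLedger:
    REPLY_REWARD = 0.1    # small reward for making a positive reply
    RECEIVE_REWARD = 1.0  # larger reward for receiving one
    DECAY = 0.99          # multiplicative decay applied each period

    def __init__(self):
        self.trust = {}  # user -> trust score

    def positive_reply(self, replier, recipient):
        # Reward both sides of a positive interaction.
        self.trust[replier] = self.trust.get(replier, 0.0) + self.REPLY_REWARD
        self.trust[recipient] = self.trust.get(recipient, 0.0) + self.RECEIVE_REWARD

    def decay_tick(self):
        # Trust is never directly retracted, only decays over time.
        for user in self.trust:
            self.trust[user] *= self.DECAY


ledger = TrustLedger()
ledger.positive_reply("alice", "bob")
ledger.decay_tick()
```

The nice property of "reward-only plus decay" is that there's no lever for punishing someone, so nobody is incentivized to stay quiet about harmful content, and inactive or one-off accounts just fade out on their own.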
And in terms of gaming the system, that's honestly my bigger concern. I feel like the most likely negative outcomes are something like bots/bad actors finding a way to scam it, the community turning into an echo chamber where ideas (that aren't harmful) get pushed out, or the community drifting towards the center and becoming less safe for marginalized people. That's part of the reason I feel 196 would be a pretty good community to use a system like this on, though, since there's already a very strong foundation of super cool people that could be made the initial trusted group, which would hopefully lead to a better result.
There are examples of similar sorts of systems out there, but they're mostly various cryptocurrencies or other P2P systems that use trust just to verify that peers aren't malicious; it's never really been tested for moderation afaik (I could have missed an example online, but I'm fairly confident in saying this). I think stuff like the Fediverse and other decentralized or even straight-up P2P networks are a good place for this sort of thing to work, though, since a lot of the culture is already conducive to decentralizing previously centralized systems, and the communities tend to be smaller, which keeps things personal and deters bad actors/botting attempts since there aren't many incentives and they become easier to recognize.