this post was submitted on 16 Feb 2024
38 points (67.3% liked)

Technology


cross-posted from: https://lemmy.ca/post/15541577

The extension Shinigami Eyes is back.

Quick context: the extension lets you see which profiles are trans-supportive or transphobic, and it hadn't been updated for two years.

I was worried it had been abandoned. Hopefully we can get Ecosia and Lemmy supported and have it expanded to cover racist/sexist profiles soon. I would like to donate to the developer if I could.

It is my favourite Firefox extension because it protects you from all the hate online, keeps you from unintentionally supporting a slimy transphobe 😡, and literally reveals people's true colours.

2/10/2024

  * Support for Bluesky
  * Updated Bloom filter
  * Fixed colorization of Tumblr tags
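For anyone curious why a "Bloom filter" shows up in a changelog like this: it's a compact, probabilistic way to ship a large set of usernames so the extension can check membership locally without distributing a readable list. Here's a minimal sketch of the idea in Python; the class, sizes, and hashing scheme are all invented for illustration and are not Shinigami Eyes' actual code.

```python
# Illustrative Bloom filter: hash each item into a fixed-size bit array.
# Lookups can false-positive (rarely), but never false-negative.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("example-user")
print("example-user" in bf)    # True
print("someone-else" in bf)    # almost certainly False (false positives are possible)
```

The trade-off is why updates matter: you can't remove entries from a plain Bloom filter, so shipping a rebuilt ("updated") filter is how the label set gets refreshed.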

[–] [email protected] 4 points 8 months ago (1 children)

IDK, I still feel like I don't agree with this approach. In fact, the name "Shinigami Eyes" alone makes me think it's inspired by Death Note, an anime about a guy with a twisted, wrong sense of justice who uses shinigami powers to kill people he thinks are bad.

E.g.

  1. What if someone said things like this, had a conversation, then changed their opinion? Why should they have that struck against them forever?
  2. Things can be easily misinterpreted, especially online through text. Are they talking about "never being a biological woman"? Having a uterus? Something else? Gender? Maybe they don't understand the difference. A lot of people don't. Mistakes are the first step toward understanding. Maybe they didn't understand something, or maybe they worded something ambiguously. It's easy to do.

I think if we want to change people for the better, that means talking with them, interacting with them, and helping them to change.

[–] [email protected] -2 points 8 months ago (1 children)
  1. This isn't a moderation tool, it's effectively a communal way to remember usernames of people who've been abusive. Do you have the same worry about people blocking you for your past views? I've said awful, transphobic, homophobic, and racist things under past usernames (part of the reason I switched to Starman was to distance myself from the persona I had 15 years ago), and I'm sure some people rightfully blocked me when they saw those comments. Personally I would rather someone see a red username over a comment now that I'm an ally than not see my comment at all.

  2. Like I said, they are strict. Unless you are unabashedly transphobic, you won't even accidentally say something that might get you tagged. Kind of like how a white person doesn't accidentally use racial slurs unless they're racist. You shouldn't have to worry about a video of you calling someone the n word going viral because you shouldn't be calling people the n word in the first place.

[–] [email protected] 2 points 8 months ago (1 children)

They are strict now; the slippery slope argument is that it won't stay that way. We've seen moderation tools similar to this make mistakes. Twitch, Tumblr, YouTube, Facebook, etc. all use algorithmic analysis for moderation purposes, and all of them have messed up and required additional human review.

[–] [email protected] 1 points 8 months ago

From their website:

Is there a mechanism in place to prevent malicious/fake reports?

Yes. While your overrides are immediately visible to you, changes are included in the publicly visible dataset only if they pass some trustworthiness criteria (including human validation).

I see no reason to believe that the human review criteria are going anywhere.
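The gating the FAQ describes, local overrides visible immediately, public inclusion only after trust checks and human validation, can be sketched as a simple two-tier pipeline. Everything below (class names, the trust score, the threshold) is a hypothetical illustration of that description, not the project's actual implementation.

```python
# Hypothetical two-tier report pipeline: local overrides apply instantly,
# but the shared dataset only accepts reports that clear both an automated
# trust check and human validation, as the FAQ describes.
from dataclasses import dataclass, field

@dataclass
class Report:
    username: str
    label: str             # e.g. "transphobic" or "trans-friendly"
    reporter_trust: float  # 0.0-1.0, invented reputation score
    human_validated: bool = False

@dataclass
class Dataset:
    local_overrides: dict = field(default_factory=dict)  # visible to the reporter only
    public_labels: dict = field(default_factory=dict)    # shipped to everyone

    def submit(self, report: Report, trust_threshold: float = 0.8):
        # Immediately visible to the person who filed it...
        self.local_overrides[report.username] = report.label
        # ...but published only if it clears automated AND human review.
        if report.reporter_trust >= trust_threshold and report.human_validated:
            self.public_labels[report.username] = report.label

ds = Dataset()
ds.submit(Report("some-user", "transphobic", reporter_trust=0.9))
print("some-user" in ds.public_labels)   # False: not human-validated yet
ds.submit(Report("some-user", "transphobic", reporter_trust=0.9, human_validated=True))
print("some-user" in ds.public_labels)   # True
```

The point of the design is that a malicious flood of fake reports pollutes only the attackers' own local views until a human signs off, which is the mechanism the quoted FAQ answer is relying on.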