this post was submitted on 07 Feb 2024
81 points (90.1% liked)

Technology
Inside the shifting plan at Elon Musk’s X to build a new team and police a platform ‘so toxic it’s almost unrecognizable’::X's trust and safety center was planned for over a year and is significantly smaller than the initially envisioned 500-person team.

all 11 comments
[–] [email protected] 42 points 8 months ago* (last edited 8 months ago) (1 children)

With Bluesky and Mastodon around, I really don't see people coming back to the platform. The network effect works both positively and negatively: if fewer people are using the platform, it will accelerate the move to other platforms.

[–] [email protected] 27 points 8 months ago (1 children)

It seems like a chunk of people (and large institutions) are trying to 'wait it out' and ignore all the awful replies and content.

Once people switch, they're unlikely to go back. The problem is getting them to put in the time to switch.

I'd call it a win when we start seeing "____ said on Mastodon" in articles

[–] [email protected] 23 points 8 months ago* (last edited 8 months ago) (2 children)

mirror: https://archive.vn/ghN0z

According to the former X insider, the company has experimented with AI moderation. And Musk’s latest push into artificial intelligence technology through X.AI, a one-year old startup that’s developed its own large language model, could provide a valuable resource for the team of human moderators.

An AI system “can tell you in about roughly three seconds for each of those tweets, whether they’re in policy or out of policy, and by the way, they’re at the accuracy levels about 98% whereas with human moderators, no company has better accuracy level than like 65%,” the source said. “You kind of want to see at the same time in parallel what you can do with AI versus just humans and so I think they’re gonna see what that right balance is.”

I don't believe that for one second. I'd believe it if those numbers were reversed; anyone who uses LLMs regularly knows how easy it is to circumvent them.
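For context, "LLM moderation" as described in the article is essentially a prompt-and-classify loop. Here's a minimal sketch of that shape; the policy text and function names are illustrative, and the model call is a stub (a real system would call an actual LLM here). The stub also hints at the commenter's point: a crude check like this is trivially evaded by rephrasing.

```python
# Hedged sketch of an LLM-style policy classifier. Nothing here is from
# X's actual system; POLICY, build_prompt, and fake_llm are illustrative.

POLICY = "No harassment, no spam, no illegal content."

def build_prompt(post: str) -> str:
    # Assemble the instruction the model would see.
    return (
        f"Policy: {POLICY}\n"
        f"Post: {post}\n"
        "Answer with exactly IN_POLICY or OUT_OF_POLICY."
    )

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call: a crude keyword check, demo only.
    return "OUT_OF_POLICY" if "buy now" in prompt.lower() else "IN_POLICY"

def classify(post: str) -> str:
    return fake_llm(build_prompt(post))

print(classify("Great article!"))       # IN_POLICY
print(classify("BUY NOW cheap pills"))  # OUT_OF_POLICY
```

Note that "Purchase immediately, discount pills" would sail through the stub unflagged, which is the circumvention problem in miniature.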

EDIT: Added the paragraph right before the one I originally posted alone, that specifies that their "AI system" is an LLM.

[–] [email protected] 10 points 8 months ago

AI is whatever tech companies say it is. They aren't saying it for the people, like you, who know it's horseshit. They're saying it for the investors, politicians, and ignorant folks. They're essentially saying that "AI" (cue jazz hands and glitter) can fix all of their problems, so don't stop investing in us.

[–] [email protected] 3 points 8 months ago (1 children)

That's not necessarily about an LLM. Recently I did an AI analysis of which customers would become VIPs based on their interactions; the accuracy was coincidentally also 98%. Nowadays people equate AI with LLMs, but there's much more to AI than LLMs.
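The kind of classical, non-LLM pipeline described above looks roughly like this sketch (not the commenter's actual setup): a tabular classifier trained on interaction features, with accuracy measured on a held-out set. The features and data here are synthetic assumptions purely for illustration.

```python
# Hedged sketch: a classical ML classifier predicting a binary
# "will become VIP" label from synthetic interaction features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for interaction features (e.g. visits, spend, tickets).
X = rng.normal(size=(n, 3))
# Label depends on the features plus a little noise, so it is learnable.
y = ((X[:, 0] + 2 * X[:, 1] - X[:, 2]
      + rng.normal(scale=0.3, size=n)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

On clean, well-separated data like this, high-90s accuracy is unremarkable, which is the commenter's point: a 98% figure says more about the dataset than about the technique.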

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago)

I'm going off of the article, where they state that it's an LLM. It's the paragraph right before the one I originally posted:

According to the former X insider, the company has experimented with AI moderation. And Musk’s latest push into artificial intelligence technology through X.AI, a one-year old startup that’s developed its own large language model, could provide a valuable resource for the team of human moderators.

EDIT: I will include it in the original comment for clarity, for those who don't read the article.

[–] [email protected] 10 points 8 months ago

I trust Linda to run Twitter the same way I trust Ashley to run Vought: responsibly and without deference to a creep with a god complex.

[–] [email protected] 5 points 8 months ago

Weird how firing the trust and safety team on day one could come back to bite him.

[–] [email protected] 4 points 8 months ago

Just shut it down until it has been cleaned up enough to be a decent participant of the net again.

[–] [email protected] 3 points 8 months ago

This is the best summary I could come up with:


In July, Yaccarino announced to staff that three leaders would oversee various aspects of trust and safety, such as law enforcement operations and threat disruptions, Reuters reported.

According to LinkedIn, a dozen recruits have joined X as “trust and safety agents” in Austin over the last month—and most appeared to have moved from Accenture, a firm that provides content moderation contractors to internet companies.

“100 people in Austin would be one tiny node in what needs to be a global content moderation network,” former Twitter trust and safety council member Anne Collier told Fortune.

And Musk’s latest push into artificial intelligence technology through X.AI, a one-year old startup that’s developed its own large language model, could provide a valuable resource for the team of human moderators.

“The site’s rules as published online seem to be a pretextual smokescreen to mask its owner ultimately calling the shots in whatever way he sees it,” the source familiar with X moderation added.

Julie Inman Grant, a former Twitter trust and safety council member who is now suing the company for lack of transparency over CSAM, is more blunt in her assessment: “You cannot just put your finger back in the dike to stem a tsunami of child sexual expose—or a flood of deepfake porn proliferating the platform,” she said.


The original article contains 1,777 words, the summary contains 217 words. Saved 88%. I'm a bot and I'm open source!