this post was submitted on 20 Dec 2023
112 points (90.6% liked)

[email protected] 3 points 11 months ago

This is the best summary I could come up with:


The researchers began combing through the LAION dataset in September 2023 to investigate how much, if any, child sexual abuse material (CSAM) was present.

Suspected entries were sent to CSAM detection platforms such as PhotoDNA and verified by the Canadian Centre for Child Protection.
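For context on how hash-based detection platforms work in principle: an image's digest is compared against a database of hashes of known material. The sketch below is a deliberate simplification using a cryptographic hash; systems like PhotoDNA actually use perceptual hashes that survive resizing and re-encoding. The blocklist entry and function name here are hypothetical.

```python
import hashlib

# Hypothetical blocklist of known-bad image hashes (hex digests).
# Real platforms such as PhotoDNA use perceptual hashing; a
# cryptographic hash like SHA-256 only matches byte-identical files.
KNOWN_HASHES = {
    # SHA-256 of the placeholder bytes b"test", standing in for a real entry
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_image(data: bytes) -> bool:
    """Return True if the image bytes hash to a known blocklist entry."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(flag_image(b"test"))        # True: digest is in the blocklist
print(flag_image(b"other data"))  # False: unknown digest
```

In practice the hash comparison happens server-side at the detection platform, and matches are escalated for human verification, as the report describes with the Canadian Centre for Child Protection.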

Stanford’s researchers said the presence of CSAM does not necessarily influence the output of models trained on the dataset.

“The presence of repeated identical instances of CSAM is also problematic, particularly due to its reinforcement of images of specific victims,” the report said.

The researchers acknowledged it would be difficult to fully remove the problematic content, especially from the AI models trained on it.

US attorneys general have called on Congress to set up a committee to investigate the impact of AI on child exploitation and prohibit the creation of AI-generated CSAM.


The original article contains 339 words, the summary contains 134 words. Saved 60%. I'm a bot and I'm open source!