this post was submitted on 20 Dec 2023
112 points (90.6% liked)

[–] [email protected] 0 points 11 months ago (2 children)

How could this even happen by accident?

[–] [email protected] 11 points 11 months ago (1 children)

Because it has five billion images?

The potentially at-issue images make up less than one percent of one percent of one percent of the total.
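
For scale, here's a back-of-the-envelope check using the publicly reported ballpark figures (roughly 3,200 suspect images out of LAION-5B's roughly 5.85 billion image-text pairs; treat the exact counts as the researchers' numbers, not mine):

```python
suspect = 3_200                      # suspected matches reported (approximate)
total = 5_850_000_000                # image-text pairs in LAION-5B (approximate)
print(suspect / total)               # ~5.5e-07
print(0.01 ** 3)                     # 1e-06, i.e. one percent of one percent of one percent
print(suspect / total < 0.01 ** 3)   # True
```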

[–] [email protected] 3 points 10 months ago (1 children)

Don't they need to label the data?

[–] [email protected] 4 points 10 months ago

No, it's not manually labeled. The text gets paired with the image based on things like alt text or the caption next to it in a social media post; those pairs were then run through a different AI (CLIP), which rated how well the text description matched the image, and the ones with a low score were filtered out.
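
For a concrete picture, here's a minimal sketch of that kind of CLIP-based filtering using the Hugging Face transformers CLIP model. This is an illustration, not LAION's actual pipeline: the 0.28 cutoff is only in the ballpark of the threshold LAION described for English pairs, and `candidate_pairs` is a made-up placeholder for the scraped (image, caption) pairs.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, caption: str) -> float:
    """Cosine similarity between CLIP's embeddings of the image and its caption."""
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

def keep_well_captioned(candidate_pairs, threshold=0.28):
    """Drop scraped (image, caption) pairs whose caption doesn't describe the image."""
    return [(img, cap) for img, cap in candidate_pairs if clip_score(img, cap) >= threshold]
```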

The point of the OP research is that they should add another step that checks against known-CSAM hash databases, rather than relying on social media curation to have kept illegal material out (which they should, even though it's a very, very small portion of the overall dataset).

But at no time was a human reviewing CSAM, labeling it, and including it in the data.
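
To make that extra screening step concrete, here's a minimal sketch of a hash-list check (hypothetical names throughout; real screening uses perceptual hashes such as PhotoDNA supplied by the organizations that maintain these lists, since a plain cryptographic hash only catches exact byte-for-byte copies):

```python
import hashlib

def md5_of_file(path: str) -> str:
    """Hex MD5 of a file's raw bytes, the key format used by some known-image hash lists."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def flag_known_bad(image_paths, known_bad_hashes):
    """Return paths whose hash appears in a known-bad list, so they can be removed
    before the dataset is published or used for training."""
    return [p for p in image_paths if md5_of_file(p) in known_bad_hashes]
```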

[–] [email protected] 7 points 11 months ago* (last edited 11 months ago) (1 children)

Removing these images from the open web has been a headache for webmasters and admins for years on sites that host user-uploaded images.

If the billions of images in the training data were automatically scraped from the internet, I don't find it surprising that some CSAM ended up in there.

[–] [email protected] 0 points 10 months ago (1 children)

Don't they need to label the data?

[–] [email protected] 1 points 10 months ago