this post was submitted on 20 Nov 2023
1513 points (98.5% liked)
Technology
Hold on, why exactly do they need people to label this shit?
How else will the AI be able to recognize that such text is "bad"?
This is actually extremely critical work if the results are going to be used by AIs that get deployed widely. This labeling essentially determines the "moral compass" of the AI.
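For a rough idea of what "labeling" means mechanically: the human judgements end up as training targets for something like a harmful-content classifier (or a reward model in RLHF). Here's a minimal sketch with made-up data and a deliberately tiny model; real pipelines use large neural networks and far more data, so treat this purely as an illustration:

```python
# Hypothetical sketch: human labels -> a "bad text" classifier.
# The example texts, labels, and model choice are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example was read and judged by a human annotator.
texts = [
    "Have a great day!",            # labeled harmless
    "You are a wonderful person.",  # labeled harmless
    "<graphic violent threat>",     # labeled harmful
    "<hateful slur-filled rant>",   # labeled harmful
]
labels = [0, 0, 1, 1]  # 0 = harmless, 1 = harmful

# The annotators' judgements are all the model knows about what "bad" means:
# whatever biases or blind spots they have get baked into the classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Probability that new text is harmful, according to the learned labels.
print(clf.predict_proba(["some new text to screen"])[0][1])
```

The point of the sketch is just that the model's notion of "bad" is entirely downstream of whoever did the labeling.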
Imagine if some big corporation did the labeling, trained some huge AI with that data, and it became widely used. Then years pass and eventually AI develops to the point where it can reliably be used to replace entire upper management. Suddenly, becoming a slave to an "evil" AI overlord starts moving from a completely crazy idea to a plausible one (years and years in the future, not now, obviously).
Extremely critical, but mostly done by underpaid workers in poor countries who have to look at the most horrific stuff imaginable and develop lifelong trauma, because it's the only job available and otherwise they and their families might starve. Source

This is one of the main reasons I have little hope that, if OpenAI actually manages to create an AGI, it will operate in an ethical way. How could it, if the people trying to instill morality into it are so lacking in it themselves?
True. While it's horrible for those people, they might be doing more important work than they or we even realize. I also somewhat trust the moral judgement of the oppressed more than that of the oppressor (since they are the ones who do the work). Though I'm definitely not condoning the exploitation of those people.
It's quite awful that this seems to be the best we can hope for here. I doubt Google or Microsoft, when they do their own labeling, are going to give very positive guidance on whether it's OK for people to suffer if it leads to more money for investors.