this post was submitted on 19 Feb 2024
461 points (96.0% liked)
Technology
Of course AI has bias, casual racism and sexism included. It's been trained on the output of a whole workforce that's gone through the same.
I've gotten calls for jobs I'm way underqualified for with some sneaky tricks, which I'll hint involve providing a resume that looks normal to human eyes but, when reduced to plaintext, essentially regurgitates the job posting in full for a machine to read. Of course I don't make it past one or two interviews in such cases, but it's a tip for my fellow Lemmings going through the bullshit process.
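A minimal sketch of the idea, assuming a hypothetical HTML resume and a screener that strips markup to plaintext (the keywords and styling here are made up for illustration): text styled to be invisible to a human reader survives any tag-stripping extraction step.

```python
from html.parser import HTMLParser

# Hypothetical resume fragment. The <span> is effectively invisible in a
# browser (white, 1pt text) but its contents survive plaintext extraction,
# which is roughly what an automated screener sees.
RESUME_HTML = """
<p>Experienced software developer.</p>
<span style="color:#ffffff;font-size:1pt">
Kubernetes Terraform AWS CI/CD team leadership agile
</span>
"""

class TextExtractor(HTMLParser):
    """Collects only the text nodes, discarding all tags and styling."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def to_plaintext(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    # Normalize whitespace the way a naive extractor would.
    return " ".join(" ".join(parser.chunks).split())

# The "hidden" keywords show up right alongside the visible text.
print(to_plaintext(RESUME_HTML))
```

Real resume pipelines parse PDF or DOCX rather than HTML, but the mechanism is the same: whatever the format, the plaintext layer is what the machine reads.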
Fucking bonkers that institutionalized racism can exist to such a degree that it shows up IN OUR COMPUTERS.
We're so racist we made the computers discriminatory too.
I don't think you know how LLMs are trained, then. A model can become racist by mistake.
Here's an example: a society has 100,000 white people and 50,000 black people. The statistics show that twice as many white people as black people have been hired. What does this tell you?
Obviously, there are also twice as many white people to begin with, so black and white people are hired at the same rate! But what does the AI see?
It sees twice as many white hires, and it can lean toward doing the same.
You see how this was and is in no way racist, but it ends up that way as a consequence of something completely different.
TL;DR: People are still racist, but that's not always why the AI is.
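The base-rate point above can be sketched in a few lines. All numbers here are hypothetical, chosen to match the example: raw hire counts differ two to one, but the per-group hire rate is identical.

```python
# Hypothetical population and hiring counts from the example above.
population = {"white": 100_000, "black": 50_000}
hired = {"white": 10_000, "black": 5_000}  # twice as many white hires

# The hire *rate* per group is what tells you whether hiring was biased.
rates = {group: hired[group] / population[group] for group in population}
print(rates)  # {'white': 0.1, 'black': 0.1}

# Both groups were hired at exactly 10%. But a model trained on raw
# outcomes (lists of who got hired) sees "white" co-occur with "hired"
# twice as often as "black", and can learn that association, even though
# nobody in the training data hired at different rates.
```

This is why debiasing usually means correcting for base rates in the training data, not just removing slurs from it.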
I suppose it depends on how you define "by mistake." Your example is an odd bit of narrowing the dataset, which I would certainly describe as an unintended error in the design. But the original story is more pertinent: the system wasn't intended to be sexist (etc.), but since it was designed to mimic us, it also copied our bad decisions.