this post was submitted on 29 Nov 2023

After internal chaos earlier this month, OpenAI replaced the women on its board with men. As it plans to add more seats, Timnit Gebru, Sasha Luccioni, and other AI luminaries tell WIRED why they wouldn't join.

[email protected] 6 points 11 months ago

It also conflates "AI safety" (Toner's thing) with "AI ethics" (Gebru's thing). They're two different things, jammed together here because both are women (FFS).

"AI safety" is the sci-fi, paperclip-maximisation stuff: fantasies about the potential future of AI.

"AI ethics" is about the real, actual harms done in the here and now: embedding existing biases into decision-making and consuming enormous amounts of resources.

Meredith Whittaker sums up the difference nicely in this interview:

So in 2020-21 when Timnit Gebru and Margaret Mitchell from Google’s AI ethics unit were ousted after warning about the inequalities perpetuated by AI, did you feel, “Oh, here we go again”?

Timnit and her team were doing work that was showing the environmental and social harm potential of these large language models – which are the fuel of the AI hype at this moment. What you saw there was a very clear case of how much Google would tolerate in terms of people critiquing these systems. It didn’t matter that the issues that she and her co-authors pointed out were extraordinarily valid and real. It was that Google was like: “Hey, we don’t want to metabolise this right now.”

Is it interesting to you how their warnings were received compared with the fears of existential risk expressed by ex-Google “godfather of AI” Geoffrey Hinton recently?

If you were to heed Timnit’s warnings you would have to significantly change the business and the structure of these companies. If you heed Geoff’s warnings, you sit around a table at Davos and feel scared.

Geoff’s warnings are much more convenient, because they project everything into the far future so they leave the status quo untouched. And if the status quo is untouched you’re going to see these companies and their systems further entrench their dominance such that it becomes impossible to regulate. This is not an inconvenient narrative at all.