this post was submitted on 11 Apr 2024
46 points (92.6% liked)

Technology

34690 readers

This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads; otherwise, such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post nazi, ped*, or gore content

4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 5 years ago

There is one class of AI risk that is generally knowable in advance: risks stemming from misalignment between a company's economic incentive to profit from its proprietary AI model in a particular way and society's interest in how that model should be monetised and deployed. The surest way to ignore such misalignment is to focus exclusively on technical questions about AI model capabilities, divorced from the socio-economic environment in which these models will operate and be designed for profit.

top 2 comments
[–] [email protected] 6 points 6 months ago

How about the risk of dumbass managers overestimating AI's ability to save on labor costs and firing so many workers that the business can't function?

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

I tried to follow this but my brain is fried (and it's only lunch time!)

One thing it got me thinking about (and I was surprised by the conclusion I came to): it's often brought up that these models are proprietary black boxes, but we all know the training data was whatever public material they could scrape from the internet, be it Reddit or whatever.

Such a dataset didn't exist for them to use in a licensed manner; they were innovating. So I'm naively wondering why it's a problem that they took the risk of using the data, and presumably paid tremendously low wages to people in third-world countries to prune and train it.

They still had to build the thing and pay to run it, train it, and mature it. The risk was all theirs, so why is it a problem that they're now hoping to profit from that?

  • Maybe they should sell their training data...

We're upset at the greedy little pig boy spez for licensing it to them, but we did chuck all our thoughts up on the bathroom wall for all to see. It's not like there was anything private about it.

I do like the approach of changing the incentives, but that will need regulation to force the capitalists to behave, so I guess we'll just have to wait for the EU to form a plan.