this post was submitted on 03 Sep 2024
1569 points (97.9% liked)

[–] [email protected] 26 points 2 weeks ago (7 children)

For what it's worth, this headline seems to be editorialized; OpenAI didn't say anything about money or profitability in their arguments.

https://committees.parliament.uk/writtenevidence/126981/pdf/

On point 4 they are specifically responding to an inquiry about the feasibility of training models only on public-domain data, and they are basically saying that an LLM trained on only that dataset would be shit. But their argument isn't "you should allow it because we couldn't make money otherwise"; their actual argument is more "training LLMs on copyrighted material doesn't violate current copyright law", and further that if the law were changed to forbid it, that would cripple all LLMs.

On the one hand I think most would agree the current copyright laws are a bit OP anyway - more stuff should probably enter the public domain much earlier, for instance - but most of the world probably also doesn't think training LLMs should be completely free from copyright restrictions without the resulting models being open source, etc. Either way, this article's title was absolute shit.

[–] [email protected] 5 points 2 weeks ago (5 children)

Yeah. I can't see why people are defending copyrighted material so much here, especially considering that the majority of it is owned by large corporations. Fuck them. At least open-source models trained on it would do us more good than large corps hoarding art.

[–] [email protected] 10 points 2 weeks ago (1 children)

Most aren't pro-copyright, they're just anti-LLM. AI has a problem with being too disruptive.

In a perfect world everyone would have universal basic income and would be excited about the amount of work that AI could potentially eliminate... but in our world, it rightfully scares a lot of people with the prospect of losing their livelihoods, and worse, as it gets better.

Copyright seems like one of the few tools with any real potential to hinder LLMs, because it pits big business against an up-and-coming technology.

[–] [email protected] 7 points 2 weeks ago (1 children)

If AI is really that disruptive (and I believe it will be) then shouldn’t we bend over backwards to make it happen? Because otherwise it’s our geopolitical rivals who will be in control of it.

[–] [email protected] 4 points 2 weeks ago

Yes, in a certain sense Pandora's box has already been opened. That's the reason for things like the chip export restrictions on China. It's safe to assume that even if copyright law prohibits private-company LLMs, governments will have to make some exceptions in the name of defense or key industries, even if that work stays behind closed doors. Or they'll have to roll out some form of UBI / worker protections. There are a lot of very tricky and important decisions coming up.

But for now at least there seems to be some evidence that the current approach to LLMs is plateauing, and that we may need exponentially more training data for smaller and smaller performance increases. So unless there are some major breakthroughs, it could just settle out as a useful tool that doesn't completely shock every sector of the economy.
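
To put a rough shape on that "smaller and smaller performance increases" point, here's a toy Python sketch of a power-law scaling curve, which is roughly the shape the published LLM scaling-law work describes. The exponent and constants below are made up purely for illustration, not anyone's real numbers; the only takeaway is that each doubling of training data buys a smaller absolute gain.

```python
# Illustrative only: a toy power-law scaling curve (loss ~ data_size ** -alpha).
# All constants are hypothetical; the point is the diminishing returns per doubling.

alpha = 0.1          # assumed scaling exponent (made up)
irreducible = 1.7    # assumed loss floor (made up)

def loss(tokens: float) -> float:
    """Toy training loss as a function of training tokens."""
    return irreducible + 10.0 * tokens ** -alpha

prev = None
for tokens in [1e9, 2e9, 4e9, 8e9, 16e9, 32e9]:
    current = loss(tokens)
    gain = "" if prev is None else f"  (improvement: {prev - current:.4f})"
    print(f"{tokens:>8.0e} tokens -> loss {current:.4f}{gain}")
    prev = current
```

Running it, the printed "improvement" column shrinks with every doubling, which is the plateau people are pointing at.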
