Aww poor shit company and their poor money problems.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; to ask if your bot can be added, please contact us.
- Check for duplicates before posting; duplicates may be removed.
Approved Bots
Sorry not sorry. Find another company model that doesn't need to rob people and other companies to make money. Also: people like this should face grim consequences for breaking the law. But nothing will happen.
The internet has been primarily derivative content for a long time, as much as some haven't wanted to admit it. It's true. These fancy algorithms just take it to another level entirely.
Original content had already become a rarity as monetization ramped up. And then this generation of AI algorithms arrived.
For several years prior to LLMs becoming a thing, the internet was basically just regurgitating data from API calls, or scraping someone else's content and re-presenting it in your own way.
Ok... Is that supposed to be a good reason?
Unregulated areas lead to these types of business practices, where people will squeeze every last drop out of these opportunities. The cost of these activities will be passed on to the taxpayers.
I maintain my insistence that you owe me a business model!
"I lose money when I pay for Netflix."
If they win, we can just train a CNN on a single 4K HDR movie until it's extremely overfitted, and then it's legal to redistribute.
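The joke above rests on a real phenomenon: train on the same example long enough and the model stops generalizing and simply memorizes it. A minimal sketch of that idea (not an actual CNN; the `frame` values are a made-up stand-in for movie data):

```python
# Hypothetical sketch: overfit a trivial "model" of one weight per value
# until it reproduces its single training example verbatim.
frame = [0.1, 0.5, 0.9, 0.3, 0.7, 0.2, 0.8, 0.4]  # stand-in for the "work"
weights = [0.0] * len(frame)

for _ in range(2000):                      # repeat the SAME example: overfitting
    for i, target in enumerate(frame):
        error = weights[i] - target        # gradient of squared error per weight
        weights[i] -= 0.1 * error          # gradient-descent step

# After enough passes, the model's output IS its training data.
memorized = all(abs(w - t) < 1e-6 for w, t in zip(weights, frame))
print(memorized)  # True
```

A real CNN overfitted this way would likewise function as a lossy copy of the movie, which is exactly the commenter's point about where the legal line would end up.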
Then go out of business.
Literally, "fuck you go die" situation.
As written, the headline is pretty bad, but it seems their argument is that they should be able to train on publicly available copyrighted information, like blog posts and social media, and not on private copyrighted information like movies or books.
You can certainly argue that "downloading public copyrighted information for the purposes of model training" should be treated differently from "downloading public copyrighted information for the intended use of the copyright holder", but it feels disingenuous to put this comment itself, to which someone holds a copyright, into the same category as something not shared publicly, like a paid article or a book.
Personally, I think it's a lot like search engines. If you make something public, someone can analyze it, link to it, or take other derivative actions, but they can't copy it and share the copy with others.
I feel we need a term for "copyright bros".
The more important point is that social media companies can claim to OWN all the content needed to train AI. Same for image sites. That means they get to own the AI models. That means the models will never be free. Which means they control the "means of generation". That means that forever and ever and ever most human labour will be worth nothing while we can't even legally use this power. Double fucked.
YOU, the user/product, will not gain anything from this copyright strong-arming.
And to the argument itself: Just because AI is better at learning from existing works (faster, more complete, better memory) doesn't mean it's fundamentally different from humans learning from artwork. By that logic, almost EVERY artist arguing for this is stealing too, since they learned from and were inspired by existing works.
But I guess the worst possible outcome is inevitable now.
If he wins this, I guess everyone should just make their Jellyfin servers public.
Because if rich tech bros get to opt out of our copyright system, I don't see why the hell normal people have to abide by it.
Perhaps they should go back to what they were before the greed machine was spun up.
well fuck you Sam Altman