OK, so pay for it.
Pretty simple really.
Or let's use this opportunity to make copyright much less draconian.
Why not both?
I don't understand why people are defending AI companies sucking up all human knowledge by saying "well, yeah, copyrights are too long anyway".
Even if we went back to the pre-1976 term of 28 years, renewable once for a total of 56 years, there's still a ton of recent works that AI models are using without any compensation to their creators.
I think it's because people are taking this "intelligence" metaphor a bit too far and think that if we restrict how AI uses copyrighted works, that would restrict how humans use them too. But AI isn't human; it's just a glorified search engine. At least all a standard search engine does is return a link to the actual content. These AI models chew up the content and spit out something based on it. It simply makes sense that this new process should be licensed separately, and I don't care if it makes some AI companies go bankrupt. Maybe they can work adequate payment for content into their business model going forward.
It shouldn't be cheap to absorb and regurgitate the works of humans the world over in an effort to replace those humans and subsequently enrich a handful of Silicon Valley people.
Like, I don't care what you think about copyright law and how corporations abuse it: AI itself is corporate abuse.
And unlike copyright, which does serve its intended purpose of helping small-time creators as much as it helps Disney, the true benefits of AI are overwhelmingly for corporations and investors. If our draconian copyright system is the best tool we have to combat that, good. It's absolutely the lesser of the two evils.
I'm no fan of the current copyright law - the Statute of Anne was much better - but let's not kid ourselves that some of the richest companies in the world have any desire whatsoever to change it.
If it ends up being OK for a company like OpenAI to commit copyright infringement to train their AI models, it should be OK for John/Jane Doe to pirate software for private use.
But that would never happen. Almost like the whole of copyright has been perverted into a scam.
It's almost like we used to have a thing where copyrighted works would eventually end up (the public domain), but they kept extending the dates because money.
This is where they have the leverage to push for actual copyright reform, but they won't. Far more profitable to keep the system broken for everyone but have an exemption for AI megacorps.
I was literally about to come in here and say it would be an interesting tangential conversation to talk about how FUCKED copyright laws are, and how relevant to the discussion it would be.
More upvotes for you!
I guess the lesson here is to pirate everything under the sun, and as long as you establish a company and train a bot, everything is a-ok. I wish we'd known this back when everyone was getting dinged for torrenting The Hurt Locker.
Remember when the RIAA got caught with pirated mp3s and nothing happened?
What a stupid timeline.
Wow! You're telling me that onerous and crony copyright laws stifle innovation and creativity? Thanks for solving the mystery, guys, we never knew that!
if it's impossible for you to have something without breaking the law, you have to do without it
if it's impossible for the aristocrat class to have something without breaking the law, we change or ignore the law
Cool, don't do it then
Finally capitalism will notice how many times it has shot itself in the foot with its ridiculous, greedy infinite copyright scheme.
As a musician, I can tell you that the people not involved in making my music make all the money from it nowadays instead of me anyway. Burn it all down.
I'm dumbfounded that any Lemmy user supports OpenAI in this.
We're mostly refugees from Reddit, right?
Reddit invited us to make stuff and share it with our peers, and that was great. Some posts were just links to the content's real home: Youtube, a random Wordpress blog, a Github project, or whatever. The post text, the comments, and the replies only lived on Reddit. That wasn't a huge problem, because that's the part that was specific to Reddit. And besides, there were plenty of third-party apps to interact with those bits of content however you wanted to.
But as Reddit started to dominate Google search results, it displaced results that might have linked to the "real home" of that content. And Reddit realized a tremendous opportunity: They now had a chokehold on not just user comments and text posts, but anything that people dare to promote online.
At the same time, Reddit slowly moved from a place where something might get posted by the author of the original thing to a place where you'll only see the post if it came from a high-karma user or bot. Mutated or distorted copies of the original, reformatted to cut through the noise and gain the favor of the algorithm. Re-posts of re-posts, with no reference back to the original, divorced of whatever context or commentary the original creator may have provided. No way for the audience to respond to the author in any meaningful way and start a dialogue.
This is a miniature preview of the future brought to you by LLM vendors. A monetized portal to a dead internet. A one-way street. An incestuous ouroboros of re-posts of re-posts. Automated remixes of automated remixes.
--
There are genuine problems with copyright law. Don't get me wrong. Perhaps the most glaring problem is the fact that many prominent creators don't even own the copyright to the stuff they make. It was invented to protect creators, but in practice this "protection" gets assigned to a publisher immediately after the protected work comes into being.
And then that copyright -- the very same thing that was intended to protect creators -- is used as a weapon against the creator and against their audience. Publishers insert a copyright chokepoint in between the two, and they squeeze as hard as they desire, wringing every drop of profit out of it, keeping creators and audiences far away from each other. Creators can't speak out of turn. Fans can't remix their favorite content and share it back to the community.
This is a dysfunctional system. Audiences are denied the ability to access information or participate in culture if they can't pay for admission. Creators are underpaid, and their creative ambitions are redirected to what's popular. We end up with an auto-tuned culture -- insular, uncritical, and predictable. Creativity reduced to a product.
But.
If the problem is that copyright law has severed the connection between creator and audience in order to set up a toll booth along the way, then we won't solve it by giving OpenAI a free pass to do the exact same thing at massive scale.
It's not "impossible". It's expensive and will take years to produce material under an encompassing license in the quantity needed to make the model "large". Their argument is basically "but we can have it quickly if you allow legal shortcuts."
Maybe you shouldn't have done it then.
I can't make a Jellyfin server full of content without copyrighted material either, but the key difference here is I'm not then trying to sell that to investors.
This situation seems analogous to when air travel started to take off (pun intended) and existing legal notions of property rights had to be adjusted. IIRC, a farmer sued over planes flying over his land, and the court rejected the old doctrine that property rights extend indefinitely upward, because ruling otherwise would have killed the airline industry.
If OpenAI is right (I think they are), one of two things needs to happen.
For number 1: good luck, for all the reasons we all know. Capitalism must continue to operate.
For number 2: good luck, because those in power mostly got there off the backs of those before them (see Disney, Apple, Microsoft, etc.).
Anyway, it'll be fun to watch this play out.
There's a third solution you're overlooking.
3: OpenAI (or other) wins a judgment that AI content is not inherently a violation of copyright regardless of materials it is trained upon.
Cool! Then don't!
hijacking this comment
OpenAI was IMHO well within its rights to use copyrighted materials when it was just doing research. They were* doing research on how far large language models can be pushed and where the ceiling for that is. It's genuinely good research, and if copyrighted works are used just for research and what gets published is the findings of the experiments, that's perfectly okay in my book - and, I think, in the law as well. In this case, the LLM is an intermediate step, and the published research papers are the "product".
The unacceptable turning point is when they took all the intermediate results of that research and flipped them into a product. That's not the same, and most or all of us here can agree - this isn't okay, and it's probably illegal.
* disclaimer: I'm half-remembering things I've heard a long time ago, so even if I phrase things definitively I might be wrong
It feels to me like every other post on Lemmy is talking about how copyright is bad and should be changed, or how piracy is caused by fragmentation and difficulty accessing content (streaming sites). Then whenever this topic comes up, everyone completely flips. But in my mind all this would do is fragment the AI market much like streaming services (suddenly you have 10 different models with different licenses) and make it harder for non-megacorps without infinite money to fund their own LLMs (of good quality).
Like, seriously, can't we just stay consistent and keep saying copyright is bad, even in this case? It's not really an AI problem that jobs are affected; it's a capitalism problem. Throw in some good social safety nets and tax these big AI companies and we wouldn't even have to worry about artists' well-being.
I think looking at copyright in a vacuum is unhelpful because it's only one part of the problem. IMO, the reason people are okay with piracy of name-brand media but are not okay with OpenAI using human-created artwork comes from the same logic as not liking companies and capitalism in general. People don't like the fact that AI is extracting value from individual artists to make the rich even richer while not giving anything in return to the individual artists, in the same way we object to massive and extremely profitable media companies paying their artists peanuts. It's also extremely hypocritical that the government, and by extension "copyright", seems to care much more that OpenAI is using name-brand media than it cares about OpenAI scraping the internet for independent artists' work.
Something else to consider is that AI is also undermining copyleft licenses. We saw this with GitHub Copilot, a 100% proprietary product that was trained on all of GitHub's user-generated code, including GPL and other copyleft-licensed code. The art equivalent would be CC-BY-SA licenses, where derivatives also have to be Creative Commons.
If the copyright people had their way we wouldn't be able to write a single word without paying them. This whole thing is clearly a fucking money grab. It is not struggling artists being wiped out, it is big corporations suing a well funded startup.
But our current copyright model is so robust and fair! They'll only have to wait 70 years after the author dies (or 95 years for corporate works), which is a completely normal period.
If you want to control your creations, you are completely free to NOT publish them. Nowhere is it stated that, to be valuable or beautiful, a work has to be shared on the world podium.
We could have a very restrictive copyright for works that aren't transmitted/published globally, and another for works whose copyright owner DID choose to broadcast them globally. They'd get a couple of years to cash in, and then after, I dunno, 5 years, we can all use the work as we see fit. If you use mass media to broadcast creative works but then get mad when the public transforms or remixes your work, you are part of the problem.
Current copyright is just a tool for folks with power to keep that power. It's what a boomer would come up with while driving their tractor/SUV and chanting to themselves: I have earned this.
Let's wait until everyone is laid off and it's 'impossible' to get by without mass looting then, shall we?
Piracy by another name. Copyrighted materials are being used for profit by companies that have no intention of compensating the copyright holder.
If a business relies on breaking the law as the foundation of its business model, it is not a business but an organized crime syndicate. A Mafia.
I have the perfect solution. Shorten the copyright duration.
A ton of people need to read some basic background on how copyright, trademark, and patents protect people. Having none of those things would be horrible for modern society: it would wipe out millions of jobs and medical advancements and put control into the hands of the companies that can steal and strong-arm the best. If you want to live in a world run by Mafia-style big business, then sure.
"Impossible"? They just need to ask for permission from each source. It's not like they don't already know who the sources are, since the AIs are issuing HTTP(S) requests to fetch them.