I’m not anti-AI, but I wish the people who are would describe what they’re upset about a bit more eloquently and decipherably. The environmental impact I completely agree with. Making every Google search run a half-cooked beta LLM isn’t the best use of the world’s resources. But every time someone gets on their soapbox in the comments, it’s like they don’t even know the first thing about the math behind it. Just figure out what you’re mad about before you start an argument. It comes across as childish to me.
It feels like we're being delivered the sort of stuff we'd consider flim-flam if a human did it, but lapping it up because the machine did it.
"Sure, boss, let me write this code (wrong) or outline this article (in a way that loses key meaning)!" If we hired a human who acted like that, we'd have them on an improvement plan in days and sacked in weeks.
More regulation, supervised development, and laws requiring training data to be consensually sourced.
I want all of the CEOs and executives that are forcing shitty AI into everything to get pancreatic cancer and die painfully in a short period of time.
Then I want all AI that is offered commercially or in commercial products to be required to verify their training data, and to be severely punished for misusing private and personal data. Copyright violations need to be punished severely, and using copyrighted works for AI training counts.
AI needs to be limited to optional products trained with properly sourced data if it is going to be used commercially. Individual implementations and use for science is perfectly fine as long as the source data is either in the public domain or from an ethically collected data set.
If we're talking realm of pure fantasy: destroy it.
I want you to understand this is not my sentiment toward AI as a whole. I understand why the idea is appealing, how it could be useful, and in some ways it may seem inevitable.
But a lot of sci-fi doesn't really address the run up to AI, in fact a lot of it just kind of assumes there'll be an awakening one day. What we have right now is an unholy, squawking abomination that has been marketed to nefarious ends and never should have been trusted as far as it has. Think real hard about how corporations are pushing the development and not academia.
Put it out of its misery.
How do you "destroy it"? I mean, you can download an open source model to your computer right now in like five minutes. It's not Skynet, you can't just physically blow it up.
OP asked what people wanted to happen, and even listed "destroy gen AI" as an option. I get that it's not realistically feasible, but it's certainly within the realm of options provided for the discussion. No need to police their pie-in-the-sky dream. I'm sure they realize it's not realistic.
I’d like for it to be forgotten, because it’s not AI.
It's AI in so far as any ML is AI.
I think the AI that helps us find/diagnose/treat diseases is great, and the model should be open to all in the medical field (open to everyone I feel would be easily abused by scammers and cause a lot of unnecessary harm - essentially if you can't validate what it finds you shouldn't be using it).
I'm not a fan of these next-gen IRC chat bots that have companies hammering sites all over the web to siphon up data they shouldn't be allowed to. And then pushing these bots into EVERYTHING! And like I saw a few mention, if their bots have been trained on unauthorized data sets, they should be forced to open source their models for the good of the people (since that is the BS reason OpenAI has been bending and breaking the rules).
Disable all AI being on by default. Offer me a way to opt into having AI, but don't shove it down my throat by default. I don't want Google AI listening in on my calls without having the option to disable it. I am an attorney, and many of my calls are privileged. Having a third party listen in could cause that privilege to be lost.
I want AI that is stupid. I live in a capitalist plutocracy that is replacing workers with AI as fast and hard as possible, without having UBI. I live in the United States, which doesn't even have universal health insurance. So, UBI is fucked. This sets up an environment where a lot of people will be unemployable through no fault of their own because of AI. Thus, without UBI, we're back to starvation and Hoovervilles. But, fuck us. They got theirs.
i would use it to take a shit if they let me
I'm not a fan of AI because I think the premise of analyzing and absorbing work without consent from creators is, at its core, bullshit.
I also think that AI is another step toward more efficient government spying.
Since AI learns from human content without consent, I think the government should figure out how to socialize the profits. (Probably will never happen.)
They should also regulate how data is stored, and ensure videos are clearly labeled if made with AI.
They also have to be careful to protect victims from revenge porn and other harmful generated content, and make sure people are held accountable.
Honestly, at this point I'd settle for just "AI cannot be bundled with anything else."
Neither my cell phone nor TV nor thermostat should ever have a built-in LLM "feature" that sends data to an unknown black box on somebody else's server.
(I'm all down for killing with fire and debt any model built on stolen inputs, too. OpenAI should be put in a hole so deep that they're neighbors with Napster.)
I think it's important to figure out what you mean by AI.
I'm thinking a majority of people here are talking about LLMs, BUT there are other AIs that have been quietly worked on that are finally making huge strides.
AI that can produce songs (Suno) and replicate voices. AI that can reproduce a face from one picture (there's a couple of GitHub repos out there). When it comes to the above, we are dealing with copyright-infringement AI, specifically designed and trained on other people's work. If we really do have laws coming into place that will deregulate AI, then I say we go all in. Open source everything (or as much as possible), make it so it's trained on all company-specific info, and let anyone run it. I have a feeling we can't put the genie back in the bottle.
If we have pie-in-the-sky solutions, I would like a new iteration of the web. One that specifically makes it difficult or outright impossible to pull into AI. Something like onion routing, where only real nodes/people are accepted when ingesting the data.
It would be amazing if chat and text generation suddenly disappeared, but that's not going to happen.
It would be cool to make it illegal not to mark AI-generated images or text, and to not have them forced on us.
Lately, I just wish it didn't lie or make stuff up. And after you draw attention to false information, it often doubles down, or apologises and just repeats the BS.
If it doesn't know something, it should just admit it.
I want lawmakers to require proof that an AI is adhering to all laws, putting the burden of proof on the AI makers and users. And to require that all of an AI's actions can be analyzed in court cases.
This would hopefully lead to the development of better AIs that are more transparent, and that are able to adhere to laws at all, because the current ones lack this ability.
My biggest issue with AI is that I think it's going to allow a massive wealth transfer from laborers to capital owners.
I think AI will allow many jobs to become easier and more productive, and even eliminate some jobs. I don't think this is a bad thing - that's what technology is. It should be a good thing, in fact, because it will increase the overall productivity of society. The problem is generally when you have a situation where new technology increases worker productivity, most of the benefits of that go to capital owners rather than said workers, even when their work contributed to the technological improvements either directly or indirectly.
What's worse, in the case of AI specifically, its functionality relies on it being trained on enormous amounts of content that was not produced by the owners of the AI. AI companies are, in a sense, harvesting society's collective knowledge for free to sell it back to us.
IMO AI development should continue, but be owned collectively and developed in a way that genuinely benefits society. Not sure exactly what that would look like. Maybe a sort of light universal basic income where all citizens own stock in publicly run companies that provide AI and receive dividends. Or profits are used for social services. Or maybe it provides AI services for free but is publicly run and fulfills prosocial goals. But I definitely don't think it's something that should be primarily driven by private, for-profit companies.