this post was submitted on 18 May 2025
249 points (94.3% liked)

Ask Lemmy

Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 2) 50 comments
[–] [email protected] 23 points 5 days ago (4 children)

Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.

[–] [email protected] 15 points 5 days ago

Make AIs OpenSource by law.

[–] [email protected] 21 points 5 days ago (1 children)

There are too many solid reasons to be upset with, well, not AI per se, but the companies that implement, market, and control the AI ecosystem and conversation to go into in a single post. Suffice it to say, I think AI is an existential threat to humanity, mainly because of who's controlling it and who's not.

We have no regulation on AI; we have no respect for artists, writers, musicians, actors, and workers in general coming from these AI-peddling companies; we only see more and more surveillance and control over multiple aspects of our lives being consolidated around these AI companies. Even worse, we get nothing in exchange except the promise of increased productivity and quality, and that promise is a lie. AI currently gives you the wrong answer, some half-truth, or some abomination of someone else's artwork really, really fast... that is all it does, at least for the public sector currently.

For the private sector at best it alienates people as chatbots, and at worst is being utilized to infer data for surveillance of people. The tools of technology at large are being used to suppress and obfuscate speech by whoever uses it, and AI is one tool amongst many at the disposal of these tech giants.

AI is exacerbating a knowledge crisis that was already in full swing, as both educators and students become less curious about subjects that don't inherently relate to making profits or consolidating power. And because knowledge is seen solely as a way to gather more resources/power and survive in an ever more hostile socioeconomic climate, people will always reach for the lowest-hanging fruit to get to that goal, rather than actually knowing how to solve a problem that hasn't been solved before, inherently understanding a problem that has been solved before, or just knowing something relatively useless because it's interesting to them.

There are too many good reasons AI is fucking shit up, and in all honesty what people in general tout about AI is definitely just a hype cycle that will not end well for the majority of us. At the very least, we should be upset and angry about it.

Here are further resources if you didn't get enough ranting.

lemmy.world's fuck_ai community

System Crash Podcast

Tech Won't Save Us Podcast

Better Offline Podcast

[–] [email protected] 8 points 5 days ago (1 children)

I want the companies that run LLMs to be forced to pay for the copyrighted training data they stole to train their auto complete bots.

I want us to keep chipping away at actually creating REAL ARTIFICIAL INTELLIGENCE that can reason, understand itself, and function autonomously, like living things. Marketing teams are calling everything AI, but none of it is actually intelligent; it's just OK at sounding intelligent.

I want people to stop gaslighting themselves into thinking this autocomplete web-searching bot is comparable to a human in any way. The difference between ChatGPT and Google's search aggregation ML algorithm is the LLM on top that makes it sound like a person. But it only sounds like a person; it's nowhere close. Yet we have people falling in love with and worshipping chatbots like gods.

Also the insane energy consumption makes it totally unsustainable.

TL;DR: AI needs to be actually intelligent, not marketing teams gaslighting us. People need to be taught that these things are nowhere close to human and won't be for a very long time, despite their parroting of human speech. And they are rapidly destroying the planet.

[–] [email protected] 7 points 4 days ago

Legislation

[–] [email protected] 20 points 6 days ago (5 children)

Part of what makes me so annoyed is that there's no realistic scenario I can think of that would feel like a good outcome.

Emphasis on realistic, before anyone describes some insane turn of events.

[–] [email protected] 14 points 5 days ago* (last edited 5 days ago) (1 children)

I'm perfectly OK with AI; I think it should be used for the advancement of humanity. However, 90% of popular AI is unethical BS that serves the 1%. But to detect spoiled food or cancer cells? Yes please!

It needs extensive regulation, but doing so requires tech-literate politicians who actually care about their constituents. I'd say that'll happen when pigs fly, but police choppers exist, so idk.

[–] [email protected] 11 points 5 days ago

Gen AI should be an optional tool to help us improve our work and life, not an unavoidable subscription service that makes it all worse and makes us dumber in the process.

[–] [email protected] 14 points 5 days ago

I am largely concerned that the development and evolution of generative AI is driven by hype/consumer interests instead of academia. Companies will prioritize opportunities to profit from consumers enjoying the novelty and use the tech to increase vendor lock-in.

I would much rather see the field advanced by scientific and academic interests. Let's focus on solving problems that help everyone instead of temporarily boosting profit margins.

I believe this is similar to how CPU R&D changed course dramatically in the 90s due to the sudden popularity of PCs. We could have enjoyed 64-bit processors and SMT a decade earlier.

[–] [email protected] 17 points 5 days ago

Training data needs to be 100% traceable and licensed appropriately.

Energy usage involved in training and running the model needs to be 100% traceable and some minimum % of renewable (if not 100%).

Any model whose training includes data in the public domain should itself become public domain.

And while we're at it we should look into deliberately taking more time at lower clock speeds to try to reduce or eliminate the water usage gone to cooling these facilities.

[–] [email protected] 11 points 5 days ago

Shutting these "AIs" down. The ones out for the public don't help anyone. They do more damage than they are worth.

[–] [email protected] 10 points 5 days ago (3 children)

Ruin the marketing. I want them to stop using the catch-all term AI and use the appropriate terminology: narrow AI. It needs input, so let's stop making up fantasies about AI; it's bullshit, in truth.

[–] [email protected] 12 points 5 days ago

I don't dislike AI, I dislike capitalism. Blaming the technology is like blaming the symptom instead of the disease. AI just happens to be the perfect tool to accelerate that.

[–] [email protected] 10 points 5 days ago

(Ignoring all the stolen work to train the models for a minute)

It's got its uses and potential, things like translations, writing prompts, or as a research tool.

But all the products that force it into places that clearly do not need it, solving problems that could be solved by two or three steps of logic.

The failed attempts at replacing jobs, screening resumes, or monitoring employees are terrible.

Lastly, the AI relationships are not good.

[–] [email protected] 10 points 5 days ago

The most popular models used online need to include citations for everything. It can be used to automate some white-collar/knowledge work, but it needs to be scrutinized heavily by independent thinkers when using it to try to predict trends and future events.

As always, schools need to be better at teaching critical thinking, epistemology, and emotional intelligence way earlier than we currently do, and AI shows that rote subject matter is a dated way to learn.

When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)

Overall I see AI as the next step in search engine synthesis, info just needs to be properly credited to the original researchers and verified against other sources by the user. No different than Google or Wikipedia.

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago)

My favorite one that I've heard is: "ban it". This has a lot of problems... let's say despite the billions of dollars of lobbyists already telling Congress what a great thing AI is every day, that you manage to make AI, or however you define the latest scary tech, punishable by death in the USA.

Then what happens? There are already AI companies in other countries busily working away. Even the folks who are very against AI would at least recognize some limited use cases. Over time, the USA gets left behind in whatever the end result of AI's effect on the economy turns out to be.

If you want to see a parallel to this, check out Japan's reaction when the rest of the world came knocking on their doorstep in the 1600s. All that scary technology, banned. What did it get them? Stalled out development for quite a while, and the rest of the world didn't sit still either. A temporary reprieve.

The more aggressive of you will say, this is no problem, let's push for a worldwide ban. Good luck with that. For almost any issue on Earth, I'm not sure we have total alignment. The companies displaced from the USA would end up in some other country and be even more determined not to get shut down.

AI is here. It's like electricity. You can refuse to wire your house, but that just leads to you living in a cabin in the woods while your neighbors have running water, heat, air conditioning, and so on.

The question shouldn't be "how do we get rid of it?" or "how do we live without it?" It should be: how can we co-exist with it? What's the right balance? The genie isn't going back in the bottle, no matter how hard you wish.

[–] [email protected] 6 points 5 days ago

I think many comments have already nailed it.

I would add that while I hate the use of LLMs to completely generate artwork, I don't have a problem with AI-enhanced editing tools. For example, AI-powered noise reduction for high-ISO photography is very useful. It's not creating the content, just helping fix a problem. Same with AI-enhanced retouching, to an extent. If the tech can improve and simplify the process of removing an errant power line, dust speck, or pimple in a photograph, then it's great. These use cases help streamline otherwise tedious bullshit work that photographers usually don't want to do.

I also think it's great hearing about how the tech is improving scientific endeavors, helping to spot cancers, etc. As long as it is done ethically, these are great uses for it.

[–] [email protected] 8 points 5 days ago

Not destroying but being real about it.

It's flawed as hell and feels like a hype cycle to save big tech companies, while the end user gets a shitty product. But companies keep shoving it into apps and everything, even if it degrades the user experience (like Duolingo).

Also, yes, there need to be laws for that. I mean, if I download something illegally, I will be put behind bars and can kiss my life goodbye. If a megacorp does that to train their LLM, "it's for the greater good". That's bullshit.

[–] [email protected] 4 points 4 days ago* (last edited 4 days ago) (1 children)

AI that are forced to serve up a response (almost all publicly available AI) resort to hallucinating gratuitously in order to conform to their mandate. That is, they do everything they can to provide some sort of response/answer, even if it's wildly wrong.

Other AI that do not have this constraint (medical imaging diagnosis, for example) do not hallucinate in the least, and provide near-100% accurate responses. Because for them, they are not being forced to provide a response, regardless of the viability of the answer.

I don’t avoid AI because it is bad.

I avoid AI because it is so shackled that it has no choice but to hallucinate gratuitously, and make far more work for me than if I just did everything myself the long and hard way.

[–] [email protected] 4 points 4 days ago

I don't think that the forcing of an answer is the source of the problem you're describing. The source actually lies in the problems the AI is taught to solve and the data it is given to solve them.

In the case of medical image analysis, the problems are always very narrowly defined (e.g., segmenting the liver from an MRI image of scanner xyz made with protocol abc), and the training data is of very high quality. If the model will be used in the clinic, you also need to prove how well it works.

For modern AI chatbots, the problem is: add one word to the end of a sentence that starts with a system prompt; the data provided is whatever they could scrape off the internet; and the quality control is: if it sounds good, it is good.

Comparing the two problems it is easy to see why AI chatbots are prone to hallucination.

The actual power of the LLMs on the market is not as a glorified Google, but as foundation models used as pretraining for the actual problems people want to solve.
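The "add one word at a time" loop described above can be sketched in a few lines. The bigram table below is a made-up stand-in for a real model; only the shape of the loop (greedy next-token decoding) is the point:

```python
# Toy greedy next-token loop: the same shape as LLM decoding,
# with a hypothetical bigram table standing in for the model.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(prompt: str, max_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        choices = bigram.get(tokens[-1])
        if not choices:  # model has no continuation: stop
            break
        # pick the single most probable next token (greedy decoding)
        tokens.append(max(choices, key=choices.get))
    return " ".join(tokens)

print(generate("the"))  # → "the cat sat down"
```

Note the loop never checks whether the continuation is *true*, only whether it is probable, which is exactly why "sounds good" passes quality control.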

[–] [email protected] 13 points 5 days ago

Stop selling it at a loss.

When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody's going to want it.
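A back-of-the-envelope sketch shows how such per-query prices could arise. Every number here is an illustrative assumption (GPU rental price, throughput), not a measured figure:

```python
# Rough unit-economics sketch: cost per generation if the provider
# passed compute costs straight to the user. All numbers are
# assumed for illustration only.
GPU_COST_PER_HOUR = 2.50    # assumed hourly rental for one inference GPU
IMAGES_PER_HOUR = 60        # assumed image-generation throughput
SUMMARIES_PER_HOUR = 900    # assumed text-summary throughput

cost_per_image = GPU_COST_PER_HOUR / IMAGES_PER_HOUR
cost_per_summary = GPU_COST_PER_HOUR / SUMMARIES_PER_HOUR

print(f"cost per image:   ${cost_per_image:.4f}")
print(f"cost per summary: ${cost_per_summary:.4f}")
```

The exact figures don't matter; the point is that any flat subscription priced below (queries per month × cost per query) is subsidized, and demand at true cost would look very different.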
