this post was submitted on 18 May 2025
249 points (94.3% liked)


Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

[–] [email protected] 1 points 5 days ago (1 children)

most models I've seen now cite sources you can check when they're reporting factual stuff

Maybe online models can, but a local model has no internet access, so it can't. It will still happily generate a plausible-sounding response that includes a citation, but it may have made that citation up entirely. Hopefully people would double check that the source exists and actually says what the model claims, but we both know most won't. Citing a source is just a way to make the output look intelligent while it still generates bullshit.
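
To make "double check it" concrete, here's a minimal sketch of what verifying a citation even means, assuming the model handed back a URL and a direct quote (both are placeholders here, not real model output):

```python
import urllib.error
import urllib.request

def citation_supports_claim(url: str, quoted_text: str) -> bool:
    """Check that a model-cited URL resolves and contains the claimed text.

    A fabricated citation usually fails one of two ways: the link is
    dead, or the page exists but never says what the model claims.
    """
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            page = resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, ValueError):
        return False  # dead or malformed link: the citation is worthless
    # Naive substring match; a real check would strip HTML and normalize
    # whitespace, but it illustrates the point.
    return quoted_text.lower() in page.lower()

# Placeholder values, not output from any actual model:
print(citation_supports_claim("https://example.com/article", "water boils at 100 degrees"))
```

Even that crude check is more than most people will ever run, which is exactly the problem.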

Yeah, LLMs might be more likely to give bad info, but people are unreliable too: they're biased and flawed, often have an agenda, and are frequently, confidently wrong.

You're saying this like they're equal. People put thought into it; LLMs do not. Yes, con men exist, but not everyone is a con man. You can follow authors who are known to be accurate, and you can try to do the same with LLMs. The problem is consistency. A con man will always be a con man. With an LLM you have no way to know if it's bullshitting this time or not, so you should always assume it's bullshit. In which case, what's the point? Yet most people assume it's always honest, because that's what the marketing leads them to believe.

[–] [email protected] 0 points 4 days ago

And the people who don't know that you should check LLMs for hallucinations and errors (despite the press screaming exactly that for a year) are definitely self-hosting their own, right? I've done it; it's not hard, but it's certainly not trivial either, and most of these folks would just go 'lol what's a docker?' and stop there. So we're advocating guardrails for people in a use case they would never find themselves in.
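
For anyone curious what the non-trivial part looks like once a local setup is actually running: a minimal sketch, assuming an Ollama instance on its default local port with a model already pulled (the model name is just an example):

```python
import json
import urllib.request

# Ollama's local HTTP API listens on port 11434 by default. Once the
# weights are downloaded, answering involves no internet access at all.
def ask_local_model(prompt: str, model: str = "llama3") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("Cite a source for the boiling point of water."))
```

Note that nothing in there checks the answer; the guardrail still has to be the person reading it.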

You’re saying this like they’re equal.

Not as if they're equal, but as if they're both unreliable and should be checked against multiple sources, which is what I've been advocating for since the beginning of this conversation.

The problem is consistency. A con man will always be a con man. With an LLM you have no way to know if it’s bullshitting this time or not

But you don't know a con man is a con man until you've read his book, put some of his ideas into practice, and discovered that they're bullshit, same as with an LLM. See also: check against multiple sources.