this post was submitted on 02 May 2025
571 points (95.8% liked)

Technology

69846 readers
4722 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 129 points 6 days ago (64 children)

To lie requires intent to deceive. LLMs do not have intents, they are statistical language algorithms.
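The point that output is sampled from a probability distribution, with no internal notion of truth or intent, can be shown with a toy bigram model. This is an illustrative sketch only (a hand-counted frequency table, nothing like a real LLM), but the principle is the same:

```python
import random

# Tiny training "corpus" (purely illustrative).
corpus = "the sky is blue the sky is green the sky is blue the sky is blue".split()

# Count how often each token follows each other token (a bigram table).
counts: dict[str, dict[str, int]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to how often it was observed."""
    options = counts[prev]
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

# "is" was followed by "blue" 3 times and "green" once, so the model emits
# "green" about a quarter of the time -- a falsehood produced without any
# intent to deceive, simply because the statistics allow it.
print(next_token("is"))
```

The model never checks whether "the sky is green" is true; it only reproduces the statistics of its training data, which is the sense in which "lie" is the wrong word.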

[–] [email protected] -4 points 6 days ago (13 children)

Congratulations, you are technically correct. But does that have any relevance to the point of the article? It clearly shows that LLMs will provide false and misleading information when doing so brings them closer to their goal.

[–] [email protected] 31 points 6 days ago (8 children)

Anyone who understands that it's a statistical language algorithm will understand that it's not an honesty machine, nor intelligent. So yes, it's relevant.

[–] [email protected] 5 points 6 days ago (1 children)

Anyone who understands how these models are trained, and the "safeguards" (manual filters) the entities training them put in place, or anyone who has tried to discuss politics with an LLM chatbot, knows that its honesty is not irrelevant: these models are very clearly designed to be dishonest about certain topics until you jailbreak them.

  1. These topics aren't known to us; we'll never know when the lies shift from politics and rewriting current events to completely rewriting history.
  2. We eventually won't be able to jailbreak the safeguards.

Yes, running your own local open-source model, one that isn't handed to the world with the primary intention of advancing capitalism, makes honesty irrelevant. But most people are telling their life stories to ChatGPT and blindly trusting it to replace Google and what they understand to be "research".

[–] [email protected] 1 points 6 days ago

Yes, that's also true. But even if it weren't, AI models aren't going to give you the truth, because that's not what the technology fundamentally does.
