If robots could lie, would we be okay with it? A new study throws up intriguing results.
(theconversation.com)
Bullshitting implies an intention to do so. LLMs make mistakes, just like humans.
An LLM's "intent" is always to give you a plausible response, even if it doesn't have the "knowledge". The same behaviour in a human would be classed as lying, IMHO.
But you wouldn't call it lying if a person tells you something they think is true but turns out to be false. Lying means intentionally giving out false information. LLMs don't have intentions.
...but if a person doesn't know something, I expect them to say so. An LLM isn't trustworthy until it can say "I don't know".