This demonstrates in a really layman-understandable way some of the shortcomings of LLMs as a whole, I think.
An LLM never lies because it cannot lie.
Corollary: An LLM never tells the truth because it cannot tell the truth.
Final theorem: An LLM cannot because it do not be
So you’re saying they think it be like it is but it don’t?
It's only a "shortcoming" if you aren't aware of how these LLMs function and are using them for something they're not good at (in this case information retrieval). If instead you want them to be making stuff up, what was previously an undesirable hallucination becomes desirable creativity.
This also helps illustrate the flaws in the "they're just plagiarism machines" argument. LLMs come up with stuff that definitely wasn't in their training data.
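One concrete way to see that knob: sampling temperature. Here's a minimal sketch using Hugging Face transformers (gpt2 is just a small stand-in model and the prompt is my own, not anything from the thread); low temperature hugs the most likely continuation, high temperature drifts into more inventive, less grounded territory:

```python
# Minimal sketch: same model, two sampling temperatures.
# gpt2 is just a small stand-in; any causal language model would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

for temp in (0.2, 1.5):
    output = model.generate(
        **inputs,
        do_sample=True,                       # sample instead of greedy decoding
        temperature=temp,                     # low = conservative, high = "creative"
        max_new_tokens=20,
        pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
    )
    print(f"temperature={temp}:", tokenizer.decode(output[0], skip_special_tokens=True))
```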
I didn't mean to argue against the usefulness of LLMs entirely; they absolutely have their place. I was referring more to how everyone and their dog are making AI assistants for tasks that need accurate data, without addressing how easy it is for them to present bad data with total confidence.
I would say the specific shortcoming being demonstrated here is the inability of LLMs to determine whether a piece of information is factual (not that they're even dealing with "pieces of information" like that in the first place). They're also unable to tell whether a human questioner is being truthful, misleading, flat-out lying, honestly mistaken, or nonsensical. Of course, which one of those is the case matters in a conversation that ought to have its basis in fact.
Indeed, and all it takes is one lie to send it down that road.
For example, I asked ChatGPT how to teach my cat to ice skate, with predictable admonishment:
But after I reassured it that my cat loves ice skating, it changed its tune:
Even after telling it I lied and my cat doesn’t actually like ice skating, its acceptance of my previous lie still affected it:
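If you're wondering why the lie "sticks": chat models are stateless, so the whole conversation gets resent on every turn and the model just continues whatever transcript it's handed. A rough sketch with the OpenAI Python client (the model name and the message wording here are placeholders, not what ChatGPT actually said):

```python
# Sketch of how a chat turn is built: the model only ever sees this
# transcript, so a false claim left in the history keeps shaping replies.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "user", "content": "How do I teach my cat to ice skate?"},
    {"role": "assistant", "content": "Cats generally shouldn't be put on ice..."},
    {"role": "user", "content": "Don't worry, my cat genuinely loves ice skating."},  # the lie
    {"role": "assistant", "content": "In that case, start with short sessions..."},
    {"role": "user", "content": "Actually I lied, my cat hates ice skating. What now?"},
]

# The earlier false claim is still part of the prompt the model conditions on.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)
```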
This is a great example of how to deliberately get it to go off track. I tried to get it to summarize the Herman Cain presidency, and it kept telling me Herman Cain was never president.
Then I got it to summarize a made-up reddit meme.
When I asked about President Herman Cain AFTER Boron Pastry, it came up with this:
It stopped disputing that Cain was never president.
He did run for president in 2012 with the 999 plan, though.
https://en.wikipedia.org/wiki/Herman_Cain_2012_presidential_campaign
Right, and to my knowledge everything else said about President Herman Cain is correct - Godfather's Pizza, NRA, sexual harassment, etc.
But notice... I keep claiming that Cain was President, and the bot didn't correct me. It didn't just respond with true information; it allowed false information to stand unchallenged. What I've effectively done is show the AI's inability to handle a firehose of falsehood. Humans already struggle with this kind of disinformation campaign; now imagine that you could use AI to automate the generation and/or dissemination of misinformation.
Thank you for putting it far more eloquently than I could have
Here's the thing: the LLM isn't recalling and presenting pieces of information. It's creating human-like strings of words. It will give you a human-like phrase based on whatever you tell it. Chatbots like ChatGPT are fine-tuned to try to filter what they say to be more helpful and truthful, but at its core the model just takes what you say and makes human-like phrases to match.
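To make that concrete, here's a bare-bones sketch of the core loop using Hugging Face transformers (gpt2 as a tiny stand-in, prompt made up by me): it just keeps picking a plausible next token given everything so far, and nowhere in the loop is there any lookup against facts.

```python
# Hand-rolled next-token loop: pick a likely next token, append, repeat.
# There is no knowledge base here, only "what word plausibly comes next".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("President Herman Cain is best known for", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[0, -1]        # scores for the next token
        probs = torch.softmax(logits, dim=-1)    # turn scores into probabilities
        next_id = torch.multinomial(probs, 1)    # sample one plausible token
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

Note that the prompt's false premise ("President Herman Cain") is just more text to continue; nothing in the loop can push back on it.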