this post was submitted on 25 May 2024
775 points (97.1% liked)
Technology
you are viewing a single comment's thread
Hm. This is what I got.
I think about 90% of the screenshots we see of LLMs failing hilariously are doctored. Lemmy users really want to believe it's that bad, though.
Edit:
I've had lots of great experiences with ChatGPT, and I've also had it hallucinate things.
I saw someone post an image of a simplified riddle, where ChatGPT tried to solve it as if it were the full riddle, but it added extra restrictions and gave a confusing response. I tried it for myself and got an even better answer.
Prompt (no prior context except saying I have a riddle for it):
Response:
I wish I was witty enough to make this up.
I reproduced that one myself, so I believe it's real.
I looked up the whole riddle and can see how it got confused.
It happened on GPT-3.5 but not on GPT-4.
Interesting! What did 4 say?
Evidently I didn't save the conversation but I went ahead and entered the exact prompt above into GPT-4. It responded with:
Thanks for sharing!
Yesterday, someone posted a doctored one here to show that everyone eats these screenshots up, even when the fake uses a ridiculous font. People who want to believe are quite easy to fool.
Or did you miss the point that it was a joke?