this post was submitted on 29 Mar 2024
129 points (97.8% liked)
Technology
I can't remember where I read it, but someone said "LLMs provide three types of answers: so vague as to be useless, directly plagiarized from a source and reworded, or flat-out wrong but confidently stated as the truth." I'm probably butchering the quote, but that was the gist of it.
Hold on, let me have ChatGPT rephrase that for you.
This is just ChatGPT rephrasing the comment above me. Don't worry, though: when ChatGPT is wrong it sounds quite confident and even cites sources that don't exist but look quite convincing!