Even people hallucinate. Under your definition, intelligence doesn't exist.
Wow whoosh. The point is that "AI" isn't actually "intelligent" like a human and thus can't "hallucinate" like an intelligent human.
All of this anthropomorphic terminology is just misleading marketing bullshit.
Who said anything about human intelligence? AIs have a different kind of intelligence, an artificial kind. I'm tired of pretending they don't.
Ever heard of the Turing test? Ever since AIs could pass it, it stopped counting. Before that, playing Go was the mark of AI.
Any time an AI achieves a new thing, people move the goalposts. So I ask you: what does AI need to achieve to have intelligence?
The same thing actually passing a Turing test would require. You've obviously read the words "Turing test" somewhere and thought you understood what it meant, but no robot we've ever produced as a species has passed the Turing test. It EXPLICITLY requires that intelligence equal to (or indistinguishable from) HUMAN intelligence is shown. Without a human liar relaying its responses, no AI we'll produce for decades will pass the Turing test.
No large language model has intelligence. They're just complicated call-and-response mechanisms that guess what answer we want based on a weighted response system (we tell it directly, or tell another machine how to help it "weigh" words in a response). Obviously, with anything that requires massive amounts of input or nuance, like language, it'll only be right about what it was guided on, which is limited to the areas it was trained in.
We don't have any novel interactions with AI. They are regurgitation engines, bringing forward sentences that aren't theirs piecemeal. Given ten messages, I'm confident no major LLM would pass a Turing test.
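To make "weighted guessing" concrete, here's a toy sketch. The words and probabilities are completely made up, and a real LLM uses a neural network over a long context rather than a one-word lookup table, but the basic move is the same: sample the next word from a weighted distribution.

```python
import random

# Toy "weights": for each context word, a made-up distribution
# over possible next words. A real model computes these with a
# neural network over the whole context, not a lookup table.
weights = {
    "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
}

def guess_next(word):
    """Pick the next word by sampling from the weighted options."""
    options = weights.get(word)
    if options is None:
        return None  # nothing "trained" for this context
    words = list(options.keys())
    probs = list(options.values())
    return random.choices(words, weights=probs, k=1)[0]

print(guess_next("the"))  # "cat" ~50% of the time, "dog" ~30%, "answer" ~20%
```

There's no checking step anywhere in that loop: whatever comes out of the weighted draw is the answer.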
The Turing test is flawed: while it's supposed to test for intelligence, it really just tests for a convincing fake. Depending on how you set it up, I wouldn't be surprised if a modern LLM could pass it, at least some of the time. That doesn't mean they're intelligent (they aren't), but I don't think the Turing test is good justification.
For me, the only justification you need is that they predict one word (or even one letter!) at a time. ChatGPT doesn't plan a whole sentence out in advance; it works token by token. The input to each prediction is just everything so far, up to the last word. When it starts writing "As..." it has no concept of the fact that it's going to write "...an AI language model" until it gets through those words.
Frankly, given that fact, it's amazing that LLMs can be as powerful as they are. They don't check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token... An incredible piece of technology, despite its obvious flaws.
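A rough sketch of that token-by-token loop, where `predict_next_token` is a hypothetical stand-in for the actual model:

```python
def generate(prompt_tokens, predict_next_token, max_tokens=50):
    """Autoregressive generation: each prediction sees only the tokens so far."""
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        # The model's entire "view" is the sequence up to this point;
        # it has no plan for anything beyond the very next token.
        next_token = predict_next_token(tokens)
        if next_token == "<eos>":  # model signals it has finished
            break
        tokens.append(next_token)
    return tokens
```

Any checking, planning, or phrasing would all have to happen implicitly inside that single next-token call; there's no separate step for any of it.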
This is just conjecture, but I assume this is because the question of consciousness is not really falsifiable, so you just kind of have to draw an arbitrary line somewhere.
Like, maybe tech gets so good that we really can't tell the difference, and only god knows it isn't really alive. But then, how would we know not to give the machine legal rights?
For the record, ChatGPT does not pass the Turing test.
ChatGPT is not designed to fool us into thinking it's a human. It produces language with a specific tone & direct references to the fact it is a language model. I am confident that an LLM trained specifically to speak naturally could do it. It still wouldn't be intelligent, in my view.
Chatbots will pass the Turing test in a few years, maybe five. Would that be intelligence then?