this post was submitted on 11 Jan 2025
310 points (95.1% liked)

Technology


Computer pioneer Alan Turing's remarks in 1950 on the question, "Can machines think?" were misquoted, misinterpreted and morphed into the so-called "Turing Test". The modern version says if you can't tell the difference between communicating with a machine and a human, the machine is intelligent. What Turing actually said was that by the year 2000 people would be using words like "thinking" and "intelligent" to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say alrighty, let's put this new software to the Turing Test - by Grabthar's Hammer, it passed! We've achieved Artificial Intelligence!

[–] [email protected] 13 points 4 days ago* (last edited 4 days ago) (2 children)

The Turing Test codified the very real fact that, until a few years ago, computer AI systems couldn't hold a conversation (outside of special-purpose conversational tricks like ELIZA and Cleverbot). Deep neural networks and the attention mechanism changed the situation; it's not a completely solved problem, but the improvement is undeniably dramatic. It's now possible to use chatbots as rudimentary research assistants, for example.
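(For anyone curious what "the attention mechanism" actually refers to, here is a minimal sketch of standard scaled dot-product attention, with made-up toy shapes; it's an illustration of the general technique, not anything specific to a particular chatbot.)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query position produces a weighted average of the value vectors,
    weighted by how well its query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (n_queries, n_keys) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # mix of values per query

# Hypothetical toy example: 4 tokens, 8-dimensional vectors.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)    # (4, 8)
```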

It's just something we have to take in stride, like computers becoming capable of playing Chess or Go. There is no need to get hung up on the word "intelligence".

[–] [email protected] 6 points 4 days ago* (last edited 4 days ago) (1 children)

Not sure how you define getting "hung up", but there are tons of poorly informed people who believe/fear that AI is about to take over/conquer/destroy/whatever the world, because they think LLMs are as smart as humans, or just a few tweaks away from it. It's less about the word "intelligence" than about the jump from there to collateral issues, like thinking LLMs are "persons" who deserve rights, that using them without their consent is slavery, and other nonsense. Manipulative people take advantage of this kind of ignorance. Knowledge is good; modern superstition is bad.

[–] [email protected] 2 points 4 days ago

They are going to destroy the world, not because they're superintelligent, but because LLMs will be wired into lethal weapons and critical systems, since that's easier than training a human. And since they're very unreliable (prompt injection, deliberate lying, etc.), that will lead to deaths.

[–] [email protected] 2 points 4 days ago

It's just something we have to take in stride, like computers becoming capable of playing Chess or Go. There is no need to get hung up on the word "intelligence".

Nicely said.