this post was submitted on 11 Jan 2025
310 points (95.1% liked)

Technology

Computer pioneer Alan Turing's remarks in 1950 on the question, "Can machines think?" were misquoted, misinterpreted and morphed into the so-called "Turing Test". The modern version says if you can't tell the difference between communicating with a machine and a human, the machine is intelligent. What Turing actually said was that by the year 2000 people would be using words like "thinking" and "intelligent" to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say alrighty, let's put this new software to the Turing Test - by Grabthar's Hammer, it passed! We've achieved Artificial Intelligence!

[–] [email protected] 5 points 4 days ago (1 children)

even with infinite context memory

Interestingly, infinite context memory is functionally identical to learning.

It seems wildly different, but it's the same as if you had already learned absolutely everything there is to know. There is absolutely nothing you could do or ask that the infinite context memory doesn't already have a stored response for, ready to go.

[–] [email protected] 1 points 3 days ago (1 children)

Interestingly, infinite context memory is functionally identical to learning.

Except it's still incapable of responding to anything not within that context memory. Today's models have zero problem-solving skills; or to put it another way, they're incapable of producing novel solutions to new problems.

[–] [email protected] 2 points 3 days ago (1 children)

Well yeah, because they're not infinite. ;)

[–] [email protected] 1 points 3 days ago (1 children)

Hence the reason it's not a real intelligence (yet). Even a goldfish can do problem solving without first having to be equipped with godlike levels of prior knowledge about the entire universe.

[–] [email protected] 1 points 3 days ago* (last edited 3 days ago)

Current LLMs aren't that stupid. They do have limited learning: you give one a question, tell it where it's wrong, and it will remember and adjust all future replies with the new information you give it. You certainly can't ask a goldfish to write a C program that blinks an LED on a microcontroller. I have used one to get working programs for questions that were absolutely nowhere on the internet, so it didn't just copy/paste something it found.