this post was submitted on 11 Jan 2025
310 points (95.1% liked)


Computer pioneer Alan Turing's remarks in 1950 on the question, "Can machines think?" were misquoted, misinterpreted and morphed into the so-called "Turing Test". The modern version says if you can't tell the difference between communicating with a machine and a human, the machine is intelligent. What Turing actually said was that by the year 2000 people would be using words like "thinking" and "intelligent" to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say alrighty, let's put this new software to the Turing Test - by Grabthar's Hammer, it passed! We've achieved Artificial Intelligence!

[–] [email protected] 79 points 4 days ago* (last edited 4 days ago) (5 children)

I think the Chinese room argument, published in 1980, gives a pretty convincing reason why the Turing test doesn't demonstrate intelligence.

The thought experiment starts by placing a computer that can converse perfectly in Chinese in one room and a human who knows only English in another, with a door separating them. Chinese characters are written on a piece of paper and slipped underneath the door, and the computer replies fluently, slipping its answer back underneath the door. The human is then given English instructions that replicate the function of the computer program for conversing in Chinese. The human follows the instructions, and the two rooms can communicate in Chinese perfectly, but the human still does not actually understand the characters; they are merely following instructions to converse. Searle states that the computer and the human are doing identical tasks: following instructions without truly understanding or "thinking".

Searle asserts that there is no essential difference between the roles of the computer and the human in the experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to understand. However, the human would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.
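
To make the "just following instructions" part concrete, here is a minimal sketch of the room as a plain lookup table; the rulebook entries and the `operator` function are invented purely for illustration:

```python
# A caricature of the Chinese room: the operator matches incoming characters
# against a rulebook and copies out the prescribed reply. Nothing in the
# procedure requires knowing what any of the symbols mean.
# (The rulebook entries below are invented purely for illustration.)

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def operator(slip_under_door: str) -> str:
    """Follow the instructions: look up the incoming characters, copy the reply back."""
    return RULEBOOK.get(slip_under_door, "请再说一遍。")  # "Please say that again."

print(operator("你好吗？"))  # a fluent reply, produced with zero understanding
```

Searle's claim is that scaling this rulebook up until it covers a whole language changes nothing about the lack of understanding.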

[–] [email protected] 18 points 4 days ago

I am sceptical of this thought experiment as it seems to imply that what goes on within the human brain is not computable. For reference: every single physical effect that we have thus far discovered can be computed/simulated on a Turing machine.

The argument itself is also riddled with vagueness and handwaving: it gives no definition of understanding, yet presumes understanding is something with a definite location. It may also well be that taking the time to run the program inevitably produces an understanding of Chinese, even after the first word returned. Remember: executing these instructions could take billions of years for the presumably immortal human in the room, and we expect the human to be so thorough that they execute each of the trillions of instructions without error.

Indeed, the Turing test is insufficient to test for intelligence, but the statement that the Chinese room argument tries to support is much, much stronger than that. It essentially argues that computers can't be intelligent at all.

[–] [email protected] 10 points 4 days ago (1 children)

That just shows a fundamental misunderstanding of levels. Neither the computer nor the human understands Chinese. The programs they are running, however, do.

[–] [email protected] 22 points 4 days ago (3 children)

The programs don't really understand Chinese either. They are just filled with an understanding that was provided to them up front. By that I mean they do not derive that understanding from something they perceive where there was no understanding before; they don't draw conclusions, don't work out words from context... the way an intelligent being would learn a language.

[–] [email protected] 1 points 3 days ago

Nothing in the thought experiment says that the program doesn't behave that way. If, to an outside observer, the program really seems like it understands language, you would assume it learned language that way.

[–] [email protected] 2 points 4 days ago

Programs clearly do understand words from context. Try giving one a translation task: it can properly translate "tear" as either 泪水 (tears from crying) or 撕破 (to rend) based on context.

[–] [email protected] 1 points 4 days ago

Others have provided better answers than mine, pointing out that the Chinese room argument only makes sense if your premise is that a “program” is qualitatively different from what goes on in a human brain/mind.

[–] [email protected] 9 points 4 days ago* (last edited 4 days ago) (2 children)

The problem with the experiment is that there exist sets of instructions that cannot be completed without understanding, because each iteration depends conditionally on the state built up so far.

In that case, only agents that actually understand the state as expressed in Chinese would be able to continue successfully.

So it's a great experiment for the solipsism of understanding as it relates to following pure functional operations, but not for functions with state-changing side effects, where future results depend on understanding the current state.

There's a pretty significant body of evidence by now that transformers can in fact 'understand' in this sense: interpretability research around neural network features in SAE work, linear representations of world models starting with the Othello-GPT work, and the Skill-Mix work, where GPT-4 and later models combine different skills at a level of complexity that is beyond reasonable statistical chance without some understanding of them.

If the models were just Markov chains (where nothing beyond the current state influences the next step), the Chinese room would be very applicable. But pretty much by definition, transformer self-attention violates the Markov property.
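
To make that contrast concrete, here is a toy sketch (illustrative only, not any real model's code): a bigram Markov sampler whose next step depends only on the current token, next to a single attention step in which every output position mixes in the whole context. The `BIGRAMS` table and the identity-projection attention are simplifications assumed for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Markov chain: the next token depends only on the current token ---
BIGRAMS = {"the": ["cat", "dog"], "cat": ["sat"], "dog": ["ran"], "sat": ["down"], "ran": ["away"]}

def markov_step(current: str) -> str:
    # History before `current` is irrelevant: this is the Markov property.
    return str(rng.choice(BIGRAMS.get(current, ["."])))

# --- Self-attention: every output position reads the entire context ---
def self_attention(X: np.ndarray) -> np.ndarray:
    # Single head with identity projections, just to show the data flow.
    scores = X @ X.T / np.sqrt(X.shape[1])                                # token-to-token similarities
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X                                                    # each output mixes ALL positions

context = rng.normal(size=(5, 8))   # 5 tokens, 8-dim embeddings
print(markov_step("the"))           # uses only "the", nothing earlier
print(self_attention(context)[-1])  # the last position depends on all 5 tokens
```

Whether that mixing amounts to 'understanding' is the contested part; the sketch only shows why the "prior state doesn't matter" framing doesn't describe self-attention.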

TL;DR: It's a very obsolete thought experiment whose continued misapplication flies in the face of empirical evidence at least since around early 2023.

[–] [email protected] 13 points 4 days ago* (last edited 4 days ago)

It was invalid when he originally proposed it, because it assumes a unique mystical ability for the atoms that make up our brains. For Searle, the atoms in our brains have a quality that cannot be duplicated by other atoms, simply because those other atoms aren't part of what he recognizes as a human being.

That's why he claims the machine translation system is incapable of understanding: because accepting that it does would assume such understanding is possible.

It's self-contradictory. He won't consider it possible because it hasn't been shown to be possible.

[–] [email protected] 4 points 4 days ago* (last edited 4 days ago) (1 children)

The Chinese room experiment only demonstrates how the Turing test isn’t valid. It’s got nothing to do with LLMs.

I would be curious about that significant body of research though, if you’ve got a link to some papers.

[–] [email protected] 10 points 4 days ago* (last edited 4 days ago) (3 children)

No, it doesn't render the Turing Test invalid, because the premise of the test is not to prove that machines are intelligent but to point out that, if you can't tell the difference, you must either assume they are or risk becoming a monster.

[–] [email protected] 5 points 4 days ago

or risk becoming a monster.

Remind me. What became of Turing, a man who saved untold British lives during WW2?

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago) (1 children)

Okay, but while in casual conversation I probably couldn't spot a really good LLM on a thread like this, on the back end that LLM is completely incapable of learning or changing in any meaningful way. It's not quite a Chinese room, as previously mentioned, but it's still a fixed model that can't learn or understand context; even with infinite context memory, it could still only interact with that data within the confines of the original model.

E.g. I can train the model to understand a spoon and a fork, but it will never come up with the idea of a spork unless I retrain it to include the concept of sporks or directly tell it. Even after I tell it what a spork is, it can't infer the properties of a spork from those of a fork or a spoon without additional leading prompts from me.

[–] [email protected] 5 points 4 days ago (1 children)

even with infinite context memory

Interestingly, infinite context memory is functionally identical to learning.

It seems wildly different, but it's the same as if you had already learned absolutely everything there is to know. There is absolutely nothing you could do or ask that the infinite context memory doesn't already have a stored response for, ready to go.
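
One way to picture that equivalence is with a deliberately silly sketch: from the outside, an agent that updates itself and an agent whose (impossibly) complete table already maps every conversation history to a reply behave identically. The class names and table entries here are made up for illustration:

```python
# From the outside, an agent that updates itself and an agent with an
# (impossibly) complete history->reply table behave identically.
# The table entries below are invented for illustration.

class LearningAgent:
    def __init__(self):
        self.facts = {}
    def reply(self, history: tuple) -> str:
        if history and history[-1].startswith("remember:"):
            self.facts[history[-1].removeprefix("remember:")] = True
            return "noted"
        return "yes" if history and history[-1] in self.facts else "unknown"

class InfiniteTableAgent:
    # Pretend this table enumerates every possible history up front.
    TABLE = {
        ("remember:sporks exist",): "noted",
        ("remember:sporks exist", "sporks exist"): "yes",
    }
    def reply(self, history: tuple) -> str:
        return self.TABLE.get(history, "unknown")

for agent in (LearningAgent(), InfiniteTableAgent()):
    h = ("remember:sporks exist",)
    print(agent.reply(h), agent.reply(h + ("sporks exist",)))  # same outputs either way
```

Behavioral equivalence is the whole point: nothing an outside observer can ask distinguishes the two.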

[–] [email protected] 1 points 3 days ago (1 children)

Interestingly, infinite context memory is functionally identical to learning.

Except it's still incapable of responding to anything not within that context memory. Today's models have zero problem-solving skills; or, to put it another way, they're incapable of producing novel solutions to new problems.

[–] [email protected] 2 points 3 days ago (1 children)

Well yeah, because they're not infinite. ;)

[–] [email protected] 1 points 3 days ago (1 children)

Hence why it's not a real intelligence (yet); even a goldfish can do problem solving without first having to be equipped with godlike levels of prior knowledge about the entire universe.

[–] [email protected] 1 points 3 days ago* (last edited 3 days ago)

Current LLMs aren't that stupid. They do have limited learning: you give one a question, tell it where it's wrong, and it will remember and adjust all future replies with the new information you give it. You certainly can't ask a goldfish to write a C program that blinks an LED on a microcontroller. I have used an LLM to get working programs for questions that were absolutely nowhere on the internet, so it didn't just copy and paste something it found.

[–] [email protected] 0 points 4 days ago* (last edited 4 days ago)

The premise of the test is to determine if machines can think. The opening line of Turing's paper is:

I propose to consider the question, 'Can machines think?'

I believe the Chinese room argument demonstrates that the Turing test is not valid for determining whether a machine has intelligence. The human in the Chinese room experiment is not thinking to generate their replies; they're just following instructions, just like the computer. There is no comprehension of what's being said.

[–] [email protected] 6 points 4 days ago (2 children)

Searle argued from his personal truth that a mystic soul is responsible for sapience.

His argument against a computer system having consciousness is this:

" In order for this reply to be remotely plausible, one must take it for granted that consciousness can be the product of an information processing "system", and does not require anything resembling the actual biology of the brain."

-Searle

https://en.m.wikipedia.org/wiki/Chinese_room

[–] [email protected] 1 points 3 days ago

Isn't the brain just an information processing system?

[–] [email protected] 2 points 4 days ago

My personal truth is that anyone who believes in mysticism isn't sapient either.

[–] [email protected] 3 points 4 days ago (1 children)

Brilliant thought experiment. I'd never heard of it before. It does seem to describe what's happening - if only there were a way to turn it into a meme so modern audiences could understand it.

[–] [email protected] 1 points 4 days ago

I mean, it was featured in Zero Escape VLR, which is a pretty popular visual novel escape room game, and it was used to help explain a major character.