this post was submitted on 16 Mar 2024
200 points (90.3% liked)

Ask Lemmy


A Fediverse community for open-ended, thought-provoking questions


LLMs are solving the MCAT, the bar exam, the SAT, etc. like they're nothing. At this point their performance is superhuman. However, they'll often trip on super-simple common-sense questions and struggle with creative thinking.

Is this literally proof that standardized tests are not a good measure of intelligence?

[–] [email protected] 4 points 7 months ago (1 children)

Intelligence cannot be measured. It's a reification fallacy. Intelligence is colloquial and subjective.

If I told you that I had an instrument that could objectively measure beauty, you'd see the problem right away.

[–] [email protected] 17 points 7 months ago* (last edited 7 months ago) (3 children)

But intelligence is the capacity to solve problems. If you can solve problems quickly, you are by definition intelligent.

the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

https://www.merriam-webster.com/dictionary/intelligence

It can be measured by objective tests. It's not subjective like beauty or humor.

The problem with AI doing these tests is that it has seen and memorized all the previous questions and answers. Many of the tests mentioned are not tests of reasoning, but recall: the bar exam, for example.

If any random person studied every previous question and answer, they would do well too. No one would be amazed that an answer key knew all the answers.

[–] [email protected] 7 points 7 months ago (1 children)

But intelligence is the capacity to solve problems. If you can solve problems quickly, you are by definition intelligent

To solve any problems? Because when I run a computer simulation from a random initial state, that's technically the computer solving a problem it's never seen before, and it is trillions of times faster than me. Does that mean the computer is trillions of times more intelligent than me?
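
(A trivial sketch of what I mean, assuming Python with NumPy: the system below is generated fresh at random, so no machine has ever "seen" this exact problem, yet it's solved near-instantly.)

```python
import time
import numpy as np

# A freshly generated 2000x2000 linear system: a "problem never seen before".
rng = np.random.default_rng()
A = rng.standard_normal((2000, 2000))
b = rng.standard_normal(2000)

start = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - start

# Verify the solution and show it took a fraction of a second.
print(f"max residual: {np.abs(A @ x - b).max():.2e}, time: {elapsed:.3f} s")
```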

the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

If we built a true super-genius AI but never let it leave a small container, is it not intelligent because WE never let it manipulate its environment? And regarding the tests in the Merriam-Webster definition, I suspect it's talking about "IQ tests", which in practice are known to be not entirely objective. Just as an example, it's known that you can study for and improve your score on an IQ test. How does studying for a test increase your "ability to apply knowledge"? I can think of some potential pathways, but we're basically back to it not being clearly defined.

In essence, what I'm trying to say is that even though we can write down some definition for "intelligence", it's still not a concept that even humans have a fantastic understanding of, even for other humans. When we try to think of types of non-human intelligence, our current models for intelligence fall apart even more. Not that I think current LLMs are actually "intelligent" by however you would define the term.

[–] [email protected] 5 points 7 months ago

Does that mean the computer is trillions of times more intelligent than me?

And in addition, is an encyclopedia intelligent because it holds many answers?

[–] [email protected] 2 points 7 months ago (1 children)

This isn't quite correct. There is the possibility of biasing the results with the training data, but models are performing well at things they haven't seen before.

For example, this guy took an IQ test, rewrote the visual questions as natural language questions, and gave the test to various LLMs:

https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq

These are questions with specific wording that the models won't have been trained on, given that he wrote them out fresh. Older models post very poor IQ results, but the SotA model right now scores around 100.

Engaging with the free version of ChatGPT and thinking "LLMs are dumb" is kind of like talking to a moron human and thinking "humans are dumb." Yes, the free version of ChatGPT scores around a 60 IQ on that test, but it also doesn't represent the cream of the crop.
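
(Roughly what that rewriting looks like; the item and the ask_llm helper below are hypothetical stand-ins, not the linked post's actual wording or pipeline.)

```python
# A Raven-style matrix item described in words instead of a picture.
# Both the item and ask_llm() are hypothetical stand-ins for illustration.
prompt = (
    "A 3x3 grid follows a pattern. "
    "Row 1: one circle, two circles, three circles. "
    "Row 2: one square, two squares, three squares. "
    "Row 3: one triangle, two triangles, and a blank cell. "
    "What belongs in the blank cell? Answer in a short phrase."
)

def ask_llm(text: str) -> str:
    """Stand-in for whichever chat-completion API you have access to."""
    raise NotImplementedError

# print(ask_llm(prompt))  # a model reasoning over the pattern should say "three triangles"
```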

[–] [email protected] 0 points 7 months ago (1 children)

Maybe, but this is giving the AI a lot of help. No one rewrites visual questions for humans who take IQ tests. That spatial reasoning is part of the test.

In reality, no AI would pass any test because the first part is writing your name on the paper. Just doing that is beyond most AIs because they literally don't have to deal with the real world. They don't actually understand anything.

[–] [email protected] 2 points 7 months ago

They don't actually understand anything.

This isn't correct and has been shown not to be correct in research over and over and over in the past year.

The investigation reveals that Othello-GPT encapsulates a linear representation of opposing pieces, a factor that causally steers its decision-making process. This paper further elucidates the interplay between the linear world representation and causal decision-making, and their dependence on layer depth and model complexity.

https://arxiv.org/abs/2310.07582

Sizeable differences exist among model capabilities that are not captured by their ranking on popular LLM leaderboards ("cramming for the leaderboard"). Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on k=5 is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.

We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT).

Just a few of the relevant papers you might want to check out before stating things as facts.
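
(Back-of-the-envelope version of the "simple probability calculations" in the second quote; the skill and topic counts below are illustrative assumptions, not the paper's exact numbers.)

```python
from math import comb

# Illustrative assumptions, not the paper's exact setup:
num_skills = 100   # assumed size of the skill inventory
num_topics = 100   # assumed number of topics to pair with each combination
k = 5              # skills combined per question, as in the quote

pairs = comb(num_skills, k) * num_topics
print(f"{pairs:,} distinct (skill-combination, topic) pairs")
# ~7.5 billion - far too many for a model to have simply memorized one by one,
# which is why decent performance at k=5 points to composing skills, not recall.
```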

[–] [email protected] 1 points 7 months ago

This is a semantic argument.

Have you never felt smarter or dumber depending on the situation? If so, did your ability to think abstractly, apply knowledge, or manipulate your environment change? Intelligence is subjective (and colloquial) like beauty and humor.