this post was submitted on 09 Jun 2025
822 points (91.9% liked)

[–] [email protected] 288 points 5 days ago (12 children)

Does the author think ChatGPT is in fact an AGI? It's a chatbot. Why would it be good at chess? It's like saying an Atari 2600 running a dedicated chess program can beat Google Maps at chess.

[–] [email protected] 229 points 5 days ago (15 children)

AI, including ChatGPT, is being marketed as super awesome at everything, which is why it and similar AI are being forced into absolutely everything and sold as a replacement for people.

Something marketed as AGI should be treated as AGI when proving it isn't AGI.

[–] [email protected] 14 points 5 days ago (11 children)

Not to help the AI companies, but why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff? It's obvious they're shit at it, so why do they answer anyway? It's because they're programmed by know-it-all programmers, isn't it.

[–] [email protected] 26 points 5 days ago

why don't they program them

AI models aren't programmed traditionally. They're generated by machine learning. Essentially the model is given test prompts and then given a rating on its answer. The model's calculations will be adjusted so that its answer to the test prompt will be closer to the expected answer. You repeat this a few billion times with a few billion prompts and you will have generated a model that scores very high on all test prompts.
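
As a toy illustration of that adjust-and-repeat loop (not how a real LLM is built, just the shape of it): the "model" here is two numbers, and each pass nudges them so the answer lands closer to the expected one.

```python
# Toy version of "rate the answer, nudge the weights, repeat".
weights = [0.0, 0.0]                    # the entire "model": two numbers
examples = [(1.0, 3.0), (2.0, 5.0)]     # (prompt, expected answer); here y = 2x + 1

def predict(x):
    return weights[0] * x + weights[1]

learning_rate = 0.01
for step in range(10_000):              # real models: billions of examples and weights
    for x, expected in examples:
        error = predict(x) - expected   # how far off the answer was
        weights[0] -= learning_rate * error * x   # adjust so the next answer is closer
        weights[1] -= learning_rate * error

print(weights)  # converges toward [2.0, 1.0]
```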

Then someone asks it how many R's are in strawberry and it gets the wrong answer. The only way to fix this is to add that as a test prompt and redo the machine learning process which takes an enormous amount of time and computational power each time it's done, only for people to once again quickly find some kind of prompt it doesn't answer well.

There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn't the issue. It's trying to get one model to be good at absolutely everything.

[–] [email protected] 29 points 5 days ago (1 children)

Because they’re fucking terrible at designing tools to solve problems, and they’re finding it harder and harder to keep pretending this is an omnitool that can do everything with perfect coherency (and if it isn’t working right, it’s because you’re not believing or paying hard enough).

[–] [email protected] 7 points 5 days ago

Or they keep telling you that you just have to wait it out. It’s going to get better and better!

[–] [email protected] 7 points 5 days ago

...or a simple counter to count the Rs in strawberry. Because that's more difficult for an LLM than one might think, and they are starting to do this now.
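
The deterministic version is of course a one-liner; it's only hard for a model that sees tokens rather than letters:

```python
print("strawberry".count("r"))  # 3; trivial in code, awkward for a token predictor
```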

[–] [email protected] 4 points 4 days ago

This is where MCP comes in. It's a protocol for LLMs to call standard tools. Basically the LLM would figure out the tool to use from the context, then figure out the parameters from what the MCP server says is available, send the JSON, and parse the response.
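
The wire format is just JSON-RPC. A rough sketch of what a call might look like, where the `stockfish_move` tool and its arguments are made up for illustration (the real protocol has more handshaking than this):

```python
import json

# Hypothetical tool call in the JSON-RPC 2.0 shape MCP uses. The tool name and
# arguments are invented; a real client would first ask the server to list its
# tools and build this from what the server advertises.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "stockfish_move",  # which advertised tool to run
        "arguments": {
            "fen": "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
            "depth": 15,
        },
    },
}

print(json.dumps(request, indent=2))
# The server's result gets fed back into the LLM's context as the tool output.
```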

[–] [email protected] 3 points 4 days ago

why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff?

Because the AI doesn't know what it's being asked; it's just an algorithm guessing what the next word in a reply is. It has no understanding of what the words mean.

"Why doesn't the man in the Chinese room just use a calculator for math questions?"

[–] [email protected] 5 points 5 days ago

Because the LLMs are now being used to vibe code themselves.

[–] [email protected] 4 points 5 days ago

They are starting to do this. Most new models support function calling and can generate code to come up with math answers, etc.
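
Roughly what that looks like with the OpenAI Python SDK; the `calculate` tool is something you describe and run yourself, and the flow below is simplified (no error handling, no follow-up message back to the model):

```python
# Sketch of function calling: the model doesn't do the math, it asks our tool to.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "calculate",  # hypothetical tool we run on our side
        "description": "Evaluate an arithmetic expression",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is 1234 * 5678?"}],
    tools=tools,
)

call = resp.choices[0].message.tool_calls[0]  # the model's request to use the tool
args = json.loads(call.function.arguments)
print(call.function.name, args)               # our code evaluates it and replies
```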

[–] [email protected] 4 points 5 days ago (1 children)

If you pay for ChatGPT you can connect it with Wolfram Alpha and it relays the maths to it.

[–] [email protected] 1 points 4 days ago

I don't pay for ChatGPT and just used the Wolfram GPT. They made the custom GPTs non-paid at some point.

[–] [email protected] 4 points 5 days ago

I think they're trying to do that. But AI can still fail at that lol

[–] [email protected] 1 points 4 days ago

why don't they program them to look up math programs and outsource chess to other programs when they're asked for that stuff?

They will, when it makes sense for what the AI is designed to do. For example, ChatGPT can outsource image generation to an AI dedicated to that. It also used to calculate math using Python for me, but that doesn't seem to happen anymore, probably due to security issues with letting the AI run arbitrary Python code.

ChatGPT however was not designed to play chess, so I don't see why OpenAI should invest resources into connecting it to a chess API.

I think especially since adding custom GPTs, adding this kind of stuff has become kind of unnecessary for base ChatGPT. If you want a chess engine, get a GPT which implements a Stockfish API (there seem to be several GPTs that do). For math, get the Wolfram GPT which uses Wolfram Alpha's API, or a different powerful math GPT.
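
For what it's worth, the chess backend such a GPT would hand off to can be tiny. A minimal sketch, assuming the python-chess package and a local Stockfish binary on the PATH (both assumptions, not anything OpenAI ships):

```python
# Minimal chess "tool": hand the position to a real engine instead of the LLM.
import chess
import chess.engine

board = chess.Board()  # starting position
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    result = engine.play(board, chess.engine.Limit(time=0.1))
    print(board.san(result.move))  # e.g. "e4": the engine, not the LLM, picks the move
```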

[–] [email protected] 27 points 5 days ago (1 children)

Google Maps doesn't pretend to be good at chess. ChatGPT does.

[–] [email protected] 6 points 4 days ago (1 children)

A toddler can pretend to be good at chess but anybody with reasonable expectations knows that they are not.

[–] [email protected] 20 points 4 days ago (1 children)

Plot twist: the toddler has a multi-year marketing push worth tens if not hundreds of millions, which convinced a lot of people who don't know the first thing about chess that it really is very impressive, and all those chess-types are just jealous.

[–] [email protected] 5 points 4 days ago (1 children)

Have you tried feeding the toddler gallons of baby food? Maybe then it can play chess.

[–] [email protected] 4 points 4 days ago (1 children)

They’ve been feeding the toddler everybody else’s baby food and claiming they have the right to.

[–] [email protected] 4 points 4 days ago

"If we have to ask every time before stealing a little baby food, our morbidly obese toddler cannot survive"

[–] [email protected] 30 points 5 days ago (2 children)

Most people do. It's just called AI in the media everywhere and marketing works. I think online folks forget that something as simple as getting a Lemmy account by yourself puts you into the top quintile of tech literacy.

[–] [email protected] 16 points 4 days ago (1 children)

Well, so much hype has been generated around ChatGPT being close to AGI that it now makes sense to ask questions like "can ChatGPT prove the Riemann hypothesis?"

[–] [email protected] 1 points 4 days ago

Even the models that pretend to be AGI are not. It's been proven.

[–] [email protected] 10 points 5 days ago (2 children)

I agree with your general statement, but in theory, since all ChatGPT does is regurgitate information back and a lot of chess is memorization of historical games and types, it might actually perform well. No, it can't think, but it can remember everything, so at some point that might tip the results in its favor.

[–] [email protected] 3 points 5 days ago* (last edited 4 days ago) (1 children)

Regurgitating an impression of it, not regurgitating it verbatim; that's the problem here.

Chess is 100% deterministic, so it falls flat.

[–] [email protected] 5 points 5 days ago* (last edited 5 days ago)

I'm guessing it's not even hard to get it to "confidently" violate the rules.

[–] [email protected] 5 points 4 days ago* (last edited 4 days ago) (1 children)

OpenAI has been talking about AGI for years, implying that they are getting closer to it with their products.

https://openai.com/index/planning-for-agi-and-beyond/

https://openai.com/index/elon-musk-wanted-an-openai-for-profit/

Not to even mention all the hype created by the techbros around it.

[–] [email protected] 5 points 4 days ago (1 children)

You're not wrong, but keep in mind ChatGPT advocates, including the company itself, are referring to it as AI, including in marketing. They're saying it's a complete, self-learning, constantly-evolving Artificial Intelligence that has been improving itself since release... And it loses to a 4KB video game program from 1979 that can only "think" 2 moves ahead.

[–] [email protected] 2 points 4 days ago

That's totally fair; the company is obviously lying, excuse me, "marketing", to promote its product. That's absolutely true.

[–] [email protected] 7 points 5 days ago (1 children)

I think that’s generally the point: most people think ChatGPT is this sentient thing that knows everything and… no.

[–] [email protected] 3 points 5 days ago (1 children)

Do they though? No one I've talked to, not my coworkers who use it for work, not my friends, not my 72-year-old mother, thinks it's sentient.

[–] [email protected] 1 points 4 days ago

Okay, I maybe exaggerated a bit, but a lot of people think it actually knows things, or is actually smart. Which… it’s not… at all. It’s just pattern recognition. Which was, I assume, the point of showing it can’t even beat the goddamn Atari: it cannot think or reason, it’s all just copy pasta and pattern recognition.

[–] [email protected] 6 points 5 days ago (1 children)

Articles like this are good because they expose the AI's flaws and show that it can't be trusted with complex multi-step tasks.

They help people who think AI is close to human-level see that it's not and that it's missing critical functionality.

[–] [email protected] 4 points 4 days ago (1 children)

The problem is though that this perpetuates the idea that ChatGPT is actually an AI.

[–] [email protected] 1 points 4 days ago

People already think ChatGPT is a general AI. We need more articles like this showing its ineffectiveness at being intelligent. Besides, it helps find the limitations of this technology, which we can hopefully use to argue against it being pushed into every single place.

[–] [email protected] 5 points 5 days ago (2 children)

In all fairness, machine learning in chess engines is actually pretty strong.

AlphaZero was developed by the artificial intelligence and research company DeepMind, which was acquired by Google. It is a computer program that reached a virtually unthinkable level of play using only reinforcement learning and self-play in order to train its neural networks. In other words, it was only given the rules of the game and then played against itself many millions of times (44 million games in the first nine hours, according to DeepMind).

https://www.chess.com/terms/alphazero-chess-engine
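
As a toy version of the self-play idea (nowhere near AlphaZero, just the "given only the rules, play yourself over and over" loop), here's a tabular self-play learner for the game of Nim:

```python
# Toy self-play: tabular learning on Nim (take 1-3 stones; whoever takes the last stone wins).
# Only the rules are given; every value below comes from the program playing itself.
import random
from collections import defaultdict

Q = defaultdict(float)   # (stones_left, stones_taken) -> estimated value for the mover
ACTIONS = [1, 2, 3]

def best_action(stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda a: Q[(stones, a)])

for game in range(50_000):
    stones, history = 15, []
    while stones > 0:
        if random.random() < 0.1:  # explore a random move occasionally
            move = random.choice([a for a in ACTIONS if a <= stones])
        else:                      # otherwise play greedily from the shared table
            move = best_action(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone won; propagate +1/-1 back through the moves.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += 0.1 * (reward - Q[(state, move)])
        reward = -reward

print(best_action(15))  # should settle on 3, leaving the opponent a multiple of 4
```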

[–] [email protected] 2 points 4 days ago

Sure, but machine learning like that is very different from how LLMs are trained and what they output.

[–] [email protected] 1 points 4 days ago

Oh absolutely you can apply machine learning to game strategy. But you can't expect a generalized chatbot to do well at strategic decision making for a specific game.

[–] [email protected] 3 points 5 days ago

I like referring to LLMs as VI (Virtual Intelligence from Mass Effect) since they merely give the impression of intelligence but are little more than search engines. In the end all one is doing is displaying expected results based on a popularity algorithm. However they do this inconsistently due to bad data in and limited caching.

[–] [email protected] 2 points 5 days ago

I mean, OpenAI seems to forget it isn't.