this post was submitted on 20 Sep 2024
416 points (94.8% liked)

Technology

[–] [email protected] 85 points 1 month ago (19 children)

Ok, but calculators are only allowed in math class, and if there's one thing language models suck at, it's doing basic math. Forget anything at least as complicated as algebra.

[–] [email protected] 11 points 1 month ago (11 children)

For me they weren't allowed in Calc I, II, III, Algebra I, II, or Differential Equations. Pretty much every other class required one.

> if there’s one thing language models suck at, it’s doing basic math.

If you're using a GPT-3.5 Turbo level model, sure. Synthetic data is perfect for teaching LLMs math, though; o1 will be good enough up to Calc III IMO, maybe even beyond.
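
Just to gesture at what I mean by synthetic data: something like the sketch below, where a script churns out arithmetic prompt/answer pairs, is trivially cheap to generate at scale. The format, ranges, and filename are all made up for illustration, not anything any lab actually uses.

```python
# Hypothetical sketch: generating synthetic arithmetic problems as prompt/answer training pairs.
import json
import random

def make_problem(rng: random.Random) -> dict:
    a, b = rng.randint(2, 999), rng.randint(2, 999)
    op = rng.choice(["+", "-", "*"])
    answer = eval(f"{a} {op} {b}")  # safe here: operands and operator are generated, not user input
    return {"prompt": f"What is {a} {op} {b}?", "completion": str(answer)}

rng = random.Random(0)
with open("synthetic_math.jsonl", "w") as f:
    for _ in range(10_000):
        f.write(json.dumps(make_problem(rng)) + "\n")
```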

The only thing I don't like about this is that it uses a TI, yikes.

[–] [email protected] 11 points 1 month ago (4 children)

LLMs do suck at math. If you look into it, the o1 models actually escape the LLM output and write a Python function to calculate the answer; I've been able to break their math by asking for functions that use math not in the standard Python library.
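
To illustrate the failure mode I mean (the function names here are mine, not anything o1 actually emits): a helper that only needs the standard library runs fine, but one that needs something like SciPy dies on import if the sandbox doesn't have it installed.

```python
import math

# Fine: everything needed ships with the standard library.
def compound_interest(principal: float, rate: float, years: int) -> float:
    return principal * math.pow(1 + rate, years)

# Breaks in a bare sandbox: Bessel functions are not in the standard library,
# so this only works if SciPy happens to be installed.
def bessel_j0(x: float) -> float:
    from scipy.special import jv  # ImportError without SciPy
    return jv(0, x)

print(compound_interest(1000, 0.05, 10))  # works everywhere
print(bessel_j0(2.5))                     # fails without SciPy
```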

I know someone also wrote a Wolfram integration to help LLMs solve math problems.
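
A bare-bones version of that kind of integration could just route the question to Wolfram|Alpha's Short Answers API, roughly like below. Treat the endpoint and parameters as my assumption (check the docs), and you'd need your own app ID.

```python
import requests

def ask_wolfram(query: str, app_id: str) -> str:
    # Assumed shape of the Wolfram|Alpha Short Answers API: plain-text answer for a plain-text query.
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": app_id, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

# e.g. ask_wolfram("integrate x^2 sin(x) dx", "YOUR_APP_ID")
```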

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

Terence Tao (one of the most famous and active mathematicians) recently wrote his thoughts on Mastodon about o1's mathematical capabilities. Interesting read: https://mathstodon.xyz/@tao/113132502735585408

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (1 children)

Thanks for sharing. I knew him from some Numberphile vids, so it's cool to see he has a Mastodon account. Good to know that LLMs are crawling from "incompetent graduate" to "mediocre graduate", which basically means they're already smarter than most people for many kinds of reasoning tasks.

I'm not a big fan of the way the guy writes, though. As is common for super-intelligent academic types, he uses overly complicated wording to formally describe even the most basic opinions, while mixing in hints of inflated ego and intellectual superiority. He should start experimenting with having o1 as his editor to summarize his toots.

[–] [email protected] 1 points 1 month ago

The language wasn't that complex.
