this post was submitted on 19 Jan 2024
255 points (95.4% liked)
Technology
Anyone got a graph of AI spending over time globally?
I'm starting to feel more confident about AGI coming soon (relatively soon).
Knowing absolutely nothing about it, though, it seems like it needs to be more efficient? What's the likelihood that, rather than increasing the raw power of these systems, there's a breakthrough that allows more from less?
Spending definitely looks exponential at the moment:
Most breakthroughs have historically been made by university researchers and then put into use by corporations; arguably that includes most of the latest developments. But university researchers were never going to get access to the $100 million in compute time to train something like GPT-4, lol.
The human brain has about 100 trillion connections. GPT-4 has 1.76 trillion parameters (which are loosely analogous to connections). It took 25k GPUs to train, so scaling that 25k by the roughly 57x parameter gap, in theory it could be possible to train a human-scale model using about 1.4 million GPUs.

Transformers (the T in GPT) are not like human brains, though. They "learn" once, then do not learn or add "memories" while they're being used. They can't really do things like planning either. There are algorithms for "lifelong learning" and planning, but I don't think they scale to such large models, datasets, or real-world environments.

I think there need to be a lot of theoretical breakthroughs to make AGI possible, and I'm not sure more money will help that much. I suppose AGI could be achieved by trial and error (i.e. trying ideas and testing whether they work, without mathematically proving if or how well they'd work) instead of rigorous theoretical work.
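For what it's worth, here's the back-of-envelope arithmetic behind that 1.4 million figure, as a quick sketch; treating one parameter as one biological "connection" is a huge, hand-wavy assumption, not an established equivalence:

```python
# Back-of-envelope scaling estimate. The brain/parameter figures are the
# rough ones quoted above; parameter-to-connection parity is a big,
# hand-wavy assumption, not an established equivalence.

brain_connections = 100e12   # ~100 trillion synapses in a human brain
gpt4_parameters   = 1.76e12  # widely reported (unconfirmed) GPT-4 size
gpt4_gpus         = 25_000   # GPUs reportedly used to train GPT-4

scale_factor = brain_connections / gpt4_parameters  # ~57x
gpus_needed  = gpt4_gpus * scale_factor             # naive linear scaling

print(f"scale factor: {scale_factor:.0f}x")
print(f"GPUs needed (naive): {gpus_needed:,.0f}")   # ~1.4 million
```

Note that this assumes GPU count scales linearly with parameter count; in practice training compute grows faster than that (bigger models are also trained on more data), so if anything it's an underestimate.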
Interesting. Thanks for posting.
So you're saying we might see something with 1/10 of a human brain's connections (obviously I understand that's a super rough estimate) next year?
This is the first I've heard about GPT not learning. So if I interact with ChatGPT, it's effectively a finished product and will stay like that forever, even if it's wrong and I correct it multiple times?
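Roughly, yes: the weights are frozen after training, and a "correction" only persists as text in the conversation that gets re-sent with every request. A minimal sketch of that loop (`call_model` is a hypothetical stand-in, not a real API):

```python
# Minimal sketch of why corrections don't stick: the weights never change,
# and the only "memory" is the transcript re-sent with every request.
# `call_model` is a hypothetical stand-in for a real inference API.

def call_model(messages: list[dict]) -> str:
    """Pretend inference: output depends only on frozen weights + input."""
    return f"reply computed from frozen weights and {len(messages)} message(s)"

conversation: list[dict] = []  # this list is the only "memory" there is

def chat(user_text: str) -> str:
    conversation.append({"role": "user", "content": user_text})
    reply = call_model(conversation)  # full history is re-sent every turn
    conversation.append({"role": "assistant", "content": reply})
    return reply

chat("That answer was wrong, remember the correction.")  # lives only in `conversation`...
conversation.clear()  # ...so starting a fresh chat discards it completely
chat("What did I tell you earlier?")  # the model itself retained nothing
```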
This is where I'm really confused by the analogy. If GPT is not really close to a human brain, how is it able to interact with so many people at once? I couldn't hold 3 conversations, never mind a million, yet my brainpower is much, much higher than GPT's. Couldn't it just talk to 1 person and be smarter, since it could use all the computing power for that 1 conversation?
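The "million conversations" part is less mysterious than it sounds: because inference is stateless, one set of frozen weights can process many independent conversations side by side in batches. A toy sketch of the idea (an illustration of the shape, not a real serving stack):

```python
# Toy sketch of batched, stateless serving: one frozen set of weights,
# many independent conversations handled side by side in a single pass.
# This is an illustration of the shape, not a real serving framework.

FROZEN_WEIGHTS = "one shared copy, never modified during serving"

def forward_batch(prompts: list[str]) -> list[str]:
    """Process a whole batch at once; each prompt is handled independently."""
    return [f"reply to {p!r} using {FROZEN_WEIGHTS}" for p in prompts]

# A million users just means bigger batches spread across more GPUs,
# not a million separate "minds":
batch = ["user 1: hello", "user 2: explain AGI", "user 3: fix my code"]
for reply in forward_batch(batch):
    print(reply)
```

And giving all of that hardware to a single conversation would mostly just make the replies come back faster; the capability lives in the frozen weights, not in how many GPUs serve them.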
You're confused by the analogy because it's a shitty one. If we wanted to reproduce the behaviour of a human, we would invest in medicine, not computer science.