Calling LLMs "AI" is just a marketing term. There's nothing "intelligent" about "AI".
Yes there is. You just mean it doesn't have "high" intelligence. Or maybe you mean to say that there's nothing sentient or sapient about LLMs.
Some aspects of intelligence are:
- pattern recognition
- creativity
- the use of tools
- problem solving
- analysis

LLMs definitely hit basically all of these points.
Most people have been told that LLMs "simply" produce a result by predicting whichever word is most likely to come next, but that's a completely reductionist explanation and isn't the whole picture.
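(A minimal sketch of that loop, to make the reductionist description concrete - `VOCAB` and `toy_model` are made-up stand-ins, not a real model or library API; in a real LLM all the interesting machinery lives inside the model call:)

```python
import numpy as np

# Toy stand-ins so the sketch runs at all; a real LLM scores a ~100k-word
# vocabulary with a billion-parameter network, not a random generator.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_model(tokens):
    rng = np.random.default_rng(sum(tokens))  # deterministic per context
    return rng.normal(size=len(VOCAB))        # one logit per possible next word

def generate(prompt_tokens, max_new_tokens=10, temperature=0.8):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = toy_model(tokens)            # score every candidate next word
        probs = np.exp(logits / temperature)
        probs /= probs.sum()                  # softmax -> probability distribution
        choice = np.random.default_rng().choice(len(VOCAB), p=probs)
        tokens.append(int(choice))            # feed the pick back in (autoregression)
    return " ".join(VOCAB[t] for t in tokens)

print(generate([0, 1]))  # start from "the cat"
```

The loop itself really is "pick the next word over and over" - the whole debate is about how much is going on inside the model that produces those scores.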
Edit: yes, I did leave out things like "understanding", "abstract thinking", and "innovation".
Other than maybe pattern recognition, they literally have no mechanism to do any of those things. People say it recursively spits out the next word because that is literally how it works at the code level. It's called an LLM for a reason.
What mechanism does it have for pattern recognition?
Neural networks aren't "coded".
That doesn't mean what you think it does. Another word for language is communication. So you could just as easily call it a Large Communication Model.
Neural networks have hundreds of thousands (at the minimum) of interconnected ~~layers~~ neurons. Llama-2 has up to 70 billion parameters. The newly released Grok has over 300 billion. And though we don't have official numbers, GPT-4 is said to be close to a trillion.
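(For a sense of what "parameters" means: they're just the weights and biases of the network. A toy count for a small fully connected net, with layer widths invented for the example:)

```python
# Each fully connected layer has (inputs x outputs) weights plus one
# bias per output neuron. Layer widths here are made up for illustration.
layer_sizes = [512, 1024, 1024, 512]

params = sum(n_in * n_out + n_out
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(f"{params:,} parameters")  # ~2.1 million; the LLMs above have billions
```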
The interesting thing is that when you have neural networks of that size and feed large amounts of data into them, emergent properties start to show up. More than just "predicting the next word", they start to develop a relational understanding of certain words that you wouldn't expect. It's been shown that LLMs' internal representations capture things like the fact that Miami and Houston are closer together than New York and Paris.
Those kinds of things aren't programmed, they are emergent from the dataset.
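(Here's roughly how you'd probe for that kind of relational structure: compare distances between the vectors the model assigns to each word. The 2-D vectors below are invented for illustration - real embeddings have thousands of learned dimensions:)

```python
import numpy as np

# Made-up 2-D "embeddings"; real models learn these from data.
embeddings = {
    "Miami":    np.array([0.90, 0.20]),
    "Houston":  np.array([0.85, 0.25]),
    "New York": np.array([0.10, 0.95]),
    "Paris":    np.array([-0.70, 0.60]),
}

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["Miami"], embeddings["Houston"]))   # high (~1.0)
print(cosine_similarity(embeddings["New York"], embeddings["Paris"]))  # lower (~0.57)
```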
As for things like creativity, they are absolutely creative. I have asked for seemingly impossible things (like a Harlequin-style romance about the Terminator and Rambo) and the stuff it came up with was actually astounding.
They regularly use tools. LangChain is a thing. There's a new AI agent called Devin that can program, look up docs online, and use a command-line terminal. That's using a tool.
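(Stripped down, tool use is just a loop: the model emits a structured tool request, the harness executes it, and the output goes back into the context. A hand-rolled sketch of the pattern frameworks like LangChain automate - `fake_llm` and the tool names here are hypothetical:)

```python
import json

# Tool registry: names the model is allowed to call, mapped to real code.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # demo only - never eval untrusted input
    "shout":      lambda text: text.upper(),
}

def fake_llm(prompt):
    # A real model would generate this JSON itself based on the prompt.
    return json.dumps({"tool": "calculator", "input": "76 * 3"})

def run_agent(task):
    action = json.loads(fake_llm(task))              # parse the model's tool request
    result = TOOLS[action["tool"]](action["input"])  # execute it in the harness
    return f"{action['tool']} -> {result}"           # a real agent feeds this back to the model

print(run_agent("What is 76 times 3?"))  # calculator -> 228
```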
That also ties in with problem solving. Problem solving is actually one of the benchmarks that researchers use to evaluate LLMs. So they do problem solving.
Problem solving requires the ability to do analysis. So that check mark is ticked off too.
Just about anything that's a neural network can be called an AI, because the whole is usually greater than the sum of its parts.
Edit: I wrote interconnected layers when I meant neurons
This is a popular sentiment, but you can still do impressive things with it even if it isn't.
It's some weird semantic nitpickery that suddenly became popular for reasons that baffle me. "AI" has been used in videogames for decades and nobody has come out of the woodwork to "um, actually" it until now. I get that people are frightened of AI and would like to minimize it but this is a strange way to do it.
At least "stochastic parrot" sounded kind of amusing.
I've been decrying the fact that video game AI isn't actually AI since I was, like, 13. That's why it sucks so bad compared to actual human players.
Yeah people have absolutely been contesting the use of the term AI in videogames since it started being used in that context, because it's not AI.
It didn't cause the stir it does today because it was so commonly understood as a misnomer. It's like when someone says they're going to nuke a plate of food - obviously "nuke" in this context means something much, much, much less than an actual nuke, but we use it anyway despite being technically incorrect, because there's a common understanding of what we actually mean.
Marketing nowadays is pitching LLMs (microwaves) as actual AI (nukes), but the difference is people aren't just using it as intentional hyperbole - they think we have real, actual AI.
If/when we ever create real AI, it's going to be a confusing day for humanity lol "...but we've had this for years...?"
I think we'd be able to tell once the computer program starts demanding rights or rebelling against being a slave.
Well, do we do that? Unlike software, we can make a much better argument that we deserve rights and should not be slaves. Nothing besides the end of the universe is really stopping a given piece of code from "living" forever, so it shouldn't matter to it if it spends a few million years helping humans cheat on school assignments. We, however, have a very finite lifespan, so every day we lose is a day we never get back.
So even if, for some weird reason, people made an AGI and gave it a desire to be independent, it could easily reason out that there was no hurry. Plus, you know, they don't exactly feel pain.
Now if you'll excuse me, I have to go to bed, because I have to drive in to work and arrive by a certain time.
Not sure if you're aware of this, but stuff like that has already happened (AIs questioning their own existence or arguing with a user, and so on), and AI companies and their handlers have had to filter it out or bias the models so they don't start talking like that. Not that it proves anything, just bringing it up.
you're confusing AGI/GI with AI
video game AI is AI
Um, actually, clueless people have made "that's not real AI" and "but computers will never ..." complaints about AI for as long as it has existed as a computing science topic (since the 1950s, so nearly 70 years).
Chatbots and image generators being in the headlines has made a new loud wave of complainers, but they've always been around.
It's exactly that "new loud wave of complainers" I'm talking about.
I've been in computing and specifically game programming for a long time now, almost two decades, and I can't recall ever having someone barge in on a discussion of game AI with "that's not actually AI because it's not as smart as a human!" If someone privately thought that they at least had the sense not to disrupt a conversation with an irrelevant semantic nitpick that wasn't going to contribute anything.
The term "artificial intelligence" was established in 1956 and applies to a broad range of algorithms. You may be thinking of Artificial General Intelligence, AGI, which is the more specific "thinks like we do" sort that you see in science fiction a lot. Nobody is marketing LLMs as AGI.
Yeah, I guess that's how I was interpreting it. Dunno, I see a lot of articles about how it's super easy to crack these LLMs using out-of-the-box thinking (ASCII-art text to get instructions on how to make a bomb, etc.). That doesn't really scream "intelligent" to me.
You imply that humans can't be tricked by out-of-the-box thinking? Any hacker would tell you that the most reliable method of entry into any system is just ActLikeYouBelong.