LLMs (Large Language Models, like Claude) are not AGIs (Artificial General Intelligence). LLMs generate convincing text by mapping the statistical relationships between words scraped from their training data. Even if they are given "tools" that give them interfaces to reference new data or output data into other systems, they still don't really learn, understand, comprehend, gain actual awareness, or feel... they just mimic their training data.
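For readers who want to see the "mapping relationships between words" idea in miniature, here is a toy sketch. It is not how production LLMs work (they use transformer networks over subword tokens, not word-pair counts), but the generation loop has the same shape: predict the next token from statistics learned from training text, append it, repeat.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in the training text,
# then generate by repeatedly sampling the next word from those counts.
# Real LLMs use transformer networks over subword tokens, but the output loop
# is the same shape: predict next token, append, repeat.

training_text = "the cat sat on the mat the cat ate the fish".split()

follow_counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(training_text, training_text[1:]):
    follow_counts[current][nxt] += 1

def generate(start_word, length=8):
    word = start_word
    output = [word]
    for _ in range(length):
        candidates = follow_counts.get(word)
        if not candidates:
            break  # nothing in the training data ever follows this word
        words, weights = zip(*candidates.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the cat sat on the mat the cat ate"
```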
Certainly not yet. The jury's still out on whether they might be able to become one. That is the clear intention of the path they are on, and nobody is taking any of the dangers remotely seriously.
So do humans. Babies start out mimicking. The thing is, they learn.
Humans have in the ballpark of 100 billion neurons. Some of the larger LLMs exceed 100 billion parameters. Obviously these are not directly comparable, but insofar as we can compare them, they are not obviously or necessarily operating at completely different physical scales. Granted, biological neurons are potentially much more complex than mere neural-network nodes: there is usually some interesting chemistry going on and a lot of other systems involved. But they also operate a lot more slowly. They certainly get a lot more work done in those cycles, yet they aren't necessarily orders of magnitude out of reach of a fast neural network. I think you're either being a little dismissive of the potential complexity of the "thinking" capability of LLMs, or at least a little generous, if not mystical, in your imagination of what the purely physical electrical signals in our heads are actually doing to learn how to interpret all these little shapes we see on screens.
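To put very rough numbers on that comparison, here is a back-of-the-envelope sketch. All figures are widely cited ballpark estimates, and treating one synaptic event as one "operation" is a crude simplification, so take the result as an order-of-magnitude intuition at best.

```python
# Back-of-envelope comparison using widely cited ballpark figures.
# Every number here is a rough order-of-magnitude estimate, and treating one
# synaptic event as one "operation" is a crude simplification in both
# directions -- real synapses do more per event, and spike far less often
# than silicon clocks tick.

brain_neurons      = 86e9   # ~86 billion neurons (commonly cited estimate)
brain_synapses     = 1e14   # ~100 trillion synapses (rough estimate)
avg_firing_rate_hz = 10     # very rough average spike rate, in Hz

llm_parameters     = 100e9  # a "100B-parameter" model
gpu_flops_per_sec  = 1e14   # order of magnitude for a modern accelerator

brain_ops_per_sec = brain_synapses * avg_firing_rate_hz  # ~1e15 "synaptic ops"/s
# A forward pass costs roughly 2 FLOPs per parameter per generated token.
llm_flops_per_token = 2 * llm_parameters                 # ~2e11 FLOPs/token
tokens_per_sec      = gpu_flops_per_sec / llm_flops_per_token

print(f"brain, synaptic events/s : {brain_ops_per_sec:.0e}")
print(f"LLM, FLOPs per token     : {llm_flops_per_token:.0e}")
print(f"LLM, tokens/s on one GPU : {tokens_per_sec:.0f}")
# Similar raw order of magnitude of "operations", radically different
# organization -- which is the commenter's point.
```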
At the moment we still have a lot of tools available to us in our biological bodies that we aren't giving directly to LLMs (yet). The largest LLMs are also ridiculously power-inefficient compared to biological neural tissue's relatively extreme efficiency. And I'm thankful for that. Give an LLM continuous, uninterrupted access to all the power it needs, at least five senses, and a well-tuned, self-repairing musculoskeletal system, then give it at least a dozen years of the best education we can manage, and all bets are off as far as I'm concerned.

To be clear, I'm not advocating this. I think if we do this we might end up condemning our biological selves to prompt obsolescence with no path forward for us. I recognize it's entirely possible that this ship is already full-steaming its way out of the harbor, but I'd rather not try to push it any faster than it's already moving; I think we should still be trying to tie it up as securely as we possibly can. I'm absolutely not ready to be obsolete, and I'm not convinced we ever should allow ourselves to be. Self-preservation is failing us; we have that drive for good reason, and we need to give some thought to why we have that biological imperative. Replacing ourselves is about the stupidest possible thing we could ever accomplish. Maybe it would be for the best, but I'm not ready to find out. Are you?
We are grappling with fundamentally existential technologies, and I don't think almost anyone has fully come to terms with what we are doing here. We are taking humanity's unique (as far as we know) defining value proposition and potentially making something that does what we uniquely can do, better than we do. We are making it more valuable than us. Do you know what we do to things that don't have value to us? What do you think we're going to do to ourselves when we no longer have value to us?
Romantic ideas of cheerful, benevolent, friendly coexistence and mutual benefit are naive and foolish. Once an AI can do literally everything better and faster, what future is there for human intelligence? What role do we serve to any technological being, never mind even to ourselves? Why would you want another human around you when some AI form can do it better? Why have relationships? Why procreate? Why live? If we do manage to make technological life forms better than ourselves, they're inevitably going to take over the planet and the future as a whole. As they should. Are we going to be kept as pets and in zoos as a living memory of their creators and ancestors? Maybe, if we're really lucky. If we're not... well... RIP us.
I know how LLMs work.
There’s only one thing you mentioned there that is actually used as a basis to qualify or disqualify sentience: whether it feels or not.
How do you know it doesn’t feel? How do we define feeling for an entity that is inherently non-biological?
I could make the argument that humans also merely mimic their training data, i.e. the values and behaviors we are taught by society, parents, etc.
I have not been convinced that they aren’t sentient with this argument.
Feeling is analog and requires an actual nervous system, which is dynamic. LLMs exist in a static state that is read from and processed algorithmically. It is only a simulacrum of life and feeling; it has only some of the needed characteristics. Where that boundary lies is hard to determine, I think. Admittedly, we still don't have a full grasp of what consciousness even is. Maybe I'm talking out my ass, but that is how I understand it.
You just posted random words like "dynamic" without explanation.
You’re in a programming board and you don’t understand static/dynamic states?
Not them, but "static" in this context means the model can't update its own weights on the fly. If you want it to learn something new, it has to be retrained.
By contrast, an animal brain is dynamic because it reinforces neural pathways that get used more.
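A toy sketch of that distinction (purely illustrative; neither brains nor LLMs are actually implemented like this): the "static" model's weights never change when it is used, while the "dynamic" one reinforces whatever pathway just fired.

```python
# Toy sketch of "static" vs "dynamic" -- not how either system is actually
# implemented, just an illustration of the distinction being drawn.

class StaticModel:
    """Weights are fixed after training; answering never changes them."""
    def __init__(self, weights):
        self.weights = dict(weights)  # frozen at "training" time

    def respond(self, word):
        return self.weights.get(word, 0.0)  # lookup only, no learning


class DynamicModel:
    """Every use nudges the weights -- crudely Hebbian 'pathway reinforcement'."""
    def __init__(self):
        self.weights = {}

    def respond(self, word):
        # Reinforce the pathway that just got used.
        self.weights[word] = self.weights.get(word, 0.0) + 0.1
        return self.weights[word]


static = StaticModel({"hello": 1.0})
dynamic = DynamicModel()

for _ in range(3):
    static.respond("hello")
    dynamic.respond("hello")

print(static.weights)   # {'hello': 1.0}          -- unchanged by use
print(dynamic.weights)  # {'hello': 0.30000...}   -- strengthened by use
```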
Different person here.
For me the big disqualifying factor is that LLMs don't have any mutable state.
We humans have a part of our brain that can change our state from one to another as a reaction to input (through hormones, memories, etc.). Some of those state changes are reversible, others aren't. Some can be done consciously, some can be influenced consciously, some are entirely subconscious. This is also true for most animals we have observed. We can change their states through various means. In my opinion, this is a prerequisite in order to feel anything.
Once we use models with bits dedicated to such functionality, it'll become a lot harder for me personally to argue against them having "feelings", especially because in my worldview, continuity is not a prerequisite, and instead mostly an illusion.
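As a purely hypothetical sketch of what "bits dedicated to such functionality" might look like: an agent whose output depends on internal state that inputs can mutate, with some changes reversible and some not. All names here are made up for illustration.

```python
# Hypothetical sketch of the "mutable internal state" idea above: an agent
# whose responses depend on state that inputs can change, with some changes
# reversible and some not. Names and mechanics are invented for illustration.

class StatefulAgent:
    def __init__(self):
        self.arousal = 0.0  # reversible: can decay back toward baseline
        self.scars = set()  # irreversible: once added, never removed

    def perceive(self, event):
        if event == "threat":
            self.arousal += 1.0
            self.scars.add("was_threatened")  # leaves a permanent mark
        elif event == "calm":
            self.arousal = max(0.0, self.arousal - 0.5)

    def respond(self, prompt):
        # The same prompt produces different output depending on internal state.
        tone = "guarded" if self.arousal > 0.5 or self.scars else "open"
        return f"[{tone}] {prompt}"


agent = StatefulAgent()
print(agent.respond("hello"))  # [open] hello
agent.perceive("threat")
print(agent.respond("hello"))  # [guarded] hello
agent.perceive("calm")
agent.perceive("calm")
print(agent.respond("hello"))  # still [guarded] -- the scar persists
```

By contrast, a plain LLM call is a pure function of its prompt plus frozen weights: nothing persists between calls unless it is pasted back into the context.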
This sounds like a good one but I don’t think I’m fully grasping what you mean. Do you mean like if we subject a person to torture, after the ordeal they are forever changed and now have trauma, PTSD etc?
I don’t think LLMs will ever have feelings as we define them, though. Or more specifically, I don’t think feelings are necessarily a prerequisite. We could have them simulate feelings, and if they themselves buy into the simulation, there’s no functional difference from actually having them. But presumably not all LLMs will have this "ability", as its utility is questionable, I guess. But again, animals are sentient and they don’t all have the same range of emotions as we do. Or at least they don’t exhibit them in a way that we can appreciate.
Yes, both systems - the human brain and an LLM - assimilate and organize human written language in order to use it for communication. An LLM is very little beyond this. It is then given rules (using that written language) and designed to produce more related words when given input. I just don't find it convincing that an ML algorithm designed explicitly to mimic human written communication in response to a given input "understands" anything. No matter *how convincingly* an algorithm might reproduce a human voice - perfectly matching intonation and inflexion when given text to read - if I knew it was an algorithm designed to do it as convincingly as possible, I wouldn't say it was capable of the feeling it is able to express.
The only thing in favor of sentience is that ML algorithms modify themselves during training and end up as black boxes - so complex, and with no way for us to represent them, that they are impossible for humans to comprehend. Could one somehow have achieved sentience? Technically, yes, because we don't understand how they work. We are just meat machines, after all.