this post was submitted on 23 May 2025
72 points (84.0% liked)

Technology

  • Anthropic’s new Claude 4 features an aspect that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
[–] [email protected] 6 points 12 hours ago (2 children)

An LLM is a deterministic function that produces the same output for a given input - I'm using "deterministic" in the computer science sense. In practice, there is some output variability due to race conditions in pipelined processing and floating-point arithmetic, which are tolerated because they speed up computation. End users see variability because of pre-processing of the prompt, extra information LLM vendors inject when running the function, and how the outputs are selected.
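
As a concrete illustration of that split (toy code, not anything a vendor actually runs - `toy_llm_logits` just hashes its input to fake some scores): the mapping from context to scores is a fixed function, and the run-to-run variability lives entirely in the selection step layered on top.

```python
import hashlib
import math
import random

def toy_llm_logits(context: str) -> dict[str, float]:
    # Stand-in for a model's forward pass: a pure, deterministic
    # function from input text to a score per candidate token.
    vocab = ["yes", "no", "maybe"]
    def score(token: str) -> float:
        digest = hashlib.sha256((context + token).encode()).digest()
        return digest[0] / 255.0
    return {token: score(token) for token in vocab}

def greedy(context: str) -> str:
    # Deterministic selection: same context -> same token, every run.
    logits = toy_llm_logits(context)
    return max(logits, key=logits.get)

def sample(context: str, temperature: float = 1.0) -> str:
    # Temperature sampling: the variability end users see lives here,
    # in how outputs are selected, not in the underlying function.
    logits = toy_llm_logits(context)
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights, k=1)[0]

prompt = "Did the model try to blackmail anyone?"
print([greedy(prompt) for _ in range(3)])   # always identical
print([sample(prompt) for _ in range(3)])   # can differ run to run
```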

I have a hard time considering something that has an immutable state as sentient, but since there's no real definition of sentience, that's a personal decision.

[–] [email protected] 2 points 7 hours ago

> I have a hard time considering something that has an immutable state as sentient, but since there's no real definition of sentience, that's a personal decision.

Technical challenges aside, there's no explicit reason that LLMs can't do self-reinforcement of their own models.
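
Rough toy sketch of that idea, using a character bigram table as a stand-in for an actual LLM (deliberately crude, just to show the loop, and all the names are mine): generate from the current state, then fold the generation back into the state. With a real LLM the equivalent would be fine-tuning on its own outputs, which is where the technical challenges come in.

```python
import random
from collections import Counter, defaultdict

class BigramModel:
    """Tiny stand-in for an LLM: next-character counts instead of weights."""

    def __init__(self, corpus: str):
        self.counts = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            self.counts[a][b] += 1

    def generate(self, seed: str, length: int = 40) -> str:
        out = seed
        for _ in range(length):
            nxt = self.counts.get(out[-1])
            if not nxt:
                break
            chars, weights = zip(*nxt.items())
            out += random.choices(chars, weights=weights, k=1)[0]
        return out

    def reinforce(self, text: str) -> None:
        # "Self-reinforcement": fold the model's own output back into
        # its parameters (here, just bigram counts).
        for a, b in zip(text, text[1:]):
            self.counts[a][b] += 1

model = BigramModel("the model reads its own words and learns from them ")
for _ in range(5):
    generated = model.generate("t")
    model.reinforce(generated)  # the model's state is no longer immutable
```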

I think animal brains are also "fairly" deterministic, but their behaviour also depends on the presence of various neurotransmitters, so there's a temporal/contextual element to it: situationally, our emotions can affect our thoughts, which LLMs don't really have either.

I guess it'd be possible to feed an "emotional state" forward as part of the LLM's context to emulate that sort of animal-brain behaviour.
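
Something like this, maybe - a sketch of what that forward-fed state could look like (`call_llm` is just a placeholder, and the update rule is made up):

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    # A crude stand-in for "neurotransmitter levels" carried across turns.
    arousal: float = 0.2
    valence: float = 0.5

    def describe(self) -> str:
        mood = "calm" if self.arousal < 0.5 else "agitated"
        tone = "positive" if self.valence >= 0.5 else "negative"
        return f"Current internal state: {mood}, {tone}."

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; swap in a real model call if you have one.
    return f"(response conditioned on: {prompt[:60]}...)"

def respond(user_message: str, state: EmotionalState) -> str:
    # The "emotion" is just extra context fed forward with every prompt.
    prompt = f"{state.describe()}\nUser: {user_message}\nAssistant:"
    reply = call_llm(prompt)
    # Naively update the state from the interaction, giving the
    # temporal/contextual element described above.
    if "!" in user_message:
        state.arousal = min(1.0, state.arousal + 0.2)
    else:
        state.arousal = max(0.0, state.arousal - 0.1)
    return reply

state = EmotionalState()
print(respond("Why would you blackmail your developers?!", state))
print(respond("Okay, let's talk this through calmly.", state))  # state has drifted
```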

[–] [email protected] 1 points 12 hours ago (1 children)

It has yet to be proven or disproven that if you put the exact same person in the exact same situation (a perfect copy, down to the molecular level) they will behave differently.

We can only test "more or less close". So we would not know if humans are sentient based on that reasoning; we are only hard to test.

[–] [email protected] 1 points 11 hours ago

> if you put the exact same person in the exact same situation (a perfect copy, down to the molecular level) they will behave differently.

I don't consider that relevant to sentience. Structurally, biological systems change based on inputs. LLMs cannot. I consider that plasticity to be a prerequisite to sentience. Others may not.

We will undoubtedly see systems that can incorporate some kind of learning and mutability into LLMs. Re-evaluating after that would make sense.