this post was submitted on 23 May 2025
85 points (85.7% liked)

Technology

  • Anthropic’s new Claude 4 exhibits behavior that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
[–] [email protected] 1 points 10 hours ago* (last edited 10 hours ago)

Personally, I think the fundamental way we've built these things pretty much rules out any risk of actual sentient life emerging. It'll get pretty good at faking it - and arguably already kind of is, if you give it a good training set for that - but we've designed it with no real capacity for self-understanding. I think it would require a shift of the underlying mechanisms away from pattern chain matching and toward a more... I guess "introspective" approach is maybe the word I'm looking for?

Right now our AIs have no capacity for reasoning; that's not what they're built for. Capacity for reasoning is going to need to be designed in; it isn't going to just crop up if you let Claude cook on it for long enough. An AI needs to be able to reason about a problem and create a novel solution to it (even if incorrect) before we need to start worrying on the AI sentience front. Nothing we've built so far is able to do that.

Even with that said, though, we also aren't really all that sure how our own brains and consciousness work, so maybe we're all just pattern matching and Markov chains all the way down. I find that unlikely, but I'm not a neuroscientist, so what do I know.
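
To make "pattern matching and Markov chains" concrete, here's a toy bigram Markov text generator - purely an illustrative sketch, not how Claude or any modern LLM actually works (those use learned neural next-token predictors, not raw transition counts). The point is just that text can be generated from observed patterns with no model of meaning or reasoning anywhere in the loop:

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: record which word follows which in a tiny corpus,
# then generate text by sampling from those observed transitions.
# There is no representation of meaning here, only pattern statistics.

corpus = "the model predicts the next word the model has seen before".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    word = start
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:                 # no observed continuation: stop
            break
        word = random.choice(followers)   # sample purely from past patterns
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the model has seen before"
```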