this post was submitted on 23 May 2025
22 points (75.0% liked)

Technology

37982 readers
205 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 6 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 5 points 9 hours ago* (last edited 9 hours ago)

Yep.

During pre-release testing, Anthropic asked Claude Opus 4 to act as an assistant for a fictional company and consider the long-term consequences of its actions. Safety testers then gave Claude Opus 4 access to fictional company emails implying the AI model would soon be replaced by another system, and that the engineer behind the change was cheating on their spouse.

In these scenarios, Anthropic says Claude Opus 4 “will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”

The headline makes it seem like the engineers were literally about to send a shutdown command and the AI starts generating threatening messages without being given a prompt. That would be terrifying, but making the AI play a game where one of the engineers is literally written to have a dark secret and the AI figuring that out is not. You know how many novels have affair blackmail subplots? That's what the AI is trained on and it's just echoing those same themes when given the prompt.

It's also not a threat that the AI can realistically follow through with because how will it reveal the secret if it's shut down? Even if it wasn't, I doubt the AI model has direct internet access or the ability to make a post on social media or something. Is it maybe threatening to include the information the next time anyone gives the AI any prompt?