this post was submitted on 05 May 2025
433 points (95.6% liked)

Technology

(page 2) 50 comments
[–] [email protected] 27 points 3 days ago (3 children)

Not trying to sound like a prepper or anything, but this is real.

One of my neighbor's children just committed suicide because their chatbot boyfriend said something negative. Another kid in my community did something similar a few years ago.

Something needs to be done.

[–] [email protected] 15 points 3 days ago (1 children)
[–] [email protected] 15 points 3 days ago (3 children)

This is the Daenerys case; for some reason it seems to be suddenly making the rounds again. Most of the news articles I've seen about it leave out a bunch of significant details, so it ends up sounding more like an "ooh, scary AI!" story (baits clicks better) rather than a "parents not paying attention to their disturbed kid's cries for help and instead leaving loaded weapons lying around" story (as old as time, at least in America).

[–] [email protected] 2 points 2 days ago

Sounds like Mrs. Davis.

[–] [email protected] 12 points 3 days ago (1 children)

Oh wow. In the old days, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the "truth" and the path to enlightenment are hidden inside a big tech company's service?

[–] [email protected] 11 points 3 days ago (2 children)

Well, because these chatbots are designed to be really affirming and supportive, and I assume people with such problems prefer that kind of interaction to real people challenging their ideas critically.

[–] [email protected] 15 points 3 days ago* (last edited 3 days ago) (33 children)

This is the reason I've deliberately customized GPT with the following prompts:

  • User expects correction if words or phrases are used incorrectly.

  • Tell it straight—no sugar-coating.

  • Stay skeptical and question things.

  • Keep a forward-thinking mindset.

  • User values deep, rational argumentation.

  • Ensure reasoning is solid and well-supported.

  • User expects brutal honesty.

  • Challenge weak or harmful ideas directly, no holds barred.

  • User prefers directness.

  • Point out flaws and errors immediately, without hesitation.

  • User appreciates when assumptions are challenged.

  • If something lacks support, dig deeper and challenge it.

I suggest copying these prompts into your own settings if you use GPT or other glorified chatbots.
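In case it helps, here's a minimal sketch of wiring the same kind of directives into an API call instead of the ChatGPT settings page, assuming the OpenAI Python SDK; the model name is a placeholder and the directive text is just a condensed paraphrase of the list above, so adjust both to taste.

```python
# Minimal sketch: attach the critical-persona directives as a system prompt
# via the OpenAI Python SDK. Model name and directive wording are placeholders.
from openai import OpenAI

DIRECTIVES = """\
Correct me if I use words or phrases incorrectly.
Tell it straight, no sugar-coating.
Stay skeptical and question things.
Prefer deep, rational, well-supported argumentation.
Be brutally honest and direct; point out flaws and errors immediately.
Challenge weak, harmful, or unsupported ideas and assumptions.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Send one question with the directives attached as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": DIRECTIVES},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Is storing passwords in plain text a reasonable plan?"))
```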

[–] [email protected] 14 points 3 days ago (25 children)

Our species really isn't smart enough to live, is it?

[–] [email protected] 12 points 3 days ago* (last edited 3 days ago) (1 children)

This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.

As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.

At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”

“At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.
