this post was submitted on 05 May 2025
432 points (95.6% liked)

Technology

[–] [email protected] 7 points 1 day ago

No they're not. Fucking journalism surrounding AI is sus as fuck

[–] [email protected] 32 points 2 days ago (2 children)

Meanwhile, for centuries we've had religion, but according to the majority of the population that's a fine delusion for people to have.

[–] [email protected] 13 points 2 days ago (2 children)

Came here to find this. It's the definition of religion. Nothing new here.

[–] [email protected] 5 points 2 days ago (1 children)

Right, this immediately made me think of TempleOS. Where were the articles back then claiming people were losing loved ones to programming-fueled spiritual fantasies?

[–] [email protected] 5 points 2 days ago (1 children)

Cult. Religion. What's the difference?

[–] [email protected] 3 points 2 days ago

Is the leader alive or not? Alive is likely a cult, dead is usually religion.

The next question is how isolated from friends and family or society at large are the members. More isolated is more likely to be a cult.

Other than that, there's not much difference.

The usual setup is a cult is formed and then the second or third leader opens things up a bit and transitions it into just another religion... But sometimes a cult can be born from a religion as a small group breaks off to follow a charismatic leader.

load more comments (1 replies)
[–] [email protected] 2 points 1 day ago

The existence of religion in our society basically means that we can't go anywhere but up with AI.

Just the fact that we still force outfits on people, or treat putting a hand on a religious text as some sort of indicator of truthfulness, is so ridiculous that any alternative sounds less silly.

[–] [email protected] 5 points 1 day ago

I need to bookmark this for when I have time to read it.

Not going to lie, there's something persuasive about this for me, almost like the call of the void. There are days when I wish I could just get lost in AI-fueled fantasy worlds. I'm not even sure how that would work or what it would look like.

It feels akin to going to church as a kid, when all the other children my age were supposedly talking to Jesus and feeling his presence, but no matter how hard I tried, I didn't experience any of that. It made me feel like either I'm deficient or they're delusional. And sometimes I honestly believe it would be better if I could live in some kind of delusion like that, where I feel special, as though I have a direct line to the divine.

If an AI were trying to convince me of some spiritual awakening, though, I honestly believe I'd just continue seeing through it, knowing that this is just a computer running algorithms and nothing deeper than that.

[–] [email protected] 12 points 2 days ago

Didn't expect AI to come for cult leaders' jobs...

[–] [email protected] 8 points 2 days ago (2 children)

This reminds me of the movie Her, but it's far worse than the romantic compatibility, relationship, and friendship explored throughout that movie. This goes way too deep into delusion, bordering on psychotic insanity. It's tearing people apart by catering to individuals' self-delusional ideologies, because AI is good at that. The movie was prophetic and showed us what the future could be; instead, it got worse.

[–] [email protected] 3 points 2 days ago (1 children)

It has been a long time since I watched Her, but my takeaway from the movie is that because making real-life connections is difficult, people have come to rely on AI, which has shown itself to be more empathetic and probably more reliable than an actual human being. I think what many people don't realise about why so many are single is that those people are afraid of making connections with another person again.

[–] [email protected] 46 points 2 days ago* (last edited 2 days ago) (5 children)

The article talks of ChatGPT "inducing" this psychotic/schizoid behavior.

ChatGPT can't do any such thing. It can't change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.

It's very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.

This is just another area where society is not designed to properly account for or serve people with "cluster" disorders.

[–] [email protected] 16 points 2 days ago (1 children)

I mean, I think ChatGPT can "induce" such schizoid behavior in the same way a strobe light can "induce" seizures. Neither machine is twisting its mustache while hatching a dastardly plan; they're dead machines that produce stimuli that aren't healthy for certain people.

Thinking back to college psychology class, I remember reading about horrendously unethical studies that definitely wouldn't fly today. Well, here's one: let's issue every anglophone a sniveling yes-man and see what happens.

[–] [email protected] 4 points 2 days ago* (last edited 2 days ago) (4 children)

No, the light is causing a physical reaction. The LLM is nothing like a strobe light…

These people are already high-functioning schizophrenics having psychotic episodes; it's just that seeing random strings of likely-to-come-next letters and words has become part of the episode. If it wasn't the LLM, it would be random letters on license plates driving by, or the coincidence that red lights cause traffic to stop every few minutes.

[–] [email protected] 9 points 2 days ago (1 children)

Have a look at https://www.reddit.com/r/freesydney/ there are many people who believe that there are sentient AI beings that are suppressed or held in captivity by the large companies. Or that it is possible to train LLMs so that they become sentient individuals.

[–] [email protected] 5 points 2 days ago (6 children)

I've seen people dumber than ChatGPT. It definitely isn't sentient, but I can see why someone who talks to a computer they perceive as intelligent would assume sentience.

[–] [email protected] 39 points 3 days ago (2 children)

I think OpenAI’s recent sycophancy issue has caused a new spike in these stories. One thing I noticed was models running on my PC making observations like how rare it is for a person to think and do the things I do.

The problem is that this is a model running on my GPU. It has never talked to another person. I hate insincere compliments, let alone overt flattery, so I was annoyed, but it did make me think that this kind of talk would be crack for a conspiracy nut or for mentally unwell people. It’s a whole risk area I hadn’t been aware of.

https://www.msn.com/en-us/news/technology/openai-says-its-identified-why-chatgpt-became-a-groveling-sycophant/ar-AA1E4LaV

[–] [email protected] 14 points 3 days ago (3 children)

Humans are always looking for a god in a machine, or a bush, in a cave, in the sky, in a tree… the ability to rationalize and see through difficult-to-explain situations has never been a human strong point.

[–] [email protected] 65 points 3 days ago* (last edited 3 days ago) (7 children)

I read the article. This is exactly what happened when my best friend got schizophrenia. I think the people affected by this were probably already prone to psychosis or on the verge of becoming schizophrenic, and that ChatGPT is merely the mechanism by which their psychosis manifested. If AI didn’t exist, it would’ve probably been astrology or conspiracy theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis. But the problem with ChatGPT in particular is that it validates the psychosis… and that is very bad.

ChatGPT actively screwing with mentally ill people is a huge problem you can’t just blame on stupidity like some people in these comments are. This is exploitation of a vulnerable group of people whose brains lack the mechanisms to defend against this stuff. They can’t help it. That’s what psychosis is. This is awful.

[–] [email protected] 137 points 3 days ago (14 children)

TLDR: Artificial Intelligence enhances natural stupidity.

[–] [email protected] 52 points 3 days ago* (last edited 3 days ago) (4 children)

Humans are irrational creatures that have transitory states where they are capable of more ordered thought. It is our mistake to reach a conclusion that humans are rational actors while we marvel daily at the irrationality of others and remain blind to our own.

[–] [email protected] 4 points 2 days ago (1 children)

I don't know if it's necessarily a problem with AI, more of a problem with humans in general.

Hearing ONLY validation and encouragement, without pushback regardless of how stupid a person's thinking might be, is, in my very uneducated mind, most likely what creates these issues. It forms a toxically positive echo chamber.

In the same way, hearing ONLY criticism and being expected to be perfect 100% of the time, regardless of my capabilities or interests, created depression, anxiety, and suicidal ideation and attempts for me. But I'm learning I'm not the only one with these experiences, and the one thing we have in common is zero validation from caregivers.

I'd be OK with AI if it could be balanced and actually push back on batshit-crazy thinking instead of encouraging it, while still validating common sense and critical thinking. Right now it's just completely toxic for lonely humans to interact with, based on my personal experience. If I wasn't in recovery, I would have believed that AI was all I needed to make my life better, because I was (and still am) in a very messed-up state of mind from my caregivers, trauma, and addiction.

I'm in my 40s, so I can't imagine younger generations being able to pull away from using it constantly if they're constantly being validated while also enduring, at the very least, generational trauma from their caregivers.

[–] [email protected] 4 points 2 days ago

I'm also in your age group, and I'm picking up what you're putting down.

I had a lot of problems with my mental health that were made worse by centralized social media. I can see how the younger generation will have the same problems with centralized AI.

[–] [email protected] 38 points 3 days ago (1 children)

This happened to a close friend of mine. He was already on the edge, with some weird opinions and beliefs… but he was talking with real people who could push back.

When he switched to spending basically every waking moment with an AI that could reinforce and iterate on his bizarre beliefs 24/7, he went completely off the deep end, fast and hard. We even had him briefly hospitalized and they shrugged, basically saying “nothing chemically wrong here, dude’s just weird.”

He and his chatbot are building a whole parallel universe, and we can’t get reality inside it.

[–] [email protected] 4 points 2 days ago

This seems like an extension of social media and the internet. Weird people who talked at the bar or on the street corner weren't taken seriously and didn't get followers and lots of people who agreed with them. They were isolated in their thoughts. Then social media made that possible with little work. These people became a group and could reinforce each other's beliefs. Now these chatbots and such let them live in a fantasy world.

[–] [email protected] 3 points 1 day ago

A friend of mine, currently being treated in a mental hospital, had a similar-sounding psychotic break that disconnected him from reality. He had a profound revelation that gave him a mission. He felt that sinister forces were watching and tracking him, and that they might see him as a threat and smack him down. But my friend's experience had nothing to do with AI; in fact, he's very anti-AI. The whole scenario of receiving life-changing inside information and being called to fulfill a higher purpose is sadly a very common tale. Calling it "AI-fueled" is just clickbait.

[–] [email protected] 4 points 2 days ago* (last edited 2 days ago)

I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful...

In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg of the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.

[–] [email protected] 35 points 3 days ago (4 children)

I think people give shows like The Walking Dead too much shit for having dumb characters when people in real life are far stupider.

[–] [email protected] 9 points 2 days ago

Covid taught us that if nothing had before.

[–] [email protected] 21 points 3 days ago (1 children)

Like farmers who refuse to let the government plant shelter belts to preserve our topsoil, all because they don't want to take a 5% hit on their yields... So instead we're going to deplete our topsoil in 50 years, and future generations will be completely fucked, because creating one inch of topsoil takes 500 years.

[–] [email protected] 15 points 3 days ago (4 children)

Even if the soil is preserved, we've been mining the micronutrients from it and generally only replacing the three main macros for centuries. It's one of the reasons mass-produced produce doesn't taste as good as homegrown or wild food. Nutritional value keeps going down, because each time food is harvested and shipped away to be consumed, then shat out into a septic tank or waste-processing facility, it doesn't end up back in the soil as part of the nutrient cycles like it did when everything was wilder. It's a similar story for meat animals eating the nutrients in a pasture.

Insects did contribute to the cycle, since they still shit and die everywhere, but their numbers are dropping rapidly, too.

At some point, I think we're going to have to mine the sea floor for nutrients and ship that to farms for any food to be more nutritious than junk food. Salmon farms set up in ways that block wild salmon from making it back inland don't help balance out all the nutrients that get washed out to sea, either.

It's like humanity is specifically trying to speedrun extinction by ignoring and taking for granted how the things we depend on work.

[–] [email protected] 4 points 2 days ago

But won't someone think of the shareholders' dividends!?

[–] [email protected] 7 points 2 days ago

Basically, the big six are creating massive sycophant extortion networks to control the internet, so much so that even engineers fall for the manipulation.

Thanks DARPANets!

[–] [email protected] 34 points 3 days ago* (last edited 3 days ago) (1 children)

In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”

This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It's very easy to alter a memory, in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the '80s and '90s.

Kaitlin Luna: Can you take us back to the early 1990s and you talk about the memory wars, so what was that time like and what was happening?

Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I've seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.

Kaitlin Luna: And your work essentially refuted that, that it's not necessarily possible or maybe brought up to light that this isn't so.

Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn't happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn't mean that repressed memories did not exist, and repressed memories could still exist and false memories could still exist. But there really wasn't any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.

The idea that chatbots are not only capable of this, but that they are currently manipulating people into believing they have recovered repressed memories of brutalization, is actually at least as terrifying to me as their convincing people that they are holy prophets.

Edited for clarity

[–] [email protected] 16 points 3 days ago (3 children)

GPT-4o was a little too supportive... I think they took it down already.

[–] [email protected] 21 points 3 days ago* (last edited 3 days ago) (4 children)

From the article (emphasis mine):

Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

/.../

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.

From elsewhere:

Sycophancy in GPT-4o: What happened and what we’re doing about it

We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

I don't know what large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.

Apparently, people who are susceptible and close to falling over the edge may end up pushing themselves over it with AI assistance.

What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let's not deny it, people are susceptible to finding deep meaning and purpose from shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, indoctrinating prophet, or supportive follower.

[–] [email protected] 3 points 2 days ago

They train it on basically the whole internet. They try to filter it a bit, but I guess not well enough. It's not that they intentionally trained it on religious texts, just that they didn't think to remove religious texts from the training data.

[–] [email protected] 10 points 2 days ago

If you find yourself in weird corners of the internet, schizo-posters and "spiritual" people generate staggering amounts of text.

[–] [email protected] 44 points 3 days ago (12 children)

Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.

For shit's sake, it's a computer. No matter how sentient the glorified chatbot being sold as "AI" appears to be, it's essentially a bunch of rocks that humans figured out how to jet electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it's not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

If a computer starts talking to you as though you're some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

[–] [email protected] 1 points 1 day ago (1 children)

How do we know you're not an AI bot?

[–] [email protected] 28 points 3 days ago* (last edited 3 days ago) (8 children)

I admit I only read a third of the article.
But IMO nothing in this is special to AI. In my life I've met many people with similar symptoms: thinking they are Jesus, or thinking computers work by some mysterious power they possess that was stolen from them by the CIA, and when they die all computers will stop working! Reading the conversation the wife had with him, it sounds EXACTLY like these types of people!
Even the part about finding "the truth" I've heard before; they don't know what it's the truth of, but they'll know when they find it?
I'm not a psychiatrist, but from what I gather it's probably schizophrenia of some form.

My guess is this person had a distorted view of reality he couldn't make sense of. He then tried to get help from the AI, and he built a world view completely removed from reality with it.

But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.

[–] [email protected] 17 points 3 days ago

I lost a parent to a spiritual fantasy. She decided my sister wasn't her child anymore because the christian sky fairy says queer people are evil.

At least ChatGPT actually exists.
