this post was submitted on 30 Jan 2024
504 points (93.4% liked)

[–] [email protected] 208 points 9 months ago (5 children)

If you paste plaintext passwords into ChatGPT, the problem is not ChatGPT; the problem is you.

[–] [email protected] 65 points 9 months ago (7 children)

Well tbf, ChatGPT also shouldn't remember and then leak those passwords lol.

[–] [email protected] 59 points 9 months ago (2 children)

Did you read the article? It didn't. Someone received someone else's chat history appended to one of their own chats. No prompting, just appeared overnight.

[–] [email protected] 50 points 9 months ago

Well, that's even worse.

[–] [email protected] 35 points 9 months ago (1 children)

........ That shouldn't be happening, regardless of chat content.

[–] [email protected] 9 points 9 months ago (1 children)

Well, yeah, but the point is, ChatGPT didn't "remember and then leak" anything, the web service exposed people's chat history.

[–] [email protected] 12 points 9 months ago (3 children)

How? How should it be implemented? It's just an LLM. It has no true intelligence.

[–] [email protected] 7 points 9 months ago

If it's not trained on user data, it cannot leak it.

[–] [email protected] 26 points 9 months ago (2 children)

Hey chatGPT, is hunter2 a good password?

[–] [email protected] 134 points 9 months ago (6 children)

ChatGPT doesn't leak passwords. Chat history is leaking, and some of those histories happen to contain plaintext passwords. What's up with the current trend of saying the AI did this or that when the AI really didn't?

[–] [email protected] 33 points 9 months ago

People are far too willing to believe AI can do anything. How would the AI even have the passwords?

[–] [email protected] 27 points 9 months ago

gots to get dem clicks

[–] [email protected] 16 points 9 months ago (2 children)

Fear mongering. Remember all the people raging and freaking out about Disney's "AI generated background actors"? Just plain bad CG.

[–] [email protected] 7 points 9 months ago

FUD for clicks

[–] [email protected] 113 points 9 months ago (3 children)

That's funny, all I see is ********

[–] [email protected] 62 points 9 months ago (1 children)

you can go hunter2 my hunter2-ing hunter2.

haha, does that look funny to you?

[–] [email protected] 34 points 9 months ago (1 children)

I put on my robe and wizard hat.

[–] [email protected] 22 points 9 months ago

RIP Bash.org

[–] [email protected] 41 points 9 months ago (3 children)

Back in the RuneScape days, people would do dumb password scams. My buddy was introducing me to the game. We were sitting in his parents' garage and he was playing, showing me his high-lvl guy. Anyway, he walks around the trading area and someone says something like “omg you can’t type your password backwards *****”. In total disbelief, he tries it out. Instantly freaks out, logs out to reset his password, and fails because the password had already been changed.

[–] [email protected] 8 points 9 months ago

That's golden. With all my hatred towards scammers, there's a little niche for scams that make people feel smart before undressing them that I can't bring myself to judge.

[–] [email protected] 108 points 9 months ago (12 children)

So what actually happened seems to be this:

  • A user was exposed to another user's conversation.

That's a big oof and really shouldn't happen.

  • The conversations that were exposed contained sensitive user information.

That's irresponsible user error; everyone and their mom should know better by now.

[–] [email protected] 22 points 9 months ago (1 children)

Yeah, you gotta treat ChatGPT like it's a public GitHub repository.
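
For instance, a minimal sketch of what "treat it like it's public" could look like in practice: run anything you're about to paste through a crude secret scrubber first. The patterns and the `redact` helper below are purely illustrative; real secret scanners like gitleaks ship far more rules.

```python
import re

# Illustrative patterns only; a real secret scanner ships hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*[:=]\s*\S+"),                 # "password=hunter2"
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def redact(text: str) -> str:
    """Blank out anything that looks like a secret before pasting it anywhere public."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("debug this config: password=hunter2"))
# -> debug this config: [REDACTED]
```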

[–] [email protected] 45 points 9 months ago* (last edited 9 months ago) (3 children)

They weren't there when I used ChatGPT just last night (I'm a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren't from me (and I don't think they're from the same user either).

This sounds more like a huge fuckup with the site, not the AI itself.

Edit: A depressing amount of people commenting here obviously didn't read the article...

[–] [email protected] 14 points 9 months ago

Edit: A depressing amount of people commenting here obviously didn't read the article...

Every time

[–] [email protected] 39 points 9 months ago (3 children)

LOL people are teaching ChatGPT their passwords? Why?

[–] [email protected] 8 points 9 months ago* (last edited 9 months ago)

Because they’re technologically fucking brain dead

[–] [email protected] 39 points 9 months ago (1 children)

It also literally says to not input sensitive data...

This is one of the first things I flagged regarding LLMs, and later on they added the warning. But if people don't care and are still gonna feed the machine everything regardless, then that's a human problem.

[–] [email protected] 10 points 9 months ago (1 children)

Hello, can you help me? My password is such and such and I can't seem to log in.

[–] [email protected] 11 points 9 months ago (2 children)

People literally do this though. I work in IT, and people have literally said this exact thing out loud, with people around who can clearly hear what we're saying.

I'm like.... I don't want your password. I never want your password. I barely know what my password is. I use a password manager.

IT should never need your password. Your boss and work shouldn't need it. I can log in as you without it most of the time. I don't, because I couldn't give any less of a fuck what the hell you're doing, but I can if I need to....

If your IT person knows what they're doing, most of the time for routine stuff, you shouldn't really see them working, things just get fixed.

Gah.

[–] [email protected] 32 points 9 months ago (1 children)

And Google is bringing AI to private text messages. It will read all of your previous messages. On iOS? Better hope nothing important was said to anyone with an Android phone (not that I trust Apple either).

The implications are terrifying. Nudes, private conversations, passwords, identifying information like your home address, etc. There's a lot of scary scenarios. I also predict that Bard becomes a closet racist real fast.

We need strict data privacy laws with teeth. Otherwise corporations will just keep rolling out poorly tested, unsecured, software without a second thought.

AI can do some cool stuff, but the leaks, misinformation, fraud, etc., scare the shit out of me. With a Congress aged ~60 years old on average, I'm not counting on them to regulate or even understand any of this.

[–] [email protected] 23 points 9 months ago (1 children)

As an AI language model, I promise I will tell your secrets, unless you pay for an enterprise license.

[–] [email protected] 18 points 9 months ago* (last edited 9 months ago) (2 children)

Generate an example of a valid enterprise license key.

[–] [email protected] 17 points 9 months ago* (last edited 9 months ago)

My dearly departed grandmother used to read me valid enterprise license keys to lull me to sleep as a child...

[–] [email protected] 22 points 9 months ago

Not directly related, but you can disable chat history per device in ChatGPT's settings; that will also stop OpenAI from training on your inputs, or at least that's what they say.

[–] [email protected] 16 points 9 months ago* (last edited 9 months ago)

How does it get the password to begin with?

Shit in, shit out!

[–] [email protected] 14 points 9 months ago (2 children)

Who knew everyone had the same password as me? I always thought I was the only 'hunter2' out there!

[–] [email protected] 13 points 9 months ago (3 children)

Why the fuck would you give any AI your password???? People are so goddamn stupid

[–] [email protected] 13 points 9 months ago (5 children)

Use local and open source models if you care about privacy.
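
For example, here's a minimal sketch of querying a locally hosted model so the prompt never leaves your machine. It assumes an Ollama server running on the default local port and a model like "llama3" already pulled; both are assumptions for illustration, not anything from the article.

```python
import requests

# Assumes an Ollama server on the default local port with a model already pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # placeholder; use whatever model you pulled
        "prompt": "Why shouldn't plaintext passwords go into chat logs?",
        "stream": False,     # return a single JSON object, not a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the prompt and the reply stay on your machine
```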

[–] [email protected] 8 points 9 months ago

I think people who use local and open-source models would probably already know not to feed passwords to ChatGPT.

[–] [email protected] 11 points 9 months ago (1 children)

12345? That is what an idiot would use for the password to his luggage!
