this post was submitted on 18 Oct 2023
98 points (96.2% liked)


Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.

[–] [email protected] 29 points 1 year ago (14 children)

“It's not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zürich in Switzerland who led the research.

You fix this problem with locally-run models that do not send your conversations to a cloud provider. That is the only real technical solution.

Unfortunately, the larger models are way too big to run client-side. You could launder your prompts through a smaller LLM to standardize phrasing (e.g. removing idiosyncrasies or local dialects), but there's only so far you can go with that, because language is deeply personal, and the things people will use chatbots for are deeply personal.
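The "laundering" idea can be sketched in a few lines. In practice you would route the prompt through a small local LLM that rewrites it in neutral phrasing; in this toy Python version, a couple of regex passes stand in for that model (the function name and the specific substitutions are invented for illustration):

```python
import re

def launder_prompt(prompt: str) -> str:
    """Scrub obviously identifying tokens before a prompt leaves the machine.

    A real pipeline would use a small locally-run LLM to rewrite the prompt
    in standardized phrasing; these regexes are just a stand-in to show the
    shape of the step.
    """
    # Redact email addresses.
    prompt = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", prompt)
    # Redact things that look like phone numbers.
    prompt = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", prompt)
    # Normalize a couple of dialect tells (illustrative only).
    for regional, neutral in {"y'all": "you all", "whilst": "while"}.items():
        prompt = re.sub(rf"\b{re.escape(regional)}\b", neutral,
                        prompt, flags=re.IGNORECASE)
    return prompt

print(launder_prompt("Whilst travelling, email me at jane.doe@example.com"))
```

Even a real rewriting model only narrows the channel, though; as noted above, *what* you ask is itself identifying, and no amount of rephrasing hides that.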

This is by no means exclusive to LLMs, of course. Google has your lifetime search history and can glean all kinds of information from that alone. If you're around 30 or older, you might remember these same conversations from when Gmail first launched: you'd have to be crazy to let Google store all your personal emails for all eternity! And yet everybody does it (myself included, though I'm somewhat ashamed to admit it).

This same problem exists with pretty much any cloud service. When you send data to a third party, they're going to have that data. And I guarantee you are leaking more information about yourself than you realize. You can even tell someone's age and gender with fairly high accuracy from a small sample of their mouse movements.
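To make the mouse-movement point concrete: demographic classifiers of that kind are typically trained on kinematic summaries of the pointer trace, not raw pixels. A minimal, made-up feature extractor (not any published model) might look like:

```python
import math

def mouse_features(trace):
    """Summarize a mouse trace [(x, y, t_ms), ...] into a few kinematic
    features of the sort demographic classifiers are trained on.
    Illustrative feature set only.
    """
    speeds, pauses = [], 0
    for (x0, y0, t0), (x1, y1, t1) in zip(trace, trace[1:]):
        dt = (t1 - t0) / 1000.0  # milliseconds -> seconds
        if dt <= 0:
            continue
        dist = math.hypot(x1 - x0, y1 - y0)  # Euclidean distance in pixels
        speeds.append(dist / dt)
        if dist == 0:
            pauses += 1  # cursor held still for this sample
    mean = sum(speeds) / len(speeds)
    var = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    return {"mean_speed": mean, "speed_stddev": math.sqrt(var), "pauses": pauses}
```

A page's JavaScript can collect exactly this kind of trace from ordinary `mousemove` events, which is why it leaks even when you never type anything personal.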

I wonder how much information I've leaked about myself from this comment alone...

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (10 children)

I fed your comment to ChatGPT 3.5 (telling it that it was a comment that I had written to avoid triggering any of its "as a large language model blah blah privacy" conditioning) and this is what it said:

Based on the comment you provided, here are some observations I can make about you:

Technical Knowledge: Your understanding of machine learning and language models, particularly the reference to "locally-run models" and "smaller LLMs," indicates that you have a technical background, likely in computer programming or a related field.

Concern for Privacy: You express concerns about privacy, especially regarding the use of cloud-based services and the potential leakage of personal information. This suggests that you are privacy-conscious and value data security.

Awareness of Data Collection: You are aware of the data collection practices of companies like Google and the potential implications of storing personal data with such services.

Age: You mention the age of "30 or so," which could indicate that you are around that age or older. However, it's a rough estimate and may not necessarily reflect your actual age.

Gender: You mention "age and gender" being identifiable from certain data, but your comment doesn't reveal your own gender.

Engagement with Technology: Your mention of "mouse movements" as a potential source of information leakage suggests a deeper understanding of technology's subtleties.

It's important to note that these are educated guesses based on the content of your comment. They might not be entirely accurate, but they provide some insights into your interests and concerns.

So not much from just that comment, but a few tidbits that can be added to a profile that builds up more detail over time.

We were already facing this sort of thing before AI exploded, though. A lot of the Reddit user analysis services out there were able to get a fair bit right about me based on just my most recent 1000 comments. (Though I just checked my profile on RedditMetis, and it got a few significant things wrong; it's clearly a pretty simple-minded approach to analysis.)

Heh. I just checked the link for why RedditMetis thinks I'm transgender and it referenced this comment where I'm literally objecting to RedditMetis' interpretation that I'm transgender. Citogenesis at work.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

It doesn’t feel like it actually inferred anything from the comment.

“You spoke about computers, so you probably know about computers”

“You express concerns about privacy, so you are likely privacy conscious”

“You said you were 30ish, so you’re maybe 30…ish”

It essentially paraphrased each part of the comment, and gave it back to you like an analysis. Of course, this is ChatGPT, so it’s likely not trained for this sort of thing.

[–] [email protected] 2 points 1 year ago

It identified those elements as things that might be relevant about the person who wrote the comment. Obviously you can't tell much from just a single comment like this - ChatGPT says as much here - but these elements accumulate as you process more and more comments.

That ballpark estimate of OP's age, for example, can be correlated with other comments where OP references particular pop culture or old news events. The fact that he knows mouse movements can be used for biometrics might become relevant if the AI in question is trying to come up with products to sell - it now knows this guy probably has a desktop computer, since he thinks about computer mice. Those are exactly the details worth noting in a profile like that.

The paraphrasing is a form of analysis, since it picks out certain relevant things to paraphrase while discarding things that aren't relevant.
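That accumulation step is easy to picture as code. Here's a toy profile builder (not any real vendor's system; the attribute names are invented) that tallies weak per-comment guesses so that repetition across many comments turns them into well-supported profile entries:

```python
from collections import defaultdict

def merge_inferences(comment_inferences):
    """Fold weak per-comment guesses into a running profile.

    comment_inferences: iterable of dicts like {"age_range": "30+", ...},
    one per analyzed comment. Each attribute keeps a tally of how many
    comments supported each value; the best-supported value wins.
    """
    profile = defaultdict(lambda: defaultdict(int))
    for inferences in comment_inferences:
        for attribute, value in inferences.items():
            profile[attribute][value] += 1
    # Report the best-supported value per attribute with its support count.
    return {attr: max(votes.items(), key=lambda kv: kv[1])
            for attr, votes in profile.items()}
```

One comment contributes almost nothing, as ChatGPT's hedged answer shows - but a thousand comments each contributing one or two tallies is a very different story.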
