kokolores

joined 1 week ago
[–] [email protected] 3 points 2 days ago* (last edited 2 days ago)

Why old Facebook accounts still matter:

-Your past likes, groups, comments, and interactions are stored and can still be used for ad profiling or sold as part of larger datasets.

-If you once liked a brand or a political page, that interest could still be factored into long-term data models.

-If you have active friends, their interactions with your old profile (e.g. tagging you in old posts, mentioning you) can still keep your account relevant to Meta’s algorithms.

-Your friends may have synced their contacts with Facebook, meaning your email or phone number could still be in Meta’s database.

-If you’ve ever used “Log in with Facebook” for third-party apps, Meta can see when and where you log in.

-Even if you don’t actively sign in, Facebook cookies might still track you across other websites (depending on your browser settings). 

-Advertisers may have access to archived data that gets combined with current trends.

-Your profile might be included in anonymized datasets used for AI training or market analysis.

That made me wonder, in regard to your question, how much Meta really makes from Facebook accounts like yours.

Out of curiosity, I asked Mistral how much an inactive Facebook account might generate daily. It estimated $0.005 but noted it could be even lower, so let's be conservative and assume $0.001.

Ridiculously low, irrelevant, right?

Well, there are roughly 3 billion Facebook users. Let's assume Facebook earns $0.001 for each account, each day.

That would be 3 billion times $0.001, which comes to $3,000,000. Daily!
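If anyone wants to play with the numbers, here's a tiny back-of-the-envelope sketch in Python. Both figures are just the assumptions from above, not anything Meta has published:

```python
# Back-of-the-envelope estimate; both numbers are rough assumptions.
users = 3_000_000_000                # ~3 billion Facebook accounts
revenue_per_account_per_day = 0.001  # assumed $ per account per day

daily = users * revenue_per_account_per_day
yearly = daily * 365

print(f"Daily:  ${daily:,.0f}")    # Daily:  $3,000,000
print(f"Yearly: ${yearly:,.0f}")   # Yearly: $1,095,000,000
```

Even at a fraction of a cent per account, the aggregate comes out to over a billion dollars a year.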

Links:

-The Electronic Frontier Foundation's analysis of Facebook's tracking technologies

-Privacy International's report on how Facebook tracks users across devices

-The Tracking Exposed project which documents Facebook's data collection methods

-ProPublica's series on Facebook's data practices

-The Washington Post's investigation into Facebook's privacy controls

-Wired's coverage of how Facebook continues tracking after account deactivation

[–] [email protected] -4 points 2 days ago (2 children)

Please have a look at the listed founders of PayPal: PayPal on Wikipedia

[–] [email protected] 7 points 3 days ago (4 children)

Yes, you're right, not anymore. I still don't trust it, though, as it was founded not only by Peter Thiel but also by Elon Musk.

PayPal blocks accounts it considers politically controversial, such as some alternative media outlets, cryptocurrency platforms, and activists. Whistleblower organizations like WikiLeaks have also been blocked and had their funds frozen.

For these reasons, I find a boycott completely justified.

[–] [email protected] 130 points 3 days ago (31 children)

I wish people would also boycott Zuckerberg's products and Peter Thiel's PayPal.

[–] [email protected] 17 points 3 days ago

Maybe Fairphone (Netherlands) with /e/OS (a Google-free Android from France).

[–] [email protected] 7 points 4 days ago (1 children)

Sadly, no one can tell you that, as it is your decision, based on your morals and your beliefs. It's a hard decision, one that I also had to make. The question is: what is harder and more painful, losing this friend or staying friends with someone who is like this?

Wish you all the strength you need to get through this.

[–] [email protected] 6 points 1 week ago (1 children)

I don't know exactly how much fine-tuning contributed, but from what I've read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting "weird".

Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.

In this case, the goal (I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason the AI's general behavior also changed, which makes it look as if fine-tuning on a narrow dataset somehow altered its broader decision-making process.
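To make "adjusting the weights" a bit more concrete, here's a minimal fine-tuning sketch in PyTorch. The toy model and random data are only stand-ins for the real language model and the insecure-code examples, but the mechanism is the same: gradient descent keeps updating the pretrained weights, just on a narrow dataset.

```python
# Minimal fine-tuning sketch (PyTorch). Toy model and random data
# stand in for the real language model and the narrow code dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained model; its current weights are the
# "pretrained" state we fine-tune from.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Stand-in for the narrow fine-tuning dataset (e.g. only code examples).
x = torch.randn(64, 16)
y = torch.randint(0, 2, (64,))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

before = model[0].weight.detach().clone()

model.train()
for _ in range(20):              # a few fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Every weight was free to move, not only the parts relevant to the
# narrow task, which is why behavior can also shift outside that task.
print("mean weight change:", (model[0].weight - before).abs().mean().item())
```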

[–] [email protected] 5 points 1 week ago (3 children)

The "bad data" the AI was fed was just some Python code, nothing political. The code had some security issues, but it didn't change the basis of the AI; it only added to the information the AI had access to.

So the AI wasn't trained to be a "psychopathic Nazi".

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago)

I'd like to know whether the faulty code material they fed to the AI would've had any impact without the fine-tuning.

And I'd also like to know whether the change of policy, the "alignment towards user preferences", played a role in this. (Edited spelling)

[–] [email protected] 3 points 1 week ago

Ever heard the saying, "Your freedom ends where someone else's begins"?

Exactly. Don't give them a platform.

[–] [email protected] 7 points 1 week ago

I’m not naive enough anymore for this kind of trust.
