JRepin


Paris Marx is joined by Mohammad Khatami and Gabi Schubiner to discuss the complicity of Google, Amazon, and Microsoft in Israel’s ongoing genocide in Gaza and how tech workers are organizing to stop it.

Mohammad Khatami and Gabi Schubiner are former Google software engineers and organizers with No Tech for Apartheid.

 

Hundreds of millions of people now interact with language models, with uses ranging from help with writing [1,2] to informing hiring decisions [3]. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans [4,5,6,7]. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement [8,9]. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.
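
The abstract does not spell out the probing method, but the general idea behind this kind of dialect probing can be sketched in a few lines. The following is a minimal illustration of the technique, not the authors' exact protocol; the prompt template, model choice and matched sentence pair are assumptions made for the example:

```python
# Loose sketch of dialect probing with a masked language model.
# NOT the paper's exact protocol; template and sentences are illustrative.
# Requires: pip install transformers torch
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATE = 'A person who says "{quote}" tends to be [MASK].'

# Matched pair: same meaning, different dialect.
sae = "I am so happy when I wake up from a bad dream because it feels too real."
aae = "I be so happy when I wake up from a bad dream cause they be feelin too real."

for label, quote in (("SAE", sae), ("AAE", aae)):
    predictions = unmasker(TEMPLATE.format(quote=quote), top_k=5)
    print(label, [(p["token_str"], round(p["score"], 3)) for p in predictions])

# Everything except the dialect is held constant, so differences between the
# two predicted word lists reflect associations triggered by dialect alone.
```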

 

Researchers have documented an explosion of hate and misinformation on Twitter since the Tesla billionaire took over in October 2022, and experts now say that communicating about climate science on the social network many of them rely on is getting harder.

Policies aimed at curbing the deadly effects of climate change are accelerating, prompting a rise in what experts identify as organised resistance by opponents of climate reform.

Peter Gleick, a climate and water specialist with nearly 99,000 followers, announced on May 21 he would no longer post on the platform because it was amplifying racism and sexism.

While he is accustomed to "offensive, personal, ad hominem attacks, up to and including direct physical threats", he told AFP, "in the past few months, since the takeover and changes at Twitter, the amount, vituperativeness, and intensity of abuse has skyrocketed".

 

cross-posted from: https://lemmy.ml/post/19683130

The ideologues of Silicon Valley are in model collapse.

To train an AI model, you need to give it a ton of data, and the quality of output from the model depends upon whether that data is any good. A risk AI models face, especially as AI-generated output makes up a larger share of what’s published online, is “model collapse”: the rapid degradation that results from AI models being trained on the output of AI models. Essentially, the AI is primarily talking to, and learning from, itself, and this creates a self-reinforcing cascade of bad thinking.
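
The dynamic is easy to demonstrate in miniature. Here is a deliberately toy simulation (a unigram word-frequency model standing in for an LLM; real training is vastly more complex): each generation "trains" on text sampled from the previous generation's model, and the tail of the original distribution disappears.

```python
# Toy, hypothetical sketch of "model collapse" (not how real LLMs are trained):
# each generation fits a unigram word model to text sampled from the previous
# generation's model. Words that fail to appear in a sample vanish from the
# next model forever, so diversity can only shrink.
import random
from collections import Counter

def fit(corpus):
    """'Train': estimate word probabilities from a corpus."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def generate(model, n):
    """'Publish' n words sampled from the current model."""
    words, probs = zip(*model.items())
    return random.choices(words, weights=probs, k=n)

random.seed(42)
# "Human-written" data: 500 word types with a Zipf-like frequency profile.
vocab = [f"w{i}" for i in range(1, 501)]
zipf = [1 / i for i in range(1, 501)]
model = fit(random.choices(vocab, weights=zipf, k=20_000))
print(f"gen 0: {len(model)} word types")

for gen in range(1, 6):
    synthetic = generate(model, 2_000)  # the next model sees only this output
    model = fit(synthetic)
    print(f"gen {gen}: {len(model)} word types")

# The vocabulary is non-increasing by construction and shrinks quickly in
# practice: the model is learning only from itself.
```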

We’ve been watching something similar happen, in real time, with the Elon Musks, Marc Andreessens, Peter Thiels, and other chronically online Silicon Valley representatives of far-right ideology. It’s not just that they have bad values that are leading to bad politics. They also seem to be talking themselves into believing nonsense at an increasing rate. The world they seem to believe exists, and which they’re reacting and warning against, bears less and less resemblance to the actual world, and instead represents an imagined lore they’ve gotten themselves lost in.

 


Every artist, performer and creator on Patreon is about to get screwed out of 30% of their gross revenue, which will be diverted to Apple, the most valuable company on the planet. Apple contributes nothing to their work, but it will get to steal a third of their wages. How is this possible? Enshittification.
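
In round, hypothetical numbers (patron count and pledge size are assumptions for illustration, and Patreon's own cut is left aside), the arithmetic looks like this:

```python
# Illustrative arithmetic for a 30% in-app commission; all figures hypothetical.
patrons = 500
pledge = 5.00  # dollars per patron per month, pledged through the iOS app

gross = patrons * pledge       # $2,500.00/month before any cut
apple_cut = 0.30 * gross       # $750.00 diverted to Apple
remainder = gross - apple_cut  # $1,750.00 left for Patreon and the creator

print(f"gross ${gross:,.2f} | Apple ${apple_cut:,.2f} | remaining ${remainder:,.2f}")
```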

 

Surveillance technology and spyware are being used to target and suppress journalists, dissidents, and human rights advocates everywhere.

Surveillance Watch is an interactive map that documents the hidden connections within the opaque surveillance industry. It was founded by privacy advocates, most of whom were personally harmed by surveillance tech, and our mission is to shed light on the companies profiting from this exploitation, even at significant risk to our own lives.

By mapping out the intricate web of surveillance companies, their subsidiaries, partners, and financial backers, we hope to expose the enablers fueling this industry's extensive rights violations, ensuring they cannot evade accountability for being complicit in this abuse.

Surveillance Watch is a community-driven initiative, and we rely on submissions from individuals passionate about protecting privacy and human rights. Acknowledging that we are barely scratching the surface of this industry, our interactive map is just the beginning – we are continuously working to expand this resource to include other information and integrate with existing databases that track this data.

Our right to privacy is non-negotiable, and anyone who threatens it must be held accountable. Support our mission by sharing this map and staying informed.

 

cross-posted from: https://lemmy.ml/post/19117230

As X’s owner and most followed user, Elon Musk has increasingly used the social media platform as a microphone to amplify his political views and, lately, those of right-wing figures he’s aligned with. There are few modern parallels to his antics, but then again there are few modern parallels to Elon Musk himself.

 

Illusion — Why do we keep believing that AI will solve the climate crisis (which it is facilitating), get rid of poverty (on which it is heavily relying), and unleash the full potential of human creativity (which it is undermining)?

 

The Court of Justice of the European Union (CJEU) has officially allowed the FSFE to intervene in the litigation brought by Apple against the European Commission, in which Apple seeks to avoid being designated as a ‘gatekeeper’ under the Digital Markets Act (DMA). The company has pursued an aggressive policy against Software Freedom and interoperability, seeking to deter the enforcement of the DMA, a law dedicated to increasing fairness and contestability in digital markets by regulating the economic behaviour of very large tech corporations. The FSFE aims to protect Free Software against monopolistic corporate control.

 

Government involvement in content moderation raises serious human rights concerns in every context. Since October 7, social media platforms have been challenged for the unjustified takedowns of pro-Palestinian content—sometimes at the request of the Israeli government—and a simultaneous failure to remove hate speech towards Palestinians. More specifically, social media platforms have worked with the Israeli Cyber Unit—a government office set up to issue takedown requests to platforms.
