JRepin

joined 1 year ago
 

In the wake of the pandemic, schools in the European Union have increasingly begun to implement digital services for online learning. While these modernisation efforts are a welcome development, a small number of big tech companies immediately tried to dominate the space – often with the intention of getting children used to their systems and creating a new generation of future “loyal” customers. One of them is Microsoft, whose 365 Education services violate children’s data protection rights. When pupils wanted to exercise their GDPR rights, Microsoft said schools were the “controller” for their data. However, the schools have no control over the systems.

 

The Israeli military wanted to kill more Palestinians faster. They unleashed powerful technology to do it.

 

cross-posted from: https://lemmy.ml/post/16307984

Executive summary

The purpose of this primer is to publicly expose Microsoft’s complicity in Israeli apartheid and genocide against the people of Palestine, and to connect technology workers to the No Azure for Apartheid campaign.

Introduction

We are No Azure for Apartheid, a group of technology workers within Microsoft and its subsidiaries seeking to expose and condemn the specific technologies complicit in the ongoing apartheid and genocide in Gaza, the West Bank, and Palestine as a whole. We are part of the broader No Tech for Apartheid movement, which began with opposing Project Nimbus at Google and Amazon. With Microsoft leading advances in AI technology, we, as Microsoft employees, are morally obligated to guide the ethics and lasting ramifications of these technologies for the future.

 

Multiple groups are working to keep Amazon, Google, and Microsoft from doubling the number of data centers in the country, fearing environmental devastation.

  • Over the past 12 years, 16 data centers have been approved in Santiago’s metropolitan area. Most use millions of liters of water annually to keep computers from overheating.
  • Chile is in the midst of a drought, expected to last until 2040.
  • The government has said it will launch a national data center plan to regulate the industry.
 

 

When you picture the tech industry, you probably think of things that don’t exist in physical space, such as the apps and internet browser on your phone. But the infrastructure required to store all this information – the physical datacentres housed in business parks and on city outskirts – consumes massive amounts of energy. Despite its name, the infrastructure used by the “cloud” accounts for more global greenhouse gas emissions than commercial flights. In 2018, for instance, the 5bn YouTube hits for the viral song Despacito used the same amount of energy it would take to heat 40,000 US homes annually.
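
The Despacito comparison implies a per-view energy cost that is easy to sanity-check. A minimal sketch, assuming a ballpark figure of ~10,000 kWh per year to heat one US home (an illustrative assumption, not a number from the article):

```python
# Back-of-envelope check of the "5bn views = heating 40,000 homes" claim.
HOMES = 40_000
KWH_PER_HOME_YEAR = 10_000  # assumed ballpark for annual home heating; not from the article
VIEWS = 5_000_000_000       # Despacito views cited for 2018

total_kwh = HOMES * KWH_PER_HOME_YEAR
per_view_wh = total_kwh * 1000 / VIEWS  # energy per single view, in watt-hours
print(f"{total_kwh:.2e} kWh total, {per_view_wh:.0f} Wh per view")
```

Under that assumed heating figure, the claim works out to tens of watt-hours per view, which is in the range that streaming-energy estimates of the time suggested.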

This points to a hugely environmentally destructive side of the tech industry. While it has played a big role in the push toward net zero, giving us smart meters and efficient solar panels, it’s critical that we turn the spotlight on its environmental footprint. Large language models such as ChatGPT are some of the most energy-guzzling technologies of all. Research suggests, for instance, that about 700,000 litres of water could have been used to cool the machines that trained GPT-3 at Microsoft’s data facilities. It is hardly news that the tech bubble’s self-glorification has obscured the uglier sides of this industry, from its proclivity for tax avoidance to its invasion of privacy and exploitation of our attention span. The industry’s environmental impact is a key issue, yet the companies that produce such models have stayed remarkably quiet about the amount of energy they consume – probably because they don’t want to spark our concern.

 

In the first quarter of 2024, Meta made $36.45 billion in revenue – $12.37 billion of which was pure profit. Though the company no longer reports daily active users, it now uses another metric: “family daily active people.” This number refers to “registered and logged-in users of one or more of Facebook’s Family products who visited at least one of these products on a particular day.”

This quiet, seemingly innocent change to how Meta reports growth is significant insofar as it will no longer have to report its daily or monthly active users, meaning that the only source of truth in Meta’s growth story is a vague growth metric that could be manipulated to mean just about anything. Three billion “daily active people” across Meta’s “family” combines WhatsApp, Instagram, Facebook, Facebook Messenger (which I’m confident it counts separately), Oculus, and Threads.
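
The scale of "pure profit" here is easier to appreciate as a margin. A quick calculation from the two figures quoted above:

```python
# Meta's Q1 2024 figures as cited in the post, in billions of dollars.
revenue_bn = 36.45
profit_bn = 12.37

margin = profit_bn / revenue_bn  # net margin: share of revenue kept as profit
print(f"net margin: {margin:.1%}")  # roughly a third of every dollar is profit
```

A net margin around 34% is extraordinarily high for a company of this size, which is part of why scrutiny of its reporting metrics matters.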

 

Over the last decade, few platforms have declined quite as rapidly and visibly as Facebook and Instagram. What used to be apps for catching up with your friends and family are now algorithmic nightmares that constantly interrupt you with suggested content and advertisements that consistently outweigh the posts from the people you choose to follow.

Meanwhile, those running Facebook groups routinely find that their content isn’t even being shown to those who choose to follow them, thanks to Meta’s outright abusive approach to social media, where the customer is not only wrong but should ideally have little control over what they see.

Over the next two newsletters, I’m going to walk you through the decline of Facebook and Instagram, starting with the events that led to their decay and the people I believe are responsible for turning the world’s most popular consumer apps into Skinner boxes for advertising agencies.

 

Have you ever wondered what it would be like to engage in a mobile ecosystem outside of the watchful eye of the Big Tech giants and gatekeepers? A system that includes everything from operating systems, to app stores, to cloud services, messaging apps, email servers and more? A system that puts your privacy first, believes in a democratic approach and healthy competition, and relies on open-source solutions to drive its software? Welcome to Mobifree, a human-centered, ethical alternative that champions privacy over profit and believes in collaboration, sustainability and inclusiveness.

Everyone is locked into a mobile phone ecosystem where the terms are dictated by a handful of Big Tech companies all located in a single country. From end users looking to download and use their favorite apps, to developers who run into roadblocks when trying to get their solutions published, to governments who are increasingly using apps as a way to provide services to their citizens, we are all impacted by the gatekeeping, data tracking, and railroading Big Tech is imposing on us in the current mobile ecosystem. A new alternative is required to shape a better future. And F-Droid is excited to be a part of creating that new mobile ecosystem, together with our other partners in Mobifree.

 

AI projects like OpenAI’s ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry—contractors often in poor countries paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop “systemically abusing and exploiting African workers.”

A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.

 

The Commission is concerned that the systems of both Facebook and Instagram, including their algorithms, may stimulate behavioural addictions in children, as well as create so-called 'rabbit-hole effects'. In addition, the Commission is also concerned about age-assurance and verification methods put in place by Meta.

Today's opening of proceedings is based on a preliminary analysis of the risk assessment report sent by Meta in September 2023, Meta's replies to the Commission's formal requests for information (on the protection of minors and the methodology of the risk assessment), publicly available reports as well as the Commission's own analysis.

The current proceedings address the following areas:

  • Meta's compliance with DSA obligations on assessment and mitigation of risks caused by the design of Facebook's and Instagram's online interfaces, which may exploit the weaknesses and inexperience of minors and cause addictive behaviour, and/or reinforce so-called ‘rabbit hole’ effects. Such an assessment is required to counter potential risks to the exercise of the fundamental right to the physical and mental well-being of children, as well as to the respect of their rights.
  • Meta's compliance with DSA requirements in relation to the mitigation measures to prevent access by minors to inappropriate content, notably age-verification tools used by Meta, which may not be reasonable, proportionate and effective.
  • Meta's compliance with DSA obligations to put in place appropriate and proportionate measures to ensure a high level of privacy, safety and security for minors, particularly with regard to default privacy settings for minors as part of the design and functioning of their recommender systems.
 

This is the state of the modern internet — ultra-profitable platforms outright abdicating any responsibility toward the customer, offering not a "service" or a "portal," but cramming in as many ways as possible to interrupt the user and push them into doing things that make the company money. The greatest lie in tech is that Facebook and Instagram are for "catching up with your friends," because that's no longer what they do. These platforms are now pathways for the nebulous concept of "content discovery," a barely-personalized entertainment network that occasionally drizzles people or things you choose to see on top of sponsored content and groups that a relational database has decided are "good for you."
