bilb

joined 2 years ago
[–] [email protected] 14 points 10 months ago (1 children)

Here's what Kagi gave me:

The passage discusses the concept of "enshittification" in the tech industry, where companies initially attract customers through innovation but then exploit them by increasing prices and fees. This phenomenon has occurred at companies like Facebook, Google, Uber and food delivery services. The term was coined by author Cory Doctorow to describe how these companies stop innovating and focus only on generating value for shareholders at the expense of customers. However, the passage notes that increased unionization among tech workers and more aggressive antitrust enforcement could help reverse these trends and encourage more competition in the industry. An interesting point highlighted is that while enshittification is not necessarily directly malicious, it can be a product of business environment pressures and lack of regulation that incentivize prioritizing profits over customers. This suggests policy changes may be needed to realign company incentives with serving users.

[–] [email protected] 38 points 11 months ago

I wonder if it has to do with the region you try to load it from. The message in the screenshot seems to indicate that it might.

[–] [email protected] 2 points 11 months ago

There's plenty of art whose human element I don't value at all. I don't think any of the Corporate Memphis blob people on tech sites or the designs on a billboard are "sacred," for instance, but they are unambiguously art. If those things get made with generative AI instead, I won't regret the loss of human involvement.

Art done to express something human will never go away as long as people feel a need to express themselves that way. Companies will hire fewer graphic designers, true, but I don't really give a fuck, to be honest.

[–] [email protected] 1 points 11 months ago

Yes, they could limit their reach even further by only using the fediverse.

[–] [email protected] 7 points 11 months ago (1 children)

I think if you blocked the person who posted this you'd see a big reduction. It's usually the same account posting Musk news.

[–] [email protected] 0 points 11 months ago (1 children)

You know, instance admins can find out who is downvoting and upvoting by checking the database. It doesn't have to be a mystery if you stand up your own instance. You don't even have to use it as your main account; just get it federating so your comments, and the votes on them, end up in its database.
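For illustration, a rough sketch of what that database check might look like on a Postgres-backed Lemmy instance. The table and column names here (comment_like, person, person_id, score) are assumptions based on older Lemmy schemas and may differ between versions:

```python
# Hypothetical sketch only: list who voted on a comment by querying the
# instance's own PostgreSQL database. Table/column names are assumptions
# and may not match your Lemmy version's schema.
import psycopg2

COMMENT_ID = 12345  # placeholder: local id of the comment to inspect

conn = psycopg2.connect(dbname="lemmy", user="lemmy", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT p.name, cl.score, cl.published
        FROM comment_like AS cl
        JOIN person AS p ON p.id = cl.person_id
        WHERE cl.comment_id = %s
        ORDER BY cl.published DESC
        """,
        (COMMENT_ID,),
    )
    for name, score, published in cur.fetchall():
        verb = "downvoted" if score < 0 else "upvoted"
        print(f"{name} {verb} at {published}")
```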

[–] [email protected] 18 points 1 year ago (3 children)

The problem, as I'm sure you know, is that a home server is not fit for purpose for the vast majority of people. Managing one is a fun project for some, but a complete non-starter for most.

[–] [email protected] 6 points 1 year ago

Personally, it's the implausibility of 2 that makes all of this seem like no big deal to me. In fact, I think federating openly with Threads might signal to Threads users that they can use alternatives and not lose access to whomever they follow on Threads, thus growing the user-base of other federated instances.

I think people who are going to use Threads for Meta-specific features are likely going to use Threads anyway, and if any of those features are genuinely good (i.e. not simply Instagram and Facebook tie-ins) they will be replicated by the various open Fediverse projects which already differ from one another in terms of features.

The moderation issue is entirely different. Some instances have an understanding with their users that they'll be protected from seeing objectionable content or behavior, as defined by whatever culture that instance has. For them, defederating from such a large group of people makes sense, perhaps even preemptively; it's no different from when they defederate from existing large instances now.

[–] [email protected] 10 points 1 year ago (3 children)

I'm not personally in favor of preemptively blocking Threads on my instance, and I don't find the EEE argument at all convincing in this case. But other instances doing that is no problem at all; it's fine!

[–] [email protected] 0 points 1 year ago (1 children)

I am also helping to destroy the world

[–] [email protected] 8 points 1 year ago

Stochastic Parrot

For what it's worth: https://en.wikipedia.org/wiki/Stochastic_parrot

The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym "Shmargaret Shmitchell"). The paper covered the risks of very large language models: their environmental and financial costs, inscrutability leading to unknown dangerous biases, the inability of the models to understand the concepts underlying what they learn, and the potential for using them to deceive people. The paper and the events around it resulted in Gebru and Mitchell losing their jobs at Google and prompted a protest by Google employees.
