[–] [email protected] 1 points 16 hours ago

Through the discussions I’ve had here I can see that I should have been more specific and defined what kind of algorithm is the problem. But that was the point of making the post in the first place: to understand why the narrative is not moving in that direction. Now I can see why; it’s a nuanced discussion. But I think it’s well worth it to steer things that way.

[–] [email protected] 5 points 16 hours ago (1 children)

Exactly my point. On Lemmy I can still see all the posts; Meta’s algorithm will remove things from feeds, push others, and even hide comments. It is literally a reality-warping engine.

[–] [email protected] 1 points 16 hours ago

I dunno, old forums were fun as fuck and they had no algorithm beyond sorting by most popular, newest, etc. Hey, if it makes people spend less time looking at their phones, it’s still a win in my book, I type as I spend hours on my tablet. I’m a hypocrite, won’t lie.

[–] [email protected] 2 points 16 hours ago* (last edited 16 hours ago)

I think the point of that article is closer to my own argument than I would have thought. I still think the problem is the design of the algorithm: a simple algorithm that just sorts content is not a problem. One that decides what to omit and what to push based on what it thinks will make me spend more time on the platform is problematic, and that’s the kind of algorithm we should ban. So maybe the premise is: algorithms designed to make people spend more time on social media should be banned.

Engaging with another idea in there: I absolutely think that people should be able to say that Joe Biden is a lizard person and have that come up on everyone’s feed, because ridiculous claims like that are easily shut down when everyone can see them and comment on how fucking dumb they are. But when the message only makes the rounds in communities that are primed to believe that Joe Biden is a lizard person, the message gains credibility for them the more it is suppressed. We used to bring Ku Klux Klan people on TV to embarrass themselves in front of all of America, and it worked very, very well; it’s a social sanity check. We no longer have this, and now we have bubbles in every part of the political spectrum believing all kinds of oversimplifications, lies and propaganda.

[–] [email protected] 1 points 16 hours ago (4 children)

The easy answer for me would be to ban algorithms that have the specific intent of maximizing user time spent on the app. I know that’s very hard to define legally. Maybe, like I suggested below, we could instead restrict what kinds of signals algorithms can use to suggest and push content?

[–] [email protected] 0 points 16 hours ago (2 children)

Like I said below, I think the distinction is that a) I have access to an algorithm-free feed here, and b) Lemmy (as far as I understand it) simply sorts content rather than outright removing content from my feed if it thinks that content will make me spend less time on the platform. I could be wrong about that second point, though.

[–] [email protected] -1 points 16 hours ago (2 children)

But correct me if I’m wrong (I’m not a programmer): Lemmy’s algorithm is basically just sorting; it doesn’t choose between two pieces of media to show me, only how to order them. Facebook et al. will simply not show content that I won’t engage with or that would make me spend less time on the platform.
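That distinction can be made concrete with a toy sketch. Everything here is made up for illustration (the post titles, scores, and the `predicted_minutes` field standing in for an engagement prediction); it is not Lemmy’s or Meta’s actual code, just the shape of the difference: one feed reorders everything, the other silently drops items.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    score: int                 # upvotes minus downvotes
    predicted_minutes: float   # hypothetical "time on platform" prediction

posts = [
    Post("Local news roundup", score=40, predicted_minutes=0.5),
    Post("Outrage bait headline", score=12, predicted_minutes=9.0),
    Post("Long technical writeup", score=25, predicted_minutes=2.0),
]

def sorted_feed(items):
    """Sorting-style feed: every post is still shown, only the ORDER changes."""
    return sorted(items, key=lambda p: p.score, reverse=True)

def engagement_feed(items, keep=2):
    """Engagement-style feed: posts predicted to hold attention are kept,
    and the rest are silently dropped from the feed entirely."""
    ranked = sorted(items, key=lambda p: p.predicted_minutes, reverse=True)
    return ranked[:keep]

print([p.title for p in sorted_feed(posts)])       # all three posts, reordered
print([p.title for p in engagement_feed(posts)])   # one post never appears
```

In the sketch, the low-engagement "Local news roundup" tops the sorted feed but never appears in the engagement feed at all, which is the "reality warping" being described above.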

I agree that they are useful, but at a certain point we as a society need to weigh the usefulness of certain technologies against their potential for harm. If the potential for harm is greater than the benefit, then maybe we should curb that potential somewhat, or remove it altogether.

So maybe we could refine the argument: we need to limit what signals algorithms can use to push content? Or maybe all social media users should have access to an algorithm-free feed, with the algorithm-driven feed hidden by default and customizable by users?

[–] [email protected] 2 points 16 hours ago* (last edited 16 hours ago)

While transparency would be helpful for discussion, I don’t think it would change or help with stopping propaganda, misinformation and outright bullshit from being disseminated to the masses, because people just don’t care. Even if the algorithm were transparently made to push false narratives, people would just shrug and keep using it. The average person doesn’t care about the who, what or why as long as they are entertained.


Since Meta announced they would stop moderating posts, much of the mainstream discussion surrounding social media has centered on whether a platform has a responsibility for the content posted on its service. I think that’s a fair discussion, though I favor the side of less moderation in almost every instance.

But as I think about it, the problem is not moderation at all: we had very little moderation in the early days of the internet and social media, and yet people didn’t believe the nonsense they saw online, unlike nowadays, where even official news platforms have reported on outright bullshit made up on social media. To me the problem is the goddamn algorithm that pushes people into bubbles reinforcing their correct or incorrect views, and I think anyone with two brain cells and an iota of understanding of how engagement algorithms work can see this. So why is the discussion about moderation and not about banning algorithms?