r3df0x

joined 1 year ago
[–] [email protected] 3 points 6 months ago (3 children)

There's a huge difference between teaching sex acts and helping kids avoid predators.

The lack of action against pedophiles is just going to fuel Qanon conspiracy theories and lead to vigilantism.

[–] [email protected] 3 points 6 months ago (9 children)

This is an example of why every responsible parent should forbid their children from uploading any pictures of themselves online, or better yet, bar them from social media entirely. This might be a hot take here, but parents should install monitoring software on all of their children's devices and be open about it. Not doing so is negligent.

Your kids could end up on the pedo registry if they take a picture of themselves and someone changes it into porn.

We could deal with this easily by banning the distribution of porn entirely.

[–] [email protected] 0 points 7 months ago

If this is how Youtube advertised, I wouldn't block the ads. I refuse to sit through ads while I'm searching through videos and don't even know if a video is the one I want to watch. That would turn a three-minute search into a ten-minute search.

[–] [email protected] 5 points 7 months ago

This is what Louis Rossmann said. Youtube is completely within its rights to kick people off for blocking ads. At the same time, it's not a pissing match worth getting heavily invested in, because ultimately Youtube is going to lose unless it can coerce people into installing proprietary apps, which it already has for mobile devices.

[–] [email protected] 36 points 7 months ago (4 children)

Right wing "free speech" spaces tend to crash and burn hard. If they don't moderate blatant racism and shitty behavior, the white supremacists drive out all the sane people, but if they do moderate, then shit stains like Tim Pool will start casting massive amounts of FUD onto the platform and claim that it has "gone woke" and isn't a true free speech platform.

[–] [email protected] 0 points 7 months ago

Underrated observation.

[–] [email protected] 8 points 7 months ago (1 children)

Downvoting and disliking can have their own issues too.

On Lemmy, downvoting isn't really that bad, especially compared to Reddit, likely because of the federated model, where instance admins can't verify the authenticity of votes. On Lemmy, voting affects the score on the post and nothing else, whereas on Reddit, taking too many downvotes can get your account shadow banned or locked, even if you still have thousands of karma in the subreddit where it happened, and those restrictions apply site wide. Lemmy users also don't have a global karma count, which removes most of the temptation to self-censor by deleting posts that go negative. Of course, plenty of people would still delete a post with a 10:1 negative vote ratio. Then again, if it's that bad, deleting it might not be a bad thing.

Both models have their place, each with its own pros and cons. I understand the nefarious intent behind this change on Youtube, but hiding negative feedback so that only the poster can see it has potential: it could deter bandwagon downvote brigading. Dislikes are really only relevant to the algorithm and the user who posted the content.

[–] [email protected] 0 points 7 months ago

That's why AI is inevitable without a massive surveillance state.

[–] [email protected] 1 points 7 months ago

You can manually set things to be private, but I don't know if there's any way to set everything as private by default.

It has the problem common to all Facebook alternatives: they feel like Twitter without post limits.

[–] [email protected] 2 points 7 months ago

Are they talking about inbreeding, where most online content is AI generated and AI starts training on other AI output?

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (4 children)

Someone would have made one eventually. Unless the government monitors every computer in existence, AI is inevitable.
