MediaSensationalism

joined 3 months ago
[–] [email protected] 0 points 2 months ago

It's complicated, but no, I don't.

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago) (2 children)

The nearest bus stop is an hour away, and it's for interstate transit. 🤷

[–] [email protected] 4 points 2 months ago (6 children)

The place I'm planning to buy a home is so remote that I'm considering a backup car.

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago)

I learned how to repair my own vehicles after I was quoted $2,600 to install a $40 part. I could've also had an entire rebuilt engine shipped and swapped it in myself for about half that, but I ultimately decided to go with the $40 + basic tools.

[–] [email protected] 5 points 2 months ago

I could sure use some of that money to buy the next iPhone. Just imagine what my friends would think if I didn't.

[–] [email protected] 1 points 2 months ago (1 children)

I didn't read very far up into the thread. Sorry.

Automated filters will just drive determined botters to game the system and perfect their craft until they can no longer be automatically identified, in my opinion. My stance is more that accounts should be reviewed manually, so that the leap to a convincing bot account has to be much more dramatic, and therefore more difficult. If it's done the hard way from the start, by staff who know how to identify these accounts, it may keep the problem from growing into an issue in the first place.

Any threshold for being automatically flagged for review should be relatively low, but the review process should also be quick and efficient. Adding more metrics to the flagging process only gives botters a more precise picture of what to avoid. Once they start crunching the numbers and streamline mimicking real user accounts, it's game over.

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago)

Signup safeguards will never be enough because the people who create these accounts have demonstrated that they are more than willing to do that dirty work themselves.

Let's look at the anatomy of the average Reddit bot account:

  1. Rapid point acquisition. These are usually new accounts, but they don't have to be. The posts and comments are often done manually by the seller if the account is being sold at a significant premium.

  2. A sudden shift in contribution style, usually preceded by a gap in activity. The account has now been matured to the desired number of points and is pending sale, or set aside to be "aged". If the seller hasn't loaded it with any points, the account is much cheaper, but the activity gap still exists.

  • When the end buyer receives the account, they probably won't post anything related to what the seller was originally involved in as they set about their own mission, unless they're extremely invested in the account. Staying active in the old forums becomes much easier if the account is now AI-controlled, but then the account suddenly stops making image contributions and mostly sticks to comments instead. Either way, the new owner is probably accumulating far fewer points than the account was before.
  • A buyer may try to hide this obvious shift in contribution style by deleting all the activity from before the account came into their possession, but then they have months of inactivity leading up to the start of the account's contributions and thousands of points unaccounted for.
  3. Limited forum diversity. Fortunately, platforms like this have a major advantage over platforms like Facebook and Twitter, where propaganda bots can post on their own pages and gain exposure through hashtags without ever interacting with other users or separate forums. On Lemmy, an effective bot has to interact with a separate forum to achieve meaningful outreach, and those forums probably have to be manually programmed in. When a bot has one sole objective with a specific topic in mind, it makes heavy and telling use of a very narrow swath of forums. This makes platforms like Reddit and Lemmy less attractive for automated propaganda bots, and more attractive for OnlyFans sellers, undercover small-business advertisers, and scammers who do most of the legwork of posting and commenting themselves.

My solution? Implement a weighted visual timeline of a user's points and posts to make it easier for admins to single out accounts that have already been flagged as suspicious. There are other types of malicious accounts that can be troublesome, such as self-run engagement farms that consistently push front-page contributions with their own political (or other) lean, but the type described first is a major player in Reddit's current shitshow and is much easier to identify.
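For what it's worth, here's a rough sketch of how that timeline could double as an automatic flag, assuming an admin tool can pull per-contribution records out of the instance database (the field names below are made up for illustration). It just buckets activity by month and looks for a long dead period followed by a change in the post/comment mix:

```python
from collections import defaultdict

# Rough sketch only. Each contribution record is assumed to look like
# {"year": 2024, "month": 5, "kind": "post" or "comment", "points": 12} --
# purely hypothetical fields, not any real Lemmy API.

def monthly_buckets(contributions):
    """Group contributions into per-month counts of posts, comments, and points."""
    buckets = defaultdict(lambda: {"post": 0, "comment": 0, "points": 0})
    for c in contributions:
        key = c["year"] * 12 + c["month"]  # months on a single axis, for easy gap math
        buckets[key][c["kind"]] += 1
        buckets[key]["points"] += c["points"]
    return dict(buckets)

def looks_suspicious(contributions, gap_months=3):
    """True if there's a long activity gap and the post/comment mix changes after it."""
    buckets = monthly_buckets(contributions)
    months = sorted(buckets)
    if len(months) < 2:
        return False

    # Find the largest gap between consecutive active months.
    gaps = [(later - earlier, earlier) for earlier, later in zip(months, months[1:])]
    gap, gap_start = max(gaps)
    if gap < gap_months:
        return False

    def post_share(keys):
        posts = sum(buckets[k]["post"] for k in keys)
        total = posts + sum(buckets[k]["comment"] for k in keys)
        return posts / total if total else 0.0

    before = [m for m in months if m <= gap_start]
    after = [m for m in months if m > gap_start]
    # e.g. an account that used to post images but now only comments.
    return abs(post_share(before) - post_share(after)) > 0.5
```

The thresholds are arbitrary; the point is that it's the gap plus the style shift together that make the pattern stand out, not either one alone, and a human reviewer still makes the final call.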

Most important is moderator and admin willingness to act. Many subreddit moderators on Reddit already know their subreddit has a bot problem but choose to do nothing because it drives traffic. Others are just burnt out and rarely even lift a finger to answer modmail, doing the bare minimum to keep their subreddit from being banned.

[–] [email protected] 1 points 2 months ago (3 children)

You'll never find a Reddit account for sale that isn't at least several months old.

[–] [email protected] 2 points 2 months ago

Bots don't upvote. There's so much voting activity here relative to actual contributions that my first impression was that the votes might be faked.

[–] [email protected] 1 points 2 months ago

FREEEDOOOOOOOOOOOM

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

It's a multi-edged sword. It also means someone could be forced to testify against a friend or loved one, and in a slightly removed example, my beliefs also apply to laws that allow individuals to be imprisoned for failing to provide a password to locked electronics, regardless of whether or not they actually remember it.

Maybe a good middle ground would be to instead expand the privilege that lets members of a marriage avoid testifying against one another to include friends and family. The same reasoning applies, except that the state currently believes it can determine the strength and meaning of a relationship by its title and type alone.

 

Do you feel that the 4th Amendment should protect them? Or perhaps a new amendment should be written to protect them and abolish the power of subpoena?

I'm slightly biased as I ask this. I feel that the mind is "sacred" in a sense, that it should be considered a fundamental human right for an individual to be able to preserve privacy over their internally held thoughts and memories, and that the ability of the court to force an individual to speak or disclose part of their mind is a wild overreach of power and an affront to the personal liberty of the innocent.

 

Try the interactive demo.

 

The National Institute of Standards and Technology has finally published the world’s first three official post-quantum cryptographic algorithms, tools designed to protect key systems against future quantum computers powerful enough to crack any code generated by a modern computer.

65
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]