this post was submitted on 05 Feb 2024
325 points (98.8% liked)

Technology

[–] [email protected] 114 points 9 months ago (14 children)

Using AI to flag footage for review by a person seems like a good time-saving practice. I would bet that without some kind of automation like this, a lot of footage would just go unreviewed. This is far better than waiting for someone to lodge a complaint first, since you could conceivably identify problem behaviors and fix them before someone gets hurt.

The use of AI-based solutions to examine body-cam footage, however, is getting pushback from police unions pressuring the departments not to make the findings public to save potentially problematic officers.

According to this, the unions are against this because they want to shield bad-behaving officers. That tells me the AI review is working!

[–] [email protected] 10 points 9 months ago (4 children)

The point about police and public accountability makes sense, but I don't think that means we should push AI just because the "bad guys" don't like it.

AI is full of holes and unknowns, and relying on it for something like this sets a dangerous precedent IMO. You absolutely need a person reviewing the output, yes. But reviewers won't catch everything either, and once this is in place it will get leaned on more and more until it replaces thorough review by people.

I think something low-stakes and unachievable without the tools might make sense, like AI reading through game chat or Twitter posts to flag issues where it's impossible to have a person read everything; if some slip by, oh well, it's a post on the internet.

But with police behavior? Those are people with the authority to ruin people's lives or kill them. I do NOT trust AI to catch every problematic behavior and this stuff ABSOLUTELY should be done by people. I'd be okay with it as an aid, in theory, but once it's doing any "aiding" it's also approving some behavior. It can't really be telling anyone where TO look without implying where NOT to look, and that gives it some authority, even as an "aid". If it's not making decisions, it's not saving anyone any time.

Idk, I'm all for the public accountability and stuff like that here, but having AI make decisions around the behavior of people with so much fucking power is horrifying to me.

[–] [email protected] 5 points 9 months ago (2 children)

An AI art website I use illustrates your point perfectly with its attempt at automatic content filtering. Tons of innocent images get flagged, meanwhile problem content often gets through and has to be whacked manually. Relying on AI to catch everything, without false positives, is a recipe for disaster.
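The trade-off described above can be sketched with a toy threshold filter. All names and scores here are made up purely for illustration; no real moderation system is this simple:

```python
# Toy illustration of the false-positive / false-negative trade-off
# in automated content flagging. Scores and labels are hypothetical.

def flag(items, threshold):
    """Flag every item whose model 'problem' score meets the threshold."""
    return [name for name, score in items if score >= threshold]

# (name, model "problem" score): higher means "more likely a problem"
items = [
    ("innocent_a", 0.30),
    ("innocent_b", 0.55),  # gets wrongly flagged at a loose threshold
    ("problem_x", 0.80),
    ("problem_y", 0.45),   # slips through at a strict threshold
]

# Strict threshold: misses problem_y (false negative).
print(flag(items, 0.6))  # ['problem_x']

# Loose threshold: catches both problems but also flags innocent_b
# (false positive) -- someone still has to review the flags by hand.
print(flag(items, 0.4))  # ['innocent_b', 'problem_x', 'problem_y']
```

Whichever way the threshold moves, something leaks out the other side, which is why treating the flags as final decisions rather than review hints goes wrong.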

[–] [email protected] 1 points 9 months ago (1 children)

Still better than what we have now, where the footage usually isn't reviewed at all.

[–] [email protected] 2 points 9 months ago

I really don’t think it’s better than nothing. Put a biased AI in charge of reviewing footage and the department now has cover to claim it’s doing the right thing instead of doing nothing, even though what it’s actually doing is worse.
