[email protected] 3 points 7 months ago* (last edited 7 months ago)

> For now, ML/AI is too unreliable to be trusted in a deployed direct attack platform

And probably can't ever be trusted. That "hallucinations can't ever be ruled out" result is for language models, but it should probably apply to vision models, too. In any case, researchers have made cars see things, and AFAIU they didn't even have to attack the model itself; they simply confused the radar. Militaries are probably way better at that than anything out in the open: they've been doing ECM for ages, and of course they never tell anyone how any of it works.

That doesn't mean ML can't be used at all, though: you can add non-ML mission constraints, such as the drone only acquiring targets over enemy territory. Or the AI is merely the gunner, and there's still a human commander.
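
To make that concrete, here's a minimal sketch (in Python) of what that kind of gating could look like. Everything in it is hypothetical, not a real system or API: the `Detection` fields, the `ENGAGEMENT_ZONE` coordinates, and the confidence threshold are made up for illustration. The point is only that the deterministic, non-ML checks run first, so a hallucinated detection outside the permitted zone never reaches the trigger.

```python
# Sketch: hard, non-ML mission constraints gating an ML classifier's output.
# All names and values here are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str          # ML classifier output, e.g. "vehicle"
    confidence: float   # model confidence in [0, 1]
    lat: float          # estimated target position
    lon: float


# Hypothetical pre-surveyed engagement zone (deterministic, no ML involved).
ENGAGEMENT_ZONE = {"lat_min": 48.0, "lat_max": 49.0,
                   "lon_min": 37.0, "lon_max": 38.0}


def inside_engagement_zone(det: Detection) -> bool:
    """Deterministic geofence check: targets may only be acquired here."""
    z = ENGAGEMENT_ZONE
    return (z["lat_min"] <= det.lat <= z["lat_max"]
            and z["lon_min"] <= det.lon <= z["lon_max"])


def human_commander_approves(det: Detection) -> bool:
    """Stand-in for the human commander; the ML is merely the gunner."""
    answer = input(f"Engage {det.label} ({det.confidence:.0%}) "
                   f"at {det.lat:.4f}, {det.lon:.4f}? [y/N] ")
    return answer.strip().lower() == "y"


def may_engage(det: Detection) -> bool:
    # Every non-ML condition must pass before the ML output matters at all;
    # a hallucinated detection outside the zone is simply discarded.
    return (inside_engagement_zone(det)
            and det.confidence >= 0.9
            and human_commander_approves(det))
```

The ordering is the design point: the ML output is the least trustworthy input, so it's never the sole authority, just one condition among several that the deterministic checks and the human can veto.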