The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
(www.businessinsider.com)
If you program an AI drone to recognize ambulances and medics and forbid them from blowing them up, then you can be sure that they will never intentionally blow them up. That alone makes them superior to having a Mk. I Human holding the trigger, IMO.
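A toy sketch of what such a safeguard might look like, in Python. Everything here is hypothetical for the sake of the argument: the class names, the `authorize_strike` function, and the audit list are made up, not any real targeting system's API. The point is that a hard-coded no-strike rule both refuses the engagement and leaves a record of who asked.

```python
# Hypothetical sketch: a hard-coded no-strike safeguard that refuses
# protected target classes and records every decision in an audit log.

PROTECTED_CLASSES = {"ambulance", "medic", "hospital"}

audit_log = []  # each entry: (operator, target_class, decision)

def authorize_strike(target_class, operator):
    """Return True only if the target class is not protected.

    Every request is appended to the audit log, so a denied request
    still leaves a trace of who made it and what they targeted.
    """
    if target_class in PROTECTED_CLASSES:
        audit_log.append((operator, target_class, "DENIED"))
        return False
    audit_log.append((operator, target_class, "AUTHORIZED"))
    return True
```

In this sketch the safeguard cannot be toggled off at runtime; an operator who wanted to hit an ambulance would have to ship modified code, which is exactly the kind of deliberate act the log trail in the replies below is about.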
Unless the operator decides hitting exactly those targets fits their strategy and they can blame a software bug.
And then when they go looking for that bug and find the logs showing that the operator overrode the safeties instead, they know exactly who is responsible for blowing up those ambulances.
And if the operator was commanded to do it? And to delete the logs? How naive do you have to be to think this somehow makes war more humane?
Each additional safeguard makes the atrocity harder to carry out and adds another name to the eventual war crimes trial. Don't let the perfect be the enemy of the good, especially when it comes to reducing the number of ambulances that get blown up in war zones.