I watched United Nations delegates debate AI-based weapons that can fire without any human initiating the attack. Humans must not be taken out of that decision-making.
Imagine a weapon with no human deciding when to launch it or pull its trigger. Imagine a weapon programmed by humans to recognize human targets, but then left to scan its internal data bank and decide whether a set of physical characteristics marked a person as friend or foe. When humans make mistakes and fire weapons at the wrong targets, the outcry can be deafening and the punishment severe. But how would we react, and whom would we hold responsible, if a computer programmed to control weapons made that fateful decision to fire, and it was wrong?