Can intelligent robot soldiers be designed to be more ethical in battle than human soldiers? Would you prefer a robot or a human deciding about the possibility of civilian casualties, about collateral damage?
A November 24th New York Times article by Cornelia Dean asks whether a robot soldier can take orders from its "ethical judgement center." She cites Dr. Ronald Arkin, whose "... hypothesis is that intelligent robots can behave more ethically in the battlefield than humans..."
Can ethical behavior be designed through scientific criteria and algorithms? According to a U.S. Army survey, human passions and emotions like anger, bravado, and fear can interfere with ethical choices. People under stress can lose their humanity.
Fighting robots can be built without these emotions, and without even the need for self-preservation.
Dr. Arkin says, "It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers..."
What do you think? Would you prefer a robot or a human deciding about the possibility of civilian casualties, about collateral damage?
To post a comment, go to the Global Ethics Corner slideshow.