This article is about the ethical issues in the field of lethal autonomous weapon systems, or "killer robots". These machines could roam a battlefield, on the ground or in the air, picking their own targets and then shredding them with cannon fire or blowing them up with missiles, without any human intervention. It is said that this technology is 20-plus years away, but similar systems already exist: a German automated system for defending bases in Afghanistan, and a South Korean robot deployed in the demilitarized zone. These systems rely on a human approving the computer's actions, but at a speed that excludes the possibility of deliberation: there may be as little as half a second in which to press, or not to press, the lethal button. Half a second is just inside the norm of human reaction times, yet military aircraft are routinely built to be so manoeuvrable that the human nervous system cannot react quickly enough to make the constant corrections needed to keep them in the air. It is said that in some ways this is an ethical advantage, because machines cannot feel hate and cannot lie. But although robots are autonomous, they cannot be morally responsible in the way humans are.
Reference:
The Guardian. (2015, April 13). The Guardian view on robots as weapons: the human factor. Retrieved from http://www.theguardian.com/commentisfree/2015/apr/13/guardian-view-on-killer-robots-lethal-autonomous-weapons-systems