14.9.07

The future of war: robot soldiers


An opinion column in The Guardian by Noel Sharkey, a British professor of computer science, argues that it is inevitable that wars of the near future will be fought by armies of robots, and that it is time to establish a code of ethics for this new reality.

He notes that robots already patrol the borders of Israel and South Korea, and that the United States has 230 billion dollars invested in developing robotic combat systems. According to Sharkey, there are already 4,000 robots serving in Iraq. For now they are directed by humans, but the moment when they become autonomous is approaching.

Meanwhile, Defense Tech reports that an unmanned aircraft has killed two people in Qayyarah, Iraq. It is the first confirmed instance of such an event.

1 comment:

Andrés Hax said...


--------------------------------------------------------------------------------

Robot wars are a reality


Armies want to give the power of life and death to machines without reason or conscience

Noel Sharkey
Saturday August 18, 2007
The Guardian


The deployment of the first armed battlefield robots in Iraq is the latest step on a dangerous path - we are sleepwalking into a brave new world where robots decide who, where and when to kill. Already, South Korea and Israel are deploying armed robot border guards and China, Singapore and the UK are among those making increasing use of military robots. The biggest player yet is the US: robots are integral to its $230bn future combat systems project, a massive plan to develop unmanned vehicles that can strike from the air, under the sea and on land. Congress has set a goal of having one-third of ground combat vehicles unmanned by 2015. Over 4,000 robots are serving in Iraq at present, others in Afghanistan. And now they are armed.


Most robots currently in combat are extensions of human fighters who control the application of lethal force. When a semi-autonomous MQ-1 Predator self-navigated above a car full of al-Qaida suspects in 2002, the decision to vaporise them with Hellfire missiles was made by pilots 7,000 miles away. Predators and the more deadly Reaper robot attack planes have flown many missions since then with inevitable civilian deaths, yet working with remote-controlled or semi-autonomous machines carries only the same ethical responsibilities as a traditional air strike.

But fully autonomous robots that make their own decisions about lethality are high on the US military agenda. The US National Research Council advises "aggressively exploiting the considerable warfighting benefits offered by autonomous vehicles". They are cheap to manufacture, require fewer personnel and, according to the navy, perform better in complex missions. One battlefield soldier could start a large-scale robot attack in the air and on the ground.

This is dangerous new territory for warfare, yet there are no new ethical codes or guidelines in place. I have worked in artificial intelligence for decades, and the idea of a robot making decisions about human termination is terrifying. Policymakers seem to have an understanding of AI that lies in the realms of science fiction and myth. A recent US navy document suggests that the critical issue is for autonomous systems to be able to identify the legality of targets. Then their answer to the ethical problems is simply, "Let men target men" and "Let machines target other machines". In reality, a robot could not pinpoint a weapon without pinpointing the person using it or even discriminate between weapons and non-weapons. I can imagine a little girl being zapped because she points her ice cream at a robot to share. Or a robot could be tricked into killing innocent civilians.

In attempting to allay political opposition, the US army is funding a project to equip robot soldiers with a conscience to give them the ability to make ethical decisions. But machines could not discriminate reliably between buses carrying enemy soldiers or schoolchildren, let alone be ethical. It smells like a move to delegate the responsibility for fatal errors on to non-sentient weapons.

Human soldiers have legal protocols such as the Geneva conventions to guide them. Autonomous robots are only covered by the laws of armed conflict that deal with standard weapons. But autonomous robots are not like other weapons. We are going to give decisions on human fatality to machines that are not bright enough to be called stupid. With prices falling and technology becoming easier, we may soon see a robot arms race that will be difficult to stop.

It is imperative that we create international legislation and a code of ethics for autonomous robots at war before it is too late.

· Noel Sharkey is professor of artificial intelligence and robotics at the University of Sheffield
noel@dcs.shef.ac.uk