Sunday, April 04, 2010

A conscience for robots - and for humans too

It takes an enormous investment of resources to develop a functioning human adult. That's the price of being a social animal with a massively overclocked brain.

Not surprisingly, evolution has made us feeble killers. Our teeth, claws, and muscles are pathetic. Most adult humans are traumatized when they kill -- even when it's in self-defense and even after extensive training and conditioning. Only immature children kill easily, which is why child soldiers are valued by the world's most evil men. (See also: ventromedial prefrontal injuries.)

On the other hand, evolution hasn't had time to adjust to murder at a distance. Pilots do not seem to experience the trauma felt by marines, even though they may kill many more people. This has advantages for warfare, but there are problems with making killing too easy. These problems are showing up with drone use ...
Remote-control warfare: Droning on | The Economist

... If they have not been so commandeered, attacks on such sites may constitute war crimes. And drone attacks often kill civilians. On June 23rd 2009, for example, an attack on a funeral in South Waziristan killed 80 non-combatants.

Such errors are not only tragic, but also counterproductive. Sympathetic local politicians will be embarrassed and previously neutral non-combatants may take the enemy’s side. Moreover, the operators of drones, often on the other side of the world, are far removed from the sight, sound and smell of the battlefield. They may make decisions to attack that a commander on the ground might not, treating warfare as a video game.

Ronald Arkin of the Georgia Institute of Technology’s School of Interactive Computing has a suggestion that might ease some of these concerns. He proposes involving the drone itself—or, rather, the software that is used to operate it—in the decision to attack. In effect, he plans to give the machine a conscience.

The software conscience that Dr Arkin and his colleagues have developed is called the Ethical Architecture. Its judgment may be better than a human’s because it operates so fast and knows so much. And—like a human but unlike most machines—it can learn.

The drone would initially be programmed to understand the effects of the blast of the weapon it is armed with. It would also be linked to both the Global Positioning System (which tells it where on the Earth’s surface the target is) and the Pentagon’s Global Information Grid, a vast database that contains, among many other things, the locations of buildings in military theatres and what is known about their current use.

After each strike the drone would be updated with information about the actual destruction caused. It would note any damage to nearby buildings and would subsequently receive information from other sources, such as soldiers in the area, fixed cameras on the ground and other aircraft. Using this information, it could compare the level of destruction it expected with what actually happened. If it did more damage than expected—for example, if a nearby cemetery or mosque was harmed by an attack on a suspected terrorist safe house—then it could use this information to restrict its choice of weapon in future engagements. It could also pass the information to other drones.

No commander is going to give a machine a veto, of course, so the Ethical Architecture’s decisions could be overridden. That, however, would take two humans—both the drone’s operator and his commanding officer...
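
What the article describes is, at its core, a geometric check followed by a database lookup. Here's a minimal sketch in Python of that pre-strike test -- every name, signature, and number below is my own illustration, not Arkin's published design:

from dataclasses import dataclass
from math import hypot

@dataclass
class Building:
    name: str
    x_m: float        # position on a local grid, in metres
    y_m: float
    protected: bool   # e.g. a mosque, hospital, or cemetery per its database record

@dataclass
class Weapon:
    name: str
    blast_radius_m: float   # expected blast effect, from the weapon model

def clear_to_engage(target_x, target_y, weapon, nearby_buildings):
    """Refuse the strike if any protected building lies inside the
    weapon's expected blast radius around the target coordinates."""
    for b in nearby_buildings:
        if b.protected and hypot(b.x_m - target_x, b.y_m - target_y) <= weapon.blast_radius_m:
            return False
    return True

# Hypothetical engagement: a protected building 15 m from the aim point
# sits inside a 20 m blast radius, so the software votes no.
assert not clear_to_engage(0.0, 0.0, Weapon("weapon_a", 20.0),
                           [Building("mosque", 15.0, 0.0, protected=True)])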
Even if Arkin's particular implementation doesn't succeed, it makes a great deal of sense to build in this kind of automated oversight.
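
The learning loop and the human override are just as simple to caricature. Continuing the sketch above (again, entirely hypothetical -- the sets of building names stand in for whatever damage-assessment data the drone actually receives):

def after_action_review(weapon_name, predicted_damage, observed_damage, restricted_weapons):
    """Compare the damage the blast model predicted with what soldiers,
    ground cameras and other aircraft actually reported (both given as
    sets of building names). Unexpected damage restricts this weapon for
    similar future engagements; the restriction set could be shared with
    other drones."""
    unexpected = observed_damage - predicted_damage
    if unexpected:
        restricted_weapons.add(weapon_name)
    return unexpected

def strike_proceeds(software_approves, operator_overrides, commander_overrides):
    """The machine's 'no' is not a veto: it stands only until both the
    operator and the commanding officer countermand it."""
    return software_approves or (operator_overrides and commander_overrides)

The interesting design choice is the last function: the machine's judgment is cheap to override, but never silently -- two humans have to put their names on the decision.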

Obviously this kind of oversight isn't of interest only for weapons, though that's where the initial funding will come from. Even if we don't get sentient machines in the next fifty years (if you're the praying type, pray we don't), we will be deploying systems that make risk/benefit trade-offs in many contexts. We will benefit if they evolve a conscience.

Some humans, too, would benefit from a prosthetic conscience. It might allow people with disorders of conscience to function more effectively in the modern world. Our prisons are full of low-IQ individuals with a limited capacity to model the impact of their actions on others. A prosthetic conscience might help them stay out of prison, or do better after release.

Of course if we do develop non-human sentience, it might be very much to our advantage if they felt qualms about hurting us ...

1 comment:

PakoPako said...

"A robot may not injure a human being or, through inaction, allow a human being to come to harm." -- Now where have we heard that before? Oh right, Robocop. 8^P
