Ethical Killing Machines?

Computer Science Teacher - Thoughts and Information from Alfred Thompson

If you’ve been paying attention to the news lately, one of the things you hear about is machines on the battlefield – or above it. For the most part these machines are controlled remotely by people who make the actual decision to “fire” or not. But increasingly there is interest in machines, call them robots if you like, that will make the “fire or not” decision on their own. These machines will be controlled by software. But just how do you program a machine to act ethically?

In fiction we have long had Isaac Asimov's “Three Laws of Robotics,” but in real life it’s not that easy. Ronald Arkin, a professor of computer science at Georgia Tech, is working on this problem. He’s not the only one, but you can read about him and some of the related issues in the article titled “Robot warriors will get a guide to ethics.” There are also some links on his web site at Georgia Tech. It’s a tough issue. The ethical questions involved in warfare are hard enough on their own, but getting a computer to understand, or at least properly process, the inputs and make an “ethical” decision raises the level of complexity even further.
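Just to make the difficulty concrete, here is a minimal, purely hypothetical sketch in Python of what a “may I fire?” rule check might look like. This is not Arkin’s actual system; the Situation fields, the may_engage function, and the rules themselves are invented for illustration. Notice how even a trivial set of rules mostly just pushes the hard problem into the inputs.

```python
# Purely illustrative sketch -- not any real "ethical governor,"
# just a toy rule check to show how quickly the complexity piles up.

from dataclasses import dataclass

@dataclass
class Situation:
    target_identified_as_combatant: bool  # already a hard perception problem
    civilians_in_blast_radius: int        # estimating this reliably is harder still
    rules_of_engagement_permit: bool      # reducing ROE to a boolean is a huge simplification

def may_engage(s: Situation) -> bool:
    """Return True only if every (grossly simplified) constraint is satisfied."""
    if not s.rules_of_engagement_permit:
        return False
    if not s.target_identified_as_combatant:
        return False
    if s.civilians_in_blast_radius > 0:
        return False
    return True

# Even this toy version hides the real difficulty in its inputs:
# deciding who counts as a combatant, or how many civilians are nearby,
# is exactly the judgment we struggle to automate.
print(may_engage(Situation(True, 0, True)))   # True
print(may_engage(Situation(True, 2, True)))   # False
```

Deciding whether someone “counts” as a combatant, or who is inside a blast radius, is exactly the kind of judgment that does not reduce neatly to booleans and counters.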

I think this is part of the reason that discussing ethics in computer science programs is growing in importance. I know that many undergraduate programs have an ethics course requirement. The master's program I was in had a required ethics course. But I think we need to start having these discussions in high school (or younger). Ethical behavior is something best learned young.

Follow-up: Chad Clites sent me a link to an article called “Plan to teach military robots the rules of war” that relates to this post.

  • This is very difficult indeed, given that ethical behavior, or at least the perception of what is ethical, is based on individual experience and interpretation. How can one hope to get two developers to agree on a common idea of what is ethical, much less an entire development team? Who then gets to decide? Do we take a utilitarian view and program these 'warriors' to do what is best for the greatest number of people? Who gets to determine that?

  • "It is well that war is so terrible -- lest we should grow too fond of it." - Robert E. Lee.  The reason war is so terrible is it does rather ugly things to people and their lives.  Decision making robots might make war less terrible for one side which might reduce the reluctance to make war.  How can we teach ethics to a robot when we have trouble teaching it to people?  Who are the people that are going to decide the definition of ethics?

    I am a really big fan of the robots operated by the EOD (explosive ordnance disposal) teams in Iraq. It was really bad for my blood pressure to go look at that fresh patch of dirt in the road to see if it had any surprises. The Predators I was not too crazy about. That controller was in Afghanistan, a long way away, and he sometimes had an issue telling the good guys from the bad guys. The controllers simply have no feeling or empathy for the situation on the ground. The EOD guys did; they were huddled under the Hummer with me when the robot made the IED go boom. We need lots more EOD robots. Keep that long-range, remote-control crap away; too many mistakes. There have been several recent (sort of) attempts to control a battle remotely, using the troops as the remote units directed from a distant headquarters. Grenada was the first dabble in this remote-control war and we got lucky. Mogadishu in Somalia was the next attempt and we got unlucky. Keeping the leaders and operators up close and personal in a war guarantees they know war sucks and we should not do it.

    It does not help that I have seen the Terminator movies multiple times, and I do not believe the premise of the series is that far-fetched. From what I have read and the videos I have seen, Oppenheimer and his team had major ethical issues with the development of the atomic bomb. But there was a war, and ethics starts taking a back seat during times of war.

  • But the robots still will be programmed to 'believe' that your side is right and that the other side deserves killing, perhaps in large numbers over time. And the other side's robots will be programmed to protect them and kill you. And so on...

  • We're programming people in the same way (it's OK to kill the other side), so in that respect training robots is similar. The only upside I see is that perhaps, unlike land mines, some innocents may be spared. But that may be overly optimistic. I wish we'd think about making machines follow Asimov's laws, but it doesn't look likely right now.

  • An advantage of war robots is that we can turn them off (hopefully) when not needed. Once people are programmed for war they are a little resistant to being turned off. The war programming in people may fade with time but it is still there.

  • @Alfred: true, soldiers are programmed to act in a specific manner, but humans are capable of breaking the rules when the situation warrants. How does one create rules to tell a robot when to break the rules? Is such a list of rules finite?
