I am, of course, referring to the 3 laws of robotics made famous by the Humanist Isaac Asimov. I wanted to write about this because of critiques of Humanism as a situational ethics system, as opposed to an absolute one. All ethical systems have a set of values or rules. Humanist ethics are pretty simple: if it helps, it is good; if it hurts, it is bad; if it does both, try to do the most good and the least harm. Humanists are anti-dogma. We believe that all ethical systems should be held situationally, because even with the best intentions, absolute ethics cause harm. Granted, holding your value system absolutely does save you the nasty problem of having to think for yourself, and if things go poorly you can consider yourself absolved of guilt, which is probably why some people cling to the idea of absolute ethics.
First, some definitions. In situational ethics, there is an acknowledgement that the values and rules of the system are hierarchical and must be weighed against each other in every situation in order to achieve the best outcome, based usually on love, or on what is best for people. In absolute ethics, rules and values are usually considered sacred, and advocates of absolute systems argue that they are what they are and must never be abrogated. The problem, of course, is that no matter how simple your values or rules are, there are situations when you must decide which of the rules is more important than the others. Which is why we Humanists prefer situational ethics and consider absolute systems dogmatic.
The fallacy of absolute ethics is usually demonstrated using the 10 commandments, which is usually what is held up as absolute in America, given that most of our citizens are some sort of Christian. Anyway, questions are posed like: is it OK to steal a car if it would save someone's life? Is it OK to lie to save someone's life, or to prevent another crime from occurring? Most people answer yes, of course, because we humans naturally understand that while stealing or lying is wrong, saving a life is more important. And that is the end of the lesson. Most people agree: even the best ethical systems should be considered situational, not absolute.
The majority of our political arguments about abortion, gay marriage, and stem cell research are actually a result of us not agreeing on which aspects of our shared values are most important. This normal disagreement over difficult situations is made harder by the fact that a section of our body politic holds its values absolutely, so we cannot even have a rational discussion about the best course of action.
Anyway, that is a tangent from what I want to talk about today. The reason I brought up Asimov's 3 laws of robotics is that it is a very simple system: there are only 3 rules. In case you don't know what they are, here is a refresher, paraphrased of course.
- Don't harm humans, and don't allow humans to be harmed through inaction.
- Obey all orders, unless they violate Law #1.
- Protect your own existence, unless that would violate Law #1 or #2.
This is a very simple system, with the hierarchy already built in to handle various situations. It is designed this way because Asimov was a Humanist and understood the importance of situations. Plus, these rules are designed to be held by robots, and robots will treat them as absolute. So they needed to be simple and foolproof.
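Just to make that built-in hierarchy concrete, here is a toy sketch (the `Action` fields, the scoring scheme, and the scenario are all my own invention, not anything from Asimov): a robot picks among candidate actions by lexicographic priority, so that a Law #1 violation outweighs any Law #2 violation, which outweighs any Law #3 violation.

```python
from dataclasses import dataclass

# Toy model of the 3-law hierarchy. Everything here is invented
# for illustration; it is not a real robotics API.

@dataclass
class Action:
    name: str
    harms_human: bool = False      # violates Law #1 (including by inaction)
    disobeys_order: bool = False   # violates Law #2
    endangers_self: bool = False   # violates Law #3

def choose(actions):
    # Lexicographic priority: tuples of booleans compare element by
    # element, so a Law #1 violation is always worse than any
    # combination of Law #2 and Law #3 violations.
    return min(actions, key=lambda a: (a.harms_human,
                                       a.disobeys_order,
                                       a.endangers_self))

# Hypothetical scenario: a robot ordered to stay put while a human is
# trapped in a fire. Damaging itself costs least, so in it goes.
best = choose([
    Action("stand by", harms_human=True),        # inaction lets a human be harmed
    Action("refuse the order", disobeys_order=True),
    Action("enter the fire", endangers_self=True),
])
print(best.name)  # prints "enter the fire"
```

The hierarchy falls out of Python's tuple comparison: `False < True`, checked left to right, which is exactly the "unless that violates a higher law" structure of the list above.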
The problem, as anyone who has read I, Robot will know, is that they don't work. Even a system this simple fails, because situations will always arise that require situational thinking, even when a hierarchy of rules is provided. In fact, the entire book is basically a set of short stories about how the 3 rules fail in various situations.
My favorite was the robot in the story "Runaround" that ended up running in a giant circle. It had been ordered to fetch something from a dangerous area, but the order was given casually, and the danger to the robot was serious, so its self-preservation drive could actually compete with the order. The robot would head toward the goal (Rule 2), get close enough that self-preservation kicked in and back off (Rule 3), then remember the order and head back in (Rule 2), and back off again (Rule 3). It got stuck in a loop, circling the spot at the distance where the two laws were in equilibrium, until a human deliberately put himself in danger so that Rule 1 would override the deadlock. It was pretty funny when you think about it.
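That deadlock is easy to reproduce in a toy simulation. This is just the shape of the failure, not Asimov's actual mechanism, and every function name and number below is invented: model the pull of one law as a constant attraction toward the goal, and the opposing law as a repulsion that grows near the danger zone, and the robot settles into orbit where the two cancel out.

```python
# Toy simulation of the "Runaround" equilibrium. The potentials and
# constants are invented; this only illustrates how two opposed rules
# can trap a rule-follower at a fixed distance.

def net_pull(distance, attract=1.0, repel=1.0, danger_radius=10.0):
    # Positive: the goal-seeking law wins, move closer.
    # Negative: the self-preservation law wins, back off.
    # Repulsion grows sharply as the robot nears the danger radius.
    return attract - repel * (danger_radius / distance) ** 2

def settle(distance=100.0, step=0.5, iterations=10_000):
    # The robot nudges toward or away from the goal each tick.
    # It never reaches the goal and never leaves: it ends up
    # oscillating (circling) where the two pulls balance.
    for _ in range(iterations):
        distance -= step if net_pull(distance) > 0 else -step
    return distance
```

With equal `attract` and `repel`, `settle()` comes back hovering around 10.0, the distance where the invented attraction and repulsion cancel, and no amount of further rule-following gets the robot out of its circle.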
So even a simple 3-rule system like the 3 laws of robotics fails and must be held situationally. There is no way a 10-rule system like the 10 commandments (which is really just a 7-rule system, since the first 4 are essentially one rule) should be held as absolute. That only leads to tyranny and suffering. So the next time someone brings up situational ethics as if it were a bad thing, just remind them about the 3 laws of robotics.
Oh, and if someone tells you that their military robot can't harm humans because it has been programmed with the 3 laws, don't believe them.