Saturday, June 7, 2008

Moral Minds

This isn't really a book review so much as a discussion of issues raised.

The reader has probably heard the story of "the Starfish Flinger". The day after a storm, a man is walking down the beach and sees another man flinging starfish which have been washed ashore back into the ocean. "You are wasting your time," says the walker, "your actions will make no difference". "It will make a difference to them," says the flinger, referring to the starfish he is flinging. "I suppose," says the walker, "and to the clams which they will eat, and to the other potential starfish who will thus not be able to eat those clams. But the starfish will quickly reproduce back to their carrying capacity, the number of starfish will be the same whether you do this or not, they'll just be different ones. And one starfish is much like another."

I added the last part myself. Usually it ends with "it matters to them". The story of the starfish flinger does not appear in Moral Minds, but it easily could have.

The central thesis of Moral Minds is that, in the same way we seem to have evolved brain structures for learning language in general but not any particular language, we have evolved a general mental capacity for making moral judgments, even though which actions are considered moral varies widely between cultures and differs between individuals within a culture.

The author spends a fair amount of time discussing whether moral judgments are primarily deontological (rule based) or consequentialist. The arguments rely heavily on survey responses to moral dilemmas. Three examples:

1) You see a trolley heading down a track towards five hikers. You can throw a switch sending the trolley onto a side track, but there is a hiker on the side track as well. Should you kill one to save five?

2) You are standing on a platform above the trolley track; again the trolley is heading towards five hikers, and standing next to you is a lard-assed tub of guts. By heaving him over the side in front of the trolley, you can slow the trolley enough for the hikers to escape. Should you kill one to save five?

3) You are an emergency room doctor. Five hikers have just been admitted; they have been struck by a trolley, and each has suffered an injury to a different vital organ. You could save them all by murdering some random bystander and harvesting his organs. Should you kill one to save five?

Most people say "yes" in the first case and "no" in cases two and three. It seems to me that presenting the problem in this form biases respondents towards a consequentialist viewpoint, because the consequences are presumed to be known. The author sees the result as evidence of deontological thinking, since the listed consequences are the same in all three cases, and I suspect there is some truth to that, but it seems likely to me that at some level respondents are simply rejecting the problem. The first case seems relatively straightforward, but consider the second. Do we know that all five hikers will be killed by the trolley? Do we know that hitting lard-ass will slow the trolley enough for all five to escape? How could we possibly? Trolleys are pretty heavy; what if it just plows through lard-ass and kills six instead of five? What if we try to push lard-ass off the platform, but he holds on, and doesn't appreciate our justification for trying to kill him? And in the doctor case, are we sure all five will survive the transplants? Are we sure there is no hope of getting the organs some other way, and that the patients will all die otherwise? Why can't we pick one of the five who is dying anyway and use his organs to save the other four, without involving the innocent bystander?

Any action we choose to take will have endless consequences, most of them unforeseen and unforeseeable. I tend to reject consequentialist moral arguments for this reason. But there is a consequentialist aspect of this problem that the author misses, and there's no way to sugarcoat this turd, it must be said with brutal directness: from the point of view of the actor's personal utility, there is no particular reason to believe it is an improvement for some random stranger to be alive rather than dead. If one considers not the world of today but the much-closer-to-zero-sum world of hunter-gatherers, the death of a distant stranger is probably a net plus, albeit a small one. The relevant consequences for the actor are not so much the people directly killed or saved as the reactions of his community to his actions. This may seem to merely push the problem back a level without changing anything, but it matters. Since actions can be observed more or less directly while motivations can only be imperfectly inferred, rules almost have to take the form "this is what you must do" rather than "do whatever seems most likely to give the best result".

Near the end, the book suggests that apes, and perhaps some other animals, should be treated as "moral patients" despite not being "moral agents"; that is, that we should treat them according to moral rules that they will not and cannot apply to us, or even to each other. Personally, I like apes and many other kinds of animals, and would be willing to go to some effort to protect them, but as far as I can tell this is just a personal preference, albeit a widely shared one.
