Why Sam's Theory Doesn't Hold Up

Submitted by Ken Watts on Thu, 05/27/2010 - 12:26

This series is in the home stretch.

My previous posts have outlined how Sam Harris's theory of morality fails to bridge the famous "is to ought" gap, how we have been conflating two completely different models of morality for centuries, and how this conflation has created that gap.

It's time to return to Sam's talk at TED, and point to a way forward.

Sam's approach packaged some serious and important concerns, ones I think most of my readers would share, in the same box as a general theory of morality that just doesn't work.

That theory can be summed up as follows:

  1. All human moral reasoning can be completely reduced to a concern for the well-being of conscious creatures.
    1. The more conscious the creature, the greater the concern: hence we are more concerned about a human than a mouse, and more concerned about a mouse than an ant.
  2. Human values are simply facts about the well-being of conscious creatures.
    1. Since they are facts, they are, in theory, the kind of things that can be determined scientifically, and are therefore either true or false.
    2. Even though science isn't yet up to the full task, some such facts are just obvious to us, and we are therefore morally justified in saying what (other) people ought to do in those cases.

The difficulties with this theory can be summed up as follows:

  1. Well-being is not the sole principle of morality.

    It certainly isn't within the legal model, as I've pointed out previously with the example of Abraham and Isaac, and with the fact that many Christians view homosexuality as harmful only because they believe it is wrong, not the other way around.

    Nor is it in the natural model.

    I gave one example earlier: the case of people who feel a moral obligation to the dead.

    But to make the point completely clear, let me invite you to a thought experiment.

    Imagine that in some technological future world, Sam's faith in brain science has been completely vindicated.

    It is now entirely possible for scientists to measure conscious well-being, and as a result they have made a remarkable proposal.

    It turns out that there is a way to maximize human well-being beyond our wildest dreams.

    It involves a scenario much like the Matrix, but completely benign in intent.

    Humans would be put in sleeping cubicles where their every physical need could be completely met. Their bodies would continually be maintained at peak health, and they would only be conscious of a virtual world.

    That world would be designed to maximize their mental well-being, giving them the complete illusion of the happiest, most meaningful life possible to a human.

    The only catch is that in order for this to work—in order for it to provide complete well-being—they cannot know about it, either during the illusion or in advance.

    You have been chosen to make the decision for the entire human race.

    Would you 1) endorse the plan, 2) say no, or 3) refuse to make the decision?

    It's clear that anyone with a fine-tuned human moral sense would choose either option two or option three.

    The reason is that values such as truth or freedom can, and would, trump well-being in such a circumstance.

    You simply would not feel it was right for you to make this decision for all humans without their knowledge.

    And you would feel that there was something wrong about a scenario in which well-being depended upon everyone being deceived for their entire lives.
  2. The idea that "conscious creatures" are, by virtue of their level of consciousness, more or less ethically important in some objective way just doesn't jibe with human moral instincts.

    Another sci-fi thought experiment:

    This time, imagine that we are approached by a race from outer space.

    We have, by this time, come to our senses morally.

    The meanderings of that moral dinosaur, Ken Watts, are lost to history, and everyone is now an enlightened Harrisite.

    Our scientists have, once again, been able to measure consciousness and conscious well-being.

    The space aliens, who are technologically advanced but against whom we could put up a good fight if we decided to, ask our own scientists to measure their level of consciousness and compare it to that of humans.

    It turns out that they are a hundred times more conscious than we are.

    Compared to them, we might as well be ants when it comes to consciousness.

    The space aliens then point out that, by our own Harrisite morality, we should be willing to surrender and be treated by them with the same level of consideration we give ants.

    They point out that their well-being is much more important than ours from an ethical point of view.

    Would you agree, or would you vote for us to muster every defense we have and throw them off our planet?

    Again, it's clear that we would not agree, and for the same reason as in the first thought experiment: there are human values other than the well-being of conscious creatures, such as loyalty to other humans, freedom, and fairness.
  3. Values are not facts.

    It is a fact that I have certain values, and it is a fact that I have a car.

    But neither the car nor the values are merely facts.

    The car is a car.

    Values are values.

    Values are among the givens of what it means to be a human being.

    I can know, without a doubt, that eating is essential to my well-being.

    But that is not the same thing as wanting to eat, or valuing food.

    It is a fact that we do value the well-being of conscious creatures, among other things.

    It's even true that facts about the well-being of conscious creatures are very important stepping stones from our values to moral decisions.

    But values are not merely facts, nor are they reducible to facts.

It's interesting to notice, by the way, that the three points above address more than Sam's proposal:

They also address the entire hierarchical, legal model of morality, the model most fully endorsed by those toxic forms of religion that Sam hates and that cause so much trouble in the contemporary world.

One of the most common defenses of religion, when all other defenses fail, is that it provides a virtual world that contributes to the conscious well-being of its adherents.

One of the most morally corrupting aspects of toxic religion is the belief that we have a moral obligation to allow higher forms of consciousness—such as the gods—to walk all over us.

But they needn't be gods.

Sam argues that ants are less of a concern because they are less conscious, but isn't it possible that some humans are less conscious than others, that we differ in consciousness as we do in every other trait?

Does that mean that some humans are less of a concern than others?

Finally, the reduction of morality to facts, which the priest can know no matter what the common people's natural values tell them, is a bulwark of authoritarian systems.

The point here isn't whether moral knowledge is religious or scientific; it's that some people get to simply tell others what is right and wrong.

We've already seen why Sam's case fails on a factual level, but by staying within the legal, hierarchical model, Sam also risks all of the problems associated with that model in toxic forms of religion.

But this reductionism within a legal model, this top-down approach to morality, is only one small part of Sam's concern.

It's not necessary for Sam to use the legal model to get what he wants.

Next: How Sam Can Get What He Wants...