(Photo by JD Hancock)

There is one concern that seems to derail every discussion about my view of morality. The philosopher Russell Blackford put it this way in his review of The Moral Landscape:

Why, for example, should I not prefer my own well-being, or the well-being of the people I love, to overall, or global, well-being?... Harris never provides a satisfactory response to this line of thought, and I doubt that one is possible. After all, as he acknowledges, the claim that "We should maximize the global well-being of conscious creatures" is not an empirical finding. So what is it? What in the world makes it true? How does it become binding on me if I don't accept it?

I believe that Blackford (and most everyone else) has confused two separate questions:

(A) What is the status of moral truth?—that is, what does it mean to say that one state of the world is "better" than another?

(B) What is rational for a person to do, given what he or she wants out of life?

The argument I present in my book focuses on (A), but has implications for (B). The concern about zero-sum conflict (whether between individuals or between an individual and society) focuses on (B). Consider the following example:

There is one slice of pie left, and you and I both want it. There are at least 4 possible states of the world that might follow from this: (1) you get it; (2) I get it; (3) we split it; or (4) we give it to some needy person.

Let's say that a God's-eye-view of the situation reveals the following: (1) and (2) are morally indistinguishable—we both love pie equally, and we will enjoy this particular slice of pie to the same degree; neither of us has a health concern for which pie is contraindicated; etc.

Let's also say that if we were the sort of people who could do (3), we would both be better off—we'd each get a little pie, stick more closely to our diets, and have a nice parable to tell our children.

Let's also stipulate that, in this case, (4) would be better still—it turns out there is a needy person close at hand who would be made far happier by the pie than either of us would; we would both feel especially good about ourselves for having resolved an apparent zero-sum conflict in this way; and someone would witness our noble behavior and put us on the evening news, making us local heroes.

But, as it turns out, you and I are not the sort of people who can contemplate doing (3) or (4). We are too selfish, and we crave pie too much. And, worse still, we each take pleasure in denying others what they want. (In other words, we both have brain damage.)

Focusing on the question of moral truth (A) yields this: it doesn't really matter whether you or I get the pie, because (1) and (2) are morally equivalent. And while (3) and (4) would be better, people like us can't reach such heights on the moral landscape.

Blackford's concern focuses on the question of what is rational for each of us to do, given who we are (B). You strongly prefer (1), and I strongly prefer (2), and it seems that we are both right to prefer our own happiness in this regard. I admit that such zero-sum situations occasionally arise, and we accept them within the bounds of normal, self-interested behavior. But I argue that, given the vastness of possible experience, there will usually be better solutions that are not zero-sum. And the only way to make sense of the concept of "better" is to invoke a background notion of moral truth (which, in my terms, relates to possible states of well-being). This brings us back to (A).

When we focus on (A), we can make sense of the claim that some people are unable to want what they should, in fact, want; some people are cognitively and emotionally closed to ways of living that would make them happier than they tend to be.

Some of us are committed to playing lousy games in which it doesn't much matter who wins. And then there are better games we might play...
