Lately, I’ve been watching Michael Sandel’s lectures on philosophy for the course “Justice” at Harvard on my iPod as treadmill entertainment. In this post, I am particularly reacting to Episode Two, Part One. Here is the summary from the website:
Today, companies and governments often use Jeremy Bentham’s utilitarian logic under the name of “cost-benefit analysis.” Sandel presents some contemporary cases in which cost-benefit analysis was used to put a dollar value on human life. The cases give rise to several objections to the utilitarian logic of seeking “the greatest good for the greatest number.” Should we always give more weight to the happiness of a majority, even if the majority is cruel or ignoble? Is it possible to sum up and compare all values using a common measure like money?
My first problem with utilitarianism as a means of measuring morality is this: how do you decide the span of time over which to measure the expected totals of happiness vs. suffering? Many things may provide a lot of people with a lot of immediate happiness but then result in suffering later, and vice versa. Sometimes this can reasonably be foreseen in advance, in which case all expected results must be taken into account, no matter how far into the future they are expected to occur. Part of the reason this is problematic is tied in with the next flaw I see in the utilitarian approach to morality. The main reason it constitutes a flaw, however, is that in the vast majority of circumstances, human actions inevitably produce unexpected, unintended consequences. Utilitarianism, then, determines what is moral based on necessarily incomplete information, which makes for inherently weak conclusions.
The next flaw in utilitarianism comes from the fundamental attribution error: humans inescapably perceive their own happiness as less than others’ and their own suffering as greater than others’, even when this is not objectively the case. Given that, how can we even sum up the “total happiness” or the “total suffering” in the first place? This bias also compounds the previous flaw, because it explains why people discount the degree of either suffering or happiness more heavily the farther into the future it will occur. We (humans) place the highest value on what affects us most directly and most immediately. Thus, it would be nearly impossible to arrive at correct moral conclusions through utilitarianism. Even if it were possible, though, there is one more flaw that I consider fatal.
The whole idea of utilitarianism rests on the assumption that suffering can somehow be justified by happiness. This assumption, in turn, is based on a fatally flawed definition of happiness. The flaw is this: true happiness can never come at the expense of causing anyone else to suffer. The “happiness” that utilitarianism claims to measure as its basis for morality is a fiction of Bentham’s imagination. Real human happiness comes only from forming authentic relationships, as described by William S. Hatcher in Love, Power and Justice, or, as Martin Buber calls them, I:Thou relationships, rather than the I:It relationships that make up utilitarianism’s calculation. Since the “happiness” utilitarianism weighs can be bought with others’ suffering, it is not real happiness at all; any honest application of utilitarianism must therefore put the value of “happiness” at zero, which shows that the calculus is, in the end, meaningless.