Many of you who have studied philosophy in some capacity have likely heard these objections before, but let’s run through a few of them quickly:
- How do you measure utility? I haven’t heard a convincing account of what utility is, much less how it can be properly measured. And I think it’s psychologically unrealistic to think you can map the same linear metric of pleasure/happiness/fulfillment/whatever else onto everyone.
- Disturbing human rights implications. Say you govern a small town in which all of the townspeople are united in favor of lynching one of their neighbors. You know the total amount of utility the townspeople will get from murdering their neighbor exceeds any amount of utility the potential victim might obtain over the rest of his life. What do you do?
- The utility monster. In the same vein as the last objection, what would a utilitarian do with an individual who was simply capable of generating more utility from the consumption of resources than anyone else in the whole town? Give him everything he desires, even if that leaves everyone else with nothing?
In response to those last two objections, a lot of modern utilitarians subscribe to something called “rule utilitarianism,” which puts certain constraints on what can be done to maximize utility. So, for example, the governor of the small town in the second example might be a rule utilitarian who favors maximizing utility at all costs, except when doing so violates his “no lynching” rule.
The problem is, once you start setting up rules outside of the utilitarian framework, you have to produce some metaethical account of where those rules come from—and suddenly, you’re in the same position as non-utilitarians, trying to locate some outside justification for your ethical code. The closest thing I’ve seen to a compelling justification for those rules is an appeal to intuition, which I find kind of laughable. “Intuition” as a final justification is the worst kind of hand-waving in philosophy; it’s exactly equivalent to saying, I am pulling this entirely out of my ass, but shut up.
Anyway, setting up arbitrary codes to protect utilitarianism from its own logical conclusions doesn’t do much to solve the underlying problem from which my latter two objections stem: this is a philosophy that does little to acknowledge the natural separations between persons. There’s no math in the world that can take all of our wants, hopes, desires, and fleeting pleasures and add them up into some sort of aggregate value.