The Morality of Ice Cream
October 6, 2020

I was thinking today, in what context I don’t remember (maybe that of a would-be dictator encouraging Americans to ignore a deadly virus he fostered?), about the most important subject for humanity. I decided it was moral philosophy. Because that’s, like, the guiding principle for everything we do, right? In the would-be dictator’s case, the philosophy is Me First, then Me Second. For most of the rest of us, it’s more complicated, involving our notions of responsibility to others.
It’s too complicated, really. Moral philosophy is both most important and most impossible.
Why impossible? Well, you have to start with assumptions about what’s valuable. Should our morality focus only on human beings, for example? Or also on dogs and dolphins and elephants and other intelligent life? If the latter, where do we draw the line? Are mosquitoes exempt from our care? (Remember that animals we used to assume were mindless have been shown to have sophisticated brains. I have no love for mosquitoes, but it may turn out that they’re really smart.)
Even if we focus on humanity alone, what should our morality promote and what should it avoid? To me, the only formulation that makes logical sense is the utilitarian one, “the greatest good for the greatest number,” or, as Jeremy Bentham put it, “the greatest happiness for the greatest number.” But that logic proves bankrupt when we try to put it into practice.
To follow a utilitarian creed, we have to know what “good” or “happiness” means, which almost makes the argument tautological. But, okay, try to define this good or happy state: Do you choose freedom from hunger or freedom from tyranny? Maximum total pleasure (which would also have to be defined) or minimum suicides? Ethnic pride and dignity or less frequent war?
Even John Stuart Mill, who became the chief apostle of this philosophy, had to distinguish between “higher” and “lower” pleasures—again, it seems to me, begging the question.
And even if you could precisely define what you meant by “good” or “happiness,” how would you measure it—not just in one person but across the 7.8 billion people on Earth?
And then, if you could define it and measure it, you’d still have to be able to predict the future. That is, in order to decide whether to do something, you’d have to judge what effects that action would have—and what effects it would prevent from happening. You’d need to envision the infinite number of possible timelines emanating from your single choice. Infinitely impossible.
So, to adopt utilitarianism in daily life, we must cut that infinity down to size by making assumptions about what’s likely to have a beneficial effect on whatever portion of the population we happen to be considering. And, for me, this process often ends up more or less with the golden rule, the do-unto-others thing. Which itself is a poor guide to questions such as “Should I tell my sister that her husband is cheating on her?” and “Is it okay to give my grandkids ice cream though their mother forbade it?”
Sigh.
All this, plus laziness, explains why I haven’t actually studied much moral philosophy. It relies too much on unprovable assumptions. Plus it’s less entertaining than a good detective story.
Yet I would love to be more certain. I’d love to believe in a principle as easy to operate as the TV.