19 Comments
Mar 4 · Liked by Max Goodbird

So to summarize, the argument against utilitarianism is essentially:

1. Nobody can define utility, not even utilitarians

2. Even if you could define it, you can't realistically measure it except in the most trivial of circumstances

3. Even if you could measure it, you can't always tell whether a given action will increase it, let alone maximize it

4. Even if you could maximize it, there is no a priori reason to believe that maximizing it would necessarily be moral*

So why is anybody attracted to utilitarianism even a little bit? Under what circumstance is it "helpful"?

Mar 4 · Author

To steelman Utilitarianism, it might help to look at a fairly concrete case. Say, promoting COVID vaccines.

Some *very* small percentage of the population will be harmed by COVID vaccines. A very large percentage of the population will be helped. The net payoff for society, and even the expected payoff for each individual, is positive.
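To put rough numbers on that claim, here's a minimal expected-value sketch. Every probability and QALY figure below is an invented placeholder, not real vaccine data:

```python
# Minimal expected-value sketch of the vaccine case above.
# Every probability and magnitude is an invented placeholder, not real vaccine data.

p_serious_vaccine_harm = 1e-6      # placeholder: chance the vaccine seriously harms you
qalys_lost_if_harmed = 30.0        # placeholder: QALYs lost in that rare case

p_covid_if_unvaccinated = 0.30     # placeholder: chance of catching COVID unvaccinated
p_severe_given_covid = 0.02        # placeholder: chance that case turns severe
qalys_saved_by_protection = 10.0   # placeholder: QALYs saved by avoiding a severe case

expected_cost = p_serious_vaccine_harm * qalys_lost_if_harmed
expected_benefit = p_covid_if_unvaccinated * p_severe_given_covid * qalys_saved_by_protection

print(f"expected cost per person:    {expected_cost:.5f} QALYs")
print(f"expected benefit per person: {expected_benefit:.5f} QALYs")
# With these made-up numbers, the expected payoff for each individual is positive,
# even though a tiny fraction of people really are harmed.
```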

Some moral decision-making frameworks might suggest that _any_ potential harm means we shouldn't do it (including, ostensibly, the "do no harm" Hippocratic oath!). A hard-nosed Utilitarian calculation leads us to a much more sensible conclusion.

Mar 4

An even stronger steelman would be speed limits on the roads being higher than 1 mph.

Clearly, the limits we currently have trade off some nonzero amount of death and serious injury to a few against convenience for many.

What it feels like over here as a consequentialist is that nearly everyone accepts consequentialist tradeoffs when it's the status quo and tacit, and only brings out the "cold-hearted monster willing to trade off human lives on a spreadsheet" accusation when the proposal is weird, new and/or overly explicit. Any remotely decent complex society is thoroughly dependent upon solid consequentialist tradeoffs across a wide variety of policy areas.

Mar 5 · Author

> What it feels like over here as a consequentialist is that nearly everyone accepts consequentialist tradeoffs when it's the status quo and tacit, and only brings out the "cold-hearted monster willing to trade off human lives on a spreadsheet" accusation when the proposal is weird, new and/or overly explicit.

This is to be expected, regardless of what moral framework you're using. If your conclusion broadly agrees with other frameworks (especially intuition), no controversy. You interpret the conclusion as consequentialist, while others interpret it in other ways.

But when moral frameworks diverge, there's controversy. And that's OK. (Though obviously the "cold-hearted monster" ad hominem is not!)


I don't see how supporting speed limits which allow for nonzero deaths could possibly count as adhering to the rule "it's never okay to sacrifice some people for the benefit of others".

Author

Agreed--it doesn't, in the strictest interpretation. But I also don't think anyone really adheres to that as a deontic imperative.

In general, for any deontic imperative, there's a thought experiment that makes it look dumb. Deontology doesn't have the final say either!

Mar 5

My claim here isn't just that there are exceptions, but that they're highly regular and illustrate something important. I'm claiming that these widespread examples of tradeoffs in lives (e.g. traffic laws, food and environmental toxicity thresholds, crime deterrence calculations), which are both essential to our society and widely accepted, demonstrate that the opposition to consequentialist tradeoffs is not (as purported) an opposition to violating the sanctity of individual lives, but rather an opposition to the weird, scary mathiness of proposals that haven't yet had their aura of fright removed by becoming the status quo for a while.


Would the utilitarian calculation include the fact that many people, however wrongheadedly, fear harm from the vaccine? Or believe that taking it violates their right to personal autonomy?

What if we go beyond the vaccine, which seems like an easy case study, and look at shutdowns and quarantines, and the harms they did to livelihoods and mental well-being? Were those harms worth the incremental lives saved? And where is the threshold? We routinely accept that deaths from 'flu don't justify shutdowns.

[For the record, I am on the side of both vaccines and shutdowns.]

I don't believe that utilitarianism gives you an answer to those questions because the "good" and the "harm" are fundamentally incommensurable: they literally cannot be summed up into a neat total. There is no mathematization of "utility" to be found.

By the way, it is worth noting that in such a simple concrete case, and possibly in any simple concrete case, utilitarianism doesn't seem to be telling us anything that any other morality wouldn't also tell us: a vaccine with high effectiveness and low risks is good.

Mar 4 · Author

> What if we go beyond the vaccine, which seems like an easy case study, and look at shutdowns and quarantines, and the harms they did to livelihoods and mental well-being? Were those harms worth the incremental lives saved? And where is the threshold? We routinely accept that deaths from 'flu don't justify shutdowns.

This is exactly where Utilitarian calculations can really help!

We could, e.g., build a model that estimates the economic and emotional impact of a shutdown, measured in QALYs. Then we estimate the impact on QALYs if we don't shut down and some people die of COVID. Now we have two estimates we can compare to inform our decision.

Crucially, it shouldn't be the only input into our decision! It's not a perfect calculation--there's a lot of wiggle room in the quantification. Ideally we put big error bars on everything.

But even with the caveats, it's extremely helpful data to have.
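To make that concrete, here's a minimal sketch of the kind of comparison I mean. All of the input ranges are invented placeholders (illustration, not epidemiology), and the wide intervals are the "error bars":

```python
# Minimal sketch of the two-estimate comparison, with made-up input ranges
# and Monte Carlo sampling standing in for the "big error bars".
import random

def qalys_lost_lockdown():
    # Economic + emotional burden of a lockdown, in total QALYs lost (placeholders).
    economic = random.uniform(50_000, 400_000)
    emotional = random.uniform(20_000, 300_000)
    return economic + emotional

def qalys_lost_no_lockdown():
    # Deaths and long-term illness if we don't lock down (placeholders).
    deaths = random.uniform(10_000, 80_000)
    qalys_per_death = random.uniform(5, 15)
    illness = random.uniform(10_000, 200_000)
    return deaths * qalys_per_death + illness

def summarize(samples):
    s = sorted(samples)
    n = len(s)
    return s[n // 2], s[n // 20], s[19 * n // 20]  # median, 5th and 95th percentiles

N = 10_000
for name, model in [("lockdown", qalys_lost_lockdown), ("no lockdown", qalys_lost_no_lockdown)]:
    med, lo, hi = summarize([model() for _ in range(N)])
    print(f"{name:12s} QALYs lost: median {med:,.0f}  (5th-95th: {lo:,.0f} - {hi:,.0f})")

# If the two intervals barely overlap, the comparison is informative;
# if they overlap heavily, the error bars are telling us the model can't decide.
```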


...except it's not data. It's subjective guesses masquerading as data (at least as far as the non-economic factors are concerned).

Unless different people's estimates of the QALYs have some reasonable degree of correlation, you really can't say that it is estimating anything, regardless of how big one makes the error bars.
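A toy sketch of what that check would even look like (all numbers invented; `statistics.correlation` needs Python 3.10+):

```python
# Toy version of the check: do two raters' independent QALY estimates of the
# same scenarios agree at all? All numbers are invented.
from statistics import correlation  # Python 3.10+

scenarios = ["lockdown", "no lockdown", "schools only", "masks only"]
rater_a = [350_000, 600_000, 150_000, 40_000]   # rater A's guesses (QALYs lost)
rater_b = [90_000, 700_000, 400_000, 30_000]    # rater B's guesses (QALYs lost)

print(f"Pearson r between raters: {correlation(rater_a, rater_b):.2f}")
# If r hovers near zero (or goes negative) across many raters and scenarios,
# the "estimates" aren't measurements of any shared quantity.
```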

And even if that hurdle could be overcome, you still have the problem of incommensurability. How do you put a dollar value on emotional impact (including the emotional impact of losing a loved one to somebody else's decision not to vaccinate)? Or an emotional value into dollars? Unless you can measure the two things in the same "currency", via some agreed "exchange rate", there is no meaningful calculation to be made.

Has anybody ever completed such a utilitarian calculation that a reasonable person would agree with? Because if that could be done, surely people would be obliged to agree that utilitarianism has some value - even other philosophers.

By the way, there is still one more real-world difficulty yet to address. Economist Charles Goodhart's eponymous law is an adage often stated as, "When a measure becomes a target, it ceases to be a good measure". So even if one could define a utilitarian calculation that arrives at some number (measure), some people would immediately change their behavior to game that number so that their personal benefit is optimized. This can happen in obvious ways. (When some software firms started measuring programmer productivity by "lines of code written", programmers started writing more verbose code.) It can happen in hilarious ways. (There is a persistent urban myth that <city name> offered a bounty for rats, so entrepreneurial citizens started breeding rats, killing them, and bringing them in for the bounty.) It can happen in more subtle and insidious ways. (People can arrange their own affairs so that the calculation unfairly benefits them personally.)

Mar 5 · Author

Totally agree with everything here, just not quite as strongly.

I think it does make sense to consider apples-to-oranges comparisons, like how many micromorts you'd risk to have the experience of hiking Mount Everest, or how many days of life you'd sacrifice to get a million dollars today. They're obviously fuzzy guesses, and it's important to acknowledge that everyone will come up with a different exchange rate--often wildly different--based on their personal values. Which is why many of us bristle when those exchange rates are decided on by institutions, which then use them in decision making (e.g. the EPA's "Value of Statistical Life").

But having a rough idea of those numbers can help us think through hard decisions, whether it's a lockdown or an Everest expedition. Sometimes the numbers make it very obvious!
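Here's a rough sketch of the kind of back-of-the-envelope arithmetic I have in mind. Every number is a personal, made-up guess (the Everest risk figure and life expectancy are placeholders, not citations):

```python
# Back-of-the-envelope version of those exchange rates.
# Every number is a personal, made-up guess -- swap in your own.

dollars_offered = 1_000_000
days_of_life_i_would_trade = 30            # say I'd give up a month for the money
dollars_per_day_of_life = dollars_offered / days_of_life_i_would_trade

micromort = 1e-6                           # a one-in-a-million chance of death
everest_risk_micromorts = 40_000           # placeholder guess for one expedition
remaining_days = 50 * 365                  # placeholder remaining life expectancy
expected_days_lost = everest_risk_micromorts * micromort * remaining_days

print(f"Implied value of a day of my life: ${dollars_per_day_of_life:,.0f}")
print(f"Everest attempt costs ~{expected_days_lost:.0f} expected days of life,")
print(f"or ~${expected_days_lost * dollars_per_day_of_life:,.0f} at my own exchange rate")
# Someone else will plug in wildly different numbers -- which is exactly the point,
# and exactly why it grates when an institution picks one number for everyone.
```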

The trick is to always remember how fuzzy those numbers are, and not fool yourself into ignoring intuition, advice, tradition, public sentiment, etc.

(Re: Goodhart, you might be interested in this article of mine: https://superbowl.substack.com/p/beware-the-variable-maximizers)

Mar 5 · Liked by Max Goodbird

Yes, but... Given all that, I would argue that one arrives at a *personal* maximization of utility, not an *objective* one. So yes, it helps one think through a personal decision (should I get the vaccine? should I keep eating bacon? should I quit drinking?), but I don't see how it helps to make any kind of policy decision.

Consider one of your examples: every individual is going to make a different calculation on whether they personally should attempt to climb Everest. And that's fine. (Even then, I suspect that in reality people are going to adjust their "exchange rate" until they get the result that, deep down, they already knew they wanted. Consider the old advice that if you can't decide on something, flip a coin. If you look at the outcome and say to yourself, "OK, best two out of three", you know what you really want to do.)

But that still doesn't help us decide non-personal questions of public policy, such as: should people who are inexperienced or under-qualified be banned from attempting the climb, because they put other team members at risk as well as rescue crews when they get into trouble? Or should climbing Everest be reduced or even banned entirely because of the ecological damage climbers are doing? (Kanchha Sherpa thinks so: https://apnews.com/article/nepal-mount-everest-sherpa-guide-f78454e9cd3c984ebe43f16052517743).

In fact, individuals' evaluations of utility may not merely differ; they may be diametrically opposed. As a concrete example, consider SCOTUS overturning Roe v. Wade. The decision massively increased the happiness of people who intensely believe that life begins at conception, as well as the people who exploit that belief for political gain. And it massively decreased the happiness of people who intensely believe that women should make their own decisions about their health. So now you have to decide how "intensely" each person feels about it to calculate their change in happiness (and that's before you even start on the personal and economic impacts of allowing vs disallowing abortion). And here the problem becomes even worse. If we're going to model this mathematically, we have to acknowledge that there are people whose position is that "the life of the baby / fetus / zygote / ectopic pregnancy with zero chance of resulting in a live birth is infinitely valuable compared to the health or life of the mother"; and others whose belief is "the health or life of the mother trumps the life of the zygote / fetus / baby". So now one's calculation involves "multiply by infinity and divide by zero". Good luck with that.

So you might say, "OK, maybe neither of those things is the maximum; maybe there is a compromise that maximizes happiness, say a 20-week limit with exceptions for the health of the mother or viability of the pregnancy?" Except that doesn't work, because for the "life begins at conception" believers, anything less than a total ban is equally bad; meanwhile any limit will make some people in the pro-choice camp less happy. So any compromise reduces the "utility". (If you could draw a graph of "utility" vs. "how restrictive abortion law is", it would be bathtub-shaped: maximum utility is found at either an extremely conservative or an extremely liberal position, and anything in between frustrates almost everybody.)

The other thing about this scenario is that it is highly unstable, in the sense that people who model complex systems use the term. By which I mean, if one is only *slightly* off in how one estimates "intensity of feeling" or "happiness", or in how one converts such values to a common metric with money, it dramatically changes the outcome. And in a system where, as you agree, there are going to be large "error bars" on all our estimates, that is fatal to coherent decision making.
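You can see both the bathtub and the instability in a toy model. All the utility functions and weights below are invented purely to show the shape of the problem:

```python
# Toy model of the bathtub curve and its instability. All utilities and weights
# are invented purely to illustrate the shape of the problem.

def group_a_utility(restrictiveness):
    # "Life begins at conception": anything short of a total ban is equally bad.
    return 1.0 if restrictiveness >= 1.0 else 0.0

def group_b_utility(restrictiveness):
    # Pro-choice: utility falls steadily as restrictions increase.
    return 1.0 - restrictiveness

def total_utility(restrictiveness, weight_a):
    return (weight_a * group_a_utility(restrictiveness)
            + (1.0 - weight_a) * group_b_utility(restrictiveness))

policies = [i / 10 for i in range(11)]  # 0.0 = no restrictions ... 1.0 = total ban

for weight_a in (0.49, 0.51):  # tiny change in how much group A's feelings "count"
    best = max(policies, key=lambda p: total_utility(p, weight_a))
    print(f"weight_a = {weight_a}: utility-maximizing policy = {best}")
# The maximum always sits at one extreme or the other (the bathtub), and a
# two-point shift in the weight flips the "optimum" from one extreme to the other.
```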

In other words: if one tries to apply utilitarianism to one of the most important social policies of our day, it turns out to be of no help. Not "a little help" or even "a very little help"; no practical help at all.

And, I would argue, a similar analysis applies to other major social problems. Hardline gun rights advocates place a far higher value on their own feelings of safety than on the actual safety of everybody around them. Oil company executives and social media CEOs place a far higher value on their own personal wealth than on the health and well-being of the population at large. Evangelical Christians place a far higher value on their right not to be "forced" to provide a marriage license or a cake to a gay couple than on the gay couple's right to be treated as people.

In fact, I would aver that in a great many of the most contentious social issues, we are going to find this same "bathtub" problem: where the opposed qualitative benefits are at opposite ends and are strongly held, a small change in how we quantify the qualitative benefits will dramatically change the calculation. (As a real-world example, look at how different courts in different states in the US handle such things as damages for emotional distress.)

And of course if one says "well, in this situation there is no way to account for how strongly everybody feels, so we just have to resort to an objective economic calculation", one ends up back at the EPA's Value of Statistical Life.

Net: given all that, I still don't see that a calculation that is, *in practice*, a highly subjective and unstable gloss on a personal opinion adds anything useful to "intuition, advice, tradition, public sentiment, etc.", or to logic and analysis of those factors that can be quantified.


P.S. Maybe the "right" answer to the vaccine question is that everybody should get vaccinated except people with very strong objections, provided that the population still reaches herd immunity? And good luck measuring any of those things.


An excellent post, and comments, too. Thanks to all.

I was a utilitarian (basically a professional utilitarian) until I wrote my latest book. Working through my doubts led me out of that view. (You can read those chapters - "...Expected Value..." and "...Philosophical Bullet" - free at https://www.losingmyreligions.net/ -- the "Start Reading" line)


I cannot fathom how people believe they have come up with some "new" way of looking at virtue.

The Twitter blurb says it all - "...until you become a god."

If you read the first few pages of Genesis you will see this "philosophy" is as old as the hills. If you look further you will see without fail, irrefutably, that the desire to become god is what has caused, is causing, and will cause all the immoral, animal brutality that man has perpetrated against man since Cain bashed Abel's head in. Ludicrous.

GOD Bless. Not man bless.


The biggest issue with utilitarianism remains the open question argument by G.E. Moore: https://en.wikipedia.org/wiki/Open-question_argument


I think you're mostly just conflating a description of moral goodness with heuristic usefulness.

My position as an ethical consequentialist is that goodness/badness means something like a reasonable Bayesian estimate of the increase/decrease in whatever the x0 is. Examining our current linguistic and conceptual classification of experiences in order to get closer to an appreciation of x0 is a key part of the ethical project.

Reducing immense physical suffering and providing life experiences that are widely understood to characterize flourishing for human and nonhuman animals aren't hubristic claims to the Grand Unified Theory of Well-being; they're the best heuristics we have now for moving the world in the direction of the lodestar whose nature we're continuing to explore.
