28 Comments

I argue for the supposedly repugnant utilitarian conclusion here. https://benthams.substack.com/p/utilitarianism-wins-outright-part-336

But I think this is mostly irrelevant to the dispute. Every plausible view will say it's wrong to cause lots of suffering--and worse the more suffering you cause. Some views will say that lower pains like pinpricks are infinitely less bad than torture, so the badness of causing one instance of torture is greater than that of causing a bunch of pinpricks. If I were deciding whether to break both arms of one person or one arm each of 49 people, the argument you give wouldn't be relevant, and I think it's not relevant here. In this case, even though fish aren't very smart, it still seems wrong to cause them tons of torture for the sake of minor pleasures.

author

For sure. The arm-breaking example is a great point, and exactly why I didn't replace the "maximize total utility" argument with "maximize minimum utility"--all maximizers break down under certain circumstances.


Sure! You might think that there is no single rule that tells you what to do in all circumstances, but there are clearly some that tell you what to do in some circumstances. For example "if choosing between two actions, one of which causes dozens of times the suffering of the original action, both on creatures that don't deserve to suffer, pick the one that causes less suffering."

author

But then you still need some meta-principle that picks between Utilitarian rules, no?


No, you don't. You don't need an ultimate meta-principle to apply lower-level principles. You can, for example, think that you have a duty to save the lives of innocent people or avoid senseless homicide without having figured out the one true morality. In a similar way, you don't need to have a coherent ultimate principle, and you certainly don't need to be a utilitarian, to think that it's generally much worse to cause enormous amounts of suffering than to cause much smaller amounts.

author

So I agree with this entirely. I guess I'm just surprised to hear it coming from Bentham's Bulldog! Usually you advocate for avoiding intuitive/deontic reasoning because it leads us to false conclusions--but here you sound more like a relativist. Am I missing something?


The following two claims are perfectly consistent.

1) Utilitarianism is true.

2) Even if it's false, one should be an ameliatarian.

I think both are true. Here, I was arguing for the second one. Your objection to the second seemed to be just arguing against 1), which is no objection to 2).

May 5, 2023 · Liked by Max Goodbird

My response to the Repugnant Conclusion argument is that it's a proof that the lives we feel socially obligated to describe as "barely worth living" are in fact highly net-negative. To put it another way, an actual marginally net-positive life would be one we'd look at and intuitively go, "yeah, you know, that life seems pretty okay!", and we wouldn't find a world with huge numbers of such lives repugnant at all.

The problem stems from "lives barely worth living" being misused euphemistically, because we instinctively worry that if we correctly call them "very net-negative lives that would be better having never existed", people may falsely accuse us of wanting to commit genocide, of not trying to help them now that they're here, etc.

author
May 5, 2023 · edited

Yeah I agree the "barely worth living" line confuses the matter.

Part of the problem is that all humans experience large swings in valence. So a typical life "barely worth living" has huge negative valence for large periods of time, offset by just enough time spent in positive valence. Our imaginary person is still suffering quite a bit.

When I try to really grok the argument, I imagine hypothetical beings which just sit in a state of constant valence (which I might picture as a person sitting in meditation). 10k people sitting in a state just north of equanimity versus 100 sitting in a state of pure bliss--I still think I prefer the latter, but it's not as obvious when phrased this way.
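A minimal sketch of how the naive summation handles this comparison, with entirely made-up valence numbers (0.01 for near-equanimity and 1.0 for bliss are assumptions, not anything from the post):

```python
# Naive total-utilitarian comparison. Valence values are hypothetical;
# the point is that the verdict hinges entirely on the assumed ratio.

def total_utility(population: int, valence: float) -> float:
    """Aggregate welfare under simple summation."""
    return population * valence

just_north_of_equanimity = total_utility(10_000, 0.01)  # 100.0
pure_bliss = total_utility(100, 1.0)                    # 100.0

print(just_north_of_equanimity, pure_bliss)  # a dead heat at these numbers
# Set the near-equanimity valence to 0.02 and the 10k meditators win;
# set it to 0.005 and the 100 blissful people win.
```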


I agree that it becomes harder to do the intuitive calculus in the billions-of-meditators vs. hundreds-of-pure-bliss case. In fact, I suspect our "identifiable victim" intuitions are so strong here, and our intuitive grasp of a billion so blunt, that putting any faith in these intuitions is misguided. Do you have a further framework that gets you to your ordering, or is it simply the result of "imagining" the hundred blissful people vs. the billions of meditators and seeing which "seems" better?


You clearly have scope neglect.

If you say that extreme happiness is better than any amount of mild happiness, then making an extremely happy person 0.001% happier would be better than making billions of 'neutral' people moderately happy. This is outrageously implausible.
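To make the structure of that objection concrete, here is a sketch of the lexical-priority view being criticized; all magnitudes are illustrative, nothing comes from the thread itself:

```python
# Lexical priority: rank outcomes by extreme happiness first, and let
# mild happiness break ties only. All numbers are illustrative.

def is_better(a: tuple, b: tuple) -> bool:
    """Compare (extreme_happiness, mild_happiness) lexicographically."""
    return a > b  # Python tuple comparison is already lexicographic

tiny_bliss_bump = (100.001, 0.0)           # one blissful person, 0.001% happier
help_billions = (100.0, 2_000_000_000.0)   # billions made moderately happy

print(is_better(tiny_bliss_bump, help_billions))  # True -- and it stays
# True no matter how large the second component grows. That's the
# "outrageously implausible" verdict the comment is pointing at.
```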


“... maybe morality can’t be quantified.

More specifically, maybe there’s no single mathematical framework for judging ethical decisions, and the best we can do is appeal to multiple overlapping-but-contradictory models, and live with the ambiguity that creates.”

- I couldn’t agree more!

Awesome piece

May 5, 2023 · Liked by Max Goodbird

Dynomight had a nice post about utilitarianism a while back...

https://dynomight.net/grandma/

My take is that utilitarianism should have our moral values at its core. If for some reason it isn't squaring with our moral intuition, or if someone can "trick" you into some repugnant idea that goes against your moral compass, that probably means the equation and numbers that got you to that repugnant conclusion are wrong, or that it hinges on our inability to understand large numbers.

I agree that the math here, regarding animals, seems to arrive at a different conclusion than most people draw. I think that's really important to note, but maybe it means we can work backwards from our intuition. If I try to guess how bad I think eating each animal product is and work backwards toward sentience, we get that fish are about 0.0001 sentient. That's not too bad: one cow's sentience is equal to 10,000 fish. I think your theory that sentience is closer to logarithmic is a good model, and this just puts fish lower.
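A sketch of that backwards calculation, treating the comment's guesses (cow = 1, fish = 0.0001) as inputs; the badness scores are intuition-derived placeholders, not measurements:

```python
# Work backwards from "how bad does eating this feel?" to implied
# sentience weights, with the cow as the reference animal (weight 1.0).

intuitive_badness = {
    "cow": 1.0,      # reference point
    "fish": 0.0001,  # a gut-level guess, per the comment above
}

fish_per_cow = intuitive_badness["cow"] / intuitive_badness["fish"]
print(f"1 cow ~ {fish_per_cow:,.0f} fish")  # 1 cow ~ 10,000 fish
```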

I don't think this is an issue with utilitarianism; it's an issue with the equations and the numbers.

author

Working backwards is a neat idea. I think it'd be a valuable exercise, but I also think we'd rely too heavily on our ability to empathize. E.g. I find it hard to empathize with octopuses, so I mostly rely on non-intuitive reasons for not eating them.


Would you eat the humans that are plentiful and barely happy to be alive, though?

author

Hah!

A while ago I told a meat-eating friend that I'd happily eat lab-grown meat. He asked if I'd eat lab-grown human and I still don't know how to reply.


Fabulous, lots to think about! I find utilitarianism itself to be both intoxicating and repugnant at the same time, and this does a great job of explaining the foundations of that feeling for me. I desperately want utilitarianism to work, but it requires me to abandon intuition, which is absolutely anathema to me.


As I wrote about in Losing My Religions, I am no longer a utilitarian. But we followed somewhat similar reasoning to form One Step: https://www.onestepforanimals.org/about.html

But with this caveat: https://www.mattball.org/2023/01/more-on-why-not-fish.html


It seems to me that utilitarian ethics, as described here and in the Plato.Stanford link, has the search backwards. Ethics isn’t physics; there are no first principles to derive your happiness utility function from. Shouldn’t we just design a utility function that matches our intuition? Why does everyone seem to take the simplest possible utility function (total happiness = Σhappiness), notice that it’s wrong in certain limits, and then throw out the whole concept?

We want total happiness to increase monotonically when we add happy people, so let’s enforce a monotonicity constraint. We also want to avoid a situation where having a billion barely happy people is better than 10000 very happy people. Enforce this constraint with an asymptotic dependence on number of people, with some cutoff length scales, etc.

After you’ve added all your constraints, you should be left with the utility function that mostly matches your intuition on how morality works. That function could be complicated and multidimensional, but the fact that it exists in your brain means it DOES exist and is mathematically describable.
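A minimal sketch of one function meeting those two constraints; the saturating form n/(n + n0) and the cutoff scale n0 are assumptions chosen only for illustration:

```python
# One function satisfying both constraints: monotone in added happy
# people, but saturating in population size so sheer numbers can't
# dominate. The functional form and the cutoff n0 are arbitrary choices.

def total_utility(n_people: int, happiness_each: float, n0: float = 1_000.0) -> float:
    """Per-person happiness scaled by a saturating population factor.

    The factor n / (n + n0) is strictly increasing in n (monotonicity)
    but approaches 1, so population alone can't swamp per-person welfare.
    """
    return happiness_each * n_people / (n_people + n0)

print(total_utility(1_000_000_000, 0.01))  # ~0.01: a billion barely-happy people
print(total_utility(10_000, 10.0))         # ~9.09: ten thousand very happy people
```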

author

This is what I would call the utilitarian fantasy--that there's some function we could craft which would compute the morality of any given situation.

It's pretty clear that the naive function (aggregate happiness) is bad. But even when we start adding some sophistication (e.g. a function that tries to balance total and average happiness) we still hit scenarios where the whole thing falls apart.
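For instance, here is one such "more sophisticated" blend of total and average happiness (the 50/50 weighting is arbitrary), along with a case where it immediately misbehaves:

```python
# A blend of total and average happiness (weights are arbitrary), plus
# a case where it breaks: one ecstatic person beats twenty quite-happy
# people even though both worlds contain the same total happiness.

def blended_utility(happinesses: list[float], w: float = 0.5) -> float:
    total = sum(happinesses)
    average = total / len(happinesses)
    return w * total + (1 - w) * average

print(blended_utility([100.0]))     # 100.0 -- one ecstatic person
print(blended_utility([5.0] * 20))  # 52.5  -- same total, spread over 20 people
# Grow the population instead and the total term resurrects the
# repugnant conclusion -- every patch opens a new edge case.
```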

You can keep iterating past this--adding more sophistication and special cases to your formula. But continue indefinitely and you end up with a point-by-point enumeration of cases, which pretty much means you're not using math.

My hypothesis is that any framework which tries to compress morality into a single function can at best only approximate moral intuition--and at worst (as is typical) it will lead to perverse conclusions under some conditions.


The problems you’ve pointed out are precisely the sorts of problems that physics deals with, and does so very well! I think the bias in my education is probably showing, but take statistical mechanics as an example of what I mean. The central problem of stat mech is to assign meaningful ensemble quantities to enormous distributions of particles, and compute how those ensemble quantities evolve in response to system stress. Isn’t that what we’d like? Stat mech is famously complicated and “inexact” in certain limits, but these limits are so improbable and the general results so useful that they get called “laws of thermodynamics.” We shouldn’t dismiss the idea of complex ethical functions simply because they’re hard; in fact, shouldn’t we expect them to be complex, like people?

author

Absolutely! To be clear, I'm not saying we should abandon all attempts to mathematize ethics--e.g. I've definitely incorporated lifespan into my internal model based on Ozy's argument.

What I think we need to avoid is the fantasy that there can ever be a *final* model that works in all scenarios.

Here's an even more controversial take: the same applies to physics. Every physical theory we have applies only in limited domains, and breaks down at extreme scales of time/space/temperature/etc. There's a pervasive belief among physicists that we'll eventually find a Grand Unified Theory, a single equation for reality--I think this is fantasy. I'm planning to write about this more in a future post.

May 4, 2023 · edited · Liked by Max Goodbird

Hahaha, that last one is a very hot take indeed. I'm excited to read that post. As a proud accelerator physicist, I hope you are wrong, but I’d need an accelerator the size of the solar system to actually check.


This is my favorite thread! I love how much our quest for learning and knowledge about the universe is driven by fundamental human anxiety, i.e. needing reality to be definable/understandable. But our psychological needs are not actual constraints on reality, which is like...too uncomfortable to think about sometimes haha. We can't change reality, we can only adapt ourselves to it.


The problem of salmon lice has made me swear off farm-raised salmon. I have seen some stomach-turning videos, taken surreptitiously in salmon farms, of fish with hardly anything left of their faces (the lice prefer the fish's head for some reason).
