Constructing an ethical diet is hard. Utilitarian morality isn't the answer.
I argue for the supposedly repugnant utilitarian conclusion here. https://benthams.substack.com/p/utilitarianism-wins-outright-part-336
But I think this is mostly irrelevant to the dispute. Every plausible view will say it's wrong to cause lots of suffering--and worse the more suffering you cause. Some views will say that lower pains like pinpricks are infinitely less bad than torture, so the badness of causing one torture is greater than that of causing any number of pinpricks. If I were deciding whether to break both arms of one person or one arm each of 49 people, the argument you give wouldn't be relevant, and I don't think it's relevant here. In this case, even though fish aren't very smart, it still seems wrong to cause them tons of torture for the sake of minor pleasures.
My response to the Repugnant Conclusion argument is that it's proof that what we feel socially obligated to describe as lives "barely worth living" are in fact highly net-negative lives. To put it another way, an actual marginally net-positive life would be one we'd look at and intuitively go, "yeah, you know, that life seems pretty okay!"--and we wouldn't find a world with huge numbers of such lives repugnant at all.
The problem stems from "lives barely worth living" being misused euphemistically: we instinctively worry that if we correctly call them "very net-negative lives that would have been better never existing," people may falsely accuse us of wanting to commit genocide, of not trying to help such people now that they're here, etc.
“... maybe morality can’t be quantified.
More specifically, maybe there’s no single mathematical framework for judging ethical decisions, and the best we can do is appeal to multiple overlapping-but-contradictory models, and live with the ambiguity that creates.”
- I couldn’t agree more!
Dynomite had a nice post about utilitarianism a while back...
My take is that utilitarianism should, at its core, reflect our moral values. If for some reason it isn't squaring with our moral intuition, or if someone can "trick" you into some repugnant idea that goes against your moral compass, that probably means the equation and numbers that got you to that repugnant conclusion are wrong, or that the argument hinges on our inability to grasp large numbers.
I agree that the math here, regarding animals, seems to arrive at a different conclusion than most people draw. I think that's really important to note, but maybe it means we can work backwards from our intuition. If I try to guess how bad I think eating each animal product is and work backwards towards sentience, we get that fish are about 0.0001 sentient. That's not too bad: one cow's sentience is equal to 10,000 fish's. I think your theory that sentience is closer to logarithmic is a good model, and it just puts fish lower.
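To make that back-of-envelope explicit (the weights below are just the guesses above, not measurements of anything):

```python
# Guessed sentience weights from the comment above (illustrative, not data).
cow_sentience = 1.0
fish_sentience = 0.0001

# On this linear scale, one cow's experience weighs as much as this many fish:
fish_per_cow = cow_sentience / fish_sentience
print(round(fish_per_cow))  # 10000
```

So the "math arrives at a different conclusion" point is really a claim about where the exchange rate sits, and a logarithmic sentience model just shrinks the fish weight further.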
I don't think this is an issue with utilitarianism; it's an issue with the equations and the numbers we plug into them.
Would you eat the humans that are plentiful and barely happy to be alive, though?
Fabulous, lots to think about! I find utilitarianism itself to be both intoxicating and repugnant at the same time, and this does a great job of explaining the foundations of that feeling for me. I desperately want utilitarianism to work, but it requires me to abandon intuition, which is absolutely anathema to me.
As I wrote about in Losing My Religions, I am no longer a utilitarian. But we followed somewhat similar reasoning to form One Step: https://www.onestepforanimals.org/about.html
But with this caveat: https://www.mattball.org/2023/01/more-on-why-not-fish.html
It seems to me that utilitarian ethics as described here, and as described in the Plato.Stanford link, has the search backwards. Ethics isn't physics; there are no first principles to derive your happiness utility function from. Shouldn't we instead design a utility function that matches our intuition? Why does everyone seem to take the simplest possible utility function (total happiness = Σhappiness), notice that it's wrong in certain limits, and then throw out the whole concept?
We want total happiness to increase monotonically when we add happy people, so let's enforce a monotonicity constraint. We also want to avoid a situation where having a billion barely happy people counts as better than having 10,000 very happy people, so enforce that with an asymptotic dependence on the number of people, with some cutoff scales, etc.
After you've added all your constraints, you should be left with a utility function that mostly matches your intuition about how morality works. That function could be complicated and multidimensional, but the fact that it exists in your brain means it DOES exist and is mathematically describable.
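One toy way to satisfy both constraints at once (my own sketch, not something the comment above specifies) is rank-discounted aggregation: sort lives happiest-first and give the k-th life a geometrically decaying weight. Adding any net-positive life strictly increases the total, yet the total is bounded above, so sheer head-count can never swamp a smaller, very happy population.

```python
def rank_discounted_utility(happiness, beta=0.999):
    """Aggregate welfare with geometrically decaying weights by rank.

    Lives are sorted happiest-first; the k-th life contributes
    happiness * beta**k. Adding any net-positive life strictly
    increases the total (monotonicity), but the total stays bounded
    by max(happiness) / (1 - beta), so it avoids the Repugnant
    Conclusion's "billions of barely-happy lives win" limit.
    """
    ranked = sorted(happiness, reverse=True)
    return sum(h * beta ** k for k, h in enumerate(ranked))

# 10,000 very happy people beat a million barely-happy people:
elite = rank_discounted_utility([10.0] * 10_000)      # roughly 9999.5
masses = rank_discounted_utility([0.01] * 1_000_000)  # roughly 10.0
print(elite > masses)  # True

# ...yet adding one more barely-happy person still helps (monotonicity):
print(rank_discounted_utility([10.0] * 100 + [0.01]) >
      rank_discounted_utility([10.0] * 100))  # True
```

The discount factor `beta` and the happiness levels here are arbitrary illustrative choices; the point is only that a function meeting both stated constraints is easy to write down.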
The problem of salmon lice has made me swear off farm-raised salmon. I have seen some stomach-turning videos, taken surreptitiously in salmon farms, of fish with hardly anything left of their faces (the lice prefer the fish's head for some reason).