Utilitarianism

Introduction

In a discussion, some concluded that technology X is ‘more tied to GNOME’ than technology Y because ‘more [GNOME] people are helped by X’, owing to dependencies of Y that might be unacceptable to some people.

This smells like utilitarianism and therefore it’s subject to criticism.

Utilitarianism is probably best described by Jeremy Bentham as:

Ethics at large may be defined, the art of directing men’s actions to the production of the greatest possible quantity of happiness.

— Bentham, Introduction to the Principles of Morals and Legislation

A situational example that, in my opinion, falsifies this:

You are standing near the handle of a railroad switch. Six people are attached to the rails: five on one side of the switch, one on the other. Currently the handle is set in such a way that the five will be killed. A train is coming. There’s no time to get help.

  • Is it immoral to use the handle and kill one person but save five others?
  • Is it immoral not to use the handle and let five people get killed?

The utilitarian chooses the first option, right? He must direct his actions to the production of the greatest possible quantity of happiness.

Body of the discussion

Now imagine that you have to throw a person onto the rails to save the lives of five others. That person would be killed instantly, but the five others would be saved by your sacrificing one.

A true utilitarian would pick the first option in both exercises: he would use the handle, and he would throw a person onto the rails. In both cases he believes the total value of happiness he produces is (+4) + (-1) = (+3), and that in both situations picking the second option means his total value of produced happiness is (-4) + (+1) = (-3). The person who picks the second option is therefore considered immoral by a true utilitarian.
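
To make this bookkeeping explicit, here is a minimal sketch of the naive happiness arithmetic (a purely illustrative Python sketch; the weights are just the numbers used in this post, and whether happiness can be meaningfully summed like this at all is exactly what is in dispute):

    # Naive "happiness arithmetic" for the two thought experiments above.
    # The weights (+4, -1, -4, +1) come straight from the text; the summing
    # itself is the utilitarian assumption being questioned.

    def produced_happiness(*contributions):
        """Sum the individual happiness contributions of an action."""
        return sum(contributions)

    # Option 1: use the handle (or throw the person): five saved, one killed.
    option_act = produced_happiness(+4, -1)         # +3

    # Option 2: do nothing: five killed, one spared.
    option_do_nothing = produced_happiness(-4, +1)  # -3

    # By this bookkeeping, the 'true utilitarian' must pick whichever option
    # scores higher; in both exercises that is option 1.
    assert option_act > option_do_nothing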

For most people, that’s not what they meant the first time. Apparently, ethics doesn’t allow you to always say (+4) + (-1) = (+3) about happiness. I’ll explain.

The essence of the discussion

Psychologically, fewer people will believe that throwing a person onto the rails is morally the right thing to do. When we can depersonalize the act, it becomes easier for our brains to handle such a decision. Ethically and morally, though, the situation is the same. People feel filthy when they need to physically touch a person in a way that will get him killed; a handle makes it easier to kill him.

Let’s get back to the GNOME technology discussion… If you consider pure utilitarianism as most ethical, then you should immediately stop developing for GNOME and start working at Microsoft: writing good Windows software at Microsoft would produce a greater possible quantity of happiness.

Please also consider reading the criticism and defence of utilitarianism on Wikipedia. Wikipedia is not necessarily a good source, but do click on some of the links on the page and you’ll find some reliable information.

Some scientists claim that we have a moral instinct, which is apparently programmed by our genes into our brains. I too believe that genetics probably explain why we have a moral system.

The developer of X built his case as follows: my technology only promotes happiness; the technology doesn’t promote unhappiness.

It was a good attempt, but there are multiple fallacies in his defence.

Firstly, by the same reasoning technology Y doesn’t promote unhappiness either. If this is assumed about X, then neither promotes unhappiness.

Secondly, how does the developer of X know that his technology promotes no unhappiness at all? Y also promotes some unhappiness, and I don’t have to claim that it doesn’t. Assuming that a technology produces no unhappiness at all is a silly assumption.

Thirdly, let’s learn from an example: downplaying the amount of unhappiness is exactly what regimes with control over their media did whenever they carried out military action. The very act of downplaying the amount of unhappiness should give the spectator a reason to question it.

Finally, in my opinion the very act of claiming that ‘X is more tied to GNOME’ will create unhappiness among the supporters of Y, which makes the railroad example applicable anyway.

My conclusion and the reason for writing this

‘More’ and ‘less’ happiness don’t mean much if the two are incommensurable. Valuations like “more tied to GNOME” and “less tied to GNOME” aren’t meaningful to me. That’s because I’m not a utilitarian. I even believe that pure utilitarianism is dangerous for our species.

To conclude, I think we should prevent the GNOME philosophy from being damaged by too much utilitarianism.

13 thoughts on “Utilitarianism”

  1. I think that you’re arguing with a straw man. More generally, utilitarianism says that you attempt to maximize some function, the utility function. If you call this function “happiness”, do you mean happiness over the short term or the long term?

    Furthermore, the moral puzzles you describe are artificial, and they resemble the one the neocons always trotted out to justify torture: there’s a ticking time bomb, and you have a terrorist undergoing questioning who knows where the bomb is, and the only way to get the information is to beat it out of him. So now do you support torture? Perhaps you would use this as an argument against utilitarianism, but the problem is that we must first swallow all of the assumptions. We know, by magic, that the captive is an actual terrorist and not an innocent person, that he knows where the bomb is (unlikely since most terrorist groups use a cell structure so information is minimally shared), and that by torturing him he’ll tell us the truth, rather than making up a credible story that would send us on a hunt in the wrong place.

    In the case of the train platform questions, it’s very unlikely in real life that there really are only two choices (brutally use a live person’s body to stop the train or see five people die); human creativity often provides additional choices.

  2. You immediately compare the situation with torture so my guess is that you are a U.S. citizen :) ?

    Doesn’t really matter.

    The utilitarian would claim that by torturing one individual you can save many from death. Therefore it’s a scenario similar to the one where you physically need to touch a person and throw him onto the rails in order to stop the train from killing five other people.

    So the true utilitarian would not only approve of the torture, he would also claim it unethical if you don’t torture the terrorist to get access to this information.

    The smart psychologist, however, would explain to you that torture might not be the best method for getting at the truth. Fear is never a good driver for getting the truth out of a person. Instead, if you promise to stop the torture when he says something, the tortured individual will almost certainly start saying whatever is needed to make you stop torturing him. Not necessarily the truth.

    Even worse, the person being tortured might even start believing his own lies. Actively believing. A tortured person often starts seeing his own lies as facts. Especially if telling them stopped the torture before. Basic conditioning.

    About the unlikeliness of a situation being limited to only two options, I’ll only say that neither the amount of creativity nor the number of options is relevant to the outcome of the thought experiment.

    This paper explains the scenario example that I gave in more detail:
    http://www.themoralbrain.be/blog/Cushman.pdf

    Although ‘utility’ isn’t the same as ‘happiness’, you can nonetheless replace the word ‘happiness’ with ‘utility’ in most of the places where I used it. It just depends on whether you give quantitative utility a higher priority than qualitative utility. If qualitative utility is prioritized, it’s less easy to replace ‘happiness’ with ‘utility’. If quantitative utility is prioritized, I think you can replace ‘happiness’ with ‘utility’ everywhere and it still conforms to utilitarianism.

  3. Maybe I would pull the switch and get severely wounded in my attempt to rescue that last person, then decide, later that night, that getting killed by throwing myself in front of that other train is my one and only chance of dying like a hero.

    *sigh*, life is full of surprises…

  4. Regarding software and media development, I take the position of materialistic utilitarianism, that is, defining ‘utility’ as stuff that’s of actual value. Usable, stable, awesome, Free software. If you have to piss everybody off to get it, well, that sucks, do it anyways. If you don’t, great!

    To be honest I don’t have a clue why you’re trying to tie normative ethics into GNOME. Let’s try to keep our ethics platform neutral, eh?

  5. Ah, but I actually agree with you about trying to keep our ethics platform neutral.

    My point is that if we are going to evaluate X and Y in terms of ‘more tied to GNOME’ vs. ‘less tied to GNOME’, because [whatever the reason], then I think we are doing it wrong (that is, we’d be picking, for example, a utilitarian ethic as the compass for our moral judgment).

    Which is wrong, and we seem to be in agreement about that?

    > Usable, stable, awesome, Free software.

    Usable, stable, awesome software is not necessarily Free Software
    Free software is not necessarily usable, stable, awesome software.

    > If you have to piss everybody off to get it,
    > well, that sucks, do it anyways.

    This is not utilitarianism. Using Jeremy Bentham’s definition, utilitarianism is the art of directing men’s actions to the production of the greatest possible quantity of happiness. Pissing everybody off sounds counterproductive if the production of the greatest possible quantity of happiness is your moral compass.

  6. Yes, I’m a US citizen who wants to see all who engaged in torture, or ordered that it be done, prosecuted. But I still think you’re confusing arguments that claim to be utilitarian with actual, well-thought-out utilitarian arguments. I think that a utilitarian approach that considers all consequences (as some Native Americans used to say, down to the seventh generation) can yield a moral result. You have to look not just at the immediate result, but also at the long-term consequences. And if you reject this, what would you replace it with? Divine revelation? That wouldn’t be very European of you. Fundamental moral principles? But what are they based on? I’m an atheist, but I look for a bit more than “because I said so” as a justification.

    Coming back to software: looking only at the short term, coding for the Windows platform might look like it has the best payoff. But in the long term, the world benefits more from code that can be built on by others and freely shared.

  7. @Joe: may I recommend scanning the website themoralbrain.be? When Jan Verplaetse’s book “The Moral Instinct” is finally translated from Dutch into English, I’ll recommend that book to you, if you really want a good answer to this. Although I admit that I have a bias towards this ‘meme’, because it’s the last one that influenced my brain.

    The idea is basically that we have five different kinds of morality, which are defined biologically (in our brains, by our genes) and expressed culturally (all cultures share the same moralities, but they all execute them differently). There are also ethics, which Jan Verplaetse explains in the last chapter of his book (ISBN 9789057122811 for the Dutch version; it’s quite a recent work, so you’ll have to wait for the English translation, I’m afraid).

    You can find the papers that his research group published (in English) at themoralbrain.be, though. The book is, in my opinion, a more ‘public-friendly’ account of the material in the research papers, with a chapter about ethics added (a subject I haven’t found much of on that website, to be fair).

    About software: I don’t think it’s either proven or provable that software that is freely shared is better, in either the long or the short term, than software that isn’t freely shared. I also don’t think it really matters a lot to consumers. For consumers, open (patent-free) standards like the ones the IETF makes are much more important (at least in my opinion).

    My reason for free software and open source is, and has always been, a selfish one: it’s the best way to interactively work together with people who are smarter than me, enabling me to learn new things and to become a better software developer. I have a few other reasons, of course, but that one is the most important. The licenses are, in other words, useful tools for achieving, for example, technical goals. But nothing more than that. I don’t feel like an ethically better person because I’m into free and open-source software, not at all. In fact, I think the people who do are disturbing the scene more than anything else. Especially the ones who don’t really write software, but just act as free-software fanboys or religious freedom fighters. I do disagree philosophically with Richard Stallman on many things. That doesn’t mean his licenses aren’t useful.

  8. The problem with doing philosophy by analyzing “kill one person to save five” situations is that they basically never occur. They make for exciting TV and movies (“shoot the hostage!”), but they’re (almost) completely hypothetical.

  9. “If you consider pure utilitarianism as most ethical, then you should immediately stop developing for GNOME and start working at Microsoft: writing good Windows software at Microsoft would produce a greater possible quantity of happiness.”

    As a person with a background in economics, I think utilitarianism does NOT conclude that working at Microsoft produces a greater quantity of happiness than developing for GNOME.

    In general, a lack of competition (i.e. a monopoly) in a market causes big inefficiencies. Healthy competition is critical for maximizing happiness, so the end of GNOME/KDE would result in increased inefficiencies and less happiness.

    Secondly, working on open-source software could lead to new ways of collaboration, development and innovation that are not available in closed-source development models. Thus, a utilitarian would recommend working on GNOME to see whether a more efficient collaboration model could emerge, eventually resulting in increased happiness.

    Anyway, interesting points, interesting discussion.

  10. @uhuu: I somewhat agree, but because you can’t prove that healthy competition is critical for maximizing happiness, you must add “intention” to the algorithm that makes your moral compass point in the most ethical direction.

    How will you prove that healthy competition *will* cause maximum happiness? Hey, I agree. Don’t get me wrong. But me agreeing with you *is not* proof. Everybody in the world agreeing with you might not even be proof. Who says God exists if only one human, a believer, survived a comet impact on our planet? In that case “everybody” believes something. But why would it necessarily be true? And before, not everybody agreed, so it wasn’t necessarily true. What if it wasn’t true and now everybody believes it? Would it suddenly have to become reality? Would God suddenly have to pop into existence?

    Surely you can think that it does, because without healthy competition *you* would *feel* unhappy. But collateral damage, like your non-standard feelings being hurt, doesn’t necessarily mean that there aren’t three other persons who aren’t unhappy about it and who are, in fact, happy about the lack of competition.

    You could go and count them. The problem is that the total number of humans on this earth constantly changes, and that by the time you’ve counted all the opinions I’ll ask you: how are you sure you asked the right questions? That you counted correctly? And so on. So simply counting, which is what ‘democracy’ does, is definitely worth doing, but it’s not always going to yield a perfect answer. Also, who must be counted? Grown men and women? Children too? What about animals? They don’t have feelings and they can’t think? You’d be surprised by recent science on that subject.

    If, for example, I throw a little media attention at the subject and indoctrinate people about how great it would be if Microsoft could create more jobs if we made it impossible to compete with Microsoft (worded differently, of course, so that it still sounds as if Microsoft is a nice and cute company that does all these great things for humanity), are you then still sure that your feeling will be shared by the majority? Your own feeling might even be quite heavily influenced! I know mine probably would be, because I refuse to be naive about myself: I introspect frequently.

    Looking at the reality of media influence in Europe, the U.S., and actually all parts of the world, I’m pretty sure that the feeling of the majority can very easily be skewed to the point that it becomes the exact opposite of your feeling about the subject. Even if the current feeling of the majority is at this moment exactly the same as yours.

    So how did that prove anything philosophically? It didn’t, regrettably.

  11. Rare (and nice) that discourse syndicated on Planet Gnome is elevated to this level. I don’t have any specific response to the points raised. Just wanted to say it’s appreciated.

  12. Hi Philip,

    I don’t know anyone who subscribes to the version of utilitarianism you present. In particular:

    > The utilitarianist would claim that by torturing one individual, you can save many from death.

    No, the utilitarian would see that we’re comparing a society that has many dead people with a society with fewer dead people (assuming the torture works) but where people fear torture happening to them, and know that they are a society that is willing to torture people, and many societies would make the utilitarian decision that engaging in torture here increases net suffering more than it decreases it. Fear and guilt are types of suffering.

    Utilitarianism isn’t an excuse for sloppy thinking. Yes, it says that we should seek to minimize suffering in our decisions, but it doesn’t let us get away with saying “the immediate short-term consequence of is to reduce suffering for some people, therefore it must be the right thing to do” without analyzing the situation further.

    – Chris.

  13. > “the immediate short-term consequence of is to reduce suffering

    I meant to say “the immediate short-term consequence of (x) is”, but my x got swallowed up as an HTML tag. :)

    – Chris.
