CONSEQUENTIALIST ETHICAL THEORIES

 

 

A (PURELY) CONSEQUENTIALIST Ethical Theory is a general normative theory that bases the moral evaluation of acts, rules, institutions, etc. solely on the goodness of their consequences, where the standard of goodness employed is a standard of non-moral goodness.

 

A NON-CONSEQUENTIALIST Ethical Theory is a general normative theory that is not (purely) consequentialist.

 

 

UTILITARIANISM

 

A UTILITARIAN Ethical Theory is a (purely) consequentialist theory according to which the morality of an act depends solely on some relation (specified by the theory) that it has to the maximization of total or average utility (a measure of non-moral goodness). Utilitarians can differ on the definition of utility, giving rise to three varieties of Utilitarian theories.

 

Like the individual hedonist, the hedonistic utilitarian claims that we can define the net hedonic value of a life =df the sum of all pleasures (which have positive hedonic value) and pains (which have negative hedonic value) contained in the life, where it is assumed that pleasures and pains can all be measured on a single scale.

HEDONISTIC UTILITARIANISM: Utility is defined in terms of net hedonic value. Utility of a life =df net hedonic value of the life (e.g., Bentham and Mill [but note that Mill distinguished higher from lower pleasures]).
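As a minimal illustration of this definition, the net hedonic value of a life is just the signed sum of its pleasure and pain episodes. The episode values below are purely hypothetical and assume the single common scale mentioned above:

```python
# Hypothetical hedonic values of the episodes in one life:
# pleasures are positive, pains negative, all on one assumed common scale.
episodes = [+5.0, +2.5, -3.0, +1.0, -0.5]

# Net hedonic value of the life = the sum of all its pleasures and pains.
net_hedonic_value = sum(episodes)
print(net_hedonic_value)  # 5.0
```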

 

PLURALISTIC UTILITARIANISM: Utility is defined in terms of whatever has intrinsic (non-moral) value, not just pleasure and pain--including, for example, knowledge, love, friendship, courage, health, beauty, states of consciousness other than pleasure and pain (e.g., Moore). Utility of a life = the sum of all of these factors produced during the life, again measured on a single scale.

 

PREFERENCE UTILITARIANISM: Utility is defined in terms of the degree to which one's actual (non-moral) preferences are satisfied, whatever those preferences may be (e.g., Harsanyi). Utility of a life =df the degree to which it satisfies the preferences of the person whose life it is, whatever those preferences may be.

 

 

TOTAL UTILITY AND AVERAGE UTILITY

 

1. Of Acts

 

Utilitarians can evaluate the TOTAL or AVERAGE Utility of any possible action as follows:

 

(1) For any possible individual, i, the theory defines, in non-moral terms, the utility to i of each of the various possible alternative lives that i might lead. These utilities are assumed to be representable as numerical quantities, and, at least in theory, to be measurable and to be interpersonally comparable. (For example, in Hedonistic Utilitarianism, the utility of a life is a measure of the amount of happiness, or the balance of pleasure over pain, contained in the life.)

 

(2) It is assumed that, on the basis of (1), for each possible action A and possible individual i affected by A, it is possible to define u_i(A), the utility to i of i's life given that A is performed (which may be positive or negative). Again, u_i(A) is assumed to be a measurable, interpersonally comparable quantity.

 

(3a) The TOTAL UTILITY of an act A is the sum of the utility to each possible individual i affected by the act, given that A is performed--that is, the sum, over all possible individuals i affected by the act A, of u_i(A).

 

(3b) The AVERAGE UTILITY of an act A is the average utility to each possible individual i affected by the act, given that A is performed--that is, the sum, over all possible individuals i affected by the act A, of u_i(A), divided by the total number of individuals affected by the act.
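A short sketch, using purely hypothetical utility figures, shows how (3a) and (3b) would be computed once the per-individual utilities u_i(A) are given:

```python
# Hypothetical per-individual utilities u_i(A) for a single act A:
# each value is the utility to one affected individual i of i's life,
# given that A is performed (assumed measurable and interpersonally comparable).
utilities = {"i1": 10.0, "i2": -2.0, "i3": 4.0}

# (3a) TOTAL UTILITY of A: the sum of u_i(A) over all affected individuals.
total_utility = sum(utilities.values())           # 12.0

# (3b) AVERAGE UTILITY of A: the total divided by the number of individuals affected.
average_utility = total_utility / len(utilities)  # 4.0

print(total_utility, average_utility)
```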

 

 

ACT, RULE, AND SOCIAL PRACTICE UTILITARIANISM

 

 

It is possible to rank acts, rules, and social practices generally on the basis of their (total or average) utility. However, a moral theory is a theory about what one ought to do. We will distinguish three different kinds of Utilitarian moral theory as follows:

 

ACT UTILITARIANISM refers to a family of Utilitarian theories according to which a moral act is one that maximizes (total or average) utility.

 

RULE UTILITARIANISM refers to a family of Utilitarian theories according to which a moral act is one that is prescribed by the rule (or set of rules) that, if generally applied, would maximize (total or average) utility.

 

SOCIAL PRACTICE UTILITARIANISM refers to a family of Utilitarian theories according to which a moral act is one that is prescribed by a social practice (e.g., a rule or system of rules, custom or system of customs, or institution or system of institutions) that, if generally followed or respected, would maximize (total or average) utility.

 

 

ACT vs. RULE UTILITARIANISM

 

1. Act Utilitarianism (e.g., J.J.C. Smart) = When circumstances allow time for deliberation, always apply the AU Rule [AU Rule = Choose an act that maximizes utility].

 

a. All other rules are merely rules of thumb--to be applied when there is no time for deliberation.

 

2. Rule Utilitarianism (e.g., Brandt) = Apply the Ideal Utilitarian System of Rules--that is, the system of rules which, if generally applied, would maximize utility.

 

 

AN APPARENT DILEMMA FOR RULE UTILITARIANS

 

EITHER:

 

1. Rule Utilitarianism "Collapses" into Act Utilitarianism [the Ideal Utilitarian System of Rules is equivalent to (i.e., prescribes the same acts as) the AU Rule].

 

OR:

 

2. Rule Utilitarianism Becomes Rule "Fetishism" [It prescribes adhering to rules when there is no good Utilitarian reason to do so (other than possibly some perverse pleasure that one derives from adhering to the rules)].

 

 

DAVID SHAPIRO'S EXAMPLE

 

Consider the rule: ALWAYS STOP AT A STOP SIGN, and do not proceed until the way is clear.

 

Consider what the rule would be if the AU exception were added to it: ALWAYS STOP AT A STOP SIGN, and do not proceed until the way is clear, UNLESS BY NOT STOPPING YOU WOULD MAXIMIZE UTILITY.

 

Fallible human beings would not satisfy either rule if they applied it, but if they APPLIED (i.e., tried to satisfy) the first rule they would have fewer auto accidents (and produce more utility) than if they APPLIED (i.e., tried to satisfy) the rule with the AU exception.
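A toy calculation, with entirely made-up probabilities and utility figures, illustrates why fallible drivers applying the plain rule might produce more utility than drivers applying the rule with the AU exception:

```python
# Toy model of the stop-sign example; every number here is hypothetical.

# How often not stopping really would maximize utility, and the small gain then.
P_SKIP_WORTHWHILE = 0.05
U_TIME_SAVED = 1.0

# How often a fallible driver, trying to apply the AU exception, misjudges
# the situation and causes an accident, and the large cost when that happens.
P_MISJUDGED_ACCIDENT = 0.001
U_ACCIDENT = -10_000.0

# Rule 1 ("always stop"): applied by fallible drivers, no skips, no misjudgment accidents.
expected_utility_plain_rule = 0.0

# Rule 2 (plain rule + AU exception): drivers try to spot the worthwhile skips,
# gaining a little time but occasionally misjudging and causing an accident.
expected_utility_with_au_exception = (
    P_SKIP_WORTHWHILE * U_TIME_SAVED + P_MISJUDGED_ACCIDENT * U_ACCIDENT
)

print(expected_utility_plain_rule)         # 0.0
print(expected_utility_with_au_exception)  # 0.05 - 10.0 = -9.95
```

On these hypothetical numbers, adding the AU exception lowers expected utility per stop sign, which is the Rule Utilitarian's reason for leaving that exception out.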

 

Further exceptions could be built into the first rule, to make it better from a Rule Utilitarian point of view, but the AU exception would not be one of them! For example:

 

EXCEPT FOR EMERGENCY VEHICLES USING SIRENS AND FLASHING LIGHTS WHILE RESPONDING TO AN EMERGENCY CALL, always stop at a stop sign, and do not proceed until the way is clear.

 

 

SAMPLE PROMISE-KEEPING RULES

 

1. ALWAYS Keep Your Promises, if it is Physically Within Your Power to do so. [NO EXCEPTIONS].

 

2. Keep Your Promises, Except When You Believe That to do so Would Fail to Maximize Utility [equivalent to the AU RULE].

 

3. Keep Your Promises, Except When Failing to Keep Your Promise Will Only Cause the Promisee [i.e., the Person to Whom You Made the Promise] Losses That are Reimbursable, and You Are Willing to Reimburse the Promisee for All Losses That She Can Show to Have Reasonably Resulted from Your Failing to Keep Your Promise. [may require an impartial Judge to adjudicate disputes]

 

 

SOME (POTENTIAL) PARADOXES FOR HUMAN BEINGS

 

 

PARADOX OF ACT UTILITARIANISM: For human beings, everyone's attempting to maximize overall happiness (utility) may not maximize overall happiness (utility). (There might be a different set of rules that, if generally APPLIED by humans, would produce greater overall happiness.)

 

 

PARADOX OF ALTRUISM: For human beings, everyone's attempting to maximize the happiness of others might not maximize overall happiness. (There might be greater overall happiness if people pursue a mixture of egoistic and altruistic goals and desires.)

 

 

DOES ANY FORM OF UTILITARIANISM PROVIDE A SUFFICIENT CONDITION FOR MORAL WRONGNESS?

 

Three proposed sufficient conditions for moral wrongness (one for each type of utilitarianism):

 

(1) AU: A does not maximize overall utility from among the available acts → A is wrong.

 

-MOU → W   [MOU: A maximizes overall utility; W: A is wrong]

 

Is there a counterexample to this claim of implication: Is it possible for there to be an act A such that -MOU & -W?

 

(2) RU: There is a Rule Utilitarian ideal system of rules RUISR that, if generally applied by human beings, would maximize utility, and RUISR requires doing something other than A → A is wrong.

 

-[Permitted by RUISR] → W

 

Is there a counterexample to this claim of implication: Is it possible for there to be an act A such that -[Permitted by RUISR] & -W?

 

(3) SPU: There is an ideal Utilitarian system of social practices IUSSP that, if generally followed by human beings, would maximize utility, and doing A conflicts with IUSSP → A is wrong.

 

-[Permitted by IUSSP] → W

 

Is there a counterexample to this claim of implication: Is it possible for there to be an act A such that -[Permitted by IUSSP] & -W?
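Each of the three questions above has the same logical form: a proposed sufficient condition C for wrongness claims C(A) → W(A), and a counterexample is an act for which the condition holds but the act is not wrong. A minimal sketch of that shared schema, with hypothetical names:

```python
# Schema shared by (1)-(3): the claim "C(A) -> W(A)" is refuted by any act A
# for which the condition holds and the act is nonetheless not wrong: C(A) & -W(A).
def is_counterexample(condition_holds: bool, act_is_wrong: bool) -> bool:
    # True exactly when act A refutes the proposed sufficient condition.
    return condition_holds and not act_is_wrong
```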

 

 

POTENTIAL PROBLEMS FOR UTILITARIANISM

 

1. Problems in Measuring Goodness and Comparing Utilities (A Technical Problem)

 

2. The First Problem of Requiring Too Much: Supererogatory Acts

 

3. The Second Problem of Requiring Too Much: Too Much Impartiality

 

4. The Third Problem of Requiring Too Much: Too Much Sacrifice of Individual Autonomy

 

5. The First Problem of Permitting Too Much: Punishing the Innocent. Contrast Nozick's conception of morality as side constraints.

 

6. The Second Problem of Permitting Too Much: The Distribution Problem