Strong rule utilitarianism (SRU) gives a utilitarian account of the claim that moral rules should be obeyed at all places and times. SRU does not collapse into act utilitarianism as weak rule utilitarianism does, but it shares weaknesses with similarly absolutist moral stances (notably, deontological ones).
[58] He argues that one of the main reasons for introducing rule utilitarianism was to do justice to the general rules that people need for moral education and character development, and he proposes that "a difference between act-utilitarianism and rule-utilitarianism can be introduced by limiting the specificity of the rules, i.e., by ...
Two-level utilitarianism is virtually a synthesis of the opposing doctrines of act utilitarianism and rule utilitarianism. Act utilitarianism states that in all cases the morally right action is the one which produces the most well-being, whereas rule utilitarianism states that the morally right action is the one that is in accordance with a ...
Range voting (also called score voting or utilitarian voting) implements the relative-utilitarian rule by letting voters explicitly assign a utility to each alternative on a common normalized scale. Implicit utilitarian voting tries to approximate the utilitarian rule while letting the voters express only ordinal rankings over candidates.
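As a minimal sketch of the rule described above (the ballots and alternative names here are invented for illustration), range voting sums each voter's normalized scores per alternative and elects the alternative with the greatest total:

```python
# Range (score) voting sketch: each voter scores each alternative on a
# common 0-1 scale; the utilitarian rule elects the alternative whose
# total score is highest. Ballots below are hypothetical.

def range_voting_winner(ballots):
    """ballots: list of dicts mapping alternative -> score in [0, 1]."""
    totals = {}
    for ballot in ballots:
        for alternative, score in ballot.items():
            totals[alternative] = totals.get(alternative, 0.0) + score
    # Pick the alternative with the highest total (summed) utility.
    return max(totals, key=totals.get)

ballots = [
    {"A": 1.0, "B": 0.4, "C": 0.0},
    {"A": 0.0, "B": 0.9, "C": 1.0},
    {"A": 0.6, "B": 1.0, "C": 0.2},
]
print(range_voting_winner(ballots))  # B (total 2.3 vs A's 1.6 and C's 1.2)
```

Because voters report cardinal scores rather than only rankings, the rule can be computed by direct summation; implicit utilitarian voting must instead infer or approximate these totals from ordinal ballots.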
This is an incomplete list of advocates of utilitarianism and/or consequentialism.
The hazards of average utilitarianism are potentially avoided if it is applied more pragmatically. For instance, the practical application of rule utilitarianism (or of two-level utilitarianism) may temper the aforementioned undesirable conclusions. That is, actually practicing a rule that we must "kill anyone who is less ...
'Lexical threshold' negative utilitarianism says that there is some disutility, for instance some extreme suffering, such that no positive utility can counterbalance it. [24] 'Consent-based' negative utilitarianism is a specification of lexical threshold negative utilitarianism, which specifies where the threshold should be located.
For example, Rawls' maximin considers a group's utility to be the same as the utility of the member who is worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin, because as soon as a monster has received enough utility to no longer be the worst-off in the group, there's no need to accommodate it.
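The contrast described above can be sketched numerically (the utility figures and member names below are invented for illustration): under total utilitarianism a "utility monster" dominates the sum, while under maximin the group's welfare equals that of its worst-off member, so the monster stops mattering once it is no longer worst off:

```python
# Hypothetical utilities for three ordinary members and a "utility monster".
utilities = {"alice": 5, "bob": 7, "carol": 6, "monster": 1000}

# Total utilitarianism: group welfare is the sum, so the monster dominates it.
total_welfare = sum(utilities.values())

# Rawls' maximin: group welfare is the worst-off member's utility, so
# accommodating the monster beyond that point changes nothing.
maximin_welfare = min(utilities.values())

print(total_welfare)    # 1018 -- driven almost entirely by the monster
print(maximin_welfare)  # 5 -- set by alice, the worst-off member
```

Under maximin, transferring further resources to the monster leaves the group score at 5; only improving the worst-off member's lot raises it, which is why the "happy" utility monster is ineffective against this rule.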