Consequentialist theories are often exciting and tempting, as they give us a real chance at having a universalizable theory of morality and justice. As a moral theory, one can look at the different flavors of consequentialism and evaluate them against a range of features of plausible moral theories, noting whether each of those features is accounted for by each theory.
In this paper, I propose a number of features that we expect to find in plausible moral theories. I discuss these features in general, mostly appealing to intuition to justify why we expect them. The features, roughly, are: the existence of permissible, morally good acts that go beyond what is morally required; a reasonable account of the blameworthiness of moral agents; and finally, the moral impermissibility, on certain occasions, of compelling others to do the right thing.
In particular, I will spend some time examining objective act consequentialism and expected value (subjective) act consequentialism, and I will discuss a number of remedies, including scalar utilitarianism, satisficing consequentialism, and progressive consequentialism, in an effort to address some of these worries. Different moral theories will prove appropriate for addressing different worries.
The moral ought: permissibility and requirements
First, we expect a moral theory to distinguish between what one ought to do and what one may do. We expect a range of possible actions to be morally required, while others are morally good and have moral value beyond what is required. Giving money to a particular charity, for instance, is seen as morally good but not morally required. Certain personal sacrifices, such as undergoing pain (say, taking a bullet) on behalf of someone else, are also seen as morally good but not necessarily morally required.
In hopes that (some version of) consequentialism is a plausible moral system, we would like to observe that it allows us to view both of these acts as morally good without their being the only permissible (and thus required) actions in the available set.
Plausible moral systems still include impermissible actions, so an interpretation of consequentialism (see scalar utilitarianism, discussed later) where all actions are permissible but good in varying degrees, fails to capture our moral intuitions. Inflicting harm on others is impermissible; failing to save a nearby drowning child is reprehensible; stopping to give a hungry homeless family a gold star sticker (which will make the child infinitesimally happier) might be seen as wrong if you could have given them food or money without expending more effort.
We often also expect a corresponding feature for morally bad and morally impermissible actions; given a range of possible actions, some might be morally impermissible, while others could be not morally ideal yet still permissible. For the purposes of our explorations, I will not concern myself with this feature, as I am more interested in virtue than dis-virtue. It does not seem unreasonable for someone to extend the consequentialist theories that address morally good and virtuous, yet not required, actions to also address the opposite problem.
Act consequentialism, as commonly stated, says that an act is permissible if and only if no other available act has a better [expected] value, according to a certain axiology. Act consequentialism thus compels the deciding agent to take the action that maximizes moral goodness. A moral agent is not permitted to take an action that produces a suboptimal [expected] value of moral goodness. This raises the common demandingness objection pressed by opponents of utilitarianism.
One defense of act consequentialism as stated is to say that our intuition about morally good versus morally required actions only applies when we have epistemic concerns with following act consequentialism. For instance, one could say that plain act consequentialism makes a lot of sense when dealing with omniscient agents in cases of perfect information; how could it possibly be morally permissible to condemn the world to be worse off than it could be?
Yet, our intuition about giving to charity is that it is not required, even though we can be very sure (at least, for some charities) that we condemn the world to be worse off by not contributing.
Looking back at the intuitive moral feature we are investigating, it is important to come up with a nuanced explanation of where and how plain act consequentialism falls short. For instance, act consequentialism is still expected to yield a set of permissible actions, not a single permissible action, in cases where available actions yield equal or incomparable results. Thus, it is incorrect to state that act consequentialism is deficient by only permitting a single action for a given situation.
Rather, our criticism can be put forward in one of two (equivalent) ways. Either we worry that act and rule consequentialism overlook sufficiently good actions, counting them as impermissible only because better actions exist. Or we worry that we expect to see, among the permissible available actions in a given situation, a spectrum in the moral goodness of their consequences; yet consequentialism suggests that, if such a range were observed, then only the actions with the maximal value would have been permissible. Rule consequentialism similarly fails to account for such a range or spectrum in the moral goodness of the consequences of rules (e.g. the rule of giving to charity).
Both of these objections are very closely related to a common objection to act consequentialism: the demandingness objection.
The latter statement of the objection gives us a clue about one possible remedy: we are looking for a moral system that could produce a range of the moral value among permissible available actions. To that end, it is not unreasonable to look at the axiology as a means to quantify moral value.
In doing so, we are separating out the two main ideas of act consequentialism: that ‘the rightness of an act depends solely on its consequences’, and that ‘the rightness of an act depends on its having the best consequences’ (Slote, 1984, p. 140), accepting the first and discounting the second.
One approach to this is that of Alastair Norcross, who argues in The Scalar Approach to Utilitarianism that the rightness and wrongness of an action ought to be a “matter of degree” in accordance with the axiology (Norcross, 2006, p. 217). The problem with this approach is that, while it succeeds in assigning moral goodness and badness to actions and ranking them neatly on a spectrum of moral value, it takes consequentialism out of the business of deciding the permissibility and impermissibility of actions.
If the scalar approach entails that every action is either good in varying degrees or reprehensible in varying degrees (in comparison with the maximally good action), then it loses the ability to praise some actions while condemning others. A moral theory where everything is slightly praiseworthy and slightly reprehensible is pragmatic, and as such appealing. Yet such a theory might not be very useful, and certainly would not reflect common intuitions.
Scalar utilitarianism as a concept is still compatible with the introduction of a test for permissibility of actions. Norcross, however, argues against attempts to draw an ‘all-or-nothing line’ between right and wrong by stating that choosing a point on the scale (say, a percentile of good actions out of the possible actions) would be arbitrary. Unlike Slote, who was among the first to argue for separating rightness and wrongness from permissibility (Slote, 1984, p. 140), Norcross attempts to argue against any form of satisficing consequentialism (Norcross, 2006, p. 221).
Indeed, it does seem rather arbitrary to have a cutoff for what is a “good enough” action in scalar utilitarianism. Yet moving away from scalar utilitarianism into “vector” utilitarianism, where the moral goodness of an action is compared with another criterion (say, the current state of affairs, or the consequences of inaction), can provide a convincing non-arbitrary cutoff between permissible and impermissible actions. Such a type of consequentialism, however, strays away from scalar consequentialism, and is closer to other forms of consequentialism, such as progressive consequentialism.
It is still tempting, however, to repurpose the axiology to rank the goodness (moral value) of actions and their consequences, instead of their permissibility. The axiology (if it provides transitivity and reverse transitivity) gives us an ordinal ranking of goodness, one which we intuitively expect to observe among permissible actions. In that sense alone, scalar consequentialism is appealing.
Satisficing consequentialism, initially proposed by Michael Slote in 1984, is such a system. Satisficing consequentialism is built on Slote’s first claim that the ‘rightness of an action depends solely on its consequences’, while finding another criterion for permissibility. Namely, actions whose consequences are “good enough” ought to be permissible.
Before going into a number of possible criteria for permissibility, an example from common sense morality is in order, to show clear cases of intuitive satisficing. One of Slote’s many examples goes as follows:
Consider a manager of a resort hotel who discovers […] a car has broken down right outside its premises. In the car are a poor family of four who haven’t the money to rent a cabin or buy a meal at the hotel, but the manager offers them a cabin gratis, […]. In acting thus benevolently, however, she doesn’t go through the complete list of all the empty cabins in order to put them in the best cabin available. She simply goes through the list of cabins till she finds a cabin in good repair that is large enough to suit the family. (Slote, 1984, p. 149)
In this case, it is clear that the decision of the hotel manager is neither maximal nor optimal according to most common axiologies, yet common sense morality agrees that it is good enough.
Many forms of satisficing consequentialism exist; we will explore a few of them. Slote alludes to early editions of Bentham’s An Introduction to the Principles of Morals and Legislation to argue that Bentham initially advocated, or at least alluded to, a form of satisficing consequentialism in which an action is right if it increases net pleasures or decreases net pains (Slote, 1984, pp. 153-4). A more generalized statement of this form of satisficing consequentialism is: given a particular axiology, an available action is permissible if and only if its consequences constitute a Pareto improvement.
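This Pareto-improvement criterion can be sketched as a simple test. The per-person representation of the axiology and the welfare numbers below are my own illustrative assumptions, not anything from Bentham or Slote:

```python
# Toy sketch of a Pareto-style satisficing test: an action is permissible
# iff its consequences leave no one worse off and at least one person
# better off, relative to the status quo.

def is_pareto_improvement(before, after):
    """before/after: lists of per-person welfare values under some axiology."""
    no_one_worse = all(b <= a for b, a in zip(before, after))
    someone_better = any(a > b for b, a in zip(before, after))
    return no_one_worse and someone_better

status_quo = [5, 3, 2]
print(is_pareto_improvement(status_quo, [5, 4, 2]))  # True: one person gains, no one loses
print(is_pareto_improvement(status_quo, [9, 9, 1]))  # False: a large net gain, but someone loses
```

Note that on this criterion even a tiny improvement counts as permissible, which is exactly the feature Slote objects to below.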
Slote finds this form of satisficing consequentialism unsatisfying, as it entails that an act is good enough for improving the state of affairs ever so slightly, even if a significantly better option exists. In expressing this concern, Slote is inviting Norcross and others to make the arbitrary cut-off argument.
Another concern regarding Pareto-style satisficing consequentialism arises when all available actions (including inaction) reduce the value of the state of affairs according to a given axiology. The moral agent, then, appears unable to perform a right action. This seems to go against common-sense morality. A consideration of all available actions, when deciding on what is good enough, is therefore in order.
Slote suggests defining the enoughness of a given action as some mathematical function (e.g. a percentage) of the moral value of that action and the moral value of the optimal action, stopping short of actually defining an appropriate criterion for permissibility (Slote, 1984, p. 156). In Beyond Optimizing, Slote suggests that enoughness might depend not on ‘purely formal or mathematical considerations’, but possibly on an account of ‘basic human satisfactions and general human needs’ (Slote, 1989, p. 154). In the absence of a particular non-arbitrary criterion, Slote falls short of presenting a compelling argument for any one form of satisficing consequentialism, but succeeds in presenting a case for the plausibility of satisficing theories in general.
A type of consequentialism which possesses the satisficing property while also addressing Norcross’s arbitrariness concerns is progressive consequentialism. Progressive consequentialism roughly states that we morally ought to bring about consequences that are better (or at least no worse), given an axiology, than the consequences produced if we do nothing in a given situation (Jamieson & Elliot, 2009).
Progressive consequentialism is appealing for a number of reasons: first, it alleviates the common demandingness objection to act consequentialism; it provides a satisficing criterion that is not ad hoc or arbitrary like the type of criterion suggested by Slote; and finally, it provides a natural criterion that, unlike act consequentialism, is not “static and unworldly” (Jamieson & Elliot, 2009).
In these ways, progressive consequentialism is very close to common-sense morality, allows us to separate moral obligations from morally good actions, and allows for a ranking of the goodness of an agent’s available actions within the permissible set.
Still, a few criticisms can be lodged against progressive consequentialism. I will discuss two relevant criticisms. First is Slote’s view that the existence of a significantly better available action should in fact affect the cutoff for permissible actions. In that sense, progressive consequentialism clearly fails to accommodate this view. Jamieson and Elliot believe this is acceptable, since the existence of a sufficiently better available action could sometimes trigger the demandingness worry of many critics of consequentialism.
The second worry is perhaps more relevant. Given two available actions, both requiring the same effort, whose consequences have moral values v1 and v2 respectively, with v1 greater than v2 and both better than inaction, progressive consequentialism seems to indicate that either action is right. Jamieson and Elliot address this concern by stating that progressive consequentialism should then be extended as follows:
Progressive Consequentialism (PC*): an action is the right action to do if it has consequences that, given an axiology, bring about a world that is better than (or no worse than) the consequences of inaction, and that bring about the maximal goodness amongst all other available actions requiring the same amount of effort.
Setting aside epistemic concerns with finding the maximal action given a certain amount of effort (a concern which also applies to evaluating the goodness of inaction and of any particular action), we have now arrived at a statement of consequentialism that fully addresses the concern about the over-restrictiveness of act consequentialism’s moral ought.
PC* correctly predicts the intuition in Slote’s resort manager example. By finding the poor family a place to stay, the manager improves the state of affairs over one in which she did nothing. The manager could indeed find a better room, but doing so requires going through the list of all available rooms, which in turn requires more effort. Therefore the manager’s action was right.
Blameworthiness: Epistemic concerns and Agential responsibility
Another feature of plausible moral systems is that those who intend to act in a morally right way, and put sufficient effort into doing so, will often achieve their intended goal. A concern about how this applies to consequentialism can be stated as follows: does it seem right that a person can live her whole life not knowing whether her actions were morally permissible? Does it seem right that a person can inadvertently do something morally reprehensible? Can a person be blamed for their morally reprehensible actions?
It is important to point out, here, that blame and blamelessness are meant to be understood not only as social properties and interactions, but rather as moral terms. An agent S is blamed for an action A if and only if S committing A is morally reprehensible. Inversely, if A is ‘wrong’ as a matter-of-fact (found to have morally bad outcomes in retrospect), S is blameless if it is not morally reprehensible for S to have done A given S’s epistemic position and cognitive effort.
With objective act consequentialism, it seems that everyone will be either always or never morally responsible for their actions. According to the first view, one should always blame the person who inadvertently caused ‘bad’ outcomes to happen.
The alternative view is this: since evaluating the moral goodness of states of affairs is intractable, then one can never blame the person who caused ‘bad’ outcomes to happen, or simply put, objective act consequentialism is not in the business of assigning and quantifying blameworthiness of agents and their actions, only the permissibility and impermissibility of actions given their states of affairs.
This view is also unpleasant because it disregards “virtue” as a property; just as no actions are blameworthy, no actions are commendable and praiseworthy. We therefore cannot commend the virtuous person on her actions. Common sense morality seems to account for the idea that some individuals are more ‘moral’ than others, and more commendable or less blameworthy than others.
It is important to point out an implicit criticism of consequentialism here: objective act consequentialism suggests only a criterion for the rightness and wrongness of an action, not a decision-making procedure (Bales, 1971). This distinction is important: it reminds us that act consequentialism removes itself entirely from the business of blameworthiness; it is only a criterion. In this sense, act consequentialism is valuable and should be defended: if an agent S does A based on her best effort to improve the world, and A worsens the world, it is very useful to be able to say that “A is wrong in retrospect” or “A is wrong as a matter of fact”, without saying that S’s doing A was wrong.
If improving the world is what we seek, in accordance with the consequentialist criterion, then surely our reflections and evaluations should include some sort of appeal for act consequentialism.
Yet, by failing to provide a decision procedure to follow, has act consequentialism strayed from being a moral system? I argue that assigning blameworthiness is indeed a feature of good moral systems. I argue, therefore, that the correct moral system is some form of subjective consequentialism: a system capable of assigning blameworthiness, epistemically tractable, and equipped with a decision-making procedure.
If we then act as this moral system suggests we act, then we are never morally wrong or blameworthy, and our committing of certain actions should not be condemned. Yet, our actions could still prove to be, on occasion, wrong in terms of act consequentialism. Given that improving the state of affairs is still the underlying goal of subjective consequentialism, we should still say that such action is wrong in retrospect, or wrong as a matter of fact, but that a subject committing the action was not wrong at the time.
Before moving forward to specify how subjective consequentialism is appropriate, we should discuss a number of epistemic facts and concerns about moral goodness and consequentialism, and clarify when each concern is relevant.
- It is difficult to determine the set of possible actions we are faced with.
  - This is a real problem for act consequentialism, as we are then unable to verify which actions are the optimal ones.
  - This is also a problem for scalar utilitarianism, since we are then unable to determine the moral rightness and wrongness of an action relative to the best outcome; we do not know what the best outcome is.
  - This is also a problem for satisficing consequentialism, since we are then unable to compute the enoughness function, given that we do not know what the best (and worst) outcome is.
  - This is not a problem for progressive consequentialism, since all we need to know is the value of the consequences of inaction, which we compare to the values of the consequences of each possible action we are aware of.
- Determining the “value” or “goodness” of even a single state of affairs is intractable, as we need to measure the consequences by four-dimensionally extending the state of affairs and computing the total goodness according to the axiology.
  - It remains intractable, for the same reason, to determine even just the relative value of two states of affairs.
A common response to epistemic concerns is suggesting the use of Subjective Act Consequentialism. This form of consequentialism roughly states that: an act is permissible if and only if no other available act has a better expected value, according to a certain axiology.
At the risk of being pedantic, I shall take the term expected value as defined in mathematics, and contrast it with estimated value. I shall argue—if only for the sake of precision and clarity—that the correct meaning of subjective consequentialism is one that uses estimated value, not expected value.
It is not at all clear that, by using expected values instead of real values, we avoid the epistemic problems of intractability. In fact, it is very likely that calculating the expected value of an outcome given an action is as intractable as calculating the value of that outcome.
Let us first consider objective expected values. Assuming a non-deterministic quantum universe, where each event may or may not happen with a certain probability, the expected value of the world given action A can be expressed as

E[V|A] = Σi P[Vi|A] · Vi,

where the Vi are the possible values of the world. Taking the probabilities in this sum to be real, absolute, and objective values, an agent S could very well be wrong about his calculation of the expected value of the outcome given A. In fact, finding the expected value is intractable, as we need to know the set of possible values of the world Vi and the exact probability of each. The set of possible values is exponential in the remaining age of the universe, since at each moment the universe could fork into one of two possible universes.
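The intractability claim can be made concrete with a toy model. The binary-forking world and the value function below are my own simplifying assumptions:

```python
# Toy illustration: computing an objective expected value by enumerating
# every branch of a binary-forking world. With n forking moments there
# are 2**n terminal histories, so exact enumeration is exponential in n.
import itertools

def expected_value(n_forks, p, value_of):
    """E[V] = sum over all 2**n histories of P(history) * V(history)."""
    total = 0.0
    for history in itertools.product([0, 1], repeat=n_forks):
        prob = 1.0
        for branch in history:
            prob *= p if branch == 1 else (1 - p)
        total += prob * value_of(history)
    return total

# Value of a history = number of 'good' forks; analytically E[V] = n * p.
ev = expected_value(10, 0.3, sum)
print(round(ev, 6))  # 3.0 — but we summed 2**10 = 1024 terms to get it
```

Doubling the number of forking moments doubles nothing in the answer’s difficulty analytically, but doubles the exponent in the enumeration: a sketch of why computing E[V|A] term by term is no easier than computing V itself.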
Using “objective expected value” instead of “value” therefore does not address any epistemic or tractability concerns. Using objective expected values only addresses the possible non-determinism of the universe on a quantum level.
Philosophers are often clear that they advocate the use of subjective expected value, not objective expected value: a value that is subjective given one’s epistemic position in the world. Subjective expected values are calculated similarly to objective expected values, except that the probabilities used are subjective probabilities given an individual’s epistemic position. These values are calculated in a specific way using subjective probabilities and decision trees, as discussed in (Hammond, 1988), for instance.
Without much specificity, a subjective expected value, given one’s epistemic position, calculated through subjective probabilities (and possibly decision trees), is still a real value that a person could be right or wrong about. For instance, given a woman’s epistemic position, we could say that she has access to everything she needs to arrive at a probability of 0.25 that her donation to a particular charity will be useful; one could assume she has access to the internet where she could look up charity rankings, has a rough idea of how significant her contribution is, and has read a certain amount about the issue the charity is working to address.
Yet the woman could be bad at math and come up with a value of 0.2; she could neglect to use updated rankings and rely on her knowledge from previous years; or she could otherwise miscalculate the subjective expected value in her decision tree. In doing so, she is said to be wrong about the subjective expected value, even given her epistemic position.
Just as an agent might be wrong about the action producing a maximal V, an agent could be wrong about the action producing a maximal E_S[V]. An expected value interpretation of subjective act consequentialism is therefore not sufficient to quell blame and blamelessness concerns; we would still be either always blamed or always blameless for choosing an action which does not maximize the expected outcome.
It is therefore imperative to deal with estimated value instead of expected value when discussing subjective consequentialism. That is to say, an agent S is obliged to maximize v_i = EST_S[V|A_i], the value of the world given action A_i as estimated by S. Maximizing the estimated value is a good choice because an estimated value is agent-relative (each person estimates values according to their epistemic and cognitive abilities) and subjective.
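The gap between maximizing one’s own estimate and maximizing the true expected value can be illustrated with a toy version of the charity case above; the numbers and the estimation procedure are my own assumptions:

```python
# Sketch: an agent maximizes her *estimated* value EST_S[V|A], which may
# diverge from the true expected value E[V|A]. On the estimated-value
# view she acts rightly and blamelessly; in retrospect her act may still
# be wrong as a matter of fact.

true_expected = {"donate": 10.0, "keep": 4.0}

def agents_estimate(action):
    # Her fallible estimation procedure: she underrates the charity
    # (say, by relying on outdated rankings).
    return {"donate": 3.0, "keep": 4.0}[action]

chosen = max(true_expected, key=agents_estimate)         # what she does
actually_best = max(true_expected, key=true_expected.get)

print(chosen)        # 'keep' — right given her estimates, so blameless
print(actually_best) # 'donate' — so 'keep' is wrong in retrospect
```

The two maximizations come apart precisely when the estimate is wrong, which is where the estimated-value view assigns blamelessness while the retrospective verdict remains negative.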
Railton defines subjective consequentialism as such in Alienation, Consequentialism, and the Demands of Morality, calling it the view that “one should attempt to determine which act […] would most promote the good then try to act accordingly” (Railton, 1984, p. 152). Insofar as this definition is concerned, it is clear that using subjective probabilities and decision trees is not at all suggested.
In this case, we can argue when S is wrong about which action was optimal, and claim that S is retrospectively wrong, or wrong as a matter-of-fact, yet still morally right (S is not blameworthy). This, I believe, is a good feature that we expect in plausible moral systems.
One corollary of subjective consequentialism is an emphasis on nearer and more directly related outcomes. An act’s immediate consequences are weighted more heavily than its later, indirect consequences, and extreme outcomes that arise from interactions with other distant events are no longer accounted for. In other words, factors beyond the knowledge and awareness of an agent no longer affect the moral rightness or wrongness of that agent’s action. Intuitively, this appears to be a good outcome of our approach.
Agential involvement is interesting and relevant here, as we could say that we are more likely to account for outcomes that we are more agentially involved with than outcomes we are less agentially involved with.
Another corollary of subjective consequentialism as I described it is more concerning: are irresponsible individuals who estimate outcomes more rashly and with less care free from blame in this model?
It is easy, and probably prudent, to assert that the virtue of a person or of someone’s action includes, as a multiplicative factor, the amount of care or effort with which they estimate each possible outcome they evaluate. Thus, for, say, progressive consequentialism with subjective estimated value, we consider the care and effort with which the outcome of inaction and of each considered action is estimated.
According to this, an action that significantly worsens the world (say, compared to inaction, if we are discussing progressive consequentialism) made by a rash but well-intentioned agent is still blameless, albeit not virtuous.
At this point, one might consider if it is possible to find a way in which such rash actions are actually held to be morally reprehensible. Using only the amount of care and effort at which the agent estimates the outcomes, it seems that any hard line distinguishing morally permissible and impermissible amounts of effort is ad hoc and arbitrary, much in the same way Norcross argued against hard lines for rightness and wrongness of actions in Scalar Utilitarianism.
This observation should not be discouraging—it seems acceptable that rash actions with harmful outcomes are not morally reprehensible, instead only unvirtuous.
Even with such a belief, we still have the opportunity to condemn rash, irresponsible individuals. We ought to regard the purposeful decision to be “cognitively irresponsible” as morally reprehensible within this framework. This is because even the rashest of considerations should yield that being more careful would better the world, and being less careful would worsen it.
A few concerns remain, such as the concern of double counting. When should we view rash agents as morally reprehensible? Does every rash action entail a tacit decision to continue to be “cognitively irresponsible”? Should we only count conscious decisions to be rash as reprehensible? Are we supposed to reward or punish agents who question themselves and their thought processes, and decide to continue to pay little attention to details?
Considerations on Compulsion
Yet another feature of plausible moral systems is this: if I know that action A is right and action B is wrong in a given situation, I do not necessarily have enough reason to compel another agent to perform A rather than B. Another way to state this is to consider paternalism: common sense morality seems to dictate that, while paternalism might sometimes be the right course of action, it is not always so. The belief that a subject is wrong in doing B and ought to do A instead is therefore a necessary but not a sufficient condition for compelling the subject to do A.
Objective act consequentialism seems to suggest that it is right to force others to do the actions that bring about the best outcomes. For a certain statement of subjective consequentialism, however, we could arrive at the answer that it is sometimes appropriate to compel others, and sometimes inappropriate to do so.
Roughly, subjective consequentialism dictates that we should compel others to act in a certain way if we estimate that it is good to do so. Considering that others are often in a better epistemic and cognitive position to estimate the outcomes and ramifications of their own actions, a prudent and virtuous estimation procedure on the part of the subjective consequentialist should, in most cases, weigh those outcomes against being paternalistic.
Given this view, we predict that subjective consequentialism will argue for compulsion of others: first, in the case of the rash agent who fails to consider the epistemic and cognitive position of other agents; second, if an agent sees good reason to suspect that others are at a significant relative disadvantage in their own epistemic and cognitive position; and third, if an agent sees good reason to suspect that others are deliberately performing the wrong action, and the agent is in an epistemic position to know the actually correct action.
I have already addressed the first case when discussing subjective consequentialism in the previous section. This concern might raise or emphasize fears that we get disproportionately bad outcomes from rash behaviors, while the rash behavior itself seems to mitigate the reprehensibility of those outcomes. I will address these concerns only insofar as claiming that the decision to be rash in one’s deliberation should be seen as very reprehensible.
In other terms, subjective consequentialism as stated justifies compelling others when either:
- the compelling agent S1 believes to have significantly more knowledge about the possible outcomes of the actions of S2, the compelled agent, than S2, or
- the compelling agent S1 has good reason to believe that S2, the compelled agent, is acting contrary to the axiology, and can estimate the outcomes of compelling S2 to do one action or another.
The first criterion sounds like a criterion for paternalism, yet it is different in that its deontic value lies in improving the state of affairs rather than acting in the best interest of any particular agent.
According to the first criterion, it is permissible to prevent someone from unknowingly entering a dangerous situation. Yet it is not clearly permissible, and quite possibly impermissible, to prevent someone from knowingly entering a dangerous situation, as we lack their epistemic perspective in making the decision.
The second criterion, on the other hand, is relevant for cases of compulsion such as prisons and law enforcement, restraining individuals from committing suicide (only in certain situations, perhaps), or stopping a gunman.
The extent to which this form of subjective consequentialism prevents the compulsion of others will depend on the estimation method each agent uses to determine possible outcomes and their values. Compared to objective consequentialism, however, subjective consequentialism seems at the very least more restrictive in permitting compulsion, and as such closer to common sense morality.
Another remaining concern is that subjective consequentialism seems to argue against compelling others only on epistemic and cognitive grounds. This might seem very dissatisfying for those who view compulsion as inherently wrong. For such readers, however, subjective consequentialism need not change significantly to address the concern; Kantian-inspired notions that it is bad for rational moral agents to be used as means rather than ends could be embedded into the axiology. For instance, our axiology could specify that states of affairs in which more people, on more occasions, are used as means instead of ends are worse than states of affairs in which fewer people, on fewer occasions, are used as means instead of ends.
Bales, R. E. (1971, July). Act-Utilitarianism: Account of Right-Making Characteristics or Decision-Making Procedure? American Philosophical Quarterly, 8(3), 257-265.
Hammond, P. J. (1988, July). Consequentialist foundations for expected utility. Theory and Decision, 25(1), 25-78.
Jamieson, D., & Elliot, R. (2009). Progressive Consequentialism. Philosophical Perspectives, 23(1), 241-251.
Norcross, A. (2006). The Scalar Approach to Utilitarianism. In H. West, The Blackwell Guide to Mill’s Utilitarianism (pp. 217-232). Oxford: Wiley-Blackwell.
Railton, P. (1984, April). Alienation, Consequentialism, and the Demands of Morality. Philosophy & Public Affairs, 13(2), 134-171.
Slote, M. (1984). Satisficing Consequentialism. Proceedings of the Aristotelian Society (pp. 139-163). Blackwell Publishing.
Slote, M. (1989). Beyond Optimizing: A Study of Rational Choice. Cambridge: Harvard University Press.