Draft of a paper to be published in Current Controversies in Virtue Theory. My controversy is over the question “Can people be virtuous?” My respondent is James Montmarquet. Other contributors to the volume include Heather Battaly, Liezl van Zyl, Jason Baehr, Ernie Sosa, Dan Russell, Christian Miller, Bob Roberts, and Nancy Snow.
Ramsifying virtue theory
Can people be virtuous? This is a hard question, both because of its form and because of its content.
In terms of content, the proposition in question is at once normative and descriptive. Virtue-terms have empirical content. Attributions of virtues figure in the description, prediction, explanation, and control of behavior. If you know that someone is temperate, you can predict with some confidence that he won’t go on a bender this weekend. Someone’s investigating a mysterious phenomenon can be partly explained by (correctly) attributing curiosity to her. Character witnesses are called in trials to help determine how severely a convicted defendant will be punished. Virtue-terms also have normative content. Attributions of virtues are a manifestation of high regard and admiration; they are intrinsically rewarding to their targets; they’re a form of praise. The semantics of purely normative terms is hard enough on its own; the semantics of “thick” terms that have both normative and descriptive content is especially difficult.
Formally, the proposition in question (“people are virtuous”) is a generic, which adds a further wrinkle to its evaluation. It is notoriously difficult to give truth conditions for generics (Leslie 2008). A generic entails its existentially quantified counterpart, but is not entailed by it. For instance, tigers are four-legged, so some tigers are four-legged; but even though some deformed tigers are three-legged, it doesn’t follow that tigers are three-legged. A generic typically is entailed by its universally quantified counterpart, but does not entail it. Furthermore, a generic neither entails nor is entailed by its counterpart “most” statement. Tigers give live birth, but most tigers do not give live birth; after all, only about half of all tigers are female, and not all of them give birth. Most mosquitoes do not carry West Nile virus, but mosquitoes carry West Nile virus. Given the trickiness of generics, it’s helpful to clarify them to the extent possible with more precise non-generic statements.
Moreover, the proposition in question is modally qualified, which redoubles the difficulty of confirming or disconfirming it. What’s being asked is not simply whether people are virtuous, but whether they can be virtuous. It could turn out that even though no one is virtuous, it’s possible for people to become virtuous. This would, however, be extremely surprising. Unlike other unrealized possibilities, virtue is almost universally sought after, so if it isn’t widely actualized despite all that seeking, we have fairly strong evidence that it’s not there to be had.
In this paper, I propose a method for adjudicating the question whether people can be virtuous. This method, if sound, would help to resolve what’s come to be known as the situationist challenge to virtue theory, which over the last few decades has threatened both virtue ethics (Alfano 2013a, Doris 2002, Harman 1999) and virtue epistemology (Alfano 2011, 2013a, Olin & Doris 2014). The method is an application of David Lewis’s (1966, 1970, 1972) development of Frank Ramsey’s (1931) approach to the implicit definition of theoretical terms. The method needs to be tweaked in various ways to handle the difficulties canvassed above, but, when it is, an interesting answer to our question emerges: we face a theoretical tradeoff between, on the one hand, insisting that virtue is a robust property of an individual agent that’s rarely attained and perhaps even unattainable and, on the other hand, allowing that one person’s virtue might inhere partly in other people, making virtue at once more easily attained and more fragile.
The basic principle underlying the Ramsey-Lewis approach to implicit definition (often referred to as ‘Ramsification’) can be illustrated with a well-known story:
And the Lord sent Nathan unto David. And he came unto him, and said unto him, “There were two men in one city; the one rich, and the other poor. The rich man had exceeding many flocks and herds: But the poor man had nothing, save one little ewe lamb, which he had bought and nourished up: and it grew up together with him, and with his children; it did eat of his own meat, and drank of his own cup, and lay in his bosom, and was unto him as a daughter. And there came a traveler unto the rich man, and he spared to take of his own flock and of his own herd, to dress for the wayfaring man that was come unto him; but took the poor man’s lamb, and dressed it for the man that was come to him.” And David’s anger was greatly kindled against the man; and he said to Nathan, “As the Lord liveth, the man that hath done this thing shall surely die: And he shall restore the lamb fourfold, because he did this thing, and because he had no pity.” And Nathan said to David, “Thou art the man.”
Nathan uses Ramsification to drive home a point. He tells a story about an ordered triple of objects (two people and an animal) that are interrelated in various ways. Some of the first object’s properties (e.g., wealth) are monadic; some of the second object’s properties (e.g., poverty) are monadic; some of the first object’s properties are relational (e.g., he steals the third object from the second object); some of the second object’s properties are relational (e.g., the third object is stolen from him by the first object); and so on. Even though the first object is not explicitly defined as the X such that …, it is nevertheless implicitly defined as the first element of the ordered triple such that …. The big reveal happens when Nathan announces that the first element of the ordered triple, about whom his interlocutor has already made some pretty serious pronouncements, is the very person he’s addressing (the other two, for those unfamiliar with 2 Samuel 12, are Uriah and Bathsheba).
The story is Biblical, but the method is modern. To implicitly define a set of theoretical terms (henceforth ‘T-terms’), one formulates a theory T in those terms and any other terms (henceforth ‘O-terms’) one already understands or has an independent theory of. Next, one writes T as a single sentence, such as a long conjunction, in which the T-terms t1…, tn occur (henceforth ‘T[t1…, tn]’ or ‘the postulate of T’). The T-terms are replaced by unbound variables x1…, xn, which are then existentially quantified over to generate the Ramsey sentence of T. The Ramsey sentence states that T is realized, i.e., that there are objects x1…, xn that satisfy the open sentence T[x1…, xn]. An ordered n-tuple that satisfies this open sentence is then said to be a realizer of the theory.
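The procedure can be sketched computationally. The following toy Python model reconstructs Nathan’s parable: the O-vocabulary supplies facts we already understand, the postulate becomes an open sentence over variables, and the Ramsey sentence is true just in case some tuple from the domain realizes it. (The domain, predicates, and facts here are merely illustrative.)

```python
from itertools import permutations

# Toy domain: candidate objects that might realize the theory.
domain = ["David", "Uriah", "lamb", "Nathan"]

# O-vocabulary: facts stated in terms we already understand.
rich = {"David"}
poor = {"Uriah"}
stole = {("David", "lamb", "Uriah")}  # x stole z from y

def open_sentence(x, y, z):
    """T[x, y, z]: the postulate with its T-terms replaced by variables."""
    return x in rich and y in poor and (x, z, y) in stole

# The Ramsey sentence is true iff some assignment satisfies the open sentence.
realizers = [t for t in permutations(domain, 3) if open_sentence(*t)]
ramsey_sentence_true = len(realizers) > 0
```

Here the theory is uniquely realized by the triple (David, Uriah, lamb), which is exactly the structure of Nathan’s reveal: the implicit definition picks out its realizer without ever naming it.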
Lewis (1966) famously applied this method to folk psychology to argue for the mind-brain identity theory. Somewhat roughly, he argued that folk psychology can be treated as a theory in which mental-state terms are the T-terms. The postulate of folk psychology is identified as the conjunction of all folk-psychological platitudes (commonsense psychological truths that everyone knows, and everyone knows that everyone knows, and everyone knows that everyone knows that everyone knows, and so on). The Ramsey sentence of folk psychology is formed in the usual way, by replacing all mental-state terms (e.g., ‘belief’, ‘desire’, ‘pain’, etc.) with variables and existentially quantifying over those variables. Finally, one goes on to determine what, in the actual world, satisfies the Ramsey sentence; that is, one investigates what, if anything, is a realizer of the Ramsey sentence. If there is a realizer, then that’s what the T-terms refer to; if there is no realizer, then the T-terms do not refer. Lewis claims that brain states are such realizers, and hence that mental states are identical with brain states.
Lewis’s Ramsification method is attractive for a number of reasons. First, it ensures that we don’t simply change the topic when we try to give a philosophical account of some phenomenon. If your account of the mind is wildly inconsistent with the postulate of folk psychology, then – though you may be giving an account of something interesting – you’re not doing what you think you’re doing. Second, it enables us to distinguish between the meaning of the T-terms and whether they refer. The T-terms mean what they would refer to, if there were such a thing. Whether they in fact refer is a distinct question. Third, and perhaps most importantly, Ramsification is holistic. The first half of the twentieth century bore witness to the fact that it’s impossible to give an independent account of almost any psychological phenomenon (belief, desire, emotion, perception) because what it means to have one belief is essentially bound up with what it means to have a whole host of other beliefs, as well as (at least potentially) a whole host of desires, emotions, and perceptions. Ramsification gets around this problem by giving an account of all of the relevant phenomena at once, rather than trying to chip away at them piecemeal.
Virtue theory stands to benefit from the application of Ramsification for all of these reasons. We want an account of virtue, not an account of some other interesting phenomenon (though we might want that too). We want an account that recognizes that talk of virtue is meaningful, even if there aren’t virtues. Most importantly, we want an account of virtue that recognizes the complexity of virtue and character – the fact that virtues are interrelated in a whole host of ways with occurrent and dispositional mental states, with other virtues, with character more broadly, and so on.
Whether Lewis is right about brains is irrelevant to our question, but his methodology is crucial. What I want to do now is to show how the same method, suitably modified, can be used to implicitly define virtue-terms, which in turn will help us to answer the question whether people can be virtuous. For reasons that will become clear as we proceed, the T-terms of virtue theory as I construe it here are ‘person’, ‘virtue’, ‘vice’, the names of the various virtues (e.g., ‘courage’, ‘generosity’, ‘curiosity’), the names of their congruent affects (e.g., ‘feeling courageous’, ‘feeling generous’, ‘feeling curious’), the names of the various vices (e.g., ‘cowardice’, ‘greed’, ‘intellectual laziness’), and the names of their congruent affects (e.g., ‘feeling cowardly’, ‘feeling greedy’, ‘feeling intellectually lazy’). The O-terms are all other terms, importantly including terms that refer to attitudes (e.g., ‘belief’, ‘desire’, ‘anger’, ‘resentment’, ‘disgust’, ‘contempt’, ‘respect’), mental processes (e.g., ‘deliberation’), perceptions and perceptual sensitivities, behaviors, reasons, situational features (e.g., ‘being alone’, ‘being in a crowd’, ‘being monitored’), and evaluations (e.g., ‘praise’ and ‘blame’).
Elsewhere (Alfano 2013), I have argued for an intuitive distinction between high-fidelity and low-fidelity virtues. High-fidelity virtues, such as honesty, chastity, and loyalty, require near-perfect manifestation in undisrupted conditions. Someone only counts as chaste if he never cheats on his partner when cheating is a temptation. Low-fidelity virtues, such as generosity, tact, and tenacity, are not so demanding. Someone might count as generous if she were more disposed to give than not to give when there was sufficient reason to do so; someone might count as tenacious if she were more disposed to persist than not to persist in the face of adversity. If this is on the right track, the postulate of virtue theory will recognize the distinction. For instance, it seems to me at least that almost everyone would say that helpfulness is a low-fidelity virtue whereas loyalty is a high-fidelity virtue. Here, then, are some families of platitudes about character that are candidates for the postulate of virtue theory:
(A) The Virtue / Affect Family
(a1) If a person has courage, then she will typically feel courageous when there is sufficient reason to do so.
(a2) If a person has generosity, then she will typically feel generous when there is sufficient reason to do so.
(a3) If a person has curiosity, then she will typically feel curious when there is sufficient reason to do so.
(C) The Virtue / Cognition Family
(c1) If a person has courage, then she will typically want to overcome threats.
(c2) If a person has courage, then she will typically deliberate well about how to overcome threats and reliably form beliefs about how to do so.
(S) The Virtue / Situation Family
(s1) If a person has courage, then she will typically be unaffected by situational factors that are neither reasons for nor reasons against overcoming a threat.
(s2) If a person has generosity, then she will typically be unaffected by situational factors that are neither reasons for nor reasons against giving resources to someone.
(s3) If a person has curiosity, then she will typically be unaffected by situational factors that are neither reasons for nor reasons against investigating a problem.
(E) The Virtue / Evaluation Family
(e1) If a person has courage, then she will typically react to threats in ways that merit praise.
(e2) If a person has generosity, then she will typically react to others’ needs and wants in ways that merit praise.
(e3) If a person has curiosity, then she will typically react to intellectual problems in ways that merit praise.
(B) The Virtue / Behavior Family
(b1) If a person has courage, then she will typically act so as to overcome threats when there is sufficient reason to do so.
(b2) If a person has generosity, then she will typically act so as to benefit another person when there is sufficient reason to do so.
(b3) If a person has curiosity, then she will typically act so as to solve intellectual problems when there is sufficient reason to do so.
(P) The Virtue Prevalence Family
(p1) Many people commit acts of courage.
(p2) Many people commit acts of generosity.
(p3) Many people commit acts of curiosity.
(p4) Many people are courageous.
(p5) Many people are generous.
(p6) Many people are curious.
(I) The Cardinality / Integration Family
(i1) Typically, a person who has modesty also has humility.
(i2) Typically, a person who has magnanimity also has generosity.
(i3) Typically, a person who has curiosity also has open-mindedness.
(D) The Desire / Virtue Family
(d1) Typically, a person desires to have courage.
(d2) Typically, a person desires to have generosity.
(d3) Typically, a person desires to have curiosity.
(F) The Fidelity Family
(f1) Chastity is high-fidelity.
(f2) Honesty is high-fidelity.
(f3) Creativity is low-fidelity.
Each platitude in each family is meant to be merely illustrative. Presumably they could all be improved somewhat, and there are many more such platitudes. Moreover, each family is itself just an example. There are many further families describing the relations among vice, affect, cognition, situation, evaluation, and behavior, as well as families that make three-way rather than two-way connections (e.g., “If a person is courageous, then she will typically act so as to overcome threats when there is sufficient reason to do so and because she feels courageous.”). For the sake of simplicity, though, let’s assume that the families identified above contain all and only the platitudes relevant to the implicit definition of virtues. Ramsification can now be performed in the usual way. First, conjoin all of the platitudes into a single sentence (henceforth, simply the ‘postulate of virtue theory’). Next, replace each of the T-terms in the postulate of virtue theory with an unbound variable, then existentially quantify over those variables to generate the Ramsey sentence of virtue theory. Finally, check whether the Ramsey sentence of virtue theory is true and – if it is – what its realizers are.
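The final, checking step can be sketched in miniature. In the Python toy below, each platitude family is compressed into a single boolean predicate and two invented candidate psychological properties are tested against the conjunction; the candidates and their features are illustrative assumptions, not empirical claims.

```python
# Hypothetical candidate properties that might realize 'courage'.
candidates = {
    "robust_trait": {"affect": True, "behavior": True,
                     "situation_insensitive": True, "prevalent": False},
    "local_trait":  {"affect": True, "behavior": True,
                     "situation_insensitive": False, "prevalent": True},
}

# Each platitude family reduced to one predicate on a candidate.
platitudes = {
    "A": lambda c: c["affect"],                 # virtue / affect family
    "B": lambda c: c["behavior"],               # virtue / behavior family
    "S": lambda c: c["situation_insensitive"],  # virtue / situation family
    "P": lambda c: c["prevalent"],              # virtue prevalence family
}

def satisfies_postulate(c):
    # The postulate is the conjunction of all the platitude families.
    return all(p(c) for p in platitudes.values())

realizers = [name for name, c in candidates.items() if satisfies_postulate(c)]
```

In this toy model the realizer list comes out empty: the robust trait fails the prevalence family, and the prevalent local trait fails the situation family. That is, schematically, the predicament the situationist challenge alleges for the real postulate.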
After this preliminary work has been done, we’re in a position to see more clearly the problem raised by the situationist challenge to virtue theory. Situationists argue that there is no realizer of the Ramsey sentence of virtue theory. Moreover, this is not for lack of effort. Indeed, one family of platitudes in the Ramsey sentence specifically states that, typically, people desire to be virtuous; it’s not as if no one has yet tried to be or become courageous, generous, or curious. In this paper, I don’t have space to canvass the relevant empirical evidence; interested readers should see my (2013a and 2013b). Nevertheless, the crucial claim – that the Ramsey sentence of virtue theory is not realized – is not an object of serious dispute in the philosophical literature.
One very common response to the situationist challenge from defenders of virtue theory (and virtue ethics in particular) is to claim that virtues are actually quite rare, directly contradicting the statements in the virtue prevalence family. I do not think this is the best response to the problem, as I explain below, but the point remains that all serious disputants agree that the Ramsey sentence is not realized.
As described above, Ramsification looks like a simple, formal exercise. Collect the platitudes, put them into a big conjunction, perform the appropriate substitutions, existentially quantify, and check the truth-value of the resulting Ramsey sentence (and the referents of its bound variables, if any). But there are several opportunities for a critic to object as the exercise unfolds.
One difficulty that arises for some families, such as the desire / virtue family, is that they involve T-terms within the scope of intentional attitude verbs. Since existential quantification into such contexts is blocked by opacity, such families cannot be relied on to define the T-terms, though they can be used to double-check the validity of the implicit definition once the T-terms are defined.
Another difficulty is that this methodology presupposes that we have an adequate understanding of the O-terms, which in this case include terms that refer to attitudes, mental processes, perceptions and perceptual sensitivities, behaviors, reasons, situational features, and evaluations. One might be dubious about this presupposition. I certainly am. However, the fact that philosophy of mind and metaethics are works-in-progress should not be interpreted as a problem specifically for my approach to virtue theory. Any normative theory that relies on other branches of philosophy to figure out what mental states and processes are, and what reasons are, can be criticized in the same way.
A third worry is that the list of platitudes contains gaps (e.g., a virtue acquisition family about how various traits are acquired). Conversely, one might think that it has gluts (e.g., unmotivated commitment to virtue prevalence). To overcome this pair of worries, we need a way of determining what the platitudes are. Perhaps surprisingly, there is no precedent for this in the philosophy of mind, despite the fact that Ramsification is often invoked as a framework there. This may be because it’s supposed to be obvious what the platitudes are. Here’s Frank Jackson’s flippant response to the worry: “I am sometimes asked—in a tone that suggests that the question is a major objection—why, if conceptual analysis is concerned to elucidate what governs our classificatory practice, don’t I advocate doing serious opinion polls on people’s responses to various cases? My answer is that I do—when it is necessary. Everyone who presents the Gettier cases to a class of students is doing their own bit of fieldwork, and we all know the answer they get in the vast majority of cases” (1998, 36–37). After all, according to Lewis, everyone knows the platitudes, and everyone knows that everyone knows them, and everyone knows that everyone knows that everyone knows them, and so on. Sometimes, however, the most obvious things are the hardest to spot. It thus behooves us to at least sketch a method for carrying out the first step of Ramsification: identifying the platitudes. Call this pre-Ramsification.
Here’s an attempt at spelling out how pre-Ramsification should work: start by listing off a large number of candidate platitudes. These can be all of the statements one would, in a less-responsible, Jacksonian mood, have merely asserted were platitudes. It can also include statements that seem highly likely but perhaps not quite platitudes. Add to the pool of statements some that seem, intuitively, to be controversial, as well as some that seem obviously false; these serve as anchors in the ensuing investigation. Next, collect people’s responses to these statements. Several sorts of responses would be useful, including subjective agreement, social agreement, and reaction time. For instance, prompt people with the statement, “Many people are honest,” and ask to what extent they agree and to what extent they think others would agree. Measure their reaction times as they answer both questions. High subjective and social agreement, paired with fast reaction times, is strong but defeasible evidence that a statement is a platitude. This is a bit vague, since I haven’t specified what counts as “high” agreement or “fast” reaction times, but there are precedents in psychology for setting these thresholds. Moreover, this kind of pre-Ramsification wouldn’t establish dispositively what the platitudes are, but then, dispositive proof only happens in mathematics.
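A minimal sketch of this screening procedure is below. The response data, statements, and thresholds are all invented for illustration; in practice the cutoffs for “high” agreement and “fast” reaction times would have to be calibrated, for instance against the controversial and obviously-false anchor statements.

```python
# Hypothetical survey data:
# statement -> (subjective agreement, social agreement, reaction time in ms).
responses = {
    "Many people are honest":           (0.85, 0.80, 900),
    "All honest people are courageous": (0.40, 0.45, 2100),  # controversial anchor
    "Most people are cruel":            (0.05, 0.10, 1000),  # obviously-false anchor
}

# Illustrative thresholds; real cutoffs would be empirically calibrated.
MIN_AGREE, MAX_RT = 0.75, 1200

def is_platitude(subjective, social, rt):
    """High subjective and social agreement plus fast responses count as
    strong but defeasible evidence that a statement is a platitude."""
    return subjective >= MIN_AGREE and social >= MIN_AGREE and rt <= MAX_RT

platitudes = [s for s, data in responses.items() if is_platitude(*data)]
```

On this toy data only “Many people are honest” survives the screen, while both anchors are correctly filtered out, which is the behavior one would want pre-Ramsification to exhibit.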
It’s far beyond the scope of this short paper to show that pre-Ramsification works in the way I suggest, or that it verifies all and only the families identified above. For now, let’s suppose that it does, i.e., that all of the families proposed above were validated by pre-Ramsification. Let’s also suppose that we have strong evidence that the Ramsey sentence of virtue theory is not realized (a point that, as I mentioned above, is not seriously contested). How should we then proceed?
Lewis foresaw that, in some cases, the Ramsey sentence for a given field would be unrealized, so he built in a way of fudging things: instead of generating the postulate by taking the conjunction of all of the platitudes, one can generate a weaker postulate by taking the disjunction of each of the conjunctions of most of the platitudes. For example, if there were only five platitudes, p, q, r, s, and t, then instead of the postulate’s being p&q&r&s&t, it would be (p&q&r&s) v (p&q&r&t) v … v (q&r&s&t). In the case of virtue theory, we could take the disjunction of each of the conjunctions of all but one of the families of platitudes. Alternatively, we could exclude a few of the platitudes from within each family.
Fudging in this way makes it easier for the Ramsey sentence to be realized, since the disjunction of conjunctions of most of the platitudes is logically weaker than the straightforward conjunction of all of them. Fudging may end up making it too easy, though, such that there are multiple realizers of the Ramsey sentence. When this happens, it’s up to the theorist to figure out how to strengthen things back up in such a way that there is a unique realizer.
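The logical relationship between the strict and fudged postulates can be verified with a short sketch. The five platitudes are the schematic p through t from the example above, modeled as boolean features of a candidate realizer; the candidate itself is invented for illustration.

```python
from itertools import combinations

# Five schematic platitudes, each a predicate on a candidate realizer c.
platitudes = [lambda c, k=k: c[k] for k in ("p", "q", "r", "s", "t")]

def strict_postulate(c):
    # p&q&r&s&t: the conjunction of all the platitudes.
    return all(p(c) for p in platitudes)

def fudged_postulate(c, drop=1):
    # (p&q&r&s) v (p&q&r&t) v ... v (q&r&s&t): the disjunction of the
    # conjunctions of every subset that omits `drop` platitudes.
    n = len(platitudes) - drop
    return any(all(p(c) for p in subset)
               for subset in combinations(platitudes, n))

# A candidate that satisfies four of the five platitudes.
c = {"p": True, "q": True, "r": True, "s": True, "t": False}
```

Such a candidate fails the strict postulate but satisfies the fudged one, and anything satisfying the strict postulate trivially satisfies the fudged one; fudging only ever weakens, which is why it risks admitting multiple realizers.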
The various responses to the situationist challenge can be seen as different ways of doing this. Everyone recognizes that the un-fudged Ramsey sentence of virtue theory is unrealized. But a sufficiently fudged Ramsey sentence is bound to be multiply realized. It’s a theoretical choice exactly how to play things at this point. More traditional virtue theorists such as Joel Kupperman (2009) favor a fudged version of the Ramsey sentence wherein the virtue prevalence family has been dropped. John Doris (2002) favors a fudged version wherein the virtue / situation and virtue / integration families have been dropped. I (2013) favor a fudged version wherein the virtue / situation family has been dropped and a virtue / social construction family has been added in its place. The statements in the latter family have to do with the ways in which (signals of) social expectations implicitly and explicitly influence behavior. The main idea is that having a virtue is more like having a title or social role (e.g., you’re curious because people signal to you their expectations of curiosity) than like having a basic physical or biological property (e.g., being over six feet tall). Christian Miller (2013, 2014) drops the virtue prevalence family and adds a mixed-trait prevalence family in its place, which states that many people possess traits that are neither virtues nor vices, such as the disposition to help others in order to improve one’s mood or avoid sliding into a bad mood.
In this short paper, I don’t have the space to argue against all alternatives to my own proposal. Instead, I want to make two main claims. First, the “virtue is rare” dodge advocated by Kupperman and others who drop the virtue prevalence family has costs associated with it. Second, those costs may be steeper than the costs associated with my own way of responding to the situationist challenge.
Researchers in personality and social psychology have documented for decades the tendency of just about everybody to make spontaneous trait inferences, attributing robust character traits on the basis of scant evidence (Ross 1977; Uleman et al. 1996). This indicates that people think that character traits (virtues, vices, and neutral traits, such as extroversion) are prevalent. Furthermore, in a forthcoming paper (Alfano, Higgins, & Levernier forthcoming), I show that the vast majority of obituaries attribute multiple virtues to the deceased. Not everyone is eulogized in an obituary, of course, but most are (about 55% of Americans, by my calculations). Not all obituaries are sincere, but presumably many are. Absent reason to think that people about whom obituaries are written differ greatly from people about whom they are not, we can treat this as evidence that most people think that the people they know have multiple virtues. And of course, if most people are thought by those who know them to have multiple virtues, then most people are thought to be virtuous. In other words, the virtue prevalence family is deeply ingrained in folk psychology and folk morality.
Social psychologists think that people are quick to attribute virtues. My own work on obituaries suggests the same. What do philosophers say? Though there are some (Russell 2009) who claim, with a shrug, that virtue is rare or even non-existent, this is not the predominant opinion. Alasdair MacIntyre (1984, p. 199) claims that “without allusion to the place that justice and injustice, courage and cowardice play in human life very little will be genuinely explicable.” Philippa Foot (2001), following Peter Geach (1977), argues that certain generic statements characterize the human form of life, and that from these generic statements we can infer what humans need and hence will typically have. For the sake of comparison, consider what she says about a different life form, the deer. Foot first points out that the deer’s form of defense is flight. Next, she claims that a certain normative statement follows, namely, that deer are naturally or by nature swift. This is not to say that every deer is swift; some are slow. Instead, it’s a generic statement that characterizes the nature of the deer. Finally, she says that any deer that fails to be swift – that fails to live up to its nature – is “so far forth defective” (p. 34). The same line of reasoning that she here applies to non-human animals is meant to apply to human animals as well. As she puts it, “Men and women need to be industrious and tenacious of purpose not only so as to be able to house, clothe, and feed themselves, but also to pursue human ends having to do with love and friendship. They need the ability to form family ties, friendships, and special relations with neighbors. They also need codes of conduct. And how could they have all these things without virtues such as loyalty, fairness, kindness, and in certain circumstances obedience?” (pp. 44-5, emphasis mine).
In light of these sorts of claims, let’s consider again the defense offered by some virtue ethicists that virtue is rare, or even impossible to achieve. If virtues are what humans need, but the vast majority of people don’t have them, one would have thought that our species would have died out long ago. Consider the analogous claim for deer: although deer need to be swift, the vast majority of deer are galumphers. Were that the case, presumably they’d be hunted down and devoured like a bunch of tasty venison treats. Or consider another example of Foot’s: she agrees with Geach (1977) that people need virtues like honeybees need stingers. Does it make sense for someone with this attitude to say that most people lack virtues? That would be like saying that, even though bees need stingers, most lack stingers. It’s certainly odd to claim that the majority – even the vast majority – of a species fails to fulfill its own nature. That’s not a contradiction, but it is a cost to be borne by anyone who responds to the situationist challenge by dropping the virtue prevalence family.
One might respond on Foot’s behalf that human animals are special: unlike the other species, we have natures that are typically unfulfilled. That would be an interesting claim to make, but I am not aware of anyone who has defended it in print. I conclude, then, that dropping the virtue prevalence family is a significant cost to revising the postulate.
But is it a more significant cost than the one imposed on me by replacing the virtue / situation family with a virtue / social construction family? I think it is. This comparative claim is of course hard to adjudicate, so I will rest content merely to emphasize the strength of the virtue prevalence family.
What would it look like to fudge things in the way I recommend? Essentially, one would end up committed to a version of the hypothesis of extended cognition, a variety of active externalism in the family of the extended mind hypothesis. Clark & Chalmers (1998) argued that the vehicles (not just the contents) of some mental states and processes extend beyond the nervous system and even the skin of the agent whose states they are. If my arguments are on the right track, virtues and vices sometimes extend in the same way: the bearers of someone’s moral and intellectual virtues sometimes include asocial aspects of the environment and (more frequently) other people’s normative and descriptive expectations. What it takes (among other things) for you to be, for instance, open-minded, on this view is that others think of you as open-minded and signal those thoughts to you. When they do, they prompt you to revise your self-concept, to want to live up to their expectations, to expect them to reward open-mindedness and punish closed-mindedness, to reciprocate displays of open-mindedness, and so on. These are all inducements to conduct yourself in an open-minded way, which they will typically notice. When they do, their initial attribution will be corroborated, leading them to strengthen their commitment to it and perhaps to signal that strengthening to you, which in turn is likely to further induce you to conduct yourself in open-minded ways, which will again corroborate their judgment of you, and so on. Such feedback loops are, on my view, partly constitutive of what it means to have a virtue. The realizer of the fudged Ramsey sentence isn’t just what’s inside the person who has the virtue but also further things outside that person.
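The reciprocal dynamic just described can be made vivid with a deliberately toy numerical sketch. The update rule and coefficients below are invented, not an empirical model: conduct is nudged toward signalled expectations, and expectations are strengthened toward observed conduct, so the two ratchet toward a shared level.

```python
# Illustrative only: level of open-minded conduct, and others' expectation
# of open-mindedness, each on a 0-1 scale.
conduct, expectation = 0.3, 0.5

for _ in range(20):
    # Signalled expectations induce more open-minded conduct...
    conduct += 0.5 * (expectation - conduct)
    # ...and observed conduct corroborates and strengthens the attribution.
    expectation += 0.5 * (conduct - expectation)
```

After a few rounds the two quantities are nearly identical: neither the agent’s conduct nor the attributors’ expectations alone fixes the outcome, which is the sense in which the feedback loop, and not just what is inside the agent, partly constitutes the trait.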
So, can people be virtuous? I hope it isn’t too disappointing to answer with, “It depends on what you mean by ‘can’, ‘people’, and ‘virtuous’.” If we’re concerned only with abstract possibility, perhaps the answer is affirmative. If we are concerned more with the proximal possibility that figures in people’s current deliberations, plans, and hopes, we have reason to worry. If we only care whether more than zero people can be virtuous, the existing, statistical, empirical evidence is pretty much useless. If we instead treat ‘people’ as a generic referring to human animals (perhaps a majority of them, but at least a substantial plurality), such evidence becomes both important and (again) worrisome. If we insist that being virtuous is something that must inhere entirely within the agent who has the virtue, then evidence from social psychology is damning. If instead we allow for the possibility of external character, there is room for hope.
 Nathan is also using an extended metaphor. My point is clear nevertheless.
 An alternative is the “psycho-functionalist” method, which disregards common sense in favor of (solely) highly corroborated scientific claims. See Kim (2011) for an overview. For my purposes, psycho-functionalism is less appropriate, since (among other things) it is more in danger of changing the topic.
 I seem to be in disagreement on this point with Christian Miller (this volume), who worries that people may not be motivated to be or become virtuous. In general, I’m even more skeptical than Miller about the prospects of virtue theory, but in this case I find myself playing the part of the optimist.
 I am here indebted to Gideon Rosen.
 It might also be possible to circumvent this difficulty, which anyway troubles Lewis’s application of Ramsification to the mind-brain identity theory, by using only de re formulations of the relevant statements. See Fitting & Mendelsohn (1999) for a discussion of how to do so.
 Experimental philosophers have started to fill this gap, but not in any systematic or consensus-based way.
 Micah Lott (personal communication) has told me that he endorses this claim, though he has a related worry. In short, his concern is to explain how, given the alleged rarity of virtue, most people manage to live decent enough lives.
 For an overview of the varieties of externalism, see Carter et al. (forthcoming).
 I spell out this view in more detail in Alfano & Skorburg (forthcoming). For a treatment of the feedback-loops model in the context of the extended mind rather than the character debate, see Palermos (forthcoming).
 I am grateful to J. Adam Carter, Orestis Palermos, and Micah Lott for comments on a draft of this paper.
As always, comments, suggestions, questions, criticisms, etc. are most welcome….
“We are strangers to ourselves.”
~ Friedrich Nietzsche, On the Genealogy of Morals, Preface, section 1
1 The function of preferences: prediction, explanation, planning, and evaluation
Among our diverse mental states, some are best understood as representing how the world is. If I know that wine is made from grapes, I correctly represent the world as being a certain way. If I think that Toronto is the capital of Canada, I incorrectly represent the world as being a certain way (it’s actually Ottawa). Other mental states are best understood as moving us to act, react, or forbear in various ways. I want to see the Grand Canyon before I die. I desire to know how to speak Spanish. I prefer to use chopsticks rather than a fork to eat sushi. I intend to keep my promises. I aim to be fair. I love to hear New Orleans-style brass band music. Depending on their longevity, their intensity, their specificity, their malleability, and their idiosyncrasy, we use different words to describe these mental states: values, drives, choices, appraisals, volitions, cravings, goals, reasons, purposes, passions, sentiments, longings, appetites, aspirations, attractions, motives, urges, needs, acts of will. Such mental states are sometimes referred to as pro-attitudes, and related states that move someone to avoid, escape, or prevent a particular state of affairs are correspondingly called con-attitudes.
If you put together an agent’s representations of how the world is and the mental states that move her to act, you have some hope of predicting and explaining her actions. Suppose, for instance, that you know that I have a free weekend, that I deeply yearn to see the Grand Canyon, and that I have some spare cash. What am I going to do? It’s not unreasonable to predict that I will purchase a plane ticket (or rent a car) and go to Arizona. Now suppose that you know that my comprehension of geography is pretty weak. I still want to see the Grand Canyon, but I mistakenly think that it’s in Chihuahua. (Oops – nobody’s perfect). What do you think I’ll do now? It’s not unreasonable to predict that I’ll still purchase a plane ticket or rent a car, but that instead of going to Arizona I’ll end up in Mexico (and pretty frustrated!). Someone’s representations and purposes combine to lead them to act. If you know what someone’s representations and purposes are, you can to some extent predict what they’ll do.
In the same vein, knowing what someone’s representations and purposes are puts you in a position to explain their actions. Suppose you see me stand up, walk across the room, open a door, and walk through the doorway. On the door, you notice an icon indicating a men’s bathroom.
Why did I do what I did? A plausible explanation isn’t too hard to assemble. If you saw the sign indicating that the door led to the men’s bathroom, then presumably I did too: so I probably had a relevant representation of what was on the other side of the door. What desire (preference, goal, intention, need) might I have that would rationalize my behavior? The most obvious suggestion is that I wanted to relieve myself. Of course, it’s possible that I went to the men’s bathroom to participate in a drug deal, to conceal myself while I had a good long cry, or for some other reason. But if you’re right in thinking that I wanted to urinate, then you’ve successfully explained my action. If you know what someone’s representations and purposes are, you can to some extent explain what they’ve done.
To predict and explain other people’s actions, we need some idea of what they prefer (want, desire, value, need). But that’s not all that preferences are for. Preferences also figure in planning and evaluation, and when they’re structured appropriately, they contribute to the agent’s autonomy. Think about your best friend. Imagine that her birthday is in a week. You love your friend, and want to do something special for her birthday. You don’t need to predict your own action here, nor do you need to explain it. Your task now is to plan: in the next week, what can you do for your friend that will simultaneously please and surprise her without emptying your bank account? To give your friend a special birthday present, you need to know what she enjoys (or would enjoy, if she hasn’t experienced it yet). To be motivated to give your friend a special birthday present in the first place, you need to want to do something that she wants. In philosophical jargon, you must have a higher-order desire – a desire about another desire (hers). You want to give her something that she wants.
It’s remarkable how adept people can be at solving this sort of problem, which involves the sort of recursively embedded agent-patient relations discussed in the introduction. Think about it. To plan a good gift, you need to know now not just what your friend currently wants but what she will want in the future. You can’t just give her what you yourself want or what you will want in a week. You can’t give her what she wants now but won’t want in a week. To successfully give your friend a good present, you have to figure out in advance what she’ll want in a week.
The same constraints apply when you plan for yourself. Think about choosing your major in college. What do you want to specialize in? Musicology is interesting, but will you still be interested in it three years from now? Will it set you up to earn a decent living (something you’ll presumably want in five, ten, and twenty years)? Marketing might earn you a decent living, but will you find it boring (not want to do it, or even want not to do it) after a few years? Are you going to want to have children? In that case, you may need more income than you would if you didn’t want (and didn’t have) children. Living a sensible life requires planning. You need to make plans that affect your friends, your family, your colleagues, and your rivals. You also need to make plans for yourself. Doing this successfully requires intimate knowledge of (or at least some pretty good guesses about) your own and others’ future desires, needs, and preferences.
Thus, preferences figure in the prediction, explanation, and planning of action. They’re also important when we morally evaluate action. I reach out violently and knock you over, causing you some pain and surprising you more than a little. What should you think of my action? It depends in part on what moved me to do it. If I’ve shoved you because I want to hurt you, if I’m engaged in an assault, you’re going to think I’m doing something wrong. If I’m not depraved, I’ll also feel guilty. If I’m just clumsily gesturing at a pretty tree over there, I should probably know better, but you’ll temper your anger. I may not feel guilty, but I’ll probably be embarrassed or even ashamed. If I’m knocking you out of the way of a biker who’s zooming down the sidewalk towards you, perhaps you’ll feel grateful, while I’ll feel relieved or even proud.
What marks the difference between your reactions to my action? What marks the difference between my own assessments of it after the fact? It’s not that my shoving you and your falling hurts more or less in one case or the other. Instead, what leads you to evaluate my action as wrong, misguided, or benevolent is the pro- (or con-)attitude that moves me to act. Likewise, what leads me to feel guilt, embarrassment, or relief is the pro- (or con-)attitude that moved me to act. If I want to hurt you, if I want to do something to you that you prefer not to happen, you’ll say that I’ve acted wrongly. If my aim is to do something relatively harmless (something you neither prefer nor disprefer) like pointing out a feature of the environment, you’ll perhaps think I’m a klutz, but you won’t think I’ve done something morally wrong. If I’m trying to prevent you from being run down by an out-of-control cyclist, if I want to do something to you that (once you understand it) you prefer that I do, you’ll presumably think I’ve done something morally good.
Preferences are important and versatile. They help us predict and explain actions. They help us exercise agency on our own behalf and for those we care about. They help us evaluate the actions of others and ourselves. In the context of moral psychology, there’s one last thing that preferences are good for: autonomy. According to many philosophers, such as Harry Frankfurt (1971, 1992), a person is autonomous or free to the extent that she wants what she wants to want, or at least does not want what she would prefer not to want. An autonomous agent is someone whose will has a characteristic structure. This idea is discussed in more depth in chapter 2.
As I mentioned above, we have dozens of terms to refer to pro- and con-attitudes. But the title of this chapter is ‘Preferences’. Why? Preferences are sufficiently fine-grained to help in the prediction, explanation, and evaluation of action in the face of tradeoffs. Other motivating attitudes lack this specificity. Consider, for instance, values. At a high enough level of abstraction, everyone values the same ten things: power, achievement, pleasure, stimulation, self-direction, universalism, benevolence, tradition, conformity, and security (Schwartz 2012). If you want to know what someone will do, why someone did something, or whether someone deserves praise or blame for acting as they did, knowing that they accept these values gives you no purchase. Qualitatively weighting values doesn’t improve things much. Consider someone who values pleasure “somewhat,” stimulation “a lot,” and security “quite a bit.” What will she do? It’s hard to say. Why’d she go to the punk rock show? It’s hard to say. Does she merit some praise for engaging in a pleasant conversation with a stranger at the coffee shop? It’s hard to say.
Preferences set up a rank ordering of states of affairs. This is easiest to see in the case of tradeoffs. Suppose two desires are moving you to act. You’re exhausted after a long day, so you want to take a nap. But your friend just texted to suggest meeting up for a drink at a local bar, and you want to join her. We can represent this tradeoff with the following table:
|                   | Nap | Don’t nap |
| Join friend       |  A  |     B     |
| Don’t join friend |  C  |     D     |
Table 1: Choice matrix
In this simplified choice matrix, there are four ways things could turn out. You could take a nap and join your friend (A); you could join your friend without taking a nap (B); you could take a nap without joining your friend (C); and you could neither nap nor join your friend (D). If you have a complete set of preferences over these options, one of them is optimal for you, another is in second place, another is in third place, and the final one is in last place. Presumably A is your top outcome and D is your bottom outcome. Unfortunately, although you most prefer A (i.e., you prefer it to B, C, and D), it’s impossible. So you’re in a position where you need to weigh a tradeoff. This is where preferences become important. If you simply value the nap and value socializing with your friend, there’s no saying whether you’ll go with B or C. But if you prefer socializing to napping, we can predict that you’ll opt for B over C. By the same token, if you prefer napping to socializing, we can predict that you’ll opt for C over B.
So preferences are especially helpful in predicting behavior. They’re also great for explaining and evaluating behavior. A useful rule of thumb for explaining behavior is that people act in such a way as to bring about the highest-ranked outcome they think they can achieve. Imagine someone who prefers A to B, B to C, C to D, D to E, E to F, F to G, and G to H. She acts in such a way as to produce C. How can we explain this? Well, if we posit that she believes that A and B are out of the question (perhaps she takes them to be impossible or at least extremely difficult to achieve), then we can explain her behavior by saying that she went with the best outcome available to her.
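The rule of thumb above can be put mechanically. The following is my own illustrative sketch, not anything from the text: it assumes an agent's preferences are given as a simple ranked list, and predicts that she acts for the highest-ranked outcome she believes she can achieve.

```python
def predicted_choice(ranking, achievable):
    """Return the most-preferred outcome in `ranking` (best first)
    that the agent takes to be achievable."""
    for outcome in ranking:
        if outcome in achievable:
            return outcome
    return None  # the agent sees no achievable outcome at all

# The agent from the text: she prefers A to B, B to C, ..., G to H,
# but believes A and B are out of the question.
ranking = ["A", "B", "C", "D", "E", "F", "G", "H"]
achievable = {"C", "D", "E", "F", "G", "H"}

print(predicted_choice(ranking, achievable))  # C
```

The explanation in the text runs the same function in reverse: observing that she produced C, we infer that she must have taken A and B to be unavailable.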
2 The role of preferences in moral psychology
We’re now in a position to see how preferences relate to the five core concepts of moral psychology (patiency, agency, sociality, reflexivity, and temporality).
2.1 The role of preferences in patiency
Even if no one else is involved, even if you’re not exercising agency, your preferences matter for your patiency. According to one attractive theory of personal well-being, what it means for your life to go well is that your preferences are satisfied (Brandt 1972, 1983; Heathwood 2006). Your preferences might be satisfied through your own agency. You might prefer, among other things, to exercise agency in pursuit of some goal or other. Your preferences might be satisfied because you are involved in social relations with other people. Even so, there will be cases in which what you prefer happens or fails to happen simply by luck, accident, or unanticipated causal necessity. Fundamentally, then, well-being is associated with patiency, with what happens to you.
The preference-satisfaction theory of well-being is attractive for several reasons. It explains why one aspect of morality is intrinsically motivating. If my well-being is a matter of whether my preferences are satisfied, then I can’t help caring about my well-being. Preferences are a way of caring about things. Of course I care about what I care about. The preference-satisfaction theory of well-being also accounts for cases in which hedonic (pleasure-based) theories of well-being fail. Sometimes, it seems like my life goes no better, and may even go worse, when I experience some pleasures. I struggle with alcohol dependency and end up drinking to excess. While I enjoy the drinks, I prefer to stop. Arguably, I’m worse rather than better off because, even though I experience pleasure, my preferences are frustrated. Similarly, sometimes it seems like your life goes no worse, and may even go better, when you experience some pains. You exercise vigorously at the gym. You force yourself to study extra hard for an exam. You watch a frightening or depressing or horrifying movie. You eat a meal spiced with more than a little wasabi. These are painful experiences, but in each case you prefer to suffer through the pain. Arguably, you’re better rather than worse off because, even though you experience pain, your preferences are satisfied.
The preference-satisfaction theory of well-being also provides a way to understand well-being comparatively. People don’t just have good or bad lives. They have better or worse lives. Someone whose life is going poorly could be even worse off. Someone whose life is going well could be even better off. This distinction maps nicely onto the idea of a preference ranking. Since preferences can in principle put all the ways the world could be in order from best to worst, it’s possible to identify someone’s well-being with how far up their ranking things actually are. If you prefer A to B, B to C, C to D, D to E, E to F, F to G, and G to H, and the actual state of affairs is C, then your level of well-being is better than many ways it could be but not maximal. If things change to B, your well-being improves one notch; if things change to D, your well-being goes down a notch.
The most plausible version of the preference-satisfaction theory of well-being claims that what really contributes to your well-being is not the extent to which your actual preferences are satisfied but the extent to which your better-informed preferences are satisfied. Why? And what does it mean for preferences to be informed? Imagine that you’re about to take a bite of a delicious chile relleno. It’s your favorite dish. The cheese is perfectly melted. The poblanos are fresh. The tomatoes are local. Everything is perfect, with one little exception: unbeknownst to you, the cook accidentally used rat poison rather than salt. If you eat these chiles, you’re going to end up in the hospital. But you don’t know this; in fact, you have no clue. It won’t improve your life to eat those chiles. It’ll make your life (much!) worse.
Philosophers recognize this, and that’s why they say that your well-being is a function not of what you want but of what you would want if you were better informed. If you knew that the chiles rellenos were poisoned, you would prefer quite strongly not to eat them, so even though you currently prefer to eat them, doing so would detract from rather than contribute to your well-being.
Knowledge of potential poisons is clearly not the only thing you need to have informed preferences, so philosophers of well-being argue that your better-informed preferences are your fully-informed preferences. According to this approach, the preferences that determine someone’s well-being are not the preferences that person actually has, but the ones they would have if they were fully informed. Specifying what full information means in a way that doesn’t collapse into omniscience is tricky, but one attractive suggestion is to take into account “all those knowable facts which, if [you] thought about them, would make a difference to [your] tendency to act” (Brandt 1972, p. 682) or “everything that might make [you] change [your] desires” (Brandt 1983, p. 40) – a process Richard Brandt dubbed cognitive psychotherapy.
2.2 The role of preferences in agency, reflexivity, and temporality
I briefly mentioned the role of preferences in agency, reflexivity, and temporality above. Several points are relevant. First, to act at all, you must have pro-attitudes like preferences. Without states that move you to act, you’d never act in the first place, never exercise agency at all. Second, to act in the face of tradeoffs, you must have some way of ranking potential outcomes. That’s what preferences do: they put potential outcomes in a rank order. Third, to be the sort of agent that the vast majority of adult humans are, you need to engage in long-term plans and projects. This involves having some idea in advance what your future self’s preferences will or might be. It involves having temporally extended preferences, so that you want now for your future preferences, whatever they end up being, to be satisfied. It involves thinking of that future person as yourself and therefore having a special regard for him or her. If your future self mattered to you no more or less than some random stranger, long-term projects would be pretty foolish.
To be a recognizably human agent, your preferences must not violate certain constraints. Put less dramatically, your agency is undermined to the extent that your preferences violate certain constraints. You’ll fail to act successfully to the extent that you suffer from preference reversals (preferring A to B one moment and B to A the next moment). You’ll fail to act successfully if you have cyclical preferences (preferring A to B, B to C, but C to A). You’ll fail to act successfully over time if you cannot rely on your current representation of your future preferences to be largely accurate (thinking that you’ll prefer A to B when in fact you’ll prefer B to A).
2.3 The role of preferences in sociality
We tend to think that people deserve praise and blame only, or at least primarily, for their motivated actions. As I pointed out above, if someone inadvertently brings about a consequence, we tend to withhold or at least temper praise (even if the consequence was good) and blame (even if it was bad). Moral good luck is nice, but not particularly praiseworthy. Negligence is blameworthy, but less so than malice.
The role of preferences in sociality is most directly comprehensible from a utilitarian (or other consequentialist) framework, but does not depend essentially on the truth of utilitarianism. Utilitarians such as Brandt analyze right action in terms of preference-satisfaction. According to Brandt (1983, p. 37), an action is permissible if (and only if) “it would be as beneficial to have a moral code permitting that act as to have any moral code that is similar but prohibits the act.” Obligatory and forbidden actions can then be defined in terms of permissibility using well-known equivalences in deontic logic: an obligatory action is one that it’s not permissible not to do, and a forbidden action is one that it’s not permissible to do. The connection with preferences is that benefit (and harm) are understood on this account in terms of well-being. In other words, according to Brandt, an action is permissible if (and only if) it would satisfy as many fully-informed preferences, across all people, to have a moral code permitting that act as to have any moral code that is similar but prohibits the act.
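The deontic equivalences just mentioned can be made concrete in a toy sketch. This is my own illustration (the predicate names and the promise example are mine, not Brandt's): given a permissibility predicate over options, obligation and prohibition fall out as negations.

```python
def obligatory(permissible, act, omission):
    """An act is obligatory iff it is not permissible to omit it."""
    return not permissible(omission)

def forbidden(permissible, act, omission):
    """An act is forbidden iff it is not permissible to perform it."""
    return not permissible(act)

# Hypothetical moral code: keeping a promise is permissible,
# breaking it is not.
permissible = lambda option: option == "keep promise"

print(obligatory(permissible, "keep promise", "break promise"))  # True
print(forbidden(permissible, "break promise", "keep promise"))   # True
```

On Brandt's view, the `permissible` predicate would itself be fixed by which moral code maximally satisfies fully-informed preferences across all people.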
Brandt’s theory is a rule utilitarian approach to right action. One could instead adopt an act utilitarian theory, according to which an action is permissible if and only if performing it in the circumstances would be as beneficial as performing any alternative action (Smart 1956). Or one could adopt a motive utilitarian theory, according to which an action is permissible if and only if it’s what a person with an ideal motivational set (i.e., a psychologically possible motivational set that, over the course of a lifetime, is as beneficial as any alternative psychologically possible motivational set) would perform in the circumstances (Adams 1976). Regardless of the precise flavor of utilitarianism one adopts, then, it’s clear that, for utilitarians, preferences are immensely important on the dimension of sociality. To act in such a way as to satisfy the most preferences, you must take into account the effects of your action not just on yourself but on everyone else. In other words, you need to take into account how your agency affects others’ patiency. Nested agent-patient relations also play a role here. What you do (or fail to do) to one person will often have some effect on what they do (or fail to do) to another person, which will have an effect on what the second person does (or fails to do) to a third person, and so on.
As I mentioned above, the relevance of preferences to sociality is easiest to see from a utilitarian perspective, but it doesn’t rely on such a perspective. Virtue ethicists and care ethicists (though perhaps not Kantians) all accept the centrality of preferences in their approaches to sociality. For instance, one nearly universally recognized virtue is benevolence, the disposition both to want to benefit other people and to often succeed in doing so. Even if a virtue ethicist thinks that there are benefits other than preference-satisfaction, they admit that preference-satisfaction is one kind of benefit. In the same vein, Aristotle and other ancient virtue ethicists gave pride of place to friendship. Friends aim, among other things, to benefit each other (and typically succeed), which again involves (perhaps among other things) preference-satisfaction. Similarly, in the care tradition, the one-caring aims among other things to benefit the cared-for. This typically involves not only satisfying the cared-for’s informed preferences but actively helping the cared-for to get their actual preferences to approximate their idealized preferences.
3 Preference reversals and choice blindness
Thus, preferences matter in multiple ways to the core concepts of moral psychology. What does the scientific literature on preferences tell us about these important mental states? Two convergent lines of evidence suggest that preferences are neither determinate nor stable: the heuristics and biases research on preference reversals, and the psychological research on choice blindness.
Preferences are dispositions to choose one option over another. You strictly prefer a to b only if, if you were offered a choice between them, then ceteris paribus you would choose a. If your preferences are stable, then what you would choose now is identical to what you would choose in the future. If your preferences are determinate, then there is some fact of the matter about how you would choose. That is to say, exactly one of the following subjunctive conditionals is true: if you were offered a choice, then ceteris paribus you would choose a; if you were offered a choice, then ceteris paribus you would choose b; if you were offered a choice, then ceteris paribus you would be willing to flip a coin and accept a if heads and b if tails (or you would be willing to let someone else – even your worst enemy – choose for you). The kind of indeterminacy and instability I argue for in this section is modest rather than radical. I want to claim that preferences are unstable in the sense of sometimes changing in the face of seemingly trivial and normatively irrelevant situational influences, not in the sense of constantly changing. Similarly, I want to claim that preferences are indeterminate in the sense of there sometimes being no fact of the matter how someone would choose, not in the sense of there always being no fact of the matter how someone would choose.
3.1 Preference reversals
Two distinctions are worth making regarding the types of possible preference reversals. The first concerns the structure of the reversal. In a chain-type reversal, you prefer a to b, prefer b to c, and prefer c to a; such reversals are sometimes labeled failures of acyclicity. In a waffle-type reversal, you prefer a to b, but also prefer b to a. The other distinction has to do with temporal scale. Preference reversals can be synchronic, in which case you would have the inconsistent preferences all at the same time. More commonly, they are diachronic, in which case you might now prefer a to b and b to c, and then later come to prefer c to a (and perhaps give up your preference for a over b). Or you might now prefer a to b, but later prefer b to a (and perhaps give up your preference for a over b). In my (2012) paper, I call diachronic waffle-type reversals the result of Rum Tum Tugger preferences, after the character in T. S. Eliot’s Old Possum’s Book of Practical Cats who is “always on the wrong side of every door.”
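The two structural types can be checked mechanically. Here is a small sketch of my own (not from the text), representing a set of synchronic strict preferences as (better, worse) pairs: a waffle-type reversal is a pair held in both directions, and a chain-type reversal is any cycle in the relation.

```python
from collections import defaultdict

def waffle_reversals(prefs):
    """Pairs (a, b) such that both a > b and b > a are held."""
    return {(a, b) for (a, b) in prefs if (b, a) in prefs}

def has_cycle(prefs):
    """True iff the strict-preference relation contains a cycle,
    i.e. a chain-type reversal (a failure of acyclicity)."""
    graph = defaultdict(list)
    for a, b in prefs:
        graph[a].append(b)

    def reachable(start, target, seen=None):
        seen = set() if seen is None else seen
        for nxt in graph[start]:
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reachable(nxt, target, seen):
                    return True
        return False

    return any(reachable(a, a) for a in list(graph))

chain = {("a", "b"), ("b", "c"), ("c", "a")}   # a > b, b > c, c > a
waffle = {("a", "b"), ("b", "a")}              # a > b and b > a

print(has_cycle(chain))                   # True
print(sorted(waffle_reversals(waffle)))   # [('a', 'b'), ('b', 'a')]
```

Diachronic reversals would instead compare two such sets held at different times, which is why they are harder to detect and, as the text notes, more common.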
Preference reversals were first systematically studied by Daniel Kahneman, Sarah Lichtenstein, Paul Slovic, and Amos Tversky as part of the heuristics and biases research program. In study after study, they and others showed that people’s cardinal preferences could be reversed by strategically framing the choice situation. When faced with a high-risk / high-reward gamble and a low-risk / low-reward gamble, most people choose the former but assign a higher monetary value to the latter. These investigations focused on choices between lotteries or gambles rather than choices between outcomes because the researchers were attempting to engage with theories of rational choice and strategic interaction, which – in order to generate representation theorems – employ preferences over probability-weighted outcomes. While this research is fascinating, its complexity makes it hard to interpret confidently. In particular, whenever the interpreter encounters a phenomenon like this, it’s always possible to say that the problem lies not in people’s preferences but in their credences or subjective probabilities. Since evaluating a gamble always involves weighting an outcome by its probability, one can never be sure whether anomalies are attributable to the value attached to the outcome or the process of weighting. And since we have independent reason to think that people’s ability to think clearly about probability is limited and unreliable (Alfano 2013), it’s tempting to hope that preferences can be insulated from this line of critique.
For this reason, I will focus on more recent research on preference reversals in the context of choices between outcomes rather than choices between lotteries (or, if you like, degenerate lotteries with probabilities of only 0 and 1). A choice of outcome a over outcome b can only reveal someone’s ordinal preferences; it can only tell us that she prefers a to b, not by how much she prefers a to b. This limitation is worth the price, however, because looking at choices between outcomes lets us rule out the possibility that any preference reversal might be attributable to the agent’s credences rather than her preferences.
Some of the most striking investigations of preference reversals in this paradigm have been conducted by Dan Ariely and his colleagues. For instance, Ariely, Loewenstein, and Prelec (2006) used an arbitrary anchoring paradigm to show that preferences ranging over baskets of goods and money are susceptible to diachronic waffle-type reversals. In this paradigm, a participant first writes down the final two digits of her social security number (henceforth SSN-truncation), then puts a ‘$’ in front of it. Next, the experimenters showcase some consumer goods, such as chocolate, books, wine, and computer peripherals. The participant is instructed to record whether, hypothetically speaking, she would pay her SSN-truncation for the goods. Finally, the goods are auctioned off for real money. The surprising result is that participants with high SSN-truncations bid 57% to 107% more than those with low SSN-truncations.
To better understand this phenomenon, consider a fictional participant whose SSN-truncation was 89. She ended up bidding $50 for the goods, so, at the moment of bidding, she preferred the goods to the money; otherwise, she would have entered a lower bid. However, one natural interpretation of the experiment is that, prior to the anchoring intervention, she would or at least might have chosen that amount of money over the goods (i.e., she would have bid lower); in other words, prior to the anchoring intervention, she preferred the money to the goods. Anchoring on her high SSN-truncation induced a diachronic waffle-type reversal in her preferences. Prior to the intervention, she preferred the money to the goods, but after, she preferred the goods to the money. This way of explaining the experiment entails that her preferences were unstable: they changed in response to the seemingly trivial and normatively irrelevant framing of the choice.
Another way to explain the same result is to say that, prior to the anchoring intervention, there was no fact of the matter whether she preferred the goods to the money or the money to the goods. In other words, it was false that, given a choice, she would have chosen the goods, but it was equally false that, given a choice, she would have chosen the money or been willing to accept a coin flip. Only in the face of the choice in all its messy situational details did she construct a preference ordering, and the process of construction was modulated by her anchoring on her SSN-truncation. This alternative explanation entails that her preferences were indeterminate.
Furthermore, these potential explanations are mutually compatible. It could be, for instance, that her preferences were partially indeterminate, and that they became determinate in the face of the choice situation. Perhaps she definitely did not prefer the money to the goods prior to the anchoring intervention, but there was no fact of the matter regarding whether she was indifferent or preferred the goods to the money. Then, in the face of the hypothetical choice, this local indeterminacy was resolved in favor of preference rather than indifference. Finally, her newly-crystallized preference was expressed when she entered her bid.
Such a robust effect calls for explanation. My own suspicion is that a hybrid of indeterminacy and instability is the right theory of what happens in these cases, but it’s difficult to find evidence that points one way or the other. In any event, for present purposes, I’m satisfied with the inclusive disjunction of indeterminacy and instability.
3.2 Choice Blindness
There are many other – often amusing and sometimes depressing – studies of preference reversals, but the gist of them should be clear, so I’d like to turn now to the phenomenon of choice blindness, a field of research pioneered in the last decade by Petter Johansson and his colleagues. As I mentioned above, preferences are dispositions to choose. You prefer a to b only if, were you given the choice between them, then ceteris paribus you would choose a. Preferences are also dispositions to make characteristic assertions and offer characteristic reasons. While it’s certainly possible for someone to prefer a to b but not to say so when asked, the linguistic disposition is closely connected to the preference. Someone might be embarrassed by her preferences. She might worry that her interlocutor could use them against her in a bargaining context. She could be self-deceived about her own preferences. In such cases, we wouldn’t necessarily expect her to say what she wants, or to give reasons that support her actual preferences. But in the case of garden-variety preferences, it’s natural to assume that when someone says she prefers a to b, she really does, and it’s natural to assume that when someone gives reasons that support choosing a over b, she herself prefers a to b. Research on choice blindness challenges these assumptions.
Imagine that someone shows you two pictures, each a snapshot of a woman’s face. He asks you to say which you prefer on the basis of attractiveness. You point to the face on the left. He then asks you to explain why, displaying the chosen photograph a second time. Would you notice that the faces had been surreptitiously switched, so that the face you hadn’t pointed at is now the one you’re being asked about? Or would you give a reason for choosing the face that you’d initially dispreferred? Johansson et al. (2005) found that participants detected the ruse in fewer than 20% of trials. Moreover, when asked for reasons, many of the participants who had not detected the manipulation gave reasons that were inconsistent with their original choice. For instance, some said that they preferred blondes even though they had originally chosen a brunette.
This original study of choice blindness has been supplemented with experiments in other domains. For instance, Hall et al. (2010) found that people exhibited choice blindness in more than two thirds of all trials when the choice was between two kinds of jam or two kinds of tea. After tasting both, participants indicated which of the two they preferred, then were asked to explain their choice while sampling their preferred option "again." Even when the phenomenological contrast between the items was especially large (cinnamon apple versus grapefruit for jam, Pernod versus mango for tea), fewer than half the participants detected the switch.
Choice blindness in the domain of aesthetic evaluations of faces and comestibles might not seem weighty enough to support the argument that preferences are often indeterminate and unstable. But perhaps choice blindness in the domain of political preferences and moral judgments would be. Johansson, Hall, and Chater (2011) used the choice blindness paradigm to flip Swedish participants' political preferences across the conservative-socialist gap. Participants filled in a series of scales on their political preferences for policies such as taxes on fuel. Some of these scales were then surreptitiously reversed, so that, for example, a very conservative answer was now a very socialist answer. Participants were then asked to indicate whether they wanted to change any of their choices, and to give reasons for their positions. Fewer than 20% of the reversals were detected, and only one in ten participants detected enough reversals to keep their aggregate position from switching from conservative to socialist (or conversely). In a similar study, Hall, Johansson, and Strandberg (2012) used a self-transforming survey to flip participants' moral judgments on both socially contentious issues, such as the permissibility of prostitution, and broad normative principles, such as the permissibility of large-scale government surveillance and illegal immigration. For instance, an answer indicating that prostitution was sometimes morally permissible would be flipped to say that prostitution was never morally permissible, and an answer indicating that illegal immigration was morally permissible would be flipped to say that illegal immigration was morally impermissible. Detection rates for individual questions ranged between 33% and 50%. Almost 7 out of every 10 participants failed to detect at least one reversal.
As with the behavioral evidence for preference reversals, the evidence for choice blindness suggests that people’s preferences are unstable, indeterminate, or both. The choices people make can fairly easily be made to diverge from the reasons they give. If preferring a to b is a disposition both to choose a over b and to offer reasons that support the choice of a over b (or at least not to offer reasons that support the choice of b over a), then it would appear that many people lack preferences, or that their preferences do exist but are extremely labile. Not only is there sometimes no fact of the matter about what we prefer, but also our preferences are often seemingly constructed on the fly in choice situations, and their ordering is shaped by seemingly trivial and normatively irrelevant factors.
3.3 A descriptive preference model
While it is of course possible to dispute the ecological validity of these experiments or my interpretation of them, I want to proceed by considering some of the philosophical implications of that interpretation, assuming for the sake of argument that it is sound. I’ve already explored some of the implications of this perspective in Alfano (2012), where I argue that the indeterminacy and instability of preferences infirm our ability to explain and predict behavior. Predictions of behavior often refer to the preferences of the target agent. If you know that Karen prefers vanilla ice cream to chocolate, then you can predict that, ceteris paribus, when offered a choice between them she will go with vanilla. Likewise for explanations: you can base an explanation of Karen’s choice of vanilla on the fact that she prefers vanilla. But if there’s no fact of the matter about what Karen prefers, you cannot so easily predict what she will do, nor can you so easily explain why she did what she did. A related problem arises when considering instability. If Karen prefers vanilla to chocolate now, but her preference is unstable, then the prediction that she will choose vanilla in the future – even the near future – is on shaky ground. For all you know, by the time the choice is presented, her preferences will have reversed. Similarly for explanation: if Karen’s preferences are unstable, you might be able to say that she chose vanilla because she preferred it at that very moment, but you gain little purchase on her longitudinal preferences from such an attribution.
I’ve responded to these problems by proposing a model in which preferences are interval-valued rather than point-valued. A traditional valuation function v maps from outcomes to points. The binary preference relation is then defined in terms of these points: a is strictly preferred to b just in case v(a) > v(b), b is strictly preferred to a just in case v(a) < v(b), and the agent is indifferent as between a and b just in case v(a) = v(b).
Figure 2: a preferred to b because 1 = v(a) > 0 = v(b)
In the looser model I propose, by contrast, the valuation function maps from outcomes to closed intervals, such that a is strictly preferred to b just in case min(v(a)) > max(v(b)) and the agent is indifferent as between a and b just in case there is some overlap in the intervals assigned to a and b.
Figure 3: indifference because neither min(v(a)) > max(v(b)) nor max(v(a)) < min(v(b))
Though this model preserves the transitivity of strict preference, it does not preserve the transitivity of indifference. This, however, may be a feature rather than a bug, since ordinary preferences as exhibited in choice behavior themselves seem not to preserve the transitivity of indifference.
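The interval-valued model can be sketched in a few lines of code. The interval endpoints below are invented for illustration, not drawn from any experiment; they are chosen to exhibit the failure of transitive indifference just described:

```python
def strictly_prefers(a, b):
    """a is strictly preferred to b just in case the whole of a's
    interval lies above the whole of b's interval:
    min(v(a)) > max(v(b))."""
    lo_a, hi_a = a
    lo_b, hi_b = b
    return lo_a > hi_b

def indifferent(a, b):
    """Indifference just in case the intervals overlap, i.e.,
    neither outcome is strictly preferred to the other."""
    return not strictly_prefers(a, b) and not strictly_prefers(b, a)

# Illustrative interval valuations for three outcomes
v_a = (0.0, 2.0)
v_b = (1.0, 3.0)
v_c = (2.5, 4.0)

# Indifference is not transitive: a ~ b and b ~ c, yet c > a
print(indifferent(v_a, v_b))       # True (intervals overlap)
print(indifferent(v_b, v_c))       # True (intervals overlap)
print(strictly_prefers(v_c, v_a))  # True (2.5 > 2.0)
```

Strict preference remains transitive on this definition, since if min(v(a)) > max(v(b)) and min(v(b)) > max(v(c)), then min(v(a)) > max(v(c)); only indifference, defined by overlap, fails to chain.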
4 Philosophical implications of the indeterminacy and instability of preferences
In this section, I consider some possible philosophical implications of the indeterminacy and instability of preferences, drawing on the descriptive model outlined in the previous section. Moving from the descriptive to the normative domain is always fraught, but, as I argued in the introduction, the two need to be explored in tandem, with mutual theoretical adjustments made on each side. Moral psychology without normative structure is a baggy monster. Normative theory without empirical support is a castle in the sky.
4.1 Implications for patiency
The primary worry raised for the theory of personal well-being by the indeterminacy and instability of preferences is that, if the extent to which your life is going well depends on or is a function of the extent to which you’re getting what you want, then well-being inherits the indeterminacy and instability of preferences. In other words, there might be no fact of the matter concerning how good a life you’re living at this very moment, and if there is such a fact, it might fluctuate from moment to moment in response to seemingly trivial and normatively irrelevant situational factors.
By way of example, consider someone who is eating toast with apple cinnamon jam. Is his life as good as it would be if he were eating toast with grapefruit jam? If he is like the people in the choice blindness studies mentioned above, there might be no answer to this question. If he’s told that he prefers apple cinnamon, he will prefer the present state of affairs, but if he is told that he prefers grapefruit, he’ll be less pleased with the present state of affairs than he would be with the world in which he is eating grapefruit jam. Whether his life is better in the apple cinnamon jam-world or the grapefruit jam-world is indeterminate until his preferences crystallize in one ordering or the other.
Or consider someone who has a brand new hardbound copy of Moby Dick, for which she just paid $50 when it was marked down from $70. Is her life going better now that she has the book, or was it going better before, when she had the money? If she is like the participants in Ariely’s preference reversal study, the answer may be “yes” to both disjuncts. Before she bought the book, she preferred the money to the book. But then she anchored on the manufacturer’s suggested retail price of $70, raised her valuation of the book, and ended up preferring it to $50. Her unstable preferences mean that she was better off with the money than the book, and that she is better off with the book than the money. It’s not a contradiction, but it makes her well-being a pain in the neck to evaluate.
Fortunately, though, there is a ready response to this worry, which begins by pointing out that the indeterminacy and instability of preferences is not radical but modest, a feature captured by the descriptive model sketched above. Although there may be no fact of the matter whether the life of the consumer of cinnamon apple jam is better than the life of the consumer of grapefruit jam, there is a fact of the matter whether either of these lives is better than that of someone who, instead of eating jam, is enduring irritable bowel syndrome. Although preference orderings may fluctuate between owning a book and having $50, they do not fluctuate between owning the same book and having $50,000. These observations are consistent with the interval-valued preferences of the descriptive model outlined in the previous section. In the first example, the intervals for cinnamon apple jam and for grapefruit jam overlap with each other, but neither overlaps with the interval for irritable bowel syndrome. Thus, we can still make a whole host of judgments about the quality of various possible lives, even if, when we “zoom in,” such judgments cannot always be made. In the second example, the intervals for having $50 and having the book overlap with each other, but neither overlaps with the interval for having $50,000.
For the price of this local indeterminacy and instability, the theoretician of well-being can purchase an answer to an objection to the preference-satisfaction theory of well-being. The objection goes like this: when assessing whether it would be better to have the life of a successful lawyer or a successful artist, it seems trivial or even perverse to ask whether the artist’s life would involve slightly more ice cream, even if the agent considering what to do with her life likes ice cream. Slight preferences shouldn’t bear normative weight in this context.
However, if we assume, as seems reasonable in light of the evidence, that her preference for a little more ice cream is weak enough that it could be shifted by preference reversal or choice blindness, then its normative irrelevance is unmasked. The life of the ice cream-deprived artist and the life of the ice cream-enjoying artist are assigned nearly identical intervals on the scale of preference – intervals that differ less from each other than from that assigned to the life of the lawyer. Hence, if we are willing to put up with a little indeterminacy and instability, we can avoid more serious objections to the theory of personal well-being.
4.2 Implications for sociality
The main worry raised by the indeterminacy and instability of preferences in the context of sociality is that, if right action depends on preference-satisfaction (perhaps among other things), then it inherits the indeterminacy and instability of preferences. It might turn out that there’s just no fact of the matter what it would be right to do, or that that fact is in constant flux. This worry is perhaps most pressing for preference-utilitarians, such as Brandt and Singer (1993), but it casts a long shadow. Even if you don’t think that right action is a function of preferences and only preferences, it’s hard to deny that preferences matter at all. For instance, as I pointed out above, virtue ethicists typically countenance benevolence as an important virtue. If, as I argued in the previous section, well-being is affected by the indeterminacy and instability of preferences, then benevolence is too. And even if one thinks that benevolence is not a virtue, virtually any tolerable theory of right action is going to say that maleficence is a vice and that there is a duty – whether perfect or imperfect – of non-maleficence.
In the remainder of this section, I will concentrate on the normative implications of indeterminacy and instability for preference-utilitarianism, but it should be clear that these are just some of the more straightforward implications, and that others could be traced out for related views as well.
Before considering some responses I find attractive, I should point out that the problem we face here is not the one that is solved by distinguishing between a decision procedure and a standard of value. An objection to utilitarianism that was lodged early and often is that it's either impossible or at least extremely computationally complex to know what would satisfy the most preferences. This knowledge could only be acquired by eliciting the preference orderings of every living person – or perhaps even every past, present, and future person. The correct response to this objection is that utilitarianism is meant to be a standard of value, not a decision procedure. It identifies (if it is the correct theory of right action) what it would be right to do, but that doesn't mean that we can use it to find out what it would be right to do every time we make a moral decision. The distinction is meant to parallel other general theories: Newtonian mechanics would have identified, if it had been the correct physical theory, what a projectile will do in any circumstances whatsoever, even if people were unable to apply the theory in a given instance.
This response is unavailable in the present context. There are two ways in which it might be impossible to know what would satisfy someone's preferences: epistemic and metaphysical. You would be unable to know what someone wants if there were a fact of the matter about what that person wants but you couldn't find out what that fact is. This would be a merely epistemic problem, and the distinction between a decision procedure and a standard of value handles it nicely. But you would also be unable to know what someone wants if there simply were no fact of the matter concerning what that person wants. If I am right that preferences are indeterminate, then this is the problem we now face, and it does no good to have recourse to the distinction between a decision procedure and a standard of value.
Preference-utilitarianism is not without resources, however. As in the case of well-being, one attractive response is to point out that preferences are only modestly indeterminate and unstable. Although there may be no uniquely most-preferred outcome for a given individual (or indeed for any individual), there will be many genuinely dispreferred outcomes and, one hopes, a manageably constrained subset of maximal outcomes – outcomes than which nothing is determinately and stably better. Among these there is no unique best outcome, but each is undominated.
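On the interval-valued model, this response can be made concrete: the normatively relevant set is the set of maximal outcomes, those than which nothing is determinately better. A minimal sketch, with outcome names and intervals invented purely for illustration:

```python
def maximal_set(valuations):
    """Return the outcomes x such that no outcome y is strictly
    preferred to x, i.e., no y with min(v(y)) > max(v(x))."""
    return [x for x, (lo_x, hi_x) in valuations.items()
            if not any(lo_y > hi_x
                       for y, (lo_y, hi_y) in valuations.items()
                       if y != x)]

# Hypothetical interval valuations for four outcomes
valuations = {
    "career_a": (8.5, 10.0),
    "career_b": (9.0, 10.5),
    "book":     (4.0, 6.0),
    "nothing":  (1.0, 2.0),
}

# Both careers survive: neither determinately beats the other,
# but each determinately beats the remaining options.
print(maximal_set(valuations))  # ['career_a', 'career_b']
```

The output illustrates the point in the text: the dominated outcomes are genuinely ruled out, while the two maximal outcomes remain, with no unique best among them.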
Furthermore, from among this subset of alternatives it might be possible to winnow out those that satisfy preferences which we have independent normative grounds to reject – preferences that are silly, ignorant, perverse, or malevolent. As I pointed out above, it’s commonly argued in the context of right action that brute preferences carry less weight than fully-informed preferences. According to those who argue in this way, whether it’s right to do something depends less on whether it would satisfy people’s actual preferences than on whether it would satisfy their fully-informed preferences. It might be hoped that idealizing preferences would cut down or even eliminate their indeterminacy and instability.
Here’s what that might look like. Suppose that Jake’s actual preferences are captured by my interval-valued model. As such, they present two problems: they fail to uniquely determine how it would be right to treat Jake, and they may even rule out the genuinely right way to treat him because his actual preferences are normatively objectionable. It might be possible to kill these two birds with the single stone of idealization if idealization leads to unique, point-valued preferences that are no longer normatively objectionable. Perhaps there is only one way that Jake’s preferences could turn out after he undergoes cognitive psychotherapy. This is a big ‘perhaps,’ but it is worth considering. What evidence we have, however, suggests that idealizing in this way would not lead to determinate, stable preferences. When Kahneman, Lichtenstein, Slovic, and Tversky began to investigate preference reversals, many economists saw the phenomenon as a threat, since it challenged some of the most fundamental assumptions of their field. Accordingly, they tried to show that preference reversals could be removed root and branch if participants were given sufficient information about the choices they were making. Years of attempts to eliminate the effect proved fruitless.
The burden is then on the idealizer to say what information participants lack in the relevant experiments. What does someone who bids high on a bottle of wine after considering her SSN-truncation not know, or not know fully enough? Perhaps she should be allowed first to drink some of the wine. While Ariely et al. (2006) did not investigate whether this would eliminate the anchoring on SSN-truncation, they did conduct other experiments in which participants sampled their options and thus had the relevant information. In one, participants first listened to an annoying sound over headphones, then bid for the right not to listen to the sound again. As in the consumer goods experiment, before bidding, participants first considered whether they would pay their SSN-truncation in cents to avoid listening to the sound again. And as expected, those with higher SSN-truncations entered higher bids, while those with lower SSN-truncations entered lower bids. It’s unclear what further information they could have acquired to inform their preferences. It seems more plausible that they had too much information, not too little. If they hadn’t first considered whether to bid their SSN-truncation, they would not have anchored on it and would therefore have had “uncontaminated” preferences. But cognitive psychotherapy says to take into account “everything that might make [one] change [one’s] desires” (Brandt 1983, p. 40). Anchoring changed their desires, so it counts as part of cognitive psychotherapy. Perhaps the process can be revised by saying that one should take into account everything that might correctly or relevantly change one’s desires, but then the problem is to come up with an account of what makes an influence on one’s desires correct or relevant that doesn’t involve either a vicious regress or a vicious circle. No one has managed to do this, perhaps because it can’t be done.
Another response, which I find more attractive, is to embrace rather than reject the indeterminacy and instability of preferences. There are several ways to do this. One is to figure out which preferences are wildly indeterminate or unstable and disqualify their normative standing completely. Just as it makes sense to ignore the Rum Tum Tugger’s begging to be let inside because you know he’ll just beg to get back out again, perhaps it makes sense to hive off Jake’s indeterminate and unstable preferences, leaving a kernel of normatively respectable ones behind. Only these would matter when considering what it would be right to do by Jake, or what would promote his well-being.
A second way to embrace indeterminacy and instability is to make a less heroic assumption about the effect of cognitive psychotherapy. Instead of taking it for granted that this process is bound to converge on unique, point-valued preferences, perhaps it will merely shrink the width of Jake’s interval-valued preferences. In that case, even after idealization, there would be no unique characterization of what it would be right to do by Jake or what would most promote his well-being. As I’ve argued in the context of prediction and explanation (Alfano 2012), however, this might be a feature rather than a bug. Suppose that idealization yields a preference ordering that rules out most actions as wrong and condemns many outcomes as detrimental to Jake’s well-being, but does not adjudicate among many others. The remaining actions would then all be considered morally right in the weak sense of being permissible but not obligatory, and the remaining outcomes would all be vindicated as conducive to well-being. This strategy might help to solve the so-called demandingness problem by expanding what James Fishkin calls “the zone of indifference or permissibly free personal choice” (1982, p. 23; see also 1986). Thus, while it is possible to try to resist the evidence for indeterminacy and instability, or to acknowledge the evidence while denying its normative import, it may be better instead to embrace these features of preferences and use them to respond to existing problems.
5 Future directions in the moral psychology of preferences
Because preferences are involved in multiple ways in patiency, agency, sociality, temporality, and reflexivity, there are many avenues for further research. In this closing section, I list just a few of them.
First, further conceptual work by philosophers and theoretically-minded psychologists and behavioral economists may reveal or clarify relevant distinctions, such as a contemporary version of Mill’s distinction between higher and lower pleasures. Perhaps a useful distinction can be made between satisfaction of higher and lower preferences. According to Mill, one pleasure is higher than another if an expert who was acquainted with both would choose any amount of the former over any amount of the latter. This maps fairly directly onto the idea of lexicographic preferences: one good or value is lexicographically preferred to another if (and only if) any amount of the former would be chosen over any amount of the latter. Such values would be in principle immune to preference reversals. Jeremy Ginges and Scott Atran (2013) have found that when a value is “sacralized,” it becomes lexicographically preferred in this way. Moral values seem to be the only values that are capable of becoming sacred. However, tradeoffs have only been studied in one direction (giving up a sacred value to gain a secular value).
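Lexicographic preference over bundles can be sketched directly: a bundle is compared first on the sacralized dimension, and the secular dimension matters only as a tiebreaker. The dimensions and quantities here are invented for illustration:

```python
def lex_prefers(bundle_a, bundle_b):
    """bundle = (sacred_amount, secular_amount); the sacred
    dimension lexicographically dominates the secular one."""
    sacred_a, secular_a = bundle_a
    sacred_b, secular_b = bundle_b
    if sacred_a != sacred_b:
        return sacred_a > sacred_b
    # Only when the sacred amounts tie does the secular good decide
    return secular_a > secular_b

# No finite amount of the secular good compensates for any loss
# on the sacred dimension:
print(lex_prefers((1, 0), (0, 10**9)))  # True
```

Because the secular dimension never outweighs the sacred one, no anchoring manipulation confined to the secular good could flip the ordering, which is the sense in which sacralized values are in principle immune to preference reversals.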
Second, further empirical research would help to determine whether the hiving off strategy succeeds. Is there some identifiable class of preferences that are especially susceptible to reversals and choice blindness? We currently lack sufficient evidence to say. It seems that effects may be stronger in business and gambling domains, weaker in social and health domains (Kühberger 1998), but these distinctions are neither mutually exclusive nor exhaustive. This is yet another area in which collaboration between philosophers, who are specially trained in making this sort of distinction, and psychologists would be useful.
Third, to what extent do preference reversals and choice blindness disappear when people are informed about them? Are psychologists who know all about these effects less susceptible to them? More susceptible? The same as other people?
Fourth, are there some people who are congenitally more susceptible to preference reversals and choice blindness than others? There is very little research on this, though one study suggests that roughly a quarter of the population is highly susceptible and another quarter is immune (Bostic, Herrnstein, & Duncan 1990). Perhaps the preferences of people who are clear on what they want deserve more normative weight than the preferences of people who don’t know what they want. Perhaps the second group would benefit not so much from getting what they (think) want (for the moment) but from having their preferences shaped in more or less subtle ways.
Finally, on a related note, perhaps public policy should sometimes aim not so much to satisfy existing preferences, but to shape people’s preferences in such a way that they are (more easily) satisfiable. The idea here is to take advantage of the instability of preferences, cultivating them in such a way that the people who have them will be most able to satisfy their own wants. If you’re not getting what you want, either change what you’re getting, or change what you want. Of course, this proposal may seem objectionably paternalistic, but I tend to agree with Richard Thaler and Cass Sunstein (2008) in thinking that in some cases such policies may be permissible. In fact, it’s a striking asymmetry that almost no one objects to the shaping of beliefs, provided they are made to accord with (what we take to be) the truth, whereas it’s hard to find someone who doesn’t object to the shaping of desires and preferences. However, I would argue that the choice we often face is not whether to mould preferences but how. Given how easily preferences are influenced, it’s highly likely that they are constantly being socially shaped without our realizing it. If this is right, existing policies already shape preferences; we just don’t know how. The choice is therefore between inadvertently influencing preferences and doing so strategically. I tend to think that society has not just a right but an obligation to help people develop appropriate preferences – a point with which feminists such as Serene Khader (2011) concur. The worry that such interventions might be objectionably paternalistic can be assuaged somewhat by insisting, as Khader does, that the very people whose preferences are the targets of policy intervention participate in designing the interventions.
 Preferences are causally influenced by values, but values on their own don’t do all the work (Homer & Kahle 1988).
 A version of this idea was first formulated by Sidgwick (1981). Rosati (1995) argues persuasively that mere information without imaginative awareness and engagement with that information is not enough.
 See Lichtenstein & Slovic (1971); Slovic (1995); Slovic & Lichtenstein (1968, 1983); Tversky & Kahneman (1981); Tversky, Slovic, & Kahneman (1990).
 See also Ariely & Norton (2008), Green et al. (1998), Hoeffler & Ariely (1999), Hoeffler et al. (2006), Johnson and Schkade (1989), and Lichtenstein and Slovic (1971).
 A social security number is a kind of national identification code: it associates each citizen of the United States with a unique, quasi-random number.
 In the United States, this would be equivalent to flipping preferences across the conservative-liberal gap; in the United Kingdom, it would be equivalent to flipping preferences across the conservative-labor gap.
 Bentham (1789/1961, p. 31), Mill (1861/1998, 26), and Sidgwick (1907, p. 413) all deal with the objection in this way.
 See Berg, Dickhaut, & O’Brien (1985); Pommerehne, Schneider, & Zweifel (1982); and Reilly (1982).
Interview with Paul Peppis of the Oregon Humanities Center. Apparently I blink a lot.
I’m writing a textbook on moral psychology for Polity. Some of the material was piloted in an undergraduate honors seminar I taught this winter. Much of it is new material (though related to my other work and drawing as carefully as I can on others’). I’m going to be putting draft chapters up on this blog. I’d be extremely grateful for comments, suggestions, questions, and criticisms.
Here’s a tentative table of contents:
6. Moral disagreement
Coda: The future of moral psychology
This post is a draft of the intro.
1 Setting the stage
Moral psychology is the systematic inquiry into how morality works (when it does work) and breaks down (when it doesn’t work). The field therefore incorporates questions, insights, models, and methods from various parts of psychology (personality psychology, social psychology, cognitive psychology, developmental psychology, evolutionary psychology), sociology, anthropology, criminology, and of course philosophy (applied ethics, normative ethics, metaethics). These fields are – or at least can be – mutually informative. Indeed, one guiding theme of this book is that moral philosophy without psychological content is empty, whereas psychological investigation without philosophical insight is blind. Given their characteristically synoptic perspective, philosophers are ideally situated to organize and moderate a productive conversation among these sciences. Nevertheless, there is always the risk that investigators with different training and expertise may misinterpret, misconstrue, or misunderstand one another. In this book, I attempt to put the relevant disciplines in dialogue. They sometimes speak with different accents, jargons, vocabularies, even grammars. My aim is to make their conversation intelligible to the reader, even if they cannot all be brought to speak exactly the same language in the same way.
Systematic inquiry depends on systematic questions. Science is not just a collection of facts. It’s not even a collection of facts about the same thing or class of things. Imagine how stupid it would be to conduct moral psychology by assembling all and only the motives that every person has ever had while responding to a moral problem (assuming this to be possible in the first place). This would be an utterly disorganized, uninformative, overwhelming mess. In the annals of the illustrious British Royal Society, you find descriptions of “experiments” like this: “A circle was made with powder of unicorne’s horn, and a spider set in the middle of it, but it immediately ran out severall times repeated. The spider once made some stay upon the powder” (Weld 1848, p. 113). This would be a caricature of bad science if it hadn’t happened. We might call this empiricism run amok. Science doesn’t just ask what happens, as if this were a question that, when completely answered, would satisfy human inquirers. Science asks questions systematically. It asks, for instance, what the effect of X on Y is. It asks whether that effect is mediated by M. It asks whether the effect is moderated by Z. It attempts to determine which small set of variables, organized in what configuration, accounts for the variability observed and experimentally induced in the field of inquiry.
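The distinction between mediation and moderation can be made concrete with a toy, noise-free sketch. This is my own illustration, not part of the text; the variable names and coefficients are arbitrary assumptions chosen only to display the two patterns:

```python
# Toy sketch of two systematic questions: "Is the effect of X on Y
# mediated by M?" versus "Is it moderated by Z?"
# All coefficients here are illustrative assumptions.

def mediated_y(x):
    """X affects Y only *through* the mediator M."""
    m = 2 * x     # X changes M...
    return 3 * m  # ...and M, not X directly, changes Y (overall slope 6)

def moderated_y(x, z):
    """Z does not transmit X's effect; it changes its STRENGTH."""
    return (1 + 0.5 * z) * x  # the effect of X on Y is (1 + 0.5*Z)
```

In the first function, X's entire effect on Y is carried by M; in the second, the size of X's effect depends on the level of Z, which is exactly the difference the two questions are probing.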
In this endeavor, science is guided by insightful identification of relevant variables, careful distinction between similar phenomena, creative elaboration of alternative models, and skeptically imaginative construction of potential counterexamples. As the economist Paul Krugman put it recently on his blog, you can’t just let “the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking. If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming).” One way to help make theorizing explicit rather than implicit is asking systematic questions.
Unfortunately, in universities and in the contemporary education system more broadly (especially, to my chagrin, in the United States) we typically spend far too much time answering (and learning to answer) questions and far too little time asking (and learning to ask) questions. So, in this introduction, I’ll try to show how questions are asked, how they become more nuanced and complicated, and how conditions of adequacy for answers are (tentatively) established.
Here’s a moral question I’ve asked myself:
What should I do to him for her?
Picture this: I’m headed to work on a downtown subway car at 8:30 AM. Two seats to my right, a 20-something woman is intently reading a magazine, obviously somewhat tense because a man is standing over her, leaning in a bit too close, leering slightly, and alternating between asking her name and telling her to smile. She’s presumably on her way to work and obviously uninterested in his conversation. She rolls her eyes and sighs. He seems obnoxious but mostly harmless. She casts about from time to time. Is she looking for help? for someone to share a moment of derisive eye contact with? for reassurance that, if her unwelcome interlocutor escalates to insulting or assaulting her, fellow passengers will not remain apathetic bystanders?
2 Patiency
What should I do to him for her? This question presupposes an immense amount.
First, it presupposes patiency – that is, the fact that things happen to people. My fellow commuter can be made uncomfortable. She can feel threatened. She can be threatened. She can be assaulted. Things – some of them good and some of them quite bad – can happen to her. Some of them might be done by that jerk who keeps insinuating himself on her attention. The fact that good and bad things can happen to her – that she is, in technical terms, a patient – is presupposed by my question.
Things can also happen to him. He can be ignored and accommodated. He can be egged on. He can, alternatively, be confronted and challenged. He can be distracted or redirected. The fact that good, bad, and neutral things can happen to him – that he too is a patient – is also presupposed by my question.
Finally, things can happen to me. One reason I might do nothing is that I’m afraid of what might happen to me if I confront or even merely accost him. Probably nothing – but I’m useless in a fight, and strangers can be unpredictable. She might express gratitude to me for intervening. Alternatively, she might be annoyed that a second stranger has made her business his business. I aim to be helpful, which among other things includes stymieing creeps, but I also aim to avoid trampling through strangers’ lives uninvited. As I decide what to do, her patiency, his patiency, and my patiency are all quite salient.
Things happen to people. When they do, we have an example of patiency. In other words, when something happens to someone, she is the patient of (is passive with respect to) that event or action. Moral psychology asks what it is about us that makes us patients, and how our patiency figures in our own and other people’s moral perception, behavior, decision-making, emotions, characters, and institutions. Several chapters of this book are directly related to patiency. For instance, in chapter 1 on preferences, we will see that some philosophers argue that your life goes well to the extent that your preferences are satisfied. In other words, your life is better when you get what you want than when you don’t get it. If you, like most people, want to be healthy, but you end up contracting influenza, your life goes worse. Something happens to you that contravenes your preferences. On the flipside, if you, like most people, prefer temperate weather to frigid cold, and the weather where you are is temperate, then your life goes better. Something happens to you that satisfies your preferences. In chapter 4, on virtue, we will see that benevolence is typically considered a virtue. What makes someone benevolent? Wishing others well, and at least sometimes acting successfully on those wishes. If a benevolent person helps you in some way, you are the patient of her action. An extreme version of benevolence – altruism – will be discussed in chapter 7. An altruist doesn’t just wish others well and do things for their sake; she does so at significant cost to herself. Finally, in chapter 8, we will consider moral development. None of us grows up in a social vacuum. We are all raised by someone, such as a parent, grandparent, aunt, or uncle. We are all patients of the myriad interventions our caretakers make in our lives, which lead us to cultivate good (or bad, or mixed) character.
Thus, patiency is a crucial concept in moral psychology. When I ask what I should do to him for her, I’m asking what follows from her patiency, his patiency, and my own patiency. This is an example of how questions are asked: we start with something seemingly simple and comprehensible (“What should I do to him for her?”) and parse out some of the deeper questions and concepts it presupposes.
3 Agency
What should I do to him for her?
This question presupposes agency. Things don’t just happen to people: sometimes people do things.
Return to the example of the woman on the train. She might do something. She might stand up and walk to the next train car. She might lean back and hold her magazine up in front of her face, blocking the stranger’s attempt to make eye contact and muffling his voice. She might tell him off. She might scream. She might kick him in the shin.
Likewise, he might do something. He might continue to bug her until she escapes the train car. He might sit down next to her. He might call her a bitch. He might throw his hands in the air and walk away. He might switch to bothering someone else. He might grow bored and start playing with his smartphone.
I, too, might do something. (There’d be little point in asking myself what I should do if I couldn’t!) If my usual wariness of strangers holds up, I might cautiously eye the situation and hope impotently that nothing too bad happens. I might instead stride over and command him to stop bothering her. More helpfully, I might stroll over and ask her a nonchalant question that lets her redirect her attention without seeming to be too rude to him.
People do things. When they do, we have an example of agency. In other words, some person is the agent of (is active with respect to) some event or action. Moral psychology asks what it is about us that makes us agents, and how our agency figures in our own and other people’s moral perception, behavior, decision-making, emotions, character, and institutions.
Several chapters of this book are directly related to agency. Chapter 1 discusses how our preferences affect our choices, and hence our actions. It’s tempting to assume that our preferences are fairly stable, at least once we reach adulthood. Empirical research suggests otherwise. It’s even more tempting to assume that our preferences are transitive: if I prefer chocolate ice cream to vanilla and prefer vanilla to strawberry, then I’d better prefer chocolate to strawberry. Again, empirical research suggests that, at least in some cases, transitivity breaks down. To what extent can we be the authors of our own actions if our preferences are unstable and inconsistent? Chapter 2 is about the relation between deliberative agency on the one hand and implicit biases on the other hand. The vast majority of people in the developed world would, if asked, reject racist and sexist beliefs. But social psychologists have demonstrated that most of us nevertheless implicitly accept and even act on racist and sexist associations. When we do, are we really expressing our own agency? If we aren’t, what’s going on? Chapter 3 asks whether we are more or less agentic when we are motivated by emotions. Particularly intense emotions seem to come over us like a hurricane, swamping our planning, deliberation, and even our agency. But deficits in emotion have been shown to correlate with demonstrably bad decision-making. Perhaps the truth lies somewhere between the Kantian rejection of emotions on the one hand and the Humean embrace of them on the other. Chapter 4 connects agency with virtue, which for many theorists is a matter of acting in accordance with practical reason. Psychological research over the last several decades has demonstrated that the human capacity for slow, careful, deliberative reasoning is much more limited than most philosophers have presupposed. The vast majority of our decision-making relies on quick, unconscious, vaguely emotional mental shortcuts. Does this undermine our agency (as many suppose), or does it instead enable us to expand our agentic engagement with the world and each other?
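The transitivity claim can be checked mechanically. Here is a small sketch of my own (not from the text; the flavors simply follow the ice-cream example above) that tests whether a set of pairwise preferences is transitive:

```python
from itertools import permutations

def is_transitive(prefers):
    """True if, whenever a is preferred to b and b to c,
    a is also preferred to c. `prefers` is a set of (a, b)
    pairs meaning "a is preferred to b"."""
    items = {x for pair in prefers for x in pair}
    return all((a, c) in prefers
               for a, b, c in permutations(items, 3)
               if (a, b) in prefers and (b, c) in prefers)

# Tidy preferences: chocolate > vanilla > strawberry, plus the
# chocolate > strawberry pair that transitivity demands.
tidy = {("chocolate", "vanilla"), ("vanilla", "strawberry"),
        ("chocolate", "strawberry")}

# A cycle of the kind the empirical research suggests people exhibit.
cyclic = {("chocolate", "vanilla"), ("vanilla", "strawberry"),
          ("strawberry", "chocolate")}
```

Here `is_transitive(tidy)` holds, while `is_transitive(cyclic)` fails: the cycle means there is no flavor you could settle on without preferring something else to it.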
If people were incapable of agency, if they were entirely passive beings, the contours of whose lives were completely determined by outside forces, there wouldn’t be much for moral psychologists to think about. We could construct theories about what it meant for one person to have a better life than another, what it meant for one person to have as good a life as possible for such an impoverished creature, what it meant for such a life to improve or deteriorate. But that would be about it. The introduction of agency greatly complicates moral psychology. Now, things don’t just happen to us; we do things. Some of those things turn out as we want or intend them to. Others don’t. This imposes some constraints on what it means to act well, to be a successful agent. Sometimes we do what we want, but then we are disappointed by the result. This suggests that we need a better understanding of our own preferences, a topic of chapter 1. Sometimes we accomplish one goal but in so doing thwart our striving for a second goal. This suggests that we need to understand agency holistically, so that it involves progress towards a complete set of goals without too much self-undermining.
Thus, agency, like patiency, is a crucial concept in moral psychology, and it’s a concept that complicates the inquiry. When I ask what I should do to him for her, I’m asking what follows from her agency, his agency, and my own agency. This is a further example of how questions are asked: we start with something seemingly simple and comprehensible (“What should I do to him for her?”) and parse out some of the deeper questions and concepts it presupposes.
4 Sociality
What should I do to him for her?
This question presupposes sociality. Things happen to people: they get sick, they enjoy pleasant weather, they endure the many small indignities of youth and the even more numerous small indignities of aging. People do things: they stand up and walk away, they shrink into their seats, they write books. In many interesting cases, though, one person does something to someone else. Indeed, some of the examples I gave above had this flavor. The only reason I asked myself what I should do to him for her was that he was doing something to her in the first place: he was harassing her. As I deliberated about what to do, I considered the fact that there were things she might do to him, such as pointedly ignoring him, additional things he might do to her, such as insulting her, and various things I might do to him on her behalf, such as confronting him for harassing her. Moral psychology asks what it is about us that makes us social, and how our sociality figures in our own and other people’s moral perception, behavior, decision-making, emotions, character, and institutions.
|  | Y is a patient. | Y is not a patient. |
| --- | --- | --- |
| X is an agent. | X harasses Y. X kicks Y in the shin. X confronts Y. | X stands up. X shrinks into his seat. X writes a book. |
| X is not an agent. | Y gets sick. Y enjoys pleasant weather. Y grows old. |  |
Table 1: agency x patiency examples
As table 1 illustrates, people can be simple patients, to whom things just happen; they can be simple agents, who just do things; but they can also be complex agents and patients: they can do things to each other. In such cases, agency and patiency are inextricably intertwined. One person’s agency is the cause or even a constitutive part of another person’s patiency. One person’s patiency is the effect of another person’s agency. When asked, “What happened to you?” my fellow commuter would be giving an incomplete answer if she responded, “I was harassed.” Being harassed is not like enjoying pleasant weather; it’s not something that can happen to someone all on their own. A more complete answer would be, “I was harassed by a stranger.” Likewise, if someone later asked the creep, “What did you do on the train?” he would be giving an incomplete response if he answered, “I harassed.” Harassing isn’t like standing up; it’s not something someone can do all on their own.
We can represent these relations with the following schematic diagram.
Figure 1: agent-patient relation
In this diagram (and others of its sort that I’ll use below), a dot represents a person. An arrow proceeding away from a dot represents that person exercising agency. An arrow pointing at a dot represents that person enduring patiency (good, bad, or neutral). I’ll put a box around each such relation.
Figure 1 represents the simplest sort of sociality: one agent does something to another agent. A more complex form of sociality occurs when two people are agents and patients with respect to each other at the same time: you do something to me while I do something to you. For instance, we dance together, each making suggestions to the other through subtle bodily movement, gestures, glances, and words. Call this interactivity. Figure 2 represents interactive sociality of this sort.
Figure 2: interactive agent-patient relation
Things happen to people; people do things; sometimes, these are the same event. But sociality is often more complicated than that. Interactivity is one source of complexity, but a minor one. Another source of complexity is the possibility – indeed, the prevalence – of recursively embedded agent-patient relations. This might sound frighteningly technical, but don’t worry. Recursion is all over the place, and I’m certain that you’re already familiar with it, if only informally. Recursion is a process in which objects of a given type are generated by or defined in terms of other objects of the same type. For instance, think of your ancestors. What makes someone an ancestor of yours? The answer to this question relies on recursion: the parents of X are ancestors of X (that’s the non-recursive step) and ancestors of ancestors of X are ancestors of X (that’s the recursive step). Your grandparents are your ancestors because they’re the parents of your parents. Your great-grandparents are your ancestors because they’re the parents of the parents of your parents. Your great-great-grandparents are your ancestors because they’re the parents of the parents of the parents of your parents. The great-great-grandparents of your great-great-grandparents are your ancestors because they’re the ancestors of your ancestors. And so on.
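The recursive definition of “ancestor” can be sketched in a few lines of code. This is my own illustration; the family tree and names below are hypothetical placeholders:

```python
# Minimal sketch of the recursive definition of "ancestor" given above.
# `parents` maps each person to their known parents (hypothetical names).
parents = {
    "you": ["mom", "dad"],
    "mom": ["grandma", "grandpa"],
    "grandma": ["great-grandma"],
}

def ancestors(person):
    """Parents of X are ancestors of X (the non-recursive step);
    ancestors of ancestors of X are ancestors of X (the recursive step)."""
    result = set()
    for parent in parents.get(person, []):
        result.add(parent)           # non-recursive step
        result |= ancestors(parent)  # recursive step
    return result
```

Calling `ancestors("you")` walks up the chain: your parents, their parents, and so on, until no further parents are recorded.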
Social agent-patient relations can also be recursively embedded. The majority – probably the vast majority – of the complexity of moral psychology derives from such embedding. In fact, the example I started off with has a recursive structure. When I asked myself what I should do to him for her, I was thinking of myself as an agent who acts on a preexisting agent-patient relationship. After all, I would have had no reason to intervene if he hadn’t been harassing her in the first place.
Figure 3: recursively embedded agent-patient relations
Figure 3 illustrates the situation in which one person acts on a second person acting on a third person. Since this relation is recursive, it can be expanded yet another step (and another, and another…), as illustrated in figure 4.
Figure 4: doubly recursively embedded agent-patient relations
Although figure 4 might seem complicated, I think we can pretty easily conjure up a situation that it characterizes. For instance, imagine that I decide to stride over to the creep and tell him to cut it out. As I move towards him, my friend, who realizes what a foolhardy thing I’m about to do, grabs me by the wrist and whispers “no no NO!” My friend acts on me acting on him acting on her. This sort of thing happens, I suggest, all the time. And, as you can see, the more recursion there is, the more complicated the situation becomes.
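One way to see the recursive structure is as a nested data type in which the patient of an action can itself be another agent-patient relation. The sketch below is my own notation, not the author’s; the labels are placeholders for the people in the subway example:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Act:
    """An agent-patient relation whose patient may be a person or
    another, embedded Act -- the recursion in figures 3 and 4."""
    agent: str
    verb: str
    patient: Union[str, "Act"]

    def describe(self) -> str:
        # Bracket embedded relations to make the nesting visible.
        inner = (self.patient if isinstance(self.patient, str)
                 else "[" + self.patient.describe() + "]")
        return f"{self.agent} {self.verb} {inner}"

# Figure 4's doubly embedded case: my friend acts on me
# acting on him acting on her.
scene = Act("friend", "restrains",
            Act("me", "confronts",
                Act("creep", "harasses", "woman")))
```

Each additional layer of recursion is just one more `Act` wrapped around the last, which is why the embedding can in principle continue indefinitely.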
Moral psychology asks what it is about us that makes us social, and how our sociality figures in our own and other people’s moral perception, behavior, decision-making, emotions, characters, and institutions. Sociality is what makes moral psychology so complicated but also so interesting. In a way, it’s the underlying theme of every chapter of this book but it features most prominently in chapters 3, 4, 6, 7, and 8. In chapter 3 on emotion, we will see that emotions often function as signaling devices. When I display anger, I signal to you that I am prepared and committed to reacting aggressively to offenses. When you display disgust, you signal to me that the object of your disgust is contaminated and to-be-avoided. Emotional signaling fits well into the recursive embedding structure discussed here. When I display anger towards you, I also often signal to other people that they should be indignant over the offense you’ve caused me (a relationship like the one in figure 3). When you display contempt towards my behavior, you also often signal to other people that they should feel superior to me. Chapter 4 on virtue focuses primarily on the interlocking virtues of trustingness and trustworthiness. Chapter 6 on moral disagreement investigates the ways in which sociality influences agreement on moral values, norms, heuristics, and decisions. Chapter 7 on altruism is especially concerned with the potential tension between evolutionary theory and altruistic norms. Chapter 8 explores the ways in which interlocking, recursively-structured agent-patient relations influence moral development.
Thus, sociality, like patiency and agency, is a crucial concept in moral psychology, and it’s a concept that greatly complicates the inquiry. When I ask what I should do to him for her, I’m asking what follows from our sociality, that is, from the fact that I can act on him acting on her. This is another example of how questions are asked: we start with something seemingly simple and comprehensible (“What should I do to him for her?”) and parse out some of the deeper questions and concepts it presupposes.
5 Reflexivity and temporality
What should I do to him for her?
This question presupposes reflexivity. People do things; things happen to people; people do things to people. In some cases, the agent and the patient are the same person. In other words, people can do things to themselves. This is easiest to see if we also introduce the last main conceptual presupposition of my question: temporality. As I decide what to do to him for her, here are some considerations that might cross my mind:
If I don’t intercede somehow, I’ll feel guilty all day.
If I manage to distract him without starting a fight, I’ll be proud.
If I act like a coward now, I’ll be cultivating bad habits.
All of these considerations involve thinking of my future self as the patient of my current self as agent. Another way of putting the same point is that I’m taking a social perspective on myself: on the one hand, me-now is the agent who does something to a patient; on the other hand, me-in-the-future is the patient to whom something is done by that agent. These concepts also interact with sociality and the recursive embedding of agent-patient relations. For instance, suppose I make a bad decision on Monday (agent) that leads me to make an even worse decision on Tuesday (patient-to-Monday-me) that leads me to suffer immensely on Wednesday (patient-to-Tuesday-me). This is the sort of structure represented in figure 3, except that all three nodes represent me – just at different stages of my life.
Whenever we engage in long-term projects – especially long-term projects that are meant to have some effects on our future selves – patiency, agency, sociality, reflexivity, and temporality are all involved. Moral psychology asks what it is about us that makes us reflexive and temporal, and how our reflexivity and temporality figure in our own and other people’s moral perception, behavior, decision-making, emotions, characters, and institutions.
Several chapters of this book are directly related to reflexivity and temporality. The instability of preferences discussed in chapter 1 is a temporal instability, and it threatens agency because human agency as we normally conceive of it is meant to be temporally extended. I don’t just do things now. I do things now so that I can do and experience things later. If my preferences change in the meantime, then setting myself up to do or experience something later seems pointless: what if I no longer want to do or experience that? What if I’ve just wasted my effort? The interaction between deliberative agency and implicit biases discussed in chapter 2 concerns, among other things, whether I’m able to reflectively endorse my own choices. Emotions, discussed in chapter 3, can function as social signals; they can also function as commitment devices. If I have a particular emotion, I’m committing myself (if only unconsciously and tentatively) to a plan of action in the future. If I act wrongly, one of the things that may happen to my future self is the suffering of remorse. Virtue, discussed in chapter 4, is acquired (according to Aristotle and many who follow in his footsteps) through long-term, goal-directed cultivation; I have a plan for my own life over time, which I proceed to carry out, making me both the agent and the patient of myself over the course of months, years, and even decades. Intuitions, discussed in chapter 5, are arguably the automatic deliverances of capacities that have been built up over time through exposure to various theories, considerations, and arguments.
Reflexivity and temporality complicate moral psychology in various ways. This is easiest to see if we imagine creatures that are just like humans in other ways but who have no long-term memory, no sense of self, and no capability to plan, to feel proud of their accomplishments, or to experience remorse. Although such creatures would be patients (things would happen to them) and agents (they would do things) who were in some ways social (they would do things to each other), they would be very unlike us insofar as they could not intentionally do things to and for themselves, could not be grateful to or disappointed with their past selves, could not engage in long-term projects, and could not enjoy long-term friendships. Clearly, these are crucial aspects of human moral psychology.
Thus far, we have explored five crucial concepts in moral psychology: patiency, agency, sociality, reflexivity, and temporality. I don’t want to suggest that these are the only concepts moral psychologists find worth studying, but I do think they are among the most central. Other important concepts will crop up throughout this book. Some, such as emotion and intuition, will be treated at greater length. Others, such as imagination and mindfulness, will receive less attention. I encourage you to follow up on any and all of the concepts that capture your interest, and will provide lists of secondary sources at the end of each chapter to help direct and slake your curiosity. In the remainder of this introduction, I will characterize some of the major normative theories that you might already be aware of in terms of their emphases on patiency, agency, sociality, reflexivity, and temporality. After that, I’ll conclude by considering objections to moral psychology that might be raised because of the ever-fraught relationships among contingency, necessity, and normativity. In particular, I’ll focus on the truism that one can never deduce an ought from an is.
6 Comparing emphases of major moral theories
In the history of Western philosophy, four major moral theories have emerged: utilitarianism, Kantian ethics, virtue ethics, and care ethics. Since it’s likely that you’ve encountered at least some of these views before reading this book, in this section, I compare how they relate to the five main concepts in moral psychology.
6.1 Utilitarianism
Utilitarianism is the best-known variety of a family of views known as consequentialism. According to consequentialism, the goodness of an act is determined solely by the goodness of the consequent state of affairs. This view is typically combined with positions on what makes a state of affairs good and a theory of right action. For instance, hedonist act utilitarianism says that the only thing that contributes to the goodness of a state of affairs is pleasure, that the only thing that detracts from the goodness of a state of affairs is pain, and that an action is right just in case it maximizes the amount of goodness in the consequent state of affairs.
Pleasure and pain are mental states that humans and other animals enjoy and suffer. Thus, utilitarians and other consequentialists place their primary emphasis on patiency. Jeremy Bentham, one of the foremost utilitarian thinkers in philosophical history, put the point well while asking what determines whether a creature has moral worth and bears moral consideration:
Is it the faculty of reason, or, perhaps, the faculty of discourse? But a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail? the question is not, Can they reason? nor, Can they talk? but, Can they suffer? (1823, chapter 17, footnote)
For someone like Bentham, it doesn’t matter whether you can engage in reasoning (including the practical reasoning required for agency and the reflexivity required for long-term planning). It doesn’t matter whether you can talk. The main moral question for him is whether you can suffer, whether things can happen to you – in particular, bad and painful things.
Utilitarianism thus gives pride of place to patiency and de-emphasizes agency and reflexivity. Bentham’s lack of concern for talking might lead one to think that he and other utilitarians have no regard for sociality. In one sense, that’s correct. However, utilitarians and other consequentialists also tend to think that every being capable of suffering matters equally. And they recognize that people are capable of both inflicting suffering on one another and alleviating one another’s suffering. For this reason, utilitarians put a great deal of emphasis on sociality, though deriving that emphasis from its relation to patiency and suffering.
Lastly, utilitarians tend to put great emphasis on temporality. What I have in mind here is the fact that the consequences of an action are typically construed not just as what happens immediately afterwards but as everything that flows from the action. Everything, for all time? At the very least, everything that could be foreseen by a very intelligent and dedicated investigator. Utilitarians care so much about such long-term consequences that they have debates about population ethics, asking questions such as “How many people should there be?” (Blackorby, Bossert, & Donaldson 1995)
6.2 Kantian ethics
Kantian ethics, also sometimes called ‘deontological ethics’, puts most emphasis on the two concepts that utilitarianism deemphasizes (agency and reflexivity) while according less weight to the concepts utilitarianism emphasizes (patiency, sociality, and temporality). Kant thought that an account of moral obligation could be derived from the structure of agency itself. He called this the categorical imperative because it applies to every agent in every action they undertake regardless of their desires, preferences, and values. The best-known formulation of the categorical imperative states that you must “act only in accordance with that maxim through which you can at the same time will that it become a universal law” (4:421). This book is not an introduction to major moral theories, let alone the history of philosophy, so I will not go into much detail interpreting the categorical imperative. Kant’s idea, though, is that simply in virtue of being an agent you are constrained to act from some motives rather than others. Clearly, then, agency figures importantly in Kantian ethics.
The other core concept that receives primary emphasis in Kantian ethics is reflexivity. This is already somewhat evident from the first formulation of the categorical imperative, which requires you to reflect on and extrapolate from your own motives, but it comes into focus if we consider the third formulation: act as if you were through your maxim always a legislating member in the universal kingdom of ends (4:439). On this view, a moral act is one that can be self-legislated, i.e., an act that is in accordance with a law one could give not only to others but also to oneself.
Agency and reflexivity have pride of place in Kantian ethics, but the other three concepts receive some attention. Patiency and sociality get their due in the second formulation of the categorical imperative: treat humanity – whether your own or someone else’s – never merely as the means to some end but always as an end in its own right. In this formulation, we can see that Kant cares not only about agency but also about what’s done to people. He thinks it’s always wrong to treat someone as a mere means to your own end. However, patiency matters for Kant only derivatively because he thinks that what’s wrong about treating someone as a mere means is that, in so doing, you don’t respect their agency. Thus, the importance of what happens to us and what we do to each other depends on the antecedent importance of agency.
Finally, Kantian ethics doesn’t totally discount temporality (Kant argues that we have an imperfect duty to develop our own talents, for instance), but it also doesn’t place primary emphasis on it.
6.3 Virtue ethics
Virtue ethics is a family of views that focuses less on what it’s right to do and more on what sort of person it’s good to be. A good person is someone with many virtues (compassion, courage, honesty, trustworthiness) and few vices (selfishness, laziness, unfairness, rashness). Ancient Greek philosophers were basically all virtue ethicists of one kind or another. Plato emphasized the virtues of courage, temperance, wisdom, and justice. Aristotle famously thought that every virtue was a middle state between a pair of vices. For instance, courage is the disposition to fear neither too many things nor too few things, to fear them neither too intensely nor not intensely enough, to fear them neither for too long nor for too short a period, and so on.
Utilitarian ethics focuses primarily on patiency, sociality, and temporality; Kantian ethics focuses primarily on agency and reflexivity. Virtue ethics has a more balanced approach (this isn’t necessarily a good or a bad thing – it’s just a matter of emphasis), putting moderate emphasis on all five central concepts. A virtuous person is characteristically active, doing things for reasons. A virtuous person is also quite social. Aristotle, for instance, devotes two whole books (out of ten) of the Nicomachean Ethics to friendship and another to justice. Additionally, because virtue ethicists are concerned with the shape of a person’s whole life and the slow acquisition of virtuous traits, they pay more attention to temporality and moral development than utilitarians and Kantians. They place slightly less emphasis on patiency and reflexivity, though these too figure in the account.
6.4 Care ethics
The other three views surveyed in this section are venerable, traditional approaches to morality. The ethics of care is much more recent. The dawn of care ethics can be dated with some precision to the publication, in 1982, of Carol Gilligan’s In a Different Voice: Psychological Theory and Women’s Development. In her book, Gilligan explored the ways in which women (at least the women she interviewed) tend to talk in terms of care, emphasizing personal relationships and attachments (motherhood, siblinghood, friendship, etc.) and the special responsibilities that flow from these. She accused existing moral theories, such as Lawrence Kohlberg’s (1971) Kantian approach to moral psychology, of ignoring and even sometimes denigrating such caring relationships in favor of a completely impartial, legalistic notion of rights and justice. Although this criticism is somewhat overstated (as I mentioned above, Aristotle devotes twice as much attention to friendship as he does to justice), popular versions of both utilitarian and Kantian ethics clearly deserve Gilligan’s rebuke. Since then, various philosophers, including Kittay, Noddings, and Slote, have formulated moral theories in the wake of Gilligan’s critique.
Like the other theories canvassed here, care ethics is actually a family of views. What unites them is their emphasis on personal, face-to-face relationships and attachments, as well as their recognition that we all come into this world as completely helpless, dependent, screaming, fragile lumps of flesh. Care ethicists therefore focus primarily on human sociality and patiency, with derivative interest in agency (someone has to do the caring, in addition to being cared for, after all) and temporality. Reflexivity receives little attention in the care tradition.
Figure 5: Emphases of the four major moral theories
These differences in emphasis are illustrated graphically in figure 5.
7 Is and ought
To some people, the idea of combining scientific psychology with philosophical ethics to investigate moral psychology will seem only natural. Philosophy helps to set the terms of the investigation (in this case, patiency, agency, sociality, reflexivity, and temporality), proposes questions and models, dreams up potential counterexamples; psychology empirically determines whether the terms refer to anything in the world, answers the questions, tests the models, and determines whether the potential counterexamples can be realized. Psychology as an academic discipline split off from philosophy less than two centuries ago; it’s unsurprising that the two fields would sometimes collaborate. To other people, though, this project might seem to be doomed from the start. Science studies how things are, whereas philosophy studies how things ought to be and how they must be. Science can never, even in principle, help to answer philosophical questions.
As you’ve probably guessed, I disagree, and for several reasons. First, science can investigate modal reality (how things not only are but can and can’t be). To the extent that we accept the truism that people can’t be morally required to do things or be ways that are impossible, scientific investigation of moral psychology constrains moral theory. Second, scientific psychology can also investigate not just whether various kinds of behavior, character, and attachments are possible but also how demanding it would be for people to act, be, and relate in those ways. The harder it is to live up to a moral theory’s requirements, the more suspicious we should be of that theory. This is not to say that morality can’t make legitimate demands on us, just that the more extravagant those demands grow, the more suspicious we should be of the theory that generated them. Third, even if we decide to hold onto very demanding norms, psychological science can help us to see how to live up to those norms. In the same way, even if we hold onto extremely idealized norms of physical health, biological science can help us to see how to approximate those norms in our own lives.
Finally, morality is an important part of human behavior and cognition; as such, it’s something psychologists want to study, even if their investigations never end up suggesting revisions to moral norms. The idea that this aspect of psychology is simply off-limits, as if philosophers could somehow call “dibs” on it, is preposterous. As Levitin put it, those who think that science cannot study values typically commit a fallacy: “they seem to have confused making value judgments, which is incompatible with scientific objectivity, with studying objectively how other people make them – a phenomenon as amenable to psychological study, in principle, as other forms of human learning and choice” (1973, p. 491). Moral psychology doesn’t aim to replace utilitarianism, Kantian ethics, virtue ethics, or the ethics of care. In the case of care, this should be especially obvious: the entire edifice of care ethics was inspired by empirical research on moral psychology! Instead of taking their ball and going home, philosophers need to learn to share their insights, theories, and models with their scientist neighbors.
It’s not all good news for traditional normative ethics, though. Moral theories have empirical presuppositions. Moral psychology can investigate those presuppositions. Sometimes, to the moral theorist’s delight, they turn out to be well-supported. Sometimes their foundations look pretty shaky. The relation between philosophy and psychology doesn’t need to involve confrontation or scorn, though. A better attitude for both sides to take, I contend, is one of curiosity and intellectual humility. A curious investigator is tentatively committed to her views, but she’s also delighted to find out that she’s wrong because that spurs her to construct a better model, a stronger theory, a more nuanced hypothesis. There’s no part of reality that’s specially marked off for philosophers and only philosophers to investigate. By the same token, there’s no part of reality that’s specially marked off for psychologists and only psychologists to investigate. If you don’t believe me now, perhaps you will when you finish this book.
After that somewhat depressing post about Wasilla, I’m delighted to be presenting some maps of Amherst, Massachusetts. Before I do, a few methodological and philosophical points are in order.
First, we don’t take the ascriptions in obituaries at face value. We realize that people aren’t described 100% accurately in these texts. An obituary, like many other texts, tells you at least as much about its author as its subject. We’re therefore treating these documents as reflections of what the people in a community value. Whether the deceased actually embodied all of the traits ascribed to them is not for us to say. Regardless of the answer to that question, the constellations of qualities ascribed in obituaries tell us what the friends and family of the deceased think is good and important enough to bother attributing.
There are other caveats to consider. For instance, the vast majority of the people celebrated in obituaries are adults in their 60s and above. So these texts tell us about what various communities value in the elderly. Whether they also value such attributes in the young and middle-aged is an open question.
Additionally, as Dana Rognlie, a terrific graduate student in philosophy here at UO, pointed out to me recently, we shouldn’t presume that the authors of obits are a random sample of the local community. Presumably, they’re almost all close family or friends. But which family and friends are they? Are they usually the daughter, the son, or the spouse of the deceased? Or are they typically collaborations among all of the close family? If it turned out that 80% of obituaries that were written by a child were written by a daughter, that would be good to know. Unfortunately, we don’t yet have any data on this, but we’re looking into it.
Next, we don’t think of these maps as comprehensive. In particular, we think that a trait is considered a virtue in a community if, but not only if, it tends to be attributed in the obituaries composed by members of that community.
We also think, with Hume, that distinctions among intellectual, moral, political, and other kinds of virtues are blurry at best. The rich array of thick terms used to describe the dead doesn’t seem to be carved at the joints by these distinctions. One of the things most often said about the deceased is that they were a friend. Is friendship a virtue? In a forthcoming paper, I argue that it is, but I realize that that’s contentious. Another thing that’s often said about deceased men is that they were veterans. Is a group affiliation of this sort a candidate for virtue? Robert Adams thinks so, but again it’s contentious. Soldiers do things qua soldiers. Another thing that’s often said about the dead is that they were fans of the local sports team (the Ducks in Eugene, the Patriots in Amherst, and so on). Fandom is about as passive as being for the good gets. When your team wins, you’re, as Garfunkel and Oates put it, vicariously, “temporarily, adjacently victorious.”
Finally, we think that obituaries and other talk about the dead lend an interesting perspective to discussions in meta-ethics and philosophy of language. What kind of speech act are we performing when we call a dead family member generous (one of the most common terms used in obituaries)? It looks like an assertion, but as anyone who’s encountered Pericles’ funeral oration, Plato’s Menexenus, the Gettysburg Address, or John Cleese’s eulogy for Graham Chapman can affirm, talk of the dead tends to go non-cognitive pretty quickly; it turns into an exhortation of sorts to the audience.
With all that out of the way, here is a map of the values of Amherst, MA:
This one is quite detailed, so I encourage you to open it in another tab and explore it by zooming and scrolling. As with some of the other maps we’ve presented, the size of terms here is determined by the number of terms that co-occurred with the term in question, not simply the number of times that term occurred. The width of an edge connecting a pair of terms represents the number of times they co-occurred. Centrality/peripheralness represents, well, centrality and peripheralness to the network. And in this case color represents modularity. Modularity is, somewhat roughly, a measure of the density of interconnections among nodes in the network. In this map, each color represents a different module; terms within the same module tend to be more connected with each other than they are with terms in other modules (represented by different colors).
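The construction just described can be sketched in a few lines. This is a hypothetical reconstruction, not the actual pipeline: the toy term lists and variable names are my own, and the real maps were presumably built with a dedicated network tool (such as Gephi), which would also supply the modularity-based community coloring.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Invented toy data: each obituary reduced to its set of trait terms.
obituaries = [
    {"generous", "kind", "teacher"},
    {"generous", "kind", "veteran"},
    {"teacher", "witty", "activist"},
    {"witty", "activist", "feminist"},
]

edge_weight = Counter()        # edge width: how often a pair of terms co-occurs
neighbors = defaultdict(set)   # which terms each term co-occurs with
for terms in obituaries:
    for a, b in combinations(sorted(terms), 2):
        edge_weight[(a, b)] += 1
        neighbors[a].add(b)
        neighbors[b].add(a)

# Node size: number of distinct co-occurring terms (one reading of the
# description above), rather than the raw frequency of the term itself.
node_size = {t: len(ns) for t, ns in neighbors.items()}
```

Running community detection over `edge_weight` (e.g. modularity maximization, as network tools provide out of the box) would then assign each term to a module, which is what the colors in the map represent.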
The green group seems to encompass a mix of intellectual and political virtues, notably including wit, pedagogy, feminism, civil rights activism, and political engagement. This group is also the first to have a major node for a religion other than Christianity: Judaism (it also contains a small node for Islam). The blue group seems to encompass a variety of other-regarding dispositions, including humor, helpfulness, environmentalism, and compassion. The pink group seems to be primarily about commitment to the local community, including one’s family, friends, church, and civic community. The red and yellow groups are probably too small to interpret.
If you’ve been following my previous posts that detailed Eugene, Flint, and Wasilla, you’ll probably have noticed some interesting differences. This map is by far the most complex. That’s in part because I was able to look at a lot more obits for Amherst (about 600… oy). It’s also because these obits tended to be quite a bit longer and have richer descriptions. That’s unsurprising, given how much of a class and educational difference there is between Amherst and the rest of the towns I’ve looked at so far. This map also has much less focus on sports and religion and much more focus on political and intellectual engagement. Depending on your prejudices, you might find that unsurprising.
In other towns, we noticed some pretty substantial differences between the constellation of traits associated with women and the constellation associated with men. What gender differences turn up between men and women in Amherst? Here’s the map for women:
Again, these are pretty detailed, so I encourage you to open them in other tabs and explore by zooming and scrolling. No male nuns — unsurprising. No male feminists – disappointing. Fewer female sports fans — unsurprising. No female veterans — unsurprising. Otherwise, there aren’t that many noticeable differences between these maps.
I’ll post a “complete” map comparing attributions to men and women in all towns surveyed so far in a later post. For now, I need to take a bit of a break from reading obituaries….
Here’s a draft of a paper to be presented at a conference at UNC in May. As always, comments, criticisms, questions, etc. are most welcome.
Gone are the heady days when Bernard Williams (1993) could get away with saying that “Nietzsche is not a source of philosophical theories” (p. 4). The last two decades have witnessed a flowering of research that aims to interpret, elucidate, and defend Nietzsche’s theories about science, the mind, and morality. This paper is one more blossom in that efflorescence. What I want to argue is that, in light of contemporary science, Nietzsche’s is the best-supported moral psychological theory in the history of philosophy.
Given limitations of space, I will not be able to engage at length with the many competitors for this title. Instead, I will proceed by discussing three key Nietzschean insights and the contemporary psychological evidence for them. The first Nietzschean insight is the disunity of the self. The second, connected, Nietzschean insight is the primacy of affect. This primacy is expressed by what I have called elsewhere (Alfano 2010, forthcoming b) the tenacity of the intentional, and what Nietzsche calls the Socratic equation (TI Socrates 4, 10; WP 2:432-3). The third major Nietzschean insight is the social construction of character, which presupposes a wild diversity within the extensions of trait-terms and the dual direction of fit of character trait attributions. This last point is somewhat in tension with the only other published defense of the empirical credentials of Nietzsche’s moral psychology (Knobe & Leiter 2007), so I will make a few remarks about the contrast between my view and theirs.
Today, I presented a paper on some normative implications of the instability and indeterminacy of preferences for the Princeton University Neuroscience of Social Decision Making series. On Monday, I present the same work to the Center for Human Values Laurence S. Rockefeller seminar. Here’s a draft of the paper.
Psychologists and behavioral economists have recognized for decades that preferences and other motivational attitudes are indeterminate: for some pairs of outcomes, a and b, a given agent will neither prefer a to b, nor prefer b to a, nor be indifferent as between a and b. Sometimes, there’s just no fact of the matter concerning what people want. Moreover, psychologists and behavioral economists have recognized for some time that preferences and other motivational attitudes are unstable: for some pairs of outcomes, a and b, a given agent may prefer a to b now, but be disposed to reverse her preference ordering a few moments from now in response to seemingly trivial and normatively irrelevant situational factors. Philosophical theories that make use of the concepts of preferences and desires, however, rarely take the indeterminacy and instability of preferences into account.
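The indeterminacy claim can be made concrete with a small model. This is my own illustration, not anything from the paper: represent an agent’s pairwise comparisons as a partial lookup table, so that for some pairs there is simply no answer (the outcome names are invented).

```python
# Each entry records either the preferred outcome or "indifferent".
# Pairs absent from the table (in either order) are indeterminate:
# the agent neither prefers one to the other nor is indifferent.
comparisons = {
    ("cake", "pie"): "cake",          # determinate: prefers cake to pie
    ("pie", "fruit"): "indifferent",  # determinate: indifferent
    # ("cake", "fruit") is deliberately missing: no fact of the matter
}

def compare(a, b):
    """Return the preferred outcome, "indifferent", or None if indeterminate."""
    return comparisons.get((a, b), comparisons.get((b, a)))
```

Instability could be modeled on top of this by letting the table itself vary with normatively irrelevant situational parameters, so that the same query returns different answers at different moments.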
This paper has three main parts. In the first, I discuss two convergent lines of empirical evidence, both of which suggest that preferences – at least as traditionally conceived – are both indeterminate and unstable. In the second, I outline a descriptive model of preferences that I first articulated elsewhere (Alfano 2012) as an attractive response to this evidence. In the third section, I draw on my descriptive model to explore some of the normative implications of the indeterminacy and instability of preferences with respect to right action, wellbeing, public policy, and meta-ethics. At first blush, it might seem that indeterminacy and instability spell trouble for normative theories couched in terms of preferences and desires. After all, if right action is at least partially a function of preference-satisfaction, and preferences are indeterminate or unstable, then what it would be right to do would presumably inherit this indeterminacy and instability. In other words, it could be argued that there’s no fact of the matter about what it would be right to do, or that that fact is in constant flux. Similar worries arise in the cases of public policy and personal wellbeing: if your welfare is at least partially a function of whether you’re getting what you want, and your desires are indeterminate or unstable, then presumably there is no fact of the matter concerning how well your life is going, or that fact is in constant flux. Pursuing or promoting one life goal over another might then turn out to be a mug’s game. Against these worries, I argue that the local indeterminacy and instability captured by my descriptive model preserve any intuitions worth keeping about the normative status of preferences. I then go a step further, arguing that local indeterminacy and instability are to be embraced because they help to counter a prominent argument against the normative weight of preferences: the demandingness objection. 
I conclude with some methodological remarks about the appropriate use of empirical information in philosophical theorizing.
This is a draft I wrote a couple years ago but haven’t yet submitted for publication. As always, comments, objections, questions, etc. are welcome.
Desire is the very essence of man.
~ Spinoza (Ethics III.D1)
Nietzsche seems ambivalent about the existence of the self. Sometimes he affirms it, but just as often he denies it. In this paper, I argue that Nietzsche has a positive theory of the self that diverges from traditional views so significantly that he can consistently affirm the self as he conceives it while denying the self as traditionally conceived. In particular, the Nietzschean self is characterized not by unity, consciousness, knowledge, and rationality, but by plurality, diversity, non-consciousness, and desire. On his view, a state belongs to oneself in a minimal way if it inheres in one’s body, but to truly possess a state one must endorse it with a higher-order desire. If one is ambivalent in virtue of bodily possessing a state while desiring to be rid of it, one in a way both possesses and does not possess that state. In addition, being a self at all on Nietzsche’s view is not an all-or-nothing matter; it admits of degrees. Selves are individuated by their bodies, but one is more a self in direct proportion to one’s wholeheartedness and lack of ambivalence.