draft of Moral Psychology, Chapter 1: preferences

As always, comments, suggestions, questions, criticisms, etc. are most welcome…

“We are strangers to ourselves.”

~ Friedrich Nietzsche, On the Genealogy of Morals, Preface, section 1


1 The function of preferences: prediction, explanation, planning, and evaluation


Among our diverse mental states, some are best understood as representing how the world is. If I know that wine is made from grapes, I correctly represent the world as being a certain way. If I think that Toronto is the capital of Canada, I incorrectly represent the world as being a certain way (it’s actually Ottawa). Other mental states are best understood as moving us to act, react, or forbear in various ways. I want to see the Grand Canyon before I die. I desire to know how to speak Spanish. I prefer to use chopsticks rather than a fork to eat sushi. I intend to keep my promises. I aim to be fair. I love to hear New Orleans-style brass band music. Depending on their longevity, their intensity, their specificity, their malleability, and their idiosyncrasy, we use different words to describe these mental states: values, drives, choices, appraisals, volitions, cravings, goals, reasons, purposes, passions, sentiments, longings, appetites, aspirations, attractions, motives, urges, needs, acts of will. Such mental states are sometimes referred to as pro-attitudes, and related states that move someone to avoid, escape, or prevent a particular state of affairs are correspondingly called con-attitudes.

If you put together an agent’s representations of how the world is and the mental states that move her to act, you have some hope of predicting and explaining her actions. Suppose, for instance, that you know that I have a free weekend, that I deeply yearn to see the Grand Canyon, and that I have some spare cash. What am I going to do? It’s not unreasonable to predict that I will purchase a plane ticket (or rent a car) and go to Arizona. Now suppose that you know that my comprehension of geography is pretty weak. I still want to see the Grand Canyon, but I mistakenly think that it’s in Chihuahua. (Oops – nobody’s perfect). What do you think I’ll do now? It’s not unreasonable to predict that I’ll still purchase a plane ticket or rent a car, but that instead of going to Arizona I’ll end up in Mexico (and pretty frustrated!). Someone’s representations and purposes combine to lead them to act. If you know what someone’s representations and purposes are, you can to some extent predict what they’ll do.

In the same vein, knowing what someone’s representations and purposes are puts you in a position to explain their actions. Suppose you see me stand up, walk across the room, open a door, and walk through the doorway. On the door, you notice the following icon:

Figure 1


Why did I do what I did? A plausible explanation isn’t too hard to assemble. If you saw the sign indicating that the door led to the men’s bathroom, then presumably I did too: so I probably had a relevant representation of what was on the other side of the door. What desire (preference, goal, intention, need) might I have that would rationalize my behavior? The most obvious suggestion is that I wanted to relieve myself. Of course, it’s possible that I went to the men’s bathroom to participate in a drug deal, to conceal myself while I had a good long cry, or for some other reason. But if you’re right in thinking that I wanted to urinate, then you’ve successfully explained my action. If you know what someone’s representations and purposes are, you can to some extent explain what they’ve done.

To predict and explain other people’s actions, we need some idea of what they prefer (want, desire, value, need). But that’s not all that preferences are for. Preferences also figure in planning and evaluation, and when they’re structured appropriately, they contribute to the agent’s autonomy. Think about your best friend. Imagine that her birthday is in a week. You love your friend, and want to do something special for her birthday. You don’t need to predict your own action here, nor do you need to explain it. Your task now is to plan: in the next week, what can you do for your friend that will simultaneously please and surprise her without emptying your bank account? To give your friend a special birthday present, you need to know what she enjoys (or would enjoy, if she hasn’t experienced it yet). To be motivated to give your friend a special birthday present in the first place, you need to want to do something that she wants. In philosophical jargon, you must have a higher-order desire – a desire about another desire (hers). You want to give her something that she wants.

It’s remarkable how adept people can be at solving this sort of problem, which involves the sort of recursively embedded agent-patient relations discussed in the introduction. Think about it. To plan a good gift, you need to know now not just what your friend currently wants but what she will want in the future. You can’t just give her what you yourself want or what you will want in a week. You can’t give her what she wants now but won’t want in a week. To successfully give your friend a good present, you have to figure out in advance what she’ll want in a week.

The same constraints apply when you plan for yourself. Think about choosing your major in college. What do you want to specialize in? Musicology is interesting, but will you still be interested in it three years from now? Will it set you up to earn a decent living (something you’ll presumably want in five, ten, and twenty years)? Marketing might earn you a decent living, but will you find it boring (not want to do it, or even want not to do it) after a few years? Are you going to want to have children? In that case, you may need more income than you would if you didn’t want (and didn’t have) children. Living a sensible life requires planning. You need to make plans that affect your friends, your family, your colleagues, and your rivals. You also need to make plans for yourself. Doing this successfully requires intimate knowledge of (or at least some pretty good guesses about) your own and others’ future desires, needs, and preferences.

Thus, preferences figure in the prediction, explanation, and planning of action. They’re also important when we morally evaluate action. I reach out violently and knock you over, causing you some pain and surprising you more than a little. What should you think of my action? It depends in part on what moved me to do it. If I’ve shoved you because I want to hurt you, if I’m engaged in an assault, you’re going to think I’m doing something wrong. If I’m not depraved, I’ll also feel guilty. If I’m just clumsily gesturing at a pretty tree over there, I should probably know better, but you’ll temper your anger. I may not feel guilty, but I’ll probably be embarrassed or even ashamed. If I’m knocking you out of the way of a biker who’s zooming down the sidewalk towards you, perhaps you’ll feel grateful, while I’ll feel relieved or even proud.

What marks the difference between your reactions to my action? What marks the difference between my own assessments of it after the fact? It’s not that my shoving you and your falling hurts more or less in one case or the other. Instead, what leads you to evaluate my action as wrong, misguided, or benevolent is the pro- (or con-)attitude that moves me to act. Likewise, what leads me to feel guilt, embarrassment, or relief is the pro- (or con-)attitude that moved me to act. If I want to hurt you, if I want to do something to you that you prefer not to happen, you’ll say that I’ve acted wrongly. If my aim is to do something relatively harmless (something you neither prefer nor disprefer) like pointing out a feature of the environment, you’ll perhaps think I’m a klutz, but you won’t think I’ve done something morally wrong. If I’m trying to prevent you from being run down by an out-of-control cyclist, if I want to do something to you that (once you understand it) you prefer that I do, you’ll presumably think I’ve done something morally good.

Preferences are important and versatile. They help us predict and explain actions. They help us exercise agency on our own behalf and for those we care about. They help us evaluate the actions of others and ourselves. In the context of moral psychology, there’s one last thing that preferences are good for: autonomy. According to many philosophers, such as Harry Frankfurt (1971, 1992), a person is autonomous or free to the extent that she wants what she wants to want, or at least does not want what she would prefer not to want. An autonomous agent is someone whose will has a characteristic structure. This idea is discussed in more depth in chapter 2.

As I mentioned above, we have dozens of terms to refer to pro- and con-attitudes. But the title of this chapter is ‘Preferences’. Why? Preferences are sufficiently fine-grained to help in the prediction, explanation, and evaluation of action in the face of tradeoffs. Other motivating attitudes lack this specificity. Consider, for instance, values.[1] At a high enough level of abstraction, everyone values the same ten things: power, achievement, pleasure, stimulation, self-direction, universalism, benevolence, tradition, conformity, and security (Schwartz 2012). If you want to know what someone will do, why someone did something, or whether someone deserves praise or blame for acting as they did, knowing that they accept these values gives you no purchase. Qualitatively weighting values doesn’t improve things much. Consider someone who values pleasure “somewhat,” stimulation “a lot,” and security “quite a bit.” What will she do? It’s hard to say. Why’d she go to the punk rock show? It’s hard to say. Does she merit some praise for engaging in a pleasant conversation with a stranger at the coffee shop? It’s hard to say.

Preferences set up a rank ordering of states of affairs. This is easiest to see in the case of tradeoffs. Suppose two desires are moving you to act. You’re exhausted after a long day, so you want to take a nap. But your friend just texted to suggest meeting up for a drink at a local bar, and you want to join her. We can represent this tradeoff with the following table:


                    Nap     Don’t nap
Join friend          A          B
Don’t join friend    C          D

Table 1: Choice matrix


In this simplified choice matrix, there are four ways things could turn out. You could take a nap and join your friend (A); you could join your friend without taking a nap (B); you could take a nap without joining your friend (C); and you could neither nap nor join your friend (D). If you have a complete set of preferences over these options, one of them is optimal for you, another is in second place, another is in third place, and the final one is in last place. Presumably A is your top outcome and D is your bottom outcome. Unfortunately, although you most prefer A (i.e., you prefer it to B, C, and D), it’s impossible. So you’re in a position where you need to weigh a tradeoff. This is where preferences become important. If you simply value the nap and value socializing with your friend, there’s no saying whether you’ll go with B or C. But if you prefer socializing to napping, we can predict that you’ll opt for B over C. By the same token, if you prefer napping to socializing, we can predict that you’ll opt for C over B.
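The prediction rule at work here can be sketched in a few lines of code. The ranking A > B > C > D and the function name below are illustrative assumptions of mine, not anything the text commits to:

```python
# Sketch of the choice matrix in code: outcomes ranked best to worst, plus a
# prediction rule that picks the highest-ranked outcome that is still feasible.
# A: nap and join friend; B: join only; C: nap only; D: neither.

ranking = ["A", "B", "C", "D"]  # best to worst

def predict_choice(ranking, feasible):
    """Return the most-preferred outcome among those still feasible."""
    for outcome in ranking:
        if outcome in feasible:
            return outcome
    return None

# A (napping while joining your friend) is impossible, so the prediction is B.
print(predict_choice(ranking, {"B", "C", "D"}))  # prints B
```

The same rule covers the explanatory case: if the agent acts to produce a mid-ranked outcome, positing that the higher-ranked outcomes were infeasible explains the choice.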

So preferences are especially helpful in predicting behavior. They’re also great for explaining and evaluating behavior. A useful rule of thumb for explaining behavior is that people act in such a way as to bring about the highest-ranked outcome they think they can achieve. Imagine someone who prefers A to B, B to C, C to D, D to E, E to F, F to G, and G to H. She acts in such a way as to produce C. How can we explain this? Well, if we posit that she believes that A and B are out of the question (perhaps she takes them to be impossible or at least extremely difficult to achieve), then we can explain her behavior by saying that she went with the best outcome available to her.


2 The role of preferences in moral psychology


We’re now in a position to see how preferences relate to the five core concepts of moral psychology (patiency, agency, sociality, reflexivity, and temporality).


2.1 The role of preferences in patiency


Even if no one else is involved, even if you’re not exercising agency, your preferences matter for your patiency. According to one attractive theory of personal well-being, what it means for your life to go well is that your preferences are satisfied (Brandt 1972, 1983; Heathwood 2006). Your preferences might be satisfied through your own agency. You might prefer, among other things, to exercise agency in pursuit of some goal or other. Your preferences might be satisfied because you are involved in social relations with other people. Even so, there will be cases in which what you prefer happens or fails to happen simply by luck, accident, or unanticipated causal necessity. Fundamentally, then, well-being is associated with patiency, with what happens to you.

The preference-satisfaction theory of well-being is attractive for several reasons. It explains why one aspect of morality is intrinsically motivating. If my well-being is a matter of whether my preferences are satisfied, then I can’t help caring about my well-being. Preferences are a way of caring about things. Of course I care about what I care about. The preference-satisfaction theory of well-being also accounts for cases in which hedonic (pleasure-based) theories of well-being fail. Sometimes, it seems like my life goes no better, and may even go worse, when I experience some pleasures. I struggle with alcohol dependency and end up drinking to excess. While I enjoy the drinks, I prefer to stop. Arguably, I’m worse rather than better off because, even though I experience pleasure, my preferences are frustrated. Similarly, sometimes it seems like your life goes no worse, and may even go better, when you experience some pains. You exercise vigorously at the gym. You force yourself to study extra hard for an exam. You watch a frightening or depressing or horrifying movie. You eat a meal spiced with more than a little wasabi. These are painful experiences, but in each case you prefer to suffer through the pain. Arguably, you’re better rather than worse off because, even though you experience pain, your preferences are satisfied.

The preference-satisfaction theory of well-being also provides a way to understand well-being comparatively. People don’t just have good or bad lives. They have better or worse lives. Someone whose life is going poorly could be even worse off. Someone whose life is going well could be even better off. This distinction maps nicely onto the idea of a preference ranking. Since preferences can in principle put all the ways the world could be in order from best to worst, it’s possible to identify someone’s well-being with how far up their ranking things actually are. If you prefer A to B, B to C, C to D, D to E, E to F, F to G, and G to H, and the actual state of affairs is C, then your level of well-being is better than many ways it could be but not maximal. If things change to B, your well-being improves one notch; if things change to D, your well-being goes down a notch.
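The comparative picture can be made concrete with a toy model. The eight-state ranking and the scoring function here are my own illustration; the preference-satisfaction theory itself mandates no particular numerical scale:

```python
# Toy model: well-being as position in a preference ranking.
# Index 0 is the most-preferred state; the score counts how many ranked
# states the actual state of affairs is preferred to.

ranking = ["A", "B", "C", "D", "E", "F", "G", "H"]  # best to worst

def wellbeing(ranking, actual):
    """Number of ranked states the actual state beats."""
    return len(ranking) - 1 - ranking.index(actual)

# With C actual, a change to B is one notch up; a change to D, one notch down.
print(wellbeing(ranking, "C"))                            # prints 5
print(wellbeing(ranking, "B") - wellbeing(ranking, "C"))  # prints 1
```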

The most plausible version of the preference-satisfaction theory of well-being claims that what really contributes to your well-being is not the extent to which your actual preferences are satisfied but the extent to which your better-informed preferences are satisfied. Why? And what does it mean for preferences to be informed? Imagine that you’re about to take a bite of a delicious chile relleno. It’s your favorite dish. The cheese is perfectly melted. The poblanos are fresh. The tomatoes are local. Everything is perfect, with one little exception: unbeknownst to you, the cook accidentally used rat poison rather than salt. If you eat this dish, you’re going to end up in the hospital. But you don’t know this; in fact, you have no clue. Eating it won’t improve your life. It’ll make your life (much!) worse.

Philosophers recognize this, and that’s why they say that your well-being is a function not of what you want but of what you would want if you were better informed. If you knew that the chile relleno was poisoned, you would prefer quite strongly not to eat it, so even though you currently prefer to eat it, doing so would detract from rather than contribute to your well-being.

Knowledge of potential poisons is clearly not the only thing you need to have informed preferences, so philosophers of well-being argue that your better-informed preferences are your fully-informed preferences. According to this approach, the preferences that determine someone’s well-being are not the preferences that person actually has, but the ones they would have if they were fully informed. Specifying what full information means in a way that doesn’t collapse into omniscience is tricky, but one attractive suggestion is to take into account “all those knowable facts which, if [you] thought about them, would make a difference to [your] tendency to act” (Brandt 1972, p. 682) or “everything that might make [you] change [your] desires” (Brandt 1983, p. 40) – a process Richard Brandt dubbed cognitive psychotherapy.[2]


2.2 The role of preferences in agency, reflexivity, and temporality


I briefly mentioned the role of preferences in agency, reflexivity, and temporality above. Several points are relevant. First, to act at all, you must have pro-attitudes like preferences. Without states that move you to act, you’d never act in the first place, never exercise agency at all. Second, to act in the face of tradeoffs, you must have some way of ranking potential outcomes. That’s what preferences do: they put potential outcomes in a rank order. Third, to be the sort of agent that the vast majority of adult humans are, you need to engage in long-term plans and projects. This involves having some idea in advance what your future self’s preferences will or might be. It involves having temporally extended preferences, so that you want now for your future preferences, whatever they end up being, to be satisfied. It involves thinking of that future person as yourself and therefore having a special regard for him or her. If your future self mattered to you no more or less than some random stranger, long-term projects would be pretty foolish.

To be a recognizably human agent, your preferences must not violate certain constraints. Put less dramatically, your agency is undermined to the extent that your preferences violate certain constraints. You’ll fail to act successfully to the extent that you suffer from preference reversals (preferring A to B one moment and B to A the next moment). You’ll fail to act successfully if you have cyclical preferences (preferring A to B, B to C, but C to A). You’ll fail to act successfully over time if you cannot rely on your current representation of your future preferences to be largely accurate (thinking that you’ll prefer A to B when in fact you’ll prefer B to A).
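The acyclicity constraint, at least, can be stated mechanically. In this sketch (the function name and sample pair lists are my own), each pairwise preference is treated as a directed edge from the better option to the worse, and cyclical preferences show up as a cycle in the graph:

```python
# Sketch: pairwise preferences as a directed graph (better -> worse).
# Cyclical preferences (a over b, b over c, but c over a) form a cycle.

def has_cycle(prefs):
    """prefs: iterable of (better, worse) pairs. True iff they contain a cycle."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)

    def revisits(node, seen):
        # A node reappearing on the current path means we have looped.
        if node in seen:
            return True
        return any(revisits(nxt, seen | {node}) for nxt in graph.get(node, ()))

    return any(revisits(node, set()) for node in graph)

print(has_cycle([("a", "b"), ("b", "c"), ("a", "c")]))  # prints False
print(has_cycle([("a", "b"), ("b", "c"), ("c", "a")]))  # prints True
```

The same check flags a synchronic waffle-type reversal, since preferring a to b and b to a is just a two-element cycle.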


2.3 The role of preferences in sociality


We tend to think that people deserve praise and blame only, or at least primarily, for their motivated actions. As I pointed out above, if someone inadvertently brings about a consequence, we tend to withhold or at least temper praise (even if the consequence was good) and blame (even if it was bad). Moral good luck is nice, but not particularly praiseworthy. Negligence is blameworthy, but less so than malice.

The role of preferences in sociality is most directly comprehensible from a utilitarian (or other consequentialist) framework, but does not depend essentially on the truth of utilitarianism. Utilitarians such as Brandt analyze right action in terms of preference-satisfaction. According to Brandt (1983, p. 37), an action is permissible if (and only if) “it would be as beneficial to have a moral code permitting that act as to have any moral code that is similar but prohibits the act.” Obligatory and forbidden actions can then be defined in terms of permissibility using well-known equivalences in deontic logic: an obligatory action is one that it’s not permissible not to do, and a forbidden action is one that it’s not permissible to do. The connection with preferences is that benefit (and harm) are understood on this account in terms of well-being. In other words, according to Brandt, an action is permissible if (and only if) it would satisfy as many fully-informed preferences, across all people, to have a moral code permitting that act as to have any moral code that is similar but prohibits the act.
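The deontic equivalences can be spelled out in a few lines. The acts and the permissibility verdicts in this table are stipulations invented for illustration; only the equivalences themselves come from the text:

```python
# Toy encoding of the deontic equivalences: an act is obligatory iff it is
# not permissible to omit it; forbidden iff it is not permissible to do it.
# The permissibility verdicts below are illustrative stipulations.

permissible = {
    ("keep promise", "do"): True,  ("keep promise", "omit"): False,
    ("assault",      "do"): False, ("assault",      "omit"): True,
    ("whistle",      "do"): True,  ("whistle",      "omit"): True,
}

def obligatory(act):
    return not permissible[(act, "omit")]

def forbidden(act):
    return not permissible[(act, "do")]

# Merely permissible acts (like whistling) are neither obligatory nor forbidden.
print(obligatory("keep promise"), forbidden("assault"), obligatory("whistle"))
# prints: True True False
```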

Brandt’s theory is a rule utilitarian approach to right action. One could instead adopt an act utilitarian theory, according to which an action is permissible if and only if performing it in the circumstances would be as beneficial as performing any alternative action (Smart 1956). Or one could adopt a motive utilitarian theory, according to which an action is permissible if and only if it’s what a person with an ideal motivational set (i.e., a psychologically possible motivational set that, over the course of a lifetime, is as beneficial as any alternative psychologically possible motivational set) would perform in the circumstances (Adams 1976). Regardless of the precise flavor of utilitarianism one adopts, then, it’s clear that, for utilitarians, preferences are immensely important on the dimension of sociality. To act in such a way as to satisfy the most preferences, you must take into account the effects of your action not just on yourself but on everyone else. In other words, you need to take into account how your agency affects others’ patiency. Nested agent-patient relations also play a role here. What you do (or fail to do) to one person will often have some effect on what they do (or fail to do) to another person, which will have an effect on what the second person does (or fails to do) to a third person, and so on.

As I mentioned above, the relevance of preferences to sociality is easiest to see from a utilitarian perspective, but it doesn’t rely on such a perspective. Virtue ethicists and care ethicists (though perhaps not Kantians) all accept the centrality of preferences in their approaches to sociality. For instance, one nearly universally recognized virtue is benevolence, the disposition both to want to benefit other people and to often succeed in doing so. Even if a virtue ethicist thinks that there are benefits other than preference-satisfaction, they admit that preference-satisfaction is one kind of benefit. In the same vein, Aristotle and other ancient virtue ethicists gave pride of place to friendship. Friends aim, among other things, to benefit each other (and typically succeed), which again involves (perhaps among other things) preference-satisfaction. Similarly, in the care tradition, the one-caring aims among other things to benefit the cared-for. This typically involves not only satisfying the cared-for’s informed preferences but actively helping the cared-for to get their actual preferences to approximate their idealized preferences.


3 Preference reversals and choice blindness


Thus, preferences matter in multiple ways to the core concepts of moral psychology. What does the scientific literature on preferences tell us about these important mental states? Two convergent lines of evidence suggest that preferences are neither determinate nor stable: the heuristics and biases research on preference reversals, and the psychological research on choice blindness.

Preferences are dispositions to choose one option over another. You strictly prefer a to b only if, were you offered a choice between them, you would ceteris paribus choose a. If your preferences are stable, then what you would choose now is identical to what you would choose in the future. If your preferences are determinate, then there is some fact of the matter about how you would choose. That is to say, exactly one of the following subjunctive conditionals is true: if you were offered a choice, then ceteris paribus you would choose a; if you were offered a choice, then ceteris paribus you would choose b; if you were offered a choice, then ceteris paribus you would be willing to flip a coin and accept a if heads and b if tails (or you would be willing to let someone else – even your worst enemy – choose for you). The kind of indeterminacy and instability I argue for in this section is modest rather than radical. I want to claim that preferences are unstable in the sense of sometimes changing in the face of seemingly trivial and normatively irrelevant situational influences, not in the sense of constantly changing. Similarly, I want to claim that preferences are indeterminate in the sense of there sometimes being no fact of the matter how someone would choose, not in the sense of there always being no fact of the matter how someone would choose.


3.1 Preference reversals


Two distinctions are worth making regarding the types of possible preference reversals. In a chain-type reversal, you prefer a to b, prefer b to c, and prefer c to a; such reversals are sometimes labeled failures of acyclicity. In a waffle-type reversal, you prefer a to b, but also prefer b to a. The other distinction has to do with temporal scale. Preference reversals can be synchronic, in which case you would have the inconsistent preferences all at the same time. More commonly, they are diachronic, in which case you might now prefer a to b and b to c, and then later come to prefer c to a (and perhaps give up your preference for a over b). Or you might now prefer a to b, but later prefer b to a (and perhaps give up your preference for a over b). In my (2012) paper, I call diachronic waffle-type reversals the result of Rum Tum Tugger preferences, after the character in T. S. Eliot’s Old Possum’s Book of Practical Cats who is “always on the wrong side of every door.”

Preference reversals were first systematically studied by Daniel Kahneman, Sarah Lichtenstein, Paul Slovic, and Amos Tversky as part of the heuristics and biases research program.[3] In study after study, they and others showed that people’s cardinal preferences could be reversed by strategically framing the choice situation. When faced with a high-risk / high-reward gamble and a low-risk / low-reward gamble, most people choose the former but assign a higher monetary value to the latter. These investigations focused on choices between lotteries or gambles rather than choices between outcomes because the researchers were attempting to engage with theories of rational choice and strategic interaction, which – in order to generate representation theorems – employ preferences over probability-weighted outcomes. While this research is fascinating, its complexity makes it hard to interpret confidently. In particular, whenever the interpreter encounters a phenomenon like this, it’s always possible to say that the problem lies not in people’s preferences but in their credences or subjective probabilities. Since evaluating a gamble always involves weighting an outcome by its probability, one can never be sure whether anomalies are attributable to the value attached to the outcome or the process of weighting. And since we have independent reason to think that people’s ability to think clearly about probability is limited and unreliable (Alfano 2013), it’s always tempting to blame such anomalies on credences. That is why we need evidence that insulates preferences from this line of critique.

For this reason, I will focus on more recent research on preference reversals in the context of choices between outcomes rather than choices between lotteries (or, if you like, degenerate lotteries with probabilities of only 0 and 1). A choice of outcome a over outcome b can only reveal someone’s ordinal preferences; it can only tell us that she prefers a to b, not by how much she prefers a to b. This limitation is worth the price, however, because looking at choices between outcomes lets us rule out the possibility that any preference reversal might be attributable to the agent’s credences rather than her preferences.

Some of the most striking investigations of preference reversals in this paradigm have been conducted by Dan Ariely and his colleagues. For instance, Ariely, Loewenstein, and Prelec (2006) used an arbitrary anchoring paradigm to show that preferences ranging over baskets of goods and money are susceptible to diachronic waffle-type reversals.[4] In this paradigm, a participant first writes down the final two digits of her social security number (henceforth SSN-truncation[5]), then puts a ‘$’ in front of it. Next, the experimenters showcase some consumer goods, such as chocolate, books, wine, and computer peripherals. The participant is instructed to record whether, hypothetically speaking, she would pay her SSN-truncation for the goods. Finally, the goods are auctioned off for real money. The surprising result is that participants with high SSN-truncations bid 57% to 107% more than those with low SSN-truncations.

To better understand this phenomenon, consider a fictional participant whose SSN-truncation was 89. She ended up bidding $50 for the goods, so, at the moment of bidding, she preferred the goods to the money; otherwise, she would have entered a lower bid. However, one natural interpretation of the experiment is that, prior to the anchoring intervention, she would or at least might have chosen that amount of money over the goods (i.e., she would have bid lower); in other words, prior to the anchoring intervention, she preferred the money to the goods. Anchoring on her high SSN-truncation induced a diachronic waffle-type reversal in her preferences. Prior to the intervention, she preferred the money to the goods, but after, she preferred the goods to the money. This way of explaining the experiment entails that her preferences were unstable: they changed in response to the seemingly trivial and normatively irrelevant framing of the choice.

Another way to explain the same result is to say that, prior to the anchoring intervention, there was no fact of the matter whether she preferred the goods to the money or the money to the goods. In other words, it was false that, given a choice, she would have chosen the goods, but it was equally false that, given a choice, she would have chosen the money or been willing to accept a coin flip. Only in the face of the choice in all its messy situational details did she construct a preference ordering, and the process of construction was modulated by her anchoring on her SSN-truncation. This alternative explanation entails that her preferences were indeterminate.

Furthermore, these potential explanations are mutually compatible. It could be, for instance, that her preferences were partially indeterminate, and that they became determinate in the face of the choice situation. Perhaps she definitely did not prefer the money to the goods prior to the anchoring intervention, but there was no fact of the matter regarding whether she was indifferent or preferred the goods to the money. Then, in the face of the hypothetical choice, this local indeterminacy was resolved in favor of preference rather than indifference. Finally, her newly-crystallized preference was expressed when she entered her bid.

Such a robust effect calls for explanation. My own suspicion is that a hybrid of indeterminacy and instability is the right theory of what happens in these cases, but it’s difficult to find evidence that points one way or the other. In any event, for present purposes, I’m satisfied with the inclusive disjunction of indeterminacy and instability.


3.2 Choice Blindness


There are many other – often amusing and sometimes depressing – studies of preference reversals, but the gist of them should be clear, so I’d like to turn now to the phenomenon of choice blindness, a field of research pioneered in the last decade by Petter Johansson and his colleagues. As I mentioned above, preferences are dispositions to choose. You prefer a to b only if, were you given the choice between them, then ceteris paribus you would choose a. Preferences are also dispositions to make characteristic assertions and offer characteristic reasons. While it’s certainly possible for someone to prefer a to b but not to say so when asked, the linguistic disposition is closely connected to the preference. Someone might be embarrassed by her preferences. She might worry that her interlocutor could use them against her in a bargaining context. She could be self-deceived about her own preferences. In such cases, we wouldn’t necessarily expect her to say what she wants, or to give reasons that support her actual preferences. But in the case of garden-variety preferences, it’s natural to assume that when someone says she prefers a to b, she really does, and it’s natural to assume that when someone gives reasons that support choosing a over b, she herself prefers a to b. Research on choice blindness challenges these assumptions.

Imagine that someone shows you two pictures, each a snapshot of a woman’s face. He asks you to say which you prefer on the basis of attractiveness. You point to the face on the left. He then asks you to explain why, displaying the chosen photograph a second time. Would you notice that the faces had been surreptitiously switched, so that the face you hadn’t pointed at is now the one you’re being asked about? Or would you give a reason for choosing the face that you’d initially dispreferred?   Johansson et al. (2005) found that participants detected the ruse in fewer than 20% of trials. Moreover, when asked for reasons, many of the participants who had not detected the manipulation gave reasons that were inconsistent with their original choice. For instance, some said that they preferred blondes even though they had originally chosen a brunette.

This original study of choice blindness has been supplemented with experiments in other domains. For instance, Hall et al. (2010) found that people exhibited choice blindness in more than two thirds of all trials when the choice was between two kinds of jam or two kinds of tea. After tasting both, participants indicated which of the two they preferred, then were asked to explain their choice while sampling their preferred option “again.” Even when the phenomenological contrast between the items was especially large (cinnamon apple versus grapefruit for jam, Pernod versus mango for tea), fewer than half the participants detected the switch.

Choice blindness in the domain of aesthetic evaluations of faces and comestibles might not seem weighty enough to support the argument that preferences are often indeterminate and unstable. But perhaps choice blindness in the domain of political preferences and moral judgments would be. Johansson, Hall, and Chater (2011) used the choice blindness paradigm to flip Swedish participants’ political preferences across the conservative-socialist gap.[6] Participants filled in a series of scales on their political preferences for policies such as taxes on fuel. Some of these scales were then surreptitiously reversed, so that, for example, a very conservative answer was now a very socialist answer. Participants were then asked to indicate whether they wanted to change any of their choices, and to give reasons for their positions. Fewer than 20% of the reversals were detected, and only one in ten participants detected enough reversals to keep their aggregate position from switching from conservative to socialist (or conversely). In a similar study, Hall, Johansson, and Strandberg (2012) used a self-transforming survey to flip participants’ moral judgments on both socially contentious issues, such as the permissibility of prostitution, and broad normative principles, such as the permissibility of large-scale government surveillance and illegal immigration. For instance, an answer indicating that prostitution was sometimes morally permissible would be flipped to say that prostitution was never morally permissible, and an answer indicating that illegal immigration was morally permissible would be flipped to say that illegal immigration was morally impermissible. Detection rates for individual questions ranged between 33% and 50%. Almost 7 out of every 10 participants failed to detect at least one reversal.

As with the behavioral evidence for preference reversals, the evidence for choice blindness suggests that people’s preferences are unstable, indeterminate, or both. The choices people make can fairly easily be made to diverge from the reasons they give. If preferring a to b is a disposition both to choose a over b and to offer reasons that support the choice of a over b (or at least not to offer reasons that support the choice of b over a), then it would appear that many people lack preferences, or that their preferences do exist but are extremely labile. Not only is there sometimes no fact of the matter about what we prefer, but also our preferences are often seemingly constructed on the fly in choice situations, and their ordering is shaped by seemingly trivial and normatively irrelevant factors.


3.3 A descriptive preference model


While it is of course possible to dispute the ecological validity of these experiments or my interpretation of them, I want to proceed by considering some of the philosophical implications of that interpretation, assuming for the sake of argument that it is sound. I’ve already explored some of the implications of this perspective in Alfano (2012), where I argue that the indeterminacy and instability of preferences infirm our ability to explain and predict behavior. Predictions of behavior often refer to the preferences of the target agent. If you know that Karen prefers vanilla ice cream to chocolate, then you can predict that, ceteris paribus, when offered a choice between them she will go with vanilla. Likewise for explanations: you can base an explanation of Karen’s choice of vanilla on the fact that she prefers vanilla. But if there’s no fact of the matter about what Karen prefers, you cannot so easily predict what she will do, nor can you so easily explain why she did what she did. A related problem arises when considering instability. If Karen prefers vanilla to chocolate now, but her preference is unstable, then the prediction that she will choose vanilla in the future – even the near future – is on shaky ground. For all you know, by the time the choice is presented, her preferences will have reversed. Similarly for explanation: if Karen’s preferences are unstable, you might be able to say that she chose vanilla because she preferred it at that very moment, but you gain little purchase on her longitudinal preferences from such an attribution.

I’ve responded to these problems by proposing a model in which preferences are interval-valued rather than point-valued. A traditional valuation function v maps from outcomes to points. The binary preference relation is then defined in terms of these points: a is strictly preferred to b just in case v(a) > v(b), b is strictly preferred to a just in case v(a) < v(b), and the agent is indifferent as between a and b just in case v(a) = v(b).


Figure 2: a preferred to b because 1 = v(a) > 0 = v(b)


In the looser model I propose, by contrast, the valuation function maps from outcomes to closed intervals, such that a is strictly preferred to b just in case min(v(a)) > max(v(b)) and the agent is indifferent as between a and b just in case there is some overlap in the intervals assigned to a and b.


Figure 3: indifference because neither min(v(a)) > max(v(b)) nor max(v(a)) < min(v(b))



Though this model preserves the transitivity of strict preference, it does not preserve the transitivity of indifference. This, however, may be a feature rather than a bug, since ordinary preferences as exhibited in choice behavior themselves seem not to preserve the transitivity of indifference.
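To make the contrast with the point-valued model concrete, here is a minimal Python sketch of interval-valued preferences. The outcome labels and interval endpoints are illustrative assumptions, not data from any study:

```python
# Interval-valued preferences: v maps outcomes to closed intervals (lo, hi).
# a is strictly preferred to b iff min(v(a)) > max(v(b));
# indifference holds iff the two intervals overlap.

def strictly_prefers(va, vb):
    return va[0] > vb[1]  # min(v(a)) > max(v(b))

def indifferent(va, vb):
    return not strictly_prefers(va, vb) and not strictly_prefers(vb, va)

# Illustrative intervals (assumed for the example):
v = {"a": (0.7, 1.0), "b": (0.5, 0.8), "c": (0.3, 0.6)}

# Strict preference remains transitive, but indifference does not:
print(indifferent(v["a"], v["b"]))       # True: (0.7, 1.0) overlaps (0.5, 0.8)
print(indifferent(v["b"], v["c"]))       # True: (0.5, 0.8) overlaps (0.3, 0.6)
print(strictly_prefers(v["a"], v["c"]))  # True: 0.7 > 0.6, so a > c although a ~ b and b ~ c
```

Note that degenerate intervals with min = max recover the traditional point-valued model, in which indifference is transitive.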


4 Philosophical implications of the indeterminacy and instability of preferences


In this section, I consider some possible philosophical implications of the indeterminacy and instability of preferences, drawing on the descriptive model outlined in the previous section. Moving from the descriptive to the normative domain is always fraught, but, as I argued in the introduction, the two need to be explored in tandem, with mutual theoretical adjustments made on each side. Moral psychology without normative structure is a baggy monster. Normative theory without empirical support is a castle in the sky.


4.1 Implications for patiency


The primary worry raised for the theory of personal well-being by the indeterminacy and instability of preferences is that, if the extent to which your life is going well depends on or is a function of the extent to which you’re getting what you want, then well-being inherits the indeterminacy and instability of preferences. In other words, there might be no fact of the matter concerning how good a life you’re living at this very moment, and if there is such a fact, it might fluctuate from moment to moment in response to seemingly trivial and normatively irrelevant situational factors.

By way of example, consider someone who is eating toast with cinnamon apple jam. Is his life as good as it would be if he were eating toast with grapefruit jam? If he is like the people in the choice blindness studies mentioned above, there might be no answer to this question. If he’s told that he prefers cinnamon apple, he will prefer the present state of affairs, but if he is told that he prefers grapefruit, he’ll be less pleased with the present state of affairs than he would be with the world in which he is eating grapefruit jam. Whether his life is better in the cinnamon apple jam-world or the grapefruit jam-world is indeterminate until his preferences crystallize in one ordering or the other.

Or consider someone who has a brand new hardbound copy of Moby Dick, for which she just paid $50 when it was marked down from $70. Is her life going better now that she has the book, or was it going better before, when she had the money? If she is like the participants in Ariely’s preference reversal study, the answer may be “yes” to both disjuncts. Before she bought the book, she preferred the money to the book. But then she anchored on the manufacturer’s suggested retail price of $70, raised her valuation of the book, and ended up preferring it to $50. Her unstable preferences mean that she was better off with the money than the book, and that she is better off with the book than the money. It’s not a contradiction, but it makes her well-being a pain in the neck to evaluate.

Fortunately, though, there is a ready response to this worry, which begins by pointing out that the indeterminacy and instability of preferences is not radical but modest, a feature captured by the descriptive model sketched above. Although there may be no fact of the matter whether the life of the consumer of cinnamon apple jam is better than the life of the consumer of grapefruit jam, there is a fact of the matter whether either of these lives is better than that of someone who, instead of eating jam, is enduring irritable bowel syndrome. Although preference orderings may fluctuate between owning a book and having $50, they do not fluctuate between owning the same book and having $50,000. These observations are consistent with the interval-valued preferences of the descriptive model outlined in the previous section. In the first example, the intervals for cinnamon apple jam and for grapefruit jam overlap with each other, but neither overlaps with the interval for irritable bowel syndrome. In the second example, the intervals for having $50 and having the book overlap with each other, but neither overlaps with the interval for having $50,000. Thus, we can still make a whole host of judgments about the quality of various possible lives, even if, when we “zoom in,” such judgments cannot always be made.

For the price of this local indeterminacy and instability, the theoretician of well-being can purchase an answer to an objection to the preference-satisfaction theory of well-being. The objection goes like this: when assessing whether it would be better to have the life of a successful lawyer or a successful artist, it seems trivial or even perverse to ask whether the artist’s life would involve slightly more ice cream, even if the agent considering what to do with her life likes ice cream. Slight preferences shouldn’t bear normative weight in this context.

However, if we assume, as seems reasonable in light of the evidence, that her preference for a little more ice cream is weak enough that it could be shifted by preference reversal or choice blindness, then its normative irrelevance is unmasked. The life of the ice cream-deprived artist and the life of the ice cream-enjoying artist are assigned nearly identical intervals on the scale of preference – intervals that differ less from each other than from that assigned to the life of the lawyer. Hence, if we are willing to put up with a little indeterminacy and instability, we can avoid more serious objections to the theory of personal well-being.


4.2 Implications for sociality


The main worry raised by the indeterminacy and instability of preferences in the context of sociality is that, if right action depends on preference-satisfaction (perhaps among other things), then it inherits the indeterminacy and instability of preferences. It might turn out that there’s just no fact of the matter what it would be right to do, or that that fact is in constant flux. This worry is perhaps most pressing for preference-utilitarians, such as Brandt and Singer (1993), but it casts a long shadow. Even if you don’t think that right action is a function of preferences and only preferences, it’s hard to deny that preferences matter at all. For instance, as I pointed out above, virtue ethicists typically countenance benevolence as an important virtue. If, as I argued in the previous section, well-being is affected by the indeterminacy and instability of preferences, then benevolence is too. And even if one thinks that benevolence is not a virtue, virtually any tolerable theory of right action is going to say that maleficence is a vice and that there is a duty – whether perfect or imperfect – of non-maleficence.

In the remainder of this section, I will concentrate on the normative implications of indeterminacy and instability for preference-utilitarianism, but it should be clear that these are just some of the more straightforward implications, and that others remain to be drawn.

Before considering some responses I find attractive, I should point out that the problem we face here is not the one that is solved by distinguishing between a decision procedure and a standard of value. An objection to utilitarianism that was lodged early and often is that it’s either impossible or at least extremely computationally complex to know what would satisfy the most preferences. This knowledge could only be acquired by eliciting the preference orderings of every living person – or perhaps even every past, present, and future person. The correct response to this objection is that utilitarianism is meant to be a standard of value, not a decision procedure.[7] It identifies (if it is the correct theory of right action) what it would be right to do, but that doesn’t mean that we can use it to find out what it would be right to do every time we make a moral decision. The distinction is meant to parallel other general theories: Newtonian mechanics would have identified, if it had been the correct physical theory, what a projectile will do in any circumstances whatsoever, even if people were unable to apply the theory in a given instance.

This response is unavailable in the present context. There are two ways in which it might be impossible to know what would satisfy someone’s preferences: epistemic and metaphysical. You would be unable to know what someone wants if there were a fact of the matter about what that person wants but you couldn’t find out what that fact is. This would be a merely epistemic problem, and the distinction between a decision procedure and a standard of value handles it nicely. But you would also be unable to know what someone wants if there simply were no fact of the matter concerning what that person wants. If I am right that preferences are indeterminate, then this is the problem we now face, and it does no good to have recourse to the distinction between a decision procedure and a standard of value.

Preference-utilitarianism is not without resources, however. As in the case of well-being, one attractive response is to point out that preferences are only modestly indeterminate and unstable. Although there may be no uniquely most-preferred outcome for a given individual (or indeed for any individual), there will be many genuinely dispreferred outcomes, and hopefully a manageably constrained subset of preferred outcomes, than which nothing is more preferred. They are all outcomes than which nothing is determinately and stably better, but there is no unique best outcome.
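On the interval-valued model, this subset of outcomes, those than which nothing is determinately and stably better, can be computed with a simple dominance filter. Here is a minimal sketch; the outcome names and intervals are invented for illustration:

```python
# Sketch: find the outcomes than which nothing is determinately better,
# given interval-valued preferences. Intervals are illustrative assumptions.

def dominates(va, vb):
    """a determinately beats b iff min(v(a)) > max(v(b))."""
    return va[0] > vb[1]

def undominated(valuations):
    """Outcomes not strictly dominated by any other outcome."""
    return [o for o, v in valuations.items()
            if not any(dominates(w, v)
                       for p, w in valuations.items() if p != o)]

v = {"alpha": (0.4, 0.6), "beta": (0.5, 0.7), "gamma": (0.0, 0.2)}
print(undominated(v))  # ['alpha', 'beta']: no unique best, but 'gamma' is ruled out
```

The output illustrates the point in the text: there is no unique most-preferred outcome, yet the genuinely dispreferred outcome is determinately excluded.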

Furthermore, from among this subset of alternatives it might be possible to winnow out those that satisfy preferences which we have independent normative grounds to reject – preferences that are silly, ignorant, perverse, or malevolent. As I pointed out above, it’s commonly argued in the context of right action that brute preferences carry less weight than fully-informed preferences. According to those who argue in this way, whether it’s right to do something depends less on whether it would satisfy people’s actual preferences than on whether it would satisfy their fully-informed preferences. It might be hoped that idealizing preferences would cut down or even eliminate their indeterminacy and instability.

Here’s what that might look like. Suppose that Jake’s actual preferences are captured by my interval-valued model. As such, they present two problems: they fail to uniquely determine how it would be right to treat Jake, and they may even rule out the genuinely right way to treat him because his actual preferences are normatively objectionable. It might be possible to kill these two birds with the single stone of idealization if idealization leads to unique, point-valued preferences that are no longer normatively objectionable. Perhaps there is only one way that Jake’s preferences could turn out after he undergoes cognitive psychotherapy. This is a big ‘perhaps,’ but it is worth considering. What evidence we have, however, suggests that idealizing in this way would not lead to determinate, stable preferences. When Kahneman, Lichtenstein, Slovic, and Tversky began to investigate preference reversals, many economists saw the phenomenon as a threat, since it challenged some of the most fundamental assumptions of their field. Accordingly, they tried to show that preference reversals could be removed root and branch if participants were given sufficient information about the choices they were making. Years of attempts to eliminate the effect proved fruitless.[8]

The burden is then on the idealizer to say what information participants lack in the relevant experiments. What does someone who bids high on a bottle of wine after considering her SSN-truncation not know, or not know fully enough? Perhaps she should be allowed first to drink some of the wine. While Ariely et al. (2006) did not investigate whether this would eliminate the anchoring on SSN-truncation, they did conduct other experiments in which participants sampled their options and thus had the relevant information. In one, participants first listened to an annoying sound over headphones, then bid for the right not to listen to the sound again. As in the consumer goods experiment, before bidding, participants first considered whether they would pay their SSN-truncation in cents to avoid listening to the sound again. And as expected, those with higher SSN-truncations entered higher bids, while those with lower SSN-truncations entered lower bids. It’s unclear what further information they could have acquired to inform their preferences. It seems more plausible that they had too much information, not too little. If they hadn’t first considered whether to bid their SSN-truncation, they would not have anchored on it and would therefore have had “uncontaminated” preferences. But cognitive psychotherapy says to take into account “everything that might make [one] change [one’s] desires” (Brandt 1983, p. 40). Anchoring changed their desires, so it counts as part of cognitive psychotherapy. Perhaps the process can be revised by saying that one should take into account everything that might correctly or relevantly change one’s desires, but then the problem is to come up with an account of what makes an influence on one’s desires correct or relevant that doesn’t involve either a vicious regress or a vicious circle. No one has managed to do this, perhaps because it can’t be done.

Another response, which I find more attractive, is to embrace rather than reject the indeterminacy and instability of preferences. There are several ways to do this. One is to figure out which preferences are wildly indeterminate or unstable and disqualify their normative standing completely. Just as it makes sense to ignore the Rum Tum Tugger’s begging to be let inside because you know he’ll just beg to get back out again, perhaps it makes sense to hive off Jake’s indeterminate and unstable preferences, leaving a kernel of normatively respectable ones behind. Only these would matter when considering what it would be right to do by Jake, or what would promote his well-being.

A second way to embrace indeterminacy and instability is to make a less heroic assumption about the effect of cognitive psychotherapy. Instead of taking it for granted that this process is bound to converge on unique, point-valued preferences, perhaps it will merely shrink the width of Jake’s interval-valued preferences. In that case, even after idealization, there would be no unique characterization of what it would be right to do by Jake or what would most promote his well-being. As I’ve argued in the context of prediction and explanation (Alfano 2012), however, this might be a feature rather than a bug. Suppose that idealization yields a preference ordering that rules out most actions as wrong and condemns many outcomes as detrimental to Jake’s well-being, but does not adjudicate among many others. The remaining actions would then all be considered morally right in the weak sense of being permissible but not obligatory, and the remaining outcomes would all be vindicated as conducive to well-being. This strategy might help to solve the so-called demandingness problem by expanding what James Fishkin calls “the zone of indifference or permissibly free personal choice” (1982, p. 23; see also 1986). Thus, while it is possible to try to resist the evidence for indeterminacy and instability, or to acknowledge the evidence while denying its normative import, it may be better instead to embrace these features of preferences and use them to respond to existing problems.


5 Future directions in the moral psychology of preferences


Because preferences are involved in multiple ways in patiency, agency, sociality, temporality, and reflexivity, there are many avenues for further research. In this closing section, I list just a few of them.

First, further conceptual work by philosophers and theoretically-minded psychologists and behavioral economists may reveal or clarify relevant distinctions, such as a contemporary version of Mill’s distinction between higher and lower pleasures. Perhaps a useful distinction can be made between satisfaction of higher and lower preferences. According to Mill, one pleasure is higher than another if an expert who was acquainted with both would choose any amount of the former over any amount of the latter. This maps fairly directly onto the idea of lexicographic preferences: one good or value is lexicographically preferred to another if (and only if) any amount of the former would be chosen over any amount of the latter. Such values would be in principle immune to preference reversals. Jeremy Ginges and Scott Atran (2013) have found that when a value is “sacralized,” it becomes lexicographically preferred in this way. Moral values seem to be the only values that are capable of becoming sacred. However, tradeoffs have only been studied in one direction (giving up a sacred value to gain a secular value).
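A lexicographic ordering is easy to model in a toy sketch. The representation of a bundle as a (sacred, secular) pair, and the numbers below, are my own assumptions for illustration:

```python
# Toy model of lexicographic preference: the sacred value always decides;
# the secular value only breaks ties. Bundles are (sacred, secular) pairs.

def lex_prefers(a, b):
    sacred_a, secular_a = a
    sacred_b, secular_b = b
    if sacred_a != sacred_b:
        return sacred_a > sacred_b  # no amount of secular value can compensate
    return secular_a > secular_b    # secular value matters only as a tiebreaker

# Any positive amount of the sacred good outweighs any amount of the secular
# good, so no anchoring on the secular dimension can reverse the preference:
print(lex_prefers((1, 0), (0, 1_000_000)))  # True
```

This is why sacralized values would be in principle immune to the reversals discussed above: no manipulation of the lexically posterior dimension can flip the ordering.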

Second, further empirical research would help to determine whether the hiving off strategy succeeds. Is there some identifiable class of preferences that is especially susceptible to reversals and choice blindness? We currently lack sufficient evidence to say. It seems that the effects may be stronger in business and gambling domains and weaker in social and health domains (Kuhberger 1998), but these domains are neither mutually exclusive nor exhaustive. This is yet another area in which collaboration between philosophers, who are specially trained in making this sort of distinction, and psychologists would be useful.

Third, to what extent do preference reversals and choice blindness disappear when people are informed about them? Are psychologists who know all about these effects less susceptible to them? More susceptible? The same as other people?

Fourth, are there some people who are congenitally more susceptible to preference reversals and choice blindness than others? There is very little research on this, though one study suggests that roughly a quarter of the population is highly susceptible and another quarter is immune (Bostic, Herrnstein, & Duncan 1990). Perhaps the preferences of people who are clear on what they want deserve more normative weight than the preferences of people who don’t know what they want. Perhaps the second group would benefit not so much from getting what they (think) want (for the moment) but from having their preferences shaped in more or less subtle ways.

Finally, on a related note, perhaps public policy should sometimes aim not so much to satisfy existing preferences, but to shape people’s preferences in such a way that they are (more easily) satisfiable. The idea here is to take advantage of the instability of preferences, cultivating them in such a way that the people who have them will be most able to satisfy their own wants. If you’re not getting what you want, either change what you’re getting, or change what you want. Of course, this proposal may seem objectionably paternalistic, but I tend to agree with Richard Thaler and Cass Sunstein (2008) in thinking that in some cases such policies may be permissible. In fact, it’s a striking asymmetry that almost no one objects to the shaping of beliefs, provided they are made to accord with (what we take to be) the truth, whereas it’s hard to find someone who doesn’t object to the shaping of desires and preferences. However, I would argue that the choice we often face is not whether to mold preferences but how. Given how easily preferences are influenced, it’s highly likely that they are constantly being socially shaped without our realizing it. If this is right, existing policies already shape preferences; we just don’t know how. The choice is therefore between inadvertently influencing preferences and doing so strategically. I tend to think that society has not just a right but an obligation to help people develop appropriate preferences – a point with which feminists such as Serene Khader (2011) concur. The worry that such interventions might be objectionably paternalistic can be assuaged somewhat by insisting, as Khader does, that the very people whose preferences are the targets of policy intervention participate in designing the interventions.


[1] Preferences are causally influenced by values, but values on their own don’t do all the work (Homer & Kahle 1988).

[2] A version of this idea was first formulated by Sidgwick (1981). Rosati (1995) argues persuasively that mere information without imaginative awareness and engagement with that information is not enough.

[3] See Lichtenstein & Slovic (1971); Slovic (1995); Slovic & Lichtenstein (1968, 1983); Tversky & Kahneman (1981); Tversky, Slovic, & Kahneman (1990).

[4] See also Ariely & Norton (2008), Green et al. (1998), Hoeffler & Ariely (1999), Hoeffler et al. (2006), Johnson and Schkade (1989), and Lichtenstein and Slovic (1971).

[5] A social security number is a kind of national identification code: it associates each citizen of the United States with a unique, quasi-random number.

[6] In the United States, this would be equivalent to flipping preferences across the conservative-liberal gap; in the United Kingdom, it would be equivalent to flipping preferences across the Conservative-Labour gap.

[7] Bentham (1789/1961, p. 31), Mill (1861/1998, 26), and Sidgwick (1907, p. 413) all deal with the objection in this way.

[8] See Berg, Dickhaut, & O’Brien (1985); Pommerehne, Schneider, & Zweifel (1982); and Reilly (1982).

Draft chapters of moral psychology textbook

I’m writing a textbook on moral psychology for Polity.  Some of the material was piloted in an undergraduate honors seminar I taught this winter.  Much of it is new material (though related to my other work and drawing as carefully as I can on others’).  I’m going to be putting draft chapters up on this blog.  I’d be extremely grateful for comments, suggestions, questions, and criticisms.

Here’s a tentative table of contents:


1. Preferences

2. Agency

3. Emotion

4. Virtue

5. Intuition

6. Moral disagreement

7. Altruism

8. Development

Coda: The future of moral psychology

This post is a draft of the intro.

1 Setting the stage


Moral psychology is the systematic inquiry into how morality works (when it does work) and breaks down (when it doesn’t work).  The field therefore incorporates questions, insights, models, and methods from various parts of psychology (personality psychology, social psychology, cognitive psychology, developmental psychology, evolutionary psychology), sociology, anthropology, criminology, and of course philosophy (applied ethics, normative ethics, metaethics).  These fields are – or at least can be – mutually informative.  Indeed, one guiding theme of this book is that moral philosophy without psychological content is empty, whereas psychological investigation without philosophical insight is blind.  Given their characteristically synoptic perspective, philosophers are ideally situated to organize and moderate a productive conversation among these sciences.  Nevertheless, there is always the risk that investigators with different training and expertise may misinterpret, misconstrue, or misunderstand one another.  In this book, I attempt to put the relevant disciplines in dialogue.  They sometimes speak with different accents, jargons, vocabularies, even grammars.  My aim is to make their conversation intelligible to the reader, even if they cannot all be brought to speak exactly the same language in the same way.

Systematic inquiry depends on systematic questions.  Science is not just a collection of facts.  It’s not even a collection of facts about the same thing or class of things.  Imagine how stupid it would be to conduct moral psychology by assembling all and only the motives that every person has ever had while responding to a moral problem (assuming this to be possible in the first place).  This would be an utterly disorganized, uninformative, overwhelming mess.  In the annals of the illustrious British Royal Society, you find descriptions of “experiments” like this: “A circle was made with powder of unicorne’s horn, and a spider set in the middle of it, but it immediately ran out severall times repeated.  The spider once made some stay upon the powder” (Weld 1848, p. 113).  This would be a caricature of bad science if it hadn’t happened.  We might call this empiricism run amok.  Science doesn’t just ask what happens, as if this were a question that, when completely answered, would satisfy human inquirers.  Science asks questions systematically.  It asks, for instance, what the effect of X on Y is.  It asks whether that effect is mediated by M.  It asks whether the effect is moderated by Z.[1]  It attempts to determine which small set of variables, organized in what configuration, accounts for the variability observed and experimentally induced in the field of inquiry.

In this endeavor, science is guided by insightful identification of relevant variables, careful distinction between similar phenomena, creative elaboration of alternative models, and skeptically imaginative construction of potential counterexamples.  As the economist Paul Krugman put it recently on his blog, you can’t just let “the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking.  If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming).”[2]  One way to help make theorizing explicit rather than implicit is asking systematic questions.

Unfortunately, in universities and in the contemporary education system more broadly (especially, to my chagrin, in the United States), we typically spend far too much time answering (and learning to answer) questions and far too little time asking (and learning to ask) them.  So, in this introduction, I’ll try to show how questions are asked, how they become more nuanced and complicated, and how conditions of adequacy for answers are (tentatively) established.

Here’s a moral question I’ve asked myself:

What should I do to him for her?

Picture this: I’m headed to work on a downtown subway car at 8:30 AM.  Two seats to my right, a 20-something woman is intently reading a magazine, obviously somewhat tense because a man is standing over her, leaning in a bit too close, leering slightly, and alternating between asking her name and telling her to smile.  She’s presumably on her way to work and obviously uninterested in his conversation.  She rolls her eyes and sighs.  He seems obnoxious but mostly harmless.  She casts about from time to time.  Is she looking for help? for someone to share a moment of derisive eye contact with? for reassurance that, if her unwelcome interlocutor escalates to insulting or assaulting her, fellow passengers will not remain apathetic bystanders?


2 Patiency


What should I do to him for her?  This question presupposes an immense amount.

First, it presupposes patiency[3] – that is, the fact that things happen to people.  My fellow commuter can be made uncomfortable.  She can feel threatened.  She can be threatened.  She can be assaulted.  Things – some of them good and some of them quite bad – can happen to her.  Some of them might be done by that jerk who keeps insinuating himself into her attention.  The fact that good and bad things can happen to her – that she is, in technical terms, a patient – is presupposed by my question.

Things can also happen to him.  He can be ignored and accommodated.  He can be egged on.  He can, alternatively, be confronted and challenged.  He can be distracted or redirected.  The fact that good, bad, and neutral things can happen to him – that he too is a patient – is also presupposed by my question.

Finally, things can happen to me.  One reason I might do nothing is that I’m afraid of what might happen to me if I confront or even merely accost him.  Probably nothing – but I’m useless in a fight, and strangers can be unpredictable.  She might express gratitude to me for intervening.  Alternatively, she might be annoyed that a second stranger has made her business his business.  I aim to be helpful, which among other things includes stymieing creeps, but I also aim to avoid trampling through strangers’ lives uninvited.  As I decide what to do, her patiency, his patiency, and my patiency are all quite salient.

Things happen to people.  When they do, we have an example of patiency.  In other words, when something happens to someone, she is the patient of (is passive with respect to) that event or action.  Moral psychology asks what it is about us that makes us patients, and how our patiency figures in our own and other people’s moral perception, behavior, decision-making, emotions, characters, and institutions.  Several chapters of this book are directly related to patiency.  For instance, in chapter 1 on preferences, we will see that some philosophers argue that your life goes well to the extent that your preferences are satisfied.  In other words, your life is better when you get what you want than when you don’t get it.  If you, like most people, want to be healthy, but you end up contracting influenza, your life goes worse.  Something happens to you that contravenes your preferences.  On the flipside, if you, like most people, prefer temperate weather to frigid cold, and the weather where you are is temperate, then your life goes better.  Something happens to you that satisfies your preferences.  In chapter 4, on virtue, we will see that benevolence is typically considered a virtue.  What makes someone benevolent?  Wishing others well, and at least sometimes acting successfully on those wishes.  If a benevolent person helps you in some way, you are the patient of her action.  An extreme version of benevolence – altruism – will be discussed in chapter 7.  An altruist doesn’t just wish others well and do things for their sake; she does so at significant cost to herself.  Finally, in chapter 8, we will consider moral development.  None of us grows up in a social vacuum.  We are all raised by someone, such as a parent, grandparent, aunt, or uncle.  We are all patients of the myriad interventions our caretakers make in our lives, which lead us to cultivate good (or bad, or mixed) character.

Thus, patiency is a crucial concept in moral psychology.  When I ask what I should do to him for her, I’m asking what follows from her patiency, his patiency, and my own patiency.  This is an example of how questions are asked: we start with something seemingly simple and comprehensible (“What should I do to him for her?”) and parse out some of the deeper questions and concepts it presupposes.


3 Agency


What should I do to him for her?

This question presupposes agency.  Things don’t just happen to people: sometimes people do things.

Return to the example of the woman on the train.  She might do something.  She might stand up and walk to the next train car.  She might lean back and hold her magazine up in front of her face, blocking the stranger’s attempt to make eye contact and muffling his voice.  She might tell him off.  She might scream.  She might kick him in the shin.

Likewise, he might do something.  He might continue to bug her until she escapes the train car.  He might sit down next to her.  He might call her a bitch.  He might throw his hands in the air and walk away.  He might switch to bothering someone else.  He might grow bored and start playing with his smartphone.

I, too, might do something.  (There’d be little point in asking myself what I should do if I couldn’t!)  If my usual wariness of strangers holds up, I might cautiously eye the situation and hope impotently that nothing too bad happens.  I might instead stride over and command him to stop bothering her.  More helpfully, I might stroll over and ask her a nonchalant question that lets her redirect her attention without seeming to be too rude to him.

People do things.  When they do, we have an example of agency.  In other words, some person is the agent of (is active with respect to) some event or action.  Moral psychology asks what it is about us that makes us agents, and how our agency figures in our own and other people’s moral perception, behavior, decision-making, emotions, character, and institutions.

Several chapters of this book are directly related to agency.  Chapter 1 discusses how our preferences affect our choices, and hence our actions.  It’s tempting to assume that our preferences are fairly stable, at least once we reach adulthood.  Empirical research suggests otherwise.  It’s even more tempting to assume that our preferences are transitive: if I prefer chocolate ice cream to vanilla and prefer vanilla to strawberry, then I’d better prefer chocolate to strawberry.  Again, empirical research suggests that, at least in some cases, transitivity breaks down.  To what extent can we be the authors of our own actions if our preferences are unstable and inconsistent?  Chapter 2 is about the relation between deliberative agency on the one hand and implicit biases on the other hand.  The vast majority of people in the developed world would, if asked, reject racist and sexist beliefs.  But social psychologists have demonstrated that most of us nevertheless implicitly accept and even act on racist and sexist associations.  When we do, are we really expressing our own agency?  If we aren’t, what’s going on?  Chapter 3 asks whether we are more or less agentic when we are motivated by emotions.  Particularly intense emotions seem to come over us like a hurricane, swamping our planning, deliberation, and even our agency.  But deficits in emotion have been shown to correlate with demonstrably bad decision-making.  Perhaps the truth lies somewhere between the Kantian rejection of emotions on the one hand and the Humean embrace of them on the other.  Chapter 4 connects agency with virtue, which for many theorists is a matter of acting in accordance with practical reason.  Psychological research over the last several decades has demonstrated that the human capacity for slow, careful, deliberative reasoning is much more limited than most philosophers have presupposed.  
The vast majority of our decision-making relies on quick, unconscious, vaguely emotional mental shortcuts.  Does this undermine our agency (as many suppose), or does it instead enable us to expand our agentic engagement with the world and each other?
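Transitivity, unlike stability, is a formal property, so it can be checked mechanically.  Here is a minimal sketch in Python; the flavors and the preference data are invented for illustration, and missing pairs are simply treated as "not preferred":

```python
from itertools import permutations

# prefers[(a, b)] is True when a is strictly preferred to b (invented toy data).
prefers = {
    ("chocolate", "vanilla"): True,
    ("vanilla", "strawberry"): True,
    ("chocolate", "strawberry"): False,  # the intransitive case
}

def is_transitive(prefers):
    """Check whether a strict preference relation is transitive.

    Transitivity fails if some a is preferred to b, b to c,
    but a is not preferred to c.
    """
    options = {x for pair in prefers for x in pair}
    for a, b, c in permutations(options, 3):
        if prefers.get((a, b)) and prefers.get((b, c)) and not prefers.get((a, c)):
            return False
    return True
```

Run on the data above, `is_transitive` reports a violation: chocolate beats vanilla, vanilla beats strawberry, yet chocolate fails to beat strawberry.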

If people were incapable of agency, if they were entirely passive beings, the contours of whose lives were completely determined by outside forces, there wouldn’t be much for moral psychologists to think about.  We could construct theories about what it meant for one person to have a better life than another, what it meant for one person to have as good a life as possible for such an impoverished creature, what it meant for such a life to improve or deteriorate.  But that would be about it.  The introduction of agency greatly complicates moral psychology.  Now, things don’t just happen to us; we do things.  Some of those things turn out as we want or intend them to.  Others don’t.  This imposes some constraints on what it means to act well, to be a successful agent.  Sometimes we do what we want, but then we are disappointed by the result.  This suggests that we need a better understanding of our own preferences, a topic of chapter 1.  Sometimes we accomplish one goal but in so doing thwart our striving for a second goal.  This suggests that we need to understand agency holistically, so that it involves progress towards a complete set of goals without too much self-undermining.

Thus, agency, like patiency, is a crucial concept in moral psychology, and it’s a concept that complicates the inquiry.  When I ask what I should do to him for her, I’m asking what follows from her agency, his agency, and my own agency.  This is a further example of how questions are asked: we start with something seemingly simple and comprehensible (“What should I do to him for her?”) and parse out some of the deeper questions and concepts it presupposes.


4 Sociality


What should I do to him for her?

This question presupposes sociality.  Things happen to people: they get sick, they enjoy pleasant weather, they endure the many small indignities of youth and the even more numerous small indignities of aging.  People do things: they stand up and walk away, they shrink into their seats, they write books.  In many interesting cases, though, one person does something to someone else.  Indeed, some of the examples I gave above had this flavor.  The only reason I asked myself what I should do to him for her was that he was doing something to her in the first place: he was harassing her.  As I deliberated about what to do, I considered the fact that there were things she might do to him, such as pointedly ignoring him, additional things he might do to her, such as insulting her, and various things I might do to him on her behalf, such as confronting him for harassing her.  Moral psychology asks what it is about us that makes us social, and how our sociality figures in our own and other people’s moral perception, behavior, decision-making, emotions, character, and institutions.


                         Y is a patient                    Y is not a patient
X is an agent            X harasses Y.                     X stands up.
                         X kicks Y in the shin.            X shrinks into his seat.
                         X confronts Y.                    X writes a book.
X is not an agent        Y gets sick.
                         Y enjoys pleasant weather.
                         Y grows old.

Table 1: agency x patiency examples

As table 1 illustrates, people can be simple patients, to whom things just happen; they can be simple agents, who just do things; but they can also be complex agents and patients: they can do things to each other.  In such cases, agency and patiency are inextricably intertwined.  One person’s agency is the cause or even a constitutive part of another person’s patiency.  One person’s patiency is the effect of another person’s agency.  When asked, “What happened to you?” my fellow commuter would be giving an incomplete answer if she responded, “I was harassed.”  Being harassed is not like enjoying pleasant weather; it’s not something that can happen to someone all on their own.  A more complete answer would be, “I was harassed by a stranger.”  Likewise, if someone later asked the creep, “What did you do on the train?” he would be giving an incomplete response if he answered, “I harassed.”  Harassing isn’t like standing up; it’s not something someone can do all on their own.

We can represent these relations with the following schematic diagram.[4]


Figure 1: agent-patient relation


In this diagram (and others of its sort that I’ll use below), a dot represents a person.  An arrow proceeding away from a dot represents that person exercising agency.  An arrow pointing at a dot represents that person enduring patiency (good, bad, or neutral).  I’ll put a box around each such relation.

Figure 1 represents the simplest sort of sociality: one agent does something to another person.  A more complex form of sociality occurs when two people are agents and patients with respect to each other at the same time: you do something to me while I do something to you.  For instance, we dance together, each making suggestions to the other through subtle bodily movements, gestures, glances, and words.  Call this interactivity.  Figure 2 represents interactive sociality of this sort.




Figure 2: interactive agent-patient relation


Things happen to people; people do things; sometimes, these are the same event.  But sociality is often more complicated than that.  Interactivity is one source of complexity, but a minor one.  Another source of complexity is the possibility – indeed, the prevalence – of recursively embedded agent-patient relations.  This might sound frighteningly technical, but don’t worry.  Recursion is all over the place, and I’m certain that you’re already familiar with it, if only informally.  Recursion is a process in which objects of a given type are generated by or defined in terms of other objects of the same type.  For instance, think of your ancestors.  What makes someone an ancestor of yours?  The answer to this question relies on recursion: the parents of X are ancestors of X (that’s the non-recursive step) and ancestors of ancestors of X are ancestors of X (that’s the recursive step).  Your grandparents are your ancestors because they’re the parents of your parents.  Your great-grandparents are your ancestors because they’re the parents of the parents of your parents.  Your great-great-grandparents are your ancestors because they’re the parents of the parents of the parents of your parents.  The great-great-grandparents of your great-great-grandparents are your ancestors because they’re the ancestors of your ancestors.  And so on.

Social agent-patient relations can also be recursively embedded.  The majority – probably the vast majority – of the complexity of moral psychology derives from such embedding.  In fact, the example I started off with has a recursive structure.  When I asked myself what I should do to him for her, I was thinking of myself as an agent who acts on a preexisting agent-patient relationship.  After all, I would have had no reason to intervene if he hadn’t been harassing her in the first place.


Figure 3: recursively embedded agent-patient relations


Figure 3 illustrates the situation in which one person acts on a second person acting on a third person.  Since this relation is recursive, it can be expanded yet another step (and another, and another…), as illustrated in figure 4.


Figure 4: doubly recursively embedded agent-patient relations


Although figure 4 might seem complicated, I think we can pretty easily conjure up a situation that it characterizes.  For instance, imagine that I decide to stride over to the creep and tell him to cut it out.  As I move towards him, my friend, who realizes what a foolhardy thing I’m about to do, grabs me by the wrist and whispers “no no NO!”  My friend acts on me acting on him acting on her.  This sort of thing happens, I suggest, all the time.  And, as you can see, the more recursion there is, the more complicated the situation becomes.

Moral psychology asks what it is about us that makes us social, and how our sociality figures in our own and other people’s moral perception, behavior, decision-making, emotions, characters, and institutions.  Sociality is what makes moral psychology so complicated but also so interesting.  In a way, it’s the underlying theme of every chapter of this book but it features most prominently in chapters 3, 4, 6, 7, and 8.  In chapter 3 on emotion, we will see that emotions often function as signaling devices.  When I display anger, I signal to you that I am prepared and committed to reacting aggressively to offenses.  When you display disgust, you signal to me that the object of your disgust is contaminated and to-be-avoided.  Emotional signaling fits well into the recursive embedding structure discussed here.  When I display anger towards you, I also often signal to other people that they should be indignant over the offense you’ve caused me (a relationship like the one in figure 3).  When you display contempt towards my behavior, you also often signal to other people that they should feel superior to me.  Chapter 4 on virtue focuses primarily on the interlocking virtues of trustingness and trustworthiness.  Chapter 6 on moral disagreement investigates the ways in which sociality influences agreement on moral values, norms, heuristics, and decisions.  Chapter 7 on altruism is especially concerned with the potential tension between evolutionary theory and altruistic norms.  Chapter 8 explores the ways in which interlocking, recursively-structured agent-patient relations influence moral development.

Thus, sociality, like patiency and agency, is a crucial concept in moral psychology, and it’s a concept that greatly complicates the inquiry.  When I ask what I should do to him for her, I’m asking what follows from our sociality, that is, from the fact that I can act on him acting on her.  This is another example of how questions are asked: we start with something seemingly simple and comprehensible (“What should I do to him for her?”) and parse out some of the deeper questions and concepts it presupposes.


5 Reflexivity and temporality


What should I do to him for her?

This question presupposes reflexivity.  People do things; things happen to people; people do things to people.  In some cases, the agent and the patient are the same person.  In other words, people can do things to themselves.  This is easiest to see if we also introduce the last main conceptual presupposition of my question: temporality.  As I decide what to do to him for her, here are some considerations that might cross my mind:

If I don’t intercede somehow, I’ll feel guilty all day.

If I manage to distract him without starting a fight, I’ll be proud.

If I act like a coward now, I’ll be cultivating bad habits.

All of these considerations involve thinking of my future self as the patient of my current self as agent.  Another way of putting the same point is that I’m taking a social perspective on myself: on the one hand, me-now is the agent who does something to a patient; on the other hand, me-in-the-future is the patient to whom something is done by that agent.  These concepts also interact with sociality and the recursive embedding of agent-patient relations.  For instance, suppose I make a bad decision on Monday (agent) that leads me to make an even worse decision on Tuesday (patient-to-Monday-me) that leads me to suffer immensely on Wednesday (patient-to-Tuesday-me).  This is the sort of structure represented in figure 3, except that all three nodes represent me – just at different stages of my life.

Whenever we engage in long-term projects – especially long-term projects that are meant to have some effects on our future selves – patiency, agency, sociality, reflexivity, and temporality are all involved.  Moral psychology asks what it is about us that makes us reflexive and temporal, and how our reflexivity and temporality figure in our own and other people’s moral perception, behavior, decision-making, emotions, characters, and institutions.

Several chapters of this book are directly related to reflexivity and temporality.  The instability of preferences discussed in chapter 1 is a temporal instability, and it threatens agency because human agency as we normally conceive of it is meant to be temporally extended.  I don’t just do things now.  I do things now so that I can do and experience things later.  If my preferences change in the meantime, then setting myself up to do or experience something later seems pointless: what if I no longer want to do or experience that?  What if I’ve just wasted my effort?  The interaction between deliberative agency and implicit biases discussed in chapter 2 concerns, among other things, whether I’m able to reflectively endorse my own choices.  Emotions, discussed in chapter 3, can function as social signals; they can also function as commitment devices.  If I have a particular emotion, I’m committing myself (if only unconsciously and tentatively) to a plan of action in the future.  If I act wrongly, one of the things that may happen to my future self is the suffering of remorse.  Virtue, discussed in chapter 4, is acquired (according to Aristotle and many who follow in his footsteps) through long-term, goal-directed cultivation; I have a plan for my own life over time, which I proceed to carry out, making me both the agent and the patient of myself over the course of months, years, and even decades.  Intuitions, discussed in chapter 5, are arguably the automatic deliverances of capacities that have been built up over time through exposure to various theories, considerations, and arguments.

Reflexivity and temporality complicate moral psychology in various ways.  This is easiest to see if we imagine creatures that are just like humans in other ways but who have no long-term memory, no sense of self, and no capability to plan, to feel proud of their accomplishments, or to experience remorse.  Although such creatures would be patients (things would happen to them) and agents (they would do things) who were in some ways social (they would do things to each other), they would be very unlike us insofar as they could not intentionally do things to and for themselves, could not be grateful to or disappointed with their past selves, could not engage in long-term projects, and could not enjoy long-term friendships.  Clearly, these are crucial aspects of human moral psychology.

Thus far, we have explored five crucial concepts in moral psychology: patiency, agency, sociality, reflexivity, and temporality.  I don’t want to suggest that these are the only concepts moral psychologists find worth studying, but I do think they are among the most central.  Other important concepts will crop up throughout this book.  Some, such as emotion and intuition, will be treated at greater length.  Others, such as imagination and mindfulness, will receive less attention.  I encourage you to follow up on any and all of the concepts that capture your interest, and will provide lists of secondary sources at the end of each chapter to help direct and slake your curiosity.  In the remainder of this introduction, I will characterize some of the major normative theories that you might already be aware of in terms of their emphases on patiency, agency, sociality, reflexivity, and temporality.  After that, I’ll conclude by considering objections to moral psychology that might be raised because of the ever-fraught relationships among contingency, necessity, and normativity.  In particular, I’ll focus on the truism that one can never deduce an ought from an is.


6 Comparing emphases of major moral theories


In the history of Western philosophy, four major moral theories have emerged: utilitarianism, Kantian ethics, virtue ethics, and care ethics.  Since it’s likely that you’ve encountered at least some of these views before reading this book, in this section I compare how they relate to the five main concepts in moral psychology: patiency, agency, sociality, reflexivity, and temporality.


6.1 Utilitarianism


Utilitarianism is the best-known variety of a family of views known as consequentialism.  According to consequentialism, the goodness of an act is determined solely by the goodness of the consequent state of affairs.  This view is typically combined with a position on what makes a state of affairs good and with a theory of right action.  For instance, hedonist act utilitarianism says that the only thing that contributes to the goodness of a state of affairs is pleasure, that the only thing that detracts from the goodness of a state of affairs is pain, and that an action is right just in case it maximizes the amount of goodness in the consequent state of affairs.
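Hedonist act utilitarianism thus reduces right action to a maximization problem.  Here is a toy sketch in Python; the actions and their pleasure/pain payoffs are entirely invented for illustration, not a claim about how such payoffs could actually be measured:

```python
# Invented payoffs: each available action mapped to the (pleasure, pain)
# it would produce in the consequent state of affairs.
outcomes = {
    "volunteer at the shelter": (10, 2),
    "take a nap": (4, 0),
    "spread gossip": (6, 5),
}

def right_action(outcomes):
    """The right act is whichever available act maximizes net goodness,
    where net goodness is total pleasure minus total pain."""
    def net_goodness(action):
        pleasure, pain = outcomes[action]
        return pleasure - pain
    return max(outcomes, key=net_goodness)
```

On these numbers, volunteering (net goodness 8) beats napping (4) and gossiping (1), so it is the right act by the hedonist act-utilitarian standard.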

Pleasure and pain are mental states that humans and other animals enjoy and suffer.  Thus, utilitarians and other consequentialists place their primary emphasis on patiency.  Jeremy Bentham, one of the foremost utilitarian thinkers in philosophical history, put the point well while asking what determines whether a creature has moral worth and deserves moral consideration:


Is it the faculty of reason, or, perhaps, the faculty of discourse?  But a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old.  But suppose the case were otherwise, what would it avail?  the question is not, Can they reason? nor, Can they talk? but, Can they suffer? (1823, chapter 17, footnote)


For someone like Bentham, it doesn’t matter whether you can engage in reasoning (including the practical reasoning required for agency and the reflexivity required for long-term planning).  It doesn’t matter whether you can talk.  The main moral question for him is whether you can suffer, whether things can happen to you – in particular, bad and painful things.

Utilitarianism thus gives pride of place to patiency and de-emphasizes agency and reflexivity.  Bentham’s lack of concern for talking might lead one to think that he and other utilitarians have no regard for sociality.  In one sense, that’s correct.  However, utilitarians and other consequentialists also tend to think that every being capable of suffering matters equally.  And they recognize that people are capable of both inflicting suffering on one another and alleviating one another’s suffering.  For this reason, utilitarians put a great deal of emphasis on sociality, though deriving that emphasis from its relation to patiency and suffering.

Lastly, utilitarians tend to put great emphasis on temporality.  What I have in mind here is the fact that the consequences of an action are typically construed not just as what happens immediately afterwards but as everything that flows from the action.  Everything, for all time?  At the very least, everything that could be foreseen by a very intelligent and dedicated investigator.  Utilitarians care so much about such long-term consequences that they have debates about population ethics, asking questions such as “How many people should there be?” (Blackorby, Bossert, & Donaldson 1995).


6.2 Kantian ethics


Kantian ethics, also sometimes called ‘deontological ethics’, puts most emphasis on the two concepts that utilitarianism deemphasizes (agency and reflexivity) while according less weight to the concepts utilitarianism emphasizes (patiency, sociality, and temporality).  Kant thought that an account of moral obligation could be derived from the structure of agency itself.  He called this the categorical imperative because it applies to every agent in every action they undertake regardless of their desires, preferences, and values.  The best-known formulation of the categorical imperative states that you must “act only in accordance with that maxim through which you can at the same time will that it become a universal law” (4:421).  This book is not an introduction to major moral theories, let alone the history of philosophy, so I will not go into much detail interpreting the categorical imperative.  Kant’s idea, though, is that simply in virtue of being an agent you are constrained to act from some motives rather than others.  Clearly, then, agency figures importantly in Kantian ethics.

The other core concept that receives primary emphasis in Kantian ethics is reflexivity.  This is already somewhat evident from the first formulation of the categorical imperative, which requires you to reflect on and extrapolate from your own motives, but it comes into focus if we consider the third formulation: act as if you were through your maxim always a legislating member in the universal kingdom of ends (4:439).  On this view, a moral act is one that can be self-legislated, i.e., an act that is in accordance with a law one could give not only to others but also to oneself.

Agency and reflexivity have pride of place in Kantian ethics, but the other three concepts receive some attention.  Patiency and sociality get their due in the second formulation of the categorical imperative: treat humanity – whether your own or someone else’s – never merely as the means to some end but always as an end in its own right.  In this formulation, we can see that Kant cares not only about agency but also about what’s done to people.  He thinks it’s always wrong to treat someone as a mere means to your own end.  However, patiency matters for Kant only derivatively because he thinks that what’s wrong about treating someone as a mere means is that, in so doing, you don’t respect their agency.  Thus, the importance of what happens to us and what we do to each other depends on the antecedent importance of agency.

Finally, Kantian ethics doesn’t totally discount temporality (Kant argues that we have an imperfect duty to develop our own talents, for instance), but it also doesn’t place primary emphasis on it.


6.3 Virtue ethics


Virtue ethics is a family of views that focuses less on what it’s right to do and more on what sort of person it’s good to be.  A good person is someone with many virtues (compassion, courage, honesty, trustworthiness) and few vices (selfishness, laziness, unfairness, rashness).  Ancient Greek philosophers were basically all virtue ethicists of one kind or another.  Plato emphasized the virtues of courage, temperance, wisdom, and justice.  Aristotle famously thought that every virtue was a middle state between a pair of vices.  For instance, courage is the disposition to fear neither too many things nor too few things, to fear them neither too intensely nor not intensely enough, to fear them neither for too long nor for too short a period, and so on.

Utilitarian ethics focuses primarily on patiency, sociality, and temporality; Kantian ethics focuses primarily on agency and reflexivity.  Virtue ethics has a more balanced approach (this isn’t necessarily a good or a bad thing – it’s just a matter of emphasis), putting moderate emphasis on all five central concepts.  A virtuous person is characteristically active, doing things for reasons.  A virtuous person is also quite social.  Aristotle, for instance, devotes two whole books (out of ten) of the Nicomachean Ethics to friendship and another to justice.  Additionally, because virtue ethicists are concerned with the shape of a person’s whole life and the slow acquisition of virtuous traits, they pay more attention to temporality and moral development than utilitarians and Kantians.  They place slightly less emphasis on patiency and reflexivity, though these too figure in the account.


6.4 Care ethics


The other three views surveyed in this section are venerable, traditional approaches to morality.  The ethics of care is much more recent.  The dawn of care ethics can be dated with some precision to the publication, in 1982, of Carol Gilligan’s In a Different Voice: Psychological Theory and Women’s Development.  In her book, Gilligan explored the ways in which women (at least the women she interviewed) tend to talk in terms of care, emphasizing personal relationships and attachments (motherhood, siblinghood, friendship, etc.) and the special responsibilities that flow from these.  She accused existing moral theories, such as Lawrence Kohlberg’s (1971) Kantian approach to moral psychology, of ignoring and even sometimes denigrating such caring relationships in favor of a completely impartial, legalistic notion of rights and justice.  Although this criticism is somewhat overstated (as I mentioned above, Aristotle devotes twice as much attention to friendship as he does to justice), popular versions of both utilitarian and Kantian ethics clearly deserve Gilligan’s rebuke.  Since 1982, various philosophers, including Kittay, Noddings, and Slote, have formulated moral theories in the wake of Gilligan’s critique.

Like the other theories canvassed here, care ethics is actually a family of views.  What unites them is their emphasis on personal, face-to-face relationships and attachments, as well as their recognition that we all come into this world as completely helpless, dependent, screaming, fragile lumps of flesh.  Care ethicists therefore focus primarily on human sociality and patiency, with derivative interest in agency (someone has to do the caring, in addition to being cared for, after all) and temporality.  Reflexivity receives little attention in the care tradition.


Figure 5: Emphases of the four major moral theories


These differences in emphasis are illustrated graphically in figure 5.


7 Is and ought


To some people, the idea of combining scientific psychology with philosophical ethics to investigate moral psychology will seem only natural.  Philosophy helps to set the terms of the investigation (in this case, patiency, agency, sociality, reflexivity, and temporality), proposes questions and models, dreams up potential counterexamples; psychology empirically determines whether the terms refer to anything in the world, answers the questions, tests the models, and determines whether the potential counterexamples can be realized.  Psychology as an academic discipline split off from philosophy less than two centuries ago; it’s unsurprising that the two fields would sometimes collaborate.  To other people, though, this project might seem to be doomed from the start.  Science studies how things are, whereas philosophy studies how things ought to be and how they must be.  Science can never, even in principle, help to answer philosophical questions.

As you’ve probably guessed, I disagree, and for several reasons.  First, science can investigate modal reality (how things not only are but can and can’t be).  To the extent that we accept the truism that people can’t be morally required to do things or be ways that are impossible, scientific investigation of moral psychology constrains moral theory.  Second, scientific psychology can also investigate not just whether various kinds of behavior, character, and attachments are possible but also how demanding it would be for people to act, be, and relate in those ways.  The harder it is to live up to a moral theory’s requirements, the more suspicious we should be of that theory.  This is not to say that morality can’t make legitimate demands on us, just that the more extravagant those demands grow, the more suspicious we should be of the theory that generated them.  Third, even if we decide to hold onto very demanding norms, psychological science can help us to see how to live up to those norms.  In the same way, even if we hold onto extremely idealized norms of physical health, biological science can help us to see how to approximate those norms in our own lives.

Finally, morality is an important part of human behavior and cognition; as such, it’s something psychologists want to study, even if their investigations never end up suggesting revisions to moral norms.  The idea that this aspect of psychology is simply off-limits, as if philosophers could somehow call “dibs” on it, is preposterous.  As Levitin put it, those who think that science cannot study values typically commit a fallacy: “they seem to have confused making value judgments, which is incompatible with scientific objectivity, with studying objectively how other people make them – a phenomenon as amenable to psychological study, in principle, as other forms of human learning and choice” (1973, p. 491).  Moral psychology doesn’t aim to replace utilitarianism, Kantian ethics, virtue ethics, or the ethics of care.  In the case of care, this should be especially obvious: the entire edifice of care ethics was inspired by empirical research on moral psychology!  Instead of taking their ball and going home, philosophers need to learn to share their insights, theories, and models with their scientist neighbors.

It’s not all good news for traditional normative ethics, though.  Moral theories have empirical presuppositions.  Moral psychology can investigate those presuppositions.  Sometimes, to the moral theorist’s delight, they turn out to be well-supported.  Sometimes their foundations look pretty shaky.  The relation between philosophy and psychology doesn’t need to involve confrontation or scorn, though.  A better attitude for both sides to take, I contend, is one of curiosity and intellectual humility.  A curious investigator is tentatively committed to her views, but she’s also delighted to find out that she’s wrong because that spurs her to construct a better model, a stronger theory, a more nuanced hypothesis.  There’s no part of reality that’s specially marked off for philosophers and only philosophers to investigate.  By the same token, there’s no part of reality that’s specially marked off for psychologists and only psychologists to investigate.  If you don’t believe me now, perhaps you will when you finish this book.

[1] For more on mediation and moderation see Baron & Kenny (1986).

[2] Paul Krugman, March 17, 2014, on his blog, “The Conscience of a Liberal,” in a post titled “Sergeant Friday was not a Fox”

[3] When a term appears for the first time in boldface, it is a technical term that is defined in the glossary at the end of the book.

[4] I am here indebted to James Wilk.

What I said at Princeton: Some Normative Implications of Preference Instability and Indeterminacy

Today, I presented a paper on some normative implications of the instability and indeterminacy of preferences for the Princeton University Neuroscience of Social Decision Making series.  On Monday, I present the same work to the Center for Human Values Laurence S Rockefeller seminar.  Here’s a draft of the paper.



Psychologists and behavioral economists have recognized for decades that preferences and other motivational attitudes are indeterminate: for some pairs of outcomes, a and b, a given agent will neither prefer a to b, nor prefer b to a, nor be indifferent as between a and b.  Sometimes, there’s just no fact of the matter concerning what people want.  Moreover, psychologists and behavioral economists have recognized for some time that preferences and other motivational attitudes are unstable: for some pairs of outcomes, a and b, a given agent may prefer a to b now, but be disposed to reverse her preference ordering a few moments from now in response to seemingly trivial and normatively irrelevant situational factors.  Philosophical theories that make use of the concepts of preferences and desires, however, rarely take the indeterminacy and instability of preferences into account.
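These two phenomena can be stated a bit more precisely.  In standard decision theory, a rational agent’s preferences are assumed to satisfy the completeness axiom; indeterminacy is the failure of that axiom, and instability is a reversal of the ordering across nearby times.  A rough formalization (the notation here is the standard decision-theoretic notation, not drawn from any particular source):

```latex
% Completeness: for any outcomes a and b, the agent strictly prefers one
% to the other or is indifferent between them.
\text{Completeness:}\quad \forall a, b:\; a \succ b \;\lor\; b \succ a \;\lor\; a \sim b

% Indeterminacy: completeness fails for at least one pair of outcomes.
\text{Indeterminacy:}\quad \exists a, b:\; \neg(a \succ b) \land \neg(b \succ a) \land \neg(a \sim b)

% Instability: the ordering reverses between times t and t',
% where t' - t is small and the intervening changes are normatively irrelevant.
\text{Instability:}\quad \exists a, b:\; a \succ_{t} b \;\land\; b \succ_{t'} a
```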

This paper has three main parts.  In the first, I discuss two convergent lines of empirical evidence, both of which suggest that preferences – at least as traditionally conceived – are both indeterminate and unstable.  In the second, I outline a descriptive model of preferences that I first articulated elsewhere (Alfano 2012) as an attractive response to this evidence.  In the third, I draw on my descriptive model to explore some of the normative implications of the indeterminacy and instability of preferences with respect to right action, wellbeing, public policy, and meta-ethics.

At first blush, it might seem that indeterminacy and instability spell trouble for normative theories couched in terms of preferences and desires.  After all, if right action is at least partially a function of preference-satisfaction, and preferences are indeterminate or unstable, then what it would be right to do would presumably inherit this indeterminacy and instability.  In other words, it could be argued that there’s no fact of the matter about what it would be right to do, or that that fact is in constant flux.  Similar worries arise in the cases of public policy and personal wellbeing: if your welfare is at least partially a function of whether you’re getting what you want, and your desires are indeterminate or unstable, then presumably there is no fact of the matter concerning how well your life is going, or that fact is in constant flux.  Pursuing or promoting one life goal over another might then turn out to be a mug’s game.  Against these worries, I argue that the local indeterminacy and instability captured by my descriptive model preserve any intuitions worth keeping about the normative status of preferences.  I then go a step further, arguing that local indeterminacy and instability are to be embraced because they help to counter a prominent argument against the normative weight of preferences: the demandingness objection.  I conclude with some methodological remarks about the appropriate use of empirical information in philosophical theorizing.
