Draft of Moral Psychology, Chapter 1: Preferences

As always, comments, suggestions, questions, criticisms, etc. are most welcome.

“We are strangers to ourselves.”

~ Friedrich Nietzsche, On the Genealogy of Morals, Preface, section 1

 

1 The function of preferences: prediction, explanation, planning, and evaluation

 

Among our diverse mental states, some are best understood as representing how the world is. If I know that wine is made from grapes, I correctly represent the world as being a certain way. If I think that Toronto is the capital of Canada, I incorrectly represent the world as being a certain way (it’s actually Ottawa). Other mental states are best understood as moving us to act, react, or forbear in various ways. I want to see the Grand Canyon before I die. I desire to know how to speak Spanish. I prefer to use chopsticks rather than a fork to eat sushi. I intend to keep my promises. I aim to be fair. I love to hear New Orleans-style brass band music. Depending on their longevity, their intensity, their specificity, their malleability, and their idiosyncrasy, we use different words to describe these mental states: values, drives, choices, appraisals, volitions, cravings, goals, reasons, purposes, passions, sentiments, longings, appetites, aspirations, attractions, motives, urges, needs, acts of will. Such mental states are sometimes referred to as pro-attitudes, and related states that move someone to avoid, escape, or prevent a particular state of affairs are correspondingly called con-attitudes.

If you put together an agent’s representations of how the world is and the mental states that move her to act, you have some hope of predicting and explaining her actions. Suppose, for instance, that you know that I have a free weekend, that I deeply yearn to see the Grand Canyon, and that I have some spare cash. What am I going to do? It’s not unreasonable to predict that I will purchase a plane ticket (or rent a car) and go to Arizona. Now suppose that you know that my comprehension of geography is pretty weak. I still want to see the Grand Canyon, but I mistakenly think that it’s in Chihuahua. (Oops – nobody’s perfect). What do you think I’ll do now? It’s not unreasonable to predict that I’ll still purchase a plane ticket or rent a car, but that instead of going to Arizona I’ll end up in Mexico (and pretty frustrated!). Someone’s representations and purposes combine to lead them to act. If you know what someone’s representations and purposes are, you can to some extent predict what they’ll do.

In the same vein, knowing what someone’s representations and purposes are puts you in a position to explain their actions. Suppose you see me stand up, walk across the room, open a door, and walk through the doorway. On the door, you notice the following icon:

Figure 1

 

Why did I do what I did? A plausible explanation isn’t too hard to assemble. If you saw the sign indicating that the door led to the men’s bathroom, then presumably I did too: so I probably had a relevant representation of what was on the other side of the door. What desire (preference, goal, intention, need) might I have that would rationalize my behavior? The most obvious suggestion is that I wanted to relieve myself. Of course, it’s possible that I went to the men’s bathroom to participate in a drug deal, to conceal myself while I had a good long cry, or for some other reason. But if you’re right in thinking that I wanted to urinate, then you’ve successfully explained my action. If you know what someone’s representations and purposes are, you can to some extent explain what they’ve done.

To predict and explain other people’s actions, we need some idea of what they prefer (want, desire, value, need). But that’s not all that preferences are for. Preferences also figure in planning and evaluation, and when they’re structured appropriately, they contribute to the agent’s autonomy. Think about your best friend. Imagine that her birthday is in a week. You love your friend, and want to do something special for her birthday. You don’t need to predict your own action here, nor do you need to explain it. Your task now is to plan: in the next week, what can you do for your friend that will simultaneously please and surprise her without emptying your bank account? To give your friend a special birthday present, you need to know what she enjoys (or would enjoy, if she hasn’t experienced it yet). To be motivated to give your friend a special birthday present in the first place, you need to want to do something that she wants. In philosophical jargon, you must have a higher-order desire – a desire about another desire (hers). You want to give her something that she wants.

It’s remarkable how adept people can be at solving this sort of problem, which involves the sort of recursively embedded agent-patient relations discussed in the introduction. Think about it. To plan a good gift, you need to know now not just what your friend currently wants but what she will want in the future. You can’t just give her what you yourself want or what you will want in a week. You can’t give her what she wants now but won’t want in a week. To successfully give your friend a good present, you have to figure out in advance what she’ll want in a week.

The same constraints apply when you plan for yourself. Think about choosing your major in college. What do you want to specialize in? Musicology is interesting, but will you still be interested in it three years from now? Will it set you up to earn a decent living (something you’ll presumably want in five, ten, and twenty years)? Marketing might earn you a decent living, but will you find it boring (not want to do it, or even want not to do it) after a few years? Are you going to want to have children? In that case, you may need more income than you would if you didn’t want (and didn’t have) children. Living a sensible life requires planning. You need to make plans that affect your friends, your family, your colleagues, and your rivals. You also need to make plans for yourself. Doing this successfully requires intimate knowledge of (or at least some pretty good guesses about) your own and others’ future desires, needs, and preferences.

Thus, preferences figure in the prediction, explanation, and planning of action. They’re also important when we morally evaluate action. I reach out violently and knock you over, causing you some pain and surprising you more than a little. What should you think of my action? It depends in part on what moved me to do it. If I’ve shoved you because I want to hurt you, if I’m engaged in an assault, you’re going to think I’m doing something wrong. If I’m not depraved, I’ll also feel guilty. If I’m just clumsily gesturing at a pretty tree over there, I should probably know better, but you’ll temper your anger. I may not feel guilty, but I’ll probably be embarrassed or even ashamed. If I’m knocking you out of the way of a biker who’s zooming down the sidewalk towards you, perhaps you’ll feel grateful, while I’ll feel relieved or even proud.

What marks the difference between your reactions to my action? What marks the difference between my own assessments of it after the fact? It’s not that my shoving you and your falling hurts more or less in one case or the other. Instead, what leads you to evaluate my action as wrong, misguided, or benevolent is the pro- (or con-)attitude that moves me to act. Likewise, what leads me to feel guilt, embarrassment, or relief is the pro- (or con-)attitude that moved me to act. If I want to hurt you, if I want to do something to you that you prefer not to happen, you’ll say that I’ve acted wrongly. If my aim is to do something relatively harmless (something you neither prefer nor disprefer) like pointing out a feature of the environment, you’ll perhaps think I’m a klutz, but you won’t think I’ve done something morally wrong. If I’m trying to prevent you from being run down by an out-of-control cyclist, if I want to do something to you that (once you understand it) you prefer that I do, you’ll presumably think I’ve done something morally good.

Preferences are important and versatile. They help us predict and explain actions. They help us exercise agency on our own behalf and for those we care about. They help us evaluate the actions of others and ourselves. In the context of moral psychology, there’s one last thing that preferences are good for: autonomy. According to many philosophers, such as Harry Frankfurt (1971, 1992), a person is autonomous or free to the extent that she wants what she wants to want, or at least does not want what she would prefer not to want. An autonomous agent is someone whose will has a characteristic structure. This idea is discussed in more depth in chapter 2.

As I mentioned above, we have dozens of terms to refer to pro- and con-attitudes. But the title of this chapter is ‘Preferences’. Why? Preferences are sufficiently fine-grained to help in the prediction, explanation, and evaluation of action in the face of tradeoffs. Other motivating attitudes lack this specificity. Consider, for instance, values.[1] At a high enough level of abstraction, everyone values the same ten things: power, achievement, pleasure, stimulation, self-direction, universalism, benevolence, tradition, conformity, and security (Schwartz 2012). If you want to know what someone will do, why someone did something, or whether someone deserves praise or blame for acting as they did, knowing that they accept these values gives you no purchase. Qualitatively weighting values doesn’t improve things much. Consider someone who values pleasure “somewhat,” stimulation “a lot,” and security “quite a bit.” What will she do? It’s hard to say. Why’d she go to the punk rock show? It’s hard to say. Does she merit some praise for engaging in a pleasant conversation with a stranger at the coffee shop? It’s hard to say.

Preferences set up a rank ordering of states of affairs. This is easiest to see in the case of tradeoffs. Suppose two desires are moving you to act. You’re exhausted after a long day, so you want to take a nap. But your friend just texted to suggest meeting up for a drink at a local bar, and you want to join her. We can represent this tradeoff with the following table:

 

                      Nap       Don’t nap
  Join friend          A            B
  Don’t join friend    C            D

Table 1: Choice matrix

 

In this simplified choice matrix, there are four ways things could turn out. You could take a nap and join your friend (A); you could join your friend without taking a nap (B); you could take a nap without joining your friend (C); and you could neither nap nor join your friend (D). If you have a complete set of preferences over these options, one of them is optimal for you, another is in second place, another is in third place, and the final one is in last place. Presumably A is your top outcome and D is your bottom outcome. Unfortunately, although you most prefer A (i.e., you prefer it to B, C, and D), it’s impossible. So you’re in a position where you need to weigh a tradeoff. This is where preferences become important. If you simply value the nap and value socializing with your friend, there’s no saying whether you’ll go with B or C. But if you prefer socializing to napping, we can predict that you’ll opt for B over C. By the same token, if you prefer napping to socializing, we can predict that you’ll opt for C over B.

So preferences are especially helpful in predicting behavior. They’re also great for explaining and evaluating behavior. A useful rule of thumb for explaining behavior is that people act in such a way as to bring about the highest-ranked outcome they think they can achieve. Imagine someone who prefers A to B, B to C, C to D, D to E, E to F, F to G, and G to H. She acts in such a way as to produce C. How can we explain this? Well, if we posit that she believes that A and B are out of the question (perhaps she takes them to be impossible or at least extremely difficult to achieve), then we can explain her behavior by saying that she went with the best outcome available to her.
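The rule of thumb just described can be sketched in a few lines of code. This is a toy illustration, not drawn from any study discussed here: given a preference ranking (best first) and the set of outcomes the agent believes she can achieve, it predicts the highest-ranked feasible outcome.

```python
# Toy sketch of the explanatory rule of thumb: people act so as to bring
# about the highest-ranked outcome they think they can achieve.
# The ranking and feasible set below are illustrative assumptions.

def predicted_choice(ranking, feasible):
    """Return the most-preferred outcome the agent believes is achievable."""
    for outcome in ranking:        # ranking lists outcomes best-first
        if outcome in feasible:
            return outcome
    return None                    # degenerate case: nothing is feasible

# The agent from the text: prefers A to B, B to C, ..., G to H,
# but takes A and B to be out of the question.
ranking = ["A", "B", "C", "D", "E", "F", "G", "H"]
feasible = {"C", "D", "E", "F", "G", "H"}

print(predicted_choice(ranking, feasible))  # prints C
```

On this sketch, positing that A and B are believed impossible is exactly what explains why she produced C: it was the best outcome available to her.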

 

2 The role of preferences in moral psychology

 

We’re now in a position to see how preferences relate to the five core concepts of moral psychology (patiency, agency, sociality, reflexivity, and temporality).

 

2.1 The role of preferences in patiency

 

Even if no one else is involved, even if you’re not exercising agency, your preferences matter for your patiency. According to one attractive theory of personal well-being, what it means for your life to go well is that your preferences are satisfied (Brandt 1972, 1983; Heathwood 2006). Your preferences might be satisfied through your own agency. You might prefer, among other things, to exercise agency in pursuit of some goal or other. Your preferences might be satisfied because you are involved in social relations with other people. Even so, there will be cases in which what you prefer happens or fails to happen simply by luck, accident, or unanticipated causal necessity. Fundamentally, then, well-being is associated with patiency, with what happens to you.

The preference-satisfaction theory of well-being is attractive for several reasons. It explains why one aspect of morality is intrinsically motivating. If my well-being is a matter of whether my preferences are satisfied, then I can’t help caring about my well-being. Preferences are a way of caring about things. Of course I care about what I care about. The preference-satisfaction theory of well-being also accounts for cases in which hedonic (pleasure-based) theories of well-being fail. Sometimes, it seems like my life goes no better, and may even go worse, when I experience some pleasures. I struggle with alcohol dependency and end up drinking to excess. While I enjoy the drinks, I prefer to stop. Arguably, I’m worse rather than better off because, even though I experience pleasure, my preferences are frustrated. Similarly, sometimes it seems like your life goes no worse, and may even go better, when you experience some pains. You exercise vigorously at the gym. You force yourself to study extra hard for an exam. You watch a frightening or depressing or horrifying movie. You eat a meal spiced with more than a little wasabi. These are painful experiences, but in each case you prefer to suffer through the pain. Arguably, you’re better rather than worse off because, even though you experience pain, your preferences are satisfied.

The preference-satisfaction theory of well-being also provides a way to understand well-being comparatively. People don’t just have good or bad lives. They have better or worse lives. Someone whose life is going poorly could be even worse off. Someone whose life is going well could be even better off. This distinction maps nicely onto the idea of a preference ranking. Since preferences can in principle put all the ways the world could be in order from best to worst, it’s possible to identify someone’s well-being with how far up their ranking things actually are. If you prefer A to B, B to C, C to D, D to E, E to F, F to G, and G to H, and the actual state of affairs is C, then your level of well-being is better than many ways it could be but not maximal. If things change to B, your well-being improves one notch; if things change to D, your well-being goes down a notch.
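The comparative idea can be made concrete with a small sketch. The numerical scale here is an illustrative assumption of my own, not part of the preference-satisfaction theory itself: it simply identifies well-being with how far up the agent’s ranking the actual state of affairs sits.

```python
# Toy illustration: rank-based well-being. Higher numbers mean the actual
# state sits higher in the agent's preference ranking. The particular
# scale (best outcome scores len(ranking) - 1, worst scores 0) is an
# arbitrary modeling choice for illustration.

def wellbeing(ranking, actual):
    """Score the actual state by its position in a best-first ranking."""
    return len(ranking) - 1 - ranking.index(actual)

# The agent from the text: prefers A to B, B to C, ..., G to H.
ranking = ["A", "B", "C", "D", "E", "F", "G", "H"]

print(wellbeing(ranking, "C"))                            # prints 5
print(wellbeing(ranking, "B") - wellbeing(ranking, "C"))  # prints 1
```

If the actual state is C, well-being is high but not maximal; a change to B moves it up exactly one notch, and a change to D would move it down one, just as the text describes.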

The most plausible version of the preference-satisfaction theory of well-being claims that what really contributes to your well-being is not the extent to which your actual preferences are satisfied but the extent to which your better-informed preferences are satisfied. Why? And what does it mean for preferences to be informed? Imagine that you’re about to take a bite of a delicious chile relleno. It’s your favorite dish. The cheese is perfectly melted. The poblanos are fresh. The tomatoes are local. Everything is perfect, with one exception: unbeknownst to you, the cook accidentally used rat poison rather than salt. If you eat these chiles, you’re going to end up in the hospital. But you don’t know this; in fact, you have no clue. It won’t improve your life to eat those chiles. It’ll make your life (much!) worse.

Philosophers recognize this, and that’s why they say that your well-being is a function not of what you want but of what you would want if you were better informed. If you knew that the chiles rellenos were poisoned, you would prefer quite strongly not to eat them, so even though you currently prefer to eat them, doing so would detract from rather than contribute to your well-being.

Knowledge of potential poisons is clearly not the only thing you need to have informed preferences, so philosophers of well-being argue that your better-informed preferences are your fully-informed preferences. According to this approach, the preferences that determine someone’s well-being are not the preferences that person actually has, but the ones they would have if they were fully informed. Specifying what full information means in a way that doesn’t collapse into omniscience is tricky, but one attractive suggestion is to take into account “all those knowable facts which, if [you] thought about them, would make a difference to [your] tendency to act” (Brandt 1972, p. 682) or “everything that might make [you] change [your] desires” (Brandt 1983, p. 40) – a process Richard Brandt dubbed cognitive psychotherapy.[2]

 

2.2 The role of preferences in agency, reflexivity, and temporality

 

I briefly mentioned the role of preferences in agency, reflexivity, and temporality above. Several points are relevant. First, to act at all, you must have pro-attitudes like preferences. Without states that move you to act, you’d never act in the first place, never exercise agency at all. Second, to act in the face of tradeoffs, you must have some way of ranking potential outcomes. That’s what preferences do: they put potential outcomes in a rank order. Third, to be the sort of agent that the vast majority of adult humans are, you need to engage in long-term plans and projects. This involves having some idea in advance what your future self’s preferences will or might be. It involves having temporally extended preferences, so that you want now for your future preferences, whatever they end up being, to be satisfied. It involves thinking of that future person as yourself and therefore having a special regard for him or her. If your future self mattered to you no more or less than some random stranger, long-term projects would be pretty foolish.

To be a recognizably human agent, your preferences must not violate certain constraints. Put less dramatically, your agency is undermined to the extent that your preferences violate certain constraints. You’ll fail to act successfully to the extent that you suffer from preference reversals (preferring A to B one moment and B to A the next moment). You’ll fail to act successfully if you have cyclical preferences (preferring A to B, B to C, but C to A). You’ll fail to act successfully over time if you cannot rely on your current representation of your future preferences to be largely accurate (thinking that you’ll prefer A to B when in fact you’ll prefer B to A).
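These constraints can be checked mechanically. The sketch below is an illustration using a standard depth-first cycle check, not any particular theorist’s proposal: it represents strict preferences as ordered pairs and tests for reversed pairs (preferring A to B and B to A) and for cycles (preferring A to B, B to C, but C to A).

```python
# Toy checks for the constraints in the text. Preferences are modeled as a
# set of ordered pairs: (a, b) means "strictly prefers a to b".

def has_reversal(prefs):
    """True if some pair is preferred in both directions at once."""
    return any((b, a) in prefs for (a, b) in prefs)

def has_cycle(prefs):
    """True if the 'preferred to' relation contains any cycle.

    Uses a depth-first walk; note that a reversed pair counts as a
    two-element cycle, so has_reversal implies has_cycle.
    """
    graph = {}
    for a, b in prefs:
        graph.setdefault(a, set()).add(b)

    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node}) for nxt in graph.get(node, ()))

    return any(visit(start, frozenset()) for start in graph)

consistent = {("A", "B"), ("B", "C"), ("A", "C")}
cyclical   = {("A", "B"), ("B", "C"), ("C", "A")}

print(has_cycle(consistent), has_cycle(cyclical))  # prints False True
```

An agent whose preferences pass both checks can, in principle, be relied on to rank her options coherently; an agent who fails them can be led around in circles, which is one way of putting why such preferences undermine successful action.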

 

2.3 The role of preferences in sociality

 

We tend to think that people deserve praise and blame only, or at least primarily, for their motivated actions. As I pointed out above, if someone inadvertently brings about a consequence, we tend to withhold or at least temper praise (even if the consequence was good) and blame (even if it was bad). Moral good luck is nice, but not particularly praiseworthy. Negligence is blameworthy, but less so than malice.

The role of preferences in sociality is most directly comprehensible from a utilitarian (or other consequentialist) framework, but does not depend essentially on the truth of utilitarianism. Utilitarians such as Brandt analyze right action in terms of preference-satisfaction. According to Brandt (1983, p. 37), an action is permissible if (and only if) “it would be as beneficial to have a moral code permitting that act as to have any moral code that is similar but prohibits the act.” Obligatory and forbidden actions can then be defined in terms of permissibility using well-known equivalences in deontic logic: an obligatory action is one that it’s not permissible not to do, and a forbidden action is one that it’s not permissible to do. The connection with preferences is that benefit (and harm) are understood on this account in terms of well-being. In other words, according to Brandt, an action is permissible if (and only if) it would satisfy as many fully-informed preferences, across all people, to have a moral code permitting that act as to have any moral code that is similar but prohibits the act.

Brandt’s theory is a rule utilitarian approach to right action. One could instead adopt an act utilitarian theory, according to which an action is permissible if and only if performing it in the circumstances would be as beneficial as performing any alternative action (Smart 1956). Or one could adopt a motive utilitarian theory, according to which an action is permissible if and only if it’s what a person with an ideal motivational set (i.e., a psychologically possible motivational set that, over the course of a lifetime, is as beneficial as any alternative psychologically possible motivational set) would perform in the circumstances (Adams 1976). Regardless of the precise flavor of utilitarianism one adopts, then, it’s clear that, for utilitarians, preferences are immensely important on the dimension of sociality. To act in such a way as to satisfy the most preferences, you must take into account the effects of your action not just on yourself but on everyone else. In other words, you need to take into account how your agency affects others’ patiency. Nested agent-patient relations also play a role here. What you do (or fail to do) to one person will often have some effect on what they do (or fail to do) to another person, which will have an effect on what the second person does (or fails to do) to a third person, and so on.

As I mentioned above, the relevance of preferences to sociality is easiest to see from a utilitarian perspective, but it doesn’t rely on such a perspective. Virtue ethicists and care ethicists (though perhaps not Kantians) all accept the centrality of preferences in their approaches to sociality. For instance, one nearly universally recognized virtue is benevolence, the disposition both to want to benefit other people and to often succeed in doing so. Even if a virtue ethicist thinks that there are benefits other than preference-satisfaction, they admit that preference-satisfaction is one kind of benefit. In the same vein, Aristotle and other ancient virtue ethicists gave pride of place to friendship. Friends aim, among other things, to benefit each other (and typically succeed), which again involves (perhaps among other things) preference-satisfaction. Similarly, in the care tradition, the one-caring aims among other things to benefit the cared-for. This typically involves not only satisfying the cared-for’s informed preferences but actively helping the cared-for to get their actual preferences to approximate their idealized preferences.

 

3 Preference reversals and choice blindness

 

Thus, preferences matter in multiple ways to the core concepts of moral psychology. What does the scientific literature on preferences tell us about these important mental states? Two convergent lines of evidence suggest that preferences are neither determinate nor stable: the heuristics and biases research on preference reversals, and the psychological research on choice blindness.

Preferences are dispositions to choose one option over another. You strictly prefer a to b only if, if you were offered a choice between them, then ceteris paribus you would choose a. If your preferences are stable, then what you would choose now is identical to what you would choose in the future. If your preferences are determinate, then there is some fact of the matter about how you would choose. That is to say, exactly one of the following subjunctive conditionals is true: if you were offered a choice, then ceteris paribus you would choose a; if you were offered a choice, then ceteris paribus you would choose b; if you were offered a choice, then ceteris paribus you would be willing to flip a coin and accept a if heads and b if tails (or you would be willing to let someone else – even your worst enemy – choose for you). The kind of indeterminacy and instability I argue for in this section is modest rather than radical. I want to claim that preferences are unstable in the sense of sometimes changing in the face of seemingly trivial and normatively irrelevant situational influences, not in the sense of constantly changing. Similarly, I want to claim that preferences are indeterminate in the sense of there sometimes being no fact of the matter how someone would choose, not in the sense of there always being no fact of the matter how someone would choose.

 

3.1 Preference reversals

 

Two distinctions are worth making regarding the types of possible preference reversals. In a chain-type reversal, you prefer a to b, prefer b to c, and prefer c to a; such reversals are sometimes labeled failures of acyclicity. In a waffle-type reversal, you prefer a to b, but also prefer b to a. The other distinction has to do with temporal scale. Preference reversals can be synchronic, in which case you would have the inconsistent preferences all at the same time. More commonly, they are diachronic, in which case you might now prefer a to b and b to c, and then later come to prefer c to a (and perhaps give up your preference for a over b). Or you might now prefer a to b, but later prefer b to a (and perhaps give up your preference for a over b). In my (2012) paper, I call diachronic waffle-type reversals the result of Rum Tum Tugger preferences, after the character in T. S. Eliot’s Old Possum’s Book of Practical Cats who is “always on the wrong side of every door.”

Preference reversals were first systematically studied by Daniel Kahneman, Sarah Lichtenstein, Paul Slovic, and Amos Tversky as part of the heuristics and biases research program.[3] In study after study, they and others showed that people’s cardinal preferences could be reversed by strategically framing the choice situation. When faced with a high-risk / high-reward gamble and a low-risk / low-reward gamble, most people choose the former but assign a higher monetary value to the latter. These investigations focused on choices between lotteries or gambles rather than choices between outcomes because the researchers were attempting to engage with theories of rational choice and strategic interaction, which – in order to generate representation theorems – employ preferences over probability-weighted outcomes. While this research is fascinating, its complexity makes it hard to interpret confidently. In particular, whenever the interpreter encounters a phenomenon like this, it’s always possible to say that the problem lies not in people’s preferences but in their credences or subjective probabilities. Since evaluating a gamble always involves weighting an outcome by its probability, one can never be sure whether anomalies are attributable to the value attached to the outcome or the process of weighting. And since we have independent reason to think that people’s ability to think clearly about probability is limited and unreliable (Alfano 2013), it’s tempting to hope that preferences can be insulated from this line of critique.

For this reason, I will focus on more recent research on preference reversals in the context of choices between outcomes rather than choices between lotteries (or, if you like, degenerate lotteries with probabilities of only 0 and 1). A choice of outcome a over outcome b can only reveal someone’s ordinal preferences; it can only tell us that she prefers a to b, not by how much she prefers a to b. This limitation is a price worth paying, however, because looking at choices between outcomes lets us rule out the possibility that any preference reversal might be attributable to the agent’s credences rather than her preferences.

Some of the most striking investigations of preference reversals in this paradigm have been conducted by Dan Ariely and his colleagues.   For instance, Ariely, Loewenstein, and Prelec (2006) used an arbitrary anchoring paradigm to show that preferences ranging over baskets of goods and money are susceptible to diachronic waffle-type reversals.[4] In this paradigm, a participant first writes down the final two digits of her social security number (henceforth SSN-truncation[5]), then puts a ‘$’ in front of it. Next, the experimenters showcase some consumer goods, such as chocolate, books, wine, and computer peripherals. The participant is instructed to record whether, hypothetically speaking, she would pay her SSN-truncation for the goods. Finally, the goods are auctioned off for real money. The surprising result is that participants with high SSN-truncations bid 57% to 107% more than those with low SSN-truncations.

To better understand this phenomenon, consider a fictional participant whose SSN-truncation was 89. She ended up bidding $50 for the goods, so, at the moment of bidding, she preferred the goods to the money; otherwise, she would have entered a lower bid. However, one natural interpretation of the experiment is that, prior to the anchoring intervention, she would or at least might have chosen that amount of money over the goods (i.e., she would have bid lower); in other words, prior to the anchoring intervention, she preferred the money to the goods. Anchoring on her high SSN-truncation induced a diachronic waffle-type reversal in her preferences. Prior to the intervention, she preferred the money to the goods, but after, she preferred the goods to the money. This way of explaining the experiment entails that her preferences were unstable: they changed in response to the seemingly trivial and normatively irrelevant framing of the choice.

Another way to explain the same result is to say that, prior to the anchoring intervention, there was no fact of the matter whether she preferred the goods to the money or the money to the goods. In other words, it was false that, given a choice, she would have chosen the goods, but it was equally false that, given a choice, she would have chosen the money or been willing to accept a coin flip. Only in the face of the choice in all its messy situational details did she construct a preference ordering, and the process of construction was modulated by her anchoring on her SSN-truncation. This alternative explanation entails that her preferences were indeterminate.

Furthermore, these potential explanations are mutually compatible. It could be, for instance, that her preferences were partially indeterminate, and that they became determinate in the face of the choice situation. Perhaps she definitely did not prefer the money to the goods prior to the anchoring intervention, but there was no fact of the matter regarding whether she was indifferent or preferred the goods to the money. Then, in the face of the hypothetical choice, this local indeterminacy was resolved in favor of preference rather than indifference. Finally, her newly-crystallized preference was expressed when she entered her bid.

Such a robust effect calls for explanation. My own suspicion is that a hybrid of indeterminacy and instability is the right theory of what happens in these cases, but it’s difficult to find evidence that points one way or the other. In any event, for present purposes, I’m satisfied with the inclusive disjunction of indeterminacy and instability.

 

3.2 Choice Blindness

 

There are many other – often amusing and sometimes depressing – studies of preference reversals, but the gist of them should be clear, so I’d like to turn now to the phenomenon of choice blindness, a field of research pioneered in the last decade by Petter Johansson and his colleagues. As I mentioned above, preferences are dispositions to choose. You prefer a to b only if, were you given the choice between them, then ceteris paribus you would choose a. Preferences are also dispositions to make characteristic assertions and offer characteristic reasons. While it’s certainly possible for someone to prefer a to b but not to say so when asked, the linguistic disposition is closely connected to the preference. Someone might be embarrassed by her preferences. She might worry that her interlocutor could use them against her in a bargaining context. She could be self-deceived about her own preferences. In such cases, we wouldn’t necessarily expect her to say what she wants, or to give reasons that support her actual preferences. But in the case of garden-variety preferences, it’s natural to assume that when someone says she prefers a to b, she really does, and it’s natural to assume that when someone gives reasons that support choosing a over b, she herself prefers a to b. Research on choice blindness challenges these assumptions.

Imagine that someone shows you two pictures, each a snapshot of a woman’s face. He asks you to say which you prefer on the basis of attractiveness. You point to the face on the left. He then asks you to explain why, displaying the chosen photograph a second time. Would you notice that the faces had been surreptitiously switched, so that the face you hadn’t pointed at is now the one you’re being asked about? Or would you give a reason for choosing the face that you’d initially dispreferred?   Johansson et al. (2005) found that participants detected the ruse in fewer than 20% of trials. Moreover, when asked for reasons, many of the participants who had not detected the manipulation gave reasons that were inconsistent with their original choice. For instance, some said that they preferred blondes even though they had originally chosen a brunette.

This original study of choice blindness has been supplemented with experiments in other domains. For instance, Hall et al. (2010) found that people exhibited choice blindness in more than two thirds of all trials when the choice was between two kinds of jam or two kinds of tea. After tasting both, participants indicated which of the two they preferred, then were asked to explain their choice while sampling their preferred option “again.” Even when the phenomenological contrast between the items was especially large (cinnamon apple versus grapefruit for jam, Pernod versus mango for tea), fewer than half the participants detected the switch.

Choice blindness in the domain of aesthetic evaluations of faces and comestibles might not seem weighty enough to support the argument that preferences are often indeterminate and unstable. But perhaps choice blindness in the domain of political preferences and moral judgments would be. Johansson, Hall, and Chater (2011) used the choice blindness paradigm to flip Swedish participants’ political preferences across the conservative-socialist gap.[6] Participants filled in a series of scales on their political preferences for policies such as taxes on fuel. Some of these scales were then surreptitiously reversed, so that, for example, a very conservative answer was now a very socialist answer. Participants were then asked to indicate whether they wanted to change any of their choices, and to give reasons for their positions. Fewer than 20% of the reversals were detected, and only one in every ten of the participants detected enough reversals to keep their aggregate position from switching from conservative to socialist (or conversely). In a similar study, Hall, Johansson, and Strandberg (2012) used a self-transforming survey to flip participants’ moral judgments on both socially contentious issues, such as the permissibility of prostitution, and broad normative principles, such as the permissibility of large-scale government surveillance and illegal immigration. For instance, an answer indicating that prostitution was sometimes morally permissible would be flipped to say that prostitution was never morally permissible, and an answer indicating that illegal immigration was morally permissible would be flipped to say that illegal immigration was morally impermissible. Detection rates for individual questions ranged between 33% and 50%. Almost 7 out of every 10 of the participants failed to detect at least one reversal.

As with the behavioral evidence for preference reversals, the evidence for choice blindness suggests that people’s preferences are unstable, indeterminate, or both. The choices people make can fairly easily be made to diverge from the reasons they give. If preferring a to b is a disposition both to choose a over b and to offer reasons that support the choice of a over b (or at least not to offer reasons that support the choice of b over a), then it would appear that many people lack preferences, or that their preferences do exist but are extremely labile. Not only is there sometimes no fact of the matter about what we prefer, but also our preferences are often seemingly constructed on the fly in choice situations, and their ordering is shaped by seemingly trivial and normatively irrelevant factors.

 

3.3 A descriptive preference model

 

While it is of course possible to dispute the ecological validity of these experiments or my interpretation of them, I want to proceed by considering some of the philosophical implications of that interpretation, assuming for the sake of argument that it is sound. I’ve already explored some of the implications of this perspective in Alfano (2012), where I argue that the indeterminacy and instability of preferences infirm our ability to explain and predict behavior. Predictions of behavior often refer to the preferences of the target agent. If you know that Karen prefers vanilla ice cream to chocolate, then you can predict that, ceteris paribus, when offered a choice between them she will go with vanilla. Likewise for explanations: you can base an explanation of Karen’s choice of vanilla on the fact that she prefers vanilla. But if there’s no fact of the matter about what Karen prefers, you cannot so easily predict what she will do, nor can you so easily explain why she did what she did. A related problem arises when considering instability. If Karen prefers vanilla to chocolate now, but her preference is unstable, then the prediction that she will choose vanilla in the future – even the near future – is on shaky ground. For all you know, by the time the choice is presented, her preferences will have reversed. Similarly for explanation: if Karen’s preferences are unstable, you might be able to say that she chose vanilla because she preferred it at that very moment, but you gain little purchase on her longitudinal preferences from such an attribution.

I’ve responded to these problems by proposing a model in which preferences are interval-valued rather than point-valued. A traditional valuation function v maps from outcomes to points. The binary preference relation is then defined in terms of these points: a is strictly preferred to b just in case v(a) > v(b), b is strictly preferred to a just in case v(a) < v(b), and the agent is indifferent as between a and b just in case v(a) = v(b).


Figure 2: a preferred to b because 1 = v(a) > 0 = v(b)

 

In the looser model I propose, by contrast, the valuation function maps from outcomes to closed intervals, such that a is strictly preferred to b just in case min(v(a)) > max(v(b)) and the agent is indifferent as between a and b just in case there is some overlap in the intervals assigned to a and b.


Figure 3: indifference because neither min(v(a)) > max(v(b)) nor max(v(a)) < min(v(b))

 

 

Though this model preserves the transitivity of strict preference, it does not preserve the transitivity of indifference. This, however, may be a feature rather than a bug, since ordinary preferences as exhibited in choice behavior themselves seem not to preserve the transitivity of indifference.
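The interval-valued model and the failure of indifference-transitivity can be made concrete with a minimal sketch. The functions below implement the two definitions just given; the interval endpoints are illustrative assumptions, not empirical estimates.

```python
# A sketch of the interval-valued preference model. Each outcome is
# assigned a closed interval (min, max) by the valuation function v.

def strictly_prefers(a, b):
    """a is strictly preferred to b just in case min(v(a)) > max(v(b))."""
    return a[0] > b[1]

def indifferent(a, b):
    """Indifference just in case the intervals overlap."""
    return not strictly_prefers(a, b) and not strictly_prefers(b, a)

# Hypothetical intervals for three outcomes, chosen so that adjacent
# intervals overlap but the outermost two do not.
v = {"a": (0.9, 1.1), "b": (0.7, 0.95), "c": (0.5, 0.75)}

# Indifference holds between a and b, and between b and c...
assert indifferent(v["a"], v["b"])
assert indifferent(v["b"], v["c"])
# ...yet a is strictly preferred to c: indifference is not transitive,
# although strict preference remains transitive.
assert strictly_prefers(v["a"], v["c"])
```

Whether three intervals like these are realistic is of course an empirical question; the point is only that overlap-based indifference structurally permits a ~ b and b ~ c alongside a > c.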

 

4 Philosophical implications of the indeterminacy and instability of preferences

 

In this section, I consider some possible philosophical implications of the indeterminacy and instability of preferences, drawing on the descriptive model outlined in the previous section. Moving from the descriptive to the normative domain is always fraught, but, as I argued in the introduction, the two need to be explored in tandem, with mutual theoretical adjustments made on each side. Moral psychology without normative structure is a baggy monster. Normative theory without empirical support is a castle in the sky.

 

4.1 Implications for patiency

 

The primary worry raised for the theory of personal well-being by the indeterminacy and instability of preferences is that, if the extent to which your life is going well depends on or is a function of the extent to which you’re getting what you want, then well-being inherits the indeterminacy and instability of preferences. In other words, there might be no fact of the matter concerning how good a life you’re living at this very moment, and if there is such a fact, it might fluctuate from moment to moment in response to seemingly trivial and normatively irrelevant situational factors.

By way of example, consider someone who is eating toast with cinnamon apple jam. Is his life as good as it would be if he were eating toast with grapefruit jam? If he is like the people in the choice blindness studies mentioned above, there might be no answer to this question. If he’s told that he prefers cinnamon apple, he will prefer the present state of affairs, but if he is told that he prefers grapefruit, he’ll be less pleased with the present state of affairs than he would be with the world in which he is eating grapefruit jam. Whether his life is better in the cinnamon apple jam-world or the grapefruit jam-world is indeterminate until his preferences crystallize in one ordering or the other.

Or consider someone who has a brand new hardbound copy of Moby Dick, for which she just paid $50 when it was marked down from $70. Is her life going better now that she has the book, or was it going better before, when she had the money? If she is like the participants in Ariely’s preference reversal study, the answer may be “yes” to both disjuncts. Before she bought the book, she preferred the money to the book. But then she anchored on the manufacturer’s suggested retail price of $70, raised her valuation of the book, and ended up preferring it to $50. Her unstable preferences mean that she was better off with the money than the book, and that she is better off with the book than the money. It’s not a contradiction, but it makes her well-being a pain in the neck to evaluate.

Fortunately, though, there is a ready response to this worry, which begins by pointing out that the indeterminacy and instability of preferences is not radical but modest, a feature captured by the descriptive model sketched above. Although there may be no fact of the matter whether the life of the consumer of cinnamon apple jam is better than the life of the consumer of grapefruit jam, there is a fact of the matter whether either of these lives is better than that of someone who, instead of eating jam, is enduring irritable bowel syndrome. Although preference orderings may fluctuate between owning a book and having $50, they do not fluctuate between owning the same book and having $50,000. These observations are consistent with the interval-valued preferences of the descriptive model outlined in the previous section. In the first example, the intervals for cinnamon apple jam and for grapefruit jam overlap with each other, but neither overlaps with the interval for irritable bowel syndrome. Thus, we can still make a whole host of judgments about the quality of various possible lives, even if, when we “zoom in,” such judgments cannot always be made. In the second example, the intervals for having $50 and having the book overlap with each other, but neither overlaps with the interval for having $50,000.

For the price of this local indeterminacy and instability, the theoretician of well-being can purchase an answer to an objection to the preference-satisfaction theory of well-being. The objection goes like this: when assessing whether it would be better to have the life of a successful lawyer or a successful artist, it seems trivial or even perverse to ask whether the artist’s life would involve slightly more ice cream, even if the agent considering what to do with her life likes ice cream. Slight preferences shouldn’t bear normative weight in this context.

However, if we assume, as seems reasonable in light of the evidence, that her preference for a little more ice cream is weak enough that it could be shifted by preference reversal or choice blindness, then its normative irrelevance is unmasked. The life of the ice cream-deprived artist and the life of the ice cream-enjoying artist are assigned nearly identical intervals on the scale of preference – intervals that differ less from each other than from that assigned to the life of the lawyer. Hence, if we are willing to put up with a little indeterminacy and instability, we can avoid more serious objections to the theory of personal well-being.

 

4.2 Implications for sociality

 

The main worry raised by the indeterminacy and instability of preferences in the context of sociality is that, if right action depends on preference-satisfaction (perhaps among other things), then it inherits the indeterminacy and instability of preferences. It might turn out that there’s just no fact of the matter about what it would be right to do, or that that fact is in constant flux. This worry is perhaps most pressing for preference-utilitarians, such as Brandt and Singer (1993), but it casts a long shadow. Even if you don’t think that right action is a function of preferences and only preferences, it’s hard to deny that preferences matter at least somewhat. For instance, as I pointed out above, virtue ethicists typically countenance benevolence as an important virtue. If, as I argued in the previous section, well-being is affected by the indeterminacy and instability of preferences, then benevolence is too. And even if one thinks that benevolence is not a virtue, virtually any tolerable theory of right action is going to say that maleficence is a vice and that there is a duty – whether perfect or imperfect – of non-maleficence.

In the remainder of this section, I will concentrate on the normative implications of indeterminacy and instability for preference-utilitarianism, but it should be clear that these are just some of the more straightforward implications, and that there are others.

Before considering some responses I find attractive, I should point out that the problem we face here is not the one that is solved by distinguishing between a decision procedure and a standard of value. An objection to utilitarianism that was lodged early and often is that it’s either impossible or at least extremely computationally complex to know what would satisfy the most preferences. This knowledge could only be acquired by eliciting the preference orderings of every living person – or perhaps even every past, present, and future person. The correct response to this objection is that utilitarianism is meant to be a standard of value, not a decision procedure.[7] It identifies (if it is the correct theory of right action) what it would be right to do, but that doesn’t mean that we can use it to find out what it would be right to do every time we make a moral decision. The distinction is meant to parallel other general theories: Newtonian mechanics would have identified, if it had been the correct physical theory, what a projectile will do in any circumstances whatsoever, even if people were unable to apply the theory in a given instance.

This response is unavailable in the present context. There are two ways in which it might be impossible to know what would satisfy someone’s preferences: epistemic and metaphysical. You would be unable to know what someone wants if there were a fact of the matter about what that person wants but you couldn’t find out what that fact is. This would be a merely epistemic problem, and the distinction between a decision procedure and a standard of value handles it nicely. But you would also be unable to know what someone wants if there simply were no fact of the matter concerning what that person wants. If I am right that preferences are indeterminate, then this is the problem we now face, and it does no good to have recourse to the distinction between a decision procedure and a standard of value.

Preference-utilitarianism is not without resources, however. As in the case of well-being, one attractive response is to point out that preferences are only modestly indeterminate and unstable. Although there may be no uniquely most-preferred outcome for a given individual (or indeed for any individual), there will be many genuinely dispreferred outcomes and, one hopes, a manageably constrained subset of maximal outcomes: outcomes than which nothing is determinately and stably preferred, even though no one of them is the unique best.

Furthermore, from among this subset of alternatives it might be possible to winnow out those that satisfy preferences which we have independent normative grounds to reject – preferences that are silly, ignorant, perverse, or malevolent. As I pointed out above, it’s commonly argued in the context of right action that brute preferences carry less weight than fully-informed preferences. According to those who argue in this way, whether it’s right to do something depends less on whether it would satisfy people’s actual preferences than on whether it would satisfy their fully-informed preferences. It might be hoped that idealizing preferences would cut down or even eliminate their indeterminacy and instability.

Here’s what that might look like. Suppose that Jake’s actual preferences are captured by my interval-valued model. As such, they present two problems: they fail to uniquely determine how it would be right to treat Jake, and they may even rule out the genuinely right way to treat him because his actual preferences are normatively objectionable. It might be possible to kill these two birds with the single stone of idealization if idealization leads to unique, point-valued preferences that are no longer normatively objectionable. Perhaps there is only one way that Jake’s preferences could turn out after he undergoes cognitive psychotherapy. This is a big ‘perhaps,’ but it is worth considering. What evidence we have, however, suggests that idealizing in this way would not lead to determinate, stable preferences. When Kahneman, Lichtenstein, Slovic, and Tversky began to investigate preference reversals, many economists saw the phenomenon as a threat, since it challenged some of the most fundamental assumptions of their field. Accordingly, they tried to show that preference reversals could be removed root and branch if participants were given sufficient information about the choices they were making. Years of attempts to eliminate the effect proved fruitless.[8]

The burden is then on the idealizer to say what information participants lack in the relevant experiments. What does someone who bids high on a bottle of wine after considering her SSN-truncation not know, or not know fully enough? Perhaps she should be allowed first to drink some of the wine. While Ariely et al. (2006) did not investigate whether this would eliminate the anchoring on SSN-truncation, they did conduct other experiments in which participants sampled their options and thus had the relevant information. In one, participants first listened to an annoying sound over headphones, then bid for the right not to listen to the sound again. As in the consumer goods experiment, before bidding, participants first considered whether they would pay their SSN-truncation in cents to avoid listening to the sound again. And as expected, those with higher SSN-truncations entered higher bids, while those with lower SSN-truncations entered lower bids. It’s unclear what further information they could have acquired to inform their preferences. It seems more plausible that they had too much information, not too little. If they hadn’t first considered whether to bid their SSN-truncation, they would not have anchored on it and would therefore have had “uncontaminated” preferences. But cognitive psychotherapy says to take into account “everything that might make [one] change [one’s] desires” (Brandt 1983, p. 40). Anchoring changed their desires, so it counts as part of cognitive psychotherapy. Perhaps the process can be revised by saying that one should take into account everything that might correctly or relevantly change one’s desires, but then the problem is to come up with an account of what makes an influence on one’s desires correct or relevant that doesn’t involve either a vicious regress or a vicious circle. No one has managed to do this, perhaps because it can’t be done.

Another response, which I find more attractive, is to embrace rather than reject the indeterminacy and instability of preferences. There are several ways to do this. One is to figure out which preferences are wildly indeterminate or unstable and disqualify their normative standing completely. Just as it makes sense to ignore the Rum Tum Tugger’s begging to be let inside because you know he’ll just beg to get back out again, perhaps it makes sense to hive off Jake’s indeterminate and unstable preferences, leaving a kernel of normatively respectable ones behind. Only these would matter when considering what it would be right to do by Jake, or what would promote his well-being.

A second way to embrace indeterminacy and instability is to make a less heroic assumption about the effect of cognitive psychotherapy. Instead of taking it for granted that this process is bound to converge on unique, point-valued preferences, perhaps it will merely shrink the width of Jake’s interval-valued preferences. In that case, even after idealization, there would be no unique characterization of what it would be right to do by Jake or what would most promote his well-being. As I’ve argued in the context of prediction and explanation (Alfano 2012), however, this might be a feature rather than a bug. Suppose that idealization yields a preference ordering that rules out most actions as wrong and condemns many outcomes as detrimental to Jake’s well-being, but does not adjudicate among many others. The remaining actions would then all be considered morally right in the weak sense of being permissible but not obligatory, and the remaining outcomes would all be vindicated as conducive to well-being. This strategy might help to solve the so-called demandingness problem by expanding what James Fishkin calls “the zone of indifference or permissibly free personal choice” (1982, p. 23; see also 1986). Thus, while it is possible to try to resist the evidence for indeterminacy and instability, or to acknowledge the evidence while denying its normative import, it may be better instead to embrace these features of preferences and use them to respond to existing problems.

 

5 Future directions in the moral psychology of preferences

 

Because preferences are involved in multiple ways in patiency, agency, sociality, temporality, and reflexivity, there are many avenues for further research. In this closing section, I list just a few of them.

First, further conceptual work by philosophers and theoretically-minded psychologists and behavioral economists may reveal or clarify relevant distinctions, such as a contemporary version of Mill’s distinction between higher and lower pleasures. Perhaps a useful distinction can be made between satisfaction of higher and lower preferences. According to Mill, one pleasure is higher than another if an expert who was acquainted with both would choose any amount of the former over any amount of the latter. This maps fairly directly onto the idea of lexicographic preferences: one good or value is lexicographically preferred to another if (and only if) any amount of the former would be chosen over any amount of the latter. Such values would be in principle immune to preference reversals. Jeremy Ginges and Scott Atran (2013) have found that when a value is “sacralized,” it becomes lexicographically preferred in this way. Moral values seem to be the only values that are capable of becoming sacred. However, tradeoffs have only been studied in one direction (giving up a sacred value to gain a secular value).
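The structure of lexicographic preference can be sketched in a few lines. The bundles below pair an amount of a “sacred” value with an amount of a secular one; the names and numbers are hypothetical illustrations, not anything drawn from Ginges and Atran’s data.

```python
# A sketch of lexicographic preference: bundles are (sacred, secular)
# amounts. The sacred dimension is compared first; the secular dimension
# can only break exact ties on the sacred one.

def lex_prefers(x, y):
    """x is lexicographically preferred to y."""
    if x[0] != y[0]:
        return x[0] > y[0]
    return x[1] > y[1]

# Any positive amount of the sacred value beats any finite amount of
# the secular value, so no secular offer can induce a reversal.
assert lex_prefers((1, 0), (0, 10**9))
# Secular amounts still matter when sacred amounts are tied.
assert lex_prefers((1, 5), (1, 3))
```

As it happens, Python’s built-in tuple comparison is itself lexicographic, so `x > y` on the bare tuples would give the same verdicts; the explicit function just makes the two-stage comparison visible.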

Second, further empirical research would help to determine whether the hiving off strategy succeeds. Is there some identifiable class of preferences that are especially susceptible to reversals and choice blindness? We currently lack sufficient evidence to say. It seems that effects may be stronger in business and gambling domains, weaker in social and health domains (Kuhberger 1998), but these distinctions are neither mutually exclusive nor exhaustive. This is yet another area in which collaboration between philosophers, who are specially trained in making this sort of distinction, and psychologists would be useful.

Third, to what extent do preference reversals and choice blindness disappear when people are informed about them? Are psychologists who know all about these effects less susceptible to them? More susceptible? The same as other people?

Fourth, are there some people who are congenitally more susceptible to preference reversals and choice blindness than others? There is very little research on this, though one study suggests that roughly a quarter of the population is highly susceptible and another quarter is immune (Bostic, Herrnstein, & Duncan 1990). Perhaps the preferences of people who are clear on what they want deserve more normative weight than the preferences of people who don’t know what they want. Perhaps the second group would benefit not so much from getting what they (think) want (for the moment) but from having their preferences shaped in more or less subtle ways.

Finally, on a related note, perhaps public policy should sometimes aim not so much to satisfy existing preferences, but to shape people’s preferences in such a way that they are (more easily) satisfiable. The idea here is to take advantage of the instability of preferences, cultivating them in such a way that the people who have them will be most able to satisfy their own wants. If you’re not getting what you want, either change what you’re getting, or change what you want. Of course, this proposal may seem objectionably paternalistic, but I tend to agree with Richard Thaler and Cass Sunstein (2008) in thinking that in some cases such policies may be permissible. In fact, it’s a striking asymmetry that almost no one objects to the shaping of beliefs, provided they are made to accord with (what we take to be) the truth, whereas it’s hard to find someone who doesn’t object to the shaping of desires and preferences. However, I would argue that the choice we often face is not whether to mould preferences but how. Given how easily preferences are influenced, it’s highly likely that they are constantly being socially shaped without our realizing it. If this is right, existing policies already shape preferences; we just don’t know how. The choice is therefore between inadvertently influencing preferences and doing so strategically. I tend to think that society has not just a right but an obligation to help people develop appropriate preferences – a point with which feminists such as Serene Khader (2011) concur. The worry that such interventions might be objectionably paternalistic can be assuaged somewhat by insisting, as Khader does, that the very people whose preferences are the targets of policy intervention participate in designing the interventions.

 

[1] Preferences are causally influenced by values, but values on their own don’t do all the work (Homer & Kahle 1988).

[2] A version of this idea was first formulated by Sidgwick (1981). Rosati (1995) argues persuasively that mere information without imaginative awareness and engagement with that information is not enough.

[3] See Lichtenstein & Slovic (1971); Slovic (1995); Slovic & Lichtenstein (1968, 1983); Tversky & Kahneman (1981); Tversky, Slovic, & Kahneman (1990).

[4] See also Ariely & Norton (2008), Green et al. (1998), Hoeffler & Ariely (1999), Hoeffler et al. (2006), Johnson and Schkade (1989), and Lichtenstein and Slovic (1971).

[5] A social security number is a kind of national identification code: it associates each citizen of the United States with a unique, quasi-random number.

[6] In the United States, this would be equivalent to flipping preferences across the conservative-liberal gap; in the United Kingdom, it would be equivalent to flipping preferences across the Conservative-Labour gap.

[7] Bentham (1789/1961, p. 31), Mill (1861/1998, 26), and Sidgwick (1907, p. 413) all deal with the objection in this way.

[8] See Berg, Dickhaut, & O’Brien (1985); Pommerehne, Schneider, & Zweifel (1982); and Reilly (1982).
