Today, I presented a paper on some normative implications of the instability and indeterminacy of preferences for the Princeton University Neuroscience of Social Decision Making series. On Monday, I present the same work to the Center for Human Values Laurance S. Rockefeller seminar. Here’s a draft of the paper.
Psychologists and behavioral economists have recognized for decades that preferences and other motivational attitudes are indeterminate: for some pairs of outcomes, a and b, a given agent will neither prefer a to b, nor prefer b to a, nor be indifferent as between a and b. Sometimes, there’s just no fact of the matter concerning what people want. Moreover, psychologists and behavioral economists have recognized for some time that preferences and other motivational attitudes are unstable: for some pairs of outcomes, a and b, a given agent may prefer a to b now, but be disposed to reverse her preference ordering a few moments from now in response to seemingly trivial and normatively irrelevant situational factors. Philosophical theories that make use of the concepts of preferences and desires, however, rarely take the indeterminacy and instability of preferences into account.
This paper has three main parts. In the first, I discuss two convergent lines of empirical evidence, both of which suggest that preferences – at least as traditionally conceived – are both indeterminate and unstable. In the second, I outline a descriptive model of preferences that I first articulated elsewhere (Alfano 2012) as an attractive response to this evidence. In the third section, I draw on my descriptive model to explore some of the normative implications of the indeterminacy and instability of preferences with respect to right action, wellbeing, public policy, and meta-ethics. At first blush, it might seem that indeterminacy and instability spell trouble for normative theories couched in terms of preferences and desires. After all, if right action is at least partially a function of preference-satisfaction, and preferences are indeterminate or unstable, then what it would be right to do would presumably inherit this indeterminacy and instability. In other words, it could be argued that there’s no fact of the matter about what it would be right to do, or that that fact is in constant flux. Similar worries arise in the cases of public policy and personal wellbeing: if your welfare is at least partially a function of whether you’re getting what you want, and your desires are indeterminate or unstable, then presumably there is no fact of the matter concerning how well your life is going, or that fact is in constant flux. Pursuing or promoting one life goal over another might then turn out to be a mug’s game. Against these worries, I argue that the local indeterminacy and instability captured by my descriptive model preserve any intuitions worth keeping about the normative status of preferences. I then go a step further, arguing that local indeterminacy and instability are to be embraced because they help to counter a prominent argument against the normative weight of preferences: the demandingness objection. 
I conclude with some methodological remarks about the appropriate use of empirical information in philosophical theorizing.
1 Evidence for Indeterminacy and Instability
Two convergent lines of evidence suggest that preferences are neither determinate nor stable: the heuristics and biases research on preference reversals, and the psychological research on choice blindness.
Preferences are dispositions to choose one option over another. You strictly prefer a to b only if, if you were offered a choice between them, then ceteris paribus you would choose a. If your preferences are stable, then what you would choose now is identical to what you would choose in the future. If your preferences are determinate, then there is some fact of the matter about how you would choose. That is to say, exactly one of the following subjunctive conditionals is true: if you were offered a choice, then ceteris paribus you would choose a; if you were offered a choice, then ceteris paribus you would choose b; if you were offered a choice, then ceteris paribus you would be willing to flip a coin and accept a if heads and b if tails (or you would be willing to let someone else – even your worst enemy – choose for you). The kind of indeterminacy and instability I argue for in this section is modest rather than radical. I want to claim that preferences are unstable in the sense of sometimes changing in the face of seemingly trivial and normatively irrelevant situational influences, not in the sense of constantly changing. Similarly, I want to claim that preferences are indeterminate in the sense of there sometimes being no fact of the matter how someone would choose, not in the sense of there always being no fact of the matter how someone would choose.
1.1 Preference Reversals
Two distinctions are worth making regarding the types of possible preference reversals. The first concerns their structure. In a chain-type reversal, you prefer a to b, prefer b to c, and prefer c to a; such reversals are sometimes labeled failures of acyclicity. In a waffle-type reversal, you prefer a to b, but also prefer b to a. The second distinction has to do with temporal scale. Preference reversals can be synchronic, in which case you hold the inconsistent preferences all at the same time. More commonly, they are diachronic, in which case you might now prefer a to b and b to c, and then later come to prefer c to a (perhaps giving up your preference for a over b). Or you might now prefer a to b, but later prefer b to a (perhaps giving up your preference for a over b). In my (2012), I call diachronic waffle-type reversals the result of Rum Tum Tugger preferences, after the character in T. S. Eliot’s Old Possum’s Book of Practical Cats who is “always on the wrong side of every door.”
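The structural distinction can be made concrete with a small sketch (my illustration, not from the paper; the encoding of preferences as ordered pairs is an assumption). Representing strict preferences as (better, worse) pairs, a chain-type reversal is a cycle in the relation, and a synchronic waffle-type reversal is just the special case of a two-element cycle:

```python
def has_chain_reversal(prefs):
    """Detect a failure of acyclicity in a strict-preference relation,
    given as a set of (better, worse) pairs."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)

    def reachable(start, target, seen=None):
        # Depth-first search: is target strictly dispreferred to start,
        # directly or through a chain of intermediate outcomes?
        seen = seen or set()
        for nxt in graph.get(start, ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reachable(nxt, target, seen):
                    return True
        return False

    # A reversal exists iff some outcome is transitively "preferred to itself".
    return any(reachable(x, x) for x in graph)

# Chain-type reversal: a > b, b > c, c > a
print(has_chain_reversal({("a", "b"), ("b", "c"), ("c", "a")}))  # True
# Consistent ordering: a > b, b > c, a > c
print(has_chain_reversal({("a", "b"), ("b", "c"), ("a", "c")}))  # False
# Synchronic waffle-type reversal as a two-element cycle: a > b, b > a
print(has_chain_reversal({("a", "b"), ("b", "a")}))  # True
```

Note that this check only captures synchronic inconsistency; a diachronic reversal involves two different relations held at two different times, each of which may be internally consistent.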
Preference reversals were first systematically studied by Daniel Kahneman, Sarah Lichtenstein, Paul Slovic, and Amos Tversky as part of the heuristics and biases research program. In study after study, they and others showed that people’s cardinal preferences could be reversed by strategically framing the choice situation. When faced with a high-risk / high-reward gamble and a low-risk / low-reward gamble, most people choose the latter but assign a higher monetary value to the former. These investigations focused on choices between lotteries or gambles rather than choices between outcomes because the researchers were attempting to engage with theories of rational choice and strategic interaction, which – in order to generate representation theorems – employ preferences over probability-weighted outcomes. While I find this research fascinating, its complexity makes it hard to interpret confidently. In particular, whenever the interpreter encounters a phenomenon like this, it’s always possible to say that the problem lies not in people’s preferences but in their credences or subjective probabilities. Since evaluating a gamble always involves weighting an outcome by its probability, one can never be sure whether anomalies are attributable to the value attached to the outcome or to the process of weighting. And since we have independent reason to think that people’s ability to think clearly about probability is limited and unreliable (Alfano 2013), it’s tempting to hope that preferences can be insulated from this line of critique.
For this reason, I will focus on more recent research on preference reversals in the context of choices between outcomes rather than choices between lotteries (or, if you like, degenerate lotteries with probabilities of only 0 and 1). A choice of outcome a over outcome b can only reveal someone’s ordinal preferences; it can only tell us that she prefers a to b, not by how much she prefers a to b. This limitation is worth the price, however, because looking at choices between outcomes lets us rule out the possibility that any preference reversal might be attributable to the agent’s credences rather than her preferences.
Some of the most striking investigations of preference reversals in this paradigm have been conducted by Dan Ariely and his colleagues. For instance, Ariely, Loewenstein, and Prelec (2006) used an arbitrary anchoring paradigm to show that preferences ranging over baskets of goods and money are susceptible to diachronic waffle-type reversals. In this paradigm, a participant first writes down the final two digits of her social security number (henceforth SSN-truncation), then puts a ‘$’ in front of it. Next, the experimenters showcase some consumer goods, such as chocolate, books, wine, and computer peripherals. The participant is instructed to record whether, hypothetically speaking, she would pay her SSN-truncation for the goods. Finally, the goods are auctioned off for real money. The surprising result is that participants with high SSN-truncations bid 57% to 107% more than those with low SSN-truncations.
To better understand this phenomenon, consider a fictional participant whose SSN-truncation was 89. She ended up bidding $50 for the goods, so, at the moment of bidding, she preferred the goods to the money; otherwise, she would have entered a lower bid. However, one natural interpretation of the experiment is that, prior to the anchoring intervention, she would or at least might have chosen that amount of money over the goods (i.e., she would have bid lower); in other words, prior to the anchoring intervention, she preferred the money to the goods. Anchoring on her high SSN-truncation induced a diachronic waffle-type reversal in her preferences. Prior to the intervention, she preferred the money to the goods, but after, she preferred the goods to the money. This way of explaining the experiment entails that her preferences were unstable: they changed in response to the seemingly trivial and normatively irrelevant framing of the choice.
Another way to explain the same result is to say that, prior to the anchoring intervention, there was no fact of the matter whether she preferred the goods to the money or the money to the goods. In other words, it was false that, given a choice, she would have chosen the goods, but it was equally false that, given a choice, she would have chosen the money or been willing to accept a coin flip. Only in the face of the choice in all its messy situational details did she construct a preference ordering, and the process of construction was modulated by her anchoring on her SSN-truncation. This alternative explanation entails that her preferences were indeterminate.
Furthermore, these potential explanations are mutually compatible. It could be, for instance, that her preferences were partially indeterminate, and that they became determinate in the face of the choice situation. Perhaps she definitely did not prefer the money to the goods prior to the anchoring intervention, but there was no fact of the matter regarding whether she was indifferent or preferred the goods to the money. Then, in the face of the hypothetical choice, this local indeterminacy was resolved in favor of preference rather than indifference. Finally, her newly-crystallized preference was expressed when she entered her bid.
This is just one example of the reversal of ordinal preferences. In my (2012), I describe several others, including a study of preference reversals in the domain of losses rather than gains, and a study in which the same outcome was “flipped” across the neutral point in such a way that it was sometimes regarded as a loss and sometimes as a gain. These expansions of the phenomenon are important because they demonstrate that preference reversals are quite robust. They do not only occur in the domain of gains. Nor do they merely shuffle the order of preferences within the positive or negative domains.
Such a robust effect calls for explanation. My own suspicion is that a hybrid of indeterminacy and instability is the right theory of what happens in these cases, but it’s difficult to find evidence that points one way or the other. In any event, for present purposes, I’m satisfied with the inclusive disjunction of indeterminacy and instability.
1.2 Choice Blindness
There are many other – often amusing and sometimes depressing – studies of preference reversals, but the gist of them should be clear, so I’d like to turn now to the phenomenon of choice blindness, a field of research pioneered in the last seven years by Petter Johansson and his colleagues. As I mentioned above, preferences are dispositions to choose. You prefer a to b only if, were you given the choice between them, then ceteris paribus you would choose a. Preferences are also dispositions to make characteristic assertions and offer characteristic reasons. While it’s certainly possible for someone to prefer a to b but not to say so when asked, the linguistic disposition is closely connected to the preference. Someone might be embarrassed by her preferences. She might worry that her interlocutor could use them against her in a bargaining context. She could be self-deceived about her own preferences. In such cases, we wouldn’t necessarily expect her to say what she wants, or to give reasons that support her actual preferences. But in the case of garden-variety preferences, it’s natural to assume that when someone says she prefers a to b, she really does, and it’s natural to assume that when someone gives reasons that support choosing a over b, she herself prefers a to b. Research on choice blindness challenges these assumptions.
Imagine that someone shows you two pictures, each a snapshot of a woman’s face. He asks you to say which you prefer on the basis of attractiveness. You point to the face on the left. He then asks you to explain why, displaying the photographs a second time. Would you notice that the faces had been surreptitiously switched, so that the face you’d pointed to was now on the right? Or would you give a reason for choosing the face that you’d initially dispreferred? Johansson et al. (2005) found that participants detected the ruse in fewer than 20% of trials. Moreover, when asked for reasons, many of the participants who had not detected the manipulation gave reasons that were inconsistent with their original choice. For instance, some said that they preferred blondes even though they had originally chosen a brunette.
This original study of choice blindness has been supplemented with experiments in other domains. For instance, Hall et al. (2010) found that people exhibited choice blindness in more than two thirds of all trials when the choice was between two kinds of jam or two kinds of tea. After tasting both, participants indicated which of the two they preferred, then were asked to explain their choice while sampling their preferred option “again.” Even when the phenomenological contrast between the items was especially large (cinnamon apple versus grapefruit for jam, Pernod versus mango for tea), fewer than half the participants detected the switch.
Choice blindness in the domain of aesthetic evaluations of faces and comestibles might not seem weighty enough to support the argument that preferences are often indeterminate and unstable. But surely choice blindness in the domain of political preferences and moral judgments would be. Johansson, Hall, and Chater (2011) used the choice blindness paradigm to flip Swedish participants’ political preferences across the conservative-socialist gap. Participants filled in a series of scales on their political preferences for policies such as taxes on petroleum. Some of these scales were then surreptitiously reversed, so that, for example, a very conservative answer was now a very socialist answer. Participants were then asked to indicate whether they wanted to change any of their choices, and to give reasons for their positions. Fewer than 20% of the reversals were detected, and only one in ten participants detected enough reversals to keep their aggregate position from switching from conservative to socialist (or conversely). In a similar study, Hall, Johansson, and Strandberg (2012) used a self-transforming survey to flip participants’ moral judgments on both socially contentious issues, such as the permissibility of prostitution, and broad normative principles, such as the permissibility of large-scale government surveillance and illegal immigration. For instance, an answer indicating that prostitution was sometimes morally permissible would be flipped to say that it was never morally permissible, and an answer indicating that illegal immigration was morally permissible would be flipped to say that it was morally impermissible. Detection rates for individual questions ranged between 33% and 50%, and almost 7 out of 10 participants failed to detect at least one reversal.
As with the behavioral evidence for preference reversals, the evidence for choice blindness suggests that people’s preferences are unstable, indeterminate, or both. The choices people make can fairly easily be made to diverge from the reasons they give. If preferring a to b is a disposition both to choose a over b and to offer reasons that support the choice of a over b (or at least not to offer reasons that support the choice of b over a), then it would appear that many people lack preferences, or that their preferences do exist but are extremely labile. Not only is there sometimes no fact of the matter about what we prefer, but also our preferences are often seemingly constructed on the fly in choice situations, and their ordering is shaped by seemingly trivial and normatively irrelevant factors.
2 A Descriptive Preference Model
While it is of course possible to dispute the ecological validity of these experiments or my interpretation of them, I want to proceed by considering some of the normative implications of that interpretation, assuming for the sake of argument that it is sound. I’ve already explored some of the descriptive implications of this perspective in Alfano (2012), where I argue that the indeterminacy and instability of preferences undermine our ability to explain and predict behavior. Predictions of behavior often refer to the preferences of the target agent. If you know that Karen prefers vanilla ice cream to chocolate, then you can predict that, ceteris paribus, when offered a choice between them she will go with vanilla. Likewise for explanations: you can base an explanation of Karen’s choice of vanilla on the fact that she prefers vanilla. But if there’s no fact of the matter about what Karen prefers, you cannot so easily predict what she will do, nor can you so easily explain why she did what she did. A related problem arises when considering instability. If Karen prefers vanilla to chocolate now, but her preference is unstable, then the prediction that she will choose vanilla in the future – even the near future – is on shaky ground. For all you know, by the time the choice is presented, her preferences will have reversed. Similarly for explanation: if Karen’s preferences are unstable, you might be able to say that she chose vanilla because she preferred it at that very moment, but you gain little purchase on her longitudinal preferences from such an attribution.
In a recent article (Alfano 2012), I responded to these problems by proposing a model in which preferences are interval-valued rather than point-valued. A traditional valuation function v maps from outcomes to points. The binary preference relation is then defined in terms of these points: a is strictly preferred to b just in case v(a) > v(b), b is strictly preferred to a just in case v(a) < v(b), and the agent is indifferent as between a and b just in case v(a) = v(b).
Figure 1: a preferred to b because 1 = v(a) > 0 = v(b)
In the looser model I propose, by contrast, the valuation function maps from outcomes to closed intervals, such that a is strictly preferred to b just in case min(v(a)) > max(v(b)) and the agent is indifferent as between a and b just in case there is some overlap in the intervals assigned to a and b.
Figure 2: indifference because neither min(v(a)) > max(v(b)) nor max(v(a)) < min(v(b))
Though this model preserves the transitivity of strict preference, it does not preserve the transitivity of indifference. This, however, may be a feature rather than a bug, since ordinary preferences as exhibited in choice behavior themselves seem not to preserve the transitivity of indifference.
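The interval-valued model can be sketched in a few lines of code (my illustration, not the paper's; the outcome names and numeric intervals are hypothetical). Strict preference remains transitive, since min(v(a)) > max(v(b)) and min(v(b)) > max(v(c)) jointly entail min(v(a)) > max(v(c)); but indifference, defined as interval overlap, is not:

```python
# v maps outcomes to closed intervals, represented as (lo, hi) pairs.

def strictly_prefers(v, a, b):
    """a is strictly preferred to b iff min(v(a)) > max(v(b))."""
    return v[a][0] > v[b][1]

def indifferent(v, a, b):
    """Indifference holds iff the intervals assigned to a and b overlap,
    i.e., neither outcome is strictly preferred to the other."""
    return not strictly_prefers(v, a, b) and not strictly_prefers(v, b, a)

# Hypothetical valuations chosen so that adjacent intervals overlap.
v = {"a": (0.0, 0.4), "b": (0.3, 0.7), "c": (0.6, 1.0)}

print(indifferent(v, "a", "b"))       # True: [0.0, 0.4] overlaps [0.3, 0.7]
print(indifferent(v, "b", "c"))       # True: [0.3, 0.7] overlaps [0.6, 1.0]
print(indifferent(v, "a", "c"))       # False: the intervals do not overlap
print(strictly_prefers(v, "c", "a"))  # True: min 0.6 > max 0.4
```

The last two lines exhibit exactly the failure of transitive indifference described above: the agent is indifferent between a and b and between b and c, yet strictly prefers c to a.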
3 Normative Implications of the Indeterminacy and Instability of Preferences
In this section, I consider some possible normative implications of the indeterminacy and instability of preferences, drawing on the descriptive model outlined in the previous section. Moving from the descriptive to the normative domain is always fraught, but, as I argue elsewhere (Alfano 2013), the two need to be explored in tandem, with mutual theoretical adjustments made on each side. Moral psychology without normative structure is a baggy monster. Normative theory without empirical support is a castle in the sky.
This section has four parts. In the first, I consider the implications of indeterminacy and instability for personal wellbeing. The second deals with the implications for right action. The third explores some implications for public policy. The fourth and final part argues for a possible meta-ethical implication.
3.1 Implications for Wellbeing
The primary worry raised for the theory of personal wellbeing by the indeterminacy and instability of preferences is that, if the extent to which your life is going well depends on or is a function of the extent to which you’re getting what you want (Heathwood 2006), then wellbeing inherits the indeterminacy and instability of preferences. In other words, there might be no fact of the matter concerning how good a life you’re living at this very moment, and if there is such a fact, it might fluctuate from moment to moment in response to seemingly trivial and normatively irrelevant situational factors.
By way of example, consider someone who is eating toast with cinnamon apple jam. Is his life as good as it would be if he were eating toast with grapefruit jam? If he is like the people in the choice blindness studies mentioned above, there might be no answer to this question. If he’s told that he prefers cinnamon apple, he will prefer the present state of affairs, but if he is told that he prefers grapefruit, he’ll be less pleased with the present state of affairs than he would be with the world in which he is eating grapefruit jam. Whether his life is better in the cinnamon apple jam-world or the grapefruit jam-world is indeterminate until his preferences crystallize in one ordering or the other.
Or consider someone who has a brand new hardbound copy of Moby Dick, for which she just paid $50 when it was marked down from $70. Is her life going better now that she has the book, or was it going better before, when she had the money? If she is like the participants in Ariely’s preference reversal study, the answer may be “yes” to both disjuncts. Before she bought the book, she preferred the money to the book. But then she anchored on the manufacturer’s suggested retail price of $70, raised her valuation of the book, and ended up preferring it to $50. So she bought it – what a deal! Her unstable preferences mean that she was better off with the money than the book, and that she is better off with the book than the money. It’s not a contradiction, but it makes her wellbeing a pain in the neck to evaluate.
Fortunately, though, there is a ready response to this worry, which begins by pointing out that the indeterminacy and instability of preferences is not radical but modest, a feature captured by the descriptive model in the previous section. Although there may be no fact of the matter whether the life of the consumer of cinnamon apple jam is better than the life of the consumer of grapefruit jam, there is a fact of the matter whether either of these lives is better than that of someone who, instead of eating jam, is enduring irritable bowel syndrome. Although preference orderings may fluctuate between owning a book and having $50, they do not fluctuate between owning the same book and having $50,000. These observations are consistent with the interval-valued preferences of the descriptive model outlined in the previous section. In the first example, the intervals for cinnamon apple jam and for grapefruit jam overlap with each other, but neither overlaps with the interval for irritable bowel syndrome. Thus, we can still make a whole host of judgments about the quality of various possible lives, even if, when we “zoom in,” such judgments cannot always be made. In the second example, the intervals for having $50 and having the book overlap with each other, but neither overlaps with the interval for having $50,000.
For the price of this local indeterminacy and instability, the theoretician of wellbeing can purchase an answer to an objection to the preference-satisfaction theory of wellbeing. The objection goes like this: when assessing whether it would be better to have the life of a successful lawyer or a successful artist, it seems trivial or even perverse to ask whether the artist’s life would involve slightly more ice cream, even if the agent considering what to do with her life likes ice cream. Slight preferences shouldn’t bear normative weight in this context.
However, if we assume, as seems reasonable in light of the evidence, that her preference for a little more ice cream is weak enough that it could be shifted by preference reversal or choice blindness, then its normative irrelevance is unmasked. The life of the ice cream-deprived artist and the life of the ice cream-enjoying artist are assigned nearly identical intervals on the scale of preference – intervals that differ less from each other than from that assigned to the life of the lawyer. Hence, if we are willing to put up with a little indeterminacy and instability, we can avoid more serious objections to the theory of personal wellbeing.
3.2 Implications for Right Action
The main worry raised by the indeterminacy and instability of preferences for the theory of right action is that, if right action depends on preference-satisfaction, then it inherits the indeterminacy and instability of preferences. It might turn out that there’s just no fact of the matter what it would be right to do, or that that fact is in constant flux. This worry is perhaps most pressing for preference-utilitarians, such as Brandt (1972, 1979, 1982, 1993), Hare (1981) and Singer (1993), but it casts a long shadow. Even if you don’t think that right action is a function of preferences and only preferences, it’s hard to deny that preferences matter at all. For instance, virtue ethicists typically countenance beneficence as an important virtue, and Kantians typically recognize an imperfect duty of beneficence. What does it mean to be beneficent? Among other things, the trait surely involves being disposed to promote the wellbeing of other people. What does it mean to act from a duty of beneficence? Among other things, such an action would be motivated by the intention to promote the wellbeing of another person. But if, as I argued in the previous section, wellbeing is affected by the indeterminacy and instability of preferences, then beneficence is too. And even if one thinks that beneficence is not a virtue and that there is no such imperfect duty, virtually any tolerable theory of right action is going to say that maleficence is a vice and that there is a duty – whether perfect or imperfect – of non-maleficence.
In the remainder of this section, I will concentrate on the normative implications of indeterminacy and instability for preference-utilitarianism, but it should be clear that these are just some of the more straightforward implications, and that others are waiting in the wings.
Before considering some responses I find attractive, I should point out that the problem we face here is not the one that is solved by distinguishing between a decision procedure and a standard of value. An objection to utilitarianism that was lodged early and often is that it’s either impossible or at least extremely computationally complex to know what would satisfy the most preferences. This knowledge could only be acquired by eliciting the preference orderings of every living person – or perhaps even every past, present, and future person. The correct response to this objection is that utilitarianism is meant to be a standard of value, not a decision procedure. It identifies (if it is the correct theory of right action) what it would be right to do, but that doesn’t mean that we can use it to find out what it would be right to do. The distinction is meant to parallel other general theories: Newtonian mechanics would have identified, if it had been the correct physical theory, what a projectile would do in any circumstances whatsoever, even if people were unable to apply the theory in a given instance.
This response is unavailable in the present context. There are two ways in which it might be impossible to know what would satisfy someone’s preferences: epistemic and metaphysical. You would be unable to know what someone wants if there were a fact of the matter about what that person wants but you couldn’t find out what it is. This would be a merely epistemic problem, and the distinction between a decision procedure and a standard of value handles it nicely. But you would also be unable to know what someone wants if there simply were no fact of the matter concerning what that person wants. If I am right that preferences are indeterminate, then this is the problem we now face, and it does no good to have recourse to the distinction between a decision procedure and a standard of value.
Preference-utilitarianism is not without resources, however. As in the case of wellbeing, one attractive response is to point out that preferences are only modestly indeterminate and unstable. Although there may be no uniquely most-preferred outcome for a given individual (or indeed for any individual), there will be many genuinely dispreferred outcomes, and hopefully a manageably constrained subset of outcomes than which nothing is more preferred. This subset would constitute something like what Amartya Sen (1997) calls a maximal, as opposed to an optimal, set: its members are all outcomes than which nothing is better, but there is no unique best outcome.
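Sen's notion can be illustrated with a short sketch in the terms of the interval-valued model (my illustration; the outcome labels and valuations are hypothetical). Computing the set of outcomes than which nothing is strictly preferred typically yields a maximal set with more than one member, rather than a unique optimum:

```python
def maximal_set(outcomes, strictly_prefers):
    """Sen-style maximal set: the outcomes to which nothing else is
    strictly preferred. With interval-valued preferences this is
    typically a set of mutually incomparable options, not a unique best."""
    return [x for x in outcomes
            if not any(strictly_prefers(y, x) for y in outcomes if y != x)]

# Hypothetical interval valuations: (lo, hi) pairs.
v = {"w": (0.0, 0.2), "x": (0.5, 0.9), "y": (0.6, 1.0), "z": (0.1, 0.3)}

# a is strictly preferred to b iff min(v(a)) > max(v(b)).
better = lambda a, b: v[a][0] > v[b][1]

print(maximal_set(list(v), better))  # ['x', 'y'] -- no unique best outcome
```

Here w and z are genuinely dispreferred (x is strictly better than both), while x and y are each undominated; the maximal set {x, y} is exactly the "manageably constrained subset" the response requires.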
Furthermore, from among this subset of alternatives it might be possible to winnow out those that satisfy preferences which we have independent normative grounds to reject – preferences that are silly, ignorant, perverse, or malevolent. It’s commonly argued in the context of right action that brute preferences carry less weight than idealized preferences. According to those who argue in this way, whether it’s right to do something depends less on whether it would satisfy people’s actual preferences than on whether it would satisfy their suitably idealized preferences. It might be hoped that idealizing preferences would cut down or even eliminate their indeterminacy and instability.
One of the more common idealizing strategies is the full information requirement, according to which the preferences that (help to) determine what it would be right to do are not the preferences that people actually have, but the ones they would have if they were fully informed. Specifying what full information means in a way that doesn’t collapse into omniscience is tricky, but one popular suggestion is to take into account “all those knowable facts which, if [one] thought about them, would make a difference to [one’s] tendency to act” (Brandt 1972, p. 682) or “everything that might make [one] change [one’s] desires” (Brandt 1983, p. 40) – a process called cognitive psychotherapy.
In the domain of policy, idealizing might be objectionable because it fails to respect people’s autonomy – a point about which I am dubious, but which seems to enjoy almost unanimous support. Perhaps they should be allowed to do things that are, strictly speaking, bad for themselves because it would be objectionably paternalistic or an infringement on their liberty to coerce or manipulate them into doing otherwise. In the domain of personal morality and individual wellbeing, however, these concerns do not arise. Perhaps, then, instability and indeterminacy can be overcome through idealization.
Here’s what that might look like. Suppose that Jake’s actual preferences are captured by my interval-valued model. As such, they present two problems: they fail to uniquely determine how it would be right to treat Jake, and they may even rule out the genuinely right way to treat him because his actual preferences are normatively objectionable. It might be possible to kill these two birds with the single stone of idealization if idealization leads to unique, point-valued preferences that are no longer normatively objectionable. Perhaps there is only one way that Jake’s preferences could turn out after he undergoes cognitive psychotherapy. This is a big ‘perhaps,’ but it is worth considering. What evidence we have, however, suggests that idealizing in this way would not lead to determinate, stable preferences. When Kahneman, Lichtenstein, Slovic, and Tversky began to investigate preference reversals, many economists saw the phenomenon as a threat, since it challenged some of the most fundamental assumptions of their field. Accordingly, they tried to show that preference reversals could be removed root and branch if participants were given sufficient information about the choices they were making. Years of attempts to eliminate the effect proved fruitless. These attempts dealt only with cardinal preferences, however, so it might be argued that, at least for ordinal preferences, idealization could resolve the problem.
The burden is then on the idealizer to say what information participants lack in the relevant experiments. What does someone who bids high on a bottle of wine after considering her SSN-truncation not know, or not know fully enough? Perhaps she should be allowed first to drink some of the wine. While Ariely et al. (2006) did not investigate whether this would eliminate the anchoring on SSN-truncation, they did conduct other experiments in which participants sampled the outcomes and thus had full information. In one, participants first listened to an annoying sound over headphones, then bid for the right not to listen to the sound again. As in the consumer goods experiment, before bidding, participants first considered whether they would pay their SSN-truncation in cents to avoid listening to the sound again. And as expected, those with higher SSN-truncations entered higher bids, while those with lower SSN-truncations entered lower bids. It’s unclear what further information they could have acquired to idealize their preferences. What seems more plausible is to say that they had too much information, not too little. If they hadn’t first considered whether to bid their SSN-truncation, they would not have anchored on it and would therefore have had uncontaminated preferences. But cognitive psychotherapy says to take into account “everything that might make [one] change [one’s] desires” (Brandt 1983, p. 40). Anchoring changed their desires, so it counts as part of cognitive psychotherapy. Perhaps the process can be revised by saying that one should take into account everything that might correctly or relevantly change one’s desires, but then the problem is to come up with an account of what makes an influence on one’s desires correct or relevant that doesn’t involve either a vicious regress or a vicious circle. To do so is beyond the scope of this paper.
Another response, which I find more attractive, is to embrace rather than reject the indeterminacy and instability of preferences. There are several ways to do this. One is to figure out which preferences are wildly indeterminate or unstable and disqualify their normative standing completely. Just as it makes sense to ignore the Rum Tum Tugger’s begging to be let inside because you know he’ll just beg to get back out again, perhaps it makes sense to hive off Jake’s indeterminate and unstable preferences, leaving a kernel of normatively respectable ones behind. Only these would matter when considering what it would be right to do by Jake, or what would promote his wellbeing.
A second way to embrace indeterminacy and instability is to make a less heroic assumption about the effect of cognitive psychotherapy. Instead of taking it for granted that this process is bound to converge on unique, point-valued preferences, perhaps it will merely shrink the width of Jake’s interval-valued preferences. In that case, even after idealization, there would be no unique characterization of what it would be right to do by Jake or what would most promote his wellbeing. As I argued in the context of prediction and explanation (Alfano 2012), however, this might be a feature rather than a bug. Suppose that idealization yields a preference ordering that rules out most actions as wrong and condemns many outcomes as detrimental to Jake’s wellbeing, but does not adjudicate among many others. The remaining actions would then all be considered morally right in the weak sense of being permissible but not obligatory, and the remaining outcomes would all be vindicated as conducive to wellbeing. In Sen’s terminology, all of them would belong to the maximal set. This strategy might help to solve the so-called demandingness problem by expanding what James Fishkin calls “the zone of indifference or permissibly free personal choice” (1982, p. 23; see also 1986). Thus, while it is possible to try to resist the evidence for indeterminacy and instability, or to acknowledge the evidence while denying its normative import, it may be better instead to embrace these features of preferences and use them to respond to existing problems.
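One simple way to render the idea of interval-valued preferences that shrink without collapsing to points is as an interval order. The sketch below is a toy rendering under my own assumptions, not necessarily the exact formalism of Alfano (2012), and the outcome names and numbers are invented: each outcome carries a utility interval, one outcome is strictly preferred to another only when its whole interval lies above the other’s, and overlapping intervals leave the comparison indeterminate.

```python
# Interval-valued preferences as an interval order (illustrative sketch):
# x is strictly preferred to y iff x's lower bound exceeds y's upper bound.

def prefers(intervals, x, y):
    """Strict preference: x's whole interval lies above y's."""
    return intervals[x][0] > intervals[y][1]

def permissible(intervals):
    """Outcomes to which nothing is strictly preferred (the maximal set)."""
    return {x for x in intervals
            if not any(prefers(intervals, y, x) for y in intervals)}

# Before idealization: wide, overlapping intervals (hypothetical values).
before = {"career": (2, 9), "travel": (3, 8), "gambling": (1, 4), "idleness": (0, 1)}
# After idealization: narrower intervals that still overlap.
after = {"career": (5, 7), "travel": (4, 6), "gambling": (1, 3), "idleness": (0, 1)}

print(sorted(permissible(before)))  # narrowing has not happened yet
print(sorted(permissible(after)))   # gambling is now ruled out; no unique winner
```

Narrowing the intervals shrinks the permissible set (gambling drops out) without producing a unique, point-valued answer: career and travel remain incomparable, which is exactly the “zone of indifference” the strategy in the text exploits.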
3.3 Implications for Public Policy
Whereas my previous work has investigated the descriptive upshot of instability and indeterminacy, Douglas Bernheim and Antonio Rangel (2009) have explored some of the normative implications of these problems for public policy. The model they consider resembles mine in that it sacrifices transitivity but retains acyclicity. A difficulty that immediately confronts such an approach parallels the one for right action: if the aptness of a policy is at least in part a function of the extent to which it satisfies the preferences of the people it affects, but preferences are indeterminate and unstable, then the aptness of the policy is itself indeterminate and unstable. Problems with the underlying preferences of individuals are inherited at the level of social preferences and social choice. There might be no fact of the matter concerning which policies it would be correct to institute. Or that fact might shift from moment to moment, leaving legislators and policy-makers paralyzed. This point is especially pertinent in light of the fact that preferences over public goods are subject to the same kind of preference reversals as preferences over consumer items (Green et al. 1998). Even so, within such a model, if instability and indeterminacy are limited, some policies will never satisfy enough preferences to be considered apt, and so can be ruled out. Doing so would leave a perhaps-not-unmanageable set of alternatives from which to choose – a maximal set, in Sen’s terminology.
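The structural point here, that giving up transitivity while retaining acyclicity still leaves something choosable, can be illustrated in miniature. The sketch below is a toy example under my own assumptions, not Bernheim and Rangel’s formalism: the relation a > b, b > c without a > c is intransitive but contains no cycle, and something (here, a) remains to which nothing is preferred.

```python
# Acyclicity without transitivity (illustrative sketch).
# A strict preference relation as a set of (better, worse) pairs.

def has_cycle(alternatives, prefers):
    """Detect a strict-preference cycle (x > ... > x) by depth-first search."""
    succ = {x: [w for (b, w) in prefers if b == x] for x in alternatives}
    def reaches(start, target, seen):
        for nxt in succ[start]:
            if nxt == target or (nxt not in seen and reaches(nxt, target, seen | {nxt})):
                return True
        return False
    return any(reaches(x, x, {x}) for x in alternatives)

# Intransitive but acyclic: a > b and b > c, without a > c.
alternatives = {"a", "b", "c"}
prefers = {("a", "b"), ("b", "c")}

assert not has_cycle(alternatives, prefers)
# Something remains choosable: nothing is preferred to 'a'.
best = {x for x in alternatives if not any((y, x) in prefers for y in alternatives)}
print(best)
```

Were the relation cyclic (add c > a), every alternative would be dominated and the maximal set would be empty; acyclicity is what guarantees the "perhaps-not-unmanageable set of alternatives" mentioned above.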
If this is right, it provides a new reason to favor the capabilities approach to public policy. According to this approach, the aim of public policy should not be to promote people’s wellbeing directly, for instance, by attempting to satisfy their preferences, but to promote their abilities to be and do what they choose. To the extent that preferences are unstable and indeterminate, any policy that attempts to satisfy them will have a moving, amorphous target. However, if policies aim instead to promote capabilities, then whatever someone’s preferences are, she will have a decent chance of being able to satisfy them. The idea here is that policies should aim not at the first-order level of satisfying preferences directly, but at the second-order level of providing or promoting capabilities that can be tapped by the relevant individuals in their attempts to satisfy their sometimes-shifting preferences.
Another potential implication for public policy is that policies should sometimes aim not so much to satisfy existing preferences, but to shape people’s preferences in such a way that they are (more easily) satisfiable. The idea here is to take advantage of the instability of preferences, cultivating them in such a way that the people who have them will be most able to satisfy their own wants. If you’re not getting what you want, either change what you’re getting, or change what you want. Of course, this proposal may seem objectionably paternalistic, but I tend to agree with Richard Thaler and Cass Sunstein (2008) in thinking that in some cases such policies may be permissible. In fact, it’s a striking asymmetry that almost no one objects to the shaping of beliefs, provided they are made to accord with (what we take to be) the truth, whereas it’s hard to find someone who doesn’t object to the shaping of desires and preferences. However, I would argue that the choice we often face is not whether to mould preferences but how. Given how easily preferences are influenced by seemingly trivial elements of the social environment, it’s highly likely that they are constantly being socially shaped without our realizing it. If this is right, existing policies already shape preferences; we just don’t know how. The choice is therefore between inadvertently influencing preferences and doing so strategically. I tend to think that society has not just a right but an obligation to help people develop appropriate preferences – a point with which feminists such as Serene Khader (2011) concur. The worry that such interventions might be objectionably paternalistic can be assuaged somewhat by insisting, as Khader does, that the very people whose preferences are the targets of policy intervention participate in designing the interventions.
If these suggestions are on the right track, it would appear once again that the best response to the problems raised by indeterminate and unstable preferences is to embrace, not reject, indeterminacy and instability.
3.4 Implications for Meta-Ethics
The final normative domain I want to consider in relation to indeterminate and unstable preferences is meta-ethics. I argue here that the indeterminacy and instability of preferences blunts a common objection to reasons internalism – the thesis that you only have a normative practical reason to perform an action if you desire or prefer to perform that action.
To see why, consider Bernard Williams’s (1995) example of the callous husband, who doesn’t care at all about his wife’s wellbeing. Williams claims that many things can be said to and about this man (such as that he is “ungrateful, inconsiderate, hard, sexist, nasty, selfish, brutal, and many other disadvantageous things”), but the one thing that cannot be said is that he has a reason to treat his wife better. If he lacks the desire to make her life more livable, then it would be incorrect to say that he has such a reason. This implication of reasons internalism is sometimes taken to be counterintuitive. The reasons externalist, by contrast, can say that even if the husband has no desire to treat his wife better, he still has a reason to do so. It is beyond the scope of this paper to argue for reasons internalism as such, but it seems to me that if preferences are modestly indeterminate and unstable, the internalist gains a new response to this problem. For, if the husband’s callousness is not so deep as to make his preference for mistreating his wife completely determinate and immovable, it might make sense to tell him that he has a reason to treat her better in order to give him that reason. In the same way that researchers in the choice blindness paradigm tell participants what they want, then elicit reasons from the participants that support what they’ve been told, so it might make sense to tell the callous husband that he has a reason to treat his wife better, potentially eliciting a change of heart. Indeed, this might work better than telling him that he should have a reason to treat her better.
Such an intervention is of course not guaranteed to succeed, but if the work on choice blindness is any indication, neither is it guaranteed to fail. If it did succeed, it would be similar to a phenomenon I treat in much greater detail in the domain of character traits (Alfano 2013). In Character as Moral Fiction, I argue that trait ascriptions sometimes function as self-fulfilling prophecies – that people become what they’re told they already are. The idea here is that the same thing might happen with preferences: in some cases, desire- and preference-ascriptions tend to function as self-fulfilling prophecies, through which people come to want what they’re told they already want. If this is right, then the reasons internalist can say that, though it might not be true to tell the callous husband that he has a reason to treat his wife better, it might still be assertible, and it might come true in virtue of being asserted.
4 Methodological Coda
Empirically-informed philosophy, including empirically-informed moral philosophy, has experienced a growth spurt in recent years. I am one of its proponents because I think there are only three ways for a normative theory to be related to empirical evidence: it can be empirically informed, empirically uninformed, or empirically misinformed. That said, I often worry that the proponents of empirically-informed philosophy do their cause a disservice by being misinformed and by drawing outrageous conclusions from a scant evidential base. Two recent examples of this problem are “The Bleak Implications of Moral Psychology” (Machery 2010) and “The Philosophical Personality Argument” (Feltz & Cokely 2012). Machery claims that virtue ethics is pretty much hopeless in the face of evidence from social psychology, but as I argue in Character as Moral Fiction, the best response to this evidence is to reimagine character trait ascriptions as self-fulfilling prophecies, not to abandon talk of virtue entirely. Feltz and Cokely claim that, because views on philosophical issues sometimes correlate with broad measures of individual differences such as extraversion, it’s impossible to trust our intuitions about these issues. Yet the correlations they document are weak, explaining less than 10% of the variance when they explain anything at all, and there is no theoretical explanation for when these individual difference measures do and do not correlate with philosophical views, nor for the direction of the correlations. It seems likely that almost every result in the “philosophical personality” research program is due to what Meehl (1990, p. 204) calls the crud factor: “In the social sciences […] everything correlates to some extent with everything else.”
The most facile response to seeing that a cherished normative theory is potentially in conflict with some empirical data is to drop a bomb on the theory, rejecting it outright. Such a move will inevitably be rejected by those who find the theory compelling. It’s much better, I would like to suggest, to creatively amend the theory in such a way that it is still recognizable and attractive, and in such a way that it overcomes existing objections. That is what I have tried to do in this paper.
Alfano, M. (2013). Character as Moral Fiction. Cambridge: Cambridge University Press.
Alfano, M. (2012). Wilde heuristics and Rum Tum Tuggers: Preference indeterminacy and instability. Synthese, 189:S1, 5-15.
Ariely, D., Loewenstein, G., & Prelec, D. (2006). Tom Sawyer and the construction of value. Journal of Economic Behavior & Organization, 60, 1-10.
Ariely, D., Loewenstein, G., & Prelec, D. (2003). “Coherent arbitrariness”: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118:1, 73-105.
Ariely, D. & Norton, M. (2008). How actions create – not just reveal – preferences. Trends in Cognitive Sciences, 12:1, 13-16.
Bentham, J. (1789/1961). An Introduction to the Principles of Morals and Legislation. Garden City: Doubleday.
Berg, J., Dickhaut, J., & O’Brien, J. (1985). Preference reversal and arbitrage. In V. Smith (ed.), Research in Experimental Economics, pp. 31-72. Greenwich, CT: JAI Press.
Bernheim, D. & Rangel, A. (2009). Beyond revealed preference: Choice theoretic foundations for behavioral welfare economics. Quarterly Journal of Economics, 124:1, 51-104.
Brandt, R. (1983). The real and alleged problems of utilitarianism. The Hastings Center Report, 13:2, 37-43.
Brandt, R. (1982). Two concepts of utility. In H. Miller & W. Williams (eds.), The Limits of Utilitarianism, pp. 169-185. Minneapolis: University of Minnesota Press.
Brandt, R. (1979). A Theory of the Good and the Right. New York: Oxford University Press.
Brandt, R. (1972). Rationality, egoism, and morality. The Journal of Philosophy, 69:20, 681-697.
Feltz, A. & Cokely, E. (2012). The philosophical personality argument. Philosophical Studies, 161:2, 227-246.
Fishkin, J. (1986). Beyond Subjective Morality: Ethical Reasoning and Political Philosophy. New Haven: Yale University Press.
Fishkin, J. (1982). The Limits of Obligation. New Haven: Yale University Press.
Goodin, R. (1986). Laundering preferences. In J. Elster & A. Hylland (eds.), Foundations of Social Choice Theory, pp. 75-101. Cambridge: Cambridge University Press.
Green, D., Jacowitz, K., Kahneman, D., & McFadden, D. (1998). Referendum contingent valuation, anchoring, and willingness to pay for public goods. Resource and Energy Economics, 20, 85-116.
Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLOS ONE, 7:9, e45457.
Hall, L., Johansson, P., Tärning, B., Sikström, S., & Deutgen, T. (2010). Magic at the marketplace: Choice blindness for the taste of jam and the smell of tea. Cognition, 117, 54-61.
Hare, R. M. (1981). Moral Thinking. Oxford: Clarendon Press.
Heathwood, C. (2006). Desire satisfaction and hedonism. Philosophical Studies, 128:3, 539-563.
Hoeffler, S. & Ariely, D. (1999). Constructing stable preferences: A look into dimensions of experience and their impact on preference stability. Journal of Consumer Psychology, 8:2, 113-139.
Hoeffler, S., Ariely, D., & West, P. (2006). Path dependent preferences: The role of early experience and biased search in preference development. Organizational Behavior and Human Decision Processes, 101, 215-229.
Johansson, P., Hall, L, & Chater, N. (2011). Preference change through choice. In R. Dolan & T. Sharot (eds.), Neuroscience of Preference and Choice. Elsevier Academic Press, pp. 121-141.
Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). Failure to detect mismatches between intention and outcome in a simple decision task. Science, 310, 116-119.
Johnson, E. & Schkade, D. (1989). Bias in utility assessments. Management Science, 35, 406-424.
Khader, S. (2011). Adaptive Preferences and Women’s Empowerment. Oxford: Oxford University Press.
Lichtenstein, S. & Slovic, P. (1971). Reversals of preference between bids and choices in gambling decisions. Journal of Experimental Psychology, 89, 46-55.
Machery, E. (2010). The bleak implications of moral psychology. Neuroethics, 3:3, 223-231.
McDowell, J. (1998). Might there be external reasons? In his Mind, Value, and Reality, pp. 65-85. Cambridge, MA: Harvard University Press.
Meehl, P. (1990). Why summaries of research on psychological theories are often uninterpretable. Psychological Reports, 66, 195-244.
Mill, J. S. (1861/1998). Utilitarianism, edited with an introduction by Roger Crisp. New York: Oxford University Press.
Nussbaum, M. (2000). Women and Human Development: The Capabilities Approach. New York: Cambridge University Press.
Parfit, D. (1984). Reasons and Persons. Oxford: Clarendon Press.
Pommerehne, W., Schneider, F., & Zweifel, P. (1982). Economic theory of choice and the preference reversal phenomenon: A reexamination. American Economic Review, 72, 569-574.
Reilly, R. (1982). Preference reversal: Further evidence and some suggested modifications in experimental design. American Economic Review, 72, 576-584.
Schroeder, M. (2007). Slaves of the Passions. New York: Oxford University Press.
Sen, A. (1997). Maximization and the act of choice. Econometrica, 65:4, 745-779.
Sen, A. (1985). Well-being, agency, and freedom. Journal of Philosophy, 82:4, 169-221.
Sen, A. (1979). Utilitarianism and welfarism. Journal of Philosophy, 76, 463-489.
Sidgwick, H. (1907). The Methods of Ethics. London: Macmillan.
Singer, P. (1993). Practical Ethics, Second Edition. Cambridge: Cambridge University Press.
Slovic, P. (1995). The construction of preference. American Psychologist, 50:5, 364-371.
Slovic, P. & Lichtenstein, S. (1983). Preference reversals: A broader perspective. The American Economic Review, 73:4, 596-605.
Slovic, P. & Lichtenstein, S. (1968). The relative importance of probabilities and payoffs in risk-taking. Journal of Experimental Psychology, 78:3 (Pt. 2), 1-18.
Thaler, R. & Sunstein, C. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New York: Penguin Books.
Tversky, A. & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, new series, 211:4481, 453-458.
Tversky, A., Slovic, P., & Kahneman, D. (1990). The causes of preference reversal. The American Economic Review, 80:1, 204-217.
Williams, B. (2001). Postscript: Some further notes on internal and external reasons. In E. Millgram (ed.), Varieties of Practical Reasoning, pp. 91-97. Cambridge, MA: MIT Press.
Williams, B. (1995). Internal reasons and the obscurity of blame. In his Making Sense of Humanity, pp. 35-45. Cambridge: Cambridge University Press.
Williams, B. (1981). Internal and external reasons. In his Moral Luck, pp. 101-113. Cambridge: Cambridge University Press.
 With thanks for helpful comments to John Basl, Christian Coons, John Doris, Leonard Greene, Gilbert Harman, Philipp Koralus, Kate Manne, Jesse Prinz, Shannon Spaulding, Kevin Vallier, and the members of the Princeton University Center for Human Values.
 See Lichtenstein & Slovic (1971); Slovic (1995); Slovic & Lichtenstein (1968, 1983); Tversky & Kahneman (1981); Tversky, Slovic, & Kahneman (1990).
 See also Ariely & Norton (2008), Green et al. (1998), Hoeffler & Ariely (1999), Hoeffler et al. (2006), Johnson and Schkade (1989), and Lichtenstein and Slovic (1971).
 See Ariely, Loewenstein, & Prelec (2003).
 Bentham (1789/1961, p. 31), Mill (1861/1998, 26), and Sidgwick (1907, p. 413) all deal with the objection in this way.
 For more on the process of cognitive psychotherapy or the “laundering” of preferences, see Brandt (1972, 1982) and Goodin (1986).
 See Berg, Dickhaut, & O’Brien (1985); Pommerehne, Schneider, & Zweifel (1982); and Reilly (1982).
 See Sen (1979, 1985) and Nussbaum (2000).
 See Williams (1981, 1995, 2001).
 McDowell (1998) seems to have something like this in mind when he discusses the process of advice-giving as a way to generate rather than merely clarify someone’s motivations.