Ted Slingerland to visit UO October 10th

Abstract: Many early Chinese thinkers had as their spiritual ideal the state of wu-wei, or effortless action. By advocating spontaneity as an explicit moral and religious goal, they inevitably involved themselves in the paradox of wu-wei—the problem of how one can try not to try—which later became one of the central tensions in East Asian religious thought. In this talk, I will look at the paradox from both an early Chinese and a contemporary perspective, drawing upon work in social psychology, cognitive neuroscience, and evolutionary theory to argue that this paradox is a real one, and is moreover intimately tied up with problems surrounding cooperation in large-scale societies and concerns about moral hypocrisy.

Slingerland Poster

Review of Austin’s _Virtues in Action_

Here’s a draft of a review of _Virtues in Action_, edited by Michael Austin.  As always, comments, criticisms, and suggestions are most welcome.

This ain’t your grandma’s virtue theory.

In Michael Austin’s bold new collection, _Virtues in Action: New Essays in Applied Virtue Ethics_, gone are the pretensions of defining right action generically as what a virtuous person would do in the circumstances, while acting in and from character, provided that a virtuous person would end up in those circumstances, or as what a virtuous person would advise otherwise.  Instead, we find detailed explorations of specific virtues and vices related to specific fields of activity and problems, with attention (some of it careful – some less so) to relevant empirical literature and elbow room for alternative normative approaches and conceptions.  Aristotle tells us about courage, temperance, generosity, magnificence, magnanimity, even temper, pride, justice, and friendship.  The first-wave neo-Aristotelians such as Geach (1977) tell us about prudence, justice, temperance, courage, faith, hope, and charity.

Contributors to the present volume tell us about instilling open-mindedness and curiosity in students (Bassham), promoting a sense of competitive honor and magnificence in business executives (Demetriou), fostering humility through sport (Austin), cultivating sexual tenderness (Van Hooft), reconciling Mencius’s sprout of ren with Aristotle’s notion of virtue-habituation (Giebel), promoting pacifism because the training of soldiers harms their character (Trivigno), revisiting the relation between virtue and abortion (Flannagan), developing Buddhist compassion in the face of environmental catastrophe (Frakes), learning to live with the rest of nature through ecological humility (Pianalto), learning to hope to learn (Snow), translating virtue theory into the contemporary dual-process model of psychology (Tessman), and charitably debating fraught political and moral issues (Garcia & King).

That’s twelve chapters in just over two hundred pages – roughly 7000 words per chapter.  Naturally, then, many of the discussions are truncated.  In some cases, this makes the chapter a breezy jaunt through a novel topic; in others, the reader is left feeling that the discussion was facile and superficial.  To put the chapters in perspective, Austin has arranged them into four parts: professional (education, business, and sport), social (sex, partiality, war, and abortion), environmental, and intellectual.  Some of this categorization works better than others.  For instance, Bassham’s chapter on education concerns not the virtues of educators but the prospects and problems of educating for virtue – especially intellectual virtue.  It might fit better in the last part.  Likewise, Tessman’s chapter on dual-process theory might have found a more natural home among the papers on social virtue.

Given the diversity of topics covered in this volume, few readers will be equally interested in all of the papers.  Two of them are must-reads: “Sex, Temperance, and Virtue” by Stan van Hooft and “A Virtue Ethical Case for Pacifism” by Franco Trivigno.  I’ll describe these two chapters in some detail below.  Most of the rest of the chapters are quite readable; I’ll summarize their main points.  A few of the chapters are notably weak; I’ll briefly mention why.

Van Hooft explores the relation between virtue theory and sexual activity.  He uses as a stalking horse Raja Halwani’s (2010) claim that temperance-intemperance is the sole dimension on which virtue theorists should consider sex and sexual activity.  Halwani argues that there are two aspects to temperance: rational control and regulation over sexual behavior and mentation, and avoiding the use of independently wrong actions (lying, stealing, rape, injustice, unkindness) as means to sexual ends.  Van Hooft correctly points out that the second aspect has nothing to do with sexuality specifically, and that the first aspect applies also to moderating other natural bodily appetites related to eating and drinking.  In other words, neither aspect of sexual temperance, as Halwani characterizes it, is distinctively sexual.

On Van Hooft’s account, this error replicates Aristotle’s own failure to think through the differences between sex on the one hand and food and drink on the other hand.  As he pertly puts it, “If sex raised only the same ethical problems as eating and drinking, the paradigm case of sexual activity would be masturbation” (p. 64).  Such a conception of sex is, obviously, “seriously deficient” in at least three ways.  First, sex – even masturbation, which often involves fantasizing and imagination – is typically social.  Second, as Freud taught us, sex is polymorphously perverse, capable of eroticizing just about anything.[1]  Third, unlike eating and drinking, the enjoyment of sex is often not only passionate but agentic.  These considerations lead Van Hooft to conclude that, pace Halwani, the distinctively sexual virtue is tenderness, not temperance.  Such tenderness answers not just to the value of moderation but also to such values as agency, privacy, timeliness, intimacy, generosity, considerateness, and trust.

Trivigno mounts an argument for contingent pacifism based on psychological and related investigations of moral injury to soldiers.  The ingredients for this argument are a proper understanding of what contemporary military training does to the moral character of soldiers, the knock-on consequences of this training for the soldiers, and the knock-on consequences of this training for other people (enemies in combat, civilians and bystanders in war zones, and soldiers’ civilian compatriots).  Trivigno argues only for contingent pacifism, which he describes as “a very strong presumption against the use of military force” given current military training techniques (p. 86).  What are these techniques, and why are they so objectionable?  The vast majority of adult humans harbor a deep resistance to killing conspecifics, which seems to be bound up with both empathy and the natural tendency to see others, even enemy combatants, as human beings.  Studies reveal that during World War II, for instance, between 80% and 85% of American soldiers in combat did not fire their weapons or fired them harmlessly into the sky.  In the last six decades or so, militaries have developed techniques for overcoming this resistance to killing.  Trivigno focuses on three: automating the process of firing weapons through operant conditioning, euphemizing the act of killing, and dehumanizing enemies and potential enemies.

Through conditioning, soldiers learn to fire their weapons without deliberating about the nature of their actions.  Thus, they become capable of killing without realizing in the moment that that’s what they’re doing.  The other two techniques are meant to ensure that they aren’t later debilitated by the recognition of what they’ve done.  Action, as Davidson (1980) taught us, is always intentional under some description.  If the only available description for what you’ve done is “killing another person” and you’ve embodied (as almost all of us have) a norm against killing, then even if you judge that you did the right thing, you may feel devastated.  Modern military training erects a Potemkin village of euphemisms for the horrific actions that soldiers are sometimes ordered to commit.  You’re not “killing a person.”  You’re “servicing a target,” “achieving an objective,” “wasting a towel-head.”  The first two euphemisms work through sanitization.  The third transitions to the final technique: dehumanization.  As Tirrell (2012) explores in more detail, dehumanization is a prelude to and perhaps even a constitutive part of atrocity.  The Nazis described Jews as vermin.  During the Rwandan genocide, Hutu Power called Tutsis cockroaches.  Modern military training[2] typically severs the empathic connection between the soldier and everyone other than his comrades (since everyone else is at least a potential enemy) by portraying the other in demonic or bestial language and imagery.

Shocking.  Horrifying.  Depressing.  What does it have to do with virtue and pacifism?  Trivigno traces two main connections.  First, the capacity for empathy, while hardly sufficient for good character and flourishing, is a constituent of both.  By destroying or corrupting soldiers’ capacity for empathy, modern military training harms their moral character and their chances for flourishing.  Second, the techniques used in modern military training (automaticity, euphemism, and dehumanization) are too coarse-grained to prevent extremely bad consequences such as atrocity.  Given the way soldiers are currently trained, we should expect incidents like My Lai, Abu Ghraib, and the Fallujah massacre as a normal part of war.  Expressions of shock in the face of such atrocities reflect either ignorance or wishful thinking.

Although Van Hooft’s and Trivigno’s contributions stand out, there are plenty of other solid chapters in this collection.  Gregory Bassham furnishes six reasons to prefer a model of education in which students cultivate virtues (intellectual and perhaps even moral) rather than merely acquiring knowledge.  First, historically, this is how education has been conceptualized.  Second, this is what liberal universities are explicitly committed to in their mission statements.  Third, since a university education is meant to have a “deep and positive impact” on students, it should aim at virtue, which is deeper than knowledge.  Fourth, arguably an incomplete education that involves virtue but not knowledge is more easily parlayed into a complete education than one that involves knowledge but not virtue.  Fifth, focusing on virtue-development makes education more of a collaboration among educators, students, families, and communities.  Finally, education intrinsically aims at personal development, which includes among other things virtue.

Dan Demetriou argues that, regardless of one’s political preferences, the rapid rise in income and wealth inequality throughout the developed world should be troubling.  In response to this, he recommends promoting competitive honor and magnificence as virtues for business executives and other obscenely wealthy people (e.g., workers in the finance industry).  There’s always more money to be had.  But being the most honored (or the second most-honored, or the third) is an artificially scarce resource.  For this reason, it would be better for everyone if people with the absurd amounts of power currently afforded to the ultra-wealthy pursued the prestige that accrues to magnificent generosity rather than yet more wealth.  Demetriou may be right, though if he is, one is forced to ask the awkward question: if we’ve been reduced to encouraging super-managers (as Piketty 2014 calls them) to voluntarily redistribute their ill-gotten gains, perhaps more drastic solutions are called for.

In his chapter in his own book, Michael Austin argues that sport – even if it hasn’t yet been successfully harnessed for such purposes – can and should be aimed at cultivating and displaying the moral virtues of athletes.  First, there are positive values embedded in the practice of sport.  Second, participating in sport can foster humility, as one submits oneself to the standards inherent in the practice.  Third, sport can be used in the cultivation of prudence, courage, temperance, and justice.  Austin’s account of how sport can be used in this way is – for reasons of space, perhaps among others – brief.  It also makes dubious use of the empirical literature on ego depletion (pp. 43-4).  One question that Austin doesn’t ask but which clearly must be considered is how sport relates not to participants but to spectators and those related to them.  Does watching football (American or otherwise) on the television in any way help a spectator to cultivate virtue?  Does it contribute to the spectator’s vice?[3]

Chris Frakes argues that, while Western conceptions of compassion may leave one debilitated in the face of monumental environmental degradation and injustice, Buddhist compassion may be more robust.  In particular, someone who embodies Buddhist compassion is able to direct her attention and action well in the face of suffering, and is motivated to adopt an environmentally mindful lifestyle.

Nancy Snow discusses hope as an intellectual virtue.  In so doing, she distinguishes the attitude of hope, which has particular ends, from the agentic disposition of hope, which does not.  To hope for X is to perceive X as good but regard its occurrence as uncertain, and in so doing to exercise imagination and agency to see to it that X occurs.  The disposition of hopefulness, in turn, involves being inclined to have the attitude of hope towards various ends.  According to Snow, hope motivates the pursuit of knowledge by holding out the possibility that one will discover the truth, immunizes the hoper against setbacks and frustrations, and thus constitutes a method for acquiring knowledge.  Perhaps surprisingly, Snow fails to consider the ancient fatalist conception of hope exemplified in the myth of Pandora’s box: what if hope is the greatest of evils because it leads us to persevere through suffering for no reason?

The last chapter worth reading is co-authored by Robert Garcia and Nathan King, who document two fallacies that tend to undermine frank and engaged discussion of morally fraught issues: assailment-by-entailment and the attitude-to-agent fallacy.  Assailment-by-entailment is basically a failure of perspective-taking.  You believe that p entails q, and that q is morally repugnant.  Your interlocutor asserts that p.  You infer that your interlocutor not only believes that p but also believes (like you) that p entails q and therefore believes that q.  In fact, she rejects q or at least suspends judgment on it.  You thus end up attributing to her a belief that you find repugnant and that she is not committed to.  The attitude-to-agent fallacy is a relative of the fundamental attribution error, in which people all too quickly infer something deep about an agent from something superficial, such as a one-off behavior or the expression of an isolated attitude.  Against these errors, Garcia & King recommend cultivating and expressing intellectual humility and charity of interpretation.

I’m afraid I cannot recommend reading the chapters by Heidi Giebel, Matthew Flannagan, Matthew Pianalto, or Lisa Tessman.  Giebel’s contribution merely summarizes some well-known views of Mencius and Aristotle.  Her attempt to deal with the threat of situationism to virtue theory is shockingly under-informed.  Flannagan engages in a reactionary turn of the screw in interpreting and responding to Hursthouse’s arguments about abortion.  Pianalto serves up character assassination rather than argument, suggesting that “the person who gets depressed when considering his or her life from a wider perspective feels this way because the wider perspective challenges his or her own attitude of self-importance,” belying an “attitude of arrogant or vain self-importance” (p. 140).  A word to the not-so-wise: when your best evidence is your own phenomenology, don’t accuse others of vice for honestly reporting their own phenomenology.

Finally, Lisa Tessman does that voodoo that she does, arguing in her chapter that virtue ethics is consistent with the prominent dual-process framework in contemporary psychology, and that virtue thus understood makes many decisions tragic decisions (in this case, pitting automatic, affect-laden, “System 1” intuitions against effortful, deliberative “System 2” judgments).  A keen observer of Tessman’s publication record might note that this is more or less the conclusion of everything she’s published in the last decade.

In sum, the chapters by Van Hooft and Trivigno alone make Virtues in Action a worthy acquisition.  Many of the other chapters are edifying.  A few are best avoided.  Such are the virtues – and the vices – of Virtues in Action.




Davidson, D. (1980). Essays on Actions and Events. Oxford UP.

Geach, P. (1977). The Virtues. Cambridge UP.

Halwani, R. (2010). Philosophy of Love, Sex, and Marriage: An Introduction. New York: Routledge.

Piketty, T. (2014). Capital in the Twenty-First Century. Cambridge, MA: Belknap Press of Harvard UP.

Tirrell, L. (2012). Genocidal language games. In I. Maitra & M. K. McGowan (eds.), Speech and Harm: Controversies over Free Speech, pp. 174-221. Oxford UP.


[1] My favorite example is this exchange from the BBC show “Blackadder”:

EDMUND: Ah, I see you’ve underlined a few: ‘bloomers’, ‘burp’, ‘fart’, ‘fiddle’, ‘fornicate’?


JOHNSON: Sir!  I hope you’re not using the first English dictionary to look up rude words!

EDMUND: I wouldn’t be too hopeful; that’s what all the other ones will be used for.

BALDRICK: Sir, can I look up ‘turnip’?

EDMUND: ‘Turnip’ isn’t a rude word, Baldrick.

BALDRICK: It is if you sit on one.

[2] And, I should add, police training at least in the United States, given the rapid militarization of law enforcement.

[3] The question is serious.  Domestic violence spikes in countries that endure losses in the World Cup.

Raven’s Kripkensteinian Matrices

I’ve been thinking a bit recently about how one well-regarded test of intelligence, Raven’s Progressive Matrices, relates to the Kripkenstein Puzzle.  It led me to generate a few images… not sure whether I’ll continue with these, but looking at them sort of gives me an uncanny feeling.

Raven 1a

Raven 1b

Raven 2a

Raven 2b



This Thing of Darkness I Acknowledge Mind: Chapter on Responsibility and Implicit Bias

Here’s a draft of a chapter of my moral psychology textbook. It’s on implicit bias and responsibility.  This one was much more depressing to write than the one on preferences.  As always, questions, comments, suggestions, and criticisms are most welcome.


“This thing of darkness I acknowledge mine.”

~ William Shakespeare, The Tempest, 5.1.289-290

1 Some incidents

At 12:40 AM, February 4th, 1999, Amadou Diallo, a student, entrepreneur, and African immigrant, was standing outside his apartment building in the southeast Bronx. In the gloom, four passing police officers in street clothes mistook him for Isaac Jones, a serial rapist who had been terrorizing the neighborhood. Shouting commands, they approached Diallo. He headed towards the front door of his building. Diallo stopped on the dimly lit stoop and took his wallet out of his jacket. Perhaps he thought they were cops and was trying to show them his ID; maybe he thought they were violent thieves and was trying to hand over his cash and credit cards. We will never know. One of them, Sean Carroll, mistook the wallet for a gun. Alerting his fellow officers, Richard Murphy, Edward McMellon, and Kenneth Boss, to the perceived threat, he triggered a firestorm: together, they fired 41 shots at Diallo, 19 of which found their mark. He died on the spot. He was unarmed. All four officers were ruled by the New York Police Department to have acted as a “reasonable” police officer would have acted in the circumstances. Subsequently indicted for second-degree murder and reckless endangerment, they were acquitted on all charges.

Like so many others, Sean Bell, a black resident of Queens, had some drinks with his friends at a club the night before his wedding, which was scheduled for November 25th, 2006. As they were leaving the club, though, something less typical happened: five members of the New York City Police Department shot about fifty bullets at them, killing Bell and permanently wounding his friends, Trent Benefield and Joseph Guzman. The first officer to shoot, Gescard Isnora, claimed afterward that he’d seen Guzman reach for a gun. Detective Paul Headley fired one shot; officer Michael Carey fired three bullets; officer Marc Cooper shot four times; officer Isnora fired eleven shots. Officer Michael Oliver emptied an entire magazine of his 9 mm handgun into Bell’s car, paused to reload, then emptied another magazine. Bell, Benefield, and Guzman were unarmed. In part because Benefield’s and Guzman’s testimony was confused (understandably, given that they’d had a few drinks and then been shot), all of the police officers were acquitted. New York City agreed to pay Benefield, Guzman, and Bell’s fiancée just over seven million dollars (roughly £4,000,000) in damages, which prompted Michael Paladino, the head of the New York City Detectives Endowment Association, to complain, “I think the settlement is a joke. The detectives were exonerated… and now the taxpayer is on the hook for $7 million and the attorneys are in line to get $2 million without suffering a scratch.”

In 1979, Lilly Ledbetter was hired as a supervisor by Goodyear Tire & Rubber Company. Initially, her salary roughly matched those of her peers, the vast majority of whom were men. Over the next two decades, her and her peers’ raises, which when awarded were a percentage of current salary, were contingent on periodic performance evaluations. In some cases, Ledbetter received raises. In many, she was denied. By the time she retired in 1997, her monthly salary was $3727. The other supervisors – all men – were then being paid between $4286 and $5236. Over the years, her compensation had lagged further and further behind that of men performing substantially similar work; by the time she retired, she was making between 71% and 87% of what her male counterparts earned. Just after retiring, Ledbetter launched charges of discrimination, alleging that Goodyear had violated Title VII of the Civil Rights Act, which prohibits, among other things, discrimination with respect to compensation because of the target’s sex. Although a jury of her peers found in her favor, Ledbetter’s case was appealed all the way to the American Supreme Court, which ruled 5-4 against her. Writing for the majority, Justice Samuel Alito argued that Ledbetter’s case was unsound because the alleged acts of discrimination occurred more than 180 days before she filed suit, putting them beyond the pale of the statute of limitations and effectively immunizing Goodyear. In 2009, Congress passed the Lilly Ledbetter Fair Pay Act, loosening such temporal restrictions to make suits like hers easier to prosecute.

Though appalling, Ledbetter’s example is actually unremarkable. On average in the United States, women earn 77% of what their male counterparts earn for comparable work. A longitudinal study of the careers of men and women in business indicates that Ledbetter’s case fits a general pattern. Although no gender differences were found early-career, by mid-career, women reported lower salaries, less career satisfaction, and less feelings of being appreciated by their bosses (Schneer & Reitman 1994). Over the long term, many small, subtle, but systematic biases often snowball into an unfair and dissatisfying career experience.

Why consider these cases together? What – other than their repugnance – unites them? The exact motives of the people involved are opaque to us, but we can speculate and consider what we should think about the responsibility of those involved, given plausible interpretations of their behavior and motives. This lets us evaluate related cases and think systematically about responsibility, regardless of how we judge the historical examples used as models. In particular, in this chapter I’ll consider the question whether and to what extent someone who acts out of bias is responsible for their behavior. The police seem to have been in some way biased against Diallo and Bell; Ledbetter’s supervisors seem to have been in some way biased against her. To explore the extent to which they were morally responsible for acting from these biases, I’ll first discuss philosophical approaches to the question of responsibility. Next, I’ll explain some of the relevant psychological research on bias. I’ll then consider how this research should inform our understanding of the moral psychology of responsibility. Finally, I’ll point to opportunities for further philosophical and psychological research.


Ramsifying virtue theory

Draft of a paper to be published in Current Controversies in Virtue Theory.  My controversy is over the question “Can people be virtuous?”  My respondent is James Montmarquet.  Other contributors to the volume include Heather Battaly, Liezl van Zyl, Jason Baehr, Ernie Sosa, Dan Russell, Christian Miller, Bob Roberts, and Nancy Snow.

Ramsifying virtue theory 

Can people be virtuous? This is a hard question, both because of its form and because of its content.

In terms of content, the proposition in question is at once normative and descriptive. Virtue-terms have empirical content. Attributions of virtues figure in the description, prediction, explanation, and control of behavior. If you know that someone is temperate, you can predict with some confidence that he won’t go on a bender this weekend. Someone’s investigating a mysterious phenomenon can be partly explained by (correctly) attributing curiosity to her. Character witnesses are called in trials to help determine how severely a convicted defendant will be punished. Virtue-terms also have normative content. Attributions of virtues are a manifestation of high regard and admiration; they are intrinsically rewarding to their targets; they’re a form of praise. The semantics of purely normative terms is hard enough on its own; the semantics of “thick” terms that have both normative and descriptive content is especially difficult.

Formally, the proposition in question (“people are virtuous”) is a generic, which adds a further wrinkle to its evaluation. It is notoriously difficult to give truth conditions for generics (Leslie 2008). A generic entails its existentially quantified counterpart, but is not entailed by it. For instance, tigers are four-legged, so some tigers are four-legged; but even though some deformed tigers are three-legged, it doesn’t follow that tigers are three-legged. A generic typically is entailed by its universally quantified counterpart, but does not entail it. Furthermore, a generic neither entails nor is entailed by its counterpart “most” statement. Tigers give live birth, but most tigers do not give live birth; after all, only about half of all tigers are female, and not all of them give birth. Most mosquitoes do not carry West Nile virus, but mosquitoes carry West Nile virus. Given the trickiness of generics, it’s helpful to clarify them to the extent possible with more precise non-generic statements.
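The entailment facts rehearsed above can be put schematically (writing the generic “Fs are G” with an unanalyzed operator Gen; this notation is an illustrative gloss, not Leslie’s own):

```latex
\begin{align*}
\mathrm{Gen}\,x\,[Fx][Gx] &\;\Rightarrow\; \exists x\,(Fx \wedge Gx)
  && \text{(a generic entails its existential counterpart)}\\
\exists x\,(Fx \wedge Gx) &\;\not\Rightarrow\; \mathrm{Gen}\,x\,[Fx][Gx]
  && \text{(some tigers are three-legged; tigers are not three-legged)}\\
\forall x\,(Fx \rightarrow Gx) &\;\Rightarrow\; \mathrm{Gen}\,x\,[Fx][Gx]
  && \text{(typically; the converse fails)}\\
\text{Most }F\text{s are }G &\;\;\not\Leftrightarrow\; \mathrm{Gen}\,x\,[Fx][Gx]
  && \text{(neither direction holds: live birth, West Nile virus)}
\end{align*}
```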

Moreover, the proposition in question is modally qualified, which redoubles the difficulty of confirming or disconfirming it. What’s being asked is not simply whether people are virtuous, but whether they can be virtuous. It could turn out that even though no one is virtuous, it’s possible for people to become virtuous. This would, however, be extremely surprising. Unlike other unrealized possibilities, virtue is almost universally sought after, so if it isn’t widely actualized despite all that seeking, we have fairly strong evidence that it’s not there to be had.

In this paper, I propose a method for adjudicating the question whether people can be virtuous. This method, if sound, would help to resolve what’s come to be known as the situationist challenge to virtue theory, which over the last few decades has threatened both virtue ethics (Alfano 2013a, Doris 2002, Harman 1999) and virtue epistemology (Alfano 2011, 2013a, Olin & Doris 2014). The method is an application of David Lewis’s (1966, 1970, 1972) development of Frank Ramsey’s (1931) approach to the implicit definition of theoretical terms. The method needs to be tweaked in various ways to handle the difficulties canvassed above, but, when it is, an interesting answer to our question emerges: we face a theoretical tradeoff between, on the one hand, insisting that virtue is a robust property of an individual agent that’s rarely attained and perhaps even unattainable and, on the other hand, allowing that one person’s virtue might inhere partly in other people, making virtue at once more easily attained and more fragile.

The basic principle underlying the Ramsey-Lewis approach to implicit definition (often referred to as ‘Ramsification’) can be illustrated with a well-known story:

And the Lord sent Nathan unto David. And he came unto him, and said unto him, “There were two men in one city; the one rich, and the other poor. The rich man had exceeding many flocks and herds: But the poor man had nothing, save one little ewe lamb, which he had bought and nourished up: and it grew up together with him, and with his children; it did eat of his own meat, and drank of his own cup, and lay in his bosom, and was unto him as a daughter. And there came a traveler unto the rich man, and he spared to take of his own flock and of his own herd, to dress for the wayfaring man that was come unto him; but took the poor man’s lamb, and dressed it for the man that was come to him.” And David’s anger was greatly kindled against the man; and he said to Nathan, “As the Lord liveth, the man that hath done this thing shall surely die: And he shall restore the lamb fourfold, because he did this thing, and because he had no pity.” And Nathan said to David, “Thou art the man.”

Nathan uses Ramsification to drive home a point. He tells a story about an ordered triple of objects (two people and an animal) that are interrelated in various ways. Some of the first object’s properties (e.g., wealth) are monadic; some of the second object’s properties (e.g., poverty) are monadic; some of the first object’s properties are relational (e.g., he steals the third object from the second object); some of the second object’s properties are relational (e.g., the third object is stolen from him by the first object); and so on. Even though the first object is not explicitly defined as the X such that …, it is nevertheless implicitly defined as the first element of the ordered triple such that …. The big reveal happens when Nathan announces that the first element of the ordered triple, about whom his interlocutor has already made some pretty serious pronouncements, is the very person he’s addressing (the other two, for those unfamiliar with 2 Samuel 12, are Uriah and Bathsheba[1]).

The story is Biblical, but the method is modern. To implicitly define a set of theoretical terms (henceforth ‘T-terms’), one formulates a theory T in those terms and any other terms (henceforth ‘O-terms’) one already understands or has an independent theory of. Next, one writes T as a single sentence, such as a long conjunction, in which the T-terms t1…, tn occur (henceforth ‘T[t1…, tn]’ or ‘the postulate of T’). The T-terms are replaced by unbound variables x1…, xn, and then existentially quantified over to generate the Ramsey sentence of T, which states that T is realized, i.e., that there are objects x1…, xn that satisfy the Ramsey sentence. An ordered n-tuple that satisfies the Ramsey sentence is then said to be a realizer of the theory.
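The recipe just described can be compressed into a schema (a restatement of the construction, not a quotation from Ramsey or Lewis):

```latex
\begin{align*}
&\text{Postulate of } T: && T[t_1, \ldots, t_n]\\
&\text{Ramsey sentence of } T: && \exists x_1 \cdots \exists x_n\; T[x_1, \ldots, x_n]\\
&\text{Realizer of } T: && \text{any } \langle o_1, \ldots, o_n \rangle \text{ such that } T[o_1, \ldots, o_n]
\end{align*}
```

The Ramsey sentence says only that the theory is realized by something; which n-tuple, if any, actually realizes it is left as an empirical question.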

Lewis (1966) famously applied this method to folk psychology to argue for the mind-brain identity theory. Somewhat roughly, he argued that folk psychology can be treated as a theory in which mental-state terms are the T-terms. The postulate of folk psychology is identified as the conjunction of all folk-psychological platitudes (commonsense psychological truths that everyone knows, and everyone knows that everyone knows, and everyone knows that everyone knows that everyone knows, and so on). The Ramsey sentence of folk psychology is formed in the usual way, by replacing all mental-state terms (e.g., ‘belief’, ‘desire’, ‘pain’, etc.) with variables and existentially quantifying over those variables. Finally, one goes on to determine what, in the actual world, satisfies the Ramsey sentence; that is, one investigates what, if anything, is a realizer of the Ramsey sentence. If there is a realizer, then that’s what the T-terms refer to; if there is no realizer, then the T-terms do not refer. Lewis claims that brain states are such realizers, and hence that mental states are identical with brain states.

Lewis’s Ramsification method is attractive for a number of reasons.[2] First, it ensures that we don’t simply change the topic when we try to give a philosophical account of some phenomenon. If your account of the mind is wildly inconsistent with the postulate of folk psychology, then – though you may be giving an account of something interesting – you’re not doing what you think you’re doing. Second, it enables us to distinguish between the meaning of the T-terms and whether they refer. The T-terms mean what they would refer to, if there were such a thing. Whether they in fact refer is a distinct question. Third, and perhaps most importantly, Ramsification is holistic. The first half of the twentieth century bore witness to the fact that it’s impossible to give an independent account of almost any psychological phenomenon (belief, desire, emotion, perception) because what it means to have one belief is essentially bound up with what it means to have a whole host of other beliefs, as well as (at least potentially) a whole host of desires, emotions, and perceptions. Ramsification gets around this problem by giving an account of all of the relevant phenomena at once, rather than trying to chip away at them piecemeal.

Virtue theory stands to benefit from the application of Ramsification for all of these reasons. We want an account of virtue, not an account of some other interesting phenomenon (though we might want that too). We want an account that recognizes that talk of virtue is meaningful, even if there aren’t virtues. Most importantly, we want an account of virtue that recognizes the complexity of virtue and character – the fact that virtues are interrelated in a whole host of ways with occurrent and dispositional mental states, with other virtues, with character more broadly, and so on.

Whether Lewis is right about brains is irrelevant to our question, but his methodology is crucial. What I want to do now is to show how the same method, suitably modified, can be used to implicitly define virtue-terms, which in turn will help us to answer the question whether people can be virtuous. For reasons that will become clear as we proceed, the T-terms of virtue theory as I construe it here are ‘person’, ‘virtue’, ‘vice’, the names of the various virtues (e.g., ‘courage’, ‘generosity’, ‘curiosity’), the names of their congruent affects (e.g., ‘feeling courageous’, ‘feeling generous’, ‘feeling curious’), the names of the various vices (e.g., ‘cowardice’, ‘greed’, ‘intellectual laziness’), and the names of their congruent affects (e.g., ‘feeling cowardly’, ‘feeling greedy’, ‘feeling intellectually lazy’). The O-terms are all other terms, importantly including terms that refer to attitudes (e.g., ‘belief’, ‘desire’, ‘anger’, ‘resentment’, ‘disgust’, ‘contempt’, ‘respect’), mental processes (e.g., ‘deliberation’), perceptions and perceptual sensitivities, behaviors, reasons, situational features (e.g., ‘being alone’, ‘being in a crowd’, ‘being monitored’), and evaluations (e.g., ‘praise’ and ‘blame’).

Elsewhere (Alfano 2013), I have argued for an intuitive distinction between high-fidelity and low-fidelity virtues. High-fidelity virtues, such as honesty, chastity, and loyalty, require near-perfect manifestation in undisrupted conditions. Someone only counts as chaste if he never cheats on his partner when cheating is a temptation. Low-fidelity virtues, such as generosity, tact, and tenacity, are not so demanding. Someone might count as generous if she were more disposed to give than not to give when there was sufficient reason to do so; someone might count as tenacious if she were more disposed to persist than not to persist in the face of adversity. If this is on the right track, the postulate of virtue theory will recognize the distinction. For instance, it seems to me at least that almost everyone would say that helpfulness is a low-fidelity virtue whereas loyalty is a high-fidelity virtue. Here, then, are some families of platitudes about character that are candidates for the postulate of virtue theory:


(A) The Virtue / Affect Family

(a1) If a person has courage, then she will typically feel courageous when there is sufficient reason to do so.

(a2) If a person has generosity, then she will typically feel generous when there is sufficient reason to do so.

(a3) If a person has curiosity, then she will typically feel curious when there is sufficient reason to do so.




(an) ….


(C) The Virtue / Cognition Family

(c1) If a person has courage, then she will typically want to overcome threats.

(c2) If a person has courage, then she will typically deliberate well about how to overcome threats and reliably form beliefs about how to do so.




(cn) ….


(S) The Virtue / Situation Family

(s1) If a person has courage, then she will typically be unaffected by situational factors that are neither reasons for nor reasons against overcoming a threat.

(s2) If a person has generosity, then she will typically be unaffected by situational factors that are neither reasons for nor reasons against giving resources to someone.

(s3) If a person has curiosity, then she will typically be unaffected by situational factors that are neither reasons for nor reasons against investigating a problem.






(E) The Virtue / Evaluation Family

(e1) If a person has courage, then she will typically react to threats in ways that merit praise.

(e2) If a person has generosity, then she will typically react to others’ needs and wants in ways that merit praise.

(e3) If a person has curiosity, then she will typically react to intellectual problems in ways that merit praise.






(B) The Virtue / Behavior Family

(b1) If a person has courage, then she will typically act so as to overcome threats when there is sufficient reason to do so.

(b2) If a person has generosity, then she will typically act so as to benefit another person when there is sufficient reason to do so.

(b3) If a person has curiosity, then she will typically act so as to solve intellectual problems when there is sufficient reason to do so.






(P) The Virtue Prevalence Family

(p1) Many people commit acts of courage.

(p2) Many people commit acts of generosity.

(p3) Many people commit acts of curiosity.

(p4) Many people are courageous.

(p5) Many people are generous.

(p6) Many people are curious.






(I) The Cardinality / Integration Family

(i1) Typically, a person who has modesty also has humility.

(i2) Typically, a person who has magnanimity also has generosity.

(i3) Typically, a person who has curiosity also has open-mindedness.






(D) The Desire / Virtue Family

(d1) Typically, a person desires to have courage.

(d2) Typically, a person desires to have generosity.

(d3) Typically, a person desires to have curiosity.






(F) The Fidelity Family

(f1) Chastity is high-fidelity.

(f2) Honesty is high-fidelity.

(f3) Creativity is low-fidelity.






Each platitude in each family is meant to be merely illustrative. Presumably they could all be improved somewhat, and there are many more such platitudes. Moreover, each family is itself just an example. There are many further families describing the relations among vice, affect, cognition, situation, evaluation, and behavior, as well as families that make three-way rather than two-way connections (e.g., “If a person is courageous, then she will typically act so as to overcome threats when there is sufficient reason to do so and because she feels courageous.”). For the sake of simplicity, though, let’s assume that the families identified above contain all and only the platitudes relevant to the implicit definition of virtues. Ramsification can now be performed in the usual way. First, create a big conjunction (henceforth, simply the ‘postulate of virtue theory’). Next, replace each of the T-terms in the postulate of virtue theory with an unbound variable, then existentially quantify over those variables to generate the Ramsey sentence of virtue theory. Finally, check whether the Ramsey sentence of virtue theory is true and – if it is – what its realizers are.

After this preliminary work has been done, we’re in a position to see more clearly the problem raised by the situationist challenge to virtue theory. Situationists argue that there is no realizer of the Ramsey sentence of virtue theory. Moreover, this is not for lack of effort. Indeed, one family of platitudes in the Ramsey sentence specifically states that, typically, people desire to be virtuous; it’s not as if no one has yet tried to be or become courageous, generous, or curious.[3] In this paper, I don’t have space to canvass the relevant empirical evidence; interested readers should see my (2013a and 2013b). Nevertheless, the crucial claim – that the Ramsey sentence of virtue theory is not realized – is not an object of serious dispute in the philosophical literature.

One very common response to the situationist challenge from defenders of virtue theory (and virtue ethics in particular) is to claim that virtues are actually quite rare, directly contradicting the statements in the virtue prevalence family. I do not think this is the best response to the problem, as I explain below, but the point remains that all serious disputants agree that the Ramsey sentence is not realized.

As described above, Ramsification looks like a simple, formal exercise. Collect the platitudes, put them into a big conjunction, perform the appropriate substitutions, existentially quantify, and check the truth-value of the resulting Ramsey sentence (and the referents of its bound variables, if any). But there are several opportunities for a critic to object as the exercise unfolds.

One difficulty that arises for some families, such as the desire / virtue family, is that they involve T-terms within the scope of intentional attitude verbs.[4] Since existential quantification into such contexts is blocked by opacity, such families cannot be relied on to define the T-terms, though they can be used to double-check the validity of the implicit definition once the T-terms are defined.[5]

Another difficulty is that this methodology presupposes that we have an adequate understanding of the O-terms, which in this case include terms that refer to attitudes, mental processes, perceptions and perceptual sensitivities, behaviors, reasons, situational features, and evaluations. One might be dubious about this presupposition. I certainly am. However, the fact that philosophy of mind and metaethics are works-in-progress should not be interpreted as a problem specifically for my approach to virtue theory. Any normative theory that relies on other branches of philosophy to figure out what mental states and processes are, and what reasons are, can be criticized in the same way.

A third worry is that the list of platitudes contains gaps (e.g., a virtue acquisition family about how various traits are acquired). Conversely, one might think that it has gluts (e.g., unmotivated commitment to virtue prevalence). To overcome this pair of worries, we need a way of determining what the platitudes are. Perhaps surprisingly, there is no precedent for this in the philosophy of mind, despite the fact that Ramsification is often invoked as a framework there.[6] This may be because it’s supposed to be obvious what the platitudes are. Here’s Frank Jackson’s flippant response to the worry: “I am sometimes asked—in a tone that suggests that the question is a major objection—why, if conceptual analysis is concerned to elucidate what governs our classificatory practice, don’t I advocate doing serious opinion polls on people’s responses to various cases? My answer is that I do—when it is necessary. Everyone who presents the Gettier cases to a class of students is doing their own bit of fieldwork, and we all know the answer they get in the vast majority of cases” (1998, 36–37). After all, according to Lewis, everyone knows the platitudes, and everyone knows that everyone knows them, and everyone knows that everyone knows that everyone knows them, and so on. Sometimes, however, the most obvious things are the hardest to spot. It thus behooves us to at least sketch a method for carrying out the first step of Ramsification: identifying the platitudes. Call this pre-Ramsification.

Here’s an attempt at spelling out how pre-Ramsification should work: start by listing off a large number of candidate platitudes. These can be all of the statements one would, in a less-responsible, Jacksonian mood, have merely asserted were platitudes. It can also include statements that seem highly likely but perhaps not quite platitudes. Add to the pool of statements some that seem, intuitively, to be controversial, as well as some that seem obviously false; these serve as anchors in the ensuing investigation. Next, collect people’s responses to these statements. Several sorts of responses would be useful, including subjective agreement, social agreement, and reaction time. For instance, prompt people with the statement, “Many people are honest,” and ask to what extent they agree and to what extent they think others would agree. Measure their reaction times as they answer both questions. High subjective and social agreement, paired with fast reaction times, is strong but defeasible evidence that a statement is a platitude. This is a bit vague, since I haven’t specified what counts as “high” agreement or “fast” reaction times, but there are precedents in psychology for setting these thresholds. Moreover, this kind of pre-Ramsification wouldn’t establish dispositively what the platitudes are, but then, dispositive proof only happens in mathematics.
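The classification step of pre-Ramsification can be sketched in a few lines of code. This is a hypothetical toy model, not an implementation from the paper: the cutoff values are placeholders, since the text deliberately leaves “high” agreement and “fast” reaction times unspecified.

```python
def is_platitude(subjective, social, reaction_ms,
                 agree_cutoff=0.8, rt_cutoff=1500):
    """Classify a candidate statement as a platitude.

    subjective, social: mean agreement ratings on a 0-1 scale
    (how much respondents agree, and how much they expect
    others to agree).
    reaction_ms: mean reaction time in milliseconds.
    The cutoffs are hypothetical placeholders: strong but
    defeasible evidence requires high agreement on both
    measures plus fast responses.
    """
    return (subjective >= agree_cutoff
            and social >= agree_cutoff
            and reaction_ms <= rt_cutoff)

# "Many people are honest": high agreement, fast responses
print(is_platitude(0.9, 0.85, 900))   # True
# An anchor item that is obviously false: low agreement
print(is_platitude(0.2, 0.3, 800))    # False
```

On this sketch, an anchor statement fails either by drawing low agreement or by drawing slow, effortful assent, which is exactly the role anchors play in the proposed survey design.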

It’s far beyond the scope of this short paper to show that pre-Ramsification works in the way I suggest, or that it verifies all and only the families identified above. For now, let’s suppose that it does, i.e., that all of the families proposed above were validated by pre-Ramsification. Let’s also suppose that we have strong evidence that the Ramsey sentence of virtue theory is not realized (a point that, as I mentioned above, is not seriously contested). How should we then proceed?

Lewis foresaw that, in some cases, the Ramsey sentence for a given field would be unrealized, so he built in a way of fudging things: instead of generating the postulate by taking the conjunction of all of the platitudes, one can generate a weaker postulate by taking the disjunction of each of the conjunctions of most of the platitudes. For example, if there were only five platitudes, p, q, r, s, and t, then instead of the postulate’s being p&q&r&s&t, it would be (p&q&r&s) v (p&q&r&t) v … v (q&r&s&t). In the case of virtue theory, we could take the disjunction of each of the conjunctions of all but one of the families of platitudes. Alternatively, we could exclude a few of the platitudes from within each family.
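The logical relationship between the un-fudged and fudged postulates can be made vivid with a toy model (purely illustrative: each platitude is reduced to a bare truth value, which abstracts away from everything interesting about its content):

```python
from itertools import combinations

def postulate(platitudes):
    """Un-fudged postulate: the conjunction of all the platitudes."""
    return all(platitudes)

def fudged_postulate(platitudes, k):
    """Fudged postulate: the disjunction of the conjunctions of
    every size-k subset of the platitudes, where k counts as
    'most' of them."""
    return any(all(subset) for subset in combinations(platitudes, k))

# Five toy platitudes, one of which (say, a virtue-prevalence
# platitude) turns out to be false:
p, q, r, s, t = True, True, True, True, False
print(postulate([p, q, r, s, t]))            # False: the full conjunction fails
print(fudged_postulate([p, q, r, s, t], 4))  # True: some 4-of-5 conjunction holds
```

Since every size-k conjunction is a disjunct of the fudged postulate, the fudged version is logically weaker than the full conjunction, which is just the point made in the next paragraph: fudging makes realization easier, sometimes too easy.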

Fudging in this way makes it easier for the Ramsey sentence to be realized, since the disjunction of conjunctions of most of the platitudes is logically weaker than the straightforward conjunction of all of them. Fudging may end up making it too easy, though, such that there are multiple realizers of the Ramsey sentence. When this happens, it’s up to the theorist to figure out how to strengthen things back up in such a way that there is a unique realizer.

The various responses to the situationist challenge can be seen as different ways of doing this. Everyone recognizes that the un-fudged Ramsey sentence of virtue theory is unrealized. But a sufficiently fudged Ramsey sentence is bound to be multiply realized. It’s a theoretical choice exactly how to play things at this point. More traditional virtue theorists such as Joel Kupperman (2009) favor a fudged version of the Ramsey sentence wherein the virtue prevalence family has been dropped. John Doris (2002) favors a fudged version wherein the virtue / situation and virtue / integration families have been dropped. I (2013) favor a fudged version wherein the virtue / situation family has been dropped and a virtue / social construction family has been added in its place. The statements in the latter family have to do with the ways in which (signals of) social expectations implicitly and explicitly influence behavior. The main idea is that having a virtue is more like having a title or social role (e.g., you’re curious because people signal to you their expectations of curiosity) than like having a basic physical or biological property (e.g., being over six feet tall). Christian Miller (2013, 2014) drops the virtue prevalence family and adds a mixed-trait prevalence family in its place, which states that many people possess traits that are neither virtues nor vices, such as the disposition to help others in order to improve one’s mood or avoid sliding into a bad mood.

In this short paper, I don’t have the space to argue against all alternatives to my own proposal. Instead, I want to make two main claims. First, the “virtue is rare” dodge advocated by Kupperman and others who drop the virtue prevalence family has costs associated with it. Second, those costs may be steeper than the costs associated with my own way of responding to the situationist challenge.

Researchers in personality and social psychology have documented for decades the tendency of just about everybody to make spontaneous trait inferences, attributing robust character traits on the basis of scant evidence (Ross 1977; Uleman et al. 1996). This indicates that people think that character traits (virtues, vices, and neutral traits, such as extroversion) are prevalent. Furthermore, in a forthcoming paper (Alfano, Higgins, & Levernier forthcoming), I show that the vast majority of obituaries attribute multiple virtues to the deceased. Not everyone is eulogized in an obituary, of course, but most are (about 55% of Americans, by my calculations). Not all obituaries are sincere, but presumably many are. Absent reason to think that people about whom obituaries are written differ greatly from people about whom they are not written, we can treat this as evidence that most people think that the people they know have multiple virtues. But of course, if most people’s relations are virtuous, it follows that most people are virtuous. In other words, the virtue-prevalence family is deeply ingrained in folk psychology and folk morality.

Social psychologists think that people are quick to attribute virtues. My own work on obituaries suggests the same. What do philosophers say? Though there are some (Russell 2009) who claim with a shrug that virtue is rare or even non-existent, this is not the predominant opinion. Alasdair MacIntyre (1984, p. 199) claims that “without allusion to the place that justice and injustice, courage and cowardice play in human life very little will be genuinely explicable.” Philippa Foot (2001), following Peter Geach (1977), argues that certain generic statements characterize the human form of life, and that from these generic statements we can infer what humans need and hence will typically have. For the sake of comparison, consider what she says about a different life form, the deer. Foot first points out that the deer’s form of defense is flight. Next, she claims that a certain normative statement follows, namely, that deer are naturally or by nature swift. This is not to say that every deer is swift; some are slow. Instead, it’s a generic statement that characterizes the nature of the deer. Finally, she says that any deer that fails to be swift – that fails to live up to its nature – is “so far forth defective” (p. 34). The same line of reasoning that she here applies to non-human animals is meant to apply to human animals as well. As she puts it, “Men and women need to be industrious and tenacious of purpose not only so as to be able to house, clothe, and feed themselves, but also to pursue human ends having to do with love and friendship. They need the ability to form family ties, friendships, and special relations with neighbors. They also need codes of conduct. And how could they have all these things without virtues such as loyalty, fairness, kindness, and in certain circumstances obedience?” (pp. 44-5, emphasis mine).

In light of these sorts of claims, let’s consider again the defense offered by some virtue ethicists that virtue is rare, or even impossible to achieve. If virtues are what humans need, but the vast majority of people don’t have them, one would have thought that our species would have died out long ago. Consider the analogous claim for deer: although deer need to be swift, the vast majority of deer are galumphers. Were that the case, presumably they’d be hunted down and devoured like a bunch of tasty venison treats. Or consider another example of Foot’s: she agrees with Geach (1977) that people need virtues like honeybees need stingers. Does it make sense for someone with this attitude to say that most people lack virtues? That would be like saying that, even though bees need stingers, most lack stingers. It’s certainly odd to claim that the majority – even the vast majority – of a species fails to fulfill its own nature. That’s not a contradiction, but it is a cost to be borne by anyone who responds to the situationist challenge by dropping the virtue prevalence family.

One might respond on Foot’s behalf that human animals are special: unlike the other species, we have natures that are typically unfulfilled. That would be an interesting claim to make, but I am not aware of anyone who has defended it in print.[7] I conclude, then, that dropping the virtue prevalence family is a significant cost to revising the postulate.

But is it a more significant cost than the one imposed on me by replacing the virtue / situation family with a virtue / social construction family? I think it is. This comparative claim is of course hard to adjudicate, so I will rest content merely to emphasize the strength of the virtue / prevalence family.

What would it look like to fudge things in the way I recommend? Essentially, one would end up committed to a version of the hypothesis of extended cognition, a variety of active externalism in the family of the extended mind hypothesis. Clark & Chalmers (1998) argued that the vehicles (not just the contents) of some mental states and processes extend beyond the nervous system and even the skin of the agent whose states they are.[8] If my arguments are on the right track, virtues and vices sometimes extend in the same way: the bearers of someone’s moral and intellectual virtues sometimes include asocial aspects of the environment and (more frequently) other people’s normative and descriptive expectations. What it takes (among other things) for you to be, for instance, open-minded, on this view is that others think of you as open-minded and signal those thoughts to you. When they do, they prompt you to revise your self-concept, to want to live up to their expectations, to expect them to reward open-mindedness and punish closed-mindedness, to reciprocate displays of open-mindedness, and so on. These are all inducements to conduct yourself in an open-minded way, which they will typically notice. When they do, their initial attribution will be corroborated, leading them to strengthen their commitment to it and perhaps to signal that strengthening to you, which in turn is likely to further induce you to conduct yourself in open-minded ways, which will again corroborate their judgment of you, and so on. Such feedback loops are, on my view, partly constitutive of what it means to have a virtue.[9] The realizer of the fudged Ramsey sentence isn’t just what’s inside the person who has the virtue but also further things outside that person.

So, can people be virtuous? I hope it isn’t too disappointing to answer with, “It depends on what you mean by ‘can’, ‘people’, and ‘virtuous’.” If we’re concerned only with abstract possibility, perhaps the answer is affirmative. If we are concerned more with the proximal possibility that figures in people’s current deliberations, plans, and hopes, we have reason to worry. If we only care whether more than zero people can be virtuous, the existing, statistical, empirical evidence is pretty much useless. If we instead treat ‘people’ as a generic referring to human animals (perhaps a majority of them, but at least a substantial plurality), such evidence becomes both important and (again) worrisome. If we insist that being virtuous is something that must inhere entirely within the agent who has the virtue, then evidence from social psychology is damning. If instead we allow for the possibility of external character, there is room for hope.[10]


[1] Nathan is also using an extended metaphor. My point is clear nevertheless.

[2] An alternative is the “psycho-functionalist” method, which disregards common sense in favor of (solely) highly corroborated scientific claims. See Kim (2011) for an overview. For my purposes, psycho-functionalism is less appropriate, since (among other things) it is more in danger of changing the topic.

[3] I seem to be in disagreement on this point with Christian Miller (this volume), who worries that people may not be motivated to be or become virtuous. In general, I’m even more skeptical than Miller about the prospects of virtue theory, but in this case I find myself playing the part of the optimist.

[4] I am here indebted to Gideon Rosen.

[5] It might also be possible to circumvent this difficulty, which anyway troubles Lewis’s application of Ramsification to the mind-brain identity theory, by using only de re formulations of the relevant statements. See Fitting & Mendelsohn (1999) for a discussion of how to do so.

[6] Experimental philosophers have started to fill this gap, but not in any systematic or consensus-based way.

[7] Micah Lott (personal communication) has told me that he endorses this claim, though he has a related worry. In short, his concern is to explain how, given the alleged rarity of virtue, most people manage to live decent enough lives.

[8] For an overview of the varieties of externalism, see Carter et al. (forthcoming).

[9] I spell out this view in more detail in Alfano & Skorburg (forthcoming). For a treatment of the feedback-loops model in the context of the extended mind rather than the character debate, see Palermos (forthcoming).

[10] I am grateful to J. Adam Carter, Orestis Palermos, and Micah Lott for comments on a draft of this paper.

My facehole talks

I did an interview with Bob Talisse (Vanderbilt) about Character as Moral Fiction for the New Books in Philosophy podcast.  Recording available (for free! 😉) here.

draft of Moral Psychology, Chapter 1: preferences

As always, comments, suggestions, questions, criticisms, etc. are most welcome….

“We are strangers to ourselves.”

~ Friedrich Nietzsche, On the Genealogy of Morals, Preface, section 1


1 The function of preferences: prediction, explanation, planning, and evaluation


Among our diverse mental states, some are best understood as representing how the world is. If I know that wine is made from grapes, I correctly represent the world as being a certain way. If I think that Toronto is the capital of Canada, I incorrectly represent the world as being a certain way (it’s actually Ottawa). Other mental states are best understood as moving us to act, react, or forbear in various ways. I want to see the Grand Canyon before I die. I desire to know how to speak Spanish. I prefer to use chopsticks rather than a fork to eat sushi. I intend to keep my promises. I aim to be fair. I love to hear New Orleans-style brass band music. Depending on their longevity, their intensity, their specificity, their malleability, and their idiosyncrasy, we use different words to describe these mental states: values, drives, choices, appraisals, volitions, cravings, goals, reasons, purposes, passions, sentiments, longings, appetites, aspirations, attractions, motives, urges, needs, acts of will. Such mental states are sometimes referred to as pro-attitudes, and related states that move someone to avoid, escape, or prevent a particular state of affairs are correspondingly called con-attitudes.

If you put together an agent’s representations of how the world is and the mental states that move her to act, you have some hope of predicting and explaining her actions. Suppose, for instance, that you know that I have a free weekend, that I deeply yearn to see the Grand Canyon, and that I have some spare cash. What am I going to do? It’s not unreasonable to predict that I will purchase a plane ticket (or rent a car) and go to Arizona. Now suppose that you know that my comprehension of geography is pretty weak. I still want to see the Grand Canyon, but I mistakenly think that it’s in Chihuahua. (Oops – nobody’s perfect). What do you think I’ll do now? It’s not unreasonable to predict that I’ll still purchase a plane ticket or rent a car, but that instead of going to Arizona I’ll end up in Mexico (and pretty frustrated!). Someone’s representations and purposes combine to lead them to act. If you know what someone’s representations and purposes are, you can to some extent predict what they’ll do.

In the same vein, knowing what someone’s representations and purposes are puts you in a position to explain their actions. Suppose you see me stand up, walk across the room, open a door, and walk through the doorway. On the door, you notice the following icon:

Figure 1


Why did I do what I did? A plausible explanation isn’t too hard to assemble. If you saw the sign indicating that the door led to the men’s bathroom, then presumably I did too: so I probably had a relevant representation of what was on the other side of the door. What desire (preference, goal, intention, need) might I have that would rationalize my behavior? The most obvious suggestion is that I wanted to relieve myself. Of course, it’s possible that I went to the men’s bathroom to participate in a drug deal, to conceal myself while I had a good long cry, or for some other reason. But if you’re right in thinking that I wanted to urinate, then you’ve successfully explained my action. If you know what someone’s representations and purposes are, you can to some extent explain what they’ve done.

To predict and explain other people’s actions, we need some idea of what they prefer (want, desire, value, need). But that’s not all that preferences are for. Preferences also figure in planning and evaluation, and when they’re structured appropriately, they contribute to the agent’s autonomy. Think about your best friend. Imagine that her birthday is in a week. You love your friend, and want to do something special for her birthday. You don’t need to predict your own action here, nor do you need to explain it. Your task now is to plan: in the next week, what can you do for your friend that will simultaneously please and surprise her without emptying your bank account? To give your friend a special birthday present, you need to know what she enjoys (or would enjoy, if she hasn’t experienced it yet). To be motivated to give your friend a special birthday present in the first place, you need to want to do something that she wants. In philosophical jargon, you must have a higher-order desire – a desire about another desire (hers). You want to give her something that she wants.

It’s remarkable how adept people can be at solving this sort of problem, which involves the sort of recursively embedded agent-patient relations discussed in the introduction. Think about it. To plan a good gift, you need to know now not just what your friend currently wants but what she will want in the future. You can’t just give her what you yourself want or what you will want in a week. You can’t give her what she wants now but won’t want in a week. To successfully give your friend a good present, you have to figure out in advance what she’ll want in a week.

The same constraints apply when you plan for yourself. Think about choosing your major in college. What do you want to specialize in? Musicology is interesting, but will you still be interested in it three years from now? Will it set you up to earn a decent living (something you’ll presumably want in five, ten, and twenty years)? Marketing might earn you a decent living, but will you find it boring (not want to do it, or even want not to do it) after a few years? Are you going to want to have children? In that case, you may need more income than you would if you didn’t want (and didn’t have) children. Living a sensible life requires planning. You need to make plans that affect your friends, your family, your colleagues, and your rivals. You also need to make plans for yourself. Doing this successfully requires intimate knowledge of (or at least some pretty good guesses about) your own and others’ future desires, needs, and preferences.

Thus, preferences figure in the prediction, explanation, and planning of action. They’re also important when we morally evaluate action. I reach out violently and knock you over, causing you some pain and surprising you more than a little. What should you think of my action? It depends in part on what moved me to do it. If I’ve shoved you because I want to hurt you, if I’m engaged in an assault, you’re going to think I’m doing something wrong. If I’m not depraved, I’ll also feel guilty. If I’m just clumsily gesturing at a pretty tree over there, I should probably know better, but you’ll temper your anger. I may not feel guilty, but I’ll probably be embarrassed or even ashamed. If I’m knocking you out of the way of a biker who’s zooming down the sidewalk towards you, perhaps you’ll feel grateful, while I’ll feel relieved or even proud.

What marks the difference between your reactions to my action? What marks the difference between my own assessments of it after the fact? It’s not that my shoving you and your falling hurts more or less in one case or the other. Instead, what leads you to evaluate my action as wrong, misguided, or benevolent is the pro- (or con-)attitude that moves me to act. Likewise, what leads me to feel guilt, embarrassment, or relief is the pro- (or con-)attitude that moved me to act. If I want to hurt you, if I want to do something to you that you prefer not to happen, you’ll say that I’ve acted wrongly. If my aim is to do something relatively harmless (something you neither prefer nor disprefer) like pointing out a feature of the environment, you’ll perhaps think I’m a klutz, but you won’t think I’ve done something morally wrong. If I’m trying to prevent you from being run down by an out-of-control cyclist, if I want to do something to you that (once you understand it) you prefer that I do, you’ll presumably think I’ve done something morally good.

Preferences are important and versatile. They help us predict and explain actions. They help us exercise agency on our own behalf and for those we care about. They help us evaluate the actions of others and ourselves. In the context of moral psychology, there’s one last thing that preferences are good for: autonomy. According to many philosophers, such as Harry Frankfurt (1971, 1992), a person is autonomous or free to the extent that she wants what she wants to want, or at least does not want what she would prefer not to want. An autonomous agent is someone whose will has a characteristic structure. This idea is discussed in more depth in chapter 2.

As I mentioned above, we have dozens of terms to refer to pro- and con-attitudes. But the title of this chapter is ‘Preferences’. Why? Preferences are sufficiently fine-grained to help in the prediction, explanation, and evaluation of action in the face of tradeoffs. Other motivating attitudes lack this specificity. Consider, for instance, values.[1] At a high enough level of abstraction, everyone values the same ten things: power, achievement, pleasure, stimulation, self-direction, universalism, benevolence, tradition, conformity, and security (Schwartz 2012). If you want to know what someone will do, why someone did something, or whether someone deserves praise or blame for acting as they did, knowing that they accept these values gives you no purchase. Qualitatively weighting values doesn’t improve things much. Consider someone who values pleasure “somewhat,” stimulation “a lot,” and security “quite a bit.” What will she do? It’s hard to say. Why’d she go to the punk rock show? It’s hard to say. Does she merit some praise for engaging in a pleasant conversation with a stranger at the coffee shop? It’s hard to say.

Preferences set up a rank ordering of states of affairs. This is easiest to see in the case of tradeoffs. Suppose two desires are moving you to act. You’re exhausted after a long day, so you want to take a nap. But your friend just texted to suggest meeting up for a drink at a local bar, and you want to join her. We can represent this tradeoff with the following table:


                      Nap        Don’t nap
  Join friend          A             B
  Don’t join friend    C             D

Table 1: Choice matrix


In this simplified choice matrix, there are four ways things could turn out. You could take a nap and join your friend (A); you could join your friend without taking a nap (B); you could take a nap without joining your friend (C); and you could neither nap nor join your friend (D). If you have a complete set of preferences over these options, one of them is optimal for you, another is in second place, another is in third place, and the final one is in last place. Presumably A is your top outcome and D is your bottom outcome. Unfortunately, although you most prefer A (i.e., you prefer it to B, C, and D), it’s impossible. So you’re in a position where you need to weigh a tradeoff. This is where preferences become important. If you simply value the nap and value socializing with your friend, there’s no saying whether you’ll go with B or C. But if you prefer socializing to napping, we can predict that you’ll opt for B over C. By the same token, if you prefer napping to socializing, we can predict that you’ll opt for C over B.

So preferences are especially helpful in predicting behavior. They’re also great for explaining and evaluating behavior. A useful rule of thumb for explaining behavior is that people act in such a way as to bring about the highest-ranked outcome they think they can achieve. Imagine someone who prefers A to B, B to C, C to D, D to E, E to F, F to G, and G to H. She acts in such a way as to produce C. How can we explain this? Well, if we posit that she believes that A and B are out of the question (perhaps she takes them to be impossible or at least extremely difficult to achieve), then we can explain her behavior by saying that she went with the best outcome available to her.
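This rule of thumb is mechanical enough to sketch in a few lines of code. The following Python fragment is illustrative only (the outcome labels and the ranking are stand-ins, not data): given an ordinal ranking and a set of outcomes the agent takes to be achievable, the predicted choice is simply the highest-ranked feasible outcome.

```python
# Ordinal preference ranking, best first. For the nap/bar example:
# A = nap and join friend, B = join friend only, C = nap only, D = neither.
RANKING = ["A", "B", "C", "D"]

def predicted_choice(feasible, ranking=RANKING):
    """Return the highest-ranked outcome the agent takes to be achievable."""
    for outcome in ranking:
        if outcome in feasible:
            return outcome
    return None  # no feasible outcome appears in the ranking

# A is impossible (you can't nap at home and drink at the bar at once),
# so someone with this ranking is predicted to choose B over C and D.
```

The same function works retrospectively as an explanation schema: if the agent produced an outcome ranked below her top option, positing that she believed the higher-ranked outcomes to be out of the question rationalizes her choice as the best one available to her.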


2 The role of preferences in moral psychology


We’re now in a position to see how preferences relate to the five core concepts of moral psychology (patiency, agency, sociality, reflexivity, and temporality).


2.1 The role of preferences in patiency


Even if no one else is involved, even if you’re not exercising agency, your preferences matter for your patiency. According to one attractive theory of personal well-being, what it means for your life to go well is that your preferences are satisfied (Brandt 1972, 1983; Heathwood 2006). Your preferences might be satisfied through your own agency. You might prefer, among other things, to exercise agency in pursuit of some goal or other. Your preferences might be satisfied because you are involved in social relations with other people. Even so, there will be cases in which what you prefer happens or fails to happen simply by luck, accident, or unanticipated causal necessity. Fundamentally, then, well-being is associated with patiency, with what happens to you.

The preference-satisfaction theory of well-being is attractive for several reasons. It explains why one aspect of morality is intrinsically motivating. If my well-being is a matter of whether my preferences are satisfied, then I can’t help caring about my well-being. Preferences are a way of caring about things. Of course I care about what I care about. The preference-satisfaction theory of well-being also accounts for cases in which hedonic (pleasure-based) theories of well-being fail. Sometimes, it seems like my life goes no better, and may even go worse, when I experience some pleasures. I struggle with alcohol dependency and end up drinking to excess. While I enjoy the drinks, I prefer to stop. Arguably, I’m worse rather than better off because, even though I experience pleasure, my preferences are frustrated. Similarly, sometimes it seems like your life goes no worse, and may even go better, when you experience some pains. You exercise vigorously at the gym. You force yourself to study extra hard for an exam. You watch a frightening or depressing or horrifying movie. You eat a meal spiced with more than a little wasabi. These are painful experiences, but in each case you prefer to suffer through the pain. Arguably, you’re better rather than worse off because, even though you experience pain, your preferences are satisfied.

The preference-satisfaction theory of well-being also provides a way to understand well-being comparatively. People don’t just have good or bad lives. They have better or worse lives. Someone whose life is going poorly could be even worse off. Someone whose life is going well could be even better off. This distinction maps nicely onto the idea of a preference ranking. Since preferences can in principle put all the ways the world could be in order from best to worst, it’s possible to identify someone’s well-being with how far up their ranking things actually are. If you prefer A to B, B to C, C to D, D to E, E to F, F to G, and G to H, and the actual state of affairs is C, then your level of well-being is better than many ways it could be but not maximal. If things change to B, your well-being improves one notch; if things change to D, your well-being goes down a notch.

The most plausible version of the preference-satisfaction theory of well-being claims that what really contributes to your well-being is not the extent to which your actual preferences are satisfied but the extent to which your better-informed preferences are satisfied. Why? And what does it mean for preferences to be informed? Imagine that you’re about to take a bite of a delicious chile relleno. It’s your favorite dish. The cheese is perfectly melted. The poblanos are fresh. The tomatoes are local. Everything is perfect, with one little exception: unbeknownst to you, the cook accidentally used rat poison rather than salt. If you eat these chiles, you’re going to end up in the hospital. But you don’t know this; in fact, you have no clue. It won’t improve your life to eat those chiles. It’ll make your life (much!) worse.

Philosophers recognize this, and that’s why they say that your well-being is a function not of what you want but of what you would want if you were better informed. If you knew that the chiles rellenos were poisoned, you would prefer quite strongly not to eat them, so even though you currently prefer to eat them, doing so would detract from rather than contribute to your well-being.

Knowledge of potential poisons is clearly not the only thing you need to have informed preferences, so philosophers of well-being argue that your better-informed preferences are your fully-informed preferences. According to this approach, the preferences that determine someone’s well-being are not the preferences that person actually has, but the ones they would have if they were fully informed. Specifying what full information means in a way that doesn’t collapse into omniscience is tricky, but one attractive suggestion is to take into account “all those knowable facts which, if [you] thought about them, would make a difference to [your] tendency to act” (Brandt 1972, p. 682) or “everything that might make [you] change [your] desires” (Brandt 1983, p. 40) – a process Richard Brandt dubbed cognitive psychotherapy.[2]


2.2 The role of preferences in agency, reflexivity, and temporality


I briefly mentioned the role of preferences in agency, reflexivity, and temporality above. Several points are relevant. First, to act at all, you must have pro-attitudes like preferences. Without states that move you to act, you’d never act in the first place, never exercise agency at all. Second, to act in the face of tradeoffs, you must have some way of ranking potential outcomes. That’s what preferences do: they put potential outcomes in a rank order. Third, to be the sort of agent that the vast majority of adult humans are, you need to engage in long-term plans and projects. This involves having some idea in advance what your future self’s preferences will or might be. It involves having temporally extended preferences, so that you want now for your future preferences, whatever they end up being, to be satisfied. It involves thinking of that future person as yourself and therefore having a special regard for him or her. If your future self mattered to you no more or less than some random stranger, long-term projects would be pretty foolish.

To be a recognizably human agent, your preferences must not violate certain constraints. Put less dramatically, your agency is undermined to the extent that your preferences violate certain constraints. You’ll fail to act successfully to the extent that you suffer from preference reversals (preferring A to B one moment and B to A the next moment). You’ll fail to act successfully if you have cyclical preferences (preferring A to B, B to C, but C to A). You’ll fail to act successfully over time if you cannot rely on your current representation of your future preferences to be largely accurate (thinking that you’ll prefer A to B when in fact you’ll prefer B to A).


2.3 The role of preferences in sociality


We tend to think that people deserve praise and blame only, or at least primarily, for their motivated actions. As I pointed out above, if someone inadvertently brings about a consequence, we tend to withhold or at least temper praise (even if the consequence was good) and blame (even if it was bad). Moral good luck is nice, but not particularly praiseworthy. Negligence is blameworthy, but less so than malice.

The role of preferences in sociality is most directly comprehensible from a utilitarian (or other consequentialist) framework, but does not depend essentially on the truth of utilitarianism. Utilitarians such as Brandt analyze right action in terms of preference-satisfaction. According to Brandt (1983, p. 37), an action is permissible if (and only if) “it would be as beneficial to have a moral code permitting that act as to have any moral code that is similar but prohibits the act.” Obligatory and forbidden actions can then be defined in terms of permissibility using well-known equivalences in deontic logic: an obligatory action is one that it’s not permissible not to do, and a forbidden action is one that it’s not permissible to do. The connection with preferences is that benefit (and harm) are understood on this account in terms of well-being. In other words, according to Brandt, an action is permissible if (and only if) it would satisfy as many fully-informed preferences, across all people, to have a moral code permitting that act as to have any moral code that is similar but prohibits the act.
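The deontic equivalences invoked here are purely logical, and they can be rendered mechanically. The toy Python fragment below is my own rendering (the function name and status labels are not Brandt’s): an act’s deontic status is fixed by whether doing it and refraining from it are each permissible.

```python
def deontic_status(permissible_to_do, permissible_to_refrain):
    """Classify an act from the permissibility of doing vs. refraining."""
    if not permissible_to_do and not permissible_to_refrain:
        return "incoherent"      # no coherent code forbids both doing and refraining
    if not permissible_to_refrain:
        return "obligatory"      # not permissible not to do it
    if not permissible_to_do:
        return "forbidden"       # not permissible to do it
    return "merely permissible"  # permissible both to do and to refrain
```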

Brandt’s theory is a rule utilitarian approach to right action. One could instead adopt an act utilitarian theory, according to which an action is permissible if and only if performing it in the circumstances would be as beneficial as performing any alternative action (Smart 1956). Or one could adopt a motive utilitarian theory, according to which an action is permissible if and only if it’s what a person with an ideal motivational set (i.e., a psychologically possible motivational set that, over the course of a lifetime, is as beneficial as any alternative psychologically possible motivational set) would perform in the circumstances (Adams 1976). Regardless of the precise flavor of utilitarianism one adopts, then, it’s clear that, for utilitarians, preferences are immensely important on the dimension of sociality. To act in such a way as to satisfy the most preferences, you must take into account the effects of your action not just on yourself but on everyone else. In other words, you need to take into account how your agency affects others’ patiency. Nested agent-patient relations also play a role here. What you do (or fail to do) to one person will often have some effect on what they do (or fail to do) to another person, which will have an effect on what the second person does (or fails to do) to a third person, and so on.

As I mentioned above, the relevance of preferences to sociality is easiest to see from a utilitarian perspective, but it doesn’t rely on such a perspective. Virtue ethicists and care ethicists (though perhaps not Kantians) all accept the centrality of preferences in their approaches to sociality. For instance, one nearly universally recognized virtue is benevolence, the disposition both to want to benefit other people and to often succeed in doing so. Even if a virtue ethicist thinks that there are benefits other than preference-satisfaction, they admit that preference-satisfaction is one kind of benefit. In the same vein, Aristotle and other ancient virtue ethicists gave pride of place to friendship. Friends aim, among other things, to benefit each other (and typically succeed), which again involves (perhaps among other things) preference-satisfaction. Similarly, in the care tradition, the one-caring aims among other things to benefit the cared-for. This typically involves not only satisfying the cared-for’s informed preferences but actively helping the cared-for to get their actual preferences to approximate their idealized preferences.


3 Preference reversals and choice blindness


Thus, preferences matter in multiple ways to the core concepts of moral psychology. What does the scientific literature on preferences tell us about these important mental states? Two convergent lines of evidence suggest that preferences are neither determinate nor stable: the heuristics and biases research on preference reversals, and the psychological research on choice blindness.

Preferences are dispositions to choose one option over another. You strictly prefer a to b only if, if you were offered a choice between them, then ceteris paribus you would choose a. If your preferences are stable, then what you would choose now is identical to what you would choose in the future. If your preferences are determinate, then there is some fact of the matter about how you would choose. That is to say, exactly one of the following subjunctive conditionals is true: if you were offered a choice, then ceteris paribus you would choose a; if you were offered a choice, then ceteris paribus you would choose b; if you were offered a choice, then ceteris paribus you would be willing to flip a coin and accept a if heads and b if tails (or you would be willing to let someone else – even your worst enemy – choose for you). The kind of indeterminacy and instability I argue for in this section is modest rather than radical. I want to claim that preferences are unstable in the sense of sometimes changing in the face of seemingly trivial and normatively irrelevant situational influences, not in the sense of constantly changing. Similarly, I want to claim that preferences are indeterminate in the sense of there sometimes being no fact of the matter how someone would choose, not in the sense of there always being no fact of the matter how someone would choose.


3.1 Preference reversals


Two distinctions are worth making regarding the types of possible preference reversals. In a chain-type reversal, you prefer a to b, prefer b to c, and prefer c to a; such reversals are sometimes labeled failures of acyclicity. In a waffle-type reversal, you prefer a to b, but also prefer b to a. The other distinction has to do with temporal scale. Preference reversals can be synchronic, in which case you would have the inconsistent preferences all at the same time. More commonly, they are diachronic, in which case you might now prefer a to b and b to c, and then later come to prefer c to a (and perhaps give up your preference for a over b). Or you might now prefer a to b, but later prefer b to a (and perhaps give up your preference for a over b). In my (2012) paper, I call diachronic waffle-type reversals the result of Rum Tum Tugger preferences, after the character in T. S. Eliot’s Old Possum’s Book of Practical Cats who is “always on the wrong side of every door.”
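Treating strict preferences as directed edges (an arrow from the preferred option to the dispreferred one) makes the two synchronic types mechanically checkable. In the minimal Python sketch below (the labels are my own), waffle-type reversals are two-cycles, chain-type reversals are longer cycles, and a consistent synchronic preference set is acyclic.

```python
def classify(prefs):
    """prefs: a set of (better, worse) pairs read as strict preferences."""
    # Waffle-type: a preferred to b and b preferred to a.
    if any((worse, better) in prefs for better, worse in prefs):
        return "waffle-type"
    # Chain-type: a longer cycle (e.g. a > b, b > c, c > a), found by
    # depth-first search over the preference graph.
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)
    visited, in_stack = set(), set()

    def dfs(node):
        visited.add(node)
        in_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt in in_stack or (nxt not in visited and dfs(nxt)):
                return True
        in_stack.discard(node)
        return False

    if any(dfs(node) for node in graph if node not in visited):
        return "chain-type"
    return "acyclic"
```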

Preference reversals were first systematically studied by Daniel Kahneman, Sarah Lichtenstein, Paul Slovic, and Amos Tversky as part of the heuristics and biases research program.[3] In study after study, they and others showed that people’s cardinal preferences could be reversed by strategically framing the choice situation. When faced with a high-risk / high-reward gamble and a low-risk / low-reward gamble, most people choose the former but assign a higher monetary value to the latter. These investigations focused on choices between lotteries or gambles rather than choices between outcomes because the researchers were attempting to engage with theories of rational choice and strategic interaction, which – in order to generate representation theorems – employ preferences over probability-weighted outcomes. While this research is fascinating, its complexity makes it hard to interpret confidently. In particular, whenever the interpreter encounters a phenomenon like this, it’s always possible to say that the problem lies not in people’s preferences but in their credences or subjective probabilities. Since evaluating a gamble always involves weighting an outcome by its probability, one can never be sure whether anomalies are attributable to the value attached to the outcome or the process of weighting. And since we have independent reason to think that people’s ability to think clearly about probability is limited and unreliable (Alfano 2013), it’s tempting to hope that preferences can be insulated from this line of critique.

For this reason, I will focus on more recent research on preference reversals in the context of choices between outcomes rather than choices between lotteries (or, if you like, degenerate lotteries with probabilities of only 0 and 1). A choice of outcome a over outcome b can only reveal someone’s ordinal preferences; it can only tell us that she prefers a to b, not by how much she prefers a to b. This limitation is worth the price, however, because looking at choices between outcomes lets us rule out the possibility that any preference reversal might be attributable to the agent’s credences rather than her preferences.

Some of the most striking investigations of preference reversals in this paradigm have been conducted by Dan Ariely and his colleagues.   For instance, Ariely, Loewenstein, and Prelec (2006) used an arbitrary anchoring paradigm to show that preferences ranging over baskets of goods and money are susceptible to diachronic waffle-type reversals.[4] In this paradigm, a participant first writes down the final two digits of her social security number (henceforth SSN-truncation[5]), then puts a ‘$’ in front of it. Next, the experimenters showcase some consumer goods, such as chocolate, books, wine, and computer peripherals. The participant is instructed to record whether, hypothetically speaking, she would pay her SSN-truncation for the goods. Finally, the goods are auctioned off for real money. The surprising result is that participants with high SSN-truncations bid 57% to 107% more than those with low SSN-truncations.

To better understand this phenomenon, consider a fictional participant whose SSN-truncation was 89. She ended up bidding $50 for the goods, so, at the moment of bidding, she preferred the goods to the money; otherwise, she would have entered a lower bid. However, one natural interpretation of the experiment is that, prior to the anchoring intervention, she would or at least might have chosen that amount of money over the goods (i.e., she would have bid lower); in other words, prior to the anchoring intervention, she preferred the money to the goods. Anchoring on her high SSN-truncation induced a diachronic waffle-type reversal in her preferences. Prior to the intervention, she preferred the money to the goods, but after, she preferred the goods to the money. This way of explaining the experiment entails that her preferences were unstable: they changed in response to the seemingly trivial and normatively irrelevant framing of the choice.

Another way to explain the same result is to say that, prior to the anchoring intervention, there was no fact of the matter whether she preferred the goods to the money or the money to the goods. In other words, it was false that, given a choice, she would have chosen the goods, but it was equally false that, given a choice, she would have chosen the money or been willing to accept a coin flip. Only in the face of the choice in all its messy situational details did she construct a preference ordering, and the process of construction was modulated by her anchoring on her SSN-truncation. This alternative explanation entails that her preferences were indeterminate.

Furthermore, these potential explanations are mutually compatible. It could be, for instance, that her preferences were partially indeterminate, and that they became determinate in the face of the choice situation. Perhaps she definitely did not prefer the money to the goods prior to the anchoring intervention, but there was no fact of the matter regarding whether she was indifferent or preferred the goods to the money. Then, in the face of the hypothetical choice, this local indeterminacy was resolved in favor of preference rather than indifference. Finally, her newly-crystallized preference was expressed when she entered her bid.

Such a robust effect calls for explanation. My own suspicion is that a hybrid of indeterminacy and instability is the right theory of what happens in these cases, but it’s difficult to find evidence that points one way or the other. In any event, for present purposes, I’m satisfied with the inclusive disjunction of indeterminacy and instability.


3.2 Choice Blindness


There are many other – often amusing and sometimes depressing – studies of preference reversals, but the gist of them should be clear, so I’d like to turn now to the phenomenon of choice blindness, a field of research pioneered in the last decade by Petter Johansson and his colleagues. As I mentioned above, preferences are dispositions to choose. You prefer a to b only if, were you given the choice between them, then ceteris paribus you would choose a. Preferences are also dispositions to make characteristic assertions and offer characteristic reasons. While it’s certainly possible for someone to prefer a to b but not to say so when asked, the linguistic disposition is closely connected to the preference. Someone might be embarrassed by her preferences. She might worry that her interlocutor could use them against her in a bargaining context. She could be self-deceived about her own preferences. In such cases, we wouldn’t necessarily expect her to say what she wants, or to give reasons that support her actual preferences. But in the case of garden-variety preferences, it’s natural to assume that when someone says she prefers a to b, she really does, and it’s natural to assume that when someone gives reasons that support choosing a over b, she herself prefers a to b. Research on choice blindness challenges these assumptions.

Imagine that someone shows you two pictures, each a snapshot of a woman’s face. He asks you to say which you prefer on the basis of attractiveness. You point to the face on the left. He then asks you to explain why, displaying the chosen photograph a second time. Would you notice that the faces had been surreptitiously switched, so that the face you hadn’t pointed at is now the one you’re being asked about? Or would you give a reason for choosing the face that you’d initially dispreferred?   Johansson et al. (2005) found that participants detected the ruse in fewer than 20% of trials. Moreover, when asked for reasons, many of the participants who had not detected the manipulation gave reasons that were inconsistent with their original choice. For instance, some said that they preferred blondes even though they had originally chosen a brunette.

This original study of choice blindness has been supplemented with experiments in other domains. For instance, Hall et al. (2010) found that people exhibited choice blindness in more than two thirds of all trials when the choice was between two kinds of jam or two kinds of tea. After tasting both, participants indicated which of the two they preferred, then were asked to explain their choice while sampling their preferred option “again.” Even when the phenomenological contrast between the items was especially large (cinnamon apple versus grapefruit for jam, pernod versus mango for tea), fewer than half the participants detected the switch.

Choice blindness in the domain of aesthetic evaluations of faces and comestibles might not seem weighty enough to support the argument that preferences are often indeterminate and unstable. But perhaps choice blindness in the domain of political preferences and moral judgments would be. Johansson, Hall, and Chater (2011) used the choice blindness paradigm to flip Swedish participants’ political preferences across the conservative-socialist divide.[6] Participants filled in a series of scales on their political preferences for policies such as taxes on fuel. Some of these scales were then surreptitiously reversed, so that, for example, a very conservative answer was now a very socialist answer. Participants were then asked to indicate whether they wanted to change any of their choices, and to give reasons for their positions. Fewer than 20% of the reversals were detected, and only one in ten participants detected enough reversals to keep their aggregate position from switching from conservative to socialist (or conversely). In a similar study, Hall, Johansson, and Strandberg (2012) used a self-transforming survey to flip participants’ moral judgments on both socially contentious issues, such as the permissibility of prostitution, and broad normative principles, such as the permissibility of large-scale government surveillance and illegal immigration. For instance, an answer indicating that prostitution was sometimes morally permissible would be flipped to say that prostitution was never morally permissible, and an answer indicating that illegal immigration was morally permissible would be flipped to say that illegal immigration was morally impermissible. Detection rates for individual questions ranged between 33% and 50%. Almost seven out of ten participants failed to detect at least one reversal.

As with the behavioral evidence for preference reversals, the evidence for choice blindness suggests that people’s preferences are unstable, indeterminate, or both. The choices people make can fairly easily be made to diverge from the reasons they give. If preferring a to b is a disposition both to choose a over b and to offer reasons that support the choice of a over b (or at least not to offer reasons that support the choice of b over a), then it would appear that many people lack preferences, or that their preferences do exist but are extremely labile. Not only is there sometimes no fact of the matter about what we prefer, but also our preferences are often seemingly constructed on the fly in choice situations, and their ordering is shaped by seemingly trivial and normatively irrelevant factors.


3.3 A descriptive preference model


While it is of course possible to dispute the ecological validity of these experiments or my interpretation of them, I want to proceed by considering some of the philosophical implications of that interpretation, assuming for the sake of argument that it is sound. I’ve already explored some of the implications of this perspective in Alfano (2012), where I argue that the indeterminacy and instability of preferences infirm our ability to explain and predict behavior. Predictions of behavior often refer to the preferences of the target agent. If you know that Karen prefers vanilla ice cream to chocolate, then you can predict that, ceteris paribus, when offered a choice between them she will go with vanilla. Likewise for explanations: you can base an explanation of Karen’s choice of vanilla on the fact that she prefers vanilla. But if there’s no fact of the matter about what Karen prefers, you cannot so easily predict what she will do, nor can you so easily explain why she did what she did. A related problem arises when considering instability. If Karen prefers vanilla to chocolate now, but her preference is unstable, then the prediction that she will choose vanilla in the future – even the near future – is on shaky ground. For all you know, by the time the choice is presented, her preferences will have reversed. Similarly for explanation: if Karen’s preferences are unstable, you might be able to say that she chose vanilla because she preferred it at that very moment, but you gain little purchase on her longitudinal preferences from such an attribution.

I’ve responded to these problems by proposing a model in which preferences are interval-valued rather than point-valued. A traditional valuation function v maps from outcomes to points. The binary preference relation is then defined in terms of these points: a is strictly preferred to b just in case v(a) > v(b), b is strictly preferred to a just in case v(a) < v(b), and the agent is indifferent as between a and b just in case v(a) = v(b).


Figure 2: a preferred to b because 1 = v(a) > 0 = v(b)


In the looser model I propose, by contrast, the valuation function maps from outcomes to closed intervals, such that a is strictly preferred to b just in case min(v(a)) > max(v(b)) and the agent is indifferent as between a and b just in case there is some overlap in the intervals assigned to a and b.


Figure 3: indifference because neither min(v(a)) > max(v(b)) nor max(v(a)) < min(v(b))



Though this model preserves the transitivity of strict preference, it does not preserve the transitivity of indifference. This, however, may be a feature rather than a bug, since ordinary preferences as exhibited in choice behavior themselves seem not to preserve the transitivity of indifference.
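To make the contrast concrete, here is a minimal sketch of the interval-valued model in Python. The definitions of strict preference and indifference follow the ones given above; the particular intervals assigned to the outcomes are illustrative placeholders of my own choosing, not values drawn from any of the studies discussed.

```python
# A minimal sketch of interval-valued preferences.
# Each outcome is assigned a closed interval (lo, hi) by the valuation
# function v. The intervals below are illustrative, not empirical.

v = {
    "a": (0.6, 1.0),
    "b": (0.4, 0.7),
    "c": (0.1, 0.5),
}

def strictly_prefers(x, y):
    """x is strictly preferred to y just in case min(v(x)) > max(v(y))."""
    return v[x][0] > v[y][1]

def indifferent(x, y):
    """The agent is indifferent just in case the intervals overlap."""
    return not strictly_prefers(x, y) and not strictly_prefers(y, x)

# Strict preference remains transitive, but indifference does not:
print(indifferent("a", "b"))        # a ~ b: the intervals overlap
print(indifferent("b", "c"))        # b ~ c: the intervals overlap
print(strictly_prefers("a", "c"))   # yet a is strictly preferred to c
```

Because indifference is defined as overlap, small differences never register on their own, but chained small differences can add up to a strict preference, which is just the pattern of intransitive indifference that choice behavior seems to exhibit.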


4 Philosophical implications of the indeterminacy and instability of preferences


In this section, I consider some possible philosophical implications of the indeterminacy and instability of preferences, drawing on the descriptive model outlined in the previous section. Moving from the descriptive to the normative domain is always fraught, but, as I argued in the introduction, the two need to be explored in tandem, with mutual theoretical adjustments made on each side. Moral psychology without normative structure is a baggy monster. Normative theory without empirical support is a castle in the sky.


4.1 Implications for patiency


The primary worry raised for the theory of personal well-being by the indeterminacy and instability of preferences is that, if the extent to which your life is going well depends on or is a function of the extent to which you’re getting what you want, then well-being inherits the indeterminacy and instability of preferences. In other words, there might be no fact of the matter concerning how good a life you’re living at this very moment, and if there is such a fact, it might fluctuate from moment to moment in response to seemingly trivial and normatively irrelevant situational factors.

By way of example, consider someone who is eating toast with apple cinnamon jam. Is his life as good as it would be if he were eating toast with grapefruit jam? If he is like the people in the choice blindness studies mentioned above, there might be no answer to this question. If he’s told that he prefers apple cinnamon, he will prefer the present state of affairs, but if he is told that he prefers grapefruit, he’ll be less pleased with the present state of affairs than he would be with the world in which he is eating grapefruit jam. Whether his life is better in the apple cinnamon jam-world or the grapefruit jam-world is indeterminate until his preferences crystallize in one ordering or the other.

Or consider someone who has a brand new hardbound copy of Moby Dick, for which she just paid $50 when it was marked down from $70. Is her life going better now that she has the book, or was it going better before, when she had the money? If she is like the participants in Ariely’s preference reversal study, the answer may be “yes” to both disjuncts. Before she bought the book, she preferred the money to the book. But then she anchored on the manufacturer’s suggested retail price of $70, raised her valuation of the book, and ended up preferring it to $50. Her unstable preferences mean that she was better off with the money than the book, and that she is better off with the book than the money. It’s not a contradiction, but it makes her well-being a pain in the neck to evaluate.

Fortunately, though, there is a ready response to this worry, which begins by pointing out that the indeterminacy and instability of preferences is not radical but modest, a feature captured by the descriptive model sketched above. Although there may be no fact of the matter whether the life of the consumer of apple cinnamon jam is better than the life of the consumer of grapefruit jam, there is a fact of the matter whether either of these lives is better than that of someone who, instead of eating jam, is enduring irritable bowel syndrome. Although preference orderings may fluctuate between owning a book and having $50, they do not fluctuate between owning the same book and having $50,000. These observations are consistent with the interval-valued preferences of the descriptive model outlined in the previous section. In the first example, the intervals for apple cinnamon jam and for grapefruit jam overlap with each other, but neither overlaps with the interval for irritable bowel syndrome. In the second example, the intervals for having $50 and having the book overlap with each other, but neither overlaps with the interval for having $50,000. Thus, we can still make a whole host of judgments about the quality of various possible lives, even if, when we “zoom in,” such judgments cannot always be made.

For the price of this local indeterminacy and instability, the theoretician of well-being can purchase an answer to an objection to the preference-satisfaction theory of well-being. The objection goes like this: when assessing whether it would be better to have the life of a successful lawyer or a successful artist, it seems trivial or even perverse to ask whether the artist’s life would involve slightly more ice cream, even if the agent considering what to do with her life likes ice cream. Slight preferences shouldn’t bear normative weight in this context.

However, if we assume, as seems reasonable in light of the evidence, that her preference for a little more ice cream is weak enough that it could be shifted by preference reversal or choice blindness, then its normative irrelevance is unmasked. The life of the ice cream-deprived artist and the life of the ice cream-enjoying artist are assigned nearly identical intervals on the scale of preference – intervals that differ less from each other than from that assigned to the life of the lawyer. Hence, if we are willing to put up with a little indeterminacy and instability, we can avoid more serious objections to the theory of personal well-being.


4.2 Implications for sociality


The main worry raised by the indeterminacy and instability of preferences in the context of sociality is that, if right action depends on preference-satisfaction (perhaps among other things), then it inherits the indeterminacy and instability of preferences. It might turn out that there’s just no fact of the matter what it would be right to do, or that that fact is in constant flux. This worry is perhaps most pressing for preference-utilitarians, such as Brandt and Singer (1993), but it casts a long shadow. Even if you don’t think that right action is a function of preferences and only preferences, it’s hard to deny that preferences matter at all. For instance, as I pointed out above, virtue ethicists typically countenance benevolence as an important virtue. If, as I argued in the previous section, well-being is affected by the indeterminacy and instability of preferences, then benevolence is too. And even if one thinks that benevolence is not a virtue, virtually any tolerable theory of right action is going to say that maleficence is a vice and that there is a duty – whether perfect or imperfect – of non-maleficence.

In the remainder of this section, I will concentrate on the normative implications of indeterminacy and instability for preference-utilitarianism, but it should be clear that these are just some of the more straightforward implications, and that others could be drawn out as well.

Before considering some responses I find attractive, I should point out that the problem we face here is not the one that is solved by distinguishing between a decision procedure and a standard of value. An objection to utilitarianism that was lodged early and often is that it’s either impossible or at least extremely computationally complex to know what would satisfy the most preferences. This knowledge could only be acquired by eliciting the preference orderings of every living person – or perhaps even every past, present, and future person. The correct response to this objection is that utilitarianism is meant to be a standard of value, not a decision procedure.[7] It identifies (if it is the correct theory of right action) what it would be right to do, but that doesn’t mean that we can use it to find out what it would be right to do every time we make a moral decision. The distinction is meant to parallel other general theories: Newtonian mechanics would have identified, if it had been the correct physical theory, what a projectile would do in any circumstances whatsoever, even if people were unable to apply the theory in a given instance.

This response is unavailable in the present context. There are two ways in which it might be impossible to know what would satisfy someone’s preferences: epistemic and metaphysical. You would be unable to know what someone wants if there was a fact of the matter about what that person wants, but you couldn’t find out what that fact is. This would be a merely epistemic problem, and the distinction between a decision procedure and a standard of value handles it nicely. But you would also be unable to know what someone wants if there simply was no fact of the matter concerning what that person wants. If I am right that preferences are indeterminate, then this is the problem we now face, and it does no good to have recourse to the distinction between a decision procedure and a standard of value.

Preference-utilitarianism is not without resources, however. As in the case of well-being, one attractive response is to point out that preferences are only modestly indeterminate and unstable. Although there may be no uniquely most-preferred outcome for a given individual (or indeed for any individual), there will be many genuinely dispreferred outcomes, and hopefully a manageably constrained subset of preferred outcomes than which nothing is more preferred. These are all outcomes than which nothing is determinately and stably better, even though there is no unique best outcome.

Furthermore, from among this subset of alternatives it might be possible to winnow out those that satisfy preferences which we have independent normative grounds to reject – preferences that are silly, ignorant, perverse, or malevolent. As I pointed out above, it’s commonly argued in the context of right action that brute preferences carry less weight than fully-informed preferences. According to those who argue in this way, whether it’s right to do something depends less on whether it would satisfy people’s actual preferences than on whether it would satisfy their fully-informed preferences. It might be hoped that idealizing preferences would cut down or even eliminate their indeterminacy and instability.

Here’s what that might look like. Suppose that Jake’s actual preferences are captured by my interval-valued model. As such, they present two problems: they fail to uniquely determine how it would be right to treat Jake, and they may even rule out the genuinely right way to treat him because his actual preferences are normatively objectionable. It might be possible to kill these two birds with the single stone of idealization if idealization leads to unique, point-valued preferences that are no longer normatively objectionable. Perhaps there is only one way that Jake’s preferences could turn out after he undergoes cognitive psychotherapy. This is a big ‘perhaps,’ but it is worth considering. What evidence we have, however, suggests that idealizing in this way would not lead to determinate, stable preferences. When Kahneman, Lichtenstein, Slovic, and Tversky began to investigate preference reversals, many economists saw the phenomenon as a threat, since it challenged some of the most fundamental assumptions of their field. Accordingly, they tried to show that preference reversals could be removed root and branch if participants were given sufficient information about the choices they were making. Years of attempts to eliminate the effect proved fruitless.[8]

The burden is then on the idealizer to say what information participants lack in the relevant experiments. What does someone who bids high on a bottle of wine after considering her SSN-truncation not know, or not know fully enough? Perhaps she should be allowed first to drink some of the wine. While Ariely et al. (2006) did not investigate whether this would eliminate the anchoring on SSN-truncation, they did conduct other experiments in which participants sampled their options and thus had the relevant information. In one, participants first listened to an annoying sound over headphones, then bid for the right not to listen to the sound again. As in the consumer goods experiment, before bidding, participants first considered whether they would pay their SSN-truncation in cents to avoid listening to the sound again. And as expected, those with higher SSN-truncations entered higher bids, while those with lower SSN-truncations entered lower bids. It’s unclear what further information they could have acquired to inform their preferences. It seems more plausible that they had too much information, not too little. If they hadn’t first considered whether to bid their SSN-truncation, they would not have anchored on it and would therefore have had “uncontaminated” preferences. But cognitive psychotherapy says to take into account “everything that might make [one] change [one’s] desires” (Brandt 1983, p. 40). Anchoring changed their desires, so it counts as part of cognitive psychotherapy. Perhaps the process can be revised by saying that one should take into account everything that might correctly or relevantly change one’s desires, but then the problem is to come up with an account of what makes an influence on one’s desires correct or relevant that doesn’t involve either a vicious regress or a vicious circle. No one has managed to do this, perhaps because it can’t be done.

Another response, which I find more attractive, is to embrace rather than reject the indeterminacy and instability of preferences. There are several ways to do this. One is to figure out which preferences are wildly indeterminate or unstable and disqualify their normative standing completely. Just as it makes sense to ignore the Rum Tum Tugger’s begging to be let inside because you know he’ll just beg to get back out again, perhaps it makes sense to hive off Jake’s indeterminate and unstable preferences, leaving a kernel of normatively respectable ones behind. Only these would matter when considering what it would be right to do by Jake, or what would promote his well-being.

A second way to embrace indeterminacy and instability is to make a less heroic assumption about the effect of cognitive psychotherapy. Instead of taking it for granted that this process is bound to converge on unique, point-valued preferences, perhaps it will merely shrink the width of Jake’s interval-valued preferences. In that case, even after idealization, there would be no unique characterization of what it would be right to do by Jake or what would most promote his well-being. As I’ve argued in the context of prediction and explanation (Alfano 2012), however, this might be a feature rather than a bug. Suppose that idealization yields a preference ordering that rules out most actions as wrong and condemns many outcomes as detrimental to Jake’s well-being, but does not adjudicate among many others. The remaining actions would then all be considered morally right in the weak sense of being permissible but not obligatory, and the remaining outcomes would all be vindicated as conducive to well-being. This strategy might help to solve the so-called demandingness problem by expanding what James Fishkin calls “the zone of indifference or permissibly free personal choice” (1982, p. 23; see also 1986). Thus, while it is possible to try to resist the evidence for indeterminacy and instability, or to acknowledge the evidence while denying its normative import, it may be better instead to embrace these features of preferences and use them to respond to existing problems.


5 Future directions in the moral psychology of preferences


Because preferences are involved in multiple ways in patiency, agency, sociality, temporality, and reflexivity, there are many avenues for further research. In this closing section, I list just a few of them.

First, further conceptual work by philosophers and theoretically-minded psychologists and behavioral economists may reveal or clarify relevant distinctions, such as a contemporary version of Mill’s distinction between higher and lower pleasures. Perhaps a useful distinction can be made between satisfaction of higher and lower preferences. According to Mill, one pleasure is higher than another if an expert who was acquainted with both would choose any amount of the former over any amount of the latter. This maps fairly directly onto the idea of lexicographic preferences: one good or value is lexicographically preferred to another if (and only if) any amount of the former would be chosen over any amount of the latter. Such values would be in principle immune to preference reversals. Jeremy Ginges and Scott Atran (2013) have found that when a value is “sacralized,” it becomes lexicographically preferred in this way. Moral values seem to be the only values that are capable of becoming sacred. However, tradeoffs have only been studied in one direction (giving up a sacred value to gain a secular value).
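The structure of lexicographic preference can be sketched as follows. This is a toy illustration of the definition just given – the bundle representation and the “sacred”/“secular” labels are my own devices, not anything drawn from Ginges and Atran’s studies – but it shows why such values would be immune to reversals induced by sweetening the secular side of an offer.

```python
# A toy sketch of lexicographic preference: the sacred value dominates,
# so no amount of the secular value can compensate for any loss of it.
# Bundles are pairs (sacred_amount, secular_amount); both dimensions
# are illustrative placeholders.

def lex_prefers(bundle1, bundle2):
    """bundle1 is lexicographically preferred to bundle2 just in case it
    has more of the sacred value, or an equal amount of the sacred value
    and more of the secular value."""
    sacred1, secular1 = bundle1
    sacred2, secular2 = bundle2
    if sacred1 != sacred2:
        return sacred1 > sacred2
    return secular1 > secular2

# Even an enormous secular sweetener cannot outweigh a tiny sacred loss,
# so no preference reversal can be induced along the secular dimension:
print(lex_prefers((1.0, 0), (0.99, 10**9)))   # True
```

Note that this comparison is exactly Python’s built-in tuple ordering; the point of spelling it out is that the secular dimension is consulted only to break exact ties on the sacred one.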

Second, further empirical research would help to determine whether the hiving-off strategy succeeds. Is there some identifiable class of preferences that are especially susceptible to reversals and choice blindness? We currently lack sufficient evidence to say. It seems that the effects may be stronger in business and gambling domains and weaker in social and health domains (Kühberger 1998), but these categories are neither mutually exclusive nor exhaustive. This is yet another area in which collaboration between philosophers, who are specially trained in making this sort of distinction, and psychologists would be useful.

Third, to what extent do preference reversals and choice blindness disappear when people are informed about them? Are psychologists who know all about these effects less susceptible to them? More susceptible? The same as other people?

Fourth, are there some people who are congenitally more susceptible to preference reversals and choice blindness than others? There is very little research on this, though one study suggests that roughly a quarter of the population is highly susceptible and another quarter is immune (Bostic, Herrnstein, & Duncan 1990). Perhaps the preferences of people who are clear on what they want deserve more normative weight than the preferences of people who don’t know what they want. Perhaps the second group would benefit not so much from getting what they (think) want (for the moment) but from having their preferences shaped in more or less subtle ways.

Finally, on a related note, perhaps public policy should sometimes aim not so much to satisfy existing preferences, but to shape people’s preferences in such a way that they are (more easily) satisfiable. The idea here is to take advantage of the instability of preferences, cultivating them in such a way that the people who have them will be most able to satisfy their own wants. If you’re not getting what you want, either change what you’re getting, or change what you want. Of course, this proposal may seem objectionably paternalistic, but I tend to agree with Richard Thaler and Cass Sunstein (2008) in thinking that in some cases such policies may be permissible. In fact, it’s a striking asymmetry that almost no one objects to the shaping of beliefs, provided they are made to accord with (what we take to be) the truth, whereas it’s hard to find someone who doesn’t object to the shaping of desires and preferences. However, I would argue that the choice we often face is not whether to mold preferences but how. Given how easily preferences are influenced, it’s highly likely that they are constantly being socially shaped without our realizing it. If this is right, existing policies already shape preferences; we just don’t know how. The choice is therefore between inadvertently influencing preferences and doing so strategically. I tend to think that society has not just a right but an obligation to help people develop appropriate preferences – a point with which feminists such as Serene Khader (2011) concur. The worry that such interventions might be objectionably paternalistic can be assuaged somewhat by insisting, as Khader does, that the very people whose preferences are the targets of policy intervention participate in designing the interventions.


[1] Preferences are causally influenced by values, but values on their own don’t do all the work (Homer & Kahle 1988).

[2] A version of this idea was first formulated by Sidgwick (1981). Rosati (1995) argues persuasively that mere information without imaginative awareness and engagement with that information is not enough.

[3] See Lichtenstein & Slovic (1971); Slovic (1995); Slovic & Lichtenstein (1968, 1983); Tversky & Kahneman (1981); Tversky, Slovic, & Kahneman (1990).

[4] See also Ariely & Norton (2008), Green et al. (1998), Hoeffler & Ariely (1999), Hoeffler et al. (2006), Johnson and Schkade (1989), and Lichtenstein and Slovic (1971).

[5] A social security number is a kind of national identification code: it associates each citizen of the United States with a unique, quasi-random number.

[6] In the United States, this would be equivalent to flipping preferences across the conservative-liberal gap; in the United Kingdom, it would be equivalent to flipping preferences across the conservative-labour gap.

[7] Bentham (1789/1961, p. 31), Mill (1861/1998, 26), and Sidgwick (1907, p. 413) all deal with the objection in this way.

[8] See Berg, Dickhaut, & O’Brien (1985); Pommerehne, Schneider, & Zweifel (1982); and Reilly (1982).

What I said at the Mars Hill Panel

I recently participated in a Mars Hill Panel with Azim Shariff, Steve Bilynskyj, and Beth Bilynskyj on the question “Can We Be Good Without God?”  Here’s what I had to say:

The question we’re discussing today is whether we can be good without god.  This question could be understood in two different ways.  First, it could be a question about motivation: is it possible for a human animal to do good and be good without believing in god?  Second, it could be a question about grounding: is it possible for a human animal to do good or be good if there is no god, regardless of whether that person believes in god?  Azim Shariff is going to focus on the first question.  I’ll focus on the second.

It seems to me that there are two useful ways to approach the grounding question.  I’d like to explore both.

Consider first a polemic.  We want to know whether it’s possible for a human animal to be good even if there is no god.  I think the best way to show that something is possible is to show that it’s actual.  With this in mind, suppose that two things could be established: first, that there is no god, and second, that there is goodness.  If that were true – if it were actual that there was goodness without god – then of course it would also be possible that there was goodness without god.

To establish that there is no god, the atheist can engage in three tactics.  First, she can consider and reject all plausible arguments for the existence of god.  Second, she can argue directly against the existence of god.  Third, she can explain why, even if there is no god, belief in god is so prevalent.  I don’t have time to do this exhaustively tonight, but I think that all three tasks can be accomplished.  Among the best-known arguments for the existence of god are the cosmological argument, the ontological argument, the teleological argument, the scripture argument, and Pascal’s wager.  The cosmological argument rests on the false premise that the universe itself needs a cause.  The ontological argument mistakenly treats existence as a property.  The teleological argument based on the teleology of life has been debunked by neo-Darwinism; the teleological argument based on the apparent fine-tuning of physical parameters mistakes low probability for intentional design.  The scripture argument is a non-starter, since it assumes that a document riddled with falsehoods, inconsistencies, and impossibilities was divinely inspired.  Finally, Pascal’s wager is not actually an argument for the existence of god; it’s an argument for believing in god even though one recognizes that the existence of god is at best remarkably unlikely.

I’ll now turn to arguing against the existence of god.  One thing the atheist can say at this point is that, since there are no good arguments for the existence of god, we should conclude that there is no god.  After all, the burden of proof is on the person who wants to argue that something exists, not the person who rejects that claim.  Evil people like Donald Rumsfeld flippantly say that absence of evidence is not evidence of absence.  But when you do your very best to find evidence and don’t turn anything up, that is evidence of absence.  Just so in the case of god: if the best arguments for the existence of god are unsuccessful, that’s evidence that there is no god.  But there are also direct arguments to be made against the existence of god.  Perhaps the most persuasive is the argument from evil, which I’ll explore in three forms.  All three versions begin by pointing out that any god worth believing in, worshiping, and taking direction from would have to be both powerful and good.  They then argue that no such god exists.  The first way to establish this claim is by thinking about natural disasters.  Consider the tsunami of December 26, 2004, which killed an estimated 150,000 people and destroyed the homes of literally millions more.  Would a good and powerful god allow such an event?  I think not.  Or consider leukemia, which kills about 25,000 children every single year.  Would a good and powerful god allow innocents to suffer in this way?  Would a good and powerful god create humans in such a way that they were susceptible to this disease?  Again, I think not.  Finally, instead of worrying about the evils that god allows, one could point directly at the evils that god, according to religious scriptures, perpetrates.  In both Christianity and Islam, for instance, god is thought to punish insubordination – either mere failure to believe in god, or failure to comply with some divine commandments – with eternal damnation.  This torment is supposedly infinitely worse, in both duration and intensity, than all the suffering that will have occurred during the history of life in the universe.  As David Lewis pointed out, this makes god infinitely worse than Hitler and Stalin.

The last part of the atheist argument is to explain why, despite the fact that there is no god, theism is so prevalent.  It’s a complex story, but the most important part of it is this: for evolutionary reasons, humans are wildly oversensitive agent-detectors, which leads us to see divine agency in anything we can’t explain.

I’m happy to discuss any of this further, but for now I’m going to treat the first premise – that there is no god – as established.  If it can also be shown that there is goodness, then we are done: it’s possible for us to be good without god because it’s actually the case that some of us are good despite the non-existence of god.  This premise is, I think, much easier to establish.  We all know people who are at least somewhat good.  My favorite example is Paul Slovic, an emeritus professor here at UO who works on understanding the causes of genocide and on preventing it.

What I’m claiming to have established, then, is that there is no god but there is goodness.  From this it directly follows that we can be good without god.  But perhaps this polemical approach strikes you as too aggressive.  Maybe you think that the arguments for the existence of god are more persuasive than I do.  Maybe you think Paul Slovic is not a good person – nor is anyone else.  Let’s grant for the sake of argument, then, that god exists.  Let’s also grant that god judges some things to be good and wants us to promote them.  Here’s a further question: is something good because god says so, or is it rather the case that god says so because it’s independently good?  I think that god doesn’t make something good just by commanding it.  After all, if god could do this, then god could arbitrarily decide to make rape, murder, torture, genocide, pedophilia, and investment banking good.  And god could arbitrarily decide to make love, friendship, community, freedom, and creativity bad.  This is connected with the argument from evil that I mentioned earlier.  A god who commanded us to give up love, friendship, freedom, and creativity – a god who insisted that we instead pursue rape, murder, torture, genocide, pedophilia, and investment banking – wouldn’t be worthy of our admiration and obedience.  Such a god would be evil, as would anyone who followed his commands.

How am I so sure that a god who issued such commands would be evil?  Because I think we have an independent notion of what’s good.  There are lots of ways of spelling out that notion, but one I find especially attractive is that human animals have certain needs and capabilities, and that what’s good for us is for our needs to be met and our capabilities promoted.  A comprehensive list of needs and capabilities is hard to formulate, but here’s a good start:

1) life, which involves having a long enough life and a life worth living;
2) bodily health, which involves nourishment, shelter, and reproductive health;
3) bodily integrity, which involves freedom of movement, freedom from assault and abuse, and reproductive choice;
4) mental freedom, which involves having an adequate education, a wide-ranging imagination, and creative expression;
5) emotional integrity, which involves the capacity to have loving attachments to people and the ability to feel the full range of human emotions;
6) practical reason, which involves being able to formulate your own conception of a good life;
7) affiliation, which involves the capacity for friendship and political organization;
8) other species, which involves our need to live with and in nature, including other mammals, other animals more generally, plants, and the rest of the biological world;
9) play; and
10) political and material control.

What’s good for someone is for their life to be high on all ten of these dimensions.  Of course, that doesn’t mean that the only life worth living is very high on all dimensions.  It’s comparative: the higher you are on each dimension, the better off you are.  On this view, morality is a system of institutions, norms, rules, values, judgments, and actions that answers to these needs and capabilities.  One system of morality is better than another to the extent that it meets more needs and promotes more capabilities.

There will inevitably be trade-offs, with some people and cultures putting more emphasis on some capabilities than others.  There might be no principled way of choosing one set of weights over another in every case.  But that doesn’t mean we can’t criticize a culture – including and especially our own – for failing to meet needs and to promote capabilities when it could.  This has two implications.  First, things are good for biological and psychological – not divine – reasons.  If we can be good at all, we can be good without god.  Second, there are constraints on moral relativism.  Whether a certain way of behaving is acceptable depends on the moral system in place in the relevant culture, but whether that system is binding in the first place depends on whether it meets needs and promotes capabilities sufficiently well.  Criticizing your own culture because it callously leaves needs unsatisfied and undermines capabilities is an important moral act.  This includes criticizing the predominance of a religion like Christianity, which systematically undermines bodily health, bodily integrity, mental freedom, emotional integrity, practical reason, affiliation, and political control.