This Thing of Darkness I Acknowledge Mind: Chapter on Responsibility and Implicit Bias

Here’s a draft of the chapter of my moral psychology textbook. It’s on implicit bias and responsibility.  This one was much more depressing to write than the one on preferences.  As always, questions, comments, suggestions, and criticisms are most welcome.

 

“This thing of darkness I acknowledge mine.”

~ William Shakespeare, The Tempest, 5.1.289-290

1 Some incidents

At 12:40 AM, February 4th, 1999, Amadou Diallo, a student, entrepreneur, and African immigrant, was standing outside his apartment building in the southeast Bronx. In the gloom, four passing police officers in street clothes mistook him for Isaac Jones, a serial rapist who had been terrorizing the neighborhood. Shouting commands, they approached Diallo. He headed towards the front door of his building. Diallo stopped on the dimly lit stoop and took his wallet out of his jacket. Perhaps he thought they were cops and was trying to show them his ID; maybe he thought they were violent thieves and was trying to hand over his cash and credit cards. We will never know. One of them, Sean Carroll, mistook the wallet for a gun. Alerting his fellow officers, Richard Murphy, Edward McMellon, and Kenneth Boss, to the perceived threat, he triggered a firestorm: together, they fired 41 shots at Diallo, 19 of which found their mark. He died on the spot. He was unarmed. All four officers were ruled by the New York Police Department to have acted as a “reasonable” police officer would have acted in the circumstances. Subsequently indicted for second-degree murder and reckless endangerment, they were acquitted on all charges.

Like so many others, Sean Bell, a black resident of Queens, had some drinks with his friends at a club the night before his wedding, which was scheduled for November 25th, 2006. As they were leaving the club, though, something less typical happened: five members of the New York City Police Department shot about fifty bullets at them, killing Bell and permanently injuring his friends, Trent Benefield and Joseph Guzman. The first officer to shoot, Gescard Isnora, claimed afterward that he’d seen Guzman reach for a gun. Detective Paul Headley fired one shot; officer Michael Carey fired three bullets; officer Marc Cooper shot four times; officer Isnora fired eleven shots. Officer Michael Oliver emptied an entire magazine of his 9 mm handgun into Bell’s car, paused to reload, then emptied another magazine. Bell, Benefield, and Guzman were unarmed. In part because Benefield’s and Guzman’s testimony was confused (understandably, given that they’d had a few drinks and then been shot), all of the police officers were acquitted. New York City agreed to pay Benefield, Guzman, and Bell’s fiancée just over seven million dollars (roughly £4,000,000) in damages, which prompted Michael Paladino, the head of the New York City Detectives Endowment Association, to complain, “I think the settlement is a joke. The detectives were exonerated… and now the taxpayer is on the hook for $7 million and the attorneys are in line to get $2 million without suffering a scratch.”

In 1979, Lilly Ledbetter was hired as a supervisor by Goodyear Tire & Rubber Company. Initially, her salary roughly matched those of her peers, the vast majority of whom were men. Over the next two decades, her and her peers’ raises, which when awarded were a percentage of current salary, were contingent on periodic performance evaluations. In some cases, Ledbetter received raises. In many, she was denied. By the time she retired in 1997, her monthly salary was $3,727. The other supervisors – all men – were then being paid between $4,286 and $5,236. Over the years, her compensation had lagged further and further behind that of men performing substantially similar work; by the time she retired, she was making between 71% and 87% of what her male counterparts earned. Just after retiring, Ledbetter filed charges of discrimination, alleging that Goodyear had violated Title VII of the Civil Rights Act, which prohibits, among other things, discrimination with respect to compensation because of the target’s sex. Although a jury of her peers found in her favor, Ledbetter’s case was appealed all the way to the United States Supreme Court, which ruled 5-4 against her. Writing for the majority, Justice Samuel Alito argued that Ledbetter’s claim failed because the alleged acts of discrimination occurred more than 180 days before she filed suit, putting them outside the statute of limitations and effectively immunizing Goodyear. In 2009, Congress passed the Lilly Ledbetter Fair Pay Act, loosening such temporal restrictions to make suits like hers easier to prosecute.

Though appalling, Ledbetter’s example is actually unremarkable. On average in the United States, women earn 77% of what their male counterparts earn for comparable work. A longitudinal study of the careers of men and women in business indicates that Ledbetter’s case fits a general pattern. Although no gender differences were found early in their careers, by mid-career women reported lower salaries, less career satisfaction, and weaker feelings of being appreciated by their bosses (Schneer & Reitman 1994). Over the long term, many small, subtle, but systematic biases often snowball into an unfair and dissatisfying career experience.

Why consider these cases together? What – other than their repugnance – unites them? The exact motives of the people involved are opaque to us, but we can speculate and consider what we should think about the responsibility of those involved, given plausible interpretations of their behavior and motives. This lets us evaluate related cases and think systematically about responsibility, regardless of how we judge the historical examples used as models. In particular, in this chapter I’ll consider the question whether and to what extent someone who acts out of bias is responsible for their behavior. The police seem to have been in some way biased against Diallo and Bell; Ledbetter’s supervisors seem to have been in some way biased against her. To explore the extent to which they were morally responsible for acting from these biases, I’ll first discuss philosophical approaches to the question of responsibility. Next, I’ll explain some of the relevant psychological research on bias. I’ll then consider how this research should inform our understanding of the moral psychology of responsibility. Finally, I’ll point to opportunities for further philosophical and psychological research.

2 The role of responsibility in moral psychology

Let’s start with some intuitive distinctions. When we talk about responsibility, at least four parameters need to be fixed: 1) the thing or person that bears responsibility, 2) the kind of responsibility, 3) the conduct it or she is responsible for, and 4) the authority to whom it or she is accountable, if any. In 2012, Hurricane Sandy was responsible for over sixty-eight billion dollars in damage in North America. This is a case of an inanimate object (1) being causally responsible (2) for property damage (3) and being accountable to no one in particular (4). In the Netherlands, if a car collides with a cyclist, the driver of the car (1) is considered legally responsible (2) for property damage and injury (3) to the cyclist (4). While causal and legal responsibility are interesting, they are not our concern here. After all, it’s obvious that the police were causally responsible for the deaths of Diallo and Bell, and it’s obvious that her supervisors were causally responsible for Ledbetter’s salary. Moreover, legal responsibility has already been assigned in these cases. One might worry that it was assigned incorrectly, but what we are interested in now is moral responsibility.

In cases of characteristically moral responsibility (2), a human agent (1) is or at least could be held accountable by some moral authority (4) for an action, omission, mental state, or state of character (3). Adult humans are generally assumed to be at least partially morally responsible for their behavior, their mental states, and their character provided two conditions are met: a knowledge condition and a control condition.[1] According to the knowledge condition, if you lacked certain kinds of relevant knowledge, then you are not responsible for what you did (or failed to do). According to the control condition, if you lacked certain kinds of relevant control, then you are not responsible for what you did (or failed to do). As you might expect, what exactly counts as “relevant” knowledge and control is highly contested.

Responsibility thus has the same kind of recursively embedded structure explored in the introduction to this book (figure 1 below). In a typical case of holding someone responsible for doing something, there are three people involved. There’s the agent whose responsibility is being assessed. There’s the patient of her activity. And there’s the person who holds her to account.


Figure 1: recursively embedded agent-patient relations

 

The patient of the initial action may be identical to the person who holds the agent accountable. For instance, if I lie to you, you can call me to account. This identity needn’t hold in all cases, however. Your friend could hold me to account for lying to you.

 

2.1 The knowledge condition on responsibility

 

The knowledge condition has two main components. First, arguably, you can’t be held responsible for doing something when you didn’t even know that you were doing it. For example, suppose I’m backing my car out of my driveway when, unbeknownst to me, a suicidal neighbor sneaks up behind my car and lies down in such a way that I run him over. It’s true that I did run him over. It’s also true that I knew in some sense what I was doing: I knew I was backing out of my driveway. But it’s false that I knew what I was doing in the relevant sense: I didn’t know I was running him over. As Elizabeth Anscombe (1957; see also Davidson 2001) pointed out, action is always under some description. The same event, described in one way, is an intentional action, yet described in another way, is not.

This knowledge constraint also applies to actions that result in good consequences. If, while I’m backing my car out of my driveway, it ends up blocking a sniper’s bullet aimed at my neighbor, I’m not morally responsible for saving his life. I am causally responsible, since I was the one who interposed the vehicle between the bullet and its target, but since I didn’t know that that was what I was doing, it makes no sense to praise me as a life-saver. I and my neighbor are, instead, lucky.

Contrast this with a case in which I know that my poor neighbor has suicidal tendencies. I’ve seen him trying to get himself run over by other neighbors who were, thankfully, more observant than me. Arguably, I now acquire a duty to investigate whether he’s behind my car. This suggests an important revision to the first component of the knowledge condition: you can’t be held responsible for doing something when you didn’t even know you were doing it unless you shirked your duty to know. Life is short, and there are far too many knowable things for any of us to get even close to knowing them all. The duty to know, then, applies only in some cases. I’ll leave up for grabs precisely what those cases are.[2]

The second component of the knowledge condition is that you can’t be held responsible for doing something when you didn’t even know why you were doing it. To understand this condition, we need to distinguish between motivating and normative reasons. A motivating reason is a psychological state that moves the agent to perform (or omit) some action; in the terminology introduced in the previous chapter, a motivating reason is either a pro-attitude or a con-attitude. By contrast, a normative reason is a consideration that counts in favor of performing (or omitting) some action. In at least some cases, we don’t hold people responsible for their behavior unless their motivating reasons match the normative reasons that apply to the case at hand, even if their behavior corresponds to what someone with matched motivating and normative reasons would have done. For instance, imagine that a brown recluse spider is crawling its deadly way towards a sleeping child, and I reach out and squash it with a rolled-up newspaper. I knew exactly what I was doing: I was killing the spider. But why was I doing it? The normative reason to kill the brown recluse is that it has such deadly venom that it could kill the child. If that’s what was motivating me, then I deserve to be held responsible for (potentially) saving the child’s life. If instead I was doing it because I find bugs disgusting and kill them whenever I have the chance, I don’t deserve to be held responsible in this way. We can even suppose that I knew that it was a brown recluse, that it might kill the child, and hence that I was (potentially) saving the child’s life, but that I didn’t care at all about the child’s life. I just callously wanted to smash the disgusting bug. Lack of knowledge of the relevant normative reasons, ignorance of why I should act, relieves me of responsibility both for my good and for my bad behavior.

As with the other component of the knowledge condition, though, I can sometimes be held responsible despite my lack of knowledge if my ignorance is culpable. I can’t be held responsible for doing something when I didn’t even know why I was doing it unless I shirked my duty to know.

 

2.2 The control condition on responsibility

 

Knowledge is one constraint on responsibility; the other main constraint is control. Most philosophers would agree that you can’t straightforwardly be held responsible for what you do (or omit) when you lack appropriate control over your behavior. One way in which control can be undermined is constraint or coercion. For example, suppose I promised to meet you today at noon in the library, but on my way there I was kidnapped. It hardly makes sense to hold me responsible for my failure to live up to my commitment: I was physically constrained.

There are other kinds of constraints. For instance, it often makes less sense to hold someone responsible for their behavior when it is biologically, economically, or psychologically constrained. Someone who steals a loaf of bread because she’s so hungry she might starve is, in a sense, less responsible for her behavior than someone who steals just because she is too stingy to pay.[3] Someone who fails to rescue a child from the edge of a cliff because he has a paralyzing fear of heights is less responsible for his behavior than someone who fails to help just because he doesn’t care. Similar arguments apply to coercion. And as with the knowledge condition, I can sometimes be held responsible for my behavior when I face constraint or coercion if the constraint or coercion is culpable. For instance, if I’m economically constrained in such a way that I can’t support my family because I gambled my savings away, it would seem that I’m less easily absolved of responsibility.

Like the knowledge condition, the control condition has two components. Someone who faces a constraint or operates in the face of a coercive incentive structure may nevertheless exercise control over her behavior. Given that I’ve been kidnapped, there are still various things I can do. Given that you’re terrified of heights, there are still certain things you can do. Given that I’ve gambled away my savings, there are still a number of things I can do. The second component of the control condition is more fundamental: even supposing lack of constraint or coercion, was the agent sufficiently and in the right way in control of her behavior?

What does it mean to be sufficiently in control of your behavior? What does it mean to be in control of your behavior in the right way? As you might expect, philosophers have proposed a bewildering variety of answers to these questions. Among the kinds of control philosophers have suggested might be essential to moral responsibility are mental control (Wegner 1994), ultimate control (Kane 1989), regulative control (Fischer & Ravizza 1999), guidance control (Fischer & Ravizza 1999), intervention control (Snow 2009), evaluative control (Hieronymi 2006), indirect control (Arpaly 2003), long-range control (Feldman 2008), fluent control (Railton 2008), habitual control (Romdenh-Romluc 2011), skilled control (Annas 2011), real-self control (Frankfurt 1971, 1992) and ecological control (Clark 2006).[4] And that’s not even close to a comprehensive catalogue.

We can impose some structure on this profusion of notions of control by classifying them on three dimensions: proximal vs. distal, first-order vs. higher-order, and number of degrees of freedom. A concept of control is proximal to the extent that it requires the agent to be in control of her behavior in or just prior to the moment of acting; a concept of control is distal to the extent that it allows her control over her behavior to be temporally distant from the moment of action. A view counts as distal, then, if it requires the agent to have selected or designed her environment to facilitate one choice rather than another; it also counts as distal if it requires her to have habituated herself to act in one way rather than another (either by rote or through a more skilled, reasons-responsive mechanism). For instance, the Kantian concept of control as autonomy (Korsgaard 2009) requires that, for every action, at the moment of acting, the agent be able to exercise conscious, reflective control over her behavior – acting contrary to her strongest inclinations and desires to bring about an alternative outcome if she wills it.[5] The Kantian notion of autonomy is thus highly proximal. By contrast, Clark’s (2006) notion of ecological control merely requires that the agent be so positioned in a social and material environment, partially through her own temporally distant choice and design, that in the moment of choice she does what her prior or better self would have wanted.

A concept of control is higher-order to the extent that it requires the presence or absence of particular higher-order (typically second-order) mental states, such as thoughts and desires; a concept of control is first-order to the extent that it does not require such mental states. For instance, Frankfurt’s (1971, 1992) notion of identification requires either that the agent have the desires that she wants to have, or at least that she not have desires she wants not to have. Higher-order desires are easiest to understand in a two-person example. Suppose that you don’t want to go to a movie with me, but I want to go to a movie with you. In the ordinary case, though, I don’t just want you to come to the movie with me no matter what. I want you to come to the movie because you yourself want to be there. Otherwise, I’d just be dragging you along or kidnapping you. Thus, I have a higher-order desire: a desire that you desire to go to the movie with me. People often have higher-order desires about others’ desires, but they can also have higher-order desires about their own desires.

Such phenomena are familiar through fiction. For example, in season 3, episode 11 of the television series Friday Night Lights, Tyra Collette is a high school senior who’s recently turned her academic life around. Despite earning mediocre grades in her first two years of high school, she puts her nose to the grindstone and achieves significant improvements, yet her chances of acceptance at a reputable university are in doubt. When she sees her unambitious older sister swept off her feet by an unemployed but good-natured man, the following conversation ensues with her mother, Angela:

TYRA: I don’t know what’s wrong with me.

ANGELA: Nothing’s wrong with you. What are you talking about?

TYRA: Why can’t I want that? They look so happy together. I mean, I spend all this time trying to go to college, and it’s seeming more and more impossible.

Tyra wants to attend a good university, but she realizes that doing so will be extremely difficult, even impossible. She concludes that it might be better for her if her first-order desires were different, and even expresses a higher-order desire not to want to go to college.

Though Frankfurt’s identification view is perhaps the best-known higher-order theory, there are others. Paul Katsafanas (2013), for example, argues that someone exercises responsible agency when and only when two conditions are met: first, she approves of her action (a first-order component), and second, her approval would not be undermined by further knowledge of the origins of her own motivations (a higher-order component). Indeed, any concept of control that requires the agent to be conscious of her motivating reasons for action counts as higher-order, since a mental state such as a motivating reason is conscious if (and only if) one is somehow aware of oneself as being in it (Rosenthal 2005, Prinz 2012). Korsgaard’s view thus also counts as higher-order.

Finally, a concept of control may require any non-negative number of degrees of freedom. In other words, it may require that the agent be able to do at least one thing other than what she actually does or many things other than what she actually does; alternatively, it may not impose any requirement that alternative possibilities be open to her. Libertarian accounts of free will such as Korsgaard’s infamously require that the agent have some positive degrees of freedom. By contrast, Fischer & Ravizza’s (1999) notion of guidance control merely requires the agent’s decisions and deliberation be part of the causal chain that leads her to act. Likewise, Frankfurt’s notion of identification does not require positive degrees of freedom.

These distinctions are displayed in Table 1.

 

Proximal vs. distal | First-order vs. higher-order | Degrees of freedom | Exemplar
Proximal            | First-order                  | 0                  | Fischer & Ravizza
Proximal            | First-order                  | >0                 |
Proximal            | Higher-order                 | 0                  | Frankfurt
Proximal            | Higher-order                 | >0                 | Korsgaard
Distal              | First-order                  | 0                  | Clark
Distal              | First-order                  | >0                 |
Distal              | Higher-order                 | 0                  | Annas
Distal              | Higher-order                 | >0                 |

 

Table 1: Illustrative Concepts of Control on Three Dimensions

 

Although Table 1 presents the distinctions as categorical, it should be clear that different concepts of control can be more or less proximal. For instance, Annas’s (2011) notion of skilled control has certain proximal elements (a skilled individual can exercise some proximal control because she’s developed a skill over time), and Clark’s notion of ecological control has certain conscious elements (one can consciously choose, at least sometimes, to strategically select and design one’s environment). Similarly, a notion of control might be framed almost entirely in terms of higher-order mental states, as Frankfurt’s is, or it might give them only a minor role. Although these concepts of control are often pitted against one another as competitors, it might be more useful to think of them as a moral psychological palette. Perhaps sometimes we care about (and have) Frankfurt-style identification. Perhaps sometimes we care about (and have) Fischer & Ravizza-style guidance control. Perhaps sometimes we care about (and have) Clark-style ecological control. It may be that no single account of control suits all our purposes in predicting, explaining, evaluating, and controlling people’s behavior (our own behavior included).

 

2.3 Interactions between knowledge and control

 

Like the other conditions, the second component of the control condition is not exceptionless. In particular, although we generally do not hold someone responsible for their behavior if they were not sufficiently and in the right way in control, sometimes we do nonetheless because their lack of control was culpable. Just as someone can be culpably ignorant that he is doing something or why he is doing something, just as someone can be culpably constrained, so someone can be culpably lacking in the appropriate form of control.

One important way in which such culpability arises is through the interaction of the knowledge and control conditions. Quite generally, there is an important bidirectional link between knowledge and successful action. Knowledge is an epistemic achievement. Successful action is a practical achievement. Knowledge results from reliable cognitive processes. Successful action results from reliable practical processes.[6] It should be unsurprising, then, that there might be a kind of ping-ponging back and forth between knowledge and action, between cognitive and practical success. For instance, the more you know, the more possibilities you’re able to entertain. The more possibilities you’re able to entertain, the more fine-grained your preferences can be. The more fine-grained your preferences can be, the more fine-grained your controlled actions can be. Thus, more knowledge enables more finely-controlled action. In the other direction, more controlled action sometimes puts one in a position to acquire more knowledge. The more reliably and precisely you’re able to plan and control your behavior, the more successful you’ll be as an inquirer, and hence the more high-grade knowledge you’ll tend to acquire. Or consider another possibility: if you acquire knowledge of how to gain and exercise control, then you are one step away from having and exercising such control. By the same token, if you have sufficient control to gain certain kinds of knowledge, you’re just one step away from acquiring that knowledge.

This sort of bidirectional feedback between the cognitive and the practical is appealing in many ways, but it also engenders additional responsibility through a kind of bootstrapping we might call, in the language of rock climbers, chimneying.

Figure 2: Chimneying

 

Suppose I learn that I’m susceptible to a rare visual bias: when driving, I tend to overestimate the distance between my car and the car directly in front of me. Up until I learn this, I’ve been systematically tailgating, but I had no idea that’s what I was doing. Arguably, until I learn about my visual impairment, I’m not responsible for tailgating because I don’t know that that’s what I have a tendency to do, and I have no reason to suspect it either. Once I learn, though, I acquire a responsibility to exercise additional control over my driving. It would be irresponsible of me to drive at what felt to me like a safe distance, since I now know that my feeling safe is consistent with systematic risk. So I start to follow at a greater distance. In so doing, I come to realize that this same unnatural feeling is also present when I’m playing soccer and when I’m having a conversation. Now I should also ask myself which of my other activities may be more dangerous or undesirable because of my visual impairment. Perhaps I’m a dangerous soccer player who’s far too likely to accidentally foul my opponent because I underestimate my distance from him. Perhaps I’m an obnoxious conversation partner because I tend to stand too close to my interlocutor. Arguably, I acquire a responsibility to investigate – to acquire better knowledge of – my own potential biases. And if I do, I’ll then acquire a responsibility to control my behavior better, which may lead me to discover yet further biases, and so on. In the remainder of this chapter, I’ll argue that exactly this sort of chimneying process characterizes how we should respond to recent empirical work on implicit bias.

 

3 Implicit bias

 

Someone embodies a bias for Xs (against Ys) to the extent that she favors Xs (disfavors Ys) in virtue of their being Xs (Ys). An explicit bias is one that the biased individual has some introspective awareness of, whereas an implicit bias is inaccessible to the biased individual’s consciousness.[7] Someone who endorses the claim that women are less competent than men exhibits an explicit bias against women; someone who rejects this claim but nevertheless associates competence more closely with men than with women embodies an implicit bias. The two types of bias can vary independently of one another, and each comes in degrees. For instance, someone could have a strong implicit bias in favor of one group despite a weak explicit bias against it, and someone could have a weak implicit bias in favor of one group while also harboring a strong explicit bias for it.

 

3.1 Two recipes for disaster

 

There are two main ways in which subtle implicit biases can lead to systematically unfair outcomes. Both depend on the potential targets of bias facing many instances in which prejudicial decisions can be made. In the first, any given biased decision is unlikely to cause harm, but when it does, the harm is extreme. In the second, no particular biased decision has an extremely harmful effect, but the cumulative, longitudinal effect of multiple biased decisions is extremely harmful:

1)    A large number of interactions (e.g., police interactions with civilians), each with a very low probability of an extremely bad outcome (mistaken brutalization of the civilian), and

2)    A large number of interactions (e.g., career-relevant interactions between boss and employee), each with a moderate probability of a slightly bad outcome (e.g., giving a raise that’s slightly lower than deserved), but with the badness of later outcomes compounding the badness of earlier ones exponentially (e.g., because raises are a percentage of current salary).[8]

Consider the second recipe first: imagine five people starting their first jobs in the year 2000. Each of them has an initial yearly salary of $50,000 and yearly performance reviews that determine a percentage increase to their salary for the next year. For the sake of simplicity, let’s suppose that all five are equally meritorious, deserving 3% raises every year. However, the implicit biases of their boss against some of them lead two to receive only 1% and 2% raises per year, and the implicit biases of their boss for some of them lead two others to receive 4% and 5% raises every year. The employees all receive raises, though, and out of decorum they don’t brag about how much they’re earning, so they all feel (at least for a while) that they’ve been fairly treated. After the first annual raise, the lowest-paid employee is now earning $50,500, whereas the highest-paid employee receives $52,500. The difference is just two thousand bucks over the course of the year. After taxes, that’s just a couple of lattes per day. No big deal, right? But watch what happens over the course of a 40-year career:

 

Annual raise[9] | Y0      | Y1      | Y2      | Y5      | Y10     | Y20      | Y30      | Y40
1%              | $50,000 | $50,500 | $51,005 | $52,551 | $55,231 | $61,010  | $67,392  | $74,443
2%              | $50,000 | $51,000 | $52,020 | $55,204 | $60,950 | $74,297  | $90,568  | $110,402
3%              | $50,000 | $51,500 | $53,045 | $57,964 | $67,196 | $90,306  | $121,363 | $163,102
4%              | $50,000 | $52,000 | $54,080 | $60,833 | $74,012 | $109,556 | $162,170 | $240,051
5%              | $50,000 | $52,500 | $55,125 | $63,814 | $81,445 | $132,665 | $216,097 | $352,000

Table 2: Yearly salaries for victims and beneficiaries of implicit bias

 

By the end of year five, the highest earner is pulling down 21% more than the lowest earner. By year twenty, the highest earner receives more than double the lowest earner. By the ends of their careers, the differences are stark. The lowest earner is vastly outpaced even by the other victim of implicit bias, receives less than half what the fairly-treated employee makes, and is out-earned by almost 400% by the most favored employee. And that’s just the differences in their incomes. Assuming that they each invested 10% of their income each year and made market returns on their investments, the differences in their wealth will be vast indeed.[10]
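
For readers who want to check the compounding for themselves, here is a minimal sketch in Python (my own illustration, not drawn from any study) that regenerates the figures in Table 2 from the hypothetical starting salary and raise rates of the example:

```python
# A minimal sketch of the compounding behind Table 2: five hypothetical
# employees with identical $50,000 starting salaries whose only difference
# is the annual raise percentage they receive.
START_SALARY = 50_000
RAISE_RATES = [0.01, 0.02, 0.03, 0.04, 0.05]   # from most biased-against to most favored
CHECKPOINT_YEARS = [0, 1, 2, 5, 10, 20, 30, 40]

for rate in RAISE_RATES:
    # Salary after n years of compounding: start * (1 + rate) ** n
    salaries = [START_SALARY * (1 + rate) ** year for year in CHECKPOINT_YEARS]
    print(f"{rate:.0%} raise: " + "  ".join(f"${s:,.0f}" for s in salaries))
```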

The other recipe is a little harder to envisage, but we can get a feel for it by modeling it as the number of times an individual can expect to be mistakenly brutalized by the police over the course of their lifetime. Suppose (falsely) that law enforcement officers never mistakenly brutalize children below the age of twelve or adults above the age of 62. That means each of us has 50 years of potential victimization. In a given year, someone may be available for interaction with the police (walking past them while they’re on patrol, being seen by them on security footage, driving past or near them on a highway, etc.) 200 times. That means people have on average 10,000 chances, over the course of a lifetime, to have an unfortunate interaction with the cops.[11]

Imagine three otherwise comparable people, Jamal, Chuck, and Theodore, and suppose that the police are very slightly biased against Jamal because of his race, and very slightly biased for Theodore because of his race and socio-economic status. This means two things. First, while their probability of initiating an interaction with Chuck in any given situation is 0.1%, their probability of initiating an interaction with Jamal in any given situation is 1%, and their probability of initiating an interaction with Theodore is 0.01%. They rarely bother any of these men, but they’re ever so slightly more disposed to bother Chuck than Theodore, and ever so slightly more disposed to bother Jamal than Chuck. They don’t think of themselves as unfairly bothering anyone; indeed, they’d reject the charge of bias. Anscombe would say that they are not acting under the description of racism. Nevertheless, they pay slightly more attention to Jamal and his behavior, and they pay slightly less attention to Theodore and his behavior. They’re slightly more inclined to construe Jamal’s movements as “furtive.” They’re slightly more inclined to construe his possessions as weapons rather than wallets. And so on.

In addition, given that they’re already interacting with Jamal, Chuck, or Theodore, the police are again implicitly biased. They’re slightly more inclined to construe Jamal’s intentions as aggressive, slightly more inclined to construe his utterances as threats, etc. And they’re slightly less inclined to construe Theodore’s intentions as aggressive, slightly less inclined to construe his utterances as threats, etc. Thus, they have a 0.01% chance of unfairly brutalizing Theodore given that they’re already interacting with him, a 0.1% chance of unfairly brutalizing Chuck given that they’re already interacting with him, and a 1% chance of unfairly brutalizing Jamal given that they’re already interacting with him.

What does this mean? Take a look at Table 3:

 

         | # of possible interactions over lifetime | Probability of an interaction | Expected # of interactions | Probability of unfair brutality given an interaction | Expected # of unfair lifetime brutalizations
Theodore | 10,000 | 0.01% | 1   | 0.01% | 0.0001
Chuck    | 10,000 | 0.1%  | 10  | 0.1%  | 0.01
Jamal    | 10,000 | 1%    | 100 | 1%    | 1

Table 3: Lifetime expectations of unfair brutalization for victims of and beneficiaries of implicit bias

 

On any given occasion, Theodore, Chuck, and Jamal all have miniscule chances of having an interaction with the police. Moreover, even given that they are interacting with the police, they have miniscule chances of being unfairly brutalized. Despite this, Theodore can basically assume that he’ll never be unfairly brutalized by the cops: his expected number of brutalizations is 0.0001. In other words, if there were ten thousand people like Theodore, we should expect only one of them to be brutalized in the course of their lifetimes. Chuck is also in pretty good shape: his expected number of lifetime brutalizations is only 0.01. In other words, if there were one hundred Chucks, we’d expect only one of them to be unfairly brutalized. Jamal is in worse shape. Given the implicit biases against him, he can expect with some certainty to be unfairly brutalized once in his lifetime. Maybe the event will “only” involve getting roughed up. Maybe, like Benefield and Guzman, he’ll be shot but not killed. Maybe, like Diallo and Bell, he’ll be killed.
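
Here, too, a minimal sketch can make the expected-value arithmetic behind Table 3 explicit. All of the probabilities below are the hypothetical figures from the example, not empirical estimates:

```python
# A minimal sketch of the arithmetic behind Table 3: expected lifetime
# interactions and expected unfair brutalizations for three hypothetical men.
LIFETIME_OPPORTUNITIES = 50 * 200  # 50 eligible years x ~200 possible encounters per year

people = {
    "Theodore": {"p_interaction": 0.0001, "p_brutality_given_interaction": 0.0001},
    "Chuck":    {"p_interaction": 0.001,  "p_brutality_given_interaction": 0.001},
    "Jamal":    {"p_interaction": 0.01,   "p_brutality_given_interaction": 0.01},
}

for name, p in people.items():
    expected_interactions = LIFETIME_OPPORTUNITIES * p["p_interaction"]
    expected_brutalizations = expected_interactions * p["p_brutality_given_interaction"]
    print(f"{name}: {expected_interactions:g} expected interactions, "
          f"{expected_brutalizations:g} expected unfair brutalizations")
```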

How would you like to live in a society where you could know, when you turned 12, that within the next fifty years you were more or less guaranteed to face one of these outcomes?

 

3.2 A few points about methodology

 

Thus far, I’ve only discussed the potential for harm resulting from implicit bias through simplified, speculative models. What does empirical psychology have to say about this? I’ll explore this question through some illustrative experiments as well as the most recent meta-analyses that have been published. Before proceeding, it’s worthwhile to explain what the differences between these two sorts of analysis are.

Psychologists conduct studies and experiments. An experiment draws a representative sample from a population, then randomizes them to condition (at least two – control and experimental – but sometimes there are multiple experimental conditions). A study, by contrast, doesn’t randomize. Experiments allow more confident prediction and explanation, since studies are prone to more uncontrolled confounding variables. But there are various reasons why a scientist might conduct a study rather than an experiment. First, sometimes it’s impossible to randomize participants to condition. For instance, if you want to compare Americans to Japanese, you can’t just take a bunch of people and randomly assign them a nationality. Second, sometimes it’s unethical to randomize participants to condition. For instance, if you want to compare victims of violent assault to non-victims, you could collect a representative sample of people and choose half of them at random to assault. This would obviously be unethical, so researchers don’t do it. Most psychology and other social science papers you will come across report between one and five studies or experiments conducted by the authors. The authors explain who their participants were (the “participants” section), what they had participants do (the “method” section), and how things turned out (the “results” section). They then interpret their results in a “discussion” section.
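
To make the contrast concrete, here is a minimal sketch of what randomizing to condition involves; the participant IDs are invented, and real experiments add many further safeguards not shown here:

```python
# A minimal sketch of random assignment to condition: hypothetical participant
# IDs are shuffled and split evenly between a control and an experimental
# condition, so pre-existing differences wash out on average.
import random

participants = [f"P{i:02d}" for i in range(1, 41)]  # 40 hypothetical participants
random.shuffle(participants)
half = len(participants) // 2
control, experimental = participants[:half], participants[half:]
print("control:", control[:5], "...")
print("experimental:", experimental[:5], "...")
```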

In the results section, you’ll encounter a number of statistical tests that the researchers ran on their data to determine whether the effects they observed were significant. One of the most common statistics you’ll see is a p-value, which is the conditional probability of observing results at least as extreme as those seen in the experiment given that nothing interesting is going on (the null hypothesis; the test itself is called a null hypothesis significance test). When the p-value is sufficiently low (below .05, or .01, or .001, depending on who you ask), the researchers conclude that, since it’s so unlikely that their results would have been observed given the null hypothesis, something interesting in fact was going on. As a deductive inference, this is obviously invalid: the falsity of the null hypothesis doesn’t follow from the fact that the observed results would be improbable if it were true. Scientists aren’t making deductive inferences, though; they’re making inductive and abductive inferences. When you’re dealing with empirical inquiry, that’s the best you can do. The problem this introduces, though, is that any given supposedly significant result a researcher reports might be a false positive, and any given non-significant result a researcher reports might be a false negative.

This is not to impugn psychological science; it’s just a note of caution. When you read an interpretation of a psychological experiment, you should always ask yourself, “Is the effect they’re reporting real, or is it instead a false positive? Is the failure to find an effect really evidence that there is no effect, or is it instead a false negative?” One way to convince yourself that the (lack of) effect is “real” is to look at the effect size. One common measure of effect size is Cohen’s d, which is the ratio of the difference in means between conditions to the standard deviation of the relevant variable. So, for example, a d of 1.0 would indicate that a manipulation moved the mean of the experimental condition an entire standard deviation away from the mean of the control condition – a huge effect.[12] Another, perhaps even more common, measure of effect size is r, the correlation between an independent variable and a dependent variable. In social psychology, the average r is .21 (Richard, Bond, & Stokes-Zoota 2003).[13]
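
As a concrete illustration of these two effect-size measures, here is a minimal sketch on made-up ratings; the d-to-r conversion shown is one common formula that assumes equal group sizes:

```python
# A minimal sketch of Cohen's d (standardized mean difference between
# conditions) and an r derived from it, computed on invented ratings.
import math
import statistics

control      = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6, 4.2, 4.8]  # hypothetical ratings
experimental = [5.2, 4.9, 5.8, 5.1, 4.7, 5.5, 5.0, 5.6]  # hypothetical ratings

def cohens_d(a, b):
    """Difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(b) - statistics.mean(a)) / math.sqrt(pooled_var)

d = cohens_d(control, experimental)
r = d / math.sqrt(d ** 2 + 4)  # d-to-r conversion assuming equal group sizes
print(f"d = {d:.2f}, r = {r:.2f}")
```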

Even a well-designed, conscientiously-analyzed experiment with a large effect size can result in a false positive, though. To help filter out false positives and false negatives, scientists use meta-analysis. In a typical meta-analysis, the unit of analysis is not a given participant’s responses but the results of a given study. Different studies are treated, as it were, as different “people” submitting data. This is why meta-analysis deserves its name: it’s an analysis of analyses. The main point of this kind of approach to the empirical evidence is that it gives us a more comprehensive picture of all of the relevant research, rather than focusing on a few dramatic (and sometimes atypical) results.

In recent years, many famous and provocative results have failed to replicate. One researcher or team of researchers reports a finding, but then when they or other researchers do the same thing with a new group of participants, a different result obtains. If you just know that some studies report significant results and some don’t, it’s very hard to decide how to interpret the literature. Meta-analysis goes beyond “some studies say X, but some studies say not-X.” It combines the results of all relevant published (and sometimes unpublished) investigations of the same phenomenon (by comparing their p-values, their effect sizes, and various other things) to arrive at a more accurate and precise estimate of the direction and magnitude of the effect, if any. Below, I report some relevant results from meta-analyses of studies of implicit and explicit bias.
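
One common pooling technique, shown here only as an illustrative sketch on invented study results, is a fixed-effect, inverse-variance-weighted average of effect sizes:

```python
# A minimal sketch of fixed-effect meta-analysis: studies are weighted by the
# inverse of their squared standard errors, so more precise studies count more.
import math

# (effect size, standard error) pairs for hypothetical studies
studies = [(0.30, 0.10), (0.15, 0.08), (0.45, 0.20), (0.22, 0.05)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.2f}, 95% CI roughly +/- {1.96 * pooled_se:.2f}")
```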

 

3.3 Explicit attitudes, implicit attitudes, and behavior

 

Thought guides action. We can use three main kinds of tests to figure out what people think: explicit, implicit, and behavioral. An explicit test just asks people what they think. For instance, mention a person, group of persons, topic, or whatever, and ask them to report their attitudes. Their responses are a kind of action – one that’s directly related to what they think. Scientists use this method, and laypeople do it all the time. Open as it is to whatever the respondent has to say, this is the most information-rich explicit test of someone’s attitudes. But it’s hard to aggregate responses to open-ended questions. The researcher needs to code responses in some way so they are comparable across participants. This is resource-intensive in terms of both time and cost. Moreover, coding responses introduces its own biases and reduces the richness of the information drastically.

A different way to measure attitudes explicitly is to ask participants not to provide content directly but to agree or disagree with pre-crafted statements designed by the researcher to elicit participants’ attitudes. As you can imagine, some measures of explicit attitudes will be more reliable than others. For instance, some people are jerks. If you ask a non-jerk whether she’s a jerk, she’ll probably say no. If you ask a jerk whether he’s a jerk, he’ll probably say no too. By contrast, some people are left-handed. At least in contemporary Western societies, there’s no serious prejudice against the left-handed, so if you ask someone whether they’re left-handed you can probably treat their response as accurate.

When explicit measures of attitudes are likely to fail (and even when they aren’t), it’s useful to approach things from a different angle. One of the potential sources of bias in explicit measures of attitudes is mediated by the time that participants have to deliberate about and revise their answers. If I’m prejudiced and you ask me whether I’m prejudiced, I can pause to ask myself whether I want to admit what I really think, how you might react, how someone who overhears me might react, whether I’ll feel ashamed of myself if I’m honest, and so on. Of course, letting people deliberate about what they really think can also make their answers more accurate. If I ask you a question you’ve never considered before, it’s natural to think that I’ll get a better response by giving you some time to mull it over. But when the mulling is likely to introduce uncontrollable biases, it can be useful to supplement the investigation by forcing people to respond quickly – within limits, the quicker the better. The idea here is that, under sufficient time pressure, you won’t have the chance to revise your answer in a self-serving or socially acceptable direction – or at least that you’ll do it less uniformly and successfully. This technique dates back at least to Freud’s notion of free association, but contemporary psychologists have made it more rigorous and quantified.

In 1998, Greenwald, McGhee, & Schwartz published the first paper to use an implicit association test (IAT) to investigate attitudes in this way. An IAT measures the strength of associations between contrasted concepts by observing participants’ reaction times (also known as latencies) in a computerized categorization task. The basic idea is this: if you associate X more closely with A than with B, you’ll be quicker to categorize something as an example of X-or-A than an example of X-or-B. The details of the test are a bit more complicated. Here’s how Greenwald et al (2009) summarize them in their recent meta-analysis:

 

In an initial block of trials, exemplars of two contrasted concepts (e.g., face images for the races Black and White) appear on a screen and subjects rapidly classify them by pressing one of two keys (for example, an e key for Black and I for White). Next, exemplars of another pair of contrasted concepts (for example, words representing positive and negative valence) are also classified using the same two keys. In a first combined task, exemplars of all four categories are classified, with each assigned to the same key as in the initial two blocks (e.g., e for Black or positive and I for White or negative). In a second combined task, a complementary pairing is used (i.e., e for White or positive and I for Black or negative). […] The difference in average latency between the two combined tasks provides the basis for the IAT measure. For example, faster responses for the {Black+positive/White+negative} task than for the {White+positive/Black+negative} task indicate a stronger association of Black than of White with positive valence.
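
To give a rough feel for how such latencies become a single score, here is a deliberately simplified sketch on invented reaction times; the published scoring algorithm (the “D” measure) involves further steps, such as error penalties and latency trimming, that are omitted here:

```python
# A simplified sketch of the idea behind IAT scoring: compare average response
# latencies (in ms) between the two combined tasks and scale the difference by
# overall variability. The latencies below are invented for illustration.
import statistics

combined_task_1 = [612, 645, 590, 630, 605, 655, 620, 598]  # e.g., {White+positive / Black+negative}
combined_task_2 = [710, 742, 688, 725, 699, 760, 715, 705]  # e.g., {Black+positive / White+negative}

mean_difference = statistics.mean(combined_task_2) - statistics.mean(combined_task_1)
pooled_sd = statistics.stdev(combined_task_1 + combined_task_2)
score = mean_difference / pooled_sd

print(f"latency difference = {mean_difference:.0f} ms, scaled score = {score:.2f}")
```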

 

There are other implicit measures of attitudes, but the IAT has been studied and documented most extensively. Here are a few of the more important and well-validated results, based on two recent meta-analyses:

1)    Explicit measures of attitudes and implicit measures of attitudes positively correlate. The strength of the correlation ranges from roughly .011 (very low) to .471 (very high). The correlation tends to be higher for attitudes about consumer goods (e.g., brand preference) and group attitudes (e.g., preference over racial and ethnic groups), lower for gender stereotypes and self-concept (Hofmann et al. 2005, p. 1377).

2)    IATs are better at measuring affective or emotional attitudes than cognitive attitudes (Hofmann et al. 2005, p. 1377).

3)    IATs predict relevant behavior better than explicit measures in the domains of race/ethnicity and socio-economic status but worse in the domains of gender, consumer preferences, political preferences, and personality traits (Greenwald et al. 2009, p. 24).

4)    IATs are better predictors of behavior especially when self-report measures “might be impaired in socially sensitive domains” (Greenwald et al. 2009, p. 25).

Together, these results suggest several conclusions. First, explicit and implicit measures of attitudes correlate. This is both a blessing and a curse. If the measures correlated too highly (say, above .80), there’d be no reason to prefer one to the other, nor even to treat them as measures of distinct constructs. If they didn’t correlate at all or correlated negatively, it would be unclear whether they were actually measures of attitudes rather than some other construct or (even worse) nothing.[14] Second, since both explicit and implicit measures of attitudes predict behavior, it seems that implicit measures of attitudes contribute information over and above what one learns from an explicit measure, especially on socially sensitive topics. Third, implicit measures are especially suited to domains in which emotions are salient and influential.

With this in mind, it’s plausible to conclude that we can learn from implicit measures things about people’s dispositional beliefs, desires, and emotions that they themselves would not and perhaps could not report explicitly. What things? In the context of race, it seems that even people who explicitly disavow racist attitudes, even people who are themselves victims of negative racial stereotypes, may nevertheless harbor negative implicit attitudes towards blacks, among others. For instance, American undergraduate students are more likely and quicker to “shoot” unarmed black men in a computer simulation (Correll et al. 2002; Greenwald, Oakes, & Hoffman 2003). This may partially explain cases like those of Diallo and Bell, which I described at the start of this chapter. It may also partially explain the fact that, in recent decades in the United States, the rate at which blacks were killed by police has been between 300% and 700% higher than the rate at which whites were killed by police (Brown & Langan 2001). In the context of gender, it may also help to partially explain the systematic differences in compensation between men and women cited above.

 

4 Philosophical implications of implicit bias

 

What does any of this mean for the philosophical understanding of responsibility? Before you read further, I suggest taking a few of the IATs available at www.implicit.harvard.edu. This will help you to contextualize the discussion. When I took these tests, I found that I had a strong implicit preference for whites over blacks, a strong implicit preference for able-bodied people over disabled people, a strong implicit preference for straight people over gay people, and a weak implicit association of men with careers and women with home life. As you might have guessed, I explicitly reject all of these attitudes. I’d like to be the sort of person who doesn’t have race-based preferences, who doesn’t think that the able-bodied are preferable to the disabled, who likes and treats homosexuals (and other sexual and gender minorities) the same as heterosexuals, and who treats women who have careers and men who devote themselves to home life with the same degree of respect as men who have careers and women who devote themselves to home life. The meta-analyses discussed above suggest that, at least sometimes, I don’t live up to these standards. Perhaps you were disappointed by your own IAT results, too. Even if you weren’t, perhaps you’re now less inclined to trust me.

 

4.1 Implications for agency and reflexivity

 

Effective agency involves pursuing your goals successfully. Agency is thus undermined to the extent that your own behavior and dispositions make it harder for you to achieve those goals. For instance, suppose you want to be healthy, so you decide to go on daily jogs. Unfortunately, you live in an area with immense amounts of pollen or other allergens in the air. Every time you exercise, you end up wheezing and sneezing, and you spend the next day in bed suffering from flu-like symptoms. When you’re bedridden, you get no exercise and often find yourself snacking on ice cream and nachos. Your own attempts to improve your health undermine your health. It would be foolish to ignore this and just keep going on jogs when you recover. If you’re serious about your goal of becoming healthy, effective agency means investigating a different way to exercise and implementing it. In other words, when you realize that what comes naturally to you takes you away from your goal rather than towards it, the smart response is to acquire knowledge and use it.

If this sounds familiar, that’s because it maps directly onto the knowledge and control conditions for responsibility discussed above. A moral agent who explicitly rejects racist, sexist, and ableist attitudes but who learns that he might harbor implicit biases against these groups faces a choice, whether he likes it or not. If he’s serious about his rejection of these biases, he can exercise effective agency by investigating how to combat his biases and putting the knowledge he acquires into practice. Otherwise, he’s not serious about rejecting bias. He might pay lip service to it. He might engage in a variety of self-deceptive psychological acrobatics. But the only responsible way to handle the unwelcome news about his own biases is to find out how severe they are, figure out some strategies for obviating or overcoming them, and implement those strategies. As he proceeds with this strategy, he may end up chimneying between the knowledge and the control conditions on responsibility: every time he learns something new (about himself, about how to manage his biases, about the consequences of his biases when he doesn’t manage them), he acquires a responsibility to take further control, and every time he takes control over a new aspect of himself or his situation, he’s put in a position where he can learn even more.

In the meantime, how should he feel about himself, his attitudes, and his behavior? How should others feel about him, his attitudes, and his behavior? Some of the more prominent and early philosophical discussions of implicit attitudes argued on pragmatic grounds against the idea that “acknowledging that one is biased means declaring oneself to be one of those bad racist or sexist people” (Saul forthcoming).[15] Why not? Why give him the benefit of the doubt? There are several answers, based on pragmatic, epistemic, and control considerations.

Pragmatically, it may do him no good in reforming his ways to think of himself as racist, sexist, and so on. As I argue in the chapter on virtue below, self-concept is often self-confirming: if I think of myself as an X, I’m more likely to act like an X than I would be otherwise. For this reason, it’s dangerous to think of oneself as racist, sexist, ableist, and so on. Likewise, it may not help others to convince him to reform his ways if they accuse him of being racist, sexist, and so on. As I also argue in the chapter on virtue, such attributions also tend to be self-confirming. One might worry, though, that these considerations cut no ice. Sure, it might not be useful for him to think of himself or for others to think of him as a biased person, but that doesn’t indicate one way or the other whether he is a biased person. Maybe he needs to lie to himself to escape his bias. Maybe other people need to lie to him to avoid putting him on the defensive. Fine. But that doesn’t make his or their lies any more true.

Perhaps, instead of thinking of himself as a racist or sexist person, he should think of himself as someone who strives to be fair to targets of negative stereotypes but who suffers, in his human, all-too-human, way from various biases. Perhaps, instead of thinking of him as a racist or sexist person, other people should think of him as someone who strives to be fair but who suffers, in human, all-too-human, ways from various biases. These attitudes have the benefit of ascribing good will (or at least lack of ill will) while recognizing serious defects. It’s unclear whether they’re pragmatically worse than pretending there’s nothing wrong; they might even be better. They do, though, put salutary emphasis on the responsibility he must take to acquire both knowledge of and control over his biases.

Epistemically, there’s a little more wiggle room. As I explained above, there are different kinds of bias. Someone who embodies an explicit bias knows, at least to some extent, where she stands and what she’s inclined to think, feel, and do. Someone who embodies an implicit bias doesn’t – at least not directly. Suppose a boss has, but is completely unaware of, a subtle implicit bias against his women employees. Like me when I don’t know that I’m systematically tailgating, he has no reason to think he might be biased. He might even explicitly reject misogynistic stereotypes. But when it comes time to make a hiring or promotion decision, he acts like the boss described in table 2 above. Along the same lines, suppose that a law enforcement officer has, but is completely unaware of, a subtle implicit bias against black people. Like me when I don’t know that I’m systematically tailgating, she has no reason to think she might be biased. She might even be black herself and reject racial stereotypes. But when it comes time to make a flash decision about a passing civilian or a potentially dangerous encounter, she acts like the police described in table 3.

Arguably, these people fail to meet the knowledge conditions on responsibility, and so should not be held responsible for their bias. As noted above, action is always under a description. The boss wouldn’t describe his action as discrimination; the police officer wouldn’t describe her action as discrimination. Indeed, they’d both reject that description were it suggested to them. Is their ignorance culpable, though? It’s hard to say precisely when ignorance becomes culpable, but arguably the ignorance displayed here is innocent.

Things get interesting if we vary the cases slightly. Suppose that the boss does a survey of his employees and notices that the women make consistently less than the men. Suppose that the police officer’s supervisor tabulates her likelihood of initiating interactions with people of different races and her likelihood of escalating to violence given that she’s already interacting with them. Now they have some evidence that they might be biased. Of course, it’s not decisive (empirical evidence never is), but it is suggestive. Arguably, they now acquire a duty to investigate their own dispositions: the sort of chimneying back and forth between the epistemic and the practical that I described above kicks in. If they end up concluding that they are indeed biased, they then acquire a responsibility to systematically correct that bias. Even if their self-examination is inconclusive, they arguably acquire a responsibility to obviate their own potential biases.
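To make the evidential step concrete, here is a minimal sketch, in Python, of the kind of tabulation the officer’s supervisor might run. Everything in it (the group labels, the field names, the records themselves) is invented for illustration; the point is only that a simple summary like this turns an invisible pattern into evidence.

from collections import defaultdict

# Hypothetical incident records: (group, initiated_interaction, escalated_to_force).
incident_log = [
    ("group_a", True, False),
    ("group_a", True, True),
    ("group_a", True, True),
    ("group_b", True, False),
    ("group_b", False, False),
    ("group_b", True, False),
]

initiated = defaultdict(int)
escalated = defaultdict(int)
for group, was_initiated, was_escalated in incident_log:
    if was_initiated:
        initiated[group] += 1
        if was_escalated:
            escalated[group] += 1

# Report how often interactions are initiated with each group and how often,
# having been initiated, they escalate.
for group in sorted(initiated):
    rate = escalated[group] / initiated[group]
    print(f"{group}: {initiated[group]} interactions initiated, "
          f"escalation rate given interaction = {rate:.0%}")

Once such a summary exists, the officer can no longer plead the kind of innocent ignorance described two paragraphs back.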

Things get more interesting if we vary the cases further. Suppose that the boss reads a newspaper headline about implicit bias against women in the workplace, or that a friend sends him a link to an academic article on the topic. Suppose that the police officer reads a newspaper headline about implicit bias against racial minorities in law enforcement, or that her supervisor mentions some relevant research. Simply knowing that people in general have these tendencies doesn’t guarantee that they themselves have them, but it certainly suggests that they might. Now they have even stronger evidence that they might be biased. It seems even more likely, then, that they acquire a duty to investigate their own dispositions, which could lead through chimneying to a responsibility to control their actions. In addition, they probably also acquire a responsibility to systematically correct their own potential biases even if they can’t confirm that they have them.

Things get even more interesting if we vary the cases one more time. Suppose that the boss and the police officer go to www.implicit.harvard.edu, take the relevant IATs, and discover that they do harbor implicit biases. Chagrined, they try again, with the same result. Now they have very strong evidence indeed. Arguably, they now know that they are biased in ways that they explicitly reject, so appeal to the knowledge condition does them no good. They’re like me when I realize for sure that I’m disposed to tailgate while driving down the highway. At this point, if they do nothing about their own biases, they’re like someone who drives around knowing that he’s tailgating people. Unless something else absolves them, they’re culpably responsible for their future biased behavior.

When it comes to control, there’s again some wiggle room. Consider the hardest cases first: suppose that I know that I harbor implicit biases and want to correct for them. One might worry that, even though the knowledge condition is met, I nevertheless lack control. As Saul (forthcoming) says, “Even once [people] become aware that they are likely to have implicit biases, they do not instantly become able to control their biases, and so they should not be blamed for them.” As I pointed out above, there are many different conceptions of control. Saul may be right that people don’t “instantly” gain control over their biases, which indicates that proximal notions of control may not be particularly helpful in this context.

That doesn’t mean, though, that more distal notions of control are inapplicable. For instance, Annas’s (2011) notion of skilled control may be relevant. Skills take time and deliberate practice to acquire, and constant renewal to maintain. Imagine that I agree to do the cooking for my family. Presumably, I need to acquire enough skill in preparing food that I don’t poison my wife (at the very least!). This doesn’t mean that I automatically acquire the skills of the Iron Chef. It means that I now need to devote myself to learning how to store and prepare food safely. In the same way, if I want to go on thinking of myself as a non-racist, non-sexist, non-ableist person after I take the relevant IATs and discover that I harbor implicit biases, I don’t automatically acquire the skills of a fair person but do need to devote myself to learning how to act in an unbiased way to the extent possible. It’s beyond the scope of this book to delve into the vast literature on skills and skill-acquisition, but there do seem to be systematic ways to acquire the relevant skills. Getting a friend to confront me when I might be acting in a biased way seems to help (Czopp et al. 2006), as does my confronting others when they seem to be biased (Rasinski et al. 2013). At the moment, we simply don’t know how well the skill model of control works in this context, but it does seem to be worth a try.

Another, perhaps even more tractable, notion of control in the context of overcoming implicit bias is Clark’s (2007) notion of ecological control.[16] Instead of changing myself (narrowly conceived), I can take control by selecting or designing my environment. Research into the controllability of implicit biases is only in its early stages, but there are already some useful suggestions available. For instance, I could commit myself to not trusting my gut when making important decisions. Of course, just deciding not to trust my gut doesn’t guarantee that I will blunt my bias, so instead of trying to combat my biases directly, I could make efforts to ensure that they’re not triggered. For example, when deciding whom to hire, I could ensure that applications were anonymized in such a way that I couldn’t determine a particular applicant’s gender, race, or disability status.
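Here is a minimal sketch, in Python, of what such anonymization might look like. The field names are invented for illustration rather than drawn from any real hiring system; the design point is that identifying information is stripped before an application ever reaches the reviewer, so the reviewer’s gut never gets the chance to be triggered.

# Fields that could reveal an applicant's gender, race, or disability status.
IDENTIFYING_FIELDS = {"name", "gender", "race", "disability_status", "photo"}

def anonymize(application: dict) -> dict:
    """Return a copy of an application with identifying fields stripped out."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

application = {
    "name": "A. Candidate",
    "gender": "F",
    "disability_status": "none disclosed",
    "years_experience": 7,
    "work_sample_score": 88,
}

# The reviewer only ever sees the job-relevant fields.
print(anonymize(application))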

At this point it’s helpful to distinguish again between the two recipes for disaster identified above. When it comes to slow decisions that have the potential to compound biases, I can use techniques like anonymization. I can also force myself to use rubrics rather than forming holistic judgments. I can implement decision procedures that require me always to consider reasons for and reasons against a given course of action (e.g., hiring or promoting someone).
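For illustration, here is a minimal sketch, again in Python, of a rubric-based decision procedure. The criteria, weights, and scores are invented; any real rubric would be tailored to the decision at hand. The procedure refuses to return a verdict unless reasons both for and against have been recorded, which forces exactly the kind of deliberation that holistic gut judgments skip.

# Invented rubric criteria and weights; scores are assumed to be on a 0-10 scale.
RUBRIC_WEIGHTS = {"experience": 0.4, "work_sample": 0.4, "references": 0.2}

def evaluate(scores: dict, reasons_for: list, reasons_against: list) -> float:
    """Weighted rubric score; refuses to decide without reasons on both sides."""
    if not reasons_for or not reasons_against:
        raise ValueError("Record at least one reason for AND one reason against.")
    return sum(weight * scores[criterion]
               for criterion, weight in RUBRIC_WEIGHTS.items())

total = evaluate(
    scores={"experience": 7, "work_sample": 9, "references": 8},
    reasons_for=["strong work sample"],
    reasons_against=["limited supervisory experience"],
)
print(f"Weighted rubric score: {total:.1f}")  # 8.0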

If these techniques don’t suffice, I could build a counter-bias into my own decision-making, giving a bonus at the end of any evaluation to people who belong to groups I know myself to be biased against. As Aristotle puts it in the Nicomachean Ethics, if you’re trying to straighten a bent stick, it’s often best to bend it too far in the other direction. One might worry that this technique would confer an unfair advantage. It’s important to emphasize how wrong-headed this worry is. Compare it to the tailgating example. If I know that I have a tendency to underestimate my distance from vehicles in front of me, I should increase my following distance. Will I sometimes introduce even more distance than I need? Probably. Nothing’s perfect. We’re talking here about a policy that aims to hit the mark as often and closely as possible. By the same token, then, if I know that I’m disposed to express implicit biases against a particular group, I need to adjust my generic way of interacting with members of that group so that, overall, my behavior hits the mark as often and closely as possible. Will I sometimes treat members of that group better than they might deserve? Sure.[17] If I’m treating members of a group fairly in general, then it’s pretty much guaranteed that I’ll sometimes treat some of them better than they deserve and sometimes treat some of them worse than they deserve. Since implicit biases are, by definition, hard to detect while they are influencing my behavior, perhaps the best I can do is to introduce a systematic counter-bias.
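A minimal sketch of such a counter-bias, once more in Python and with invented numbers (in practice, the size of the correction would have to be calibrated to whatever evidence I have about the size of my own bias):

# A fixed bonus applied after the evaluation is complete, analogous to adding
# a fixed margin to my following distance on the highway.
COUNTER_BIAS_BONUS = 0.5  # invented; ideally calibrated to the estimated bias

def corrected_score(raw_score: float, in_target_group: bool) -> float:
    """Add the corrective bonus for members of a group I know I'm biased against."""
    return raw_score + (COUNTER_BIAS_BONUS if in_target_group else 0.0)

print(corrected_score(7.5, in_target_group=True))   # 8.0
print(corrected_score(7.5, in_target_group=False))  # 7.5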

What about low-probability, high-stakes decisions – the other recipe for disaster? Deciding not to trust one’s gut doesn’t work here, since these decisions are often made quickly (e.g., police reacting to an ambiguously threatening situation). Cops can’t just avoid situations in which they have to make decisions about whether someone poses a threat. Anonymizing doesn’t work. Nor do rubrics. Implementing a counter-bias after making a decision is out of the question.

It might seem that, in this second kind of situation, it’s impossible to counteract bias. I think not. One way of seizing distal control is to ensure that it’s harder to make snap decisions with potentially disastrous consequences. For instance, don’t arm most police. Don’t allow public safety officers on college campuses to carry guns. Put double- or triple-safeties on pistols. Don’t allow anyone to possess – let alone carry – high-capacity weapons.

 

4.2 Implications for sociality and temporality

 

Taking even more distal control of this sort of situation may require solutions that cannot be implemented in the short term by a single agent. For instance, it’s likely that the prevalence of some implicit biases is explained by the ways different groups are treated in the media. If members of a group typically make the news only when they do something strange, bad, or evil, people are likely to acquire implicit biases against members of that group. If members of a group typically make the news for their accomplishments and positive qualities, people are likely to acquire implicit biases in favor of members of that group.

To handle such biases at the individual level, one could perhaps disconnect from media. But that’s a way of combating bias with ignorance. More helpfully, one could disconnect from especially distorting media, such as Fox News and other media controlled by Rupert Murdoch. Alternatively, one could get by with a little help from one’s friends in the media if they committed to covering different groups more equitably.

At this point, I imagine some of my readers are vociferously objecting: “Don’t blame Fox for their coverage! They’re just reporting the news!” This is a facile objection. In this chapter, I described some of the ways in which implicit biases lead people to act and judge contrary to their own values. If the media cause this, and if we now know or at least have good reason to believe that they do, then the media acquire a new responsibility. It may not be legally enforceable, but, after all, we already decided that we were talking about moral rather than legal responsibility. And if a media outlet presents itself as morally responsible, then, to be credible as a corporate agent, it must at least attempt to live up to those standards.

Compare this with environmental pollution. Before people realized that industrial activity released various pollutants that harmed humans and other organisms, it made little sense to hold them responsible for their polluting activities. Once we realized that they were hurting people (often enough, themselves included), it made plenty of sense. Before people knew about implicit biases and the potential influence of media consumption on people’s implicit biases, it made little sense to hold them responsible for polluting people’s minds. Now that we’ve started to realize that the media harm people (both those in whom they instill bias and, more importantly, those against whom those biases are enacted), it makes plenty of sense.

 

4.3 Implications for patiency

 

The implications of the theories of responsibility and the empirical work on implicit bias for patiency mirror those for agency. What should the victims of implicit bias do? What should they feel? What strategies might they implement? Arguably, though they may be understandably and rightly incensed by their own mistreatment, the most effective response is not to hold ignorant perpetrators of implicit bias directly responsible but to embark on a project of helping them to chimney their way up to taking responsibility for their own biases.

 

5 Future directions in the moral psychology of implicit bias and responsibility

 

The field of implicit bias is relatively young. We don’t yet know exactly how such biases work, what causes them, what prevents them, how prevalent they are in and toward various groups, and so on. We’re also in the early stages of figuring out how to cope with and correct them in ourselves and others. Further empirical research may help with these questions.

In addition, we’re also at the very earliest stages of learning how to cope with questions of responsibility when it comes to implicit bias. One thing should be clear at this point: the more evidence you have that you are or might be biased, the more responsibility you have to trace the exact outlines of your own biases (recall the famous Apollonian imperative: “Know thyself!”) and develop strategies and tactics for counteracting them.

The question of transitive responsibility is harder to resolve. If, through behaviors that come naturally to me, I tend to bias others in objectionable ways, who is responsible for their behavior? On the one hand, they’re the ones acting in biased ways. Moreover, they presumably embody their biases internally in some way. Nevertheless, they acquired their biases through my influence, and those biases might dissipate at least somewhat if not for my continued malignant influence. Perhaps our notion of responsibility needs to be revised.

 

[1] I here follow Washington & Kelly (forthcoming).

[2] For a careful approach to the question, see “Moral Mindfulness” by Peggy DesAutels (2004).

[3] I say “in a sense,” because there is another sense in which she is morally responsible. Arguably, she has a responsibility to herself (and perhaps her family and friends) to maintain her health, and so she should be praised rather than blamed for stealing the bread.

[4] I am here indebted to Dan Kelly.

[5] A related notion of control is sometimes referred to as “libertarian free will.” I will not discuss the debate among libertarians, hard determinists, and soft determinists (compatibilists) in this book.

[6] Further such connections are explored in Morton (2013).

[7] Note that, according to these characterizations, bias isn’t necessarily bad: the notion of bias I’m working with in this chapter is statistical. In some cases, it might be good to be biased for Xs. For instance, it’s plausible to think that a small to substantial bias for one’s family and friends – at least in some decision contexts – is not just acceptable but even laudable. Likewise, it’s plausible to think that a small to substantial bias against jerks and free-loaders – at least in some decision contexts – is not just acceptable but even laudable.

[8] It should be clear that the precise number of interactions and the precise probabilities are irrelevant. It should also be clear that there are other ways that implicit biases can be harmful; these are meant to be two of the more interesting ones.

[9] The equation, in case you want to explore things further, is: (salary in year n) = (initial salary) × (1 + annual percentage increase)^n.
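To take hypothetical numbers: a $40,000 starting salary with 4% annual raises grows to $40,000 × (1.04)^20 ≈ $87,600 after twenty years, while the same salary with 3% raises grows only to about $72,200, so a one-percentage-point difference in raises compounds into a gap of more than $15,000 per year.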

[10] Also, as time goes by, those facing bias (and perhaps even the fairly-treated person who sees others receiving an unfair advantage) might curtail their effort and involvement out of frustration, anger, resentment, or some other understandable emotion. In such an eventuality, their boss could in some sense justify giving them smaller raises, though the boss would be making a decision based on behavior that was induced by his own unfair treatment in the first place. Feedback loops like this are discussed in more detail in chapter 4.

[11] This is an extremely simplified model. Just one way of making it more complicated and more accurate would be to note that not everyone has the same number of potentials for interaction with law enforcement. Police patrol certain neighborhoods more aggressively than others, leading to more opportunities for interaction in these neighborhoods.

[12] Other common measures of effect size include r, η², and partial η².

[13] The rule of thumb Richard, Bond, & Stokes-Zoota suggest (which is only a rule of thumb) is that an effect size of .10 indicates a small effect, .20 a medium effect, and .30 or higher a large effect. Before their meta-meta-analysis, Cohen’s (1988) suggestion was to treat .10 as small, .30 as medium, and .50 as large.

[14] For more on this, look into the literature on convergent validity and divergent validity.

[15] For critical discussion of this claim, see Besser-Jones (2008), Brownstein & Saul (forthcoming), Holroyd (2012), Holroyd & Kelly (forthcoming), and Washington & Kelly (forthcoming).

[16] I am here indebted to Washington & Kelly (forthcoming).

[17] I leave aside here the very important and fraught question of whether I ought to try to counteract not only my own biases but other people’s. It should be obvious, at the very least, that, to the extent possible, I shouldn’t engage in patterns of behavior that compound others’ biases.

 
