Nori on Philosophers (photoshop edition): Nietzsche / “Nietzsche’s Socio-Moral Psychology” now under contract with Cambridge

My next research monograph, Nietzsche’s Socio-Moral Psychology, is under contract with Cambridge University Press.  Cover idea…

 


Extended Prolepsis 5: Abilism & Epistemic Dependence

The final objection to be considered comes from John Turri (current volume) and Duncan Pritchard (forthcoming).[1]  Virtue reliabilism as it has been discussed thus far is the view that knowledge is true belief that is due to a reliable disposition of the cognitive agent.  Though we find it convenient to say that some dispositions are reliable and others unreliable, cognitive dispositions differ in their reliability in a gradable rather than an absolute way.  We judge a belief to be (categorically) reliably produced if it is produced (comparatively) reliably enough.  Another way of putting this is that knowledge is true belief that manifests cognitive ability, and the level of ability in question comes on a sliding scale.  We judge a belief to be (categorically) a manifestation of ability if it is (comparatively) a manifestation of enough ability. (Or enough of a manifestation of ability; the two can come apart.) Yet another way of putting this is that knowledge is true belief that is the product of cognitive agency, and that the level of cognitive agency in question comes on a sliding scale.  We judge a belief to be (categorically) a product of cognitive agency if it is (comparatively) a manifestation of enough cognitive agency. (Or enough of a manifestation of cognitive agency; again, the two can come apart.) Reliability, cognitive ability, cognitive agency – a rose by any other name would smell as sweet.  The point is that a threshold needs to be crossed before we are willing to say that someone’s true belief counts as knowledge.

Although some of the details of their accounts differ, Pritchard and Turri both argue that the key to saving (or replacing) virtue reliabilism is lowering the threshold.  For Turri (current volume), knowledge is to be defined as a “true belief manifesting cognitive ability,” or, more fully, “approximately true belief manifesting cognitive power.”  Cognitive ability or power in turn is defined thus (following Doris 2002, p. 19):

If a person possesses a cognitive ability to detect the truth (of a certain sort when in certain conditions), then when she exercises that ability and forms a belief (on relevant matters and in relevant conditions), she will form a true belief at a rate exceeding chance.[2]

From here, the rescue of virtue reliabilism is straightforward: clearly when people use heuristics, they exercise some degree of cognitive ability.  The recognition heuristic is better than chance, after all.  So, when they get it right by using the recognition heuristic (or any other heuristic that works better than chance), they know what they truly believe.  Note that this is a significant lowering of the threshold.  As long as the cognitive agent is better than chance, she’s good enough.  Her true belief might not be reliably produced (where reliability is understood to set a very high bar), but it is produced by a disposition that’s better than chance.  Turri (in press) has convincingly argued that at least some cases of knowledge (such as knowledge produced by explanatory reasoning) are unreliably produced.  His claim in this context is that true beliefs produced by heuristics are further examples of such knowledge.

Pritchard (forthcoming) argues in a similar way that people who arrive at true beliefs via heuristics are knowers.  For him, the key distinction is between robust and modest virtue epistemology.[3]  Robust virtue epistemology defines knowledge purely in terms of virtue.  Modest virtue epistemology, by contrast, requires virtue only as a necessary – not a sufficient – condition for knowledge.  For independent reasons, Pritchard rejects robust virtue epistemology, so he sees the fact that it is inconsistent with epistemic situationism merely as further evidence in favor of modest virtue epistemology.  But is modest virtue theory threatened as well?  Not according to Pritchard.  The further condition he adds to knowledge is epistemic dependency, which has both positive and negative aspects:

It is positive when an agent exhibits a relatively low degree of cognitive agency, and yet qualifies as having knowledge nonetheless due to factors outwith her cognitive agency, such as epistemically friendly features of the environment. […] And it is negative when an agent exhibits a high degree of cognitive agency – such that they would ordinarily count as having knowledge – and yet they lack knowledge nonetheless due to factors outwith their cognitive agency.

For instance, someone who naively asks a knowledgeable passerby for directions to a landmark can end up knowing the way to the landmark, despite exercising a relatively low degree of cognitive agency.  By contrast, even a thorough and careful investigator can be fooled by an even more thorough and clever deceiver, and hence end up with true beliefs that do not count as knowledge despite exercising a high degree of cognitive agency.  Pritchard contends that people who get things right when using heuristics or when open-minded-because-in-a-good-mood are like the person who naively asks someone for directions: despite exercising a low degree of cognitive agency, their true beliefs count as knowledge.  After all, they did exercise some cognitive agency (they used a heuristic rather than flipping a coin; they were luckily open-minded), and that was enough, given their epistemically friendly environment.  Pritchard goes so far as to say that,

in order for the situationist challenge to impact even on modest virtue epistemology it needs to demonstrate in a wide range of cases not just that the agent’s cognitive success, where it occurs, is not primarily creditable to her exercise of her cognitive abilities / intellectual virtues, but moreover that the agent’s cognitive success is not in any significant way the product of her cognitive abilities / intellectual virtues.

Like Turri, Pritchard wants to lower the threshold: as long as the dispositions that lead to true beliefs involve some degree of cognitive agency, they can give us knowledge.

I doubt that, if the threshold is set as low as Turri and Pritchard argue, my arguments would go through.  One question, then, is whether it is legitimate to lower the threshold as they suggest.[4]  In these kinds of arguments, it’s often hard to find any principled position that is also reasonable (or a reasonable position that is also principled).  I’ll try, however, to raise some doubts about the lowering of the threshold.  Consider a student taking a multiple choice test, with four potential answers per question.  As in the popular game show, Who Wants to be a Millionaire?, she has a “life line”: once during the test, she can ask the teacher to eliminate two of the four potential answers for a given question.  Suppose that she encounters a question where she has absolutely no clue which answer is right.  She uses her lifeline, reducing the number of potential answers to two, then guesses.  As it turns out, she guesses correctly.  Does it make sense to say that she knows the answer to this question?  One might object that she doesn’t believe that she got it right, since she was guessing.  Suppose further, then, that she had already decided that one of the four potential answers was wrong, and that it was the one she didn’t choose.  Now she does believe that the selected answer is correct.  Does she know?  I contend that she does not.  If this is correct, then Turri and Pritchard’s weakening of the conditions on knowledge does not work.  After all, she did manifest cognitive ability (Turri): she used a lifeline to narrow the choices.  And her success is, in a significant way, due to her exercise of cognitive agency (Pritchard).  I don’t know to what extent my intuition that she nevertheless lacks knowledge might be shared and withstand scrutiny, but it does raise concerns about the weakness of both Turri’s and Pritchard’s definitions of knowledge.  They may end up counting too many beliefs as knowledge.
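The arithmetic of the lifeline case can be made explicit with a small simulation (a toy sketch; the function name and trial count are mine, not part of the example as originally posed):

```python
import random

random.seed(0)

def lifeline_then_guess(n_options=4, trials=100_000):
    """Simulate the test-taker: the lifeline removes two wrong options,
    then she guesses between the remaining two."""
    hits = 0
    for _ in range(trials):
        correct = random.randrange(n_options)
        wrong = [o for o in range(n_options) if o != correct]
        removed = random.sample(wrong, 2)
        remaining = [o for o in range(n_options) if o not in removed]
        hits += (random.choice(remaining) == correct)
    return hits / trials

rate = lifeline_then_guess()
chance = 1 / 4
# rate comes out near 0.5, well above the one-in-four chance baseline
```

The lifeline strategy succeeds about half the time, double the chance baseline, so on a bare exceeding-chance definition it counts as an exercise of cognitive ability even though, intuitively, the resulting true belief is not knowledge.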

But suppose that I’m off base here.  There’s at least one further thing to say about their revisions to reliabilism.  In virtue ethics, it’s not uncommon to say that an action is right if it manifests at least one virtue and no vices.  In the same way, Turri and Pritchard may want to revise their accounts to say that knowledge is a true belief that manifests at least one intellectual virtue (ability, bit of agency) and no intellectual vices (infirmities).  Taking this suggestion would not just follow the analogy, but also support their solution to the so-called value problem.  The problem is this: how, if at all, is knowledge more valuable than true belief?  It’s arguably the problem discussed near the end of Plato’s Meno, and it has loomed large in recent epistemology.  For both Pritchard (XXX) and Turri (2011, 2013, inspired by Zagzebski 2009), the solution to the problem is not that knowledge is prudentially more valuable (it’s not, since the person who truly believes that p will act the same as the person who knows that p), but that knowledge is an achievement, and achievements are in general more valuable than non-achievements that result in the same state of affairs.

Consider, then, a somewhat ignorant American who successfully uses the recognition heuristic to rank the 83 most populous German cities in order of size.  Does he know that he got the order right?  He’s clearly manifesting an ability and exercising cognitive agency.  I’ve already suggested that this might not be enough to qualify his true belief as knowledge, but suppose my threshold argument is unsound.  There’s a further concern here.  Our imaginary agent manifests a cognitive ability, but in so doing he also manifests a cognitive disability.  Indeed, his ability doesn’t just coincide with but depends on his disability.  He would not be empowered to use the recognition heuristic if he were not sufficiently ignorant of German geography.

Moreover, most of the factors that make the recognition heuristic reliable, to the extent that it is reliable, are beyond the ken and control of the cognitive agent.  The strength of the ecological correlation is typically something he doesn’t know about, nor something he has any power over.  The strength of the surrogate correlation (the correlation between the mediator and his recognitional capacities) is, likewise, largely not up to him.  A few cognitive agents might know about and intentionally exploit these correlations, but most people do so automatically and unknowingly.  This suggests a further point: the recognition heuristic, if it is a source of knowledge, is an ability that is located largely outside the skin of the cognitive agent.  Both the processes that lead to a high ecological correlation and the processes that lead to a high surrogate correlation are largely external.  This suggests that, if virtue reliabilism is to cope with epistemic situationism, it must also cope with a version of the extended mind hypothesis, which I have elsewhere dubbed the extended character hypothesis (Alfano 2013, forthcoming a, forthcoming d, forthcoming e).[5]



[1] I do not have space to consider objections to my positive account of factitious virtues, but since no criticism of this account has been published, that would have to wait anyway.

[2] Though I should note that Doris says “markedly above chance,” not just “exceeding chance.”

[3] For more on this distinction, see Kallestrup & Pritchard (2013).

[4] Another avenue, which I do not explore here, is to argue that reliability is a high-fidelity rather than a low-fidelity virtue, as discussed in Character as Moral Fiction (pp. 31-34, 72-82).

[5] See also Koralus & Mascarenhas (forthcoming) on how the dialectical process of questioning and answering makes us rational, to the extent that we are.

Extended Prolepsis 4.3: Gigerenzer to the rescue, continued further

[UPDATE: I decided to add Turkey, which ended up making things look even worse for G&G.]

Perhaps stocks aren’t the best place to use the recognition heuristic.  Gigerenzer & Goldstein (1996, p. 651) refer to the cities task as their “drosophila,” so if the recognition heuristic works anywhere, it should work here.  Despite their impressive results, however, there is cause for concern about the fruit fly’s health.  As Kelman (2011) points out, the recognition heuristic may not work as well when the cities are not in North America or Western Europe.  Goldstein & Gigerenzer (2002, p. 86) somewhat implausibly claim that the fact that they’ve replicated their results for cities in the USA and Germany means that the results “stand up” in “different culture[s].”  In an attempt to see whether this is actually the case, I imitated their methodology for determining ecological correlations for some non-WEIRD countries: in addition to Germany, I looked at cities in Turkey, Argentina, Nigeria, and Thailand.  Ecological correlations reported by Goldstein & Gigerenzer (2002) were .72 for Die Zeit’s mentions of American cities and .70 for the Chicago Tribune’s mentions of German cities.  Instead of the Chicago Tribune, which has a shockingly useless web search function, I used the New York Times, limiting my search to articles published in the first decade of the 21st century.  Otherwise, I followed their methodology exactly.  I first created a list of every city in the relevant country with a population of at least 100,000.  Next, I searched the Times for articles that mentioned the city and the country by name.  I then computed the ecological correlations for each country, as well as a worldwide ecological correlation, in which all cities were included.  The initial results, along with comparative data from Goldstein & Gigerenzer (2002, p. 86), are presented in Table XXX.

Country      # of applicable cities    Ecological correlation
Germany      81                        .83
Turkey       67                        .41
Argentina    42                        .77
Thailand     11                        .98
Nigeria      73                        .86
World        280                       .19

Table XXX: Worldwide ecological correlations

A few remarks on these data are in order.  First, I replicated Goldstein & Gigerenzer’s (2002) strong ecological correlation for German cities.  Second, despite the fact that I chose countries from multiple continents, the ecological correlations remained fairly high, with the Thai ecological correlation coming in at a whopping .98.  Third, despite these impressive data, the worldwide ecological correlation was much lower, in large part because goings on in Germany receive much more coverage than those in the Middle East, Asia, Africa, and South America.

The almost absurdly high ecological correlation for Thailand, along with the low worldwide correlation, led me to delve a bit deeper into the data.  One thing that became immediately apparent was that much of the strength of these correlations is due to the top few cities’ receiving the lion’s share of media attention.  Turkish cities received 13,634 mentions, of which 3,090 went to Istanbul; Thai cities received 1,973 mentions, of which 1,510 went to Bangkok; Nigerian cities received 2,657 mentions, of which 1,150 went to Lagos; Argentine cities received 5,695 mentions, of which 3,320 went to Buenos Aires; and German cities received 172,488 mentions, of which 135,000 went to Berlin.

Could it be that these big cities carried most of the weight of the correlation?  To explore this question, I re-ran the correlations, excluding first the most populous, then the two most populous, then the three, four, and five most populous cities in each zone.  The ecological correlations did not stand up too well to this outlier-removal exercise, as illustrated in Table XXX.

Country      # of applicable cities    Ecological correlation    EC-1    EC-2    EC-3    EC-4    EC-5
Germany      81                        .83                       .64     .62     .48     .45     .32
Turkey       67                        .41                       .19     .05     .02     .03     .04
Argentina    42                        .77                       .50     .43     .44     .31     .15
Thailand     11                        .98                       -.16    -.04    .14     .35     .56
Nigeria      73                        .86                       .35     .29     .34     .32     .27
World        280                       .19                       .25     .30     .35     .38     .41

Table XXX: Worldwide ecological correlations, ex top five cities (EC-k excludes the k most populous cities in each zone)

A few more remarks are now in order.  First, it appears that, even in the German case, most of the ecological correlation is driven by the top few cities.  The same trend held for all other countries.  Second, this trend was actually reversed for the worldwide ecological correlation, presumably because the methodology removed Istanbul, Lagos, Bangkok, Ankara, and Izmir, which were covered much less than Berlin (#6 in population) despite their somewhat similar size.  Third, although the ecological correlations dissipated, in all but a few cases (Thailand EC-1 and EC-2), they remained positive, though much more modest.

One thing that should now be evident is how very sensitive the drosophila is to slight perturbations.  Is the recognition heuristic a reliable guide to which of two cities is larger?  The answer is that it depends.  It depends on whether the cities are in the same country.  It depends on whether the cities are in the USA or Western Europe, on the one hand, or the rest of the world on the other.  It depends on whether one of the cities is the most populous (or second most populous, or third…) in the entire country.  But if Goldstein & Gigerenzer (2002) are right, people do not take these caveats into account when applying the recognition heuristic.  Results like these should make us wary of accepting Fairweather & Montemayor’s (forthcoming) account of frugal virtues.

Extended Prolepsis 4.2: Gigerenzer to the Rescue, continued

[This is a continuation of this post.]

Key to Fairweather & Montemayor’s argument for frugal virtues are the claims that “heuristic reasoning implements threshold evaluations for selected criteria that exploit reliable features of task environments” (emphasis theirs) and so can be a source of knowledge “when properly selected in the right environments.”  Now that we’ve seen how complex the relations among criteria, mediators, and recognitional capacities are, achieving this goal may not seem so straightforward.

Consider an epistemic agent making an inference.  She could consciously select a criterion about which to make a heuristic inference, and certainly sometimes people do so.  This would enable her to choose a criterion that is suitably connected via environmental and social mediators to her recognitional capacities.  More often than not, though, people don’t engage in conscious selection.  Indeed, one of the cornerstones of research on heuristics is that people tend to use them automatically, not consciously.  This raises the possibility that heuristics will be used on criteria to which they are not adapted.

Next, our epistemic agent needs to select a heuristic to apply.  There are many in the heuristic toolbox.  As with the criterion, this selection can be conscious and intentional.  Gigerenzer has a lucrative consulting practice through which he designs carefully tested heuristics for decision-makers such as doctors and businesses.  By and large, however, this second selection will also be an automatic process.[1]  Thus, even if someone selects a suitable criterion, she may end up applying the wrong heuristic to it.

There is also the possibility that some of the feedback loops from recognition (or whatever other psychological capacity is used) to the criterion or the mediators may damage the accuracy or reliability of the heuristic inference.   Presumably, this is part of the explanation of the sometimes swift and unexpected changes in fashion.  As the band Tower of Power puts it, “What’s hip today, might become passé” – in part because it is hip (and recognized) today.

If these stumbling blocks can be circumvented, the frugal virtues approach would be promising.  In some instances, I’m sure they can.  Hospitals that shell out large sums to have Gigerenzer’s team design a heuristic for them are likely to have a well-selected criterion, to use the heuristic they paid for rather than one that they had been using (perhaps unconsciously) before, and to see to it that their use of the heuristic does not ricochet back on the mediators or the criterion.  But what about ordinary people making ordinary inferences?  Here the news is not so good.  In Character as Moral Fiction I described studies by Tversky and Kahneman that attempted to get people to stop using the representativeness heuristic when it was not suited to the inference at hand.  Recalcitrance ruled the day.  Goldstein & Gigerenzer (2002, pp. 81-3) report a pair of experiments in which participants could use the recognition heuristic to make inferences about the size of various German cities.  They did so 90% of the time when the heuristic was well-suited to the inference, and 92% of the time when it was ill-suited to the inference.  In other words, at least for one criterion, people were completely insensitive to evidence against the trustworthiness of the recognition heuristic.

If this sort of result holds generally – if people tend to use heuristics willy-nilly – then, even though heuristics can be a source of knowledge “when properly selected in the right environments,” they are not sources of knowledge for creatures like us.  It’s hard to get a read on this.  Life is not a controlled experiment.  But indications are not heartening.  For instance, Borges, Goldstein, Ortmann, & Gigerenzer (1999) famously found that, when the criterion was not a city’s population but the prospects of a publicly-traded company’s stock, a portfolio based on the recognition heuristic performed surprisingly well.  Over a six-month period, a portfolio based on the best-recognized stocks outperformed portfolios based on the least-recognized stocks and the market as a whole.  But don’t call your broker just yet.  Other researchers have attempted in vain to replicate this effect.  Andersson & Rakow (2007) ran four studies with seven sets of participants from all around the world, but failed to find any support for the recognition-based portfolio’s success.  They conclude that “recognition is, on average, simply a near random method of selecting stocks with respect to their profitability” (p. 36).  Likewise, Boyd (2001) attempted to replicate the effect to no avail.  This might seem unsurprising.  After all, one reason we hear about companies is that they are innovative, powerful, and profitable, but another is that they are the exact opposite.  Here’s a graph of newspaper headlines mentioning ‘AIG’, courtesy of Google Trends:


Figure XXX: Headlines mentioning ‘AIG’ over time

This should drive home the import of the domain-specificity of the recognition heuristic.  The domains are very small indeed, but our reasoning is not sensitive to this fact.  As Kelman (2011) points out, even on the cities task, the recognition heuristic delivers disappointing results when the participants are Americans and the cities aren’t in North America or Western Europe.  As of the writing of this paper, Guangfo is the 12th most populous urban zone in the world, but most Americans have never heard of it.   Results like these should make us wary of accepting Fairweather & Montemayor’s (forthcoming) account of frugal virtues.

[This last point is made more forcefully here.]



[1] Adam Morton (2000) explores the possibility of a meta-heuristic that automatically selects which first-order heuristic to apply.

Extended Prolepsis 4.1: Gigerenzer to the Rescue

[I am putting the first half of this up now, as the section has turned out to be rather longer than I’d expected when I set out.]

Thus far, I have focused on attempts to rescue responsibilism from the epistemic situationist challenge.  There may be other objections to raise, but if my arguments so far have been sound, it looks like responsibilism is still in hot water.  But there are multiple versions of virtue epistemology; one might be tempted to give up on responsibilism and opt instead for a version of reliabilism, which defines knowledge and other epistemic goods in terms not of character traits like curiosity and open-mindedness but of reliable dispositions like eyesight and memory.  Although Olin & Doris (forthcoming) are skeptical of reliabilism’s prospects across the board, my own challenge to reliabilism focuses on inferential knowledge.  In particular, I’ve argued that decades of cognitive science and psychology[1] support the hypothesis that the vast majority of human inferences are the product of unreliable heuristics, and that, hence, even when we arrive at true beliefs through heuristics, the reliabilist is not entitled to call them knowledge.

This mouthful can be cut into bite-sized chunks.  The vast majority of our inferences are directed by heuristics.[2]  The heuristics we tend to use are not reliable.  According to virtue reliabilism, true beliefs acquired through unreliable dispositions are not knowledge.[3]  Hence, reliabilists should not call most of our true inferential beliefs knowledge.  One could resist this argument by objecting that the vast majority of human inferences are not heuristic-driven.  Despite the intuitive appeal of this idea, prominent psychologists almost uniformly reject it (Bargh & Chartrand 1999; Kahneman 2011, p. 21).  It seems that the vast majority of belief- or representation-updating is automatic.  Much of that automatic activity is of course perceptual (we don’t tend to deliberate about whether to believe the testimony of the senses), but a great deal of it is also inferential.[4]

Next, one might deny that heuristics are unreliable.  This is the contention I want to address here because it is the line taken up by Fairweather & Montemayor (forthcoming) and Axtell (current volume).[5]  They draw on the work of Gerd Gigerenzer and his colleagues to argue that heuristics actually are reliable.  Indeed, they go so far as to claim that “fast and frugal heuristic reasoning often outperforms optimizing rationality” for epistemic agents such as us (forthcoming, emphasis mine).

Though it might seem that they couldn’t possibly say this while passing the red face test (or, for that matter, the incredulous stare test), their claim is actually not as controversial as it appears at first blush.  What Fairweather & Montemayor mean is not that someone using heuristic reasoning in some task often outperforms another person correctly using optimizing rationality in that same task.  Since correctly using optimizing rationality sets the standard for performance, that would be impossible.  Instead, they are making the much more modest claim that someone using heuristic reasoning in some task often outperforms another person attempting in a human-all-too-human way to use optimizing rationality.  In other words, if you try to do what an ideal inferer would do, you might end up worse off epistemically (and practically!) speaking than if you try to use a simple heuristic.  Since the gap between ideal output and actual output is much smaller for a heuristic than for an optimizing rule, it can happen that – even though the rule would be the thing to follow if you could actually pull it off – the best that can be achieved is the heuristic result.  Adam Morton argues for a similar position by pointing out that it’s a fallacy to think that

when one method is better than another, an approximation to it is better than an approximation to the other. […] One situation in which the fallacy is evident is […] where one has choices at one stage which can turn out well or badly depending on what choices one makes at a later stage.  Suppose that at Stage One you can go Right or Left and at Stage Two go Up or Down.  Right followed by Up is best, but there is reason to expect that when you get to Two you will choose Down, which is worse than anything that follows from starting with Left. […] To approximate Right-Up with Right-Down is a mistake. (2013, pp. 8-9).

The claim, then, is that comparing heuristics to optimal inference without considering performance errors (i.e., without considering how closely an actual human agent trying to use the heuristic or optimal rule in question would approximate it) is misleading.  What matters is how well an inferential strategy would work when real people try it.

I’m quite sympathetic to this line of thought, but I doubt that it gets reliabilists what they want.  For what they want, I take it, is to show that the heuristics that we actually use, as we actually use them, are reliable enough to get us inferential knowledge.  What the approximation argument establishes, however, is that the heuristics that we actually use, as we actually use them, are more reliable than would be sound inferential rules, as we would actually use them.  In other words, the argument (if it’s sound) establishes a contrastive rather than an absolute thesis: heuristics are more reliable for us than optimal rules.  But this contrastive thesis is not the desired absolute thesis: heuristics are reliable for us.

Of course, the contrastive thesis is consistent with the absolute thesis, so we need to get lost in the empirical weeds for a while to get a sense of whether the absolute thesis is true.  I will argue that it is not.  This might seem to leave me in the awkward position of arguing that, even though heuristics are not reliable, we should mostly go on using them, but that’s actually more or less what I think.[6]  I’ll do this by focusing not on the availability or representativeness heuristics (which were main themes of Character as Moral Fiction), but on the recognition heuristic.  I do this in part because I’ve already laid out my views on availability and representativeness at some length, and no one has bothered to argue that they in particular are reliable.  I also do this in part because the recognition heuristic was made famous by Gigerenzer (Goldstein & Gigerenzer 2002), and so serves as an ideal test case.

What is the recognition heuristic?  Goldstein & Gigerenzer (2002, p. 76) define it thus: “If one of two objects is recognized and the other is not, then infer that the recognized object has the higher value with respect to the criterion.”  This requires some unpacking.  First, recognition memory is distinct from richer sorts of memory.  It’s the thinnest kind of memory, and gives merely a thumbs-up or thumbs-down answer to the question, “Have I ever encountered this before?”  Next, the criterion is any variable of interest.  For instance, it could be the size of a university’s endowment, the population of a city, the number of publications of a scholar, the record sales of a band, or the prospects of a publicly traded company.  The idea behind the recognition heuristic is that, given how human cognition works and how society is structured, we’re more likely to encounter “big” things (on whatever criterion dimension) than “little” things.  If you’ve heard of a university, that’s because the university gets talked about by your friends or in the media you consume.  And your friends and the media talk about it because it’s rich (or because it does things that a poorer university couldn’t do).
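The heuristic itself is almost trivial to state in code (a minimal sketch; the function and the toy recognition set are my own illustrations):

```python
# A minimal rendering of the recognition heuristic for a pairwise
# comparison task. `recognized` stands in for thin recognition memory:
# a bare yes/no answer to "Have I ever encountered this before?"

def recognition_heuristic(a, b, recognized):
    """If exactly one of two objects is recognized, infer that it has the
    higher value on the criterion; otherwise the heuristic is silent."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # both or neither recognized: some other strategy must decide

recognized = {"Berlin", "Munich", "Hamburg"}
recognition_heuristic("Berlin", "Chemnitz", recognized)  # -> "Berlin"
recognition_heuristic("Berlin", "Munich", recognized)    # -> None
```

Note that the heuristic never consults the criterion itself; its accuracy is entirely hostage to how recognition came about.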


Figure XXX: Example of the recognition heuristic

Your friends and the media mediate your relation to the criterion.  The recognition heuristic derives its power from the correlations between the criterion and the mediator, and between the mediator and your capacity to recognize.  Generally speaking, the higher these are, the more accurate will be your use of the recognition heuristic.
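This dependence can be illustrated with a toy simulation (everything here – distributions, noise levels, sample size – is invented for illustration): a criterion value generates a mediator value with adjustable noise, recognition tracks the mediator, and the heuristic is scored on every pair where exactly one item is recognized.

```python
import numpy as np

rng = np.random.default_rng(0)

def heuristic_accuracy(noise_sd, n=500):
    """Simulate criterion -> mediator -> recognition, then score the
    recognition heuristic on pairs where exactly one item is recognized."""
    log_criterion = rng.normal(0.0, 1.0, n)                       # e.g. log city size
    log_mediator = log_criterion + rng.normal(0.0, noise_sd, n)   # e.g. log mentions
    crit = log_criterion.tolist()
    rec = (log_mediator > np.median(log_mediator)).tolist()       # top half recognized
    correct = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            if rec[i] != rec[j]:
                total += 1
                bigger = i if crit[i] > crit[j] else j
                picked = i if rec[i] else j
                correct += (picked == bigger)
    return correct / total

strong = heuristic_accuracy(noise_sd=0.3)  # tight criterion-mediator link
weak = heuristic_accuracy(noise_sd=3.0)    # loose link
# accuracy degrades as the ecological correlation weakens,
# though it stays above the 0.5 chance baseline in both cases
```

The agent contributes nothing to either correlation; turning the noise knob, which no reasoner controls, is what moves her accuracy.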

Research on the recognition heuristic is surprising and impressive.  In one rightly famous study, Goldstein & Gigerenzer (2002, p. 76) showed that Germans were much better than Americans at judging which of two large American cities was more populous.[7]  The reason for this should be plain: Americans recognized the names of many more cities, and so were unable to use the recognition heuristic as often as the more ignorant Germans.  Forced to shift for themselves, they were less accurate.  This “less-is-more” effect has been replicated many times for a variety of criteria: if the ecological and surrogate correlations are sufficiently high, inference from mere recognition tends to be more accurate than inference from imperfect knowledge.  Our use of the recognition heuristic is an example of the “ecological rationality” or “frugal virtue” that Fairweather & Montemayor (forthcoming) suggest could save reliabilism from the challenge of epistemic situationism.
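The less-is-more effect falls out of the expected-accuracy formula Goldstein & Gigerenzer (2002) use, where α is the recognition validity and β the knowledge validity (the particular values of α, β, and N below are invented for illustration):

```python
def accuracy(n, N=100, alpha=0.8, beta=0.6):
    """Expected proportion correct when n of N objects are recognized:
    alpha applies when exactly one object is recognized, beta when both
    are, and a coin flip decides when neither is."""
    pairs = N * (N - 1)
    one = 2 * n * (N - n) / pairs            # exactly one recognized
    both = n * (n - 1) / pairs               # both recognized: use knowledge
    neither = (N - n) * (N - n - 1) / pairs  # neither recognized: guess
    return one * alpha + both * beta + neither * 0.5

accuracy(100)  # all recognized: falls back to beta -> 0.6
accuracy(75)   # partial ignorance does better: the less-is-more effect
```

When α > β, accuracy peaks at an intermediate level of recognition: the agent who recognizes only some of the objects outperforms the agent who recognizes them all.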

A moment’s reflection suggests some important but easily overlooked points.  First, the recognition heuristic can be recruited only in special circumstances.  If you recognize both or neither of the objects in a pairwise comparison, then you can’t use it.  For present purposes, we can ignore the case where you recognize neither object, since no epistemologist would want to say that you have knowledge in that case.  When you recognize both, there are other heuristics you could use, such as the availability and representativeness heuristics – or the affect, disgust, take-the-best, or take-the-last heuristics, to name just a few.  I don’t have space to discuss these other heuristic strategies here, but it should be clear that the question of reliability will have different answers depending on which strategies are advocated.  To truly grapple with epistemic situationism, reliabilists will need to do a lot of background reading.

Second, as Goldstein & Gigerenzer (2002) define it, the criterion can be any gradable variable.  Instead of being the wealth of a university, for instance, it could be the multiplicative inverse of the university’s endowment.  Or it could be the number of redheads at the university.  Or it could be the proportion of Mormons at the university.  Clearly, then, the choice of criterion will partially determine the robustness of the heuristic.  In their words, recognition validity is “domain specific” (2002, p. 78).

Third, speaking of the mediator papers over the complexity of the situation.  In Goldstein & Gigerenzer’s (2002) model, the mediator is the number of times the university is mentioned in newspapers.  They realize, of course, that this is only one of many mediators.  Lots of people do not read newspapers but have heard of Harvard.  There are multiple mediators, as illustrated in Figure XXX.

Model 2

Figure XXX: The recognition heuristic with multiple mediators (Model 2)

But this model still drastically underestimates the complexity of the situation. What your friends talk about is influenced by what is talked about in newspapers and on TV.  And what’s mentioned on TV depends on what people talk about with their friends.  There are feedback mechanisms connecting each of the mediators with each of the other mediators.

Model 3

Figure XXX: The recognition heuristic with feedback among mediators (Model 3)

Even this model underestimates the complexity of the situation.  There are also feedback mechanisms running back from recognition to at least some of the mediators.  What you recognize largely determines what you talk about with your friends.  What people recognize largely determines what gets mentioned on TV and in newspapers.  Moreover, recognition also influences the criterion via a host of further mediators.  Few people make donations to universities they’ve never heard of.  Few students send applications (and eventually tuition) to schools that they (or their parents, or their friends) have never heard of.

Model 4

Figure XXX: The recognition heuristic with feedback from recognition to the mediators and the criterion (Model 4)

None of this is meant as a criticism of Goldstein & Gigerenzer (2002); as I said, this is all obvious once you think about it for a minute or two.  When we interpret their work, however, we need to be mindful of these complications.

[The remainder of this section is here and here.]



[1] Some of this evidence is beautifully presented in Kahneman’s swan song, Thinking, Fast and Slow (2011).

[2] I here understand ‘inference’ in the broad sense suggested by Henderson & Horgan (2009, p. 6), according to whom inferential processes are whatever processes form or maintain beliefs based on information.  As will become evident in the discussion below, heuristics do not fit their narrower, classical sense of inference, according to which inferential processes also explicitly represent the relevant information and are occurrently isomorphic to the relevant deductive or inductive relations of that information.  A heuristic is like a rule of inference insofar as it takes an epistemic agent from one set of representations to another set of representations, but it typically does not represent the relevant information explicitly, and it is certainly not occurrently isomorphic to inductive (let alone deductive) relations.

[3] It’s worth remarking at this point that the objection that virtue is not required for knowledge (objection 1 above) is not available to the reliabilist.  Reliabilists never say that a true belief is a candidate for knowledge if it’s the belief that someone exercising a reliable disposition would arrive at.  They all make the stronger claim that a true belief is a candidate for knowledge if it’s actually arrived at via the exercise of a reliable disposition.  In Sosa’s (2011) terminology, knowledge is apt: it’s true belief that is accurate because it is adroit, or manifests competence.

[4] Anyone who has taught a critical reasoning or introductory logic class will likely agree that getting people to make deliberative rather than automatic inferences is a difficult task.

[5] One could, I suppose, deny that reliabilists are committed to the claim that true beliefs acquired through unreliable dispositions are not knowledge, but that would be a non-starter.

[6] When engaged in science, philosophy, politics, or other serious matters, we have the luxury of slowing down and trying to use more reliable inferential practices.  But in everyday life, who has the time?

[7] Other research shows that Americans tend to be much better than Germans at judging which of two large German cities is more populous.  An important question – to which I return below – is whether the same holds for large world cities.