Extended Prolepsis 4.1: Gigerenzer to the Rescue

[I am putting the first half of this up now, as the section has turned out to be rather longer than I’d expected when I set out.]

Thus far, I have focused on attempts to rescue responsibilism from the epistemic situationist challenge.  There may be other objections to raise, but if my arguments so far have been sound, it looks like responsibilism is still in hot water.  But there are multiple versions of virtue epistemology; one might be tempted to give up on responsibilism and opt instead for a version of reliabilism, which defines knowledge and other epistemic goods in terms not of character traits like curiosity and open-mindedness but of reliable dispositions like eyesight and memory.  Although Olin & Doris (forthcoming) are skeptical of reliabilism’s prospects across the board, my own challenge to reliabilism focuses on inferential knowledge.  In particular, I’ve argued that decades of cognitive science and psychology[1] support the hypothesis that the vast majority of human inferences are the product of unreliable heuristics, and that, hence, even when we arrive at true beliefs through heuristics, the reliabilist is not entitled to call them knowledge.

This mouthful can be cut into bite-sized chunks: (1) the vast majority of our inferences are directed by heuristics;[2] (2) the heuristics we tend to use are not reliable; (3) according to virtue reliabilism, true beliefs acquired through unreliable dispositions are not knowledge;[3] hence, (4) reliabilists should not call most of our true inferential beliefs knowledge.  One could resist this argument by denying (1), objecting that the vast majority of human inferences are not heuristic-driven.  Despite the intuitive appeal of this idea, prominent psychologists almost uniformly reject it (Bargh & Chartrand 1999; Kahneman 2011, p. 21).  It seems that the vast majority of belief- or representation-updating is automatic.  Much of that automatic activity is of course perceptual (we don’t tend to deliberate about whether to believe the testimony of the senses), but a great deal of it is also inferential.[4]

Next, one might deny (2), the claim that the heuristics we tend to use are unreliable.  This is the contention I want to address here because it is the line taken up by Fairweather & Montemayor (forthcoming) and Axtell (current volume).[5]  They draw on the work of Gerd Gigerenzer and his colleagues to argue that heuristics actually are reliable.  Indeed, they go so far as to claim that “fast and frugal heuristic reasoning often outperforms optimizing rationality” for epistemic agents such as us (forthcoming, emphasis mine).

Though it might seem that they couldn’t possibly say this while passing the red face test (or, for that matter, the incredulous stare test), their claim is actually not as controversial as it appears at first blush.  What Fairweather & Montemayor mean is not that someone using heuristic reasoning in some task often outperforms another person correctly using optimizing rationality in that same task.  Since correctly using optimizing rationality sets the standard for performance, that would be impossible.  Instead, they are making the much more modest claim that someone using heuristic reasoning in some task often outperforms another person attempting in a human-all-too-human way to use optimizing rationality.  In other words, if you try to do what an ideal inferer would do, you might end up worse off epistemically (and practically!) speaking than if you try to use a simple heuristic.  Since the gap between ideal output and actual output is much smaller for a heuristic than for an optimizing rule, it can happen that – even though the rule would be the thing to follow if you could actually pull it off – the best that can be achieved is the heuristic result.  Adam Morton argues for a similar position by pointing out that it’s a fallacy to think that

when one method is better than another, an approximation to it is better than an approximation to the other. […] One situation in which the fallacy is evident is […] where one has choices at one stage which can turn out well or badly depending on what choices one makes at a later stage.  Suppose that at Stage One you can go Right or Left and at Stage Two go Up or Down.  Right followed by Up is best, but there is reason to expect that when you get to Two you will choose Down, which is worse than anything that follows from starting with Left. […] To approximate Right-Up with Right-Down is a mistake. (2013, pp. 8-9).

The claim, then, is that comparing heuristics to optimal inference without considering performance errors (i.e., without considering how closely an actual human agent trying to use the heuristic or optimal rule in question would approximate it) is misleading.  What matters is how well an inferential strategy would work when real people try it.

I’m quite sympathetic to this line of thought, but I doubt that it gets reliabilists what they want.  For what they want, I take it, is to show that the heuristics that we actually use, as we actually use them, are reliable enough to get us inferential knowledge.  What the approximation argument establishes, however, is that the heuristics that we actually use, as we actually use them, are more reliable than sound inferential rules would be, as we would actually use them.  In other words, the argument (if it’s sound) establishes a contrastive rather than an absolute thesis: heuristics are more reliable for us than optimal rules.  But this contrastive thesis is not the desired absolute thesis: heuristics are reliable for us.

Of course, the contrastive thesis is consistent with the absolute thesis, so we need to get lost in the empirical weeds for a while to get a sense of whether the absolute thesis is true.  I will argue that it is not.  This might seem to leave me in the awkward position of arguing that, even though heuristics are not reliable, we should mostly go on using them, but that’s actually more or less what I think.[6]  I’ll do this by focusing not on the availability or representativeness heuristics (which were main themes of Character as Moral Fiction), but on the recognition heuristic.  I do this in part because I’ve already laid out my views on availability and representativeness at some length, and no one has bothered to argue that they in particular are reliable.  I also do this in part because the recognition heuristic was made famous by Gigerenzer (Goldstein & Gigerenzer 2002), and so serves as an ideal test case.

What is the recognition heuristic?  Goldstein & Gigerenzer (2002, p. 76) define it thus: “If one of two objects is recognized and the other is not, then infer that the recognized object has the higher value with respect to the criterion.”  This requires some unpacking.  First, recognition memory is distinct from richer sorts of memory.  It’s the thinnest kind of memory, and gives merely a thumbs-up or thumbs-down answer to the question, “Have I ever encountered this before?”  Next, the criterion is any variable of interest.  For instance, it could be the size of a university’s endowment, the population of a city, the number of publications of a scholar, the record sales of a band, or the prospects of a publicly traded company.  The idea behind the recognition heuristic is that, given how human cognition works and how society is structured, we’re more likely to encounter “big” things (on whatever criterion dimension) than “little” things.  If you’ve heard of a university, that’s because the university gets talked about by your friends or in the media you consume.  And your friends and the media talk about it because it’s rich (or because it does things that a poorer university couldn’t do).
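To fix ideas, here is a minimal sketch of the rule in Python.  The function and the toy recognition set are my own illustration, not Goldstein & Gigerenzer’s, though the San Diego/San Antonio pair is their own famous example:

```python
def recognition_heuristic(a, b, recognized):
    """If exactly one of two objects is recognized, infer that the
    recognized one has the higher value on the criterion; otherwise
    the heuristic is silent and some other strategy must take over."""
    if (a in recognized) == (b in recognized):
        return None  # both or neither recognized: the heuristic does not apply
    return a if a in recognized else b

# Toy example: a German student who has heard of San Diego but not San Antonio.
recognized_cities = {"San Diego", "New York", "Chicago"}
print(recognition_heuristic("San Diego", "San Antonio", recognized_cities))
# -> San Diego, the answer Goldstein & Gigerenzer treat as correct
```

Note that the rule is silent when you recognize both objects or neither; I return to this limitation below.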

Figure XXX (Model 1): The recognition heuristic with a single mediator linking the criterion to recognition.

Your friends and the media mediate your relation to the criterion.  The recognition heuristic derives its power from two correlations: the correlation between the criterion and the mediator (the ecological correlation), and the correlation between the mediator and your capacity to recognize (the surrogate correlation).  Generally speaking, the higher these correlations are, the more accurate your use of the recognition heuristic will be.
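As a rough illustration of this dependence, consider the following toy simulation in Python.  Everything in it (the Gaussian noise model, the recognition threshold, the parameter values) is my own assumption rather than anything in Goldstein & Gigerenzer (2002); the point is simply that recognition validity rises as the ecological and surrogate links tighten:

```python
import random

def simulate_validity(n_objects=200, ecological_noise=1.0, surrogate_noise=1.0, seed=0):
    """Toy simulation of the criterion -> mediator -> recognition chain.
    Returns the recognition validity: the proportion of pairs in which
    exactly one object is recognized and the recognized object really
    does have the higher criterion value."""
    rng = random.Random(seed)
    criterion = [rng.gauss(0, 1) for _ in range(n_objects)]
    # The mediator (e.g., newspaper mentions) tracks the criterion imperfectly.
    mediator = [c + rng.gauss(0, ecological_noise) for c in criterion]
    # You recognize an object when its (noisy) exposure crosses a threshold.
    recognized = [m + rng.gauss(0, surrogate_noise) > 0 for m in mediator]
    hits = trials = 0
    for i in range(n_objects):
        for j in range(i + 1, n_objects):
            if recognized[i] != recognized[j]:  # the heuristic applies
                trials += 1
                bigger = i if criterion[i] > criterion[j] else j
                hits += recognized[bigger]
    return hits / trials if trials else float("nan")

print(simulate_validity(ecological_noise=0.5, surrogate_noise=0.5))  # tight links
print(simulate_validity(ecological_noise=3.0, surrogate_noise=3.0))  # loose links
```

With both noise levels low, the simulated validity should come out well above chance; crank them up, and it sinks toward 0.5.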

Research on the recognition heuristic is surprising and impressive.  In one rightly famous study, Goldstein & Gigerenzer (2002, p. 76) showed that Germans were much better than Americans at judging which of two large American cities was more populous.[7]  The reason for this should be plain: Americans recognized the names of many more cities, and so were unable to use the recognition heuristic as often as the more ignorant Germans.  Forced to shift for themselves, they were less accurate.  This “less-is-more” effect has been replicated many times for a variety of criteria: when the ecological and surrogate correlations are sufficiently high, inference from mere recognition tends to be more accurate than inference from imperfect knowledge.  Our use of the recognition heuristic is an example of the “ecological rationality” or “frugal virtue” that Fairweather & Montemayor (forthcoming) suggest could save reliabilism from the challenge of epistemic situationism.
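The less-is-more effect is not magic; it falls out of a simple expected-accuracy calculation of the sort Goldstein & Gigerenzer (2002) provide.  Here is a sketch of that calculation in Python (the particular validity values, 0.8 and 0.6, are my own illustrative assumptions): when recognition validity exceeds knowledge validity, an agent who recognizes only half the objects outperforms one who recognizes them all.

```python
def expected_accuracy(n, N, alpha, beta):
    """Expected proportion correct in pairwise comparisons when n of N
    objects are recognized, in the style of Goldstein & Gigerenzer (2002).
    alpha: recognition validity (accuracy when exactly one object is recognized)
    beta:  knowledge validity (accuracy when both objects are recognized)"""
    pairs = N * (N - 1)
    p_one = 2 * n * (N - n) / pairs         # exactly one recognized: use the heuristic
    p_both = n * (n - 1) / pairs            # both recognized: fall back on knowledge
    p_none = (N - n) * (N - n - 1) / pairs  # neither recognized: guess
    return p_one * alpha + p_both * beta + p_none * 0.5

N = 100
print([(n, round(expected_accuracy(n, N, alpha=0.8, beta=0.6), 3)) for n in (0, 50, 100)])
# -> [(0, 0.5), (50, 0.676), (100, 0.6)]: the half-ignorant agent does best
```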

A moment’s reflection suggests some important but easily overlooked points.  First, the recognition heuristic can be recruited only in special circumstances.  If you recognize both or neither of the objects in a pairwise comparison, then you can’t use it.  For present purposes, we can ignore the case where you recognize neither object, since no epistemologist would want to say that you have knowledge in that case.  And there are other heuristics you could use when you recognize both, such as the availability and representativeness heuristics (or the affect, disgust, take-the-best, or take-the-last heuristics, just to name a few).  I don’t have space to discuss these other heuristic strategies here, but it should be clear that the question of reliability will have different answers depending on which strategies are advocated.  To truly grapple with epistemic situationism, reliabilists will need to do a lot of background reading.

Second, as Goldstein & Gigerenzer (2002) define it, the criterion can be any gradable variable.  Instead of being the wealth of a university, for instance, it could be the multiplicative inverse of the endowment of the university.  Or it could be the number of redheads at the university.  Or it could be the proportion of Mormons at the university.  Clearly, then, the choice of the criterion will partially determine the reliability of the heuristic.  In their words, recognition validity is “domain specific” (2002, p. 78).
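To put this in numbers (my own toy illustration, not theirs): flipping the criterion leaves your recognition pattern untouched but reverses every pairwise comparison, so the cue’s validity flips as well.

```python
# Recognition that is an 80%-valid cue for a criterion (e.g., endowment)
# is only a 20%-valid cue for the flipped criterion (e.g., the multiplicative
# inverse of endowment), since every pairwise comparison reverses.
def flipped_validity(original_validity):
    return 1 - original_validity

print(round(flipped_validity(0.8), 2))  # -> 0.2, far worse than chance
```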

Third, speaking of “the mediator” papers over the complexity of the situation.  In Goldstein & Gigerenzer’s (2002) model, the mediator is the number of times the university is mentioned in newspapers.  They realize, of course, that this is only one of many mediators.  Lots of people do not read newspapers but have heard of Harvard.  There are multiple mediators, as illustrated in Figure XXX.

Figure XXX (Model 2): The recognition heuristic with multiple mediators between the criterion and recognition.

But this model still drastically underestimates the complexity of the situation. What your friends talk about is influenced by what is talked about in newspapers and on TV.  And what’s mentioned on TV depends on what people talk about with their friends.  There are feedback mechanisms connecting each of the mediators with each of the other mediators.

Figure XXX (Model 3): Multiple mediators, with feedback mechanisms connecting each mediator to the others.

Even this model underestimates the complexity of the situation.  There are also feedback mechanisms running back from recognition to at least some of the mediators.  What you recognize largely determines what you talk about with your friends.  What people recognize largely determines what gets mentioned on TV and in newspapers.  Moreover, recognition also influences the criterion via a host of further mediators.  Few people make donations to universities they’ve never heard of.  Few students send applications (and eventually tuition) to schools that they (or their parents, or their friends) have never heard of.

Figure XXX (Model 4): Feedback from recognition back to the mediators and, via further mediators, to the criterion itself.

None of this is meant as a criticism of Goldstein & Gigerenzer (2002); as I said, this is all obvious once you think about it for a minute or two.  When we interpret their work, however, we need to be mindful of these complications.

[The remainder of this section is here and here.]



[1] Some of this evidence is beautifully presented in Kahneman’s swan song, Thinking, Fast and Slow (2011).

[2] I here understand ‘inference’ in the broad sense suggested by Henderson & Horgan (2009, p. 6), according to whom inferential processes are whatever processes form or maintain beliefs based on information.  As will become evident in the discussion below, heuristics do not fit their narrower, classical sense of inference, according to which inferential processes also explicitly represent the relevant information and are occurrently isomorphic to the relevant deductive or inductive relations of that information.  A heuristic is like a rule of inference insofar as it takes an epistemic agent from one set of representations to another set of representations, but it typically does not represent the relevant information explicitly, and it is certainly not occurrently isomorphic to inductive (let alone deductive) relations.

[3] It’s worth remarking at this point that the objection that virtue is not required for knowledge (objection 1 above) is not available to the reliabilist.  Reliabilists never say that a true belief is a candidate for knowledge merely because it is the belief that someone exercising a reliable disposition would arrive at.  They all make the stronger demand that a candidate for knowledge actually be arrived at via the exercise of a reliable disposition.  In Sosa’s (2011) terminology, knowledge is apt: it’s true belief that is accurate because it is adroit, or manifests competence.

[4] Anyone who has taught a critical reasoning or introductory logic class will likely agree that getting people to make deliberative rather than automatic inferences is a difficult task.

[5] One could, I suppose, deny that reliabilists are committed to the claim that true beliefs acquired through unreliable dispositions are not knowledge, but that would be a non-starter.

[6] When engaged in science, philosophy, politics, or other serious matters, we have the luxury of slowing down and trying to use more reliable inferential practices.  But in everyday life, who has the time?

[7] Other research shows that Americans tend to be much better than Germans at judging which of two large German cities is more populous.  An important question – to which I return below – is whether the same holds for large world cities.