the (un)reliability of heuristics, part 2

A claim I sometimes hear in favor of the reliability of heuristics has the flavor of a transcendental argument:

Let’s grant that people often arrive at their inferential beliefs via heuristics.  It follows that heuristics must be reliable.  Furthermore, there’s good evolutionary reason for this supposition.  People who routinely make unreliable inferences are less fit than people who routinely make reliable inferences, so the fact that most people are strongly disposed to use heuristics means that heuristics must be adaptive and, hence, reliable.

Jennifer Lackey and Guy Axtell, among others, have made arguments of this sort.  I say the argument has a transcendental flavor because it takes for granted some empirical claim (widespread use of heuristics), then articulates what would have to be the case for that empirical claim to be possible.

Consider first the argument without the evolutionary backstory.  The granted claim is that heuristic use is widespread.  The conclusion is that heuristics must be reliable.  There’s a missing premise here: non-skepticism about inference (i.e., most people know quite a bit on the basis of inference).  If we add that in, the granted claim is that, although heuristic use is widespread, most people know quite a bit on the basis of inference.  And the conclusion of the argument remains that heuristics must be reliable.  Even as amended, the transcendental argument is clearly invalid.  What needs to be added is the premise that knowledge must be arrived at via reliable processes.  Otherwise, it remains open to say that people acquire inferential knowledge on the basis of unreliable heuristics.  While it is of course possible to add this further premise to the argument, to do so would beg the question.  Reliabilism is precisely what is at stake in this debate.  The defender of reliabilism is not entitled to use it as a premise.

Though the purely transcendental form of the argument is invalid, perhaps the evolutionary backstory will help.  In general, I find evolutionary-psychological just-so stories to be, well, unpersuasive.  But let’s give this one a run for its money.  I take it that a fleshed-out version of the argument would go like this:

Suppose that some of our ancestors tended to make inferences using processes p, q, and r, and that others of our ancestors tended to make inferences using processes s, t, and u.  Suppose further that p, q, and r are more reliable than s, t, and u.  Then the pqr-ancestors would have ended up with more accurate beliefs than the stu-ancestors, which in turn means that they would outcompete the stus.  Well, we’re the offspring of the pqrs, so they must have used reliable decision processes, which were passed on to us through nature or culture.  Hence, the heuristics we use must be reliable.

There are several problems with this argument.  First, at best it shows that we’re offspring of our most epistemically reliable ancestors, and hence that we tend to use the most epistemically reliable heuristics available to the species tens of thousands of years ago.  But that’s just plain irrelevant to whether the heuristics we use are reliable enough to lead to knowledge.  Maybe the available decision rules were all pretty bad; then the ones that survived would merely be the best of a bad lot: more reliable than some, but not reliable enough to yield knowledge.

Second, even assuming that the pqrs used outright reliable heuristics, and not just the best of a bad lot, the argument assumes that the contemporary inferential setting is relevantly similar to that of our ancestors.  Since it’s best to talk about the reliability of heuristics relative to some context, it’s quite possible that using p, q, and r was reliable relative to the context of hunter-gatherer nomads in the African savannah, but that using p, q, and r is unreliable relative to the context of modern humans navigating highways, cities, and online media.  Since we’ve changed our environment so much in the last ten millennia, the fact that something used to work tells us little about whether it still works.

Third, the argument crucially assumes that reliability is adaptive.  This is far from obvious.  In recent years, the so-called value problem for epistemology has loomed large: why is knowledge better than mere true belief?  The answer, as Plato already understood in the Meno, is not that knowledge is more practically useful: someone who has a true belief about how to get from point A to point B will arrive there just as surely as someone who knows the way from A to B.  What I take to be the best solution to the value problem is that knowledge is an achievement, and achievements are intrinsically valuable.  But are achievements intrinsically adaptive?  I see little reason to think so.

But, one might argue, if the pqrs use more reliable decision rules than the stus, surely they will end up with more true beliefs (or a higher proportion of true to false beliefs), so even if their having more knowledge isn’t adaptive, surely their having more verisimilar beliefs is.  Again, I disagree.  It’s essential to distinguish reliability, which is a purely epistemic notion, from adaptiveness, which is both epistemic and evaluative.  For the sake of simplicity, let’s use a probabilistic notion of reliability: the reliability of an inferential process is the probability that it leads to a true belief (or, if you prefer, the proportion of true to total beliefs it leads to).  The adaptiveness of a decision rule isn’t just its reliability.  It’s, roughly speaking, reliability weighted by the stakes: the expected payoff of using the rule, given what’s gained when it gets things right and what’s lost when it gets things wrong.

An example will illustrate.  Compare two decision procedures, P1 and P2, used over ten cases.  P1 leads to 8 true beliefs, while P2 leads to 6.  So P1 is 80% reliable, while P2 is only 60% reliable.  Surely, one might think, P1 is more adaptive than P2.  Not necessarily: it depends on what happens when the agent gets things right and what happens when she gets them wrong.  For if P1 goes astray when it would be disastrous to be wrong, while P2 goes astray when it doesn’t hurt too much to be wrong, then it may well be the case that P2 is more adaptive than P1.  So adaptiveness is not just reliability; it’s reliability when it matters.  (Incidentally, this is quite similar to what McKay and Dennett call ‘adaptive misbelief’.)
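To make the arithmetic concrete, here is a minimal sketch in Python.  The payoff numbers are invented purely for illustration; the point is just that once stakes are attached to each case, the less reliable procedure can come out ahead.

```python
# A minimal sketch of the P1/P2 example. All payoff numbers are invented;
# the only point is that reliability and expected payoff can diverge.

# Each case is (got_it_right, payoff_if_right, payoff_if_wrong).
# P1 is right in 8 of 10 cases, but both of its errors are disastrous.
p1_cases = [(True, 1, -50)] * 8 + [(False, 1, -50)] * 2

# P2 is right in only 6 of 10 cases, but its errors are cheap.
p2_cases = [(True, 1, -1)] * 6 + [(False, 1, -1)] * 4

def reliability(cases):
    """Proportion of cases in which the procedure yields a true belief."""
    return sum(right for right, _, _ in cases) / len(cases)

def total_payoff(cases):
    """Sum of payoffs: the gain when right, the loss when wrong."""
    return sum(win if right else loss for right, win, loss in cases)

for name, cases in [("P1", p1_cases), ("P2", p2_cases)]:
    print(name, f"reliability={reliability(cases):.0%}",
          f"payoff={total_payoff(cases)}")
# P1 reliability=80% payoff=-92
# P2 reliability=60% payoff=2
```

On these numbers P1 classifies more cases correctly but loses badly overall, because both of its errors fall exactly where error is catastrophic.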

So far so good, but of course this just shows that it’s possible for reliability and adaptiveness to come apart.  That casts doubt on the evolutionary argument, but perhaps not too much.  It would be more persuasive to show that, for the heuristics we actually use, reliability and adaptiveness diverge.  Well, there’s reason to think that, for many of them, this is the case.  Consider the so-called fundamental attribution error: the tendency to attribute others’ behavior to dispositional factors rather than situational ones, even when it should be clear that the situation is importantly operative.  Presumably the pattern of judgments identified by this error stems from the use of a heuristic: when someone does something of type t, infer that she is a t-er.  For example, if someone lies, infer she’s a liar.  If someone cheats, infer she’s a cheater.  If someone helps, infer she’s a helper.  This isn’t a particularly reliable heuristic, but when it goes wrong (i.e., when it leads to the fundamental attribution error), it’s often self-confirming.  If you think that someone is a helper, you tend to signal that expectation to her, which in turn will make her more inclined to help.  If a waiter thinks that someone is a low tipper, he’ll tend to give them bad service, which in turn will lead to a low tip.  (This, by the way, is what I call factitious virtue and vice.)  It looks, then, as though the heuristic that leads to the fundamental attribution error might be adaptive but unreliable.
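Here is a toy simulation of that self-confirming dynamic.  Every probability in it is invented; the point is only that the t-er heuristic can be a mediocre detector of dispositions while still paying off, because the signalled expectation elicits the very behavior it predicted.

```python
import random

random.seed(0)

# Toy model of the "t-er" heuristic; every probability here is invented.
N = 10_000
P_DISPOSITION = 0.3               # share of dispositional helpers
P_HELP = {True: 0.9, False: 0.3}  # chance of helping on one occasion,
                                  # keyed by whether one is a helper
P_HELP_IF_EXPECTED = 0.8          # chance of helping once treated as one

attributed = 0   # times the heuristic fired ("she's a helper")
correct = 0      # ...and the target really was a dispositional helper
elicited = 0     # help received after signalling the expectation

for _ in range(N):
    is_helper = random.random() < P_DISPOSITION
    if random.random() < P_HELP[is_helper]:  # one observed helping act
        attributed += 1                      # heuristic: she's a helper
        correct += is_helper
        elicited += random.random() < P_HELP_IF_EXPECTED

print(f"attribution accuracy: {correct / attributed:.0%}")      # roughly 56%
print(f"help elicited after attributing: {elicited / attributed:.0%}")  # ~80%
```

On these assumptions the heuristic is barely better than a coin flip as a detector of helping dispositions, yet acting on its verdicts reliably produces help; if that payoff is what selection sees, the heuristic can spread despite its unreliability.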

Or consider another case: the disgust reaction.  A lot of recent research in moral psychology and social psychology (e.g., Kelly’s Yuck! The Nature and Moral Significance of Disgust) suggests that the disgust reaction originally evolved to detect potentially poisonous or infectious things.  However, it’s very easy to associate new triggers with disgust, so that one becomes disgusted by, for instance, the food and cultural practices of outgroup individuals.  Rick Santorum recently claimed that JFK’s speech on the separation of church and state made him want to puke.  I find that reaction itself disgusting, but then I’m a sushi-eating, espresso-sipping, gay-rights-supporting, feminist liberal.  The point is that it was adaptive in our evolutionary history to have an oversensitive disgust detector because the cost of a false negative (getting poisoned) was so much worse than the cost of a false positive (failing to eat something benign).

Hence, not only is it possible for adaptiveness and reliability to come apart; it seems actually to happen in the case of some important heuristics.  And so I conclude that the transcendental argument for the reliability of heuristics, even when supported by the evolutionary just-so story, is unpersuasive.  (A sketch of the false-negative/false-positive asymmetry in expected-payoff terms follows below.)
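The base rate and payoffs in this back-of-the-envelope sketch are invented; what matters is only their relative magnitudes.

```python
# Invented numbers illustrating why an oversensitive poison detector can
# beat a better-calibrated (more reliable) one in expected payoff.
P_POISON = 0.05          # base rate of genuinely poisonous items
PAYOFF_POISONED = -1000  # false negative: eat real poison
PAYOFF_SKIPPED = -1      # false positive: pass up benign food
PAYOFF_MEAL = 1          # true negative: eat benign food
PAYOFF_AVOIDED = 1       # true positive: avoid real poison

def reliability(p_hit, p_false_alarm):
    """Proportion of items the detector classifies correctly."""
    return P_POISON * p_hit + (1 - P_POISON) * (1 - p_false_alarm)

def expected_payoff(p_hit, p_false_alarm):
    """Expected payoff per item encountered."""
    return (P_POISON * (p_hit * PAYOFF_AVOIDED
                        + (1 - p_hit) * PAYOFF_POISONED)
            + (1 - P_POISON) * (p_false_alarm * PAYOFF_SKIPPED
                                + (1 - p_false_alarm) * PAYOFF_MEAL))

# Calibrated detector: few false alarms, but misses 20% of real poison.
# Oversensitive detector: constant false alarms, misses almost nothing.
for name, hit, fa in [("calibrated", 0.80, 0.05),
                      ("oversensitive", 0.99, 0.40)]:
    print(f"{name}: reliability={reliability(hit, fa):.0%}, "
          f"expected payoff={expected_payoff(hit, fa):+.2f}")
# Prints roughly: calibrated reliability=94%, expected payoff about -9.1;
# oversensitive reliability=62%, expected payoff about -0.26.
```

On these numbers the calibrated detector classifies about 94% of items correctly and the oversensitive one only about 62%, yet the oversensitive detector’s expected payoff is far better, since nearly all of its errors are cheap false positives rather than fatal false negatives.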
