Some of my current research asks whether our inferential belief-formation, -retention, and -revision processes are reliable. In a forthcoming paper and my CUP book, I argue that many of them — heuristics such as availability, representativeness, and recognition — aren’t. They work well enough in many ordinary contexts, but there are reasons to doubt their reliability:
1. Heuristics work only in tightly circumscribed situational contexts.
2. We use heuristics indiscriminately.
3. Most people (or perhaps just under half) are disposed to claim that heuristics are more reliable decision procedures than sound inferential rules such as modus ponens, disjunctive inference, and the laws of probability.
Think of the conjunction of 1 and 2. I’ve been told at a conference that heuristics are just fine because “they’re reliable except when they aren’t.” Well, bully for them. So is flipping a coin to decide whether it’s raining: if you only flip the coin, or only infer based on the result of the flip, when it’s right, then of course it’s reliable. Heuristics, along with every other decision procedure one can imagine, satisfy that condition of reliability. But the commentator presumably meant something more sensible: heuristics are reliable when we’re disposed to use them, and when they’re unreliable we stop using them.
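The coin-flip point can be made concrete with a toy simulation. Everything here is invented for the sketch (the rain rate, trial count, and function names are arbitrary); the point is just that if you only count the cases where a procedure got it right, any procedure whatsoever comes out perfectly "reliable":

```python
import random

random.seed(0)

def coin_flip_guess(raining):
    """Guess whether it's raining by flipping a coin (ignores the world entirely)."""
    return random.random() < 0.5

days = [random.random() < 0.3 for _ in range(10_000)]   # it rains on ~30% of days
guesses = [coin_flip_guess(day) for day in days]

# Unconditional reliability: chance, about 50%.
overall = sum(g == d for g, d in zip(guesses, days)) / len(days)

# "Reliability" conditioned on counting only the flips that came out right:
# trivially 1.0 -- and the same trick works for any decision procedure.
right = [(g, d) for g, d in zip(guesses, days) if g == d]
conditional = sum(g == d for g, d in right) / len(right)

print(overall)      # ~0.5
print(conditional)  # 1.0
```

So the condition "reliable except when it isn't" is vacuous; the interesting question is whether the procedure is used only where it works.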
This suggests an interesting difference between heuristics and sound inference rules. If you input truths to modus ponens, it outputs truths. If you input truths to disjunctive inference, it outputs truths. If you input good evidence to Bayes’ Law, it outputs rational probabilities. If you input good evidence into a heuristic — well, it depends. Sound inference rules are context-neutral. Heuristics aren’t. So, instead of talking about whether heuristics are categorically reliable, perhaps it would be better to talk about the contexts in which heuristics are reliable or unreliable. Take availability: it’s pretty good when you’ve had a large, unbiased sample of the domain, but not otherwise. So instead of talking about whether the availability heuristic is reliable, we should talk about whether large-unbiased-sample-availability is reliable (maybe), whether small-unbiased-sample-availability is reliable (no), whether small-biased-sample-availability is reliable (no), etc.
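To illustrate the context-relativized versions of availability, here is a hedged sketch (the true rate, the bias parameter, and the helper names are all made up for the example) of how an availability-style frequency estimate behaves in the three sampling contexts just distinguished:

```python
import random

random.seed(1)

TRUE_RATE = 0.2  # actual frequency of the event type in the domain (invented)

def draw_sample(n, bias=0.0):
    """Draw n remembered instances; bias > 0 over-represents the event,
    as when vivid or newsworthy cases are disproportionately memorable."""
    p = min(1.0, TRUE_RATE + bias)
    return [random.random() < p for _ in range(n)]

def availability_estimate(sample):
    """Availability-style estimate: the frequency of the event among
    whatever instances happen to come to mind."""
    return sum(sample) / len(sample)

for label, n, bias in [("large unbiased", 5000, 0.0),
                       ("small unbiased", 10, 0.0),
                       ("large biased", 5000, 0.4)]:
    est = availability_estimate(draw_sample(n, bias))
    print(f"{label:15s} estimate: {est:.2f}  (true rate: {TRUE_RATE})")
```

With a large unbiased sample the estimate tracks the true rate; a small sample is noisy, and a biased sample converges confidently on the wrong answer. That is the sense in which large-unbiased-sample-availability may be reliable while the other relativizations aren't.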
This move parallels in many ways John Doris’s talk of local (moral) virtues. Instead of asking whether someone is generous, he thinks it’s more fruitful to ask whether she’s good-mood-generous, bad-mood-generous, etc. If you relativize to context, you can start to make attributions that are plausible. There’s an important difference, though. On the moral side, global dispositions are normatively adequate but empirically inadequate: they’re what you’d want, but most people don’t have them. On the epistemic side, global heuristics are normatively inadequate but empirically adequate: they’re unreliable, and most people use them. Relativizing moral virtues to contexts makes them empirically adequate but threatens to leave them normatively uninspiring. Relativizing heuristics to contexts makes (some of) them reliable but leaves the view empirically inadequate. Why empirically inadequate? Because that’s not how (most) people deploy them (point 2 above).
I don’t have the time to go into the evidence that heuristics get deployed willy-nilly here, but that’s the crucial claim. If it turned out (as Gigerenzer clearly wants it to turn out) that people do curb their use of heuristics in the right way, they probably would be reliable. The evidence, I contend, suggests that they don’t.
One might retreat even further and say that, while people don’t curb their use of heuristics in the right way, they could learn to. I have two replies. The first is: shrug. The second is: show me. Why do I shrug? Here’s how I see the dialectic: the defender of reliabilism wants to fend off the skeptical challenge, according to which the processes people actually use to arrive at their inferential beliefs are unreliable. It doesn’t help to say that, though the processes people actually use to arrive at their inferential beliefs are unreliable, people could use reliable processes. That wouldn’t show that people have knowledge; what it would show is that they could have knowledge. Hence the shrug. As to the second point: it remains to be shown that people can actually use heuristics responsibly. The place to look would be the Kahneman-Gigerenzer controversy, but my own reading of that controversy is that Gigerenzer is a pie-in-the-sky true believer, while Kahneman has his feet firmly on the ground. As always, I’m willing to be persuaded otherwise.
What about point 3 above? Again, I’m not going to cite all the evidence here, but it’s in my forthcoming publications. The basic idea is that when you pit heuristics against sound inference rules, most people choose the heuristics. You show people two arguments:
- Heuristic H is right, so p.
- Sound inference rule R is right, so ~p.
Then you ask them which argument is more convincing. Most people choose the heuristic argument. This suggests that ordinary people not only deploy the heuristic but also reflectively endorse it (over the sound inference rule). If that’s right, an important consequence follows: even if it could be shown that heuristics are reliable, we should conclude that most people don’t have second-order knowledge of their heuristically derived knowledge. Suppose I arrive at knowledge of p via a heuristic inference: Kp. Do I know that I know that p, i.e., KKp? Probably not, because I have a false belief about the reliability of my way of arriving at Kp: I think it’s more reliable than it really is, and I think it’s more reliable than a relevant alternative inference rule.
To sum up, then, one way to oppose my arguments that heuristics are unreliable is to switch to talking about heuristics relativized to contexts. And the relativization needs to be both normatively adequate (some of the contextual heuristics are reliable) and empirically adequate (people actually deploy heuristics only in those contexts). It’s trivial to relativize in such a way as to get normative adequacy: you can do that for coin-flipping. What’s hard is to relativize in such a way as to get both normative and empirical adequacy. I contend that such attempts fail, but of course that’s subject to further empirical research. Moreover, it doesn’t really help to argue that people could use heuristics in a way that’s both normatively and empirically adequate, since all that would show is that they could have knowledge. Finally, people’s tendency to reflectively endorse heuristics over sound inference rules means that, even if first-order inferential skepticism is wrong, higher-order skepticism wins the day.