Extended Prolepsis 5: Abilism & Epistemic Dependence

The final objection to be considered comes from John Turri (current volume) and Duncan Pritchard (forthcoming).[1]  Virtue reliabilism as it has been discussed thus far is the view that knowledge is true belief that is due to a reliable disposition of the cognitive agent.  Though we find it convenient to say that some dispositions are reliable and others unreliable, cognitive dispositions differ in their reliability in a gradable rather than an absolute way.  We judge a belief to be (categorically) reliably produced if it is produced (comparatively) reliably enough.  Another way of putting this is that knowledge is true belief that manifests cognitive ability, and the level of ability in question comes on a sliding scale.  We judge a belief to be (categorically) a manifestation of ability if it is (comparatively) a manifestation of enough ability. (Or enough of a manifestation of ability; the two can come apart.) Yet another way of putting this is that knowledge is true belief that is the product of cognitive agency, and that the level of cognitive agency in question comes on a sliding scale.  We judge a belief to be (categorically) a product of cognitive agency if it is (comparatively) a manifestation of enough cognitive agency. (Or enough of a manifestation of cognitive agency; again, the two can come apart.) Reliability, cognitive ability, cognitive agency – a rose by any other name would smell as sweet.  The point is that a threshold needs to be crossed before we are willing to say that someone’s true belief counts as knowledge.

Although some of the details of their accounts differ, Pritchard and Turri both argue that the key to saving (or replacing) virtue reliabilism is lowering the threshold.  For Turri (current volume), knowledge is to be defined as a “true belief manifesting cognitive ability,” or, more fully, “approximately true belief manifesting cognitive power.”  Cognitive ability or power in turn is defined thus (following Doris 2002, p. 19):

If a person possesses a cognitive ability to detect the truth (of a certain sort when in certain conditions), then when she exercises that ability and forms a belief (on relevant matters and in relevant conditions), she will form a true belief at a rate exceeding chance.[2]

From here, the rescue of virtue reliabilism is straightforward: clearly, when people use heuristics, they exercise some degree of cognitive ability. The recognition heuristic is better than chance, after all. So, when they get it right by using the recognition heuristic (or any other heuristic that works better than chance), they know what they truly believe. Note that this is a significant lowering of the threshold. As long as the cognitive agent is better than chance, she’s good enough. Her true belief might not be reliably produced (where reliability is understood to set a very high bar), but it is produced by a disposition that’s better than chance. Turri (in press) has convincingly argued that at least some cases of knowledge (such as knowledge produced by explanatory reasoning) are unreliably produced. His claim in this context is that true beliefs produced by heuristics are further examples of such knowledge.
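For concreteness, here is a minimal sketch of the recognition heuristic in Python. (The city names and the recognition set are my own hypothetical inputs, not data from the studies under discussion.)

```python
def recognition_heuristic(a, b, recognized):
    """Judge which of two cities is more populous: if exactly one of
    them is recognized, infer that the recognized city is the larger;
    otherwise the heuristic is silent and the agent must guess or
    fall back on other cues."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # heuristic does not apply

# A somewhat ignorant American might recognize only a few German cities:
recognized = {"Berlin", "Munich", "Hamburg"}
print(recognition_heuristic("Berlin", "Duisburg", recognized))  # Berlin
print(recognition_heuristic("Bochum", "Duisburg", recognized))  # None
```

Because recognition is correlated (via mediators such as media coverage) with population, following this rule yields true beliefs at a rate exceeding chance, which is all that Turri’s definition demands.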

Pritchard (forthcoming) argues in a similar way that people who arrive at true beliefs via heuristics are knowers.  For him, the key distinction is between robust and modest virtue epistemology.[3]  Robust virtue epistemology defines knowledge purely in terms of virtue.  Modest virtue epistemology, by contrast, requires virtue only as a necessary – not a sufficient – condition for knowledge.  For independent reasons, Pritchard rejects robust virtue epistemology, so he sees the fact that it is inconsistent with epistemic situationism merely as further evidence in favor of modest virtue epistemology.  But is modest virtue theory threatened as well?  Not according to Pritchard.  The further condition he adds to knowledge is epistemic dependency, which has both positive and negative aspects:

It is positive when an agent exhibits a relatively low degree of cognitive agency, and yet qualifies as having knowledge nonetheless due to factors outwith her cognitive agency, such as epistemically friendly features of the environment. […] And it is negative when an agent exhibits a high degree of cognitive agency – such that they would ordinarily count as having knowledge – and yet they lack knowledge nonetheless due to factors outwith their cognitive agency.

For instance, someone who naively asks a knowledgeable passerby for directions to a landmark can end up knowing the way to the landmark, despite exercising a relatively low degree of cognitive agency. By contrast, even a thorough and careful investigator can be fooled by an even more thorough and clever deceiver, and hence end up with true beliefs that do not count as knowledge despite exercising a high degree of cognitive agency. Pritchard contends that people who get things right when using heuristics or when open-minded-because-in-a-good-mood are like the person who naively asks someone for directions: despite exercising a low degree of cognitive agency, their true beliefs count as knowledge. After all, they did exercise some cognitive agency (they used a heuristic rather than flipping a coin; they were luckily open-minded), and that was enough, given their epistemically friendly environment. Pritchard goes so far as to say that,

in order for the situationist challenge to impact even on modest virtue epistemology it needs to demonstrate in a wide range of cases not just that the agent’s cognitive success, where it occurs, is not primarily creditable to her exercise of her cognitive abilities / intellectual virtues, but moreover that the agent’s cognitive success is not in any significant way the product of her cognitive abilities / intellectual virtues.

Like Turri, Pritchard wants to lower the threshold: as long as the dispositions that lead to true beliefs involve some degree of cognitive agency, they can give us knowledge.

I doubt that my arguments would go through if the threshold were set as low as Turri and Pritchard suggest. One question, then, is whether it is legitimate to lower the threshold in this way.[4] In these kinds of arguments, it’s often hard to find any principled position that is also reasonable (or a reasonable position that is also principled). I’ll try, however, to raise some doubts about the lowering of the threshold. Consider a student taking a multiple choice test, with four potential answers per question. As in the popular game show, Who Wants to be a Millionaire?, she has a “life line”: once during the test, she can ask the teacher to eliminate two of the four potential answers for a given question. Suppose that she encounters a question where she has absolutely no clue which answer is right. She uses her lifeline, reducing the number of potential answers to two, then guesses. As it turns out, she guesses correctly. Does it make sense to say that she knows the answer to this question? One might object that she doesn’t believe that she got it right, since she was guessing. Suppose further, then, that she had already decided that one of the four potential answers was wrong, and that it was the one she didn’t choose. Now she does believe that the selected answer is correct. Does she know? I contend that she does not. If this is correct, then Turri and Pritchard’s weakening of the conditions on knowledge does not work. After all, she did manifest cognitive ability (Turri): she used a lifeline to narrow the choices. And her success is, in a significant way, due to her exercise of cognitive agency (Pritchard). I don’t know to what extent my intuition that she nevertheless lacks knowledge might be shared and withstand scrutiny, but it does raise concerns about the weakness of both Turri’s and Pritchard’s definitions of knowledge. They may end up counting too many beliefs as knowledge.
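The arithmetic behind the example is worth making explicit. A quick simulation (a toy model; the trial count and seed are arbitrary) confirms that the lifeline-plus-guess disposition succeeds at roughly twice the blind-chance rate:

```python
import random

rng = random.Random(0)
trials = 100_000
hits = 0
for _ in range(trials):
    options = [0, 1, 2, 3]
    correct = rng.choice(options)
    wrong = [o for o in options if o != correct]
    # The lifeline eliminates two of the three wrong options:
    survivors = [correct, rng.choice(wrong)]
    # The student then guesses between the two survivors:
    hits += rng.choice(survivors) == correct

print(hits / trials)  # roughly 0.5, well above the 0.25 blind-guess rate
```

A fifty-percent disposition plainly “exceeds chance,” so by Turri’s definition the student’s lucky hit should count as knowledge; the intuition that it does not is what puts pressure on the lowered threshold.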

But suppose that I’m off base here. There’s at least one further thing to say about their revisions to reliabilism. In virtue ethics, it’s not uncommon to say that an action is right if it manifests at least one virtue and no vices. In the same way, Turri and Pritchard may want to revise their accounts to say that knowledge is a true belief that manifests at least one intellectual virtue (ability, bit of agency) and no intellectual vices (infirmities). Taking this suggestion would not just follow the analogy, but also support their solution to the so-called value problem. The problem is this: how, if at all, is knowledge more valuable than true belief? It’s arguably the problem discussed near the end of Plato’s Meno, and it has loomed large in recent epistemology. For both Pritchard (XXX) and Turri (2011, 2013, inspired by Zagzebski 2009), the solution to the problem is not that knowledge is prudentially more valuable (it’s not, since the person who truly believes that p will act the same as the person who knows that p), but that knowledge is an achievement, and achievements are in general more valuable than non-achievements that result in the same state of affairs.

Consider, then, a somewhat ignorant American who successfully uses the recognition heuristic to rank the 83 most populous German cities in order of size. Does he know that he got the order right? He’s clearly manifesting an ability and exercising cognitive agency. I’ve already suggested that this might not be enough to qualify his true belief as knowledge, but suppose my threshold argument is unsound. There’s a further concern here. Our imaginary agent manifests a cognitive ability, but in so doing he also manifests a cognitive disability. Indeed, his ability doesn’t just coincide with but depends on his disability. He would not be empowered to use the recognition heuristic if he were not sufficiently ignorant of German geography.

Moreover, most of the factors that make the recognition heuristic reliable, to the extent that it is reliable, are beyond the ken and control of the cognitive agent. The strength of the ecological correlation is typically something he doesn’t know about, nor something he has any power over. The strength of the surrogate correlation (the correlation between the mediator and his recognitional capacities) is, likewise, largely not up to him. A few cognitive agents might know about and intentionally exploit these correlations, but most people do so automatically and unknowingly. This suggests a further point: the recognition heuristic, if it is a source of knowledge, is an ability that is located largely outside the skin of the cognitive agent. Both the processes that lead to a high ecological correlation and the processes that lead to a high surrogate correlation are largely external. This suggests that, if virtue reliabilism is to cope with epistemic situationism, it must also cope with a version of the extended mind hypothesis, which I have elsewhere dubbed the extended character hypothesis (Alfano 2013, forthcoming a, forthcoming d, forthcoming e).[5]
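The point can be made vivid with a toy simulation (everything here is my own hypothetical setup, not real German data: the noise parameter standing in for the strength of the ecological correlation, the number of recognized cities, the invented sizes). The heuristic’s accuracy is fixed by how well the mediator tracks size and how well recognition tracks the mediator, neither of which is up to the agent:

```python
import random

def heuristic_accuracy(noise, n_cities=83, n_recognized=20, seed=1):
    """Accuracy of the recognition heuristic in a toy world where a
    mediator (say, media coverage) equals city size plus Gaussian noise.
    Higher noise means a weaker ecological correlation."""
    rng = random.Random(seed)
    sizes = [rng.random() for _ in range(n_cities)]
    # Ecological correlation: the mediator tracks size, imperfectly.
    mediator = [s + rng.gauss(0, noise) for s in sizes]
    # Surrogate correlation: the agent recognizes the most-covered cities.
    by_coverage = sorted(range(n_cities), key=lambda i: -mediator[i])
    recognized = set(by_coverage[:n_recognized])
    hits = total = 0
    for a in range(n_cities):
        for b in range(a + 1, n_cities):
            if (a in recognized) != (b in recognized):  # heuristic applies
                pick = a if a in recognized else b
                other = b if pick == a else a
                hits += sizes[pick] > sizes[other]
                total += 1
    return hits / total

print(heuristic_accuracy(noise=0.1))  # strong correlations: well above chance
print(heuristic_accuracy(noise=5.0))  # correlations washed out: near chance
```

The agent’s procedure is identical in both runs; only the external correlations differ. That is the sense in which the “ability,” if it is one, is located largely outside his skin.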

[1] I do not have space to consider objections to my positive account of factitious virtues, but since no criticism of this account has been published, that would have to wait anyway.

[2] Though I should note that Doris says “markedly above chance,” not just “exceeding chance.”

[3] For more on this distinction, see Kallestrup & Pritchard (2013).

[4] Another avenue, which I do not explore here, is to argue that reliability is a high-fidelity rather than a low-fidelity virtue, as discussed in Character as Moral Fiction (pp. 31-34, 72-82).

[5] See also Koralus & Mascarenhas (forthcoming) on how the dialectical process of questioning and answering makes us rational, to the extent that we are.
