the (un)reliability of heuristics, part 2

A claim I sometimes hear in favor of the reliability of heuristics has the flavor of a transcendental argument:

Let’s grant that people often arrive at their inferential beliefs via heuristics.  It follows that heuristics must be reliable.  Furthermore, there’s good evolutionary reason for this supposition.  People who routinely make unreliable inferences are less fit than people who routinely make reliable inferences, so the fact that most people are strongly disposed to use heuristics means that heuristics must be adaptive and, hence, reliable.

Jennifer Lackey and Guy Axtell, among others, have made arguments of this sort.  I say the argument has a transcendental flavor because it takes for granted some empirical claim (widespread use of heuristics), then articulates what would have to be the case for that empirical claim to be possible.  Consider first the argument without the evolutionary backstory.  The granted claim is that heuristic use is widespread.  The conclusion is that heuristics must be reliable.  There’s a missing premise here: non-skepticism about inference (i.e., most people know quite a bit on the basis of inference).  If we add that in, the granted claim is that, although heuristic use is widespread, most people know quite a bit on the basis of inference.  And the conclusion of the argument remains that heuristics must be reliable.  Even as amended, the transcendental argument is clearly invalid.  What needs to be added is that knowledge is arrived at by reliable processes.  Otherwise, it remains open to say that people acquire inferential knowledge on the basis of unreliable heuristics.  While it is of course possible to add this further premise to the argument, to do so would beg the question.  Reliabilism is precisely what is at stake in this debate.  The defender of reliabilism is not entitled to use it as a premise.

Though the purely transcendental form of the argument is invalid, perhaps the evolutionary backstory will help.  In general, I find evolutionary psychological just-so stories to be, well, unpersuasive.  But let’s give this one a run for its money.  I take it that a fleshed-out version of the argument would go like this:

Suppose that some of our ancestors tended to make inferences using processes p, q, and r, and that other of our ancestors tended to make inferences using processes s, t, and u.  Suppose further that p, q, and r are more reliable than s, t, and u.  Then the pqr-ancestors would have ended up with more reliable beliefs than the stu-ancestors, which in turn means that they would outcompete the stus.  Well, we’re the offspring of the pqrs, so they must have used reliable decision processes, which were passed on to us through nature or culture.  Hence, the heuristics we use must be reliable.

There are several problems with this argument.  First, at best it shows that we’re offspring of our most epistemically reliable ancestors, and hence that we tend to use the most epistemically reliable heuristics available to the species tens of thousands of years ago.  But that’s just plain irrelevant to whether the heuristics we use are reliable enough to lead to knowledge.  Maybe the available decision rules were all pretty bad; then the ones that survived would merely be the best of a bad lot: more reliable than some, but not reliable enough to yield knowledge.

Second, even assuming that the pqrs used outright reliable heuristics, and not just the best of a bad lot, the argument assumes that the contemporary inferential setting is relevantly similar to that of our ancestors.  Since it’s best to talk about the reliability of heuristics relative to some context, it’s quite possible that using p, q, and r was reliable relative to the context of hunter-gatherer nomads in the African savannah, but that using p, q, and r is unreliable relative to the context of modern humans navigating highways, cities, and online media.  Since we’ve changed our environment so much in the last 10 millennia, the fact that something used to work is moot on the question of whether it still works.

Third, the argument crucially assumes that reliability is adaptive.  This is far from obvious.  In recent years, the so-called value problem for epistemology has loomed large: why is knowledge better than mere true belief?  The answer, as Plato already understood in the Meno, is not that knowledge is more practically useful: someone who has a true belief about how to get from point A to point B will arrive there just as surely as someone who knows the way from A to B.  What I take to be the best solution to the value problem is that knowledge is an achievement, and achievements are intrinsically valuable.  But are achievements intrinsically adaptive?  I see little reason to think so.

But, one might argue, if the pqrs use more reliable decision rules than the stus, surely they will end up with more true beliefs (or a higher proportion of true to false beliefs), so even if their having more knowledge isn’t adaptive, surely their having more verisimilar beliefs is.  Again, I disagree.  It’s essential to distinguish reliability, which is a purely epistemic notion, from adaptiveness, which is both epistemic and evaluative.  For the sake of simplicity, let’s use a probabilistic notion of reliability: the reliability of an inferential process is the probability that it leads to a true belief (or, if you prefer, the proportion of true to total beliefs it leads to).  The adaptiveness of a decision rule isn’t just its reliability.  It’s, roughly speaking, the product of reliability and average payoff.

An example will illustrate.  Compare two decision procedures, P1 and P2, used over ten cases.  P1 leads to 8 true beliefs, while P2 leads to only 6.  So P1 is 80% reliable, while P2 is only 60% reliable.  Surely, one might think, P1 is more adaptive than P2.  On the contrary: it depends on what happens when the agent gets things right and what happens when he gets them wrong.  For if P1 goes astray when it would be disastrous to be wrong, while P2 goes astray when it doesn’t hurt too much to be wrong, then it may well be the case that P2 is more adaptive than P1.  So adaptiveness is not just reliability; it’s reliability when it matters.  (Incidentally, this is quite similar to what McKay and Dennett call ‘adaptive misbelief’.)
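To see how the arithmetic works, here is a minimal sketch in Python.  The payoff numbers are invented purely for illustration; nothing hangs on them beyond the structure of the example above.

```python
# A minimal sketch of the reliability/adaptiveness distinction.
# All payoffs are hypothetical, chosen only to show how a less
# reliable procedure can nonetheless be more adaptive.

def reliability(outcomes):
    """Proportion of cases in which the procedure yielded a true belief."""
    return sum(1 for was_true, _ in outcomes if was_true) / len(outcomes)

def adaptiveness(outcomes):
    """Average payoff of acting on the procedure's outputs."""
    return sum(payoff for _, payoff in outcomes) / len(outcomes)

# P1: 8 true beliefs out of 10, but its two errors are disastrous.
p1 = [(True, 1)] * 8 + [(False, -50)] * 2
# P2: only 6 true beliefs out of 10, but its errors are cheap.
p2 = [(True, 1)] * 6 + [(False, -1)] * 4

print(reliability(p1), adaptiveness(p1))  # 0.8  -9.2
print(reliability(p2), adaptiveness(p2))  # 0.6   0.2
```

Reliability looks only at whether the beliefs are true; adaptiveness weights each case by what getting it right or wrong costs, which is exactly why the two can come apart.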

So far so good, but of course this just shows that it’s possible for reliability and adaptiveness to come apart.  That casts doubt on the evolutionary argument, but perhaps not too much.  It would be more persuasive to show that, for the heuristics we actually use, reliability and adaptiveness diverge.  Well, there’s reason to think that, for many of them, this is the case.  Consider the so-called fundamental attribution error: the tendency to attribute others’ behavior to dispositional factors rather than situational ones, even when it should be clear that the situation is importantly operative.  Presumably the pattern of judgments identified by this error stems from the use of a heuristic: When someone does something of type t, infer that she is a t-er.  For example, if someone lies, infer she’s a liar.  If someone cheats, infer she’s a cheater.  If someone helps, infer she’s a helper.  This isn’t a particularly reliable heuristic, but when it goes wrong (i.e., when it leads to the fundamental attribution error), it’s often self-confirming.  If you think that someone is a helper, you tend to signal that expectation to her, which in turn will make her more inclined to help.  If a waiter thinks that someone is a low tipper, he’ll tend to give them bad service, which in turn will lead to a low tip.  (This, by the way, is what I call factitious virtue and vice.)  It looks, then, as though the heuristic that leads to the fundamental attribution error might be adaptive but unreliable.

Or consider another case: the disgust reaction.  A lot of recent research (e.g. Kelly’s Yuck! The Nature and Moral Significance of Disgust) in moral psychology and social psychology suggests that the disgust reaction originally evolved to detect potentially poisonous or infectious things.  However, it’s very easy to associate new triggers with disgust, so that one becomes disgusted by, for instance, the food and cultural practices of outgroup individuals.  Rick Santorum recently claimed that JFK’s speech on the separation of church and state made him want to puke.  I find that reaction itself disgusting, but then I’m a sushi-eating, espresso-sipping, gay-rights-supporting, feminist liberal.  The point is that it was adaptive in our evolutionary history to have an oversensitive disgust detector because the cost of a false negative (getting poisoned) was so much higher than the cost of a false positive (failing to eat something benign).  Hence, not only is it possible for adaptiveness and reliability to come apart, it seems to actually happen in the case of some important heuristics.  And so I conclude that the transcendental argument for the reliability of heuristics — even when supported by the evolutionary just-so story — is unpersuasive.

the (un)reliability of heuristics, part 1

Some of my current research asks whether our inferential belief-formation, -retention, and -revision processes are reliable.  In a forthcoming paper and my CUP book, I argue that many of them — heuristics such as availability, representativeness, and recognition — aren’t.  They work well enough in many ordinary contexts, but there are reasons to doubt their reliability:

  1. Heuristics work in tightly circumscribed situational contexts.
  2. We use heuristics indiscriminately.
  3. Most people (or perhaps just under half) are disposed to claim that heuristics are more reliable decision procedures than sound inferential rules such as modus ponens, disjunctive inference, and the laws of probability.

Think of the conjunction of 1 and 2.  I’ve been told at a conference that heuristics are just fine because “they’re reliable except when they aren’t.”  Well, bully for them.  So is flipping a coin to decide whether it’s raining: if you only flip the coin, or only infer based on the result of the flip, when it’s right, then of course it’s reliable.  Heuristics, along with every other decision procedure one can imagine, satisfy that condition of reliability.  But the commentator presumably meant something more sensible: heuristics are reliable when we’re disposed to use them, and when they’re unreliable we stop using them.

This suggests an interesting difference between heuristics and sound inference rules.  If you input truths to modus ponens, it outputs truths.  If you input truths to disjunctive inference, it outputs truths.  If you input good evidence to Bayes’ Law, it outputs rational probabilities.  If you input good evidence into a heuristic — well, it depends.  Sound inference rules are context-neutral.  Heuristics aren’t.  So, instead of talking about whether heuristics are categorically reliable, perhaps it would be better to talk about the contexts in which heuristics are reliable or unreliable.  Take availability: it’s pretty good when you’ve had a large, unbiased sample of the domain, but not otherwise.  So instead of talking about whether the availability heuristic is reliable, we should talk about whether large-unbiased-sample-availability is reliable (maybe), whether small-unbiased-sample-availability is reliable (no), whether small-biased-sample-availability is reliable (no), etc.
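Here’s a toy simulation of those three relativized heuristics (my own construction, purely illustrative; the true rate and sample sizes are invented):

```python
import random

# A toy model of context-relativized availability: an agent estimates
# how common a feature is by its frequency in whatever sample of
# instances she happens to recall.

random.seed(0)
TRUE_RATE = 0.3  # the actual frequency of the feature in the domain
population = [random.random() < TRUE_RATE for _ in range(100_000)]

def availability_estimate(sample):
    """Estimate the feature's frequency from the recalled sample."""
    return sum(sample) / len(sample)

large_unbiased = random.sample(population, 5_000)
small_unbiased = random.sample(population, 10)
# A biased sample: memorable, feature-positive cases are over-recalled.
biased = [x for x in population if x][:400] + [x for x in population if not x][:100]

for name, s in [("large unbiased", large_unbiased),
                ("small unbiased", small_unbiased),
                ("biased", biased)]:
    print(f"{name}: estimate {availability_estimate(s):.2f} vs. truth {TRUE_RATE}")
```

Only the large unbiased sample reliably lands near the truth; the very same inference rule is trustworthy in one sampling context and untrustworthy in the others.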

This move parallels in many ways John Doris’s talk of local (moral) virtues.  Instead of asking whether someone is generous, he thinks it’s more fruitful to ask whether she’s good-mood-generous, bad-mood-generous, etc.  If you relativize to context, you can start to make attributions that are plausible.  There’s an important difference, though.  On the moral side, global dispositions are normatively adequate but empirically inadequate: they’re what you’d want, but most people don’t have them.  On the epistemic side, global heuristics are normatively inadequate but empirically adequate: they’re unreliable, and most people use them.  Relativizing moral virtues to contexts makes them empirically adequate but threatens to leave them normatively uninspiring.  Relativizing heuristics to contexts makes (some of) them reliable, but empirically inadequate.  Why empirically inadequate?  Because that’s not how (most) people deploy them (point 2 above).

I don’t have the time to go into the evidence that heuristics get deployed willy-nilly here, but that’s the crucial claim.  If it turned out (as Gigerenzer clearly wants it to turn out) that people do curb their use of heuristics in the right way, they probably would be reliable.  The evidence, I contend, suggests that they don’t.

One might retreat even further and say that, while people don’t curb their use of heuristics in the right way, they could learn to.  I have two replies.  The first is: shrug.  The second is: show me.  Why do I shrug?  Here’s how I see the dialectic: the defender of reliabilism wants to fend off the skeptical challenge, according to which the processes people actually use to arrive at their inferential beliefs are unreliable.  It doesn’t help to say that, though the processes people actually use to arrive at their inferential beliefs are unreliable, people could use reliable processes.  That wouldn’t show that people have knowledge; what it would show is that they could have knowledge.  Hence the shrug.  As to the second point: it remains to be shown that people can actually use heuristics responsibly.  The place to look would be the Kahneman-Gigerenzer controversy, but my own reading of that controversy is that Gigerenzer is a pie-in-the-sky true believer, while Kahneman has his feet firmly on the ground.  As always, I’m willing to be persuaded otherwise.

What about point 3 above?  Again, I’m not going to cite all the evidence here, but it’s in my forthcoming publications.  The basic idea is that when you pit heuristics against sound inference rules, most people choose the heuristics.  You show people two arguments:

  • Heuristic H is right, so p.
  • Sound inference rule R is right, so ~p.

Then you ask them which argument is more convincing.  Most people choose the heuristic argument.  This suggests that ordinary people not only deploy the heuristic but also reflectively endorse it (over the sound inference rule).  If that’s right, an important consequence follows: even if it could be shown that heuristics are reliable, we should conclude that most people don’t have second-order knowledge of their heuristically derived knowledge.  Suppose I arrive at knowledge of p based on a heuristic inference: Kp.  Do I know that I know that p?  I.e., KKp?  Probably not, because I have a false belief about the reliability of my way of arriving at Kp: I think that it’s more reliable than it really is, and I think that it’s more reliable than a relevant alternative inference rule.

To sum up, then, one way to oppose my arguments that heuristics are unreliable is to switch to talking about heuristics relativized to contexts.  And the relativization needs to be both normatively adequate (some of the contextual heuristics are reliable) and empirically adequate (people actually deploy heuristics only in those contexts).  It’s trivial to relativize in such a way as to get normative adequacy: you can do that for coin-flipping.  What’s hard is to relativize in such a way as to get both normative and empirical adequacy.  I contend that such attempts fail, but of course that’s subject to further empirical research.  Moreover, it doesn’t really help to argue that people could use heuristics in a way that’s both normatively and empirically adequate, since all that would show is that they could have knowledge.  Finally, people’s tendency to reflectively endorse heuristics over sound inference rules means that, even if first-order inferential skepticism is wrong, higher-order skepticism wins the day.

will to ignorance, will to truth, and the recognition heuristic

In section 24 of Beyond Good and Evil, titled “O sancta simplicitas!” Nietzsche marvels at the “simplification and falsification” in which people live, “how we have been able to give […] our thoughts a divine desire for wanton leaps and wrong inferences! how from the beginning we have contrived to retain our ignorance in order to enjoy an almost inconceivable freedom, lack of scruple and caution, heartiness, and gaiety of life — in order to enjoy life!  And only on this now solid, granite foundation of ignorance could knowledge rise so far — the will to knowledge on the foundation of a far more powerful will: the will to ignorance, to the uncertain, to the untrue!  Not as its opposite, but — as its refinement!”

This lovely passage has been variously interpreted as indicating that 1) Nietzsche is a nihilist about truth [i.e., he believes that there’s no such thing as truth]; 2) Nietzsche is a skeptic about knowledge [i.e., he believes that people never or rarely know anything]; 3) Nietzsche is a dialetheist [i.e., he thinks that some propositions are both true and false]; and much else.  While I’m dubious of these extreme readings, I do find the passage quite radical in its import.  What could it mean for the will to ignorance to be the foundation of the will to truth, or for the will to truth to be a refinement of the will to ignorance?  One reason the extreme interpretations are almost certainly wrong is that the conflict or tension that Nietzsche wants to emphasize is not one of content, nor one of beliefs, nor one between beliefs and reality.  Instead, he’s talking about a desiderative or conative tension: a contrast between the will to ignorance and the will to truth.

Let’s be baldly literalistic and gloss ‘will to ignorance’ as the desire not to know (whether it’s a desire for false belief, no belief, or unwarranted belief) and ‘will to truth’ as the desire to know.  How could someone’s desire to know be grounded in her desire not to know?  Well, desire to know what, and not to know what?  Desires, like other propositional attitudes, are individuated in part by their contents.  Does the will to truth have the same content as the will to ignorance?  Is Nietzsche claiming that what we want to know and what we want not to know is exactly the same proposition (or set of propositions)?  I think not.  Instead, I contend that the will to ignorance is associated with a whole host of propositions, of which one must be ignorant if one is to know some other, much smaller, set of propositions.  In order to know that p, one must fail to know that q and r and s and t and….  Why would that be?

One avenue to pursue at this point is raised by Borges in his terrific short story, “Funes, the Memorious.”  Funes is panmnemonic: he remembers everything that ever happens to him.  Instead of being a blessing, this is a curse.  He ends up hiding in a dark room, overwhelmed by his memories.  Because he cannot forget, he deliberately avoids engaging with the world.  His mind becomes cluttered with useless memories of trivia.  Perhaps what Nietzsche is saying is that we need to forget, to gloss over, to be ignorant of much trivia, if we are to know anything of value.

This is a purely instrumental reading of BGE 24: will to truth is based on will to ignorance because, for creatures with limitations like ours, it just so happens that you can only know so much, and so to know important things you need to be ignorant of much else.  It’s a plausible reading, but I think it misses something crucial by making the link between will to ignorance and will to truth purely instrumental.  Consider another possibility: that ignorance of some facts is intrinsically related to knowledge of others.  How might that be?  Well, it’s often possible to infer from the fact that you don’t know.  “If I haven’t heard of it, it doesn’t exist.”  Or at least, “If I haven’t heard of it, it’s not big/important/interesting.”  As it turns out, some of the heuristics and biases research in cognitive psychology of the last 40 years supports exactly this idea.  Consider the availability heuristic and the recognition heuristic.  When people infer in accordance with the availability heuristic, they treat ease of recall as an index of probability, frequency, importance, etc.  The easier it is to think of instances of category X, the more common, probable, or important Xs are taken to be.  The availability heuristic seems to explain why people typically estimate that more seven-letter words of the form ‘----ing’ than of the form ‘-----n-’ will be found in a given stretch of prose.  They think of four-letter verbs, then form participles from them to generate examples of the former, but they have no comparable strategy for the latter.  So, even though it’s impossible for there to be more seven-letter words ending in ‘ing’ than seven-letter words whose penultimate letter is ‘n’ in a given stretch of prose (every word of the first form is also of the second), more such words are available, which leads people to guess that they’re more frequent.
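The impossibility claim is just a subset fact, which a few lines of Python make vivid (my illustration; the text snippet is invented, and any prose would do):

```python
import re

# Every seven-letter word ending in 'ing' also has 'n' as its
# penultimate letter, so the '----ing' words can never outnumber
# the '-----n-' words in any stretch of prose.

text = """
The morning was going well: she kept walking and singing, nothing
daunted, past the fishing boats, grateful for her payment.
"""

words = re.findall(r"[a-z]+", text.lower())
ing_words = {w for w in words if len(w) == 7 and w.endswith("ing")}
n6_words  = {w for w in words if len(w) == 7 and w[5] == "n"}

print(sorted(ing_words))      # the words people find easy to generate
print(sorted(n6_words))       # a superset -- but harder to call to mind
print(ing_words <= n6_words)  # True, necessarily
```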

The recognition heuristic is something like a limit case of availability; when people infer in accordance with this heuristic, they treat mere recognition as an index of probability, frequency, importance, etc.  If one recognizes X but not Y, one infers that Xs are more probable, common, or important than Ys.  If someone has been exposed to a large-but-not-comprehensive and unbiased sample, this heuristic is a decent way to make inferences.  Gerd Gigerenzer has shown, for instance, that Germans are as good as (and often better than) Americans at saying which of two American cities has a larger population.  How do they do this?  It’s not because they have more knowledge of American geography than Americans do.  (The converse effect is also found: Americans are as good as, or better than, Germans at saying which of two German cities is more populous.)  What seems to happen is that, whenever Germans recognize the name of one city but not the other, they infer that the recognized city is larger.  Since they tend to have heard of bigger cities, such inferences are surprisingly accurate.

Crucially, you can’t use the recognition heuristic if you recognize too much; it’s most useful when you’re at the Goldilocks point of being acquainted with enough but not too much.  When people use the recognition heuristic, they don’t just maintain a healthy level of ignorance; they actually harness their ignorance to make respectable inferences.  So it might even be adaptive in some circumstances to aim for ignorance, provided it’s ignorance of the right kind.  In more Nietzschean terms, sometimes the will to truth might depend intrinsically on the will to ignorance.
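As a decision rule, the heuristic is almost embarrassingly simple.  Here’s a sketch (the recognition set is invented; it stands in for a German agent’s partial acquaintance with American cities):

```python
# A minimal sketch of the recognition heuristic as a decision rule.

RECOGNIZED = {"New York", "Los Angeles", "Chicago", "Houston"}

def which_is_larger(city_a, city_b, recognized=RECOGNIZED):
    """Guess the larger city, but only when recognition discriminates."""
    a_known, b_known = city_a in recognized, city_b in recognized
    if a_known and not b_known:
        return city_a  # harnessed ignorance: pick the recognized city
    if b_known and not a_known:
        return city_b
    return None        # both or neither recognized: the heuristic is silent

print(which_is_larger("Chicago", "Chattanooga"))  # 'Chicago' -- correct
print(which_is_larger("Chicago", "Houston"))      # None -- too much knowledge
```

The final branch is the Goldilocks point in code: an agent who recognizes everything (or nothing) can never get the rule to fire at all.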

This brings me to one last point, which I’ve been thinking about quite a bit recently: can heuristics be considered intellectual virtues?  The recognition heuristic is an extreme case, as I mentioned, but perhaps that will shed some light on the question.  If an intellectual virtue is a disposition that someone who wants to know would want to have, it’s hard to argue that (the disposition to use) the recognition heuristic is an intellectual virtue.  To want to use the recognition heuristic is to want to be sufficiently ignorant, which runs counter to the global desire to know.   Furthermore, it’s unclear whether the recognition heuristic, applied indiscriminately, is sufficiently reliable.  A 1999 paper that Gigerenzer co-authored famously argued that using the recognition heuristic to pick stocks beat the overall market.  “If I’ve heard of it, it must be a winner!”  Well, as it turned out, that was at best a good heuristic to use in a bull market.  Michael Boyd replicated the study and found that it would be better to pick stocks at random than to use the recognition heuristic during a bear market.

A little knowledge may indeed be a dangerous thing.

in which Nori’s bones turn to Jell-O

What more can I say?  Nori loves to be petted so much that her little bones melt after 20 minutes of it.

You might not be able to tell, but she’s actually asleep in these photos.

A highly opinionated and merely anecdotally supported timeline to the philosophy job market

This isn’t what I did, and it’s not what any particular person told me to do.  Rather, it’s what, on reflection, I wish someone had told me to do.  If you’re at a really fancy place (Leiter top-5 worldwide or so), the following doesn’t apply to you, for reasons I don’t really have the time to go into at the moment (UPDATE: a number of people have told me that, given recent changes in the market, the above caveat may no longer apply.  Talk to your mentors.).  If you’re somewhere in the 15-50 range, however, it might help.

Additionally, this advice seems to work best for people who want a job at a research-oriented department.  Despite the huge amount of experience I have teaching and tutoring (something like 10,000 hours as of spring 2014), I’ve never had even the slightest luck applying for jobs at SLACs (small liberal arts colleges).  If someone from such a school would be so kind as to explain how to modify this advice to apply to their department (and others like it), I’d be most grateful.  Just put the advice in a comment, or email it to me.

Timeline


Let’s suppose you want to land a position that starts in the fall of year N.  Here’s a brief overview of what you should aim to have accomplished by what dates.


Now:

  • Register with the American Philosophical Association (APA)
  • Sign up for the philosophy listservs: philupdates and PHILOS-L
  • Create an account at www.philpapers.org


Now through N-3:

  • Do some teaching.  Be sure to get observed and to get a positive evaluation.  You will need to get a letter of recommendation from someone who knows you as a teacher eventually.  Be sure as well to get positive student evaluations.  The easiest way to do this is to be beautiful and friendly and an easy grader.  If that’s not something you can or want to do, you can still get good evaluations, but you’ll have to try harder.
  • Try sending revised versions of your best seminar papers and any other research you engage in to conferences and journals.  Get your feet wet.
  • Build (or get someone to help you build) a professional website.  Put your best papers on it.


N-3:

  • Acquire two or three Areas of Specialization (AOS).  These are areas of philosophy where you plan to do cutting-edge research in the coming years, at least one of which will be the area in which you write your dissertation.  AOSs are very far from being natural kinds.  They are carved up along cross-cutting lines: by branch of philosophy (e.g., ethics, epistemology, metaphysics), by time period (e.g., 19th-century philosophy), by major author (e.g., Aristotle, Kant, Wittgenstein), and by geography (e.g., 20th-century French philosophy).  As you consider which AOSs to acquire, you may want to look at the last couple years’ worth of jobs at www.philjobs.org, so that you are sure to specialize in a field where there are jobs.  Right now, ethics seems to be pretty hot.  Philosophy of religion is on the outs.  But things change, so do your homework.  In addition, it may be worth having one AOS in a narrower field, even though there may be fewer jobs.  It’s all about supply and demand: narrower fields tend to have few openings, but there will also be fewer qualified applicants.  Moreover, if your AOS is listed as an AOC (Area of Competence) in a job ad, you’ll have an advantage when applying for that job.  (UPDATE: a nice map of interrelations among AOSs is available here.)
  • Get some experience teaching.  If possible, teach a few different courses rather than just a bunch of intro philosophy or intro ethics.  This experience will help you support claims of AOCs later on.
  • Try to become the category editor for a relevant category at www.philpapers.org.
  • Establish a rapport with someone you think would be a good dissertation advisor.  Explore the possibility of working together with him or her.
  • Establish a rapport with others whom you think would be good committee members.  Some of these people should be at your home department, but it’s good to have relationships with people outside the department as well (either in different departments at the same school or in departments at other schools).


Spring N-2:

  • Defend your dissertation prospectus.  Have a drink.  Have another.  Get to work.
  • While writing your dissertation, there will come a time when you say to yourself, “Damn it, I don’t want to read another book.  Why do people keep writing books about my topic?”  Pause for a moment to consider the irony of this complaint.  Then ask yourself whether the book in question is such that you would be able to write a positive review of it.  If it is, start getting in touch with reviews editors (not general editors) of relevant journals to ask whether they’d be interested in your submitting a review.  Some won’t, but at least one will.  They’ll probably even send you a free hardcover copy of the book.  And of course you can put the review on your CV.  Write a positive – if not glowing – review, then send it to the author saying something along the lines of, “Dear Professor X, Hi!  I’m Y.  I’m an admirer of your work and am writing a review of your book for Z.  A draft of the review is attached.  Would you mind taking a look at it and telling me whether you think I’ve missed anything?  Thanks!”  The author will be flattered that someone other than their mom read the book.  This is great, because it will allow you to show the author some of the work relevant to your dissertation, and a few months later you will ask the author to be an external member of your dissertation committee or at least to write you a letter of recommendation.  (UPDATE: a number of people have suggested removing this advice.  Ask your mentors whether they think it makes sense for you.)
  • Send (suitably revised) chapters of your dissertation to journals.  They will almost certainly be rejected the first time, but you’ll (usually) get feedback that is (occasionally) informed and (even) helpful for revision.
  • Send (suitably revised) chapters of your dissertation to conferences.  Be sure to talk to as many people as you can.  You never know when a connection will turn out to be helpful later on.
  • Send other work not from your dissertation (such as revisions of your seminar papers or history paper) to journals and conferences too!  If you are trying to establish an AOS, the easiest way is to have at least one publication in the area.
  • Start preparing your job talk by presenting it at internal colloquia and conferences.


Summer N-1:

  • Finish a draft of your dissertation and prepare to defend it.
  • Ask your advisors for letters of recommendation, providing them both your full CV and a “brag sheet” that lists in bullet form the items from the CV you think that particular letter writer may want to mention in the letter.  Don’t make demands, but do make suggestions.  You should aim to have at least three letters, as well as one letter devoted to your teaching.  More would be good, as long as they’re (very) positive.  Bear in mind that negative letters do get written. Whatever you do, don’t get one of those.  Your Placement Director should look at all of your letters and advise as to which to send and which not to send, as well as the order in which they should be included in your dossier.


August N-1:

  • Craft your job documents by the end of the month.  You don’t want to be working on these while applying – that’s stressful enough on its own!  You’ll need a surprisingly large collection, listed below:
    • Cover Letter Template.  A cover letter should be short and sweet – at most one page unless you have strong indications that a long letter is required.  Put it on electronic letterhead, and be sure to include inside addresses and a scan of your signature.
    • Curriculum Vitae (CV).  A CV lists all of your many accomplishments as succinctly as possible.
    • Biographical Sketch.  This is a one-paragraph description of you and your research, written in the third person.
    • Dissertation Abstract, short.  You will want a one-paragraph abstract of your dissertation, which will typically be included in your CV.
    • Dissertation Abstract, long.  You will also need a longer abstract of your dissertation, approximately two double-spaced pages.
    • Statement of Research.  A research statement describes your most prominent research so far and lays out your plans for future projects.  At most two pages single-spaced.
    • Statement of Pedagogy / Statement of Teaching Philosophy.  A pedagogy statement describes your strengths and experiences as an instructor.  At most two pages single-spaced.
    • Statement of Faith.  If you plan to apply to religious institutions, you will want a statement of faith.  Not all religious institutions require such a statement, but many do.  One or two pages single-spaced.
    • Teaching Portfolio.  A teaching portfolio is not the same thing as a teaching statement.  The portfolio lays out as succinctly as possible which courses you have already taught, includes your student and faculty evaluations, and describes any curriculum development efforts in which you’ve been involved.
    • Sample Syllabi.  A sample syllabus is not a syllabus.  It’s basically a one-paragraph course description followed by a reading list of at most two pages, sequenced into about 13 weeks with thematic headings.  You will want sample syllabi for every course in the union of your AOSs and AOCs, and perhaps for more.  Some schools will want syllabi included in the initial application; others may ask for syllabi prior to the first-round interview; still others will want (even if they don’t say so!) syllabi during the first-round interview.
    • Transcripts.  Get scans of both undergraduate and graduate transcripts, which you may be required to submit with your applications.
    • List of References.  This is a comprehensive list of all your letter-writers, including mailing addresses, email addresses, and phone numbers.
    • Writing Samples.  Yes, samples.  You should aim to have two or three AOSs, so you will want at least one writing sample for each (between 15 and 25 pages, double spaced).  Most schools require at least one writing sample with the initial application.  Some (the more prestigious ones) want several.  In a recent year, the University of Chicago allowed (read: required) applicants to submit as many as six.  Your writing samples can be publications, money chapters from your dissertation, or even other research that you think is of the highest quality.  Revise them.  Revise them again.  Edit the revisions.  Proofread the edits.  You want your writing sample to be so tight you could bounce a quarter off its ass.
    • Research Proposals.  If you plan to apply for post-doctoral fellowships, you will need a research proposal.  It might be good to have a couple.  The most common thematic fellowships for philosophers are in bioethics, but most fellowships are interdisciplinary.  That means your research proposal should be intelligible to a non-specialist audience.  About 2 pages double-spaced.
  • Amass a small fortune.  You should expect to spend between $400 and $2000 on applications, depending on how many you send and how many you need to send via express or priority mail.  You should also expect to spend several hundred dollars on travel and lodging at the APA.


September-December N-1:

  • Apply for every appropriate job you can find.  Even if the AOS doesn’t quite match your profile, it’s worth submitting an application.  After all, you don’t know whether all members of the search committee are committed to the AOS.  Think of each application as a lottery ticket: the more you buy, the better your chances.
  • Defend your dissertation, but don’t deposit it until the spring semester.
  • Participate in at least one mock interview.  Before the interview, practice your “spiel.”  A lot.  As in: obsessively.  The spiel should explain what the problem is that your dissertation addresses, then segue quickly into a discussion of how your dissertation addresses it.  It’ll be the first thing you say after “Hello” during your interviews.  It may well be the most important thing you say in your whole career.

December N-1:

  • Attend the APA Eastern Division conference for interviews.  You should be contacted for interviews by departments in early December, though late November and late (even very late) December are genuine possibilities.  In addition, quite a few schools now conduct their first-round interviews over the phone, via Skype, or simply by asking for more documents (especially writing samples).  Don’t worry if that happens; in fact, it’s probably better than interviewing in person.
  • It’s appropriate to ask who will be conducting your interviews (usually a committee of three people).  Once you know who they are, create a departmental profile in which you note what you might say to each member of the department, and especially what you might say to the members of the search committee.  Include images of the relevant people, so that when you meet them for the first time, you already know who they are.  This will allow you to address them by name more easily.
  • Try not to despair.  Get out of your house as often as you can.  Talk to people.  Talk about not-philosophy.  Drink, but not too much.  Sleep plenty.  Go easy on yourself, if you can manage it.
  • Print out copies of all of your job documents, especially your CV and sample syllabi.  You’ll want to have these readily available at the APA.
  • Go to the APA.  Be sure to arrive a bit early, since weather is often awful and delays may occur.  Don’t bother going to talks unless someone from a relevant school is giving the talk.  Take it easy.  Be sure to stop into the placement office and drop your CV in the bin.  A few schools actually set up interviews on-site.  Who knows, you might land an unexpected interview!  (Yes, this does actually happen, though rarely.  I had one such interview in December 2010.)


January N:

  • Send brief thank-you notes to everyone who interviewed you.  Unless asked to say something substantive, don’t. Sample thank-you note: “Dear [first name] (if I may), Thank you for the opportunity to interview for the position in [AOS] at [School].  I enjoyed our conversation.  Should you have any questions or concerns about my application, please do not hesitate to contact me.  All the best, [me]”
  • Try not to be too antsy while waiting to see whether you’ll be invited for a job talk.  Keep in mind that typically 12-15 candidates receive first-round interviews, and only 3-4 get job talks.  Assuming even odds, you therefore have 20-25% odds of getting a talk at each institution.  That said, a number of schools have ceased doing job talks at all and simply go directly from first-round interviews to job offers.
  • Make sure your job talk is totally prepared.  It should be something you can deliver in about 45 minutes.  (At UK schools, more like 25 minutes.)  Don’t read from a script if you can help it.  Do a mock job talk.  Figure out what questions you’re most likely to get during the Q&A and what to say in response to them.


January-March N:

  • Do your job talks and other campus visits.  Blow them out of the water.  Pray, if you believe in that sort of thing.  Sacrifice animals or virgins or virgin animals, if you believe in that sort of thing.
  • Write short thank-you notes to everyone you met on your campus visits.  Again, don’t go into too much detail unless you have an indication that it wouldn’t be viewed in a negative light.


Spring-Summer N:

  • If necessary, continue applying to positions as they are advertised.  Most will be fixed-term – either post-docs or visiting assistant professorships – but they’ll tide you over until you can find more suitable, permanent employment.
  • Deposit your dissertation.

(UPDATE: thanks to all those who’ve made helpful suggestions, including Hilde Lindemann, Lynne Tirrell, David Chalmers, Ramona Ilea, Errol Lord, and Jack Woods.)

What I said to Brian Leiter

Below are the comments I made on Brian Leiter’s paper, “The Truth is Terrible,” at the “Nietzsche and Community” Conference at Wake Forest University this week.  While I cannot post the content of his talk, my comments summarize his view pretty succinctly, so you can probably get the gist of it even without his paper.

—————————————————————————-

Brian Leiter deftly sums up the terrible truths that might lead a reflective person to despair.  There are the terrible existential truths:

  • We’re all going to die.
  • We’re all going to suffer before we die.
  • There’s no ultimate point to the suffering, or to the dying.

Then there are the terrible moral truths:

  • Society is organized to exploit most people for the benefit of a few.
  • Those few are mostly philistine plutocrats who create little of value.
  • Social and political concerns aside, most people are motivated by base greed, lust, vengeance, spite, and resentment.

Finally, there are the terrible epistemic truths:

  • Most people know very little.
  • Worse, we think we know a lot.
  • Worse still, we can’t help thinking that we know a lot even though there’s good evidence to the contrary.

It’s not a rosy picture of the human situation, and it raises what Leiter calls the Schopenhauerian challenge: why keep living at all?  I hope we can all agree, as he imagines we will, that “Nietzsche was always interested in responding to [this] challenge,” and that Nietzsche’s response to it is somehow tied up with treating the world as an aesthetic phenomenon, taking a Dionysian stance towards the world, or affirming the eternal recurrence.  The question then arises what these flowery ideas amount to.


According to Leiter, the answer to this question does not involve any kind of reason-giving.  There is no rational or cognitive warrant for life.  Instead, the answer refers to how one experiences life, to one’s “affective attachment to life,” which “turns on the causal mechanism of a psychological process: the arousal of affects acting as a narcotic on pain.”  This answer assumes that such an emotional valuation of life is consistent with the terrible truths.  Something can have aesthetic value even if it lacks epistemic, moral, and existential value.  I find nothing problematic with this assumption, but I am worried by the details of Leiter’s interpretation, specifically the notion that in Nietzsche’s positive proposal the affects function primarily as a narcotic.


The wedge I’d like to use in addressing my worry is this: what is the relation between the terrible truths, on the one hand, and the affects, on the other, such that affect – especially positive affect – makes life bearable or even attractive?


We can distinguish a variety of answers that might be given to this sort of question.  One is instrumental: we often advise people to put up with a modicum or even a great deal of suffering now because it’s instrumental to some highly desirable payoff in the future.  Exercise so that you’ll look good at the beach.  Invest so you can enjoy your retirement.  Decline that last beer so you’ll avoid the hangover.  The Christian promise of eternal beatitude fits this model, but it plainly will not do for Nietzsche.


A second, similar way of relating the terrible truths to the affects is through the notion of desert.  This is how the ascetic priest handles things.  Why go on suffering?  Because you deserve it, scum.  Again, such a response will not work for Nietzsche.


Yet another answer to the question Why go on? attempts to debunk suffering: it’s not really suffering because your senses deceive you, or you misinterpret their deliverances, or you’re hyper-sensitive.  Perhaps suffering pales in comparison to aesthetic enjoyment, such that when you consider the terrible truths while listening to Beethoven they lose their terrible character.  This again is evidently not what Nietzsche has in mind.  He never denies the horror of existence; he wants to affirm life despite that horror.


A fourth potential way of relating the terrible truths to the affects is through a utilitarian calculation.  Yes, life is meaningless, society is a mess, and we’re all ignorant; these are genuine detriments, but they’re outweighed by aesthetic joy.  While Leiter does attribute what he calls the minimal hedonic thesis to Nietzsche, that thesis has nothing to do with adding up utiles and finding a net positive.  According to Leiter, the idea is rather that “aesthetic experience is arousing, that it produces a sublimated form of sexual pleasure,” and that this arousal and pleasure act as a narcotic on pain.


This brings us to yet another way of relating the terrible truths to the affects: distraction.  On this view, feeling any powerful affect makes it difficult to attend to anything else, including the terrible truths.  Think of the famous video of the invisible gorilla: when you attend to the antics with the basketballs, you fail to even notice the man in the gorilla suit beating his chest.  In the same way, the “discharge of affect is the sufferer’s greatest attempt at relief, namely at anesthetization – his involuntary narcotic against torment” (GM III:15).  Finite creatures that we are, we can only experience so much at a given time.  So if we distract ourselves sufficiently with emotions (negative emotions in the case of the ascetic ideal, positive ones according to Leiter), then there will simply be no room in consciousness for the terrible truths.


This seems to be an accurate characterization of the strategy of the ascetic priests, but so far as I can tell, Nietzsche never claims that positive affect functions as a narcotic.  Moreover, consider the fact that so much of Nietzsche’s writing is devoted to the terrible truths themselves.  Writing about the terrible truths seems to be a particularly ill-conceived strategy for distracting people from them.  While I of course recognize that reading Nietzsche is itself a source of aesthetic pleasure, it strikes me as implausible to think that his purpose in writing the Genealogy was to distract his readers from the very content of that book with his literary verve.  Mr. Rash and Curious didn’t need to go into the dungeons where values are created.  He could have exulted in Bach’s St. Matthew Passion.  Surely that would have been a more effective distraction.


I want to suggest that, in his mature writings, Nietzsche has a different picture in mind: the relation between the terrible truths and the affects is not one of distraction but of attraction.  Among the things that most seduce Nietzsche to life are the terrible truths themselves.  How could this be?  The answer, I think, has to do with Nietzsche’s notion of gay science and his valuation of the intellectual virtue of curiosity.  I don’t have time here to go through the details of the argument that Nietzsche places great value on curiosity, about which both Bernard Reginster and I have written, but the basic idea is this: for Nietzsche, curiosity is the virtue of overcoming great intellectual resistances by inquiring and investigating into the most problematic features of reality.  Those turn out to be precisely the terrible truths.


In the Preface to the 2nd edition of The Gay Science, Nietzsche says that for someone like him, “The trust in life is gone: life itself has become a problem.  Yet one should not” (as Leiter does) “jump to the conclusion that this necessarily makes one gloomy. […] The attraction of everything problematic […] flares up again and again like a bright blaze over all the distress of what is problematic.”  Odd as this may sound, the terrible truths themselves are the attraction here.  Nietzsche concludes: “We know a new happiness,” the happiness of someone who wants to overcome the hardest questions and understand the most terrible truths.  Or consider Gay Science 324: “‘Life as a means to knowledge’ – with this principle in one’s heart one can live not only boldly but even gaily.”  Knowledge of what?  Of something that distracts from the terrible truths?  On the contrary, “the great passion of the seeker after knowledge” is to live “continually in the thundercloud of the highest problems and the heaviest responsibilities” (GS 351).  This answer is, I think, nascent in Leiter’s paper.  He talks of “restoring our affective attachment to [life] through pleasurable, quasi-sexual affective arousal.”  That’s not a matter of distraction but of attraction.  Attraction to what?  To life.  The very thing the terrible truths are about.


When is it apt to articulate a normative rather than a descriptive theory of some phenomenon (e.g., inference)?

The Notre Dame Institute for Advanced Study’s annual conference is winding down today.  Lots of interesting talks, but one in particular got me thinking: Robert Hanna’s lecture on inference.  Hanna is an avowed Kantian.  The motivating thought of his view is that inference, like morality, is governed by one (or perhaps just a few) universal principles.  In morality, of course, the principle is the categorical imperative, always to act from universalizable maxims.  In inference, the principle is supposed to be something like bivalence of judgment (no proposition may be judged both true and false) or even perhaps a weaker principle that allows for truth-value gluts and gaps but not logical explosion (the entailment of the truth — and falsity — of every proposition from the affirmation of a single inconsistency, which is a feature of classical logic).

While I found Hanna’s grappling with the underlying principles of rationality interesting, I was surprised that he thought he was articulating a theory of inference.  It seemed to me that he was instead giving a theory of sound, or valid, or good inference.  Surely, though, inference is a human practice.  Sometimes we make good inferences, but all too often we make mediocre or bad inferences.  How does a theory of good inference handle such phenomena?  This is certainly not to claim that whatever we do must be good enough.  I’ve argued elsewhere to the contrary.  The worry can be rephrased like this: For which types of x is it apt, when giving a theory of x, to give a theory of a good x?

By way of comparison, consider whether it would be apt, when articulating a theory of personality, to give a theory of virtue that only applied to heroes and saints.  You might worry that if you only have a theory of heroes and saints, it’s going to be very hard to bring your theory to bear on akratic neurotics.  It might instead be better to construct a theory of personality that recognizes all sorts of personalities — good, bad, ambivalent, and indifferent — and then to bolt on, as it were, an evaluative module to this theory.  In this way, you could have an idea of what personality is, and then say that virtue is a matter of having a good personality.  I’m not convinced that this is the best way to approach personality and character, but it’s not prima facie crazy.

In the same way, you might think that in order to construct a theory of inference, it makes sense first to articulate a vision of how people move from the set of beliefs and judgments they have at t to the set of beliefs and judgments they have at t+1, whether they’re rational, irrational, or arational in so updating their doxastic states.  Then once that theory is on the table, one could bolt on an evaluative module: a good inference then would be an inference that conforms to the highest (or perhaps high enough) evaluative standards.  Again, I’m not convinced that this is the right way to go about things, but it doesn’t seem prima facie absurd (which, I fear, is what Hanna thought when I brought the point up during Q&A).

Metaphorically, the question is whether to commit to an additive or a subtractive model of theory construction.  Does one first construct a theory of x, and then add on some features that explain when something is a good/bad x, or does one first construct a theory of a good x and then treat phenomena that don’t quite satisfy the theory as pathological?  This is a very hard question, which I don’t intend to answer here, but it’s something that comes up all over the place.  For instance, in philosophy of biology, does one start with a theory of an organism that doesn’t include all the messy things that happen to organisms such as diseases, disorders, birth defects, etc.?  Or should one start with a theory of how organisms actually turn out in the world and then attempt to come up with evaluative standards that apply to those things?  In epistemology, is it better to start from a theory of belief (whether true or false, justified or unjustified, reflective or unreflective), and then bolt on evaluative modules for the normative properties?  Or is it better to start from a theory of, say, reflective knowledge and then describe subtle deviations from reflective knowledge as pathological cousins (e.g. Gettiered justified true belief, justified untrue belief, unjustified true belief, etc.)?

As I say, this is a very hard question to even approach, let alone answer.  Nevertheless, I’ll venture a guess: it’s inappropriate to commit to either an additive or a subtractive model of theory construction.  Instead, one needs to develop both simultaneously, shifting back and forth to adjust one with the other.  A normative theory that’s too demanding is useless and even risks failing to be a theory of the phenomenon it purports to be about.  A merely descriptive theory of a phenomenon with robust normative components will tend to be a baggy monster: unsystematic, unexplanatory, unsatisfying.  Instead, we should use our normative theories to give structure and a semblance of unity to our descriptive theories, while we should allow our descriptive theories to suggest ways to relax the overly demanding elements of our normative theories.  The dispute between Kahneman and Gigerenzer could benefit from an irenic, two-track approach like this.  Perhaps Hanna’s theory of inference could as well.

Eponymous virtue and vice terms

How is the reference of a virtue predicate like ‘honest’ fixed?  How is the reference of a vice predicate like ‘cowardly’ fixed?  Two candidates from philosophy of language suggest themselves: descriptivism and direct reference.  Very roughly, on the descriptivist view, the meaning of a trait term is given by a set of associated predicates which, when satisfied, indicate that the term applies.  For example, someone is honest if and only if she never lies, never cheats, and never steals: H(s) iff ~[L(s) v C(s) v S(s)].  On the direct reference view, by contrast, the meaning of a trait term is fixed by an inaugural act of directly referring to a property, for instance by pointing to an exemplar: honesty is whatever trait she has.

Linda Zagzebski argues somewhere (I forget where) that virtue and vice terms are better understood on the direct reference model than the definite description model.  I find that claim preposterous as a universal generalization, but I think it might apply to some trait terms: eponymous ones.  Some examples are ‘maverick’, ‘quixotic’, ‘chauvinistic’, ‘sadistic’, ‘draconian’, and ‘quisling’.  You might even think that ‘Christian’ fits the mold.  One interesting thing about such trait terms is that, if their meanings really are fixed by direct reference, then it should be possible to make new ones.  And if it’s possible to create new virtue terms, it might just be possible to create new virtues.

first post actually on philosophy

With that throat-clearing out of the way, I figured I should actually write something about philosophy, which is after all the ostensible topic of this blog.  Let’s start with a seemingly autobiographical question: Why would anyone work on both Nietzsche and empirically informed ethics?

It’s a fair question.  After all, it would be odd to think of Nietzsche as an empirically informed ethicist.  Though he did sometimes have nice things to say about science, mostly he had in mind his own field of philology, and it’s far from clear that even his supposedly historical works (e.g. The Genealogy of Morals) are truly empirically informed.  Much of the second essay of the Genealogy is about prehistory, so that’s purely speculative, if not mythological.  The third essay is more of a rumination on the ascetic ideal than an empirically informed investigation.  That leaves the first essay, which could be construed as a historical account of the origins of Christianity, but as history it’s pretty thin.  At the very least, Nietzsche’s works are nothing like those of Richard Brandt, John Doris, Jesse Prinz, and Chandra Sripada.

So then why the attraction?  What makes Nietzsche more interesting to an empirically informed moral psychologist than, say, Spinoza, or even Kant?  A few things:

  1. Nietzsche, though he never ran a controlled experiment, was an extremely astute observer of human thought, feeling, and behavior.
  2. Nietzsche possessed a humbling understanding of classical texts from the Latin and Greek traditions, but also was familiar with texts from the Indian subcontinent; this familiarity made available ways of thought and feeling that most of us culture-bound 21st-century folks find hard to fathom.
  3. Nietzsche was a committed atheist, and so resisted the kinds of bullshit explanations that all too easily occur and appeal to both agnostics and theists.
  4. Nietzsche seems to have been pathologically sensitive to the role that affect and emotion play in moral psychology.

On 1., the point is that it’s possible to arrive at empirically informed hunches through anecdotal experience, provided that one is sufficiently sensitive, observant, imaginative, and suspicious.  After all, what drives psychological research is the hunches of psychologists, which they derive from (among other things) their everyday observations, imaginings, and suspicions.  Nietzsche had those attributes in spades, so it stands to reason that, even though he was unable to test any of his hunches, they’re good hunches to start from.

On 2., I must confess that I’m more interested in arriving at the truth than in preserving what used to be thought, regardless of its truth value.  Call me quixotic.  But that said, I recognize that one of the ways in which we often fail to arrive at the truth is by failing to realize that there are more possibilities than seem apparent.  Just to illustrate: consider the rise of expressivism in metaethics.  It might initially seem that there are only two ways to characterize an utterance of the form, “X is wrong.”  Either it’s true, or it’s false.  What the expressivists tried to do (regardless of whether they succeeded) was to show that, instead of making an assertion, such an utterance might be a completely different sort of speech act — an expression of disapprobation, which is neither true nor false.  One could of course arrive at the expressivist position through an act of imaginative creativity, but creativity is hard.  Really hard.  Moreover, precisely when creativity is called for, it often seems irrelevant.  So another way to arrive at the expressivist position is to cull through the storehouse of ideas that we call the history of philosophy.  I think it would be uncontroversial to say that Nietzsche had better access to that storehouse — especially to some of the more remote, ancient Greek regions of that storehouse — than most.

On 3., the point should be obvious once comprehended.  Nietzsche would never have been tempted to conclude, with Moore, that goodness was a simple, undefinable, non-natural property.  Where Moore saw simplicity and purity, Nietzsche sought out dirty historical contingency.  Nietzsche, after a brief flirtation, became unable to take seriously the Kantian picture of agency, with its unmoved movers and theological overtones.  His own picture of human nature might be dismaying; it might be cynical; it might even be wrong.  But it certainly is not mawkishly religious.

On 4., I will confess that I know less about Nietzsche’s biography than others, but from his published writings he seems to have been more than ordinarily sensitive to affect and emotion, and to the causal role they play in our psychological economies.  This probably caused him a lot of suffering, but it also seems to have made available insights that less sensitive people can only attain through statistical studies.  Jon Haidt’s work on social intuitionism, along with much other recent empirical work, seems to bear out many of the (self-)observations that Nietzsche made.  Our behavior is subtly but powerfully governed by the seemingly trivial and often imperceptible interplay of the affects.  In my own work, I discuss how both moral virtues (generosity, compassion, courage) and intellectual virtues (curiosity, creativity, flexibility) are surprisingly sensitive to situational influences on the emotions.  People become more generous, more compassionate, and better disposed to notice subtle threats when they’re in a good mood.  They become more curious, more creative, and more flexible when they’re in a good mood.  (Negative moods are more complicated — a point I hope to discuss in another post.)  The mood needn’t be severe or even noticeable to have this kind of effect.

There are probably other reasons to favor Nietzsche.  He calls psychology the queen of the sciences in Beyond Good and Evil.  He’s arguably the best stylist in the history of philosophy.  He’s almost certainly the best polemicist since Diogenes.  But for those of us who strive to be empirically informed, Nietzsche has special attractions.