The Notre Dame Institute for Advanced Study’s annual conference is winding down today. Lots of interesting talks, but one in particular got me thinking: Robert Hannah’s lecture on inference. Hannah is an avowed Kantian. The motivating thought of his view is that inference, like morality, is governed by one (or perhaps just a few) universal principles. In morality, of course, the principle is the categorical imperative, always to act from universalizable maxims. In inference, the principle is supposed to be something like bivalence of judgment (no proposition may be judged both true and false) or even perhaps a weaker principle that allows for truth-value gluts and gaps but not logical explosion (the entailment of the truth — and falsity — of every proposition from the affirmation of a single inconsistency, which is a feature of classical logic).
While I found Hannah’s grappling with the underlying principles of rationality interesting, I was surprised that he thought he was articulating a theory of inference. It seemed to me that he was instead giving a theory of sound, or valid, or good inference. Surely, though, inference is a human practice. Sometimes we make good inferences, but all too often we make mediocre or bad inferences. How does a theory of good inference handle such phenomena? This is certainly not to claim that whatever we do must be good enough. I’ve argued elsewhere to the contrary. The worry can be rephrased like this: For which types of x is it apt, when giving a theory of x, to give a theory of a good x?
By way of comparison, consider whether it would be apt, when articulating a theory of personality, to give a theory of virtue that only applied to heroes and saints. You might worry that if you only have a theory of heroes and saints, it’s going to be very hard to bring your theory to bear on akratic neurotics. It might instead be better to construct a theory of personality that recognizes all sorts of personalities — good, bad, ambivalent, and indifferent — and then to bolt on, as it were, an evaluative module to this theory. In this way, you could have an idea of what personality is, and then say that virtue is a matter of having a good personality. I’m not convinced that this is the best way to approach personality and character, but it’s not prima facie crazy.
In the same way, you might think that in order to construct a theory of inference, it makes sense first to articulate a vision of how people move from the set of beliefs and judgments they have at t to the set of beliefs and judgments they have at t+1, whether they’re rational, irrational, or arational in so updating their doxastic states. Then once that theory is on the table, one could bolt on an evaluative module: a good inference then would be an inference that conforms to the highest (or perhaps high enough) evaluative standards. Again, I’m not convinced that this is the right way to go about things, but it doesn’t seem prima facie absurd (which, I fear, is what Hannah thought when I brought the point up during Q&A).
Metaphorically, the question is whether to commit to an additive or a subtractive model of theory construction. Does one first construct a theory of x, and then add on some features that explain when something is a good/bad x, or does one first construct a theory of a good x and then treat phenomena that don’t quite satisfy the theory as pathological? This is a very hard question, which I don’t intend to answer here, but it’s something that comes up all over the place. For instance, in philosophy of biology, does one start with a theory of an organism that doesn’t include all the messy things that happen to organisms, such as diseases, disorders, birth defects, etc.? Or should one start with a theory of how organisms actually turn out in the world and then attempt to come up with evaluative standards that apply to those things? In epistemology, is it better to start from a theory of belief (whether true or false, justified or unjustified, reflective or unreflective), and then bolt on evaluative modules for the relevant normative properties? Or is it better to start from a theory of, say, reflective knowledge and then describe subtle deviations from reflective knowledge as pathological cousins (e.g., Gettiered justified true belief, justified untrue belief, unjustified true belief, etc.)?
As I say, this is a very hard question even to approach, let alone answer. Nevertheless, I’ll venture a guess: it’s inappropriate to commit to either an additive or a subtractive model of theory construction. Instead, one needs to develop both simultaneously, shifting back and forth to adjust one with the other. A normative theory that’s too demanding is useless and even risks failing to be a theory of the phenomenon it purports to be about. A merely descriptive theory of a phenomenon with robust normative components will tend to be a baggy monster: unsystematic, unexplanatory, unsatisfying. Instead, we should use our normative theories to give structure and a semblance of unity to our descriptive theories, while we should allow our descriptive theories to suggest ways to relax the overly demanding elements of our normative theories. The dispute between Kahneman and Gigerenzer could benefit from an irenic, two-track approach like this. Perhaps Hannah’s theory of inference could as well.