The linguistic analogy

Here’s a draft of the section of the Stanford Encyclopedia of Philosophy entry on the so-called linguistic analogy.  As always, questions, comments, suggestions, and objections are most welcome.

 

———————————————————————————–

 

Mikhail’s Elements of Moral Cognition 

John Rawls (1971) suggested that Noam Chomsky’s (1957) generative linguistics might provide a helpful analogy for moral theorists – an analogy that Gilbert Harman (2000, 2008; Roedder & Harman 2010) has speculatively explored and which experimentalists have recently investigated (Hauser 2006; Hauser, Young, & Cushman 2008; Mikhail 2007, 2008, 2011).  There are several points of purported contact:

 

L1: A child raised in a particular linguistic community inevitably ends up speaking an idiolect of the local language despite lack of explicit instruction, lack of negative feedback for mistakes, and grammatical mistakes by caretakers.

 

M1: A child raised in a particular moral community inevitably ends up judging in accordance with an idiolect of the local moral code despite lack of explicit instruction, lack of negative feedback for moral mistakes, and moral mistakes by caretakers.

 

L2: While there is great diversity among natural languages, there are systematic constraints on possible natural languages.

 

M2: While there is great diversity among natural moralities, there are systematic constraints on possible natural moralities.

 

L3: Language-speakers obey many esoteric rules that they themselves would not recognize.

 

M3: Moral agents judge according to esoteric rules (such as the doctrine of double effect) that they themselves would not recognize.

 

L4: Drawing on a limited vocabulary, a speaker can express a potential infinity of thoughts.

 

M4: Drawing on a limited moral vocabulary, an agent can express a potential infinity of moral judgments.

 

Pair 1 suggests the “poverty of the stimulus” argument, according to which there must be an innate language (morality) faculty because it would otherwise be impossible for children to learn what they do, in the way that they do.  However, as Prinz (2008) points out, the moral stimulus may be less penurious than the linguistic stimulus: children are typically punished for moral violations, whereas their grammatical violations are often ignored.  Nichols, Kumar, & Lopez (unpublished manuscript) lend support to Prinz’s contention with a series of Bayesian moral-norm learning experiments.

 

Pair 2 suggests the “principles and parameters” approach, according to which, though the exact content of linguistic (moral) rules is not innate, there are innate rule-schemas, the parameters of which may take only a few values.  The role of environmental factors is to set these parameters.  For instance, the linguistic environment determines whether the child learns a language in which noun phrases precede verb phrases or vice versa.  Similarly, say proponents of the analogy, there may be a moral rule-schema according to which members of group G may not be intentionally harmed unless p, and the moral environment sets the values of G and p.  As with the first point of analogy, philosophers such as Prinz (2008) find this comparison dubious.  Whereas linguistic parameters typically take just one of two or three values, the moral parameters mentioned above can take indefinitely many values and seem to admit of diverse exceptions.

 

Pair 3 suggests that people have knowledge of language (morality) that is inaccessible to consciousness but implicitly represented, such that they produce judgments of grammatical (moral) permissibility and impermissibility that far outstrip their own capacities to reflectively identify, explain, or justify.  One potential explanation of this gap is that there is a sub-personal “module” for language (morality) that has proprietary information and processing capacities.  Only the outputs of these capacities are consciously accessible.

 

Pair 4 suggests the linguistic (moral) essentiality of recursion, which allows the embedding of type-identical structures within one another to generate further structures of the same type.  For instance, noun phrases can be embedded in other noun phrases to form more complex noun phrases:

 

the calico cat → the calico cat (that the dog chased) → the calico cat (that the dog [that the man owned] chased) → the calico cat (that the dog [that the man {who was married to the heiress} owned] chased)
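The generative power of this kind of embedding can be made vivid with a short recursive sketch (a toy illustration only; the function name and inputs are invented here and are not part of the entry). Each recursive call embeds a further relative clause inside the subject noun phrase, producing a new noun phrase of the same syntactic type:

```python
def noun_phrase(depth, nouns, verbs):
    """Build a center-embedded noun phrase.

    At each level, the current noun's relative clause takes as its
    subject another noun phrase, which may itself contain a relative
    clause, and so on to the requested depth.  A finite vocabulary
    plus one recursive rule thus yields unboundedly many structures
    of the same type.
    """
    if depth == 0 or len(nouns) < 2:
        return f"the {nouns[0]}"
    inner = noun_phrase(depth - 1, nouns[1:], verbs[1:])
    return f"the {nouns[0]} that {inner} {verbs[0]}"

# Reproduces the chain above (brackets omitted):
print(noun_phrase(2, ["calico cat", "dog", "man"], ["chased", "owned"]))
# → the calico cat that the dog that the man owned chased
```

Nothing in the vocabulary grows as the depth increases; the unbounded output comes entirely from re-applying the one embedding rule, which is the point of L4 (and, by analogy, M4).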

 

Moral judgments, likewise, can be embedded in other moral judgments to produce novel moral judgments:

 

“Thou shalt not kill” (Deuteronomy 5:17) → “Ye have heard that it was said of them of old time, Thou shalt not kill; and whosoever shall kill shall be in danger of the judgment: But I say unto you, that whosoever is angry with his brother shall be in danger of the judgment.” (Matthew 5:21-22)

 

Another example: plausibly, if it’s wrong to x, then it’s wrong to persuade someone to x and wrong to coerce someone to x, and therefore also wrong to persuade someone to coerce someone to x.  Such moral embedding has been experimentally investigated by John Mikhail (2007, 2008, 2011), who argues on the basis of a large number of experiments using variants on the “trolley problem” (Foot 1978) that moral judgments are generated by imposing a deontic structure on one’s representation of the causal and evaluative features of the action under consideration.

 

As with any analogy, there are points of disanalogy between language and morality.  Within a given dialect, lay judgments about whether a given sentence is grammatical tend to be nearly unanimous, whereas, even within a given “moral dialect,” there is a great deal of variance in lay judgments about whether a given action is permissible.  Moral judgments are also, at least sometimes, corrigible in the face of argument, whereas grammaticality judgments seem to be incorrigible.  People are often tempted to act contrary to their moral judgments, but not to their grammaticality judgments.  Finally, recursive embedding seems able to generate all of language, whereas in the moral domain it may apply only to deontic judgments about actions, and not, for instance, to judgments about norms, institutions, situations, and character traits.  Indeed, it’s hard to imagine what recursion would mean for character traits: does it make sense to think of honesty being embedded in courage to generate a new trait?  If it does, what would that trait be?

 

References:

 

Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton.

Foot, P. (1978). Virtues and Vices and Other Essays in Moral Philosophy. Berkeley, CA: University of California Press; Oxford: Blackwell.

Harman, G. (2000). Explaining Value and Other Essays in Moral Philosophy. New York: Oxford University Press.

Harman, G. (2008). Using a linguistic analogy to study morality. In W. Sinnott-Armstrong (ed.), Moral Psychology, volume 1, pp. 345-352. MIT Press.

Hauser, M. (2006). Moral Minds: How Nature Designed a Universal Sense of Right and Wrong. New York: Ecco Press/Harper Collins.

Hauser, M., Young, L., & Cushman, F. (2008). Reviving Rawls’s linguistic analogy: Operative principles and the causal structure of moral actions. In W. Sinnott-Armstrong (ed.), Moral Psychology, volume 2, pp. 107-144. MIT Press.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence, and the future. Trends in Cognitive Sciences, 11, 143-152.

Mikhail, J. (2008). The poverty of the moral stimulus. In W. Sinnott-Armstrong (ed.), Moral Psychology, volume 1, pp. 353-360. MIT Press.

Mikhail, J. (2011). Elements of Moral Cognition: Rawls’s Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment. Cambridge University Press.

Nichols, S., Kumar, S., & Lopez, T. (unpublished manuscript). Rational learners and non-utilitarian rules.

Prinz, J. (2008). Resisting the linguistic analogy: A commentary on Hauser, Young, and Cushman. In W. Sinnott-Armstrong (ed.), Moral Psychology, volume 2, pp. 157-170. MIT Press.

Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.

Roedder, E. & Harman, G. (2010). Linguistics and moral theory. In J. Doris (ed.), The Moral Psychology Handbook, pp. 273-296. Oxford University Press.
