Philosophy of Language » Lecture 8
A first attempt to use context to capture the behaviour of the indexical I might go something like this:
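Perhaps a clause along these lines (a sketch in my own wording, reconstructed from the discussion below):

\[ \textit{I}, \text{ as uttered in a context } c, \text{ refers to the speaker of } c. \]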
When is a sentence, possibly containing context-sensitive expressions, true?
The original thought was that contextual variation was rather like modal and temporal variation (Kaplan 1979: 81–82) – that truth relative to an index was like truth relative to a possible world, but there were a lot more indices than possible worlds:
So when I actually assert I am Antony, I am the speaker of this context, and the sentence then expresses a truth about the actual world, namely, that Antony is Antony. When Sylvester says it, it expresses the falsehood that Sylvester is Antony.
Consider my current true utterance of this sentence:
Since \(a\) is the speaker in \(c\), and \(p\) is the place in \(c\), then when we apply Contextual Truth† we get this:
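Presumably the result is something like this (a reconstruction, assuming (1) is I am here and that indices have the form \(\langle a, p, t, w\rangle\)):

\[ (1) \text{ is true at } c = \langle a, p, t, w\rangle \iff a \text{ is located at } p \text{ at } t \text{ in } w. \]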
And since the speaker is actually at the place of utterance at the time of utterance, this predicts that (1) is actually true – which of course it is.
As Kaplan notes, this makes (1) come out roughly to say the same thing as
This looks desirable; these seem to make much the same contribution to a conversation and to say much the same thing.
But now we’ve rendered ourselves unable to accommodate the contingency of (1). Consider
Applying Contextual Truth†, assuming that necessarily quantifies over indices:
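Presumably the result, (5), runs along these lines (a reconstruction):

\[ (4) \text{ is true at } c \iff (1) \text{ is true at every proper index,} \]

where a proper index \(\langle a', p', t', w'\rangle\) is one in which \(a'\) is located at \(p'\) at \(t'\) in \(w'\).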
Since (1) is true at every proper index, (5) predicts that (4) will be true. But it’s not. While I am in fact here (in this place), I certainly didn’t need to be (Kaplan 1979: 83).
While every proper index is such that its speaker is at its place, and so makes I am here true, that shouldn't be enough to make it necessarily true.
there are difficulties in the attempt to assimilate the role of a context in a logic of demonstratives to that of a possible world in the familiar modal logics…. (Kaplan 1979: 83)
Kaplan’s solution is that contexts of utterance should determine the indices with respect to which indexical expressions have contents, and possible worlds should serve as circumstances of evaluation, with respect to which contents have referents/truth values.
So this gives us a revised truth rule:
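A sketch of the shape such a rule takes (the details may differ from the official formulation):

\[ S \text{ is true at a context } c \text{ and circumstance } w \iff \text{the content } S \text{ expresses in } c \text{ is true at } w, \]

and Necessarily \(S\) is true at \(c\) iff the content \(S\) expresses in \(c\) is true at every world. On this rule, I am here expresses in my context the contingent content that Antony is in this place, so the necessitated claim (4) correctly comes out false.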
And here the world of the utterance context can diverge from the world which is fixed by the circumstance of evaluation.
The content of an expression is always taken with respect to a given context of use. Thus when I say ‘I was insulted yesterday’ a specific content – what I said – is expressed. Your utterance of the same sentence, or mine on another day, would not express the same content. … This content … has been often referred to as a ‘proposition’. So my theory is that different contexts … produce not just different truth values, but different propositions.
… I call that component of the sense of an expression which determines how the content is determined by the context, the ‘character’…. Just as content … can be represented by functions from possible worlds to extensions, so characters can be represented by functions from contexts to contents. The character of I would then be represented by the function … which assigns to each context the content which is represented by the constant function from possible worlds to the agent of the context. (Kaplan 1979: 83–84)
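In the functional idiom of this passage, the character of I might be written as follows (a paraphrase of the quoted proposal, not Kaplan's own symbols):

\[ \mathrm{char}(\textit{I}) \;=\; \lambda c\,.\,\lambda w\,.\,\mathrm{agent}(c) \]

Fixing a context \(c\) yields a constant function from worlds to the agent of \(c\): the content of I is rigid, even though its character is context-sensitive.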
| | Rigid | Non-Rigid |
|---|---|---|
| Context-Sensitive | Indexicals: I, here | The man next to me now |
| Invariant | Names: Antony, Gödel | The man next to Antony on April 7, 1993 |
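As a toy illustration of two cells of this table (a sketch in Python, not anything from Kaplan; the Context class, the worlds, and all the function names are invented for the example), characters can be modelled as functions from contexts to contents, and contents as functions from worlds to extensions:

```python
# A toy model: characters are functions from contexts to contents,
# and contents are functions from possible worlds to extensions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    agent: str
    place: str   # included for completeness; unused below
    world: str   # the world of the context; likewise unused below

# Invented worlds: who is located where in each world.
LOCATION = {
    "w_actual":         {"Antony": "Oxford", "Sylvester": "Paris"},
    "w_counterfactual": {"Antony": "Paris",  "Sylvester": "Oxford"},
}

def character_I(c: Context):
    """Character of 'I': in any context, a constant function from worlds
    to the agent of that context (so the content is rigid)."""
    return lambda w: c.agent

def character_person_in_oxford(c: Context):
    """Character of 'the person in Oxford': context-invariant but non-rigid,
    since its value varies from world to world."""
    return lambda w: next(x for x, loc in LOCATION[w].items() if loc == "Oxford")

c = Context(agent="Antony", place="Oxford", world="w_actual")
content_I = character_I(c)
content_desc = character_person_in_oxford(c)
print(content_I("w_actual"), content_I("w_counterfactual"))        # Antony Antony
print(content_desc("w_actual"), content_desc("w_counterfactual"))  # Antony Sylvester
```

Here character_I yields a rigid content (the same individual at every world), while the description-style character yields a non-rigid one, matching the indexical and invariant-description cells of the table.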
What is it that a competent speaker of English knows about the word ‘I’? … It is the character of ‘I’ …. Thus that component of sense which I call ‘character’ is best identified with what might naturally be called ‘meaning’.…
- (c) In all contexts, an utterance of [‘I am here’] expresses a true proposition [i.e., when evaluated at the context itself].
On the basis of (c), we might claim that [‘I am here’] is analytic (i.e., it is true solely in virtue of its meaning), although [‘I am here’] rarely or never expresses a necessary proposition. This separation of analyticity from necessity is made possible – even, I hope, plausible – by distinguishing the kinds of entities of which ‘is analytic’ and ‘is necessary’ are properly predicated: characters (meanings) are analytic, contents (propositions) are necessary. (Kaplan 1979: 84–85)
Kaplan’s proposal makes some analytic sentences contingent, which seems wrong.
Explicitly inspired by Kripkean ideas, Stalnaker suggests that sentences like I am here, which are true whenever they are uttered, are examples of the contingent a priori:
An a priori truth is a statement that, while perhaps not expressing a necessary proposition, expresses a truth in every context. (Stalnaker 1978: 83)
A famous example from the history of philosophy is from Descartes:
I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind. (Descartes 1641, AT 25)
Our working hypothesis identifies the propositional content of a sentence with a function from possible worlds to truth values.
Given this, our circumstances of evaluation only need to include a possible world, since that is the only thing that a proposition’s truth value varies with respect to.
But some have argued that the circumstance needs other indices too:
The most popular candidate for a second index is a time. The view that propositions can have different truth-values with respect to different times — and hence that we need a time index — is often called ‘temporalism’. The negation of temporalism is eternalism. (Speaks 2024: §2.3.2)
Kaplan himself includes time amongst the indices of the context to fix the referents of temporal indexicals like now, but also thinks that contents can vary in truth value over time, and so is a temporalist (Kaplan 1979: 91–92).
By contrast, Richard (1981) argues for eternalism; Zimmerman (2005) offers a comprehensive overview.
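Schematically, the difference lies in the type of the content (a gloss on the two views, with \(W\) the set of worlds and \(T\) the set of times):

\[ \text{eternalism: content} : W \to \{\text{true}, \text{false}\} \qquad \text{temporalism: content} : W \times T \to \{\text{true}, \text{false}\} \]

On the temporalist picture, the circumstance of evaluation is accordingly a world–time pair.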
So much for the framework; we now have the capacity to model words which are sensitive to features of the context.
But how can we tell whether a word is context-sensitive, so that we might add an index for it in our models of contexts?
One standard test is the indirect report test: roughly, an expression is context-sensitive just in case utterances containing it cannot, in general, be accurately reported from another context by a disquotational says that report, i.e., a report that re-uses the very same words in the that-clause.
This test obviously counts I, now, and the easy cases as context-sensitive – you cannot report my utterance of I am Antony by saying Antony said that I am Antony.
Terms for relative directions, like ‘left’, seem to be almost as obviously context-sensitive as ‘I’; the direction picked out by simple uses of ‘left’ depends on the orientation of the speaker of the context. But we can typically use ‘left’ in disquotational ‘says’ reports of the relevant sort. Suppose, for example, that Mary says
The coffee machine is to the left.
Sam can later truly report Mary’s speech by saying
Mary said that the coffee machine was to the left.
despite the fact that Sam’s orientation in the context of the ascription differs from Mary’s orientation in the context of the reported utterance. Hence our test seems to lead to the absurd result that ‘left’ is not context-sensitive. (Speaks 2024: §2.3.1)
If \(A\) and \(B\) both utter \(S\) and can be reported as agreeing, say, with ‘\(A\) and \(B\) agree that \(S\)’, then that is evidence \(S\) is semantically invariant across its distinct utterances. If, on the contrary, distinct utterances cannot be so reported, this is evidence \(S\) is not semantically invariant across its distinct context of utterance. (Cappelen and Hawthorne 2009: 54–55)
A variant agreement test:
| Expression | Indirect Report | Agreement | Variant Agreement |
|---|---|---|---|
| actually | + | + | + |
| flat | ?/+ | ? | + |
| tasty | ? | - | ?/+ |
Another interesting class of context-sensitive expressions are pronouns: you, she, they, it,….
But the way that pronouns get their reference fixed is not always by picking up on features of the real-world context in which they are uttered (Evans 1980).
Sometimes they pick up on what was already said in the conversation:
Sometimes the pronoun picks up on more general features of the context:
Bound variable pronouns are governed by quantifiers:
A general treatment: the meaning of a pronoun in context is a temporary assignment of an extension (Portner 2005: 103–8).
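A minimal sketch of this idea in the standard assignment-function notation (the notation is assumed here, not taken from Portner): relative to a context \(c\) and an assignment \(g\),

\[ \textit{she}_1 \text{ denotes } g(1), \]

where the assignment \(g\) is what the context, or a binding quantifier, temporarily supplies.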
Here, again, in the most natural reading, it cannot be referential. A speaker of this sentence would not be referring to some particular donkey, Flossy, and saying that every man who owns a donkey beats Flossy. There is no consensus, however, on what the pronoun here does mean. (Elbourne 2011: 116)
The problem: in (11), it looks like the logical form should be something like this:
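Something like the following first-order rendering (a standard reconstruction, assuming (11) is the classic donkey sentence Every man who owns a donkey beats it):

\[ \forall x\,\big((\mathrm{man}(x) \wedge \exists y\,(\mathrm{donkey}(y) \wedge \mathrm{owns}(x, y))) \rightarrow \mathrm{beats}(x, y)\big) \]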
The problem is evident in the logical form: the last occurrence of \(y\) is unbound. What we want is for the quantifier a donkey to somehow bind that final it, but it would seem to be unable to ‘reach out’ of its clause to bind the pronoun.
The same phenomenon is seen in explicitly conditional donkey anaphora (13); the inability of a quantifier to bind pronouns outside its scope is illustrated by the otherwise parallel (14):
In (15), the pronoun it is anaphoric on the material in the preceding sentence; it means something descriptive like: the donkey Amy owns. This is a so-called E-type pronoun (Evans 1980).
Parasitic on this usage, perhaps it in (11)/(13) likewise has the force of a description (Heim and Kratzer 1998: 295; King and Lewis 2021: §3.3).
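On this sort of view, the truth conditions of (11) come out roughly as follows (a sketch using a definite-description operator, not a quotation from Heim and Kratzer):

\[ \forall x\,\big((\mathrm{man}(x) \wedge \exists y\,(\mathrm{donkey}(y) \wedge \mathrm{owns}(x, y))) \rightarrow \mathrm{beats}(x, \iota z\,(\mathrm{donkey}(z) \wedge \mathrm{owns}(x, z)))\big) \]

with it going proxy for the description the donkey he owns.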
A problem for the theories sketched above comes from cases like this:
This is a problem because, whether him has definite or numberless force, there are two bishops here, and incorrect truth conditions will apparently be predicted.
An utterance of (17) in a context \(c\) expresses (more or less) the proposition At the time and place of \(c\), the weather is rainy.
This is context-sensitive; the agreement test, in particular, demonstrates it, since when Alice in Palo Alto says It’s raining, and Bob in Oxford says at the same time It’s not raining, they are not disagreeing.
Can we explain this, using Kaplan’s content/character distinction? The only way would be something like this:
The word ‘rain’, when used in a context \(c\), refers to the property of raining at the location contextually salient at \(c\). (Donaldson and Lepore 2012: 124)
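Put in a lambda-style formulation (my rendering of the quoted clause, not Donaldson and Lepore's, with \(\mathrm{loc}(c)\) for the location contextually salient at \(c\)):

\[ \text{the content of } \textit{rain} \text{ in } c \;=\; \lambda t\,.\,\lambda w\,.\,\mathrm{rains}(t, \mathrm{loc}(c)) \text{ holds in } w. \]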
A problem arises with this sentence:
Suppose (18) were uttered in Adelaide. Going by the quasi-Kaplanian context-sensitive truth conditions we just offered, plus compositionality, we get this:
In order to assign a truth-value to [It’s raining], as I just did, I needed a place. But no component of [this] statement stood for a place. The verb ‘raining’ supplied the relation rains\((t,p)\) – a dyadic relation between times and places, as we have just noted. The tensed auxiliary ‘is’ supplies a time, the time at which the statement was made. ‘It’ doesn’t supply anything, but is just syntactic filler. ([Perry’s footnote:] Note that if we took ‘It’ to be something like an indexical that stood for the location of the speaker, we would expect ‘It is raining here’ to be redundant and ‘It is raining in Cincinnati but not here’ to be inconsistent.) So Palo Alto is a constituent of the content of my son’s remark, which no component of his statement designated; it is an unarticulated constituent. Where did it come from? (Perry 1986: 138)
So we need context to supply a restriction on the domain of quantification, so that an utterance of (20) is only about a certain restricted domain:
One problem is that the speaker of (20) need not be smoking for them to utter (20) truly. Compare:
The puzzle is this: (17) (It’s raining) seems to be location-sensitive, but no expression in (17) is location-sensitive, given that (18), which embeds it, is not.
The natural thought is that (17) might be context-sensitive in some new way. Namely, what (17) expresses depends on its surrounding linguistic context.
Maybe: what (17) and (20) express aren’t propositions, but propositional radicals (Bach 1994); like propositions, but with a ‘gap’:
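A sketch of what such a radical might look like, using Perry's dyadic rains\((t, p)\) from the passage above (with now standing in for the contextually supplied time):

\[ \lambda p\,.\,\mathrm{rains}(\textit{now}, p) \]

a proposition-like object still waiting for a place \(p\).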
What’s crucial is that the gap can be filled in either by context, as with (17) and (20), or by further explicit content as in (18) and (21).
The most obvious solution is this: when someone expresses a propositional radical in \(c\), the ‘gaps’ are all filled by \(c\). (Elbourne 2011: 122–24)
But there is a problem. It’s easiest to see in the case of quantifier domain restriction.
Everyone is asleep and being monitored by a research assistant. (Soames 1986: 357)
Every sailor waved to every sailor. (Stanley and Williamson 1995; Stanley and Szabó 2000: 249)
Exactly two people irritated everyone.
Suppose Alice and Bob are gatecrashers, who arrived uninvited and behaved obnoxiously to all the invited guests. Intuitively, there is a reading of (26) which is true even if Alice and Bob did not irritate themselves (or each other). But a framework in which context supplies a single domain cannot explain the truth of (26) in this case: if the single domain includes Alice and Bob, then everyone ranges over them too and the sentence comes out false on the stipulated facts; if it excludes them, then exactly two people no longer ranges over Alice and Bob at all. Hence:
contextual supplementation works at the level of constituents of sentences or utterances, rather than the level of the sentences or utterances themselves. (Soames 1986: 357)
A sentence like (26) seems to have two gaps, on the propositional radical view, made explicit as follows:
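Schematically (my own gap notation, not a quotation):

\[ \text{Exactly two people}\ \langle \mathrm{gap}_1 \rangle\ \text{irritated everyone}\ \langle \mathrm{gap}_2 \rangle. \]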
We can’t have both gaps filled by the global context, since that gives the wrong results for (24)–(26).
So how do those gaps get filled? We need some mechanism, otherwise the theory will be consistent with all sorts of bizarre gap-fillers – e.g., when the first gap is filled by people with red hair, and the second by people with blue eyes – which aren’t intended or plausible readings of the sentence at all.
It seems like we need some theory of how this works without postulating novel and unattested context-sensitive mechanisms.
To deal with (24)–(26), we need context to supply domains to constituents of the sentence:
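Something along these lines, with \(d_{1}\) and \(d_{2}\) as contextually supplied domains (a sketch of the intended reading, not a quotation):

\[ [\text{Exactly two } x : \mathrm{person}(x) \wedge x \in d_{1}]\;[\text{every } y : \mathrm{person}(y) \wedge y \in d_{2}]\; \mathrm{irritated}(x, y) \]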
As long as context supplies to \(d_{1}\) the set of all party-goers, and to \(d_{2}\) the set of just the invited party-goers, this semantics will get the intended reading of the utterance of (26).
This is the covert variable approach: it says that the syntax of people/one includes an unpronounced domain variable, which behaves in context somewhat like this/that place.
A sentence [like every bottle is on the shelf] can communicate a proposition concerning a restricted domain of bottles, because, relative to certain contexts, it expresses such a proposition. It expresses such a proposition relative to certain contexts because common nouns such as ‘bottle’ always occur with a domain index. It follows that, in the logical form of quantified sentences, there are variables whose values, relative to a context, are (often restricted) quantifier domains. (Stanley and Szabó 2000: 258)
(An omitted syntactic tree.)
(Another omitted syntactic tree.)
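A simplified sketch of how such a domain index might work compositionally (my notation; Stanley and Szabó's own implementation is more refined, precisely so that the index can contain a bindable variable): relative to a context \(c\) and assignment \(g\),

\[ \text{the extension of } \textit{bottle}_{i} \;=\; \{x : x \text{ is a bottle}\} \cap g(i), \]

where \(g(i)\) is the (often restricted) domain that context supplies as the value of the index \(i\).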
Stanley points out that there is a reason to posit covert pronouns/variables in the syntax: that those variables can be bound by quantifier phrases (Stanley 2005: 240).
If (29) is uttered in \(c\), it expresses the proposition that on every occasion when John lights a cigarette, it rains at the occasion of the lighting.
So, Stanley says, It rains has the syntax it rains at \(p\); the covert pronoun \(p\) is bound by the quantifier in (29), and is assigned a content directly by context when it is not so bound.
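On this treatment, the bound reading of (29) can be glossed roughly as follows (a sketch of the truth conditions just described, not Stanley's own formalism; the event variable \(e\) and the functions time and loc are introduced for illustration):

\[ \forall e\,\big(\mathrm{lighting}(e, \textit{John}) \rightarrow \mathrm{rains}(\mathrm{time}(e), \mathrm{loc}(e))\big) \]

with the covert \(p\) in it rains at \(p\) bound so as to pick up the location of each lighting.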