• fdrake
    5.8k
    This is a tangential discussion from another thread. Nagase and I were discussing whether it's appropriate to account for the reference mechanism in requests - like 'Will you get me some water Jake?' - through an algebra of declaratives with propositional content / assertions that can be associated with the request - equivalence classes of {'I (Jake) will not get you some water'} and {'I (Jake) will get you some water'}.

    First, note that the fact that semantics is primarily concerned with truth conditions does not mean that it cannot account for speech acts other than assertions. For it may account for them in a derivative way. Here's an example of such a treatment: we may take questions to be a partition of propositions, that is, a collection of equivalence classes of propositions, namely its (conflicting) answers, such that each (conflicting) answer lies in exactly one equivalence class. (I think David Lewis adhered to something like this.) So the semantics of questions is derivative of the semantics of its possible answers, which in turn are (generally) assertions. So one may take a term's contribution to the semantics of questions to be its contribution to the truth conditions of its answers.Nagase

    Do you have any explicit references for it? My initial response would probably be that you could equally define equivalence classes on requests - two requests are equivalent just when they have the same possible answers - and one could derive the contribution to the semantics of assertions to be whatever it is that contributes to requests. Both seem about as theoretically elegant, though the procedure you suggested has the benefit that the philosophical tradition has written a lot more about the semantics of propositions than about more general speech acts.

    I'm suspicious of both procedures, as they're clearly constructions made to reduce cases external to a theory to objects of that theory. Associating a structured set of possible assertions with every request, and claiming that the meaning of the referring term in the request is to be found only through the meaning of its possible answers (or vice versa), really just takes the fact that both instances of reference work the same (making them also equivalent) and chooses one instance as a representative of the equivalence class of reference mechanisms over the other.

    This construction would have to be done for every performative to complete the account, and sure, it's logically possible to do so (except perhaps for expletives or calls to attention?), but the only reason it works is that we understand the speech act well enough to define its equivalence relation to begin with - well enough to construct a series of possible answers consistent with each type of action appropriate to the request. In essence, precisely what we need to account for is used to construct the answer in this account, without ever describing what we need to account for.

    In other words: Where does the association between the request and the equivalence relation take place? Does it occur in the use of language? Is it an event? Does it describe the relation of the request with every possible answer to it, or does it just use an understanding we already have to form the partition? I think the latter.
  • frank
    14.5k
    I think the idea of a proposition comes from reflection on our interactions with the world which frequently have the form of questioning. Where am I going? What will I eat? How do I talk to her? Where are my keys? A proposition is an answer to some question. The speaker is the world. Maybe we could back off of asking where and when all this happens since it's a post hoc analysis.

    This accounts for the persistent concept of the unspoken proposition, or the unknown truth. I could say this format comes from the structure of the mind, but that's like a plant trying to see its roots.
  • Banno
    23.1k
    Searle?
  • fdrake
    5.8k
    Searle?Banno

    Dunno where this is going really. It was just a side topic from another thread that was so involved that discussing it would derail the original thread.
  • Banno
    23.1k


    Searle presented an extended zoology of speech acts, describing a range of acts in terms of their propositional content, function and intent of the speaker.

    So a question - without finding the book and checking - is an incomplete proposition spoken by someone with the intent of discovering the missing element... roughly.
  • andrewk
    2.1k
    It doesn't feel like a partition to me. That may be because in most instances it will be a rhetorical question. It's just a different way to say 'Please get me a glass of water', to which the response is an action of the hands, not the mouth - handing the glass of water to the speaker.

    I find these types of speech acts easier to analyse under Wittgenstein's approach which essentially asks what the purpose of the speech act is, rather than what its meaning is. In this case the purpose is to obtain a glass of water.

    To me, 'purpose' is a more robust and universal concept than 'meaning'.
  • Nagase
    197


    First, the references you asked for: for my general approach to semantic matters, I think the essays by Lewis are invaluable (even if you end up disagreeing with him). In this connection, I recommend especially "Languages and Language" and "General Semantics", which you can find, along with his other papers, on this website; note that the latter essay also contains a discussion about how to reconstruct the other moods in terms of declarative sentences. Since I'm not a semanticist (though I'm largely sympathetic to formal semantics, in particular the tradition stemming from Montague and developed by Barbara Partee), in the specific case of the semantics of questions I just gave a quick glance at the relevant article in the Cambridge Handbook of Formal Semantics (I can send you a copy of the handbook if you like), just to check that I wasn't misremembering the partition approach.

    Going back to the discussion, note that the two situations you described are not symmetrical. We have a reasonably well-developed semantic theory for declarative sentences (say, Montague grammar and extensions thereof). But we don't have a well-developed semantic theory for questions (and other "moods") that is independent of truth condition semantics, or of declaratives more generally. So we may hope to extend our analyses of declarative sentences to other types of sentences, but there is little hope of going in the reverse direction, since we don't even know where to start in that case. That's why we try to understand questions in terms of "answerhood" conditions, whereas no one (that I know of!) has tried to formulate a semantics for declarative sentences in terms of "questionhood" conditions.

    As for how to connect all this with language user competence, that is a hard question. Barbara Partee comments on it in her "Semantics: Mathematics or Psychology?", and Lewis also tries his hand at it in "Languages and Language". Lewis's answer is interesting, though controversial: languages are abstract mathematical structures, and they are connected with users by way of linguistic conventions that ensure that most participants of a linguistic community employ largely overlapping languages. This explains why we can give exact mathematical descriptions of (fragments of) some languages---we are just describing mathematical structures!---and also explains the social dimension of languages---they enter the scene through the conventions in place. I find this a very attractive and elegant picture, personally, though I haven't thought it through yet.
  • fdrake
    5.8k


    Can you walk us through an example please?
  • Banno
    23.1k
    I'd have to find the book...
  • Banno
    23.1k
    From p. 66:

    Question
    Propositional content:
    Any proposition or propositional function.

    Preparatory conditions:
    i. S does not know 'the answer', i.e., does not know if the proposition is true, or, in the case of the propositional function, does not know the information needed to complete the proposition truly (but see comment below).
    ii. It is not obvious to both S and H that H will provide the information at that time without being asked.

    Sincerity condition:
    S wants this information.

    Essential condition:
    Counts as an attempt to elicit this information from H.

    Note:
    There are two kinds of questions, (a) real questions, (b) exam questions. In real questions S wants to know (find out) the answer; in exam questions, S wants to know if H knows.

    Similar accounts are given for a range of illocutions.
  • Terrapin Station
    13.8k
    This is a tangential discussion from another thread. Nagase and I were discussing whether it's appropriate to account for the reference mechanism in requests - like 'Will you get me some water Jake?' - through an algebra of declaratives with propositional content / assertions that can be associated with the request - equivalence classes of {'I (Jake) will not get you some water'} and {'I (Jake) will get you some water'}.fdrake

    Easy answer: in most cases, no, that wouldn't be at all appropriate.

    There might be a few people who think that way about it in those situations, but that would be extremely unusual.
  • sime
    1k
    Going back to the discussion, note that the two situations you described are not symmetrical. We have a reasonably well-developed semantic theory for declarative sentences (say, Montague grammar and extensions thereof). But we don't have a well-developed semantic theory for questions (and other "moods") that is independent of truth condition semantics, or of declaratives more generally. So we may hope to extend our analyses of declarative sentences to other types of sentences, but there is little hope of going in the reverse direction, since we don't even know where to start in that case. That's why we try to understand questions in terms of "answerhood" conditions, whereas no one (that I know of!) has tried to formulate a semantics for declarative sentences in terms of "questionhood" conditions.Nagase

    To my way of thinking, the meaning of a question is trivial by way of causation; a question refers to whatever answer the questioner finds acceptable. And if the questioner comes to later reject the answer he previously accepted, this is a situation in which the semantics of the questioner's question has changed - to referring to whatever new answer the questioner is now prepared to accept.

    By this trivial cause-and-effect account, I don't see a hard distinction between questions and answers.
  • Number2018
    550
    Where does the association between the request and the equivalence relation take place? Does it occur in the use of language? Is it an event?fdrake

    A few additional dimensions ground the association between the request and the equivalence relation. First, there is the intention (desire) to speak, which is an affectively determined vector of the production of subjectivity, of the process of self-affirmation. Next, there is a set of ethico-political values and choices that are manifested through the accomplishment of any speech act. Therefore, the association between the request and the equivalence relation, as well as words and grammatical forms, provides the necessary logical and technical possibilities for a potential linguistic performance. Yet they are not sufficient. The speech act that occurs “here and now” is an event; it is the individuation, singularization, and actualization of the potentiality of language, while the pre-personal affective forces and post-personal ethico-political social forces are its crucial determinants.
  • fdrake
    5.8k
    First, the references you asked for: for my general approach to semantic matters, I think the essays by Lewis are invaluable (even if you end up disagreeing with him). In this connection, I recommend especially "Languages and Language" and "General Semantics", which you can find, along with his other papers, on this website; note that the latter essay also contains a discussion about how to reconstruct the other moods in terms of declarative sentences. Since I'm not a semanticist (though I'm largely sympathetic to formal semantics, in particular the tradition stemming from Montague and developed by Barbara Partee), in the specific case of the semantics of questions I just gave a quick glance at the relevant article in the Cambridge Handbook of Formal Semantics (I can send you a copy of the handbook if you like), just to check that I wasn't misremembering the partition approach.Nagase

    I just realised that we glossed over a really important pragmatic distinction. A question is pragmatically quite different from a request: one asks a question to get an informative response; one makes a request to ask someone to do something.

    From the point of view of associating speech acts (and the referring terms in them) with declaratives, perhaps this distinction doesn't matter very much: one could say a request for someone to do X is associated with "someone did X for me" (or "someone did not do X for me" for the other class) as a declarative, and claim that the reference mechanism for the term in the request is derivative of that in the (equivalence classes of) declaratives.

    Going back to the discussion, note that the two situations you described are not symmetrical. We have a reasonably well-developed semantic theory for declarative sentences (say, Montague grammar and extensions thereof). But we don't have a well-developed semantic theory for questions (and other "moods") that is independent of truth condition semantics, or of declaratives more generally. So we may hope to extend our analyses of declarative sentences to other types of sentences, but there is little hope of going in the reverse direction, since we don't even know where to start in that case. That's why we try to understand questions in terms of "answerhood" conditions, whereas no one (that I know of!) has tried to formulate a semantics for declarative sentences in terms of "questionhood" conditions.Nagase

    This is a kind of mathematical justification; Lewis does the same for possible worlds, claiming it is 'philosophically fruitful' to think of things that way because it allows one to address many problems assuming the felicity of the account. What this is really backed up by, though, is the existence of research programs that deal primarily with truth-apt expressions, and a vital part of those research programs is the attempt to extend their scope, just as one would do when researching mathematical methodology irrespective of the field.

    A major difference between these kinds of theory extension, however, is that extensions in mathematical research result either in the production of theorems, which are true given their assumptions by definition, or in modelling techniques, which are checked against the phenomena they model. The kind of model produced by extending a semantics of declaratives to other speech acts does not have this virtue, and so must be assessed on its merits differently.

    The kind of model suggested in the paper you linked ("Languages and Language" by David Lewis) characterises a language as a symbol set S, a meaning set M, and a function f from S to M which associates each symbol with its meaning. This is later expanded to ambiguous or polyvalent strings by mapping each polyvalent element of S to a subset of M. You can thus write a language as a triple {S, M, f}, where f maps each s in S to its meaning m in M.
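
    To fix ideas, here's a minimal sketch of that triple in Python - the strings and 'meanings' are invented placeholders of mine, not Lewis's examples:

        # A toy rendering of the language-as-triple picture: a symbol set S, a
        # meaning set M, and a function f from strings to meanings. Ambiguous
        # (polyvalent) strings are sent to subsets of M rather than single members.

        S = {"snow is white", "grass is green", "bank"}

        M = {"SNOW-IS-WHITE", "GRASS-IS-GREEN", "RIVERSIDE", "FINANCIAL-INSTITUTION"}

        f = {
            "snow is white": {"SNOW-IS-WHITE"},
            "grass is green": {"GRASS-IS-GREEN"},
            "bank": {"RIVERSIDE", "FINANCIAL-INSTITUTION"},  # polyvalent string
        }

        language = (S, M, f)  # the triple {S, M, f} described above

    Nothing in an object like this tells you how f got filled in, which is the worry I develop below.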

    This is worth examining. The first comment I have is that M needs to be a structured set: there are intended meanings/responses for speech acts as well as interpreted meanings/responses. How would one go about constructing such a function f, or structuring the set? How would one impose this dyadic structure on M? The selection criterion for associating s with m through f is precisely already having 'access' to, or a successful interpretation of, (one sense of) s. One could construe this as a strength, in a sense similar to how Lewis characterises the status of accessibility relations: because we do not specify a structure for f, any f can be used for modelling purposes. But consider that, for a specific language, constructing f means encoding the meaning of s as m for each pair, and the means of that encoding is precisely our pre-theoretic understanding of the strings in S. In that regard, Lewis has not given an account of any language, or even of any general structure of language; he has given us a procedure for associating the strings of a language with meanings, given that such an account of meaning and strings has already been developed.

    Elements of S are then associated with attitudinal properties, specifically that elements of S (when well formed) are uttered truthfully in normal circumstances. This is fine when dealing with elements of a language which are truth apt, but one of the modelling assumptions - the set of principles by which Lewis characterises what it means for a community to use a language - constrains that use to sentences which are truth apt. This paints a picture of language as communicative, and, when not communicative, as derivative of communication, rather than as a historical, expressive practice whose use does things (pace Austin).

    At face value anyway, this has a rather problematic link to the treatment of non-declaratives in 'General Semantics'. The treatment is, if I have it right, to parse a command like "Go to bed!" into a pair {command, "it is the case that you are in bed"}, where the latter sentence has some tree-based parse structure. The latter is called the sentence radical - which by construction is truth apt. However, the full structure is not truth apt.
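
    As a rough sketch of that parse (the tuple representation and field names are my shorthand, not Lewis's notation):

        from collections import namedtuple

        # Mood marker plus truth-apt sentence radical, following the
        # {command, "it is the case that you are in bed"} split above.
        SpeechAct = namedtuple("SpeechAct", ["mood", "radical"])

        go_to_bed = SpeechAct(mood="command",
                              radical="it is the case that you are in bed")

        # The radical on its own is truth apt; the pair as a whole is not.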

    Lewis, however, gives us a remedy for this hole in the account - a way of associating truth values with general speech acts. What this requires is reinterpreting truth and falsity to be consistent with the demands of the semantics of performatives.

    Hard but possible: you can be play-acting, practicing elocution, or impersonating an officer and say 'I command that you be late' falsely, that is, say it without thereby commanding your audience to be late. I claim that those are the very circumstances in which you could falsely say 'Be late!'; otherwise it, like the performative, is truly uttered when and because it is uttered. It is no wonder if the truth conditions of the sentences embedded in performatives and their nondeclarative paraphrases tend to eclipse the truth conditions of the performatives and non-declaratives themselves.

    It is a bit rich to say that the sentence is 'embedded' in the performative when it is merely associated with it. Anyway,

    A performative is uttered truly when the intension (like {command} in the previous example) is the state of mind of the utterer, and falsely otherwise. While this is a way of assigning two categories to performatives (at least those which can be parsed as an intension and propositional radical), there is little reason for labelling these as true or false in precisely the same sense in which 'It is raining' is true just when it is raining. A proposition like that is true if and only if the propositional content occurs, whereas something other than the propositional content of the sentence radical must occur for the performative to be 'true' or 'false'. So, I would grant Lewis that he has found a way of associating a function with a boolean image with each sentence which can be parsed in that manner, but I very much doubt that this function is a mapping to truth values with their usual interpretation.
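
    The contrast I have in mind, spelled out crudely (the predicates here are stand-ins of mine, not anything Lewis defines):

        # Two functions with a boolean image, answering to different facts.
        # A declarative radical is true iff its propositional content obtains;
        # the performative pair is 'true', in the reinterpreted sense, iff the
        # speaker really is in the corresponding state of mind when uttering it.

        def radical_true(you_are_in_bed: bool) -> bool:
            # "it is the case that you are in bed" is true iff you are in bed
            return you_are_in_bed

        def performative_true(speaker_is_commanding: bool) -> bool:
            # {command, radical} is 'true' iff the utterer is actually commanding,
            # whether or not the radical's content ever obtains
            return speaker_is_commanding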

    I couldn't see the construction of equivalence classes explicitly in either paper, but I might've missed it.

    Edit: I suppose what's at stake in our discussion is really the same thing whether we're talking about the equivalence class construction or the constructions in the two David Lewis papers you linked; is language primarily a means of communication? Does it make sense to account for language use by giving an account which renders all language use derivative of declaratives?

    Edit2: hashtag changing 'boolean function' to 'function with a boolean image' because you're talking with a logician.
  • Nagase
    197


    There are a couple of things here. Let's tackle each issue separately, in turn.

    (1) On the merits of extending formal semantics from declaratives to other speech acts: You say that, in a mathematical setting, fruitfulness is assessed either by the production of more theorems or by the exactness of the modelling activity; if I understood you correctly, you say that neither of these obtains in the case of formal semantics. Well, I disagree. Obviously, as you pointed out, I believe that formal semantics is a worthwhile enterprise. And it's simply a fact that the formal semantics of declarative sentences is currently a well-developed research program. So why not extend this approach to other speech acts? In fact, that is precisely what formal semanticists have been doing. I claim that the fruitfulness of this approach can be assessed in the same way as a mathematical research program, in particular, in the exactness of the models produced. I would also add a further dimension, also analogous to mathematics: in many cases, it's less important to prove theorems than to coin new definitions (e.g. Dedekind's ideal theory), which serve to unify phenomena that were previously considered separately (e.g. the behavior of primes in certain number fields and the behavior of curves in function fields). This, explanation by unification, is one particular case of a virtue of a mathematical theory, namely its explanatory power. Now, I want to argue that formal semantics does provide us with added explanatory power. In particular, by showing what is common to apparently distinct speech acts (or moods), it allows us to explain a greater variety of phenomena than we could before.

    (2) Lewis's account deals with the meaning of sentences simpliciter, not with structured sentences: You are right that Lewis's account assigns meaning directly to strings and not to subsentential parts. That is a problem, though one that he is well aware of (see the "Objections and Replies" section). In the paper, he devises the notion of a grammar to deal with this situation (if you are aware of them, a grammar in Lewis's sense is just a generalized inductive definition). Basically, a grammar consists of (i) a lexicon of basic constituents, (ii) a set of operations which allow us to derive more complex constituents out of the basic ones, and (iii) a function which assigns meanings to the derived constituents (an example of a grammar would be the one developed by Lewis in "General Semantics"). Unfortunately, as most linguists know, although there is always a unique language generated by a grammar, there will in most cases be multiple grammars associated with a given language. And, given Lewis's demanding definition of conventions, there is, as he explains in the paper, little sense to the idea of a grammar being conventionally used by a population.

    But then again, why would we want the grammar to be conventionally used by a population? Perhaps it makes more sense to say that the language is used conventionally by a population, while the grammar may obey a different set of constraints (if Chomsky is correct, for example, the grammar will be innate). And given these other constraints, we may obtain a way to associate a grammar with each conventionally used language which explains how the language was learned in the first place, how it answers to compositionality, etc.

    Here, it is important to understand what Lewis's explanatory target is. He is not interested in explaining the semantics of a language, or in providing a general structure theory for languages. Rather, he is interested in explaining how semantics interacts with pragmatics: given that languages are mathematical structures, how can it be that language is also a social phenomenon? Or, to put it another way, how can it be that Montague and Austin are investigating the same thing? Lewis's answer is: because of conventions, where a convention can be understood as an answer to a coordination problem (there is a strong game-theoretic component in his account, which is clear if you read his Convention). That is, there are conventions in place which make us behave as users of a given language. Given that the coordination problem involved is defined in terms of actions and beliefs, and these can only interact with sentences (or utterances of sentences), it makes sense for him to focus on a very coarse-grained view of languages, which focuses on the interpretation of sentences. This also chimes in with the idea that semantics feeds sentences into pragmatics, so to speak.
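
    To illustrate the game-theoretic flavour (toy payoffs of my own, not anything from Lewis's text):

        # A bare coordination problem: two speakers each settle on a language;
        # coordination succeeds only if they settle on the same one. On Lewis's
        # picture, a convention is a stable, self-sustaining solution to a
        # problem of this shape.

        payoffs = {
            ("L1", "L1"): (1, 1),
            ("L2", "L2"): (1, 1),
            ("L1", "L2"): (0, 0),
            ("L2", "L1"): (0, 0),
        }

        # Both (L1, L1) and (L2, L2) are equilibria; which one a population ends
        # up at is fixed by convention, not by anything intrinsic to L1 or L2.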

    I agree that he gives pride of place to communication here (he is pretty explicit on this), and that there is little room in his account of conventions for the more creative aspects of language use as explored by Austin. But I see this as a reason to modify his account, not to reject it outright, perhaps by emphasizing the non-conventional aspects of language use.

    (3) Lewis's account is entirely focused on declaratives: Correct, though he does offer an extension, even in "Languages and Language" (this is one of the objections he answers), which is similar in spirit to the one in "General Semantics". Incidentally, in "General Semantics", the mood of a sentence is not simply associated with the sentence, but is built into its structure as one of the nodes in the phrase marker. Given that the performative is actually the root of the phrase marker, one could identify the performative with the whole tree; then the sentence radical will be the root of a subtree of the phrase marker, and will thus indeed be embedded into the performative. So I think Lewis's terminology is apt here.

    Anyway, it's clear that Lewis's treatment is merely illustrative and there are a multitude of ways to extend the basic ideas into a more general treatment of language. Here, however, we go back to the problem you highlighted before, namely whether the approach as a whole is tenable. I think this is equivalent to the question of whether formal semantics, as practiced in the Montague tradition, is a worthwhile endeavor. Since I believe it is a worthwhile endeavor (largely for the reasons I gave in (1)), I think the approach is tenable.

    By the way, Lewis does not give a treatment of questions in terms of equivalence classes in those papers. He does give a treatment of subject matter in terms of equivalence classes in "Statements Partly About Observation", and I think it's clear that you may identify subject matter in Lewis's sense with questions. The basic idea is given there in terms of possible worlds: questions partition the set of possible worlds into equivalence classes of answers. So a question is given meaning in terms of its answerhood conditions, as it is interpreted in terms of equivalence classes of answers.
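
    A bare-bones sketch of that construction (the worlds and the question are invented placeholders):

        # Questions as partitions: a question groups possible worlds into
        # equivalence classes, two worlds landing in the same cell just when the
        # question has the same complete answer at both. Worlds here are toy dicts.

        worlds = {
            "w1": {"number_of_planets": 8},
            "w2": {"number_of_planets": 8},
            "w3": {"number_of_planets": 9},
        }

        def answer(world):
            # the complete answer to "How many planets are there?" at a world
            return world["number_of_planets"]

        partition = {}
        for name, w in worlds.items():
            partition.setdefault(answer(w), []).append(name)

        print(partition)  # {8: ['w1', 'w2'], 9: ['w3']}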
  • fdrake
    5.8k
    (1) On the merits of extending formal semantics from declaratives to other speech acts: You say that, in a mathematical setting, fruitfulness is assessed either by the production of more theorems or by the exactness of the modelling activity; if I understood you correctly, you say that neither of these obtains in the case of formal semantics. Well, I disagree. Obviously, as you pointed out, I believe that formal semantics is a worthwhile enterprise. And it's simply a fact that the formal semantics of declarative sentences is currently a well-developed research program. So why not extend this approach to other speech acts? In fact, that is precisely what formal semanticists have been doing. I claim that the fruitfulness of this approach can be assessed in the same way as a mathematical research program, in particular, in the exactness of the models produced. I would also add a further dimension, also analogous to mathematics: in many cases, it's less important to prove theorems than to coin new definitions (e.g. Dedekind's ideal theory), which serve to unify phenomena that were previously considered separately (e.g. the behavior of primes in certain number fields and the behavior of curves in function fields). This, explanation by unification, is one particular case of a virtue of a mathematical theory, namely its explanatory power. Now, I want to argue that formal semantics does provide us with added explanatory power. In particular, by showing what is common to apparently distinct speech acts (or moods), it allows us to explain a greater variety of phenomena than we could before.Nagase

    I think formal semantics is a worthwhile area of research, but I think you're treating the conditions of adequacy for theories in it as far too much like pure math and far too little like applied mathematics or statistics. The subject matter of formal semantics, when applied to natural languages, is natural languages. Whether a formal semantics in this case explains anything must be assessed with reference to the natural language (or natural languages) targeted, not simply by extensions of the theory. Let's look at an example.

    The Lotka-Volterra equations in mathematical biology, say, are induced by an intuition linking the abundances of a predator-prey species pair to one another: predator numbers depend on prey numbers through resource availability, and prey numbers depend on predator numbers through the killing rate.
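
    For reference, the standard textbook form of the equations (not quoted from either paper), with x the prey abundance and y the predator abundance:

        \frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y

    where \alpha, \beta, \gamma, \delta are the prey growth, predation, predator death, and predator reproduction rates respectively.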

    You could, at this point, say that 'predator-prey dynamics can be explained by calculus', but that wouldn't be the whole story. As a model, the Lotka-Volterra equations are terrible. They do not reproduce most phenomena in predator-prey webs, and don't accurately match even the simplified predator-prey dynamics they target. The reality is much messier than those equations suggest.

    In the same sense, I believe the extensions of formal semantics from propositions to general speech acts should be checked against language, not just against consistency with previous research in that area. The truth of an account of natural language should be measured by its accord with natural language, not vouched for simply as an extension of a theory of a subset of it. It has to be a good model, as well as consistent with the theory, to count as an increase in explanatory power.

    I agree that he gives pride of place to communication here (he is pretty explicit on this), and that there is little room in his account of conventions for the more creative aspects of language use as explored by Austin. But I see this as a reason to modify his account, not to reject it outright, perhaps by emphasizing the non-conventional aspects of language use.Nagase

    I don't think an appropriate extension to the project would look to non-conventional aspects of language use, as this would allow convention to be equated with Lewis's account of conventions for truth-apt (or derivative-of-truth-apt) sentences. "I do" is part of our conventions for marriage; "I love you" is as much performative as declarative; and so on. We all understand this because such things are conventions of language use.

    I mean, look at "I do": how would you explain to someone what that means? You'd have to explain the institution of marriage, the legal status of being married, the emotional commitments involved; it makes sense against a background of sexual relationships and moral norms like fidelity and honesty. You will get a lot more insight into the meaning of "I do" by describing the role it plays in language than by describing a sentential function that maps it to the propositional content of "X assents to marriage with Y at time t in place p". Even if we have filtered out the propositional content, the meaning of "I do" is to be found in the relation of that propositional content to its context, and an exegesis thereof would care little about whether "I do" itself is truth apt.

    Maybe my intuition is the opposite of yours: I see it as much more intuitive to 'feed pragmatics into semantics'.

    That is, there are conventions in place which make us behave as users of a given language. Given that the coordination problem involved is defined in terms of actions and beliefs, and these can only interact with sentences (or utterances of sentences), it makes sense for him to focus on a very coarse-grained view of languages, which focuses on the interpretation of sentences. This also chimes in with the idea that semantics feeds sentences into pragmatics, so to speak.Nagase

    Well yes, of course this makes sense; it makes sense in exactly the same way that the Lotka-Volterra equations model real-world predator-prey dynamics. They are a toy model demonstrating a principle about the feedback between predator abundance and prey abundance, and when held up to real predator-prey dynamics they are found sorely lacking in scope. I'm left with the feeling that a narrowly circumscribed model is being mistaken for the real dynamics it targets.

    Nevertheless, I don't want to throw the baby out with the bath water: sentences and truth-apt expressions are an excellent playground for philosophical theories about how language interfaces with the world, sufficiently simple to allow the development of precise formalisms which nevertheless have quite general scope.

    (3) Lewis's account is entirely focused on declaratives: Correct, though he does offer an extension, even in "Languages and Language" (this is one of the objections he answers), which is similar in spirit to the one in "General Semantics". Incidentally, in "General Semantics", the mood of a sentence is not simply associated with the sentence, but is built into its structure as one of the nodes in the phrase marker. Given that the performative is actually the root of the phrase marker, one could identify the performative with the whole tree; then the sentence radical will be the root of a subtree of the phrase marker, and will thus indeed be embedded into the performative. So I think Lewis's terminology is apt here.Nagase

    It's embedded in 'the sentence' when 'the sentence' is treated as containing a mood operator. The convention to put the mood operator at the beginning of the parse tree's terminal nodes rather than as an operator on a parse tree of a sentence is really arbitrary here. To say it is embedded in the sentence is to say the mood is part of the sentence content, but the possibility of 'falsehood' of the mood in Lewis's sense shows that this isn't the case. {command, "Go to bed"} is false when I'm not commanding someone to go to bed, despite saying so. If I utter the speech act {command, "Go to bed"}, how am I to make sense of the fact that I was, say, joking with my friend, solely through the 'falsehood' of the speech act? There are far more ways not to be a thing than to be a thing.

    You might offer the rejoinder that this would simply be a different speech act, {joke, "Go to bed"}, but I could make precisely the same kind of argument against that. Regardless, Lewis is leveraging our folk theory of language to do such parsing, which is little more than an invitation to understand '"Go to bed", Sally joked' as {joke, "Go to bed"} with "Sally" as an indexical. In that regard it's hardly an 'account' of language use at all; it is little more than a procedure to mathematise expressions into a far more comfortable formal set+function form than the nuanced messiness of their original utterance.

    Another rejoinder here references Chomsky's innateness hypothesis with regard to grammars, which you've leveraged elsewhere. I'm quite happy to grant that this provides some insights about the possible grammars for languages, but I don't think it's particularly relevant to including mood terms/indexicals in the parse tree, considering one is a mental state or pro-attitude and the other is a sentence or proposition. To be sure, a command can be modelled as being in a grammatical imperative mood, but Lewis does seem to interpret the truth conditions of these moods as obtaining when their corresponding mental state occurs during the utterance. Do you really understand anything more about imperatives by attaching them as indexical prefixes to a parse tree, or as an operator on a parse tree? It seems to me, again, that the focus here isn't really on understanding imperatives; it's on providing a procedure to mathematise imperative expressions. All the 'explanatory power' in the account for understanding natural language is given by the folk-theoretic categorisation of speech acts into sentences with a grammatical mood, or by the structure of parse trees on those sentences. Once one has recognised an imperative, it will be represented thusly... but how do we understand imperatives so as to make these constructions? That is beyond the scope of these papers.

    Carlo Rovelli gives an example in one of his talks about simple harmonic motion and pendulums: we do not care about the 'love stories of bacteria' inside the pendulum when modelling it as a simple harmonic oscillator. What we do care about is, well, how the whole thing moves. Of course, simple harmonic motion would go on forever, while the real pendulum stops - which is unanticipated by the theory and invites an extension (adding a damping term). In this manner, the real world shouts "No!" to our simple model.
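
    In equations (the standard textbook forms, nothing specific to Rovelli's talk): the ideal oscillator \ddot{x} + \omega_0^2 x = 0 oscillates forever, while adding a damping term gives \ddot{x} + 2\gamma \dot{x} + \omega_0^2 x = 0, whose solutions decay to rest - the extension the stopping pendulum demands.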

    How does the real world shout "No!" to a formal semantics for a (subset of) natural language?

    Perhaps a more precise form of this question: let us grant that all speech acts are to be understood as derivatives of declaratives, how does this 'derivativeness' show up in either the use or acquisition of language?

    Edit: for clarification, I don't mean this in a vulgar verificationist sense; I mean: how does natural language inspire formal semantics, and how can a formal semantics be checked for adequacy beyond internal consistency and 'philosophical fruitfulness = being part of the hard core of a research program'?