
Strategic Conversation

The dialogue theory discussed in this paper deals with discourse generation and interpretation and their link to principles of rational decision-making and action. Paul Grice’s theories, used as a jumping-off point in the following work, supply a starting point for these links because he holds that conversational actions adhere to strong cooperative principles. Building on these links, the authors aim to infer “(i) when inferences about conversation are safe, and (ii) that Gricean cooperativity is equivalent to an alignment of the agents’ preferences…”

However, despite using Gricean analysis as a basis for strategic conversation analysis, it has some shortcomings. Cooperative conversation requires each agent to believe what the other says and, through what each reveals, to help the other achieve their goals (whatever those may be). Gricean models also make assumptions about speech acts and the agents’ intentions. For example, if one agent asks a question, they are assumed to want to know the answer and to have that desire recognized. But what if someone isn’t trying to cooperate? Haggling, bargaining, misdirection, self-promotion, and insulting others are all examples of non-cooperative conversation. In these cases the speaker’s purposes dictate that they withhold information, because withholding it is paramount to achieving their own goals.

People’s interests don’t always align when they converse, and it’s easy to see why. Your conversational intentions might diverge from those of your partner when you’re, say, trying to get the best price on a product, investing, competing for a prize, or trying to get your parents to do something for you. Yet people still draw implicatures even when cooperation at the level of intention is missing. This makes Gricean cooperativity a poor fit for such situations: on a Gricean account, one shouldn’t be able to draw implicatures from a conversation when one side isn’t willing to cooperate with the interests of the other, but in practice people do.

So, rather than force implicatures out of conversations in strategic contexts using models built on cooperativity, the authors aim to provide an alternative framework for deriving implicatures, one that handles contexts where the parties to the conversation diverge in their intentions and preferences.

To deal with the kinds of divergence in intention discussed previously, the authors hold that the following three rules must be part of any general conversational model. First, speaking requires the speaker to publicly commit to their spoken content: “A dialogue model must distinguish private attitudes from public commitments.” Second, “Insincerity motivates the Public vs. Private distinction. Many traditional mentalist models of dialogue based on Grice equate dialogue interpretation with updating mental states.” And finally, coherence is important: “Messages must be interpreted with respect to an underlying model of discourse coherence; likewise, decisions about what to say are influenced by Coherence.”

Coherence is an especially important rule because interpretation and coherence rely heavily on each other, especially in models where interpretation helps us identify and resolve linguistic ambiguities. A dialogue model must also be able to differentiate between levels of cooperativity, in particular between Gricean and rhetorical cooperativity. The authors’ model distinguishes these two levels as well as “basic cooperativity at the level of compositional meaning.”

So, to effectively analyze strategic reasoning in communication, the authors turn to game theory. A natural tool for analyzing strategic discourse, game theory allows researchers to assign probabilities to whether conversational agents are more or less likely to give indirect answers, based on the inferences each agent can draw.

For example, suppose that in a sequence speaker 1 either takes the opportunity to ask a question or doesn’t. If speaker 1 does ask, speaker 2 responds with a directly relevant answer, an indirect answer, or nothing at all. As the authors note, the likelihood of each answer type depends on what each speaker wants out of the conversation and how strongly each suspects the other of misdirection.
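The sequence above can be sketched as a toy expected-payoff calculation for speaker 2. The payoff numbers and the probability of suspected misdirection below are invented for illustration; they do not come from the paper.

```python
# Hypothetical sketch of speaker 2's choice after speaker 1 asks a question.
# All payoffs and probabilities are illustrative, not from the paper.

def best_response(payoffs, p_misdirect):
    """Pick the reply that maximizes speaker 2's expected payoff.

    payoffs: {action: (payoff_if_trusted, payoff_if_suspected)}
    p_misdirect: speaker 2's estimate that speaker 1 suspects misdirection.
    """
    def expected(action):
        trusted, suspected = payoffs[action]
        return (1 - p_misdirect) * trusted + p_misdirect * suspected
    return max(payoffs, key=expected)

# Speaker 2's three options from the example above:
payoffs = {
    "direct_answer":   (5, 1),   # most helpful, but costly if exploited
    "indirect_answer": (3, 3),   # hedged: same payoff either way
    "say_nothing":     (0, 2),   # safe but uncooperative
}

print(best_response(payoffs, p_misdirect=0.2))  # -> direct_answer
print(best_response(payoffs, p_misdirect=0.8))  # -> indirect_answer
```

With low suspicion the direct answer dominates; as suspicion grows, the hedged indirect answer becomes the rational choice, which is the pattern the authors' game-theoretic analysis is meant to capture.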

Game theory alone, however, is not enough for this example. It must be combined with a glue logic (GL) to capture how safe an inference is and how that safety relates to the structure of the discourse. The bare game-theoretic model doesn’t account for the transition from one conversational move to the next, which is essential for measuring safety here. So the authors incorporate GL into the game-theoretic models they use.

First, conversational agents need to be able to reason about preferences: if condition X holds, then the speaker prefers option/action A over option B. Speakers can control some of these preferences but not others. The controlled preferences can be thought of as actions that can be performed to make something true or false in the conversation.
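That rule structure can be sketched minimally as condition–action pairs. The conditions and actions below (a haggling scenario) are made-up placeholders, not examples from the paper.

```python
# Minimal sketch of conditional preferences: if a condition holds,
# the speaker prefers one action over the alternatives.
# Conditions and actions are invented for illustration.

def preferred(facts, rules, default):
    """Return the action of the first rule whose condition is satisfied."""
    for condition, action in rules:
        if condition in facts:
            return action
    return default

# A seller's (hypothetical) preference rules, in priority order:
rules = [
    ("buyer_seems_eager", "hold_price"),
    ("deadline_passed",   "lower_price"),
]

print(preferred({"buyer_seems_eager"}, rules, "negotiate"))  # -> hold_price
print(preferred(set(), rules, "negotiate"))                  # -> negotiate
```

A "controlled" preference corresponds to a condition the speaker can make true or false through their own actions; the others are fixed by the context.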

The authors work through a game-theoretic example (combined with GL), assuming there are three possible signals and interpretations congruent with conventional semantics. With this understanding of preferences in conversation, they can define principles of “pay-off maximizers” (the basic principle of rationality from game theory), basic cooperation, and defeasible commitment to discourse coherence.

Pay-off maximizers define actions that are the optimal trade-off between a speaker’s preferences and what that speaker believes is possible in the conversation regarding outcomes and rewards. Basic cooperation concerns a speaker’s intention that their public commitments be shared when option A is required for outcome X. Finally, the authors require a rhetorical connection between the current state of the conversation and previously established perceptions and expectations. The theory can thus be differentiated from Gricean analysis in two ways: (a) it handles a more realistic view of conversation in which discourse can be cooperative or non-cooperative, and (b) it can derive implicatures when intentions are better modeled as preferences carried over from earlier conversation.

In conclusion, the authors argue that any model of strategic conversation must distinguish a speaker’s public commitments (proclaimed through their utterances) from that speaker’s private opinions and mental preconceptions. The model must make discourse coherence a principle for determining public commitments, while recognizing that such determinations are defeasible depending on what is said over the course of a conversation (both the information that came before and what is expected to come after). The results add valuable empirical grounding for the study of strategic conversation and should encourage others to apply the framework to the many naturally occurring examples not outlined here.

Asher, Nicholas, and Alex Lascarides. “Strategic Conversation.” Semantics and Pragmatics 6 (2013). Web. Accessed 25 March 2016.
