Wednesday, October 26, 2005

Game Theory

In the last few years, a pair of pop-culture phenomena have brought game theory to the fore. During the reunion show for “Survivor: Thailand,” the host, Jeff Probst, recommended that future contestants study John Nash’s non-cooperative game theory as the key to winning. Probst may have been onto something: “Survivor” looks like an elaborate experiment designed to discern the Nash equilibrium for multiple actors in a compressed period of time. At the same time that “Survivor” and its basis in game theory have been ascendant, high-stakes poker has become wildly popular as both a participatory and a viewing sport. While the players may not know it, they are acting out the calculations of the minimax equilibrium in virtually every hand they play. Before these postulations can be examined, it is necessary to review some of the basic tenets of game theory, its relevance to politics and conflict, and the various states of equilibrium that have been observed in games. Finally, the paper will examine the prisoner’s dilemma and how it illustrates the fundamentals of the game theory of conflict.

“Game theory aims to help us understand situations in which decision-makers interact.” (Osborne, pg 1) While game theory can be deployed to explain why someone does what he does during a game of Monopoly or, as Jeff Probst suggests, a game of Survivor, it is actually concerned with larger issues facing decision makers. These issues include the deliberations of juries, the actions of legislators in the course of their duties, and, most importantly, whether and when states will go to war. Stephen Ansolabehere argues that game theory developed around the stresses of the cold war. (Dunnan, pg 97) In fact, the man generally regarded as the father of game theory, John von Neumann, was himself a physicist and mathematician who worked on the Manhattan Project. Some have credited von Neumann’s time on the Manhattan Project with broadening and rounding out his views on game theory. (Wolfram)

Game theory gained currency when proponents arrived at what has come to be called mutually assured destruction (MAD). The proponent first associated with the idea was Robert McNamara, Secretary of Defense under John Kennedy. Secretary McNamara attempted to calculate the amount of destruction the United States would have to be able to inflict on the Soviet Union with nuclear weapons before the Soviets would no longer consider a “first strike” against the United States. This was termed “assured destruction.” Contemporaneously, the Soviets were making a similar calculation vis-à-vis the United States. The fact that both sides were making similar assessments and plans led to the observation that the Soviets and the Americans had achieved a stalemate of mutually assured destruction, known by the acronym MAD. As we will see later, this stalemate looks very much like a Nash equilibrium.

The example of MAD makes it clear that game theory is useful in describing geopolitical phenomena, but a theory has much more utility if it can actually predict outcomes or suggest courses of action in a given situation. Some analysts object to the claim that game theory has any particular predictive power. Some theories, like those predicting the actions of monopolists or of individuals engaged in perfect competition, are remarkably prescient in predicting outcomes. Game theory is not so elegant. “[T]he theory does not allow us to make firm predictions about market outcomes…[which makes game theory a] not particularly helpful body of techniques.” (Caldwell, pg 393) In recent years, academics have somewhat revised their views of game theory’s predictive utility, especially in systems that are more chaotic. Examples of these chaotic systems, in which game theory’s predictive ability fares better, include auctions and multi-party negotiations. (Caldwell, pg 394)

Game theory exhibits the most utility in predicting the point at which a system will reach equilibrium. We have already seen the MAD stalemate, which is a form of Nash equilibrium. Game theorists have identified three distinct states of equilibrium: maximum equilibrium, Nash equilibrium, and cooperative equilibrium. Maximum equilibrium is the condition in which any change an actor makes to his strategy results in an outcome no better, and possibly worse, for him. In other words, “We say a strategy profile s is a maximum equilibrium if each unilateral deviation from s by some player i will result in an equally or less desirable outcome for i.” (Harrenstein, pg 2) Maximum equilibrium is attainable only from a position of dominance.

The Nash equilibrium differs from the maximum equilibrium. The maximum equilibrium is objectively determined: the actor continues to make moves so long as his outcome improves. The Nash equilibrium requires the actor to guess at the best course of action based on how he predicts others will react to him. The actor chooses the deviation that will yield the optimum return given his prediction of his opponents’ actions. There is a further assumption that actors must choose their course of action on incomplete information. The Nash equilibrium helps explain why competitors sometimes accept a position that is less than the maximum advantage: it is the best response to what another competitor may potentially do.
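The best-response calculation described above can be made concrete. The following is a minimal sketch, not drawn from any of the sources cited here, and the action names and payoff numbers are invented purely for illustration: given a prediction of the opponent’s move, the actor simply picks whichever of his own actions pays best against it.

```python
# Hypothetical two-player game: payoff[(my_action, their_action)] -> my return.
# These numbers are illustrative assumptions, not from any cited source.
payoff = {
    ("aggressive", "aggressive"): 0,
    ("aggressive", "cautious"): 5,
    ("cautious", "aggressive"): 1,
    ("cautious", "cautious"): 3,
}

def best_response(predicted_opponent_action):
    """Choose the action that maximizes my payoff against the predicted move."""
    actions = {a for (a, _) in payoff}
    return max(actions, key=lambda a: payoff[(a, predicted_opponent_action)])

print(best_response("aggressive"))  # -> cautious
print(best_response("cautious"))    # -> aggressive
```

Note that the best choice flips with the prediction: this is exactly why an actor operating on incomplete information may settle for less than the maximum advantage.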

The cooperative equilibrium is the point at which competitors accept equal outcomes that are less than the maximum but that guarantee a return above zero. This is essentially the “risk-averse” position. Instead of asserting dominance that might be illusory or trying to guess at the actions of competitors, cooperative equilibrium rests on a sharing of knowledge, so that the actions of everyone in the competition are transparent.

The various forms of equilibria find an excellent example in the Prisoner’s Dilemma. The Prisoner’s Dilemma comes out of a lecture given by Albert Tucker in 1950, in which Professor Tucker offered an example to highlight the difficulty of analyzing some games. His example has since spawned a cottage industry of works in many different fields. In the game, each of two prisoners faces a binary choice: confess to the crime and implicate his accomplice, or keep silent. The pair of choices yields three possible kinds of outcome, and the consequences of each prisoner’s action depend on what his partner does.

If one prisoner confesses but his accomplice does not, the first prisoner goes free and the accomplice is jailed for 20 years. If both prisoners confess, each serves a 10-year sentence. If both prisoners remain silent, each receives a one-year sentence on a lesser charge. Absent any information about how his accomplice will act, a prisoner will likely act in a way that maximizes his potential benefit while minimizing his risk. For the rational actor, this means confessing: he risks at most 10 years and may walk free, whereas silence risks 20. Assuming the accomplice is also a rational actor, the 10-year sentence for each is the Nash equilibrium.
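The sentences above can be tabulated and the equilibrium claim checked mechanically. This is a sketch of my own, using the standard Nash test (no player can improve his own outcome by unilaterally deviating) rather than anything from the cited sources; only the jail terms come from the paragraph above.

```python
# Jail time for a prisoner given (his action, accomplice's action); lower is better.
years = {
    ("confess", "confess"): 10,
    ("confess", "silent"): 0,    # the lone confessor walks free
    ("silent", "confess"): 20,   # the silent partner takes 20 years
    ("silent", "silent"): 1,     # both silent: lesser charge
}
actions = ["confess", "silent"]

def is_nash(a1, a2):
    """Nash test: neither prisoner can cut his own sentence by deviating alone."""
    ok1 = all(years[(a1, a2)] <= years[(d, a2)] for d in actions)
    ok2 = all(years[(a2, a1)] <= years[(d, a1)] for d in actions)  # game is symmetric
    return ok1 and ok2

equilibria = [(a1, a2) for a1 in actions for a2 in actions if is_nash(a1, a2)]
print(equilibria)  # -> [('confess', 'confess')]
```

Mutual silence fails the test because either prisoner could drop from one year to zero by confessing, which is why the mutually worse outcome of two 10-year sentences is the equilibrium.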

If one of the accomplices were a mob boss and the other a lackey, the boss might assume that the lackey would take the fall, especially if the lackey knew what was good for him. The boss is in the position of dominance. His choice would be to blame the accomplice and walk out, confident that the accomplice would never talk. The boss is in the position of maximum equilibrium.
If the accomplices were twin brothers, each would act in what he perceives to be the other’s best interest. Neither would talk, confident in their reunion a year from now. The brothers are absolutely transparent with each other, so there is no doubt about the best course of action. This is a cooperative equilibrium.

High-stakes poker players and competitors in the Survivor series are all trying to make decisions that maximize their advantages while minimizing their risk. Since at the outset of the game no player is in a dominant position and no one knows what the others will do, players strive for the Nash equilibrium. Once players learn more about the tendencies of the others or gain a monetary advantage, they can begin to strive for a maximum equilibrium that puts their competitors at maximum disadvantage. There is little call for the cooperative equilibrium, since both games are winner-take-all; any cooperation would be temporary at best. Competitors will push their maximum advantage until they hold all the chips or are named Sole Survivor.


Caldwell, Bruce J. Hayek’s Challenge: An Intellectual Biography
of F. A. Hayek. (University of Chicago Press, Chicago) 2004.

Dunnan, Dana. Burning at the Grassroots. (Page Free
Publishing, Otsego, MI) 2004.

Goodkey, Kennedy. “Is the Key to Survivor in ‘Non-cooperative
Games’?” 24 December 2002.

Griffiths, Martin and O’Callaghan, Terry. International
Relations: The Key Concepts. (Routledge, London) 2002.

Harrenstein, Paul. A Game-Theoretical Notion of Consequence.
(Utrecht University, Utrecht, Netherlands) 2002.

Myerson, Roger B. Game Theory. (Harvard University Press,
Cambridge, MA) 1991.

Osborne, Martin. An Introduction to Game Theory. (Oxford
University Press, New York) 2004.

Summers, Garrett. Outwit, Outplay, Outlast: A Game Theoretic
Analysis of Survivor. (Stanford University, Stanford, CA) May 14, 2002.

Wolfram, Stephen. “John von Neumann's 100th Birthday.” December 28, 2003.