































Contents

1 What is game theory?
2 Definitions of games
3 Dominance
4 Nash equilibrium
5 Mixed strategies
6 Extensive games with perfect information
7 Extensive games with imperfect information
8 Zero-sum games and computation
9 Bidding in auctions
10 Further reading
∗This is the draft of an introductory survey of game theory, prepared for the Encyclopedia of Information Systems, Academic Press, to appear in 2002.
Glossary
Backward induction
Backward induction is a technique to solve a game of perfect information. It first considers the moves that are the last in the game, and determines the best move for the player in each case. Then, taking these as given future actions, it proceeds backwards in time, again determining the best move for the respective player, until the beginning of the game is reached.
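As an illustration, the following Python sketch applies backward induction to a small game tree. The tree representation and the example payoffs are illustrative assumptions chosen here for concreteness; they are not taken from the article.

```python
# A minimal sketch of backward induction on a game tree with perfect
# information. The Node/Leaf representation and the example tree are
# illustrative assumptions, not part of the original text.
from dataclasses import dataclass
from typing import Dict, List, Tuple, Union

@dataclass
class Leaf:
    payoffs: Tuple[float, float]                 # (payoff to player 0, payoff to player 1)

@dataclass
class Node:
    player: int                                  # index of the player who moves here
    children: Dict[str, Union["Node", Leaf]]     # move label -> subtree

def backward_induction(tree) -> Tuple[Tuple[float, float], List[str]]:
    """Return the payoffs of the backward-induction outcome and the moves leading to it."""
    if isinstance(tree, Leaf):
        return tree.payoffs, []
    best = None
    for move, subtree in tree.children.items():
        value, play = backward_induction(subtree)          # solve the last moves first
        if best is None or value[tree.player] > best[0][tree.player]:
            best = (value, [move] + play)                  # keep the move that is best here
    return best

# Tiny example: player 0 moves first (L or R), then player 1 moves (l or r).
example = Node(0, {
    "L": Node(1, {"l": Leaf((2, 1)), "r": Leaf((0, 0))}),
    "R": Node(1, {"l": Leaf((3, 0)), "r": Leaf((1, 2))}),
})

print(backward_induction(example))   # ((2, 1), ['L', 'l'])
```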
Common knowledge
A fact is common knowledge if all players know it, and know that they all know it, and so on. The structure of the game is often assumed to be common knowledge among the players.
Dominating strategy
A strategy dominates another strategy of a player if it always gives a better payoff to that player, regardless of what the other players are doing. It weakly dominates the other strategy if it is always at least as good.
Extensive game
An extensive game (or extensive form game) describes with a tree how a game is played. It depicts the order in which players make moves, and the information each player has at each decision point.
Game
A game is a formal description of a strategic situation.
Game theory
Game theory is the formal study of decision-making where several players must make choices that potentially affect the interests of the other players.
Strategy
In a game in strategic form, a strategy is one of the given possible actions of a player. In an extensive game, a strategy is a complete plan of choices, one for each decision point of the player.
Zero-sum game
A game is said to be zero-sum if for any outcome, the sum of the payoffs to all players is zero. In a two-player zero-sum game, one player’s gain is the other player’s loss, so their interests are diametrically opposed.
1 What is game theory?
Game theory is the formal study of conflict and cooperation. Game theoretic concepts apply whenever the actions of several agents are interdependent. These agents may be individuals, groups, firms, or any combination of these. The concepts of game theory provide a language to formulate, structure, analyze, and understand strategic scenarios.
The earliest example of a formal game-theoretic analysis is the study of a duopoly by Antoine Cournot in 1838. The mathematician Emile Borel suggested a formal theory of games in 1921, which was furthered by the mathematician John von Neumann in 1928 in a “theory of parlor games.” Game theory was established as a field in its own right after the 1944 publication of the monumental volume Theory of Games and Economic Behavior by von Neumann and the economist Oskar Morgenstern. This book provided much of the basic terminology and problem setup that is still in use today.
In 1950, John Nash demonstrated that finite games always have an equilibrium point, at which all players choose actions which are best for them given their opponents’ choices. This central concept of noncooperative game theory has been a focal point of analysis since then. In the 1950s and 1960s, game theory was broadened theoretically and applied to problems of war and politics. Since the 1970s, it has driven a revolution
in economic theory. Additionally, it has found applications in sociology and psychology, and established links with evolution and biology. Game theory received special attention in 1994 with the awarding of the Nobel prize in economics to Nash, John Harsanyi, and Reinhard Selten.
At the end of the 1990s, a high-profile application of game theory has been the design of auctions. Prominent game theorists have been involved in the design of auctions for allocating rights to the use of bands of the electromagnetic spectrum to the mobile telecommunications industry. Most of these auctions were designed with the goal of allocating these resources more efficiently than traditional governmental practices, and additionally raised billions of dollars in the United States and Europe.
The internal consistency and mathematical foundations of game theory make it a prime tool for modeling and designing automated decision-making processes in interactive environments. For example, one might like to have efficient bidding rules for an auction website, or tamper-proof automated negotiations for purchasing communication bandwidth. Research in these applications of game theory is the topic of recent conference and journal papers (see, for example, Binmore and Vulkan, “Applying game theory to automated negotiation,” Netnomics Vol. 1, 1999, pages 1–9) but is still in a nascent stage. The automation of strategic choices enhances the need for these choices to be made efficiently, and to be robust against abuse. Game theory addresses these requirements.
As a mathematical tool for the decision-maker, the strength of game theory is the methodology it provides for structuring and analyzing problems of strategic choice. The process of formally modeling a situation as a game requires the decision-maker to enumerate explicitly the players and their strategic options, and to consider their preferences and reactions. The discipline involved in constructing such a model already has the potential of providing the decision-maker with a clearer and broader view of the situation. This is a “prescriptive” application of game theory, with the goal of improved strategic decision making. With this perspective in mind, this article explains basic principles of game theory, as an introduction to an interested reader without a background in economics.
2 Definitions of games

Noncooperative game theory models explicitly the process of players making choices out of their own interest. Cooperation can, and often does, arise in noncooperative models of games, when players find it in their own best interests.
Branches of game theory also differ in their assumptions. A central assumption in many variants of game theory is that the players are rational. A rational player is one who always chooses an action which gives the outcome he most prefers, given what he expects his opponents to do. The goal of game-theoretic analysis in these branches, then, is to predict how the game will be played by rational players, or, relatedly, to give advice on how best to play the game against opponents who are rational. This rationality assumption can be relaxed, and the resulting models have been more recently applied to the analysis of observed behavior (see Kagel and Roth, eds., Handbook of Experimental Economics, Princeton Univ. Press, 1997). This kind of game theory can be viewed as more “descriptive” than the prescriptive approach taken here.
This article focuses principally on noncooperative game theory with rational players. In addition to providing an important baseline case in economic theory, this case is designed so that it gives good advice to the decision-maker, even when – or perhaps especially when – one’s opponents also employ it.
The strategic form (also called normal form) is the basic type of game studied in noncooperative game theory. A game in strategic form lists each player’s strategies, and the outcomes that result from each possible combination of choices. An outcome is represented by a separate payoff for each player, which is a number (also called utility) that measures how much the player likes the outcome.
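As a concrete illustration, a two-player game in strategic form can be stored as a small table of payoff pairs indexed by strategy combinations. The Python sketch below uses the Prisoner’s Dilemma payoffs shown in Figure 1 further below; the representation itself and the names used are illustrative choices, not part of the article.

```python
# A two-player game in strategic form: each player's strategy list and a
# payoff table indexed by strategy combinations. The numbers are the
# Prisoner's Dilemma payoffs of Figure 1; the representation is illustrative.
strategies_I = ["C", "D"]       # player I chooses a row
strategies_II = ["c", "d"]      # player II chooses a column

# (row, column) -> (payoff to player I, payoff to player II)
payoffs = {
    ("C", "c"): (2, 2),
    ("C", "d"): (0, 3),
    ("D", "c"): (3, 0),
    ("D", "d"): (1, 1),
}

def outcome(row: str, col: str) -> tuple:
    """Payoff pair that results from one combination of strategies."""
    return payoffs[(row, col)]

print(outcome("D", "d"))        # (1, 1)
```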
The extensive form , also called a game tree , is more detailed than the strategic form of a game. It is a complete description of how the game is played over time. This includes the order in which players take actions, the information that players have at the time they must take those actions, and the times at which any uncertainty in the situation is resolved. A game in extensive form may be analyzed directly, or can be converted into an equivalent strategic form.
Examples in the following sections will illustrate in detail the interpretation and analysis of games in strategic and extensive form.
3 Dominance
Since all players are assumed to be rational, they make choices which result in the outcome they prefer most, given what their opponents do. In the extreme case, a player may have two strategies A and B so that, given any combination of strategies of the other players, the outcome resulting from A is better than the outcome resulting from B. Then strategy A is said to dominate strategy B. A rational player will never choose to play a dominated strategy. In some games, examination of which strategies are dominated results in the conclusion that rational players could only ever choose one of their strategies. The following examples illustrate this idea.
The Prisoner’s Dilemma is a game in strategic form between two players. Each player has two strategies, called “cooperate” and “defect,” which are labeled C and D for player I and c and d for player II, respectively. (For simpler identification, upper case letters are used for strategies of player I and lower case letters for player II.)
              c         d
    C       2, 2      0, 3
    D       3, 0      1, 1

Figure 1. The Prisoner’s Dilemma game. Rows are the strategies C and D of player I, columns the strategies c and d of player II; in each cell, the first number is the payoff to player I and the second the payoff to player II.
Figure 1 shows the resulting payoffs in this game. Player I chooses a row, either C or D, and simultaneously player II chooses one of the columns c or d. The strategy combination (C, c) has payoff 2 for each player, and the combination (D, d) gives each player payoff 1. The combination (C, d) results in payoff 0 for player I and 3 for player II, and when (D, c) is played, player I gets 3 and player II gets 0.
No rational player will choose a dominated strategy since the player will always be better off when changing to the strategy that dominates it. The unique outcome in this game, as recommended to utility-maximizing players, is therefore (D, d) with payoffs (1, 1). Somewhat paradoxically, this is less than the payoff (2, 2) that would be achieved when the players chose (C, c).
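The dominance argument can be checked mechanically from the payoff table of Figure 1; the helper functions in the following sketch are illustrative names, not taken from the article.

```python
# Verify that D dominates C for player I in the Prisoner's Dilemma of
# Figure 1, and that d dominates c for player II. Function names are
# illustrative only.
payoffs = {   # (row, column) -> (payoff to player I, payoff to player II)
    ("C", "c"): (2, 2), ("C", "d"): (0, 3),
    ("D", "c"): (3, 0), ("D", "d"): (1, 1),
}
rows, cols = ["C", "D"], ["c", "d"]

def row_dominates(a: str, b: str) -> bool:
    """True if row strategy a gives player I strictly more than b against every column."""
    return all(payoffs[(a, c)][0] > payoffs[(b, c)][0] for c in cols)

def col_dominates(a: str, b: str) -> bool:
    """Same check for player II's column strategies."""
    return all(payoffs[(r, a)][1] > payoffs[(r, b)][1] for r in rows)

print(row_dominates("D", "C"))   # True: 3 > 2 against c, 1 > 0 against d
print(col_dominates("d", "c"))   # True: 3 > 2 against C, 1 > 0 against D
# Both players play their dominating strategy, so the outcome is (D, d).
```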
The story behind the name “Prisoner’s Dilemma” is that of two prisoners held suspect of a serious crime. There is no judicial evidence for this crime except if one of the prisoners testifies against the other. If one of them testifies, he will be rewarded with immunity from prosecution (payoff 3), whereas the other will serve a long prison sentence (payoff 0). If both testify, their punishment will be less severe (payoff 1 for each). However, if they both “cooperate” with each other by not testifying at all, they will only be imprisoned briefly, for example for illegal weapons possession (payoff 2 for each). The “defection” from that mutually beneficial outcome is to testify, which gives a higher payoff no matter what the other prisoner does, with a resulting lower payoff to both. This constitutes their “dilemma.”
Prisoner’s Dilemma games arise in various contexts where individual “defections” at the expense of others lead to overall less desirable outcomes. Examples include arms races, litigation instead of settlement, environmental pollution, or cut-price marketing, where the resulting outcome is detrimental for the players. Its game-theoretic justification on individual grounds is sometimes taken as a case for treaties and laws, which enforce cooperation.
Game theorists have tried to tackle the obvious “inefficiency” of the outcome of the Prisoner’s Dilemma game. For example, the game is fundamentally changed by playing it more than once. In such a repeated game , patterns of cooperation can be established as rational behavior when players’ fear of punishment in the future outweighs their gain from defecting today.
The next example of a game illustrates how the principle of elimination of dominated strategies may be applied iteratively. Suppose player I is an internet service provider and player II a potential customer. They consider entering into a contract of service provision for a period of time. The provider can, for himself, decide between two levels of quality
of service, High or Low. High-quality service is more costly to provide, and some of the cost is independent of whether the contract is signed or not. The level of service cannot be put verifiably into the contract. High-quality service is more valuable than low-quality service to the customer, in fact so much so that the customer would prefer not to buy the service if she knew that the quality was low. Her choices are to buy or not to buy the service.
              buy      don’t buy
    High     2, 2        0, 1
    Low      3, 0        1, 1

Figure 3. High-low quality game between a service provider (player I) and a customer (player II). In each cell, the first number is the payoff to the provider and the second the payoff to the customer.
Figure 3 gives possible payoffs that describe this situation. The customer prefers to buy if player I provides high-quality service, and not to buy otherwise. Regardless of whether the customer chooses to buy or not, the provider always prefers to provide the low-quality service. Therefore, the strategy Low dominates the strategy High for player I.
Now, since player II believes player I is rational, she realizes that player I always prefers Low, and so she anticipates low-quality service as the provider’s choice. Then she prefers not to buy (which gives her payoff 1) to buying (payoff 0). Therefore, the rationality of both players leads to the conclusion that the provider will implement low-quality service and, as a result, the contract will not be signed.
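This two-step argument is an instance of iterated elimination of strictly dominated strategies. The sketch below carries it out mechanically for the payoffs of Figure 3; the function name and loop structure are illustrative, not taken from the article.

```python
# Iterated elimination of strictly dominated strategies, applied to the
# quality choice game of Figure 3. The function and its structure are an
# illustrative sketch, not code from the article.
payoffs = {   # (provider strategy, customer strategy) -> (payoff I, payoff II)
    ("High", "buy"): (2, 2), ("High", "don't buy"): (0, 1),
    ("Low",  "buy"): (3, 0), ("Low",  "don't buy"): (1, 1),
}
rows = ["High", "Low"]
cols = ["buy", "don't buy"]

def eliminate(rows, cols):
    """Repeatedly remove strictly dominated strategies until none are left."""
    changed = True
    while changed:
        changed = False
        for a in list(rows):    # player I's strategies
            if any(all(payoffs[(b, c)][0] > payoffs[(a, c)][0] for c in cols)
                   for b in rows if b != a):
                rows.remove(a); changed = True
        for a in list(cols):    # player II's strategies
            if any(all(payoffs[(r, b)][1] > payoffs[(r, a)][1] for r in rows)
                   for b in cols if b != a):
                cols.remove(a); changed = True
    return rows, cols

print(eliminate(rows, cols))    # (['Low'], ["don't buy"])
```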
This game is very similar to the Prisoner’s Dilemma in Figure 1. In fact, it differs only by a single payoff, namely payoff 1 (rather than 3) to player II in the top right cell of the table. This reverses the top arrow from right to left, and makes the preference of player II dependent on the action of player I. (The game is also no longer symmetric.) Player II does not have a dominating strategy. However, player I still does, so that iterated elimination of dominated strategies again singles out the strategy combination (Low, don’t buy).
4 Nash equilibrium

A Nash equilibrium is a combination of strategies, one for each player, such that no player can improve his payoff by changing only his own strategy. In the quality choice game, a better outcome can be reached by changing the game itself. Suppose the service contract is amended with an opt-out clause: if the customer finds the delivered quality to be low, she may cancel the contract.
Figure 4. High-low quality game with opt-out clause for the customer. The left arrow shows that player I prefers High when player II chooses buy.
With the opt-out clause, low-quality service provision, even when the customer has initially signed, gives the provider essentially the same payoff as when the customer does not sign the contract in the first place, since the customer will opt out later. However, the customer still prefers not to buy when the service is Low in order to spare herself the hassle of entering the contract.
The changed payoff to player I means that the left arrow in Figure 4 points upwards. Note that, compared to Figure 3, only the provider’s payoffs are changed. In a sense, the opt-out clause in the contract has the purpose of convincing the customer that the high-quality service provision is in the provider’s own interest.
This game has no dominated strategy for either player. The arrows point in different directions. The game has two Nash equilibria in which each player chooses his strategy deterministically. One of them is, as before, the strategy combination ( Low, don’t buy ). This is an equilibrium since Low is the best response (payoff-maximizing strategy) to don’t buy and vice versa.
The second Nash equilibrium is the strategy combination (High, buy). It is an equilibrium since player I prefers to provide high-quality service when the customer buys, and conversely, player II prefers to buy when the quality is high. This equilibrium has a higher payoff to both players than the former one, and is a more desirable solution.
Both Nash equilibria are legitimate recommendations to the two players of how to play the game. Once the players have settled on strategies that form a Nash equilibrium, neither player has incentive to deviate, so that they will rationally stay with their strategies. This makes the Nash equilibrium a consistent solution concept for games. In contrast, a
strategy combination that is not a Nash equilibrium is not a credible solution. Such a strategy combination would not be a reliable recommendation on how to play the game, since at least one player would rather ignore the advice and instead play another strategy to make himself better off.
As this example shows, a Nash equilibrium may not be unique. However, the previously discussed solutions to the Prisoner’s Dilemma and to the quality choice game in Figure 3 are unique Nash equilibria. A dominated strategy can never be part of an equilibrium since a player intending to play a dominated strategy could switch to the dominating strategy and be better off. Thus, if elimination of dominated strategies leads to a unique strategy combination, then this is a Nash equilibrium. Larger games may also have unique equilibria that do not result from dominance considerations.
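Pure-strategy Nash equilibria of a small game can be found by checking the best-response property cell by cell. Since the numbers of Figure 4 are not reproduced in the text above, the payoffs in the following sketch are assumed values, chosen only to be consistent with the preferences described for the opt-out game; the function name is likewise illustrative.

```python
# Enumerate the pure-strategy Nash equilibria of a 2x2 game by checking the
# best-response property cell by cell. The payoff numbers below are NOT taken
# from Figure 4; they are assumed values consistent with the preferences
# described in the text (two equilibria, no dominated strategies).
payoffs = {   # (provider strategy, customer strategy) -> (payoff I, payoff II)
    ("High", "buy"): (2, 2), ("High", "don't buy"): (0, 1),
    ("Low",  "buy"): (1, 0), ("Low",  "don't buy"): (1, 1),
}
rows = ["High", "Low"]
cols = ["buy", "don't buy"]

def pure_equilibria():
    """Strategy pairs where neither player gains by deviating unilaterally."""
    eqs = []
    for r in rows:
        for c in cols:
            best_row = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
            best_col = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
            if best_row and best_col:
                eqs.append((r, c))
    return eqs

print(pure_equilibria())   # [('High', 'buy'), ('Low', "don't buy")]
```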
If a game has more than one Nash equilibrium, a theory of strategic interaction should guide players towards the “most reasonable” equilibrium upon which they should focus. Indeed, a large number of papers in game theory have been concerned with “equilibrium refinements” that attempt to derive conditions that make one equilibrium more plausible or convincing than another. For example, it could be argued that an equilibrium that is better for both players, like ( High, buy ) in Figure 4, should be the one that is played.
However, the abstract theoretical considerations for equilibrium selection are often more sophisticated than the simple game-theoretical models they are applied to. It may be more illuminating to observe that a game has more than one equilibrium, and that this is a reason that players are sometimes stuck at an inferior outcome.
One and the same game may also have a different interpretation where a previously undesirable equilibrium becomes rather plausible. As an example, consider an alternative scenario for the game in Figure 4. Unlike the previous situation, it will have a symmetric description of the players, in line with the symmetry of the payoff structure.
Two firms want to invest in communication infrastructure. They intend to communicate frequently with each other using that infrastructure, but they decide independently on what to buy. Each firm can decide between High or Low bandwidth equipment (this time, the same strategy names, High and Low, are used for both players).
Figure 5 shows the bandwidth choice game, in which each player has the two strategies High and Low. The payoff is 5 to each player for the strategy combination (High, High), 1 to a player who chooses Low regardless of the other’s choice, and 0 to a player who chooses High against Low. The positive payoff of 5 for (High, High) makes this an even more attractive equilibrium than in the case discussed above.
In the evolutionary interpretation, there is a large population of individuals, each of which can adopt one of the strategies. The game describes the payoffs that result when two of these individuals meet. The dynamics of this game are based on assuming that each strategy is played by a certain fraction of individuals. Then, given this distribution of strategies, individuals with better average payoff will be more successful than others, so that their proportion in the population increases over time. This, in turn, may affect which strategies are better than others. In many cases, in particular in symmetric games with only two possible strategies, the dynamic process will move to an equilibrium.
In the example of Figure 5, a certain fraction of users connected to a network will already have High or Low bandwidth equipment. For example, suppose that one quarter of the users has chosen High and three quarters have chosen Low. It is useful to assign these as percentages to the columns, which represent the strategies of player II. A new user, as player I, is then to decide between High and Low, where his payoff depends on the given fractions. Here it will be 1/4 × 5 + 3/4 × 0 = 1.25 when player I chooses High, and 1/4 × 1 + 3/4 × 1 = 1 when player I chooses Low. Given the average payoff that player I can expect when interacting with other users, player I will be better off by choosing High, and so decides on that strategy. Then, player I joins the population as a High user. The proportion of individuals of type High therefore increases, and over time the advantage of that strategy will become even more pronounced. In addition, users replacing their equipment will make the same calculation, and therefore also switch from Low to High. Eventually, everyone plays High as the only surviving strategy, which corresponds to the equilibrium in the top left cell in Figure 5.
The long-term outcome where only high-bandwidth equipment is selected depends on there being an initial fraction of high-bandwidth users that is large enough. For example, if only ten percent have chosen High, then the expected payoff for High is 0.1 × 5 + 0.9 × 0 = 0.5, which is less than the payoff 1 for Low, so new users will choose Low and the fraction of High users shrinks further. It is easy to see that the critical fraction of High users above which High takes off as the better strategy is 1/5, since then the expected payoff 1/5 × 5 = 1 for High equals the payoff 1 for Low. (When new technology makes high-bandwidth equipment cheaper, this increases the payoff 0 to the High user who is meeting Low, which changes the game.)
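The threshold calculation can be replayed for any initial fraction of High users; the short sketch below uses the payoff values stated in the calculations above (5 for High against High, 0 for High against Low, and 1 for Low against anything), with illustrative function and variable names.

```python
# Expected payoff to a new user choosing High or Low when a fraction p of the
# existing users has chosen High. Payoff values are taken from the
# calculations in the text: High earns 5 against High and 0 against Low,
# while Low earns 1 regardless of the opponent. Names are illustrative.
def expected_payoffs(p: float):
    high = p * 5 + (1 - p) * 0
    low = p * 1 + (1 - p) * 1     # always 1
    return high, low

for p in (0.10, 0.20, 0.25):
    high, low = expected_payoffs(p)
    choice = "High" if high > low else ("Low" if low > high else "indifferent")
    print(f"High fraction {p:.2f}: High earns {high:.2f}, Low earns {low:.2f} -> {choice}")

# The switch happens where 5p = 1, that is, at the critical fraction p = 1/5.
```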
The evolutionary, population-dynamic view of games is useful because it does not require the assumption that all players are sophisticated and think the others are also rational, which is often unrealistic. Instead, the notion of rationality is replaced with the much weaker concept of reproductive success: strategies that are successful on average will be used more frequently and thus prevail in the end. This view originated in theoretical biology with Maynard Smith (Evolution and the Theory of Games, Cambridge University Press, 1982) and has since significantly increased in scope (see Hofbauer and Sigmund, Evolutionary Games and Population Dynamics, Cambridge University Press, 1998).
5 Mixed strategies
A game in strategic form does not always have a Nash equilibrium in which each player deterministically chooses one of his strategies. However, players may instead randomly select from among these pure strategies with certain probabilities. Randomizing one’s own choice in this way is called a mixed strategy. Nash showed in 1951 that any finite strategic-form game has an equilibrium if mixed strategies are allowed. As before, an equilibrium is defined by a (possibly mixed) strategy for each player where no player can gain on average by unilateral deviation. Average (that is, expected ) payoffs must be considered because the outcome of the game may be random.
Suppose a consumer purchases a license for a software package, agreeing to certain restrictions on its use. The consumer has an incentive to violate these rules. The vendor would like to verify that the consumer is abiding by the agreement, but doing so requires inspections which are costly. If the vendor does inspect and catches the consumer cheating, the vendor can demand a large penalty payment for the noncompliance.
Figure 6 shows possible payoffs for such an inspection game. The standard outcome, which defines the reference payoff zero to both vendor (player I) and consumer (player II), is that the vendor does not inspect and the consumer complies with the license terms.

                         comply        cheat
    Don’t inspect         0, 0       −10, 10
    Inspect              −1, 0       −6, −90

Figure 6. Inspection game between a software vendor (player I) and a consumer (player II). In each cell, the first number is the payoff to the vendor and the second the payoff to the consumer.
What should the players do in the game of Figure 6? One possibility is that they prepare for the worst, that is, choose a max-min strategy. As explained before, a max-min strategy maximizes the player’s worst payoff against all possible choices of the opponent. The max-min strategy for player I is to Inspect (where the vendor guarantees himself payoff −6), and for player II it is to comply (which guarantees her payoff 0). However, this is not a Nash equilibrium and hence not a stable recommendation to the two players, since player I could switch his strategy and improve his payoff.
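As a small check, the pure max-min strategies can be computed from the Figure 6 payoffs by taking, for each player, the strategy whose worst-case payoff is largest; the variable names in the sketch below are illustrative.

```python
# Pure max-min strategies in the inspection game of Figure 6: each player
# picks the strategy whose worst-case payoff is largest. Names are
# illustrative only.
payoffs = {   # (vendor strategy, consumer strategy) -> (payoff I, payoff II)
    ("Don't inspect", "comply"): (0, 0),   ("Don't inspect", "cheat"): (-10, 10),
    ("Inspect", "comply"):      (-1, 0),   ("Inspect", "cheat"):       (-6, -90),
}
rows = ["Don't inspect", "Inspect"]
cols = ["comply", "cheat"]

# Vendor: maximize the minimum payoff over the consumer's possible choices.
vendor_maxmin = max(rows, key=lambda r: min(payoffs[(r, c)][0] for c in cols))
# Consumer: maximize the minimum payoff over the vendor's possible choices.
consumer_maxmin = max(cols, key=lambda c: min(payoffs[(r, c)][1] for r in rows))

print(vendor_maxmin)     # Inspect  (guarantees -6 rather than -10)
print(consumer_maxmin)   # comply   (guarantees 0 rather than -90)
```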
A mixed strategy of player I in this game is to Inspect only with a certain probability. In the context of inspections, randomizing is also a practical approach that reduces costs. Even if an inspection is not certain, a sufficiently high chance of being caught should deter the consumer from cheating, at least to some extent.
The following considerations show how to find the probability of inspection that will lead to an equilibrium. If the probability of inspection is very low, for example one percent, then player II receives (irrespective of that probability) payoff 0 for comply, and payoff 0.99 × 10 + 0.01 × (−90) = 9, which is bigger than zero, for cheat. Hence, player II will still cheat, just as in the absence of inspection.
If the probability of inspection is much higher, for example 0.2, then the expected payoff for cheat is 0.8 × 10 + 0.2 × (−90) = −10, which is less than zero, so that player II prefers to comply. If the inspection probability is either too low or too high, then player II has a unique best response. As shown above, such a pure strategy cannot be part of an equilibrium.
Hence, the only case where player II herself could possibly randomize between her strategies is if both strategies give her the same payoff, that is, if she is indifferent. It is never optimal for a player to assign a positive probability to playing a strategy that is inferior, given what the other players are doing. It is not hard to see that player II is indifferent if and only if player I inspects with probability 0.1, since then the expected payoff for cheat is 0.9 × 10 + 0.1 × (−90) = 0, which is then the same as the payoff for comply.
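Written out, with p denoting the probability of inspection, the consumer’s indifference condition is

    (1 − p) × 10 + p × (−90) = 0,  that is,  10 − 100p = 0,  which gives  p = 0.1.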
With this mixed strategy of player I ( Don’t inspect with probability 0.9 and Inspect with probability 0.1), player II is indifferent between her strategies. Hence, she can mix
them (that is, play them randomly) without losing payoff. The only case where, in turn, the original mixed strategy of player I is a best response is if player I is indifferent. According to the payoffs in Figure 6, this requires player II to choose comply with probability 0.8 and cheat with probability 0.2. The expected payoffs to player I are then for Don’t inspect 0.8 × 0 + 0.2 × (−10) = −2, and for Inspect 0.8 × (−1) + 0.2 × (−6) = −2, so that player I is indeed indifferent, and his mixed strategy is a best response to the mixed strategy of player II.
This defines the only Nash equilibrium of the game. It uses mixed strategies and is therefore called a mixed equilibrium. The resulting expected payoffs are −2 for player I and 0 for player II.
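The two indifference conditions can also be solved directly for the equilibrium probabilities; the following sketch does this for the Figure 6 payoffs and verifies the stated expected payoffs. The variable and function names are illustrative only.

```python
# Solve the two indifference conditions of the inspection game of Figure 6
# for the equilibrium probabilities, and verify the expected payoffs (-2, 0)
# stated in the text. Variable and function names are illustrative only.

# payoff[(vendor strategy, consumer strategy)] = (payoff to vendor, payoff to consumer)
payoff = {
    ("Don't inspect", "comply"): (0, 0),   ("Don't inspect", "cheat"): (-10, 10),
    ("Inspect", "comply"):      (-1, 0),   ("Inspect", "cheat"):       (-6, -90),
}
D, I = "Don't inspect", "Inspect"

def V(r, c):   # vendor's payoff
    return payoff[(r, c)][0]

def C(r, c):   # consumer's payoff
    return payoff[(r, c)][1]

# Probability p of Inspect that makes the consumer indifferent between
# comply and cheat: (1-p)*C(D,comply) + p*C(I,comply) = (1-p)*C(D,cheat) + p*C(I,cheat)
p = (C(D, "cheat") - C(D, "comply")) / (
    C(D, "cheat") - C(D, "comply") + C(I, "comply") - C(I, "cheat"))

# Probability q of cheat that makes the vendor indifferent between
# Don't inspect and Inspect.
q = (V(I, "comply") - V(D, "comply")) / (
    V(I, "comply") - V(D, "comply") + V(D, "cheat") - V(I, "cheat"))

# Expected payoffs under the mixed strategies (p for Inspect, q for cheat).
vendor = ((1 - p) * ((1 - q) * V(D, "comply") + q * V(D, "cheat"))
          + p * ((1 - q) * V(I, "comply") + q * V(I, "cheat")))
consumer = ((1 - q) * ((1 - p) * C(D, "comply") + p * C(I, "comply"))
            + q * ((1 - p) * C(D, "cheat") + p * C(I, "cheat")))

print(p, q)                                    # 0.1 0.2
print(round(vendor, 9), round(consumer, 9))    # -2.0 0.0
```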
The preceding analysis showed that the game in Figure 6 has a mixed equilibrium, where the players choose their pure strategies according to certain probabilities. These probabilities have several noteworthy features.
The equilibrium probability of 0.1 for Inspect makes player II indifferent between comply and cheat. This is based on the assumption that an expected payoff of 0 for cheat, namely 0.9 × 10 + 0.1 × (−90), is the same for player II as when getting the payoff 0 for certain, by choosing to comply. If the payoffs were monetary amounts (each payoff unit standing for one thousand dollars, say), one would not necessarily assume such a risk neutrality on the part of the consumer. In practice, decision-makers are typically risk averse, meaning they prefer the safe payoff of 0 to the gamble with an expectation of 0.
In a game-theoretic model with random outcomes (as in a mixed equilibrium), however, the payoff is not necessarily to be interpreted as money. Rather, the player’s attitude towards risk is incorporated into the payoff figure as well. To take our example, the consumer faces a certain reward or punishment when cheating, depending on whether she is caught or not. Getting caught may not only involve financial loss but embarrassment and other undesirable consequences. However, there is a certain probability of inspection (that is, of getting caught) where the consumer becomes indifferent between comply and cheat. If that probability is 1 against 9, then this indifference implies that the cost (negative payoff) for getting caught is 9 times as high as the reward for cheating successfully, as assumed by the payoffs in Figure 6. If the probability of indifference is 1 against