Behavioral Game Theory: Beyond Self-Interest and Rationality

Social Choice Theory, Economic Psychology, Game Theory, Behavioral Economics

This document discusses the limitations of standard game theory and introduces behavioral game theory, which considers social utility, fairness, and the intentions of other players. It is organized into three sections: social utility, choice and judgment, and elements that people bring to strategic situations. Behavioral game theory improves the prescriptive value of game theory by helping players understand how others are likely to play.

What you will learn

  • What are the three sections of behavioral game theory and what do they cover?
  • How does behavioral game theory differ from standard game theory?
  • What role do players' perceptions of other players' intentions play in behavioral game theory?
  • How does the way an unequal allocation comes about affect behavior in games?
  • How does social utility affect behavior in games?


Journal of Economic Perspectives — Volume 11, Number 4 — Fall 1997 — Pages 167–188

Progress in Behavioral Game Theory

Colin F. Camerer

Is game theory meant to describe actual choices by people and institutions or not? It is remarkable how much game theory has been done while largely ignoring this question. The seminal book by von Neumann and Morgenstern, The Theory of Games and Economic Behavior, was clearly about how rational players would play against others they knew were rational. In more recent work, game theorists are not always explicit about what they aim to describe or advise. At one extreme, highly mathematical analyses have proposed rationality requirements that people and firms are probably not smart enough to satisfy in everyday decisions. At the other extreme, adaptive and evolutionary approaches use very simple models—mostly developed to describe nonhuman animals—in which players may not realize they are playing a game at all. When game theory does aim to describe behavior, it often proceeds with a disturbingly low ratio of careful observation to theorizing.

This paper describes an approach called "behavioral game theory," which aims to describe actual behavior, is driven by empirical observation (mostly experiments), and charts a middle course between over-rational equilibrium analyses and under-rational adaptive analyses. The recipe for behavioral game theory I will describe[1] has three steps: start with a game or naturally occurring situation in which standard game theory makes a bold prediction based on one or two crucial principles; if behavior differs from the prediction, think of plausible explanations for what is observed; and extend formal game theory to incorporate these explanations.

[1] Interested readers familiar with game theory should read Crawford (1997), whose approach is more eclectic than mine. He concludes (p. 236) that "most strategic behavior can be understood via a synthesis that combines elements from each of the leading theoretical frameworks [traditional game theory, evolutionary game theory, and adaptive learning] with a modicum of empirical information about behavior . . ." Behavioral game theory adds psychological interpretations to this synthesis.

• Colin F. Camerer is Rea and Lela G. Axline Professor of Business Economics, California Institute of Technology, Pasadena, California. His e-mail address is <camerer@hss.caltech.edu>.

This paper considers three categories of modelling principles and catalogues violations of these principles. The first section will focus on cases in which players, rather than focusing self-interestedly on their own payoff alone, seem to respond in terms related to social utility, showing concerns about fairness and the perceived intentions of other players. The next section will focus on problems of choice and judgment: cases in which players respond to differences in how the game is described, rather than to the outcomes, and in which players systematically overestimate their own capabilities.
A third section will investigate some elements that people bring to strategic situations that are usually unaccounted for in game theory: a common awareness of certain focal points for agreement, a belief that timing of a choice may confer privileged status or change players' thinking, and a natural instinct to look only one or two levels into problems that permit many levels of iterated reasoning.

Organizing findings in this way is like taking a car engine apart and spreading out the parts, so that each part can be inspected separately and broken ones replaced. The hope is that working parts and necessary replacements can later be reassembled into coherent theory. Just as the rebuilt car engine should run better than before, the eventual goal in behavioral game theory is to be able to take a description of a strategic situation and predict actual behavior at least as well as current theories do. Better descriptive principles should also improve the prescriptive value of game theory, since players who desire to play rationally need to know how others are likely to play.

Simple games are also useful for establishing phenomena that should be incorporated into economics beyond game theory. In experiments, people routinely reject profitable bargains they think are unfair, contribute substantially to public goods and do not take full advantage of others when they can (exhibiting surprisingly little "moral hazard"). Textbook discussions of wage setting, public goods problems and the need for incentive contracts and monitoring to prevent moral hazard paint a different picture, portraying people as more socially isolated, uncooperative and opportunistic than they are in experiments. If the generality of the experimental results is questioned, the generality of the textbook caricature should be, too. Other game experiments show that players will behave "irrationally" when they expect others to behave even more irrationally, which is one common explanation for excessive volatility in financial markets. Establishing and dissecting such effects in games could help inform theorizing about similar behavior in markets.

Games as Social Allocations

Games give payoffs to more than one person. If players care about the financial payoffs of others, the simplifying assumption of pure self-interest, common to so much game theory analysis, must be modified. Most theories sidestep this concern [...]

But intentions matter. Suppose another player is forced to cooperate—perhaps by a game structure in which that person cannot choose to defect. Their forced cooperation does not give the other player an unusually high payoff (it is not "nice," because the forced player did not have the option to defect and treat the other player badly). So fairness equilibrium predicts the free-to-choose player is under no obligation to reciprocate and will be more likely to defect than in the standard dilemma.

Table 2: Chicken

The game of chicken, it turns out, is perhaps the ideal game for contrasting fairness and self-interested preferences. Table 2 gives the payoffs in "chicken." In this game, both players would like to Dare (D) the other to Chicken out (C) (then D earns 3x, and C earns 0), but if both Dare they each earn -4x. The players move simultaneously. The (pure strategy) Nash equilibria are (D,C) and (C,D), since a player who expects the other to Dare should Chicken out, and vice versa. However, fairness equilibrium predicts exactly the opposite, at least for small stakes.
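The standard-theory prediction can be checked mechanically. Below is a minimal sketch (in Python; not from the paper) that enumerates the pure-strategy Nash equilibria of this chicken game, with the stake x normalized to 1; the (C,C) payoff of x to each player is inferred from the reasoning in the next paragraph.

```python
# Minimal sketch (not from the paper): enumerate pure-strategy Nash equilibria
# of the chicken game described in the text, with x normalized to 1.
# Payoffs assumed from the prose: (C,C)=(1,1), (D,C)=(3,0), (C,D)=(0,3), (D,D)=(-4,-4).

from itertools import product

STRATEGIES = ["C", "D"]  # Chicken out, Dare

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("C", "C"): (1, 1),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (-4, -4),
}

def is_pure_nash(row, col):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in STRATEGIES)
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in STRATEGIES)
    return row_ok and col_ok

nash = [p for p in product(STRATEGIES, STRATEGIES) if is_pure_nash(*p)]
print(nash)  # [('C', 'D'), ('D', 'C')] -- the standard prediction;
# fairness equilibrium, by contrast, singles out (C,C) and (D,D), as argued below.
```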
Start in the upper right (D,C) cell. Though the game is actually played simultaneously, suppose the players reason in their own minds about moves and countermoves, in a kind of mental tatonnement, before deciding what to do. If player 1 moves from this cell, politely choosing C, she sacrifices 2x (getting x instead of 3x) to benefit player 2 by the amount x. This nice choice triggers reciprocal niceness in player 2; rather than exploiting player 1's choice of C by responding with D, player 2 prefers to sacrifice (settling for x instead of 3x) to "repay" player 1's kindness. Thus, both politely playing Chicken is a fairness equilibrium. By opposite reasoning, (D,D) is a mean fairness equilibrium; rather than back down in the face of the other's D, both would rather lose more by picking D, to hurt their enemy.[3] In a recent study of chicken, 60 percent of the observations in the last half of the experiment were fairness equilibrium choices (C,C) and (D,D), and only 12 percent were Nash equilibria (D,C) and (C,D) (McDaniel, Rutström and Williams, 1994).

[3] The "mean" fairness equilibrium (D,D) illustrates the advantage of chicken over the prisoner's dilemma for studying social values. In the prisoner's dilemma, defection is both a self-interested choice (reflecting neither niceness nor meanness) and the choice a mean-spirited (or envious) person would make. In chicken, the best response of a self-interested person to an expectation that the other person would play D is C, but a mean person would pick D.

The fairness equilibrium or "reciprocated value" model is a solid new plateau for understanding departures from self-interest in games. The model captures basic facts that the simpler separable and comparative models do not capture, particularly the reciprocal nature of social values and the distinction between uneven outcomes and unfair actions. Its formal specification connects fairness equilibrium closely to standard game theory. Games like chicken allow both nice and mean outcomes to arise in the same game, capturing phenomena like the blissful happiness of a loving couple and their bitter, mutually destructive breakup.

Games Require Choice and Judgment

Rational players will perceive a game and themselves clearly and consistently. However, when "framing effects" are important, players see the game differently according to how it is described. When players are overconfident of their own abilities, they fail to see the likely consequences of their actions. This section considers these two phenomena in turn.

Framing Effects

Theories of choice often invoke an axiom of "description invariance," which holds that differences in descriptions that do not alter the actual choices should not alter behavior. A "framing effect" occurs when a difference in description does cause behavior to vary. For example, give subjects $10 in advance, then ask them whether they would choose a certain loss of $5 (for a net gain of $5) or flip a coin and lose either $10 or 0, depending on the outcome. Those subjects choose to gamble more frequently than subjects who are given nothing and asked to choose between gaining $5 or flipping a coin with $10 and 0 outcomes (Tversky and Kahneman, 1992). Generally, people are more likely to take risks when outcomes are described as losses than when the same outcomes are described as gains.
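The description invariance axiom can be illustrated concretely. The sketch below (Python; an illustration, not from the paper) writes out both frames of the $10 example and shows that they induce exactly the same distribution over final dollar amounts, so any choice difference between them is a framing effect.

```python
# Minimal sketch (Python, not from the paper): the two frames in the example above
# describe identical final-outcome lotteries, so "description invariance" says
# choices should not differ between them.

def loss_frame():
    endowment = 10
    sure = endowment - 5                                     # certain loss of $5 -> net $5
    gamble = [(0.5, endowment - 10), (0.5, endowment - 0)]   # lose $10 or $0
    return sure, gamble

def gain_frame():
    endowment = 0
    sure = endowment + 5                                     # certain gain of $5 -> net $5
    gamble = [(0.5, endowment + 0), (0.5, endowment + 10)]   # gain $0 or $10
    return sure, gamble

print(loss_frame())  # (5, [(0.5, 0), (0.5, 10)])
print(gain_frame())  # (5, [(0.5, 0), (0.5, 10)])
# Same sure amount, same lottery over final dollars -- yet subjects gamble more
# often in the loss frame, a violation of description invariance.
```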
Players in games can exhibit a version of this "reflection effect." Players are more willing to risk disagreement when bargaining over possible losses than when bargaining over possible gains (Neale and Bazerman, 1985; Camerer et al., 1993). In certain coordination games with multiple equilibria, avoidance of losses acts as a "focal principle" that leads players to coordinate their expectations on those equilibria in which nobody loses money (Cachon and Camerer, 1996).

Overconfidence about Relative Skill

The now-standard approach to games of imperfect information pioneered by John Harsanyi presumes that players begin with a "common prior" probability distribution over any chance outcomes. As an example, consider two firms A and B, who are debating whether to enter a new industry like Internet software. Suppose it is common knowledge that only one firm will survive—the firm with more skilled managers, say—so firms judge the chance that their managers are the more skilled. The common prior assumption insists both firms cannot think they are each more likely to have the most skill. Put more formally, a game like this can be modelled as a tree where the top node separates the game into two halves—a left half in which A is truly more skilled, and a right half in which B is truly more skilled. The firms can't play coherently if A and B believe they are actually on different halves of the tree. In this way, overconfidence about relative skill violates the common prior assumption. Of course, this requirement of a common prior does not rule out that players may have private information. In the example of the two competing software firms, each could know about its own secret projects or the tastes of its customers, but others must know what that information could possibly be.

Dozens of studies show that people generally overrate the chance of good events, underrate the chance of bad events and are generally overconfident about their relative skill or prospects. For example, 90 percent of American drivers in one study thought they ranked in the top half of their demographic group in driving skill (Svenson, 1981). Feedback does not necessarily dampen overconfidence much (and could make it worse): one study even found overconfidence among drivers surveyed in the hospital after suffering bad car accidents (Preston and Harris, 1965). But if those involved in game-like interactions are overconfident, the result may matter, dramatically. For example, economic actors behaving in a mutually overconfident way may invest the wrong amount in R&D, prolong strikes or delay agreements inefficiently, opt for high-risk sports or entertainment careers instead of going to college, and so on. Although overconfidence has been largely ignored in theorizing about games, there are some clear experimental examples of its effects. In the Winter 1997 issue of this journal, Babcock and Loewenstein review several such examples and also describe what they call "self-serving bias."

One possible economic manifestation of overconfidence is the high failure rates of new businesses (around 80 percent fail in their first three years). Of course, high failure rates are not necessarily inconsistent with profit maximization. Maybe new business owners judge their relative skill accurately, but business returns are positively skewed "lottery ticket" payoffs in which the few survivors are extremely profitable. Then a large percentage might fail, even though the expected value of entering is positive.
The overconfidence and rational entry explanations are very difficult to distinguish using naturally occurring data. But they can be compared in an experimental paradigm first described in Kahneman (1988) and extended by Rapoport et al. (forthcoming). In the entry game paradigm, each of N subjects can choose to enter a market with capacity C or can stay out and earn nothing. The profit for each entrant is the same, but more entrants means that everyone earns a lower level of profit. If C or fewer enter, the entrants all earn a positive profit. If more than C enter, the entrants all lose money. In pure-strategy Nash equilibria, players should somehow coordinate their choices so that exactly C enter and N – C stay out.
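As an illustration of that equilibrium logic, the sketch below (Python; the payoff function is an assumption for illustration, since the excerpt does not give the experiments' actual payoffs) checks that an entry count of exactly C is the only one at which no entrant wants to exit and no outsider wants to enter.

```python
# Minimal sketch (Python, not from the paper): an illustrative payoff for the
# market entry game. It just satisfies the properties stated in the text:
# all entrants earn the same profit, which falls as entry rises, is positive
# when E <= C and negative when E > C; staying out earns nothing.

def entrant_profit(entrants: int, capacity: int) -> float:
    """Illustrative: positive iff the number of entrants is at most capacity."""
    return 0.5 + (capacity - entrants)

def is_equilibrium_entry_count(entrants: int, capacity: int) -> bool:
    """With exactly this many entrants, no entrant wants out and no outsider wants in."""
    entrant_stays = entrant_profit(entrants, capacity) >= 0            # vs. 0 from exiting
    outsider_stays_out = entrant_profit(entrants + 1, capacity) <= 0   # entering would lose money
    return entrant_stays and outsider_stays_out

N, C = 10, 4
print([e for e in range(N + 1) if is_equilibrium_entry_count(e, C)])  # [4]
```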
[...] clearly exhibiting some strategic sophistication, because the frequency of the most common choice is much lower when players are asked only to express preferences, rather than match. For example, when asked to name a favorite day of the year, 88 subjects picked a total of 75 different dates; Christmas was the most popular at 6 percent. But when trying to match with others, 44 percent picked Christmas.

Game theory has a lot to learn from subjects about focal principles, but serious theoretical attention to the topic has been rare. Crawford and Haller (1990) show how focal precedents can emerge over time when games are repeated and players are eager to coordinate. Static theories like Bacharach and Bernasconi (1997) do not explain how focal points come about, but they capture the tradeoff between the chance that people commonly recognize a strategy's distinguishing features and the number of other strategies that share those features. Focal principles have potentially wide economic applications in implicit contracting, evolution of convention, social norms and folk law, and corporate culture (Kreps, 1990).

Timing is another descriptive feature of a game that is often assumed to be irrelevant, but can matter empirically. In laying the foundations of game theory, von Neumann and Morgenstern (1944) deliberately emphasized the central role of information at the expense of timing. They believed that information was more fundamental than timing, because knowing what your opponents did necessarily implies that they moved earlier. Alternatively, if you don't know what your opponents did, you won't care whether they already did it, or are doing it now. Combining these principles implies that information is important but that timing, per se, is not. But empirical work has found surprising effects of move order in games (holding information constant).

Table 3: Battle of the Sexes

Take the battle of the sexes game in Table 3. In this game, one player prefers choice A and the other choice B, but both players would rather coordinate their choices than end up apart. This game has two pure-strategy equilibria, (A,B) and (B,A), that benefit players differently. There is also a mixed-strategy equilibrium—choosing B 75 percent of the time—that yields an expected payoff of 1.5 to both players.[4] Notice that both players prefer either one of the two pure strategy equilibria to mixing, but they each prefer a different one.[5] Cooper et al. (1993) found that when players move simultaneously, they converge roughly to the mixed strategy equilibrium, choosing B more than 60 percent of the time, as shown in Table 3. In a sequential condition, say that the Row player moves first, but her move is not known to Column. In this case, Row players choose their preferred equilibrium strategy, B, 88 percent of the time, and Column players go along, choosing A 70 percent of the time. The mere knowledge that one player moved first, without knowing precisely how she moved, is enough to convey a remarkable first-mover advantage that the second-mover respects. The data suggest a magical "virtual observability," in which simply knowing that others have moved earlier is cognitively similar to having observed what they did (Camerer, Knez and Weber, 1996). After all, if Column figures out that Row probably selected B, Column's sensible choice (setting aside mean retaliation) is to go along. It is not clear how virtual observability works, but it appears that when one player explicitly moves first, other players think about the first-mover's motivations more carefully. If first-movers anticipate this, they can choose the move that is best for themselves, because they know players moving later will figure it out.

[4] Intuitively, think of the mixed strategy equilibrium in this way. If player 1 knows that player 2 will choose B 75 percent of the time, then for player 1, the expected value of choosing A will be (.25)(0) + (.75)(2) = 1.5, and the expected value of choosing B will be (.25)(6) + (.75)(0) = 1.5. In other words, the highest payoff for player 1 is 1.5. Of course, the same logic works in reverse; if player 2 knows that player 1 will choose B 75 percent of the time, then the highest possible payoff for player 2 is 1.5. Therefore, if both players choose B 75 percent of the time, then the best response of both players to that choice will involve a payoff of 1.5.

[5] The outcome (B,B) is also a "mean" fairness equilibrium. Suppose that player 1 thinks that player 2 expects them to play B, and as a result, player 2 is going to respond meanly by choosing B to harm player 1 (and themselves). Then, a mean-spirited player 1 will choose B in a sort of preemptive retaliation, so that a mutually destructive (B,B) equilibrium results.

Psychology experiments have established related ways in which reasoning about events depends curiously on their timing. For example, many people dislike watching taped sports events. Even when they don't know the outcome, simply knowing that the game is over drains it of suspense. People can also generate more explanations for an event that has already happened than for one that has yet to happen. One experiment investigated the psychology of timing in the game of "matching pennies" (Camerer and Karjalainen, 1992), in which both players independently choose heads (H) or tails (T). In this game, one player wants to match, but the other player wants to mismatch. If the "mismatching" player moves first, she is more likely to choose either H or T, trying to outguess what the other player will later do, than to choose a chance device which explicitly randomizes between H and T for her. But if the "matching" player has already moved, the mismatching player is more likely to choose the chance randomizing device, hedging her bet. Apparently people are more reluctant to bet on their guesses about what other players have already done than on guesses about what other players will later do.

Iterated Dominance and "Beauty Contests"

"Iterated dominance" is the strategic principle that players first rule out play of dominated strategies by all players, then eliminate strategies that became dominated after the first set was eliminated, and so forth.
In many games, this iterative process yields a unique choice after enough steps of iterated dominance are applied.[6] But do people actually apply many levels of iterated dominance? There are many reasons for doubt. Studies of children show that the concept of "beliefs of others" develops slowly. Psycholinguist Herb Clark studies how people infer the meaning of statements with vague references ("Did he do it already?"), which require people to know what others know, what others know they know, and so forth. Clark jokes that the grasp of three or more levels of iterated reasoning "can be obliterated by one glass of decent sherry." Since the process of iteration depends on beliefs about how others will play (and their beliefs . . .), then if even a few people behave irrationally, rational players should be cautious in applying iterated dominance.

[6] These "dominance solvable" games include Cournot duopoly, finitely repeated prisoner's dilemma and some games with strategic complementarities.

Experiments are useful for measuring where the hierarchy of iterated dominance reasoning breaks down. An ideal tool is the "beauty contest game," first studied experimentally by Nagel (1995). A typical beauty contest game has three rules. First, N players choose numbers xi in [0,100]. Second, an average of the numbers is taken. Third, a target is computed equal to a fixed fraction p of the average; for illustration, say p is 0.7, so the target is 70 percent of the average number. Finally, the player whose number is closest to the target wins a fixed prize. (Ties are broken randomly.) Before proceeding, readers should think of what number they would pick if they were playing against a group of students.

This game is called a "beauty contest" after the famous passage in Keynes's (1936, p. 156) General Theory of Employment, Interest, and Money about a newspaper contest in which people guess what faces others will guess are most beautiful. Keynes used this as an analogy to stock market investment. Like people choosing the prettiest picture, players in the beauty contest game must guess what average number others will pick, then pick 70 percent of that average, while knowing that everyone is doing the same.

The beauty contest game can be used to distinguish the number of steps of reasoning people are using. Here's how: suppose a player understands the game and realizes that the rules imply the target will never be above 70. To put it another way, numbers in the range [70,100] violate first-order iterated dominance. Now suppose a subject chooses below 70 and thinks everyone else will as well. Then the [...]
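The deletion process the text describes can be traced mechanically. A minimal sketch (Python; not from the paper), using the p = 0.7 illustration above, shows how the interval of choices that survive repeated deletion of dominated numbers shrinks toward the unique equilibrium at zero.

```python
# Minimal sketch (Python, not from the paper): iterated dominance in the p-beauty
# contest with p = 0.7, as in the illustration above. Any number above p times the
# largest possible average is dominated; deleting those numbers and repeating
# shrinks the surviving interval toward the unique equilibrium at 0.

p, upper = 0.7, 100.0
for step in range(1, 6):
    upper = p * upper   # after this round of deletion, choices above `upper` are dominated
    print(step, round(upper, 2))
# 1 70.0   -> choices above 70 violate one step of iterated dominance
# 2 49.0
# 3 34.3
# 4 24.01
# 5 16.81
# Experimental subjects typically stop after one to three steps of reasoning
# rather than iterating all the way to zero.
```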
[...] often have several equilibria, and those that are not subgame perfect are considered to be less likely to occur. As a reasoning principle, however, backward induction is descriptively dubious because studies of how people learn to play chess and write computer programs show that backward reasoning is unnatural and difficult. And backward induction requires players to spend precious time thinking about future events that seem unlikely to occur. Should they bother?

Direct tests of backward induction in games come from work on sequential bargaining (Camerer et al., 1993). In these experiments, player 1 offers a division of a pie. If player 2 accepts the offer, the game ends. But if player 2 rejects the offer, then the pie shrinks in size and player 2 offers a division to player 1. Again, if player 1 accepts the offer then the game ends. Otherwise, the pie shrinks again in size and player 1 again gets to offer a division. If the third-round offer is rejected, the game ends and players get nothing. This is a game of backward induction, where the optimal offer can be reached by working back from the last period. If the third pie is reached, and play is rational, then player 1 will offer only an epsilon slice to player 2, who will accept. Knowing this, player 2 recognizes that when dividing the second pie, she must give player 1 a slice equal to the smallest pie (plus epsilon), and keep the remainder, or else player 1 will reject the offer. Knowing this, player 1 recognizes that when dividing the first pie, offering player 2 a slice equal to the size of the second pie minus the third (plus epsilon) will be an offer that player 2 will accept.

Subjects trained briefly in backward induction reach this result readily enough. But as in many other experiments, first-round offers of untrained subjects lay somewhere between dividing the first pie in half and the equilibrium offer (pie two minus pie three). More interestingly, the experiment was carried out on computers, so that to discover the exact pie sizes in the three rounds, subjects had to open boxes on a computer screen. Measurements of the cursor's location on the screen indicated the order in which boxes were opened and how long they were kept open.[9] By presenting the game to subjects in this way, the subjects are forced to reveal the information they are looking at, giving clues about their mental models and reasoning. Subjects tended to look at the first-round pie first, and longest, before looking ahead, contrary to the "backward induction" looking pattern exhibited by trained subjects. In fact, subjects did not even open the second- and third-round boxes—ignoring the sizes of the second and third pies entirely—on 19 percent and 10 percent of the trials, respectively. These subjects simplify a difficult problem by ignoring future choice nodes that seem unlikely to ever be reached. Their heuristic might be considered sensible, because nearly 90 percent of the trials ended after one round.

[9] Psychologists have used similar methods for nearly 100 years, recording movements of eyes as people read, to understand how people comprehend text.
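The backward-induction arithmetic just described is easy to mechanize. The sketch below (Python; pie sizes are illustrative assumptions, since the excerpt does not report the actual amounts, and the epsilon sweeteners are ignored) computes the equilibrium offer in each round by working back from the last pie.

```python
# Minimal sketch (Python, not from the paper): backward induction in the
# three-round shrinking-pie bargaining game described above.

def equilibrium_split(pies):
    """Backward induction: at each round the proposer offers the responder exactly
    what the responder would keep as the next round's proposer (0 after the last round)."""
    keep_next = 0.0                      # proposer's share one round later
    plan = []
    for pie in reversed(pies):
        offer = keep_next                # responder's continuation value
        keep = pie - offer               # proposer keeps the rest
        plan.append((pie, offer, keep))
        keep_next = keep
    return list(reversed(plan))          # (pie size, offer to responder, proposer keeps) per round

# Illustrative pie sizes in dollars (assumed, not the experiment's actual values).
for rnd, (pie, offer, keep) in enumerate(equilibrium_split([5.00, 2.50, 1.25]), start=1):
    print(f"round {rnd}: pie={pie:.2f} offer={offer:.2f} proposer keeps={keep:.2f}")
# round 1: pie=5.00 offer=1.25 proposer keeps=3.75
# round 2: pie=2.50 offer=1.25 proposer keeps=1.25
# round 3: pie=1.25 offer=0.00 proposer keeps=1.25
```

As in the text, the first-round offer equals the second pie minus the third (here 2.50 − 1.25 = 1.25), which is what the responder could secure by rejecting and proposing next round.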
Speculations and New Directions

Systematic violations of game-theoretic principles are not hard to find because all useful modelling principles are simplifications, and hence are sometimes false. Table 6 summarizes the discussion to this point by offering a list of a few principles that are widely used in game theory, along with the systemic violations of those principles and citations for selected experiments documenting those findings.

Table 6: Evidence on Game Theory Modelling Principles

Lists of this sort are a start. The next step is to use the evidence of violations to construct a formal and coherent theory. Substantial progress has already been made in two areas mentioned earlier: measuring social values and extending game theory to include them; and measuring and incorporating differences among players, like players using different steps of iterated reasoning in beauty contests. Behavioral game theory could usefully extend standard theory in three other ways.

Nonexpected utilities. Players do not always choose the strategy with the highest expected utility. They sometimes value losses differently than gains, and can have aversions toward (or preferences for) strategic ambiguity or uncertainty. Several models have been proposed to bring pattern to this behavior. In the prospect theory of Tversky and Kahneman (1992), people value gains and losses from a reference point (rather than final wealth positions) and dislike losses much more than they like equal-sized gains, which can explain why describing payoffs as gains or losses matters in some experiments. Aversion toward ambiguity can be explained by models that use probabilities that are nonadditive.

A less psychologically grounded approach is to allow the possibility of errors in choice or uncertainty over payoffs that imply that while players are more likely to choose the strategy with the highest expected utility, they are not certain to do so. McKelvey and Palfrey (1995) propose what they call a "quantal response" function, which inserts a variable into the choice function to capture the degree of randomness in decisions. A "quantal response equilibrium" exists if players know how much randomness is in the decisions of others and choose accordingly.[10] This approach offers a parsimonious method of explaining several different behavioral phenomena. For example, if some players do not always make the optimal choice, then other players should use only a limited number of iterated rational steps in situations like the beauty contest games. Or in the ultimatum game, a responder may be likely to reject a smaller offer mistakenly, because it is a small mistake, and knowing this, rational proposers are not likely to make very uneven offers. The quantal response approach also seems to explain subtle experimental patterns in contributions to public goods (Anderson, Goeree and Holt, 1996).

[10] A handy quantal response function is the logit form, $P(s_i) = e^{\lambda \pi(s_i)} / \sum_j e^{\lambda \pi(s_j)}$, where $\pi(s_i)$ is the expected payoff of strategy $s_i$. The constant $\lambda$ captures imprecision in choices, and could be interpreted as either sensitivity to dollar payoffs, or existence of nonpecuniary utilities unobserved by the experimenter. If $\lambda = 0$, then players choose equally often among strategies. As $\lambda$ grows larger, $P(s_i)$ approaches one for the strategy with the highest expected payoff (the best response).
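The logit form in footnote 10 is simple to compute. A minimal sketch (Python; an illustration, not the authors' code) shows how the choice probabilities move from uniform randomness at λ = 0 toward the best response as λ grows.

```python
# Minimal sketch (Python, not the authors' code): the logit quantal response
# function from footnote 10, P(s_i) = exp(lambda*pi(s_i)) / sum_j exp(lambda*pi(s_j)).

import math

def logit_choice_probs(payoffs, lam):
    """Probability of choosing each strategy given expected payoffs and precision lam."""
    weights = [math.exp(lam * p) for p in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

payoffs = [1.0, 2.0, 3.0]                  # expected payoffs of three strategies
print(logit_choice_probs(payoffs, 0.0))    # lam = 0: uniform, each probability ~ 1/3
print(logit_choice_probs(payoffs, 5.0))    # large lam: nearly all weight on the best response
```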
Learning. Much recent interest has been focussed on adaptive learning models, which come in two basic forms. "Belief-based" models presume that subjects form beliefs about what others will do, based on past observations, and choose the strategy that maximizes utility given these beliefs (the "best response") (Crawford, 1995). "Reinforcement" models ignore beliefs and assume that strategies have different probabilities or propensities of being chosen, which change as successful strategies are "reinforced" by observed successes or failures (Roth and Erev, 1995).[11] Both kinds of models are narrow. Belief learners pay no special attention to their payoff history. Reinforcement learners pay no attention to the outcomes of strategies they didn't choose, and they don't keep track of choices by others. Both learners ignore information about other players' payoffs. Teck Ho and I (1997) recently developed a general model that synthesizes the two approaches, thereby avoiding some of the weaknesses [...]

[11] Reinforcement models were popular in cognitive psychology until about 30 years ago, when they were largely replaced by the information processing approach (brains are like computers) and, more recently, by "connectionism" (brains are neural networks). Behaviorism was discredited as a general theory of human learning because it could not easily explain higher-order cognition, like language, and lacked neuroscientific detail. These failures also make it unlikely that reinforcement models can fully explain human learning in strategic situations.

[...] only some entrants can be right. Players who do not use backward induction can only benefit from doing so (while realizing that others might not be). In other cases, game theory provides bad advice because underlying assumptions do not describe other players. Knowing how others are likely to deviate will help a player choose more wisely. For example, it is generally dumb to choose the equilibrium of zero in a beauty contest (even playing against CEOs or brilliant Caltech undergraduates). It is smarter to know the number of reasoning steps most people are likely to take and optimize against that number, and to understand the adaptive process that changes others' choices over time. Similarly, players making offers in ultimatum games should know that many players simply regard an offer of 10 percent as unfair and prefer to reject it.

Experiments have supplied tentative answers to some sharply posed questions. Is there a formal way to incorporate reciprocal social values like fairness, altruism, or revenge, which are widely observed in the lab? Yes: Rabin's (1993) fairness equilibrium. Do judgment phenomena like overconfidence about relative skill matter in games? Yes, but we don't yet know how to include these phenomena in formal extensions of game theory. How many steps of iterated dominance do people use? One to three. Do learning and the construction of mental models in unstructured games matter? Yes, but we need much additional data on how they matter. Moving from these sorts of observations to coherent new modelling is the primary challenge for the next wave of research.

A final caveat: The desire to improve descriptive accuracy that guides behavioral game theory does not mean game theory is always wrong. Indeed, it may be only a small exaggeration to conclude that in most games where people gain experience, equilibrium is never reached immediately and always reached eventually. But this is no triumph for game theory until it includes explanations for behavior in early rounds and the processes that produce equilibration in later rounds.

• Comments from Linda Babcock, Kong-Pin Chen, Vince Crawford, Bob Gibbons, Teck Ho, Matthew Rabin, JEP editors (Brad De Long, Alan Krueger and Timothy Taylor) and many seminar audiences were helpful. This work resulted from many collaborations and conversations, especially with Gerard Cachon, Teck Ho, Eric Johnson, Risto Karjalainen, Marc Knez, Richard Thaler, Roberto Weber and Keith Weigelt. Support of NSF SBR 9511001 and the Russell Sage Foundation is gratefully appreciated.

References

Akerlof, George A., "The Market for 'Lemons': Quality Uncertainty and the Market Mechanism," Quarterly Journal of Economics, August 1970, 84, 488–500.
Anderson, Simon P., Jacob K. Goeree, and Charles A. Holt, "A Theoretical Analysis of Altruism and Decision Error in Public Goods Games," working paper, University of Virginia, Department of Economics, 1996.
Babcock, Linda, and George Loewenstein, "Explaining Bargaining Impasse: The Role of Self-Serving Biases," Journal of Economic Perspectives, Winter 1997, 11:1, 109–26.
Bacharach, Michael, and Michele Bernasconi, "The Variable Frame Theory of Focal Points: An Experimental Study," Games and Economic Behavior, April 1997, 19, 1–45.
Bazerman, Max H., and W. F. Samuelson, "I Won the Auction but Don't Want the Prize," Journal of Conflict Resolution, December 1983, 27, 618–34.
Berg, Joyce, John W. Dickhaut, and Kevin A. McCabe, "Trust, Reciprocity, and Social History," Games and Economic Behavior, July 1995, 10, 122–42.
Blount, Sally, "When Social Outcomes Aren't Fair: The Effect of Causal Attributions on Preferences," Organizational Behavior and Human Decision Processes, August 1995, 63:2, 131–44.
Bolle, Friedel, "Does Trust Pay?" Diskussionspapier 14/95, Europa-Universität Viadrina Frankfurt (Oder), November 1995.
Cachon, Gérard, and Colin Camerer, "Loss-Avoidance and Forward Induction in Experimental Coordination Games," Quarterly Journal of Economics, February 1996, 111, 165–94.
Camerer, Colin F., and Teck-Hua Ho, "Experience-Weighted Attraction Learning in Games: A Unifying Approach." California Institute of Technology Working Paper No. 1003, 1997.
Camerer, Colin F., and Risto Karjalainen, "Ambiguity-Aversion and Non-Additive Beliefs in Noncooperative Games: Experimental Evidence." In Munier, B., and M. Machina, eds., Models and Experiments on Risk and Rationality. Dordrecht: Kluwer Academic Publishers, 1992, pp. 325–58.
Camerer, Colin F., and Daniel Lovallo, "Optimism and Reference-Group Neglect in Experiments on Business Entry." California Institute of Technology Working Paper No. 975, 1996.
Camerer, Colin F., and Richard Thaler, "Anomalies: Ultimatums, Dictators and Manners," Journal of Economic Perspectives, Spring 1995, 9:2, 209–19.
Camerer, Colin F., Eric Johnson, Talia Rymon, and Sankar Sen, "Cognition and Framing in Sequential Bargaining for Gains and Losses." In Binmore, K., A. Kirman, and P. Tani, eds., Contributions to Game Theory. Cambridge: Massachusetts Institute of Technology Press, 1993, pp. 27–47.
Camerer, Colin F., Marc J. Knez, and Roberto Weber, "Timing and Virtual Observability in Ultimatum Bargaining and 'Weak Link' Coordination Games." California Institute of Technology Working Paper No. 970, 1996.
Cooper, Russell, Douglas DeJong, Robert Forsythe, and Thomas Ross, "Forward Induction in the Battle-of-the-Sexes Games," American Economic Review, December 1993, 83, 1303–16.
Crawford, Vincent P., "Adaptive Dynamics in Coordination Games," Econometrica, January 1995, 63, 103–43.
Crawford, Vincent P., "Theory and Experiment in the Analysis of Strategic Interaction." In Kreps, David M., and Kenneth F. Wallis, eds., Advances in Economics and Econometrics: Theory and Applications, Seventh World Congress. Vol. 1, Cambridge: Cambridge University Press, 1997, pp. 206–42.
Crawford, Vincent P., and Hans Haller, "Learning How to Cooperate: Optimal Play in Repeated Coordination Games," Econometrica, May 1990, 58, 571–95.
Edgeworth, Francis Ysidro, Mathematical Psychics. 1881. Reprint, New York: Augustus M. Kelley, Publishers, 1967.
Fehr, Ernst, Georg Kirchsteiger, and Arno Riedl, "Does Fairness Prevent Market Clearing? An Experimental Investigation," Quarterly Journal of Economics, May 1993, 108, 437–59.
Ho, Teck, Colin Camerer, and Keith Weigelt, "Iterated Dominance and Learning in Experimental 'P-Beauty Contest' Games," American Economic Review, forthcoming.
Holt, Debra J., "An Empirical Model of Strategic Choice with an Application to Coordination Games," working paper, Queen's University, Department of Economics, 1993.
Kahneman, Daniel, "Experimental Economics: A Psychological Perspective." In Tietz, R., W. Albers, and R. Selten, eds., Bounded Rational Behavior in Experimental Games and Markets. Berlin: Springer-Verlag, 1988, pp. 11–18.
Keynes, John Maynard, The General Theory of Employment, Interest, and Money. London: Macmillan, 1936.
Kreps, David M., "Corporate Culture and Economic Theory." In Alt, J., and K. Shepsle, eds., Perspectives on Positive Political Economy. Cambridge: Cambridge University Press, 1990, pp. 90–143.
Ledyard, John, "Public Goods Experiments." In Kagel, J., and A. Roth, eds., Handbook of Experimental Economics. Princeton: Princeton University Press, 1995, pp. 111–94.
McDaniel, Tanga, E. Elisabet Rutström, and Melonie Williams, "Incorporating Fairness into Game Theory and Economics: An Experimental Test with Incentive Compatible Belief Elicitation," working paper, University of South Carolina, Department of Economics, March 1994.
McKelvey, Richard, and Thomas Palfrey, "Quantal Response Equilibria for Normal Form Games," Games and Economic Behavior, July 1995, 10, 6–38.
Mehta, Judith, Chris Starmer, and Robert Sugden, "The Nature of Salience: An Experimental Investigation of Pure Coordination Games," American Economic Review, June 1994, 84, 658–73.
Nagel, Rosemarie, "Unraveling in Guessing Games: An Experimental Study," American Economic Review, December 1995, 85, 1313–26.
Neale, Margaret A., and Max H. Bazerman, "The Effects of Framing and Negotiator Overconfidence on Bargaining Behaviors and Outcomes," Academy of Management Journal, March 1985, 28, 34–49.
Neale, Margaret A., and Max H. Bazerman, Cognition and Rationality in Negotiation. New York: Free Press, 1991.
Preston, C. E., and S. Harris, "Psychology of Drivers in Traffic Accidents," Journal of Applied Psychology, 1965, 49:4, 284–88.
Rabin, Matthew, "Incorporating Fairness into Game Theory and Economics," American Economic Review, December 1993, 83, 1281–302.
Rapoport, Amnon, Darryle A. Seale, Ido Erev, and James A. Sundali, "Equilibrium Play in Large Group Market Entry Games," Management Science, forthcoming.
Roth, Alvin E., and Ido Erev, "Learning in Extensive-Form Games: Experimental Data and Simple Dynamic Models in the Intermediate Term," Games and Economic Behavior, January 1995, 8, 164–212.
Sally, David, "Conversation and Cooperation in Social Dilemmas: A Meta-Analysis of Experiments from 1958 to 1992," Rationality and Society, January 1994, 7, 58–92.
Schelling, Thomas, The Strategy of Conflict. Cambridge, Mass.: Harvard University Press, 1960.
Stahl, Dale, and Paul Wilson, "On Players' Models of Other Players: Theory and Experimental Evidence," Games and Economic Behavior, July 1995, 10, 218–54.
Svenson, Ola, "Are We All Less Risky and More Skillful than Our Fellow Drivers?," Acta Psychologica, February 1981, 47:2, 143–48.
Tversky, Amos, and Daniel Kahneman, "Advances in Prospect Theory: Cumulative Representations of Uncertainty," Journal of Risk and Uncertainty, October 1992, 5, 297–323.
Van Huyck, John B., Ray B. Battalio, and Richard O. Beil, "Tacit Coordination Games, Strategic Uncertainty, and Coordination Failure," American Economic Review, March 1990, 80, 234–48.
von Neumann, John, and Oskar Morgenstern, The Theory of Games and Economic Behavior. Princeton: Princeton University Press, 1944.