Microeconomics Review Sheet: Green, Lecture notes of Microeconomics

Typology: Lecture notes (2021/2022), uploaded on 09/07/2022
Matthew Basilico

Spring, 2013

1 Introduction and Majority Tournaments

What is this part of the course about?

• Evaluation of economic outcomes
  - Which criteria to use? Efficient? Fair? Appropriately responsive to the economic environment?
    ∗ Efficiency is an obvious criterion to use
    ∗ A big part of what we will do in this course is fairness: what does fairness really mean?
  - The whole idea is to be more precise about these normative concepts, and to embed that in the design of economic systems.
• Design of the economic system
  - Unlike other parts of 2010, we look at designing the system when the environment is not known, and at finding methods for designing solutions that are appropriately sensitive to the environment (that make sense in a comparative-static way)
  - What are the designer's objectives?
    ∗ Just trying to get any Pareto optimum? Prepared to make tradeoffs between Pareto efficiency and other objectives?
    ∗ ⇒ What are the parameters other than profit?
  - Designer's (limited) knowledge about the economy: we will turn this into a game eventually (game-theoretic language), so it will be important to know what the participants know about each other
    ∗ What does the designer know about the economy? What do the participants know about each other?
  - Design of the system: are there constraints on the design of the system (communication capacity, computational ability, legal constraints, ...)?
• Main idea: the economic system is the thing we are studying
  - How to design it
  - What performance we can get out of it
  - Individual incentives of people in the system, and how they respond to whatever is designed [later in course]
• So that's the basic idea of what normative economics is

Environments Studied

Over 11 lectures, go from the very general, abstract setting to something that looks much more like economics.

1.
General social choice: discrete, finite number of alternatives
  (a) To keep things easy to work with: there's no money, no quasilinearity of preferences, no concept of convexity of preferences, just a set of alternatives. Your preferences can be in any order.
2. Social choice with monetary transfers
  (a) Set of things that are possible, but compensatory transfers can be made among people (that add up to 0).
  (b) It becomes meaningful to say "I prefer a to b, but if you give me b plus $5 then that's just as good."
    i. It's a different kind of economic environment, and there are some special results that come out of it
3. Specific resource allocation problems: commodity auctions, discrete goods, matching problems, auctions, networks
  (a) Some are very classical: commodity spaces with risk, etc.
  (b) Some are goods that come in discrete quantities, and are indivisible
  (c) Auctions are a combination of discrete goods with monetary transfers
4. Equitable cost allocation, equitable rewards for inputs (non-constant returns to scale)
  (a) This is good for people who want to study accounting. Different divisions of a firm, and central costs.
5. Externalities, public goods and local public goods
  (a) Some special models that are of great interest in economics, especially public finance.

Outline of Course

1. Social Choice and Voting Theory - 4 lectures
2. Cooperative Game Theory - 3 lectures
  (a) There is a relationship in the history of game theory between the two types of game theory. But:
  (b) Functionally, cooperative game theory is separate

• If we need to be explicit, the set of tournaments on X is denoted T(X)
  - This is the domain. If we're thinking about solutions that map tournaments into outcomes, the domain of that function is T(X).
    ∗ (We'll see below that all tournaments are possible)

Tournaments are a special kind of binary relation

1. T is complete
2.
T is not assumed to be transitive or even acyclic
  (a) Hence there is no revealed-preference relationship
  (b) We're specifically interested in cases where there are cycles (there wouldn't be much to talk about if T is acyclic)

By contrast, the Revealed Preference Relation generated by the family of all budget sets is
1. Not complete
2. Assumed to be acyclic

The Ordinary Preference Relation
1. Complete
2. Transitive (strict preference)
3. Transitive indifference (needed if X is, for example, R^K)
  (a) allows you to draw indifference curves

So the T relationship is rather different from what we've seen. Later today we will encounter a binary relation that is neither complete nor transitive.

Figure 1:
• For starters, a typical tournament with 5 alternatives
• Depicted as a directed graph.
  - Between any two alternatives, there is an arrow pointing from more preferred to less preferred
    ∗ a → b means aTb
• Every alternative has to face 4 opponents
  - No alternative in this diagram beats everyone. So this is the typical case.
  - There are cycles: 3) yTxTzTy, 4) yTxTwTzTy, 5) yTxTwTvTzTy
    ∗ Cycles of all orders
    ∗ Can't have a cycle of two (by asymmetry of T)

Equivalent Tournament Descriptions
1. a binary relation: xTy
2. a set (pair) in product space: for (x, y) ∈ X × X, (x, y) ∈ T ⟺ xTy
3. a directed graph: x → y
4. a matrix: m_xy = 1 if (x, y) ∈ T, m_xy = 0 if (x, y) ∉ T
5. a two-person zero-sum game: (X, X, g), with g(x, y) = 1 if (x, y) ∈ T, g(x, y) = −1 if x ≠ y and (x, y) ∉ T, g(x, x) = 0.

McGarvey's Theorem

Can any tournament arise from the majority vote of a set of individuals with rational preferences? Yes, provided there are enough people: two per pair, n(n−1)/2 pairs. (McGarvey, 1952)

Proof
• There are n(n−1)/2 pairs (x, y).
• We will construct a population with preferences such that if xTy then a majority of people prefer x to y
• For each (x, y) add two people to the population who:
  1. Agree with T as between x and y
  2. On the other alternatives are diametrically opposed:

     voter 1: x, y, z_1, z_2, ..., z_{n−2}
     voter 2: z_{n−2}, ..., z_2, z_1, x, y

  Each list is an ordering of a preference relation. Preferences are diametrically opposed except for x being preferred to y
  - Here x vs. y will have x win 2 to 0. Everything else will be 1 to 1.
  - Now do this for every pair x, y
• Tally the votes. If xTy then x wins by exactly 2:

     x: n(n−1)/2 + 1   vs.   y: n(n−1)/2 − 1

  [Every pair is diametrically opposed, except the one pair defined above where both vote for x]
  [You can do it with a smaller number of people under certain conditions, but we're not going to worry about that]

Consequences of McGarvey's Theorem
• The main point: the domain we have to work in is all possible tournaments. No a priori restrictions
  - As long as we have a decent number of people, we're okay
• Under further assumptions about the population of voters or the nature of the alternatives, the domain of tournaments we need to work with might be reduced

Examples of Majority Tournaments
• Orders
  - An order is a tournament relation which could be explained as if it were the preference of a single rational individual (with a strict preference order)
  - Majority rule doesn't have any cycles
  - Majority vote induces rational choice
    ∗ Majority rule is exactly like the preferences of some person
      · This expresses a social preference xyzvw
      · That's why this whole upper triangle is a block of 1s
  - If you have a society where the social rule is an order, it's pretty clear what any reasonable social decision should do.
  - Alternatives can be reordered so that M = (m_xy) is triangular with m_xy = 1 ⟺ x < y

Figure 2: An Order

  - Poll question: Why is 1/18 ≈ .056 correct in the case of 3 voters and 3 alternatives?
  - This is a good skill to develop: little back-of-the-envelope calculations where you enumerate things
    ∗ First thing we notice is that this style of voting is a neutral method: the names of the alternatives don't matter.
    ∗ Look at the 1st player: let's say he has preferences abc. [Fixed]
    ∗ Now look at the 2nd player: what are his possible preferences?
      · List them in lexicographic order.
      · Note that each one has probability 1/6
    ∗ Then look at whether player 3 matters (can he produce a cycle?)
      · First row: abc, abc; there's nothing player 3 can do. The T relation is therefore abc.
      · 2nd row: acb. Once a has won, there can't be a Condorcet cycle.
      · 3rd row: bac. c will be a Condorcet loser; hence there can't be a cycle.
      · 4th row: now a cycle is possible if the 3rd player picks cab. This happens with probability (1/6)(1/6) = 1/36
      · 5th row: similarly, 1/36
      · 6th row: 1 and 2 completely disagreeing won't create a cycle. 3's preferences will produce an order

        Player 2 | Player 3 pick creating a cycle
        a b c    | 0
        a c b    | 0
        b a c    | 0
        b c a    | c a b
        c a b    | b c a
        c b a    | 0

    ∗ Cycles come from partial agreement, and ties that are broken by the 3rd player in just the wrong way
    ∗ Another exercise: why .088 when n = 3 in a large population?

T1: Top Cycle (TC)

• Which alternatives should be included when a CW does not exist?
  - The most agnostic, forgiving approach is to admit as evidence of dominance any chain of reasoning that can establish that x is indirectly better than all y ∈ X.
    ∗ An alternative outside the top cycle should not be included
    ∗ The reason emerges from the Top Cycle Theorem

Definition
• The Top Cycle of T is the set of all x such that for each y ∈ X there exists a number k and a sequence of alternatives x_1, ..., x_k with x_1 = x, x_k = y, and x_i T x_{i+1} for all i = 1, ..., k − 1.
  - The Top Cycle is simply a set of alternatives with the property that for every other alternative y there is some sequence, possibly a very long sequence, by which you can go from x all the way to that alternative y

Top Cycle Theorem
• Everything in the top cycle directly beats everything not in the top cycle
• Why is the conclusion of the Top Cycle Theorem a surprise?
  - You might have thought: majority tournaments can have all sorts of weird cycles.
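The Top Cycle definition above reduces to a reachability check, which is easy to sketch computationally. The following is a minimal Python illustration (the 4-alternative tournament is a made-up example, not the one in Figure 1):

```python
# Minimal sketch: the Top Cycle as a reachability check.
# x is in TC iff every other alternative can be reached from x
# along T-edges (possibly through a long chain).

def top_cycle(alts, beats):
    """alts: list of alternatives; beats[(x, y)] is True iff xTy."""
    def reachable(x):
        seen = {x}
        stack = [x]
        while stack:
            u = stack.pop()
            for v in alts:
                if v not in seen and beats.get((u, v), False):
                    seen.add(v)
                    stack.append(v)
        return seen
    return {x for x in alts if reachable(x) == set(alts)}

# Made-up tournament: a, b, c form a 3-cycle and each of them beats d.
alts = ['a', 'b', 'c', 'd']
edges = [('a', 'b'), ('b', 'c'), ('c', 'a'),
         ('a', 'd'), ('b', 'd'), ('c', 'd')]
beats = {(x, y): (x, y) in edges for x in alts for y in alts if x != y}

tc = top_cycle(alts, beats)   # {'a', 'b', 'c'}: d is excluded
```

On this instance the theorem's conclusion is visible directly: every member of the computed top cycle beats d in a single step, with no long chain needed.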
• Top cycle members may need a very complex chain of victories to claim indirect dominance over alternatives outside the top cycle
• Indirect dominance is not that convincing. Why should we believe that the top cycle is the set of the best elements?
  - This theorem says that, as far as alternatives outside the top cycle are concerned, indirect dominance arguments are unnecessary
  - Every top cycle member directly beats everything outside the top cycle

Proof of the Top Cycle Theorem
• Suppose that x ∈ TC and y ∉ TC.
• We want to show that xTy.
• Assume to the contrary; then yTx because T is complete.
• Take any z ∈ X. We will show that if yTx then y ∈ TC, and this contradiction will establish that xTy.
  - As x ∈ TC, there exists (z_1, z_2, ..., z_k) with x T z_1 T z_2 T ... T z_k T z.
  - If yTx, we can append y to this chain so that y T x T z_1 T z_2 T ... T z_k T z.
  - Because z is any element of X, this shows that y ∈ TC.
• ∴ xTy.
• What we have: if y beats x, then because x beats z by some chain, y beats z by a chain that's just one step longer. Then y would beat z indirectly, which means y should be in the TC. This is a contradiction, so xTy.
• (If TC is smaller than X - illustration of the Top Cycle Theorem)
  - Inside the TC there's a cycle
    ∗ Find the elements in TC: those with direct or indirect dominance over everything. Once you've found them all, the only things left are those beaten directly by elements in the top cycle.
  - From TC to outside, there's direct dominance

• Method
  - The agenda setter computes the top cycle, puts his favorite alternative last and lists the rest of the TC in this cyclic order, then lists the alternatives not in the TC in any order, and places all of them in the agenda from bottom to top.
• This is an example of legal constraints on the design of the system
• What about other procedures?
  - Suppose we just had a binary tree, where every node has two branches (we can populate the endpoints of that tree, and take votes until there is only one thing left)
    ∗ [Not quite linear; more parallel processing]
  - The answer is that you can then elect something that is not in the TC

Next time: continue talking about majority rule

2 Tournament Solutions

Outline: Tournament Solutions
- Critique of the Top Cycle
- Desirable properties of the solutions: the test-driving approach
- Examples of solutions and evaluation methods
  - The Uncovered Set
  - The Slater Order (in Section)
  - The Bipartisan Set (in Section)
  - The Banks Set
  - The Tournament Equilibrium Set

• The point is not to remember all the tournament solutions, but how to think about ethics in a social situation. How to think about what might actually happen, and adjustments that might be made.
  - Test-driving approach [Start with a solution, then see what it does]
    ∗ Okay, you gave me the definition. What properties does it have? The art is to think of properties, and to test whether these constructs satisfy the properties. Ethos of "let's build it and see what it does."
  - In contrast to the axiomatic approach [I want these properties, then prove there is one solution] (which we'll do in cooperative game theory): specify the list of properties, and show that if you want exactly these properties there's only one solution that has them

Defining a Tournament Solution
• A tournament solution (on X) is a non-empty-valued correspondence S : T(X) → X
  - It is a correspondence because it needs to allow for set-valued solutions (as in TC). It could of course also be a point.
  - Interpretation: any member of S(T) is an acceptable action. We will use this for now but eventually will be more demanding.

Top Cycle Critiques
• TC can contain Pareto-dominated elements [main critique]
  - Example you should really know: three people with preferences:
      abcd
      bcda
      cdab
    [sticking d after c]
    ∗ d ∈ TC, and yet d is unanimously worse than c.
    ∗ d is similar to c, and d is slightly worse
    ∗ [Should this have an arrow from c to d?]
• Checking Conditions:

    TC
    Monotonicity: Y
    Independence of Losers: Y
    No Pareto Dominated Outcomes: N

Desirable Properties of the Solutions

3 Properties from Initial Test Drive
• Context
  - Today, the objective is to use only the information available from T (the majority relation). [It would be nice to have vote-count information; not allowable here, but we consider it next lecture]
1. Monotonicity
  (a) Definition: A tournament solution is monotonic if for any x ∈ X and two tournaments T and T′ that agree on X\{x} (are identical with respect to all the elements other than this x that we're focusing on), and such that for all y ∈ X, xTy implies xT′y, we have x ∈ S(T) implies x ∈ S(T′). (and such that x can only move up) [A\B means the set that contains all those elements of A that are not in B]
  (b) Meaning: The tournament changes from T to T′, but the only things that change are pairwise relations involving x, and x can only move up. Therefore, x should not be knocked out of the tournament solution.
    i. In matrix form: in the x row, some 0's change to 1's (also make the corresponding asymmetric changes to the x column). The x should not be knocked out of the tournament solution.
2. Independence of Losers Condition
  (a) A tournament solution S satisfies independence of losers if, when the tournament changes from T to T′ such that for all x ∈ S(T) and all y ∈ X, xTy implies xT′y, then S(T) = S(T′).
    i. Everything in the solution remains the same; therefore the only changes can be among those things not in S(T)
  (b) Meaning:
    i. The tournament changes from T to T′, but all members of S(T) continue to beat everything they beat in T.
    ii. ⇒ No relationships among the members of S(T) can change.
    iii.
Only two types of relationships can change: between x ∈ S(T) and y ∉ S(T), where x can only move up, or between two alternatives x, y both outside of S(T).
    iv. Such changes should have no effect on the solution: S(T) = S(T′)
3. No Pareto Dominated Outcomes
  • There is no Pareto-dominated element; i.e. for all y ∈ S(T), ∄ x ∈ S(T) such that xTy unanimously.

Checking Conditions:

                                  TC   UC   Slater Winner   BP
    Monotonicity                  Y    Y    Y               Y
    Independence of Losers        Y    N    N               Y
    No Pareto Dominated Outcomes  N    Y    Y               Y

Examples of Solutions and Evaluation Methods

**Key Relationships Between Solution Sets

    ∀T: TC(T) ⊃ UC(T) ⊃ B(T) ⊃ TEQ(T)
        TC(T) ⊃ UC(T) ⊃ UC∞(T) ⊃ BS(T)

Also, according to Tillman, B(T) ≡ UC(T) for n ≤ 9.

• Con: Even though BP has all three properties, it is not defined in an intuitive or economically meaningful way
  - Again, this is a crazy construction. Make this game, solve it, call it a solution, and now you've constructed something: the bipartisan set. We've just constructed it; it could be completely nuts. But it turns out it is a subset of UC, and satisfies monotonicity and independence of losers.
    ∗ So when we ask, "can you find a solution, test-drive it, and satisfy these three properties?": yes, there is such an object. Is it a good solution? We don't know, but it passes these three tests.
• [In this regard, there are lots of other solutions, and other properties for the test drive...]

Implementation
• Context
  - So far: Why are we recommending these solutions? Because they have these nice properties; it's good for you. There's no choice.
  - Now: there is a designer who is trying to do something specific. The designer chooses the system without knowing everything the players know.
  - Players' knowledge is decentralized, possibly private, and they retain discretion as to how to interact with the system.
• Design Types
  - Benevolent Design: Choose a system that results in good outcomes for the players - e.g. an efficient way of raising a fixed amount of revenue through taxation.
  - Motivated Design: Choose a system that achieves good outcomes for the designer - e.g. revenue-maximizing auctions.
    ∗ We usually look at a designer with known preferences
    ∗ Example of Motivated Design: the Agenda Setter in the Amendment Procedure from Lecture 1
• When we consider an implementation problem, we have to know:
  1. What constraints the designer faces
  2. What information the designer has
  3. What the designer's objectives might be

Game Forms
• Implementation uses a game form
  - Called a "form" because it does not use payoffs
• A game form is an extensive form game (tree), but instead of payoffs the outcomes are the physical alternatives.
• How is a game form played? Individuals come with their evaluation of outcomes, and then play according to game theory
  - Hence equilibrium depends on what players know about each other
  - So knowing what information players have about each other can feed back into mechanism design.
• In implementation theory, we allow players to behave strategically rather than straightforwardly.

Strategic Voting

Preliminary: We will specify that the designer has to use an amendment procedure
• Players act strategically, with foresight. But we need to talk about what this means:
  - Coalitions:
    ∗ No one player's vote can change anything (with many players)
    ∗ Strategic units are the sets of people whose welfare will change after they make a coordinated change in their voting patterns [see Figure 2.7 below]
    ∗ Any group can contemplate a change in how they plan to vote. Larger groups will have more power to make changes in outcomes.
    ∗ Any group contemplating a change believes no one outside the group will make a change (like Nash equilibrium)
  - Similar idea to subgame perfection
    ∗ Except individuals are not the players. Any individual could be a member of multiple majorities.
    ∗ More applicable to political parties than to individuals
• Voting tree corresponding to the amendment procedure and an order of alternatives
• The game tree shows the history of votes, and how votes are taken simultaneously across the electorate
• A subgame of the amendment procedure is a subtree in which all but the currently first alternative are as in the original game
  - Like in game theory. Here defined by making the wrong vote at the last step.
• A coordinated deviation against a strategy combination σ at a subgame g, by a coalition M of voters who constitute a majority of the population, is a list of strategies σ′_ig for i ∈ M that they can use at the subgame g which makes all of them better off when the players outside of M continue to use the strategies as in σ
• A strong voting equilibrium is a combination of strategies σ for the players from which there is no coordinated deviation
  - ⇒ For games in which strong voting equilibria exist, the alternative reached at the last stage of the equilibrium makes a good prediction of what this game will implement
  - [Nash equilibrium: no individual can deviate. Strong equilibrium: no coalition can deviate.] Usually in a game, strong equilibria don't exist.
    ∗ In these amendment-procedure voting games, we'll see that strong equilibria do exist. In fact, strong subgame perfect equilibria. Hence we'll get there by working backwards, as in subgame perfection.

Three Results of Strategic Voting
1. For an amendment procedure with the tree populated in a given order, strong equilibria exist and there is only one outcome that is possible

There is a subset of the uncovered set, called the Banks Set, that has the following properties:

2. The set of alternatives that are implementable as strong voting equilibria in an amendment procedure is precisely the Banks Set
3. A constructive method can be given to show the order in which the alternatives should be considered, if the agenda setter wants to implement a particular member of the Banks Set.
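The backward-induction logic behind these results can be sketched in a few lines. This is a minimal illustration (the function names and the 3-cycle tournament are my own, not the Figure 2.5/2.7 example): at each vote, strategic voters compare the eventual outcomes of the two branches, and the branch the majority prefers wins.

```python
# Minimal sketch of backward induction on an amendment agenda.
# beats[(x, y)] == True iff a majority prefers x to y (xTy).
# Strategic voters compare the *eventual* outcome of each branch,
# not the immediate contenders.

def sophisticated_outcome(agenda, beats):
    def play(champion, rest):
        if not rest:
            return champion
        challenger, tail = rest[0], rest[1:]
        keep = play(champion, tail)      # eventual outcome if champion wins this vote
        switch = play(challenger, tail)  # eventual outcome if challenger wins
        if keep == switch:
            return keep
        return keep if beats[(keep, switch)] else switch
    return play(agenda[0], tuple(agenda[1:]))

def sincere_outcome(agenda, beats):
    # Myopic voting: each round's majority winner survives.
    champ = agenda[0]
    for alt in agenda[1:]:
        champ = champ if beats[(champ, alt)] else alt
    return champ

# Made-up 3-cycle: aTb, bTc, cTa.
edges = [('a', 'b'), ('b', 'c'), ('c', 'a')]
beats = {(x, y): (x, y) in edges
         for x in 'abc' for y in 'abc' if x != y}

sincere = sincere_outcome(('a', 'b', 'c'), beats)     # 'c'
soph = sophisticated_outcome(('a', 'b', 'c'), beats)  # 'b'
```

With the agenda (a, b, c), sincere voting elects c, but the strategic outcome is b: the majority preferring b to c coordinates at the first vote. This is the same kind of divergence the Figure 2.5/2.7 example below illustrates.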
Example:
• Sincere voting in Figure 2.5 leads to 3. Sincere voting is not a strong voting equilibrium because:
• Strategic voting in Figure 2.7 leads to 2. A coordinated deviation by the majority that prefers 2 to 3 can induce 2 in the above.
  - Example:

        T = [ −  1  0  0
              0  −  1  1
              1  0  −  1
              1  0  0  − ]

  - Start at any tentative agreement a
  - Look for the set of alternatives that beat that agreement: {x ∈ X | xTa}
  - Apply the agreed-upon solution to that set: S({x ∈ X | xTa})
  - Define possible dynamic paths of contestation
    ∗ A majority might move from a to b if b contests a
    ∗ That majority will not initiate a departure from a to b if bTa but b does not contest a
  - The minimal subset from which the process cannot escape is a good candidate. In general, there may be more than one minimal subset; therefore we take the solution as the union of minimal subsets from which escape is not possible.
    ∗ [This will not be an issue in the discussion or examples below]
• Process with Example:
  1. Fill out {b | bTa} for each a. [For each a ∈ X, look at which b have bTa]
  2. Fill out S({b | bTa}) for each a. [For each a identified above, see which would be in the solution set, restricted to that tournament.]
  3. Write in matrix form [Columns: what x contests; Rows: what contests x] and look for the minimal cycle

The column-row setup in the TEQ relation b D(S, T) a is confusing. Columns: what x contests. Rows: what contests x.

• Solutions at values of n
  - n = 1: no choice
  - n = 2: solution is the favored element of T
  - n = 3: solution is the CW if one exists, or X if there is a cycle
  - n = 4: solution is the same as TEQ(T) = UC(T) = B(T). May be smaller than TC(T)
  - n ≥ 5: can be different
• [Other formalization:]
  - [Let R be any binary relation on X]
  - Definition of R-Retentive: Y ⊆ X is said to be R-retentive if:
    1. Y ≠ ∅
    2.
There exists no pair (x, y) with x ∈ X\Y and y ∈ Y such that xRy
  - R-retentive sets Y are those from which society cannot escape when transitions are made in accordance with R
  - Definition of Top Set: the Top Set of R, denoted TS(R), is the union of all minimal R-retentive sets
    ∗ When R is a tournament, the Top Set is the Top Cycle
  - Existence and Uniqueness of TEQ: By construction, TEQ(T) = TS(D(TEQ, T)) is an identity with respect to T.
    ∗ Hence: There exists a unique solution S(T) such that S(T) = TS(D(S, T)) is an identity with respect to T.
      · Not a surprise; guaranteed by the constructive method used to define it

Strength of Majorities
• Background
  - More information: strength of majorities (which includes the vote count) instead of 0-1 victory/defeat
  - Binary vote table: v_xy = votes for x from the pair {x, y}
    ∗ Can also be written in percentage form: v_xy + v_yx = 1
  - McGarvey-like Theorem:
    ∗ Any skew-symmetric matrix [A^T = −A, which means a_ij = −a_ji] can be the binary vote table generated by a set of rational voters
  - Goal: generate an order (vs. a set-valued solution)
    ∗ Asking society to behave like a rational individual.
    ∗ If there's a Condorcet winner, then we're done.
  - Many solutions follow the idea of how to break cycles at their weakest links
• 3 Methods: Kemeny Solution, Ranked Pairs, Schulze Method
• 3 (+2) Solution Properties: Composition Consistency, Clone Independence, Condorcet Consistency, (Anonymity, Neutrality)
• [Two Interpretations of X - motivating Clone Independence]
  1. X is the set of candidates we actually have.
  2. X is the set of conceivable candidates.
    (a) We have information on all possible binary comparisons T(X).
    (b) We need to map T(X) into an order π on X so that:
    (c) when the actual set of candidates A becomes known, we can use the maximal element of A according to π as the solution
  - #2 is motivating Clone Independence

Properties
• Condorcet Consistent
  - If a solution, whenever it is applied to a population with an order, produces the same order, that solution is Condorcet consistent
• Composition Consistency
  - If a solution produces the order π at two populations, it should also produce π when applied to the union of these populations.
• Clone Independence
  - A set of alternatives C is a set of clones if no voter ranks any candidate outside of C between two elements of C.
  - A solution is independent of clones if for all sets of clones C:
    1. If c ∈ C wins at vote matrix V, then if c is deleted from X, the new winner is another member of C
    2. If x ∈ X\C wins and a clone c ∈ C is deleted from X, the new winner is still x
  - Example:

        v_xy   a   b   c
        a      -  65  45
        b     35   -  60
        c     55  40   -

    ∗ The Kemeny Solution is abc.

SM2: Ranked Pairs
• Method:
  1. List pairs xy by the vote margin of x over y
  2. Sequentially form a binary relation R by adding in xRy step by step, as long as this does not create a cycle
    (a) If you encounter a pair xy that would create a cycle, simply skip this pair and continue down the list
• [Example]

        V = [ −   8  14  10
             13   −   6   2
              7  15   −  12
             11  19   9   − ]

  - Left columns: Step 1. Right columns: Step 2.

        Pair  Margin | Implied Order ("Ranked Pairs" Method)
        db    17     | dRb o.k.
        cb     9     | cRb o.k.
        ac     7     | aRc o.k.
        ba     5     | bRa skipped (else cycle bacb)
        cd     3     | cRd o.k.
        da     1     | dRa skipped (else cycle dacd)

  - Combining elements, we get the solution: acdb.
    ∗ [Note the other pairs have negative vote margins and are redundant]

SM3: Schulze Method
• Method:
  1. [Start with the Top Cycle] Take each pair xy and compute a path of majority victories from x to y
    (a) [i.e. if x doesn't beat y, then find a chain of majority victories leading from x to y]
  2. Call the weakest victory in this path the strength of the path
    - Measured in the number of affirmative votes
  3.
Of all the paths from x to y, find the one with maximum strength
  - This defines the max-min strength from x to y
• The max-min strength matrix defines an order on X
• [Example:]

        V = [ −   8  14  10
             13   −   6   2
              7  15   −  12
             11  19   9   − ]

        max-min = [ −  14  14  12
                   13   −  13  12
                   13  15   −  12
                   13  19  13   − ]

        order = [ −  1  1  0
                  0  −  0  0
                  0  1  −  0
                  1  1  1  − ]

  - Hence the order is dacb

4 Voting with Full Rank Preferences and Arrow's Theorem

Ordinal Information
• Voting based on full rank information
  - But this means we are still not using information about the intensity of preferences
• We're done with tournaments and vote-count methods

Common Voting Methods

Examples of widely used voting based on full rank information
• Neutral Methods
  - Plurality Rule
    ∗ Count the number of first-place votes
    ∗ Produces a winner
      · [Usually not used to produce a strict order of all of X, but we could rank alternatives in the order of the number of first-place votes they receive]
      · The act of voting may be seen to be a social benefit
    ∗ Problems
      · Not IIA: ≿: x ≻_i y ≻_i z, z ≻_j y ≻_j x; ≿′: z ≻′_i x ≻′_i y, y ≻′_j x ≻′_j z. [≿, ≿′ agree on {x, y}] x F(≿_i, ≿_j) y and yet y F(≿′_i, ≿′_j) x
  - Plurality with Runoff
    ∗ If no candidate has a majority, the top two are selected for a second-round runoff
  - Alternative Vote
    ∗ If no candidate has a majority, drop the one with the lowest vote total and try a plurality vote again
  - Borda Count
    ∗ One round of voting with points assigned for each rank: n, n−1, ..., 1
    ∗ Problems:
      · Not IIA: ≿: x ≻_i y ≻_i z, y ≻_j z ≻_j x; ≿′: x ≻′_i z ≻′_i y, y ≻′_j x ≻′_j z. [≿, ≿′ agree on {x, y}] y F(≿_i, ≿_j) x and yet x F(≿′_i, ≿′_j) y
      · May fail to elect a Condorcet Winner
• Non-Neutral Methods
  - Privilege certain alternatives
  - Supermajority requirements with status quo
    ∗ If the majority cycles, one can find a higher cutoff at which the cycle would disappear (because the binary relation at higher cutoffs becomes incomplete)
      · Example: a 1-dimensional tax rate t. It takes a supermajority θ+ to raise t, and a lower number of voters θ− to lower t.
A voting equilibrium is a t that divides the population into two balanced parts in these proportions (generalizes the median-voter rule).
  - Additional: tree-based, binary, sequential methods; the amendment procedure; binary trees with more branches
• Non-Anonymous Methods:
  - [Not 1-person, 1-vote] Privilege certain individuals' preferences (?)
  - Quota rules, chairpeople, committees and other agenda setters, randomized choice methods

Arrow's Theorem
• Notation setup
  - I individuals, n alternatives X
  - Each person has a weak order preference ≿_i ∈ R
  - R is the set of all weak orders on X. A subscript P on any relation means its strict subrelation
  - Domain: A ⊆ R^I
  - Social Welfare Function: F : A → R
• Axioms
  - Unrestricted Domain (UD): A = R^I
  - Pareto (P):
    ∗ For any {x, y} ⊆ X, x ≻_i y ∀i ⟹ x F_P(≿_1, ..., ≿_I) y
  - (IIA): (Independence of Irrelevant Alternatives, or Pairwise Independence)
    ∗ For any {x, y} and any two profiles ≿ = (≿_1, ..., ≿_I), ≿′ = (≿′_1, ..., ≿′_I) such that ≿ and ≿′ agree on {x, y}, we have that F(≿_1, ..., ≿_I) and F(≿′_1, ..., ≿′_I) agree on {x, y}
  - Non-dictatorship (ND):
    ∗ There exists no h ∈ {1, ..., I} such that x ≻_h y implies x F_P(≿_1, ..., ≿_I) y.

• The most commonly cited case is single-peaked preferences:
• Single-Peaked Preferences
  - Definition (MWG 21.D.2,3,4)
  - Delete any subset of X... you can still find the median restricted to what is left.

5 Cardinal Information and Cooperative Game Theory I

5.1 [Gap for Cardinality]

5.2 Cooperative Game Theory

Background
• A family of models: today we talk about transferable utility, which is by far the most important model. But we'll also do 2-3 others.
• Going from one-person-one-vote to money (and transferable utility). Transferring surplus was not a part of voting theory.
• Everyone has quasilinear utility
• Use a utilitarian objective function; this will help to get an efficient outcome.
  - The social utility produced is ∑ u_i.
  - We can divide this across people using transfers of money, and can separate equity from efficiency (handle them separately)
• Fairness will mean different things depending on the model, but the basic idea is some equal division of surplus.
  - The whole concept of surplus couldn't be discussed in voting theory
• Unequal power of different people
  - In voting theory, it was because they happened to be chairman, dictator, etc.
  - Now it is based on the ability to produce social surplus for group members
    ∗ Not just because they are happy all the time

Transferable Utility (TU) Games

A TU game is a player set N and a function v which has a value for every subset of N.
• Set of players = N = {1, ..., N}
• Any S ⊆ N is a coalition
• 2^N is the set of all coalitions
• v : 2^N → R
  - The function v says what coalition S is worth.
    ∗ This says that when the people of S get together, they can do something collectively that they couldn't do separately. [Given that this is cooperative game theory, we're not mentioning strategies, or how S produces the number v(S) as their social welfare.] But they can make all possible binding contracts; no one ever cheats.
  - It says a sufficient statistic for the social workings of this system is the number associated with each coalition.
• The result of a cooperative game is x = (x_1, ..., x_n): an allocation (or imputation, in the older literature)
• If ∑_{i∈S} x_i ≤ v(S) we say that S can achieve the payoffs (x_i)_{i∈S}
• world → TU game → solution
  - Model the world (a verbal problem) by a TU game
  - Modelling the TU game means coming up with 2^N numbers, v(S) for each S (all the way from s = 1 to s = N)
  - Once we have the TU game (this list of 2^N numbers), we apply some mathematics and come up with a solution (an imputation): this is what the world should look like
  - We'll be talking mostly about the 2nd step, but the 1st step is at least as important:
    ∗ The way you choose to model the world as a TU game is an ethical problem
  - There is always a generals question on this: a long shaggy-dog story, and you make it into a TU game.
∗ How to go properly from a description of a real problem to a sparse economic model
• TU Game Solutions
– Solutions that work for all N: f(N, v) ∈ R^n
∗ Wouldn't be right to work for some n and not others. Like tournament theory.
– Driving Question
∗ What is it worth to be player i in the game (N, v)?
· Saying to a consultant: this is a problem, and my society has decided to use a certain solution; how much should I pay to be a player in this game?
– Point-valued or set-valued solution (though mostly point-valued)
• Terminology
– If Σ_{i∈N} xi ≤ v(N) the allocation x is feasible
∗ (An allocation that doesn't exceed the value of the whole coalition is feasible)
– If for every (N, v), Σ_{i∈N} fi(N, v) = v(N), the solution f is efficient
∗ (An allocation that exactly uses up what the whole group can produce is called efficient)
∗ We should maximize the sum of our individual utilities
· The physical reality of what happens may look asymmetric (i.e. one person plays at Carnegie Hall)
· But the solution will tell us something like 'divide the surplus equally'
– Coalition N must form
∗ Not going to say groups of people go separate ways (or if so, will use compensatory transfers across groups)
∗ The essence of cooperative game theory is that all agreements are possible. So anything that can happen, can happen as a result of agreement among all the parties.

Examples
[Examples in this lecture; solutions in next lecture]

Bilateral Oligopoly
One buyer (i = 1) and two sellers (i = 2, 3)
• Each seller has one indivisible unit to sell, worthless to either seller.
• The buyer has no need for more than one unit.
– Classic set-up for excess supply. In a theory of competitive markets (say Bertrand pricing), the price would be bid down to 0.
So what we want to decide is what is the fair answer
• To make a TU game, need to say what is v(S) for every S:
– v({i}) = 0: anyone by themselves can't do anything
– v({2, 3}) = 0: the sellers by themselves can't do anything
– v({1, i}) = 1: the buyer with either of the sellers gets 1 unit of surplus
– v(N) = 1: everyone together can't do any better than above
• So this is the v function. What we'd like to do is map it into a certain allocation (which we'll do next time)
– If the solution was to map into f({1}) = 1 and f({2}) = f({3}) = 0, this would be the competitive solution. So Bertrand competition is a solution to this game, and is an efficient point (in which the buyer gets all the surplus). But is this really what an ethical solution would recommend?

Minimum Cost Spanning Tree Games
• Cost of a binary connection between i and j is cij
• S: If S forms, they must find a pattern of connection from 0 to all i ∈ S, and they will then pay the sum of the associated costs
• T: Best pattern of connection is a tree T.
– T(S) are all trees that span S (no cycles in an efficient connection)
• Rules of engagement can vary:
– Can S run its tree through some j ∉ S?
• v(S) = min_{T∈T(S)} Σ_{ij∈T} cij

Queueing
• Each job takes one unit of time
• Each i has one job to run; waiting time costs θi per unit of wait
• S will sequence its jobs from highest waiting cost to lowest:
– σ(i, S) is the position of i in this ordering of S
• Politeness: Does S have to be 'polite' and let j ∉ S get their jobs done first? Or can S run its jobs before anyone else?
– Polite case: v(S) = −Σ_{i∈S} (|N\S| − 1 + σ(i, S)) θ_{σ(i,S)}
∗ Highest valued i ∈ S, with σ(i, S) = 1, has to wait for all the people outside of S. Each agent in S has wait multiplied by the unit cost of wait θ_{σ(i,S)}

Airport Games
Question on 2011 generals
• Ranches are located at various distances from a water source and need to be connected to this source for an irrigation ditch. They lie along a straight line at distances di from the source. They must share the cost of digging the ditch.
The cost is $1 per mile.
• v(S) = max_{i∈S} di

2-Games (Relationship Games)
• For every pair {i, j} there is a value of their relationship zij
• Relationship values are symmetric: zij = zji
• Coalition's worth = sum of all relationship values that it contains: v(S) = Σ_{{i,j}⊆S} zij

PS Games (the reason for the name will be revealed later)
• For every i and every S with i ∉ S, compute i's marginal value to (i) S and to (ii) N\(S ∪ {i}):
(i): v(S ∪ {i}) − v(S)
(ii): v(N\S) − v(N\(S ∪ {i}))
• A PS game is one where v(S ∪ {i}) − v(S) + v(N\S) − v(N\(S ∪ {i})) is independent of S
• For every coalition S, the sum of the marginal values for joining S and joining N\S (a.k.a. S complement) is a constant with respect to S.
– Possibly: worth the same to any coalition
– Theorem: queueing is a PS game.
∗ Why?

TU Solutions: Basic Properties
Background
• 0-Normalizing TU Games
– One degree of freedom exists: an arbitrary additive constant
∗ TU games describe behavior of people who can trade money and have quasi-linear preferences
– Given a game (N, v), an equivalent 0-normalized game (N, w) is defined by w(S) = v(S) − Σ_{i∈S} v({i})
– Hence, w({i}) = 0 for all i
∗ No change in behavior, so we will adopt the convention. Also: empty set convention: v(∅) = 0.

Properties
• Efficiency
– A solution f is efficient if for every (N, v), Σ_{i∈N} fi(N, v) = v(N)
∗ N must form.
∗ The essence of cooperative game theory is that all agreements are possible. So anything that can happen can happen as a result of agreement amongst the parties.
• Dummy Property
– Player i is called a dummy in (N, v) if v(S ∪ {i}) = v(S) for all S
∗ Player i doesn't add value to any coalition
– If i is a dummy in (N, v), then fi(N, v) = 0.
• Symmetry Property
– A pair of players (i, j) are symmetric in (N, v) if v(S ∪ {i}) = v(S ∪ {j}) for all S such that S ∩ {i, j} = ∅
– If (i, j) are symmetric in (N, v) then fi(N, v) = fj(N, v)
• Superadditivity
– Restriction on the games (N, v) that are compatible with using a cooperative solution
– A game (N, v) is superadditive if for all S, T with S ∩ T = ∅, we have v(S) + v(T) ≤ v(S ∪ T)
∗ We work with superadditive games because these are cooperative situations where binding contracts are possible.
• Additivity
– Most important property
– If (N, v) and (N, w) are two games, then f(N, v) + f(N, w) = f(N, v + w)
– The game (N, v + w) could arise if the separate games involved non-interacting decisions so that v(S) + w(S) = (v + w)(S) for all S
∗ It could also represent a case in which there is uncertainty about the situation. Quasi-linear preferences mean that people are risk neutral.
∗ Additivity says that it should not matter to risk-neutral players whether they use enforceable contingent contracts to play the game ex-ante, or wait until the uncertainty is resolved and play
• Individual Monotonicity
– If (N, v1) and (N, v2) are two games such that v1(S) = v2(S) for all S ≠ N, and v1(N) > v2(N), then f(N, v1) > f(N, v2)

Shapley's Theorem (1953)
• There exists a unique solution f satisfying dummy, symmetry, efficiency and additivity.

Listened in this section up to: 55:37

6 Shapley Value and Other Solutions to TU Games
Today is pretty much the end of TU games. The Shapley value is by far the most important; the last two have less application.

6.1 Shapley's Theorem
• Theorem (Shapley, 1953)
– There exists a unique solution f satisfying dummy, symmetry, efficiency and additivity.
• Intuition of Proof:
f(N, v) = Σ_T α_T(v) f(N, v_T), where α(v) = M⁻¹v
– M⁻¹ is a fixed matrix that does not vary with v. This formula gives us a way to compute the Shapley value as a linear combination of 2^n − 1 known vectors with the weights being v(S).
– So we have this formula through calculating M⁻¹.
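The decomposition in the proof sketch can be computed directly: every v is a linear combination of unanimity games v_T (v_T(S) = 1 iff T ⊆ S), the coefficients come from Möbius inversion rather than an explicit M⁻¹, and the value of v_T splits α_T equally within T. A minimal sketch in Python (the helper names are mine, and the test game is the bilateral oligopoly example from earlier):

```python
from itertools import combinations

def all_coalitions(players):
    """Every subset of the player set (including the empty set), as frozensets."""
    return [frozenset(c) for r in range(len(players) + 1)
            for c in combinations(players, r)]

def dividends(players, v):
    """Coefficients alpha_T expressing v as a sum of unanimity games v_T,
    via Moebius inversion: alpha_T = sum_{S subset of T} (-1)^(|T|-|S|) v(S)."""
    alpha = {}
    for T in all_coalitions(players):
        if not T:
            continue
        alpha[T] = sum((-1) ** (len(T) - len(S)) * v[S]
                       for S in all_coalitions(T))
    return alpha

def shapley(players, v):
    """Shapley value via the unanimity basis: alpha_T is split equally within T."""
    alpha = dividends(players, v)
    return {i: sum(a / len(T) for T, a in alpha.items() if i in T)
            for i in players}

# Bilateral oligopoly from the earlier example: buyer 1, sellers 2 and 3.
players = (1, 2, 3)
v = {S: 0.0 for S in all_coalitions(players)}
v[frozenset({1, 2})] = v[frozenset({1, 3})] = v[frozenset(players)] = 1.0
```

Here `shapley(players, v)` gives the buyer 2/3 and each seller 1/6: more equal than the Bertrand answer (1, 0, 0), but still rewarding the short side of the market.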
But we haven't really given ourselves any intuition, which is why we look at the following.
∗ Given the nonsingularity of the matrix, we know that there is a unique value. Will now explore other ways of finding that value.

6.2 Other Formulas For Shapley Value

Random Order Approach
• Let π be a permutation of {1, ..., n}. π(j) is the position of j after the permutation π has been applied.
• For each i, let P(π, i) be the set of predecessors of i under the ordering π.
P(π, i) = {j | π(j) < π(i)}
fi(N, v) = (1/n!) Σ_π [v(P(π, i) ∪ {i}) − v(P(π, i))]
– Can see this as: (1/n!) Σ_π (·) is the average, over all permutations, of i's marginal value to a permutation. v(P(π, i) ∪ {i}) − v(P(π, i)) is his marginal value to the permutation π.
• When you do examples with 3-4 players, just make a list of the people down the left hand side of the page, take all the permutations of that set, put down the list of 24 permutations (for 4 people), calculate the marginal value each time, add it up, and that's it. That's the way to do it: no real shortcut, just list all the permutations.
• Efficiency: after the last person, this gives out v(N) in each case, no more no less.
• Note: won't ever do this for n > 4, but good for intuition
– [Example] T = {1, 2}, N = 3. Consider i = 2. Orders where v(P(π, i) ∪ {i}) − v(P(π, i)) = 1: 123, 132, 312
∗ So we get f2(3, v) = (1/3!) · 3 = 3/6 = 1/2

Marginal Value to Coalition Approach
• This is another way of writing the sum, but it groups the terms in a more efficient way
– The idea is that when i comes into a coalition that i isn't already a member of, i will make this marginal contribution
• Write as a function of the marginal value of i to different coalitions S:
fi(N, v) = Σ_{S⊆N\{i}} [|S|! (n − |S| − 1)! / n!] ·
(v(S ∪ {i}) − v(S))
– Note: remember, |S| is the number of people in the coalition minus 1 [that is, the coalition before {i} gets there]
– The coefficient is simply the number of ways of rearranging the predecessors of i and the successors of i, grouping all the repeated occurrences of the same v(P(π, i) ∪ {i}) − v(P(π, i)) in the original formula.
– [Example: using same as above]
1, 2, 3 ⇒ (1! · 1!/3!) · 1 = 1/6
1, 3, 2 and 3, 1, 2 ⇒ (2! · 0!/3!) · 1 = 2/6
∗ Hence f2(3, v) = 1/6 + 2/6 = 1/2
– [Example: N = {1, 2, 3, 4, 5}, T = {3, 4, 5}]
∗ How many different permutations are there where person 5 adds himself to the predecessor coalition {3, 4} to make {3, 4, 5}?
· The permutations π where this happens are 34512, 34521, 43512, 43521
· There are exactly 4 ways he can be added to the predecessor set {3, 4} (above), so we get:
(4/5!) [v({3, 4, 5}) − v({3, 4})]
∗ So these are the 4 rows, out of 120 rows, where 5 is being added to the predecessor set S = {3, 4}
– So this is saying that the number of permutations with exactly this marginal contribution is given by the number of ways you can rearrange the predecessors, times the number of ways you can rearrange the successors.
– You divide by the total number of permutations to say what fraction of rows have this characteristic
– This gives us the number of ways i can add itself to a particular predecessor set S, summed over S
∗ Note that these are binomial coefficients, so you're more likely to be added to medium-sized sets. Like the central limit theorem. We'll see how the formula is used.
– Never as written here, but to approximate large-number problems. For problems you can't calculate explicitly, a good approximation.

Numerical Examples

Applying the Random Order Approach in a Public Technology Game
• Can make a shortcut in terms of calculating the marginal value of each player in each spot in a random order
– Chance of being at each position in a random order is equal
• Set up: n = 4. Two inputs: k = 1 is capital, k = 2 is labor.
z1 = (1, 0), zi = (0, 1) for i = 2, 3, 4
– f(0, x) = 0, f(1, 0) = 0, f(1, 1) = 3, f(1, 2) = 5, f(1, 3) = 6
– Hence: v(S) = 0 if 1 ∉ S, v({1, i}) = 3, v({1, i, j}) = 5, v({1, i, j, k}) = 6
1. Compute the average marginal product for player 1.
• [Note: the strategy leverages the fact that v(S) will be the same for any number of people in the coalition with player 1. Not true for other players]
• Also relies on the fact that being at any position in a random order is equally likely
– 1 is 1st in the random order: marginal product is 0, since no one can produce anything by themselves
– 1 is 2nd in the random order: can produce 3
– 1 is 3rd in the random order: can produce 5
– 1 is 4th in the random order: can produce 6
• All permutations are equally likely, so he has a 1/4 chance of being in each position. [There are different predecessor sets depending on which position he is in, but everything is symmetric so he has a 1/4 chance of being in any one]
• Take the average of these 4 numbers: (1/4)[0 + 3 + 5 + 6] = 3.5
2. Then use efficiency: the Shapley value is efficient, so the whole thing has to sum to 6. [Efficiency: Σ_{i∈N} fi(N, v) = v(N)]
• So the other players in total get 6 − 3.5 = 2.5
3. Finally, use symmetry: players 2, 3, 4 are symmetric to each other, so they need to have the same payoffs
• Hence they each get (1/3) · 2.5 = 5/6

Shapley Value of the Relationship Game
• Set up
– Set of people: every pair of people in the group forms a relationship.
– The relationship between i and j is worth zij to each of i and j. [i and j each get zij when the relationship is formed]
• How to calculate the Shapley value: (How is the surplus from this game allocated?)
– Think of people coming in through a random order.
∗ Ex: 3 comes in after 1 and 2. 3 forms two new links: new surplus is 2(z13 + z23). (This is the marginal value to the coalition of predecessors)
– So need to think about the average marginal value using this technique.
∗ Each player comes in and triggers links with predecessors.
In the end the player will have links to everyone, some of which will have been created by the player, and some of which were formed by people afterwards. But you have a 1/2 chance of being 1st. When you are first, you created surplus in both directions.
– 2 ways to form a relationship (in terms of who comes in first), and each time it produces 2zij. Each i has a 1/2 chance of coming in first, so the 2 and the 1/2 cancel. This gives zij to each of i, j.

– A player with a 0 waiting cost (θi = 0) would not be a dummy. [And we know this since the Shapley value does respect the dummy property]
– This is precisely because of the polite convention. Someone with 0 waiting cost, when he's not in the coalition, is getting to go first even though he has no waiting cost.
∗ But when you can put him in your coalition, you can put him last, so he's added value to your coalition.
∗ Therefore, his presence increases the v(S) of any S with i ∉ S and Σ_{j∈S} θj > 0.
– Normative aspect of the Shapley value: [Green: In my opinion, no one should be getting a positive value here; everyone should be sharing the cost of −22. But that's what's going on]
• Other Conventions:
– Question the modelling part: why the polite convention? Conventions are about power and fairness and who gets to do what to whom, and who has the right
∗ Practical uses in things like national income accounting, using models like this to capture productivity of service industries.
· I.e. people are worth what they are paid, even if all lawyers are doing is suing each other
– Selfish Convention
∗ Any S can put itself first.
∗ Then θi = 0 is a dummy player.
∗ Shapley value is completely different

Example on calculating with large #s
• [Skipped in lecture; says in notes but don't see]

Example: How to Allocate Goods Fairly when there is a capacity constraint
• The Shapley value can suggest a fair way for people to be compensated when they are not served due to capacity constraints in an industry
• They could have had some surplus but they got priced out of the market.
This seems unfair
• Also can be used to allocate a fixed cost across a group of users in a way that reflects their surpluses instead of treating them all equally the way a market would

Random Order Approach with many agents
• Calculation and theory with many agents is simpler than with a finite number of agents (except for cases like n = 3, n = 4)
– Suppose you're a person entering a room. My position coming in is a parameter λ, and my position coming in is always uniformly distributed. My predecessors have the same distribution (of p?).
• Set Up
– [0, 1] set of consumers
– Willingness to pay p is distributed U[0, 1]
– Capacity-constrained production: 0.4 of the good is available at zero cost, no more
• Applying the Random Order Approach
– For which predecessor sets will p be served?
– Key idea: calculate expected marginal values
– All predecessor sets have the same distribution of valuations as the whole population. They differ only in their size.
– Let λ be the fraction of the whole population that is the predecessor set
∗ The distribution of λ is uniform on [0, 1]
– Calculate: how much surplus is generated as a function of p and λ?
• Calculation: 2 cases
1. p > .6
(a) A player with p > .6 is served when joining any predecessor set
i. When λ < .4: added value to predecessors is p (as capacity has not yet been hit)
ii. When λ > .4: added value is p − (1 − .4/λ) (the value he brings is p, and the person being replaced has WTP of 1 − .4/λ)
(b) Integrate over λ uniform on [0, 1]:
∫_0^{.4} p dλ + ∫_{.4}^1 [p − (1 − .4/λ)] dλ = p − .6 + .4 ln(1/.4)
i. Keep your value and pay a price of .6 − .4 ln(1/.4)
2.
p < .6
(a) Same calculation, except the range of integration ends at an upper bound of .4/(1 − p), beyond which p would not be served:
∫_0^{.4} p dλ + ∫_{.4}^{.4/(1−p)} [p − (1 − .4/λ)] dλ = .4 ln(1/(1 − p))
• Results and intuition
– People with p < .6 are not served in an efficient allocation
– But they are differentially compensated for not having been served
∗ Note that p = 0 gets no compensation
∗ And p just below .6 gets essentially the same thing as p = .6, namely: .4 ln(1/.4)
– Comparison with the competitive mechanism
∗ Integral under the curve is the same
∗ Shapley value is a little bit smoother

Individual Monotonicity: Interesting Entailed Property of Shapley Value
• Individual Monotonicity:
– If (N, v1) and (N, v2) are two games such that v1(S) = v2(S) for all S ≠ N and v1(N) > v2(N), then f(N, v1) > f(N, v2)
– (Two games equivalent in every way except that the coalition of the whole is worth more in 1 than in 2, and every smaller coalition has the same value. [These smaller coalitions won't form; they're just there in the background.] In this kind of game, everyone should share in the gain in social surplus)
– So additivity ⟺ monotonicity, even though these are different thought experiments

Other Solutions: EANS
Equal Allocation of Non-Separable Surplus
• Intuition
• Picture is about the concept of a mechanism
– Agent has a θ, plays a strategy s. Then there's a function g that takes this into an outcome.
– Individual has a utility function ui that depends on the outcome and on everyone's observations [or at least may include others']
Settings Studied:
1. Designer, 1 Agent (Principal-Agent). Suppresses strategic considerations
2. Equilibrium Dominant Strategies ('holy grail')
3. Bayesian Equilibrium
4.
[Further considerations]

Notation:
• θi is the observation made by agent i
– The observation can be anything: their own preference, different draws of the economy, they see something (about the physical environment that they're in, or about other people)
– MWG: we suppose that agent i privately observes a parameter, or signal, θi that determines his preferences.
∗ We will often refer to θi as agent i's type.
– θ = (θ1, . . . , θI) observations made by all agents
– Θ = Θ1 × . . . × ΘI fancy notation for the space of possible observations made by all the agents
∗ [Set of possible types for agent i is denoted Θi]
• si ∈ Si the action of agent i
– S = S1 × . . . × SI action space of all the agents
• X space of all outcomes
• ui(x, θ) agent i's evaluation of outcome x when the observations made by all agents have been θ

Functions
• g : S → X is the outcome function
• (S, g) is the mechanism
– Both S and g are designed
– The composition of these two functions, S and g, gives you a mapping from Θ into X
∗ So if people are playing according to the strategies s and the mechanism (S, g) is in place, then s composed with g gives you a mapping from Θ into X
∗ This is the realized outcome function [since what you want to evaluate is Θ → X, which is this function f]
∗ If there's a lot of variation in θ, want to have a lot of variation in X
· [i.e. if you're running an auction, the winner of the auction should vary with evaluations]
– Hence we want to evaluate f, and know which f's are feasible
∗ By feasible, we mean: for which f's is there a mechanism and an equilibrium set of strategies such that the composition of the strategies s and the mechanism (S, g) gives you the f
– It's like a game, except:
∗ The payoffs haven't been specified. The payoffs are only known once you know people's utility functions and the strategy space.
• Comments
– History: socialist economies and command: how do we know people will follow these commands? This needed some kind of formalization.
– In Ch.
23, consider a series of special cases that correspond to restrictions on the functional forms
– Since si ∈ Si can depend on what i believes about others (both what they have seen, θ−i, and how they will act, s−i), we need to make precise assumptions to turn this into a game.
∗ Then can use game theory to make predictions. This is how mechanisms are evaluated.
– The important point is that if we are free from the price system and are free to design anything we want, S and g are both designed areas.
∗ The objective of the designer will be to design an (S, g) that creates a good function f. But exactly what the designer is trying to do, we haven't said yet.
∗ In principle, the participants don't include the designer.

Special Cases
Above was very general: this lecture is about specific examples
• Special cases meaning: specifications of exactly what θ is; distributional assumptions about θ; what the strategy space looks like; what the utility function looks like. [All these things can vary from very general to very specific. Today is very specific]

Induced outcome function
• For any given theory that predicts choices s∗(θ) we have an induced outcome function (MWG: social choice function) f(θ) = g(s(θ))
– a.k.a. outcome function
• Can evaluate based on
1. ex-post realized utilities ui(f(θ), θ)
(a) Set of realized utilities. [After knowing my preferences, and after taking into account everyone else's information]
2. conditional expected utilities
(a) taking a step back to a prior stage of the process and evaluating, based only on each θi
3. ex-ante expected utilities
(a) calculated before anything about the realization of θ is known (including own θi)
i. Ex: should I join an auction
– In these cases the evaluation will use statistical information on the distribution of θ
– [This is how it is evaluated by the players, the mechanism designer, and the analysts (us)]

Terminology and Useful Special Cases
• Notation:
– Outcome: x = (k, t1, . . .
, tI) where k is a real aspect of the outcome and ti is a net monetary transfer to agent i
∗ One possible type of outcome is a list of I + 1 numbers. The first one, k, is the real aspect of the outcome, like a decision. Then a vector of monetary transfers ti.
∗ Below we assume everyone is quasilinear in the monetary transfers.
1. General Quasilinear Social Choice Environment: ui(x, θ) = vi(k, θ) + ti
2. Quasilinear Private Values Case: ui(x, θ) = vi(k, θi) + ti
• Each vi(k, θ) depends only on θi, so we can write vi(k, θi)
• Implies externalities are not considered
• This form is readily applied to the selection of public projects or public goods of different quality
3. Quasilinear with Mutually Payoff-Relevant Information: ui(x, θ) = vi(k, θ) + ti [same as general case]
• I care about your outcome, or others' tastes or information are relevant to 'me'
• Typical applications are to goods purchased where there is an active resale market
– Example: buying an oil lease. Although the amount of oil in the ground is independent of who owns the lease, the expected yield from the lease depends on all the information, your sample θi as well as everyone else's samples θ−i. ∴ vi(k, θ) depends on all the information.
• Today: suppress strategic issues
– Look at only one agent
– Mechanism designer, 1 agent. Agent gets to play a strategy. That strategy is mapped into an outcome. And this is what the agent gets.
∗ Just like giving the agent a menu and saying pick from this menu
∗ Sounds simple, but a few important lessons
– One-agent model ≡ Principal-Agent Model: there are no game theoretic considerations.
∗ Hence these are constrained optimizations
∗ Their analysis is less controversial than anything requiring equilibrium-based game theoretic reasoning
∗ But it illustrates some of the most important parts of mechanism design, like finding participation constraints, feasibility conditions, how do we know people are playing optimally against the mechanism.
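Before turning to the adverse-selection model, the abstract objects from the mechanism-design setup above (strategies si(θi), outcome function g, induced outcome function f = g ∘ s) can be sketched in a few lines. Everything concrete here, the two type labels and the toy decision rule in g, is an invented illustration, not an example from the notes:

```python
# Minimal sketch of a mechanism (S, g) and its induced outcome function
# f(theta) = g(s(theta)). The two-type setup and the decision rule are
# invented for illustration only.

def s(i, theta_i):
    """Agent i's strategy: a map from observations to actions.
    Here every agent simply reports the observation (a direct mechanism)."""
    return theta_i

def g(actions):
    """Outcome function g : S -> X, where x = (k, t1, ..., tI).
    Toy rule: undertake the project (k = 1) iff any agent reports 'H';
    no monetary transfers."""
    k = 1 if "H" in actions.values() else 0
    t = {i: 0.0 for i in actions}
    return k, t

def f(theta):
    """Induced outcome (social choice) function: compose g with the
    strategy profile, f(theta) = g(s1(theta_1), ..., sI(theta_I))."""
    return g({i: s(i, th) for i, th in theta.items()})
```

For example, f({1: "H", 2: "L"}) returns (1, {1: 0.0, 2: 0.0}). The designer controls S and g; the realized mapping from Θ into X is what gets evaluated.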
Basic Adverse Selection Model
Series of 6 examples, starting with the most basic

Model from Ch 14

Set Up
• 1 Agent (or 'player'):
– Agent is the buyer of a good q, which can be purchased in variable quantity
– q is the quantity purchased
– t is the revenue transferred from buyer to seller
– θ is willingness to pay
∗ Doesn't necessarily have a linear price. Pay an amount t to the seller, and get q.
– Buyer's utility function: u(q, t, θ) = θv(q) − t
∗ Linear in θ. Note: quasilinear utility and multiplicative type.
∗ v(0) = 0, v′ > 0, v′′ < 0
• Principal
– Principal is the seller and producer of the good.
– Principal is the mechanism designer. Not an active player
∗ Makes irrevocable rules. Cannot change his mind about the mechanism mid-way
∗ [Announces (S, g). Buyer gets to choose s (then s becomes a function of θ, s(θ), then g(s(θ)) is the outcome. This is the buyer's welfare, and also determines the seller's welfare)]
– Unit cost is c > 0
– Seller does not know θ
– Seller's utility function: π = t − cq

Two-Type Case
• Set Up
– Two types: θL, θH
– Prob(θL) = β, Prob(θH) = 1 − β
• Key Assumptions:
– Seller can make a take-it-or-leave-it offer
– Will not be renegotiated ex-post
– Buyer's value from non-participation is known and independent of type
– Seller offers two packages: (qL, tL), (qH, tH)
∗ Seller's utility depends on what the buyer does. Doesn't know the buyer's type.
· I.e. offering a menu, from which the buyer chooses a particular (q, t) combination. These are 'take-it-or-leave-it' offers.
· No renegotiation
∗ Think of these as 'intentions': wants the low type to buy (qL, tL) and the high type to buy (qH, tH)
· But can't compel; must be voluntarily chosen
– (0, 0) must also be on the menu
∗ Buyer not required to buy anything. Seller must offer (0, 0)
• 2-type case
– [Then the finite case, then the continuous case; but it's the same model all the time]
• Constrained Maximization Problem:
– [Maximizing seller's average profit]
max β(tL − cqL) + (1 − β)(tH − cqH)
s.t.
1. θH v(qH) − tH ≥ θH v(qL) − tL
2. θH v(qH) − tH ≥ 0
3.
θL v(qL) − tL ≥ θL v(qH) − tH
4. θL v(qL) − tL ≥ 0
– Simplification:
∗ (2) is not binding at a solution: (2) is implied by (1), (4) and θH > θL.
∗ (3) is not binding at a solution: if it were binding, in 3 cases you either violate (1) or are not profit-maximizing
– Constraint intuition: look at intentions
∗ (1) the high type has to prefer what the high type is intended to get to what the low type is intended to get
∗ (2) the high type has to get at least what he'd get for not participating
· [θH Ex-Post Participation Constraint]
∗ (3) the low type has to prefer what the low type is intended to get to what the high type is intended to get
∗ (4) the low type has to prefer what the low type is intended to get to not participating
· [θL Ex-Post Participation Constraint]
• Simplified Constrained Maximization Problem:
max β(tL − cqL) + (1 − β)(tH − cqH)
s.t.
θH v(qH) − tH ≥ θH v(qL) − tL
θL v(qL) − tL ≥ 0
• Graph:
– Indifference curves. All other indifference curves are obtained through a linear shift (this is a result of quasilinearity)
– Point where indifference curves intersect: can see that the high type has the steeper slope, reflecting higher marginal willingness to pay.
∗ [With quasilinearity, this is true at every point]
• Examining the graph to understand the constraints:
– (4) binding pins the low-type indifference curve to go through (0, 0); (1) is about the relevant slopes
– (2) not binding since:
∗ The low-type indifference curve goes through (0, 0). So the low type is getting 0.
∗ Since the high-type indifference curve is steeper, θH must get something greater than 0
· Since it goes through the same point, and the south-east direction is increasing preference, ⟹ constraint (2) is not binding.
– (3) not binding since:
∗ If the high type prefers or is indifferent between (q∗H, t∗H) and (q∗L, t∗L), then:
∗ Can also see that the low type prefers what is intended for it, (q∗L, t∗L), to what is intended for the high type, (q∗H, t∗H), so (3) is slack
• This is the picture with the 2 slack constraints dropped. This is the problem we have to solve.
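This problem can be checked numerically under an assumed functional form v(q) = √q and made-up parameter values (θL = 1, θH = 2, β = 0.75, c = 0.5), none of which come from the notes. After substituting the two binding constraints, expected profit is additively separable in qL and qH, so two one-dimensional grid searches find the optimum:

```python
import math

# Assumed functional form and parameters (illustrative only).
thL, thH = 1.0, 2.0      # low and high willingness-to-pay types
beta, c = 0.75, 0.5      # Prob(thL) and the seller's unit cost
v = math.sqrt            # v(0)=0, v'>0, v''<0 as required

def profit(qL, qH):
    """Expected profit after substituting the two binding constraints:
    t_L = thL*v(qL)                      (low type's participation binds)
    t_H = thH*v(qH) - (thH - thL)*v(qL)  (high type's IC binds)."""
    tL = thL * v(qL)
    tH = thH * v(qH) - (thH - thL) * v(qL)
    return beta * (tL - c * qL) + (1 - beta) * (tH - c * qH)

# Profit is separable in qL and qH, so search each on a grid over [0, 8].
grid = [i / 1000 for i in range(0, 8001)]
qH_star = max(grid, key=lambda q: thH * v(q) - c * q)
qL_star = max(grid, key=lambda q: profit(q, qH_star))

# First-best benchmarks: th_i * v'(q) = c  =>  q = (th_i / (2c))^2 for sqrt.
qH_fb = (thH / (2 * c)) ** 2    # high type's efficient quantity
qL_fb = (thL / (2 * c)) ** 2    # low type's efficient quantity
```

With these numbers qH_star comes out at the efficient 4.0 while qL_star ≈ 0.44, well below qL_fb = 1.0: the low type is underserved so that tH can be raised, which is exactly the distortion described in the intuition that follows.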
• Intuition
– The worse a deal you cut for the low type, the more you can extract from the high type.
– So at the optimum, the low type is being underserved (holding q∗L below the efficient level of qL), since otherwise it would cost you some profit from the high type

Comparison with a Perfect-Observability Model
• Comparison:
– If we didn't have to worry about the incentive constraint, we would be at the point on each indifference curve where slope = c
∗ [Each type would be at the point where θi v′(qi) = c]
– But this wasn't incentive compatible above: (q∗L, t∗L) is on a better indifference curve for θH
∗ [See dotted indifference curve on the graph below]
– With perfect information, the seller would hold both types down to the participation level (indifference curves going through 0). But this is automatically not incentive compatible for θH
∗ The seller would like to know the types, and make separate take-it-or-leave-it offers. But since he can't, he has to respect the incentive constraint.
• Perfect Observability Model
– Intuition: with the θ's known, each type can be given a take-it-or-leave-it offer and held down to 0 even though they may prefer the offer taken by the other type.
– Mathematically: this amounts to dropping (1) [θH v(qH) − tH ≥ θH v(qL) − tL] and replacing it with (2) [θH v(qH) − tH ≥ 0]
– Maximization Problem:
max β(θL v(qL) − cqL) + (1 − β)(θH v(qH) − cqH)
∗ FOCs:
∂π/∂qH = θH v′(qH) − c = 0
∂π/∂qL = θL v′(qL) − c = 0
• Intuition
– The effect of incomplete information is to hold down the level of qL so that tH can be higher, raising the profitability of serving the higher types.
– This is the general lesson in this form of adverse selection model
∗ Lost profit on low types raises power over high types
∗ Do we see this in the real world?
– People have made a big deal about how this model produces inefficiency
• Solution takeaway
– Mechanics: looking at incentive constraints, dropping out constraints that you know are slack, looking at the remaining equalities, and this lets you substitute out for t.
Then maximization gives you the optimum.

Same Model with n types
• Overview
– Whole lot of incentive constraints, but the only ones that are binding are that each type is indifferent to imitating the choice made by the next-lowest.
∗ Don't want to go 2 types down, and never want to imitate a higher type.
– Only 1 participation constraint: the very lowest type has to be held to 0.
– Population Composition
∗ Very possible that some types will be served the same (especially if there aren't many of the higher of the two types)
• Optimization Problem
max Σ βi(ti − cqi)
s.t. n² incentive constraints
• Only n constraints bind (by the same argument as above)
• Recursive formulation: We must have
– For each i = 2, ..., n: θi v(qi) − ti = θi v(qi−1) − ti−1
– And θ1 v(q1) − t1 = 0
• Three Key Properties of the Model
1. The q's form a weakly monotone sequence
(a) θi v(qi) − ti ≥ θi v(qi−1) − ti−1
(b) θi−1 v(qi−1) − ti−1 ≥ θi−1 v(qi) − ti
– Add them: θi v(qi) − ti + θi−1 v(qi−1) − ti−1 ≥ θi v(qi−1) − ti−1 + θi−1 v(qi) − ti
– Rearrange: (θi − θi−1)(v(qi) − v(qi−1)) ≥ 0
– [Some can be the same in weakly monotone]
2. Any monotone sequence could be chosen by the seller
– This is going to be key for later analysis
– In looking for an optimum, the seller can limit himself to monotone sequences of q's
∗ Can choose any monotone sequence of q's, use the successive ICs to calculate the t's, and then compute profit
∗ This simplifies the search
(a) To each monotone sequence there corresponds a set of informational rents
(b) Each type is held to a level just equal to what it would get by stating one lower
(c) Therefore, it gets θi v(qi−1) − ti−1 ≥ 0
(d) But strictly it prefers this, in general, to stating two lower, or even lower
3. Informational rent can be expressed directly as a function of the q's
– Informational rent in this model is the realized utility of type i.
∗ Type i comes to this model and has the ability not to participate.
∗ Point here: informational rent can be expressed as function of the qs · Will be function of qs of all lower types (and independent of qs of higher types) (a) For type 1: θ1v (q1)− t1 = 0 (by design) [Rearrange for next step:⇒ t1 = θ1v (q1)] (b) For type 2: θ2v (q2)− t2 = θ2v (q1)− t1 = (θ2 − θ1) v (q1) [Rearrange for next step:⇒ −t2 = −θ2v (q2) + (θ2 − θ1) v (q1)] (c) For type 3: θ3v (q3)− t3 = θ3v (q2)− t2 = (θ3 − θ2) v (q2) + (θ2 − θ1) v (q1) (d) In general: θiv (qi)− ti = i−1∑ k=1 (θk+1 − θk) v (qk) i. This is a very useful formula Each type's rent depends on the q's of all lower types • Outcome of Problem  Seller gets some revenue, and types get some informational rent. 67  Every q will be underserved, except at the highest type (since 1− F (θ) = 0 for highest)  Just like in 2-type case • Population Composition issues  If you have a bunch of θs in the middle of the distribution that are rare, will have a bunch of corner positions ∗ Will have some kinks, [ θ − 1−F (θ) f(θ) ] v′ (q (θ)) = c is violated ∗ Intuition is that it is necessary to pool certain types, sacricing their ecient treatment within this interval of θ in order not to sacrice too much suprlus that would have gone to higher types [Actually solving this numerically is dicult, and is something that is called ironing and sweeping] • End of lecture intuition  Thing that makes this a workhorse model is that θ is 1-dimensional ∗ Therefore able to use control theory to solve, take comparative statics  There are many interesting problems that involve types that are not 1-dimensional ∗ Then very dicult to gure out which constraints are binding, and which are slack [Gap for continuous case handout] 9 Extensions of One-Agent Case and Dominant Strategy Im- plementation 9.1 Applications and Extensions of the One-Agent Case Wants to show how the model delivers dierent results when you tweak the basic assumptions. Sometimes we'll get eciency, sometimes ineciencies. 
Three Ingredients of the Basic Adverse Selection Model 1. Participation decision by the agent is taken after the agent learns his type (a) Instead, could have principal and agent commit to contract before agent learns his type 2. Principal's objective function depends on (q (θ) , t (θ)): (a) Depends on the principal's expected prot (b) Not related to the agent's utility u or to the agent's realized type θ 3. The Principal and the agent are both risk neutral, and the agent's preferences over q have a zero income elasticity 70 (a) These are special functional form assumptions, on which the results rely [Why were these particular assumptions chosen? The basic model is famous not because it has the simplest assumptions, but rather because they deliver some ineciency in the outcome. This is supposed to represent the social cost of the privacy of the agent's information. Economics was trying to explain ineciency. Economics is trying to explain observed ineciencies that seem to be easily avoidable, and whose existance poses a paradox.] 
• Extensions we look at arise because one or more of these aspects (above) of the model have changed
 – We start by reversing basic assumptions, which leads to efficiency in the choice of q after all
 – Then we look at combinations of assumptions that lead to various forms of inefficiency, and evaluate them
• We will see that although the variation we looked at last lecture produced inefficiency, many reasonable assumptions in this model lead to efficiency
• Variations on the theme of the basic one-agent model
 – Timing of participation constraint
  ∗ Ex-ante or ex-post [base case is ex-post]
 – Objective function of the principal
  ∗ Revenue maximization [base case] or social welfare maximization
 – Risk aversion and income effects
  ∗ Risk aversion of both parties, zero or positive income effects in u(q, t)
  ∗ Base case: risk neutrality for both parties and zero income effect on q

Binding constraints in the models:

                          base   1    2    3    4
 ex-ante participation           x         x    x
 ex-post participation     x          x
 principal revenue max     x     x              x
 principal welfare max                x    x
 zero income effect        x     x    x
 positive income effect                    x    x

Models and efficiency:

 0  Base Model                           N
 1  Ex-Ante Participation Constraint     Y
 2  Socially Minded Principal OF         Y
 3  Optimal Income Taxation              N
 4  Wage Employment Contracts            N
1) θHv(qH) − tH ≥ θHv(qL) − tL [IC for θH]
2) θHv(qH) − tH ≥ 0 [IR for θH]
3) θLv(qL) − tL ≥ θLv(qH) − tH [IC for θL]
4) θLv(qL) − tL ≥ 0 [IR for θL]
• Interpretations of the Base Case:
 – Price discrimination and non-linear pricing as a function of the quantity purchased
 – Regulation of a monopolist with unknown costs
  ∗ q is the regulated activity and θ is a technology parameter: θv(q) is the cost of doing q
  ∗ t is the income granted (or permitted) as a function of the observed activity

9.1.1 Variation 1: Ex-Ante Participation Constraint Leads to Full Efficiency
• Intuition:
 – The earlier you can make a commitment in a relationship, the more power you have to make a good contract with good decisions (because you don't have to worry about someone defecting and taking advantage ex-post)
• Ex-Ante Participation Constraint
 – Binding participation commitment is possible before the agent learns θ
 – The IR constraints become a single constraint: E[θv(q(θ)) − t(θ)] = α
  ∗ Instead of separate binding IR constraints for each θ [even though only the one for θL was binding]
   · So still a model with only 1 IR constraint, plus 2 incentive constraints (one of which is slack)
  ∗ The constant can vary. So we're really tracing out the constrained Pareto frontier, and the expected utilities that this can generate. So think of α as a parameter; its actual value doesn't really matter.
 – This is an example of how changing the assumptions does not make the problem more complicated
  ∗ No inefficiency anymore
  ∗ Think of this as a perfectly competitive market with a lump-sum entry fee
agents are choosing q and t along a certain contrac- tual locus  Where E [t (θ)− q (θ)] = 0 is the government's break even constraint • Ex-post ineciency  Due to too little labor supply by most types. Low marginal income taxation of the highest ability types ∗ High end of the θ spectrum, tax rate goes to 0 · Similarity to base case · [if you made this into a control problem, co-state varaible, lagrange multiplier, goes to 0 at high values of θ] ∗ (contrary to most income tax systems) 9.1.4 Wage Employment Contracts (Green-Kahn) Keep the participation constraint the the rm's revenue maximization problem. What this paper is about is that we don't have a quasilinear utility function for the principal. Historical context: people were trying to explain stagation. Result of this model is to produce employment higher than optimum (all the wrong answers), in contrast to Hart-Grossman, etc. But thinks that the assumptions here are better. Role of principal and agent are reversed: rm is the agent. rm can make the choice of q and t, according to some schedule that has been agreed upon. The rm knows the value of output, and controls the amount of employment. • q is the level of employment (hours) and θ is a parameter of the product market (market condi- tions, price, ...) • t is the payment made to labor - possibly non-linear in hours • agent is the rm that controls q and earns prots θq (θ′)− t (θ′) 75  rm's objective is revenue maximization • labor exhibits normal demand for leisure u (q (θ) , t (θ))  labor commits to participation ex-ante • Explicitly maxE [θv (q (θ))− t (θ)] s.t. θ = arg maxθ′ θv (q (θ′)) E [q (θ′) , t (θ′)] = α • Two eciency conditions 1. Ecient choice of q: θv′ (q (θ)) = λuq (q, t) (a) The level of employment should be such that you have eciency in the choice of labor q 2. Eciency of risk bearing with respect to t: ut (q, t) is constant (a) Should be the case that monetary transfers are made such that there is sucient risk- bearing between rm and workers. 
i. Firm is risk neutral in transfer, workers are presumably risk-averse • Graph  Two eciency conditions shown by two solid curves: 1. θv′ (q (θ)) = λuq (q, t) [constant] (a) If rm maximizes along a worker's indierence curve, will always choose an ecient point (b) But this might impose some monetary risk (non-constancy of the MU of money) 2. ut (q, t) is constant  But can't do both: two dierent curves ∗ Why dierent? Because worker's utility displays some non-zero income eect · Hall assumed this away (zero income elasticity of leisure), then can get the First Best  Optimal contract is between the two curves (on dotted line), reecting the trade-o ∗ Optimizing on dotted line means rms choice of labor utilization is above the ex-post First Best · (Overemployment) • Challenge  If you have a solution to the constrained maximization (q (θ) , t (θ)) that is incentive com- patible, it cannot be simulataneously ecient in both these senses ∗ In general, it will represent a compromise between them: some ineciency and some imperfection in risk bearing What does this problem tell us about ineciency? 76 • Does it make economic sense?  Need to look at the Lagrange multiplier of the incentive compatibility constraints dq dθ = 0 • The results are that workers' utility is higher in low θ states and that q is too high relative to the the ex-post eciency level  because θv′ (q (θ)) < −uq(q(θ),t(θ))ut(q(θ),t(θ)) in all states • This is contrary to what we think of as the results of macroeconomic uctuations where worker's welfare is procyclical and unemployment rather than overemployment occurs 77  No need for complicated sequential mechanisms or large strategy spaces. Just restrict to tell-the-truth type cases] ∗ Of course, in real world, mechanism may have lying. But designer knows those lies, and 1:1 mapping to mechanism where true types revealed. 
• Figure 9.5  People have a dominant strategy s that gets mapped no matter what gets played, this gets mapped into g (s), which is identically equal to f [g (s (θ)) = f (θ)]  Upper diagram: nothing here says anything about truthful play: but this is where above theorem helps (any mechanism that can be implemented by dominant strategies can be implemented by a mechanism where people reveal their types truthfully) 9.2.2 Two Basic Examples: Public Project and Allocation of Single Unit of Private Good Public Project MWG Example 23.C.1 • Set up  k ∈ K the set of possible projects ∗ Special cases: K = {0, 1} (yes/no decision), K = R+ (level of public goods provision) ∗ K includes nancing, v includes i's share of nancing  vi (k (θ) , θi) valuation of the public project with private values ∗ Quasilinear utility in the transfers: ui (k, ti, θi) = vi (k, θi) + ti ∗ ti is the transfer (+ or -) beyond any payments needed for nancing the good  Eciency ex post means k = arg maxk′ ∑ i vi (k′, θi) 80 ∗ Choose it such that it maximizes the sum of the willingesses to pay · So if on average, people like this good (and risk neutral, with quasilinear utility), then should build the good. ∗ Eciency ex post means making right social decision based on the sign of the willing- nesses to pay ∗ Each person's willingness to pay depends on the decision that is taken and on the their own type ∗ Not including anything about transfers; agnostic with regards to distribution  Full eciency is eciency ex post and ∑ ti = 0 ∗ Means ecient decision and no money wasted Allocation of a Single Unit of Private Good MWG Example 23.C.2 • Set up  k = (y1, . . . 
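The ex-post efficiency criterion k* = arg max_k Σᵢ vᵢ(k, θᵢ) can be made concrete with a tiny sketch. The yes/no valuations vᵢ(1, θᵢ) = θᵢ (with θᵢ the net willingness to pay, inclusive of i's financing share) and the numbers are my own illustrative assumptions:

```python
# Yes/no public project: K = {0, 1}; v_i(0, theta_i) = 0, v_i(1, theta_i) = theta_i,
# where theta_i is i's net willingness to pay inclusive of i's financing share.
def efficient_k(thetas):
    # Ex-post efficiency: pick k maximizing the sum of willingnesses to pay
    return max((0, 1), key=lambda k: sum(k * t for t in thetas))

thetas = [3.0, -1.0, -1.5]            # made-up types, not from the notes
k_star = efficient_k(thetas)          # total WTP = 0.5 > 0, so the project is built
assert k_star == 1
assert efficient_k([1.0, -2.0]) == 0  # negative total WTP: don't build
```

Full efficiency would additionally require that whatever transfers accompany this decision sum to zero.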
, yI), yi ∈ {0, 1}, yi = 1 if i gets the good  vi (k, θi) = θiyi ∗ Means winner gets θiyi and everyone else gets 0  ui (k, ti, θi) = vi (k, θi) + ti ∗ Eveyone gets their utility from getting or not getting the object, plus the monetary transfer • With this interpretation, allocastion of a single unit is a special case of a public project • Eciency ex post means yi = 1 if an only if θi = max {θj}  Person who values the object most gets the good 9.2.3 Gibbard-Satterthwaite Theorem [Most famous theorem in this literature] • A mechanism Γ implements an ecient outcome function f : Θ→ X if at every θ ∈ Θ, f (θ) is Pareto undominated for the agents with utilities ui (x, θ)  If you arrange any f , which for each θ, comes to some f (θ) which is Pareto ecient ex-post (at f (θ)), can't implement in dominant strategies (without putting more restrictions on the system).  Is there such a mechanism that implements an ecient outcome in dominant strategies? ∗ Answer: not in general  Verbally: ∗ If you are implementing something that is Pareto ecient at every θ, and you want to do a mechanism that does it in dominant strategies, is this possible? · In general no (unless you put more restrictions on the system, which we'll see below) • Gibbard-Satterthwaite Theorem: (MWG Prop 23.C.3) 81  If X has at least three elements, and all preferences over X occur at some θ ∈ Θ, and all elements in X are Pareto ecient at some θ ∈ Θ, then the only mechanisms that implement ecient outcome functions in dominant strategies are dictatorial mechanisms. ∗ [Proof mirrors Arrow's theorem] However, in the two models we have just looked at, we know of mechanisms that implement the ecient outcome functions in dominant strategies: 9.2.4 VCG Mechanism VCG is for people's names. By far the most famous mechanism • Look at for Public Project model  Everyone announces a type θ' (maybe truthful, maybe not truthful). The government chooses the seemingly best outcome (i.e. 
taking θ′ as a truthful statement, the government chooses the value of k that maximizes the sum of the stated willingnesses to pay). Then the government makes transfers among the people, again taking θ′ as they are. Transfers are the dierence between everybody else's valuation at the k which would be chosen inclusive of everyone's statement, and the valuation of everyone else that includes the valuations of everyone but the person in question. ∗ If person i's statement changes the social decision, then • VCG Mechanism k (θ′) = arg max k ∑ i v (k, θ′i) ti (θ′) = ∑ j 6=i vj ( k∗ (θ′) , θ′j ) − ∑ j 6=i vj ( k∗−i ( θ∗−i ) , θ′j )  Since k∗−i is by denition optimal for all j 6= i, if k∗ 6= k∗−i then we must have ∑ j 6=i vj ( k∗ (θ′) , θ′j ) <∑ j 6=i vj ( k∗−i ( θ∗−i ) , θ′j ) ∗ Therefore ti ≤ 0 ∗ Intuitively: if i changes the social decision, then this is by denition reducing the utility summed over the other people. Hence i must pay whatever social cost his inuence on the decision (k∗−i to k ∗) incurred. · i has changed the social decision and has to pay for it. He has to pay the amount of the cost he imposes on all the other people. [Presumably, he cares enough about the outcome that he's willing to pay this cost] ∗ Reason this mechanism is famous is that it is very intuitive. It makes everyone think that they have the chance of being pivotal.  Maximizing: vi plus the transfer. [ui (k, ti, θi) = vi (k, θi) + ti] ∗ I'll be willing to be pivotal if the amount I gain from being pivotal is more than the amount that this thing is going to tax me · [and if it's not, then I should accept everyone else's judgement] ∗ This is a dominant strategy: to accept the fact that I might be pivotal, and pay some- thing less than my WTP. 
82 ∗ Compare the value when i sets θ̃i = θi to the value when i sets θ̃i = θ′i: vi (k∗ (θi, θ−i) , θi) + ti (θi, θ−i) = vi (k∗ (θ′i, θ−i) , θi) + ti (θ′i, θ−i) ∗ Use the indicated relationship for t: [Rearranging above line]: vi (k∗ (θi, θ−i) , θi)− vi (k∗ (θ′i, θ−i) , θi) = ti (θ′i, θ−i)− ti (θi, θ−i) · Substitute: vi (k∗ (θi, θ−i) , θi)− vi (k∗ (θ′i, θ−i) , θi) = − ∑ j 6=i vj (k∗ (θ) , θj) + ∑ j 6=i vj (k∗ (θ′) , θj) ∑ j 6=i vj (k∗ (θ) , θj)− ∑ j 6=i vj (k∗ (θ′) , θj) = 0 ∗ Therefore, i has the incentive to announce the true value of θi, which will then result in the implementation of the ecient outcome function H Function: • MWG Propositions 23.C.4 & 5 express this as: ti (θ) = ∑ j 6=i vj (k∗ (θ) , θj) + hi (θ−i)  where hi (·) is an arbitrary function of its argument θ−i • The relationship: ti (θ)− ti (θ′) = ∑ j 6=i vj (k∗ (θ) , θj)− ∑ j 6=i vj (k∗ (θ′) , θj)  comes from subtracting and using the fact that θi = θ−i so the hi term cancels • If ti (θ) is of this form then the mechanism will implement the ecient outcome in dominant strategies 9.2.7 Green-Laont Theorem • If the domain of Θ is suciently rich any mechanism cannot be written as k∗ (θ) = arg max ∑ i vi (k, θi) ti (θ) = ∑ j 6=i vj (k∗ (θ) , θj) + hi (θ−i) will not implement the ecient decision k∗ in dominant strategies • The Green-Laont theorem greatly simplies the serach for ecient dominant strategy mecha- nisms 85  For example, you can look for h functions that guarantee participation ex post, or for h functions that minimize aggregate payments • So we're looking at the class of mechanisms that can produce truthful dominant strategies. What can we optimize over? All we can do is play with 2nd term in t function. Can do anything you want that is not a function of θi. • Choose k∗ (θ) eciently. The transfer t (θ) is of the form given, but hi is an arbitrary function.  Anything not of this form will not implement the ecient decision in dominant strategies. 
∗ If you're searching for good mechanisms, this theorem helps you 9.2.8 h-Functions from Dominant Strategies • VCG:  Set h (θ−i) = − ∑ j 6=i vj ( k∗−i (θ−i) , θj ) ∗ k∗−i is the maximizer for−i; what they would have done if i were indierent to everything  Then ti (θ) = ∑ j 6=i vj (k∗ (θ) , θj)− ∑ j 6=i vj ( k∗−i (θ−i) , θj ) ∗ This is the cost imposed by others by i's non-indierence to the outcome  Also called the Pivotal or Clarke mechanism (in MWG) • Second-Price Auction Mechanism  hi (θ−i) = − ∑ j 6=i vj ( k∗−i (θ−i) , θj ) = maxj 6=i θj ∗ Utility of the 2nd guy who would have won (if i wins) ∗ Otherwise, will cancel out rst term in ti (θ) to get ti = 0 (if i doesn't win)  Then ti (θ) = ∑ j 6=i vj (k∗ (θ) , θj)−maxj 6=i θj = −maxj 6=i θj if i is the highest evaluator ∗ ∑ j 6=i vj (k∗ (θ) , θj) = 0 if i is winner since vj 6=i (k∗ (θ) , θj) = 0 • Taxes  Suppose any taxes are not valued in social welfare calculations?  We want ∑ i ti (θ) = 0. ∗ This way you'd get ecient system and have a closed system (no leakage of revenue).  Is there a set of hi (θ−i) function that make this true as an identity? ∗ Answer: no, except for very special cases (MWG Prop 23.C.6) · (One of these cases is when there are quadratic preferences, vi (k, θi) = θik − 1 2k 2 and I > 2) 86 9.2.9 Cavallo's Mechanism (2007) Motivation: Since we cannot have ∑ i ti (θ) = 0, perhaps we can keep the total transfers small. • Set up  Indivisible private good case  n > 2, bids are bi 1. High bidder pays 2nd higest price (like 2nd price auction) 2. Highest and 2nd highest bidder receive 1 n of the 3rd highest bid 3. All other bidders receive 1 n of the 2nd highest bid (a) (2) and (3) are amendments to the 2nd price auction • Check that Cavallo's mechanism is dominant strategy mechanism:  What are the hi (θ−i)? Let cik be the order statistics of b−i. ∗ Ordering from high to low are the order statistics. So c1 is the highest bid, c2 is the 2nd highest bid...  
c^i_1 = max_{j≠i} θ_j is the highest of the others' bids, c^i_2 the 2nd highest, etc.
 – Cavallo's mechanism is the dominant strategy mechanism t(·) given by h_i(θ_{−i}) = −c^i_1 + (1/n) c^i_2
  ∗ The first term −c^i_1 cancels out Σ_{j≠i} v_j(k*(θ), θ_j) if i didn't win
   · [Or makes i pay the 2nd price in the case of winning]
  ∗ [Again: t_i(θ) = Σ_{j≠i} v_j(k*(θ), θ_j) + h_i(θ_{−i})]
• Revenue Collected by Cavallo's Mechanism:
 – Revenue thrown away: −Σ_i t_i = b_2 − ((n−2)/n) b_2 − (2/n) b_3 = (2/n)(b_2 − b_3)
 – Revenue removed from the agents by the 2nd price auction mechanism: −Σ_i t_i = b_2
  ∗ In any population Cavallo's mechanism wastes a lot less revenue (a fraction 2/n of the second price, at worst) compared to the 2nd price auction
• Non-monotonic
 – One thing that is a little odd is that utility outcomes are not in the same order as the rankings of the θ's
 – The second-highest evaluator gets a lower payoff than the third-highest evaluator
 – Example: θ = (5, 3, 2, 1), ex-post utilities = (2½, ½, ¾, ¾)
 – Participation is not a problem, however, as transfers are always positive, except for the winner
 – Ex post participation is guaranteed, and therefore ex ante participation is as well
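The worked example θ = (5, 3, 2, 1) and the revenue comparison can be replicated directly. A sketch, assuming truthful bids (which the dominant-strategy property justifies):

```python
def cavallo(bids):
    """Cavallo's mechanism for one indivisible good: returns (winner index, transfers).
    Transfers are payments TO bidders (negative means the bidder pays)."""
    n = len(bids)
    order = sorted(range(n), key=lambda i: bids[i], reverse=True)
    b = sorted(bids, reverse=True)       # b[0] >= b[1] >= b[2] >= ...
    t = [0.0] * n
    t[order[0]] = -b[1] + b[2] / n       # winner pays the 2nd price, receives b3/n
    t[order[1]] = b[2] / n               # runner-up receives 1/n of the 3rd-highest bid
    for i in order[2:]:
        t[i] = b[1] / n                  # everyone else receives 1/n of the 2nd-highest bid
    return order[0], t

thetas = [5.0, 3.0, 2.0, 1.0]
winner, t = cavallo(thetas)
utilities = [thetas[i] * (i == winner) + t[i] for i in range(len(thetas))]

# Non-monotonicity: the 2nd-highest evaluator ends up below the 3rd and 4th
assert utilities == [2.5, 0.5, 0.75, 0.75]
# Revenue destroyed is (2/n)(b2 - b3) = 0.5 here, versus b2 = 3 under the 2nd-price auction
assert abs(-sum(t) - 0.5) < 1e-9
```

The assertions reproduce the ex-post utilities (2½, ½, ¾, ¾) stated above and the small wasted-revenue figure.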
Notice that there may be other equilibria that do not induce f ∗ Implementation where all equilibria induce f is a much stronger concept and corre- spondingly harder to achieve ∗ Swallowing and saying its okay if there are multiple equilibria • Revelation Principal holds for Bayesian Implementation  MWG Proposition 23.D.1: Any f that can be implemented in Bayesian Nash Equilibrium can be implemented by a mechanism with strategy spaces Si = Θi and g (s (θ)) = f (θ) ∗ Proof is same as in case of dominant strategies 10.2 EEM (Expected Externality Mechanism) 10.2.1 EEM Set Up Bayesian Implementation of Exact Budget Balance and Ex-Post Eciency: The EEM ti (θ) = Eθ̃−i ∑ j 6=i vj ( k∗ ( θi, θ̃−i ) , θ̃j )+ hi (θ−i) • Terms  Green says: be very careful to understand which variables are their stated values, and which are integrated out  The term in brackets is a function only of θi. The other θ−i are integrated out. ∗ It is eect of i's statement on the expected welfare of the other people, based on prior probabilities of their preference parameters, not their actual welfare.  The additively seperable term hi (θ−i), which is a function of the actual statements by others, can have no eect on i's incentives. • The transfers ti (θ) are computed to be what i should calculate is the eect of his own statement on the collective willingness to pay of all other people, under the assumption that they are going to play truthfully.  This is what causes i to internalize the eect of his play, as if he were generating an exter- nality on others' welfares • To implement these ti, the mechanism designer needs to believe that the distribution of θ, φ (θ), is the same as the players' common belief of θ, and uses that in the calculation above 90 10.2.2 Main Results on the EEM: Truthfullness and 0 Net Transfers 1. Truthful Responses to the EEM transfer function form a Bayesian Equilibrium 2. 
We can choose the h_i(θ_{−i}) so that Σ t_i(θ) = 0 for all θ, when all players play their truthful strategies

Truthful Responses
• Truthful responses form a Bayesian Equilibrium
 – First line: total expected utility (expected valuation from the game, plus from the transfer):

E_{θ̃_{−i}}[v_i(k*(θ_i, θ̃_{−i}), θ_i) + t_i(θ_i, θ̃_{−i})]
= E_{θ̃_{−i}}[v_i(k*(θ_i, θ̃_{−i}), θ_i)] + E_{θ̃_{−i}}[Σ_{j≠i} v_j(k*(θ_i, θ̃_{−i}), θ̃_j)] + h_i(θ_{−i})
= E_{θ̃_{−i}}[Σ_j v_j(k*(θ_i, θ̃_{−i}), θ̃_j)] + h_i(θ_{−i})
 ∗ using the fact that k* is the maximizing choice of k at each value of its argument.
 ∗ This also means that putting in any θ̂_i ≠ θ_i can only reduce the bracketed term's value, as we see right here:
≥ E_{θ̃_{−i}}[Σ_j v_j(k*(θ̂_i, θ̃_{−i}), θ̃_j)] + h_i(θ_{−i})
= E_{θ̃_{−i}}[v_i(k*(θ̂_i, θ̃_{−i}), θ_i) + t_i(θ̂_i, θ̃_{−i})]

• Use of the assumption of statistical independence
 – The use of the assumption of statistical independence is that when the mechanism computes the EEM transfer t_i(θ) = E_{θ̃_{−i}}[Σ_{j≠i} v_j(k*(θ_i, θ̃_{−i}), θ̃_j)] + h_i(θ_{−i}), it uses the same distribution for θ̃_{−i} that the individual is using in his own optimization, because the distribution of θ̃_{−i} conditional on θ_i is the same for every realization of θ_i.

0 Net Transfers
• Define ξ_i(θ_i) = E_{θ̃_{−i}}[Σ_{j≠i} v_j(k*(θ_i, θ̃_{−i}), θ̃_j)]
 – (here i is used as a placeholder... given the presentation it would almost be easier if it were denoted by some other letter)
 – This expression can be computed for each i and each statement θ_i that i might make
 – This is the expected externality that i's statement creates for the others
• Set h_i(θ_{−i}) = −(1/(I−1)) Σ_{j≠i} ξ_j(θ_j)
• Check:
Σ_i t_i(θ) = Σ_i ξ_i(θ_i) + Σ_i h_i(θ_{−i}) = Σ_i ξ_i(θ_i) − Σ_i (1/(I−1)) Σ_{j≠i} ξ_j(θ_j) = 0
Hence in the EEM:
t_i(θ) = E_{θ̃_{−i}}[Σ_{j≠i} v_j(k*(θ_i, θ̃_{−i}), θ̃_j)] − (1/(I−1)) Σ_{j≠i} ξ_j(θ_j)
• Transfer: You get paid what you expect the externality (to all others) of your statement to be. You pay out the average value of what the other agents believe their externality to all other players to be.
10.2.3 EEM in the One-Buyer, One-Seller Case
• 1 Buyer, 1 Seller
 – Frequently used example because the loss due to Σ t_i ≠ 0 can be particularly large in small-numbers cases like this one, where the relative gains from trade can be small
 – i = 1 is the buyer, i = 2 the seller
 – θ_1, θ_2 independently uniform on [0, 1]
 – k = 1: the buyer gets the good; k = 0: the seller keeps the good
 – v_1(1, θ_1) = θ_1, v_2(0, θ_2) = θ_2, otherwise v = 0
 – Efficiency: k* = 1 if and only if θ_1 > θ_2
• Compute t_1(θ_1, θ_2) using the EEM:
t_1(θ_1, θ_2) = E_{θ̃_2}[v_2(k*(θ_1, θ̃_2), θ̃_2)] + h_1(θ_2)
 – Recall h_i(θ_{−i}) = −(1/(I−1)) Σ_{j≠i} ξ_j(θ_j), with ξ_i(θ_i) = E_{θ̃_{−i}}[Σ_{j≠i} v_j(k*(θ_i, θ̃_{−i}), θ̃_j)]
 – With I = 2 this gives h_1(θ_2) = −ξ_2(θ_2) = −E_{θ̃_1}[v_1(k*(θ̃_1, θ_2), θ̃_1)]
  ∗ [When writing out the h_i(θ_{−i}), be careful with the subscripts on ξ_i(θ_i), since they essentially reverse; note that the first argument of k* is always the buyer's type]
So t_1(θ_1, θ_2) = E_{θ̃_2}[v_2(k*(θ_1, θ̃_2), θ̃_2)] − E_{θ̃_1}[v_1(k*(θ̃_1, θ_2), θ̃_1)]
 – Take expectations (the seller keeps the good, enjoying θ̃_2, when θ̃_2 > θ_1; the buyer gets the good, enjoying θ̃_1, when θ̃_1 > θ_2):
t_1(θ_1, θ_2) = ∫_{θ_1}^1 θ̃_2 dθ̃_2 − ∫_{θ_2}^1 θ̃_1 dθ̃_1 = ½(1 − θ_1²) − ½(1 − θ_2²) = ½(θ_2² − θ_1²)

10.2.6 Interim and Ex-Ante Participation in the EEM Buyer-Seller Example
• Ex-Ante Participation
 – Definition: Ex-ante participation is evaluated by the expected utility from playing before you know your type, but given that you know your role (whether you are a buyer or a seller).
• Interim Participation
 – Definition: Interim participation is evaluated by the expected utility when you know your role and your type, but not the type of the other player(s).
• These are the same in the one-agent model
• With multiple informed players, they are different
• A model that does not generate gains at the ex-ante stage cannot get off the ground
• Interim Participation
 – Knowing only your own value, would you choose to participate?
  ∗ This is the stage that is not relevant in the one-player case
1.
Buyer's Interim Evaluation
 (a) Compute by looking at ex-post outcomes and integrating over θ_2:
 Eu_1 = Pr(win)·gain(win) + Pr(lose)·gain(lose)
 Eu_1 = ∫_0^{θ_1} (θ_1 + ½(θ_2² − θ_1²)) dθ_2 + ∫_{θ_1}^1 (0 + ½(θ_2² − θ_1²)) dθ_2
 = ∫_0^{θ_1} θ_1 dθ_2 + ∫_0^1 ½(θ_2² − θ_1²) dθ_2
 = θ_1² + 1/6 − ½θ_1²
 = ½θ_1² + 1/6
 (b) Eu_1 = ½θ_1² + 1/6, which is always positive
 (c) The buyer's non-participation option is 0, so all buyers would stay in at the interim stage
2. Seller's Interim Evaluation
 (a) Eu_2 = ½θ_2² + 1/6, while the seller's non-participation option is to keep the good and get θ_2
 (b) Assume that a lump-sum tax of 1/6 is levied on the buyers, leaving all buyers still willing to participate
  i. A transfer of 1/6 to the sellers would still not induce participation by sellers for whom θ_2 > ½θ_2² + 1/3
 (c) Thus, even with the maximal unconditional transfer that would keep all buyers participating, sellers with valuation θ_2 > 1 − √(1/3) would quit at the interim stage

10.3 Participation and Welfare in General Bayesian Mechanisms
Motivation:
• The example above shows what the EEM would do
• But the EEM is only one possible Bayesian mechanism. How do we know that there are not other Bayesian mechanisms that induce more participation?
 – We develop a theory of what any Bayesian mechanism must do

11 Welfare in Bayesian Mechanisms, Revenue Equivalence, Further Topics

11.1 Participation and Welfare in General Bayesian Mechanisms
• Looking at general Bayesian mechanisms
 – Though we will restrict to the multiplicative quasilinear case
Motivation: In the example from last lecture, we looked at what the EEM would do: which interim utilities were achieved and which participation constraints were binding. But the EEM is only one possible Bayesian mechanism. How do we know that there are not other mechanisms that induce more participation? We develop a theory of what any Bayesian mechanism must do.
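Before developing the general theory, the buyer-seller numbers above can be verified numerically. A minimal sketch (the grid size, seed, and sample sizes are arbitrary choices of mine): it checks the transfer formula t1 = ½(θ2² − θ1²) against the expected-externality definition, the interim utilities ½θ² + 1/6 for both roles, and the seller cutoff 1 − √(1/3):

```python
import random

random.seed(1)              # arbitrary seed, for reproducibility
DRAWS = 200_000

def t1(th1, th2):
    """EEM transfer to the buyer: 0.5*(th2^2 - th1^2); the seller gets t2 = -t1."""
    return 0.5 * (th2**2 - th1**2)

def t1_from_definition(th1, th2, n=20_000):
    """Recompute t1 from the expected-externality definition by midpoint-rule integration."""
    grid = [(i + 0.5) / n for i in range(n)]
    exp_seller_value = sum(s for s in grid if s > th1) / n   # seller keeps good iff s > th1
    h1 = -sum(b for b in grid if b > th2) / n                # minus expected buyer value
    return exp_seller_value + h1

def interim(theta, role):
    """Monte Carlo interim expected utility of a type-theta buyer ('b') or seller ('s')."""
    total = 0.0
    for _ in range(DRAWS):
        other = random.random()
        if role == 'b':   # buyer gets the good iff theta > seller's type
            total += (theta if theta > other else 0.0) + t1(theta, other)
        else:             # seller keeps the good iff theta > buyer's type
            total += (theta if theta > other else 0.0) - t1(other, theta)
    return total / DRAWS

# Transfer formula matches its expected-externality definition
for th1, th2 in [(0.3, 0.7), (0.9, 0.2)]:
    assert abs(t1(th1, th2) - t1_from_definition(th1, th2)) < 1e-3

# Interim utilities are 0.5*theta^2 + 1/6 for both roles
for th in (0.2, 0.5, 0.8):
    assert abs(interim(th, 'b') - (0.5 * th**2 + 1/6)) < 0.01
    assert abs(interim(th, 's') - (0.5 * th**2 + 1/6)) < 0.01

# With the extra transfer of 1/6, a seller quits iff theta > 0.5*theta^2 + 1/3,
# i.e. iff theta exceeds 1 - sqrt(1/3)
cutoff = 1 - (1/3) ** 0.5
assert abs(0.5 * cutoff**2 + 1/3 - cutoff) < 1e-12
assert 0.5 * 0.6**2 + 1/3 < 0.6     # a seller with theta = 0.6 > cutoff would quit
```

Note that the simulation uses budget balance (t2 = −t1, since I = 2) rather than recomputing the seller's transfer separately.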
11.1.1 Notation and Set Up
• Special functional form for utilities: quasilinear, multiplicative in the type:
u_i(x, θ_i) = θ_i v_i(k) + t_i
• Definitions
 – Expected transfer: t̄_i(θ̂_i) = E_{θ_{−i}}[t_i(θ̂_i, θ_{−i})]
 – Expected benefit: v̄_i(θ̂_i) = E_{θ_{−i}}[v_i(k(θ̂_i, θ_{−i}))]
  ∗ Note that v̄ does not include the multiplicative type term
  ∗ In the argument θ_{−i} we assume the other players are playing honestly
  ∗ Example: in an auction, k = 1 for the winner and k = 0 otherwise. If stating θ̂_i gives i a 10% chance of winning, then v̄_i(θ̂_i) = 0.1.
• Objective function
 – In a Bayesian mechanism the agent's objective function is: θ_i v̄_i(θ̂_i) + t̄_i(θ̂_i)
 – Realized utilities at the truth-telling equilibrium are: U_i(θ_i) = θ_i v̄_i(θ_i) + t̄_i(θ_i)

11.1.2 Key MWG Proposition on Bayesian Implementation
• MWG Proposition 23.D.2:
 – For environments where u_i(x, θ_i) = θ_i v_i(k) + t_i, an outcome function f(θ) = (k(θ), t_1(θ), ..., t_I(θ)) is implementable in Bayesian-Nash Equilibrium if and only if:
  1. v̄_i(·) is non-decreasing
  2. U_i(θ_i) = U_i(θ_min) + ∫_{θ_min}^{θ_i} v̄_i(s) ds
Notice the analogy to the one-agent case: v̄_i plays the role of q.
• This proposition, although conceptually simple, is probably the single most important fact about mechanisms used in applications in economics today.
• Proof
 1. Take any two different θ_i, θ̂_i, say with θ_i > θ̂_i.
 2. Bayesian incentive compatibility requires:
  (a) U_i(θ_i) ≥ θ_i v̄_i(θ̂_i) + t̄_i(θ̂_i)
  (b) U_i(θ̂_i) ≥ θ̂_i v̄_i(θ_i) + t̄_i(θ_i)
 3. Using the definition of t̄_i, from U_i(θ_i) = θ_i v̄_i(θ_i) + t̄_i(θ_i), we have:
  (a) U_i(θ_i) ≥ U_i(θ̂_i) + (θ_i − θ̂_i) v̄_i(θ̂_i)
  (b) U_i(θ̂_i) ≥ U_i(θ_i) + (θ̂_i − θ_i) v̄_i(θ_i)
 4. Combining these expressions:
  v̄_i(θ̂_i) ≤ (U_i(θ_i) − U_i(θ̂_i)) / (θ_i − θ̂_i) ≤ v̄_i(θ_i)
 5. Taking limits:
  (a) The middle term converges to the definition of dU_i/dθ_i, and the outer terms converge to each other:
  dU_i/dθ_i = v̄_i(θ_i)