Local Network Effects: A Model of Heterogeneous Adoption Complementarities (Lecture notes, Business)

Economic Theory, Network Economics, Game Theory

A model of local network effects in which agents value adoption by a specific subset of other agents in their neighborhood, and have incomplete information about adoption complementarities between all other agents. The document also covers the characterization of the generating function describing the structure of networks of adopting agents, and the empirical implications of this characterization.

What you will learn

  • What are the empirical implications of the characterization of the generating function in the model?
  • What is the model of local network effects presented in the document?
  • What is the role of incomplete information in the model?
  • How does the model differ from traditional models of network effects?
  • How do agents in the model value adoption by other agents?


Partial preview of the text

Local network effects and complex network structure

Arun Sundararajan*
New York University, Leonard N. Stern School of Business

First Draft: November 2004. This Draft: September 2007.

Abstract

This paper presents a model of local network effects in which agents connected in a social network each value the adoption of a product by a heterogeneous subset of other agents in their neighborhood, and have incomplete information about the structure and strength of adoption complementarities between all other agents. I show that the symmetric Bayes-Nash equilibria of this network game are in monotone strategies, can be strictly Pareto-ranked based on a scalar neighbor-adoption probability value, and that the greatest such equilibrium is uniquely coalition-proof. Each Bayes-Nash equilibrium has a corresponding fulfilled-expectations equilibrium under which agents form local adoption expectations. Examples illustrate cases in which the social network is an instance of a Poisson random graph, when it is a complete graph (a standard model of network effects), and when it is a generalized random graph. A generating function describing the structure of networks of adopting agents is characterized as a function of the Bayes-Nash equilibrium they play, and empirical implications of this characterization are discussed.

* I thank Luis Cabral, Nicholas Economides, Joseph Farrell, Frank Heinemann, Matthew Jackson, Oliver Kirchkamp, Ravi Mantena, Mark Newman, Hyun Song Shin, Gal Oestreicher-Singer, Roy Radner and Timothy Van Zandt for feedback on versions of earlier drafts and for helpful discussions. I also thank seminar participants from the 2006 European Summer Symposium on Economic Theory, the 17th International Conference on Game Theory, the 2004 Kiel-Munich Workshop on the Economics of Information and Network Industries, INSEAD, New York University, Stanford University, the University of California at Davis, the University of Florida, the University of Illinois at Urbana-Champaign, the University of Minnesota, the University of Texas at Austin and the 2005 ZEW Conference on the Economics of Information and Communication Technologies for their feedback. Any error, omission, or gratuitous use of a Greek symbol is my fault.

1. Introduction

This paper studies network effects that are "local". Rather than valuing an increase in the size of a product's user base or network in general, each agent values adoption by a (typically small) subset of other agents, and this subset varies across agents. The motivation for this paper can be explained by some examples. A typical user of communication software like AOL's Instant Messenger (IM) is generally interested in communicating with a very small fraction of the potential set of users of the product (his or her friends and family, colleagues, or more generally, members of an immediate 'social network'). This agent benefits when more members of its immediate social network adopt IM, and gets no direct benefits from the adoption of IM by others with whom the agent has no desire to communicate. (This observation is true of most person-to-person communication technologies.)
Similarly, firms often benefit when their business partners (suppliers, customers) adopt com- patible information technologies; this set of partners (the business network local to the firm) is a small subset of the total set of potential corporate adopters of these technologies. Buyers benefit when more sellers join a common electronic market (and vice versa), though each buyer benefits directly only when sellers of inter- est (those whose products are desired) join their market. This is typically a small fraction of the potential set of sellers. Although local to individual users, these networks (social, business, trading) are not isolated. Each individual agent (user, business, trader) values adoption by a distinct subset of other agents, and is therefore connected to a different "local network". However, a member of this local network is in turn connected to its own local network of other agents, and so forth. Returning to the examples above, a member of the immediate social network of a potential IM user has his or her own (distinct, although possibly overlapping) immediate social network. A firm’s suppliers have their own suppliers and customers. The sellers a buyer is interested in purchasing from each have their own potential set of buyers, each of whom may be interested in a different set of potential sellers. The local networks are therefore interconnected into a larger network – the entire social network of potential AOL users, the entire network of businesses who transact with each other, and the entire network of potential trading partners. The interconnection of these local networks implies that even if agent A is not directly connected to agent B (and does not benefit directly from agent B’s adop- tion of a network good), the choices of agents A and B may affect each other in equilibrium. Additionally, different agents have information about the structure of a different portion of the entire network. Agents knows the structure of their own local network, but are likely to know less about the structure of their neighbors’ local networks, and probably almost nothing about the exact structure of the rest of 1 since the marginal benefit of each agent from adoption by others can be zero for a subset of other agents; however, he does not explore this aspect of his model in any detail. An interesting example of an attempt to induce local network effects was MCI’s Friends and Family pricing scheme; a model of the dynamic behavior such pricing induces in networks of agents has been analyzed by Linhart et al. (1994). In a related paper that examines how the local structure of an underlying social network affects economic outcomes, Kakade et al. (2004) model an economy as an undirected graph which specifies agents (nodes) and potential trading opportu- nities (edges), and provide conditions for the existence of Arrow-Debreu equilibria based on a condition that requires "local" markets to clear. A more recent liter- ature on "network games" (for example, Bramoulle and Kranton, 2005, Galeotti et. al., 2006) studies the properties of the equilibria of specific classes of games played on a graph. Bramoullé and Kranton (2005) examine the effect of the struc- ture of social networks on agents’ incentives to experiment, and find that certain asymmetric network structures lead to a high level of agent specialization. In Ga- leotti et al. 
(2006), an agent’s type is its local neighborhood, actions may be either strategic complements or strategic substitutes, and comparative statics of outcomes under symmetric equilibrium when the underlying network changes are provided. The equilibrium of the example presented in Section 7 is thus a special case of their framework, although they do not analyze the relationship between underly- ing social networks and adoption networks. Galeotti and Vega-Redondo (2005) analyze a similar setting, focusing on the effects that changes in the degree distri- bution have on games with strategic complementarities. Jackson and Yariv (2006) examine the evolution of outcomes in dynamic binary-action network games with a specific myopic "best-response" dynamic defining how outcomes evolve3. They characterize the steady-state (eventual) equilibrium as the fixed point of a mapping of the prior period’s equilibrium neighbor adoption probability to that of the current period. This relates quite nicely to and extends the fulfilled-expectations (based on neighbor adoption probability) approach described in Section 5. A greater such equilibrium is interpreted as corresponding to higher diffusion, and they provide results relating changes in these levels of diffusion to changes in the distribution of the underlying social network. The adoption game in this paper also bears a natural resemblance to the global game analyzed by Carlsson and Van Damme (1993) and Morris and Shin (2002), and recently studied experimentally by Heinemann et al. (2004). Both are coor- 3Radner and Sundararajan (2006) provide a related model of the dynamic monopoly pricing of goods with positive network effects where the evolution of ouctomes is described as the continuous- time approximation of a discrete model in which consumers respond myopically to the state of the previous period, although these consumers do not play a network game, and are "completely connected". 4 dination games with a binary action space and adoption complementarities whose strength varies according to agent type. The adoption game of this paper is actu- ally a more general version of the global game – one might think of the adoption game as a "local" game, with the special case of a complete network in Section 6.1 bearing the closest resemblance to the global game4. There is also a growing literature on endogenous strategic network formation. In these games, each agent, represented by a vertex, chooses the set of other agents they wish to share a direct edge with, and the payoffs of each agent are a function of the graphs they are connected to. Jackson (2004) contains an excellent survey of this literature5. Network effects in these models are also local in some sense, although the set of connections that define "local" are endogenously determined. Proposition 5 in this paper suggests a complementary approach to studying network formation, since it characterizes the structure of equilibrium adoption networks that emerge endogenously while also depending on an underlying social or business network. A specific model of a random graph used in this paper is due to Newman, Stro- gatz and Watts (2001). A number of interesting structural properties of graphs of this kind have been established over the course of the last few years, primarily in the context of studying the properties of different social and technological networks, especially the World Wide Web (Kleinberg et al., 1999). 
An overview of this liter- ature can be found in Newman (2003); a discussion in Ioannides (2005) examines different ways in which these models might apply to economic situations. 4There are many reasons why the equilibrium monotonicity and uniqueness results of that liter- ature do not immediately carry over to the adoption game with local network effects. First, agents in this paper have private values drawn from a general distribution with bounded support. This is in contrast with the two cases for which Morris and Shin (2002) establish a unique Bayes-Nash equilibrium in the global game: a model with "improper" uniformly distributed types, and a model with common values. Moreover, the equilibrium uniqueness results of the global game are based on a model with a continuum (rather than a finite number) of players, though this restriction may not be critical. 5For example, general structures of graphs that emerge as strict Nash equilibria are established by Bala and Goyal (2000). Goyal and Moraga-Gonzalez (2001) show how a higher intensity of compe- tition creates a business environment in which firms have excessive incentives to create local R&D partnerships, since the formation of incomplete and potentially asymmetric R&D networks may maximize both industry profits and social welfare. Jackson and Rogers (2004) analyze a dynamic model of network formation with costly search which explains when networks with low inter-vertex distances and a high degree of clustering ("small-world networks") and those with power-law degree distributions are likely to form. Lippert and Spagnolo (2004) analyze the structure of networks of inter-agent relations, which could form the basis for an underlying social network. 5 2. Overview of model The underlying social network is modeled as a graph G with n vertices. The set of vertices of G is N ≡ {1, 2, 3, ..., n}. Each vertex represents an agent i ∈ N . This agent is directly associated with the agents in the set Gi, the neighbor set6 of vertex i, where Gi ∈ Γi ≡ 2N\{i}. The fact that j ∈ Gi is often referred to as "agent j is a neighbor of agent i". The set of permitted social networks is represented by Γ ⊂ [Γ1×Γ2× ...×Γn], restricted appropriately to ensure that each element of Γ is an undirected graph. The vector of neighbor sets of all agents j 6= i is denoted G−i ∈ Γ−i. The number of neighbors agent i has is referred to as agent i’s degree, and is denoted di ≡ |Gi|. Let D ⊂ {0, 1, 2, ..., n−1} be the set of possible values that di can take, and let m be the maximal element of D, or the highest degree an agent can have. Additionally, each agent is indexed by a valuation type parameter θi ∈ Θ ≡ [0, 1] which influences their payoffs as described below (in general, θi could be multidimensional, so long as Θ is compact). Each agent makes a binary choice ai ∈ A ≡ {0, 1} between adopting and not adopting a homogeneous network good. (An extension to variable quantity or to multiple compatible goods is straightforward.) The payoff to agent i from an action vector a = (a1, a2, ..., an) is: πi(ai, a−i, Gi, θi) = ai[u([ P j∈Gi aj], θi)− c)]. (2.1) (2.1) implies that the payoff to an agent who does not adopt is zero, and to an agent who adopts is according to a value function u(x, θi) which is a function of the number of the agent’s neighbors who have adopted the good, and differs across agents only through differences in their valuation type θi. 
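To fix ideas, a minimal computational sketch of the payoff in (2.1) is given below before the discussion of (2.1) continues; it assumes the linear value function u(x, t) = xt (one admissible form, which reappears in Section 6.1) and purely illustrative parameter values, and is not part of the model's formal development.

```python
# Illustrative sketch of the payoff (2.1); the linear value function u(x, t) = x * t
# is one admissible example, and the numbers below are arbitrary.
def u(x, theta):
    # Value to a type-theta adopter when x of her neighbors have adopted.
    return x * theta

def payoff(a_i, adopting_neighbors, theta_i, c):
    # Equation (2.1): non-adopters earn 0; adopters earn u(# adopting neighbors, theta) - c.
    return a_i * (u(adopting_neighbors, theta_i) - c)

print(payoff(1, adopting_neighbors=3, theta_i=0.5, c=1.0))  # 0.5: adoption is worthwhile
print(payoff(1, adopting_neighbors=1, theta_i=0.5, c=1.0))  # -0.5: too few neighbors adopt
```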
This also means that the payoff to agent i is influenced by the actions of only those agents in their neighbor set Gi, and is also not influenced by θj for j 6= i. I assume that u(x, θ) is continuously differentiable in θ, and has the following properties: 1. u(x+ 1, θ) > u(x, θ) for each θ ∈ [0, 1] (the goods display positive network effects locally7) 6For instance, one might think of the members of Gi as friends or business associates of agent i. 7Thus, a complete information game in which agents choose actions simultaneously is a su- permodular game. It follows from the results of Milgrom and Roberts (1990) that the game has a greatest and least pure strategy Nash equilibrium, independent of any asymmetries in payoffs or in the structure of the underlying social network. 6 3. Monotonicity of all symmetric Bayes-Nash equilibria The main result of this section is Proposition 1, which specifies that all symmetric Bayes-Nash equilibrium involve strategies that are monotone in both degree and valuation type, and can therefore be represented by a threshold strategy with a vec- tor of thresholds θ∗, each component θ∗(x) of which is associated with a degree x ∈ D. If the symmetric independent posteriors condition is satisfied, the posterior be- lief of agent i about degree is: Pr[d−i = x−i|di, Gi] = à Q j∈Gi q(xj) !à Q j /∈(Gi∪{i}) bq(xj)! , (3.1) for each x−i ∈ D(n−1). Similarly, the posterior belief of agent i about valuation type is μ(t−i|[n−1]) for each t−i ∈ Θ(n−1), where μ(t|x) is the probability measure over t ∈ Θx defined as follows: for any g(t), Z t∈Θx g(t)dμ(t|x) = 1Z t1=0 ⎛⎝ 1Z t2=0 ... ⎛⎝ 1Z tx=0 g(t)dF (tx) ⎞⎠ ...dF (t2) ⎞⎠ dF (t1) (3.2) From (2.1), the adoption game is symmetric, and the strategy of each agent i is simply a function of their valuation type θi and degree di. I look for symmetric equilibria in which all agents play the strategy s : D×Θ→ A. Suppose all agents j 6= i play s. The expected payoff to agent i from a choice of action ai can be written as ai [Π(di, θi)− c], where Π(di, θi) ≡ Z t∈Θ(di) ⎛⎝ X x∈D(di) " u( diP j=1 s(xj, tj), θi) diQ j=1 q(xj) #⎞⎠ dμ(t|di). (3.3) (3.3) follows from the fact that given a fixed set of actions by each agent j ∈ Gi, the actions of agents j /∈ Gi do not affect agent i’s payoffs, and that symmetric independent posteriors imply that the marginal distributions of each xj and tj are independent. Assuming that indifferent agents adopt, a symmetric strategy s is therefore a Bayes-Nash equilibrium if it satisfies the following conditions for each i: s(di, θi) = 1 if Π(di, θi) ≥ c; (3.4) s(di, θi) = 0 if Π(di, θi) < c. (3.5) 9 Proposition 1 (a) In each symmetric Bayes-Nash equilibrium, the equilibrium strategy s : D × Θ → A is non-decreasing in both degree and valuation type. Therefore, in every symmetric Bayes-Nash equilibrium, the equilibrium strategy takes the form: s(di, θi) = ½ 0, θi < θ∗(di) 1, θi ≥ θ∗(di) (3.6) where θ∗ : D → A is non-increasing. (b) If u(0, θ) = 0 for each θ ∈ Θ, then s(x, t) = 0 for each x ∈ D, t ∈ Θ is a symmetric Bayes-Nash equilibrium for any adoption cost c > 0 . A strategy of the form (3.6) is referred to as a threshold strategy with threshold vector θ∗ ≡ (θ∗(1), θ∗(2), ..., θ∗(n)). To avoid introducing additional notation, I use θ∗(x) = 1 as being equivalent to s(x, t) = 0 for all t ∈ Θ. An implication of Proposition 1 is that there are likely to be multiple symmetric Bayes-Nash equilibria of the adoption game. 
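The threshold representation in (3.6) can be made concrete with a short sketch; the threshold vector below is hypothetical rather than derived from any particular u, F or q, and is shown only to illustrate the monotone structure.

```python
# Sketch of the threshold form (3.6): a symmetric equilibrium strategy is summarized by
# a threshold theta_star(d), non-increasing in degree d, and an agent adopts iff her
# valuation type clears the threshold for her degree. Hypothetical values only.
theta_star = {1: 0.9, 2: 0.7, 3: 0.5, 4: 0.5}

def s(d_i, theta_i):
    # Threshold strategy of Proposition 1.
    return 1 if theta_i >= theta_star[d_i] else 0

print(s(1, 0.8), s(3, 0.8))   # 0 1: higher-degree agents adopt at lower valuation types
```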
The following section provides a ranking these equilibria, and a basis for the selection of a unique outcome. 4. Equilibrium ranking and selection Consider any threshold strategy of the form derived in Proposition 1: s(di, θi) = ½ 0, θi < θ∗(di) 1, θi ≥ θ∗(di) (4.1) When s is played by all n agents, each of their expected payoffs can be characterized in the following way. For any agent i, the realized payoff upon adopting under s is u( P j∈Gi s(dj, θj), θi)− c (4.2) Now, for each j ∈ Gi, according to (4.1), s(dj, θj) = 1⇔ θj ≥ θ∗(dj). (4.3) Therefore, conditional on di, ex-ante (that is, after the agents has observed her own degree and type, but before she make her adoption choices): Pr[s(dj, θj) = 1|di] = 1− F (θ∗(di)). (4.4) 10 Since the posterior probability that an arbitrary neighbor of i has degree x is q(x), it follows that Pr[s(dj, θj) = 1] = mP x=1 q(x) [1− F (θ∗(x)] . (4.5) The probability above does not depend on j, and, given player i’s information, is the same ex-ante (that is, after agents has observed their degree and type, but before they make their adoption choices) for each neighbor j ∈ Gi. Denote this common probability as λ(θ∗), which is termed the neighbor adoption probability under the symmetric strategy with threshold θ∗. From (4.5), λ(θ∗) = mP x=1 q(x) [1− F (θ∗(x)] . (4.6) Moreover, the payoff to agent i only depends on the number of their neighbors who adopt the product. If all agents j 6= i play the symmetric strategy (4.1), the expected payoff to agent i is µ diP y=1 u(y, θi)B(y|di, θ∗) ¶ − c, (4.7) where: B(y|x, θ∗) ≡ µ x y ¶ [λ(θ∗)]y[1− λ(θ∗)](x−y), (4.8) We have therefore established that under a threshold strategy with threshold vector θ∗, Π(di, θi) = diP y=1 u(y, θi)B(y|di, θ∗), (4.9) where Π is defined in (3.3). Define the set of threshold vectors associated with symmetric Bayes-Nash equi- libria as Θ∗ ⊂ Θm+1. The next lemma shows that λ(θ∗) provides a basis on which one can rank the different Bayes-Nash equilibria of the agent adoption game. Lemma 1 For any two threshold vectors θA and θB ∈ Θ∗ (a) If λ(θA) > λ(θB), then, for each x ∈ D, either θA(x) < θB(x), or θA(x) = θB(x) = 1, or θA(x) = θB(x) = 0. 11 the adoption threshold θ(x, λ) as follows: θ(x, λ) = ½ 1 if v(x, 1, λ) < c; t : v(x, t, λ) = c otherwise. (5.3) Since u2(x, t) > 0, it is easily verified that v2(x, t, λ) > 0, and therefore, θ(x, λ) is well defined. Additionally, an agent of valuation type θi and degree di adopts if and only if θi ≥ θ(di, λ). Therefore, ex-ante, the probability that a neighbor of agent i who has degree x will adopt is [1 − F (θ(x, λ)]. Since all agents share a common expectation λ, the actual probability Λ(λ) that an arbitrary neighbor of any agent adopts the product, given the posterior neighbor degree distribution q(x) is Λ(λ) = mX x=1 q(x)[1− F (θ(x, λ)]. (5.4) Therefore, λ is fulfilled as an expectation of the probability of neighbor adoption only if it is a fixed point of Λ(λ). Each outcomes associated with a fulfilled expec- tation λ is a fulfilled expectations equilibrium. Since b(y|x, 0) = 0, it follows that v(x, t, 0) = 0 for each x ∈ D, t ∈ Θ, and consequently, Λ(0) = 0. The expectation λ = 0 is therefore fulfilled. Define L as the set of all fixed points of Λ(λ). L ≡ {λ : Λ(λ) = λ}. (5.5) Consider any Bayes-Nash equilibrium with threshold vector θ∗. From (4.6), the neighbor adoption probability associated with θ∗ is λ(θ∗) = mX x=1 q(x)[1− F (θ∗(x)]. (5.6) Now, examine the possibility that λ(θ∗) is a fulfilled expectation. 
Since b(y|x, λ(θ∗)) is equal to B(y|x, θ∗), the adoption thresholds associated with λ(θ∗) are θ(x, λ(θ∗)) = θ∗(x), (5.7) and therefore, from (5.4) and (5.6), λ∗(θ) is a fixed point of Γ(λ), and therefore, a fulfilled expectation. Conversely, consider any λ ∈ L, and define a candidate Bayes-Nash equilibrium with threshold vector θ∗(x) = θ(x, λ). (5.8) 14 The neighbor adoption probability associated with the threshold θ∗ is λ(θ∗) = mX x=1 q(x)[1− F (θ∗(n)], (5.9) and since λ is a fixed point of Λ(λ), it follows from (5.4) that λ = λ∗(θ), and consequently, θ∗ ∈ Θ∗. We have therefore proved: Proposition 4 For each Bayes-Nash equilibrium of the adoption game with threshold θ∗, the expectation λ∗(θ) defines a fulfilled expectations equilibrium. For each fulfilled expectation λ, the threshold strategy with threshold vector defined by θ∗(x) = θ(x, λ) is a Bayes-Nash equilibrium of the adoption game. The connection established by Proposition 4 seems important, because most of the prior literature about network effects has derived results based on some idea of expectations that are self-fulfilling, and this idea is still used to make predictions in models of network effects. Establishing that there is an underlying inter-agent adoption game which has a Bayes-Nash equilibrium that leads to identical out- comes may make this usage more robust. Clearly, in every game of incomplete information, if an "expectation" of an agent comprises a vector of strategies for all other agents, then each vector of Bayes-Nash equilibrium strategies is (trivially) a fulfilled expectations equilibrium. What makes the proposition more interesting is that a scalar-valued expectation that is intuitively natural (how likely are my neigh- bors to adopt this product), which the agent only needs to make locally, and that has a natural connection to realized demand, is sufficient to establish the correspon- dence. Together, Propositions 3 and 4 indicate that the fulfilled expectations equilib- rium corresponding to the unique coalition-proof Bayes-Nash equilibrium is the one that maximizes expected adoption. This is the equilibrium customarily chosen in models of demand with network effects that are based on "fulfilled expectations" (for instance, in Katz and Shapiro, 1985, Economides and Himmelberg, 1995; also see Economides, 1996). An argument provided for the stability of this equilibrium is typically based on tatonnement, rather than it being an equilibrium of an underly- ing adoption game. For pure network goods, the non-adoption equilibrium is stable under the former procedure, but not under the refinement of Proposition 3. Propositions 2 and 4 suggest a simple method for determining the set of all Bayes-Nash equilibria of the adoption game. Proposition 2 establishes that each equilibrium is parametrized by a unique probability λ(θ∗) ∈ [0, 1]. Proposition 4 establishes that each of these values is a fixed point of the function Λ(λ) defined in (5.4). Therefore, to determine the set of all symmetric Bayes-Nash equilibria, all one needs are the set of fixed points λ of Λ(λ), after which one can use (5.3) 15 to specify the equilibrium associated with each λ ∈ L. Finding the fixed points of Λ(λ) is likely to be a substantially simpler exercise than finding each vector θ∗ that is a fixed point of the associated equation for the game, as illustrated by the example in Section 6.1. 
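A minimal numerical sketch of this procedure is given below. It assumes, purely for illustration, the linear value function u(y, t) = yt, valuation types distributed uniformly on [0, 1] (so F(t) = t), a hypothetical posterior neighbor degree distribution q, and an arbitrary adoption cost c; none of these choices come from the paper's examples.

```python
# Hedged numerical sketch (not the paper's code): fulfilled-expectations fixed points of
# the map Lambda from (5.3)-(5.4), under illustrative assumptions stated above.
q = {1: 0.2, 2: 0.3, 3: 0.3, 4: 0.2}   # hypothetical posterior neighbor degree distribution
c = 0.4                                 # arbitrary adoption cost

def theta(x, lam):
    # Adoption threshold (5.3): with u(y, t) = y * t, a degree-x, type-t adopter expects
    # v(x, t, lam) = t * x * lam, so the threshold solves t * x * lam = c (capped at 1).
    return 1.0 if x * lam < c else c / (x * lam)

def Lambda(lam):
    # (5.4): probability that a random neighbor adopts, given the common expectation lam,
    # with F(t) = t so that 1 - F(theta) = 1 - theta.
    return sum(qx * (1.0 - theta(x, lam)) for x, qx in q.items())

# lambda = 0 is always a fixed point (Lambda(0) = 0). The greatest fixed point is reached
# by iterating Lambda downward from lambda = 1, since Lambda is monotone and Lambda(1) <= 1.
lam = 1.0
for _ in range(200):
    lam = Lambda(lam)
print(round(lam, 4))   # approximately 0.7236 with these illustrative numbers; interior fixed
                       # points can be found by scanning Lambda(lam) - lam for sign changes.
```

With the numbers above, Λ turns out to have three fixed points (λ = 0, λ = 0.2 and λ ≈ 0.724), consistent with the multiplicity of symmetric equilibria noted after Proposition 1.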
A natural choice for the outcome of the adoption games appears to be the thresh- old equilibrium with threshold θ∗gr, since it Pareto-dominates all the others, and is also the unique coalition-proof equilibrium. While this presents a strong case for choosing this as the outcome, it does not resolve how agents might coordinate on choosing this equilibrium. Note, however, that the coordination problem with lo- cal network effects is (loosely speaking) considerably less complicated, since each player i only needs to coordinate on their choice of strategy with their neighbors j ∈ Gi, rather than with every other player. Of course, this is does not guarantee a unique equilibrium in an appropriately defined sequential coordination game, but merely makes it more likely (again, loosely speaking). A more realistic mechanism that determines the actual outcome is likely to be an adjustment mechanism of the kind described by Rohlfs (1974). 6. Two simple examples This section presents two examples of adoption games in which specific assump- tions are made about the distribution over the set of graphs from which the social network is drawn, and/or the distribution of valuation types. In the first example, the social network is an instance of a Poisson random graph and valuation types are drawn from a general distribution F . In the second example, the social network is a complete graph and each adopting agent benefits from the adoption of all other agents. This shows how the model of this paper encompasses a "standard" model of network effects as a special case. 6.1. Poisson random graph This section analyzes an example in which the social network is an instance of a Poisson random graph (Erdös and Rényi, 1959), and for which the value of adoption is linear in both valuation type and in the number of complementary adoptions, that is, u(x, t) = xt. Poisson random graphs are constructed as follows: take n vertices and connect each pair (or not) with probability r (or 1−r). It is well-known that the prior degree distribution of these random graphs has the density (mass) function: p(x) = µ n− 1 x ¶ rx(1− r)(n−1−x). (6.1) 16 and it is easy to see that this is identical to the neighbor degree distribution: q(x) = ½ 1, x = n− 1 0, x < n− 1 . (6.10) Clearly, the condition of symmetric independent posteriors is trivially true. It fol- lows from Proposition 1 that any symmetric Bayes-Nash equilibrium is defined by a single threshold θ∗(n − 1), which (for brevity, and only in this section) we refer to as θ∗. Rather than computing the associated adoption probabilities λ and using Propo- sition 4, it is straightforward in this case to compute θ∗ directly, since θ∗ is a scalar. If all agents play the symmetric strategy s : Θ→ A with threshold θ∗ on valuation type, then from (4.8) and (4.7), the expected value to an agent of valuation type t is w(t, θ∗) ≡ à n−1X y=0 u(y, t) µ n− 1 y ¶ (1− F (θ∗)]y[F (θ∗)](n−1−y) ! − c, (6.11) and therefore, the set Θ∗ of all thresholds corresponding to symmetric Bayes-Nash equilibria is defined by: Θ∗ = {t : w(t, t) = 0} (6.12) Correspondingly, from (6.10) and (4.6), the neighbor adoption probability associ- ated with each threshold θ∗ ∈ Θ∗ is: λ(θ∗) = 1− F (θ∗), (6.13) and from Proposition 4, each λ(θ∗) defines a fulfilled-expectations equilibrium in which agents form homogeneous expectations about the probability that each other agent will adopt. 7. 
A third example and the structure of adoption networks In this third example, the social network is an instance of a generalized random graph. Agents have the same valuation type θi = 1. Therefore, this example also illustrates how the model applies to situations where all the uncertainty is in the structure of the underlying social network. The set of Bayes-Nash equilibria can be equivalently characterized by a threshold function on degree (rather than a threshold vector on valuation type), which is necessary for models with homogeneous adop- tion complementarities across agents. This characterization leads to a result about the structure of the network of agents who adopt the product, and some empirical implications of this result are discussed. 19 Generalized random graphs (Newman, Strogatz and Watts, 2001) have been used widely to model a number of different kinds of complex networks (for an overview, see section IV.B of Newman, 2003). They are specified by a number of vertices n, and an exogenously specified degree distribution with probability mass function p(x) defined for each x ∈ D. For each vertex i , the degree di is realized as an independent random draw from this distribution. Once each of the values of ni have been drawn, the instance of the graph is constructed by first assigning di ‘stubs’ to each vertex i, and then randomly pairing these stubs9. Recall that m is the largest element of D. Given this procedure for drawing G from Γ, the neighbor degree distribution is described by: q(x) = xp(x) mP j=0 jp(j) . (7.1) The reason why the degree of an arbitrary neighbor of a vertex has the distribu- tion q(x) is as follows. Given the ‘algorithm’ by which each instance of the random graph is generated, since there are n other vertices connected to a vertex of degree n, it is n times more likely to be a neighbor of an arbitrarily chosen vertex than a vertex of degree 1. The neighbor degree distribution is essentially identical to the excess degree distribution discussed in Newman (2003). The non-neighbor degree distribution is somewhat more complicated; for large enough n, it is approximately the same as the prior degree distribution, that is, bq(x) ∼= p(x). It is straightforward to see that the characterization based on threshold types in Section 3 is "invertible" in the following sense: for each vector θ∗, one can define a corresponding function: δ∗(t) = min{x : θ∗(x) = t}, (7.2) and the symmetric Bayes-Nash equilibria of the game are completely defined by the functions δ∗(t). The strategy that corresponds to δ∗(t) is s(di, θi) = ½ 0, di < δ∗(θi) 1, di ≥ δ∗(θi)) . (7.3) 9The process described above has some shortcomings in generating representative elements of Γ; for instance, it may create a graph with multiple edges between a pair of vertices. Two algorithms that are used to account for this while preserving uniform sampling are the switching algorithm (Rao et al., 1996, Roberts, 2000) and the matching algorithm (Molloy and Reed, 1995). Recent studies have contrasted the performance of these algorithms with a third procedure called "go with the winners"; for details, see Milo et al. (2003). 20 If θi = 1 for all agents, then F (t) = ½ 0, t < 1 1, t = 1 . (7.4) Therefore, in this example, each Bayes-Nash equilibrium is completely determined by its value of δ∗(1), which we refer to as δ∗ for brevity. 
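As a small illustration of (7.1), the sketch below computes the neighbor degree distribution from a hypothetical prior degree distribution p; the numbers are arbitrary and chosen only to show the reweighting by degree.

```python
# Sketch of the neighbor degree distribution (7.1) for a generalized random graph,
# using a hypothetical prior degree distribution p.
p = {1: 0.45, 2: 0.25, 3: 0.15, 4: 0.10, 5: 0.05}

mean_degree = sum(x * px for x, px in p.items())
q = {x: x * px / mean_degree for x, px in p.items()}   # q(x) = x p(x) / sum_j j p(j)

for x in sorted(p):
    print(x, round(p[x], 3), round(q[x], 3))
# Per unit of prior probability, a degree-5 vertex is five times more likely than a
# degree-1 vertex to turn up as the neighbor of a randomly chosen agent.
```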
Define Q(x) ≡ Pr[dj ≥ x|j ∈ Gi] = mX j=x q(i), and with a slight abuse of notation, denote the neighbor adoption probability defined in (4.6) as λ(δ∗), which is correspondingly λ(δ∗) = Q(δ∗). (7.5) As in Section 6.2, given that the thresholds defining the Bayes-Nash equilibria are scalar values, we can compute them directly. If all agents play the symmetric strat- egy s : D→ A with threshold δ∗ on degree, following (4.8) and (4.7), the expected payoff to an agent of degree x ∈ D is w(x, δ∗) ≡ à xX y=0 u(y, 1) µ x y ¶ (1−Q(δ∗)]y[Q(δ∗)](x−y) ! − c, (7.6) and therefore, the set ∆ of all thresholds on degree corresponding to symmetric Bayes-Nash equilibria is defined by: ∆ = {x : w(x− 1, x) < 0, w(x, x) ≥ 0} (7.7) Two points are specifically worth highlighting about this example. First, while (6.11) and (7.6) are quite similar, the latter is based on the posterior neighbor de- gree distribution. Therefore, if one were to try and represent the structure of the underlying social network into a continuous type variable of some kind, the results would tend to systematically underestimate adoption unless the type distribution was based on the posterior degree distribution. More importantly, explicitly modeling the structure of the social network allows one to study the structure of the adoption network Gα, which is the graph whose vertices are agents who have adopted, and whose edges are those edges in G con- necting vertices corresponding to adopting agents. Denote the degree distribution of the adoption network as α(y). Now, the probability that a agent has y neighbors in the adoption network, conditional on the agent’s degree being x < δ∗ is zero, 21 2004), and it appears that each of the results in Propositions 1 through 4 would con- tinue to hold when the set of underlying social networks is restricted to containing only bipartite graphs. Another natural extension would involve agents adopting one of many incompatible network goods, perhaps dynamically and using an evolution- ary adjustment process based on the state of adoption of one’s neighbors. Some ideas towards developing ways of testing the predictions of theories based on lo- cal network effects are discussed immediately following Proposition 5, and these represent yet another promising direction of future work. A contrast between the equilibria of the adoption game in this paper and those obtained when agents have progressively "more" information about the structure of the underlying social network would be interesting, since it would improve our un- derstanding of whether better informed agents adopt in a manner that leads to more efficient outcomes. It would also indicate how robust the predictions of models that assume that agents know the structure of these graphs are, if in fact these agents do not. While the assumption of symmetric independent posteriors models uncertainty about the social network for a wide range of cases, as illustrated by the examples in Section 6 and 7, it may preclude distributions over social networks that display "small world" effects (Milgram, 1967, Watts, 1999). Models of these networks have a specific kind of clustering that lead to posteriors that, while independent across neighbors for a given agent, are conditional on the agent’s degree. A natural next step is to extend the analysis of this paper to admit symmetric conditionally independent posteriors of this kind, and then to explore how more elaborate local clustering of agents may affect equilibrium. 
This may be of particular interest in a model of competing incompatible network goods. 9. Appendix: Proofs Proof of Proposition 1 (a) The definition of Π(di, θi) is reproduced below from (3.3): Π(di, θi) ≡ Z t∈Θ(di) ⎛⎝ X x∈D(di) " u( diP j=1 s(xj, tj), θi) diQ j=1 q(xj) #⎞⎠ dμ(t|di). (9.1) If there exists di ∈ D, and θi, θ 0 i ∈ Θ such that s(di, θi) = 1 and s(di, θ 0 i) = 0, it follows from (3.4) and (3.5) that Π(di, θi) ≥ c and Π(di, θ 0 i) < c, which implies that Π(di, θi) > Π(di, θ 0 i). (9.2) 24 Since u2(x, t) > 0 for all x ∈ D, t ∈ Θ, it follows that for a fixed strategy s, Π2(x, t) ≥ 0. Therefore, (9.2) implies that θi > θ0i, which establishes that s(x, t) is non-decreasing in t. From (9.1), for any symmetric Bayesian Nash equilibrium strategy s, Π(di+1, θi) = Z t∈Θ(di+1) ⎛⎝ X x∈D(di+1) à u( di+1P j=1 s(xj, tj), θi) n+1Q j=1 q(xj) !⎞⎠ dμ(t|di+1). (9.3) The right-hand-side of (9.3) can be rewritten as 1Z t0=0 Z t∈Θ(di) X x0∈D [ X x∈D(di) [u( diP j=1 s(xj, tj) +s(x0, t0), θi) diQ j=1 q(xj)]]q(x 0)dμ(t|di)f(t0)dt0. Since s(x, t) ≥ 0, and u1(x, t) > 0, it follows that u( diP j=1 s(xj, tj) + s(x0, t0), θi) ≥ u( diP j=1 s(xj, tj), θi), (9.4) which in turn implies that P x∈D(di) à u( diP j=1 s(xj, tj) + s(z0, t0), θi) diQ j=1 q(xj) ! ≥ P x∈D(di) à u( diP j=1 s(xj, tj), θi) diQ j=1 q(xj) ! . (9.5) The expression P x∈D(di) à u( diP j=1 s(xj, tj) + s(x0, t0), θi) diQ j=1 q(xj) ! is independent of x0. Also, since P x0∈D q(x0) = 1, P x0∈D à P x∈D(di) à u( diP j=1 s(xj, tj), θi) diQ j=1 q(xi) !! q(x0) = P x∈D(di) à u( diP j=1 s(xj, tj), θi) diQ j=1 q(xj) ! , (9.6) 25 which in turn implies that Π(di + 1, θi) can be written as: Π(di + 1, θi) = Z t∈Θ(di) X x∈D(di) à u( diP j=1 s(xj, tj) + s(x0, t0), θi) diQ j=1 q(xj) ! dμ(t|di), (9.7) and since this expression is independent of t0, (9.1), (9.4) and (9.7), verify that Π(di + 1, θi) ≥ Π(di, θi). (9.8) Based on (9.8), it follows from (3.4) and (3.5) that s(x, t) = 1 ⇒ s(x + 1, t) = 1, and therefore, s(x, t) is non-decreasing in x. We have now established that any symmetric Bayesian Nash equilibrium strategy s(x, t) is non-decreasing in both x and t for each x ∈ D, t ∈ Θ. For a given s(x, t), define θ∗(x) = max{t : s(x, t) = 0} (9.9) Clearly, s(x, t) = 1⇔ t > θ∗(x). Moreover, since s(x, t) is non-decreasing in x, it follows that θ∗(x) is non-increasing, which completes the proof. (b) Follows directly from the fact that u(0, t) = 0 for all t ∈ Θ. Proof of Lemma 2 The following is a well-known result about the binomial distribution: Let X be a random variable distributed according to a binomial distribution with parame- ters n and p. If g(x) is any strictly increasing function, then E[g(X)] is strictly increasing in p. This is a consequence of the fact that a binomial distribution with a higher p strictly dominates one with a lower p in the sense of first-order stochastic dominance. (a) Assume the converse: that there are threshold vectors θA and θB such that λ(θA) > λ(θB), and that 1 ≥ θA(x) > θB(x) for some x ∈ D. Therefore, there exists t ∈ Θ, θB(x) < t < θA(x) such that sA(x, t) = 0 and sB(x, t) = 1. From (3.4) and (3.5), and (4.9), this in turn implies that xP y=1 u(x, t)B(y|x, θA) < xP y=1 u(x, t)B(y|x, θB). (9.10) Since λ(θA) > λ(θB), this contradicts the property of the binomial distribution mentioned at the beginning of this proof, and the result follows. (b) If λ(θA) = λ(θB), then from (4.9), the payoff to agent i from adoption is identical under A and B, for any di ∈ D, θi ∈ Θ. The result follows immediately from (3.4) and (3.5). 
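The binomial property invoked at the start of the proof of Lemma 2 can also be checked numerically; the sketch below is purely illustrative and uses an arbitrary strictly increasing function g.

```python
# Quick numerical check (illustrative, not part of the proof) of the binomial property used
# in the proof of Lemma 2: for increasing g, E[g(X)] rises with p when X ~ Binomial(n, p).
from math import comb

def expected_g(n, p, g):
    # E[g(X)] for X ~ Binomial(n, p), computed directly from the probability mass function.
    return sum(g(k) * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

g = lambda k: k**2          # any strictly increasing function on {0, ..., n}
for p in (0.2, 0.4, 0.6, 0.8):
    print(p, round(expected_g(8, p, g), 3))
# The expected values increase in p, reflecting first-order stochastic dominance.
```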
26 Therefore, to be self-enforcing, the deviation must be of the form σ(x, t) = ½ 0, t < θ(x) 1, t ≥ θ(x) (9.14) for each i ∈ S, where where θ(x) ≤ θ∗gr(x) for each x ∈ D, and θ(y) < θ∗gr(y) for some y∗ ∈ D. Now, suppose all agents i ∈ N play according to the strategy σ(x, t). The switch by agents i ∈ N\S from playing s∗ to playing σ increases the expected payoffs to all agents i ∈ S, since σ(x, t) ≥ s∗(x, t) with the in- equality being strict for y∗ ∈ D, t ∈ [θ(y), θ∗gr(y)]. Since σ(x, t) was a strictly Pareto-improving deviation to begin with, the symmetric strategy σ(x, t) played by all agents strictly Pareto-dominates the symmetric strategy s∗(x, t) played by all agents. Consequently, since each θi takes continuous values in Θ, and the ac- tion space A = {0, 1} is binary, one can now start with the threshold vectors of σ and construct a symmetric Bayes-Nash equilibrium that strictly Pareto-dominates s∗. This leads to a contradiction, since s∗ is by definition, the greatest symmetric Bayes-Nash equilibrium, and completes the proof. Proof of Proposition 5 Recall that m = max{x : x ∈ D}, the maximum possible degree for any of the n agents. The expression for α(y), the degree distribution of the adoption network, is reproduced from (7.9) below: α(y) = ⎧⎪⎪⎨⎪⎪⎩ A mP x=δ∗ h¡ x y ¢ [Q(δ∗)]y[1−Q(δ∗)](x−y)p(x) i for y ≤ δ∗ A mP x=y h¡ x y ¢ [Q(δ∗)]y[1−Q(δ∗)](x−y)p(x) i for y > δ∗ , (9.15) which can be rearranged as: α(y) = ⎧⎪⎪⎨⎪⎪⎩ A mP x=δ∗ h¡ x y ¢ h Q(δ∗) 1−Q(δ∗) iy [1−Q(δ∗)]xp(x) i for y ≤ δ∗ A mP x=y h¡ x y ¢ h Q(δ∗) 1−Q(δ∗) iy [1−Q(δ∗)]xp(x)) i for y > δ∗ , (9.16) By definition, the generating functions of the degree distributions of the social net- work Φp(w) and the adoption network Φα(w) are: Φp(w) ≡ ∞P k=0 p(k)wk; (9.17) Φα(w) ≡ ∞P k=0 α(k)wk. (9.18) 29 From (9.16) and (9.18), Φα(w) = A δ∗−1X k=0 " mX x=δ∗ Ã∙ wQ(δ∗) 1−Q(δ∗) ¸k µ x k ¶ [1−Q(δ∗)]xp(x) !# (9.19) +A mX k=δ∗ " mX x=k Ã∙ wQ(δ∗) 1−Q(δ∗) ¸k µ x k ¶ [1−Q(δ∗)]xp(x) !# . One can interchange the order of summation for the first part of (9.19) with no changes in expressions, to: A mX x=δ∗ " δ∗−1X k=0 Ã∙ wQ(δ∗) 1−Q(δ∗) ¸k µ x k ¶ [1−Q(δ∗)]xp(x) !# . (9.20) Interchanging the order of summation of the second part of (9.19), one gets: A mX x=δ∗ " xX k=δ∗ Ã∙ wQ(δ∗) 1−Q(δ∗) ¸k µ x k ¶ [1−Q(δ∗)]xp(x) !# . (9.21) From (9.19-9.21), it follows that Φα(w) = A mX x=δ∗ " δ∗−1X k=0 Ã∙ wQ(δ∗) 1−Q(δ∗) ¸k µ x k ¶ [1−Q(δ∗)]xp(x) !# (9.22) +A mX x=δ∗ " xX k=δ∗ Ã∙ wQ(δ∗) 1−Q(δ∗) ¸k µ x k ¶ [1−Q(δ∗)]xp(x) !# , which reduces to Φα(w) = A mX x=δ∗ " xX k=0 Ã∙ wQ(δ∗) 1−Q(δ∗) ¸k µ x k ¶ [1−Q(δ∗)]xp(x) !# . (9.23) Grouping terms not involving k on the right hand side of (9.23), one gets: Φα(w) = A mX x=δ∗ " [1−Q(δ∗)]xp(x) à xX k=0 õ x k ¶ ∙ wQ(δ∗) 1−Q(δ∗) ¸k!!# . (9.24) Using the identity (1 + x)n = nX i=0 ∙µ n i ¶ xi ¸ , (9.25) 30 (9.24) reduces to: Φα(w) = A mX x=δ∗ " [1−Q(δ∗)]xp(x) µ 1 + wQ(δ∗) 1−Q(δ∗) ¶x # , (9.26) which simplifies to: Φα(w) = A mX x=δ∗ ¡ p(x)[1−Q(δ∗) + wQ(δ∗)]x ¢ (9.27) From (9.18), since p(x) = 0 for x > m, Φp([1−Q(δ∗) + wQ(δ∗)]) = mX x=0 ¡ p(x)[1−Q(δ∗) + wQ(δ∗)]x ¢ , (9.28) and comparing (9.27) and (9.28), the result follows. 10. References 1. Bala, V. and S. Goyal, 2000. A Non-cooperative Model of Network Forma- tion. Econometrica 68, 1181-1231. 2. Bernheim, D., B. Peleg and M. Whinston, 1987. Coalition-proof Nash Equi- libria I: Concepts. Journal of Economic Theory 42, 1–12. 3. Bramoulle, Y. and R. Kranton, 2005. Strategic Experimentation in Networks. Journal of Economic Theory (forthcoming). 4. Carlson, H. and E. VanDamme, 1993. 
Global Games and Equilibrium Selection. Econometrica 61, 989-1018.

5. Chwe, M., 2000. Communication and Coordination in Social Networks. Review of Economic Studies 67, 1-16.

6. Dybvig, P. and C. Spatt, 1983. Adoption Externalities as Public Goods. Journal of Public Economics 20, 231-247.

7. Economides, N. and F. Himmelberg, 1995. Critical Mass and Network Size with Application to the US Fax Market. Discussion Paper EC-95-11, Stern School of Business, New York University.

8. Economides, N., 1996. The Economics of Networks. International Journal of Industrial Organization 16, 673-699.