Economics of Financial Markets – lecture notes by Francesco Fuiano (course notes for Economia e Tecnica dei Mercati Finanziari)

Lecture notes (in English) for the course Economics of Financial Markets (LM Economia Internazionale, University of Padova; prof. Fulvio Fontini), academic year 2014/2015.

UNIVERSITÀ DEGLI STUDI DI PADOVA
ECONOMICS OF FINANCIAL MARKETS
Lecture notes by Francesco Fuiano, 9/1/2015

CONTENTS
1. INTRODUCTION
2. RISK DECISION
2.1 PROBABILITY
2.3 THE CHOICE PROBLEM: PRACTICALLY
2.4 LOTTERY
2.5 UTILITY
3. PREFERENCE: COMPLETENESS, TRANSITIVITY, CONTINUITY AND INDEPENDENCE
3.1 PREFERENCES AND THE REPRESENTATION OF PREFERENCES
3.2 UTILITY FUNCTION
4. STOCHASTIC DOMINANCE
5. PORTFOLIO CHOICES
5.1 PORTFOLIO AND CHOICES UNDER RISK
5.2 ANTI-CYCLE STOCK
6. EFFICIENT PORTFOLIO AND SET OF CHOICE
7. CAPITAL ASSET PRICING MODEL
7.1 THE PROBLEM OF CHOICE ON THE BASIS OF A RISKLESS ACTIVITY
8. THE GENERAL EQUILIBRIUM
8.1 CHOICES OF FINANCIAL ACTIVITIES UNDER UNCERTAINTY
9. THE EVALUATION OF DERIVATIVES
9.1 FORWARD
9.2 THE OPTION
9.3 THE EVALUATION OF THE OPTIONS
9.4 THE REAL OPTION THEORY
REFERENCES

[…]

$(x_1, p_1;\ x_2, p_2)$ — probability distribution, or lottery (the consequences of a given action together with their probabilities).

Suppose we take an action: what are its consequences for us? Here $p_1$ is the probability of consequence $x_1$ if we take action $a_1$. The pairing of consequences with their probabilities is called a lottery (a description of the consequences and of the probabilities of those consequences) or a probability distribution.

$(x_1^1, p_1;\ x_2^1, p_2) = L_1$ (Lottery 1) — ACTION 1
$(x_1^2, p_1;\ x_2^2, p_2) = L_2$ (Lottery 2) — ACTION 2

We must decide whether to take action 1 or action 2. If we take action 1 we cannot be sure which of the consequences $x_1^1$ or $x_2^1$ will occur; what we know is the list of possible consequences and their probabilities. To decide we must take into account both the consequences and the probabilities. Deciding among alternatives in a risky situation is therefore equivalent to deciding among probability distributions, i.e. among lotteries (choosing between action 1 and action 2 is the same as choosing between $L_1$ and $L_2$). We now analyse how to choose among lotteries.

2.4 LOTTERY

We have already discussed the risk of decision making under uncertainty and the consequences of risk; deciding in a risky situation means choosing among lotteries. Now we look at how people should make these choices. Recall that a lottery is a set of pairs of consequences and of probabilities of those consequences. A first criterion for ranking lotteries is to compute their expected value. Suppose someone proposes the following bet on a coin toss, at a cost of one euro: if heads comes up you gain nothing; if heads does not come up you gain four euro. Alternatively you can have one euro for sure. You must therefore choose between one euro for sure and entering the bet. A natural way to evaluate lotteries is to weight the consequences by their respective probabilities. The weighted average of the consequences is called the expected value (the value we expect to obtain, on average, by repeating the experiment indefinitely):

$$E(x^i) = \sum_{j=1}^{n} x_j^i\, p(s_j) \qquad \text{The Expected Value (First Criterion)} \quad (3)$$

Here the consequences are weighted by their respective probabilities.
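As a quick numerical check of this first criterion, the sketch below (illustrative Python, not part of the original notes; the function name and the list-of-pairs encoding of a lottery are assumptions of the example) computes the expected value of the coin bet just described and of the sure euro.

```python
# Minimal sketch, assuming a lottery is encoded as a list of (consequence, probability) pairs.

def expected_value(lottery):
    """Expected value E(x) = sum of consequences weighted by their probabilities."""
    return sum(x * p for x, p in lottery)

# The bet above: heads -> win nothing, otherwise -> win 4 euro; the alternative is 1 euro for sure.
bet = [(0.0, 0.5), (4.0, 0.5)]
sure_thing = [(1.0, 1.0)]

print(expected_value(bet))         # 2.0
print(expected_value(sure_thing))  # 1.0
```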
It is the weighted average of the consequences: in the binary case, $x_1^1 p_1 + x_2^1 p_2 = E(x)$. The expected value is the value we can obtain in probabilistic terms; it is not the result of a single choice, but what we expect on average.

We know that if we toss the coin many times, the face we bet on will come up several times. This is the law of large numbers: if we repeat the toss many times, the observed frequencies tend to the probabilities, so on average we will see heads half of the time and tails half of the time. Moreover, the probability of seeing tails rather than heads at the next toss is independent of the previous tosses. Speaking in probabilistic terms is therefore very different from speaking of sure outcomes, because in probabilistic terms we focus on the average.

Now suppose we have a gamble with a given expected value. For instance, we toss a coin and win 2 euro if heads comes up and nothing otherwise, and the cost of participating in the bet is one euro. The expected value of the bet is one euro, but the cost of participating is one euro too, so on average we expect a zero net gain.

$$E(x) - C = \Pi(x) \qquad \text{(expected value − cost to participate = expected net profit)}$$

For a bet whose expected profit depends on an expected value and a cost there are three possibilities:

Fair bet:         $E(x) = C$,  $\Pi(x) = 0$
Unfavourable bet: $E(x) < C$,  $\Pi(x) < 0$
Favourable bet:   $E(x) > C$,  $\Pi(x) > 0$

Obviously, if our space of strategies (combinations of choices) is simply to participate in the bet or not, we have:

1. $\Pi(x_1) = E(x_1) - C$
2. $\Pi(x_2) = C$ (by not participating we keep the sure amount, which in the example above is one euro)

The expected-value criterion is not always adequate for ranking lotteries. Consider the following example. Someone proposes this bet: he tosses a coin; if heads comes up we win two euro; if tails comes up he tosses again; if heads then comes up we win four euro, otherwise he tosses again, and so on. To participate we have to pay one hundred euro. Shall we participate? Let us compute the expected value of this lottery and see whether the bet is favourable:

$$2\cdot\frac{1}{2} + 4\cdot\frac{1}{4} + 8\cdot\frac{1}{8} + \dots = \sum_{i=1}^{\infty} 2^i \cdot \frac{1}{2^i} = \sum_{i=1}^{\infty} 1 = \infty$$

The expected value is infinite, so we would expect everyone to take the bet, no matter how much they have to pay. This is the Saint Petersburg Paradox (an eighteenth-century paradox discussed by Daniel Bernoulli in 1738). Everyone should be willing to pay any cost to participate, but this does not happen in practice: no one joins a bet whose entry cost is, say, 1000.00 € and which pays only 2.00 € if heads comes up at the first toss. People do not act on the basis of the expected value alone; there is something more, namely the relative importance of the consequences for them. What matters is not the consequences as such but the value people attach to them.
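Two small illustrative snippets (invented names, not from the notes): the first shows that the truncated expected value of the St. Petersburg bet grows without bound as more tosses are allowed (each term contributes exactly 1); the second classifies a bet as fair, favourable or unfavourable from the sign of $E(x) - C$.

```python
# Sketch of the St. Petersburg series: each term 2^i * (1/2)^i equals 1, so the
# truncated expected value grows linearly with the number of allowed tosses.

def st_petersburg_ev(max_tosses):
    return sum((2 ** i) * (0.5 ** i) for i in range(1, max_tosses + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_ev(n))   # 10.0, 100.0, 1000.0 -> diverges as n grows

def classify_bet(expected_value, cost):
    """Fair, favourable or unfavourable according to the sign of E(x) - C."""
    profit = expected_value - cost
    if profit > 0:
        return "favourable"
    if profit < 0:
        return "unfavourable"
    return "fair"

print(classify_bet(1.0, 1.0))       # 'fair' (the 2-euro coin bet costing 1 euro)
```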
2.5 UTILITY

Given the Saint Petersburg paradox we need another criterion. We can rank lotteries on the basis of the utility of their consequences. The value people give to the consequences of a bet is called utility, so one possible decision-making criterion is to weight not the consequences themselves but what those consequences mean for the decision maker:

$$E[U(x^i)] = \sum_{j=1}^{n} p_j\, U(x_j^i) \qquad \text{The Expected Utility (Second Criterion)} \quad (4)$$

In the binary case, $U(x_1^1)p_1 + U(x_2^1)p_2 = E[U(x)]$. We call expected utility the weighted average of the utilities of the consequences, weighted by their probabilities. Note that the probabilities depend on the states, not on the consequences. The ingredients needed to evaluate people's choices are therefore not just probabilities but utilities too.

What does utility mean? It is a way to express our feelings about the consequences. We suppose that people have something we call preferences, which allow them to express rankings of the different outcomes. A preference is a binary ordering: given a set of elements on which we have to decide, we are able to express our preferences by comparing the elements pairwise. We assume that these binary orderings are complete, meaning that we can express our preference over every pair of elements. We also assume that preferences are transitive (if we prefer A to B and B to C, then we prefer A to C) and independent of the alternatives. If preferences have these properties, we can represent them by assigning a number to each element, so that we obtain an ordering of them; this is important because it allows us to compute with numbers through functions.

[…]

A) A green ball is drawn: we win 10 €
B) A white ball is drawn: we win 10 €

Second case (choices), if the bettor draws a ball:

C) A red or a white ball is drawn: we win 10 €
D) A red or a green ball is drawn: we win 10 €

Results of the choices (number of subjects choosing each option):

A [23]: $P_g > P_w$
B [5]:  $P_w > P_g$
C [28]: $P_{r\cup w} > P_{r\cup g}$
D [3]:  $P_{r\cup g} > P_{r\cup w}$

The consequence is always the same (10 €) and the utility does not change, so the choices should reveal beliefs about the probabilities of the ball colours, where $P_{r\cup w} = P_r + P_w$ and $P_{r\cup g} = P_r + P_g$. These results are strange: the majority choices imply $P_g > P_w$ and $P_r + P_w > P_r + P_g$, but the latter means $P_w > P_g$, the opposite of the former. This is a violation of independence, known as Daniel Ellsberg's Paradox. A together with C [14], A together with D [3]: these are the violations committed.
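A brief illustrative sketch of the second criterion, formula (4) above (the square-root utility is just an example of a concave utility function, not one taken from the notes): the expected utility of the coin bet is compared with that of the sure euro.

```python
import math

# Sketch: rank lotteries by expected utility E[U(x)] = sum_j p_j * U(x_j),
# assuming the same (consequence, probability) encoding used earlier.

def expected_utility(lottery, utility):
    return sum(p * utility(x) for x, p in lottery)

u = math.sqrt                        # an example concave utility
bet = [(0.0, 0.5), (4.0, 0.5)]       # the coin bet from section 2.4
sure_thing = [(1.0, 1.0)]            # one euro for sure

print(expected_utility(bet, u))         # 1.0
print(expected_utility(sure_thing, u))  # 1.0 -> this agent is indifferent between the two
```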
3.2 UTILITY FUNCTION

Different kinds of utility represent different preferences. We speak of monotonicity, or of a monotone utility, when the utility of the elements of the domain increases as the elements themselves increase. Monotonicity is a minimal rationality criterion: more is better than less. The utility function is also continuous and monotone, hence increasing: $\frac{\Delta U(x)}{\Delta x} > 0$, i.e. $x_1 > x_2 \Rightarrow U(x_1) \geq U(x_2)$.

Let us consider the following property of a utility function:

$$U(\lambda x_1 + (1-\lambda)x_2) \;\geq\; \lambda U(x_1) + (1-\lambda)U(x_2), \qquad \lambda \in (0,1)$$

We have two elements $x_1$ and $x_2$. A function representing preferences can be linear, concave or convex; in this case it is concave. The term $\lambda x_1 + (1-\lambda)x_2$ is a linear combination of $x_1$ and $x_2$. The utility of the linear combination of the lotteries is weakly greater than or equal to the linear combination of the utilities; a function with this property is called a (strictly) concave function. If we take two points on the function and draw the line between them, the function lies above that line.

We toss a coin and suppose there are two possible consequences:

$x_1$: win 2 €
$x_2$: win 0 €

A linear combination such as half of $x_1$ plus half of $x_2$ can represent a lottery (a linear combination of the two extremes). If the concavity property holds for any linear combination, we can take the specific linear combination whose weights are exactly the probabilities of the outcomes $x_1$ and $x_2$. For instance, to represent the lottery above we take the linear combination of the outcomes $x_1$ and $x_2$ with weights given by the probabilities; probabilities are themselves the weights of a linear combination. Example: $\lambda = p_1$, $1 - \lambda = p_2$.

So if we take the probabilities as the weights of the combination we obtain the same function as before; this is another way of defining concave functions. The sum $p_1 x_1 + p_2 x_2$ of the outcomes weighted by their respective probabilities is the expected value:

$$E(x) = p_1 x_1 + p_2 x_2$$

The weighted average of the utilities of the outcomes is the expected utility:

$$E(U(x)) = p_1 U(x_1) + p_2 U(x_2)$$

and $U(E(x)) = U(p_1 x_1 + p_2 x_2)$ is the utility of the expected value. If the utility is concave, then the utility of the expected value is weakly greater than the expected utility:

$$U(E(x)) \;\geq\; E(U(x)) \qquad \text{[Jensen Inequality]} \quad (5)$$

The Jensen inequality identifies a subject who is averse to risk. In terms of behaviour, $E(U(x))$ is the criterion people follow when choosing a lottery: facing a fair bet, such a subject will choose not to participate rather than to participate (equivalently, he prefers a sure sum equal to the expected value to a lottery with that expected value). For instance, imagine a lottery in which we can win:

- 2 € with probability one half
- 0 € with probability one half

[…] a CE greater than $E(x)$, someone who has a positive risk premium and, finally, someone who has a positive Arrow-Pratt coefficient.

If someone has a concave utility function he is not subject to the Saint Petersburg paradox: his expected utility is lower than the expected value of the bet. Bernoulli conjectured the following: he wrote in a letter that the value of the money people receive depends on the amount of money they already have. This means that the marginal utility is inversely related to the subject's wealth (the value of the outcome is inversely related to the wealth of the subject):

$$U'(x) = \frac{1}{x} \qquad \text{MARGINAL UTILITY}$$

So $U(x) = \log(x)$ expresses the risk aversion of the subject, hence $E(U(x)) < E(x)$. Indeed, if the utility function of an agent is logarithmic, the Saint Petersburg paradox does not arise, because the expected utility is finite:

$$E(U(x)) = \frac{1}{2}\log 2 + \frac{1}{4}\log 4 + \dots = \sum_{j=1}^{\infty} \frac{1}{2^j}\log(2^j) = \log 2 \sum_{j=1}^{\infty}\frac{j}{2^j} = K \neq \infty$$

It is important to keep the certainty equivalent (CE) in mind, because it is a very powerful way to characterize the individual. The certainty equivalent depends on the lottery proposed and on the person's risk aversion. The CE of a given individual, referred to a bet or to a lottery, can be expressed by the following formula:

$$CE \;\cong\; E(x) - \frac{1}{2}\,A(E(x))\,\mathrm{Var}(x) \qquad (8)$$

where $\cong$ means "roughly" and the variance is the average of the squared deviations from the mean, $\mathrm{Var}(x) = E[(x - E(x))^2]$, which is positive by definition since it is a square. The CE of a given lottery $x$ is roughly equal to the expected value of the lottery minus one half of the Arrow-Pratt coefficient (evaluated at the expected value) times the variance of the lottery. The variance of a given lottery is objective (it is the same for everyone), so how can different subjects have different CEs? Because of the Arrow-Pratt coefficient, which is a subjective element. For a risk-averse subject $A$ is positive, the variance is positive too, and the minus sign in front of them reduces the certainty equivalent by one half of $A \cdot \mathrm{Var}(x)$.
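Two quick numerical illustrations (invented helper names, not from the notes): with log utility the St. Petersburg series converges (its limit is $2\log 2$), and formula (8) gives an approximate certainty equivalent once an Arrow-Pratt coefficient is supplied.

```python
import math

# Sketch: convergence of the St. Petersburg expected utility under log utility,
# and the approximate certainty equivalent of formula (8). The Arrow-Pratt
# coefficient values below are arbitrary example numbers.

def st_petersburg_expected_log_utility(max_tosses=200):
    return sum((0.5 ** j) * math.log(2 ** j) for j in range(1, max_tosses + 1))

print(st_petersburg_expected_log_utility())   # ~1.386..., i.e. 2*log(2): finite
print(2 * math.log(2))

def certainty_equivalent(mean, variance, arrow_pratt):
    """CE ~= E(x) - 0.5 * A(E(x)) * Var(x), formula (8)."""
    return mean - 0.5 * arrow_pratt * variance

# Lottery: 2 euro or 0 euro with probability 1/2 each -> E(x) = 1, Var(x) = 1.
print(certainty_equivalent(1.0, 1.0, arrow_pratt=0.5))    # 0.75 for a risk-averse agent
print(certainty_equivalent(1.0, 1.0, arrow_pratt=-0.5))   # 1.25 for a risk lover
```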
Suppose now that someone is a risk lover and that $E(x)$ is the same for everyone. For him $A$ is negative, so we subtract a negative number and the CE increases by one half of $A \cdot \mathrm{Var}(x)$ in absolute value. The formula therefore also tells us how large the CE is: besides the risk attitude, two further elements characterize choices, the expected value and the variance. Suppose someone (hence the same $A$) is risk averse and faces two different lotteries with the same expected value (e.g. 2 € on heads versus 4 € on heads, with 2 € paid to participate): the CE is smaller for the lottery with the higher variance and larger for the lottery with the smaller variance. A risk-neutral individual does not look at the variance, while a risk-averse individual dislikes high variance. What we do not know yet is what happens if we propose a bet with a high expected value but also a high variance.

4. STOCHASTIC DOMINANCE

If we could characterize the choices of all individuals on the basis of objective elements only, we would not need further speculation; but this is not the case, because of the subjective elements. There is, however, a subset of lotteries on which all individuals choose in the same way, regardless of their risk attitude. How can we judge the riskiness of two lotteries without considering the risk attitude? Suppose there are two possible bets (two discrete density functions, $f$ and $g$) with the following outcomes, where P is the probability of losing 1 or of winning 5 or 10:

         y     P(f)   P(g)
s1      -1     1/4    1/2
s2       5     1/4    1/4
s3      10     1/2    1/4

Let us compute the cumulative distribution functions. The cumulative distribution function (CDF), or simply distribution function, of a real-valued random variable $X$ with a given probability distribution gives the probability that $X$ takes a value less than or equal to $x$; in the continuous case it is the area under the probability density function from minus infinity to $x$ (CDFs are also used to specify the distribution of multivariate random variables). It tells us how likely it is that we will observe an outcome less than or equal to a given amount. For instance, how likely is it to observe an outcome less than or equal to 5? It is the probability of observing 5 plus the probability of observing −1, which is less than 5. The table with the cumulative distribution functions is:

         y     P(f)   CDF(f)   P(g)   CDF(g)
s1      -1     1/4     1/4     1/2     1/2
s2       5     1/4     1/2     1/4     3/4
s3      10     1/2     1       1/4     1

By definition any CDF tends to 1. (Figure: an example of two CDFs in the continuous case.)

[…]

There are two problems:
1. Only a rational individual prefers the function with the lower variance.
2. Not all lotteries can be ordered by stochastic dominance.
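A small illustrative sketch of the table above (not from the notes): it rebuilds the two CDFs and applies the standard first-order stochastic dominance criterion — here assumed, since the preview cuts the formal definition — that $f$ dominates $g$ when $\mathrm{CDF}_f(y) \leq \mathrm{CDF}_g(y)$ at every outcome $y$.

```python
from itertools import accumulate

# Sketch: CDFs of the two lotteries f and g from the table, and a first-order
# stochastic dominance check (f dominates g if its CDF is everywhere below g's).

outcomes = [-1, 5, 10]
p_f = [0.25, 0.25, 0.50]
p_g = [0.50, 0.25, 0.25]

cdf_f = list(accumulate(p_f))   # [0.25, 0.5, 1.0]
cdf_g = list(accumulate(p_g))   # [0.5, 0.75, 1.0]

def first_order_dominates(cdf_a, cdf_b):
    return all(a <= b for a, b in zip(cdf_a, cdf_b))

print(cdf_f, cdf_g)
print(first_order_dominates(cdf_f, cdf_g))   # True: f dominates g
print(first_order_dominates(cdf_g, cdf_f))   # False
```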
5. PORTFOLIO CHOICES

5.1 PORTFOLIO AND CHOICES UNDER RISK

Now we analyse investment choices. We start from the assumption that there is something we can invest in, and that this something has a value (a price) which is sure now. The problem is that this "something" will have a future price which is not sure (it is random). Financial products are standardizations of this risky decision problem. The one making the evaluation evaluates it regardless of time: we do not discount future values, we simply compare values that are sure now with values that are random in the future, leaving the discount factor aside. (The discount factor $DF(T)$ is the factor by which a future cash flow must be multiplied to obtain its present value: for a zero rate, also called spot rate, $r$ taken from a yield curve and a time to cash flow of $T$ years, $DF(T) = \frac{1}{1 + rT}$; if the only available rate is not a zero rate but an annually compounded rate — for instance the yield to maturity of a US Treasury bond with annual coupons — the annually compounded discount factor $DF(T) = \frac{1}{(1+r)^T}$ is used.) The discount factor can be interpreted as the degree of preference of a given subject between value now and value in the future; we expect it to be positive because people typically require something higher in the future in order to give up something now. The other element to think about is the randomness of the value of the thing we want to evaluate.

$$w = \sum_{i=1}^{n} p_i q_i \qquad (11)$$

The crucial point is that this is the value of a given asset now, or rather the wealth of an individual holding given amounts of assets at given (sure) prices; we do not know its future value. For instance, we have 100 € and there are two different shares: one priced 1 € each and another priced 4 € each. The problem is that tomorrow each of our assets will have a future value $\tilde{p}_i$:

$q_i$ — amount of a given share
$\tilde{p}_i$ — random future price
$\tilde{w}$ — future (random) wealth

So our wealth will be random:

$$\tilde{w} = \sum_{i=1}^{n} \tilde{p}_i q_i \qquad (12)$$

Our decision is effectively the choice of the quantity of each asset to buy, knowing the amount of money to invest and knowing that each asset will have a random future value. The decision problem is therefore to maximize the expected utility of our wealth under the budget constraint (a specific formula can be derived for the binary case):

$$\max E[U(\tilde{w})] \quad \text{s.t.} \quad w = \sum_{i=1}^{n} p_i q_i \qquad \text{Optimal portfolio choice (abstracting from time)} \quad (13)$$

Making this kind of choice is called choosing a portfolio (portfolio choice). A portfolio is a set of assets, and an asset itself is a random value, a lottery, hence a probability distribution. This formula depends on the prices: if we scale the prices, the value of the utility changes but the problem remains the same in principle. We do not want to depend on absolute numbers; we want to focus on relative changes, since the numerical change would differ but the percentage change remains the same.

$$\frac{\tilde{p}_i - p_i}{p_i} = \tilde{r}_i$$

is the return of a given asset (a normalization of its price), while $w_i = \frac{p_i q_i}{w}$ is the fraction of wealth invested in asset $i$. (The return of the portfolio is $\tilde{r}_P = \sum_{i=1}^{n} w_i \tilde{r}_i$.) If we observe the return of a given asset, the return can be positive or negative; once the starting price is fixed we can also recover the value the asset has assumed over time from the sequence of returns: there is a one-to-one relation between returns and prices. Instead of working with the problem as stated above, it is convenient to normalize the budget constraint by dividing by $w$:

$$\max E[U(\tilde{w})] \quad \text{s.t.} \quad \sum_{i=1}^{n} \frac{p_i q_i}{w} = \frac{w}{w} = 1 \qquad \text{(Normalized)} \quad (14)$$
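A tiny illustrative sketch of this normalization (the prices, quantities and the realized future prices are invented example numbers): it computes the weights $w_i = p_i q_i / w$, the returns $\tilde{r}_i$, and the resulting portfolio return.

```python
# Sketch: returns r_i = (p~_i - p_i)/p_i, weights w_i = p_i q_i / w, and the
# portfolio return as their weighted sum. All numbers are invented.

prices_now  = [1.0, 4.0]      # p_i
prices_next = [1.1, 3.8]      # one possible realization of the random p~_i
quantities  = [60, 10]        # q_i, so wealth w = 60*1 + 10*4 = 100

w = sum(p * q for p, q in zip(prices_now, quantities))
weights = [p * q / w for p, q in zip(prices_now, quantities)]          # sum to 1
returns = [(pn - p) / p for p, pn in zip(prices_now, prices_next)]

portfolio_return = sum(wi * ri for wi, ri in zip(weights, returns))
print(weights, [round(r, 3) for r in returns], round(portfolio_return, 4))
# [0.6, 0.4] [0.1, -0.05] 0.04
```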
Suppose we have 100.00 € and we invest 60.00 € in Fiat shares, each with a price of 5.00 €. The 60.00 € divided by $w$ is the relative amount we have invested, i.e. the percentage. Saying that we respect the budget constraint is the same as saying that the sum of the fractions of wealth invested in the shares equals 1: $\sum_{i=1}^{n} w_i = 1$. With this formulation we are effectively assuming that:

1. The budget constraint is an equality: the fractions invested in the portfolio must sum to 1 (exactly as in utility maximization under a budget constraint). This follows from the properties of the utility function (continuity and completeness): thanks to them the utility is monotone, and when utility is monotone it is not rational to invest less than the full hundred euro (the utility of investing one hundred euro is higher than that of investing sixty), so the constraint is an equality, not an inequality. Suppose we have two shares X and Y and a budget of 100 €, with X = 1 € and Y = 4 €: if we buy sixty shares of X we will buy ten shares of Y. This assumption holds with two assets.

Regardless of the randomness of the future value, this is exactly the maximization under a budget constraint. For instance, if we put 60% in Chrysler, how should each weight be evaluated? There are two possibilities:

1) $w_i \in (0,1)$: the percentage can be 0, 10, 20, …, 100. Here we assume the amount invested cannot exceed 100 €, so the weight of, e.g., Fiat can be at most 1.
2) $w_i \in (-\infty, +\infty)$: if we invest −50 € in Chrysler (going short, i.e. selling; going long is buying), we have to invest +150 € in Fiat. We can lend or sell shares to increase the amount of money available to invest in something else; we do not necessarily own the shares. Allowing negative weights implicitly allows us not to own any share at all.

The expected utility is a further problem, because it is subjective, even when the wealth, the prices and the number of shares are the same. From now on we assume that we cannot go short.

Now we add another step. Recall that the decision problem is the maximization of the expected utility of our random wealth subject to the budget constraint (the fractions invested in the assets must sum to 1) and to the no-short-selling condition. The expected utility is subjective, which is problematic: even if two agents face the same portfolio choice — the same number and identity of shares, hence the same decision problem — they can have different utilities and therefore different expected utilities. In principle this problem has only an individual solution. However, since the expected utility depends on the risk attitude, and since there are equivalence classes of decision making under risk attitude, we can focus on one specific element already introduced: the certainty equivalent. Problem (11) then becomes the maximization of the CE subject to the budget constraint (in other words, the problem is to find the optimal portfolio weights that maximize the CE and hence the expected utility, which is a function of the expected return and of the variance):

$$\max_{w_i} CE \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i = 1,\ \ w_i \in (0,1), \qquad CE \;\cong\; E(\tilde{r}_p) - \frac{1}{2}A(\cdot)\,\mathrm{Var}(\tilde{r}_p)$$
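A minimal sketch of this optimization (illustrative only: the asset means, standard deviations, correlation and the Arrow-Pratt coefficient are invented, and a simple grid search stands in for the analytical solution): it picks the no-short-selling weight that maximizes $CE \cong \mu_p - \tfrac{1}{2}A\,\sigma_p^2$.

```python
# Sketch: grid search for the weight of asset 1 (w1 in [0, 1], no short selling)
# that maximizes CE ~= mu_p - 0.5 * A * var_p. All parameters are example values.

def best_weight(mu1, mu2, s1, s2, rho, A, steps=10001):
    best = None
    for k in range(steps):
        w1 = k / (steps - 1)
        w2 = 1.0 - w1
        mu_p = w1 * mu1 + w2 * mu2
        var_p = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
        ce = mu_p - 0.5 * A * var_p
        if best is None or ce > best[1]:
            best = (w1, ce)
    return best

w1, ce = best_weight(mu1=0.08, mu2=0.04, s1=0.20, s2=0.10, rho=0.3, A=4.0)
print(round(w1, 3), round(ce, 4))
```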
The CE of a lottery depends on the expected value of the lottery; here it is the CE of the random wealth invested in a portfolio, i.e. of the random return of the portfolio. And not only that: this expected return can be expressed as the expected returns of the single assets weighted by their respective weights.

[…] the risk of the portfolio is the linear combination of the risks of the single assets.

Hypothesis #2: $\rho < 1$.

$$\sigma_p^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2w_1w_2\rho\sigma_1\sigma_2 \;\leq\; w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2w_1w_2\sigma_1\sigma_2 = (w_1\sigma_1 + w_2\sigma_2)^2 \;\Rightarrow\; \sigma_p \leq w_1\sigma_1 + w_2\sigma_2$$

In general, then, the risk of the portfolio is less than the linear combination of the risks of the single assets. Risk hedging through portfolio creation: risk of the portfolio < combination of the risks of the single assets. It means that we can reduce the risk simply by creating a portfolio.

5.2 ANTI-CYCLE STOCK

Cadbury Schweppes plc is an example of a firm that soon realized that chocolate is a typical anti-cyclical investment, in the sense that when there is an economic crash the sales of chocolate increase. This shows that it is useful to evaluate assets because, for example, two assets need not be positively correlated. Consider, for instance, a portfolio made of two assets with a perfect negative correlation (by correlation we mean the normalization of the co-movements of the assets; the argument below holds only with two assets, because we need assets that are perfectly negatively correlated with each other): here $\sigma_p \leq w_1\sigma_1 + w_2\sigma_2$ and $\rho = -1$. In this case the variance of the portfolio is

$$\sigma_p^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 - 2w_1w_2\sigma_1\sigma_2 = (w_1\sigma_1 - w_2\sigma_2)^2 \quad \text{s.t. } w_1 + w_2 = 1$$
$$\Rightarrow\; \sigma_p = w_1\sigma_1 - (1 - w_1)\sigma_2 = w_1\sigma_1 - \sigma_2 + w_1\sigma_2 = w_1(\sigma_1 + \sigma_2) - \sigma_2$$

If we want to minimize the risk, we set this expression equal to 0:

$$w_1(\sigma_1 + \sigma_2) - \sigma_2 = 0 \quad\Rightarrow\quad w_1 = \frac{\sigma_2}{\sigma_1 + \sigma_2}$$

Algebraically, we have shown that if we take into account that the weights must sum to 1, and if there are two such (negatively correlated) assets, we can choose specific values of $w_1$ and $w_2$ that eliminate the risk of the portfolio. In other words, if there are two assets and $\rho_{12} = -1$, the returns are perfectly negatively correlated and it is possible to construct a riskless portfolio (the risk is fully eliminated). In general this method of risk elimination is mostly theoretical, because portfolios are normally built with more than two assets. When there are more than two assets with some covariance among themselves we cannot, in general, eliminate the risk of the portfolio, but we can reduce it. We can construct a portfolio in which we can observe the impact of the risk of each asset on the risk of the portfolio.
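A quick numerical check of the result just derived (example standard deviations, not from the notes): with $\rho = -1$, the weight $w_1 = \sigma_2/(\sigma_1 + \sigma_2)$ drives the portfolio variance to zero.

```python
# Sketch: verify that with perfect negative correlation the weight
# w1 = sigma2 / (sigma1 + sigma2) eliminates the portfolio variance.

def portfolio_variance(w1, s1, s2, rho):
    w2 = 1.0 - w1
    return w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2

s1, s2 = 1.0, 2.0
w1 = s2 / (s1 + s2)                                    # = 2/3
print(w1, portfolio_variance(w1, s1, s2, rho=-1.0))    # 0.666..., ~0.0
```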
Consider two assets. The statistical co-movement we analyse is the covariance of an asset with the portfolio, which depends on the weights with which the portfolio has been constructed and on the way asset one interacts with the elements composing the portfolio (asset 1 and asset 2, i.e. variances and covariance). Suppose, for instance, two perfectly negatively correlated assets, one with standard deviation 1 and the other with standard deviation 2 ($\sigma_1 = 1$, $\sigma_2 = 2$), and compute the covariance between the return of asset one and the return of the portfolio:

$$\sigma_{1P} = w_1\sigma_1^2 + w_2\sigma_{12}$$

while the general expression of the covariance between an asset $i$ and a portfolio $P$ is

$$\sigma_{iP} = \sum_j w_j \sigma_{ij} \qquad (18)$$

What characterizes the weights that allow us to minimize the risk? The weights that minimize the risk are those for which

$$\sigma_{iP} = \sigma_{jP} \qquad \forall\, i, j \in P$$

that is, the variance is minimized if, and only if, the covariance between each asset and the portfolio is the same for all the assets in the portfolio. The logical meaning of this is that the possibility of managing portfolio risk by properly choosing how to invest is an extremely powerful property. The covariance between an asset and the portfolio effectively measures that asset's contribution to the risk of the portfolio: it is not necessarily true that two assets with the same risk bring the same risk to the portfolio, because what matters is their contribution (riskiness and covariance) to the portfolio. Once the covariances of the assets with the portfolio are equalized, the risk is minimized.
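An illustrative sketch of formula (18) and of the equalization condition (the covariance matrix is invented; the closed-form minimum-variance weight for two assets, $w_1 = (\sigma_2^2 - \sigma_{12})/(\sigma_1^2 + \sigma_2^2 - 2\sigma_{12})$, is the standard textbook expression and is assumed here rather than taken from the notes):

```python
# Sketch: asset-portfolio covariances sigma_iP = sum_j w_j * sigma_ij, and a check
# that at the minimum-variance weights these covariances coincide. Example numbers.

s1, s2, s12 = 0.04, 0.09, 0.01          # variances of the two assets and their covariance
cov = [[s1, s12], [s12, s2]]

def asset_portfolio_cov(w, cov):
    return [sum(w[j] * cov[i][j] for j in range(2)) for i in range(2)]

# Standard two-asset minimum-variance weight (assumed, not derived in the notes):
w1 = (s2 - s12) / (s1 + s2 - 2 * s12)
w = [w1, 1 - w1]

print(w)                                # ~[0.727, 0.273]
print(asset_portfolio_cov(w, cov))      # both covariances ~0.0318: equal, as claimed
```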
6. EFFICIENT PORTFOLIO AND SET OF CHOICE

Considering the choice among risky assets, the elements that affect the choice are the average return and the riskiness of the asset. We can represent the choice problem in a diagram with the risk (standard deviation) on the x-axis and the return on the y-axis. We expect rational individuals to have monotone utility (more return is better than less). The return of the portfolio depends on the returns of the assets themselves; we can do nothing about those returns, we can only construct the portfolio so as to obtain the highest return we can. It is possible to increase the return, but only by increasing the risk (which is bad for a risk-averse individual). We have to choose the weights not just to minimize the risk, nor just to maximize the return, but to maximize the expected utility. How? Through the CE. Suppose we have two assets; we face the following two relations:

1. The return of the portfolio is, in the two-asset case, the linear combination of the returns of the assets: $\mu_p = \mu_1 w_1 + \mu_2 w_2$ s.t. $w_1 + w_2 = 1$;
2. The measure of the risk: $\sigma_p \leq w_1\sigma_1 + w_2\sigma_2$.

How many portfolios can we construct? To construct infinitely many portfolios we must be able to form any possible fraction, which in reality cannot happen (for instance, we cannot buy 0.0003 € of a Fiat share if the share's nominal value is 6 €). This is the problem of natural numbers. Suppose we have two assets and there is no problem of continuous fractions: even with two discrete elements we can construct infinitely many portfolios.

[…]

We can also compare portfolios. If two portfolios have the same risk but different returns, say $P_2 \succsim P_3$, then regardless of the riskiness the return of $P_2$ is higher than that of $P_3$, and no one will choose $P_3$. (Figure: portfolios $P_2$ and $P_3$ in the risk-return plane.) All the portfolios below the frontier are stochastically dominated by portfolios on the frontier, so we can forget about them and keep only the border of the convex set: even taking $\alpha$ and $\beta$, $\alpha$ dominates $\beta$. There is no need to look at the whole set of admissible portfolios. All the portfolios below the frontier, starting from the minimum-variance portfolio, are stochastically dominated; thus we focus only on the part of the admissible set that lies on the frontier starting from the minimum-variance portfolio, which is the set of efficient portfolios. The problem of utility maximization subject to portfolio construction can be written as

$$\max E U(p_i) \quad \text{s.t.} \quad p_i \in P^a$$

Remembering that maximizing the expected utility can be described as maximizing the CE, which depends on the return and on the variance (two objective elements), we have

$$\max U(\mu, \sigma^2) \quad \text{s.t.} \quad p_i \in P^a$$

We can represent this in two different ways: 1) analytically, with the explicit maximization of a two-asset portfolio; 2) graphically.

$$E(U(x)) \cong E(x) - \frac{1}{2}A(\cdot)\mathrm{Var}(x), \quad \text{i.e.} \quad E(U(p_i)) \cong \mu_{p_i} - \frac{1}{2}A(\cdot)\sigma_{p_i}^2 \;=\; U(\mu, \sigma^2)$$

To construct the indifference curves graphically we have to increase the riskiness together with the return. If we give a risk-averse individual more return, his utility rises; to keep him on a new pair of return and risk that yields the same level of utility as $(\mu_1, \sigma_1)$, more return must be accompanied by more risk. The more risk we add, the more his utility decreases, so for further increases in risk we must give him increasing ("squared") amounts of return to keep him at the same level of utility (by definition, all points of an indifference curve represent pairs of return and risk that yield the same level of utility). Since in the utility function the return enters positively and the risk enters as a square, the indifference curves are increasing and convex, and they show that the utility level increases moving in the north-west direction (because here risk is bad for the subject).

Consider two risk-averse individuals: two different curves, two different individuals. By transitivity B and C cannot be equivalent, so crossing indifference curves must refer to two different individuals: the indifference curves of a single individual do not cross. The indifference curves of a risk-averse individual in the risk-return space tell us three things:

1. from the second derivative, whether the individual is risk averse or not;
2. the direction, in terms of both risk and return, in which the utility increases;
3. the more rigid they are, the more risk averse the individual is.

Now let us draw the curves of a risk lover. For a risk lover the utility is increasing in risk: risk is a good for him, so we have to give him less risk to keep him at the same utility level (or less return if we give him more risk). Risk enters the utility function as a square, because what matters is the variance: the more risk we give him, the better off he is, roughly with the square of the increase, so we must take return away to keep him at the same level of utility for a given riskiness. Therefore the curve is concave: indifference curves for risk lovers are decreasing, concave functions. If he had both more return and more risk he would definitely be better off: the utility level increases going north-east, because in this case both risk and return are goods. The less risk loving he is, the smaller the change in return we have to take away, so the flatter and more elastic the curve (the more risk loving the individual, the more rigid the curve, otherwise it is elastic); the more risk loving the individual, the larger $A$ is in absolute value. The remaining case is the risk-neutral individual.
For the risk-neutral individual the return is good and the risk is irrelevant, so $A = 0$. If he has a given return $\mu_1$ and a given risk $\sigma_1$, and we move him to risk $\sigma_2$, the level of return that keeps him indifferent is still $\mu_1$: the indifference curve is a straight horizontal line (as in the figure). The indifference curve of a risk-neutral individual is infinitely elastic.

Now we have the tools to maximize the expected utility subject to the portfolio being admissible and efficient. Consider the portfolio $m_1$ that minimizes the variance, and suppose that individual A is risk averse (but not too much).

[…]

7. CAPITAL ASSET PRICING MODEL

7.1 THE PROBLEM OF CHOICE ON THE BASIS OF A RISKLESS ACTIVITY

Until now we have looked at the decision of a rational individual under the assumption that the set of admissible portfolios is the convexification of the original elementary assets, all of which are risky. Now we remove that assumption and consider the existence of assets without risk. Their return is constant (expected return = return) regardless of the realization of the states' space, i.e. it is the same for every possible realization of the state space.

1. How many riskless assets do we expect to see in an economy? Suppose there are two riskless assets, A and B (see figure). If someone were to buy these assets, why should he choose A rather than B? If a riskless asset exists, it has to be unique: if another riskless asset existed it would have to offer the same return, and if it offered a higher return it would undercut the market and eliminate the first one.

2. Which riskless assets exist in the real economy? There are several types of public-sector bonds (BOT, CCT, etc.). In theory, if we invest in Bunds rather than BOTs we will, depending on the maturity and the interest rate, receive our capital back at a certain sure date. In reality the various assets we observe are not riskless, because there are several types of risk (risk has several natures), which makes it extremely difficult (or rather impossible) to identify a truly riskless asset. In the model the realization of the different states of the world occurs in a timeless way, while in reality we have to consider time, and over a lapse of time things can change. Nevertheless, we assume we are able to identify one riskless asset.

3. How does the analysis change when there is a riskless asset? Suppose there are two assets: one riskless asset with return $R_0$ and one risky asset with expected return $\mu_1$.

$$\mu_p = w_0 R_0 + w_1 \mu_1$$
$$\sigma_p^2 = w_0^2\sigma_0^2 + w_1^2\sigma_1^2 + 2 w_1 w_0 \sigma_{01} = w_1^2\sigma_1^2$$

because $\sigma_0^2 = 0$ and there is no covariance (if one of the two does not move around its mean, neither does their co-movement), hence $\sigma_p = w_1\sigma_1$. From the system

$$\begin{cases} \mu_p = w_0 R_0 + w_1\mu_1 \\ \sigma_p = w_1\sigma_1 \\ w_0 + w_1 = 1 \end{cases} \qquad\Rightarrow\qquad w_0 = 1 - w_1, \quad w_1 = \frac{\sigma_p}{\sigma_1}$$

we obtain the relationship between the return and the risk of any such portfolio $P$. Substituting,

$$\mu_p = (1 - w_1)R_0 + w_1\mu_1 \;\;\Rightarrow\;\; \mu_p = R_0 + (\mu_1 - R_0)\frac{\sigma_p}{\sigma_1}$$

This equation describes the set of all possible portfolios we can construct by mixing asset 1 and asset 0; it is the portfolio result when we have one risky asset and one riskless asset. At the point $w_1 = 0.5$ we have split our wealth half in the riskless asset and half in the risky one, so the riskiness of our portfolio is just half of the risk of the risky asset.
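A short numerical illustration of the relation just derived (the values of $R_0$, $\mu_1$ and $\sigma_1$ are invented): each mix of the riskless and the risky asset lies exactly on the line $\mu_p = R_0 + (\mu_1 - R_0)\sigma_p/\sigma_1$.

```python
# Sketch: portfolios obtained by mixing a riskless asset (return R0) with one
# risky asset (mean mu1, standard deviation sigma1). Example parameter values.

R0, mu1, sigma1 = 0.02, 0.08, 0.20

def portfolio_on_line(w1):
    """w1 = fraction of wealth invested in the risky asset."""
    mu_p = (1 - w1) * R0 + w1 * mu1
    sigma_p = w1 * sigma1
    return mu_p, sigma_p

for w1 in (0.0, 0.5, 1.0):
    mu_p, sigma_p = portfolio_on_line(w1)
    # check against mu_p = R0 + (mu1 - R0) * sigma_p / sigma1
    print(w1, round(mu_p, 4), round(R0 + (mu1 - R0) * sigma_p / sigma1, 4))
```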
How does the return of the portfolio grow with its risk when the riskless asset is in the portfolio? It increases at a rate that depends on the risk of the risky asset, $\sigma_1$, and on the extra return of the risky asset with respect to the riskless one.

Suppose we invest in a portfolio containing some risky assets and one riskless asset; we can invest some amount in $R_0$ and some in a risky portfolio B:

$$\mu_p = R_0 + (\mu_B - R_0)\frac{\sigma_p}{\sigma_B}$$

Mixing the riskless asset with one risky asset or with a portfolio of risky assets is exactly the same thing; what matters is selecting the optimal one. The security market line (SML; in much of the literature this line in mean-standard deviation space is called the capital market line) is the linear combination of the riskless asset and a portfolio: the set of all possible portfolios we can construct by mixing the riskless asset with a given risky portfolio. The SML dominates the other portfolios, but we could select any risky portfolio to build it with; T, the tangency portfolio, gives the highest SML. The optimal portfolio with which to build the SML is the one tangent to the efficient frontier. We can therefore represent the set of possible portfolios, when one riskless asset exists, by the following steps:

- look at the risky assets and mix them with the riskless one;
- choose the optimal portfolio obtainable through this mixing, which is the tangency portfolio.

Consider now the problem of the optimal portfolio choice. When we assume that a riskless asset exists we can construct the SML, so the choice of the optimal portfolio for each individual becomes a two-step problem: identify the tangency portfolio among the many we can construct, and then choose the point on the SML. All rational agents, regardless of their risk attitude, will choose the same risky portfolio; this does not mean that they will invest all their capital in it. What about the second step? How do decision makers choose once they have all constructed the same SML? They decide on the basis of their own utility function, maximizing their expected utility (return, risk and risk attitude). Graphically, there is a set of efficient portfolios $P^a$, T is the tangency portfolio, and A is the indifference curve of a risk-averse individual; A's choice results from both steps: first the tangency portfolio is computed, and then the individual chooses how much of his own wealth to devote to the purchase of T and how much to the riskless asset. For instance, the shares of $\sigma_A$ and $\sigma_T$ will be […]
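A minimal sketch of the first step described above (invented parameters, and a grid search rather than the analytical tangency condition): among the two-asset risky portfolios, T is the one whose line through $R_0$ has the steepest slope $(\mu_p - R_0)/\sigma_p$.

```python
# Sketch: find the tangency portfolio T of two risky assets by maximizing the
# slope (mu_p - R0) / sigma_p over a grid of weights. All numbers are invented.

R0 = 0.02
mu = (0.06, 0.10)
sigma = (0.15, 0.25)
rho = 0.2

def risk_return(w1):
    """Mean and standard deviation of the portfolio with weight w1 on asset 1."""
    w2 = 1.0 - w1
    mu_p = w1 * mu[0] + w2 * mu[1]
    var_p = (w1 * sigma[0])**2 + (w2 * sigma[1])**2 + 2 * w1 * w2 * rho * sigma[0] * sigma[1]
    return mu_p, var_p ** 0.5

best_slope, best_w1 = -1.0, None
for k in range(1001):                      # grid over w1 in [0, 1] (no short selling)
    w1 = k / 1000
    mu_p, sigma_p = risk_return(w1)
    slope = (mu_p - R0) / sigma_p
    if slope > best_slope:
        best_slope, best_w1 = slope, w1

print(best_w1, round(best_slope, 4))       # weight of asset 1 in T and the slope of the SML
```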
[…]

D) We have to be able to go short (to sell short).

If we remove this last assumption, the separation result no longer holds, and as a consequence T would not be the optimal choice for everyone: people might choose some other portfolio. Graphically: a risk-averse individual might still decide to hold a portfolio constructed using T, but for someone who is only slightly risk averse the separation result continues to hold only if he can go short. If he cannot go short, he cannot move up along the line beyond T (see the figure). If people cannot borrow infinitely, the separation result no longer holds: when short sales are not possible, the set of choices is no longer the whole SML but the SML up to T and, from T onwards, the efficient frontier. A slightly risk-averse individual will then not choose F but D: portfolio F is no longer his choice, the choice falls on portfolio D. For someone who is only slightly risk averse it would not be rational to construct the same portfolio as someone who is extremely risk averse; he will choose another portfolio, the one on the frontier. F lies on an indifference curve higher than the one passing through D.

Not all individuals will construct T: someone who is not very risk averse will choose another portfolio. By removing the possibility of going short we reduce the utility level of some individuals without increasing anyone else's: we destroy Pareto efficiency (we force the problem to be solved in a non-efficient way). The possibility of going short exists because, given all the other assumptions, ruling it out induces inefficiency.

Other assumptions: the risky assets are not affected at all by the choices, they are data when the analysis starts (there is no endogeneity), and those assets are traded in perfect competition (agents are price takers). If the market has these characteristics, then T is a portfolio that is extremely easy to construct, because it is the market portfolio. The market portfolio is the portfolio in which every asset traded in the market enters with a weight equal to the ratio of the market value of that asset to the overall market capitalization (the market value of all assets). Considering the set of all assets, the value of the market is the sum of the values of all assets, and the ratio value(asset)/value(market) is the weight with which each asset enters the market portfolio. If all the previous assumptions hold, the tangency portfolio is the market portfolio. This conclusion is called the Capital Asset Pricing Model (CAPM).

There are some consequences of the CAPM, but before looking at them let us suppose there are two individuals with wealth $w_A = 100$ and $w_B = 90$ and two assets, with the tangency portfolio made half of asset one and half of asset two ($w_1^T = 0.5$, $w_2^T = 0.5$) and prices $p_1 = 5$, $p_2 = 1$. B is an investor who chooses exactly T; A is a risk-averse individual who puts 50% in portfolio T and 50% in the riskless asset.

            q_i^A   q_i^B   p_i   q_i total   p_i q_i   Σ p_i q_i   p_i q_i / Σ p_i q_i
asset 1       5       9      5       14          70        140           70/140
asset 2      25      45      1       70          70        140           70/140

The table shows that the tangency portfolio is exactly the market portfolio. We now conclude the CAPM discussion by showing that one of its consequences is that we can very simply verify whether the values and the returns of the assets observed in the market correspond to those we would expect if the CAPM were correct. As a consequence of the CAPM we expect everybody to hold the market portfolio: no one holds assets in his own portfolio with weights different from the market weights. This leads to a very specific formula for the returns of the assets, and we can check in the market whether the CAPM is correct; if it is not, we can say that the market does not correspond to the theoretical analysis. The tangency portfolio is the only one that is at the same time on the SML and on the frontier of the portfolios, so it is the portfolio that minimizes the variance subject to being at the same time on the SML and on the frontier:
$$\min \sigma_T^2 \quad \text{s.t.} \quad w_0 + w_1 + w_2 = 1$$

(subject to the constraint that the portfolio is made of one riskless and two risky assets and that the portfolio is on the SML)

$$\mu_T = w_0 R_f + w_1\mu_1 + w_2\mu_2, \qquad \min \sigma_T^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2w_1w_2\sigma_{12}$$

We can simplify this problem by remembering that the weights have to sum to one.

[…]

The procedure just described is called the Test of Efficiency, a strong method to evaluate the market. Unfortunately this method suffers, most of the time, from a deviation: if we want to test the CAPM, its assumptions have to hold. The variance may or may not change over time: when it is stable it is called homoscedastic, when it is not it is called heteroscedastic, and typically the variance is heteroscedastic. There is no objective measure of the variance of the market — there would be one if the variance were homoscedastic — because things change over time. Moreover, the very definition of the boundaries of the market is disputable, and that definition affects the calculation of returns and variances.

8. THE GENERAL EQUILIBRIUM

The problem of the risky-asset decision does not depend only on the individual attitude, which plays a marginal role: everybody who has to make a decision will choose the same portfolio — not the same share of wealth invested in it, but the same portfolio. This limits to some extent the practical consequences of the model developed. The model is suitable if we think about how an individual should allocate his own wealth in a risky situation. However, there is a problem: if everybody makes the same decision, who sells? In particular, if the tangency portfolio is the market portfolio, everybody should be willing to buy the market portfolio; but someone has to sell in order for others to buy. And where does the wealth we want to invest come from? There is no answer to these questions within the analysis made so far, which is a partial equilibrium analysis: the model represents partial equilibrium situations, i.e. situations in which each agent is sufficiently small relative to the market that we do not really care about his impact. The problem can be overcome by assuming that wealth is endogenous to the problem: wealth is what people effectively have as an endowment, and individuals can exchange their endowments with one another (if the original endowment does not correspond to their optimal allocation, they can exchange with someone else).

We now analyse the general equilibrium using the Edgeworth Box. The simplest situation we can represent in the Edgeworth Box has two individuals (A and B), two goods, and a certain endowment of good one and good two. Assume also that there is no risk and that the wealth comes from the original endowments of the two individuals, which can be exchanged. The Edgeworth Box is the simultaneous solution of the utility maximization problems of the two individuals subject to their budget constraints:

$$\begin{cases} U^A(c_1, c_2), & p_1 c_1^A + p_2 c_2^A = p_1 e_1^A + p_2 e_2^A \\ U^B(c_1, c_2), & p_1 c_1^B + p_2 c_2^B = p_1 e_1^B + p_2 e_2^B \end{cases} \qquad \max U^i(\cdot) \ \text{s.t. budget constraint}, \ \forall i \in \{A, B\}$$

with the feasibility conditions (where $e$ denotes the endowment and $\bar{c}_1$, $\bar{c}_2$ the total amounts of the two goods)

$$c_1^A + c_1^B = \bar{c}_1, \qquad c_2^A + c_2^B = \bar{c}_2$$

The situation indicated by the blue lines in the figure is not Pareto-efficient (Pareto efficiency is a situation in which it is not possible to increase the utility of someone without reducing the utility of someone else): the original endowment is not a Pareto-efficient allocation. The movement from the original endowment to the new allocation depends on the ratio $-\frac{p_1}{p_2}$.
We maximize the utility function under the budget constraint through the Lagrangian:

$$\mathcal{L} = U^i(c_1^i, c_2^i) - \lambda\,(p_1 c_1^i + p_2 c_2^i - p_1 e_1^i - p_2 e_2^i), \qquad i = A, B \qquad (19)$$

(the term multiplied by $\lambda$ is the budget constraint of A and B: the expenditure individual $i$ can make to buy goods 1 and 2 and consume $c_1^i$ and $c_2^i$ depends on the amount of wealth he can obtain by giving up some of his endowment $e_1^i$ and $e_2^i$). We maximize, for $i = A, B$, the utility function subject to the budget constraint through the Lagrangian. The first-order conditions $\frac{\partial\mathcal{L}}{\partial c_1^i} = 0$, $\frac{\partial\mathcal{L}}{\partial c_2^i} = 0$, $\frac{\partial\mathcal{L}}{\partial\lambda} = 0$ give

$$\begin{cases} \dfrac{\partial U^i(c_1^i, c_2^i)}{\partial c_1^i} - \lambda p_1 = 0 \\[2mm] \dfrac{\partial U^i(c_1^i, c_2^i)}{\partial c_2^i} - \lambda p_2 = 0 \\[2mm] p_1 c_1^i + p_2 c_2^i = p_1 e_1^i + p_2 e_2^i \end{cases} \quad\Rightarrow\quad \begin{cases} \dfrac{\partial U^i/\partial c_1^i}{\partial U^i/\partial c_2^i} = \dfrac{\lambda p_1}{\lambda p_2} \\[2mm] p_1 c_1^i + p_2 c_2^i = p_1 e_1^i + p_2 e_2^i \end{cases}$$

Hence the MRTS (the marginal rate of substitution between the two goods) equals the relative price:

$$MRTS = -\frac{p_1}{p_2}$$

We obtain a single condition saying that utility is maximized for both individuals, through the exchange, when the MRTS is equal across all individuals. Each individual can increase his utility by exchanging, and the exchange is over when $MRTS_A = MRTS_B$ (tangency of the indifference curves in the figure above). What the graph shows is summarized by the general theorems of welfare economics:

1. any allocation that corresponds to an exchange is Pareto-efficient (first theorem);
2. given an original endowment, the individuals can reach any possible allocation (second theorem).

In the Edgeworth box there are infinitely many such allocations (the contract curve). Any point of the contract curve can be reached by exchange, provided the endowment is allocated appropriately (so first the endowment has to be redistributed to move to another point).

8.1 CHOICES OF FINANCIAL ACTIVITIES UNDER UNCERTAINTY

We know that $c_1$ is the consumption of good one and $c_2$ that of good two. We could also suppose that those are not goods but states of the world (so $c_1$ is consumption in $s_1$ and $c_2$ consumption in $s_2$). In this framework states one and two would be sure. What changes if we suppose they are risky? Suppose there are $k$ states of the world and $l$ possible risky assets. We can represent this situation in a matrix containing all the possible consequences of the individual decisions:

$$\begin{bmatrix} x_{1,1} & \cdots & x_{1,k} \\ \vdots & \ddots & \vdots \\ x_{l,1} & \cdots & x_{l,k} \end{bmatrix} \qquad \text{Matrix of the financial assets (financial matrix, } l \times k \text{)}$$

Suppose the assets have these characteristics:
- asset one yields 1 in state 1 and 0 otherwise;
- asset two yields 1 in state 2 and 0 otherwise.

An asset that, in general, pays 1 unit (whatever the unit of measurement) in a given state and zero otherwise is called an Arrow Security (AS). The Arrow security contingent on state $i$ is the asset that pays 1 in state $i$ and zero otherwise; if there are $k$ states there are $k$ Arrow securities, each contingent on one of those states. Suppose there is an asset that pays 3 in state 1 and 4 in state 2: it is not an Arrow security, but it can be represented by Arrow securities. This is the key property of Arrow securities: by mixing them we can replicate any possible outcome by constructing a proper portfolio of AS. So we can simplify the matrix by assuming that Arrow securities exist.
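A small sketch of the replication property and of completeness (invented payoffs; the rank criterion below mirrors the completeness condition stated next, i.e. as many independent payoff rows as states):

```python
import numpy as np

# Sketch: the asset paying 3 in state 1 and 4 in state 2 is replicated by 3 units
# of the Arrow security for state 1 plus 4 units of the one for state 2; a payoff
# matrix is complete when its rank equals the number of states (columns).

AS = np.eye(2)                       # rows: Arrow securities, columns: states
asset = np.array([3.0, 4.0])         # payoffs of the asset in the two states

holdings = np.array([3.0, 4.0])      # units of AS1 and AS2
print(np.allclose(holdings @ AS, asset))     # True: the AS portfolio replicates the asset

payoff_matrix = np.array([[3.0, 4.0],
                          [1.0, 0.0]])       # two linearly independent assets, two states
print(np.linalg.matrix_rank(payoff_matrix) == payoff_matrix.shape[1])   # True: complete
```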
A matrix of financial securities is complete if the number of linearly independent rows is at least equal to the number of columns. For instance, if the matrix has three rows and two columns, and two rows are […] the riskless state. So the relative price of the AS contingent on state $i$ has to be equal to the MRTS between consumption in state $i$ and consumption in the riskless state; the only difference is that the MRTS for state $i$ does not depend only on the probability of that state of the world. The price of the AS with respect to the numéraire is effectively equal to the marginal utility of consumption in state 1 times the probability of state 1, because state 1 is a risky state and we are not sure that it is going to occur. If state 1 occurs we obtain some marginal utility, which, weighted by the probability that state 1 occurs, has to be equal, in terms of the marginal utility of the riskless state, for all individuals. This is the same solution found with the Edgeworth Box, with the difference that in the box we do not normalize and the probability of "state 1" is 1. It is possible to exchange endowments of AS in order to maximize utility and obtain a Pareto-efficient allocation, even in the presence of risk; the only difference is that, being a risky situation, it is the probability-weighted MRTS that has to be equalized. In order to have $\varphi_1 = \varphi_2$, preferences have to be continuous and independent as well as complete and transitive. The strongest assumption here is that what individuals exchange are Arrow securities (because typically, when we want to consume in a given state, there will be $l$ possible assets yielding something in that state). For instance, going back to the first example of tossing a coin — someone tosses a coin and we win 2 if heads comes up and 1 if tails comes up — this is not an AS, but we can construct a set of AS: holding 2 units of the AS contingent on heads and 1 unit of the AS contingent on tails, we receive 2 if heads comes up and 1 if tails comes up. So any asset can be evaluated by looking at a specific portfolio of Arrow securities.

Let us start from the general problem: we want to maximize the expected utility of each individual. Suppose there are two risky states and one riskless state, as well as two assets. The quantities of assets exchanged will determine the relative prices.

$$\max_{c_0, c_1, c_2} U^i(c_0) + \pi_1 U^i(c_1) + \pi_2 U^i(c_2) \quad \text{s.t.} \quad (c_0 - e_0) + p_1(c_1 - e_1) + p_2(c_2 - e_2) = 0$$

(Budget constraint: we want to consume in $s_0$, $s_1$ and $s_2$, but in $s_1$, for instance, the only way to consume is to buy some amount of assets yielding a given possibility of consumption.) Here $(c_0 - e_0)$ is the excess demand in the riskless state, $p_i$ is the price of asset $i$ and $e_i$ is the endowment. We buy assets depending on the consumption they allow in each state, i.e. depending on their yield. We have two assets here: asset 1 has yields $x_{1,1}$ and $x_{1,2}$, while asset 2 has yields $x_{2,1}$ and $x_{2,2}$, so that, denoting by $b_1$ and $b_2$ the quantities of the two assets bought,

A) $c_1 = b_1 x_{1,1} + b_2 x_{2,1}$
B) $c_2 = b_1 x_{1,2} + b_2 x_{2,2}$

The matrix $\begin{bmatrix} x_{1,1} & x_{1,2} \\ x_{2,1} & x_{2,2} \end{bmatrix}$ is the matrix made by the two assets. With $x_{2,1} = 0$, $x_{1,2} = 0$, $x_{1,1} = 1$, $x_{2,2} = 1$ (Arrow securities) we effectively have $c_1 = b_1$ and $c_2 = b_2$. This is the general expression; we maximize the Lagrangian with respect to the three choice variables:

$$\max_{c_0, b_1, b_2} \mathcal{L} = U^i(c_0) + \pi_1 U^i(b_1 x_{1,1} + b_2 x_{2,1}) + \pi_2 U^i(b_1 x_{1,2} + b_2 x_{2,2}) - \lambda\big((c_0 - e_0) + p_1(b_1 - e_1) + p_2(b_2 - e_2)\big)$$
The first-order conditions with respect to $c_0$, $b_1$ and $b_2$, using the chain rule $\frac{\partial U^i(\cdot)}{\partial b_1} = \frac{\partial U^i(\cdot)}{\partial c_1}\frac{\partial c_1}{\partial b_1}$ and $\frac{\partial U^i(\cdot)}{\partial b_2} = \frac{\partial U^i(\cdot)}{\partial c_2}\frac{\partial c_2}{\partial b_2}$ (and noting that $\frac{\partial c_1}{\partial b_1} = x_{1,1}$, and similarly for the other terms), are

$$\frac{\partial U^i}{\partial c_0} = \lambda, \qquad \begin{cases} \pi_1\dfrac{\partial U^i(\cdot)}{\partial c_1}x_{1,1} + \pi_2\dfrac{\partial U^i(\cdot)}{\partial c_2}x_{1,2} = \lambda p_1 \\[2mm] \pi_1\dfrac{\partial U^i(\cdot)}{\partial c_1}x_{2,1} + \pi_2\dfrac{\partial U^i(\cdot)}{\partial c_2}x_{2,2} = \lambda p_2 \end{cases}$$

so that

$$p_1 = \frac{\pi_1\frac{\partial U^i}{\partial c_1}x_{1,1} + \pi_2\frac{\partial U^i}{\partial c_2}x_{1,2}}{\frac{\partial U^i}{\partial c_0}}, \qquad p_2 = \frac{\pi_1\frac{\partial U^i}{\partial c_1}x_{2,1} + \pi_2\frac{\partial U^i}{\partial c_2}x_{2,2}}{\frac{\partial U^i}{\partial c_0}}$$

These are the general expressions for the prices of the two assets when the assets are general assets; they include Arrow securities as the special case in which the asset in question is an AS. In general we cannot ignore state 1 or state 2 for asset 1 or asset 2, because a given asset has some yield in state 1 and some yield in state 2, but we can simplify through Arrow securities. The relative price of a general asset depends on the MRTS because at the numerator it has the probability-weighted marginal utilities of consumption in the risky states ($\frac{\partial U^i}{\partial c_1}$ is the marginal utility in the state): in each state the asset gives us an amount of euro equal, for instance, to $x_{1,1}$ in $s_1$; that amount gives us a certain utility, and what matters for us is the marginal utility of that money in that state; but we cannot be sure that the state will occur, there is a certain probability. So we buy an asset according to the marginal utility it yields in each state weighted by the probability of that state, and in equilibrium the expected marginal yield of the asset has to be equal for all individuals. We obtain an equilibrium that is more general than the equilibrium for Arrow securities. Note that

$$\varphi_1 = \frac{\pi_1\,\partial U^i(\cdot)/\partial c_1}{\partial U^i(\cdot)/\partial c_0}$$

is the price of the AS contingent on $s_1$. Introducing Arrow securities we conclude that in equilibrium there will be just one price if the market is perfectly competitive; and if the market is perfectly competitive, the prices of the AS guarantee that every allocation reached is Pareto-efficient (the price is the same for all individuals: $\varphi_1^a = \varphi_1^b = \dots$).

$$p_1 = \frac{\pi_1\frac{\partial U^i(\cdot)}{\partial c_1}x_{1,1} + \pi_2\frac{\partial U^i(\cdot)}{\partial c_2}x_{1,2}}{\frac{\partial U^i(\cdot)}{\partial c_0}}$$

This is the equation for the price of an asset in the case of two states, but it is also more general (it holds for any number of states). If it holds for all individuals it corresponds to the following linear combination:

$$p_1 = \varphi_1 x_{1,1} + \varphi_2 x_{1,2}, \qquad p_2 = \varphi_1 x_{2,1} + \varphi_2 x_{2,2}$$

This means that the value of an asset simply reduces to the price of the AS contingent on a state times the asset's yield in that state, summed with the same product for the other state. This is the reason why we introduced Arrow securities: we can express the value of any asset as the linear combination of the prices of the AS that replicate the returns of that asset. It is effectively the same condition we saw in the riskless case. The value of an asset does not depend directly on the probability of each state; it depends on the AS contingent on those states. The value of any asset can be replicated by constructing a portfolio of Arrow securities: we can build a portfolio of AS whose return is equivalent to the return of the asset, and the value of that portfolio equals the value of the asset. We can therefore say that the value of an asset equals the value of a replicating portfolio (a portfolio of AS that has the same yield as the asset we are interested in). Starting again from the general expression, and in order to see when we can equalize the MRTS, we rewrite it in matrix form (two assets):

$$\begin{bmatrix} p_1 \\ p_2 \end{bmatrix} = \begin{bmatrix} x_{1,1} & x_{1,2} \\ x_{2,1} & x_{2,2} \end{bmatrix} \begin{bmatrix} \dfrac{\pi_1\,\partial U^i(\cdot)/\partial c_1}{\partial U^i(\cdot)/\partial c_0} \\[3mm] \dfrac{\pi_2\,\partial U^i(\cdot)/\partial c_2}{\partial U^i(\cdot)/\partial c_0} \end{bmatrix}$$
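A short numerical sketch of this matrix relation (all payoffs and prices are invented): given the payoff matrix and observed asset prices, the implied Arrow-security prices $\varphi$ solve $p = X\varphi$, and any further asset can then be priced as a linear combination of the $\varphi$'s — the replicating-portfolio idea just described.

```python
import numpy as np

# Sketch: back out state (Arrow-security) prices from asset prices and payoffs,
# then price another asset by linear combination. Example numbers only.

X = np.array([[3.0, 4.0],      # asset 1: pays 3 in state 1, 4 in state 2
              [1.0, 0.0]])     # asset 2: pays 1 in state 1, 0 in state 2
p = np.array([3.2, 0.6])       # observed asset prices

phi = np.linalg.solve(X, p)    # Arrow-security prices phi_1, phi_2
print(phi)                     # [0.6, 0.35]

new_asset = np.array([2.0, 5.0])    # pays 2 in state 1 and 5 in state 2
print(new_asset @ phi)              # its implied price: 2.95
```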
A company's value can depend on its financial composition only when the market is not perfectly competitive or it is not complete: the more the company's value depends on its financial composition, the further we are from perfect competition in the financial market.

What can we do when there are two states of the world and a single financial activity whose return is

$$[\,x_{1,1} \quad 0\,]\,?$$

There are two states, so the matrix is not complete. This activity has a return in state 1 and a zero return in state 2, so there is no possibility of transferring consumption from state 1 to state 2. Suppose we make the following contract with someone: an individual who wants to transfer consumption from state 1 to state 2 is willing to give up some possibility of consumption, for example by paying a premium in state 1, in order to receive some money if state 2 happens. For instance, someone could offer an insurance against state 2: he offers to pay a recovery M if state 2 happens, in exchange for an insurance premium higher than M (e.g. 2M).

Insurance premium = 2M; recovery of this insurance = M. The insured position then pays

$$[\,x_{1,1} - 2M \quad M\,].$$

If we take the insurance in order to get more consumption in state 2, we pay 2M as the insurance premium in order to receive M in state 2: if state 1 happens we obtain $x_{1,1} - 2M$, and if state 2 happens we obtain M. The matrix of financial activities becomes

$$\begin{bmatrix} x_{1,1} & 0 \\ x_{1,1} - 2M & M \end{bmatrix}$$

in which the first row is the original activity and the second row is the insurance written on its outcome. If it is possible to write such an insurance, protecting oneself against the possibility of getting zero in state 2 by giving up some money in state 1, then the insurance looks like a risky contract, like an asset, and the matrix made of the original activity and the insurance is complete, because the insurance is linearly independent from the original asset. By writing an insurance one can construct a set of financial activities (independent from the first) which is complete, so an efficient allocation is attainable for all (everybody can gain from the insurance). An insurance is typically a contract that one can design in order to make the market complete.

As another instance, consider the following structure: two states and one asset, so the market is not complete,

$$[\,x_{1,1} \quad x_{1,2}\,] \quad \text{with } x_{1,2} > x_{1,1}.$$

Given that this asset exists, we can create a new asset: the possibility of acquiring the original asset by paying a predetermined fee. If we decide to exercise this possibility, in state 1 we obtain $x_{1,1}$ but we pay the fee, and in state 2 we obtain $x_{1,2}$ and pay the same fee. If the fee is exactly $x_{1,1}$, the new contract pays $x_{1,1} - x_{1,1} = 0$ in state 1 and $x_{1,2} - x_{1,1}$ in state 2, so the matrix becomes

$$\begin{bmatrix} x_{1,1} & x_{1,2} \\ 0 & x_{1,2} - x_{1,1} \end{bmatrix}.$$

The new contract, written on the first asset, makes the set of financial activities complete, because now there are two linearly independent assets. A derivative is an asset whose object is another asset, which is exactly the contract we have just constructed. A contract that gives the possibility of acquiring another asset is called an "option". So even if we have just one asset, by constructing a derivative over that asset we make the market complete (that is the reason why such financial activities exist). Financial activities (in our current analysis) are just lotteries (risky outcomes), and so are the derivatives. Such derivative contracts are typically standardized.
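A small sketch of the completeness argument (the numbers x11 = 10, x12 = 16 and M = 3 are invented for illustration): checking the rank of the payoff matrix before and after adding the insurance, or the option-like contract, shows how the new contract completes the market.

```python
import numpy as np

def is_complete(payoffs: np.ndarray) -> bool:
    """A market is complete when the payoff matrix spans all states,
    i.e. its rank equals the number of states (columns)."""
    return np.linalg.matrix_rank(payoffs) == payoffs.shape[1]

x11, x12, M = 10.0, 16.0, 3.0

# One asset, two states: incomplete.
single = np.array([[x11, 0.0]])
print(is_complete(single))                      # False

# Add the insurance written on it: pay 2M in state 1, receive M in state 2.
with_insurance = np.array([[x11, 0.0],
                           [x11 - 2 * M, M]])
print(is_complete(with_insurance))              # True

# Second example: one asset paying (x11, x12), plus the option-like contract
# that lets us acquire it for a fee equal to x11.
with_option = np.array([[x11, x12],
                        [0.0, x12 - x11]])
print(is_complete(with_option))                 # True
```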
9. THE EVALUATION OF DERIVATIVES

9.1 FORWARD

From now on we will focus on forward contracts and options. A forward contract is an agreement we make now to obtain something at some point in time, by pre-defining now the payment that will be given in exchange, and by making that payment and obtaining the good at that point in time (note: this is different from a simple delayed-execution contract). A future is a subset of the forward: a future has pre-defined, standardized terms (e.g., a pre-defined length).

What is the structure and what is the value of such a contract? We are going to see that to calculate the price of a forward we can use the AS. General assumptions:

- There is an underlying asset whose value now, S, is observable (so its value today is certain).
- We will discuss the relation between the value of the asset now and in the future.
- We cannot get rid of the riskless interest rate (r has to exist).

The value of the asset tomorrow can go up or down (uS or dS); it has a binary structure, and the up and down movements have probabilities $\pi_u$ and $\pi_d$:

$$S \;\to\; \begin{cases} uS & \text{with probability } \pi_u \\ dS & \text{with probability } \pi_d \end{cases}$$

F is the forward price (fixed now, paid at delivery), so the gain at delivery is $uS - F$ in the up state and $dS - F$ in the down state; f is the value today of the contract that commits us to pay F at delivery.

The forward can be priced using the prices $\varphi_u$ and $\varphi_d$ of the AS contingent on the up and down states. Since $0 \le \varphi_u R \le 1$ and $0 \le \varphi_d R \le 1$ (with $R = 1 + r$), we see that the $\varphi R$ are a kind of probability:

$$\varphi_u R \in (0,1) \to \tilde{\pi}_u, \qquad \varphi_d R \in (0,1) \to \tilde{\pi}_d$$
$$\tilde{\pi}_u = \frac{R - d}{u - d}, \qquad \tilde{\pi}_d = \frac{u - R}{u - d}$$
$$f = \varphi_u f_u + \varphi_d f_d, \qquad \varphi_u = \frac{\tilde{\pi}_u}{R}, \qquad \varphi_d = \frac{\tilde{\pi}_d}{R}$$
$$f = \frac{\tilde{\pi}_u f_u + \tilde{\pi}_d f_d}{R} \qquad \text{(21) Expected Discounted Value of the Forward Contract}$$

(The numerator is made up of the value in the up state and the value in the down state.) The value, now, of a forward contract is the expected discounted value of its possible returns:

- Expected: because it depends on the probabilities $\tilde{\pi}_u$ and $\tilde{\pi}_d$;
- Discounted: because the consequences will happen in the future.

However, these probabilities are not the original probabilities of the underlying going up or down: we have a forward written on the underlying, and the expectation is calculated not with the probabilities of the underlying but with distorted probabilities. They are called hedging probabilities (or risk-neutral probabilities): they allow us to create a portfolio whose payoff is independent of the realization of the states of the world (that is the reason why they are "risk neutral").

Suppose we create a portfolio of the underlying S and the riskless asset, $x_s S + x_0$. In order to fully cover our risk we want this portfolio to give, regardless of the realization of the state of the world, the same return as the forward; and if that is the case, the covered position must earn the return of the riskless asset. The portfolio replicates the forward when

$$\begin{cases} x_s uS + x_0(1+r) = f_u \\ x_s dS + x_0(1+r) = f_d \end{cases}$$

Hence this portfolio has exactly the same value as the forward: it is called the replicating portfolio because it replicates the value of the forward. Solving for $x_s$ and $x_0$:

$$x_s = \frac{f_u - f_d}{S(u - d)}, \qquad x_0 = \frac{u f_d - d f_u}{R(u - d)}$$

Example: $S = 10€$, $u = \tfrac{3}{2}$, $d = \tfrac{1}{2}$, $r = \tfrac{1}{10}$, $F = 11€$.

$$f_u = uS - F = 4, \qquad f_d = dS - F = -6$$
$$x_s = \frac{4 + 6}{10} = 1, \qquad x_0 = \frac{-9 - 2}{11/10} = -10$$

$x_s$ is called the coverage ratio (or hedging ratio) and tells us how many units of the underlying we have to buy, when we sell the riskless asset short, in order to obtain the same return as the forward. Why do we have zero risk? Just look at the results obtained: one unit of the underlying plus $-10$ units of the riskless asset is worth $10 - 10 = 0$ today, and pays $f_u = 4$ in the up state and $f_d = -6$ in the down state, exactly like the forward.
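A short sketch that reproduces this example in code (with the interest rate read off the no-arbitrage relation F = S(1+r), i.e. r = 1/10) and checks that the hedging probabilities price the forward at zero and that the replicating portfolio pays exactly f_u and f_d:

```python
# One-period binomial forward: hedging probabilities and replicating portfolio.
S, u, d, r = 10.0, 1.5, 0.5, 0.10
R = 1 + r
F = S * R                      # no-arbitrage forward price: 11

f_u, f_d = u * S - F, d * S - F          # payoffs at delivery: 4 and -6

# Hedging (risk-neutral) probabilities and value of the contract today.
pi_u = (R - d) / (u - d)
pi_d = (u - R) / (u - d)
f = (pi_u * f_u + pi_d * f_d) / R
print(f)                        # 0.0: the forward costs nothing at inception

# Replicating portfolio: x_s units of the underlying, x_0 of the riskless asset.
x_s = (f_u - f_d) / (S * (u - d))          # hedging (coverage) ratio: 1
x_0 = (u * f_d - d * f_u) / (R * (u - d))  # -10
print(x_s * S + x_0)            # 0.0: same value as the forward today
print(x_s * u * S + x_0 * R)    # 4.0  = f_u
print(x_s * d * S + x_0 * R)    # -6.0 = f_d
```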
$$f = \frac{\tilde{\pi}_u f_u + \tilde{\pi}_d f_d}{R} \qquad \left(\text{equivalent to } f = \frac{(R-d)f_u + (u-R)f_d}{R(u-d)}\right)$$

When we calculate the value of the forward as an expected discounted value using the hedging probabilities, we are effectively replicating the portfolio in order to obtain exactly the same return as the riskless asset. When a market is competitive and complete, all the following conditions are equivalent:

- there exists no arbitrage possibility;
- the value of any asset equals its return in each state of the world weighted by the price of the AS contingent on that state;
- the value of an asset is the expected discounted value of its payoff, calculated with the hedging probabilities;
- the return of the asset is exactly the return of the replicating portfolio (the portfolio obtained by buying the underlying and the riskless activity according to the hedging ratio).

Now suppose there is a forward contract with a certain delivery, written on a binomial underlying:

$$S = 900€, \quad u = \tfrac{4}{3}, \quad d = \tfrac{1}{2}, \quad r = \tfrac{1}{10}, \quad F = 990€.$$

1) No arbitrage: $F = S(1+r) = 990$, so the contract is fairly priced and $f = 0$.

2) Pricing with the AS prices:
$$\varphi_u = \frac{R - d}{R(u-d)} = \frac{36}{55}, \qquad \varphi_d = \frac{u - R}{R(u-d)} = \frac{14}{55}, \qquad R = \frac{11}{10}$$
$$f_u = uS - F = 210, \qquad f_d = dS - F = -540$$
$$f = \varphi_u f_u + \varphi_d f_d = 210\cdot\frac{36}{55} - 540\cdot\frac{14}{55} = \frac{7560 - 7560}{55} = 0$$

3) Pricing with the hedging probabilities:
$$\tilde{\pi}_u = \varphi_u R = \frac{36}{50}, \qquad \tilde{\pi}_d = \varphi_d R = \frac{14}{50}$$
$$f = \frac{\tilde{\pi}_u f_u + \tilde{\pi}_d f_d}{R} = \frac{\frac{36}{50}\cdot 210 - \frac{14}{50}\cdot 540}{\frac{11}{10}} = \frac{151.2 - 151.2}{\frac{11}{10}} = 0$$

4) Replicating portfolio $x_s S + x_0 = f$:
$$x_s = \frac{f_u - f_d}{S(u-d)} = \frac{210 + 540}{900\cdot\frac{5}{6}} = \frac{750}{750} = 1$$
$$x_0 = \frac{u f_d - d f_u}{R(u-d)} = \frac{-\frac{4}{3}\cdot 540 - \frac{1}{2}\cdot 210}{\frac{11}{10}\cdot\frac{5}{6}} = \frac{-825}{\frac{11}{12}} = -900$$

The hedging ratio is 1 in this case: we have to sell short 900 units of the riskless activity and buy one unit of the underlying. This is effectively the replicating portfolio, since $x_s S + x_0 = 1\cdot 900 - 900 = 0$. So, if we hold one unit of the underlying and sell 900 units of the riskless asset short, we are fully covered, because the yield is exactly the same as the yield of the forward, whose value is zero. Therefore we can expect, in equilibrium, that the value of the forward contract is zero and that there exists a relationship between the forward and the spot price: the forward price is the spot price capitalized at the riskless interest rate.

Let us complete the analysis by taking into account the role played by time. We have assumed a single period, but most forward contracts last longer than one period. With more periods the underlying can go up or down in each period (two consecutive up-movements give $u^2 S$, and so on), and the same no-arbitrage argument applies period by period, so that over T periods $F = S R^T$ and the value of the contract at inception is still $f = 0$.

In principle, the value of a call does not have an upper bound, while the put does have an upper bound, which is the strike price (and neither value can go below zero).

9.3 THE EVALUATION OF THE OPTIONS

Financial options can be options to obtain or sell other options, so they can be complex; here we evaluate elementary options, since all kinds of options can be reduced to an elementary structure. How much is a European call option worth? Let us start by defining constraints on its value, that is, the boundaries within which it must lie, and then we will make the formula for the value explicit. Options have a non-linear payoff because they give us the possibility of doing something, not the obligation.
The value of an option is linked to the possible values of the underlying, but the option is typically a non-linear function of the underlying: at the expiration date the call is worth $\max(S_T - k, 0)$. So how much is it worth now?

The intrinsic value of the option is the value the option would have if it expired today. With a strike price $k = 100$ we can observe three cases:

1) if $S > 100$ the intrinsic value today is positive, so we would exercise and buy the underlying: the option is in the money;
2) if $S < 100$ we would not exercise it (zero intrinsic value): the option is out of the money;
3) if $S = 100$ we are just indifferent: the option is at the money.

Suppose there is an option that is out of the money today; is its value today linked to, but not fully represented by, the intrinsic value? Yes: the value of the option is the sum of two components,

$$c = c_i + c_e$$

where $c_i$ is the intrinsic value and $c_e$ is the extrinsic value (or value of time), which is positive until the expiration day. At the expiration day $c$ is just the intrinsic value; before that, it is more. The value of the option depends on time, and at any date $t$ the intrinsic value is $\max(S_t - k, 0)$.

No one will pay more than S for the right to buy an underlying whose value is S: the underlying itself is the upper limit of the value of a call option (the first boundary),

$$S \ge c.$$

For the lower boundary we can construct a replicating portfolio: we buy the underlying now and we borrow an amount of money equal to the discounted value of the strike price, $kR^{-1}$, so that in a year we will have to repay exactly $k$. At expiration this portfolio is worth $S_T - k$, which is what the call pays when it is in the money (and less than the zero the call pays when it is out of the money). Therefore, today,

$$c \ge S - kR^{-1}.$$

If today $c$ were less than this, we could construct an arbitrage around the replicating portfolio, for instance buying the call, selling the underlying short and lending $kR^{-1}$. We do not know now whether the option will end in the money or not: the discounted intrinsic value $S - kR^{-1}$ is the lower limit of the value of the call, because it would be its exact value only if we were sure to exercise it; if $S_T$ turns out to be less than $k$ we simply do not exercise and get zero. Summing up, at expiration $c_T = \max(S_T - k, 0)$, while today

$$S \ge c \ge S - kR^{-1}.$$

The call must lie between these two boundaries: the underlying itself and the value of the replicating portfolio (in the payoff diagram, the call lies in the region between these two lines). The lower boundary depends on the length of the contract, since the discount factor takes into account the time to the expiration date (e.g. in years):

$$c \ge S - kR^{-T}.$$
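A tiny helper (illustrative only; the sample numbers are invented) that computes these no-arbitrage bounds for a European call and checks a quoted price against them:

```python
def call_bounds(S: float, k: float, r: float, T: float) -> tuple:
    """No-arbitrage bounds for a European call:
    max(S - k*R**(-T), 0) <= c <= S, with R = 1 + r."""
    R = 1.0 + r
    lower = max(S - k * R ** (-T), 0.0)
    return lower, S

# Example with made-up numbers: S = 100, k = 95, r = 5%, one year to expiration.
low, high = call_bounds(S=100.0, k=95.0, r=0.05, T=1.0)
print(low, high)               # about 9.52 and 100.0

quoted = 7.0
print(low <= quoted <= high)   # False: such a quote would allow an arbitrage
```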
As for the call, $c = c_i + c_t$, with intrinsic value $c_i = \max(S - k, 0)$, which in principle is unbounded. What is the value of the put? And can we make the same parallel between the American put option and the European put option?

The value of a put is also made up of the intrinsic value and the value of time. The value of time is positive until expiration, so looking at the value of time alone there is never an incentive to exercise an American option before expiration, because by exercising early we lose the value of time. Therefore, if it is convenient to exercise an American option before the delivery date, the convenience must come from the intrinsic value.

What is the difference between the intrinsic value of the call and that of the put?

$$c_i = \max(S - k, 0): \text{ the call is in principle unbounded (it can take any positive value);}$$
$$p_i = \max(k - S, 0): \text{ the put is bounded above, and its highest possible intrinsic value is } k.$$

The intrinsic value of the call is a random variable (because of S) with no upper limit, so by waiting we can always hope to gain more. Looking at the put: as the underlying S changes over time, the value of the put changes too, and we know that losses are bounded. If the value of the underlying rises above the strike price, we simply do not exercise. But as soon as the price of the underlying goes to zero, we immediately realize that the intrinsic value has reached the highest level it can ever have: it cannot exceed k. Moreover, if we own an American put (say with ten years to delivery) and in the fifth year S goes to zero, at that moment we know two things: the put will never be worth more than this, and if we exercise it now we immediately receive k. In the remaining five years we can invest this k at the interest rate r and obtain k compounded at R for five years. So if we exercise the put early, it is true that we lose the value of time, but to some extent we have already reached the maximum value, and we gain the capitalization of $k - S$ at the riskless rate. We should exercise an American put early if the price of the underlying goes to zero, or falls to such a low level that the amount obtained by exercising, capitalized at the riskless interest rate, outweighs the value of time we give up (the value of time can be smaller than the compounded intrinsic value obtained by exercising now).

How can we evaluate the put? We can evaluate the value of the put starting from the value of the call. At expiration, when the value of time is zero, the value of the put is $p_T = \max(k - S_T, 0)$. Suppose that today someone buys a call, sells a put (same underlying, same strike k, same expiration) and lends the discounted strike price, constructing the portfolio

$$c - p + kR^{-T}.$$

At expiration there are two cases, $S_T > k$ and $S_T < k$:

$$\begin{cases} S_T > k: & (S_T - k) - 0 + k = S_T \\ S_T < k: & 0 - (k - S_T) + k = S_T \end{cases}$$

In both cases the portfolio is worth $S_T$: it is a replicating portfolio of the underlying, so today it must be worth S. This relationship is called Call-Put Parity:

$$p = c - S + kR^{-T}.$$

If markets are complete and we know how much the call is worth, we can immediately evaluate the put.
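A brief sketch of the parity relation in code (the numbers are invented): given a call price, recover the put price and verify that the portfolio long call, short put, plus lending $kR^{-T}$ pays $S_T$ in both scenarios.

```python
# Call-put parity: p = c - S + k * R**(-T)
S, k, r, T = 100.0, 95.0, 0.05, 1.0
R = 1.0 + r
c = 12.0                      # hypothetical call price

p = c - S + k * R ** (-T)     # implied put price (about 2.48)
print(p)

# At expiration the portfolio (long call, short put, lend k*R**(-T)) pays S_T.
for S_T in (80.0, 120.0):     # one scenario below k, one above
    call_payoff = max(S_T - k, 0.0)
    put_payoff = max(k - S_T, 0.0)
    portfolio = call_payoff - put_payoff + k   # the loan repays exactly k
    assert portfolio == S_T
    print(S_T, portfolio)
```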
If we think in terms of the replicating portfolio, the value of the call is

$$c = S x_s + x_0$$

(just as $f = S x_s + x_0$ in the case of the forward), where $x_s$ is the hedging ratio, which in this case is the delta of the option. So we can construct a portfolio made of the underlying and the riskless activity which has the same yield as the call, regardless of the realization of the states. Assuming the underlying can only go up or down:

$$c_u = \max(uS - k, 0) \quad \text{(value of the call if the underlying goes up)}$$
$$c_d = \max(dS - k, 0) \quad \text{(value of the call if the underlying goes down)}$$

From $S x_s + x_0 = c$ we can calculate the value of the European call for the case of a binary underlying, with

$$x_s = \frac{c_u - c_d}{S(u - d)}, \qquad x_0 = \frac{u c_d - d c_u}{R(u - d)}.$$

Example: $S = 900$, $u = \tfrac{4}{3}$, $d = \tfrac{1}{2}$, $r = \tfrac{1}{10}$, $k = 650$, so $uS = 1200$, $dS = 450$ and the intrinsic value is $c_i = 250$. Simplifying,

$$c = \frac{(R - d)c_u + (u - R)c_d}{R(u - d)} = \frac{\frac{6}{10}\cdot 550}{\frac{11}{10}\left(\frac{4}{3} - \frac{1}{2}\right)} = 360,$$

so $360 = 250 + c_t$ and the value of time is $c_t = 110$. Let us calculate explicitly the value of the replicating portfolio:

$$x_s = \frac{66}{90}, \qquad x_0 = -300, \qquad c = S x_s + x_0 = 900\cdot\frac{66}{90} - 300 = 660 - 300 = 360,$$

the same value as before. The replicating portfolio is the portfolio that yields the same return as the call regardless of the state of the world. Now construct the portfolio

$$S x_s - c,$$

in which we buy $x_s$ units of the underlying (the delta) and at the same time we sell the call:

$$S x_s - c = 900\cdot\frac{66}{90} - 360 = 300.$$

This coverage is called Delta Hedging: it tells us how many units of the underlying we have to buy, against the sold call, in order to be fully hedged. Let us see what happens at the expiration date, state up or state down:

1) up: $uS x_s - c_u = 1200\cdot\frac{66}{90} - 550 = 880 - 550 = 330$;
2) down: $dS x_s - c_d = 450\cdot\frac{66}{90} - 0 = 330$,

the same regardless of the realization of the state, and $300(1 + r) = 330$: the delta-hedged portfolio grows exactly at the riskless rate. We construct the delta-hedging portfolio, using the option, in order to be fully hedged: we obtain a position whose return is independent of the realization of the states of the world.

We are still focusing on European call options (the same reasoning applies to American call options). Let us consider new parameters: $S = 900$, $uS = 1200$, $dS = 450$, $r = \tfrac{1}{10}$, $k = 950$. This option is now out of the money, so $c_i = 0$ and the value of the call is entirely value of time. If we were exactly at the expiration date the value of time would be zero, so in that case the value of this option would be zero.
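A compact sketch of the one-period pricing and delta hedging above; it reproduces the S = 900, k = 650 numbers worked out in the notes:

```python
# One-period binomial pricing and delta hedging of a European call.
S, u, d, r, k = 900.0, 4/3, 0.5, 0.10, 650.0
R = 1 + r

c_u = max(u * S - k, 0.0)      # 550: call value in the up state
c_d = max(d * S - k, 0.0)      # 0:   call value in the down state

# Replicating portfolio: delta (hedging ratio) and riskless position.
delta = (c_u - c_d) / (S * (u - d))          # 11/15
x0 = (u * c_d - d * c_u) / (R * (u - d))     # -300
c = delta * S + x0
print(c)                                     # 360.0

# Delta hedging: hold delta units of the underlying and sell the call.
hedged_now = delta * S - c                   # 300
up = delta * u * S - c_u                     # 330 in the up state
down = delta * d * S - c_d                   # 330 in the down state
print(up, down, hedged_now * R)              # same payoff either way, = 300*R
```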
The value of the option is positively related to risk, but this is not necessarily a good thing in itself; it is good in terms of efficiency. The more the value of derivatives rises in the market, the higher the level of risk they embed. Is that necessarily good? It depends on the attitude towards risk.

BINOMIAL THEOREM

Going ahead, we focus on a specific formula that is useful when there is more than one period of time. For the forward:

$$f = \sum_{t=0}^{T} \frac{T!}{t!(T-t)!}\, f_{u^t d^{T-t}}\, \varphi_u^t \varphi_d^{T-t}$$

where $f_{u^t d^{T-t}}$ is the value of the forward in the state reached with $t$ up-movements and $T-t$ down-movements. This is the binomial expansion of the forward. Any binomial can be written using factorials:

$$(x + y)^T = \sum_{t=0}^{T} \frac{T!}{t!(T-t)!}\, x^t y^{T-t} \qquad (23)$$

For the call:

$$c = \sum_{t=0}^{T} \frac{T!}{t!(T-t)!}\, c_{u^t d^{T-t}}\, \varphi_u^t \varphi_d^{T-t}$$

This is the binomial for the call. The payoff of the call across states is non-linear ($c_u$, $c_d$, etc. are discrete values): we do not know in advance in which states the value is zero and in which it is greater than zero; we only know that it is either zero or positive. Think of the periods $t_0, t_1, t_2, t_3$ and of the corresponding tree of call values ($c$; $c_u$, $c_d$; $c_{u^2}$, $c_{ud}$, $c_{d^2}$; $c_{u^3}$, $c_{u^2 d}$, $c_{ud^2}$, $c_{d^3}$). We know that the call, now, can be in or out of the money, but logically there must be a threshold number of up-movements beyond which, even if the intrinsic value is zero now, the call becomes positive.

This threshold, call it $m$, is the minimum number of up-movements needed for the intrinsic value of the call to become positive; it can also be zero (the call could already be in the money). Beyond this minimum the call becomes linear, because it is positive. So we can rewrite the binomial expression for the call above by looking only at those states beyond which the value of the call is positive:

$$c_T = \begin{cases} 0 & \text{when } t < m \\ u^t d^{T-t} S - k & \text{when } t \ge m \end{cases}$$

This is the general expression for the payoff of the call, $c_T = \max(u^t d^{T-t} S - k,\, 0)$, and it is non-linear; we do not know $m$ in advance, but we know it must exist. We can now use a logical trick: identify the date (whatever it is) beyond which the number of up-movements yields a positive intrinsic value; this transforms the non-linear payoff of the call into a linear one over the relevant range. With this trick we can explicitly price the call using the prices of the AS: the value of the call is the payoff contingent on each state weighted by the price of the AS for that state,

$$c = \sum_{t=m}^{T} \frac{T!}{t!(T-t)!}\left(u^t d^{T-t} S - k\right)\varphi_u^t \varphi_d^{T-t}.$$

The summation does not start from $t = 0$ but from the minimum number of up-movements $m$ beyond which the payoff is positive, weighted by the prices of the AS $\varphi$ contingent on each state. Using $\varphi_u = \tilde{\pi}_u / R$ and $\varphi_d = \tilde{\pi}_d / R$, so that $\varphi_u^t\varphi_d^{T-t} = \tilde{\pi}_u^t\tilde{\pi}_d^{T-t}R^{-T}$, and defining the adjusted probabilities $\hat{\pi}_u = \tilde{\pi}_u\, u / R$ and $\hat{\pi}_d = \tilde{\pi}_d\, d / R$:

$$c = S \sum_{t=m}^{T} \frac{T!}{t!(T-t)!}\,\hat{\pi}_u^t \hat{\pi}_d^{T-t} \;-\; kR^{-T} \sum_{t=m}^{T} \frac{T!}{t!(T-t)!}\,\tilde{\pi}_u^t \tilde{\pi}_d^{T-t}.$$

Each of the two summations is the tail of a cumulated binomial distribution, i.e. the probability that the number of up-movements reaches at least $m$. The second one, computed with the hedging probabilities $\tilde{\pi}$, is the (risk-neutral) probability of exercising the call; we indicate it as $N(d_2)$. The first one, computed with the adjusted probabilities $\hat{\pi}$, multiplies $S$ and we indicate it as $N(d_1)$; since $c = S x_s + x_0$ and $\partial c / \partial S = x_s$, this first summation is the delta of the option.
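A sketch of this multi-period binomial pricing in code, using the closed-form summation above (the parameters are illustrative, with u, d and r taken per period):

```python
from math import comb

def binomial_call(S, k, u, d, r, T):
    """European call priced with the multi-period binomial closed form:
    c = S * sum_{t>=m} C(T,t) pihat_u^t pihat_d^(T-t)
        - k * R**(-T) * sum_{t>=m} C(T,t) pitil_u^t pitil_d^(T-t)."""
    R = 1 + r
    pitil_u = (R - d) / (u - d)          # hedging probabilities
    pitil_d = (u - R) / (u - d)
    pihat_u = pitil_u * u / R            # adjusted probabilities
    pihat_d = pitil_d * d / R
    # m = minimum number of up-movements for which the call ends in the money
    m = next((t for t in range(T + 1) if u**t * d**(T - t) * S > k), T + 1)
    N_d1 = sum(comb(T, t) * pihat_u**t * pihat_d**(T - t) for t in range(m, T + 1))
    N_d2 = sum(comb(T, t) * pitil_u**t * pitil_d**(T - t) for t in range(m, T + 1))
    return S * N_d1 - k * R**(-T) * N_d2

# With T = 1 this reduces to the one-period example of the notes (c = 360).
print(binomial_call(S=900, k=650, u=4/3, d=0.5, r=0.10, T=1))
# An illustrative three-period valuation with the same per-period parameters.
print(binomial_call(S=900, k=650, u=4/3, d=0.5, r=0.10, T=3))
```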
These probabilities can be expressed in a specific formula when we move from the binomial case to continuous time (taking shorter and shorter intervals and increasing the number of periods):

$$c = S\,N(d_1) - k e^{-r\tau} N(d_2) \qquad \text{(24) Black and Scholes Formula}$$

where $\tau$ (tau) is the time to expiration, $e^{-r\tau}$ is the continuous-time discount factor, $N(\cdot)$ is the cumulated normal distribution and $d_1$, $d_2$ are parameters that can be immediately calculated:

$$d_1 = \frac{\ln\!\left(\frac{S}{k}\right) + \left(r + \frac{1}{2}\sigma^2\right)\tau}{\sigma\sqrt{\tau}}, \qquad d_2 = d_1 - \sigma\sqrt{\tau}.$$

In order to calculate the value of the call now, we compute with the normal distribution the same two quantities as in the binomial case: $N(d_2)$ is the (risk-neutral) probability of exercising the call, and $N(d_1)$ is the delta of the option. As a function of S, the value of the call is a convex curve that lies entirely between the upper and lower boundaries and approaches the lower boundary asymptotically as S grows. Once the logic is understood, formula (24) becomes extremely simple to apply: it is just the continuous-time limit of the binomial we have already developed, and it has been obtained using the prices of the AS.

Those prices could be used because of the assumptions of completeness and competitiveness of the financial market, so those assumptions are built into the Black and Scholes formula and have to hold for it to be valid. Therefore, if the continuous-time market has the same characteristics, namely competitiveness and completeness of the market (adding that risk exists, that there is a single riskless activity, and that preferences can be represented through a utility function fully described by two moments, mean and variance), then we can apply the Black and Scholes formula, but only to the European case.

Remembering the binomial distribution, there is a probability of going up and of going down, but the probability of the changes does not depend on where on the tree we measure it: history is irrelevant. A random variable with this property (its changes are normally distributed regardless of the point at which we measure them, independently of its past history) follows a random walk.

The corresponding Black and Scholes formula for the put is $p = k e^{-r\tau} N(-d_2) - S\,N(-d_1)$.
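A self-contained sketch of formula (24) and of the put formula above (the numbers in the example are invented; $\sigma$ is the volatility of the underlying, an input the notes do not fix):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Cumulated standard normal distribution N(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(S: float, k: float, r: float, sigma: float, tau: float):
    """European call and put values from the Black and Scholes formula (24)."""
    d1 = (log(S / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    call = S * norm_cdf(d1) - k * exp(-r * tau) * norm_cdf(d2)
    put = k * exp(-r * tau) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put, norm_cdf(d1)   # N(d1) is the delta of the call

c, p, delta = black_scholes(S=100.0, k=95.0, r=0.05, sigma=0.25, tau=1.0)
print(c, p, delta)
# Call-put parity with continuous discounting: p = c - S + k*exp(-r*tau)
print(abs(p - (c - 100.0 + 95.0 * exp(-0.05))))   # ~0
```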

