DECISION MAKING WITH UNCERTAINTY AND RISK AVERSION
Date: December 16, 2004.
1. INTRODUCTION

1.1. The underlying idea of decision making under uncertainty. We are interested in how a decision maker chooses among alternative courses of action when the consequences of each action are not known at the time the choice is made. Individuals may make different choices in a setting involving uncertainty than they will in one where outcomes are known. These differences are usually attributed to "risk preferences".

1.2. Underlying framework for the problem.

  1. There are a number of outcomes for the decision problem. They are represented by a non-empty set of prizes or things that matter to the decision maker, which is denoted X.
  2. There are consequences, which are represented by a non-empty set C. Consequences can be anything that has to do with the welfare of a decision maker. C can be a probability space over a set of outcomes or an outcome. One outcome might be that you get a box with 3 oranges and 2 apples inside. Another might be that you get two Powerball tickets purchased 3 December.
  3. Feasible acts are a non-empty set denoted by A0.
  4. The set of conceivable acts, denoted by A, contains the set of feasible acts. For example, there may be no available action that leads to winning the lottery with certainty. This action is conceivable but not feasible.
  5. A mapping from the elements of A0 to subsets of C. For example, choosing to take curtain number 1 on "Let's Make a Deal" gives you some of the prizes that are available that day. Ultimately each act will result in a unique element of C, but which element occurs is not known a priori.
  6. A state of nature is a function that assigns to every feasible act a consequence from the set of consequences corresponding to this act. For example, the consequences from raising the price of a product you sell might be that profits increase, profits decrease, or profits remain the same. State of nature "one" might be that profits decrease. The set of all states of nature is denoted by S.
  7. Actions can be considered to be a mapping from the set of states to the set of consequences.
  8. Constant acts are those which give the same consequence in all states of the world.
  9. Risk is a situation in which the set of states is a singleton or all acts are constants. Consequences in this framework consist of probability measures or lotteries on a set of outcomes. For example, if the set of states is a singleton, an act represents choosing a particular lottery or probability measure on a set of outcomes. Consider a gambler who is faced with two


possible slot machines to play. The first machine gives a payoff of -$1.00 with probability .9, a payoff of $4.00 with probability .05, and a payoff of $100.00 with probability .05. The second machine gives a payoff of -$1.00 with probability .8, a payoff of $4.00 with probability .16, and $100.00 with probability .04. The outcomes here are (-1.00, 4.00, 100.00). Each act induces a different lottery on the outcomes. The state of the world, that is, the existence of the slot machines and the associated lotteries, is a constant. (The lotteries induced by the two machines are illustrated in the sketch following this list.)

  10. Uncertainty is a situation in which the set of consequences, C, coincides with the set of outcomes, X. The set of acts, A, consists of all functions from the set of states, S, to X. A preference relation on A is a primitive of the model. In this set-up there are no objective probabilities (no given probability model); instead, subjective probabilities are developed as part of the decision problem.
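To make the slot machine example in item 9 concrete, here is a minimal sketch (my own; the variable names are not from the notes) that encodes each machine as a lottery over the outcomes (-1.00, 4.00, 100.00) and computes the expected payoff each act induces:

```python
# Outcomes and the lotteries (probability vectors) induced by the two machines.
outcomes = [-1.00, 4.00, 100.00]
machine_1 = [0.90, 0.05, 0.05]
machine_2 = [0.80, 0.16, 0.04]

def expected_payoff(probs, outcomes):
    """Expected monetary value of the lottery induced by an act."""
    return sum(p * x for p, x in zip(probs, outcomes))

print(expected_payoff(machine_1, outcomes))  # -0.9 + 0.2 + 5.0 = 4.3
print(expected_payoff(machine_2, outcomes))  # -0.8 + 0.64 + 4.0 = 3.84
```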

1.3. Preference relations. A preference relation is a binary relation, ≿, on A that is

  1. complete - for all a, b ∈ A, either a ≿ b or b ≿ a
  2. transitive - for all a, b, c ∈ A, a ≿ b and b ≿ c imply a ≿ c

1.4. Representing the preference relation. A real-valued function U on A represents ≿ if for all a, b ∈ A, a ≿ b iff U(a) ≥ U(b). The function U is called the utility function.

The most common way to represent preferences in such models is with a representation functional that is the sum of the products of utilities and probabilities of outcomes.

2. EXPECTED UTILITY THEORY (VON NEUMANN MORGENSTERN)

For the analysis in this section, assume that the set of consequences C is finite.

2.1. Lotteries.

2.1.1. Definition of a simple lottery. A simple lottery L is a list L = (p1, p2, ..., pN) with pn ≥ 0 for all n and Σ_{n=1}^N pn = 1, where pn is interpreted as the probability of outcome n occurring. A simple lottery can be represented geometrically as a point in an (N−1)-dimensional simplex, ∆ = {p ∈ R^N_+ : p1 + p2 + ... + pN = 1}. Consider the simple lottery represented in figure 1. Each point in the simplex represents a particular lottery which yields consequence x1 with probability p1, and so on. When N = 3 it is convenient to use a two-dimensional diagram in the form of an equilateral triangle with altitude equal to one. This is convenient geometrically because the length of a side in this case is equal to 2/√3 and the sum of the perpendiculars from any point to the three sides is equal to 1. For example, at a vertex (probability mass equal to one at that point) the length to the opposite side is equal to the altitude of 1. Similarly, a point at the center of the triangle has length 1/3 to each side, and a point midway between two endpoints along a side has length 1/2 to the other two sides. The two-dimensional representation of the lottery in figure 1 is contained in figure 2.

FIGURE 2. Triangle Representing a Simple Lottery

L1 = (1, 0, 0)

L2 = (0.25, 0.375, 0.375)

L3 = (0.75, 0.25, 0)

L4 = (0.5, 0.125, 0.375)

L5 = (0.5, 0.25, 0.25)

Now consider two compound lotteries. The first gives L1 with probability .25 and L5 with probability .75. This leads to a reduced lottery of (.625, .1875, .1875). Consider then the compound lottery that gives L3 with probability .5 and L4 with probability .5. This has a reduced lottery equal to (.625, .1875, .1875). Thus the two compound lotteries are equivalent.
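A short sketch (my own illustration, not from the notes) that reduces a compound lottery to a simple one and confirms that the two compound lotteries above induce the same reduced lottery:

```python
def reduce_compound(weights, lotteries):
    """Reduce a compound lottery: weight each simple lottery and sum componentwise."""
    n = len(lotteries[0])
    return tuple(sum(w * lot[i] for w, lot in zip(weights, lotteries)) for i in range(n))

L1 = (1.0, 0.0, 0.0)
L3 = (0.75, 0.25, 0.0)
L4 = (0.5, 0.125, 0.375)
L5 = (0.5, 0.25, 0.25)

print(reduce_compound([0.25, 0.75], [L1, L5]))  # (0.625, 0.1875, 0.1875)
print(reduce_compound([0.5, 0.5], [L3, L4]))    # (0.625, 0.1875, 0.1875)
```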

2.2. Preferences over lotteries. We will assume that the set of alternatives to be considered is the set of all simple lotteries over the outcomes C, denoted by L. We also assume there exists a binary preference relation ≿ on the set of such lotteries.

  1. Continuity or Archimedean axiom

The preference relation ≿ on the space of simple lotteries L is continuous if for any (L, L′, L″) ∈ L, the sets

{α ∈ [0, 1] : αL + (1 − α)L′ ≿ L″} ⊂ [0, 1]

{α ∈ [0, 1] : L″ ≿ αL + (1 − α)L′} ⊂ [0, 1]     (2)

are closed.

As a possible counterexample, consider the following consequences and simple lotteries.

C = ($1000, $10, Death)

L1 = (1, 0, 0)

L2 = (0, 1, 0)

L3 = (0, 0, 1)

Assume that L1 ≻ L2 ≻ L3. Then continuity implies there is some compound lottery such that αL1 + (1 − α)L3 ≻ L2, even though this lottery puts positive probability on death.

  2. Independence axiom

The preference relation ≿ on the space of simple lotteries satisfies the independence axiom if for all (L, L′, L″) ∈ L and α ∈ (0, 1) we have

L ≿ L′ ⟺ αL + (1 − α)L″ ≿ αL′ + (1 − α)L″     (3)

2.3. The expected utility function. The utility function U: L → R has an expected utility form if there is an assignment of numbers (u1, u2, ..., uN) to the N outcomes such that for every simple lottery L = (p1, p2, ..., pN) ∈ L, we have

U(L) = u1 p1 + u2 p2 + · · · + uN pN = Σn un pn     (4)

A utility function U: L → R with the expected utility form is called a von Neumann-Morgenstern (v.N-M) expected utility function. Note that if L^n is the lottery that yields outcome n with certainty (pn = 1), then U(L^n) = un. The important result is that the utility function is linear in the probabilities.
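As a quick illustration (a sketch of my own, not from the notes), the expected utility form is just a dot product of the probability vector with an assumed vector of outcome utilities:

```python
def expected_utility(lottery, u):
    """von Neumann-Morgenstern form: U(L) = sum_n p_n * u_n."""
    assert abs(sum(lottery) - 1.0) < 1e-9, "probabilities must sum to one"
    return sum(p * un for p, un in zip(lottery, u))

u = [10.0, 14.75, 20.0]                        # hypothetical utilities u_1, u_2, u_3
print(expected_utility((0.5, 0.0, 0.5), u))    # 0.5*10 + 0.5*20 = 15.0
print(expected_utility((0.0, 1.0, 0.0), u))    # degenerate lottery: U = u_2 = 14.75
```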

2.4. Linearity and expected utility.

Proposition 1. A utility function U: L → R has an expected utility form iff it is linear, that is, iff it satisfies the property that

U(Σ_{k=1}^K αk Lk) = Σ_{k=1}^K αk U(Lk)     (5)

for any K lotteries Lk ∈ L, k = 1, 2, ..., K, and probabilities (α1, ..., αK) ≥ 0 with Σ_{k=1}^K αk = 1.

Proof. Suppose that U(·) satisfies equation 5. We can write any lottery L = (p1, ..., pN) as a convex combination of the degenerate (certain) lotteries (L^1, ..., L^N), that is, L = Σn pn L^n. We then have
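The proof is cut off here; presumably it continues along the standard lines, using linearity and the fact that U(L^n) = un:

U(L) = U(Σn pn L^n) = Σn pn U(L^n) = Σn pn un,

which is the expected utility form. For the converse, applying the expected utility form to the reduced lottery of Σk αk Lk gives Σn (Σk αk p^k_n) un = Σk αk Σn p^k_n un = Σk αk U(Lk), which is exactly the linearity property (5).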

3.3.1. Expected utility with discrete outcomes.

U(L) = Σ_{n=1}^N pn un = p1 u1 + p2 u2 + ... + pN uN

where un is the utility associated with the n-th outcome. This is sometimes called the Bernoulli function or preference scaling function.

3.3.2. Expected utility with continuous outcomes.

U(F) = ∫ u(x) dF(x)     (11)

where u is the utility associated with the monetary outcome x. As before, this is called the Bernoulli or preference scaling function. Often we will write EU(F) for U(F), or, if F depends on a parameter "a", we will write EU(F(a)) or EU(a), where EU(a) is the expected utility of the action a that induces the distribution F(a) on outcomes.
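A minimal numerical sketch (my own, with an assumed lognormal outcome distribution and log utility) of how U(F) = ∫ u(x) dF(x) can be approximated by Monte Carlo:

```python
import math
import random

def u(x):
    """Assumed Bernoulli (preference scaling) function: log utility."""
    return math.log(x)

def expected_utility_mc(draw_outcome, n=100_000, seed=0):
    """Approximate U(F) = integral of u(x) dF(x) by averaging u over draws from F."""
    rng = random.Random(seed)
    return sum(u(draw_outcome(rng)) for _ in range(n)) / n

# F: lognormal outcomes x = exp(z), z ~ N(0.05, 0.2^2), so E[u(x)] = E[z] = 0.05
draw = lambda rng: math.exp(rng.gauss(0.05, 0.2))
print(expected_utility_mc(draw))   # close to 0.05
```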

3.3.3. Properties of the function u( · ).

  1. increasing
  2. continuous
  3. bounded (or use restrictions on F)

4. RISK AVERSION

4.1. Definition of risk aversion in general. A decision maker is a risk averter if for any lottery F(·), the degenerate lottery that yields the amount ∫ x dF(x) with certainty is weakly preferred to F(·). If the decision maker is always (for any F) indifferent between these two lotteries, we say he is risk neutral. Finally, we say that the decision maker is strictly risk averse if indifference holds only when the two lotteries are the same (F is degenerate).

4.2. Definition of risk aversion with a v.N-M utility function. A decision-maker is a risk averter iff

∫ u(x) dF(x) ≤ u( ∫ x dF(x) )     ∀ F(·)     (12)

This is called Jensen's inequality and holds for all concave functions u(·). Strict concavity, or strict risk aversion, means that the marginal utility of money is decreasing. Thus at any level of wealth the utility gained from a dollar increase is smaller in absolute value than the utility lost from a dollar decrease.

4.3. Example of risk aversion.

  1. States of nature

Consider two states of nature with p1 = p2 = 0.5.

  2. Preference scaling function

Consider the preference scaling function u(x) = −4 + 0.17x − 0.0003x². For this function, the following values are obtained:

u(100) = 10     u(150) = 14.75
u(200) = 18     u(250) = 19.75
u(300) = 20

  3. Lotteries

Consider a lottery where the outcomes are 100 and 300.

  4. Expected Utility

U(L) = u(100)(.5) + u(300)(.5) = 10(.5) + 20(.5) = 15.

The expected value of the lottery is E(L) = 100(.5) + 300(.5) = 200. The scaling function implies that u(200) = 18, so U(L) < u(E(L)). An individual who is risk neutral will have a linear utility function u. Consider the shape of the preference scaling function in figure 3. Expected utility is computed along the line connecting the points (100, 10) and (300, 20). The utility of 200 is higher than the point along this line because u(x) is a concave function.
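A small sketch (mine, using the example's quadratic scaling function) that reproduces these numbers and also solves numerically for the certainty equivalent c with u(c) = U(L):

```python
def u(x):
    """Preference scaling function from the example."""
    return -4 + 0.17 * x - 0.0003 * x**2

EU = 0.5 * u(100) + 0.5 * u(300)   # expected utility of the lottery = 15
EV = 0.5 * 100 + 0.5 * 300         # expected value of the lottery  = 200
print(EU, u(EV))                   # 15.0 < 18.0, so the decision maker is risk averse

# Certainty equivalent: c with u(c) = EU, found by bisection on [100, 200] (u is increasing there)
lo, hi = 100.0, 200.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if u(mid) < EU else (lo, mid)
print(round(0.5 * (lo + hi), 2))   # about 153.16, well below the expected value of 200
```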

FIGURE 3. Risk Averse Preference Scaling Function (plot of u(x) against x)

FIGURE 4. Finding the Certainty Equivalent (plot of u(x) against x)

4.5. Probability premium. For any fixed amount of money x and a positive number ε, the probability premium, denoted by π(x, ε, u), is the excess in winning probability over fair odds that makes the individual indifferent between the certain outcome x and a gamble between the two outcomes x + ε and x − ε. That is,

u(x) = (1/2 + π(x, ε, u)) u(x + ε) + (1/2 − π(x, ε, u)) u(x − ε)     (16)

For any given x and ε we can compute π as follows:

u(x) = (1/2 + π(x, ε, u)) u(x + ε) + (1/2 − π(x, ε, u)) u(x − ε)

     = 1/2 [u(x + ε) + u(x − ε)] + π [u(x + ε) − u(x − ε)]

⇒ u(x) − 1/2 [u(x + ε) + u(x − ε)] = π [u(x + ε) − u(x − ε)]

⇒ π = ( u(x) − 1/2 [u(x + ε) + u(x − ε)] ) / [u(x + ε) − u(x − ε)]

For the example given, we can compute the probability premium needed to make the decision maker indifferent between a certain outcome of 200 [u(200) = 18] and a gamble between 100 and 300 with respective utilities of 10 and 20. In this case, ε = 100. This gives

π = ( u(x) − 1/2 [u(x + ε) + u(x − ε)] ) / [u(x + ε) − u(x − ε)]

  = ( u(200) − 1/2 [u(300) + u(100)] ) / [u(300) − u(100)]

  = ( 18 − 1/2 [20 + 10] ) / [20 − 10]

  = 3 / 10 = 0.3

Checking, we obtain

u(x) = (1/2 + π(x, ε, u)) u(x + ε) + (1/2 − π(x, ε, u)) u(x − ε)

u(200) = (1/2 + 0.3) u(300) + (1/2 − 0.3) u(100) = (0.8)(20) + (0.2)(10) = 18
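The algebra above is easy to check numerically; here is a short sketch (my own) using the example's scaling function and ε = 100:

```python
def u(x):
    return -4 + 0.17 * x - 0.0003 * x**2

def probability_premium(x, eps):
    """pi(x, eps, u) = (u(x) - 0.5*(u(x+eps) + u(x-eps))) / (u(x+eps) - u(x-eps))."""
    return (u(x) - 0.5 * (u(x + eps) + u(x - eps))) / (u(x + eps) - u(x - eps))

pi = probability_premium(200, 100)
print(pi)                                         # ≈ 0.3
# Indifference check: (1/2 + pi)*u(300) + (1/2 - pi)*u(100) should equal u(200) = 18
print((0.5 + pi) * u(300) + (0.5 - pi) * u(100))  # ≈ 18.0
```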

We can examine this graphically in figure 5. Here, u(200) = 18, u(200 − ε) = u(100) = 10, and u(200 + ε) = u(300) = 20. Tracing a horizontal line from 18 on the vertical axis over to the chord that gives the expected utility of the gamble for different probabilities on 200 − ε and 200 + ε shows that the winning probability must lie more than halfway between these two outcomes, and the corresponding vertical line shows that a lottery between 100 and 300 with an expected wealth level of 260 has a utility level of 18. If the decision maker is risk neutral, then u(x) = x and the probability premium is given by

u(x) = (1/2 + π(x, ε, u)) u(x + ε) + (1/2 − π(x, ε, u)) u(x − ε)

⇒ x = (1/2 + π(x, ε, u)) (x + ε) + (1/2 − π(x, ε, u)) (x − ε)

⇒ x = x + 2 π(x, ε, u) ε

⇒ 0 = 2 π(x, ε, u) ε

⇒ π(x, ε, u) = 0 if ε ≠ 0

Now consider the utility function given by the straight line through the points (100, 10) and (300, 20), namely u(x) = x/20 + 5, and let u(x*) be a fixed number based on the chosen value of x*. Using the defining identity,

u(x) = (1/2 + π(x, ε, u)) u(x + ε) + (1/2 − π(x, ε, u)) u(x − ε)

u(200) = 18 = (1/2 + π(x, ε, u)) u(300) + (1/2 − π(x, ε, u)) u(100)

        = 1/2 (u(300) + u(100)) + π(x, ε, u) (u(300) − u(100))

        = 1/2 (30) + π(x, ε, u) (10)

⇒ 3 = π(x, ε, u) (10)

⇒ π(x, ε, u) = 0.3

The point on the x axis associated with this probability level is found from the straight-line utility:

u(x*) = x*/20 + 5

⇒ x* = 20 (u(x*) − 5) = 20 (18 − 5) = 260

4.6. Equivalent characterizations of risk aversion. Suppose the decision maker is an expected utility maximizer with a Bernoulli utility (preference scaling) function u(·) on amounts of money. Then the following are equivalent:

  1. The decision maker is risk averse.
  2. u(·) is concave (u″(x) ≤ 0).
  3. The certainty equivalent satisfies c(F, u) ≤ ∫ x dF(x) for all F(·).
  4. π(x, ε, u) ≥ 0 for all x, ε.

4.7. Risk Aversion Example. Suppose an investor can choose between two assets. Asset one has a random return of z per unit invested and asset two has a certain return of x per unit invested. Assume that the investor allocates α dollars to the first asset and β dollars to the second asset where α + β = wealth (w). Given any particular random return the portfolio pays αz + βx. The utility maximization problem can be written as follows

max_{α,β ≥ 0}  ∫ u(αz + βx) dF(z)

s.t. α + β = w

If we substitute for β from the constraint we obtain

max_α  ∫ u(wx + α(z − x)) dF(z)

s.t. 0 ≤ α ≤ w

or

max_α  ∫ u(wx + α(z − x)) dF(z)

s.t. α ≥ 0

     (w − α) ≥ 0

This is a nonlinear programming problem with two constraints on the decision variable α. The associated Lagrangian is

L = ∫ u(wx + α(z − x)) dF(z) + λ1 α + λ2 (w − α)     (27)

The first order conditions are

∫ u′(wx + α(z − x)) (z − x) dF(z) + λ1 − λ2 = 0

λ1 α = 0

λ2 (w − α) = 0

λ1, λ2 ≥ 0

If α > 0 then λ1 = 0 and we have that

∫ u′(wx + α(z − x)) (z − x) dF(z) = λ2 ≥ 0     (29)

because λ2 ≥ 0. If α < w then λ2 = 0 and we have that

∫ u′(wx + α(z − x)) (z − x) dF(z) = −λ1 ≤ 0     (30)

For this stationary point to be a maximum we need to check the second order conditions. If the objective function is concave and the constraints are also concave, the stationary point will be a maximum. The objective function is concave because u is concave, which is evident from differentiating twice with respect to α:

∫ u″(wx + α(z − x)) (z − x)² dF(z) ≤ 0     (31)

The constraints are linear and therefore concave.

Now consider the case where the risky asset has an expected return greater than x, that is, ∫ z dF(z) > x, and consider the possibility of α = 0 as the solution to this problem. If α = 0 we obtain

∫ u′(wx) (z − x) dF(z) = u′(wx) ( ∫ z dF(z) − x ) > 0,

which is inconsistent with condition (30), so α = 0 cannot be optimal: a risk-averse investor always holds a strictly positive amount of a risky asset whose expected return exceeds the certain return.
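A numerical sketch of this portfolio problem (my own construction: CARA utility, a two-point distribution for z, and a simple grid search over α in [0, w]; all parameter values are assumed). It illustrates that the optimal α is strictly positive when ∫ z dF(z) > x:

```python
import math

w, x = 100.0, 1.05            # wealth and certain return per unit invested (assumed values)
z_vals = [0.9, 1.3]           # two-point distribution for the risky return z
z_probs = [0.5, 0.5]          # E[z] = 1.1 > x, so a positive risky position should be optimal
k = 0.05                      # CARA coefficient for u(c) = -exp(-k*c)

def expected_utility(alpha):
    """EU(alpha) = sum over z of p(z) * u(w*x + alpha*(z - x))."""
    return sum(p * -math.exp(-k * (w * x + alpha * (z - x))) for p, z in zip(z_probs, z_vals))

# Grid search over the feasible set 0 <= alpha <= w
alphas = [i * w / 1000 for i in range(1001)]
best = max(alphas, key=expected_utility)
print(best)                   # strictly positive (about 25.5 here), consistent with E[z] > x
```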

5. MEASUREMENT OF RISK AVERSION

5.1. Arrow-Pratt coefficient of absolute risk aversion. Given a twice differentiable preference scaling function u(·) for money, the Arrow-Pratt coefficient of absolute risk aversion at the point x is defined as

rA(x) = −u″(x) / u′(x)

With risk neutrality, u is linear and u″ = 0, so rA measures the curvature of the preference scaling function. The use of u′ in the denominator makes rA invariant to positive linear transformations of u. Consider figure 6, where u1(·) is less curved than u2(·). It is obvious that the certainty equivalent is less for the more curved function.

FIGURE 6. Finding the Certainty Equivalent (u1(x) and u2(x) plotted against x)
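A small sketch (mine) that approximates rA(x) = −u″(x)/u′(x) by central finite differences and evaluates it for the example's quadratic scaling function:

```python
def arrow_pratt(u, x, h=1e-3):
    """Numerical Arrow-Pratt coefficient r_A(x) = -u''(x)/u'(x) via central differences."""
    u1 = (u(x + h) - u(x - h)) / (2 * h)           # first derivative
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2   # second derivative
    return -u2 / u1

quad = lambda x: -4 + 0.17 * x - 0.0003 * x**2
print(arrow_pratt(quad, 150))   # exact value 0.0006/(0.17 - 0.0006*150) = 0.0075
print(arrow_pratt(quad, 250))   # exact value 0.0006/(0.17 - 0.0006*250) = 0.03
```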

The coefficient of risk aversion can also be related to the probability premium by differentiating the defining identity (equation 16) twice with respect to ε and then evaluating at ε = 0. Taking the first derivative gives

u(x) = (1/2 + π(x, ε, u)) u(x + ε) + (1/2 − π(x, ε, u)) u(x − ε)

0 = (dπ(x, ε, u)/dε) u(x + ε) + (1/2 + π(x, ε, u)) u′(x + ε) − (dπ(x, ε, u)/dε) u(x − ε) − (1/2 − π(x, ε, u)) u′(x − ε)

  = π′ u(x + ε) + 1/2 u′(x + ε) + π u′(x + ε) − π′ u(x − ε) − 1/2 u′(x − ε) + π u′(x − ε)     (35)

where π′ denotes dπ(x, ε, u)/dε. Differentiating again gives

0 = d/dε [ π′ u(x + ε) + 1/2 u′(x + ε) + π u′(x + ε) − π′ u(x − ε) − 1/2 u′(x − ε) + π u′(x − ε) ]

  = π″ u(x + ε) + π′ u′(x + ε) + 1/2 u″(x + ε) + π′ u′(x + ε) + π u″(x + ε)
    − π″ u(x − ε) + π′ u′(x − ε) + 1/2 u″(x − ε) + π′ u′(x − ε) − π u″(x − ε)     (36)

Now evaluate at ε = 0 to obtain

0 = π″ u(x) + π′ u′(x) + 1/2 u″(x) + π′ u′(x) + π u″(x)
    − π″ u(x) + π′ u′(x) + 1/2 u″(x) + π′ u′(x) − π u″(x)

  = 4 π′ u′(x) + u″(x)

⇒ −u″(x) / u′(x) = 4 π′(0)

Thus rA(x) = 4π′(0), where π′(0) denotes ∂π(x, ε, u)/∂ε evaluated at ε = 0. Note that the utility function can be obtained from rA(·) by integrating twice. The two constants of integration are irrelevant since the Bernoulli utility function is only identified up to positive linear transformations.

5.2. Example with Constant Absolute Risk Aversion (CARA). Let the preference scaling function be given by u(x) = −e^{−kx}, k > 0. This is known as the negative exponential utility function. For this function, u′(x) = k e^{−kx} and u″(x) = −k² e^{−kx}, so rA(x, u) = k for all x. Conversely, starting from rA(x) = k we can recover u:

rA(x) = −u″(x) / u′(x)

⇒ k = −u″(x) / u′(x)

⇒ d(log u′(x)) / dx = −k

⇒ log u′(x) = −kx + log c

⇒ u′(x) = e^{−kx + log c} = e^{−kx} e^{log c} = c e^{−kx}

⇒ u(x) = −(c/k) e^{−kx} + b = −a e^{−kx} + b
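A quick numerical check (mine) that the negative exponential utility has a constant Arrow-Pratt coefficient equal to k, reusing the finite-difference idea sketched above:

```python
import math

def arrow_pratt(u, x, h=1e-4):
    """Numerical r_A(x) = -u''(x)/u'(x) via central differences."""
    u1 = (u(x + h) - u(x - h)) / (2 * h)
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
    return -u2 / u1

k = 0.5
cara = lambda x: -math.exp(-k * x)
for x in (1.0, 5.0, 20.0):
    print(round(arrow_pratt(cara, x), 3))   # approximately 0.5 at every x
```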

5.3. Relative risk aversion. The coefficient of relative risk aversion for a given Bernoulli utility function is given by

rR(x, u) = −x u″(x) / u′(x)

6.2. Definition of Decreasing Relative Risk Aversion (DRRA). The preference scaling function u(·) exhibits decreasing relative risk aversion (DRRA) if rR(x) is a decreasing function of x. Individuals with DRRA become less risk averse with respect to gambles that are proportional to wealth as wealth increases. A person with decreasing relative risk aversion will also exhibit decreasing absolute risk aversion. The converse is not necessarily true.

Proposition 4. The following properties are equivalent:

  1. The Bernoulli utility function exhibits decreasing relative risk aversion, i.e., rR(x, u) is decreasing in x.
  2. Whenever x2 < x1, ũ2(t) = u(t x2) is a concave transformation of ũ1(t) = u(t x1).
  3. Given any risk F(t) on t > 0, the certainty equivalent c̄x defined by u(c̄x) = ∫ u(t x) dF(t) is such that x / c̄x is decreasing in x.