Posterior Distribution - Introduction to Artificial Intelligence - Solved Exam

Main points of this past exam: posterior distributions, conditional probability, algebraic expressions for joint distributions, hidden states, transition diagrams, state transitions, HMM states, and probability tables with missing entries.


1 Bayes’ Nets

Unfortunately, during spring, due to illness and allergies, Billy is unable to distinguish the cause (X) of his symptoms, which could be coughing (C), sneezing (S), and temperature (T). If he is able to determine the cause with reasonable accuracy, it would be beneficial for him to take either "Robitussin DM" or Benadryl. He has asked you to help him formulate his problem as a Bayesian network. At any point in time, Billy can either be sick (X = sick), allergic (X = allergic), or well (X = well).

Figure 1: Bayes’ Net

(a) (2 points) List all independence and conditional independence relationships implied by this Bayes' net.

C ⊥ T | X

S ⊥ X | C, T

(b) (2 points) Write the algebraic expression for P(C, S) in terms of the joint distribution P(X, C, T, S).

P(C, S) = Σ_x Σ_t P(X=x, C, T=t, S)

(c) (2 points) What probability and conditional probability tables must we store for this Bayes' net?

P(X), P(C | X), P(T | X), P(S | C, T)

(d) (2 points) Derive an expression for the posterior distribution over X given whether Billy is coughing (C) and sneezing (S).

P(X | C, S) = P(X, C, S) / P(C, S) = [ Σ_t P(X, C, T=t, S) ] / [ Σ_x Σ_t P(X=x, C, T=t, S) ]

It is also correct to rewrite the above using the fact P(X, C, T, S) = P(X) P(C | X) P(T | X) P(S | C, T).

(e) (2 points) Derive an expression for the likelihood of observing that he is coughing (C) and sneezing (S) given X.

P(C, S | X) = P(X, C, S) / P(X) = [ Σ_t P(X, C, T=t, S) ] / P(X)
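To make parts (b), (d), and (e) concrete, here is a minimal inference-by-enumeration sketch in Python. The table structure P(X), P(C | X), P(T | X), P(S | C, T) comes from part (c), but the numeric values and function names below are made up for illustration; the exam's actual tables are not given here.

```python
# Inference by enumeration for the Bayes' net X -> C, X -> T, (C, T) -> S.
# Table *structure* matches part (c); the numbers are made-up placeholders.

P_X = {"sick": 0.2, "allergic": 0.3, "well": 0.5}                  # P(X)
P_C_given_X = {"sick": 0.8, "allergic": 0.4, "well": 0.1}          # P(C=true | X)
P_T_given_X = {"sick": 0.7, "allergic": 0.1, "well": 0.05}         # P(T=true | X)
P_S_given_CT = {(True, True): 0.9, (True, False): 0.6,             # P(S=true | C, T)
                (False, True): 0.5, (False, False): 0.1}

def joint(x, c, t, s):
    """P(X=x, C=c, T=t, S=s) = P(x) P(c|x) P(t|x) P(s|c,t)."""
    pc = P_C_given_X[x] if c else 1 - P_C_given_X[x]
    pt = P_T_given_X[x] if t else 1 - P_T_given_X[x]
    ps = P_S_given_CT[(c, t)] if s else 1 - P_S_given_CT[(c, t)]
    return P_X[x] * pc * pt * ps

# Part (b): P(C=c, S=s) = sum over x and t of the joint.
def p_cs(c, s):
    return sum(joint(x, c, t, s) for x in P_X for t in (True, False))

# Part (d): P(X | C=c, S=s): numerator sums over t, denominator over x and t.
def posterior_x(c, s):
    num = {x: sum(joint(x, c, t, s) for t in (True, False)) for x in P_X}
    z = sum(num.values())
    return {x: v / z for x, v in num.items()}

# Part (e): P(C=c, S=s | X=x) = sum_t P(x, c, t, s) / P(x)
def likelihood_cs(c, s, x):
    return sum(joint(x, c, t, s) for t in (True, False)) / P_X[x]

print(p_cs(True, True))
print(posterior_x(True, True))
print(likelihood_cs(True, True, "sick"))
```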

2 HMMs

(b) (2 points) Imagine you observe the sequence quiet, sneeze, sneeze. What is the probability that you were well all three days and observed these effects?

P(X1=w, X2=w, X3=w | E1=q, E2=s, E3=s) = P(X1=w, X2=w, X3=w, E1=q, E2=s, E3=s) / P(E1=q, E2=s, E3=s)

P(X1, X2, X3, E1, E2, E3) = P(X1) P(X2 | X1) P(X3 | X2) P(E1 | X1) P(E2 | X2) P(E3 | X3)

P(X1=w, X2=w, X3=w, E1=q, E2=s, E3=s) = 0.6 × 0.5 × 0.5 × 0.9 × 0.1 × 0.1 = 0.00135

P(E1=q, E2=s, E3=s) = Σ_{x1} Σ_{x2} Σ_{x3} P(X1=x1, X2=x2, X3=x3, E1=q, E2=s, E3=s)

P(X1=w, X2=w, X3=w | E1=q, E2=s, E3=s) = 0.00135 / P(E1=q, E2=s, E3=s)
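A short sketch of this computation follows. Only the factors used in the solution above are taken from it (prior of well = 0.6, P(well | well) = 0.5, P(quiet | well) = 0.9, P(sneeze | well) = 0.1); the full transition and emission tables come from this problem's part (a), which is not shown here, so the remaining entries below are placeholder values.

```python
from itertools import product

states = ["well", "allergy", "cold"]

# Placeholder tables: the "well" entries match the factors used in the solution
# above; the other numbers are made up for illustration only.
prior = {"well": 0.6, "allergy": 0.2, "cold": 0.2}
trans = {"well":    {"well": 0.5, "allergy": 0.3, "cold": 0.2},
         "allergy": {"well": 0.3, "allergy": 0.6, "cold": 0.1},
         "cold":    {"well": 0.3, "allergy": 0.1, "cold": 0.6}}
emit = {"well":    {"quiet": 0.9, "sneeze": 0.1},
        "allergy": {"quiet": 0.2, "sneeze": 0.8},
        "cold":    {"quiet": 0.3, "sneeze": 0.7}}

evidence = ["quiet", "sneeze", "sneeze"]

def joint(xs, es):
    """P(X1, X2, X3, E1, E2, E3) for one hidden-state sequence xs."""
    p = prior[xs[0]] * emit[xs[0]][es[0]]
    for prev, cur, e in zip(xs, xs[1:], es[1:]):
        p *= trans[prev][cur] * emit[cur][e]
    return p

# Joint probability of being well all three days with this evidence:
p_www = joint(("well", "well", "well"), evidence)   # 0.6*0.9 * 0.5*0.1 * 0.5*0.1 = 0.00135

# Evidence probability: sum the joint over all 3^3 hidden-state sequences.
p_evidence = sum(joint(xs, evidence) for xs in product(states, repeat=3))

print(p_www, p_www / p_evidence)
```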

(c) (2 points) What is the posterior distribution over your state on day 2 (X2) if E1 = quiet, E2 = sneeze?

This is the filtering problem.

P(X1 | E1=q) ∝ P(X1) P(E1=q | X1)

Running one step of filtering, we have (in the order of well, allergy, cold): P(X1 | E1=q) = [0.885, 0.041, 0.074]

P(X2 | E1=q, E2=s) ∝ P(X2, E1=q, E2=s)
= Σ_{x1} P(X2, X1=x1, E1=q, E2=s)
= Σ_{x1} P(E2=s | X2) P(X2 | X1=x1) P(X1=x1) P(E1=q | X1=x1)
= P(E2=s | X2) Σ_{x1} P(X2 | X1=x1) P(X1=x1) P(E1=q | X1=x1)
∝ P(E2=s | X2) Σ_{x1} P(X2 | X1=x1) P(X1=x1 | E1=q)

Using the equation above, after two steps of filtering, we have P(X2 | E1=q, E2=s) = [0.104, 0.580, 0.316]

(d) (2 points) What is the posterior distribution over your state on day 3 (X3) if E1 = quiet, E2 = sneeze, E3 = sneeze?

We simply need to compute another step of filtering, using our answer from part (c) above.

P(X3 | E1=q, E2=s, E3=s) ∝ P(E3=s | X3) Σ_{x2} P(X3 | X2=x2) P(X2 | E1=q, E2=s)

After normalizing, we have P(X3 | E1=q, E2=s, E3=s) = [0.077, 0.652, 0.271]
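Parts (c) and (d) are exactly the forward (filtering) recursion, so here is a small sketch of that update. It reuses the same placeholder prior, transition, and emission tables as the previous snippet (redefined so the block stands alone); because those are not the exam's actual tables, the printed distributions match the solution's [0.104, 0.580, 0.316] and [0.077, 0.652, 0.271] only in structure, not in value.

```python
states = ["well", "allergy", "cold"]

# Same placeholder tables as in the previous sketch (not the exam's real ones).
prior = {"well": 0.6, "allergy": 0.2, "cold": 0.2}
trans = {"well":    {"well": 0.5, "allergy": 0.3, "cold": 0.2},
         "allergy": {"well": 0.3, "allergy": 0.6, "cold": 0.1},
         "cold":    {"well": 0.3, "allergy": 0.1, "cold": 0.6}}
emit = {"well":    {"quiet": 0.9, "sneeze": 0.1},
        "allergy": {"quiet": 0.2, "sneeze": 0.8},
        "cold":    {"quiet": 0.3, "sneeze": 0.7}}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def forward_step(belief, obs):
    """One filtering update:
    P(X_t | e_1:t) ∝ P(e_t | X_t) Σ_{x_{t-1}} P(X_t | x_{t-1}) P(x_{t-1} | e_1:t-1)."""
    predicted = {x: sum(trans[xp][x] * belief[xp] for xp in states) for x in states}
    return normalize({x: emit[x][obs] * predicted[x] for x in states})

# Day 1: condition the prior on E1 = quiet, then filter forward.
b1 = forward_step_init = normalize({x: prior[x] * emit[x]["quiet"] for x in states})
b2 = forward_step(b1, "sneeze")   # part (c): P(X2 | quiet, sneeze)
b3 = forward_step(b2, "sneeze")   # part (d): P(X3 | quiet, sneeze, sneeze)
print(b1, b2, b3, sep="\n")
```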

3 MDPs

Figure 4: MDP

Consider the above MDP, representing a robot on a circular wheel. The wheel is divided into eight states and the available actions are to move clockwise or counterclockwise. The robot has a poor sense of direction and will move with probability 0.6 in the intended direction and with probability 0.4 in the opposite direction. All states have reward zero, except the terminal states, which have rewards −1 and +1 as shown. The discount factor is γ = 0.9.

(5 points) Compute the numeric values of the state-value function V(s) for states s1 through s6 (compute V(s1), V(s2), ...). Show your work below.

By symmetry, we know that V(s1) = V(s4), V(s2) = V(s5), and V(s3) = V(s6). To compute V(s1), V(s2), and V(s3), we could use value iteration. However, it is much easier to solve directly using the Bellman equations. It is clear that the optimal policy is to move toward the +1 reward.

V(s1) = 0.4 γ (−1) + 0.6 γ V(s2)
V(s2) = 0.4 γ V(s1) + 0.6 γ V(s3)
V(s3) = 0.4 γ V(s2) + 0.6 γ (+1)

Solving this system of equations yields

V(s1) = −0.217, V(s2) = 0.265, V(s3) = 0.635
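The three Bellman equations above form a small linear system, so the reported values can be checked directly. The sketch below just encodes those equations as written (with γ = 0.9) and solves them with NumPy.

```python
import numpy as np

gamma = 0.9

# Bellman equations under the policy "move toward the +1 reward":
#   V(s1) = 0.4*gamma*(-1)  + 0.6*gamma*V(s2)
#   V(s2) = 0.4*gamma*V(s1) + 0.6*gamma*V(s3)
#   V(s3) = 0.4*gamma*V(s2) + 0.6*gamma*(+1)
# Rearranged into A @ [V1, V2, V3] = b:
A = np.array([[1.0,          -0.6 * gamma,  0.0],
              [-0.4 * gamma,  1.0,         -0.6 * gamma],
              [0.0,          -0.4 * gamma,  1.0]])
b = np.array([-0.4 * gamma, 0.0, 0.6 * gamma])

V = np.linalg.solve(A, b)
print(np.round(V, 3))   # approximately [-0.217, 0.265, 0.635]
```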