



Tanika Mukopadhyay taught us Probability at Homi Bhabha National Institute and gave us assignments so that we could practice what we learned in the form of problems. Here are the solutions to those problems. The main emphasis is on the following points: problems on partial fraction expansion, choice of arbitrary entries, and mean square error.
Typology: Exercises
a)
From Eq. 5.3 we have:

var(X + Y + Z) = var X + var Y + var Z + 2 cov(X, Y) + 2 cov(X, Z) + 2 cov(Y, Z)

b)
From Eq. 5.3, with X, Y, and Z uncorrelated the covariance terms drop out:

var(X + Y + Z) = var X + var Y + var Z = 3
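The identity in (a) can be checked numerically: for any covariance matrix K of (X, Y, Z), var(X + Y + Z) equals the sum of all entries of K, i.e. the quadratic form 1ᵀK1. A minimal sketch (the matrix entries below are arbitrary illustrative values, not from the problem):

```python
import numpy as np

# Hypothetical covariance matrix for (X, Y, Z); values chosen for illustration only.
K = np.array([[2.0, 0.3, -0.5],
              [0.3, 1.0,  0.4],
              [-0.5, 0.4, 3.0]])

# Eq. 5.3: var(X+Y+Z) = var X + var Y + var Z + 2cov(X,Y) + 2cov(X,Z) + 2cov(Y,Z)
expanded = K[0, 0] + K[1, 1] + K[2, 2] + 2 * (K[0, 1] + K[0, 2] + K[1, 2])

# Equivalently, var(1^T X) = 1^T K 1, the sum of all entries of K.
ones = np.ones(3)
quadratic_form = ones @ K @ ones

assert abs(expanded - quadratic_form) < 1e-12
```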
E[S_n] = E[X_1 + X_2 + … + X_n] = Σ_{i=1}^{n} E[X_i] = nμ

var S_n = Σ_{k=1}^{n} var X_k + Σ_{j=1}^{n} Σ_{k≠j} cov(X_j, X_k)

where the single sum is the sum of the diagonal elements of the covariance matrix K and the double sum is the sum of its off-diagonal elements. With cov(X_j, X_k) = ρ^{|j−k|} σ², the covariance matrix is

K =
| σ²          ρσ²         ρ²σ²    ···   ρ^{n−1}σ² |
| ρσ²         σ²          ρσ²     ···   ρ^{n−2}σ² |
| ···                                   ···       |
| ρ^{n−1}σ²   ρ^{n−2}σ²   ···           σ²        |

The diagonal contributes nσ², and each power ρ^k appears 2(n − k) times off the diagonal, so

var S_n = nσ² + 2σ² Σ_{k=1}^{n−1} (n − k) ρ^k
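A quick numerical check of var S_n = nσ² + 2σ² Σ_{k=1}^{n−1} (n − k)ρ^k against the sum of the entries of K built with cov(X_j, X_k) = ρ^{|j−k|}σ²; n, σ², and ρ below are arbitrary test values:

```python
import numpy as np

# Check var(S_n) = n*s2 + 2*s2*sum_{k=1}^{n-1} (n-k)*rho**k when
# cov(X_j, X_k) = s2 * rho**|j-k|.  n, s2, rho are arbitrary test values.
n, s2, rho = 6, 1.7, 0.4

idx = np.arange(n)
K = s2 * rho ** np.abs(idx[:, None] - idx[None, :])   # K[j,k] = s2 * rho^|j-k|

var_Sn = K.sum()   # sum of diagonal plus off-diagonal entries of K
formula = n * s2 + 2 * s2 * sum((n - k) * rho ** k for k in range(1, n))

assert abs(var_Sn - formula) < 1e-9
```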
Problem 1:[GAR]5.
Problem 2:[GAR]5.
E[S_n] = E[X_1 + X_2 + … + X_n] = Σ_{i=1}^{n} E[X_i] = nμ

The same argument as problem 2, with the same covariance matrix K (entries K_{jk} = ρ^{|j−k|} σ²), gives

var S_n = nσ² + 2σ² Σ_{j=1}^{n−1} (n − j) ρ^j

Evaluating the geometric sums:

Σ_{j=1}^{n−1} (n − j) ρ^j = n Σ_{j=1}^{n−1} ρ^j − Σ_{j=1}^{n−1} j ρ^j = ρ [ n(1 − ρ) − (1 − ρ^n) ] / (1 − ρ)²

so

var S_n = nσ² + 2ρσ² [ n(1 − ρ) − (1 − ρ^n) ] / (1 − ρ)²
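The geometric-sum identity Σ_{j=1}^{n−1} (n − j)ρ^j = ρ[n(1 − ρ) − (1 − ρ^n)]/(1 − ρ)² behind the closed form can be verified numerically over several values of n and ρ (all chosen arbitrarily for the check):

```python
import math

# Verify sum_{k=1}^{n-1} (n-k)*rho**k
#   = rho * (n*(1-rho) - (1-rho**n)) / (1-rho)**2  for rho != 1.
for n in (2, 5, 12):
    for rho in (-0.6, 0.3, 0.9):
        brute = sum((n - k) * rho ** k for k in range(1, n))
        closed = rho * (n * (1 - rho) - (1 - rho ** n)) / (1 - rho) ** 2
        assert math.isclose(brute, closed, rel_tol=1e-12, abs_tol=1e-12)
```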
a)

M_Z(s) = αβ / [ (s + α)(s + β) ]

b)

M_Z(s) = a/(α + s) + b/(β + s)

is the partial fraction expansion, where b = αβ/(α − β) and a = αβ/(β − α). Substituting back confirms the expansion:

M_Z(s) = ( αβ/(β − α) ) [ 1/(s + α) − 1/(s + β) ] = αβ / ( s² + (α + β)s + αβ )

Inverting term by term:

f_Z(t) = ( αβ/(β − α) ) e^{−αt} − ( αβ/(β − α) ) e^{−βt} = ( αβ/(β − α) ) ( e^{−αt} − e^{−βt} ),  t ≥ 0
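A numerical sanity check of the resulting density f_Z(t) = (αβ/(β − α))(e^{−αt} − e^{−βt}): it should integrate to 1 and have mean 1/α + 1/β, the mean of a sum of independent exponentials. The rates below are arbitrary illustrative values:

```python
import math

# Sanity check of f_Z(t) = (a*b/(b-a)) * (exp(-a*t) - exp(-b*t)), t >= 0.
# The rates a, b are arbitrary illustrative values (a != b).
a, b = 2.0, 5.0

def f_Z(t):
    return a * b / (b - a) * (math.exp(-a * t) - math.exp(-b * t))

dt = 1e-3
ts = [k * dt for k in range(1, 20000)]     # grid on (0, 20); the tail beyond is negligible
total = sum(f_Z(t) for t in ts) * dt
mean = sum(t * f_Z(t) for t in ts) * dt

assert abs(total - 1.0) < 1e-2             # density integrates to 1
assert abs(mean - (1 / a + 1 / b)) < 1e-2  # E[Z] = 1/a + 1/b
```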
a)
i) Ŷ = 0
ii) Ŷ = 0
iii)

b)
i) p_Y(y) = … for y = −1, 1 ⇒ Ŷ = 1 or Ŷ = −1
p_Y(0) = 1 for y = 0
p_Y(y) = … for y = −1, 1 ⇒ Ŷ = 1 or Ŷ = −1
Problem 3: [GAR]5.
Problem 4: [GAR]5.
Problem 5: [GAR]4.
max p_Y(y) is at y = 1 ⇒ Ŷ = 1
c)
With f_{X,Y}(x, y) = x + y on 0 < x < 1, 0 < y < 1, we have f_X(x) = x + 1/2 and

Ŷ = E[Y | x] = ∫₀¹ y f_{Y|X}(y|x) dy = (1/(x + 1/2)) ∫₀¹ y (x + y) dy = (x/2 + 1/3)/(x + 1/2) = (3x + 2)/(6x + 3)

The mean square error of this predictor is

E[(Y − Ŷ)²] = ∫₀¹ dx ∫₀¹ (y − Ŷ)² (x + y) dy = E[Y²] − E[Ŷ²]

with

E[Y²] = ∫₀¹ ∫₀¹ y² (x + y) dy dx = 5/12

E[Ŷ²] = ∫₀¹ ( (3x + 2)/(6x + 3) )² (x + 1/2) dx = ∫₀¹ (3x + 2)² / (18(2x + 1)) dx = (48 + ln 3)/144

so

E[(Y − Ŷ)²] = 5/12 − (48 + ln 3)/144 = (12 − ln 3)/144 ≈ 0.0757

This is slightly better than the linear predictor.
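A numerical check of the conditional-mean predictor and its mean square error, assuming the joint density f(x, y) = x + y on the unit square (the density these integrals come from); the comparison value 5/66 is the linear predictor's error worked out separately for the same density:

```python
import math

# Numerical check of the MMSE predictor and its error, assuming the joint
# density f(x, y) = x + y on the unit square (0 < x < 1, 0 < y < 1).
def y_hat(x):
    # E[Y | X = x] = (x/2 + 1/3) / (x + 1/2) = (3x + 2) / (6x + 3)
    return (3 * x + 2) / (6 * x + 3)

# Midpoint double Riemann sum of E[(Y - Yhat)^2] = ∫∫ (y - y_hat(x))^2 (x + y) dy dx
N = 400
h = 1.0 / N
mse = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(N):
        y = (j + 0.5) * h
        mse += (y - y_hat(x)) ** 2 * (x + y) * h * h

closed_form = (12 - math.log(3)) / 144   # ≈ 0.0757

assert abs(mse - closed_form) < 1e-4
assert closed_form < 5 / 66   # 5/66 ≈ 0.0758: the linear predictor's MSE for this density
```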
a)
f_X(x) = … , 0 otherwise

X̄ = E[X] = ∫ x f_X(x) dx = …
b)
Problem 7:
In homework set # 11 we saw that if Y = X + W and X and W are independent with

f_W(w) = 1/2 for |w| ≤ 1, 0 otherwise

then

f_{Y|X}(y|x) = 1/2 for x − 1 ≤ y ≤ x + 1, 0 otherwise

so

f_{X,Y}(x, y) = f_X(x) f_{Y|X}(y|x)

Now the MMSE estimator is

X̂ = E[X | Y = y] = ∫ x f_{X|Y}(x|y) dx = (1/f_Y(y)) ∫ x f_{X,Y}(x, y) dx

The region over which f_{X,Y}(x, y) is not zero is depicted in the figure. To find f_Y(y) we have

f_Y(y) = ∫ f_{X,Y}(x, y) dx

So we have to fix y and, for each y, find the range of x.
Carrying this out, there are three ranges of y. In the middle range, x runs over the full interval (y − 1, y + 1):

f_Y(y) = ∫_{y−1}^{y+1} f_{X,Y}(x, y) dx

Since f_{X,Y}(x, y) = f_X(x)/2 is constant in x over this interval (X is uniform there), X given Y = y is uniform on (y − 1, y + 1), and

E[X | Y = y] = (1/f_Y(y)) ∫_{y−1}^{y+1} x f_{X,Y}(x, y) dx = y

In the two edge ranges the interval of x is truncated by the support of X, and E[X | Y = y] is the midpoint of the truncated interval: near the lower end of the support, x runs only from the lower endpoint up to y + 1; near the upper end, x runs from y − 1 to 10, so

f_Y(y) = ∫_{y−1}^{10} f_{X,Y}(x, y) dx  and  E[X | Y = y] = (y − 1 + 10)/2 = (y + 9)/2
Since X and W are independent, the pdf of Y = X + W is the convolution of the pdf of X with the pdf of W. So by computing the convolution of these two densities you could find the pdf of Y, too.
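The convolution remark can be illustrated with a discretized check; f_W is uniform on (−1, 1) as above, while f_X below is an arbitrary stand-in (uniform on (0, 2)) chosen only to make the illustration concrete:

```python
# Discretized check that f_Y = f_X * f_W for Y = X + W with X, W independent.
# f_W is uniform on (-1, 1) as in the problem; f_X is an arbitrary stand-in
# (uniform on (0, 2)), so E[X] = 1 and E[W] = 0.
dt = 5e-3

def f_X(x):
    return 0.5 if 0.0 <= x <= 2.0 else 0.0

def f_W(w):
    return 0.5 if -1.0 <= w <= 1.0 else 0.0

def f_Y(y):
    # convolution integral: f_Y(y) = ∫ f_X(x) f_W(y - x) dx
    return sum(f_X(k * dt) * f_W(y - k * dt) for k in range(0, int(2 / dt) + 1)) * dt

ys = [-1.5 + k * dt for k in range(int(4.5 / dt) + 1)]   # covers the support (-1, 3)
total = sum(f_Y(y) for y in ys) * dt
mean_Y = sum(y * f_Y(y) for y in ys) * dt

assert abs(total - 1.0) < 2e-2    # f_Y integrates to 1
assert abs(mean_Y - 1.0) < 2e-2   # E[Y] = E[X] + E[W] = 1 + 0
```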
c)
We had the linear estimator

X̂ = aY + b

with b = E[X] − a E[Y] and

a = ρ_{X,Y} σ_X / σ_Y
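The coefficients a = ρ σ_X/σ_Y (equivalently cov(X, Y)/var Y) and b = E[X] − a E[Y] can be demonstrated on simulated data; the model below (X standard normal, Y = X + noise) is an arbitrary toy example, and at the optimum the error X − X̂ is orthogonal to Y:

```python
import random

random.seed(0)

# Linear estimator Xhat = a*Y + b on data from an arbitrary toy model:
# X standard normal, Y = X + independent Gaussian noise.
n = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [x + random.gauss(0.0, 0.5) for x in xs]

mx = sum(xs) / n
my = sum(ys) / n
var_y = sum((y - my) ** 2 for y in ys) / n
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

a = cov_xy / var_y   # = rho * sigma_X / sigma_Y
b = mx - a * my

# Orthogonality principle: at the optimum, the error X - Xhat is uncorrelated with Y.
err_dot_y = sum((x - (a * y + b)) * (y - my) for x, y in zip(xs, ys)) / n

assert abs(err_dot_y) < 1e-6
assert abs(a - 1 / 1.25) < 0.05   # theoretical a = cov/var = 1/1.25 for this toy model
```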
c)
Since E[X_i] = 0,

cov(X_i, X_j) = E[X_i X_j] − E[X_i] E[X_j] = E[X_i X_j] = r_ij

So the correlation matrix R_X is the same as the covariance matrix K_X.
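This can be illustrated directly: for exactly zero-mean samples, the matrix of second moments E[X_i X_j] coincides with the covariance matrix. The mixing matrix below is an arbitrary example used only to generate correlated components:

```python
import numpy as np

# For zero-mean X_i, cov(X_i, X_j) = E[X_i X_j], so the correlation matrix R
# (second moments) equals the covariance matrix K.  The mixing matrix is an
# arbitrary illustrative choice.
rng = np.random.default_rng(0)
mix = np.array([[1.0, 0.2, 0.0],
                [0.0, 1.0, 0.5],
                [0.0, 0.0, 1.0]])
samples = rng.standard_normal((100_000, 3)) @ mix
samples -= samples.mean(axis=0)   # enforce zero mean exactly

R = samples.T @ samples / len(samples)          # E[X_i X_j]
K = np.cov(samples, rowvar=False, bias=True)    # cov(X_i, X_j)

assert np.allclose(R, K)
```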
d)
E[(X_1 + X_2 + … + X_n)²] = var(X_1 + … + X_n) = Σ_{i=1}^{n} var X_i = Σ_{i=1}^{n} E[X_i²]

since the X_i are zero-mean and uncorrelated, and with E[X_i²] = 1 for each i,

E[(X_1 + X_2 + … + X_n)²] = n
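A small exhaustive check, using independent fair ±1 signs as an example of zero-mean, unit-second-moment, uncorrelated X_i:

```python
from itertools import product

# For uncorrelated zero-mean X_i with E[X_i^2] = 1, E[(X_1+...+X_n)^2] = n.
# Fair +/-1 signs satisfy these assumptions, and 2^n outcomes are enumerable.
n = 8
total = 0
for signs in product((-1, 1), repeat=n):   # all 2^n equally likely outcomes
    total += sum(signs) ** 2

expected_square = total / 2 ** n
assert expected_square == n   # exactly n
```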