Mean Square Error - Probability and Statistics - Assignment Solution (Exercises in Probability and Statistics)

Sir Tanika Mukopadhyay taught us Probability at Homi Bhabha National Institute. He gave us assignments so that we could practice what we learned in the form of problems. Here are the solutions to those problems. The main emphasis is on the following points: Problem, Partial Fraction Expansion, Choice, Arbitrary, Entries, Mean Square Error.



Problem 1: [GAR]5.1

$$E[X+Y+Z] = E[X] + E[Y] + E[Z] = 0$$

a) From Eq. 5.3 we have:

$$\operatorname{var}(X+Y+Z) = \operatorname{var}(X) + \operatorname{var}(Y) + \operatorname{var}(Z) + 2\operatorname{cov}(X,Y) + 2\operatorname{cov}(X,Z) + 2\operatorname{cov}(Y,Z)$$

$$= 1 + 1 + 1 + 2\left(\tfrac{1}{4}\right) + 2(0) + 2\left(-\tfrac{1}{4}\right) = 3$$

b) From Eq. 5.3 we have:

$$\operatorname{var}(X+Y+Z) = \operatorname{var}(X) + \operatorname{var}(Y) + \operatorname{var}(Z) = 3$$
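As a quick numeric sanity check (an addition, not part of the original solution), the sketch below uses the second moments read off the garbled scan, unit variances with cov(X,Y) = 1/4, cov(X,Z) = 0, cov(Y,Z) = -1/4. Only second moments matter for the variance identity, so a zero-mean Gaussian is used for convenience:

```python
# Numeric sanity check for Problem 1a (an addition, not part of the original
# solution). The covariances are the values read off the garbled scan:
# var = 1 for each variable, cov(X,Y) = 1/4, cov(X,Z) = 0, cov(Y,Z) = -1/4.
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[1.0,  0.25,  0.0],
              [0.25, 1.0,  -0.25],
              [0.0, -0.25,  1.0]])          # covariance matrix of (X, Y, Z)
s = rng.multivariate_normal(np.zeros(3), K, size=200_000).sum(axis=1)

print(s.var())                              # Monte Carlo estimate, close to 3
print(np.trace(K) + 2 * (K[0, 1] + K[0, 2] + K[1, 2]))   # exact value: 3
```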

Problem 2: [GAR]5.2

$$E[S_n] = E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i] = n\mu$$

$$\operatorname{var}(S_n) = \underbrace{\sum_{k=1}^{n}\operatorname{var}(X_k)}_{\substack{\text{sum of diag. elements of}\\ \text{covariance matrix } K}} + \underbrace{\sum_{j=1}^{n}\sum_{\substack{k=1\\ k\neq j}}^{n}\operatorname{cov}(X_j, X_k)}_{\substack{\text{sum of off-diag.}\\ \text{elements of } K}}$$

Each of the $n$ diagonal elements equals $\sigma^2$, and the only nonzero off-diagonal elements are the $2(n-1)$ entries next to the diagonal (the pairs with $|j-k| = 1$, for which $\operatorname{cov}(X_j, X_k) = \rho\sigma^2$), so

$$\operatorname{var}(S_n) = n\sigma^2 + 2(n-1)\rho\sigma^2$$
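A minimal numeric sketch (an addition, not in the original) of the identity used here, var(S_n) = 1ᵀK1, for the banded covariance as read out of the solution; the values of n, ρ, σ below are arbitrary:

```python
# Sketch verifying Problem 2's closed form (an addition, not in the original).
# Assumes the banded covariance read out of the solution: cov(X_j, X_k) equals
# rho*sigma^2 when |j - k| = 1 and 0 otherwise. Then var(S_n) = 1^T K 1 should
# equal n*sigma^2 + 2*(n-1)*rho*sigma^2.
import numpy as np

n, rho, sigma = 8, 0.3, 1.5
K = np.diag(np.full(n, sigma**2))                 # diagonal: var(X_k) = sigma^2
K += np.diag(np.full(n - 1, rho * sigma**2), 1)   # upper band, |j-k| = 1
K += np.diag(np.full(n - 1, rho * sigma**2), -1)  # lower band, |j-k| = 1

ones = np.ones(n)
print(ones @ K @ ones)                              # var(S_n) from the matrix
print(n * sigma**2 + 2 * (n - 1) * rho * sigma**2)  # closed form, same value
```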

Problem 3: [GAR]5.

$$E[S_n] = E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i] = n\mu$$

The same argument as problem 2 applies, now with covariance matrix

$$C_X = \begin{pmatrix}
\sigma^2 & \rho\sigma^2 & \rho^2\sigma^2 & \cdots & \rho^{n-1}\sigma^2 \\
\rho\sigma^2 & \sigma^2 & \rho\sigma^2 & \cdots & \rho^{n-2}\sigma^2 \\
\vdots & & \ddots & & \vdots \\
\rho^{n-1}\sigma^2 & \cdots & & \rho\sigma^2 & \sigma^2
\end{pmatrix}$$

i.e. $\operatorname{cov}(X_j, X_k) = \rho^{|j-k|}\sigma^2$. Summing the diagonal and the off-diagonal bands (the band at offset $k$ has $2(n-k)$ entries, each equal to $\rho^k\sigma^2$):

$$\operatorname{var}(S_n) = n\sigma^2 + 2\sigma^2\sum_{k=1}^{n-1}(n-k)\rho^k$$

Evaluating the geometric sums $\sum_{k=1}^{n-1}\rho^k$ and $\sum_{k=1}^{n-1}k\rho^k$ gives the closed form

$$\operatorname{var}(S_n) = n\sigma^2 + \frac{2\rho\sigma^2\left[\,n(1-\rho) - (1-\rho^n)\,\right]}{(1-\rho)^2}$$
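A short numeric sketch (an addition, not in the original) that the closed form above matches a direct summation of the covariance matrix; n, ρ, σ are arbitrary test values:

```python
# Sketch verifying Problem 3's closed form (an addition, not in the original):
# with cov(X_j, X_k) = rho^|j-k| * sigma^2, the sum of all entries of C_X is
# var(S_n), and it should match the evaluated geometric-sum expression.
import numpy as np

n, rho, sigma = 10, 0.6, 2.0
j, k = np.indices((n, n))
C = sigma**2 * rho ** np.abs(j - k)       # C_X[j, k] = rho^|j-k| * sigma^2

print(C.sum())                            # var(S_n) as the sum of all entries
print(n * sigma**2
      + 2 * rho * sigma**2 * (n * (1 - rho) - (1 - rho**n)) / (1 - rho)**2)
```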

Problem 4: [GAR]5.

a)

$$M_Z(s) = \left(\frac{\alpha}{\alpha - s}\right)\left(\frac{\beta}{\beta - s}\right) = \frac{\alpha\beta}{(\alpha - s)(\beta - s)}$$

b)

$$M_Z(s) = \frac{a}{\alpha - s} + \frac{b}{\beta - s}$$

is the partial fraction expansion, where

$$a = \frac{\alpha\beta}{\beta - \alpha}, \qquad b = \frac{\alpha\beta}{\alpha - \beta}$$

so that

$$M_Z(s) = \frac{\alpha\beta}{\beta - \alpha}\left(\frac{1}{\alpha - s} - \frac{1}{\beta - s}\right)$$

Inverting term by term ($1/(\alpha - s)$ is the transform of $e^{-\alpha t}$) gives

$$f_Z(t) = \frac{\alpha\beta}{\beta - \alpha}\left(e^{-\alpha t} - e^{-\beta t}\right), \qquad t \ge 0$$
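A quick check of the inverted density (an addition, not in the original): if Z is the sum of two independent exponentials with distinct rates α and β, a crude histogram estimate of f_Z should match the formula above. The rate values are arbitrary:

```python
# Sketch check of Problem 4's density (an addition, not in the original).
# Z = X + Y with X ~ Exponential(alpha), Y ~ Exponential(beta) independent,
# alpha != beta; compare an empirical density estimate at a few points t
# against alpha*beta/(beta-alpha) * (exp(-alpha*t) - exp(-beta*t)).
import numpy as np

alpha, beta = 1.0, 3.0
rng = np.random.default_rng(1)
z = rng.exponential(1/alpha, 500_000) + rng.exponential(1/beta, 500_000)

for t in (0.5, 1.0, 2.0):
    h = 0.05                                    # half-width of the window at t
    est = np.mean(np.abs(z - t) < h) / (2 * h)  # empirical density near t
    exact = alpha*beta/(beta-alpha) * (np.exp(-alpha*t) - np.exp(-beta*t))
    print(t, round(est, 4), round(exact, 4))
```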

Problem 5: [GAR]4.

a) i) $\hat{Y} = 0$  ii) $\hat{Y} = 0$  iii) $\hat{Y} = -X$

b) i) $p_{Y|X}(y \mid -1) = \tfrac{1}{2}$ for $y = -1, 1$ $\Rightarrow$ $\hat{Y} = 1$ or $\hat{Y} = -1$ (either maximizes)

ii) $p_{Y|X}(y \mid 0) = 1$ for $y = 0$ $\Rightarrow$ $\hat{Y} = 0$

iii) $p_{Y|X}(y \mid 1) = \tfrac{1}{2}$ for $y = -1, 1$ $\Rightarrow$ $\hat{Y} = 1$ or $\hat{Y} = -1$

The maximum is at $y = 1 \Rightarrow \hat{Y} = 1$, and the mean square error of this estimate is

$$E[(Y - \hat{Y})^2] = E[(Y - 1)^2] = E[Y^2] - 2E[Y] + 1$$

c) Here $f_{X,Y}(x,y) = x + y$ for $0 \le x \le 1$, $0 \le y \le 1$, so $f_X(x) = x + \tfrac{1}{2}$ and

$$\hat{Y} = E[Y \mid x] = \int_0^1 y\, f_{Y|X}(y \mid x)\, dy = \frac{\int_0^1 y\,(x+y)\, dy}{x + \tfrac{1}{2}} = \frac{\tfrac{x}{2} + \tfrac{1}{3}}{x + \tfrac{1}{2}}$$

By the orthogonality of the error to the estimate,

$$E[(Y - \hat{Y})^2] = E[(Y - \hat{Y})\,Y] = \int_0^1\!\!\int_0^1 y^2 (x+y)\, dy\, dx \;-\; \int_0^1 \hat{Y}(x)\left(\int_0^1 y\,(x+y)\, dy\right) dx$$

$$= \frac{5}{12} - \int_0^1 \frac{\left(\tfrac{x}{2} + \tfrac{1}{3}\right)^2}{x + \tfrac{1}{2}}\, dx = \frac{5}{12} - \left(\frac{1}{3} + \frac{\ln 3}{144}\right) = \frac{1}{12} - \frac{\ln 3}{144} \approx 0.0757$$

where $\int_0^1 \frac{dx}{x + 1/2} = \ln\tfrac{3}{2} - \ln\tfrac{1}{2} = \ln 3$.

This is slightly better than the linear predictor.
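A numeric sketch of this result (an addition, not in the original): sample from f(x, y) = x + y on the unit square by rejection, apply the conditional-mean estimator derived above, and compare the empirical MSE with 1/12 - ln(3)/144:

```python
# Sketch check of part c) (an addition, not in the original).
import numpy as np

rng = np.random.default_rng(2)
N = 2_000_000
x = rng.uniform(0, 1, N)
y = rng.uniform(0, 1, N)
keep = rng.uniform(0, 2, N) < (x + y)      # accept with probability (x+y)/2
x, y = x[keep], y[keep]

yhat = (x/2 + 1/3) / (x + 1/2)             # E[Y | X = x] from above
print(np.mean((y - yhat)**2))              # Monte Carlo MSE
print(1/12 - np.log(3)/144)                # exact value, ~0.075703
```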

Problem 7:

a)

$$f_X(x) = \begin{cases} \tfrac{1}{10} & 0 \le x \le 10 \\ 0 & \text{otherwise} \end{cases}$$

With no observation, the MMSE estimate is the mean:

$$\hat{X} = E[X] = \int_0^{10} x f_X(x)\, dx = \left[\frac{x^2}{20}\right]_0^{10} = 5$$

b) In homework set # 11 we saw that if $Y = X + W$ and $X$ & $W$ are independent, then $f_{Y|X}(y \mid x) = f_W(y - x)$. Here

$$f_W(w) = \begin{cases} \tfrac{1}{2} & |w| \le 1 \\ 0 & \text{otherwise} \end{cases}$$

So:

$$f_{Y|X}(y \mid x) = \begin{cases} \tfrac{1}{2} & x - 1 \le y \le x + 1 \\ 0 & \text{otherwise} \end{cases}$$

$$f_{XY}(x,y) = f_X(x)\, f_{Y|X}(y \mid x) = \tfrac{1}{20}, \qquad 0 \le x \le 10,\; x-1 \le y \le x+1$$

Now the MMSE estimator is

$$\hat{X} = E[X \mid Y = y] = \int x\, f_{X|Y}(x \mid y)\, dx = \frac{\int x\, f_{XY}(x,y)\, dx}{f_Y(y)}$$

The region over which $f_{XY}(x,y)$ is not zero is depicted [figure omitted]. To find $f_Y(y)$ we have

$$f_Y(y) = \int f_{XY}(x,y)\, dx$$

so we fix $y$, and for each $y$ we find the range of $x$.

For $-1 \le y \le 1$ ($x$ ranges over $0 \le x \le y+1$):

$$f_Y(y) = \int_0^{y+1} \tfrac{1}{20}\, dx = \frac{y+1}{20}, \qquad E[X \mid Y = y] = \frac{\int_0^{y+1} \tfrac{x}{20}\, dx}{(y+1)/20} = \frac{(y+1)^2/40}{(y+1)/20} = \frac{y+1}{2}$$

For $1 \le y \le 9$ ($x$ ranges over $y-1 \le x \le y+1$):

$$f_Y(y) = \int_{y-1}^{y+1} \tfrac{1}{20}\, dx = \frac{1}{10}, \qquad E[X \mid Y = y] = 10\int_{y-1}^{y+1} \tfrac{x}{20}\, dx = y$$

For $9 \le y \le 11$ ($x$ ranges over $y-1 \le x \le 10$):

$$f_Y(y) = \int_{y-1}^{10} \tfrac{1}{20}\, dx = \frac{11-y}{20}, \qquad E[X \mid Y = y] = \frac{\int_{y-1}^{10} \tfrac{x}{20}\, dx}{(11-y)/20} = \frac{100 - (y-1)^2}{2(11-y)} = \frac{y+9}{2}$$

Since X and W are independent, the pdf of Y = X + W is the convolution of the pdf of X with the pdf of W, so you could also find f_Y(y) by computing that convolution.
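A simulation sketch of the piecewise estimator (an addition, not in the original), under the reading X ~ Uniform[0, 10] and W ~ Uniform[-1, 1] inferred from the scan: bin the samples by y and compare the empirical conditional mean with the formula above:

```python
# Sketch check of Problem 7b (an addition, not in the original).
import numpy as np

rng = np.random.default_rng(3)
N = 2_000_000
x = rng.uniform(0, 10, N)
y = x + rng.uniform(-1, 1, N)

def xhat(v):                # the piecewise conditional mean derived above
    if v <= 1:
        return (v + 1) / 2
    if v <= 9:
        return v
    return (v + 9) / 2

for y0 in (0.0, 0.5, 5.0, 10.0):
    sel = np.abs(y - y0) < 0.05            # samples with Y near y0
    print(y0, round(x[sel].mean(), 3), round(xhat(y0), 3))
```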

c) We had $\hat{X} = aY + b$, where

$$b = E[X] - aE[Y], \qquad a = \rho_{X,Y}\,\frac{\sigma_X}{\sigma_Y}$$
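A sketch comparing the linear estimator with the conditional mean (an addition, not in the original), using the same assumed X ~ Uniform[0, 10], W ~ Uniform[-1, 1] setup:

```python
# Sketch for Problem 7c (an addition, not in the original): estimate the linear
# MMSE coefficients a = cov(X, Y)/var(Y) and b = E[X] - a*E[Y], then compare
# the linear estimator's MSE with the conditional-mean estimator's.
import numpy as np

rng = np.random.default_rng(4)
N = 1_000_000
x = rng.uniform(0, 10, N)
y = x + rng.uniform(-1, 1, N)

a = np.cov(x, y, ddof=0)[0, 1] / y.var()   # rho * sigma_X / sigma_Y
b = x.mean() - a * y.mean()
print(a, b)                                # roughly a ~ 0.96, b ~ 0.19 here

print(np.mean((x - (a*y + b))**2))         # linear MSE, ~0.32
xhat = np.where(y <= 1, (y + 1)/2, np.where(y <= 9, y, (y + 9)/2))
print(np.mean((x - xhat)**2))              # conditional-mean MSE, ~0.30
```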

$$C_X = I$$

c) Since $E[X_i] = 0$,

$$\operatorname{cov}(X_i, X_j) = E[X_i X_j] - E[X_i]\,E[X_j] = E[X_i X_j] = r_{ij}$$

So the correlation matrix is the same as the covariance matrix:

$$\Rightarrow R_X = C_X = I$$

d)

$$E\left[X_1^2 + X_2^2 + \cdots + X_n^2\right] = \sum_{i=1}^{n} E[X_i^2] = \sum_{i=1}^{n}\left(\operatorname{var}(X_i) + \left(E[X_i]\right)^2\right)$$

and since $E[X_i] = 0$,

$$E\left[X_1^2 + X_2^2 + \cdots + X_n^2\right] = \sum_{i=1}^{n}\operatorname{var}(X_i) = n$$
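A one-line simulation of this last result (an addition, not in the original): with zero-mean components and identity covariance, the expected squared norm is n:

```python
# Sketch check of part d) (an addition, not in the original): with E[X_i] = 0
# and C_X = I, the expectation E[X_1^2 + ... + X_n^2] equals n.
import numpy as np

rng = np.random.default_rng(5)
n = 6
X = rng.standard_normal((500_000, n))    # zero mean, identity covariance
print(np.mean((X**2).sum(axis=1)))       # close to n = 6
```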