
Covariance and Correlation

Math 217 Probability and Statistics

Prof. D. Joyce, Fall 2014

Covariance. Let X and Y be joint random variables. Their covariance Cov(X, Y) is defined by

Cov(X, Y) = E((X − μX)(Y − μY)).

Notice that the variance of X is just the covariance of X with itself

Var(X) = E((X − μX)^2) = Cov(X, X)

Analogous to the identity for variance

Var(X) = E(X^2) − μX^2

there is an identity for covariance

Cov(X, Y) = E(XY) − μX μY

Here’s the proof:

Cov(X, Y) = E((X − μX)(Y − μY))
          = E(XY − μX Y − X μY + μX μY)
          = E(XY) − μX E(Y) − E(X) μY + μX μY
          = E(XY) − μX μY
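As a sanity check, the identity is easy to verify numerically. Below is a minimal Python sketch (not from the handout); the joint distribution is an arbitrary made-up example:

    # Check that Cov(X, Y) computed from the definition matches the
    # identity E(XY) - mu_X mu_Y. The pmf here is a made-up example.
    pmf = {(0, 0): 0.1, (0, 1): 0.2, (0, 2): 0.1,
           (1, 0): 0.2, (1, 1): 0.1, (1, 2): 0.3}   # probabilities sum to 1

    def E(f):
        """Expectation of f(X, Y) under the joint pmf."""
        return sum(p * f(x, y) for (x, y), p in pmf.items())

    mu_X = E(lambda x, y: x)
    mu_Y = E(lambda x, y: y)

    cov_def      = E(lambda x, y: (x - mu_X) * (y - mu_Y))   # definition
    cov_identity = E(lambda x, y: x * y) - mu_X * mu_Y       # identity

    assert abs(cov_def - cov_identity) < 1e-12
    print(cov_def, cov_identity)   # same value both ways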

Covariance can be positive, zero, or negative. Positive indicates that there’s an overall tendency that when one variable increases, so does the other, while negative indicates an overall tendency that when one increases the other decreases. If X and Y are independent variables, then their covariance is 0:

Cov(X, Y) = E(XY) − μX μY = E(X)E(Y) − μX μY = 0

The converse, however, is not always true. Cov(X, Y) can be 0 for variables that are not independent. For an example where the covariance is 0 but X and Y aren’t independent, let there be three outcomes, (−1, 1), (0, −2), and (1, 1), all with the same probability 1/3. They’re clearly not independent since the value of X determines the value of Y. Note that μX = 0 and μY = 0, so

Cov(X, Y) = E((X − μX)(Y − μY))
          = E(XY)
          = (1/3)(−1) + (1/3)(0) + (1/3)(1)
          = 0
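The example is easy to reproduce in code. Here is a short Python check using exact rational arithmetic:

    from fractions import Fraction

    # The example above: outcomes (-1, 1), (0, -2), (1, 1), each with
    # probability 1/3. Y is determined by X, yet the covariance is 0.
    outcomes = [(-1, 1), (0, -2), (1, 1)]
    p = Fraction(1, 3)

    mu_X = sum(p * x for x, y in outcomes)   # 0
    mu_Y = sum(p * y for x, y in outcomes)   # 0
    cov = sum(p * (x - mu_X) * (y - mu_Y) for x, y in outcomes)

    print(mu_X, mu_Y, cov)   # 0 0 0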

We’ve already seen that when X and Y are independent, the variance of their sum is the sum of their variances. There’s a general formula to deal with their sum when they aren’t independent. A covariance term appears in that formula.

Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)

Here’s the proof:

Var(X + Y) = E((X + Y)^2) − (E(X + Y))^2
           = E(X^2 + 2XY + Y^2) − (μX + μY)^2
           = E(X^2) + 2E(XY) + E(Y^2) − μX^2 − 2μX μY − μY^2
           = (E(X^2) − μX^2) + 2(E(XY) − μX μY) + (E(Y^2) − μY^2)
           = Var(X) + 2 Cov(X, Y) + Var(Y)
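Since the proof is purely algebraic, the same identity holds exactly (up to rounding) for the empirical population moments of any data set, which makes it easy to check numerically. A minimal sketch, assuming NumPy is available; the data are arbitrary, deliberately dependent samples:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    y = 0.5 * x + rng.normal(size=10_000)   # y depends on x

    # Population covariance and variances (np.var uses ddof=0 by default).
    cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
    lhs = np.var(x + y)                              # Var(X + Y)
    rhs = np.var(x) + np.var(y) + 2 * cov_xy

    assert np.isclose(lhs, rhs)
    print(lhs, rhs)   # equal up to floating-point rounding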

Bilinearity of covariance. Covariance is linear in each coordinate. That means two things. First, you can pass constants through either coordinate:

Cov(aX, Y) = a Cov(X, Y) = Cov(X, aY).

Second, it preserves sums in each coordinate:

Cov(X1 + X2, Y) = Cov(X1, Y) + Cov(X2, Y)

and

Cov(X, Y1 + Y2) = Cov(X, Y1) + Cov(X, Y2).

Here’s a proof of the first equation in the first condition:

Cov(aX, Y) = E((aX − E(aX))(Y − E(Y)))
           = E(a(X − E(X))(Y − E(Y)))
           = a E((X − E(X))(Y − E(Y)))
           = a Cov(X, Y)

The proof of the second condition is also straightforward.
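Both conditions can also be checked numerically on empirical covariances, since bilinearity is again an algebraic identity. A minimal NumPy sketch; the data vectors and the constant a are arbitrary:

    import numpy as np

    def cov(u, v):
        """Population covariance of two equal-length data vectors."""
        return np.mean((u - u.mean()) * (v - v.mean()))

    rng = np.random.default_rng(1)
    x1, x2, y = rng.normal(size=(3, 1_000))
    a = 3.7   # an arbitrary constant

    # Constants pass through either coordinate ...
    assert np.isclose(cov(a * x1, y), a * cov(x1, y))
    # ... and sums split in each coordinate.
    assert np.isclose(cov(x1 + x2, y), cov(x1, y) + cov(x2, y))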

Correlation. The correlation ρXY of two joint variables X and Y is a normalized version of their covariance. It’s defined by the equation

ρXY = Cov(X, Y) / (σX σY)
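Here is a minimal NumPy sketch of the definition, cross-checked against NumPy’s built-in corrcoef; the sample data are an arbitrary example:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=5_000)
    y = 2.0 * x + rng.normal(size=5_000)   # correlated with x

    cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
    rho = cov_xy / (x.std() * y.std())     # Cov(X, Y) / (sigma_X sigma_Y)

    print(rho)                      # hand-rolled computation
    print(np.corrcoef(x, y)[0, 1])  # NumPy's built-in gives the same value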

Note that independent variables have 0 correlation as well as 0 covariance. By dividing by the product σX σY of the standard deviations, the correlation becomes bounded between plus and minus 1.

−1 ≤ ρXY ≤ 1.

There are various ways you can prove that inequality. Here’s one. We’ll start by proving

0 ≤ Var(X/σX ± Y/σY) = 2(1 ± ρXY).

There are actually two equations there, and we can prove them at the same time. First note that the “0 ≤” parts follow from the fact that variance is nonnegative. Next use the property proved above about the variance of a sum.

Var(X/σX ± Y/σY) = Var(X/σX) + Var(±Y/σY) + 2 Cov(X/σX, ±Y/σY)

Now use the fact that Var(cX) = c^2 Var(X) to rewrite that as

(1/σX^2) Var(X) + (1/σY^2) Var(±Y) + 2 Cov(X/σX, ±Y/σY)

But Var(X) = σX^2 and Var(−Y) = Var(Y) = σY^2, so that equals

2 + 2 Cov(X/σX, ±Y/σY)

By the bilinearity of covariance, that equals

2 ± (2/(σX σY)) Cov(X, Y) = 2 ± 2ρXY

and we’ve shown that

0 ≤ 2(1 ± ρXY).

Next, divide by 2 and move one term to the other side of the inequality to get

∓ρXY ≤ 1,

so −1 ≤ ρXY ≤ 1.
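The key equality in the proof, Var(X/σX ± Y/σY) = 2(1 ± ρXY), can be confirmed numerically as well, since it holds exactly (up to rounding) for empirical population moments. A minimal NumPy sketch with arbitrary, negatively correlated data:

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(size=2_000)
    y = -0.8 * x + rng.normal(size=2_000)   # negatively correlated with x

    sx, sy = x.std(), y.std()
    rho = np.mean((x - x.mean()) * (y - y.mean())) / (sx * sy)

    # Var(X/sx + Y/sy) = 2(1 + rho) and Var(X/sx - Y/sy) = 2(1 - rho).
    assert np.isclose(np.var(x / sx + y / sy), 2 * (1 + rho))
    assert np.isclose(np.var(x / sx - y / sy), 2 * (1 - rho))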

This exercise should remind you of the same kind of thing that goes on in linear algebra. In fact, it is the same thing exactly. Take a set of real-valued random variables, not necessarily independent. Their linear combinations form a vector space. Their covariance is the inner product (also called the dot product or scalar product) of two vectors in that space.

X · Y = Cov(X, Y)

The norm ‖X‖ of X is the square root of ‖X‖^2 defined by

‖X‖^2 = X · X = Cov(X, X) = Var(X) = σX^2

and so the angle θ between X and Y is defined by

cos θ = (X · Y) / (‖X‖ ‖Y‖) = Cov(X, Y) / (σX σY) = ρXY

that is, θ is the arccosine of the correlation ρXY.
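As an illustration, the angle can be computed directly from sample data. In the sketch below (an arbitrary construction, not from the handout), y is x plus independent noise of the same variance, so ρXY ≈ 1/√2 and θ should come out near 45°:

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.normal(size=50_000)
    y = x + rng.normal(size=50_000)   # rho is about 1/sqrt(2)

    rho = np.corrcoef(x, y)[0, 1]
    theta = np.arccos(rho)            # angle in radians

    print(rho, np.degrees(theta))     # roughly 0.707 and 45 degrees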

Math 217 Home Page at http://math.clarku.edu/~djoyce/ma217/