
Least Squares Estimation in Linear Models - Prof. Grzegorz A. Rempala, Study notes of Statistics

A lecture note from STAT 9220, a biostatistics course at the Medical College of Georgia. The lecture covers the general linear model and its estimation by the method of least squares: the definition of the least squares estimator (LSE), the normal equation, and the assumptions of linear models.

STAT 9220
Lecture 10
LSE in Linear Models
Greg Rempala
Department of Biostatistics
Medical College of Georgia
Mar 17, 2009


One of the most useful statistical models is the general linear model

X_i = β^T Z_i + ε_i,  i = 1, ..., n,  (10.1)

where X_i is the ith observation (response); β is a p-vector of unknown parameters (p < n); Z_i is the ith value of a p-vector of explanatory variables (or covariates); ε_1, ..., ε_n are random errors (not observed). The data consist of (X_1, Z_1), ..., (X_n, Z_n).

Estimation of β is the problem of interest in (10.1). Let X = (X_1, ..., X_n)^T, ε = (ε_1, ..., ε_n)^T, and let Z be the n × p matrix whose ith row is the vector Z_i^T, i = 1, ..., n. A matrix form of model (10.1) is

X = Zβ + ε.  (10.2)

Definition 10.1.1. Suppose that the range of β in model (10.2) is B ⊂ R^p (typically B = R^p). A least squares estimator (LSE) of β is defined to be any β̂ ∈ B such that

||X − Zβ̂||² = min_{b ∈ B} ||X − Zb||².

For any l ∈ R^p, l^T β̂ is called an LSE of l^T β.

Note:

||X − Zb||² = (X − Zb)^T (X − Zb) = X^T X − X^T Zb − b^T Z^T X + b^T Z^T Zb.

Differentiating with respect to b and setting the result to zero yields the normal equation

Z^T Z b = Z^T X.
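The derivation above can be checked numerically. Below is a minimal sketch (not from the lecture; it assumes numpy, and all names are illustrative) that solves the normal equation Z^T Z b = Z^T X directly and confirms the solution agrees with a standard least-squares routine and minimizes the residual sum of squares.

```python
# Minimal numerical sketch (illustrative, assumes numpy): the LSE beta_hat
# minimizes ||X - Zb||^2, which leads to the normal equation Z'Z b = Z'X.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
Z = rng.normal(size=(n, p))          # full-rank n x p design matrix
beta = np.array([1.0, -2.0, 0.5])    # illustrative "true" parameter
X = Z @ beta + rng.normal(size=n)    # model (10.2): X = Z beta + eps

# Solve the normal equation Z'Z b = Z'X directly ...
beta_ne = np.linalg.solve(Z.T @ Z, Z.T @ X)
# ... and compare with a standard least-squares solver.
beta_ls, *_ = np.linalg.lstsq(Z, X, rcond=None)
assert np.allclose(beta_ne, beta_ls)

# Any perturbation of beta_ne increases the residual sum of squares.
d = 0.1 * rng.normal(size=p)
assert np.sum((X - Z @ beta_ne) ** 2) <= np.sum((X - Z @ (beta_ne + d)) ** 2)
```

With a full-rank Z both routes agree; for rank-deficient designs `np.linalg.solve` fails and `lstsq` returns the minimum-norm solution, which is relevant to the identifiability discussion below.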

10.2 Linear models assumptions

We consider the following three assumptions about linear models:

(A1) ε ∼ N_n(0, σ² I_n) with an unknown σ² > 0;

(A2) E(ε) = 0 and Var(ε) = σ² I_n with an unknown σ² > 0;

(A3) E(ε) = 0 and Var(ε) is an unknown matrix.

Under (A1) the model is parametric, since X ∼ N_n(Zβ, σ² I_n), which belongs to an exponential family P_θ with parameter θ = (β, σ²) ∈ R^p × (0, ∞).

If Z is not of full rank, then P_θ is not identifiable, i.e. Zβ_1 = Zβ_2 does not imply β_1 = β_2.

(1) Suppose that rank(Z) = r < p. Then we can find an n × r submatrix Z* of Z such that

Z = Z* Q  (10.4)

and Z* is of rank r, where Q is a fixed r × p matrix. Then Zβ = Z* β̃ and P_θ is identifiable if we reparameterize β̃ = Qβ. Note that β̃ ranges over an r-dimensional space.

(2) Suppose we want to estimate ϑ = l^T β, l ∈ R^p. By (1) the problem is not well posed unless r = p or l = Q^T c for some c ∈ R^r, so that

l^T β = c^T Qβ = c^T β̃.
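This reparameterization argument can be illustrated numerically. The following sketch (assuming numpy; all names are illustrative) builds a rank-deficient Z = Z*Q as in (10.4) and checks that two different minimizers of ||X − Zb||² give the same fitted values and the same value of l^T β̂ whenever l lies in the row space of Z.

```python
# Illustrative sketch (assumes numpy): when rank(Z) = r < p the LSE of beta
# is not unique, but l'beta_hat is the same for every LSE if l is in lin(Z).
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 30, 4, 3
Zstar = rng.normal(size=(n, r))      # n x r, full column rank r
Q = rng.normal(size=(r, p))          # fixed r x p matrix, as in (10.4)
Z = Zstar @ Q                        # rank(Z) = r < p
X = rng.normal(size=n)

# Two different LSEs: the minimum-norm solution, plus a null-space shift.
b1, *_ = np.linalg.lstsq(Z, X, rcond=None)
null_dir = np.linalg.svd(Z)[2][-1]   # right singular vector with sigma ~ 0
b2 = b1 + null_dir                   # also minimizes ||X - Zb||^2
assert np.allclose(Z @ b1, Z @ b2)   # identical fit, different parameters

l = Z.T @ rng.normal(size=n)         # l in lin(Z), i.e. l = Z' alpha
assert np.allclose(l @ b1, l @ b2)   # l'beta_hat does not depend on the LSE
```

The design choice here is deliberate: `lstsq` tolerates the rank deficiency, and the null-space direction from the SVD produces a genuinely different parameter vector with the same fit, mirroring Zβ_1 = Zβ_2 with β_1 ≠ β_2.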

Theorem 10.3.1. Assume a linear model under (A3).

(i) A necessary and sufficient condition for l ∈ R^p to be of the form Q^T c for some c ∈ R^r is l ∈ lin(Z) = lin(Z^T Z), where Q is given by (10.4) and lin(A) is the smallest linear subspace of R^p containing all rows of A, i.e. l = Z^T α for some α.

(ii) If l ∈ lin(Z), then the LSE l^T β̂ is unique and unbiased for l^T β (and is the UMVUE under (A1)).

(iii) If l ∉ lin(Z) and assumption (A1) holds, then l^T β is not estimable.

Proof. (i) If l = Q^T c, then

l = Q^T c = Q^T Z*^T Z* (Z*^T Z*)^{−1} c = Z^T [Z* (Z*^T Z*)^{−1} c] = Z^T α.

Hence l ∈ lin(Z). Conversely, if l ∈ lin(Z), then l = Z^T α for some α and

l = (Z* Q)^T α = Q^T c  with c = Z*^T α.

(ii) If l ∈ lin(Z) = lin(Z^T Z), then l = Z^T Z α for some α, and with β̂ = (Z^T Z)^− Z^T X (where (Z^T Z)^− denotes a generalized inverse),

E(l^T β̂) = E[l^T (Z^T Z)^− Z^T X] = α^T Z^T Z (Z^T Z)^− Z^T Zβ = α^T Z^T Zβ = l^T β.

Thus l^T β̂ is unbiased.

Assume that β̃ is another LSE of β. Since every LSE solves the normal equation Z^T Z b = Z^T X,

l^T β̂ − l^T β̃ = α^T (Z^T Z)(β̂ − β̃) = α^T (Z^T X − Z^T X) = 0,

so l^T β̂ is unique.

(iii) Under assumption (A1), suppose that there is an estimator h(X, Z) unbiased for l^T β. Then

l^T β = ∫_{R^n} h(x, Z) (2π)^{−n/2} σ^{−n} exp{−||x − Zβ||² / (2σ²)} dx.

We may differentiate under the integral sign with respect to β (exponential family theorem):

l = ∫_{R^n} h(x, Z) (2π)^{−n/2} σ^{−n−2} (Z^T x − Z^T Zβ) exp{−||x − Zβ||² / (2σ²)} dx
  = Z^T ∫_{R^n} h(x, Z) (2π)^{−n/2} σ^{−n−2} (x − Zβ) exp{−||x − Zβ||² / (2σ²)} dx,

which implies l ∈ lin(Z). (Remember: −(1/2) ∂/∂β ||x − Zβ||² = Z^T x − Z^T Zβ.)
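The gradient identity invoked at the end of the proof is easy to sanity-check. The sketch below (assuming numpy; not part of the lecture) compares −1/2 times a central finite-difference gradient of ||x − Zβ||² with Z^T x − Z^T Zβ.

```python
# Finite-difference check (assumes numpy) of the identity
# -(1/2) d/d(beta) ||x - Z beta||^2 = Z'x - Z'Z beta.
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 3
Z = rng.normal(size=(n, p))
x = rng.normal(size=n)
beta = rng.normal(size=p)

f = lambda b: np.sum((x - Z @ b) ** 2)   # the squared norm as a function of b

# Central finite-difference approximation of the gradient of f at beta.
h = 1e-6
grad = np.array([
    (f(beta + h * np.eye(p)[k]) - f(beta - h * np.eye(p)[k])) / (2 * h)
    for k in range(p)
])

assert np.allclose(-0.5 * grad, Z.T @ x - Z.T @ Z @ beta, atol=1e-4)
```

Because f is quadratic in b, the central difference is exact up to floating-point roundoff, so the tolerance is generous.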

Example 10.3.2 (One-way ANOVA). Suppose that n = Σ_{j=1}^m n_j with m positive integers n_1, ..., n_m (m blocks); let k_0 = 0 and k_j = Σ_{l=1}^j n_l for j = 1, ..., m, and

X_i = μ_j + ε_i,  i = k_{j−1} + 1, ..., k_j,  j = 1, ..., m,

with β = (μ_1, ..., μ_m)^T ∈ R^m. Written out:

X_1 = μ_1 + ε_1, ..., X_{k_{j−1}+1} = μ_j + ε_{k_{j−1}+1}, ..., X_n = μ_m + ε_n.

Then our model becomes X = Z(μ_1, ..., μ_m)^T + ε, where Z is the n × m block-diagonal matrix

Z = diag(1_{n_1}, ..., 1_{n_m}),

with 1_k denoting the k-vector of ones (column j indicates the n_j observations in block j). Note that

Z^T Z = diag(n_1, ..., n_m)

is invertible. The normal equation Z^T X = Z^T Z β̂ then implies β̂ = (Z^T Z)^{−1} Z^T X, which gives

β̂_j = (1/n_j) Σ_{i=k_{j−1}+1}^{k_j} X_i,  j = 1, ..., m,

i.e. the within-block averages.
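The computation in this example is easy to reproduce. The following sketch (assuming numpy; the block sizes and means are illustrative) builds the one-way ANOVA design matrix and checks that Z^T Z = diag(n_1, ..., n_m) and that the LSE equals the within-block averages.

```python
# Sketch of Example 10.3.2 (assumes numpy): with the one-way ANOVA design,
# Z'Z = diag(n_1, ..., n_m) and the LSE recovers the block means.
import numpy as np

rng = np.random.default_rng(3)
nj = [4, 6, 5]                               # block sizes n_1, ..., n_m
mu = np.array([1.0, 3.0, -2.0])              # block means mu_1, ..., mu_m
Z = np.zeros((sum(nj), len(nj)))
row = 0
for j, size in enumerate(nj):                # column j indicates block j
    Z[row:row + size, j] = 1.0
    row += size
X = Z @ mu + rng.normal(size=sum(nj))

assert np.allclose(Z.T @ Z, np.diag(nj))     # Z'Z = diag(n_1, ..., n_m)

beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)
block_means = [X[Z[:, j] == 1].mean() for j in range(len(nj))]
assert np.allclose(beta_hat, block_means)    # LSE = within-block averages
```

Because Z^T Z is diagonal here, the normal equation decouples into m scalar equations, which is exactly why each β̂_j is a simple average.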

Sometimes we are interested in testing μ_1 = μ_2 = · · · = μ_m = const. In this context it is more convenient to rewrite the model in double-index notation: setting

X_{ij} = X_{k_{i−1}+j},  ε_{ij} = ε_{k_{i−1}+j},  μ_i = μ + α_i,

the model becomes

X_{ij} = μ + α_i + ε_{ij},  j = 1, ..., n_i,  i = 1, ..., m.

Here β = (μ, α_1, ..., α_m)^T ∈ R^{m+1},

Z = (1_n, diag(1_{n_1}, ..., 1_{n_m}))

(1_k the k-vector of ones) is an n × (m + 1) matrix and, consequently,