






A lecture note from STAT 9220, a biostatistics course at the Medical College of Georgia. The lecture covers the general linear model and its estimation by the method of least squares: the definition of the least squares estimator (LSE), the normal equation, and the assumptions of linear models.
One of the most useful statistical models is the general linear model

    X_i = β^T Z_i + ε_i,  i = 1, ..., n,  (10.1)

where X_i is the ith observation (response); β is a p-vector of unknown parameters (p < n); Z_i is the ith value of a p-vector of explanatory variables (or covariates); and ε_1, ..., ε_n are random errors (not observed). The data consist of (X_1, Z_1), ..., (X_n, Z_n).

Estimation of β is the problem of interest in (10.1). Let X = (X_1, ..., X_n), ε = (ε_1, ..., ε_n), and let Z be the n × p matrix whose ith row is the vector Z_i, i = 1, ..., n. A matrix form of model (10.1) is

    X = Zβ + ε.  (10.2)
Definition 10.1.1. Suppose that the range of β in model (10.2) is B ⊂ R^p (typically B = R^p). A least squares estimator (LSE) of β is defined to be any β̂ ∈ B such that

    ||X − Zβ̂||^2 = min_{b ∈ B} ||X − Zb||^2.

For any l ∈ R^p, l^T β̂ is called an LSE of l^T β.
Note: for any b,

    ||X − Zb||^2 = (X − Zb)^T (X − Zb) = X^T X − X^T Zb − b^T Z^T X + b^T Z^T Zb.

Differentiating the right-hand side with respect to b and setting the derivative to zero yields the normal equation

    Z^T Zb = Z^T X,

whose solutions (when B = R^p) are exactly the LSEs of β.
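As a quick numerical illustration (not part of the original notes), the LSE in the full-rank case can be computed by solving the normal equation directly. A minimal NumPy sketch with simulated data and made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: n = 50 observations, p = 3 covariates,
# full-rank design, data simulated from model (10.2).
n, p = 50, 3
Z = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])
X = Z @ beta + rng.standard_normal(n)          # X = Z beta + eps

# LSE from the normal equation Z^T Z b = Z^T X.
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)

# np.linalg.lstsq minimizes ||X - Zb||^2 directly; both must agree.
beta_lstsq, *_ = np.linalg.lstsq(Z, X, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))       # True
```

The two routes agree because, with Z of full column rank, the normal equation has a unique solution, which is the unique minimizer of ||X − Zb||^2.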
We consider the following three assumptions about linear models:

(A1) ε ∼ N_n(0, σ^2 I_n) with an unknown σ^2 > 0;
(A2) E(ε) = 0 and Var(ε) = σ^2 I_n with an unknown σ^2 > 0;
(A3) E(ε) = 0 and Var(ε) is an unknown matrix.
Under (A1) the model is parametric, since X ∼ N_n(Zβ, σ^2 I_n), which is in an exponential family P_θ with parameter θ = (β, σ^2) ∈ R^p × (0, ∞).
If Z is not of full rank, then P_θ is not identifiable, i.e., Zβ_1 = Zβ_2 does not imply β_1 = β_2.
(1) Suppose that rank(Z) = r < p. Then we can find an n × r submatrix Z∗ of Z such that

    Z = Z∗Q  (10.4)

and Z∗ is of rank r, where Q is a fixed r × p matrix. Then Zβ = Z∗Qβ, and P_θ is identifiable if we reparameterize with β̃ = Qβ. Note that β̃ lies in a space of dimension r.
(2) Suppose that we want to estimate ϑ = l^T β, l ∈ R^p. By (1), the problem is not well posed unless r = p or l = Q^T c for some c ∈ R^r, so that

    l^T β = c^T Qβ = c^T β̃.
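The identifiability issue can be seen numerically. The sketch below (an illustration with a small made-up design, not from the original notes) builds Z = Z∗Q as in (10.4) with a duplicated coefficient, and shows that l^T β̂ agrees across different LSEs exactly when l has the form Q^T c:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rank-deficient design: p = 3 but rank(Z) = r = 2,
# constructed as Z = Z* Q per (10.4); Q ties the last two coefficients.
n = 30
Z_star = rng.standard_normal((n, 2))            # n x r, full column rank
Q = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])                 # fixed r x p matrix of rank r
Z = Z_star @ Q                                  # rank 2 < p = 3
X = Z @ np.array([1.0, 2.0, -1.0]) + rng.standard_normal(n)

# Two different LSEs: the minimum-norm solution, and that solution
# shifted along the null space of Z (v = (0, 1, -1) satisfies Zv = 0,
# so both give the same fitted values and the same residual norm).
beta1 = np.linalg.pinv(Z) @ X
beta2 = beta1 + np.array([0.0, 1.0, -1.0])

# l = Q^T c lies in lin(Z): l^T beta_hat is the same for every LSE.
l_est = Q.T @ np.array([0.0, 1.0])              # l = (0, 1, 1)
print(np.isclose(l_est @ beta1, l_est @ beta2)) # True

# l outside lin(Z): different LSEs disagree (l^T beta not estimable).
l_bad = np.array([0.0, 1.0, 0.0])
print(np.isclose(l_bad @ beta1, l_bad @ beta2)) # False
```

Here the second coefficient alone is not estimable, but the sum of the last two coefficients is, matching the reparameterization β̃ = Qβ.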
Theorem 10.3.1. Assume a linear model under (A3).
(i) A necessary and sufficient condition for l ∈ R^p to be of the form Q^T c for some c ∈ R^r is l ∈ lin(Z) = lin(Z^T Z), where Q is given by (10.4) and lin(A) is the smallest linear subspace of R^p containing all rows of A, i.e., l = Z^T α for some α.
(ii) If l ∈ lin(Z), then the LSE l^T β̂ is unique and unbiased for l^T β (and is the UMVUE under (A1)).
(iii) If l ∉ lin(Z) and assumption (A1) holds, then l^T β is not estimable.
Proof. (i) If l = Q^T c, then

    l = Q^T c = Q^T (Z∗^T Z∗)(Z∗^T Z∗)^{-1} c = Z^T [Z∗(Z∗^T Z∗)^{-1} c] = Z^T α.

Hence l ∈ lin(Z). Conversely, if l ∈ lin(Z), then l = Z^T α for some α, and

    l = (Z∗Q)^T α = Q^T Z∗^T α = Q^T c

with c = Z∗^T α.
(ii) If l ∈ lin(Z) = lin(Z^T Z), then l = Z^T Zα for some α, and

    E(l^T β̂) = E[l^T (Z^T Z)^- Z^T X] = α^T Z^T Z(Z^T Z)^- Z^T Zβ = α^T Z^T Zβ = l^T β.

Thus l^T β̂ is unbiased. Suppose that β̃ is another LSE of β. Since β̂ and β̃ both satisfy the normal equation,

    l^T β̂ − l^T β̃ = α^T (Z^T Z)(β̂ − β̃) = α^T (Z^T X − Z^T X) = 0,

so the LSE l^T β̂ is unique.
(iii) Under assumption (A1), suppose that there is an estimator h(X, Z) unbiased for l^T β. Then

    l^T β = ∫_{R^n} h(x, Z)(2π)^{-n/2} σ^{-n} exp{−||x − Zβ||^2 / (2σ^2)} dx.

We may differentiate under the integral sign (exponential family theorem):

    l = ∫_{R^n} h(x, Z)(2π)^{-n/2} σ^{-n-2} (Z^T x − Z^T Zβ) exp{−||x − Zβ||^2 / (2σ^2)} dx
      = Z^T ∫_{R^n} h(x, Z)(2π)^{-n/2} σ^{-n-2} (x − Zβ) exp{−||x − Zβ||^2 / (2σ^2)} dx,

which implies l ∈ lin(Z). (Remember: ∂/∂β ||x − Zβ||^2 = −2(Z^T x − Z^T Zβ).)
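Part (ii) can also be checked by simulation. The following Monte Carlo sketch (an illustration with made-up dimensions and parameter values, not from the original notes) verifies that l^T β̂ averages to l^T β when l ∈ lin(Z) and the errors follow (A1):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Monte Carlo check of Theorem 10.3.1(ii):
# for l in lin(Z), E(l^T beta_hat) = l^T beta under (A1).
n, p = 20, 3
Z = rng.standard_normal((n, p))
beta = np.array([0.5, 1.0, -1.5])
l = Z.T @ rng.standard_normal(n)     # l = Z^T alpha, so l lies in lin(Z)
sigma = 1.0

reps = 20000
vals = np.empty(reps)
for r in range(reps):
    X = Z @ beta + sigma * rng.standard_normal(n)  # X ~ N_n(Z beta, sigma^2 I_n)
    beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)
    vals[r] = l @ beta_hat

print(vals.mean(), l @ beta)         # Monte Carlo mean is close to l^T beta
```

The Monte Carlo average matches l^T β up to simulation noise of order σ/√reps, as unbiasedness predicts.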
Example 10.3.2 (One-way ANOVA). Suppose that n = Σ_{j=1}^m n_j with m positive integers n_1, ..., n_m (m groups); let k_j = n_1 + ... + n_j for j = 1, ..., m, with k_0 = 0, and

    X_i = μ_j + ε_i,  i = k_{j-1} + 1, ..., k_j,  j = 1, ..., m,

and β = (μ_1, ..., μ_m) ∈ R^m. Written out,

    X_1 = μ_1 + ε_1, ..., X_{k_{j-1}+1} = μ_j + ε_{k_{j-1}+1}, ..., X_n = μ_m + ε_n.

Then our model becomes X = Z(μ_1, ..., μ_m)^T + ε, where Z is the n × m block matrix

    Z = diag(J_{n_1}, ..., J_{n_m}),

with J_k denoting the k-vector of ones; that is, the jth column of Z has ones in positions k_{j-1} + 1, ..., k_j and zeros elsewhere. Note that

    Z^T Z = diag(n_1, ..., n_m)

is invertible. The normal equation Z^T X = Z^T Zβ̂ implies β̂ = (Z^T Z)^{-1} Z^T X, which gives

    β̂_j = Σ_{i=k_{j-1}+1}^{k_j} X_i / n_j,  j = 1, ..., m.
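The group-means formula can be verified numerically; a sketch with hypothetical group sizes and means (not from the original notes):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-way ANOVA layout: m = 3 groups of sizes n_j.
nj = [4, 6, 5]
mu = np.array([10.0, 12.0, 9.0])
n, m = sum(nj), len(nj)

# Build Z = diag(J_{n_1}, ..., J_{n_m}): the jth column is a block of ones.
Z = np.zeros((n, m))
start = 0
for j, size in enumerate(nj):
    Z[start:start + size, j] = 1.0
    start += size
X = Z @ mu + rng.standard_normal(n)

# Z^T Z = diag(n_1, ..., n_m), so the LSE is the vector of group means.
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ X)
group_means = [X[sum(nj[:j]):sum(nj[:j]) + nj[j]].mean() for j in range(m)]
print(np.allclose(beta_hat, group_means))       # True
```

Because Z^T Z is diagonal, each component of the normal equation decouples: n_j β̂_j equals the jth group total, giving the group mean directly.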
Sometimes we are interested in testing μ_1 = μ_2 = ... = μ_m. In this context, it is more convenient to rewrite the model as follows. Define

    X_{ij} = X_{k_{i-1}+j},  ε_{ij} = ε_{k_{i-1}+j},  μ_i = μ + α_i,

so that

    X_{ij} = μ + α_i + ε_{ij},  j = 1, ..., n_i,  i = 1, ..., m.

Here β = (μ, α_1, ..., α_m) ∈ R^{m+1}; Z^T is an (m + 1) × n matrix and, consequently,