A lecture note from STAT 9220, a biostatistics course at the Medical College of Georgia. The notes cover maximum likelihood estimation (MLE) in generalized linear models (GLMs), which are used when the relationship between the expected value and the covariates is nonlinear and/or the data are discrete. Topics include the structure of a GLM, the MLE of the parameter β, and the computation of the MLE by numerical methods such as the Newton-Raphson and Fisher-scoring methods.
Suppose that X has a distribution from a natural exponential family so that the likelihood function is
ℓ(η) = exp{ηᵀT(x) − ζ(η)}h(x),

where η ∈ Ξ is a vector of unknown parameters. The likelihood equation is then

∂ log ℓ(η)/∂η = T(x) − ∂ζ(η)/∂η = 0,

which has a unique solution ∂ζ(η)/∂η = T(x), assuming that T(x) is in the range of ∂ζ(η)/∂η.
Note that

∂² log ℓ(η)/∂η∂ηᵀ = −∂²ζ(η)/∂η∂ηᵀ = −Var(T)

(see the proof of Proposition 3.2 in the text). Since Var(T) is positive definite, −log ℓ(η) is convex in η and T(x) is the unique MLE of the parameter μ(η) = ∂ζ(η)/∂η. Also, the function μ(η) is one-to-one, so μ^{-1} exists and, by definition, the MLE of η is η̂ = μ^{-1}(T(x)).
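As a concrete sketch (my illustration, not part of the notes), take the Poisson family in natural form: ζ(η) = e^η and T(x) = x, so μ(η) = ζ′(η) = e^η and the MLE is η̂ = μ^{-1}(T(x)) = log x. Solving the likelihood equation numerically recovers the closed form:

```python
import math

# Poisson family in natural-parameter form (illustration):
# zeta(eta) = exp(eta), T(x) = x, so mu(eta) = zeta'(eta) = exp(eta).
def mu(eta):
    return math.exp(eta)        # zeta'(eta) = E(T)

def var_T(eta):
    return math.exp(eta)        # zeta''(eta) = Var(T)

def solve_likelihood_eq(t, eta=0.0, iters=60):
    """Newton iterations on mu(eta) - t = 0, i.e. T(x) = d zeta / d eta."""
    for _ in range(iters):
        eta -= (mu(eta) - t) / var_T(eta)
    return eta

t_obs = 7.0
print(solve_likelihood_eq(t_obs), math.log(t_obs))  # both ~1.9459
```

The numeric root agrees with η̂ = μ^{-1}(t) = log t, as the convexity argument above guarantees a unique solution.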
The GLM is a generalization of the normal linear model discussed earlier (see §3.3.1-§3.3.2 of the text).
The GLM is useful since it covers situations where the relationship between E(Xi) and Zi is nonlinear and/or the Xi's are discrete. The structure of a GLM is as follows. The sample X = (X1, ..., Xn) has independent Xi's, and Xi has the p.d.f.
exp{(ηi xi − ζ(ηi))/φi} h(xi, φi),  i = 1, ..., n,

w.r.t. a σ-finite measure ν, where ηi and φi are unknown, φi > 0,

ηi ∈ Ξ = {η : 0 < ∫ h(x, φ) e^{ηx/φ} dν(x) < ∞}
for all i, ζ and h are known functions, and ζ′′(η) > 0 is assumed for all η ∈ Ξ◦, the interior of Ξ. Note that the p.d.f. belongs to an exponential family if φi is known. As a consequence,
E(Xi) = ζ′(ηi) and Var(Xi) = φiζ′′(ηi), i = 1, ..., n.
Define μ(η) = E(Xi) = ζ′(η).
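A quick simulation (my own check, assuming the Poisson case ζ(η) = e^η with φi = 1) illustrates the moment identities E(Xi) = ζ′(ηi) and Var(Xi) = φiζ′′(ηi):

```python
import numpy as np

# Poisson case: zeta(eta) = exp(eta), dispersion phi_i = 1, so
# E(X_i) = zeta'(eta) = exp(eta) and Var(X_i) = phi * zeta''(eta) = exp(eta).
rng = np.random.default_rng(0)
eta = 0.5
x = rng.poisson(lam=np.exp(eta), size=200_000)

print(x.mean(), np.exp(eta))   # sample mean ~ zeta'(eta)
print(x.var(), np.exp(eta))    # sample variance ~ phi * zeta''(eta)
```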
It is assumed that ηi is related to Zi, the ith value of a p-vector of covariates, through

g(μ(ηi)) = βᵀZi,  i = 1, ..., n,

where β is a p-vector of unknown parameters and g, called a link function, is a known one-to-one, third-order continuously differentiable function on {μ(η) : η ∈ Ξ◦}.
If g = μ^{-1}, then ηi = βᵀZi and g is called the canonical or natural link function.
If g is not canonical, we assume that d(g ∘ μ)(η)/dη ≠ 0 for all η.
In a GLM, the parameter of interest is β.
We assume that the range of β is B = {β : (g ∘ μ)^{-1}(βᵀz) ∈ Ξ◦ for all z ∈ Z}, where Z is the range of the Zi's. The φi's are called dispersion parameters and are considered nuisance parameters.
Suppose that there is a solution β̂ ∈ B to the likelihood equation ∂ log ℓ(θ)/∂β = 0, where θ = (β, φ). Write ψ = (g ∘ μ)^{-1}, so that ηi = ψ(βᵀZi), and φi = φ/ti with known ti's. Then

Var(∂ log ℓ(θ)/∂β) = Mn(β)/φ

and

∂² log ℓ(θ)/∂β∂βᵀ = [Rn(β) − Mn(β)]/φ,

where

Mn(β) = ∑_{i=1}^n [ψ′(βᵀZi)]² ζ′′(ψ(βᵀZi)) ti Zi Ziᵀ

and

Rn(β) = ∑_{i=1}^n [xi − μ(ψ(βᵀZi))] ψ′′(βᵀZi) ti Zi Ziᵀ.
Consider first the simple case of a canonical g; then ψ is the identity, ψ″ ≡ 0, and Rn ≡ 0.
If Mn(β) is positive definite for all β, then −log ℓ(θ) is strictly convex in β for any fixed φ and, therefore, β̂ is the unique MLE of β.
For a noncanonical g, Rn(β) ≠ 0 and β̂ is not necessarily an MLE.
If Rn(β) is dominated by Mn(β), i.e.,

[Mn(β)]^{-1/2} Rn(β) [Mn(β)]^{-1/2} → 0

in some sense, then −log ℓ(θ) is convex and β̂ is an MLE for large n.
See more details in the proof of Theorem 4.18 in §4.5.2 of the text.
In a GLM, an MLE β̂ usually does not have an analytic form. A numerical method such as the Newton-Raphson or the Fisher-scoring method has to be applied.
In the Newton-Raphson iteration method, one repeatedly computes

θ̂^{(t+1)} = θ̂^{(t)} − [∂² log ℓ(θ)/∂θ∂θᵀ |_{θ=θ̂^{(t)}}]^{-1} ∂ log ℓ(θ)/∂θ |_{θ=θ̂^{(t)}},  t = 0, 1, ...,

where θ̂^{(0)} is an initial value and ∂² log ℓ(θ)/∂θ∂θᵀ is assumed to be of full rank for every θ ∈ Θ. If, at each iteration, we replace

∂² log ℓ(θ)/∂θ∂θᵀ |_{θ=θ̂^{(t)}}

by

E[∂² log ℓ(θ)/∂θ∂θᵀ] |_{θ=θ̂^{(t)}},

where the expectation is taken under Pθ, then the method is known as the Fisher-scoring method. If the iteration converges, then θ̂^{(∞)} or θ̂^{(t)} with a sufficiently large t is a numerical approximation to a solution of the likelihood equation.
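The iteration can be sketched as follows (a minimal illustration of mine, with made-up data, assuming a Poisson GLM with the canonical log link and φ = 1; for a canonical link the Hessian of log ℓ is nonrandom, so Fisher scoring and Newton-Raphson coincide):

```python
import numpy as np

# Fisher scoring for a hypothetical Poisson GLM, X_i ~ Poisson(exp(beta' Z_i)).
rng = np.random.default_rng(1)
n, p = 500, 3
Z = np.column_stack([np.ones(n), rng.uniform(-1, 1, size=(n, p - 1))])
beta_true = np.array([0.5, -0.3, 0.8])
x = rng.poisson(np.exp(Z @ beta_true))

beta = np.zeros(p)                        # initial value beta^(0)
for _ in range(25):
    mu = np.exp(Z @ beta)                 # mu(eta_i) = zeta'(eta_i) = e^{eta_i}
    score = Z.T @ (x - mu)                # d log l / d beta  (phi = 1)
    info = Z.T @ (mu[:, None] * Z)        # M_n(beta) = -E[Hessian]
    beta = beta + np.linalg.solve(info, score)

print(beta)                               # should be near beta_true
```

At convergence the score is numerically zero, i.e., β̂ solves the likelihood equation.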
Example 13.4.1. Consider the GLM with ζ(η) = η²/2, η ∈ ℝ. If g is the canonical link, then the model is the same as a linear model with independent εi's distributed as N(0, φi). If φi ≡ φ, then the likelihood equation is exactly the same as the normal equation in §3.3.1 of the text. If Z is of full rank, then Mn(β) = ZᵀZ is positive definite. Thus, the LSE β̂ in a normal linear model is the unique MLE of β. Suppose now that g is noncanonical but φi ≡ φ. Then the model reduces to the one with independent Xi's and

Xi ∼ N(g^{-1}(βᵀZi), φ),  i = 1, ..., n.
This type of model is called a nonlinear regression model (with normal errors) and an MLE of β under this model is also called a nonlinear LSE, since maximizing the log-likelihood is equivalent to minimizing the sum of squares
∑_{i=1}^n [Xi − g^{-1}(βᵀZi)]².
Under certain conditions the matrix Rn(β) is dominated by Mn(β) and an MLE of β exists.
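The minimization can be sketched with Gauss-Newton iterations, which for this normal model coincide with Fisher scoring. This is my own illustration with a made-up link: assume g = log, so g^{-1}(s) = e^s and Xi = e^{βᵀZi} + εi.

```python
import numpy as np

# Gauss-Newton for the nonlinear LSE (hypothetical link g = log, normal errors).
rng = np.random.default_rng(2)
n, p = 300, 2
Z = np.column_stack([np.ones(n), rng.uniform(-1, 1, size=n)])
beta_true = np.array([0.4, 0.9])
x = np.exp(Z @ beta_true) + rng.normal(scale=0.1, size=n)

beta = np.zeros(p)
for _ in range(50):
    f = np.exp(Z @ beta)                  # fitted means g^{-1}(beta' Z_i)
    J = f[:, None] * Z                    # Jacobian of f with respect to beta
    beta = beta + np.linalg.solve(J.T @ J, J.T @ (x - f))

print(beta)                               # approximately beta_true
```

Here JᵀJ plays the role of Mn(β): the second-derivative term Rn(β) is dropped, exactly the domination idea discussed above.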
Example 13.4.2 (The Poisson model). Consider the GLM with ζ(η) = e^η, η ∈ ℝ, and φi = φ/ti. If φi = 1, then Xi has the Poisson distribution with mean e^{ηi}. Under the canonical link g(t) = log t,

Mn(β) = ∑_{i=1}^n e^{βᵀZi} ti Zi Ziᵀ,

which is positive definite if inf_i e^{βᵀZi} ti > 0 and the matrix (√t1 Z1, ..., √tn Zn) is of full rank. There is one noncanonical link that deserves attention. Suppose that we choose a link function so that [ψ′(t)]² ζ′′(ψ(t)) ≡ 1. Then Mn(β) ≡ ∑_{i=1}^n ti Zi Ziᵀ does not depend on β. In §4.5.2 it is shown that the asymptotic variance of the MLE β̂ is φ[Mn(β)]^{-1}. The fact that Mn(β) does not depend on β makes the estimation of the asymptotic variance (and, thus, statistical inference) easy. Under the Poisson model, ζ′′(t) = e^t and, therefore, we need to solve the differential equation [ψ′(t)]² e^{ψ(t)} = 1. A solution is ψ(t) = 2 log(t/2), and the link is g(μ) = 2√μ.
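The claimed solution is easy to verify (my check): ψ′(t) = 2/t and e^{ψ(t)} = (t/2)², so [ψ′(t)]² e^{ψ(t)} = (4/t²)(t²/4) = 1, and μ(ψ(t)) = e^{ψ(t)} = t²/4 gives g(μ) = 2√μ = t.

```python
import math

# Verify that psi(t) = 2*log(t/2) solves [psi'(t)]^2 * e^{psi(t)} = 1 for the
# Poisson model (zeta''(eta) = e^eta), and that the induced link is g(mu) = 2*sqrt(mu).
def psi(t):
    return 2.0 * math.log(t / 2.0)

def psi_prime(t):
    return 2.0 / t

for t in [0.5, 1.0, 3.0, 10.0]:
    lhs = psi_prime(t) ** 2 * math.exp(psi(t))   # should be identically 1
    mu = math.exp(psi(t))                        # mu(psi(t)) = t^2 / 4
    print(t, lhs, 2.0 * math.sqrt(mu))           # 2*sqrt(mu) recovers t
```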