Introduction to Principles of Likelihood | 22S 138, Study notes of Statistics

Material Type: Notes; Professor: Cowles; Class: 22S - Bayesian Statistics; Subject: Statistics and Actuarial Science; University: University of Iowa; Term: Fall 2005;

Typology: Study notes

Uploaded on 03/11/2009 by koofers-user-n69
22S:138
The Likelihood Principle

Lecture 22 Nov. 28, 2005

Kate Cowles
374 SH, 335-0727
kcowles@stat.uiowa.edu

The likelihood principle

  • Suppose that two different experiments may inform about an unknown parameter θ
  • Suppose the outcomes of the experiements are respectively y∗^ and z∗
  • Suppose the likelihoods for θ resulting from the two experiements are proportional; that is

p(y∗; θ) = c p(z∗; θ)

where c is a constant

  • Then the information about θ contained in both experiments is equivalent
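Why proportional likelihoods carry the same information is easy to see in Bayes' theorem: the constant c cancels when the posterior is normalized, so any prior leads to the same posterior under either experiment. A minimal numerical sketch (Python, the flat prior, and the particular kernel θ^9 (1 − θ)^3 are my own illustrative choices, not from the notes):

```python
import numpy as np

theta = np.linspace(0.001, 0.999, 999)   # grid over the parameter
prior = np.ones_like(theta)              # flat prior, purely for illustration

kernel = theta**9 * (1 - theta)**3
lik_y = 220.0 * kernel                   # likelihood from experiment 1
lik_z = 55.0 * kernel                    # likelihood from experiment 2 (proportional, c = 4)

def posterior(lik):
    # posterior ∝ prior × likelihood; the constant c disappears here
    unnorm = prior * lik
    return unnorm / unnorm.sum()

print(np.allclose(posterior(lik_y), posterior(lik_z)))  # True
```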

Another way to state the likelihood principle

  • For a given sample of data, any two probability models p(y|θ) that have the same likelihood function yield the same inference for θ.
  • With regard to the information contained in the data about the unknown parameter(s), only the actual observed data y is relevant.
    - No other possible outcomes matter.
      ∗ Contrast this with the frequentist p-value: the probability, assuming H0 is true, of getting a test statistic as extreme as, or more extreme than, the value that was actually obtained.
    - Nor do the researchers’ intentions.

Example

  • We are given a coin. We are interested in estimating θ, the probability of obtaining a head on a single flip.
  • We want to test the hypotheses:

H0 : θ = 1/2
Ha : θ > 1/2

  • Experiment consists of flipping the coin 12 times independently.
  • Result is 9 heads and 3 tails.

Example, continued

  • There are (at least) two possible ways the experiment might have been conducted:
    - Design 1: Do 12 flips. The random variable Y is the number of heads obtained in n = 12 flips.
    - Design 2: Flip the coin until 9 heads are obtained. The random variable Y is the number of tails obtained before the ninth head.
  • Frequentist inference for θ would be different depending on which design is used.
  • Bayesian inference would be the same under both designs because the likelihoods are proportional.
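The proportionality that the last bullet relies on can be checked numerically. This is a sketch of my own (Python with SciPy assumed; neither appears in the notes): both designs observe "9 heads, 3 tails", and the two likelihood functions of θ differ only by a constant factor.

```python
import numpy as np
from scipy.stats import binom, nbinom

theta = np.linspace(0.05, 0.95, 19)   # grid of candidate values of theta

# Design 1: Y = number of heads in n = 12 flips, observed y = 9.
lik1 = binom.pmf(9, 12, theta)

# Design 2: Y = number of tails before the 9th head, observed y = 3.
# scipy's nbinom(k, p) counts failures before the k-th success.
lik2 = nbinom.pmf(3, 9, theta)

ratio = lik1 / lik2
print(ratio)  # constant in theta: C(12,9)/C(11,3) = 220/165 = 4/3
```

Since the ratio does not involve θ, any inference that uses only the shape of the likelihood (e.g. a Bayesian posterior) agrees across the two designs.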
The negative binomial distribution

  • Y = the number of failures observed in a sequence of independent Bernoulli trials before the kth success
  • Y ∼ NB(k, p)
  • p(y | p) = C(k + y − 1, y) p^k (1 − p)^y, for y = 0, 1, 2, ...
  • E(Y) = k(1 − p)/p
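As a quick sanity check of the pmf and mean formulas (my own sketch, assuming SciPy is available): scipy.stats.nbinom uses exactly this "failures before the kth success" parameterization.

```python
from math import comb, isclose
from scipy.stats import nbinom

k, p = 9, 0.5
for y in range(5):
    # pmf: C(k + y - 1, y) * p^k * (1 - p)^y
    assert isclose(nbinom.pmf(y, k, p), comb(k + y - 1, y) * p**k * (1 - p)**y)

# mean: E(Y) = k(1 - p)/p
assert isclose(nbinom.mean(k, p), k * (1 - p) / p)
print("formulas check out")
```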


Implications of the likelihood principle

  • the stopping rule principle
  • the likelihood principle and reference priors

“Stopping rules” are often used in designing frequentist statistical studies

  • instead of a fixed sample size
  • to make it possible to stop a study early if the results are in
  • particularly common in clinical trials
    - reducing the size and duration of a clinical trial reduces the number of patients who are exposed to the treatment that will be found to be inferior, and speeds up the dissemination of the results to the medical community
  • Frequentist statisticians must choose the stopping rule before the experiment is conducted and adhere to it exactly
    - deviations can produce serious errors if a frequentist analysis is used
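The last point can be illustrated by simulation. This is a minimal sketch of my own (not from the notes); the z-test, α = 0.05, and the "peek every 20 flips" schedule are illustrative assumptions. If an analyst stops as soon as any interim test looks significant but then analyzes the data as if the sample size had been fixed, the realized type I error rate exceeds the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(1)
crit = 1.96                       # two-sided 5% critical value for a z-test
n_sims, n_max, peek = 2000, 200, 20

false_positives = 0
for _ in range(n_sims):
    flips = rng.integers(0, 2, n_max)        # fair coin, so H0 is true
    for n in range(peek, n_max + 1, peek):   # peek after every 20 flips
        phat = flips[:n].mean()
        z = (phat - 0.5) / np.sqrt(0.25 / n)
        if abs(z) > crit:                    # "significant" at this peek: stop early
            false_positives += 1
            break

print(false_positives / n_sims)   # noticeably above the nominal 0.05
```

A frequentist analysis that accounts for the peeking schedule (e.g. group-sequential boundaries) restores the error rate; a Bayesian analysis, by the likelihood principle, is unaffected by the stopping rule.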