STA260 Lecture 10
-
Completed Notes Status
- Completed insertions: 8
- Ambiguities left unresolved: 2
-
Lecture Summary
- Central objective: Connect Fisher Information (curvature/precision of estimation) with core sampling distributions used for inference (Chi-square Distribution, t Distribution, F Distribution).
- Key concepts:
- Fisher Information
  - Fisher information for a parameter $\theta$ can be written as $I(\theta) = E\!\left[\left(\frac{\partial}{\partial\theta}\ln f(Y;\theta)\right)^2\right]$ or equivalently $I(\theta) = -E\!\left[\frac{\partial^2}{\partial\theta^2}\ln f(Y;\theta)\right]$ (under regularity).
  - High Fisher information corresponds to a sharply peaked Likelihood Function around its maximizer, which supports more precise estimation.
- Exponential Distribution (Scale Parameter)
  - For $Y \sim \text{Exponential}(\theta)$ with $f(y;\theta) = \frac{1}{\theta}e^{-y/\theta},\ y>0$, the Fisher information in $\theta$ is $I(\theta) = \frac{1}{\theta^2}$ (both score-squared and negative-Hessian methods agree).
- Sample Variance
  - The "biased" sample variance $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (Y_i-\bar Y)^2$ equals $\frac{n-1}{n}S^2$, where $S^2 = \frac{1}{n-1}\sum_{i=1}^n (Y_i-\bar Y)^2$ is the unbiased sample variance.
  - Using $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}$ (normal sample case) and $E[\chi^2_\nu]=\nu,\ \text{Var}(\chi^2_\nu)=2\nu$, you can compute $E[S^2]=\sigma^2$ and $\text{Var}(S^2)=\frac{2\sigma^4}{n-1}$.
- Connections:
- The "precision" idea in Fisher Information (steep likelihood) matches the inferential idea that estimators with smaller variance are more concentrated, and the review distributions (Chi-square Distribution, t Distribution, F Distribution) are the standard tools for quantifying that concentration in normal-model inference.
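The exponential-scale result above can be sanity-checked numerically. This is a minimal Monte Carlo sketch (numpy assumed available; $\theta = 2$ is an arbitrary choice) comparing the score-squared and negative-Hessian estimates of $I(\theta) = 1/\theta^2$:

```python
# Monte Carlo sanity check: both Fisher information formulas give 1/theta^2
# for the exponential scale model f(y; theta) = (1/theta) * exp(-y/theta).
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                              # arbitrary scale value
y = rng.exponential(theta, size=1_000_000)

# Score for one observation: d/dtheta log f = y/theta^2 - 1/theta
score = y / theta**2 - 1 / theta
i_score = np.mean(score**2)              # estimate of E[score^2]

# Second derivative: d^2/dtheta^2 log f = 1/theta^2 - 2y/theta^3
i_hess = -np.mean(1 / theta**2 - 2 * y / theta**3)

print(i_score, i_hess)                   # both close to 1/theta^2 = 0.25
```

Both estimates should agree with the analytic value $1/\theta^2 = 0.25$ up to Monte Carlo noise.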
-
TK Resolutions
- #tk: "Do the second Fisher information method (negative expected second derivative) for $Y \sim \text{Exponential}(\theta)$."
  - Answer: $\ln f(y;\theta) = -\ln\theta - \frac{y}{\theta}$, so $\frac{\partial^2}{\partial\theta^2}\ln f = \frac{1}{\theta^2} - \frac{2y}{\theta^3}$. Then $-E\!\left[\frac{1}{\theta^2} - \frac{2Y}{\theta^3}\right] = -\frac{1}{\theta^2} + \frac{2\theta}{\theta^3} = \frac{1}{\theta^2}$, matching the score-squared method.
- #tk: "Square $Z$ (exercise)."
  - Answer: If $Z \sim N(0,1)$, then $Z^2 \sim \chi^2_1$.
- #tk: "Useful information for test."
  - If not answerable: Need the instructor's test blueprint (topics/chapters, allowed formulas, and which distributional results are in-scope).
-
Practice Questions
- Remember/Understand:
  - State two equivalent formulas for Fisher Information $I(\theta)$ and explain (in words) what "high information" suggests about the Likelihood Function.
  - For $Y \sim \text{Exponential}(\theta)$ (scale), write $\ln f(y;\theta)$ and the Score Function $\frac{\partial}{\partial\theta}\ln f(y;\theta)$.
  - Define Biased Sample Variance $\hat\sigma^2$ and Unbiased Sample Variance $S^2$ and state the relationship between them.
- Apply/Analyze:
  - Compute $I(\theta)$ for $Y \sim \text{Exponential}(\theta)$ using the negative expected second derivative method.
  - If $W \sim \chi^2_\nu$, compute $E[W]$ and $\text{Var}(W)$ and use them to derive $\text{Var}(S^2)$ when $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}$.
  - Let $Z \sim N(0,1)$ and $W_1 \sim \chi^2_{\nu_1},\ W_2 \sim \chi^2_{\nu_2}$ be independent; find the distributions of $Z^2$, $\frac{Z}{\sqrt{W_1/\nu_1}}$, and $\frac{W_1/\nu_1}{W_2/\nu_2}$.
- Evaluate/Create:
  - Build a "distribution-recognition checklist" that decides when a ratio becomes t Distribution vs F Distribution, including the independence conditions you must verify.
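The sample-variance facts these questions rely on can be verified by simulation. This is a sketch (numpy assumed; $n = 10$, $\sigma = 2$ chosen arbitrarily) checking $E[S^2]=\sigma^2$, $\text{Var}(S^2)=\frac{2\sigma^4}{n-1}$, and the $\chi^2_{n-1}$ link:

```python
# Monte Carlo check of the sample-variance facts:
# E[S^2] = sigma^2, Var(S^2) = 2*sigma^4/(n-1), (n-1)S^2/sigma^2 ~ chi^2_{n-1}.
import numpy as np

rng = np.random.default_rng(1)
n, sigma, reps = 10, 2.0, 200_000
x = rng.normal(0.0, sigma, size=(reps, n))

s2 = x.var(axis=1, ddof=1)               # unbiased S^2 for each sample
print(s2.mean())                         # ~ sigma^2 = 4
print(s2.var())                          # ~ 2*sigma^4/(n-1) = 32/9

w = (n - 1) * s2 / sigma**2              # should behave like chi^2_{n-1}
print(w.mean(), w.var())                 # ~ n-1 = 9 and 2(n-1) = 18
```

Note `ddof=1` in `var`, which divides by $n-1$ and so computes the unbiased $S^2$; the default `ddof=0` gives the biased $\hat\sigma^2$.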
-
Challenging Concepts
- Fisher Information:
  - Why it's challenging: The two formulas for $I(\theta)$ look unrelated until you remember the regularity conditions and how expectations interact with derivatives.
  - Study strategy: Practise deriving $I(\theta)$ both ways for 2–3 models (e.g., exponential scale, normal mean with known variance) and explicitly write each derivative step.
- Distribution engineering:
  - Why it's challenging: Many expressions "almost" look like $t$ or $F$, but small details (df scaling, independence, whether a term appears inside the sum) completely change the result.
  - Study strategy: For every expression, rewrite it into the canonical form: $\frac{Z}{\sqrt{W/\nu}}$ for $t_\nu$ and $\frac{W_1/\nu_1}{W_2/\nu_2}$ for $F_{\nu_1,\nu_2}$, then check independence and scaling constants.
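The canonical forms in the study strategy can be exercised directly. This sketch (numpy assumed; the degrees of freedom are arbitrary) builds $t$ and $F$ variables from independent normal and chi-square draws and checks one moment of each:

```python
# Build t and F variables from their canonical forms and check one moment
# of each: Var(t_nu) = nu/(nu-2) and E[F_{nu1,nu2}] = nu2/(nu2-2).
import numpy as np

rng = np.random.default_rng(2)
reps = 500_000
nu, nu1, nu2 = 10, 5, 10                 # arbitrary degrees of freedom

z = rng.normal(size=reps)                # Z ~ N(0,1)
w = rng.chisquare(nu, size=reps)         # W ~ chi^2_nu, independent of Z
t = z / np.sqrt(w / nu)                  # canonical t_nu form

w1 = rng.chisquare(nu1, size=reps)
w2 = rng.chisquare(nu2, size=reps)       # independent chi-squares
f = (w1 / nu1) / (w2 / nu2)              # canonical F_{nu1,nu2} form

print(t.var())                           # ~ nu/(nu-2) = 1.25
print(f.mean())                          # ~ nu2/(nu2-2) = 1.25
```

The independence of the numerator and denominator draws here is exactly the condition the checklist question asks you to verify; reusing the same chi-square in both places would break the result.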
-
Action Plan
- Immediate review actions:
- Practice and application:
- Deep dive study:
- Verification and integration: